id
stringlengths 9
16
| raw
stringlengths 0
3.38M
| cleaned
stringlengths 0
3.38M
|
|---|---|---|
2509.16302
|
Recovering unbiased CMB polarization maps using modern ground-based experiments
with minimal assumptions about atmospheric emission
Simon Biquard ,1, 2, ∗Josquin Errard ,1 and Radek Stompor
1, 2
1Universit´e Paris Cit´e, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France
2CNRS-UCB International Research Laboratory, Centre Pierre Bin´etruy, IRL2007, CPB-IN2P3, Berkeley, US
(Dated: September 23, 2025)
We present a study of unbiased reconstruction of cosmic microwave background (CMB) polariza-
tion maps from data collected by modern ground-based observatories. Atmospheric emission is a
major source of correlated noise in such experiments, complicating the recovery of faint cosmological
signals. We consider estimators that require only minimal assumptions about the properties of the
unpolarized atmospheric emission, instead exploiting hardware solutions commonly implemented in
modern instruments, such as pairs of orthogonal antennas in each focal plane pixel, interleaving
of the antenna orientation from pixel-to-pixel, and polarization signal modulation via a continu-
ously rotating half-wave plate (HWP). We focus on two techniques: (i) statistical down-weighting
of low-frequency atmospheric signals, and (ii) pair-differencing (PD), which involves differencing the
signals collected by two detectors in the same focal plane pixel. We compare their performance and
contrast them with the idealized case where the atmospheric signal is perfectly known and can be
cleanly subtracted. We show that PD can be derived from the maximum likelihood principle under
very general assumptions about the atmospheric signal, and is therefore optimal in terms of map
sensitivity for essentially any such signal. Remarkably, in the absence of instrumental systematics,
but with reasonable variations of detector noise properties, PD yields polarized sky maps with noise
levels only slightly worse (typically a few percent) than the ideal reference case. While the down-
weighting method could in principle match or surpass this performance, it requires highly accurate
models of the atmospheric signal, which are not readily available. The performance of PD will be
affected by instrumental systematics and those leaking the atmospheric signal to the difference time
stream may have particularly adverse impact. However, we expect that some of these effects, as
shown in the case of gain mismatch, are efficiently mitigated by a continuously rotating HWP, and
therefore, that in the context of modern CMB experimental setups PD could provide a competitive,
numerically robust, and efficient practical solution for CMB polarization mapmaking without the
need for atmospheric modeling.
I.
INTRODUCTION
The cosmic microwave background (CMB) is a power-
ful cosmological probe that has shaped our understand-
ing of the universe.
Measurements of its temperature
anisotropies have precisely determined the parameters of
the standard cosmological model, ΛCDM. However, the
next frontier in CMB research lies in the study of its
polarization anisotropies, which potentially contain the
signature of new physics.
The “B-mode” (parity-odd)
component of the spin-two polarization field is of par-
ticular interest and a key target of current and future
surveys, such as the Simons Observatory (SO) [1], CMB-
S4 [2], and LiteBIRD [3]. On moderate to small angular
scales, it is generated by the gravitational lensing effect
of large-scale structure acting on the primordial CMB E-
mode (parity-even) signal. This effect has been detected
by several experiments with high significance in the last
decade [4–6]. However, a large-scale primordial B-mode
signal is expected if there are significant primordial grav-
itational waves in the early universe [7]. The detection of
this signal would provide direct evidence for inflation, a
period of accelerated expansion that is believed to occur
∗biquard@apc.in2p3.fr
in the early universe, and predicts the generation of such
waves, thus opening a window on extreme high-energy
physics.
Ground-based experiments are leading the way in this
quest for primordial B modes. Current constraints from
the BICEP/Keck collaboration have placed an upper
limit on the tensor-to-scalar ratio of r < 0.036 at 95 %
confidence [8] (a reanalysis obtained r < 0.032 [9]), high-
lighting the need for improved sensitivity to detect the
elusive inflationary B-mode signal. However, achieving
constraining measurements of large angular scales from
the ground requires overcoming key challenges, including
atmospheric contamination and ground pickup. Atmo-
spheric fluctuations introduce correlated noise that ob-
scures the cosmological signal. Ground pickup, caused by
side-lobe response to thermal radiation from the Earth,
further complicates observations by adding an azimuth-
dependent contribution that is partially degenerate with
the sky signal. Hardware developments have been imple-
mented to address these challenges, which typically man-
ifest through unpolarized correlated noise.
Successful
technologies include dual-polarization detectors and po-
larization modulators such as rotating half-wave plates.
Because the B-mode spectrum is much smaller than
both the temperature and E-mode polarization signals,
modern experiments such as the SO deploy tens of thou-
sands of detectors to achieve the required sensitivity. In
arXiv:2509.16302v1 [astro-ph.IM] 19 Sep 2025
2
this context, we must ensure that the data reduction and
analysis pipelines are able to cope with the data volumes
generated by these large arrays of detectors, while accu-
rately modeling noise and systematic effects in order to
extract the cosmological information distributed over the
entire data set. Mapmaking, which refers to the recon-
struction of the sky signal from the time-ordered data
(TOD), is a crucial step of this program, and one that
represents a significant computational burden, since it
deals with the full size of the TOD.
This challenge is further amplified by the presence
of spurious signals like atmospheric emission or ground
pickup.
Consequently, explicit mitigation techniques,
such as filtering and/or template deprojection performed
at the time stream level, are necessary in order to sup-
press undesirable contributions.
Such operations typi-
cally affect the cosmologically relevant signal in the data,
an effect that has to be corrected for.
Different ap-
proaches perform this correction at different stages. For
instance, popular “filter-and-bin” techniques [4, 10–14]
do so on the power spectrum level. These have gained
widespread popularity due to their flexibility and numer-
ical efficiency, and the ability to deliver unbiased angular
power spectra under rather general assumptions for cur-
rently achievable survey depths [15].
Other classes of methods attempt to apply the correc-
tion directly on the mapmaking stage, and aim to pro-
duce unbiased maps of the sky.
Various techniques of
this sort have been proposed and applied in the past,
ranging from general maximum likelihood to specialized
destriping approaches [13, 16–20]. In general, they capi-
talize on an appropriately designed observing strategy to
break potential degeneracies between the sky signal and
contaminants [13, 19]. This is the case, for instance, for
contaminants that happen to be synchronous with some
well-defined instrumental parameters, such as telescope
orientation or half-wave plate rotation. Such methods are
in general more involved in terms of implementation and
more costly to execute than the filter-and-bin technique.
As they are based on different assumptions and could
potentially deliver more accurate results, they remain of
significant interest–an interest that is bound to grow fur-
ther once the cosmological signals that have eluded us
up to now are detected. From this perspective, atmo-
spheric contamination appears particularly insidious, as
it stems from a dynamic, turbulent medium and thus
is not easy to characterize in a succinct and sufficiently
accurate manner suitable for this class of mapmaking al-
gorithms.
In this work, we study the fidelity and precision of po-
larization maps produced by unbiased mapmaking meth-
ods from data affected by atmospheric contamination,
explicitly taking into account typical hardware features
of modern experiments. In particular, we focus on the so-
called pair differencing technique, applied broadly in the
past, see e.g. Refs. [10, 11, 21]. We show that this method
can be derived from maximum likelihood considerations,
requiring only a minimal set of assumptions about the
atmosphere, and discuss in detail its impact on the sen-
sitivity of the recovered polarization maps obtained from
modern CMB experiments. This work thus complements
previous studies that focused predominantly on the dif-
ferential systematic effects that can affect it [22–26].
The paper is organized as follows. In section II, we
provide a short introduction to the mapmaking formal-
ism, and discuss specific challenges and features inherent
to ground-based experiments. In section III we compare
different methods of reconstructing the polarized part of
the signal, including the pair differencing technique. In
section IV, we assess the sensitivity of the latter to the
predicted inflationary primordial B-mode signal. In sec-
tion V, we discuss differential systematic effects. Finally,
we summarize the results and conclude in section VI.
II.
FORMALISM AND BACKGROUND
A.
Mapmaking
1.
Data model
We model the TOD d as a linear function of the sky
signal amplitudes s, and additional contributions charac-
terized by amplitudes a,
d = Ps + Ta + n.
(1)
The pointing matrix P and template matrix T encode
the response of the instrument to s and a respectively.
The time-domain vector n represents instrumental noise,
whose average over the statistical ensemble of all possible
realizations is assumed to be zero.
A key assumption in Eq. (1) is that relevant contribu-
tions have a limited number of degrees of freedom, i.e.,
can be described by a limited number of known tem-
plates (columns of P and T) and corresponding ampli-
tudes (vectors s and a). This implies in particular that
we can discretize continuous signals (e.g. sky signal) with
sufficient accuracy. For simplicity, we take the same pix-
elization for all three relevant Stokes parameters I, Q
and U. Similarly, beams are assumed axially symmetric
and the same for all three Stokes components, therefore
they never appear explicitly here and can be thought of
as being convolved with the maps.
We also recall that for a perfect linear polarizer coupled
to a total power detector, the sky signal part of Eq. (1)
takes the explicit form [27]
(Ps)t = Ipt + cos(2φt)Qpt + sin(2φt)Upt
(2)
where pt is the observed sky pixel (assuming no pointing
interpolation) and φt is the angle of the polarizer with
respect to the sky coordinates, both at time t. Similarly,
if an ideal HWP oriented with time-dependent angle ψt
with respect to the instrument is used to modulate the
incoming polarization, the model instead reads [28]
(Ps)t = Ipt +cos(2φt+4ψt)Qpt +sin(2φt+4ψt)Upt. (3)
3
2.
Maximum likelihood solution
Let us derive the general maximum likelihood (ML)
estimator ˆs of the sky signal, assuming Gaussian noise
with covariance N and a Gaussian prior on a. We may
write the corresponding negative log-likelihood as
χ2 = (d −Ps −Ta)⊤N −1(d −Ps −Ta)
+ (a −¯a)⊤Σ−1
a (a −¯a)
(4)
where ¯a is the expectation value of a, and Σa the associ-
ated covariance. The ML solution is found by minimizing
χ2 with respect to a and s. For this, it is useful to change
variables to
a′ ≡a −¯a,
d′ ≡d −T ¯a
(5)
such that minimizing the χ2 with respect to a or a′ is
equivalent and gives
ˆa′ = (Σ−1
a
+ T ⊤N −1T)−1T ⊤N −1(d′ −Ps).
(6)
Injecting this in (4) gives
χ2 = (d′ −Ps)⊤Z⊤N −1Z(d′ −Ps) + const(s)
(7)
with
Z ≡I −T
Σ−1
a
+ T ⊤N −1T
−1 T ⊤N −1.
(8)
Noticing that Z⊤N −1 = N −1Z, we have the simplifica-
tion
Z⊤N −1Z = N −1Z2 = N −1Z
(9)
since Z is a projector. Finally, after minimizing the χ2
with respect to s we find
ˆs = (P ⊤N −1ZP)−1P ⊤N −1Z(d −T ¯a).
(10)
If the prior is improper, i.e.
Σ−1
a
= 0, at least for
some amplitudes, the corresponding modes are directly
deprojected from the original data and not included in
the sky signal estimate. In this extreme case of no prior
information, N −1Z becomes the “filtering and weighting
operator” FT (c.f. [19, 20])
N −1Z →FT ≡N −1
I −T(T ⊤N −1T)−1T ⊤N −1
(11)
which filters all contributions spanned by the columns of
T (i.e. FT T = 0) and weights orthogonal modes by the
inverse noise variance. This construction yields an un-
biased estimate of the sky signal in the presence of any
systematic effect if T defines a subspace rich enough to
encompass every possible manifestation of the effect. If
that subspace is not orthogonal to the sky, i.e., if there
are degeneracies between T and P, the system matrix
(P ⊤FT P)−1 becomes singular, and the corresponding
sky modes are impossible to distinguish from the system-
atic effect and can not be recovered [19]. Other modes
are by construction unbiased over the statistical ensem-
ble of instrumental noise realizations. The challenge in
this case is to define a sufficiently general T that cor-
rectly captures the systematic while keeping the number
of potentially degenerate sky modes to a minimum.
When proper priors are available, i.e.
when Σ−1
a
is
invertible, one can rewrite N −1Z using the Woodbury
matrix identity:
N −1Z = N −1 −N −1T(Σ−1
a
+ T ⊤N −1T)−1T ⊤N −1
= (N + TΣaT ⊤)−1 ≡e
N −1.
(12)
The ML estimate now reads
ˆs = (P ⊤e
N −1P)−1P ⊤e
N −1(d −T ¯a)
(13)
where e
N includes both the variance due to instrumen-
tal noise and the systematic contribution, which is now
characterized statistically around its average. We refer
to this as the down-weighting approach hereafter. This
estimator is unbiased over the ensemble of noise and sys-
tematic effect realizations, if only the average signal, ¯a,
is known. In many actual applications it is assumed to
vanish. The challenge then is to find a representation of
the total covariance e
N that is statistically sufficient and
computationally manageable.
As emphasized earlier, our sole focus will be the recov-
ery of the polarization sky signal, i.e., maps of the Q and
U Stokes parameters, from the available data. Typical
data sets are composed of measurements taken by total
power detectors, hence the measured signals combine all
three Stokes parameters, I, Q, and U (neglecting circular
polarization). The relevant pointing matrix can be split
into polarized and total intensity parts, as well as the
corresponding sky map:
P =
PI PQU
;
s =
sI
sQU
.
(14)
While we can in principle estimate the full sky map s (and
possibly drop its I component a posteriori), a somewhat
more general approach is to marginalize over the a priori
unknown total intensity part by deriving the estimators
directly for Q/U only. This is also applicable to situa-
tions where the total intensity signal is not recoverable.
The relevant expressions can be readily obtained in the
formalism presented above by extending the set of tem-
plates T,
T −→T ′ =
T PI
,
(15)
and assuming improper priors for all the extra degrees of
freedom, such that
ˆsQU =
P ⊤
QUN −1Z′PQU
−1 P ⊤
QUN −1Z′d,
(16)
where Z′ follows Eq. (8) with the extended templates T ′.
Lastly, we note that different choices of templates can
be used to model a given systematic effect and that this
choice may depend on the mapmaking method as well
as the specific model of the data, and the details of the
experiment and its operations.
We discuss this in the
next section.
4
B.
Ground-based CMB observations
1.
Atmospheric emission
Ground-based telescopes have to observe the sky
through the Earth’s atmosphere. While the latter has re-
gions of reduced opacity in the 30−300 GHz range where
the CMB is at its brightest, even in these atmospheric
windows there remains significant emission. This radi-
ation increases the optical loading of the detectors and
therefore raises their photon noise level, limiting the over-
all sensitivity of the instrument. More critically, fluctua-
tions in the water vapor density column cause spatial and
temporal variations in the atmosphere’s brightness tem-
perature, which introduce correlated noise both in time
and between detectors. Those fluctuations are typically
the dominant source of low-frequency noise for ground-
based experiments [29–32] and represent one of the pri-
mary challenges for sensitive large-scale CMB measure-
ments from the ground.
While atmospheric emission is slightly circularly po-
larized due to Zeeman splitting of oxygen in the Earth’s
magnetic field, the expected linear polarized intensity is
very low [33, 34]. Previous studies have set a 1 % limit on
the linear polarization fraction of atmospheric emission
[31]. However, the presence of horizontally aligned ice
crystals in tropospheric clouds can generate spurious po-
larized emission by scattering radiation from the ground
[35–37].
This causes significant “bursts” of horizontal
(Stokes Q < 0) polarization in the data, which manifest
as excess low-frequency noise in the TOD and are not
mitigated by polarization modulation. Possible avenues
for addressing this issue have been discussed in e.g. [37].
We will ignore this effect in the present work. Therefore,
for the rest of the paper we model the atmosphere as an
unpolarized signal correlated both spatially and in time.
This opens multiple avenues to mitigate its impact on
the recovered Q and U maps through combinations of
appropriate hardware and software solutions.
2.
Hardware solutions
Ground-based CMB experiments have developed spe-
cialized hardware components to allow for an efficient
detection of polarized sky signals in the presence of corre-
lated noise sources. These help to address the problem of
atmospheric contamination. Two key technologies have
proved particularly useful: dual-polarization detectors,
and polarization modulators (notably HWPs).
Dual-polarization detectors allow to measure both the
total power I and a linear combination of Q and U in a
single focal plane pixel thanks to two orthogonal anten-
nas [1, 38–46]. Samples collected by orthogonal antennas
can be differenced in order to reject unpolarized signals,
whose intensity is independent of antenna orientation.
This enables the reconstruction of polarized signals (an-
tenna angle dependent) with minimal contamination if
the antennas are close to orthogonal, and also with neg-
ligible or small loss of precision (c.f. section IV).
Polarization modulators perform frequency modulation
to shift the polarization signal to higher frequencies in the
TOD. With sufficiently high modulation frequency, the
signal is measured in a band where detector sensitivity is
mostly limited by photon noise rather than atmospheric
fluctuations. The most common implementation of this
technique is a continuously rotating HWP, which modu-
lates the polarization signal at four times its mechanical
rotation frequency. HWPs have been deployed and char-
acterized for several millimeter-wave polarization experi-
ments [28, 47–49]. Despite obvious advantages, they can
also introduce their own systematic effects [28, 47, 50], in-
cluding intensity-to-polarization leakage, modulation ef-
ficiency variations across the bandwidth, and mechanical
vibrations. These effects require careful calibration and
modeling in the data analysis pipeline, but are out of the
scope of this paper.
III.
POLARIZED MAPMAKING FOR
GROUND-BASED EXPERIMENTS
We focus hereafter on unpolarized atmospheric emis-
sion as the only relevant systematic effect and aim exclu-
sively to recover the polarized sky signals from the data.
For this purpose, we study and compare three ap-
proaches corresponding to varying degrees of knowledge
and assumptions about the atmospheric signal:
1. A minimal model where we only assume that the
atmospheric contamination is the same for both de-
tectors of any orthogonal pair in the focal plane (c.f.
section III A).
2. A statistical model based on the down-weighting
approach where we assume that the combined noise
covariance Eq. (12), accounting for both instrumen-
tal noise and atmospheric fluctuations, can be de-
scribed as stationary over defined time intervals,
with the noise power spectral density (PSD) de-
rived from the TOD itself (c.f. section III B).
3. An idealized case where we assume that the atmo-
spheric contribution is perfectly known. In the for-
malism of section II A 2, this corresponds to Σa →0
and a →¯a, and the ML solution is simply the stan-
dard estimate in the absence of any atmospheric
contamination, since it can be readily subtracted:
ˆsideal ≡(P ⊤N −1P)−1P ⊤N −1datm-free.
(17)
While this case is unattainable in practice, it does
provide a valuable statistical benchmark.
Before going any further, we note that, in the presence
of a rotating HWP, specialized mapmaking approaches
based on explicit, “lock-in” demodulation of the data
can recover Q and U Stokes parameters efficiently (see
5
e.g., [28, 47, 48] for more details) by isolating the polar-
ization information from low-frequency noise in the time
streams. In principle, this method also has the advantage
that polarization can be reconstructed from individual
detectors in isolation (not relying on pairs of orthogonal
antennas), which can potentially help when dealing with
differential systematics that affect the two antennas of a
pair differently, such as beam mismatch or pointing er-
rors. However, the combination of the information from
multiple detectors is necessary, albeit at a later stage
of the analysis, to produce statistically optimal results.
Discussions about demodulation and its interaction with
multiple detectors can be found in [51, 52].
In this work we consider an alternative approach and
do not assume any data preprocessing. Instead, an effec-
tive demodulation is only performed as an integral part
of the mapmaking procedure as encoded in the pointing
matrix as Eqs. (2) and (3) and thus its impact on the sky
signal is correctly included and corrected for. This also
allows us to use the same formalism and implementation
for both HWP and non-HWP cases, and to compare the
two approaches more straightforwardly.
A.
Minimal model
A fundamental challenge in atmospheric mitigation is
the unpredictable and complex nature of its emission. If
the atmosphere were fixed in Earth-bound coordinates,
modeling its fluctuations over moderate time scales on
a two-dimensional map of pixels fixed to this coordi-
nate system (different from sky coordinates) could be
a promising avenue towards its characterization and re-
moval, given that it typically only varies on rather large
angular scales. In practice, however, the presence of wind
means that the atmosphere is potentially only “fixed” in
coordinates moving with the wind. As the wind’s speed
is a priori unknown, the effect of the atmosphere can not
be easily captured by a linear model such as Eq. (1).
Instead, to work around this problem we may want to
consider a construction allowing for maximal flexibility in
the atmospheric modeling. An extreme example would
be a model with as many degrees of freedom as measured
data points, i.e., T = I (the identity matrix) with dimen-
sions given by the total number of samples. This obvi-
ously would exceed the number of available constraints,
making the problem degenerate. However, we can break
this degeneracy if the telescope is equipped with dual-
polarization detectors, making the only assumption that
unpolarized signals are measured identically by orthogo-
nal antennas in a pair, and assuming no priors (Σ−1
a
= 0)
on the corresponding amplitudes, thus making a minimal
number of assumptions on the atmospheric signal. Every
column of T now has two non-zero entries, corresponding
to the two detectors of a pair, such that T is conceptu-
ally two identity matrices stacked on top of each other
for every focal plane pixel:
T =
I
I
.
(18)
As discussed in section II A 2, this yields a sky estimate
that is free of signals captured by T. Note that this choice
of T captures sky total intensity as well since TPI = PI,
so only the polarized sky components are estimated.
In Appendix A, we show that this minimal model leads
in fact to the well-known pair differencing estimator,
which we now define.
For any pair of orthogonal de-
tectors denoted by ∥and ⊥subscripts, we are free to
transform the TOD into sum and difference data,
d+
d−
≡1
2
d∥+ d⊥
d∥−d⊥
.
(19)
We emphasize that this transformation does not by itself
entail any loss of information, as it is represented by the
block matrix
X ≡1
2
I
I
I −I
with inverse
X−1 = 2X.
(20)
Any estimate previously expressed in terms of d could be
equivalently expressed in terms of the transformed data
set, Xd.
However, since the difference data, d−, is free of atmo-
spheric signal (ideally) and contains all the polarization
information, it seems natural to consider it an indepen-
dent data vector and process it by itself to recover the
polarization signal. We thus call pair differencing (PD)
estimate the following Q/U estimator, computed from
d−, and discarding d+,
ˆspd ≡ˆspd
QU ≡(P ⊤
−N −1
−P−)−1P ⊤
−N −1
−d−,
(21)
where
N−≡
n−n⊤
−
(22)
is the covariance of the difference noise, and
P−≡1
2(P QU
∥
−P QU
⊥
)
(23)
is the relevant part of the pointing matrix for the differ-
ence time stream.
As part of the available information is discarded, it
would seem that this approach is suboptimal.
How-
ever, we showed above that it results from a rigorous
maximum-likelihood derivation and is therefore as opti-
mal as only possible given the assumptions made about
the atmospheric signal. We discuss this further in Sec-
tion IV.
In practice, we estimate N−in Eq. (21) from the data,
and this may differ from the true covariance. Our imple-
mentation of the PD estimator is then given by
ˆspd
W ≡(P ⊤
−WP−)−1P ⊤
−Wd−.
(24)
This expression is manifestly unbiased, but may not al-
ways reach the same precision as Eq. (21). The general
expression for the covariance of the estimated maps is:
N pd
QU ≡(P ⊤
−WP−)−1 P ⊤
−WN−WP−(P ⊤
−WP−)−1. (25)
6
B.
Statistical down-weighting
For comparison, we also consider an alternative ap-
proach motivated by Eq. (13), and treat the atmospheric
signal as a stochastic contribution, assuming that on av-
erage it vanishes, i.e., ¯a = 0. In doing so we accept the
fact that the atmosphere will contribute to our final sky
signal estimates, however, we hope to minimize its im-
pact by appropriately down-weighting the modes where
its contribution is significant, and then to be able to ac-
count for the extra uncertainty in the final map error
budget. Reaching both of these goals requires finding a
suitable description of the atmospheric signal covariance.
It has been shown that distinguishing atmospheric
modes from systematics in the data involves somewhat
sophisticated TOD processing [53].
Thus, successful
models may require advanced constructions (see e.g.
[18]), in particular if one aims to recover the total in-
tensity of the CMB. Consequently, such models could
be numerically complex, and not straightforward to val-
idate. As our focus is solely on polarization, we expect,
however, that simpler and less demanding models could
be already sufficient. In particular, we will neglect noise
correlations that are induced by the atmosphere between
different detectors [31]. Our DW estimator then reads
ˆsdw ≡(P ⊤WP)−1P ⊤Wd.
(26)
Contrary to the PD estimator (24), it uses the full data
set d as an input, and includes all three Stokes param-
eters in the pointing matrices and the recovered map.
While we could marginalize right away over the total in-
tensity part, in practice, the resulting equations are more
cumbersome to implement and there is little advantage
in doing so, if any, from the perspective of numerical ef-
ficiency. We thus simply drop the total intensity map at
the end of the mapmaking procedure.
We note that the inversion of the system matrix in
Eqs. (24) and (26) may fail if some pixels are not observed
with sufficient redundancy. In particular, to estimate the
Q and U amplitudes in a given pixel, it is necessary to
observe it with at least two sufficiently different “crossing
angles”, i.e., effective antenna orientations (taking HWP
rotation into account). Whenever these conditions are
not met, the corresponding samples need to be flagged
and excluded from the reconstruction, along with those
contaminated by glitches, or acquired during unstable
scanning intervals. The treatment of such gaps in the
time stream requires dedicated procedures [16].
The weights W
adopted in Eq. (26) assume no
detector-detector correlations.
They are thus block-
diagonal, with each detector-specific block given by the
PSD of the detector time stream. While this is probably
highly suboptimal for the recovery of the total intensity
maps, one may hope that it is sufficient for polarization.
This is one of the questions we investigate in the follow-
ing. The error covariance of the estimated maps is then
given by
N dw ≡(P ⊤WP)−1 P ⊤W e
NWP (P ⊤WP)−1,
(27)
where e
N is the covariance of the instrumental noise and
the atmospheric signal combined together, c.f. Eq. (12).
As we target polarization maps here, only the Q/U blocks
of N dw are of actual interest. One can write down an an-
alytic expression for them, however, it is long and rather
complex and thus not very enlightening.
Instead, let us try and draw some conclusions directly
from Eq. (27). The estimated maps are minimum vari-
ance if W = e
N −1, and any other choice will unavoidably
lead to increased uncertainty. As the atmosphere con-
tribution is unpolarized, it only affects the I × {I, Q, U}
blocks of the central product P ⊤W e
NWP. Its impact on
the polarization maps can be mitigated if the I × Q/U
blocks of both P ⊤WP and P ⊤W e
NWP vanish.
This
latter requirement ensures that the atmospheric signal is
well separated from the polarization signal, and the for-
mer ensures that the variance due to atmosphere is never
folded back in the polarization maps’ covariance. There
is a potential huge benefit to ensuring that those two
conditions are met, and in the following we discuss how
to achieve this by adapting mapmaking choices to the
hardware features discussed earlier. Nonetheless, we also
show that those conditions are not sufficient to ensure
the optimality of the polarization estimates, and that
weighting may need to be adjusted accordingly.
C.
Comparison of both approaches
We present here a comparison of the discussed map-
making approaches, pair differencing (PD) and down-
weighting (DW) in order to demonstrate salient features
of their performance.
1.
Simulations
As we focus on the recovery of large-scale polarization,
we consider an SO-like small aperture telescope (SAT)
observing at 90 GHz, equipped with pairs of orthogonal
detectors and a rotating HWP. The time-domain simu-
lations are generated using the TOAST 31 framework and
the sotodlib2 library. The reduction of the data into
HEALPix maps [54] is performed with the MAPPRAISER3
library [20]. The resolution parameter is Nside = 512.
The simulated instrument scans a region of the sky
south of the Galactic plane, similar to the “SAT South
field” described in [55].
Data is acquired with a sam-
ple rate of 37 Hz. The HWP rotation frequency is set
1 https://github.com/hpc4cmb/toast/tree/toast3
2 https://github.com/simonsobs/sotodlib
3 https://github.com/B3Dcmb/midapack
7
to 120 rpm, and can be turned off.
The dataset con-
sists of ∼1 h long constant-elevation scans (CESs) taken
over the course of one day. To reduce computational re-
quirements, we only keep one in four focal plane pixels,
resulting in roughly 700 detectors. The total size of the
data set (including TOD, pointing information, etc.) is
measured to be around 240 GB.
As the mapmaking procedure that we use is linear,
we directly obtain noise maps by only simulating real-
istic atmosphere signal and instrumental noise.
There
are no systematics (gain errors, beam mismatches, etc.).
The atmosphere simulation in TOAST is based on a phys-
ical model of the atmosphere [31], and uses a three-
dimensional structure of Kolmogorov turbulence moving
through the telescope’s field of view, with a distribution
of temperature, water vapor, and wind speed. On the
other hand, instrumental noise streams (uncorrelated be-
tween detectors) are simulated according to a 1/f model
S(f) = σ2
1 +
f
fknee
α
,
(28)
with nominal parameter values typical of detector noise,
¯α = −1.0
and
¯fknee = 50 mHz.
(29)
The nominal value of the noise equivalent temperature,
σ, does not matter since we will be always be looking at
results relative to a reference case.
In the nominal case, all detectors get the same instru-
mental noise parameters.
Alternatively, we added the
possibility of varying parameters across the focal plane.
In this case, each parameter is perturbed around its nom-
inal value by a random multiplicative sample ∼N(1, z2).
We will usually quote the dispersion z in percent. A dif-
ferent set of perturbations is applied for each CES and
detector.
2.
Results
Using these simulations, we perform mapmaking runs
following the three models presented earlier, recovering
Q and U noise maps.
As a figure of merit, we compute for each case the
noise power spectra, i.e., the angular power spectra of
the noise maps. We then compare them with reference
spectra, i.e., the noise spectra of the ideal runs, showed in
Figure 1. All spectra are obtained using the NaMaster4
package [56], with a bin size of 10, and keeping only bins
whose center multipole ℓb ∈[30, 500]. In addition, for
each pixel p we also compute the diagonal block of the
white noise covariance,
∆≡(P ⊤diag N −1P)−1.
(30)
4 https://github.com/LSSTDESC/NaMaster
100
200
300
400
500
Multipole
0.5
1.0
1.5
2.0
2.5
3.0
N [ K2]
1e
5
with HWP
without HWP
Figure 1.
Reference BB noise power spectra obtained from
the ideal reconstruction (atmosphere-free, only instrumental
noise).
These blocks have dimensions 3 × 3 for the down-
weighting approach and 2 × 2 for pair differencing, cor-
responding to the number of relevant Stokes parameters.
For each pixel they are explicitly given by,5
(∆−1
p )ss′ =
X
t
PptsN −1
tt Ptps′,
(31)
where the subscripts s, s′ ∈{I, Q, U} stand for a Stokes
parameter.
These blocks quantify the coupling of the
noise map between the different estimated Stokes param-
eters.
For the cases studied here, we display the elements of
the blocks in Figure 2. As discussed in section III B, in
DW mapmaking, off-diagonal elements IQ and IU are
particularly important as they determine the contribu-
tion of the atmospheric signal variance to the polariza-
tion maps. When noise weights are fitted independently
for each detector, the values of these elements, though
centered at zero, scatter significantly around it. As ex-
pected, the scatter is particularly pronounced without
HWP, however, in both cases, this is bound to increase
the uncertainty of the derived maps, given the magnitude
of the atmospheric signal. While the effect can be further
limited by imposing strict selection criteria on retained
pixels, this would unavoidably lead to a loss of observed
sky fraction, thereby enhancing the sample variance on
the largest angular scales.
A more fruitful approach would be to find a way to
suppress these off-diagonal elements. This can be done
in a robust way through enforcing the noise weights for
5 Note that Ptps = 0 whenever p is not the sky pixel observed at
time t.
8
1
5
10
101
102
103
104
II
0.02
0.01 0.00
0.01
0.02
IQ
0.02
0.01 0.00
0.01
0.02
IU
2
10
20
30
QQ
4
2
0
2
4
QU
2
10
20
30
UU
DW - HWP
DW - No HWP
PD - HWP
PD - No HWP
Figure 2.
Comparison of white noise covariance block values for DW/PD and HWP/no-HWP cases. Nominal values of the
instrumental noise parameters are used. Noise weights are fitted to the data independently for each detector. Each panel of
this 3 × 3 histogram matrix shows the distribution of values of the white noise covariance blocks (31) across the map. These
blocks are normalized by the variance of the best constrained pixel, such that their smallest values across the map are 1 for II
and 2 for QQ and UU.
two detectors in each pair to be the same. While this will
make the weighting of each detector stream less optimal,
the overall gain may still be appreciable. In the follow-
ing we confront these conclusions with the results of our
simulations.
The resulting BB noise spectra are shown in Figs. 3
and 4 (EE is not shown as the results are nearly iden-
tical).
The key observations are as follows.
For PD,
Figure 3, they coincide perfectly with those of the ideal
case if the noise properties of both detectors in each pair
are the same. This is independent from the noise differ-
ences between the pairs as indeed expected given that
these are optimally weighted in the procedure (see [57]
and section IV). If, on the contrary, the noise properties
of detectors in a pair differ, PD yields noisier than ideal
maps. For the z = 10 % dispersion of parameters in the
simulations, the precision loss seems to be ∼2 %. We
investigate this issue in more depth in section IV. The
main impact of having a HWP is that the noise increase
is essentially constant across all multipoles, all the way to
the largest angular scales considered. If no HWP is used,
the increase is more pronounced at the low end of the
multipole range, all the more so when noise parameters
vary, with the additional large scale variance becoming
apparent at ℓ∼100 ÷ 150, respectively. However, the
total increase of power at low multipoles over the white
noise plateau is not more than ∼30 %. We note that
in all these cases we used the same set of map pixels,
even if in cases without HWP these are generally less
well-conditioned than in cases with HWP, as shown in
9
100
200
300
400
500
Multipole
0.96
0.98
1.00
1.02
1.04
1.06
N /Nref
With HWP
100
200
300
400
500
Multipole
0.95
1.00
1.05
1.10
1.15
1.20
1.25
1.30
Without HWP
Reference
Nominal
Perturbed
Perturbed pairs
Figure 3.
Comparison of BB noise power spectra using pair differencing, with (left) vs. without (right) HWP rotation.
Different setups are represented with combinations of markers and colors.
“Nominal” (blue) corresponds to the nominal
instrumental noise model. “Perturbed” (orange) has instrumental noise parameters drawn with a dispersion of 10 % around
nominal values. “Perturbed pairs” (green) is a case where perturbations are applied in such a way that detectors of one pair
always have the same parameters, i.e., any variations are only between different detector pairs. For each case, we plot the ratio
of the noise power spectrum over that of the corresponding ideal (reference) case.
Figure 2.
As far as DW mapmaking is concerned, the situation
is somewhat more complex. Let us start by inspecting
the cases where HWP rotation is enabled (left panel of
Figure 4). Even for the simplest case of nominal noise
parameters, the noise spectra do not exactly match those
of the ideal case.
The difference is small but persis-
tent and it does not disappear even when we symmetrize
the weights within each detector pair, as shown by the
blue (nominal weighting) and green curves (symmetrized
weighting), distinct from the black dashed line. This ef-
fect goes away only if we use noise weights with a lower
fknee, e.g., similar to instrumental noise, Eq. (29), as rep-
resented by thin diamond-shaped markers. On the other
hand, if we simulate different noise levels (and use cor-
responding weights) for each detector, we see significant
increase of the noise (orange curve). We interpret these
observations as follows. When noise weights are (nearly)
identical within each pair of detectors, the atmosphere
does not contribute directly to the variance of the po-
larization maps. Indeed, the mapmaking operator per-
forms implicit pair-differencing while computing P ⊤Wd.
Nevertheless, since the weights account for atmospheric
contamination but neglect that it is highly correlated be-
tween detectors, they result in suboptimal maps. The
effect is small owing to the HWP modulation, which mit-
igates this low frequency noise by shifting the polariza-
tion signal well above the atmospheric fknee. Still, we
find an excess ∼1 % variance at the power spectrum
level, but this goes away if we decrease sufficiently the
assumed fknee such that the signal in the polarization
band is weighted uniformly. The method then matches
the performance of the reference and PD cases.
These conclusions are fully consistent with the noise
spectra obtained in the cases without HWP (right panel
of Figure 4). The impact of atmosphere is however gener-
ally enhanced by the presence of more significant leakage
of the atmosphere into polarization, as expected from
Figure 2.
In the case of detector-dependent noise pa-
rameters (orange curve), the mismatch of weights within
each detector pair further prevents the implicit differenc-
ing mentioned above, leading to huge atmospheric noise
residuals dominating on all angular scales by orders of
magnitude.
If we now enforce symmetric weights (red
curve), the noise spectra are suppressed by 3 to 4 or-
ders of magnitude and become comparable to those from
cases with nominal noise parameters (blue and green
curves).
Indeed, the corresponding 3 × 3 blocks and
the noise weights are now expected to be similar. The
“symmetrized” cases show that the remaining extra noise
power is purely due to suboptimal weighting, since the
atmosphere is removed as a result of implicit differencing.
The noise weights in those cases include the atmospheric
1/f and therefore give much less weight to Fourier modes
10
2
3
N /Nref
With HWP
100
200
300
400
500
Multipole
1.00
1.05
1.10
N /Nref
100
200
300
400
500
Multipole
100
101
102
103
104
105
106
107
Without HWP
Reference
Nominal
Perturbed
Nominal + symmetric weights
Perturbed + symmetric weights
Nominal + instrumental weights
Figure 4.
Comparison of BB noise power spectra using down-weighting, with (left) vs.
without (right) HWP rotation.
Different setups are represented with combinations of markers and colors. “Nominal” and “Perturbed” cases are the same as
in Figure 3. “Nominal + symmetric weights” is a case where instrumental noise parameters are nominal, and moreover the
assumed noise weights are forced to be symmetric, i.e., the same for both detectors of a pair. “Perturbed + symmetric weights”
is the same idea but with the perturbed noise parameters. “Nominal + instrumental weights” has both nominal instrumental
parameters and noise weights following that model (not fitted to the data which contains atmosphere). For each case, we plot
the ratio of the noise power spectrum over that of the corresponding ideal (reference) case.
below the atmospheric fknee, where most of the polariza-
tion signal resides in the absence of HWP modulation.
This in turn leads to significantly increased variance at
large multipoles.
Like before, we can reduce this un-
certainty to the reference level by simply taking noise
weights as if they were given by the instrumental noise
only (diamond markers). Alternately, we could improve
the noise model by allowing for correlations between de-
tectors in the same focal plane pixel.
We thus conclude that in these conditions the most
beneficial setup for down-weighting is to assume iden-
tical weights within each pair of detectors and derive
them assuming the properties of the instrumental noise.
But then the best this approach can do is to reproduce
the results of pair differencing, the latter being in addi-
tion more numerically efficient and robust. We expect
this conclusion to hold for as long as there is no bet-
ter and more reliable model of the atmospheric signal
available, which would allow DW to benefit from op-
timal propagation of the instrumental noise, as in the
ideal case, while keeping the atmospheric leakage under
control. This is however a tall order as argued in sec-
tion III A). Meanwhile, the PD approach, thanks to its
independence from the atmosphere model, may offer a
competitive and practical option. However, as it is, if we
are ready to accept a small sensitivity hit, then for the ex-
periments with continuously rotating HWP and pairs of
orthogonal detectors, the down-weighting approach can
still achieve very good performance if only symmetric
weights are adopted. Moreover, if the atmospheric signal
is somehow suppressed prior to actual mapmaking, then
this symmetrization may not even be needed, allowing
the method to benefit from more optimal weighting of
individual detectors. This idea is implemented, for in-
stance, in demodulation methods [28, 52, 58].
IV.
SENSITIVITY ASSESSMENT OF THE PAIR
DIFFERENCING APPROACH
In this section, we address the question of sensitiv-
ity (or precision) of the pair differencing method. We
argued in section III A that PD is an optimal polariza-
tion estimator under the single assumption that unpo-
larized signals are measured identically by detectors in
a pair. However, we should in general expect it to yield
a lower signal-to-noise ratio than the unattainable ideal
case (17) where the atmospheric contribution is perfectly
known and subtracted from the data.
Given the very
general assumption behind the PD derivation, we might
even expect a significant loss of precision: the number of
templates that it implicitly marginalizes over, and thus
11
the number of additional degrees of freedom, is as large
as half the length of the total combination of data from
all the detectors.
However, we already saw in several specific examples of
the previous section that the PD estimator is ideal when
both detectors in a pair have identical noise properties.
In Appendix B, we show rigorously why this is the case,
and obtain the following expression for the polarized part
of the ideal estimator:
ˆsid
QU = ˆspd
QU + (P ⊤
−Π−−P−)−1P ⊤
−Π−+n+
(32)
where Π±± is the (±, ±) block of the inverse transformed
noise covariance matrix (XNX)−1. Whenever the noise
properties of the two orthogonal detectors in a pair are
identical, we have N−+ = 0 and thus Π−+ = 0 and ˆsid
QU =
ˆspd
QU. As surprising as it may seem, this result merely
emphasizes the fact that using all available data only
improves on PD by modeling the noise cross-correlations
between detectors of a same pair. In this case those are
zero, such that the marginalization over the additional
templates has no impact on the statistical precision of
the map estimate.
We are now interested in quantifying the reduction of
precision (excess variance) in PD relative to the ideal
IQU estimator computed from all available data in more
general cases, specifically when the noise properties of
detectors in a pair can differ. To examine this question,
we now use atmosphere-free simulations, assuming that
atmospheric contamination in the absence of (other) sys-
tematic effects has no impact on the recovered Q and
U maps. In these highly idealized conditions, our DW
implementation is actually ideal since instrumental noise
is simulated independently for each detector and is thus
fully accounted for in the noise weights. It thus provides
a meaningful statistical benchmark to assess the perfor-
mance of pair differencing. We note that in more realis-
tic conditions, the “optimality” of the DW approach as
formulated in section II A 2 rests on several assumptions
such as the Gaussian distribution of the atmospheric am-
plitudes a, which may not be a good model in general.
Pair differencing does not rely on such assumptions.
A.
Simulations
We use simulations following almost the same setup as
before, section III C 1. A small additional feature is the
possibility to generate white instrumental noise instead
of 1/f. This is compatible with the detector-specific per-
turbations described previously.
The other difference is a longer observing schedule cov-
ering not one, but ten observing days, namely the first
day of each month except February and March, when
weather conditions are typically degraded. This sched-
ule captures variations in the scanning pattern due to
Sun and Moon avoidance across the year. In total, those
ten days amount to 166 CESs, or approximately 137 h
Normalized hit map
0
1
Figure 5.
Normalized hit map of the extended simulation
(10 days of observation) in equatorial coordinates. The cor-
responding sky fraction is about twenty percent.
of observation. The corresponding normalized hit map,
showing the sky coverage of the simulation, is plotted in
Figure 5.
B.
Variance comparison at the map level in the
white noise case
The noise properties of the maps are contained in the
pixel covariance matrix, (P ⊤N −1P)−1. This matrix is in
general not accessible because of computational limita-
tions. However, when the instrumental noise is white, N
is diagonal, and the pixel covariance is simply the white
noise covariance ∆, Eq. (30).
In Appendix C, we compute analytically the excess
variance of the pair differencing estimate compared to
the ideal estimate, considering a single detector pair,
white instrumental noise, and noise levels independent
between detectors.
Since PD does not take into ac-
count potentially different noise levels inside this single
pair (but would optimally weight noise between differ-
ent pairs), the excess variance in this case is simply a
function of the relative difference of noise levels in the
pair,
ε ≡
σ2
∥−σ2
⊥
σ2
∥+ σ2
⊥
∈(−1, 1),
(33)
which is equal to zero when detectors have the same noise
level and ±1 when one of them has zero (or infinite) noise.
The polarization covariance matrices in pixel domain for
this single pair are then related by
∆id = (1 −ε2)∆pd.
(34)
This confirms that the ideal estimate has lower variance
than PD unless the noise levels are the same in both
detectors.
In general, we define the relative excess variance as
ζ ≡(∆pd −∆id)/∆id.
(35)
12
It is a random variable as it depends on the detector noise
levels. From Eq. (34), we see that ζ is the same in every
map pixel for a single pair of detectors with given noise
levels. This will be our theoretical expectation for the
excess variance:
ζtheory ≡
ε2
1 −ε2 ⩾0.
(36)
This may however not be true in the general case of mul-
tiple detector pairs because different focal plane pixels
have different sky footprints (SATs typically have a field
of view of several tens of degrees). Additionally, the noise
level of each detector can vary during the observing sea-
son.
Using the simulations described earlier, we have access
to the actual ζ in each pixel of the map, therefore we can
study how the analytical result (36) generalizes to the
case of multiple detector pairs, and multiple independent
CESs. Here are the two main takeaways:
• the empirical average of ζtheory over detector pairs
and CESs is a good proxy for the mathematical
expectation over all possible values of ε;
• the average ⟨ζ⟩pixels across the map also follows this
expectation.
The results can be visualized in Figure 6, which shows
the relative excess variance as a function of the disper-
sion of noise levels z ∈{0.1 %, 1 %, 10 %, 20 %}. Unsur-
prisingly, we observe that ζ grows with z. This illustrates
how weighting individual detectors optimally in the ideal
estimator takes advantage of the less noisy individual de-
tectors to improve the map quality, while PD does not
capitalize on this. As z increases, there are more and
more pairs with a large discrepancy in noise levels, which
intensifies the effect.
The agreement between empirical expectation and
measured excess variance is useful from a practical point
of view: it shows that we can estimate how the noise level
of a map increases with respect to the ideal case using
only the detector noise levels, which can be derived from
the data directly, without going through the whole map-
making procedure. The 99th percentile of the pixel dis-
tribution of ζ provides a pessimistic estimate that would
bound the excess variance for 99 % of the map area, and
at the same time account for deviations between the ac-
tual values and the empirical estimate from the detector
noise levels. Overall, the excess variance is (i) under 0.1 %
(0.5 % pessimistic) when z ≲1 % (ii) ∼2 % (2.5 % pes-
simistic) when z = 10 % (iii) ∼10 % (15 % pessimistic)
when z = 20 %.
C.
Impact of variable instrumental noise
parameters on the determination of the
tensor-to-scalar ratio
We now turn to the more realistic case of instrumen-
tal 1/f noise with variations from detector to detector
as described in section III C 1. Contrary to the previous
section, where we explored the white noise case, we do
not have direct access to the exact map-space covariance
matrix: it is a dense object because of temporal correla-
tions in the noise and can not in general be computed.
Therefore, we evaluate the loss of statistical power at
the angular power spectrum level, using results from 25
different random noise realizations.
Figure 7 shows the increase in PD noise spectra with
respect to the ideal case, averaged over the 25 realiza-
tions, along with the standard deviation. It illustrates
the added value of a rotating HWP in mitigating large-
scale correlated noise not rejected by the differencing op-
eration. While the two top panels show that the excess
noise is “white” (does not depend on ℓ) in the presence
of a HWP, the noise curves in the bottom panels (no
HWP) exhibit a 1/ℓ-like behavior, with an important
dependence on the variability of noise parameters in the
detector pairs.
Table I summarizes those observations by quoting the
Fisher uncertainty σ(r = 0) on the tensor-to-scalar ratio
r, computed using
σ(r = 0)−2 ≈fsky
2
×
X
bins b
δℓ(2ℓb + 1)
CBB,prim
ℓb
|r=1
CBB,lens
ℓb
+
N BB
ℓb
!2
(37)
where
N BB
ℓb
is the average noise power spectrum over
the 25 noise realizations, and δℓ= 10 stands for the bin
size. The table also gives for each value a 1σ confidence
interval computed by evaluating Eq. (37) at the average
value of the noise spectrum, plus/minus the standard
deviation over the 25 noise realizations.
As expected,
the increase of uncertainty is more pronounced when no
HWP is used, as the power spectrum estimates at low
multipoles are more noisy, and this is where most of
the signal of interest is.
For typical dispersion values
z ∼10 % −20 %, the uncertainty on r could increase by
a few percent when using a HWP, and as much as 10 %
when not.
One may notice two potentially surprising features in
Table I. First, the absolute uncertainty on r in the case
with HWP actually decreases with higher values of z.
This is because given the implementation of the per-
turbed noise model, having larger dispersion of the noise
parameters around their nominal values doesn’t necessar-
ily mean that the detector array loses sensitivity overall.
In the case without HWP, this effect is buried under the
excess noise at low multipoles.
The second feature is that the PD estimator seems to
perform better than the ideal estimator, in the no HWP
case, for low z values. This seems inconsistent with the
fact that, from first principles, the ideal estimate should
have minimal variance. It turns out that this is driven by
the very low multipole bins (ℓ< 50). However, inspect-
ing the scatter of their values across the 25 realizations
reveals that each bin is statistically consistent with the
13
0.1%
1%
10%
20%
Scatter around nominal NET
0%
0.1%
1%
5%
10%
20%
Relative variance increase
true expectation
empirical expectation
Q pixels (1-99th percentiles)
U pixels (1-99th percentiles)
Figure 6.
Relative excess variance ζ in the white noise case as a function of the dispersion of noise levels z. The mathematical
expectation (solid black) is computed numerically following Eqs. (36) and (33) after drawing samples of σ2
∥and σ2
⊥from the
same Gaussian distribution as the noise levels in the simulation. The empirical average (red crosses) over the detector pairs
and CESs shows good agreement with the expectation at the four simulated values, z ∈{0.1 %, 1 %, 10 %, 20 %}. Error bars
represent the 1st to 99th percentile of the Q (blue) and U (orange) pixel distribution of ζ, with the triangle marks showing the
average value, in excellent agreement between the two Stokes parameters. Blue and orange markers are slightly shifted left and
right in order to improve visibility.
With HWP
Without HWP
Dispersion σ(r)id[×10−3] σ(r)pd[×10−3] Increase [%] σ(r)id[×10−3] σ(r)pd[×10−3] Increase [%]
0.1
1.378 ± 0.071 1.378 ± 0.071
0.001 ± 0.001 3.786 ± 0.489 3.781 ± 0.487 −0.123 ± 0.055
1.0
1.378 ± 0.071 1.378 ± 0.071
0.017 ± 0.008 3.789 ± 0.489 3.784 ± 0.487 −0.111 ± 0.048
10.0
1.363 ± 0.067 1.375 ± 0.070
0.889 ± 0.152 3.879 ± 0.496 3.947 ± 0.507
1.773 ± 0.048
20.0
1.309 ± 0.059 1.361 ± 0.068
3.977 ± 0.526 4.075 ± 0.507 4.448 ± 0.570
9.161 ± 0.418
Table I. Fisher uncertainty from 25 noise realizations on the tensor-to-scalar ratio r for the ideal and PD estimators, assuming
the true r = 0, with and without a rotating HWP, for dispersion z ∈{0.1 %, 1 %, 10 %, 20 %} of the noise parameters. In
addition to the absolute numbers, the relative increase of uncertainty inherent to the PD estimator is given in percent.
ideal case. We therefore consider that this is a statistical
fluke and/or caused by uncontrollable numerical effects
that could be due too many different factors, such as
imperfections in the estimation of the noise weights, esti-
mation of the pseudo-spectra, etc. Finally, we emphasize
that this is anyway a very small effect (about 0.1 %) and
does not affect our conclusions.
V.
PAIR-DIFFERENCING IN THE PRESENCE
OF INSTRUMENTAL SYSTEMATIC EFFECTS
In previous sections, we demonstrated two key points
about pair-differencing. First, it provides estimates of the
polarized sky that are as optimal as possible when arbi-
trary, unpolarized common modes are present. Second,
in practical applications, PD maps nearly have the best
possible precision given the sensitivity of the instrument
itself, i.e., as if atmospheric correlated noise were not
present. While these results are promising, they assumed
rather idealized conditions, without instrumental system-
atics.
All mapmaking methods are affected by those,
though some handle certain types of systematics better
than others. The specific impact may depend heavily on
the particular systematic effect and instrument involved,
requiring case-by-case analysis. The purpose of this sec-
tion is not to replace such detailed studies, but rather
to show that PD could still perform well, and deserves
14
Figure 7.
Increase of noise power in PD maps relative to the ideal high-ℓwhite noise floor, computed as an average of the 10 %
largest multipole bins. The dependence on the dispersion z is encoded by color, and the markers and error bars respectively
show the average and standard deviation over 25 independent instrumental noise realizations. Top panels have a rotating HWP,
and bottom panels do not. Left (resp. right) panels show the EE (resp. BB) spectra. A threshold of 10 % of the maximum
hit count is applied to the maps, before apodizing them on a 10◦radius.
consideration in real-world analysis pipelines.
We consider here the simple example of gain mismatch
between the two detectors of a pair [11, 24, 59]. This
undermines the ability of PD to cleanly remove signals
common to both detectors, a key strength of the method.
Still, given that relative calibration factors in real-world
experiments can typically be determined with percent
level accuracy [32], we expect most of the atmospheric
contamination to be rejected by the differencing opera-
tion. Any residual leakage is then dealt with by using
appropriate weights reflecting the additional 1/f noise.
To investigate the impact of gain mismatch on the re-
covered PD maps, we use the setup described in sec-
tion III C 1. For each detector pair, we multiply the at-
mosphere contribution to the data of the first detector
by 1 + δ/2 and that of the second one by 1 −δ/2, where
δ is drawn randomly around zero with some dispersion
(either 0.1 or 1 %), and independently for each detector
pair. After forming the difference streams, a 1/f model,
Eq. (28), is fitted to each of them in order to account
for both the instrumental noise and residual atmosphere.
The maps are then estimated as in Eq. (24).
In Figure 8, left panel, we show that HWP modulation
still enables a precise recovery of the polarization infor-
mation. Indeed, the additional variance at large angular
scales caused by the leakage seems of comparable scale to
the one due to a variation of instrumental noise param-
eters across the focal plane (c.f. section IV). When no
15
100
200
300
400
500
Multipole
0.950
0.975
1.000
1.025
1.050
1.075
1.100
1.125
1.150
N /Nref
HWP
100
200
300
400
500
Multipole
100
101
102
103
104
No HWP
PD nominal
PD perturbed
PD 0.1% gain error
PD 1% gain error
Figure 8.
Comparison of BB noise spectra using pair differencing, with (left) and without (right) HWP rotation. Spectra
are plotted relative to a reference case with no systematics and instrumental noise parameters at there nominal values. The
“perturbed” curves (blue) correspond to a typical 10 % variation around the nominal values. Finally, cases with 0.1 % (resp.
1 %) gain errors are plotted in orange (resp. green).
HWP is used (right panel), the situation is different. At-
mospheric leakage increases the variance by roughly two
to four orders of magnitude at low multipoles (ℓ< 100)
depending on the level of gain mismatch.
It is likely
that better modeling of the residual atmospheric noise,
in particular inclusion of the correlations between detec-
tor pairs, could lead to more precise recovery of the po-
larization signal in both cases. Nonetheless, we conclude
overall that even in the presence of such residuals, owing
to the presence of a continuously rotating HWP, the sim-
ple PD technique, as is, is capable of delivering maps of
quality comparable to the best achievable while making
minimal assumptions about atmospheric contamination.
More complex sources of systematic effects will un-
avoidably affect the data of real experiments and impact
PD performance, for instance slightly different or asym-
metric beams [24]. We expect that the powerful combi-
nation of orthogonal detector pairs and HWP will con-
tinue helping to mitigate many of them, as long as their
main consequence is the atmospheric, or more generally
any unpolarized, signal leakage to the difference data.
Other classes of effects may instead require extensions
of the formalism, for example the introduction of addi-
tional degrees of freedom to model them, e.g., [11, 60],
thus potentially also affecting the overall precision of the
recovered maps. This needs however to be studied in de-
tail, case-by-case, for PD as well as any other mapmaking
procedure.
VI.
CONCLUSION
In this work, we have compared two mapmaking ap-
proaches based on their ability to precisely reconstruct
CMB polarization maps from ground-based experiments
in the presence of atmospheric contamination. In partic-
ular, we have demonstrated that pair-differencing, a tech-
nique that relies on the rejection of common-mode noise
using pairs of orthogonally polarized detectors, emerges
naturally as a maximum likelihood estimator under the
assumption of perfectly correlated atmospheric emission
between detectors in a pair, without explicit modeling
of the contaminant signal. This is a key feature given
the dynamic and turbulent nature of atmospheric fluc-
tuations, which makes them difficult to characterize and
model accurately.
Our results show that PD achieves close to ideal sen-
sitivity to the inflationary tensor-to-scalar ratio r in the
presence of realistic variations of instrumental noise prop-
erties across the focal plane, despite using only half of
the available data and thus neglecting cross-correlations
between detectors of a pair.
In particular, we find
only a small (few percent) enlargement of the uncer-
tainty on r compared to an idealized reconstruction from
atmosphere-free data, when a rotating half-wave plate is
used. This degradation increases to about ten percent in
the absence of a HWP.
Compared to usual down-weighting approaches, which
include both instrumental and atmospheric noise in the
16
noise weights, PD offers a simpler and more numerically
stable alternative, particularly when detectors in a pair
have similar noise properties. Moreover, in our tests we
find that DW typically has to perform an implicit dif-
ferencing operation, by assigning the same weights to
both detectors in a pair, to reach competitive perfor-
mance with PD. Since the atmosphere is often used as a
beam-filling calibrator to compute relative gain factors,
we expect that this remains the case in practice, unless
a very accurate model of the atmospheric noise is avail-
able, allowing for its removal from the data.
We also
note that the inclusion of cross-correlations between dif-
ferent detectors in the noise model should allow DW to
better mitigate atmospheric contamination while taking
into account variations within detector pairs.
We conclude that for the recovery of polarization maps,
and experiments featuring both a HWP and pairs of or-
thogonal detectors, PD is a numerically efficient method
that is surprisingly good at mitigating unpolarized at-
mospheric contamination in a model-independent way,
while delivering nearly optimal performance in terms of
statistical uncertainty of the recovered maps, and poten-
tially being as vulnerable as any other method to some
common systematic effects. Future work should presum-
ably explore hybrid strategies and systematic mitigation
schemes to further enhance the robustness and accuracy
of polarization reconstruction.
ACKNOWLEDGMENTS
The authors would like to thank Hamza El Bouhargani,
Wuhyun Sohn and the SciPol team for useful discussions.
SB acknowledges partial PhD funding from the DIM
ORIGINES program (project RADIATION). This work
was supported by the SCIPOL project6, funded by the
European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation program
(PI: Josquin Errard, Grant agreement No. 101044073).
The authors also benefited from the European Union’s
Horizon 2020 research and innovation program under
grant agreement No. 101007633 CMB-Inflate.
This work was granted access to the HPC resources of
IDRIS under the allocation 2024-AD010414161R2 and
2025-AD010416919R1 made by GENCI. This research
also used resources of the National Energy Research Sci-
entific Computing Center (NERSC), a Department of
Energy User Facility using NERSC award HEP-ERCAP
0034243 project. Results presented in this paper have
made use of the following packages:
NaMaster [56],
healpy [61], NumPy [62], SciPy [63], Matplotlib [64],
and seaborn [65].
6 https://scipol.in2p3.fr
Appendix A: Pair differencing as template
marginalization
Assume a data model where two orthogonal detectors,
denoted by ∥and ⊥, measure the same, unknown, total
intensity signal a in addition to sky polarization sQU:
d =
d∥
d⊥
=
T
T
a +
PQU
−PQU
sQU + n.
(A1)
Using the pair transformation X = 1
2
I
I
I −I
from Eq.(20),
the data model is rewritten as
Xd =
d+
d−
=
T
0
|{z}
≡T
a +
0
PQU
| {z }
≡P
sQU + Xn.
(A2)
The sky polarization estimate that deprojects the total
intensity signal (and corresponds to marginalizing over
the total intensity amplitudes a) is
ˆsQU = (P⊤FP)−1P⊤F(Xd)
(A3a)
with
F ≡C −CT (T ⊤CT )−1T ⊤C
(A3b)
C ≡(XNX)−1.
(A3c)
We want to minimize the assumptions on the total in-
tensity signal.
This amounts to deprojecting anything
that is measured the same way by both detectors, i.e.,
taking the original T = I. In particular, we do not as-
sume that this contribution can be pixelized or has any
predictable structure. We now show that this leads to
the pair differencing estimator.
Given the structure of P = PQU[ 0
I ] and T = [ I
0 ], de-
fined in Eq. (A2), any sandwiches between those matrices
extract one of the four blocks of the object in the mid-
dle. For example, the template orthogonalization kernel
is simply:
T ⊤CT =
I 0
C11 C12
C21 C22
I
0
= C11.
(A4)
The mapmaking kernel, on the other hand, is:
P⊤FP =
I 0
P ⊤
QU
|
{z
}
commute
F11 F12
F21 F22
PQU
I
0
|
{z
}
commute
= P ⊤
QUF22PQU.
(A5)
Using this trick, we can write the estimator as
ˆsQU = (P ⊤
QUF22PQU)−1P ⊤
QU(F21d+ + F22d−).
(A6)
By definition of the filtering operator, we have FT = 0.
Thus, F11 = F21 = 0. All that is left is to compute F22.
This is straightforward from its definition (A3b):
F22 =
C −C·1(C11)−1C1·
22
= C22 −C21(C11)−1C12
=
(C−1)22
−1
by blockwise inversion
= ((XNX)22)−1
(A7)
17
and we recognize the inverse noise covariance matrix of
the difference data. Therefore, the estimator is simply
the pair differencing estimator (21), which concludes the
proof.
Appendix B: Polarized part of the full IQU
estimator
El Bouhargani [57, Appendix C] shows that when the
instrumental noise is the same in two detectors of a pair,
the pair differencing solution is as optimal as the simul-
taneous IQU reconstruction. We broadly recall the ar-
gument here, but refer anyone interested in the detailed
derivation to the original work.
Let us introduce the pair-sum and pair-difference data
vectors and associated pointing matrices and noise vec-
tors,
d+ = 1
2
d∥+ d⊥
= P+s + n+
(B1a)
d−= 1
2
d∥−d⊥
= P−s + n−
(B1b)
where the ∥and ⊥subscripts denote the orthogonal de-
tectors of a pair.
The IQU solution ˆs can be rewritten in terms of the
transformed data set, with block versions of the matrices:
ˆs =
P ⊤N −1P
−1 P ⊤N −1d
=
P ⊤
+ P ⊤
−
N −1
P+
P−
−1
P ⊤
+ P ⊤
−
N −1
d+
d−
(B2)
with the transformed noise covariance matrix N inverted
blockwise:
N −1 ≡
N++ N+−
N−+ N−−
−1
≡
Π++ Π+−
Π−+ Π−−
.
(B3)
Π□□is just a notation for the □□block of N −1.
From there, we can write the map estimator in block
form, separating the intensity and polarization compo-
nents. The polarized part is given by (cf. [57], Equation
(C.14)):
ˆsQU = sQU +
P ⊤
−F−−P−
−1 P ⊤
−[F−−n−+ F−+n+]
(B4)
with
F−−= Π−−−Π−+P+(P ⊤
+ Π++P+)−1P ⊤
+ Π+−
(B5a)
F−+ = Π−+ −Π−+P+(P ⊤
+ Π++P+)−1P ⊤
+ Π++
(B5b)
We see from Eq. (B4) that ˆsQU is the sum of three
terms: (i) the true sky map (ii) a direct contribution from
the pair-difference noise (iii) a cross-contribution from
the pair-sum noise. Whenever the covariance of the pair-
sum and pair-difference noise vectors, N+−, is small, the
transformed noise covariance matrix N becomes block-
diagonal,
N =
N++
≃0
≃0
N−−
,
(B6)
such that
F−+ = 0
and
F−−= N −1
−−.
(B7)
In this limit, the optimal polarization map reduces to the
PD solution (21):
ˆsQU
N−+→0
−−−−−→ˆspd
QU = sQU + (P ⊤
−N −1
−−P−)−1P ⊤
−N −1
−−n−
(B8)
Appendix C: Pair differencing in the case of white
noise
We start by simplifying Eq. (B4) by assuming that all
sky pixels are observed with a uniform variety of crossing
angles, such that:
P ⊤
−F−−≈P ⊤
−Π−−
(C1a)
P ⊤
−F−+ ≈P ⊤
−Π−+
(C1b)
This simplification is a very good approximation for the
deepest parts of the map, which are observed many times
by different detectors with varying telescope orientations.
The presence of a rotating HWP makes this case com-
pelling. The QU part of the estimate now reads:
ˆsQU = sQU +
P ⊤
−Π−−P−
−1 P ⊤
−[Π−−n−+ Π−+n+] .
(C2)
We denote the white noise levels of the even- and odd-
index detectors by σ∥and σ⊥respectively. They are free
to vary from one detector pair to another, and we treat
them as independent Gaussian random variables. Leav-
ing out any correlations between different detectors, we
consider a single pair of detectors with independent noise
levels following the same distribution N(¯σ, z2) centered
on a nominal value ¯σ and with variance z2. We will com-
ment on the multi-detector case at the end.
These assumptions simplify the starting equation (C2)
because the noise matrices are just scalars (∝I), so they
commute with other matrices:
ˆsQU = sQU +
P ⊤
−P−
−1 P ⊤
−
|
{z
}
≡B−
h
n−+ (Π−−)−1Π−+n+
|
{z
}
≡˜n+
i
= sQU + B−n−
|
{z
}
PD estimator
+B−˜n+.
(C3)
We have recognized the PD estimator ˆspd
QU and labeled
the binning operator B−.
18
The blocks of the transformed noise covariance matrix
N (B3) are related to those of the original one by:
N++ = 1
4
N∥+ 2N∥×⊥+ N⊥
= 1
4
σ2
∥+ σ2
⊥
I (C4a)
N−−= 1
4
N∥−2N∥×⊥+ N⊥
= 1
4
σ2
∥+ σ2
⊥
I (C4b)
N+−= N−+ = 1
4
N∥−N⊥
= 1
4
σ2
∥−σ2
⊥
I
(C4c)
with N∥×⊥= 0 in our case where detector noise levels are
uncorrelated. One can write the inverse noise covariance
blocks Π appearing in Eq. (C3) explicitly, omitting the
identity matrix:
(Π−−)−1 = N++ −N+−N −1
−−N−+
= 1
4(σ2
∥+ σ2
⊥)
−1
4(σ2
∥−σ2
⊥) ×
4
σ2
∥+ σ2
⊥
× 1
4(σ2
∥−σ2
⊥)
=
σ2
∥σ2
⊥
σ2
∥+ σ2
⊥
and similarly,
Π−+ = −Π−−N−+N −1
++
= −
σ2
∥+ σ2
⊥
σ2
∥σ2
⊥
× 1
4(σ2
∥−σ2
⊥) ×
4
σ2
∥+ σ2
⊥
= −
σ2
∥−σ2
⊥
σ2
∥σ2
⊥
.
Together these give the cross noise term
˜n+ = −
σ2
∥−σ2
⊥
σ2
∥+ σ2
⊥
n+.
(C5)
It is now straightforward to compute the pixel-domain
covariance matrix for each noise term and their cross-
covariance. We use ∆to denote those quantities, consis-
tently with the main text, Eq. (30), and obtain:
∆1 ≡Var[B−n−] =
σ2
∥+ σ2
⊥
4
Σ−1
−
(C6a)
∆2 ≡Var[B−˜n+] =
σ2
∥−σ2
⊥
σ2
∥+ σ2
⊥
!2
∆1 = ε2∆1
(C6b)
∆12 ≡Cov[B−n−, B−˜n+] = −∆2
(C6c)
where
Σ−≡B⊤
−B−= P ⊤
−P−
(C7)
describes the sky coverage in Q and U, and ε is the rel-
ative difference of noise levels in the detector pair, de-
fined in Eq. (33), which is equal to zero when detectors
have the same noise level and ±1 when one of them has
zero (or infinite) noise. Since Σ−is positive definite, ∆1
is as well, whereas the cross-covariance ∆12 is negative-
definite. Therefore, at least in our simplified white noise
model, the two noise terms of the IQU estimator are anti-
correlated in every pixel. This is important because we
know that this estimator should be optimal (in the sense
of minimum variance), and therefore have a smaller vari-
ance than the PD estimator which only has the first term
with variance ∆1. Quantitatively, we have
∆iqu ≡Var[ˆsQU] = Var[B−n−+ B−˜n+]
= ∆1 + ∆2 + 2∆12
= ∆1 −∆2
= (1 −ε2)∆1
⩽∆1 = Var[ˆspd
QU] ≡∆pd.
(C8)
The variance of the PD estimator is indeed larger than
what can be obtained by computing the full IQU solution
in the white noise case. The increase of variance is given
by the factor, uniform across the map,
η ≡∆pd
∆iqu =
∆1
∆1 −∆2
=
1
1 −ε2 ⩾1,
(C9)
with η = 1 when the noise levels of the two detectors are
equal (ε = 0).
We emphasize that η is a random variable, because it
ultimately depends on the detector noise levels which are
drawn randomly. The expected increase of variance can
be evaluated numerically by drawing many pairs (σ∥, σ⊥)
and averaging the corresponding η values.
Let us now comment on the multi-detector case. The
generalization of the result (C9) is not straightforward.
Each focal plane pixel observes a slightly different part
of the sky, so the Σ−matrix is specific to each detector
pair. For example, the total covariance matrix for the
PD estimator would be
∆pd tot =
X
pairs
wΣ−
−1
with w =
4
σ2
∥+ σ2
⊥
.
(C10)
Because of the dispersion of noise levels across the fo-
cal plane, combined with slightly different footprints of
those detectors (especially when considering an instru-
ment with a wide field-of-view), this expression can not
be simplified in the general case.
19
[1] P. Ade, J. Aguirre, Z. Ahmed, S. Aiola, A. Ali, D. Alonso,
M. A. Alvarez, K. Arnold, P. Ashton, J. Austermann,
et al., Journal of Cosmology and Astroparticle Physics
2019, 056 (2019), ISSN 1475-7516.
[2] K. Abazajian, G. Addison, P. Adshead, Z. Ahmed, S. W.
Allen, D. Alonso, M. Alvarez, A. Anderson, K. S. Arnold,
C. Baccigalupi, et al., CMB-S4 Science Case, Reference
Design, and Project Plan (2019), 1907.04473.
[3] LiteBIRD Collaboration, E. Allys, K. Arnold, J. Au-
mont, R. Aurlien, S. Azzoni, C. Baccigalupi, A. J. Ban-
day, R. Banerji, R. B. Barreiro, et al., Progress of The-
oretical and Experimental Physics 2023, 042F01 (2023),
ISSN 2050-3911.
[4] Polarbear Collaboration, P. A. R. Ade, M. Aguilar,
Y. Akiba, K. Arnold, C. Baccigalupi, D. Barron, D. Beck,
F. Bianchini, D. Boettger, et al., The Astrophysical Jour-
nal 848, 121 (2017), ISSN 0004-637X, 1538-4357.
[5] Keck Array and BICEP2 Collaborations, P. A. R. Ade,
Z. Ahmed, R. W. Aikin, K. D. Alexander, D. Barkats,
S. J. Benton, C. A. Bischoff, J. J. Bock, R. Bowens-
Rubin, et al., Physical Review Letters 121, 221301
(2018).
[6] R. Keisler, S. Hoover, N. Harrington, J. W. Henning,
P. A. R. Ade, K. A. Aird, J. E. Austermann, J. A. Beall,
A. N. Bender, B. A. Benson, et al., The Astrophysical
Journal 807, 151 (2015), ISSN 0004-637X.
[7] U. Seljak and M. Zaldarriaga, Physical Review Letters
78, 2054 (1997).
[8] BICEP/Keck Collaboration, P. A. R. Ade, Z. Ahmed,
M.
Amiri,
D.
Barkats,
R.
B.
Thakur,
D.
Beck,
C. Bischoff, J. J. Bock, H. Boenish, et al., Physical Re-
view Letters 127, 151301 (2021), ISSN 0031-9007, 1079-
7114, 2110.00483.
[9] M. Tristram, A. J. Banday, K. M. G´orski, R. Keskitalo,
C. R. Lawrence, K. J. Andersen, R. B. Barreiro, J. Bor-
rill, L. P. L. Colombo, H. K. Eriksen, et al., Physical
Review D 105, 083524 (2022).
[10] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin,
M. Amiri, D. Barkats, S. J. Benton, C. A. Bischoff, J. J.
Bock, J. A. Brevik, I. Buder, et al., The Astrophysical
Journal 792, 62 (2014), ISSN 1538-4357, 1403.4302.
[11] Polarbear Collaboration, P. A. R. Ade, Y. Akiba, A. E.
Anthony, K. Arnold, M. Atlas, D. Barron, D. Boettger,
J. Borrill, S. Chapman, et al., The Astrophysical Journal
794, 171 (2014), ISSN 0004-637X.
[12] F. Ge, M. Millea, E. Camphuis, C. Daley, N. Huang,
Y. Omori, W. Quan, E. Anderes, A. J. Anderson,
B. Ansarinejad, et al., Cosmology From CMB Lensing
and Delensed EE Power Spectra Using 2019-2020 SPT-
3G Polarization Data (2024), 2411.06000.
[13] Y. Li, J. R. Eimer, K. Osumi, J. W. Appel, M. K. Brewer,
A. Ali, C. L. Bennett, S. M. Bruno, R. Bustos, D. T.
Chuss, et al., The Astrophysical Journal 956, 77 (2023),
ISSN 0004-637X.
[14] Y. Li, J. Eimer, J. Appel, C. Bennett, M. Brewer, S. M.
Bruno, R. Bustos, C. Chan, D. Chuss, J. Cleary, et al.,
A Measurement of the Largest-Scale CMB E-mode Po-
larization with CLASS (2025), 2501.11904.
[15] C. Herv´ıas-Caimapo, K. Wolz, A. La Posta, S. Azzoni,
D. Alonso, K. Arnold, C. Baccigalupi, S. Biquard, M. L.
Brown, E. Calabrese, et al., Journal of Cosmology and
Astroparticle Physics 2025, 055 (2025), ISSN 1475-7516.
[16] R. Stompor, A. Balbi, J. D. Borrill, P. G. Ferreira,
S. Hanany, A. H. Jaffe, A. T. Lee, S. Oh, B. Rabii, P. L.
Richards, et al., Physical Review D 65, 022003 (2001),
ISSN 0556-2821, 1089-4918, astro-ph/0106451.
[17] E. Keih¨anen, R. Keskitalo, H. Kurki-Suonio, T. Pouta-
nen, and A.-S. Sirvio, Astronomy and Astrophysics 510,
A57 (2010), ISSN 0004-6361, 1432-0746, 0907.0367.
[18] R. D¨unner, M. Hasselfield, T. A. Marriage, J. Sievers,
V. Acquaviva, G. E. Addison, P. A. R. Ade, P. Aguirre,
M. Amiri, J. W. Appel, et al., The Astrophysical Journal
762, 10 (2012), ISSN 0004-637X.
[19] D. Poletti,
G. Fabbian,
M. Le Jeune,
J. Peloton,
K. Arnold, C. Baccigalupi, D. Barron, S. Beckman,
J. Borrill, S. Chapman, et al., Astronomy & Astrophysics
600, A60 (2017), ISSN 0004-6361, 1432-0746.
[20] H. El Bouhargani, A. Jamal, D. Beck, J. Errard, L. Grig-
ori, and R. Stompor, Astronomy and Computing 39,
100576 (2022), ISSN 22131337, 2112.03370.
[21] QUaD Collaboration, C. Pryke, P. Ade, J. Bock, M. Bow-
den, M. L. Brown, G. Cahill, P. G. Castro, S. Church,
T. Culverhouse, et al., The Astrophysical Journal 692,
1247 (2009), ISSN 0004-637X.
[22] W. Hu, M. M. Hedman, and M. Zaldarriaga, Physical
Review D 67, 043004 (2003).
[23] D. O’Dea, A. Challinor, and B. R. Johnson, Monthly No-
tices of the Royal Astronomical Society 376, 1767 (2007),
ISSN 0035-8711.
[24] M. Shimon, B. Keating, N. Ponthieu, and E. Hivon,
Physical Review D 77, 083003 (2008), ISSN 1550-7998,
1550-2368, 0709.1513.
[25] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin,
D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, J. A.
Brevik, I. Buder, E. Bullock, et al., The Astrophysical
Journal 814, 110 (2015), ISSN 1538-4357, 1502.00608.
[26] D. B. Thomas, N. McCallum, and M. L. Brown, Monthly
Notices of the Royal Astronomical Society 491, 1960
(2020), ISSN 0035-8711, 1365-2966, 1905.12647.
[27] G. Hinshaw, C. Barnes, C. L. Bennett, M. R. Greason,
M. Halpern, R. S. Hill, N. Jarosik, A. Kogut, M. Limon,
S. S. Meyer, et al., The Astrophysical Journal Supple-
ment Series 148, 63 (2003), ISSN 0067-0049.
[28] B. R. Johnson, J. Collins, M. E. Abroe, P. a. R.
Ade, J. Bock, J. Borrill, A. Boscaleri, P. de Bernardis,
S. Hanany, A. H. Jaffe, et al., The Astrophysical Journal
665, 42 (2007), ISSN 0004-637X.
[29] O. P. Lay and N. W. Halverson, The Astrophysical Jour-
nal 543, 787 (2000), ISSN 0004-637X.
[30] C. L. Kuo, P. a. R. Ade, J. J. Bock, C. Cantalupo,
M. D. Daub, J. Goldstein, W. L. Holzapfel, A. E. Lange,
M. Lueker, M. Newcomb, et al., The Astrophysical Jour-
nal 600, 32 (2004), ISSN 0004-637X.
[31] J. Errard, P. A. R. Ade, Y. Akiba, K. Arnold, M. At-
las, C. Baccigalupi, D. Barron, D. Boettger, J. Borrill,
S. Chapman, et al., The Astrophysical Journal 809, 63
(2015), ISSN 0004-637X.
[32] S. Naess, Y. Guan, A. J. Duivenvoorden, M. Hasselfield,
Y. Wang, I. Abril-Cabezas, G. E. Addison, P. A. R. Ade,
S. Aiola, T. Alford, et al., The Atacama Cosmology Tele-
scope: DR6 Maps (2025), 2503.14451.
[33] S. Hanany and P. Rosenkranz, New Astronomy Reviews
20
47, 1159 (2003), ISSN 1387-6473.
[34] S. Spinelli, G. Fabbian, A. Tartari, M. Zannoni, and
M. Gervasi, Monthly Notices of the Royal Astronomical
Society 414, 3272 (2011), ISSN 0035-8711.
[35] S. Takakura, M. A. O. Aguilar-Fa´undez, Y. Akiba,
K. Arnold, C. Baccigalupi, D. Barron, D. Beck, F. Bian-
chini, D. Boettger, J. Borrill, et al., The Astrophysical
Journal 870, 102 (2019), ISSN 0004-637X, 1538-4357.
[36] Y. Li, J. W. Appel, C. L. Bennett, R. Bustos, D. T.
Chuss, J. Cleary, J. D. Couto, S. Dahal, R. Datta,
R. D¨unner, et al., The Astrophysical Journal 958, 154
(2023), ISSN 0004-637X.
[37] A. Coerver,
J. A. Zebrowski,
S. Takakura,
W. L.
Holzapfel, P. A. R. Ade, A. J. Anderson, Z. Ahmed,
B. Ansarinejad, M. Archipley, L. Balkenhol, et al., The
Astrophysical Journal 982, 15 (2025), ISSN 0004-637X.
[38] T. E. Montroy, P. a. R. Ade, J. J. Bock, J. R. Bond,
J. Borrill, A. Boscaleri, P. Cabella, C. R. Contaldi, B. P.
Crill, P. de Bernardis, et al., The Astrophysical Journal
647, 813 (2006), ISSN 0004-637X.
[39] K. W. Yoon, P. a. R. Ade, D. Barkats, J. O. Battle,
E. M. Bierman, J. J. Bock, J. A. Brevik, H. C. Chiang,
A. Crites, C. D. Dowell, et al., in Millimeter and Submil-
limeter Detectors and Instrumentation for Astronomy III
(SPIE, 2006), vol. 6275, pp. 508–525.
[40] W. C. Jones, T. E. Montroy, B. P. Crill, C. R. Contaldi,
T. S. Kisner, A. E. Lange, C. J. MacTavish, C. B. Net-
terfield, and J. E. Ruhl, Astronomy & Astrophysics 470,
771 (2007), ISSN 0004-6361, 1432-0746.
[41] C. L. Kuo, J. J. Bock, J. A. Bonetti, J. Brevik, G. Chat-
topadhyay, P. K. Day, S. Golwala, M. Kenyon, A. E.
Lange, H. G. LeDuc, et al., in Millimeter and Submil-
limeter Detectors and Instrumentation for Astronomy IV
(SPIE, 2008), vol. 7020, pp. 415–428.
[42] J. R. Hinderks, P. Ade, J. Bock, M. Bowden, M. L.
Brown,
G. Cahill,
J. E. Carlstrom,
P. G. Castro,
S. Church, T. Culverhouse, et al., The Astrophysical
Journal 692, 1221 (2009), ISSN 0004-637X.
[43] Z. Ahmed, M. Amiri, S. J. Benton, J. J. Bock, R. Bowens-
Rubin, I. Buder, E. Bullock, J. Connors, J. P. Filippini,
J. A. Grayson, et al., in Millimeter, Submillimeter, and
Far-Infrared Detectors and Instrumentation for Astron-
omy VII (SPIE, 2014), vol. 9153, pp. 540–551.
[44] S. W. Henderson, R. Allison, J. Austermann, T. Baildon,
N. Battaglia, J. A. Beall, D. Becker, F. D. Bernardis,
J. R. Bond, E. Calabrese, et al., Journal of Low Temper-
ature Physics 184, 772 (2016), ISSN 0022-2291, 1573-
7357, 1510.02809.
[45] C. M. Posada, P. A. R. Ade, Z. Ahmed, K. Arnold, J. E.
Austermann, A. N. Bender, L. E. Bleem, B. A. Benson,
K. Byrum, J. E. Carlstrom, et al., Superconductor Sci-
ence and Technology 28, 094002 (2015), ISSN 0953-2048.
[46] N. Stebor, P. Ade, Y. Akiba, C. Aleman, K. Arnold,
C. Baccigalupi, B. Barch, D. Barron, S. Beckman,
A. Bender, et al., in Millimeter, Submillimeter, and Far-
Infrared Detectors and Instrumentation for Astronomy
VIII (SPIE, 2016), vol. 9914, pp. 363–371.
[47] A. Kusaka, T. Essinger-Hileman, J. W. Appel, P. Gal-
lardo, K. D. Irwin, N. Jarosik, M. R. Nolta, L. A. Page,
L. P. Parker, S. Raghunathan, et al., Review of Scientific
Instruments 85, 024501 (2014), ISSN 0034-6748, 1089-
7623, 1310.3711.
[48] A. Ritacco, N. Ponthieu, A. Catalano, R. Adam, P. Ade,
P. Andr´e, A. Beelen, A. Benoˆıt, A. Bideaud, N. Billot,
et al., Astronomy & Astrophysics 599, A34 (2017), ISSN
0004-6361, 1432-0746.
[49] S. Takakura, M. Aguilar, Y. Akiba, K. Arnold, C. Bac-
cigalupi, D. Barron, S. Beckman, D. Boettger, J. Borrill,
S. Chapman, et al., Journal of Cosmology and Astropar-
ticle Physics 2017, 008 (2017), ISSN 1475-7516.
[50] T. Essinger-Hileman, A. Kusaka, J. W. Appel, S. K.
Choi, K. Crowley, S. P. Ho, N. Jarosik, L. A. Page, L. P.
Parker, S. Raghunathan, et al., Review of Scientific In-
struments 87, 094503 (2016), ISSN 0034-6748.
[51] M. L. Brown, A. Challinor, C. E. North, B. R. Johnson,
D. O’Dea, and D. Sutton, Monthly Notices of the Royal
Astronomical Society 397, 634 (2009), ISSN 00358711,
13652966.
[52] M. Rashid, M. L. Brown, and D. B. Thomas, CMB Polar-
isation Signal Demodulation with a Rotating Half-Wave
Plate (2023), 2307.01860.
[53] T. W. Morris, E. Battistelli, R. Bustos, S. K. Choi, A. J.
Duivenvoorden, J. Dunkley, R. D¨unner, M. Halpern,
Y. Guan, J. van Marrewijk, et al., Physical Review D
111, 082001 (2025).
[54] K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt,
F. K. Hansen, M. Reinecke, and M. Bartelman, The As-
trophysical Journal 622, 759 (2005), ISSN 0004-637X,
1538-4357, astro-ph/0409513.
[55] J. R. Stevens, N. Goeckner-Wald, R. Keskitalo, N. Mc-
Callum, A. Ali, J. Borrill, M. L. Brown, Y. Chinone,
P. A. Gallardo, A. Kusaka, et al., in Millimeter, Submil-
limeter, and Far-Infrared Detectors and Instrumentation
for Astronomy IX (2018), p. 136, 1808.05131.
[56] D. Alonso, J. Sanchez, A. Slosar, and LSST Dark Energy
Science Collaboration, Monthly Notices of the Royal As-
tronomical Society 484, 4127 (2019), ISSN 0035-8711.
[57] H. El Bouhargani, Ph.D. thesis, Universit´e Paris Cit´e
(2021).
[58] A. Kusaka, J. Appel, T. Essinger-Hileman, J. A. Beall,
L. E. Campusano, H.-M. Cho, S. K. Choi, K. Crowley,
J. W. Fowler, P. Gallardo, et al., Journal of Cosmology
and Astroparticle Physics 2018, 005 (2018), ISSN 1475-
7516.
[59] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin,
D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, J. A.
Brevik, I. Buder, E. Bullock, et al., Physical Review Let-
ters 112, 241101 (2014).
[60] N. McCallum, D. B. Thomas, M. L. Brown, and N. Tes-
sore, Monthly Notices of the Royal Astronomical Society
501, 802 (2020), ISSN 0035-8711, 1365-2966, 2008.00011.
[61] A. Zonca, L. P. Singer, D. Lenz, M. Reinecke, C. Rosset,
E. Hivon, and K. M. Gorski, Journal of Open Source
Software 4, 1298 (2019), ISSN 2475-9066.
[62] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gom-
mers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor,
S. Berg, N. J. Smith, et al., Nature 585, 357 (2020), ISSN
1476-4687.
[63] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haber-
land, T. Reddy, D. Cournapeau, E. Burovski, P. Peter-
son, W. Weckesser, J. Bright, et al., Nature Methods 17,
261 (2020), ISSN 1548-7105.
[64] J. D. Hunter, Computing in Science & Engineering 9, 90
(2007), ISSN 1558-366X.
[65] M. L. Waskom, Journal of Open Source Software 6, 3021
(2021), ISSN 2475-9066.
|
Recovering unbiased CMB polarization maps using modern ground-based experiments with minimal assumptions about atmospheric emission Simon Biquard ,1, 2, ∗Josquin Errard ,1 and Radek Stompor 1, 2 1Universit ́e Paris Cit ́e, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France 2CNRS-UCB International Research Laboratory, Centre Pierre Bin ́etruy, IRL2007, CPB-IN2P3, Berkeley, US (Dated: September 23, 2025) We present a study of unbiased reconstruction of cosmic microwave background (CMB) polarization maps from data collected by modern ground-based observatories. Atmospheric emission is a major source of correlated noise in such experiments, complicating the recovery of faint cosmological signals. We consider estimators that require only minimal assumptions about the properties of the unpolarized atmospheric emission, instead exploiting hardware solutions commonly implemented in modern instruments, such as pairs of orthogonal antennas in each focal plane pixel, interleaving of the antenna orientation from pixel-to-pixel, and polarization signal modulation via a continuously rotating half-wave plate (HWP). We focus on two techniques: (i) statistical down-weighting of low-frequency atmospheric signals, and (ii) pair-differencing (PD), which involves differencing the signals collected by two detectors in the same focal plane pixel. We compare their performance and contrast them with the idealized case where the atmospheric signal is perfectly known and can be cleanly subtracted. We show that PD can be derived from the maximum likelihood principle under very general assumptions about the atmospheric signal, and is therefore optimal in terms of map sensitivity for essentially any such signal. Remarkably, in the absence of instrumental systematics, but with reasonable variations of detector noise properties, PD yields polarized sky maps with noise levels only slightly worse (typically a few percent) than the ideal reference case. While the downweighting method could in principle match or surpass this performance, it requires highly accurate models of the atmospheric signal, which are not readily available. The performance of PD will be affected by instrumental systematics and those leaking the atmospheric signal to the difference time stream may have particularly adverse impact. However, we expect that some of these effects, as shown in the case of gain mismatch, are efficiently mitigated by a continuously rotating HWP, and therefore, that in the context of modern CMB experimental setups PD could provide a competitive, numerically robust, and efficient practical solution for CMB polarization mapmaking without the need for atmospheric modeling. I. INTRODUCTION The cosmic microwave background (CMB) is a powerful cosmological probe that has shaped our understanding of the universe. Measurements of its temperature anisotropies have precisely determined the parameters of the standard cosmological model, ΛCDM. However, the next frontier in CMB research lies in the study of its polarization anisotropies, which potentially contain the signature of new physics. The "B-mode" (parity-odd) component of the spin-two polarization field is of particular interest and a key target of current and future surveys, such as the Simons Observatory (SO) [1], CMBS4 [2], and LiteBIRD [3]. On moderate to small angular scales, it is generated by the gravitational lensing effect of large-scale structure acting on the primordial CMB Emode (parity-even) signal. This effect has been detected by several experiments with high significance in the last decade [4-6]. However, a large-scale primordial B-mode signal is expected if there are significant primordial gravitational waves in the early universe [7]. The detection of this signal would provide direct evidence for inflation, a period of accelerated expansion that is believed to occur ∗ in the early universe, and predicts the generation of such waves, thus opening a window on extreme high-energy physics. Ground-based experiments are leading the way in this quest for primordial B modes. Current constraints from the BICEP/Keck collaboration have placed an upper limit on the tensor-to-scalar ratio of r < 0.036 at 95 % confidence [8] (a reanalysis obtained r < 0.032 [9]), highlighting the need for improved sensitivity to detect the elusive inflationary B-mode signal. However, achieving constraining measurements of large angular scales from the ground requires overcoming key challenges, including atmospheric contamination and ground pickup. Atmospheric fluctuations introduce correlated noise that obscures the cosmological signal. Ground pickup, caused by side-lobe response to thermal radiation from the Earth, further complicates observations by adding an azimuthdependent contribution that is partially degenerate with the sky signal. Hardware developments have been implemented to address these challenges, which typically manifest through unpolarized correlated noise. Successful technologies include dual-polarization detectors and polarization modulators such as rotating half-wave plates. Because the B-mode spectrum is much smaller than both the temperature and E-mode polarization signals, modern experiments such as the SO deploy tens of thousands of detectors to achieve the required sensitivity. In 19 Sep 2025 2 this context, we must ensure that the data reduction and analysis pipelines are able to cope with the data volumes generated by these large arrays of detectors, while accurately modeling noise and systematic effects in order to extract the cosmological information distributed over the entire data set. Mapmaking, which refers to the reconstruction of the sky signal from the time-ordered data (TOD), is a crucial step of this program, and one that represents a significant computational burden, since it deals with the full size of the TOD. This challenge is further amplified by the presence of spurious signals like atmospheric emission or ground pickup. Consequently, explicit mitigation techniques, such as filtering and/or template deprojection performed at the time stream level, are necessary in order to suppress undesirable contributions. Such operations typically affect the cosmologically relevant signal in the data, an effect that has to be corrected for. Different approaches perform this correction at different stages. For instance, popular "filter-and-bin" techniques [4, 10-14] do so on the power spectrum level. These have gained widespread popularity due to their flexibility and numerical efficiency, and the ability to deliver unbiased angular power spectra under rather general assumptions for currently achievable survey depths [15]. Other classes of methods attempt to apply the correction directly on the mapmaking stage, and aim to produce unbiased maps of the sky. Various techniques of this sort have been proposed and applied in the past, ranging from general maximum likelihood to specialized destriping approaches [13, 16-20]. In general, they capitalize on an appropriately designed observing strategy to break potential degeneracies between the sky signal and contaminants [13, 19]. This is the case, for instance, for contaminants that happen to be synchronous with some well-defined instrumental parameters, such as telescope orientation or half-wave plate rotation. Such methods are in general more involved in terms of implementation and more costly to execute than the filter-and-bin technique. As they are based on different assumptions and could potentially deliver more accurate results, they remain of significant interest-an interest that is bound to grow further once the cosmological signals that have eluded us up to now are detected. From this perspective, atmospheric contamination appears particularly insidious, as it stems from a dynamic, turbulent medium and thus is not easy to characterize in a succinct and sufficiently accurate manner suitable for this class of mapmaking algorithms. In this work, we study the fidelity and precision of polarization maps produced by unbiased mapmaking methods from data affected by atmospheric contamination, explicitly taking into account typical hardware features of modern experiments. In particular, we focus on the socalled pair differencing technique, applied broadly in the past, see e.g. Refs. [10, 11, 21]. We show that this method can be derived from maximum likelihood considerations, requiring only a minimal set of assumptions about the atmosphere, and discuss in detail its impact on the sensitivity of the recovered polarization maps obtained from modern CMB experiments. This work thus complements previous studies that focused predominantly on the differential systematic effects that can affect it [22-26]. The paper is organized as follows. In section II, we provide a short introduction to the mapmaking formalism, and discuss specific challenges and features inherent to ground-based experiments. In section III we compare different methods of reconstructing the polarized part of the signal, including the pair differencing technique. In section IV, we assess the sensitivity of the latter to the predicted inflationary primordial B-mode signal. In section V, we discuss differential systematic effects. Finally, we summarize the results and conclude in section VI. II. FORMALISM AND BACKGROUND A. Mapmaking 1. Data model We model the TOD d as a linear function of the sky signal amplitudes s, and additional contributions characterized by amplitudes a, d = Ps + Ta + n. (1) The pointing matrix P and template matrix T encode the response of the instrument to s and a respectively. The time-domain vector n represents instrumental noise, whose average over the statistical ensemble of all possible realizations is assumed to be zero. A key assumption in Eq. (1) is that relevant contributions have a limited number of degrees of freedom, i.e., can be described by a limited number of known templates (columns of P and T) and corresponding amplitudes (vectors s and a). This implies in particular that we can discretize continuous signals (e.g. sky signal) with sufficient accuracy. For simplicity, we take the same pixelization for all three relevant Stokes parameters I, Q and U. Similarly, beams are assumed axially symmetric and the same for all three Stokes components, therefore they never appear explicitly here and can be thought of as being convolved with the maps. We also recall that for a perfect linear polarizer coupled to a total power detector, the sky signal part of Eq. (1) takes the explicit form [27] (Ps)t = Ipt + cos(2φt)Qpt + sin(2φt)Upt (2) where pt is the observed sky pixel (assuming no pointing interpolation) and φt is the angle of the polarizer with respect to the sky coordinates, both at time t. Similarly, if an ideal HWP oriented with time-dependent angle ψt with respect to the instrument is used to modulate the incoming polarization, the model instead reads [28] (Ps)t = Ipt +cos(2φt+4ψt)Qpt +sin(2φt+4ψt)Upt. (3) 3 2. Maximum likelihood solution Let us derive the general maximum likelihood (ML) estimator ˆs of the sky signal, assuming Gaussian noise with covariance N and a Gaussian prior on a. We may write the corresponding negative log-likelihood as χ2 = (d -Ps -Ta)⊤N -1(d -Ps -Ta) + (a - ̄a)⊤Σ-1 a (a - ̄a) (4) where ̄a is the expectation value of a, and Σa the associated covariance. The ML solution is found by minimizing χ2 with respect to a and s. For this, it is useful to change variables to a′ ≡a - ̄a, d′ ≡d -T ̄a (5) such that minimizing the χ2 with respect to a or a′ is equivalent and gives ˆa′ = (Σ-1 a + T ⊤N -1T)-1T ⊤N -1(d′ -Ps). (6) Injecting this in (4) gives χ2 = (d′ -Ps)⊤Z⊤N -1Z(d′ -Ps) + const(s) (7) with Z ≡I -T Σ-1 a + T ⊤N -1T -1 T ⊤N -1. (8) Noticing that Z⊤N -1 = N -1Z, we have the simplification Z⊤N -1Z = N -1Z2 = N -1Z (9) since Z is a projector. Finally, after minimizing the χ2 with respect to s we find ˆs = (P ⊤N -1ZP)-1P ⊤N -1Z(d -T ̄a). (10) If the prior is improper, i.e. Σ-1 a = 0, at least for some amplitudes, the corresponding modes are directly deprojected from the original data and not included in the sky signal estimate. In this extreme case of no prior information, N -1Z becomes the "filtering and weighting operator" FT (c.f. [19, 20]) N -1Z →FT ≡N -1 I -T(T ⊤N -1T)-1T ⊤N -1 (11) which filters all contributions spanned by the columns of T (i.e. FT T = 0) and weights orthogonal modes by the inverse noise variance. This construction yields an unbiased estimate of the sky signal in the presence of any systematic effect if T defines a subspace rich enough to encompass every possible manifestation of the effect. If that subspace is not orthogonal to the sky, i.e., if there are degeneracies between T and P, the system matrix (P ⊤FT P)-1 becomes singular, and the corresponding sky modes are impossible to distinguish from the systematic effect and can not be recovered [19]. Other modes are by construction unbiased over the statistical ensemble of instrumental noise realizations. The challenge in this case is to define a sufficiently general T that correctly captures the systematic while keeping the number of potentially degenerate sky modes to a minimum. When proper priors are available, i.e. when Σ-1 a is invertible, one can rewrite N -1Z using the Woodbury matrix identity: N -1Z = N -1 -N -1T(Σ-1 a + T ⊤N -1T)-1T ⊤N -1 = (N + TΣaT ⊤)-1 ≡e N -1. (12) The ML estimate now reads ˆs = (P ⊤e N -1P)-1P ⊤e N -1(d -T ̄a) (13) where e N includes both the variance due to instrumental noise and the systematic contribution, which is now characterized statistically around its average. We refer to this as the down-weighting approach hereafter. This estimator is unbiased over the ensemble of noise and systematic effect realizations, if only the average signal, ̄a, is known. In many actual applications it is assumed to vanish. The challenge then is to find a representation of the total covariance e N that is statistically sufficient and computationally manageable. As emphasized earlier, our sole focus will be the recovery of the polarization sky signal, i.e., maps of the Q and U Stokes parameters, from the available data. Typical data sets are composed of measurements taken by total power detectors, hence the measured signals combine all three Stokes parameters, I, Q, and U (neglecting circular polarization). The relevant pointing matrix can be split into polarized and total intensity parts, as well as the corresponding sky map: P = PI PQU ; s = sI sQU . (14) While we can in principle estimate the full sky map s (and possibly drop its I component a posteriori), a somewhat more general approach is to marginalize over the a priori unknown total intensity part by deriving the estimators directly for Q/U only. This is also applicable to situations where the total intensity signal is not recoverable. The relevant expressions can be readily obtained in the formalism presented above by extending the set of templates T, T -→T ′ = T PI , (15) and assuming improper priors for all the extra degrees of freedom, such that ˆsQU = P ⊤ QUN -1Z′PQU -1 P ⊤ QUN -1Z′d, (16) where Z′ follows Eq. (8) with the extended templates T ′. Lastly, we note that different choices of templates can be used to model a given systematic effect and that this choice may depend on the mapmaking method as well as the specific model of the data, and the details of the experiment and its operations. We discuss this in the next section. 4 B. Ground-based CMB observations 1. Atmospheric emission Ground-based telescopes have to observe the sky through the Earth's atmosphere. While the latter has regions of reduced opacity in the 30-300 GHz range where the CMB is at its brightest, even in these atmospheric windows there remains significant emission. This radiation increases the optical loading of the detectors and therefore raises their photon noise level, limiting the overall sensitivity of the instrument. More critically, fluctuations in the water vapor density column cause spatial and temporal variations in the atmosphere's brightness temperature, which introduce correlated noise both in time and between detectors. Those fluctuations are typically the dominant source of low-frequency noise for groundbased experiments [29-32] and represent one of the primary challenges for sensitive large-scale CMB measurements from the ground. While atmospheric emission is slightly circularly polarized due to Zeeman splitting of oxygen in the Earth's magnetic field, the expected linear polarized intensity is very low [33, 34]. Previous studies have set a 1 % limit on the linear polarization fraction of atmospheric emission [31]. However, the presence of horizontally aligned ice crystals in tropospheric clouds can generate spurious polarized emission by scattering radiation from the ground [35-37]. This causes significant "bursts" of horizontal (Stokes Q < 0) polarization in the data, which manifest as excess low-frequency noise in the TOD and are not mitigated by polarization modulation. Possible avenues for addressing this issue have been discussed in e.g. [37]. We will ignore this effect in the present work. Therefore, for the rest of the paper we model the atmosphere as an unpolarized signal correlated both spatially and in time. This opens multiple avenues to mitigate its impact on the recovered Q and U maps through combinations of appropriate hardware and software solutions. 2. Hardware solutions Ground-based CMB experiments have developed specialized hardware components to allow for an efficient detection of polarized sky signals in the presence of correlated noise sources. These help to address the problem of atmospheric contamination. Two key technologies have proved particularly useful: dual-polarization detectors, and polarization modulators (notably HWPs). Dual-polarization detectors allow to measure both the total power I and a linear combination of Q and U in a single focal plane pixel thanks to two orthogonal antennas [1, 38-46]. Samples collected by orthogonal antennas can be differenced in order to reject unpolarized signals, whose intensity is independent of antenna orientation. This enables the reconstruction of polarized signals (antenna angle dependent) with minimal contamination if the antennas are close to orthogonal, and also with negligible or small loss of precision (c.f. section IV). Polarization modulators perform frequency modulation to shift the polarization signal to higher frequencies in the TOD. With sufficiently high modulation frequency, the signal is measured in a band where detector sensitivity is mostly limited by photon noise rather than atmospheric fluctuations. The most common implementation of this technique is a continuously rotating HWP, which modulates the polarization signal at four times its mechanical rotation frequency. HWPs have been deployed and characterized for several millimeter-wave polarization experiments [28, 47-49]. Despite obvious advantages, they can also introduce their own systematic effects [28, 47, 50], including intensity-to-polarization leakage, modulation efficiency variations across the bandwidth, and mechanical vibrations. These effects require careful calibration and modeling in the data analysis pipeline, but are out of the scope of this paper. III. POLARIZED MAPMAKING FOR GROUND-BASED EXPERIMENTS We focus hereafter on unpolarized atmospheric emission as the only relevant systematic effect and aim exclusively to recover the polarized sky signals from the data. For this purpose, we study and compare three approaches corresponding to varying degrees of knowledge and assumptions about the atmospheric signal: 1. A minimal model where we only assume that the atmospheric contamination is the same for both detectors of any orthogonal pair in the focal plane (c.f. section III A). 2. A statistical model based on the down-weighting approach where we assume that the combined noise covariance Eq. (12), accounting for both instrumental noise and atmospheric fluctuations, can be described as stationary over defined time intervals, with the noise power spectral density (PSD) derived from the TOD itself (c.f. section III B). 3. An idealized case where we assume that the atmospheric contribution is perfectly known. In the formalism of section II A 2, this corresponds to Σa →0 and a → ̄a, and the ML solution is simply the standard estimate in the absence of any atmospheric contamination, since it can be readily subtracted: ˆsideal ≡(P ⊤N -1P)-1P ⊤N -1datm-free. (17) While this case is unattainable in practice, it does provide a valuable statistical benchmark. Before going any further, we note that, in the presence of a rotating HWP, specialized mapmaking approaches based on explicit, "lock-in" demodulation of the data can recover Q and U Stokes parameters efficiently (see 5 e.g., [28, 47, 48] for more details) by isolating the polarization information from low-frequency noise in the time streams. In principle, this method also has the advantage that polarization can be reconstructed from individual detectors in isolation (not relying on pairs of orthogonal antennas), which can potentially help when dealing with differential systematics that affect the two antennas of a pair differently, such as beam mismatch or pointing errors. However, the combination of the information from multiple detectors is necessary, albeit at a later stage of the analysis, to produce statistically optimal results. Discussions about demodulation and its interaction with multiple detectors can be found in [51, 52]. In this work we consider an alternative approach and do not assume any data preprocessing. Instead, an effective demodulation is only performed as an integral part of the mapmaking procedure as encoded in the pointing matrix as Eqs. (2) and (3) and thus its impact on the sky signal is correctly included and corrected for. This also allows us to use the same formalism and implementation for both HWP and non-HWP cases, and to compare the two approaches more straightforwardly. A. Minimal model A fundamental challenge in atmospheric mitigation is the unpredictable and complex nature of its emission. If the atmosphere were fixed in Earth-bound coordinates, modeling its fluctuations over moderate time scales on a two-dimensional map of pixels fixed to this coordinate system (different from sky coordinates) could be a promising avenue towards its characterization and removal, given that it typically only varies on rather large angular scales. In practice, however, the presence of wind means that the atmosphere is potentially only "fixed" in coordinates moving with the wind. As the wind's speed is a priori unknown, the effect of the atmosphere can not be easily captured by a linear model such as Eq. (1). Instead, to work around this problem we may want to consider a construction allowing for maximal flexibility in the atmospheric modeling. An extreme example would be a model with as many degrees of freedom as measured data points, i.e., T = I (the identity matrix) with dimensions given by the total number of samples. This obviously would exceed the number of available constraints, making the problem degenerate. However, we can break this degeneracy if the telescope is equipped with dualpolarization detectors, making the only assumption that unpolarized signals are measured identically by orthogonal antennas in a pair, and assuming no priors (Σ-1 a = 0) on the corresponding amplitudes, thus making a minimal number of assumptions on the atmospheric signal. Every column of T now has two non-zero entries, corresponding to the two detectors of a pair, such that T is conceptually two identity matrices stacked on top of each other for every focal plane pixel: T = I I . (18) As discussed in section II A 2, this yields a sky estimate that is free of signals captured by T. Note that this choice of T captures sky total intensity as well since TPI = PI, so only the polarized sky components are estimated. In Appendix A, we show that this minimal model leads in fact to the well-known pair differencing estimator, which we now define. For any pair of orthogonal detectors denoted by ∥and ⊥subscripts, we are free to transform the TOD into sum and difference data, d+ d- ≡1 2 d∥+ d⊥ d∥-d⊥ . (19) We emphasize that this transformation does not by itself entail any loss of information, as it is represented by the block matrix X ≡1 2 I I I -I with inverse X-1 = 2X. (20) Any estimate previously expressed in terms of d could be equivalently expressed in terms of the transformed data set, Xd. However, since the difference data, d-, is free of atmospheric signal (ideally) and contains all the polarization information, it seems natural to consider it an independent data vector and process it by itself to recover the polarization signal. We thus call pair differencing (PD) estimate the following Q/U estimator, computed from d-, and discarding d+, ˆspd ≡ˆspd QU ≡(P ⊤ -N -1 -P-)-1P ⊤ -N -1 -d-, (21) where N-≡ n-n⊤ - (22) is the covariance of the difference noise, and P-≡1 2(P QU ∥ -P QU ⊥ ) (23) is the relevant part of the pointing matrix for the difference time stream. As part of the available information is discarded, it would seem that this approach is suboptimal. However, we showed above that it results from a rigorous maximum-likelihood derivation and is therefore as optimal as only possible given the assumptions made about the atmospheric signal. We discuss this further in Section IV. In practice, we estimate N-in Eq. (21) from the data, and this may differ from the true covariance. Our implementation of the PD estimator is then given by ˆspd W ≡(P ⊤ -WP-)-1P ⊤ -Wd-. (24) This expression is manifestly unbiased, but may not always reach the same precision as Eq. (21). The general expression for the covariance of the estimated maps is: N pd QU ≡(P ⊤ -WP-)-1 P ⊤ -WN-WP-(P ⊤ -WP-)-1. (25) 6 B. Statistical down-weighting For comparison, we also consider an alternative approach motivated by Eq. (13), and treat the atmospheric signal as a stochastic contribution, assuming that on average it vanishes, i.e., ̄a = 0. In doing so we accept the fact that the atmosphere will contribute to our final sky signal estimates, however, we hope to minimize its impact by appropriately down-weighting the modes where its contribution is significant, and then to be able to account for the extra uncertainty in the final map error budget. Reaching both of these goals requires finding a suitable description of the atmospheric signal covariance. It has been shown that distinguishing atmospheric modes from systematics in the data involves somewhat sophisticated TOD processing [53]. Thus, successful models may require advanced constructions (see e.g. [18]), in particular if one aims to recover the total intensity of the CMB. Consequently, such models could be numerically complex, and not straightforward to validate. As our focus is solely on polarization, we expect, however, that simpler and less demanding models could be already sufficient. In particular, we will neglect noise correlations that are induced by the atmosphere between different detectors [31]. Our DW estimator then reads ˆsdw ≡(P ⊤WP)-1P ⊤Wd. (26) Contrary to the PD estimator (24), it uses the full data set d as an input, and includes all three Stokes parameters in the pointing matrices and the recovered map. While we could marginalize right away over the total intensity part, in practice, the resulting equations are more cumbersome to implement and there is little advantage in doing so, if any, from the perspective of numerical efficiency. We thus simply drop the total intensity map at the end of the mapmaking procedure. We note that the inversion of the system matrix in Eqs. (24) and (26) may fail if some pixels are not observed with sufficient redundancy. In particular, to estimate the Q and U amplitudes in a given pixel, it is necessary to observe it with at least two sufficiently different "crossing angles", i.e., effective antenna orientations (taking HWP rotation into account). Whenever these conditions are not met, the corresponding samples need to be flagged and excluded from the reconstruction, along with those contaminated by glitches, or acquired during unstable scanning intervals. The treatment of such gaps in the time stream requires dedicated procedures [16]. The weights W adopted in Eq. (26) assume no detector-detector correlations. They are thus blockdiagonal, with each detector-specific block given by the PSD of the detector time stream. While this is probably highly suboptimal for the recovery of the total intensity maps, one may hope that it is sufficient for polarization. This is one of the questions we investigate in the following. The error covariance of the estimated maps is then given by N dw ≡(P ⊤WP)-1 P ⊤W e NWP (P ⊤WP)-1, (27) where e N is the covariance of the instrumental noise and the atmospheric signal combined together, c.f. Eq. (12). As we target polarization maps here, only the Q/U blocks of N dw are of actual interest. One can write down an analytic expression for them, however, it is long and rather complex and thus not very enlightening. Instead, let us try and draw some conclusions directly from Eq. (27). The estimated maps are minimum variance if W = e N -1, and any other choice will unavoidably lead to increased uncertainty. As the atmosphere contribution is unpolarized, it only affects the I × {I, Q, U} blocks of the central product P ⊤W e NWP. Its impact on the polarization maps can be mitigated if the I × Q/U blocks of both P ⊤WP and P ⊤W e NWP vanish. This latter requirement ensures that the atmospheric signal is well separated from the polarization signal, and the former ensures that the variance due to atmosphere is never folded back in the polarization maps' covariance. There is a potential huge benefit to ensuring that those two conditions are met, and in the following we discuss how to achieve this by adapting mapmaking choices to the hardware features discussed earlier. Nonetheless, we also show that those conditions are not sufficient to ensure the optimality of the polarization estimates, and that weighting may need to be adjusted accordingly. C. Comparison of both approaches We present here a comparison of the discussed mapmaking approaches, pair differencing (PD) and downweighting (DW) in order to demonstrate salient features of their performance. 1. Simulations As we focus on the recovery of large-scale polarization, we consider an SO-like small aperture telescope (SAT) observing at 90 GHz, equipped with pairs of orthogonal detectors and a rotating HWP. The time-domain simulations are generated using the TOAST 31 framework and the sotodlib2 library. The reduction of the data into HEALPix maps [54] is performed with the MAPPRAISER3 library [20]. The resolution parameter is Nside = 512. The simulated instrument scans a region of the sky south of the Galactic plane, similar to the "SAT South field" described in [55]. Data is acquired with a sample rate of 37 Hz. The HWP rotation frequency is set 1 https://github.com/hpc4cmb/toast/tree/toast3 2 https://github.com/simonsobs/sotodlib 3 https://github.com/B3Dcmb/midapack 7 to 120 rpm, and can be turned off. The dataset consists of ∼1 h long constant-elevation scans (CESs) taken over the course of one day. To reduce computational requirements, we only keep one in four focal plane pixels, resulting in roughly 700 detectors. The total size of the data set (including TOD, pointing information, etc.) is measured to be around 240 GB. As the mapmaking procedure that we use is linear, we directly obtain noise maps by only simulating realistic atmosphere signal and instrumental noise. There are no systematics (gain errors, beam mismatches, etc.). The atmosphere simulation in TOAST is based on a physical model of the atmosphere [31], and uses a threedimensional structure of Kolmogorov turbulence moving through the telescope's field of view, with a distribution of temperature, water vapor, and wind speed. On the other hand, instrumental noise streams (uncorrelated between detectors) are simulated according to a 1/f model S(f) = σ2 1 + f fknee α , (28) with nominal parameter values typical of detector noise, ̄α = -1.0 and ̄fknee = 50 mHz. (29) The nominal value of the noise equivalent temperature, σ, does not matter since we will be always be looking at results relative to a reference case. In the nominal case, all detectors get the same instrumental noise parameters. Alternatively, we added the possibility of varying parameters across the focal plane. In this case, each parameter is perturbed around its nominal value by a random multiplicative sample ∼N(1, z2). We will usually quote the dispersion z in percent. A different set of perturbations is applied for each CES and detector. 2. Results Using these simulations, we perform mapmaking runs following the three models presented earlier, recovering Q and U noise maps. As a figure of merit, we compute for each case the noise power spectra, i.e., the angular power spectra of the noise maps. We then compare them with reference spectra, i.e., the noise spectra of the ideal runs, showed in Figure 1. All spectra are obtained using the NaMaster4 package [56], with a bin size of 10, and keeping only bins whose center multipole lb ∈[30, 500]. In addition, for each pixel p we also compute the diagonal block of the white noise covariance, ∆≡(P ⊤diag N -1P)-1. (30) 4 https://github.com/LSSTDESC/NaMaster 100 200 300 400 500 Multipole 0.5 1.0 1.5 2.0 2.5 3.0 N [ K2] 1e 5 with HWP without HWP Figure 1. Reference BB noise power spectra obtained from the ideal reconstruction (atmosphere-free, only instrumental noise). These blocks have dimensions 3 × 3 for the downweighting approach and 2 × 2 for pair differencing, corresponding to the number of relevant Stokes parameters. For each pixel they are explicitly given by,5 (∆-1 p )ss′ = X t PptsN -1 tt Ptps′, (31) where the subscripts s, s′ ∈{I, Q, U} stand for a Stokes parameter. These blocks quantify the coupling of the noise map between the different estimated Stokes parameters. For the cases studied here, we display the elements of the blocks in Figure 2. As discussed in section III B, in DW mapmaking, off-diagonal elements IQ and IU are particularly important as they determine the contribution of the atmospheric signal variance to the polarization maps. When noise weights are fitted independently for each detector, the values of these elements, though centered at zero, scatter significantly around it. As expected, the scatter is particularly pronounced without HWP, however, in both cases, this is bound to increase the uncertainty of the derived maps, given the magnitude of the atmospheric signal. While the effect can be further limited by imposing strict selection criteria on retained pixels, this would unavoidably lead to a loss of observed sky fraction, thereby enhancing the sample variance on the largest angular scales. A more fruitful approach would be to find a way to suppress these off-diagonal elements. This can be done in a robust way through enforcing the noise weights for 5 Note that Ptps = 0 whenever p is not the sky pixel observed at time t. 8 1 5 10 101 102 103 104 II 0.02 0.01 0.00 0.01 0.02 IQ 0.02 0.01 0.00 0.01 0.02 IU 2 10 20 30 QQ 4 2 0 2 4 QU 2 10 20 30 UU DW - HWP DW - No HWP PD - HWP PD - No HWP Figure 2. Comparison of white noise covariance block values for DW/PD and HWP/no-HWP cases. Nominal values of the instrumental noise parameters are used. Noise weights are fitted to the data independently for each detector. Each panel of this 3 × 3 histogram matrix shows the distribution of values of the white noise covariance blocks (31) across the map. These blocks are normalized by the variance of the best constrained pixel, such that their smallest values across the map are 1 for II and 2 for QQ and UU. two detectors in each pair to be the same. While this will make the weighting of each detector stream less optimal, the overall gain may still be appreciable. In the following we confront these conclusions with the results of our simulations. The resulting BB noise spectra are shown in Figs. 3 and 4 (EE is not shown as the results are nearly identical). The key observations are as follows. For PD, Figure 3, they coincide perfectly with those of the ideal case if the noise properties of both detectors in each pair are the same. This is independent from the noise differences between the pairs as indeed expected given that these are optimally weighted in the procedure (see [57] and section IV). If, on the contrary, the noise properties of detectors in a pair differ, PD yields noisier than ideal maps. For the z = 10 % dispersion of parameters in the simulations, the precision loss seems to be ∼2 %. We investigate this issue in more depth in section IV. The main impact of having a HWP is that the noise increase is essentially constant across all multipoles, all the way to the largest angular scales considered. If no HWP is used, the increase is more pronounced at the low end of the multipole range, all the more so when noise parameters vary, with the additional large scale variance becoming apparent at l∼100 ÷ 150, respectively. However, the total increase of power at low multipoles over the white noise plateau is not more than ∼30 %. We note that in all these cases we used the same set of map pixels, even if in cases without HWP these are generally less well-conditioned than in cases with HWP, as shown in 9 100 200 300 400 500 Multipole 0.96 0.98 1.00 1.02 1.04 1.06 N /Nref With HWP 100 200 300 400 500 Multipole 0.95 1.00 1.05 1.10 1.15 1.20 1.25 1.30 Without HWP Reference Nominal Perturbed Perturbed pairs Figure 3. Comparison of BB noise power spectra using pair differencing, with (left) vs. without (right) HWP rotation. Different setups are represented with combinations of markers and colors. "Nominal" (blue) corresponds to the nominal instrumental noise model. "Perturbed" (orange) has instrumental noise parameters drawn with a dispersion of 10 % around nominal values. "Perturbed pairs" (green) is a case where perturbations are applied in such a way that detectors of one pair always have the same parameters, i.e., any variations are only between different detector pairs. For each case, we plot the ratio of the noise power spectrum over that of the corresponding ideal (reference) case. Figure 2. As far as DW mapmaking is concerned, the situation is somewhat more complex. Let us start by inspecting the cases where HWP rotation is enabled (left panel of Figure 4). Even for the simplest case of nominal noise parameters, the noise spectra do not exactly match those of the ideal case. The difference is small but persistent and it does not disappear even when we symmetrize the weights within each detector pair, as shown by the blue (nominal weighting) and green curves (symmetrized weighting), distinct from the black dashed line. This effect goes away only if we use noise weights with a lower fknee, e.g., similar to instrumental noise, Eq. (29), as represented by thin diamond-shaped markers. On the other hand, if we simulate different noise levels (and use corresponding weights) for each detector, we see significant increase of the noise (orange curve). We interpret these observations as follows. When noise weights are (nearly) identical within each pair of detectors, the atmosphere does not contribute directly to the variance of the polarization maps. Indeed, the mapmaking operator performs implicit pair-differencing while computing P ⊤Wd. Nevertheless, since the weights account for atmospheric contamination but neglect that it is highly correlated between detectors, they result in suboptimal maps. The effect is small owing to the HWP modulation, which mitigates this low frequency noise by shifting the polarization signal well above the atmospheric fknee. Still, we find an excess ∼1 % variance at the power spectrum level, but this goes away if we decrease sufficiently the assumed fknee such that the signal in the polarization band is weighted uniformly. The method then matches the performance of the reference and PD cases. These conclusions are fully consistent with the noise spectra obtained in the cases without HWP (right panel of Figure 4). The impact of atmosphere is however generally enhanced by the presence of more significant leakage of the atmosphere into polarization, as expected from Figure 2. In the case of detector-dependent noise parameters (orange curve), the mismatch of weights within each detector pair further prevents the implicit differencing mentioned above, leading to huge atmospheric noise residuals dominating on all angular scales by orders of magnitude. If we now enforce symmetric weights (red curve), the noise spectra are suppressed by 3 to 4 orders of magnitude and become comparable to those from cases with nominal noise parameters (blue and green curves). Indeed, the corresponding 3 × 3 blocks and the noise weights are now expected to be similar. The "symmetrized" cases show that the remaining extra noise power is purely due to suboptimal weighting, since the atmosphere is removed as a result of implicit differencing. The noise weights in those cases include the atmospheric 1/f and therefore give much less weight to Fourier modes 10 2 3 N /Nref With HWP 100 200 300 400 500 Multipole 1.00 1.05 1.10 N /Nref 100 200 300 400 500 Multipole 100 101 102 103 104 105 106 107 Without HWP Reference Nominal Perturbed Nominal + symmetric weights Perturbed + symmetric weights Nominal + instrumental weights Figure 4. Comparison of BB noise power spectra using down-weighting, with (left) vs. without (right) HWP rotation. Different setups are represented with combinations of markers and colors. "Nominal" and "Perturbed" cases are the same as in Figure 3. "Nominal + symmetric weights" is a case where instrumental noise parameters are nominal, and moreover the assumed noise weights are forced to be symmetric, i.e., the same for both detectors of a pair. "Perturbed + symmetric weights" is the same idea but with the perturbed noise parameters. "Nominal + instrumental weights" has both nominal instrumental parameters and noise weights following that model (not fitted to the data which contains atmosphere). For each case, we plot the ratio of the noise power spectrum over that of the corresponding ideal (reference) case. below the atmospheric fknee, where most of the polarization signal resides in the absence of HWP modulation. This in turn leads to significantly increased variance at large multipoles. Like before, we can reduce this uncertainty to the reference level by simply taking noise weights as if they were given by the instrumental noise only (diamond markers). Alternately, we could improve the noise model by allowing for correlations between detectors in the same focal plane pixel. We thus conclude that in these conditions the most beneficial setup for down-weighting is to assume identical weights within each pair of detectors and derive them assuming the properties of the instrumental noise. But then the best this approach can do is to reproduce the results of pair differencing, the latter being in addition more numerically efficient and robust. We expect this conclusion to hold for as long as there is no better and more reliable model of the atmospheric signal available, which would allow DW to benefit from optimal propagation of the instrumental noise, as in the ideal case, while keeping the atmospheric leakage under control. This is however a tall order as argued in section III A). Meanwhile, the PD approach, thanks to its independence from the atmosphere model, may offer a competitive and practical option. However, as it is, if we are ready to accept a small sensitivity hit, then for the experiments with continuously rotating HWP and pairs of orthogonal detectors, the down-weighting approach can still achieve very good performance if only symmetric weights are adopted. Moreover, if the atmospheric signal is somehow suppressed prior to actual mapmaking, then this symmetrization may not even be needed, allowing the method to benefit from more optimal weighting of individual detectors. This idea is implemented, for instance, in demodulation methods [28, 52, 58]. IV. SENSITIVITY ASSESSMENT OF THE PAIR DIFFERENCING APPROACH In this section, we address the question of sensitivity (or precision) of the pair differencing method. We argued in section III A that PD is an optimal polarization estimator under the single assumption that unpolarized signals are measured identically by detectors in a pair. However, we should in general expect it to yield a lower signal-to-noise ratio than the unattainable ideal case (17) where the atmospheric contribution is perfectly known and subtracted from the data. Given the very general assumption behind the PD derivation, we might even expect a significant loss of precision: the number of templates that it implicitly marginalizes over, and thus 11 the number of additional degrees of freedom, is as large as half the length of the total combination of data from all the detectors. However, we already saw in several specific examples of the previous section that the PD estimator is ideal when both detectors in a pair have identical noise properties. In Appendix B, we show rigorously why this is the case, and obtain the following expression for the polarized part of the ideal estimator: ˆsid QU = ˆspd QU + (P ⊤ -Π--P-)-1P ⊤ -Π-+n+ (32) where Π±± is the (±, ±) block of the inverse transformed noise covariance matrix (XNX)-1. Whenever the noise properties of the two orthogonal detectors in a pair are identical, we have N-+ = 0 and thus Π-+ = 0 and ˆsid QU = ˆspd QU. As surprising as it may seem, this result merely emphasizes the fact that using all available data only improves on PD by modeling the noise cross-correlations between detectors of a same pair. In this case those are zero, such that the marginalization over the additional templates has no impact on the statistical precision of the map estimate. We are now interested in quantifying the reduction of precision (excess variance) in PD relative to the ideal IQU estimator computed from all available data in more general cases, specifically when the noise properties of detectors in a pair can differ. To examine this question, we now use atmosphere-free simulations, assuming that atmospheric contamination in the absence of (other) systematic effects has no impact on the recovered Q and U maps. In these highly idealized conditions, our DW implementation is actually ideal since instrumental noise is simulated independently for each detector and is thus fully accounted for in the noise weights. It thus provides a meaningful statistical benchmark to assess the performance of pair differencing. We note that in more realistic conditions, the "optimality" of the DW approach as formulated in section II A 2 rests on several assumptions such as the Gaussian distribution of the atmospheric amplitudes a, which may not be a good model in general. Pair differencing does not rely on such assumptions. A. Simulations We use simulations following almost the same setup as before, section III C 1. A small additional feature is the possibility to generate white instrumental noise instead of 1/f. This is compatible with the detector-specific perturbations described previously. The other difference is a longer observing schedule covering not one, but ten observing days, namely the first day of each month except February and March, when weather conditions are typically degraded. This schedule captures variations in the scanning pattern due to Sun and Moon avoidance across the year. In total, those ten days amount to 166 CESs, or approximately 137 h Normalized hit map 0 1 Figure 5. Normalized hit map of the extended simulation (10 days of observation) in equatorial coordinates. The corresponding sky fraction is about twenty percent. of observation. The corresponding normalized hit map, showing the sky coverage of the simulation, is plotted in Figure 5. B. Variance comparison at the map level in the white noise case The noise properties of the maps are contained in the pixel covariance matrix, (P ⊤N -1P)-1. This matrix is in general not accessible because of computational limitations. However, when the instrumental noise is white, N is diagonal, and the pixel covariance is simply the white noise covariance ∆, Eq. (30). In Appendix C, we compute analytically the excess variance of the pair differencing estimate compared to the ideal estimate, considering a single detector pair, white instrumental noise, and noise levels independent between detectors. Since PD does not take into account potentially different noise levels inside this single pair (but would optimally weight noise between different pairs), the excess variance in this case is simply a function of the relative difference of noise levels in the pair, ε ≡ σ2 ∥-σ2 ⊥ σ2 ∥+ σ2 ⊥ ∈(-1, 1), (33) which is equal to zero when detectors have the same noise level and ±1 when one of them has zero (or infinite) noise. The polarization covariance matrices in pixel domain for this single pair are then related by ∆id = (1 -ε2)∆pd. (34) This confirms that the ideal estimate has lower variance than PD unless the noise levels are the same in both detectors. In general, we define the relative excess variance as ζ ≡(∆pd -∆id)/∆id. (35) 12 It is a random variable as it depends on the detector noise levels. From Eq. (34), we see that ζ is the same in every map pixel for a single pair of detectors with given noise levels. This will be our theoretical expectation for the excess variance: ζtheory ≡ ε2 1 -ε2 ⩾0. (36) This may however not be true in the general case of multiple detector pairs because different focal plane pixels have different sky footprints (SATs typically have a field of view of several tens of degrees). Additionally, the noise level of each detector can vary during the observing season. Using the simulations described earlier, we have access to the actual ζ in each pixel of the map, therefore we can study how the analytical result (36) generalizes to the case of multiple detector pairs, and multiple independent CESs. Here are the two main takeaways: • the empirical average of ζtheory over detector pairs and CESs is a good proxy for the mathematical expectation over all possible values of ε; • the average ⟨ζ⟩pixels across the map also follows this expectation. The results can be visualized in Figure 6, which shows the relative excess variance as a function of the dispersion of noise levels z ∈{0.1 %, 1 %, 10 %, 20 %}. Unsurprisingly, we observe that ζ grows with z. This illustrates how weighting individual detectors optimally in the ideal estimator takes advantage of the less noisy individual detectors to improve the map quality, while PD does not capitalize on this. As z increases, there are more and more pairs with a large discrepancy in noise levels, which intensifies the effect. The agreement between empirical expectation and measured excess variance is useful from a practical point of view: it shows that we can estimate how the noise level of a map increases with respect to the ideal case using only the detector noise levels, which can be derived from the data directly, without going through the whole mapmaking procedure. The 99th percentile of the pixel distribution of ζ provides a pessimistic estimate that would bound the excess variance for 99 % of the map area, and at the same time account for deviations between the actual values and the empirical estimate from the detector noise levels. Overall, the excess variance is (i) under 0.1 % (0.5 % pessimistic) when z ≲1 % (ii) ∼2 % (2.5 % pessimistic) when z = 10 % (iii) ∼10 % (15 % pessimistic) when z = 20 %. C. Impact of variable instrumental noise parameters on the determination of the tensor-to-scalar ratio We now turn to the more realistic case of instrumental 1/f noise with variations from detector to detector as described in section III C 1. Contrary to the previous section, where we explored the white noise case, we do not have direct access to the exact map-space covariance matrix: it is a dense object because of temporal correlations in the noise and can not in general be computed. Therefore, we evaluate the loss of statistical power at the angular power spectrum level, using results from 25 different random noise realizations. Figure 7 shows the increase in PD noise spectra with respect to the ideal case, averaged over the 25 realizations, along with the standard deviation. It illustrates the added value of a rotating HWP in mitigating largescale correlated noise not rejected by the differencing operation. While the two top panels show that the excess noise is "white" (does not depend on l) in the presence of a HWP, the noise curves in the bottom panels (no HWP) exhibit a 1/l-like behavior, with an important dependence on the variability of noise parameters in the detector pairs. Table I summarizes those observations by quoting the Fisher uncertainty σ(r = 0) on the tensor-to-scalar ratio r, computed using σ(r = 0)-2 ≈fsky 2 × X bins b δl(2lb + 1) CBB,prim lb |r=1 CBB,lens lb + N BB lb !2 (37) where N BB lb is the average noise power spectrum over the 25 noise realizations, and δl= 10 stands for the bin size. The table also gives for each value a 1σ confidence interval computed by evaluating Eq. (37) at the average value of the noise spectrum, plus/minus the standard deviation over the 25 noise realizations. As expected, the increase of uncertainty is more pronounced when no HWP is used, as the power spectrum estimates at low multipoles are more noisy, and this is where most of the signal of interest is. For typical dispersion values z ∼10 % -20 %, the uncertainty on r could increase by a few percent when using a HWP, and as much as 10 % when not. One may notice two potentially surprising features in Table I. First, the absolute uncertainty on r in the case with HWP actually decreases with higher values of z. This is because given the implementation of the perturbed noise model, having larger dispersion of the noise parameters around their nominal values doesn't necessarily mean that the detector array loses sensitivity overall. In the case without HWP, this effect is buried under the excess noise at low multipoles. The second feature is that the PD estimator seems to perform better than the ideal estimator, in the no HWP case, for low z values. This seems inconsistent with the fact that, from first principles, the ideal estimate should have minimal variance. It turns out that this is driven by the very low multipole bins (l< 50). However, inspecting the scatter of their values across the 25 realizations reveals that each bin is statistically consistent with the 13 0.1% 1% 10% 20% Scatter around nominal NET 0% 0.1% 1% 5% 10% 20% Relative variance increase true expectation empirical expectation Q pixels (1-99th percentiles) U pixels (1-99th percentiles) Figure 6. Relative excess variance ζ in the white noise case as a function of the dispersion of noise levels z. The mathematical expectation (solid black) is computed numerically following Eqs. (36) and (33) after drawing samples of σ2 ∥and σ2 ⊥from the same Gaussian distribution as the noise levels in the simulation. The empirical average (red crosses) over the detector pairs and CESs shows good agreement with the expectation at the four simulated values, z ∈{0.1 %, 1 %, 10 %, 20 %}. Error bars represent the 1st to 99th percentile of the Q (blue) and U (orange) pixel distribution of ζ, with the triangle marks showing the average value, in excellent agreement between the two Stokes parameters. Blue and orange markers are slightly shifted left and right in order to improve visibility. With HWP Without HWP Dispersion σ(r)id[×10-3] σ(r)pd[×10-3] Increase [%] σ(r)id[×10-3] σ(r)pd[×10-3] Increase [%] 0.1 1.378 ± 0.071 1.378 ± 0.071 0.001 ± 0.001 3.786 ± 0.489 3.781 ± 0.487 -0.123 ± 0.055 1.0 1.378 ± 0.071 1.378 ± 0.071 0.017 ± 0.008 3.789 ± 0.489 3.784 ± 0.487 -0.111 ± 0.048 10.0 1.363 ± 0.067 1.375 ± 0.070 0.889 ± 0.152 3.879 ± 0.496 3.947 ± 0.507 1.773 ± 0.048 20.0 1.309 ± 0.059 1.361 ± 0.068 3.977 ± 0.526 4.075 ± 0.507 4.448 ± 0.570 9.161 ± 0.418 Table I. Fisher uncertainty from 25 noise realizations on the tensor-to-scalar ratio r for the ideal and PD estimators, assuming the true r = 0, with and without a rotating HWP, for dispersion z ∈{0.1 %, 1 %, 10 %, 20 %} of the noise parameters. In addition to the absolute numbers, the relative increase of uncertainty inherent to the PD estimator is given in percent. ideal case. We therefore consider that this is a statistical fluke and/or caused by uncontrollable numerical effects that could be due too many different factors, such as imperfections in the estimation of the noise weights, estimation of the pseudo-spectra, etc. Finally, we emphasize that this is anyway a very small effect (about 0.1 %) and does not affect our conclusions. V. PAIR-DIFFERENCING IN THE PRESENCE OF INSTRUMENTAL SYSTEMATIC EFFECTS In previous sections, we demonstrated two key points about pair-differencing. First, it provides estimates of the polarized sky that are as optimal as possible when arbitrary, unpolarized common modes are present. Second, in practical applications, PD maps nearly have the best possible precision given the sensitivity of the instrument itself, i.e., as if atmospheric correlated noise were not present. While these results are promising, they assumed rather idealized conditions, without instrumental systematics. All mapmaking methods are affected by those, though some handle certain types of systematics better than others. The specific impact may depend heavily on the particular systematic effect and instrument involved, requiring case-by-case analysis. The purpose of this section is not to replace such detailed studies, but rather to show that PD could still perform well, and deserves 14 Figure 7. Increase of noise power in PD maps relative to the ideal high-lwhite noise floor, computed as an average of the 10 % largest multipole bins. The dependence on the dispersion z is encoded by color, and the markers and error bars respectively show the average and standard deviation over 25 independent instrumental noise realizations. Top panels have a rotating HWP, and bottom panels do not. Left (resp. right) panels show the EE (resp. BB) spectra. A threshold of 10 % of the maximum hit count is applied to the maps, before apodizing them on a 10◦radius. consideration in real-world analysis pipelines. We consider here the simple example of gain mismatch between the two detectors of a pair [11, 24, 59]. This undermines the ability of PD to cleanly remove signals common to both detectors, a key strength of the method. Still, given that relative calibration factors in real-world experiments can typically be determined with percent level accuracy [32], we expect most of the atmospheric contamination to be rejected by the differencing operation. Any residual leakage is then dealt with by using appropriate weights reflecting the additional 1/f noise. To investigate the impact of gain mismatch on the recovered PD maps, we use the setup described in section III C 1. For each detector pair, we multiply the atmosphere contribution to the data of the first detector by 1 + δ/2 and that of the second one by 1 -δ/2, where δ is drawn randomly around zero with some dispersion (either 0.1 or 1 %), and independently for each detector pair. After forming the difference streams, a 1/f model, Eq. (28), is fitted to each of them in order to account for both the instrumental noise and residual atmosphere. The maps are then estimated as in Eq. (24). In Figure 8, left panel, we show that HWP modulation still enables a precise recovery of the polarization information. Indeed, the additional variance at large angular scales caused by the leakage seems of comparable scale to the one due to a variation of instrumental noise parameters across the focal plane (c.f. section IV). When no 15 100 200 300 400 500 Multipole 0.950 0.975 1.000 1.025 1.050 1.075 1.100 1.125 1.150 N /Nref HWP 100 200 300 400 500 Multipole 100 101 102 103 104 No HWP PD nominal PD perturbed PD 0.1% gain error PD 1% gain error Figure 8. Comparison of BB noise spectra using pair differencing, with (left) and without (right) HWP rotation. Spectra are plotted relative to a reference case with no systematics and instrumental noise parameters at there nominal values. The "perturbed" curves (blue) correspond to a typical 10 % variation around the nominal values. Finally, cases with 0.1 % (resp. 1 %) gain errors are plotted in orange (resp. green). HWP is used (right panel), the situation is different. Atmospheric leakage increases the variance by roughly two to four orders of magnitude at low multipoles (l< 100) depending on the level of gain mismatch. It is likely that better modeling of the residual atmospheric noise, in particular inclusion of the correlations between detector pairs, could lead to more precise recovery of the polarization signal in both cases. Nonetheless, we conclude overall that even in the presence of such residuals, owing to the presence of a continuously rotating HWP, the simple PD technique, as is, is capable of delivering maps of quality comparable to the best achievable while making minimal assumptions about atmospheric contamination. More complex sources of systematic effects will unavoidably affect the data of real experiments and impact PD performance, for instance slightly different or asymmetric beams [24]. We expect that the powerful combination of orthogonal detector pairs and HWP will continue helping to mitigate many of them, as long as their main consequence is the atmospheric, or more generally any unpolarized, signal leakage to the difference data. Other classes of effects may instead require extensions of the formalism, for example the introduction of additional degrees of freedom to model them, e.g., [11, 60], thus potentially also affecting the overall precision of the recovered maps. This needs however to be studied in detail, case-by-case, for PD as well as any other mapmaking procedure. VI. CONCLUSION In this work, we have compared two mapmaking approaches based on their ability to precisely reconstruct CMB polarization maps from ground-based experiments in the presence of atmospheric contamination. In particular, we have demonstrated that pair-differencing, a technique that relies on the rejection of common-mode noise using pairs of orthogonally polarized detectors, emerges naturally as a maximum likelihood estimator under the assumption of perfectly correlated atmospheric emission between detectors in a pair, without explicit modeling of the contaminant signal. This is a key feature given the dynamic and turbulent nature of atmospheric fluctuations, which makes them difficult to characterize and model accurately. Our results show that PD achieves close to ideal sensitivity to the inflationary tensor-to-scalar ratio r in the presence of realistic variations of instrumental noise properties across the focal plane, despite using only half of the available data and thus neglecting cross-correlations between detectors of a pair. In particular, we find only a small (few percent) enlargement of the uncertainty on r compared to an idealized reconstruction from atmosphere-free data, when a rotating half-wave plate is used. This degradation increases to about ten percent in the absence of a HWP. Compared to usual down-weighting approaches, which include both instrumental and atmospheric noise in the 16 noise weights, PD offers a simpler and more numerically stable alternative, particularly when detectors in a pair have similar noise properties. Moreover, in our tests we find that DW typically has to perform an implicit differencing operation, by assigning the same weights to both detectors in a pair, to reach competitive performance with PD. Since the atmosphere is often used as a beam-filling calibrator to compute relative gain factors, we expect that this remains the case in practice, unless a very accurate model of the atmospheric noise is available, allowing for its removal from the data. We also note that the inclusion of cross-correlations between different detectors in the noise model should allow DW to better mitigate atmospheric contamination while taking into account variations within detector pairs. We conclude that for the recovery of polarization maps, and experiments featuring both a HWP and pairs of orthogonal detectors, PD is a numerically efficient method that is surprisingly good at mitigating unpolarized atmospheric contamination in a model-independent way, while delivering nearly optimal performance in terms of statistical uncertainty of the recovered maps, and potentially being as vulnerable as any other method to some common systematic effects. Future work should presumably explore hybrid strategies and systematic mitigation schemes to further enhance the robustness and accuracy of polarization reconstruction. ACKNOWLEDGMENTS The authors would like to thank Hamza El Bouhargani, Wuhyun Sohn and the SciPol team for useful discussions. SB acknowledges partial PhD funding from the DIM ORIGINES program (project RADIATION). This work was supported by the SCIPOL project6, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (PI: Josquin Errard, Grant agreement No. 101044073). The authors also benefited from the European Union's Horizon 2020 research and innovation program under grant agreement No. 101007633 CMB-Inflate. This work was granted access to the HPC resources of IDRIS under the allocation 2024-AD010414161R2 and 2025-AD010416919R1 made by GENCI. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a -ERCAP 0034243 project. Results presented in this paper have made use of the following packages: NaMaster [56], healpy [61], NumPy [62], SciPy [63], Matplotlib [64], and seaborn [65]. 6 https://scipol.in2p3.fr Appendix A: Pair differencing as template marginalization Assume a data model where two orthogonal detectors, denoted by ∥and ⊥, measure the same, unknown, total intensity signal a in addition to sky polarization sQU: d = d∥ d⊥ = T T a + PQU -PQU sQU + n. (A1) Using the pair transformation X = 1 2 I I I -I from Eq.(20), the data model is rewritten as Xd = d+ d- = T 0 |{z} ≡T a + 0 PQU | {z } ≡P sQU + Xn. (A2) The sky polarization estimate that deprojects the total intensity signal (and corresponds to marginalizing over the total intensity amplitudes a) is ˆsQU = (P⊤FP)-1P⊤F(Xd) (A3a) with F ≡C -CT (T ⊤CT )-1T ⊤C (A3b) C ≡(XNX)-1. (A3c) We want to minimize the assumptions on the total intensity signal. This amounts to deprojecting anything that is measured the same way by both detectors, i.e., taking the original T = I. In particular, we do not assume that this contribution can be pixelized or has any predictable structure. We now show that this leads to the pair differencing estimator. Given the structure of P = PQU[ 0 I ] and T = [ I 0 ], defined in Eq. (A2), any sandwiches between those matrices extract one of the four blocks of the object in the middle. For example, the template orthogonalization kernel is simply: T ⊤CT = I 0 C11 C12 C21 C22 I 0 = C11. (A4) The mapmaking kernel, on the other hand, is: P⊤FP = I 0 P ⊤ QU | {z } commute F11 F12 F21 F22 PQU I 0 | {z } commute = P ⊤ QUF22PQU. (A5) Using this trick, we can write the estimator as ˆsQU = (P ⊤ QUF22PQU)-1P ⊤ QU(F21d+ + F22d-). (A6) By definition of the filtering operator, we have FT = 0. Thus, F11 = F21 = 0. All that is left is to compute F22. This is straightforward from its definition (A3b): F22 = C -C·1(C11)-1C1· 22 = C22 -C21(C11)-1C12 = (C-1)22 -1 by blockwise inversion = ((XNX)22)-1 (A7) 17 and we recognize the inverse noise covariance matrix of the difference data. Therefore, the estimator is simply the pair differencing estimator (21), which concludes the proof. Appendix B: Polarized part of the full IQU estimator El Bouhargani [57, Appendix C] shows that when the instrumental noise is the same in two detectors of a pair, the pair differencing solution is as optimal as the simultaneous IQU reconstruction. We broadly recall the argument here, but refer anyone interested in the detailed derivation to the original work. Let us introduce the pair-sum and pair-difference data vectors and associated pointing matrices and noise vectors, d+ = 1 2 d∥+ d⊥ = P+s + n+ (B1a) d-= 1 2 d∥-d⊥ = P-s + n- (B1b) where the ∥and ⊥subscripts denote the orthogonal detectors of a pair. The IQU solution ˆs can be rewritten in terms of the transformed data set, with block versions of the matrices: ˆs = P ⊤N -1P -1 P ⊤N -1d = P ⊤ + P ⊤ - N -1 P+ P- -1 P ⊤ + P ⊤ - N -1 d+ d- (B2) with the transformed noise covariance matrix N inverted blockwise: N -1 ≡ N++ N+- N-+ N-- -1 ≡ Π++ Π+- Π-+ Π-- . (B3) Π□□is just a notation for the □□block of N -1. From there, we can write the map estimator in block form, separating the intensity and polarization components. The polarized part is given by (cf. [57], Equation (C.14)): ˆsQU = sQU + P ⊤ -F--P- -1 P ⊤ -[F--n-+ F-+n+] (B4) with F--= Π---Π-+P+(P ⊤ + Π++P+)-1P ⊤ + Π+- (B5a) F-+ = Π-+ -Π-+P+(P ⊤ + Π++P+)-1P ⊤ + Π++ (B5b) We see from Eq. (B4) that ˆsQU is the sum of three terms: (i) the true sky map (ii) a direct contribution from the pair-difference noise (iii) a cross-contribution from the pair-sum noise. Whenever the covariance of the pairsum and pair-difference noise vectors, N+-, is small, the transformed noise covariance matrix N becomes blockdiagonal, N = N++ ≃0 ≃0 N-- , (B6) such that F-+ = 0 and F--= N -1 --. (B7) In this limit, the optimal polarization map reduces to the PD solution (21): ˆsQU N-+→0 -----→ˆspd QU = sQU + (P ⊤ -N -1 --P-)-1P ⊤ -N -1 --n- (B8) Appendix C: Pair differencing in the case of white noise We start by simplifying Eq. (B4) by assuming that all sky pixels are observed with a uniform variety of crossing angles, such that: P ⊤ -F--≈P ⊤ -Π-- (C1a) P ⊤ -F-+ ≈P ⊤ -Π-+ (C1b) This simplification is a very good approximation for the deepest parts of the map, which are observed many times by different detectors with varying telescope orientations. The presence of a rotating HWP makes this case compelling. The QU part of the estimate now reads: ˆsQU = sQU + P ⊤ -Π--P- -1 P ⊤ -[Π--n-+ Π-+n+] . (C2) We denote the white noise levels of the even- and oddindex detectors by σ∥and σ⊥respectively. They are free to vary from one detector pair to another, and we treat them as independent Gaussian random variables. Leaving out any correlations between different detectors, we consider a single pair of detectors with independent noise levels following the same distribution N( ̄σ, z2) centered on a nominal value ̄σ and with variance z2. We will comment on the multi-detector case at the end. These assumptions simplify the starting equation (C2) because the noise matrices are just scalars (∝I), so they commute with other matrices: ˆsQU = sQU + P ⊤ -P- -1 P ⊤ - | {z } ≡Bh n-+ (Π--)-1Π-+n+ | {z } ≡ ̃n+ i = sQU + B-n- | {z } PD estimator +B- ̃n+. (C3) We have recognized the PD estimator ˆspd QU and labeled the binning operator B-. 18 The blocks of the transformed noise covariance matrix N (B3) are related to those of the original one by: N++ = 1 4 N∥+ 2N∥×⊥+ N⊥ = 1 4 σ2 ∥+ σ2 ⊥ I (C4a) N--= 1 4 N∥-2N∥×⊥+ N⊥ = 1 4 σ2 ∥+ σ2 ⊥ I (C4b) N+-= N-+ = 1 4 N∥-N⊥ = 1 4 σ2 ∥-σ2 ⊥ I (C4c) with N∥×⊥= 0 in our case where detector noise levels are uncorrelated. One can write the inverse noise covariance blocks Π appearing in Eq. (C3) explicitly, omitting the identity matrix: (Π--)-1 = N++ -N+-N -1 --N-+ = 1 4(σ2 ∥+ σ2 ⊥) -1 4(σ2 ∥-σ2 ⊥) × 4 σ2 ∥+ σ2 ⊥ × 1 4(σ2 ∥-σ2 ⊥) = σ2 ∥σ2 ⊥ σ2 ∥+ σ2 ⊥ and similarly, Π-+ = -Π--N-+N -1 ++ = - σ2 ∥+ σ2 ⊥ σ2 ∥σ2 ⊥ × 1 4(σ2 ∥-σ2 ⊥) × 4 σ2 ∥+ σ2 ⊥ = - σ2 ∥-σ2 ⊥ σ2 ∥σ2 ⊥ . Together these give the cross noise term ̃n+ = - σ2 ∥-σ2 ⊥ σ2 ∥+ σ2 ⊥ n+. (C5) It is now straightforward to compute the pixel-domain covariance matrix for each noise term and their crosscovariance. We use ∆to denote those quantities, consistently with the main text, Eq. (30), and obtain: ∆1 ≡Var[B-n-] = σ2 ∥+ σ2 ⊥ 4 Σ-1 - (C6a) ∆2 ≡Var[B- ̃n+] = σ2 ∥-σ2 ⊥ σ2 ∥+ σ2 ⊥ !2 ∆1 = ε2∆1 (C6b) ∆12 ≡Cov[B-n-, B- ̃n+] = -∆2 (C6c) where Σ-≡B⊤ -B-= P ⊤ -P- (C7) describes the sky coverage in Q and U, and ε is the relative difference of noise levels in the detector pair, defined in Eq. (33), which is equal to zero when detectors have the same noise level and ±1 when one of them has zero (or infinite) noise. Since Σ-is positive definite, ∆1 is as well, whereas the cross-covariance ∆12 is negativedefinite. Therefore, at least in our simplified white noise model, the two noise terms of the IQU estimator are anticorrelated in every pixel. This is important because we know that this estimator should be optimal (in the sense of minimum variance), and therefore have a smaller variance than the PD estimator which only has the first term with variance ∆1. Quantitatively, we have ∆iqu ≡Var[ˆsQU] = Var[B-n-+ B- ̃n+] = ∆1 + ∆2 + 2∆12 = ∆1 -∆2 = (1 -ε2)∆1 ⩽∆1 = Var[ˆspd QU] ≡∆pd. (C8) The variance of the PD estimator is indeed larger than what can be obtained by computing the full IQU solution in the white noise case. The increase of variance is given by the factor, uniform across the map, η ≡∆pd ∆iqu = ∆1 ∆1 -∆2 = 1 1 -ε2 ⩾1, (C9) with η = 1 when the noise levels of the two detectors are equal (ε = 0). We emphasize that η is a random variable, because it ultimately depends on the detector noise levels which are drawn randomly. The expected increase of variance can be evaluated numerically by drawing many pairs (σ∥, σ⊥) and averaging the corresponding η values. Let us now comment on the multi-detector case. The generalization of the result (C9) is not straightforward. Each focal plane pixel observes a slightly different part of the sky, so the Σ-matrix is specific to each detector pair. For example, the total covariance matrix for the PD estimator would be ∆pd tot = X pairs wΣ- -1 with w = 4 σ2 ∥+ σ2 ⊥ . (C10) Because of the dispersion of noise levels across the focal plane, combined with slightly different footprints of those detectors (especially when considering an instrument with a wide field-of-view), this expression can not be simplified in the general case. 19 [1] P. Ade, J. Aguirre, Z. Ahmed, S. Aiola, A. Ali, D. Alonso, M. A. Alvarez, K. Arnold, P. Ashton, J. Austermann, et al., Journal of Cosmology and Astroparticle Physics 2019, 056 (2019), ISSN 1475-7516. [2] K. Abazajian, G. Addison, P. Adshead, Z. Ahmed, S. W. Allen, D. Alonso, M. Alvarez, A. Anderson, K. S. Arnold, C. Baccigalupi, et al., CMB-S4 Science Case, Reference Design, and Project Plan (2019), 1907.04473. [3] LiteBIRD Collaboration, E. Allys, K. Arnold, J. Aumont, R. Aurlien, S. Azzoni, C. Baccigalupi, A. J. Banday, R. Banerji, R. B. Barreiro, et al., Progress of Theoretical and Experimental Physics 2023, 042F01 (2023), ISSN 2050-3911. [4] Polarbear Collaboration, P. A. R. Ade, M. Aguilar, Y. Akiba, K. Arnold, C. Baccigalupi, D. Barron, D. Beck, F. Bianchini, D. Boettger, et al., The Astrophysical Journal 848, 121 (2017), ISSN 0004-637X, 1538-4357. [5] Keck Array and BICEP2 Collaborations, P. A. R. Ade, Z. Ahmed, R. W. Aikin, K. D. Alexander, D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, R. BowensRubin, et al., Physical Review Letters 121, 221301 (2018). [6] R. Keisler, S. Hoover, N. Harrington, J. W. Henning, P. A. R. Ade, K. A. Aird, J. E. Austermann, J. A. Beall, A. N. Bender, B. A. Benson, et al., The Astrophysical Journal 807, 151 (2015), ISSN 0004-637X. [7] U. Seljak and M. Zaldarriaga, Physical Review Letters 78, 2054 (1997). [8] BICEP/Keck Collaboration, P. A. R. Ade, Z. Ahmed, M. Amiri, D. Barkats, R. B. Thakur, D. Beck, C. Bischoff, J. J. Bock, H. Boenish, et al., Physical Review Letters 127, 151301 (2021), ISSN 0031-9007, 10797114, 2110.00483. [9] M. Tristram, A. J. Banday, K. M. G ́orski, R. Keskitalo, C. R. Lawrence, K. J. Andersen, R. B. Barreiro, J. Borrill, L. P. L. Colombo, H. K. Eriksen, et al., Physical Review D 105, 083524 (2022). [10] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin, M. Amiri, D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, J. A. Brevik, I. Buder, et al., The Astrophysical Journal 792, 62 (2014), ISSN 1538-4357, 1403.4302. [11] Polarbear Collaboration, P. A. R. Ade, Y. Akiba, A. E. Anthony, K. Arnold, M. Atlas, D. Barron, D. Boettger, J. Borrill, S. Chapman, et al., The Astrophysical Journal 794, 171 (2014), ISSN 0004-637X. [12] F. Ge, M. Millea, E. Camphuis, C. Daley, N. Huang, Y. Omori, W. Quan, E. Anderes, A. J. Anderson, B. Ansarinejad, et al., Cosmology From CMB Lensing and Delensed EE Power Spectra Using 2019-2020 SPT3G Polarization Data (2024), 2411.06000. [13] Y. Li, J. R. Eimer, K. Osumi, J. W. Appel, M. K. Brewer, A. Ali, C. L. Bennett, S. M. Bruno, R. Bustos, D. T. Chuss, et al., The Astrophysical Journal 956, 77 (2023), ISSN 0004-637X. [14] Y. Li, J. Eimer, J. Appel, C. Bennett, M. Brewer, S. M. Bruno, R. Bustos, C. Chan, D. Chuss, J. Cleary, et al., A Measurement of the Largest-Scale CMB E-mode Polarization with CLASS (2025), 2501.11904. [15] C. Herv ́ıas-Caimapo, K. Wolz, A. La Posta, S. Azzoni, D. Alonso, K. Arnold, C. Baccigalupi, S. Biquard, M. L. Brown, E. Calabrese, et al., Journal of Cosmology and Astroparticle Physics 2025, 055 (2025), ISSN 1475-7516. [16] R. Stompor, A. Balbi, J. D. Borrill, P. G. Ferreira, S. Hanany, A. H. Jaffe, A. T. Lee, S. Oh, B. Rabii, P. L. Richards, et al., Physical Review D 65, 022003 (2001), ISSN 0556-2821, 1089-4918, astro-ph/0106451. [17] E. Keih ̈anen, R. Keskitalo, H. Kurki-Suonio, T. Poutanen, and A.-S. Sirvio, Astronomy and Astrophysics 510, A57 (2010), ISSN 0004-6361, 1432-0746, 0907.0367. [18] R. D ̈unner, M. Hasselfield, T. A. Marriage, J. Sievers, V. Acquaviva, G. E. Addison, P. A. R. Ade, P. Aguirre, M. Amiri, J. W. Appel, et al., The Astrophysical Journal 762, 10 (2012), ISSN 0004-637X. [19] D. Poletti, G. Fabbian, M. Le Jeune, J. Peloton, K. Arnold, C. Baccigalupi, D. Barron, S. Beckman, J. Borrill, S. Chapman, et al., Astronomy & Astrophysics 600, A60 (2017), ISSN 0004-6361, 1432-0746. [20] H. El Bouhargani, A. Jamal, D. Beck, J. Errard, L. Grigori, and R. Stompor, Astronomy and Computing 39, 100576 (2022), ISSN 22131337, 2112.03370. [21] QUaD Collaboration, C. Pryke, P. Ade, J. Bock, M. Bowden, M. L. Brown, G. Cahill, P. G. Castro, S. Church, T. Culverhouse, et al., The Astrophysical Journal 692, 1247 (2009), ISSN 0004-637X. [22] W. Hu, M. M. Hedman, and M. Zaldarriaga, Physical Review D 67, 043004 (2003). [23] D. O'Dea, A. Challinor, and B. R. Johnson, Monthly Notices of the Royal Astronomical Society 376, 1767 (2007), ISSN 0035-8711. [24] M. Shimon, B. Keating, N. Ponthieu, and E. Hivon, Physical Review D 77, 083003 (2008), ISSN 1550-7998, 1550-2368, 0709.1513. [25] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin, D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, J. A. Brevik, I. Buder, E. Bullock, et al., The Astrophysical Journal 814, 110 (2015), ISSN 1538-4357, 1502.00608. [26] D. B. Thomas, N. McCallum, and M. L. Brown, Monthly Notices of the Royal Astronomical Society 491, 1960 (2020), ISSN 0035-8711, 1365-2966, 1905.12647. [27] G. Hinshaw, C. Barnes, C. L. Bennett, M. R. Greason, M. Halpern, R. S. Hill, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, et al., The Astrophysical Journal Supplement Series 148, 63 (2003), ISSN 0067-0049. [28] B. R. Johnson, J. Collins, M. E. Abroe, P. a. R. Ade, J. Bock, J. Borrill, A. Boscaleri, P. de Bernardis, S. Hanany, A. H. Jaffe, et al., The Astrophysical Journal 665, 42 (2007), ISSN 0004-637X. [29] O. P. Lay and N. W. Halverson, The Astrophysical Journal 543, 787 (2000), ISSN 0004-637X. [30] C. L. Kuo, P. a. R. Ade, J. J. Bock, C. Cantalupo, M. D. Daub, J. Goldstein, W. L. Holzapfel, A. E. Lange, M. Lueker, M. Newcomb, et al., The Astrophysical Journal 600, 32 (2004), ISSN 0004-637X. [31] J. Errard, P. A. R. Ade, Y. Akiba, K. Arnold, M. Atlas, C. Baccigalupi, D. Barron, D. Boettger, J. Borrill, S. Chapman, et al., The Astrophysical Journal 809, 63 (2015), ISSN 0004-637X. [32] S. Naess, Y. Guan, A. J. Duivenvoorden, M. Hasselfield, Y. Wang, I. Abril-Cabezas, G. E. Addison, P. A. R. Ade, S. Aiola, T. Alford, et al., The Atacama Cosmology Telescope: DR6 Maps (2025), 2503.14451. [33] S. Hanany and P. Rosenkranz, New Astronomy Reviews 20 47, 1159 (2003), ISSN 1387-6473. [34] S. Spinelli, G. Fabbian, A. Tartari, M. Zannoni, and M. Gervasi, Monthly Notices of the Royal Astronomical Society 414, 3272 (2011), ISSN 0035-8711. [35] S. Takakura, M. A. O. Aguilar-Fa ́undez, Y. Akiba, K. Arnold, C. Baccigalupi, D. Barron, D. Beck, F. Bianchini, D. Boettger, J. Borrill, et al., The Astrophysical Journal 870, 102 (2019), ISSN 0004-637X, 1538-4357. [36] Y. Li, J. W. Appel, C. L. Bennett, R. Bustos, D. T. Chuss, J. Cleary, J. D. Couto, S. Dahal, R. Datta, R. D ̈unner, et al., The Astrophysical Journal 958, 154 (2023), ISSN 0004-637X. [37] A. Coerver, J. A. Zebrowski, S. Takakura, W. L. Holzapfel, P. A. R. Ade, A. J. Anderson, Z. Ahmed, B. Ansarinejad, M. Archipley, L. Balkenhol, et al., The Astrophysical Journal 982, 15 (2025), ISSN 0004-637X. [38] T. E. Montroy, P. a. R. Ade, J. J. Bock, J. R. Bond, J. Borrill, A. Boscaleri, P. Cabella, C. R. Contaldi, B. P. Crill, P. de Bernardis, et al., The Astrophysical Journal 647, 813 (2006), ISSN 0004-637X. [39] K. W. Yoon, P. a. R. Ade, D. Barkats, J. O. Battle, E. M. Bierman, J. J. Bock, J. A. Brevik, H. C. Chiang, A. Crites, C. D. Dowell, et al., in Millimeter and Submillimeter Detectors and Instrumentation for Astronomy III (SPIE, 2006), vol. 6275, pp. 508-525. [40] W. C. Jones, T. E. Montroy, B. P. Crill, C. R. Contaldi, T. S. Kisner, A. E. Lange, C. J. MacTavish, C. B. Netterfield, and J. E. Ruhl, Astronomy & Astrophysics 470, 771 (2007), ISSN 0004-6361, 1432-0746. [41] C. L. Kuo, J. J. Bock, J. A. Bonetti, J. Brevik, G. Chattopadhyay, P. K. Day, S. Golwala, M. Kenyon, A. E. Lange, H. G. LeDuc, et al., in Millimeter and Submillimeter Detectors and Instrumentation for Astronomy IV (SPIE, 2008), vol. 7020, pp. 415-428. [42] J. R. Hinderks, P. Ade, J. Bock, M. Bowden, M. L. Brown, G. Cahill, J. E. Carlstrom, P. G. Castro, S. Church, T. Culverhouse, et al., The Astrophysical Journal 692, 1221 (2009), ISSN 0004-637X. [43] Z. Ahmed, M. Amiri, S. J. Benton, J. J. Bock, R. BowensRubin, I. Buder, E. Bullock, J. Connors, J. P. Filippini, J. A. Grayson, et al., in Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII (SPIE, 2014), vol. 9153, pp. 540-551. [44] S. W. Henderson, R. Allison, J. Austermann, T. Baildon, N. Battaglia, J. A. Beall, D. Becker, F. D. Bernardis, J. R. Bond, E. Calabrese, et al., Journal of Low Temperature Physics 184, 772 (2016), ISSN 0022-2291, 15737357, 1510.02809. [45] C. M. Posada, P. A. R. Ade, Z. Ahmed, K. Arnold, J. E. Austermann, A. N. Bender, L. E. Bleem, B. A. Benson, K. Byrum, J. E. Carlstrom, et al., Superconductor Science and Technology 28, 094002 (2015), ISSN 0953-2048. [46] N. Stebor, P. Ade, Y. Akiba, C. Aleman, K. Arnold, C. Baccigalupi, B. Barch, D. Barron, S. Beckman, A. Bender, et al., in Millimeter, Submillimeter, and FarInfrared Detectors and Instrumentation for Astronomy VIII (SPIE, 2016), vol. 9914, pp. 363-371. [47] A. Kusaka, T. Essinger-Hileman, J. W. Appel, P. Gallardo, K. D. Irwin, N. Jarosik, M. R. Nolta, L. A. Page, L. P. Parker, S. Raghunathan, et al., Review of Scientific Instruments 85, 024501 (2014), ISSN 0034-6748, 10897623, 1310.3711. [48] A. Ritacco, N. Ponthieu, A. Catalano, R. Adam, P. Ade, P. Andr ́e, A. Beelen, A. Benoˆıt, A. Bideaud, N. Billot, et al., Astronomy & Astrophysics 599, A34 (2017), ISSN 0004-6361, 1432-0746. [49] S. Takakura, M. Aguilar, Y. Akiba, K. Arnold, C. Baccigalupi, D. Barron, S. Beckman, D. Boettger, J. Borrill, S. Chapman, et al., Journal of Cosmology and Astroparticle Physics 2017, 008 (2017), ISSN 1475-7516. [50] T. Essinger-Hileman, A. Kusaka, J. W. Appel, S. K. Choi, K. Crowley, S. P. Ho, N. Jarosik, L. A. Page, L. P. Parker, S. Raghunathan, et al., Review of Scientific Instruments 87, 094503 (2016), ISSN 0034-6748. [51] M. L. Brown, A. Challinor, C. E. North, B. R. Johnson, D. O'Dea, and D. Sutton, Monthly Notices of the Royal Astronomical Society 397, 634 (2009), ISSN 00358711, 13652966. [52] M. Rashid, M. L. Brown, and D. B. Thomas, CMB Polarisation Signal Demodulation with a Rotating Half-Wave Plate (2023), 2307.01860. [53] T. W. Morris, E. Battistelli, R. Bustos, S. K. Choi, A. J. Duivenvoorden, J. Dunkley, R. D ̈unner, M. Halpern, Y. Guan, J. van Marrewijk, et al., Physical Review D 111, 082001 (2025). [54] K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelman, The Astrophysical Journal 622, 759 (2005), ISSN 0004-637X, 1538-4357, astro-ph/0409513. [55] J. R. Stevens, N. Goeckner-Wald, R. Keskitalo, N. McCallum, A. Ali, J. Borrill, M. L. Brown, Y. Chinone, P. A. Gallardo, A. Kusaka, et al., in Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX (2018), p. 136, 1808.05131. [56] D. Alonso, J. Sanchez, A. Slosar, and LSST Dark Energy Science Collaboration, Monthly Notices of the Royal Astronomical Society 484, 4127 (2019), ISSN 0035-8711. [57] H. El Bouhargani, Ph.D. thesis, Universit ́e Paris Cit ́e (2021). [58] A. Kusaka, J. Appel, T. Essinger-Hileman, J. A. Beall, L. E. Campusano, H.-M. Cho, S. K. Choi, K. Crowley, J. W. Fowler, P. Gallardo, et al., Journal of Cosmology and Astroparticle Physics 2018, 005 (2018), ISSN 14757516. [59] BICEP2 Collaboration, P. A. R. Ade, R. W. Aikin, D. Barkats, S. J. Benton, C. A. Bischoff, J. J. Bock, J. A. Brevik, I. Buder, E. Bullock, et al., Physical Review Letters 112, 241101 (2014). [60] N. McCallum, D. B. Thomas, M. L. Brown, and N. Tessore, Monthly Notices of the Royal Astronomical Society 501, 802 (2020), ISSN 0035-8711, 1365-2966, 2008.00011. [61] A. Zonca, L. P. Singer, D. Lenz, M. Reinecke, C. Rosset, E. Hivon, and K. M. Gorski, Journal of Open Source Software 4, 1298 (2019), ISSN 2475-9066. [62] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, et al., Nature 585, 357 (2020), ISSN 1476-4687. [63] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, et al., Nature Methods 17, 261 (2020), ISSN 1548-7105. [64] J. D. Hunter, Computing in Science & Engineering 9, 90 (2007), ISSN 1558-366X. [65] M. L. Waskom, Journal of Open Source Software 6, 3021 (2021), ISSN 2475-9066.
|
2509.16294
|
Balancing Innovation and Oversight: AI in the U.S.
Treasury and IRS: A Survey
Sohail Shaikh
Department of Computer Science
George Mason University
Fairfax, VA 22030
mshaikh5@gmu.edu
August, 2025
Abstract—This paper explores how the U.S. Department of
Treasury, particularly the Internal Revenue Service (IRS), is
adopting artificial intelligence (AI) to modernize tax administra-
tion. Using publicly available information, the survey highlights
the applications of AI for taxpayer support, operational efficiency,
fraud detection, and audit optimization. Key initiatives include
AI-powered chatbots, robotic process automation, machine learn-
ing for case selection, and advanced analytics for fraud preven-
tion. These technologies aim to reduce errors, improve efficiency,
and improve taxpayer experiences. At the same time, the IRS
is implementing governance measures to ensure responsible use
of AI, including privacy safeguards, transparency initiatives, and
oversight mechanisms. The analysis shows that the Treasury AI
strategy balances technological innovation with legal compliance,
confidentiality, and public trust, reflecting a wider effort to
modernize aging systems while maintaining accountability in tax
collection and enforcement.
Index Terms—artificial intelligence, machine learning, tax
processing, tax compliance, fraud prevention, robotic process
automation, Treasury, IRS
I. INTRODUCTION
The U.S. Department of the Treasury and its tax-collecting
division, the Internal Revenue Service (IRS), are pursuing arti-
ficial intelligence (AI) as part of a broader effort to modernize
aging IT infrastructure and applications. AI initiatives aim
to improve tax collection, improve taxpayer services, prevent
fraud, and reduce operational costs in addition to migrate
legacy application using AI tools that ”translate” old code to
modernized programming languages [9]. This paper examines
the AI strategies of the Treasury and IRS, focusing on both
the anticipated benefits and the concerns surrounding privacy,
security, and confidentiality. All information presented1 is
drawn from publicly available sources and excludes protected
or restricted knowledge, if any.
The Treasury has adopted a department-wide approach to
AI implementation, designed to strengthen operations, improve
public services, and ensure strong governance across its bu-
reaus [8]. Key areas of focus include:
• AI Projects: fraud detection and risk management, pro-
cess automation, customer experience (CX) enhance-
1Disclaimer: The information collected and presented in this survey paper
is collected from public sources with no guarantee of its accuracy or lack of
biases.
ments, financial market analysis, compliance-focused
data analytics, and intelligent document processing.
• Governance and Oversight: establishment of an AI gov-
ernance framework, risk management and accountability
measures, privacy and security safeguards, human over-
sight mechanisms, transparency to maintain public trust,
and continuous policy updates.
II. AI ADOPTION AT THE IRS
The US Department of Treasury, and specifically the Inter-
nal Revenue Service (IRS), is actively developing and deploy-
ing artificial intelligence (AI) to improve taxpayer support,
operational efficiency, tax collection, and fraud prevention.
Their strategy is multifaceted and evolving, with a focus on
both technological innovation and taxpayer privacy. Current
areas of emphasis include:
• Operational improvements and taxpayer support e.g.,
paperless processing
• Tax collection and audit optimization
• Fraud detection and prevention
• AI technologies adopted, under evaluation, or planned
• Privacy and confidentiality safeguards
• Legacy code modernization using AI to reduced or elim-
inate dependence on mainframes
At present, the IRS is pursuing 68 AI-related modernization
projects, including 27 dedicated to enforcement, with the goal
of more efficiently identifying tax discrepancies in filings [5].
A. Operational Improvements and Taxpayer Support
1) AI-powered Customer Service: The IRS has deployed
AI-powered chatbots and virtual assistants to provide self-
service support for routine taxpayer queries. Using natural
language processing (NLP), these tools help taxpayers quickly
find answers on the IRS website and phone lines, covering
topics such as payment plans, transcripts, and basic account
information [1], [2], [10]. By handling common inquiries in-
stantly, they reduce wait times and free staff for more complex
cases. Future plans include secure two-way messaging and
expanded online account services [11].
1
arXiv:2509.16294v1 [cs.CY] 19 Sep 2025
2) Intelligent Automation: The IRS employs robotic pro-
cess automation (RPA) and AI-driven tools to streamline repet-
itive tasks such as data entry, document sorting, and taxpayer
record updates. Software “robots” process forms, generate
standardized letters, and manage back-office functions [1],
[3]. When combined with AI, RPA supports more advanced
functions, including document classification, anomaly detec-
tion, and AI-powered optical character recognition (OCR) for
scanned tax forms. These systems accelerate data extraction,
error checking, and case routing, enabling faster return pro-
cessing and more accurate taxpayer guidance [12]. Benefits
include reduced error rates, faster refunds, and improved
scalability during peak filing seasons.
3) Automated Scheduling and Call Triage: AI systems
prioritize and route incoming inquiries, schedule callbacks,
and offload routine questions to virtual agents. This reduces
hold times, improves response efficiency, and enables staff to
focus on complex or urgent cases [1], [2]. Personalized digital
tools further assist taxpayers in setting up online accounts,
understanding notices, and correcting common errors.
4) Multilingual and Accessibility Features:
AI-powered
tools provide multilingual support and are being enhanced for
accessibility, allowing a broader range of taxpayers to engage
with IRS resources [2].
Overall, these initiatives are expected to reduce wait times,
provide faster resolutions for routine questions, enable 24/7
support, and allow IRS staff to concentrate on high-priority
taxpayer needs.
B. Tax Collection and Audit Optimization
1) Intelligent Case Selection: The IRS uses machine learn-
ing to analyze large datasets and identify high-risk returns
for audit, including those from high-income individuals, large
partnerships, and hedge funds [2], [4], [5]. This targeted
approach improves audit efficiency by directing resources to-
ward cases with the greatest likelihood of non-compliance. In
2023, the Department of Treasury Office of Payment Integrity
employed an improved AI-driven process to mitigate check
fraud, recovering more than $375 million [5], [12].
2) Enforcement and Collection: AI models assess tax fil-
ings and financial data to detect under-reporting, improper
deductions, and potential tax evasion. These models help prior-
itize enforcement actions based on potential returns, ensuring
that resources are effectively allocated [2], [5]. For example,
the Form 1040 Return Classification and Selection Models
apply statistical and machine learning techniques to flag poten-
tially non-compliant individual returns, enabling more targeted
and efficient audits [18].
C. Fraud Detection and Prevention
1) Pattern Analysis: AI analyzes tax returns, bank transac-
tions, and other datasets to detect potentially fraudulent behav-
ior and inconsistencies in reporting [2], [5]. Treasury’s AI tools
have helped prevent or recover more than $4 billion in taxpayer
losses by identifying fraudulent returns, improper payments,
and check schemes. Machine learning models enable the rapid
detection of high-risk transactions and the recovery of funds
that would otherwise be lost [13], [14], [16]. Collaboration
with the Research, Applied Analytics & Statistics (RAAS)
division enhances AI screening and prioritization of inves-
tigations, particularly for exempt organizations. Automated
document-matching tools cross-check W-2s, 1099s, crypto
statements, and other submissions to detect misreporting or
inconsistencies. The IRS is leveraging AI to combat increas-
ingly sophisticated fraud schemes, including those powered
by emerging technologies. AI tools analyze large datasets
to detect patterns and identify potentially fraudulent activity,
enhancing the agency’s enforcement capabilities. While AI is
effective in monitoring electronic payments, physical refund
checks remain vulnerable to theft and alteration, requiring
banks to implement complementary AI detection. Despite
hiring 30,000 new employees to strengthen services and en-
forcement, AI remains essential for managing the volume and
complexity of modern tax fraud [21].
2) Criminal Investigations: The IRS Criminal Investiga-
tions Branch uses AI to uncover sophisticated fraud schemes
and quickly identify emerging tax evasion methods [2]. AI
is used to pinpoint and track abusive tax structures and
schemes designed to generate artificial deductions or credits.
For instance, the agency is using AI to address non-compliance
related to digital assets like cryptocurrency [19].
In general, AI integration enables more efficient, accurate,
and proactive detection and prevention of tax fraud.
D. AI Technologies Used, Explored, and Planned
The IRS employs artificial intelligence (AI) and machine
learning to analyze large datasets, detect tax fraud, and im-
prove compliance. AI prioritizes audits by identifying high-
risk taxpayers, identifies emerging threats such as crypto-
related fraud, and automates fraud detection and recovery [15].
Machine learning supports risk scoring, anomaly detection,
and pattern analysis, while natural language processing (NLP)
powers chatbots and extracts information from unstructured
documents [1], [3]. Robotic process automation (RPA) stream-
lines repetitive back-office tasks, and advanced analytics guide
case selection and investigations [2], [4]. Ongoing efforts
focus on further integrating AI in line with technological
advances and changes in the workforce [1]. The IRS leverages
artificial intelligence (AI) to enhance tax compliance and
reduce the tax gap ($428 billion net gap in 2024) by improving
the selection and targeting of audits. AI models are used
to select representative samples of individual tax returns,
helping the agency identify non-compliance trends and detect
returns likely to contain errors or additional taxes owed. AI is
also applied to more specialized areas. New models identify
taxpayers claiming refundable credits who are at higher risk
of non-compliance, outperforming previous methods in pilot
studies. Additionally, AI prioritizes partnership returns for
audit, allowing the IRS to focus on the highest-risk large
partnerships, a complex and increasingly significant area of
taxation. These AI tools collectively improve the IRS’s ability
to detect non-compliance and improve audit effectiveness [20].
2
E. Privacy and Confidentiality Safeguards
The IRS protects taxpayer information under Section 6103
of the Internal Revenue Code. AI initiatives are governed by
the Chief Data and Analytics Officer to ensure ethical use, bias
mitigation, and compliance with federal privacy requirements.
Policies enforce data sharing only with authorized individuals,
limit access to those with a ’need to know’, require data
removal or restricted re-disclosure, and include oversight of all
AI projects [3]. AI systems follow the same privacy standards
as other IRS technologies [6], and transparency measures such
as public dashboards help maintain accountability [1], [7].
These safeguards ensure that AI use is secure, ethical, and
fully compliant.
III. CONCERNS
Although AI promises to make tax administration more
effective and fair, critics warn of risks such as bias, inaccuracy,
and lack of transparency. The Government Accountability
Office (GAO) and taxpayer advocates have urged the IRS to
disclose more about its data sources, models, and processes
[22].
The IRS has resisted this disclosure, citing the risk that tax-
payers could ’reverse engineer’ its methods to avoid compli-
ance. Requests under the Freedom of Information Act (FOIA)
have been denied, fueling concerns about accountability. These
concerns intensified after a 2023 Stanford study revealed that
African American taxpayers were disproportionately audited
due to algorithmic bias in training data. Although the IRS has
acknowledged these disparities and is developing new tools,
critics argue that stronger safeguards are needed [22].
A further challenge is the ’black box’ nature of some
AI models, making it difficult for taxpayers to understand
or contest audit selections. The GAO has identified insuffi-
cient documentation of IRS models, raising concerns about
accountability and error correction [23]. Although the IRS
states that all AI-selected cases are reviewed by a human
examiner, experts caution that auditors may be reluctant to
override algorithmic outputs, even when faced with legitimate
statistical anomalies.
A comprehensive AI governance strategy is needed that ad-
dresses the concerns raised by industry experts and academics,
and the Treasury has taken the first step in developing and
releasing its AI strategy in December 2024 [8]. In March
2025, the IRS issued interim guidance on AI governance
while awaiting updated directives from the White House and
Treasury [24].
Overall, while AI can strengthen tax administration, it must
be deployed cautiously, with greater transparency, explainabil-
ity, and oversight to maintain fairness and public trust.
In addition, budget cuts leading to AI-trained personal
shortage are another concern. AI is a relatively new and
specialized field and expertise is not readily available which
creates a barrier to AI adoption.
IV. CONCLUSION
The U.S. Department of Treasury and the IRS are leveraging
artificial intelligence as a cornerstone of their modernization
strategy, aiming to improve taxpayer services, optimize en-
forcement, and strengthen fraud detection. Early applications,
including AI-powered chatbots, intelligent automation, legacy
application modernization and machine learning for case selec-
tion, demonstrate measurable gains in efficiency, accuracy, and
responsiveness. At the same time, governance frameworks em-
phasize privacy, transparency, and ethical oversight, acknowl-
edging the sensitivity of taxpayer data and the importance of
public trust. Although challenges remain to integrate AI with
legacy systems and ensure fairness, the Treasury’s multifaceted
approach suggests a deliberate balance between innovation
and accountability. As AI capabilities evolve, continued focus
on privacy safeguards, policy updates, and transparency will
be essential to maintain both operational improvements and
public confidence in IRS mission.
V. FUTURE DIRECTIONS
Looking ahead, the IRS and Treasury are positioned to
expand AI into emerging domains such as cryptocurrency
compliance, adaptive audit strategies, and predictive analytics
for tax gap reduction. Greater integration of natural language
processing (NLP) tools, secure digital services, and multilin-
gual accessibility could further improve taxpayer engagement.
At the same time, advancing explainable AI and bias miti-
gation techniques will be critical to preserving fairness and
trust. Future progress will depend not only on technological
innovation but also on transparent governance and continuous
public dialogue to ensure that AI strengthens, rather than
undermines, confidence in tax administration.
REFERENCES
[1] International Accounting Bulletin: US IRS halts modernisation efforts
to assess AI impact
[2] Galleros Robinson CPS & Advisors: How the IRS is Leveraging
Artificial Intelligence to Transform Tax Administration
[3] TIGTA: Governance Efforts Should Be Accelerated To Ensure the Safe,
Secure, and Trustworthy Development and Use of Artificial Intelligence
[4] Treasury: U.S. Department of the Treasury, IRS Outline Accomplish-
ments in First Year of Implementation of Plan to Improve Taxpayer
Service, Modernize Technology, Increase High-End Enforcement
[5] J David Tax Law: How the IRS Is Utilizing AI to Pursue Tax Debts in
2025
[6] Treasury Memo: Privacy Policy for Artificial Intelligence
[7] FedScoop: Lack of IRS transparency on AI jeopardizes public trust,
advisory panel says, June 25, 2025
[8] Treasury and Artificial Intelligence
[9] Beyond the Hype: The reality of AI in Tax Compliance
[10] IRS Using AI — Foley & Lardner LLP
[11] IRS releases Strategic Operating Plan update outlining future priorities;
transformation momentum accelerating following long list of successes
for taxpayers - May 2024
[12] AI Use in Tax Administration — Bipartisan Policy Center
[13] U.S. Treasury’s AI is Catching Tax Cheats and Saving Billions
[14] Treasury Department now using AI to save taxpayers billions - Oct,
2024
[15] Chairman Jordan and Rep. Hageman Open Inquiry into IRS’s Use of
AI to Surveil Americans’ Financial Information March, 2024
[16] The IRS Is Leveraging AI Tools, Machine Learning, and Fraud Detection
Applications Accelerated by NVIDIA GPUs
[17] How is AI & Machine Learning revolutionizing the Tax Landscape?
3
[18] TIGTA: The IRS Could Leverage Examination Results in Artificial
Intelligence Examination Case Selection Models and Improve Processes
to Evaluate Performance, May, 2025
[19] Does the IRS Use AI to Find Audit Targets & Tax Cheats?
[20] GAO: Artificial Intelligence May Help IRS Close the Tax Gap, June,
2025
[21] IRS Deploys AI Tools to Combat Emerging Tech’s Role in Fraud
Schemes
[22] Transparency, Oversight Urged for IRS Artificial Intelligence
[23] IRS dinged by GAO for subpar documentation of AI audit models, June
7, 2024
[24] IRS Interim Policy for AI Governance, March 11, 2025
4
|
Balancing Innovation and Oversight: AI in the U.S. Treasury and IRS: A Survey Sohail Shaikh 22030 August, 2025 Abstract-This paper explores how the U.S. (IRS), is adopting artificial intelligence (AI) to modernize tax administration. Using publicly available information, the survey highlights the applications of AI for taxpayer support, operational efficiency, fraud detection, and audit optimization. Key initiatives include AI-powered chatbots, robotic process automation, machine learning for case selection, and advanced analytics for fraud prevention. These technologies aim to reduce errors, improve efficiency, and improve taxpayer experiences. At the same time, the IRS is implementing governance measures to ensure responsible use of AI, including privacy safeguards, transparency initiatives, and oversight mechanisms. The analysis shows that the Treasury AI strategy balances technological innovation with legal compliance, confidentiality, and public trust, reflecting a wider effort to modernize aging systems while maintaining accountability in tax collection and enforcement. Index Terms-artificial intelligence, machine learning, tax processing, tax compliance, fraud prevention, robotic process automation, Treasury, IRS I. INTRODUCTION The U.S. Department of the Treasury and its tax-collecting division, the Internal Revenue Service (IRS), are pursuing artificial intelligence (AI) as part of a broader effort to modernize aging IT infrastructure and applications. AI initiatives aim to improve tax collection, improve taxpayer services, prevent fraud, and reduce operational costs in addition to migrate legacy application using AI tools that "translate" old code to modernized programming languages [9]. This paper examines the AI strategies of the Treasury and IRS, focusing on both the anticipated benefits and the concerns surrounding privacy, security, and confidentiality. All information presented1 is drawn from publicly available sources and excludes protected or restricted knowledge, if any. The Treasury has adopted a department-wide approach to AI implementation, designed to strengthen operations, improve public services, and ensure strong governance across its bureaus [8]. Key areas of focus include: • AI Projects: fraud detection and risk management, process automation, customer experience (CX) enhance1Disclaimer: The information collected and presented in this survey paper is collected from public sources with no guarantee of its accuracy or lack of biases. ments, financial market analysis, compliance-focused data analytics, and intelligent document processing. • Governance and Oversight: establishment of an AI governance framework, risk management and accountability measures, privacy and security safeguards, human oversight mechanisms, transparency to maintain public trust, and continuous policy updates. II. AI ADOPTION AT THE IRS The US - nal Revenue Service (IRS), is actively developing and deploying artificial intelligence (AI) to improve taxpayer support, operational efficiency, tax collection, and fraud prevention. Their strategy is multifaceted and evolving, with a focus on both technological innovation and taxpayer privacy. Current areas of emphasis include: • Operational improvements and taxpayer support e.g., paperless processing • Tax collection and audit optimization • Fraud detection and prevention • AI technologies adopted, under evaluation, or planned • Privacy and confidentiality safeguards • Legacy code modernization using AI to reduced or eliminate dependence on mainframes At present, the IRS is pursuing 68 AI-related modernization projects, including 27 dedicated to enforcement, with the goal of more efficiently identifying tax discrepancies in filings [5]. A. Operational Improvements and Taxpayer Support 1) AI-powered Customer Service: The IRS has deployed AI-powered chatbots and virtual assistants to provide selfservice support for routine taxpayer queries. Using natural language processing (NLP), these tools help taxpayers quickly find answers on the IRS website and phone lines, covering topics such as payment plans, transcripts, and basic account information [1], [2], [10]. By handling common inquiries instantly, they reduce wait times and free staff for more complex cases. Future plans include secure two-way messaging and expanded online account services [11]. 1 19 Sep 2025 2) Intelligent Automation: The IRS employs robotic process automation (RPA) and AI-driven tools to streamline repetitive tasks such as data entry, document sorting, and taxpayer record updates. Software "robots" process forms, generate standardized letters, and manage back-office functions [1], [3]. When combined with AI, RPA supports more advanced functions, including document classification, anomaly detection, and AI-powered optical character recognition (OCR) for scanned tax forms. These systems accelerate data extraction, error checking, and case routing, enabling faster return processing and more accurate taxpayer guidance [12]. Benefits include reduced error rates, faster refunds, and improved scalability during peak filing seasons. 3) Automated Scheduling and Call Triage: AI systems prioritize and route incoming inquiries, schedule callbacks, and offload routine questions to virtual agents. This reduces hold times, improves response efficiency, and enables staff to focus on complex or urgent cases [1], [2]. Personalized digital tools further assist taxpayers in setting up online accounts, understanding notices, and correcting common errors. 4) Multilingual and Accessibility Features: AI-powered tools provide multilingual support and are being enhanced for accessibility, allowing a broader range of taxpayers to engage with IRS resources [2]. Overall, these initiatives are expected to reduce wait times, provide faster resolutions for routine questions, enable 24/7 support, and allow IRS staff to concentrate on high-priority taxpayer needs. B. Tax Collection and Audit Optimization 1) Intelligent Case Selection: The IRS uses machine learning to analyze large datasets and identify high-risk returns for audit, including those from high-income individuals, large partnerships, and hedge funds [2], [4], [5]. This targeted approach improves audit efficiency by directing resources toward cases with the greatest likelihood of non-compliance. In 2023, the -driven process to mitigate check fraud, recovering more than 4 billion in taxpayer losses by identifying fraudulent returns, improper payments, and check schemes. Machine learning models enable the rapid detection of high-risk transactions and the recovery of funds that would otherwise be lost [13], [14], [16]. Collaboration with the Research, Applied Analytics & Statistics (RAAS) division enhances AI screening and prioritization of investigations, particularly for exempt organizations. Automated document-matching tools cross-check W-2s, 1099s, crypto statements, and other submissions to detect misreporting or inconsistencies. The IRS is leveraging AI to combat increasingly sophisticated fraud schemes, including those powered by emerging technologies. AI tools analyze large datasets to detect patterns and identify potentially fraudulent activity, enhancing the agency's enforcement capabilities. While AI is effective in monitoring electronic payments, physical refund checks remain vulnerable to theft and alteration, requiring banks to implement complementary AI detection. Despite hiring 30,000 new employees to strengthen services and enforcement, AI remains essential for managing the volume and complexity of modern tax fraud [21]. 2) Criminal Investigations: The IRS Criminal Investigations Branch uses AI to uncover sophisticated fraud schemes and quickly identify emerging tax evasion methods [2]. AI is used to pinpoint and track abusive tax structures and schemes designed to generate artificial deductions or credits. For instance, the agency is using AI to address non-compliance related to digital assets like cryptocurrency [19]. In general, AI integration enables more efficient, accurate, and proactive detection and prevention of tax fraud. D. AI Technologies Used, Explored, and Planned The IRS employs artificial intelligence (AI) and machine learning to analyze large datasets, detect tax fraud, and improve compliance. AI prioritizes audits by identifying highrisk taxpayers, identifies emerging threats such as cryptorelated fraud, and automates fraud detection and recovery [15]. Machine learning supports risk scoring, anomaly detection, and pattern analysis, while natural language processing (NLP) powers chatbots and extracts information from unstructured documents [1], [3]. Robotic process automation (RPA) streamlines repetitive back-office tasks, and advanced analytics guide case selection and investigations [2], [4]. Ongoing efforts focus on further integrating AI in line with technological advances and changes in the workforce [1]. The IRS leverages artificial intelligence (AI) to enhance tax compliance and reduce the tax gap ($428 billion net gap in 2024) by improving the selection and targeting of audits. AI models are used to select representative samples of individual tax returns, helping the agency identify non-compliance trends and detect returns likely to contain errors or additional taxes owed. AI is also applied to more specialized areas. New models identify taxpayers claiming refundable credits who are at higher risk of non-compliance, outperforming previous methods in pilot studies. Additionally, AI prioritizes partnership returns for audit, allowing the IRS to focus on the highest-risk large partnerships, a complex and increasingly significant area of taxation. These AI tools collectively improve the IRS's ability to detect non-compliance and improve audit effectiveness [20]. 2 E. Privacy and Confidentiality Safeguards The IRS protects taxpayer information under Section 6103 of the Internal Revenue Code. AI initiatives are governed by the Chief Data and Analytics Officer to ensure ethical use, bias mitigation, and compliance with federal privacy requirements. Policies enforce data sharing only with authorized individuals, limit access to those with a 'need to know', require data removal or restricted re-disclosure, and include oversight of all AI projects [3]. AI systems follow the same privacy standards as other IRS technologies [6], and transparency measures such as public dashboards help maintain accountability [1], [7]. These safeguards ensure that AI use is secure, ethical, and fully compliant. III. CONCERNS Although AI promises to make tax administration more effective and fair, critics warn of risks such as bias, inaccuracy, and lack of transparency. The Government Accountability Office (GAO) and taxpayer advocates have urged the IRS to disclose more about its data sources, models, and processes [22]. The IRS has resisted this disclosure, citing the risk that taxpayers could 'reverse engineer' its methods to avoid compliance. Requests under the Freedom of Information Act (FOIA) have been denied, fueling concerns about accountability. These concerns intensified after a 2023 Stanford study revealed that African American taxpayers were disproportionately audited due to algorithmic bias in training data. Although the IRS has acknowledged these disparities and is developing new tools, critics argue that stronger safeguards are needed [22]. A further challenge is the 'black box' nature of some AI models, making it difficult for taxpayers to understand or contest audit selections. The GAO has identified insufficient documentation of IRS models, raising concerns about accountability and error correction [23]. Although the IRS states that all AI-selected cases are reviewed by a human examiner, experts caution that auditors may be reluctant to override algorithmic outputs, even when faced with legitimate statistical anomalies. A comprehensive AI governance strategy is needed that addresses the concerns raised by industry experts and academics, and the Treasury has taken the first step in developing and releasing its AI strategy in December 2024 [8]. In March 2025, the IRS issued interim guidance on AI governance while awaiting updated directives from the White House and Treasury [24]. Overall, while AI can strengthen tax administration, it must be deployed cautiously, with greater transparency, explainability, and oversight to maintain fairness and public trust. In addition, budget cuts leading to AI-trained personal shortage are another concern. AI is a relatively new and specialized field and expertise is not readily available which creates a barrier to AI adoption. IV. CONCLUSION The U.S. - forcement, and strengthen fraud detection. Early applications, including AI-powered chatbots, intelligent automation, legacy application modernization and machine learning for case selection, demonstrate measurable gains in efficiency, accuracy, and responsiveness. At the same time, governance frameworks emphasize privacy, transparency, and ethical oversight, acknowledging the sensitivity of taxpayer data and the importance of public trust. Although challenges remain to integrate AI with legacy systems and ensure fairness, the Treasury's multifaceted approach suggests a deliberate balance between innovation and accountability. As AI capabilities evolve, continued focus on privacy safeguards, policy updates, and transparency will be essential to maintain both operational improvements and public confidence in IRS mission. V. FUTURE DIRECTIONS Looking ahead, the IRS and Treasury are positioned to expand AI into emerging domains such as cryptocurrency compliance, adaptive audit strategies, and predictive analytics for tax gap reduction. Greater integration of natural language processing (NLP) tools, secure digital services, and multilingual accessibility could further improve taxpayer engagement. At the same time, advancing explainable AI and bias mitigation techniques will be critical to preserving fairness and trust. Future progress will depend not only on technological innovation but also on transparent governance and continuous public dialogue to ensure that AI strengthens, rather than undermines, confidence in tax administration. REFERENCES [1] International Accounting Bulletin: US IRS halts modernisation efforts to assess AI impact [2] Galleros Robinson CPS & Advisors: How the IRS is Leveraging Artificial Intelligence to Transform Tax Administration [3] TIGTA: Governance Efforts Should Be Accelerated To Ensure the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence [4] Treasury: U.S. Department of the Treasury, IRS Outline Accomplishments in First Year of Implementation of Plan to Improve Taxpayer Service, Modernize Technology, Increase High-End Enforcement [5] J David Tax Law: How the IRS Is Utilizing AI to Pursue Tax Debts in 2025 [6] Treasury Memo: Privacy Policy for Artificial Intelligence [7] FedScoop: Lack of IRS transparency on AI jeopardizes public trust, advisory panel says, June 25, 2025 [8] Treasury and Artificial Intelligence [9] Beyond the Hype: The reality of AI in Tax Compliance [10] IRS Using AI - Foley & Lardner LLP [11] IRS releases Strategic Operating Plan update outlining future priorities; transformation momentum accelerating following long list of successes for taxpayers - May 2024 [12] AI Use in Tax Administration - Bipartisan Policy Center [13] U.S. Treasury's AI is Catching Tax Cheats and Saving Billions [14] Treasury Department now using AI to save taxpayers billions - Oct, 2024 [15] Chairman Jordan and Rep. Hageman Open Inquiry into IRS's Use of AI to Surveil Americans' Financial Information March, 2024 [16] The IRS Is Leveraging AI Tools, Machine Learning, and Fraud Detection Applications Accelerated by NVIDIA GPUs [17] How is AI & Machine Learning revolutionizing the Tax Landscape? 3 [18] TIGTA: The IRS Could Leverage Examination Results in Artificial Intelligence Examination Case Selection Models and Improve Processes to Evaluate Performance, May, 2025 [19] Does the IRS Use AI to Find Audit Targets & Tax Cheats? [20] GAO: Artificial Intelligence May Help IRS Close the Tax Gap, June, 2025 [21] IRS Deploys AI Tools to Combat Emerging Tech's Role in Fraud Schemes [22] Transparency, Oversight Urged for IRS Artificial Intelligence [23] IRS dinged by GAO for subpar documentation of AI audit models, June 7, 2024 [24] IRS Interim Policy for AI Governance, March 11, 2025 4
|
2509.16291
|
Test-Time Learning and Inference-Time Deliberation for
Efficiency-First Offline Reinforcement Learning in Care
Coordination and Population Health Management
Sanjay Basu, MD, PhD1,2
Sadiq Y. Patel, MSW, PhD1,3
Parth Sheth, MSE1,3
Bhairavi Muralidharan, MSE1
Namrata Elamaran, MSE1
Aakriti Kinra, MS1
Rajaie Batniji, MD, PhD1
1Waymark, San Francisco, CA, USA
2San Francisco General Hospital, University of California San Francisco, San Francisco, CA, USA
3University of Pennsylvania, Philadelphia, PA, USA
Corresponding Author: Sanjay Basu, MD, PhD, 2120 Fillmore St, San Francisco, CA 94115 (san-
jay.basu@waymarkcare.com)
Abstract
Care coordination and population health management programs serve large Medicaid and safety-net
populations and must be auditable, efficient, and adaptable. While clinical risk for outreach modalities
is typically low, time and opportunity costs differ substantially across text, phone, video, and in-person
visits. We propose a lightweight offline reinforcement learning (RL) approach that augments trained
policies with (i) test-time learning via local neighborhood calibration, and (ii) inference-time deliberation
via a small Q-ensemble that incorporates predictive uncertainty and time/effort cost. The method exposes
transparent dials for neighborhood size and uncertainty/cost penalties and preserves an auditable training
pipeline. Evaluated on a de-identified operational dataset, TTL+ITD achieves stable value estimates with
predictable efficiency trade-offs and subgroup auditing.
1
Introduction
Care coordination and population health management (PHM) are core functions of health systems and com-
munity partners, impacting large numbers of Americans enrolled in Medicaid and other safety-net programs.
These efforts aim to proactively identify needs, prioritize outreach, and escalate appropriately, all within
finite staffing and budget constraints. While outreach modalities (text, phone, video, in-person) carry low
clinical risk, their time and opportunity costs vary significantly, making efficiency a primary design goal.
In practice, the central operational question is when to deploy expensive in-person outreach versus efficient
virtual modalities to maximize value and equity under capacity constraints.
These decisions must be made in strictly offline settings, where policies are learned from logged data
without exploration at deployment [1]. Classical approaches include constrained Markov decision processes
[2], risk-sensitive objectives, and conservative offline RL (e.g., CQL/IQL) [3, 4].
Conformal prediction
can provide calibrated error control [5, 6]; ensembles provide practical uncertainty quantification [7]; and
decision-time computation is common in control [8]. In health services research and health economic evalu-
ation, cost-effectiveness and cost-benefit analyses (CEA/CBA) guide program-level choices [9–12], but they
are not designed for per-patient, per-decision recommendations that adapt to granular state features and
logged behavior constraints.
1
arXiv:2509.16291v1 [cs.CY] 19 Sep 2025
Population-level context.
Medicaid and the Children’s Health Insurance Program (CHIP) together cover
over 80 million people in the United States, amounting to roughly one in four Americans according to CMS
monthly enrollment reports [13, 14]. Population health management programs serving these beneficiaries
routinely execute millions of outreach events annually across text, phone, video, and in-person modali-
ties. Under finite staffing and budget constraints, small efficiency gains at the patient-decision level can
scale to meaningful improvements in access and equity at the population level, motivating operational tools
that optimize when to deploy higher-touch in-person visits versus efficient virtual modes while preserving
auditability.
Why not standard cost-effectiveness alone?
Classical cost-effectiveness and cost-benefit analyses
(CEA/CBA) provide valuable program-level comparisons and policy choices [9–11]. However, they typi-
cally assume aggregate cohorts or Markov models with fixed transition structures and are not designed to
produce per-patient, per-decision recommendations that adapt to granular state features or logged behavior
constraints. TTL+ITD complements CEA by learning from real operational trajectories, acting at decision
time with state-specific personalization, uncertainty-awareness, and auditable dials; it supports equity au-
diting across subgroups and does not require environment exploration. We adhere to good practice in health
economic modeling and reporting [12, 15] while focusing on patient-level, sequential operational optimization.
Plain-language overview.
In practice, we take 2.77 million historical coordination decisions recorded
by Waymark, learn which outreach options were safe, and then layer simple checks on top of the trained
policy so that each new recommendation can be explained and adjusted. TTL (test-time learning) looks up
a member’s nearest neighbors in the historical data and calibrates a local risk threshold that reflects how
often similar members experienced harm after each action. ITD (inference-time deliberation) then weighs
the predicted benefit of each action against three intuitive penalties: the modeled risk of harm, the model’s
own uncertainty, and the staff time required for the action. Operators can turn these penalties up or down
to emphasize safety or efficiency without re-training models. The result is a set of actionable dials that the
care team can audit and tune in the same way they manage staffing forecasts or visit quotas.
2
Related Work
Offline and conservative RL. Offline RL targets policies without environment interaction [1] using pes-
simism or constraints (e.g., CQL, IQL) [3, 4].
We treat these as baselines and focus on inference-time
control. Safe RL. Safe RL includes constrained MDPs [2] and risk-sensitive criteria; healthcare applica-
tions emphasize verifiability and auditability. Conformal decision-making. Conformal methods provide
calibrated control of error/coverage [5, 6]; our local (kNN) TTL approximates conditional coverage for
action-conditional risks. Uncertainty and deliberation. Deep ensembles provide practical uncertainty
estimates [7]; decision-time planning/computation is standard in control [8]. Our novelty is a pragmatic,
test-time fusion of local conformal safety, cost-aware deliberation, and auditable dials for deployment.
Contributions.
We unify local conformal calibration and inference-time deliberation into a deployment-
ready stack. TTL+ITD (i) augments global conformal safety with local, neighborhood-specific thresholds and
kNN action priors, (ii) injects cost- and uncertainty-aware deliberation through a lightweight Q-ensemble
whose dials (K, β, λ, λcost) expose efficiency–safety trade-offs without re-training, and (iii) ships with an
open, library-agnostic evaluation harness (FQE, DR, sensitivity sweeps, manifests) so that TTL+ITD can
be audited beside standard offline RL baselines (global conformal gating, BC, discrete CQL). Our emphasis
is on inference-time control for efficiency-first coordination workloads—deciding when to expend scarce in-
person effort while preserving an auditable, modular training pipeline.
3
Methods
Data.
We analyze a de-identified operational dataset from Waymark comprising 2.77 million coordination
steps across approximately 168,000 members and nine discrete outreach actions (text, phone, video, in-
person variants, and escalation pathways).
Each trajectory records the member identifier, time index,
2
logged action, observed reward, and JSON snapshots of the member’s state (open tasks, recent utilization,
engagement history). Rewards are 0 when no adverse utilization occurs in the follow-up window and negative
otherwise, so larger values represent fewer near-term harms. To preserve privacy, the export suppresses most
demographic attributes: continuous age is jittered, geography is bucketed, and subgroup indicators collapse
to an “unknown” category. We note this constraint in §5; the codebase accepts richer covariates if partners
can share them under their governance policies.
Risk model.
We estimate action-conditional harm probabilities pharm(s, a) by fitting class-weighted multi-
nomial logistic regressions on the parsed state features concatenated with an action one-hot vector. The
model is trained on a temporally earlier split (70%) and evaluated on a 15% calibration slice and 15% test
slice to respect causal ordering. We also provide hooks for gradient-boosted variants when partners prefer
tree models. A global conformal threshold τ is computed on the calibration slice using standard split con-
formal prediction [5, 6]. To personalise safety, TTL builds a kNN index over the z-scored calibration states
and derives local thresholds τs(a) from the (1 −α) quantile of neighbor-specific risk scores for each action.
Preference model.
We then train a multinomial logistic regression on the “safe” subset of the training
data where pharm(s, a) < τ. This base policy provides a transparent, audit-friendly probability distribution
over actions given the current features. At inference we blend the base probabilities with a neighborhood
prior derived from the K nearest calibration episodes; empirically we set η = 0.3, but partners can adjust it
to emphasise either historical frequencies or the learned policy.
TTL+ITD pipeline.
At deployment we (1) parse the current state through the feature map; (2) query
the kNN calibrator to obtain local thresholds τs(a) and neighborhood action frequencies; (3) adjust the
base policy’s action probabilities with the TTL prior; (4) evaluate the deliberation score Qmean −βQstd −
λpharm −λcostc(a) and mask unsafe actions (pharm ≥τs(a)); and (5) select the highest scoring action (or
sample via softmax at temperature T). The only train-time artifacts are the risk model, base policy, kNN
index, and Q-ensemble; all governance dials act purely at inference. The cost term c(a) is loaded from a
YAML configuration (Appendix A) that combines CPT-typical visit durations, CMS physician-fee-schedule
wage assumptions, and Bureau of Labor Statistics wage data [18–20]. Partners can override this mapping
to reflect local staffing costs or travel times.
Deliberation.
We fit an ensemble of linear FQE models on a subsample of episodes (bootstrap by episode).
At inference, we compute for each action
score(s, a) = E[Q(s, a)]
|
{z
}
ensemble mean
−β Std[Q(s, a)]
|
{z
}
uncertainty
−λ pharm(s, a)
|
{z
}
risk
,
(1)
mask actions with pharm(s, a) ≥τs(a), and select greedily or via a softmax with temperature T. The dials
(β, λ, T) control uncertainty and risk aversion. Appendix C shows that the combination of global and local
quantiles induces a monotone safety–efficiency frontier: decreasing α or increasing (β, λ, λcost) only removes
actions, ensuring predictable governance trade-offs.
Evaluation.
We estimate policy value using fitted Q evaluation (FQE) with a ridge-regularised linear
function class applied to the shared feature map; we run 15 Bellman iterations for the headline model and
6–8 iterations for the lightweight sweep settings. Doubly robust estimators with per-episode bootstrapping
provide confidence intervals and randomisation checks [16, 17]. Baselines include (i) a global conformal gate
that masks unsafe actions but otherwise follows the preference model, (ii) behaviour cloning (maximum-
likelihood imitation), (iii) our deliberation stack without TTL, (iv) the TTL adapter without deliberation,
and (v) a discrete CQL implementation from d3rlpy trained for 2,000 gradient steps [3]. We default to a
70/15/15 temporal split when timestamped data are available (toggle TEMPORAL SPLIT); otherwise we fall
back to index-based slicing.
Our primary governance view is the efficiency frontier—estimated value
versus expected effort cost—as stakeholders routinely balance staff time against member coverage.
3
Figure 1: Efficiency frontier: expected value versus expected time/effort cost for TTL+ITD as the cost
penalty varies.
4
Results
We analyze the deployment question: when should coordinators expend expensive in-person effort versus low-
touch digital outreach? Table 2 and Figure 1 summarise the main trade-offs. With the calibration dials set
to (K=200, β=0.5, λ=1.0, λcost=0), TTL+ITD achieves essentially neutral estimated value ( ˆV0 ≈−7×10−5)
while driving the expected effort cost at episode start to 0.05 (normalized minutes). In contrast, the global-
threshold baseline incurs 3.9 cost units and behaviour cloning (BC) expends 17.1 cost units. Discrete CQL—
our strongest baseline from d3rlpy—improves value relative to BC (−0.149 vs −0.165) but still operates an
order of magnitude more expensively than TTL+ITD. The ablation table (Appendix E) highlights that both
components matter: ITD alone reduces harm but remains costly, whereas TTL alone inherits near-zero harm
but lacks the cost penalty; combining them yields the frontier of high value and low effort. We attempted
to include IQL, but current d3rlpy releases only support continuous-action IQL; the vendor library rejects
our discrete action space, so we report CQL as the representative pessimistic baseline.
Figure 1 exposes the efficiency frontier: increasing the cost penalty smoothly trades value for effort.
The knee of the curve occurs near λcost = 0.75–1.0, after which the policy converges to the cheapest modal-
ities (cost ≈1) with minor additional value loss. The calibration heatmaps (Figure 2) confirm that the
neighbourhood size and uncertainty dial (β) behave monotonically: larger β suppresses uncertain actions,
while larger λ increases thrift at predictable value cost.
Across the full sensitivity sweep (Table 1) TTL+ITD maintains ˆV0 at the empirical optimum of 0 while
the baselines vary between −0.07 and −0.09. The TTL-only rows echo this ceiling because their blended
policy still inherits the local safety mask; ITD-only rows sit lower, reflecting harm incurred without the
conformal gate. Expected cost varies by two orders of magnitude, demonstrating that the cost penalty is the
practical lever once harm has been neutralised. Collectively, these findings show that inference-time dials
are sufficient to navigate the operational trade-space: the data-driven policy already sits on the “no-harm”
face of the value frontier, and governance decisions revolve around how much staff time leadership is willing
to spend.
The retained calibration CDF (Figure 3) shows that the global conformal threshold τ still controls
taken-action risk; local TTL thresholds tighten further in dense regions. Because de-identification collapsed
demographic covariates to an “unknown” bucket, subgroup tables become degenerate; we therefore omit them
from the main text and note the limitation explicitly in §5. Appendix D reports paired bootstrap confidence
intervals and a randomisation test contrasting TTL+ITD against the baselines; results were stable across
three random seeds.
4
Figure 2: Sensitivity of TTL+ITD value (simple FQE) to uncertainty (β) and cost penalty (λ) across
K ∈{100, 200, 300}.
K
β
λ
ˆV0(TTL+ITD)
ˆV0(HACO)
ˆV0(BC)
100
0.3000
0.2500
0.000000
-0.094049
-0.089997
100
0.6000
1.0000
0.000000
-0.069575
-0.067872
100
0.9000
1.5000
0.000000
-0.069575
-0.067872
100
0.9000
1.0000
0.000000
-0.069575
-0.067872
100
0.9000
0.7500
0.000000
-0.094049
-0.089997
100
0.9000
0.5000
0.000000
-0.069575
-0.067872
100
0.3000
0.5000
0.000000
-0.069575
-0.067872
100
0.6000
1.5000
0.000000
-0.069575
-0.067872
100
0.9000
0.2500
0.000000
-0.094049
-0.089997
100
0.6000
0.7500
0.000000
-0.094049
-0.089997
Table 1: Top settings from the TTL+ITD sensitivity sweep (higher ˆV0 is better; simple FQE estimate).
Sensitivity to dials (K, β, λ).
Figure 2 shows a representative sensitivity heatmap of TTL+ITD value
across uncertainty (β) and cost/penalty (λ) at a fixed neighborhood size (K = 200). We observe a predictable
frontier: higher λ reduces expected cost and can reduce value if over-penalized; higher β discourages uncertain
actions. Table 1 summarizes top settings from the sweep.
5
Discussion
TTL+ITD keeps training simple and auditable while adding per-episode adaptivity and uncertainty-aware
selection at deployment. For outreach, the primary decision lever is efficiency: prioritizing value-per-effort
and deciding when in-person engagement is warranted. The approach is modular: richer risk or cost models
drop in; alternative neighborhood metrics or quantile smoothers can be used; and the Q-ensemble can be
swapped for model-based rollouts. Subgroup audits remain essential to monitor equity impacts. Compared
to our earlier global/groupwise safety work, TTL+ITD emphasizes inference-time control and a cost-aware
variant specifically suited to low-risk operational actions.
Contribution to the literature.
Methodologically, our work shows that conformal risk calibration [5, 6]
and inference-time deliberation [8] can be fused into a single governance layer that sits on top of standard
offline RL pipelines [1, 3, 4]. The resulting system delivers explicit knobs for risk, uncertainty, and cost with-
out retraining. Operationally, we translate these ideas into the language of population health management,
connecting them to staffing and auditing practices highlighted in health-services and economic-evaluation
literature [9–11]. To our knowledge, this is the first deployment-oriented study that couples local conformal
safety with value-per-effort deliberation on a large Medicaid coordination dataset.
5
Figure 3: Calibration CDF of taken-action harm probabilities with global τ (red dashed).
Policy
ˆV0 (FQE)
MinCost
-0.173
Global-τ
-0.169
ITD only
-0.166
BC
-0.165
CQL
-0.149
TTL+ITD
-0.000
TTL only
-0.000
Table 2: Policy comparison (FQE) for TTL+ITD and baselines. Non-positive rewards mean that a value
estimate of ˆV0 = 0 corresponds to eliminating observed harms, so the TTL+ITD rows reaching 0 indicate
the policy is predicted to avoid the negative outcomes captured in our reward definition while baselines incur
residual harm.
Comparison to offline RL baselines.
Offline RL algorithms such as BC and CQL optimise value during
training but do not expose governance dials once deployed.
Our experiments show that discrete CQL
improves value relative to BC, yet TTL+ITD delivers comparable value while reducing expected effort
cost by two orders of magnitude. Importantly, TTL+ITD is agnostic to the upstream learner: the same
deliberation layer can sit atop CQL, BC, or clinician-designed heuristics. We attempted to integrate IQL,
but current open-source releases only support continuous actions; once discrete IQL becomes available it can
be slotted into the same evaluation harness.
Positioning vs. health services research (HSR) and CEA.
Our contribution is operational: TTL+ITD
turns logged data into per-episode recommendations that optimize value-per-effort under real constraints,
with transparent dials for governance. In contrast, traditional CEA/CBA frameworks provide macro-level
decisions (e.g., adopt program A vs. B) and are less suited for patient-state personalization, logged behav-
ioral support, or inference-time uncertainty penalization. TTL+ITD can be used alongside CEA: CEA sets
program-level priorities and budget targets; TTL+ITD executes day-to-day, on-policy decisions within those
constraints, with reproducible evaluation and subgroup audits [9–12].
Cost calibration from credible sources.
We align cost weights to external references and internal
time-and-motion. Specifically, we (i) map outreach actions to CPT-style categories and typical times [18],
(ii) use CMS Physician Fee Schedule guidance and work RVU time conventions [19], and (iii) weight staff
time by BLS occupational wages for relevant roles (e.g., RN/CHW/social worker) [20]. Travel time for in-
person visits is added from routing estimates or internal logs. Costs are normalized into units used by ITD;
sensitivity to the cost scale is reported via the efficiency frontier.
6
Limitations.
Our evaluation uses a single de-identified dataset from one organization; while we use tem-
poral splits to mitigate leakage, generalization to other systems and periods remains future work.
De-
identification removes most demographic covariates, so fairness tables collapse to an “unknown” group; the
released code supports richer audits when such features are available. Local (kNN) calibration yields empir-
ical benefits but does not provide formal conditional coverage guarantees; Appendix C outlines theoretical
notes and open questions. Performance depends on logged policy support and data quality; concept drift re-
quires periodic recalibration. Cost functions depend on calibrated time-and-motion assumptions; we provide
sensitivity analyses via the efficiency frontier.
Implications for population health.
For population health teams, the main takeaway is that the
most valuable lever is not additional model complexity but the governance controls surfaced at inference.
Once historical harms are neutralised, administrators can negotiate trade-offs purely in units they under-
stand—minutes of staff time and uncertainty penalties—while still retaining reproducible audit trails. The
same deliberation layer can be placed on top of alternative baseline policies (CQL, BC, clinician heuristics),
suggesting a path to standardise decision support across vendor or in-house systems. As Medicaid programs
expand team-based, community-oriented care [13, 14], such lightweight, auditable controls can help balance
patient reach with finite field resources.
Future directions.
Prospective evaluation remains the decisive next step: we plan to run matched-control
pilots that monitor visit rates, harm events, and staff satisfaction. Richer demographic data would enable
subgroup-specific conformal levels and fairness guarantees similar to FG-FARL. Finally, local conformal
calibration can be combined with episodic model-based lookahead or constrained optimisation to provide
proactive guardrails for high-risk clinical interventions, extending TTL+ITD beyond the low-risk coordina-
tion setting explored here.
Computational requirements.
Inference involves (i) a kNN query over the calibration set (distance on
z-scored features), and (ii) Q-ensemble predictions. With Ncal calibration points and d features, naive kNN
is O(Ncald) per decision; approximate indices or batched queries can reduce costs. The Q-ensemble adds
a small constant factor (few linear models). Memory scales with storing calibration features and ensemble
parameters; action cost tables are negligible. In larger action spaces, we can precompute modality clusters
and use two-stage selection (screening then scoring) to keep latency under operational budgets.
6
Reproducibility
All code, configuration files, and manifests are available at https://github.com/sanjaybasu/ttl_itd_
medicaid. The repository provides Makefile targets and Python entry points (e.g., python run_ttl_itd.py)
that reproduce every table and figure, emit manifests with package versions, and generate subgroup sum-
maries and plots.
Implementation details and clarifications.
Feature engineering: we parse key-value state JSONs and
materialize up to 64 features (most-informative keys) plus basic temporal features (time index and previous
reward). The final feature set is documented in code and manifests. kNN neighborhoods: we use z-scored
features and Euclidean distance by default; cosine distance and PCA-projected spaces are drop-in alternatives
in our code. We sweep K over {100,200,300} and report sensitivity; larger K improves stability but may
dilute locality. Q-ensemble: we bootstrap by episodes (with replacement), train linear fitted Q models with
the same feature map, and aggregate mean and standard deviation across models. Class imbalance: for
harm prediction we use class-weighted logistic regression (or scale pos weight in LightGBM) and calibrate
thresholds on a held-out slice.
Expanded baselines and statistics.
We add implicit Q-learning (IQL) [4] and conservative Q-learning
(CQL) [3] as offline RL baselines in sensitivity analyses. For significance, we report paired bootstrap CIs and
a randomization test on episodic returns. Subgroup fairness tables include counts and uncertainty intervals.
7
Risk as cost (efficiency mode).
In settings where primitive actions (e.g., text/call/visit) are not clinically
risky, TTL+ITD supports a cost-aware objective: we replace the harm term with a per-action cost (e.g., visit
> call > text) and penalize cost directly in the deliberation score. This yields value-per-effort prioritization
under the same modular framework. When truly high-risk interventions exist (e.g., medication changes), we
include them in the action set and return to harm-gated TTL.
Theoretical considerations.
Global conformal gating yields finite-sample marginal coverage [5, 6]. Our
local (kNN) calibration approximates conditional coverage by restricting conformity scores to a neighbor-
hood; larger K increases stability while smaller K increases locality. The ensemble penalty (variance term)
discourages actions with high predictive uncertainty [7]. Together, these design choices induce a monotone
safety–efficiency trade-off controlled by α, K, β, λ; formal guarantees for local coverage and coupled penalties
are left for future work.
References
[1] Levine, Sergey and Kumar, Aviral and Tucker, George and Fu, Justin Offline reinforcement learning: Tutorial,
review, and perspectives on open problems arXiv preprint arXiv:2005.01643, 2020
[2] Altman, Eitan Constrained Markov Decision Processes CRC Press, 1999
[3] Kumar, Aviral and Zhou, Aurick and Tucker, George and Levine, Sergey Conservative Q-Learning for Offline
Reinforcement Learning Advances in Neural Information Processing Systems, 2020
[4] Kostrikov, Ilya and Nair, Ashvin and Levine, Sergey Offline Reinforcement Learning with Implicit Q-Learning
Advances in Neural Information Processing Systems, 2021
[5] Vovk, Vladimir and Gammerman, Alex and Shafer, Glenn Algorithmic Learning in a Random World Springer,
2005
[6] Angelopoulos, Anastasios N and Bates, Stephen Conformal prediction: A gentle introduction Foundations and
Trends in Machine Learning, 16(4):494–591, 2023
[7] Lakshminarayanan, Balaji and Pritzel, Alex and Blundell, Charles Simple and Scalable Predictive Uncertainty
Estimation using Deep Ensembles Advances in Neural Information Processing Systems, 2017
[8] Sutton, Richard S and Barto, Andrew G Reinforcement Learning: An Introduction (2nd Edition) MIT Press,
2018
[9] Drummond, Michael and Sculpher, Mark and Claxton, Karl and Stoddart, Greg and Torrance, George Methods
for the Economic Evaluation of Health Care Programmes Oxford University Press, 2015, 4 ed.
[10] Sanders, Gillian D. and Neumann, Peter J. and Basu, Anirban and et al. Recommendations for Conduct,
Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in
Health and Medicine JAMA, 316(10):1093–1103, 2016
[11] Neumann, Peter J. and Sanders, Gillian D. and Russell, Louise B. and Siegel, Joanna E. and Ganiats, Theodore
G. Cost-Effectiveness in Health and Medicine Oxford University Press, 2017, 2 ed.
[12] Husereau, Don and Drummond, Michael and Petrou, Stavros and et al. CHEERS 2022 Statement: Updated
Guidance for Reporting Health Economic Evaluations BMJ, 376:e067975, 2022
[13] Centers for Medicare & Medicaid Services Medicaid and CHIP Enrollment Data:
Monthly Reports 2024.
https://www.medicaid.gov/medicaid/program-information/medicaid-and-chip-enrollment-data/index.html
[14] Centers
for
Medicare
&
Medicaid
Services
Medicaid
and
CHIP
Enrollment
Trend
Snapshots
2025.
https://www.medicaid.gov/
[15] Caro, J. Jaime and et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS)—ISPOR
Good Research Practices Medical Decision Making, 32(5):667–670, 2012
[16] Jiang, Nan and Li, Lihong Doubly robust off-policy value evaluation for reinforcement learning Proceedings of
the 33rd International Conference on Machine Learning, 2016
8
[17] Thomas, Philip and Murphy, Susan and Barto, Andrew High-confidence off-policy evaluation AAAI Conference
on Artificial Intelligence, 2015
[18] American Medical Association Current Procedural Terminology (CPT) Professional Edition AMA Press, 2024
[19] Centers for Medicare & Medicaid Services Medicare Physician Fee Schedule (PFS): Relative Value Units, Time,
and Payment Policies 2024. Technical documentation
[20] U.S. Bureau of Labor Statistics Occupational Employment and Wage Statistics 2024. https://www.bls.gov/oes/
A
Cost Calibration Details
We define a cost dictionary c(a) using: (i) CPT typical times for analogous services (e.g., telephone E/M;
video visits) [18], (ii) CMS PFS time conventions and practice expense guidance [19], (iii) BLS wages for
staffing categories [20], and (iv) internal time-and-motion and travel estimates. For each action a, cost is
c(a) = time(a) × wage(staff) + travel(a), then normalized for use in the deliberation score. We provide
sensitivity analyses to the scaling of c(a).
B
Algorithms and Implementation Details
TTL (local calibration). Given state s, form x = ϕ(s), find the K nearest calibration states via Euclidean
distance on z-scored x, collect action-conditional risk predictions {ˆp(s′, a)} over neighbors, and set τs(a) to
the (1 −α) quantile for each action a. ITD (deliberation). For each action a, compute ˆQmean(s, a) and
ˆQstd(s, a) from a bootstrap ensemble. The decision score is ˆQmean −β ˆQstd −λ ˆp(s, a) −λcost c(a); unsafe
actions with ˆp(s, a) ≥τs(a) are masked when risk gating is enabled. We include pseudocode and additional
implementation notes in the repository.
C
Proofs and Theoretical Notes
Monotone thresholds. Let {ui}n
i=1 be calibration scores (predicted harms). Define τ(α) = Quantile1−α({ui}).
If α2 < α1, then τ(α2) ≥τ(α1) by properties of quantiles. Expected harm reduction.
Under cali-
brated harms E[1{Y = 1} | s, a] = ˆp(s, a) and independence of selection from Y given (s, a), risk gating
1{ˆp(s, a) < τ} yields E[Y | ˆp < τ] = E[ˆp | ˆp < τ] ≤E[ˆp]; thus expected harm is lower on the gated set. For
local gating, analogous statements hold within kNN neighborhoods; formal conditional coverage guarantees
depend on smoothness/overlap assumptions and are left for future work.
D
Statistics and Significance
We report paired bootstrap confidence intervals and a randomization test for episode-level returns comparing
TTL+ITD vs. baselines. The released code continues to emit subgroup tables with counts and 95% CIs, but
in this de-identified export all subgroup indicators collapse to an “unknown” bucket, so the corresponding
tables are omitted from the manuscript.
E
Additional Figures and Tables
F
Code Availability
The implementation and paper sources are available at: https://github.com/sanjaybasu/ttl_itd_medicaid.
9
Figure 4: Policy comparison across TTL+ITD, global- τ, BC, and (if available) IQL/CQL.
Figure 5: Efficiency frontier (expected value vs expected cost) for cost-aware TTL+ITD.
G
Best-Practice Reporting
We document: data provenance and de-identification; feature engineering; train/calibration/test split strat-
egy; hyperparameters; OPE configuration (feature map, iterations); subgroup definitions; fairness metrics;
and governance considerations. Scripts and manifests are provided in the repository; all figures/tables can
be regenerated with the Makefile targets described in the README.
H
Methodological Checklist and Reporting Summary
10
Policy
ˆV0 (FQE)
MinCost
-0.173
Global-τ
-0.169
ITD only
-0.166
BC
-0.165
CQL
-0.149
TTL+ITD
-0.000
TTL only
-0.000
Table 3: Policy comparison table (FQE).
λcost
Expected cost
ˆV0 (FQE)
0.00
16.015
-0.131
0.25
1.180
-0.147
0.50
1.069
-0.151
1.00
1.006
-0.153
2.00
1.000
-0.153
Table 4: Efficiency frontier table across cost penalties.
Aspect
Description
Data provenance
Operational dataset from Waymark Care; de-identified; date shifts
upstream.
Cohort
All members with logged care-coordination actions over study win-
dow.
State features
Parsed JSON (up to 64 keys) + time step + previous reward; stan-
dardization.
Actions
Discrete primitives (text/call/video/home-visit); optional clinical
interventions if available.
Outcome/reward
Negative harms (ED/hospitalization) or cost-aware mode (per-
action effort).
Splits
70% train, 15% calibration, 15% test by step; episode-aware sam-
pling for OPE.
Risk model
Action-conditional
logistic
(class-weighted)
or
LightGBM
(scale pos weight).
TTL
kNN neighborhoods on standardized features; local (1−α) quantile
thresholds.
ITD
Bootstrap
linear
fitted-Q
ensemble;
score
=
mean−βstd−λp harm−λcostc(a).
Baselines
Global- τ, BC; sensitivity to IQL/CQL when available.
OPE
Simple FQE (linear map) + DR; paired bootstrap CIs; randomiza-
tion test.
Fairness
Subgroup value tables (sex, race, age, ADI, dual, BH, high-util).
Governance
Tunable dials (α, K, β, λ, λcost); manifests; audit logs.
Reproducibility
Requirements, Makefile targets, run manifests, scripts for all fig-
ures/tables.
Table 5: Methodological checklist summarizing data, modeling, evaluation, fairness, and reproducibility
items.
11
Item
Summary
Objective
Offline decision-support for safe and efficient care coordination.
Design
Retrospective
observational;
offline
RL
without
exploration;
inference-time control.
Population
Medicaid members served by Waymark Care during study period.
Interventions
Outreach actions (text/call/video/home), optionally clinical ac-
tions if present.
Comparators
Global- τ, BC; sensitivity to IQL/CQL.
Outcomes
Harms (ED/hospitalization) and/or cost (effort) as specified.
Analysis
TTL local conformal gating; ITD with uncertainty/cost penalties;
FQE/DR OPE.
Subgroups
Demographics (sex, race, age), utilization and deprivation indica-
tors.
Missing data
Imputation via medians in pipelines; manifests report feature sets.
Bias
Class imbalance handled in risk; subgroup audits; reporting of
counts and CIs.
Reproducibility
Code, configs, manifests; figures/tables regenerated via Makefile.
Limitations
No online exploration; local coverage not guaranteed; data noise
and drift.
Table 6: CONSORT-style reporting summary adapted for offline decision-support studies.
12
|
Test-Time Learning and Inference-Time Deliberation for Efficiency-First Offline Reinforcement Learning in Care Coordination and Population Health Management Sanjay Basu, MD, PhD1,2 Sadiq Y. Patel, MSW, PhD1,3 Parth Sheth, MSE1,3 Bhairavi Muralidharan, MSE1 Namrata Elamaran, MSE1 Aakriti Kinra, MS1 Rajaie Batniji, MD, PhD1 1Waymark, San Francisco, CA, USA 2San Francisco General Hospital, 3 : Sanjay Basu, MD, PhD, 2120 Fillmore St, San Francisco, CA 94115 (san- ) Abstract Care coordination and population health management programs serve large Medicaid and safety-net populations and must be auditable, efficient, and adaptable. While clinical risk for outreach modalities is typically low, time and opportunity costs differ substantially across text, phone, video, and in-person visits. We propose a lightweight offline reinforcement learning (RL) approach that augments trained policies with (i) test-time learning via local neighborhood calibration, and (ii) inference-time deliberation via a small Q-ensemble that incorporates predictive uncertainty and time/effort cost. The method exposes transparent dials for neighborhood size and uncertainty/cost penalties and preserves an auditable training pipeline. Evaluated on a de-identified operational dataset, TTL+ITD achieves stable value estimates with predictable efficiency trade-offs and subgroup auditing. 1 Introduction Care coordination and population health management (PHM) are core functions of health systems and community partners, impacting large numbers of Americans enrolled in Medicaid and other safety-net programs. These efforts aim to proactively identify needs, prioritize outreach, and escalate appropriately, all within finite staffing and budget constraints. While outreach modalities (text, phone, video, in-person) carry low clinical risk, their time and opportunity costs vary significantly, making efficiency a primary design goal. In practice, the central operational question is when to deploy expensive in-person outreach versus efficient virtual modalities to maximize value and equity under capacity constraints. These decisions must be made in strictly offline settings, where policies are learned from logged data without exploration at deployment [1]. Classical approaches include constrained Markov decision processes [2], risk-sensitive objectives, and conservative offline RL (e.g., CQL/IQL) [3, 4]. Conformal prediction can provide calibrated error control [5, 6]; ensembles provide practical uncertainty quantification [7]; and decision-time computation is common in control [8]. In health services research and health economic evaluation, cost-effectiveness and cost-benefit analyses (CEA/CBA) guide program-level choices [9-12], but they are not designed for per-patient, per-decision recommendations that adapt to granular state features and logged behavior constraints. 1 19 Sep 2025 Population-level context. Medicaid and the Children's Health Insurance Program (CHIP) together cover over 80 million people in the United States, amounting to roughly one in four Americans according to CMS monthly enrollment reports [13, 14]. Population health management programs serving these beneficiaries routinely execute millions of outreach events annually across text, phone, video, and in-person modalities. Under finite staffing and budget constraints, small efficiency gains at the patient-decision level can scale to meaningful improvements in access and equity at the population level, motivating operational tools that optimize when to deploy higher-touch in-person visits versus efficient virtual modes while preserving auditability. Why not standard cost-effectiveness alone? Classical cost-effectiveness and cost-benefit analyses (CEA/CBA) provide valuable program-level comparisons and policy choices [9-11]. However, they typically assume aggregate cohorts or Markov models with fixed transition structures and are not designed to produce per-patient, per-decision recommendations that adapt to granular state features or logged behavior constraints. TTL+ITD complements CEA by learning from real operational trajectories, acting at decision time with state-specific personalization, uncertainty-awareness, and auditable dials; it supports equity auditing across subgroups and does not require environment exploration. We adhere to good practice in health economic modeling and reporting [12, 15] while focusing on patient-level, sequential operational optimization. Plain-language overview. In practice, we take 2.77 million historical coordination decisions recorded by Waymark, learn which outreach options were safe, and then layer simple checks on top of the trained policy so that each new recommendation can be explained and adjusted. TTL (test-time learning) looks up a member's nearest neighbors in the historical data and calibrates a local risk threshold that reflects how often similar members experienced harm after each action. ITD (inference-time deliberation) then weighs the predicted benefit of each action against three intuitive penalties: the modeled risk of harm, the model's own uncertainty, and the staff time required for the action. Operators can turn these penalties up or down to emphasize safety or efficiency without re-training models. The result is a set of actionable dials that the care team can audit and tune in the same way they manage staffing forecasts or visit quotas. 2 Related Work Offline and conservative RL. Offline RL targets policies without environment interaction [1] using pessimism or constraints (e.g., CQL, IQL) [3, 4]. We treat these as baselines and focus on inference-time control. Safe RL. Safe RL includes constrained MDPs [2] and risk-sensitive criteria; healthcare applications emphasize verifiability and auditability. Conformal decision-making. Conformal methods provide calibrated control of error/coverage [5, 6]; our local (kNN) TTL approximates conditional coverage for action-conditional risks. Uncertainty and deliberation. Deep ensembles provide practical uncertainty estimates [7]; decision-time planning/computation is standard in control [8]. Our novelty is a pragmatic, test-time fusion of local conformal safety, cost-aware deliberation, and auditable dials for deployment. Contributions. We unify local conformal calibration and inference-time deliberation into a deploymentready stack. TTL+ITD (i) augments global conformal safety with local, neighborhood-specific thresholds and kNN action priors, (ii) injects cost- and uncertainty-aware deliberation through a lightweight Q-ensemble whose dials (K, β, λ, λcost) expose efficiency-safety trade-offs without re-training, and (iii) ships with an open, library-agnostic evaluation harness (FQE, DR, sensitivity sweeps, manifests) so that TTL+ITD can be audited beside standard offline RL baselines (global conformal gating, BC, discrete CQL). Our emphasis is on inference-time control for efficiency-first coordination workloads-deciding when to expend scarce inperson effort while preserving an auditable, modular training pipeline. 3 Methods Data. We analyze a de-identified operational dataset from Waymark comprising 2.77 million coordination steps across approximately 168,000 members and nine discrete outreach actions (text, phone, video, inperson variants, and escalation pathways). Each trajectory records the member identifier, time index, 2 logged action, observed reward, and JSON snapshots of the member's state (open tasks, recent utilization, engagement history). Rewards are 0 when no adverse utilization occurs in the follow-up window and negative otherwise, so larger values represent fewer near-term harms. To preserve privacy, the export suppresses most demographic attributes: continuous age is jittered, geography is bucketed, and subgroup indicators collapse to an "unknown" category. We note this constraint in §5; the codebase accepts richer covariates if partners can share them under their governance policies. Risk model. We estimate action-conditional harm probabilities pharm(s, a) by fitting class-weighted multinomial logistic regressions on the parsed state features concatenated with an action one-hot vector. The model is trained on a temporally earlier split (70%) and evaluated on a 15% calibration slice and 15% test slice to respect causal ordering. We also provide hooks for gradient-boosted variants when partners prefer tree models. A global conformal threshold τ is computed on the calibration slice using standard split conformal prediction [5, 6]. To personalise safety, TTL builds a kNN index over the z-scored calibration states and derives local thresholds τs(a) from the (1 -α) quantile of neighbor-specific risk scores for each action. Preference model. We then train a multinomial logistic regression on the "safe" subset of the training data where pharm(s, a) call > text) and penalize cost directly in the deliberation score. This yields value-per-effort prioritization under the same modular framework. When truly high-risk interventions exist (e.g., medication changes), we include them in the action set and return to harm-gated TTL. Theoretical considerations. Global conformal gating yields finite-sample marginal coverage [5, 6]. Our local (kNN) calibration approximates conditional coverage by restricting conformity scores to a neighborhood; larger K increases stability while smaller K increases locality. The ensemble penalty (variance term) discourages actions with high predictive uncertainty [7]. Together, these design choices induce a monotone safety-efficiency trade-off controlled by α, K, β, λ; formal guarantees for local coverage and coupled penalties are left for future work. References [1] Levine, Sergey and Kumar, Aviral and Tucker, George and Fu, Justin Offline reinforcement learning: Tutorial, review, and perspectives on open problems arXiv preprint , 2020 [2] Altman, Eitan Constrained Markov Decision Processes CRC Press, 1999 [3] Kumar, Aviral and Zhou, Aurick and Tucker, George and Levine, Sergey Conservative Q-Learning for Offline Reinforcement Learning Advances in Neural Information Processing Systems, 2020 [4] Kostrikov, Ilya and Nair, Ashvin and Levine, Sergey Offline Reinforcement Learning with Implicit Q-Learning Advances in Neural Information Processing Systems, 2021 [5] Vovk, Vladimir and Gammerman, Alex and Shafer, Glenn Algorithmic Learning in a Random World Springer, 2005 [6] Angelopoulos, Anastasios N and Bates, Stephen Conformal prediction: A gentle introduction Foundations and Trends in Machine Learning, 16(4):494-591, 2023 [7] Lakshminarayanan, Balaji and Pritzel, Alex and Blundell, Charles Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles Advances in Neural Information Processing Systems, 2017 [8] Sutton, Richard S and Barto, Andrew G Reinforcement Learning: An Introduction (2nd Edition) MIT Press, 2018 [9] Drummond, Michael and Sculpher, Mark and Claxton, Karl and Stoddart, Greg and Torrance, George Methods for the Economic Evaluation of Health Care Programmes Oxford University Press, 2015, 4 ed. [10] Sanders, Gillian D. and Neumann, Peter J. and Basu, Anirban and et al. Recommendations for Conduct, Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in Health and Medicine JAMA, 316(10):1093-1103, 2016 [11] Neumann, Peter J. and Sanders, Gillian D. and Russell, Louise B. and Siegel, Joanna E. and Ganiats, Theodore G. Cost-Effectiveness in Health and Medicine Oxford University Press, 2017, 2 ed. [12] Husereau, Don and Drummond, Michael and Petrou, Stavros and et al. CHEERS 2022 Statement: Updated Guidance for Reporting Health Economic Evaluations BMJ, 376:e067975, 2022 [13] Centers for Medicare & Medicaid Services Medicaid and CHIP Enrollment Data: Monthly Reports 2024. https://www.medicaid.gov/medicaid/program-information/medicaid-and-chip-enrollment-data/index.html [14] Centers for Medicare & Medicaid Services Medicaid and CHIP Enrollment Trend Snapshots 2025. https://www.medicaid.gov/ [15] Caro, J. Jaime and et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS)-ISPOR Good Research Practices Medical Decision Making, 32(5):667-670, 2012 [16] Jiang, Nan and Li, Lihong Doubly robust off-policy value evaluation for reinforcement learning Proceedings of the 33rd International Conference on Machine Learning, 2016 8 [17] Thomas, Philip and Murphy, Susan and Barto, Andrew High-confidence off-policy evaluation AAAI Conference on Artificial Intelligence, 2015 [18] American Medical Association Current Procedural Terminology (CPT) Professional Edition AMA Press, 2024 [19] Centers for Medicare & Medicaid Services Medicare Physician Fee Schedule (PFS): Relative Value Units, Time, and Payment Policies 2024. Technical documentation [20] U.S. Bureau of Labor Statistics Occupational Employment and Wage Statistics 2024. https://www.bls.gov/oes/ A Cost Calibration Details We define a cost dictionary c(a) using: (i) CPT typical times for analogous services (e.g., telephone E/M; video visits) [18], (ii) CMS PFS time conventions and practice expense guidance [19], (iii) BLS wages for staffing categories [20], and (iv) internal time-and-motion and travel estimates. For each action a, cost is c(a) = time(a) × wage(staff) + travel(a), then normalized for use in the deliberation score. We provide sensitivity analyses to the scaling of c(a). B Algorithms and Implementation Details TTL (local calibration). Given state s, form x = φ(s), find the K nearest calibration states via Euclidean distance on z-scored x, collect action-conditional risk predictions {ˆp(s′, a)} over neighbors, and set τs(a) to the (1 -α) quantile for each action a. ITD (deliberation). For each action a, compute ˆQmean(s, a) and ˆQstd(s, a) from a bootstrap ensemble. The decision score is ˆQmean -β ˆQstd -λ ˆp(s, a) -λcost c(a); unsafe actions with ˆp(s, a) ≥τs(a) are masked when risk gating is enabled. We include pseudocode and additional implementation notes in the repository. C Proofs and Theoretical Notes Monotone thresholds. Let {ui}n i=1 be calibration scores (predicted harms). Define τ(α) = Quantile1-α({ui}). If α2 < α1, then τ(α2) ≥τ(α1) by properties of quantiles. Expected harm reduction. Under calibrated harms E[1{Y = 1} | s, a] = ˆp(s, a) and independence of selection from Y given (s, a), risk gating 1{ˆp(s, a) < τ} yields E[Y | ˆp < τ] = E[ˆp | ˆp < τ] ≤E[ˆp]; thus expected harm is lower on the gated set. For local gating, analogous statements hold within kNN neighborhoods; formal conditional coverage guarantees depend on smoothness/overlap assumptions and are left for future work. D Statistics and Significance We report paired bootstrap confidence intervals and a randomization test for episode-level returns comparing TTL+ITD vs. baselines. The released code continues to emit subgroup tables with counts and 95% CIs, but in this de-identified export all subgroup indicators collapse to an "unknown" bucket, so the corresponding tables are omitted from the manuscript. E Additional Figures and Tables F Code Availability The implementation and paper sources are available at: https://github.com/sanjaybasu/ttl_itd_medicaid. 9 Figure 4: Policy comparison across TTL+ITD, global- τ, BC, and (if available) IQL/CQL. Figure 5: Efficiency frontier (expected value vs expected cost) for cost-aware TTL+ITD. G Best-Practice Reporting We document: data provenance and de-identification; feature engineering; train/calibration/test split strategy; hyperparameters; OPE configuration (feature map, iterations); subgroup definitions; fairness metrics; and governance considerations. Scripts and manifests are provided in the repository; all figures/tables can be regenerated with the Makefile targets described in the README. H Methodological Checklist and Reporting Summary 10 Policy ˆV0 (FQE) MinCost -0.173 Global-τ -0.169 ITD only -0.166 BC -0.165 CQL -0.149 TTL+ITD -0.000 TTL only -0.000 Table 3: Policy comparison table (FQE). λcost Expected cost ˆV0 (FQE) 0.00 16.015 -0.131 0.25 1.180 -0.147 0.50 1.069 -0.151 1.00 1.006 -0.153 2.00 1.000 -0.153 Table 4: Efficiency frontier table across cost penalties. Aspect Description Data provenance Operational dataset from Waymark Care; de-identified; date shifts upstream. Cohort All members with logged care-coordination actions over study window. State features Parsed JSON (up to 64 keys) + time step + previous reward; standardization. Actions Discrete primitives (text/call/video/home-visit); optional clinical interventions if available. Outcome/reward Negative harms (ED/hospitalization) or cost-aware mode (peraction effort). Splits 70% train, 15% calibration, 15% test by step; episode-aware sampling for OPE. Risk model Action-conditional logistic (class-weighted) or LightGBM (scale pos weight). TTL kNN neighborhoods on standardized features; local (1-α) quantile thresholds. ITD Bootstrap linear fitted-Q ensemble; score = mean-βstd-λp harm-λcostc(a). Baselines Global- τ, BC; sensitivity to IQL/CQL when available. OPE Simple FQE (linear map) + DR; paired bootstrap CIs; randomization test. Fairness Subgroup value tables (sex, race, age, ADI, dual, BH, high-util). Governance Tunable dials (α, K, β, λ, λcost); manifests; audit logs. Reproducibility Requirements, Makefile targets, run manifests, scripts for all figures/tables. Table 5: Methodological checklist summarizing data, modeling, evaluation, fairness, and reproducibility items. 11 Item Summary Objective Offline decision-support for safe and efficient care coordination. Design Retrospective observational; offline RL without exploration; inference-time control. Population Medicaid members served by Waymark Care during study period. Interventions Outreach actions (text/call/video/home), optionally clinical actions if present. Comparators Global- τ, BC; sensitivity to IQL/CQL. Outcomes Harms (ED/hospitalization) and/or cost (effort) as specified. Analysis TTL local conformal gating; ITD with uncertainty/cost penalties; FQE/DR OPE. Subgroups Demographics (sex, race, age), utilization and deprivation indicators. Missing data Imputation via medians in pipelines; manifests report feature sets. Bias Class imbalance handled in risk; subgroup audits; reporting of counts and CIs. Reproducibility Code, configs, manifests; figures/tables regenerated via Makefile. Limitations No online exploration; local coverage not guaranteed; data noise and drift. Table 6: CONSORT-style reporting summary adapted for offline decision-support studies. 12
|
2509.16296
|
Learning in Stackelberg Markov Games
Jun He
Edwardson School of Industrial Engineering
Purdue University
West Lafayette, IN 47906
he184@purdue.edu
Andrew L. Liu
Edwardson School of Industrial Engineering
Purdue University
West Lafayette, IN 47906
andrewliu@purdue.edu
Yihsu Chen
Electrical and Computer Engineering
University of California, Santa Cruz
Santa Cruz, CA 95064
yihsuchen@ucsc.edu
Abstract
Designing socially optimal policies in multi-agent environments is a fundamen-
tal challenge in both economics and artificial intelligence. This paper studies a
general framework for learning Stackelberg equilibria in dynamic and uncertain
environments, where a single leader interacts with a population of adaptive fol-
lowers. Motivated by pressing real-world challenges such as equitable electricity
tariff design for consumers with distributed energy resources (such as rooftop
solar and energy storage), we formalize a class of Stackelberg Markov games
and establish the existence and uniqueness of stationary Stackelberg equilibria
under mild continuity and monotonicity conditions. We then extend the framework
to incorporate a continuum of agents via mean-field approximation, yielding a
tractable Stackelberg–Mean Field Equilibrium (S-MFE) formulation. To address
the computational intractability of exact best-response dynamics, we introduce a
softmax-based approximation and rigorously bound its error relative to the true
Stackelberg equilibrium. Our approach enables scalable and stable learning through
policy iteration without requiring full knowledge of follower objectives. We val-
idate the framework on an energy market simulation, where a public utility or a
State utility commission sets time-varying rates for a heterogeneous population
of prosumers. Our results demonstrate that learned policies can simultaneously
achieve economic efficiency, equity across income groups, and stability in en-
ergy systems. This work demonstrates how game-theoretic learning frameworks
can support data-driven policy design in large-scale strategic environments, with
applications to real-world systems like energy markets.
1
Introduction
Designing effective mechanisms in strategic multi-agent environments is a central problem in eco-
nomics and artificial intelligence. From taxation policy to resource allocation, many real-world
scenarios can be naturally modeled as Stackelberg games, where a leader first commits to a strategy
and followers respond rationally based on the leader’s choice. The asymmetry between the leader and
follower, combined with the strategic nature of their interaction, makes the Stackelberg framework
particularly relevant to policy design, regulation, and planning.
Classical approaches to solving Stackelberg games rely on strong assumptions about agents’ knowl-
edge and rationality. These approaches often require explicit models of the follower’s objective and
39th Conference on Neural Information Processing Systems (NeurIPS 2025).
arXiv:2509.16296v1 [eess.SY] 19 Sep 2025
best-response behavior, which may be unavailable or intractable in realistic settings. As a result,
such methods are limited to stylized, static environments. In contrast, many policy design problems
involve dynamic, stochastic, and partially observed environments, where agents adapt to the evolving
system and the leader must learn a policy that effectively shapes long-run outcomes.
Recent advances in multi-agent reinforcement learning (MARL) have opened up new possibilities for
mechanism design in such settings. The AI Economist framework [1] exemplifies this by introducing a
two-level reinforcement learning approach, where a planner (leader) and economic agents (followers)
co-adapt in a complex economic simulation. This framework demonstrates that reinforcement
learning (RL) can help learn optimal policies even when agents are strategic, heterogeneous, and
embedded in dynamic environments. It also highlights the potential of AI-driven simulations to reveal
emergent economic behaviors, such as specialization and tax gaming, that are difficult to capture
analytically.
Building on these perspectives, we propose a general learning framework for Stackelberg Markov
Games with infinite-horizon discounted rewards. We first establish the existence and uniqueness of
a stationary Stackelberg equilibrium under mild regularity conditions in the two-agent setting. We
then extend the framework to incorporate a single leader interacting with a continuum of competitive
followers, modeled via a mean-field approximation. This captures large-population environments
where each individual agent has negligible influence, and the leader seeks to shape collective behavior
through policy design. To compute equilibria in both settings, we introduce a reinforcement learning
algorithm that alternates between follower best-response learning and leader policy improvement,
without requiring explicit knowledge of the follower’s reward function.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 formalizes the
single-leader, single-follower Stackelberg Markov game and establishes the existence and uniqueness
of an equilibrium. Section 4 presents a learning algorithm and discusses its convergence. Section 5
extends the framework to a mean-field population of followers. Section 6 demonstrates the approach
through a tariff design problem in electricity markets, highlighting its potential for real-world policy
applications. All proofs are deferred to the appendix.
2
Related Work
Our work lies at the intersection of RL, game theory, and mechanism design, with a focus on
Stackelberg games – sequential settings where a leader commits to a strategy first and followers
respond optimally. Classical approaches rely on complete-information assumptions and bilevel
optimization techniques [2].
Recent work has turned to learning-based methods to address challenges in stochastic and complex
environments. The AI Economist [1], for example, applies deep multi-agent RL to optimize tax policy
in simulated economies with strategic agents, demonstrating the promise of data-driven mechanism
design. However, its focus is empirical and lacks theoretical guarantees.
Complementing these empirical advances, a parallel line of work has developed theoretical foun-
dations for Stackelberg learning. Bai et al.[3] provide sample complexity bounds for learning
Stackelberg equilibria under bandit feedback in general-sum games, while Fiez et al.[4] analyze
local convergence of gradient-based dynamics in continuous Stackelberg games. In contrast, we
establish global existence and uniqueness results and support learning in infinite-horizon dynamic
environments
Jordan et al. [5] study Stackelberg-Nash equilibria with myopic followers and propose provably
efficient RL algorithms. Their setting, however, assumes short-term follower behavior and limits
temporal expressiveness. Our framework focuses on fully strategic, forward-looking followers and
extends to mean-field populations.
Overall, our work provides a theoretically grounded and scalable framework for learning Stackelberg
equilibria in dynamic environments with either a single or a continuum of strategic followers, without
requiring explicit knowledge of agents’ reward functions and relying only on observed policies.
2
3
The Single-Leader-Single-Follower Stackelberg Markov Game
We now present the formal framework of Stackelberg Markov games and the corresponding learning
algorithms. We begin with a single-leader, single-follower game in a dynamic and uncertain environ-
ment. This setting serves as the foundation for our theoretical results and also for later extensions to
large-population games.
A Stackelberg game is a sequential-move game in which one agent commits to a strategy first,
anticipating its influence on the other’s response, and the second agent selects the best response
after observing this commitment. In the classical formulation, such games are static and played
under complete information, with equilibrium defined over a single strategic interaction. In contrast,
we study Stackelberg interactions embedded in dynamic environments, formally, infinite-horizon
discounted Markov games, where agents repeatedly interact with both one another and an evolving
state governed by stochastic dynamics. While this setting resembles repeated games, we do not
analyze the richer class of subgame perfect equilibria, which would require agents to condition
strategies on full play histories and belief hierarchies. Instead, we adopt the perspective common in
the literature on learning Nash equilibria in Markov games. That is, we focus on stationary strategies
and aim to learn a static Stackelberg equilibrium, interpreted as the leader committing to a fixed
policy and the follower adapting to it optimally within the environment. This formulation preserves
the sequential structure of Stackelberg play while remaining tractable for reinforcement learning and
policy optimization in a dynamic environment.
We now formalize this setting. Let I = {L, F} denote the two agents, where L represents the first
mover (leader) and F the second mover (follower). For notational convenience, we write −i to denote
the opponent of agent i; that is, if i = L, then −i = F, and vice versa. Additionally, we use P(X) to
denote the set of probability measures over a measurable set X, and x ∼Q to indicate that x follows
distribution Q. For sets X, Y, X × Y denotes their Cartesian product, and |X| the cardinality of X if
discrete. The definition of a Stackelberg Markov game is given below.
Definition 3.1 (Stackelberg Markov Game). A Stackelberg Markov game with a single leader and a
single follower is a tuple GS := (S, AL, AF , P, rL, rF , γ), where S is a (measurable) state space,
and AL and AF are the action spaces of two agents: the first mover, denoted by L (the leader), and
the second mover, denoted by F (the follower). The stochastic transition kernel P(· | s, aL, aF )
defines the probability distribution over next states, given current state s ∈S and joint actions
(aL, aF ) ∈AL × AF . The reward functions ri : S × AL × AF →R specify the one-step payoff
to agent i ∈{L, F}, and γ = (γL, γF ) ∈[0, 1)2 denotes the discount factors for the leader and the
follower, respectively.
In this paper, we focus on the case in which Si and Ai are discrete and finite for each i ∈I. The
leader first observes its state sL ∈SL and chooses an action aL ∈AL; then the follower observes
its state sF ∈SF and takes action aF ∈AF . The reward is then calculated through the reward
functions rL and rF . The leader and the follower take their actions according to their own policies
πi ∈Si 7→P(Ai). Each agent’s value function is defined as the discounted total expected return:
Vi(si, s−i, πi, π−i) := E
" ∞
X
t=0
γt
iri(si,t, s−i,t, ai,t, a−i,t)
si,0 = si, s−i,0 = s−i
#
, ∀i ∈I,
(1)
subject to si,t+1 ∼Pi(si, ai, a−i), ai,t ∼πi(·|si,t), ∀i ∈I, where the expectation is taken according
to both agents’ policies πL, πF , and the transition kernels PL, PF . Provided that the other player
chooses π−i, the goal for each player i is to find the best policy π∗
i that maximizes its value function
with initial state si ∈Si:
Vi(si, s−i, π∗
i , π−i) ≥Vi(si, s−i, πi, π−i), ∀πi ∈P(Ai), i ∈I.
(2)
To facilitate the analysis of optimal policies as defined in (2), and to ensure the existence of such
solutions, we introduce the following assumption.
Assumption 3.1 (Continuity and Boundedness). The reward functions ri(si, s−i, ai, a−i) and the
transition kernel functions Pi(si, ai, a−i) are continuous in Si and in Ai, A−i for each i ∈I. In
addition, the reward functions are uniformly bounded; that is, there exists a finite number R such that
|ri(si, s−i, ai, a−i)| ≤R, for all si ∈Si, ai ∈Ai, a−i ∈A−i, ∀i ∈I.
This assumption is central to guaranteeing the existence of optimal stationary policies. Under standard
conditions such as continuity, compactness, and boundedness, it is well established (e.g., [6, 7])
3
that stationary (i.e., time-invariant, memoryless) policies suffice for optimality in infinite-horizon
discounted Markov games. We therefore focus exclusively on stationary policies, which simplifies
the analysis and reflects both standard practice and real-world settings, where agents typically use
fixed policies that depend only on the current state, not on time or history. Specifically, a stationary
Stackelberg equilibrium (SSE) in the game GS is defined as follows.
Definition 3.2. Given a Stackelberg Markov game GS with shared state space S, a policy pair
(πSSE
L , πSSE
F ) is a stationary Stackelberg equilibrium if, for all s ∈S,
πSSE
F
∈arg
max
πF ∈P(AF ) VF (s; πSSE
L , πF ),
(3)
πSSE
L
∈arg
max
πL∈P(AL) VL(s; πL, πSSE
F ),
(4)
where Vi(s; πL, πF ) denotes the value function for agent i ∈{L, F} under initial state s and policy
pair (πL, πF ).
3.1
Equilibrium Existence and Uniqueness
We begin with the follower’s problem. At state sF ∈SF , given the leader’s policy πL, the follower
treats the leader as part of the environment and solves a single-agent MDP to compute an optimal
response π∗
F . This defines the best response mapping BRF : S × P(AL) →P(AF ), which maps
the joint state and leader policy to an optimal follower policy. The leader’s problem is analogous. At
state sL ∈SL, given the follower’s policy πF , the leader solves their own single-agent MDP, treating
the follower as part of the environment, and derives the optimal policy π∗
L. The corresponding best
response function is BRL : S × P(AF ) →P(AL). We now establish the existence and uniqueness
of an SSE, which requires the following assumptions.
Assumption 3.2 (Uniqueness of Best Response). For each i ∈I, and for any si ∈Si, s−i ∈
S−i, π−i ∈P(A−i), agent i’s best response function BRi(si, s−i, π−i) admits a unique policy.
Assumption 3.3 (Lipschitz Best Responses). There exist constants di ≥0 such that for each
i ∈{L, F}, and any policies π−i, π′
−i ∈P(A−i),
sup
si∈Si, s−i∈S−i
∥BRi(si, s−i, π−i) −BRi(si, s−i, π′
−i)∥1 ≤di∥π−i −π′
−i∥1.
Remark (Optimistic vs. Pessimistic Solutions). In Stackelberg games, an important (but subtle)
modeling choice involves how the leader anticipates the follower’s response when multiple best
responses exist. The optimistic (or leader-favorable) solution assumes that the follower selects the
best response that benefits the leader most, while the pessimistic (or leader-averse) solution assumes
the follower selects the worst-case best response from the leader’s perspective. Assumption 3.2
enforces uniqueness of the best response, thereby avoiding this ambiguity. This corresponds to the
optimistic case and simplifies the theoretical analysis. The pessimistic case, while important in
adversarial settings or robust decision-making, introduces discontinuities and set-valued mappings
that require more intricate tools beyond the scope of this paper. We leave such extensions to future
work, and focus here on establishing foundational results under the optimistic formulation.
Theorem 3.1 (Existence and Uniqueness of an SSE). Given Assumptions 3.1, 3.2 and 3.3, and
assume dLdF < 1. Then there exists a unique SSE to GS.
4
Learning in Stackelberg Games
We now turn to the algorithmic question of how an SSE can be learned in dynamic environments.
Our framework models the follower as adapting through best-response learning and the leader as
updating its policy based on observed follower behavior. In this section, we present a general RL
framework for computing equilibria, along with stabilization techniques to ensure convergence.
4.1
General RL Framework
We now introduce a general RL framework for computing an SSE, as defined in Definition 3.2. The
procedure alternates between follower best-response learning and leader policy improvement:
4
(Step 1) Follower’s Move:
Fix the leader’s policy πk
L. The follower computes the best response
policy: πk∗
F = BRF (sF , sL, πk
L), where (sF , sL) denotes the joint state.
(Step 2) Leader’s Move:
Given the follower’s best response πk∗
F , the leader solves its own best
response problem:πk+1
L
= BRL(sL, sF , πk∗
F ), again based on the current joint state.
(Step 3) Iterate Until Convergence:
Repeat Steps 1 and 2 until the leader’s policy converges, that
is, πk+1
L
= πk
L for some k > 0. Since we assume that each agent’s best response mapping is uniquely
defined, convergence of the leader’s policy implies convergence of the follower’s policy as well.
The above three-step scheme forms the basis for Algorithm 1 (pseudocode provided in Appendix A.2).
To implement this procedure in an RL setting, we now formalize value functions and Q-functions.
For notational simplicity, we let si = (si, s−i, a−i) denote agent i’s effective state and P i =
(Pi, P−i, π−i) its joint transition kernel, since each agent treats the opponent’s policy as part of the
environment in a sequential learning setup. Given a fixed opponent policy π−i, the Q-function for
agent i ∈I under policy πi is defined as:
Qπi,π−i
i
(si, ai) := E
" ∞
X
t=0
γt
iri(si,t, ai,t)
si,0 = si, ai,0 = ai
#
.
(5)
The corresponding optimal Q-function satisfies the Bellman equation:
Q∗,π−i
i
(si, ai) = ri(si, ai) + γi max
a′
i
Es′∼P i
Q∗,π−i
i
(s′
i, a′
i)
.
(6)
An optimal policy π∗
i can then be obtained through the best response mapping BRi(si, s−i, π−i) =
argmaxaiQ∗,π−i
i
(si, ai).
However, this general framework does not guarantee convergence unless the best-response mapping
argmax satisfies strong regularity conditions such as single-valuedness and Lipschitz continuity. To
address this, we introduce two smoothing techniques: (i) Boltzmann policies, which replace the
possible discontinuous argmax operator with a softmax approximation; and (ii) reward regularization,
which ensures strict concavity and uniqueness of the best response. These modifications enable us to
prove that the regularized learning dynamics converge to a unique fixed point.
4.2
Boltzmann Policy and Reward Regulation
To ensure desirable properties of optimal policies, we replace the generally non-smooth argmax
operator in best-response updates with smooth approximations and stabilization techniques. In this
section, we formalize the key components and justify their theoretical validity.
We first show that the Boltzmann policy is a Lipschitz continuous function that can approximate the
argmax operator. Specifically, a Boltzmann policy uses the following form with the softmax operator:
πi := softmaxαi(·|si) =
exp(αiQ∗,π−i(si, ·))
P
ai exp(αiQ∗,π−i(si, ai)), i ∈I,
(7)
where αi > 0 is the temperature hyperparameter. Lemma A.1 presents the Lipschitz continuity of
the softmax operator. Following the analysis in [8], we first discretize the policy space using finite
ε-nets to bound the error of the approximation to the argmax operator. That is, for a given policy
πi ∈P(Ai), we define a finite cover N ε
i = { ˆπi
(1), ˆπi
(2), · · · , ˆπi
(N ε
i )} ⊂P(Ai) such that for any
πi, there exists ˆπi ∈N ε
i with ∥πi −ˆπi∥1 ≤ε. The projection of πi onto the net is defined as:
projε(πi) := argminπ′
i∈N ε
i ∥πi −π′
i∥1.
(8)
This projection is applied to both the leader’s and the follower’s learned policies to enforce stability
and discretized action support. In practice, a key challenge in using the argmax operator is the
sensitivity to small changes in the Q-values when the action gap is small. To mitigate instability, the
policy must not only be discretized via an ε-net but also be sufficiently separated in action-value
space. We define the action gap at state si as:
δsi(Q∗,ˆπ(j)
−i ) :=
min
ai∈Ai\argmaxQ
∗,ˆπ(j)
−i (si,·)
max
a′
i∈Ai Q∗,ˆπ(j)
−i (si, a′
i) −Q∗,ˆπ(j)
−i (si, ai)
,
(9)
5
for all j = 1, · · · , N ε
i and i ∈I. Then, for any ε > 0, there exists a positive, decreasing function
ϕ(ε) and an ε-net N ε
i such that for all Q∗,ˆπ(j)
−i and at any state si, δsi(Q∗,ˆπ(j)
−i ) ≥ϕ(ε).
When following the 3-step computation framework, instead of using BRi for both agents, we adopt
smoothed policy updates based on softmax approximations. Specifically, for k = 0, 1, 2, . . ., the
policies are updated as: ˆπk
F = projε(softmaxαF ( ˆQ∗,ˆπk
L)) and ˆπk+1
L
= projε(softmaxαL( ˆQ∗,ˆπk
F )).
Theorem 4.1 (Error Bound for Projected Boltzmann Policy). Let Assumptions 3.2 and 3.3 hold, and
suppose that dLdF < 1. Fix ε > 0 and set the temperature parameters αL = αF = log(1/ε)/ϕ(ε),
where ϕ(ε) denotes the minimal action gap induced by the ε-net discretization. Let (ˆπk
L, ˆπk
F ) denote
the policy iterates generated by Algorithm 1 using projected Boltzmann policies with projection
radius ε. Then, for any K ≥log1/(dLdF )(2/ε), the leader’s policy satisfies
∥ˆπK
L −πSSE
L ∥1 ≤
1 + dL + 2|AL| + 2dL|AF |
1 −dLdF
+ 1
ε = O(ε),
(10)
where πSSE
L
denotes the leader’s stationary Stackelberg equilibrium policy in GS.
This bound shows that for sufficiently large α and small projection radius ε, the projected softmax pol-
icy closely approximates the true best response, while preserving Lipschitz continuity and stabilizing
learning dynamics.
To establish convergence of the three-step computational framework, one usually requires the best-
response operator argmax to be Lipschitz continuous, which can be too restrictive. To avoid imposing
such assumptions, we adopt a regularization approach. Regularization is widely used in RL, where
it can accelerate convergence of policy gradient methods [9] and enhance exploration and robust-
ness [10]. The regularized reward function for agent i ∈I is defined as:
rREG
i
(si, s−i, ai, a−i) = ri(si, s−i, ai, a−i) + H(πi(· | si)),
(11)
where H(·) is a ρ-strongly concave function. A common choice is the Shannon entropy, given by
H(πi(· | si)) = −P
ai∈Ai π(ai | si) log π(ai | si) for each si ∈Si.
We then analyze the game using the regularized value function for for each si ∈Si:
V REG
i
(si, s−i, πi, π−i) := E
" ∞
X
t=0
γt
irREG
i
(si,t, s−i,t, ai,t, a−i,t),
, si,0 = si, , s−i,0 = s−i
#
. (12)
In Theorem A.2 (Appendix A.5), we show that under standard continuity and boundedness conditions,
the best-response mappings in the regularized game are Lipschitz continuous. This, in turn, implies
that the policy iterates converge to a fixed point under the regularized learning dynamics.
5
Extension to Stackelberg Games with Mean-Field (MF) Followers
We now consider the extension where there is one leader but an infinite number of followers in a
(discounted) infinite-horizon Markovian setting. This formulation captures many real-world scenarios
where a central authority (such as a platform, a regulator, or a policymaker) interacts with a large
population of agents whose individual behaviors are negligible but whose aggregate effect shapes the
system dynamics. To formalize this setting, we adopt a mean-field approach in which followers are
modeled as homogeneous and interchangeable. In the limit as the number of followers approaches
infinity, each individual has vanishing influence on the aggregate behavior, which is captured by
an MF distribution over states and actions. We assume the followers are competitive and analyze
the interaction from the perspective of a single representative follower responding to the MF. This
reduction preserves the coupling between individual incentives and population-level dynamics while
simplifying the analysis. For notational consistency, we retain the index set I = {L, F}, and
denote the state and action spaces of the representative follower by SF and AF , respectively. Let
µF,t ∈P(SF × AF ) denote an MF distribution at time t, representing the joint distribution of the
population’s states and actions in the infinite-agent limit:
µF,t(s, a) := lim
N→∞
PN
j=1,j̸=i 1(sj
F,t,aj
F,t)=(s,a)
N
, ∀s ∈SF , a ∈AF ,
(13)
6
where N is the number of followers, and (sj
F,t, aj
F,t) denotes the j-th follower’s state and action pair.
The indicator function 1(sj
F,t,aj
F,t)=(s,a) = 1 if (sj
F,t, aj
F,t) = (s, a), and 0 otherwise. The definition
of a Stackelberg game with MF followers can be stated as follows.
Definition 5.1. A Stackelberg Markov game GMF with a single leader and a MF follower consists of
state spaces (Si)i∈I, action spaces (Ai)i∈I, a stochastic transition kernel Pi : Si×Ai×A−i×P(SF ×
AF ) →P(Si), a reward function for each agent ri : Si × S−i × Ai × A−i × P(SF × AF ) →R,
and discount factors (γi)i∈I. In this game, the leader chooses its strategy first, and the MF follower
chooses its strategy after observing the leader’s policy while playing against the MF.
The leader and the follower take their actions according to their own policies πi ∈Si →P(Ai). As
the reward function and transition kernel are redefined with the MF as an additional argument, each
agent’s value function is also redefined as:
Vi(si, s−i, πi, π−i, µF ) := E
" ∞
X
t=0
γt
iri(si,t, s−i,t, ai,t, a−i,t, µF )
si,0 = si, s−i,0 = s−i
#
,
(14)
subject to si,t+1 ∼Pi(si, ai, a−i, µF ), ai,t ∼πi(·|si,t, µF ), ∀i ∈I, where the expectation is taken
according to both agents’ policies πL, πF , and the transition kernels PL, PF . The MF follower
here assumes that the MF remains as µF throughout the entire lifetime. This can be achieved by
maintaining a belief vector as in [11], which can then be updated (adaptively) when a new leader policy
is observed. Finally, the evolution of MF is a mapping Γ : P(SF × AF ) × SL × SF →P(SF × AF )
that maps from current MF and both agents’ policies to the next MF, defined as follows:
µ′
F := Γ(µF , πL, πF ), ∀µF ∈P(SF × AF ), πL ∈P(AL), πF ∈P(AF ),
(15)
as a new component to the game. Then an equilibrium is defined as follows.
Definition 5.2 (Stationary Stackelberg Mean-Field Equilibrium (SS-MFE)). Given a Stackelberg
Markov game GMF with MF followers, the tuple of the leader and the MF follower’s stationary
policies (πSE
L , πSE
F , µSE
F ) form a stationary Stackelberg equilibrium in GMF, if the following conditions
are satisfied for each state sL ∈SL, sF ∈SF , :
1. Follower – For any policy πF ∈P(AF ),
VF (sF , sL, πSE
F , πSE
L , µSE
F ) ≥VF (sF , sL, πF , πSE
L , µSE
F ).
(16)
2. Consistency of Follower’s MF – The evolution of the MF µF satisfies that,
µSE
F = Γ(µSE
F , πSE
L , πSE
F ).
(17)
3. Leader – For any policy πL ∈P(AL),
VL(sL, sF , πSE
L , πSE
F , µSE
F ) ≥VL(sL, sF , πL, πSE
F , µSE
F ).
(18)
The consistency condition in Item (2) indicates that when all followers adopt a policy in response
to the assumed mean field µSE
F as in (16), the resulting population distribution coincides with the
assumed µSE
F .
5.1
Existence and Uniqueness of MF Stackelberg Equilibrium
Given the initial leader’s policy and initial MF, the game consists of the same 3 steps as in Section
3, which will be referred to as the “outer iteration”. The follower’s move consists of an iterative
approach between learning a policy and waiting for the MF to update after executing the policy,
which will be referred to as the “inner iteration”. We re-define the best response mappings with the
introduction of the MF. That is, BRi : Si × S−i × P(A−i) × P(SF × AF ) 7→P(Ai) for both i ∈I.
Then, at each outer iteration k, given the leader’s policy πk
L, the follower and mean-field dynamics
proceed through an inner loop with iterator τ = 0, 1, · · · :
πk,τ+1
F
= BRF (sF , sL, πk
L, µk,τ
F ),
(19)
µk,τ+1
F
= Γ(µk,τ
F , πk
L, πk,τ+1
F
),
(20)
repeating until convergence to (πk∗
F , µk∗
F ), which defines the mean-field equilibrium corresponding to
πk
L. The leader then updates its policy as:
πk+1
L
= BRL(sL, sF , πk∗
F , µk∗
F ).
(21)
7
Theorem 5.1 (Existence and Uniqueness of SS-MFE). Under the assumptions of continuity and
boundedness of reward functions, and uniqueness and Lipschitz of BR policies for both the leader
and the MF follower, there exists a unique stationary SS-MFE to GMF if dµ
µ + dF
µ dµ
F < 1 and
dL
F +dL
µ
1−(dµ
F +dµ
µ+dF
µ ) < 1.
5.2
General RL Approach for Finding an SS-MFE
We finally present a general RL-based algorithm to solving the single-leader-MF-follower Stackelberg
game GMF based on fixed point iterations. The pseudocode is provided in Algorithm 2 in the appendix.
It follows the same overall structure as the RL procedure for the single-leader-single-follower setting
introduced in Section 3. The key difference lies in the follower’s best-response computation, which
in the mean-field setting must account for the dynamic evolution of the population distribution.
Specifically, the best response becomes a composite map BRF µ defined through a nested fixed-point
iteration between the follower’s best response and the MF update rule Γ in 15. More details are
provided in Appendix A.6.
Note that the stabilization techniques from Section 4 apply directly in this setting. In particular, soft
policy interpolation and projection onto structured distributions (that is, Boltzmann and ε-nets) can be
used in both inner and outer loops to ensure stability and maintain the regularity conditions required
for convergence guarantees in Theorem 5.1.
6
Numerical Experiment
While most test environments for Stackelberg learning are drawn from stylized games or synthetic
benchmarks, we apply our framework to a real-world application: electricity tariff design. This
application is both high-impact and strategically rich, capturing the interaction between a rate-setting
authority (such as a utility or a state energy commission) and a large population of energy users.
In particular, our test case is motivated by the growing adoption of distributed energy resources
(DERs), such as rooftop solar and battery storage, and the resulting risk of a utility death spiral – a
destabilizing feedback loop in which fixed grid maintenance costs are increasingly concentrated on a
shrinking base of conventional consumers. As higher-income households invest in DERs, they reduce
their grid dependence or even export electricity for profit, thereby lowering their net payments to the
utility. Meanwhile, lower-income households, who are less likely to afford DERs, continue to rely
on the grid and bear a disproportionate share of the infrastructure costs. This dynamic exacerbates
energy inequity and raises serious concerns [12]. Building on the AI Economist framework, we use
our Stackelberg mean-field learning framework to model and solve this challenge with theoretical
rigor and practical relevance.
We adopt the same test case and settings as in [12, 13]. In our setup, the leader sets retail electricity
tariffs, including a volumetric rate and an income-based fixed charge. The objective is to balance
economic efficiency and equity. On the follower side, we model three aggregators, each representing
a node in the grid and managing a population of energy users—both prosumers (who can produce
and consume electricity) and conventional consumers. These aggregators operate under a mean-field
game framework: each learns charging and discharging policies for its prosumers’ solar and storage
systems, and responds to both the utility’s pricing policy and real-time locational marginal prices
(LMPs) determined by a power system operator via economic dispatch.
The power network we consider consists of a three-node grid with four generators and three transmis-
sion lines – the simplest configuration that captures loop flows. Each node hosts 3,000 conventional
consumers with identical income ($15,000). The prosumer population varies by node: Node 1 has
1,000 low-income prosumers ($25,000), Node 2 has 500 middle-income prosumers ($45,000), and
Node 3 has 300 high-income prosumers ($65,000). This gradient reflects increasing wealth and
energy capacity across nodes, while keeping consumer profiles uniform. The utility (leader) learns
a pricing policy that includes per-kWh charges (aka, the volumetric charges) and fixed charges –
intended to recover grid maintenance costs – for each customer/prosumer type at each node, aiming
to minimize inequality in energy expenditure incidence (EEI), defined as the percentage of household
income spent on electricity, across the population.
Each aggregator serves as an MF follower and independently learns prosumer strategies based on the
observed real-time electricity prices and utility pricing. We use Proximal Policy Gradient [14] for
8
both leader and followers. Each simulated day includes 8 time steps (3-hour intervals). We assume
the utility updates its policy every 3 days, while aggregators update at every time step. The simulation
runs for 100 days, repeated 5 times with different random seeds. Experiments were conducted on a
Windows 11 system with a 13th Gen Intel Core i7-13700KF (24 cores) and an NVIDIA GeForce
RTX 4070.
Figure 1 compares wholesale electricity prices at the beginning and end of training, with and without
learning. Prices initially align across both cases, but under learning, volatility reduces significantly,
and daily patterns stabilize.
0
6
12
18
0
6
12
18
0
6
12
18
Hour of the Day
10
15
20
25
Price ($/MWh)
Hub Price (First 3 Days)
With RL
No RL
0
6
12
18
0
6
12
18
0
6
12
18
Hour of the Day
Hub Price (Last 3 Days)
With RL
No RL
Figure 1: Comparison of nodal prices with and without learning. RL reduces price volatility and
leads to more stable daily patterns.
Figure 2 displays the learned electricity tariffs – both per-kWh variable rates (adders on top of the
wholesale electricity prices) and fixed charges – over the course of training. Over time, the utility’s
policy converges to a pricing structure in which higher-income groups are assigned higher fixed
charges, helping to align payment responsibility with ability to pay and maintain energy equity. More
numerical results are shown in Appendix A.7.
0
20
40
60
80
100
Day
1
2
3
Rate ($/MWh)
Buy and Sell Rates
Buy Rate
Sell Rate
0
20
40
60
80
100
Day
Fixed Rates for Prosumers
Node 1
Node 2
Node 3
0
20
40
60
80
100
Day
Fixed Rates for Consumers
Node 1
Node 2
Node 3
Figure 2: Learned variable charges (left), fixed charges for prosumers (middle), and consumers
(right).
References
[1] S. Zheng, A. Trott, S. Srinivasa, D. Parkes, and R. Socher, “The AI economist: Improving
equality and productivity with AI-driven tax policies,” Science Advances, vol. 8, no. 24, p.
eabm1799, 2022.
[2] S. Dempe and A. Zemkoho, “Bilevel optimization,” in Springer Optimization and Its Applica-
tions.
Springer, 2020, vol. 161.
[3] Y. Bai, C. Jin, H. Wang, and C. Xiong, “Sample-efficient learning of Stackelberg equilibria in
general-sum games,” in Advances in Neural Information Processing Systems (NeurIPS), 2021.
[4] T. Fiez, B. Chasnov, and L. J. Ratliff, “Convergence of learning dynamics in Stackelberg games,”
arXiv preprint arXiv:1906.01217, 2020.
[5] H. Zhong, Z. Yang, Z. Wang, and M. I. Jordan, “Can reinforcement learning find Stackelberg-
Nash equilibria in general-sum markov games with myopically rational followers?” Journal of
Machine Learning Research, vol. 24, no. 48, pp. 1–52, 2023.
9
[6] M. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, ser.
Wiley Series in Probability and Statistics.
Wiley, 2014.
[7] A. Agarwal, N. Jiang, S. M. Kakade, and W. Sun, “Reinforcement learning: Theory and
algorithms,” CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep, vol. 32, p. 96, 2019.
[8] X. Guo, A. Hu, R. Xu, and J. Zhang, “A general framework for learning mean-field games,”
Math. Oper. Res., vol. 48, no. 2, p. 656–686, May 2023.
[9] S. Cen, C. Cheng, Y. Chen, Y. Wei, and Y. Chi, “Fast global convergence of natural policy
gradient methods with entropy regularization,” Operations Research, vol. 70, no. 4, pp. 2563–
2578, 2022.
[10] G. Neu, A. Jonsson, and V. Gómez, “A unified view of entropy-regularized Markov decision
processes,” arXiv preprint arXiv:1705.07798, 2017.
[11] C. Feng and A. L. Liu, “Decentralized integration of grid edge resources into wholesale
electricity markets via mean-field games,” arXiv preprint arXiv:2503.07984, 2025.
[12] Y. Chen, A. L. Liu, M. Tanaka, and R. Takashima, “Optimal retail tariff design with prosumers:
Pursuing equity at the expenses of economic efficiencies?” IEEE Transactions on Energy
Markets, Policy and Regulation, vol. 1, no. 3, pp. 198–210, 2023.
[13] J. He and A. L. Liu, “Evaluating the impact of multiple DER aggregators on wholesale energy
markets: A hybrid mean field approach,” arXiv preprint arXiv:2409.00107, 2024.
[14] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization
algorithms,” arXiv preprint arXiv:1707.06347, 2017.
[15] S. Banach, “Sur les opérations dans les ensembles abstraits et leur application aux équations
intégrales,” Fundamenta Mathematicae, vol. 3, no. 1, pp. 133–181, 1922. [Online]. Available:
http://eudml.org/doc/213289
[16] S. Shalev-Shwartz, “Online learning: Theory, algorithms, and applications,” Ph.D. thesis, The
Hebrew University of Jerusalem, 08 2007.
[17] B. Anahtarci, C. D. Kariksiz, and N. Saldi, “Q-learning in regularized mean-field games,”
Dynamic Games and Applications, vol. 13, no. 1, pp. 89–117, 2023.
[18] M. Towers, A. Kwiatkowski, J. Terry, J. U. Balis, G. D. Cola, T. Deleu, M. Goulão,
A. Kallinteris, M. Krimmel, A. KG, R. Perez-Vicente, A. Pierré, S. Schulhoff, J. J. Tai, H. Tan,
and O. G. Younis, “Gymnasium: A standard interface for reinforcement learning environments,”
2024. [Online]. Available: https://arxiv.org/abs/2407.17032
[19] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann, “Stable-baselines3:
Reliable reinforcement learning implementations,” Journal of Machine Learning Research,
vol. 22, no. 268, pp. 1–8, 2021. [Online]. Available: http://jmlr.org/papers/v22/20-1364.html
[20] W. E. Hart, J.-P. Watson, and D. L. Woodruff, “Pyomo: modeling and solving mathematical
programs in python,” Mathematical Programming Computation, vol. 3, no. 3, pp. 219–260,
2011.
[21] M. L. Bynum, G. A. Hackebeil, W. E. Hart, C. D. Laird, B. L. Nicholson, J. D. Siirola, J.-P.
Watson, and D. L. Woodruff, Pyomo–optimization modeling in python, 3rd ed.
Springer
Science & Business Media, 2021, vol. 67.
[22] A. Robinson, “Solar PV Analysis of Honolulu, United States,” 2024. [Online]. Available:
https://profilesolar.com/locations/United-States/Honolulu/
[23] Hawaiian Electric, “Power facts,” 3 2024. [Online]. Available: https://www.hawaiianelectric.
com/about-us/power-facts
[24] M. Roozbehani, M. A. Dahleh, and S. K. Mitter, “Volatility of power grids under real-time
pricing,” IEEE Transactions on Power Systems, vol. 27, no. 4, pp. 1926–1940, 2012.
[25] EIA, “Residential energy consumption survey 2015,” https://www.eia.gov/consumption/
residential/data/2020/.
10
A
Technical Appendices and Supplementary Material
A.1
Proof of Theorem 3.1 – Existence and Uniqueness of an SSE
The proof relies on the well-known Banach fixed point theorem, for which we first restate the
definition of a contraction mapping.
Definition A.1 (Contraction Mapping). Let (X, d) be a non-empty complete metric space, where d
is a metric on X. A map T : X 7→X is called a contraction mapping on X if for any x, y ∈X, there
exists a constant c ∈[0, 1) such that d(T(x), T(y)) ≤cd(x, y).
The Banach fixed point theorem is stated as follows.
Theorem A.1 (Banach Fixed-Point Theorem [15]). Let (X, d) be a non-empty complete metric space,
and let T : X →X be a contraction mapping. Then T admits a unique fixed point x∗∈X such that
T(x∗) = x∗.
In our work, we choose the distance function to be the ℓ1-norm. We now prove Theorem 3.1.
Proof. Fix sL, sF .
For any πL, π′
L
∈
P(SL), let π∗
F
=
BRF (sF , sL, πL) and π∗′
F
=
BRF (sF , sL, π′
L), then
∥BRL(sL, sF , π∗
F ) −BRL(sL, sF , π∗′
F )∥1 ≤dL∥π∗
F −π∗′
F ∥1
= dL∥BRF (sF , sL, πL) −BRF (sF , sL, π′
L)∥1 ≤dLdF ∥πL −π′
L∥1.
By Banach fixed-point theorem, with 0 ≤dLdF < 1, there exists a unique fixed point of BRL.
Because the best response mappings are single-valued, there exists a unique SSE to the game GS.
A.2
Algorithm 1 – RL-Framework for Finding an SSE
The following pseudocode implements the general, three-step learning framework described in
Section 4.1. Each agent applies an RL algorithm to update its policy, and convergence is monitored
based on the leader’s policy updates.
Algorithm 1: An RL Approach to Single-Leader-Single-Follower Stackelberg Games
Input: Initial states s0
L, s0
F , leader’s policy π0
L, tolerance tol, RL algorithms AlgL, AlgF
for Iteration k = 0, 1, 2, · · · do
Leader takes action ak
L ∼πk
L(·|sk
L) ;
Follower learns its best response policy πk
F = BRF (sk
F , sk
L, πk
L) through AlgF ;
Follower takes action ak
F ∼πk
F (·|sk
F ) ;
Leader learns its best response policy πk+1
L
= BRL(sk
L, sk
F , πk
F ) through AlgL ;
State transition sk+1
L
∼PL(sk
L, ak
L, ak
F ), sk+1
F
∼PF (sk
F , ak
F , ak
L) ;
If ∥πk+1
L
−πk
L∥1 ≤tol, exit the loop.
end
Return (πSSE
L , πSSE
F ) = (πk
L, πk
F ) as the SSE.
A.3
Properties of Boltzmann Policies
The first lemma is to show the Lipschitz continuity of softmax under ℓ1-norm.
Lemma A.1. Let softmaxα be the softmax function with temperature α > 0 defined by
softmaxα(x)i =
exp(αxi)
Pn
j=1 exp(αxj).
Then softmaxα : Rn 7→[0, 1]n is Lipschitz continuous with respect to the ℓ1 norm with Lipschitz
constant √nα; that is,
∥softmaxα(x) −softmaxα(y)∥1 ≤√nα∥x −y∥1,
∀x, y ∈Rn.
11
Proof. Choose arbitrary vectors x, y ∈Rn. We first show that the softmaxα function is α-Lipschitz
with respect to ℓ2-norm. For brevity, in this proof only, we let si = softmaxα(x)i. We start by
computing the Jacobian matrix of softmax(x), denoted as Jα(x), whose entries are given by:
Jα(x)ij = ∂si
∂xj
=
(αsi(1 −si),
if i = j,
−αsisj,
if i ̸= j,
and for any vector u ∈Rn, we have,
u⊤Jα(x)u = α
n
X
i=1
siu2
i −
n
X
i=1
siui
!2
.
By applying Jensen’s inequality to the convex function x 7→x2 and by viewing softmax as a
probability distribution, we have (Pn
i=1 siui)2 ≤Pn
i=1 siu2
i . This implies that:
0 ≤uT Jα(x)u ≤α
n
X
i=1
siu2
i ≤α∥u∥2.
This shows that the eigenvalues of Jα(x) lie between 0 and α. Hence, its spectral norm satisfies that
∥Jα(x)∥2 ≤α. By the Mean Value Theorem for vector-valued functions, we have:
∥softmaxα(x) −softmaxα(y)∥2 ≤sup
t∈[0,1]
∥Jα(y + t(x −y))∥2 · ∥x −y∥2 ≤α∥x −y∥2.
Combining with the fact that ∥x∥2 ≤∥x∥1 ≤√n∥x∥2, the following holds:
∥softmaxα(x) −softmaxα(y)∥1 ≤√n∥softmaxα(x) −softmaxα(y)∥2 ≤√nα∥x −y∥1.
Before proving Theorem 4.1, we need another lemma, which is to show the closeness of softmax
distribution and the uniform argmax distribution.
Lemma A.2. Let x ∈Rn and let M = {j | xj = maxi=1,··· ,n xi} be the index set of maximizers
with cardinality |M|. Define the uniform argmax distribution as
argmaxU(x)i =
1/|M|,
if i ∈M,
0,
otherwise.
(22)
Then for any α > 0, the ℓ1 distance between softmaxα(x) and argmaxU(x) is bounded as
∥softmaxα(x) −argmaxU(x)∥1 ≤2ne−αδ,
where δ := minj /∈M(x∗−xj) > 0 is the minimum gap between the top value x∗and the next largest
value when |M| < n, and δ := ∞when |M| = n (all values are equal in x).
Proof. For any x ∈Rn, let x∗= maxi=1,··· ,n xi be the largest entry. Let M = {j | xj = x∗} be
the set of maximizers with cardinality |M|. We first consider the case where |M| < n. For each
i = 1, . . . , n, define the softmax function with temperature α > 0:
softmaxα(x)i =
exp(αxi)
Pn
j=1 exp(αxj),
and also define the uniform argmax function as in (22). For simplicity, let Z = Pn
j=1 exp(αxj) =
|M|eαx∗+ P
j /∈M eαxj. For all i ∈M, we can rewrite the softmax function as softmaxα(x)i =
eαx∗/Z. Then the ℓ1 distance is:
∥softmaxα(x) −argmaxU(x)∥1 =
X
i∈M
1
|M| −eαx∗
Z
+
X
i/∈M
eαxi
Z
= |M|
1
|M| −eαx∗
Z
+
P
i/∈M eαxi
Z
= 2(Z −|M|eαx∗)
Z
=
2 P
j /∈M eαxj
Z
.
12
Now let δ = minj /∈M(x∗−xj) > 0. Then for all j /∈M, xj ≤x∗−δ, we have eαxj ≤eα(x∗−δ) =
e−αδeαx∗. As a result, the numerator satisfies that P
j /∈M eαxj ≤(n −|M|)e−αδeαx∗, and the
denominator satisfies that Z = |M|eαx∗+ P
j /∈M eαxj ≥|M|eαx∗. With |M| ≥1, we get:
∥softmaxα(x) −argmaxU(x)∥1 ≤2 · (n −|M|)e−αδeαx∗
|M|eαx∗
≤2(n −1)e−αδ ≤2ne−αδ.
In the case where |M| = n, all values in x are equal and δ := ∞. The softmax function returns a
uniform distribution over all n elements and is identical to the uniform argmax function. The ℓ1-
distance between the two functions are therefore 0. Then setting δ = ∞preserves the inequality.
A.4
Proof of Theorem 4.1 – Error Bound for Projected Boltzmann Policy
Proof. Fix sF , sL. For notation simplicity, We drop the two state arguments in the two BRi mappings.
When following Algorithm 1, we let the updates be as follows:
ˆπk
F = projε(softmaxαF ( ˆQ∗,πk
L)), and
ˆπk+1
L
= projε(softmaxαL( ˆQ∗,πk
F )),
where the function projε returns the projected Boltzmann policies through their corresponding ε-net.
For notation simplicity, we let ˜πi = softmaxαi( ˆQ∗,π−i) denote agent i’s Boltzmann policy. Then, at
each step k, we apply Lemma A.2, along with the observation that the ε-net enforces a minimum
action gap of ϕ(ε) by limiting the other player’s policy space (and thus constraining the resulting
Q-values).
∥ˆπk+1
L
−πSE
L ∥1 ≤∥ˆπk+1
L
−˜πk+1
L
∥1 + ∥˜πk+1
L
−BRL(ˆπk
F )∥1 + ∥BRL(ˆπk
F ) −πSE
L ∥1
≤ε + ∥softmaxα( ˆQ∗,πk
F )) −argmaxU( ˆQ∗,πk
F ))∥1 + ∥BRL(ˆπk
F ) −BRL(πSE
F )∥1
≤ε + 2|AL|e−αLϕ(ε) + dL∥ˆπk
F −πSE
F ∥1,
where the last term can be similarly bounded as follows:
∥ˆπk+1
F
−πSE
F ∥1 ≤ε + 2|AF |e−αF ϕ(ε) + dF ∥ˆπk
L −πSE
L ∥1.
Combining the two recursive inequalities, we obtain:
∥ˆπk+1
L
−πSE
L ∥1 ≤ε + 2|AL|e−αLϕ(ε) + dL
ε + 2|AF |e−αF ϕ(ε) + dF ∥ˆπk
L −πSE
L ∥1
= (1 + dL) ε + 2|AL|e−αLϕ(ε) + 2dL|AF |e−αF ϕ(ε) + dLdF ∥ˆπk
L −πSE
L ∥1.
Unfolding the recursion over k yields:
∥ˆπk+1
L
−πSE
L ∥1 ≤
(1 + dL)ε + 2|AL|e−αLϕ(ε) + 2dL|AF |e−αF ϕ(ε)
k
X
κ=0
(dLdF )κ
+ (dLdF )k+1∥ˆπ0
L −πSE
L ∥1.
Assuming dLdF < 1, and setting αL = αF = log(1/ε)
ϕ(ε) , the sum converges as k →∞. At a fixed
iteration K, the bound is:
∥ˆπK
L −πSE
L ∥1 ≤(1 + dL)ε + 2|AL|e−αLϕ(ε) + 2dL|AF |e−αF ϕ(ε)
1 −dLdF
+ (dLdF )K∥ˆπ0
L −πSE
L ∥1
≤(1 + dL + 2|AL| + 2dL|AF |)ε
1 −dLdF
+ 2(dLdF )K,
where we also used the fact that the ℓ1-norm of the difference between two distributions over
a finite set is upper bounded by 2 (consider for any distribution π, π′ over the same finite set,
∥π −π′∥1 ≤∥π∥1 + ∥π′∥1 = 2). To achieve 2(dLdF )K ≤ε, we need:
K ≥logdLdF
ε
2
,
which bounds the error to:
∥ˆπK
L −πSE
L ∥1 ≤
1 + dL + 2|AL| + 2dL|AF |
1 −dLdF
+ 1
ε = O(ε).
13
A.5
Regularization
Regularization is widely used in reinforcement learning (RL) to promote stability, enhance exploration,
and improve convergence rates [9, 10]. In this section, we consider a general class of regularized
learning schemes, where the reward function is augmented with a concave regularizer applied to the
agent’s policy.
For each agent i ∈I, we define the regularized reward function as:
rREG
i
(si, s−i, ai, a−i) = ri(si, s−i, ai, a−i) + H(πi(· | si)),
(23)
where H : P(Ai) →R is a ρ-strongly concave function. A typical example is negative Shannon
entropy:
H(πi(· | si)) = −
X
ai∈Ai
πi(ai | si) log πi(ai | si),
which yields a Boltzmann policy as the optimal solution. However, our analysis does not depend on
this specific form, and applies to any regularizer satisfying the strong concavity condition.
We define the regularized value function as:
V REG
i
(si, s−i, πi, π−i) := E
" ∞
X
t=0
γt
irREG
i
(si,t, s−i,t, ai,t, a−i,t)
si,0 = si, s−i,0 = s−i
#
, ∀i ∈I.
(24)
Lemma A.3 below shows that adding a strongly concave regularization term ensures the regularized
value function V REG
i
is also strongly concave in the agent’s policy πi, which in turn guarantees the
uniqueness of the optimal solution π∗
i = argmaxπiV REG
i
(si, s−i, πi, π−i). We now establish that,
under the following assumption, which is milder than those required in Assumption 3.3, the resulting
regularized best-response mapping admits a fixed point.
To facilitate the analysis, we define the diameter of the (finite) action space as the maximum distance
between any two actions:
diam(Ai) :=
max
ai,a′
i∈Ai ∥ai −a′
i∥1.
Without loss of generality, we normalize the action space so that diam(Ai) = 1 for all i ∈I.
Assumption A.1 (Lipschitz Reward and Transition Kernel). For each agent i ∈I, for any states
si ∈Si, s−i ∈S−i, and for any actions ai, a′
i ∈Ai, a−i, a′
−i ∈A−i, the reward function ri and the
transition kernel Pi satisfy the condition that, there exists dr, dP ≥0 such that
|ri(si, s−i, ai, a−i) −ri(si, s−i, a′
i, a′
−i)|
≤dr(∥si −s′
i∥1 + ∥s−i −s′
−i∥1 + ∥ai −a′
i∥1 + ∥a−i −a′
−i∥1),
(25)
|Pi(si, ai, a−i) −ri(si, a′
i, a′
−i)|
≤dP (∥si −s′
i∥1 + ∥s−i −s′
−i∥1 + ∥ai −a′
i∥1 + ∥a−i −a′
−i∥1),
(26)
and in addition, we assume γidP /2 ∈[0, 1].
Now we define the (regularized) best response mapping for each i ∈I as BRi : Si×S−i×P(A−i) 7→
P(Ai). That is, follows:
BRREG
i
(si, s−i, π−i) := argmaxπiV REG
i
(si, s−i, πi, π−i).
(27)
Then, the Lipschitz continuity condition can be established:
Theorem A.2 (Lipschitz Regularized Best Response). Under Assumptions 3.1 and A.1, the best
response mapping BRREG
i
for each agent i ∈I to GS with regularized reward is Lipschitz with respect
to the other agent’s policy π−i; that is, for any πL, π′
L ∈P(AL) and πF , π′
F ∈P(AF ), there exist
constants dREG
L
, dREG
F
≥0 such that,
∥BRREG
L
(sL, sF , πF ) −BRREG
L
(sL, sF , π′
F )∥≤dREG
L
∥πF −π′
F ∥,
(28)
∥BRREG
F
(sF , sL, πL) −BRREG
F
(sF , sL, π′
L)∥≤dREG
F
∥πL −π′
L∥,
(29)
where the constants are defined symmetrically in the form of:
dREG
i
= dr
ρ
1 +
γi
(1 −γi)(1 −γidP /2) +
γidP /2
1 −γidP /2
, ∀i ∈I.
(30)
14
We first prove that adding a strongly concave regularization term to the value function can ensure
the uniqueness as well as the continuity of the argmax operator. As the proof is symmetric for both
agents, we drop the agent’s index i for simplicity and use superscript † to denote the opponent’s
components. With a slight abuse of notations, in this proof only, we use s = (s, s†), a = (a, a†) to
represent the joint states and actions, and when necessary, we unpack the argument list. We use π, π†
to indicate the policies to the agent and its opponent, respectively. The regularized reward and value
functions can be re-written concisely as follows:
rREG(s, a) = r(s, a) + H(π),
(31)
V REG(s, π, π†) := E
" ∞
X
t=0
γtrREG(st, at)
s0 = s
#
,
(32)
where H is a ρ-strongly concave function. The following lemma is first needed:
Lemma A.3. The argmaxπV REG admits a unique solution.
Proof. We first argue that the expected reward E[r(s, a)] is linear w.r.t. π†. In fact, the linearity is a
direct consequence of the Lebesgue measure by viewing the distribution π† as the measure function.
Then the sum of a linear function and a ρ-strongly concave function preserves the ρ-strong concavity.
Thus, the argmaxπV REG admits a unique solution.
To proceed with our analysis, we state the following properties of the Fenchel conjugate, established
in Lemma 15 of [16].
Lemma A.4 (Fenchel Conjugate Properties [16]). Let E = Rm, m ≥1 with inner product ⟨·, ·⟩. Let
function g : E 7→R+ be a differentiable and ρ-strongly convex function with respect to some norm
∥· ∥, where R+ = R ∪{−∞, ∞}. Let X be its domain. The Fenchel conjugate g⋆is defined as
follows:
g⋆(y) = max
x∈X⟨x, y⟩−g(x).
(33)
Then the following three properties hold:
(i) g⋆is differentiable on E, (ii) ∇g⋆(y)
=
argmaxx∈X⟨x, y⟩−g(x), and (iii) g⋆is 1
ρ-smooth with respect to ∥· ∥⋆, the dual norm of ∥· ∥.
That is, for any y1, y2 ∈E,
∥∇g⋆(y1) −∇g⋆(y2)∥≤1
ρ∥y1 −y2∥⋆.
(34)
We also need the property of ℓ1-norm of distributions on the set of probability distribution over finite
sets.
Lemma A.5. Suppose that there exists a real-valued function f on a finite set E. For any two
probability distributions ψ1, ψ2, we have
X
x∈E
f(x)ψ1(x) −
X
x∈E
f(x)ψ2(x)
≤maxx∈E f(x) −minx∈E f(x)
2
∥ψ1 −ψ2∥1,
(35)
Proof. We first have that P
x(ψ1(x) −ψ2(x)) = 0 and hence for any constant c ∈R, one has
P
x c(ψ1(x) −ψ2(x)) = 0, then
X
x∈E
f(x)ψ1(x) −
X
x∈E
f(x)ψ2(x)
=
X
x∈E
(f(x) −c)(ψ1(x) −ψ2(x))
≤
X
x∈E
|f(x) −c| · |ψ1(x) −ψ2(x)| ≤max
x∈E |f(x) −c| ·
X
x∈E
|ψ1(x) −ψ2(x)|
By choosing c = (maxx∈E f(x) + minx∈E f(x)) /2, we get (35).
As argmax now is single-valued based on Lemma A.3 and thus well-defined, we are ready to present
the proof to Theorem A.2. The proof is adapted from [17] in which they proved the argmax operator
is Lipschitz continuous with respect to the MF in their MF game setting. Our proof replaces the MF
with opponent’s policy, and we will show that our result matches theirs.
15
Proof. We first define the opponent-policy averaged reward and transition as follows:
¯rREG†(s, a, π†) := Ea†∼π†[rREG(s, a, a†)], and
¯P †(s, a, π†) := Ea†∼π†[P(s, a, a†)].
It is easy to show that both ¯rREG† and ¯P † are Lipschitz continuous in π† under Assumption A.1 with
the same constants dr, dP ≥0 respectively. For any π†, π†′ ∈P(A†),
|¯rREG†(s, a, π†) −¯rREG†(s, a, π†′)| =
Ea†∼π†[rREG(s, a, a†)] −Ea†′∼π†′[rREG(s, a, a†′)
= E(a†,a†′)∼Coupling(π†,π†′)
rREG(s, a, a†) −rREG(s, a, a†′)
= E(a†,a†′)∼Coupling(π†,π†′)
r(s, a, a†) −r(s, a, a†′)
≤E(a†,a†′)∼Coupling(π†,π†′)dr∥a† −a†′∥1
As this works for any coupling of (π†, π†′), we can pick the optimal coupling that achieves the
ℓ1-Wasserstein distance, defined as follows:
W1(π†, π†′) =
inf
ν∈Coupling(π†,π†′)
Z
A†×A† ∥a† −a†′∥1dν(a†, a†′),
(36)
in which the infimum can be replaced by minimum when the coupling space is compact. Indeed,
when the action space is discrete and finite in our case, the compactness is guaranteed. Then,
|¯rREG†(s, a, π†) −¯rREG†(s, a, π†′)| ≤drEa∼π[W1(π†, π†′)] = drW1(π†, π†′) ≤dr∥π† −π†′∥1.
The last inequality can be established by noticing that for any optimal coupling νTV that attains the
minimum of the total variance distance, which is defined as:
dTV(π†, π†) = νTV(a† ̸= a†′) :=
inf
ν∈Coupling(π†,π†′) ν(a† ̸= a†′) = 1
2∥π† −π†′∥1,
the following condition must be satisfied with the assumption that diam(A†) = 1 has been normalized:
W1(π†, π†′) ≤E(a†,a†′)∼νTV[∥a† −a†′∥1]
= νTV(a† = a†′)E[∥a† −a†′∥1 | a† = a†′] + νTV(a† ̸= a†′)E[∥a† −a†′∥1 | a† ̸= a†′]
= 0 + νTV(a† ̸= a†′)E[∥a† −a†′∥1 | a† ̸= a†′]
≤diam(A†)νTV(a† ̸= a†′) = 1
2∥π† −π†′∥1 ≤∥π† −π†′∥1.
We immediately have that |¯rREG†(s, a, π†) −¯rREG†(s, a, π†′)| ≤dr∥π† −π†′∥1. The proof to ¯P †
being dP -Lipschitz with respect to π† is symmetric. Now we can look at the learning problem.
Since at different rounds, we solve a different RL problem, we are essentially dealing with different
Q-functions. we define
Qπ†(s, a) = ¯rREG†(s, a, π†) + γ
X
s′∈S
Q∗,π†(s′) ¯P †(s′|s, a, π†),
(37)
where Q∗,π†(s) = maxa∈A Qπ†(s, a) for all s. The next is to prove that Q∗,π† is dQ-Lipschitz
continuous with respect to the states s, where dQ =
dr
1−γdP /2. Define Tπ† as the Bellman operator
for the problem with π†. We can rewrite Q∗,π† in the form of Tπ† as follows
Q∗,π†(s) = max
a∈A
(
¯rREG†(s, a, π†) + γ
X
s′∈S
Q∗,π†(s′) ¯P †(s′|s, a, π†)
)
= Tπ†Q∗,π†(s),
(38)
which is the Bellman optimality condition. It is known that the operator forms a γ-contraction
mapping. Start with any Q, and apply Tπ†, by Banach fixed point theorem, limn→∞T n
π†Q →Q∗,π†.
16
Choose the initial Q to be dK-Lipschitz where dK < dr, then Q/dK is 1-Lipschitz. For any s1, s2,
the following holds
|Tπ†Q(s1) −Tπ†Q(s2)| ≤max
a∈A
¯rREG†(s1, a, π†) + γ
X
s′∈S
Q(s′) ¯P †(s′|s1, a, π†)
−¯rREG†(s2, a, π†) −γ
X
s′∈S
Q(s′) ¯P †(s′|s2, a, π†)
≤max
a∈A
n
|¯rREG†(s1, a, π†) −¯rREG†(s2, a, π†)|
+ γ
X
s′∈S
Q(s′) ¯P †(s′|s1, a, π†) −
X
s′∈S
Q(s′) ¯P †(s′|s2, a, π†)
o
≤max
a∈A
n
dr∥s1 −s2∥1 + γdK
X
s′∈S
Q(s′)
dK
¯P †(s′|s1, a, π†) −
X
s′∈S
Q(s′)
dK
¯P †(s′|s2, a, π†)
o
≤
dr + γ dKdP
2
∥s1 −s2∥1.
Inductively, we have for all n ≥1, the following condition holds,
|T n
π†Q(s1) −T n
π†Q(s2)| ≤
dr
n−1
X
k=0
γdP
2
k
+ dK
γdP
2
n!
∥s1 −s2∥1
≤dr
n
X
k=0
γdP
2
k
∥s1 −s2∥1 ≤
dr
1 −γdP /2∥s1 −s2∥1,
where the second inequality is a result of dK < dr, and the third inequality uses the fact that
γdP /2 ∈[0, 1] with which the geometric series is bounded above. Hence, T n
π† is
dr
1−γdP /2-continuous
for all n, which holds true when n →∞, where T n
π†Q →Q∗,π†(s). We then set dQ =
dr
1−γdP /2
for notation easiness. We now claim that Q∗,π† is d0-Lipschitz continuous with respect to s, where
d0 =
1
1−γ
dr + γ dP dQ
2
. For any π†
1, π†
2 ∈P(A†), we have
∥Q⋆
π†
1 −Q⋆
π†
2∥∞= max
s,a
¯rREG†(s, a, π†
1) + γ
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
1)
−¯rREG†(s, a, π†
2) −γ
X
s′∈S
Q⋆
π†
2(s′) ¯P †(s′|s, a, π†
2)
≤|¯rREG†(s, a, π†
1) −¯rREG†(s, a, π†
2)|
+ γ
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
1) −
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
2)
+ γ
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
2) −
X
s′∈S
Q⋆
π†
2(s′) ¯P †(s′|s, a, π†
2)
≤dr∥π†
1 −π†
2∥1 + γ dP dQ
2
∥π†
1 −π†
2∥1 + γ∥Q⋆
π†
1 −Q⋆
π†
2∥∞,
where the first term follows the Lipschitz assumption on the reward and the last term uses the fact
that ¯P † is probability. The second term can be bounded as follows. Notice that for any π†, Q∗,π† is
dQ-Lipschitz continuous implies Q∗,π†/dQ is 1-Lipschitz continuous with respect to s. Then,
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
1) −
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
2)
= dQ
X
s′∈S
Q⋆
π†
1(s′)
dQ
¯P †(s′|s, a, π†
1) −
X
s′∈S
Q⋆
π†
1(s′)
dQ
¯P †(s′|s, a, π†
2)
≤dQ
2 ∥¯P †(s, a, π†
1) −¯P †(s, a, π†
2)∥1 ≤dP dQ
2
∥π†
1 −π†
2∥1,
17
where we use equation (35) and Lipschitz continuity on the transition kernel. Then by rearranging the
terms, we obtain that ∥Q⋆
π†
1 −Q⋆
π†
2∥∞≤d0∥π†
1 −π†
2∥1 where d0 =
1
1−γ
dr + γ dP dQ
2
. Equation
(37) can be rewritten as follows:
Qπ†(s, a) = ¯rREG†(s, a, π†) + γ
X
s′∈S
Q∗,π†(s′) ¯P †(s′|s, a, π†) −H(π) = ⟨qπ†,s, a⟩−H(π), (39)
where qπ†,s = ¯rREG†(s, ·, π†) + γ P
s′∈S Q∗,π†(s′)P(s′|s, ·, π†) for any s. We now prove that is
dr + γd0 + γ dP dQ
2
-Lipschtiz continuous with respect to π†. Indeed, one has
∥qπ†
1,s −qπ†
2,s∥∞= max
a∈A
¯rREG†(s, a, π†
1) + γ
X
s′∈S
Q∗,π†(s′) ¯P †(s′|s, a, π†
1)
−¯rREG†(s, a, π†
2) −γ
X
s′∈S
Q∗,π†(s′) ¯P †(s′|s, a, π†
2)
≤dr∥π†
1 −π†
2∥1 + γ max
a∈A
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
1) −
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
2)
+ γ max
a∈A
X
s′∈S
Q⋆
π†
1(s′) ¯P †(s′|s, a, π†
2) −
X
s′∈S
Q⋆
π†
2(s′) ¯P †(s′|s, a, π†
2)
≤dr∥π†
1 −π†
2∥1 + γ∥Q⋆
π†
1 −Q⋆
π†
2∥∞+ γ dP dQ
2
∥π†
1 −π†
2∥1
=
dr + γd0 + γ dP dQ
2
∥π†
1 −π†
2∥1.
We now apply Lemma A.4. For any s ∈S, we write BRREG(s, π†) = ∇H⋆(qπ†,s) where H⋆is the
Fenchel conjugate of H. Then,
∥BRREG(s, π†
1) −BRREG(s, π†
2)∥1 ≤1
ρ∥qπ†
1,s −qπ†
2,s∥∞= dr + γd0 + γdP dQ/2
ρ
∥π†
1 −π†
2∥1.
The argmax is therefore Lipschitz with constant dr+γd0+γdP dQ/2
ρ
. Then by substituting d0, dQ and
bringing back the agent index i, we get
dREG
i
= dr
ρ
1 +
γi
(1 −γi)(1 −γidP /2) +
γidP /2
1 −γidP /2
, ∀i ∈I.
(40)
A.6
Stationary Stackelberg Markov Equilibrium with Mean-Field Followers
In this section, we present the proof of Theorem 5.1 and introduce an algorithm that iteratively
computes an SS-MFE.
A.6.1
Assumptions of Theorem 5.1
To establish the existence and uniqueness of an SS-MFE, we adopt the following assumption:
Assumption A.2 (Uniqueness of Best Response and MF Update). For each agent i ∈I, for any
si ∈Si, s−i ∈S−i, π−i ∈P(A−i), and for any follower’s MF µF ∈P(SF × AF ), agent i’s best
response function BRi(si, s−i, π−i, µF ) admits a unique solution. In addition, the MF update map
Γ(µF , πL, πF ) also returns a unique solution.
Assumption A.3 (Lipschitz Best Responses). There exist constants dF
L, dµ
L, dL
F , dµ
F , dL
µ, dF
µ , dµ
µ ≥0
such that for any admissible leader’s policies πL, π′
L ∈P(AL), the follower’s policies πF , π′
F ∈
18
P(AF ), and follower’s MF µF , µ′
F ∈P(SF × AF ):
sup
sF ,sL
∥BRF (sF , sL, πL, µF ) −BRF (sF , sL, π′
L, µ′
F )∥1 ≤dL
F ∥πL −π′
L∥1 + dµ
F ∥µ −µ′∥1,
(41)
∥Γ(µF , πL, πF ) −Γ(µ′
F , π′
L, π′
F )∥1 ≤dµ
µ∥µF −µ′
F ∥1 + dL
µ∥πL −π′
L∥1 + dF
µ ∥πF −π′
F ∥1,
(42)
sup
sL,sF
∥BRL(sL, sF , πF , µF ) −BRL(sL, sF , π′
F , µ′
F )∥1 ≤dF
L∥πF −π′
F ∥1 + dµ
L∥µ −µ′∥1.
(43)
A.6.2
Proof of Theorem 5.1 – Existence and Uniqueness of SS-MFE
We define the map BRF µ : SF × SL × P(AL) 7→P(AF ) × P(SF × AF ), which is simply a
composite update map from (19) and (20); that is, at the outer iteration k, given current states sF , sL
and leader’s policy πk
L, the inner iteration returns BRF µ(sF , sL, πk
L) = (πk∗
F , µk∗
F ).
Proof of Theorem 5.1. Fix a leader policy πL ∈P(AL) and states sL ∈SL, sF ∈SF . We first show
that the mapping BRF µ returns a unique solution by showing that Γ is contractive, then we show
that BRF µ forms a contractive mapping. Consider any pair of follower policies πF , π′
F ∈P(AF )
and mean-field distributions µF , µ′
F ∈P(SF × AF ) that satisfy πF = BRF (sF , sL, πL, µF ) and
π′
F = BRF (sF , sL, πL, µ′
F ), we have:
∥Γ(µF , πL, πF ) −Γ(µ′
F , π′
L, π′
F )∥1 ≤dµ
µ∥µF −µ′
F ∥1 + dF
µ ∥πF −π′
F ∥1
≤dµ
µ∥µF −µ′
F ∥1 + dF
µ ∥BRF (sF , sL, πL, µF ) −BRF (sF , sL, πL, µ′
F )∥1
≤(dµ
µ + dF
µ dµ
F )∥µF −µ′
F ∥1.
As dµ
µ + dF
µ dµ
F ∈[0, 1), Γ forms a contractive mapping by Banach’s fixed point theorem. And since
BRF returns unique solution, we conclude that the follower’s side converges to a unique fixed point.
For any πL, π′
L, let the corresponding follower’s fixed points be (π∗
F , µ∗
F ) = BRF µ(sF , sL, πL), and
(π∗′
F , µ∗′
F ) = BRF µ(sF , sL, π′
L). Then, the following holds:
∥BRF µ(sF , sL, πL) −BRF µ(sF , sL, π′
L)∥= ∥π∗
F −π∗′
F ∥1 + ∥µ∗
F −µ∗′
F ∥1
= ∥BRF (sF , sL, πL, µ∗
F ) −BRF (sF , sL, π′
L, µ∗′
F )∥1 + ∥Γ(µ∗
F , πL, π∗
F ) −Γ(µ∗′
F , π′
L, π∗′
F )∥1
≤(dL
F + dL
µ)∥πL −π′
L∥1 + (dµ
F + dµ
µ)∥µ∗
F −µ∗′
F ∥1 + dF
µ ∥π∗
F −π∗′
F ∥1
≤(dL
F + dL
µ)∥πL −π′
L∥1 + (dµ
F + dµ
µ + dF
µ )
∥π∗
F −π∗′
F ∥1 + ∥µ∗
F −µ∗′
F ∥1
.
By rearranging the term, we get
∥BRF µ(sF , sL, πL) −BRF µ(sF , sL, π′
L)∥1 = ∥π∗
F −π∗′
F ∥1 + ∥µ∗
F −µ∗′
F ∥1
≤
dL
F + dL
µ
1 −(dµ
F + dµ
µ + dFµ )∥πL −π′
L∥1.
Because
dL
F +dL
µ
1−(dµ
F +dµ
µ+dF
µ ) ∈[0, 1), BRF µ forms a contractive mapping by Banach’s fixed point theorem.
As a result, there exists a unique stationary SS-MFE to GMF.
A.6.3
Algorithm 2 – RL-Framework for Finding an SS-MFE
We now present a reinforcement learning procedure for computing an SS-MFE. The algorithm extends
the three-step RL framework described earlier to incorporate a nested fixed-point computation for the
mean-field distribution of the followers. At each outer iteration, the leader updates its policy based
on the aggregate follower response, while the inner loop computes the consistent mean-field and best
response for the followers. The complete procedure is outlined below.
19
Algorithm 2: An RL to Single-Leader-MF-Follower Stackelberg Games
Input: Initial states s0
L, s0
F , leader’s policy π0
L, initial follower’s MF µ0
F , tolerance tol, RL
algorithms AlgL, AlgF
for Iteration k = 0, 1, 2, · · · do
Leader takes action ak
L ∼πk
L(·|sk
L) ;
Set µk,0
F
= µk
F ;
for Iteration τ = 0, 1, 2, · · · do
Follower learns its best response policy πk,τ
F
= BRF (sk
F , sk
L, πk
L, µk,τ
F ) through AlgF ;
Follower’s MF updates as µk,τ+1
F
= Γ(µk,τ
F , πk
L, πk,τ
F ) ;
If ∥µk,τ+1
F
−µk,τ
F ∥1 ≤tol, set (πk
F , µk
F ) = (πk,τ
F , µk,τ
F ) and exit the inner loop.
end
Follower takes action ak
F ∼πk
F (·|sk
F ) ;
Leader learns its best response policy πk+1
L
= BRL(sk
L, sk
F , πk
F , µk
F ) through AlgL ;
State transition sk+1
L
∼PL(sk
L, ak
L, ak
F , µk
F ), sk+1
F
∼PF (sk
F , ak
F , ak
L, µk
F ) ;
If ∥πk+1
L
−πk
L∥1 ≤tol, exit the outer loop.
end
Return (πSE
L , πSE
F , µSE
F ) = (πk
L, πk
F , µk
F ) as the SS-MFE.
A.7
Numerical Experiment Specification and Results
A.7.1
Input Data and Hyper-parameters
Our numerical simulation’s data and code can be found at https://anonymous.4open.science/
r/StackelbergGame-B592 and also in the supplemental materials. To reproduce our results,
we require Python 3.10.11. All necessary packages are included in the requirement.txt file.
The main packages used are: Gymnasium (version 1.0.0, [18]) for environment setting; Stable-
Baselines3 (version 2.3.2, [19]) for RL; and Pyomo (version 6.7.2, [20, 21]) for solving the economic
dispatch linear programming problem. We use a stylized 3-bus power system as the test case. The
input specifications for the bus nodes (including demographic data of prosumers and consumers),
transmission lines, and grid-level generators are provided in Tables 1, 2, and 3, respectively. The
numerical experiment uses PPO as the RL algorithm. The training specification is listed in Table 4.
Table 1: Bus Node Data
Bus Node Name
Parameter
b1
b2
b3
P Load (kW)
110.0
110.0
95.0
Q Load (kVar)
40.0
40.0
50.0
Max Voltage (p.u.)
1.1
1.1
1.1
Min Voltage (p.u.)
0.9
0.9
0.9
Voltage Magnitude
1.1
0.92617
0.9
Voltage Angle
0.0
7.25883
-17.2671
Base KV
345
345
345
Prosumer Population
1,000
500
300
Energy Storage Capacity (kWh)
30
60
100
Energy Storage One-way Efficiency
0.8
0.8
0.8
Prosumer Income/household (US$)
25,000
45,000
65,000
Consumer Population
3,000
3,000
3,000
Consumer Income/household (US$)
15,000
15,000
15,000
A.7.2
Input Data of Solar and Demand Shapes
We set each day to be of 8 time steps, each of which represents a 3-hour interval. Figure 3 shows the
input solar capacity shape from [22] and energy demand shapes for both prosumers and consumers
at each timestep adapted from [23]. Let ∆(a, b, c) denote a triangular distribution with lower limit
20
Table 2: Transmission Line Data
Transmission Line Name
Parameter
l1
l2
l3
Source Bus
b1
b3
b1
Target Bus
b3
b2
b2
Reactance (Ω)
0.065
0.025
0.042
Susceptance (S)
0.62
0.75
0.9
Normal Flow Limit (MW)
100
100
100
Table 3: Grid-Level Generator Data
Grid-Level Generator Name
Parameter
g1
g2
solar
solar2
Bus
b1
b2
b3
b1
Fuel Type
Oil
Oil
Solar
Solar
Cost Curve Coefficients∗
[0.2, 5.0, 0.0]
[0.2, 4.0, 0.0]
Free
Free
Max Production (MW)
2000.0
1500.0
30.0
30.0
∗The cost curve is represented as an array, where the first entry is the quadratic coefficient, the second is the linear
coefficient, and the third is the constant term. For an array [a, b, c], the cost function is C(p) = ap2 + bp + c,
where p is amount of energy consumption in MWh.
Table 4: Hyper-parameters for PPO Agents
Hyperparameter
Aggregators
Utility Company
Learning Rate
0.0003
0.0003
Discount Factor (γ)
0.9999
0.9999
Entropy Coefficient
0.01
0.01
Batch Size
128
128
Number of Epochs
10
10
Steps per Update
1200
1200
Clip Range
0.2
0.2
Policy Network †
[18, 36]
[24, 36]
Value Network ◦
[18, 36]
[24, 36]
Training length
2000
2000
†,◦All policy and value networks are fully connected neural networks. Each array lists the number of neurons at
each hidden layer.
a, upper limit b, and mode c. In our simulation, we assume each consumer/prosumer’s demand and
solar profile follow the average shapes shown in Figure 3, scaled by a random factor drawn from the
triangular distribution ∆(0.8, 1.2, 1). This introduces variability across agents while preserving the
overall profile shape.
All data is scaled relative to the average individual storage capacity across all prosumers and con-
sumers, computed using Table 1. To maintain consistency, we assume each consumer has a reference
storage capacity of 10kWh. The demand input represents the energy consumed in each time step.
For consumers, this is their total consumption; for prosumers, it reflects net demand after subtracting
their solar generation.
A.7.3
Follower’s Learning Result – Price Volatility and Prosumer Net Demand Shape
To better measure the impacts of energy storage coupled with the RL algorithms on locational
marginal price (LMP) volatility, we adopt incremental mean volatility (IMV) from [24] as the metric.
For a sequence of LMPs {LMPt}∞
t=1, the IMV is defined as
IMV = lim
T →∞
1
T
T
X
t=1
LMPt+1 −LMPt
.
(44)
21
0
5
10
15
20
Hour of the Day
0.00
0.25
0.50
0.75
1.00
Energy
Solar Generation Shape
Solar
0
5
10
15
20
Hour of the Day
0.4
0.6
0.8
1.0
1.2
Load Shape (Prosumer & Consumer)
Prosumer
Consumer
Figure 3: Input shapes for solar capacity shape (left, data adapted from [22]) and load demand shapes
for prosumers and consumers (right, data adapted from [23]). All data is scaled to be with respect to
the average storage capacity. The shadow areas indicate the noise bound of each shape.
Figure 4 shows the IMV of the last 3 days between the two scenarios: with storage and RL, and
without storage or RL. Results indicate that the scenario with storage and RL achieves a significant
reduction in IMV, approximately 3 units lower, highlighting notably less volatile and more stable
electricity prices.
0
6
12
18
0
6
12
18
0
6
12
18
Hour of the Day
2
3
4
5
IMV
IMV of Hub Price (Last 3 Days)
With Storage and RL
No Storage or RL
Figure 4: Comparison of IMV of the last 3 days between two scenarios: with storage and RL, and
without storage or RL. Shadow areas show the 1-sigma error bounds across all simulations.
To further understand how RL influences consumption behavior, we examine the resulting net demand
profiles and compare them to the original input demand.
As shown in Figure 5, the RL-based strategy significantly reshapes prosumer net demand. It shifts a
considerable portion of energy consumption (charging) toward midday, as a response to low electricity
prices and abundant solar generation. The net demand turns negative during peak evening hours,
indicating energy selling back to the grid when prices are high. The curve after learning is less
smooth due to the existence of cost-free grid-level solar generators, prosumers can increase their
consumption without increasing the price too much.
0
3
6
9
12
15
18
21
Hour of the Day
1.0
0.5
0.0
0.5
1.0
1.5
Energy
Demand Shape
Prosumer (with Storage and RL)
Prosumer (No Storage or RL)
Consumer
Figure 5: The vertical axis indicates the energy amount scaled down by a factor of the total storage
level of the corresponding agent types (prosumers or consumers). The shaded areas indicate one
standard deviation error bounds computed over all 10 days and all simulation runs.
22
A.7.4
Leader’s Learning Result - Energy Expenditure Incidence (EEI)
We now show the EEI for both prosumers and consumers in Figure 6. The EEI is defined as the
percentage of energy expenditures to total household income. Under our experimental setup, the
utility company’s optimal strategy reduces the EEI gap between prosumers and consumers from
approximately 1% to about 0.7%, indicating improved equity across different income groups and
customer types. We note that the EEI values are typically small since energy spending constitutes
only a minor portion of total household income [25].
0
20
40
60
80
100
Day
1.50
1.75
2.00
2.25
2.50
2.75
EEI (%)
Prosumer and Comsumer EEI
Prosumer
Consumer
Figure 6: EEI over time for prosumers and consumers. The learned policy reduces the EEI gap
between the two groups, indicating improved income-based equity. Shaded regions represent one
standard deviation across simulation runs.
23
|
Learning in Stackelberg Markov Games Jun He Edwardson 47906 Andrew L. Liu Edwardson 47906 Yihsu Chen Electrical and Computer Engineering 95064 Abstract Designing socially optimal policies in multi-agent environments is a fundamental challenge in both economics and artificial intelligence. This paper studies a general framework for learning Stackelberg equilibria in dynamic and uncertain environments, where a single leader interacts with a population of adaptive followers. Motivated by pressing real-world challenges such as equitable electricity tariff design for consumers with distributed energy resources (such as rooftop solar and energy storage), we formalize a class of Stackelberg Markov games and establish the existence and uniqueness of stationary Stackelberg equilibria under mild continuity and monotonicity conditions. We then extend the framework to incorporate a continuum of agents via mean-field approximation, yielding a tractable Stackelberg-Mean Field Equilibrium (S-MFE) formulation. To address the computational intractability of exact best-response dynamics, we introduce a softmax-based approximation and rigorously bound its error relative to the true Stackelberg equilibrium. Our approach enables scalable and stable learning through policy iteration without requiring full knowledge of follower objectives. We validate the framework on an energy market simulation, where a public utility or a State utility commission sets time-varying rates for a heterogeneous population of prosumers. Our results demonstrate that learned policies can simultaneously achieve economic efficiency, equity across income groups, and stability in energy systems. This work demonstrates how game-theoretic learning frameworks can support data-driven policy design in large-scale strategic environments, with applications to real-world systems like energy markets. 1 Introduction Designing effective mechanisms in strategic multi-agent environments is a central problem in economics and artificial intelligence. From taxation policy to resource allocation, many real-world scenarios can be naturally modeled as Stackelberg games, where a leader first commits to a strategy and followers respond rationally based on the leader's choice. The asymmetry between the leader and follower, combined with the strategic nature of their interaction, makes the Stackelberg framework particularly relevant to policy design, regulation, and planning. Classical approaches to solving Stackelberg games rely on strong assumptions about agents' knowledge and rationality. These approaches often require explicit models of the follower's objective and 39th Conference on Neural Information Processing Systems (NeurIPS 2025). 19 Sep 2025 best-response behavior, which may be unavailable or intractable in realistic settings. As a result, such methods are limited to stylized, static environments. In contrast, many policy design problems involve dynamic, stochastic, and partially observed environments, where agents adapt to the evolving system and the leader must learn a policy that effectively shapes long-run outcomes. Recent advances in multi-agent reinforcement learning (MARL) have opened up new possibilities for mechanism design in such settings. The AI Economist framework [1] exemplifies this by introducing a two-level reinforcement learning approach, where a planner (leader) and economic agents (followers) co-adapt in a complex economic simulation. This framework demonstrates that reinforcement learning (RL) can help learn optimal policies even when agents are strategic, heterogeneous, and embedded in dynamic environments. It also highlights the potential of AI-driven simulations to reveal emergent economic behaviors, such as specialization and tax gaming, that are difficult to capture analytically. Building on these perspectives, we propose a general learning framework for Stackelberg Markov Games with infinite-horizon discounted rewards. We first establish the existence and uniqueness of a stationary Stackelberg equilibrium under mild regularity conditions in the two-agent setting. We then extend the framework to incorporate a single leader interacting with a continuum of competitive followers, modeled via a mean-field approximation. This captures large-population environments where each individual agent has negligible influence, and the leader seeks to shape collective behavior through policy design. To compute equilibria in both settings, we introduce a reinforcement learning algorithm that alternates between follower best-response learning and leader policy improvement, without requiring explicit knowledge of the follower's reward function. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 formalizes the single-leader, single-follower Stackelberg Markov game and establishes the existence and uniqueness of an equilibrium. Section 4 presents a learning algorithm and discusses its convergence. Section 5 extends the framework to a mean-field population of followers. Section 6 demonstrates the approach through a tariff design problem in electricity markets, highlighting its potential for real-world policy applications. All proofs are deferred to the appendix. 2 Related Work Our work lies at the intersection of RL, game theory, and mechanism design, with a focus on Stackelberg games - sequential settings where a leader commits to a strategy first and followers respond optimally. Classical approaches rely on complete-information assumptions and bilevel optimization techniques [2]. Recent work has turned to learning-based methods to address challenges in stochastic and complex environments. The AI Economist [1], for example, applies deep multi-agent RL to optimize tax policy in simulated economies with strategic agents, demonstrating the promise of data-driven mechanism design. However, its focus is empirical and lacks theoretical guarantees. Complementing these empirical advances, a parallel line of work has developed theoretical foundations for Stackelberg learning. Bai et al.[3] provide sample complexity bounds for learning Stackelberg equilibria under bandit feedback in general-sum games, while Fiez et al.[4] analyze local convergence of gradient-based dynamics in continuous Stackelberg games. In contrast, we establish global existence and uniqueness results and support learning in infinite-horizon dynamic environments Jordan et al. [5] study Stackelberg-Nash equilibria with myopic followers and propose provably efficient RL algorithms. Their setting, however, assumes short-term follower behavior and limits temporal expressiveness. Our framework focuses on fully strategic, forward-looking followers and extends to mean-field populations. Overall, our work provides a theoretically grounded and scalable framework for learning Stackelberg equilibria in dynamic environments with either a single or a continuum of strategic followers, without requiring explicit knowledge of agents' reward functions and relying only on observed policies. 2 3 The Single-Leader-Single-Follower Stackelberg Markov Game We now present the formal framework of Stackelberg Markov games and the corresponding learning algorithms. We begin with a single-leader, single-follower game in a dynamic and uncertain environment. This setting serves as the foundation for our theoretical results and also for later extensions to large-population games. A Stackelberg game is a sequential-move game in which one agent commits to a strategy first, anticipating its influence on the other's response, and the second agent selects the best response after observing this commitment. In the classical formulation, such games are static and played under complete information, with equilibrium defined over a single strategic interaction. In contrast, we study Stackelberg interactions embedded in dynamic environments, formally, infinite-horizon discounted Markov games, where agents repeatedly interact with both one another and an evolving state governed by stochastic dynamics. While this setting resembles repeated games, we do not analyze the richer class of subgame perfect equilibria, which would require agents to condition strategies on full play histories and belief hierarchies. Instead, we adopt the perspective common in the literature on learning Nash equilibria in Markov games. That is, we focus on stationary strategies and aim to learn a static Stackelberg equilibrium, interpreted as the leader committing to a fixed policy and the follower adapting to it optimally within the environment. This formulation preserves the sequential structure of Stackelberg play while remaining tractable for reinforcement learning and policy optimization in a dynamic environment. We now formalize this setting. Let I = {L, F} denote the two agents, where L represents the first mover (leader) and F the second mover (follower). For notational convenience, we write -i to denote the opponent of agent i; that is, if i = L, then -i = F, and vice versa. Additionally, we use P(X) to denote the set of probability measures over a measurable set X, and x ∼Q to indicate that x follows distribution Q. For sets X, Y, X × Y denotes their Cartesian product, and |X| the cardinality of X if discrete. The definition of a Stackelberg Markov game is given below. Definition 3.1 (Stackelberg Markov Game). A Stackelberg Markov game with a single leader and a single follower is a tuple GS := (S, AL, AF , P, rL, rF , γ), where S is a (measurable) state space, and AL and AF are the action spaces of two agents: the first mover, denoted by L (the leader), and the second mover, denoted by F (the follower). The stochastic transition kernel P(· | s, aL, aF ) defines the probability distribution over next states, given current state s ∈S and joint actions (aL, aF ) ∈AL × AF . The reward functions ri : S × AL × AF →R specify the one-step payoff to agent i ∈{L, F}, and γ = (γL, γF ) ∈[0, 1)2 denotes the discount factors for the leader and the follower, respectively. In this paper, we focus on the case in which Si and Ai are discrete and finite for each i ∈I. The leader first observes its state sL ∈SL and chooses an action aL ∈AL; then the follower observes its state sF ∈SF and takes action aF ∈AF . The reward is then calculated through the reward functions rL and rF . The leader and the follower take their actions according to their own policies πi ∈Si 7→P(Ai). Each agent's value function is defined as the discounted total expected return: Vi(si, s-i, πi, π-i) := E " ∞ X t=0 γt iri(si,t, s-i,t, ai,t, a-i,t) si,0 = si, s-i,0 = s-i # , ∀i ∈I, (1) subject to si,t+1 ∼Pi(si, ai, a-i), ai,t ∼πi(·|si,t), ∀i ∈I, where the expectation is taken according to both agents' policies πL, πF , and the transition kernels PL, PF . Provided that the other player chooses π-i, the goal for each player i is to find the best policy π∗ i that maximizes its value function with initial state si ∈Si: Vi(si, s-i, π∗ i , π-i) ≥Vi(si, s-i, πi, π-i), ∀πi ∈P(Ai), i ∈I. (2) To facilitate the analysis of optimal policies as defined in (2), and to ensure the existence of such solutions, we introduce the following assumption. Assumption 3.1 (Continuity and Boundedness). The reward functions ri(si, s-i, ai, a-i) and the transition kernel functions Pi(si, ai, a-i) are continuous in Si and in Ai, A-i for each i ∈I. In addition, the reward functions are uniformly bounded; that is, there exists a finite number R such that |ri(si, s-i, ai, a-i)| ≤R, for all si ∈Si, ai ∈Ai, a-i ∈A-i, ∀i ∈I. This assumption is central to guaranteeing the existence of optimal stationary policies. Under standard conditions such as continuity, compactness, and boundedness, it is well established (e.g., [6, 7]) 3 that stationary (i.e., time-invariant, memoryless) policies suffice for optimality in infinite-horizon discounted Markov games. We therefore focus exclusively on stationary policies, which simplifies the analysis and reflects both standard practice and real-world settings, where agents typically use fixed policies that depend only on the current state, not on time or history. Specifically, a stationary Stackelberg equilibrium (SSE) in the game GS is defined as follows. Definition 3.2. Given a Stackelberg Markov game GS with shared state space S, a policy pair (πSSE L , πSSE F ) is a stationary Stackelberg equilibrium if, for all s ∈S, πSSE F ∈arg max πF ∈P(AF ) VF (s; πSSE L , πF ), (3) πSSE L ∈arg max πL∈P(AL) VL(s; πL, πSSE F ), (4) where Vi(s; πL, πF ) denotes the value function for agent i ∈{L, F} under initial state s and policy pair (πL, πF ). 3.1 Equilibrium Existence and Uniqueness We begin with the follower's problem. At state sF ∈SF , given the leader's policy πL, the follower treats the leader as part of the environment and solves a single-agent MDP to compute an optimal response π∗ F . This defines the best response mapping BRF : S × P(AL) →P(AF ), which maps the joint state and leader policy to an optimal follower policy. The leader's problem is analogous. At state sL ∈SL, given the follower's policy πF , the leader solves their own single-agent MDP, treating the follower as part of the environment, and derives the optimal policy π∗ L. The corresponding best response function is BRL : S × P(AF ) →P(AL). We now establish the existence and uniqueness of an SSE, which requires the following assumptions. Assumption 3.2 (Uniqueness of Best Response). For each i ∈I, and for any si ∈Si, s-i ∈ S-i, π-i ∈P(A-i), agent i's best response function BRi(si, s-i, π-i) admits a unique policy. Assumption 3.3 (Lipschitz Best Responses). There exist constants di ≥0 such that for each i ∈{L, F}, and any policies π-i, π′ -i ∈P(A-i), sup si∈Si, s-i∈S-i ∥BRi(si, s-i, π-i) -BRi(si, s-i, π′ -i)∥1 ≤di∥π-i -π′ -i∥1. Remark (Optimistic vs. Pessimistic Solutions). In Stackelberg games, an important (but subtle) modeling choice involves how the leader anticipates the follower's response when multiple best responses exist. The optimistic (or leader-favorable) solution assumes that the follower selects the best response that benefits the leader most, while the pessimistic (or leader-averse) solution assumes the follower selects the worst-case best response from the leader's perspective. Assumption 3.2 enforces uniqueness of the best response, thereby avoiding this ambiguity. This corresponds to the optimistic case and simplifies the theoretical analysis. The pessimistic case, while important in adversarial settings or robust decision-making, introduces discontinuities and set-valued mappings that require more intricate tools beyond the scope of this paper. We leave such extensions to future work, and focus here on establishing foundational results under the optimistic formulation. Theorem 3.1 (Existence and Uniqueness of an SSE). Given Assumptions 3.1, 3.2 and 3.3, and assume dLdF 0. Since we assume that each agent's best response mapping is uniquely defined, convergence of the leader's policy implies convergence of the follower's policy as well. The above three-step scheme forms the basis for Algorithm 1 (pseudocode provided in Appendix A.2). To implement this procedure in an RL setting, we now formalize value functions and Q-functions. For notational simplicity, we let si = (si, s-i, a-i) denote agent i's effective state and P i = (Pi, P-i, π-i) its joint transition kernel, since each agent treats the opponent's policy as part of the environment in a sequential learning setup. Given a fixed opponent policy π-i, the Q-function for agent i ∈I under policy πi is defined as: Qπi,π-i i (si, ai) := E " ∞ X t=0 γt iri(si,t, ai,t) si,0 = si, ai,0 = ai # . (5) The corresponding optimal Q-function satisfies the Bellman equation: Q∗,π-i i (si, ai) = ri(si, ai) + γi max a′ i Es′∼P i Q∗,π-i i (s′ i, a′ i) . (6) An optimal policy π∗ i can then be obtained through the best response mapping BRi(si, s-i, π-i) = argmaxaiQ∗,π-i i (si, ai). However, this general framework does not guarantee convergence unless the best-response mapping argmax satisfies strong regularity conditions such as single-valuedness and Lipschitz continuity. To address this, we introduce two smoothing techniques: (i) Boltzmann policies, which replace the possible discontinuous argmax operator with a softmax approximation; and (ii) reward regularization, which ensures strict concavity and uniqueness of the best response. These modifications enable us to prove that the regularized learning dynamics converge to a unique fixed point. 4.2 Boltzmann Policy and Reward Regulation To ensure desirable properties of optimal policies, we replace the generally non-smooth argmax operator in best-response updates with smooth approximations and stabilization techniques. In this section, we formalize the key components and justify their theoretical validity. We first show that the Boltzmann policy is a Lipschitz continuous function that can approximate the argmax operator. Specifically, a Boltzmann policy uses the following form with the softmax operator: πi := softmaxαi(·|si) = exp(αiQ∗,π-i(si, ·)) P ai exp(αiQ∗,π-i(si, ai)), i ∈I, (7) where αi > 0 is the temperature hyperparameter. Lemma A.1 presents the Lipschitz continuity of the softmax operator. Following the analysis in [8], we first discretize the policy space using finite ε-nets to bound the error of the approximation to the argmax operator. That is, for a given policy πi ∈P(Ai), we define a finite cover N ε i = { ˆπi (1), ˆπi (2), · · · , ˆπi (N ε i )} ⊂P(Ai) such that for any πi, there exists ˆπi ∈N ε i with ∥πi -ˆπi∥1 ≤ε. The projection of πi onto the net is defined as: projε(πi) := argminπ′ i∈N ε i ∥πi -π′ i∥1. (8) This projection is applied to both the leader's and the follower's learned policies to enforce stability and discretized action support. In practice, a key challenge in using the argmax operator is the sensitivity to small changes in the Q-values when the action gap is small. To mitigate instability, the policy must not only be discretized via an ε-net but also be sufficiently separated in action-value space. We define the action gap at state si as: δsi(Q∗,ˆπ(j) -i ) := min ai∈Ai ∗,ˆπ(j) -i (si,·) max a′ i∈Ai Q∗,ˆπ(j) -i (si, a′ i) -Q∗,ˆπ(j) -i (si, ai) , (9) 5 for all j = 1, · · · , N ε i and i ∈I. Then, for any ε > 0, there exists a positive, decreasing function φ(ε) and an ε-net N ε i such that for all Q∗,ˆπ(j) -i and at any state si, δsi(Q∗,ˆπ(j) -i ) ≥φ(ε). When following the 3-step computation framework, instead of using BRi for both agents, we adopt smoothed policy updates based on softmax approximations. Specifically, for k = 0, 1, 2, . . ., the policies are updated as: ˆπk F = projε(softmaxαF ( ˆQ∗,ˆπk L)) and ˆπk+1 L = projε(softmaxαL( ˆQ∗,ˆπk F )). Theorem 4.1 (Error Bound for Projected Boltzmann Policy). Let Assumptions 3.2 and 3.3 hold, and suppose that dLdF 0 and set the temperature parameters αL = αF = log(1/ε)/φ(ε), where φ(ε) denotes the minimal action gap induced by the ε-net discretization. Let (ˆπk L, ˆπk F ) denote the policy iterates generated by Algorithm 1 using projected Boltzmann policies with projection radius ε. Then, for any K ≥log1/(dLdF )(2/ε), the leader's policy satisfies ∥ˆπK L -πSSE L ∥1 ≤ 1 + dL + 2|AL| + 2dL|AF | 1 -dLdF + 1 ε = O(ε), (10) where πSSE L denotes the leader's stationary Stackelberg equilibrium policy in GS. This bound shows that for sufficiently large α and small projection radius ε, the projected softmax policy closely approximates the true best response, while preserving Lipschitz continuity and stabilizing learning dynamics. To establish convergence of the three-step computational framework, one usually requires the bestresponse operator argmax to be Lipschitz continuous, which can be too restrictive. To avoid imposing such assumptions, we adopt a regularization approach. Regularization is widely used in RL, where it can accelerate convergence of policy gradient methods [9] and enhance exploration and robustness [10]. The regularized reward function for agent i ∈I is defined as: rREG i (si, s-i, ai, a-i) = ri(si, s-i, ai, a-i) + H(πi(· | si)), (11) where H(·) is a ρ-strongly concave function. A common choice is the Shannon entropy, given by H(πi(· | si)) = -P ai∈Ai π(ai | si) log π(ai | si) for each si ∈Si. We then analyze the game using the regularized value function for for each si ∈Si: V REG i (si, s-i, πi, π-i) := E " ∞ X t=0 γt irREG i (si,t, s-i,t, ai,t, a-i,t), , si,0 = si, , s-i,0 = s-i # . (12) In Theorem A.2 (Appendix A.5), we show that under standard continuity and boundedness conditions, the best-response mappings in the regularized game are Lipschitz continuous. This, in turn, implies that the policy iterates converge to a fixed point under the regularized learning dynamics. 5 Extension to Stackelberg Games with Mean-Field (MF) Followers We now consider the extension where there is one leader but an infinite number of followers in a (discounted) infinite-horizon Markovian setting. This formulation captures many real-world scenarios where a central authority (such as a platform, a regulator, or a policymaker) interacts with a large population of agents whose individual behaviors are negligible but whose aggregate effect shapes the system dynamics. To formalize this setting, we adopt a mean-field approach in which followers are modeled as homogeneous and interchangeable. In the limit as the number of followers approaches infinity, each individual has vanishing influence on the aggregate behavior, which is captured by an MF distribution over states and actions. We assume the followers are competitive and analyze the interaction from the perspective of a single representative follower responding to the MF. This reduction preserves the coupling between individual incentives and population-level dynamics while simplifying the analysis. For notational consistency, we retain the index set I = {L, F}, and denote the state and action spaces of the representative follower by SF and AF , respectively. Let μF,t ∈P(SF × AF ) denote an MF distribution at time t, representing the joint distribution of the population's states and actions in the infinite-agent limit: μF,t(s, a) := lim N→∞ PN j=1,j̸=i 1(sj F,t,aj F,t)=(s,a) N , ∀s ∈SF , a ∈AF , (13) 6 where N is the number of followers, and (sj F,t, aj F,t) denotes the j-th follower's state and action pair. The indicator function 1(sj F,t,aj F,t)=(s,a) = 1 if (sj F,t, aj F,t) = (s, a), and 0 otherwise. The definition of a Stackelberg game with MF followers can be stated as follows. Definition 5.1. A Stackelberg Markov game GMF with a single leader and a MF follower consists of state spaces (Si)i∈I, action spaces (Ai)i∈I, a stochastic transition kernel Pi : Si×Ai×A-i×P(SF × AF ) →P(Si), a reward function for each agent ri : Si × S-i × Ai × A-i × P(SF × AF ) →R, and discount factors (γi)i∈I. In this game, the leader chooses its strategy first, and the MF follower chooses its strategy after observing the leader's policy while playing against the MF. The leader and the follower take their actions according to their own policies πi ∈Si →P(Ai). As the reward function and transition kernel are redefined with the MF as an additional argument, each agent's value function is also redefined as: Vi(si, s-i, πi, π-i, μF ) := E " ∞ X t=0 γt iri(si,t, s-i,t, ai,t, a-i,t, μF ) si,0 = si, s-i,0 = s-i # , (14) subject to si,t+1 ∼Pi(si, ai, a-i, μF ), ai,t ∼πi(·|si,t, μF ), ∀i ∈I, where the expectation is taken according to both agents' policies πL, πF , and the transition kernels PL, PF . The MF follower here assumes that the MF remains as μF throughout the entire lifetime. This can be achieved by maintaining a belief vector as in [11], which can then be updated (adaptively) when a new leader policy is observed. Finally, the evolution of MF is a mapping Γ : P(SF × AF ) × SL × SF →P(SF × AF ) that maps from current MF and both agents' policies to the next MF, defined as follows: μ′ F := Γ(μF , πL, πF ), ∀μF ∈P(SF × AF ), πL ∈P(AL), πF ∈P(AF ), (15) as a new component to the game. Then an equilibrium is defined as follows. Definition 5.2 (Stationary Stackelberg Mean-Field Equilibrium (SS-MFE)). Given a Stackelberg Markov game GMF with MF followers, the tuple of the leader and the MF follower's stationary policies (πSE L , πSE F , μSE F ) form a stationary Stackelberg equilibrium in GMF, if the following conditions are satisfied for each state sL ∈SL, sF ∈SF , : 1. Follower - For any policy πF ∈P(AF ), VF (sF , sL, πSE F , πSE L , μSE F ) ≥VF (sF , sL, πF , πSE L , μSE F ). (16) 2. Consistency of Follower's MF - The evolution of the MF μF satisfies that, μSE F = Γ(μSE F , πSE L , πSE F ). (17) 3. Leader - For any policy πL ∈P(AL), VL(sL, sF , πSE L , πSE F , μSE F ) ≥VL(sL, sF , πL, πSE F , μSE F ). (18) The consistency condition in Item (2) indicates that when all followers adopt a policy in response to the assumed mean field μSE F as in (16), the resulting population distribution coincides with the assumed μSE F . 5.1 Existence and Uniqueness of MF Stackelberg Equilibrium Given the initial leader's policy and initial MF, the game consists of the same 3 steps as in Section 3, which will be referred to as the "outer iteration". The follower's move consists of an iterative approach between learning a policy and waiting for the MF to update after executing the policy, which will be referred to as the "inner iteration". We re-define the best response mappings with the introduction of the MF. That is, BRi : Si × S-i × P(A-i) × P(SF × AF ) 7→P(Ai) for both i ∈I. Then, at each outer iteration k, given the leader's policy πk L, the follower and mean-field dynamics proceed through an inner loop with iterator τ = 0, 1, · · · : πk,τ+1 F = BRF (sF , sL, πk L, μk,τ F ), (19) μk,τ+1 F = Γ(μk,τ F , πk L, πk,τ+1 F ), (20) repeating until convergence to (πk∗ F , μk∗ F ), which defines the mean-field equilibrium corresponding to πk L. The leader then updates its policy as: πk+1 L = BRL(sL, sF , πk∗ F , μk∗ F ). (21) 7 Theorem 5.1 (Existence and Uniqueness of SS-MFE). Under the assumptions of continuity and boundedness of reward functions, and uniqueness and Lipschitz of BR policies for both the leader and the MF follower, there exists a unique stationary SS-MFE to GMF if dμ μ + dF μ dμ F 0 defined by softmaxα(x)i = exp(αxi) Pn j=1 exp(αxj). Then softmaxα : Rn 7→[0, 1]n is Lipschitz continuous with respect to the l1 norm with Lipschitz constant √nα; that is, ∥softmaxα(x) -softmaxα(y)∥1 ≤√nα∥x -y∥1, ∀x, y ∈Rn. 11 Proof. Choose arbitrary vectors x, y ∈Rn. We first show that the softmaxα function is α-Lipschitz with respect to l2-norm. For brevity, in this proof only, we let si = softmaxα(x)i. We start by computing the Jacobian matrix of softmax(x), denoted as Jα(x), whose entries are given by: Jα(x)ij = ∂si ∂xj = (αsi(1 -si), if i = j, -αsisj, if i ̸= j, and for any vector u ∈Rn, we have, u⊤Jα(x)u = α n X i=1 siu2 i - n X i=1 siui !2 . By applying Jensen's inequality to the convex function x 7→x2 and by viewing softmax as a probability distribution, we have (Pn i=1 siui)2 ≤Pn i=1 siu2 i . This implies that: 0 ≤uT Jα(x)u ≤α n X i=1 siu2 i ≤α∥u∥2. This shows that the eigenvalues of Jα(x) lie between 0 and α. Hence, its spectral norm satisfies that ∥Jα(x)∥2 ≤α. By the Mean Value Theorem for vector-valued functions, we have: ∥softmaxα(x) -softmaxα(y)∥2 ≤sup t∈[0,1] ∥Jα(y + t(x -y))∥2 · ∥x -y∥2 ≤α∥x -y∥2. Combining with the fact that ∥x∥2 ≤∥x∥1 ≤√n∥x∥2, the following holds: ∥softmaxα(x) -softmaxα(y)∥1 ≤√n∥softmaxα(x) -softmaxα(y)∥2 ≤√nα∥x -y∥1. Before proving Theorem 4.1, we need another lemma, which is to show the closeness of softmax distribution and the uniform argmax distribution. Lemma A.2. Let x ∈Rn and let M = {j | xj = maxi=1,··· ,n xi} be the index set of maximizers with cardinality |M|. Define the uniform argmax distribution as argmaxU(x)i = 1/|M|, if i ∈M, 0, otherwise. (22) Then for any α > 0, the l1 distance between softmaxα(x) and argmaxU(x) is bounded as ∥softmaxα(x) -argmaxU(x)∥1 ≤2ne-αδ, where δ := minj /∈M(x∗-xj) > 0 is the minimum gap between the top value x∗and the next largest value when |M| 0: softmaxα(x)i = exp(αxi) Pn j=1 exp(αxj), and also define the uniform argmax function as in (22). For simplicity, let Z = Pn j=1 exp(αxj) = |M|eαx∗+ P j /∈M eαxj. For all i ∈M, we can rewrite the softmax function as softmaxα(x)i = eαx∗/Z. Then the l1 distance is: ∥softmaxα(x) -argmaxU(x)∥1 = X i∈M 1 |M| -eαx∗ Z + X i/∈M eαxi Z = |M| 1 |M| -eαx∗ Z + P i/∈M eαxi Z = 2(Z -|M|eαx∗) Z = 2 P j /∈M eαxj Z . 12 Now let δ = minj /∈M(x∗-xj) > 0. Then for all j /∈M, xj ≤x∗-δ, we have eαxj ≤eα(x∗-δ) = e-αδeαx∗. As a result, the numerator satisfies that P j /∈M eαxj ≤(n -|M|)e-αδeαx∗, and the denominator satisfies that Z = |M|eαx∗+ P j /∈M eαxj ≥|M|eαx∗. With |M| ≥1, we get: ∥softmaxα(x) -argmaxU(x)∥1 ≤2 · (n -|M|)e-αδeαx∗ |M|eαx∗ ≤2(n -1)e-αδ ≤2ne-αδ. In the case where |M| = n, all values in x are equal and δ := ∞. The softmax function returns a uniform distribution over all n elements and is identical to the uniform argmax function. The l1distance between the two functions are therefore 0. Then setting δ = ∞preserves the inequality. A.4 Proof of Theorem 4.1 - Error Bound for Projected Boltzmann Policy Proof. Fix sF , sL. For notation simplicity, We drop the two state arguments in the two BRi mappings. When following Algorithm 1, we let the updates be as follows: ˆπk F = projε(softmaxαF ( ˆQ∗,πk L)), and ˆπk+1 L = projε(softmaxαL( ˆQ∗,πk F )), where the function projε returns the projected Boltzmann policies through their corresponding ε-net. For notation simplicity, we let ̃πi = softmaxαi( ˆQ∗,π-i) denote agent i's Boltzmann policy. Then, at each step k, we apply Lemma A.2, along with the observation that the ε-net enforces a minimum action gap of φ(ε) by limiting the other player's policy space (and thus constraining the resulting Q-values). ∥ˆπk+1 L -πSE L ∥1 ≤∥ˆπk+1 L - ̃πk+1 L ∥1 + ∥ ̃πk+1 L -BRL(ˆπk F )∥1 + ∥BRL(ˆπk F ) -πSE L ∥1 ≤ε + ∥softmaxα( ˆQ∗,πk F )) -argmaxU( ˆQ∗,πk F ))∥1 + ∥BRL(ˆπk F ) -BRL(πSE F )∥1 ≤ε + 2|AL|e-αLφ(ε) + dL∥ˆπk F -πSE F ∥1, where the last term can be similarly bounded as follows: ∥ˆπk+1 F -πSE F ∥1 ≤ε + 2|AF |e-αF φ(ε) + dF ∥ˆπk L -πSE L ∥1. Combining the two recursive inequalities, we obtain: ∥ˆπk+1 L -πSE L ∥1 ≤ε + 2|AL|e-αLφ(ε) + dL ε + 2|AF |e-αF φ(ε) + dF ∥ˆπk L -πSE L ∥1 = (1 + dL) ε + 2|AL|e-αLφ(ε) + 2dL|AF |e-αF φ(ε) + dLdF ∥ˆπk L -πSE L ∥1. Unfolding the recursion over k yields: ∥ˆπk+1 L -πSE L ∥1 ≤ (1 + dL)ε + 2|AL|e-αLφ(ε) + 2dL|AF |e-αF φ(ε) k X κ=0 (dLdF )κ + (dLdF )k+1∥ˆπ0 L -πSE L ∥1. Assuming dLdF < 1, and setting αL = αF = log(1/ε) φ(ε) , the sum converges as k →∞. At a fixed iteration K, the bound is: ∥ˆπK L -πSE L ∥1 ≤(1 + dL)ε + 2|AL|e-αLφ(ε) + 2dL|AF |e-αF φ(ε) 1 -dLdF + (dLdF )K∥ˆπ0 L -πSE L ∥1 ≤(1 + dL + 2|AL| + 2dL|AF |)ε 1 -dLdF + 2(dLdF )K, where we also used the fact that the l1-norm of the difference between two distributions over a finite set is upper bounded by 2 (consider for any distribution π, π′ over the same finite set, ∥π -π′∥1 ≤∥π∥1 + ∥π′∥1 = 2). To achieve 2(dLdF )K ≤ε, we need: K ≥logdLdF ε 2 , which bounds the error to: ∥ˆπK L -πSE L ∥1 ≤ 1 + dL + 2|AL| + 2dL|AF | 1 -dLdF + 1 ε = O(ε). 13 A.5 Regularization Regularization is widely used in reinforcement learning (RL) to promote stability, enhance exploration, and improve convergence rates [9, 10]. In this section, we consider a general class of regularized learning schemes, where the reward function is augmented with a concave regularizer applied to the agent's policy. For each agent i ∈I, we define the regularized reward function as: rREG i (si, s-i, ai, a-i) = ri(si, s-i, ai, a-i) + H(πi(· | si)), (23) where H : P(Ai) →R is a ρ-strongly concave function. A typical example is negative Shannon entropy: H(πi(· | si)) = - X ai∈Ai πi(ai | si) log πi(ai | si), which yields a Boltzmann policy as the optimal solution. However, our analysis does not depend on this specific form, and applies to any regularizer satisfying the strong concavity condition. We define the regularized value function as: V REG i (si, s-i, πi, π-i) := E " ∞ X t=0 γt irREG i (si,t, s-i,t, ai,t, a-i,t) si,0 = si, s-i,0 = s-i # , ∀i ∈I. (24) Lemma A.3 below shows that adding a strongly concave regularization term ensures the regularized value function V REG i is also strongly concave in the agent's policy πi, which in turn guarantees the uniqueness of the optimal solution π∗ i = argmaxπiV REG i (si, s-i, πi, π-i). We now establish that, under the following assumption, which is milder than those required in Assumption 3.3, the resulting regularized best-response mapping admits a fixed point. To facilitate the analysis, we define the diameter of the (finite) action space as the maximum distance between any two actions: diam(Ai) := max ai,a′ i∈Ai ∥ai -a′ i∥1. Without loss of generality, we normalize the action space so that diam(Ai) = 1 for all i ∈I. Assumption A.1 (Lipschitz Reward and Transition Kernel). For each agent i ∈I, for any states si ∈Si, s-i ∈S-i, and for any actions ai, a′ i ∈Ai, a-i, a′ -i ∈A-i, the reward function ri and the transition kernel Pi satisfy the condition that, there exists dr, dP ≥0 such that |ri(si, s-i, ai, a-i) -ri(si, s-i, a′ i, a′ -i)| ≤dr(∥si -s′ i∥1 + ∥s-i -s′ -i∥1 + ∥ai -a′ i∥1 + ∥a-i -a′ -i∥1), (25) |Pi(si, ai, a-i) -ri(si, a′ i, a′ -i)| ≤dP (∥si -s′ i∥1 + ∥s-i -s′ -i∥1 + ∥ai -a′ i∥1 + ∥a-i -a′ -i∥1), (26) and in addition, we assume γidP /2 ∈[0, 1]. Now we define the (regularized) best response mapping for each i ∈I as BRi : Si×S-i×P(A-i) 7→ P(Ai). That is, follows: BRREG i (si, s-i, π-i) := argmaxπiV REG i (si, s-i, πi, π-i). (27) Then, the Lipschitz continuity condition can be established: Theorem A.2 (Lipschitz Regularized Best Response). Under Assumptions 3.1 and A.1, the best response mapping BRREG i for each agent i ∈I to GS with regularized reward is Lipschitz with respect to the other agent's policy π-i; that is, for any πL, π′ L ∈P(AL) and πF , π′ F ∈P(AF ), there exist constants dREG L , dREG F ≥0 such that, ∥BRREG L (sL, sF , πF ) -BRREG L (sL, sF , π′ F )∥≤dREG L ∥πF -π′ F ∥, (28) ∥BRREG F (sF , sL, πL) -BRREG F (sF , sL, π′ L)∥≤dREG F ∥πL -π′ L∥, (29) where the constants are defined symmetrically in the form of: dREG i = dr ρ 1 + γi (1 -γi)(1 -γidP /2) + γidP /2 1 -γidP /2 , ∀i ∈I. (30) 14 We first prove that adding a strongly concave regularization term to the value function can ensure the uniqueness as well as the continuity of the argmax operator. As the proof is symmetric for both agents, we drop the agent's index i for simplicity and use superscript † to denote the opponent's components. With a slight abuse of notations, in this proof only, we use s = (s, s†), a = (a, a†) to represent the joint states and actions, and when necessary, we unpack the argument list. We use π, π† to indicate the policies to the agent and its opponent, respectively. The regularized reward and value functions can be re-written concisely as follows: rREG(s, a) = r(s, a) + H(π), (31) V REG(s, π, π†) := E " ∞ X t=0 γtrREG(st, at) s0 = s # , (32) where H is a ρ-strongly concave function. The following lemma is first needed: Lemma A.3. The argmaxπV REG admits a unique solution. Proof. We first argue that the expected reward E[r(s, a)] is linear w.r.t. π†. In fact, the linearity is a direct consequence of the Lebesgue measure by viewing the distribution π† as the measure function. Then the sum of a linear function and a ρ-strongly concave function preserves the ρ-strong concavity. Thus, the argmaxπV REG admits a unique solution. To proceed with our analysis, we state the following properties of the Fenchel conjugate, established in Lemma 15 of [16]. Lemma A.4 (Fenchel Conjugate Properties [16]). Let E = Rm, m ≥1 with inner product ⟨·, ·⟩. Let function g : E 7→R+ be a differentiable and ρ-strongly convex function with respect to some norm ∥· ∥, where R+ = R ∪{-∞, ∞}. Let X be its domain. The Fenchel conjugate g⋆is defined as follows: g⋆(y) = max x∈X⟨x, y⟩-g(x). (33) Then the following three properties hold: (i) g⋆is differentiable on E, (ii) ∇g⋆(y) = argmaxx∈X⟨x, y⟩-g(x), and (iii) g⋆is 1 ρ-smooth with respect to ∥· ∥⋆, the dual norm of ∥· ∥. That is, for any y1, y2 ∈E, ∥∇g⋆(y1) -∇g⋆(y2)∥≤1 ρ∥y1 -y2∥⋆. (34) We also need the property of l1-norm of distributions on the set of probability distribution over finite sets. Lemma A.5. Suppose that there exists a real-valued function f on a finite set E. For any two probability distributions ψ1, ψ2, we have X x∈E f(x)ψ1(x) - X x∈E f(x)ψ2(x) ≤maxx∈E f(x) -minx∈E f(x) 2 ∥ψ1 -ψ2∥1, (35) Proof. We first have that P x(ψ1(x) -ψ2(x)) = 0 and hence for any constant c ∈R, one has P x c(ψ1(x) -ψ2(x)) = 0, then X x∈E f(x)ψ1(x) - X x∈E f(x)ψ2(x) = X x∈E (f(x) -c)(ψ1(x) -ψ2(x)) ≤ X x∈E |f(x) -c| · |ψ1(x) -ψ2(x)| ≤max x∈E |f(x) -c| · X x∈E |ψ1(x) -ψ2(x)| By choosing c = (maxx∈E f(x) + minx∈E f(x)) /2, we get (35). As argmax now is single-valued based on Lemma A.3 and thus well-defined, we are ready to present the proof to Theorem A.2. The proof is adapted from [17] in which they proved the argmax operator is Lipschitz continuous with respect to the MF in their MF game setting. Our proof replaces the MF with opponent's policy, and we will show that our result matches theirs. 15 Proof. We first define the opponent-policy averaged reward and transition as follows: ̄rREG†(s, a, π†) := Ea†∼π†[rREG(s, a, a†)], and ̄P †(s, a, π†) := Ea†∼π†[P(s, a, a†)]. It is easy to show that both ̄rREG† and ̄P † are Lipschitz continuous in π† under Assumption A.1 with the same constants dr, dP ≥0 respectively. For any π†, π†′ ∈P(A†), | ̄rREG†(s, a, π†) - ̄rREG†(s, a, π†′)| = Ea†∼π†[rREG(s, a, a†)] -Ea†′∼π†′[rREG(s, a, a†′) = E(a†,a†′)∼Coupling(π†,π†′) rREG(s, a, a†) -rREG(s, a, a†′) = E(a†,a†′)∼Coupling(π†,π†′) r(s, a, a†) -r(s, a, a†′) ≤E(a†,a†′)∼Coupling(π†,π†′)dr∥a† -a†′∥1 As this works for any coupling of (π†, π†′), we can pick the optimal coupling that achieves the l1-Wasserstein distance, defined as follows: W1(π†, π†′) = inf ν∈Coupling(π†,π†′) Z A†×A† ∥a† -a†′∥1dν(a†, a†′), (36) in which the infimum can be replaced by minimum when the coupling space is compact. Indeed, when the action space is discrete and finite in our case, the compactness is guaranteed. Then, | ̄rREG†(s, a, π†) - ̄rREG†(s, a, π†′)| ≤drEa∼π[W1(π†, π†′)] = drW1(π†, π†′) ≤dr∥π† -π†′∥1. The last inequality can be established by noticing that for any optimal coupling νTV that attains the minimum of the total variance distance, which is defined as: dTV(π†, π†) = νTV(a† ̸= a†′) := inf ν∈Coupling(π†,π†′) ν(a† ̸= a†′) = 1 2∥π† -π†′∥1, the following condition must be satisfied with the assumption that diam(A†) = 1 has been normalized: W1(π†, π†′) ≤E(a†,a†′)∼νTV[∥a† -a†′∥1] = νTV(a† = a†′)E[∥a† -a†′∥1 | a† = a†′] + νTV(a† ̸= a†′)E[∥a† -a†′∥1 | a† ̸= a†′] = 0 + νTV(a† ̸= a†′)E[∥a† -a†′∥1 | a† ̸= a†′] ≤diam(A†)νTV(a† ̸= a†′) = 1 2∥π† -π†′∥1 ≤∥π† -π†′∥1. We immediately have that | ̄rREG†(s, a, π†) - ̄rREG†(s, a, π†′)| ≤dr∥π† -π†′∥1. The proof to ̄P † being dP -Lipschitz with respect to π† is symmetric. Now we can look at the learning problem. Since at different rounds, we solve a different RL problem, we are essentially dealing with different Q-functions. we define Qπ†(s, a) = ̄rREG†(s, a, π†) + γ X s′∈S Q∗,π†(s′) ̄P †(s′|s, a, π†), (37) where Q∗,π†(s) = maxa∈A Qπ†(s, a) for all s. The next is to prove that Q∗,π† is dQ-Lipschitz continuous with respect to the states s, where dQ = dr 1-γdP /2. Define Tπ† as the Bellman operator for the problem with π†. We can rewrite Q∗,π† in the form of Tπ† as follows Q∗,π†(s) = max a∈A ( ̄rREG†(s, a, π†) + γ X s′∈S Q∗,π†(s′) ̄P †(s′|s, a, π†) ) = Tπ†Q∗,π†(s), (38) which is the Bellman optimality condition. It is known that the operator forms a γ-contraction mapping. Start with any Q, and apply Tπ†, by Banach fixed point theorem, limn→∞T n π†Q →Q∗,π†. 16 Choose the initial Q to be dK-Lipschitz where dK < dr, then Q/dK is 1-Lipschitz. For any s1, s2, the following holds |Tπ†Q(s1) -Tπ†Q(s2)| ≤max a∈A ̄rREG†(s1, a, π†) + γ X s′∈S Q(s′) ̄P †(s′|s1, a, π†) - ̄rREG†(s2, a, π†) -γ X s′∈S Q(s′) ̄P †(s′|s2, a, π†) ≤max a∈A n | ̄rREG†(s1, a, π†) - ̄rREG†(s2, a, π†)| + γ X s′∈S Q(s′) ̄P †(s′|s1, a, π†) - X s′∈S Q(s′) ̄P †(s′|s2, a, π†) o ≤max a∈A n dr∥s1 -s2∥1 + γdK X s′∈S Q(s′) dK ̄P †(s′|s1, a, π†) - X s′∈S Q(s′) dK ̄P †(s′|s2, a, π†) o ≤ dr + γ dKdP 2 ∥s1 -s2∥1. Inductively, we have for all n ≥1, the following condition holds, |T n π†Q(s1) -T n π†Q(s2)| ≤ dr n-1 X k=0 γdP 2 k + dK γdP 2 n! ∥s1 -s2∥1 ≤dr n X k=0 γdP 2 k ∥s1 -s2∥1 ≤ dr 1 -γdP /2∥s1 -s2∥1, where the second inequality is a result of dK < dr, and the third inequality uses the fact that γdP /2 ∈[0, 1] with which the geometric series is bounded above. Hence, T n π† is dr 1-γdP /2-continuous for all n, which holds true when n →∞, where T n π†Q →Q∗,π†(s). We then set dQ = dr 1-γdP /2 for notation easiness. We now claim that Q∗,π† is d0-Lipschitz continuous with respect to s, where d0 = 1 1-γ dr + γ dP dQ 2 . For any π† 1, π† 2 ∈P(A†), we have ∥Q⋆ π† 1 -Q⋆ π† 2∥∞= max s,a ̄rREG†(s, a, π† 1) + γ X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 1) - ̄rREG†(s, a, π† 2) -γ X s′∈S Q⋆ π† 2(s′) ̄P †(s′|s, a, π† 2) ≤| ̄rREG†(s, a, π† 1) - ̄rREG†(s, a, π† 2)| + γ X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 1) - X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 2) + γ X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 2) - X s′∈S Q⋆ π† 2(s′) ̄P †(s′|s, a, π† 2) ≤dr∥π† 1 -π† 2∥1 + γ dP dQ 2 ∥π† 1 -π† 2∥1 + γ∥Q⋆ π† 1 -Q⋆ π† 2∥∞, where the first term follows the Lipschitz assumption on the reward and the last term uses the fact that ̄P † is probability. The second term can be bounded as follows. Notice that for any π†, Q∗,π† is dQ-Lipschitz continuous implies Q∗,π†/dQ is 1-Lipschitz continuous with respect to s. Then, X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 1) - X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 2) = dQ X s′∈S Q⋆ π† 1(s′) dQ ̄P †(s′|s, a, π† 1) - X s′∈S Q⋆ π† 1(s′) dQ ̄P †(s′|s, a, π† 2) ≤dQ 2 ∥ ̄P †(s, a, π† 1) - ̄P †(s, a, π† 2)∥1 ≤dP dQ 2 ∥π† 1 -π† 2∥1, 17 where we use equation (35) and Lipschitz continuity on the transition kernel. Then by rearranging the terms, we obtain that ∥Q⋆ π† 1 -Q⋆ π† 2∥∞≤d0∥π† 1 -π† 2∥1 where d0 = 1 1-γ dr + γ dP dQ 2 . Equation (37) can be rewritten as follows: Qπ†(s, a) = ̄rREG†(s, a, π†) + γ X s′∈S Q∗,π†(s′) ̄P †(s′|s, a, π†) -H(π) = ⟨qπ†,s, a⟩-H(π), (39) where qπ†,s = ̄rREG†(s, ·, π†) + γ P s′∈S Q∗,π†(s′)P(s′|s, ·, π†) for any s. We now prove that is dr + γd0 + γ dP dQ 2 -Lipschtiz continuous with respect to π†. Indeed, one has ∥qπ† 1,s -qπ† 2,s∥∞= max a∈A ̄rREG†(s, a, π† 1) + γ X s′∈S Q∗,π†(s′) ̄P †(s′|s, a, π† 1) - ̄rREG†(s, a, π† 2) -γ X s′∈S Q∗,π†(s′) ̄P †(s′|s, a, π† 2) ≤dr∥π† 1 -π† 2∥1 + γ max a∈A X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 1) - X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 2) + γ max a∈A X s′∈S Q⋆ π† 1(s′) ̄P †(s′|s, a, π† 2) - X s′∈S Q⋆ π† 2(s′) ̄P †(s′|s, a, π† 2) ≤dr∥π† 1 -π† 2∥1 + γ∥Q⋆ π† 1 -Q⋆ π† 2∥∞+ γ dP dQ 2 ∥π† 1 -π† 2∥1 = dr + γd0 + γ dP dQ 2 ∥π† 1 -π† 2∥1. We now apply Lemma A.4. For any s ∈S, we write BRREG(s, π†) = ∇H⋆(qπ†,s) where H⋆is the Fenchel conjugate of H. Then, ∥BRREG(s, π† 1) -BRREG(s, π† 2)∥1 ≤1 ρ∥qπ† 1,s -qπ† 2,s∥∞= dr + γd0 + γdP dQ/2 ρ ∥π† 1 -π† 2∥1. The argmax is therefore Lipschitz with constant dr+γd0+γdP dQ/2 ρ . Then by substituting d0, dQ and bringing back the agent index i, we get dREG i = dr ρ 1 + γi (1 -γi)(1 -γidP /2) + γidP /2 1 -γidP /2 , ∀i ∈I. (40) A.6 Stationary Stackelberg Markov Equilibrium with Mean-Field Followers In this section, we present the proof of Theorem 5.1 and introduce an algorithm that iteratively computes an SS-MFE. A.6.1 Assumptions of Theorem 5.1 To establish the existence and uniqueness of an SS-MFE, we adopt the following assumption: Assumption A.2 (Uniqueness of Best Response and MF Update). For each agent i ∈I, for any si ∈Si, s-i ∈S-i, π-i ∈P(A-i), and for any follower's MF μF ∈P(SF × AF ), agent i's best response function BRi(si, s-i, π-i, μF ) admits a unique solution. In addition, the MF update map Γ(μF , πL, πF ) also returns a unique solution. Assumption A.3 (Lipschitz Best Responses). There exist constants dF L, dμ L, dL F , dμ F , dL μ, dF μ , dμ μ ≥0 such that for any admissible leader's policies πL, π′ L ∈P(AL), the follower's policies πF , π′ F ∈ 18 P(AF ), and follower's MF μF , μ′ F ∈P(SF × AF ): sup sF ,sL ∥BRF (sF , sL, πL, μF ) -BRF (sF , sL, π′ L, μ′ F )∥1 ≤dL F ∥πL -π′ L∥1 + dμ F ∥μ -μ′∥1, (41) ∥Γ(μF , πL, πF ) -Γ(μ′ F , π′ L, π′ F )∥1 ≤dμ μ∥μF -μ′ F ∥1 + dL μ∥πL -π′ L∥1 + dF μ ∥πF -π′ F ∥1, (42) sup sL,sF ∥BRL(sL, sF , πF , μF ) -BRL(sL, sF , π′ F , μ′ F )∥1 ≤dF L∥πF -π′ F ∥1 + dμ L∥μ -μ′∥1. (43) A.6.2 Proof of Theorem 5.1 - Existence and Uniqueness of SS-MFE We define the map BRF μ : SF × SL × P(AL) 7→P(AF ) × P(SF × AF ), which is simply a composite update map from (19) and (20); that is, at the outer iteration k, given current states sF , sL and leader's policy πk L, the inner iteration returns BRF μ(sF , sL, πk L) = (πk∗ F , μk∗ F ). Proof of Theorem 5.1. Fix a leader policy πL ∈P(AL) and states sL ∈SL, sF ∈SF . We first show that the mapping BRF μ returns a unique solution by showing that Γ is contractive, then we show that BRF μ forms a contractive mapping. Consider any pair of follower policies πF , π′ F ∈P(AF ) and mean-field distributions μF , μ′ F ∈P(SF × AF ) that satisfy πF = BRF (sF , sL, πL, μF ) and π′ F = BRF (sF , sL, πL, μ′ F ), we have: ∥Γ(μF , πL, πF ) -Γ(μ′ F , π′ L, π′ F )∥1 ≤dμ μ∥μF -μ′ F ∥1 + dF μ ∥πF -π′ F ∥1 ≤dμ μ∥μF -μ′ F ∥1 + dF μ ∥BRF (sF , sL, πL, μF ) -BRF (sF , sL, πL, μ′ F )∥1 ≤(dμ μ + dF μ dμ F )∥μF -μ′ F ∥1. As dμ μ + dF μ dμ F ∈[0, 1), Γ forms a contractive mapping by Banach's fixed point theorem. And since BRF returns unique solution, we conclude that the follower's side converges to a unique fixed point. For any πL, π′ L, let the corresponding follower's fixed points be (π∗ F , μ∗ F ) = BRF μ(sF , sL, πL), and (π∗′ F , μ∗′ F ) = BRF μ(sF , sL, π′ L). Then, the following holds: ∥BRF μ(sF , sL, πL) -BRF μ(sF , sL, π′ L)∥= ∥π∗ F -π∗′ F ∥1 + ∥μ∗ F -μ∗′ F ∥1 = ∥BRF (sF , sL, πL, μ∗ F ) -BRF (sF , sL, π′ L, μ∗′ F )∥1 + ∥Γ(μ∗ F , πL, π∗ F ) -Γ(μ∗′ F , π′ L, π∗′ F )∥1 ≤(dL F + dL μ)∥πL -π′ L∥1 + (dμ F + dμ μ)∥μ∗ F -μ∗′ F ∥1 + dF μ ∥π∗ F -π∗′ F ∥1 ≤(dL F + dL μ)∥πL -π′ L∥1 + (dμ F + dμ μ + dF μ ) ∥π∗ F -π∗′ F ∥1 + ∥μ∗ F -μ∗′ F ∥1 . By rearranging the term, we get ∥BRF μ(sF , sL, πL) -BRF μ(sF , sL, π′ L)∥1 = ∥π∗ F -π∗′ F ∥1 + ∥μ∗ F -μ∗′ F ∥1 ≤ dL F + dL μ 1 -(dμ F + dμ μ + dFμ )∥πL -π′ L∥1. Because dL F +dL μ 1-(dμ F +dμ μ+dF μ ) ∈[0, 1), BRF μ forms a contractive mapping by Banach's fixed point theorem. As a result, there exists a unique stationary SS-MFE to GMF. A.6.3 Algorithm 2 - RL-Framework for Finding an SS-MFE We now present a reinforcement learning procedure for computing an SS-MFE. The algorithm extends the three-step RL framework described earlier to incorporate a nested fixed-point computation for the mean-field distribution of the followers. At each outer iteration, the leader updates its policy based on the aggregate follower response, while the inner loop computes the consistent mean-field and best response for the followers. The complete procedure is outlined below. 19 Algorithm 2: An RL to Single-Leader-MF-Follower Stackelberg Games Input: Initial states s0 L, s0 F , leader's policy π0 L, initial follower's MF μ0 F , tolerance tol, RL algorithms AlgL, AlgF for Iteration k = 0, 1, 2, · · · do Leader takes action ak L ∼πk L(·|sk L) ; Set μk,0 F = μk F ; for Iteration τ = 0, 1, 2, · · · do Follower learns its best response policy πk,τ F = BRF (sk F , sk L, πk L, μk,τ F ) through AlgF ; Follower's MF updates as μk,τ+1 F = Γ(μk,τ F , πk L, πk,τ F ) ; If ∥μk,τ+1 F -μk,τ F ∥1 ≤tol, set (πk F , μk F ) = (πk,τ F , μk,τ F ) and exit the inner loop. end Follower takes action ak F ∼πk F (·|sk F ) ; Leader learns its best response policy πk+1 L = BRL(sk L, sk F , πk F , μk F ) through AlgL ; State transition sk+1 L ∼PL(sk L, ak L, ak F , μk F ), sk+1 F ∼PF (sk F , ak F , ak L, μk F ) ; If ∥πk+1 L -πk L∥1 ≤tol, exit the outer loop. end Return (πSE L , πSE F , μSE F ) = (πk L, πk F , μk F ) as the SS-MFE. A.7 Numerical Experiment Specification and Results A.7.1 Input Data and Hyper-parameters Our numerical simulation's data and code can be found at https://anonymous.4open.science/ r/StackelbergGame-B592 and also in the supplemental materials. To reproduce our results, we require Python 3.10.11. All necessary packages are included in the requirement.txt file. The main packages used are: Gymnasium (version 1.0.0, [18]) for environment setting; StableBaselines3 (version 2.3.2, [19]) for RL; and Pyomo (version 6.7.2, [20, 21]) for solving the economic dispatch linear programming problem. We use a stylized 3-bus power system as the test case. The input specifications for the bus nodes (including demographic data of prosumers and consumers), transmission lines, and grid-level generators are provided in Tables 1, 2, and 3, respectively. The numerical experiment uses PPO as the RL algorithm. The training specification is listed in Table 4. Table 1: Bus Node Data Bus Node Name Parameter b1 b2 b3 P Load (kW) 110.0 110.0 95.0 Q Load (kVar) 40.0 40.0 50.0 Max Voltage (p.u.) 1.1 1.1 1.1 Min Voltage (p.u.) 0.9 0.9 0.9 Voltage Magnitude 1.1 0.92617 0.9 Voltage Angle 0.0 7.25883 -17.2671 Base KV 345 345 345 Prosumer Population 1,000 500 300 Energy Storage Capacity (kWh) 30 60 100 Energy Storage One-way Efficiency 0.8 0.8 0.8 Prosumer Income/household (US ) 15,000 15,000 15,000 A.7.2 Input Data of Solar and Demand Shapes We set each day to be of 8 time steps, each of which represents a 3-hour interval. Figure 3 shows the input solar capacity shape from [22] and energy demand shapes for both prosumers and consumers at each timestep adapted from [23]. Let ∆(a, b, c) denote a triangular distribution with lower limit 20 Table 2: Transmission Line Data Transmission Line Name Parameter l1 l2 l3 Source Bus b1 b3 b1 Target Bus b3 b2 b2 Reactance (Ω) 0.065 0.025 0.042 Susceptance (S) 0.62 0.75 0.9 Normal Flow Limit (MW) 100 100 100 Table 3: Grid-Level Generator Data Grid-Level Generator Name Parameter g1 g2 solar solar2 Bus b1 b2 b3 b1 Fuel Type Oil Oil Solar Solar Cost Curve Coefficients∗ [0.2, 5.0, 0.0] [0.2, 4.0, 0.0] Free Free Max Production (MW) 2000.0 1500.0 30.0 30.0 ∗The cost curve is represented as an array, where the first entry is the quadratic coefficient, the second is the linear coefficient, and the third is the constant term. For an array [a, b, c], the cost function is C(p) = ap2 + bp + c, where p is amount of energy consumption in MWh. Table 4: Hyper-parameters for PPO Agents Hyperparameter Aggregators Utility Company Learning Rate 0.0003 0.0003 Discount Factor (γ) 0.9999 0.9999 Entropy Coefficient 0.01 0.01 Batch Size 128 128 Number of Epochs 10 10 Steps per Update 1200 1200 Clip Range 0.2 0.2 Policy Network † [18, 36] [24, 36] Value Network ◦ [18, 36] [24, 36] Training length 2000 2000 †,◦All policy and value networks are fully connected neural networks. Each array lists the number of neurons at each hidden layer. a, upper limit b, and mode c. In our simulation, we assume each consumer/prosumer's demand and solar profile follow the average shapes shown in Figure 3, scaled by a random factor drawn from the triangular distribution ∆(0.8, 1.2, 1). This introduces variability across agents while preserving the overall profile shape. All data is scaled relative to the average individual storage capacity across all prosumers and consumers, computed using Table 1. To maintain consistency, we assume each consumer has a reference storage capacity of 10kWh. The demand input represents the energy consumed in each time step. For consumers, this is their total consumption; for prosumers, it reflects net demand after subtracting their solar generation. A.7.3 Follower's Learning Result - Price Volatility and Prosumer Net Demand Shape To better measure the impacts of energy storage coupled with the RL algorithms on locational marginal price (LMP) volatility, we adopt incremental mean volatility (IMV) from [24] as the metric. For a sequence of LMPs {LMPt}∞ t=1, the IMV is defined as IMV = lim T →∞ 1 T T X t=1 LMPt+1 -LMPt . (44) 21 0 5 10 15 20 Hour of the Day 0.00 0.25 0.50 0.75 1.00 Energy Solar Generation Shape Solar 0 5 10 15 20 Hour of the Day 0.4 0.6 0.8 1.0 1.2 Load Shape (Prosumer & Consumer) Prosumer Consumer Figure 3: Input shapes for solar capacity shape (left, data adapted from [22]) and load demand shapes for prosumers and consumers (right, data adapted from [23]). All data is scaled to be with respect to the average storage capacity. The shadow areas indicate the noise bound of each shape. Figure 4 shows the IMV of the last 3 days between the two scenarios: with storage and RL, and without storage or RL. Results indicate that the scenario with storage and RL achieves a significant reduction in IMV, approximately 3 units lower, highlighting notably less volatile and more stable electricity prices. 0 6 12 18 0 6 12 18 0 6 12 18 Hour of the Day 2 3 4 5 IMV IMV of Hub Price (Last 3 Days) With Storage and RL No Storage or RL Figure 4: Comparison of IMV of the last 3 days between two scenarios: with storage and RL, and without storage or RL. Shadow areas show the 1-sigma error bounds across all simulations. To further understand how RL influences consumption behavior, we examine the resulting net demand profiles and compare them to the original input demand. As shown in Figure 5, the RL-based strategy significantly reshapes prosumer net demand. It shifts a considerable portion of energy consumption (charging) toward midday, as a response to low electricity prices and abundant solar generation. The net demand turns negative during peak evening hours, indicating energy selling back to the grid when prices are high. The curve after learning is less smooth due to the existence of cost-free grid-level solar generators, prosumers can increase their consumption without increasing the price too much. 0 3 6 9 12 15 18 21 Hour of the Day 1.0 0.5 0.0 0.5 1.0 1.5 Energy Demand Shape Prosumer (with Storage and RL) Prosumer (No Storage or RL) Consumer Figure 5: The vertical axis indicates the energy amount scaled down by a factor of the total storage level of the corresponding agent types (prosumers or consumers). The shaded areas indicate one standard deviation error bounds computed over all 10 days and all simulation runs. 22 A.7.4 Leader's Learning Result - Energy Expenditure Incidence (EEI) We now show the EEI for both prosumers and consumers in Figure 6. The EEI is defined as the percentage of energy expenditures to total household income. Under our experimental setup, the utility company's optimal strategy reduces the EEI gap between prosumers and consumers from approximately 1% to about 0.7%, indicating improved equity across different income groups and customer types. We note that the EEI values are typically small since energy spending constitutes only a minor portion of total household income [25]. 0 20 40 60 80 100 Day 1.50 1.75 2.00 2.25 2.50 2.75 EEI (%) Prosumer and Comsumer EEI Prosumer Consumer Figure 6: EEI over time for prosumers and consumers. The learned policy reduces the EEI gap between the two groups, indicating improved income-based equity. Shaded regions represent one standard deviation across simulation runs. 23
|
2509.16288
|
Identifying Critical Pathways in
Coronary Heart Disease via Fuzzy
Subgraph Connectivity
Transportation Research Record
2020, Vol. XX(X) 1–9
©National Academy of Sciences:
Transportation Research Board 2020
Article reuse guidelines:
sagepub.com/journals-permissions
DOI: 10.1177/ToBeAssigned
journals.sagepub.com/home/trr
SAGE
Shanookha Ali1 and Nitha Niralda P C 2
Abstract
Coronary heart disease (CHD) arises from complex interactions among uncontrollable factors, controllable
lifestyle factors, and clinical indicators, where relationships are often uncertain. Fuzzy subgraph connectivity
(FSC) provides a systematic tool to capture such imprecision by quantifying the strength of association between
vertices and subgraphs in fuzzy graphs. In this work, a fuzzy CHD graph is constructed with vertices for
uncontrollable, controllable, and indicator components, and edges weighted by fuzzy memberships. Using FSC,
we evaluate connectivity to identify strongest diagnostic routes, dominant risk factors, and critical bridges. Results
show that FSC highlights influential pathways, bounds connectivity between weakest and strongest correlations,
and reveals critical edges whose removal reduces predictive strength. Thus, FSC offers an interpretable and
robust framework for modeling uncertainty in CHD risk prediction and supporting clinical decision-making.
Introduction
Coronary heart disease (CHD) is a leading global
health burden, accounting for significant morbidity
and
mortality.
The
identification
and
analysis
of
CHD risk factors is a major challenge in preventive
cardiology. These risk factors are broadly classified into
uncontrollable factors such as age and family history,
and controllable factors such as smoking, diet, and
physical activity. Additionally, clinical indicators such as
blood pressure, cholesterol levels, and electrocardiogram
(ECG) findings serve as intermediate measures linking
lifestyle and hereditary factors with disease outcomes.
In the study of fuzzy graphs, various aspects such as
connectivity, spanning structures, and network flow have
been extensively explored. The foundational concept of
fuzzy sets introduced by Zadeh (14) laid the groundwork
for the development of fuzzy graph theory, which was
formalized by Rosenfeld (13). Early investigations on
fuzzy graphs and their properties were carried out by
Bhattacharya (5), Bhutani and Rosenfeld (6, 7) and
Rosenfeld (13), who examined strong arcs and fuzzy end
nodes. Applications of vertex and node connectivity in
fuzzy graphs have been explored in the context of human
trafficking networks by Ali et al. (1). Comprehensive
treatments of fuzzy graph theory can be found in the
books by Mordeson and Nair (10) and Mordeson et
al. (9), including detailed discussions on node and arc
connectivity Mathew and Sunitha (8).
Vertex
connectivity
in
fuzzy
graphs
has
been
analyzed to evaluate resilience and vulnerability in
trafficking chains Ali et al. (1), while Hamiltonian
fuzzy graphs provide a foundation for understanding
cyclical structures that often underlie such networks Ali
et al. (2). More recently, the concept of containers and
spanning containers has been introduced to study the
flow and concealment strategies of traffickers within
uncertain environments Ali et al. (3). Complementing
these contributions, Mordeson, Mathew, and Ali have
applied fuzzy path fans to model the health consequences
faced by trafficking victims, thereby highlighting the
applicability of fuzzy graph theory beyond structural
analysis and into human well-being by Mordeson et
al. (11). Together, these works underscore the versatility
of fuzzy graph theory in addressing both theoretical
challenges and real-world problems associated with
trafficking.
Conventional diagnostic methods, including regres-
sion and probabilistic models, often assume precise
relationships between variables. However, in real-world
1Department of General Science, Birla Institute of Technology
& Science, Pilani, Dubai Campus, Dubai 345055, United Arab
Emirates
2Department of Mathematics & Statistics, Providence Women’s
College, Calicut, Kerala, 673009, India
Corresponding author:
Shanookha Ali, shanookha@dubai.bits-pilani.ac.in
Prepared using TRR.cls [Version: 2020/08/31 v1.00]
arXiv:2509.16288v1 [cs.AI] 19 Sep 2025
2
Transportation Research Record XX(X)
medical data, relationships are rarely crisp or determin-
istic. For example, the effect of age on CHD varies
across populations, and the influence of smoking on
cardiovascular risk may depend on duration, intensity,
and co-occurring conditions. These inherent uncertain-
ties motivate the application of fuzzy graph theory, which
provides a mathematical framework for handling impre-
cise, uncertain, and approximate relationships.
In this study, we construct a fuzzy CHD graph in which
vertices represent uncontrollable, controllable, and indi-
cator factors, while edges denote their relationships with
membership values in [0, 1]. Using fuzzy connectivity
concepts, we investigate:
1. pairwise connectivity (CONNG(u, v)) between
individual risk factors,
2. vertex-to-subgraph
connectivity
(CONNG(x, H)), which evaluates the influence
of
a
single
factor
on
a
group
of
related
components, and
3. subgraph-to-subgraph
connectivity
(CONNG(Hi, Hj)), which assesses the global
strength of interaction between categories of
factors.
Our analysis shows that fuzzy connectivity not only
quantifies the strength of risk pathways but also identifies
critical bridges edges whose removal significantly
reduces
connectivity.
Such
bridges
highlight
key
clinical factors (e.g., smoking – ECG relationship) that
dominate diagnostic predictions. Moreover, strongest
paths reveal the most significant diagnostic routes,
providing interpretability to clinicians. By bounding
connectivity between the weakest and strongest observed
correlations, the framework also ensures reliable upper
and lower limits for risk prediction.
Overall, this fuzzy graph-based approach contributes
to a more nuanced understanding of CHD risk dynamics.
By capturing uncertainty and identifying strongest
connections,
the
method
supports
both
diagnostic
decision-making and the prioritization of preventive
interventions.
Preliminaries
In this section, we briefly recall the basic concepts of
fuzzy sets, fuzzy graphs, and fuzzy connectivity that will
be used in the sequel. Throughout, we follow standard
notations used in the literature (13), (12), (5).
A fuzzy set A on a universe X is defined by a
membership function µA : X →[0, 1], where µA(x)
represents the degree of membership of x in A.
For example, if X is the set of possible ages of
patients, then the fuzzy set “elderly” may assign
µA(65) = 0.8, µA(50) = 0.4, reflecting vagueness in
age categorization. This concept provides a natural tool
to model uncertainty in medical datasets.
A fuzzy graph G is defined as a pair G = (σ, µ),
where σ : V →[0, 1] assigns a membership value to
each vertex v ∈V and µ : V × V →[0, 1] assigns a
membership value to each edge (u, v), subject to the
condition
µ(u, v) ≤min{σ(u), σ(v)}.
Here σ(u) may represent the relevance of a CHD
factor, while µ(u, v) denotes the strength of relationship
between two factors. This generalization of classical
graphs was first introduced by Rosenfeld (13). A path in
a fuzzy graph G is a sequence of vertices u0, u1, . . . , uk
such that µ(ui, ui+1) > 0 for all i. The strength of a path
P is defined as
str(P) =
min
(ui,ui+1)∈P µ(ui, ui+1).
The u–v connectivity in G is then given by
CONNG(u, v) = max
P :u;v str(P),
where the maximum is taken over all paths connecting u
and v.
Some basic results used in this paper are summarized
below:
• Symmetry:
CONNG(Hi, Hj) =
CONNG(Hj, Hi).
• Bounds:
r(G) ≤CONNG(Hi, Hj) ≤d(G),
where r(G) and d(G) are the minimum and
maximum edge strengths in G.
• Bridge property: If uv is a fuzzy bridge in
G, then there exist subgraphs Hi, Hj such that
CONNG(Hi, Hj) = µ(uv).
• Strongest path: The strongest path between two
subgraphs corresponds to the most significant
diagnostic route in an applied setting.
These concepts will be applied to the fuzzy CHD graph.
Fuzzy subgraph connectivity
Definition 1.1. u −v connectivity: Maximum of the
strengths of all paths between u and v is called strength
of connectedness between u and v and denoted by
CONNG(u, v).
Definition 1.2. Let G = (σ, µ) be a fuzzy graph with
proper fuzzy subgraph H(ν, τ). For x ∈σ∗\ν∗, x −
H connectivity is defined as maximum of strength of
connectedness between x and u where u ∈ν∗, denoted
by CONNG(x, H). That is
CONNG(x, H) = max
u∈ν∗CONNG(x, u)
.
Prepared using TRR.cls
Shanookha Ali and Nitha Niralda P C
3
t
t
t
t
t
a
e
b
d
c
0.1
0.4
0.3
0.15
0.1
0.9
t
t
t
b
d
c
0.15
0.1
0.1
(a) G
(b) H
Figure 1. Fuzzy graphs G with proper induced fuzzy
subgraph H.
t
t
a
d
0.9
t
t
b
c
0.15
(a) H1
(b) H2
Figure 2. Proper induced fuzzy subgraphs of fuzzy graph
G in Example 1.3
Example
1.3. Let
G = (σ, µ)
be
with
σ∗=
{a, b, c, d, e},
σ(a, b) = 0.4,
σ(b, c) = 0.15,
σ(b, d) = σ(c, d) = 0.1,
σ(a, d) = 0.9
and
σ(e, d) = 0.3 (see Figure 1 (a)). Let H = (ν, τ)
be fuzzy subgraph induced by ν∗= {b, c, d} (see
Figure 1 (b)). For a ∈σ∗\ν∗, CONNG(a, H) =
max{CONNG(a, b), CONNG(a, d), CONNG(a, c)}
= max{0.4, 0.15, 0.9} = 0.9.
Similarly
CONNG(e, H) = 0.4.
Definition 1.4. Let G = (σ, µ) be a fuzzy graph
with proper disjoint induced fuzzy subgraphs H1
and H2. Then fuzzy subgraph connectivity (FSC)
between H1 and H2 denoted by CONNG(H1, H2)
is maximum of CONNG(x, H2), for x ∈σ∗
1. That is
CONNG(H1, H2) = max
x∈σ∗
1
CONNG(x, H2).
Consider the fuzzy graph G in Figure 1 (a). Let H1
and H2 be fuzzy subgraphs of G induced by {a, d}
and {b, c} respectively (see Figure 2 (a) and (b)).
Then
CONNG(H1, H2) = max{CONNG(a, H2),
CONNG(d, H2)} = 0.4.
Proposition 1.5. Let G = (σ, µ) be a fuzzy graph with
proper disjoint induced fuzzy subgraphs H1 and H2.
Then FSC is symmetric.
For u, v ∈σ∗, CONNG(u, v) = CONNG(v, u). So
it is clear that CONNG(H1, H2) = CONNG(H2, H1).
Fuzzy subgraph connectivity need not be transitive. This
can be observed from the following example.
Example 1.6. Let G = (σ, µ) be a fuzzy graph with
σ∗= {a, b, c, d, e, f, g},
σ(a, b) = 0.1,
σ(b, c) = 0.3,
σ(c, d) = 0.25,
σ(d, e) = 0.9
σ(c, f) = 0.11,
σ(a, c) = 0.9 and σ(a, g) = 0.8 (see Figure 3). Let
H1 = (ν1, τ1), H2 = (ν2, τ2), and H3 = (ν3, τ3) be
fuzzy subgraph induced by ν∗
1 = {a, b}, ν∗
2 = {d, e} and
ν∗
3 = {f, g} respectively. Then {CONNG(H1, H2) =
0.25
and
CONNG(H2, H3)} = 0.25.
Where
as
CONNG(H1, H3)} = 0.8.
t
t
t
t
t
t
t
a
b
c
d
e
f
g
0.1
0.3
0.11
0.3
0.25
0.9
0.9
Figure 3. Fuzzy graph in the Example 1.6 .
Definition 1.7. Let G = (σ, µ) be a fuzzy graph. A
pair of proper disjoint induced fuzzy subgraphs H1
and H2 is said to be t-fuzzy subgraph connected if
CONNG(H1, H2) = t.
Theorem 1.8. Let X = {H1, H2, · · · , Hk} be the set
of fuzzy subgraphs of fuzzy graph G = (σ, µ) such
that CONNG(Hi, Hj) = t for i ̸= j. Then we define
a relation R on X such that HiRHj if and only if
CONNG(Hi, Hj) = t or Hi = Hj.
Proof. We prove that R is an equivalence relation on X
by checking reflexivity, symmetry and transitivity.
For any Hi ∈X we have Hi = Hi, so by definition
HiRHi.
Hence
R
is
reflexive.
Let
Hi, Hj ∈X
and
suppose
HiRHj.
By
definition
this
means
either
Hi = Hj
or
CONNG(Hi, Hj) = t.
If
Hi = Hj then trivially Hj = Hi and thus HjRHi.
Otherwise,
since
connection
is
symmetric
(i.e.
CONNG(Hi, Hj) = CONNG(Hj, Hi)
for
all
i, j),
we have CONNG(Hj, Hi) = t, hence HjRHi. Thus
R is symmetric. Let Hi, Hj, Hk ∈X and assume
HiRHj and HjRHk. We consider cases: If any two of
Hi, Hj, Hk are equal, transitivity follows immediately
from reflexivity/symmetry. Otherwise all three are
distinct. By hypothesis of the theorem, for every pair
of distinct indices the connection equals t; in particular
CONNG(Hi, Hk) = t. Hence HiRHk. Therefore R is
transitive.
Having checked reflexivity, symmetry and transitivity,
we conclude that R is an equivalence relation on X.
Under the stated hypothesis (every pair of distinct
subgraphs has connection t), every two distinct elements
of X are related; hence R is actually the universal
relation on X, so X is a single equivalence class under
R.
Proposition 1.9. uv is a fuzzy bridge of G = (σ, µ) if
and only if there exists a pair of proper induced disjoint
Prepared using TRR.cls
4
Transportation Research Record XX(X)
fuzzy subgraphs H1 and H2 with CONNG(H1, H2) =
µ(u, v).
Proof. Suppose uv is a fuzzy bridge of G. Then removal
uv reduces the strength of connectedness between some
pair of vertices say x and y in G. Choose H1 as
{x} and H2 as {y}. Then CONNG(H1, H2) = µ(u, v).
Conversely assume that for proper induced disjoint fuzzy
subgraphs H1 and H2, CONNG(H1, H2) = µ(u, v).
Hence uv is an edge of every strongest H1 −H2 path
in G. Choose a vertex x from σ∗
1 and y from σ∗
1. It
follows that uv is an edge of every strongest x −y path.
Therefore, uv is a fuzzy bridge of G.
Proposition 1.10. P is a strongest path in G if and
only if there exists a pair of proper disjoint induced
fuzzy subgraphs H1 and H2 with CONNG(H1, H2) =
str(P).
Proposition 1.11. Let Gf = (µV ; µE) be a fuzzy graph
and fix a left-continuous t-norm T. For a u–v path P =
(u = x0, x1, . . . , xm = v) define its (edge) strength by
str(P) = T
µE(x0x1), µE(x1x2), . . . , µE(xm−1xm)
.
For fuzzy (induced) subgraphs H1, H2 with disjoint
vertex sets, define
CONNG(H1, H2) = max
max
u∈σ∗
1, v∈σ∗
2
max
P ∈P(u,v) str(P)
,
i.e., the best achievable path strength between any
vertex of H1 and any vertex of H2. Then a path P
is a strongest path in Gf (i.e., its strength equals the
maximum strength over all paths in Gf) if and only
if there exist proper disjoint induced fuzzy subgraphs
H1, H2 with CONNG(H1, H2) = str(P).
Proof. Let P be a strongest path in Gf, and let its
endpoints be u and v. Set H1 = Gf[{u}] and H2 =
Gf[{v}], the induced fuzzy singletons. They are proper
and disjoint. By definition,
CONNG(H1, H2) =
max
P ′∈P(u,v) str(P ′) = str(P),
because P is, by assumption, a strongest u–v path and
(being globally strongest) has strength equal to the global
maximum over all paths as well. Hence the required
H1, H2 exist.
Conversely,
suppose
there
exist
proper
disjoint
induced
fuzzy
subgraphs
H1, H2
with
CONNG(H1, H2) = s. By definition of the maximum,
there exist u ∈σ∗
1, v ∈σ∗
2, and a u–v path P with
str(P) = s, and no path between any u′ ∈σ∗
1 and
v′ ∈σ∗
2 has strength exceeding s. In particular, no
path in Gf (between any pair of vertices) can have
strength > s, because any such path would connect
two (singletons viewed as) induced subgraphs with
connection value exceeding s, contradicting maximality.
Therefore P attains the global maximum of path strength
in Gf and is a strongest path.
The argument uses only that T is a monotone,
associative, and (left-)continuous t-norm so that (i)
adding edges to a path cannot increase its T-aggregated
strength and (ii) “max over paths” is well-defined and
attained (or approached) by a path.
Remark 1.12. If you adopt the common convention
str(P) = min{µE(e) : e ∈P}, take T = min above;
all steps go through verbatim. If you measure path
strength with vertex memberships as well (e.g. T-
aggregating both vertices and edges), replace the
definition of str(P) accordingly the proof structure is
unchanged.
Theorem 1.13. Let G = (σ, µ) be a fuzzy graph. Then
for any proper disjoint induced fuzzy subgraphs H1 and
H2, r(G) ≤CONNG(H1, H2) ≤d(G).
Proof. Let G = (σ, µ) be a fuzzy graph with edge set E.
For clarity we adopt the following standard auxiliaries:
r(G) = min
e∈µ∗µ(e)
and
d(G) = max
e∈µ∗µ(e),
the minimum and maximum edge-strengths in G. For
two proper, disjoint, induced fuzzy subgraphs H1, H2 of
G we set
CONNG(H1, H2) = max{µ(uv) : u ∈σ∗
1, v ∈σ∗
2,
uv ∈µ∗},
i.e. the maximum strength among edges joining a vertex
of H1 to a vertex of H2 (this is the same definition used
earlier for the tree case).
Since CONNG(H1, H2) is the maximum of the set
S = {µ(uv) : u ∈σ∗
1, v ∈σ∗
2, uv ∈µ∗},
and S is a subset of the set of all edge-strengths {µ(e) :
e ∈µ∗}, the following two elementary inequalities hold:
Every element of S is at least the global minimum, so
r(G) = min
e∈µ∗µ(e) ≤min S ≤max S
= CONNG(H1, H2).
In particular r(G) ≤CONNG(H1, H2). Every element
of S is at most the global maximum, so
CONNG(H1, H2) = max S ≤max
e∈µ∗µ(e) = d(G).
Combining the two displays yields the claimed
inequality
r(G) ≤CONNG(H1, H2) ≤d(G).
This completes the proof.
Prepared using TRR.cls
Shanookha Ali and Nitha Niralda P C
5
If instead one uses the path-based max–min connectivity
conn(u, v) = max
P :u;v min
e∈P µ(e),
CONNG(H1, H2) =
max
u∈σ∗
1, v∈σ∗
2
conn(u, v),
then the same inequality holds: for any path P we have
mine∈P µ(e) ≥r(G) and mine∈P µ(e) ≤d(G), hence
taking the outer max over paths yields
r(G) ≤CONNG(H1, H2) ≤d(G).
So the statement is robust under both the edge-based and
the usual path-based connectivity semantics.
Below e(u)H1,H2
G
denotes the
Definition 1.14. Let G = (σ, µ) be a fuzzy tree with
proper disjoint induced fuzzy subgraphs H1 and H2. A
vertex v in σ∗
2 is an eccentric vertex of a vertex u in
σ∗
1 with respect to G∗, denoted by e(u)H1,H2
G
= v, if
d(u, w) ≤d(u, v) for all w ∈σ∗
2.
Theorem 1.15. Let G = (σ, µ) be a fuzzy tree with
proper disjoint induced fuzzy subgraphs H1 and H2).
Then CONNG(H1, H2) is equal to strength of u −v
path P, where e(u)H1,H2
G
= v and e(v)H2,H1
G
= u.
Proof. Let G be a fuzzy graph where each edge
e ∈µ∗has a membership (strength) value µE(e) ∈
[0, 1]. We assume a fuzzy tree means the underlying
crisp graph (V, E) is a tree (no cycles) while edges
carry strengths. For two induced, proper, disjoint fuzzy
subgraphs H1, H2 of G we define
CONNG(H1, H2) = max{µE(uv) : u ∈σ∗
1, v ∈σ∗
2,
uv ∈µ∗},
i.e. the maximum strength of an edge joining a vertex of
H1 to a vertex of H2. Finally let
κ(G) = max{µE(e) : e ∈µ∗},
the maximum edge-strength in G. (The theorem is then
read with these meanings of CONN and κ.)
We must show
CONNG(H1, H2) = κ(G)
⇐⇒∃edge xy ∈µ∗
with x ∈σ∗
1, y ∈σ∗
2 and µE(xy) = κ(G).
Assume CONNG(H1, H2) = κ(G). By the definition
of CONNG(H1, H2) there is at least one edge uv
with u ∈σ∗
1 and v ∈σ∗
2 whose strength achieves the
maximum in the set used to define CONN; that is,
µE(uv) = CONNG(H1, H2).
Since CONNG(H1, H2) = κ(G) by assumption, we
have µE(uv) = κ(G). Setting x = u and y = v yields
the desired edge joining H1 and H2 with maximum
strength.
Conversely, suppose there exists an edge xy ∈µ∗
with x ∈σ∗
1, y ∈σ∗
2, and µE(xy) = κ(G). By definition
of CONNG(H1, H2) as the maximum strength among
edges between H1 and H2, we have
CONNG(H1, H2) ≥µE(xy) = κ(G).
But κ(G) is the global maximum edge-strength in G, so
no edge has strength greater than κ(G); hence
CONNG(H1, H2) ≤κ(G).
Combining
the
two
inequalities
gives
CONNG(H1, H2) = κ(G), as required.
Theorem 1.16. Let G be a fuzzy tree with proper
disjoint induced fuzzy subgraphs H1 and H2. Then
CONNG(H1, H2) = κ(G) if and only if x ∈σ∗
1 and
y ∈σ∗
2, where xy is an edge with maximum strength in
G.
Proof. Let
G = (V, E, µE)
be
a
fuzzy
tree:
the
underlying crisp graph (V, E) is a tree and each edge e ∈
µ∗has a strength (membership) µE(e) ∈[0, 1]. Let H1
and H2 be proper disjoint induced fuzzy subgraphs of G
(so σ∗
1 ∩σ∗
2 = ∅). We use the following two quantities:
CONNG(H1, H2) = max{µE(uv) : u ∈σ∗
1, v ∈σ∗
2,
uv ∈µ∗},
i.e. the maximum strength among edges with one
endpoint in H1 and the other in H2, and
κ(G) = max{µE(e) : e ∈µ∗},
the maximum edge-strength in G.
We must prove
CONNG(H1, H2) = κ(G)
⇐⇒
there exists an edge
xy ∈µ∗with x ∈σ∗
1, y ∈σ∗
2, µE(xy) = κ(G).
Assume CONNG(H1, H2) = κ(G). By the definition of
CONNG(H1, H2) the maximum is attained by at least
one edge between H1 and H2, i.e. there exists uv ∈µ∗
with u ∈σ∗
1, v ∈σ∗
2 and
µE(uv) = CONNG(H1, H2).
Since CONNG(H1, H2) = κ(G) by assumption, we
have µE(uv) = κ(G). Setting x = u and y = v yields
the claimed edge joining H1 and H2 with maximum
strength.
Prepared using TRR.cls
6
Transportation Research Record XX(X)
Conversely, suppose there exists an edge xy ∈µ∗with
x ∈σ∗
1, y ∈σ∗
2 and µE(xy) = κ(G). By the definition
of CONNG(H1, H2) (maximum over all edges between
H1 and H2) we have
CONNG(H1, H2) ≥µE(xy) = κ(G).
But κ(G) is the global maximum edge-strength in G, so
no edge has strength greater than κ(G). Therefore
CONNG(H1, H2) ≤κ(G).
Combining
the
two
inequalities
gives
CONNG(H1, H2) = κ(G), as required.
Corollory 1.17. Let G = (σ, µ) be a fuzzy tree with
proper disjoint induced fuzzy subgraphs H1 and H2.
Then CONNG(H1, H2) ≤κ(G).
Proof. We use the same notation and assumptions as
in the preceding theorem: G = (σ, µ) is a fuzzy tree,
H1, H2 are proper disjoint induced fuzzy subgraphs,
CONNG(H1, H2) = max{µ(e) : e = uv ∈µ∗,
u ∈σ∗
1, v ∈σ∗
2},
and
κ(G) = max{µ(e) : e ∈µ∗},
the global maximum edge-strength in G.
By definition CONNG(H1, H2) is the maximum of
the strengths of a subset of edges of G (those with one
endpoint in H1 and the other in H2). The maximum value
over any subset of real numbers is never larger than the
maximum value over the whole set. Hence
CONNG(H1, H2) ≤κ(G),
as required.
If you use the alternative “path-based” connectivity
CONNG(H1, H2) = max
P
min
e∈P µ(e) (where the outer
max is over all paths P joining a vertex of H1 to a
vertex of H2), the same inequality still holds because for
any path P we have mine∈P µ(e) ≤κ(G), and therefore
maxP mine∈P µ(e) ≤κ(G).
Theorem 1.18. Let G = (σ, µ) be a complete fuzzy
graph with proper disjoint induced fuzzy subgraphs H1
and H2. Then CONNG(H1, H2) = min{σ(u)}, where
u ∈(σ∗
1 ∪σ∗
2).
Proof. Let G = (σ, µ) be a complete fuzzy graph on
vertex set V ; by the usual convention for complete fuzzy
graphs we assume
µ(u, v) = min{σ(u), σ(v)}
for all distinct u, v ∈V.
Let H1, H2 be proper, disjoint, induced fuzzy subgraphs
of G (so σ∗
1 ∩σ∗
2 = ∅). Define
CONNG(H1, H2) = min{µ(u, v) : u ∈σ∗
1, v ∈σ∗
2},
i.e. the minimum edge strength among all edges with one
endpoint in H1 and the other in H2. Set
m = min{σ(u) : u ∈σ∗
1 ∪σ∗
2}.
We will prove CONNG(H1, H2) = m.
(1) CONNG(H1, H2) ≤m. Since m is the minimum
vertex strength on σ∗
1 ∪σ∗
2, there exists some vertex
w ∈σ∗
1 ∪σ∗
2 with σ(w) = m. Without loss of generality
assume w ∈σ∗
1. Pick any vertex v ∈σ∗
2 (such a vertex
exists because H2 is proper). Then the edge wv is present
and
µ(wv) = min{σ(w), σ(v)} = min{m, σ(v)} = m,
hence the minimum over all edges between H1 and H2
is at most m, i.e. CONNG(H1, H2) ≤m.
(2) CONNG(H1, H2) ≥m. For every edge uv with u ∈
σ∗
1 and v ∈σ∗
2 we have
µ(uv) = min{σ(u), σ(v)} ≥m,
because both σ(u) and σ(v) are at least m by definition
of m. Taking the minimum over all such edges gives
CONNG(H1, H2) ≥m.
Combining (1) and (2) yields CONNG(H1, H2) = m,
i.e.
CONNG(H1, H2) = min{σ(u) : u ∈σ∗
1 ∪σ∗
2},
as required.
Application to Chronic Heart Disease
Prediction
Medical data related to Chronic Heart Disease (CHD)
can be naturally modeled using fuzzy graphs, since
both patient data and diagnostic indicators are inherently
uncertain and imprecise. Figure 4 represents a fuzzy
graph model for CHD based on three types of
data: Uncontrollable data U1 = {a1, a2, a3}, consisting
of Age (a1), Gender (a2), Family history (a3).
Indicator data U2 = {c1, c2, . . . , c7}, including ECG,
stress test, echocardiogram, Holter, hematology, CT, etc.
Controllable data U3 = {d1, d2, d3, d4}, consisting of
Diet (d1), Sleep (d2), Activity (d3), Smoking (d4).
Each vertex is assigned a membership value σ(v) ∈
[0, 1] representing its degree of contribution to CHD,
while edges uv are assigned fuzzy membership µ(uv) ∈
[0, 1] representing the strength of influence or correlation
between the two factors.
Thus, the entire medical model can be viewed as a
fuzzy graph G = (σ, µ) with V (G) = U1 ∪U2 ∪U3.
Prepared using TRR.cls
Shanookha Ali and Nitha Niralda P C
7
Uncontrollable
a1: Age
a2: Gender
a3: Family history
Indicator data
c1: ECG
c2: Stress test
c3: Echo
c4: Holter
c5: Hematology
c6: CT
Controllable
d1: Diet
d2: Sleep
d3: Activity
d4: Smoking
Figure 4. Vertical layout of fuzzy CHD graph with
uncontrollable (ai), indicator (ci), and controllable (di) data.
Dotted arrows show mappings ai →cj, cj →dk, and direct
ai →dk.
Fuzzy subgraphs in the CHD model
The fuzzy graph G naturally decomposes into the
following induced fuzzy subgraphs:
H1 = ⟨U1⟩
(uncontrollable data),
H2 = ⟨U2⟩
(indicator data), H3 = ⟨U3⟩(controllable data).
Using our earlier definition of fuzzy subgraph
connectivity (FSC):
CONNG(Hi, Hj) =
max
x∈V (Hi) CONNG(x, Hj),
we can measure the relative influence between these
components.
Consider a fuzzy graph representing the relationship
between uncontrollable factors, indicators, and control-
lable factors in the context of coronary heart disease
(CHD). Let the vertex set be partitioned into three types:
Uncontrollable vertices A = {a1, a2, a3}, represent-
ing factors outside direct control (e.g., age, genetics,
lifestyle). Indicator vertices C = {c1, c2, c3, c4, c5, c6},
representing measurable health indicators. Controllable
vertices D = {d1, d2, d3, d4}, representing factors that
can be managed or intervened (e.g., blood pressure,
cholesterol, exercise).
The edges represent fuzzy relationships between
vertices, with membership values in [0, 1] indicating the
strength of influence. The fuzzy edge set is defined as
follows:
Table 1. Fuzzy edge membership values
Edge
µE
Type
a1 →c2
0.6
Uncontrollable →Indicator
a2 →c3
0.4
Uncontrollable →Indicator
a3 →c5
0.7
Uncontrollable →Indicator
c1 →d1
0.8
Indicator →Controllable
c3 →d2
0.5
Indicator →Controllable
c4 →d3
0.9
Indicator →Controllable
c6 →d4
0.3
Indicator →Controllable
a1 →d3
0.45
Uncontrollable →Controllable
a2 →d4
0.55
Uncontrollable →Controllable
The resulting fuzzy CHD graph is depicted in Fig-
ure 5, showing the hierarchical structure of relationships
between uncontrollable factors, indicators, and control-
lable factors, along with the corresponding fuzzy mem-
bership values.
a1
a2
a3
c1
c2
c3
c4
c5
c6
d1
d2
d3
d4
0.6
0.4
0.7
0.8
0.5
0.9
0.3
0.45
0.55
Fuzzy CHD Graph
Figure 5. Fuzzy graph of CHD with vertices ai
(uncontrollable), ci (indicators), di (controllable) and fuzzy
edge memberships.
Prepared using TRR.cls
8
Transportation Research Record XX(X)
Application of theoretical results
Using the fuzzy CHD graph presented above, we analyze
the connectivity between the subgraphs representing
uncontrollable factors (HA) and controllable factors
(HD) based on the definitions of fuzzy connectivity. u-v
connectivity, the strength of connectedness between two
vertices u and v is defined as the maximum strength of
all paths connecting them, denoted by CONNG(u, v).
x-H connectivity, for a vertex x and a fuzzy subgraph
H, the x-H connectivity is defined as
CONNG(x, H) = max
u∈H CONNG(x, u),
representing the strongest path from x to any vertex in
H.
Consider the uncontrollable vertex a2 ∈HA and
the controllable subgraph HD = {d1, d2, d3, d4}. The
paths from a2 to HD are: a2 →c3 →d2 with edge
memberships (0.4, 0.5), path strength = min(0.4, 0.5) =
0.4. a2 →d4 (direct) with membership 0.55, path
strength = 0.55.
Hence, the maximum connectivity from a2 to HD is
CONNG(a2, HD) = max{0.4, 0.55} = 0.55.
The strongest path from a2 to HD is the direct
edge a2 →d4, indicating that this particular controllable
factor is most strongly influenced by the uncontrollable
factor a2. This highlights the clinical significance of a2
in predicting and managing d4, which could correspond
to a key controllable risk factor in CHD management.
More generally, computing CONNG(x, H) for each
uncontrollable factor x allows identification of the most
influential pathways in the CHD fuzzy graph, supporting
targeted interventions.
For completeness, the overall connectivity between
subgraphs HA and HD is defined as
CONNG(HA, HD) =
max
ai∈HA,dj∈HD CONNG(ai, dj)
= 0.6,
which corresponds to the path a1 →c2 →d1. This
shows that among all uncontrollable factors, a1 has the
strongest overall influence on the controllable factors,
whereas a2 specifically connects most strongly to d4.
Using the fuzzy CHD graph from Example 1, we
analyze the connectivity and significance of relationships
according to the previously stated propositions, theorem,
and corollary.
By Proposition 1.5, the fuzzy subgraph connectivity
(FSC) is symmetric, i.e.,
CONNG(H1, H2) = CONNG(H2, H1).
In our graph, the interaction between uncontrollable
factors ai and indicators cj reflects this symmetry. For
instance, a1 influences c2 with membership 0.6, and
the mutual reinforcement indicates that indicators also
reflect the underlying uncontrollable factors in diagnostic
prediction.
Theorem 1.13 gives
r(G) ≤CONNG(Hi, Hj) ≤d(G),
where r(G) and d(G) are the weakest and strongest
fuzzy edge memberships, respectively. In our example:
r(G) = 0.3 (edge c6 →d4), d(G) = 0.9 (edge c4 →d3).
Therefore, the connectivity between any two components
(e.g., controllable vs. indicator) lies in [0.3, 0.9],
providing clear upper and lower limits for risk prediction.
Proposition 1.9 states that if uv is a fuzzy bridge, there
exists subgraphs Hi, Hj such that
CONNG(Hi, Hj) = µ(uv).
In practice, edges such as a1 →d3 (0.45) or a2 →d4
(0.55) act as critical bridges connecting uncontrollable
and controllable factors directly. Removing these edges
would significantly reduce the predictive connectivity
in the CHD model. Proposition 1.10 indicates that the
strongest path between subgraphs represents the most
significant diagnostic route. Example strongest path in
our graph:
a1 →c2 →d1
(membership values 0.6, 0.8),
suggesting that management of d1 (controllable factor)
is strongly linked to the indicator c2, which is influenced
by a1. Clinically, this could correspond to age-related
effects on echo indicators and subsequent dietary
management.
Corollary 1.17 ensures
CONNG(H1, H2) ≤κ(G),
meaning no subgraph pair can exceed the strongest
individual edge. In our graph, κ(G) = 0.9 (edge c4 →
d3). Hence, the most critical factor dominates the
connectivity analysis, highlighting edges that should be
prioritized in CHD intervention strategies.
Interpretation for CHD prediction
A high value of CONNG(H1, H2) means uncontrol-
lable factors (age, gender, history) are strongly connected
with medical indicators (ECG, echo, CT), which implies
unavoidable risk. A high value of CONNG(H2, H3)
means lifestyle factors (diet, sleep, activity, smoking)
strongly influence clinical indicators, suggesting pre-
ventive strategies are effective. If CONNG(H1, H3) is
Prepared using TRR.cls
Shanookha Ali and Nitha Niralda P C
9
weak, it reflects that controllable lifestyle changes may
not fully mitigate uncontrollable genetic/age risk, which
is medically consistent.
Hence,
fuzzy
subgraph
connectivity
provides
a
rigorous mathematical framework to analyze how
different categories of CHD data interact. It quantifies the
risk contribution of uncontrollable data, the predictive
power of medical indicators, and the preventive impact
of controllable factors.
These results show that fuzziness does not alleviate
the computational hardness of structural edge-deletion
and edge-contraction problems. By embedding crisp
instances as fuzzy graphs with unit memberships,
the entire hardness frontier identified by Asano and
Hirata (4) transfers verbatim to the fuzzy setting.
Conclusion
In this work, we developed a fuzzy graph framework for
analyzing coronary heart disease risk factors. By catego-
rizing vertices into uncontrollable, controllable, and indi-
cator components, and by assigning fuzzy membership
values to their interactions, we constructed a fuzzy CHD
graph that models real-world uncertainty in medical data.
Through measures of connectivity, including u-v, x-H,
and subgraph connectivity, we evaluated the strength of
associations across components.
The analysis revealed several clinically meaningful
results.
Strongest
paths
correspond
to
significant
diagnostic routes, highlighting the interplay between
uncontrollable factors such as age and controllable
factors such as diet through indicator variables. Critical
bridges
were
identified
as
edges
whose
removal
drastically reduces connectivity, indicating their role
as key determinants in predictive accuracy. Moreover,
bounding results provided upper and lower limits for
connectivity, ensuring robustness of the model.
This study demonstrates that fuzzy graph connectivity
is a powerful tool for understanding and predicting
CHD risk. It captures both the uncertainty and strength
of medical relationships, offering interpretability for
clinicians and guiding intervention strategies. Future
work will focus on validating this approach with real
patient datasets and extending the framework to dynamic
fuzzy graphs for monitoring CHD progression over time.
Author Contributions
The authors confirm contribution to the paper as follows:
study conception, design, analysis and interpretation of results:
Shanookha Ali; data collection, draft manuscript preparation:
Shanookha Ali, Nitha Niralda P C. All authors reviewed the
results and approved the final version of the manuscript.()
Declaration of conflicting interests
The authors declare that there is no conflict of interest.
Funding
The authors received no financial support for the research,
authorship, and/or publication of this article.
References
1. Ali, S., S. Mathew, J. N. Mordeson, and H. Rashmanlou.
Vertex Connectivity of Fuzzy Graphs with Applications
to Human Trafficking. New Mathematics and Natural
Computation, Vol. 14, No. 3, 2018, pp. 457–485.
2. Ali, S., S. Mathew, and J. N. Mordeson. Hamiltonian
Fuzzy Graphs with Application to Human Trafficking.
Information Sciences, Vol. 550, 2021, pp. 268–284.
3. Ali, S., S. Mathew, and J. N. Mordeson. Containers and
Spanning Containers in Fuzzy Graphs with Application
to Human Trafficking. New Mathematics and Natural
Computation, Vol. 20, No. 1, 2024, pp. 103–128.
4. Asano, T., and T. Hirata. Edge-Contraction Problems.
Journal of Computer and System Sciences, Vol. 26, No.
2, 1983, pp. 197–208.
5. Bhattacharya, P. Some Remarks on Fuzzy Graphs. Pattern
Recognition Letters, Vol. 6, No. 5, 1987, pp. 297–302.
6. Bhutani, K. R., and A. Rosenfeld. Fuzzy End Nodes in
Fuzzy Graphs. Information Sciences, Vol. 152, No. 1,
2003, pp. 323–326.
7. Bhutani, K. R., and A. Rosenfeld. Strong Arcs in Fuzzy
Graphs. Information Sciences, Vol. 152, No. 1, 2003, pp.
319–322.
8. Mathew, S., and M. S. Sunitha. Node Connectivity and Arc
Connectivity in Fuzzy Graphs. Information Sciences, Vol.
180, No. 4, 2010, pp. 519–531.
9. Mathew, S., J. N. Mordeson, and D. S. Malik. Fuzzy Graph
Theory. Springer International Publishing, Switzerland,
2018.
10. Mordeson, J. N., and P. S. Nair. Fuzzy Graphs and Fuzzy
Hypergraphs. Physica-Verlag, Heidelberg, 2000.
11. Mordeson, J. N., S. Mathew, and S. Ali. Fuzzy Path Fans
Applied to Health Consequences of Trafficking Victims.
Fuzzy Sets and Systems, Vol. 24, No. 1, 2017, pp. 17–28.
12. Mordeson, J. N., S. Mathew, and D. S. Malik. Fuzzy Graph
Theory with Applications to Human Trafficking, Vol. 365.
Springer, 2018.
13. Rosenfeld, A. Fuzzy Graphs. In Fuzzy Sets and Their
Applications (L. A. Zadeh, K. S. Fu, and M. Shimura,
eds.), Academic Press, New York, 1977, pp. 251–299.
14. Zadeh, L. A. Fuzzy Sets. Information and Control, Vol. 8,
1965, pp. 338–353.
Prepared using TRR.cls
|
Identifying Critical Pathways in Coronary Heart Disease via Fuzzy Subgraph Connectivity Transportation Research Record 2020, Vol. XX(X) 1-9 ©National Academy of Sciences: Transportation Research Board 2020 Article reuse guidelines: sagepub.com/journals-permissions journals.sagepub.com/home/trr SAGE Shanookha Ali1 and Nitha Niralda P C 2 Abstract Coronary heart disease (CHD) arises from complex interactions among uncontrollable factors, controllable lifestyle factors, and clinical indicators, where relationships are often uncertain. Fuzzy subgraph connectivity (FSC) provides a systematic tool to capture such imprecision by quantifying the strength of association between vertices and subgraphs in fuzzy graphs. In this work, a fuzzy CHD graph is constructed with vertices for uncontrollable, controllable, and indicator components, and edges weighted by fuzzy memberships. Using FSC, we evaluate connectivity to identify strongest diagnostic routes, dominant risk factors, and critical bridges. Results show that FSC highlights influential pathways, bounds connectivity between weakest and strongest correlations, and reveals critical edges whose removal reduces predictive strength. Thus, FSC offers an interpretable and robust framework for modeling uncertainty in CHD risk prediction and supporting clinical decision-making. Introduction Coronary heart disease (CHD) is a leading global health burden, accounting for significant morbidity and mortality. The identification and analysis of CHD risk factors is a major challenge in preventive cardiology. These risk factors are broadly classified into uncontrollable factors such as age and family history, and controllable factors such as smoking, diet, and physical activity. Additionally, clinical indicators such as blood pressure, cholesterol levels, and electrocardiogram (ECG) findings serve as intermediate measures linking lifestyle and hereditary factors with disease outcomes. In the study of fuzzy graphs, various aspects such as connectivity, spanning structures, and network flow have been extensively explored. The foundational concept of fuzzy sets introduced by Zadeh (14) laid the groundwork for the development of fuzzy graph theory, which was formalized by Rosenfeld (13). Early investigations on fuzzy graphs and their properties were carried out by Bhattacharya (5), Bhutani and Rosenfeld (6, 7) and Rosenfeld (13), who examined strong arcs and fuzzy end nodes. Applications of vertex and node connectivity in fuzzy graphs have been explored in the context of human trafficking networks by Ali et al. (1). Comprehensive treatments of fuzzy graph theory can be found in the books by Mordeson and Nair (10) and Mordeson et al. (9), including detailed discussions on node and arc connectivity Mathew and Sunitha (8). Vertex connectivity in fuzzy graphs has been analyzed to evaluate resilience and vulnerability in trafficking chains Ali et al. (1), while Hamiltonian fuzzy graphs provide a foundation for understanding cyclical structures that often underlie such networks Ali et al. (2). More recently, the concept of containers and spanning containers has been introduced to study the flow and concealment strategies of traffickers within uncertain environments Ali et al. (3). Complementing these contributions, Mordeson, Mathew, and Ali have applied fuzzy path fans to model the health consequences faced by trafficking victims, thereby highlighting the applicability of fuzzy graph theory beyond structural analysis and into human well-being by Mordeson et al. (11). Together, these works underscore the versatility of fuzzy graph theory in addressing both theoretical challenges and real-world problems associated with trafficking. Conventional diagnostic methods, including regression and probabilistic models, often assume precise relationships between variables. However, in real-world 1 345055, United Arab Emirates 2 's College, Calicut, Kerala, 673009, India Corresponding author: Shanookha Ali, Prepared using TRR.cls [Version: 2020/08/31 v1.00] 19 Sep 2025 2 Transportation Research Record XX(X) medical data, relationships are rarely crisp or deterministic. For example, the effect of age on CHD varies across populations, and the influence of smoking on cardiovascular risk may depend on duration, intensity, and co-occurring conditions. These inherent uncertainties motivate the application of fuzzy graph theory, which provides a mathematical framework for handling imprecise, uncertain, and approximate relationships. In this study, we construct a fuzzy CHD graph in which vertices represent uncontrollable, controllable, and indicator factors, while edges denote their relationships with membership values in [0, 1]. Using fuzzy connectivity concepts, we investigate: 1. pairwise connectivity (CONNG(u, v)) between individual risk factors, 2. vertex-to-subgraph connectivity (CONNG(x, H)), which evaluates the influence of a single factor on a group of related components, and 3. subgraph-to-subgraph connectivity (CONNG(Hi, Hj)), which assesses the global strength of interaction between categories of factors. Our analysis shows that fuzzy connectivity not only quantifies the strength of risk pathways but also identifies critical bridges edges whose removal significantly reduces connectivity. Such bridges highlight key clinical factors (e.g., smoking - ECG relationship) that dominate diagnostic predictions. Moreover, strongest paths reveal the most significant diagnostic routes, providing interpretability to clinicians. By bounding connectivity between the weakest and strongest observed correlations, the framework also ensures reliable upper and lower limits for risk prediction. Overall, this fuzzy graph-based approach contributes to a more nuanced understanding of CHD risk dynamics. By capturing uncertainty and identifying strongest connections, the method supports both diagnostic decision-making and the prioritization of preventive interventions. Preliminaries In this section, we briefly recall the basic concepts of fuzzy sets, fuzzy graphs, and fuzzy connectivity that will be used in the sequel. Throughout, we follow standard notations used in the literature (13), (12), (5). A fuzzy set A on a universe X is defined by a membership function μA : X →[0, 1], where μA(x) represents the degree of membership of x in A. For example, if X is the set of possible ages of patients, then the fuzzy set "elderly" may assign μA(65) = 0.8, μA(50) = 0.4, reflecting vagueness in age categorization. This concept provides a natural tool to model uncertainty in medical datasets. A fuzzy graph G is defined as a pair G = (σ, μ), where σ : V →[0, 1] assigns a membership value to each vertex v ∈V and μ : V × V →[0, 1] assigns a membership value to each edge (u, v), subject to the condition μ(u, v) ≤min{σ(u), σ(v)}. Here σ(u) may represent the relevance of a CHD factor, while μ(u, v) denotes the strength of relationship between two factors. This generalization of classical graphs was first introduced by Rosenfeld (13). A path in a fuzzy graph G is a sequence of vertices u0, u1, . . . , uk such that μ(ui, ui+1) > 0 for all i. The strength of a path P is defined as str(P) = min (ui,ui+1)∈P μ(ui, ui+1). The u-v connectivity in G is then given by CONNG(u, v) = max P :u;v str(P), where the maximum is taken over all paths connecting u and v. Some basic results used in this paper are summarized below: • Symmetry: CONNG(Hi, Hj) = CONNG(Hj, Hi). • Bounds: r(G) ≤CONNG(Hi, Hj) ≤d(G), where r(G) and d(G) are the minimum and maximum edge strengths in G. • Bridge property: If uv is a fuzzy bridge in G, then there exist subgraphs Hi, Hj such that CONNG(Hi, Hj) = μ(uv). • Strongest path: The strongest path between two subgraphs corresponds to the most significant diagnostic route in an applied setting. These concepts will be applied to the fuzzy CHD graph. Fuzzy subgraph connectivity Definition 1.1. u -v connectivity: Maximum of the strengths of all paths between u and v is called strength of connectedness between u and v and denoted by CONNG(u, v). Definition 1.2. Let G = (σ, μ) be a fuzzy graph with proper fuzzy subgraph H(ν, τ). For x ∈σ∗\ν∗, x - H connectivity is defined as maximum of strength of connectedness between x and u where u ∈ν∗, denoted by CONNG(x, H). That is CONNG(x, H) = max u∈ν∗CONNG(x, u) . Prepared using TRR.cls Shanookha Ali and Nitha Niralda P C 3 t t t t t a e b d c 0.1 0.4 0.3 0.15 0.1 0.9 t t t b d c 0.15 0.1 0.1 (a) G (b) H Figure 1. Fuzzy graphs G with proper induced fuzzy subgraph H. t t a d 0.9 t t b c 0.15 (a) H1 (b) H2 Figure 2. Proper induced fuzzy subgraphs of fuzzy graph G in Example 1.3 Example 1.3. Let G = (σ, μ) be with σ∗= {a, b, c, d, e}, σ(a, b) = 0.4, σ(b, c) = 0.15, σ(b, d) = σ(c, d) = 0.1, σ(a, d) = 0.9 and σ(e, d) = 0.3 (see Figure 1 (a)). Let H = (ν, τ) be fuzzy subgraph induced by ν∗= {b, c, d} (see Figure 1 (b)). For a ∈σ∗\ν∗, CONNG(a, H) = max{CONNG(a, b), CONNG(a, d), CONNG(a, c)} = max{0.4, 0.15, 0.9} = 0.9. Similarly CONNG(e, H) = 0.4. Definition 1.4. Let G = (σ, μ) be a fuzzy graph with proper disjoint induced fuzzy subgraphs H1 and H2. Then fuzzy subgraph connectivity (FSC) between H1 and H2 denoted by CONNG(H1, H2) is maximum of CONNG(x, H2), for x ∈σ∗ 1. That is CONNG(H1, H2) = max x∈σ∗ 1 CONNG(x, H2). Consider the fuzzy graph G in Figure 1 (a). Let H1 and H2 be fuzzy subgraphs of G induced by {a, d} and {b, c} respectively (see Figure 2 (a) and (b)). Then CONNG(H1, H2) = max{CONNG(a, H2), CONNG(d, H2)} = 0.4. Proposition 1.5. Let G = (σ, μ) be a fuzzy graph with proper disjoint induced fuzzy subgraphs H1 and H2. Then FSC is symmetric. For u, v ∈σ∗, CONNG(u, v) = CONNG(v, u). So it is clear that CONNG(H1, H2) = CONNG(H2, H1). Fuzzy subgraph connectivity need not be transitive. This can be observed from the following example. Example 1.6. Let G = (σ, μ) be a fuzzy graph with σ∗= {a, b, c, d, e, f, g}, σ(a, b) = 0.1, σ(b, c) = 0.3, σ(c, d) = 0.25, σ(d, e) = 0.9 σ(c, f) = 0.11, σ(a, c) = 0.9 and σ(a, g) = 0.8 (see Figure 3). Let H1 = (ν1, τ1), H2 = (ν2, τ2), and H3 = (ν3, τ3) be fuzzy subgraph induced by ν∗ 1 = {a, b}, ν∗ 2 = {d, e} and ν∗ 3 = {f, g} respectively. Then {CONNG(H1, H2) = 0.25 and CONNG(H2, H3)} = 0.25. Where as CONNG(H1, H3)} = 0.8. t t t t t t t a b c d e f g 0.1 0.3 0.11 0.3 0.25 0.9 0.9 Figure 3. Fuzzy graph in the Example 1.6 . Definition 1.7. Let G = (σ, μ) be a fuzzy graph. A pair of proper disjoint induced fuzzy subgraphs H1 and H2 is said to be t-fuzzy subgraph connected if CONNG(H1, H2) = t. Theorem 1.8. Let X = {H1, H2, · · · , Hk} be the set of fuzzy subgraphs of fuzzy graph G = (σ, μ) such that CONNG(Hi, Hj) = t for i ̸= j. Then we define a relation R on X such that HiRHj if and only if CONNG(Hi, Hj) = t or Hi = Hj. Proof. We prove that R is an equivalence relation on X by checking reflexivity, symmetry and transitivity. For any Hi ∈X we have Hi = Hi, so by definition HiRHi. Hence R is reflexive. Let Hi, Hj ∈X and suppose HiRHj. By definition this means either Hi = Hj or CONNG(Hi, Hj) = t. If Hi = Hj then trivially Hj = Hi and thus HjRHi. Otherwise, since connection is symmetric (i.e. CONNG(Hi, Hj) = CONNG(Hj, Hi) for all i, j), we have CONNG(Hj, Hi) = t, hence HjRHi. Thus R is symmetric. Let Hi, Hj, Hk ∈X and assume HiRHj and HjRHk. We consider cases: If any two of Hi, Hj, Hk are equal, transitivity follows immediately from reflexivity/symmetry. Otherwise all three are distinct. By hypothesis of the theorem, for every pair of distinct indices the connection equals t; in particular CONNG(Hi, Hk) = t. Hence HiRHk. Therefore R is transitive. Having checked reflexivity, symmetry and transitivity, we conclude that R is an equivalence relation on X. Under the stated hypothesis (every pair of distinct subgraphs has connection t), every two distinct elements of X are related; hence R is actually the universal relation on X, so X is a single equivalence class under R. Proposition 1.9. uv is a fuzzy bridge of G = (σ, μ) if and only if there exists a pair of proper induced disjoint Prepared using TRR.cls 4 Transportation Research Record XX(X) fuzzy subgraphs H1 and H2 with CONNG(H1, H2) = μ(u, v). Proof. Suppose uv is a fuzzy bridge of G. Then removal uv reduces the strength of connectedness between some pair of vertices say x and y in G. Choose H1 as {x} and H2 as {y}. Then CONNG(H1, H2) = μ(u, v). Conversely assume that for proper induced disjoint fuzzy subgraphs H1 and H2, CONNG(H1, H2) = μ(u, v). Hence uv is an edge of every strongest H1 -H2 path in G. Choose a vertex x from σ∗ 1 and y from σ∗ 1. It follows that uv is an edge of every strongest x -y path. Therefore, uv is a fuzzy bridge of G. Proposition 1.10. P is a strongest path in G if and only if there exists a pair of proper disjoint induced fuzzy subgraphs H1 and H2 with CONNG(H1, H2) = str(P). Proposition 1.11. Let Gf = (μV ; μE) be a fuzzy graph and fix a left-continuous t-norm T. For a u-v path P = (u = x0, x1, . . . , xm = v) define its (edge) strength by str(P) = T μE(x0x1), μE(x1x2), . . . , μE(xm-1xm) . For fuzzy (induced) subgraphs H1, H2 with disjoint vertex sets, define CONNG(H1, H2) = max max u∈σ∗ 1, v∈σ∗ 2 max P ∈P(u,v) str(P) , i.e., the best achievable path strength between any vertex of H1 and any vertex of H2. Then a path P is a strongest path in Gf (i.e., its strength equals the maximum strength over all paths in Gf) if and only if there exist proper disjoint induced fuzzy subgraphs H1, H2 with CONNG(H1, H2) = str(P). Proof. Let P be a strongest path in Gf, and let its endpoints be u and v. Set H1 = Gf[{u}] and H2 = Gf[{v}], the induced fuzzy singletons. They are proper and disjoint. By definition, CONNG(H1, H2) = max P ′∈P(u,v) str(P ′) = str(P), because P is, by assumption, a strongest u-v path and (being globally strongest) has strength equal to the global maximum over all paths as well. Hence the required H1, H2 exist. Conversely, suppose there exist proper disjoint induced fuzzy subgraphs H1, H2 with CONNG(H1, H2) = s. By definition of the maximum, there exist u ∈σ∗ 1, v ∈σ∗ 2, and a u-v path P with str(P) = s, and no path between any u′ ∈σ∗ 1 and v′ ∈σ∗ 2 has strength exceeding s. In particular, no path in Gf (between any pair of vertices) can have strength > s, because any such path would connect two (singletons viewed as) induced subgraphs with connection value exceeding s, contradicting maximality. Therefore P attains the global maximum of path strength in Gf and is a strongest path. The argument uses only that T is a monotone, associative, and (left-)continuous t-norm so that (i) adding edges to a path cannot increase its T-aggregated strength and (ii) "max over paths" is well-defined and attained (or approached) by a path. Remark 1.12. If you adopt the common convention str(P) = min{μE(e) : e ∈P}, take T = min above; all steps go through verbatim. If you measure path strength with vertex memberships as well (e.g. Taggregating both vertices and edges), replace the definition of str(P) accordingly the proof structure is unchanged. Theorem 1.13. Let G = (σ, μ) be a fuzzy graph. Then for any proper disjoint induced fuzzy subgraphs H1 and H2, r(G) ≤CONNG(H1, H2) ≤d(G). Proof. Let G = (σ, μ) be a fuzzy graph with edge set E. For clarity we adopt the following standard auxiliaries: r(G) = min e∈μ∗μ(e) and d(G) = max e∈μ∗μ(e), the minimum and maximum edge-strengths in G. For two proper, disjoint, induced fuzzy subgraphs H1, H2 of G we set CONNG(H1, H2) = max{μ(uv) : u ∈σ∗ 1, v ∈σ∗ 2, uv ∈μ∗}, i.e. the maximum strength among edges joining a vertex of H1 to a vertex of H2 (this is the same definition used earlier for the tree case). Since CONNG(H1, H2) is the maximum of the set S = {μ(uv) : u ∈σ∗ 1, v ∈σ∗ 2, uv ∈μ∗}, and S is a subset of the set of all edge-strengths {μ(e) : e ∈μ∗}, the following two elementary inequalities hold: Every element of S is at least the global minimum, so r(G) = min e∈μ∗μ(e) ≤min S ≤max S = CONNG(H1, H2). In particular r(G) ≤CONNG(H1, H2). Every element of S is at most the global maximum, so CONNG(H1, H2) = max S ≤max e∈μ∗μ(e) = d(G). Combining the two displays yields the claimed inequality r(G) ≤CONNG(H1, H2) ≤d(G). This completes the proof. Prepared using TRR.cls Shanookha Ali and Nitha Niralda P C 5 If instead one uses the path-based max-min connectivity conn(u, v) = max P :u;v min e∈P μ(e), CONNG(H1, H2) = max u∈σ∗ 1, v∈σ∗ 2 conn(u, v), then the same inequality holds: for any path P we have mine∈P μ(e) ≥r(G) and mine∈P μ(e) ≤d(G), hence taking the outer max over paths yields r(G) ≤CONNG(H1, H2) ≤d(G). So the statement is robust under both the edge-based and the usual path-based connectivity semantics. Below e(u)H1,H2 G denotes the Definition 1.14. Let G = (σ, μ) be a fuzzy tree with proper disjoint induced fuzzy subgraphs H1 and H2. A vertex v in σ∗ 2 is an eccentric vertex of a vertex u in σ∗ 1 with respect to G∗, denoted by e(u)H1,H2 G = v, if d(u, w) ≤d(u, v) for all w ∈σ∗ 2. Theorem 1.15. Let G = (σ, μ) be a fuzzy tree with proper disjoint induced fuzzy subgraphs H1 and H2). Then CONNG(H1, H2) is equal to strength of u -v path P, where e(u)H1,H2 G = v and e(v)H2,H1 G = u. Proof. Let G be a fuzzy graph where each edge e ∈μ∗has a membership (strength) value μE(e) ∈ [0, 1]. We assume a fuzzy tree means the underlying crisp graph (V, E) is a tree (no cycles) while edges carry strengths. For two induced, proper, disjoint fuzzy subgraphs H1, H2 of G we define CONNG(H1, H2) = max{μE(uv) : u ∈σ∗ 1, v ∈σ∗ 2, uv ∈μ∗}, i.e. the maximum strength of an edge joining a vertex of H1 to a vertex of H2. Finally let κ(G) = max{μE(e) : e ∈μ∗}, the maximum edge-strength in G. (The theorem is then read with these meanings of CONN and κ.) We must show CONNG(H1, H2) = κ(G) ⇐⇒∃edge xy ∈μ∗ with x ∈σ∗ 1, y ∈σ∗ 2 and μE(xy) = κ(G). Assume CONNG(H1, H2) = κ(G). By the definition of CONNG(H1, H2) there is at least one edge uv with u ∈σ∗ 1 and v ∈σ∗ 2 whose strength achieves the maximum in the set used to define CONN; that is, μE(uv) = CONNG(H1, H2). Since CONNG(H1, H2) = κ(G) by assumption, we have μE(uv) = κ(G). Setting x = u and y = v yields the desired edge joining H1 and H2 with maximum strength. Conversely, suppose there exists an edge xy ∈μ∗ with x ∈σ∗ 1, y ∈σ∗ 2, and μE(xy) = κ(G). By definition of CONNG(H1, H2) as the maximum strength among edges between H1 and H2, we have CONNG(H1, H2) ≥μE(xy) = κ(G). But κ(G) is the global maximum edge-strength in G, so no edge has strength greater than κ(G); hence CONNG(H1, H2) ≤κ(G). Combining the two inequalities gives CONNG(H1, H2) = κ(G), as required. Theorem 1.16. Let G be a fuzzy tree with proper disjoint induced fuzzy subgraphs H1 and H2. Then CONNG(H1, H2) = κ(G) if and only if x ∈σ∗ 1 and y ∈σ∗ 2, where xy is an edge with maximum strength in G. Proof. Let G = (V, E, μE) be a fuzzy tree: the underlying crisp graph (V, E) is a tree and each edge e ∈ μ∗has a strength (membership) μE(e) ∈[0, 1]. Let H1 and H2 be proper disjoint induced fuzzy subgraphs of G (so σ∗ 1 ∩σ∗ 2 = ∅). We use the following two quantities: CONNG(H1, H2) = max{μE(uv) : u ∈σ∗ 1, v ∈σ∗ 2, uv ∈μ∗}, i.e. the maximum strength among edges with one endpoint in H1 and the other in H2, and κ(G) = max{μE(e) : e ∈μ∗}, the maximum edge-strength in G. We must prove CONNG(H1, H2) = κ(G) ⇐⇒ there exists an edge xy ∈μ∗with x ∈σ∗ 1, y ∈σ∗ 2, μE(xy) = κ(G). Assume CONNG(H1, H2) = κ(G). By the definition of CONNG(H1, H2) the maximum is attained by at least one edge between H1 and H2, i.e. there exists uv ∈μ∗ with u ∈σ∗ 1, v ∈σ∗ 2 and μE(uv) = CONNG(H1, H2). Since CONNG(H1, H2) = κ(G) by assumption, we have μE(uv) = κ(G). Setting x = u and y = v yields the claimed edge joining H1 and H2 with maximum strength. Prepared using TRR.cls 6 Transportation Research Record XX(X) Conversely, suppose there exists an edge xy ∈μ∗with x ∈σ∗ 1, y ∈σ∗ 2 and μE(xy) = κ(G). By the definition of CONNG(H1, H2) (maximum over all edges between H1 and H2) we have CONNG(H1, H2) ≥μE(xy) = κ(G). But κ(G) is the global maximum edge-strength in G, so no edge has strength greater than κ(G). Therefore CONNG(H1, H2) ≤κ(G). Combining the two inequalities gives CONNG(H1, H2) = κ(G), as required. Corollory 1.17. Let G = (σ, μ) be a fuzzy tree with proper disjoint induced fuzzy subgraphs H1 and H2. Then CONNG(H1, H2) ≤κ(G). Proof. We use the same notation and assumptions as in the preceding theorem: G = (σ, μ) is a fuzzy tree, H1, H2 are proper disjoint induced fuzzy subgraphs, CONNG(H1, H2) = max{μ(e) : e = uv ∈μ∗, u ∈σ∗ 1, v ∈σ∗ 2}, and κ(G) = max{μ(e) : e ∈μ∗}, the global maximum edge-strength in G. By definition CONNG(H1, H2) is the maximum of the strengths of a subset of edges of G (those with one endpoint in H1 and the other in H2). The maximum value over any subset of real numbers is never larger than the maximum value over the whole set. Hence CONNG(H1, H2) ≤κ(G), as required. If you use the alternative "path-based" connectivity CONNG(H1, H2) = max P min e∈P μ(e) (where the outer max is over all paths P joining a vertex of H1 to a vertex of H2), the same inequality still holds because for any path P we have mine∈P μ(e) ≤κ(G), and therefore maxP mine∈P μ(e) ≤κ(G). Theorem 1.18. Let G = (σ, μ) be a complete fuzzy graph with proper disjoint induced fuzzy subgraphs H1 and H2. Then CONNG(H1, H2) = min{σ(u)}, where u ∈(σ∗ 1 ∪σ∗ 2). Proof. Let G = (σ, μ) be a complete fuzzy graph on vertex set V ; by the usual convention for complete fuzzy graphs we assume μ(u, v) = min{σ(u), σ(v)} for all distinct u, v ∈V. Let H1, H2 be proper, disjoint, induced fuzzy subgraphs of G (so σ∗ 1 ∩σ∗ 2 = ∅). Define CONNG(H1, H2) = min{μ(u, v) : u ∈σ∗ 1, v ∈σ∗ 2}, i.e. the minimum edge strength among all edges with one endpoint in H1 and the other in H2. Set m = min{σ(u) : u ∈σ∗ 1 ∪σ∗ 2}. We will prove CONNG(H1, H2) = m. (1) CONNG(H1, H2) ≤m. Since m is the minimum vertex strength on σ∗ 1 ∪σ∗ 2, there exists some vertex w ∈σ∗ 1 ∪σ∗ 2 with σ(w) = m. Without loss of generality assume w ∈σ∗ 1. Pick any vertex v ∈σ∗ 2 (such a vertex exists because H2 is proper). Then the edge wv is present and μ(wv) = min{σ(w), σ(v)} = min{m, σ(v)} = m, hence the minimum over all edges between H1 and H2 is at most m, i.e. CONNG(H1, H2) ≤m. (2) CONNG(H1, H2) ≥m. For every edge uv with u ∈ σ∗ 1 and v ∈σ∗ 2 we have μ(uv) = min{σ(u), σ(v)} ≥m, because both σ(u) and σ(v) are at least m by definition of m. Taking the minimum over all such edges gives CONNG(H1, H2) ≥m. Combining (1) and (2) yields CONNG(H1, H2) = m, i.e. CONNG(H1, H2) = min{σ(u) : u ∈σ∗ 1 ∪σ∗ 2}, as required. Application to Chronic Heart Disease Prediction Medical data related to Chronic Heart Disease (CHD) can be naturally modeled using fuzzy graphs, since both patient data and diagnostic indicators are inherently uncertain and imprecise. Figure 4 represents a fuzzy graph model for CHD based on three types of data: Uncontrollable data U1 = {a1, a2, a3}, consisting of Age (a1), Gender (a2), Family history (a3). Indicator data U2 = {c1, c2, . . . , c7}, including ECG, stress test, echocardiogram, Holter, hematology, CT, etc. Controllable data U3 = {d1, d2, d3, d4}, consisting of Diet (d1), Sleep (d2), Activity (d3), Smoking (d4). Each vertex is assigned a membership value σ(v) ∈ [0, 1] representing its degree of contribution to CHD, while edges uv are assigned fuzzy membership μ(uv) ∈ [0, 1] representing the strength of influence or correlation between the two factors. Thus, the entire medical model can be viewed as a fuzzy graph G = (σ, μ) with V (G) = U1 ∪U2 ∪U3. Prepared using TRR.cls Shanookha Ali and Nitha Niralda P C 7 Uncontrollable a1: Age a2: Gender a3: Family history Indicator data c1: ECG c2: Stress test c3: Echo c4: Holter c5: Hematology c6: CT Controllable d1: Diet d2: Sleep d3: Activity d4: Smoking Figure 4. Vertical layout of fuzzy CHD graph with uncontrollable (ai), indicator (ci), and controllable (di) data. Dotted arrows show mappings ai →cj, cj →dk, and direct ai →dk. Fuzzy subgraphs in the CHD model The fuzzy graph G naturally decomposes into the following induced fuzzy subgraphs: H1 = ⟨U1⟩ (uncontrollable data), H2 = ⟨U2⟩ (indicator data), H3 = ⟨U3⟩(controllable data). Using our earlier definition of fuzzy subgraph connectivity (FSC): CONNG(Hi, Hj) = max x∈V (Hi) CONNG(x, Hj), we can measure the relative influence between these components. Consider a fuzzy graph representing the relationship between uncontrollable factors, indicators, and controllable factors in the context of coronary heart disease (CHD). Let the vertex set be partitioned into three types: Uncontrollable vertices A = {a1, a2, a3}, representing factors outside direct control (e.g., age, genetics, lifestyle). Indicator vertices C = {c1, c2, c3, c4, c5, c6}, representing measurable health indicators. Controllable vertices D = {d1, d2, d3, d4}, representing factors that can be managed or intervened (e.g., blood pressure, cholesterol, exercise). The edges represent fuzzy relationships between vertices, with membership values in [0, 1] indicating the strength of influence. The fuzzy edge set is defined as follows: Table 1. Fuzzy edge membership values Edge μE Type a1 →c2 0.6 Uncontrollable →Indicator a2 →c3 0.4 Uncontrollable →Indicator a3 →c5 0.7 Uncontrollable →Indicator c1 →d1 0.8 Indicator →Controllable c3 →d2 0.5 Indicator →Controllable c4 →d3 0.9 Indicator →Controllable c6 →d4 0.3 Indicator →Controllable a1 →d3 0.45 Uncontrollable →Controllable a2 →d4 0.55 Uncontrollable →Controllable The resulting fuzzy CHD graph is depicted in Figure 5, showing the hierarchical structure of relationships between uncontrollable factors, indicators, and controllable factors, along with the corresponding fuzzy membership values. a1 a2 a3 c1 c2 c3 c4 c5 c6 d1 d2 d3 d4 0.6 0.4 0.7 0.8 0.5 0.9 0.3 0.45 0.55 Fuzzy CHD Graph Figure 5. Fuzzy graph of CHD with vertices ai (uncontrollable), ci (indicators), di (controllable) and fuzzy edge memberships. Prepared using TRR.cls 8 Transportation Research Record XX(X) Application of theoretical results Using the fuzzy CHD graph presented above, we analyze the connectivity between the subgraphs representing uncontrollable factors (HA) and controllable factors (HD) based on the definitions of fuzzy connectivity. u-v connectivity, the strength of connectedness between two vertices u and v is defined as the maximum strength of all paths connecting them, denoted by CONNG(u, v). x-H connectivity, for a vertex x and a fuzzy subgraph H, the x-H connectivity is defined as CONNG(x, H) = max u∈H CONNG(x, u), representing the strongest path from x to any vertex in H. Consider the uncontrollable vertex a2 ∈HA and the controllable subgraph HD = {d1, d2, d3, d4}. The paths from a2 to HD are: a2 →c3 →d2 with edge memberships (0.4, 0.5), path strength = min(0.4, 0.5) = 0.4. a2 →d4 (direct) with membership 0.55, path strength = 0.55. Hence, the maximum connectivity from a2 to HD is CONNG(a2, HD) = max{0.4, 0.55} = 0.55. The strongest path from a2 to HD is the direct edge a2 →d4, indicating that this particular controllable factor is most strongly influenced by the uncontrollable factor a2. This highlights the clinical significance of a2 in predicting and managing d4, which could correspond to a key controllable risk factor in CHD management. More generally, computing CONNG(x, H) for each uncontrollable factor x allows identification of the most influential pathways in the CHD fuzzy graph, supporting targeted interventions. For completeness, the overall connectivity between subgraphs HA and HD is defined as CONNG(HA, HD) = max ai∈HA,dj∈HD CONNG(ai, dj) = 0.6, which corresponds to the path a1 →c2 →d1. This shows that among all uncontrollable factors, a1 has the strongest overall influence on the controllable factors, whereas a2 specifically connects most strongly to d4. Using the fuzzy CHD graph from Example 1, we analyze the connectivity and significance of relationships according to the previously stated propositions, theorem, and corollary. By Proposition 1.5, the fuzzy subgraph connectivity (FSC) is symmetric, i.e., CONNG(H1, H2) = CONNG(H2, H1). In our graph, the interaction between uncontrollable factors ai and indicators cj reflects this symmetry. For instance, a1 influences c2 with membership 0.6, and the mutual reinforcement indicates that indicators also reflect the underlying uncontrollable factors in diagnostic prediction. Theorem 1.13 gives r(G) ≤CONNG(Hi, Hj) ≤d(G), where r(G) and d(G) are the weakest and strongest fuzzy edge memberships, respectively. In our example: r(G) = 0.3 (edge c6 →d4), d(G) = 0.9 (edge c4 →d3). Therefore, the connectivity between any two components (e.g., controllable vs. indicator) lies in [0.3, 0.9], providing clear upper and lower limits for risk prediction. Proposition 1.9 states that if uv is a fuzzy bridge, there exists subgraphs Hi, Hj such that CONNG(Hi, Hj) = μ(uv). In practice, edges such as a1 →d3 (0.45) or a2 →d4 (0.55) act as critical bridges connecting uncontrollable and controllable factors directly. Removing these edges would significantly reduce the predictive connectivity in the CHD model. Proposition 1.10 indicates that the strongest path between subgraphs represents the most significant diagnostic route. Example strongest path in our graph: a1 →c2 →d1 (membership values 0.6, 0.8), suggesting that management of d1 (controllable factor) is strongly linked to the indicator c2, which is influenced by a1. Clinically, this could correspond to age-related effects on echo indicators and subsequent dietary management. Corollary 1.17 ensures CONNG(H1, H2) ≤κ(G), meaning no subgraph pair can exceed the strongest individual edge. In our graph, κ(G) = 0.9 (edge c4 → d3). Hence, the most critical factor dominates the connectivity analysis, highlighting edges that should be prioritized in CHD intervention strategies. Interpretation for CHD prediction A high value of CONNG(H1, H2) means uncontrollable factors (age, gender, history) are strongly connected with medical indicators (ECG, echo, CT), which implies unavoidable risk. A high value of CONNG(H2, H3) means lifestyle factors (diet, sleep, activity, smoking) strongly influence clinical indicators, suggesting preventive strategies are effective. If CONNG(H1, H3) is Prepared using TRR.cls Shanookha Ali and Nitha Niralda P C 9 weak, it reflects that controllable lifestyle changes may not fully mitigate uncontrollable genetic/age risk, which is medically consistent. Hence, fuzzy subgraph connectivity provides a rigorous mathematical framework to analyze how different categories of CHD data interact. It quantifies the risk contribution of uncontrollable data, the predictive power of medical indicators, and the preventive impact of controllable factors. These results show that fuzziness does not alleviate the computational hardness of structural edge-deletion and edge-contraction problems. By embedding crisp instances as fuzzy graphs with unit memberships, the entire hardness frontier identified by Asano and Hirata (4) transfers verbatim to the fuzzy setting. Conclusion In this work, we developed a fuzzy graph framework for analyzing coronary heart disease risk factors. By categorizing vertices into uncontrollable, controllable, and indicator components, and by assigning fuzzy membership values to their interactions, we constructed a fuzzy CHD graph that models real-world uncertainty in medical data. Through measures of connectivity, including u-v, x-H, and subgraph connectivity, we evaluated the strength of associations across components. The analysis revealed several clinically meaningful results. Strongest paths correspond to significant diagnostic routes, highlighting the interplay between uncontrollable factors such as age and controllable factors such as diet through indicator variables. Critical bridges were identified as edges whose removal drastically reduces connectivity, indicating their role as key determinants in predictive accuracy. Moreover, bounding results provided upper and lower limits for connectivity, ensuring robustness of the model. This study demonstrates that fuzzy graph connectivity is a powerful tool for understanding and predicting CHD risk. It captures both the uncertainty and strength of medical relationships, offering interpretability for clinicians and guiding intervention strategies. Future work will focus on validating this approach with real patient datasets and extending the framework to dynamic fuzzy graphs for monitoring CHD progression over time. Author Contributions The authors confirm contribution to the paper as follows: study conception, design, analysis and interpretation of results: Shanookha Ali; data collection, draft manuscript preparation: Shanookha Ali, Nitha Niralda P C. All authors reviewed the results and approved the final version of the manuscript.() Declaration of conflicting interests The authors declare that there is no conflict of interest. Funding The authors received no financial support for the research, authorship, and/or publication of this article. References 1. Ali, S., S. Mathew, J. N. Mordeson, and H. Rashmanlou. Vertex Connectivity of Fuzzy Graphs with Applications to Human Trafficking. New Mathematics and Natural Computation, Vol. 14, No. 3, 2018, pp. 457-485. 2. Ali, S., S. Mathew, and J. N. Mordeson. Hamiltonian Fuzzy Graphs with Application to Human Trafficking. Information Sciences, Vol. 550, 2021, pp. 268-284. 3. Ali, S., S. Mathew, and J. N. Mordeson. Containers and Spanning Containers in Fuzzy Graphs with Application to Human Trafficking. New Mathematics and Natural Computation, Vol. 20, No. 1, 2024, pp. 103-128. 4. Asano, T., and T. Hirata. Edge-Contraction Problems. Journal of Computer and System Sciences, Vol. 26, No. 2, 1983, pp. 197-208. 5. Bhattacharya, P. Some Remarks on Fuzzy Graphs. Pattern Recognition Letters, Vol. 6, No. 5, 1987, pp. 297-302. 6. Bhutani, K. R., and A. Rosenfeld. Fuzzy End Nodes in Fuzzy Graphs. Information Sciences, Vol. 152, No. 1, 2003, pp. 323-326. 7. Bhutani, K. R., and A. Rosenfeld. Strong Arcs in Fuzzy Graphs. Information Sciences, Vol. 152, No. 1, 2003, pp. 319-322. 8. Mathew, S., and M. S. Sunitha. Node Connectivity and Arc Connectivity in Fuzzy Graphs. Information Sciences, Vol. 180, No. 4, 2010, pp. 519-531. 9. Mathew, S., J. N. Mordeson, and D. S. Malik. Fuzzy Graph Theory. Springer International Publishing, Switzerland, 2018. 10. Mordeson, J. N., and P. S. Nair. Fuzzy Graphs and Fuzzy Hypergraphs. Physica-Verlag, Heidelberg, 2000. 11. Mordeson, J. N., S. Mathew, and S. Ali. Fuzzy Path Fans Applied to Health Consequences of Trafficking Victims. Fuzzy Sets and Systems, Vol. 24, No. 1, 2017, pp. 17-28. 12. Mordeson, J. N., S. Mathew, and D. S. Malik. Fuzzy Graph Theory with Applications to Human Trafficking, Vol. 365. Springer, 2018. 13. Rosenfeld, A. Fuzzy Graphs. In Fuzzy Sets and Their Applications (L. A. Zadeh, K. S. Fu, and M. Shimura, eds.), Academic Press, New York, 1977, pp. 251-299. 14. Zadeh, L. A. Fuzzy Sets. Information and Control, Vol. 8, 1965, pp. 338-353. Prepared using TRR.cls
|
2509.16292
|
Decoding TRON: A Comprehensive Framework
for Large-Scale Blockchain Data Extraction and
Exploration
Qian’ang Mao1, Jiaxin Wang1, Zhiqi Feng1, Yi Zhang1, Jiaqi Yan1*
1*School of Information Management, Nanjing University, Xianlin Road,
Nanjing, 210023, Jiangsu, China.
Contributing authors: me@c0mm4nd.com;
Abstract
Cryptocurrencies and Web3 applications based on blockchain technology have
flourished in the blockchain research field. Unlike Bitcoin and Ethereum, due
to its unique architectural designs in consensus mechanisms, resource manage-
ment, and throughput, TRON has developed a more distinctive ecosystem and
application scenarios centered around stablecoins. Although it is popular in areas
like stablecoin payments and settlement, research on analyzing on-chain data
from the TRON blockchain is remarkably scarce. To fill this gap, this paper
proposes a comprehensive data extraction and exploration framework for the
TRON blockchain. An innovative high-performance ETL system aims to effi-
ciently extract raw on-chain data from TRON, including blocks, transactions,
smart contracts, and receipts, establishing a research dataset. An in-depth anal-
ysis of the extracted dataset reveals insights into TRON’s block generation,
transaction trends, the dominance of exchanges, the resource delegation mar-
ket, smart contract usage patterns, and the central role of the USDT stablecoin.
The prominence of gambling applications and potential illicit activities related
to USDT is emphasized. The paper discusses opportunities for future research
leveraging this dataset, including analysis of delegate services, gambling scenar-
ios, stablecoin activities, and illicit transaction detection. These contributions
enhance blockchain data management capabilities and understanding of the
rapidly evolving TRON ecosystem.
Keywords: TRON, blockchain, data extraction, data exploration, cryptocurrency
1
arXiv:2509.16292v1 [cs.CR] 19 Sep 2025
1 Introduction
In recent years, the blockchain ecosystem has experienced tremendous growth, as
evidenced by the rising market capitalization and the adoption of blockchain-based
cryptocurrencies and platforms. Bitcoin, the first and most well-known cryptocur-
rency, had a market capitalization exceeding $1.35 trillion in March 2024, surpassing
the market value of silver at one point. Bitcoin’s success has inspired countless other
blockchain projects in areas such as decentralized finance (DeFi), non-fungible tokens
(NFTs), and Web3. Similarly, in March 2024, the total valuation of the cryptocur-
rency market exceeded $2.24 trillion, indicating the rapid expansion of the blockchain
ecosystem. Reflecting this growth, academic research on blockchain technology has
also surged. Early research primarily focused on fundamental blockchain technologies,
including consensus algorithms, cryptography, and scalability solutions. As blockchain
platforms and applications have become firmly established, recent research attention
has gradually shifted towards analyzing and improving the design, applications, and
user experience of the blockchain ecosystem. For example, active research areas include
DeFi protocol analysis, NFT markets, DAO formation and governance, blockchain
gaming, and factors influencing user adoption. New specialized journals and confer-
ences focusing on blockchain applications and business use cases have also emerged.
While popular blockchain platforms like Bitcoin and Ethereum and their smart con-
tract applications have garnered widespread research attention, other ecosystems with
unique architectural designs and use cases remain relatively unexplored. The vast
amounts of data generated by these specialized blockchain systems, although possess-
ing significant commercial and academic value similar to Bitcoin and Ethereum, also
present substantial technical challenges for analyzing heterogeneous blockchain data.
In 2021, TRON surpassed Ethereum in USDT stablecoin supply, becoming a lead-
ing stablecoin issuance platform globally. By 2023, TRON reached 200 million users,
with 34.5 million (approximately 17.2%) holding USDT. However, TRON’s heavy
focus on stablecoin transactions poses risks, as it was flagged by Israel and the United
States for aiding terrorist fundraising activities. TRON’s popularity among terror-
ist groups is attributed to its faster transaction speeds and lower fees compared to
Bitcoin, making it a preferred platform for illicit activities like money laundering
However, current research on TRON on-chain data is scarce, with most studies
focusing only on the basic mechanism design analysis of the price fluctuations of
its native token TRX and USDT stablecoin [1–6]. The fundamental challenges stem
from several factors. Firstly, there is an absence of a universal data extraction tool
for TRON. While certain blockchain websites like TRONSCAN 1 offer partial TRON
data, their data extraction tools are not publicly accessible, and the data acquisition
process is rate-limited, restricting access to comprehensive datasets. Secondly, there
is a lack of comprehensive data exploration tools specifically designed for TRON.
Although extensive research has been conducted on data analysis for EOS.IO and
Ethereum [7, 8], studies focusing on TRON are scarce. To the best of our knowl-
edge, there has been no comprehensive analysis performed on the entirety of TRON’s
1https://tronscan.org
2
dataset across all data types, leaving a significant gap in understanding the intrica-
cies of this blockchain platform. Thirdly, the extraction and processing of TRON’s
data present significant difficulties. TRON’s data types are based on Protocol Buffers,
and it employs gRPC to deliver high-performance interfaces, encompassing numer-
ous intricate data structures and data types. Furthermore, the data structures within
TRON involve nesting, arbitrary types, and other complex relationships, posing signif-
icant challenges for data parsing endeavors and hindering the development of effective
analysis tools and techniques.
This
paper
addresses
the
challenges
and
motivations
surrounding
TRON
blockchain data analysis through several key contributions. We present a compre-
hensive data processing framework for the TRON blockchain, featuring robust ETL
workflows and querying capabilities, which enables efficient extraction and analysis
of the vast TRON ecosystem data. Our work includes detailed explanations of mul-
tiple datasets derived from this processing methodology, accompanied by preliminary
analyses to illuminate the data’s content and potential applications. Additionally, we
provide a critical assessment of the current research landscape and explore promis-
ing future directions that could leverage these datasets. By offering these tools and
insights, we aim to empower researchers and developers in the blockchain analytics
field, fostering innovation and a deeper understanding of the TRON blockchain. This
research not only advances the technical capabilities for blockchain data processing but
also paves the way for novel applications and scholarly investigations in this rapidly
evolving domain.
2 Background
2.1 TRON Consensus
TRON’s consensus mechanism is different from that of Bitcoin and Ethereum, and it
adopts the same DPoS (Delegated Proof of Stake) as EOS.io. The DPoS consensus
mechanism is a way to verify transactions and maintain network security by electing
representative nodes (i.e., SR, Super Representatives). Users vote for 27 Super Rep-
resentatives by staking (freezing) their held TRX (TRON’s cryptocurrency). Super
Representatives are responsible for generating blocks and processing transactions on
the network and are re-elected every 6 hours to ensure the network’s decentraliza-
tion and efficiency. The DPoS mechanism increases the network’s processing speed
and transaction throughput by reducing the number of nodes participating in the
consensus while incentivizing users to actively participate in network governance.
2.2 TRON Account Model
TRON uses an account model. The address is the unique identifier of the account,
and operating the account requires a private key signature. The account has many
attributes, including TRX and TRC10 token balances, bandwidth, energy, and more.
The account can send transactions to increase or decrease its TRX or TRC10 token bal-
ance, deploy smart contracts, and trigger its own or others’ published smart contracts,
which are self-executing codes that automatically execute predetermined actions when
3
specific conditions are met. All TRON accounts can apply to become Super Repre-
sentatives or vote for elected Super Representatives. The account is the foundation of
all activities on TRON.
2.3 TRON Transaction Execution
The TRON Transaction Execution is a comprehensive process that begins with
transaction creation, typically initiated by a user through a wallet or decentralized
application (dApp). Once created, the transaction is broadcast to the TRON net-
work, where it’s verified by nodes and enters the mempool (transaction pool) to await
processing. SR nodes, key players in TRON’s consensus mechanism, select transac-
tions from the pool to package into blocks. For transactions involving smart contracts,
the TRON Virtual Machine (TVM) comes into play. The TVM, a core component
of the TRON network, is responsible for executing smart contract code. It’s Turing-
complete and compatible with the Ethereum Virtual Machine (EVM), allowing it
to run contracts written in Solidity. The TVM processes the contract by loading its
code, interpreting opcodes, executing logic, updating states, and consuming energy
(similar to Ethereum’s gas). After contract execution, the SR nodes reach consensus
on the block’s validity using the DPoS mechanism. Once consensus is achieved, the
new block is added to the blockchain, confirming the transactions it contains. This
process updates the global state of the TRON network, including account balances
and contract states. Any events triggered by the transactions or contract executions
are recorded, allowing dApps to listen and respond accordingly. The transaction is
then considered complete, with its results visible across the network and accessible
through block explorers or wallets. Throughout this workflow, the TVM plays a crucial
role, particularly in handling smart contract transactions, enabling TRON to support
complex decentralized applications and DeFi projects.
2.4 TRON Resource Model
TRON’s resource model manages transactions and smart contract execution on the
network through two resources: “Bandwidth” and “Energy”.
Bandwidth is the resource consumed by users when performing ordinary transac-
tions (such as transfers), and each account receives a certain amount of free bandwidth
every day. Users can obtain additional bandwidth by freezing (locking for a period
of time) their held TRX (TRON’s cryptocurrency). Freezing TRX not only increases
bandwidth but also grants voting rights for network governance.
Energy is the resource consumed by users when executing smart contracts. Like
bandwidth, users can obtain energy by freezing TRX. The mechanism of freezing TRX
to obtain energy and bandwidth encourages users to actively participate in network
activities, providing a decentralized way to allocate and manage computing resources
on the blockchain, ensuring the network’s security and efficient operation.
4
3 Our Framework
This chapter describes the process of obtaining raw data from the TRON blockchain.
The extraction and analysis processes in our framework are described below:
3.1 Extract, Transform, and Load
Fig. 1 This figure illustrates the ETL(Extract, Transform, Load) pipeline for TRON blockchain data,
which shows the flow from a Java-TRON Archive Node, through a Rust-based processing system, to
final storage in a Columnar Database.
Figure 1 illustrates an ETL (Extract, Transform, Load) process implemented in
Rust, specifically designed to handle data from the TRON blockchain’s Java-TRON
Archive Node. The source code of the ETL is available on our GitHub2. The process
begins with the extraction phase, where raw data is retrieved from the Java-TRON
Archive Node via a gRPC interface. This raw data is then processed by the tron-
grpc-rs library, which is built using Rust and generated from the TRON Protocol’s
specifications. This library is responsible for handling the gRPC communication and
converting the extracted data into Protobuf types.
Within the transformation phase, encapsulated in a Rust-based ETL submodule,
the Protobuf types undergo several operations to prepare the data for storage. First,
the data is flattened, transforming complex, nested structures into simple, table-like
rows. This is followed by a type conversion step, where data types are adapted to
formats suitable for downstream processes. Additionally, the process involves parsing
contract parameters from the Protobuf data, which are critical for understanding
and processing smart contracts within the TRON ecosystem. The parsed contract
parameters are also subjected to flattening and type conversion, ensuring that all data
is uniformly structured and ready for database insertion.
The final phase of the ETL process is the loading step, where the processed table
rows are batch-inserted into a columnar database via SQL. Post-insertion, the database
undergoes an optimization process to enhance data retrieval efficiency and overall
performance. This step is crucial for maintaining the scalability and responsiveness of
the database when handling large volumes of blockchain data.
5
Fig. 2 This figure illustrates the data exploration process using Python to query a columnar
database, which shows how a user’s script interacts with Python to construct queries, convert data
types, and parse results from a columnar database, with the option to export data back to the user.
3.2 Data Exploration
Figure 2 presents a detailed exploration process, illustrating the interaction between
the user, a Python toolkit, and a columnar database in the context of data analy-
sis. The source codes of the Python toolkit and the corresponding columnar database
DDL(Data Definition Language) are available on our GitHub3. Initially, the user
engages with the Python environment through a script, which acts as a conduit for
sending instructions to the Python toolkit. Upon receiving these instructions, the
Python toolkit initiates a sequence of processes designed to facilitate data retrieval
and manipulation.
The first major step involves the construction of a query, wherein the Python
toolkit interprets the user’s instructions and formulates an appropriate database query.
This query formulation process may require type conversion to ensure that the data
types align with the expected formats in the database. The query, once constructed,
is then executed against a columnar database via an SQL SELECT statement. This
database, specialized in handling columnar data storage, processes the query and
returns the requested data organized into rows.
Following the retrieval of data, the Python toolkit engages in further processing
steps. These include another round of type conversion to adapt the retrieved data
to a format suitable for subsequent analysis and a parsing operation to extract and
organize the relevant columns from the dataset. Once these processes are complete,
the toolkit exports the processed data back to the user, thereby completing the data
exploration cycle.
4 Datasets
In this section, we will provide a detailed description of the statistics and observations
of several datasets obtained from the TRON blockchain through the processing of the
above framework.
2https://github.com/njublockchain/web3research-etl
3https://github.com/njublockchain/web3research-py
6
Main Category
Subcategory
Contracts
Assets
TRC10
assetIssueContracts
participateAssetIssueContracts
updateAssetContracts
Transfer
transferContracts
transferAssetContracts
shieldedTransferContracts
Staking
cancelAllUnfreezeV2Contracts
freezeBalanceContracts
freezeBalanceV2Contracts
unfreezeAssetContracts
unfreezeBalanceContracts
unfreezeBalanceV2Contracts
withdrawBalanceContracts
Account
EOAs
accountCreateContracts
accountPermissionUpdateContracts
accountUpdateContracts
setAccountIdContracts
updateSettingContracts
SmartContracts
clearAbiContracts
createSmartContracts
triggerSmartContracts
Resource
delegateResourceContracts
undelegateResourceContracts
updateBrokerageContracts
updateEnergyLimitContracts
withdrawExpireUnfreezeContracts
DEX
Exchange
exchangeCreateContracts
exchangeInjectContracts
exchangeTransactionContracts
exchangeWithdrawContracts
Market
marketCancelOrderContracts
marketSellAssetContracts
Government
Proposal
proposalApproveContracts
proposalCreateContracts
proposalDeleteContracts
SR Voting
voteWitnessContracts
voteAssetContracts
witnessCreateContracts
witnessUpdateContracts
Table 1 The classification table for our parsed dataset of TRON protocol
contracts. We grouped the data by primary use case into four categories: Assets,
Accounts, DEXs (Decentralized Exchanges), and Governance. Each category is
further divided by specific function: Assets into TRC10, Transfer, and Staking;
Accounts into EOAs (Externally Owned Accounts), Smart Contracts, and
Resources; DEXs into Exchanges and Markets; and Governance into Proposals
and SR (Super Representative) Voting.
7
Table 1 shows the classification relationship from the raw data to the seven data
sets. We collect data block chain chain will be accessible from the public platform4.
4.1 Blocks
Blocks, as the name implies, are essential components of the blockchain data struc-
ture, serving as packages of transactions. In our Blocks dataset, we primarily retain
Block Header information, while specific transaction details are stored in an external
transaction dataset.
The core fields in this dataset include block hash, timestamp, Merkle root hash of
transactions, parent block hash, block height, witness ID and address, block version
number, account state tree root hash, witness signature, and transaction count. The
block hash serves as a unique identifier for each block, ensuring the integrity and
immutability of block data. The timestamp field records the precise time of block
generation, facilitating research on block production rates and temporal distribution.
The Merkle root hash of transactions and the account state tree root hash are used to
verify transactions and account states within the block, respectively, reflecting TRON
network’s design in data validation and state management. The parent block hash
maintains the linkage between blocks, guaranteeing the continuity and consistency of
the blockchain. The block height field indicates the position of the block within the
entire chain, serving as a crucial parameter for analyzing network growth and historical
queries.
Witness-related fields, such as witness ID, address, and signature, directly reflect
the characteristics of TRON’s DPoS consensus mechanism. This information not only
validates the legitimacy of blocks but also enables analysis of Super Representative
activity patterns and network participation. The block version number field reflects
the evolution of the TRON protocol, aiding in tracking network upgrades and com-
patibility changes. The transaction count field records the number of transactions in
each block, providing significant data for analyzing network throughput, user activity,
and economic activity scale.
Through in-depth analysis of this comprehensive block dataset, researchers can
gain insights into TRON network’s operational characteristics, performance metrics,
security, and degree of decentralization. For instance, timestamp and block height
data can be used to study network stability and throughput variations; witness-
related data can be analyzed to examine Super Representative election mechanisms
and power distribution; transaction count and block size can be utilized to explore
network scalability and user adoption trends.
4.2 External Transactions
In TRON, external transactions, similar to those in Ethereum, represent transactions
initiated by EOA (Externally Owned Account) addresses. This dataset focuses on
external transaction data within the TRON blockchain network, encompassing not
only initial transaction information but also merged data on post-execution results.
This design provides researchers with a complete view of a transaction’s entire lifecycle,
4https://web3resear.ch/datasets
8
from initiation to final on-chain confirmation. Each record represents an independent
external transaction, containing both the original transaction data and detailed results
of its execution on the TRON network.
The core fields of the dataset can be broadly categorized into several key areas:
transaction identification information, block information, authorization data, contract
call data, resource consumption and fee information, execution results, and special
operation data. The transaction hash serves as a unique identifier for each transaction,
ensuring traceability. Block number and transaction index precisely locate the trans-
action within the blockchain, crucial for studying transaction temporality and block
filling efficiency.
Authorization-related fields (such as authorityAccountNames, authorityAccoun-
tAddresses, etc.) reflect TRON network’s multi-signature and complex permission
management mechanisms. These data provide a foundation for analyzing network
security models and user interaction patterns. Contract-related fields (contractType,
contractParameter, etc.) detail smart contract invocations, significant for understand-
ing the usage patterns of decentralized applications (DApps) and the popularity of
smart contracts in the TRON ecosystem. Furthermore, we have parsed contractPa-
rameters into multiple sub-datasets based on different contractTypes, which will be
introduced in the next section.
Notably, this dataset incorporates transaction execution result data. Fields such
as energyUsage, netUsage, and fee meticulously record resource consumption, cru-
cial for analyzing TRON network’s resource pricing mechanisms and efficiency. The
receiptResult and result fields directly reflect transaction execution outcomes, while
the resMessage field may contain more detailed execution information or error descrip-
tions. The combination of these data enables researchers to conduct in-depth analyses
of transaction failure reasons, network congestion situations, and smart contract
execution efficiency.
The dataset also includes dedicated fields for special operations, such as
asset issuance (assetIssueId), account freezing and unfreezing (withdrawAmount,
unfreezeAmount), exchange operations (exchangeId, exchangeReceivedAmount, etc.),
and order-related information (orderId, orderDetails, etc.). These fields reflect the
diverse financial functionalities supported by the TRON network, providing direct
data support for researching decentralized finance (DeFi) activities on the network.
Through comprehensive analysis of this external transaction dataset, researchers
can gain multi-faceted insights into TRON network operations. For instance, they can
study the usage frequency, resource consumption patterns, and success rates of dif-
ferent types of smart contracts, evaluating network resource utilization efficiency and
the rationality of pricing mechanisms. Transaction result and resource usage data can
be used to analyze network performance bottlenecks and optimization opportunities.
Authorization account and signature information can be employed to study network
security and user behavior patterns. Data on special transaction types provides valu-
able empirical foundations for researching financial innovations and user adoption
within the TRON ecosystem.
9
4.2.1 Protocol Contracts
4.3 Smart Contract Event Logs
During the transaction execution process in TRON, smart contracts on the TVM
(TRON Virtual Machine) also generate a special data structure called Event Log,
used to record events and log information during contract runtime. Event Log is
essentially a data structure actively triggered and defined by smart contract code,
allowing developers to record custom data at critical execution points, such as state
changes, error messages, audit trails, etc. The dataset used in this study focuses on
event log data from the TRON blockchain network, capturing various events generated
during smart contract execution in the TVM. In the blockchain ecosystem, event
logs play a crucial role, not only recording key points of contract execution but also
providing an important interface for off-chain applications to interact with on-chain
activities. The unique value of this dataset lies in its detailed record of smart contract
activities on the TRON network, offering valuable insights for researchers to deeply
analyze contract behaviors, user interaction patterns, and the dynamics of the entire
ecosystem.
The core fields of the dataset include block number, associated transaction hash,
log index, contract address, up to four topic fields (topic0 to topic3), and a data field.
Each record represents an independent event log, precisely located within a specific
block and transaction, and distinguished by the log index for multiple events within
the same transaction.
The block number (blockNum) and transaction hash (transactionHash) fields link
the event logs to the overall structure of the blockchain, allowing researchers to
track the position and timing of events throughout the blockchain’s history. The log
index (logIndex) further refines the order of multiple events within the same trans-
action, which is crucial for understanding the execution flow and results of complex
transactions.
The contract address (address) field identifies the smart contract that triggered
the event, providing a basis for analyzing activity patterns and user interaction fre-
quencies of specific contracts. The topic fields (topic0 to topic3) are the indexed
part of the event, typically containing the event signature (topic0) and key parame-
ters. The nullable design of these fields increases the flexibility of the data structure,
accommodating the diverse needs of different types of events. The data field con-
tains the non-indexed part of the event, potentially including more detailed parameter
information.
Through in-depth analysis of this event log dataset, researchers can gain multi-
faceted insights into the smart contract ecosystem of the TRON network. For example,
they can study the frequency and distribution of specific types of events, identify the
most active contracts and the most common user interaction patterns. Combined with
transaction data, a complete smart contract activity map can be constructed, provid-
ing a deep understanding of complex DApp operation mechanisms and user behavior
patterns.
Moreover, this dataset is particularly valuable for studying the operations of
decentralized finance (DeFi) applications. For instance, token transfer events can be
10
analyzed to track fund flows, liquidity provision and removal events can be studied
to evaluate the health of DeFi protocols, or price oracle events can be examined to
research the propagation and impact of market data on-chain.
Event log data also provides important tools for developers and auditors. By
analyzing error events and abnormal patterns, potential contract vulnerabilities or
security risks can be identified. Additionally, these data form the basis for building effi-
cient indexing and real-time monitoring systems, supporting more complex off-chain
analysis and application development.
4.4 Internal Transaction
Similar to Ethereum, in TRON’s transaction system, apart from common transactions
(also known as External Transactions), there exists a special type of transaction called
Internal Transactions. This dataset focuses on internal transaction data within the
TRON blockchain network, capturing internal calls and value transfers generated dur-
ing smart contract execution. Internal transactions differ from external transactions in
that they are not directly initiated by users, but are triggered by contract code during
smart contract execution. The significance of this dataset lies in its revelation of com-
plex interaction patterns and detailed value flows of smart contracts on the TRON
network, providing researchers with a valuable resource for in-depth understanding of
the internal operational mechanisms of the TRON ecosystem.
The core fields of the dataset include block number, associated external transaction
hash, internal transaction index, internal transaction hash, caller address, recipient
address, token transfer information, notes, whether the transaction was rejected, and
additional information. Each record represents an independent internal transaction,
precisely located within a specific block and external transaction, and distinguished
by an internal index for multiple internal calls within the same external transaction.
The block number (blockNum) and external transaction hash (transactionHash)
fields link internal transactions to the overall blockchain structure, allowing researchers
to track the position and sequence of internal transactions throughout the transaction
execution process. The internal transaction index (internalIndex) further refines the
order of multiple internal calls within the same external transaction, which is crucial
for understanding the execution flow of complex smart contracts.
The caller address (callerAddress) and recipient address (transferToAddress) fields
reveal patterns of inter-contract calls and paths of value flow. This data is signifi-
cant for analyzing smart contract interaction networks, identifying key contracts, and
tracing fund flows. Token transfer information (callValueInfos.tokenId and callValue-
Infos.callValue) is stored in array form, supporting simultaneous transfers of multiple
tokens, reflecting the complex financial operations supported by the TRON network.
The field indicating whether a transaction was rejected (rejected) provides direct
information about the success or failure of internal transaction execution, which is
valuable for assessing the reliability of smart contracts and identifying potential vulner-
abilities or design flaws. The note and extra information fields may contain additional
explanations about the purpose of internal transactions or special conditions, providing
extra context for in-depth analysis.
11
Through comprehensive analysis of this internal transaction dataset, researchers
can gain multi-faceted insights into the smart contract ecosystem of the TRON net-
work. For instance, they can study common contract interaction patterns, identify
frequently called contracts, and key value transfer paths. Combined with external
transaction data, a complete transaction execution graph can be constructed, enabling
deep understanding of complex DApp operational mechanisms. Analysis of the rejected
field can help evaluate the robustness of smart contracts and potential security
risks. Analysis of token transfer information can reveal token economic activities and
liquidity patterns on the TRON network.
Furthermore, this dataset is particularly valuable for studying the internal work-
ing mechanisms of decentralized finance (DeFi) applications. For example, it allows
analysis of internal transaction processes of complex lending protocols, decentralized
exchanges, or cross-chain bridges, providing understanding of how funds flow between
different contracts and the resource consumption of various operations.
5 On-chain Data Insight
Based on the above data-extracting methodology, we have acquired the ability to
analyze any application or scenario on the TRON network. To investigate the basic
information of the TRON blockchain, we start by analyzing the fundamental Block
information and the basic transaction information within it. As of the writing time
of this paper, UTC zone 2024-04-03 22:59:12, the TRON network has reached block
number 60,505,000, with a total of 60,505,001 blocks and 8,190,158,591 transactions.
Among the various transaction types, TriggerSmartContract is the most frequent,
with a total of 3,437,398,059 transactions, far exceeding other types. This indicates
that smart contract calls on the TRON network are very popular among users. Fol-
lowing this are TransferContract and TransferAssetContract, which are transactions
for transferring TRX and TRC10 assets. Next are DelegateResourceContract and
UndelegateResourceContract, which are transactions for delegating bandwidth and
energy resources of accounts. These transactions are evidently sourced from energy
leasing services provided by wallets or websites like TokenPocket5 and TRXUS6.
Although FreezeBalanceContract and UnfreezeBalanceContract can also provide more
transaction energy for accounts, their transaction numbers are significantly lower.
Due to the numerous transaction types in the TRON Protocol, we will specifically
explore the ecosystem centered around TRON stablecoins.
5.1 Decentralization and Development of TRON
As mentioned above, TRON utilizes a DPoS consensus mechanism, where witnesses
are elected through voting. These witnesses are responsible for generating blocks and
maintaining the network and are subject to stringent requirements, including staking
a large amount of TRX tokens and continuous community voting supervision. This
transparency and accountability of witnesses enable a comprehensive understanding of
the network’s dynamics, block production efficiency, voting support status, and token
5https://help.tokenpocket.pro/en/wallet-faq-en/tron-wallet/energy
6https://www.trxus.com
12
Fig. 3 The image on the left depicts the distribution of witness addresses across all blocks. The
right one illustrates the fluctuation of transaction volume as the block height increases.
flow changes, contributing to the assessment of network security and decentralization.
According to Figure 3, despite the relatively small number of witness addresses, the
TRON network remains decentralized at the block packaging level, with no single or
few dominating witnesses.
TRON has always touted itself as a high-performance blockchain system with
a block time of 3 seconds, significantly faster than Ethereum. However, the actual
throughput in real-world scenarios still needs to be analyzed in conjunction with the
on-chain block packaging situation. As shown in Figure 3, the daily transaction volume
of the TRON network shows an overall upward trend with the increase in block height,
but there are fluctuations.
In the early days, TRON’s daily transaction volume was relatively low, only a few
tens of thousands. As the community ecosystem gradually developed, the number of
users and DApps increased, and the transaction volume also gradually grew. By mid-
2019, TRON’s daily transaction volume had stabilized but had surged to a peak of 1
million at times. Entering 2020, due to the impact of the pandemic on the physical
industry, TRON’s transaction volume began to grow rapidly, reflecting the network’s
activity, even reaching a peak of about 18 million in 2021. However, towards the end
of 2023, due to an increase in negative news reports about TRON, the transaction
volume began to decline sharply, rebounding from the beginning of 2024.
Overall, TRON’s transaction volume has gradually increased with the block height,
reflecting the continuous development and growth of its ecosystem. Particularly in the
past two years, the transaction volume has remained at a high level, indicating that
TRON has gained a relatively stable user base and application coverage. However, the
fluctuations in transaction volume also suggest that there is still significant room for
development in the TRON ecosystem. How to attract more quality projects and funds
in the future to ensure ecosystem activity will be a major challenge for TRON. In the
long run, the continuous growth of transaction volume is crucial and is a litmus test
for TRON’s true strength.
13
5.2 Chain-predefined Native Services
Fig. 4 The image on the left shows the top 50 sender addresses in terms of the number of internal
transactions. The image on the right shows the top 50 recipient addresses in terms of the number of
internal transactions.
Due to the high degree of flexibility in transactions, TRON natively supports some
chain-predefined features like issuing new TRC10 assets and transferring assets. By
analyzing transaction parameters, we can explore these native services on the TRON
network.
The most fundamental operation is the TransferContract, which denotes the trans-
fer of TRX tokens. Analyzing the Top 50 list, it is evident that nearly all sender and
receiver addresses, regardless of the number of transactions or transaction amounts,
belong to centralized exchanges. However, this only represents external transaction
information and does not include TRX transfers resulting from contract control. There-
fore, further analysis of internal transactions is necessary to explore the actual on-chain
scenarios. As shown in Figure 4, the previously mentioned centralized exchange
addresses are absent, leaving mostly gambling addresses and addresses of decentralized
exchanges like Justswap7.
The TransferAssetContract represents the transfer of user-created TRC10 tokens.
In Figure 5, we have compiled the number of transactions for various tokens. Inter-
estingly, apart from Justin Sun’s Bittorrent token, which is not the most popular,
tokens representing transfer services such as Diamond and the blockchain gaming plat-
form token PlayAndLike are more prevalent on the network. Additionally, many token
names include gambling website URLs, and further examination of token descrip-
tions revealed a significant amount of promotional information for these gambling
sites. However, these gambling sites do not use these tokens as betting currency, and
unlike the previously mentioned tokens, these tokens do not have substantial intrinsic
value. The high transaction frequency suggests that sending these tokens is used as a
promotional strategy.
Subsequently, we analyzed the Resource Delegate situation. Figure 6 and Figure 7
represent the number and amount of Bandwidth (i.e., assisting others in completing
7https://justswap.org/
14
Fig. 5 The image on the left shows the top 50 TRC10 token names by number of transactions. The
right one shows the top 50 TRC10 token names by transaction volume.
Fig. 6 The image on the left shows the top 50 addresses in terms of the number of bandwidth
delegation occurrences. The right one shows the top 50 addresses in terms of the total bandwidth
amount delegated.
Fig. 7 The image on the left shows the top 50 addresses in terms of the number of energy delegation
occurrences. The right one shows the top 50 addresses in terms of the total energy amount delegated.
15
transactions) and Energy (i.e., assisting others in executing smart contracts) delega-
tions, respectively. Although most addresses do not have corresponding labels, it is
evident that several service providers specialize in offering resource services to oth-
ers. Additionally, many centralized exchanges use cold wallets to delegate resources to
their hot wallets, thereby reducing on-chain asset transfer costs.
5.3 Smart Contract and USDT Stablecoin
Fig. 8 The image on the left shows the distribution of addresses triggering smart contract trans-
actions, by the number of occurrences. The image on the right shows the distribution of addresses
triggering smart contract events, by the number of occurrences.
Unlike the flourishing scene on Ethereum with various tokens and DeFi applica-
tions, the statistics of smart contract triggers and event logs on the TRON network
are astonishing.
As shown in Figure 8, nearly 50% of users in TRON who actively trigger smart con-
tract transactions choose to directly operate the TetherToken, i.e., USDT stablecoin.
Additionally, from the annotated address types, it can be seen that besides directly
operating USDT, users also favor gambling. Smart contracts related to casinos such
as TronBetDice, TronBet, 888Tron, DiceGame, and TronWow are quite popular.
We further analyzed the events related to USDT and found that there were
1,750,695,957 Transfer events, 12,089,253 Approval events, 805 AddedBlackList events,
51 RemovedBlackList events, 303 DestroyedBlackFunds events, 257 Issue events for
issuing new USDT, and 37 Redeem events. Although we did not delve into these Black-
List events this time, we believe our dataset can aid future research on network fund
security concerning these BlackLists and related events. We further investigated the
more frequent Transfer and Approval events.
From the Transfer events, we established a large list of high-frequency transaction
receivers and senders, most of which are cold and hot wallet addresses of centralized
exchanges, including Okex, HTX, Binance, Kucoin, MXC, and Bybit. This indicates
that centralized exchanges still dominate the use of stablecoins on TRON. Addi-
tionally, addresses of well-known high-volume traders like FarFuture also appeared.
Similarly, we created a list of the Top 50 USDT holders, which also predominantly con-
sists of centralized exchanges. However, compared to the transaction list, the holder
list includes more unlabeled personal or institutional accounts, indicating that many
16
entities trust the security of the TRON blockchain and choose to store large amounts
of assets on it.
5.4 Related Work and Discussion
Based on our data exploration results, we propose several recommendations for future
research on the TRON ecosystem, drawing from existing studies on TRON and other
blockchain ecosystems:
Market and Audience Analysis of Resource Delegate Services: While
there is technical research on resource management within TRON [2], there is a lack of
analysis on its real-world application. In contrast, EOS.IO, which also has a complex
resource management mechanism, has been the subject of studies analyzing resource
usage in real scenarios [7]. Therefore, we recommend conducting market and audience
analyses for the Resource Delegate services in TRON.
Real-Time Tracking and Analysis of Hacker Funds on TRON: Generally,
the profits derived from hacker attacks on TRON are frequently channeled through
fund splitting or intricate transactions to obscure the paths of illicit money transfers,
evading financial regulatory agencies’ fund tracking and analysis. While most hacker
fund analyses focus on Bitcoin or EVM-based blockchains [9, 10], research on real-time
tracking and analysis of hacker funds on TRON is limited. Consequently, this research
aims to facilitate the prompt detection of hacker fund transfer addresses, empowering
financial regulatory bodies to trace and prevent the circulation of illegal funds. This
proactive approach serves to protect investors’ funds and uphold the security of the
TRON network
De-Anonymization of Addresses on TRON: This research aims to identify
and categorize transaction patterns in the TRON ecosystem, with a specific focus
on centralized exchanges (CEXs). CEXs act as vital channels for converting cryp-
tocurrencies to fiat currencies and back, playing a key role in enhancing liquidity and
accessibility of digital assets [11]. Despite the acknowledged importance of CEXs, there
is a lack of targeted research on CEXs within the TRON network, with only one exist-
ing study focusing on CEXs in the context of Bitcoin [12]. While de-anonymization
studies have a significant impact on financial regulatory bodies by providing improved
tools for regulatory compliance and enhancing their ability to investigate illicit activ-
ities on TRON, they also offer valuable insights to investors. By understanding the
transaction behaviors of CEXs on TRON, investors can evaluate risk levels associated
with different wallet addresses and adjust their investment strategies accordingly.
Research on Casino Applications in TRON: The TRON ecosystem boasts a
notable presence of casino applications and associated promotional content , under-
scoring the significance of investigating the development and current status of these
platforms Existing studies have analyzed online casinos and decentralized casinos on
other blockchains, shedding light on the nuances in the game mechanisms, management
structures, and player preferences across different platforms [13–16]. Such research
holds immense value in shaping the trajectory of gambling entertainment develop-
ment and facilitating regulatory frameworks within the gambling industry. By delving
17
into the intricacies of casino applications on TRON, researchers can contribute signifi-
cantly to enhancing transparency, security, and responsible gambling practices within
the ecosystem.
Study of Stablecoin Illicit Activities on TRON: The presence of blacklists
in on-chain USDT and reports of hacks, money laundering, and terrorist financing on
TRON warrant thorough investigation. While there is substantial research on ana-
lyzing stablecoin influence on cryptocurrency markets [17], DeFi security [18], illicit
activities detection [19–22] and de-anonymization [23, 24] on major blockchains like
Bitcoin and Ethereum, the heterogeneity of data between blockchains means that
these detection algorithms may not be directly applicable to TRON. Research aimed
at tracing the origin of funds, identifying, and blocking illegal transactions, which is
crucial for the compliance of TRON applications and the TRON blockchain itself.
6 Conclusion
In summary, this work presents a comprehensive framework for extracting and explor-
ing on-chain data from the TRON blockchain ecosystem. We developed an innovative
ETL process tailored specifically to handle TRON’s unique data structures and a
toolkit for data exploration, we overcame significant technical challenges to acquire
large-scale datasets that span all aspects of the TRON blockchain, including blocks,
all types of external transaction details, event logs, and internal transactions.
Our in-depth exploration and analysis of the TRON datasets have unveiled
novel insights into the blockchain ecosystem. The ecosystem demonstrates reasonable
decentralization in block production, indicating a distributed network of validators.
However, it is heavily centered around specific types of applications and services.
Gambling applications form a significant portion of the ecosystem’s activity, high-
lighting the popularity of blockchain-based gaming and betting platforms. The USDT
stablecoin plays a crucial role in facilitating transactions and providing a stable
medium of exchange within the network. Additionally, centralized exchanges main-
tain a strong presence, serving as important gateways between the TRON ecosystem
and the broader cryptocurrency market. Our analysis also characterized the resource
delegation markets within TRON, shedding light on how computational resources are
allocated and traded among network participants. Furthermore, we examined patterns
in smart contract usage, providing insights into the types of decentralized applications
being built and utilized on the TRON blockchain.
Looking ahead, this foundational dataset establishes a basis for crucial future
research into the TRON blockchain. A key area for investigation is the adoption of
delegate services, which could reveal important trends in how users interact with and
participate in the network’s governance. Conducting thorough audits of prominent
gambling applications is another priority, as it would enhance our understanding of
their operations, fairness, and potential risks. Stablecoin activity on TRON, partic-
ularly involving USDT, warrants in-depth examination. This research could uncover
patterns of usage and potentially identify any illicit activities associated with stable-
coin transactions. In parallel, developing and refining methods to detect and prevent
money laundering within the TRON ecosystem is crucial for maintaining the integrity
18
of the network and complying with regulatory standards. The heterogeneous nature
of TRON’s data opens up exciting possibilities for cross-blockchain research. Explor-
ing transfer learning techniques could allow insights gained from TRON to be applied
to other blockchain ecosystems, potentially accelerating research and development
across the field. Additionally, developing methods to adapt analysis techniques for
use across different blockchain platforms would greatly enhance our ability to conduct
comparative studies and identify broader trends in the blockchain space.
By pursuing these research directions, we can deepen the understanding of the
TRON ecosystem and contribute valuable insights to the broader field of blockchain
analysis. This work has the potential to improve the security, efficiency, and overall
functionality of blockchain networks, ultimately driving innovation and adoption in
the decentralized technology sector.
Data Availability Statement
The code used for this research is available in the GitHub repositories mentioned earlier
in the footnotes. The datasets generated and analyzed are too large to be hosted on
GitHub. Therefore, they are available at https://web3resear.ch/datasets.
References
[1] Yadav, J.S., Yadav, N.S., Sharma, A.K.: A qualitative and quantitative para-
metric estimation of the ethereum and tron blockchain networks. In: 2021 9th
International Conference on Reliability, Infocom Technologies and Optimization
(Trends and Future Directions)(ICRITO), pp. 1–5 (2021). IEEE
[2] Li, H., Li, Z., Tian, N.: Resource bottleneck analysis of the blockchain based on
tron’s tps. In: Advances in Natural Computation, Fuzzy Systems and Knowledge
Discovery: Volume 2, pp. 944–950 (2020). Springer
[3] Li, C., Palanisamy, B., Xu, R., Duan, L., Liu, J., Wang, W.: How hard is
takeover in dpos blockchains? understanding the security of coin-based voting
governance. In: Proceedings of the 2023 ACM SIGSAC Conference on Computer
and Communications Security, pp. 150–164 (2023)
[4] Li, D., Han, D., Weng, T.-H., Zheng, Z., Li, H., Li, K.-C.: On stablecoin: Ecosys-
tem, architecture, mechanism and applicability as payment method. Computer
Standards & Interfaces 87, 103747 (2024)
[5] Shukla, A., Das, T.K., Roy, S.S.: Trx cryptocurrency profit and transaction suc-
cess rate prediction using whale optimization-based ensemble learning framework.
Mathematics 11(11), 2415 (2023)
[6] Maghsoodi, A.I.: Cryptocurrency portfolio allocation using a novel hybrid and
predictive big data decision support system. Omega 115, 102787 (2023)
19
[7] Zheng, W., Zheng, Z., Dai, H.-N., Chen, X., Zheng, P.: Xblock-eos: Extracting
and exploring blockchain data from eosio. Information Processing & Management
58(3), 102477 (2021)
[8] Zheng, P., Zheng, Z., Wu, J., Dai, H.-N.: Xblock-eth: Extracting and exploring
blockchain data from ethereum. IEEE Open Journal of the Computer Society 1,
95–106 (2020)
[9] Goldsmith, D., Grauer, K., Shmalo, Y.: Analyzing hack subnetworks in the bitcoin
transaction graph. Applied Network Science 5(1), 22 (2020)
[10] Tsuchiya, Y., Hiramoto, N.: How cryptocurrency is laundered: Case study of
coincheck hacking incident. Forensic Science International: Reports 4, 100241
(2021)
[11] Aspris, A., Foley, S., Svec, J., Wang, L.: Decentralized exchanges: The “wild west”
of cryptocurrency trading. International Review of Financial Analysis 77, 101845
(2021)
[12] Ranshous, S., Joslyn, C.A., Kreyling, S., Nowak, K., Samatova, N.F., West, C.L.,
Winters, S.: Exchange pattern mining in the bitcoin transaction directed hyper-
graph. In: Financial Cryptography and Data Security: FC 2017 International
Workshops, WAHC, BITCOIN, VOTING, WTSC, and TA, Sliema, Malta, April
7, 2017, Revised Selected Papers 21, pp. 248–263 (2017). Springer
[13] Scholten, O.J.: On the behavioural profiling of gamblers using cryptocurrency
transaction data. PhD thesis, University of York (2022)
[14] Scholten, O.J., Zendle, D., Walker, J.A.: Inside the decentralised casino: A longi-
tudinal study of actual cryptocurrency gambling transactions. PloS one 15(10),
0240693 (2020)
[15] Brown, S.H.V.: Gambling on the blockchain: How the unlawful internet gambling
enforcement act has opened the door for offshore crypto casinos. Vand. J. Ent. &
Tech. L. 24, 535 (2021)
[16] Meng, J., Fu, F.: Understanding gambling behaviour and risk attitudes using
cryptocurrency-based casino blockchain data. Royal Society open science 7(10),
201446 (2020)
[17] Ante, L., Fiedler, I., Strehle, E.: The influence of stablecoin issuances on
cryptocurrency markets. Finance Research Letters 41, 101867 (2021)
[18] Li, W., Bu, J., Li, X., Peng, H., Niu, Y., Zhang, Y.: A survey of defi security:
Challenges and opportunities. Journal of King Saud University-Computer and
Information Sciences 34(10), 10378–10404 (2022)
20
[19] Ibrahim, R.F., Elian, A.M., Ababneh, M.: Illicit account detection in the
ethereum blockchain using machine learning. In: 2021 International Conference
on Information Technology (ICIT), pp. 488–493 (2021). IEEE
[20] Liu, J., Zheng, J., Wu, J., Zheng, Z.: Fa-gnn: Filter and augment graph neural
networks for account classification in ethereum. IEEE Transactions on Network
Science and Engineering 9(4), 2579–2588 (2022)
[21] Wu, Z., Liu, J., Wu, J., Zheng, Z., Luo, X., Chen, T.: Know your transactions:
Real-time and generic transaction semantic representation on blockchain & web3
ecosystem. In: Proceedings of the ACM Web Conference 2023, pp. 1918–1927
(2023)
[22] Wu, J., Yuan, Q., Lin, D., You, W., Chen, W., Chen, C., Zheng, Z.: Who
are the phishers? phishing scam detection on ethereum via network embedding.
IEEE Transactions on Systems, Man, and Cybernetics: Systems 52(2), 1156–1166
(2020)
[23] Huang, T., Lin, D., Wu, J.: Ethereum account classification based on graph con-
volutional network. IEEE Transactions on Circuits and Systems II: Express Briefs
69(5), 2528–2532 (2022)
[24] Wu, J., Liu, J., Zhao, Y., Zheng, Z.: Analysis of cryptocurrency transactions
from a network perspective: An overview. Journal of Network and Computer
Applications 190, 103139 (2021)
21
|
Decoding TRON: A Comprehensive Framework for Large-Scale Blockchain Data Extraction and Exploration Qian'ang Mao1, Jiaxin Wang1, Zhiqi Feng1, Yi Zhang1, Jiaqi Yan1* 1* 210023, Jiangsu, China. Contributing authors: ; Abstract Cryptocurrencies and Web3 applications based on blockchain technology have flourished in the blockchain research field. Unlike Bitcoin and Ethereum, due to its unique architectural designs in consensus mechanisms, resource management, and throughput, TRON has developed a more distinctive ecosystem and application scenarios centered around stablecoins. Although it is popular in areas like stablecoin payments and settlement, research on analyzing on-chain data from the TRON blockchain is remarkably scarce. To fill this gap, this paper proposes a comprehensive data extraction and exploration framework for the TRON blockchain. An innovative high-performance ETL system aims to efficiently extract raw on-chain data from TRON, including blocks, transactions, smart contracts, and receipts, establishing a research dataset. An in-depth analysis of the extracted dataset reveals insights into TRON's block generation, transaction trends, the dominance of exchanges, the resource delegation market, smart contract usage patterns, and the central role of the USDT stablecoin. The prominence of gambling applications and potential illicit activities related to USDT is emphasized. The paper discusses opportunities for future research leveraging this dataset, including analysis of delegate services, gambling scenarios, stablecoin activities, and illicit transaction detection. These contributions enhance blockchain data management capabilities and understanding of the rapidly evolving TRON ecosystem. Keywords: TRON, blockchain, data extraction, data exploration, cryptocurrency 1 19 Sep 2025 1 Introduction In recent years, the blockchain ecosystem has experienced tremendous growth, as evidenced by the rising market capitalization and the adoption of blockchain-based cryptocurrencies and platforms. Bitcoin, the first and most well-known cryptocurrency, had a market capitalization exceeding 2.24 trillion, indicating the rapid expansion of the blockchain ecosystem. Reflecting this growth, academic research on blockchain technology has also surged. Early research primarily focused on fundamental blockchain technologies, including consensus algorithms, cryptography, and scalability solutions. As blockchain platforms and applications have become firmly established, recent research attention has gradually shifted towards analyzing and improving the design, applications, and user experience of the blockchain ecosystem. For example, active research areas include DeFi protocol analysis, NFT markets, DAO formation and governance, blockchain gaming, and factors influencing user adoption. New specialized journals and conferences focusing on blockchain applications and business use cases have also emerged. While popular blockchain platforms like Bitcoin and Ethereum and their smart contract applications have garnered widespread research attention, other ecosystems with unique architectural designs and use cases remain relatively unexplored. The vast amounts of data generated by these specialized blockchain systems, although possessing significant commercial and academic value similar to Bitcoin and Ethereum, also present substantial technical challenges for analyzing heterogeneous blockchain data. In 2021, TRON surpassed Ethereum in USDT stablecoin supply, becoming a leading stablecoin issuance platform globally. By 2023, TRON reached 200 million users, with 34.5 million (approximately 17.2%) holding USDT. However, TRON's heavy focus on stablecoin transactions poses risks, as it was flagged by Israel and the United States for aiding terrorist fundraising activities. TRON's popularity among terrorist groups is attributed to its faster transaction speeds and lower fees compared to Bitcoin, making it a preferred platform for illicit activities like money laundering However, current research on TRON on-chain data is scarce, with most studies focusing only on the basic mechanism design analysis of the price fluctuations of its native token TRX and USDT stablecoin [1-6]. The fundamental challenges stem from several factors. Firstly, there is an absence of a universal data extraction tool for TRON. While certain blockchain websites like TRONSCAN 1 offer partial TRON data, their data extraction tools are not publicly accessible, and the data acquisition process is rate-limited, restricting access to comprehensive datasets. Secondly, there is a lack of comprehensive data exploration tools specifically designed for TRON. Although extensive research has been conducted on data analysis for EOS.IO and Ethereum [7, 8], studies focusing on TRON are scarce. To the best of our knowledge, there has been no comprehensive analysis performed on the entirety of TRON's 1https://tronscan.org 2 dataset across all data types, leaving a significant gap in understanding the intricacies of this blockchain platform. Thirdly, the extraction and processing of TRON's data present significant difficulties. TRON's data types are based on Protocol Buffers, and it employs gRPC to deliver high-performance interfaces, encompassing numerous intricate data structures and data types. Furthermore, the data structures within TRON involve nesting, arbitrary types, and other complex relationships, posing significant challenges for data parsing endeavors and hindering the development of effective analysis tools and techniques. This paper addresses the challenges and motivations surrounding TRON blockchain data analysis through several key contributions. We present a comprehensive data processing framework for the TRON blockchain, featuring robust ETL workflows and querying capabilities, which enables efficient extraction and analysis of the vast TRON ecosystem data. Our work includes detailed explanations of multiple datasets derived from this processing methodology, accompanied by preliminary analyses to illuminate the data's content and potential applications. Additionally, we provide a critical assessment of the current research landscape and explore promising future directions that could leverage these datasets. By offering these tools and insights, we aim to empower researchers and developers in the blockchain analytics field, fostering innovation and a deeper understanding of the TRON blockchain. This research not only advances the technical capabilities for blockchain data processing but also paves the way for novel applications and scholarly investigations in this rapidly evolving domain. 2 Background 2.1 TRON Consensus TRON's consensus mechanism is different from that of Bitcoin and Ethereum, and it adopts the same DPoS (Delegated Proof of Stake) as EOS.io. The DPoS consensus mechanism is a way to verify transactions and maintain network security by electing representative nodes (i.e., SR, Super Representatives). Users vote for 27 Super Representatives by staking (freezing) their held TRX (TRON's cryptocurrency). Super Representatives are responsible for generating blocks and processing transactions on the network and are re-elected every 6 hours to ensure the network's decentralization and efficiency. The DPoS mechanism increases the network's processing speed and transaction throughput by reducing the number of nodes participating in the consensus while incentivizing users to actively participate in network governance. 2.2 TRON Account Model TRON uses an account model. The address is the unique identifier of the account, and operating the account requires a private key signature. The account has many attributes, including TRX and TRC10 token balances, bandwidth, energy, and more. The account can send transactions to increase or decrease its TRX or TRC10 token balance, deploy smart contracts, and trigger its own or others' published smart contracts, which are self-executing codes that automatically execute predetermined actions when 3 specific conditions are met. All TRON accounts can apply to become Super Representatives or vote for elected Super Representatives. The account is the foundation of all activities on TRON. 2.3 TRON Transaction Execution The TRON Transaction Execution is a comprehensive process that begins with transaction creation, typically initiated by a user through a wallet or decentralized application (dApp). Once created, the transaction is broadcast to the TRON network, where it's verified by nodes and enters the mempool (transaction pool) to await processing. SR nodes, key players in TRON's consensus mechanism, select transactions from the pool to package into blocks. For transactions involving smart contracts, the TRON Virtual Machine (TVM) comes into play. The TVM, a core component of the TRON network, is responsible for executing smart contract code. It's Turingcomplete and compatible with the Ethereum Virtual Machine (EVM), allowing it to run contracts written in Solidity. The TVM processes the contract by loading its code, interpreting opcodes, executing logic, updating states, and consuming energy (similar to Ethereum's gas). After contract execution, the SR nodes reach consensus on the block's validity using the DPoS mechanism. Once consensus is achieved, the new block is added to the blockchain, confirming the transactions it contains. This process updates the global state of the TRON network, including account balances and contract states. Any events triggered by the transactions or contract executions are recorded, allowing dApps to listen and respond accordingly. The transaction is then considered complete, with its results visible across the network and accessible through block explorers or wallets. Throughout this workflow, the TVM plays a crucial role, particularly in handling smart contract transactions, enabling TRON to support complex decentralized applications and DeFi projects. 2.4 TRON Resource Model TRON's resource model manages transactions and smart contract execution on the network through two resources: "Bandwidth" and "Energy". Bandwidth is the resource consumed by users when performing ordinary transactions (such as transfers), and each account receives a certain amount of free bandwidth every day. Users can obtain additional bandwidth by freezing (locking for a period of time) their held TRX (TRON's cryptocurrency). Freezing TRX not only increases bandwidth but also grants voting rights for network governance. Energy is the resource consumed by users when executing smart contracts. Like bandwidth, users can obtain energy by freezing TRX. The mechanism of freezing TRX to obtain energy and bandwidth encourages users to actively participate in network activities, providing a decentralized way to allocate and manage computing resources on the blockchain, ensuring the network's security and efficient operation. 4 3 Our Framework This chapter describes the process of obtaining raw data from the TRON blockchain. The extraction and analysis processes in our framework are described below: 3.1 Extract, Transform, and Load Fig. 1 This figure illustrates the ETL(Extract, Transform, Load) pipeline for TRON blockchain data, which shows the flow from a Java-TRON Archive Node, through a Rust-based processing system, to final storage in a Columnar Database. Figure 1 illustrates an ETL (Extract, Transform, Load) process implemented in Rust, specifically designed to handle data from the TRON blockchain's Java-TRON Archive Node. The source code of the ETL is available on our GitHub2. The process begins with the extraction phase, where raw data is retrieved from the Java-TRON Archive Node via a gRPC interface. This raw data is then processed by the trongrpc-rs library, which is built using Rust and generated from the TRON Protocol's specifications. This library is responsible for handling the gRPC communication and converting the extracted data into Protobuf types. Within the transformation phase, encapsulated in a Rust-based ETL submodule, the Protobuf types undergo several operations to prepare the data for storage. First, the data is flattened, transforming complex, nested structures into simple, table-like rows. This is followed by a type conversion step, where data types are adapted to formats suitable for downstream processes. Additionally, the process involves parsing contract parameters from the Protobuf data, which are critical for understanding and processing smart contracts within the TRON ecosystem. The parsed contract parameters are also subjected to flattening and type conversion, ensuring that all data is uniformly structured and ready for database insertion. The final phase of the ETL process is the loading step, where the processed table rows are batch-inserted into a columnar database via SQL. Post-insertion, the database undergoes an optimization process to enhance data retrieval efficiency and overall performance. This step is crucial for maintaining the scalability and responsiveness of the database when handling large volumes of blockchain data. 5 Fig. 2 This figure illustrates the data exploration process using Python to query a columnar database, which shows how a user's script interacts with Python to construct queries, convert data types, and parse results from a columnar database, with the option to export data back to the user. 3.2 Data Exploration Figure 2 presents a detailed exploration process, illustrating the interaction between the user, a Python toolkit, and a columnar database in the context of data analysis. The source codes of the Python toolkit and the corresponding columnar database DDL(Data Definition Language) are available on our GitHub3. Initially, the user engages with the Python environment through a script, which acts as a conduit for sending instructions to the Python toolkit. Upon receiving these instructions, the Python toolkit initiates a sequence of processes designed to facilitate data retrieval and manipulation. The first major step involves the construction of a query, wherein the Python toolkit interprets the user's instructions and formulates an appropriate database query. This query formulation process may require type conversion to ensure that the data types align with the expected formats in the database. The query, once constructed, is then executed against a columnar database via an SQL SELECT statement. This database, specialized in handling columnar data storage, processes the query and returns the requested data organized into rows. Following the retrieval of data, the Python toolkit engages in further processing steps. These include another round of type conversion to adapt the retrieved data to a format suitable for subsequent analysis and a parsing operation to extract and organize the relevant columns from the dataset. Once these processes are complete, the toolkit exports the processed data back to the user, thereby completing the data exploration cycle. 4 Datasets In this section, we will provide a detailed description of the statistics and observations of several datasets obtained from the TRON blockchain through the processing of the above framework. 2https://github.com/njublockchain/web3research-etl 3https://github.com/njublockchain/web3research-py 6 Main Category Subcategory Contracts Assets TRC10 assetIssueContracts participateAssetIssueContracts updateAssetContracts Transfer transferContracts transferAssetContracts shieldedTransferContracts Staking cancelAllUnfreezeV2Contracts freezeBalanceContracts freezeBalanceV2Contracts unfreezeAssetContracts unfreezeBalanceContracts unfreezeBalanceV2Contracts withdrawBalanceContracts Account EOAs accountCreateContracts accountPermissionUpdateContracts accountUpdateContracts setAccountIdContracts updateSettingContracts SmartContracts clearAbiContracts createSmartContracts triggerSmartContracts Resource delegateResourceContracts undelegateResourceContracts updateBrokerageContracts updateEnergyLimitContracts withdrawExpireUnfreezeContracts DEX Exchange exchangeCreateContracts exchangeInjectContracts exchangeTransactionContracts exchangeWithdrawContracts Market marketCancelOrderContracts marketSellAssetContracts Government Proposal proposalApproveContracts proposalCreateContracts proposalDeleteContracts SR Voting voteWitnessContracts voteAssetContracts witnessCreateContracts witnessUpdateContracts Table 1 The classification table for our parsed dataset of TRON protocol contracts. We grouped the data by primary use case into four categories: Assets, Accounts, DEXs (Decentralized Exchanges), and Governance. Each category is further divided by specific function: Assets into TRC10, Transfer, and Staking; Accounts into EOAs (Externally Owned Accounts), Smart Contracts, and Resources; DEXs into Exchanges and Markets; and Governance into Proposals and SR (Super Representative) Voting. 7 Table 1 shows the classification relationship from the raw data to the seven data sets. We collect data block chain chain will be accessible from the public platform4. 4.1 Blocks Blocks, as the name implies, are essential components of the blockchain data structure, serving as packages of transactions. In our Blocks dataset, we primarily retain Block Header information, while specific transaction details are stored in an external transaction dataset. The core fields in this dataset include block hash, timestamp, Merkle root hash of transactions, parent block hash, block height, witness ID and address, block version number, account state tree root hash, witness signature, and transaction count. The block hash serves as a unique identifier for each block, ensuring the integrity and immutability of block data. The timestamp field records the precise time of block generation, facilitating research on block production rates and temporal distribution. The Merkle root hash of transactions and the account state tree root hash are used to verify transactions and account states within the block, respectively, reflecting TRON network's design in data validation and state management. The parent block hash maintains the linkage between blocks, guaranteeing the continuity and consistency of the blockchain. The block height field indicates the position of the block within the entire chain, serving as a crucial parameter for analyzing network growth and historical queries. Witness-related fields, such as witness ID, address, and signature, directly reflect the characteristics of TRON's DPoS consensus mechanism. This information not only validates the legitimacy of blocks but also enables analysis of Super Representative activity patterns and network participation. The block version number field reflects the evolution of the TRON protocol, aiding in tracking network upgrades and compatibility changes. The transaction count field records the number of transactions in each block, providing significant data for analyzing network throughput, user activity, and economic activity scale. Through in-depth analysis of this comprehensive block dataset, researchers can gain insights into TRON network's operational characteristics, performance metrics, security, and degree of decentralization. For instance, timestamp and block height data can be used to study network stability and throughput variations; witnessrelated data can be analyzed to examine Super Representative election mechanisms and power distribution; transaction count and block size can be utilized to explore network scalability and user adoption trends. 4.2 External Transactions In TRON, external transactions, similar to those in Ethereum, represent transactions initiated by EOA (Externally Owned Account) addresses. This dataset focuses on external transaction data within the TRON blockchain network, encompassing not only initial transaction information but also merged data on post-execution results. This design provides researchers with a complete view of a transaction's entire lifecycle, 4https://web3resear.ch/datasets 8 from initiation to final on-chain confirmation. Each record represents an independent external transaction, containing both the original transaction data and detailed results of its execution on the TRON network. The core fields of the dataset can be broadly categorized into several key areas: transaction identification information, block information, authorization data, contract call data, resource consumption and fee information, execution results, and special operation data. The transaction hash serves as a unique identifier for each transaction, ensuring traceability. Block number and transaction index precisely locate the transaction within the blockchain, crucial for studying transaction temporality and block filling efficiency. Authorization-related fields (such as authorityAccountNames, authorityAccountAddresses, etc.) reflect TRON network's multi-signature and complex permission management mechanisms. These data provide a foundation for analyzing network security models and user interaction patterns. Contract-related fields (contractType, contractParameter, etc.) detail smart contract invocations, significant for understanding the usage patterns of decentralized applications (DApps) and the popularity of smart contracts in the TRON ecosystem. Furthermore, we have parsed contractParameters into multiple sub-datasets based on different contractTypes, which will be introduced in the next section. Notably, this dataset incorporates transaction execution result data. Fields such as energyUsage, netUsage, and fee meticulously record resource consumption, crucial for analyzing TRON network's resource pricing mechanisms and efficiency. The receiptResult and result fields directly reflect transaction execution outcomes, while the resMessage field may contain more detailed execution information or error descriptions. The combination of these data enables researchers to conduct in-depth analyses of transaction failure reasons, network congestion situations, and smart contract execution efficiency. The dataset also includes dedicated fields for special operations, such as asset issuance (assetIssueId), account freezing and unfreezing (withdrawAmount, unfreezeAmount), exchange operations (exchangeId, exchangeReceivedAmount, etc.), and order-related information (orderId, orderDetails, etc.). These fields reflect the diverse financial functionalities supported by the TRON network, providing direct data support for researching decentralized finance (DeFi) activities on the network. Through comprehensive analysis of this external transaction dataset, researchers can gain multi-faceted insights into TRON network operations. For instance, they can study the usage frequency, resource consumption patterns, and success rates of different types of smart contracts, evaluating network resource utilization efficiency and the rationality of pricing mechanisms. Transaction result and resource usage data can be used to analyze network performance bottlenecks and optimization opportunities. Authorization account and signature information can be employed to study network security and user behavior patterns. Data on special transaction types provides valuable empirical foundations for researching financial innovations and user adoption within the TRON ecosystem. 9 4.2.1 Protocol Contracts 4.3 Smart Contract Event Logs During the transaction execution process in TRON, smart contracts on the TVM (TRON Virtual Machine) also generate a special data structure called Event Log, used to record events and log information during contract runtime. Event Log is essentially a data structure actively triggered and defined by smart contract code, allowing developers to record custom data at critical execution points, such as state changes, error messages, audit trails, etc. The dataset used in this study focuses on event log data from the TRON blockchain network, capturing various events generated during smart contract execution in the TVM. In the blockchain ecosystem, event logs play a crucial role, not only recording key points of contract execution but also providing an important interface for off-chain applications to interact with on-chain activities. The unique value of this dataset lies in its detailed record of smart contract activities on the TRON network, offering valuable insights for researchers to deeply analyze contract behaviors, user interaction patterns, and the dynamics of the entire ecosystem. The core fields of the dataset include block number, associated transaction hash, log index, contract address, up to four topic fields (topic0 to topic3), and a data field. Each record represents an independent event log, precisely located within a specific block and transaction, and distinguished by the log index for multiple events within the same transaction. The block number (blockNum) and transaction hash (transactionHash) fields link the event logs to the overall structure of the blockchain, allowing researchers to track the position and timing of events throughout the blockchain's history. The log index (logIndex) further refines the order of multiple events within the same transaction, which is crucial for understanding the execution flow and results of complex transactions. The contract address (address) field identifies the smart contract that triggered the event, providing a basis for analyzing activity patterns and user interaction frequencies of specific contracts. The topic fields (topic0 to topic3) are the indexed part of the event, typically containing the event signature (topic0) and key parameters. The nullable design of these fields increases the flexibility of the data structure, accommodating the diverse needs of different types of events. The data field contains the non-indexed part of the event, potentially including more detailed parameter information. Through in-depth analysis of this event log dataset, researchers can gain multifaceted insights into the smart contract ecosystem of the TRON network. For example, they can study the frequency and distribution of specific types of events, identify the most active contracts and the most common user interaction patterns. Combined with transaction data, a complete smart contract activity map can be constructed, providing a deep understanding of complex DApp operation mechanisms and user behavior patterns. Moreover, this dataset is particularly valuable for studying the operations of decentralized finance (DeFi) applications. For instance, token transfer events can be 10 analyzed to track fund flows, liquidity provision and removal events can be studied to evaluate the health of DeFi protocols, or price oracle events can be examined to research the propagation and impact of market data on-chain. Event log data also provides important tools for developers and auditors. By analyzing error events and abnormal patterns, potential contract vulnerabilities or security risks can be identified. Additionally, these data form the basis for building efficient indexing and real-time monitoring systems, supporting more complex off-chain analysis and application development. 4.4 Internal Transaction Similar to Ethereum, in TRON's transaction system, apart from common transactions (also known as External Transactions), there exists a special type of transaction called Internal Transactions. This dataset focuses on internal transaction data within the TRON blockchain network, capturing internal calls and value transfers generated during smart contract execution. Internal transactions differ from external transactions in that they are not directly initiated by users, but are triggered by contract code during smart contract execution. The significance of this dataset lies in its revelation of complex interaction patterns and detailed value flows of smart contracts on the TRON network, providing researchers with a valuable resource for in-depth understanding of the internal operational mechanisms of the TRON ecosystem. The core fields of the dataset include block number, associated external transaction hash, internal transaction index, internal transaction hash, caller address, recipient address, token transfer information, notes, whether the transaction was rejected, and additional information. Each record represents an independent internal transaction, precisely located within a specific block and external transaction, and distinguished by an internal index for multiple internal calls within the same external transaction. The block number (blockNum) and external transaction hash (transactionHash) fields link internal transactions to the overall blockchain structure, allowing researchers to track the position and sequence of internal transactions throughout the transaction execution process. The internal transaction index (internalIndex) further refines the order of multiple internal calls within the same external transaction, which is crucial for understanding the execution flow of complex smart contracts. The caller address (callerAddress) and recipient address (transferToAddress) fields reveal patterns of inter-contract calls and paths of value flow. This data is significant for analyzing smart contract interaction networks, identifying key contracts, and tracing fund flows. Token transfer information (callValueInfos.tokenId and callValueInfos.callValue) is stored in array form, supporting simultaneous transfers of multiple tokens, reflecting the complex financial operations supported by the TRON network. The field indicating whether a transaction was rejected (rejected) provides direct information about the success or failure of internal transaction execution, which is valuable for assessing the reliability of smart contracts and identifying potential vulnerabilities or design flaws. The note and extra information fields may contain additional explanations about the purpose of internal transactions or special conditions, providing extra context for in-depth analysis. 11 Through comprehensive analysis of this internal transaction dataset, researchers can gain multi-faceted insights into the smart contract ecosystem of the TRON network. For instance, they can study common contract interaction patterns, identify frequently called contracts, and key value transfer paths. Combined with external transaction data, a complete transaction execution graph can be constructed, enabling deep understanding of complex DApp operational mechanisms. Analysis of the rejected field can help evaluate the robustness of smart contracts and potential security risks. Analysis of token transfer information can reveal token economic activities and liquidity patterns on the TRON network. Furthermore, this dataset is particularly valuable for studying the internal working mechanisms of decentralized finance (DeFi) applications. For example, it allows analysis of internal transaction processes of complex lending protocols, decentralized exchanges, or cross-chain bridges, providing understanding of how funds flow between different contracts and the resource consumption of various operations. 5 On-chain Data Insight Based on the above data-extracting methodology, we have acquired the ability to analyze any application or scenario on the TRON network. To investigate the basic information of the TRON blockchain, we start by analyzing the fundamental Block information and the basic transaction information within it. As of the writing time of this paper, UTC zone 2024-04-03 22:59:12, the TRON network has reached block number 60,505,000, with a total of 60,505,001 blocks and 8,190,158,591 transactions. Among the various transaction types, TriggerSmartContract is the most frequent, with a total of 3,437,398,059 transactions, far exceeding other types. This indicates that smart contract calls on the TRON network are very popular among users. Following this are TransferContract and TransferAssetContract, which are transactions for transferring TRX and TRC10 assets. Next are DelegateResourceContract and UndelegateResourceContract, which are transactions for delegating bandwidth and energy resources of accounts. These transactions are evidently sourced from energy leasing services provided by wallets or websites like TokenPocket5 and TRXUS6. Although FreezeBalanceContract and UnfreezeBalanceContract can also provide more transaction energy for accounts, their transaction numbers are significantly lower. Due to the numerous transaction types in the TRON Protocol, we will specifically explore the ecosystem centered around TRON stablecoins. 5.1 Decentralization and Development of TRON As mentioned above, TRON utilizes a DPoS consensus mechanism, where witnesses are elected through voting. These witnesses are responsible for generating blocks and maintaining the network and are subject to stringent requirements, including staking a large amount of TRX tokens and continuous community voting supervision. This transparency and accountability of witnesses enable a comprehensive understanding of the network's dynamics, block production efficiency, voting support status, and token 5https://help.tokenpocket.pro/en/wallet-faq-en/tron-wallet/energy 6https://www.trxus.com 12 Fig. 3 The image on the left depicts the distribution of witness addresses across all blocks. The right one illustrates the fluctuation of transaction volume as the block height increases. flow changes, contributing to the assessment of network security and decentralization. According to Figure 3, despite the relatively small number of witness addresses, the TRON network remains decentralized at the block packaging level, with no single or few dominating witnesses. TRON has always touted itself as a high-performance blockchain system with a block time of 3 seconds, significantly faster than Ethereum. However, the actual throughput in real-world scenarios still needs to be analyzed in conjunction with the on-chain block packaging situation. As shown in Figure 3, the daily transaction volume of the TRON network shows an overall upward trend with the increase in block height, but there are fluctuations. In the early days, TRON's daily transaction volume was relatively low, only a few tens of thousands. As the community ecosystem gradually developed, the number of users and DApps increased, and the transaction volume also gradually grew. By mid2019, TRON's daily transaction volume had stabilized but had surged to a peak of 1 million at times. Entering 2020, due to the impact of the pandemic on the physical industry, TRON's transaction volume began to grow rapidly, reflecting the network's activity, even reaching a peak of about 18 million in 2021. However, towards the end of 2023, due to an increase in negative news reports about TRON, the transaction volume began to decline sharply, rebounding from the beginning of 2024. Overall, TRON's transaction volume has gradually increased with the block height, reflecting the continuous development and growth of its ecosystem. Particularly in the past two years, the transaction volume has remained at a high level, indicating that TRON has gained a relatively stable user base and application coverage. However, the fluctuations in transaction volume also suggest that there is still significant room for development in the TRON ecosystem. How to attract more quality projects and funds in the future to ensure ecosystem activity will be a major challenge for TRON. In the long run, the continuous growth of transaction volume is crucial and is a litmus test for TRON's true strength. 13 5.2 Chain-predefined Native Services Fig. 4 The image on the left shows the top 50 sender addresses in terms of the number of internal transactions. The image on the right shows the top 50 recipient addresses in terms of the number of internal transactions. Due to the high degree of flexibility in transactions, TRON natively supports some chain-predefined features like issuing new TRC10 assets and transferring assets. By analyzing transaction parameters, we can explore these native services on the TRON network. The most fundamental operation is the TransferContract, which denotes the transfer of TRX tokens. Analyzing the Top 50 list, it is evident that nearly all sender and receiver addresses, regardless of the number of transactions or transaction amounts, belong to centralized exchanges. However, this only represents external transaction information and does not include TRX transfers resulting from contract control. Therefore, further analysis of internal transactions is necessary to explore the actual on-chain scenarios. As shown in Figure 4, the previously mentioned centralized exchange addresses are absent, leaving mostly gambling addresses and addresses of decentralized exchanges like Justswap7. The TransferAssetContract represents the transfer of user-created TRC10 tokens. In Figure 5, we have compiled the number of transactions for various tokens. Interestingly, apart from Justin Sun's Bittorrent token, which is not the most popular, tokens representing transfer services such as Diamond and the blockchain gaming platform token PlayAndLike are more prevalent on the network. Additionally, many token names include gambling website URLs, and further examination of token descriptions revealed a significant amount of promotional information for these gambling sites. However, these gambling sites do not use these tokens as betting currency, and unlike the previously mentioned tokens, these tokens do not have substantial intrinsic value. The high transaction frequency suggests that sending these tokens is used as a promotional strategy. Subsequently, we analyzed the Resource Delegate situation. Figure 6 and Figure 7 represent the number and amount of Bandwidth (i.e., assisting others in completing 7https://justswap.org/ 14 Fig. 5 The image on the left shows the top 50 TRC10 token names by number of transactions. The right one shows the top 50 TRC10 token names by transaction volume. Fig. 6 The image on the left shows the top 50 addresses in terms of the number of bandwidth delegation occurrences. The right one shows the top 50 addresses in terms of the total bandwidth amount delegated. Fig. 7 The image on the left shows the top 50 addresses in terms of the number of energy delegation occurrences. The right one shows the top 50 addresses in terms of the total energy amount delegated. 15 transactions) and Energy (i.e., assisting others in executing smart contracts) delegations, respectively. Although most addresses do not have corresponding labels, it is evident that several service providers specialize in offering resource services to others. Additionally, many centralized exchanges use cold wallets to delegate resources to their hot wallets, thereby reducing on-chain asset transfer costs. 5.3 Smart Contract and USDT Stablecoin Fig. 8 The image on the left shows the distribution of addresses triggering smart contract transactions, by the number of occurrences. The image on the right shows the distribution of addresses triggering smart contract events, by the number of occurrences. Unlike the flourishing scene on Ethereum with various tokens and DeFi applications, the statistics of smart contract triggers and event logs on the TRON network are astonishing. As shown in Figure 8, nearly 50% of users in TRON who actively trigger smart contract transactions choose to directly operate the TetherToken, i.e., USDT stablecoin. Additionally, from the annotated address types, it can be seen that besides directly operating USDT, users also favor gambling. Smart contracts related to casinos such as TronBetDice, TronBet, 888Tron, DiceGame, and TronWow are quite popular. We further analyzed the events related to USDT and found that there were 1,750,695,957 Transfer events, 12,089,253 Approval events, 805 AddedBlackList events, 51 RemovedBlackList events, 303 DestroyedBlackFunds events, 257 Issue events for issuing new USDT, and 37 Redeem events. Although we did not delve into these BlackList events this time, we believe our dataset can aid future research on network fund security concerning these BlackLists and related events. We further investigated the more frequent Transfer and Approval events. From the Transfer events, we established a large list of high-frequency transaction receivers and senders, most of which are cold and hot wallet addresses of centralized exchanges, including Okex, HTX, Binance, Kucoin, MXC, and Bybit. This indicates that centralized exchanges still dominate the use of stablecoins on TRON. Additionally, addresses of well-known high-volume traders like FarFuture also appeared. Similarly, we created a list of the Top 50 USDT holders, which also predominantly consists of centralized exchanges. However, compared to the transaction list, the holder list includes more unlabeled personal or institutional accounts, indicating that many 16 entities trust the security of the TRON blockchain and choose to store large amounts of assets on it. 5.4 Related Work and Discussion Based on our data exploration results, we propose several recommendations for future research on the TRON ecosystem, drawing from existing studies on TRON and other blockchain ecosystems: Market and Audience Analysis of Resource Delegate Services: While there is technical research on resource management within TRON [2], there is a lack of analysis on its real-world application. In contrast, EOS.IO, which also has a complex resource management mechanism, has been the subject of studies analyzing resource usage in real scenarios [7]. Therefore, we recommend conducting market and audience analyses for the Resource Delegate services in TRON. Real-Time Tracking and Analysis of Hacker Funds on TRON: Generally, the profits derived from hacker attacks on TRON are frequently channeled through fund splitting or intricate transactions to obscure the paths of illicit money transfers, evading financial regulatory agencies' fund tracking and analysis. While most hacker fund analyses focus on Bitcoin or EVM-based blockchains [9, 10], research on real-time tracking and analysis of hacker funds on TRON is limited. Consequently, this research aims to facilitate the prompt detection of hacker fund transfer addresses, empowering financial regulatory bodies to trace and prevent the circulation of illegal funds. This proactive approach serves to protect investors' funds and uphold the security of the TRON network De-Anonymization of Addresses on TRON: This research aims to identify and categorize transaction patterns in the TRON ecosystem, with a specific focus on centralized exchanges (CEXs). CEXs act as vital channels for converting cryptocurrencies to fiat currencies and back, playing a key role in enhancing liquidity and accessibility of digital assets [11]. Despite the acknowledged importance of CEXs, there is a lack of targeted research on CEXs within the TRON network, with only one existing study focusing on CEXs in the context of Bitcoin [12]. While de-anonymization studies have a significant impact on financial regulatory bodies by providing improved tools for regulatory compliance and enhancing their ability to investigate illicit activities on TRON, they also offer valuable insights to investors. By understanding the transaction behaviors of CEXs on TRON, investors can evaluate risk levels associated with different wallet addresses and adjust their investment strategies accordingly. Research on Casino Applications in TRON: The TRON ecosystem boasts a notable presence of casino applications and associated promotional content , underscoring the significance of investigating the development and current status of these platforms Existing studies have analyzed online casinos and decentralized casinos on other blockchains, shedding light on the nuances in the game mechanisms, management structures, and player preferences across different platforms [13-16]. Such research holds immense value in shaping the trajectory of gambling entertainment development and facilitating regulatory frameworks within the gambling industry. By delving 17 into the intricacies of casino applications on TRON, researchers can contribute significantly to enhancing transparency, security, and responsible gambling practices within the ecosystem. Study of Stablecoin Illicit Activities on TRON: The presence of blacklists in on-chain USDT and reports of hacks, money laundering, and terrorist financing on TRON warrant thorough investigation. While there is substantial research on analyzing stablecoin influence on cryptocurrency markets [17], DeFi security [18], illicit activities detection [19-22] and de-anonymization [23, 24] on major blockchains like Bitcoin and Ethereum, the heterogeneity of data between blockchains means that these detection algorithms may not be directly applicable to TRON. Research aimed at tracing the origin of funds, identifying, and blocking illegal transactions, which is crucial for the compliance of TRON applications and the TRON blockchain itself. 6 Conclusion In summary, this work presents a comprehensive framework for extracting and exploring on-chain data from the TRON blockchain ecosystem. We developed an innovative ETL process tailored specifically to handle TRON's unique data structures and a toolkit for data exploration, we overcame significant technical challenges to acquire large-scale datasets that span all aspects of the TRON blockchain, including blocks, all types of external transaction details, event logs, and internal transactions. Our in-depth exploration and analysis of the TRON datasets have unveiled novel insights into the blockchain ecosystem. The ecosystem demonstrates reasonable decentralization in block production, indicating a distributed network of validators. However, it is heavily centered around specific types of applications and services. Gambling applications form a significant portion of the ecosystem's activity, highlighting the popularity of blockchain-based gaming and betting platforms. The USDT stablecoin plays a crucial role in facilitating transactions and providing a stable medium of exchange within the network. Additionally, centralized exchanges maintain a strong presence, serving as important gateways between the TRON ecosystem and the broader cryptocurrency market. Our analysis also characterized the resource delegation markets within TRON, shedding light on how computational resources are allocated and traded among network participants. Furthermore, we examined patterns in smart contract usage, providing insights into the types of decentralized applications being built and utilized on the TRON blockchain. Looking ahead, this foundational dataset establishes a basis for crucial future research into the TRON blockchain. A key area for investigation is the adoption of delegate services, which could reveal important trends in how users interact with and participate in the network's governance. Conducting thorough audits of prominent gambling applications is another priority, as it would enhance our understanding of their operations, fairness, and potential risks. Stablecoin activity on TRON, particularly involving USDT, warrants in-depth examination. This research could uncover patterns of usage and potentially identify any illicit activities associated with stablecoin transactions. In parallel, developing and refining methods to detect and prevent money laundering within the TRON ecosystem is crucial for maintaining the integrity 18 of the network and complying with regulatory standards. The heterogeneous nature of TRON's data opens up exciting possibilities for cross-blockchain research. Exploring transfer learning techniques could allow insights gained from TRON to be applied to other blockchain ecosystems, potentially accelerating research and development across the field. Additionally, developing methods to adapt analysis techniques for use across different blockchain platforms would greatly enhance our ability to conduct comparative studies and identify broader trends in the blockchain space. By pursuing these research directions, we can deepen the understanding of the TRON ecosystem and contribute valuable insights to the broader field of blockchain analysis. This work has the potential to improve the security, efficiency, and overall functionality of blockchain networks, ultimately driving innovation and adoption in the decentralized technology sector. Data Availability Statement The code used for this research is available in the GitHub repositories mentioned earlier in the footnotes. The datasets generated and analyzed are too large to be hosted on GitHub. Therefore, they are available at https://web3resear.ch/datasets. References [1] Yadav, J.S., Yadav, N.S., Sharma, A.K.: A qualitative and quantitative parametric estimation of the ethereum and tron blockchain networks. In: 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions)(ICRITO), pp. 1-5 (2021). IEEE [2] Li, H., Li, Z., Tian, N.: Resource bottleneck analysis of the blockchain based on tron's tps. In: Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery: Volume 2, pp. 944-950 (2020). Springer [3] Li, C., Palanisamy, B., Xu, R., Duan, L., Liu, J., Wang, W.: How hard is takeover in dpos blockchains? understanding the security of coin-based voting governance. In: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pp. 150-164 (2023) [4] Li, D., Han, D., Weng, T.-H., Zheng, Z., Li, H., Li, K.-C.: On stablecoin: Ecosystem, architecture, mechanism and applicability as payment method. Computer Standards & Interfaces 87, 103747 (2024) [5] Shukla, A., Das, T.K., Roy, S.S.: Trx cryptocurrency profit and transaction success rate prediction using whale optimization-based ensemble learning framework. Mathematics 11(11), 2415 (2023) [6] Maghsoodi, A.I.: Cryptocurrency portfolio allocation using a novel hybrid and predictive big data decision support system. Omega 115, 102787 (2023) 19 [7] Zheng, W., Zheng, Z., Dai, H.-N., Chen, X., Zheng, P.: Xblock-eos: Extracting and exploring blockchain data from eosio. Information Processing & Management 58(3), 102477 (2021) [8] Zheng, P., Zheng, Z., Wu, J., Dai, H.-N.: Xblock-eth: Extracting and exploring blockchain data from ethereum. IEEE Open Journal of the Computer Society 1, 95-106 (2020) [9] Goldsmith, D., Grauer, K., Shmalo, Y.: Analyzing hack subnetworks in the bitcoin transaction graph. Applied Network Science 5(1), 22 (2020) [10] Tsuchiya, Y., Hiramoto, N.: How cryptocurrency is laundered: Case study of coincheck hacking incident. Forensic Science International: Reports 4, 100241 (2021) [11] Aspris, A., Foley, S., Svec, J., Wang, L.: Decentralized exchanges: The "wild west" of cryptocurrency trading. International Review of Financial Analysis 77, 101845 (2021) [12] Ranshous, S., Joslyn, C.A., Kreyling, S., Nowak, K., Samatova, N.F., West, C.L., Winters, S.: Exchange pattern mining in the bitcoin transaction directed hypergraph. In: Financial Cryptography and Data Security: FC 2017 International Workshops, WAHC, BITCOIN, VOTING, WTSC, and TA, Sliema, Malta, April 7, 2017, Revised Selected Papers 21, pp. 248-263 (2017). Springer [13] Scholten, O.J.: On the behavioural profiling of gamblers using cryptocurrency transaction data. PhD thesis, (2022) [14] Scholten, O.J., Zendle, D., Walker, J.A.: Inside the decentralised casino: A longitudinal study of actual cryptocurrency gambling transactions. PloS one 15(10), 0240693 (2020) [15] Brown, S.H.V.: Gambling on the blockchain: How the unlawful internet gambling enforcement act has opened the door for offshore crypto casinos. Vand. J. Ent. & Tech. L. 24, 535 (2021) [16] Meng, J., Fu, F.: Understanding gambling behaviour and risk attitudes using cryptocurrency-based casino blockchain data. Royal Society open science 7(10), 201446 (2020) [17] Ante, L., Fiedler, I., Strehle, E.: The influence of stablecoin issuances on cryptocurrency markets. Finance Research Letters 41, 101867 (2021) [18] Li, W., Bu, J., Li, X., Peng, H., Niu, Y., Zhang, Y.: A survey of defi security: Challenges and opportunities. Journal of King Saud University-Computer and Information Sciences 34(10), 10378-10404 (2022) 20 [19] Ibrahim, R.F., Elian, A.M., Ababneh, M.: Illicit account detection in the ethereum blockchain using machine learning. In: 2021 International Conference on Information Technology (ICIT), pp. 488-493 (2021). IEEE [20] Liu, J., Zheng, J., Wu, J., Zheng, Z.: Fa-gnn: Filter and augment graph neural networks for account classification in ethereum. IEEE Transactions on Network Science and Engineering 9(4), 2579-2588 (2022) [21] Wu, Z., Liu, J., Wu, J., Zheng, Z., Luo, X., Chen, T.: Know your transactions: Real-time and generic transaction semantic representation on blockchain & web3 ecosystem. In: Proceedings of the ACM Web Conference 2023, pp. 1918-1927 (2023) [22] Wu, J., Yuan, Q., Lin, D., You, W., Chen, W., Chen, C., Zheng, Z.: Who are the phishers? phishing scam detection on ethereum via network embedding. IEEE Transactions on Systems, Man, and Cybernetics: Systems 52(2), 1156-1166 (2020) [23] Huang, T., Lin, D., Wu, J.: Ethereum account classification based on graph convolutional network. IEEE Transactions on Circuits and Systems II: Express Briefs 69(5), 2528-2532 (2022) [24] Wu, J., Liu, J., Zhao, Y., Zheng, Z.: Analysis of cryptocurrency transactions from a network perspective: An overview. Journal of Network and Computer Applications 190, 103139 (2021) 21
|
2509.16287
|
Architectural change in neural networks using fuzzy vertex pooling
Shanookha Ali∗1, Nitha Niralda†2 and Sunil Mathew‡3
1Department of General Science, Birla Institute of Technology and Science Pilani,
International Academic City, P.O. Box: 345055, Dubai
2Department of Mathematics, Providence Women’s College, Calicut, India
3Department of Mathematics, National Institute of Technology, Calicut, India
Abstract
The process of pooling vertices involves the creation of a new vertex, which becomes adjacent to
all the vertices that were originally adjacent to the endpoints of the vertices being pooled. After this,
the endpoints of these vertices and all edges connected to them are removed.
In this document, we
introduce a formal framework for the concept of fuzzy vertex pooling (FVP) and provide an overview of
its key properties with its applications to neural networks. The pooling model demonstrates remarkable
efficiency in minimizing loss rapidly while maintaining competitive accuracy, even with fewer hidden layer
neurons. However, this advantage diminishes over extended training periods or with larger datasets,
where the model’s performance tends to degrade. This study highlights the limitations of pooling in
later stages of deep learning training, rendering it less effective for prolonged or large-scale applications.
Consequently, pooling is recommended as a strategy for early-stage training in advanced deep learning
models to leverage its initial efficiency.
fuzzy vertex pooling, f-graph pooling, f-cycle pooling, neural network, fuzzy neural network.
1
Introduction
In recent years, applying deep learning to f-graphs is a fast-growing field. The application of Fuzzy Convo-
lutional Neural Networks (FCNNs) to the sparsely linked data that f-graphs depict serves as the basis for
many of these research. While numerous Fuzzy Graph Convolutional Networks (FGCNs) have been pre-
sented, there haven’t been many pooling layer proposals. However, intelligent pooling on fuzzy networks has
a lot of potential, by lowering the number of vertices, it may be able to both discover clusters and minimise
computing requirements. These two things hold the promise of transforming flat vertices into hierarchical
sets of vertices. They are also a first step toward FGNNs being able to alter graph topologies rather than
just vertex properties [13].
In 1965, Zadeh [23] introduced the paradigm-shifting idea of fuzzy logic, which was crucial in influencing how
many mathematical and engineering ideas are now seen. Rosenfeld used this logic in 1975 [18] to redefine a
number of graph theory terms. These days, fuzzy graph theory is a popular topic in mathematics. Mathew
and Sunitha investigated fuzzy vertex connectivity and fuzzy edge connectivity [10] in 2010. In 2018, average
fuzzy vertex connection was proposed by Shanookha et al after further research on fuzzy vertex connectivity
and fuzzy containers [2, 4] later they formulated concepts of hamiltonian fuzzy graphs and applied them in
analysis of Human trafficking [3] in 2021.
∗shanookhaali@gmail.com
†nithaniraldapc@gmail.com
‡sm@nitc.ac.in
1
arXiv:2509.16287v1 [cs.LG] 19 Sep 2025
Figure 1: Vertex pooling in a f-graph. The graph to the right is produced by pooling the original f-graph
twice.
It makes no difference in vertex pooling if two vertices are joined by an edge; if they are, the edge will
vanish when they are pooled. The idea of pooling the vertices was adopted by Pemmaraju and Skiena [21].
Two vertices p and q are connected in an undirected f-graph if a path can be found between them. Every
pair of vertices must be connected for an f-graph to be considered connected. Partitioning an undirected
graph into its most connected subgraphs is the goal of the f-graph connectivity problem. The term “f-graph
pooling”refers to this method. In addition to connection issues, it is a rather straightforward technique that
may be used to solve f- cycle and f- tree issues as well. We assume the f-graph is undirected.
Fuzzy sets can be used to describe various aspects of Neural Computing. That is, fuzziness may be introduced
at the input output signals, synaptic weights, aggregation operation and activation function of individual
neurons to make it fuzzy neuron. Different aggregation operations and activation functions result in fuzzy
neurons with different properties. Thus there are many possibilities for fuzzification of an artificial neuron.
So we may find a variety of fuzzy neurons in the literature [7, 22, 15, 17]. Sameena and Sunitha [19], studeied
the fuzzy neural network architecture is isomorphic to the fuzzy graph model and the output of a fuzzy neural
network with OR fuzzy neuron is equal to the strength of strongest path between the input layer and the
out put layer. Murphy et. al [13] generalizes graph neural networks (GNNs) beyond those based on the
Weisfeiler-Lehman (WL) algorithm, graph Laplacians, and diffusions.
The work by Ramya and Lavanya [16] delves into the specific operations of pooling and domination in
fuzzy graphs. Pooling involves merging vertices based on their memberships, while domination deals with
identifying dominating vertices.
These operations are fundamental in simplifying graph structures and
could have implications for optimizing information flow and representation learning in GNNs. Qasim et al
[14] introduced distance-weighted graph networks for learning representations of irregular particle detector
geometry. While not directly related to fuzzy graphs, their approach highlights the importance of weighted
connections in graph-based learning. This is particularly relevant when considering how fuzzy graph pooling
can influence the weighting of edges and vertices in GNNs.
In [1], Mutab provides a mathematical exploration of fuzzy graphs, discussing their properties and appli-
cations. The paper offers insights into how fuzzy graphs can model imprecise relationships, an aspect that
2
aligns with the core principle of fuzzy memberships. Understanding the mathematical underpinnings is vital
for integrating fuzzy graph pooling into GNN architectures. This is because pooling reduce the complexity
of hidden layers and eliminate weak edges within them. Though this changes the structure of the network
midway while training, creating higher variance in the output resulting in a higher loss overall. However,
with the existing algorithms of forward and backprogpagation, this loss would be quickly minimized. So
far, a form resembling fuzzy graph pooling has been used in convolutional and pooling networks, but not
on simple fuzzy graph networks. There is no conclusive study conducted on simple fuzzy graph networks to
test this hypothesis.
The paper introduces vertex pooling in fuzzy graphs, a novel pooling method for Graph Neural Networks
(GNNs) based on edge pooling. Unlike traditional pooling methods, FVP focuses on merging vertices via
edges, preserving the graph structure while avoiding vertex loss. It is highly efficient, operating on sparse
graph representations with run-time and memory scaling linearly with the number of edges. The approach
is flexible and seamlessly integrates into various GNN architectures, including GCN, GraphSAGE, and
GIN. EdgePool introduces a new way to handle edge features, enhancing its applicability in diverse graph-
based tasks. It also supports unpooling, enabling graph reconstruction and effective application to vertex
classification tasks. Experiments show consistent performance improvements when FVP is used, with notable
gains in scalability for larger graphs. The method is particularly suitable for localized changes that avoid
computational overhead for the entire graph. In general, FVP is a significant advancement in fuzzy graph
pooling, offering a practical, scalable, and robust solution for modern GNN applications. In addition, in this
paper we show that the FGP problem is NP-hard depends on the characterization of pooling method.
2
Fundamental concepts
Most of the definitions and preliminary results used in this paper can be seen in [5, 6, 11, 9, 12]. Let ζ be a
set. The triplet G = (ζ, σ, µ) is called a f-graph if each element ζ ∈ζ and pq ∈ζ ×ζ are assigned non-negative
real numbers σ(v) and µ(pq), where σ : ζ →[0, 1] and µ : ζ × ζ →[0, 1], called the membership values of v
and e respectively, such that for all p, q ∈ζ, µ(pq) ≤σ(p) ∧σ(q), where ∧denote the minimum. We also
denote G = (σ, µ) to represent a f-graph. All f-graphs considered in this article are simple and connected.
σ∗and µ∗respectively denote the vertex set and edge set of the f-graph G.
A f-graph H = (τ, ν) is called a partial f-subgraph of G = (σ, µ) if τ(v) ≤σ(v) for all v ∈τ ∗and ν(uv) ≤µ(uv)
for all uv ∈ν∗. If H = (τ, ν) is a partial f-subgraph of G = (σ, µ) such that τ(v) = σ(v) for all v ∈τ ∗and
ν(uv) = µ(uv) for all uv ∈ν∗, then H is called a f-subgraph of G. G −pq (G −v) is a f-subgraph of G
obtained by deleting the edge pq ( or vertex v) from G. Note that a f-subgraph H = (τ, ν) spans the f-graph
G = (σ, µ) if τ = σ. A path P in a f-graph G = (σ, µ) is a sequence of distinct vertices p0, p1, · · · , pn such
that µ(pi−1pi) > 0, 1 ≤i ≤n and all the vertices are distinct except possibly the first and last. The strength
of a path, s(P) is the membership value of the weakest edge of the path and it always lies between 0 and 1. If
s(P) = CONNG(p, q), then P is called a strongest p−q path. G is said to be connected if CONNG(p, q) > 0
for each p, q ∈σ∗. An edge pq of a f-graph G is called α-strong if µ(pq) > CONNG−pq(p, q). pq ∈µ∗is called
β-strong if µ(pq) = CONNG−pq(p, q) and a δ-edge if µ(pq) < CONNG−pq(p, q). Thus an edge is strong if
µ(pq) ≥CONNG−pq(p, q). A path P is called a strong path if all of its edges are strong. If the removal of
an edge ( or a vertex) reduces the strength of connectedness between some pair of vertices in G, then the
edge ( vertex ) is called a fuzzy bridge ( or fuzzy cutvertex ) of G.
A f-graph without fuzzy cutvertices is called a f- block / block. A connected f-graph G = (σ, µ) is called a
f- tree if it has a spanning f-subgraph F = (σ, ν) which is a tree, where for all pq not in ν∗, there exists
a path from p to q in F, whose strength is more than µ(pq). The degree of a vertex p ∈σ∗is defined as
d(p) = P
q̸=p µ(qp). The minimum degree of G is δ(G) = ∧{d(p) : p ∈σ∗} and the maximum degree of G
is ∆(G) = ∨{d(p) = p ∈σ∗}. The strong degree of a vertex p ∈σ∗is defined as the sum of membership
values of all strong edges incident at p. It is denoted by ds(p, G). Also if Ns(p, G) denote the set of all strong
neighbors of p, then ds(p, G) = P
q∈Ns(p,G) µ(qp).
3
Here are few notations that is carried out in this paper:
dsG(p)
−
Strong degree of vertex p in G
NsG(p)
−
Strong neighbor set of vertex p in G
The pooling of two vertices in a graph, pi and pj, results in a graph where the two original vertices, pi and
pj, are replaced by a single vertex p, which is next to the union of the original neighboring vertices. If pi
and pj are joined by an edge during vertex pooling, it makes no difference because the edge will vanish after
pi and pj are pooled.
3
Fuzzy vertex pooling in f-graphs
It is easy to intuitively understand the meaning of fuzzy vertex pooling (FGP). In Figure 2, we can see an
example when we pool adjacent vertices g and j that belongs to a f- cycle g −j −k −g of strength 0.6, and
after vertex pooling the result is an edge ( cycle is replaced by a single edge), since the vertices to be pooled
might have common neighbors as in Figure 2. This ways of pooling vertices in a f-graph G are denoted as
G/gj. The formal definition of fuzzy vertex pooling (FGP) in a f-graphs is as follows.
Definition 1. Let G : (σ, µ) be a f-graph with p, q ∈σ∗. Identify p with q as a new vertex vc , we call this
fuzzyfied operation pooling of vertices and the new graph is denoted by G/pq : (σc, µc). Formally, G/pq is a
f-graph such that
i σ∗
c = {σ∗∪vc} \ {p, q}
ii µ∗
c = {µ∗\ {{up, vq : u ∈N(p)&v ∈N(q)} ∪{wvc : w ∈(N(p) \ {q}) ∩(N(q) \ {p})} with
σc(v) =
(
σ(v)
: v ∈σ∗\ {p, q}
σ(p) ∧σ(q)
: v = vc
and
µc(uv) =
µ(uv)
: u, v ̸∈N(p) ∪N(q)
µ(uvc)
: u ̸∈N(q)
µ(vvc)
: v ̸∈N(p)
µ(up) ∧µ(uq)
: u ∈N(p) ∩N(q)
Example 1. Let G : (σ, µ) be a f-graph with σ∗= {g, h, i, j, k} in Figure 2. Let {k, h} be the vertices
to pool. Note that the remaining vertices, {g, j, i} ∈σ∗are left on their own left on their own. We call
new vertex as vc1. G/kh : (σ1, µ1) is the vertex pooled f-subgraph of G with σ∗
1 = {g, i, j, vc1} and µ∗
2 =
{gj, ji, jvc1, gvc1, vc1i}. Here
µ(jvc1)
=
∧{µ(jh), µ(jk)}
µ(gvc1)
=
∧{µ(gh), µ(gk)}
µ(ivc1)
=
µ(hi)
Let {g, j} be the vertices to pool.
Note that the remaining vertices, {k, h, i} ∈σ∗are left on their own
left on their own. We call new vertex as vc2. G/gj : (σ2, µ2) is the vertex pooled f- subgraph of G with
σ∗
2 = {k, i, h, vc2} and µ∗
2 = {kvc2, hvc2, vc2i}, hi. Here
µ(kvc2)
=
∧{µ(jk), µ(gk)}
µ(hvc2)
=
∧{µ(gh), µ(hj)}
µ(ivc1)
=
µ(ji)
4
Figure 2: Original f-graph G; resulting f-graph G/kh after fuzzy vertex pooling edge k and h; resulting f-
graph G/gj after fuzzy vertex pooling edge gj
Proposition 1. Fuzzy vertex pooling in a f-graph is commutative.
Hence, it is sensible to define the FGP of a set of vertices. Hereafter, we use the following shorthand:
Definition 2. Given a f-graph G : (σ, µ) and a path P = {v1 −v2 −v3 −· · · −vp}, we define:
G/P = G/v1v2/v2v3/ · · · /vp−1vp
by naming vi+1 as the new vertex obtained by pooling vi and vi+1.
The above concept can be generalized for any set of vertices.
The vertices to be pooled may change as a result of previous pooling when a collection of vertices is being
pooled. The following lemma makes a claim about the vertex that is pooled last if a f-cycle is formed in the
f-graph by a group of vertex pooling.
Theorem 1. Let C = c1 −c2 −· · · −ck −c1 be a f-cycle of length k ≥4 with strength ϕ and let E be the set
of weakest edges in C, w.l.o.g. E = {(e1 = c1c2), · · · , (er = crcr+1)}, i.e. µ(ei) = ϕ. Then,
i Pool the other vertex combinations first until we get a f- cycle C′ of edges {e1, e2, · · · er, er+1}, where
µ(er+1) ≥ϕ.
ii Pool vertices of C′ to obtain a single vertex, say cc of σ(cc) = ϕ .
Proof. We prove this by induction on k. For k = 4, we have C = c1 −c2 −c3 −c4 −c1. Without loss of
generality, E = {(e1 = c1c2), (e2 = c2c3)}, µ(e1) = µ(e2) = ϕ. Let e3 = c3c4 and e4 = c4c1 be the other
vertices with µ(e3), µ(e4) > ϕ.
Let C1 = C/c3c4 is a cycle with length 3 and we name new vertex as c3 and new edge c3c1 with µ(c3c1) =
∧{µ(c2c3), µ(c4c1)} ≥ϕ. Let C2 = C/c3c1 is an edge and we name new vertex as c1 and new edge c2c1 with
µ(c2c1) = ∧{µ(c1c3), µ(c2c3)} = ϕ. Pooling this edge c1c2 is a single vertex C3 = C/c1c2 with membership
value ϕ.
Hence, we assume the lemma holds for f- cycles of length k, and we will show it also holds for f- cycles of
length k + 1. Let be given a f- cycle C- c1 −c2 −· · · −ck −ck+1 −c1 of length k + 1. C/ckck+1, is a f- cycle
obtained by Pooling ckck+1 introduces a new vertex a ck and edge ckc1. Hence, this results in a f- cycle
C/ckck+1 of length k, for which we know that the lemma holds ( Figure 3 is an illustration).
5
Figure 3: FGP of a f- cycle C with 4 vertices to C/gh, then to C/gh/kj to C/gh/kj/gj to a vertex j
Theorem 2. Let G : (σ, µ) be given a f-graph and a set of edges µ1 = {f1, · · · , fq} and a set of edges µ2 =
{g1, · · · , gp}, µ1 ∩µ2 = ∅. Let µ2 form a f-cycle g1 −· · · −gp with p ≥3. Let be µ3 = {f1, · · · , fq, g1, · · · , gp}
and µ3 = {g1, · · · , gp, f1, · · · , fq}, then
G/µ3 isomorphic to G/µ4.
Proof. Let, fi = vivi+1 and gi = uiui+1. From Proposition 3, we know FGP are commutative. In light of
this, we opt to pool the vertices in the following order: v1, v2, · · · vq, vq+1, u1, u2, · · · , up, up+1. Then we have
by definition
G/µ3
=
G/v1v2/v2v3/ · · · /vqvq+1/vq+1u1u1u2/ · · · /upup+1
=
G/v1v2/v2v3/ · · · /vqvq+1/vcu1u2/ · · · /upup+1
(vc is the new vertex after pooling vq+1u1 )
=
G/v1v2/v2v3/ · · · /vqvq+1/u1vq+1u1u2/ · · · /upup+1
(by Proposition 3 and continue the above steps)
≃
G/u1u2/ · · · /upup+1/up+1v1/v1v2/v2v3/···/vqvq+1
≃
G/µ4
Definition 3. A FGP set ζ′ in a f-graph G : (σ, µ) is a set of vertices v′ ⊆σ∗, such that spanning fuzzy
forest induced by ζ′. A FGP C of G is a f- graph such that there exists a FGP set ζ′ with C = G/ζ′.
After the previous observations, we develop another view on fuzzy vertex pooling a set ζ′ of vertices. The
graph G/ζ′ is composed of connected components Q1, Q2, · · · Qk.
Theorem 3. Let be given a f-graph G : (σ, µ), v ∈σ∗and e ∈µ∗. Then,
dsG(v) ≥dsG/e(v)
Proof. We prove this by considering distinct cases on e = pq.
Case I: v is not an end vertex of e and v ∈N[p] ∨N[q].
dsG(v)
=
X
w∈Ns(v)
µ(wv)
=
X
w∈NsG(v)−{p,q}
µ(wv) + µ(pv) + µ(qv)
≥
X
w∈NsG/e(v)}
µ(wv)
≥
dsG/e(v)
Case II: v is not an end vertex of e and v ̸∈N[p] ∨N[q]. Then, clearly dsG(v) = dsG/e(v)
6
Figure 4: FGP of a complete f-graph
Case III: v is an end vertex of e. Say v = q
dsG(v)
=
X
w∈Ns(v)
µ(wv)
=
X
w∈Ns(v)−{p}
µ(wv) + µ(pq)
≥
X
w∈NsG/e(v)
µ(wv)
≥
dsG/e(v)
Now, we will prove that FGP in CFG is an isomorphism and state two theorems of FGP in f-tree and
f-cycle. Later we define three types of FGP.
Theorem 4. Let be given a G : (σ, µ) with |σ ∗| = n, G/pq : σc, µc) is CFG for e = pq ∈µ∗
Proof. Let σ∗= {p1, p2, · · · , pn} be the n vertices with, σ(p1) ≤σ(p2) ≤· · · ≤σ(pn). Choose any edge pjpk.
Then G/pjpk is a f-graph with n −1 vertices. Name new vertex as p′ with σ(p′) = σ(pj) ∧σ(pk).
µ(pip′) =
σ(p1)
if i < min{j, k}
∧(σ(pj), σ(pk))
else
Here that G/pjpk is a CFG ( Figure 4 is an illustration).
Theorem 5. Let be given a CFG G : (σ, µ) and weakest edges pq, pz ∈µ∗. Then G/pq ≈G/pz.
Proof. Let G/pq = (σ1, µ1) and G/pz = (σ2, µ2).
Looking at definition 2 and Theorem 4, we see that
|σ∗
1| = |σ∗
2|. And G/pq and G/pz is a complete f-graph with |σ∗−1| vertices. Therefore, G/pq is isomorphic
to G/pz .
In a f- tree G : (σ, µ) with |σ∗| = n, G/pq need not be a f- tree if pq is a δ edge. G/pq is a f- tree if pq is an
α edge. In a f- cycle G : (σ, µ) with |σ ∗| = n. Let e = pq ∈µ∗is an α edge. FGP of G satisfies the following
7
a
b
c
g
d
h
e
i f
a
b
c
g
d
h
e
i f
Figure 5: Fuzzy vertex pooling with Criteria
relationship: if p or q is a cutvertex then G/pq is a f- cycle else G/pq is a f- tree. Thus, how to implement a
pooling remains a conundrum. Next, we introduce three kinds of FGP. F- cycle pooling (FCP): Identifying
f- cycle in a f-graph and pooling f- cycle into a single vertex. F- block pooling (FBP): Identifying f- block
in a f-graph and pooling f- block into a single vertex and CFG pooling (CFGP): Identifying complete fuzzy
sub graph in a f-graph and pooling complete f- subgraph into a single vertex.
Π, a criteria for FGP if, for any fuzzy graph G : (σ, µ), all vertex pooling or subgraph pooling of G satisfy
Π. Π is nontrivial on connected fuzzy graphs if it is true for connected fuzzy graphs and false for many
connected f-graphs. The criteria Π is determined by the f-graph if, for any f-graph G, G satisfies Π if and
only if its fuzzy subgraphs satisfies Π. Π is determined by all of its components if, for any f - graph G, G
satisfies Π if and only if all connetced components of G satisfy Π (Figure 5).
• Criteria Π1 must be nontrivial on connected f- graphs. A f-graph would be connected if for any pair
of vertices, there exists a path where the membership value of edges on the path is above a certain
threshold.
• Criteria Π2 must be hereditary under FGP. This means that if a f-graph satisfies the property, then
after pooling edges (with fuzzy memberships considered), the resulting f-graph should still satisfy the
criteria.
• Criteria Π3 is determined by the undirected f-graphs, where edges and vertices are described using
fuzzy memberships, and the property depends on these fuzzy relations.
• Criteria Π4 is determined by fuzzy vertex connectivity, where the vertices that meet a specific connec-
tivity threshold.
Since almost all f- subgraph graph properties are determined by the f-graph, treats the f-subgraph pooling
problem for such a property. Thus, we assume that when we pool vertices and edges, that is, not a directed f-
graph. In this section, we show that the pooling problem is NP-hard if Π satisfies: is nontrivial on connected
graphs; Π is hereditary on pooling, Π is determined undirected f- graph, Π is determined by the strongest
paths. Note that if a criteria Π satisfies above conditions, then a complete f- graph satisfies π.
Theorem 6. The FGP problem is NP-hard for the criteria Π.
Proof. Let G : (σ, µ) be an instance of f-graph that to be pooled under Π, an undirected graph with |σ ∗|
vertices. Let G1 : (σ1, µ1) be the f-graph obtained from G by FGP with criteria Π (edges and vertices
are pooled using definition 2). Let µ1∗be the set of edges formed. Then G1 has a vertex set σ1∗where of
|σ1∗| = |σ∗|−k, where k is the number of vertices in f- subgraph follows Φ. Let H be a connected f-subgraph
with the minimum number of vertices that not satisfying criteria Π. Let pq be an edge of H. The resulting
graph has properties consistent with π, and the transformation can be done in polynomial time.
8
Pooled
Before Pooling
After Pooling
Figure 6: Fuzzy vertex pooling model
4
Fuzzy graph informed neural networks
The neural network designed for this study consists of an input layer of 2 neurons, two hidden layers of 8
neurons each and the output layer has 2 neurons, Figure 6. The goal of this network is to determine if the
two input values belong to class 0 (their sum is less than 1) or class 1 (their sum is greater than 1) and
generate a decision boundary. Inputs 1 and 2 are floating point (decimal) numbers less than 1. Output 1
and 2 represent classes 0 and 1 respectively. The neuron in the next layer is activated using the sigmoid
function on the weighted sum of all the neurons in the previous layer ( σ(x) = 1/(1 + ex)).
4.1
Methods and Sources of Data Collection
The study will comprise of two tests. The dataset of the first set is relatively simple. It comprises of 100
ordered pairs of floating point numbers. Half of which add up to less than 1 and the other half, greater than
1. These numbers are randomly generated and their classes were determined.
4.2
Fuzzy Graph Model
The models were coded using numpy library in python. Numpy stands for numeric python and is useful
when working with arrays and matrices. Additional libraries such as matplot lib was also used to aid in
graphing the figures. The models use traditional deep learning algorithms that activate the neurons in the
next layer (forward propagation) and then backtrack to minimize any loss or errors (backpropagation). The
fuzzy neural networks, both baseline and pooling models, gave best results when the learning rate of the
network was set to 0.025. The learning rate determines the step size that the backpropagation algorithm
must take to minimize loss. If the step size is too small, the model will take too long to tweak the weights
and biases of the model. If the step size is too big, the backpropagation algorithm would overshoot in its
correction and this would exponentially add up to giving high loss. They were trained with 50,000 epochs,
and in all cases, the pooling model was able to minimize the loss much quicker at the cost of being slightly
inaccurate in the testing case for Dataset 2 (1000 floating point numbers, few points are given in Table 4.1).
pooling are applied in the pooling model every 10,000 epochs. When a pooling is applied, 2 neurons merge
based on a threshold value set to eliminate weak relations. [8] A pooling changes the internal structure
of the hidden layers, which causes the model to temporarily behave unpredictably. The backpropagation
algorithm of a neural network tries to minimize the loss (defined as a normalized mean square error of the
9
data - [
# Class 0 (Sum < 1)
[0.1, 0.4]
[0.2, 0.3]
[0.1, 0.5]
[0.3, 0.2]
[0.4, 0.1]
[0.3, 0.3]
[0.2, 0.4]
[0.1, 0.6]
[0.2, 0.5]
[0.4, 0.3]
[0.3, 0.2]
[0.2, 0.7]
[0.3, 0.1]
[0.3, 0.3]
[0.2, 0.6]
[0.2, 0.5]
[0.7, 0.4]
[0.8, 0.3]
[0.9, 0.2]
[0.6, 0.4]
[0.5, 0.6]
[0.7, 0.3]
[0.8, 0.5]
[0.7, 0.4]
[0.9, 0.3]
[0.8, 0.6]
[0.7, 0.5]
[0.6, 0.5]
[0.7, 0.4]
[0.9, 0.5]
[0.8, 0.6]
[0.9, 0.4]
# Class 1 (Sum > 1)
[0.6, 0.5]
[0.7, 0.4]
[0.8, 0.3]
[0.9, 0.2]
[0.6, 0.4]
[0.7, 0.3]
[0.8, 0.2]
[0.9, 0.1]
[0.6, 0.6]
[0.7, 0.5]
[0.8, 0.4]
[0.9, 0.3]
[0.8, 0.7]
[0.9, 0.6]
[0.9, 0.7]
[0.7, 0.8]
[0.7, 0.9]
[0.8, 0.8]
[0.9, 0.8]
[0.9, 0.9]
[0.9, 0.95]
[0.6, 0.95]
[0.7, 0.85]
[0.8, 0.85]
[0.7, 0.65]
[0.6, 0.75]
[0.8, 0.75]
[0.9, 0.75]
[0.6, 0.85]
[0.7, 0.85]
[0.9, 0.65]
[0.6, 0.65]
[0.7, 0.6]
[0.8, 0.55]
Table 1: Data points table
Figure 7: Dataset 1 with 100 floating point ordered pairs and their simple xy scatter plot
model) after every training iteration (known as an epoch). Loss is therefore minimized over time. When
both networks were training over Dataset 1 (Table 4.1), it is observed from Figures 9 and 10 that the pooling
model minimizes the loss much quicker than the baseline model. Scatter plot of the Dataset 1 and 2 is given
in Figure 7 and 8.
10
[0.2510190737913408, 0.6259424942673553]
[0.218839224, 0.405943179971498]
[0.2239640999731, 0.36300336]
[0.7742101924262627, 0.08885469930344578]
[0.120308721, 0.7187321846929098]
[0.27126308384037245, 0.79574132]
[0.4456921923751095, 0.1493200825774095]
[0.234270022, 0.5800658340241851]
[0.024727105161211158, 0.101667]
[0.08146331996385, 0.7131972427875436]
[0.34589612, 0.943313609963865]
[0.1051672309900156, 0.00928737]
[0.43637059037982, 0.55977085410055]
[0.957396348, 0.35251081826426]
[0.34332206177360427, 0.5430516]
[0.201810022611168, 0.03063861833530569]
[0.46053307, 0.10169015185019025]
[0.1694911077118097, 0.36330927]
[0.55022089924165673, 0.4261096553359686]
[0.3253727, 0.49262867111419906]
[0.29022898945996186, 0.643084]
[0.5802113825489248, 0.01458276552724822]
[0.015650422, 0.5117441041653529]
[0.13984017716325267, 0.808742209]
[0.81144393973905, 0.09847477773029108]
[0.4378103, 0.80081139194148]
[0.01281428346066545, 0.6418489]
[0.41026384501725, 0.7622195971674454]
[0.1041839, 0.72102484993907]
[0.45092763427115, 0.3843202]
Table 2: Data points table
Figure 8: Dataset 2 with 1000 floating point ordered pairs and their xy scatter plot
When both networks were training over Dataset 2, it is observed from Figures 13 and 14 that the pooling
model again is much quicker at minimizing loss. It is also important to note that as pooling are applied
every 10,000 epochs, there are spikes in the pooling model’s graph. This is because a pooling changes the
internal structure of the hidden layers, which results in errors surfacing temporarily. However, this too is
quickly minimized.
A decision boundary is the model’s understanding of where the classes tend to separate in the real valued
vector space of input values. This is commonly used with support vector machines where multi-dimensional
inputs are to be classified with decision boundaries. A confusion matrix showcases how well a model performs
by checking how many predicted values were actually correct. The matrix’s leading diagonal represent how
many predicted true’s and false’s were actually correct. Ideal Confusion matrices have their leading diagonal
11
Figure 9: Plot for Loss over 50,000 Epochs
Figure 10: Plot for Loss over the first 600 Epochs
Figure 11: Decision boundary of baseline mode and contraction model
12
Figure 12: Confusion matrix of baseline model and contraction model
Figure 13: Plot for Loss over 50,000 Epochs
high and their reverse diagonal zero.
When both networks were training from Dataset 1, both models performed almost identically. This is
evident from the fact that figures 11 are nearly identical and all points are classes correctly. Therefore, the
confusion matrices in figures 12 for both models also generate a perfect ideal result, where all predicted
values were in fact correct. When both networks were training from Dataset 2, the baseline model was able
to classify the input data more accurately than the pooling model. The pooling model’s decision boundary
does miss a small portion of points [Figures 15]. The confusion matrices for both models also indicate that
the baseline model was more accurate than the pooling model [Figures 16].
5
FGPNNs Algorithm
The Feature-Guided Pooling Neural Network (FGPNN) algorithm optimizes neural networks by dynamically
merging redundant neurons during training. At specified pooling intervals, the algorithm computes activa-
13
Figure 14: Plot for Loss over the first 600 Epochs
Figure 15: Decision boundary of baseline mode and contraction model
Figure 16: Confusion matrix of baseline model and contraction model
14
tions of neurons and evaluates pairwise cosine similarity. Neuron pairs with similarity exceeding a defined
threshold are merged, reducing over-parameterization and enhancing model efficiency. The merging process
creates a new neuron with averaged weights from the selected pair, effectively streamlining the network
architecture.
This approach decreases model complexity, accelerates inference, and mitigates overfitting
by preventing redundant feature learning. FGPNN is particularly advantageous for resource-constrained
environments, offering a balance between model size and performance.
Algorithm 1 Algorithm: FGPNN with Fuzzy Pooling
Require: Neural Network NN, Threshold τ, Pooling Interval p, Epochs max epochs
Ensure: Optimized Neural Network NN
1: Initialize Neural Network NN
2: for epoch = 1 to max epochs do
3:
Train NN for one epoch using backpropagation
4:
if epoch % p == 0 then
▷Pooling interval reached
5:
Compute activations A = {a1, a2, . . . , aN}
6:
for each pair of neurons (i, j) do
7:
Compute Sij = cosine similarity(ai, aj)
8:
end for
9:
Identify pairs (i, j) where Sij > τ
10:
for each identified pair (i, j) do
11:
Apply fuzzy pooling:
vc ←merge(ai, aj)
12:
Update vertex attributes:
σ(vc) ←σ(ai) ∧σ(aj)
13:
Update edge weights for all u connected to ai or aj:
µ(u, vc) ←µ(u, ai) ∧µ(u, aj)
14:
end for
15:
Update NN structure to reflect pooling
16:
end if
17: end forreturn Optimized Neural Network NN
For a neural network’s hidden layer to achieve structural optimization, its vertices (neurons) must be
dynamically constrained. Let us consider a complete neural network optimization problem of the following
general form: Here, in the illustration,the first layer is input layer, with membership values. This is the
latent solution modeled by the neural network, parameterized by edge membership value µi and biases τ, Πi
represents the vertex membership value (set of neurons) in a hidden layer with particular criteria according
to the system, the dataset used for training, are inputs, and are target outputs. To optimize the vertices, a
pooling operation is applied to reduce weak neurons, constrained by a similarity measure and threshold. Two
types of Vertex Pooling Problems are forward Pooling Problem, and backward pooling problem. Given a
fixed neural network structure and fixed pooling parameters (e.g., threshold τ, pooling interval p), minimize
the loss function L(Πi, µi, σc, µc) while reducing redundant neurons in the hidden layer. Specifically, the
threshold τ and pooling rules are predefined, and the objective is to train the network while dynamically
applying pooling every p epochs. The network minimizes the loss function using backpropagation, and the
pooling operation reduces the neuron count by merging highly similar neurons.
The algorithm operates by computing the cosine similarity between neuron activations at specified pooling
intervals during training. Neuron pairs (p, q) with similarity exceeding a threshold τ are identified for merging.
Instead of directly averaging weights, fuzzy pooling is applied, where neurons are treated as vertices in a
15
fuzzy graph. The merging of neurons corresponds to identifying two vertices in the graph, pooling them into
a new vertex vc. This process is formalized by modifying the vertex attributes σ and edge weights µ through
fuzzy conjunction operations, ensuring that the new vertex inherits the intersecting features of the original
neurons.
Epoch
Number
Dataset 1 (100 Floating Point Numbers)
Dataset 2 (1000 Floating Point Numbers)
Baseline
Model Loss
Contraction
Model Loss
Baseline
Model Loss
Contraction
Model Loss
1000
0.2484262
0.24902839
0.04808859
0.05095089
2000
0.24644552
0.24737148
0.02673441
0.03423685
3000
0.24172867
0.24322081
0.0225607
0.02799122
4000
0.22235309
0.22322397
0.02065131
0.02534179
5000
0.08141801
0.08621347
0.01946583
0.02372554
6000
0.03307482
0.03479826
0.01863874
0.02258807
7000
0.02065111
0.0210532
0.01801961
0.02172597
8000
0.01508629
0.01499302
0.01753345
0.02105261
9000
0.01191593
0.01160779
0.01713826
0.02905477
10000
0.00985495
0.00944181
0.01680855
0.02134974
11000
0.00839914
0.00792686
0.01652781
0.02203853
12000
0.00731084
0.00680104
0.01628488
0.02133787
13000
0.00646336
0.00592841
0.01607185
0.01910963
14000
0.00578289
0.00523131
0.01588298
0.01774443
15000
0.00522343
0.00466176
0.01571395
0.01686013
16000
0.00475474
0.0041883
0.01556151
0.01622786
17000
0.0043561
0.0037892
0.01358965
0.01574608
18000
0.00401277
0.00344893
0.01229743
0.01536185
19000
0.00371397
0.003156
0.01200986
0.01504486
20000
0.00345161
0.00290172
0.01161919
0.01477655
21000
0.00321949
0.00267938
0.01139701
0.05289028
22000
0.00301276
0.0024837
0.01120529
0.032185
23000
0.00282758
0.00231048
0.01155281
0.01360401
24000
0.00266084
0.00215633
0.01119383
0.01967304
25000
0.00251002
0.00201849
0.01252019
0.01630115
26000
0.00237303
0.00189469
0.01281253
0.01511133
27000
0.00224815
0.00178303
0.01078383
0.01501028
28000
0.0021339
0.00168195
0.01504412
0.01467604
29000
0.00202906
0.00159011
0.01034442
0.01443588
30000
0.00193257
0.00150639
0.01032878
0.01423167
40000
0.00127634
0.0009589
0.00961296
0.01490049
50000
0.00092503
0.00068232
0.01582948
0.01198592
Table 3: Comparison of Baseline and Contraction Model Loss for Two Datasets
The pooling step reduces the number of neurons while preserving essential network connectivity and
information.
The updated network, denoted as G/pq, reflects the pooled structure, with connections to
neighboring neurons recalculated based on the fuzzy intersection of shared edges. This adaptive pooling
strategy enhances model interpretability, prevents overfitting by reducing redundancy, and accelerates in-
ference by decreasing the overall parameter count. FGPNN with fuzzy pooling is particularly useful for
resource-constrained environments, where efficiency and accuracy must be balanced.
A predefined value that determines whether two neurons are ”similar enough” to be merged. Higher
results in more aggressive pooling. The number of epochs after which pooling is applied during training
16
Aspect
Details
Baseline Model
Pooling Model
Network
Archi-
tecture
2 input neurons, 2 hidden layers
(8 neurons each), 2 output neu-
rons
Traditional backpropagation
Pooling every 10,000 epochs
Activation Func-
tion
Sigmoid (σ(x) =
1
1+e−x )
Used sigmoid in all layers
Same sigmoid function used
Dataset 1
100 ordered pairs (sum < 1:
Class 0; sum > 1: Class 1)
Performed well, achieved perfect
decision boundaries
Performed similarly, classified all
points correctly
Dataset 2
1000 ordered pairs (more com-
plex data)
More accurate in classification
Slightly less accurate;
missed
some points
Loss
Minimiza-
tion Speed
Loss minimized using backprop-
agation, learning rate 0.025
Slower minimization
Faster minimization with tempo-
rary spikes after pooling
Learning Rate
0.025
Adjusted consistently
Same, with quicker adjustments
after pooling
Epochs
50,000 training iterations
Steady performance throughout
Spikes after pooling events, then
rapid recovery
Decision Bound-
ary
Graphical separation of Class 0
and Class 1 in real-valued vector
space
More precise on Dataset 2
Missed a few points on Dataset 2
Confusion
Ma-
trix
Evaluated accuracy of classifica-
tion
Ideal
leading
diagonal
for
Dataset 1
Near-perfect for Dataset 1; lower
accuracy for Dataset 2
Best Use Case
Scenarios requiring high preci-
sion on complex datasets
Suitable for high-accuracy appli-
cations
Scenarios
prioritizing
faster
training with tolerance for minor
errors
Table 4: Comparison of Baseline and Pooling Neural Network Models
17
(e.g., every 10,000 epochs).Two neurons are merged and are replaced with a new neuron, This reduces the
total number of neurons in the hidden layer.
6
Conclusion
In this research, we have explored the potential of fuzzy graph pooling within the realm of neural networks,
with a particular focus on fuzzy graph networks. Fuzzy graphs, with their ability to represent uncertainty
and partial membership, present a versatile framework that can effectively model complex relationships
in real-world scenarios. The reviewed papers provide a foundation for understanding fuzzy graphs, their
operations, and their potential applications in graph neural networks. The principles of fuzzy graph pooling,
as explored in these studies, offer a structured approach to handling uncertainty and optimizing information
flow within GNN architectures. Future research could focus on implementing and evaluating these concepts
in practical GNN frameworks, with the goal of enhancing their efficiency, interpretability, and robustness in
real-world applications. The pooling model is notable for minimizing loss incredibly quickly and still holds
up in accuracy despite obviously having less hidden layer neurons. The benefits are short lasted however, as
this study shows that if trained for longer, or with a bigger dataset, the pooling model does tend to struggle
making it infeasible in the later stages of training. It is therefore, from comparison table (Tables 4 and 5
) recommended that pooling be applied in the early stages of training advanced models in deep learning.
Overall, this research contributes to the ongoing exploration of fuzzy graphs and their integration into neural
network architectures. By bridging these domains, we aim to advance the understanding and applicability
of fuzzy graphs in modeling and analyzing complex system.
References
[1] Al Mutab, H. M. (2019). Fuzzy graphs. J. Adv. Math, 17:77–95.
[2] Ali, S., Mathew, S., Mordeson, J. N., and Rashmanlou, H. (2018). Vertex connectivity of fuzzy graphs
with applications to human trafficking. New Mathematics and Natural Computation, 14(03):457–485.
[3] Ali, S., Mathew, S., and Mordeson, J. N. (2021). Hamiltonian fuzzy graphs with application to human
trafficking. Information Sciences, 550:268–284.
[4] Ali, S., Mathew, S., and Mordeson, J. N. (2024). Containers and spanning containers in fuzzy graphs
with application to human trafficking. New Mathematics and Natural Computation, 20(01):103–128.
[5] Bhattacharya, P. (1987). Some remarks on fuzzy graphs. Pattern Recognition Letters, 6(5):297–302.
[6] Bhutani, K. R. and Rosenfeld, A. (2003). Strong arcs in fuzzy graphs. Information Sciences, 152:319–
322.
[7] Ibrahim, A. M. (1996). Introduction to applied fuzzy electronics. Simon & Schuster Trade.
[8] Li, X. and Sa´ude, J. (2020). Explain graph neural networks to understand weighted graph features in
node classification. In International Cross-Domain Conference for Machine Learning and Knowledge
Extraction, pages 57–76.
[9] Mathew, S. and Sunitha, M. S. (2009).
Types of arcs in a fuzzy graph.
Information Sciences,
179(11):1760–1768.
[10] Mathew, S. and Sunitha, M. S. (2010).
Node connectivity and arc connectivity of a fuzzy graph.
Information Sciences, 180(4):519–531.
[11] Mathew, S., Mordeson, J. N., Malik, D. S., et al. (2018). Fuzzy graph theory, volume 363. Springer.
18
[12] Mordeson, J. N. and Nair, P. S. (2012). Fuzzy graphs and fuzzy hypergraphs, volume 46. Physica.
[13] Murphy, R., Srinivasan, B., Rao, V., and Ribeiro, B. (2019). Relational pooling for graph representa-
tions. In International Conference on Machine Learning, pages 4663–4673.
[14] Qasim, S. R., Kieseler, J., Iiyama, Y., and Pierini, M. (2019). Learning representations of irregular
particle-detector geometry with distance-weighted graph networks. The European Physical Journal C,
79(7):1–11.
[15] Rajasekaran, S. and Pai, G. A. V. (2003). Neural networks, fuzzy logic and genetic algorithm: synthesis
and applications (with cd). PHI Learning Pvt. Ltd.
[16] Ramya, S. and Lavanya, S. (2023). Contraction and domination in fuzzy graphs. TWMS Journal Of
Applied And Engineering Mathematics.
[17] ROSS, T. J. (1997). Fuzzy Logic With Engineering Applications.
[18] Rosenfeld, A. (1975).
Fuzzy graphs.
In Fuzzy sets and their applications to cognitive and decision
processes, pages 77–95. Elsevier.
[19] Sameena, K. and S Sunitha, M. (2009). Fuzzy graphs in fuzzy neural networks. Proyecciones (Antofa-
gasta), 28(3):239–252.
[20] Sitara, M., Akram, M., and Yousaf Bhatti, M. (2019). Fuzzy graph structures with application. Math-
ematics, 7(1):63.
[21] Skiena, S. (1990). Combinatorics and graph theory with mathematica. Perseus Books (Sd).
[22] Tsoukalas, L. H. and Uhrig, R. E. (1996). Fuzzy and neural approaches in engineering. John Wiley &
Sons, Inc.
[23] Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3):338–353.
19
|
Architectural change in neural networks using fuzzy vertex pooling Shanookha Ali∗1, Nitha Niralda†2 and Sunil Mathew‡3 1 .O. Box: 345055, Dubai 2 's College, Calicut, India 3 . After this, the endpoints of these vertices and all edges connected to them are removed. In this document, we introduce a formal framework for the concept of fuzzy vertex pooling (FVP) and provide an overview of its key properties with its applications to neural networks. The pooling model demonstrates remarkable efficiency in minimizing loss rapidly while maintaining competitive accuracy, even with fewer hidden layer neurons. However, this advantage diminishes over extended training periods or with larger datasets, where the model's performance tends to degrade. This study highlights the limitations of pooling in later stages of deep learning training, rendering it less effective for prolonged or large-scale applications. Consequently, pooling is recommended as a strategy for early-stage training in advanced deep learning models to leverage its initial efficiency. fuzzy vertex pooling, f-graph pooling, f-cycle pooling, neural network, fuzzy neural network. 1 Introduction In recent years, applying deep learning to f-graphs is a fast-growing field. The application of Fuzzy Convolutional Neural Networks (FCNNs) to the sparsely linked data that f-graphs depict serves as the basis for many of these research. While numerous Fuzzy Graph Convolutional Networks (FGCNs) have been presented, there haven't been many pooling layer proposals. However, intelligent pooling on fuzzy networks has a lot of potential, by lowering the number of vertices, it may be able to both discover clusters and minimise computing requirements. These two things hold the promise of transforming flat vertices into hierarchical sets of vertices. They are also a first step toward FGNNs being able to alter graph topologies rather than just vertex properties [13]. In 1965, Zadeh [23] introduced the paradigm-shifting idea of fuzzy logic, which was crucial in influencing how many mathematical and engineering ideas are now seen. Rosenfeld used this logic in 1975 [18] to redefine a number of graph theory terms. These days, fuzzy graph theory is a popular topic in mathematics. Mathew and Sunitha investigated fuzzy vertex connectivity and fuzzy edge connectivity [10] in 2010. In 2018, average fuzzy vertex connection was proposed by Shanookha et al after further research on fuzzy vertex connectivity and fuzzy containers [2, 4] later they formulated concepts of hamiltonian fuzzy graphs and applied them in analysis of Human trafficking [3] in 2021. ∗ † ‡ 1 19 Sep 2025 Figure 1: Vertex pooling in a f-graph. The graph to the right is produced by pooling the original f-graph twice. It makes no difference in vertex pooling if two vertices are joined by an edge; if they are, the edge will vanish when they are pooled. The idea of pooling the vertices was adopted by Pemmaraju and Skiena [21]. Two vertices p and q are connected in an undirected f-graph if a path can be found between them. Every pair of vertices must be connected for an f-graph to be considered connected. Partitioning an undirected graph into its most connected subgraphs is the goal of the f-graph connectivity problem. The term "f-graph pooling"refers to this method. In addition to connection issues, it is a rather straightforward technique that may be used to solve f- cycle and f- tree issues as well. We assume the f-graph is undirected. Fuzzy sets can be used to describe various aspects of Neural Computing. That is, fuzziness may be introduced at the input output signals, synaptic weights, aggregation operation and activation function of individual neurons to make it fuzzy neuron. Different aggregation operations and activation functions result in fuzzy neurons with different properties. Thus there are many possibilities for fuzzification of an artificial neuron. So we may find a variety of fuzzy neurons in the literature [7, 22, 15, 17]. Sameena and Sunitha [19], studeied the fuzzy neural network architecture is isomorphic to the fuzzy graph model and the output of a fuzzy neural network with OR fuzzy neuron is equal to the strength of strongest path between the input layer and the out put layer. Murphy et. al [13] generalizes graph neural networks (GNNs) beyond those based on the Weisfeiler-Lehman (WL) algorithm, graph Laplacians, and diffusions. The work by Ramya and Lavanya [16] delves into the specific operations of pooling and domination in fuzzy graphs. Pooling involves merging vertices based on their memberships, while domination deals with identifying dominating vertices. These operations are fundamental in simplifying graph structures and could have implications for optimizing information flow and representation learning in GNNs. Qasim et al [14] introduced distance-weighted graph networks for learning representations of irregular particle detector geometry. While not directly related to fuzzy graphs, their approach highlights the importance of weighted connections in graph-based learning. This is particularly relevant when considering how fuzzy graph pooling can influence the weighting of edges and vertices in GNNs. In [1], Mutab provides a mathematical exploration of fuzzy graphs, discussing their properties and applications. The paper offers insights into how fuzzy graphs can model imprecise relationships, an aspect that 2 aligns with the core principle of fuzzy memberships. Understanding the mathematical underpinnings is vital for integrating fuzzy graph pooling into GNN architectures. This is because pooling reduce the complexity of hidden layers and eliminate weak edges within them. Though this changes the structure of the network midway while training, creating higher variance in the output resulting in a higher loss overall. However, with the existing algorithms of forward and backprogpagation, this loss would be quickly minimized. So far, a form resembling fuzzy graph pooling has been used in convolutional and pooling networks, but not on simple fuzzy graph networks. There is no conclusive study conducted on simple fuzzy graph networks to test this hypothesis. The paper introduces vertex pooling in fuzzy graphs, a novel pooling method for Graph Neural Networks (GNNs) based on edge pooling. Unlike traditional pooling methods, FVP focuses on merging vertices via edges, preserving the graph structure while avoiding vertex loss. It is highly efficient, operating on sparse graph representations with run-time and memory scaling linearly with the number of edges. The approach is flexible and seamlessly integrates into various GNN architectures, including GCN, GraphSAGE, and GIN. EdgePool introduces a new way to handle edge features, enhancing its applicability in diverse graphbased tasks. It also supports unpooling, enabling graph reconstruction and effective application to vertex classification tasks. Experiments show consistent performance improvements when FVP is used, with notable gains in scalability for larger graphs. The method is particularly suitable for localized changes that avoid computational overhead for the entire graph. In general, FVP is a significant advancement in fuzzy graph pooling, offering a practical, scalable, and robust solution for modern GNN applications. In addition, in this paper we show that the FGP problem is NP-hard depends on the characterization of pooling method. 2 Fundamental concepts Most of the definitions and preliminary results used in this paper can be seen in [5, 6, 11, 9, 12]. Let ζ be a set. The triplet G = (ζ, σ, μ) is called a f-graph if each element ζ ∈ζ and pq ∈ζ ×ζ are assigned non-negative real numbers σ(v) and μ(pq), where σ : ζ →[0, 1] and μ : ζ × ζ →[0, 1], called the membership values of v and e respectively, such that for all p, q ∈ζ, μ(pq) ≤σ(p) ∧σ(q), where ∧denote the minimum. We also denote G = (σ, μ) to represent a f-graph. All f-graphs considered in this article are simple and connected. σ∗and μ∗respectively denote the vertex set and edge set of the f-graph G. A f-graph H = (τ, ν) is called a partial f-subgraph of G = (σ, μ) if τ(v) ≤σ(v) for all v ∈τ ∗and ν(uv) ≤μ(uv) for all uv ∈ν∗. If H = (τ, ν) is a partial f-subgraph of G = (σ, μ) such that τ(v) = σ(v) for all v ∈τ ∗and ν(uv) = μ(uv) for all uv ∈ν∗, then H is called a f-subgraph of G. G -pq (G -v) is a f-subgraph of G obtained by deleting the edge pq ( or vertex v) from G. Note that a f-subgraph H = (τ, ν) spans the f-graph G = (σ, μ) if τ = σ. A path P in a f-graph G = (σ, μ) is a sequence of distinct vertices p0, p1, · · · , pn such that μ(pi-1pi) > 0, 1 ≤i ≤n and all the vertices are distinct except possibly the first and last. The strength of a path, s(P) is the membership value of the weakest edge of the path and it always lies between 0 and 1. If s(P) = CONNG(p, q), then P is called a strongest p-q path. G is said to be connected if CONNG(p, q) > 0 for each p, q ∈σ∗. An edge pq of a f-graph G is called α-strong if μ(pq) > CONNG-pq(p, q). pq ∈μ∗is called β-strong if μ(pq) = CONNG-pq(p, q) and a δ-edge if μ(pq) φ. Let C1 = C/c3c4 is a cycle with length 3 and we name new vertex as c3 and new edge c3c1 with μ(c3c1) = ∧{μ(c2c3), μ(c4c1)} ≥φ. Let C2 = C/c3c1 is an edge and we name new vertex as c1 and new edge c2c1 with μ(c2c1) = ∧{μ(c1c3), μ(c2c3)} = φ. Pooling this edge c1c2 is a single vertex C3 = C/c1c2 with membership value φ. Hence, we assume the lemma holds for f- cycles of length k, and we will show it also holds for f- cycles of length k + 1. Let be given a f- cycle C- c1 -c2 -· · · -ck -ck+1 -c1 of length k + 1. C/ckck+1, is a f- cycle obtained by Pooling ckck+1 introduces a new vertex a ck and edge ckc1. Hence, this results in a f- cycle C/ckck+1 of length k, for which we know that the lemma holds ( Figure 3 is an illustration). 5 Figure 3: FGP of a f- cycle C with 4 vertices to C/gh, then to C/gh/kj to C/gh/kj/gj to a vertex j Theorem 2. Let G : (σ, μ) be given a f-graph and a set of edges μ1 = {f1, · · · , fq} and a set of edges μ2 = {g1, · · · , gp}, μ1 ∩μ2 = ∅. Let μ2 form a f-cycle g1 -· · · -gp with p ≥3. Let be μ3 = {f1, · · · , fq, g1, · · · , gp} and μ3 = {g1, · · · , gp, f1, · · · , fq}, then G/μ3 isomorphic to G/μ4. Proof. Let, fi = vivi+1 and gi = uiui+1. From Proposition 3, we know FGP are commutative. In light of this, we opt to pool the vertices in the following order: v1, v2, · · · vq, vq+1, u1, u2, · · · , up, up+1. Then we have by definition G/μ3 = G/v1v2/v2v3/ · · · /vqvq+1/vq+1u1u1u2/ · · · /upup+1 = G/v1v2/v2v3/ · · · /vqvq+1/vcu1u2/ · · · /upup+1 (vc is the new vertex after pooling vq+1u1 ) = G/v1v2/v2v3/ · · · /vqvq+1/u1vq+1u1u2/ · · · /upup+1 (by Proposition 3 and continue the above steps) ≃ G/u1u2/ · · · /upup+1/up+1v1/v1v2/v2v3/···/vqvq+1 ≃ G/μ4 Definition 3. A FGP set ζ′ in a f-graph G : (σ, μ) is a set of vertices v′ ⊆σ∗, such that spanning fuzzy forest induced by ζ′. A FGP C of G is a f- graph such that there exists a FGP set ζ′ with C = G/ζ′. After the previous observations, we develop another view on fuzzy vertex pooling a set ζ′ of vertices. The graph G/ζ′ is composed of connected components Q1, Q2, · · · Qk. Theorem 3. Let be given a f-graph G : (σ, μ), v ∈σ∗and e ∈μ∗. Then, dsG(v) ≥dsG/e(v) Proof. We prove this by considering distinct cases on e = pq. Case I: v is not an end vertex of e and v ∈N[p] ∨N[q]. dsG(v) = X w∈Ns(v) μ(wv) = X w∈NsG(v)-{p,q} μ(wv) + μ(pv) + μ(qv) ≥ X w∈NsG/e(v)} μ(wv) ≥ dsG/e(v) Case II: v is not an end vertex of e and v ̸∈N[p] ∨N[q]. Then, clearly dsG(v) = dsG/e(v) 6 Figure 4: FGP of a complete f-graph Case III: v is an end vertex of e. Say v = q dsG(v) = X w∈Ns(v) μ(wv) = X w∈Ns(v)-{p} μ(wv) + μ(pq) ≥ X w∈NsG/e(v) μ(wv) ≥ dsG/e(v) Now, we will prove that FGP in CFG is an isomorphism and state two theorems of FGP in f-tree and f-cycle. Later we define three types of FGP. Theorem 4. Let be given a G : (σ, μ) with |σ ∗| = n, G/pq : σc, μc) is CFG for e = pq ∈μ∗ Proof. Let σ∗= {p1, p2, · · · , pn} be the n vertices with, σ(p1) ≤σ(p2) ≤· · · ≤σ(pn). Choose any edge pjpk. Then G/pjpk is a f-graph with n -1 vertices. Name new vertex as p′ with σ(p′) = σ(pj) ∧σ(pk). μ(pip′) = σ(p1) if i 1) [0.6, 0.5] [0.7, 0.4] [0.8, 0.3] [0.9, 0.2] [0.6, 0.4] [0.7, 0.3] [0.8, 0.2] [0.9, 0.1] [0.6, 0.6] [0.7, 0.5] [0.8, 0.4] [0.9, 0.3] [0.8, 0.7] [0.9, 0.6] [0.9, 0.7] [0.7, 0.8] [0.7, 0.9] [0.8, 0.8] [0.9, 0.8] [0.9, 0.9] [0.9, 0.95] [0.6, 0.95] [0.7, 0.85] [0.8, 0.85] [0.7, 0.65] [0.6, 0.75] [0.8, 0.75] [0.9, 0.75] [0.6, 0.85] [0.7, 0.85] [0.9, 0.65] [0.6, 0.65] [0.7, 0.6] [0.8, 0.55] Table 1: Data points table Figure 7: Dataset 1 with 100 floating point ordered pairs and their simple xy scatter plot model) after every training iteration (known as an epoch). Loss is therefore minimized over time. When both networks were training over Dataset 1 (Table 4.1), it is observed from Figures 9 and 10 that the pooling model minimizes the loss much quicker than the baseline model. Scatter plot of the Dataset 1 and 2 is given in Figure 7 and 8. 10 [0.2510190737913408, 0.6259424942673553] [0.218839224, 0.405943179971498] [0.2239640999731, 0.36300336] [0.7742101924262627, 0.08885469930344578] [0.120308721, 0.7187321846929098] [0.27126308384037245, 0.79574132] [0.4456921923751095, 0.1493200825774095] [0.234270022, 0.5800658340241851] [0.024727105161211158, 0.101667] [0.08146331996385, 0.7131972427875436] [0.34589612, 0.943313609963865] [0.1051672309900156, 0.00928737] [0.43637059037982, 0.55977085410055] [0.957396348, 0.35251081826426] [0.34332206177360427, 0.5430516] [0.201810022611168, 0.03063861833530569] [0.46053307, 0.10169015185019025] [0.1694911077118097, 0.36330927] [0.55022089924165673, 0.4261096553359686] [0.3253727, 0.49262867111419906] [0.29022898945996186, 0.643084] [0.5802113825489248, 0.01458276552724822] [0.015650422, 0.5117441041653529] [0.13984017716325267, 0.808742209] [0.81144393973905, 0.09847477773029108] [0.4378103, 0.80081139194148] [0.01281428346066545, 0.6418489] [0.41026384501725, 0.7622195971674454] [0.1041839, 0.72102484993907] [0.45092763427115, 0.3843202] Table 2: Data points table Figure 8: Dataset 2 with 1000 floating point ordered pairs and their xy scatter plot When both networks were training over Dataset 2, it is observed from Figures 13 and 14 that the pooling model again is much quicker at minimizing loss. It is also important to note that as pooling are applied every 10,000 epochs, there are spikes in the pooling model's graph. This is because a pooling changes the internal structure of the hidden layers, which results in errors surfacing temporarily. However, this too is quickly minimized. A decision boundary is the model's understanding of where the classes tend to separate in the real valued vector space of input values. This is commonly used with support vector machines where multi-dimensional inputs are to be classified with decision boundaries. A confusion matrix showcases how well a model performs by checking how many predicted values were actually correct. The matrix's leading diagonal represent how many predicted true's and false's were actually correct. Ideal Confusion matrices have their leading diagonal 11 Figure 9: Plot for Loss over 50,000 Epochs Figure 10: Plot for Loss over the first 600 Epochs Figure 11: Decision boundary of baseline mode and contraction model 12 Figure 12: Confusion matrix of baseline model and contraction model Figure 13: Plot for Loss over 50,000 Epochs high and their reverse diagonal zero. When both networks were training from Dataset 1, both models performed almost identically. This is evident from the fact that figures 11 are nearly identical and all points are classes correctly. Therefore, the confusion matrices in figures 12 for both models also generate a perfect ideal result, where all predicted values were in fact correct. When both networks were training from Dataset 2, the baseline model was able to classify the input data more accurately than the pooling model. The pooling model's decision boundary does miss a small portion of points [Figures 15]. The confusion matrices for both models also indicate that the baseline model was more accurate than the pooling model [Figures 16]. 5 FGPNNs Algorithm The Feature-Guided Pooling Neural Network (FGPNN) algorithm optimizes neural networks by dynamically merging redundant neurons during training. At specified pooling intervals, the algorithm computes activa13 Figure 14: Plot for Loss over the first 600 Epochs Figure 15: Decision boundary of baseline mode and contraction model Figure 16: Confusion matrix of baseline model and contraction model 14 tions of neurons and evaluates pairwise cosine similarity. Neuron pairs with similarity exceeding a defined threshold are merged, reducing over-parameterization and enhancing model efficiency. The merging process creates a new neuron with averaged weights from the selected pair, effectively streamlining the network architecture. This approach decreases model complexity, accelerates inference, and mitigates overfitting by preventing redundant feature learning. FGPNN is particularly advantageous for resource-constrained environments, offering a balance between model size and performance. Algorithm 1 Algorithm: FGPNN with Fuzzy Pooling Require: Neural Network NN, Threshold τ, Pooling Interval p, Epochs max epochs Ensure: Optimized Neural Network NN 1: Initialize Neural Network NN 2: for epoch = 1 to max epochs do 3: Train NN for one epoch using backpropagation 4: if epoch % p == 0 then ▷Pooling interval reached 5: Compute activations A = {a1, a2, . . . , aN} 6: for each pair of neurons (i, j) do 7: Compute Sij = cosine similarity(ai, aj) 8: end for 9: Identify pairs (i, j) where Sij > τ 10: for each identified pair (i, j) do 11: Apply fuzzy pooling: vc ←merge(ai, aj) 12: Update vertex attributes: σ(vc) ←σ(ai) ∧σ(aj) 13: Update edge weights for all u connected to ai or aj: μ(u, vc) ←μ(u, ai) ∧μ(u, aj) 14: end for 15: Update NN structure to reflect pooling 16: end if 17: end forreturn Optimized Neural Network NN For a neural network's hidden layer to achieve structural optimization, its vertices (neurons) must be dynamically constrained. Let us consider a complete neural network optimization problem of the following general form: Here, in the illustration,the first layer is input layer, with membership values. This is the latent solution modeled by the neural network, parameterized by edge membership value μi and biases τ, Πi represents the vertex membership value (set of neurons) in a hidden layer with particular criteria according to the system, the dataset used for training, are inputs, and are target outputs. To optimize the vertices, a pooling operation is applied to reduce weak neurons, constrained by a similarity measure and threshold. Two types of Vertex Pooling Problems are forward Pooling Problem, and backward pooling problem. Given a fixed neural network structure and fixed pooling parameters (e.g., threshold τ, pooling interval p), minimize the loss function L(Πi, μi, σc, μc) while reducing redundant neurons in the hidden layer. Specifically, the threshold τ and pooling rules are predefined, and the objective is to train the network while dynamically applying pooling every p epochs. The network minimizes the loss function using backpropagation, and the pooling operation reduces the neuron count by merging highly similar neurons. The algorithm operates by computing the cosine similarity between neuron activations at specified pooling intervals during training. Neuron pairs (p, q) with similarity exceeding a threshold τ are identified for merging. Instead of directly averaging weights, fuzzy pooling is applied, where neurons are treated as vertices in a 15 fuzzy graph. The merging of neurons corresponds to identifying two vertices in the graph, pooling them into a new vertex vc. This process is formalized by modifying the vertex attributes σ and edge weights μ through fuzzy conjunction operations, ensuring that the new vertex inherits the intersecting features of the original neurons. Epoch Number Dataset 1 (100 Floating Point Numbers) Dataset 2 (1000 Floating Point Numbers) Baseline Model Loss Contraction Model Loss Baseline Model Loss Contraction Model Loss 1000 0.2484262 0.24902839 0.04808859 0.05095089 2000 0.24644552 0.24737148 0.02673441 0.03423685 3000 0.24172867 0.24322081 0.0225607 0.02799122 4000 0.22235309 0.22322397 0.02065131 0.02534179 5000 0.08141801 0.08621347 0.01946583 0.02372554 6000 0.03307482 0.03479826 0.01863874 0.02258807 7000 0.02065111 0.0210532 0.01801961 0.02172597 8000 0.01508629 0.01499302 0.01753345 0.02105261 9000 0.01191593 0.01160779 0.01713826 0.02905477 10000 0.00985495 0.00944181 0.01680855 0.02134974 11000 0.00839914 0.00792686 0.01652781 0.02203853 12000 0.00731084 0.00680104 0.01628488 0.02133787 13000 0.00646336 0.00592841 0.01607185 0.01910963 14000 0.00578289 0.00523131 0.01588298 0.01774443 15000 0.00522343 0.00466176 0.01571395 0.01686013 16000 0.00475474 0.0041883 0.01556151 0.01622786 17000 0.0043561 0.0037892 0.01358965 0.01574608 18000 0.00401277 0.00344893 0.01229743 0.01536185 19000 0.00371397 0.003156 0.01200986 0.01504486 20000 0.00345161 0.00290172 0.01161919 0.01477655 21000 0.00321949 0.00267938 0.01139701 0.05289028 22000 0.00301276 0.0024837 0.01120529 0.032185 23000 0.00282758 0.00231048 0.01155281 0.01360401 24000 0.00266084 0.00215633 0.01119383 0.01967304 25000 0.00251002 0.00201849 0.01252019 0.01630115 26000 0.00237303 0.00189469 0.01281253 0.01511133 27000 0.00224815 0.00178303 0.01078383 0.01501028 28000 0.0021339 0.00168195 0.01504412 0.01467604 29000 0.00202906 0.00159011 0.01034442 0.01443588 30000 0.00193257 0.00150639 0.01032878 0.01423167 40000 0.00127634 0.0009589 0.00961296 0.01490049 50000 0.00092503 0.00068232 0.01582948 0.01198592 Table 3: Comparison of Baseline and Contraction Model Loss for Two Datasets The pooling step reduces the number of neurons while preserving essential network connectivity and information. The updated network, denoted as G/pq, reflects the pooled structure, with connections to neighboring neurons recalculated based on the fuzzy intersection of shared edges. This adaptive pooling strategy enhances model interpretability, prevents overfitting by reducing redundancy, and accelerates inference by decreasing the overall parameter count. FGPNN with fuzzy pooling is particularly useful for resource-constrained environments, where efficiency and accuracy must be balanced. A predefined value that determines whether two neurons are "similar enough" to be merged. Higher results in more aggressive pooling. The number of epochs after which pooling is applied during training 16 Aspect Details Baseline Model Pooling Model Network Architecture 2 input neurons, 2 hidden layers (8 neurons each), 2 output neurons Traditional backpropagation Pooling every 10,000 epochs Activation Function Sigmoid (σ(x) = 1 1+e-x ) Used sigmoid in all layers Same sigmoid function used Dataset 1 100 ordered pairs (sum 1: Class 1) Performed well, achieved perfect decision boundaries Performed similarly, classified all points correctly Dataset 2 1000 ordered pairs (more complex data) More accurate in classification Slightly less accurate; missed some points Loss Minimization Speed Loss minimized using backpropagation, learning rate 0.025 Slower minimization Faster minimization with temporary spikes after pooling Learning Rate 0.025 Adjusted consistently Same, with quicker adjustments after pooling Epochs 50,000 training iterations Steady performance throughout Spikes after pooling events, then rapid recovery Decision Boundary Graphical separation of Class 0 and Class 1 in real-valued vector space More precise on Dataset 2 Missed a few points on Dataset 2 Confusion Matrix Evaluated accuracy of classification Ideal leading diagonal for Dataset 1 Near-perfect for Dataset 1; lower accuracy for Dataset 2 Best Use Case Scenarios requiring high precision on complex datasets Suitable for high-accuracy applications Scenarios prioritizing faster training with tolerance for minor errors Table 4: Comparison of Baseline and Pooling Neural Network Models 17 (e.g., every 10,000 epochs).Two neurons are merged and are replaced with a new neuron, This reduces the total number of neurons in the hidden layer. 6 Conclusion In this research, we have explored the potential of fuzzy graph pooling within the realm of neural networks, with a particular focus on fuzzy graph networks. Fuzzy graphs, with their ability to represent uncertainty and partial membership, present a versatile framework that can effectively model complex relationships in real-world scenarios. The reviewed papers provide a foundation for understanding fuzzy graphs, their operations, and their potential applications in graph neural networks. The principles of fuzzy graph pooling, as explored in these studies, offer a structured approach to handling uncertainty and optimizing information flow within GNN architectures. Future research could focus on implementing and evaluating these concepts in practical GNN frameworks, with the goal of enhancing their efficiency, interpretability, and robustness in real-world applications. The pooling model is notable for minimizing loss incredibly quickly and still holds up in accuracy despite obviously having less hidden layer neurons. The benefits are short lasted however, as this study shows that if trained for longer, or with a bigger dataset, the pooling model does tend to struggle making it infeasible in the later stages of training. It is therefore, from comparison table (Tables 4 and 5 ) recommended that pooling be applied in the early stages of training advanced models in deep learning. Overall, this research contributes to the ongoing exploration of fuzzy graphs and their integration into neural network architectures. By bridging these domains, we aim to advance the understanding and applicability of fuzzy graphs in modeling and analyzing complex system. References [1] Al Mutab, H. M. (2019). Fuzzy graphs. J. Adv. Math, 17:77-95. [2] Ali, S., Mathew, S., Mordeson, J. N., and Rashmanlou, H. (2018). Vertex connectivity of fuzzy graphs with applications to human trafficking. New Mathematics and Natural Computation, 14(03):457-485. [3] Ali, S., Mathew, S., and Mordeson, J. N. (2021). Hamiltonian fuzzy graphs with application to human trafficking. Information Sciences, 550:268-284. [4] Ali, S., Mathew, S., and Mordeson, J. N. (2024). Containers and spanning containers in fuzzy graphs with application to human trafficking. New Mathematics and Natural Computation, 20(01):103-128. [5] Bhattacharya, P. (1987). Some remarks on fuzzy graphs. Pattern Recognition Letters, 6(5):297-302. [6] Bhutani, K. R. and Rosenfeld, A. (2003). Strong arcs in fuzzy graphs. Information Sciences, 152:319322. [7] Ibrahim, A. M. (1996). Introduction to applied fuzzy electronics. Simon & Schuster Trade. [8] Li, X. and Sa ́ude, J. (2020). Explain graph neural networks to understand weighted graph features in node classification. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pages 57-76. [9] Mathew, S. and Sunitha, M. S. (2009). Types of arcs in a fuzzy graph. Information Sciences, 179(11):1760-1768. [10] Mathew, S. and Sunitha, M. S. (2010). Node connectivity and arc connectivity of a fuzzy graph. Information Sciences, 180(4):519-531. [11] Mathew, S., Mordeson, J. N., Malik, D. S., et al. (2018). Fuzzy graph theory, volume 363. Springer. 18 [12] Mordeson, J. N. and Nair, P. S. (2012). Fuzzy graphs and fuzzy hypergraphs, volume 46. Physica. [13] Murphy, R., Srinivasan, B., Rao, V., and Ribeiro, B. (2019). Relational pooling for graph representations. In International Conference on Machine Learning, pages 4663-4673. [14] Qasim, S. R., Kieseler, J., Iiyama, Y., and Pierini, M. (2019). Learning representations of irregular particle-detector geometry with distance-weighted graph networks. The European Physical Journal C, 79(7):1-11. [15] Rajasekaran, S. and Pai, G. A. V. (2003). Neural networks, fuzzy logic and genetic algorithm: synthesis and applications (with cd). PHI Learning Pvt. Ltd. [16] Ramya, S. and Lavanya, S. (2023). Contraction and domination in fuzzy graphs. TWMS Journal Of Applied And Engineering Mathematics. [17] ROSS, T. J. (1997). Fuzzy Logic With Engineering Applications. [18] Rosenfeld, A. (1975). Fuzzy graphs. In Fuzzy sets and their applications to cognitive and decision processes, pages 77-95. Elsevier. [19] Sameena, K. and S Sunitha, M. (2009). Fuzzy graphs in fuzzy neural networks. Proyecciones (Antofagasta), 28(3):239-252. [20] Sitara, M., Akram, M., and Yousaf Bhatti, M. (2019). Fuzzy graph structures with application. Mathematics, 7(1):63. [21] Skiena, S. (1990). Combinatorics and graph theory with mathematica. Perseus Books (Sd). [22] Tsoukalas, L. H. and Uhrig, R. E. (1996). Fuzzy and neural approaches in engineering. John Wiley & Sons, Inc. [23] Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3):338-353. 19
|
2509.16289
|
arXiv:2509.16289v1 [gr-qc] 19 Sep 2025
Charged particle dynamics in singular spacetimes: hydrogenic mapping and
curvature-corrected thermodynamics
Abdullah Guvendi
∗
Department of Basic Sciences, Erzurum Technical University, 25050, Erzurum, T¨urkiye
Semra Gurtas Dogan
†
Department of Medical Imaging Techniques, Hakkari University, 30000, Hakkari, T¨urkiye
Omar Mustafa
‡
Department of Physics, Eastern Mediterranean University,
99628, G. Magusa, north Cyprus, Mersin 10 - T¨urkiye
Hassan Hassanabadi
§
Departamento de F´ısica Te´orica, At´omica y Optica and Laboratory for Disruptive Interdisciplinary Science (LaDIS),
Universidad de Valladolid, 47011 Valladolid, Spain and
Department of Physics, Faculty of Science, University of Hradec Kr´alov´e, Rokitansk´eho 62, 500 03 Hradec Kr´alov´e, Czechia
(Dated: September 23, 2025)
We analyze the dynamics of charged test particles in a singular, horizonless spacetime arising as the massless
limit of a charged wormhole in the Einstein-Maxwell-Scalar framework.
The geometry, sustained solely
by an electric charge Q, features an infinite sequence of curvature singularity shells, with the outermost
at r∗= 2|Q|/π acting as a hard boundary for nonradial motion, while radial trajectories can access it
depending on the particle’s charge-to-mass ratio |q|/m. Exploiting exact first integrals, we construct the
effective potential and obtain circular orbit radii, radial epicyclic frequencies, and azimuthal precession
rates. In the weak-field limit (r ≫|Q|), the motion reduces to a Coulombic system with small curvature-
induced retrograde precession. At large radii, the dynamics maps to a hydrogenic system, with curvature
corrections inducing perturbative energy shifts. Approaching r∗, the potential diverges, producing hard-wall
confinement. Curvature corrections also modify the canonical thermodynamics, raising energies and slightly
altering entropy and heat capacity. Our results characterize the transition from Newtonian-like orbits to
strongly confined, curvature-dominated dynamics.
CONTENTS
I. Introduction
1
II. Charged Particle Dynamics
2
III. Analysis of Radial Motion, Particle Orbits, and Stability
4
A. Radial Motion and Effective Potential
4
Stability of Circular Orbits
4
B. Weak-Field Approximation and Orbital Stability
4
C. Strong-Field Dynamics and Orbital Stability
6
IV. Mapping to a one-electron atom
7
V. Curvature-Corrected Thermodynamic Properties
8
VI. Summary and Discussion
9
References
10
∗abdullah.guvendi@erzurum.edu.tr
† semragurtasdogan@hakkari.edu.tr
‡ omar.mustafa@emu.edu.tr
§ hha1349@gmail.com (Corresponding Author)
I.
INTRODUCTION
Charged spacetimes in general relativity provide fundamental insights
into the relationship between electromagnetic fields and spacetime cur-
vature. Classical solutions, such as the Reissner-Nordstr¨om black hole,
illustrate how electric charge modifies spacetime geometry, giving rise
to inner and outer horizons as well as central singularities [1–3]. These
solutions, however, typically assume the presence of mass. This nat-
urally raises the question: can electric charge alone, in the absence
of mass, induce nontrivial spacetime curvature and support physically
meaningful structures?
Wormholes, first introduced by Einstein and Rosen, provide a theoret-
ical framework to explore such questions [4]. These hypothetical struc-
tures connect distant regions of spacetime and, in principle, could act as
shortcuts between them. While traversable wormholes generally require
exotic matter and often violate classical energy conditions, the inclusion
of electric charge adds a new layer of complexity. In charged wormhole
geometries, electromagnetic fields can significantly modify the causal
structure and the trajectories of test particles [5, 6], potentially allow-
ing for configurations that circumvent classical energy condition vio-
lations.
Recent investigations have extended these considerations to
massless configurations, where electric charge alone shapes spacetime
curvature. In particular, Turimov et al. [7] have obtained exact so-
lutions of the Einstein-Maxwell-Scalar field equations for spherically
symmetric charged wormholes characterized by mass M and charge Q.
2
Unlike classical charged black holes, these wormholes reveal a novel
mechanism by which charge governs spacetime, motivating a detailed
analysis of their dynamics and geometric properties.
The spacetime under consideration is described by the static, spheri-
cally symmetric metric (in units G = c = 1) [7]:
ds2 = −f(r) dt2 + f(r)−1dr2 + r2 dθ2 + sin2 θ dϕ2 ,
(1)
with the metric function
f(r) =
"
cosh
p
M2 −Q2
r
+
M
p
M2 −Q2 sinh
p
M2 −Q2
r
#−2
.
(2)
In the extremal limit M →|Q|, we have
p
M2 −Q2 →0. Expanding
the hyperbolic functions for small arguments x =
p
M2 −Q2/r →0
using cosh x ≃1 + x2/2 and sinh x ≃x + x3/6, we obtain:
cosh x +
M
p
M2 −Q2 sinh x ≃1 + M
r + O(x2) →1 + |Q|
r .
Hence, the metric function reduces to
f(r)
M→|Q| =
1 + |Q|
r
−2
.
(3)
Introducing the Schwarzschild-like radial coordinate R = r + |Q|, so
that r = R −|Q|, the line element becomes
ds2 = −
1 −|Q|
R
2
dt2 +
1 −|Q|
R
−2
dR2 + (R −|Q|)2dΩ2,
(4)
where dΩ2 = dθ2 + sin2 θ dϕ2. This geometry coincides with the ex-
tremal Reissner-Nordstr¨om metric in the radial sector but exhibits a
distinct angular sector due to the radial shift R 7→R −|Q|. In the neu-
tral limit Q →0, it reduces to the classical Papapetrou “exponential”
wormhole metric [8]. For |Q| > M, the hyperbolic functions become
trigonometric, yielding oscillatory metrics and generically naked singu-
larities. These features highlight the delicate relationship between mass
and charge in determining the global structure of spacetime [1, 2, 7].
In the massless limit M = 0, electric charge |Q| alone generates space-
time curvature [7], resulting in the line element
ds2 = −
dt2
cos2(|Q|/r) + cos2(|Q|/r) dr2 + dΩ2 .
(5)
This metric exhibits curvature singularities at
rn =
|Q|
(n + 1
2)π ,
n = 0, 1, 2, . . . ,
(6)
where cos(|Q|/r) vanishes. Each singular shell acts as a dynamical bar-
rier that confines timelike test particles. Analogies to confined magnetic
configurations, such as the Bonnor-Melvin universe [9, 10], are formal
and should not be interpreted as physical equivalence. The accessible
radial region between successive singular shells is
|Q|
(n + 3
2)π
< r <
|Q|
(n + 1
2 )π
,
n = 0, 1, 2, . . . ,
(7)
which can be interpreted classically as a sequence of effective potential
wells for large n. The outermost shell (n = 0) is located at
r∗= 2|Q|
π
,
(8)
which represents an effectively impenetrable boundary for timelike or-
bits with nonzero angular momentum (L ̸= 0). This property follows
from the line element given in (5). For purely radial motion (L = 0),
the accessibility of r∗is governed by the particle’s charge-to-mass ra-
tio |q|/m, highlighting the dependence of test particle dynamics on the
underlying spacetime geometry.
In the far-field regime (r ≫|Q|) or for weak charge (|Q| ≪r), the
metric functions expand as
cos−2|Q|
r
= 1 +
|Q|
r
2
+ O
|Q|4
r4
,
cos2|Q|
r
= 1 −
|Q|
r
2
+ O
|Q|4
r4
,
(9)
showing that the spacetime is asymptotically Minkowskian, with cur-
vature corrections decaying as (|Q|/r)2. Thus, the geometry is regular
at large distances, while its short-distance structure is entirely gov-
erned by electric charge, underscoring the nontrivial role of charge in
the absence of mass.
The motion of charged test particles is governed by the Lorentz force in
curved spacetime [11–16]. In this work, we focus exclusively on timelike
trajectories. The singular shell structure may give rise to a rich vari-
ety of dynamics, including bounded motion, scattering, and capture,
all constrained by the outermost shell r∗. A detailed effective poten-
tial analysis reveals how the singular shells regulate orbital motion and
determine the stability of circular orbits. Remarkably, weak-field cir-
cular orbits may exhibit retrograde precession [17], opposite in sign to
the prograde advance [18, 19] observed in Schwarzschild and Reissner-
Nordstr¨om spacetimes, providing a clear dynamical signature of the
charge-induced geometry (5).
More broadly, such charge-dominated
spacetimes offer a unique framework for studying exotic objects, semi-
classical instabilities, and naked singularities, with implications for test-
ing deviations from general relativity in extreme regimes.
In this paper, we present a comprehensive investigation of the dynamics
of massive, charged test particles in the massless, charge-induced geom-
etry (5). We analyze the effective potentials, stability criteria, and the
influence of curvature singularity shells, obtaining analytical solutions
and deriving weak-field approximations to connect with Newtonian in-
tuition. In the weak-field regime, the system is semiclassically mapped
to a hydrogenic model, where curvature-induced corrections yield con-
trolled perturbative energy shifts.
The study is further extended to
canonical thermodynamics, demonstrating how these curvature effects
systematically modify free and internal energies, as well as entropy and
heat capacity. The paper is organized as follows: Section II introduces
the spacetime geometry and associated electromagnetic field, adopting
the gauge At = −tan(|Q|/r), which reduces to the Coulomb poten-
tial At ≃−|Q|/r in the weak-field limit, along with the equations of
motion and construction of the effective potential. Section III presents
a detailed study of orbital dynamics and stability.
Section IV maps
the system to a one-electron atom, highlighting similarities, perturba-
tive curvature corrections, and limits of validity. Section V extends the
analysis to canonical thermodynamics, illustrating the impact of cur-
vature corrections on thermodynamic properties. Finally, Section VI
summarizes the main results and outlines potential directions for future
research.
II.
CHARGED PARTICLE DYNAMICS
We consider the motion of a charged test particle with mass m and
charge q in the curved spacetime geometry (5), introduced in Section I.
The dynamics of the particle are determined by the Lagrangian [20–22]
L = m
2 gµν ˙xµ ˙xν + q Aµ ˙xµ,
(10)
where the dot denotes differentiation with respect to the proper time
τ, i.e. ˙xµ ≡dxµ/dτ. The first term represents the kinetic contribution
associated with motion in the curved background geometry, while the
second term implements the minimal coupling to the electromagnetic
3
four-potential Aµ. The equations of motion are obtained by applying
the Euler-Lagrange equations to the Lagrangian (10), namely
d
dτ
∂L
∂˙xµ
−∂L
∂xµ = 0.
(11)
Evaluating the first term, we find the canonical momentum
∂L
∂˙xµ = m gµν ˙xν + q Aµ,
(12)
and differentiating with respect to proper time yields
d
dτ
∂L
∂˙xµ
= m
∂λgµν ˙xλ ˙xν + gµν ¨xν
+ q ∂λAµ ˙xλ.
(13)
On the other hand, the explicit coordinate dependence of the La-
grangian contributes
∂L
∂xµ = m
2 ∂µgαβ ˙xα ˙xβ + q ∂µAα ˙xα.
(14)
Substituting these expressions into the Euler–Lagrange equation leads
to
m
gµν ¨xν +∂λgµν ˙xλ ˙xν −1
2 ∂µgαβ ˙xα ˙xβ
= q ∂µAν −∂νAµ
˙xν. (15)
At this stage it is natural to recognize the antisymmetric electromag-
netic field strength tensor
Fµν = ∂µAν −∂νAµ,
(16)
which allows the right-hand side to be written in compact form. By
raising an index with the inverse metric and noting that the combi-
nation of metric derivatives reproduces the Christoffel symbols of the
Levi-Civita connection,
Γσ
αβ = 1
2 gσµ ∂αgµβ + ∂βgµα −∂µgαβ
,
(17)
the equation of motion assumes the form
m
¨xσ + Γσ
αβ ˙xα ˙xβ
= q F σν ˙xν.
(18)
This result can be expressed even more transparently in covariant no-
tation. Writing ∇˙x for the covariant derivative along the worldline, one
obtains [16]
m ∇˙x ˙xµ = q F µν ˙xν,
(19)
which is the covariant Lorentz force law describing the trajectory of
a charged particle subject simultaneously to gravitational and electro-
magnetic fields. Here, Fµν encodes the electromagnetic field, while the
gravitational influence enters through the connection Γσ
αβ. Throughout
this work, we adopt units with G = c = 1, unless stated otherwise, and
employ the gauge choice corresponding to the M →0 limit of Eq. (17)
in [7]):
At = −tan
|Q|
r
,
(20)
which asymptotically reduces to the Coulomb form At →−|Q|/r as
r →∞, thereby ensuring the correct flat-space limit. Exploiting spher-
ical symmetry, we restrict motion to the equatorial plane θ = π/2 [23].
In this plane, the Lagrangian (10) simplifies to
L = m
2
"
−
˙t 2
cos2(|Q|/r) + cos2(|Q|/r)
˙r 2 + r2 ˙ϕ 2
#
−q tan(|Q|/r) ˙t.
(21)
The existence of timelike and rotational Killing vectors ensures two
conserved quantities:
the energy E and angular momentum L [24].
These arise from the canonical momenta:
pt = ∂L
∂˙t = −
m
cos2(|Q|/r)
˙t −q tan(|Q|/r) ≡−E,
pϕ = ∂L
∂˙ϕ = mr2 cos2(|Q|/r) ˙ϕ ≡L.
(22)
Solving for the velocities yields
˙t = E −q tan(|Q|/r)
m
cos2(|Q|/r),
˙ϕ =
L
mr2 cos2(|Q|/r).
(23)
Substituting into the timelike condition gµν ˙xµ ˙xν = −1 gives
m2 ˙r2 = (E −q tan(|Q|/r))2 −
m2
cos2(|Q|/r) −
L2
r2 cos4(|Q|/r).
For L ̸= 0, the last term diverges near r∗, ensuring a turning point be-
fore reaching the singular shell. For radial motion (L = 0), accessibility
of r∗depends on |q|/m and E. Defining the energy branches [25]
E±(r) ≡q tan
|Q|
r
±
s
m2
cos2(|Q|/r) +
L2
r2 cos4(|Q|/r) .
(24)
The effective potential per unit mass, shown in Figure 1, is defined as
[25]
Veff(r) = E+(r)
m
.
(25)
Accordingly, the binding energy is
FIG. 1.
Effective radial potentials Veff(r) for different angular mo-
mentum states L = 1, 3, 5 of a particle with charge q = −1 and m = 1
in the presence of a singular shell located at r∗= 2|Q|/π (Q = 10).
Colored curves represent the effective potentials for each L state, with
the corresponding colored stars indicating the classical turning points
for energies E = 3, 5, 10. Shaded regions highlight the classically al-
lowed radial motion (E > Veff(r)), and the dashed vertical line marks
the outermost singular shell position r∗. This visualization illustrates
how the effective potential and the allowed regions depend on angular
momentum and energy levels.
Ebind(r) ≡Veff(r) −1.
(26)
This definition follows from considering the particle at rest at infinity:
in this limit, E+ →m and Veff→1, so Ebind →0.
Regions with
Veff< 1 correspond to bound motion, while Veff> 1 indicates unbound
motion. The radial motion is allowed where E ≥E+, linking turning
4
points directly to the effective potential [25].
Factorizing the radial
equation:
m2 ˙r2 = (E −E+)(E −E−) ≡R(r),
(27)
makes clear that ˙r2 ≥0 only in classically allowed regions. Circular
orbits occur at rc > r∗where E′
+(rc) = 0, with stability determined
via proper-time radial epicyclic frequency [26]
ω2
r ≡−R′′(rc)
2m2
=
E′′
+(rc)
E+(rc) −E−(rc)
2m2
.
(28)
In the weak-field limit, E+−E−≃2m, giving ω2
r ≃E′′
+(rc)/m. Stability
requires ω2
r > 0 or V ′′
eff(rc) > 0. The coordinate-time radial frequency
is
˙t
rc = E −q tan(|Q|/rc)
m
cos2(|Q|/rc),
Ωr = ωr
˙t
rc
.
(29)
Figure 1 illustrates how Veff(r) depends on L and Q.
Key features
include: (i) higher L increases the centrifugal barrier, moving circular
orbits outward; (ii) the depth of Veffindicates the strength of binding,
with lower Veffcorresponding to more tightly bound orbits; (iii) the
combined effect of spacetime curvature and the electric field produces
barriers absent in Reissner–Nordstr¨om spacetimes, making r∗impene-
trable for L ̸= 0; (iv) for radial motion, accessibility of r∗depends on
|q|/m and E. This figure thus encapsulates turning points, classically
allowed regions, and the influence of conserved quantities on orbital
stability.
III.
ANALYSIS OF RADIAL MOTION, PARTICLE
ORBITS, AND STABILITY
We now analyze in detail the dynamics of a classical charged test par-
ticle with rest mass m and charge q in the background geometry (5).
Owing to the stationarity and spherical symmetry of the spacetime,
there exist two Killing vectors, ∂t and ∂ϕ, which yield conserved en-
ergy and angular momentum along the particle’s worldline. These con-
stants of motion reduce the problem to an effective one-dimensional
radial equation without the need for weak-field approximations [23].
A.
Radial Motion and Effective Potential
As shown in Sec. II, the radial dynamics can be cast in terms of two
energy branches E±(r), associated with future- and past-directed time-
like trajectories. Classical motion occurs when E ≥E+(r) for future-
directed trajectories, and E ≤E−(r) for past-directed trajectories.
The spacetime singularities occur at the discrete radii determined by
cos(|Q|/r) = 0 (cf. Eq. (6)), where the effective energies |E±| diverge.
These singular hypersurfaces act as absolute kinematic barriers. The
outermost such barrier, located at r∗= 2|Q|/π, bounds all physically
realizable trajectories.
For purely radial motion (L = 0), the diver-
gences of the terms [E −q tan(|Q|/r)]2 and m2 sec2(|Q|/r) both become
relevant as r →r∗. Now, let us introduce the dimensionless variable
u = |Q|/r, mapping spatial infinity (r →∞) to u →0 and the sin-
gular barrier (r = r∗) to u →π/2. Since tan u ∼sec u as u →π/2,
the near-barrier behavior depends sensitively on the ratio |q|/m and
the conserved canonical energy E.
In particular, for |q|/m ≲1, the
particle is repelled before reaching r∗, while for |q|/m ≳1, the electro-
static attraction may partially compensate, allowing closer approach.
To systematically analyze radial motion, we define the radial function
R(r) ≡E −q tan (|Q|/r)2 −
m2
cos2 (|Q|/r) −
L2
r2 cos4 (|Q|/r),
(30)
so that the radial equation reduces to
m2 ˙r2 = R(r),
R(r) ≥0.
(31)
Physically, R(r) plays the role of the “radial kinetic energy squared”:
the particle can move only where R(r) ≥0.
Turning points occur
at R(r) = 0, corresponding to Veff(r) = E+/m. For nonzero angular
momentum, the centrifugal term ∼L2/(r2 cos4(|Q|/r)) diverges at r∗,
preventing penetration.
Hence the physical domain is r > r∗.
For
orbits with L ̸= 0, circular orbits at r = rc satisfy simultaneously [23]
R(rc) = 0,
R′(rc) = 0.
(32)
The radial acceleration can be written as
m2¨r = 1
2R′(r).
(33)
Stability of Circular Orbits
To study stability, let us consider a small radial perturbation around a
circular orbit [27]:
r(t) = rc + δr(t),
|δr| ≪rc,
(34)
and linearize
R′(r) ≈R′′(rc) δr,
since R′(rc) = 0.
(35)
Substitution into (33) gives harmonic oscillator equation:
m2 ¨δr = 1
2 R′′(rc) δr
⇒
¨δr = R′′(rc)
2m2
δr.
(36)
Defining the proper-time radial epicyclic frequency ωr with a conven-
tional minus sign for stability:
ω2
r ≡−R′′(rc)
2m2
,
(37)
so that ω2
r > 0 corresponds to stable orbits. Expressing in terms of the
energy branches E± yields
ω2
r =
E′′
+(rc)
E+(rc) −E−(rc)
2m2
.
In the weak-field regime, E+ −E−≈2m, giving ω2
r ≃E′′
+(rc)/m. Sta-
bility is equivalent to a local minimum of the effective potential Veff(r).
The coordinate-time radial frequency is obtained from the proper-time
frequency via the relation
Ωr = ωr
dτ
dt
rc
,
dτ
dt
rc
=
m
[E −q tan (|Q|/rc)] cos2 (|Q|/rc).
This expression makes explicit how the radial oscillations in coordinate
time are redshifted relative to proper time due to the spacetime geom-
etry and electromagnetic interaction. The combined effect of angular
momentum, charge-to-mass ratio, and the singular barrier r∗governs
both the allowed radial domain and the stability properties of circular
orbits.
B.
Weak-Field Approximation and Orbital Stability
In the weak-field regime, defined by radial distances much larger than
the characteristic scale of the central charge (note that r in units of
Q), r ≫|Q|, the spacetime metric approaches the Minkowski form,
with small perturbations due to both the electromagnetic field of the
5
FIG. 2. Dynamics of a charged particle with mass m = 1 and charge q = −1 around a central Coulomb charge |Q| = 2.4 for angular momenta
L = 1, 2, 3. (a) Radial energy E+(r) showing contributions from the rest mass (black dashed line), Coulomb interaction, and angular momentum,
with circular orbits rc indicated by vertical dashed lines and the outermost barrier r∗highlighted in purple.
(b) Effective potential Veff(r)
illustrating the radial dependence of the potential energy, with circular orbits and r∗similarly marked (purple). (c) Radial oscillations r(t) around
circular orbits with shaded envelopes representing the oscillation amplitude, and r∗shown as a purple dashed line. (d) Two-dimensional precessing
orbits in the xy plane, exhibiting retrograde precession around the central charge (black dot), with maximum and minimum radial excursions, and
the outermost barrier r∗shown as a dashed purple circle.
central charge and the curvature it induces. In this limit, the dynamics
of a charged particle can be described by an effective energy function
E+(r), which includes contributions from the particle’s rest mass, elec-
tromagnetic potential, orbital angular momentum, and leading curva-
ture corrections. Expanding E+(r) in powers of 1/r up to second order
gives
E+(r) ≃m + q|Q|
r
+
L2
2mr2 + mQ2
2r2
+ O(r−3),
⇒E+(r) −m ≃q|Q|
r
+
L2
2mr2 + mQ2
2r2
= −κ
r + β
r2 ,
(38)
where we define κ = |qQ|, β = (L2 + m2Q2)/(2m) and q < 0. In this
decomposition, the first term represents the attarctive Coulomb inter-
action between the particle and the central charge. The second term
corresponds to the centrifugal barrier arising from the orbital angular
momentum, which prevents the particle from collapsing into the central
charge. The third term represents the leading-order correction due to
spacetime curvature induced by the central charge, which slightly mod-
ifies the effective potential at large distances. Terms of order O(r−3)
and higher are negligible in this approximation and do not significantly
influence the orbital motion in the weak-field regime. A circular orbit
corresponds to a radius rc where the radial derivative of the effective
energy vanishes. Physically, this condition reflects the balance between
the attractive and repulsive contributions to the radial force acting on
the particle. Differentiating the energy function with respect to r yields
E′
+(r) = −qQ
r2 −L2
mr3 −mQ2
r3
+ O(r−4).
(39)
At leading order, we can neglect the curvature term proportional to
Q2/r3, since it is subdominant at large radii. This reduces the circular
orbit condition to the classical balance between the Coulomb force and
the centrifugal barrier:
L2
mr3c
= −qQ
r2c
,
qQ < 0
⇒
rc =
L2
m|qQ|.
(40)
Here, the restriction qQ < 0 ensures that the Coulomb interaction is
attractive, allowing for stable circular orbits. Including the curvature
correction to next-to-leading order slightly increases the circular orbit
radius:
rc ≃
1
|qQ|
L2
m + mQ2
,
(41)
which reduces to the leading-order expression when Q2/r2
c ≪1. This
demonstrates that the curvature of spacetime effectively contributes a
small repulsive term, increasing the orbital radius for a given angular
momentum.
Physically, this reflects the fact that curvature-induced
modifications to the potential slightly oppose the central Coulomb at-
traction.
The stability of circular orbits is characterized by the ra-
dial epicyclic frequency, which describes the particle’s small oscillations
around the circular orbit. A positive radial frequency indicates stable
oscillations, while a negative or imaginary frequency would signal in-
stability. The radial epicyclic frequency is defined as
ω2
r ≃1
mE′′
+(rc),
(42)
with the second derivative of the effective energy given by
E′′
+(r) = 2qQ
r3
+ 3L2
mr4 + 3mQ2
r4
+ O(r−5).
(43)
Evaluating this at the circular orbit radius rc using (40), the leading-
order term yields
E′′
+(rc) ≃|qQ|
r3c
1 + O
mQ2
L2
> 0,
(44)
confirming the stability of the orbit under small radial perturbations.
Consequently, the proper-time radial epicyclic frequency can be ex-
pressed as
ωr ≃m|qQ|2
L3
,
(45)
6
up to minor corrections (cancelled) from the curvature term. This re-
lation has a clear physical interpretation: stronger Coulomb attraction
increases the radial oscillation frequency, while larger angular momen-
tum reduces it due to the broader orbits associated with higher L.
In the limit m →0, the radial frequency vanishes, consistent with
the absence of a restoring force for massless particles. To investigate
the azimuthal motion and the associated orbital precession [19], it is
convenient to define an effective central potential incorporating both
Coulomb and curvature effects:
U(r) = qQ
r
+ mQ2
2r2 .
(46)
The circular orbit condition can be equivalently written as L2 =
m r3 U′(r), and the proper-time frequencies for small radial and az-
imuthal oscillations are given by
ω2
ϕ =
1
mr U′(r),
ω2
r = 1
m
U′′(r) + 3L2
mr4
.
(47)
Differentiating the potential provides
U′(r) = −qQ
r2 −mQ2
r3
,
U′′(r) = 2qQ
r3
+ 3mQ2
r4
.
(48)
Substituting these into the frequency expressions shows that the radial
epicyclic frequency is dominated by the Coulomb term, while the az-
imuthal frequency is slightly reduced due to the curvature contribution:
ω2
ϕ ≃−qQ
mr3 −Q2
r4 ,
ω2
r ≃−qQ
mr3 .
(49)
This difference in frequencies gives rise to a retrograde precession,
meaning that the orbit slowly rotates backward relative to the radial
oscillations. The precession per orbit can be expressed as
∆ϕ ≃2π
1 −ωϕ
ωr
≃2π
1 −
s
1 + mQ2
|qQ|rc
!
≃−πmQ2
|qQ|rc
= −πm2Q2
L2
.
(50)
The negative sign explicitly confirms that the precession is retrograde
[17].
Its magnitude is small, consistent with the weak-field approxi-
mation, and scales as Q2/L2, indicating that curvature effects become
significant only for tight orbits or large central charges. Thus, the weak-
field approximation provides a clear and physically intuitive description
of orbital dynamics in the presence of a central charged source. Circu-
lar orbits exist and are stable under small radial perturbations. Radial
oscillation frequencies increase with stronger Coulomb attraction and
decrease with higher angular momentum. The curvature-induced mod-
ification of the azimuthal frequency leads to a small retrograde preces-
sion, generalizing classical Keplerian dynamics to include leading-order
corrections. The effective potential U(r) offers a concise framework to
understand how electromagnetic forces, centrifugal barriers, and space-
time curvature together determine the orbital structure of charged par-
ticles.
The Figure 2 illustrates the dynamics of a charged particle with mass
m = 1 and charge q = −1 orbiting a central Coulomb charge |Q| = 2.4
for angular momenta L = 1, 2, 3. The radial energy E+(r) demonstrates
the combined contributions of the particle’s rest mass, Coulomb at-
traction, and angular momentum, with circular orbits rc identified as
vertical dashed lines and the outermost radial barrier r∗highlighted
in purple.
The effective potential Veff(r) emphasizes the purely ra-
dial energy landscape, showing the locations of circular orbits relative
to r∗. Radial oscillations r(t) around these orbits are depicted with
shaded envelopes representing the oscillation amplitude, demonstrating
the stability of motion near rc while respecting the minimum radius
r∗.
Two-dimensional precessing orbits in the xy plane reveal retro-
grade precession of periapsis due to the curvature term, with the orbit
envelopes showing the maximal and minimal radial excursions and the
outermost barrier r∗clearly indicated. Together, these panels visualize
how angular momentum and Coulomb interaction shape the particle’s
motion and the retrograde shift of orbital trajectories.
C.
Strong-Field Dynamics and Orbital Stability
In the strong-field limit, corresponding to u →π/2 (equivalently r →
r∗), it is convenient to introduce a small expansion parameter
u = π
2 −ǫ,
0 < ǫ ≪1,
for which the trigonometric functions diverge as
tan u = cot ǫ ≃1
ǫ −ǫ
3 + O(ǫ3),
sec u = csc ǫ ≃1
ǫ + ǫ
6 + O(ǫ3).
The future-directed energy branch then admits the expansion
E+(u) ≃q
ǫ +
s
m2
ǫ2 + L2(π/2)2
|Q|2
1
ǫ4 ,
(51)
where we consider q < 0 (particle) and Q > 0 (background/source). For
nonzero angular momentum (L ̸= 0), the centrifugal term dominates,
giving the leading scaling
E+(u) ∼
Lπ
2|Q|
1
ǫ2 .
For purely radial motion (L = 0), the divergence is milder:
E+(u) ∼1
ǫ .
This distinction shows that angular momentum strongly amplifies the
confining barrier, while radial trajectories approach it more gradually.
The ability of a radial particle to approach the outermost shell at r∗
depends on the charge-to-mass ratio |q|/m: typical values |q|/m < 1
enforce a turning point outside r∗, while larger ratios allow closer ap-
proach due to electrostatic attraction.
Circular orbits, if they exist,
must lie strictly outside the singular shell (r > r∗). The hypersurface
r = r∗acts as an impenetrable barrier: for L ̸= 0, the centrifugal diver-
gence ensures reflection before r∗; for L = 0, accessibility is controlled
by |q|/m and the conserved energy.
Radial dynamics are governed
by the function R(r) defined in Eq. (30), whose zeros specify turn-
ing points separating classically allowed and forbidden regions. In the
strong-field regime, these zeros accumulate near r∗, producing either
tightly confined oscillations or unstable equilibria. Orbital stability is
quantified by the proper-time radial epicyclic frequency ωr evaluated
at the circular orbit radius rc. The behavior of ω2
r is determined by
the curvature of the effective radial potential:
• Stable orbits (ω2
r > 0): R′′(rc) < 0. Small radial perturba-
tions lead to harmonic oscillations around rc.
• Marginal stability (ωr = 0): R′′(rc) = 0. The restoring force
vanishes; the orbit sits at the edge of stability. This typically
occurs for L = 0 and |q|/m ≲1, just outside r∗.
• Instability (ω2
r < 0, ωr imaginary): R′′(rc) > 0. Small radial
perturbations grow exponentially. This arises for L ̸= 0 or when
the centrifugal or electrostatic terms create a steep potential
slope near r∗.
7
Near r∗, the strong divergence of E+(u) imposes a hard-wall confine-
ment. For L ̸= 0, turning points are pushed outward, producing narrow
oscillatory regions; for L = 0, the approach to r∗is controlled by elec-
trostatic attraction and gravitational curvature. Circular orbits near
local maxima of Veff(r) are generically unstable, and stable orbits can-
not exist arbitrarily close to r∗.
The singular hypersurface at r∗partitions the radial domain into iso-
lated zones of motion, producing distinct families of bound and scat-
tering states.
This hard-wall confinement contrasts with black-hole
dynamics, where horizons, rather than divergent shells, impose bound-
aries. The strong-field regime complements the weak-field description:
at large radii, motion is approximately Keplerian with small retrograde
precession, while near r∗, dynamics are dominated by the diverging
effective potential.
Together, these limits provide a continuous and
unified picture of charged-particle motion across all accessible radial
scales.
IV.
MAPPING TO A ONE-ELECTRON ATOM
The dynamics of a charged particle on the background (5) may be
semiclassically mapped to a hydrogen-like one-electron system. This
correspondence is valid in the regime where characteristic orbital radii
satisfy r ≫|Q|(in units where c = 1 = ℏ), allowing metric functions
such as cos(|Q|/r), sec(|Q|/r), and tan(|Q|/r) to be systematically ex-
panded in powers of |Q|/r.
Particle velocities are assumed nonrela-
tivistic, with kinetic energies small compared to the rest mass energy
m (in units where c = 1), justifying a Schr¨odinger or semiclassical Bohr
description. The particle is treated as a test particle, so its electromag-
netic and gravitational backreaction is negligible. Finally, the quantum
probability density should remain concentrated far from the outermost
curvature singular shell r∗= 2|Q|/π, ensuring rapid convergence of
the perturbative expansion.
In this controlled regime, the dominant
dynamics is Coulombic, with curvature-induced corrections that are
small and systematically computable, in principle.
Starting from the exact first integral for timelike charged motion, we
denoted the positive-energy branch by E+(r). In the weak-field regime
r ≫|Q|, the expansion in (38) reads
E+(r) −m ≃qQ
r
+
L2
2mr2 + mQ2
2r2
+ O(r−3) .
This form defines the effective potential for slow particles:
Veff(r) ≡E+(r) −m ≃qQ
r
+
L2
2mr2 + mQ2
2r2
+ O(r−3) ,
where the leading term is Coulombic, the second is the centrifugal term,
and the third is a geometric correction due to curvature. Higher-order
terms modify the centrifugal structure with explicit |Q|/r dependence.
Within this approximation, one can map the system to hydrogenic vari-
ables as q ↔−e, Q ↔Ze, m ↔me, and L ↔nℏsemiclassically. The
Coulomb term then becomes −Ze2/r, and the semiclassical orbital ra-
dius follows from balancing centrifugal and Coulomb forces,
rc ≃
L2
m|qQ| .
(52)
With L = nℏand qQ = −e · Ze, this reproduces the Bohr-like radius
[13, 14]
an =
n2ℏ2
meZe2 ,
(53)
which establishes the expected semiclassical hierarchy in planar geom-
etry.
In the nonrelativistic quantum regime, the unperturbed Hamiltonian
H0 is
H0 = p2
2m + qQ
r .
(54)
However, this form does not fully capture the influence of the curved
spacetime (5). In the weak-field regime, the leading order geometric
correction
δV (r) = mQ2
2r2
(55)
can be treated perturbatively.
To first order, the energy shift of a
hydrogenic eigenstate |nℓ⟩is [13, 14]
∆E(1)
nℓ= ⟨nℓ|δV |nℓ⟩= mQ2
2
⟨r−2⟩nℓ,
(56)
with ⟨r−2⟩nℓfinite for all ℓ≥0. The expectation values ⟨r−2⟩nℓcan be
computed explicitly using 2+1 dimensional hydrogenic wavefunctions
[13, 14, 28], giving ⟨r−2⟩nℓ= 1/(a2
n(ℓ+ 1/2)) for ℓ≥0, consistent with
standard planar quantum mechanics [28].
The unperturbed binding
energies E(0)
n
are given by
E(0)
n
= E(0)
n
−m ≃−m(qQ)2
2ℏ2n2
,
(57)
which, for hydrogen (Z = 1, Q = e), yield
E(0)
1
≃−13.6 eV,
E(0)
2
≃−3.40 eV.
(58)
Here,
E(0)
n
≃
−
µe4
2(4πε0)2ℏ2n2 ,
where the reduced mass is µ
≈
me
1 −me
mp
, i.e., µ = 9.104425 × 10−31 kg and µ/me ≈0.999455
in SI units. The first-order curvature-induced corrections are
∆E(1)
1
≃0.27 eV,
∆E(1)
2
≃0.034 eV.
(59)
Hence, the total energies become
En = E(0)
n
+ ∆E(1)
n
≃−13.33 eV, −3.366 eV
for n = 1, 2.
(60)
These results confirm the validity of the perturbative approach (see also
Figure 3), since ∆E(1)
n
≪|E(0)
n
−E(0)
n+1|. Higher-order terms of order
O(r−3) are negligible for r ≫|Q|, ensuring rapid convergence of the
perturbative series [29].
The classical radial epicyclic frequency, derived from the effective po-
tential, satisfies ω2
r ≃
E′′
+(rc)
m
in the weak-field limit, with curvature
corrections entering at higher order in |Q|/r. Explicitly, differentiating
the expanded E+(r) gives E′′
+(r) = 2qQ/r3 + 3L2/(mr4) + 3mQ2/r4 +
O(r−5).
Evaluated at rc, this reproduces the classical radial oscil-
lation frequency, consistent with semiclassical hydrogenic predictions.
The semiclassical radial oscillation spectrum thus agrees with the hy-
drogenic semiclassical treatment to leading order, validating the energy
and radius identifications.
Nonetheless, the mapping is intrinsically approximate. The outermost
singular shell at r∗= 2|Q|/π constitutes a genuine geometric bound-
ary, conceptually analogous to a nuclear core: it strongly constrains
the wavefunction at short distances. Quantum states with apprecia-
ble support near r∗must satisfy boundary conditions that render the
Hamiltonian self-adjoint. Unlike a conventional nucleus, r∗is a curva-
ture singularity rather than a smooth potential, affecting both kinetic
and potential operators. Moreover, the exact gauge At = −tan(|Q|/r)
deviates from −Q/r at finite radii, introducing non-Coulombic features.
Spin and relativistic corrections acquire metric-dependent contribu-
tions, and tightly bound states may violate the test-particle approxi-
mation due to back-reaction. Different physically reasonable boundary
8
0
5
10
r
-15
-10
-5
0
Energy (eV)
1
2
3
n
0
0.05
0.1
0.15
0.2
0.25
0.3
0.270
0.034
Hydrogenic Energy Levels and Curvature Effects
FIG. 3. Hydrogenic energy levels (left) and curvature-induced shifts
(right).
The Coulomb potential is shown in blue, with unperturbed
hydrogenic energies for n = 1, 2 depicted as solid lines.
Curvature-
perturbed energies are indicated by dashed black lines. The bar plot
quantifies curvature-induced shifts, highlighting that the ground state
(n = 1) experiences the largest shift.
conditions correspond to inequivalent spectra, so the quantum prob-
lem is not uniquely specified without additional input. Quantitatively,
the analogy holds when typical orbital radius rtyp ≫|Q|, first-order
curvature-induced energy shifts remain small compared to interlevel
spacings, the test-particle approximation is valid, and wavefunction
leakage toward r∗is negligible.
In practice, one can expand the ef-
fective potential in powers of |Q|/r, treat δV (r) = mQ2/(2r2) + · · ·
perturbatively, and solve the Schr¨odinger equation with V0(r) = qQ/r
as the unperturbed Hamiltonian.
In the weak-field, non-relativistic
limit, the fully metric-corrected Klein-Gordon Hamiltonian reduces to
this form, providing a systematic justification for employing the per-
turbative approach. When the wavefunction approaches r∗or pertur-
bative corrections become significant, a fully relativistic treatment on
the exact metric with consistent boundary conditions is required. This
framework explains why the curved-space charged-particle problem is
not identical to a hydrogenic atom, while showing that the latter re-
mains a systematically improvable approximation, with the singular
shell playing a nuclear-like role in controlling short-distance quantum
behavior. Also, purely leptonic systems such as positronium cannot be
described by this curved-space hydrogenic analogy.
V.
CURVATURE-CORRECTED
THERMODYNAMIC PROPERTIES
The single-particle spectrum derived in Section IV can be incorporated
into canonical thermodynamics by constructing the partition function
over a controlled set of bound states. For definiteness, we restrict to
the s-wave manifold and adopt unperturbed hydrogenic energies (in
electronvolts)
E(0)
n
= −13.6
n2 ,
n = 1, 2, . . . ,
(61)
augmented by curvature-induced perturbative shifts ∆E(1)
n . Using the
findings
∆E(1)
1
≃+0.270 eV,
∆E(1)
2
≃+0.034 eV,
we consider a power-law interpolation of the shifts, yielding ∆E(1)
n
∝
n−p with p ≈3. Motivated by this observation and aiming for a mini-
mal phenomenological description, we may adopt a simple model
∆E(1)
n
= ∆E(1)
1
n3
,
n ≥1,
(62)
which reproduces the second-level shift ∆E(1)
2
≃0.0338 eV. Accord-
ingly, the total spectrum entering canonical sums is thus
En = E(0)
n
+ ∆E(1)
n .
(63)
The practical calculation requires truncating the Rydberg series at a
finite integer nmax. This truncation reflects the system’s finite spatial
extent, screening effects, or breakdown of the test-particle approxima-
tion; convergence must therefore be checked by varying nmax. With β ≡
1/(kBT) and energies in eV (so kB = 8.617333262145 × 10−5 eV/K),
the canonical partition function reads [30–33]
Z(β) =
nmax
X
n=1
gn e−βEn,
(64)
where gn = 1 for the s-wave truncation, and canonical occupation
probabilities are [30–33]
pn(β) = e−βEn
Z(β) .
(65)
Thermodynamic potentials are obtained in the standard way [30]:
F (β) = −1
β ln Z(β),
(66)
U(β) =
nmax
X
n=1
pn(β) En = −∂ln Z
∂β
,
(67)
S(β) = U(β) −F (β)
T
,
(68)
CV (β) = kBβ2
⟨E2⟩−⟨E⟩2
,
(69)
with ⟨X⟩≡P
n pnXn. Here, F is the Helmholtz free energy, U is the
internal energy, S is the entropy, and CV is the heat capacity at con-
stant volume. Moreover, the identity F = U −TS serves as a stringent
numerical consistency check.
All numerical values can be obtained
via a stable direct evaluation of the truncated sums. To avoid over-
flow/underflow in exponentials, we employ the log-sum-exp technique:
for a given set of energies {En}nmax
n=1 , we define Emin = minn En and
shifted weights ˜zn = exp[−β(En −Emin)]. The partition function is
then Z = ˜Z exp(−βEmin) with ˜Z = P
n ˜zn, and normalized probabili-
ties are pn = ˜zn/ ˜Z. Thermodynamic quantities follow as
F = −β−1(ln ˜Z −βEmin),
U =
X
n
pnEn,
S = U −F
T
,
CV = kBβ2
X
n
pnE2
n −U2
!
.
(70)
The same routine applies seamlessly to both the unperturbed and
curvature-corrected
spectra,
with the
resulting
curvature-induced
shifts, ∆X = X −X(0), evaluated directly.
Numerical verification
must obey that F = U −TS [30].
For small curvature corrections, it is instructive to expand to first order
in ∆E(1)
n . Defining the unperturbed partition function and probabili-
ties
Z(0)(β) =
nmax
X
n=1
e−βE(0)
n ,
p(0)
n (β) = e−βE(0)
n
Z(0)(β) ,
one finds to linear order
∆F ≃⟨∆E(1)⟩0,
(71)
∆U ≃⟨∆E(1)⟩0 −β
⟨E(0)∆E(1)⟩0 −⟨E(0)⟩0⟨∆E(1)⟩0
,
(72)
∆S ≃−kBβ2
⟨E(0)∆E(1)⟩0 −⟨E(0)⟩0⟨∆E(1)⟩0
,
(73)
9
while CV is computed directly from the variance definition for numer-
ical stability.
Convergence with respect to nmax must be carefully
tested.
Representative results are summarized in Table I. Represen-
TABLE I. Convergence test of curvature-induced free-energy shifts ∆F
[eV] and occupation probabilities at different truncation levels nmax
and temperatures T. Values computed using a numerically stable log-
sum-exp evaluation.
T [K]
nmax
∆F [eV]
p1
pnmax
300
100
0.27000
0.9999999999999
1.23 × 10−224
300
200
0.27000
0.9999999999999
1.18 × 10−224
300
300
0.27000
0.9999999999999
1.17 × 10−224
104
100
0.26999
0.999970
1.92 × 10−7
104
200
0.26999
0.999951
1.91 × 10−7
104
300
0.26998
0.999931
1.91 × 10−7
2 × 104
100
0.25870
0.954539
4.18 × 10−4
2 × 104
200
0.24904
0.916259
4.01 × 10−4
2 × 104
300
0.24008
0.880940
3.85 × 10−4
tative curvature-induced shifts of canonical thermodynamic quantities
are reported in Table II. At room temperature, the ensemble is essen-
tially confined to the ground state, so free and internal energies coincide
with the curvature-shifted ground-state value
F (0) ≃−13.600 eV,
F ≃−13.330 eV,
while entropy and heat capacity vanish. At higher temperatures, ther-
mal occupation of excited states produces finite curvature-induced cor-
rections. Free and internal energies are directly influenced by the mean
level correction, whereas entropy and heat capacity reflect the redistri-
bution of populations among excited states.
TABLE II. Curvature-induced shifts of canonical thermodynamic
quantities at representative temperatures.
Values computed with
nmax = 300.
T [K]
∆F [eV]
∆U [eV]
∆S [10−8 eV/K] ∆CV [10−7 eV/K]
300
+0.27000 +0.27000
0.00
0.00
104
+0.26998 +0.27022
2.36
3.11
VI.
SUMMARY AND DISCUSSION
We have performed a detailed analysis of the dynamics of charged test
particles in a static, spherically symmetric spacetime sourced solely
by an electric charge Q. This corresponds to the massless limit of a
charged wormhole solution in the Einstein-Maxwell-Scalar system. The
geometry, described by the metric in Eq. (5), contains an infinite series
of concentric curvature-singularity shells given in Eq. (6).
The out-
ermost shell at r∗= 2|Q|/π defines a true geometric boundary. For
particles with nonzero angular momentum (L ̸= 0), this shell acts as
an impenetrable barrier. For purely radial motion (L = 0), accessi-
bility depends on the charge-to-mass ratio |q|/m, with turning points
occurring outside r∗for particles approaching from infinity. The radial
domain is thus divided into separate regions, forming a confinement
structure reminiscent of classical potential walls.
Using the Lagrangian, we obtained exact first integrals for the temporal,
azimuthal, and radial motion. The dynamics is governed by two en-
ergy branches, E±(r), with the future-directed branch E+(r) describing
physical trajectories. The effective potential, expressed relative to the
particle rest mass m, includes contributions from both the Coulomb in-
teraction and spacetime curvature. In the weak-field regime (r ≫|Q|),
the potential reduces to the Coulomb form with a centrifugal term and
small curvature correction. These correction induces a retrograde peri-
apsis precession,
∆ϕ ≃−πm2Q2
L2
,
(74)
where the negative sign indicates a backward shift compared to the
Newtonian case. For attractive Coulomb interactions (qQ < 0), stable
circular orbits exist at
rc =
L2
m|qQ|,
(75)
to
leading
order,
and
the
radial
epicyclic
frequency
is
ω2
r
≃
m2|qQ|2/L6. Increasing Coulomb coupling strengthens binding, while
larger angular momentum lowers the oscillation frequency, reflecting
the classical balance between central attraction and centrifugal repul-
sion.
Near r∗, the effective potential diverges. Introducing ǫ = π
2 −|Q|/r,
one finds E+ ∼ǫ−1 for radial motion and E+ ∼ǫ−2 for nonradial mo-
tion. This divergence acts as a hard-wall barrier, which becomes more
restrictive with increasing angular momentum. For |q|/m < 1, the bar-
rier is softened for purely radial trajectories, while nonradial motion
remains strictly excluded. This establishes a hierarchy of confinement
strengths, comparable to hard-wall models familiar from quantum me-
chanics.
At sufficiently large radii, the system can be mapped to a hydrogen-
like system. The Coulomb term dominates the potential, the centrifu-
gal term balances orbital motion, and curvature corrections can be
treated perturbatively. Using the semiclassical correspondence q ↔−e,
Q ↔Ze, L ↔nℏ, and m ↔me, the outermost singular shell r∗plays a
role analogous to the atomic nucleus, providing a short-distance bound-
ary. The semiclassical orbital radii an ∼n2ℏ2/(m|qQ|) reproduce the
Bohr scaling, while the curvature-induced r−2 term yields small, sys-
tematically computable energy shifts ∆E(1).
This analogy is quan-
titatively reliable when the wavefunction is localized far from r∗and
perturbative corrections remain small compared to interlevel spacing.
The mapping thus provides a controlled connection between weak-field
Coulombic orbits and the strong-field confinement induced by the sin-
gular shell. The system exhibits two complementary regimes. At large
radii, particle motion resembles classical Coulomb or Keplerian dynam-
ics with minor curvature corrections. Close to the outermost singular
shell, motion is dominated by a steeply rising potential barrier that
enforces strong spatial confinement. This framework provides a contin-
uous description linking weak-field orbits to highly constrained dynam-
ics near the singular shell, connecting classical orbital mechanics with
exotic singular geometries.
Beyond the classical and semiclassical particle dynamics, curvature-
induced corrections to the effective potential have direct consequences
for the canonical thermodynamics of bound states. Constructing the
partition function over s-wave bound states with energy shifts ∆E(1)
n
shows that curvature systematically increases the free and internal ener-
gies, weakens binding, and enhances thermal ionization. These thermo-
dynamic effects become significant at temperatures comparable to the
energy scale of the lowest bound-state corrections, whereas at low tem-
peratures the ensemble remains effectively confined to the ground state.
Entropy and heat capacity are altered subtly through correlations be-
tween unperturbed energies and curvature-induced shifts, providing a
10
FIG. 4. Thermodynamic properties of the truncated hydrogenic spectrum with curvature corrections. Subplots (a–d) display the absolute canonical
quantities: Helmholtz free energy F (T), internal energy U(T), entropy S(T), and heat capacity CV (T) for nmax = 200, with solid black lines for the
unperturbed energies E(0)
n
and dashed blue lines including curvature shifts ∆E(1)
n . Subplots (e–h) present the curvature-induced differences ∆F ,
∆U, ∆S, and ∆CV . Subplots (i–l) show convergence for nmax = 100, 200, 300, illustrating the stability of canonical sums. Residuals F −(U −TS)
are smaller than 10−14 eV, confirming numerical consistency. All quantities are in eV or eV/K; the temperature axis is logarithmic to emphasize
low- and high-temperature regimes.
precise quantitative description of how the geometry modifies statisti-
cal properties. Integrating the results from classical particle dynam-
ics, semiclassical mapping, and curvature-corrected thermodynamics
establishes a consistent framework that links microscopic motion with
macroscopic statistical behavior, demonstrating that the singular shell
not only enforces spatial confinement but also produces measurable (in
principle) shifts in the thermal characteristics of the system.
The results establish a clear and analytically tractable framework for
charged-particle motion in horizonless, singular charged spacetimes.
The combination of integrability, smooth connection to Coulomb dy-
namics at large radii, and hard-wall confinement near the singular shell
demonstrates the value of this system as a theoretical laboratory for
studying charged matter in geometries determined entirely by electro-
magnetic fields.
Several extensions are suggested by this framework.
Studying null
geodesics could reveal the causal and optical properties of the singu-
lar shells, potentially producing distinctive lensing effects. A detailed
analysis of radial and azimuthal oscillation frequencies would relate the
results to classical celestial mechanics. Incorporating electromagnetic
self-force or radiation-reaction effects could extend the model to dissi-
pative systems. Semiclassical studies of wave propagation or quantized
bound states may highlight confinement effects similar to a particle-
in-a-box model. Finally, exploring rotational or perturbed versions of
the geometry would test whether the confinement mechanisms persist
in less symmetric conditions.
[1] H.
Reissner,
ҬUber
die
Eigengravitation
des
elek-
trischen
Feldes
nach
der
Einsteinschen
Theorie”,
Annalen der Physik 55, 106-120 (1916)
[2] G.
Nordstr¨om,
“On
the
Energy
of
the
Gravitational
Field
in
Einstein’s
Theory”,
Koninklijke Nederlandsche Akademie van Wetenschappen Proceedings 20, 1238-1245 (1918)
[3] S. Chandrasekhar, “The Mathematical Theory of Black Holes
(Vol. 69)”, Oxford University Press (1998).
[4] A. Einstein, N. Rosen, “The particle problem in the general theory
of relativity”, Physical Review 48, 73 (1935)
[5] M. Visser, “Lorentzian Wormholes: From Einstein to Hawking”,
American Institute of Physics (AIP Press) (1995).
[6] M. S. Morris and K. S. Thorne, “Wormholes in spacetime and their
use for interstellar travel: A tool for teaching general relativity”,
American Journal of Physics 56, 395–412 (1988)
[7] B.
Turimov,
A.
Abdujabbarov,
B.
Ahmedov,
Z.
Stuchl´ık,
“Exact
Charged
Traversable
Wormhole
Solution”,
Physics Letters B, 139800 (2025).
[8] A. Papapetrou, “Eine theorie des gravitationsfeldes mit einer feld-
funktion”, Zeitschrift f¨ur Physik 139, 518-532 (1954)
[9] M. ˇZofka, ”Bonnor-Melvin universe with a cosmological constant”
Physical Review D, 99 (2019) 044058
[10] O. Mustafa, A. Guvendi, ”Fermions in a (2+ 1)-dimensional mag-
netized spacetime with a cosmological constant: Domain walls and
spinning magnetic vortice” Physics Letters B, 866 (2025) 139569
[11] T.W.B. Kibble, F. H. Berkshire, “Classical Mechanics. 5th ed,”
(Imperial College Press, London 2004)
[12] M.D. Semon, J.R. Taylor, “Thoughts on the magnetic vector po-
tential”, American Journal of Physics, 64 1361-1369 (1996)
[13] J.J. Sakurai, J. Napolitano, “Modern quantum mechanics” (Cam-
bridge University Press 2020)
[14] K. Gottfried, “Quantum Mechanics:
Fundamentals (1st ed.)”
(CRC Press, Boca Raton (1974) 528 pages))
[15] J.D. Jackson,
“Classical Electrodynamics” (Wiley,
Hoboken,
1998)
[16] V.P.
Frolov,
A.A.
Shoom,
”Motion
of
charged
parti-
cles
near
a
weakly
magnetized
Schwarzschild
black
hole”
Physical Review D, 82 (2010) 084034
[17] P.
Bambhaniya,
M.J.
Vyas,
P.S.
Joshi,
Elisabete
M
de
G.
Dal
Pino,
”Retrograde
precession
of
relativis-
11
tic
orbits
and
the
quest
for
charged
black
holes”
Physics of the Dark Universe, 48 (2025) 101949
[18] D. Borka, V. Borka Jovanovi´c, S. Capozziello, A.F. Zakharov,
P. Jovanovi´c,
”Estimating the parameters of extended grav-
ity
theories
with
the
Schwarzschild
precession
of
S2
star”
Universe, 7 (2021) 407
[19] G.S. Adkins, J. McDonnell, “Orbital precession due to central-
force perturbations”, Physical Reviwe D, 75 082001 (2007).
[20] H.M. Siahaan,
”Merger estimates for Kerr-Sen black holes”
Physical Review D, 101 (2020) 064036
[21] G.M.
Deng,
“Self-consistent
geodesic
equation
and
quantum
tunneling
from
charged
AdS
black
holes”,
Journal of Physics: Conference Series, 942 012008 (2017)
[22] B. Turimov, A. Davlataliev, B. Ahmedov, Z. Stuchl´ık, ”Exploring
a novel feature of ellis spacetime: Insights into scalar field dynam-
ics” Chinese Journal of Physics, 94 (2025) 807–819
[23] E. Poisson, A. Pound, I. Vega, “The motion of point particles in
curved spacetime” Living Reviews in Relativity 14, 7 (2011)
[24] S. Gurtas Dogan, A. Guvendi, O. Mustafa,“Geometric and wave
optics in a BTZ optical metric-based wormhole,” Physics Letters
B 139824 (2025).
[25] D. Pugliese, H. Quevedo, R. Ruffini, ”General classification of
charged test particle circular orbits in Reissner–Nordstr¨om space-
time” European Physical Journal C, 77 (2017) 206
[26] M.A. Abramowicz, W. Klu´zniak, ”Epicyclic frequencies derived
from the effective potential:
simple and practical formulae”
Astrophysics and Space Science, 300 (2005) 127–136
[27] A.
Tursunov,
Z.
Stuchl´ık,
M.
Koloˇs,
”Circular orbits
and
related
quasiharmonic
oscillatory
motion
of
charged
par-
ticles
around
weakly
magnetized
rotating
black
holes”
Physical Review D, 93 (2016) 084012
[28] A.
Guvendi,
O.
Mustafa,
”An
innovative
model
for
coupled
fermion-antifermion
pairs”
European Physical Journal C, 84 (2024) 866
[29] M.M.
Stetsko,
V.M.
Tkachuk,
”Perturbation
hydrogen-
atom
spectrum
in
deformed
space
with
minimal
length”
Physical Review A, 74 (2006) 012101
[30] L.D. Landau, E.M. Lifshitz, ”Statistical Physics (Course of Theo-
retical Physics, 3rd Edition, Volume 5)” Elsevier (1980) 544 pages
[31] L.P. Kadanoff,
”Quantum statistical mechanics” CRC Press
(2018) 224 pages
[32] V. Ryabov, ”Principles of Statistical Physics and Numerical Mod-
eling” IOP Publishing (2018) 152 pages
[33] James
H.
Luscombe,
”Statistical
Mechanics:
From
Ther-
modynamics
to
the
Renormalization
Group
(1st
ed)”
CRC Press, Boca Raton (2021) 400 pages
|
19 Sep 2025 Charged particle dynamics in singular spacetimes: hydrogenic mapping and curvature-corrected thermodynamics Abdullah Guvendi ∗ 25050, Erzurum, T ̈urkiye Semra Gurtas Dogan † 30000, Hakkari, T ̈urkiye Omar Mustafa ‡ 99628, G. Magusa, north Cyprus, Mersin 10 - T ̈urkiye Hassan Hassanabadi § Departamento de F ́ısica Te ́orica, At ́omica y Optica and Laboratory for Disruptive Interdisciplinary Science (LaDIS), Universidad de Valladolid, 47011 Valladolid, Spain and ́alov ́e, Rokitansk ́eho 62, 500 03 Hradec Kr ́alov ́e, Czechia (Dated: September 23, 2025) We analyze the dynamics of charged test particles in a singular, horizonless spacetime arising as the massless limit of a charged wormhole in the Einstein-Maxwell-Scalar framework. The geometry, sustained solely by an electric charge Q, features an infinite sequence of curvature singularity shells, with the outermost at r∗= 2|Q|/π acting as a hard boundary for nonradial motion, while radial trajectories can access it depending on the particle's charge-to-mass ratio |q|/m. Exploiting exact first integrals, we construct the effective potential and obtain circular orbit radii, radial epicyclic frequencies, and azimuthal precession rates. In the weak-field limit (r ≫|Q|), the motion reduces to a Coulombic system with small curvatureinduced retrograde precession. At large radii, the dynamics maps to a hydrogenic system, with curvature corrections inducing perturbative energy shifts. Approaching r∗, the potential diverges, producing hard-wall confinement. Curvature corrections also modify the canonical thermodynamics, raising energies and slightly altering entropy and heat capacity. Our results characterize the transition from Newtonian-like orbits to strongly confined, curvature-dominated dynamics. CONTENTS I. Introduction 1 II. Charged Particle Dynamics 2 III. Analysis of Radial Motion, Particle Orbits, and Stability 4 A. Radial Motion and Effective Potential 4 Stability of Circular Orbits 4 B. Weak-Field Approximation and Orbital Stability 4 C. Strong-Field Dynamics and Orbital Stability 6 IV. Mapping to a one-electron atom 7 V. Curvature-Corrected Thermodynamic Properties 8 VI. Summary and Discussion 9 References 10 ∗ † ‡ § (Corresponding Author) I. INTRODUCTION Charged spacetimes in general relativity provide fundamental insights into the relationship between electromagnetic fields and spacetime curvature. Classical solutions, such as the Reissner-Nordstr ̈om black hole, illustrate how electric charge modifies spacetime geometry, giving rise to inner and outer horizons as well as central singularities [1-3]. These solutions, however, typically assume the presence of mass. This naturally raises the question: can electric charge alone, in the absence of mass, induce nontrivial spacetime curvature and support physically meaningful structures? Wormholes, first introduced by Einstein and Rosen, provide a theoretical framework to explore such questions [4]. These hypothetical structures connect distant regions of spacetime and, in principle, could act as shortcuts between them. While traversable wormholes generally require exotic matter and often violate classical energy conditions, the inclusion of electric charge adds a new layer of complexity. In charged wormhole geometries, electromagnetic fields can significantly modify the causal structure and the trajectories of test particles [5, 6], potentially allowing for configurations that circumvent classical energy condition violations. Recent investigations have extended these considerations to massless configurations, where electric charge alone shapes spacetime curvature. In particular, Turimov et al. [7] have obtained exact solutions of the Einstein-Maxwell-Scalar field equations for spherically symmetric charged wormholes characterized by mass M and charge Q. 2 Unlike classical charged black holes, these wormholes reveal a novel mechanism by which charge governs spacetime, motivating a detailed analysis of their dynamics and geometric properties. The spacetime under consideration is described by the static, spherically symmetric metric (in units G = c = 1) [7]: ds2 = -f(r) dt2 + f(r)-1dr2 + r2 dθ2 + sin2 θ dφ2 , (1) with the metric function f(r) = " cosh p M2 -Q2 r + M p M2 -Q2 sinh p M2 -Q2 r #-2 . (2) In the extremal limit M →|Q|, we have p M2 -Q2 →0. Expanding the hyperbolic functions for small arguments x = p M2 -Q2/r →0 using cosh x ≃1 + x2/2 and sinh x ≃x + x3/6, we obtain: cosh x + M p M2 -Q2 sinh x ≃1 + M r + O(x2) →1 + |Q| r . Hence, the metric function reduces to f(r) M→|Q| = 1 + |Q| r -2 . (3) Introducing the Schwarzschild-like radial coordinate R = r + |Q|, so that r = R -|Q|, the line element becomes ds2 = - 1 -|Q| R 2 dt2 + 1 -|Q| R -2 dR2 + (R -|Q|)2dΩ2, (4) where dΩ2 = dθ2 + sin2 θ dφ2. This geometry coincides with the extremal Reissner-Nordstr ̈om metric in the radial sector but exhibits a distinct angular sector due to the radial shift R 7→R -|Q|. In the neutral limit Q →0, it reduces to the classical Papapetrou "exponential" wormhole metric [8]. For |Q| > M, the hyperbolic functions become trigonometric, yielding oscillatory metrics and generically naked singularities. These features highlight the delicate relationship between mass and charge in determining the global structure of spacetime [1, 2, 7]. In the massless limit M = 0, electric charge |Q| alone generates spacetime curvature [7], resulting in the line element ds2 = - dt2 cos2(|Q|/r) + cos2(|Q|/r) dr2 + dΩ2 . (5) This metric exhibits curvature singularities at rn = |Q| (n + 1 2)π , n = 0, 1, 2, . . . , (6) where cos(|Q|/r) vanishes. Each singular shell acts as a dynamical barrier that confines timelike test particles. Analogies to confined magnetic configurations, such as the Bonnor-Melvin universe [9, 10], are formal and should not be interpreted as physical equivalence. The accessible radial region between successive singular shells is |Q| (n + 3 2)π Veff(r)), and the dashed vertical line marks the outermost singular shell position r∗. This visualization illustrates how the effective potential and the allowed regions depend on angular momentum and energy levels. Ebind(r) ≡Veff(r) -1. (26) This definition follows from considering the particle at rest at infinity: in this limit, E+ →m and Veff→1, so Ebind →0. Regions with Veff 1 indicates unbound motion. The radial motion is allowed where E ≥E+, linking turning 4 points directly to the effective potential [25]. Factorizing the radial equation: m2 ̇r2 = (E -E+)(E -E-) ≡R(r), (27) makes clear that ̇r2 ≥0 only in classically allowed regions. Circular orbits occur at rc > r∗where E′ +(rc) = 0, with stability determined via proper-time radial epicyclic frequency [26] ω2 r ≡-R′′(rc) 2m2 = E′′ +(rc) E+(rc) -E-(rc) 2m2 . (28) In the weak-field limit, E+-E-≃2m, giving ω2 r ≃E′′ +(rc)/m. Stability requires ω2 r > 0 or V ′′ eff(rc) > 0. The coordinate-time radial frequency is ̇t rc = E -q tan(|Q|/rc) m cos2(|Q|/rc), Ωr = ωr ̇t rc . (29) Figure 1 illustrates how Veff(r) depends on L and Q. Key features include: (i) higher L increases the centrifugal barrier, moving circular orbits outward; (ii) the depth of Veffindicates the strength of binding, with lower Veffcorresponding to more tightly bound orbits; (iii) the combined effect of spacetime curvature and the electric field produces barriers absent in Reissner-Nordstr ̈om spacetimes, making r∗impenetrable for L ̸= 0; (iv) for radial motion, accessibility of r∗depends on |q|/m and E. This figure thus encapsulates turning points, classically allowed regions, and the influence of conserved quantities on orbital stability. III. ANALYSIS OF RADIAL MOTION, PARTICLE ORBITS, AND STABILITY We now analyze in detail the dynamics of a classical charged test particle with rest mass m and charge q in the background geometry (5). Owing to the stationarity and spherical symmetry of the spacetime, there exist two Killing vectors, ∂t and ∂φ, which yield conserved energy and angular momentum along the particle's worldline. These constants of motion reduce the problem to an effective one-dimensional radial equation without the need for weak-field approximations [23]. A. Radial Motion and Effective Potential As shown in Sec. II, the radial dynamics can be cast in terms of two energy branches E±(r), associated with future- and past-directed timelike trajectories. Classical motion occurs when E ≥E+(r) for futuredirected trajectories, and E ≤E-(r) for past-directed trajectories. The spacetime singularities occur at the discrete radii determined by cos(|Q|/r) = 0 (cf. Eq. (6)), where the effective energies |E±| diverge. These singular hypersurfaces act as absolute kinematic barriers. The outermost such barrier, located at r∗= 2|Q|/π, bounds all physically realizable trajectories. For purely radial motion (L = 0), the divergences of the terms [E -q tan(|Q|/r)]2 and m2 sec2(|Q|/r) both become relevant as r →r∗. Now, let us introduce the dimensionless variable u = |Q|/r, mapping spatial infinity (r →∞) to u →0 and the singular barrier (r = r∗) to u →π/2. Since tan u ∼sec u as u →π/2, the near-barrier behavior depends sensitively on the ratio |q|/m and the conserved canonical energy E. In particular, for |q|/m ≲1, the particle is repelled before reaching r∗, while for |q|/m ≳1, the electrostatic attraction may partially compensate, allowing closer approach. To systematically analyze radial motion, we define the radial function R(r) ≡ E -q tan (|Q|/r) 2 - m2 cos2 (|Q|/r) - L2 r2 cos4 (|Q|/r), (30) so that the radial equation reduces to m2 ̇r2 = R(r), R(r) ≥0. (31) Physically, R(r) plays the role of the "radial kinetic energy squared": the particle can move only where R(r) ≥0. Turning points occur at R(r) = 0, corresponding to Veff(r) = E+/m. For nonzero angular momentum, the centrifugal term ∼L2/(r2 cos4(|Q|/r)) diverges at r∗, preventing penetration. Hence the physical domain is r > r∗. For orbits with L ̸= 0, circular orbits at r = rc satisfy simultaneously [23] R(rc) = 0, R′(rc) = 0. (32) The radial acceleration can be written as m2 ̈r = 1 2R′(r). (33) Stability of Circular Orbits To study stability, let us consider a small radial perturbation around a circular orbit [27]: r(t) = rc + δr(t), |δr| ≪rc, (34) and linearize R′(r) ≈R′′(rc) δr, since R′(rc) = 0. (35) Substitution into (33) gives harmonic oscillator equation: m2 ̈δr = 1 2 R′′(rc) δr ⇒ ̈δr = R′′(rc) 2m2 δr. (36) Defining the proper-time radial epicyclic frequency ωr with a conventional minus sign for stability: ω2 r ≡-R′′(rc) 2m2 , (37) so that ω2 r > 0 corresponds to stable orbits. Expressing in terms of the energy branches E± yields ω2 r = E′′ +(rc) E+(rc) -E-(rc) 2m2 . In the weak-field regime, E+ -E-≈2m, giving ω2 r ≃E′′ +(rc)/m. Stability is equivalent to a local minimum of the effective potential Veff(r). The coordinate-time radial frequency is obtained from the proper-time frequency via the relation Ωr = ωr dτ dt rc , dτ dt rc = m [E -q tan (|Q|/rc)] cos2 (|Q|/rc). This expression makes explicit how the radial oscillations in coordinate time are redshifted relative to proper time due to the spacetime geometry and electromagnetic interaction. The combined effect of angular momentum, charge-to-mass ratio, and the singular barrier r∗governs both the allowed radial domain and the stability properties of circular orbits. B. Weak-Field Approximation and Orbital Stability In the weak-field regime, defined by radial distances much larger than the characteristic scale of the central charge (note that r in units of Q), r ≫|Q|, the spacetime metric approaches the Minkowski form, with small perturbations due to both the electromagnetic field of the 5 FIG. 2. Dynamics of a charged particle with mass m = 1 and charge q = -1 around a central Coulomb charge |Q| = 2.4 for angular momenta L = 1, 2, 3. (a) Radial energy E+(r) showing contributions from the rest mass (black dashed line), Coulomb interaction, and angular momentum, with circular orbits rc indicated by vertical dashed lines and the outermost barrier r∗highlighted in purple. (b) Effective potential Veff(r) illustrating the radial dependence of the potential energy, with circular orbits and r∗similarly marked (purple). (c) Radial oscillations r(t) around circular orbits with shaded envelopes representing the oscillation amplitude, and r∗shown as a purple dashed line. (d) Two-dimensional precessing orbits in the xy plane, exhibiting retrograde precession around the central charge (black dot), with maximum and minimum radial excursions, and the outermost barrier r∗shown as a dashed purple circle. central charge and the curvature it induces. In this limit, the dynamics of a charged particle can be described by an effective energy function E+(r), which includes contributions from the particle's rest mass, electromagnetic potential, orbital angular momentum, and leading curvature corrections. Expanding E+(r) in powers of 1/r up to second order gives E+(r) ≃m + q|Q| r + L2 2mr2 + mQ2 2r2 + O(r-3), ⇒E+(r) -m ≃q|Q| r + L2 2mr2 + mQ2 2r2 = -κ r + β r2 , (38) where we define κ = |qQ|, β = (L2 + m2Q2)/(2m) and q 0, (44) confirming the stability of the orbit under small radial perturbations. Consequently, the proper-time radial epicyclic frequency can be expressed as ωr ≃m|qQ|2 L3 , (45) 6 up to minor corrections (cancelled) from the curvature term. This relation has a clear physical interpretation: stronger Coulomb attraction increases the radial oscillation frequency, while larger angular momentum reduces it due to the broader orbits associated with higher L. In the limit m →0, the radial frequency vanishes, consistent with the absence of a restoring force for massless particles. To investigate the azimuthal motion and the associated orbital precession [19], it is convenient to define an effective central potential incorporating both Coulomb and curvature effects: U(r) = qQ r + mQ2 2r2 . (46) The circular orbit condition can be equivalently written as L2 = m r3 U′(r), and the proper-time frequencies for small radial and azimuthal oscillations are given by ω2 φ = 1 mr U′(r), ω2 r = 1 m U′′(r) + 3L2 mr4 . (47) Differentiating the potential provides U′(r) = -qQ r2 -mQ2 r3 , U′′(r) = 2qQ r3 + 3mQ2 r4 . (48) Substituting these into the frequency expressions shows that the radial epicyclic frequency is dominated by the Coulomb term, while the azimuthal frequency is slightly reduced due to the curvature contribution: ω2 φ ≃-qQ mr3 -Q2 r4 , ω2 r ≃-qQ mr3 . (49) This difference in frequencies gives rise to a retrograde precession, meaning that the orbit slowly rotates backward relative to the radial oscillations. The precession per orbit can be expressed as ∆φ ≃2π 1 -ωφ ωr ≃2π 1 - s 1 + mQ2 |qQ|rc ! ≃-πmQ2 |qQ|rc = -πm2Q2 L2 . (50) The negative sign explicitly confirms that the precession is retrograde [17]. Its magnitude is small, consistent with the weak-field approximation, and scales as Q2/L2, indicating that curvature effects become significant only for tight orbits or large central charges. Thus, the weakfield approximation provides a clear and physically intuitive description of orbital dynamics in the presence of a central charged source. Circular orbits exist and are stable under small radial perturbations. Radial oscillation frequencies increase with stronger Coulomb attraction and decrease with higher angular momentum. The curvature-induced modification of the azimuthal frequency leads to a small retrograde precession, generalizing classical Keplerian dynamics to include leading-order corrections. The effective potential U(r) offers a concise framework to understand how electromagnetic forces, centrifugal barriers, and spacetime curvature together determine the orbital structure of charged particles. The Figure 2 illustrates the dynamics of a charged particle with mass m = 1 and charge q = -1 orbiting a central Coulomb charge |Q| = 2.4 for angular momenta L = 1, 2, 3. The radial energy E+(r) demonstrates the combined contributions of the particle's rest mass, Coulomb attraction, and angular momentum, with circular orbits rc identified as vertical dashed lines and the outermost radial barrier r∗highlighted in purple. The effective potential Veff(r) emphasizes the purely radial energy landscape, showing the locations of circular orbits relative to r∗. Radial oscillations r(t) around these orbits are depicted with shaded envelopes representing the oscillation amplitude, demonstrating the stability of motion near rc while respecting the minimum radius r∗. Two-dimensional precessing orbits in the xy plane reveal retrograde precession of periapsis due to the curvature term, with the orbit envelopes showing the maximal and minimal radial excursions and the outermost barrier r∗clearly indicated. Together, these panels visualize how angular momentum and Coulomb interaction shape the particle's motion and the retrograde shift of orbital trajectories. C. Strong-Field Dynamics and Orbital Stability In the strong-field limit, corresponding to u →π/2 (equivalently r → r∗), it is convenient to introduce a small expansion parameter u = π 2 -ǫ, 0 0 (background/source). For nonzero angular momentum (L ̸= 0), the centrifugal term dominates, giving the leading scaling E+(u) ∼ Lπ 2|Q| 1 ǫ2 . For purely radial motion (L = 0), the divergence is milder: E+(u) ∼1 ǫ . This distinction shows that angular momentum strongly amplifies the confining barrier, while radial trajectories approach it more gradually. The ability of a radial particle to approach the outermost shell at r∗ depends on the charge-to-mass ratio |q|/m: typical values |q|/m r∗). The hypersurface r = r∗acts as an impenetrable barrier: for L ̸= 0, the centrifugal divergence ensures reflection before r∗; for L = 0, accessibility is controlled by |q|/m and the conserved energy. Radial dynamics are governed by the function R(r) defined in Eq. (30), whose zeros specify turning points separating classically allowed and forbidden regions. In the strong-field regime, these zeros accumulate near r∗, producing either tightly confined oscillations or unstable equilibria. Orbital stability is quantified by the proper-time radial epicyclic frequency ωr evaluated at the circular orbit radius rc. The behavior of ω2 r is determined by the curvature of the effective radial potential: • Stable orbits (ω2 r > 0): R′′(rc) 0. Small radial perturbations grow exponentially. This arises for L ̸= 0 or when the centrifugal or electrostatic terms create a steep potential slope near r∗. 7 Near r∗, the strong divergence of E+(u) imposes a hard-wall confinement. For L ̸= 0, turning points are pushed outward, producing narrow oscillatory regions; for L = 0, the approach to r∗is controlled by electrostatic attraction and gravitational curvature. Circular orbits near local maxima of Veff(r) are generically unstable, and stable orbits cannot exist arbitrarily close to r∗. The singular hypersurface at r∗partitions the radial domain into isolated zones of motion, producing distinct families of bound and scattering states. This hard-wall confinement contrasts with black-hole dynamics, where horizons, rather than divergent shells, impose boundaries. The strong-field regime complements the weak-field description: at large radii, motion is approximately Keplerian with small retrograde precession, while near r∗, dynamics are dominated by the diverging effective potential. Together, these limits provide a continuous and unified picture of charged-particle motion across all accessible radial scales. IV. MAPPING TO A ONE-ELECTRON ATOM The dynamics of a charged particle on the background (5) may be semiclassically mapped to a hydrogen-like one-electron system. This correspondence is valid in the regime where characteristic orbital radii satisfy r ≫|Q|(in units where c = 1 = ħ), allowing metric functions such as cos(|Q|/r), sec(|Q|/r), and tan(|Q|/r) to be systematically expanded in powers of |Q|/r. Particle velocities are assumed nonrelativistic, with kinetic energies small compared to the rest mass energy m (in units where c = 1), justifying a Schr ̈odinger or semiclassical Bohr description. The particle is treated as a test particle, so its electromagnetic and gravitational backreaction is negligible. Finally, the quantum probability density should remain concentrated far from the outermost curvature singular shell r∗= 2|Q|/π, ensuring rapid convergence of the perturbative expansion. In this controlled regime, the dominant dynamics is Coulombic, with curvature-induced corrections that are small and systematically computable, in principle. Starting from the exact first integral for timelike charged motion, we denoted the positive-energy branch by E+(r). In the weak-field regime r ≫|Q|, the expansion in (38) reads E+(r) -m ≃qQ r + L2 2mr2 + mQ2 2r2 + O(r-3) . This form defines the effective potential for slow particles: Veff(r) ≡E+(r) -m ≃qQ r + L2 2mr2 + mQ2 2r2 + O(r-3) , where the leading term is Coulombic, the second is the centrifugal term, and the third is a geometric correction due to curvature. Higher-order terms modify the centrifugal structure with explicit |Q|/r dependence. Within this approximation, one can map the system to hydrogenic variables as q ↔-e, Q ↔Ze, m ↔me, and L ↔nħsemiclassically. The Coulomb term then becomes -Ze2/r, and the semiclassical orbital radius follows from balancing centrifugal and Coulomb forces, rc ≃ L2 m|qQ| . (52) With L = nħand qQ = -e · Ze, this reproduces the Bohr-like radius [13, 14] an = n2ħ2 meZe2 , (53) which establishes the expected semiclassical hierarchy in planar geometry. In the nonrelativistic quantum regime, the unperturbed Hamiltonian H0 is H0 = p2 2m + qQ r . (54) However, this form does not fully capture the influence of the curved spacetime (5). In the weak-field regime, the leading order geometric correction δV (r) = mQ2 2r2 (55) can be treated perturbatively. To first order, the energy shift of a hydrogenic eigenstate |nl⟩is [13, 14] ∆E(1) nl= ⟨nl|δV |nl⟩= mQ2 2 ⟨r-2⟩nl, (56) with ⟨r-2⟩nlfinite for all l≥0. The expectation values ⟨r-2⟩nlcan be computed explicitly using 2+1 dimensional hydrogenic wavefunctions [13, 14, 28], giving ⟨r-2⟩nl= 1/(a2 n(l+ 1/2)) for l≥0, consistent with standard planar quantum mechanics [28]. The unperturbed binding energies E(0) n are given by E(0) n = E(0) n -m ≃-m(qQ)2 2ħ2n2 , (57) which, for hydrogen (Z = 1, Q = e), yield E(0) 1 ≃-13.6 eV, E(0) 2 ≃-3.40 eV. (58) Here, E(0) n ≃ - μe4 2(4πε0)2ħ2n2 , where the reduced mass is μ ≈ me 1 -me mp , i.e., μ = 9.104425 × 10-31 kg and μ/me ≈0.999455 in SI units. The first-order curvature-induced corrections are ∆E(1) 1 ≃0.27 eV, ∆E(1) 2 ≃0.034 eV. (59) Hence, the total energies become En = E(0) n + ∆E(1) n ≃-13.33 eV, -3.366 eV for n = 1, 2. (60) These results confirm the validity of the perturbative approach (see also Figure 3), since ∆E(1) n ≪|E(0) n -E(0) n+1|. Higher-order terms of order O(r-3) are negligible for r ≫|Q|, ensuring rapid convergence of the perturbative series [29]. The classical radial epicyclic frequency, derived from the effective potential, satisfies ω2 r ≃ E′′ +(rc) m in the weak-field limit, with curvature corrections entering at higher order in |Q|/r. Explicitly, differentiating the expanded E+(r) gives E′′ +(r) = 2qQ/r3 + 3L2/(mr4) + 3mQ2/r4 + O(r-5). Evaluated at rc, this reproduces the classical radial oscillation frequency, consistent with semiclassical hydrogenic predictions. The semiclassical radial oscillation spectrum thus agrees with the hydrogenic semiclassical treatment to leading order, validating the energy and radius identifications. Nonetheless, the mapping is intrinsically approximate. The outermost singular shell at r∗= 2|Q|/π constitutes a genuine geometric boundary, conceptually analogous to a nuclear core: it strongly constrains the wavefunction at short distances. Quantum states with appreciable support near r∗must satisfy boundary conditions that render the Hamiltonian self-adjoint. Unlike a conventional nucleus, r∗is a curvature singularity rather than a smooth potential, affecting both kinetic and potential operators. Moreover, the exact gauge At = -tan(|Q|/r) deviates from -Q/r at finite radii, introducing non-Coulombic features. Spin and relativistic corrections acquire metric-dependent contributions, and tightly bound states may violate the test-particle approximation due to back-reaction. Different physically reasonable boundary 8 0 5 10 r -15 -10 -5 0 Energy (eV) 1 2 3 n 0 0.05 0.1 0.15 0.2 0.25 0.3 0.270 0.034 Hydrogenic Energy Levels and Curvature Effects FIG. 3. Hydrogenic energy levels (left) and curvature-induced shifts (right). The Coulomb potential is shown in blue, with unperturbed hydrogenic energies for n = 1, 2 depicted as solid lines. Curvatureperturbed energies are indicated by dashed black lines. The bar plot quantifies curvature-induced shifts, highlighting that the ground state (n = 1) experiences the largest shift. conditions correspond to inequivalent spectra, so the quantum problem is not uniquely specified without additional input. Quantitatively, the analogy holds when typical orbital radius rtyp ≫|Q|, first-order curvature-induced energy shifts remain small compared to interlevel spacings, the test-particle approximation is valid, and wavefunction leakage toward r∗is negligible. In practice, one can expand the effective potential in powers of |Q|/r, treat δV (r) = mQ2/(2r2) + · · · perturbatively, and solve the Schr ̈odinger equation with V0(r) = qQ/r as the unperturbed Hamiltonian. In the weak-field, non-relativistic limit, the fully metric-corrected Klein-Gordon Hamiltonian reduces to this form, providing a systematic justification for employing the perturbative approach. When the wavefunction approaches r∗or perturbative corrections become significant, a fully relativistic treatment on the exact metric with consistent boundary conditions is required. This framework explains why the curved-space charged-particle problem is not identical to a hydrogenic atom, while showing that the latter remains a systematically improvable approximation, with the singular shell playing a nuclear-like role in controlling short-distance quantum behavior. Also, purely leptonic systems such as positronium cannot be described by this curved-space hydrogenic analogy. V. CURVATURE-CORRECTED THERMODYNAMIC PROPERTIES The single-particle spectrum derived in Section IV can be incorporated into canonical thermodynamics by constructing the partition function over a controlled set of bound states. For definiteness, we restrict to the s-wave manifold and adopt unperturbed hydrogenic energies (in electronvolts) E(0) n = -13.6 n2 , n = 1, 2, . . . , (61) augmented by curvature-induced perturbative shifts ∆E(1) n . Using the findings ∆E(1) 1 ≃+0.270 eV, ∆E(1) 2 ≃+0.034 eV, we consider a power-law interpolation of the shifts, yielding ∆E(1) n ∝ n-p with p ≈3. Motivated by this observation and aiming for a minimal phenomenological description, we may adopt a simple model ∆E(1) n = ∆E(1) 1 n3 , n ≥1, (62) which reproduces the second-level shift ∆E(1) 2 ≃0.0338 eV. Accordingly, the total spectrum entering canonical sums is thus En = E(0) n + ∆E(1) n . (63) The practical calculation requires truncating the Rydberg series at a finite integer nmax. This truncation reflects the system's finite spatial extent, screening effects, or breakdown of the test-particle approximation; convergence must therefore be checked by varying nmax. With β ≡ 1/(kBT) and energies in eV (so kB = 8.617333262145 × 10-5 eV/K), the canonical partition function reads [30-33] Z(β) = nmax X n=1 gn e-βEn, (64) where gn = 1 for the s-wave truncation, and canonical occupation probabilities are [30-33] pn(β) = e-βEn Z(β) . (65) Thermodynamic potentials are obtained in the standard way [30]: F (β) = -1 β ln Z(β), (66) U(β) = nmax X n=1 pn(β) En = -∂ln Z ∂β , (67) S(β) = U(β) -F (β) T , (68) CV (β) = kBβ2 ⟨E2⟩-⟨E⟩2 , (69) with ⟨X⟩≡P n pnXn. Here, F is the Helmholtz free energy, U is the internal energy, S is the entropy, and CV is the heat capacity at constant volume. Moreover, the identity F = U -TS serves as a stringent numerical consistency check. All numerical values can be obtained via a stable direct evaluation of the truncated sums. To avoid overflow/underflow in exponentials, we employ the log-sum-exp technique: for a given set of energies {En}nmax n=1 , we define Emin = minn En and shifted weights ̃zn = exp[-β(En -Emin)]. The partition function is then Z = ̃Z exp(-βEmin) with ̃Z = P n ̃zn, and normalized probabilities are pn = ̃zn/ ̃Z. Thermodynamic quantities follow as F = -β-1(ln ̃Z -βEmin), U = X n pnEn, S = U -F T , CV = kBβ2 X n pnE2 n -U2 ! . (70) The same routine applies seamlessly to both the unperturbed and curvature-corrected spectra, with the resulting curvature-induced shifts, ∆X = X -X(0), evaluated directly. Numerical verification must obey that F = U -TS [30]. For small curvature corrections, it is instructive to expand to first order in ∆E(1) n . Defining the unperturbed partition function and probabilities Z(0)(β) = nmax X n=1 e-βE(0) n , p(0) n (β) = e-βE(0) n Z(0)(β) , one finds to linear order ∆F ≃⟨∆E(1)⟩0, (71) ∆U ≃⟨∆E(1)⟩0 -β ⟨E(0)∆E(1)⟩0 -⟨E(0)⟩0⟨∆E(1)⟩0 , (72) ∆S ≃-kBβ2 ⟨E(0)∆E(1)⟩0 -⟨E(0)⟩0⟨∆E(1)⟩0 , (73) 9 while CV is computed directly from the variance definition for numerical stability. Convergence with respect to nmax must be carefully tested. Representative results are summarized in Table I. RepresenTABLE I. Convergence test of curvature-induced free-energy shifts ∆F [eV] and occupation probabilities at different truncation levels nmax and temperatures T. Values computed using a numerically stable logsum-exp evaluation. T [K] nmax ∆F [eV] p1 pnmax 300 100 0.27000 0.9999999999999 1.23 × 10-224 300 200 0.27000 0.9999999999999 1.18 × 10-224 300 300 0.27000 0.9999999999999 1.17 × 10-224 104 100 0.26999 0.999970 1.92 × 10-7 104 200 0.26999 0.999951 1.91 × 10-7 104 300 0.26998 0.999931 1.91 × 10-7 2 × 104 100 0.25870 0.954539 4.18 × 10-4 2 × 104 200 0.24904 0.916259 4.01 × 10-4 2 × 104 300 0.24008 0.880940 3.85 × 10-4 tative curvature-induced shifts of canonical thermodynamic quantities are reported in Table II. At room temperature, the ensemble is essentially confined to the ground state, so free and internal energies coincide with the curvature-shifted ground-state value F (0) ≃-13.600 eV, F ≃-13.330 eV, while entropy and heat capacity vanish. At higher temperatures, thermal occupation of excited states produces finite curvature-induced corrections. Free and internal energies are directly influenced by the mean level correction, whereas entropy and heat capacity reflect the redistribution of populations among excited states. TABLE II. Curvature-induced shifts of canonical thermodynamic quantities at representative temperatures. Values computed with nmax = 300. T [K] ∆F [eV] ∆U [eV] ∆S [10-8 eV/K] ∆CV [10-7 eV/K] 300 +0.27000 +0.27000 0.00 0.00 104 +0.26998 +0.27022 2.36 3.11 VI. SUMMARY AND DISCUSSION We have performed a detailed analysis of the dynamics of charged test particles in a static, spherically symmetric spacetime sourced solely by an electric charge Q. This corresponds to the massless limit of a charged wormhole solution in the Einstein-Maxwell-Scalar system. The geometry, described by the metric in Eq. (5), contains an infinite series of concentric curvature-singularity shells given in Eq. (6). The outermost shell at r∗= 2|Q|/π defines a true geometric boundary. For particles with nonzero angular momentum (L ̸= 0), this shell acts as an impenetrable barrier. For purely radial motion (L = 0), accessibility depends on the charge-to-mass ratio |q|/m, with turning points occurring outside r∗for particles approaching from infinity. The radial domain is thus divided into separate regions, forming a confinement structure reminiscent of classical potential walls. Using the Lagrangian, we obtained exact first integrals for the temporal, azimuthal, and radial motion. The dynamics is governed by two energy branches, E±(r), with the future-directed branch E+(r) describing physical trajectories. The effective potential, expressed relative to the particle rest mass m, includes contributions from both the Coulomb interaction and spacetime curvature. In the weak-field regime (r ≫|Q|), the potential reduces to the Coulomb form with a centrifugal term and small curvature correction. These correction induces a retrograde periapsis precession, ∆φ ≃-πm2Q2 L2 , (74) where the negative sign indicates a backward shift compared to the Newtonian case. For attractive Coulomb interactions (qQ < 0), stable circular orbits exist at rc = L2 m|qQ|, (75) to leading order, and the radial epicyclic frequency is ω2 r ≃ m2|qQ|2/L6. Increasing Coulomb coupling strengthens binding, while larger angular momentum lowers the oscillation frequency, reflecting the classical balance between central attraction and centrifugal repulsion. Near r∗, the effective potential diverges. Introducing ǫ = π 2 -|Q|/r, one finds E+ ∼ǫ-1 for radial motion and E+ ∼ǫ-2 for nonradial motion. This divergence acts as a hard-wall barrier, which becomes more restrictive with increasing angular momentum. For |q|/m < 1, the barrier is softened for purely radial trajectories, while nonradial motion remains strictly excluded. This establishes a hierarchy of confinement strengths, comparable to hard-wall models familiar from quantum mechanics. At sufficiently large radii, the system can be mapped to a hydrogenlike system. The Coulomb term dominates the potential, the centrifugal term balances orbital motion, and curvature corrections can be treated perturbatively. Using the semiclassical correspondence q ↔-e, Q ↔Ze, L ↔nħ, and m ↔me, the outermost singular shell r∗plays a role analogous to the atomic nucleus, providing a short-distance boundary. The semiclassical orbital radii an ∼n2ħ2/(m|qQ|) reproduce the Bohr scaling, while the curvature-induced r-2 term yields small, systematically computable energy shifts ∆E(1). This analogy is quantitatively reliable when the wavefunction is localized far from r∗and perturbative corrections remain small compared to interlevel spacing. The mapping thus provides a controlled connection between weak-field Coulombic orbits and the strong-field confinement induced by the singular shell. The system exhibits two complementary regimes. At large radii, particle motion resembles classical Coulomb or Keplerian dynamics with minor curvature corrections. Close to the outermost singular shell, motion is dominated by a steeply rising potential barrier that enforces strong spatial confinement. This framework provides a continuous description linking weak-field orbits to highly constrained dynamics near the singular shell, connecting classical orbital mechanics with exotic singular geometries. Beyond the classical and semiclassical particle dynamics, curvatureinduced corrections to the effective potential have direct consequences for the canonical thermodynamics of bound states. Constructing the partition function over s-wave bound states with energy shifts ∆E(1) n shows that curvature systematically increases the free and internal energies, weakens binding, and enhances thermal ionization. These thermodynamic effects become significant at temperatures comparable to the energy scale of the lowest bound-state corrections, whereas at low temperatures the ensemble remains effectively confined to the ground state. Entropy and heat capacity are altered subtly through correlations between unperturbed energies and curvature-induced shifts, providing a 10 FIG. 4. Thermodynamic properties of the truncated hydrogenic spectrum with curvature corrections. Subplots (a-d) display the absolute canonical quantities: Helmholtz free energy F (T), internal energy U(T), entropy S(T), and heat capacity CV (T) for nmax = 200, with solid black lines for the unperturbed energies E(0) n and dashed blue lines including curvature shifts ∆E(1) n . Subplots (e-h) present the curvature-induced differences ∆F , ∆U, ∆S, and ∆CV . Subplots (i-l) show convergence for nmax = 100, 200, 300, illustrating the stability of canonical sums. Residuals F -(U -TS) are smaller than 10-14 eV, confirming numerical consistency. All quantities are in eV or eV/K; the temperature axis is logarithmic to emphasize low- and high-temperature regimes. precise quantitative description of how the geometry modifies statistical properties. Integrating the results from classical particle dynamics, semiclassical mapping, and curvature-corrected thermodynamics establishes a consistent framework that links microscopic motion with macroscopic statistical behavior, demonstrating that the singular shell not only enforces spatial confinement but also produces measurable (in principle) shifts in the thermal characteristics of the system. The results establish a clear and analytically tractable framework for charged-particle motion in horizonless, singular charged spacetimes. The combination of integrability, smooth connection to Coulomb dynamics at large radii, and hard-wall confinement near the singular shell demonstrates the value of this system as a theoretical laboratory for studying charged matter in geometries determined entirely by electromagnetic fields. Several extensions are suggested by this framework. Studying null geodesics could reveal the causal and optical properties of the singular shells, potentially producing distinctive lensing effects. A detailed analysis of radial and azimuthal oscillation frequencies would relate the results to classical celestial mechanics. Incorporating electromagnetic self-force or radiation-reaction effects could extend the model to dissipative systems. Semiclassical studies of wave propagation or quantized bound states may highlight confinement effects similar to a particlein-a-box model. Finally, exploring rotational or perturbed versions of the geometry would test whether the confinement mechanisms persist in less symmetric conditions. [1] H. Reissner, " ̈Uber die Eigengravitation des elektrischen Feldes nach der Einsteinschen Theorie", Annalen der Physik 55, 106-120 (1916) [2] G. Nordstr ̈om, "On the Energy of the Gravitational Field in Einstein's Theory", Koninklijke Nederlandsche Akademie van Wetenschappen Proceedings 20, 1238-1245 (1918) [3] S. Chandrasekhar, "The Mathematical Theory of Black Holes (Vol. 69)", Oxford University Press (1998). [4] A. Einstein, N. Rosen, "The particle problem in the general theory of relativity", Physical Review 48, 73 (1935) [5] M. Visser, "Lorentzian Wormholes: From Einstein to Hawking", American (AIP Press) (1995). [6] M. S. Morris and K. S. Thorne, "Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity", American Journal of Physics 56, 395-412 (1988) [7] B. Turimov, A. Abdujabbarov, B. Ahmedov, Z. Stuchl ́ık, "Exact Charged Traversable Wormhole Solution", Physics Letters B, 139800 (2025). [8] A. Papapetrou, "Eine theorie des gravitationsfeldes mit einer feldfunktion", Zeitschrift f ̈ur Physik 139, 518-532 (1954) [9] M. ˇZofka, "Bonnor-Melvin universe with a cosmological constant" Physical Review D, 99 (2019) 044058 [10] O. Mustafa, A. Guvendi, "Fermions in a (2+ 1)-dimensional magnetized spacetime with a cosmological constant: Domain walls and spinning magnetic vortice" Physics Letters B, 866 (2025) 139569 [11] T.W.B. Kibble, F. H. Berkshire, "Classical Mechanics. 5th ed," (Imperial College Press, London 2004) [12] M.D. Semon, J.R. Taylor, "Thoughts on the magnetic vector potential", American Journal of Physics, 64 1361-1369 (1996) [13] J.J. Sakurai, J. Napolitano, "Modern quantum mechanics" (Cambridge University Press 2020) [14] K. Gottfried, "Quantum Mechanics: Fundamentals (1st ed.)" (CRC Press, Boca Raton (1974) 528 pages)) [15] J.D. Jackson, "Classical Electrodynamics" (Wiley, Hoboken, 1998) [16] V.P. Frolov, A.A. Shoom, "Motion of charged particles near a weakly magnetized Schwarzschild black hole" Physical Review D, 82 (2010) 084034 [17] P. Bambhaniya, M.J. Vyas, P.S. Joshi, Elisabete M de G. Dal Pino, "Retrograde precession of relativis11 tic orbits and the quest for charged black holes" Physics of the Dark Universe, 48 (2025) 101949 [18] D. Borka, V. Borka Jovanovi ́c, S. Capozziello, A.F. Zakharov, P. Jovanovi ́c, "Estimating the parameters of extended gravity theories with the Schwarzschild precession of S2 star" Universe, 7 (2021) 407 [19] G.S. Adkins, J. McDonnell, "Orbital precession due to centralforce perturbations", Physical Reviwe D, 75 082001 (2007). [20] H.M. Siahaan, "Merger estimates for Kerr-Sen black holes" Physical Review D, 101 (2020) 064036 [21] G.M. Deng, "Self-consistent geodesic equation and quantum tunneling from charged AdS black holes", Journal of Physics: Conference Series, 942 012008 (2017) [22] B. Turimov, A. Davlataliev, B. Ahmedov, Z. Stuchl ́ık, "Exploring a novel feature of ellis spacetime: Insights into scalar field dynamics" Chinese Journal of Physics, 94 (2025) 807-819 [23] E. Poisson, A. Pound, I. Vega, "The motion of point particles in curved spacetime" Living Reviews in Relativity 14, 7 (2011) [24] S. Gurtas Dogan, A. Guvendi, O. Mustafa,"Geometric and wave optics in a BTZ optical metric-based wormhole," Physics Letters B 139824 (2025). [25] D. Pugliese, H. Quevedo, R. Ruffini, "General classification of charged test particle circular orbits in Reissner-Nordstr ̈om spacetime" European Physical Journal C, 77 (2017) 206 [26] M.A. Abramowicz, W. Klu ́zniak, "Epicyclic frequencies derived from the effective potential: simple and practical formulae" Astrophysics and Space Science, 300 (2005) 127-136 [27] A. Tursunov, Z. Stuchl ́ık, M. Koloˇs, "Circular orbits and related quasiharmonic oscillatory motion of charged particles around weakly magnetized rotating black holes" Physical Review D, 93 (2016) 084012 [28] A. Guvendi, O. Mustafa, "An innovative model for coupled fermion-antifermion pairs" European Physical Journal C, 84 (2024) 866 [29] M.M. Stetsko, V.M. Tkachuk, "Perturbation hydrogenatom spectrum in deformed space with minimal length" Physical Review A, 74 (2006) 012101 [30] L.D. Landau, E.M. Lifshitz, "Statistical Physics (Course of Theoretical Physics, 3rd Edition, Volume 5)" Elsevier (1980) 544 pages [31] L.P. Kadanoff, "Quantum statistical mechanics" CRC Press (2018) 224 pages [32] V. Ryabov, "Principles of Statistical Physics and Numerical Modeling" IOP Publishing (2018) 152 pages [33] James H. Luscombe, "Statistical Mechanics: From Thermodynamics to the Renormalization Group (1st ed)" CRC Press, Boca Raton (2021) 400 pages
|
2509.16290
|
1
Facile Synthesis and On-Chip Color Tuning of CsPbBr3@CsPbBr3-xTFAx
Nanoplatelets via Ion Engineering
Sana Khan1,2, Saeed Goudarzi2, Stephan Schäffer3, Lars Sonneveld4, Bart Macco5, Ke Ran1,6,7, Naho Kurahashi8,9,
Peter Haring Bolívar
3, Bruno Ehrler4, Thomas Riedl
8,9, Gerhard Müller-Newen10, Surendra. B. Anantharaman
1,11,
Maryam Mohammadi
1, *, Max. C. Lemme
1,2, *
1AMO GmbH, Otto-Blumenthal-Straße 25, 52074 Aachen, Germany
2RWTH Aachen University, Chair of Electronic Devices, Otto-Blumenthal-Str. 25, 52074 Aachen, Germany
3Institute for High Frequency and Quantum Electronics, University of Siegen, 57076 Siegen, Germany
4LMPV-Sustainable Energy Materials Department, AMOLF Science Park 104, 1098 XG Amsterdam, The
Netherlands
5Department of Applied Physics and Science Education, Eindhoven University of Technology, P.O. Box 513, 5600,
MB, Eindhoven, the Netherlands
6Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Ahornstr. 55, 52074 Aachen,
Germany
7Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons ER-C, Forschungszentrum Jülich GmbH,
Jülich 52428, Germany
8Institute of Electronic Devices, University of Wuppertal, Rainer-Gruenter-Str. 21, 42119 Wuppertal, Germany
9Wuppertal Center for Smart Materials & Systems (CM@S), University of Wuppertal, Rainer-Gruenter-Str. 21,
42119 Wuppertal, Germany
10Institute of Biochemistry and Molecular Biology, Uniklinik RWTH Aachen, Pauwelsstrasse 30, Aachen, Germany
11Low-dimensional Semiconductors Lab, Department of Metallurgical and Materials Engineering, Indian Institute
of Technology Madras, Chennai 600 036, India
*Corresponding Authors: mohammadi@amo.de; max.lemme@eld.rwth-aachen.de
2
Abstract
Metal halide perovskites (MHPs) have emerged as attractive optoelectronic materials because
of high fluorescence quantum yield, broad color tunability, and excellent color purity. However,
the ionic nature of MHPs makes them susceptible to polar solvents, leading to defect-induced
nonradiative recombination and photoluminescence (PL) quenching. Here, we present a
combined in-synthesis (in situ) and post-synthesis ion engineering to suppress nonradiative
recombination and integrate multicolor MHP arrays on-chip through a perovskite-compatible
photolithography process and in situ vapor-phase anion exchange. CsPbBr3@CsPbBr3-xTFAx
nanoplatelets were grown on-chip via a single-step solution process incorporating
trifluoroacetate (TFA−) pseudohalides. X-ray photoelectron spectroscopy revealed that TFA−
passivate uncoordinated Pb2+ ions on nanoplatelet surface and suppresses the formation of
metallic lead (Pb0). This decreases the non-radiative recombination centers and yields a PL peak
at 520 nm with a linewidth of 14.56 ± 0.5 nm. The nanoplatelets were patterned via a top-down
photolithography process and selectively masked with a PMMA/Al2O3 stack to enable vapor-
phase anion exchange. The PL peak shifted in the unmasked regions from 520 nm to 413 nm,
resulting in distinct green and blue emission arrays. Our method enables the scalable fabrication
of highly luminescent, two-color MHP arrays with tailored optical properties, advancing their
integration into next-generation optoelectronic devices.
3
1.1
Introduction
Metal halide perovskites (MHPs), including nanowires and nanoplatelets, have garnered
significant attention because of their high photoluminescence quantum yields and narrowband
emission.1,2 Additionally, their emission wavelength can be precisely tuned by adjusting the
halide composition.3,4 Furthermore, these nanostructures can support optical microcavities,
such as Fabry-Perot and whispering gallery modes, facilitating strong light confinement and
amplification.5,6 Despite these advantages, practical applications of perovskites in
optoelectronics are limited by their inherent instability, which arises from their ionic nature and
leads to defects that provide nonradiative recombination pathways.
Several defect passivation methods have been proposed to suppress nonradiative
recombination, such as surface chemical treatments,7–9 core-shell configurations,10–14 and ion
engineering.15–20 Surface chemical treatments are post-synthetic processes in which small
molecules or ligands are employed to neutralize trap states, primarily on the surface, resulting
in improved PL.7–9 Core-shell structures, on the other hand, are formed during synthesis and
involve encapsulating the perovskite core with a protective shell. These heterostructures
enhance both the environmental and structural stability while also reducing the surface defect
density.10–14 Ion engineering enables defect passivation through direct modification of the
perovskite lattice, either during synthesis (in situ) or post-synthesis.15–20 In situ techniques
involve introducing alkylammonium cations 21–23 and pseudohalides, which can incorporate
either into the lattice or at the surface of the MHP. 18,24–26 The incorporation of pseudohalides
such as thiocyanate (SCN⁻),27,28 formate (HCOO−),29 acetate (CH3COO−),
30 and
tetrafluoroborate (BF4−)31 controls the crystallization process, enhances the preferential crystal
orientation, and assists in defect passivation.26 Pseudohalides functionalized with carboxyl
(O═C─O−) groups act as Lewis bases to passivate undercoordinated Pb2+ ions, resulting in well-
oriented crystals with fewer trap states.18,32–34 Additionally, fluorine-containing pseudohalides
4
improve the ambient stability and surface hydrophobicity of MHPs.35 Trifluoroacetate (TFA−)
halides, which contain both carboxyl and fluorine functional groups, have emerged as
promising candidates for influencing homogeneous nucleation and crystal growth orientation,
enhancing defect passivation, and improving long-term stability.19,36,37
Despite the ability to tune the emission color of perovskite materials across the visible spectrum
by adjusting the halide composition,38 the in situ synthesis of chlorine-containing, blue-emitting
perovskites remains limited.39 This limitation arises from the low solubility of chlorine-based
precursors, such as lead chloride (PbCl2) and cesium chloride (CsCl), in common solvents such
as dimethyl sulfoxide (DMSO) and dimethylformamide (DMF),40–43 which leads to poor film
quality with high defect densities. In addition to in situ techniques, post-synthesis approaches,
particularly anion exchange, play a crucial role in fine-tuning the optical emission of MHPs.3,4
Anion exchange can be performed in the liquid or vapor phase,3,44,45 although precise, selective
liquid‒phase ion exchange is constrained by precursor solubility, slow ion exchange kinetics,
and the dissolution of the original MHPs. In contrast, vapor-phase methods could avoid these
effects, offering a more controlled approach that allows for spatially selective ion exchange and
accurate color tuning.46
Here, we combine in situ and post-synthesis ion engineering approaches to tune the optical
properties of all-inorganic cesium lead bromide (CsPbBr3) perovskite. We propose a single-step
solution-based approach for the in situ growth of perovskite nanoplatelets via cesium
trifluoroacetate (CsTFA), which facilitates the dissolution of lead bromide (PbBr2) in organic
36 provides cesium ions (Cs+), and introduces TFA− as a pseudohalide anion. TFA− promotes
preferential crystal orientation and reduces surface defects, resulting in highly luminescent all-
inorganic perovskite nanoplatelets. The nanoplatelets are denoted as CsPbBr3@CsPbBr3-xTFAx.
We applied our versatile perovskite-compatible top-down photolithography technique to pattern
the nanoplatelets.47,48 We then performed a facile vapor-phase anion exchange process to create
5
two distinct emissive arrays by selectively converting bromine-based green-emissive
nanoplatelets into chlorine-based blue-emissive CsPbCl3 nanoplatelets. The resulting CsPbCl3
exhibited improved phase stability under ambient conditions.
1.2
Results and Discussion
We first investigated the influence of TFA− on the perovskite structure and properties by
preparing two types of samples: CsPbBr3@CsPbBr3-xTFAx nanoplatelets and reference
CsPbBr3 thin films. Both were fabricated via spin-coating followed by post-annealing at 80°C.
The nanoplatelets were synthesized from a precursor solution containing CsTFA and lead
bromide (PbBr2), whereas the thin films were deposited from a solution of cesium bromide
(CsBr) and PbBr2 (details can be found in the Experimental Section). For simplicity, in the
following discussion, the notation CsPbBr3@TFA refers to the CsPbBr3@CsPbBr3-xTFAx
nanoplatelets, and CsPbBr3 denotes the thin films. A schematic illustration of the synthesis
process for CsPbBr3@TFA nanoplatelets is shown in Figure 1a. The scanning electron
microscopy (SEM) images of the CsPbBr3@TFA shown in Figure 1b and Figure S1a veal well-
defined rectangular nanoplatelets with an average size of 218 ± 2 nm (Figure S1a-inset). Atomic
force microscopy (AFM) measurements further confirmed the rectangular morphology and a
height of 90 nm (Figure S1b,c). In contrast, thin-film samples prepared similarly but without
TFA− exhibited irregularly shaped grains, as shown in Figure S1 d,e.
Ultraviolet-visible (UV-Vis) absorption and PL spectroscopy were conducted to investigate the
optical properties of the CsPbBr3@TFA nanoplatelets and CsPbBr3 thin films. The
CsPbBr3@TFA nanoplatelets showed an intense excitonic peak at 517 nm (Figure 1c), which is
consistent with the characteristic absorption edge peak of the CsPbBr3 thin film (Figure S1f),
and a narrow PL peak at 520 nm with a full width at half maximum (FWHM) of 14.56 ± 0.5 nm
(Figure 1c). The phase and crystallinity of the nanoplatelets was analyzed via X-ray diffraction
(XRD) measurements. As shown in Figure 1d, the XRD pattern of CsPbBr3@TFA exhibits
6
intense peaks corresponding to the pure 3D CsPbBr3 phase (reference code: 00-018-0364),
together with additional peaks at 7.33º and 8.14º, which confirms the presence of a 2D layered
perovskite phase, presumably CsPbBr3-xTFAx.49–51 The secondary phase, Cs4PbBr6, which
appears in the XRD pattern of the CsPbBr3 thin films (Figure 1d), was not detected in the
nanoplatelets. Grazing incidence X-ray diffraction (GIXRD) patterns of the (001) plane of the
CsPbBr3 phase for both the thin film and nanoplatelets are shown in Figure S2a,b. In the
CsPbBr3 thin film, increasing the incident angle (Ψ) from 0.2° to 5° led to a gradual shift of the
diffraction peak toward lower angles, suggesting the presence of tensile strain within the film.
In contrast, the peak shift was significantly reduced in the nanoplatelets, supporting the stress-
relieving effect of TFA−.25,52 This relaxation is likely facilitated by surface-bound TFA− anions
and defect passivation.25,52,53
The local crystal orientations of the CsPbBr3@TFA nanoplatelets and CsPbBr3 thin films were
mapped via electron backscatter diffraction (EBSD). The nanoplatelets aligned preferentially
along the [001] crystal direction relative to the sample normal direction, regardless of the
crystallinity of the substrate (crystalline Si or amorphous Si/SiO2) (Figure 2). The
corresponding pole figure displayed a pronounced central intensity, with maximum multiple of
random distribution (MRD) values of approximately 18.87 and 16.04, respectively (Figure 2 b,
d). These high MRD values suggest that the nanoplatelets exhibit preferential orientation. In
contrast, the EBSD orientation map of the CsPbBr3 thin film revealed a polycrystalline structure
with a variety of grain orientations (Figure S3). The (001) pole figure exhibited a more uniform
distribution with a lower maximum MRD value of approximately 2.01, which indicates more
random grain orientations. These morphological and crystallographic analyses indicated that
TFA− pseudohalides play a key role in regulating the crystal orientation of the perovskite.54
The elemental distribution of the nanoplatelets were investigated via high-resolution
transmission electron microscopy (HRTEM) and energy-dispersive X-ray spectroscopy
7
(EDXS) mapping. The HRTEM image of CsPbBr3@TFA nanoplatelets and the corresponding
fast Fourier transform (FFT), taken along the [001] zone axis, are shown in Figure 3a-c. The
FFT pattern confirms the crystalline nature of nanoplatelets and indicates the presence of the
pure 3D CsPbBr3 phase in the bulk. However, the sensitivity of the nanoplatelets to high-energy
electron beams (60 kV) prevented the acquisition of reliable crystalline phase information over
larger areas (Figure S4). The EDX elemental mapping (Figure 3d) reveals a homogeneous
distribution of cesium (Cs), lead (Pb), and bromine (Br) within the bulk of the nanoplatelets. In
addition, we found an intense carbon signal associated with the coating layer of the lamella and
a very weak fluorine (F) signal from TFA−. The silicon (Si) and oxygen (O) signals in the EDX
map are attributed to the supporting substrate.
The chemical states of C, F, Pb, Br, and Cs at the surface of the CsPbBr3@TFA nanoplatelets
and their bonding environments were investigated via X-ray photoelectron spectroscopy (XPS)
(Figure 4a-e). All the spectra were corrected using the adventitious C 1s signal at 284.8 eV. The
presence of TFA− on the surface of CsPbBr3@TFA was confirmed through the C 1s and F 1s
XPS spectra. The C 1s spectrum showed two distinct peaks at 292.5 eV and 288.8 eV,
corresponding to −CF355 and −O−C=O groups,36 respectively (Figure 4a). In the F 1s spectrum
in Figure 4b, the peak at 688.7 eV corresponds to −CF3,36,56 whereas the peak at 683.8 eV
corresponds to the F ions bonded to uncoordinated Pb2+ on the nanoplatelet surface, forming
dangling bonds.57,58 The Pb 4f spectrum in Figure 4c reveals two dominant peaks for Pb 4f7/2
and Pb 4f5/2, which indicate a 0.66 eV shift toward higher binding energies than those of
CsPbBr3 thin films. This shift represents the passivation of uncoordinated Pb2+ ions on the
surface of the nanoplatelets.59,60 Moreover, minor shoulder peaks at ~141 eV and ~137 eV
indicate the presence of metallic lead (Pb0).61 The peak area ratio of Pb0 to Pb4f decreased from
0.12 in the thin film to 0.084 in the CsPbBr3@TFA nanoplatelets, further supporting effective
surface passivation by TFA−.62,63 Similar binding energy shifts of approximately 0.2 eV to
0.4 eV were observed in the Cs 3d and Br 3d spectra (Figure 4d,e). These spectral shifts,
8
combined with the reduction in metallic lead content and the appearance of 2D layered
perovskite diffraction peaks in the XRD pattern (Figure 2d), suggest that TFA− may contribute
to surface defect passivation36,59 and facilitate the formation of a 2D CsPbBr3-xTFAx phase on
the surface of the nanoplatelets. Based on the XPS results and TEM EDX data, which did not
show significant traces of C or F in the cores of the nanoplatelets, we conclude that TFA− ions
are located on the surface of the nanoplatelets rather than being incorporated into the perovskite
crystal structure.
We carried out terahertz scattering near-field optical microscopy (THz-s-SNOM) at 0.6 THz to
investigate the influence of TFA− surface defect passivation on the electronic quality of the
perovskite nanoplatelets (Figure 5a-c; see details in the Experimental Section).64 THz-s-SNOM
optically scans the local THz conductivity via a contact-free method with a nanoscale 50 nm
spatial resolution.65,66 Experimental near-field images of the CsPbBr3 nanoplatelets on the
highly doped silicon substrate (Figure 5a-c) display a negative THz phase ϕ2 on the
nanoplatelets relative to the highly doped substrate. This is exemplary shown in line profiles
for two nanoplatelets (Figure 5d). The response, accompanied by a reduced THz near-field
magnitude S2 on the nanoplatelets relative to the substrate, is characteristic of undoped, intrinsic
semiconductor materials. The near-field contrasts of the CsPbBr3@TFA nanoplatelets relative
to the highly doped silicon substrate were then modeled via the finite-dipole model67 for the
samples 68 (input parameters: tapping amplitude 200 nm, tip radius 40 nm, spheroid length L =
600 nm, film thickness 100 nm) and a Drude conductivity model (effective electron mass =
0.26 m0, where m0 is the electron mass, and the THz permittivity of the lattice is εL = 5.8).64,69,70
Assuming a carrier mobility of 50 cm2V-1s-1, a typical mobility in lead-halide perovskites,71 we
find that the extracted doping carrier densities are well below 1016 cm-3 (Figure 5e), reflecting
the intrinsic semiconducting behavior of the material.
9
The low degree of doping can be attributed to efficient surface defect passivation, which reduces
nonradiative recombination and enhances the emission intensity of CsPbBr3@TFA compared
with those of the CsPbBr3 samples (Figure 6a,b, Figure S5). This enhancement persisted for
more than 1440 hours (Figure S6). Additionally, the PL intensity increased over time, which
can be attributed to a slow passivation effect of TFA−.72 Building on these observations, we
propose a crystal configuration for the nanoplatelets, as shown in the schematic illustration in
Figure 6c.
Finally, we fabricated perovskite arrays containing CsPbBr3@TFA nanoplatelets on a chip. We
selectively masked them with a PMMA/Al2O3 stack via a two-step top-down photolithography
process (details in the Experimental Section).47,48 The process flow is illustrated in Figure 7a.
PL spectroscopy was used to monitor the stability of the nanoplatelets during the structuring
and encapsulation steps. No changes were observed in the PL spectra (Figure S7), confirming
the stability of the nanoplatelets throughout the patterning process. The chip was subsequently
exposed to chlorine vapors to initiate an anion exchange process, as schematically shown in
Figure 7b (details in the Experimental Section). The absorption and PL spectra taken at different
times during the anion exchange process show a gradual shift in the absorption edge and
emission peaks, from 517 to 412 nm and 522 to 413 nm, respectively, as the exposure time
increases (Figure S8). The peak at 413 nm is characteristic of CsPbCl3 73 and confirms the
completion of halide exchange after 20 minutes. Next, the PMMA/Al2O3 stack was removed
with toluene. The absorption and PL spectra of the pixel array were recorded after resist
stripping and revealed green emission from CsPbBr3 and blue emission from CsPbCl3 (Figure
7c). The XRD pattern of the halide-exchanged CsPbCl3 region indicates intense peaks
corresponding to a pure CsPbCl3 cubic phase (Figure 7d, reference code: 01-084-0437). The
shift in the peaks, compared with those of the CsPbBr3@TFA pattern, is due to the smaller ionic
radius of Cl−.74 Furthermore, the XRD results confirm the phase stability of the CsPbCl3
nanoplatelets under ambient conditions (Figure S9). The Cl 2p XPS spectrum in Figure 7e
10
displays binding energy peaks at 197.5 eV and 199.1 eV, assigned to Cl2p3/2 and Cl2p1/2,
respectively, as expected for CsPbCl3.75 The depth profile analysis in Figure 7f confirms the
absence of residual Br− ions in the halide-exchanged region, indicating the complete
replacement of bromide with chloride ions. The optical micrographs of square perovskite
structures (Figure 8a-c) demonstrate the successful patterning of nanoplatelets on-chip. The
confocal microscopy images in Figure 8d-h further reveal two distinct emission colors
corresponding to the typical green and blue emission of the two MHPs. This outcome is a direct
result of the selective halide exchange process, which allows precise control of the PL emission
wavelength in different regions of the same chip. These findings highlight our scalable approach
that combines top-down patterning with wavelength tuning, enabling distinct green and blue
emission on a single substrate for integrated optoelectronic applications.
Conclusion
In summary, this study demonstrated the in situ growth of CsPbBr3@CsPbBr3-xTFAx
nanoplatelets by employing CsTFA as a multifunctional cesium source. We used CsTFA as a
source of TFA− pseudohalides, which suppressed nonradiative recombination, enhanced the PL
intensity, and promoted the preferential orientation of nanoplatelets along the [001] crystal
direction, regardless of the substrate crystallinity. XPS analysis revealed that TFA− facilitated
the passivation of uncoordinated Pb2+ and reduced the metallic lead content. THz conductivity
measurements confirmed successful surface passivation with TFA−, which minimized the
doping levels in the nanoplatelets. The nanoplatelets were etched into arrays with a perovskite-
compatible top-down patterning process and selectively color-tuned on-chip by substituting Br−
with Cl− via vapor-phase anion exchange. This resulted in two-color green and blue emission
MHP arrays on a single chip, marking a significant step toward next-generation multi-emitter
LEDs and, possibly, lasers.
11
Experimental
CsPbBr3@CsPbBr3-xTFAx nanoplatelets and CsPbBr3 thin film deposition: Si and Si/SiO2
substrates were sequentially sonicated in acetone and isopropyl alcohol for 10 minutes each.
Afterward, they were blow-dried with nitrogen and treated with oxygen plasma (Tepla, Semi
300) for 10 minutes. The cleaned substrates were then transferred to a nitrogen-filled glove box.
CsPbBr3@CsPbBr3-xTFAx nanoplatelets were grown using a perovskite precursor solution
containing 0.191 g of cesium trifluoroacetate (CsTFA, Thermo Scientific) and 0.259 g of lead
bromide (PbBr2, ≥ 98%, Sigma Aldrich) dissolved in 3 mL of dimethyl sulfoxide (DMSO,
anhydrous, ≥ 99.9%). CsPbBr3 thin films were deposited using a conventional solution with
cesium bromide (CsBr, >99.0%, TCI) and PbBr2 in DMSO. The solution was stirred overnight
at 60°C and filtered through polytetrafluoroethylene (PTFE) filters with a pore size of 0.2 µm.
A volume of 80 µL of the precursor solution was spin-coated on each 4 cm2 substrate at
2000 rpm for 40 seconds, followed by annealing at 80°C for 10 minutes.
Integration of perovskite arrays on-chip:
A perovskite-compatible top-down patterning process utilizing a double-stack resist consisting
of a poly(methyl methacrylate) (PMMA) protective layer and an AZ Mir 701 positive-tone
photoresist was conducted as described in previous work.47,48 The samples were exposed to
ultraviolet (UV) light for 25 s via a contact lithography mask aligner (EVG 420) system. The
samples were then baked for 90 s at 115°C. The patterns were developed in an MF26A
developer for 35 s, followed by rinsing with deionized water for 7 seconds. The samples were
subsequently etched via plasma with BCl3 and HBr gases via a PlasmaLab System 100
inductively coupled plasma (ICP)-RIE tool (Oxford Instruments). Finally, the resist stack was
removed by immersing the samples in toluene at 80°C for 1 hour.
12
Selective Anion Exchange:
Photolithography and etching steps were performed to define masked and unmasked regions. A
PMMA (350–400 nm)/evaporated Al2O3 (5 nm)/AZ Mir 701 positive-tone photoresist stack
served as a mask for certain perovskite arrays during the anion exchange process. Anion
exchange was performed by exposing the unmasked CsPbBr3@TFA nanoplatelets to chlorine
vapors, which was achieved by heating hydrochloric acid (HCl) at 108°C in a sealed vial for 20
minutes.
Characterization: Scanning electron microscopy (SEM) measurements were performed in a
Zeiss SUPRA 60 at 4 kV with a working distance of 4 mm. Atomic force microscopy (AFM)
images were taken using a Bruker Dimension Icon instrument in tapping mode. Electron
backscatter diffraction (EBSD) measurements were performed on an FEI Verios 460L SEM
with an EDAX Clarity direct electron detection system. The sample was held at a 70° tilt angle.
An accelerating voltage of 8 kV was used with a beam current of 400 pA. Electron diffraction
patterns were collected with a step size of 25 nm and detector exposure time of 15 ms. The
resulting diffraction patterns were postprocessed and re-indexed using spherical indexing
(bandwidth of 255) in OIM analysis 9. For spherical indexing, a simulated master pattern of
cubic CsPbBr3 (space group Pm-3m) was used. For pole figure plotting, grain CI
standardization was performed; only grain average orientations corresponding to CsPbBr3 were
plotted, and silicon data and unindexed points (CI < 0.1) were filtered out. Ultraviolet-visible
(UV-Vis) spectroscopy was performed using a PerkinElmer UV-Vis spectrophotometer.
Photoluminescence (PL) mapping was conducted via a WiTec confocal Raman microscope.
The CsPbBr3 samples were scanned with a 457 nm CW laser at 1 µW power. For the CsPbCl3
samples, PL measurements were performed with a continuous wave, diode pumped, frequency
tripled solid state laser (λ = 355 nm). The emitted light was coupled into a monochromator
(Princeton Instruments, Acton SP2500, gratings: 300 lines mm−1), and the spectrally dispersed
13
light was detected by a thermoelectrically cooled charge coupled device camera (Princeton
Instruments. The measurements were performed at room temperature under ambient conditions.
XRD spectra with filtered Cu-Kα radiation (wavelength of 1.5405 Å) were taken via a
PANalytical instrument at a current of 40 mA and a voltage of 40 kV. Grazing Incidence XRD
(GIXRD) was performed at various tilt angles. The TEM sample was prepared with a focused
ion beam (FIB) Strata FIB 400. HRTEM and EDXS elemental mappings were performed with
a JEOL JEM F200 at 200 kV. X-ray photoelectron spectroscopy (XPS) measurements were
performed using a Thermo Scientific KA1066 system equipped with monochromatic Al K-α
radiation (1486.6 eV). The spectra were charge-corrected by referencing the C–C peak of
adventitious carbon to 284.8 eV to account for possible surface charging. Signals from C1s,
O1s, F1s, Si2p, Br3d, Cl2p, Cs3d, and Pb4f were recorded, and elemental ratios were quantified
from the integrated peak areas using appropriate sensitivity factors. Depth profiling was carried
out via Ar+ ion sputtering. THz measurements were performed via custom-built all-electronic
THz-sSNOM. 600 GHz radiation from a Schottky diode-based electronic multiplier chain is
scattered at the tip of a conductive atomic force microscopy (AFM) cantilever (RMN
25Pt200B−H, 40 nm tip radius) and detected in the far field by a corresponding heterodyne
detector. Confocal spectral fluorescence imaging was performed with an LSM 710 confocal
microscope with a 34-channel Quasar detector (Zeiss, Oberkochen). The samples were excited
with a 405 nm diode laser, and the emitted light was collected with an EC Plan-Neofluar
10x/0.30 objective. The microscope system was controlled, and images were acquired in l mode
with the ZEN black software version 2.3 SP1 FP3 (64-bit).
Acknowledgments: This project has received funding from the German Research Foundation
(DFG) through the project Hiper-Lase (GI 1145/4-1, LE 2440/12-1, HA 3022/13-1, and
RI1551/18-1), the European Union’s Horizon 2020 research and innovation programme under
the project FOXES (951774), the Deutsche Forschungsgemeinschaft within TRR 404 Active-
3D (project number: 528378584), and the German Ministry of Education and Research (BMBF)
14
through the project NEPOMUQ (13N17112 and 13N17113). The authors want to thank P.
Grewe and Dr. U. Böttger from the Electronic Material Research Lab, RWTH Aachen
University, for their support in the XRD measurement and analysis. This work was also
supported by the Confocal Microscopy Facility, a Core Facility of the Interdisciplinary Center
for Clinical Research (IZKF) Aachen within the Faculty of Medicine at RWTH Aachen
University. L.S. and B.E. acknowledge the Dutch Research Council (NWO), Gatan (EDAX),
Amsterdam Scientific Instruments (ASI) and CL Solutions for financing the project ‘Achieving
Semiconductor Stability From The Ground Up’ (NWO project number 19459).
15
References
(1)
Ravi, V. K.; Swarnkar, A.; Chakraborty, R.; Nag, A. Excellent Green but Less Impressive Blue
Luminescence from CsPbBr 3 Perovskite Nanocubes and Nanoplatelets. Nanotechnology 2016,
27 (32), 325708. https://doi.org/10.1088/0957-4484/27/32/325708.
(2)
Maes, J.; Balcaen, L.; Drijvers, E.; Zhao, Q.; De Roo, J.; Vantomme, A.; Vanhaecke, F.; Geiregat, P.;
Hens, Z. Light Absorption Coefficient of CsPbBr 3 Perovskite Nanocrystals. J. Phys. Chem. Lett.
2018, 9 (11), 3093–3097. https://doi.org/10.1021/acs.jpclett.8b01065.
(3)
Zhang, D.; Yang, Y.; Bekenstein, Y.; Yu, Y.; Gibson, N. A.; Wong, A. B.; Eaton, S. W.; Kornienko, N.;
Kong, Q.; Lai, M.; Alivisatos, A. P.; Leone, S. R.; Yang, P. Synthesis of Composition Tunable and
Highly Luminescent Cesium Lead Halide Nanowires through Anion-Exchange Reactions. J. Am.
Chem. Soc. 2016, 138 (23), 7236–7239. https://doi.org/10.1021/jacs.6b03134.
(4)
Nedelcu, G.; Protesescu, L.; Yakunin, S.; Bodnarchuk, M. I.; Grotevent, M. J.; Kovalenko, M. V.
Fast Anion-Exchange in Highly Luminescent Nanocrystals of Cesium Lead Halide Perovskites
(CsPbX3 , X = Cl, Br, I). Nano Lett. 2015, 15 (8), 5635–5640.
https://doi.org/10.1021/acs.nanolett.5b02404.
(5)
Zhang, Q.; Shang, Q.; Su, R.; Do, T. T. H.; Xiong, Q. Halide Perovskite Semiconductor Lasers:
Materials, Cavity Design, and Low Threshold. Nano Lett. 2021, 21 (5), 1903–1914.
https://doi.org/10.1021/acs.nanolett.0c03593.
(6)
Fu, Y.; Zhu, H.; Schrader, A. W.; Liang, D.; Ding, Q.; Joshi, P.; Hwang, L.; Zhu, X.-Y.; Jin, S. Nanowire
Lasers of Formamidinium Lead Halide Perovskites and Their Stabilized Alloys with Improved
Stability. Nano Lett. 2016, 16 (2), 1000–1008. https://doi.org/10.1021/acs.nanolett.5b04053.
(7)
Bohn, B. J.; Tong, Y.; Gramlich, M.; Lai, M. L.; Döblinger, M.; Wang, K.; Hoye, R. L. Z.; Müller-
Buschbaum, P.; Stranks, S. D.; Urban, A. S.; Polavarapu, L.; Feldmann, J. Boosting Tunable Blue
Luminescence of Halide Perovskite Nanoplatelets through Postsynthetic Surface Trap Repair.
Nano Lett. 2018, 18 (8), 5231–5238. https://doi.org/10.1021/acs.nanolett.8b02190.
(8)
Bodnarchuk, M. I.; Boehme, S. C.; Ten Brinck, S.; Bernasconi, C.; Shynkarenko, Y.; Krieg, F.;
Widmer, R.; Aeschlimann, B.; Günther, D.; Kovalenko, M. V.; Infante, I. Rationalizing and
Controlling the Surface Structure and Electronic Passivation of Cesium Lead Halide Nanocrystals.
ACS Energy Lett. 2019, 4 (1), 63–74. https://doi.org/10.1021/acsenergylett.8b01669.
(9)
deQuilettes, D. W.; Koch, S.; Burke, S.; Paranji, R. K.; Shropshire, A. J.; Ziffer, M. E.; Ginger, D. S.
Photoluminescence Lifetimes Exceeding 8 Μs and Quantum Yields Exceeding 30% in Hybrid
Perovskite Thin Films by Ligand Passivation. ACS Energy Lett. 2016, 1 (2), 438–444.
https://doi.org/10.1021/acsenergylett.6b00236.
(10) Liang, T.; Liu, W.; Liu, X.; Li, Y.; Wu, W.; Fan, J. In Situ Phase-Transition Crystallization of All-
Inorganic Water-Resistant Exciton-Radiative Heteroepitaxial CsPbBr 3 –CsPb 2 Br 5 Core–Shell
Perovskite Nanocrystals. Chem. Mater. 2021, 33 (13), 4948–4959.
https://doi.org/10.1021/acs.chemmater.1c00542.
(11) Zhong, Q.; Cao, M.; Hu, H.; Yang, D.; Chen, M.; Li, P.; Wu, L.; Zhang, Q. One-Pot Synthesis of
Highly Stable CsPbBr 3 @SiO 2 Core–Shell Nanoparticles. ACS Nano 2018, 12 (8), 8579–8587.
https://doi.org/10.1021/acsnano.8b04209.
(12) Yang, D.; Li, P.; Zou, Y.; Cao, M.; Hu, H.; Zhong, Q.; Hu, J.; Sun, B.; Duhm, S.; Xu, Y.; Zhang, Q.
Interfacial Synthesis of Monodisperse CsPbBr 3 Nanorods with Tunable Aspect Ratio and Clean
Surface for Efficient Light-Emitting Diode Applications. Chem. Mater. 2019, 31 (5), 1575–1583.
https://doi.org/10.1021/acs.chemmater.8b04651.
(13) Ravi, V. K.; Saikia, S.; Yadav, S.; Nawale, V. V.; Nag, A. CsPbBr 3 /ZnS Core/Shell Type Nanocrystals
for Enhancing Luminescence Lifetime and Water Stability. ACS Energy Lett. 2020, 5 (6), 1794–
1796. https://doi.org/10.1021/acsenergylett.0c00858.
(14) Lin, K.; Xing, J.; Quan, L. N.; De Arquer, F. P. G.; Gong, X.; Lu, J.; Xie, L.; Zhao, W.; Zhang, D.; Yan,
C.; Li, W.; Liu, X.; Lu, Y.; Kirman, J.; Sargent, E. H.; Xiong, Q.; Wei, Z. Perovskite Light-Emitting
Diodes with External Quantum Efficiency Exceeding 20 per Cent. Nature 2018, 562 (7726), 245–
248. https://doi.org/10.1038/s41586-018-0575-3.
16
(15) Zhu, L.; Cao, H.; Xue, C.; Zhang, H.; Qin, M.; Wang, J.; Wen, K.; Fu, Z.; Jiang, T.; Xu, L.; Zhang, Y.;
Cao, Y.; Tu, C.; Zhang, J.; Liu, D.; Zhang, G.; Kong, D.; Fan, N.; Li, G.; Yi, C.; Peng, Q.; Chang, J.; Lu,
X.; Wang, N.; Huang, W.; Wang, J. Unveiling the Additive-Assisted Oriented Growth of Perovskite
Crystallite for High Performance Light-Emitting Diodes. Nat. Commun. 2021, 12 (1), 5081.
https://doi.org/10.1038/s41467-021-25407-8.
(16) Mir, W. J.; Mahor, Y.; Lohar, A.; Jagadeeswararao, M.; Das, S.; Mahamuni, S.; Nag, A.
Postsynthesis Doping of Mn and Yb into CsPbX3 (X = Cl, Br, or I) Perovskite Nanocrystals for
Downconversion Emission. Chem. Mater. 2018, 30 (22), 8170–8178.
https://doi.org/10.1021/acs.chemmater.8b03066.
(17) Lee, H. B.; Kumar, N.; Devaraj, V.; Tyagi, B.; He, S.; Sahani, R.; Ko, K.-J.; Oh, J.-W.; Kang, J.-W.
Trifluoromethyl-Group Bearing, Hydrophobic Bulky Cations as Defect Passivators for Highly
Efficient, Stable Perovskite Solar Cells. Sol. RRL 2021, 5 (12), 2100712.
https://doi.org/10.1002/solr.202100712.
(18) Li, H.; Xu, Y.; Ramakrishnan, S.; Zhang, Y.; Cotlet, M.; Xu, T. L.; Yu, Q. Pseudo-Halide Anion
Engineering for Efficient Quasi-2D Ruddlesden-Popper Tin Perovskite Solar Cells. Cell Rep. Phys.
Sci. 2022, 3 (10), 101060. https://doi.org/10.1016/j.xcrp.2022.101060.
(19) Wei, N.; Chen, Y.; Wang, X.; Miao, Y.; Qin, Z.; Liu, X.; Wei, H.; Zhao, Y. Multi-Level Passivation of
MAPbI 3 Perovskite for Efficient and Stable Photovoltaics. Adv. Funct. Mater. 2022, 32 (16),
2108944. https://doi.org/10.1002/adfm.202108944.
(20) Hossain, M.; Garai, R.; Gupta, R. K.; Arunagirinathan, R. N.; Iyer, P. K. Fluoroarene Derivative
Based Passivation of Perovskite Solar Cells Exhibiting Excellent Ambient and Thermo-Stability
Achieving Efficiency >20%. J. Mater. Chem. C 2021, 9 (32), 10406–10413.
https://doi.org/10.1039/D1TC02335G.
(21) Shi, Z.; Guo, R.; Luo, R.; Wang, X.; Ma, J.; Feng, J.; Niu, X.; Alvianto, E.; Jia, Z.; Guo, X.; Liang, H.;
Chen, J.; Li, Z.; Sun, K.; Jiang, X.; Wu, Y.; Müller-Buschbaum, P.; Hu, W.; Hou, Y. “T-Shaped”
Carbazole Alkylammonium Cation Passivation in Perovskite Solar Cells. ACS Energy Lett. 2024, 9
(2), 419–427. https://doi.org/10.1021/acsenergylett.3c02357.
(22) Zhang, J.; Wu, J.; Langner, S.; Zhao, B.; Xie, Z.; Hauch, J. A.; Afify, H. A.; Barabash, A.; Luo, J.;
Sytnyk, M.; Meng, W.; Zhang, K.; Liu, C.; Osvet, A.; Li, N.; Halik, M.; Heiss, W.; Zhao, Y.; Brabec, C.
J. Exploring the Steric Hindrance of Alkylammonium Cations in the Structural Reconfiguration of
Quasi-2D Perovskite Materials Using a High-throughput Experimental Platform. Adv. Funct.
Mater. 2022, 32 (43), 2207101. https://doi.org/10.1002/adfm.202207101.
(23) Han, D.; Chen, S.; Du, M.-H. Role of Polycyclic Aromatic Alkylammonium Cations in Tuning the
Electronic Properties and Band Alignment of Two-Dimensional Hybrid Perovskite
Semiconductors. J. Phys. Chem. Lett. 2021, 12 (40), 9754–9760.
https://doi.org/10.1021/acs.jpclett.1c02603.
(24) Wang, F.; Jiang, X.; Chen, H.; Shang, Y.; Liu, H.; Wei, J.; Zhou, W.; He, H.; Liu, W.; Ning, Z. 2D-
Quasi-2D-3D Hierarchy Structure for Tin Perovskite Solar Cells with Enhanced Efficiency and
Stability. Joule 2018, 2 (12), 2732–2743. https://doi.org/10.1016/j.joule.2018.09.012.
(25) Sun, Y.; Miao, W.; Sun, W.; Niu, Z.; Yin, R.; Huo, X.; Wang, K.; You, T.; Yin, P. Lattice Strain
Regulation and Halogen Vacancies Passivation Enable High-Performance Formamidine-Based
Perovskite Solar Cells. Small 2024, 20 (46), 2404272. https://doi.org/10.1002/smll.202404272.
(26) Walker, B.; Kim, G.; Kim, J. Y. Pseudohalides in Lead-Based Perovskite Semiconductors. Adv.
Mater. 2019, 31 (20), 1807029. https://doi.org/10.1002/adma.201807029.
(27) Yang, S.; Liu, W.; Zuo, L.; Zhang, X.; Ye, T.; Chen, J.; Li, C.-Z.; Wu, G.; Chen, H. Thiocyanate
Assisted Performance Enhancement of Formamidinium Based Planar Perovskite Solar Cells
through a Single One-Step Solution Process. J. Mater. Chem. A 2016, 4 (24), 9430–9436.
https://doi.org/10.1039/C6TA02999J.
(28) Halder, A.; Chulliyil, R.; Subbiah, A. S.; Khan, T.; Chattoraj, S.; Chowdhury, A.; Sarkar, S. K.
Pseudohalide (SCN– )-Doped MAPbI3 Perovskites: A Few Surprises. J. Phys. Chem. Lett. 2015, 6
(17), 3483–3489. https://doi.org/10.1021/acs.jpclett.5b01327.
(29) Jeong, J.; Kim, M.; Seo, J.; Lu, H.; Ahlawat, P.; Mishra, A.; Yang, Y.; Hope, M. A.; Eickemeyer, F. T.;
Kim, M.; Yoon, Y. J.; Choi, I. W.; Darwich, B. P.; Choi, S. J.; Jo, Y.; Lee, J. H.; Walker, B.;
17
Zakeeruddin, S. M.; Emsley, L.; Rothlisberger, U.; Hagfeldt, A.; Kim, D. S.; Grätzel, M.; Kim, J. Y.
Pseudo-Halide Anion Engineering for α-FAPbI3 Perovskite Solar Cells. Nature 2021, 592 (7854),
381–385. https://doi.org/10.1038/s41586-021-03406-5.
(30) Li, H.; Xu, Y.; Ramakrishnan, S.; Zhang, Y.; Cotlet, M.; Xu, T. L.; Yu, Q. Pseudo-Halide Anion
Engineering for Efficient Quasi-2D Ruddlesden-Popper Tin Perovskite Solar Cells. Cell Rep. Phys.
Sci. 2022, 3 (10), 101060. https://doi.org/10.1016/j.xcrp.2022.101060.
(31) Tao, J.; Liu, X.; Shen, J.; Han, S.; Guan, L.; Fu, G.; Kuang, D.-B.; Yang, S. F-Type Pseudo-Halide
Anions for High-Efficiency and Stable Wide-Band-Gap Inverted Perovskite Solar Cells with Fill
Factor Exceeding 84%. ACS Nano 2022, 16 (7), 10798–10810.
https://doi.org/10.1021/acsnano.2c02876.
(32) Jeong, J.; Kim, M.; Seo, J.; Lu, H.; Ahlawat, P.; Mishra, A.; Yang, Y.; Hope, M. A.; Eickemeyer, F. T.;
Kim, M.; Yoon, Y. J.; Choi, I. W.; Darwich, B. P.; Choi, S. J.; Jo, Y.; Lee, J. H.; Walker, B.;
Zakeeruddin, S. M.; Emsley, L.; Rothlisberger, U.; Hagfeldt, A.; Kim, D. S.; Grätzel, M.; Kim, J. Y.
Pseudo-Halide Anion Engineering for α-FAPbI3 Perovskite Solar Cells. Nature 2021, 592 (7854),
381–385. https://doi.org/10.1038/s41586-021-03406-5.
(33) Halder, A.; Chulliyil, R.; Subbiah, A. S.; Khan, T.; Chattoraj, S.; Chowdhury, A.; Sarkar, S. K.
Pseudohalide (SCN– )-Doped MAPbI3 Perovskites: A Few Surprises. J. Phys. Chem. Lett. 2015, 6
(17), 3483–3489. https://doi.org/10.1021/acs.jpclett.5b01327.
(34) Lin, P.; Loganathan, A.; Raifuku, I.; Li, M.; Chiu, Y.; Chang, S.; Fakharuddin, A.; Lin, C.; Guo, T.;
Schmidt-Mende, L.; Chen, P. Pseudo-Halide Perovskite Solar Cells. Adv. Energy Mater. 2021, 11
(28), 2100818. https://doi.org/10.1002/aenm.202100818.
(35) Tao, J.; Liu, X.; Shen, J.; Han, S.; Guan, L.; Fu, G.; Kuang, D.-B.; Yang, S. F-Type Pseudo-Halide
Anions for High-Efficiency and Stable Wide-Band-Gap Inverted Perovskite Solar Cells with Fill
Factor Exceeding 84%. ACS Nano 2022, 16 (7), 10798–10810.
https://doi.org/10.1021/acsnano.2c02876.
(36) Wang, H.; Zhang, X.; Wu, Q.; Cao, F.; Yang, D.; Shang, Y.; Ning, Z.; Zhang, W.; Zheng, W.; Yan, Y.;
Kershaw, S. V.; Zhang, L.; Rogach, A. L.; Yang, X. Trifluoroacetate Induced Small-Grained CsPbBr3
Perovskite Films Result in Efficient and Stable Light-Emitting Devices. Nat. Commun. 2019, 10
(1), 665. https://doi.org/10.1038/s41467-019-08425-5.
(37) Kuang, Y.; Yang, L.; Ma, J.; Bie, T.; Zhang, D.; Xue, Y.; Zhou, N.; Shao, M. High-Performance Pure
Red Quasi-Two-Dimensional Perovskite Light-Emitting Diodes with Bifunctional Potassium
Trifluoroacetate Additive. ACS Mater. Lett. 2023, 5 (11), 2922–2928.
https://doi.org/10.1021/acsmaterialslett.3c00557.
(38) Ou, Q.; Bao, X.; Zhang, Y.; Shao, H.; Xing, G.; Li, X.; Shao, L.; Bao, Q. Band Structure Engineering
in Metal Halide Perovskite Nanostructures for Optoelectronic Applications. Nano Mater. Sci.
2019, 1 (4), 268–287. https://doi.org/10.1016/j.nanoms.2019.10.004.
(39) Liu, M.; Johnston, M. B.; Snaith, H. J. Efficient Planar Heterojunction Perovskite Solar Cells by
Vapour Deposition. Nature 2013, 501 (7467), 395–398. https://doi.org/10.1038/nature12509.
(40) Li, W.; Li, J.; Niu, G.; Wang, L. Effect of Cesium Chloride Modification on the Film Morphology
and UV-Induced Stability of Planar Perovskite Solar Cells. J. Mater. Chem. A 2016, 4 (30), 11688–
11695. https://doi.org/10.1039/C5TA09165A.
(41) Haeger, T.; Ketterer, M.; Bahr, J.; Pourdavoud, N.; Runkel, M.; Heiderhoff, R.; Riedl, T. Thermal
Properties of CsPbCl 3 Thin Films across Phase Transitions. J. Phys. Mater. 2020, 3 (2), 024004.
https://doi.org/10.1088/2515-7639/ab749d.
(42) Gui, P.; Zhou, H.; Yao, F.; Song, Z.; Li, B.; Fang, G. Space-Confined Growth of Individual Wide
Bandgap Single Crystal CsPbCl3 Microplatelet for Near-Ultraviolet Photodetection. Small 2019,
15 (39), 1902618. https://doi.org/10.1002/smll.201902618.
(43) Prochowicz, D.; Yadav, P.; Saliba, M.; Kubicki, D. J.; Tavakoli, M. M.; Zakeeruddin, S. M.; Lewiński,
J.; Emsley, L.; Grätzel, M. One-Step Mechanochemical Incorporation of an Insoluble Cesium
Additive for High Performance Planar Heterojunction Solar Cells. Nano Energy 2018, 49, 523–
528. https://doi.org/10.1016/j.nanoen.2018.05.010.
(44) Li, H.; Li, J.; Bao, Y.; Li, J.; He, C.; Wang, H.; Zhang, Y.; Tang, H.; Xu, J.; Fang, Y.; Liang, S.; Yang, Y. A
Liquid Phase Anion-Exchange Approach to High-Quality All-Inorganic Halide Perovskite Micro-
18
and Nanowires. J. Mater. Sci. 2021, 56 (28), 16059–16067. https://doi.org/10.1007/s10853-021-
06316-z.
(45) Zhang, Y.; Lu, D.; Gao, M.; Lai, M.; Lin, J.; Lei, T.; Lin, Z.; Quan, L. N.; Yang, P. Quantitative Imaging
of Anion Exchange Kinetics in Halide Perovskites. Proc. Natl. Acad. Sci. 2019, 116 (26), 12648–
12653. https://doi.org/10.1073/pnas.1903448116.
(46) Cen, G.; Xia, Y.; Zhao, C.; Fu, Y.; An, Y.; Yuan, Y.; Shi, T.; Mai, W. Precise Phase Control of Large-
Scale Inorganic Perovskites via Vapor-Phase Anion-Exchange Strategy. arXiv 2020.
https://doi.org/10.48550/ARXIV.2009.14350.
(47) Cegielski, P. J.; Giesecke, A. L.; Neutzner, S.; Porschatis, C.; Gandini, M.; Schall, D.; Perini, C. A. R.;
Bolten, J.; Suckow, S.; Kataria, S.; Chmielak, B.; Wahlbrink, T.; Petrozza, A.; Lemme, M. C.
Monolithically Integrated Perovskite Semiconductor Lasers on Silicon Photonic Chips by Scalable
Top-Down Fabrication. Nano Lett. 2018, 18 (11), 6915–6923.
https://doi.org/10.1021/acs.nanolett.8b02811.
(48) Fabrizi, F.; Goudarzi, S.; Khan, S.; Mohammad, T.; Starodubtceva, L.; Cegielski, P. J.; Thiel, F.;
Özen, S.; Schiffer, M.; Lang, F.; Bolívar, P. H.; Riedl, T.; Müller-Newen, G.; Anantharaman, S. B.;
Mohammadi, M.; Lemme, M. C. A Versatile Top-Down Patterning Technique for Perovskite On-
Chip Integration. ACS Nano 2025, 19 (33), 30428–30440.
https://doi.org/10.1021/acsnano.5c10397.
(49) Li, G.; Song, J.; Wu, J.; Song, Z.; Wang, X.; Sun, W.; Fan, L.; Lin, J.; Huang, M.; Lan, Z.; Gao, P.
Efficient and Stable 2D@3D/2D Perovskite Solar Cells Based on Dual Optimization of Grain
Boundary and Interface. ACS Energy Lett. 2021, 6 (10), 3614–3623.
https://doi.org/10.1021/acsenergylett.1c01649.
(50) Liao, Y.; Liu, H.; Zhou, W.; Yang, D.; Shang, Y.; Shi, Z.; Li, B.; Jiang, X.; Zhang, L.; Quan, L. N.;
Quintero-Bermudez, R.; Sutherland, B. R.; Mi, Q.; Sargent, E. H.; Ning, Z. Highly Oriented Low-
Dimensional Tin Halide Perovskites with Enhanced Stability and Photovoltaic Performance. J.
Am. Chem. Soc. 2017, 139 (19), 6693–6699. https://doi.org/10.1021/jacs.7b01815.
(51) Lian, Z.; Wang, B.; Wu, Z.; Lin, H.; Ding, T.; Wang, J.; Zhang, L.; Xu, J.; Xiao, P.; Xu, H.; Wang, S.;
Ng, K. W. Water-Assisted Synthesis of Layer-Controlled CsPbBr 3 Nanoplates Spontaneously
Encapsulated in PbBr(OH). Adv. Opt. Mater. 2024, 12 (19), 2400333.
https://doi.org/10.1002/adom.202400333.
(52) Zhang, W.; Liu, J.; Song, W.; Shan, J.; Guan, H.; Zhou, J.; Meng, Y.; Tong, X.; Zhu, J.; Yang, M.; Ge,
Z. Chemical Passivation and Grain-Boundary Manipulation via in Situ Cross-Linking Strategy for
Scalable Flexible Perovskite Solar Cells. Sci. Adv. 2025, 11 (5), eadr2290.
https://doi.org/10.1126/sciadv.adr2290.
(53) Fang, Z.; Deng, B.; Jin, Y.; Yang, L.; Chen, L.; Zhong, Y.; Feng, H.; Yin, Y.; Liu, K.; Li, Y.; Zhang, J.;
Huang, J.; Zeng, Q.; Wang, H.; Yang, X.; Yang, J.; Tian, C.; Xie, L.; Wei, Z.; Xu, X. Surface
Reconstruction of Wide-Bandgap Perovskites Enables Efficient Perovskite/Silicon Tandem Solar
Cells. Nat. Commun. 2024, 15 (1), 10554. https://doi.org/10.1038/s41467-024-54925-4.
(54) Lee, J.; Shin, Y. S.; Oleiki, E.; Seo, J.; Roe, J.; Lee, D.; Lee, Y.; Song, T.; Jang, H.; Song, J. W.; Lee, W.;
Lee, G.; Kim, J. Y.; Kim, D. S. Constructing Orderly Crystal Orientation with a Bidirectional
Coordinator for High Efficiency and Stable Perovskite Solar Cells. Energy Environ. Sci. 2024, 17
(16), 6003–6012. https://doi.org/10.1039/D4EE02017K.
(55) Xu, Y.; Ali, A.; Shehzad, K.; Meng, N.; Xu, M.; Zhang, Y.; Wang, X.; Jin, C.; Wang, H.; Guo, Y.; Yang,
Z.; Yu, B.; Liu, Y.; He, Q.; Duan, X.; Wang, X.; Tan, P.; Hu, W.; Lu, H.; Hasan, T. Solvent-Based Soft-
Patterning of Graphene Lateral Heterostructures for Broadband High-Speed Metal–
Semiconductor–Metal Photodetectors. Adv. Mater. Technol. 2017, 2 (2), 1600241.
https://doi.org/10.1002/admt.201600241.
(56) Li, G.; Huang, Q.; He, X.; Gao, Y.; Wang, D.; Kim, S. H.; Wang, D. Self-Formed Hybrid Interphase
Layer on Lithium Metal for High-Performance Lithium–Sulfur Batteries. ACS Nano 2018, 12 (2),
1500–1507. https://doi.org/10.1021/acsnano.7b08035.
(57) Antonio, A. M.; Dworzak, M. R.; Korman, K. J.; Yap, G. P. A.; Bloch, E. D. Anion Binding as a
Strategy for the Synthesis of Porous Salts. Chem. Mater. 2022, 34 (24), 10823–10831.
https://doi.org/10.1021/acs.chemmater.2c01476.
19
(58) Zhai, Y.; Bai, X.; Pan, G.; Zhu, J.; Shao, H.; Dong, B.; Xu, L.; Song, H. Effective Blue-Violet
Photoluminescence through Lanthanum and Fluorine Ions Co-Doping for CsPbCl3 Perovskite
Quantum Dots. Nanoscale 2019, 11 (5), 2484–2491. https://doi.org/10.1039/C8NR09794A.
(59) Kuang, Y.; Yang, L.; Ma, J.; Bie, T.; Zhang, D.; Xue, Y.; Zhou, N.; Shao, M. High-Performance Pure
Red Quasi-Two-Dimensional Perovskite Light-Emitting Diodes with Bifunctional Potassium
Trifluoroacetate Additive. ACS Mater. Lett. 2023, 5 (11), 2922–2928.
https://doi.org/10.1021/acsmaterialslett.3c00557.
(60) Liu, Y.; Wang, S.; Yu, Z.; Chen, G.; Wang, C.; Wang, T.; Ke, W.; Fang, G. A Multifunctional Additive
Strategy Enables Efficient Pure-Blue Perovskite Light-Emitting Diodes. Adv. Mater. 2023, 35 (35),
2302161. https://doi.org/10.1002/adma.202302161.
(61) Zhang, W.; Pathak, S.; Sakai, N.; Stergiopoulos, T.; Nayak, P. K.; Noel, N. K.; Haghighirad, A. A.;
Burlakov, V. M.; deQuilettes, D. W.; Sadhanala, A.; Li, W.; Wang, L.; Ginger, D. S.; Friend, R. H.;
Snaith, H. J. Enhanced Optoelectronic Quality of Perovskite Thin Films with Hypophosphorous
Acid for Planar Heterojunction Solar Cells. Nat. Commun. 2015, 6 (1), 10030.
https://doi.org/10.1038/ncomms10030.
(62) Zhang, W.; Pathak, S.; Sakai, N.; Stergiopoulos, T.; Nayak, P. K.; Noel, N. K.; Haghighirad, A. A.;
Burlakov, V. M.; deQuilettes, D. W.; Sadhanala, A.; Li, W.; Wang, L.; Ginger, D. S.; Friend, R. H.;
Snaith, H. J. Enhanced Optoelectronic Quality of Perovskite Thin Films with Hypophosphorous
Acid for Planar Heterojunction Solar Cells. Nat. Commun. 2015, 6 (1), 10030.
https://doi.org/10.1038/ncomms10030.
(63) Xu, C.; Zuo, L.; Hang, P.; Guo, X.; Pan, Y.; Zhou, G.; Chen, T.; Niu, B.; Xu, X.; Hong, Z.; Wang, D.;
Zhu, H.; Yu, X.; Yang, D.; Chen, H. Synergistic Effects of Bithiophene Ammonium Salt for High-
Performance Perovskite Solar Cells. J. Mater. Chem. A 2022, 10 (18), 9971–9980.
https://doi.org/10.1039/D2TA01349E.
(64) Schäffer, S.; Ogolla, C. O.; Loth, Y.; Haeger, T.; Kreusel, C.; Runkel, M.; Riedl, T.; Butz, B.; Wigger,
A. K.; Bolívar, P. H. Imaging the Terahertz Nanoscale Conductivity of Polycrystalline CsPbBr 3
Perovskite Thin Films. Nano Lett. 2023, 23 (6), 2074–2080.
https://doi.org/10.1021/acs.nanolett.2c03214.
(65) Cocker, T. L.; Jelic, V.; Hillenbrand, R.; Hegmann, F. A. Nanoscale Terahertz Scanning Probe
Microscopy. Nat. Photonics 2021, 15 (8), 558–569. https://doi.org/10.1038/s41566-021-00835-
6.
(66) Hillenbrand, R.; Abate, Y.; Liu, M.; Chen, X.; Basov, D. N. Visible-to-THz near-Field Nanoscopy.
Nat. Rev. Mater. 2025, 10 (4), 285–310. https://doi.org/10.1038/s41578-024-00761-3.
(67) Cvitkovic, A.; Ocelic, N.; Hillenbrand, R. Analytical Model for Quantitative Prediction of Material
Contrasts in Scattering-Type near-Field Optical Microscopy. Opt. Express 2007, 15 (14), 8550.
https://doi.org/10.1364/OE.15.008550.
(68) Hauer, B.; Engelhardt, A. P.; Taubner, T. Quasi-Analytical Model for Scattering Infrared near-Field
Microscopy on Layered Systems. Opt. Express 2012, 20 (12), 13173.
https://doi.org/10.1364/OE.20.013173.
(69) Iaru, C. M.; Brodu, A.; Van Hoof, N. J. J.; Ter Huurne, S. E. T.; Buhot, J.; Montanarella, F.; Buhbut,
S.; Christianen, P. C. M.; Vanmaekelbergh, D.; De Mello Donega, C.; Rivas, J. G.; Koenraad, P. M.;
Silov, A. Yu. Fröhlich Interaction Dominated by a Single Phonon Mode in CsPbBr3. Nat. Commun.
2021, 12 (1), 5844. https://doi.org/10.1038/s41467-021-26192-0.
(70) Puppin, M.; Polishchuk, S.; Colonna, N.; Crepaldi, A.; Dirin, D. N.; Nazarenko, O.; De Gennaro, R.;
Gatti, G.; Roth, S.; Barillot, T.; Poletto, L.; Xian, R. P.; Rettig, L.; Wolf, M.; Ernstorfer, R.; Kovalenko,
M. V.; Marzari, N.; Grioni, M.; Chergui, M. Evidence of Large Polarons in Photoemission Band
Mapping of the Perovskite Semiconductor CsPbBr 3. Phys. Rev. Lett. 2020, 124 (20), 206402.
https://doi.org/10.1103/PhysRevLett.124.206402.
(71) Xia, C. Q.; Peng, J.; Poncé, S.; Patel, J. B.; Wright, A. D.; Crothers, T. W.; Uller Rothmann, M.;
Borchert, J.; Milot, R. L.; Kraus, H.; Lin, Q.; Giustino, F.; Herz, L. M.; Johnston, M. B. Limits to
Electrical Mobility in Lead-Halide Perovskite Semiconductors. J. Phys. Chem. Lett. 2021, 12 (14),
3607–3617. https://doi.org/10.1021/acs.jpclett.1c00619.
20
(72) Jokar, E.; Chien, C.-H.; Fathi, A.; Rameez, M.; Chang, Y.-H.; Diau, E. W.-G. Slow Surface Passivation
and Crystal Relaxation with Additives to Improve Device Performance and Durability for Tin-
Based Perovskite Solar Cells. Energy Environ. Sci. 2018, 11 (9), 2353–2362.
https://doi.org/10.1039/C8EE00956B.
(73) Khan, S.; Cegielski, P. J.; Runkel, M.; Riedl, T.; Mohammadi, M.; Lemme, M. C. Complex Optical
Constants of CsPbCl3 Perovskite Thin Films Determined by Spectroscopic Ellipsometry. APL
Energy 2025, 3 (2), 026108. https://doi.org/10.1063/5.0268298.
(74) Wong, Y.; Wu, W.; Wang, T.; Ng, J. D. A.; Khoo, K. H.; Wu, J.; Tan, Z. Color Patterning of
Luminescent Perovskites via Light-Mediated Halide Exchange with Haloalkanes. Adv. Mater.
2019, 31 (24), 1901247. https://doi.org/10.1002/adma.201901247.
(75) Yu, H.; Gao, X.; Huang, C.; Liu, S.; Chen, B.; Xu, S.; Zhang, Y.; Zhao, H. CsPbCl3 and Mn:CsPbCl3
Perovskite Nanocubes/Nanorods as a Prospective Cathode Material for LIB Application. J. Mater.
Sci. Mater. Electron. 2023, 34 (21), 1582. https://doi.org/10.1007/s10854-023-10998-3.
21
Figure 1. Synthesis of CsPbBr3@TFA nanoplatelets. (a) Schematic illustration of the in situ
fabrication of CsPbBr3@TFA nanoplatelets via a one-step spin-coating process. (b) False color
tilted top-view SEM image of the nanoplatelets. (c) Optical characterization of the
nanoplatelets; the dashed blue line shows the absorption spectrum of the as grown nanoplatelets,
and the solid red line shows the PL peak position of the nanoplatelets. (e) XRD patterns of the
CsPbBr3@TFA nanoplatelets and CsPbBr3 thin film.
22
Figure 2. EBSD analysis of nanoplatelets on different substrates. Inverse pole figure (IPF)
maps (a, c) and corresponding (001) pole figures (b, d) for nanoplatelets deposited on (a, b)
crystalline Si and (c, d) amorphous Si/SiO2 substrates. The IPF maps reveal that the
nanoplatelets exhibit a strong preferential alignment along the [001] crystal direction regardless
of substrate crystallinity.
23
Figure 3. TEM characterization of nanoplatelets. (a-b) HRTEM image and FFT pattern of
nanoplatelets along the [001] zone axis line. (c) simulated diffraction pattern. (d) TEM-EDX
mapping of the nanoplatelets. *The F peak is rather low and contains considerable background
intensity.
24
Figure 4. High-resolution XPS spectra of key elements in the CsPbBr3 thin film and
CsPbBr3@TFA nanoplatelets. (a) C 1s spectrum, showing the presence of surface-bound
organic species; (b) F 1s spectrum, confirming the successful incorporation of TFA; (c) Pb 4f,
(d), Br 3d and (f) Cs 3d spectra, revealing shifts in binding energies indicative of altered
chemical environments following TFA surface modification.
25
Figure 5. THz nanoimaging of CsPbBr3@TFA nanoplatelets. (a-c) THz nanoimaging of
CsPbBr3@TFA nanoplatelets on a highly doped silicon substrate. (d) Normalized phase 𝜙2
(lines) and magnitude 𝑆2 (dots) near-field profiles along two different nanoplatelets and the
highly doped substrate indicated by the red and blue dashed lines in a-c. (e) Modeled near-field
contrasts 𝜙2 (lines) and 𝑆2 (dots) at 600 GHz assuming an intrinsic charge carrier mobility of
50 cm2/Vs.
26
Figure 6. PL characterization of perovskite samples. PL area scans for the comparison of
average PL intensity for (a) CsPbBr3@TFA nanoplatelets and (b) CsPbBr3 thin films. (c)
Schematic illustration of the proposed structure of the CsPbBr3@TFA nanoplatelets.
27
Figure 7. Fabrication and halide-exchange characterization of CsPbBr3@TFA nanoplatelets (a)
Schematic illustration of the critical fabrication steps, including top-down photolithography. (b)
Illustration of nanoplate treatment with HCl and the possible anion exchange mechanism. (c)
PL (solid lines) and UV-Vis absorption (dashed lines) of samples treated for different durations
in a chlorine environment. (d) XRD patterns of the samples before and after halide exchange.
(e) XPS peak position for Cl 2p before and after anion exchange. (f) Depth profile extracted
from XPS data.
28
Figure 8. Optical and confocal imaging of patterned nanoplates. (a-c) Optical micrographs of
different features achieved after top-down photolithography. (d-f) Confocal images of patterned
areas from different sites of the chip. (g) Confocal images of letters patterned on another chip
after selective anion exchange.
29
Supporting Information
Facile Synthesis and On-Chip Color Tuning of CsPbBr3@CsPbBr3-xTFAx
Nanoplatelets via Ion Engineering
Sana Khan1,2, Saeed Goudarzi2, Stephan Schäffer3, Lars Sonneveld4, Bart Macco5, Ke Ran1,6,7, Naho Kurahashi8,9,
Peter Haring Bolívar
3, Bruno Ehrler4, Thomas Riedl
8,9, Gerhard Müller-Newen10, Surendra. B. Anantharaman
1,11,
Maryam Mohammadi
1, *, Max. C. Lemme
1,2, *
1AMO GmbH, Otto-Blumenthal-Straße 25, 52074 Aachen, Germany
2RWTH Aachen University, Chair of Electronic Devices, Otto-Blumenthal-Str. 25, 52074 Aachen, Germany
3Institute for High Frequency and Quantum Electronics, University of Siegen, 57076 Siegen, Germany
4LMPV-Sustainable Energy Materials Department, AMOLF Science Park 104, 1098 XG Amsterdam, The
Netherlands
5Department of Applied Physics and Science Education, Eindhoven University of Technology, P.O. Box 513, 5600,
MB, Eindhoven, the Netherlands
6Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Ahornstr. 55, 52074 Aachen,
Germany
7Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons ER-C, Forschungszentrum Jülich GmbH,
Jülich 52428, Germany
8Institute of Electronic Devices, University of Wuppertal, Rainer-Gruenter-Str. 21, 42119 Wuppertal, Germany
9Wuppertal Center for Smart Materials & Systems (CM@S), University of Wuppertal, Rainer-Gruenter-Str. 21,
42119 Wuppertal, Germany
10Institute of Biochemistry and Molecular Biology, Uniklinik RWTH Aachen, Pauwelsstrasse 30, Aachen, Germany
11Low-dimensional Semiconductors Lab, Department of Metallurgical and Materials Engineering, Indian Institute
of Technology Madras, Chennai 600 036, India
*Corresponding Authors: mohammadi@amo.de; max.lemme@eld.rwth-aachen.de
30
Figure S1. Morphology and Optical Properties of CsPbBr3@TFA Nanoplatelets and thin film.
(a) Top-view SEM image of CsPbBr3@TFA nanoplatelets. inset: Size distribution statistics for
nanoplatelets. (b) AFM topography of CsPbBr3@TFA nanoplatelets. (c) AFM high-profile plot
of CsPbBr3@TFA nanoplatelets. (d, e) SEM image and AFM topography of a CsPbBr3 thin
film. (f) UV-absorption and PL spectra of a CsPbBr3 thin film.
31
Figure S2. GIXRD analysis of the (001) plane for the perovskite samples at various tilt angles.
(a) CsPbBr3 thin film, where diffraction peaks are shifted to the left, indicating residual strain
in the film. (b) CsPbBr3@TFA nanoplatelets, where the peak shift is significantly reduced,
suggesting that TFA- effectively relieves strain.
32
Figure S3. EBSD analysis of CsPbBr3 thin film. (a) Inverse pole figure (IPF) map obtained
by EBSD for the CsPPbBr3 thin film deposited on a Si/SiO2 substrate, showing the grain
orientation relative to the sample surface. (b) Corresponding [001] pole figure illustrating the
crystallographic texture distribution.
33
Figure S4. Effect of electron beam exposure on CsPbBr3 nanoplatelets. Cross-sectional images
of the nanoplatelets (a) before, and (b) after electron beam exposure. The sensitivity of the
perovskite nanoplatelets to high-energy electron beams (60 kV) limits the acquisition of reliable
crystalline phase information from the surface.
34
Figure S5. PL spectra showing the average PL intensity from a 25 × 25 µm2 area of
CsPbBr3@TFA nanoplatelets and CsPbBr3 thin film.
35
Figure S6. Optical emission stability of the CsPbBr3@TFA nanoplatelets and CsPbBr3. During
this test, the samples were stored outside the glovebox (the environment was 46.1% humidity,
and the temperature was 22°C).
36
Figure S7. Optical emission stability of CsPbBr3@TFA nanoplatelets during the
photolithography process.
37
Figure S8. The PL and absorption peak positions shift as a function of time during the anion
exchange reaction.
38
Figure S9. XRD patterns of the CsPbCl3 nanoplatelets measured over a period of 30 days. The
humidity was 46.1%, and the temperature was 22°C during the stability tests.
|
1 Facile Synthesis and On-Chip Color Tuning of CsPbBr3@CsPbBr3-xTFAx Nanoplatelets via Ion Engineering Sana Khan1,2, Saeed Goudarzi2, Stephan Schäffer3, Lars Sonneveld4, Bart Macco5, Ke Ran1,6,7, Naho Kurahashi8,9, Peter Haring Bolívar 3, Bruno Ehrler4, Thomas Riedl 8,9, Gerhard Müller-Newen10, Surendra. B. Anantharaman 1,11, Maryam Mohammadi 1, *, Max. C. Lemme 1,2, * 1AMO GmbH, Otto-Blumenthal-Straße 25, 52074 Aachen, Germany 2RWTH Aachen University, Chair of Electronic Devices, Otto-Blumenthal-Str. 25, 52074 Aachen, Germany 3Institute for High Frequency and Quantum Electronics, 57076 Siegen, Germany 4LMPV-Sustainable Energy Materials Department, AMOLF Science Park 104, 1098 XG Amsterdam, The Netherlands 5 .O. Box 513, 5600, MB, Eindhoven, the Netherlands 6Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Ahornstr. 55, 52074 Aachen, Germany 7Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons ER-C, Forschungszentrum Jülich GmbH, Jülich 52428, Germany 8 -Gruenter-Str. 21, 42119 Wuppertal, Germany 9Wuppertal Center for Smart Materials & Systems (CM@S), -Gruenter-Str. 21, 42119 Wuppertal, Germany 10 30, Aachen, Germany 11Low-dimensional Semiconductors Lab, 600 036, India *Corresponding Authors: ; 2 Abstract Metal halide perovskites (MHPs) have emerged as attractive optoelectronic materials because of high fluorescence quantum yield, broad color tunability, and excellent color purity. However, the ionic nature of MHPs makes them susceptible to polar solvents, leading to defect-induced nonradiative recombination and photoluminescence (PL) quenching. Here, we present a combined in-synthesis (in situ) and post-synthesis ion engineering to suppress nonradiative recombination and integrate multicolor MHP arrays on-chip through a perovskite-compatible photolithography process and in situ vapor-phase anion exchange. CsPbBr3@CsPbBr3-xTFAx nanoplatelets were grown on-chip via a single-step solution process incorporating trifluoroacetate (TFA-) pseudohalides. X-ray photoelectron spectroscopy revealed that TFApassivate uncoordinated Pb2+ ions on nanoplatelet surface and suppresses the formation of metallic lead (Pb0). This decreases the non-radiative recombination centers and yields a PL peak at 520 nm with a linewidth of 14.56 ± 0.5 nm. The nanoplatelets were patterned via a top-down photolithography process and selectively masked with a PMMA/Al2O3 stack to enable vaporphase anion exchange. The PL peak shifted in the unmasked regions from 520 nm to 413 nm, resulting in distinct green and blue emission arrays. Our method enables the scalable fabrication of highly luminescent, two-color MHP arrays with tailored optical properties, advancing their integration into next-generation optoelectronic devices. 3 1.1 Introduction Metal halide perovskites (MHPs), including nanowires and nanoplatelets, have garnered significant attention because of their high photoluminescence quantum yields and narrowband emission.1,2 Additionally, their emission wavelength can be precisely tuned by adjusting the halide composition.3,4 Furthermore, these nanostructures can support optical microcavities, such as Fabry-Perot and whispering gallery modes, facilitating strong light confinement and amplification.5,6 Despite these advantages, practical applications of perovskites in optoelectronics are limited by their inherent instability, which arises from their ionic nature and leads to defects that provide nonradiative recombination pathways. Several defect passivation methods have been proposed to suppress nonradiative recombination, such as surface chemical treatments,7-9 core-shell configurations,10-14 and ion engineering.15-20 Surface chemical treatments are post-synthetic processes in which small molecules or ligands are employed to neutralize trap states, primarily on the surface, resulting in improved PL.7-9 Core-shell structures, on the other hand, are formed during synthesis and involve encapsulating the perovskite core with a protective shell. These heterostructures enhance both the environmental and structural stability while also reducing the surface defect density.10-14 Ion engineering enables defect passivation through direct modification of the perovskite lattice, either during synthesis (in situ) or post-synthesis.15-20 In situ techniques involve introducing alkylammonium cations 21-23 and pseudohalides, which can incorporate either into the lattice or at the surface of the MHP. 18,24-26 The incorporation of pseudohalides such as thiocyanate (SCN-),27,28 formate (HCOO-),29 acetate (CH3COO-), 30 and tetrafluoroborate (BF4-)31 controls the crystallization process, enhances the preferential crystal orientation, and assists in defect passivation.26 Pseudohalides functionalized with carboxyl (O═C─O-) groups act as Lewis bases to passivate undercoordinated Pb2+ ions, resulting in welloriented crystals with fewer trap states.18,32-34 Additionally, fluorine-containing pseudohalides 4 improve the ambient stability and surface hydrophobicity of MHPs.35 Trifluoroacetate (TFA-) halides, which contain both carboxyl and fluorine functional groups, have emerged as promising candidates for influencing homogeneous nucleation and crystal growth orientation, enhancing defect passivation, and improving long-term stability.19,36,37 Despite the ability to tune the emission color of perovskite materials across the visible spectrum by adjusting the halide composition,38 the in situ synthesis of chlorine-containing, blue-emitting perovskites remains limited.39 This limitation arises from the low solubility of chlorine-based precursors, such as lead chloride (PbCl2) and cesium chloride (CsCl), in common solvents such as dimethyl sulfoxide (DMSO) and dimethylformamide (DMF),40-43 which leads to poor film quality with high defect densities. In addition to in situ techniques, post-synthesis approaches, particularly anion exchange, play a crucial role in fine-tuning the optical emission of MHPs.3,4 Anion exchange can be performed in the liquid or vapor phase,3,44,45 although precise, selective liquid‒phase ion exchange is constrained by precursor solubility, slow ion exchange kinetics, and the dissolution of the original MHPs. In contrast, vapor-phase methods could avoid these effects, offering a more controlled approach that allows for spatially selective ion exchange and accurate color tuning.46 Here, we combine in situ and post-synthesis ion engineering approaches to tune the optical properties of all-inorganic cesium lead bromide (CsPbBr3) perovskite. We propose a single-step solution-based approach for the in situ growth of perovskite nanoplatelets via cesium trifluoroacetate (CsTFA), which facilitates the dissolution of lead bromide (PbBr2) in organic 36 provides cesium ions (Cs+), and introduces TFA- as a pseudohalide anion. TFA- promotes preferential crystal orientation and reduces surface defects, resulting in highly luminescent allinorganic perovskite nanoplatelets. The nanoplatelets are denoted as CsPbBr3@CsPbBr3-xTFAx. We applied our versatile perovskite-compatible top-down photolithography technique to pattern the nanoplatelets.47,48 We then performed a facile vapor-phase anion exchange process to create 5 two distinct emissive arrays by selectively converting bromine-based green-emissive nanoplatelets into chlorine-based blue-emissive CsPbCl3 nanoplatelets. The resulting CsPbCl3 exhibited improved phase stability under ambient conditions. 1.2 Results and Discussion We first investigated the influence of TFA- on the perovskite structure and properties by preparing two types of samples: CsPbBr3@CsPbBr3-xTFAx nanoplatelets and reference CsPbBr3 thin films. Both were fabricated via spin-coating followed by post-annealing at 80°C. The nanoplatelets were synthesized from a precursor solution containing CsTFA and lead bromide (PbBr2), whereas the thin films were deposited from a solution of cesium bromide (CsBr) and PbBr2 (details can be found in the Experimental Section). For simplicity, in the following discussion, the notation CsPbBr3@TFA refers to the CsPbBr3@CsPbBr3-xTFAx nanoplatelets, and CsPbBr3 denotes the thin films. A schematic illustration of the synthesis process for CsPbBr3@TFA nanoplatelets is shown in Figure 1a. The scanning electron microscopy (SEM) images of the CsPbBr3@TFA shown in Figure 1b and Figure S1a veal welldefined rectangular nanoplatelets with an average size of 218 ± 2 nm (Figure S1a-inset). Atomic force microscopy (AFM) measurements further confirmed the rectangular morphology and a height of 90 nm (Figure S1b,c). In contrast, thin-film samples prepared similarly but without TFA- exhibited irregularly shaped grains, as shown in Figure S1 d,e. Ultraviolet-visible (UV-Vis) absorption and PL spectroscopy were conducted to investigate the optical properties of the CsPbBr3@TFA nanoplatelets and CsPbBr3 thin films. The CsPbBr3@TFA nanoplatelets showed an intense excitonic peak at 517 nm (Figure 1c), which is consistent with the characteristic absorption edge peak of the CsPbBr3 thin film (Figure S1f), and a narrow PL peak at 520 nm with a full width at half maximum (FWHM) of 14.56 ± 0.5 nm (Figure 1c). The phase and crystallinity of the nanoplatelets was analyzed via X-ray diffraction (XRD) measurements. As shown in Figure 1d, the XRD pattern of CsPbBr3@TFA exhibits 6 intense peaks corresponding to the pure 3D CsPbBr3 phase (reference code: 00-018-0364), together with additional peaks at 7.33o and 8.14o, which confirms the presence of a 2D layered perovskite phase, presumably CsPbBr3-xTFAx.49-51 The secondary phase, Cs4PbBr6, which appears in the XRD pattern of the CsPbBr3 thin films (Figure 1d), was not detected in the nanoplatelets. Grazing incidence X-ray diffraction (GIXRD) patterns of the (001) plane of the CsPbBr3 phase for both the thin film and nanoplatelets are shown in Figure S2a,b. In the CsPbBr3 thin film, increasing the incident angle (Ψ) from 0.2° to 5° led to a gradual shift of the diffraction peak toward lower angles, suggesting the presence of tensile strain within the film. In contrast, the peak shift was significantly reduced in the nanoplatelets, supporting the stressrelieving effect of TFA-.25,52 This relaxation is likely facilitated by surface-bound TFA- anions and defect passivation.25,52,53 The local crystal orientations of the CsPbBr3@TFA nanoplatelets and CsPbBr3 thin films were mapped via electron backscatter diffraction (EBSD). The nanoplatelets aligned preferentially along the [001] crystal direction relative to the sample normal direction, regardless of the crystallinity of the substrate (crystalline Si or amorphous Si/SiO2) (Figure 2). The corresponding pole figure displayed a pronounced central intensity, with maximum multiple of random distribution (MRD) values of approximately 18.87 and 16.04, respectively (Figure 2 b, d). These high MRD values suggest that the nanoplatelets exhibit preferential orientation. In contrast, the EBSD orientation map of the CsPbBr3 thin film revealed a polycrystalline structure with a variety of grain orientations (Figure S3). The (001) pole figure exhibited a more uniform distribution with a lower maximum MRD value of approximately 2.01, which indicates more random grain orientations. These morphological and crystallographic analyses indicated that TFA- pseudohalides play a key role in regulating the crystal orientation of the perovskite.54 The elemental distribution of the nanoplatelets were investigated via high-resolution transmission electron microscopy (HRTEM) and energy-dispersive X-ray spectroscopy 7 (EDXS) mapping. The HRTEM image of CsPbBr3@TFA nanoplatelets and the corresponding fast Fourier transform (FFT), taken along the [001] zone axis, are shown in Figure 3a-c. The FFT pattern confirms the crystalline nature of nanoplatelets and indicates the presence of the pure 3D CsPbBr3 phase in the bulk. However, the sensitivity of the nanoplatelets to high-energy electron beams (60 kV) prevented the acquisition of reliable crystalline phase information over larger areas (Figure S4). The EDX elemental mapping (Figure 3d) reveals a homogeneous distribution of cesium (Cs), lead (Pb), and bromine (Br) within the bulk of the nanoplatelets. In addition, we found an intense carbon signal associated with the coating layer of the lamella and a very weak fluorine (F) signal from TFA-. The silicon (Si) and oxygen (O) signals in the EDX map are attributed to the supporting substrate. The chemical states of C, F, Pb, Br, and Cs at the surface of the CsPbBr3@TFA nanoplatelets and their bonding environments were investigated via X-ray photoelectron spectroscopy (XPS) (Figure 4a-e). All the spectra were corrected using the adventitious C 1s signal at 284.8 eV. The presence of TFA- on the surface of CsPbBr3@TFA was confirmed through the C 1s and F 1s XPS spectra. The C 1s spectrum showed two distinct peaks at 292.5 eV and 288.8 eV, corresponding to -CF355 and -O-C=O groups,36 respectively (Figure 4a). In the F 1s spectrum in Figure 4b, the peak at 688.7 eV corresponds to -CF3,36,56 whereas the peak at 683.8 eV corresponds to the F ions bonded to uncoordinated Pb2+ on the nanoplatelet surface, forming dangling bonds.57,58 The Pb 4f spectrum in Figure 4c reveals two dominant peaks for Pb 4f7/2 and Pb 4f5/2, which indicate a 0.66 eV shift toward higher binding energies than those of CsPbBr3 thin films. This shift represents the passivation of uncoordinated Pb2+ ions on the surface of the nanoplatelets.59,60 Moreover, minor shoulder peaks at ~141 eV and ~137 eV indicate the presence of metallic lead (Pb0).61 The peak area ratio of Pb0 to Pb4f decreased from 0.12 in the thin film to 0.084 in the CsPbBr3@TFA nanoplatelets, further supporting effective surface passivation by TFA-.62,63 Similar binding energy shifts of approximately 0.2 eV to 0.4 eV were observed in the Cs 3d and Br 3d spectra (Figure 4d,e). These spectral shifts, 8 combined with the reduction in metallic lead content and the appearance of 2D layered perovskite diffraction peaks in the XRD pattern (Figure 2d), suggest that TFA- may contribute to surface defect passivation36,59 and facilitate the formation of a 2D CsPbBr3-xTFAx phase on the surface of the nanoplatelets. Based on the XPS results and TEM EDX data, which did not show significant traces of C or F in the cores of the nanoplatelets, we conclude that TFA- ions are located on the surface of the nanoplatelets rather than being incorporated into the perovskite crystal structure. We carried out terahertz scattering near-field optical microscopy (THz-s-SNOM) at 0.6 THz to investigate the influence of TFA- surface defect passivation on the electronic quality of the perovskite nanoplatelets (Figure 5a-c; see details in the Experimental Section).64 THz-s-SNOM optically scans the local THz conductivity via a contact-free method with a nanoscale 50 nm spatial resolution.65,66 Experimental near-field images of the CsPbBr3 nanoplatelets on the highly doped silicon substrate (Figure 5a-c) display a negative THz phase φ2 on the nanoplatelets relative to the highly doped substrate. This is exemplary shown in line profiles for two nanoplatelets (Figure 5d). The response, accompanied by a reduced THz near-field magnitude S2 on the nanoplatelets relative to the substrate, is characteristic of undoped, intrinsic semiconductor materials. The near-field contrasts of the CsPbBr3@TFA nanoplatelets relative to the highly doped silicon substrate were then modeled via the finite-dipole model67 for the samples 68 (input parameters: tapping amplitude 200 nm, tip radius 40 nm, spheroid length L = 600 nm, film thickness 100 nm) and a Drude conductivity model (effective electron mass = 0.26 m0, where m0 is the electron mass, and the THz permittivity of the lattice is εL = 5.8).64,69,70 Assuming a carrier mobility of 50 cm2V-1s-1, a typical mobility in lead-halide perovskites,71 we find that the extracted doping carrier densities are well below 1016 cm-3 (Figure 5e), reflecting the intrinsic semiconducting behavior of the material. 9 The low degree of doping can be attributed to efficient surface defect passivation, which reduces nonradiative recombination and enhances the emission intensity of CsPbBr3@TFA compared with those of the CsPbBr3 samples (Figure 6a,b, Figure S5). This enhancement persisted for more than 1440 hours (Figure S6). Additionally, the PL intensity increased over time, which can be attributed to a slow passivation effect of TFA-.72 Building on these observations, we propose a crystal configuration for the nanoplatelets, as shown in the schematic illustration in Figure 6c. Finally, we fabricated perovskite arrays containing CsPbBr3@TFA nanoplatelets on a chip. We selectively masked them with a PMMA/Al2O3 stack via a two-step top-down photolithography process (details in the Experimental Section).47,48 The process flow is illustrated in Figure 7a. PL spectroscopy was used to monitor the stability of the nanoplatelets during the structuring and encapsulation steps. No changes were observed in the PL spectra (Figure S7), confirming the stability of the nanoplatelets throughout the patterning process. The chip was subsequently exposed to chlorine vapors to initiate an anion exchange process, as schematically shown in Figure 7b (details in the Experimental Section). The absorption and PL spectra taken at different times during the anion exchange process show a gradual shift in the absorption edge and emission peaks, from 517 to 412 nm and 522 to 413 nm, respectively, as the exposure time increases (Figure S8). The peak at 413 nm is characteristic of CsPbCl3 73 and confirms the completion of halide exchange after 20 minutes. Next, the PMMA/Al2O3 stack was removed with toluene. The absorption and PL spectra of the pixel array were recorded after resist stripping and revealed green emission from CsPbBr3 and blue emission from CsPbCl3 (Figure 7c). The XRD pattern of the halide-exchanged CsPbCl3 region indicates intense peaks corresponding to a pure CsPbCl3 cubic phase (Figure 7d, reference code: 01-084-0437). The shift in the peaks, compared with those of the CsPbBr3@TFA pattern, is due to the smaller ionic radius of Cl-.74 Furthermore, the XRD results confirm the phase stability of the CsPbCl3 nanoplatelets under ambient conditions (Figure S9). The Cl 2p XPS spectrum in Figure 7e 10 displays binding energy peaks at 197.5 eV and 199.1 eV, assigned to Cl2p3/2 and Cl2p1/2, respectively, as expected for CsPbCl3.75 The depth profile analysis in Figure 7f confirms the absence of residual Br- ions in the halide-exchanged region, indicating the complete replacement of bromide with chloride ions. The optical micrographs of square perovskite structures (Figure 8a-c) demonstrate the successful patterning of nanoplatelets on-chip. The confocal microscopy images in Figure 8d-h further reveal two distinct emission colors corresponding to the typical green and blue emission of the two MHPs. This outcome is a direct result of the selective halide exchange process, which allows precise control of the PL emission wavelength in different regions of the same chip. These findings highlight our scalable approach that combines top-down patterning with wavelength tuning, enabling distinct green and blue emission on a single substrate for integrated optoelectronic applications. Conclusion In summary, this study demonstrated the in situ growth of CsPbBr3@CsPbBr3-xTFAx nanoplatelets by employing CsTFA as a multifunctional cesium source. We used CsTFA as a source of TFA- pseudohalides, which suppressed nonradiative recombination, enhanced the PL intensity, and promoted the preferential orientation of nanoplatelets along the [001] crystal direction, regardless of the substrate crystallinity. XPS analysis revealed that TFA- facilitated the passivation of uncoordinated Pb2+ and reduced the metallic lead content. THz conductivity measurements confirmed successful surface passivation with TFA-, which minimized the doping levels in the nanoplatelets. The nanoplatelets were etched into arrays with a perovskitecompatible top-down patterning process and selectively color-tuned on-chip by substituting Brwith Cl- via vapor-phase anion exchange. This resulted in two-color green and blue emission MHP arrays on a single chip, marking a significant step toward next-generation multi-emitter LEDs and, possibly, lasers. 11 Experimental CsPbBr3@CsPbBr3-xTFAx nanoplatelets and CsPbBr3 thin film deposition: Si and Si/SiO2 substrates were sequentially sonicated in acetone and isopropyl alcohol for 10 minutes each. Afterward, they were blow-dried with nitrogen and treated with oxygen plasma (Tepla, Semi 300) for 10 minutes. The cleaned substrates were then transferred to a nitrogen-filled glove box. CsPbBr3@CsPbBr3-xTFAx nanoplatelets were grown using a perovskite precursor solution containing 0.191 g of cesium trifluoroacetate (CsTFA, Thermo Scientific) and 0.259 g of lead bromide (PbBr2, ≥ 98%, Sigma Aldrich) dissolved in 3 mL of dimethyl sulfoxide (DMSO, anhydrous, ≥ 99.9%). CsPbBr3 thin films were deposited using a conventional solution with cesium bromide (CsBr, >99.0%, TCI) and PbBr2 in DMSO. The solution was stirred overnight at 60°C and filtered through polytetrafluoroethylene (PTFE) filters with a pore size of 0.2 μm. A volume of 80 μL of the precursor solution was spin-coated on each 4 cm2 substrate at 2000 rpm for 40 seconds, followed by annealing at 80°C for 10 minutes. Integration of perovskite arrays on-chip: A perovskite-compatible top-down patterning process utilizing a double-stack resist consisting of a poly(methyl methacrylate) (PMMA) protective layer and an AZ Mir 701 positive-tone photoresist was conducted as described in previous work.47,48 The samples were exposed to ultraviolet (UV) light for 25 s via a contact lithography mask aligner (EVG 420) system. The samples were then baked for 90 s at 115°C. The patterns were developed in an MF26A developer for 35 s, followed by rinsing with deionized water for 7 seconds. The samples were subsequently etched via plasma with BCl3 and HBr gases via a PlasmaLab System 100 inductively coupled plasma (ICP)-RIE tool (Oxford Instruments). Finally, the resist stack was removed by immersing the samples in toluene at 80°C for 1 hour. 12 Selective Anion Exchange: Photolithography and etching steps were performed to define masked and unmasked regions. A PMMA (350-400 nm)/evaporated Al2O3 (5 nm)/AZ Mir 701 positive-tone photoresist stack served as a mask for certain perovskite arrays during the anion exchange process. Anion exchange was performed by exposing the unmasked CsPbBr3@TFA nanoplatelets to chlorine vapors, which was achieved by heating hydrochloric acid (HCl) at 108°C in a sealed vial for 20 minutes. Characterization: Scanning electron microscopy (SEM) measurements were performed in a Zeiss SUPRA 60 at 4 kV with a working distance of 4 mm. Atomic force microscopy (AFM) images were taken using a Bruker Dimension Icon instrument in tapping mode. Electron backscatter diffraction (EBSD) measurements were performed on an FEI Verios 460L SEM with an EDAX Clarity direct electron detection system. The sample was held at a 70° tilt angle. An accelerating voltage of 8 kV was used with a beam current of 400 pA. Electron diffraction patterns were collected with a step size of 25 nm and detector exposure time of 15 ms. The resulting diffraction patterns were postprocessed and re-indexed using spherical indexing (bandwidth of 255) in OIM analysis 9. For spherical indexing, a simulated master pattern of cubic CsPbBr3 (space group Pm-3m) was used. For pole figure plotting, grain CI standardization was performed; only grain average orientations corresponding to CsPbBr3 were plotted, and silicon data and unindexed points (CI 20%. J. Mater. Chem. C 2021, 9 (32), 10406-10413. https://doi.org/10.1039/D1TC02335G. (21) Shi, Z.; Guo, R.; Luo, R.; Wang, X.; Ma, J.; Feng, J.; Niu, X.; Alvianto, E.; Jia, Z.; Guo, X.; Liang, H.; Chen, J.; Li, Z.; Sun, K.; Jiang, X.; Wu, Y.; Müller-Buschbaum, P.; Hu, W.; Hou, Y. "T-Shaped" Carbazole Alkylammonium Cation Passivation in Perovskite Solar Cells. ACS Energy Lett. 2024, 9 (2), 419-427. https://doi.org/10.1021/acsenergylett.3c02357. (22) Zhang, J.; Wu, J.; Langner, S.; Zhao, B.; Xie, Z.; Hauch, J. A.; Afify, H. A.; Barabash, A.; Luo, J.; Sytnyk, M.; Meng, W.; Zhang, K.; Liu, C.; Osvet, A.; Li, N.; Halik, M.; Heiss, W.; Zhao, Y.; Brabec, C. J. Exploring the Steric Hindrance of Alkylammonium Cations in the Structural Reconfiguration of Quasi-2D Perovskite Materials Using a High-throughput Experimental Platform. Adv. Funct. Mater. 2022, 32 (43), 2207101. https://doi.org/10.1002/adfm.202207101. (23) Han, D.; Chen, S.; Du, M.-H. Role of Polycyclic Aromatic Alkylammonium Cations in Tuning the Electronic Properties and Band Alignment of Two-Dimensional Hybrid Perovskite Semiconductors. J. Phys. Chem. Lett. 2021, 12 (40), 9754-9760. https://doi.org/10.1021/acs.jpclett.1c02603. (24) Wang, F.; Jiang, X.; Chen, H.; Shang, Y.; Liu, H.; Wei, J.; Zhou, W.; He, H.; Liu, W.; Ning, Z. 2DQuasi-2D-3D Hierarchy Structure for Tin Perovskite Solar Cells with Enhanced Efficiency and Stability. Joule 2018, 2 (12), 2732-2743. https://doi.org/10.1016/j.joule.2018.09.012. (25) Sun, Y.; Miao, W.; Sun, W.; Niu, Z.; Yin, R.; Huo, X.; Wang, K.; You, T.; Yin, P. Lattice Strain Regulation and Halogen Vacancies Passivation Enable High-Performance Formamidine-Based Perovskite Solar Cells. Small 2024, 20 (46), 2404272. https://doi.org/10.1002/smll.202404272. (26) Walker, B.; Kim, G.; Kim, J. Y. Pseudohalides in Lead-Based Perovskite Semiconductors. Adv. Mater. 2019, 31 (20), 1807029. https://doi.org/10.1002/adma.201807029. (27) Yang, S.; Liu, W.; Zuo, L.; Zhang, X.; Ye, T.; Chen, J.; Li, C.-Z.; Wu, G.; Chen, H. Thiocyanate Assisted Performance Enhancement of Formamidinium Based Planar Perovskite Solar Cells through a Single One-Step Solution Process. J. Mater. Chem. A 2016, 4 (24), 9430-9436. https://doi.org/10.1039/C6TA02999J. (28) Halder, A.; Chulliyil, R.; Subbiah, A. S.; Khan, T.; Chattoraj, S.; Chowdhury, A.; Sarkar, S. K. Pseudohalide (SCN- )-Doped MAPbI3 Perovskites: A Few Surprises. J. Phys. Chem. Lett. 2015, 6 (17), 3483-3489. https://doi.org/10.1021/acs.jpclett.5b01327. (29) Jeong, J.; Kim, M.; Seo, J.; Lu, H.; Ahlawat, P.; Mishra, A.; Yang, Y.; Hope, M. A.; Eickemeyer, F. T.; Kim, M.; Yoon, Y. J.; Choi, I. W.; Darwich, B. P.; Choi, S. J.; Jo, Y.; Lee, J. H.; Walker, B.; 17 Zakeeruddin, S. M.; Emsley, L.; Rothlisberger, U.; Hagfeldt, A.; Kim, D. S.; Grätzel, M.; Kim, J. Y. Pseudo-Halide Anion Engineering for α-FAPbI3 Perovskite Solar Cells. Nature 2021, 592 (7854), 381-385. https://doi.org/10.1038/s41586-021-03406-5. (30) Li, H.; Xu, Y.; Ramakrishnan, S.; Zhang, Y.; Cotlet, M.; Xu, T. L.; Yu, Q. Pseudo-Halide Anion Engineering for Efficient Quasi-2D Ruddlesden-Popper Tin Perovskite Solar Cells. Cell Rep. Phys. Sci. 2022, 3 (10), 101060. https://doi.org/10.1016/j.xcrp.2022.101060. (31) Tao, J.; Liu, X.; Shen, J.; Han, S.; Guan, L.; Fu, G.; Kuang, D.-B.; Yang, S. F-Type Pseudo-Halide Anions for High-Efficiency and Stable Wide-Band-Gap Inverted Perovskite Solar Cells with Fill Factor Exceeding 84%. ACS Nano 2022, 16 (7), 10798-10810. https://doi.org/10.1021/acsnano.2c02876. (32) Jeong, J.; Kim, M.; Seo, J.; Lu, H.; Ahlawat, P.; Mishra, A.; Yang, Y.; Hope, M. A.; Eickemeyer, F. T.; Kim, M.; Yoon, Y. J.; Choi, I. W.; Darwich, B. P.; Choi, S. J.; Jo, Y.; Lee, J. H.; Walker, B.; Zakeeruddin, S. M.; Emsley, L.; Rothlisberger, U.; Hagfeldt, A.; Kim, D. S.; Grätzel, M.; Kim, J. Y. Pseudo-Halide Anion Engineering for α-FAPbI3 Perovskite Solar Cells. Nature 2021, 592 (7854), 381-385. https://doi.org/10.1038/s41586-021-03406-5. (33) Halder, A.; Chulliyil, R.; Subbiah, A. S.; Khan, T.; Chattoraj, S.; Chowdhury, A.; Sarkar, S. K. Pseudohalide (SCN- )-Doped MAPbI3 Perovskites: A Few Surprises. J. Phys. Chem. Lett. 2015, 6 (17), 3483-3489. https://doi.org/10.1021/acs.jpclett.5b01327. (34) Lin, P.; Loganathan, A.; Raifuku, I.; Li, M.; Chiu, Y.; Chang, S.; Fakharuddin, A.; Lin, C.; Guo, T.; Schmidt-Mende, L.; Chen, P. Pseudo-Halide Perovskite Solar Cells. Adv. Energy Mater. 2021, 11 (28), 2100818. https://doi.org/10.1002/aenm.202100818. (35) Tao, J.; Liu, X.; Shen, J.; Han, S.; Guan, L.; Fu, G.; Kuang, D.-B.; Yang, S. F-Type Pseudo-Halide Anions for High-Efficiency and Stable Wide-Band-Gap Inverted Perovskite Solar Cells with Fill Factor Exceeding 84%. ACS Nano 2022, 16 (7), 10798-10810. https://doi.org/10.1021/acsnano.2c02876. (36) Wang, H.; Zhang, X.; Wu, Q.; Cao, F.; Yang, D.; Shang, Y.; Ning, Z.; Zhang, W.; Zheng, W.; Yan, Y.; Kershaw, S. V.; Zhang, L.; Rogach, A. L.; Yang, X. Trifluoroacetate Induced Small-Grained CsPbBr3 Perovskite Films Result in Efficient and Stable Light-Emitting Devices. Nat. Commun. 2019, 10 (1), 665. https://doi.org/10.1038/s41467-019-08425-5. (37) Kuang, Y.; Yang, L.; Ma, J.; Bie, T.; Zhang, D.; Xue, Y.; Zhou, N.; Shao, M. High-Performance Pure Red Quasi-Two-Dimensional Perovskite Light-Emitting Diodes with Bifunctional Potassium Trifluoroacetate Additive. ACS Mater. Lett. 2023, 5 (11), 2922-2928. https://doi.org/10.1021/acsmaterialslett.3c00557. (38) Ou, Q.; Bao, X.; Zhang, Y.; Shao, H.; Xing, G.; Li, X.; Shao, L.; Bao, Q. Band Structure Engineering in Metal Halide Perovskite Nanostructures for Optoelectronic Applications. Nano Mater. Sci. 2019, 1 (4), 268-287. https://doi.org/10.1016/j.nanoms.2019.10.004. (39) Liu, M.; Johnston, M. B.; Snaith, H. J. Efficient Planar Heterojunction Perovskite Solar Cells by Vapour Deposition. Nature 2013, 501 (7467), 395-398. https://doi.org/10.1038/nature12509. (40) Li, W.; Li, J.; Niu, G.; Wang, L. Effect of Cesium Chloride Modification on the Film Morphology and UV-Induced Stability of Planar Perovskite Solar Cells. J. Mater. Chem. A 2016, 4 (30), 1168811695. https://doi.org/10.1039/C5TA09165A. (41) Haeger, T.; Ketterer, M.; Bahr, J.; Pourdavoud, N.; Runkel, M.; Heiderhoff, R.; Riedl, T. Thermal Properties of CsPbCl 3 Thin Films across Phase Transitions. J. Phys. Mater. 2020, 3 (2), 024004. https://doi.org/10.1088/2515-7639/ab749d. (42) Gui, P.; Zhou, H.; Yao, F.; Song, Z.; Li, B.; Fang, G. Space-Confined Growth of Individual Wide Bandgap Single Crystal CsPbCl3 Microplatelet for Near-Ultraviolet Photodetection. Small 2019, 15 (39), 1902618. https://doi.org/10.1002/smll.201902618. (43) Prochowicz, D.; Yadav, P.; Saliba, M.; Kubicki, D. J.; Tavakoli, M. M.; Zakeeruddin, S. M.; Lewiński, J.; Emsley, L.; Grätzel, M. One-Step Mechanochemical Incorporation of an Insoluble Cesium Additive for High Performance Planar Heterojunction Solar Cells. Nano Energy 2018, 49, 523528. https://doi.org/10.1016/j.nanoen.2018.05.010. (44) Li, H.; Li, J.; Bao, Y.; Li, J.; He, C.; Wang, H.; Zhang, Y.; Tang, H.; Xu, J.; Fang, Y.; Liang, S.; Yang, Y. A Liquid Phase Anion-Exchange Approach to High-Quality All-Inorganic Halide Perovskite Micro18 and Nanowires. J. Mater. Sci. 2021, 56 (28), 16059-16067. https://doi.org/10.1007/s10853-02106316-z. (45) Zhang, Y.; Lu, D.; Gao, M.; Lai, M.; Lin, J.; Lei, T.; Lin, Z.; Quan, L. N.; Yang, P. Quantitative Imaging of Anion Exchange Kinetics in Halide Perovskites. Proc. Natl. Acad. Sci. 2019, 116 (26), 1264812653. https://doi.org/10.1073/pnas.1903448116. (46) Cen, G.; Xia, Y.; Zhao, C.; Fu, Y.; An, Y.; Yuan, Y.; Shi, T.; Mai, W. Precise Phase Control of LargeScale Inorganic Perovskites via Vapor-Phase Anion-Exchange Strategy. arXiv 2020. https://doi.org/10.48550/ARXIV.2009.14350. (47) Cegielski, P. J.; Giesecke, A. L.; Neutzner, S.; Porschatis, C.; Gandini, M.; Schall, D.; Perini, C. A. R.; Bolten, J.; Suckow, S.; Kataria, S.; Chmielak, B.; Wahlbrink, T.; Petrozza, A.; Lemme, M. C. Monolithically Integrated Perovskite Semiconductor Lasers on Silicon Photonic Chips by Scalable Top-Down Fabrication. Nano Lett. 2018, 18 (11), 6915-6923. https://doi.org/10.1021/acs.nanolett.8b02811. (48) Fabrizi, F.; Goudarzi, S.; Khan, S.; Mohammad, T.; Starodubtceva, L.; Cegielski, P. J.; Thiel, F.; Özen, S.; Schiffer, M.; Lang, F.; Bolívar, P. H.; Riedl, T.; Müller-Newen, G.; Anantharaman, S. B.; Mohammadi, M.; Lemme, M. C. A Versatile Top-Down Patterning Technique for Perovskite OnChip Integration. ACS Nano 2025, 19 (33), 30428-30440. https://doi.org/10.1021/acsnano.5c10397. (49) Li, G.; Song, J.; Wu, J.; Song, Z.; Wang, X.; Sun, W.; Fan, L.; Lin, J.; Huang, M.; Lan, Z.; Gao, P. Efficient and Stable 2D@3D/2D Perovskite Solar Cells Based on Dual Optimization of Grain Boundary and Interface. ACS Energy Lett. 2021, 6 (10), 3614-3623. https://doi.org/10.1021/acsenergylett.1c01649. (50) Liao, Y.; Liu, H.; Zhou, W.; Yang, D.; Shang, Y.; Shi, Z.; Li, B.; Jiang, X.; Zhang, L.; Quan, L. N.; Quintero-Bermudez, R.; Sutherland, B. R.; Mi, Q.; Sargent, E. H.; Ning, Z. Highly Oriented LowDimensional Tin Halide Perovskites with Enhanced Stability and Photovoltaic Performance. J. Am. Chem. Soc. 2017, 139 (19), 6693-6699. https://doi.org/10.1021/jacs.7b01815. (51) Lian, Z.; Wang, B.; Wu, Z.; Lin, H.; Ding, T.; Wang, J.; Zhang, L.; Xu, J.; Xiao, P.; Xu, H.; Wang, S.; Ng, K. W. Water-Assisted Synthesis of Layer-Controlled CsPbBr 3 Nanoplates Spontaneously Encapsulated in PbBr(OH). Adv. Opt. Mater. 2024, 12 (19), 2400333. https://doi.org/10.1002/adom.202400333. (52) Zhang, W.; Liu, J.; Song, W.; Shan, J.; Guan, H.; Zhou, J.; Meng, Y.; Tong, X.; Zhu, J.; Yang, M.; Ge, Z. Chemical Passivation and Grain-Boundary Manipulation via in Situ Cross-Linking Strategy for Scalable Flexible Perovskite Solar Cells. Sci. Adv. 2025, 11 (5), eadr2290. https://doi.org/10.1126/sciadv.adr2290. (53) Fang, Z.; Deng, B.; Jin, Y.; Yang, L.; Chen, L.; Zhong, Y.; Feng, H.; Yin, Y.; Liu, K.; Li, Y.; Zhang, J.; Huang, J.; Zeng, Q.; Wang, H.; Yang, X.; Yang, J.; Tian, C.; Xie, L.; Wei, Z.; Xu, X. Surface Reconstruction of Wide-Bandgap Perovskites Enables Efficient Perovskite/Silicon Tandem Solar Cells. Nat. Commun. 2024, 15 (1), 10554. https://doi.org/10.1038/s41467-024-54925-4. (54) Lee, J.; Shin, Y. S.; Oleiki, E.; Seo, J.; Roe, J.; Lee, D.; Lee, Y.; Song, T.; Jang, H.; Song, J. W.; Lee, W.; Lee, G.; Kim, J. Y.; Kim, D. S. Constructing Orderly Crystal Orientation with a Bidirectional Coordinator for High Efficiency and Stable Perovskite Solar Cells. Energy Environ. Sci. 2024, 17 (16), 6003-6012. https://doi.org/10.1039/D4EE02017K. (55) Xu, Y.; Ali, A.; Shehzad, K.; Meng, N.; Xu, M.; Zhang, Y.; Wang, X.; Jin, C.; Wang, H.; Guo, Y.; Yang, Z.; Yu, B.; Liu, Y.; He, Q.; Duan, X.; Wang, X.; Tan, P.; Hu, W.; Lu, H.; Hasan, T. Solvent-Based SoftPatterning of Graphene Lateral Heterostructures for Broadband High-Speed MetalSemiconductor-Metal Photodetectors. Adv. Mater. Technol. 2017, 2 (2), 1600241. https://doi.org/10.1002/admt.201600241. (56) Li, G.; Huang, Q.; He, X.; Gao, Y.; Wang, D.; Kim, S. H.; Wang, D. Self-Formed Hybrid Interphase Layer on Lithium Metal for High-Performance Lithium-Sulfur Batteries. ACS Nano 2018, 12 (2), 1500-1507. https://doi.org/10.1021/acsnano.7b08035. (57) Antonio, A. M.; Dworzak, M. R.; Korman, K. J.; Yap, G. P. A.; Bloch, E. D. Anion Binding as a Strategy for the Synthesis of Porous Salts. Chem. Mater. 2022, 34 (24), 10823-10831. https://doi.org/10.1021/acs.chemmater.2c01476. 19 (58) Zhai, Y.; Bai, X.; Pan, G.; Zhu, J.; Shao, H.; Dong, B.; Xu, L.; Song, H. Effective Blue-Violet Photoluminescence through Lanthanum and Fluorine Ions Co-Doping for CsPbCl3 Perovskite Quantum Dots. Nanoscale 2019, 11 (5), 2484-2491. https://doi.org/10.1039/C8NR09794A. (59) Kuang, Y.; Yang, L.; Ma, J.; Bie, T.; Zhang, D.; Xue, Y.; Zhou, N.; Shao, M. High-Performance Pure Red Quasi-Two-Dimensional Perovskite Light-Emitting Diodes with Bifunctional Potassium Trifluoroacetate Additive. ACS Mater. Lett. 2023, 5 (11), 2922-2928. https://doi.org/10.1021/acsmaterialslett.3c00557. (60) Liu, Y.; Wang, S.; Yu, Z.; Chen, G.; Wang, C.; Wang, T.; Ke, W.; Fang, G. A Multifunctional Additive Strategy Enables Efficient Pure-Blue Perovskite Light-Emitting Diodes. Adv. Mater. 2023, 35 (35), 2302161. https://doi.org/10.1002/adma.202302161. (61) Zhang, W.; Pathak, S.; Sakai, N.; Stergiopoulos, T.; Nayak, P. K.; Noel, N. K.; Haghighirad, A. A.; Burlakov, V. M.; deQuilettes, D. W.; Sadhanala, A.; Li, W.; Wang, L.; Ginger, D. S.; Friend, R. H.; Snaith, H. J. Enhanced Optoelectronic Quality of Perovskite Thin Films with Hypophosphorous Acid for Planar Heterojunction Solar Cells. Nat. Commun. 2015, 6 (1), 10030. https://doi.org/10.1038/ncomms10030. (62) Zhang, W.; Pathak, S.; Sakai, N.; Stergiopoulos, T.; Nayak, P. K.; Noel, N. K.; Haghighirad, A. A.; Burlakov, V. M.; deQuilettes, D. W.; Sadhanala, A.; Li, W.; Wang, L.; Ginger, D. S.; Friend, R. H.; Snaith, H. J. Enhanced Optoelectronic Quality of Perovskite Thin Films with Hypophosphorous Acid for Planar Heterojunction Solar Cells. Nat. Commun. 2015, 6 (1), 10030. https://doi.org/10.1038/ncomms10030. (63) Xu, C.; Zuo, L.; Hang, P.; Guo, X.; Pan, Y.; Zhou, G.; Chen, T.; Niu, B.; Xu, X.; Hong, Z.; Wang, D.; Zhu, H.; Yu, X.; Yang, D.; Chen, H. Synergistic Effects of Bithiophene Ammonium Salt for HighPerformance Perovskite Solar Cells. J. Mater. Chem. A 2022, 10 (18), 9971-9980. https://doi.org/10.1039/D2TA01349E. (64) Schäffer, S.; Ogolla, C. O.; Loth, Y.; Haeger, T.; Kreusel, C.; Runkel, M.; Riedl, T.; Butz, B.; Wigger, A. K.; Bolívar, P. H. Imaging the Terahertz Nanoscale Conductivity of Polycrystalline CsPbBr 3 Perovskite Thin Films. Nano Lett. 2023, 23 (6), 2074-2080. https://doi.org/10.1021/acs.nanolett.2c03214. (65) Cocker, T. L.; Jelic, V.; Hillenbrand, R.; Hegmann, F. A. Nanoscale Terahertz Scanning Probe Microscopy. Nat. Photonics 2021, 15 (8), 558-569. https://doi.org/10.1038/s41566-021-008356. (66) Hillenbrand, R.; Abate, Y.; Liu, M.; Chen, X.; Basov, D. N. Visible-to-THz near-Field Nanoscopy. Nat. Rev. Mater. 2025, 10 (4), 285-310. https://doi.org/10.1038/s41578-024-00761-3. (67) Cvitkovic, A.; Ocelic, N.; Hillenbrand, R. Analytical Model for Quantitative Prediction of Material Contrasts in Scattering-Type near-Field Optical Microscopy. Opt. Express 2007, 15 (14), 8550. https://doi.org/10.1364/OE.15.008550. (68) Hauer, B.; Engelhardt, A. P.; Taubner, T. Quasi-Analytical Model for Scattering Infrared near-Field Microscopy on Layered Systems. Opt. Express 2012, 20 (12), 13173. https://doi.org/10.1364/OE.20.013173. (69) Iaru, C. M.; Brodu, A.; Van Hoof, N. J. J.; Ter Huurne, S. E. T.; Buhot, J.; Montanarella, F.; Buhbut, S.; Christianen, P. C. M.; Vanmaekelbergh, D.; De Mello Donega, C.; Rivas, J. G.; Koenraad, P. M.; Silov, A. Yu. Fröhlich Interaction Dominated by a Single Phonon Mode in CsPbBr3. Nat. Commun. 2021, 12 (1), 5844. https://doi.org/10.1038/s41467-021-26192-0. (70) Puppin, M.; Polishchuk, S.; Colonna, N.; Crepaldi, A.; Dirin, D. N.; Nazarenko, O.; De Gennaro, R.; Gatti, G.; Roth, S.; Barillot, T.; Poletto, L.; Xian, R. P.; Rettig, L.; Wolf, M.; Ernstorfer, R.; Kovalenko, M. V.; Marzari, N.; Grioni, M.; Chergui, M. Evidence of Large Polarons in Photoemission Band Mapping of the Perovskite Semiconductor CsPbBr 3. Phys. Rev. Lett. 2020, 124 (20), 206402. https://doi.org/10.1103/PhysRevLett.124.206402. (71) Xia, C. Q.; Peng, J.; Poncé, S.; Patel, J. B.; Wright, A. D.; Crothers, T. W.; Uller Rothmann, M.; Borchert, J.; Milot, R. L.; Kraus, H.; Lin, Q.; Giustino, F.; Herz, L. M.; Johnston, M. B. Limits to Electrical Mobility in Lead-Halide Perovskite Semiconductors. J. Phys. Chem. Lett. 2021, 12 (14), 3607-3617. https://doi.org/10.1021/acs.jpclett.1c00619. 20 (72) Jokar, E.; Chien, C.-H.; Fathi, A.; Rameez, M.; Chang, Y.-H.; Diau, E. W.-G. Slow Surface Passivation and Crystal Relaxation with Additives to Improve Device Performance and Durability for TinBased Perovskite Solar Cells. Energy Environ. Sci. 2018, 11 (9), 2353-2362. https://doi.org/10.1039/C8EE00956B. (73) Khan, S.; Cegielski, P. J.; Runkel, M.; Riedl, T.; Mohammadi, M.; Lemme, M. C. Complex Optical Constants of CsPbCl3 Perovskite Thin Films Determined by Spectroscopic Ellipsometry. APL Energy 2025, 3 (2), 026108. https://doi.org/10.1063/5.0268298. (74) Wong, Y.; Wu, W.; Wang, T.; Ng, J. D. A.; Khoo, K. H.; Wu, J.; Tan, Z. Color Patterning of Luminescent Perovskites via Light-Mediated Halide Exchange with Haloalkanes. Adv. Mater. 2019, 31 (24), 1901247. https://doi.org/10.1002/adma.201901247. (75) Yu, H.; Gao, X.; Huang, C.; Liu, S.; Chen, B.; Xu, S.; Zhang, Y.; Zhao, H. CsPbCl3 and Mn:CsPbCl3 Perovskite Nanocubes/Nanorods as a Prospective Cathode Material for LIB Application. J. Mater. Sci. Mater. Electron. 2023, 34 (21), 1582. https://doi.org/10.1007/s10854-023-10998-3. 21 Figure 1. Synthesis of CsPbBr3@TFA nanoplatelets. (a) Schematic illustration of the in situ fabrication of CsPbBr3@TFA nanoplatelets via a one-step spin-coating process. (b) False color tilted top-view SEM image of the nanoplatelets. (c) Optical characterization of the nanoplatelets; the dashed blue line shows the absorption spectrum of the as grown nanoplatelets, and the solid red line shows the PL peak position of the nanoplatelets. (e) XRD patterns of the CsPbBr3@TFA nanoplatelets and CsPbBr3 thin film. 22 Figure 2. EBSD analysis of nanoplatelets on different substrates. Inverse pole figure (IPF) maps (a, c) and corresponding (001) pole figures (b, d) for nanoplatelets deposited on (a, b) crystalline Si and (c, d) amorphous Si/SiO2 substrates. The IPF maps reveal that the nanoplatelets exhibit a strong preferential alignment along the [001] crystal direction regardless of substrate crystallinity. 23 Figure 3. TEM characterization of nanoplatelets. (a-b) HRTEM image and FFT pattern of nanoplatelets along the [001] zone axis line. (c) simulated diffraction pattern. (d) TEM-EDX mapping of the nanoplatelets. *The F peak is rather low and contains considerable background intensity. 24 Figure 4. High-resolution XPS spectra of key elements in the CsPbBr3 thin film and CsPbBr3@TFA nanoplatelets. (a) C 1s spectrum, showing the presence of surface-bound organic species; (b) F 1s spectrum, confirming the successful incorporation of TFA; (c) Pb 4f, (d), Br 3d and (f) Cs 3d spectra, revealing shifts in binding energies indicative of altered chemical environments following TFA surface modification. 25 Figure 5. THz nanoimaging of CsPbBr3@TFA nanoplatelets. (a-c) THz nanoimaging of CsPbBr3@TFA nanoplatelets on a highly doped silicon substrate. (d) Normalized phase φ2 (lines) and magnitude S2 (dots) near-field profiles along two different nanoplatelets and the highly doped substrate indicated by the red and blue dashed lines in a-c. (e) Modeled near-field contrasts φ2 (lines) and S2 (dots) at 600 GHz assuming an intrinsic charge carrier mobility of 50 cm2/Vs. 26 Figure 6. PL characterization of perovskite samples. PL area scans for the comparison of average PL intensity for (a) CsPbBr3@TFA nanoplatelets and (b) CsPbBr3 thin films. (c) Schematic illustration of the proposed structure of the CsPbBr3@TFA nanoplatelets. 27 Figure 7. Fabrication and halide-exchange characterization of CsPbBr3@TFA nanoplatelets (a) Schematic illustration of the critical fabrication steps, including top-down photolithography. (b) Illustration of nanoplate treatment with HCl and the possible anion exchange mechanism. (c) PL (solid lines) and UV-Vis absorption (dashed lines) of samples treated for different durations in a chlorine environment. (d) XRD patterns of the samples before and after halide exchange. (e) XPS peak position for Cl 2p before and after anion exchange. (f) Depth profile extracted from XPS data. 28 Figure 8. Optical and confocal imaging of patterned nanoplates. (a-c) Optical micrographs of different features achieved after top-down photolithography. (d-f) Confocal images of patterned areas from different sites of the chip. (g) Confocal images of letters patterned on another chip after selective anion exchange. 29 Supporting Information Facile Synthesis and On-Chip Color Tuning of CsPbBr3@CsPbBr3-xTFAx Nanoplatelets via Ion Engineering Sana Khan1,2, Saeed Goudarzi2, Stephan Schäffer3, Lars Sonneveld4, Bart Macco5, Ke Ran1,6,7, Naho Kurahashi8,9, Peter Haring Bolívar 3, Bruno Ehrler4, Thomas Riedl 8,9, Gerhard Müller-Newen10, Surendra. B. Anantharaman 1,11, Maryam Mohammadi 1, *, Max. C. Lemme 1,2, * 1AMO GmbH, Otto-Blumenthal-Straße 25, 52074 Aachen, Germany 2RWTH Aachen University, Chair of Electronic Devices, Otto-Blumenthal-Str. 25, 52074 Aachen, Germany 3Institute for High Frequency and Quantum Electronics, 57076 Siegen, Germany 4LMPV-Sustainable Energy Materials Department, AMOLF Science Park 104, 1098 XG Amsterdam, The Netherlands 5 .O. Box 513, 5600, MB, Eindhoven, the Netherlands 6Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Ahornstr. 55, 52074 Aachen, Germany 7Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons ER-C, Forschungszentrum Jülich GmbH, Jülich 52428, Germany 8 -Gruenter-Str. 21, 42119 Wuppertal, Germany 9Wuppertal Center for Smart Materials & Systems (CM@S), -Gruenter-Str. 21, 42119 Wuppertal, Germany 10 30, Aachen, Germany 11Low-dimensional Semiconductors Lab, 600 036, India *Corresponding Authors: ; 30 Figure S1. Morphology and Optical Properties of CsPbBr3@TFA Nanoplatelets and thin film. (a) Top-view SEM image of CsPbBr3@TFA nanoplatelets. inset: Size distribution statistics for nanoplatelets. (b) AFM topography of CsPbBr3@TFA nanoplatelets. (c) AFM high-profile plot of CsPbBr3@TFA nanoplatelets. (d, e) SEM image and AFM topography of a CsPbBr3 thin film. (f) UV-absorption and PL spectra of a CsPbBr3 thin film. 31 Figure S2. GIXRD analysis of the (001) plane for the perovskite samples at various tilt angles. (a) CsPbBr3 thin film, where diffraction peaks are shifted to the left, indicating residual strain in the film. (b) CsPbBr3@TFA nanoplatelets, where the peak shift is significantly reduced, suggesting that TFA- effectively relieves strain. 32 Figure S3. EBSD analysis of CsPbBr3 thin film. (a) Inverse pole figure (IPF) map obtained by EBSD for the CsPPbBr3 thin film deposited on a Si/SiO2 substrate, showing the grain orientation relative to the sample surface. (b) Corresponding [001] pole figure illustrating the crystallographic texture distribution. 33 Figure S4. Effect of electron beam exposure on CsPbBr3 nanoplatelets. Cross-sectional images of the nanoplatelets (a) before, and (b) after electron beam exposure. The sensitivity of the perovskite nanoplatelets to high-energy electron beams (60 kV) limits the acquisition of reliable crystalline phase information from the surface. 34 Figure S5. PL spectra showing the average PL intensity from a 25 × 25 μm2 area of CsPbBr3@TFA nanoplatelets and CsPbBr3 thin film. 35 Figure S6. Optical emission stability of the CsPbBr3@TFA nanoplatelets and CsPbBr3. During this test, the samples were stored outside the glovebox (the environment was 46.1% humidity, and the temperature was 22°C). 36 Figure S7. Optical emission stability of CsPbBr3@TFA nanoplatelets during the photolithography process. 37 Figure S8. The PL and absorption peak positions shift as a function of time during the anion exchange reaction. 38 Figure S9. XRD patterns of the CsPbCl3 nanoplatelets measured over a period of 30 days. The humidity was 46.1%, and the temperature was 22°C during the stability tests.
|
2509.16283
|
Simulating the Angular Distribution of Cherenkov Radiation in Thick
Media
Dmytro Minchenkoa,∗, Juan Pablo Yañeza and Aksel Hallina
aDept. of Physics, University of Alberta, Edmonton, Alberta, T6G 2E1, Canada
A R T I C L E I N F O
Keywords:
Cherenkov light
Astroparticle physics
Detector physics
Monte Carlo simulation
Geant4
A B S T R A C T
We present a study of the emission of Cherenkov radiation in thick media, explore the limitations
of the simulation tools currently in use in particle and nuclear physics, and propose a method for
overcoming these limitations. We start with a derivation of the Cherenkov power spectrum and its
angular profile, accounting for interference of the radiation emitted, in contrast with commonly used
tools that assume perfect coherence. We then study the impact that the path of electrons through a
medium has on the angular profile of Cherenkov light. Finally, we devise a model that can introduce
these effects in Geant4 and tune it to explain calibration data from the water-phase of SNO+. We find
that the tuned model significantly improves the agreement between data and simulation, so we provide
it for its wider use.
1. Introduction
Cherenkov radiation [1] is widely used to detect and
characterize charged particles, with applications in experi-
mental particle physics, astrophysics, nuclear physics, and
medicine. The Cherenkov light emitted by a particle depends
upon particle type, energy and momentum. A property com-
monly used is that most light is emitted at a fixed angle 𝜃𝐶,
also known as the Cherenkov angle.
The standard derivation of Cherenkov light emission
[2, 3] shows that a charged particle in a dielectric medium
with refractive index 𝑛, moving on a straight line, with a
constant velocity 𝛽= 𝑣
𝑐faster than the local speed of light
in the medium, will emit Cherenkov light, a coherent front
of light in a cone with half-angle 𝜃given by
cos 𝜃𝐶= 1
𝑛𝛽.
(1)
Since the Cherenkov angle is well-defined with respect to
the direction of travel of the charged particle, the resulting
cone of Cherenkov light can be used to trace the path of the
particle with high precision. Experiments have exploited this
effect for almost four decades to detect charged particles and
infer their properties over a wide range of energies [4, 5].
The technique is so powerful that it will be continued to
be used by future generations of detectors, which in turn
requires ever more precise simulation of the process.
Modern simulation tools that study the movement of
charged particles through matter often discretize physical
processes. Interactions are simulated at scattering centers,
with the particle undergoing free motion in-between them.
The kinematics and probability of interactions are defined by
scattering models that take into account a variety of physical
phenomena.
∗Corresponding author
minchenk@ualberta.ca (D. Minchenko)
ORCID(s):
Simulations of Cherenkov radiation are typically dis-
cretized. It is commonly assumed that the conditions re-
quired for emission of Cherenkov radiation persist along
a charged particle’s trajectory and the Cherenkov emission
angle is calculated with respect to the average displacement
of the particle. In reality, however, a particle is subject to a
continuous interaction with the medium at the micron scale
that deflects its trajectory and makes it lose energy continu-
ously, alongside discrete interactions at atomic scales. As a
result, the Cherenkov light emission might be significantly
different from the predictions from simulations that approx-
imate these effects. Cherenkov light is a coherent effect and
we examine the potential interference and loss of coherence
of the light front as well as the smearing of the emission
angle that might result due to these effects.
This work was motivated by an observed tension be-
tween data and simulation reported by the SNO+ experiment
when comparing the angular distribution of Cherenkov light
emitted by electrons with MeV energies in water [6]. Internal
studies explored the effects of multiple scattering [7, 8, 9],
the coherence of the Cherenkov light emitted [10], and
the influence of the simulation step size on the resulting
Cherenkov light distribution [11]. However, none of these
studies explain the tension. Therefore, in this work, we
assess the approximations routinely made when simulating
Cherenkov light, and propose a method to account for the
effects that are currently neglected.
In Sec. 2, we provide a review of the classic deriva-
tion of Cherenkov light emission, including its associated
approximations. In Sec. 3 we look at different methods for
electron transport in simulation, and investigate their impact
on their predicted angular distribution of Cherenkov light. In
Sec. 4, we develop a new Cherenkov light radiation model,
implement it in the commonly used Geant4 platform [12],
and tune it to SNO+ calibration data. We find that, after
tuning the model, the previously observed tension between
the data and Monte-Carlo simulation is resolved.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 1 of 10
arXiv:2509.16283v1 [physics.ins-det] 19 Sep 2025
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
2. Cherenkov light from first principles
We derive the radiated power of a charged particle as
it scatters while traversing a medium1. This is based on
the works of R.J. Komar [13], Schiff [14], and Dedrick [7].
Following Schiff, we define the current density 𝐽𝑥of a point
charge 𝑒that moves with a constant speed 𝑣along the z-axis
starting at time 𝑡= 0 and position r = (𝑥, 𝑦, 𝑧) = 0:
𝐽𝑥(𝒓, 𝑡) = 𝐽𝑦(𝒓, 𝑡) = 0,
(2)
and
𝐽𝑧(𝒓, 𝑡) = 𝑒𝑣𝛿(𝑥) ⋅𝛿(𝑦) ⋅𝛿(𝑧−𝑣𝑡).
(3)
where J = (𝐽𝑥, 𝐽𝑦, 𝐽𝑧) is the current density, 𝑒the charge of
the particle, and 𝑣the velocity.
To calculate the angular distribution of the Cherenkov
radiation, we use the exact expression for the average energy
radiated at a position 𝒓by a harmonically time-varying
current distribution in a homogeneous isotropic dielectric
medium [7], as
𝑃𝑘𝜔(𝒓) =
𝑛𝑘2
2𝜋𝑟2𝑐
||||∫𝐽⟂𝑘(𝒓′) 𝑒−𝑖𝑛𝒌⋅𝒓′𝑑𝜏′||||
2
,
(4)
where 𝑃𝑘𝜔is the energy flow per unit area and angular
frequency in the direction of observation (parallel to 𝒌or
𝒓), |𝒌| = 𝜔∕𝑐, 𝑛is the index of refraction for the medium,
𝐽⟂𝑘is the component of the current density perpendicular
to 𝒌and 𝑑𝜏′ is the volume element. Replacing the density in
Eq. 3 by the Fourier amplitude of angular frequency 𝜔we
obtain
𝐽𝑧𝜔(𝒓, 𝑡) = 𝑒
2𝜋𝛿(𝑥) ⋅𝛿(𝑦) 𝑒(𝑖𝑤𝑧∕𝑣).
(5)
Substituting 𝐽⟂𝑘= 𝐽𝑧𝜔sin 𝜃into Eq. 4 we obtain the energy
flow per unit area and angular frequency
2𝜋𝑃𝑘𝜔(𝒓) = 𝑛𝑒2𝜔2 sin2 𝜃
4𝜋2𝑟2𝑐3
||||∫𝑒𝑖𝜔𝑧′( 1
𝑣−𝑛cos 𝜃
𝑐
)𝑑𝑧′||||
2
. (6)
For a path length 𝐿centered at the origin, the integral is
∫
𝐿∕2
−𝐿∕2
𝑒
𝑖𝜔𝑧′( 1
𝑣−𝑛cos 𝜃
𝑐
)
𝑑𝑧′ =
2 sin
[
𝜔𝐿
2
(
1
𝑣−𝑛cos 𝜃
𝑐
)]
𝜔
(
1
𝑣−𝑛cos 𝜃
𝑐
)
. (7)
For a particle with a straight and infinite trajectory we
evaluate the limit for infinite 𝐿to be 2𝜋𝛿
(
1
𝑣−𝑛cos 𝜃
𝑐
)
,
leading to the classical expression shown in Eq. 1.
From Eq. 6 the radiated power is proportional to
𝐿2 sin2 𝜃sin2 𝜒
𝜒2
(8)
1We surveyed the literature searching for a derivation of the emission
of Cherenkov light in thick media, but could not find any that would go
beyond use of thin plates and straight-path approximations.
with
𝜒= 𝜔𝐿
2
(1
𝑣−𝑛cos 𝜃
𝑐
)
≡𝜋𝐿
𝜆′
(
1
𝑛𝛽−cos 𝜃
)
,
(9)
where 𝜆′ = 2𝜋𝑐∕𝑛𝜔is the wavelength in the medium.
The behavior of Eq. 8 depends on how 𝐿compares to 𝜆′.
In the case that 𝐿≪𝜆′, the sin2 (𝜒)∕𝜒2 in Eq. 8 becomes
constant and the radiation is emitted over a dipole angular
distribution. For 𝐿≫𝜆′, the shape of Eq. 8 resembles
a gaussian function summed to a cosine function with a
relatively smaller amplitude, and it is valid in the range 𝜒=
[−∞, +∞]. The total light output is given by the integral,
and in this limiting case about 90% of the contribution comes
from the range 𝜒= [−𝜋, +𝜋]. Here, the angular distribution
of the radiation emitted is sharply peaked at the Cherenkov
angle 𝜃𝐶and has a full width at half maximum
𝛿𝜃≃
𝜆′
𝐿sin 𝜃𝐶
.
(10)
We note that the integral of Eq. 8 can be done in 𝜒
space, where the valid range of 𝜒depends linearly on 𝐿∕𝜆′.
By requiring that |𝜒| can acquire values larger than 𝑝𝑖,
motivated by the limiting case of 𝐿≫𝜆′, we can estimate
the regime where 𝐿is sufficiently larger than 𝜆′ to achieve
coherence. Since the minimum value of 𝜒(where 𝜃= 0) is
always the most restrictive one, we use it and demand it is
≤−𝜋, which results in the coherence condition that
𝐿
𝜆′
(
1 −1
𝑛𝛽
)
≥1.
(11)
Water is a commonly used medium for experiments involv-
ing Cherenkov radiation. Using a refractive index for water
of 𝑛= 4∕3 and assuming that 𝛽= 1, Eq. 11 becomes
𝐿> 4𝜆′.
(12)
Cherenkov detectors are typically sensitive to light with
wavelengths 300 nm < 𝜆′ < 720 nm, so full coherence
requires straight path lengths between 1.2-3 𝜇m. This is
comparable to the mean free path between scattering of
electrons in the water, approximately 1.3 𝜇m [15].
Dedrick [7] derives the formula including interference
between light emitted by different track segments. Equa-
tion (4) becomes:
𝑃𝑘𝜔(𝒓) =
𝑛𝑘2
2𝜋𝑟2𝑐
||||||
𝑁
∑
𝜈=1
𝐼𝜈
||||||
2
,
(13)
where the contribution of each segment 𝜈is
𝐼𝜈= 𝑒
2𝜋sin Θ𝜈𝑒𝑖(𝛿𝜈+𝜒𝜈)𝑙𝜈
sin 𝜒𝜈
𝑖𝜒𝜈
(14)
with the phase angles given by
𝜒𝜈= 𝜔𝑙𝜈
2
(
1
𝑣𝜈
−𝑛
𝑐cos Θ𝜈
)
≡𝜋𝑙𝜈
𝜆′
(
1
𝑛𝛽𝜈
−cos Θ𝜈
)
,
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 2 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
(15)
with
cos Θ𝜈= cos 𝜃cos 𝜃𝜈+ sin 𝜃sin 𝜃𝜈cos (𝜑−𝜑𝜈) (16)
and
𝛿𝜈= 𝜔𝑡𝜈−𝑛𝑘(𝑥𝜈sin 𝜃cos 𝜑+𝑦𝜈sin 𝜃sin 𝜑+𝑧𝜈cos 𝜃). (17)
Here 𝑙𝜈is the length of the 𝜈step from 𝑥𝜈, 𝑦𝜈, 𝑧𝜈to 𝑥𝜈+1,
𝑦𝜈+1, 𝑧𝜈+1, 𝜃𝜈and 𝜑𝜈are the polar angles of the segment,
and the particle is at point 𝜈at time 𝑡𝜈with the speed 𝑣𝜈.
The expression for the total energy radiated is composed
both of terms 𝐼𝜈𝐼∗
𝜈and also terms (𝐼𝜈𝐼∗
𝜇+ 𝐼∗
𝜈𝐼𝜇) that
represent interference effects between segments.
Our step-based Monte Carlo simulation uses Eq. 13 to
calculate the total angular distribution of Cherenkov light.
3. Electron transport
The distribution of electron directions and step sizes has
a significant impact on the angular distribution of Cherenkov
light. We focus on the difference between single-scattering
and multiple-scattering models.
3.1. Charged particle scattering models
Monte-Carlo scattering models define the longitudinal
and lateral displacement of a particle along its trajectory,
forming steps. Models can be divided into two groups, con-
densed and detailed [16]. Detailed algorithms, also known
as “single scattering models”, simulate each interaction and
therefore provide the highest accuracy possible. Condensed
models, on the other hand, average several physical interac-
tions to calculate displacement and energy losses, typically
reducing the number of steps by factors of hundreds, there-
fore reducing the computational needs by a similar factor.
Condensed models are commonly referred to as “multiple
scattering models”.
Commonly used multiple scattering models in Geant4
are Urban [16], Penelope [17] and Wentzel [18]. While
the step size of the condensed model can be shortened
artificially to increase the precision of the simulation, the
most accurate simulation is obtained when using a detailed
Single Scattering model [18].
We simulate the Cherenkov light angular distribution for
MeV electrons in water with Geant4. We use the Urban
model [16] for the multiple scattering case, and the model
described in [18] for the single scattering case. We inject
1000 electrons in the middle of a 6 m radius water sphere
with an initial momentum direction along the z-axis. The
initial electron energies are 0.3 MeV, 0.5 MeV, 1 MeV, and
5 MeV. We take into account bremsstrahlung, ionization,
scattering and the Cherenkov light emission process.
The photons propagate in straight paths through the
water, with all physical processes disabled, until they are
absorbed at the sphere’s surface. The position of the photons
on the sphere with respect to its center is used to compute
the total Cherenkov light angular distribution far from its
emission point. From these distributions, we compute three
useful quantities that allow us to compare the results from
different simulations, namely the angle at which the maxi-
mum number of photons are emitted, the fraction of photons
emitted in the tails, with angles > 𝜋∕2, and the Full Width
Half Maximum (FWHM) of the resulting Cherenkov light
distribution.
3.2. Multiple Scattering vs Single Scattering
A comparison of the angular distributions of the multiple
scattering and single scattering models is shown in Fig. 1.
Table 1 summarizes the values of the three benchmark quan-
tities. Both models produce a similar angular distribution
when the electron energy reaches 5 MeV. However, for
lower energies, the distributions differ significantly, with the
multiple scattering model producing a sharper peak than
the single scattering model. The single scattering model
produces a cone with a larger half-angle.
The differences at low energies occur because the mul-
tiple scattering model produces large steps, of order 100 𝜇s,
for low energy electrons, emitting most of the light in the
Cherenkov angle defined along the step. After a couple of
steps, the electrons are below the Cherenkov threshold and
cannot emit more light. However, electrons propagated with
the single scattering model have comparatively short steps,
of the order of a few 𝜇s, and most of these segments have
enough energy to produce Cherenkov light. At 5 MeV and
higher energies, even the multiple scattering model produces
multiple steps before falling below the energy threshold
for Cherenkov light production. As a result, both models
produce very similar distributions. Figure 2 depicts these
differences using a simulated 2 MeV electron in water, prop-
agated with the multiple scattering and the single scattering
models.
3.3. Impact of interference effects in the Single
Scattering model
The most physically accurate Cherenkov simulation
combines the single scattering model and the calculations
in Section 2, which considers interference effects and coher-
ence constraints. The simulation setup is the same as in the
previous tests, except that Cherenkov photons are restricted
to the single wavelength 𝜆
=
400 nm. The electron’s
position, time, and 𝛽
= 𝑣∕𝑐at every step are used to
calculate the sum of the integrals 𝐼𝜈from Eq. 13 and in this
way obtain an angular distribution of Cherenkov light that
includes interference effects.
The interference term causes minimal deviations from
the current simulation tools with the single scattering model,
as shown in Fig. 1 and in Table 1. The results are in agree-
ment with Ref. [10], where the authors state that there should
not be any significant coherence violation of EM-radiation
that forms Cherenkov light due to electron scattering in
water.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 3 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
0
2
0
1
2
3
4
5
Arb. Units
0.3 MeV
SNO+ Multiple Scattering
Single Scattering
SS + Coherent Calculations
Cherenkov light sampling
0
2
0.0
0.5
1.0
1.5
2.0
2.5
3.0
3.5
4.0
Arb. Units
0.5 MeV
SNO+ Multiple Scattering
Single Scattering
SS + Coherent Calculations
Cherenkov light sampling
0
2
0.0
0.5
1.0
1.5
2.0
2.5
Arb. Units
1 MeV
SNO+ Multiple Scattering
Single Scattering
SS + Coherent Calculations
Cherenkov light sampling
0
2
0.00
0.25
0.50
0.75
1.00
1.25
1.50
1.75
Arb. Units
5 MeV
SNO+ Multiple Scattering
Single Scattering
SS + Coherent Calculations
Cherenkov light sampling
Figure 1: Comparison of the angular distribution of Cherenkov light obtained from using the SNO+ default multiple scattering
model, the Geant4 Single Scattering model, the single scattering model with coherent emission and the Cherenkov light sampling
technique described in Sec. 4. Electrons are injected in the center of a sphere, pointing in the +z-direction. The angle of a photon
is defined by their intersection with the sphere. Each panel corresponds to different electron initial energy.
Energy
Quantity
Multiple Scattering
Single Scattering
Coherent
Sampling
0.3 MeV
Peak position
0.21 ± 0.03
0.31 ± 0.03
0.29 ± 0.03
0.38 ± 0.03
FWHM
0.05+0.06
−0.05
0.29 ± 0.06
0.25 ± 0.06
0.55 ± 0.06
Tail fraction
0.0001 ± 6e-5
0.0051 ± 0.0004
0.0083 ± 0.0006
0.0054 ± 0.0004
0.5 MeV
Peak position
0.52 ± 0.03
0.57 ± 0.03
0.57 ± 0.03
0.59 ± 0.03
FWHM
0.04+0.06
−0.04
0.46 ± 0.06
0.46 ± 0.06
0.57 ± 0.06
Tail fraction
0.036 ± 0.002
0.045 ± 0.002
0.040 ± 0.002
0.045 ± 0.002
1 MeV
Peak position
0.69 ± 0.03
0.71 ± 0.03
0.71 ± 0.03
0.72 ± 0.03
FWHM
0.29 ± 0.06
0.55 ± 0.06
0.59 ± 0.06
0.62 ± 0.06
Tail fraction
0.089 ± 0.001
0.081 ± 0.001
0.084 ± 0.001
0.090 ± 0.001
5 MeV
Peak position
0.77 ± 0.03
0.77 ± 0.03
0.78 ± 0.03
0.76 ± 0.03
FWHM
0.46 ± 0.06
0.47 ± 0.06
0.50 ± 0.06
0.46 ± 0.06
Tail fraction
0.099 ± 0.001
0.092 ± 0.001
0.092 ± 0.001
0.102 ± 0.001
Table 1
Comparisons of the distributions in Fig. 1 for different electron energies. For each model we compute the peak of the Cherenkov
light distribution, the full width at half maximum (FWHM), as well as the fraction of photons emitted in the tails, with angle
> 𝜋∕2. The comparatively large error on the tail fraction of low-energy multiple scattering models comes from the low number
of photons that the model produces in this region (see Fig. 1).
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 4 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
Figure 2: Geant4 simulation of a sample trajectory followed by 2 MeV electrons in water. Trajectories are shown in red. The left
panel shows the result from using a multiple scattering model, while the panel on the right shows the trajectory produced by a
single scattering model.
4. Cherenkov light sampling: a fast
implementation of the improved Cherenkov
light model
The single scattering model is significantly more com-
putationally expensive than multiple scattering models; the
simulations in Section 3.2 differed in computational time
by a factor of 7. We developed and tested an algorithm we
call “Cherenkov light sampling” that corrects the multiple
scattering model and approximates the angular distribution
of single scattering models with almost negligible impact on
computational speed.
For each medium, the single scattering model is used
to generate the probability distributions for single events
scatters, 𝜃𝑠𝑠(𝐸𝑒) and the free mean path of the electron
𝜆𝑠𝑠(𝐸𝑒) as a function of the electron’s energy. We binned
these distributions in steps of 50 keV for 𝜃𝑠𝑠(𝐸𝑒) and steps
of 100 keV for 𝜆𝑠𝑠(𝐸𝑒), and then interpolated with a smooth
function to have access to arbitrary energies. We chose the
steps to be smaller than the electron energies being tested,
and we confirmed that modifying their values did not impact
the results.
To simulate Cherenkov photons, we then use a multiple-
scattering model, which generates a step length and particle
direction, and determines the number of Cherenkov photons.
A more detailed trajectory is generated with a length sam-
pled from 𝜆𝑠𝑠and scattering angle from 𝜃𝑠𝑠(𝐸𝑒). Cherenkov
photons are generated uniformly along the detailed trajec-
tory. The algorithm is explained in Fig. 3.
We assessed the accuracy with which the Cherenkov
light sampling reproduces the result of the single scattering
model using electrons in water as in Section 3.2. Figure 1
and Table 1 summarize the distributions. Cherenkov light
sampling essentially reproduces the results from the single
scattering model, although minor differences remain, partic-
ularly at 0.3 MeV. We observe a 7% increase of computation
time compared to the multiple scattering model.
We can compensate for the some of the differences
between the Cherenkov sampling and the single scattering
model and for differences due to neglecting the interference
term by introducing a parameter 𝛼that scales the mean free
path of an electron as 𝜆′
𝑠𝑠(𝐸) = 𝛼𝜆𝑠𝑠(𝐸). A value smaller
than one results in an additional smearing of the Cherenkov
light angular distribution, as shown in Fig. 4.
We ran simulations of the Cherenkov sampling method
using different values of 𝛼, and found that the best agreement
with the results from the single scattering model were ob-
tained for values of 𝛼∼0.6. This indicates that the approx-
imations in the method do not fully capture the effects from
the single scattering model and require further smearing of
the photon injection angle. In the analysis that follows we test
the method on experimental data, and we fit for the value of
𝛼that describes the data best.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 5 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
Start
Obtain 𝐿𝑖, 𝑁𝑖, 𝐸𝑖, 𝐸𝑖+1, and ⃗𝑝𝑖by multiple scattering model
𝐿𝑝= 𝐿𝑖
𝑁𝑖
Δ𝐸
=
𝐸𝑖−𝐸𝑖+1
𝑁𝑖
𝐸𝑖𝑗
= 𝐸𝑖+
1
2Δ𝐸
j = 0
𝐸𝑖𝑗
=
𝐸𝑖𝑗−Δ𝐸
𝑁𝜃
=
𝐿𝑝
𝜆𝑠𝑠(𝐸𝑖𝑗)
jj = 0
Sample Δ𝜃from 𝜃𝑠𝑠(𝐸𝑖𝑗)
j = j + 1
Sample Δ𝜑from [0; 2𝜋] interval.
⃗𝑝𝑖.theta = ⃗𝑝𝑖.theta + Δ𝜃
Rotate ⃗𝑝𝑖for Δ𝜑
jj = jj + 1
Is jj < 𝑁𝜃?
Emit a photon w.r.t. ⃗𝑝𝑖
Is j < 𝑁𝑖?
Stop
Yes
No
Yes
No
Figure 3: Flow chart of the Cherenkov light sampling algorithm.
Here 𝐿𝑖is the length of the 𝑖−th step of the propagated
electron with a mulitiple scattering model, 𝑁𝑖is number of
photons for that step, 𝐸𝑖and 𝐸𝑖+1 are the kinetic energies of
the electron at the beginning and at the end of the segment
and ⃗𝑝𝑖is the direction of the segment.
0.0
0.5
1.0
1.5
2.0
2.5
3.0
0.0
0.2
0.4
0.6
0.8
1.0
1.2
1.4
1.6
Arb. Units
= 0.1
= 0.2
= 0.3
= 0.4
= 0.5
= 0.6
= 0.7
= 0.8
= 0.9
= 1
SS
Figure 4: Comparison of the Cherenkov light distributions
produced by the Cherenkov light sampling method using
different 𝛼values. The simulation is the same as in Fig. 1
with an electron initial energy of 1 MeV.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 6 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
4.1. Cherenkov light sampling with SNO+ data
SNO+ is a large-scale underground neutrino detec-
tor [19] consisting of a spherical acrylic vessel of 6 m radius,
submerged in water and surrounded by a structure of 9,394
photomultiplier tubes (PMTs). The PMTs provide about
50% photo coverage for events inside the vessel. The data
discussed in this section comes from the water phase of the
experiment.
In the water phase of SNO+, events are detected if
they produce Cherenkov light. A single electron produces a
well defined ring-like pattern on the PMTs with an average
number of 9 PMT hits per MeV. The angular distribution has
been used to distinguish between different single electrons,
gammas, and gamma cascades [6, 20]. SNO+ characterizes
the isotropy of the light using parameters derived from
the angular distribution of the PMTs, initially described in
Ref. [21].
Figure 5 shows a schematic representation of PMT hits
by Cherenkov light, where the ring structure is clearly visi-
ble. For a particle at vertex position v travelling along unit
vector ̂𝑢, and with PMTs located at p𝑖, r𝑖= p𝑖−v. We define
𝜃𝑖𝑗by cos 𝜃𝑖𝑗=
r𝑖r𝑗
|r𝑖||r𝑗| and
𝛽𝑙=
2
𝑁(𝑁−1)
[𝑁−1
∑
𝑖=1
𝑁
∑
𝑗=𝑖+1
𝑃𝑙(cos 𝜃𝑖𝑗)
]
,
(18)
where 𝑁is number of hits in the event and 𝑃𝑙is a Legendre
polynomial of degree 𝑙. For SNO [22], the predecessor of
SNO+, it was determined[21] that the best differentiation
was achieved using 𝛽14 = 𝛽1 + 4𝛽4. The value of 𝛽14 is
anticorrelated with the isotropy of the light.
In the water phase of SNO+ a tension in 𝛽14 between
data and simulation [6] was observed. Since the parameter
contains the most information on the angular distribution
of Cherenkov light, we will use it to fit the 𝛼parameter of
the Cherenkov light sampling model to calibration data, and
compare the results obtained. Moreover, since the SNO+
simulation for the water phase uses Geant4 for propagation
of particles, we can directly apply the Cherenkov light sam-
pling model to conduct our tests.
The calibration data chosen for the comparisons come
from the deployment of radioactive sources in the cen-
ter of SNO+. The two calibration devices used are a 16N
source [23] and an AmBe source [24], which provide nearly
monoenergetic 6.13 MeV, 4.4 MeV, and 2.2 MeV 𝛾-rays. We
run simulations with different 𝛼in a range from 0.1 to 1 for
the 16N and AmBe sources placed in the middle of the acrylic
vessel, using the corresponding detector conditions for each
run. As a result we obtain 𝛽14 distributions as a function of
𝛼for three different energies of 𝛾-rays.
Figure 6 shows the values of the mean of a Gaussian
fit of these 𝛽14 distributions as well as the values from the
calibration data. While the overall dependence of the mean
of 𝛽14 on 𝛼is clear, the results show a significant level of
noise when making small changes in 𝛼. This is more obvious
for the 2.2 MeV gamma. For this reason, we opted for fitting
the relationship between 𝛽14 and 𝛼as 𝛽14(𝛼) = 𝑎
√
𝛼+ 𝑏for
the range 𝛼= [0.5, 0.6]. To find an optimal value of 𝛼, we
then minimize the function
𝜒2(𝛼) =
∑
𝑖
(𝛽14𝑖(𝛼) −𝛽𝑑𝑎𝑡𝑎
14𝑖)2
𝜎2
𝑑𝑎𝑡𝑎𝑖+ 𝜎2
𝑀𝐶𝑖(𝛼)
,
(19)
where 𝑖= 6.1 MeV, 2.2 MeV, 4.4 MeV, and 𝜎2
𝑑𝑎𝑡𝑎and 𝜎2
𝑓𝑖𝑡are
errors of 𝛽14 data and MC respectively. The result of the fit
is 𝛼= 0.556 ± 0.005, which is close to the expectation from
simulations of 0.6. To verify this result, we produced new
simulation for the calibration sources using the fit value of
𝛼, and compare the expectation of the full 𝛽14 distribution, as
well as angular distribution of the Cherenkov light observed.
𝛽14 –
The complete 𝛽14 distributions for the three 𝛾-ray
energies considered are shown in Figs. 7 and 9. The im-
provement in agreement can be seen directly in the figures,
particularly for the 6.1 MeV gammas. Table 2 quantifies this
agreement, and also summarizes and compares the prop-
erties of the distributions. The Cherenkov sampling model
explains the mean of the distributions by design, but it also
describes their width better than the default simulation. The
agreement in the shape of the distribution is highlighted by
the 𝜒2, which shows significant improvement for all three
𝛾-ray energies.
cos 𝜃𝑃𝑀𝑇– The angular distribution of the Cherenkov
light is quantified by cos 𝜃𝑃𝑀𝑇, obtained as
cos 𝜃𝑃𝑀𝑇= ⃗𝑢𝑓𝑖𝑡⋅
⃗𝑥𝑃𝑀𝑇−⃗𝑥𝑓𝑖𝑡
|||⃗𝑥𝑃𝑀𝑇−⃗𝑥𝑓𝑖𝑡|||
,
(20)
where ⃗𝑥𝑃𝑀𝑇is the coordinate of the triggered PMT, ⃗𝑥𝑓𝑖𝑡
is the reconstructed position of the electron observed, and
⃗𝑢𝑓𝑖𝑡is its reconstructed direction. This variable is similar to
Fig. 1, now including the detector response: geometry, PMTs
response and photon transport. Figure 8 shows the cos 𝜃𝑃𝑀𝑇
distribution for the 16N (6.1 MeV) source, while the dis-
tributions for the AmBe source are shown in Appendix.
A quantitative description of the distributions and their
agreement is given in Table 2. Here, the sampling model
also shows a significant improvement with respect to the
default simulation. The improvements are most noticeable
for the 16N calibration source, despite the similarity in the
distributions of sampling and multiple scattering on Fig. 1.
Overall, the Cherenkov sampling method shows an im-
proved agreement between the MC and data in the two vari-
ables that encode information about the angular emission
of the light. Interestingly, these changes do not modify the
behavior of other variables typically used in analyses, which
mainly depend on timing and number of detected photons.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 7 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
Calibration source
16N
AmBe4.4MeV
AmBe2.2MeV
Data 𝛽14 mean
0.4176 ± 4e-4
0.397 ± 2e-3
0.350 ± 3e-3
Default MC 𝛽14 mean
0.4414 ± 5e-4
0.422 ± 2e-3
0.387 ± 3e-3
Sampling 𝛽14 mean
0.4177 ± 7e-4
0.399 ± 2e-3
0.367 ± 3e-3
Data 𝛽14 𝜎
0.1727 ± 3e-4
0.201 ± 2e-3
0.290 ± 3e-3
Default MC 𝛽14 𝜎
0.1760 ± 3e-4
0.209 ± 1e-3
0.297 ± 2e-3
Sampling MC 𝛽14 𝜎
0.1714 ± 5e-4
0.207 ± 2e-3
0.292 ± 3e-3
Default MC 𝛽14 𝜒2/d.o.f.
1085.6/80
141.6/80
117.6/80
Sampling MC 𝛽14 𝜒2/d.o.f.
90.4/80
69.6/80
78.4/80
Default MC cos 𝜃𝑃𝑀𝑇𝜒2/d.o.f.
8706/200
9312/200
12374/200
Sampling MC cos 𝜃𝑃𝑀𝑇
𝜒2/d.o.f.
1772/200
7523/200
10763/200
Table 2
Comparison of variables that encode information of the angular distribution of Cherenkov light for data and simulation. The mean
and 𝜎of 𝛽14 from the Gaussian fits are shown, as well as the 𝜒2 over degrees of freedom for 𝛽14 and cos 𝜃𝑃𝑀𝑇.
Figure 5: The 𝜃𝑖𝑗angles within a Cherenkov ring event used to
calculate 𝛽14. Reproduced from [21].
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.15
0.20
0.25
0.30
0.35
0.40
0.45
0.50
14
Data 6.1 MeV
Data 4.4 MeV
Data 2.2 MeV
Best alpha estimation
6.1 MeV
4.4 MeV
2.2 MeV
Figure 6: Comparison of 𝛽14 mean values obtained from
simulations as a function of the parameter 𝛼together with
the curve that fits them best, described in the text. The 𝛽14
mean values from calibration sources with 𝛾- rays of energies
2.2 MeV, 4.4 MeV, and 6.1 MeV are also shown. The optimal
value for 𝛼is shown by the pink dashed line.
0.2
0.0
0.2
0.4
0.6
0.8
1.0
1.2
14
0.5
1.0
1.5
2.0
2.5
Arb. Units
N16 (6.1 MeV)
MC
Sampling
Data
0.0
Figure 7: Comparison of the 𝛽14 distribution of the of 16N
(6.1 MeV) for data, the default SNO+ MC and simulation
produced using the Cherenkov light sampling method.
1.00
0.75
0.50
0.25
0.00
0.25
0.50
0.75
1.00
Cos PMT
0.0
0.5
1.0
1.5
2.0
2.5
3.0
3.5
Arb. Units
MC
Sampling
Data
0.65
0.70
0.75
0.80
0.85
0.0
0.5
1.0
1.5
2.0
2.5
3.0
3.5
Figure 8: Comparison of the cos 𝜃𝑃𝑀𝑇distribution of the 16N
(6.1 MeV) for data, the default SNO+ MC and simulation
produced using the Cherenkov light sampling method.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 8 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
5. Conclusion
In this paper, we investigated the relevance of the in-
terference effects and the electron propagation method for
simulation of Cherenkov light at a few MeVs. We demon-
strated that the choice of electron propagation method
has a considerable impact on the overall distribution of
Cherenkov light below 6 MeV, while interference effects
play a much smaller role. Consequently, we developed an
improved Cherenkov light simulation model specifically tai-
lored for MeV electrons propagating in a uniform medium.
This model is based on the single scattering model and
provides an improved level of accuracy while maintaining
computational efficiency similar to previous methods. The
model requires a one-time calculation of electron scattering
in the relevant medium, and has a single parameter that can
be tuned to relevant experimental data. We implemented the
method in the Geant4 package and used it on SNO+ calibra-
tion data. After tuning the model, we are able to significantly
improve the description of variables that depend on the
angular distribution of Cherenkov light. Our findings might
be relevant for experimental setups that use the angular
distribution of Cherenkov light, in particular new projects
that will have improved sensitivity to these effects, such as
Hyper-Kamiokande [25], currently under construction, or
the planned THEIA detector [26].
Acknowledgements
This research was undertaken thanks in part to fund-
ing from the Natural Sciences and Engineering Research
Council of Canada (NSERC), the Canada First Research
Excellence Fund through the Arthur B. McDonald Cana-
dian Astroparticle Physics Research Institute, WestGrid and
the Digital Research Alliance of Canada2 (formerly known
as Compute Canada). The authors extend their thanks to
the SNO+ Collaboration for access to the calibration data
that spurred the questions addressed here, allowing us to
study the Cherenkov light sampling method, and to Eric
Vázquez Jáuregui and Logan Lebanowski for reviewing the
manuscript.
2www.alliancecan.ca
References
[1] P. A. Cherenkov, Visible radiation produced by electrons moving in a
medium with velocities exceeding that of light, Phys. Rev. 52 (1937)
378–379.
[2] I. M. Frank, I. E. Tamm, Coherent visible radiation of fast electrons
passing through matter, Compt. Rend. Acad. Sci. URSS 14 (1937)
109–114.
[3] J. Jackson, Classical Electrodynamics, 2 ed., John Wiley and Sons,
1975.
[4] J. Seguinot, T. Ypsilantis,
A Historical survey of ring imaging
Cherenkov counters, Nucl. Instrum. Meth. A 343 (1994) 1–29.
[5] H. Kolanoski, N. Wermes, Particle Detectors, Oxford University
Press, 2020.
[6] M. Anderson, et al. (SNO+), Search for invisible modes of nucleon
decay in water with the SNO+ detector,
Phys. Rev. D 99 (2019)
032008.
[7] K. G. Dedrick, The Influence of Multiple Scattering on the Angular
Width of Čerenkov Radiation, Phys. Rev. 87 (1952) 891–896.
[8] A. P. Kobzev, V. E. Pafomov, I. M. Frank, Angular distributions of the
Vavilov-Cherenkov radiation induced by the 170-250 keV electrons in
mica, Yad. Fiz. 29 (1979) 122–132.
[9] A. Kobzev, I. Frank, Spectral dependence of the half-width of the
angular distributions of vavilov-cerenkov radiation,
Sov. J. Nucl.
Phys.(Engl. Transl.);(United States) 31 (1980).
[10] M. G. Bowler,
Effects of electron scattering on Cherenkov light
output, Nucl. Instrum. Meth. A 378 (1996) 463–467.
[11] M. Bowler, M. Lay, Angular distribution of cherenkov light from elec-
trons both produced and stopping in water, Nuclear Instruments and
Methods in Physics Research Section A: Accelerators, Spectrometers,
Detectors and Associated Equipment 378 (1996) 468–471.
[12] S. Agostinelli, et al., Geant4—a simulation toolkit, Nucl. Instrum.
Meth. A 506 (2003) 250–303.
[13] R. J. Komar (SNO Collaboration), The Effects of Electron Scattering
on Cherenkov Light Output, Technical Report, University of British
Columbia, Vancouver, 1995.
[14] L. Schiff, Quantum Mechanics, McGraw-Hill book company, 1949.
[15] W. R. Nelson, H. Hirayama, D. W. O. Rogers, The Egs4 Code System
(1985).
[16] L. Urban, A model for multiple scattering in geant4 (2006).
[17] F. Salvat, J. M. Fernandez-Varea, E. Acosta, J. Sempau, Penelope -
A code system for Monte Carlo simulation of electron and photon
transport, Organisation for Economic Co-Operation and Development
- Nuclear Energy Agency, Nuclear Energy Agency of the OECD
(NEA), 2001.
[18] V. N. Ivanchenko, O. Kadri, M. Maire, L. Urban, Geant4 models
for simulation of multiple scattering, J. Phys. Conf. Ser. 219 (2010)
032045.
[19] V. Albanese, et al. (SNO+), The SNO+ experiment, JINST 16 (2021)
P08059.
[20] A. Allega, et al. (SNO+), Improved search for invisible modes of
nucleon decay in water with the SNO+detector, Phys. Rev. D 105
(2022) 112012.
[21] J. Dunmore, The separation of CC and NC events in the Sudbury
Neutrino Observatory, Ph.D. thesis, 2004.
[22] J. Boger, et al. (SNO), The Sudbury neutrino observatory, Nucl.
Instrum. Meth. A 449 (2000) 172–207.
[23] M. R. Dragowsky, et al., The N-16 calibration source for the Sudbury
Neutrino Observatory, Nucl. Instrum. Meth. A 481 (2002) 284–296.
[24] Y. Liu, Ambe source calibration in the SNO+ water phase (2018).
[25] K. Abe, et al., Letter of Intent: The Hyper-Kamiokande Experiment
— Detector Design and Physics Potential — (2011).
[26] M. Askins, et al. (Theia),
THEIA: an advanced optical neutrino
detector, Eur. Phys. J. C 80 (2020) 416.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 9 of 10
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media
0.50
0.25
0.00
0.25
0.50
0.75
1.00
1.25
1.50
14
0.0
0.5
1.0
1.5
2.0
Arb. Units
4.4MeV
MC
Sampling
Data
1.0
0.5
0.0
0.5
1.0
1.5
2.0
14
0.0
0.2
0.4
0.6
0.8
1.0
1.2
1.4
1.6
Arb. Units
2.2MeV
MC
Sampling
Data
Figure 9: Comparison of the 𝛽14 distribution of the AmBe source, 2.2 MeV (left) and 4.4 MeV (right), for data, the default SNO+
MC and simulation produced using the Cherenkov light sampling method.
1.00
0.75
0.50
0.25 0.00
0.25
0.50
0.75
1.00
Cos PMT
0.0
0.5
1.0
1.5
2.0
2.5
3.0
3.5
4.0
Arb. Units
MC
Sampling
Data
1.00
0.75
0.50
0.25 0.00
0.25
0.50
0.75
1.00
Cos PMT
0
1
2
3
4
Arb. Units
MC
Sampling
Data
Figure 10: Comparison of the cos 𝜃𝑃𝑀𝑇distribution of the AmBe source, 2.2 MeV (left) and 4.4 MeV (right), for data, the default
SNO+ MC and simulation produced using the Cherenkov light sampling method.
A. Appendix
The comparison of the tuned Cherenkov light sampling
distributions and the SNO+ default simulation for AmBe
data are shown below. The 𝛽14 distribution is contained in
Fig. 9, while cos 𝜃𝑃𝑀𝑇can be found in Fig. 10. We note
that, after completing the analysis, the SNO+ collaboration
found issues with the accuracy of the reconstructed event
positions in the AmBe runs. Nonetheless, despite these
issues, Figs. 9 and 10 display a better agreement with the
Cherenkov sampling model than with the standard SNO+
simulation. Moreover, the 16N data dominates the final fit of
the 𝛼parameter, and therefore we do not expect these issues
to have an impact on our final result.
D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier
Page 10 of 10
|
Simulating the Angular Distribution of Cherenkov Radiation in Thick Media Dmytro Minchenkoa,∗, Juan Pablo Yañeza and Aksel Hallina aDept. of Physics, 6G 2E1, Canada A R T I C L E I N F O Keywords: Cherenkov light Astroparticle physics Detector physics Monte Carlo simulation Geant4 A B S T R A C T We present a study of the emission of Cherenkov radiation in thick media, explore the limitations of the simulation tools currently in use in particle and nuclear physics, and propose a method for overcoming these limitations. We start with a derivation of the Cherenkov power spectrum and its angular profile, accounting for interference of the radiation emitted, in contrast with commonly used tools that assume perfect coherence. We then study the impact that the path of electrons through a medium has on the angular profile of Cherenkov light. Finally, we devise a model that can introduce these effects in Geant4 and tune it to explain calibration data from the water-phase of SNO+. We find that the tuned model significantly improves the agreement between data and simulation, so we provide it for its wider use. 1. Introduction Cherenkov radiation [1] is widely used to detect and characterize charged particles, with applications in experimental particle physics, astrophysics, nuclear physics, and medicine. The Cherenkov light emitted by a particle depends upon particle type, energy and momentum. A property commonly used is that most light is emitted at a fixed angle θC, also known as the Cherenkov angle. The standard derivation of Cherenkov light emission [2, 3] shows that a charged particle in a dielectric medium with refractive index n, moving on a straight line, with a constant velocity β= v cfaster than the local speed of light in the medium, will emit Cherenkov light, a coherent front of light in a cone with half-angle θgiven by cos θC= 1 nβ. (1) Since the Cherenkov angle is well-defined with respect to the direction of travel of the charged particle, the resulting cone of Cherenkov light can be used to trace the path of the particle with high precision. Experiments have exploited this effect for almost four decades to detect charged particles and infer their properties over a wide range of energies [4, 5]. The technique is so powerful that it will be continued to be used by future generations of detectors, which in turn requires ever more precise simulation of the process. Modern simulation tools that study the movement of charged particles through matter often discretize physical processes. Interactions are simulated at scattering centers, with the particle undergoing free motion in-between them. The kinematics and probability of interactions are defined by scattering models that take into account a variety of physical phenomena. ∗Corresponding author (D. Minchenko) ORCID(s): Simulations of Cherenkov radiation are typically discretized. It is commonly assumed that the conditions required for emission of Cherenkov radiation persist along a charged particle's trajectory and the Cherenkov emission angle is calculated with respect to the average displacement of the particle. In reality, however, a particle is subject to a continuous interaction with the medium at the micron scale that deflects its trajectory and makes it lose energy continuously, alongside discrete interactions at atomic scales. As a result, the Cherenkov light emission might be significantly different from the predictions from simulations that approximate these effects. Cherenkov light is a coherent effect and we examine the potential interference and loss of coherence of the light front as well as the smearing of the emission angle that might result due to these effects. This work was motivated by an observed tension between data and simulation reported by the SNO+ experiment when comparing the angular distribution of Cherenkov light emitted by electrons with MeV energies in water [6]. Internal studies explored the effects of multiple scattering [7, 8, 9], the coherence of the Cherenkov light emitted [10], and the influence of the simulation step size on the resulting Cherenkov light distribution [11]. However, none of these studies explain the tension. Therefore, in this work, we assess the approximations routinely made when simulating Cherenkov light, and propose a method to account for the effects that are currently neglected. In Sec. 2, we provide a review of the classic derivation of Cherenkov light emission, including its associated approximations. In Sec. 3 we look at different methods for electron transport in simulation, and investigate their impact on their predicted angular distribution of Cherenkov light. In Sec. 4, we develop a new Cherenkov light radiation model, implement it in the commonly used Geant4 platform [12], and tune it to SNO+ calibration data. We find that, after tuning the model, the previously observed tension between the data and Monte-Carlo simulation is resolved. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 1 of 10 19 Sep 2025 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media 2. Cherenkov light from first principles We derive the radiated power of a charged particle as it scatters while traversing a medium1. This is based on the works of R.J. Komar [13], Schiff [14], and Dedrick [7]. Following Schiff, we define the current density Jxof a point charge ethat moves with a constant speed valong the z-axis starting at time t= 0 and position r = (x, y, z) = 0: Jx(r, t) = Jy(r, t) = 0, (2) and Jz(r, t) = evδ(x) ⋅δ(y) ⋅δ(z-vt). (3) where J = (Jx, Jy, Jz) is the current density, ethe charge of the particle, and vthe velocity. To calculate the angular distribution of the Cherenkov radiation, we use the exact expression for the average energy radiated at a position rby a harmonically time-varying current distribution in a homogeneous isotropic dielectric medium [7], as Pkω(r) = nk2 2πr2c ||||∫J⟂k(r′) e-ink⋅r′dτ′|||| 2 , (4) where Pkωis the energy flow per unit area and angular frequency in the direction of observation (parallel to kor r), |k| = ω∕c, nis the index of refraction for the medium, J⟂kis the component of the current density perpendicular to kand dτ′ is the volume element. Replacing the density in Eq. 3 by the Fourier amplitude of angular frequency ωwe obtain Jzω(r, t) = e 2πδ(x) ⋅δ(y) e(iwz∕v). (5) Substituting J⟂k= Jzωsin θinto Eq. 4 we obtain the energy flow per unit area and angular frequency 2πPkω(r) = ne2ω2 sin2 θ 4π2r2c3 ||||∫eiωz′( 1 v-ncos θ c )dz′|||| 2 . (6) For a path length Lcentered at the origin, the integral is ∫ L∕2 -L∕2 e iωz′( 1 v-ncos θ c ) dz′ = 2 sin [ ωL 2 ( 1 v-ncos θ c )] ω ( 1 v-ncos θ c ) . (7) For a particle with a straight and infinite trajectory we evaluate the limit for infinite Lto be 2πδ ( 1 v-ncos θ c ) , leading to the classical expression shown in Eq. 1. From Eq. 6 the radiated power is proportional to L2 sin2 θsin2 χ χ2 (8) 1We surveyed the literature searching for a derivation of the emission of Cherenkov light in thick media, but could not find any that would go beyond use of thin plates and straight-path approximations. with χ= ωL 2 (1 v-ncos θ c ) ≡πL λ′ ( 1 nβ-cos θ ) , (9) where λ′ = 2πc∕nωis the wavelength in the medium. The behavior of Eq. 8 depends on how Lcompares to λ′. In the case that L≪λ′, the sin2 (χ)∕χ2 in Eq. 8 becomes constant and the radiation is emitted over a dipole angular distribution. For L≫λ′, the shape of Eq. 8 resembles a gaussian function summed to a cosine function with a relatively smaller amplitude, and it is valid in the range χ= [-∞, +∞]. The total light output is given by the integral, and in this limiting case about 90% of the contribution comes from the range χ= [-π, +π]. Here, the angular distribution of the radiation emitted is sharply peaked at the Cherenkov angle θCand has a full width at half maximum δθ≃ λ′ Lsin θC . (10) We note that the integral of Eq. 8 can be done in χ space, where the valid range of χdepends linearly on L∕λ′. By requiring that |χ| can acquire values larger than pi, motivated by the limiting case of L≫λ′, we can estimate the regime where Lis sufficiently larger than λ′ to achieve coherence. Since the minimum value of χ(where θ= 0) is always the most restrictive one, we use it and demand it is ≤-π, which results in the coherence condition that L λ′ ( 1 -1 nβ ) ≥1. (11) Water is a commonly used medium for experiments involving Cherenkov radiation. Using a refractive index for water of n= 4∕3 and assuming that β= 1, Eq. 11 becomes L> 4λ′. (12) Cherenkov detectors are typically sensitive to light with wavelengths 300 nm π∕2, and the Full Width Half Maximum (FWHM) of the resulting Cherenkov light distribution. 3.2. Multiple Scattering vs Single Scattering A comparison of the angular distributions of the multiple scattering and single scattering models is shown in Fig. 1. Table 1 summarizes the values of the three benchmark quantities. Both models produce a similar angular distribution when the electron energy reaches 5 MeV. However, for lower energies, the distributions differ significantly, with the multiple scattering model producing a sharper peak than the single scattering model. The single scattering model produces a cone with a larger half-angle. The differences at low energies occur because the multiple scattering model produces large steps, of order 100 μs, for low energy electrons, emitting most of the light in the Cherenkov angle defined along the step. After a couple of steps, the electrons are below the Cherenkov threshold and cannot emit more light. However, electrons propagated with the single scattering model have comparatively short steps, of the order of a few μs, and most of these segments have enough energy to produce Cherenkov light. At 5 MeV and higher energies, even the multiple scattering model produces multiple steps before falling below the energy threshold for Cherenkov light production. As a result, both models produce very similar distributions. Figure 2 depicts these differences using a simulated 2 MeV electron in water, propagated with the multiple scattering and the single scattering models. 3.3. Impact of interference effects in the Single Scattering model The most physically accurate Cherenkov simulation combines the single scattering model and the calculations in Section 2, which considers interference effects and coherence constraints. The simulation setup is the same as in the previous tests, except that Cherenkov photons are restricted to the single wavelength λ = 400 nm. The electron's position, time, and β = v∕cat every step are used to calculate the sum of the integrals Iνfrom Eq. 13 and in this way obtain an angular distribution of Cherenkov light that includes interference effects. The interference term causes minimal deviations from the current simulation tools with the single scattering model, as shown in Fig. 1 and in Table 1. The results are in agreement with Ref. [10], where the authors state that there should not be any significant coherence violation of EM-radiation that forms Cherenkov light due to electron scattering in water. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 3 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media 0 2 0 1 2 3 4 5 Arb. Units 0.3 MeV SNO+ Multiple Scattering Single Scattering SS + Coherent Calculations Cherenkov light sampling 0 2 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 Arb. Units 0.5 MeV SNO+ Multiple Scattering Single Scattering SS + Coherent Calculations Cherenkov light sampling 0 2 0.0 0.5 1.0 1.5 2.0 2.5 Arb. Units 1 MeV SNO+ Multiple Scattering Single Scattering SS + Coherent Calculations Cherenkov light sampling 0 2 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 Arb. Units 5 MeV SNO+ Multiple Scattering Single Scattering SS + Coherent Calculations Cherenkov light sampling Figure 1: Comparison of the angular distribution of Cherenkov light obtained from using the SNO+ default multiple scattering model, the Geant4 Single Scattering model, the single scattering model with coherent emission and the Cherenkov light sampling technique described in Sec. 4. Electrons are injected in the center of a sphere, pointing in the +z-direction. The angle of a photon is defined by their intersection with the sphere. Each panel corresponds to different electron initial energy. Energy Quantity Multiple Scattering Single Scattering Coherent Sampling 0.3 MeV Peak position 0.21 ± 0.03 0.31 ± 0.03 0.29 ± 0.03 0.38 ± 0.03 FWHM 0.05+0.06 -0.05 0.29 ± 0.06 0.25 ± 0.06 0.55 ± 0.06 Tail fraction 0.0001 ± 6e-5 0.0051 ± 0.0004 0.0083 ± 0.0006 0.0054 ± 0.0004 0.5 MeV Peak position 0.52 ± 0.03 0.57 ± 0.03 0.57 ± 0.03 0.59 ± 0.03 FWHM 0.04+0.06 -0.04 0.46 ± 0.06 0.46 ± 0.06 0.57 ± 0.06 Tail fraction 0.036 ± 0.002 0.045 ± 0.002 0.040 ± 0.002 0.045 ± 0.002 1 MeV Peak position 0.69 ± 0.03 0.71 ± 0.03 0.71 ± 0.03 0.72 ± 0.03 FWHM 0.29 ± 0.06 0.55 ± 0.06 0.59 ± 0.06 0.62 ± 0.06 Tail fraction 0.089 ± 0.001 0.081 ± 0.001 0.084 ± 0.001 0.090 ± 0.001 5 MeV Peak position 0.77 ± 0.03 0.77 ± 0.03 0.78 ± 0.03 0.76 ± 0.03 FWHM 0.46 ± 0.06 0.47 ± 0.06 0.50 ± 0.06 0.46 ± 0.06 Tail fraction 0.099 ± 0.001 0.092 ± 0.001 0.092 ± 0.001 0.102 ± 0.001 Table 1 Comparisons of the distributions in Fig. 1 for different electron energies. For each model we compute the peak of the Cherenkov light distribution, the full width at half maximum (FWHM), as well as the fraction of photons emitted in the tails, with angle > π∕2. The comparatively large error on the tail fraction of low-energy multiple scattering models comes from the low number of photons that the model produces in this region (see Fig. 1). D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 4 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media Figure 2: Geant4 simulation of a sample trajectory followed by 2 MeV electrons in water. Trajectories are shown in red. The left panel shows the result from using a multiple scattering model, while the panel on the right shows the trajectory produced by a single scattering model. 4. Cherenkov light sampling: a fast implementation of the improved Cherenkov light model The single scattering model is significantly more computationally expensive than multiple scattering models; the simulations in Section 3.2 differed in computational time by a factor of 7. We developed and tested an algorithm we call "Cherenkov light sampling" that corrects the multiple scattering model and approximates the angular distribution of single scattering models with almost negligible impact on computational speed. For each medium, the single scattering model is used to generate the probability distributions for single events scatters, θss(Ee) and the free mean path of the electron λss(Ee) as a function of the electron's energy. We binned these distributions in steps of 50 keV for θss(Ee) and steps of 100 keV for λss(Ee), and then interpolated with a smooth function to have access to arbitrary energies. We chose the steps to be smaller than the electron energies being tested, and we confirmed that modifying their values did not impact the results. To simulate Cherenkov photons, we then use a multiplescattering model, which generates a step length and particle direction, and determines the number of Cherenkov photons. A more detailed trajectory is generated with a length sampled from λssand scattering angle from θss(Ee). Cherenkov photons are generated uniformly along the detailed trajectory. The algorithm is explained in Fig. 3. We assessed the accuracy with which the Cherenkov light sampling reproduces the result of the single scattering model using electrons in water as in Section 3.2. Figure 1 and Table 1 summarize the distributions. Cherenkov light sampling essentially reproduces the results from the single scattering model, although minor differences remain, particularly at 0.3 MeV. We observe a 7% increase of computation time compared to the multiple scattering model. We can compensate for the some of the differences between the Cherenkov sampling and the single scattering model and for differences due to neglecting the interference term by introducing a parameter αthat scales the mean free path of an electron as λ′ ss(E) = αλss(E). A value smaller than one results in an additional smearing of the Cherenkov light angular distribution, as shown in Fig. 4. We ran simulations of the Cherenkov sampling method using different values of α, and found that the best agreement with the results from the single scattering model were obtained for values of α∼0.6. This indicates that the approximations in the method do not fully capture the effects from the single scattering model and require further smearing of the photon injection angle. In the analysis that follows we test the method on experimental data, and we fit for the value of αthat describes the data best. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 5 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media Start Obtain Li, Ni, Ei, Ei+1, and ⃗piby multiple scattering model Lp= Li Ni ΔE = Ei-Ei+1 Ni Eij = Ei+ 1 2ΔE j = 0 Eij = Eij-ΔE Nθ = Lp λss(Eij) jj = 0 Sample Δθfrom θss(Eij) j = j + 1 Sample Δφfrom [0; 2π] interval. ⃗pi.theta = ⃗pi.theta + Δθ Rotate ⃗pifor Δφ jj = jj + 1 Is jj < Nθ? Emit a photon w.r.t. ⃗pi Is j < Ni? Stop Yes No Yes No Figure 3: Flow chart of the Cherenkov light sampling algorithm. Here Liis the length of the i-th step of the propagated electron with a mulitiple scattering model, Niis number of photons for that step, Eiand Ei+1 are the kinetic energies of the electron at the beginning and at the end of the segment and ⃗piis the direction of the segment. 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Arb. Units = 0.1 = 0.2 = 0.3 = 0.4 = 0.5 = 0.6 = 0.7 = 0.8 = 0.9 = 1 SS Figure 4: Comparison of the Cherenkov light distributions produced by the Cherenkov light sampling method using different αvalues. The simulation is the same as in Fig. 1 with an electron initial energy of 1 MeV. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 6 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media 4.1. Cherenkov light sampling with SNO+ data SNO+ is a large-scale underground neutrino detector [19] consisting of a spherical acrylic vessel of 6 m radius, submerged in water and surrounded by a structure of 9,394 photomultiplier tubes (PMTs). The PMTs provide about 50% photo coverage for events inside the vessel. The data discussed in this section comes from the water phase of the experiment. In the water phase of SNO+, events are detected if they produce Cherenkov light. A single electron produces a well defined ring-like pattern on the PMTs with an average number of 9 PMT hits per MeV. The angular distribution has been used to distinguish between different single electrons, gammas, and gamma cascades [6, 20]. SNO+ characterizes the isotropy of the light using parameters derived from the angular distribution of the PMTs, initially described in Ref. [21]. Figure 5 shows a schematic representation of PMT hits by Cherenkov light, where the ring structure is clearly visible. For a particle at vertex position v travelling along unit vector ̂u, and with PMTs located at pi, ri= pi-v. We define θijby cos θij= rirj |ri||rj| and βl= 2 N(N-1) [N-1 ∑ i=1 N ∑ j=i+1 Pl(cos θij) ] , (18) where Nis number of hits in the event and Plis a Legendre polynomial of degree l. For SNO [22], the predecessor of SNO+, it was determined[21] that the best differentiation was achieved using β14 = β1 + 4β4. The value of β14 is anticorrelated with the isotropy of the light. In the water phase of SNO+ a tension in β14 between data and simulation [6] was observed. Since the parameter contains the most information on the angular distribution of Cherenkov light, we will use it to fit the αparameter of the Cherenkov light sampling model to calibration data, and compare the results obtained. Moreover, since the SNO+ simulation for the water phase uses Geant4 for propagation of particles, we can directly apply the Cherenkov light sampling model to conduct our tests. The calibration data chosen for the comparisons come from the deployment of radioactive sources in the center of SNO+. The two calibration devices used are a 16N source [23] and an AmBe source [24], which provide nearly monoenergetic 6.13 MeV, 4.4 MeV, and 2.2 MeV γ-rays. We run simulations with different αin a range from 0.1 to 1 for the 16N and AmBe sources placed in the middle of the acrylic vessel, using the corresponding detector conditions for each run. As a result we obtain β14 distributions as a function of αfor three different energies of γ-rays. Figure 6 shows the values of the mean of a Gaussian fit of these β14 distributions as well as the values from the calibration data. While the overall dependence of the mean of β14 on αis clear, the results show a significant level of noise when making small changes in α. This is more obvious for the 2.2 MeV gamma. For this reason, we opted for fitting the relationship between β14 and αas β14(α) = a √ α+ bfor the range α= [0.5, 0.6]. To find an optimal value of α, we then minimize the function χ2(α) = ∑ i (β14i(α) -βdata 14i)2 σ2 datai+ σ2 MCi(α) , (19) where i= 6.1 MeV, 2.2 MeV, 4.4 MeV, and σ2 dataand σ2 fitare errors of β14 data and MC respectively. The result of the fit is α= 0.556 ± 0.005, which is close to the expectation from simulations of 0.6. To verify this result, we produced new simulation for the calibration sources using the fit value of α, and compare the expectation of the full β14 distribution, as well as angular distribution of the Cherenkov light observed. β14 - The complete β14 distributions for the three γ-ray energies considered are shown in Figs. 7 and 9. The improvement in agreement can be seen directly in the figures, particularly for the 6.1 MeV gammas. Table 2 quantifies this agreement, and also summarizes and compares the properties of the distributions. The Cherenkov sampling model explains the mean of the distributions by design, but it also describes their width better than the default simulation. The agreement in the shape of the distribution is highlighted by the χ2, which shows significant improvement for all three γ-ray energies. cos θPMT- The angular distribution of the Cherenkov light is quantified by cos θPMT, obtained as cos θPMT= ⃗ufit⋅ ⃗xPMT-⃗xfit |||⃗xPMT-⃗xfit||| , (20) where ⃗xPMTis the coordinate of the triggered PMT, ⃗xfit is the reconstructed position of the electron observed, and ⃗ufitis its reconstructed direction. This variable is similar to Fig. 1, now including the detector response: geometry, PMTs response and photon transport. Figure 8 shows the cos θPMT distribution for the 16N (6.1 MeV) source, while the distributions for the AmBe source are shown in Appendix. A quantitative description of the distributions and their agreement is given in Table 2. Here, the sampling model also shows a significant improvement with respect to the default simulation. The improvements are most noticeable for the 16N calibration source, despite the similarity in the distributions of sampling and multiple scattering on Fig. 1. Overall, the Cherenkov sampling method shows an improved agreement between the MC and data in the two variables that encode information about the angular emission of the light. Interestingly, these changes do not modify the behavior of other variables typically used in analyses, which mainly depend on timing and number of detected photons. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 7 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media Calibration source 16N AmBe4.4MeV AmBe2.2MeV Data β14 mean 0.4176 ± 4e-4 0.397 ± 2e-3 0.350 ± 3e-3 Default MC β14 mean 0.4414 ± 5e-4 0.422 ± 2e-3 0.387 ± 3e-3 Sampling β14 mean 0.4177 ± 7e-4 0.399 ± 2e-3 0.367 ± 3e-3 Data β14 σ 0.1727 ± 3e-4 0.201 ± 2e-3 0.290 ± 3e-3 Default MC β14 σ 0.1760 ± 3e-4 0.209 ± 1e-3 0.297 ± 2e-3 Sampling MC β14 σ 0.1714 ± 5e-4 0.207 ± 2e-3 0.292 ± 3e-3 Default MC β14 χ2/d.o.f. 1085.6/80 141.6/80 117.6/80 Sampling MC β14 χ2/d.o.f. 90.4/80 69.6/80 78.4/80 Default MC cos θPMTχ2/d.o.f. 8706/200 9312/200 12374/200 Sampling MC cos θPMT χ2/d.o.f. 1772/200 7523/200 10763/200 Table 2 Comparison of variables that encode information of the angular distribution of Cherenkov light for data and simulation. The mean and σof β14 from the Gaussian fits are shown, as well as the χ2 over degrees of freedom for β14 and cos θPMT. Figure 5: The θijangles within a Cherenkov ring event used to calculate β14. Reproduced from [21]. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 14 Data 6.1 MeV Data 4.4 MeV Data 2.2 MeV Best alpha estimation 6.1 MeV 4.4 MeV 2.2 MeV Figure 6: Comparison of β14 mean values obtained from simulations as a function of the parameter αtogether with the curve that fits them best, described in the text. The β14 mean values from calibration sources with γ- rays of energies 2.2 MeV, 4.4 MeV, and 6.1 MeV are also shown. The optimal value for αis shown by the pink dashed line. 0.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 14 0.5 1.0 1.5 2.0 2.5 Arb. Units N16 (6.1 MeV) MC Sampling Data 0.0 Figure 7: Comparison of the β14 distribution of the of 16N (6.1 MeV) for data, the default SNO+ MC and simulation produced using the Cherenkov light sampling method. 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Cos PMT 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Arb. Units MC Sampling Data 0.65 0.70 0.75 0.80 0.85 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Figure 8: Comparison of the cos θPMTdistribution of the 16N (6.1 MeV) for data, the default SNO+ MC and simulation produced using the Cherenkov light sampling method. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 8 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media 5. Conclusion In this paper, we investigated the relevance of the interference effects and the electron propagation method for simulation of Cherenkov light at a few MeVs. We demonstrated that the choice of electron propagation method has a considerable impact on the overall distribution of Cherenkov light below 6 MeV, while interference effects play a much smaller role. Consequently, we developed an improved Cherenkov light simulation model specifically tailored for MeV electrons propagating in a uniform medium. This model is based on the single scattering model and provides an improved level of accuracy while maintaining computational efficiency similar to previous methods. The model requires a one-time calculation of electron scattering in the relevant medium, and has a single parameter that can be tuned to relevant experimental data. We implemented the method in the Geant4 package and used it on SNO+ calibration data. After tuning the model, we are able to significantly improve the description of variables that depend on the angular distribution of Cherenkov light. Our findings might be relevant for experimental setups that use the angular distribution of Cherenkov light, in particular new projects that will have improved sensitivity to these effects, such as Hyper-Kamiokande [25], currently under construction, or the planned THEIA detector [26]. Acknowledgements This research was undertaken thanks in part to funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada First Research Excellence Fund through the Arthur B. McDonald Canadian Astroparticle Physics Research Institute, WestGrid and the Digital Research Alliance of Canada2 (formerly known as Compute Canada). The authors extend their thanks to the SNO+ Collaboration for access to the calibration data that spurred the questions addressed here, allowing us to study the Cherenkov light sampling method, and to Eric Vázquez Jáuregui and Logan Lebanowski for reviewing the manuscript. 2www.alliancecan.ca References [1] P. A. Cherenkov, Visible radiation produced by electrons moving in a medium with velocities exceeding that of light, Phys. Rev. 52 (1937) 378-379. [2] I. M. Frank, I. E. Tamm, Coherent visible radiation of fast electrons passing through matter, Compt. Rend. Acad. Sci. URSS 14 (1937) 109-114. [3] J. Jackson, Classical Electrodynamics, 2 ed., John Wiley and Sons, 1975. [4] J. Seguinot, T. Ypsilantis, A Historical survey of ring imaging Cherenkov counters, Nucl. Instrum. Meth. A 343 (1994) 1-29. [5] H. Kolanoski, N. Wermes, Particle Detectors, Oxford University Press, 2020. [6] M. Anderson, et al. (SNO+), Search for invisible modes of nucleon decay in water with the SNO+ detector, Phys. Rev. D 99 (2019) 032008. [7] K. G. Dedrick, The Influence of Multiple Scattering on the Angular Width of Čerenkov Radiation, Phys. Rev. 87 (1952) 891-896. [8] A. P. Kobzev, V. E. Pafomov, I. M. Frank, Angular distributions of the Vavilov-Cherenkov radiation induced by the 170-250 keV electrons in mica, Yad. Fiz. 29 (1979) 122-132. [9] A. Kobzev, I. Frank, Spectral dependence of the half-width of the angular distributions of vavilov-cerenkov radiation, Sov. J. Nucl. Phys.(Engl. Transl.);(United States) 31 (1980). [10] M. G. Bowler, Effects of electron scattering on Cherenkov light output, Nucl. Instrum. Meth. A 378 (1996) 463-467. [11] M. Bowler, M. Lay, Angular distribution of cherenkov light from electrons both produced and stopping in water, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 378 (1996) 468-471. [12] S. Agostinelli, et al., Geant4-a simulation toolkit, Nucl. Instrum. Meth. A 506 (2003) 250-303. [13] R. J. Komar (SNO Collaboration), The Effects of Electron Scattering on Cherenkov Light Output, Technical Report, 1995. [14] L. Schiff, Quantum Mechanics, McGraw-Hill book company, 1949. [15] W. R. Nelson, H. Hirayama, D. W. O. Rogers, The Egs4 Code System (1985). [16] L. Urban, A model for multiple scattering in geant4 (2006). [17] F. Salvat, J. M. Fernandez-Varea, E. Acosta, J. Sempau, Penelope - A code system for Monte Carlo simulation of electron and photon transport, Organisation for Economic Co-Operation and Development - Nuclear Energy Agency, Nuclear Energy Agency of the OECD (NEA), 2001. [18] V. N. Ivanchenko, O. Kadri, M. Maire, L. Urban, Geant4 models for simulation of multiple scattering, J. Phys. Conf. Ser. 219 (2010) 032045. [19] V. Albanese, et al. (SNO+), The SNO+ experiment, JINST 16 (2021) P08059. [20] A. Allega, et al. (SNO+), Improved search for invisible modes of nucleon decay in water with the SNO+detector, Phys. Rev. D 105 (2022) 112012. [21] J. Dunmore, The separation of CC and NC events in the Sudbury Neutrino Observatory, Ph.D. thesis, 2004. [22] J. Boger, et al. (SNO), The Sudbury neutrino observatory, Nucl. Instrum. Meth. A 449 (2000) 172-207. [23] M. R. Dragowsky, et al., The N-16 calibration source for the Sudbury Neutrino Observatory, Nucl. Instrum. Meth. A 481 (2002) 284-296. [24] Y. Liu, Ambe source calibration in the SNO+ water phase (2018). [25] K. Abe, et al., Letter of Intent: The Hyper-Kamiokande Experiment - Detector Design and Physics Potential - (2011). [26] M. Askins, et al. (Theia), THEIA: an advanced optical neutrino detector, Eur. Phys. J. C 80 (2020) 416. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 9 of 10 Simulating the Angular Distribution of Cherenkov Radiation in Thick Media 0.50 0.25 0.00 0.25 0.50 0.75 1.00 1.25 1.50 14 0.0 0.5 1.0 1.5 2.0 Arb. Units 4.4MeV MC Sampling Data 1.0 0.5 0.0 0.5 1.0 1.5 2.0 14 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Arb. Units 2.2MeV MC Sampling Data Figure 9: Comparison of the β14 distribution of the AmBe source, 2.2 MeV (left) and 4.4 MeV (right), for data, the default SNO+ MC and simulation produced using the Cherenkov light sampling method. 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Cos PMT 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 Arb. Units MC Sampling Data 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Cos PMT 0 1 2 3 4 Arb. Units MC Sampling Data Figure 10: Comparison of the cos θPMTdistribution of the AmBe source, 2.2 MeV (left) and 4.4 MeV (right), for data, the default SNO+ MC and simulation produced using the Cherenkov light sampling method. A. Appendix The comparison of the tuned Cherenkov light sampling distributions and the SNO+ default simulation for AmBe data are shown below. The β14 distribution is contained in Fig. 9, while cos θPMTcan be found in Fig. 10. We note that, after completing the analysis, the SNO+ collaboration found issues with the accuracy of the reconstructed event positions in the AmBe runs. Nonetheless, despite these issues, Figs. 9 and 10 display a better agreement with the Cherenkov sampling model than with the standard SNO+ simulation. Moreover, the 16N data dominates the final fit of the αparameter, and therefore we do not expect these issues to have an impact on our final result. D. Minchenko, J. P. Yañez, A. Hallin: Preprint submitted to Elsevier Page 10 of 10
|
2509.16284
|
Temporally staggered cropping co-benefits beneficial insects and pest control globally
Adrija Datta1, Subramanian Sankaranarayanan2, Udit Bhatia1,3,4*
1Department of Earth Sciences, Indian Institute of Technology, Gandhinagar, Gujarat,
India-382355
2Department of Biological Sciences and Engineering, Indian Institute of Technology,
Gandhinagar, Gujarat, India-382355
3Department of Computer Science and Engineering, Indian Institute of Technology,
Gandhinagar, Gujarat, India-382355
4Department of Civil Engineering, Indian Institute of Technology, Gandhinagar, Gujarat,
India-382355
*Correspondence to: bhatia.u@iitgn.ac.in
Abstract:
Reconciling increasing food production with biodiversity conservation is critical yet
challenging, particularly given global declines in beneficial insects driven by monoculture
intensification. Intercropping—the simultaneous or sequential cultivation of multiple
crops—has been proposed as a viable strategy to enhance beneficial insect services and
suppress pests, yet global evidence regarding optimal spatiotemporal intercropping
configurations remains fragmented. Here, we synthesize results from 7,584 field experiments
spanning six continents and 22 Köppen climate regions, evaluating effects of spatial (row,
strip, mixed, agroforestry) and temporal (additive, replacement, relay) intercropping
configuations on beneficial insect (predators, parasitoids, pollinators) abundance and pest
suppression using the Management Efficiency Ratio (MER; log ratio of abundance in
intercropping versus monoculture). Relay intercropping, characterized by temporally
staggered planting, emerged as the universally optimal temporal configuration, substantially
increasing predator (MER = 0.473) and parasitoid populations (MER = 0.512) and effectively
suppressing pests (MER = –0.611) globally. At regional scales, identical spatiotemporal
configurations simultaneously optimized beneficial insect predator abundance and pest
suppression in 57% of regions, while other regions required distinct, insect-specific
approaches. Our findings highlight relay intercropping as a globally generalizable solution,
but underscore regional variation that calls for targeted policies to simultaneously secure food
production and biodiversity conservation.
1. Introduction
The United Nations Sustainable Development Goals (SDGs) set ambitious targets to
concurrently achieve agricultural productivity, biodiversity conservation, and ecosystem
sustainability by 20301. However, prevailing agricultural practices significantly hinder
progress towards these interconnected goals2,3. Agriculture alone drives nearly 90% of global
deforestation4,5 and threatens over 17,000 species worldwide6. Intensive monoculture
farming, heavily dependent on agrochemicals7, has particularly contributed to alarming
declines in insect biodiversity8–10, with more than 40% of insect species experiencing global
reductions11,12. These insects underpin essential ecosystem services, including pollination10,13
(critical for 75% of global food crops14), natural pest suppression15, and nutrient cycling16.
Consequently, identifying scalable agricultural practices that enhance crop productivity by
controlling pests without compromising beneficial insect diversity has become critical, but
requires a detailed global understanding of ecological outcomes associated with such
practices across diverse climatic and geographic contexts.
Crop diversification practices, notably intercropping—the simultaneous or sequential
cultivation of multiple crops within the same agricultural area17—have emerged as a viable
strategy18,19 to enhance agroecosystem resilience and reduce reliance on chemical
pesticides20,21. Previous syntheses highlight intercropping’s capacity to enhance beneficial
insect populations22,23, effectively suppress pests24, and improve soil health25 by increasing
plant diversity and microclimatic complexity. However, global analyses26,27 typically
aggregate results across diverse intercropping systems without explicitly distinguishing
between spatial (Fig. 1a) configurations (row, strip, mixed, agroforestry) or temporal (Fig.
1b) designs (additive, replacement, relay), each of which uniquely influences different insect
population dynamics and ecosystem functioning (for more details, see Methods).
Furthermore, despite extensive local-scale evidence documenting28–31 positive ecological
outcomes from specific intercropping configurations, these findings are inherently
site-specific and cannot directly inform global or regional decision-making. Consequently,
the absence of a unified global assessment examining how distinct spatiotemporal
intercropping configurations systematically influence multiple insect functional groups across
diverse Köppen climate regions limits the development of broadly applicable and effective
agricultural sustainability strategies.
To address this knowledge gap, we synthesized global data from 7,584 field experiments
reported in 336 peer-reviewed studies spanning 22 Köppen climate regions32 across six
continents (Fig. 1c). Using monoculture systems as controls, we quantitatively assessed how
distinct spatiotemporal intercropping configurations influence key insect functional groups,
including pollinators, predators, parasitoids, and pests. Specifically, our synthesis addressed
three research questions: (1) Which spatial (row, strip, mixed, agroforestry) and temporal
(additive, replacement, relay) intercropping configurations most effectively enhance
beneficial insects and suppress pests? (2) Do ecological benefits of these configurations
remain consistent across different insect functional groups and diverse climate regions? (3)
How do specific crop combinations modulate different insect functional responses in varied
climates? We addressed these questions using meta-analytic methods and non-parametric
statistical tests to quantify insect responses across multiple geographic and climatic contexts.
Addressing these questions at a global scale enables assessment of different spatiotemporal
intercropping configurations as a scalable sustainability practice. Our results thus provide
critical insights into how region-specific intercropping strategies can be systematically
optimized, with direct implications for informing agricultural policy, guiding biodiversity
conservation efforts, and achieving practical alignment between crop productivity and
ecological resilience worldwide.
2. Data
2.1 Study Selection:
A literature search was conducted on Google Scholar (last accessed in February 2025) using
two search strings:
i) [“Intercrop” AND “Insect abundance”], and
ii) [(("insect" AND "pest") OR ("insect" AND "pollinator") OR ("insect" AND "predator")
OR ("insect" AND "parasitoid")) AND "intercrop" AND "abundance"].
The first 1000 results from each search were screened. Each article was evaluated for
relevance based on defined inclusion criteria. From the first search string, 298 articles met the
criteria, while 38 articles qualified from the second. The review encompassed studies
published between 1978 and January 30, 2025.
A total of 336 articles were selected based on a two-stage screening process. The
initial screening applied the following inclusion criteria: 1) the article was written in English;
2) duplicate studies published under different titles were excluded; 3) full-text access was
available either through open access or institutional access; and 4) the study was
peer-reviewed.
This initial screening resulted in 819 articles. These were then evaluated against
additional criteria to determine final eligibility: 5) opinion, information bulletins, reviews,
and meta-analyses were excluded; 6) studies needed to compare monoculture (as a control)
with intercropping (as a treatment); 7) articles had to assess insect taxonomic diversity
metrics, specifically abundance; 8) at least one insect species had to be identified and
included in the analysis; 9) studies had to follow the definition of intercropping as the
simultaneous cultivation of crops in the same field; 10) only open-field experiments were
considered i.e. greenhouse or laboratory studies were excluded; 11) studies involving the use
of chemical or organic pesticides were not included; 12) research involving transgenic or
non-edible crops was excluded; 13) modeling-based studies were not considered; 14) studies
using multiple intercrop species were excluded to avoid confounding effects of interspecific
competition on insects; 15) studies focusing on insects in stored grains were not included; 16)
research involving border or cover crops instead of true intercrops was excluded; and 17)
studies that presented data only in 3D graphs or in formats that could not be extracted were
not considered.
The search resulted in 336 articles deemed suitable for inclusion in our meta-analysis, from
which a total of 7,584 effect sizes were extracted (Fig. S10). The large number of effect sizes
is primarily due to many studies evaluating multiple insect taxonomic groups when
comparing intercropping with monoculture systems.
2.2 General information on the articles:
‘Title’, ‘Authors’, ‘DOI’, and ‘Year_of_publication’ variables: These variables provide
essential information for easily locating the articles included in our database. They comprise
the full title of the article, the list of authors, the Digital Object Identifier (DOI), and the year
in which the article was published.
2.3 Experimental site and climate conditions:
‘Country’, ‘Study_site’, ‘Latitude’, and ‘Longitude’ variables: These variables provide
location-specific details of the experimental sites, including the country and specific site
where the study was conducted, along with the latitude and longitude (in decimal degrees).
Coordinates were obtained either directly from the articles or estimated using the site name
via Google Maps (https://maps.google.com/). The ‘Climate_zone’ variable is classified based
on the Köppen-Geiger climate classification system32.
2.4 Spatial intercropping configuration description:
‘Spatial_intercropping’: This variable describes the spatial arrangement used for sowing both
species in the intercropping system33. (Fig. 1a).
1. Row intercropping: Where the two plant species are grown in distinct, alternating
rows34.
2. Strip intercropping: Where multiple rows of one plant species are alternated with one
or more rows of another species, forming strips in which each strip consists of more
than a single row35.
3. Mixed intercropping: Where the component crops are sown at the same time, either
within the same row or in a mixed pattern without any distinct rows or strips36.
4. Agroforestry: All the agroforestry systems included here were alley cropping
systems37. Here in alley cropping system, rows of crops are planted between rows of
trees or shrubs38.
2.5 Temporal intercropping configuration description:
‘Temporal_intercropping’: This variable describes the temporal arrangement used to
intercrop both species. (Fig. 1b):
1. Additive design: In the standard additive design, intercropping systems are created by
combining the plant densities of the respective pure stands. Consequently, the total
density in the intercropping system is higher than in the pure stands, while the density
of each component remains the same as in its pure stand39.
2. Replacement (or substitutive) design: In the standard replacement design,
intercropping systems are established by substituting a specific number of plants from
one component with an equal number of plants from the other component. As a result,
the density of each component in the mixture is lower than in its pure stand, but the
overall stand density remains the same as in each pure stand39.
3. Relay design: In a standard relay intercropping design, the second crop is planted into
the standing first crop at a later growth stage, creating a temporal overlap between the
two crops instead of simultaneous planting. Although both crops occupy the same
field space for part of their growth periods, the total plant density changes over time,
and the density of each crop during the overlap phase is typically lower than in their
respective monocultures40.
2.6 Crop details and insect abundance variables:
Since the intercropping experiments include two species, the demonstration is made for one
species (called Crop_1) and is replicable for the second species (Crop_2).
‘Crop_1_Common_Name’ and ‘Crop_1_Scientific_Name’: These variables provide both the
scientific and common names of each crop species. To ensure consistency and avoid
confusion caused by multiple common names referring to the same species, scientific names
were matched with their corresponding common names using the United States Department
of Agriculture (USDA) Plants Database (http://plants.usda.gov/java/).
‘Crop_1_family’: This variable indicates the botanical family of each crop species. Family
information was either obtained directly from the source article or updated using the World
Crops database (https://world-crops.com/category/crops/).
‘Crop_1_C’: This variable indicates the crop's photosynthetic pathway, identifying whether it
follows the C3 or C4 carbon fixation process.
‘Crop_1_type’: This variable defines the crop category, such as cereals, legumes, vegetables,
oilseeds, spices, forage, flowers, fruits, or trees. Crop classifications were updated using the
World Crops database (https://world-crops.com/category/crops/). Crops without a clearly
defined use were categorized as ‘others’.
‘Insect_scientific_name’ and ‘Insect_common_name’: These variables include both the
scientific and common names of each insect species. While some articles report scientific
names, others mention only common names. In cases where only common names are
provided, we used the GBIF Python package, pygbif (https://techdocs.gbif.org/en/openapi/),
to retrieve the corresponding scientific names. For articles that only specify the insect's order
or family without identifying the species, both the common and scientific name fields are left
blank.
‘Insect_order’ and ‘Insect_family’: These variables describe the order and family of each
insect species, primarily extracted directly from the original articles. For articles that do not
report order or family but include either the scientific or common name, we used the GBIF
Python package, pygbif (https://techdocs.gbif.org/en/openapi/), to determine their taxonomic
classification.
‘Count_type’: This variable indicates the method used to count insects in each study, with the
count type extracted directly from the respective article. This variable also includes the unit
in which insect abundance is reported.
‘Insect_role’ and ‘Insect_role_type’: The ‘Insect_role’ variable broadly categorizes each
insect based on its impact on crops, indicating whether it is beneficial or a pest. The
‘Insect_role_type’ variable further classifies beneficial insects into specific functional groups,
such as pollinators, parasitoids, or predators.
‘Abundance_Crop_1’ and ‘Abundance_intercropped’: These two variables indicate insect
abundance
in
monocropping
(‘Abundance_Crop_1’)
and
intercropping
(‘Abundance_intercropped’) systems, respectively. The method used to quantify insect counts
is specified in the ‘Count_type’ variable.
3. Methods
3.1 Data extraction and collection:
Data were extracted from tables, figures, or textual descriptions within the articles. Values
presented
in
graphs
were manually digitized using the WebPlotDigitizer tool
(https://automeris.io/WebPlotDigitizer/). All extracted data were compiled into an Excel
spreadsheet, a format commonly supported by data analysis tools and well-suited for ensuring
interpretability in scientific research. Additionally, information was documented on crop
combinations, spatial arrangements, and the functional group classification of the insect
species reported in each study.
A single author carefully evaluated each publication to determine its suitability and the
reliability of the data. In total, 107 publications (representing 32% of all entries) underwent
two rounds of review, once in November and again in February, to minimize potential errors.
Data accuracy was verified multiple times by revisiting the original source materials
whenever necessary.
3.2 Calculation of Management Efficiency Ratio (MER):
We calculated the Management Efficiency Ratio (MER) to compare insect abundance in the
treatment (intercropping) versus the control (monoculture) systems (Equation 1).
(1)
𝑀𝐸𝑅 = 𝑙𝑛𝐴𝑏𝑢𝑛𝑑𝑎𝑛𝑐𝑒 𝑖𝑛 𝑡𝑟𝑒𝑎𝑡𝑚𝑒𝑛𝑡 (𝑖𝑛𝑡𝑒𝑟𝑐𝑟𝑜𝑝)
𝐴𝑏𝑢𝑛𝑑𝑎𝑛𝑐𝑒 𝑖𝑛 𝑐𝑜𝑛𝑡𝑟𝑜𝑙 (𝑚𝑜𝑛𝑜𝑐𝑟𝑜𝑝)
An MER greater than 0 indicates that intercropping has a positive effect on insect abundance
relative to monoculture. In contrast, an MER less than 0 suggests a negative effect of
intercropping compared to monoculture.
3.3 Publication bias analysis:
Egger’s regression test41 was conducted to detect potential funnel plot asymmetry as an
indicator of publication bias. To further examine robustness against small-study effects, we
applied a limit meta-analysis, estimating the effect size as sampling error approaches zero.
3.4 Statistical Analyses:
Anticipating variation in effect sizes across studies, we used the Mann-Whitney U test42 to
compare two major independent groups. Statistical significance was determined at a threshold
of p < 0.05. To assess the magnitude of difference between groups, independent of sample
size, we applied Cliff’s Delta test43. A value of 0 indicates complete overlap between groups,
while values approaching -1 or +1 reflect a strong effect.
To further explore subgroup differences, we assessed statistical heterogeneity using the
Kruskal-Wallis test44 and Higgins & Thompson’s I² statistic45. The I² value quantifies the
degree of heterogeneity: low (≤ 25%), moderate (25–75%), and substantial (≥ 75%).
In addition to the overall group comparisons, we conducted Dunn’s test46 with Bonferroni
correction47 to perform pairwise comparisons among the different intercropping strategies.
This post-hoc analysis allowed us to identify which specific groups differed significantly
after detecting overall differences using the Kruskal-Wallis test. The Bonferroni correction is
crucial in this context as it adjusts for multiple comparisons, thereby reducing the risk of
Type I errors (false positives) when testing multiple group differences simultaneously. While
Dunn’s test identifies statistically significant differences between groups, it does not convey
the magnitude of these differences. Therefore, we calculated Cliff’s Delta for each pairwise
comparison to provide a non-parametric estimate of effect size.
To further investigate whether different spatiotemporal intercropping configurations
significantly differ from one another, we applied the Kruskal-Wallis test along with
Leave-One-Out Cross-Validation (LOO-CV)48. The LOO-CV approach was used to confirm
the robustness of the Kruskal-Wallis results, ensuring that the findings were not
disproportionately influenced by any single study or data point.
3.5 Identification of best strategies for each climate zone and crop combination:
The intercropping pattern and its structural design were combined into a single descriptive
label to uniquely represent each practice. Records with missing or incomplete information on
either the intercropping strategy, its design, or its performance metric were removed to ensure
data quality. For each climate zone and insect role category, the average performance of each
intercropping practice was calculated. This allowed for summarizing how effective different
strategies were across various climatic contexts. To identify the most suitable strategy for
each insect group in each climate zone, the following criteria were applied: i) For insect
pests, the strategy with the lowest average performance metric (lowest MER) was selected
(indicating greater effectiveness in suppression). ii) For beneficial insects (such as pollinators
and natural enemies), the strategy with the highest average value (highest MER) was
considered optimal.
The same strategy was followed to identify the most effective crop combinations
(monocrop-intercrop combination) for each insect group and climatic region.
We have also repeated the same methodology to identify the intercropping pattern design
combination for each insect group on each continent.
4. Results
4.1 Global research highlights regional gaps in crop diversity and insect studies
Geospatial analysis of 336 peer-reviewed studies published between 1978 and 2025 revealed
marked regional biases, with most studies concentrated in tropical savanna (Aw, 24.5%),
temperate oceanic (Cfb, 9.9%), and humid subtropical (Cwa, 9.3%) climate zones. Asia
emerged as the primary research hub (39.2% of studies), followed by North America (21.1%)
and Africa (13.7%). Country-level analyses further identified India (10.3%), Costa Rica
(5.5%), and Iran (5.2%) as key study locations. Asian studies primarily originated from India,
Iran, Indonesia, China, and Pakistan, whereas African research centered predominantly in
Tanzania, Benin, Uganda, Egypt, and Nigeria (Fig. S1a, S1b).
Intercontinental differences were pronounced in crop diversity. Asian studies reported the
greatest diversity with 160 crop species, notably cabbage (28 intercrop combinations) and
maize (22 combinations). African research employed 100 distinct crops (Fig. 2a),
predominantly maize (37 combinations) and okra (19 combinations). Comparatively,
developed regions displayed lower crop diversity: Europe had 46 documented crops (cabbage
most common with 9 combinations), and North America reported 50 crops (maize with 7
combinations) (Fig. S2).
Insect community composition exhibited distinct biogeographic patterns. Asia recorded the
highest taxonomic diversity (16 orders), dominated by Hemiptera (26.5%) and Lepidoptera
(24.6%) (Fig. 2b). African studies documented nine insect orders, predominantly Lepidoptera
(35.6%). Coleoptera dominated insect communities in Europe (64.1%), North America
(59.1%), and Oceania (64.7%), whereas Hymenoptera was the leading order in South
America (45.6%).
Globally, Lepidoptera was the most frequently studied insect order (25.3%), followed closely
by Coleoptera (24.9%) and Hemiptera (23.8%) (Fig. 2c). Additional notable orders included
Hymenoptera (9.9%), Diptera (4.0%), Neuroptera (3.2%), and Thysanoptera (3.0%). Among
climate regions, tropical savanna (Aw) zones harbored the greatest insect diversity (25.4%),
predominantly Lepidoptera (41.3%), followed by arid steppe (BSh, 12.2%), similarly
dominated by Lepidoptera (35.3%) (Fig. 2d, Fig. S3, Table S1).
4.2 Global intercropping studies predominantly reflect pest suppression objectives
We assessed insect responses to intercropping using the Management Efficiency Ratio
(MER), calculated as the natural logarithm of insect abundance ratios in intercropped versus
monoculture systems as control (see Methods for details). If MER is positive, it means insect
abundance will increase in intercropping with respect to monoculture and vice versa.
Globally, intercropping resulted in predominantly negative MER values (63.6% of cases),
indicating generally lower insect abundance in intercropping relative to monocultures (Fig.
3a). The global mean MER was significantly negative (–0.164 ± 0.652; p < 0.0001),
underscoring an overall suppressive effect of intercropping on insect populations. This
globally observed suppression primarily reflects the predominant research emphasis on pest
control, closely aligned with agricultural yield protection goals. Pest-focused studies were
substantially more prevalent (n = 4,138) than studies assessing beneficial insects (pollinators
and natural enemies, n = 2,540; Fig. 3b). Although intercropping is highly effective for pest
management, the broader potential of intercropping to support beneficial insects remains
likely underestimated due to these research biases.
This global trend, however, varied markedly across continents (Fig. S4, Table S2). South
America showed a marginal increase in insect abundance under intercropping (weighted
mean MER = 0.078). In contrast, Africa exhibited the strongest insect population suppression
(weighted mean MER = –0.303).
While the global trend clearly aligns with pest suppression objectives, examining responses
of different insect functional groups can further clarify whether beneficial insects exhibit
systematically different responses under intercropping.
4.3 Intercropping selectively enhances beneficial insects and suppresses pests
We quantified the differential effects of intercropping on beneficial insects versus pests by
comparing their Management Efficiency Ratios (MER). Beneficial insects exhibited
significantly higher MER values (mean = 0.292 ± 0.012 SE), indicating increased abundance
under intercropping compared to monocultures. In contrast, pests showed significantly
negative MER values (mean = –0.445 ± 0.008 SE), with significant heterogeneity across
groups ( p < 0.001; Cliff’s delta = 0.67; Fig. 3b).
Further analysis of beneficial insects by functional group—pollinators, predators, and
parasitoids—revealed that intercropping differentially benefited these groups. Predators
(MER = 0.276 ± 0.015) and parasitoids (MER = 0.303 ± 0.025) responded positively and
significantly more strongly than pollinators (MER = 0.076 ± 0.059), suggesting natural
enemies derive the greatest advantage from intercropping (Fig. 3c).
A Kruskal–Wallis test44 confirmed significant variation among these functional groups (p <
0.001; Higgins & Thompson’s I2 = 99.85%; Fig. S5a). Post hoc Dunn’s pairwise
comparisons46 (Bonferroni-corrected47) indicated predators (Cliff’s delta Δ = 0.203, p <
0.001) and parasitoids (Δ = 0.218, p < 0.001) benefited significantly more than pollinators.
The difference between predators and parasitoids was statistically significant but negligible in
magnitude (Δ = 0.006, p < 0.05). Pests showed consistent and substantial negative responses
relative to all beneficial groups (e.g., Δ = –0.659 vs. predators, p < 0.001), underscoring the
selective enhancement of natural enemies and effective pest suppression under intercropping.
Given that beneficial insect and pest populations respond distinctly, identifying specific
spatiotemporal intercropping designs responsible for these differential outcomes is essential
for targeted agroecological interventions.
4.4 Effective insect management emerges from aligning spatial and temporal
intercropping designs with ecological roles
We analyzed global data to identify which temporal (additive, replacement, relay) and spatial
(mixed, row, strip, agroforestry) intercropping configurations best enhance beneficial insect
abundance and suppress pests. Among temporal designs, relay intercropping showed the
strongest positive effect on parasitoids (mean MER = 0.473 ± 0.094 SE) and predators (mean
MER = 0.512 ± 0.067 SE) and the greatest suppression of pests (mean MER = –0.611 ± 0.045
SE; Fig. 3d). Kruskal–Wallis analyses confirmed significant differences across temporal
designs for predators (p < 0.001), parasitoids (p < 0.01), and pests (p < 0.001). Pairwise
comparisons (Dunn’s tests with Bonferroni correction, Fig. S5b) revealed relay designs
significantly outperformed additive (p < 0.05) and replacement (p < 0.001) designs for both
predators and parasitoids. These findings suggest relay intercropping provides beneficial
insects with temporally continuous habitats and resources, while simultaneously disrupting
pest colonization dynamics.
Spatial intercropping patterns also significantly influenced insect functional groups. Row
intercropping yielded the highest MER values for pollinators (mean MER = 0.284 ± 0.132
SE) and predators (mean MER = 0.382 ± 0.02 SE), potentially due to enhanced foraging
efficiency and accessibility (Fig. 3e, Fig. S6). Parasitoids responded best to strip (mean MER
= 0.374 ± 0.048 SE), likely reflecting improved habitat segregation and host availability.
Mixed intercropping most effectively suppressed pests (mean MER = –0.533 ± 0.076 SE),
suggesting that greater spatial heterogeneity disrupts pest colonization and reproduction.
Finally, considering combined spatial and temporal dimensions, specific configurations
emerged as optimal for different insect groups (Fig. S7, Fig. S8). Strip-replacement
intercropping provided the greatest enhancement of parasitoid abundance (mean MER =
0.647 ± 0.151 SE), while row-additive intercropping best supported pollinators (mean MER =
0.284 ± 0.132 SE). Predators benefited most strongly from row-replacement intercropping
(mean MER = 1.022 ± 0.135 SE, p < 0.001), and pest suppression was maximized in
strip-relay systems (mean MER = –0.934 ± 0.046 SE, p < 0.001). Collectively, these results
demonstrate that optimal insect management through intercropping depends critically on
aligning spatial and temporal crop configurations with the specific ecological roles and
habitat requirements of targeted insect groups.
Finally, since ecological roles and habitat preferences underpin insect responses to
intercropping, it remains crucial to determine whether optimal strategies generalize across
climate zones or require region-specific tailoring.
4.5 Climate-specific intercropping and crop combinations simultaneously optimize
beneficial insect abundance and pest suppression
Given the broad climatic variability of global croplands, we conducted a region-by-region
comparison through mosaic analysis (Fig. 4a) across major Köppen climate zones to identify
optimal intercropping configurations for enhancing beneficial insects and suppressing pests.
Pollinator responses were excluded due to insufficient data. In regions with adequate
information, we determined the intercropping configurations that maximized beneficial insect
abundance and minimized pest populations.
Our analysis revealed strong consistency in optimal intercropping choices among insect
groups within many climate regions. In approximately 57% of regions, predators and pests
responded most favorably to the same intercropping configuration, highlighting substantial
ecological compatibility (Fig. 4a). Notably, in nearly one-third (28.5%) of the climate regions
studied, a single intercropping configuration simultaneously optimized abundance of pests,
predators, and parasitoids, underscoring potential for integrated insect management. Among
these broadly beneficial strategies, row-additive and strip-additive intercropping designs
emerged most frequently.
Crop combinations also strongly influenced insect responses. Cereal-legume intercrops were
most consistently effective, emerging as the optimal combination in 18% of climate regions
and frequently benefiting predators and parasitoids while reducing pests (Fig. 4b). Overall,
28% of regions favored one of six specific crop combinations—including cereals-legumes,
spices-vegetables,
cereals-cereals,
flowers-fruits,
cereals-oilseeds,
and
forage-vegetables—for simultaneously increasing predator abundance and suppressing pests.
These intercrop combinations likely provide complementary ecological resources such as
nectar, alternative prey, or structural habitats favorable to beneficial insects and may emit
compounds that deter pests. However, no single crop combination universally promoted
parasitoid and predator abundance while also suppressing pests, indicating specialized
insect-crop interactions across different agricultural systems.
4.6 Publication bias report:
Egger’s test41 (Fig. S9) for funnel plot asymmetry revealed no significant publication bias (z
= 1.4486, p = 0.1474). Additionally, the limit estimate, representing the effect size as
sampling error approaches zero, was even more negative (b = -0.1916; 95% CI: -0.2845 to
-0.0987), reinforcing the robustness of the observed effect.
5. Discussion:
Agricultural landscapes worldwide face the dual challenge of increasing food production and
conserving biodiversity, two goals traditionally viewed as competing rather than
complementary6,49. Historically, research has predominantly emphasized pest suppression18 or
yield enhancement15,16 in isolation, resulting in a fragmented understanding of the broader
agroecological potential of intercropping. Our global synthesis of 7,584 field experiments
spanning six continents and 22 Köppen climate regions offers empirical evidence that
intercropping can simultaneously enhance beneficial insect populations and suppress pest
abundance
across
diverse
agroecosystems.
Our
results
demonstrate
that
relay
intercropping—characterized by temporally staggered planting—is the universally optimal
temporal configuration, significantly increasing predator (mean MER = 0.512) and parasitoid
(mean MER = 0.473) populations while substantially suppressing pest numbers (mean MER
= –0.611). Mosaic analyses further revealed considerable regional consistency, with identical
spatiotemporal configurations simultaneously optimizing natural enemy abundance and pest
suppression in approximately 28.5% of climate regions. Nevertheless, notable deviations
among insect functional groups and climate zones highlight important geographic variability,
underscoring the necessity for region-specific intercropping strategies.
Despite these robust global patterns, several critical knowledge gaps remain. Notably, our
synthesis identified limited global-scale data on pollinator responses to intercropping, despite
pollinators' critical roles in crop productivity50 and food security51. Furthermore, arid and
boreal climates are significantly underrepresented in existing datasets, restricting
comprehensive global generalizations. Importantly, the absence of integrated yield and
profitability assessments alongside insect abundance metrics limits comprehensive
evaluations of intercropping’s multifunctional benefits. Addressing these limitations by
explicitly
focusing
on
pollinator
populations,
expanding
research
coverage
in
underrepresented climates, and incorporating economic and yield outcomes will strengthen
future global syntheses and enhance practical agricultural relevance.
Our findings have important implications for agricultural policy and management. While
several countries have integrated intercropping and agroforestry into formal frameworks,
such as India’s integrated intercropping and agroforestry initiatives of Council on Energy,
Environment and Water (CEEW)52 and the European Union’s Common Agricultural Policy53
(2023-27), policy attention remains limited to select configurations. Building on this policy
momentum, the recent International Maize and Wheat Improvement Center (CIMMYT)
Intercropping project (2023-2028) focuses exclusively on row-additive intercropping in
South Asia54, overlooking other potentially effective spatiotemporal strategies. Several
countries have already integrated intercropping and agroforestry into formal policy
frameworks. While our results highlight the ecological benefits of relay intercropping for
enhancing beneficial insects and suppressing pests, it remains underrepresented in current
implementation efforts. Relay intercropping also aligns with resource constraints common to
smallholder farming and substantially reduces insecticide dependency55, making it a strong
candidate for inclusion in extension programs, incentive schemes, and training. Strengthening
policy support for relay intercropping can help achieve sustainability goals, specifically the
United Nations Sustainable Development Goals on food security (SDG 2: Zero Hunger) and
terrestrial biodiversity conservation (SDG 15: Life on Land).
Considering projected climate impacts on global agriculture due to anthropogenic climate
change56, future research must evaluate intercropping strategies under evolving pest
dynamics, shifting beneficial insect distributions, and increased climatic extremes. Predictive
ecological modeling that incorporates projected climate scenarios can inform resilient
cropping recommendations, ensuring intercropping systems maintain effectiveness under
altered ecological conditions. Additionally, translating our global-scale insights into spatially
explicit, region-specific recommendations will facilitate practical application, allowing
policymakers and practitioners to target agricultural diversification strategies effectively
across varied climates and agroecological contexts.
Ultimately, our global synthesis provides a robust ecological basis for multifunctional
agriculture, highlighting strategic relay intercropping designs that simultaneously enhance
beneficial insect populations and effectively suppress pests. These clearly quantified and
globally consistent results represent actionable pathways to reconcile intensified food
production with biodiversity conservation goals, traditionally viewed as conflicting
objectives. By systematically addressing identified knowledge gaps, particularly around
pollinator dynamics and yield assessments, future research can further refine and expand the
applicability of intercropping strategies. Such targeted agricultural diversification can support
both ecological resilience and agricultural productivity, directly contributing to global
sustainability priorities and addressing pressing environmental and food security challenges
worldwide.
Author contributions:
Conceptualization: AD. Data collection: AD, Analysis: AD. Visualization: AD, SS, UB.
Writing – original draft: AD, SS, UB. Writing – review and editing: AD, SS, UB.
Competing interests:
The authors declare that they have no competing interests.
Acknowledgements:
This research was primarily funded by IIT Gandhinagar. The authors also thank the members
of the Machine Intelligence and Resilience Laboratory at IIT Gandhinagar for their valuable
discussions and constructive feedback on this manuscript.
Data and materials availability:
All data supporting the findings of this study are included in the main text and/or
Supplementary Materials. Additional data are available at the following repository:
https://doi.org/10.5281/zenodo.16897437 57.
References:
1. United Nations. Transforming Our World by 2030: A New Agenda for Global Action
Zero.
2. The Sustainable Development Goals Report Special Edition. (2023).
3. Transforming Food and Land Systems to Achieve the SDGs. (2024).
4. Pendrill, F. et al. Disentangling the numbers behind agriculture-driven tropical
deforestation. Science 377, eabm9267 (2022).
5. Korotkov, A. V., Peck, T. & Köhl, M. RESOURCE ASSESSMENT | Regional and
Global Forest Resource Assessments. in Encyclopedia of Forest Sciences 973–982
(Elsevier, 2004). doi:10.1016/B0-12-145160-7/00158-7.
6. Williams, D. R. et al. Proactive conservation to prevent habitat losses to agricultural
expansion. Nat. Sustain. 4, 314–322 (2020).
7. Liu, X. et al. Global cropland expansion enhances cropping potential and reduces its
inequality among countries. Earth Syst. Dyn. 15, 817–828 (2024).
8. Tarigan, S. D. Biodiversity-based ecosystem services for the management of monoculture
plantation landscape using a transdisciplinary approach: a review. IOP Conf. Ser. Earth
Environ. Sci. 325, 012013 (2019).
9. Lemaire, G., Faccio CarvalhoPaulo César De Faccio Carvalho, P. C. D., Kronberg, S. L.
& Recous, S. Agroecosystem Diversity: Reconciling Contemporary Agriculture and
Environmental Quality. (Academic Press, Elsevier, 2018).
10.Goulson, D., Nicholls, E., Botías, C. & Rotheray, E. L. Bee declines driven by combined
stress from parasites, pesticides, and lack of flowers. Science 347, 1255957 (2015).
11. Sánchez-Bayo, F. & Wyckhuys, K. A. G. Worldwide decline of the entomofauna: A
review of its drivers. Biol. Conserv. 232, 8–27 (2019).
12.Wagner, D. L., Grames, E. M., Forister, M. L., Berenbaum, M. R. & Stopak, D. Insect
decline in the Anthropocene: Death by a thousand cuts. Proc. Natl. Acad. Sci. 118,
e2023989118 (2021).
13.Dicks, L. V. et al. A global-scale expert assessment of drivers and risks associated with
pollinator decline. Nat. Ecol. Evol. 5, 1453–1461 (2021).
14.Potts, S. G. et al. THE ASSESSMENT REPORT ON POLLINATORS, POLLINATION
AND FOOD PRODUCTION OF THE INTERGOVERNMENTAL SCIENCE-POLICY
PLATFORM ON BIODIVERSITY AND ECOSYSTEM SERVICES. 36
https://doi.org/10.5281/zenodo.3402856 (2016).
15.Bommarco, R., Miranda, F., Bylund, H. & Björkman, C. Insecticides Suppress Natural
Enemies and Increase Pest Damage in Cabbage. J. Econ. Entomol. 104, 782–791 (2011).
16.Raven, P. H. & Wagner, D. L. Agricultural intensification and climate change are rapidly
decreasing insect biodiversity. Proc. Natl. Acad. Sci. 118, e2002548117 (2021).
17.Brooker, R. W. et al. Improving intercropping: a synthesis of research in agronomy, plant
physiology and ecology. New Phytol. 206, 107–117 (2015).
18.Vandermeer, J. H. The Ecology of Intercropping. (Cambridge University Press, 1989).
doi:10.1017/CBO9780511623523.
19.Andrews, D. J. & Kassam, A. H. The Importance of Multiple Cropping in Increasing
World Food Supplies. in ASA Special Publications (eds Papendick, R. I., Sanchez, P. A.
& Triplett, G. B.) 1–10 (American Society of Agronomy, Crop Science Society of
America, and Soil Science Society of America, Madison, WI, USA, 2015).
doi:10.2134/asaspecpub27.c1.
20.Li, L., Zhang, L. & Zhang, F. Crop Mixtures and the Mechanisms of Overyielding. in
Encyclopedia of Biodiversity 382–395 (Elsevier, 2013).
doi:10.1016/B978-0-12-384719-5.00363-4.
21.Li, L., Tilman, D., Lambers, H. & Zhang, F. Plant diversity and overyielding: insights
from belowground facilitation of intercropping in agriculture. New Phytol. 203, 63–69
(2014).
22.Pierre, J. F. et al. A review of the impact of maize-legume intercrops on the diversity and
abundance of entomophagous and phytophagous insects. PeerJ 11, e15640 (2023).
23.Nicholls, C. I. & Altieri, M. A. Plant biodiversity enhances bees and other insect
pollinators in agroecosystems. A review. Agron. Sustain. Dev. 33, 257–274 (2013).
24.Mir, M. S. et al. Role of Intercropping in Sustainable Insect-Pest Management: A
Review. Int. J. Environ. Clim. Change 3390–3404 (2022)
doi:10.9734/ijecc/2022/v12i111390.
25.Zhang, F. et al. Rhizosphere Processes and Management for Improving Nutrient Use
Efficiency and Crop Productivity. in Advances in Agronomy vol. 107 1–32 (Elsevier,
2010).
26.Letourneau, D. K. et al. Does plant diversity benefit agroecosystems? A synthetic review.
Ecol. Appl. 21, 9–21 (2011).
27.Rakotomalala, A. A. N. A., Ficiciyan, A. M. & Tscharntke, T. Intercropping enhances
beneficial arthropods and controls pests: A systematic review and meta-analysis. Agric.
Ecosyst. Environ. 356, 108617 (2023).
28.Hithesh, G. R., Suroshe, S. S., Keerthi, M. C. & Fand, B. B. Companion planting of
flowering annuals enhances the colonization of natural enemies in white cabbage fields.
J. Plant Dis. Prot. 132, 49 (2025).
29.Singh, A., Weisser, W. W., Hanna, R., Houmgny, R. & Zytynska, S. E. Reduce pests,
enhance production: benefits of intercropping at high densities for okra farmers in
Cameroon. Pest Manag. Sci. 73, 2017–2027 (2017).
30.Zheng, Y. et al. Potato/Maize intercropping reduces infestation of potato tuber moth,
Phthorimaea operculella (Zeller) by the enhancement of natural enemies. J. Integr. Agric.
19, 394–405 (2020).
31.Soujanya, P. L. et al. Intercropping in maize reduces fall armyworm Spodoptera
frugiperda (J. E. Smith) infestation, supports natural enemies, and enhances yield. Agric.
Ecosyst. Environ. 373, 109130 (2024).
32.Kottek, M., Grieser, J., Beck, C., Rudolf, B. & Rubel, F. World Map of the
Köppen-Geiger climate classification updated. Meteorol. Z. 15, 259–263 (2006).
33.Kumari, A. & Choudhary, M. Annual Intercrops: An Alternative Pathway for Sustainable
Horticultural Production. Ecol. Environ. Conserv. 28, S244–S251 (2022).
34.Chen, C., Westcott, M., Neill, K., Wichman, D. & Knox, M. Row Configuration and
Nitrogen Application for Barley–Pea Intercropping in Montana. Agron. J. 96, 1730–1738
(2004).
35.Li, L. et al. Wheat/maize or wheat/soybean strip intercropping. Field Crops Res. 71,
123–137 (2001).
36.Agegnehu, G., Ghizaw, A. & Sinebo, W. Yield potential and land-use efficiency of wheat
and faba bean mixed intercropping. Agron. Sustain. Dev. 28, 257–263 (2008).
37.Sinclair, F. L. A general classification of agroforestry practice. Agrofor. Syst. 46, 161–180
(1999).
38.Burgess, A. J., Correa Cano, M. E. & Parkes, B. The deployment of intercropping and
agroforestry as adaptation to climate change. Crop Environ. 1, 145–160 (2022).
39.Paut, R., Garreau, L., Ollivier, G., Sabatier, R. & Tchamitchian, M. A global dataset of
experimental intercropping and agroforestry studies in horticulture. Sci. Data 11, 5
(2024).
40.Lamichhane, J. R. et al. Relay cropping for sustainable intensification of agriculture
across temperate regions: Crop management challenges and future research priorities.
Field Crops Res. 291, 108795 (2023).
41.Lin, L. & Chu, H. Quantifying Publication Bias in Meta-Analysis. Biometrics 74,
785–794 (2018).
42.Nachar, N. The Mann-Whitney U: A Test for Assessing Whether Two Independent
Samples Come from the Same Distribution. Tutor. Quant. Methods Psychol. 4, 13–20
(2008).
43.Marfo, P. & Okyere, G. A. The accuracy of effect-size estimates under normals and
contaminated normals in meta-analysis. Heliyon 5, e01838 (2019).
44.Ostertagová, E., Ostertag, O. & Kováč, J. Methodology and Application of the
Kruskal-Wallis Test. Appl. Mech. Mater. 611, 115–120 (2014).
45.Harrer, M., Cuijpers, P., Furukawa, T. A. & Ebert, D. D. Doing Meta-Analysis with R: A
Hands-On Guide. (Chapman and Hall/CRC, Boca Raton, 2021).
doi:10.1201/9781003107347.
46.Dinno, A. Nonparametric Pairwise Multiple Comparisons in Independent Groups using
Dunn’s Test. Stata J. Promot. Commun. Stat. Stata 15, 292–300 (2015).
47.Sedgwick, P. Multiple significance tests: the Bonferroni correction. BMJ 344, e509–e509
(2012).
48.Yeung, A. W. K., More, S., Wu, J. & Eickhoff, S. B. Reporting details of neuroimaging
studies on individual traits prediction: A literature survey. NeuroImage 256, 119275
(2022).
49.Potapov, P. et al. Global maps of cropland extent and change show accelerated cropland
expansion in the twenty-first century. Nat. Food 3, 19–28 (2021).
50.Aizen, M. A. et al. Global agricultural productivity is threatened by increasing pollinator
dependence without a parallel increase in crop diversification. Glob. Change Biol. 25,
3516–3527 (2019).
51.Klein, A.-M. et al. Importance of pollinators in changing landscapes for world crops.
Proc. R. Soc. B Biol. Sci. 274, 303–313 (2007).
52.Gupta, N., Pradhan, S., Jain, A. & Patel, N. Sustainable Agriculture in India 2021: What
We Know and How to Scale Up.
53.European Commission. The Common Agricultural Policy: 2023-27.
https://agriculture.ec.europa.eu/common-agricultural-policy/cap-overview/cap-2023-27_e
n.
54.CIMMYT. Intercropping. https://www.cimmyt.org/projects/intercropping/.
55.Lamichhane, J. R. et al. Relay cropping for sustainable intensification of agriculture
across temperate regions: Crop management challenges and future research priorities.
Field Crops Res. 291, 108795 (2023).
56.Jägermeyr, J. et al. Climate impacts on global agriculture emerge earlier in new
generation of climate and crop models. Nat. Food 2, 873–885 (2021).
57.Datta, A. Temporally staggered cropping co-benefits beneficial insects and pest control
globally. Zenodo https://doi.org/10.5281/zenodo.16897437 (2025).
Fig. 1| Spatial and temporal arrangements of crops in intercropping and geographical
distribution of sites. a, Spatial arrangements for pure stand crop A (
) and crop B (
) for
row, strip, and mixed design, and for tree species (
) in agroforestry. b, Temporal
arrangements pure stand crop A (
) and crop B (
) for replacement, additive, and relay
intercropping designs. c, Geographical distribution of sites and number of experiments
included in the database. The size of black circles is proportional to the number of
experiments recorded at each location. The Köppen-Geiger climate classification system was
applied to associate each field site with a grid cell measuring 0.50° latitude by 0.50°
longitude. This system categorizes climates into five primary zones, each designated by a
letter: A for tropical, B for arid, C for temperate, D for continental, and E for polar regions.
The bar plot shows the percentage of studies (%) in each climatic region.
Fig. 2| Patterns of crop interactions and insect order distributions in agroecosystems. a,
Crop interaction networks for Africa and North America, with nodes representing crops and
edges showing intercropping pairs. Node size and color correspond to the number of
intercropping connections, highlighting the degree (connection) of each crop within the
network. Bar plots show degree distribution. b, Spatial distribution of insect orders, with dot
size indicating the abundance of each order at each location. Donut charts show the
continent-wise composition of insect orders. c, Global percentage distribution of reported
insect orders. d, Distribution of insect orders across Köppen-Geiger climatic regions.
Fig. 3| Impacts of spatiotemporal intercropping arrangements on insect functional
groups. a, Spatial distribution of Management Efficiency Ratio (MER). b, Effect of
intercropping on beneficial insects and pests. c, Effect of intercropping on pollinators,
parasitoids, and predators. d, Effect of temporal intercropping arrangements on insect
functional groups. e, Effect of spatial intercropping arrangements on insect functional groups.
The total number of individual effect sizes is indicated by “n.” The diamond symbol
illustrates the average effect size.
Fig 4| Identification of region-specific spatiotemporal intercropping arrangements for
different functional group management: a, Mosaic map showing the most effective
spatiotemporal intercropping arrangements for enhancing beneficial insect abundance and
suppressing pest populations across Köppen-Geiger climate regions for each insect functional
group. b, Mosaic map indicating the best crop combinations for enhancing beneficial insect
abundance and suppressing pest populations for each Köppen-Geiger climate region and
insect functional group. White areas represent either no data or only one persistent
intercropping arrangement or crop combination.
|
Temporally staggered cropping co-benefits beneficial insects and pest control globally Adrija Datta1, Subramanian Sankaranarayanan2, Udit Bhatia1,3,4* 1 -382355 2 -382355 3 -382355 4 -382355 *Correspondence to: Abstract: Reconciling increasing food production with biodiversity conservation is critical yet challenging, particularly given global declines in beneficial insects driven by monoculture intensification. Intercropping-the simultaneous or sequential cultivation of multiple crops-has been proposed as a viable strategy to enhance beneficial insect services and suppress pests, yet global evidence regarding optimal spatiotemporal intercropping configurations remains fragmented. Here, we synthesize results from 7,584 field experiments spanning six continents and 22 Köppen climate regions, evaluating effects of spatial (row, strip, mixed, agroforestry) and temporal (additive, replacement, relay) intercropping configuations on beneficial insect (predators, parasitoids, pollinators) abundance and pest suppression using the Management Efficiency Ratio (MER; log ratio of abundance in intercropping versus monoculture). Relay intercropping, characterized by temporally staggered planting, emerged as the universally optimal temporal configuration, substantially increasing predator (MER = 0.473) and parasitoid populations (MER = 0.512) and effectively suppressing pests (MER = -0.611) globally. At regional scales, identical spatiotemporal configurations simultaneously optimized beneficial insect predator abundance and pest suppression in 57% of regions, while other regions required distinct, insect-specific approaches. Our findings highlight relay intercropping as a globally generalizable solution, but underscore regional variation that calls for targeted policies to simultaneously secure food production and biodiversity conservation. 1. Introduction The United Nations Sustainable Development Goals (SDGs) set ambitious targets to concurrently achieve agricultural productivity, biodiversity conservation, and ecosystem sustainability by 20301. However, prevailing agricultural practices significantly hinder progress towards these interconnected goals2,3. Agriculture alone drives nearly 90% of global deforestation4,5 and threatens over 17,000 species worldwide6. Intensive monoculture farming, heavily dependent on agrochemicals7, has particularly contributed to alarming declines in insect biodiversity8-10, with more than 40% of insect species experiencing global reductions11,12. These insects underpin essential ecosystem services, including pollination10,13 (critical for 75% of global food crops14), natural pest suppression15, and nutrient cycling16. Consequently, identifying scalable agricultural practices that enhance crop productivity by controlling pests without compromising beneficial insect diversity has become critical, but requires a detailed global understanding of ecological outcomes associated with such practices across diverse climatic and geographic contexts. Crop diversification practices, notably intercropping-the simultaneous or sequential cultivation of multiple crops within the same agricultural area17-have emerged as a viable strategy18,19 to enhance agroecosystem resilience and reduce reliance on chemical pesticides20,21. Previous syntheses highlight intercropping's capacity to enhance beneficial insect populations22,23, effectively suppress pests24, and improve soil health25 by increasing plant diversity and microclimatic complexity. However, global analyses26,27 typically aggregate results across diverse intercropping systems without explicitly distinguishing between spatial (Fig. 1a) configurations (row, strip, mixed, agroforestry) or temporal (Fig. 1b) designs (additive, replacement, relay), each of which uniquely influences different insect population dynamics and ecosystem functioning (for more details, see Methods). Furthermore, despite extensive local-scale evidence documenting28-31 positive ecological outcomes from specific intercropping configurations, these findings are inherently site-specific and cannot directly inform global or regional decision-making. Consequently, the absence of a unified global assessment examining how distinct spatiotemporal intercropping configurations systematically influence multiple insect functional groups across diverse Köppen climate regions limits the development of broadly applicable and effective agricultural sustainability strategies. To address this knowledge gap, we synthesized global data from 7,584 field experiments reported in 336 peer-reviewed studies spanning 22 Köppen climate regions32 across six continents (Fig. 1c). Using monoculture systems as controls, we quantitatively assessed how distinct spatiotemporal intercropping configurations influence key insect functional groups, including pollinators, predators, parasitoids, and pests. Specifically, our synthesis addressed three research questions: (1) Which spatial (row, strip, mixed, agroforestry) and temporal (additive, replacement, relay) intercropping configurations most effectively enhance beneficial insects and suppress pests? (2) Do ecological benefits of these configurations remain consistent across different insect functional groups and diverse climate regions? (3) How do specific crop combinations modulate different insect functional responses in varied climates? We addressed these questions using meta-analytic methods and non-parametric statistical tests to quantify insect responses across multiple geographic and climatic contexts. Addressing these questions at a global scale enables assessment of different spatiotemporal intercropping configurations as a scalable sustainability practice. Our results thus provide critical insights into how region-specific intercropping strategies can be systematically optimized, with direct implications for informing agricultural policy, guiding biodiversity conservation efforts, and achieving practical alignment between crop productivity and ecological resilience worldwide. 2. Data 2.1 Study Selection: A literature search was conducted on Google Scholar (last accessed in February 2025) using two search strings: i) ["Intercrop" AND "Insect abundance"], and ii) [(("insect" AND "pest") OR ("insect" AND "pollinator") OR ("insect" AND "predator") OR ("insect" AND "parasitoid")) AND "intercrop" AND "abundance"]. The first 1000 results from each search were screened. Each article was evaluated for relevance based on defined inclusion criteria. From the first search string, 298 articles met the criteria, while 38 articles qualified from the second. The review encompassed studies published between 1978 and January 30, 2025. A total of 336 articles were selected based on a two-stage screening process. The initial screening applied the following inclusion criteria: 1) the article was written in English; 2) duplicate studies published under different titles were excluded; 3) full-text access was available either through open access or institutional access; and 4) the study was peer-reviewed. This initial screening resulted in 819 articles. These were then evaluated against additional criteria to determine final eligibility: 5) opinion, information bulletins, reviews, and meta-analyses were excluded; 6) studies needed to compare monoculture (as a control) with intercropping (as a treatment); 7) articles had to assess insect taxonomic diversity metrics, specifically abundance; 8) at least one insect species had to be identified and included in the analysis; 9) studies had to follow the definition of intercropping as the simultaneous cultivation of crops in the same field; 10) only open-field experiments were considered i.e. greenhouse or laboratory studies were excluded; 11) studies involving the use of chemical or organic pesticides were not included; 12) research involving transgenic or non-edible crops was excluded; 13) modeling-based studies were not considered; 14) studies using multiple intercrop species were excluded to avoid confounding effects of interspecific competition on insects; 15) studies focusing on insects in stored grains were not included; 16) research involving border or cover crops instead of true intercrops was excluded; and 17) studies that presented data only in 3D graphs or in formats that could not be extracted were not considered. The search resulted in 336 articles deemed suitable for inclusion in our meta-analysis, from which a total of 7,584 effect sizes were extracted (Fig. S10). The large number of effect sizes is primarily due to many studies evaluating multiple insect taxonomic groups when comparing intercropping with monoculture systems. 2.2 General information on the articles: 'Title', 'Authors', 'DOI', and 'Year_of_publication' variables: These variables provide essential information for easily locating the articles included in our database. They comprise the full title of the article, the list of authors, the Digital Object Identifier (DOI), and the year in which the article was published. 2.3 Experimental site and climate conditions: 'Country', 'Study_site', 'Latitude', and 'Longitude' variables: These variables provide location-specific details of the experimental sites, including the country and specific site where the study was conducted, along with the latitude and longitude (in decimal degrees). Coordinates were obtained either directly from the articles or estimated using the site name via Google Maps (https://maps.google.com/). The 'Climate_zone' variable is classified based on the Köppen-Geiger climate classification system32. 2.4 Spatial intercropping configuration description: 'Spatial_intercropping': This variable describes the spatial arrangement used for sowing both species in the intercropping system33. (Fig. 1a). 1. Row intercropping: Where the two plant species are grown in distinct, alternating rows34. 2. Strip intercropping: Where multiple rows of one plant species are alternated with one or more rows of another species, forming strips in which each strip consists of more than a single row35. 3. Mixed intercropping: Where the component crops are sown at the same time, either within the same row or in a mixed pattern without any distinct rows or strips36. 4. Agroforestry: All the agroforestry systems included here were alley cropping systems37. Here in alley cropping system, rows of crops are planted between rows of trees or shrubs38. 2.5 Temporal intercropping configuration description: 'Temporal_intercropping': This variable describes the temporal arrangement used to intercrop both species. (Fig. 1b): 1. Additive design: In the standard additive design, intercropping systems are created by combining the plant densities of the respective pure stands. Consequently, the total density in the intercropping system is higher than in the pure stands, while the density of each component remains the same as in its pure stand39. 2. Replacement (or substitutive) design: In the standard replacement design, intercropping systems are established by substituting a specific number of plants from one component with an equal number of plants from the other component. As a result, the density of each component in the mixture is lower than in its pure stand, but the overall stand density remains the same as in each pure stand39. 3. Relay design: In a standard relay intercropping design, the second crop is planted into the standing first crop at a later growth stage, creating a temporal overlap between the two crops instead of simultaneous planting. Although both crops occupy the same field space for part of their growth periods, the total plant density changes over time, and the density of each crop during the overlap phase is typically lower than in their respective monocultures40. 2.6 Crop details and insect abundance variables: Since the intercropping experiments include two species, the demonstration is made for one species (called Crop_1) and is replicable for the second species (Crop_2). 'Crop_1_Common_Name' and 'Crop_1_Scientific_Name': These variables provide both the scientific and common names of each crop species. To ensure consistency and avoid confusion caused by multiple common names referring to the same species, scientific names were matched with their corresponding common names using the United States (USDA) Plants Database (http://plants.usda.gov/java/). 'Crop_1_family': This variable indicates the botanical family of each crop species. Family information was either obtained directly from the source article or updated using the World Crops database (https://world-crops.com/category/crops/). 'Crop_1_C': This variable indicates the crop's photosynthetic pathway, identifying whether it follows the C3 or C4 carbon fixation process. 'Crop_1_type': This variable defines the crop category, such as cereals, legumes, vegetables, oilseeds, spices, forage, flowers, fruits, or trees. Crop classifications were updated using the World Crops database (https://world-crops.com/category/crops/). Crops without a clearly defined use were categorized as 'others'. 'Insect_scientific_name' and 'Insect_common_name': These variables include both the scientific and common names of each insect species. While some articles report scientific names, others mention only common names. In cases where only common names are provided, we used the GBIF Python package, pygbif (https://techdocs.gbif.org/en/openapi/), to retrieve the corresponding scientific names. For articles that only specify the insect's order or family without identifying the species, both the common and scientific name fields are left blank. 'Insect_order' and 'Insect_family': These variables describe the order and family of each insect species, primarily extracted directly from the original articles. For articles that do not report order or family but include either the scientific or common name, we used the GBIF Python package, pygbif (https://techdocs.gbif.org/en/openapi/), to determine their taxonomic classification. 'Count_type': This variable indicates the method used to count insects in each study, with the count type extracted directly from the respective article. This variable also includes the unit in which insect abundance is reported. 'Insect_role' and 'Insect_role_type': The 'Insect_role' variable broadly categorizes each insect based on its impact on crops, indicating whether it is beneficial or a pest. The 'Insect_role_type' variable further classifies beneficial insects into specific functional groups, such as pollinators, parasitoids, or predators. 'Abundance_Crop_1' and 'Abundance_intercropped': These two variables indicate insect abundance in monocropping ('Abundance_Crop_1') and intercropping ('Abundance_intercropped') systems, respectively. The method used to quantify insect counts is specified in the 'Count_type' variable. 3. Methods 3.1 Data extraction and collection: Data were extracted from tables, figures, or textual descriptions within the articles. Values presented in graphs were manually digitized using the WebPlotDigitizer tool (https://automeris.io/WebPlotDigitizer/). All extracted data were compiled into an Excel spreadsheet, a format commonly supported by data analysis tools and well-suited for ensuring interpretability in scientific research. Additionally, information was documented on crop combinations, spatial arrangements, and the functional group classification of the insect species reported in each study. A single author carefully evaluated each publication to determine its suitability and the reliability of the data. In total, 107 publications (representing 32% of all entries) underwent two rounds of review, once in November and again in February, to minimize potential errors. Data accuracy was verified multiple times by revisiting the original source materials whenever necessary. 3.2 Calculation of Management Efficiency Ratio (MER): We calculated the Management Efficiency Ratio (MER) to compare insect abundance in the treatment (intercropping) versus the control (monoculture) systems (Equation 1). (1) MER = lnAbundance in treatment (intercrop) Abundance in control (monocrop) An MER greater than 0 indicates that intercropping has a positive effect on insect abundance relative to monoculture. In contrast, an MER less than 0 suggests a negative effect of intercropping compared to monoculture. 3.3 Publication bias analysis: Egger's regression test41 was conducted to detect potential funnel plot asymmetry as an indicator of publication bias. To further examine robustness against small-study effects, we applied a limit meta-analysis, estimating the effect size as sampling error approaches zero. 3.4 Statistical Analyses: Anticipating variation in effect sizes across studies, we used the Mann-Whitney U test42 to compare two major independent groups. Statistical significance was determined at a threshold of p < 0.05. To assess the magnitude of difference between groups, independent of sample size, we applied Cliff's Delta test43. A value of 0 indicates complete overlap between groups, while values approaching -1 or +1 reflect a strong effect. To further explore subgroup differences, we assessed statistical heterogeneity using the Kruskal-Wallis test44 and Higgins & Thompson's I2 statistic45. The I2 value quantifies the degree of heterogeneity: low (≤ 25%), moderate (25-75%), and substantial (≥ 75%). In addition to the overall group comparisons, we conducted Dunn's test46 with Bonferroni correction47 to perform pairwise comparisons among the different intercropping strategies. This post-hoc analysis allowed us to identify which specific groups differed significantly after detecting overall differences using the Kruskal-Wallis test. The Bonferroni correction is crucial in this context as it adjusts for multiple comparisons, thereby reducing the risk of Type I errors (false positives) when testing multiple group differences simultaneously. While Dunn's test identifies statistically significant differences between groups, it does not convey the magnitude of these differences. Therefore, we calculated Cliff's Delta for each pairwise comparison to provide a non-parametric estimate of effect size. To further investigate whether different spatiotemporal intercropping configurations significantly differ from one another, we applied the Kruskal-Wallis test along with Leave-One-Out Cross-Validation (LOO-CV)48. The LOO-CV approach was used to confirm the robustness of the Kruskal-Wallis results, ensuring that the findings were not disproportionately influenced by any single study or data point. 3.5 Identification of best strategies for each climate zone and crop combination: The intercropping pattern and its structural design were combined into a single descriptive label to uniquely represent each practice. Records with missing or incomplete information on either the intercropping strategy, its design, or its performance metric were removed to ensure data quality. For each climate zone and insect role category, the average performance of each intercropping practice was calculated. This allowed for summarizing how effective different strategies were across various climatic contexts. To identify the most suitable strategy for each insect group in each climate zone, the following criteria were applied: i) For insect pests, the strategy with the lowest average performance metric (lowest MER) was selected (indicating greater effectiveness in suppression). ii) For beneficial insects (such as pollinators and natural enemies), the strategy with the highest average value (highest MER) was considered optimal. The same strategy was followed to identify the most effective crop combinations (monocrop-intercrop combination) for each insect group and climatic region. We have also repeated the same methodology to identify the intercropping pattern design combination for each insect group on each continent. 4. Results 4.1 Global research highlights regional gaps in crop diversity and insect studies Geospatial analysis of 336 peer-reviewed studies published between 1978 and 2025 revealed marked regional biases, with most studies concentrated in tropical savanna (Aw, 24.5%), temperate oceanic (Cfb, 9.9%), and humid subtropical (Cwa, 9.3%) climate zones. Asia emerged as the primary research hub (39.2% of studies), followed by North America (21.1%) and Africa (13.7%). Country-level analyses further identified India (10.3%), Costa Rica (5.5%), and Iran (5.2%) as key study locations. Asian studies primarily originated from India, Iran, Indonesia, China, and Pakistan, whereas African research centered predominantly in Tanzania, Benin, Uganda, Egypt, and Nigeria (Fig. S1a, S1b). Intercontinental differences were pronounced in crop diversity. Asian studies reported the greatest diversity with 160 crop species, notably cabbage (28 intercrop combinations) and maize (22 combinations). African research employed 100 distinct crops (Fig. 2a), predominantly maize (37 combinations) and okra (19 combinations). Comparatively, developed regions displayed lower crop diversity: Europe had 46 documented crops (cabbage most common with 9 combinations), and North America reported 50 crops (maize with 7 combinations) (Fig. S2). Insect community composition exhibited distinct biogeographic patterns. Asia recorded the highest taxonomic diversity (16 orders), dominated by Hemiptera (26.5%) and Lepidoptera (24.6%) (Fig. 2b). African studies documented nine insect orders, predominantly Lepidoptera (35.6%). Coleoptera dominated insect communities in Europe (64.1%), North America (59.1%), and Oceania (64.7%), whereas Hymenoptera was the leading order in South America (45.6%). Globally, Lepidoptera was the most frequently studied insect order (25.3%), followed closely by Coleoptera (24.9%) and Hemiptera (23.8%) (Fig. 2c). Additional notable orders included Hymenoptera (9.9%), Diptera (4.0%), Neuroptera (3.2%), and Thysanoptera (3.0%). Among climate regions, tropical savanna (Aw) zones harbored the greatest insect diversity (25.4%), predominantly Lepidoptera (41.3%), followed by arid steppe (BSh, 12.2%), similarly dominated by Lepidoptera (35.3%) (Fig. 2d, Fig. S3, Table S1). 4.2 Global intercropping studies predominantly reflect pest suppression objectives We assessed insect responses to intercropping using the Management Efficiency Ratio (MER), calculated as the natural logarithm of insect abundance ratios in intercropped versus monoculture systems as control (see Methods for details). If MER is positive, it means insect abundance will increase in intercropping with respect to monoculture and vice versa. Globally, intercropping resulted in predominantly negative MER values (63.6% of cases), indicating generally lower insect abundance in intercropping relative to monocultures (Fig. 3a). The global mean MER was significantly negative (-0.164 ± 0.652; p < 0.0001), underscoring an overall suppressive effect of intercropping on insect populations. This globally observed suppression primarily reflects the predominant research emphasis on pest control, closely aligned with agricultural yield protection goals. Pest-focused studies were substantially more prevalent (n = 4,138) than studies assessing beneficial insects (pollinators and natural enemies, n = 2,540; Fig. 3b). Although intercropping is highly effective for pest management, the broader potential of intercropping to support beneficial insects remains likely underestimated due to these research biases. This global trend, however, varied markedly across continents (Fig. S4, Table S2). South America showed a marginal increase in insect abundance under intercropping (weighted mean MER = 0.078). In contrast, Africa exhibited the strongest insect population suppression (weighted mean MER = -0.303). While the global trend clearly aligns with pest suppression objectives, examining responses of different insect functional groups can further clarify whether beneficial insects exhibit systematically different responses under intercropping. 4.3 Intercropping selectively enhances beneficial insects and suppresses pests We quantified the differential effects of intercropping on beneficial insects versus pests by comparing their Management Efficiency Ratios (MER). Beneficial insects exhibited significantly higher MER values (mean = 0.292 ± 0.012 SE), indicating increased abundance under intercropping compared to monocultures. In contrast, pests showed significantly negative MER values (mean = -0.445 ± 0.008 SE), with significant heterogeneity across groups ( p < 0.001; Cliff's delta = 0.67; Fig. 3b). Further analysis of beneficial insects by functional group-pollinators, predators, and parasitoids-revealed that intercropping differentially benefited these groups. Predators (MER = 0.276 ± 0.015) and parasitoids (MER = 0.303 ± 0.025) responded positively and significantly more strongly than pollinators (MER = 0.076 ± 0.059), suggesting natural enemies derive the greatest advantage from intercropping (Fig. 3c). A Kruskal-Wallis test44 confirmed significant variation among these functional groups (p < 0.001; Higgins & Thompson's I2 = 99.85%; Fig. S5a). Post hoc Dunn's pairwise comparisons46 (Bonferroni-corrected47) indicated predators (Cliff's delta Δ = 0.203, p < 0.001) and parasitoids (Δ = 0.218, p < 0.001) benefited significantly more than pollinators. The difference between predators and parasitoids was statistically significant but negligible in magnitude (Δ = 0.006, p < 0.05). Pests showed consistent and substantial negative responses relative to all beneficial groups (e.g., Δ = -0.659 vs. predators, p < 0.001), underscoring the selective enhancement of natural enemies and effective pest suppression under intercropping. Given that beneficial insect and pest populations respond distinctly, identifying specific spatiotemporal intercropping designs responsible for these differential outcomes is essential for targeted agroecological interventions. 4.4 Effective insect management emerges from aligning spatial and temporal intercropping designs with ecological roles We analyzed global data to identify which temporal (additive, replacement, relay) and spatial (mixed, row, strip, agroforestry) intercropping configurations best enhance beneficial insect abundance and suppress pests. Among temporal designs, relay intercropping showed the strongest positive effect on parasitoids (mean MER = 0.473 ± 0.094 SE) and predators (mean MER = 0.512 ± 0.067 SE) and the greatest suppression of pests (mean MER = -0.611 ± 0.045 SE; Fig. 3d). Kruskal-Wallis analyses confirmed significant differences across temporal designs for predators (p < 0.001), parasitoids (p < 0.01), and pests (p < 0.001). Pairwise comparisons (Dunn's tests with Bonferroni correction, Fig. S5b) revealed relay designs significantly outperformed additive (p < 0.05) and replacement (p < 0.001) designs for both predators and parasitoids. These findings suggest relay intercropping provides beneficial insects with temporally continuous habitats and resources, while simultaneously disrupting pest colonization dynamics. Spatial intercropping patterns also significantly influenced insect functional groups. Row intercropping yielded the highest MER values for pollinators (mean MER = 0.284 ± 0.132 SE) and predators (mean MER = 0.382 ± 0.02 SE), potentially due to enhanced foraging efficiency and accessibility (Fig. 3e, Fig. S6). Parasitoids responded best to strip (mean MER = 0.374 ± 0.048 SE), likely reflecting improved habitat segregation and host availability. Mixed intercropping most effectively suppressed pests (mean MER = -0.533 ± 0.076 SE), suggesting that greater spatial heterogeneity disrupts pest colonization and reproduction. Finally, considering combined spatial and temporal dimensions, specific configurations emerged as optimal for different insect groups (Fig. S7, Fig. S8). Strip-replacement intercropping provided the greatest enhancement of parasitoid abundance (mean MER = 0.647 ± 0.151 SE), while row-additive intercropping best supported pollinators (mean MER = 0.284 ± 0.132 SE). Predators benefited most strongly from row-replacement intercropping (mean MER = 1.022 ± 0.135 SE, p < 0.001), and pest suppression was maximized in strip-relay systems (mean MER = -0.934 ± 0.046 SE, p < 0.001). Collectively, these results demonstrate that optimal insect management through intercropping depends critically on aligning spatial and temporal crop configurations with the specific ecological roles and habitat requirements of targeted insect groups. Finally, since ecological roles and habitat preferences underpin insect responses to intercropping, it remains crucial to determine whether optimal strategies generalize across climate zones or require region-specific tailoring. 4.5 Climate-specific intercropping and crop combinations simultaneously optimize beneficial insect abundance and pest suppression Given the broad climatic variability of global croplands, we conducted a region-by-region comparison through mosaic analysis (Fig. 4a) across major Köppen climate zones to identify optimal intercropping configurations for enhancing beneficial insects and suppressing pests. Pollinator responses were excluded due to insufficient data. In regions with adequate information, we determined the intercropping configurations that maximized beneficial insect abundance and minimized pest populations. Our analysis revealed strong consistency in optimal intercropping choices among insect groups within many climate regions. In approximately 57% of regions, predators and pests responded most favorably to the same intercropping configuration, highlighting substantial ecological compatibility (Fig. 4a). Notably, in nearly one-third (28.5%) of the climate regions studied, a single intercropping configuration simultaneously optimized abundance of pests, predators, and parasitoids, underscoring potential for integrated insect management. Among these broadly beneficial strategies, row-additive and strip-additive intercropping designs emerged most frequently. Crop combinations also strongly influenced insect responses. Cereal-legume intercrops were most consistently effective, emerging as the optimal combination in 18% of climate regions and frequently benefiting predators and parasitoids while reducing pests (Fig. 4b). Overall, 28% of regions favored one of six specific crop combinations-including cereals-legumes, spices-vegetables, cereals-cereals, flowers-fruits, cereals-oilseeds, and forage-vegetables-for simultaneously increasing predator abundance and suppressing pests. These intercrop combinations likely provide complementary ecological resources such as nectar, alternative prey, or structural habitats favorable to beneficial insects and may emit compounds that deter pests. However, no single crop combination universally promoted parasitoid and predator abundance while also suppressing pests, indicating specialized insect-crop interactions across different agricultural systems. 4.6 Publication bias report: Egger's test41 (Fig. S9) for funnel plot asymmetry revealed no significant publication bias (z = 1.4486, p = 0.1474). Additionally, the limit estimate, representing the effect size as sampling error approaches zero, was even more negative (b = -0.1916; 95% CI: -0.2845 to -0.0987), reinforcing the robustness of the observed effect. 5. Discussion: Agricultural landscapes worldwide face the dual challenge of increasing food production and conserving biodiversity, two goals traditionally viewed as competing rather than complementary6,49. Historically, research has predominantly emphasized pest suppression18 or yield enhancement15,16 in isolation, resulting in a fragmented understanding of the broader agroecological potential of intercropping. Our global synthesis of 7,584 field experiments spanning six continents and 22 Köppen climate regions offers empirical evidence that intercropping can simultaneously enhance beneficial insect populations and suppress pest abundance across diverse agroecosystems. Our results demonstrate that relay intercropping-characterized by temporally staggered planting-is the universally optimal temporal configuration, significantly increasing predator (mean MER = 0.512) and parasitoid (mean MER = 0.473) populations while substantially suppressing pest numbers (mean MER = -0.611). Mosaic analyses further revealed considerable regional consistency, with identical spatiotemporal configurations simultaneously optimizing natural enemy abundance and pest suppression in approximately 28.5% of climate regions. Nevertheless, notable deviations among insect functional groups and climate zones highlight important geographic variability, underscoring the necessity for region-specific intercropping strategies. Despite these robust global patterns, several critical knowledge gaps remain. Notably, our synthesis identified limited global-scale data on pollinator responses to intercropping, despite pollinators' critical roles in crop productivity50 and food security51. Furthermore, arid and boreal climates are significantly underrepresented in existing datasets, restricting comprehensive global generalizations. Importantly, the absence of integrated yield and profitability assessments alongside insect abundance metrics limits comprehensive evaluations of intercropping's multifunctional benefits. Addressing these limitations by explicitly focusing on pollinator populations, expanding research coverage in underrepresented climates, and incorporating economic and yield outcomes will strengthen future global syntheses and enhance practical agricultural relevance. Our findings have important implications for agricultural policy and management. While several countries have integrated intercropping and agroforestry into formal frameworks, such as India's integrated intercropping and agroforestry initiatives of Council on Energy, Environment and Water (CEEW)52 and the European Union's Common Agricultural Policy53 (2023-27), policy attention remains limited to select configurations. Building on this policy momentum, the recent International Maize and Wheat Improvement Center (CIMMYT) Intercropping project (2023-2028) focuses exclusively on row-additive intercropping in South Asia54, overlooking other potentially effective spatiotemporal strategies. Several countries have already integrated intercropping and agroforestry into formal policy frameworks. While our results highlight the ecological benefits of relay intercropping for enhancing beneficial insects and suppressing pests, it remains underrepresented in current implementation efforts. Relay intercropping also aligns with resource constraints common to smallholder farming and substantially reduces insecticide dependency55, making it a strong candidate for inclusion in extension programs, incentive schemes, and training. Strengthening policy support for relay intercropping can help achieve sustainability goals, specifically the United Nations Sustainable Development Goals on food security (SDG 2: Zero Hunger) and terrestrial biodiversity conservation (SDG 15: Life on Land). Considering projected climate impacts on global agriculture due to anthropogenic climate change56, future research must evaluate intercropping strategies under evolving pest dynamics, shifting beneficial insect distributions, and increased climatic extremes. Predictive ecological modeling that incorporates projected climate scenarios can inform resilient cropping recommendations, ensuring intercropping systems maintain effectiveness under altered ecological conditions. Additionally, translating our global-scale insights into spatially explicit, region-specific recommendations will facilitate practical application, allowing policymakers and practitioners to target agricultural diversification strategies effectively across varied climates and agroecological contexts. Ultimately, our global synthesis provides a robust ecological basis for multifunctional agriculture, highlighting strategic relay intercropping designs that simultaneously enhance beneficial insect populations and effectively suppress pests. These clearly quantified and globally consistent results represent actionable pathways to reconcile intensified food production with biodiversity conservation goals, traditionally viewed as conflicting objectives. By systematically addressing identified knowledge gaps, particularly around pollinator dynamics and yield assessments, future research can further refine and expand the applicability of intercropping strategies. Such targeted agricultural diversification can support both ecological resilience and agricultural productivity, directly contributing to global sustainability priorities and addressing pressing environmental and food security challenges worldwide. Author contributions: Conceptualization: AD. Data collection: AD, Analysis: AD. Visualization: AD, SS, UB. Writing - original draft: AD, SS, UB. Writing - review and editing: AD, SS, UB. Competing interests: The authors declare that they have no competing interests. Acknowledgements: This research was primarily funded by IIT Gandhinagar. The authors also thank the members of the Machine Intelligence and Resilience Laboratory at IIT Gandhinagar for their valuable discussions and constructive feedback on this manuscript. Data and materials availability: All data supporting the findings of this study are included in the main text and/or Supplementary Materials. Additional data are available at the following repository: https://doi.org/10.5281/zenodo.16897437 57. References: 1. United Nations. Transforming Our World by 2030: A New Agenda for Global Action Zero. 2. The Sustainable Development Goals Report Special Edition. (2023). 3. Transforming Food and Land Systems to Achieve the SDGs. (2024). 4. Pendrill, F. et al. Disentangling the numbers behind agriculture-driven tropical deforestation. Science 377, eabm9267 (2022). 5. Korotkov, A. V., Peck, T. & Köhl, M. RESOURCE ASSESSMENT | Regional and Global Forest Resource Assessments. in Encyclopedia of Forest Sciences 973-982 (Elsevier, 2004). 6. Williams, D. R. et al. Proactive conservation to prevent habitat losses to agricultural expansion. Nat. Sustain. 4, 314-322 (2020). 7. Liu, X. et al. Global cropland expansion enhances cropping potential and reduces its inequality among countries. Earth Syst. Dyn. 15, 817-828 (2024). 8. Tarigan, S. D. Biodiversity-based ecosystem services for the management of monoculture plantation landscape using a transdisciplinary approach: a review. IOP Conf. Ser. Earth Environ. Sci. 325, 012013 (2019). 9. Lemaire, G., Faccio CarvalhoPaulo César De Faccio Carvalho, P. C. D., Kronberg, S. L. & Recous, S. Agroecosystem Diversity: Reconciling Contemporary Agriculture and Environmental Quality. (Academic Press, Elsevier, 2018). 10.Goulson, D., Nicholls, E., Botías, C. & Rotheray, E. L. Bee declines driven by combined stress from parasites, pesticides, and lack of flowers. Science 347, 1255957 (2015). 11. Sánchez-Bayo, F. & Wyckhuys, K. A. G. Worldwide decline of the entomofauna: A review of its drivers. Biol. Conserv. 232, 8-27 (2019). 12.Wagner, D. L., Grames, E. M., Forister, M. L., Berenbaum, M. R. & Stopak, D. Insect decline in the Anthropocene: Death by a thousand cuts. Proc. Natl. Acad. Sci. 118, e2023989118 (2021). 13.Dicks, L. V. et al. A global-scale expert assessment of drivers and risks associated with pollinator decline. Nat. Ecol. Evol. 5, 1453-1461 (2021). 14.Potts, S. G. et al. THE ASSESSMENT REPORT ON POLLINATORS, POLLINATION AND FOOD PRODUCTION OF THE INTERGOVERNMENTAL SCIENCE-POLICY PLATFORM ON BIODIVERSITY AND ECOSYSTEM SERVICES. 36 https://doi.org/10.5281/zenodo.3402856 (2016). 15.Bommarco, R., Miranda, F., Bylund, H. & Björkman, C. Insecticides Suppress Natural Enemies and Increase Pest Damage in Cabbage. J. Econ. Entomol. 104, 782-791 (2011). 16.Raven, P. H. & Wagner, D. L. Agricultural intensification and climate change are rapidly decreasing insect biodiversity. Proc. Natl. Acad. Sci. 118, e2002548117 (2021). 17.Brooker, R. W. et al. Improving intercropping: a synthesis of research in agronomy, plant physiology and ecology. New Phytol. 206, 107-117 (2015). 18.Vandermeer, J. H. The Ecology of Intercropping. (Cambridge University Press, 1989). 19.Andrews, D. J. & Kassam, A. H. The Importance of Multiple Cropping in Increasing World Food Supplies. in ASA Special Publications (eds Papendick, R. I., Sanchez, P. A. & Triplett, G. B.) 1-10 (American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Madison, WI, USA, 2015). 20.Li, L., Zhang, L. & Zhang, F. Crop Mixtures and the Mechanisms of Overyielding. in Encyclopedia of Biodiversity 382-395 (Elsevier, 2013). 21.Li, L., Tilman, D., Lambers, H. & Zhang, F. Plant diversity and overyielding: insights from belowground facilitation of intercropping in agriculture. New Phytol. 203, 63-69 (2014). 22.Pierre, J. F. et al. A review of the impact of maize-legume intercrops on the diversity and abundance of entomophagous and phytophagous insects. PeerJ 11, e15640 (2023). 23.Nicholls, C. I. & Altieri, M. A. Plant biodiversity enhances bees and other insect pollinators in agroecosystems. A review. Agron. Sustain. Dev. 33, 257-274 (2013). 24.Mir, M. S. et al. Role of Intercropping in Sustainable Insect-Pest Management: A Review. Int. J. Environ. Clim. Change 3390-3404 (2022) 25.Zhang, F. et al. Rhizosphere Processes and Management for Improving Nutrient Use Efficiency and Crop Productivity. in Advances in Agronomy vol. 107 1-32 (Elsevier, 2010). 26.Letourneau, D. K. et al. Does plant diversity benefit agroecosystems? A synthetic review. Ecol. Appl. 21, 9-21 (2011). 27.Rakotomalala, A. A. N. A., Ficiciyan, A. M. & Tscharntke, T. Intercropping enhances beneficial arthropods and controls pests: A systematic review and meta-analysis. Agric. Ecosyst. Environ. 356, 108617 (2023). 28.Hithesh, G. R., Suroshe, S. S., Keerthi, M. C. & Fand, B. B. Companion planting of flowering annuals enhances the colonization of natural enemies in white cabbage fields. J. Plant Dis. Prot. 132, 49 (2025). 29.Singh, A., Weisser, W. W., Hanna, R., Houmgny, R. & Zytynska, S. E. Reduce pests, enhance production: benefits of intercropping at high densities for okra farmers in Cameroon. Pest Manag. Sci. 73, 2017-2027 (2017). 30.Zheng, Y. et al. Potato/Maize intercropping reduces infestation of potato tuber moth, Phthorimaea operculella (Zeller) by the enhancement of natural enemies. J. Integr. Agric. 19, 394-405 (2020). 31.Soujanya, P. L. et al. Intercropping in maize reduces fall armyworm Spodoptera frugiperda (J. E. Smith) infestation, supports natural enemies, and enhances yield. Agric. Ecosyst. Environ. 373, 109130 (2024). 32.Kottek, M., Grieser, J., Beck, C., Rudolf, B. & Rubel, F. World Map of the Köppen-Geiger climate classification updated. Meteorol. Z. 15, 259-263 (2006). 33.Kumari, A. & Choudhary, M. Annual Intercrops: An Alternative Pathway for Sustainable Horticultural Production. Ecol. Environ. Conserv. 28, S244-S251 (2022). 34.Chen, C., Westcott, M., Neill, K., Wichman, D. & Knox, M. Row Configuration and Nitrogen Application for Barley-Pea Intercropping in Montana. Agron. J. 96, 1730-1738 (2004). 35.Li, L. et al. Wheat/maize or wheat/soybean strip intercropping. Field Crops Res. 71, 123-137 (2001). 36.Agegnehu, G., Ghizaw, A. & Sinebo, W. Yield potential and land-use efficiency of wheat and faba bean mixed intercropping. Agron. Sustain. Dev. 28, 257-263 (2008). 37.Sinclair, F. L. A general classification of agroforestry practice. Agrofor. Syst. 46, 161-180 (1999). 38.Burgess, A. J., Correa Cano, M. E. & Parkes, B. The deployment of intercropping and agroforestry as adaptation to climate change. Crop Environ. 1, 145-160 (2022). 39.Paut, R., Garreau, L., Ollivier, G., Sabatier, R. & Tchamitchian, M. A global dataset of experimental intercropping and agroforestry studies in horticulture. Sci. Data 11, 5 (2024). 40.Lamichhane, J. R. et al. Relay cropping for sustainable intensification of agriculture across temperate regions: Crop management challenges and future research priorities. Field Crops Res. 291, 108795 (2023). 41.Lin, L. & Chu, H. Quantifying Publication Bias in Meta-Analysis. Biometrics 74, 785-794 (2018). 42.Nachar, N. The Mann-Whitney U: A Test for Assessing Whether Two Independent Samples Come from the Same Distribution. Tutor. Quant. Methods Psychol. 4, 13-20 (2008). 43.Marfo, P. & Okyere, G. A. The accuracy of effect-size estimates under normals and contaminated normals in meta-analysis. Heliyon 5, e01838 (2019). 44.Ostertagová, E., Ostertag, O. & Kováč, J. Methodology and Application of the Kruskal-Wallis Test. Appl. Mech. Mater. 611, 115-120 (2014). 45.Harrer, M., Cuijpers, P., Furukawa, T. A. & Ebert, D. D. Doing Meta-Analysis with R: A Hands-On Guide. (Chapman and Hall/CRC, Boca Raton, 2021). 46.Dinno, A. Nonparametric Pairwise Multiple Comparisons in Independent Groups using Dunn's Test. Stata J. Promot. Commun. Stat. Stata 15, 292-300 (2015). 47.Sedgwick, P. Multiple significance tests: the Bonferroni correction. BMJ 344, e509-e509 (2012). 48.Yeung, A. W. K., More, S., Wu, J. & Eickhoff, S. B. Reporting details of neuroimaging studies on individual traits prediction: A literature survey. NeuroImage 256, 119275 (2022). 49.Potapov, P. et al. Global maps of cropland extent and change show accelerated cropland expansion in the twenty-first century. Nat. Food 3, 19-28 (2021). 50.Aizen, M. A. et al. Global agricultural productivity is threatened by increasing pollinator dependence without a parallel increase in crop diversification. Glob. Change Biol. 25, 3516-3527 (2019). 51.Klein, A.-M. et al. Importance of pollinators in changing landscapes for world crops. Proc. R. Soc. B Biol. Sci. 274, 303-313 (2007). 52.Gupta, N., Pradhan, S., Jain, A. & Patel, N. Sustainable Agriculture in India 2021: What We Know and How to Scale Up. 53.European Commission. The Common Agricultural Policy: 2023-27. https://agriculture.ec.europa.eu/common-agricultural-policy/cap-overview/cap-2023-27_e n. 54.CIMMYT. Intercropping. https://www.cimmyt.org/projects/intercropping/. 55.Lamichhane, J. R. et al. Relay cropping for sustainable intensification of agriculture across temperate regions: Crop management challenges and future research priorities. Field Crops Res. 291, 108795 (2023). 56.Jägermeyr, J. et al. Climate impacts on global agriculture emerge earlier in new generation of climate and crop models. Nat. Food 2, 873-885 (2021). 57.Datta, A. Temporally staggered cropping co-benefits beneficial insects and pest control globally. Zenodo https://doi.org/10.5281/zenodo.16897437 (2025). Fig. 1| Spatial and temporal arrangements of crops in intercropping and geographical distribution of sites. a, Spatial arrangements for pure stand crop A ( ) and crop B ( ) for row, strip, and mixed design, and for tree species ( ) in agroforestry. b, Temporal arrangements pure stand crop A ( ) and crop B ( ) for replacement, additive, and relay intercropping designs. c, Geographical distribution of sites and number of experiments included in the database. The size of black circles is proportional to the number of experiments recorded at each location. The Köppen-Geiger climate classification system was applied to associate each field site with a grid cell measuring 0.50° latitude by 0.50° longitude. This system categorizes climates into five primary zones, each designated by a letter: A for tropical, B for arid, C for temperate, D for continental, and E for polar regions. The bar plot shows the percentage of studies (%) in each climatic region. Fig. 2| Patterns of crop interactions and insect order distributions in agroecosystems. a, Crop interaction networks for Africa and North America, with nodes representing crops and edges showing intercropping pairs. Node size and color correspond to the number of intercropping connections, highlighting the degree (connection) of each crop within the network. Bar plots show degree distribution. b, Spatial distribution of insect orders, with dot size indicating the abundance of each order at each location. Donut charts show the continent-wise composition of insect orders. c, Global percentage distribution of reported insect orders. d, Distribution of insect orders across Köppen-Geiger climatic regions. Fig. 3| Impacts of spatiotemporal intercropping arrangements on insect functional groups. a, Spatial distribution of Management Efficiency Ratio (MER). b, Effect of intercropping on beneficial insects and pests. c, Effect of intercropping on pollinators, parasitoids, and predators. d, Effect of temporal intercropping arrangements on insect functional groups. e, Effect of spatial intercropping arrangements on insect functional groups. The total number of individual effect sizes is indicated by "n." The diamond symbol illustrates the average effect size. Fig 4| Identification of region-specific spatiotemporal intercropping arrangements for different functional group management: a, Mosaic map showing the most effective spatiotemporal intercropping arrangements for enhancing beneficial insect abundance and suppressing pest populations across Köppen-Geiger climate regions for each insect functional group. b, Mosaic map indicating the best crop combinations for enhancing beneficial insect abundance and suppressing pest populations for each Köppen-Geiger climate region and insect functional group. White areas represent either no data or only one persistent intercropping arrangement or crop combination.
|
2509.16282
|
SciPost Physics Proceedings
Submission
Automatizing the search for mass resonances using BumpNet
Jean-François Arguin1, Georges Azuelos1,2, Émile Baril1, Ilan Bessudo3, Fannie Bilodeau1,
Maryna Borysova3, Shikma Bressler3, Samuel Calvet4, Julien Donini4, Etienne Dreyer3,
Michael Kwok Lam Chu3, Eva Mayer4, Ethan Meszaros1⋆, Nilotpal Kakati3, Bruna Pascual
Dias4, Joséphine Potdevin1,5, Amit Shkuri3 and Muhammad Usman1
1 Group of Particle Physics, Université de Montréal, Montréal QC; Canada
2 TRIUMF, Vancouver BC; Canada
3 Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot; Israel
4 LPCA, Université Clermont Auvergne, CNRS/IN2P3, Clermont-Ferrand; France
5 Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne; Switzerland
⋆ethan.james.meszaros@cern.ch
The 2nd European AI for Fundamental
Physics Conference (EuCAIFCon2025)
Cagliari, Sardinia, 16-20 June 2025
Abstract
Physics Beyond the Standard Model (BSM) has yet to be observed at the Large Hadron
Collider (LHC), motivating the development of model-agnostic, machine learning-based
strategies to probe more regions of the phase space. As many final states have not yet
been examined for mass resonances, an accelerated approach to bump-hunting is de-
sirable. BumpNet is a neural network trained to map smoothly falling invariant-mass
histogram data to statistical significance values. It provides a unique, automatized ap-
proach to mass resonance searches with the capacity to scan hundreds of final states
reliably and efficiently.
Copyright attribution to authors.
This work is a submission to SciPost Phys. Proc.
License information to appear upon publication.
Publication information to appear upon publication.
Received Date
Accepted Date
Published Date
1
Introduction
Though the Standard Model (SM) is known to be incomplete [1], signatures predicted by Be-
yond the Standard Model (BSM) extensions have yet to be identified. Over the last decade,
many model-agnostic, machine learning-fueled techniques have been developed for new physics
discovery with the intent of broadening signal sensitivity and accelerating analyses. However,
many of these methods require some form of traditional background estimation and are limited
by training sample sizes, low signal fractions in data, and validation issues [2].
Historically, “bump hunting” has been effective at finding new particles (see Refs. [3–5])
due to the model-agnostic nature of mass resonances. Given that many observable final states
at the LHC have yet to be examined [6], a broad search for resonances is well motivated.
BumpNet [7] is a fully supervised technique for conducting resonance searches efficiently
across hundreds of final states.
It utilizes a well-established property of the SM, that of
1
arXiv:2509.16282v1 [hep-ph] 18 Sep 2025
SciPost Physics Proceedings
Submission
Figure 1: BumpNet’s architecture, receiving histogram entries as inputs and out-
putting the predicted local significance in each bin.
smoothly falling backgrounds, to map invariant-mass distributions to local statistical signif-
icance values. This frees BumpNet from the need for prior background estimation and signal
assumptions. The following sections detail its architecture, training procedure, and perfor-
mance in a proof-of-concept study.
2
Methodology
2.1
Architecture
BumpNet’s architecture is convolution-based, as seen in Figure 1. It is designed to receive
as input the number of events in each bin of smoothly falling invariant-mass histograms. It
receives no information about the invariant-mass itself, making BumpNet resistant to potential
invariant-mass correlations. The input is fed into four convolutional stacks, each with a unique
kernel size to capture different sized patterns in the input. This convolved representation is fed,
bin-by-bin, into a multilayer perceptron (MLP) which outputs a single value: the prediction
of the local significance in a given bin. This is repeated to obtain the significance prediction
across the entire histogram range.
2.2
Training
The training dataset is created using an assortment of smoothly falling backgrounds modeled
by (1) a group of eleven analytical functions, with parameters constrained to satisfy a certain
dynamic range, and (2) the "smoothing" of monte carlo (MC) distributions. The latter method
is introduced to account for the fact that analytical functions do not necessarily match the
shapes seen in real data. These MC distributions used for smoothing are produced through a
histogram production framework, described in Section 2.2.1, then subsequently smoothed by
parametric and non-parametric fitting methods to extract shapes that more closely resemble
those expected to be seen in real data.
Once a sample of smoothly falling curves has been created, a 1 bin-width wide gaussian
signal is injected at random positions and with random signal strengths. These final curves are
Poisson-fluctuated to resemble realistic distributions, and the local significance of deviations
from the known background (calculated using formulae from Ref. [8]) serve as the network’s
labels.
2
SciPost Physics Proceedings
Submission
2.2.1
Histogram Production
To demonstrate BumpNet’s methodology, distributions for smoothing are generated using the
Dark Machines (DM) MC dataset [9]. This dataset emulates 10 fb−1 of SM pp collisions at
ps = 13 TeV, including the 26 highest cross-section processes expected at the LHC. Exclusive
final states are constructed from all combinations of the available objects, including electrons,
muons, photons, jets, Emiss
T
and some specially defined objects, such as a boosted hadronic
W/Z. One invariant-mass histogram is then created for every combination of at least two
objects within a given final state, yielding 39,768 total invariant-mass histograms.
Bin sizes vary according to an approximate mass resolution in order to make resonances
appear similar to BumpNet in units of bins. A similar histogram production framework will be
used to generate histograms from real data in future analyses.
3
Performance
For the DM proof-of-concept [7], BumpNet is trained on approximately 3 million samples, one
third of which are background shapes obtained from analytical functions and the remaining
from smoothed MC histograms. To examine performance, BumpNet is tested on an application
set generated in the same manner as its training dataset and on various BSM scenarios.
0.0
2.5
5.0
7.5
10.0
12.5
ZLR
max
4
2
0
2
4
Zmax
=
0.11
= 0.75
0
500
1000
1500
2000
2500
(a) ∆Zmax v. Z LR
max
0.0
0.2
0.4
0.6
0.8
ZLR
max bin / number of bins
4
2
0
2
4
Zmax
=
0.14
= 0.79
0
500
1000
1500
2000
2500
(b) ∆Zmax v. relative position of Z LR
max
(includes all bins)
0.2
0.4
0.6
0.8
ZLR
max bin / number of bins
4
2
0
2
4
Zmax
=
0.1
= 0.68
0
500
1000
1500
2000
2500
(c) ∆Zmax v.
relative position of Z LR
max
(first 10% of bins excluded)
Figure 2: BumpNet’s agreement with Z LR
max measured as a function of signal strength
(2a) and relative position (2b, 2c).
3.1
Injected Gaussian Signals
A total of 500,000 function- and 1,000,000 DM-based histograms are used for the application
set. Performance is assessed by calculating ∆Zmax = ZBumpNet
max
−Z LR
max, the difference between
BumpNet’s prediction and the true significance, respectively. Figure 2 shows that, as a func-
tion of various signal strengths and positions, ∆Zmax is unbiased with relatively small spread.
Excluding the predictions in the first 10% of bins reduces the spread further (Figure 2c), em-
phasizing the ambiguity in the inferred background near the beginning of the histograms.
3.2
Realistic Signals and the Look Elsewhere Effect
To test sensitivity to realistic physics scenarios, BumpNet is applied on previously analyzed
ATLAS data (see Figure 3a and Section 4.1 of Ref. [7]) and BSM samples injected on top of SM
background. These BSM samples include both locally generated events and those provided
3
SciPost Physics Proceedings
Submission
in the DM dataset. The network performs well across multiple scenarios, including the pair
production of scalar leptoquarks with a mass of 600 GeV depicted in Figure 3b.
Despite BumpNet’s stellar performance, the natural problem of false positive signals and
the look elsewhere effect (LEE) arises swiftly when scanning thousands of histograms for
bumps. To mitigate such false positives, an algorithm exploiting physical correlations between
object combinations (described in Section 4.2 of Ref. [7]) has been developed to much suc-
cess. A framework has also been deployed for calculating the global significance of BumpNet’s
predictions, which will strengthen the results of future analyses by quantifying the extent of
the LEE.
500
1000
1500
2000
2500
3000
3500
Events/2GeV
data
Sig+Bkg fit
Bkg (4th order polynomial)
200
100
0
100
200
Events - Bkg
100
110
120
130
140
150
GeV
2
0
2
4
Significance
Zpred
ZLR
(a) Higgs discovery data
(b) Two LQ →be
Figure 3: Examples of BumpNet’s performance over data-like signals. In 3a, the max-
imum prediction (Z pred
max ) of 4.5σ agrees with the likelihood-ratio calculation (Z LR
max)
of 4.2σ within BumpNet’s uncertainties. In 3b, the prediction peaks where the most
BSM events are located, demonstrating BumpNet’s sensitivity to general signatures.
4
Conclusion
The BumpNet methodology presents a promising approach to automatizing the mass reso-
nance search, enabling an efficient scan of hundreds of final states to indicate areas of in-
terest. The performance of our proof-of-concept model is unbiased with low variance and
demonstrates sensitivity to a wide variety of possible physics scenarios. Multiple approaches
have been developed to mitigate the inevitable look elsewhere effect, highlighting BumpNet’s
potential to reliably scan more regions of the phase space than any prior analysis.
Acknowledgements
Funding information
We would like to acknowledge the support provided by the Natural
Sciences and Engineering Research Council of Canada (NSERC), the Institut de valorisation
des données (IVADO), the Canada First Research Excellence Fund, and the Israeli Science
Foundation (ISF, Grant No. 2382/24). We are also grateful to the Krenter-Perinot Center for
High-Energy Particle Physics, the Shimon and Golde Picker-Weizmann Annual Grant, and the
Sir Charles Clore Prize for their generous support. We further wish to thank Martin Kushner
Schnur for his significant and invaluable contribution to this work.
4
SciPost Physics Proceedings
Submission
References
[1] S. Weinberg, Essay: Half a Century of the Standard Model, Phys. Rev. Lett. 121(22), 220001
(2018), doi:10.1103/PhysRevLett.121.220001.
[2] V. Belis, P. Odagiu and T. K. Aarrestad, Machine learning for anomaly detection in particle
physics, Rev. Phys. 12, 100091 (2024), doi:10.1016/j.revip.2024.100091.
[3] S. W. Herb, D. C. Hom, L. M. Lederman, J. C. Sens, H. D. Snyder, J. K. Yoh, J. A. Appel,
B. C. Brown, C. N. Brown, W. R. Innes, K. Ueno, T. Yamanouchi et al., Observation of a
Dimuon Resonance at 9.5 GeV in 400-GeV Proton-Nucleus Collisions, Phys. Rev. Lett. 39(5),
252 (1977), doi:10.1103/PhysRevLett.39.252.
[4] J. J. Aubert, U. Becker, P. J. Biggs, J. Burger, M. Chen, G. Everhart, P. Goldhagen, J. Leong,
T. McCorriston, T. G. Rhoades, M. Rohde, S. C. C. Ting et al., Experimental Observation of a
Heavy Particle J, Phys. Rev. Lett. 33(23), 1404 (1974), doi:10.1103/PhysRevLett.33.1404.
[5] G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. Abdelalim, O. Abdinov,
R. Aben, B. Abi, M. Abolins, O. AbouZeid, H. Abramowicz et al., Observation of a new
particle in the search for the Standard Model Higgs boson with the ATLAS detector at the
LHC, Phys. Lett. B 716(1), 1 (2012), doi:10.1016/j.physletb.2012.08.020.
[6] J. H. Kim, K. Kong, B. Nachman and D. Whiteson, The motivation and status of two-body
resonance decays after the LHC Run 2 and beyond, J. High Energ. Phys. 2020(4), 30 (2020),
doi:10.1007/JHEP04(2020)030.
[7] J.-F. Arguin, G. Azuelos, E. Baril, I. Bessudo, F. Bilodeau, M. Borysova, S. Bressler,
S. Calvet, J. Donini, E. Dreyer, M. K. L. Chu, E. Mayer et al.,
Automatizing the
search for mass resonances using BumpNet, J. High Energ. Phys. 2025(2), 122 (2025),
doi:10.1007/JHEP02(2025)122.
[8] G. Cowan, K. Cranmer, E. Gross and O. Vitells, Asymptotic formulae for likelihood-based
tests of new physics, Eur. Phys. J. C 71(2), 1554 (2011), doi:10.1140/epjc/s10052-011-
1554-0.
[9] T. Aarrestad, M. Van Beekveld, M. Bona, A. Boveia, S. Caron, J. Davies, A. De Simone,
C. Doglioni, J. Duarte, A. Farbin, H. Gupta, L. Hendriks et al., The Dark Machines Anomaly
Score Challenge: Benchmark Data and Model Independent Event Classification for the Large
Hadron Collider, SciPost Phys. 12(1), 043 (2022), doi:10.21468/SciPostPhys.12.1.043.
5
|
SciPost Physics Proceedings Submission Automatizing the search for mass resonances using BumpNet Jean-François Arguin1, Georges Azuelos1,2, Émile Baril1, Ilan Bessudo3, Fannie Bilodeau1, Maryna Borysova3, Shikma Bressler3, Samuel Calvet4, Julien Donini4, Etienne Dreyer3, Michael Kwok Lam Chu3, Eva Mayer4, Ethan Meszaros1⋆, Nilotpal Kakati3, Bruna Pascual Dias4, Joséphine Potdevin1,5, Amit Shkuri3 and Muhammad Usman1 1 Group of Particle Physics, Université de Montréal, Montréal QC; Canada 2 TRIUMF, Vancouver BC; Canada 3 ; Israel 4 LPCA, Université Clermont Auvergne, CNRS/IN2P3, Clermont-Ferrand; France 5 édérale de Lausanne (EPFL), Lausanne; Switzerland ⋆ The 2nd European AI for Fundamental Physics Conference (EuCAIFCon2025) Cagliari, Sardinia, 16-20 June 2025 Abstract Physics Beyond the Standard Model (BSM) has yet to be observed at the Large Hadron Collider (LHC), motivating the development of model-agnostic, machine learning-based strategies to probe more regions of the phase space. As many final states have not yet been examined for mass resonances, an accelerated approach to bump-hunting is desirable. BumpNet is a neural network trained to map smoothly falling invariant-mass histogram data to statistical significance values. It provides a unique, automatized approach to mass resonance searches with the capacity to scan hundreds of final states reliably and efficiently. Copyright attribution to authors. This work is a submission to SciPost Phys. Proc. License information to appear upon publication. Publication information to appear upon publication. Received Date Accepted Date Published Date 1 Introduction Though the Standard Model (SM) is known to be incomplete [1], signatures predicted by Beyond the Standard Model (BSM) extensions have yet to be identified. Over the last decade, many model-agnostic, machine learning-fueled techniques have been developed for new physics discovery with the intent of broadening signal sensitivity and accelerating analyses. However, many of these methods require some form of traditional background estimation and are limited by training sample sizes, low signal fractions in data, and validation issues [2]. Historically, "bump hunting" has been effective at finding new particles (see Refs. [3-5]) due to the model-agnostic nature of mass resonances. Given that many observable final states at the LHC have yet to be examined [6], a broad search for resonances is well motivated. BumpNet [7] is a fully supervised technique for conducting resonance searches efficiently across hundreds of final states. It utilizes a well-established property of the SM, that of 1 18 Sep 2025 SciPost Physics Proceedings Submission Figure 1: BumpNet's architecture, receiving histogram entries as inputs and outputting the predicted local significance in each bin. smoothly falling backgrounds, to map invariant-mass distributions to local statistical significance values. This frees BumpNet from the need for prior background estimation and signal assumptions. The following sections detail its architecture, training procedure, and performance in a proof-of-concept study. 2 Methodology 2.1 Architecture BumpNet's architecture is convolution-based, as seen in Figure 1. It is designed to receive as input the number of events in each bin of smoothly falling invariant-mass histograms. It receives no information about the invariant-mass itself, making BumpNet resistant to potential invariant-mass correlations. The input is fed into four convolutional stacks, each with a unique kernel size to capture different sized patterns in the input. This convolved representation is fed, bin-by-bin, into a multilayer perceptron (MLP) which outputs a single value: the prediction of the local significance in a given bin. This is repeated to obtain the significance prediction across the entire histogram range. 2.2 Training The training dataset is created using an assortment of smoothly falling backgrounds modeled by (1) a group of eleven analytical functions, with parameters constrained to satisfy a certain dynamic range, and (2) the "smoothing" of monte carlo (MC) distributions. The latter method is introduced to account for the fact that analytical functions do not necessarily match the shapes seen in real data. These MC distributions used for smoothing are produced through a histogram production framework, described in Section 2.2.1, then subsequently smoothed by parametric and non-parametric fitting methods to extract shapes that more closely resemble those expected to be seen in real data. Once a sample of smoothly falling curves has been created, a 1 bin-width wide gaussian signal is injected at random positions and with random signal strengths. These final curves are Poisson-fluctuated to resemble realistic distributions, and the local significance of deviations from the known background (calculated using formulae from Ref. [8]) serve as the network's labels. 2 SciPost Physics Proceedings Submission 2.2.1 Histogram Production To demonstrate BumpNet's methodology, distributions for smoothing are generated using the Dark Machines (DM) MC dataset [9]. This dataset emulates 10 fb-1 of SM pp collisions at ps = 13 TeV, including the 26 highest cross-section processes expected at the LHC. Exclusive final states are constructed from all combinations of the available objects, including electrons, muons, photons, jets, Emiss T and some specially defined objects, such as a boosted hadronic W/Z. One invariant-mass histogram is then created for every combination of at least two objects within a given final state, yielding 39,768 total invariant-mass histograms. Bin sizes vary according to an approximate mass resolution in order to make resonances appear similar to BumpNet in units of bins. A similar histogram production framework will be used to generate histograms from real data in future analyses. 3 Performance For the DM proof-of-concept [7], BumpNet is trained on approximately 3 million samples, one third of which are background shapes obtained from analytical functions and the remaining from smoothed MC histograms. To examine performance, BumpNet is tested on an application set generated in the same manner as its training dataset and on various BSM scenarios. 0.0 2.5 5.0 7.5 10.0 12.5 ZLR max 4 2 0 2 4 Zmax = 0.11 = 0.75 0 500 1000 1500 2000 2500 (a) ∆Zmax v. Z LR max 0.0 0.2 0.4 0.6 0.8 ZLR max bin / number of bins 4 2 0 2 4 Zmax = 0.14 = 0.79 0 500 1000 1500 2000 2500 (b) ∆Zmax v. relative position of Z LR max (includes all bins) 0.2 0.4 0.6 0.8 ZLR max bin / number of bins 4 2 0 2 4 Zmax = 0.1 = 0.68 0 500 1000 1500 2000 2500 (c) ∆Zmax v. relative position of Z LR max (first 10% of bins excluded) Figure 2: BumpNet's agreement with Z LR max measured as a function of signal strength (2a) and relative position (2b, 2c). 3.1 Injected Gaussian Signals A total of 500,000 function- and 1,000,000 DM-based histograms are used for the application set. Performance is assessed by calculating ∆Zmax = ZBumpNet max -Z LR max, the difference between BumpNet's prediction and the true significance, respectively. Figure 2 shows that, as a function of various signal strengths and positions, ∆Zmax is unbiased with relatively small spread. Excluding the predictions in the first 10% of bins reduces the spread further (Figure 2c), emphasizing the ambiguity in the inferred background near the beginning of the histograms. 3.2 Realistic Signals and the Look Elsewhere Effect To test sensitivity to realistic physics scenarios, BumpNet is applied on previously analyzed ATLAS data (see Figure 3a and Section 4.1 of Ref. [7]) and BSM samples injected on top of SM background. These BSM samples include both locally generated events and those provided 3 SciPost Physics Proceedings Submission in the DM dataset. The network performs well across multiple scenarios, including the pair production of scalar leptoquarks with a mass of 600 GeV depicted in Figure 3b. Despite BumpNet's stellar performance, the natural problem of false positive signals and the look elsewhere effect (LEE) arises swiftly when scanning thousands of histograms for bumps. To mitigate such false positives, an algorithm exploiting physical correlations between object combinations (described in Section 4.2 of Ref. [7]) has been developed to much success. A framework has also been deployed for calculating the global significance of BumpNet's predictions, which will strengthen the results of future analyses by quantifying the extent of the LEE. 500 1000 1500 2000 2500 3000 3500 Events/2GeV data Sig+Bkg fit Bkg (4th order polynomial) 200 100 0 100 200 Events - Bkg 100 110 120 130 140 150 GeV 2 0 2 4 Significance Zpred ZLR (a) Higgs discovery data (b) Two LQ →be Figure 3: Examples of BumpNet's performance over data-like signals. In 3a, the maximum prediction (Z pred max ) of 4.5σ agrees with the likelihood-ratio calculation (Z LR max) of 4.2σ within BumpNet's uncertainties. In 3b, the prediction peaks where the most BSM events are located, demonstrating BumpNet's sensitivity to general signatures. 4 Conclusion The BumpNet methodology presents a promising approach to automatizing the mass resonance search, enabling an efficient scan of hundreds of final states to indicate areas of interest. The performance of our proof-of-concept model is unbiased with low variance and demonstrates sensitivity to a wide variety of possible physics scenarios. Multiple approaches have been developed to mitigate the inevitable look elsewhere effect, highlighting BumpNet's potential to reliably scan more regions of the phase space than any prior analysis. Acknowledgements Funding information We would like to acknowledge the support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Institut de valorisation des données (IVADO), the Canada First Research Excellence Fund, and the Israeli Science Foundation (ISF, Grant No. 2382/24). We are also grateful to the Krenter-Perinot Center for High-Energy Particle Physics, the Shimon and Golde Picker-Weizmann Annual Grant, and the Sir Charles Clore Prize for their generous support. We further wish to thank Martin Kushner Schnur for his significant and invaluable contribution to this work. 4 SciPost Physics Proceedings Submission References [1] S. Weinberg, Essay: Half a Century of the Standard Model, Phys. Rev. Lett. 121(22), 220001 (2018), [2] V. Belis, P. Odagiu and T. K. Aarrestad, Machine learning for anomaly detection in particle physics, Rev. Phys. 12, 100091 (2024), [3] S. W. Herb, D. C. Hom, L. M. Lederman, J. C. Sens, H. D. Snyder, J. K. Yoh, J. A. Appel, B. C. Brown, C. N. Brown, W. R. Innes, K. Ueno, T. Yamanouchi et al., Observation of a Dimuon Resonance at 9.5 GeV in 400-GeV Proton-Nucleus Collisions, Phys. Rev. Lett. 39(5), 252 (1977), [4] J. J. Aubert, U. Becker, P. J. Biggs, J. Burger, M. Chen, G. Everhart, P. Goldhagen, J. Leong, T. McCorriston, T. G. Rhoades, M. Rohde, S. C. C. Ting et al., Experimental Observation of a Heavy Particle J, Phys. Rev. Lett. 33(23), 1404 (1974), [5] G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. Abdelalim, O. Abdinov, R. Aben, B. Abi, M. Abolins, O. AbouZeid, H. Abramowicz et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B 716(1), 1 (2012), [6] J. H. Kim, K. Kong, B. Nachman and D. Whiteson, The motivation and status of two-body resonance decays after the LHC Run 2 and beyond, J. High Energ. Phys. 2020(4), 30 (2020), [7] J.-F. Arguin, G. Azuelos, E. Baril, I. Bessudo, F. Bilodeau, M. Borysova, S. Bressler, S. Calvet, J. Donini, E. Dreyer, M. K. L. Chu, E. Mayer et al., Automatizing the search for mass resonances using BumpNet, J. High Energ. Phys. 2025(2), 122 (2025), [8] G. Cowan, K. Cranmer, E. Gross and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C 71(2), 1554 (2011), 1554-0. [9] T. Aarrestad, M. Van Beekveld, M. Bona, A. Boveia, S. Caron, J. Davies, A. De Simone, C. Doglioni, J. Duarte, A. Farbin, H. Gupta, L. Hendriks et al., The Dark Machines Anomaly Score Challenge: Benchmark Data and Model Independent Event Classification for the Large Hadron Collider, SciPost Phys. 12(1), 043 (2022), 5
|
2509.16285
|
Unity based virtual reality for detector and event visualization in JUNO experiment
Kai-Xuan Huang,1, ∗Tian-Zi Song,1, ∗Yu-Ning Su,1 Cheng-Xin
Wu,1 Xue-Sen Wang,2 Yu-Mei Zhang,2, † and Zheng-Yun You1, ‡
1School of Physics, Sun Yat-sen University, Guangzhou 510275, China
2Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China
Detector and event visualization are crucial components of high-energy physics (HEP) experimental software.
Virtual Reality (VR) technologies and multimedia development platforms such as Unity offer enhanced display
effects and flexible extensibility for visualization in HEP experiments. In this study, we present a VR-based
method for detector and event displays in the Jiangmen Underground Neutrino Observatory (JUNO) experiment.
This method shares the same detector geometry descriptions and event data model as those in offline software
and provides necessary data conversion interfaces. The VR methodology facilitates an immersive exploration of
the virtual environment in JUNO, enabling users to investigate detector geometry, visualize event data, and tune
the detector simulation and event reconstruction algorithms. Additionally, this approach supports applications
in data monitoring, physics data analysis, and public outreach initiatives.
Keywords: virtual reality, event display, Unity, detector geometry, JUNO
I.
INTRODUCTION
Visualization techniques are essential in every aspect of
modern High-energy Physics (HEP) experiments.
In the
Roadmap for HEP Software and Computing R&D for the
2020s [1] and the HEP Software Foundation Community
White Paper [2], recommendations and guidelines for visu-
alization tools, such as Virtual Reality (VR) technologies [3]
in future software development are specifically discussed,
particularly regarding interactivity, detector geometry visu-
alization and event display. Compared to traditional visual-
izations, VR techniques offer a truly immersive perspective,
which enhances interactive experience with a better under-
standing of detector geometry and event information. In re-
cent years, some HEP experiments have developed VR ap-
plications for event display and outreach. These include the
Belle2VR software [4, 5] for the BelleII experiment [6], the
ATLASrift platform [7, 8] for the ATLAS experiment [9], the
CMS VR application [10] for the CMS experiment [11], and
the Super-KAVE program [12, 13] for the Super-K experi-
ment [14].
The development of VR applications typically involves
game engines, such as Unity [15] or Unreal Engine [16].
Unity is a cross-platform engine that supports the develop-
ment of games, videos, animations, and architectural visual-
izations. It has been employed for detector visualization and
event display in various HEP experiments, including BelleII,
BESIII [17], ALICE [18], ATLAS [19], JUNO [20], and the
Total Event Visualizer (TEV) of the CERN Media Lab [21],
all of which achieve excellent visualization effects.
The
Jiangmen
Underground
Neutrino
Observa-
tory (JUNO) [22–24] is situated underground in southern
China with a 650 meter rock overburden.
The primary
scientific goal of JUNO is to determine the neutrino mass
hierarchy.
Over an approximately seven–year operational
∗These authors contributed equally to this work.
† Yu-Mei Zhang, zhangym26@mail.sysu.edu.cn
‡ Zheng-Yun You, youzhy5@mail.sysu.edu.cn
period, JUNO is expected to determine the neutrino mass
hierarchy with a significance of 3σ [25], and to measure the
oscillation parameters ∆m2
31, ∆m2
21, and sin2 θ12, achieving
a precision of 0.2% for ∆m2
31, 0.3% for ∆m2
21, and 0.5% for
sin2 θ12 [26, 27], respectively.
Additionally, JUNO experiment is capable of investigat-
ing various types of neutrinos, including earth neutrinos, at-
mospheric neutrinos, solar neutrinos and supernova neutri-
nos [22]. Its excellent energy resolution and large fiducial
volume provide promising opportunities to explore numerous
essential topics in neutrino physics.
In this study, we develop a VR-based event display tool
using Unity for JUNO. This software is compatible with vari-
ous platforms through Head-Mounted Displays (HMDs) [28]
and offers functionalities including the VR-based visualiza-
tion of JUNO detector, event displays for different types of
data, interfaces for reading and converting event data infor-
mation, and spatial Spatial User Interface (Spatial UI) control
features.
The rest of this paper is organized as follows. In Section II,
we introduce VR-based software for HEP experiments. In
Section III, the software methodologies is described, includ-
ing the JUNO VR framework, the data flow of detector geom-
etry and event data conversion, as well as interaction methods
with Spatial UI. The visualization of detector units and event
data in the VR-based tool is introduced in Section IV. The
potential for further applications is discussed in Section V.
Finally, the performance of the software is introduced in Sec-
tion VI.
II.
VISUALIZATION AND VR
A.
Unity and VR
In HEP experiments, physicists typically develop detector
descriptions and event visualization tools within offline soft-
ware frameworks. These event display tools are usually built
upon the widely-used HEP software, such as Geant4 [29] or
ROOT [30], which provide user-friendly visualization capa-
arXiv:2509.16285v1 [physics.ins-det] 19 Sep 2025
2
bilities that facilitate software development.
With the up-
grades to ROOT and its EVE package [31], the development
of event display tools has become more efficient. Several
recent HEP experiments, including ALICE, CMS [11], BE-
SIII [32], JUNO [33, 34], and Mu2e [35], adopt ROOT EVE
for developing event display software. However, due to the
limited visualization technique support in ROOT, its display
capabilities do not fully meet the diverse requirements of
physicists, and most ROOT applications remain confined to
the Linux platform.
To enhance visualization quality, interactivity, and multi-
platform support, several event display tools are developed
based on external visualization software. Unity is widely ap-
plied in the field of HEP, being used in projects including
BelleII, BESIII, ALICE, ATLAS, and JUNO. Unity is a pro-
fessional video and game development engine based on C#,
and visualization software built on Unity offers several ad-
vantages.
• Impressive visualization quality.
Unity, a widely
adopted professional 3D engine in the industry, offers
advanced visual capabilities that surpass those of tradi-
tional software used in HEP, such as ROOT. Addition-
ally, its continuous updates enable HEP visualizations
to stay aligned with the cutting-edge developments in
graphics technology.
• Cross-platform support.
The comprehensive multi-
platform support of Unity enables seamless export and
deployment of projections across a range of operating
systems, including Windows, Linux, macOS, iOS, An-
droid, and web browsers. This functionality ensures
that the same visualization project can be accessed
across various platforms, minimizing the development
effort and streamlining maintenance tasks.
• High-quality VR rendering and performance optimiza-
tion.
Unity supports modern graphics technologies
such as real-time lighting, global illumination, and
physics-based rendering. Light behaves according to
the principles of physics, including energy conserva-
tion and Fresnel reflections [36], resulting in more re-
alistic and immersive graphical effects in VR. These
features are crucial for enhancing details like light-
ing, shadows, textures, and environmental interactions,
significantly improving the user’s sense of immersion.
Additionally, Unity optimizes VR performance by ren-
dering separate images for each eye, providing a dual-
eye perspective while maintaining smooth rendering
and minimizing motion blur and latency.
• VR HMDs compatibility. Unity supports most popular
VR HMDs, including Meta Quest 2 and Quest 3 [37],
HTC Vive [38], Valve Index [39], and Vision Pro [40].
Through the extended reality interaction toolkit in
Unity, developers can easily create interactive applica-
tions for various devices without device-specific cod-
ing.
Additionally, Unity provides a fast turnaround during the
development cycle. Projects can be executed immediately,
running quickly on VR devices for easier debugging, without
the need to compile and link executable files [41].
Compared to 3D-based event visualization software, VR
technology significantly enhances the visual experience of
the user. VR applications are typically conducted through
HMDs.
According to Steam VR hardware statistics [42],
more than half of users utilize the Meta Quest 2 and Quest 3.
These devices, based on the Android operating system, of-
fer sufficient immersion and are widely used across vari-
ous fields, including gaming, social interaction, and educa-
tion. Equipped with accelerometers, gyroscopes, and cam-
eras, these devices can track the user’s head and hand move-
ments, enabling interaction and navigation within virtual en-
vironments. Additionally, the controllers facilitate interac-
tion with Spatial UI in the virtual environment. VR technol-
ogy provides synthesized sensory feedback, creating a strong
sense of immersion and presence within a simulated environ-
ment.
Most HEP experiments are typically conducted in under-
ground or restricted areas, which are typically inaccessible
during data collection. VR technology enables the public to
explore these experiments in an immersive environment to
observe detector operations and event data collection. This
offers a fundamental understanding of the types of scientific
research being conducted in HEP, which is highly beneficial
for both educational and outreach purposes.
Furthermore, by simulating particle emissions and their in-
teractions with detectors, VR provides physicists with an im-
mersive platform for refining offline simulations and recon-
struction software [43–46]. It can also enhance simulation ac-
curacy. For JUNO, considering the deformation of the stain-
less steel truss, offsets need to be applied to the PMT posi-
tions based on limited survey data [47–49]. Overlap checks
and position tuning using the VR event display tool will be
particularly helpful. Additionally, VR enables physicists to
analyze rare events as though they are physically present
within the inner detector environment, which provides alter-
native approach for data analysis and may inspire creativity.
B.
VR application in HEP
In recent years, VR applications have been developed in
several HEP experiments event visualization and outreach.
These software include Belle2VR [5] for the BelleII exper-
iment, ATLASrift [7, 8] for the ATLAS experiment, and
Super-KAVE [12, 13] for the Super-K experiment.
Belle2VR is an interactive VR visualization tool developed
with Unity, designed to represent subatomic particle physics.
This application allows user to explore the BelleII detector
and observe particle jets generated in high energy e+e−col-
lisions.
The Super-KAVE application immerses user in a
scaled representation of the Super-K detector, allowing them
to explore the virtual space, switch between event datasets,
and change visualization modes [12, 13]. In addition to pro-
viding VR modes for exploring the detector and standard
3
event displays, the application features a supernova event vi-
sualization technique, simulating the conversion of a star into
a supernova. This leads to the occurrence of thousands of
neutrino events within approximately ten seconds. It serves
as a valuable outreach tool, offering a new example of visu-
alization techniques for various neutrino particle physics ap-
plications. ATLASrift, a VR application developed for the
ATLAS experiment, is primarily used for data visualization
and outreach [9]. User can move around and inside the de-
tector, as well as explore the entire underground experimental
cavern and its associated facilities, including shafts, service
halls, passageways, scaffolds, and more.
III.
METHODOLOGIES
VR technology provides an immersive experience. How-
ever, the development of comprehensive event display soft-
ware utilizing VR for HEP experiments still involves signifi-
cant challenges.
The first challenge is to convert the detector geometry, typ-
ically based on Geant4 simulations, into a format such as
FBX [50] that can be imported into Unity. Given that detec-
tors usually consist of tens of thousands of components, man-
ually creating the geometry would impose a significant work-
load. Another significant challenge is extracting and convert-
ing event information into a structure compatible with Unity.
In HEP experiments, the fundamental information for event
display is typically defined by the offline software and stored
in ROOT format. However, because Unity does not support
direct reading of ROOT files, a dedicated conversion process
is required. Additionally, a bijective mapping is established
to link the detector unit identifiers used in the offline soft-
ware [51] with the names assigned to the corresponding ge-
ometries in Unity.
This section introduces the software architecture and data
flow in the JUNO VR program. We describe the process of
detector geometry conversion, the exchange of essential event
information from offline software to Unity, and the strategy of
matching detector units. Additionally, we discuss the con-
struction of the Spatial UI and provide an overview of its
functionality.
A.
Software structure and data flow
The event display software should provide visualization ca-
pabilities, including detector geometry, event data informa-
tion at different levels, and interactive controls. For JUNO
VR software visualization, the first step involves converting
and importing the detector geometry and event data informa-
tion into Unity for display, followed by the development of
interactive controls. As shown in Figure 1, the JUNO event
display software consists of four components.
• Detector geometry conversion. The geometric models
of the detector are constructed using Geant4 in the de-
tector simulation, initially stored in a Geometry De-
scription Markup Language (GDML) file [52].
The
GDML file is then automatically converted to the FBX
format using the GDML-FBX conversion tool [17, 53],
which is compatible for import into Unity.
• Event data conversion. The Event Data Model (EDM)
[54] encompasses various types of event information
exchanged between different components of JUNO on-
line and offline software, including data acquisition,
simulation, calibration, and reconstruction. The event
information for JUNO VR event display is extracted
from the offline software EDM [55]. By combining the
detector identifier and Unity geometry name matching
rules, the detector information is remapped, generating
event information that Unity can directly import and
conforms to the geometry hierarchy in Unity.
• Detector and event information visualization. The de-
tector geometry, simulation, and reconstruction infor-
mation, as well as the hit information and their associa-
tions, are visualized in Unity. By adjusting the material
properties and combining Unity’s layers, lighting, and
rendering effects, an immersive and outstanding visu-
alization experience in VR mode is achieved.
• Spatial UI and interactive control.
The Spatial UI
is designed to facilitate the visualization and interac-
tion with the detector and event information.
It in-
cludes the sub-detector geometry panel and the event
display panel, which allow users to control the display
of sub-detectors, switch between event types, and man-
age the event display process. Interactive control is en-
abled through the Meta Quest 3 controller, with dis-
tinct functions assigned to the joystick and various but-
tons. These functions include controlling the visibility
of each panel, navigating within the 3D virtual detector
environment, and switching perspectives.
B.
Detector geometry conversion
The detector geometry in HEP experiments are typically
complex, consisting of up to millions of detector units. The
description of these detectors is commonly developed using
specialized geometric languages, such as GDML and Detec-
tor Description for High-Energy Physics (DD4hep) [56, 57].
The JUNO experiment, along with BESIII, PHENIX [58],
and LHCb [59], uses GDML to describe and optimize the
geometry of detectors for conceptual design and offline soft-
ware development. GDML is a detector description language
based on Extensible Markup Language (XML) [60] that de-
scribes detector information through a set of textual tags and
attributes, providing a persistent description of the detector.
The geometry description files of detectors typically include
essential information about the detector model, such as lists
of materials, positions, rotations, solids, and the hierarchical
structure of the detector.
Since the GDML format does not directly support import
into Unity, some previous HEP applications involving Unity
4
Fig. 1. The software framework and data flow in JUNO VR.
typically required manual construction of geometric models.
Given that HEP detectors are usually highly complex, the cre-
ation of 3D detector models in Unity becomes particularly
challenging. However, Unity does support direct import of
several 3D file formats, including FBX, DAE [61], DXF [62],
and OBJ [63]. Among these, FBX stands out as a widely
used 3D asset format, due to its ability to handle intricate
scene structures. This includes not only geometry but also
animations, materials, textures, lighting, and cameras, which
makes it a highly suitable choice for HEP applications involv-
ing complex 3D models.
A method that can automatically convert GDML or
DD4hep to FBX format is essential for detector construction
in Unity. Several researches have proposed automated meth-
ods for converting GDML files to FBX files, significantly fa-
cilitating Unity-based development. For instance, the BESIII
collaboration group suggests using FreeCAD [64], a 3D CAD
and modeling software, in conjunction with CAD data opti-
mization software Pixyz [65], with the STEP [66] format as
an intermediate conversion format [17]. The CMS collab-
oration group employ SketchUp software for auxiliary data
conversion [67].
Recently, methods were also proposed to directly convert
GDML files to FBX files [53]. This research, based on the
above method, enables a fast and automatic conversion pro-
cess from GDML to FBX, which can be completed in just a
few minutes, and saves significant time in the conversion pro-
cess. This approach becomes particularly beneficial during
the recent geometric updates of JUNO detector at the com-
missioning stage, enabling the swift conversion of the up-
dated FBX file, which includes the latest geometry model of
the real detector units after installation.
C.
Event data conversion
In HEP experiments, the event data is typically stored in
files with binary raw data format or ROOT format. ROOT,
an efficient data analysis framework, is widely adopted for
high-performance data input and output operations. However,
since Unity cannot directly read ROOT files, it is necessary to
extract the required event information based on the EDM and
convert it into a text format that Unity can process.
The essential information for event display comprises three
main components: detector unit hits, Monte Carlo (MC) truth,
and reconstruction data. The detector unit hits include the hit
time and hit charge for each detector unit like a PMT. MC
truth provides detailed truth information such as simulated
vertices and photon trajectories (including 3D coordinates
and propagation with time), which facilitate a deeper anal-
ysis of particle direction and relative velocity. Reconstruction
data typically contain the reconstructed vertex positions, en-
ergy information, and additional track information for muon
events like direction. Together, these information serve as the
foundation for developing event display functionalities and
interactive control modules based on Spatial UI.
Furthermore, the identifiers used for detector units in of-
fline software may differ from the names of the geometric
objects in Unity. In HEP experiments, the detector identifier
system assigns a unique ID to each detector unit and play a
critical role in various applications including data acquisition,
simulation, reconstruction and analysis. Therefore, establish-
ing an accurate mapping between the detector identifiers in
offline software and the geometric objects like PMT in Unity
is essential to ensure the accurate display of an event. Based
on EDM readout rules and leveraging the mapping between
the identifier module and the geometric objects in Unity, an
automated readout and conversion interface is developed to
export event display information.
5
For JUNO VR software, multiple types of datasets are pro-
vided, including radioactive background, Inverse Beta Decay
(IBD) [68], cosmic ray muons and other types of events. The
event display dataset is designed to encompass both simu-
lated and real data event types. Simulated events are produced
with the JUNO offline software to facilitate detector simula-
tion, commissioning and the optimization of reconstruction
algorithms. Since JUNO has not yet commenced formal data
taking, real data events are instead obtained from the Data
Challenge dataset [69], which has data structures identical to
those expected during actual operation. With the event data
conversion interface, the datasets with various types of data
are ready to be displayed in the Unity based visualization and
VR software.
Fig. 2. The Spatial UI in JUNO VR. On the left is the JUNO VR
event display control panel, and on the right is the sub-detector ge-
ometry control panel.
D.
Spatial UI and interactive control
The Spatial UI serves as the interface facilitating interac-
tion between user and the VR application. For JUNO VR
project, we develop two Spatial UIs: the sub-detector geome-
try control panel and the event display control panel, as shown
in Figure 2.
The sub-detector geometry panel primarily controls the
visualization attributes of the geometries of various sub-
detectors, including Central Detector (CD) large PMTs, CD
small PMTs, Top Tracker, and water pool PMTs. Detailed
information about the sub-detectors of JUNO is provided in
Section IV A. In addition to the sensitive detectors like PMTs,
an "Other structure" toggle controls the display of passive
structures, such as the steel structure, the acrylic ball, the
PMT support structures, and the liquid filling pipelines. Addi-
tionally, the "Data type" drop-down is used to switch between
different types of events collected during real data-taking or
from simulation. The "Photon trail mode" toggle enables the
switching of display modes for photon paths, either repre-
sented by green lines or in a manner closely resembling par-
ticle motion.
The event display panel is designed to implement the core
functionality for event visualization, which includes a tog-
gle for switching display mode between simulation and data
types, a slider for controlling the display of an event with its
timeline evolution, a drop-down for selecting different types
of events, and a button to play the event animation. A "Draw
Hit" button initiates the animation of the full event hit pro-
cess, which plays within a period of time window, with the
time slider moving in sync with the event timeline, enabling
user to track the current time of the event.
Interactive control is achieved through the use of con-
trollers, gesture operations, eye-tracking, and other input
methods in HMDs. The following discussion focuses on test-
ing interactive control for the Meta Quest 3. For other HMDs,
the cross-platform support provided by the extended reality
interaction toolkit in Unity minimizes development differ-
ences between various devices. Simple adaptations based on
the specific features of the HMDs are sufficient for operation.
The controller buttons resemble a typical gamepad, with
the addition of side buttons. The X&Y buttons on the left
controller are used to control the visibility of the sub-detector
geometry panel. When displayed, the position of this panel is
based on the user’s orientation and appears at the front left of
the user’s view. User can drag or hide the panel to avoid ob-
structing their view when visualizing events. The A&B but-
tons on the right controller are used to control the visibility of
the event display panel. When displayed, the panel appears at
the front right of the user’s view. Based on the gyroscope and
accelerometer hardware of the Meta Quest 3, these planes are
always oriented perpendicular to the user’s view orientation.
Fig. 3. User perspective during motion while checking the display
information of a simulated muon event in the JUNO VR application.
The CD small PMTs are not shown. Detailed information about the
sub-detectors of JUNO is provided in Section IV A.
6
The joystick on the left controller controls the user’s 3D
movement, based on both the controller’s input and the user’s
view orientation. For example, when the user’s head orienta-
tion is directed towards the upper right, pushing the joystick
upwards move the user in the virtual space toward that direc-
tion. Figure 3 illustrates the user’s viewpoint during motion
in the JUNO VR application. The event depicted is a sim-
ulated muon event. Additional details presented in the fig-
ure are described in detail in Section IV. The joystick on the
right controls the user’s viewpoint direction. Additionally,
the user can change their head orientation to switch perspec-
tives. The side button is used for interaction confirmation.
Furthermore, when interacting with the Spatial UI, if the con-
troller’s laser pointer touches the corresponding component,
audio and highlight feedback are provided, making the inter-
action smoother for user control.
IV.
VISUALIZATION IN JUNO
This section is dedicated to introduce the visualization ef-
fects in the JUNO VR, including detector geometry, hit dis-
tribution for different types of events, as well as MC true in-
formation and display of event reconstruction outputs.
A.
Detector units
Fig. 4. Schematic View of the JUNO Detector.
The schematic design of JUNO detector is illustrated in
Figure 4 [23].
The detector includes the water pool, the
CD [47], and the Top Tracker [70]. The CD is the heart of
JUNO experiment and is filled with 20 ktons of liquid scintil-
lator [71, 72] to serve as the target for neutrino detection. The
liquid scintillator is housed within a spherical acrylic vessel,
which has a thickness of 120 mm and an inner diameter of
35.4 meters. This vessel is supported by a spherical stain-
less steel structure with an inner diameter of 40.1 meters. To
detect photons, the CD is equipped with a total of 17,612 20-
inch PMTs and 25,600 3-inch PMTs. Surrounding the CD is
the water pool containing 35 ktons of highly purified water,
which effectively shields the detector from external radioac-
tivity originating from the surrounding rocks. The water pool
is also instrumental in vetoing cosmic ray muons, with 2,400
20-inch PMTs deployed as part of the water Cherenkov de-
tector. The Top Tracker, located at the top of the water pool,
plays a key role in measuring and vetoing muon tracks [73,
74].
Fig. 5. JUNO detector in the VR application.
As described in Section III B, the JUNO detector geometry
is converted from the GDML file, and matched between the
identifier module and Unity geometry for each detector unit.
The visualization effects of the whole JUNO detector in VR
application is shown in Figure 5.
The light blue cylindrical structure represents the water
pool, with the water pool PMTs positioned outward, as in-
dicated by the yellow portion of the spherical structure. At
the top of the water pool, the reddish-brown structure repre-
sents the Top Tracker detector. From the interior view in the
JUNO VR, the spherical acrylic vessel is shown in light gray,
as depicted in Figure 2, although it is close to fully transparent
in reality to allow more photons to pass through. Surround-
ing this vessel is the stainless steel structure, shown as dark
gray in Figure 5. The CD detector PMTs, oriented toward
the center of the sphere, are designed such to receive photons
with its photocathodes, so that only the white tail structures
of every PMTs are visible in Figure 5.
Owing to the hardware capabilities of Meta Quest 3, there
is no requirement to optimize the grid of detector units or
replace them with simplified geometric shapes. Most of the
geometric details of the detector units are preserved, achiev-
ing effects that are difficult to accomplish in event displays
based on ROOT. Additionally, for the detector units, in or-
der to more closely replicate the effect of real PMTs, we have
assigned different material properties to the detector units, in-
cluding the visualization attributes such as color, reflectivity,
7
and metalicity, to achieve the best display effect.
B.
MC simulation event display
MC simulation is crucial for detector design and assists
physicists in evaluating the detector’s performance and tuning
reconstruction algorithms. There are various kinds of signal
and backgrounds events in JUNO, while currently we primar-
ily focus on the radioactive backgrounds, IBD signals, and
muon events.
The IBD event, νe + p →e+ + n, is the major signal
event for detecting electron anti-neutrinos in the JUNO ex-
periment [22, 23].
JUNO identifies and reconstructs IBD
events by detecting signals from positron and neutron cap-
tures. This dual-signal characteristic helps effectively identify
anti-neutrino signal events while suppressing the huge back-
ground events.
Fig. 6. The event display for a simulated IBD event in JUNO VR ap-
plication. The green lines represent the photon paths of the positron,
while the red lines indicate the photon paths of the neutron. The yel-
low spheres represent PMTs that are not triggered, while the spheres
with color gradient from light blue to blue indicate the PMTs with
an increasing number of hits.
For the IBD event, there are both positron and neutron sig-
nals, whose photon paths are displayed in green and red, re-
spectively, as shown in Figure 6. The detector units that are
triggered are color-coded from cyan to dark blue based on
the number of hits in the event, with bluer colors indicating
a higher number of hits. The PMTs that are not triggered are
displayed in yellow by default. Furthermore, in the time evo-
lution of an event, the color of fired PMTs changes with time
according to the associated timing information. The neutron-
induced photon paths are delayed by approximately 170 µs
relative to those from the positron, and this delay can be vi-
sualized using the time slider in the JUNO VR environment.
One major background event type is the cosmic-ray muon
events.
Muons are secondary particles produced by high-
energy cosmic rays in the earth’s atmosphere, and possess
strong penetrating power. Despite JUNO being located 650 m
deep underground, a small fraction of muons can still pene-
trate the overlying shielding and enter the detector, generating
muon events.
Fig. 7. The event display for a simulated muon event in JUNO VR
application. The green lines represent the photons generated along
the path of a muon penetrating the detector. The controllers and the
lasers emitted from the controllers represent the user’s interactive
control.
Figure 7 presents the event information for the simulated
muon event.
Photon trajectories are represented by light
green lines. These paths gradually extend over time, depicting
the propagation of photons. In the simulated event shown, the
directions of these photon paths may change, indicating their
interactions with the detector materials. For the muon event,
as a muon penetrates the detector, it continuously produces
photons while depositing its energy in the liquid scintillator.
Fig. 8.
Comparison of the reconstructed vertex (purple) with the
weighted energy deposit vertex (green) and the particle production
vertex (red) from MC truth in a simulated event. The yellow line
shows the reconstruction bias.
Event reconstruction plays a key role in JUNO data pro-
cessing, reconstructing the vertex and energy of an event,
which is essential in determining the neutrino mass hierar-
chy. For point-like events like IBD signals, almost all the
photon paths originate from the same event vertex. Figure 8
shows the reconstructed vertex and the MC truth. The ini-
tial particle production vertex (red sphere), derived from MC
truth, indicates where the positron is created. The weighted
8
energy deposit vertex (green sphere) marks the positron’s an-
nihilation point in the liquid scintillator. The reconstructed
vertex (purple sphere) is produced by the event reconstruc-
tion algorithm. The reconstruction bias (light yellow line)
represents the discrepancy between the reconstructed vertex
and the energy deposit vertex. A shorter distance indicates a
more accurate reconstructed vertex. In the ideal scenario, the
reconstructed vertex will converge to the true vertex.
C.
Real data event display
For the real-data event, we utilize the Data Challenge
dataset [69], whose data structures and processing pipeline
are identical to those employed during data taking. This en-
sures that the software will function seamlessly once the ex-
periment enters formal operation. The event composition in
this dataset is the same as that in the MC simulation, en-
compassing radioactive-background events, IBD signals, and
muon events.
Fig. 9.
Display of a reconstructed muon event from datasets in
the JUNO VR application. The translucent part represents the CD
acrylic sphere and its supporting components. The magenta line in-
dicates the reconstructed muon track by connecting the points where
the muon enters and exits the JUNO detector.
Figure 9 presents event information for a muon event de-
rived from the real data events. The reconstructed muon trav-
els through the detector along the magenta line. The left and
right sides represent the reconstructed incident and exit points
of the muon. A time offset is established by dividing the track
distance by the speed of light. User can observe the trajectory
of muon by the Spatial UI. Since the exact point of photon
emission along the path cannot be determined, photon infor-
mation is not displayed in this mode. Using the reconstructed
hit time, the corresponding point on the trajectory is linked to
the relevant PMT unit. Once the photon particles arrive at the
PMT units, those triggered PMTs will change color accord-
ingly.
Moreover, by exploiting Unity’s robust visualization capa-
bilities, a specialized mode is developed to simulate photon
paths using particle-like effects instead of simple line trajec-
tories to display the propagation of particles more realisti-
cally.
V.
APPLICATIONS
JUNO VR software provides an immersive interactive ex-
perience, allowing user to intuitively understand the detector
structure and event information. Some features and applica-
tions of the visualization software are listed below.
Data quality monitoring. The data quality monitoring sys-
tem [75–78] is designed to identify data issues promptly, en-
suring the acquisition of high-quality data. During the future
data-taking phase, event information can be real-time and au-
tomatically extracted from the reconstructed files from the
data quality monitoring system. Based on Unity-supported
databases, such as SQLite, event information can be trans-
mitted from the data quality monitoring server to JUNO VR
software.
This enables immersive visualization of detec-
tor operation status and event information during the data-
taking phase. For example, an animation of a real-time data-
acquisition event is automatically played every 30 seconds.
Through immersive visualization, shifters can easily monitor
anomalies, such as hot or dead PMT channels.
Physics analysis. Physical analysis involves in-depth re-
search of neutrino events to extract physical parameters, val-
idate theoretical models, search for rare signals and uncover
new phenomena. This requires detailed analysis of large vol-
umes of complex data. Through the VR interface, researchers
can reconstruct an immersive view of the event in three-
dimensional space, allowing them to freely explore the data,
observe event details from multiple perspectives, and identify
potential patterns and anomalies.
Outreach. For the HEP experiments, their complex theo-
retical and experimental contents are usually difficult for the
public and students to understand. Based on the VR appli-
cation, students can understand the structure of JUNO de-
tector and the processing of signal and background events
through interactive operations, thereby enhancing engage-
ment and understanding of the physics and principle of the
HEP experiments. And the visualization programs including
VR stand out in the field of education and public outreach.
Due to Unity’s cross-platform support and compatibility with
various HMDs, the completed project can be exported to dif-
ferent platforms and utilized with different HMDs, meeting
the requirements of various outreach scenarios.
VI.
PERFORMANCE
In experimental evaluations conducted on the mainstream
VR device—the Meta Quest 3, the JUNO VR application is
capable of processing a variety of event types and demon-
strates sufficient computational performance. During testing,
the device’s CPU utilization remains below 70%, GPU uti-
lization remains below 40%, and the display maintains a sta-
ble refresh rate of 72 frames per second. The software’s in-
teractive response primarily depends on the event type. For
muon events, which contain a larger volume of hit infor-
mation, the latency when switching between events is ap-
proximately 3 seconds; for IBD and radioactive background
events, it is approximately 1 second.
9
The event display of the JUNO VR application undergoes
rigorous testing, and the application is capable of processing
both simulated and real data events.
VII.
SUMMARY
The VR technology greatly enhances the visualization ef-
fects of HEP experiments. A JUNO VR application for de-
tector and event visualization is developed using Unity. By
converting GDML to FBX format, efficient construction of
the complex detector geometry in Unity is achieved. An event
data conversion interface is created based on matching the de-
tector identifier module and the detector geometry hierarchy
in Unity. Through the Spatial UIs, users can easily control the
display of various subsystems for detector and event visual-
ization.
With the ongoing construction of the JUNO experiment,
the VR event display software is successfully developed, and
more features are expected to be added in the future updates.
VR technology offers an immersive, interactive experience,
and it holds great potential in areas such as offline software
development, data taking, physics analysis, education and
public outreach.
LIST OF ABBREVIATIONS
HEP
High-energy Physics
VR
Virtual Reality
JUNO
Jiangmen Underground Neutrino Observatory
TEV
the Total Event Visualizer
HMDs
Head-Mounted Displays
Spatial UI Spatial User Interface
GDML Geometry Description Markup Language
EDM
Event Data Model
DD4hep Detector Description for High-Energy Physics
XML
Extensible Markup Language
MC
Monte Carlo
IBD
Inverse Beta Decay
CD
Central Detector
ACKNOWLEDGEMENTS
This work was supported by the National Natural Sci-
ence Foundation of China (Nos.
12175321, W2443004,
11975021, 11675275, and U1932101), Strategic Priority
Research Program of the Chinese Academy of Sciences
(No.
XDA10010900), National Key Research and Devel-
opment Program of China (Nos.
2023YFA1606000 and
2020YFA0406400), National College Students Science and
Technology Innovation Project, and Undergraduate Base Sci-
entific Research Project of Sun Yat-sen University.
REFERENCES
[1]
HEP Software Foundation Collaboration. “A Roadmap
for HEP Software and Computing R&D for the 2020s”.
Comput. Softw. Big Sci. 3.1 (2019), p. 7. DOI: 10 .
1007/s41781-018-0018-8.
[2]
M. Bellis et al. “HEP Software Foundation Community
White Paper Working Group – Visualization” (2018).
DOI: 10.48550/ARXIV.1811.10309.
[3]
I. Wohlgenannt, A. Simons, and S. Stieglitz. “Virtual
reality”. Business & Information Systems Engineering
62 (2020), pp. 455–461. DOI: 10.1007/s12599-
020-00658-9.
[4]
M. Bender, T. Kuhr, and L. Piilonen. “Belle II virtual
reality projects”. EPJ Web Conf. 214 (2019). Ed. by A.
Forti, L. Betev, M. Litmaath, et al., p. 02028. DOI: 10.
1051/epjconf/201921402028.
[5]
Z. Duer, L. Piilonen, and G. Glasson. “Belle2VR:
A Virtual-Reality Visualization of Subatomic Particle
Physics in the Belle II Experiment”. IEEE Computer
Graphics and Applications 38.3 (2018), pp. 33–43.
DOI: 10.1109/MCG.2018.032421652.
[6]
Belle-II Collaboration. “Belle II Technical Design Re-
port” (2010). DOI: 10 . 48550 / arXiv . 1011 .
0352.
[7]
ATLAS Collaboration. “Virtual Reality and game en-
gines for interactive data visualization and event dis-
plays in HEP, an example from the ATLAS experi-
ment”. EPJ Web Conf. 214 (2019). Ed. by A. Forti, L.
Betev, M. Litmaath, et al., p. 02013. DOI: 10.1051/
epjconf/201921402013.
[8]
I. Vukotic, E. Moyse, and R. M. Bianchi. “ATLASrift -
a Virtual Reality application”. Meeting of the APS Di-
vision of Particles and Fields. 2015. DOI: 10.48550/
arXiv.1511.00047.
[9]
ATLAS Collaboration. “The ATLAS Experiment at
the CERN Large Hadron Collider”. JINST 3 (2008),
S08003. DOI: 10 . 1088 / 1748 - 0221 / 3 / 08 /
S08003.
[10]
CMS Collaboration. Leveraging Virtual Reality for Vi-
sualising the CMS Detector. PoS (ICHEP2024) 1171.
Available at: https://pos.sissa.it/476/
1171/. Accessed on: June 16, 2025.
[11]
CMS Collaboration. “The CMS Experiment at the
CERN LHC”. JINST 3 (2008), S08004. DOI: 10 .
1088/1748-0221/3/08/S08004.
[12]
B. Izatt, K. Scholberq, and R. P. McMahan. “Super-
KAVE: An immersive visualization tool for neu-
trino physics”. 2013 IEEE Virtual Reality (VR). 2013,
pp. 75–76. DOI: 10.1109/VR.2013.6549370.
[13]
E. Izatt, K. Scholberg, and R. Kopper. “Neutrino-
KAVE: An immersive visualization and fitting tool for
neutrino physics education”. 2014 IEEE Virtual Reality
(VR). 2014, pp. 83–84. DOI: 10.1109/VR.2014.
6802062.
10
[14] Y. Suzuki. “The Super-Kamiokande experiment”. Eur.
Phys. J. C 79.4 (2019), p. 298. DOI: 10.1140/epjc/
s10052-019-6796-2.
[15] W. Goldstone. Unity game development essentials.
Packt Publishing Ltd, 2009.
[16] A. Sanders. An introduction to Unreal engine 4. AK
Peters/CRC Press, 2016.
[17] K.-X. Huang, Z.-J. Li, Z. Qian, et al. “Method for de-
tector description transformation to Unity and applica-
tion in BESIII”. Nucl. Sci. Tech. 33.11 (2022), p. 142.
DOI: 10.1007/s41365-022-01133-8.
[18] ALICE Collaboration. “ALICE: Physics performance
report, volume I”. J. Phys. G 30 (2004). Ed. by F.
Carminati, P. Foka, P. Giubellino, et al., pp. 1517–1763.
DOI: 10.1088/0954-3899/30/11/001.
[19] J.
Pequenao.
CAMELIA
webpage.
Available
at:
https://pdgusers.lbl.gov/~pequenao/
camelia. Accessed on: March 15, 2025.
[20] J. Zhu, Z.-Y. You, Y.-M. Zhang, et al. “A method of
detector and event visualization with Unity in JUNO”.
JINST 14.01 (2019), T01007. DOI: 10.1088/1748-
0221/14/01/T01007.
[21] C. M. Lab. CERN TEV visualization framework web-
page. Available at: https://gitlab.cern.ch/
CERNMediaLab/. Accessed on: March 15, 2025.
[22] JUNO Collaboration. “JUNO physics and detector”.
Prog. Part. Nucl. Phys. 123 (2022), p. 103927. DOI:
10.1016/j.ppnp.2021.103927.
[23] JUNO Collaboration. “JUNO Conceptual Design Re-
port” (2015). DOI: 10 . 48550 / arXiv . 1508 .
07166.
[24] F. An et al. “Neutrino Physics with JUNO”. J. Phys.
G 43.3 (2016), p. 030401. DOI: 10.1088/0954-
3899/43/3/030401.
[25] A. Abusleme et al. “Potential to identify neutrino mass
ordering with reactor antineutrinos at JUNO”. Chin.
Phys. C 49.3 (2025), p. 033104. DOI: 10 . 1088 /
1674-1137/ad7f3e.
[26] JUNO Collaboration. “Sub-percent precision measure-
ment of neutrino oscillation parameters with JUNO”.
Chin. Phys. C 46.12 (2022), p. 123001. DOI: 10 .
1088/1674-1137/ac8bc9.
[27] J. P. Athayde Marcondes de André, N. Chau, M.
Dracos, et al. “Neutrino mass ordering determina-
tion through combined analysis with JUNO and
KM3NeT/ORCA”. Nucl. Instrum. Meth. A 1055
(2023), p. 168438. DOI: 10.1016/j.nima.2023.
168438.
[28] J. E. Melzer and K. Moffitt. Head mounted displays.
1997.
[29] GEANT4 Collaboration. “GEANT4 - A Simulation
Toolkit”. Nucl. Instrum. Meth. A 506 (2003), pp. 250–
303. DOI: 10.1016/S0168-9002(03)01368-8.
[30]
R. Brun, A. Gheata, and M. Gheata. “The ROOT geom-
etry package”. Nucl. Instrum. Meth. A 502 (2003). Ed.
by V. A. Ilyin, V. V. Korenkov, and D. Perret-Gallix,
pp. 676–680. DOI: 10.1016/S0168- 9002(03)
00541-2.
[31]
M. Tadel. “Overview of EVE: The event visualization
environment of ROOT”. J. Phys. Conf. Ser. 219 (2010).
Ed. by J. Gruntorad and M. Lokajicek, p. 042055. DOI:
10.1088/1742-6596/219/4/042055.
[32]
Z.-J. Li, M.-K. Yuan, Y.-X. Song, et al. “Visualization
for physics analysis improvement and applications in
BESIII”. Front. Phys. (Beijing) 19.6 (2024), p. 64201.
DOI: 10.1007/s11467-024-1422-7.
[33]
Z.-Y. You, K.-J. Li, Y.-M. Zhang, et al. “A ROOT Based
Event Display Software for JUNO”. JINST 13.02
(2018), T02002. DOI: 10.1088/1748-0221/13/
02/T02002.
[34]
M.-H. Liao, K.-X. Huang, Y.-M. Zhang, et al. “A
ROOT-based detector geometry and event visualization
system for JUNO-TAO”. Nucl. Sci. Tech. 36.3 (2025),
p. 39. DOI: 10.1007/s41365-024-01604-0.
[35]
Mu2e Collaboration. “Mu2e Technical Design Report”
(2014). DOI: 10.2172/1172555.
[36]
Unity Technologies. Standard Shader. Available at:
https : / / docs . unity3d . com / 2023 .
2 / Documentation / Manual / shader -
StandardShader.html. Accessed on: March 15,
2025.
[37]
M. Aros, C. L. Tyger, and B. S. Chaparro. “Unravel-
ing the Meta Quest 3: An Out-of-Box Experience of
the Future of Mixed Reality Headsets”. HCI Interna-
tional 2024 Posters. Cham: Springer Nature Switzer-
land, 2024, pp. 3–8. DOI: 10.1007/978-3-031-
61950-2_1.
[38]
HTC Corporation. HTC Vive Official Website. Avail-
able at: https://www.vive.com. Accessed on:
March 15, 2025.
[39]
V. Corporation. Valve Index Official Website. Available
at: https://www.valvesoftware.com/en/
index. Accessed on: March 15, 2025.
[40]
R.-Z. Cheng, N. Wu, M. Varvello, et al. “A First Look
at Immersive Telepresence on Apple Vision Pro”. Pro-
ceedings of the 2024 ACM on Internet Measurement
Conference. IMC ’24. Madrid, Spain: Association for
Computing Machinery, 2024, pp. 555–562. DOI: 10.
1145/3646547.3689006.
[41]
Unity Technologies. Unity User Manual. Available at:
https : / / docs . unity3d . com / Manual /
index.html. Accessed on: March 15, 2025.
[42]
V. Corporation. Steam Hardware & Software Survey.
Available at: https://store.steampowered.
com/hwsurvey. Accessed on: March 15, 2025.
11
[43] G.-H. Huang et al. “Improving the energy uniformity
for large liquid scintillator detectors”. Nucl. Instrum.
Meth. A 1001 (2021), p. 165287. DOI: 10.1016/j.
nima.2021.165287.
[44] Z.-Y. Li et al. “Event vertex and time reconstruction
in large-volume liquid scintillator detectors”. Nucl. Sci.
Tech. 32.5 (2021), p. 49. DOI: 10.1007/s41365-
021-00885-z.
[45] Z. Qian et al. “Vertex and energy reconstruction in
JUNO with machine learning methods”. Nucl. Instrum.
Meth. A 1010 (2021), p. 165527. DOI: 10.1016/j.
nima.2021.165527.
[46] Z.-Y. Li, Z. Qian, J.-H. He, et al. “Improvement of ma-
chine learning-based vertex reconstruction for large liq-
uid scintillator detectors with multiple types of PMTs”.
Nucl. Sci. Tech. 33.7 (2022), p. 93. DOI: 10.1007/
s41365-022-01078-y.
[47] JUNO Collaboration. “The design and technology de-
velopment of the JUNO central detector”. Eur. Phys. J.
Plus 139.12 (2024), p. 1128. DOI: 10.1140/epjp/
s13360-024-05830-8.
[48] T. Lin et al. “Simulation software of the JUNO ex-
periment”. Eur. Phys. J. C 83.5 (2023). [Erratum:
Eur.Phys.J.C 83, 660 (2023)], p. 382. DOI: 10.1140/
epjc/s10052-023-11514-x.
[49] Z. Deng. “Status of JUNO Simulation Software”. EPJ
Web Conf. 245 (2020). Ed. by C. Doglioni, D. Kim,
G. A. Stewart, et al., p. 02022. DOI: 10 . 1051 /
epjconf/202024502022.
[50] Autodesk. FBX webpage. Available at: https : / /
www . autodesk . com / products / fbx /
overview. Accessed on: March 15, 2025.
[51] C.-X. Wu and Z.-Y. You. “Detector identifier and ge-
ometry management system in JUNO experiment”. PoS
ICHEP2024 (2025), p. 1049. DOI: 10 . 22323 / 1 .
476.1049.
[52] R. Chytracek, J. McCormick, W. Pokorski, et al. “Ge-
ometry description markup language for physics sim-
ulation and analysis applications.” IEEE Trans. Nucl.
Sci. 53 (2006), p. 2892. DOI: 10.1109/TNS.2006.
881062.
[53] A. Iusupova and S. Nemnyugin. “Geometry import
into virtual reality visualization engine for HEP ex-
periments at BM@N”. Nucl. Instrum. Meth. A 1067
(2024), p. 169619. DOI: 10.1016/j.nima.2024.
169619.
[54] T. Li, X. Xia, X.-T. Huang, et al. “Design and Develop-
ment of JUNO Event Data Model”. Chin. Phys. C 41.6
(2017), p. 066201. DOI: 10.1088/1674- 1137/
41/6/066201.
[55] JUNO Collaboration. “Modern Software Development
for JUNO offline software”. EPJ Web Conf. 295
(2024), p. 05015. DOI: 10 . 1051 / epjconf /
202429505015.
[56]
M. Frank, F. Gaede, C. Grefe, et al. “DD4hep: A De-
tector Description Toolkit for High Energy Physics Ex-
periments”. J. Phys. Conf. Ser. 513 (2014). Ed. by D. L.
Groep and D. Bonacorsi, p. 022010. DOI: 10.1088/
1742-6596/513/2/022010.
[57]
Z.-Y. Yuan et al. “Method for detector description con-
version from DD4hep to Filmbox”. Nucl. Sci. Tech.
35.9 (2024), p. 146. DOI: 10.1007/s41365-024-
01506-1.
[58]
PHENIX Collaboration. “PHENIX detector overview”.
Nucl. Instrum. Meth. A 499 (2003), pp. 469–479. DOI:
10.1016/S0168-9002(02)01950-2.
[59]
LHCb Collaboration. “The LHCb Detector at the
LHC”. JINST 3 (2008), S08005. DOI: 10 . 1088 /
1748-0221/3/08/S08005.
[60]
T. Bray, J. Paoli, and C. Sperberg-McQueen. Extensi-
ble Markup Language (XML) 1.0. Available at: http:
//www.w3.org/XML/1998/06/xmlspec-
report-19980910.htm. Accessed on: March 15,
2025.
[61]
K. Group. COLLADA Document Schema and Refer-
ence (Version 1.5). Available at: https : / / www .
khronos.org/collada/. Accessed on: March 15,
2025.
[62]
Autodesk. Drawing Exchange Format (DXF) Refer-
ence. Available at: https : / / archive . ph /
20121206003818/http://www.autodesk.
com/dxf. Accessed on: March 15, 2025.
[63]
M. Reddy. Wavefront OBJ File Format. Available at:
http://www.martinreddy.net/gfx/3d/
OBJ.spec. Accessed on: March 15, 2025.
[64]
F.
Developers.
FreeCAD
webpage.
Available
at:
https://www.freecadweb.org. Accessed on:
March 15, 2025.
[65]
P. Software. Pixyz Studio Software. Available at:
https : / / www . pixyz - software . com /
studio. Accessed on: March 15, 2025.
[66]
S. Kemmerer. STEP: The Grand Experience. en. 1999.
DOI: 10.6028/NIST.SP.939.
[67]
T. Sakuma and T. McCauley. “Detector and Event Vi-
sualization with SketchUp at the CMS Experiment”. J.
Phys. Conf. Ser. 513 (2014). Ed. by D. L. Groep and D.
Bonacorsi, p. 022032. DOI: 10.1088/1742-6596/
513/2/022032.
[68]
P. Vogel and J. F. Beacom. “Angular distribution of
neutron inverse beta decay, anti-neutrino(e) + p —>
e+ + n”. Phys. Rev. D 60 (1999), p. 053003. DOI: 10.
1103/PhysRevD.60.053003.
[69]
T. Lin and W.-Q. Yin. “Offline data processing in
the First JUNO Data Challenge” (2024). DOI: 10 .
48550/arXiv.2408.00959.
12
[70] JUNO Collaboration. “The JUNO experiment Top
Tracker”.
Nucl.
Instrum.
Meth.
A
1057
(2023),
p. 168680. DOI: 10 . 1016 / j . nima . 2023 .
168680.
[71] M. Yu, W.-J. Wu, Y.-Y. Ding, et al. “A Monte Carlo
method for Rayleigh scattering in liquid detectors”.
Rev. Sci. Instrum. 93.11 (2022), p. 113102. DOI: 10.
1063/5.0119224.
[72] M. Yu, W.-J. Wu, N. Peng, et al. “Measurements of
Rayleigh ratios in linear alkylbenzene”. Rev. Sci. In-
strum. 93.6 (2022), p. 063106. DOI: 10.1063/5.
0091847.
[73] K. Li, Z. You, Y. Zhang, et al. “GDML based geome-
try management system for offline software in JUNO”.
Nucl. Instrum. Meth. A 908 (2018), pp. 43–48. DOI:
10.1016/j.nima.2018.08.008.
[74] S. Zhang, J.-S. Li, Y.-J. Su, et al. “A method for shar-
ing dynamic geometry information in studies on liquid-
based detectors”. Nucl. Sci. Tech. 32.2 (2021), p. 21.
DOI: 10.1007/s41365-021-00852-8.
[75]
ATLAS collaboration. “ATLAS offline data quality
monitoring”. J. Phys. Conf. Ser. 219 (2010). Ed. by
J. Gruntorad and M. Lokajicek, p. 042018. DOI: 10.
1088/1742-6596/219/4/042018.
[76]
CMS collaboration. “CMS data quality monitoring:
Systems and experiences”. J. Phys. Conf. Ser. 219
(2010). Ed. by J. Gruntorad and M. Lokajicek,
p. 072020. DOI: 10.1088/1742-6596/219/7/
072020.
[77]
J.-F. Hu, Y.-H. Zheng, X.-D. Sun, et al. “A data qual-
ity monitoring software framework for the BESIII ex-
periment”. Chin. Phys. C 36 (2012), pp. 62–66. DOI:
10.1088/1674-1137/36/1/010.
[78]
Daya Bay Collaboration. “Onsite data processing and
monitoring for the Daya Bay Experiment”. Chin. Phys.
C 38 (2014), p. 086001. DOI: 10 . 1088 / 1674 -
1137/38/8/086001.
|
Unity based virtual reality for detector and event visualization in JUNO experiment Kai-Xuan Huang,1, ∗Tian-Zi Song,1, ∗Yu-Ning Su,1 Cheng-Xin Wu,1 Xue-Sen Wang,2 Yu-Mei Zhang,2, † and Zheng-Yun You1, ‡ 1 -sen University, Guangzhou 510275, China 2Sino-French -sen University, Zhuhai 519082, China Detector and event visualization are crucial components of high-energy physics (HEP) experimental software. Virtual Reality (VR) technologies and multimedia development platforms such as Unity offer enhanced display effects and flexible extensibility for visualization in HEP experiments. In this study, we present a VR-based method for detector and event displays in the Jiangmen Underground Neutrino Observatory (JUNO) experiment. This method shares the same detector geometry descriptions and event data model as those in offline software and provides necessary data conversion interfaces. The VR methodology facilitates an immersive exploration of the virtual environment in JUNO, enabling users to investigate detector geometry, visualize event data, and tune the detector simulation and event reconstruction algorithms. Additionally, this approach supports applications in data monitoring, physics data analysis, and public outreach initiatives. Keywords: virtual reality, event display, Unity, detector geometry, JUNO I. INTRODUCTION Visualization techniques are essential in every aspect of modern High-energy Physics (HEP) experiments. In the Roadmap for HEP Software and Computing R&D for the 2020s [1] and the HEP Software Foundation Community White Paper [2], recommendations and guidelines for visualization tools, such as Virtual Reality (VR) technologies [3] in future software development are specifically discussed, particularly regarding interactivity, detector geometry visualization and event display. Compared to traditional visualizations, VR techniques offer a truly immersive perspective, which enhances interactive experience with a better understanding of detector geometry and event information. In recent years, some HEP experiments have developed VR applications for event display and outreach. These include the Belle2VR software [4, 5] for the BelleII experiment [6], the ATLASrift platform [7, 8] for the ATLAS experiment [9], the CMS VR application [10] for the CMS experiment [11], and the Super-KAVE program [12, 13] for the Super-K experiment [14]. The development of VR applications typically involves game engines, such as Unity [15] or Unreal Engine [16]. Unity is a cross-platform engine that supports the development of games, videos, animations, and architectural visualizations. It has been employed for detector visualization and event display in various HEP experiments, including BelleII, BESIII [17], ALICE [18], ATLAS [19], JUNO [20], and the Total Event Visualizer (TEV) of the CERN Media Lab [21], all of which achieve excellent visualization effects. The Jiangmen Underground Neutrino Observatory (JUNO) [22-24] is situated underground in southern China with a 650 meter rock overburden. The primary scientific goal of JUNO is to determine the neutrino mass hierarchy. Over an approximately seven-year operational ∗These authors contributed equally to this work. † Yu-Mei Zhang, ‡ Zheng-Yun You, period, JUNO is expected to determine the neutrino mass hierarchy with a significance of 3σ [25], and to measure the oscillation parameters ∆m2 31, ∆m2 21, and sin2 θ12, achieving a precision of 0.2% for ∆m2 31, 0.3% for ∆m2 21, and 0.5% for sin2 θ12 [26, 27], respectively. Additionally, JUNO experiment is capable of investigating various types of neutrinos, including earth neutrinos, atmospheric neutrinos, solar neutrinos and supernova neutrinos [22]. Its excellent energy resolution and large fiducial volume provide promising opportunities to explore numerous essential topics in neutrino physics. In this study, we develop a VR-based event display tool using Unity for JUNO. This software is compatible with various platforms through Head-Mounted Displays (HMDs) [28] and offers functionalities including the VR-based visualization of JUNO detector, event displays for different types of data, interfaces for reading and converting event data information, and spatial Spatial User Interface (Spatial UI) control features. The rest of this paper is organized as follows. In Section II, we introduce VR-based software for HEP experiments. In Section III, the software methodologies is described, including the JUNO VR framework, the data flow of detector geometry and event data conversion, as well as interaction methods with Spatial UI. The visualization of detector units and event data in the VR-based tool is introduced in Section IV. The potential for further applications is discussed in Section V. Finally, the performance of the software is introduced in Section VI. II. VISUALIZATION AND VR A. Unity and VR In HEP experiments, physicists typically develop detector descriptions and event visualization tools within offline software frameworks. These event display tools are usually built upon the widely-used HEP software, such as Geant4 [29] or ROOT [30], which provide user-friendly visualization capa19 Sep 2025 2 bilities that facilitate software development. With the upgrades to ROOT and its EVE package [31], the development of event display tools has become more efficient. Several recent HEP experiments, including ALICE, CMS [11], BESIII [32], JUNO [33, 34], and Mu2e [35], adopt ROOT EVE for developing event display software. However, due to the limited visualization technique support in ROOT, its display capabilities do not fully meet the diverse requirements of physicists, and most ROOT applications remain confined to the Linux platform. To enhance visualization quality, interactivity, and multiplatform support, several event display tools are developed based on external visualization software. Unity is widely applied in the field of HEP, being used in projects including BelleII, BESIII, ALICE, ATLAS, and JUNO. Unity is a professional video and game development engine based on C#, and visualization software built on Unity offers several advantages. • Impressive visualization quality. Unity, a widely adopted professional 3D engine in the industry, offers advanced visual capabilities that surpass those of traditional software used in HEP, such as ROOT. Additionally, its continuous updates enable HEP visualizations to stay aligned with the cutting-edge developments in graphics technology. • Cross-platform support. The comprehensive multiplatform support of Unity enables seamless export and deployment of projections across a range of operating systems, including Windows, Linux, macOS, iOS, Android, and web browsers. This functionality ensures that the same visualization project can be accessed across various platforms, minimizing the development effort and streamlining maintenance tasks. • High-quality VR rendering and performance optimization. Unity supports modern graphics technologies such as real-time lighting, global illumination, and physics-based rendering. Light behaves according to the principles of physics, including energy conservation and Fresnel reflections [36], resulting in more realistic and immersive graphical effects in VR. These features are crucial for enhancing details like lighting, shadows, textures, and environmental interactions, significantly improving the user's sense of immersion. Additionally, Unity optimizes VR performance by rendering separate images for each eye, providing a dualeye perspective while maintaining smooth rendering and minimizing motion blur and latency. • VR HMDs compatibility. Unity supports most popular VR HMDs, including Meta Quest 2 and Quest 3 [37], HTC Vive [38], Valve Index [39], and Vision Pro [40]. Through the extended reality interaction toolkit in Unity, developers can easily create interactive applications for various devices without device-specific coding. Additionally, Unity provides a fast turnaround during the development cycle. Projects can be executed immediately, running quickly on VR devices for easier debugging, without the need to compile and link executable files [41]. Compared to 3D-based event visualization software, VR technology significantly enhances the visual experience of the user. VR applications are typically conducted through HMDs. According to Steam VR hardware statistics [42], more than half of users utilize the Meta Quest 2 and Quest 3. These devices, based on the Android operating system, offer sufficient immersion and are widely used across various fields, including gaming, social interaction, and education. Equipped with accelerometers, gyroscopes, and cameras, these devices can track the user's head and hand movements, enabling interaction and navigation within virtual environments. Additionally, the controllers facilitate interaction with Spatial UI in the virtual environment. VR technology provides synthesized sensory feedback, creating a strong sense of immersion and presence within a simulated environment. Most HEP experiments are typically conducted in underground or restricted areas, which are typically inaccessible during data collection. VR technology enables the public to explore these experiments in an immersive environment to observe detector operations and event data collection. This offers a fundamental understanding of the types of scientific research being conducted in HEP, which is highly beneficial for both educational and outreach purposes. Furthermore, by simulating particle emissions and their interactions with detectors, VR provides physicists with an immersive platform for refining offline simulations and reconstruction software [43-46]. It can also enhance simulation accuracy. For JUNO, considering the deformation of the stainless steel truss, offsets need to be applied to the PMT positions based on limited survey data [47-49]. Overlap checks and position tuning using the VR event display tool will be particularly helpful. Additionally, VR enables physicists to analyze rare events as though they are physically present within the inner detector environment, which provides alternative approach for data analysis and may inspire creativity. B. VR application in HEP In recent years, VR applications have been developed in several HEP experiments event visualization and outreach. These software include Belle2VR [5] for the BelleII experiment, ATLASrift [7, 8] for the ATLAS experiment, and Super-KAVE [12, 13] for the Super-K experiment. Belle2VR is an interactive VR visualization tool developed with Unity, designed to represent subatomic particle physics. This application allows user to explore the BelleII detector and observe particle jets generated in high energy e+e-collisions. The Super-KAVE application immerses user in a scaled representation of the Super-K detector, allowing them to explore the virtual space, switch between event datasets, and change visualization modes [12, 13]. In addition to providing VR modes for exploring the detector and standard 3 event displays, the application features a supernova event visualization technique, simulating the conversion of a star into a supernova. This leads to the occurrence of thousands of neutrino events within approximately ten seconds. It serves as a valuable outreach tool, offering a new example of visualization techniques for various neutrino particle physics applications. ATLASrift, a VR application developed for the ATLAS experiment, is primarily used for data visualization and outreach [9]. User can move around and inside the detector, as well as explore the entire underground experimental cavern and its associated facilities, including shafts, service halls, passageways, scaffolds, and more. III. METHODOLOGIES VR technology provides an immersive experience. However, the development of comprehensive event display software utilizing VR for HEP experiments still involves significant challenges. The first challenge is to convert the detector geometry, typically based on Geant4 simulations, into a format such as FBX [50] that can be imported into Unity. Given that detectors usually consist of tens of thousands of components, manually creating the geometry would impose a significant workload. Another significant challenge is extracting and converting event information into a structure compatible with Unity. In HEP experiments, the fundamental information for event display is typically defined by the offline software and stored in ROOT format. However, because Unity does not support direct reading of ROOT files, a dedicated conversion process is required. Additionally, a bijective mapping is established to link the detector unit identifiers used in the offline software [51] with the names assigned to the corresponding geometries in Unity. This section introduces the software architecture and data flow in the JUNO VR program. We describe the process of detector geometry conversion, the exchange of essential event information from offline software to Unity, and the strategy of matching detector units. Additionally, we discuss the construction of the Spatial UI and provide an overview of its functionality. A. Software structure and data flow The event display software should provide visualization capabilities, including detector geometry, event data information at different levels, and interactive controls. For JUNO VR software visualization, the first step involves converting and importing the detector geometry and event data information into Unity for display, followed by the development of interactive controls. As shown in Figure 1, the JUNO event display software consists of four components. • Detector geometry conversion. The geometric models of the detector are constructed using Geant4 in the detector simulation, initially stored in a Geometry Description Markup Language (GDML) file [52]. The GDML file is then automatically converted to the FBX format using the GDML-FBX conversion tool [17, 53], which is compatible for import into Unity. • Event data conversion. The Event Data Model (EDM) [54] encompasses various types of event information exchanged between different components of JUNO online and offline software, including data acquisition, simulation, calibration, and reconstruction. The event information for JUNO VR event display is extracted from the offline software EDM [55]. By combining the detector identifier and Unity geometry name matching rules, the detector information is remapped, generating event information that Unity can directly import and conforms to the geometry hierarchy in Unity. • Detector and event information visualization. The detector geometry, simulation, and reconstruction information, as well as the hit information and their associations, are visualized in Unity. By adjusting the material properties and combining Unity's layers, lighting, and rendering effects, an immersive and outstanding visualization experience in VR mode is achieved. • Spatial UI and interactive control. The Spatial UI is designed to facilitate the visualization and interaction with the detector and event information. It includes the sub-detector geometry panel and the event display panel, which allow users to control the display of sub-detectors, switch between event types, and manage the event display process. Interactive control is enabled through the Meta Quest 3 controller, with distinct functions assigned to the joystick and various buttons. These functions include controlling the visibility of each panel, navigating within the 3D virtual detector environment, and switching perspectives. B. Detector geometry conversion The detector geometry in HEP experiments are typically complex, consisting of up to millions of detector units. The description of these detectors is commonly developed using specialized geometric languages, such as GDML and Detector Description for High-Energy Physics (DD4hep) [56, 57]. The JUNO experiment, along with BESIII, PHENIX [58], and LHCb [59], uses GDML to describe and optimize the geometry of detectors for conceptual design and offline software development. GDML is a detector description language based on Extensible Markup Language (XML) [60] that describes detector information through a set of textual tags and attributes, providing a persistent description of the detector. The geometry description files of detectors typically include essential information about the detector model, such as lists of materials, positions, rotations, solids, and the hierarchical structure of the detector. Since the GDML format does not directly support import into Unity, some previous HEP applications involving Unity 4 Fig. 1. The software framework and data flow in JUNO VR. typically required manual construction of geometric models. Given that HEP detectors are usually highly complex, the creation of 3D detector models in Unity becomes particularly challenging. However, Unity does support direct import of several 3D file formats, including FBX, DAE [61], DXF [62], and OBJ [63]. Among these, FBX stands out as a widely used 3D asset format, due to its ability to handle intricate scene structures. This includes not only geometry but also animations, materials, textures, lighting, and cameras, which makes it a highly suitable choice for HEP applications involving complex 3D models. A method that can automatically convert GDML or DD4hep to FBX format is essential for detector construction in Unity. Several researches have proposed automated methods for converting GDML files to FBX files, significantly facilitating Unity-based development. For instance, the BESIII collaboration group suggests using FreeCAD [64], a 3D CAD and modeling software, in conjunction with CAD data optimization software Pixyz [65], with the STEP [66] format as an intermediate conversion format [17]. The CMS collaboration group employ SketchUp software for auxiliary data conversion [67]. Recently, methods were also proposed to directly convert GDML files to FBX files [53]. This research, based on the above method, enables a fast and automatic conversion process from GDML to FBX, which can be completed in just a few minutes, and saves significant time in the conversion process. This approach becomes particularly beneficial during the recent geometric updates of JUNO detector at the commissioning stage, enabling the swift conversion of the updated FBX file, which includes the latest geometry model of the real detector units after installation. C. Event data conversion In HEP experiments, the event data is typically stored in files with binary raw data format or ROOT format. ROOT, an efficient data analysis framework, is widely adopted for high-performance data input and output operations. However, since Unity cannot directly read ROOT files, it is necessary to extract the required event information based on the EDM and convert it into a text format that Unity can process. The essential information for event display comprises three main components: detector unit hits, Monte Carlo (MC) truth, and reconstruction data. The detector unit hits include the hit time and hit charge for each detector unit like a PMT. MC truth provides detailed truth information such as simulated vertices and photon trajectories (including 3D coordinates and propagation with time), which facilitate a deeper analysis of particle direction and relative velocity. Reconstruction data typically contain the reconstructed vertex positions, energy information, and additional track information for muon events like direction. Together, these information serve as the foundation for developing event display functionalities and interactive control modules based on Spatial UI. Furthermore, the identifiers used for detector units in offline software may differ from the names of the geometric objects in Unity. In HEP experiments, the detector identifier system assigns a unique ID to each detector unit and play a critical role in various applications including data acquisition, simulation, reconstruction and analysis. Therefore, establishing an accurate mapping between the detector identifiers in offline software and the geometric objects like PMT in Unity is essential to ensure the accurate display of an event. Based on EDM readout rules and leveraging the mapping between the identifier module and the geometric objects in Unity, an automated readout and conversion interface is developed to export event display information. 5 For JUNO VR software, multiple types of datasets are provided, including radioactive background, Inverse Beta Decay (IBD) [68], cosmic ray muons and other types of events. The event display dataset is designed to encompass both simulated and real data event types. Simulated events are produced with the JUNO offline software to facilitate detector simulation, commissioning and the optimization of reconstruction algorithms. Since JUNO has not yet commenced formal data taking, real data events are instead obtained from the Data Challenge dataset [69], which has data structures identical to those expected during actual operation. With the event data conversion interface, the datasets with various types of data are ready to be displayed in the Unity based visualization and VR software. Fig. 2. The Spatial UI in JUNO VR. On the left is the JUNO VR event display control panel, and on the right is the sub-detector geometry control panel. D. Spatial UI and interactive control The Spatial UI serves as the interface facilitating interaction between user and the VR application. For JUNO VR project, we develop two Spatial UIs: the sub-detector geometry control panel and the event display control panel, as shown in Figure 2. The sub-detector geometry panel primarily controls the visualization attributes of the geometries of various subdetectors, including Central Detector (CD) large PMTs, CD small PMTs, Top Tracker, and water pool PMTs. Detailed information about the sub-detectors of JUNO is provided in Section IV A. In addition to the sensitive detectors like PMTs, an "Other structure" toggle controls the display of passive structures, such as the steel structure, the acrylic ball, the PMT support structures, and the liquid filling pipelines. Additionally, the "Data type" drop-down is used to switch between different types of events collected during real data-taking or from simulation. The "Photon trail mode" toggle enables the switching of display modes for photon paths, either represented by green lines or in a manner closely resembling particle motion. The event display panel is designed to implement the core functionality for event visualization, which includes a toggle for switching display mode between simulation and data types, a slider for controlling the display of an event with its timeline evolution, a drop-down for selecting different types of events, and a button to play the event animation. A "Draw Hit" button initiates the animation of the full event hit process, which plays within a period of time window, with the time slider moving in sync with the event timeline, enabling user to track the current time of the event. Interactive control is achieved through the use of controllers, gesture operations, eye-tracking, and other input methods in HMDs. The following discussion focuses on testing interactive control for the Meta Quest 3. For other HMDs, the cross-platform support provided by the extended reality interaction toolkit in Unity minimizes development differences between various devices. Simple adaptations based on the specific features of the HMDs are sufficient for operation. The controller buttons resemble a typical gamepad, with the addition of side buttons. The X&Y buttons on the left controller are used to control the visibility of the sub-detector geometry panel. When displayed, the position of this panel is based on the user's orientation and appears at the front left of the user's view. User can drag or hide the panel to avoid obstructing their view when visualizing events. The A&B buttons on the right controller are used to control the visibility of the event display panel. When displayed, the panel appears at the front right of the user's view. Based on the gyroscope and accelerometer hardware of the Meta Quest 3, these planes are always oriented perpendicular to the user's view orientation. Fig. 3. User perspective during motion while checking the display information of a simulated muon event in the JUNO VR application. The CD small PMTs are not shown. Detailed information about the sub-detectors of JUNO is provided in Section IV A. 6 The joystick on the left controller controls the user's 3D movement, based on both the controller's input and the user's view orientation. For example, when the user's head orientation is directed towards the upper right, pushing the joystick upwards move the user in the virtual space toward that direction. Figure 3 illustrates the user's viewpoint during motion in the JUNO VR application. The event depicted is a simulated muon event. Additional details presented in the figure are described in detail in Section IV. The joystick on the right controls the user's viewpoint direction. Additionally, the user can change their head orientation to switch perspectives. The side button is used for interaction confirmation. Furthermore, when interacting with the Spatial UI, if the controller's laser pointer touches the corresponding component, audio and highlight feedback are provided, making the interaction smoother for user control. IV. VISUALIZATION IN JUNO This section is dedicated to introduce the visualization effects in the JUNO VR, including detector geometry, hit distribution for different types of events, as well as MC true information and display of event reconstruction outputs. A. Detector units Fig. 4. Schematic View of the JUNO Detector. The schematic design of JUNO detector is illustrated in Figure 4 [23]. The detector includes the water pool, the CD [47], and the Top Tracker [70]. The CD is the heart of JUNO experiment and is filled with 20 ktons of liquid scintillator [71, 72] to serve as the target for neutrino detection. The liquid scintillator is housed within a spherical acrylic vessel, which has a thickness of 120 mm and an inner diameter of 35.4 meters. This vessel is supported by a spherical stainless steel structure with an inner diameter of 40.1 meters. To detect photons, the CD is equipped with a total of 17,612 20inch PMTs and 25,600 3-inch PMTs. Surrounding the CD is the water pool containing 35 ktons of highly purified water, which effectively shields the detector from external radioactivity originating from the surrounding rocks. The water pool is also instrumental in vetoing cosmic ray muons, with 2,400 20-inch PMTs deployed as part of the water Cherenkov detector. The Top Tracker, located at the top of the water pool, plays a key role in measuring and vetoing muon tracks [73, 74]. Fig. 5. JUNO detector in the VR application. As described in Section III B, the JUNO detector geometry is converted from the GDML file, and matched between the identifier module and Unity geometry for each detector unit. The visualization effects of the whole JUNO detector in VR application is shown in Figure 5. The light blue cylindrical structure represents the water pool, with the water pool PMTs positioned outward, as indicated by the yellow portion of the spherical structure. At the top of the water pool, the reddish-brown structure represents the Top Tracker detector. From the interior view in the JUNO VR, the spherical acrylic vessel is shown in light gray, as depicted in Figure 2, although it is close to fully transparent in reality to allow more photons to pass through. Surrounding this vessel is the stainless steel structure, shown as dark gray in Figure 5. The CD detector PMTs, oriented toward the center of the sphere, are designed such to receive photons with its photocathodes, so that only the white tail structures of every PMTs are visible in Figure 5. Owing to the hardware capabilities of Meta Quest 3, there is no requirement to optimize the grid of detector units or replace them with simplified geometric shapes. Most of the geometric details of the detector units are preserved, achieving effects that are difficult to accomplish in event displays based on ROOT. Additionally, for the detector units, in order to more closely replicate the effect of real PMTs, we have assigned different material properties to the detector units, including the visualization attributes such as color, reflectivity, 7 and metalicity, to achieve the best display effect. B. MC simulation event display MC simulation is crucial for detector design and assists physicists in evaluating the detector's performance and tuning reconstruction algorithms. There are various kinds of signal and backgrounds events in JUNO, while currently we primarily focus on the radioactive backgrounds, IBD signals, and muon events. The IBD event, νe + p →e+ + n, is the major signal event for detecting electron anti-neutrinos in the JUNO experiment [22, 23]. JUNO identifies and reconstructs IBD events by detecting signals from positron and neutron captures. This dual-signal characteristic helps effectively identify anti-neutrino signal events while suppressing the huge background events. Fig. 6. The event display for a simulated IBD event in JUNO VR application. The green lines represent the photon paths of the positron, while the red lines indicate the photon paths of the neutron. The yellow spheres represent PMTs that are not triggered, while the spheres with color gradient from light blue to blue indicate the PMTs with an increasing number of hits. For the IBD event, there are both positron and neutron signals, whose photon paths are displayed in green and red, respectively, as shown in Figure 6. The detector units that are triggered are color-coded from cyan to dark blue based on the number of hits in the event, with bluer colors indicating a higher number of hits. The PMTs that are not triggered are displayed in yellow by default. Furthermore, in the time evolution of an event, the color of fired PMTs changes with time according to the associated timing information. The neutroninduced photon paths are delayed by approximately 170 μs relative to those from the positron, and this delay can be visualized using the time slider in the JUNO VR environment. One major background event type is the cosmic-ray muon events. Muons are secondary particles produced by highenergy cosmic rays in the earth's atmosphere, and possess strong penetrating power. Despite JUNO being located 650 m deep underground, a small fraction of muons can still penetrate the overlying shielding and enter the detector, generating muon events. Fig. 7. The event display for a simulated muon event in JUNO VR application. The green lines represent the photons generated along the path of a muon penetrating the detector. The controllers and the lasers emitted from the controllers represent the user's interactive control. Figure 7 presents the event information for the simulated muon event. Photon trajectories are represented by light green lines. These paths gradually extend over time, depicting the propagation of photons. In the simulated event shown, the directions of these photon paths may change, indicating their interactions with the detector materials. For the muon event, as a muon penetrates the detector, it continuously produces photons while depositing its energy in the liquid scintillator. Fig. 8. Comparison of the reconstructed vertex (purple) with the weighted energy deposit vertex (green) and the particle production vertex (red) from MC truth in a simulated event. The yellow line shows the reconstruction bias. Event reconstruction plays a key role in JUNO data processing, reconstructing the vertex and energy of an event, which is essential in determining the neutrino mass hierarchy. For point-like events like IBD signals, almost all the photon paths originate from the same event vertex. Figure 8 shows the reconstructed vertex and the MC truth. The initial particle production vertex (red sphere), derived from MC truth, indicates where the positron is created. The weighted 8 energy deposit vertex (green sphere) marks the positron's annihilation point in the liquid scintillator. The reconstructed vertex (purple sphere) is produced by the event reconstruction algorithm. The reconstruction bias (light yellow line) represents the discrepancy between the reconstructed vertex and the energy deposit vertex. A shorter distance indicates a more accurate reconstructed vertex. In the ideal scenario, the reconstructed vertex will converge to the true vertex. C. Real data event display For the real-data event, we utilize the Data Challenge dataset [69], whose data structures and processing pipeline are identical to those employed during data taking. This ensures that the software will function seamlessly once the experiment enters formal operation. The event composition in this dataset is the same as that in the MC simulation, encompassing radioactive-background events, IBD signals, and muon events. Fig. 9. Display of a reconstructed muon event from datasets in the JUNO VR application. The translucent part represents the CD acrylic sphere and its supporting components. The magenta line indicates the reconstructed muon track by connecting the points where the muon enters and exits the JUNO detector. Figure 9 presents event information for a muon event derived from the real data events. The reconstructed muon travels through the detector along the magenta line. The left and right sides represent the reconstructed incident and exit points of the muon. A time offset is established by dividing the track distance by the speed of light. User can observe the trajectory of muon by the Spatial UI. Since the exact point of photon emission along the path cannot be determined, photon information is not displayed in this mode. Using the reconstructed hit time, the corresponding point on the trajectory is linked to the relevant PMT unit. Once the photon particles arrive at the PMT units, those triggered PMTs will change color accordingly. Moreover, by exploiting Unity's robust visualization capabilities, a specialized mode is developed to simulate photon paths using particle-like effects instead of simple line trajectories to display the propagation of particles more realistically. V. APPLICATIONS JUNO VR software provides an immersive interactive experience, allowing user to intuitively understand the detector structure and event information. Some features and applications of the visualization software are listed below. Data quality monitoring. The data quality monitoring system [75-78] is designed to identify data issues promptly, ensuring the acquisition of high-quality data. During the future data-taking phase, event information can be real-time and automatically extracted from the reconstructed files from the data quality monitoring system. Based on Unity-supported databases, such as SQLite, event information can be transmitted from the data quality monitoring server to JUNO VR software. This enables immersive visualization of detector operation status and event information during the datataking phase. For example, an animation of a real-time dataacquisition event is automatically played every 30 seconds. Through immersive visualization, shifters can easily monitor anomalies, such as hot or dead PMT channels. Physics analysis. Physical analysis involves in-depth research of neutrino events to extract physical parameters, validate theoretical models, search for rare signals and uncover new phenomena. This requires detailed analysis of large volumes of complex data. Through the VR interface, researchers can reconstruct an immersive view of the event in threedimensional space, allowing them to freely explore the data, observe event details from multiple perspectives, and identify potential patterns and anomalies. Outreach. For the HEP experiments, their complex theoretical and experimental contents are usually difficult for the public and students to understand. Based on the VR application, students can understand the structure of JUNO detector and the processing of signal and background events through interactive operations, thereby enhancing engagement and understanding of the physics and principle of the HEP experiments. And the visualization programs including VR stand out in the field of education and public outreach. Due to Unity's cross-platform support and compatibility with various HMDs, the completed project can be exported to different platforms and utilized with different HMDs, meeting the requirements of various outreach scenarios. VI. PERFORMANCE In experimental evaluations conducted on the mainstream VR device-the Meta Quest 3, the JUNO VR application is capable of processing a variety of event types and demonstrates sufficient computational performance. During testing, the device's CPU utilization remains below 70%, GPU utilization remains below 40%, and the display maintains a stable refresh rate of 72 frames per second. The software's interactive response primarily depends on the event type. For muon events, which contain a larger volume of hit information, the latency when switching between events is approximately 3 seconds; for IBD and radioactive background events, it is approximately 1 second. 9 The event display of the JUNO VR application undergoes rigorous testing, and the application is capable of processing both simulated and real data events. VII. SUMMARY The VR technology greatly enhances the visualization effects of HEP experiments. A JUNO VR application for detector and event visualization is developed using Unity. By converting GDML to FBX format, efficient construction of the complex detector geometry in Unity is achieved. An event data conversion interface is created based on matching the detector identifier module and the detector geometry hierarchy in Unity. Through the Spatial UIs, users can easily control the display of various subsystems for detector and event visualization. With the ongoing construction of the JUNO experiment, the VR event display software is successfully developed, and more features are expected to be added in the future updates. VR technology offers an immersive, interactive experience, and it holds great potential in areas such as offline software development, data taking, physics analysis, education and public outreach. LIST OF ABBREVIATIONS HEP High-energy Physics VR Virtual Reality JUNO Jiangmen Underground Neutrino Observatory TEV the Total Event Visualizer HMDs Head-Mounted Displays Spatial UI Spatial User Interface GDML Geometry Description Markup Language EDM Event Data Model DD4hep Detector Description for High-Energy Physics XML Extensible Markup Language MC Monte Carlo IBD Inverse Beta Decay CD Central Detector ACKNOWLEDGEMENTS This work was supported by the National Natural Science Foundation of China (Nos. 12175321, W2443004, 11975021, 11675275, and U1932101), Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA10010900), National Key Research and Development Program of China (Nos. 2023YFA1606000 and 2020YFA0406400), National College Students Science and Technology Innovation Project, and Undergraduate Base Scientific Research Project of Sun Yat-sen University. REFERENCES [1] HEP Software Foundation Collaboration. "A Roadmap for HEP Software and Computing R&D for the 2020s". Comput. Softw. Big Sci. 3.1 (2019), p. 7. . 1007/s41781-018-0018-8. [2] M. Bellis et al. "HEP Software Foundation Community White Paper Working Group - Visualization" (2018). [3] I. Wohlgenannt, A. Simons, and S. Stieglitz. "Virtual reality". Business & Information Systems Engineering 62 (2020), pp. 455-461. 020-00658-9. [4] M. Bender, T. Kuhr, and L. Piilonen. "Belle II virtual reality projects". EPJ Web Conf. 214 (2019). Ed. by A. Forti, L. Betev, M. Litmaath, et al., p. 02028. 1051/epjconf/201921402028. [5] Z. Duer, L. Piilonen, and G. Glasson. "Belle2VR: A Virtual-Reality Visualization of Subatomic Particle Physics in the Belle II Experiment". IEEE Computer Graphics and Applications 38.3 (2018), pp. 33-43. [6] Belle-II Collaboration. "Belle II Technical Design Report" (2010). . 48550 / arXiv . 1011 . 0352. [7] ATLAS Collaboration. "Virtual Reality and game engines for interactive data visualization and event displays in HEP, an example from the ATLAS experiment". EPJ Web Conf. 214 (2019). Ed. by A. Forti, L. Betev, M. Litmaath, et al., p. 02013. epjconf/201921402013. [8] I. Vukotic, E. Moyse, and R. M. Bianchi. "ATLASrift - a Virtual Reality application". Meeting of the APS Division of Particles and Fields. 2015. arXiv.1511.00047. [9] ATLAS Collaboration. "The ATLAS Experiment at the CERN Large Hadron Collider". JINST 3 (2008), S08003. . 1088 / 1748 - 0221 / 3 / 08 / S08003. [10] CMS Collaboration. Leveraging Virtual Reality for Visualising the CMS Detector. PoS (ICHEP2024) 1171. Available at: https://pos.sissa.it/476/ 1171/. Accessed on: June 16, 2025. [11] CMS Collaboration. "The CMS Experiment at the CERN LHC". JINST 3 (2008), S08004. . 1088/1748-0221/3/08/S08004. [12] B. Izatt, K. Scholberq, and R. P. McMahan. "SuperKAVE: An immersive visualization tool for neutrino physics". 2013 IEEE Virtual Reality (VR). 2013, pp. 75-76. [13] E. Izatt, K. Scholberg, and R. Kopper. "NeutrinoKAVE: An immersive visualization and fitting tool for neutrino physics education". 2014 IEEE Virtual Reality (VR). 2014, pp. 83-84. 6802062. 10 [14] Y. Suzuki. "The Super-Kamiokande experiment". Eur. Phys. J. C 79.4 (2019), p. 298. s10052-019-6796-2. [15] W. Goldstone. Unity game development essentials. Packt Publishing Ltd, 2009. [16] A. Sanders. An introduction to Unreal engine 4. AK Peters/CRC Press, 2016. [17] K.-X. Huang, Z.-J. Li, Z. Qian, et al. "Method for detector description transformation to Unity and application in BESIII". Nucl. Sci. Tech. 33.11 (2022), p. 142. [18] ALICE Collaboration. "ALICE: Physics performance report, volume I". J. Phys. G 30 (2004). Ed. by F. Carminati, P. Foka, P. Giubellino, et al., pp. 1517-1763. [19] J. Pequenao. CAMELIA webpage. Available at: https://pdgusers.lbl.gov/~pequenao/ camelia. Accessed on: March 15, 2025. [20] J. Zhu, Z.-Y. You, Y.-M. Zhang, et al. "A method of detector and event visualization with Unity in JUNO". JINST 14.01 (2019), T01007. 0221/14/01/T01007. [21] C. M. Lab. CERN TEV visualization framework webpage. Available at: https://gitlab.cern.ch/ CERNMediaLab/. Accessed on: March 15, 2025. [22] JUNO Collaboration. "JUNO physics and detector". Prog. Part. Nucl. Phys. 123 (2022), p. 103927. [23] JUNO Collaboration. "JUNO Conceptual Design Report" (2015). . 48550 / arXiv . 1508 . 07166. [24] F. An et al. "Neutrino Physics with JUNO". J. Phys. G 43.3 (2016), p. 030401. 3899/43/3/030401. [25] A. Abusleme et al. "Potential to identify neutrino mass ordering with reactor antineutrinos at JUNO". Chin. Phys. C 49.3 (2025), p. 033104. . 1088 / 1674-1137/ad7f3e. [26] JUNO Collaboration. "Sub-percent precision measurement of neutrino oscillation parameters with JUNO". Chin. Phys. C 46.12 (2022), p. 123001. . 1088/1674-1137/ac8bc9. [27] J. P. Athayde Marcondes de André, N. Chau, M. Dracos, et al. "Neutrino mass ordering determination through combined analysis with JUNO and KM3NeT/ORCA". Nucl. Instrum. Meth. A 1055 (2023), p. 168438. 168438. [28] J. E. Melzer and K. Moffitt. Head mounted displays. 1997. [29] GEANT4 Collaboration. "GEANT4 - A Simulation Toolkit". Nucl. Instrum. Meth. A 506 (2003), pp. 250303. [30] R. Brun, A. Gheata, and M. Gheata. "The ROOT geometry package". Nucl. Instrum. Meth. A 502 (2003). Ed. by V. A. Ilyin, V. V. Korenkov, and D. Perret-Gallix, pp. 676-680. 9002(03) 00541-2. [31] M. Tadel. "Overview of EVE: The event visualization environment of ROOT". J. Phys. Conf. Ser. 219 (2010). Ed. by J. Gruntorad and M. Lokajicek, p. 042055. [32] Z.-J. Li, M.-K. Yuan, Y.-X. Song, et al. "Visualization for physics analysis improvement and applications in BESIII". Front. Phys. (Beijing) 19.6 (2024), p. 64201. [33] Z.-Y. You, K.-J. Li, Y.-M. Zhang, et al. "A ROOT Based Event Display Software for JUNO". JINST 13.02 (2018), T02002. 02/T02002. [34] M.-H. Liao, K.-X. Huang, Y.-M. Zhang, et al. "A ROOT-based detector geometry and event visualization system for JUNO-TAO". Nucl. Sci. Tech. 36.3 (2025), p. 39. [35] Mu2e Collaboration. "Mu2e Technical Design Report" (2014). [36] Unity Technologies. Standard Shader. Available at: https : / / docs . unity3d . com / 2023 . 2 / Documentation / Manual / shader - StandardShader.html. Accessed on: March 15, 2025. [37] M. Aros, C. L. Tyger, and B. S. Chaparro. "Unraveling the Meta Quest 3: An Out-of-Box Experience of the Future of Mixed Reality Headsets". HCI International 2024 Posters. Cham: Springer Nature Switzerland, 2024, pp. 3-8. 61950-2_1. [38] HTC Corporation. HTC Vive Official Website. Available at: https://www.vive.com. Accessed on: March 15, 2025. [39] V. Corporation. Valve Index Official Website. Available at: https://www.valvesoftware.com/en/ index. Accessed on: March 15, 2025. [40] R.-Z. Cheng, N. Wu, M. Varvello, et al. "A First Look at Immersive Telepresence on Apple Vision Pro". Proceedings of the 2024 ACM on Internet Measurement Conference. IMC '24. Madrid, Spain: Association for Computing Machinery, 2024, pp. 555-562. 1145/3646547.3689006. [41] Unity Technologies. Unity User Manual. Available at: https : / / docs . unity3d . com / Manual / index.html. Accessed on: March 15, 2025. [42] V. Corporation. Steam Hardware & Software Survey. Available at: https://store.steampowered. com/hwsurvey. Accessed on: March 15, 2025. 11 [43] G.-H. Huang et al. "Improving the energy uniformity for large liquid scintillator detectors". Nucl. Instrum. Meth. A 1001 (2021), p. 165287. nima.2021.165287. [44] Z.-Y. Li et al. "Event vertex and time reconstruction in large-volume liquid scintillator detectors". Nucl. Sci. Tech. 32.5 (2021), p. 49. 021-00885-z. [45] Z. Qian et al. "Vertex and energy reconstruction in JUNO with machine learning methods". Nucl. Instrum. Meth. A 1010 (2021), p. 165527. nima.2021.165527. [46] Z.-Y. Li, Z. Qian, J.-H. He, et al. "Improvement of machine learning-based vertex reconstruction for large liquid scintillator detectors with multiple types of PMTs". Nucl. Sci. Tech. 33.7 (2022), p. 93. s41365-022-01078-y. [47] JUNO Collaboration. "The design and technology development of the JUNO central detector". Eur. Phys. J. Plus 139.12 (2024), p. 1128. s13360-024-05830-8. [48] T. Lin et al. "Simulation software of the JUNO experiment". Eur. Phys. J. C 83.5 (2023). [Erratum: Eur.Phys.J.C 83, 660 (2023)], p. 382. epjc/s10052-023-11514-x. [49] Z. Deng. "Status of JUNO Simulation Software". EPJ Web Conf. 245 (2020). Ed. by C. Doglioni, D. Kim, G. A. Stewart, et al., p. 02022. . 1051 / epjconf/202024502022. [50] Autodesk. FBX webpage. Available at: https : / / www . autodesk . com / products / fbx / overview. Accessed on: March 15, 2025. [51] C.-X. Wu and Z.-Y. You. "Detector identifier and geometry management system in JUNO experiment". PoS ICHEP2024 (2025), p. 1049. . 22323 / 1 . 476.1049. [52] R. Chytracek, J. McCormick, W. Pokorski, et al. "Geometry description markup language for physics simulation and analysis applications." IEEE Trans. Nucl. Sci. 53 (2006), p. 2892. 881062. [53] A. Iusupova and S. Nemnyugin. "Geometry import into virtual reality visualization engine for HEP experiments at BM@N". Nucl. Instrum. Meth. A 1067 (2024), p. 169619. 169619. [54] T. Li, X. Xia, X.-T. Huang, et al. "Design and Development of JUNO Event Data Model". Chin. Phys. C 41.6 (2017), p. 066201. 1137/ 41/6/066201. [55] JUNO Collaboration. "Modern Software Development for JUNO offline software". EPJ Web Conf. 295 (2024), p. 05015. . 1051 / epjconf / 202429505015. [56] M. Frank, F. Gaede, C. Grefe, et al. "DD4hep: A Detector Description Toolkit for High Energy Physics Experiments". J. Phys. Conf. Ser. 513 (2014). Ed. by D. L. Groep and D. Bonacorsi, p. 022010. 1742-6596/513/2/022010. [57] Z.-Y. Yuan et al. "Method for detector description conversion from DD4hep to Filmbox". Nucl. Sci. Tech. 35.9 (2024), p. 146. 01506-1. [58] PHENIX Collaboration. "PHENIX detector overview". Nucl. Instrum. Meth. A 499 (2003), pp. 469-479. [59] LHCb Collaboration. "The LHCb Detector at the LHC". JINST 3 (2008), S08005. . 1088 / 1748-0221/3/08/S08005. [60] T. Bray, J. Paoli, and C. Sperberg-McQueen. Extensible Markup Language (XML) 1.0. Available at: http: //www.w3.org/XML/1998/06/xmlspecreport-19980910.htm. Accessed on: March 15, 2025. [61] K. Group. COLLADA Document Schema and Reference (Version 1.5). Available at: https : / / www . khronos.org/collada/. Accessed on: March 15, 2025. [62] Autodesk. Drawing Exchange Format (DXF) Reference. Available at: https : / / archive . ph / 20121206003818/http://www.autodesk. com/dxf. Accessed on: March 15, 2025. [63] M. Reddy. Wavefront OBJ File Format. Available at: http://www.martinreddy.net/gfx/3d/ OBJ.spec. Accessed on: March 15, 2025. [64] F. Developers. FreeCAD webpage. Available at: https://www.freecadweb.org. Accessed on: March 15, 2025. [65] P. Software. Pixyz Studio Software. Available at: https : / / www . pixyz - software . com / studio. Accessed on: March 15, 2025. [66] S. Kemmerer. STEP: The Grand Experience. en. 1999. [67] T. Sakuma and T. McCauley. "Detector and Event Visualization with SketchUp at the CMS Experiment". J. Phys. Conf. Ser. 513 (2014). Ed. by D. L. Groep and D. Bonacorsi, p. 022032. 513/2/022032. [68] P. Vogel and J. F. Beacom. "Angular distribution of neutron inverse beta decay, anti-neutrino(e) + p -> e+ + n". Phys. Rev. D 60 (1999), p. 053003. 1103/PhysRevD.60.053003. [69] T. Lin and W.-Q. Yin. "Offline data processing in the First JUNO Data Challenge" (2024). . 48550/arXiv.2408.00959. 12 [70] JUNO Collaboration. "The JUNO experiment Top Tracker". Nucl. Instrum. Meth. A 1057 (2023), p. 168680. . 1016 / j . nima . 2023 . 168680. [71] M. Yu, W.-J. Wu, Y.-Y. Ding, et al. "A Monte Carlo method for Rayleigh scattering in liquid detectors". Rev. Sci. Instrum. 93.11 (2022), p. 113102. 1063/5.0119224. [72] M. Yu, W.-J. Wu, N. Peng, et al. "Measurements of Rayleigh ratios in linear alkylbenzene". Rev. Sci. Instrum. 93.6 (2022), p. 063106. 0091847. [73] K. Li, Z. You, Y. Zhang, et al. "GDML based geometry management system for offline software in JUNO". Nucl. Instrum. Meth. A 908 (2018), pp. 43-48. [74] S. Zhang, J.-S. Li, Y.-J. Su, et al. "A method for sharing dynamic geometry information in studies on liquidbased detectors". Nucl. Sci. Tech. 32.2 (2021), p. 21. [75] ATLAS collaboration. "ATLAS offline data quality monitoring". J. Phys. Conf. Ser. 219 (2010). Ed. by J. Gruntorad and M. Lokajicek, p. 042018. 1088/1742-6596/219/4/042018. [76] CMS collaboration. "CMS data quality monitoring: Systems and experiences". J. Phys. Conf. Ser. 219 (2010). Ed. by J. Gruntorad and M. Lokajicek, p. 072020. 072020. [77] J.-F. Hu, Y.-H. Zheng, X.-D. Sun, et al. "A data quality monitoring software framework for the BESIII experiment". Chin. Phys. C 36 (2012), pp. 62-66. [78] Daya Bay Collaboration. "Onsite data processing and monitoring for the Daya Bay Experiment". Chin. Phys. C 38 (2014), p. 086001. . 1088 / 1674 - 1137/38/8/086001.
|
2509.16286
|
What’s Not on the Plate? Rethinking Food Computing through
Indigenous Indian Datasets
Pamir Gogoi∗
Neha Joshi∗
Ayushi Pandey∗
Vivek Seshadri
pamir.gogoi@karya.in
neha@karya.in
ayushi@karya.in
Karya Inc.
Bengaluru, Karnataka, India
Deepthi Sudharsan
Kalika Bali
Microsoft Research
Bengaluru, Karnataka, India
Saransh Kumar Gupta∗
Lipika Dey
Partha Pratim Das
saransh.gupta@ashoka.edu.in
Ashoka University
Sonepat, Haryana, India
Abstract
This paper presents a multimodal dataset of 1,000 indigenous recipes
from remote regions of India, collected through a participatory
model involving first-time digital workers from rural areas. The
project covers ten endangered language communities in six states.
Documented using a dedicated mobile app, the data set includes
text, images, and audio, capturing traditional food practices along
with their ecological and cultural contexts. This initiative addresses
gaps in food computing, such as the lack of culturally inclusive,
multimodal, and community-authored data. By documenting food
as it is practiced rather than prescribed, this work advances inclu-
sive, ethical, and scalable approaches to AI-driven food systems and
opens new directions in cultural AI, public health, and sustainable
agriculture.
Keywords
Food Computing, Indigenous Recipes, Participatory AI, Multimodal
Dataset, Social Computing
1
Introduction
Food computing encompasses a wide range of computational tech-
niques designed to address food-related challenges across domains
such as medicine [3], nutrition [19], safety [20], and agriculture
[6]. Common research directions include the processing of massive
volumes of heterogeneous data, ranging from recipe books and food
images to cooking videos and food blogs. Advances in computer vi-
sion have enabled automatic recognition of dishes, ingredients, and
even cooking stages from visual inputs, which support dietary mon-
itoring, culinary education, and food documentation [13, 22, 36].
Similarly, natural language processing techniques have significantly
improved recipe understanding, ingredient substitution, and cross-
cultural food translation, powering applications in personalized
nutrition, food recommendation systems, and culinary knowledge
graphs [12, 43, 47].
Food in India is deeply embedded in everyday life—from rites
to medicine, and agriculture to socialization. Yet, despite this vast
and living knowledge system, there are only a few notable efforts
within the Indian context in the field of food computing. Research
initiatives like IIIT Delhi’s Computational Gastronomy Lab [8] and
∗Corresponding Authors
Ashoka University’s Food Computing Lab [11] are pioneering but
early steps toward modeling the unique cultural, linguistic, and
culinary landscape of Indian food traditions. Existing methods,
datasets, and Internet-based data collection fail to capture the socio-
ecological and cultural richness embedded in Indian home cooking.
With over 730 registered tribal groups, India offers a cultural con-
text that is dramatically underrepresented in current computational
resources. As a result, food computing suffers from a severe repre-
sentational bias.
We describe the documentation of 1,000 multimodal recipes
from remote, indigenous regions in India. These recipes are cap-
tured by first-time digital workers who serve as critical links be-
tween ancestral knowledge, their digitalization, and technology
platforms. The resulting dataset includes video, audio, images, text,
and translations, capturing the representation of Indian culinary
practice, suitable for alignment with the ontology and extension
of FKG.in—India’s first comprehensive food knowledge graph that
aims to integrate AI, domain-specific ontologies, and real-world
cooking intelligence with health, nutritional, and agroecological
information and contexts. While FKG.in is being developed inde-
pendently as an unrelated initiative, this paper brings together
contributors from both efforts to explore their convergence among
other potential directions.
This paper explains the key aspects of data collection and use
this dataset to highlight major gaps related to food computing in
India. We present this dataset as a starting point for identifying key
challenges, exploring potential applications, and underscoring the
need for scalable, inclusive, and nuanced food datasets rooted in
the Indian context.
2
Related Work and Gaps
Food computing research has produced several large-scale datasets
and models focused on visual recognition, cross-modal retrieval,
and recipe understanding [29]. Benchmark multimodal datasets
such as Recipe1M+ [28], Food-101 [4], and VireoFood-172 [21] have
advanced tasks like classification, image-to-recipe generation, and
ingredient prediction, but rely mainly on standardized, internet-
based content. Most efforts in Europe, China, and the US focus on
industrially structured recipes, processed food labels, or scientific
nutrition databases, resulting in cultural bias, limited ecological di-
versity, and underrepresentation of vernacular practices, especially
arXiv:2509.16286v1 [cs.CY] 19 Sep 2025
Gogoi et al.
in low-resource settings [30, 34, 39]. Recent work on culinary knowl-
edge graphs [15], dietary monitoring [32], fine-grained datasets
[26, 27], and culturally aware modeling [5, 45, 46] uses readily avail-
able data, but oral and hyperlocal food knowledge—especially in
India—remains largely absent from AI.
India-specific datasets include INDoRI [23] (~5,200 recipes, 18 re-
gions), image-based resources like the Indian Spices Image Dataset
[40] (~11,000 images, 19 spices), IndianFoodNet [1] (~5,500 images,
30 classes), and IndianFood10/20 [35], as well as structured compi-
lations like RecipeDB [2] and FKG.in [14]. While valuable, these are
mostly single-modal, crowd- or web-sourced, and lack cultural or
ecological grounding. CookingINWild [24] adds multimodal video
of Indian cooking, but remains limited to visual data without deeper
community-anchored or narrated content.
Beyond the core domain of food computing, adjacent fields such
as agroecology, ethnobotany, and digital humanities have long rec-
ognized the value of participatory knowledge production, especially
in the context of local food systems. Ethnographic research, such as
work on Himalayan indigenous communities [9], documents how
local food practices intertwine with ecology and climate change.
Similarly, multimedia archives of oral recipes and seasonal forag-
ing patterns exist in regional ethnobotany [25, 31], but are rarely
digitized or integrated into AI frameworks. These lay important
precedents for community-driven, multimodal documentation, but
they have yet to inform food computing datasets.
An overview of the existing work and remaining gaps in the food
knowledge documentation literature across selected dimensions is
detailed in Table 1. Our data collection process and the dataset tries
to address these gaps by creating a sensitive and ethical space for
documenting indigenous knowledge, culinary practices, and local
seasonal ingredients. We try to between globally commodified food
data and real-world, culturally rooted food knowledge by building
a multimodal, locally grounded, and community-led dataset that
not only fills data gaps but also opens new directions for equitable,
scalable, and culturally nuanced AI systems.
3
A Multimodal Recipe Dataset from
Indigenous Indian Regions
3.1
The Recipe Dataset Project
We adopt a ground-up digital methodology, where multimodal data
is collected from Indigenous communities. This expands the scope
of computational food data to include locally produced, culturally
situated knowledge, positioning community members as active
knowledge producers rather than passive subjects. The resulting
dataset is culturally grounded and computationally usable, bridging
participatory knowledge systems with machine-readable infrastruc-
tures.
3.1.1
Project Overview - . This initiative successfully collected
approximately 1,000 traditional recipes across 10 languages, con-
tributed by rural and tribal communities throughout India using
a specially designed mobile application. The dataset captures au-
thentic, underrepresented culinary knowledge from digital work-
ers—primarily rural women aged 15–45 who are native speakers
of endangered languages. Each participant contributed 2–5 tradi-
tional recipes via a dedicated app, earning INR 750 per recipe while
actively preserving their cultural heritage.
The data collection process involved recruiting local coordina-
tors for each language, who mobilized 30–50 participants per region.
Using the dedicated mobile application, contributors documented
their traditional recipes in a structured, multimodal format that
accommodated varying literacy levels through minimal text inter-
faces, audiovisual cues, and offline capabilities for areas with limited
connectivity. The statistics of the dataset is provided in Table 2.
3.1.2
Dataset Statistics - .
• Total Recipes: ~1,000 traditional recipes
• Languages Covered: 10 endangered languages
• Geographic Coverage: Jharkhand, Bihar, Assam, Manipur,
Arunachal Pradesh and Meghalaya
• Participants: 338 rural women across all regions
• Data Formats: Text, audio recordings, and images for each
recipe
• Languages Distribution:
– Jharkhand/Bihar: Ho, Khortha, Sadri, Santhali, Mundari
– Northeast India: Assamese, Meitei, Miju Mishmi, Bodo,
Kaman Mishmi
3.1.3
Data Structure and Format - . Each recipe entry in the data
set contains:
• Recipe Name: In native language with transliteration
• Ingredients List: Local names
• Step-by-step Instructions: Text and/or audio in native
language
• Process Images: Photo documentation of cooking steps
• Cultural Context: Traditional occasions, regional varia-
tions
• Nutritional Information: When available
• Audio Recordings: Native pronunciation and detailed ex-
planations
3.2
Sample Data
Table 2 summarizes the dataset’s culinary and linguistic richness
of 10 Indian communities across six states. The data highlights tra-
ditional recipes and unique ingredients, accompanied by extensive
collections of images and audio recordings in each community’s
native language. In total, the project documents 1,060 recipes and
11,398 unique ingredients, supported by over 46 hours of recorded
audio. Figure 1 provides a snapshot of the app interface used in the
project. A sample recipe data in Bodo is presented in Figure 2. Some
images from the indigenous dataset are presented in Figure 3.
4
Towards Building Inclusive Food Computing
Infrastructures
The multimodal community-authored recipe dataset in ten endan-
gered Indian languages offers a rare foundation for advancing in-
clusive food computing. Each entry is more than a recipe—it is a
culturally embedded narrative, captured through text, audio, and
images, with metadata on local ingredient variants, regional adap-
tations, household practices, and traditional techniques.
What’s Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets
Table 1: Overview of Existing Work and Remaining Gaps in Food Knowledge Documentation
No.
Dimension
Existing Work
Remaining Gaps
1
Modalities
Images, text, video action clips
Missing audio narration, step-by-step process video, field metadata
2
Cultural Grounding
Web/crowdsourced recipes
No indigenous or traditional area-based documentation
3
Scalability
Large, shallow datasets
Need for deep, qualitative data from sampled communities
4
Community Participation
Generic crowdsourced inputs
Missing specific, local, and lived food knowledge
5
Knowledge Integration
Limited use of ontologies/KGs
Direct linkage/extensibility to KGs and specialized AI reasoning systems
6
Contextual Factors
Formulaic and static recipes
Missing temporal, ecological, and oral transmissions of food practices
7
Language & Access
English-dominant data
Recipes in regional languages, with translations and transliteration
8
Ethical Data Practices
Extractive, unclear consent
Participatory design with attribution, consent, and fair labor models
Table 2: Languages, States, and Dataset Statistics for Regional Recipes in India
No.
Language
State (Districts)
Total Recipes
Unique Ingredients
Images
Audio Data (hh:mm:ss)
1
Mundari
Jharkhand
82
85
703
3:43:52
2
Sadri
Jharkhand
107
104
1103
2:32:19
3
Santhali
Bihar
120
98
1004
3:21:56
4
Khortha
Bihar
126
73
1129
4:05:33
5
Ho
Jharkhand
91
80
875
1:52:06
6
Assamese
Assam
113
148
1415
1:57:14
7
Bodo
Assam
95
190
1532
9:27:56
8
Meitei
Manipur
100
97
580
4:40:08
9
Khasi
Meghalaya
98
89
1928
6:47:40
10
Kaman Mishmi
Arunachal Pradesh
128
92
1129
7:33:27
–
Total
—
1060
1056
11,398
46:02:11
Figure 1: Data Collection App interface
It opens novel research directions at the intersection of AI, food
culture, and low-resource language preservation, while providing
structured data to inform domain-specific ontologies—e.g., concepts
like “fermented millet dishes during monsoon,” or “pregnancy-specific
recipes shared through maternal oral traditions.” Such high-context,
multimodal knowledge is essential for building systems that under-
stand food not just nutritionally or procedurally, but relationally
and culturally. What follows is a discussion of how this dataset is
being scaled and its potential future directions.
Figure 2: Raw Recipe Data in Bodo
4.1
Indian Foodways: Scaling Data Collection
Beyond Recipes
The initial dataset of 1,000 indigenous recipes proved a successful
proof of concept for real-world culinary data collection in India,
but it captures only a narrow slice of the country’s vast, decen-
tralized food knowledge systems. India’s culinary practices are
deeply rooted in local agroecologies, health beliefs, ritual tradi-
tions, seasonal behaviors, and socioeconomic realities—dimensions
that surface-level documentation of ingredients and cooking steps
cannot adequately capture. To better capture the diversity and
Gogoi et al.
(a) Raw Purple Taro (Colocasia
esculenta)
(b) Boo Shat (Kaman-Mishmi
dish)
(c) Red Ant Eggs (Oecophylla
smaragdina)
(d) Amaltas (Cassia
fistula)
Figure 3: Sample Images from the Indigenous Dataset
context of Indian foodways, we plan to expand our dataset to un-
derdocumented indigenous regions, especially those affected by
urbanization, climate change, or marginalization. The next phase
will use a redesigned questionnaire with 100+ questions (up from
15) across nine modules—preparation, techniques, consumption,
health, heritage, agriculture, ecology, economy, and cultural narra-
tives. These modules will document beliefs about food-as-medicine,
ingredient seasonality and sourcing, food-related livelihoods, and
stories that convey emotional and intergenerational knowledge.
This framework creates a living archive of food systems that
captures the cultural, ecological, and economic meanings of food
beyond recipes, enabling applications in public health, sustain-
able agriculture, cultural preservation, and policy. The redesigned
questionnaire uses emerging food ontologies to elicit machine-
interpretable responses from oral traditions, ensuring consistent
encoding of context, intent, timing, and health beliefs across lan-
guages and paving the way for integration with reasoning systems
like FKG.in and other domain-specific knowledge graphs.
4.2
FKG.in Integration: Enriching the Indian
Food Knowledge Graph
A compelling real-world application of our culinary dataset is its
potential integration with infrastructures such as FKG.in [14], In-
dia’s first dedicated food knowledge graph. The knowledge graph
format — particularly as implemented in FKG.in — is well suited to
capture the multifaceted, relational knowledge emerging from this
dataset, whether in its current form or as it scales.
Unlike flat recipe databases or “universal” nutrition charts, knowl-
edge graphs can accommodate structured and semi-structured data
across domains, enabling connections between ingredients, regional
names, ecological sourcing, seasonal patterns, cultural meanings,
and health perceptions. Contextual data such as “prepared only
during monsoon,” “consumed after childbirth,” or “leaf used as a plate
during harvest rituals” cannot be adequately represented without
a semantic framework. Integration with FKG.in ensures usability,
interoperability, and long-term impact across food computing, nu-
trition, sustainability, and cultural AI.
So far, FKG.in has focused on building a foundational structure
linking ingredients, processes, nutrition, and dish names. However,
its capacity for reasoning, multilingual querying, and inferencing
across cultural, health or agroecological axes has been limited by
the absence of deeply contextual, field-collected data — precisely
the gap this dataset fills. It enables FKG.in to encode not only what
is cooked, but why, by whom, when, and under what conditions.
This unlocks applications such as personalized health interven-
tions rooted in traditional knowledge; culturally sensitive food
recommendations; climate-aware dietary planning; and AI-driven
storytelling grounded in culinary heritage.
For example, a culturally aware cooking assistant enabled by
FKG.in could infer that a fermented rice recipe shared by a Meitei
woman is not only nutritionally probiotic, but also tied to postpar-
tum customs and seasonal harvests. While our dataset and FKG.in
are separate initiatives by distinct organizations, their convergence
represents a major advance in real-world, AI-driven food systems
modeling — where culturally rich, ecologically grounded, and so-
cially meaningful data finds its most powerful expression through
knowledge graph integration.
4.3
Broader Applications: Emerging
Opportunities in AI, Food, and Culture
Beyond integration with infrastructures like FKG.in, the dataset
unlocks applications across food computing, language technologies,
cultural AI, and health-oriented reasoning. Key emerging directions
include:
• Procedural Soundness Checks with LLMs (Large Lan-
guage Models): LLMs can act as evaluators, validating
recipe structure by representing them as action graphs and
What’s Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets
detecting missing steps, sequencing errors, or implicit de-
pendencies. This improves data quality, ensures complete-
ness and assesses ease of reproduction for novice users
[10, 33, 42].
• Cultural Machine Translation for Endangered Lan-
guages: The corpus described in Section 3.1 enables train-
ing culturally grounded, domain-specific machine transla-
tion models for Indian food recipes. These models can be
designed to capture procedural nuance and rich culinary
context, enabling accurate, culturally sensitive translation
across endangered languages [44].
• Cultural Recipe Corpora for Benchmarking and Su-
pervised Fine-Tuning (SFT): The multimodal corpus of
underrepresented Indian recipes can power culturally rich
knowledge bases, diverse evaluation datasets for bench-
marking models (e.g., QA, reasoning tasks, guessing games
like MMLU [16] or BoolQ [7]), and SFT for domain-specific,
multimodal tasks. Such models could serve as culturally
aware virtual assistants or educational tools for culinary
learning, preservation, and cross-cultural exchange [37, 41].
• Multimodal Understanding and Generation: Combin-
ing text, audio, and images enables applications such as
generating step-by-step instructional visuals or videos from
recipe text, completing cooking videos from an image or
recipe prompt, recognizing indigenous ingredients from
images, identifying dishes from images, or inferring inter-
mediate steps from visual or textual cues [18, 26, 33].
• Reasoning-Driven Cooking Assistants: Culturally
grounded data supports contextual reasoning assistants that
could explain why a step matters, what happens if altered,
or how to adapt recipes to allergen or ingredient availability
constraints - combining common sense reasoning, cultural
sensitivity, and procedural awareness. These systems can
help users navigate real-world cooking constraints while
supporting the creative adaptation and evolution of tradi-
tional recipes across diverse and global culinary contexts
[17, 38].
5
Challenges and Open Questions
Some valuable learnings from this initiative, which may guide sim-
ilar future efforts, are outlined below:
• Trust-building: Initial skepticism stemmed from disbelief
that digital work—particularly involving recipes and sto-
rytelling—could result in real payment. Many participants
were unfamiliar with such tasks being seen as valuable or
remunerative, necessitating reassurance and clear commu-
nication through trusted local coordinators.
• Seasonality constraints:Many tribal and rural recipes
rely on foraged or locally grown ingredients that are only
available during specific seasons. As a result, a significant
number of off-season recipes and ingredients could not be
captured in the initial dataset.
• Infrastructure limitations: Unreliable internet connec-
tivity and power supply in remote villages posed challenges
for uploading multimodal data. To mitigate this, the mo-
bile application was designed to support offline recording,
requiring internet access only during final submission.
• Payment equity: Recipe complexity varied dramatically
— from elaborate dishes involving early-morning foraging
and hours of ancestral techniques, to simpler but equally au-
thentic preparations. Determining fair compensation while
honoring the cultural value of all contributions proved to
be a nuanced challenge.
• Data quality: Ensuring consistency across diverse lan-
guages, cooking styles, and literacy levels required a two-
layer validation process involving local coordinators and
project managers. This system helped ensure data com-
pleteness and cross-regional coherence, ultimately strength-
ening the participatory framework and underscoring the
importance of patient, hyper-local approaches to digital
inclusion.
6
Conclusion
The present work describes a dataset of 1,000 indigenous recipes,
collected in ten languages in Eastern and Northeast India. Data col-
lection and remuneration were facilitated through a smartphone-
based data collection platform. The collection process involved
unprecedented outreach efforts, including the participation of lan-
guage experts, community coordinators, and data scientists. The
primary data contributors were women who are native speakers of
these ten languages.
Although originally designed for cultural inclusion and creation
of knowledge bases, the dataset serves as a valuable resource for
food computing by presenting an exhaustive list of cooking prac-
tices - such as rare ingredients, seasonal traditions, rituals, and
nutritional content - unique to home cooking and often overlooked
in industrially structured recipe formats. The data set also holds
potential for developing culturally aware large-language models
(LLMs), while the English translations can serve as parallel cor-
pora for domain-specific machine translation. Future work will
involve expanding the dataset to include additional rare and under-
resourced languages of India.
Acknowledgments
We gratefully acknowledge the local coordinators and contributors
whose time, knowledge, and effort made this work possible. We
are thankful to the Mphasis AI & Applied Tech Lab at Ashoka - a
collaboration between Ashoka University and Mphasis Limited -
for their support.
References
[1] Ritu Agarwal, Tanupriya Choudhury, Neelu J.Ahuja, and Tanmay Sarkar. 2023.
IndianFoodNet: Detecting Indian Food Items Using Deep Learning. International
Journal of Computational Methods and Experimental Measurements 11 (12 2023).
doi:10.18280/ijcmem.110403
[2] Devansh Batra, Nirav Diwan, Utkarsh Upadhyay, Jushaan Singh Kalra,
Tript Sharma, Aman Kumar Sharma, Dheeraj Khanna, Jaspreet Singh Mar-
wah, Srilakshmi Kalathil, Navjot Singh, Rudraksh Tuwani, and Ganesh
Bagler. 2020.
RecipeDB: a resource for exploring recipes.
Database
2020 (11 2020), baaa077.
arXiv:https://academic.oup.com/database/article-
pdf/doi/10.1093/database/baaa077/34526636/baaa077.pdf doi:10.1093/database/
baaa077
[3] Almog Boanos, Anitha Sri Mothukuri, Kaitlin A Goettsch, and Dhundy K Bastola.
2017. Investigation and utilization of personal food computers for research in
Gogoi et al.
drug development and biomedicine. In 2017 IEEE International Conference on
Bioinformatics and Biomedicine (BIBM). IEEE, 2229–2231.
[4] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. 2014. Food-101 – Mining
Discriminative Components with Random Forests. In European Conference on
Computer Vision.
[5] Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li
Zhou, Megan Dare, Lucia Donatelli, and Daniel Hershcovich. 2024. Cultural
adaptation of recipes. Transactions of the Association for Computational Linguistics
12 (2024), 80–99.
[6] Eduardo Castelló Ferrer, Jake Rye, Gordon Brander, Tim Savas, Douglas Cham-
bers, Hildreth England, and Caleb Harper. 2018. Personal Food Computer: A
new device for controlled-environment agriculture. In Proceedings of the Future
Technologies Conference. Springer, 1077–1096.
[7] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael
Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty
of natural yes/no questions. arXiv preprint arXiv:1905.10044 (2019).
[8] IIIT-Delhi Complex Systems Lab. 2025. Complex Systems Laboratory: Computa-
tional Gastronomy - Making Food Computable with Data Science and Artificial
Intelligence. Web page. https://cosylab.iiitd.edu.in/ Foundational work in com-
putational gastronomy, flavor networks, RecipeDB, FlavorDB, SpiceRx, DietRx.
[9] Suraj Das and Anindya Mishra. 2022. Dynamics of indigenous community’s
food and culture in the time of climate change in the Himalayan region. Journal
of Ethnic Foods 9 (01 2022). doi:10.1186/s42779-022-00118-7
[10] Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, and Rob Miller.
2024. PizzaCommonSense: Learning to Model Commonsense Reasoning about
Intermediate Steps in Cooking Recipes. arXiv:2401.06930 [cs.CL] https://arxiv.
org/abs/2401.06930
[11] Ashoka University Food Computing Lab. 2025. Food Computing Lab: Knowledge
Graphs, Food, Data and AI for Indian Food Systems. Web page. https://fkg-
india.github.io/ Interdisciplinary research in food knowledge graphs, health,
sustainability, vernacular cuisine modeling.
[12] Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, and Dane Bell.
2014. Analyzing the language of food on social media. In 2014 IEEE International
Conference on Big Data (Big Data). IEEE, 778–783.
[13] Juliana Freitas Santos Gomes and Fabiana Rodrigues Leta. 2012. Applications
of computer vision techniques in the agriculture and food industry: a review.
European Food Research and Technology 235, 6 (2012), 989–1000.
[14] Saransh Kumar Gupta, Lipika Dey, Partha Pratim Das, and Ramesh Jain. 2024.
Building FKG.in: a Knowledge Graph for Indian Food. doi:10.48550/arXiv.2409.
00830 arXiv preprint arXiv:2409.00830.
[15] Steven Haussmann, Oshani Seneviratne, Yu Chen, Yarden Ne’eman, James
Codella, Ching-Hua Chen, Deborah L. McGuinness, and Mohammed J. Zaki. 2019.
FoodKG: A Semantics-Driven Knowledge Graph for Food Recommendation. In
The Semantic Web – ISWC 2019: 18th International Semantic Web Conference,
Auckland, New Zealand, October 26–30, 2019, Proceedings, Part II (Auckland, New
Zealand). Springer-Verlag, Berlin, Heidelberg, 146–162. doi:10.1007/978-3-030-
30796-7_10
[16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn
Song, and Jacob Steinhardt. 2020. Measuring massive multitask language under-
standing. arXiv preprint arXiv:2009.03300 (2020).
[17] Tianyi Hu, Maria Maistro, and Daniel Hershcovich. 2024. Bridging Cultures
in the Kitchen: A Framework and Benchmark for Cross-Cultural Recipe Re-
trieval. In Proceedings of the 2024 Conference on Empirical Methods in Natural
Language Processing, Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (Eds.).
Association for Computational Linguistics, Miami, Florida, USA, 1068–1080.
doi:10.18653/v1/2024.emnlp-main.61
[18] Tenghao Huang, Donghee Lee, John Sweeney, Jiatong Shi, Emily Steliotes,
Matthew Lange, Jonathan May, and Muhao Chen. 2024. FoodPuzzle: Developing
Large Language Model Agents as Flavor Scientists. arXiv:2409.12832 [cs.CL]
https://arxiv.org/abs/2409.12832
[19] Shuqiang Jiang. 2024. Food Computing for Nutrition and Health. In 2024 IEEE
40th International Conference on Data Engineering Workshops (ICDEW). IEEE,
29–31.
[20] Cangyu Jin, Yamine Bouzembrak, Jiehong Zhou, Qiao Liang, Leonieke M Van
Den Bulk, Anand Gavai, Ningjing Liu, Lukas J Van Den Heuvel, Wouter Hoen-
derdaal, and Hans JP Marvin. 2020. Big Data in food safety-A review. Current
Opinion in Food Science 36 (2020), 24–32.
[21] Chong-wah NGO Jing-jing Chen. 2016. Deep-based Ingredient Recognition for
Cooking Recipe Retrival. ACM Multimedia (2016).
[22] Vijay Kakani, Van Huan Nguyen, Basivi Praveen Kumar, Hakil Kim, and
Visweswara Rao Pasupuleti. 2020. A critical review on computer vision and
artificial intelligence in food industry. Journal of Agriculture and Food Research 2
(2020), 100033.
[23] Sandeep Khanna, Chiranjoy Chattopadhyay, and Suman Kundu. 2023.
IN-
DoRI: Indian Dataset of Recipes and Ingredients and its Ingredient Network.
arXiv:2309.10403 [cs.SI] https://arxiv.org/abs/2309.10403
[24] Sandeep Khanna, Shreya Goyal, Chiranjoy Chattopadhyay, and Suman Kundu.
2024. CookingINWild: Unleashing the Challenges of Indian Cuisine Cooking
Videos for Action Recognition. In Proceedings of the 7th Joint International Con-
ference on Data Science & Management of Data (11th ACM IKDD CODS and 29th
COMAD) (Bangalore, India) (CODS-COMAD ’24). Association for Computing
Machinery, New York, NY, USA, 222–226. doi:10.1145/3632410.3632429
[25] Banisetti Dileepu Kumar and Nagendra Prasad Kosuri. 2023. Ethnobotanical
Research in the Digital Age: Harnessing Technology for Data Collection and
Analysis. International Journal of Indigenous Herbs and Drugs 8, 4 (2023), –.
doi:10.46956/ijihd.v8i4.466 Article available under CC BY-NC 4.0.
[26] Wenyan Li, Xinyu Zhang, Jiaang Li, Qiwei Peng, Raphael Tang, Li Zhou, Wei-
jia Zhang, Guimin Hu, Yifei Yuan, Anders Søgaard, Daniel Hershcovich, and
Desmond Elliott. 2024.
FoodieQA: A Multimodal Dataset for Fine-Grained
Understanding of Chinese Food Culture.
arXiv:2406.11030 [cs.CL]
https:
//arxiv.org/abs/2406.11030
[27] Jabez Magomere, Shu Ishida, Tejumade Afonja, Aya Salama, Daniel Kochin,
Yuehgoh Foutse, Imane Hamzaoui, Raesetje Sefala, Aisha Alaagib, Samantha
Dalal, et al. 2025. The World Wide recipe: A community-centred framework for
fine-grained data collection and regional bias operationalisation. In Proceedings of
the 2025 ACM Conference on Fairness, Accountability, and Transparency. 246–282.
[28] Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf
Aytar, Ingmar Weber, and Antonio Torralba. 2019. Recipe1M+: A Dataset for
Learning Cross-Modal Embeddings for Cooking Recipes and Food Images. IEEE
Trans. Pattern Anal. Mach. Intell. (2019).
[29] Weiqing Min, Shuqiang Jiang, Linhu Liu, Yong Rui, and Ramesh Jain. 2019. A
Survey on Food Computing. ACM Comput. Surv. 52, 5, Article 92 (Sept. 2019),
36 pages. doi:10.1145/3329168
[30] Weiqing Min, Chunlin Liu, Leyi Xu, and Shuqiang Jiang. 2022. Applications of
knowledge graphs for food science and industry. Patterns 3, 5 (2022), 100484.
doi:10.1016/j.patter.2022.100484
[31] Tawseef Ahmad Mir, Musheerul Hassan, Muatasim Jan, Muhammad Shoaib
Amjad, Muhammad Abdul Aziz, Andrea Pieroni, Ivana Vitasović-Kosić, and
Rainer W. Bussmann. 2024. Traditional culinary uses of wild plants in the Kashmir
Himalayas: ethnobotanical documentation and cultural insights. Journal of
Ethnobiology and Ethnomedicine 20, 1 (2024), 18. doi:10.1186/s13002-024-00707-7
Part of the collection: Local knowledge systems and ecological transition.
[32] Austin Myers, Nick Johnston, Vivek Rathod, Anoop Korattikara, Alex Gorban,
Nathan Silberman, Sergio Guadarrama, George Papandreou, Jonathan Huang,
and Kevin Murphy. 2015. Im2Calories: Towards an Automated Mobile Vision
Food Diary. In Proceedings of the 2015 IEEE International Conference on Computer
Vision (ICCV) (ICCV ’15). IEEE Computer Society, USA, 1233–1241. doi:10.1109/
ICCV.2015.146
[33] Taichi Nishimura, Suzushi Tomori, Hayato Hashimoto, Atsushi Hashimoto, Yoko
Yamakata, Jun Harashima, Yoshitaka Ushiku, and Shinsuke Mori. 2020. Visual
Grounding Annotation of Recipe Flow Graph. In Proceedings of the Twelfth
Language Resources and Evaluation Conference, Nicoletta Calzolari, Frédéric
Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck,
Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo,
Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language
Resources Association, Marseille, France, 4275–4284. https://aclanthology.org/
2020.lrec-1.527/
[34] Marianna Obrist, Patrizia Marti, Carlos Velasco, Yunwen Tu, Takuji Narumi, and
Naja L Holten Møller. 2018. The future of computing and food. In Proceedings of
the 2018 international conference on advanced visual interfaces. 1–3.
[35] Deepanshu Pandey, Purva Parmar, Gauri Toshniwal, Mansi Goel, Vishesh
Agrawal, Shivangi Dhiman, Lavanya Gupta, and Ganesh Bagler. 2022.
Ob-
ject Detection in Indian Food Platters using Transfer Learning with YOLOv4.
arXiv:2205.04841 [cs.CV] https://arxiv.org/abs/2205.04841
[36] Shayan Rokhva, Babak Teimourpour, and Amir Hossein Soltani. 2024. Computer
vision in the food industry: Accurate, real-time, and automatic food recognition
with pretrained MobileNetV2. Food and Humanity 3 (2024), 100378.
[37] David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy
Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abza-
liev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian
Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio Villa-
Cueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan,
Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign
Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise
Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Joce-
lyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe,
Luciana Benotti, Luis Fernando D’Haro, Marcelo Viridiano, Marcos Estecha-
Garitagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie
Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid
Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori,
Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi,
Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora,
Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Teresa Clifford,
Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo,
Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana
Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, and Alham Fikri Aji. 2024.
What’s Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets
CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark.
arXiv:2406.05967 [cs.CV] https://arxiv.org/abs/2406.05967
[38] Sola Shirai, Oshani Seneviratne, Minor Gordon, Ching-Hua Chen, and Deborah
Mcguinness. 2021. Identifying Ingredient Substitutions Using a Knowledge Graph
of Food. Frontiers in Artificial Intelligence 3 (01 2021). doi:10.3389/frai.2020.621766
[39] Quin Thames, Arjun Karpur, Wade Norris, Fangting Xia, Liviu Panait, Tobias
Weyand, and Jack Sim. 2021. Nutrition5k: Towards automatic nutritional under-
standing of generic food. In Proceedings of the IEEE/CVF conference on computer
vision and pattern recognition. 8903–8911.
[40] Sandip Thite, Deepali Godse, Kailas Patil, Prawit Chumchu, and Alfa Nyandoro.
2024. Facilitating spice recognition and classification: An image dataset of Indian
spices. Data in Brief 57 (2024), 110936. doi:10.1016/j.dib.2024.110936
[41] Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, David Anugraha,
Rifki Afina Putri, Yutong Wang, Adam Nohejl, Ubaidillah Ariq Prathama, Ned-
jma Ousidhoum, Afifa Amriani, Anar Rzayev, Anirban Das, Ashmari Pramodya,
Aulia Adila, Bryan Wilie, Candy Olivia Mawalim, Ching Lam Cheng, Daud
Abolade, Emmanuele Chersoni, Enrico Santus, Fariz Ikhwantri, Garry Kuwanto,
Hanyang Zhao, Haryo Akbarianto Wibowo, Holy Lovenia, Jan Christian Blaise
Cruz, Jan Wira Gotama Putra, Junho Myung, Lucky Susanto, Maria Angel-
ica Riera Machin, Marina Zhukova, Michael Anugraha, Muhammad Farid Adi-
lazuarda, Natasha Santosa, Peerat Limkonchotiwat, Raj Dabre, Rio Alexander
Audino, Samuel Cahyawijaya, Shi-Xiong Zhang, Stephanie Yulia Salim, Yi Zhou,
Yinxuan Gui, David Ifeoluwa Adelani, En-Shiun Annie Lee, Shogo Okada, Ayu
Purwarianti, Alham Fikri Aji, Taro Watanabe, Derry Tanti Wijaya, Alice Oh,
and Chong-Wah Ngo. 2025. WorldCuisines: A Massive-Scale Benchmark for
Multilingual and Multicultural Visual Question Answering on Global Cuisines.
arXiv:2410.12705 [cs.CL] https://arxiv.org/abs/2410.12705
[42] Yoko Yamakata, Shinsuke Mori, and John Carroll. 2020. English Recipe Flow
Graph Corpus. In Proceedings of the Twelfth Language Resources and Evaluation
Conference, Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri,
Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Mae-
gaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, and Stelios
Piperidis (Eds.). European Language Resources Association, Marseille, France,
5187–5194. https://aclanthology.org/2020.lrec-1.638/
[43] Zhongqi Yang, Elahe Khatibi, Nitish Nagesh, Mahyar Abbasian, Iman Azimi,
Ramesh Jain, and Amir M. Rahmani. 2024. ChatDiet: Empowering Personalized
Nutrition-Oriented Food Recommender Chatbots through an LLM-Augmented
Framework. arXiv:2403.00781 [cs.IR] doi:10.1016/j.smhl.2024.100465
[44] Binwei Yao, Ming Jiang, Tara Bobinac, Diyi Yang, and Junjie Hu. 2024. Bench-
marking Machine Translation with Cultural Awareness. arXiv:2305.14328 [cs.CL]
https://arxiv.org/abs/2305.14328
[45] Qing Zhang, David Elsweiler, and Christoph Trattner. 2020. Visual Cultural
Biases in Food Classification. Foods 9, 6 (2020). doi:10.3390/foods9060823
[46] Li Zhou, Taelin Karidi, Wanlong Liu, Nicolas Garneau, Yong Cao, Wenyu Chen,
Haizhou Li, and Daniel Hershcovich. 2024. Does mapo tofu contain coffee?
probing llms for food-related cultural knowledge. arXiv preprint arXiv:2404.06833
(2024).
[47] Pengfei Zhou, Weiqing Min, Chaoran Fu, Ying Jin, Mingyu Huang, Xi-
angyang Li, Shuhuan Mei, and Shuqiang Jiang. 2024.
FoodSky: A Food-
oriented Large Language Model that Passes the Chef and Dietetic Examination.
arXiv:2406.10261 [cs.CL] https://arxiv.org/abs/2406.10261
|
What's Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets Pamir Gogoi∗ Neha Joshi∗ Ayushi Pandey∗ Vivek Seshadri Karya Inc. Bengaluru, Karnataka, India Deepthi Sudharsan Kalika Bali Microsoft Research Bengaluru, Karnataka, India Saransh Kumar Gupta∗ Lipika Dey Partha Pratim Das Ashoka University Sonepat, Haryana, India Abstract This paper presents a multimodal dataset of 1,000 indigenous recipes from remote regions of India, collected through a participatory model involving first-time digital workers from rural areas. The project covers ten endangered language communities in six states. Documented using a dedicated mobile app, the data set includes text, images, and audio, capturing traditional food practices along with their ecological and cultural contexts. This initiative addresses gaps in food computing, such as the lack of culturally inclusive, multimodal, and community-authored data. By documenting food as it is practiced rather than prescribed, this work advances inclusive, ethical, and scalable approaches to AI-driven food systems and opens new directions in cultural AI, public health, and sustainable agriculture. Keywords Food Computing, Indigenous Recipes, Participatory AI, Multimodal Dataset, Social Computing 1 Introduction Food computing encompasses a wide range of computational techniques designed to address food-related challenges across domains such as medicine [3], nutrition [19], safety [20], and agriculture [6]. Common research directions include the processing of massive volumes of heterogeneous data, ranging from recipe books and food images to cooking videos and food blogs. Advances in computer vision have enabled automatic recognition of dishes, ingredients, and even cooking stages from visual inputs, which support dietary monitoring, culinary education, and food documentation [13, 22, 36]. Similarly, natural language processing techniques have significantly improved recipe understanding, ingredient substitution, and crosscultural food translation, powering applications in personalized nutrition, food recommendation systems, and culinary knowledge graphs [12, 43, 47]. Food in India is deeply embedded in everyday life-from rites to medicine, and agriculture to socialization. Yet, despite this vast and living knowledge system, there are only a few notable efforts within the Indian context in the field of food computing. Research initiatives like IIIT Delhi's Computational Gastronomy Lab [8] and ∗Corresponding Authors Ashoka University's Food Computing Lab [11] are pioneering but early steps toward modeling the unique cultural, linguistic, and culinary landscape of Indian food traditions. Existing methods, datasets, and Internet-based data collection fail to capture the socioecological and cultural richness embedded in Indian home cooking. With over 730 registered tribal groups, India offers a cultural context that is dramatically underrepresented in current computational resources. As a result, food computing suffers from a severe representational bias. We describe the documentation of 1,000 multimodal recipes from remote, indigenous regions in India. These recipes are captured by first-time digital workers who serve as critical links between ancestral knowledge, their digitalization, and technology platforms. The resulting dataset includes video, audio, images, text, and translations, capturing the representation of Indian culinary practice, suitable for alignment with the ontology and extension of FKG.in-India's first comprehensive food knowledge graph that aims to integrate AI, domain-specific ontologies, and real-world cooking intelligence with health, nutritional, and agroecological information and contexts. While FKG.in is being developed independently as an unrelated initiative, this paper brings together contributors from both efforts to explore their convergence among other potential directions. This paper explains the key aspects of data collection and use this dataset to highlight major gaps related to food computing in India. We present this dataset as a starting point for identifying key challenges, exploring potential applications, and underscoring the need for scalable, inclusive, and nuanced food datasets rooted in the Indian context. 2 Related Work and Gaps Food computing research has produced several large-scale datasets and models focused on visual recognition, cross-modal retrieval, and recipe understanding [29]. Benchmark multimodal datasets such as Recipe1M+ [28], Food-101 [4], and VireoFood-172 [21] have advanced tasks like classification, image-to-recipe generation, and ingredient prediction, but rely mainly on standardized, internetbased content. Most efforts in Europe, China, and the US focus on industrially structured recipes, processed food labels, or scientific nutrition databases, resulting in cultural bias, limited ecological diversity, and underrepresentation of vernacular practices, especially 19 Sep 2025 Gogoi et al. in low-resource settings [30, 34, 39]. Recent work on culinary knowledge graphs [15], dietary monitoring [32], fine-grained datasets [26, 27], and culturally aware modeling [5, 45, 46] uses readily available data, but oral and hyperlocal food knowledge-especially in India-remains largely absent from AI. India-specific datasets include INDoRI [23] (~5,200 recipes, 18 regions), image-based resources like the Indian Spices Image Dataset [40] (~11,000 images, 19 spices), IndianFoodNet [1] (~5,500 images, 30 classes), and IndianFood10/20 [35], as well as structured compilations like RecipeDB [2] and FKG.in [14]. While valuable, these are mostly single-modal, crowd- or web-sourced, and lack cultural or ecological grounding. CookingINWild [24] adds multimodal video of Indian cooking, but remains limited to visual data without deeper community-anchored or narrated content. Beyond the core domain of food computing, adjacent fields such as agroecology, ethnobotany, and digital humanities have long recognized the value of participatory knowledge production, especially in the context of local food systems. Ethnographic research, such as work on Himalayan indigenous communities [9], documents how local food practices intertwine with ecology and climate change. Similarly, multimedia archives of oral recipes and seasonal foraging patterns exist in regional ethnobotany [25, 31], but are rarely digitized or integrated into AI frameworks. These lay important precedents for community-driven, multimodal documentation, but they have yet to inform food computing datasets. An overview of the existing work and remaining gaps in the food knowledge documentation literature across selected dimensions is detailed in Table 1. Our data collection process and the dataset tries to address these gaps by creating a sensitive and ethical space for documenting indigenous knowledge, culinary practices, and local seasonal ingredients. We try to between globally commodified food data and real-world, culturally rooted food knowledge by building a multimodal, locally grounded, and community-led dataset that not only fills data gaps but also opens new directions for equitable, scalable, and culturally nuanced AI systems. 3 A Multimodal Recipe Dataset from Indigenous Indian Regions 3.1 The Recipe Dataset Project We adopt a ground-up digital methodology, where multimodal data is collected from Indigenous communities. This expands the scope of computational food data to include locally produced, culturally situated knowledge, positioning community members as active knowledge producers rather than passive subjects. The resulting dataset is culturally grounded and computationally usable, bridging participatory knowledge systems with machine-readable infrastructures. 3.1.1 Project Overview - . This initiative successfully collected approximately 1,000 traditional recipes across 10 languages, contributed by rural and tribal communities throughout India using a specially designed mobile application. The dataset captures authentic, underrepresented culinary knowledge from digital workers-primarily rural women aged 15-45 who are native speakers of endangered languages. Each participant contributed 2-5 traditional recipes via a dedicated app, earning INR 750 per recipe while actively preserving their cultural heritage. The data collection process involved recruiting local coordinators for each language, who mobilized 30-50 participants per region. Using the dedicated mobile application, contributors documented their traditional recipes in a structured, multimodal format that accommodated varying literacy levels through minimal text interfaces, audiovisual cues, and offline capabilities for areas with limited connectivity. The statistics of the dataset is provided in Table 2. 3.1.2 Dataset Statistics - . • Total Recipes: ~1,000 traditional recipes • Languages Covered: 10 endangered languages • Geographic Coverage: Jharkhand, Bihar, Assam, Manipur, Arunachal Pradesh and Meghalaya • Participants: 338 rural women across all regions • Data Formats: Text, audio recordings, and images for each recipe • Languages Distribution: - Jharkhand/Bihar: Ho, Khortha, Sadri, Santhali, Mundari - Northeast India: Assamese, Meitei, Miju Mishmi, Bodo, Kaman Mishmi 3.1.3 Data Structure and Format - . Each recipe entry in the data set contains: • Recipe Name: In native language with transliteration • Ingredients List: Local names • Step-by-step Instructions: Text and/or audio in native language • Process Images: Photo documentation of cooking steps • Cultural Context: Traditional occasions, regional variations • Nutritional Information: When available • Audio Recordings: Native pronunciation and detailed explanations 3.2 Sample Data Table 2 summarizes the dataset's culinary and linguistic richness of 10 Indian communities across six states. The data highlights traditional recipes and unique ingredients, accompanied by extensive collections of images and audio recordings in each community's native language. In total, the project documents 1,060 recipes and 11,398 unique ingredients, supported by over 46 hours of recorded audio. Figure 1 provides a snapshot of the app interface used in the project. A sample recipe data in Bodo is presented in Figure 2. Some images from the indigenous dataset are presented in Figure 3. 4 Towards Building Inclusive Food Computing Infrastructures The multimodal community-authored recipe dataset in ten endangered Indian languages offers a rare foundation for advancing inclusive food computing. Each entry is more than a recipe-it is a culturally embedded narrative, captured through text, audio, and images, with metadata on local ingredient variants, regional adaptations, household practices, and traditional techniques. What's Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets Table 1: Overview of Existing Work and Remaining Gaps in Food Knowledge Documentation No. Dimension Existing Work Remaining Gaps 1 Modalities Images, text, video action clips Missing audio narration, step-by-step process video, field metadata 2 Cultural Grounding Web/crowdsourced recipes No indigenous or traditional area-based documentation 3 Scalability Large, shallow datasets Need for deep, qualitative data from sampled communities 4 Community Participation Generic crowdsourced inputs Missing specific, local, and lived food knowledge 5 Knowledge Integration Limited use of ontologies/KGs Direct linkage/extensibility to KGs and specialized AI reasoning systems 6 Contextual Factors Formulaic and static recipes Missing temporal, ecological, and oral transmissions of food practices 7 Language & Access English-dominant data Recipes in regional languages, with translations and transliteration 8 Ethical Data Practices Extractive, unclear consent Participatory design with attribution, consent, and fair labor models Table 2: Languages, States, and Dataset Statistics for Regional Recipes in India No. Language State (Districts) Total Recipes Unique Ingredients Images Audio Data (hh:mm:ss) 1 Mundari Jharkhand 82 85 703 3:43:52 2 Sadri Jharkhand 107 104 1103 2:32:19 3 Santhali Bihar 120 98 1004 3:21:56 4 Khortha Bihar 126 73 1129 4:05:33 5 Ho Jharkhand 91 80 875 1:52:06 6 Assamese Assam 113 148 1415 1:57:14 7 Bodo Assam 95 190 1532 9:27:56 8 Meitei Manipur 100 97 580 4:40:08 9 Khasi Meghalaya 98 89 1928 6:47:40 10 Kaman Mishmi Arunachal Pradesh 128 92 1129 7:33:27 - Total - 1060 1056 11,398 46:02:11 Figure 1: Data Collection App interface It opens novel research directions at the intersection of AI, food culture, and low-resource language preservation, while providing structured data to inform domain-specific ontologies-e.g., concepts like "fermented millet dishes during monsoon," or "pregnancy-specific recipes shared through maternal oral traditions." Such high-context, multimodal knowledge is essential for building systems that understand food not just nutritionally or procedurally, but relationally and culturally. What follows is a discussion of how this dataset is being scaled and its potential future directions. Figure 2: Raw Recipe Data in Bodo 4.1 Indian Foodways: Scaling Data Collection Beyond Recipes The initial dataset of 1,000 indigenous recipes proved a successful proof of concept for real-world culinary data collection in India, but it captures only a narrow slice of the country's vast, decentralized food knowledge systems. India's culinary practices are deeply rooted in local agroecologies, health beliefs, ritual traditions, seasonal behaviors, and socioeconomic realities-dimensions that surface-level documentation of ingredients and cooking steps cannot adequately capture. To better capture the diversity and Gogoi et al. (a) Raw Purple Taro (Colocasia esculenta) (b) Boo Shat (Kaman-Mishmi dish) (c) Red Ant Eggs (Oecophylla smaragdina) (d) Amaltas (Cassia fistula) Figure 3: Sample Images from the Indigenous Dataset context of Indian foodways, we plan to expand our dataset to underdocumented indigenous regions, especially those affected by urbanization, climate change, or marginalization. The next phase will use a redesigned questionnaire with 100+ questions (up from 15) across nine modules-preparation, techniques, consumption, health, heritage, agriculture, ecology, economy, and cultural narratives. These modules will document beliefs about food-as-medicine, ingredient seasonality and sourcing, food-related livelihoods, and stories that convey emotional and intergenerational knowledge. This framework creates a living archive of food systems that captures the cultural, ecological, and economic meanings of food beyond recipes, enabling applications in public health, sustainable agriculture, cultural preservation, and policy. The redesigned questionnaire uses emerging food ontologies to elicit machineinterpretable responses from oral traditions, ensuring consistent encoding of context, intent, timing, and health beliefs across languages and paving the way for integration with reasoning systems like FKG.in and other domain-specific knowledge graphs. 4.2 FKG.in Integration: Enriching the Indian Food Knowledge Graph A compelling real-world application of our culinary dataset is its potential integration with infrastructures such as FKG.in [14], India's first dedicated food knowledge graph. The knowledge graph format - particularly as implemented in FKG.in - is well suited to capture the multifaceted, relational knowledge emerging from this dataset, whether in its current form or as it scales. Unlike flat recipe databases or "universal" nutrition charts, knowledge graphs can accommodate structured and semi-structured data across domains, enabling connections between ingredients, regional names, ecological sourcing, seasonal patterns, cultural meanings, and health perceptions. Contextual data such as "prepared only during monsoon," "consumed after childbirth," or "leaf used as a plate during harvest rituals" cannot be adequately represented without a semantic framework. Integration with FKG.in ensures usability, interoperability, and long-term impact across food computing, nutrition, sustainability, and cultural AI. So far, FKG.in has focused on building a foundational structure linking ingredients, processes, nutrition, and dish names. However, its capacity for reasoning, multilingual querying, and inferencing across cultural, health or agroecological axes has been limited by the absence of deeply contextual, field-collected data - precisely the gap this dataset fills. It enables FKG.in to encode not only what is cooked, but why, by whom, when, and under what conditions. This unlocks applications such as personalized health interventions rooted in traditional knowledge; culturally sensitive food recommendations; climate-aware dietary planning; and AI-driven storytelling grounded in culinary heritage. For example, a culturally aware cooking assistant enabled by FKG.in could infer that a fermented rice recipe shared by a Meitei woman is not only nutritionally probiotic, but also tied to postpartum customs and seasonal harvests. While our dataset and FKG.in are separate initiatives by distinct organizations, their convergence represents a major advance in real-world, AI-driven food systems modeling - where culturally rich, ecologically grounded, and socially meaningful data finds its most powerful expression through knowledge graph integration. 4.3 Broader Applications: Emerging Opportunities in AI, Food, and Culture Beyond integration with infrastructures like FKG.in, the dataset unlocks applications across food computing, language technologies, cultural AI, and health-oriented reasoning. Key emerging directions include: • Procedural Soundness Checks with LLMs (Large Language Models): LLMs can act as evaluators, validating recipe structure by representing them as action graphs and What's Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets detecting missing steps, sequencing errors, or implicit dependencies. This improves data quality, ensures completeness and assesses ease of reproduction for novice users [10, 33, 42]. • Cultural Machine Translation for Endangered Languages: The corpus described in Section 3.1 enables training culturally grounded, domain-specific machine translation models for Indian food recipes. These models can be designed to capture procedural nuance and rich culinary context, enabling accurate, culturally sensitive translation across endangered languages [44]. • Cultural Recipe Corpora for Benchmarking and Supervised Fine-Tuning (SFT): The multimodal corpus of underrepresented Indian recipes can power culturally rich knowledge bases, diverse evaluation datasets for benchmarking models (e.g., QA, reasoning tasks, guessing games like MMLU [16] or BoolQ [7]), and SFT for domain-specific, multimodal tasks. Such models could serve as culturally aware virtual assistants or educational tools for culinary learning, preservation, and cross-cultural exchange [37, 41]. • Multimodal Understanding and Generation: Combining text, audio, and images enables applications such as generating step-by-step instructional visuals or videos from recipe text, completing cooking videos from an image or recipe prompt, recognizing indigenous ingredients from images, identifying dishes from images, or inferring intermediate steps from visual or textual cues [18, 26, 33]. • Reasoning-Driven Cooking Assistants: Culturally grounded data supports contextual reasoning assistants that could explain why a step matters, what happens if altered, or how to adapt recipes to allergen or ingredient availability constraints - combining common sense reasoning, cultural sensitivity, and procedural awareness. These systems can help users navigate real-world cooking constraints while supporting the creative adaptation and evolution of traditional recipes across diverse and global culinary contexts [17, 38]. 5 Challenges and Open Questions Some valuable learnings from this initiative, which may guide similar future efforts, are outlined below: • Trust-building: Initial skepticism stemmed from disbelief that digital work-particularly involving recipes and storytelling-could result in real payment. Many participants were unfamiliar with such tasks being seen as valuable or remunerative, necessitating reassurance and clear communication through trusted local coordinators. • Seasonality constraints:Many tribal and rural recipes rely on foraged or locally grown ingredients that are only available during specific seasons. As a result, a significant number of off-season recipes and ingredients could not be captured in the initial dataset. • Infrastructure limitations: Unreliable internet connectivity and power supply in remote villages posed challenges for uploading multimodal data. To mitigate this, the mobile application was designed to support offline recording, requiring internet access only during final submission. • Payment equity: Recipe complexity varied dramatically - from elaborate dishes involving early-morning foraging and hours of ancestral techniques, to simpler but equally authentic preparations. Determining fair compensation while honoring the cultural value of all contributions proved to be a nuanced challenge. • Data quality: Ensuring consistency across diverse languages, cooking styles, and literacy levels required a twolayer validation process involving local coordinators and project managers. This system helped ensure data completeness and cross-regional coherence, ultimately strengthening the participatory framework and underscoring the importance of patient, hyper-local approaches to digital inclusion. 6 Conclusion The present work describes a dataset of 1,000 indigenous recipes, collected in ten languages in Eastern and Northeast India. Data collection and remuneration were facilitated through a smartphonebased data collection platform. The collection process involved unprecedented outreach efforts, including the participation of language experts, community coordinators, and data scientists. The primary data contributors were women who are native speakers of these ten languages. Although originally designed for cultural inclusion and creation of knowledge bases, the dataset serves as a valuable resource for food computing by presenting an exhaustive list of cooking practices - such as rare ingredients, seasonal traditions, rituals, and nutritional content - unique to home cooking and often overlooked in industrially structured recipe formats. The data set also holds potential for developing culturally aware large-language models (LLMs), while the English translations can serve as parallel corpora for domain-specific machine translation. Future work will involve expanding the dataset to include additional rare and underresourced languages of India. Acknowledgments We gratefully acknowledge the local coordinators and contributors whose time, knowledge, and effort made this work possible. We are thankful to the Mphasis AI & Applied Tech Lab at Ashoka - a collaboration between Ashoka University and Mphasis Limited - for their support. References [1] Ritu Agarwal, Tanupriya Choudhury, Neelu J.Ahuja, and Tanmay Sarkar. 2023. IndianFoodNet: Detecting Indian Food Items Using Deep Learning. International Journal of Computational Methods and Experimental Measurements 11 (12 2023). [2] Devansh Batra, Nirav Diwan, Utkarsh Upadhyay, Jushaan Singh Kalra, Tript Sharma, Aman Kumar Sharma, Dheeraj Khanna, Jaspreet Singh Marwah, Srilakshmi Kalathil, Navjot Singh, Rudraksh Tuwani, and Ganesh Bagler. 2020. RecipeDB: a resource for exploring recipes. Database 2020 (11 2020), baaa077. arXiv:https://academic.oup.com/database/articlepdf/doi/10.1093/database/baaa077/34526636/baaa077.pdf baaa077 [3] Almog Boanos, Anitha Sri Mothukuri, Kaitlin A Goettsch, and Dhundy K Bastola. 2017. Investigation and utilization of personal food computers for research in Gogoi et al. drug development and biomedicine. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2229-2231. [4] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. 2014. Food-101 - Mining Discriminative Components with Random Forests. In European Conference on Computer Vision. [5] Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, and Daniel Hershcovich. 2024. Cultural adaptation of recipes. Transactions of the Association for Computational Linguistics 12 (2024), 80-99. [6] Eduardo Castelló Ferrer, Jake Rye, Gordon Brander, Tim Savas, Douglas Chambers, Hildreth England, and Caleb Harper. 2018. Personal Food Computer: A new device for controlled-environment agriculture. In Proceedings of the Future Technologies Conference. Springer, 1077-1096. [7] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint (2019). [8] IIIT-Delhi Complex Systems Lab. 2025. Complex Systems Laboratory: Computational Gastronomy - Making Food Computable with Data Science and Artificial Intelligence. Web page. https://cosylab.iiitd.edu.in/ Foundational work in computational gastronomy, flavor networks, RecipeDB, FlavorDB, SpiceRx, DietRx. [9] Suraj Das and Anindya Mishra. 2022. Dynamics of indigenous community's food and culture in the time of climate change in the Himalayan region. Journal of Ethnic Foods 9 (01 2022). [10] Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, and Rob Miller. 2024. PizzaCommonSense: Learning to Model Commonsense Reasoning about Intermediate Steps in Cooking Recipes. https://arxiv. org/abs/2401.06930 [11] Ashoka University Food Computing Lab. 2025. Food Computing Lab: Knowledge Graphs, Food, Data and AI for Indian Food Systems. Web page. https://fkgindia.github.io/ Interdisciplinary research in food knowledge graphs, health, sustainability, vernacular cuisine modeling. [12] Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, and Dane Bell. 2014. Analyzing the language of food on social media. In 2014 IEEE International Conference on Big Data (Big Data). IEEE, 778-783. [13] Juliana Freitas Santos Gomes and Fabiana Rodrigues Leta. 2012. Applications of computer vision techniques in the agriculture and food industry: a review. European Food Research and Technology 235, 6 (2012), 989-1000. [14] Saransh Kumar Gupta, Lipika Dey, Partha Pratim Das, and Ramesh Jain. 2024. Building FKG.in: a Knowledge Graph for Indian Food. 00830 arXiv preprint . [15] Steven Haussmann, Oshani Seneviratne, Yu Chen, Yarden Ne'eman, James Codella, Ching-Hua Chen, Deborah L. McGuinness, and Mohammed J. Zaki. 2019. FoodKG: A Semantics-Driven Knowledge Graph for Food Recommendation. In The Semantic Web - ISWC 2019: 18th International Semantic Web Conference, Auckland, New Zealand, October 26-30, 2019, Proceedings, Part II (Auckland, New Zealand). Springer-Verlag, Berlin, Heidelberg, 146-162. 30796-7_10 [16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint (2020). [17] Tianyi Hu, Maria Maistro, and Daniel Hershcovich. 2024. Bridging Cultures in the Kitchen: A Framework and Benchmark for Cross-Cultural Recipe Retrieval. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (Eds.). Association for Computational Linguistics, Miami, Florida, USA, 1068-1080. [18] Tenghao Huang, Donghee Lee, John Sweeney, Jiatong Shi, Emily Steliotes, Matthew Lange, Jonathan May, and Muhao Chen. 2024. FoodPuzzle: Developing Large Language Model Agents as Flavor Scientists. https://arxiv.org/abs/2409.12832 [19] Shuqiang Jiang. 2024. Food Computing for Nutrition and Health. In 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW). IEEE, 29-31. [20] Cangyu Jin, Yamine Bouzembrak, Jiehong Zhou, Qiao Liang, Leonieke M Van Den Bulk, Anand Gavai, Ningjing Liu, Lukas J Van Den Heuvel, Wouter Hoenderdaal, and Hans JP Marvin. 2020. Big Data in food safety-A review. Current Opinion in Food Science 36 (2020), 24-32. [21] Chong-wah NGO Jing-jing Chen. 2016. Deep-based Ingredient Recognition for Cooking Recipe Retrival. ACM Multimedia (2016). [22] Vijay Kakani, Van Huan Nguyen, Basivi Praveen Kumar, Hakil Kim, and Visweswara Rao Pasupuleti. 2020. A critical review on computer vision and artificial intelligence in food industry. Journal of Agriculture and Food Research 2 (2020), 100033. [23] Sandeep Khanna, Chiranjoy Chattopadhyay, and Suman Kundu. 2023. INDoRI: Indian Dataset of Recipes and Ingredients and its Ingredient Network. https://arxiv.org/abs/2309.10403 [24] Sandeep Khanna, Shreya Goyal, Chiranjoy Chattopadhyay, and Suman Kundu. 2024. CookingINWild: Unleashing the Challenges of Indian Cuisine Cooking Videos for Action Recognition. In Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD) (Bangalore, India) (CODS-COMAD '24). Association for Computing Machinery, New York, NY, USA, 222-226. [25] Banisetti Dileepu Kumar and Nagendra Prasad Kosuri. 2023. Ethnobotanical Research in the Digital Age: Harnessing Technology for Data Collection and Analysis. International Journal of Indigenous Herbs and Drugs 8, 4 (2023), -. Article available under CC BY-NC 4.0. [26] Wenyan Li, Xinyu Zhang, Jiaang Li, Qiwei Peng, Raphael Tang, Li Zhou, Weijia Zhang, Guimin Hu, Yifei Yuan, Anders Søgaard, Daniel Hershcovich, and Desmond Elliott. 2024. FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture. https: //arxiv.org/abs/2406.11030 [27] Jabez Magomere, Shu Ishida, Tejumade Afonja, Aya Salama, Daniel Kochin, Yuehgoh Foutse, Imane Hamzaoui, Raesetje Sefala, Aisha Alaagib, Samantha Dalal, et al. 2025. The World Wide recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. 246-282. [28] Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images. IEEE Trans. Pattern Anal. Mach. Intell. (2019). [29] Weiqing Min, Shuqiang Jiang, Linhu Liu, Yong Rui, and Ramesh Jain. 2019. A Survey on Food Computing. ACM Comput. Surv. 52, 5, Article 92 (Sept. 2019), 36 pages. [30] Weiqing Min, Chunlin Liu, Leyi Xu, and Shuqiang Jiang. 2022. Applications of knowledge graphs for food science and industry. Patterns 3, 5 (2022), 100484. [31] Tawseef Ahmad Mir, Musheerul Hassan, Muatasim Jan, Muhammad Shoaib Amjad, Muhammad Abdul Aziz, Andrea Pieroni, Ivana Vitasović-Kosić, and Rainer W. Bussmann. 2024. Traditional culinary uses of wild plants in the Kashmir Himalayas: ethnobotanical documentation and cultural insights. Journal of Ethnobiology and Ethnomedicine 20, 1 (2024), 18. Part of the collection: Local knowledge systems and ecological transition. [32] Austin Myers, Nick Johnston, Vivek Rathod, Anoop Korattikara, Alex Gorban, Nathan Silberman, Sergio Guadarrama, George Papandreou, Jonathan Huang, and Kevin Murphy. 2015. Im2Calories: Towards an Automated Mobile Vision Food Diary. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV) (ICCV '15). IEEE Computer Society, USA, 1233-1241. ICCV.2015.146 [33] Taichi Nishimura, Suzushi Tomori, Hayato Hashimoto, Atsushi Hashimoto, Yoko Yamakata, Jun Harashima, Yoshitaka Ushiku, and Shinsuke Mori. 2020. Visual Grounding Annotation of Recipe Flow Graph. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language Resources Association, Marseille, France, 4275-4284. https://aclanthology.org/ 2020.lrec-1.527/ [34] Marianna Obrist, Patrizia Marti, Carlos Velasco, Yunwen Tu, Takuji Narumi, and Naja L Holten Møller. 2018. The future of computing and food. In Proceedings of the 2018 international conference on advanced visual interfaces. 1-3. [35] Deepanshu Pandey, Purva Parmar, Gauri Toshniwal, Mansi Goel, Vishesh Agrawal, Shivangi Dhiman, Lavanya Gupta, and Ganesh Bagler. 2022. Object Detection in Indian Food Platters using Transfer Learning with YOLOv4. https://arxiv.org/abs/2205.04841 [36] Shayan Rokhva, Babak Teimourpour, and Amir Hossein Soltani. 2024. Computer vision in the food industry: Accurate, real-time, and automatic food recognition with pretrained MobileNetV2. Food and Humanity 3 (2024), 100378. [37] David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abzaliev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio VillaCueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan, Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Jocelyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe, Luciana Benotti, Luis Fernando D'Haro, Marcelo Viridiano, Marcos EstechaGaritagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori, Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi, Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora, Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Teresa Clifford, Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo, Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, and Alham Fikri Aji. 2024. What's Not on the Plate? Rethinking Food Computing through Indigenous Indian Datasets CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark. https://arxiv.org/abs/2406.05967 [38] Sola Shirai, Oshani Seneviratne, Minor Gordon, Ching-Hua Chen, and Deborah Mcguinness. 2021. Identifying Ingredient Substitutions Using a Knowledge Graph of Food. Frontiers in Artificial Intelligence 3 (01 2021). [39] Quin Thames, Arjun Karpur, Wade Norris, Fangting Xia, Liviu Panait, Tobias Weyand, and Jack Sim. 2021. Nutrition5k: Towards automatic nutritional understanding of generic food. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 8903-8911. [40] Sandip Thite, Deepali Godse, Kailas Patil, Prawit Chumchu, and Alfa Nyandoro. 2024. Facilitating spice recognition and classification: An image dataset of Indian spices. Data in Brief 57 (2024), 110936. [41] Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, David Anugraha, Rifki Afina Putri, Yutong Wang, Adam Nohejl, Ubaidillah Ariq Prathama, Nedjma Ousidhoum, Afifa Amriani, Anar Rzayev, Anirban Das, Ashmari Pramodya, Aulia Adila, Bryan Wilie, Candy Olivia Mawalim, Ching Lam Cheng, Daud Abolade, Emmanuele Chersoni, Enrico Santus, Fariz Ikhwantri, Garry Kuwanto, Hanyang Zhao, Haryo Akbarianto Wibowo, Holy Lovenia, Jan Christian Blaise Cruz, Jan Wira Gotama Putra, Junho Myung, Lucky Susanto, Maria Angelica Riera Machin, Marina Zhukova, Michael Anugraha, Muhammad Farid Adilazuarda, Natasha Santosa, Peerat Limkonchotiwat, Raj Dabre, Rio Alexander Audino, Samuel Cahyawijaya, Shi-Xiong Zhang, Stephanie Yulia Salim, Yi Zhou, Yinxuan Gui, David Ifeoluwa Adelani, En-Shiun Annie Lee, Shogo Okada, Ayu Purwarianti, Alham Fikri Aji, Taro Watanabe, Derry Tanti Wijaya, Alice Oh, and Chong-Wah Ngo. 2025. WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines. https://arxiv.org/abs/2410.12705 [42] Yoko Yamakata, Shinsuke Mori, and John Carroll. 2020. English Recipe Flow Graph Corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language Resources Association, Marseille, France, 5187-5194. https://aclanthology.org/2020.lrec-1.638/ [43] Zhongqi Yang, Elahe Khatibi, Nitish Nagesh, Mahyar Abbasian, Iman Azimi, Ramesh Jain, and Amir M. Rahmani. 2024. ChatDiet: Empowering Personalized Nutrition-Oriented Food Recommender Chatbots through an LLM-Augmented Framework. [44] Binwei Yao, Ming Jiang, Tara Bobinac, Diyi Yang, and Junjie Hu. 2024. Benchmarking Machine Translation with Cultural Awareness. https://arxiv.org/abs/2305.14328 [45] Qing Zhang, David Elsweiler, and Christoph Trattner. 2020. Visual Cultural Biases in Food Classification. Foods 9, 6 (2020). [46] Li Zhou, Taelin Karidi, Wanlong Liu, Nicolas Garneau, Yong Cao, Wenyu Chen, Haizhou Li, and Daniel Hershcovich. 2024. Does mapo tofu contain coffee? probing llms for food-related cultural knowledge. arXiv preprint (2024). [47] Pengfei Zhou, Weiqing Min, Chaoran Fu, Ying Jin, Mingyu Huang, Xiangyang Li, Shuhuan Mei, and Shuqiang Jiang. 2024. FoodSky: A Foodoriented Large Language Model that Passes the Chef and Dietetic Examination. https://arxiv.org/abs/2406.10261
|
2509.16279
|
Energy Equity, Infrastructure and Demographic Analysis with XAI Methods
Sarahana Shrestha2, Aparna S. Varde1,2, Pankaj Lal2
1. School of Computing (SoC)
2. Clean Energy and Sustainability Analytics Center (CESAC)
Montclair State University (MSU), NJ, USA
(shresthas1 | vardea | lalp)@montclair.edu
ORCID ID: 0000-0002-9829-9607(Shrestha), 0000-0002-3170-2510 (Varde), 0000-0001-6799-3156 (Lal)
Abstract
Understanding the factors influencing energy consumption is
crucial to address disparities in energy equity and to promote
sustainable solutions. Socio-demographic characteristics can
impact energy usage patterns. This study uses XAI methods,
e.g. decision trees and PCC, to analyze electricity usage in
multiple locales. By correlating the infrastructure and socio-
demographic data with energy features, we identify housing
tenure and racial demographics as vital predictors, revealing
that renters and racially diverse groups can often face higher
energy burdens. We demonstrate a novel energy equity web
portal and energy burden calculator, hence offering tailored
actionable advice to multiple energy stakeholders, along with
enhanced explainability. This work addresses challenges in
energy policy adaptation, and aims for a next-generation
framework, therefore heading more towards energy equity.
Keywords: AI in Smart Cities, Energy Burden, Decision
Trees, Explainable AI, Pearson’s Correlation Coefficient,
Sustainable Management, Urban Policy, Web Portal
Introduction and Related Work
Energy equity aims to ensures that all the communities, and
especially marginalized ones, have fair access to affordable
energy (Fu et al. 2021). Socio-economic and demographic
factors e.g. income, race, housing tenure, housing age, and
disparities in infrastructure contribute to inequitable access
(Simcock et al. 2021; Singh et al. 2023).
Underserved communities face numerous challenges
such as outdated energy infrastructure, inadequate housing,
and other structural barriers, thereby limiting their access to
energy efficiency programs. Hence, this exacerbates energy
burdens and consequently, hinders efforts to transition to
more sustainable energy systems (Drehobl et al. 2016). It
thus calls for more data-driven analysis with explainable AI
(XAI) methods in order to better inform numerous energy
policymakers. This motivates our study in this paper.
Related work in the literature (Machlev et al. 2022), (Sim
et al. 2022), (Shrestha et al. 2023), (Varde et al. 2023) point
out many avenues where XAI methods play vital roles in
energy and sustainability analysis. Some works (Conti et al.
2020); (Pawlish et al. 2012) particularly highlight the role of
explainable AI paradigms such as commonsense knowledge
and decision trees in the context of task efficiency and clean
environments respectively, thereby touching upon energy-
based aspects. Likewise, other researchers (Basavaraju et al.
2017); (Garg et al. 2020) emphasize the role of XAI based
methods in supervised learning for mobile devices, and in
generating data for object detection respectively. Both these
works address energy tangentially due to their emphasis on
making computational processes more efficient. Though we
address energy efficiency on the whole for the common
good through various policies, it is important to incorporate
user opinions (Bittencourt et al. 2024); (Shrestha et al.
2024); (Du et al. 2019), and conduct other relevant analysis
to promote more fairness and equity. It is also important to
conduct fine-grained analysis in order to derive meaningful
inferences, and convey the feedback to stakeholders to make
better and more equitable decisions for the future.
We thrive upon many such success stories in the related
literature. While these works contribute significantly to the
state-of-the-art, we have some originality with respect to our
own study. Our work in this paper is novel as being among
the first to address energy equity explicitly with a web portal
design, energy burden calculator based on demographics
and energy infrastructure analysis using XAI methods. This
enhances interpretation and trust, hence providing more
beneficial feedback to a wide variety of stakeholders.
Data, Models and Methods
The region of study in this paper is Northeastern United
States, more specifically focused on New Jersey (NJ). The
Datasets are sourced mainly from NJ programs (New Jersey
Clean Energy Program, 2024 Accessed) and the US census
from 2008-2022 (U.S. Census Bureau, 2024 Accessed).
These are listed next as follows.
●
Aggregated Community-Scale Utility Energy Data
●
Energy Efficiency Program Participation
●
Race (Census Table B02001), Hispanic or Latino
Origin (Census Table B03003), Year Structure Built
(Census Table B25034), Mean Household Income of
Quintiles (Census Table B19081), Household Income
(Census Table B19001), Year Structure Built (Census
Table B25034), Owner vs. Renter Occupied Units
(Census Table B25003)
Explainable AI models are well-deployed in this research.
Decision tree classifiers are used to predict the actual energy
consumption, quantifying feature importance. Pearson’s
Correlation Coefficient (PCC) matrix is then used to further
assesses these relationships. To calculate energy burden, the
total amount spent on energy is divided by the median
household income of the locale, as formulated in Equation1.
𝐸𝑛𝑒𝑟𝑔𝑦 𝐵𝑢𝑟𝑑𝑒𝑛 (%) =
[(𝐸𝑒× 𝑅𝑒)+ (𝐸ℎ× 𝑅ℎ)]
𝑀𝑖
× 100% (1)
In this equation, 𝐸𝑒 is annual household electricity con-
sumption (kWh), 𝑅𝑒 is electricity rate ($/kWh), 𝐸ℎ is annual
heating (therm/BTU), 𝑅ℎ is heating rate ($/therm), and Mi is
median household income. Note that we use these variables
are domain-specific, as typically used.
We design and demonstrate a novel energy web portal
with an energy burden calculator. It gets the user zip-code
as the input, and finds the energy burden using Equation1.
If the burden is higher than the state average, it offers clear
advice with explainable action items, tailored more towards
energy equity. If the burden is below the state average, it
mentions that to the users, so they are well aware of their
situation. The procedure used in implementing the energy
burden calculator is synopsized next in Algorithm1 here.
Using Algorithm1, as well as the analysis with decision
trees and Pearson’s Correlation Coefficient, several useful
insights are obtained in the context of energy equity. In the
next section, the results are synopsized with illustration.
Algorithm 1: Energy Burden Calculator
Input: User Zip-code (Zc), State Data (D)
Parameters: Zc 𝐸𝑒 𝑅𝑒 𝐸ℎ 𝑅ℎ Mi // domain-specific
Output: EB (Energy Burden), Message, Display
1: Let SA = n% // state average for EB (constant)
2: Map (𝐸𝑒,𝐸ℎ) → Zc
3: Get (𝑅𝑒,Rh ) from D
4: Compute E-price = 𝐸𝑒 * 𝑅𝑒; H-price = 𝐸ℎ * 𝑅ℎ
6: Compute T-price = E-price + H-price
7: Compute EB = (T-price / Mi )*100
8:
if (EB > n) then
9:
Message = “Overburdened”;
10: Display = Link → {Tips to lower energy burden}
11:
else Message = “Below State Average”;
12: return Print (EB%), Print (Message), [Display]
Results with Discussion
We conduct experiments using decision trees and Pearson’s
Correlation Coefficient. The results obtained with our XAI
classifier models are: R²=0.7, RMSE=2.5, implying a good
fit of the model to the data for the purpose if prediction. It
identifies housing tenure and demographics as being the
most significant predictors of electricity consumption. This
is observed in Figure 1. In our observations, renter-occupied
housing dominates, confirming disparities in energy infra-
structure of renters vs. homeowners (Baker et al. 2019). The
feature importance of Asian-Americans (15.75%) and
owned housing (13.12%) shows homeownership impact on
energy equity issues.
Figure 1: Feature importance learned from XAI model
Furthermore, PCC analysis shows that there is a strong
correlation between race and home ownership, as observed
in Figure 2. Homeownership is more prevalent in the White
population (r=0.96), whereas Hispanic or Latino (r=0.93)
and Black (r=0.71) populations are more likely to be renters,
thus corroborating the systemic disparities noticed in the
housing infrastructure (Patterson et al. 2019). Typically,
Asian-Americans exhibit moderate positive correlations
with new housing (r = 0.52) versus other persons of color
(POC), substantiating differences in energy infrastructure
(Raymundo 2020). Moreover, the counties with more low-
income households tend to have higher rates of renters
(r=0.72). Figure 3 shows the PCC matrix between race and
income, where we see that the White population shows a
strong positive correlation with several income categories,
including Low Income (r = 0.93), Moderate Income (r =
0.94), and High Income (r = 0.94) indicating that the White
populations are distributed across a wide range of income
levels. In contrast, the Hispanic or Latino populations often
exhibit a more moderate correlation with Low Income (r =
0.68), which suggests that Hispanic populations are more
concentrated in lower-income areas. This suggests that the
income disparities exist within this racial group across the
different counties (Aliprantis et al., 2024).
Figure 2: Pearson Correlation Coefficient (PCC) Matrix
for Race and Housing Tenure
These illustrations in Figures 2 and 3 corroborate many
of the observations we noted earlier, thus confirming that
our results reveal many significant disparities with respect
to energy-related policies and their impacts on the various
communities based on the actual demographics as well as
the infrastructure. The XAI-based analysis reveals that the
alignment of modern infrastructure and affluence are often
perpetuating energy inequity.
Figure 3: Pearson Correlation Coefficient (PCC) Matrix for
Race and Income
Based on such results and various other data, Figure 4
presents a few snapshots from the demonstration of our
novel web portal for energy management, encompassing an
energy burden calculator. These demo snapshots exemplify
the useful applications of our XAI analysis on energy equity,
targeted for residents, policymakers etc. This is mainly
based on infrastructure and demographics.
Figure 4: Energy Management Web Portal and the Energy
Burden Calculator Prototype
Conclusions and Ongoing Work
To the best of our knowledge, ours is among the first works
on XAI-based energy equity analysis encompassing a novel
energy web portal and energy burden calculator. It addresses
crucial challenges in energy adaptation, aiming for higher
energy equity, heading towards next-generation systems.
Future work entails: (1) making the energy web portal
more interactive to enhance explainability; (2) conducting
macro and micro analysis with the countrywide / statewide
energy burden; separation of energy sources (gas, solar etc.);
aspects such as summer/winter, monthly/annual, and finer
granularity in attributes; (3) using methods such as XRRF
(explainable reasonably randomized forest) to obtain more
robustness & accuracy, yet offering explainable solutions;
and (4) enhancing the energy burden calculator further with
the expected energy usage analysis in targeted schemes as
per the projections. Note that the energy equity analysis
based upon decision trees here could be merged with text-
based policy perception analysis (Shrestha et al., 2024) in
the web portal. Users could compare energy burden data
with public opinion trends to see how equity issues align
with public attitudes toward renewable energy policies.
It is important to mention that XAI plays a vital role in
this research with respect to making all the methods more
transparent, and hence interpretable, as well as helping to
foster better trust among users. This work on the whole
makes broader impacts on AI in Smart Cities and also on the
theme of Sustainability Analytics. This is in addition to its
main impacts on XAI and Energy & Critical Infrastructure.
Acknowledgments
Sarahana Shrestha has a Doctoral Assistantship in the PhD
Program in Environmental Science and Management from
the Graduate School at Montclair State University (MSU).
Dr. Aparna Varde acknowledges the NSF MRI grant
2018575, i.e. “MRI: Acquisition of a High-Performance
GPU Cluster for Research and Education”. The authors
thank Gabriel Teves, Julia Konopka, and Joshua Rabo from
the Master’s Project Teams in the School of Computing at
MSU, and Dr. Hao Liu from the Data Science Laboratory,
at MSU, for their valuable inputs and useful feedback in this
work. We also gratefully acknowledge the travel funding
support provided by CSAM, the College of Science and
Mathematics at Montclair State University.
References
Aliprantis, D.; Carroll, D. R. andYoung, E. R. 2024. What
explains neighborhood sorting by income and race? Journal
of Urban Economics, 141, 103508.
Anthopoulos, L. and Kazantzi, V., 2022. Urban energy effi-
ciency assessment models from an AI and big data perspec-
tive: Tools for policy makers. Sustainable Cities and Soci-
ety, 76, p.103492.
Basavaraju, P., and Varde, A. S. 2017. Supervised learning
techniques in mobile device apps for Androids. ACM
SIGKDD Explorations 2016, 18(2), 18-29.
Bittencourt, I., Varde, A.S., Lal, P. 2024. Opinion Mining
on Off-shore Wind Energy for Environmental Engineering.
In IEMTRONICS Lecture Notes in Electrical Engineering,
vol 1229. Springer, https://doi.org/10.1007/978-981-97-
4780-1_37
Baker, S.; DeVar, S. and Prakash, S. 2019. The Energy Jus-
tice Workbook. Initiative for Energy Justice. https://ie-
jusa.org/wp-content/uploads/2019/12/The-Energy-Justice-
Workbook-2019-web.pdf
Conti, C. J., Varde, A. S., and Wang, W. 2020. Robot action
planning by commonsense knowledge in human-robot col-
laborative tasks. In 2020 IEEE International IOT, Electron-
ics and Mechatronics Conf., pp. 170-176.
Drehobl, A. and Ross, L. 2016. Lifting the high energy bur-
den in America’s largest cities: How energy efficiency can
improve low income and underserved communities.
https://www.aceee.org/research-report/u1602
Du, X., Kowalski, M., Varde, A. S., de Melo, G., and Tay-
lor, R. W. (2020). Public opinion matters: Mining social me-
dia text for environmental management. ACM SIGWEB,
2019(Autumn), 5:1-5:15.
Fu, F.Y.; Alharthi, M.; Bhatti, Z.; Sun, L.; Rasul, F.; Hanif,
I. and Iqbal, W. 2021. The dynamic role of energy security,
energy equity and environmental sustainability in the di-
lemma of emission reduction and economic growth. Journal
of Environmental Management, 280.
Garg, A., Tandon, N., and Varde, A. S. 2020. I am guessing
you can't recognize this: Generating adversarial images for
object detection using spatial commonsense (student ab-
stract). In Proceedings of the AAAI Conference on Artificial
Intelligence, Vol. 34, No. 10, pp. 13789-13790.
Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov,
J.; Mannor, S. and Levron, Y. 2022. Explainable Artificial
Intelligence (XAI) techniques for energy and power sys-
tems: Review, challenges and opportunities. Energy and AI,
9, p.100169.
Patterson, J.; Hernández, D. and Morales Knight, L. 2019.
Energy isecurity as a nexus between housing and health
among low-income renters. Energy Research & Social Sci-
ence, 70, 101744.
Pawlish, M., Varde, A. S., and Robila, S. A. 2012. Cloud
computing for environment-friendly data centers. Interna-
tional Workshop on Cloud Data Management at ACM
CIKM, pp. 43-48.
Raymundo, J. 2020. Are Asian Americans POC? Examining
impact of higher education identity-based policies and prac-
tices. The Vermont Connection, 41(1). https://scholar-
works.uvm.edu/tvc/vol41/iss1/5
Shrestha, S.; Bittencourt, I.; Varde, A. S. and Lal, P. 2024.
AI-based modeling for textual data on solar policies in smart
energy applications. In IEEE International Conference on
Information, Intelligence, Systems & Applications (IISA)
DOI: 10.1109/IISA62523.2024.10786713
Shrestha, S. and Varde, A.S. 2023. Roles of the Web in
Commercial Energy Efficiency: IoT, Cloud Computing, and
Opinion
Mining.
ACM
SIGWEB,
2023(Autumn),
https://doi.org/10.1145/3631358.3631363
Sim, T.; Choi, S.; Kim, Y.; Youn, S.H.; Jang, D.J.; Lee, S.
and Chun, C.J. 2022. eXplainable AI (XAI)-based input var-
iable selection methodology for forecasting energy con-
sumption. Electronics, 11(18), p.2947.
Simcock, N;, Jenkins, K.E.; Lacey-Barnacle, M.; Martiskai-
nen, M.; Mattioli, G. and Hopkins, D. 2021. Identifying dou-
ble energy vulnerability: A systematic and narrative review
of groups at-risk of energy and transport poverty in the
global north. Energy Research & Social Science.
Singh, A.; Yadav, J.; Shrestha, S. and Varde, A.S. 2023.
Linking Alternative Fuel Vehicles Adoption with Socioeco-
nomic
Status
and
Air
Quality
Index.
https://doi.org/10.48550/arXiv.2303.08286. AAAI Confer-
ence Workshops Program
Varde, A.S. and Liang, J., 2023. Machine learning ap-
proaches in agile manufacturing with recycled materials for
sustainability.
AAAI
Conference
Bridge
Program,
https://doi.org/10.48550/arXiv.2303.08291
New Jersey Clean Energy Program. (2024 Accessed). "New
Jersey’s Clean Energy Program: Programs & Resources."
https://www.njcleanenergy.com
U.S. Census Bureau. (2024 Accessed). "Explore Census
Data." U.S. Department of Commerce. https://data.cen-
sus.gov.
|
Energy Equity, Infrastructure and Demographic Analysis with XAI Methods Sarahana Shrestha2, Aparna S. Varde1,2, Pankaj Lal2 1. (SoC) 2. Clean Energy and Sustainability Analytics Center (CESAC) Montclair State University (MSU), NJ, USA (shresthas1 | vardea | ORCID ID: 0000-0002-9829-9607(Shrestha), 0000-0002-3170-2510 (Varde), 0000-0001-6799-3156 (Lal) Abstract Understanding the factors influencing energy consumption is crucial to address disparities in energy equity and to promote sustainable solutions. Socio-demographic characteristics can impact energy usage patterns. This study uses XAI methods, e.g. decision trees and PCC, to analyze electricity usage in multiple locales. By correlating the infrastructure and sociodemographic data with energy features, we identify housing tenure and racial demographics as vital predictors, revealing that renters and racially diverse groups can often face higher energy burdens. We demonstrate a novel energy equity web portal and energy burden calculator, hence offering tailored actionable advice to multiple energy stakeholders, along with enhanced explainability. This work addresses challenges in energy policy adaptation, and aims for a next-generation framework, therefore heading more towards energy equity. Keywords: AI in Smart Cities, Energy Burden, Decision Trees, Explainable AI, Pearson's Correlation Coefficient, Sustainable Management, Urban Policy, Web Portal Introduction and Related Work Energy equity aims to ensures that all the communities, and especially marginalized ones, have fair access to affordable energy (Fu et al. 2021). Socio-economic and demographic factors e.g. income, race, housing tenure, housing age, and disparities in infrastructure contribute to inequitable access (Simcock et al. 2021; Singh et al. 2023). Underserved communities face numerous challenges such as outdated energy infrastructure, inadequate housing, and other structural barriers, thereby limiting their access to energy efficiency programs. Hence, this exacerbates energy burdens and consequently, hinders efforts to transition to more sustainable energy systems (Drehobl et al. 2016). It thus calls for more data-driven analysis with explainable AI (XAI) methods in order to better inform numerous energy policymakers. This motivates our study in this paper. Related work in the literature (Machlev et al. 2022), (Sim et al. 2022), (Shrestha et al. 2023), (Varde et al. 2023) point out many avenues where XAI methods play vital roles in energy and sustainability analysis. Some works (Conti et al. 2020); (Pawlish et al. 2012) particularly highlight the role of explainable AI paradigms such as commonsense knowledge and decision trees in the context of task efficiency and clean environments respectively, thereby touching upon energybased aspects. Likewise, other researchers (Basavaraju et al. 2017); (Garg et al. 2020) emphasize the role of XAI based methods in supervised learning for mobile devices, and in generating data for object detection respectively. Both these works address energy tangentially due to their emphasis on making computational processes more efficient. Though we address energy efficiency on the whole for the common good through various policies, it is important to incorporate user opinions (Bittencourt et al. 2024); (Shrestha et al. 2024); (Du et al. 2019), and conduct other relevant analysis to promote more fairness and equity. It is also important to conduct fine-grained analysis in order to derive meaningful inferences, and convey the feedback to stakeholders to make better and more equitable decisions for the future. We thrive upon many such success stories in the related literature. While these works contribute significantly to the state-of-the-art, we have some originality with respect to our own study. Our work in this paper is novel as being among the first to address energy equity explicitly with a web portal design, energy burden calculator based on demographics and energy infrastructure analysis using XAI methods. This enhances interpretation and trust, hence providing more beneficial feedback to a wide variety of stakeholders. Data, Models and Methods The region of study in this paper is Northeastern United States, more specifically focused on New Jersey (NJ). The Datasets are sourced mainly from NJ programs (New Jersey Clean Energy Program, 2024 Accessed) and the US census from 2008-2022 (U.S. Census Bureau, 2024 Accessed). These are listed next as follows. ● Aggregated Community-Scale Utility Energy Data ● Energy Efficiency Program Participation ● Race (Census Table B02001), Hispanic or Latino Origin (Census Table B03003), Year Structure Built (Census Table B25034), Mean Household Income of Quintiles (Census Table B19081), Household Income (Census Table B19001), Year Structure Built (Census Table B25034), Owner vs. Renter Occupied Units (Census Table B25003) Explainable AI models are well-deployed in this research. Decision tree classifiers are used to predict the actual energy consumption, quantifying feature importance. Pearson's Correlation Coefficient (PCC) matrix is then used to further assesses these relationships. To calculate energy burden, the total amount spent on energy is divided by the median household income of the locale, as formulated in Equation1. Energy Burden (%) = [(Ee× Re)+ (Eh× Rh)] Mi × 100% (1) In this equation, Ee is annual household electricity consumption (kWh), Re is electricity rate ( /therm), and Mi is median household income. Note that we use these variables are domain-specific, as typically used. We design and demonstrate a novel energy web portal with an energy burden calculator. It gets the user zip-code as the input, and finds the energy burden using Equation1. If the burden is higher than the state average, it offers clear advice with explainable action items, tailored more towards energy equity. If the burden is below the state average, it mentions that to the users, so they are well aware of their situation. The procedure used in implementing the energy burden calculator is synopsized next in Algorithm1 here. Using Algorithm1, as well as the analysis with decision trees and Pearson's Correlation Coefficient, several useful insights are obtained in the context of energy equity. In the next section, the results are synopsized with illustration. Algorithm 1: Energy Burden Calculator Input: User Zip-code (Zc), State Data (D) Parameters: Zc Ee Re Eh Rh Mi // domain-specific Output: EB (Energy Burden), Message, Display 1: Let SA = n% // state average for EB (constant) 2: Map (Ee,Eh) → Zc 3: Get (Re,Rh ) from D 4: Compute E-price = Ee * Re; H-price = Eh * Rh 6: Compute T-price = E-price + H-price 7: Compute EB = (T-price / Mi )*100 8: if (EB > n) then 9: Message = "Overburdened"; 10: Display = Link → {Tips to lower energy burden} 11: else Message = "Below State Average"; 12: return Print (EB%), Print (Message), [Display] Results with Discussion We conduct experiments using decision trees and Pearson's Correlation Coefficient. The results obtained with our XAI classifier models are: R2=0.7, RMSE=2.5, implying a good fit of the model to the data for the purpose if prediction. It identifies housing tenure and demographics as being the most significant predictors of electricity consumption. This is observed in Figure 1. In our observations, renter-occupied housing dominates, confirming disparities in energy infrastructure of renters vs. homeowners (Baker et al. 2019). The feature importance of Asian-Americans (15.75%) and owned housing (13.12%) shows homeownership impact on energy equity issues. Figure 1: Feature importance learned from XAI model Furthermore, PCC analysis shows that there is a strong correlation between race and home ownership, as observed in Figure 2. Homeownership is more prevalent in the White population (r=0.96), whereas Hispanic or Latino (r=0.93) and Black (r=0.71) populations are more likely to be renters, thus corroborating the systemic disparities noticed in the housing infrastructure (Patterson et al. 2019). Typically, Asian-Americans exhibit moderate positive correlations with new housing (r = 0.52) versus other persons of color (POC), substantiating differences in energy infrastructure (Raymundo 2020). Moreover, the counties with more lowincome households tend to have higher rates of renters (r=0.72). Figure 3 shows the PCC matrix between race and income, where we see that the White population shows a strong positive correlation with several income categories, including Low Income (r = 0.93), Moderate Income (r = 0.94), and High Income (r = 0.94) indicating that the White populations are distributed across a wide range of income levels. In contrast, the Hispanic or Latino populations often exhibit a more moderate correlation with Low Income (r = 0.68), which suggests that Hispanic populations are more concentrated in lower-income areas. This suggests that the income disparities exist within this racial group across the different counties (Aliprantis et al., 2024). Figure 2: Pearson Correlation Coefficient (PCC) Matrix for Race and Housing Tenure These illustrations in Figures 2 and 3 corroborate many of the observations we noted earlier, thus confirming that our results reveal many significant disparities with respect to energy-related policies and their impacts on the various communities based on the actual demographics as well as the infrastructure. The XAI-based analysis reveals that the alignment of modern infrastructure and affluence are often perpetuating energy inequity. Figure 3: Pearson Correlation Coefficient (PCC) Matrix for Race and Income Based on such results and various other data, Figure 4 presents a few snapshots from the demonstration of our novel web portal for energy management, encompassing an energy burden calculator. These demo snapshots exemplify the useful applications of our XAI analysis on energy equity, targeted for residents, policymakers etc. This is mainly based on infrastructure and demographics. Figure 4: Energy Management Web Portal and the Energy Burden Calculator Prototype Conclusions and Ongoing Work To the best of our knowledge, ours is among the first works on XAI-based energy equity analysis encompassing a novel energy web portal and energy burden calculator. It addresses crucial challenges in energy adaptation, aiming for higher energy equity, heading towards next-generation systems. Future work entails: (1) making the energy web portal more interactive to enhance explainability; (2) conducting macro and micro analysis with the countrywide / statewide energy burden; separation of energy sources (gas, solar etc.); aspects such as summer/winter, monthly/annual, and finer granularity in attributes; (3) using methods such as XRRF (explainable reasonably randomized forest) to obtain more robustness & accuracy, yet offering explainable solutions; and (4) enhancing the energy burden calculator further with the expected energy usage analysis in targeted schemes as per the projections. Note that the energy equity analysis based upon decision trees here could be merged with textbased policy perception analysis (Shrestha et al., 2024) in the web portal. Users could compare energy burden data with public opinion trends to see how equity issues align with public attitudes toward renewable energy policies. It is important to mention that XAI plays a vital role in this research with respect to making all the methods more transparent, and hence interpretable, as well as helping to foster better trust among users. This work on the whole makes broader impacts on AI in Smart Cities and also on the theme of Sustainability Analytics. This is in addition to its main impacts on XAI and Energy & Critical Infrastructure. Acknowledgments Sarahana Shrestha has a Doctoral Assistantship in the PhD Program in Environmental Science and Management from the Graduate School at Montclair State University (MSU). Dr. Aparna Varde acknowledges the NSF MRI grant 2018575, i.e. "MRI: Acquisition of a High-Performance GPU Cluster for Research and Education". The authors thank Gabriel Teves, Julia Konopka, and Joshua Rabo from the Master's Project Teams in the . Hao Liu from the Data Science Laboratory, at MSU, for their valuable inputs and useful feedback in this work. We also gratefully acknowledge the travel funding support provided by CSAM, the . References Aliprantis, D.; Carroll, D. R. andYoung, E. R. 2024. What explains neighborhood sorting by income and race? Journal of Urban Economics, 141, 103508. Anthopoulos, L. and Kazantzi, V., 2022. Urban energy efficiency assessment models from an AI and big data perspective: Tools for policy makers. Sustainable Cities and Society, 76, p.103492. Basavaraju, P., and Varde, A. S. 2017. Supervised learning techniques in mobile device apps for Androids. ACM SIGKDD Explorations 2016, 18(2), 18-29. Bittencourt, I., Varde, A.S., Lal, P. 2024. Opinion Mining on Off-shore Wind Energy for Environmental Engineering. In IEMTRONICS Lecture Notes in Electrical Engineering, vol 1229. Springer, https://doi.org/10.1007/978-981-974780-1_37 Baker, S.; DeVar, S. and Prakash, S. 2019. The Energy Justice Workbook. Initiative for Energy Justice. https://iejusa.org/wp-content/uploads/2019/12/The-Energy-JusticeWorkbook-2019-web.pdf Conti, C. J., Varde, A. S., and Wang, W. 2020. Robot action planning by commonsense knowledge in human-robot collaborative tasks. In 2020 IEEE International IOT, Electronics and Mechatronics Conf., pp. 170-176. Drehobl, A. and Ross, L. 2016. Lifting the high energy burden in America's largest cities: How energy efficiency can improve low income and underserved communities. https://www.aceee.org/research-report/u1602 Du, X., Kowalski, M., Varde, A. S., de Melo, G., and Taylor, R. W. (2020). Public opinion matters: Mining social media text for environmental management. ACM SIGWEB, 2019(Autumn), 5:1-5:15. Fu, F.Y.; Alharthi, M.; Bhatti, Z.; Sun, L.; Rasul, F.; Hanif, I. and Iqbal, W. 2021. The dynamic role of energy security, energy equity and environmental sustainability in the dilemma of emission reduction and economic growth. Journal of Environmental Management, 280. Garg, A., Tandon, N., and Varde, A. S. 2020. I am guessing you can't recognize this: Generating adversarial images for object detection using spatial commonsense (student abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, No. 10, pp. 13789-13790. Machlev, R.; Heistrene, L.; Perl, M.; Levy, K.Y.; Belikov, J.; Mannor, S. and Levron, Y. 2022. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy and AI, 9, p.100169. Patterson, J.; Hernández, D. and Morales Knight, L. 2019. Energy isecurity as a nexus between housing and health among low-income renters. Energy Research & Social Science, 70, 101744. Pawlish, M., Varde, A. S., and Robila, S. A. 2012. Cloud computing for environment-friendly data centers. International Workshop on Cloud Data Management at ACM CIKM, pp. 43-48. Raymundo, J. 2020. Are Asian Americans POC? Examining impact of higher education identity-based policies and practices. The Vermont Connection, 41(1). https://scholarworks.uvm.edu/tvc/vol41/iss1/5 Shrestha, S.; Bittencourt, I.; Varde, A. S. and Lal, P. 2024. AI-based modeling for textual data on solar policies in smart energy applications. In IEEE International Conference on Information, Intelligence, Systems & Applications (IISA) Shrestha, S. and Varde, A.S. 2023. Roles of the Web in Commercial Energy Efficiency: IoT, Cloud Computing, and Opinion Mining. ACM SIGWEB, 2023(Autumn), https://doi.org/10.1145/3631358.3631363 Sim, T.; Choi, S.; Kim, Y.; Youn, S.H.; Jang, D.J.; Lee, S. and Chun, C.J. 2022. eXplainable AI (XAI)-based input variable selection methodology for forecasting energy consumption. Electronics, 11(18), p.2947. Simcock, N;, Jenkins, K.E.; Lacey-Barnacle, M.; Martiskainen, M.; Mattioli, G. and Hopkins, D. 2021. Identifying double energy vulnerability: A systematic and narrative review of groups at-risk of energy and transport poverty in the global north. Energy Research & Social Science. Singh, A.; Yadav, J.; Shrestha, S. and Varde, A.S. 2023. Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index. https://doi.org/10.48550/arXiv.2303.08286. AAAI Conference Workshops Program Varde, A.S. and Liang, J., 2023. Machine learning approaches in agile manufacturing with recycled materials for sustainability. AAAI Conference Bridge Program, https://doi.org/10.48550/arXiv.2303.08291 New Jersey Clean Energy Program. (2024 Accessed). "New Jersey's Clean Energy Program: Programs & Resources." https://www.njcleanenergy.com U.S. Census Bureau. (2024 Accessed). "Explore Census Data." U.S. . https://data.census.gov.
|
2509.16276
|
Comparative Analysis of STEM and non-STEM
Teachers’ Needs for Integrating AI into
Educational Environments
Bahare Riahi1
Veronica Cateté2
North Carolina State University, Raleigh, NC 27606, USA
briahi@ncsu.edu vmcatete@ncsu.edu
Abstract. There is an increasing imperative to integrate programming
platforms within AI frameworks to enhance educational tasks for both
teachers and students. However, commonly used platforms such as Code.org,
Scratch, and Snap fall short of providing the desired AI features and lack
adaptability for interdisciplinary applications. This study explores how
educational platforms can be improved by incorporating AI and analyt-
ics features to create more effective learning environments across various
subjects and domains.
We interviewed 8 K-12 teachers and asked their practices and needs while
using any block-based programming (BBP) platform in their classes. We
asked for their approaches in assessment, course development and ex-
pansion of resources, and student monitoring in their classes. Thematic
analysis of the interview transcripts revealed both commonalities and
differences in the AI tools needed between the STEM and non-STEM
groups. Our results indicated advanced AI features that could promote
BBP platforms. Both groups stressed the need for integrity and plagia-
rism checks, AI adaptability, customized rubrics, and detailed feedback in
assessments. Non-STEM teachers also emphasized the importance of cre-
ative assignments and qualitative assessments. Regarding resource devel-
opment, both AI tools desired for updating curricula, tutoring libraries,
and generative AI features. Non-STEM teachers were particularly in-
terested in supporting creative endeavors, such as art simulations. For
student monitoring, both groups prioritized desktop control, daily track-
ing, behavior monitoring, and distraction prevention tools.
Our findings identify specific AI-enhanced features needed by K-12 teach-
ers across various disciplines and lay the foundation for creating more
efficient, personalized, and engaging educational experiences.
Keywords: AI-educational platforms · AI Features · k-12 Teachers’
needs.
1
Introduction
It is no longer assumed that managing classroom and teaching activities in com-
putational teaching and programming instruction follows a standardized ap-
proach [24], with fixed strategies and schedules for instructors. Various factors
arXiv:2509.16276v1 [cs.CY] 18 Sep 2025
2
B. Riahi and V. Cateté
influence teachers’ strategies, including course type, differences in students’ pro-
ficiency levels [13], teachers’ conceptual and educational backgrounds, experience
[26, 21], and methodologies [9].
Given the complexity and variety of teachers’ activities in classroom manage-
ment and enhancing students’ learning, supplementary tools are essential [40,
31]. These tools support tasks such as instruction and assessments, including
setting rubrics, projects, and quizzes [33]. Prior research has explored teachers’
general needs for programming learning systems [28] and developed tools for
block-based programming (BBP) environments, such as assignment hubs, grad-
ing tools, and planning interfaces [18, 33]. However, these needs vary significantly
across instructional fields, especially in non-computing or non-STEM courses.
We aim to identify teachers’ needs and preferences regarding the integra-
tion of AI features in educational platforms (EP), with a focus on three key
areas: assessment; course development and resource expansion; and monitoring
and supporting students. Additionally, we address the research gap concern-
ing non-STEM teachers’ needs in using AI in block-based programming (BBP).
Recognizing this gap we explored how AI could be integrated into educational
platforms to enhance its accessibility and effectiveness for a broader range of
educational needs, spanning both STEM (Science, Technology, Engineering, and
Mathematics) and non-STEM (Arts, Social Studies, and Humanities) disciplines
[45]. Thus, ensuring the adaptability and accessibility of these platforms is of fun-
damental importance. Additionally, identifying customized AI analytics features
to meet the diverse needs of users is essential for creating more effective learning
environments
To achieve these objectives, we conducted semi-structured interviews with
eight K-12 teachers—four from STEM and four from non-STEM disciplines—to
understand their approaches to assessment, course development and resource
expansion, and student monitoring and support. We explored their methods,
strategies, current use of AI tools, and the features they would like to see added.
Thematic analysis was used to analyze their responses for each focus area. For
course development and expansion, teachers highlighted the need for built-in
code tracing, dynamic document updates, and customized course materials tai-
lored to individual student needs and levels. Additionally, they emphasized the
importance of monitoring features such as customizable tools for individualized
growth tracking, help notifications, pop-up tips, reminders, and motivational
tools.
2
Related Works
With widespread advancements in technology and computer science, there is a
growing need to educate students in computer science and computational think-
ing from an early age [44]. With the emphasis on developing computational
thinking (CT) and creativity in K-12 education increasing [25], and consider-
ing benefits such as improved future college performance [6], more schools are
incorporating computer science as a core subject or integrating it into their cur-
Title Suppressed Due to Excessive Length
3
ricula. Computational thinking (CT), which was introduced by Wing [46], is the
ability to solve problems, create solutions through abstraction and algorithmic
thinking, and comprehend human behavior by utilizing the core principles of
computer science [7].
There has been growing interest in learning CT for all students regardless
of their discipline [20], as it can be integrated into both STEM (Science, Tech-
nology, Engineering, and Mathematics) and non-STEM (Human Societies, Lan-
guage, Communication and Culture, and History and Art) [27] disciplines. In
K-12 classes, students use block-based programming (BBP) using various tools,
such as Code.org [23] to develop their CT skills. The adoption of these tools
reflects a wider movement in education, in which supplementary technologies
and AI are used to increase classroom efficiency and productivity [14]. These
innovations not only simplify administrative duties but also enhance student
engagement and allow for personalized learning experiences. Considering the
ongoing challenges in performing teaching tasks, there is a significant demand
for such tools to support teachers with curriculum-focused activities, instruction,
student progress monitoring, and the creation of thorough assessments.
AI tools play a pivotal role in supporting K-12 computer science educators
by streamlining assessment processes [33] and providing substantial assistance
to both teachers and students [34]. As there is a relatively small body of liter-
ature focused on integrating AI tools into block-based programming (BBP) in
both STEM and non-STEM classes, we argue that the complexities of diverse
assessment modules and distinct pedagogical approaches can pose significant
challenges for teachers [33]. Moreover, AI can customize learning experiences by
adjusting content and progression to meet students’ specific needs. By leverag-
ing AI in these ways, educators can enhance their ability to provide effective
performance and supportive feedback to their students [29].
Table 1. Details of the participants, teaching Grade level and their fields in both
groups of STEM and non-STEM
non-STEM Teachers (NST) Grade Level
STEM Teachers (ST)
Grade Level
Art-3D
9-12
CS principles/ math
9-12
English (ESL)
11
Computer Science
11-12
Music
9-12
Math
10
Dance
6-8
Math/Science/IT
10
2.1
Integration block based programming in non-stem classes
BBP has gained significant attention in various educational settings, including
non-STEM classes such as social science, humanities, dance, English for Speakers
of Other Languages (ESOL), and art [42, 17]. Incorporating BBP in non-STEM
classes can be beneficial for students to enhance their computational thinking
abilities and develop skills that are essential in the digital age. Scholars showed
4
B. Riahi and V. Cateté
that Integrating programming into art and music education can significantly
enhance the learning experience, making it more appealing [16].
Using BBP programming in interdisciplinary and non-STEM areas like Arts
& Crafts and Music, using tools like Scratch [32], reduces barriers for teachers
and students, enhances engagement, increases the creativity of students and
practical applications of programming across different subjects—making learning
more exciting and accessible for everyone [37]. Given the need for educational
platforms to be accessible across both STEM and non-STEM subjects, and the
importance of facilitating class management for teachers, it’s crucial that these
platforms include features that meet the diverse requirements of all user groups.
Art, Music, Dance and English for Speakers of Other Languages (ESOL)
Education Students use educational visual programming tools like Scratch
[2] and Snap! to enhance their musical understanding and compositional skills
in subjects like music and dance. Integrating block-based programming with
dance not only improves students’ computational skills and artistic abilities but
also provides teachers with effective ways to assess students’ progress. By using
sound and music blocks, students can compose melodies, manipulate pitch, cre-
ate rhythmic patterns with loops and conditional statements, and deepen their
comprehension of musical timing and structures [41].
Additionally, through musical BBP, students can animate characters and syn-
chronize their movements with music using motion and sound blocks, enabling
dynamic dance choreography that responds to musical cues or user interactions
[4]. Scholars have emphasized that 3D programming can further enhance creativ-
ity in computational thinking [39], and using Scratch to animate dance moves
and synchronize music significantly improves students’ understanding of compu-
tational thinking in the context of dance, music, and art [41].
BBP can be used to enhance foreign language learning and spelling skills
[19] among students. It has been shown that by using jigsaw-like placement of
colored blocks, students can explore and understand grammar structures and vo-
cabulary, get immediate feedback and correct their mistakes, and make different
constructions of the sentence [38]. There is a growing need to support and equip
[47] teachers in teaching and managing their classes more effectively. By apply-
ing various BBP tools in these areas, teachers encounter different needs specific
to their subjects. To improve accessibility and keep these tools up-to-date, we
aim to explore teachers’ requirements for adding AI features, particularly in
non-STEM areas.
3
Research Questions
We aim to answer our research questions in the three area of assessment, course
development and student monitoring. Our first research question explored how
to better address the assessment needs of STEM and non-STEM teachers across
various domains.
Title Suppressed Due to Excessive Length
5
RQ1 What AI features do STEM and non-STEM teachers need and want in the
educational tools for student assessments?
Another important consideration is how well the dashboard aligns with curric-
ula, teaching approaches, and the needs of various subject areas. Therefore, the
second research question is:
RQ2 What AI features do STEM and non-STEM teachers need and want for
educational tools to expand their resources?
Research has shown that, in computer science courses, teachers are actively
involved in monitoring and supporting student and improve their progress [11,
12]. Therefore, our study also explores how AI tools can effectively address the
needs of teachers in monitoring students’ struggles and providing support:
RQ3 What AI features do STEM and non-STEM teachers need and want in ed-
ucational tools to monitor and support students?
4
Methods
4.1
Data Analysis
In the initial phase of our research, we recruited eight K-12 teachers from schools
in North and South Carolina in the United States. We conducted semi-structured
interviews with both STEM and non-STEM teachers who used AI in their
classes. The inclusion criterion for the interview participants was that they had
previously implemented coding activities in their STEM or non-STEM class-
rooms. These activities ranged from short-hour-long modules to four-day lessons
that followed a use-modify-create progression [15], allowing students to complete
open-ended coding assignments. We used a pre-survey to select teachers based
on their grade level and experience by incorporating coding into their subject
areas. We selected eight participants, which have the requirements, (four from
each group). The interview questions categorized in to three sections: assess-
ment, resource planing and student monitoring. We asked about the way they
perform assessment and how they expand their resources, how they support stu-
dents, and how they monitor their struggles in their classes. Moreover, we asked
them how they have been using educational platforms, if any, to perform the
activities mentioned in their classes and if they are integrated with AI features,
which features is their favorite. Finally, we asked them about their expectations
from AI to enhance the belated platforms and what features they would like to
see added to meet their needs better.
Each interview lasted between 40 minutes and one hour, and participants
were compensated with a $40 gift card. The interviews were conducted via Zoom,
a video conferencing software, between March to May 2024 to accommodate the
teachers’ schedules. According to institutional review board (IRB) regulations,
our study qualified as exempt from IRB review. All interviews were recorded with
the teachers’ consent, and participants provided explicit consent at the start of
6
B. Riahi and V. Cateté
each interview for recording the sessions for data analysis purposes. The recorded
videos were accessible only to the researcher. Considering the complex and varied
factors, such as differences in course types (e.g., computing or non-computing),
teachers’ backgrounds [43], experiences, and students’ individual differences, we
chose a qualitative method via interview data collection and applied thematic
analysis [1, 30] to our transcripts to uncover in-depth insights into teachers’ needs
for AI tools [22].
The analysis process involved reviewing the recorded videos, transcribing the
interviews, and thoroughly reading the transcripts to explore the data compre-
hensively. During this process, we took notes and generated initial codes, which
were grouped into broader categories, referred to as themes, to address our re-
search questions. Two reviewers independently coded the data and compared
their findings. Any code identified by one reviewer but missed by the other was
added to the final list. These finalized codes were then grouped into thematic
categories. We categorized themes based on teachers’ perspectives in each sec-
tion of the interview: a) perspectives on adding AI features to current grading
and assessment tools, b) perspectives on adding AI features to enhance exist-
ing tools, and c) perspectives on adding AI features to improve monitoring of
students’ progress. We compared our tags and identified some that are com-
mon between STEM and non-STEM teachers, along with some unique themes.
We use ’ST’ and ’NST’ as abbreviations for STEM and non-STEM teachers,
respectively, followed by a number to indicate specific individuals.
5
Results
5.1
Evaluation For Assessment: AI Features Needed By Teachers
We found that STEM teachers often use formative assessments, such as quizzes in
group projects and final tests. In contrast, non-STEM teachers favor more qual-
itative and non-rubric-based assessments, such as portfolios and project visual-
izations, with a focus on independent work and manual feedback. Both groups
use checklists, rubrics, and online assessment systems and include projects in
their final assessments.
After analyzing the themes based on the transcript we categorize them based
on our research questions. Exploring RQ1 we found the following features that
teachers in both groups need and want for student assessment.
Customization Of Rubrics and Individualized Assessment While current
tools can create and customize questions based on curricula, enabling teachers to
manipulate rubrics is essential for fostering student growth. Additionally, teach-
ers require more individualized capabilities to address students’ specific learning
levels. For example ST1 "STEM teacher number1" mentioned I wish I could add
some features to my current platform and change the rubrics whenever needed
and based on everyone needs. It’s not just about getting the right answer—it’s
Title Suppressed Due to Excessive Length
7
Fig. 1. Percentage of Teachers (STEM & non-STEM) Needing Assessment Features
about seeing their progress, capability and their thought and giving them feed-
back that helps them improve step by step. Such an ideal platform should offer
deeper customization options and effectively measure students’ progress. In the
feedback section, formative assessment—a key strategy widely used by STEM
teachers—was highlighted as an essential feature.
Enhanced Feedback and Supportive Learning Resources Although ex-
isting platforms can trace code and grade based on rubrics or answer correctness,
more detailed and partial grading is needed for line-by-line code assessment. As
mentioned by ST3, teachers need to provide more detailed feedback, similar to
tools like Quizizz, which offer immediate feedback and hints for each individ-
ual answer. They prefer students to understand and see their feedback even for
a single line of code. More importantly, NST1 noted that after viewing their
feedback and grades, students should have access to relevant and recommended
tutorials, including documents, instructions, videos, and other resources to help
them correct their mistakes. This approach prevents students from being over-
whelmed by having to study all materials, thus reducing their workload. This
type of structured feedback and support is especially crucial in visual arts classes,
which often involve independent work.
Originality and Authenticity Non-STEM teachers emphasize AI generative
integration into EP that preserves the originality and authenticity of students’
work and incorporates simulation capabilities for assessments in fields like dance
and art. They seek functionalities that can evaluate creative assignments and
create real-world challenges by converting real-world projects into digital grading
systems. Given the diverse nature of their courses—such as art, dance these
teachers prioritize features that support the authenticity of students’ work. For
example, NST1, who teaches art, highlighted that “originality and authenticity
are the main requirements in art-related courses.”
Additionally, in courses like dance, art and design, simulation is a crucial
component of assessment. There is a clear need for tools that support simulation
and enable evaluation based on simulations and templates. For instance, NST4,
8
B. Riahi and V. Cateté
who specializes in dance, mentioned, “I create a simulation of dancing with Snap
and also use Scratch to animate the dance movement.” NST4 also pointed out
the need for online assessment tools that can evaluate simulations and convert
real-world work into software-based grading. Additionally, three STEM teachers
expressed concerns about the authenticity of their students’ code and assign-
ments. For example, ST2 stated that " although I encourage students for peer
learning in my classes, some students using online resources and do copy-pasting.
I need a feature that not allowed them to do so or show me parts of codes that
weren’t written by themselves."
Integration and Adaptability to LMS (Learning Management System)
and Simplified Access Control Switching between platforms and transfer-
ring information such as feedback, grades, and student progress data requires
significant time and effort. The Participants expressed a need for better integra-
tion and adaptability among the tools they use. For instance, ST1 highlighted
their use of the generative AI tool Copilot, built into the Microsoft Edge browser,
to create rubrics and instructional materials. They emphasized the necessity for
seamless integration of AI-generated content (e.g., from ChatGPT and Copilot)
into educational platforms like as Google Classroom. Additionally, they prefer
more comprehensive tools that encompass all necessary features to streamline
these tasks. Grading, one of the most burdensome and time-consuming tasks for
teachers, can be greatly facilitated by such integrated tools [35].
Moreover, one important feature, according to ST1, is SSO (Single Sign-On)
to simplify access control and enhance security, it also simplifies managing the
dashboard for both teachers and students. Therefore, this feature could enhance
the usability of the platforms. ST2 mentioned,
“Once students use the platforms such as scratch, if these platforms were com-
patible with SSO support features, students can log in to their Google Classroom
account, and from there, they can access other third party platforms, like snap,
scratch whatever it is. It has a lot of benefits, besides easy access, all students’
data will be managed under a single authentication system”.
The mentioned STEM teachers have reported that their current platforms (such
as syncing HMH with Canva or Admenton with each other) do not integrate
seamlessly with school platforms and Google Docs for reporting final grades
and feedback. On the other hand, NST1 and NST3 declared accessing different
platforms for learning and visualizing progress is essential. Moreover, they articu-
lated the need for features that simplify access to various resources, such as links
to music practice exercises, guidance on correcting techniques, and step-by-step
guides, which would be highly beneficial. Therefore, adaptability and flexibil-
ity are crucial AI generative abilities for those platforms to use in classroom
management.
Title Suppressed Due to Excessive Length
9
5.2
Evaluation For Course Development & Expanding Resources:
AI Features needed by teachers
Exploring RQ2a, we found that teachers highlighted the need for AI tools to
support the following: STEM teachers have identified the need for AI features
integration that assist with creating documents for each topic and generating
curricula.
Both STEM and non-STEM teachers mentioned that AI features integration
would be useful for developing course materials, including lesson plans, course
descriptions and image generation. These features include but are not limited to
automated content generation, personalized learning pathways, predictive ana-
lytics, resource optimization, and enhanced collaboration.
These features can enhance the platforms by analyzing students’ performance
and historical data to create personalized learning pathways and predict trends.
This can lead to showing the course materials in an individualized way to stu-
dents and changing the order based on their priorities based on their needs. This
content adjustment and customized resource allocation ensure that course mate-
rials are effectively utilized and available as needed. For example, ST1 and ST2
mentioned, “Sometimes students get mixed up about which course materials are
the most important, especially when they’re falling behind their classmates for
one reason or another. They really need a bit of guidance to point them towards
the materials they should focus on.”
On the other hand, NST1, who teaches art, noted:
“Since I teach art, simulating final artwork by uploading images and generat-
ing images with AI features would be enticing for me and my students. However,
AI-generated images often lack originality and human creativity. While these
tools enhance students’ self-confidence and provide a basic idea, I hope AI can
further inspire students to be more creative”.
Fig. 2. Percentage of Teachers (STEM & non-STEM) Needing Course Development
Features
Individualized (customized materials) and Accessible for English Language
Learners NST3 emphasized the importance of enhancing usability, suggesting
10
B. Riahi and V. Cateté
that the platform would be more effective if it were smarter, with features like
automated language level assessment, curriculum adjustments based on skills
that need reinforcement, and optimized distribution of learning materials ac-
cording to students’ individual learning needs. NST3 addressed the challenges
faced by non-English-speaking students using the Brisk Teaching platform and
expressed a desire for the platform to tailor English instruction to individual
levels, as well as facilitate the remote distribution of materials and quizzes for
continuous learning.
Peer Review and Collaborative Learning One factor both STEM and
non-STEM teachers highlighted, though in different ways, is the importance of
collaborative learning. Teachers highlighted the benefits of students collaborat-
ing on coding projects, such as discussing ideas, explaining their reasoning, and
negotiating solutions. They noted that collaboration enhances problem-solving
by fostering creative solutions and a deeper understanding of concepts. Addi-
tionally, students can learn effectively from their peers and teach each other
strategies and solutions.
They wish AI had the following features: AI could analyze students’ skills
and performance and optimize peer matching to ensure teammates would have a
balanced mix of abilities or could help each other effectively. Moreover, in terms
of solving conflicts and disagreements, AI could provide assistance by offering
resolutions. As ST1 mentioned, “Although CodeCombat is highly engaging for
pair work, I wish AI features could intervene to resolve disputes among students,
facilitate peer feedback, and help students be more creative”.
Real-World Connectivity Another feature appreciated by teacher ST4 is the
tool’s ability to connect with the real world and provide interactive experiences.
For example, being able to test their codes with devices mounting sensors that
interact with their code such as LEGO Mindstorms [36]. As ST4 noted: “Using
resources like circuit boards that come with a hands-on component or project
base is fantastic and engaging for students. I would prefer to use such resources
because they make learning more interactive and enjoyable.”
Gamification in Educational Tools Another feature that most current tools
incorporate is gamification. Gamification has demonstrated significant potential
in improving learning outcomes in K-12 education and positively impacts stu-
dent learning [5, 10]. It also increases motivation and engagement by providing
clear goals and a structured learning process [8]. Participants NST1, NST4, ST2,
and ST3 have experience using tools such as Kahoot, Quizizz, Spheros, Scratch,
and Code.org to teach BBP. For instance, Code.org utilizes block-based coding
and integrates gamification elements, enabling students to learn programming
through game-based challenges, which enhances their motivation to learn cod-
ing [8]. Based on our participants’ statements Integrate AI into interview would
enhance usability by incorporating features such as adaptive learning challenges
Title Suppressed Due to Excessive Length
11
within games, rewarding students accordingly, fostering competitive yet collab-
orative learning and matching peers effectively on platforms.
5.3
Evaluation of Student Monitoring
When we analyzed teachers’ responses about the educational platforms they
use to monitoring students strugles, we found that they utilize platforms such as
Edmentum, Quizizz and other school-related software. These tools provides real-
time analytics, allowing teachers to monitor student performance on assessments.
Moreover, teachers discussed features that can track students’ progress to closely
evaluate and identify specific areas where a student may be struggling would be
a great help.
Exploring RQ3, teachers from both groups expressed a desire for more control
over their students’ activities to prevent distractions. They are interested in built-
in features that allow them to mirror students’ screens and control desktops,
including the ability to block other tabs. Teachers also wish for help notification
suggestions, pop-up tips, deadline reminders, and features to increase student
motivation by simulating progress alerts and notifying them of their classmates’
study time.
Accommodations for Individualized needs Teachers including ST1, ST3,
NS2 have expressed their expectation that AI can be utilized to monitor and
accommodate students with special needs by customizing tools for more effec-
tive student tracking. For example, NST2 mentioned, “A feature that can adjust
expectations based on students’ improvement and level, providing them with in-
dividualized plans, would be helpful”. ST4 explained, “I wish to have productivity
tools that could identify where students are stuck and offer assistance, with pop-
ups if they need help while doing their assignments”.
Fig. 3. Percentage of Teachers (STEM & non-STEM) Needing Monitoring Features
12
B. Riahi and V. Cateté
6
Concerns regarding integrating AI into Educational
Platforms
6.1
Privacy
Here are some additional concerns teachers mentioned during their interviews
that, while unrelated to the three main sections, are still noteworthy. One sig-
nificant issue is privacy. Teachers expect AI platforms to include customizable
security and privacy settings that comply with school policies. This would enable
educators to safely integrate AI into classrooms and platforms while safeguard-
ing student data and adhering to privacy regulations. ST1 emphasized that AI
tools must adhere to strict privacy requirements; failure to meet these standards
could result in their usage being restricted or blocked. He mentioned, "the big
challenge with using AI platforms in schools is nsuring ethical use and protecting
student data. When Facebook acquired Oculus, we saw how a lack of adjustable
security settings led to schools losing access due to data harvesting concerns. The
same thing could happen with AI tools if they don’t meet strict privacy standards.
We need AI companies to provide flexible security and privacy settings so we can
safely integrate these tools into classrooms. Privacy is our top concern—just look
at how schools restrict access to DALL-E because of these limitations". There-
fore, ensuring that these platforms adhere to strict privacy guidelines to protect
individuals in all online platforms is essential [3]. It is not only important for
fostering trust among educators but also for ensuring that the integration of AI
technologies into educational settings is both sustainable and compliant. This
focus on privacy and security will be vital in enabling broader acceptance and
use of AI in classrooms without compromising student data.
7
Discussion
When comparing STEM and non-STEM teachers, it becomes clear that they
integrate educational platforms differently into their teaching and have distinct
expectations for AI integration into these platforms. STEM teachers typically
follow structured curricula with specific content goals and are accustomed to
using technology, often having already integrated advanced systems into their
practices.
In contrast, non-STEM teachers, particularly those in the humanities and
arts, often have more flexibility in their teaching plans. Their curricula may
not be as rigidly defined by standards, allowing for creativity and variation.
These teachers may not have prior experience using technology like computers
and AI in their classrooms. The challenge lies in addressing these knowledge
gaps. Non-STEM teachers may require more practical examples and personalized
support to grasp how AI can benefit their teaching practices. Due to technological
familiarity, teacher expertise and resource utilization, non-stem teachers require
more support in using technologies. Therefore,they may be starting from scratch
when it comes to integrating AI technology.z
Title Suppressed Due to Excessive Length
13
8
Conclusion
Through semi-structured interviews with 8 K-12 STEM and non-STEM teachers,
we explored preferences to integrate AI tools into curricula and coding activi-
ties. We examined the intricacies of student assessment, course development and
progress monitoring across different subject areas. We found valuable insight of
what are the AI features that are needed by STEM and non-STEM teachers
in the mentioned areas. Our findings indicate that both STEM and non-STEM
teachers require significant improvement to current features. Many changes are
feasible with recent advances in AI, such as developing analytical features for
tracking students improvements and their status of progress, more detail feed-
back and customized course materials. However, some needs, such as control of
student devices, are out of scope, and alternative means of student engagement
such as gamification may be explored.
References
1. Alhojailan, M. I. Thematic Analysis: A Critical Review of Its Process and Eval-
uation. WEI International European Academic Conference Proceedings, Zagreb,
Croatia (2012).
2. Begosso, L. C., Begosso, L. R., and Aragao, N. An analysis of block-based pro-
gramming environments for CS1. *IEEE Frontiers in Education Conference (FIE)*.
Available at: https://ieeexplore.ieee.org/document/9273982, 2020.
3. Behfar, A., Shrestha, A., and Al-Ameen, M. N. A First Look into Fake Profiles
on Social Media through the Lens of Victim’s Experiences. *Companion Publica-
tion of the 2024 Conference on Computer-Supported Cooperative Work and Social
Computing*, 444–450, 2024.
4. Bi, T., Fankhauser, P., Bellicoso, D., and Hutter, M. Real-time dance generation to
music for a legged robot. In *2018 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)*, pages 1038–1044. IEEE, 2018.
5. Buckley, P., Doyle, E. Gamification and Student Motivation. Interactive Learning
Environments, vol. 24, no. 6, pp. 1162–1175. Taylor & Francis (2016).
6. Burgiel, H., Sadler, P. M., Sonnert, G. The Association of High School Computer
Science Content and Pedagogy with Students’ Success in College Computer Science.
ACM Transactions on Computing Education (TOCE), vol. 20, no. 2, pp. 1–21. ACM,
New York, NY (2020).
7. Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., and Eltoukhy, M. Assess-
ing elementary students’ computational thinking in everyday reasoning and robotics
programming. Computers & Education 109 (2017), 162–175.
8. Choi, W. C., and Choi, I. C. Exploring the Impact of Code.org’s Block-Based Coding
Curriculum on Student Motivation in K-12 Education. In *Proceedings of the 2024
12th International Conference on Information and Education Technology (ICIET)*,
pages 93–97, 2024. DOI: 10.1109/ICIET60671.2024.10542810.
9. Compañ-Rosique, P., Molina-Carmona, R., Satorre-Cuerda, R. Effects of Teaching
Methodology on the Students’ Academic Performance in an Introductory Course
of Programming. Learning and Collaboration Technologies: Designing Learning Ex-
periences, 6th International Conference, LCT 2019, Held as Part of the 21st HCI
International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Pro-
ceedings, Part I, pp. 332–345. Springer (2019).
14
B. Riahi and V. Cateté
10. Dehghanzadeh, H., Farrokhnia, M., Dehghanzadeh, H., Taghipour, K., Noroozi, O.
Using Gamification to Support Learning in K-12 Education: A Systematic Literature
Review. British Journal of Educational Technology, vol. 55, no. 1, pp. 34–70. Wiley
Online Library (2024).
11. Dong, Y., Marwan, S., Shabrina, P., Price, T., Barnes, T. Using Student Trace
Logs to Determine Meaningful Progress and Struggle during Programming Problem
Solving. International Educational Data Mining Society, pp. 439–445. ERIC, Online
(2021).
12. Dong, Y., Shabrina, P., Marwan, S., Barnes, T. You Really Need Help: Exploring
Expert Reasons for Intervention During Block-based Programming Assignments.
Proceedings of the 17th ACM Conference on International Computing Education
Research, pp. 334–346. ACM, New York, NY (2021).
13. Figueiredo, J., García-Peñalvo, F. Teaching and Learning Strategies for Intro-
ductory Programming in University Courses. Ninth International Conference on
Technological Ecosystems for Enhancing Multiculturality (TEEM’21), pp. 746–751.
ACM, New York, NY (2021).
14. Fitria, T. N. Artificial Intelligence (AI) in Education: Using AI Tools for Teaching
and Learning Process. Prosiding Seminar Nasional & Call for Paper STIE AAS, vol.
4, no. 1, pp. 134–147 (2021).
15. Franklin, D., Coenraad, M., Palmer, J., Eatinger, D., Zipp, A., Anaya, M., White,
M., Pham, H., Gökdemir, O., and Weintrop, D. An analysis of Use-Modify-Create
pedagogical approach’s success in balancing structure and student agency. *Proceed-
ings of the 2020 ACM Conference on International Computing Education Research*,
14–24, 2020.
16. Graßl, I., and Fraser, G. Girls rocking the code: Gender-dependent stereotypes,
engagement & comprehension in music programming. Proceedings of the 46th In-
ternational Conference on Software Engineering: Software Engineering Education
and Training (2024), 115–126.
17. Guzdial, M. Creating an on-ramp to programming for arts and humanities students
with Teaspoon languages and custom block languages. Proceedings of the 55th ACM
Technical Symposium on Computer Science Education V. 2 (2024), 1898–1898.
18. Harvey, B., Garcia, D. D., Barnes, T., Titterton, N., Armendariz, D., Segars, L.,
Lemon, E., Morris, S., and Paley, J. Snap! (Build Your Own Blocks). In Proceedings
of the 44th ACM Technical Symposium on Computer Science Education, pp. 759–
759 (2013).
19. Holz, H. Design, Development, and Evaluation of Research Tools for Evidence-
Based Learning: A Digital Game-Based Spelling Training for German Primary
School Children. Universität Tübingen (2020).
20. Hsu, T.-C., Chang, S.-C., and Hung, Y.-T. How to learn and how to teach com-
putational thinking: Suggestions based on a review of the literature. Computers &
Education 126 (2018), 296–310.
21. Huang, W., Looi, C.-K., Yeter, I. H. Comparison of STEM, Non-STEM, and Mixed-
Disciplines Pre-service Teachers’ Early Conceptions about Computational Thinking.
CTE-STEM 2022 Conference, pp. 1–6. TU Delft OPEN, Online (2022).
22. Johri, A. Conducting interpretive research in engineering education using qualita-
tive and ethnographic methods. *Cambridge Handbook of Engineering Education
Research*, 551–570. Cambridge University Press, Cambridge, UK, 2014.
23. Kalelioğlu, F. A New Way of Teaching Programming Skills to K-12 Students:
Code.org. Computers in Human Behavior, vol. 52, pp. 200–210. Elsevier (2015).
Title Suppressed Due to Excessive Length
15
24. Koulouri, T., Lauria, S., Macredie, R. D. Teaching Introductory Programming: A
Quantitative Evaluation of Different Approaches. ACM Transactions on Computing
Education (TOCE), vol. 14, no. 4, pp. 1–28. ACM, New York, NY (2014).
25. Lee, S. J., Francom, G. M., Nuatomue, J. Computer Science Education and K-12
Students’ Computational Thinking: A Systematic Review. International Journal of
Educational Research, vol. 114, article 102008. Elsevier (2022).
26. Leyzberg, D., Moretti, C. Teaching CS to CS Teachers: Addressing the Need for
Advanced Content in K-12 Professional Development. Proceedings of the 2017 ACM
SIGCSE Technical Symposium on Computer Science Education, pp. 369–374 (2017).
27. Liao, C. H., Chiang, C.-T., Chen, I.-C., and Parker, K. R. Exploring the relation-
ship between computational thinking and learning satisfaction for non-STEM college
students. International Journal of Educational Technology in Higher Education 19,
1 (2022), 43.
28. Limke, A., Hill, M., Cateté, V., Barnes, T. A Survey of K-12 Teacher Needs for an
Online Programming Learning System. Extended Abstracts of the CHI Conference
on Human Factors in Computing Systems, pp. 1–7. ACM, New York, NY (2024).
29. Lin, P., Van Brummelen, J. Engaging Teachers to Co-design Integrated AI Cur-
riculum for K-12 Classrooms. Proceedings of the 2021 CHI Conference on Human
Factors in Computing Systems, pp. 1–12. ACM, New York, NY (2021)
30. Maguire, M., Delahunt, B. Doing a Thematic Analysis: A Practical, Step-by-Step
Guide for Learning and Teaching Scholars. All Ireland Journal of Higher Education,
vol. 9, no. 3 (2017).
31. Mahtani, R., Gill, C., Umstead, K., Behnam Asl, S., Tully, K., et al. Online partic-
ipatory tools in classroom and research settings. *DS 117: Proceedings of the 24th
International Conference on Engineering and Product Design Education (E&PDE
2022), London South Bank University, London, UK, September 8–9, 2022*, 2022.
32. Marji, M. Learn to program with Scratch: A visual introduction to programming
with games, art, science, and math. No Starch Press, 2014.
33. Milliken, A., Cateté, V., Limke, A., Gransbury, I., Chipman, H., Dong, Y., Barnes,
T. Exploring and Influencing Teacher Grading for Block-based Programs through
Rubrics and the GradeSnap Tool. Proceedings of the 17th ACM Conference on
International Computing Education Research, pp. 101–114. ACM, New York, NY
(2021).
34. Milliken, A., Wang, W., Cateté, V., Martin, S., Gomes, N., Dong, Y., Harred,
R., Isvik, A., Barnes, T., Price, T., et al. Planit! A New Integrated Tool to Help
Novices Design for Open-Ended Projects. Proceedings of the 52nd ACM Technical
Symposium on Computer Science Education, pp. 232–238. ACM, New York, NY
(2021).
35. Milliken, A. A. Redesigning How Teachers Learn, Teach, and Assess Computing
with Block-Based Languages in Their Classroom. North Carolina State University,
Raleigh, NC (2021).
36. Moraiti, I., Fotoglou, A., and Drigas, A. Coding with Block Programming Lan-
guages in Educational Robotics and Mobiles, Improve Problem Solving, Creativity
& Critical Thinking Skills. *International Journal of Interactive Mobile Technolo-
gies*, 16(20), 2022.
37. Perin, A. P. J., Silva, D. E., and Valentim, N. Investigating block programming
tools in high school to support Education 4.0: A Systematic Mapping Study. Infor-
matics in Education, 22(3), 463–498, 2023.
38. Purgina, M., Mozgovoy, M., Blake, J. WordBricks: Mobile Technology and Visual
Grammar Formalism for Gamification of Natural Language Grammar Acquisition.
16
B. Riahi and V. Cateté
Journal of Educational Computing Research, vol. 58, no. 1, pp. 126–159. SAGE
Publications (2020).
39. Repenning, A., Webb, D. C., Brand, C., Gluck, F., Grover, R., Miller, S., Nickerson,
H., and Song, M. Beyond Minecraft: Facilitating computational thinking through
modeling and programming in 3D. *IEEE Computer Graphics and Applications*,
34(3), 68–71. IEEE, 2014.
40. Rößling, G., Joy, M., Moreno, A., Radenski, A., Malmi, L., Kerren, A., Naps,
T., Ross, R. J., Clancy, M., Korhonen, A., et al. Enhancing Learning Management
Systems to Better Support Computer Science Education. ACM SIGCSE Bulletin,
vol. 40, no. 4, pp. 142–166. ACM, New York, NY (2008).
41. Shamir, M., Kocherovsky, M., Chung, C. A Paradigm for Teaching Math and
Computer Science Concepts in K-12 Learning Environment by Integrating Cod-
ing, Animation, Dance, Music, and Art. 2019 IEEE Integrated STEM Education
Conference (ISEC), pp. 62–68. IEEE (2019).
42. Sullivan, A., Strawhacker, A., Bers, M. U. Dancing, Drawing, and Dramatic
Robots: Integrating Robotics and the Arts to Teach Foundational STEAM Con-
cepts to Young Children. Robotics in STEM Education: Redesigning the Learning
Experience, pp. 231–260. Springer (2017).
43. Tagare, D. Factors That Predict K-12 Teachers’ Ability to Apply Computational
Thinking Skills. *ACM Transactions on Computing Education*, 24(1), 1–26. ACM
New York, NY, 2024.
44. Tekdal, M. Trends and development in research on computational thinking. Edu-
cation and Information Technologies 26, 5 (2021), 6499–6529.
45. Uddin, S., Imam, T., and Mozumdar, M. Research interdisciplinarity: STEM versus
non-STEM. Scientometrics, 126, 603–618 (2021)
46. Wing, J. M. Computational Thinking. Communications of the ACM, vol. 49, no.
3, pp. 33–35. ACM, New York, NY (2006).
47. Yim, I. H. Y., Su, J. Artificial Intelligence (AI) Learning Tools in K-12 Education:
A Scoping Review. Journal of Computers in Education, pp. 1–39. Springer (2024).
|
Comparative Analysis of STEM and non-STEM Teachers' Needs for Integrating AI into Educational Environments Bahare Riahi1 Veronica Cateté2 North Carolina State University, Raleigh, NC 27606, USA Abstract. There is an increasing imperative to integrate programming platforms within AI frameworks to enhance educational tasks for both teachers and students. However, commonly used platforms such as Code.org, Scratch, and Snap fall short of providing the desired AI features and lack adaptability for interdisciplinary applications. This study explores how educational platforms can be improved by incorporating AI and analytics features to create more effective learning environments across various subjects and domains. We interviewed 8 K-12 teachers and asked their practices and needs while using any block-based programming (BBP) platform in their classes. We asked for their approaches in assessment, course development and expansion of resources, and student monitoring in their classes. Thematic analysis of the interview transcripts revealed both commonalities and differences in the AI tools needed between the STEM and non-STEM groups. Our results indicated advanced AI features that could promote BBP platforms. Both groups stressed the need for integrity and plagiarism checks, AI adaptability, customized rubrics, and detailed feedback in assessments. Non-STEM teachers also emphasized the importance of creative assignments and qualitative assessments. Regarding resource development, both AI tools desired for updating curricula, tutoring libraries, and generative AI features. Non-STEM teachers were particularly interested in supporting creative endeavors, such as art simulations. For student monitoring, both groups prioritized desktop control, daily tracking, behavior monitoring, and distraction prevention tools. Our findings identify specific AI-enhanced features needed by K-12 teachers across various disciplines and lay the foundation for creating more efficient, personalized, and engaging educational experiences. Keywords: AI-educational platforms · AI Features · k-12 Teachers' needs. 1 Introduction It is no longer assumed that managing classroom and teaching activities in computational teaching and programming instruction follows a standardized approach [24], with fixed strategies and schedules for instructors. Various factors 18 Sep 2025 2 B. Riahi and V. Cateté influence teachers' strategies, including course type, differences in students' proficiency levels [13], teachers' conceptual and educational backgrounds, experience [26, 21], and methodologies [9]. Given the complexity and variety of teachers' activities in classroom management and enhancing students' learning, supplementary tools are essential [40, 31]. These tools support tasks such as instruction and assessments, including setting rubrics, projects, and quizzes [33]. Prior research has explored teachers' general needs for programming learning systems [28] and developed tools for block-based programming (BBP) environments, such as assignment hubs, grading tools, and planning interfaces [18, 33]. However, these needs vary significantly across instructional fields, especially in non-computing or non-STEM courses. We aim to identify teachers' needs and preferences regarding the integration of AI features in educational platforms (EP), with a focus on three key areas: assessment; course development and resource expansion; and monitoring and supporting students. Additionally, we address the research gap concerning non-STEM teachers' needs in using AI in block-based programming (BBP). Recognizing this gap we explored how AI could be integrated into educational platforms to enhance its accessibility and effectiveness for a broader range of educational needs, spanning both STEM (Science, Technology, Engineering, and Mathematics) and non-STEM (Arts, Social Studies, and Humanities) disciplines [45]. Thus, ensuring the adaptability and accessibility of these platforms is of fundamental importance. Additionally, identifying customized AI analytics features to meet the diverse needs of users is essential for creating more effective learning environments To achieve these objectives, we conducted semi-structured interviews with eight K-12 teachers-four from STEM and four from non-STEM disciplines-to understand their approaches to assessment, course development and resource expansion, and student monitoring and support. We explored their methods, strategies, current use of AI tools, and the features they would like to see added. Thematic analysis was used to analyze their responses for each focus area. For course development and expansion, teachers highlighted the need for built-in code tracing, dynamic document updates, and customized course materials tailored to individual student needs and levels. Additionally, they emphasized the importance of monitoring features such as customizable tools for individualized growth tracking, help notifications, pop-up tips, reminders, and motivational tools. 2 Related Works With widespread advancements in technology and computer science, there is a growing need to educate students in computer science and computational thinking from an early age [44]. With the emphasis on developing computational thinking (CT) and creativity in K-12 education increasing [25], and considering benefits such as improved future college performance [6], more schools are incorporating computer science as a core subject or integrating it into their curTitle Suppressed Due to Excessive Length 3 ricula. Computational thinking (CT), which was introduced by Wing [46], is the ability to solve problems, create solutions through abstraction and algorithmic thinking, and comprehend human behavior by utilizing the core principles of computer science [7]. There has been growing interest in learning CT for all students regardless of their discipline [20], as it can be integrated into both STEM (Science, Technology, Engineering, and Mathematics) and non-STEM (Human Societies, Language, Communication and Culture, and History and Art) [27] disciplines. In K-12 classes, students use block-based programming (BBP) using various tools, such as Code.org [23] to develop their CT skills. The adoption of these tools reflects a wider movement in education, in which supplementary technologies and AI are used to increase classroom efficiency and productivity [14]. These innovations not only simplify administrative duties but also enhance student engagement and allow for personalized learning experiences. Considering the ongoing challenges in performing teaching tasks, there is a significant demand for such tools to support teachers with curriculum-focused activities, instruction, student progress monitoring, and the creation of thorough assessments. AI tools play a pivotal role in supporting K-12 computer science educators by streamlining assessment processes [33] and providing substantial assistance to both teachers and students [34]. As there is a relatively small body of literature focused on integrating AI tools into block-based programming (BBP) in both STEM and non-STEM classes, we argue that the complexities of diverse assessment modules and distinct pedagogical approaches can pose significant challenges for teachers [33]. Moreover, AI can customize learning experiences by adjusting content and progression to meet students' specific needs. By leveraging AI in these ways, educators can enhance their ability to provide effective performance and supportive feedback to their students [29]. Table 1. Details of the participants, teaching Grade level and their fields in both groups of STEM and non-STEM non-STEM Teachers (NST) Grade Level STEM Teachers (ST) Grade Level Art-3D 9-12 CS principles/ math 9-12 English (ESL) 11 Computer Science 11-12 Music 9-12 Math 10 Dance 6-8 Math/Science/IT 10 2.1 Integration block based programming in non-stem classes BBP has gained significant attention in various educational settings, including non-STEM classes such as social science, humanities, dance, English for Speakers of Other Languages (ESOL), and art [42, 17]. Incorporating BBP in non-STEM classes can be beneficial for students to enhance their computational thinking abilities and develop skills that are essential in the digital age. Scholars showed 4 B. Riahi and V. Cateté that Integrating programming into art and music education can significantly enhance the learning experience, making it more appealing [16]. Using BBP programming in interdisciplinary and non-STEM areas like Arts & Crafts and Music, using tools like Scratch [32], reduces barriers for teachers and students, enhances engagement, increases the creativity of students and practical applications of programming across different subjects-making learning more exciting and accessible for everyone [37]. Given the need for educational platforms to be accessible across both STEM and non-STEM subjects, and the importance of facilitating class management for teachers, it's crucial that these platforms include features that meet the diverse requirements of all user groups. Art, Music, Dance and English for Speakers of Other Languages (ESOL) Education Students use educational visual programming tools like Scratch [2] and Snap! to enhance their musical understanding and compositional skills in subjects like music and dance. Integrating block-based programming with dance not only improves students' computational skills and artistic abilities but also provides teachers with effective ways to assess students' progress. By using sound and music blocks, students can compose melodies, manipulate pitch, create rhythmic patterns with loops and conditional statements, and deepen their comprehension of musical timing and structures [41]. Additionally, through musical BBP, students can animate characters and synchronize their movements with music using motion and sound blocks, enabling dynamic dance choreography that responds to musical cues or user interactions [4]. Scholars have emphasized that 3D programming can further enhance creativity in computational thinking [39], and using Scratch to animate dance moves and synchronize music significantly improves students' understanding of computational thinking in the context of dance, music, and art [41]. BBP can be used to enhance foreign language learning and spelling skills [19] among students. It has been shown that by using jigsaw-like placement of colored blocks, students can explore and understand grammar structures and vocabulary, get immediate feedback and correct their mistakes, and make different constructions of the sentence [38]. There is a growing need to support and equip [47] teachers in teaching and managing their classes more effectively. By applying various BBP tools in these areas, teachers encounter different needs specific to their subjects. To improve accessibility and keep these tools up-to-date, we aim to explore teachers' requirements for adding AI features, particularly in non-STEM areas. 3 Research Questions We aim to answer our research questions in the three area of assessment, course development and student monitoring. Our first research question explored how to better address the assessment needs of STEM and non-STEM teachers across various domains. Title Suppressed Due to Excessive Length 5 RQ1 What AI features do STEM and non-STEM teachers need and want in the educational tools for student assessments? Another important consideration is how well the dashboard aligns with curricula, teaching approaches, and the needs of various subject areas. Therefore, the second research question is: RQ2 What AI features do STEM and non-STEM teachers need and want for educational tools to expand their resources? Research has shown that, in computer science courses, teachers are actively involved in monitoring and supporting student and improve their progress [11, 12]. Therefore, our study also explores how AI tools can effectively address the needs of teachers in monitoring students' struggles and providing support: RQ3 What AI features do STEM and non-STEM teachers need and want in educational tools to monitor and support students? 4 Methods 4.1 Data Analysis In the initial phase of our research, we recruited eight K-12 teachers from schools in North and South Carolina in the United States. We conducted semi-structured interviews with both STEM and non-STEM teachers who used AI in their classes. The inclusion criterion for the interview participants was that they had previously implemented coding activities in their STEM or non-STEM classrooms. These activities ranged from short-hour-long modules to four-day lessons that followed a use-modify-create progression [15], allowing students to complete open-ended coding assignments. We used a pre-survey to select teachers based on their grade level and experience by incorporating coding into their subject areas. We selected eight participants, which have the requirements, (four from each group). The interview questions categorized in to three sections: assessment, resource planing and student monitoring. We asked about the way they perform assessment and how they expand their resources, how they support students, and how they monitor their struggles in their classes. Moreover, we asked them how they have been using educational platforms, if any, to perform the activities mentioned in their classes and if they are integrated with AI features, which features is their favorite. Finally, we asked them about their expectations from AI to enhance the belated platforms and what features they would like to see added to meet their needs better. Each interview lasted between 40 minutes and one hour, and participants were compensated with a $40 gift card. The interviews were conducted via Zoom, a video conferencing software, between March to May 2024 to accommodate the teachers' schedules. According to institutional review board (IRB) regulations, our study qualified as exempt from IRB review. All interviews were recorded with the teachers' consent, and participants provided explicit consent at the start of 6 B. Riahi and V. Cateté each interview for recording the sessions for data analysis purposes. The recorded videos were accessible only to the researcher. Considering the complex and varied factors, such as differences in course types (e.g., computing or non-computing), teachers' backgrounds [43], experiences, and students' individual differences, we chose a qualitative method via interview data collection and applied thematic analysis [1, 30] to our transcripts to uncover in-depth insights into teachers' needs for AI tools [22]. The analysis process involved reviewing the recorded videos, transcribing the interviews, and thoroughly reading the transcripts to explore the data comprehensively. During this process, we took notes and generated initial codes, which were grouped into broader categories, referred to as themes, to address our research questions. Two reviewers independently coded the data and compared their findings. Any code identified by one reviewer but missed by the other was added to the final list. These finalized codes were then grouped into thematic categories. We categorized themes based on teachers' perspectives in each section of the interview: a) perspectives on adding AI features to current grading and assessment tools, b) perspectives on adding AI features to enhance existing tools, and c) perspectives on adding AI features to improve monitoring of students' progress. We compared our tags and identified some that are common between STEM and non-STEM teachers, along with some unique themes. We use 'ST' and 'NST' as abbreviations for STEM and non-STEM teachers, respectively, followed by a number to indicate specific individuals. 5 Results 5.1 Evaluation For Assessment: AI Features Needed By Teachers We found that STEM teachers often use formative assessments, such as quizzes in group projects and final tests. In contrast, non-STEM teachers favor more qualitative and non-rubric-based assessments, such as portfolios and project visualizations, with a focus on independent work and manual feedback. Both groups use checklists, rubrics, and online assessment systems and include projects in their final assessments. After analyzing the themes based on the transcript we categorize them based on our research questions. Exploring RQ1 we found the following features that teachers in both groups need and want for student assessment. Customization Of Rubrics and Individualized Assessment While current tools can create and customize questions based on curricula, enabling teachers to manipulate rubrics is essential for fostering student growth. Additionally, teachers require more individualized capabilities to address students' specific learning levels. For example ST1 "STEM teacher number1" mentioned I wish I could add some features to my current platform and change the rubrics whenever needed and based on everyone needs. It's not just about getting the right answer-it's Title Suppressed Due to Excessive Length 7 Fig. 1. Percentage of Teachers (STEM & non-STEM) Needing Assessment Features about seeing their progress, capability and their thought and giving them feedback that helps them improve step by step. Such an ideal platform should offer deeper customization options and effectively measure students' progress. In the feedback section, formative assessment-a key strategy widely used by STEM teachers-was highlighted as an essential feature. Enhanced Feedback and Supportive Learning Resources Although existing platforms can trace code and grade based on rubrics or answer correctness, more detailed and partial grading is needed for line-by-line code assessment. As mentioned by ST3, teachers need to provide more detailed feedback, similar to tools like Quizizz, which offer immediate feedback and hints for each individual answer. They prefer students to understand and see their feedback even for a single line of code. More importantly, NST1 noted that after viewing their feedback and grades, students should have access to relevant and recommended tutorials, including documents, instructions, videos, and other resources to help them correct their mistakes. This approach prevents students from being overwhelmed by having to study all materials, thus reducing their workload. This type of structured feedback and support is especially crucial in visual arts classes, which often involve independent work. Originality and Authenticity Non-STEM teachers emphasize AI generative integration into EP that preserves the originality and authenticity of students' work and incorporates simulation capabilities for assessments in fields like dance and art. They seek functionalities that can evaluate creative assignments and create real-world challenges by converting real-world projects into digital grading systems. Given the diverse nature of their courses-such as art, dance these teachers prioritize features that support the authenticity of students' work. For example, NST1, who teaches art, highlighted that "originality and authenticity are the main requirements in art-related courses." Additionally, in courses like dance, art and design, simulation is a crucial component of assessment. There is a clear need for tools that support simulation and enable evaluation based on simulations and templates. For instance, NST4, 8 B. Riahi and V. Cateté who specializes in dance, mentioned, "I create a simulation of dancing with Snap and also use Scratch to animate the dance movement." NST4 also pointed out the need for online assessment tools that can evaluate simulations and convert real-world work into software-based grading. Additionally, three STEM teachers expressed concerns about the authenticity of their students' code and assignments. For example, ST2 stated that " although I encourage students for peer learning in my classes, some students using online resources and do copy-pasting. I need a feature that not allowed them to do so or show me parts of codes that weren't written by themselves." Integration and Adaptability to LMS (Learning Management System) and Simplified Access Control Switching between platforms and transferring information such as feedback, grades, and student progress data requires significant time and effort. The Participants expressed a need for better integration and adaptability among the tools they use. For instance, ST1 highlighted their use of the generative AI tool Copilot, built into the Microsoft Edge browser, to create rubrics and instructional materials. They emphasized the necessity for seamless integration of AI-generated content (e.g., from ChatGPT and Copilot) into educational platforms like as Google Classroom. Additionally, they prefer more comprehensive tools that encompass all necessary features to streamline these tasks. Grading, one of the most burdensome and time-consuming tasks for teachers, can be greatly facilitated by such integrated tools [35]. Moreover, one important feature, according to ST1, is SSO (Single Sign-On) to simplify access control and enhance security, it also simplifies managing the dashboard for both teachers and students. Therefore, this feature could enhance the usability of the platforms. ST2 mentioned, "Once students use the platforms such as scratch, if these platforms were compatible with SSO support features, students can log in to their Google Classroom account, and from there, they can access other third party platforms, like snap, scratch whatever it is. It has a lot of benefits, besides easy access, all students' data will be managed under a single authentication system". The mentioned STEM teachers have reported that their current platforms (such as syncing HMH with Canva or Admenton with each other) do not integrate seamlessly with school platforms and Google Docs for reporting final grades and feedback. On the other hand, NST1 and NST3 declared accessing different platforms for learning and visualizing progress is essential. Moreover, they articulated the need for features that simplify access to various resources, such as links to music practice exercises, guidance on correcting techniques, and step-by-step guides, which would be highly beneficial. Therefore, adaptability and flexibility are crucial AI generative abilities for those platforms to use in classroom management. Title Suppressed Due to Excessive Length 9 5.2 Evaluation For Course Development & Expanding Resources: AI Features needed by teachers Exploring RQ2a, we found that teachers highlighted the need for AI tools to support the following: STEM teachers have identified the need for AI features integration that assist with creating documents for each topic and generating curricula. Both STEM and non-STEM teachers mentioned that AI features integration would be useful for developing course materials, including lesson plans, course descriptions and image generation. These features include but are not limited to automated content generation, personalized learning pathways, predictive analytics, resource optimization, and enhanced collaboration. These features can enhance the platforms by analyzing students' performance and historical data to create personalized learning pathways and predict trends. This can lead to showing the course materials in an individualized way to students and changing the order based on their priorities based on their needs. This content adjustment and customized resource allocation ensure that course materials are effectively utilized and available as needed. For example, ST1 and ST2 mentioned, "Sometimes students get mixed up about which course materials are the most important, especially when they're falling behind their classmates for one reason or another. They really need a bit of guidance to point them towards the materials they should focus on." On the other hand, NST1, who teaches art, noted: "Since I teach art, simulating final artwork by uploading images and generating images with AI features would be enticing for me and my students. However, AI-generated images often lack originality and human creativity. While these tools enhance students' self-confidence and provide a basic idea, I hope AI can further inspire students to be more creative". Fig. 2. Percentage of Teachers (STEM & non-STEM) Needing Course Development Features Individualized (customized materials) and Accessible for English Language Learners NST3 emphasized the importance of enhancing usability, suggesting 10 B. Riahi and V. Cateté that the platform would be more effective if it were smarter, with features like automated language level assessment, curriculum adjustments based on skills that need reinforcement, and optimized distribution of learning materials according to students' individual learning needs. NST3 addressed the challenges faced by non-English-speaking students using the Brisk Teaching platform and expressed a desire for the platform to tailor English instruction to individual levels, as well as facilitate the remote distribution of materials and quizzes for continuous learning. Peer Review and Collaborative Learning One factor both STEM and non-STEM teachers highlighted, though in different ways, is the importance of collaborative learning. Teachers highlighted the benefits of students collaborating on coding projects, such as discussing ideas, explaining their reasoning, and negotiating solutions. They noted that collaboration enhances problem-solving by fostering creative solutions and a deeper understanding of concepts. Additionally, students can learn effectively from their peers and teach each other strategies and solutions. They wish AI had the following features: AI could analyze students' skills and performance and optimize peer matching to ensure teammates would have a balanced mix of abilities or could help each other effectively. Moreover, in terms of solving conflicts and disagreements, AI could provide assistance by offering resolutions. As ST1 mentioned, "Although CodeCombat is highly engaging for pair work, I wish AI features could intervene to resolve disputes among students, facilitate peer feedback, and help students be more creative". Real-World Connectivity Another feature appreciated by teacher ST4 is the tool's ability to connect with the real world and provide interactive experiences. For example, being able to test their codes with devices mounting sensors that interact with their code such as LEGO Mindstorms [36]. As ST4 noted: "Using resources like circuit boards that come with a hands-on component or project base is fantastic and engaging for students. I would prefer to use such resources because they make learning more interactive and enjoyable." Gamification in Educational Tools Another feature that most current tools incorporate is gamification. Gamification has demonstrated significant potential in improving learning outcomes in K-12 education and positively impacts student learning [5, 10]. It also increases motivation and engagement by providing clear goals and a structured learning process [8]. Participants NST1, NST4, ST2, and ST3 have experience using tools such as Kahoot, Quizizz, Spheros, Scratch, and Code.org to teach BBP. For instance, Code.org utilizes block-based coding and integrates gamification elements, enabling students to learn programming through game-based challenges, which enhances their motivation to learn coding [8]. Based on our participants' statements Integrate AI into interview would enhance usability by incorporating features such as adaptive learning challenges Title Suppressed Due to Excessive Length 11 within games, rewarding students accordingly, fostering competitive yet collaborative learning and matching peers effectively on platforms. 5.3 Evaluation of Student Monitoring When we analyzed teachers' responses about the educational platforms they use to monitoring students strugles, we found that they utilize platforms such as Edmentum, Quizizz and other school-related software. These tools provides realtime analytics, allowing teachers to monitor student performance on assessments. Moreover, teachers discussed features that can track students' progress to closely evaluate and identify specific areas where a student may be struggling would be a great help. Exploring RQ3, teachers from both groups expressed a desire for more control over their students' activities to prevent distractions. They are interested in builtin features that allow them to mirror students' screens and control desktops, including the ability to block other tabs. Teachers also wish for help notification suggestions, pop-up tips, deadline reminders, and features to increase student motivation by simulating progress alerts and notifying them of their classmates' study time. Accommodations for Individualized needs Teachers including ST1, ST3, NS2 have expressed their expectation that AI can be utilized to monitor and accommodate students with special needs by customizing tools for more effective student tracking. For example, NST2 mentioned, "A feature that can adjust expectations based on students' improvement and level, providing them with individualized plans, would be helpful". ST4 explained, "I wish to have productivity tools that could identify where students are stuck and offer assistance, with popups if they need help while doing their assignments". Fig. 3. Percentage of Teachers (STEM & non-STEM) Needing Monitoring Features 12 B. Riahi and V. Cateté 6 Concerns regarding integrating AI into Educational Platforms 6.1 Privacy Here are some additional concerns teachers mentioned during their interviews that, while unrelated to the three main sections, are still noteworthy. One significant issue is privacy. Teachers expect AI platforms to include customizable security and privacy settings that comply with school policies. This would enable educators to safely integrate AI into classrooms and platforms while safeguarding student data and adhering to privacy regulations. ST1 emphasized that AI tools must adhere to strict privacy requirements; failure to meet these standards could result in their usage being restricted or blocked. He mentioned, "the big challenge with using AI platforms in schools is nsuring ethical use and protecting student data. When Facebook acquired Oculus, we saw how a lack of adjustable security settings led to schools losing access due to data harvesting concerns. The same thing could happen with AI tools if they don't meet strict privacy standards. We need AI companies to provide flexible security and privacy settings so we can safely integrate these tools into classrooms. Privacy is our top concern-just look at how schools restrict access to DALL-E because of these limitations". Therefore, ensuring that these platforms adhere to strict privacy guidelines to protect individuals in all online platforms is essential [3]. It is not only important for fostering trust among educators but also for ensuring that the integration of AI technologies into educational settings is both sustainable and compliant. This focus on privacy and security will be vital in enabling broader acceptance and use of AI in classrooms without compromising student data. 7 Discussion When comparing STEM and non-STEM teachers, it becomes clear that they integrate educational platforms differently into their teaching and have distinct expectations for AI integration into these platforms. STEM teachers typically follow structured curricula with specific content goals and are accustomed to using technology, often having already integrated advanced systems into their practices. In contrast, non-STEM teachers, particularly those in the humanities and arts, often have more flexibility in their teaching plans. Their curricula may not be as rigidly defined by standards, allowing for creativity and variation. These teachers may not have prior experience using technology like computers and AI in their classrooms. The challenge lies in addressing these knowledge gaps. Non-STEM teachers may require more practical examples and personalized support to grasp how AI can benefit their teaching practices. Due to technological familiarity, teacher expertise and resource utilization, non-stem teachers require more support in using technologies. Therefore,they may be starting from scratch when it comes to integrating AI technology.z Title Suppressed Due to Excessive Length 13 8 Conclusion Through semi-structured interviews with 8 K-12 STEM and non-STEM teachers, we explored preferences to integrate AI tools into curricula and coding activities. We examined the intricacies of student assessment, course development and progress monitoring across different subject areas. We found valuable insight of what are the AI features that are needed by STEM and non-STEM teachers in the mentioned areas. Our findings indicate that both STEM and non-STEM teachers require significant improvement to current features. Many changes are feasible with recent advances in AI, such as developing analytical features for tracking students improvements and their status of progress, more detail feedback and customized course materials. However, some needs, such as control of student devices, are out of scope, and alternative means of student engagement such as gamification may be explored. References 1. Alhojailan, M. I. Thematic Analysis: A Critical Review of Its Process and Evaluation. WEI International European Academic Conference Proceedings, Zagreb, Croatia (2012). 2. Begosso, L. C., Begosso, L. R., and Aragao, N. An analysis of block-based programming environments for CS1. *IEEE Frontiers in Education Conference (FIE)*. Available at: https://ieeexplore.ieee.org/document/9273982, 2020. 3. Behfar, A., Shrestha, A., and Al-Ameen, M. N. A First Look into Fake Profiles on Social Media through the Lens of Victim's Experiences. *Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing*, 444-450, 2024. 4. Bi, T., Fankhauser, P., Bellicoso, D., and Hutter, M. Real-time dance generation to music for a legged robot. In *2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pages 1038-1044. IEEE, 2018. 5. Buckley, P., Doyle, E. Gamification and Student Motivation. Interactive Learning Environments, vol. 24, no. 6, pp. 1162-1175. Taylor & Francis (2016). 6. Burgiel, H., Sadler, P. M., Sonnert, G. The Association of High School Computer Science Content and Pedagogy with Students' Success in College Computer Science. ACM Transactions on Computing Education (TOCE), vol. 20, no. 2, pp. 1-21. ACM, New York, NY (2020). 7. Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., and Eltoukhy, M. Assessing elementary students' computational thinking in everyday reasoning and robotics programming. Computers & Education 109 (2017), 162-175. 8. Choi, W. C., and Choi, I. C. Exploring the Impact of Code.org's Block-Based Coding Curriculum on Student Motivation in K-12 Education. In *Proceedings of the 2024 12th International Conference on Information and Education Technology (ICIET)*, pages 93-97, 2024. 9. Compañ-Rosique, P., Molina-Carmona, R., Satorre-Cuerda, R. Effects of Teaching Methodology on the Students' Academic Performance in an Introductory Course of Programming. Learning and Collaboration Technologies: Designing Learning Experiences, 6th International Conference, LCT 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part I, pp. 332-345. Springer (2019). 14 B. Riahi and V. Cateté 10. Dehghanzadeh, H., Farrokhnia, M., Dehghanzadeh, H., Taghipour, K., Noroozi, O. Using Gamification to Support Learning in K-12 Education: A Systematic Literature Review. British Journal of Educational Technology, vol. 55, no. 1, pp. 34-70. Wiley Online Library (2024). 11. Dong, Y., Marwan, S., Shabrina, P., Price, T., Barnes, T. Using Student Trace Logs to Determine Meaningful Progress and Struggle during Programming Problem Solving. International Educational Data Mining Society, pp. 439-445. ERIC, Online (2021). 12. Dong, Y., Shabrina, P., Marwan, S., Barnes, T. You Really Need Help: Exploring Expert Reasons for Intervention During Block-based Programming Assignments. Proceedings of the 17th ACM Conference on International Computing Education Research, pp. 334-346. ACM, New York, NY (2021). 13. Figueiredo, J., García-Peñalvo, F. Teaching and Learning Strategies for Introductory Programming in University Courses. Ninth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM'21), pp. 746-751. ACM, New York, NY (2021). 14. Fitria, T. N. Artificial Intelligence (AI) in Education: Using AI Tools for Teaching and Learning Process. Prosiding Seminar Nasional & Call for Paper STIE AAS, vol. 4, no. 1, pp. 134-147 (2021). 15. Franklin, D., Coenraad, M., Palmer, J., Eatinger, D., Zipp, A., Anaya, M., White, M., Pham, H., Gökdemir, O., and Weintrop, D. An analysis of Use-Modify-Create pedagogical approach's success in balancing structure and student agency. *Proceedings of the 2020 ACM Conference on International Computing Education Research*, 14-24, 2020. 16. Graßl, I., and Fraser, G. Girls rocking the code: Gender-dependent stereotypes, engagement & comprehension in music programming. Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training (2024), 115-126. 17. Guzdial, M. Creating an on-ramp to programming for arts and humanities students with Teaspoon languages and custom block languages. Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 2 (2024), 1898-1898. 18. Harvey, B., Garcia, D. D., Barnes, T., Titterton, N., Armendariz, D., Segars, L., Lemon, E., Morris, S., and Paley, J. Snap! (Build Your Own Blocks). In Proceedings of the 44th ACM Technical Symposium on Computer Science Education, pp. 759759 (2013). 19. Holz, H. Design, Development, and Evaluation of Research Tools for EvidenceBased Learning: A Digital Game-Based Spelling Training for German Primary School Children. Universität Tübingen (2020). 20. Hsu, T.-C., Chang, S.-C., and Hung, Y.-T. How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers & Education 126 (2018), 296-310. 21. Huang, W., Looi, C.-K., Yeter, I. H. Comparison of STEM, Non-STEM, and MixedDisciplines Pre-service Teachers' Early Conceptions about Computational Thinking. CTE-STEM 2022 Conference, pp. 1-6. TU Delft OPEN, Online (2022). 22. Johri, A. Conducting interpretive research in engineering education using qualitative and ethnographic methods. *Cambridge Handbook of Engineering Education Research*, 551-570. Cambridge University Press, Cambridge, UK, 2014. 23. Kalelioğlu, F. A New Way of Teaching Programming Skills to K-12 Students: Code.org. Computers in Human Behavior, vol. 52, pp. 200-210. Elsevier (2015). Title Suppressed Due to Excessive Length 15 24. Koulouri, T., Lauria, S., Macredie, R. D. Teaching Introductory Programming: A Quantitative Evaluation of Different Approaches. ACM Transactions on Computing Education (TOCE), vol. 14, no. 4, pp. 1-28. ACM, New York, NY (2014). 25. Lee, S. J., Francom, G. M., Nuatomue, J. Computer Science Education and K-12 Students' Computational Thinking: A Systematic Review. International Journal of Educational Research, vol. 114, article 102008. Elsevier (2022). 26. Leyzberg, D., Moretti, C. Teaching CS to CS Teachers: Addressing the Need for Advanced Content in K-12 Professional Development. Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education, pp. 369-374 (2017). 27. Liao, C. H., Chiang, C.-T., Chen, I.-C., and Parker, K. R. Exploring the relationship between computational thinking and learning satisfaction for non-STEM college students. International Journal of Educational Technology in Higher Education 19, 1 (2022), 43. 28. Limke, A., Hill, M., Cateté, V., Barnes, T. A Survey of K-12 Teacher Needs for an Online Programming Learning System. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1-7. ACM, New York, NY (2024). 29. Lin, P., Van Brummelen, J. Engaging Teachers to Co-design Integrated AI Curriculum for K-12 Classrooms. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-12. ACM, New York, NY (2021) 30. Maguire, M., Delahunt, B. Doing a Thematic Analysis: A Practical, Step-by-Step Guide for Learning and Teaching Scholars. All Ireland Journal of Higher Education, vol. 9, no. 3 (2017). 31. Mahtani, R., Gill, C., Umstead, K., Behnam Asl, S., Tully, K., et al. Online participatory tools in classroom and research settings. *DS 117: Proceedings of the 24th International Conference on Engineering and Product Design Education (E&PDE 2022), London South Bank University, London, UK, September 8-9, 2022*, 2022. 32. Marji, M. Learn to program with Scratch: A visual introduction to programming with games, art, science, and math. No Starch Press, 2014. 33. Milliken, A., Cateté, V., Limke, A., Gransbury, I., Chipman, H., Dong, Y., Barnes, T. Exploring and Influencing Teacher Grading for Block-based Programs through Rubrics and the GradeSnap Tool. Proceedings of the 17th ACM Conference on International Computing Education Research, pp. 101-114. ACM, New York, NY (2021). 34. Milliken, A., Wang, W., Cateté, V., Martin, S., Gomes, N., Dong, Y., Harred, R., Isvik, A., Barnes, T., Price, T., et al. Planit! A New Integrated Tool to Help Novices Design for Open-Ended Projects. Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, pp. 232-238. ACM, New York, NY (2021). 35. Milliken, A. A. Redesigning How Teachers Learn, Teach, and Assess Computing with Block-Based Languages in Their Classroom. North Carolina State University, Raleigh, NC (2021). 36. Moraiti, I., Fotoglou, A., and Drigas, A. Coding with Block Programming Languages in Educational Robotics and Mobiles, Improve Problem Solving, Creativity & Critical Thinking Skills. *International Journal of Interactive Mobile Technologies*, 16(20), 2022. 37. Perin, A. P. J., Silva, D. E., and Valentim, N. Investigating block programming tools in high school to support Education 4.0: A Systematic Mapping Study. Informatics in Education, 22(3), 463-498, 2023. 38. Purgina, M., Mozgovoy, M., Blake, J. WordBricks: Mobile Technology and Visual Grammar Formalism for Gamification of Natural Language Grammar Acquisition. 16 B. Riahi and V. Cateté Journal of Educational Computing Research, vol. 58, no. 1, pp. 126-159. SAGE Publications (2020). 39. Repenning, A., Webb, D. C., Brand, C., Gluck, F., Grover, R., Miller, S., Nickerson, H., and Song, M. Beyond Minecraft: Facilitating computational thinking through modeling and programming in 3D. *IEEE Computer Graphics and Applications*, 34(3), 68-71. IEEE, 2014. 40. Rößling, G., Joy, M., Moreno, A., Radenski, A., Malmi, L., Kerren, A., Naps, T., Ross, R. J., Clancy, M., Korhonen, A., et al. Enhancing Learning Management Systems to Better Support Computer Science Education. ACM SIGCSE Bulletin, vol. 40, no. 4, pp. 142-166. ACM, New York, NY (2008). 41. Shamir, M., Kocherovsky, M., Chung, C. A Paradigm for Teaching Math and Computer Science Concepts in K-12 Learning Environment by Integrating Coding, Animation, Dance, Music, and Art. 2019 IEEE Integrated STEM Education Conference (ISEC), pp. 62-68. IEEE (2019). 42. Sullivan, A., Strawhacker, A., Bers, M. U. Dancing, Drawing, and Dramatic Robots: Integrating Robotics and the Arts to Teach Foundational STEAM Concepts to Young Children. Robotics in STEM Education: Redesigning the Learning Experience, pp. 231-260. Springer (2017). 43. Tagare, D. Factors That Predict K-12 Teachers' Ability to Apply Computational Thinking Skills. *ACM Transactions on Computing Education*, 24(1), 1-26. ACM New York, NY, 2024. 44. Tekdal, M. Trends and development in research on computational thinking. Education and Information Technologies 26, 5 (2021), 6499-6529. 45. Uddin, S., Imam, T., and Mozumdar, M. Research interdisciplinarity: STEM versus non-STEM. Scientometrics, 126, 603-618 (2021) 46. Wing, J. M. Computational Thinking. Communications of the ACM, vol. 49, no. 3, pp. 33-35. ACM, New York, NY (2006). 47. Yim, I. H. Y., Su, J. Artificial Intelligence (AI) Learning Tools in K-12 Education: A Scoping Review. Journal of Computers in Education, pp. 1-39. Springer (2024).
|
2509.16275
|
SecureFixAgent: A Hybrid LLM Agent for
Automated Python Static Vulnerability Repair
1st Jugal Gajjar
Computer Science Department
The George Washington University
Washington D.C, USA
jugal.gajjar@gwu.edu
2nd Kamalasankari Subramaniakuppusamy
Computer Science Department
The George Washington University
Washington D.C, USA
kamalasankaris@gwu.edu
3rd Relsy Puthal
Applied Economics Department
The George Washington University
Washington D.C, USA
relsy.puthal@gwu.edu
4th Kaustik Ranaware
Computer Science Department
The George Washington University
Washington D.C, USA
k.ranaware@gwu.edu
Abstract—Modern software development pipelines face grow-
ing challenges in securing large codebases with extensive de-
pendencies. Static analysis tools like Bandit are effective at
vulnerability detection but suffer from high false positives and
lack repair capabilities. Large Language Models (LLMs), in
contrast, can suggest fixes but often hallucinate changes and
lack self-validation. We present SecureFixAgent, a hybrid repair
framework integrating Bandit with lightweight local LLMs
(<8B parameters) in an iterative detect–repair–validate loop.
To improve precision, we apply parameter-efficient LoRA-based
fine-tuning on a diverse, curated dataset spanning multiple
Python project domains, mitigating dataset bias and reducing
unnecessary edits. SecureFixAgent uses Bandit for detection,
the LLM for candidate fixes with explanations, and Bandit re-
validation for verification, all executed locally to preserve privacy
and reduce cloud reliance. Experiments show SecureFixAgent
reduces false positives by 10.8% over static analysis, improves
fix accuracy by 13.51%, and lowers false positives by 5.46%
compared to pre-trained LLMs, typically converging within three
iterations. Beyond metrics, developer studies rate explanation
quality 4.5/5, highlighting its value for human trust and adoption.
By combining verifiable security improvements with transparent
rationale in a resource-efficient local framework, SecureFixAgent
advances trustworthy, automated vulnerability remediation for
modern pipelines.
Index Terms—static program analysis, large language mod-
els, automated code repair, LLM agents, software vulnerability
detection
I. INTRODUCTION
The rapid expansion of modern software systems has re-
sulted in increasingly large and complex codebases, often com-
posed of numerous third-party dependencies [2]. This growth
inevitably widens the potential vulnerability surface, making
security assurance a critical component of the software devel-
opment lifecycle. Vulnerabilities that escape early detection
can lead to severe consequences in production environments,
including data breaches, financial losses, and reputational
damage [1]. As a result, organizations are adopting automated
tools to identify and mitigate security risks before deployment.
Static analysis tools, such as Bandit [13] for Python, are
widely used for early-stage vulnerability detection. These tools
operate by scanning source code against predefined security
rules and patterns, offering high precision in identifying certain
classes of vulnerabilities. However, static analyzers suffer from
notable drawbacks: they tend to generate high false-positive
rates, burdening developers with unnecessary manual triage,
and they lack inherent capabilities for automated repair [6].
Consequently, while they are effective at highlighting poten-
tial issues, the remediation process remains manual, time-
consuming, and prone to human oversight.
Recent advances in Large Language Models (LLMs) have
shown promise in automating code repair. Code-specialized
LLMs can analyze vulnerable snippets, reason about flaws,
and generate candidate fixes [18]. Industry tools such as
GitHub Copilot have begun experimenting with vulnerability-
aware suggestions and automated repairs [5], reflecting the
growing commercial interest in this domain. However, LLM-
based approaches face two major limitations in security-
critical contexts. First, they frequently produce hallucinated
fixes—syntactically valid but semantically ineffective changes
that fail to eliminate the underlying vulnerability or introduce
new ones [3]. Second, they lack built-in self-verification mech-
anisms, resulting in unvalidated patches that require human
review.
To address these limitations, we introduce SecureFixAgent,
a hybrid framework that merges the strengths of static analysis
and LLMs for automated code repair through a detect–repair–
validate loop. SecureFixAgent leverages Bandit [13] to per-
form initial vulnerability detection, utilizes a locally deployed
lightweight code LLM (<8B parameters) to propose context-
aware fixes accompanied by human-readable explanations, and
then re-applies Bandit [13] to validate the correctness of the
patch. If the vulnerability persists, the process iterates with
refined prompts and contextual feedback until a secure fix is
reliably achieved. All inference is performed locally, ensur-
ing privacy preservation and eliminating reliance on external
arXiv:2509.16275v1 [cs.CR] 18 Sep 2025
cloud-based APIs.
The key contributions of this work are threefold. First, we
design and implement a hybrid vulnerability repair pipeline
that autonomously detects and repairs Python code vulnera-
bilities while maintaining human-readable, explainable output.
Second, we integrate multiple local code LLMs with Bandit
to create a validation-driven repair mechanism that minimizes
hallucinated fixes. Third, we present an empirical evalua-
tion on the curated vulnerable code samples, demonstrating
that SecureFixAgent improves fix accuracy and reduces false
positives compared to either static analysis or LLM-based
approaches alone.
II. RELATED WORK
Traditional approaches to vulnerability detection have relied
on static, rule-based analysis. Tools such as Bandit [13] and
PyLint [14] scan Python code for insecure patterns (e.g.,
hardcoded credentials, insecure deserialization). While precise
for well-defined vulnerability classes, these systems often yield
high false-positive rates and provide no automated repair [6].
Dynamic analysis complements static methods by executing
code in controlled environments (e.g., OWASP ZAP [12],
Burp Suite [11]), enabling detection of runtime issues such as
race conditions and memory leaks that static scans may miss.
However, adoption remains limited due to brittle execution
setups, performance overhead, and incomplete code coverage
[7]. This leads to coverage gaps in real-world deployments,
especially for large, distributed systems.
Machine learning and deep learning approaches broadened
flexibility by learning vulnerability patterns from datasets
such as Juliet [9] and Draper VDISC [16]. VulDeePecker
[17] demonstrated semantic-level flaw detection using code
gadget representations, achieving higher precision, though at
the cost of requiring extensive labeled datasets and offering
no automated remediation.
Transformer-based models (CodeBERT [8], CodeT5 [23],
and PolyCoder [22]) advanced code intelligence tasks, while
security-focused variants such as VulBERTa [25] and Mal-
BERT [24] targeted vulnerability detection. APR systems such
as T5APR [21] and Repairnator [20] generate patches, and
commercial tools like GitHub Copilot [27] now incorporate
vulnerability-aware suggestions. However, these systems still
face three persistent gaps: (1) lack of rigorous, security-
specific validation of generated patches, often leading to
unverified or incomplete fixes [3]; (2) reliance on large, cloud-
hosted models, introducing privacy, cost, and latency concerns
[4]; and (3) limited mechanisms for iterative refinement when
initial repairs fail [30].
Recent research has begun to couple vulnerability detec-
tion with semantic reasoning and natural language explana-
tions. For example, MalCodeAI [30] decomposes code into
functional units, applies LLM-based detectors, and produces
both patches and human-readable rationales, improving trans-
parency. Other work integrates user feedback into LLM-
assisted vulnerability remediation processes [19], aiming to
boost repair satisfaction and trustworthiness. However, these
TABLE I
COMPARISON OF SECUREFIXAGENT WITH PRIOR SYSTEMS
System
Validation
Deployment
Iterative Repair
GitHub Copilot
×
Cloud
×
Repairnator
Partial
Server
×
MalCodeAI
×
Local LLM
×
SecureFixAgent
✓Bandit
Local LLM
✓Loop
systems still operate primarily in a single-pass repair mode,
lacking the iterative validation necessary to reliably handle
persistent or complex vulnerabilities in security-critical code.
In contrast, SecureFixAgent explicitly addresses these lim-
itations by combining static analysis feedback with iterative
LLM-driven repair and validation, all executed locally to
preserve privacy. Unlike Copilot or Repairnator, which either
lack validation or rely on cloud-scale models, SecureFixAgent
is designed for on-premise feasibility, explainability, and re-
source efficiency. To make this distinction clearer, Table I sum-
marizes how SecureFixAgent compares with representative
prior systems across validation, deployment, and refinement
dimensions.
III. METHODOLOGY
We present SecureFixAgent, a hybrid vulnerability detection
and repair framework that integrates static analysis with open-
source, local, and resource-efficient LLMs in an iterative
detect–repair–validate loop for robustness. SecureFixAgent ex-
ecutes fully on-premise, ensuring privacy compliance and low
latency with models capped at 8B parameters. Quantization
methods (INT8, 4-bit, mixed precision) further reduce memory
and compute costs with minimal performance loss, enabling
deployment on consumer-grade hardware. Unlike monolithic
LLM-based patching systems, our approach enforces detection
and patch correctness through repeated static revalidation, con-
verging iteratively toward verified vulnerability resolution with
accompanying human-readable rationales. Figure 1 illustrates
the architecture of our proposed framework and data flow
among its core components.
A. System Architecture
The SecureFixAgent pipeline consists of four tightly in-
tegrated stages. Initially, Bandit statically analyzes Python
source code to identify potential vulnerabilities, generating
a structured report. Next, a locally hosted, code-specialized
LLM instance interprets the report to validate the detection
and to synthesize minimal, targeted patches accompanied by
concise, human-readable explanations. The validation stage re-
scans the patched code using Bandit to confirm vulnerability
resolution. If vulnerabilities persist, the system iterates with
updated context until convergence is reached or a predefined
iteration limit is met. The final output stage produces the fully
patched code alongside a concise technical explanation.
B. Pipeline Flow
Starting from an input Python file, Bandit performs vul-
nerability scanning to produce a detailed report. The report
Fig. 1.
Architecture of SecureFixAgent, integrating static analysis with locally hosted LLMs in an iterative detect–repair–validate loop. Bandit identifies
candidate vulnerabilities, LLM 1 cross-validates reports, and LLM 2 performs targeted patching with explanations. Patched code is re-verified by Bandit until
convergence or a maximum iteration limit is reached, yielding traceable and verified repairs.
and original source are provided to the LLM, which proposes
minimal corrective changes with explanatory comments for the
truly positive detections. The patched code is then re-analyzed
by Bandit; if unresolved vulnerabilities remain, this detect–
repair–validate loop repeats. Iteration continues until all issues
are fixed or the maximum iteration count is reached, balancing
fix thoroughness with computational efficiency.
C. Algorithm Specification
Algorithm 1 formalizes the iterative detect–repair–validate
process described in the Pipeline Flow. Unlike monolithic
repair strategies that attempt to patch all detected vulnera-
bilities in a single pass, this variant processes each vulnera-
bility individually. For every element flagged by Bandit, the
original code segment and the relevant portion of the Bandit
report are provided to the LLM, which generates a patch
targeted exclusively at that instance. This per-vulnerability
approach enables finer-grained control, minimizes unintended
code modifications, and supports precise tracking of which
changes resolve specific vulnerabilities. The output package
generated after convergence contains (i) the original code,
(ii) the fully patched code, (iii) the corresponding Bandit
report for each iteration, and (iv) LLM explanations for each
remediation, enabling full traceability and auditability of the
repair process.
D. LLM Configuration and Fine-Tuning
We evaluate several locally executable, code-specialized
LLMs—including DeepSeek Coder [32], Qwen2.5-Coder [26],
CodeLlama [33], and CodeGemma [31]—with parameter sizes
capped at 8 billion to ensure feasible deployment on stan-
dard workstations. In addition to testing base (pre-trained)
model performance, we perform supervised fine-tuning using
a curated dataset of Python vulnerabilities and corresponding
fixes, sourced from both synthetic injections and real-world
CVE patches. Fine-tuning is implemented using the Apple
MLX framework [28] for M-series optimization, combined
with Low-Rank Adaptation (LoRA) [29] to minimize compu-
tational overhead while preserving model performance. This
approach improves the models’ ability to generate precise,
minimal, and contextually appropriate repairs while adhering
to security best practices. All model inference, whether from
base or fine-tuned variants, is executed entirely on-device,
eliminating reliance on cloud services and mitigating sensitive
data exposure.
E. Prompt Design
To minimize hallucinations and promote reproducibility,
prompts are carefully structured to guide the LLM through
three core tasks: interpreting individual Bandit-reported vul-
nerabilities, proposing minimal code changes targeted to re-
solve each issue without altering intended functionality, and
generating concise, human-readable explanations. Outputs are
explicitly requested in structured formats—e.g., JSON-like
dictionaries embedded in valid Python code—enabling pro-
grammatic parsing and validation of LLM responses. This
structured output design is essential for automated downstream
processing and verification.
F. Privacy and Efficiency
By confining all inference to local hardware, SecureFix-
Agent adheres to stringent code confidentiality requirements,
preventing potential leakage of proprietary source code. The
iterative pipeline is optimized for low latency per cycle, sup-
porting integration into continuous integration and deployment
(CI/CD) workflows where timely automated remediation is
essential. All intermediate artifacts are encrypted at rest and in
memory with AES-128 for reliable, efficient protection, and no
telemetry is transmitted externally, preserving organizational
privacy.
IV. EXPERIMENTS
The experimental evaluation of SecureFixAgent was con-
ducted in two hardware environments. The first consisted of an
Algorithm 1 SecureFixAgent: Iterative Fine-Grained Vulner-
ability Detection and Repair
Input: Python source code file C, maximum iteration limit
N
Output: Original code Corig, final patched code C, iteration-
wise Bandit reports, LLM-generated explanations
Initialisation :
1: i ←0
2: Corig ←C
Iterative Detection and Repair Loop :
3: repeat
4:
Run Bandit on C to produce vulnerability report R
5:
if R is empty then
6:
break {No vulnerabilities detected}
7:
end if
8:
for each vulnerability v ∈R do
9:
Extract vulnerable code segment Sv from C
10:
Retrieve corresponding report excerpt Rv
11:
Provide Sv, Rv, and C to locally hosted LLM
12:
LLM cross-validates the report excerpt Rv with
human-readable explanation
13:
if Rv is True Positive then
14:
LLM generates patched segment S′
v with human-
readable explanation
15:
Replace Sv in C with S′
v
16:
end if
17:
end for
18:
Save Bandit report R for iteration i
19:
i ←i + 1
20: until R is empty or i ≥N
Finalization :
21: Run final Bandit scan to confirm all fixes
22: Generate output package containing original code Corig,
final code C, all Bandit reports, and LLM explanations
23: return Final output package
Apple M-series system, offering high-efficiency ARM-based
processing for local inference. The second was a CUDA-
enabled GPU workstation equipped with an NVIDIA GPU,
enabling accelerated inference for larger models and batch
processing. These setups ensured feasibility within on-premise
constraints while supporting evaluation of models up to 8 bil-
lion parameters. We ensured runtime efficiency while keeping
memory usage viable: average per-iteration latency remained
practical for CI/CD deployment, with peak memory during
model execution ranging from 6 to 14 GB for the best-
performing models and up to 40 GB during fine-tuning.
The dataset was drawn from two primary sources. The first
source was a synthetically generated corpus in which vulner-
ability injection locations and types were randomized across
modules and systematically injected into Python programs to
simulate realistic distributions. This allowed controlled exper-
imentation and reproducibility by ensuring that vulnerability
locations and types were known a priori. The second source
consisted of real-world Python repositories, including CVE-
Bench [34], PySecDB [35], and SecurityEval [36], containing
publicly disclosed Common Vulnerabilities and Exposures
(CVEs), providing realistic and diverse vulnerability patterns
for testing the efficacy of our proposed system.
TABLE II
DATA SUMMARY
Data Source
# Samples
# Vulnerability Types
Synthetic (Injected)
740
52
CVE-Bench
40
8
PySecDB
1258
119
SecurityEval
130
75
SecureFixAgent was evaluated using multiple open-source,
code-specialized LLMs: DeepSeek Coder (1.3B, 6.7B) [32],
Qwen 2.5 Coder (3B, 7B) [26], CodeLlama (7B) [33], and
CodeGemma (7B) [31]. Each model was tested in its raw,
instruction-tuned form to establish baseline performance. Sub-
sequently, models were fine-tuned with Apple MLX [28] using
LoRA [29], leveraging the combined synthetic and real-world
vulnerability–repair dataset to improve repair accuracy and
reduce hallucinated changes.
Three system configurations were compared: (1) Ban-
dit alone for vulnerability detection without automated re-
pair; (2) LLM-based repair without static validation (single-
pass patch generation); and (3) the full SecureFixAgent
pipeline integrating Bandit detection with iterative, validation-
driven LLM repair. Performance was assessed using four
primary metrics—fix accuracy, false-positive rate, iterations
to convergence, and explanation clarity (developer-rated Likert
scale)—with statistical significance evaluated via paired t-tests
(p < 0.05). Additionally, while commercial tools such as
GitHub Copilot were not directly benchmarked due to closed-
source constraints, qualitative comparisons were discussed in
terms of privacy, iterative validation, and repair explainability.
Results were reported for both raw and fine-tuned models,
enabling clear evaluation of the impact of domain adaptation
on SecureFixAgent’s repair effectiveness.
V. RESULTS
The experimental results demonstrate that integrating static
analysis with iterative, locally executed LLM-based repair
substantially improves vulnerability resolution compared to
either approach alone. Across all evaluated models, SecureFix-
Agent consistently outperformed the LLM-only and Bandit-
only baselines in terms of fix accuracy and reduction of
irrelevant code changes.
When
operating
with
raw,
instruction-tuned
models,
DeepSeek Coder 1.3B [32] exhibited moderate baseline re-
pair capability, with the larger variant achieving higher raw
accuracy but generally requiring more iterations to converge.
Qwen2.5 Coder 7B [26] showed comparatively strong raw
performance and explanation quality, while CodeGemma 7B
[31] and CodeLlama 7B [33] produced balanced results with
occasional extraneous formatting changes that did not affect
security correctness.
TABLE III
EXPERIMENTAL RESULTS
Configuration
Fix Accuracy (%)
False Positives (%)
Avg. Iterations to Converge
Explanation Quality (/5)
Bandit Only (Detection)
–
18.91
–
–
LLM Only (Raw)
74.32
13.57
1
2.7
LLM Only (Fine Tuned)
79.72
12.16
1
4.2
SecureFixAgent (Base Model)
81.08
12.16
4
4.3
SecureFixAgent (Fine-tuned)
87.83
8.11
3
4.5
Fig. 2. Fix success rate over iterations. Bandit-only remains at 0%. LLM-only
converges in the first iteration (79.72%) with no further gains. SecureFixAgent
improves iteratively, converging by the third iteration (87.83%), demonstrating
the value of detect–repair–validate cycles.
Fine-tuning each model on the created vulnerability–repair
dataset produced consistent gains. For example, DeepSeek
Coder 6.7B [32] improved in fix accuracy from 70.27%
(raw) to 77.02% (fine-tuned) and reduced false positives from
14.86% to 13.57%. Qwen 2.5 Coder 7B [26] achieved the
highest post-fine-tuning accuracy at 79.72%, and its average
iterations to convergence decreased to 3. CodeLlama 7B
[33] and CodeGemma 7B [31] likewise showed comparable
improvements in accuracy and convergence (see Table IV).
Critically, explanation clarity as perceived by human devel-
opers improved after fine-tuning. A mixed group of CS grad-
uates, early-career employees, and experienced consultants (n
= 15) rated clarity on a 1–5 Likert scale; average scores rose
from 2.9/5 (raw LLM) to 4.5/5 (fine-tuned SecureFixAgent)
for Qwen 7B [26], with similar trends across other models
(see Table III).
Across models, the full SecureFixAgent pipeline with fine-
tuned LLMs consistently outperformed the LLM-only base-
line, confirming the benefit of validation-driven iteration. The
Bandit-only baseline, while high-precision at detection, nat-
urally scored 0% on fix accuracy since it does not perform
repairs. These results validate the central design principle of
SecureFixAgent: a hybrid, iterative detect–repair–validate loop
increases patch correctness, reduces unnecessary edits, and
produces explanations that developers find more comprehen-
sible.
Fig. 3. Fix accuracy and false positive rates. Bandit-only detects but cannot
repair, with the highest false positives (18.91%). LLM-only achieves moderate
accuracy (74.32%), improved with fine-tuning (79.72%). SecureFixAgent
yields the best results (87.83% accuracy, 8.11% false positives).
VI. DISCUSSION AND FUTURE WORK
A. Observed Strengths
SecureFixAgent achieved 81–88% higher fix accuracy than
static analysis and 7–8% over LLM-only baselines, with the
greatest gains from fine-tuned models. The iterative detect–
repair–validate loop reduced hallucinations, lowering false
positives by up to 5.5% compared to unvalidated LLM repair.
In a 15-participant study, explanation clarity improved from
2.9 to 4.5, enhancing developer trust and usability.
B. Limitations & Future Work
Despite its advantages, SecureFixAgent has some limita-
tions. Its reliance on Bandit’s [13] rule set restricts detection
scope, leaving vulnerabilities outside predefined patterns unad-
dressed. Some patches—especially those involving distributed
logic or multi-file edits—remained incomplete after the it-
eration limit. Robustness against adversarially crafted code
also remains a concern, exposing potential gaps in detection
and validation. These highlight the need to improve detection
breadth, patch granularity, and adversarial resilience.
Future work will address these issues by integrating com-
plementary analyzers (e.g., SonarQube [10], Semgrep [15]) to
broaden coverage and reduce rule-set dependency, extending
support to additional languages, and incorporating automated
unit test generation with dynamic testing (fuzzing, runtime
instrumentation). Together, these enhancements aim to im-
prove coverage, robustness, explainability, and deployability
of automated vulnerability remediation pipelines.
TABLE IV
LLM-ONLY PERFORMANCE COMPARISON
Model
Params
Fix Acc. (Raw)
Fix Acc. (FT)
FP (%) Raw
FP (%) FT
Likert (Raw)
Likert (FT)
DeepSeek Coder
1.3B
62.16
71.62
21.62
17.56
2.6
3.8
DeepSeek Coder
6.7B
70.27
77.02
14.86
13.57
2.9
4.0
Qwen 2.5 Coder
3B
72.97
79.72
18.91
14.86
2.9
4.3
Qwen 2.5 Coder
7B
74.32
79.72
13.57
12.16
2.8
4.2
CodeLlama
7B
60.81
67.56
14.86
14.86
2.8
3.8
CodeGemma
7B
67.56
72.97
17.56
16.21
3.0
4.0
VII. CONCLUSION
We presented SecureFixAgent, a privacy-preserving frame-
work that integrates static analysis with local, lightweight,
code-specialized LLMs in an iterative detect–repair–validate
loop for reliable and explainable vulnerability remediation.
By combining Bandit’s rule-based precision with minimal
LLM-generated patches and repeated validation, our approach
improves fix accuracy, reduces false positives, and provides
developer-trusted explanations on both synthetic and real-
world datasets. Results confirm that fine-tuned, small-scale
LLMs can address security-critical tasks without compromis-
ing performance or confidentiality. SecureFixAgent further en-
ables integration into CI/CD pipelines and IDEs for continuous
vulnerability mitigation. As LLMs evolve, future extensions
may enhance repair sophistication, adaptability, and real-time
assurance, reinforcing the value of hybrid static–LLM systems
in secure software development.
REFERENCES
[1] P. Gratton, “10 ways cybercrime impacts business,” Investopedia,
Feb.
20,
2025.
[Online].
Available:
https://www.investopedia.
com/financial-edge/0112/3-ways-cyber-crime-impacts-business.aspx
(accessed: 01-Aug-2025).
[2] R. He et al., “Automating dependency updates in practice: An ex-
ploratory study on GitHub Dependabot,” IEEE Trans. Softw. Eng., vol.
49, no. 8, pp. 4004–4022, 2023.
[3] Q. Chen et al., “A deep dive into large language model code generation
mistakes: What and why?,” arXiv preprint arXiv:2411.01414, 2024.
[4] Y. Yao et al., “A survey on large language model (LLM) security and
privacy: The Good, The Bad, and The Ugly,” High-Confidence Comput.,
vol. 4, no. 2, p. 100211, 2024.
[5] Kaaviya,
“GitHub
AI
Copilot:
Auto-Detect
and
Fix
Code
Vulnerabilities
Instantly,”
CyberPress,
Aug.
19,
2024.
Available:
https://cyberpress.org/auto-detect-and-fix-code-vulnerabilities/. [Online;
accessed 02-Aug-2025].
[6] N. S. Harzevili et al., “Automatic static vulnerability detection for
machine learning libraries: Are we there yet?,” in Proc. IEEE Int. Symp.
Softw. Rel. Eng. (ISSRE), 2023, pp. 795–806.
[7] A. M. Alashjaee et al., “Dynamic taint analysis tools: A review,” Int. J.
Comput. Sci. Secur., vol. 13, no. 6, pp. 231–243, 2019.
[8] Z. Feng et al., “CodeBERT: A pre-trained model for programming and
natural languages,” in Proc. EMNLP, 2020.
[9] National Institute of Standards and Technology, “Juliet test suite,”
2023. [Online]. Available: https://samate.nist.gov/SRD (accessed: 01-
Aug-2025).
[10] SonarSource, “SonarQube: Continuous code quality,” 2025. [Online].
Available: https://www.sonarqube.org/ (accessed: 01-Aug-2025).
[11] PortSwigger, “Burp Suite: Web vulnerability scanner,” 2025. [Online].
Available: https://portswigger.net/burp (accessed: 01-Aug-2025).
[12] OWASP, “OWASP ZAP: Zed attack proxy,” 2025. [Online]. Available:
https://owasp.org/www-project-zap/ (accessed: 01-Aug-2025).
[13] PyCQA, “Bandit (1.8.6) [Computer software],” 2025. [Online]. Avail-
able: https://github.com/PyCQA/bandit (accessed: 02-Aug-2025).
[14] Pylint contributors, “Pylint,” 2020. [Online]. Available: https://pylint.
readthedocs.io/en/latest/ (accessed: 02-Aug-2025).
[15] Semgrep, Inc., “Semgrep,” 2025. [Online]. Available: https://semgrep.
dev/ (accessed: 02-Aug-2025).
[16] R. Russell et al., “Automated vulnerability detection in source code
using deep representation learning,” in Proc. 17th IEEE Int. Conf. Mach.
Learn. Appl. (ICMLA), 2018, pp. 757–762.
[17] Z. Li et al., “VulDeePecker: A deep learning-based system for vulner-
ability detection,” in Proc. NDSS, 2018.
[18] X. Zhou et al., “Large language model for vulnerability detection and
repair: Literature review and the road ahead,” ACM Trans. Softw. Eng.
Methodol., vol. 34, no. 5, p. 145, 2025.
[19] X. Wang et al., “Practically implementing an LLM-supported col-
laborative vulnerability remediation process: A team-based approach,”
Comput. Secur., vol. 148, p. 104113, 2025.
[20] M. Monperrus et al., “Repairnator patches programs automatically,”
Ubiquity, vol. 2019, no. July, p. 2, 2019.
[21] R. Gharibi et al., “T5APR: Empowering automated program repair
across languages through checkpoint ensemble,” J. Syst. Softw., vol. 214,
p. 112083, 2024.
[22] F. F. Xu et al., “A systematic evaluation of large language models of
code,” in Proc. 6th ACM SIGPLAN Int. Symp. Mach. Program., 2022,
pp. 1–10.
[23] Y. Wang et al., “CodeT5: Identifier-aware unified pre-trained encoder-
decoder models for code understanding and generation,” in Proc.
EMNLP, 2021.
[24] H. Raff et al., “MalBERT: Transformer-based malware detection,” arXiv
preprint arXiv:2109.09684, 2021.
[25] H. Hanif et al., “Vulberta: Simplified source code pre-training for
vulnerability detection,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN),
2022, pp. 1–8.
[26] B. Hui et al., “Qwen2.5-coder technical report,” arXiv preprint
arXiv:2409.12186, 2024.
[27] GitHub,
Inc.,
“GitHub
Copilot,”
2025.
Available:
https://github.com/features/copilot. [Online; accessed 03-Aug-2025].
[28] A. Hannun et al., “MLX: Efficient and flexible machine learning on
Apple silicon,” 2023. [Online]. Available: https://github.com/ml-explore
(accessed: 01-Aug-2025).
[29] E. J. Hu et al., “LoRA: Low-rank adaptation of large language models,”
arXiv preprint arXiv:2106.09685, 2022.
[30] J. Gajjar et al., “MalCodeAI: Autonomous vulnerability detection and
remediation via language agnostic code reasoning,” arXiv preprint
arXiv:2507.10898, 2025.
[31] C. Team et al., “Codegemma: Open code models based on Gemma,”
arXiv preprint arXiv:2406.11409, 2024.
[32] Q. Zhu et al., “DeepSeek-Coder-v2: Breaking the barrier of closed-
source models in code intelligence,” arXiv preprint arXiv:2406.11931,
2024.
[33] B. Roziere et al., “Code Llama: Open foundation models for code,”
arXiv preprint arXiv:2308.12950, 2023.
[34] Y. Zhu et al., “CVE-Bench: A benchmark for AI agents’ ability
to exploit real-world web application vulnerabilities,” arXiv preprint
arXiv:2503.17332, 2025.
[35] S. Sun et al., “Exploring security commits in Python,” in Proc. 39th
IEEE Int. Conf. Softw. Maint. Evol. (ICSME), 2023.
[36] M. L. Siddiq et al., “SecurityEval dataset: Mining vulnerability examples
to evaluate machine learning-based code generation techniques,” in
Proc. 1st Int. Workshop Mining Softw. Repositories Appl. Privacy Secur.
(MSR4P&S), 2022.
|
SecureFixAgent: A Hybrid LLM Agent for Automated Python Static Vulnerability Repair 1st Jugal Gajjar Computer Science Department The George Washington University Washington D.C, USA 2nd Kamalasankari Subramaniakuppusamy Computer Science Department The George Washington University Washington D.C, USA 3rd Relsy Puthal Applied Economics Department The George Washington University Washington D.C, USA 4th Kaustik Ranaware Computer Science Department The George Washington University Washington D.C, USA Abstract-Modern software development pipelines face growing challenges in securing large codebases with extensive dependencies. Static analysis tools like Bandit are effective at vulnerability detection but suffer from high false positives and lack repair capabilities. Large Language Models (LLMs), in contrast, can suggest fixes but often hallucinate changes and lack self-validation. We present SecureFixAgent, a hybrid repair framework integrating Bandit with lightweight local LLMs (<8B parameters) in an iterative detect-repair-validate loop. To improve precision, we apply parameter-efficient LoRA-based fine-tuning on a diverse, curated dataset spanning multiple Python project domains, mitigating dataset bias and reducing unnecessary edits. SecureFixAgent uses Bandit for detection, the LLM for candidate fixes with explanations, and Bandit revalidation for verification, all executed locally to preserve privacy and reduce cloud reliance. Experiments show SecureFixAgent reduces false positives by 10.8% over static analysis, improves fix accuracy by 13.51%, and lowers false positives by 5.46% compared to pre-trained LLMs, typically converging within three iterations. Beyond metrics, developer studies rate explanation quality 4.5/5, highlighting its value for human trust and adoption. By combining verifiable security improvements with transparent rationale in a resource-efficient local framework, SecureFixAgent advances trustworthy, automated vulnerability remediation for modern pipelines. Index Terms-static program analysis, large language models, automated code repair, LLM agents, software vulnerability detection I. INTRODUCTION The rapid expansion of modern software systems has resulted in increasingly large and complex codebases, often composed of numerous third-party dependencies [2]. This growth inevitably widens the potential vulnerability surface, making security assurance a critical component of the software development lifecycle. Vulnerabilities that escape early detection can lead to severe consequences in production environments, including data breaches, financial losses, and reputational damage [1]. As a result, organizations are adopting automated tools to identify and mitigate security risks before deployment. Static analysis tools, such as Bandit [13] for Python, are widely used for early-stage vulnerability detection. These tools operate by scanning source code against predefined security rules and patterns, offering high precision in identifying certain classes of vulnerabilities. However, static analyzers suffer from notable drawbacks: they tend to generate high false-positive rates, burdening developers with unnecessary manual triage, and they lack inherent capabilities for automated repair [6]. Consequently, while they are effective at highlighting potential issues, the remediation process remains manual, timeconsuming, and prone to human oversight. Recent advances in Large Language Models (LLMs) have shown promise in automating code repair. Code-specialized LLMs can analyze vulnerable snippets, reason about flaws, and generate candidate fixes [18]. Industry tools such as GitHub Copilot have begun experimenting with vulnerabilityaware suggestions and automated repairs [5], reflecting the growing commercial interest in this domain. However, LLMbased approaches face two major limitations in securitycritical contexts. First, they frequently produce hallucinated fixes-syntactically valid but semantically ineffective changes that fail to eliminate the underlying vulnerability or introduce new ones [3]. Second, they lack built-in self-verification mechanisms, resulting in unvalidated patches that require human review. To address these limitations, we introduce SecureFixAgent, a hybrid framework that merges the strengths of static analysis and LLMs for automated code repair through a detect-repairvalidate loop. SecureFixAgent leverages Bandit [13] to perform initial vulnerability detection, utilizes a locally deployed lightweight code LLM (<8B parameters) to propose contextaware fixes accompanied by human-readable explanations, and then re-applies Bandit [13] to validate the correctness of the patch. If the vulnerability persists, the process iterates with refined prompts and contextual feedback until a secure fix is reliably achieved. All inference is performed locally, ensuring privacy preservation and eliminating reliance on external 18 Sep 2025 cloud-based APIs. The key contributions of this work are threefold. First, we design and implement a hybrid vulnerability repair pipeline that autonomously detects and repairs Python code vulnerabilities while maintaining human-readable, explainable output. Second, we integrate multiple local code LLMs with Bandit to create a validation-driven repair mechanism that minimizes hallucinated fixes. Third, we present an empirical evaluation on the curated vulnerable code samples, demonstrating that SecureFixAgent improves fix accuracy and reduces false positives compared to either static analysis or LLM-based approaches alone. II. RELATED WORK Traditional approaches to vulnerability detection have relied on static, rule-based analysis. Tools such as Bandit [13] and PyLint [14] scan Python code for insecure patterns (e.g., hardcoded credentials, insecure deserialization). While precise for well-defined vulnerability classes, these systems often yield high false-positive rates and provide no automated repair [6]. Dynamic analysis complements static methods by executing code in controlled environments (e.g., OWASP ZAP [12], Burp Suite [11]), enabling detection of runtime issues such as race conditions and memory leaks that static scans may miss. However, adoption remains limited due to brittle execution setups, performance overhead, and incomplete code coverage [7]. This leads to coverage gaps in real-world deployments, especially for large, distributed systems. Machine learning and deep learning approaches broadened flexibility by learning vulnerability patterns from datasets such as Juliet [9] and Draper VDISC [16]. VulDeePecker [17] demonstrated semantic-level flaw detection using code gadget representations, achieving higher precision, though at the cost of requiring extensive labeled datasets and offering no automated remediation. Transformer-based models (CodeBERT [8], CodeT5 [23], and PolyCoder [22]) advanced code intelligence tasks, while security-focused variants such as VulBERTa [25] and MalBERT [24] targeted vulnerability detection. APR systems such as T5APR [21] and Repairnator [20] generate patches, and commercial tools like GitHub Copilot [27] now incorporate vulnerability-aware suggestions. However, these systems still face three persistent gaps: (1) lack of rigorous, securityspecific validation of generated patches, often leading to unverified or incomplete fixes [3]; (2) reliance on large, cloudhosted models, introducing privacy, cost, and latency concerns [4]; and (3) limited mechanisms for iterative refinement when initial repairs fail [30]. Recent research has begun to couple vulnerability detection with semantic reasoning and natural language explanations. For example, MalCodeAI [30] decomposes code into functional units, applies LLM-based detectors, and produces both patches and human-readable rationales, improving transparency. Other work integrates user feedback into LLMassisted vulnerability remediation processes [19], aiming to boost repair satisfaction and trustworthiness. However, these TABLE I COMPARISON OF SECUREFIXAGENT WITH PRIOR SYSTEMS System Validation Deployment Iterative Repair GitHub Copilot × Cloud × Repairnator Partial Server × MalCodeAI × Local LLM × SecureFixAgent ✓Bandit Local LLM ✓Loop systems still operate primarily in a single-pass repair mode, lacking the iterative validation necessary to reliably handle persistent or complex vulnerabilities in security-critical code. In contrast, SecureFixAgent explicitly addresses these limitations by combining static analysis feedback with iterative LLM-driven repair and validation, all executed locally to preserve privacy. Unlike Copilot or Repairnator, which either lack validation or rely on cloud-scale models, SecureFixAgent is designed for on-premise feasibility, explainability, and resource efficiency. To make this distinction clearer, Table I summarizes how SecureFixAgent compares with representative prior systems across validation, deployment, and refinement dimensions. III. METHODOLOGY We present SecureFixAgent, a hybrid vulnerability detection and repair framework that integrates static analysis with opensource, local, and resource-efficient LLMs in an iterative detect-repair-validate loop for robustness. SecureFixAgent executes fully on-premise, ensuring privacy compliance and low latency with models capped at 8B parameters. Quantization methods (INT8, 4-bit, mixed precision) further reduce memory and compute costs with minimal performance loss, enabling deployment on consumer-grade hardware. Unlike monolithic LLM-based patching systems, our approach enforces detection and patch correctness through repeated static revalidation, converging iteratively toward verified vulnerability resolution with accompanying human-readable rationales. Figure 1 illustrates the architecture of our proposed framework and data flow among its core components. A. System Architecture The SecureFixAgent pipeline consists of four tightly integrated stages. Initially, Bandit statically analyzes Python source code to identify potential vulnerabilities, generating a structured report. Next, a locally hosted, code-specialized LLM instance interprets the report to validate the detection and to synthesize minimal, targeted patches accompanied by concise, human-readable explanations. The validation stage rescans the patched code using Bandit to confirm vulnerability resolution. If vulnerabilities persist, the system iterates with updated context until convergence is reached or a predefined iteration limit is met. The final output stage produces the fully patched code alongside a concise technical explanation. B. Pipeline Flow Starting from an input Python file, Bandit performs vulnerability scanning to produce a detailed report. The report Fig. 1. Architecture of SecureFixAgent, integrating static analysis with locally hosted LLMs in an iterative detect-repair-validate loop. Bandit identifies candidate vulnerabilities, LLM 1 cross-validates reports, and LLM 2 performs targeted patching with explanations. Patched code is re-verified by Bandit until convergence or a maximum iteration limit is reached, yielding traceable and verified repairs. and original source are provided to the LLM, which proposes minimal corrective changes with explanatory comments for the truly positive detections. The patched code is then re-analyzed by Bandit; if unresolved vulnerabilities remain, this detectrepair-validate loop repeats. Iteration continues until all issues are fixed or the maximum iteration count is reached, balancing fix thoroughness with computational efficiency. C. Algorithm Specification Algorithm 1 formalizes the iterative detect-repair-validate process described in the Pipeline Flow. Unlike monolithic repair strategies that attempt to patch all detected vulnerabilities in a single pass, this variant processes each vulnerability individually. For every element flagged by Bandit, the original code segment and the relevant portion of the Bandit report are provided to the LLM, which generates a patch targeted exclusively at that instance. This per-vulnerability approach enables finer-grained control, minimizes unintended code modifications, and supports precise tracking of which changes resolve specific vulnerabilities. The output package generated after convergence contains (i) the original code, (ii) the fully patched code, (iii) the corresponding Bandit report for each iteration, and (iv) LLM explanations for each remediation, enabling full traceability and auditability of the repair process. D. LLM Configuration and Fine-Tuning We evaluate several locally executable, code-specialized LLMs-including DeepSeek Coder [32], Qwen2.5-Coder [26], CodeLlama [33], and CodeGemma [31]-with parameter sizes capped at 8 billion to ensure feasible deployment on standard workstations. In addition to testing base (pre-trained) model performance, we perform supervised fine-tuning using a curated dataset of Python vulnerabilities and corresponding fixes, sourced from both synthetic injections and real-world CVE patches. Fine-tuning is implemented using the Apple MLX framework [28] for M-series optimization, combined with Low-Rank Adaptation (LoRA) [29] to minimize computational overhead while preserving model performance. This approach improves the models' ability to generate precise, minimal, and contextually appropriate repairs while adhering to security best practices. All model inference, whether from base or fine-tuned variants, is executed entirely on-device, eliminating reliance on cloud services and mitigating sensitive data exposure. E. Prompt Design To minimize hallucinations and promote reproducibility, prompts are carefully structured to guide the LLM through three core tasks: interpreting individual Bandit-reported vulnerabilities, proposing minimal code changes targeted to resolve each issue without altering intended functionality, and generating concise, human-readable explanations. Outputs are explicitly requested in structured formats-e.g., JSON-like dictionaries embedded in valid Python code-enabling programmatic parsing and validation of LLM responses. This structured output design is essential for automated downstream processing and verification. F. Privacy and Efficiency By confining all inference to local hardware, SecureFixAgent adheres to stringent code confidentiality requirements, preventing potential leakage of proprietary source code. The iterative pipeline is optimized for low latency per cycle, supporting integration into continuous integration and deployment (CI/CD) workflows where timely automated remediation is essential. All intermediate artifacts are encrypted at rest and in memory with AES-128 for reliable, efficient protection, and no telemetry is transmitted externally, preserving organizational privacy. IV. EXPERIMENTS The experimental evaluation of SecureFixAgent was conducted in two hardware environments. The first consisted of an Algorithm 1 SecureFixAgent: Iterative Fine-Grained Vulnerability Detection and Repair Input: Python source code file C, maximum iteration limit N Output: Original code Corig, final patched code C, iterationwise Bandit reports, LLM-generated explanations Initialisation : 1: i ←0 2: Corig ←C Iterative Detection and Repair Loop : 3: repeat 4: Run Bandit on C to produce vulnerability report R 5: if R is empty then 6: break {No vulnerabilities detected} 7: end if 8: for each vulnerability v ∈R do 9: Extract vulnerable code segment Sv from C 10: Retrieve corresponding report excerpt Rv 11: Provide Sv, Rv, and C to locally hosted LLM 12: LLM cross-validates the report excerpt Rv with human-readable explanation 13: if Rv is True Positive then 14: LLM generates patched segment S′ v with humanreadable explanation 15: Replace Sv in C with S′ v 16: end if 17: end for 18: Save Bandit report R for iteration i 19: i ←i + 1 20: until R is empty or i ≥N Finalization : 21: Run final Bandit scan to confirm all fixes 22: Generate output package containing original code Corig, final code C, all Bandit reports, and LLM explanations 23: return Final output package Apple M-series system, offering high-efficiency ARM-based processing for local inference. The second was a CUDAenabled GPU workstation equipped with an NVIDIA GPU, enabling accelerated inference for larger models and batch processing. These setups ensured feasibility within on-premise constraints while supporting evaluation of models up to 8 billion parameters. We ensured runtime efficiency while keeping memory usage viable: average per-iteration latency remained practical for CI/CD deployment, with peak memory during model execution ranging from 6 to 14 GB for the bestperforming models and up to 40 GB during fine-tuning. The dataset was drawn from two primary sources. The first source was a synthetically generated corpus in which vulnerability injection locations and types were randomized across modules and systematically injected into Python programs to simulate realistic distributions. This allowed controlled experimentation and reproducibility by ensuring that vulnerability locations and types were known a priori. The second source consisted of real-world Python repositories, including CVEBench [34], PySecDB [35], and SecurityEval [36], containing publicly disclosed Common Vulnerabilities and Exposures (CVEs), providing realistic and diverse vulnerability patterns for testing the efficacy of our proposed system. TABLE II DATA SUMMARY Data Source # Samples # Vulnerability Types Synthetic (Injected) 740 52 CVE-Bench 40 8 PySecDB 1258 119 SecurityEval 130 75 SecureFixAgent was evaluated using multiple open-source, code-specialized LLMs: DeepSeek Coder (1.3B, 6.7B) [32], Qwen 2.5 Coder (3B, 7B) [26], CodeLlama (7B) [33], and CodeGemma (7B) [31]. Each model was tested in its raw, instruction-tuned form to establish baseline performance. Subsequently, models were fine-tuned with Apple MLX [28] using LoRA [29], leveraging the combined synthetic and real-world vulnerability-repair dataset to improve repair accuracy and reduce hallucinated changes. Three system configurations were compared: (1) Bandit alone for vulnerability detection without automated repair; (2) LLM-based repair without static validation (singlepass patch generation); and (3) the full SecureFixAgent pipeline integrating Bandit detection with iterative, validationdriven LLM repair. Performance was assessed using four primary metrics-fix accuracy, false-positive rate, iterations to convergence, and explanation clarity (developer-rated Likert scale)-with statistical significance evaluated via paired t-tests (p < 0.05). Additionally, while commercial tools such as GitHub Copilot were not directly benchmarked due to closedsource constraints, qualitative comparisons were discussed in terms of privacy, iterative validation, and repair explainability. Results were reported for both raw and fine-tuned models, enabling clear evaluation of the impact of domain adaptation on SecureFixAgent's repair effectiveness. V. RESULTS The experimental results demonstrate that integrating static analysis with iterative, locally executed LLM-based repair substantially improves vulnerability resolution compared to either approach alone. Across all evaluated models, SecureFixAgent consistently outperformed the LLM-only and Banditonly baselines in terms of fix accuracy and reduction of irrelevant code changes. When operating with raw, instruction-tuned models, DeepSeek Coder 1.3B [32] exhibited moderate baseline repair capability, with the larger variant achieving higher raw accuracy but generally requiring more iterations to converge. Qwen2.5 Coder 7B [26] showed comparatively strong raw performance and explanation quality, while CodeGemma 7B [31] and CodeLlama 7B [33] produced balanced results with occasional extraneous formatting changes that did not affect security correctness. TABLE III EXPERIMENTAL RESULTS Configuration Fix Accuracy (%) False Positives (%) Avg. Iterations to Converge Explanation Quality (/5) Bandit Only (Detection) - 18.91 - - LLM Only (Raw) 74.32 13.57 1 2.7 LLM Only (Fine Tuned) 79.72 12.16 1 4.2 SecureFixAgent (Base Model) 81.08 12.16 4 4.3 SecureFixAgent (Fine-tuned) 87.83 8.11 3 4.5 Fig. 2. Fix success rate over iterations. Bandit-only remains at 0%. LLM-only converges in the first iteration (79.72%) with no further gains. SecureFixAgent improves iteratively, converging by the third iteration (87.83%), demonstrating the value of detect-repair-validate cycles. Fine-tuning each model on the created vulnerability-repair dataset produced consistent gains. For example, DeepSeek Coder 6.7B [32] improved in fix accuracy from 70.27% (raw) to 77.02% (fine-tuned) and reduced false positives from 14.86% to 13.57%. Qwen 2.5 Coder 7B [26] achieved the highest post-fine-tuning accuracy at 79.72%, and its average iterations to convergence decreased to 3. CodeLlama 7B [33] and CodeGemma 7B [31] likewise showed comparable improvements in accuracy and convergence (see Table IV). Critically, explanation clarity as perceived by human developers improved after fine-tuning. A mixed group of CS graduates, early-career employees, and experienced consultants (n = 15) rated clarity on a 1-5 Likert scale; average scores rose from 2.9/5 (raw LLM) to 4.5/5 (fine-tuned SecureFixAgent) for Qwen 7B [26], with similar trends across other models (see Table III). Across models, the full SecureFixAgent pipeline with finetuned LLMs consistently outperformed the LLM-only baseline, confirming the benefit of validation-driven iteration. The Bandit-only baseline, while high-precision at detection, naturally scored 0% on fix accuracy since it does not perform repairs. These results validate the central design principle of SecureFixAgent: a hybrid, iterative detect-repair-validate loop increases patch correctness, reduces unnecessary edits, and produces explanations that developers find more comprehensible. Fig. 3. Fix accuracy and false positive rates. Bandit-only detects but cannot repair, with the highest false positives (18.91%). LLM-only achieves moderate accuracy (74.32%), improved with fine-tuning (79.72%). SecureFixAgent yields the best results (87.83% accuracy, 8.11% false positives). VI. DISCUSSION AND FUTURE WORK A. Observed Strengths SecureFixAgent achieved 81-88% higher fix accuracy than static analysis and 7-8% over LLM-only baselines, with the greatest gains from fine-tuned models. The iterative detectrepair-validate loop reduced hallucinations, lowering false positives by up to 5.5% compared to unvalidated LLM repair. In a 15-participant study, explanation clarity improved from 2.9 to 4.5, enhancing developer trust and usability. B. Limitations & Future Work Despite its advantages, SecureFixAgent has some limitations. Its reliance on Bandit's [13] rule set restricts detection scope, leaving vulnerabilities outside predefined patterns unaddressed. Some patches-especially those involving distributed logic or multi-file edits-remained incomplete after the iteration limit. Robustness against adversarially crafted code also remains a concern, exposing potential gaps in detection and validation. These highlight the need to improve detection breadth, patch granularity, and adversarial resilience. Future work will address these issues by integrating complementary analyzers (e.g., SonarQube [10], Semgrep [15]) to broaden coverage and reduce rule-set dependency, extending support to additional languages, and incorporating automated unit test generation with dynamic testing (fuzzing, runtime instrumentation). Together, these enhancements aim to improve coverage, robustness, explainability, and deployability of automated vulnerability remediation pipelines. TABLE IV LLM-ONLY PERFORMANCE COMPARISON Model Params Fix Acc. (Raw) Fix Acc. (FT) FP (%) Raw FP (%) FT Likert (Raw) Likert (FT) DeepSeek Coder 1.3B 62.16 71.62 21.62 17.56 2.6 3.8 DeepSeek Coder 6.7B 70.27 77.02 14.86 13.57 2.9 4.0 Qwen 2.5 Coder 3B 72.97 79.72 18.91 14.86 2.9 4.3 Qwen 2.5 Coder 7B 74.32 79.72 13.57 12.16 2.8 4.2 CodeLlama 7B 60.81 67.56 14.86 14.86 2.8 3.8 CodeGemma 7B 67.56 72.97 17.56 16.21 3.0 4.0 VII. CONCLUSION We presented SecureFixAgent, a privacy-preserving framework that integrates static analysis with local, lightweight, code-specialized LLMs in an iterative detect-repair-validate loop for reliable and explainable vulnerability remediation. By combining Bandit's rule-based precision with minimal LLM-generated patches and repeated validation, our approach improves fix accuracy, reduces false positives, and provides developer-trusted explanations on both synthetic and realworld datasets. Results confirm that fine-tuned, small-scale LLMs can address security-critical tasks without compromising performance or confidentiality. SecureFixAgent further enables integration into CI/CD pipelines and IDEs for continuous vulnerability mitigation. As LLMs evolve, future extensions may enhance repair sophistication, adaptability, and real-time assurance, reinforcing the value of hybrid static-LLM systems in secure software development. REFERENCES [1] P. Gratton, "10 ways cybercrime impacts business," Investopedia, Feb. 20, 2025. [Online]. Available: https://www.investopedia. com/financial-edge/0112/3-ways-cyber-crime-impacts-business.aspx (accessed: 01-Aug-2025). [2] R. He et al., "Automating dependency updates in practice: An exploratory study on GitHub Dependabot," IEEE Trans. Softw. Eng., vol. 49, no. 8, pp. 4004-4022, 2023. [3] Q. Chen et al., "A deep dive into large language model code generation mistakes: What and why?," arXiv preprint , 2024. [4] Y. Yao et al., "A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly," High-Confidence Comput., vol. 4, no. 2, p. 100211, 2024. [5] Kaaviya, "GitHub AI Copilot: Auto-Detect and Fix Code Vulnerabilities Instantly," CyberPress, Aug. 19, 2024. Available: https://cyberpress.org/auto-detect-and-fix-code-vulnerabilities/. [Online; accessed 02-Aug-2025]. [6] N. S. Harzevili et al., "Automatic static vulnerability detection for machine learning libraries: Are we there yet?," in Proc. IEEE Int. Symp. Softw. Rel. Eng. (ISSRE), 2023, pp. 795-806. [7] A. M. Alashjaee et al., "Dynamic taint analysis tools: A review," Int. J. Comput. Sci. Secur., vol. 13, no. 6, pp. 231-243, 2019. [8] Z. Feng et al., "CodeBERT: A pre-trained model for programming and natural languages," in Proc. EMNLP, 2020. [9] National "Juliet test suite," 2023. [Online]. Available: https://samate.nist.gov/SRD (accessed: 01Aug-2025). [10] SonarSource, "SonarQube: Continuous code quality," 2025. [Online]. Available: https://www.sonarqube.org/ (accessed: 01-Aug-2025). [11] PortSwigger, "Burp Suite: Web vulnerability scanner," 2025. [Online]. Available: https://portswigger.net/burp (accessed: 01-Aug-2025). [12] OWASP, "OWASP ZAP: Zed attack proxy," 2025. [Online]. Available: https://owasp.org/www-project-zap/ (accessed: 01-Aug-2025). [13] PyCQA, "Bandit (1.8.6) [Computer software]," 2025. [Online]. Available: https://github.com/PyCQA/bandit (accessed: 02-Aug-2025). [14] Pylint contributors, "Pylint," 2020. [Online]. Available: https://pylint. readthedocs.io/en/latest/ (accessed: 02-Aug-2025). [15] Semgrep, Inc., "Semgrep," 2025. [Online]. Available: https://semgrep. dev/ (accessed: 02-Aug-2025). [16] R. Russell et al., "Automated vulnerability detection in source code using deep representation learning," in Proc. 17th IEEE Int. Conf. Mach. Learn. Appl. (ICMLA), 2018, pp. 757-762. [17] Z. Li et al., "VulDeePecker: A deep learning-based system for vulnerability detection," in Proc. NDSS, 2018. [18] X. Zhou et al., "Large language model for vulnerability detection and repair: Literature review and the road ahead," ACM Trans. Softw. Eng. Methodol., vol. 34, no. 5, p. 145, 2025. [19] X. Wang et al., "Practically implementing an LLM-supported collaborative vulnerability remediation process: A team-based approach," Comput. Secur., vol. 148, p. 104113, 2025. [20] M. Monperrus et al., "Repairnator patches programs automatically," Ubiquity, vol. 2019, no. July, p. 2, 2019. [21] R. Gharibi et al., "T5APR: Empowering automated program repair across languages through checkpoint ensemble," J. Syst. Softw., vol. 214, p. 112083, 2024. [22] F. F. Xu et al., "A systematic evaluation of large language models of code," in Proc. 6th ACM SIGPLAN Int. Symp. Mach. Program., 2022, pp. 1-10. [23] Y. Wang et al., "CodeT5: Identifier-aware unified pre-trained encoderdecoder models for code understanding and generation," in Proc. EMNLP, 2021. [24] H. Raff et al., "MalBERT: Transformer-based malware detection," arXiv preprint , 2021. [25] H. Hanif et al., "Vulberta: Simplified source code pre-training for vulnerability detection," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2022, pp. 1-8. [26] B. Hui et al., "Qwen2.5-coder technical report," arXiv preprint , 2024. [27] GitHub, Inc., "GitHub Copilot," 2025. Available: https://github.com/features/copilot. [Online; accessed 03-Aug-2025]. [28] A. Hannun et al., "MLX: Efficient and flexible machine learning on Apple silicon," 2023. [Online]. Available: https://github.com/ml-explore (accessed: 01-Aug-2025). [29] E. J. Hu et al., "LoRA: Low-rank adaptation of large language models," arXiv preprint , 2022. [30] J. Gajjar et al., "MalCodeAI: Autonomous vulnerability detection and remediation via language agnostic code reasoning," arXiv preprint , 2025. [31] C. Team et al., "Codegemma: Open code models based on Gemma," arXiv preprint , 2024. [32] Q. Zhu et al., "DeepSeek-Coder-v2: Breaking the barrier of closedsource models in code intelligence," arXiv preprint , 2024. [33] B. Roziere et al., "Code Llama: Open foundation models for code," arXiv preprint , 2023. [34] Y. Zhu et al., "CVE-Bench: A benchmark for AI agents' ability to exploit real-world web application vulnerabilities," arXiv preprint , 2025. [35] S. Sun et al., "Exploring security commits in Python," in Proc. 39th IEEE Int. Conf. Softw. Maint. Evol. (ICSME), 2023. [36] M. L. Siddiq et al., "SecurityEval dataset: Mining vulnerability examples to evaluate machine learning-based code generation techniques," in Proc. 1st Int. Workshop Mining Softw. Repositories Appl. Privacy Secur. (MSR4P&S), 2022.
|
2509.16280
|
Prepared for submission to JCAP
Towards a Robust
Machine-Learning Pipeline for
21-cm Cosmology Data Analysis I:
A Roadmap for Development and
Demonstration of Robustness
Against PSF Modeling Errors
Madhurima Choudhury
,a Jonathan C. Pober
,a,b
aCenter for the Fundamental Physics of the Universe, Brown University
184 Hope St
Providence, RI 02912, USA
bDepartment of Physics, Brown University
184 Hope St
Providence, RI 02912, USA
arXiv:2509.16280v1 [astro-ph.IM] 18 Sep 2025
E-mail: madhurimachoudhury811@gmail.com, jonathan_pober@brown.edu
Abstract.
The 21-cm signal from the Epoch of Reionization (EoR) is a powerful probe of
the evolution of the Universe. However, accurate measurements of the EoR signal from radio
interferometric observations are sensitive to efficient foreground removal, mitigating radio-
frequency interference and accounting for instrumental systematics. This work represents the
first in a series of papers, where we will be introducing a novel ML based pipeline, step-by-
step, to directly infer reionization parameters from 21-cm radio-interferometric images. In
this paper, we investigate the impact of the variations in the point spread function (PSF)
on parameter estimation by simulating visibilities corresponding to input 21-cm maps as
observed by the 128-antenna configuration of the Murchison Widefield Array (MWA) Phase
II. These visibilities are imaged to obtain ‘dirty images,’ which are then used to train a 2D
convolutional neural network (CNN) to predict xHI. To systematically assess the effect of PSF
mis-modelling, we generate multiple test sets by varying the MWA’s antenna layout, thereby
introducing controlled variations in the PSF; we then feed these alternative PSF dirty images
to our CNN trained using only dirty images with the PSF of the true antenna layout. Our
results demonstrate that PSF variations introduce biases in the CNN’s predictions of xHI,
with errors depending on the extent of PSF distortion. We quantify these biases and discuss
their implications for the reliability of machine-learning-based parameter inference in 21-cm
cosmology and how they can be utilized to improve the robustness of estimation against PSF-
related systematics in future 21-cm surveys. In concluding, we also discuss how this approach
to incorporating realistic instrument error into an ML analysis pipeline can be expanded to
include multiple other effects.
Keywords: Machine learning, reionization, first stars
Contents
1
Introduction
1
2
The 21-cm signal and interferometric measurements
3
2.1
The point spread function
4
3
Methodology
4
3.1
The 21-cm Maps
5
3.1.1
Simulating visibilities & Imaging
5
3.2
ML Network
6
4
Results
7
4.1
Introducing PSF Errors
8
5
Discussions
11
6
Roadmap for the future
12
1
Introduction
The redshifted 21-cm line of neutral hydrogen would serve as an excellent probe to understand
the structure and evolution of our Universe across cosmic time. This signal provides a unique
window into the key epochs in the evolution of our Universe, including the Dark Ages, Cosmic
Dawn, and the Epoch of Reionization, offering deep insights into the formation of the first
stars, galaxies, and large-scale structure.
However, observations of the 21-cm signal, both as a sky-averaged global signature and
as fluctuations measured using an interferometer, are extremely challenging. Detecting the
cosmological 21-cm signal is one of the key science goals for several current and upcoming low-
frequency radio telescopes (for example, SKA, [1, 2]; HERA,[3]; MWA, [4]; LOFAR, [5]; etc.).
Several radio-interferometric experiments have reported upper limits on the power spectrum
of 21-cm fluctuations over the past decade, from the epoch of reionization (EoR) [6–13] and
the cosmic dawn (CD) era [14–18]. These upper limits have enabled theorists to rule out
certain models of reionization and provide constraints on the physical parameters.
In recent years, machine learning (ML) techniques have become increasingly promi-
nent across various different domains of astrophysics and cosmology. Particularly, in 21-cm
cosmology, ML has been applied to a wide range of problems, such as signal emulation to
parameter estimation, highlighting its potential as a powerful tool for analyzing complex
high-dimensional datasets. For instance, fast emulators for the 21-cm global signal and power
spectrum have been developed using ML approaches [19, 20], enabling rapid exploration of
parameter space. Convolutional Neural Networks (CNNs) have been employed to constrain
reionization physics from 21-cm maps [21, 22], while deep learning models have also been
used to emulate the full time-evolution of 21-cm brightness temperature maps from the EoR
[23].
More recent advancements include the use of convolutional denoising autoencoders
(CDAEs) to recover the EoR signal from simulated SKA images with realistic beam effects
[24], and the application of Gaussian Process Regression (GPR) for foreground separation
– 1 –
[25]. Artificial Neural Networks (ANNs) have been used to estimate bubble size distribu-
tions from 21-cm power spectra [26], and CNNs have been applied to 3D tomographic 21-cm
images for parameter inference and posterior estimation [27]. Also, [28] implemented ANNs
to extract power spectrum parameters from synthetic observations contaminated with strong
foregrounds. These diverse applications underscore the growing role of ML in tackling the
multifaceted challenges of 21-cm cosmology.
In spite of the excellent works incorporating ML into EoR studies, ML methods have
their drawbacks. All supervised-learning methods are highly problem-specific, strongly de-
pendent on the quality, diversity and representativeness of the training process. They are
mostly not generalizable and struggle to deal with out-of-distribution inputs or unknown sys-
tematics, common in real observations. While such models can perform well on narrowly
defined tasks under controlled conditions, their reliability diminishes when faced with un-
modelled complexities. ML becomes particularly useful when it becomes difficult to explicitly
model the data, particularly where traditional physical models are computationally expensive
or analytically intractable. Its strength lies in its ability to learn from data representations
directly, provided the training data sufficiently captures the variability of real observations.
To address these limitations, our work takes a step towards building a more resilient and
informed ML model. In our case, to train a network, we need not define an explicit model for
complicated systematics (a nearly impossible feat for instruments with the size and complexity
of 21-cm cosmology experiments); we just need a way of representing a reasonable range of
signatures in our training set. As a first case study, we explore the impact of PSF variability
on ML-based inference, highlighting the need for more robust and systematics-aware training
strategies in 21-cm cosmology.
In real data, all the complicating factors are present. To confidently analyze real data
with ML, we need an approach that will not confuse one issue for another. For instance,
miscalibrated foreground removal and electromagnetic mutual coupling between antennas
can both leave excess power at the same spectral scales (due to their comparable light travel
time delays).
An ML approach trained only to remove foregrounds is unlikely to do the
job properly if mutual coupling is present. There is no existing complete end-to-end pipeline
stitching all of these key aspects together. The work presented here is a first step in a series of
papers, through which we will methodically incorporate more and more sophisticated effects
into an ML-based end-to-end pipeline to analyze 21-cm interferometric data. In this paper,
we aim to demonstrate how well we can predict the neutral hydrogen fraction directly from
dirty images, bypassing the necessity to reconstruct the original image after going through a
deconvolution algorithm.
The outline of this paper is as follows: In §2, we discuss briefly the fundamentals of
interferometric measurements, focusing on visibilities and the role of the point-spread function
(PSF). In §3, we describe our approach to generate the mock-observations for preparing the
training set, and how simulated 21-cm maps and visibilities are generated. We also describe
the architecture of the neural network model that is used. Following this, we describe how
the PSF errors are introduced and the test sets are prepared. In §4 we present our results
and in §5 we summarize and discuss the findings of this paper. Finally in §6, we further chalk
out the roadmap for future work in detail.
– 2 –
2
The 21-cm signal and interferometric measurements
The key observable of 21-cm cosmology is the differential brightness temperature δTb, which
primarily quantifies the contrast between the redshifted 21-cm signal from neutral hydrogen
and the cosmic microwave background (CMB). The differential brightness temperature of the
21-cm signal (δTb) at a redshift z and an angular position x can be expressed as:
δTb(x, z) ≈27xHI(x, z)[1 + δ(x, z)]
Ωbh2
0.023
0.15
Ωmh2
1 + z
10
1/2 Ts(x, z) −TCMB(z)
Ts(x, z)
mK,
(2.1)
where, xHI is the neutral hydrogen fraction, δ is the matter overdensity contrast and Ts is the
spin temperature, respectively of the region located at (x, z). TCMB is the cosmic microwave
background (CMB) temperature which is the radio-background temperature at redshift z.
Ωm, Ωb are the mass and baryon densities in units of the critical density respectively. For
a more detailed review see [29, 30]. Assuming very efficient x-ray heating at a high spin
temperature limit, with Ts >> TCMB, Eq.2.1 can be simplified as:
δTb(x, z) ≈xHI(x, z)[1 + δ(x, z)]
1 + z
10
1/2
mK.
(2.2)
Eq.2.1 highlights the sensitivity of the 21-cm signal to a wide range of astrophysical and
cosmological parameters. Throughout this paper, we have used the Planck 2018 cosmology
[31].
However, the radio-interferometers do not observe δTb directly.
Instead, they mea-
sure complex-valued quantities called visibilities, which represent baseline-dependent, spatial
Fourier modes of the sky, modulated by the instrument’s primary beam response.
Assuming that the sky is effectively flat in the region of interest (we plan to deal with
the curved sky in future work), the measured visibility function V (u, v), is related to the sky
brightness distribution I(l, m) via a 2D Fourier transform:
V (u, v) =
Z
A(l, m)I(l, m)e−2πi(ul+vm) dl dm
(2.3)
where, I(l, m) is the sky brightness distribution and A(l, m) is the primary beam pattern
of the antenna as a function of angular coordinates (l, m), (u, v) are the spatial frequency
coordinates in units of wavelengths, and the exponential term represents the phase difference
due to different arrival times of the signal at each antenna. Longer baselines (larger (u, v)
values) probe smaller angular scales, while shorter baselines capture large-scale structures.
Observations of the 21-cm signal would provide us with visibilities, which can be imaged
to reconstruct spatial maps of the redshifted 21-cm signal, providing a more informative
view of the distribution of neutral hydrogen across redshifts.
While the power spectrum
is the primary statistical observable targeted by current interferometric experiments, direct
imaging would offer far richer insights into these high redshift epochs. Direct imaging would
be a rich, treasure trove of information, and would enable us to resolve individual ionized and
neutral regions, revealing the morphology of reionization and the large-scale structure of the
early universe.
However, due to the faintness of the 21-cm signal and contamination from foregrounds
like Galactic synchrotron emission, direct imaging is significantly more challenging. While
statistical detection via the power spectrum from the current and upcoming interferometers
is expected first, direct imaging will provide a transformative leap in our understanding of
– 3 –
the cosmic dawn and reionization epochs. The SKA is predicted to achieve the sensitivity
required for the direct imaging of the signal and eventually enable us to map the tomography
of the signal.
2.1
The point spread function
A fundamental concept in interferometric imaging is the point spread function (PSF), which
characterizes how a point source appears in the reconstructed sky image due to the instru-
ment’s limited sampling of spatial frequencies. It is the response pattern corresponding to the
instrument’s size and sampling. Since an interferometer does not measure all possible Fourier
modes, the resulting image, which is called the ‘dirty image,’ is a convolution of the true sky
brightness with the PSF (which is also known as the ‘dirty beam’).
The PSF is determined by the uv-coverage, which describes the sampled spatial frequen-
cies based on the distribution of the antennas and the observation time. A well-sampled
uv-plane results in a more compact and symmetric PSF, leading to higher-quality imaging,
while sparse coverage introduces sidelobes, artifacts that can complicate image interpretation.
This convolution of the sky with the PSF, introduces spatially varying distortions that can
mimic or obscure cosmological features in 21-cm maps. Consequently, understanding and
mitigating PSF effects are critical for robust inference. Also, understanding the structure and
limitations of the PSF is essential when interpreting interferometric images or using them
as inputs for inference pipelines. It is to be kept in mind that CLEAN or any deconvolu-
tion technique is non-linear, and can lead to loss of information. Hence, working directly
with the ‘dirty image’, which is the truest representation of the measurements, becomes more
beneficial.
In real observations, the PSF is not fixed; it varies with time, frequency, and array
configuration.
Accounting for such variations into inference pipelines is essential because
models trained on a single, idealized PSF might not perform well, when applied to slightly
different conditions. In this paper, we investigate this effect in a controlled setting. As the
first step, we test to verify that the mere presence of a fixed PSF does not adversely affect
the recovery of the neutral hydrogen fraction (xHI) directly from the dirty images.
Only
after establishing this baseline do we proceed to study how variations in the PSF influence
the inference.
We then try to mimic variations introduced in the PSF corresponding to
observations from the 128 antenna configuration of the MWA Phase II array by systematically
removing randomly selected antennas from the default configuration. Each of these cases give
rise to a slightly different measurement of the same field being observed, and a slightly different
PSF. Our goal is to quantify how much variation in the PSF can be tolerated before the
accuracy of parameter inference from these mock observations begins to degrade significantly.
This analysis provides a first step toward understanding the level of error that needs to be
represented in the training sets to construct an ML-based pipeline that is robust to realistic
instrumental systematics.
3
Methodology
To develop our ML-framework, the first and most crucial step is assembling a representative
and well-characterized training set. In our case, this involves generating 21-cm maps and ob-
tain their corresponding dirty images created using a reference telescope configuration. The
ML-framework is then trained on these ideal dirty images, where the PSF is consistent and
– 4 –
well-behaved across all training samples. This allows the network to learn the mapping be-
tween the observed image and the underlying neutral fraction. However, in real observations,
small changes in the telescope array—such as antenna failures or flagged baselines—can alter
the PSF in ways that deviate from the idealized case. To test the resilience of the trained
model under such realistic conditions, we introduce controlled perturbations to the PSF in
the test set by randomly removing antennas from the default configuration. This setup allows
us to systematically quantify how much PSF variation the model can tolerate before its pre-
dictive performance begins to degrade. We explain our training data generation and model
architecture in detail in this section.
3.1
The 21-cm Maps
We first assemble a suite of simulated 21-cm signals to build our training sets.
To date,
the theoretical interpretations of the 21-cm measurements have mostly been guided by semi-
numerical models of reionization [32–35]. Other fast approximate techniques have been de-
veloped [36, 37] to bypass the more computationally expensive and accurate full radiative
transfer simulations [38–41]. While these simulations provide us an excellent representation
of various possible reionization scenarios, they are all subject to their own set of limitations
and approximations which make it difficult to incorporate all of them simultaneously into an
inference framework.
Our initial result presented here uses only simulations from 21cmFAST v3 [42]. We have
generated 1000 lightcones, each with different reionization histories. A lightcone is a spatial
and temporally coherent 3D volume, encoding the evolving 21-cm brightness temperature
field as a function of redshift, along the line of sight. Each lightcone consists of a sequence
of 2D slices corresponding to discrete redshifts, with each slice encoding the spatial distri-
bution of the 21-cm brightness temperature and the associated neutral hydrogen fraction at
that redshift. To construct our training set, we randomly sample 2D slices from these 1000
lightcones along with their neutral hydrogen fraction at a single frequency.
These 21-cm maps and the corresponding xHI values serve as the foundation of our
supervised learning pipeline. We emphasize that, in this work, we limit ourselves to individual
2D slices at fixed frequencies, deferring a full frequency-dependent analysis to a forthcoming
paper in this series.
3.1.1
Simulating visibilities & Imaging
Once we have the set of randomly chosen slices from different lightcones, the next step of our
analysis is to put the 21-cm signal maps through an instrument model, produce interferometric
visibilities, and then image those visibilities for input into our ML network. We choose this
approach (as opposed to simply convolving the images with a PSF calculated from the uv-
coverage) as a foundation for future work, where other visibility-based systematics will be
considered.
The MWA Phase II [43] is an upgraded version of the original Phase I array. We consider
the MWA Phase II compact configuration as our model instrument, with a tightly packed
core of radius 50 m, and additionally, the two hexagons for calibration and power spectrum
sensitivity. The majority of the baselines in the Phase II configuration are less than 200 m,
and the redundancy present in the hexagons leads to grating lobes in the PSF. We choose to
work with this configuration as something as a worst-case scenario for the PSF present in a
tomographic imaging experiment. We use the “EoR0” field (Right Ascension, (R.A.)=0h00,
Declination (decl.)=-27°) for 10800s (three hours) and use CASA’s simobserve task to create
– 5 –
0
100
200
300
400
500
X-axis (pixels)
0
100
200
300
400
500
Y-axis (pixels)
Brightness temperature (
Tb) Map
0
100
200
300
400
500
X-axis (pixels)
0
100
200
300
400
500
Y-axis (pixels)
Dirty image
0
5
10
15
20
25
(mK)
600
400
200
0
200
400
600
800
(Jy/Beam)
Figure 1: Left: One slice of a 21-cm brightness temperature field from a 21cmFAST simulation.
Right A “dirty image” of same slice, made by creating model visibilities from the left hand
image (via CASA’s simobserve) and then imaging those visibilities (via CASA’s tclean).
single-frequency model visibilities for the brightness temperature (δTb) image slices pulled
from each of the light cones.
We then, in turn, use the tclean task, with the number of
iterations set to zero (i.e. there is no CLEAN process), to transform these visibilities to dirty
images of the 21-cm signal. An example of this process is shown in Figure 1.
There are sophisticated codes which particularly model the curved sky accurately, which
we plan to deal with in our future papers. For example, codes such as pyuvsim [44] can be
used for generating visibilities and FHD [7, 45] for map-making.
3.2
ML Network
The training set for our 2D Convolutional Neural Network (CNN) contains approximately
5000 dirty images of the 21-cm signal.
These slices have been randomly picked up from
the suite of 1000 lightcones that we had generated. We consider only those slices with neu-
tral hydrogen fractions in the range (0.10, 0.90), so as to avoid blank or featureless images.
Each dirty image is associated with a corresponding neutral hydrogen fraction, xHI, which
represents the abundance of neutral hydrogen in that specific slice before passing the image
through the instrument model. The CNN is trained to predict the neutral hydrogen fraction
xHI directly from these dirty images.
Our CNN model is designed with two convolutional layers, each followed by batch nor-
malization and max pooling layers. These convolutional layers allow the network to capture
spatial patterns and hierarchical features in the images, while batch normalization helps in
stabilizing and accelerating the training process. Max pooling layers are employed to pro-
gressively reduce the spatial dimensions and make the network invariant to small shifts in the
input, enhancing its ability to generalize. At the end of the convolutional stack, the output
is flattened, and a fully connected (dense) layer serves as the output layer, which aggregates
the learned features to predict xHI for each image.
Our CNN model (Fig.2) is trained for 500 epochs using a mean squared error loss func-
tion, which measures the difference between predicted and true values of xHI. The training
process optimizes the model weights to minimize this error(we use the Adam [46] optimizer
– 6 –
Dense layers
xHI
Feed forward
Conv2D + BN + Maxpool
Conv2D(kernel=3×3,
filters=32)
MaxPool (2×2)
BN
Dense (128)
Dense(64)
Dense(1)
Conv2D + BN + Maxpool
Conv2D(kernel=3×3,
filters=64)
MaxPool (2×2)
BN
Figure 2: A schematic representation of the 2D CNN architecture. The input images are the
‘dirty images’ generated using CASA’s simobserve and tclean tasks. The CNN consists of two
sets of convolutional layers, batch normalization and maxpool layers which finally regresses
onto the neutral hydrogen fraction xHI.
from keras), gradually refining the network’s ability to accurately estimate the hydrogen
fraction from the corrupt 21-cm image data. We split the training data into two subsets,
allocating 20% of it as the validation set. This is done to monitor the model’s performance,
prevent overfitting and ensure generalization. We have used publicly available python-based
packages scikit learn ([47]) and keras in this work to design the networks. We train
the CNN-model to learn spatial patterns within the images that correlate with variations in
xHI, effectively capturing the underlying structure and distribution of the 21-cm signal in a
mock-observational environment. This model effectively learns the underlying relationship
of the observed images and physical parameters, enabling more accurate interpretations of
21-cm signal data in future cosmological studies.
4
Results
In this section, we present the results of training the CNN-model on the δTb maps, convolved
with the default PSF of the full MWA Phase II array, as described in 3.2. We train the CNN
to learn the non-linear mapping between these ‘dirty’ images and the corresponding neutral
hydrogen fraction (xHI).
We create a test set consisting of 200 dirty images slices, following the same steps as
the training data, along with their corresponding xHI. This set is different from the training
data, and remains unseen to the trained network. For evaluation, we compare the CNN-
predicted values of xHI against the true values from the test set using two standard metrics:
the coefficient of determination (R2) and the root mean square error (RMSE). The R2 score
and RMSE are defined as:
R2 = 1 −Σ(xHI,pred −xHI,ori)2
Σ(xHI,ori −xHI,ori)2
(4.1)
– 7 –
RMSE =
v
u
u
t 1
N
N
X
i=1
xHI,ori −xHI,pred
xHI,ori
2
(4.2)
where, xHI,ori is the original parameter, xHI,pred is the parameter predicted by the CNN,
xHI,ori is the average of the original parameter, and the summation is over the entire test set,
consisting of N samples. R2 can vary between 0 and 1, and R2 = 1 implies a perfect inference
of the parameters. A low value of RMSE implies a good prediction.
The top-left plot in Fig. 3 shows the results from CNN. We plot the true versus the
predicted values of xHI along with the computed R2-score, for a test set with no variations in
the PSF introduced. The CNN achieves an excellent R2 score and low RMSE, demonstrating
that the neutral fraction can be robustly inferred from dirty images when the PSF is fixed
and well-characterized.
This confirms that the convolution with a known PSF does not
significantly degrade the information content needed for learning xHI.
We would like to investigate whether a CNN trained on a certain PSF (i.e. the PSF
of the full MWA Phase II with all 128 antennas) could also correctly recover the neutral
fraction of these maps where the PSF is slightly different. The scope of this test is limited
to differences within a reasonable range: we are not attempting to generalize across entirely
different instruments (e.g., training on MWA and predicting for SKA) nor to train on PSF-
free images and then recover parameters from realistic data. Instead, our goal is to establish
whether a CNN trained on a well-defined PSF can remain robust when faced with modest,
realistic perturbations that reflect the level of uncertainty expected in an observing pipeline
where instrumental and observational parameters are known to decent accuracy.
Having
established this baseline performance, we next test the robustness of the CNN predictions
under degraded PSFs that mimic calibration uncertainties and array-configuration variability.
4.1
Introducing PSF Errors
In real interferometric observations, the PSF is intrinsically frequency dependent and evolves
with the UV-coverage, which itself varies with observation time, antenna configuration, flag-
ging, calibration imperfections and several other factors. These corruptions creep into the
image domain as artifacts at that can in principle mimic or obscure the features in the cos-
mological signal. When we train an ML pipeline on dirty images assuming a perfect PSF, the
pipeline might not be capable of dealing with real-life interferometric observations. Hence,
we introduce imperfections in the assumption of the PSF of the observing instrument by
removing random subsets of antennae from the ideal MWA Phase II configuration. In this
way, we can mimic one aspect of instrument model uncertainty in a very simplistic manner.
In this preliminary analysis, we focus on uncertainty in the PSF. To introduce errors into
the PSF, we rerun simobserve on a new set of δTb maps, but randomly remove antennae from
the array to degrade the PSF. Randomly removing antennae would modify the uv-sampling
and hence introduce variations in the PSF, simulating a mock-observation scenario. We take
the original δTb slices corresponding to the test set, and create another set of visibilities after
randomly removing 5 antennas from the 128 antennas composing MWA Phase II. Each slice
within a set has a different set of 5 randomly chosen antennas removed from the default
configuration, so that the PSF error is different in each sample. We then repeat this process,
by removing 10, 20, 30, and 64 antennas, for a total of five such test sets with 200 visibility
simulations per set using the simobserve task from CASA. We then re-image these visibilities
with the tclean task.
These sets of ‘dirty images’ for different sets of antennas removed
from the default MWA phase II configuration now constitute different test sets, with slightly
– 8 –
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
Predicted xH
R²=0.99
RMSE=0.027
Full MWA layout
Predicted xH
True Fit
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
1.0
Predicted xH
R²=0.99
RMSE=0.033
5 ant removed at random
Predicted xH
True Fit
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
1.0
Predicted xH
R²=0.98
RMSE=0.052
10 ant removed at random
Predicted xH
True Fit
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
Predicted xH
R²=0.96
RMSE=0.053
20 ant removed at random
Predicted xH
True Fit
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
1.0
Predicted xH
R²=0.96
RMSE=0.101
25 ant removed at random
Predicted xH
True Fit
0.2
0.4
0.6
0.8
Original xH
0.2
0.4
0.6
0.8
1.0
Predicted xH
R²=0.85
RMSE=0.145
30 ant removed at random
Predicted xH
True Fit
Figure 3: The predictions of the neutral fraction (xHI)from our CNN with different testsets.
The first plot on the left panel corresponds to the results obtained from a testset with no
PSF variations introduced.
modified instrument configurations. We use our previously trained CNN pipeline to predict
the neutral hydrogen fractions (xHI) from these test sets which are representative of dirty
images created by modified PSFs.
In Fig.
3, we plot the results obtained from each of
the test sets described above. The scatter plots show the true versus the predicted neutral
hydrogen fraction, where each point is the predicted xHI from one of the 200 dirty image
samples in the test sets, and corresponds to a different overall antenna layout.
From Fig.3, it is clear that the predictions for the same underlying δTb maps and their
associated xHI become worse as more variations are introduced in the PSF, by removing more
number of antennae. To capture and quantify the error introduced into the PSF, arising
from variations in the default array configuration, we define a metric based on the root mean
squared error (RMSE) between the perturbed PSF and a reference ‘true’ PSF (computed
from the default, MWA Phase II full array configuration). Specifically, for each realization,
the observed PSF is saved after randomly dropping a subset of antennas, and the pixel-wise
RMSE is computed relative to the true (default) PSF. This RMSE error is then normalized
by the total pixel-wise intensity of the true PSF to obtain a dimensionless score.
The RMSE based error metric, ϵPSF, for each test image, is defined as:
ϵPSF =
v
u
u
t 1
N
N
X
i=1
Bobs
i
−Bori
i
Bori
i
2
(4.3)
– 9 –
0
20
40
60
80
100
120
140
, Characteristic PSF sidelobe error(%)
0
10
20
30
40
MAE in xHI prediction(%)
Pessimistic limit of MWA Error
0 removed
5 removed
10 removed
20 removed
25 removed
30 removed
64 removed
Figure 4: The percentage error in our CNN’s predictions of the neutral fraction (xHI) as a
function of the characteristic PSF error compared with the full 128-antenna PSF that was
used in the training set. The shaded grey region indicates the most pessimistic range of PSF
error we expect with the real MWA, suggesting that a training set containing only the full
128-antenna PSF is sufficient to yield ∼5% error in neutral fraction predictions even we are
mismodeling the instrument’s true PSF. The error bars here denote the standard deviations
of the absolute fractional errors on the xHI predictions for all the samples, in each of the test
sets.
where, Bi is the PSF response at pixel i and N denotes the total number of pixels in
each image. This score enables us to quantify variations between the true and modified or
perturbed PSFs for each image in the test set.
The final score of the PSF error metric σ for each of our test set is calculated as follows.
Our error metric ϵPSF, computes the normalized difference squared between the original PSF
and the modified PSF, which is then averaged over all the pixels, N, in the image. This is
then averaged across all the 200 modified PSF’s corresponding to the test sets for that specific
number of antennas removed. This quantity, σ, encapsulates the characteristic PSF sidelobe
error for a particular test set and is given by:
σ = 1
M
M
X
j
ϵj
PSF
(4.4)
– 10 –
where, M is simply the number of test set samples (200) for each case considered (5,10,20,30,64
antennae removed) and we obtain the σ corresponding to each test set.
Now that we have a metric to represent the characteristic sidelobe errors for each test set,
we also compute the mean absolute error (MAE) for the predictions of the CNN corresponding
to each of these test sets. The MAE for the predicted xHI, for each of the test sets is given
by:
MAE = 1
M
M
X
i=1
|xpred
HI,i −xori
HI,i|
xori
HI,i
,
(4.5)
This number represents the typical error in the prediction of xHI, for each of the test sets
with 0, 5, 10, 20, 30, 64 antennae removed. In Fig.4, we see that this error correlates strongly
with the computed PSF error metric, for each of the test cases considered. The shaded grey
region shows the worst case PSF error we might expect in real MWA observation, calculated
based on the typical number of malfunctioning antennas in an MWA observation [48]. This
analysis suggests, then, that using only the full 128-antenna PSF in the training is sufficient to
yield ∼5% uncertainties in xHI even if the true PSF differs because of real-world effects in the
instrument. We also note that our CNN typically only recovers xHI with ∼4% error even when
the PSF is modelled perfectly, suggesting that the model itself is the limiting factor at these
low levels of PSF error. In addition, we also calculate the standard deviation of the normalized
residuals, ξ = (xHI,ori−xHI,pred)/xHI,ori corresponding to each test set containing 200 samples,
and plot them as error bars [4].
The standard deviation of the normalized residuals are
calculated as,
q
1
M
P (ξ −ξ)2. These give us a measure of the possible deviation from the
true values of the parameters within the same test set, demonstrating how the positions of
the antennae removed also is reflected in the variations of the PSF and are plotted as the
error bars in Fig.4.
5
Discussions
Our results demonstrate that, within the range of PSF variations considered here, a CNN
trained on dirty images convolved with the default PSF can still recover the global neutral
hydrogen fraction, xHI, with high accuracy. While the presence of PSF deviations does intro-
duce a measurable increase in prediction error, these deviations which arise from randomly
removing subsets of antennas in the array, remain within acceptable limits. Overall, this
indicates that for realistic, modest PSF errors, training exclusively on unperturbed PSFs is
sufficient for reliable inference of xHI. However, the variation in the MAE % error reflects
not only the sensitivity of the predictions to the overall strength of the PSF perturbations,
but also the fact that different random subsets of removed antennas produce distinct PSFs.
This randomness in array configuration leads to a spread in prediction performance, which is
captured in the error bars.
This highlights the sensitivity of ML-based inference pipelines to PSF-induced distor-
tions, even when the PSF perturbations are relatively simple and physically plausible. The
approach presented here can be viewed as a first controlled step toward this broader goal,
serving as a testbed for developing error-aware and uncertainty-aware inference models. This
is a very simple, proof-of-concept demonstration of the ability of an ML based pipeline to
directly estimate physics parameters from mock-observations. The variations introduced in
– 11 –
this work are deliberately simplistic: random antenna removals simulate a subset of real-
world effects such as antenna flagging or hardware failure. These assumptions are sufficient
to demonstrate the susceptibility to error, of ML models trained under idealized assumptions,
underscoring the need to incorporate more realistic instrumental systematics during training.
We do not consider other complex factors that might introduce errors in the modelling of the
PSF, and thereby introduce artefacts in the observed images.
6
Roadmap for the future
Most of the inference pipelines in EoR experiments do not explicitly account for the errors
introduced by time and frequency-dependent instrumental characteristics, which distort real
interferometric observations. In this work, we have considered the PSF at a single frequency
only, focusing on 2D image slices, which represent only a simplified subset of the information
available in 21-cm observations.
While this has provided a useful first step for exploring
the effect of PSF variability on CNN-based inference, a critical next stage is to extend the
framework to include the full frequency (or redshift) axis. Doing so will allow the models to
capture the rich temporal and spectral evolution of the 21-cm signal, which is indispensable
for extracting astrophysical parameters in practice from interferometric observations.
Equally important will be the systematic incorporation of noise and calibration uncer-
tainties. Thermal noise, ionospheric distortions, and residual calibration errors all imprint
characteristic signatures in the data that could strongly influence inference pipelines. A nat-
ural progression from this work would therefore involve training and testing models in the
presence of these additional systematics, moving incrementally toward a realistic observa-
tional regime. A further challenge is the treatment of foregrounds, which are many orders
of magnitude brighter than the cosmological 21-cm signal. Integrating foreground into the
ML framework, either as contaminants to be strategically modelled and subtracted or as
part of a joint inference strategy, will be an essential milestone. Finally, bridging the gap
between idealized training and real data will require moving from flat-sky image slices to
curved-sky representations, such as HEALPix maps.
Incorporating these into end-to-end
pipelines would enable the community to test ML inference under conditions that more
closely approximate the complexities of actual experiments like the MWA and, eventually,
the SKA. Stitching together all these components—frequency dependence, noise and cali-
bration effects, foregrounds, and curved-sky geometries—will lay the foundation for robust,
error-aware machine-learning pipelines capable of delivering reliable astrophysical constraints
from future 21-cm surveys. In the series of papers that will follow, our primary goal will be
to develop ML pipelines that can ingest realistic dirty images—including direction-dependent
effects, beam chromaticity, calibration errors, and other systematics—and are able to produce
reliable astrophysical predictions.
Acknowledgments
MC is supported by a CFPU Postdoctoral fellowship. Part of this research was conducted
using computational resources and services at the Center for Computation and Visualization,
Brown University.
– 12 –
References
[1] G. Mellema, L.V.E. Koopmans, F.A. Abdalla, G. Bernardi, B. Ciardi, S. Daiboo et al.,
Reionization and the Cosmic Dawn with the Square Kilometre Array, Experimental Astronomy
36 (2013) 235 [1210.0197].
[2] L. Koopmans, J. Pritchard, G. Mellema, J. Aguirre, K. Ahn, R. Barkana et al., The Cosmic
Dawn and Epoch of Reionisation with SKA, Advancing Astrophysics with the Square Kilometre
Array (AASKA14) (2015) 1 [1505.07568].
[3] HERA collaboration, Hydrogen Epoch of Reionization Array (HERA), pasp 129 (2017) 045001
[1606.07473].
[4] MWA collaboration, The Murchison Widefield Array: The Square Kilometre Array Precursor
at Low Radio Frequencies, pasa 30 (2013) e007 [1206.6945].
[5] LOFAR collaboration, LOFAR: The LOw-Frequency ARray, aap 556 (2013) A2 [1305.3550].
[6] A.H. Patil, S. Yatawatta, L.V.E. Koopmans, A.G. de Bruyn, M.A. Brentjens, S. Zaroubi et al.,
Upper Limits on the 21 cm Epoch of Reionization Power Spectrum from One Night with
LOFAR, ApJ 838 (2017) 65 [1702.08679].
[7] N. Barry, A.P. Beardsley, R. Byrne, B. Hazelton, M.F. Morales, J.C. Pober et al., The
FHD/∈ppsilon Epoch of Reionisation power spectrum pipeline, PASA 36 (2019) e026
[1901.02980].
[8] W. Li, J.C. Pober, N. Barry, B.J. Hazelton, M.F. Morales, C.M. Trott et al., First Season
MWA Phase II Epoch of Reionization Power Spectrum Results at Redshift 7, ApJ 887 (2019)
141 [1911.10216].
[9] C.M. Trott, C.H. Jordan, S. Midgley, N. Barry, B. Greig, B. Pindor et al., Deep multiredshift
limits on Epoch of Reionization 21 cm power spectra from four seasons of Murchison Widefield
Array observations, MNRAS 493 (2020) 4711 [2002.02575].
[10] F.G. Mertens, M. Mevius, L.V.E. Koopmans, A.R. Offringa, G. Mellema, S. Zaroubi et al.,
Improved upper limits on the 21 cm signal power spectrum of neutral hydrogen at z ≈9.1 from
LOFAR, MNRAS 493 (2020) 1662 [2002.07196].
[11] HERA Collaboration, Z. Abdurashidova, J.E. Aguirre, P. Alexander, Z. Ali, Y. Balfour et al.,
HERA Phase I Limits on the Cosmic 21-cm Signal: Constraints on Astrophysics and
Cosmology During the Epoch of Reionization, arXiv e-prints (2021) arXiv:2108.07282
[2108.07282].
[12] HERA Collaboration, Z. Abdurashidova, T. Adams, J.E. Aguirre, P. Alexander, Z.S. Ali et al.,
Improved Constraints on the 21 cm EoR Power Spectrum and the X-Ray Heating of the IGM
with HERA Phase I Observations, ApJ 945 (2023) 124.
[13] F.G. Mertens, M. Mevius, L.V.E. Koopmans, A.R. Offringa, S. Zaroubi, A. Acharya et al.,
Deeper multi-redshift upper limits on the Epoch of Reionization 21-cm signal power spectrum
from LOFAR between z=8.3 and z=10.1, arXiv e-prints (2025) arXiv:2503.05576 [2503.05576].
[14] M.W. Eastwood, M.M. Anderson, R.M. Monroe, G. Hallinan, B.R. Barsdell, S.A. Bourke et al.,
The Radio Sky at Meter Wavelengths: m-Mode Analysis Imaging with the Owens Valley Long
Wavelength Array, ArXiv e-prints (2017) [1711.00466].
[15] B.K. Gehlot, F.G. Mertens, L.V.E. Koopmans, M.A. Brentjens, S. Zaroubi, B. Ciardi et al.,
The first power spectrum limit on the 21-cm signal of neutral hydrogen during the Cosmic
Dawn at z = 20-25 from LOFAR, MNRAS 488 (2019) 4271 [1809.06661].
[16] B.K. Gehlot, F.G. Mertens, L.V.E. Koopmans, A.R. Offringa, A. Shulevski, M. Mevius et al.,
The AARTFAAC Cosmic Explorer: observations of the 21-cm power spectrum in the EDGES
– 13 –
absorption trough, Monthly Notices of the Royal Astronomical Society 499 (2020) 4158
[https://academic.oup.com/mnras/article-pdf/499/3/4158/34068575/staa3093.pdf].
[17] S. Yoshiura, B. Pindor, J.L.B. Line, N. Barry, C.M. Trott, A. Beardsley et al., A new MWA
limit on the 21 cm power spectrum at redshifts ~13–17, Monthly Notices of the Royal
Astronomical Society 505 (2021) 4775
[https://academic.oup.com/mnras/article-pdf/505/4/4775/38828418/stab1560.pdf].
[18] S. Munshi, F.G. Mertens, L.V.E. Koopmans, A.R. Offringa, B. Semelin, D. Aubert et al., First
upper limits on the 21 cm signal power spectrum from cosmic dawn from one night of
observations with NenuFAR, A&A 681 (2024) A62 [2311.05364].
[19] A. Cohen, A. Fialkov, R. Barkana and R.A. Monsalve, Emulating the global 21-cm signal from
Cosmic Dawn and Reionization, MNRAS 495 (2020) 4845 [1910.06274].
[20] C.J. Schmit and J.R. Pritchard, Emulation of reionization simulations for Bayesian inference
of astrophysics parameters using neural networks, MNRAS 475 (2018) 1213 [1708.00011].
[21] S. Hassan, A. Liu, S. Kohn and P. La Plante, Identifying reionization sources from 21 cm maps
using Convolutional Neural Networks, MNRAS 483 (2019) 2524 [1807.03317].
[22] P. La Plante and M. Ntampaka, Machine Learning Applied to the Reionization History of the
Universe in the 21 cm Signal, ApJ 880 (2019) 110 [1810.08211].
[23] J. Chardin, G. Uhlrich, D. Aubert, N. Deparis, N. Gillet, P. Ocvirk et al., A deep learning
model to emulate simulations of cosmic reionization, MNRAS 490 (2019) 1055 [1905.06958].
[24] W. Li, H. Xu, Z. Ma, R. Zhu, D. Hu, Z. Zhu et al., Separating the EoR signal with a
convolutional denoising autoencoder: a deep-learning-based method, MNRAS 485 (2019) 2628
[1902.09278].
[25] A. Acharya, F. Mertens, B. Ciardi, R. Ghara, L.V.E. Koopmans, S.K. Giri et al., 21-cm signal
from the Epoch of Reionization: a machine learning upgrade to foreground removal with
Gaussian process regression, MNRAS 527 (2024) 7835 [2311.16633].
[26] H. Shimabukuro, Y. Mao and J. Tan, Estimation of H II Bubble Size Distribution from 21 cm
Power Spectrum with Artificial Neural Networks, Research in Astronomy and Astrophysics 22
(2022) 035027 [2002.08238].
[27] X. Zhao, Y. Mao, C. Cheng and B.D. Wandelt, Simulation-based inference of reionization
parameters from 3d tomographic 21 cm light-cone images, The Astrophysical Journal 926
(2022) 151.
[28] M. Choudhury, A. Datta and S. Majumdar, Extracting the 21-cm power spectrum and the
reionization parameters from mock data sets using artificial neural networks, MNRAS 512
(2022) 5010 [2112.13866].
[29] S. Zaroubi, The Epoch of Reionization, in Astrophysics and Space Science Library, T. Wiklind,
B. Mobasher and V. Bromm, eds., vol. 396 of Astrophysics and Space Science Library, p. 45,
2013, DOI [1206.0267].
[30] S.R. Furlanetto, The global 21-centimeter background from high redshifts, MNRAS 371 (2006)
867 [astro-ph/0604040].
[31] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al.,
Planck 2018 results. VI. Cosmological parameters, ArXiv e-prints (2018) arXiv:1807.06209
[1807.06209].
[32] A. Mesinger, S. Furlanetto and R. Cen, 21CMFAST: a fast, seminumerical simulation of the
high-redshift 21-cm signal, MNRAS 411 (2011) 955 [1003.3878].
[33] M.G. Santos, L. Ferramacho, M.B. Silva, A. Amblard and A. Cooray, Fast large volume
– 14 –
simulations of the 21-cm signal from the reionization and pre-reionization epochs, MNRAS 406
(2010) 2421 [0911.2219].
[34] A. Fialkov, R. Barkana, A. Pinhas and E. Visbal, Complete history of the observable 21 cm
signal from the first stars during the pre-reionization era, MNRAS 437 (2014) L36 [1306.2354].
[35] A. Hutter, C.M. Trott and P. Dayal, Survey parameters for detecting 21-cm-Ly α emitter
cross-correlations with the Square Kilometre Array, MNRAS 479 (2018) L129 [1806.07902].
[36] R.M. Thomas, S. Zaroubi, B. Ciardi, A.H. Pawlik, P. Labropoulos, V. Jelić et al., Fast
large-scale reionization simulations, Monthly Notices of the Royal Astronomical Society 393
(2009) 32
[https://academic.oup.com/mnras/article-pdf/393/1/32/3775520/mnras0393-0032.pdf].
[37] R. Ghara, T.R. Choudhury and K.K. Datta, 21 cm signal from cosmic dawn: imprints of spin
temperature fluctuations and peculiar velocities, MNRAS 447 (2015) 1806 [1406.4157].
[38] N.Y. Gnedin and A.A. Kaurov, Cosmic Reionization on Computers. II. Reionization History
and Its Back-reaction on Early Galaxies, ApJ 793 (2014) 30.
[39] J. Rosdahl, H. Katz, J. Blaizot, T. Kimm, L. Michel-Dansac, T. Garel et al., The SPHINX
cosmological simulations of the first billion years: the impact of binary stars on reionization,
MNRAS 479 (2018) 994 [1801.07259].
[40] P. Ocvirk, D. Aubert, J.G. Sorce, P.R. Shapiro, N. Deparis, T. Dawoodbhoy et al., Cosmic
Dawn II (CoDa II): a new radiation-hydrodynamics simulation of the self-consistent coupling of
galaxy formation and reionization, MNRAS 496 (2020) 4087 [1811.11192].
[41] R. Kannan, A. Smith, E. Garaldi, X. Shen, M. Vogelsberger, R. Pakmor et al., The THESAN
project: predictions for multitracer line intensity mapping in the epoch of reionization, MNRAS
514 (2022) 3857 [2111.02411].
[42] S. Murray, B. Greig, A. Mesinger, J. Muñoz, Y. Qin, J. Park et al., 21cmFAST v3: A
Python-integrated C code for generating 3D realizations of the cosmic 21cm signal., The
Journal of Open Source Software 5 (2020) 2582 [2010.15121].
[43] R.B. Wayth, S.J. Tingay, C.M. Trott, D. Emrich, M. Johnston-Hollitt, B. McKinley et al., The
Phase II Murchison Widefield Array: Design overview, PASA 35 (2018) 33 [1809.06466].
[44] A.E. Lanman, J.C. Pober, N.S. Kern, E. de Lera Acedo, D.R. DeBoer and N. Fagnoni,
Quantifying EoR delay spectrum contamination from diffuse radio emission, MNRAS 494
(2020) 3712 [1910.10573].
[45] I.S. Sullivan, M.F. Morales, B.J. Hazelton, W. Arcus, D. Barnes, G. Bernardi et al., Fast
Holographic Deconvolution: A New Technique for Precision Radio Interferometry, ApJ 759
(2012) 17 [1209.1653].
[46] D.P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, ArXiv e-prints (2014)
[1412.6980].
[47] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel et al., Scikit-learn:
Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825.
[48] R.C. Joseph, C.M. Trott, R.B. Wayth and A. Nasirudin, Calibration and 21-cm power spectrum
estimation in the presence of antenna beam variations, MNRAS 492 (2020) 2017 [1911.13088].
– 15 –
|
Prepared for submission to JCAP Towards a Robust Machine-Learning Pipeline for 21-cm Cosmology Data Analysis I: A Roadmap for Development and Demonstration of Robustness Against PSF Modeling Errors Madhurima Choudhury ,a Jonathan C. Pober ,a,b aCenter for the Fundamental Physics of the Universe, Brown University 184 Hope St Providence, RI 02912, USA b 184 Hope St Providence, RI 02912, USA 18 Sep 2025 E-mail: , Abstract. The 21-cm signal from the Epoch of Reionization (EoR) is a powerful probe of the evolution of the Universe. However, accurate measurements of the EoR signal from radio interferometric observations are sensitive to efficient foreground removal, mitigating radiofrequency interference and accounting for instrumental systematics. This work represents the first in a series of papers, where we will be introducing a novel ML based pipeline, step-bystep, to directly infer reionization parameters from 21-cm radio-interferometric images. In this paper, we investigate the impact of the variations in the point spread function (PSF) on parameter estimation by simulating visibilities corresponding to input 21-cm maps as observed by the 128-antenna configuration of the Murchison Widefield Array (MWA) Phase II. These visibilities are imaged to obtain 'dirty images,' which are then used to train a 2D convolutional neural network (CNN) to predict xHI. To systematically assess the effect of PSF mis-modelling, we generate multiple test sets by varying the MWA's antenna layout, thereby introducing controlled variations in the PSF; we then feed these alternative PSF dirty images to our CNN trained using only dirty images with the PSF of the true antenna layout. Our results demonstrate that PSF variations introduce biases in the CNN's predictions of xHI, with errors depending on the extent of PSF distortion. We quantify these biases and discuss their implications for the reliability of machine-learning-based parameter inference in 21-cm cosmology and how they can be utilized to improve the robustness of estimation against PSFrelated systematics in future 21-cm surveys. In concluding, we also discuss how this approach to incorporating realistic instrument error into an ML analysis pipeline can be expanded to include multiple other effects. Keywords: Machine learning, reionization, first stars Contents 1 Introduction 1 2 The 21-cm signal and interferometric measurements 3 2.1 The point spread function 4 3 Methodology 4 3.1 The 21-cm Maps 5 3.1.1 Simulating visibilities & Imaging 5 3.2 ML Network 6 4 Results 7 4.1 Introducing PSF Errors 8 5 Discussions 11 6 Roadmap for the future 12 1 Introduction The redshifted 21-cm line of neutral hydrogen would serve as an excellent probe to understand the structure and evolution of our Universe across cosmic time. This signal provides a unique window into the key epochs in the evolution of our Universe, including the Dark Ages, Cosmic Dawn, and the Epoch of Reionization, offering deep insights into the formation of the first stars, galaxies, and large-scale structure. However, observations of the 21-cm signal, both as a sky-averaged global signature and as fluctuations measured using an interferometer, are extremely challenging. Detecting the cosmological 21-cm signal is one of the key science goals for several current and upcoming lowfrequency radio telescopes (for example, SKA, [1, 2]; HERA,[3]; MWA, [4]; LOFAR, [5]; etc.). Several radio-interferometric experiments have reported upper limits on the power spectrum of 21-cm fluctuations over the past decade, from the epoch of reionization (EoR) [6-13] and the cosmic dawn (CD) era [14-18]. These upper limits have enabled theorists to rule out certain models of reionization and provide constraints on the physical parameters. In recent years, machine learning (ML) techniques have become increasingly prominent across various different domains of astrophysics and cosmology. Particularly, in 21-cm cosmology, ML has been applied to a wide range of problems, such as signal emulation to parameter estimation, highlighting its potential as a powerful tool for analyzing complex high-dimensional datasets. For instance, fast emulators for the 21-cm global signal and power spectrum have been developed using ML approaches [19, 20], enabling rapid exploration of parameter space. Convolutional Neural Networks (CNNs) have been employed to constrain reionization physics from 21-cm maps [21, 22], while deep learning models have also been used to emulate the full time-evolution of 21-cm brightness temperature maps from the EoR [23]. More recent advancements include the use of convolutional denoising autoencoders (CDAEs) to recover the EoR signal from simulated SKA images with realistic beam effects [24], and the application of Gaussian Process Regression (GPR) for foreground separation - 1 - [25]. Artificial Neural Networks (ANNs) have been used to estimate bubble size distributions from 21-cm power spectra [26], and CNNs have been applied to 3D tomographic 21-cm images for parameter inference and posterior estimation [27]. Also, [28] implemented ANNs to extract power spectrum parameters from synthetic observations contaminated with strong foregrounds. These diverse applications underscore the growing role of ML in tackling the multifaceted challenges of 21-cm cosmology. In spite of the excellent works incorporating ML into EoR studies, ML methods have their drawbacks. All supervised-learning methods are highly problem-specific, strongly dependent on the quality, diversity and representativeness of the training process. They are mostly not generalizable and struggle to deal with out-of-distribution inputs or unknown systematics, common in real observations. While such models can perform well on narrowly defined tasks under controlled conditions, their reliability diminishes when faced with unmodelled complexities. ML becomes particularly useful when it becomes difficult to explicitly model the data, particularly where traditional physical models are computationally expensive or analytically intractable. Its strength lies in its ability to learn from data representations directly, provided the training data sufficiently captures the variability of real observations. To address these limitations, our work takes a step towards building a more resilient and informed ML model. In our case, to train a network, we need not define an explicit model for complicated systematics (a nearly impossible feat for instruments with the size and complexity of 21-cm cosmology experiments); we just need a way of representing a reasonable range of signatures in our training set. As a first case study, we explore the impact of PSF variability on ML-based inference, highlighting the need for more robust and systematics-aware training strategies in 21-cm cosmology. In real data, all the complicating factors are present. To confidently analyze real data with ML, we need an approach that will not confuse one issue for another. For instance, miscalibrated foreground removal and electromagnetic mutual coupling between antennas can both leave excess power at the same spectral scales (due to their comparable light travel time delays). An ML approach trained only to remove foregrounds is unlikely to do the job properly if mutual coupling is present. There is no existing complete end-to-end pipeline stitching all of these key aspects together. The work presented here is a first step in a series of papers, through which we will methodically incorporate more and more sophisticated effects into an ML-based end-to-end pipeline to analyze 21-cm interferometric data. In this paper, we aim to demonstrate how well we can predict the neutral hydrogen fraction directly from dirty images, bypassing the necessity to reconstruct the original image after going through a deconvolution algorithm. The outline of this paper is as follows: In §2, we discuss briefly the fundamentals of interferometric measurements, focusing on visibilities and the role of the point-spread function (PSF). In §3, we describe our approach to generate the mock-observations for preparing the training set, and how simulated 21-cm maps and visibilities are generated. We also describe the architecture of the neural network model that is used. Following this, we describe how the PSF errors are introduced and the test sets are prepared. In §4 we present our results and in §5 we summarize and discuss the findings of this paper. Finally in §6, we further chalk out the roadmap for future work in detail. - 2 - 2 The 21-cm signal and interferometric measurements The key observable of 21-cm cosmology is the differential brightness temperature δTb, which primarily quantifies the contrast between the redshifted 21-cm signal from neutral hydrogen and the cosmic microwave background (CMB). The differential brightness temperature of the 21-cm signal (δTb) at a redshift z and an angular position x can be expressed as: δTb(x, z) ≈27xHI(x, z)[1 + δ(x, z)] Ωbh2 0.023 0.15 Ωmh2 1 + z 10 1/2 Ts(x, z) -TCMB(z) Ts(x, z) mK, (2.1) where, xHI is the neutral hydrogen fraction, δ is the matter overdensity contrast and Ts is the spin temperature, respectively of the region located at (x, z). TCMB is the cosmic microwave background (CMB) temperature which is the radio-background temperature at redshift z. Ωm, Ωb are the mass and baryon densities in units of the critical density respectively. For a more detailed review see [29, 30]. Assuming very efficient x-ray heating at a high spin temperature limit, with Ts >> TCMB, Eq.2.1 can be simplified as: δTb(x, z) ≈xHI(x, z)[1 + δ(x, z)] 1 + z 10 1/2 mK. (2.2) Eq.2.1 highlights the sensitivity of the 21-cm signal to a wide range of astrophysical and cosmological parameters. Throughout this paper, we have used the Planck 2018 cosmology [31]. However, the radio-interferometers do not observe δTb directly. Instead, they measure complex-valued quantities called visibilities, which represent baseline-dependent, spatial Fourier modes of the sky, modulated by the instrument's primary beam response. Assuming that the sky is effectively flat in the region of interest (we plan to deal with the curved sky in future work), the measured visibility function V (u, v), is related to the sky brightness distribution I(l, m) via a 2D Fourier transform: V (u, v) = Z A(l, m)I(l, m)e-2πi(ul+vm) dl dm (2.3) where, I(l, m) is the sky brightness distribution and A(l, m) is the primary beam pattern of the antenna as a function of angular coordinates (l, m), (u, v) are the spatial frequency coordinates in units of wavelengths, and the exponential term represents the phase difference due to different arrival times of the signal at each antenna. Longer baselines (larger (u, v) values) probe smaller angular scales, while shorter baselines capture large-scale structures. Observations of the 21-cm signal would provide us with visibilities, which can be imaged to reconstruct spatial maps of the redshifted 21-cm signal, providing a more informative view of the distribution of neutral hydrogen across redshifts. While the power spectrum is the primary statistical observable targeted by current interferometric experiments, direct imaging would offer far richer insights into these high redshift epochs. Direct imaging would be a rich, treasure trove of information, and would enable us to resolve individual ionized and neutral regions, revealing the morphology of reionization and the large-scale structure of the early universe. However, due to the faintness of the 21-cm signal and contamination from foregrounds like Galactic synchrotron emission, direct imaging is significantly more challenging. While statistical detection via the power spectrum from the current and upcoming interferometers is expected first, direct imaging will provide a transformative leap in our understanding of - 3 - the cosmic dawn and reionization epochs. The SKA is predicted to achieve the sensitivity required for the direct imaging of the signal and eventually enable us to map the tomography of the signal. 2.1 The point spread function A fundamental concept in interferometric imaging is the point spread function (PSF), which characterizes how a point source appears in the reconstructed sky image due to the instrument's limited sampling of spatial frequencies. It is the response pattern corresponding to the instrument's size and sampling. Since an interferometer does not measure all possible Fourier modes, the resulting image, which is called the 'dirty image,' is a convolution of the true sky brightness with the PSF (which is also known as the 'dirty beam'). The PSF is determined by the uv-coverage, which describes the sampled spatial frequencies based on the distribution of the antennas and the observation time. A well-sampled uv-plane results in a more compact and symmetric PSF, leading to higher-quality imaging, while sparse coverage introduces sidelobes, artifacts that can complicate image interpretation. This convolution of the sky with the PSF, introduces spatially varying distortions that can mimic or obscure cosmological features in 21-cm maps. Consequently, understanding and mitigating PSF effects are critical for robust inference. Also, understanding the structure and limitations of the PSF is essential when interpreting interferometric images or using them as inputs for inference pipelines. It is to be kept in mind that CLEAN or any deconvolution technique is non-linear, and can lead to loss of information. Hence, working directly with the 'dirty image', which is the truest representation of the measurements, becomes more beneficial. In real observations, the PSF is not fixed; it varies with time, frequency, and array configuration. Accounting for such variations into inference pipelines is essential because models trained on a single, idealized PSF might not perform well, when applied to slightly different conditions. In this paper, we investigate this effect in a controlled setting. As the first step, we test to verify that the mere presence of a fixed PSF does not adversely affect the recovery of the neutral hydrogen fraction (xHI) directly from the dirty images. Only after establishing this baseline do we proceed to study how variations in the PSF influence the inference. We then try to mimic variations introduced in the PSF corresponding to observations from the 128 antenna configuration of the MWA Phase II array by systematically removing randomly selected antennas from the default configuration. Each of these cases give rise to a slightly different measurement of the same field being observed, and a slightly different PSF. Our goal is to quantify how much variation in the PSF can be tolerated before the accuracy of parameter inference from these mock observations begins to degrade significantly. This analysis provides a first step toward understanding the level of error that needs to be represented in the training sets to construct an ML-based pipeline that is robust to realistic instrumental systematics. 3 Methodology To develop our ML-framework, the first and most crucial step is assembling a representative and well-characterized training set. In our case, this involves generating 21-cm maps and obtain their corresponding dirty images created using a reference telescope configuration. The ML-framework is then trained on these ideal dirty images, where the PSF is consistent and - 4 - well-behaved across all training samples. This allows the network to learn the mapping between the observed image and the underlying neutral fraction. However, in real observations, small changes in the telescope array-such as antenna failures or flagged baselines-can alter the PSF in ways that deviate from the idealized case. To test the resilience of the trained model under such realistic conditions, we introduce controlled perturbations to the PSF in the test set by randomly removing antennas from the default configuration. This setup allows us to systematically quantify how much PSF variation the model can tolerate before its predictive performance begins to degrade. We explain our training data generation and model architecture in detail in this section. 3.1 The 21-cm Maps We first assemble a suite of simulated 21-cm signals to build our training sets. To date, the theoretical interpretations of the 21-cm measurements have mostly been guided by seminumerical models of reionization [32-35]. Other fast approximate techniques have been developed [36, 37] to bypass the more computationally expensive and accurate full radiative transfer simulations [38-41]. While these simulations provide us an excellent representation of various possible reionization scenarios, they are all subject to their own set of limitations and approximations which make it difficult to incorporate all of them simultaneously into an inference framework. Our initial result presented here uses only simulations from 21cmFAST v3 [42]. We have generated 1000 lightcones, each with different reionization histories. A lightcone is a spatial and temporally coherent 3D volume, encoding the evolving 21-cm brightness temperature field as a function of redshift, along the line of sight. Each lightcone consists of a sequence of 2D slices corresponding to discrete redshifts, with each slice encoding the spatial distribution of the 21-cm brightness temperature and the associated neutral hydrogen fraction at that redshift. To construct our training set, we randomly sample 2D slices from these 1000 lightcones along with their neutral hydrogen fraction at a single frequency. These 21-cm maps and the corresponding xHI values serve as the foundation of our supervised learning pipeline. We emphasize that, in this work, we limit ourselves to individual 2D slices at fixed frequencies, deferring a full frequency-dependent analysis to a forthcoming paper in this series. 3.1.1 Simulating visibilities & Imaging Once we have the set of randomly chosen slices from different lightcones, the next step of our analysis is to put the 21-cm signal maps through an instrument model, produce interferometric visibilities, and then image those visibilities for input into our ML network. We choose this approach (as opposed to simply convolving the images with a PSF calculated from the uvcoverage) as a foundation for future work, where other visibility-based systematics will be considered. The MWA Phase II [43] is an upgraded version of the original Phase I array. We consider the MWA Phase II compact configuration as our model instrument, with a tightly packed core of radius 50 m, and additionally, the two hexagons for calibration and power spectrum sensitivity. The majority of the baselines in the Phase II configuration are less than 200 m, and the redundancy present in the hexagons leads to grating lobes in the PSF. We choose to work with this configuration as something as a worst-case scenario for the PSF present in a tomographic imaging experiment. We use the "EoR0" field (Right Ascension, (R.A.)=0h00, Declination (decl.)=-27°) for 10800s (three hours) and use CASA's simobserve task to create - 5 - 0 100 200 300 400 500 X-axis (pixels) 0 100 200 300 400 500 Y-axis (pixels) Brightness temperature ( Tb) Map 0 100 200 300 400 500 X-axis (pixels) 0 100 200 300 400 500 Y-axis (pixels) Dirty image 0 5 10 15 20 25 (mK) 600 400 200 0 200 400 600 800 (Jy/Beam) Figure 1: Left: One slice of a 21-cm brightness temperature field from a 21cmFAST simulation. Right A "dirty image" of same slice, made by creating model visibilities from the left hand image (via CASA's simobserve) and then imaging those visibilities (via CASA's tclean). single-frequency model visibilities for the brightness temperature (δTb) image slices pulled from each of the light cones. We then, in turn, use the tclean task, with the number of iterations set to zero (i.e. there is no CLEAN process), to transform these visibilities to dirty images of the 21-cm signal. An example of this process is shown in Figure 1. There are sophisticated codes which particularly model the curved sky accurately, which we plan to deal with in our future papers. For example, codes such as pyuvsim [44] can be used for generating visibilities and FHD [7, 45] for map-making. 3.2 ML Network The training set for our 2D Convolutional Neural Network (CNN) contains approximately 5000 dirty images of the 21-cm signal. These slices have been randomly picked up from the suite of 1000 lightcones that we had generated. We consider only those slices with neutral hydrogen fractions in the range (0.10, 0.90), so as to avoid blank or featureless images. Each dirty image is associated with a corresponding neutral hydrogen fraction, xHI, which represents the abundance of neutral hydrogen in that specific slice before passing the image through the instrument model. The CNN is trained to predict the neutral hydrogen fraction xHI directly from these dirty images. Our CNN model is designed with two convolutional layers, each followed by batch normalization and max pooling layers. These convolutional layers allow the network to capture spatial patterns and hierarchical features in the images, while batch normalization helps in stabilizing and accelerating the training process. Max pooling layers are employed to progressively reduce the spatial dimensions and make the network invariant to small shifts in the input, enhancing its ability to generalize. At the end of the convolutional stack, the output is flattened, and a fully connected (dense) layer serves as the output layer, which aggregates the learned features to predict xHI for each image. Our CNN model (Fig.2) is trained for 500 epochs using a mean squared error loss function, which measures the difference between predicted and true values of xHI. The training process optimizes the model weights to minimize this error(we use the Adam [46] optimizer - 6 - Dense layers xHI Feed forward Conv2D + BN + Maxpool Conv2D(kernel=3×3, filters=32) MaxPool (2×2) BN Dense (128) Dense(64) Dense(1) Conv2D + BN + Maxpool Conv2D(kernel=3×3, filters=64) MaxPool (2×2) BN Figure 2: A schematic representation of the 2D CNN architecture. The input images are the 'dirty images' generated using CASA's simobserve and tclean tasks. The CNN consists of two sets of convolutional layers, batch normalization and maxpool layers which finally regresses onto the neutral hydrogen fraction xHI. from keras), gradually refining the network's ability to accurately estimate the hydrogen fraction from the corrupt 21-cm image data. We split the training data into two subsets, allocating 20% of it as the validation set. This is done to monitor the model's performance, prevent overfitting and ensure generalization. We have used publicly available python-based packages scikit learn ([47]) and keras in this work to design the networks. We train the CNN-model to learn spatial patterns within the images that correlate with variations in xHI, effectively capturing the underlying structure and distribution of the 21-cm signal in a mock-observational environment. This model effectively learns the underlying relationship of the observed images and physical parameters, enabling more accurate interpretations of 21-cm signal data in future cosmological studies. 4 Results In this section, we present the results of training the CNN-model on the δTb maps, convolved with the default PSF of the full MWA Phase II array, as described in 3.2. We train the CNN to learn the non-linear mapping between these 'dirty' images and the corresponding neutral hydrogen fraction (xHI). We create a test set consisting of 200 dirty images slices, following the same steps as the training data, along with their corresponding xHI. This set is different from the training data, and remains unseen to the trained network. For evaluation, we compare the CNNpredicted values of xHI against the true values from the test set using two standard metrics: the coefficient of determination (R2) and the root mean square error (RMSE). The R2 score and RMSE are defined as: R2 = 1 -Σ(xHI,pred -xHI,ori)2 Σ(xHI,ori -xHI,ori)2 (4.1) - 7 - RMSE = v u u t 1 N N X i=1 xHI,ori -xHI,pred xHI,ori 2 (4.2) where, xHI,ori is the original parameter, xHI,pred is the parameter predicted by the CNN, xHI,ori is the average of the original parameter, and the summation is over the entire test set, consisting of N samples. R2 can vary between 0 and 1, and R2 = 1 implies a perfect inference of the parameters. A low value of RMSE implies a good prediction. The top-left plot in Fig. 3 shows the results from CNN. We plot the true versus the predicted values of xHI along with the computed R2-score, for a test set with no variations in the PSF introduced. The CNN achieves an excellent R2 score and low RMSE, demonstrating that the neutral fraction can be robustly inferred from dirty images when the PSF is fixed and well-characterized. This confirms that the convolution with a known PSF does not significantly degrade the information content needed for learning xHI. We would like to investigate whether a CNN trained on a certain PSF (i.e. the PSF of the full MWA Phase II with all 128 antennas) could also correctly recover the neutral fraction of these maps where the PSF is slightly different. The scope of this test is limited to differences within a reasonable range: we are not attempting to generalize across entirely different instruments (e.g., training on MWA and predicting for SKA) nor to train on PSFfree images and then recover parameters from realistic data. Instead, our goal is to establish whether a CNN trained on a well-defined PSF can remain robust when faced with modest, realistic perturbations that reflect the level of uncertainty expected in an observing pipeline where instrumental and observational parameters are known to decent accuracy. Having established this baseline performance, we next test the robustness of the CNN predictions under degraded PSFs that mimic calibration uncertainties and array-configuration variability. 4.1 Introducing PSF Errors In real interferometric observations, the PSF is intrinsically frequency dependent and evolves with the UV-coverage, which itself varies with observation time, antenna configuration, flagging, calibration imperfections and several other factors. These corruptions creep into the image domain as artifacts at that can in principle mimic or obscure the features in the cosmological signal. When we train an ML pipeline on dirty images assuming a perfect PSF, the pipeline might not be capable of dealing with real-life interferometric observations. Hence, we introduce imperfections in the assumption of the PSF of the observing instrument by removing random subsets of antennae from the ideal MWA Phase II configuration. In this way, we can mimic one aspect of instrument model uncertainty in a very simplistic manner. In this preliminary analysis, we focus on uncertainty in the PSF. To introduce errors into the PSF, we rerun simobserve on a new set of δTb maps, but randomly remove antennae from the array to degrade the PSF. Randomly removing antennae would modify the uv-sampling and hence introduce variations in the PSF, simulating a mock-observation scenario. We take the original δTb slices corresponding to the test set, and create another set of visibilities after randomly removing 5 antennas from the 128 antennas composing MWA Phase II. Each slice within a set has a different set of 5 randomly chosen antennas removed from the default configuration, so that the PSF error is different in each sample. We then repeat this process, by removing 10, 20, 30, and 64 antennas, for a total of five such test sets with 200 visibility simulations per set using the simobserve task from CASA. We then re-image these visibilities with the tclean task. These sets of 'dirty images' for different sets of antennas removed from the default MWA phase II configuration now constitute different test sets, with slightly - 8 - 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 Predicted xH R2=0.99 RMSE=0.027 Full MWA layout Predicted xH True Fit 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 1.0 Predicted xH R2=0.99 RMSE=0.033 5 ant removed at random Predicted xH True Fit 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 1.0 Predicted xH R2=0.98 RMSE=0.052 10 ant removed at random Predicted xH True Fit 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 Predicted xH R2=0.96 RMSE=0.053 20 ant removed at random Predicted xH True Fit 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 1.0 Predicted xH R2=0.96 RMSE=0.101 25 ant removed at random Predicted xH True Fit 0.2 0.4 0.6 0.8 Original xH 0.2 0.4 0.6 0.8 1.0 Predicted xH R2=0.85 RMSE=0.145 30 ant removed at random Predicted xH True Fit Figure 3: The predictions of the neutral fraction (xHI)from our CNN with different testsets. The first plot on the left panel corresponds to the results obtained from a testset with no PSF variations introduced. modified instrument configurations. We use our previously trained CNN pipeline to predict the neutral hydrogen fractions (xHI) from these test sets which are representative of dirty images created by modified PSFs. In Fig. 3, we plot the results obtained from each of the test sets described above. The scatter plots show the true versus the predicted neutral hydrogen fraction, where each point is the predicted xHI from one of the 200 dirty image samples in the test sets, and corresponds to a different overall antenna layout. From Fig.3, it is clear that the predictions for the same underlying δTb maps and their associated xHI become worse as more variations are introduced in the PSF, by removing more number of antennae. To capture and quantify the error introduced into the PSF, arising from variations in the default array configuration, we define a metric based on the root mean squared error (RMSE) between the perturbed PSF and a reference 'true' PSF (computed from the default, MWA Phase II full array configuration). Specifically, for each realization, the observed PSF is saved after randomly dropping a subset of antennas, and the pixel-wise RMSE is computed relative to the true (default) PSF. This RMSE error is then normalized by the total pixel-wise intensity of the true PSF to obtain a dimensionless score. The RMSE based error metric, εPSF, for each test image, is defined as: εPSF = v u u t 1 N N X i=1 Bobs i -Bori i Bori i 2 (4.3) - 9 - 0 20 40 60 80 100 120 140 , Characteristic PSF sidelobe error(%) 0 10 20 30 40 MAE in xHI prediction(%) Pessimistic limit of MWA Error 0 removed 5 removed 10 removed 20 removed 25 removed 30 removed 64 removed Figure 4: The percentage error in our CNN's predictions of the neutral fraction (xHI) as a function of the characteristic PSF error compared with the full 128-antenna PSF that was used in the training set. The shaded grey region indicates the most pessimistic range of PSF error we expect with the real MWA, suggesting that a training set containing only the full 128-antenna PSF is sufficient to yield ∼5% error in neutral fraction predictions even we are mismodeling the instrument's true PSF. The error bars here denote the standard deviations of the absolute fractional errors on the xHI predictions for all the samples, in each of the test sets. where, Bi is the PSF response at pixel i and N denotes the total number of pixels in each image. This score enables us to quantify variations between the true and modified or perturbed PSFs for each image in the test set. The final score of the PSF error metric σ for each of our test set is calculated as follows. Our error metric εPSF, computes the normalized difference squared between the original PSF and the modified PSF, which is then averaged over all the pixels, N, in the image. This is then averaged across all the 200 modified PSF's corresponding to the test sets for that specific number of antennas removed. This quantity, σ, encapsulates the characteristic PSF sidelobe error for a particular test set and is given by: σ = 1 M M X j εj PSF (4.4) - 10 - where, M is simply the number of test set samples (200) for each case considered (5,10,20,30,64 antennae removed) and we obtain the σ corresponding to each test set. Now that we have a metric to represent the characteristic sidelobe errors for each test set, we also compute the mean absolute error (MAE) for the predictions of the CNN corresponding to each of these test sets. The MAE for the predicted xHI, for each of the test sets is given by: MAE = 1 M M X i=1 |xpred HI,i -xori HI,i| xori HI,i , (4.5) This number represents the typical error in the prediction of xHI, for each of the test sets with 0, 5, 10, 20, 30, 64 antennae removed. In Fig.4, we see that this error correlates strongly with the computed PSF error metric, for each of the test cases considered. The shaded grey region shows the worst case PSF error we might expect in real MWA observation, calculated based on the typical number of malfunctioning antennas in an MWA observation [48]. This analysis suggests, then, that using only the full 128-antenna PSF in the training is sufficient to yield ∼5% uncertainties in xHI even if the true PSF differs because of real-world effects in the instrument. We also note that our CNN typically only recovers xHI with ∼4% error even when the PSF is modelled perfectly, suggesting that the model itself is the limiting factor at these low levels of PSF error. In addition, we also calculate the standard deviation of the normalized residuals, ξ = (xHI,ori-xHI,pred)/xHI,ori corresponding to each test set containing 200 samples, and plot them as error bars [4]. The standard deviation of the normalized residuals are calculated as, q 1 M P (ξ -ξ)2. These give us a measure of the possible deviation from the true values of the parameters within the same test set, demonstrating how the positions of the antennae removed also is reflected in the variations of the PSF and are plotted as the error bars in Fig.4. 5 Discussions Our results demonstrate that, within the range of PSF variations considered here, a CNN trained on dirty images convolved with the default PSF can still recover the global neutral hydrogen fraction, xHI, with high accuracy. While the presence of PSF deviations does introduce a measurable increase in prediction error, these deviations which arise from randomly removing subsets of antennas in the array, remain within acceptable limits. Overall, this indicates that for realistic, modest PSF errors, training exclusively on unperturbed PSFs is sufficient for reliable inference of xHI. However, the variation in the MAE % error reflects not only the sensitivity of the predictions to the overall strength of the PSF perturbations, but also the fact that different random subsets of removed antennas produce distinct PSFs. This randomness in array configuration leads to a spread in prediction performance, which is captured in the error bars. This highlights the sensitivity of ML-based inference pipelines to PSF-induced distortions, even when the PSF perturbations are relatively simple and physically plausible. The approach presented here can be viewed as a first controlled step toward this broader goal, serving as a testbed for developing error-aware and uncertainty-aware inference models. This is a very simple, proof-of-concept demonstration of the ability of an ML based pipeline to directly estimate physics parameters from mock-observations. The variations introduced in - 11 - this work are deliberately simplistic: random antenna removals simulate a subset of realworld effects such as antenna flagging or hardware failure. These assumptions are sufficient to demonstrate the susceptibility to error, of ML models trained under idealized assumptions, underscoring the need to incorporate more realistic instrumental systematics during training. We do not consider other complex factors that might introduce errors in the modelling of the PSF, and thereby introduce artefacts in the observed images. 6 Roadmap for the future Most of the inference pipelines in EoR experiments do not explicitly account for the errors introduced by time and frequency-dependent instrumental characteristics, which distort real interferometric observations. In this work, we have considered the PSF at a single frequency only, focusing on 2D image slices, which represent only a simplified subset of the information available in 21-cm observations. While this has provided a useful first step for exploring the effect of PSF variability on CNN-based inference, a critical next stage is to extend the framework to include the full frequency (or redshift) axis. Doing so will allow the models to capture the rich temporal and spectral evolution of the 21-cm signal, which is indispensable for extracting astrophysical parameters in practice from interferometric observations. Equally important will be the systematic incorporation of noise and calibration uncertainties. Thermal noise, ionospheric distortions, and residual calibration errors all imprint characteristic signatures in the data that could strongly influence inference pipelines. A natural progression from this work would therefore involve training and testing models in the presence of these additional systematics, moving incrementally toward a realistic observational regime. A further challenge is the treatment of foregrounds, which are many orders of magnitude brighter than the cosmological 21-cm signal. Integrating foreground into the ML framework, either as contaminants to be strategically modelled and subtracted or as part of a joint inference strategy, will be an essential milestone. Finally, bridging the gap between idealized training and real data will require moving from flat-sky image slices to curved-sky representations, such as HEALPix maps. Incorporating these into end-to-end pipelines would enable the community to test ML inference under conditions that more closely approximate the complexities of actual experiments like the MWA and, eventually, the SKA. Stitching together all these components-frequency dependence, noise and calibration effects, foregrounds, and curved-sky geometries-will lay the foundation for robust, error-aware machine-learning pipelines capable of delivering reliable astrophysical constraints from future 21-cm surveys. In the series of papers that will follow, our primary goal will be to develop ML pipelines that can ingest realistic dirty images-including direction-dependent effects, beam chromaticity, calibration errors, and other systematics-and are able to produce reliable astrophysical predictions. Acknowledgments MC is supported by a CFPU Postdoctoral fellowship. Part of this research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University. - 12 - References [1] G. Mellema, L.V.E. Koopmans, F.A. Abdalla, G. Bernardi, B. Ciardi, S. Daiboo et al., Reionization and the Cosmic Dawn with the Square Kilometre Array, Experimental Astronomy 36 (2013) 235 [1210.0197]. [2] L. Koopmans, J. Pritchard, G. Mellema, J. Aguirre, K. Ahn, R. Barkana et al., The Cosmic Dawn and Epoch of Reionisation with SKA, Advancing Astrophysics with the Square Kilometre Array (AASKA14) (2015) 1 [1505.07568]. [3] HERA collaboration, Hydrogen Epoch of Reionization Array (HERA), pasp 129 (2017) 045001 [1606.07473]. [4] MWA collaboration, The Murchison Widefield Array: The Square Kilometre Array Precursor at Low Radio Frequencies, pasa 30 (2013) e007 [1206.6945]. [5] LOFAR collaboration, LOFAR: The LOw-Frequency ARray, aap 556 (2013) A2 [1305.3550]. [6] A.H. Patil, S. Yatawatta, L.V.E. Koopmans, A.G. de Bruyn, M.A. Brentjens, S. Zaroubi et al., Upper Limits on the 21 cm Epoch of Reionization Power Spectrum from One Night with LOFAR, ApJ 838 (2017) 65 [1702.08679]. [7] N. Barry, A.P. Beardsley, R. Byrne, B. Hazelton, M.F. Morales, J.C. Pober et al., The FHD/∈ppsilon Epoch of Reionisation power spectrum pipeline, PASA 36 (2019) e026 [1901.02980]. [8] W. Li, J.C. Pober, N. Barry, B.J. Hazelton, M.F. Morales, C.M. Trott et al., First Season MWA Phase II Epoch of Reionization Power Spectrum Results at Redshift 7, ApJ 887 (2019) 141 [1911.10216]. [9] C.M. Trott, C.H. Jordan, S. Midgley, N. Barry, B. Greig, B. Pindor et al., Deep multiredshift limits on Epoch of Reionization 21 cm power spectra from four seasons of Murchison Widefield Array observations, MNRAS 493 (2020) 4711 [2002.02575]. [10] F.G. Mertens, M. Mevius, L.V.E. Koopmans, A.R. Offringa, G. Mellema, S. Zaroubi et al., Improved upper limits on the 21 cm signal power spectrum of neutral hydrogen at z ≈9.1 from LOFAR, MNRAS 493 (2020) 1662 [2002.07196]. [11] HERA Collaboration, Z. Abdurashidova, J.E. Aguirre, P. Alexander, Z. Ali, Y. Balfour et al., HERA Phase I Limits on the Cosmic 21-cm Signal: Constraints on Astrophysics and Cosmology During the Epoch of Reionization, arXiv e-prints (2021) . [12] HERA Collaboration, Z. Abdurashidova, T. Adams, J.E. Aguirre, P. Alexander, Z.S. Ali et al., Improved Constraints on the 21 cm EoR Power Spectrum and the X-Ray Heating of the IGM with HERA Phase I Observations, ApJ 945 (2023) 124. [13] F.G. Mertens, M. Mevius, L.V.E. Koopmans, A.R. Offringa, S. Zaroubi, A. Acharya et al., Deeper multi-redshift upper limits on the Epoch of Reionization 21-cm signal power spectrum from LOFAR between z=8.3 and z=10.1, arXiv e-prints (2025) . [14] M.W. Eastwood, M.M. Anderson, R.M. Monroe, G. Hallinan, B.R. Barsdell, S.A. Bourke et al., The Radio Sky at Meter Wavelengths: m-Mode Analysis Imaging with the Owens Valley Long Wavelength Array, ArXiv e-prints (2017) [1711.00466]. [15] B.K. Gehlot, F.G. Mertens, L.V.E. Koopmans, M.A. Brentjens, S. Zaroubi, B. Ciardi et al., The first power spectrum limit on the 21-cm signal of neutral hydrogen during the Cosmic Dawn at z = 20-25 from LOFAR, MNRAS 488 (2019) 4271 [1809.06661]. [16] B.K. Gehlot, F.G. Mertens, L.V.E. Koopmans, A.R. Offringa, A. Shulevski, M. Mevius et al., The AARTFAAC Cosmic Explorer: observations of the 21-cm power spectrum in the EDGES - 13 - absorption trough, Monthly Notices of the Royal Astronomical Society 499 (2020) 4158 [https://academic.oup.com/mnras/article-pdf/499/3/4158/34068575/staa3093.pdf]. [17] S. Yoshiura, B. Pindor, J.L.B. Line, N. Barry, C.M. Trott, A. Beardsley et al., A new MWA limit on the 21 cm power spectrum at redshifts ~13-17, Monthly Notices of the Royal Astronomical Society 505 (2021) 4775 [https://academic.oup.com/mnras/article-pdf/505/4/4775/38828418/stab1560.pdf]. [18] S. Munshi, F.G. Mertens, L.V.E. Koopmans, A.R. Offringa, B. Semelin, D. Aubert et al., First upper limits on the 21 cm signal power spectrum from cosmic dawn from one night of observations with NenuFAR, A&A 681 (2024) A62 [2311.05364]. [19] A. Cohen, A. Fialkov, R. Barkana and R.A. Monsalve, Emulating the global 21-cm signal from Cosmic Dawn and Reionization, MNRAS 495 (2020) 4845 [1910.06274]. [20] C.J. Schmit and J.R. Pritchard, Emulation of reionization simulations for Bayesian inference of astrophysics parameters using neural networks, MNRAS 475 (2018) 1213 [1708.00011]. [21] S. Hassan, A. Liu, S. Kohn and P. La Plante, Identifying reionization sources from 21 cm maps using Convolutional Neural Networks, MNRAS 483 (2019) 2524 [1807.03317]. [22] P. La Plante and M. Ntampaka, Machine Learning Applied to the Reionization History of the Universe in the 21 cm Signal, ApJ 880 (2019) 110 [1810.08211]. [23] J. Chardin, G. Uhlrich, D. Aubert, N. Deparis, N. Gillet, P. Ocvirk et al., A deep learning model to emulate simulations of cosmic reionization, MNRAS 490 (2019) 1055 [1905.06958]. [24] W. Li, H. Xu, Z. Ma, R. Zhu, D. Hu, Z. Zhu et al., Separating the EoR signal with a convolutional denoising autoencoder: a deep-learning-based method, MNRAS 485 (2019) 2628 [1902.09278]. [25] A. Acharya, F. Mertens, B. Ciardi, R. Ghara, L.V.E. Koopmans, S.K. Giri et al., 21-cm signal from the Epoch of Reionization: a machine learning upgrade to foreground removal with Gaussian process regression, MNRAS 527 (2024) 7835 [2311.16633]. [26] H. Shimabukuro, Y. Mao and J. Tan, Estimation of H II Bubble Size Distribution from 21 cm Power Spectrum with Artificial Neural Networks, Research in Astronomy and Astrophysics 22 (2022) 035027 [2002.08238]. [27] X. Zhao, Y. Mao, C. Cheng and B.D. Wandelt, Simulation-based inference of reionization parameters from 3d tomographic 21 cm light-cone images, The Astrophysical Journal 926 (2022) 151. [28] M. Choudhury, A. Datta and S. Majumdar, Extracting the 21-cm power spectrum and the reionization parameters from mock data sets using artificial neural networks, MNRAS 512 (2022) 5010 [2112.13866]. [29] S. Zaroubi, The Epoch of Reionization, in Astrophysics and Space Science Library, T. Wiklind, B. Mobasher and V. Bromm, eds., vol. 396 of Astrophysics and Space Science Library, p. 45, 2013, DOI [1206.0267]. [30] S.R. Furlanetto, The global 21-centimeter background from high redshifts, MNRAS 371 (2006) 867 [astro-ph/0604040]. [31] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al., Planck 2018 results. VI. Cosmological parameters, ArXiv e-prints (2018) . [32] A. Mesinger, S. Furlanetto and R. Cen, 21CMFAST: a fast, seminumerical simulation of the high-redshift 21-cm signal, MNRAS 411 (2011) 955 [1003.3878]. [33] M.G. Santos, L. Ferramacho, M.B. Silva, A. Amblard and A. Cooray, Fast large volume - 14 - simulations of the 21-cm signal from the reionization and pre-reionization epochs, MNRAS 406 (2010) 2421 [0911.2219]. [34] A. Fialkov, R. Barkana, A. Pinhas and E. Visbal, Complete history of the observable 21 cm signal from the first stars during the pre-reionization era, MNRAS 437 (2014) L36 [1306.2354]. [35] A. Hutter, C.M. Trott and P. Dayal, Survey parameters for detecting 21-cm-Ly α emitter cross-correlations with the Square Kilometre Array, MNRAS 479 (2018) L129 [1806.07902]. [36] R.M. Thomas, S. Zaroubi, B. Ciardi, A.H. Pawlik, P. Labropoulos, V. Jelić et al., Fast large-scale reionization simulations, Monthly Notices of the Royal Astronomical Society 393 (2009) 32 [https://academic.oup.com/mnras/article-pdf/393/1/32/3775520/mnras0393-0032.pdf]. [37] R. Ghara, T.R. Choudhury and K.K. Datta, 21 cm signal from cosmic dawn: imprints of spin temperature fluctuations and peculiar velocities, MNRAS 447 (2015) 1806 [1406.4157]. [38] N.Y. Gnedin and A.A. Kaurov, Cosmic Reionization on Computers. II. Reionization History and Its Back-reaction on Early Galaxies, ApJ 793 (2014) 30. [39] J. Rosdahl, H. Katz, J. Blaizot, T. Kimm, L. Michel-Dansac, T. Garel et al., The SPHINX cosmological simulations of the first billion years: the impact of binary stars on reionization, MNRAS 479 (2018) 994 [1801.07259]. [40] P. Ocvirk, D. Aubert, J.G. Sorce, P.R. Shapiro, N. Deparis, T. Dawoodbhoy et al., Cosmic Dawn II (CoDa II): a new radiation-hydrodynamics simulation of the self-consistent coupling of galaxy formation and reionization, MNRAS 496 (2020) 4087 [1811.11192]. [41] R. Kannan, A. Smith, E. Garaldi, X. Shen, M. Vogelsberger, R. Pakmor et al., The THESAN project: predictions for multitracer line intensity mapping in the epoch of reionization, MNRAS 514 (2022) 3857 [2111.02411]. [42] S. Murray, B. Greig, A. Mesinger, J. Muñoz, Y. Qin, J. Park et al., 21cmFAST v3: A Python-integrated C code for generating 3D realizations of the cosmic 21cm signal., The Journal of Open Source Software 5 (2020) 2582 [2010.15121]. [43] R.B. Wayth, S.J. Tingay, C.M. Trott, D. Emrich, M. Johnston-Hollitt, B. McKinley et al., The Phase II Murchison Widefield Array: Design overview, PASA 35 (2018) 33 [1809.06466]. [44] A.E. Lanman, J.C. Pober, N.S. Kern, E. de Lera Acedo, D.R. DeBoer and N. Fagnoni, Quantifying EoR delay spectrum contamination from diffuse radio emission, MNRAS 494 (2020) 3712 [1910.10573]. [45] I.S. Sullivan, M.F. Morales, B.J. Hazelton, W. Arcus, D. Barnes, G. Bernardi et al., Fast Holographic Deconvolution: A New Technique for Precision Radio Interferometry, ApJ 759 (2012) 17 [1209.1653]. [46] D.P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, ArXiv e-prints (2014) [1412.6980]. [47] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825. [48] R.C. Joseph, C.M. Trott, R.B. Wayth and A. Nasirudin, Calibration and 21-cm power spectrum estimation in the presence of antenna beam variations, MNRAS 492 (2020) 2017 [1911.13088]. - 15 -
|
2509.16278
|
Language Modeling with Learned Meta-Tokens
Alok N. Shah1∗
Khush Gupta1∗
Keshav Ramji2∗
Pratik Chaudhari1
1University of Pennsylvania
2IBM Research AI
∗Denotes equal contribution
{alokshah, khushg, pratikc}@upenn.edu, keshav.ramji@ibm.com
September 23, 2025
Abstract
While modern Transformer-based language models (LMs) have achieved major success in multi-task
generalization, they often struggle to capture long-range dependencies within their context window. This
work introduces a novel approach using meta-tokens, special tokens injected during pre-training, along
with a dedicated meta-attention mechanism to guide LMs to use these tokens. We pre-train a language
model with a modified GPT-2 architecture equipped with meta-attention in addition to causal multi-head
attention, and study the impact of these tokens on a suite of synthetic tasks. We find that data-efficient
language model pre-training on fewer than 100B tokens utilizing meta-tokens and our meta-attention
mechanism achieves strong performance on these tasks after fine-tuning. We suggest that these gains arise
due to the meta-tokens sharpening the positional encoding. This enables them to operate as trainable,
content-based landmarks, implicitly compressing preceding context and "caching" it in the meta-token.
At inference-time, the meta-token points to relevant context, facilitating length generalization up to 2×
its context window, even after extension with YaRN. We provide further evidence of these behaviors
by visualizing model internals to study the residual stream, and assessing the compression quality by
information-theoretic analysis on the rate-distortion tradeoff. Our findings suggest that pre-training
LMs with meta-tokens offers a simple, data-efficient method to enhance long-context language modeling
performance, while introducing new insights into the nature of their behavior towards length generalization.
1
Introduction
Transformer-based language models (LMs) have showcased remarkable capabilities across diverse language
tasks (Brown et al., 2020b; Chowdhery et al., 2022; OpenAI, 2023). Nevertheless, such models suffer from an
inability to capture dependencies spanning over their entire context window. With growing adoption and
ever-expanding demands on the context over which the model can process and reason, it is vital to develop
methods that facilitate long-context adaptation and length generalization. Despite numerous architectural
remedies, including sparse attention (Beltagy et al., 2020; Zaheer et al., 2020), recurrent blocks (Hutchins
et al., 2022), and modified positional encoding (Press et al., 2021; Su et al., 2021; Chen et al., 2023), the
fundamental challenge still remains: how can models reliably access and summarize distant context in a
concise, cheap, yet expressive manner?
We propose a simple solution, by way of meta-tokens, learned tokens periodically injected into the input
sequence during pretraining, and cleverly placed during fine-tuning. Unlike conventional dummy tokens
(Goyal et al., 2024), meta-tokens are explicitly trained via a dedicated sparse attention layer, guiding the
model to condense and "cache" contextual information as an in-line storage mechanism. As a result, these
tokens act as adaptive landmarks (Mohtashami and Jaggi, 2023), summarizing preceding context segments
into compact representations. At inference time, meta-tokens provide implicit pathways to distant information,
enabling models to generalize effectively across sequences longer than those encountered during training.
1
arXiv:2509.16278v1 [cs.CL] 18 Sep 2025
We demonstrate the empirical efficacy of this approach by pre-training a 152M parameter modified GPT-2
model with meta-tokens and a sparsely activated meta-attention mechanism. Our approach not only excels
on recall-oriented synthetic tasks but also generalizes up to 2x the pretraining context window (via YaRN) —
a rare feat for decoder-only architectures trained on 100B tokens or less. We trace these gains to a subtle
mechanism: meta-tokens provably induce a sharpening effect on positional encoding, enabling the meta-token
to locate its position based on the content it stores and reducing the entropy of the attention distribution.
We present evidence that this sharpening is responsible for an anchoring effect on relevant distant tokens,
facilitating robust length generalization. Furthermore, by analyzing internal model activations and studying
the rate-distortion tradeoff, we validate that meta-tokens function as compressed representations of context.
Our contributions can be summarized as follows:
1. We introduce a simple language model pre-training scheme using meta-tokens and a meta-attention
mechanism to improve performance on a wide range of synthetic tasks.
2. We show that meta-tokens sharpen the positional encoding, enabling precise long-range attention; we
further show that length generalization improves without positional encoding.
3. The sharpening hypothesis and implicit compression behavior are supported by visualizations of model
internals and information-theoretic analysis into the rate-distortion tradeoff.
2
Preliminaries
Causal Multi-Head Attention.
Let x = {x1, x2, . . . , xT } denote an input sequence of tokens of length T,
V denote the vocabulary size of V, and E : V →Rd represent the the token embedding function mapping
each token to a d-dimensional vector. Each xt is embedded into some continuous representation where
et = E(xt) + pt, such that pt is the positional encoding for t.
In decoder-only architecture, we utilize causal self-attention to ensure that predictions for a given token
are only based on preceding tokens. The causal self-attention mechanism modifies the attention computation
by masking future positions in the attention weights. Formally:
Causal Attention(Q, K, V ) = softmax
QK⊤
√dk
+ M
where M masks future tokens, ensuring that the model can only attend to current and past tokens. If A is
the matrix of attentions scores, then
Aij =
(
softmax(Aij)
if i ≥j
0
if i < j
This masking zeros attention scores for future tokens, allowing only the relevant past tokens to influence the
current token’s representation.
Positional Encoding.
Positional encoding was introduced in Transformer pre-training to provide models
with information about the ordering of tokens. With absolute positional embeddings (APE; Vaswani et al.
(2017)), each position t in the sequence receives a vector pt, independent of its content, so tokens are
distinguished in an index-by-index manner. Given learned token-embedding lookup table E : V →Rd for
vocabulary V and hidden dimension d, and positional embedding pt = Embpos(t) for t ∈[0, T −1] and
Embpos ∈RT ×d. Each token embedding is then defined as et = E(xt) + pt; this method was used in GPT-2
and GPT-3 (Radford et al., 2019; Brown et al., 2020a).
By contrast, Rotary Position Embedding (RoPE; Su et al. (2023)) rotates each pair of embedding
dimensions by an angle proportional to position, rather than adding a separate vector per position. This
makes the difference in attention scores directly encode relative distance between embeddings. The hidden
2
vector h is split into d
2 contiguous 2-D slices, and the angle for a position t is defined as θt,i =
t
100002i/d . The
2-D rotation matrix is taken as R(θ) =
cos θ
−sin θ
sin θ
cos θ
. Then, RoPE(h)(2i:2i+1)
t
= R(θt,i)h(2i:2i+1). This
has proven successful in the Llama models (Grattafiori et al., 2024). It can be observed that RoPE reflects
the relative offset i −j, with the dot product ⟨Qi, Kj⟩introducing a new factor of cos (
i−j
100002i/d ). This is
reflected in works using relative bias (Shaw et al., 2018), which introduces a bias term as a learned function
over the i −j distance. T5 (Raffel et al., 2020) then adds this bias to ⟨Qi, Kj⟩.
3
Training Language Models with Meta-Attention
We introduce a set of M meta-tokens (denoted as m); given a context length or block size of the model, n, we
take M = kn for some constant fraction k ∈[0, 1]1. The aim of introducing these meta-tokens is to capture
or store contextual information to enhance the model’s retrieval and reasoning capabilities; attending to a
meta-token should enable implicit retrieval of the context that it stores, guiding shortcut paths over the
context window. In practice, these may be treated akin to adding a filler token to the model’s vocabulary.
The M tokens are injected into the input sequences during pre-training uniformly at random, which was
informed by two key premises. While we desire interpretability and control in applying these tokens, and
as a result, prefer distinguishability at the task level, this is challenging to do without explicitly fixing a
downstream task, impeding generality. The second consideration was in how they specifically they should
be injected. While Zelikman et al. (2024) introduced <|startofthought|> and <|endofthought|> tokens
interleaved between reasoning steps near punctuation (serving as natural break), the introduction of a
rough periodicity between tokens during pre-training could result in being trapped into local minima in
the optimization landscape. We instead chose to follow the random injection scheme, supported by the
meta-token pre-training approach outlined in Goyal et al. (2024).
We ensure that the trained model incurs no loss for predicting meta-tokens, unlike a standard token
in the vocabulary – the meta-tokens’ indices are simply shifted and removed when computing the binary
cross-entropy (BCE) loss.
Meta-Attention Mechanism.
We augment our transformer H to take P which contains the positions of
the meta-tokens. We introduce a sparse attention mechanism, called meta-attention, which selectively modifies
attention scores for the specially marked "meta-tokens" within a sequence. This allows the model to simulate
selective attention, influencing the final behavior by focusing on these meta-tokens. The underlying principles
of the desired behavior is influenced by dual cross-attention (Jiang et al., 2024), such that operations are
performed higher on the abstraction hierarchy than the feature space alone. This induces a meta-learning-like
setup over which attention on the meta-tokens is learned.
Let the indices of special "meta-tokens" be denoted by positions ∈RB×T ′, where T ′ is the number of
meta tokens in a batch. We construct a meta mask P ∈RB×T ×T to influence the attention mechanism. For
each batch element b and token positions i, j:
P[b, i, j] =
(
0
if both i and j are meta tokens (i.e., i, j ∈positions[b, :])
−∞
otherwise
The meta-attention operation is defined as:
MetaAttention(Q, K, V ) = softmax
QK⊤
√dk
+ M
+ P
V
Where M is the same causal mask as before. Here, the meta mask P allows attention to flow only among
the meta tokens in the sequence, introducing a distinct interaction compared to regular attention. This
1We take k = 0.1 in practice; balancing next-token prediction over the standard vocabulary while injecting a non-trivial
number of meta-tokens.
3
meta-attention layer selectively modifies the attention by influencing the flow of information to and from
these meta tokens, distinguishing itself from the standard causal attention.
In particular, if A is the matrix of attentions scores then
Aij =
(
softmax(Aij)
if i and j are meta tokens
−∞
otherwise
To assemble the architecture used for our model, we insert the meta-attention mechanism after the causal
masked self-attention computation, to specifically attend to the injected meta tokens, as defined above. We
provide a complete breakdown of the architecture in Appendix A.
4
Results
4.1
Model Training and Architecture
All experiments were performed with 4 NVIDIA A100 GPUs, training the meta attention transformer for
200,000 iterations or 98B tokens using Distributed Data Parallel (DDP) on the Colossal Cleaned Crawl Corpus
(C4) (Raffel et al., 2020). The configuration and hyperparameters used in our pre-training are included in
Appendix A and B. As a baseline, we also pre-train GPT-2 (124M) on C4, with identical hyperparameters.
The primary change we make from a standard GPT-2 architecture is the addition of RoPE to enable better
generalization to longer contexts and improve stability in next-token prediction tasks.
We extend our transformer model’s context window from 1024 tokens to longer sequences by training two
distinct models with context lengths of 4096 and 8192 tokens, respectively. This extension is implemented
using the YaRN method (Peng et al., 2023), which dynamically scales Rotary Positional Embeddings (RoPE)
to effectively process significantly longer sequences without compromising performance or computational
efficiency. The key parameters are detailed in Appendix C
4.2
Experimental Setup and Tasks
We design four synthetic tasks to evaluate the recall capabilities of models trained with meta-tokens. The
tasks are List Recall, Segment Counting, Parity, and Copying. For each task, we define three difficulty
levels by varying the maximum sequence length. In all tasks, we insert a designated _PAUSE_ meta-token at
task-specific positions to indicate where the model should focus its meta-attention. We fine-tune on synthetic
data that we generate for each task (binned by instance length) and report the validation score on a held-out
test set. Detailed examples for each task are provided in Appendix H.
• List Recall: Given N named lists of length k, the model is prompted to recall a specific item from a
specified list. We insert a _PAUSE_ meta-token immediately following the list containing the queried
item, as well as before the final question. The expected answer is the corresponding item. Task difficulty
is scaled by varying the list length k and number of lists N.
• Segment Counting: The model is presented with several named lists, with a segment in these lists
wrapped by by _PAUSE_ meta-tokens. The prompt then asks how many times a specified item appears
between the two meta-tokens. The task difficulty changes based on the number and size of the named
lists.
• Parity: In this task, the input consists of a sequence of bits, with a _PAUSE_ meta-token indicating a
specific position in the sequence. The model is prompted to compute the XOR of all bits appearing
before the meta-token. The task difficulty changes based on the number of bits it has to XOR.
• Copying The model is given with a segment of text containing a bracketed spans marked by _PAUSE_
meta-tokens. The model is prompted to reproduce the exact content found between the meta-token-
marked boundaries. The task is designed to assess the model’s ability to extract and copy arbitrary
4
spans of text from a context, with difficulty varying according to the length and complexity of the
bracketed span. We report the sequence accuracy.
Within these four tasks, we investigate length generalization by fine-tuning our model in multiple phases.
At each phase, we assess the model’s performance on sequence lengths exceeding those seen during that
phase’s training, enabling us to evaluate its generalization to longer contexts. In addition, Appendix D
reports the performance of our models on a context length of 2048 tokens, which is twice the length seen
during pretraining (1024 tokens).
Baselines.
For a controlled comparison, we also pre-train a GPT-2 model (NanoGPT, 124M; Karpathy
(2023)) on C4, with identical hyperparameters as the meta-tokens model. Additionally, we use Eleuther AI’s
GPT-Neo-125M (Black et al., 2021) as another baseline.
4.3
Meta-Tokens Improve Performance on Synthetic Recall-Oriented Tasks.
As seen in Figure 1, we find that the models trained on meta-tokens substantially outperform our pre-trained
GPT-2 and GPT-Neo-125M baselines, across all tasks and all train lengths. The complete tables for these
results are included in Appendix F. We observe the GPT-2 model trained with APE to generally perform
poorly; however, it does achieve reasonable performance in the segment counting and parity tasks, albeit
much further behind the models with meta-attention. This suggests that training on further data could
improve its performance; this also highlights the data-efficiency of our meta-tokens models. The models also
gain in performance much more quickly with fine-tuning when increasing the train length – a phenomenon not
observed with the GPT-2 models. Our models also outperform the GPT-Neo-125M model by a substantial
margin; given that GPT-Neo was pre-trained on 300B tokens, nearly three times the volume of data on which
our meta-attention models were trained (albeit from a different corpus).
To study the effect of positional encoding on our results, we ablate by zeroing out the positional encoding,
zeroing out the text embedding, and performing both operations – all just at the meta-token indices. Curiously,
we observe in Tables 11-14 that the score without positional encoding nearly matches or exceeds the accuracy
of the model with the positional encoding as is. The lone exception is the segment counting task, where
there is a gap for all settings except the model trained with APE at a length of 256, which achieves a
+4.8% improvement over the "Full" model. By contrast, zeroing out the token embedding hurts performance
substantially in nearly every setting on List Recall, Segment Counting, and Copying; on Parity, this generally
matches the performance of zeroing out the positional encoding. Thus, we find that 1. pre-training with
meta-tokens and meta-attention boosts performance, and 2. zeroing out the positional encoding at just the
meta-tokens can match or improve performance at inference time.
5
Figure 1: We study the performance of the pre-trained GPT-2 w/ APE, Meta-attention w/ APE, and
Meta-attention w/ RoPE, as well as GPT-Neo-125M, all fine-tuned on synthetic data for their respective
tasks at the maximum train lengths indicated in the legends. All experiments are performed on a test set of
prompt lengths up to 512 tokens.
Table 1: Token Accuracy (%) on List Recall and Segment Counting across long contexts.
Task
Train/Finetune
2k
3k
4k
5k
6k
7k
8k
10k
12k
14k
16k
List
4k / 2k
19.5
16.0
13.7
0.9
0.0
0.0
0.9
1.1
0.0
2.1
1.1
4k / 4k
85.0
88.2
90.2
20.5
1.8
1.0
3.5
4.4
1.1
2.1
2.1
8k / 4k
85.0
95.8
91.2
97.4
98.2
96.2
93.9
31.9
0.0
2.1
2.1
8k / 8k
92.9
98.3
97.1
100.0
98.2
100.0
100.0
89.0
26.1
10.4
9.6
Count
4k / 2k
19.1
23.8
19.2
14.6
25.2
14.1
14.0
12.0
16.0
8.0
6.0
4k / 4k
17.5
23.8
31.8
20.3
30.4
19.3
19.1
14.0
26.0
12.0
16.0
8k / 4k
19.1
23.8
14.3
11.1
20.6
12.7
12.7
14.0
16.0
14.0
12.0
8k / 8k
27.0
33.3
15.9
19.1
27.0
19.1
23.8
22.0
18.0
18.0
18.0
Meta-Tokens Aid in Length Generalization.
In Figure 1 and Appendix F, we find that the model
trained on meta-tokens length generalizes well on the parity and copying tasks with APE, and performs
somewhat well (much better than the baselines) on list recall and segment counting at a train length of 256.
For instance, despite relatively similar performance at the 128 train length on the segment counting task,
the performance on the test set of up to a length of 512 dramatically increases when training at the 256
length, by +28.6% with APE and +10.7% with RoPE, compared to +3.5% for GPT-2 with APE. Table 1
exhibits a similar trend for the YaRN models, achieving strong performance across its respective context
windows, and even achieves non-trivial accuracy beyond the window. Fine-tuning the 8k YaRN model on
examples of up to a length of 4k can generalize very well up to 8k. These findings underscore the substantial
advantages of training with meta-tokens and the nuanced role positional encoding plays in task-specific and
length-generalization contexts.
Moreover, when looking at the results on Meta + RoPe on test set lengths of prompts up to 1024 tokens
(denoted extra-hard in Table 2), we find that zeroing out the positional encoding also plays a sizable role
in improving length generalization, especially in the List Recall task. While the model originally achieves
performances of 11.1%, 0% and 44.4% when fine-tuned on train lengths of 512 (APE), 256 and 512 (RoPE),
respectively, the scores improve by +38.9%, +22.2% and +11.2%, by simply zeroing out the positional
encoding at the meta-tokens.
6
Table 2: The configurations where zeroing the positional encoding at inference time results in accuracy
improvements on the List Pointer task, denoted by the ∆(pp) percentage points column.
Model (Split, Train Len)
Full
No Pos
∆(pp)
Meta + APE (medium, 128)
77.8%
88.9%
+11.1
Meta + APE (hard, 128)
11.1%
22.2%
+11.1
Meta + APE (extra-hard, 512)
11.1%
50.0%
+38.9
Meta + RoPE (medium, 128)
44.4%
55.6%
+11.1
Meta + RoPE (hard, 256)
33.3%
66.7%
+33.3
Meta + RoPE (extra-hard, 256)
0.0%
22.2%
+22.2
Meta + RoPE (extra-hard, 512)
44.4%
55.6%
+11.1
4.4
Examination of Positional Encoding through Internal Representations.
As discussed above, the results in Tables 11-14 suggest that the positional encoding of the meta-token can
potentially be holding back the downstream performance of the meta-attention models. We posit that the
model is instead relying on its content – cached context stored within the meta-token – to sharpen its sense
of its position in the sequence.
Next, we aim to formally define this notion of sharpness in the context of positional encoding, and its
relationship to the model’s logits. Let αi→k = softmaxk(QiKT
j + bi−j) be the attention distribution for query
i over keys j, with relative bias term bi−j. We define the sharpness of the positional encoding by the entropy:
H(αi) = −
X
j
αi→j log αi→j
Intuitively, when a meta-token is present at position t, the model’s attention becomes peaked around a
small set of keys; this "honing in" behavior reduces H(α) compared to APE or RoPE without meta-tokens.
In this manner, meta-tokens behave as content-driven landmarks – they serve as a low-entropy channel
that serves as a pointer to relevant context. As noted prior, the data efficiency observation suggests that the
meta-token helps to accelerate next-token prediction behavior while introducing a stabilizing effect in the
midst of noisy positional encoding.
Theorem 4.1. Consider a Transformer head at query position i over keys 1, . . . , N. Let αabs
i
(j) ∝exp(QiKT
j )
be the attention under absolute positional encoding and let αmeta
i
∝exp(QiKT
j + δj,j∗∆) when a meta-token
at position j∗introduces an additive logit boost of ∆> 0. Then, for some function κ(∆) > 0:
H(αmeta
i
) ≤H(αabs
i
) −κ(∆)
(1)
Proof Sketch. Parametrize the path by t ∈[0, ∆], and define logits ℓ(t)
j
and their softmax α(t) respectively.
Since boosting the true index tightens the margin, the derivative of H(α(t) is strictly negative. Therefore,
over the path, H(α(∆)) < H(α(0)), so H(αmeta) < H(αabs), where their difference must be a function of ∆.
The full proof is included in Appendix G.
We note that this theorem also applies to RoPE, using αRoPE
i
(j) ∝exp Qi(RoPE(Kj))T . A natural
consequence of Theorem 4.1 is that the meta-token operates as an "anchor" from a logits standpoint by
creating a margin ∆that concentrates the softmax. Concretely, we can specify that for meta-token mt at
position t and embedding et ∈Rd, and query at position i with vector Qi, has contribution to the (i, j) logit
of ∆(t)
i,j = Qi · Wet × 1j=t for learned linear head W. Summing over t yields the bias matrix B ∈Bmeta, the
set of all realizable bias matrices under the meta-token embeddings. Thus, any learned meta-token embedding
– provided that it adds to the logits at the summary position j∗– guarantees sharper attention by reducing
that attention head’s entropy.
7
Figure 2: (Left) We analyze the change in logits at the meta-token position, and observe that the meta-tokens
do indeed induce a sizable boost in logits compare to zeroing its token embedding. (Middle) We find the
boost in logits to correspond with a meaningful reduction in Shannon entropy over the softmax of the logits
between the zeroed meta-token sequence and the sequence with the meta-token as is. This corroborates
with our assumptions and claims in Theorem 4.1. (Right) We study the implicit "caching" ability of the
meta-token by studying the cosine similarity over the token embeddings. We observe high spikes (the yellow
column), diminishing as we move further away. This substantiates our claims of the presence of an implicit
compression and "caching" mechanism.
In Figure 2, we analyze the logits, comparing two settings: (1.) the current meta-token and (2.) the
meta-token with its token embedding zeroed out. We find that the former gains a sizable amount over the
latter, reinforcing the assumption made in Theorem 4.1 that the meta-token introduces an additive logit
boost of ∆> 0. Our empirical results show that the entropy over the softmax distribution of the logits
decreases (the difference between "non-meta-token" and "meta-token" is positive), thus corroborating our
central claim in Theorem 4.1.
4.5
Inference Efficiency
Meta-tokens are generated at inference. The additional computation is sparse—each attention head only
considers a small number of meta positions rather than the full attention matrix. In our current PyTorch
implementation, which materializes the sparse mask as a dense tensor, we observe a throughput drop from
130.82 to 117.86 tokens/sec and a TTFT increase from 7.44ms to 7.57ms, i.e., a 1.11× slowdown. We expect
optimized sparse attention implementations to reduce or eliminate this overhead.
Table 3: Inference speed comparison with and without meta/pause tokens.
Metric
No meta/pause tokens
With meta/pause tokens
TPS (tokens/sec)
130.82
117.86
TTFT (ms)
7.44
7.57
Slowdown factor
1.00
1.11
5
Examining Context Compression with Rate-Distortion Theory
Given that these results provide evidence that meta-tokens can compress context in their representation, we
develop mathematical formalizations to analyze this behavior. In particular, we turn to information-theoretic
tools – specifically, an information bottleneck view.
For a meta-token at xm succeeding a sequence of tokens X = xi:m−1 from indices i to m −1, we
consider a compression function ζ(·) which transforms the subsequence X into xm. As such, we define
8
ˆX = ζ(X) = ζ(xi:m−1) to be the compressed representation stored in xm. This can be generalized to the full
set of M meta-tokens:
ˆX1:M = [ζ1(X1:m1−1), ζ2(Xm1+1:m2−1), . . . ζM(mM+1 : mn)]
For practicality, we consider the variational information bottleneck (Alemi et al., 2017). This introduces
an encoder qϕ(ˆx | x) and decoder qθ(y | ˆx), along with a simple prior r(z) (e.g. N(0, 1)), yielding the following
form to solve for these variational distributions:
min
qϕ,qθ
E
p(x,y)[
E
qϕ(ˆx|x)[−log qθ(y | ˆx]] + β · E
p(x)[KL(qϕ(ˆx | x)||r(x))]
This form admits an equivalent perspective in rate-distortion theory. Specifically, the first term measures
the quality in predicting the downstream target given a lossy compression ˆX ("distortion"). The second
term measures the average number of bits required to encode ˆX, relative to some simple reference code r(z)
("rate"). As such, analyzing rate-distortion curves – sweeping over values of β – can provide valuable insights
into the quality of the "compression" behavior and its informativeness when the meta-token is attended to.
Theorem 5.1. Let Dabs(R) be the minimum distortion achievable at rate R under the VIB objective only
using absolute positional encoding (no meta-tokens), and let Dmeta(R) be the minimum distortion achievable
at rate R with meta-tokens. Then, for every R ≥0,
Dmeta(R) ≤Dabs(R)
(2)
Intuitively, meta-tokens expand the feasible set of encoders and decoders, which will either match or lower
distortion for a given rate. Thus, the quality of compression with respect to its informativeness in predicting
the target can only improve.
5.1
Rate-Distortion Informs the Quality of Context Caching
To obtain empirical rate–distortion curves for our meta-token bottleneck in Figure 3, we freeze the pre-trained
meta-token model and fix a small variational bottleneck head to the last meta-token hidden state. Concretely,
let hm ∈RD be the output of the final Transformer layer at the last meta-token position. We introduce
qϕ(z | hm) = N
µϕ(hm), diag(σ2
ϕ(hm))
,
qθ(y | z) = softmax(Wz + b),
with µϕ, σϕ : RD →RL and W ∈R|V|×L. We then optimize the ELBO:
min
ϕ,θ Ehm,y
−log qθ(y | z)
+ β Ehm
KL
qϕ(z | hm) ∥N(0, I)
.
Training is performed on the small List-Pointer D.1.1 split (50 examples, batch size 1), for 5 epochs at each
β ∈{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0}. After each run, we record the average cross-entropy loss (“distortion”)
and KL (“rate”) on the same 50 examples. Finally, we plot the resulting rate–distortion curves on a symlog
x-axis (linear below 20 nats, logarithmic above) so that both the low-rate “knee” and the long tail are visible
(see Figure 3).
5.2
Positional Encoding as a Source of Bottleneck Distortion
The consistent improvements in recall fidelity and length generalization from zeroing positional encodings at
meta-token positions 2 are well-explained through through our rate-distortion analysis.
Prior work (e.g., NoPE; Kazemnejad et al., 2023) shows that Transformers can infer token order from the
causal mask alone, often generalizing better when explicit positional embeddings (PEs) are removed. Our
results extend this insight: in Theorem 5.1, we formalize meta-tokens as learned compression bottlenecks.
If a meta-token retains its PE (absolute or rotary), a portion of its representational capacity is spent
encoding position rather than semantic content. This index-dependent signal introduces unnecessary variance,
increasing the distortion of the compressed summary.
By contrast, zeroing out the PE forces the full
embedding capacity to encode task-relevant information. As a result, we observe lower distortion (higher
retrieval accuracy) at a given rate—both theoretically and empirically—across all four synthetic tasks.
9
Figure 3: (Left) This plot visualizes the residual stream after each layer, to analyze the meta-token within
causal attention. The colors before the meta-token (the colored band across the layers) denote the context
which the meta-token attends to and implicitly stores, and the final, rightmost colored line represents the
final meta-token in the sequence, which attends to the previous one at the aforementioned band. (Right)
We analyze the variational information bottleneck (VIB) objective and its decomposition into its rate and
distortion components. Supporting the findings of Theorem 5.1, for a given rate R, the distortion D is strictly
lower for the meta-token compared to the last non-meta-token element in the sequence.
6
Related Work
Pause and Memory Tokens
As detailed in our work, recent studies on Transformer-based models have
explored the introduction of special tokens, beyond ordinary vocabulary symbols. Pause or dummy tokens
as introduced in Goyal et al. (2024) enhance computational width, allowing models to perform additional
internal computation by effectively delaying their outputs. This yields empirical gains on question answering
and reasoning-intensive tasks. Similarly, Pfau et al. (2024) explore using filler tokens – sequences of seemingly
meaningless symbols – as a stand-in for chain-of-thought. These special tokens may also delineate phases
of reasoning, as in Quiet-STaR Zelikman et al. (2024). Quiet-STaR uses a begin-of-thought token and an
end-of-thought token, generating a silent rationale sequence for each step before emitting the next word,
showing that this helps zero-shot reasoning.
Works such as Memory Transformer (Burtsev et al., 2021) and Landmark Attention (Mohtashami and
Jaggi, 2023) introduce memory tokens; the former prepends them, while the latter uses them as learnable
keys for retrieval over blocks of context. Our work is most closely related to the latter, while performing this
retrieval in a purely implicit manner via the observed "pointer" mechanism. For vision transformers (ViTs),
LeMeVit (Jiang et al., 2024) introduces a similar meta-tokens notion as our work by adding learnable sparse
tokens and an attention mechanism between standard tokens and their meta tokens, improving performance
and reducing spatial redundancy. Darcet et al. (2024) uses specialized "register" tokens applies to patches to
denoise images by extracting the high-norm, outlier tokens, smoothening the feature and attention maps.
These works suggest that special tokens, even devoid of semantic content, can influence a model’s internal
reasoning and memory mechanisms.
Positional Encoding
We have already described absolute positional embeddings (APE), rotary positional
embeddings (RoPE) and relative bias in Section 2. In addition to these methods, ALiBi (Press et al., 2022)
10
adds a fixed linear penalty to attention scores based on the distance between query and key positions,
favoring nearer tokens and generalizing to longer contexts with minimal loss in perplexity. Recent work has
suggested that Transformers without any added position embeddings can still learn order information and,
in some cases, generalize to longer sequences better than models with standard positional encoding. NoPE
(Kazemnejad et al., 2023) showed that models trained without positional embeddings can achieve strong
length extrapolation in comparison to models trained with positional encoding. They can internally represent
both absolute and relative PEs without any explicit positional signal, suggesting these may emerge implicitly
via training dynamics or over the data distribution. NoPos (Haviv et al., 2022) also found a similar result,
suggesting that models trained without PE can infer their absolute position due to causal attention masks.
These findings are highly relevant to our work, given our evidence on length generalization behavior whiling
zeroing the positional encoding at the meta-tokens.
7
Discussion and Limitations
Our findings suggest that decoder-only language models trained with meta-tokens and meta-attention achieve
strong performance on recall-oriented tasks. Furthermore, they are able to length generalize, with performance
improvements when removing the effect of positional encoding at the meta-tokens. Given the prior findings
of NoPos, we believe the introduction of the meta-attention mechanism and a second causal mask (the "meta
mask") could be responsible for this behavior, provided that this behavior is specific to the meta-tokens. We
suggest that hybrid attention methods such as RNoPE (Yang et al., 2025) could be suitable for facilitating
long-context modeling with meta-tokens.
Given the findings that the meta-tokens operate like anchors within the context, it would be valuable to
explore the impact of our proposed mechanism in pre-training larger models over longer context windows,
under greater computational resources. We employ synthetic tasks that are well-aligned to recall abilities, and
design experiments to test length generalization, with the aim of strong synergy with long-context modeling
capabilities. Nonetheless, training larger models would indicate the viability of our approach for real-world
deployment.
Notably, our method requires little overhead – the addition of meta-tokens is a simple data augmentation
strategy, and the meta-attention layer is added after standard causal masked self-attention, as described in
Appendix A. It would also be informative to study larger-scale corpora – given the data-efficient nature of
the meta-tokens approach in vastly outperforming the vanilla GPT-2 model at the ≈100B tokens scale, how
rapidly does each model saturate our designed synthetic tasks?
8
Conclusion
We introduce meta-tokens in language model pre-training, in addition to a dedicated meta-attention mechanism
which learns the relationship between standard tokens and meta-tokens.
We find that this improves
performance on a suite of synthetic recall tasks, and enables length generalization behavior when removing
the positional encoding at each meta-token. We provide evidence to suggest that the meta-tokens sharpen
the positional encoding, enabling them to operate as content-based landmarks in the context; we further
show that they implicitly compress preceding context, demonstrated by similar token embeddings. These
interesting phenomena demonstrate the promise of long-context language modeling enabled via data-efficient
pre-training using meta-tokens.
References
A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep variational information bottleneck. In International
Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=HyxQzBceg.
11
I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. arXiv preprint
arXiv:2004.05150, 2020.
S. Black, G. Leo, P. Wang, C. Leahy, and S. Biderman. GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow, Mar. 2021. URL https://doi.org/10.5281/zenodo.5297715. If you
use this software, please cite it using these metadata.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry,
A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu,
C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish,
A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle,
M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing
Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020a.
URL https://proceedings.
neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
T. B. Brown, B. Mann, N. Ryder, et al. Language models are few-shot learners. NeurIPS, 2020b.
M. S. Burtsev, Y. Kuratov, A. Peganov, and G. V. Sapunov. Memory transformer, 2021. URL https:
//arxiv.org/abs/2006.11527.
T. D. Chen et al. Flashattention-2: Faster attention with better parallelism and work partitioning. arXiv
preprint arXiv:2307.08691, 2023.
A. Chowdhery, S. Narang, J. Devlin, et al. Palm: Scaling language modeling with pathways. arXiv preprint
arXiv:2204.02311, 2022.
T. M. Cover and J. A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and
Signal Processing). Wiley-Interscience, USA, 2006. ISBN 0471241954.
T. Darcet, M. Oquab, J. Mairal, and P. Bojanowski. Vision transformers need registers. In The Twelfth
International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=
2dnO3LLiJ1.
S. Goyal, Z. Ji, A. S. Rawat, et al. Think before you speak: Training language models with pause tokens. In
ICLR, 2024.
A. Grattafiori et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
A. Haviv, O. Ram, O. Press, P. Izsak, and O. Levy. Transformer language models without positional encodings
still learn positional information. In Y. Goldberg, Z. Kozareva, and Y. Zhang, editors, Findings of the
Association for Computational Linguistics: EMNLP 2022, pages 1382–1390, Abu Dhabi, United Arab
Emirates, Dec. 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.99.
URL https://aclanthology.org/2022.findings-emnlp.99/.
D. Hutchins et al. Block-recurrent transformers. In NeurIPS, 2022.
W. Jiang, J. Zhang, D. Wang, Q. Zhang, Z. Wang, and B. Du. Lemevit: Efficient vision transformer
with learnable meta tokens for remote sensing image interpretation. In K. Larson, editor, Proceedings
of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, pages 929–937.
International Joint Conferences on Artificial Intelligence Organization, 8 2024. doi: 10.24963/ijcai.2024/103.
URL https://doi.org/10.24963/ijcai.2024/103. Main Track.
A. Karpathy. nanoGPT. https://github.com/karpathy/nanoGPT, 2023. Accessed: 2025-05-16.
A. Kazemnejad, I. Padhi, K. Natesan, P. Das, and S. Reddy. The impact of positional encoding on length
generalization in transformers. In Thirty-seventh Conference on Neural Information Processing Systems,
2023. URL https://openreview.net/forum?id=Drrl2gcjzl.
12
A. W. Marshall, I. Olkin, and B. C. Arnold. Inequalities: Theory of Majorization and Its Applications.
Springer Series in Statistics. Springer New York, New York, NY, 2 edition, 2011. ISBN 978-0-387-68276-1.
doi: 10.1007/978-0-387-68276-1.
A. Mohtashami and M. Jaggi. Random-access infinite context length for transformers. In Thirty-seventh
Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=
7eHn64wOVy.
OpenAI. Gpt-4 technical report. https://openai.com/research/gpt-4, 2023.
B. Peng, J. Quesnelle, H. Fan, and E. Shippole. Yarn: Efficient context window extension of large language
models, 2023. URL https://arxiv.org/abs/2309.00071.
J. Pfau, W. Merrill, and S. R. Bowman. Let’s think dot by dot: Hidden computation in transformer language
models. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=
NikbrdtYvG.
O. Press, N. A. Smith, and O. Levy.
Alibi: Trainable linear biases for transformers.
arXiv preprint
arXiv:2108.12409, 2021.
O. Press, N. A. Smith, and M. Lewis. Train short, test long: Attention with linear biases enables input length
extrapolation, 2022. URL https://arxiv.org/abs/2108.12409.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever.
Language models are un-
supervised multitask learners.
OpenAI Technical Report, 2019.
URL https://cdn.openai.com/
better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Accessed:
2025-05-15.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring
the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1), Jan.
2020. ISSN 1532-4435.
P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations. In M. Walker,
H. Ji, and A. Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers),
pages 464–468, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
doi:
10.18653/v1/N18-2074. URL https://aclanthology.org/N18-2074/.
J. Su, Y. Lu, S. Pan, et al. Roformer: Enhanced transformer with rotary position embedding. In ACL, 2021.
J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. Roformer: Enhanced transformer with rotary
position embedding, 2023. URL https://arxiv.org/abs/2104.09864.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polo-
sukhin.
Attention is all you need.
In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus,
S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, vol-
ume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/
2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
B. Yang, B. Venkitesh, D. Talupuru, H. Lin, D. Cairuz, P. Blunsom, and A. Locatelli. Rope to nope and
back again: A new hybrid attention strategy, 2025. URL https://arxiv.org/abs/2501.18795.
M. Zaheer, G. Guruganesh, A. Dubey, et al. Big bird: Transformers for longer sequences. In NeurIPS, 2020.
E. Zelikman, G. R. Harik, Y. Shao, V. Jayasiri, N. Haber, and N. Goodman. Quiet-STar: Language models
can teach themselves to think before speaking. In First Conference on Language Modeling, 2024. URL
https://openreview.net/forum?id=oRXPiSOGH9.
13
A
Full Architecture Details
We provide a full outline of the architecture design out method uses. Our architecture is equivalent to the
NanoGPT (GPT-2) architecture, while introducing the meta-attention block after the initial causal masked
attention and layer normalization computation.
1. Input Layer: Given an input sequence of tokens x = {x1, x2, . . . , xT }, we first embed each token
into a continuous representation. Instead of absolute positional encodings, we apply Rotary Position
Embeddings (RoPE) Su et al. (2023) to inject positional information. For each token, the embedded
representation is:
et = RoPE(E(xt), t),
where RoPE(·, t) denotes the rotary positional embedding applied to the tth position, with a base
θ = 10000.0.
2. Causal Masked Self-Attention: The first layer consists of the causal masked self-attention mechanism.
For each head h, the attention operation is computed as:
CausalAttentionh(Q, K, V ) = softmax
QK⊤
h
√dk
+ M
Vh,
where Q, K, V are the query, key, and value matrices derived from the input embeddings E, and M is
the mask matrix.
3. Meta Attention Layer: After the causal masked self-attention, we integrate the meta-attention
mechanism to specifically attend to the injected meta tokens. This operation is defined as:
MetaAttention(Q, K, V, P) = softmax
QK⊤
√dk
+ Mcausal + P
V,
where P is the meta mask constructed from the indices of the meta tokens.
4. Feedforward Layer: Following the attention layers, we pass the output through a feedforward neural
network defined by:
FFN(x) = ReLU(xW1 + b1)W2 + b2,
where W1, W2 are weight matrices, and b1, b2 are bias vectors.
5. Layer Normalization: After both the causal self-attention and meta-attention operations, we apply
layer normalization:
LayerNorm(x) = x −µ
σ + ϵ ,
where µ and σ are the mean and standard deviation of the features, and ϵ is a small constant for
numerical stability.
6. Final Output Layer: The final layer projects the output of the last feedforward layer back to the
vocabulary size to produce the s for the next token prediction:
s = softmax(xWout + bout),
where Wout and bout are the output weight matrix and bias vector, respectively.
14
B
Pre-training Hyperparameters and Model Details
Our decoder-only modified GPT-2 model was pre-trained on the C4 dataset with the following configuration
and hyperparameters:
Table 4: Pretraining Configuration Parameters
Parameter
Value
Batch Size
12
Gradient Accumulation Steps
40
Block Size
1024
Number of Layers
12
Number of Heads
12
Embedding Size
768
Learning Rate
6e-4
Weight Decay
1e-1
Max Iterations
600,000
Warmup Iterations
2,000
Minimum Learning Rate
6e-5
Dropout Rate
0.0
RoPE Theta
10000.0
Initial Model
Resume
Optimizer
AdamW
AdamW Beta1
0.90
AdamW Beta2
0.95
Gradient Clipping
1.0
Tokenizer
tiktoken
C
YaRN Hyperparameters
Parameter
4096-token model
8192-token model
yarn_scale
4.0
8.0
yarn_original_max_seq_len
1024
yarn_extrapolation_factor
1.0
yarn_attn_factor
1.0
yarn_beta_fast
32.0
yarn_beta_slow
1.0
Table 5: YaRN parameter configurations for extended context models.
D
Additional Experimental Details
D.1
Synthetic Data Generation
We generate 90,000 train examples and held-out test set of 10,000 examples for each task.
15
D.1.1
List Recall
We generate a suite of “list-pointer” examples by sampling random categories and list items, inserting a special
meta token as a marker, and asking the model to recover the item immediately following the meta-token.
Each example consists of:
1. m categories drawn without replacement from a fixed set of 20.
2. n items per category, sampled with replacement from the category’s 10–item inventory
3. One “target” category in which we inject a single meta token after the jth item (j ∈[n]) and then
append the remaining items
4. A question line “Q: What is item j of <target>? _META_”
This pipeline yields curriculum-structured data that systematically probes the model’s ability to attend
to and copy items in long, multi-list contexts.
Phase
m (Num. categories)
n (List length)
Approx. prompt-token range
1
Uniform 3–8
Uniform 3–10
Short (≈100–200 tokens)
2
Uniform 8–12
Uniform 3–16 (bimodal)
Mid-range (≈200–300 tokens)
3
Uniform 12–19
Mixture {3–8, 9–16, 17–25}
Full-range (≈500–700 tokens)
4
Uniform 15–20
Uniform 40–60
“Extra-hard” ≤1024 tokens
5
Uniform 15–20
Uniform 90–110
“Long” ≤2048 tokens
Table 6: Curriculum schedule for synthetic data.
D.1.2
Segment Counting
Similar to List-pointer, except the model must count occurrences of a target token within a meta-token
bracketed list segment. Uses the schedule dictated by Table D.1.1. Asks the question: "Q: How many times
does <token> appear between the pauses around <Category>? _META_".
D.1.3
Parity
Generates examples where the model computes the XOR (parity) of a bit-string segment up to the first L
characters where L is drawn phase-dependently. the same scheduling dictated by Table D.1.1. Asks the
question: "Q: What is the XOR of all bits before this pause? _META_ "
D.1.4
Copying
Generates examples where the model must copy a bracketed span from a text. Uses schedule dictated by
Table D.1.1 and samples an additional copy length C and distance length D depending on the phase
E
Licenses
nanoGPT
Our implementation of the vanilla GPT-2 is based on the nanoGPT repository (https://github.com/
karpathy/nanoGPT), which is licensed under the MIT License.
16
EleutherAI GPT-Neo-125M
We directly use the EleutherAI GPT-Neo 125M model checkpoint and weights, available via the Hugging
Face Model Hub at https://huggingface.co/EleutherAI/gpt-neo-125m. This model is released under
the MIT License.
C4 Dataset
Our model was trained on the C4 dataset (https://huggingface.co/datasets/allenai/c4), which is
provided under the Open Data Commons Attribution License (ODC-BY).
tiktoken
We use the tiktoken library from OpenAI for tokenization (https://github.com/openai/tiktoken), which
is released under the MIT License.
17
F
Complete Experimental Results
F.1
Synthetic Task Accuracies Across Test Lengths
We stress test RoPE models at a sequence length of 2048—twice the pretraining block size of 1024—as
relative position embeddings naturally support extrapolation beyond the training context window. In contrast,
absolute positional encodings (APE) cannot generalize to sequences longer than those seen during pretraining.
Table 7: Accuracy (%) across evaluation lengths for each model train on List Recall
Model (Train Length)
128
256
512
1024
2048
GPT-2 APE (128)
4.2
1.2
0.0
0.0
—
GPT-2 APE (256)
6.8
2.4
0.0
0.0
—
GPT-2 APE (512)
19.8
9.5
3.6
0.0
—
Meta + APE (128)
100.0
86.4
12.0
4.1
—
Meta + APE (256)
100.0
98.6
42.6
3.9
—
Meta + APE (512)
100.0
100.0
98.7
11.1
—
Meta + RoPE (128)
100.0
60.7
5.9
0.0
0.0
Meta + RoPE (256)
100.0
100.0
48.6
23.5
0.0
Meta + RoPE (512)
100.0
100.0
99.3
58.9
5.6
GPT-Neo-125M
85.6
86.0
81.2
—
—
Table 8: Accuracy (%) across evaluation lengths for each model trained on Segment Counting. Each model is
evaluated on longer contexts than seen during training.
Model (Train Length)
128
256
512
1024
2048
GPT-2 APE (128)
32.1
27.4
20.2
0.0
—
GPT-2 APE (256)
40.3
56.2
23.7
0.0
—
GPT-2 APE (512)
30.1
32.1
25.0
0.0
—
Meta + APE (128)
77.4
55.9
25.0
11.1
—
Meta + APE (256)
83.3
77.4
53.6
22.4
—
Meta + APE (512)
91.7
79.8
80.9
33.3
—
Meta + RoPE (128)
77.4
64.3
25.0
22.7
0.0
Meta + RoPE (256)
64.3
64.3
35.7
33.3
0.0
Meta + RoPE (512)
90.9
91.4
95.3
66.7
11.1
GPT-Neo-125M
31.4
25.9
24.9
—
—
18
Table 9: Accuracy (%) across evaluation lengths for each model train on Parity
Model (Train Length)
128
256
512
1024
2048
GPT-2 APE (128)
75.0
56.0
53.4
45.2
—
GPT-2 APE (256)
75.0
67.0
60.7
46.2
—
GPT-2 APE (512)
75.0
54.8
60.0
40.5
—
Meta + APE (128)
100.0
75.0
67.9
52.4
—
Meta + APE (256)
100.0
97.6
96.4
69.1
—
Meta + APE (512)
100.0
100.0
100.0
86.7
—
Meta + RoPE (128)
100.0
66.7
76.2
59.5
44.1
Meta + RoPE (256)
97.6
100.0
96.4
61.9
52.4
Meta + RoPE (512)
100.0
100.0
100.0
69.1
63.1
GPT-Neo-125M
80.4
59.1
54.8
—
—
Table 10: Accuracy (%) across evaluation lengths for each model trained on Copying
Model (Train Length)
128
256
512
1024
2048
GPT-2 APE (128)
6.0
5.3
3.0
0.0
—
GPT-2 APE (256)
6.8
6.0
5.7
0.0
—
GPT-2 APE (512)
3.8
4.8
7.8
0.0
—
Meta + APE (128)
100.0
66.7
76.2
2.6
—
Meta + APE (256)
100.0
100.0
96.4
7.9
—
Meta + APE (512)
100.0
100.0
98.5
87.4
—
Meta + RoPE (128)
96.6
73.0
5.2
0.0
0.0
Meta + RoPE (256)
98.2
100.0
23.6
9.3
3.2
Meta + RoPE (512)
99.0
98.9
98.9
89.4
11.8
GPT-Neo-125M
31.5
22.7
16.9
—
—
F.2
Ablations on Positional Encoding and Token Embedding
Table 11: Accuracy (%) on the List-Recall task under different ablations: zeroing the positional encoding (No
Pos), zeroing the text embeddings (No Embed), or zeroing both of the meta-tokens.
Model (PE)
Full
No Pos
No Embed
Neither
Meta + APE (128)
100.0
99.3
17.4
59.7
Meta + RoPE (128)
100.0
100.0
32.4
24.0
Meta + APE (256)
86.4
86.9
12.2
16.2
Meta + RoPE (256)
100.0
100.0
4.0
6.6
Meta + APE (512)
100.0
100.0
52.1
84.3
Meta + RoPE (512)
100.0
100.0
59.6
25.2
19
Table 12: Accuracy (%) on the Segment Counting task under different ablations: zeroing the positional
encoding (No Pos), text embeddings (No Embed), or both, only on the meta-token.
Model (Train Length)
Full
No Pos
No Embed
Neither
Meta + APE (128)
77.4
63.1
31.0
47.6
Meta + APE (256)
83.3
88.1
32.1
40.5
Meta + APE (512)
91.7
82.1
34.5
51.2
Meta + RoPE (128)
77.4
70.2
59.5
36.9
Meta + RoPE (256)
64.3
53.6
30.9
30.9
Meta + RoPE (512)
80.9
72.6
36.9
25.0
Table 13: Accuracy (%) on the Parity task under different ablations: zeroing the positional encoding (No
Pos), text embeddings (No Embed), or both, only on the meta-token.
Model (Train Length)
Full
No Pos
No Embed
Neither
Meta + APE (128)
100.0
100.0
100.0
100.0
Meta + APE (256)
75.0
77.4
77.4
79.8
Meta + APE (512)
67.9
71.4
72.6
66.7
Meta + RoPE (128)
100.0
97.6
100.0
100.0
Meta + RoPE (256)
66.7
66.7
73.8
66.7
Meta + RoPE (512)
76.2
75.0
75.0
64.3
F.3
Positional Encoding Robustness Ablations
Table 15: Accuracy (%) on the List Pointer task with Gaussian noise added to positional encoding.
Model (Train Length)
Noise 0.0
Noise 0.1
Noise 0.5
Noise 1.0
Noise 2.0
GPT-2 + APE (128)
4.8
1.2
2.4
2.6
3.5
GPT-2 + APE (256)
17.4
11.9
4.6
3.6
3.2
GPT-2 + APE (512)
14.0
16.3
16.7
17.9
14.3
Meta + APE (128)
98.7
98.6
67.5
55.6
42.8
Meta + APE (256)
81.8
79.7
48.9
43.1
37.9
Meta + APE (512)
100.0
100.0
79.5
65.5
57.1
Meta + RoPE (128)
98.1
100.0
100.0
96.0
88.9
Meta + RoPE (256)
100.0
100.0
100.0
97.9
82.6
Meta + RoPE (512)
100.0
100.0
100.0
98.8
81.0
20
Table 14: Accuracy (%) on the Copying task under different ablations: zeroing the positional encoding (No
Pos), text embeddings (No Embed), or both, only on the meta-token.
Model (Train Length)
Full
No Pos
No Embed
Neither
Meta + APE (128)
96.6
93.2
7.2
4.8
Meta + APE (256)
98.2
99.6
5.0
3.6
Meta + APE (512)
99.0
96.6
5.7
5.4
Meta + RoPE (128)
100.0
99.6
6.9
4.9
Meta + RoPE (256)
100.0
100.0
4.5
5.1
Meta + RoPE (512)
100.0
95.6
6.9
4.9
Table 16: Accuracy (%) on the Copying task with Gaussian noise added to positional encoding.
Model (Train Length)
Noise 0.0
Noise 0.1
Noise 0.5
Noise 1.0
Noise 2.0
GPT-2 Abs (128)
2.9
1.2
0.0
0.0
0.0
GPT-2 Abs (256)
6.0
7.1
3.6
0.8
0.7
GPT-2 Abs (512)
6.0
5.8
3.6
0.4
0.3
Meta + APE (128)
96.1
98.5
69.8
58.6
54.9
Meta + APE (256)
100.0
100.0
76.3
68.8
57.2
Meta + APE (512)
98.9
98.7
74.4
68.9
50.5
Meta + RoPE (128)
100.0
100.0
75.9
68.6
49.9
Meta + RoPE (256)
100.0
100.0
82.6
65.6
45.1
Meta + RoPE (512)
100.0
100.0
84.4
67.6
46.3
21
F.4
Length Generalization Ability under No Positional Encoding Ablation
Table 17: List-Recall: “No Pos” vs. Full accuracy for Meta-attention with APE and Meta-attention with
RoPE.
Model (Split, Train Len)
Full
No Pos
∆(pp)
Meta + APE (128)
small
—
—
—
medium
77.8%
88.9%
+11.1
hard
11.1%
22.2%
+11.1
Meta + APE (256)
small
100.0%
100.0%
0.0
medium
100.0%
100.0%
0.0
hard
44.4%
22.2%
–22.2
Meta + APE (512)
small
—
—
—
medium
—
—
—
hard
100.0%
100.0%
0.0
Meta + RoPE (128)
small
—
—
—
medium
44.4%
55.6%
+11.1
hard
11.1%
11.1%
0.0
extra-hard
0.0%
0.0%
0.0
long
0.0%
11.1%
+11.1
Meta + RoPE (256)
small
100.0%
100.0%
0.0
medium
100.0%
100.0%
0.0%
hard
33.3%
66.7%
+33.3
extra-hard
0.0%
22.2%
+22.2
long
0.0
0.0
0.0
Meta + RoPE (512)
small
—
—
—
medium
100.0%
100.0%
0.0
hard
100.0%
100.0%
0.0
extra-hard
44.4%
55.6%
+11.1
long
0.0%
0.0%
0.0
22
G
Theoretical Analysis
G.1
Proof of Theorem 4.1
Lemma G.1. Let ℓ1, ℓ2, . . . , ℓN be logits and define softmax distribution αj =
exp(ℓj)
PN
k=1 exp(ℓk). Suppose that for
some "correct" index j∗we have ℓj∗= L, and for all other indices j ̸= j∗, ℓj ≤L −∆for some ∆> 0. Then,
entropy H(α) is strictly decreasing in ∆.
Proof. First, we can group the other logits (i.e.
j ̸= j∗, such that S =
P
j̸=j∗eℓj.
Then, since each ℓj
carries the property that eℓj ≤eL−∆given ℓj∗= L, we have that S ≤(N −1)eL−∆since there are N −1
terms. Revisiting the softmax α, we have that αj∗=
eL
eL+S ≥
eL
eL+(N−1)eL−∆=
1
1+(N−1)e−∆. We will
denote this quantity as p henceforth. Next, each other softmax αj for j ̸= j∗must have the property that
αj =
eℓ
eL+S ≤
eL−∆
eL(1+(N−1)e−∆) =
e−∆
1+(N−1)e−∆= 1−p
N−1.
As a result, we have the following entropy maximization problem:
maximize
α1,...,αN
−
N
X
j=1
αj log αj
subject to
N
X
j=1
αj = 1,
αj∗= p,
αj ≥0,
j = 1, . . . , N.
Observe that the entropy (objective) function is Schur-concave in α, so it is maximized when αj∗= p and
the remaining softmax mass is split uniformly over the N −1 elements, i.e. αj = 1−p
N−1 ∀j ̸= j∗. Plugging this
in for H(α) yields:
H(α) ≤−p log p −(1 −p) log(1 −p) + (1 −p) log(N −1)
(3)
Next, we aim to study the relationship between H and ∆. By the chain rule, dH
d∆= dH
dp · dp
d∆.
dH
dp =
−(1 + log p) + log 1−p
N−1 + 1 = log
1−p
(N−1)p. Substituting 1−p
p
= (N −1)e−∆, we get dH
dp = −∆and since ∆> 0,
dH
dp < 0. We then turn to
dp
d∆=
(N−1)e−∆
[1+(N−1)e−∆]2 > 0 since both numerator and denominator must be > 0.
Therefore, dH
d∆= −∆
(N−1)e−∆
[1+(N−1)e−∆]2 < 0, meaning that H(α) is strictly decreasing in the margin ∆.
We will now use Lemma G.1 to prove Theorem 4.1.
Proof of Theorem 4.1. Consider a parametrized path by variable t ∈[0, ∆]; define ℓ(t)
j
= ℓj + δj,j∗t, and
α(t)
j
=
e
ℓ(t)
j
N
P
k=1
eℓ(t)
k
=
e(ℓj +δj,j∗t)
N
P
k=1
e(ℓk+δk,j∗t) . Define ℓ
′(t)
j
= d
dtℓ(t)
j
and α
′(t)
j
= d
dtα(t)
j .
Next, we differentiate the entropy H(α) with respect to t:
d
dtH(α) = −
N
X
j=1
[α′
j ln αj + αj
α′
j
αj
] = −
N
X
j=1
α′
j(1 + ln αj) = −
N
X
j=1
α′
j + α′
j ln αj
Since P α′
j = 0 due to P αj = 1, this simply reduces to
d
dtH(α) = −PN
j=1 α′
j ln αj.
From Cover and Thomas (2006), we have that α′
j = αj(ℓj −Eα[ℓ′]), where Eα[ℓ′] =
N
P
k=1
αkℓ′
k. Plugging
this into the expression for the derivative of entropy with respect to t:
d
dtH(α) = −
X
j
αj(ℓ′
j −Eα[ℓ′]) ln αj = −(
X
j
ajℓ′
j ln αj −Eα[ℓ′]
X
j
αj ln αj)
23
Observe that P
j αj ln αj = Eα[ln α] so this simply reduces as:
d
dtH(α) = −(Eα[ℓ′ ln α] −Eα[ℓ′] Eα[ln α]) = −Covα(ℓ′, ln α)
(4)
Revisiting the meta-token setup where only the "correct" logit at j∗is boosted, this suggests that
ℓ′
j = 1(j = j∗). Therefore, Eα[ℓ′] = αj∗and Eα[ℓ′ ln α] = αj∗ln αj∗. This can be substituted into the
covariance term above:
d
dtH(α) = −Covα(ℓ′, ln α) = −(αj∗ln αj∗−αj∗Eα[ln α]) = −αj∗(ln αj∗−Eα[ln α])
Due to the Schur-concavity of H(α) (Marshall et al., 2011), ln αj∗= maxj ln αj and ln αj∗> Eα[ln α]. As
such, given αj∗> 0 and ln αj∗−Eα[ln α] > 0, this suggests that Covα(ℓ′, ln α) > 0 and thus,
d
dtH(α) < 0.
Therefore, we conclude that adding a positive logit boost at the meta-token index ("correct" logit) strictly
decreases entropy, supporting the proposed "anchoring" effect notion.
G.2
Proof of Theorem 5.1
Proof of Theorem 5.1. The meta-tokens are simply a new (latent) channel that may be utilized to search for
candidate distributions. However, this latent can be ignored, yielding the original search space; that is, any
encoder qϕ(ˆx | x) that does not use meta-tokens can be implemented in the meta-token model by zeroing out
all meta-token contributions. Therefore, Qabs ⊆Qmeta, where q = (qϕ, qθ) over the feasible combinations of
encoder and decoder. Naturally, minimizing a function over a larger feasible set cannot increase its minimum.
Thus, for a fixed rate R,
Dmeta(R) =
min
q∈Qmeta : I(X; ˆ
X)=R
D(q) ≤
min
q∈Qabs : I(X; ˆ
X)=R
D(q) = Dabs(R).
Note that the same result holds for RoPE in place of APE (i.e. DRoPE in place of Dabs), as well.
G.3
Theorem C.2
Theorem G.2. Consider functions p : {0, . . . , T −1} →R and b : {−(T −1), . . . , T −1} →R for absolute
postional biases and relative biases, respectively. Let Babs to be the set of all fixed absolute positional bias
matrices Babs
i,j = p(j) and Brel to be the set of all fixed relative biases Brel
i,j = b(i −j). Let Bmeta be the set of
bias matrices implementable by the Transformer augmented with meta-token embeddings {mt} which emit a
content-dependent logit boost at their respective indices. Then,
Babs ∪Brel ⊊Bmeta
(5)
Proof. We break this argument down into two parts →(i.) the forward direction, where we show that all
absolute and relative biases without meta-tokens can be modeled by the meta-token model.
(i) Babs ∪Brel ⊆Bmeta.
Every B ∈Bmeta is obtained by choosing meta-token embeddings et ∈Rd at each
position t and a linear head W, so that the total bias at (i, j) is Bi,j = P
t Q⊤
i W et 1j=t.
• Absolute case. Given p(j), set W ∈R1×d and choose each ej so that Q⊤
i W ej = p(j) ∀i. All other et̸=j
are zero. Then Bi,j = p(j).
• Relative case. Given b(i −j), place a meta-token at every position j. Choose W and embeddings ej so
that Q⊤
i W ej = b(i −j) ∀i, j.
For instance, if we let W = Id and arrange that ej encodes the vector
b(1 −j), b(2 −j), . . . , b(T −j)
,
then Q⊤
i ej = b(i −j) when Qi is the i-th standard basis vector.
Therefore, every absolute or relative bias (in Babs and Brel) lies in Bmeta.
24
(ii) There exists a bias B∗∈Bmeta such that B∗/∈Babs ∪Brel.
Define a content-dependent bias
B∗
i,j = f
Cj
where Cj is the full token context preceding position j and f is any non-constant function.
Such a B∗arises by setting each meta-token embedding ej = f(Cj) and W = Id, so B∗∈Bmeta.
However, if there was B∗∈Babs, then there is p(j) with p(j) = f(Cj) for all j and all possible Cj, which is
impossible since Cj varies. Furthermore, if B∗∈Brel, then there is b(i −j) with b(i −j) = f(Cj) independent
of i; again, this condition is impossible to be satisfied. Therefore B∗/∈Babs ∪Brel.
As a result, we conclude that the biases represented by Bmeta contain the set of both absolute and
relative biases without meta-tokens, and represent additional biases that cannot be represented without
meta-tokens.
The result of Theorem G.2 is that the introduction of meta-tokens strictly grows the expressivity of
biases that may be represented, while still being entirely inclusive of the fixed realizable absolute and relative
encoding biases. As a result, we do not "lose" anything representationally by introducing meta-tokens, from
a positional biases standpoint. This enhanced expressive power also plays a role in enabling the model to
learn to focus attention on relevant context spans, reinforcing the aforementioned sharpening effect.
H
Example Prompts for Synthetic Tasks
We provide example prompts and target completions for each synthetic task used in our experiments. Each
example illustrates the input format, the placement of _PAUSE_ meta-tokens, and the expected model output.
List Recall
Prompt:
Fruits: orange peach banana plum
Tools: hammer wrench level pliers
Sports: boxing baseball boxing golf
Spices: turmeric cardamom cumin cinnamon
Animals: cat tiger wolf fox
Professions: teacher nurse lawyer architect
Vegetables: onion cucumber broccoli cucumber
Instruments: piano clarinet violin piano
Professions: teacher nurse lawyer _PAUSE_ architect
Q: What is item 3 of Professions? _PAUSE_
Completion:
lawyer
Segment Counting
Prompt:
Fruits: orange peach banana plum
Tools: _PAUSE_ hammer wrench level pliers _PAUSE_
Sports: boxing baseball boxing golf
Spices: turmeric cardamom cumin cinnamon
Q: How many times does wrench appear between the pauses around Tools? _PAUSE_
Completion:
1
25
Parity
Prompt:
Bits: 0 _PAUSE_ 1 0 0
Q: What is the XOR of all bits before this pause? _PAUSE_
Completion:
0
Copying
Prompt:
...as ever Mr _PAUSE_ . Darcy had been, was much
relieved by discerning such different feelings.
They had not been long together before Darcy told her that Bingley was
also coming to wait on her; and she had barely time to express her
satisfaction, and prepare for such a visitor, when Bingley’s quick step
was heard on the stairs, and in a moment he entered the room. All
Elizabeth’s anger against him had been long done away; but had she still
felt any, it could hardly have stood its ground against the unaffected
cordiality with which he expressed himself on seeing her again. He
inquired in a friendly, though general, way, after her family, and
looked and spoke with the same _PAUSE_ good-humoured ease that he had ever done.
Q: Copy the bracketed text. _PAUSE_
Completion:
. Darcy had been, was much
relieved by discerning such different feelings.
They had not been long together before Darcy told her that Bingley was
also coming to wait on her; and she had barely time to express her
satisfaction, and prepare for such a visitor, when Bingley’s quick step
was heard on the stairs, and in a moment he entered the room. All
Elizabeth’s anger against him had been long done away; but had she still
felt any, it could hardly have stood its ground against the unaffected
cordiality with which he expressed himself on seeing her again. He
inquired in a friendly, though general, way, after her family, and
looked and spoke with the same.
I
Broader Impacts Statement
Our work on learned meta-tokens and meta-attention offers a lightweight, data-efficient way to pre-train
language models while demonstrating strong performance when fine-tuned for recall tasks. This suggests
a path toward more capable, leaner language models that could be used to handle contexts such as like
long legal or medical documents, extended multi-turn dialogues, or large codebases without resorting to
prohibitively large architectures or expensive fine-tuning runs. Such models could bring real benefits to areas
such as conversational agents for education or healthcare. Building off of prior literature that performs a
26
more explicit learned retrieval from the context (Mohtashami and Jaggi, 2023), this could induce improved
and efficient in-line retrieval over vast corpora.
Our work relates strongly to the recent debates in the language modeling community on the impact of
positional encoding, particularly around works such as NoPE (Kazemnejad et al., 2023). We provide strong
evidence that zeroing the positional encoding can improve performance, providing motivation for hybrid
attention mechanisms such as RNoPE (Yang et al., 2025), and other, more efficient ways to pre-train language
models with long-context modeling settings in mind. We note that advances in long-context modeling could
introduce risks around misuse and unintended harm. More powerful context understanding over long ranges
can fuel phishing text and distracted models, especially in the phase of noisy context. However, models
trained on corpora without data pre-processing a priori may be subject to harmful behavior such as profane
generations. In the context of our work, which uses standard, pre-filtered corpora, this issue is avoided; we
encourage users to audit the data used for pre-training first.
27
|
Language Modeling with Learned Meta-Tokens Alok N. Shah1∗ Khush Gupta1∗ Keshav Ramji2∗ Pratik Chaudhari1 1 2IBM Research AI ∗Denotes equal contribution {alokshah, khushg, , September 23, 2025 Abstract While modern Transformer-based language models (LMs) have achieved major success in multi-task generalization, they often struggle to capture long-range dependencies within their context window. This work introduces a novel approach using meta-tokens, special tokens injected during pre-training, along with a dedicated meta-attention mechanism to guide LMs to use these tokens. We pre-train a language model with a modified GPT-2 architecture equipped with meta-attention in addition to causal multi-head attention, and study the impact of these tokens on a suite of synthetic tasks. We find that data-efficient language model pre-training on fewer than 100B tokens utilizing meta-tokens and our meta-attention mechanism achieves strong performance on these tasks after fine-tuning. We suggest that these gains arise due to the meta-tokens sharpening the positional encoding. This enables them to operate as trainable, content-based landmarks, implicitly compressing preceding context and "caching" it in the meta-token. At inference-time, the meta-token points to relevant context, facilitating length generalization up to 2× its context window, even after extension with YaRN. We provide further evidence of these behaviors by visualizing model internals to study the residual stream, and assessing the compression quality by information-theoretic analysis on the rate-distortion tradeoff. Our findings suggest that pre-training LMs with meta-tokens offers a simple, data-efficient method to enhance long-context language modeling performance, while introducing new insights into the nature of their behavior towards length generalization. 1 Introduction Transformer-based language models (LMs) have showcased remarkable capabilities across diverse language tasks (Brown et al., 2020b; Chowdhery et al., 2022; OpenAI, 2023). Nevertheless, such models suffer from an inability to capture dependencies spanning over their entire context window. With growing adoption and ever-expanding demands on the context over which the model can process and reason, it is vital to develop methods that facilitate long-context adaptation and length generalization. Despite numerous architectural remedies, including sparse attention (Beltagy et al., 2020; Zaheer et al., 2020), recurrent blocks (Hutchins et al., 2022), and modified positional encoding (Press et al., 2021; Su et al., 2021; Chen et al., 2023), the fundamental challenge still remains: how can models reliably access and summarize distant context in a concise, cheap, yet expressive manner? We propose a simple solution, by way of meta-tokens, learned tokens periodically injected into the input sequence during pretraining, and cleverly placed during fine-tuning. Unlike conventional dummy tokens (Goyal et al., 2024), meta-tokens are explicitly trained via a dedicated sparse attention layer, guiding the model to condense and "cache" contextual information as an in-line storage mechanism. As a result, these tokens act as adaptive landmarks (Mohtashami and Jaggi, 2023), summarizing preceding context segments into compact representations. At inference time, meta-tokens provide implicit pathways to distant information, enabling models to generalize effectively across sequences longer than those encountered during training. 1 18 Sep 2025 We demonstrate the empirical efficacy of this approach by pre-training a 152M parameter modified GPT-2 model with meta-tokens and a sparsely activated meta-attention mechanism. Our approach not only excels on recall-oriented synthetic tasks but also generalizes up to 2x the pretraining context window (via YaRN) - a rare feat for decoder-only architectures trained on 100B tokens or less. We trace these gains to a subtle mechanism: meta-tokens provably induce a sharpening effect on positional encoding, enabling the meta-token to locate its position based on the content it stores and reducing the entropy of the attention distribution. We present evidence that this sharpening is responsible for an anchoring effect on relevant distant tokens, facilitating robust length generalization. Furthermore, by analyzing internal model activations and studying the rate-distortion tradeoff, we validate that meta-tokens function as compressed representations of context. Our contributions can be summarized as follows: 1. We introduce a simple language model pre-training scheme using meta-tokens and a meta-attention mechanism to improve performance on a wide range of synthetic tasks. 2. We show that meta-tokens sharpen the positional encoding, enabling precise long-range attention; we further show that length generalization improves without positional encoding. 3. The sharpening hypothesis and implicit compression behavior are supported by visualizations of model internals and information-theoretic analysis into the rate-distortion tradeoff. 2 Preliminaries Causal Multi-Head Attention. Let x = {x1, x2, . . . , xT } denote an input sequence of tokens of length T, V denote the vocabulary size of V, and E : V →Rd represent the the token embedding function mapping each token to a d-dimensional vector. Each xt is embedded into some continuous representation where et = E(xt) + pt, such that pt is the positional encoding for t. In decoder-only architecture, we utilize causal self-attention to ensure that predictions for a given token are only based on preceding tokens. The causal self-attention mechanism modifies the attention computation by masking future positions in the attention weights. Formally: Causal Attention(Q, K, V ) = softmax QK⊤ √dk + M where M masks future tokens, ensuring that the model can only attend to current and past tokens. If A is the matrix of attentions scores, then Aij = ( softmax(Aij) if i ≥j 0 if i and tokens interleaved between reasoning steps near punctuation (serving as natural break), the introduction of a rough periodicity between tokens during pre-training could result in being trapped into local minima in the optimization landscape. We instead chose to follow the random injection scheme, supported by the meta-token pre-training approach outlined in Goyal et al. (2024). We ensure that the trained model incurs no loss for predicting meta-tokens, unlike a standard token in the vocabulary - the meta-tokens' indices are simply shifted and removed when computing the binary cross-entropy (BCE) loss. Meta-Attention Mechanism. We augment our transformer H to take P which contains the positions of the meta-tokens. We introduce a sparse attention mechanism, called meta-attention, which selectively modifies attention scores for the specially marked "meta-tokens" within a sequence. This allows the model to simulate selective attention, influencing the final behavior by focusing on these meta-tokens. The underlying principles of the desired behavior is influenced by dual cross-attention (Jiang et al., 2024), such that operations are performed higher on the abstraction hierarchy than the feature space alone. This induces a meta-learning-like setup over which attention on the meta-tokens is learned. Let the indices of special "meta-tokens" be denoted by positions ∈RB×T ′, where T ′ is the number of meta tokens in a batch. We construct a meta mask P ∈RB×T ×T to influence the attention mechanism. For each batch element b and token positions i, j: P[b, i, j] = ( 0 if both i and j are meta tokens (i.e., i, j ∈positions[b, :]) -∞ otherwise The meta-attention operation is defined as: MetaAttention(Q, K, V ) = softmax QK⊤ √dk + M + P V Where M is the same causal mask as before. Here, the meta mask P allows attention to flow only among the meta tokens in the sequence, introducing a distinct interaction compared to regular attention. This 1We take k = 0.1 in practice; balancing next-token prediction over the standard vocabulary while injecting a non-trivial number of meta-tokens. 3 meta-attention layer selectively modifies the attention by influencing the flow of information to and from these meta tokens, distinguishing itself from the standard causal attention. In particular, if A is the matrix of attentions scores then Aij = ( softmax(Aij) if i and j are meta tokens -∞ otherwise To assemble the architecture used for our model, we insert the meta-attention mechanism after the causal masked self-attention computation, to specifically attend to the injected meta tokens, as defined above. We provide a complete breakdown of the architecture in Appendix A. 4 Results 4.1 Model Training and Architecture All experiments were performed with 4 NVIDIA A100 GPUs, training the meta attention transformer for 200,000 iterations or 98B tokens using Distributed Data Parallel (DDP) on the Colossal Cleaned Crawl Corpus (C4) (Raffel et al., 2020). The configuration and hyperparameters used in our pre-training are included in Appendix A and B. As a baseline, we also pre-train GPT-2 (124M) on C4, with identical hyperparameters. The primary change we make from a standard GPT-2 architecture is the addition of RoPE to enable better generalization to longer contexts and improve stability in next-token prediction tasks. We extend our transformer model's context window from 1024 tokens to longer sequences by training two distinct models with context lengths of 4096 and 8192 tokens, respectively. This extension is implemented using the YaRN method (Peng et al., 2023), which dynamically scales Rotary Positional Embeddings (RoPE) to effectively process significantly longer sequences without compromising performance or computational efficiency. The key parameters are detailed in Appendix C 4.2 Experimental Setup and Tasks We design four synthetic tasks to evaluate the recall capabilities of models trained with meta-tokens. The tasks are List Recall, Segment Counting, Parity, and Copying. For each task, we define three difficulty levels by varying the maximum sequence length. In all tasks, we insert a designated _PAUSE_ meta-token at task-specific positions to indicate where the model should focus its meta-attention. We fine-tune on synthetic data that we generate for each task (binned by instance length) and report the validation score on a held-out test set. Detailed examples for each task are provided in Appendix H. • List Recall: Given N named lists of length k, the model is prompted to recall a specific item from a specified list. We insert a _PAUSE_ meta-token immediately following the list containing the queried item, as well as before the final question. The expected answer is the corresponding item. Task difficulty is scaled by varying the list length k and number of lists N. • Segment Counting: The model is presented with several named lists, with a segment in these lists wrapped by by _PAUSE_ meta-tokens. The prompt then asks how many times a specified item appears between the two meta-tokens. The task difficulty changes based on the number and size of the named lists. • Parity: In this task, the input consists of a sequence of bits, with a _PAUSE_ meta-token indicating a specific position in the sequence. The model is prompted to compute the XOR of all bits appearing before the meta-token. The task difficulty changes based on the number of bits it has to XOR. • Copying The model is given with a segment of text containing a bracketed spans marked by _PAUSE_ meta-tokens. The model is prompted to reproduce the exact content found between the meta-tokenmarked boundaries. The task is designed to assess the model's ability to extract and copy arbitrary 4 spans of text from a context, with difficulty varying according to the length and complexity of the bracketed span. We report the sequence accuracy. Within these four tasks, we investigate length generalization by fine-tuning our model in multiple phases. At each phase, we assess the model's performance on sequence lengths exceeding those seen during that phase's training, enabling us to evaluate its generalization to longer contexts. In addition, Appendix D reports the performance of our models on a context length of 2048 tokens, which is twice the length seen during pretraining (1024 tokens). Baselines. For a controlled comparison, we also pre-train a GPT-2 model (NanoGPT, 124M; Karpathy (2023)) on C4, with identical hyperparameters as the meta-tokens model. Additionally, we use Eleuther AI's GPT-Neo-125M (Black et al., 2021) as another baseline. 4.3 Meta-Tokens Improve Performance on Synthetic Recall-Oriented Tasks. As seen in Figure 1, we find that the models trained on meta-tokens substantially outperform our pre-trained GPT-2 and GPT-Neo-125M baselines, across all tasks and all train lengths. The complete tables for these results are included in Appendix F. We observe the GPT-2 model trained with APE to generally perform poorly; however, it does achieve reasonable performance in the segment counting and parity tasks, albeit much further behind the models with meta-attention. This suggests that training on further data could improve its performance; this also highlights the data-efficiency of our meta-tokens models. The models also gain in performance much more quickly with fine-tuning when increasing the train length - a phenomenon not observed with the GPT-2 models. Our models also outperform the GPT-Neo-125M model by a substantial margin; given that GPT-Neo was pre-trained on 300B tokens, nearly three times the volume of data on which our meta-attention models were trained (albeit from a different corpus). To study the effect of positional encoding on our results, we ablate by zeroing out the positional encoding, zeroing out the text embedding, and performing both operations - all just at the meta-token indices. Curiously, we observe in Tables 11-14 that the score without positional encoding nearly matches or exceeds the accuracy of the model with the positional encoding as is. The lone exception is the segment counting task, where there is a gap for all settings except the model trained with APE at a length of 256, which achieves a +4.8% improvement over the "Full" model. By contrast, zeroing out the token embedding hurts performance substantially in nearly every setting on List Recall, Segment Counting, and Copying; on Parity, this generally matches the performance of zeroing out the positional encoding. Thus, we find that 1. pre-training with meta-tokens and meta-attention boosts performance, and 2. zeroing out the positional encoding at just the meta-tokens can match or improve performance at inference time. 5 Figure 1: We study the performance of the pre-trained GPT-2 w/ APE, Meta-attention w/ APE, and Meta-attention w/ RoPE, as well as GPT-Neo-125M, all fine-tuned on synthetic data for their respective tasks at the maximum train lengths indicated in the legends. All experiments are performed on a test set of prompt lengths up to 512 tokens. Table 1: Token Accuracy (%) on List Recall and Segment Counting across long contexts. Task Train/Finetune 2k 3k 4k 5k 6k 7k 8k 10k 12k 14k 16k List 4k / 2k 19.5 16.0 13.7 0.9 0.0 0.0 0.9 1.1 0.0 2.1 1.1 4k / 4k 85.0 88.2 90.2 20.5 1.8 1.0 3.5 4.4 1.1 2.1 2.1 8k / 4k 85.0 95.8 91.2 97.4 98.2 96.2 93.9 31.9 0.0 2.1 2.1 8k / 8k 92.9 98.3 97.1 100.0 98.2 100.0 100.0 89.0 26.1 10.4 9.6 Count 4k / 2k 19.1 23.8 19.2 14.6 25.2 14.1 14.0 12.0 16.0 8.0 6.0 4k / 4k 17.5 23.8 31.8 20.3 30.4 19.3 19.1 14.0 26.0 12.0 16.0 8k / 4k 19.1 23.8 14.3 11.1 20.6 12.7 12.7 14.0 16.0 14.0 12.0 8k / 8k 27.0 33.3 15.9 19.1 27.0 19.1 23.8 22.0 18.0 18.0 18.0 Meta-Tokens Aid in Length Generalization. In Figure 1 and Appendix F, we find that the model trained on meta-tokens length generalizes well on the parity and copying tasks with APE, and performs somewhat well (much better than the baselines) on list recall and segment counting at a train length of 256. For instance, despite relatively similar performance at the 128 train length on the segment counting task, the performance on the test set of up to a length of 512 dramatically increases when training at the 256 length, by +28.6% with APE and +10.7% with RoPE, compared to +3.5% for GPT-2 with APE. Table 1 exhibits a similar trend for the YaRN models, achieving strong performance across its respective context windows, and even achieves non-trivial accuracy beyond the window. Fine-tuning the 8k YaRN model on examples of up to a length of 4k can generalize very well up to 8k. These findings underscore the substantial advantages of training with meta-tokens and the nuanced role positional encoding plays in task-specific and length-generalization contexts. Moreover, when looking at the results on Meta + RoPe on test set lengths of prompts up to 1024 tokens (denoted extra-hard in Table 2), we find that zeroing out the positional encoding also plays a sizable role in improving length generalization, especially in the List Recall task. While the model originally achieves performances of 11.1%, 0% and 44.4% when fine-tuned on train lengths of 512 (APE), 256 and 512 (RoPE), respectively, the scores improve by +38.9%, +22.2% and +11.2%, by simply zeroing out the positional encoding at the meta-tokens. 6 Table 2: The configurations where zeroing the positional encoding at inference time results in accuracy improvements on the List Pointer task, denoted by the ∆(pp) percentage points column. Model (Split, Train Len) Full No Pos ∆(pp) Meta + APE (medium, 128) 77.8% 88.9% +11.1 Meta + APE (hard, 128) 11.1% 22.2% +11.1 Meta + APE (extra-hard, 512) 11.1% 50.0% +38.9 Meta + RoPE (medium, 128) 44.4% 55.6% +11.1 Meta + RoPE (hard, 256) 33.3% 66.7% +33.3 Meta + RoPE (extra-hard, 256) 0.0% 22.2% +22.2 Meta + RoPE (extra-hard, 512) 44.4% 55.6% +11.1 4.4 Examination of Positional Encoding through Internal Representations. As discussed above, the results in Tables 11-14 suggest that the positional encoding of the meta-token can potentially be holding back the downstream performance of the meta-attention models. We posit that the model is instead relying on its content - cached context stored within the meta-token - to sharpen its sense of its position in the sequence. Next, we aim to formally define this notion of sharpness in the context of positional encoding, and its relationship to the model's logits. Let αi→k = softmaxk(QiKT j + bi-j) be the attention distribution for query i over keys j, with relative bias term bi-j. We define the sharpness of the positional encoding by the entropy: H(αi) = - X j αi→j log αi→j Intuitively, when a meta-token is present at position t, the model's attention becomes peaked around a small set of keys; this "honing in" behavior reduces H(α) compared to APE or RoPE without meta-tokens. In this manner, meta-tokens behave as content-driven landmarks - they serve as a low-entropy channel that serves as a pointer to relevant context. As noted prior, the data efficiency observation suggests that the meta-token helps to accelerate next-token prediction behavior while introducing a stabilizing effect in the midst of noisy positional encoding. Theorem 4.1. Consider a Transformer head at query position i over keys 1, . . . , N. Let αabs i (j) ∝exp(QiKT j ) be the attention under absolute positional encoding and let αmeta i ∝exp(QiKT j + δj,j∗∆) when a meta-token at position j∗introduces an additive logit boost of ∆> 0. Then, for some function κ(∆) > 0: H(αmeta i ) ≤H(αabs i ) -κ(∆) (1) Proof Sketch. Parametrize the path by t ∈[0, ∆], and define logits l(t) j and their softmax α(t) respectively. Since boosting the true index tightens the margin, the derivative of H(α(t) is strictly negative. Therefore, over the path, H(α(∆)) 0. Our empirical results show that the entropy over the softmax distribution of the logits decreases (the difference between "non-meta-token" and "meta-token" is positive), thus corroborating our central claim in Theorem 4.1. 4.5 Inference Efficiency Meta-tokens are generated at inference. The additional computation is sparse-each attention head only considers a small number of meta positions rather than the full attention matrix. In our current PyTorch implementation, which materializes the sparse mask as a dense tensor, we observe a throughput drop from 130.82 to 117.86 tokens/sec and a TTFT increase from 7.44ms to 7.57ms, i.e., a 1.11× slowdown. We expect optimized sparse attention implementations to reduce or eliminate this overhead. Table 3: Inference speed comparison with and without meta/pause tokens. Metric No meta/pause tokens With meta/pause tokens TPS (tokens/sec) 130.82 117.86 TTFT (ms) 7.44 7.57 Slowdown factor 1.00 1.11 5 Examining Context Compression with Rate-Distortion Theory Given that these results provide evidence that meta-tokens can compress context in their representation, we develop mathematical formalizations to analyze this behavior. In particular, we turn to information-theoretic tools - specifically, an information bottleneck view. For a meta-token at xm succeeding a sequence of tokens X = xi:m-1 from indices i to m -1, we consider a compression function ζ(·) which transforms the subsequence X into xm. As such, we define 8 ˆX = ζ(X) = ζ(xi:m-1) to be the compressed representation stored in xm. This can be generalized to the full set of M meta-tokens: ˆX1:M = [ζ1(X1:m1-1), ζ2(Xm1+1:m2-1), . . . ζM(mM+1 : mn)] For practicality, we consider the variational information bottleneck (Alemi et al., 2017). This introduces an encoder qφ(ˆx | x) and decoder qθ(y | ˆx), along with a simple prior r(z) (e.g. N(0, 1)), yielding the following form to solve for these variational distributions: min qφ,qθ E p(x,y)[ E qφ(ˆx|x)[-log qθ(y | ˆx]] + β · E p(x)[KL(qφ(ˆx | x)||r(x))] This form admits an equivalent perspective in rate-distortion theory. Specifically, the first term measures the quality in predicting the downstream target given a lossy compression ˆX ("distortion"). The second term measures the average number of bits required to encode ˆX, relative to some simple reference code r(z) ("rate"). As such, analyzing rate-distortion curves - sweeping over values of β - can provide valuable insights into the quality of the "compression" behavior and its informativeness when the meta-token is attended to. Theorem 5.1. Let Dabs(R) be the minimum distortion achievable at rate R under the VIB objective only using absolute positional encoding (no meta-tokens), and let Dmeta(R) be the minimum distortion achievable at rate R with meta-tokens. Then, for every R ≥0, Dmeta(R) ≤Dabs(R) (2) Intuitively, meta-tokens expand the feasible set of encoders and decoders, which will either match or lower distortion for a given rate. Thus, the quality of compression with respect to its informativeness in predicting the target can only improve. 5.1 Rate-Distortion Informs the Quality of Context Caching To obtain empirical rate-distortion curves for our meta-token bottleneck in Figure 3, we freeze the pre-trained meta-token model and fix a small variational bottleneck head to the last meta-token hidden state. Concretely, let hm ∈RD be the output of the final Transformer layer at the last meta-token position. We introduce qφ(z | hm) = N μφ(hm), diag(σ2 φ(hm)) , qθ(y | z) = softmax(Wz + b), with μφ, σφ : RD →RL and W ∈R|V|×L. We then optimize the ELBO: min φ,θ Ehm,y -log qθ(y | z) + β Ehm KL qφ(z | hm) ∥N(0, I) . Training is performed on the small List-Pointer D.1.1 split (50 examples, batch size 1), for 5 epochs at each β ∈{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0}. After each run, we record the average cross-entropy loss ("distortion") and KL ("rate") on the same 50 examples. Finally, we plot the resulting rate-distortion curves on a symlog x-axis (linear below 20 nats, logarithmic above) so that both the low-rate "knee" and the long tail are visible (see Figure 3). 5.2 Positional Encoding as a Source of Bottleneck Distortion The consistent improvements in recall fidelity and length generalization from zeroing positional encodings at meta-token positions 2 are well-explained through through our rate-distortion analysis. Prior work (e.g., NoPE; Kazemnejad et al., 2023) shows that Transformers can infer token order from the causal mask alone, often generalizing better when explicit positional embeddings (PEs) are removed. Our results extend this insight: in Theorem 5.1, we formalize meta-tokens as learned compression bottlenecks. If a meta-token retains its PE (absolute or rotary), a portion of its representational capacity is spent encoding position rather than semantic content. This index-dependent signal introduces unnecessary variance, increasing the distortion of the compressed summary. By contrast, zeroing out the PE forces the full embedding capacity to encode task-relevant information. As a result, we observe lower distortion (higher retrieval accuracy) at a given rate-both theoretically and empirically-across all four synthetic tasks. 9 Figure 3: (Left) This plot visualizes the residual stream after each layer, to analyze the meta-token within causal attention. The colors before the meta-token (the colored band across the layers) denote the context which the meta-token attends to and implicitly stores, and the final, rightmost colored line represents the final meta-token in the sequence, which attends to the previous one at the aforementioned band. (Right) We analyze the variational information bottleneck (VIB) objective and its decomposition into its rate and distortion components. Supporting the findings of Theorem 5.1, for a given rate R, the distortion D is strictly lower for the meta-token compared to the last non-meta-token element in the sequence. 6 Related Work Pause and Memory Tokens As detailed in our work, recent studies on Transformer-based models have explored the introduction of special tokens, beyond ordinary vocabulary symbols. Pause or dummy tokens as introduced in Goyal et al. (2024) enhance computational width, allowing models to perform additional internal computation by effectively delaying their outputs. This yields empirical gains on question answering and reasoning-intensive tasks. Similarly, Pfau et al. (2024) explore using filler tokens - sequences of seemingly meaningless symbols - as a stand-in for chain-of-thought. These special tokens may also delineate phases of reasoning, as in Quiet-STaR Zelikman et al. (2024). Quiet-STaR uses a begin-of-thought token and an end-of-thought token, generating a silent rationale sequence for each step before emitting the next word, showing that this helps zero-shot reasoning. Works such as Memory Transformer (Burtsev et al., 2021) and Landmark Attention (Mohtashami and Jaggi, 2023) introduce memory tokens; the former prepends them, while the latter uses them as learnable keys for retrieval over blocks of context. Our work is most closely related to the latter, while performing this retrieval in a purely implicit manner via the observed "pointer" mechanism. For vision transformers (ViTs), LeMeVit (Jiang et al., 2024) introduces a similar meta-tokens notion as our work by adding learnable sparse tokens and an attention mechanism between standard tokens and their meta tokens, improving performance and reducing spatial redundancy. Darcet et al. (2024) uses specialized "register" tokens applies to patches to denoise images by extracting the high-norm, outlier tokens, smoothening the feature and attention maps. These works suggest that special tokens, even devoid of semantic content, can influence a model's internal reasoning and memory mechanisms. Positional Encoding We have already described absolute positional embeddings (APE), rotary positional embeddings (RoPE) and relative bias in Section 2. In addition to these methods, ALiBi (Press et al., 2022) 10 adds a fixed linear penalty to attention scores based on the distance between query and key positions, favoring nearer tokens and generalizing to longer contexts with minimal loss in perplexity. Recent work has suggested that Transformers without any added position embeddings can still learn order information and, in some cases, generalize to longer sequences better than models with standard positional encoding. NoPE (Kazemnejad et al., 2023) showed that models trained without positional embeddings can achieve strong length extrapolation in comparison to models trained with positional encoding. They can internally represent both absolute and relative PEs without any explicit positional signal, suggesting these may emerge implicitly via training dynamics or over the data distribution. NoPos (Haviv et al., 2022) also found a similar result, suggesting that models trained without PE can infer their absolute position due to causal attention masks. These findings are highly relevant to our work, given our evidence on length generalization behavior whiling zeroing the positional encoding at the meta-tokens. 7 Discussion and Limitations Our findings suggest that decoder-only language models trained with meta-tokens and meta-attention achieve strong performance on recall-oriented tasks. Furthermore, they are able to length generalize, with performance improvements when removing the effect of positional encoding at the meta-tokens. Given the prior findings of NoPos, we believe the introduction of the meta-attention mechanism and a second causal mask (the "meta mask") could be responsible for this behavior, provided that this behavior is specific to the meta-tokens. We suggest that hybrid attention methods such as RNoPE (Yang et al., 2025) could be suitable for facilitating long-context modeling with meta-tokens. Given the findings that the meta-tokens operate like anchors within the context, it would be valuable to explore the impact of our proposed mechanism in pre-training larger models over longer context windows, under greater computational resources. We employ synthetic tasks that are well-aligned to recall abilities, and design experiments to test length generalization, with the aim of strong synergy with long-context modeling capabilities. Nonetheless, training larger models would indicate the viability of our approach for real-world deployment. Notably, our method requires little overhead - the addition of meta-tokens is a simple data augmentation strategy, and the meta-attention layer is added after standard causal masked self-attention, as described in Appendix A. It would also be informative to study larger-scale corpora - given the data-efficient nature of the meta-tokens approach in vastly outperforming the vanilla GPT-2 model at the ≈100B tokens scale, how rapidly does each model saturate our designed synthetic tasks? 8 Conclusion We introduce meta-tokens in language model pre-training, in addition to a dedicated meta-attention mechanism which learns the relationship between standard tokens and meta-tokens. We find that this improves performance on a suite of synthetic recall tasks, and enables length generalization behavior when removing the positional encoding at each meta-token. We provide evidence to suggest that the meta-tokens sharpen the positional encoding, enabling them to operate as content-based landmarks in the context; we further show that they implicitly compress preceding context, demonstrated by similar token embeddings. These interesting phenomena demonstrate the promise of long-context language modeling enabled via data-efficient pre-training using meta-tokens. References A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep variational information bottleneck. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=HyxQzBceg. 11 I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. arXiv preprint , 2020. S. Black, G. Leo, P. Wang, C. Leahy, and S. Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, Mar. 2021. URL https://doi.org/10.5281/zenodo.5297715. If you use this software, please cite it using these metadata. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc., 2020a. URL https://proceedings. neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. T. B. Brown, B. Mann, N. Ryder, et al. Language models are few-shot learners. NeurIPS, 2020b. M. S. Burtsev, Y. Kuratov, A. Peganov, and G. V. Sapunov. Memory transformer, 2021. URL https: //arxiv.org/abs/2006.11527. T. D. Chen et al. Flashattention-2: Faster attention with better parallelism and work partitioning. arXiv preprint , 2023. A. Chowdhery, S. Narang, J. Devlin, et al. Palm: Scaling language modeling with pathways. arXiv preprint , 2022. T. M. Cover and J. A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, USA, 2006. ISBN 0471241954. T. Darcet, M. Oquab, J. Mairal, and P. Bojanowski. Vision transformers need registers. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id= 2dnO3LLiJ1. S. Goyal, Z. Ji, A. S. Rawat, et al. Think before you speak: Training language models with pause tokens. In ICLR, 2024. A. Grattafiori et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. A. Haviv, O. Ram, O. Press, P. Izsak, and O. Levy. Transformer language models without positional encodings still learn positional information. In Y. Goldberg, Z. Kozareva, and Y. Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1382-1390, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.99/. D. Hutchins et al. Block-recurrent transformers. In NeurIPS, 2022. W. Jiang, J. Zhang, D. Wang, Q. Zhang, Z. Wang, and B. Du. Lemevit: Efficient vision transformer with learnable meta tokens for remote sensing image interpretation. In K. Larson, editor, Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, pages 929-937. International Joint Conferences on Artificial Intelligence Organization, 8 2024. URL https://doi.org/10.24963/ijcai.2024/103. Main Track. A. Karpathy. nanoGPT. https://github.com/karpathy/nanoGPT, 2023. Accessed: 2025-05-16. A. Kazemnejad, I. Padhi, K. Natesan, P. Das, and S. Reddy. The impact of positional encoding on length generalization in transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=Drrl2gcjzl. 12 A. W. Marshall, I. Olkin, and B. C. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics. Springer New York, New York, NY, 2 edition, 2011. ISBN 978-0-387-68276-1. A. Mohtashami and M. Jaggi. Random-access infinite context length for transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id= 7eHn64wOVy. OpenAI. Gpt-4 technical report. https://openai.com/research/gpt-4, 2023. B. Peng, J. Quesnelle, H. Fan, and E. Shippole. Yarn: Efficient context window extension of large language models, 2023. URL https://arxiv.org/abs/2309.00071. J. Pfau, W. Merrill, and S. R. Bowman. Let's think dot by dot: Hidden computation in transformer language models. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id= NikbrdtYvG. O. Press, N. A. Smith, and O. Levy. Alibi: Trainable linear biases for transformers. arXiv preprint , 2021. O. Press, N. A. Smith, and M. Lewis. Train short, test long: Attention with linear biases enables input length extrapolation, 2022. URL https://arxiv.org/abs/2108.12409. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI Technical Report, 2019. URL https://cdn.openai.com/ better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Accessed: 2025-05-15. C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1), Jan. 2020. ISSN 1532-4435. P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations. In M. Walker, H. Ji, and A. Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. URL https://aclanthology.org/N18-2074/. J. Su, Y. Lu, S. Pan, et al. Roformer: Enhanced transformer with rotary position embedding. In ACL, 2021. J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. Roformer: Enhanced transformer with rotary position embedding, 2023. URL https://arxiv.org/abs/2104.09864. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/ 2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. B. Yang, B. Venkitesh, D. Talupuru, H. Lin, D. Cairuz, P. Blunsom, and A. Locatelli. Rope to nope and back again: A new hybrid attention strategy, 2025. URL https://arxiv.org/abs/2501.18795. M. Zaheer, G. Guruganesh, A. Dubey, et al. Big bird: Transformers for longer sequences. In NeurIPS, 2020. E. Zelikman, G. R. Harik, Y. Shao, V. Jayasiri, N. Haber, and N. Goodman. Quiet-STar: Language models can teach themselves to think before speaking. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=oRXPiSOGH9. 13 A Full Architecture Details We provide a full outline of the architecture design out method uses. Our architecture is equivalent to the NanoGPT (GPT-2) architecture, while introducing the meta-attention block after the initial causal masked attention and layer normalization computation. 1. Input Layer: Given an input sequence of tokens x = {x1, x2, . . . , xT }, we first embed each token into a continuous representation. Instead of absolute positional encodings, we apply Rotary Position Embeddings (RoPE) Su et al. (2023) to inject positional information. For each token, the embedded representation is: et = RoPE(E(xt), t), where RoPE(·, t) denotes the rotary positional embedding applied to the tth position, with a base θ = 10000.0. 2. Causal Masked Self-Attention: The first layer consists of the causal masked self-attention mechanism. For each head h, the attention operation is computed as: CausalAttentionh(Q, K, V ) = softmax QK⊤ h √dk + M Vh, where Q, K, V are the query, key, and value matrices derived from the input embeddings E, and M is the mask matrix. 3. Meta Attention Layer: After the causal masked self-attention, we integrate the meta-attention mechanism to specifically attend to the injected meta tokens. This operation is defined as: MetaAttention(Q, K, V, P) = softmax QK⊤ √dk + Mcausal + P V, where P is the meta mask constructed from the indices of the meta tokens. 4. Feedforward Layer: Following the attention layers, we pass the output through a feedforward neural network defined by: FFN(x) = ReLU(xW1 + b1)W2 + b2, where W1, W2 are weight matrices, and b1, b2 are bias vectors. 5. Layer Normalization: After both the causal self-attention and meta-attention operations, we apply layer normalization: LayerNorm(x) = x -μ σ + ε , where μ and σ are the mean and standard deviation of the features, and ε is a small constant for numerical stability. 6. Final Output Layer: The final layer projects the output of the last feedforward layer back to the vocabulary size to produce the s for the next token prediction: s = softmax(xWout + bout), where Wout and bout are the output weight matrix and bias vector, respectively. 14 B Pre-training Hyperparameters and Model Details Our decoder-only modified GPT-2 model was pre-trained on the C4 dataset with the following configuration and hyperparameters: Table 4: Pretraining Configuration Parameters Parameter Value Batch Size 12 Gradient Accumulation Steps 40 Block Size 1024 Number of Layers 12 Number of Heads 12 Embedding Size 768 Learning Rate 6e-4 Weight Decay 1e-1 Max Iterations 600,000 Warmup Iterations 2,000 Minimum Learning Rate 6e-5 Dropout Rate 0.0 RoPE Theta 10000.0 Initial Model Resume Optimizer AdamW AdamW Beta1 0.90 AdamW Beta2 0.95 Gradient Clipping 1.0 Tokenizer tiktoken C YaRN Hyperparameters Parameter 4096-token model 8192-token model yarn_scale 4.0 8.0 yarn_original_max_seq_len 1024 yarn_extrapolation_factor 1.0 yarn_attn_factor 1.0 yarn_beta_fast 32.0 yarn_beta_slow 1.0 Table 5: YaRN parameter configurations for extended context models. D Additional Experimental Details D.1 Synthetic Data Generation We generate 90,000 train examples and held-out test set of 10,000 examples for each task. 15 D.1.1 List Recall We generate a suite of "list-pointer" examples by sampling random categories and list items, inserting a special meta token as a marker, and asking the model to recover the item immediately following the meta-token. Each example consists of: 1. m categories drawn without replacement from a fixed set of 20. 2. n items per category, sampled with replacement from the category's 10-item inventory 3. One "target" category in which we inject a single meta token after the jth item (j ∈[n]) and then append the remaining items 4. A question line "Q: What is item j of ? _META_" This pipeline yields curriculum-structured data that systematically probes the model's ability to attend to and copy items in long, multi-list contexts. Phase m (Num. categories) n (List length) Approx. prompt-token range 1 Uniform 3-8 Uniform 3-10 Short (≈100-200 tokens) 2 Uniform 8-12 Uniform 3-16 (bimodal) Mid-range (≈200-300 tokens) 3 Uniform 12-19 Mixture {3-8, 9-16, 17-25} Full-range (≈500-700 tokens) 4 Uniform 15-20 Uniform 40-60 "Extra-hard" ≤1024 tokens 5 Uniform 15-20 Uniform 90-110 "Long" ≤2048 tokens Table 6: Curriculum schedule for synthetic data. D.1.2 Segment Counting Similar to List-pointer, except the model must count occurrences of a target token within a meta-token bracketed list segment. Uses the schedule dictated by Table D.1.1. Asks the question: "Q: How many times does appear between the pauses around ? _META_". D.1.3 Parity Generates examples where the model computes the XOR (parity) of a bit-string segment up to the first L characters where L is drawn phase-dependently. the same scheduling dictated by Table D.1.1. Asks the question: "Q: What is the XOR of all bits before this pause? _META_ " D.1.4 Copying Generates examples where the model must copy a bracketed span from a text. Uses schedule dictated by Table D.1.1 and samples an additional copy length C and distance length D depending on the phase E Licenses nanoGPT Our implementation of the vanilla GPT-2 is based on the nanoGPT repository (https://github.com/ karpathy/nanoGPT), which is licensed under the MIT License. 16 EleutherAI GPT-Neo-125M We directly use the EleutherAI GPT-Neo 125M model checkpoint and weights, available via the Hugging Face Model Hub at https://huggingface.co/EleutherAI/gpt-neo-125m. This model is released under the MIT License. C4 Dataset Our model was trained on the C4 dataset (https://huggingface.co/datasets/allenai/c4), which is provided under the Open Data Commons Attribution License (ODC-BY). tiktoken We use the tiktoken library from OpenAI for tokenization (https://github.com/openai/tiktoken), which is released under the MIT License. 17 F Complete Experimental Results F.1 Synthetic Task Accuracies Across Test Lengths We stress test RoPE models at a sequence length of 2048-twice the pretraining block size of 1024-as relative position embeddings naturally support extrapolation beyond the training context window. In contrast, absolute positional encodings (APE) cannot generalize to sequences longer than those seen during pretraining. Table 7: Accuracy (%) across evaluation lengths for each model train on List Recall Model (Train Length) 128 256 512 1024 2048 GPT-2 APE (128) 4.2 1.2 0.0 0.0 - GPT-2 APE (256) 6.8 2.4 0.0 0.0 - GPT-2 APE (512) 19.8 9.5 3.6 0.0 - Meta + APE (128) 100.0 86.4 12.0 4.1 - Meta + APE (256) 100.0 98.6 42.6 3.9 - Meta + APE (512) 100.0 100.0 98.7 11.1 - Meta + RoPE (128) 100.0 60.7 5.9 0.0 0.0 Meta + RoPE (256) 100.0 100.0 48.6 23.5 0.0 Meta + RoPE (512) 100.0 100.0 99.3 58.9 5.6 GPT-Neo-125M 85.6 86.0 81.2 - - Table 8: Accuracy (%) across evaluation lengths for each model trained on Segment Counting. Each model is evaluated on longer contexts than seen during training. Model (Train Length) 128 256 512 1024 2048 GPT-2 APE (128) 32.1 27.4 20.2 0.0 - GPT-2 APE (256) 40.3 56.2 23.7 0.0 - GPT-2 APE (512) 30.1 32.1 25.0 0.0 - Meta + APE (128) 77.4 55.9 25.0 11.1 - Meta + APE (256) 83.3 77.4 53.6 22.4 - Meta + APE (512) 91.7 79.8 80.9 33.3 - Meta + RoPE (128) 77.4 64.3 25.0 22.7 0.0 Meta + RoPE (256) 64.3 64.3 35.7 33.3 0.0 Meta + RoPE (512) 90.9 91.4 95.3 66.7 11.1 GPT-Neo-125M 31.4 25.9 24.9 - - 18 Table 9: Accuracy (%) across evaluation lengths for each model train on Parity Model (Train Length) 128 256 512 1024 2048 GPT-2 APE (128) 75.0 56.0 53.4 45.2 - GPT-2 APE (256) 75.0 67.0 60.7 46.2 - GPT-2 APE (512) 75.0 54.8 60.0 40.5 - Meta + APE (128) 100.0 75.0 67.9 52.4 - Meta + APE (256) 100.0 97.6 96.4 69.1 - Meta + APE (512) 100.0 100.0 100.0 86.7 - Meta + RoPE (128) 100.0 66.7 76.2 59.5 44.1 Meta + RoPE (256) 97.6 100.0 96.4 61.9 52.4 Meta + RoPE (512) 100.0 100.0 100.0 69.1 63.1 GPT-Neo-125M 80.4 59.1 54.8 - - Table 10: Accuracy (%) across evaluation lengths for each model trained on Copying Model (Train Length) 128 256 512 1024 2048 GPT-2 APE (128) 6.0 5.3 3.0 0.0 - GPT-2 APE (256) 6.8 6.0 5.7 0.0 - GPT-2 APE (512) 3.8 4.8 7.8 0.0 - Meta + APE (128) 100.0 66.7 76.2 2.6 - Meta + APE (256) 100.0 100.0 96.4 7.9 - Meta + APE (512) 100.0 100.0 98.5 87.4 - Meta + RoPE (128) 96.6 73.0 5.2 0.0 0.0 Meta + RoPE (256) 98.2 100.0 23.6 9.3 3.2 Meta + RoPE (512) 99.0 98.9 98.9 89.4 11.8 GPT-Neo-125M 31.5 22.7 16.9 - - F.2 Ablations on Positional Encoding and Token Embedding Table 11: Accuracy (%) on the List-Recall task under different ablations: zeroing the positional encoding (No Pos), zeroing the text embeddings (No Embed), or zeroing both of the meta-tokens. Model (PE) Full No Pos No Embed Neither Meta + APE (128) 100.0 99.3 17.4 59.7 Meta + RoPE (128) 100.0 100.0 32.4 24.0 Meta + APE (256) 86.4 86.9 12.2 16.2 Meta + RoPE (256) 100.0 100.0 4.0 6.6 Meta + APE (512) 100.0 100.0 52.1 84.3 Meta + RoPE (512) 100.0 100.0 59.6 25.2 19 Table 12: Accuracy (%) on the Segment Counting task under different ablations: zeroing the positional encoding (No Pos), text embeddings (No Embed), or both, only on the meta-token. Model (Train Length) Full No Pos No Embed Neither Meta + APE (128) 77.4 63.1 31.0 47.6 Meta + APE (256) 83.3 88.1 32.1 40.5 Meta + APE (512) 91.7 82.1 34.5 51.2 Meta + RoPE (128) 77.4 70.2 59.5 36.9 Meta + RoPE (256) 64.3 53.6 30.9 30.9 Meta + RoPE (512) 80.9 72.6 36.9 25.0 Table 13: Accuracy (%) on the Parity task under different ablations: zeroing the positional encoding (No Pos), text embeddings (No Embed), or both, only on the meta-token. Model (Train Length) Full No Pos No Embed Neither Meta + APE (128) 100.0 100.0 100.0 100.0 Meta + APE (256) 75.0 77.4 77.4 79.8 Meta + APE (512) 67.9 71.4 72.6 66.7 Meta + RoPE (128) 100.0 97.6 100.0 100.0 Meta + RoPE (256) 66.7 66.7 73.8 66.7 Meta + RoPE (512) 76.2 75.0 75.0 64.3 F.3 Positional Encoding Robustness Ablations Table 15: Accuracy (%) on the List Pointer task with Gaussian noise added to positional encoding. Model (Train Length) Noise 0.0 Noise 0.1 Noise 0.5 Noise 1.0 Noise 2.0 GPT-2 + APE (128) 4.8 1.2 2.4 2.6 3.5 GPT-2 + APE (256) 17.4 11.9 4.6 3.6 3.2 GPT-2 + APE (512) 14.0 16.3 16.7 17.9 14.3 Meta + APE (128) 98.7 98.6 67.5 55.6 42.8 Meta + APE (256) 81.8 79.7 48.9 43.1 37.9 Meta + APE (512) 100.0 100.0 79.5 65.5 57.1 Meta + RoPE (128) 98.1 100.0 100.0 96.0 88.9 Meta + RoPE (256) 100.0 100.0 100.0 97.9 82.6 Meta + RoPE (512) 100.0 100.0 100.0 98.8 81.0 20 Table 14: Accuracy (%) on the Copying task under different ablations: zeroing the positional encoding (No Pos), text embeddings (No Embed), or both, only on the meta-token. Model (Train Length) Full No Pos No Embed Neither Meta + APE (128) 96.6 93.2 7.2 4.8 Meta + APE (256) 98.2 99.6 5.0 3.6 Meta + APE (512) 99.0 96.6 5.7 5.4 Meta + RoPE (128) 100.0 99.6 6.9 4.9 Meta + RoPE (256) 100.0 100.0 4.5 5.1 Meta + RoPE (512) 100.0 95.6 6.9 4.9 Table 16: Accuracy (%) on the Copying task with Gaussian noise added to positional encoding. Model (Train Length) Noise 0.0 Noise 0.1 Noise 0.5 Noise 1.0 Noise 2.0 GPT-2 Abs (128) 2.9 1.2 0.0 0.0 0.0 GPT-2 Abs (256) 6.0 7.1 3.6 0.8 0.7 GPT-2 Abs (512) 6.0 5.8 3.6 0.4 0.3 Meta + APE (128) 96.1 98.5 69.8 58.6 54.9 Meta + APE (256) 100.0 100.0 76.3 68.8 57.2 Meta + APE (512) 98.9 98.7 74.4 68.9 50.5 Meta + RoPE (128) 100.0 100.0 75.9 68.6 49.9 Meta + RoPE (256) 100.0 100.0 82.6 65.6 45.1 Meta + RoPE (512) 100.0 100.0 84.4 67.6 46.3 21 F.4 Length Generalization Ability under No Positional Encoding Ablation Table 17: List-Recall: "No Pos" vs. Full accuracy for Meta-attention with APE and Meta-attention with RoPE. Model (Split, Train Len) Full No Pos ∆(pp) Meta + APE (128) small - - - medium 77.8% 88.9% +11.1 hard 11.1% 22.2% +11.1 Meta + APE (256) small 100.0% 100.0% 0.0 medium 100.0% 100.0% 0.0 hard 44.4% 22.2% -22.2 Meta + APE (512) small - - - medium - - - hard 100.0% 100.0% 0.0 Meta + RoPE (128) small - - - medium 44.4% 55.6% +11.1 hard 11.1% 11.1% 0.0 extra-hard 0.0% 0.0% 0.0 long 0.0% 11.1% +11.1 Meta + RoPE (256) small 100.0% 100.0% 0.0 medium 100.0% 100.0% 0.0% hard 33.3% 66.7% +33.3 extra-hard 0.0% 22.2% +22.2 long 0.0 0.0 0.0 Meta + RoPE (512) small - - - medium 100.0% 100.0% 0.0 hard 100.0% 100.0% 0.0 extra-hard 44.4% 55.6% +11.1 long 0.0% 0.0% 0.0 22 G Theoretical Analysis G.1 Proof of Theorem 4.1 Lemma G.1. Let l1, l2, . . . , lN be logits and define softmax distribution αj = exp(lj) PN k=1 exp(lk). Suppose that for some "correct" index j∗we have lj∗= L, and for all other indices j ̸= j∗, lj ≤L -∆for some ∆> 0. Then, entropy H(α) is strictly decreasing in ∆. Proof. First, we can group the other logits (i.e. j ̸= j∗, such that S = P j̸=j∗elj. Then, since each lj carries the property that elj ≤eL-∆given lj∗= L, we have that S ≤(N -1)eL-∆since there are N -1 terms. Revisiting the softmax α, we have that αj∗= eL eL+S ≥ eL eL+(N-1)eL-∆= 1 1+(N-1)e-∆. We will denote this quantity as p henceforth. Next, each other softmax αj for j ̸= j∗must have the property that αj = el eL+S ≤ eL-∆ eL(1+(N-1)e-∆) = e-∆ 1+(N-1)e-∆= 1-p N-1. As a result, we have the following entropy maximization problem: maximize α1,...,αN - N X j=1 αj log αj subject to N X j=1 αj = 1, αj∗= p, αj ≥0, j = 1, . . . , N. Observe that the entropy (objective) function is Schur-concave in α, so it is maximized when αj∗= p and the remaining softmax mass is split uniformly over the N -1 elements, i.e. αj = 1-p N-1 ∀j ̸= j∗. Plugging this in for H(α) yields: H(α) ≤-p log p -(1 -p) log(1 -p) + (1 -p) log(N -1) (3) Next, we aim to study the relationship between H and ∆. By the chain rule, dH d∆= dH dp · dp d∆. dH dp = -(1 + log p) + log 1-p N-1 + 1 = log 1-p (N-1)p. Substituting 1-p p = (N -1)e-∆, we get dH dp = -∆and since ∆> 0, dH dp 0 since both numerator and denominator must be > 0. Therefore, dH d∆= -∆ (N-1)e-∆ [1+(N-1)e-∆]2 Eα[ln α]. As such, given αj∗> 0 and ln αj∗-Eα[ln α] > 0, this suggests that Covα(l′, ln α) > 0 and thus, d dtH(α) < 0. Therefore, we conclude that adding a positive logit boost at the meta-token index ("correct" logit) strictly decreases entropy, supporting the proposed "anchoring" effect notion. G.2 Proof of Theorem 5.1 Proof of Theorem 5.1. The meta-tokens are simply a new (latent) channel that may be utilized to search for candidate distributions. However, this latent can be ignored, yielding the original search space; that is, any encoder qφ(ˆx | x) that does not use meta-tokens can be implemented in the meta-token model by zeroing out all meta-token contributions. Therefore, Qabs ⊆Qmeta, where q = (qφ, qθ) over the feasible combinations of encoder and decoder. Naturally, minimizing a function over a larger feasible set cannot increase its minimum. Thus, for a fixed rate R, Dmeta(R) = min q∈Qmeta : I(X; ˆ X)=R D(q) ≤ min q∈Qabs : I(X; ˆ X)=R D(q) = Dabs(R). Note that the same result holds for RoPE in place of APE (i.e. DRoPE in place of Dabs), as well. G.3 Theorem C.2 Theorem G.2. Consider functions p : {0, . . . , T -1} →R and b : {-(T -1), . . . , T -1} →R for absolute postional biases and relative biases, respectively. Let Babs to be the set of all fixed absolute positional bias matrices Babs i,j = p(j) and Brel to be the set of all fixed relative biases Brel i,j = b(i -j). Let Bmeta be the set of bias matrices implementable by the Transformer augmented with meta-token embeddings {mt} which emit a content-dependent logit boost at their respective indices. Then, Babs ∪Brel ⊊Bmeta (5) Proof. We break this argument down into two parts →(i.) the forward direction, where we show that all absolute and relative biases without meta-tokens can be modeled by the meta-token model. (i) Babs ∪Brel ⊆Bmeta. Every B ∈Bmeta is obtained by choosing meta-token embeddings et ∈Rd at each position t and a linear head W, so that the total bias at (i, j) is Bi,j = P t Q⊤ i W et 1j=t. • Absolute case. Given p(j), set W ∈R1×d and choose each ej so that Q⊤ i W ej = p(j) ∀i. All other et̸=j are zero. Then Bi,j = p(j). • Relative case. Given b(i -j), place a meta-token at every position j. Choose W and embeddings ej so that Q⊤ i W ej = b(i -j) ∀i, j. For instance, if we let W = Id and arrange that ej encodes the vector b(1 -j), b(2 -j), . . . , b(T -j) , then Q⊤ i ej = b(i -j) when Qi is the i-th standard basis vector. Therefore, every absolute or relative bias (in Babs and Brel) lies in Bmeta. 24 (ii) There exists a bias B∗∈Bmeta such that B∗/∈Babs ∪Brel. Define a content-dependent bias B∗ i,j = f Cj where Cj is the full token context preceding position j and f is any non-constant function. Such a B∗arises by setting each meta-token embedding ej = f(Cj) and W = Id, so B∗∈Bmeta. However, if there was B∗∈Babs, then there is p(j) with p(j) = f(Cj) for all j and all possible Cj, which is impossible since Cj varies. Furthermore, if B∗∈Brel, then there is b(i -j) with b(i -j) = f(Cj) independent of i; again, this condition is impossible to be satisfied. Therefore B∗/∈Babs ∪Brel. As a result, we conclude that the biases represented by Bmeta contain the set of both absolute and relative biases without meta-tokens, and represent additional biases that cannot be represented without meta-tokens. The result of Theorem G.2 is that the introduction of meta-tokens strictly grows the expressivity of biases that may be represented, while still being entirely inclusive of the fixed realizable absolute and relative encoding biases. As a result, we do not "lose" anything representationally by introducing meta-tokens, from a positional biases standpoint. This enhanced expressive power also plays a role in enabling the model to learn to focus attention on relevant context spans, reinforcing the aforementioned sharpening effect. H Example Prompts for Synthetic Tasks We provide example prompts and target completions for each synthetic task used in our experiments. Each example illustrates the input format, the placement of _PAUSE_ meta-tokens, and the expected model output. List Recall Prompt: Fruits: orange peach banana plum Tools: hammer wrench level pliers Sports: boxing baseball boxing golf Spices: turmeric cardamom cumin cinnamon Animals: cat tiger wolf fox Professions: teacher nurse lawyer architect Vegetables: onion cucumber broccoli cucumber Instruments: piano clarinet violin piano Professions: teacher nurse lawyer _PAUSE_ architect Q: What is item 3 of Professions? _PAUSE_ Completion: lawyer Segment Counting Prompt: Fruits: orange peach banana plum Tools: _PAUSE_ hammer wrench level pliers _PAUSE_ Sports: boxing baseball boxing golf Spices: turmeric cardamom cumin cinnamon Q: How many times does wrench appear between the pauses around Tools? _PAUSE_ Completion: 1 25 Parity Prompt: Bits: 0 _PAUSE_ 1 0 0 Q: What is the XOR of all bits before this pause? _PAUSE_ Completion: 0 Copying Prompt: ...as ever Mr _PAUSE_ . Darcy had been, was much relieved by discerning such different feelings. They had not been long together before Darcy told her that Bingley was also coming to wait on her; and she had barely time to express her satisfaction, and prepare for such a visitor, when Bingley's quick step was heard on the stairs, and in a moment he entered the room. All Elizabeth's anger against him had been long done away; but had she still felt any, it could hardly have stood its ground against the unaffected cordiality with which he expressed himself on seeing her again. He inquired in a friendly, though general, way, after her family, and looked and spoke with the same _PAUSE_ good-humoured ease that he had ever done. Q: Copy the bracketed text. _PAUSE_ Completion: . Darcy had been, was much relieved by discerning such different feelings. They had not been long together before Darcy told her that Bingley was also coming to wait on her; and she had barely time to express her satisfaction, and prepare for such a visitor, when Bingley's quick step was heard on the stairs, and in a moment he entered the room. All Elizabeth's anger against him had been long done away; but had she still felt any, it could hardly have stood its ground against the unaffected cordiality with which he expressed himself on seeing her again. He inquired in a friendly, though general, way, after her family, and looked and spoke with the same. I Broader Impacts Statement Our work on learned meta-tokens and meta-attention offers a lightweight, data-efficient way to pre-train language models while demonstrating strong performance when fine-tuned for recall tasks. This suggests a path toward more capable, leaner language models that could be used to handle contexts such as like long legal or medical documents, extended multi-turn dialogues, or large codebases without resorting to prohibitively large architectures or expensive fine-tuning runs. Such models could bring real benefits to areas such as conversational agents for education or healthcare. Building off of prior literature that performs a 26 more explicit learned retrieval from the context (Mohtashami and Jaggi, 2023), this could induce improved and efficient in-line retrieval over vast corpora. Our work relates strongly to the recent debates in the language modeling community on the impact of positional encoding, particularly around works such as NoPE (Kazemnejad et al., 2023). We provide strong evidence that zeroing the positional encoding can improve performance, providing motivation for hybrid attention mechanisms such as RNoPE (Yang et al., 2025), and other, more efficient ways to pre-train language models with long-context modeling settings in mind. We note that advances in long-context modeling could introduce risks around misuse and unintended harm. More powerful context understanding over long ranges can fuel phishing text and distracted models, especially in the phase of noisy context. However, models trained on corpora without data pre-processing a priori may be subject to harmful behavior such as profane generations. In the context of our work, which uses standard, pre-filtered corpora, this issue is avoided; we encourage users to audit the data used for pre-training first. 27
|
2509.16274
|
Copyright ©2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Reconnecting Citizens to Politics via Blockchain -
Starting the Debate
Uwe Serdült
Centre for Democracy Studies Aarau (ZDA) at the University of Zurich, Switzerland,
uwe.serdult@zda.uzh.ch; College of Information Science and Engineering, Ritsumeikan University, Japan,
serdult@fc.ritsumei.ac.jp; Orcid [0000-0002-2383-3158].
Abstract: Elections are not the only but arguably one of the most important pillars for the proper
functioning of liberal democracies. Recent evidence across the globe shows that it is not
straightforward to conduct them in a free and fair manner. One constant concern is the role of
money in politics, more specifically, election campaign financing. Frequent scandals are proof of
the difficulties encountered with current approaches to tackle the issue. Suggestions on how to
overcome the problem exist but seem difficult to implement. With the help of blockchain
technology we might be able to make a step forward. A separate crypto currency specifically
designed to pay for costs of political campaigning and advertising could be introduced.
Admittedly, at this stage, there are many open questions. However, under the assumption that
blockchain technology is here to stay, it is an idea that deserves further exploration.
Keywords: blockchain technology, elections, democratic innovation, e-government, democracy
Acknowledgement: A thank you note goes to participants of the Procivis Think Tank meeting, 21
September 2018, held at Trust Square, Zurich, Switzerland, for valuable inputs and discussions.
1. Democratic Malaise
At the current state of affairs in the history of democracies across the globe, we are facing a
paradoxical situation. On one hand we were able to observe the emergence and growth of the
number of formal democracies over the last couple of decades, mainly in Latin America and Europe.
On the other hand there seems to be a certain and deeply rooted dissatisfaction with politics in
general, even in well-established polities of the so-called Western world. The disillusionment about
politics can partly be related to a lack of performance legitimacy in the aftermaths of the most recent
financial and economic crises but not exclusively. According to Crouch (2004), the deeper reason for
this conundrum is the fact that many democracies have entered a post democratic phase. In post
democracies formal requirements of a democratic polity as we define them – including basic political
freedoms and rights, the rule of law and the separation of powers just to mention a few – are fully
met. Elections take place in an orderly fashion and governmental forces do alter. However, politics
186
Reflections & Viewpoints
has moved beyond so called first moments of democracy characterized by genuine mass
participation, deliberation and a broad engagement of civil society. Even though elections are
obviously being held, citizens now know they do not really matter much anymore. In post
democracies boredom, frustration and disillusionment about politics are kicking in. With the
knowledge of public relations experts as well as scientists, professionalized political forces such as
politicians and lobbying groups have learned how to manipulate public opinion thus turning
elections campaigns into a theatre orchestrated for the mass media. In order to find out what people
want politicians started imitating show business and marketing techniques to increase their chances
of getting elected or re-elected. Party platforms have become more superficial and less
discriminatory, leaving the voters clueless about content and vulnerable to the personalization of
politics. Important policies are being negotiated behind closed doors once elections are over, mainly
between governments and big business. Corruption is not only a problem of the global South but
represents a deeply entrenched phenomenon in politics in general. Citizens take part in decision-
making only very unfrequently or not at all. Democracies thus tend to turn into empty procedural
shells which can formally not be called undemocratic but leave many citizens – better educated than
ever in human history – frustrated with politics, low respect for politicians and ultimately erode the
legitimacy of democracies. Even if we were not fully convinced by that grim characterization and
the statement that many countries entered a post-democratic stage (Merkel, 2014) we must concede
that a couple of the described symptoms are difficult to reason away. Additionally, in most
established democracies we observe a decline in electoral turnout accompanied by the rather bizarre
situation that state authorities frequently have to admonish citizens to turn out and run actual
campaigns for that.
Digital democracy, at least in the early stages, has been seen as a beacon of hope. However, the
hope that the less powerful or financially less potent actors can have a say in politics with as little as
an internet connection did not come true. It might have led to a further disillusionment instead:
using the communication tools of the internet seems to require professional solutions, time and
money, thus again favoring the larger, well established and well financed organizations. This is not
to say that there is not the occasional exception which did not exist before such as a blog entry or a
tweet causing a turn of developments or mobilizing the masses for a while. But this is clearly the
exception. Also, the more radical approach of trying to circumvent dirty politics with the help of the
internet all together, leading to a more direct and at the same time liquid democracy, does currently
not seem to be a valuable alternative. Day-to-day politics caught up with such approaches quickly
and we cannot see them taking root yet. Granted, more time might be needed but even then the
looming question is whether liquid democracy would be capable of producing coherent policy
output in the long run. A third option which is currently representing the most common and
accepted track, is a digital democracy interpreted in the sense of bringing the citizens closer to the
state and public administrations so that more efficient problem solutions can be found. This digital
democracy approach currently seems to have the highest potential but also comes with some
dangers. The closer citizens interact with the state the higher the danger for data misuse (for example
in case of a change of government to one of a different color). In general, elements of digital
democracy did not yet seem able to unleash its full constructive power, on the contrary, with micro-
targeting of potential electors and increasing tendencies to influence voters on social media, also
from abroad, we currently are faced with fighting the unwanted effects of the digital in politics.
Reflections & Viewpoints
187
2. Current Election Campaign Financing Rules and Suggested Reforms
Elections and election campaigns play a crucial role in democracies. They represent the high service
transferring the powers from citizens to representatives and rulers for yet another term. It is also a
time when through acts, procedures and symbolism democracies become visible and even tangible
for the largest part of the electorate. Arguably, elections are not the only but certainly one of the
most fundamental pillars for the proper functioning of liberal democracies. Unfortunately, evidence
across the globe demonstrates how difficult it is to conduct them in a free and fair manner, even in
more advanced liberal democracies. In particular, a constant concern is the role of money in politics,
more specifically, money used for election campaign financing. Frequent scandals are proof of the
difficulties encountered with current approaches to tackle the issue. As reported in a comprehensive,
systematic and up to date analysis of political finance in comparative perspective (Norris et al., 2015)
by the Electoral Integrity project and partners (Global Integrity, 2015) we are forced to conclude that
the de facto situation often does not match with what regulation would prescribe. While a few
countries score high on electoral integrity without detailed legal frameworks on how money can be
used during election campaigns the opposite is also possible. In the majority of the covered countries
the regulation in place does not affect third party actors such as political action committees, unions
and certain not for profit organizations. The monitoring for direct and indirect financing of election
campaigns furthermore shows that rules are often not very specific, that documentation is
incomplete, incomprehensible and delivered late. Furthermore, in most countries public resources
are being used for campaigns which makes it more difficult to quantify real costs and to track their
use. The report comes to the quite devastating conclusion that the violation of rules is rather the
norm. Only in four out of the 54 monitored countries did the authors not find any violations during
the most recent elections.
Literature shows that there is a long political struggle to regulate election campaign donations.
Typically, legislation would require campaigners, mostly political parties and candidates, to register
and disclose donations they receive (Gilland Lutz & Hug, 2010). However, experience shows the
futility of this approach. Having personally being able to check what kind of documentation is
routinely handed in under the regime of some of the Swiss cantons with a disclosure rule for political
donations (Serdült, 2010; Braun Binder 2015) it is not surprising to comprehend the conclusions of
the upper mentioned international reports. Several contributions for a political party taken down in
the name of the treasurer of the respective political party are not conducive to increasing trust in the
system. Reporting duties for donations in the official gazette simply being ignored for years
demonstrate that controls seem to be lax and the consequences eventually not severe in the case of
infringements. The maximum fine could be well beyond the amounts received. The rules are
sometimes also relatively easy to circumvent. Larger amounts of money can be split up to go beyond
the allowed amount and thus stay under the radar screen. Contributions can be made to
organizations not falling under the legislation. They can also originate from or be channeled to
accounts abroad. In case campaign ad space is voluntarily given away for free because of ideological
proximity between media owners and political party there is even no money trail at all. Occasionally,
campaign financing scandals pop up, but they probably only represent the tip of the iceberg. In sum,
current regulation approaches seem to lead to bureaucratic and largely ineffective solutions.
However, if current campaign regulations have no positive effect on the fairness of elections better
188
Reflections & Viewpoints
solutions should be sought. A first cursory review of potential remedies found in the literature
reveals the following:
Ayres and Bulow (1996) suggested a fund controlling for the identity of donors but then to
relay the money to the receiver anonymously,
a Council of Europe (2004) report put forward the idea to use vouchers instead of the national
currency,
Lessig (2011) proposed a reform of campaign financing allowing citizens to donate vouchers.
Whereas governance of any campaign financing regulation is going to stay key despite of the
technology applied, a distributed approach involving not only public administrations or electoral
management boards created to supervise all matters elections but the public in general might help
to achieve a paradigm shift in the not so distant future.
3. A Blockchain Technology-based Approach
Thanks to distributed leadger technology - colloquially referred to as blockchains (Wattenhofer,
2019) - new options are now available, helping to combine and implement ideas such as the
introduction of vouchers and partly anonymous donations in a more persistent way. Lessig’s notion
of paper vouchers can directly be reformulated in blockchain terminology terms as tokens. The logic
behind the introduction of vouchers is that donations and campaign financing increase the risk of
corruption and that one should try to extract those money flows from a market in which transactions
are difficult to trace. With blockchain technology political campaigns can be tracked and financed
by a crypto token owned by the people. Such crypto vouchers can have a duration and nominal
value. They can be issued and distributed for every election or even for a complete legislature. In
case there is a need, additional vouchers could be released. Each country or constituency could
therefore create its own vouchers within a couple of hours. The suggested blockchain system would
allow tracing all flows of the campaign crypto currency and to keep the total amount spent under
control. However, the arguably more important innovation of the suggested approach is not only
the technical aspect per se but the direct involvement of citizens. Every registered voter would
henceforth have a small but direct stake in the electoral race. As a much welcomed side effect,
interest in politics and therefore turnout might increase as well.
As a starting point for further reflection, token owners would, in the first place, have the following
three options: they can pass tokens on to any other citizen, candidate or political group (donate), sell
them in a market (trade) or even decide not to use them at all (abstain). For contested electoral races
the price of a token could go up. National banks would be potential organizations capable to issue
campaign tokens. They can hold them in their books just like any other currency. Last but not least,
the most important point, all citizens are directly reconnected to politics by receiving tokens which
define the total amount for campaign spending.
Interdisciplinary research to study technical, regulatory, economic as well as behavioral
dynamics of such a blockchain-based campaign financing solution is of course much needed. The
following research questions can serve as a preliminary guide for the development of future
feasibility studies. The by no means exhaustive list of questions can be grouped into four domains:
Reflections & Viewpoints
189
1) Governance and legal: Which legal requirements apply to a cryptocurrency for political
campaigns? In particular, which constraints are imposed by the principle of economic
freedom and what requirements must be met with regard to ensuring electoral freedom?
Which monetary law provisions would be necessary for the introduction for a separate
cryptocurrency for political campaigns?
2) Technological: How should the tokens be designed? How can token donations stay
anonymous but become public when reaching a certain threshold? Can secure and easy to
use applications for individual use be envisaged?
3) Economic: How much would the introduction and administration of campaign tokens cost
regarding administration as well as energy consumption? How can those costs be financed?
4) Behavioural: How do citizens react to the idea of creating tokens to finance political
campaigns? Would they be willing to use them? Which patterns of token use can we
observe?
4. Discussion
The suggested additional use case of blockchain technology in the public domain (Rizal Batubara et
al., 2018) for a core aspect of any democracy worthy of its name has the potential to shed new light
on one of its long looming conundrums. It provides for a prospective and optimistic way on how
technology can be helpful for the design of a future (if not completely but increasingly) digital
democracy. Through a transdisciplinary approach comprising legal, economic, technical and
experimental elements, the proposal to create a decentral election token provides public authorities,
politicians and society at large with an innovative template on how campaign financing could look
like in the not so distant future. Furthermore, the prospective aspect of the proposal allows
lawmakers to update themselves on the future challenges regarding the application of blockchain
technology in democratic political systems.
Feasibility studies could help to cast light on opportunities and risks for the use of blockchain
technology in the public domain. In that regard, Switzerland with its frequent referendum votes and
elections on three state levels is a particularly well-suited field of experimentation. Referendum
topics at stake do not necessarily and always follow the party lines. They can be cross-cutting. We
can therefore expect campaign financing donations to reveal non-partisan patterns as well.
However, whether citizens and other stakeholders would mainly donate along partisan lines,
diverge from the expected pattern or in an act of self-interest prefer to cash in their allocated amount
by selling the vouchers in a market (Fehr & Fischbacher, 2003) is an empirical question which should
be addressed in studies. Regulation will most probably need to be local and study results will
therefore not travel very well to other polities. However, all research along those lines will certainly
have an international appeal and novelty. Feasibility studies need not be restricted to referendum
votes such as in the Swiss case. On the contrary, the fact that Seattle in 2015 started an experiment
making use of publicly funded 100 USD paper vouchers during an electoral race hints at the fact that
the proposed, digitally enhanced campaign financing solution is not of a purely speculative nature
and deserves the attention of researchers, politicians and civil society organizations alike.
190
Reflections & Viewpoints
References
Ayres, I. & Bulow, J. (1998). The Donation Booth: Mandating Donor Anonymity to Disrupt the Market for
Political Influence. Stanford Law Review, 50(3), 837-891.
Braun Binder, N. (2015). Financing Popular Initiatives and Referendum Campaigns, in: Fraenkel-Haeberle,
C. & Kropp, S. & Palermo, F. & Sommermann, K.-P. (ed.), Citizen Participation in Multi-Level
Democracies, Leiden/Boston: Brill, 161–181.
Council of Europe (2004). The Future of Democracy in Europe: Trends, Analyses and Reforms. Coord. by
Schmitter, P. and Trechsel, A. Strasbourg: Council of Europe Publishing.
Crouch, C. (2004). Post Democracy. Cambridge: Polity Press.
Fehr, E. & Fischbacher, U. (2003). The nature of human altruism. Nature, 425, 785-791.
Gilland Lutz, K. & Hug, S. (Eds.) (2010). Financing Referendum Campaigns. New York: Palgrave.
Global Integrity (2015). The Money, Politics and Transparency Campaign Finance Indicators: Assessing
Regulation and Practice in 54 Countries across the World in 2014. Washington DC: Global Integrity
Report.
Lessig, L. (2011). Republic Lost. See: www.lessig.org/books/
Merkel, W. (2014). Is There a Crisis of Democracy? Democratic Theory, 1(2), 11-25.
Norris, P. & Abel van Es, A. & Fennis, E. (2015). Checkbook Elections: Political Finance in Comparative
Perspective – Executive Report. University of Sydney, Australia.
Rizal Batubara, F. & Ubacht, J. & Janssen, M. (2018). Challenges of Blockchain Technology Adoption for e-
Government: A Systematic Literature Review. dg.o’18 Proceedings of the 19th Annual International
Conference on Digital Government Research: Governance in the Data Age, Article no 76. DOI:
https://doi.org/10.1145/3209281.3209317 .
Serdült, U. (2010). Referendum Campaign Regulations in Switzerland, in: Gilland Lutz, Karin and Hug,
Simon (Eds.) Financing Referendum Campaigns. New York: Palgrave, 165-179.
Wattenhofer, R. (2019). Blockchain Science: Distributed Ledger Technology. Inverted Forest Publishing.
About the Author
Uwe Serdült
Uwe Serdült occupies a dual position as a full professor at Ritsumeikan University, Japan, and a principle
investigator in the Center for Democracy Studies Aarau (ZDA) at the University of Zurich, Switzerland. He is
interested in research at the intersection of the social and information sciences, particularly in the field of
e-government and digital democracy.
|
for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Reconnecting Citizens to Politics via Blockchain - Starting the Debate Uwe Serdült Centre for Democracy Studies Aarau (ZDA) at the ; ; Orcid [0000-0002-2383-3158]. Abstract: Elections are not the only but arguably one of the most important pillars for the proper functioning of liberal democracies. Recent evidence across the globe shows that it is not straightforward to conduct them in a free and fair manner. One constant concern is the role of money in politics, more specifically, election campaign financing. Frequent scandals are proof of the difficulties encountered with current approaches to tackle the issue. Suggestions on how to overcome the problem exist but seem difficult to implement. With the help of blockchain technology we might be able to make a step forward. A separate crypto currency specifically designed to pay for costs of political campaigning and advertising could be introduced. Admittedly, at this stage, there are many open questions. However, under the assumption that blockchain technology is here to stay, it is an idea that deserves further exploration. Keywords: blockchain technology, elections, democratic innovation, e-government, democracy Acknowledgement: A thank you note goes to participants of the Procivis Think Tank meeting, 21 September 2018, held at Trust Square, Zurich, Switzerland, for valuable inputs and discussions. 1. Democratic Malaise At the current state of affairs in the history of democracies across the globe, we are facing a paradoxical situation. On one hand we were able to observe the emergence and growth of the number of formal democracies over the last couple of decades, mainly in Latin America and Europe. On the other hand there seems to be a certain and deeply rooted dissatisfaction with politics in general, even in well-established polities of the so-called Western world. The disillusionment about politics can partly be related to a lack of performance legitimacy in the aftermaths of the most recent financial and economic crises but not exclusively. According to Crouch (2004), the deeper reason for this conundrum is the fact that many democracies have entered a post democratic phase. In post democracies formal requirements of a democratic polity as we define them - including basic political freedoms and rights, the rule of law and the separation of powers just to mention a few - are fully met. Elections take place in an orderly fashion and governmental forces do alter. However, politics 186 Reflections & Viewpoints has moved beyond so called first moments of democracy characterized by genuine mass participation, deliberation and a broad engagement of civil society. Even though elections are obviously being held, citizens now know they do not really matter much anymore. In post democracies boredom, frustration and disillusionment about politics are kicking in. With the knowledge of public relations experts as well as scientists, professionalized political forces such as politicians and lobbying groups have learned how to manipulate public opinion thus turning elections campaigns into a theatre orchestrated for the mass media. In order to find out what people want politicians started imitating show business and marketing techniques to increase their chances of getting elected or re-elected. Party platforms have become more superficial and less discriminatory, leaving the voters clueless about content and vulnerable to the personalization of politics. Important policies are being negotiated behind closed doors once elections are over, mainly between governments and big business. Corruption is not only a problem of the global South but represents a deeply entrenched phenomenon in politics in general. Citizens take part in decisionmaking only very unfrequently or not at all. Democracies thus tend to turn into empty procedural shells which can formally not be called undemocratic but leave many citizens - better educated than ever in human history - frustrated with politics, low respect for politicians and ultimately erode the legitimacy of democracies. Even if we were not fully convinced by that grim characterization and the statement that many countries entered a post-democratic stage (Merkel, 2014) we must concede that a couple of the described symptoms are difficult to reason away. Additionally, in most established democracies we observe a decline in electoral turnout accompanied by the rather bizarre situation that state authorities frequently have to admonish citizens to turn out and run actual campaigns for that. Digital democracy, at least in the early stages, has been seen as a beacon of hope. However, the hope that the less powerful or financially less potent actors can have a say in politics with as little as an internet connection did not come true. It might have led to a further disillusionment instead: using the communication tools of the internet seems to require professional solutions, time and money, thus again favoring the larger, well established and well financed organizations. This is not to say that there is not the occasional exception which did not exist before such as a blog entry or a tweet causing a turn of developments or mobilizing the masses for a while. But this is clearly the exception. Also, the more radical approach of trying to circumvent dirty politics with the help of the internet all together, leading to a more direct and at the same time liquid democracy, does currently not seem to be a valuable alternative. Day-to-day politics caught up with such approaches quickly and we cannot see them taking root yet. Granted, more time might be needed but even then the looming question is whether liquid democracy would be capable of producing coherent policy output in the long run. A third option which is currently representing the most common and accepted track, is a digital democracy interpreted in the sense of bringing the citizens closer to the state and public administrations so that more efficient problem solutions can be found. This digital democracy approach currently seems to have the highest potential but also comes with some dangers. The closer citizens interact with the state the higher the danger for data misuse (for example in case of a change of government to one of a different color). In general, elements of digital democracy did not yet seem able to unleash its full constructive power, on the contrary, with microtargeting of potential electors and increasing tendencies to influence voters on social media, also from abroad, we currently are faced with fighting the unwanted effects of the digital in politics. Reflections & Viewpoints 187 2. Current Election Campaign Financing Rules and Suggested Reforms Elections and election campaigns play a crucial role in democracies. They represent the high service transferring the powers from citizens to representatives and rulers for yet another term. It is also a time when through acts, procedures and symbolism democracies become visible and even tangible for the largest part of the electorate. Arguably, elections are not the only but certainly one of the most fundamental pillars for the proper functioning of liberal democracies. Unfortunately, evidence across the globe demonstrates how difficult it is to conduct them in a free and fair manner, even in more advanced liberal democracies. In particular, a constant concern is the role of money in politics, more specifically, money used for election campaign financing. Frequent scandals are proof of the difficulties encountered with current approaches to tackle the issue. As reported in a comprehensive, systematic and up to date analysis of political finance in comparative perspective (Norris et al., 2015) by the Electoral Integrity project and partners (Global Integrity, 2015) we are forced to conclude that the de facto situation often does not match with what regulation would prescribe. While a few countries score high on electoral integrity without detailed legal frameworks on how money can be used during election campaigns the opposite is also possible. In the majority of the covered countries the regulation in place does not affect third party actors such as political action committees, unions and certain not for profit organizations. The monitoring for direct and indirect financing of election campaigns furthermore shows that rules are often not very specific, that documentation is incomplete, incomprehensible and delivered late. Furthermore, in most countries public resources are being used for campaigns which makes it more difficult to quantify real costs and to track their use. The report comes to the quite devastating conclusion that the violation of rules is rather the norm. Only in four out of the 54 monitored countries did the authors not find any violations during the most recent elections. Literature shows that there is a long political struggle to regulate election campaign donations. Typically, legislation would require campaigners, mostly political parties and candidates, to register and disclose donations they receive (Gilland Lutz & Hug, 2010). However, experience shows the futility of this approach. Having personally being able to check what kind of documentation is routinely handed in under the regime of some of the Swiss cantons with a disclosure rule for political donations (Serdült, 2010; Braun Binder 2015) it is not surprising to comprehend the conclusions of the upper mentioned international reports. Several contributions for a political party taken down in the name of the treasurer of the respective political party are not conducive to increasing trust in the system. Reporting duties for donations in the official gazette simply being ignored for years demonstrate that controls seem to be lax and the consequences eventually not severe in the case of infringements. The maximum fine could be well beyond the amounts received. The rules are sometimes also relatively easy to circumvent. Larger amounts of money can be split up to go beyond the allowed amount and thus stay under the radar screen. Contributions can be made to organizations not falling under the legislation. They can also originate from or be channeled to accounts abroad. In case campaign ad space is voluntarily given away for free because of ideological proximity between media owners and political party there is even no money trail at all. Occasionally, campaign financing scandals pop up, but they probably only represent the tip of the iceberg. In sum, current regulation approaches seem to lead to bureaucratic and largely ineffective solutions. However, if current campaign regulations have no positive effect on the fairness of elections better 188 Reflections & Viewpoints solutions should be sought. A first cursory review of potential remedies found in the literature reveals the following: Ayres and Bulow (1996) suggested a fund controlling for the identity of donors but then to relay the money to the receiver anonymously, a Council of Europe (2004) report put forward the idea to use vouchers instead of the national currency, Lessig (2011) proposed a reform of campaign financing allowing citizens to donate vouchers. Whereas governance of any campaign financing regulation is going to stay key despite of the technology applied, a distributed approach involving not only public administrations or electoral management boards created to supervise all matters elections but the public in general might help to achieve a paradigm shift in the not so distant future. 3. A Blockchain Technology-based Approach Thanks to distributed leadger technology - colloquially referred to as blockchains (Wattenhofer, 2019) - new options are now available, helping to combine and implement ideas such as the introduction of vouchers and partly anonymous donations in a more persistent way. Lessig's notion of paper vouchers can directly be reformulated in blockchain terminology terms as tokens. The logic behind the introduction of vouchers is that donations and campaign financing increase the risk of corruption and that one should try to extract those money flows from a market in which transactions are difficult to trace. With blockchain technology political campaigns can be tracked and financed by a crypto token owned by the people. Such crypto vouchers can have a duration and nominal value. They can be issued and distributed for every election or even for a complete legislature. In case there is a need, additional vouchers could be released. Each country or constituency could therefore create its own vouchers within a couple of hours. The suggested blockchain system would allow tracing all flows of the campaign crypto currency and to keep the total amount spent under control. However, the arguably more important innovation of the suggested approach is not only the technical aspect per se but the direct involvement of citizens. Every registered voter would henceforth have a small but direct stake in the electoral race. As a much welcomed side effect, interest in politics and therefore turnout might increase as well. As a starting point for further reflection, token owners would, in the first place, have the following three options: they can pass tokens on to any other citizen, candidate or political group (donate), sell them in a market (trade) or even decide not to use them at all (abstain). For contested electoral races the price of a token could go up. National banks would be potential organizations capable to issue campaign tokens. They can hold them in their books just like any other currency. Last but not least, the most important point, all citizens are directly reconnected to politics by receiving tokens which define the total amount for campaign spending. Interdisciplinary research to study technical, regulatory, economic as well as behavioral dynamics of such a blockchain-based campaign financing solution is of course much needed. The following research questions can serve as a preliminary guide for the development of future feasibility studies. The by no means exhaustive list of questions can be grouped into four domains: Reflections & Viewpoints 189 1) Governance and legal: Which legal requirements apply to a cryptocurrency for political campaigns? In particular, which constraints are imposed by the principle of economic freedom and what requirements must be met with regard to ensuring electoral freedom? Which monetary law provisions would be necessary for the introduction for a separate cryptocurrency for political campaigns? 2) Technological: How should the tokens be designed? How can token donations stay anonymous but become public when reaching a certain threshold? Can secure and easy to use applications for individual use be envisaged? 3) Economic: How much would the introduction and administration of campaign tokens cost regarding administration as well as energy consumption? How can those costs be financed? 4) Behavioural: How do citizens react to the idea of creating tokens to finance political campaigns? Would they be willing to use them? Which patterns of token use can we observe? 4. Discussion The suggested additional use case of blockchain technology in the public domain (Rizal Batubara et al., 2018) for a core aspect of any democracy worthy of its name has the potential to shed new light on one of its long looming conundrums. It provides for a prospective and optimistic way on how technology can be helpful for the design of a future (if not completely but increasingly) digital democracy. Through a transdisciplinary approach comprising legal, economic, technical and experimental elements, the proposal to create a decentral election token provides public authorities, politicians and society at large with an innovative template on how campaign financing could look like in the not so distant future. Furthermore, the prospective aspect of the proposal allows lawmakers to update themselves on the future challenges regarding the application of blockchain technology in democratic political systems. Feasibility studies could help to cast light on opportunities and risks for the use of blockchain technology in the public domain. In that regard, Switzerland with its frequent referendum votes and elections on three state levels is a particularly well-suited field of experimentation. Referendum topics at stake do not necessarily and always follow the party lines. They can be cross-cutting. We can therefore expect campaign financing donations to reveal non-partisan patterns as well. However, whether citizens and other stakeholders would mainly donate along partisan lines, diverge from the expected pattern or in an act of self-interest prefer to cash in their allocated amount by selling the vouchers in a market (Fehr & Fischbacher, 2003) is an empirical question which should be addressed in studies. Regulation will most probably need to be local and study results will therefore not travel very well to other polities. However, all research along those lines will certainly have an international appeal and novelty. Feasibility studies need not be restricted to referendum votes such as in the Swiss case. On the contrary, the fact that Seattle in 2015 started an experiment making use of publicly funded 100 USD paper vouchers during an electoral race hints at the fact that the proposed, digitally enhanced campaign financing solution is not of a purely speculative nature and deserves the attention of researchers, politicians and civil society organizations alike. 190 Reflections & Viewpoints References Ayres, I. & Bulow, J. (1998). The Donation Booth: Mandating Donor Anonymity to Disrupt the Market for Political Influence. Stanford Law Review, 50(3), 837-891. Braun Binder, N. (2015). Financing Popular Initiatives and Referendum Campaigns, in: Fraenkel-Haeberle, C. & Kropp, S. & Palermo, F. & Sommermann, K.-P. (ed.), Citizen Participation in Multi-Level Democracies, Leiden/Boston: Brill, 161-181. Council of Europe (2004). The Future of Democracy in Europe: Trends, Analyses and Reforms. Coord. by Schmitter, P. and Trechsel, A. Strasbourg: Council of Europe Publishing. Crouch, C. (2004). Post Democracy. Cambridge: Polity Press. Fehr, E. & Fischbacher, U. (2003). The nature of human altruism. Nature, 425, 785-791. Gilland Lutz, K. & Hug, S. (Eds.) (2010). Financing Referendum Campaigns. New York: Palgrave. Global Integrity (2015). The Money, Politics and Transparency Campaign Finance Indicators: Assessing Regulation and Practice in 54 Countries across the World in 2014. Washington DC: Global Integrity Report. Lessig, L. (2011). Republic Lost. See: www.lessig.org/books/ Merkel, W. (2014). Is There a Crisis of Democracy? Democratic Theory, 1(2), 11-25. Norris, P. & Abel van Es, A. & Fennis, E. (2015). Checkbook Elections: Political Finance in Comparative Perspective - Executive Report. . Rizal Batubara, F. & Ubacht, J. & Janssen, M. (2018). Challenges of Blockchain Technology Adoption for eGovernment: A Systematic Literature Review. dg.o'18 Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age, Article no 76. . Serdült, U. (2010). Referendum Campaign Regulations in Switzerland, in: Gilland Lutz, Karin and Hug, Simon (Eds.) Financing Referendum Campaigns. New York: Palgrave, 165-179. Wattenhofer, R. (2019). Blockchain Science: Distributed Ledger Technology. Inverted Forest Publishing. About the Author Uwe Serdült Uwe Serdült occupies a dual position as a full professor at Ritsumeikan University, Japan, and a principle investigator in the Center for Democracy Studies Aarau (ZDA) at the . He is interested in research at the intersection of the social and information sciences, particularly in the field of e-government and digital democracy.
|
2509.16277
|
Stabilizing Information Flow: Entropy Regularization
for Safe and Interpretable Autonomous Driving
Perception
Haobo Yang, Shiyan Zhang, Zhuoyi Yang, Jilong Guo, Jun Li, Xinyu Zhang∗
The State Key Laboratory of Automotive Safety and Energy,
School of Vehicle and Mobility, Tsinghua University
xyzhang@tsinghua.edu.cn
Abstract
Deep perception networks in autonomous driving traditionally rely on data-
intensive training regimes and post-hoc anomaly detection, often disregarding
fundamental information-theoretic constraints governing stable information pro-
cessing. We reconceptualize deep neural encoders as hierarchical communication
chains that incrementally compress raw sensory inputs into task-relevant latent
features. Within this framework, we establish two theoretically justified design prin-
ciples for robust perception: (D1) smooth variation of mutual information between
consecutive layers, and (D2) monotonic decay of latent entropy with network depth.
Our analysis shows that, under realistic architectural assumptions—particularly
blocks comprising repeated layers of similar capacity—enforcing smooth infor-
mation flow (D1) naturally encourages entropy decay (D2), thus ensuring stable
compression. Guided by these insights, we propose Eloss, a novel entropy-based
regularizer designed as a lightweight, plug-and-play training objective. Rather
than marginal accuracy improvements, this approach represents a conceptual shift:
it unifies information-theoretic stability with standard perception tasks, enabling
explicit, principled detection of anomalous sensor inputs through entropy devi-
ations. Experimental validation on large-scale 3D object detection benchmarks
(KITTI and nuScenes) demonstrates that incorporating Eloss consistently achieves
competitive or improved accuracy while dramatically enhancing sensitivity to
anomalies—amplifying distribution-shift signals by up to two orders of magnitude.
This stable information-compression perspective not only improves interpretability
but also establishes a solid theoretical foundation for safer, more robust autonomous
driving perception systems.
1
Introduction
Intelligent driving promises safer and more efficient urban mobility [1]. A core enabler is 3D object
detection [2, 3], whose models must operate under tight latency constraints and withstand harsh, ever-
changing road conditions. Unlike lab-curated benchmarks, real sensors routinely deliver anomalous
frames—e.g. under fog, rain, or sensor glitches—that can induce catastrophic perception errors.
Most perception stacks handle anomalies indirectly: they enlarge training sets, inject synthetic noise,
or bolt on post-hoc detectors. Such data-heavy remedies (i) assume that future outliers resemble
past ones and (ii) provide little insight into why the network fails. Consequently, accuracy gains on
"clean" test sets often fail to translate into robust on-road performance.
∗Corresponding author
Preprint. Under review.
arXiv:2509.16277v1 [cs.LG] 18 Sep 2025
info.
code
feature
info.
code
feature
feature
(a) Others Encoding: considering encoder as an end-to-end process.
(b) Ours Encoding: considering a encoder as a combination of smaller encoders.
abnormal
abnormal
abnormal
normal
normal
normal
unknown
unknown
unknown
Figure 1: Layer-wise view vs. end-to-end view. (a) Conventional pipelines treat the encoder as a
single compression block, hiding unstable jumps in information entropy. (b) Our stable information-
compression perspective decomposes the encoder into repeated layers inside each block and encour-
ages smooth entropy decay. Anomalous inputs (dashed red) violate this stability and become easy to
spot.
Communication view of deep perception. We instead model a deep encoder as a hierarchical
communication chain [4]: each layer compresses its input into a latent code that is forwarded to
the next layer. Drawing on source-coding theory [5], we articulate two design principles for any
well-behaved encoder:
1. Smooth compression. The mutual information between successive layers should change gradu-
ally.
2. Monotonic entropy decay. The entropy of the latent codes should decrease with depth.
Under the common prior that each block is built from repeated layers of similar capacity, enforcing
1 alone induces an approximately layer-invariant compression ratio, while 2 is observed to emerge
automatically. Anomalous frames disrupt this smooth profile (Fig. 1), yielding a principled detection
signal.
Contributions. We translate these insights into a practical training recipe for robust 3D perception:
2
• Stable information-compression framework. We formalize Principle 1 and show analytically
that repeated-layer similarity makes entropy decay (Principle 2) arise in expectation.
• Eloss: a plug-and-play surrogate. Introducing a continuous latent variable X, we derive Eloss as
the variance penalty on layer-wise entropy drops, fully differentiable and task-agnostic.
• Accuracy and robustness in practice. On large-scale autonomous-driving benchmarks, Eloss (i)
matches or exceeds baseline mAP and (ii) boosts anomaly-to-nominal signals by more than two
orders of magnitude, with smoother entropy trajectories as an interpretable by-product.
This work integrates information-theoretic stability directly into the training objective of 3D percep-
tion models and demonstrates its benefits for safety-critical autonomous driving.
2
Related Works
2.1
Uncertainty Quantification
Autonomous-driving perception must contend with occlusions, sensor noise, and other sources of
partial observability that undermine the fidelity of raw measurements [6]. Consequently, quantifying
predictive uncertainty has become a pivotal research theme.
Taxonomy of deep-learning uncertainty. Following [7], we distinguish two complementary sources.
Aleatoric uncertainty captures irreducible noise in the world (for example, weather-induced LiDAR
sparsity). Because more data cannot eliminate it, practitioners often resort to heteroscedastic mod-
elling; recent work shows that loss terms letting the network predict a variance value can improve
robustness [8]. Epistemic uncertainty, by contrast, reflects ignorance due to limited data or model
capacity and can be reduced given sufficient coverage of the input space. Prominent estimators
include Monte-Carlo (MC) dropout, which interprets dropout masks as variational samples [9], and
deep ensembles, which approximate a posterior with a collection of independent networks [10].
Open issues. Despite substantial progress, two gaps remain. First, most studies lack ground-truth
uncertainty labels and therefore evaluate only by correlation with prediction error [11]. Second,
there is no unified quantitative metric usable across classification, regression, and segmentation;
task-specific proxies proliferate instead [12]. Our work addresses both limitations. By casting the
encoder as a communication chain with stable information flow, we obtain (i) a binary ground truth
(nominal inputs yield stability, anomalies break it) and (ii) a single scalar, Eloss, whose magnitude is
comparable across network architectures and learning tasks whenever a repetitive block structure is
present.
2.2
Shannon’s Source–Coding View of Neural Encoders
Classical communication systems enjoy first-principles guarantees from information theory: channel
capacity, rate–distortion trade-offs, and provably optimal source codes are all quantified via Shannon
entropy [5]. Mapping deep networks onto this framework provides a principled lens for both
interpretation and optimization.
Neural channels and joint source–channel coding. Early work by Mackay [13] cast neuron
activations as noisy channels, inspiring a wave of deep joint source–channel coding (JSCC) methods
that replace hand-engineered codecs with end-to-end CNNs [14, 15]. These models demonstrate that
learned encoders can match – or surpass – Shannon-limit efficiency when trained with appropriate
distortion objectives.
Source coding inside representation learning. Sharma et al. introduced fiducial coding into
variational autoencoders, enforcing Shannon’s first theorem so that the expected code length tracks
the latent entropy [16]. This alignment guarantees lossless representation under the chosen fidelity
criterion.
Information Bottleneck (IB) connection. The Information Bottleneck principle [17] views learning
as a compression problem: hidden layers ought to retain only task-relevant information. Empirically,
multilayer perceptrons first capture mutual information with the target before compressing it away.
Our work extends this idea by treating each pre-fusion feature extractor as a source coder. We
explicitly measure the entropy of every layer’s output and penalise the variance of the inter-layer
3
entropy change, thereby (i) restricting the optimization search space, (ii) accelerating convergence,
and (iii) enabling anomaly detection through violations of the expected entropy profile.
Taken together, these perspectives motivate our stable information-compression framework: by
encouraging near-ideal source-coding behaviour inside deep encoders, we inherit the interpretability
of communication theory while preserving the expressive power of modern neural networks.
3
Methodology
Encoder 1
Encoder 2
Encoder 3
Encoder 4
Block 1
Block 2
Encoder
Input Information
Output Information
Figure 2: Overview of Eloss. An input sample passes through two identical encoding blocks, each
divided into several near-identical sub-encoders (layers). Within each block we record intermediate
representations Hn, compute the entropy drops ∆Hn, and aggregate them with the variance penalty
L(∆H) to obtain a block-level divergence D. Summing the divergences across all blocks yields the
global objective Eloss = P
blocks D. Left: smooth compression inside the blue and yellow blocks; red
bars mark unstable layers when Eloss is absent.
Modern multi-modal perception stacks rely on information compression to discard signals that do
not assist the downstream task. From a communication-theoretic standpoint, this step is a distortion-
constrained source-coding problem: rare source symbols can be pruned to boost transmission
efficiency while preserving high reconstruction fidelity at the receiver [5]. This perspective is
illustrated in Figure 2, which outlines the structure and function of our proposed Eloss mechanism.
Entropy as a proxy for information. Shannon entropy quantifies the expected information content
of a random variable; lower entropy indicates a more informative code. We therefore measure the
entropy of each layer’s latent representation and treat its layer-to-layer change as a direct indicator of
compression progress.
Stable compression constraint. To prevent abrupt distortions, we penalise the variance of the
entropy drop across successive layers. This encourages a nearly constant compression ratio and
improves the efficiency of subsequent fusion and detection modules.
Block granularity and the role of D. Our target backbones (for example, SECOND for LiDAR or
ResNet-style CNNs) consist of repeated blocks whose internal layer structure is virtually identical.
For each block b we compute
Db = λ Lb,
where Lb is the variance penalty derived from that block’s entropy drops. The overall regularizer is
the sum of these block-level divergences: Eloss = P
b Db. Operating at block granularity keeps the
stability assumption local and lets Eloss scale gracefully with network depth.
Comparison with Information Bottleneck. Unlike conventional Information Bottleneck (IB) ap-
proaches that focus on whether latent features discard irrelevant information, our method emphasizes
how the compression proceeds—specifically, its stability and continuity across layers. Traditional IB
quantifies the amount of compression between input and output and often relies on variational bounds
or adversarial training to approximate mutual information. In contrast, Eloss directly regularizes the
4
variance of layer-wise entropy drops, resulting in a fully differentiable, structure-aware objective that
plugs into any CNN or Transformer backbone. Moreover, while IB is typically applied to shallow or
single-layer encoders, our approach scales to deep perception networks with repeated sub-structures,
enforcing a globally stable information flow throughout the entire model. This shift—from isolated
compression metrics to end-to-end compression stability—makes Eloss a lightweight yet principled
constraint for modern, safety-critical perception systems.
The remainder of this section (i) formalizes the entropy estimator, (ii) derives the expression for Lb,
and (iii) explains its plug-and-play integration with standard training objectives.
3.1
Entropy Expectation Across Network Layers
Neural training can be viewed as a search for a mapping between inputs and targets [18]. Before
training, this mapping is weak; during optimization, each layer’s feature extractor—its local compres-
sion module—progressively improves. Yet the black-box nature of deep models obscures how, or
how much, information is actually being compressed [19].
Source-coding perspective. Feature compression corresponds to distortion-constrained source
coding: the network should drop only signal components irrelevant to the downstream task. Shannon
entropy is our proxy for information content. In a fixed-bandwidth channel, a steady entropy decrease
implies rising transmission efficiency [4].
Layer-wise expectation. Many perception backbones comprise repetitive blocks—e.g. the stacked
linear layers in SECOND [20]. Because each block has near-identical capacity, we treat the internal
bandwidth as constant and posit the following expectation:
The entropy of successive layer outputs should decay smoothly and monotonically.
Implication for training. By formulating an entropy-based loss that penalises large deviations from
this expected decay, we guide optimization toward stable, layer-wise compression. This turns an
opaque training loop into a process grounded in information theory, yielding both interpretability and
improved convergence.
3.2
Uncertainty Quantification via Entropy Deviations
When raw sensory data traverse a feature-extraction backbone, each layer produces a new feature
map and—ideally—reduces information entropy by filtering out task-irrelevant signals. Because no
external information is injected, the entropy trajectory should be monotonically decreasing.
Smooth compression under nominal inputs. For a block-repetitive network, the entropy drop
∆Hn = Hn+1 −Hn is roughly proportional to that block’s parameter budget. Successive blocks
of identical width and structure should therefore exhibit similar ∆Hn, yielding the stable profile
introduced above.
Entropy spikes reveal anomalies.
Abnormal inputs—e.g. corrupted point clouds or sensor
glitches—break this regularity. Noise injects spurious variance that (i) disrupts the latent code
ordering and (ii) can even increase entropy in deeper layers. We flag an input as anomalous when-
ever its entropy sequence {H0, H1, . . . , HL} deviates beyond a tolerance band around the expected
smooth decay. Practically, Eloss magnifies these deviations: large values correspond to blocks where
|∆Hn| diverges from the nominal slope, yielding a unified, architecture-agnostic uncertainty signal.
3.3
Probabilistic Modelling of Layer Outputs
Estimating the entropy Hn of each layer output requires a tractable density model. We therefore cast
every latent tensor as samples drawn from a continuous random vector.
Feature maps as random vectors. Following Zou et al. [4], the i channels generated by a convolu-
tional kernel form ˜X = {x1, . . . , xi}, independent realisations of a d-dimensional random variable
X, where d is the number of spatial positions per channel. Given ˜X, we fit a simple diagonal-Gaussian
density pn(·) and compute ˆHn = −Ex∼pn
log pn(x)
, an unbiased estimator of the true entropy.
Architecture agnosticism. This construction is not limited to CNNs. Transformer tokens, point-
cloud pillars, or MLP embeddings can be reshaped so that the last dimension indexes channels and
5
the rest form the sample axis; the same estimator then applies, enabling a unified entropy-based loss
across backbone families.
3.4
Entropy Estimation with k-Nearest Neighbours
Although the Gaussian proxy is fast, we adopt the non-parametric k-nearest-neighbour (kNN)
estimator [21] for unbiased evaluation.
Differential entropy. For a continuous random vector X with density f,
h(X) = −
Z
f(x) log f(x) dx.
(1)
k-NN estimator. Given n i.i.d. samples, let rd,k(xi) be the distance from xi to its kth neighbour in
Rd, and Vd the unit-ball volume. Then
ˆH(X, k) = −ψ(k) + ψ(n) + log Vd + d
n
n
X
i=1
log rd,k(xi),
(2)
where ψ(·) is the digamma function and ψ(1) = −γ with the Euler–Mascheroni constant γ ≈0.5772.
We default to k = 1 (the Kozachenko–Leonenko estimator) unless stated.
Inter-layer entropy drop. We set Hn ≡ˆH(Xn, k) and ∆Hn = Hn+1 −Hn. These drops feed
directly into the variance penalty Lb defined below.
3.5
Loss Function for Stable Compression
Our backbones are organised into M repeated blocks, each containing N near-identical sub-encoders.
Within block b ∈{1, . . . , M} let ∆Hb,n = Hb,n+1 −Hb,n denote the entropy drop of sub-encoder
n.
Variance penalty Lb. Smooth compression implies low variance among the drops inside a block:
Lb =
1
N
N
X
n=1
∆Hb,n −∆Hb
2,
(3)
where ∆Hb is the mean drop of block b. At optimum, Lb →0, indicating identical per-layer
compression.
Block-level divergence Db. We scale the penalty with a single hyper-parameter λ:
Db = λ Lb.
(4)
Global regularizer Eloss. The final objective sums divergences across all blocks:
Eloss =
M
X
b=1
Db.
(5)
During training, Eloss acts as an information-flow regularizer: it sharpens layer-wise interpretability
within each compression block while leaving task-specific heads untouched.
Scope and limitations. Because the formulation assumes repeated sub-encoders of comparable
capacity, its influence is localised to such structures (e.g. CNN stages, transformer layers). Our
experiments later quantify this boundary and show that Eloss complements—rather than replaces—the
primary task loss L.
4
Experiments
Datasets and Experimental Setup We evaluate our method on KITTI and nuScenes. The KITTI
benchmark—jointly released by Karlsruhe Institute of Technology and Toyota Technological Institute
6
at Chicago—remains one of the most widely used datasets for intelligent-driving perception [22, 23].
It provides multi-modal data (RGB/grayscale stereo pairs, LiDAR point clouds, GPS/IMU) collected
in urban, rural, and highway scenes. Following standard practice, we use the official split with 7 481
training samples and 7 518 test samples.
nuScenes extends KITTI in both scale and sensing diversity. Each of its 1 000 twenty-second scenes
is captured with six surround cameras, one 32-beam LiDAR, five radars, GPS, and IMU. The dataset
contains 1.4 M images, 390 K LiDAR sweeps, and 1.4 M human-annotated 3D boxes spanning 23
object categories—roughly seven times the annotation volume of KITTI. Key frames are sampled at
2 Hz (40 per scene) and fully labelled; intermediate sweeps are provided without annotation.
Implementation details. All experiments are run on a cluster with eight NVIDIA RTX 4090 GPUs
with over 100 hours of computation time. Models are implemented in PYTORCH and trained with the
MMDETECTION3D toolbox [24], whose data loaders, evaluation scripts, and baseline backbones
we reuse. Unless otherwise noted, we keep the toolbox’s default training schedules and add our
information-flow regularizer with a single weight λ = 1.0, summed with the standard detection loss.
Evaluation metrics. For 3D object detection on KITTI we report Average Precision under the R40
protocol (APR40) for the Car, Pedestrian, and Cyclist classes on the Easy, Moderate, and Hard splits.
On nuScenes we follow the official evaluation and report: (i) mean Average Precision (mAP) across
the 10 object categories, (ii) the nuScenes Detection Score (NDS), a composite metric that combines
mAP with five True Positive metrics (translation, scale, orientation, velocity, and attribute error), and
(iii) per-class AP at a 1 m center-distance threshold (APdist1.0). All results are computed using the
MMDetection3D toolkit’s built-in evaluators.
4.1
Sensitivity of Eloss to Abnormal Inputs
Model & Dataset
Value
Confidence (no Eloss)
Eloss (metric only)
Eloss (metric + loss)
clean
noise1
noise2
clean
noise1
noise2
clean
noise1
noise2
VoxelNet KITTI
Mean
0.495
0.248
0.248
0.015
0.008
0.009
1.58E−3
9.09E−3
8.70E−3
%∆
0.0
-49.9
-49.9
0.0
-48.5
-39.1
0.0
+473.5
+449.0
PointPillars KITTI
Mean
0.487
0.344
0.344
0.012
2.09
0.008
1.09E−1
3.84
–
%∆
0.0
-29.3
-29.3
0.0
+17476
-36.1
0.0
+3416
–
PointPillars nuScenes
Mean
0.168
0.128
0.128
0.034
1.92
0.016
2.56E−4
1.48E−1
4.27E−3
%∆
0.0
-23.7
-23.7
0.0
+5495
-54.5
0.0
+57746
+1572
Table 1: Sensitivity to abnormal inputs. Two corruption types were tested: noise1 applies additive
Gaussian noise to each point coordinate, while noise2 applies salt–pepper noise. Confidence refers
to the mean softmax score of the top predicted class. %∆shows the relative change in confidence
under each condition, computed as the percentage difference with respect to the corresponding clean
value in each column. For example, %∆= | confidencenoise−confidenceclean
confidenceclean
| × 100%.
Noise generation. We simulate two distinct corruptions on LiDAR frames: noise1 adds Gaussian
noise sampled from a standard normal distribution to each point coordinate; noise2 applies salt–pepper
noise by randomly setting points to either the frame’s maximum or minimum coordinate value.
Protocol We train PointPillars and VoxelNet with and without the variance-based regularizer, then
evaluate on clean KITTI and nuScenes test sets and on the two noisy variants above. For each
configuration we report: (i) Confidence—mean classification score from the baseline network; (ii)
Eloss (metric only)—post-hoc entropy variance on the baseline; (iii) Eloss (metric + loss)—entropy
variance of the model trained with the regularizer.
Findings
• Confidence is weakly sensitive. Noise lowers confidence modestly (up to –29%), making it hard
to distinguish corruption types or levels.
• Eloss is highly sensitive. Even as a post-hoc metric it increases by orders of magnitude under
noise1, and changes distinctly under noise2.
• Training with Eloss amplifies separation. With the regularizer, noise-induced spikes grow even
larger (e.g. +57 700% on nuScenes for noise1), while confidence remains flat.
7
Conclusion Eloss provides a robust, architecture-agnostic signal for distinguishing clean, Gaussian-
noised, and salt–pepper–noised inputs, making it an effective indicator of input quality.
4.2
Impact of Eloss on Multiple Architectures and Metrics
To study how the regularizer interacts with model capacity and with LiDAR–RGB fusion, we
freeze the same voxel encoder (identical to VoxelNet) and compare three architectures of increasing
complexity:
(a) SECOND: LiDAR-only baseline [20];
(b) SECOND+ResNet: early fusion of LiDAR voxels with RGB features from a ResNet-50
backbone [25];
(c) SECOND+FusionStack: the previous model augmented with a correlation module [26], a GNN
head [27], and an FPN neck [28].
Each model is fine-tuned for 40 epochs with or without Eloss (inserted into all SECOND blocks) and
then evaluated on KITTI. Table 2 lists Car / Cyclist / Pedestrian APR40 and the percentage change
(∆).
Architecture
Value
Car
Cyclist
Pedestrian
Easy
Mod.
Hard
Easy
Mod.
Hard
Easy
Mod.
Hard
SECOND
base
82.35
73.35
68.59
70.89
56.72
50.68
50.75
40.76
36.96
Eloss
82.68
73.67
67.21
71.99
58.00
50.94
50.49
41.16
37.43
%∆
+0.33
+0.32
-1.38
+1.10
+1.28
+0.26
-0.26
+0.40
+0.47
SECOND + ResNet
base
80.29
67.37
60.94
75.70
52.37
46.10
39.64
31.13
28.95
Eloss
77.62
64.92
60.36
71.47
55.79
49.64
44.85
35.60
32.66
%∆
-2.67
-2.45
-0.58
-4.23
+3.42
+3.54
+5.21
+4.47
+3.71
SECOND + FusionStack
base
73.47
62.47
57.99
63.08
49.55
44.33
42.46
35.11
32.16
Eloss
67.33
58.70
54.13
57.16
46.36
41.54
45.02
36.39
33.29
%∆
-6.14
-3.77
-3.86
-5.92
-3.19
-2.79
+2.56
+1.28
+1.13
Table 2: KITTI APR40 (%) after 40-epoch fine-tune with or without Eloss. Positive ∆means an
improvement.
Analysis
• SECOND. Adding Eloss gives small gains for Car (Easy/Mod) and Cyclist and keeps Pedestrian
almost unchanged.
• SECOND+ResNet. Early LiDAR-RGB fusion boosts minor classes (Cyclist and Pedestrian, up to
+5%) but reduces Car accuracy, suggesting that Eloss suppresses noisy image cues mostly useful
for large, frequent objects.
• SECOND+FusionStack. Deeper fusion magnifies the trade-off: Pedestrian improves while Car
and Cyclist drop. Heavy post-fusion processing may distort entropy-stabilized features unless
outer-layer learning rates are reduced; we leave this for future work.
In summary, Eloss often helps smaller or rarer categories once the network has enough capacity or
multi-modal context, but a strong variance constraint can hurt dominant classes when added after
extensive fusion.
4.3
Emergent Monotonic Compression in Feature Space
To examine how Eloss reshapes internal representations, we extract the output from each layer of
a single SECOND block and project the normalized feature tensors onto their first two principal
components, visualized via density contours in Figure 3.
Observation Without Eloss, the feature distributions remain irregular and expand outward layer-
by-layer, suggesting unstable information flow. In contrast, models trained with Eloss yield fea-
ture activations that progressively contract toward the origin and exhibit near-spherical density
shapes—especially in deeper layers.
8
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
PCA Dim 2
Layer 1 (w/o Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 2 (w/o Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 3 (w/o Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 4 (w/o Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
PCA Dim 2
Layer 1 (with Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 2 (with Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 3 (with Eloss)
0.04
0.02
0.00
0.02
0.04
PCA Dim 1
0.04
0.03
0.02
0.01
0.00
0.01
0.02
0.03
0.04
Layer 4 (with Eloss)
Figure 3: Comparison of layer-wise feature distributions in PCA space with and without Eloss. Each
subplot shows the product of two 1D Gaussians fitted separately to the first two principal components
of normalized feature activations from the SECOND backbone. The contour levels visualize the
approximated 2D density. Top row: Layers 1–4 without Eloss exhibit irregular or outward-drifting
energy spread. Bottom row: The same layers with Eloss display progressively contracting, nearly
spherical contours, indicating smoother and more stable compression. All plots use the same axis
limits (±0.04) and square aspect ratio for comparability.
Interpretation These spherical contours reflect the effect of the k-nearest-neighbour entropy estimator
(Equation 2), which is most stable under isotropic, compact distributions. The visual contraction
across layers confirms that enforcing low variance in entropy drops (Principle D1) is sufficient to
induce monotonic compression (Principle D2), without requiring an explicit monotonicity constraint.
Implication Figure 3 visually confirms that enforcing low-variance entropy drops produces not only a
gradual, layer-by-layer reduction in information spread but also an even, direction-neutral contraction
of feature activations. This uniform shaping improves the reliability of k-NN entropy estimates and
yields more robust, stable representations.
5
Conclusion
We introduced Eloss, a novel information-theoretic regularizer designed to enforce stable and inter-
pretable information flow within deep neural perception backbones. Our approach reframes deep
perception networks as hierarchical communication systems, guiding training toward smooth, con-
sistent compression of raw sensory inputs into task-relevant latent representations. By explicitly
penalizing the variance of entropy drops between layers, Eloss leverages foundational principles of
information theory, resulting in monotonic entropy decay and enhanced interpretability without the
need for explicit monotonic constraints.
Experiments on the large-scale autonomous driving benchmarks KITTI and nuScenes demonstrate
that Eloss consistently achieves comparable or improved detection accuracy relative to standard
baselines. More importantly, our method significantly increases model sensitivity to anomalous
inputs—boosting detection signals for distribution shifts by up to two orders of magnitude. These
outcomes reflect not merely incremental accuracy gains but a fundamental shift toward robust,
theory-grounded perception systems.
Our work emphasizes that stable, interpretable information processing is both achievable and ben-
eficial for safety-critical applications like autonomous driving. Future research directions include
adapting Eloss to networks with heterogeneous layer capacities, further reducing computational over-
head, and extending our framework to other challenging perception tasks and real-world operational
environments.
9
References
[1] Malene Freudendal-Pedersen, Sven Kesselring, and Eriketti Servou. What is smart for the future city?
mobilities and automation. Sustainability, 11(1):221, 2019.
[2] Shuaifeng Zhi, Yongxiang Liu, Xiang Li, and Yulan Guo. Lightnet: A lightweight 3d convolutional neural
network for real-time 3d object recognition. In 3DOR@ Eurographics, 2017.
[3] Yi Wu, Xiaoyan Jiang, Zhijun Fang, Yongbin Gao, and Hamido Fujita. Multi-modal 3d object detection by
2d-guided precision anchor proposal and multi-layer fusion. Applied Soft Computing, 108:107405, 2021.
[4] Zhenhong Zou, Xinyu Zhang, Huaping Liu, Zhiwei Li, Amir Hussain, and Jun Li. A novel multimodal
fusion network based on a joint coding model for lane line segmentation. Information Fusion, 80:167–178,
2022.
[5] Gareth A. Jones and J. Mary Jones. Information and Coding Theory. Springer London, 2000. doi:
10.1007/978-1-4471-0361-5.
[6] Boquan Yang, Jixiong Li, and Ting Zeng. A review of environmental perception technology based on
multi-sensor information fusion in autonomous driving. World Electric Vehicle Journal, 16(1):20, 2025.
[7] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision?
Advances in Neural Information Processing Systems, 30, 2017.
[8] Rory R. Griffiths, Alexander A. Aldrick, Manuel Garcia-Ortegon, and et al.
Achieving robustness
to aleatoric uncertainty with heteroscedastic bayesian optimisation. Machine Learning: Science and
Technology, 3(1):015004, 2021.
[9] Pranay Goel and Le Chen. On the robustness of monte carlo dropout trained with noisy labels. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2219–
2228, 2021.
[10] Janis Postels, Martino Segu, Timothy Sun, and et al. On the practicality of deterministic epistemic
uncertainty. arXiv preprint arXiv:2107.00649, 2021.
[11] Marília Barandas, Lorenzo Famiglini, Andrea Campagner, Duarte Folgado, Raquel Simão, Federico
Cabitza, and Hugo Gamboa. Evaluation of uncertainty quantification methods in multi-label classification:
A case study with automatic diagnosis of electrocardiogram. Information Fusion, 101:101978, 2024.
[12] Kaisei Fukaya, Damon Daylamani-Zad, and Harry Agius. Evaluation metrics for intelligent generation of
graphical game assets: a systematic survey-based framework. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 2024.
[13] David J C Mackay. Information theory, inference, and learning algorithms. Cambridge University Press,
(Imp, 2003.
[14] David Burth Kurka and Deniz Gündüz. Deepjscc-f: Deep joint source-channel coding of images with
feedback. IEEE Journal on Selected Areas in Information Theory, 1:178–193, 05 2020. doi: 10.1109/
JSAIT.2020.2987203. URL https://ieeexplore.ieee.org/document/9066966.
[15] Deep Joint Source-Channel Coding for Wireless Image Retrieval, 05 2020. ICASSP 2020 - 2020 IEEE In-
ternational Conference on Acoustics, Speech and Signal Processing (ICASSP). doi: 10.1109/ICASSP40776.
2020.9054078. URL https://arxiv.org/abs/1910.12703v1.
[16] Ansh Kumar Sharma, Rahul Kukreja, Ranjitha Prasad, and Shilpa Rao. Dagsurv: Directed ayclic graph
based survival analysis using deep neural networks. In Asian Conference on Machine Learning, pages
1065–1080. PMLR, 2021.
[17] The information bottleneck method, 04 2000. arXiv:physics/0004057. URL https://arxiv.org/abs/
physics/0004057.
[18] Chien-Yao Wang, I-Hau Yeh, and Hong-Yuan Mark Liao. Yolov9: Learning what you want to learn using
programmable gradient information. In European conference on computer vision, pages 1–21. Springer,
2024.
[19] Vanessa Buhrmester, David Münch, and Michael Arens. Analysis of explainers of black box deep neural
networks for computer vision: A survey. Machine Learning and Knowledge Extraction, 3(4):966–989,
2021.
10
[20] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 18(10):
3337, 2018.
[21] Willem van de Water and Piet Schram. Generalized dimensions from near-neighbor information. Physical
Review A, 37(8):3118, 1988.
[22] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision
benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, pages 3354–3361.
IEEE, 2012.
[23] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti
dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
[24] MMDetection3D Contributors. MMDetection3D: OpenMMLab next-generation platform for general 3D
object detection. https://github.com/open-mmlab/mmdetection3d, 2020.
[25] R Anand, S Vijaya Lakshmi, Digvijay Pandey, and Binay Kumar Pandey. An enhanced resnet-50 deep
learning model for arrhythmia detection using electrocardiogram biomedical indicators. Evolving Systems,
15(1):83–97, 2024.
[26] Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Zhenyu Guo, Yang Liu, Yuchen Yang, and Yao Zhao. Multi-modal
graph learning for disease prediction. IEEE Transactions on Medical Imaging, 2022.
[27] Yongjia Lei, Shuyuan Lin, Zhiying Li, Yachao Zhang, and Taotao Lai. Gnn-fused capsnet with multi-
head prediction for diabetic retinopathy grading. Engineering Applications of Artificial Intelligence, 133:
107994, 2024.
[28] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature
pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 2117–2125, 2017.
A
Comparison with Eloss on Training Process
0
5
10
15
20
Epoch
0.4
0.5
0.6
0.7
0.8
Accuracy
without eloss
with eloss
(a) Car APdist1.0
0
5
10
15
20
Epoch
0.05
0.10
0.15
0.20
0.25
0.30
0.35
Accuracy
without eloss
with eloss
(b) mAP
0
5
10
15
20
Epoch
0.15
0.20
0.25
0.30
0.35
0.40
0.45
0.50
Accuracy
without eloss
with eloss
(c) NDS
Figure 4: Convergences Curves of the model accuracy on NuSenes validation set for PointPillars
with or without Eloss. (a) the Average Precision of Car detection with Distance Threshold 1.0 meters;
(b) mean Average Precision computed across 10 class of objects; (c) nuScenes detection score.
To measure the impact of eloss on the model training process, we first conduct control experiments
on the same model with and without eloss on the KITTI dataset and Nusenes dataset without noise.
We plot the part of our experiment results in Figure 4 to more intuitively show the impact of eloss on
the volatility of the training process.
To quantify the impact process, we use the mean absolute value slope (Mean Absolute Value Slope,
MAVP) to measure the impact of eloss on the volatility of the model training curve and use the
maximum precision index to measure the impact of eloss on the model training accuracy. The MAVP
formula is as follows, where N is the number of sliding panes, and (k, k+1) refers to two adjacent
time windows.
MAVP = 1
N
N
X
k=1
(|xk+1| −|xk|))
(6)
11
We applied the pointpillars and voxelnet models to the KITTI dataset to conduct the above control
experiments. The experimental results are shown in the Table 3.
Model
Method
Max(%)
MAVP(%)
PointPillars
Without Eloss
90.694
11.946
With Eloss
88.916
11.932
Delta
-1.778
-0.014
VoxelNet
Without Eloss
94.873
10.959
With Eloss
94.586
10.937
Delta
-0.287
-0.022
Table 3: Car APR40 Max and MAVP on KITTI
validation set during training.
Metric
Method
Max(%)
MAVP(%)
Car APdist1.0
Without Eloss
79.580
2.945
With Eloss
79.520
2.811
Delta
-0.060
-0.135
mAP
Without Eloss
34.393
6.036
With Eloss
34.815
4.883
Delta
0.422
-1.153
NDS
Without Eloss
49.217
4.902
With Eloss
49.637
3.902
Delta
0.420
-1.000
Table 4: Max and MAVP of PointPillars on
nuScenes validation set during training.
The experimental results in the pivot table show that the maximum training accuracy decreases after
adding eloss to both models. In terms of MAVP, the MAVP decreased after adding eloss, which
means that the addition of eloss makes the above training process smoother.
On the Nuscence dataset, we perform the above control experiments on the PointPillars model with
three different metrics: Car APdist1.0, mAP, and NDS. The experimental results are shown in the
Table 4.
The table shows that the maximum accuracy of the Car category is lost during the training process
after adding eloss. Still, the decline in MAVP shows that the addition of eloss moderates the volatility
of the above training process. Similar observations for mAP, the average precision of multiple
categories, and NDS, the Nusenes detection score, indicate that adding eloss to the model makes the
training process smoother.
The above is the experiment on the influence of eloss on model training without noise interference.
In order to further understand the effect of eloss in the experiment, we will add Eloss to different
parts of the network or conduct control experiments with anomalous data.
B
PointPillar Sensitivity to Eloss Coverage
We resume training PointPillars models that have completed 80 epochs and inject Eloss into an
increasing number of SECOND blocks.2 Because the regularizer is fully plug-and-play, it can be
attached to any repetitive sub-structure. Here we vary its coverage from 0 to 3 blocks.
Observation. As the regularizer constrains more blocks, all three difficulty levels of Car APR40
steadily decline, while runtime rises from 4.5 ms (0 blocks) to 34.9 ms (3 blocks). This corroborates
the hypothesis that tighter entropy-stability constraints narrow the optimization landscape: the
detector sacrifices peak accuracy in exchange for stronger robustness priors and incurs additional
computation to evaluate the entropy terms.
Remark on VoxelNet. Repeating the coverage study on VoxelNet produces an interesting non-
monotonic pattern: accuracy first dips when one block is regularized, but rebounds—and even
surpasses the baseline—when two blocks are constrained. The complete results and discussion are
presented in Appendix C.
C
VoxelNet Sensitivity to Eloss Coverage
To verify whether the coverage pattern observed for PointPillars generalises, we repeat the block-
injection study on VoxelNet.
Discussion. Contrary to PointPillars (Figure 5), VoxelNet shows a non-monotonic trend: injecting
Eloss into a single block reduces all three AP scores, yet constraining two blocks surpasses the
baseline on Easy and Moderate. We posit two factors:
2All other hyper-parameters remain fixed; only the number of blocks governed by Eloss is varied.
12
0
1
2
3
Number of SECOND Blocks with Eloss
65
70
75
80
85
Car APR40 (%)
5
10
15
20
25
30
35
Time per Frame (ms)
Figure 5: PointPillars on KITTI with Eloss applied to 0–3 SECOND blocks. Circles = Easy, Squares
= Moderate, Triangles = Hard (left axis). Diamonds dashline = per-frame inference time (right
axis).
Model
Epoch
#Blocks w/ Eloss
Car APR40 (%)
Time (ms)
Easy
Mod.
Hard
VoxelNet
80
0
82.35
73.35
68.59
6.94
85
1
81.34
70.46
65.29
14.72
85
2
85.34
74.44
67.51
33.75
Table 5: VoxelNet on KITTI with Eloss applied to 0–2 SECOND blocks.
• Receptive-field size. VoxelNet’s 3D sparse convolutions already cover large spatial extents; a
stronger stability prior may curb over-fitting to high-frequency noise, yielding a net gain when
applied to multiple blocks.
• Normalisation effects. Preliminary ablations (omitted for space) indicate that layer-norm place-
ment modulates the strength of the entropy signal; one-block regularisation may under- or
over-weight this interaction.
A comprehensive sweep—including per-block ablations and alternative normalisation layouts—is
left for future work.
13
|
Stabilizing Information Flow: Entropy Regularization for Safe and Interpretable Autonomous Driving Perception Haobo Yang, Shiyan Zhang, Zhuoyi Yang, Jilong Guo, Jun Li, Xinyu Zhang∗ The State Key Laboratory of Automotive Safety and Energy, - intensive training regimes and post-hoc anomaly detection, often disregarding fundamental information-theoretic constraints governing stable information processing. We reconceptualize deep neural encoders as hierarchical communication chains that incrementally compress raw sensory inputs into task-relevant latent features. Within this framework, we establish two theoretically justified design principles for robust perception: (D1) smooth variation of mutual information between consecutive layers, and (D2) monotonic decay of latent entropy with network depth. Our analysis shows that, under realistic architectural assumptions-particularly blocks comprising repeated layers of similar capacity-enforcing smooth information flow (D1) naturally encourages entropy decay (D2), thus ensuring stable compression. Guided by these insights, we propose Eloss, a novel entropy-based regularizer designed as a lightweight, plug-and-play training objective. Rather than marginal accuracy improvements, this approach represents a conceptual shift: it unifies information-theoretic stability with standard perception tasks, enabling explicit, principled detection of anomalous sensor inputs through entropy deviations. Experimental validation on large-scale 3D object detection benchmarks (KITTI and nuScenes) demonstrates that incorporating Eloss consistently achieves competitive or improved accuracy while dramatically enhancing sensitivity to anomalies-amplifying distribution-shift signals by up to two orders of magnitude. This stable information-compression perspective not only improves interpretability but also establishes a solid theoretical foundation for safer, more robust autonomous driving perception systems. 1 Introduction Intelligent driving promises safer and more efficient urban mobility [1]. A core enabler is 3D object detection [2, 3], whose models must operate under tight latency constraints and withstand harsh, everchanging road conditions. Unlike lab-curated benchmarks, real sensors routinely deliver anomalous frames-e.g. under fog, rain, or sensor glitches-that can induce catastrophic perception errors. Most perception stacks handle anomalies indirectly: they enlarge training sets, inject synthetic noise, or bolt on post-hoc detectors. Such data-heavy remedies (i) assume that future outliers resemble past ones and (ii) provide little insight into why the network fails. Consequently, accuracy gains on "clean" test sets often fail to translate into robust on-road performance. ∗Corresponding author Preprint. Under review. 18 Sep 2025 info. code feature info. code feature feature (a) Others Encoding: considering encoder as an end-to-end process. (b) Ours Encoding: considering a encoder as a combination of smaller encoders. abnormal abnormal abnormal normal normal normal unknown unknown unknown Figure 1: Layer-wise view vs. end-to-end view. (a) Conventional pipelines treat the encoder as a single compression block, hiding unstable jumps in information entropy. (b) Our stable informationcompression perspective decomposes the encoder into repeated layers inside each block and encourages smooth entropy decay. Anomalous inputs (dashed red) violate this stability and become easy to spot. Communication view of deep perception. We instead model a deep encoder as a hierarchical communication chain [4]: each layer compresses its input into a latent code that is forwarded to the next layer. Drawing on source-coding theory [5], we articulate two design principles for any well-behaved encoder: 1. Smooth compression. The mutual information between successive layers should change gradually. 2. Monotonic entropy decay. The entropy of the latent codes should decrease with depth. Under the common prior that each block is built from repeated layers of similar capacity, enforcing 1 alone induces an approximately layer-invariant compression ratio, while 2 is observed to emerge automatically. Anomalous frames disrupt this smooth profile (Fig. 1), yielding a principled detection signal. Contributions. We translate these insights into a practical training recipe for robust 3D perception: 2 • Stable information-compression framework. We formalize Principle 1 and show analytically that repeated-layer similarity makes entropy decay (Principle 2) arise in expectation. • Eloss: a plug-and-play surrogate. Introducing a continuous latent variable X, we derive Eloss as the variance penalty on layer-wise entropy drops, fully differentiable and task-agnostic. • Accuracy and robustness in practice. On large-scale autonomous-driving benchmarks, Eloss (i) matches or exceeds baseline mAP and (ii) boosts anomaly-to-nominal signals by more than two orders of magnitude, with smoother entropy trajectories as an interpretable by-product. This work integrates information-theoretic stability directly into the training objective of 3D perception models and demonstrates its benefits for safety-critical autonomous driving. 2 Related Works 2.1 Uncertainty Quantification Autonomous-driving perception must contend with occlusions, sensor noise, and other sources of partial observability that undermine the fidelity of raw measurements [6]. Consequently, quantifying predictive uncertainty has become a pivotal research theme. Taxonomy of deep-learning uncertainty. Following [7], we distinguish two complementary sources. Aleatoric uncertainty captures irreducible noise in the world (for example, weather-induced LiDAR sparsity). Because more data cannot eliminate it, practitioners often resort to heteroscedastic modelling; recent work shows that loss terms letting the network predict a variance value can improve robustness [8]. Epistemic uncertainty, by contrast, reflects ignorance due to limited data or model capacity and can be reduced given sufficient coverage of the input space. Prominent estimators include Monte-Carlo (MC) dropout, which interprets dropout masks as variational samples [9], and deep ensembles, which approximate a posterior with a collection of independent networks [10]. Open issues. Despite substantial progress, two gaps remain. First, most studies lack ground-truth uncertainty labels and therefore evaluate only by correlation with prediction error [11]. Second, there is no unified quantitative metric usable across classification, regression, and segmentation; task-specific proxies proliferate instead [12]. Our work addresses both limitations. By casting the encoder as a communication chain with stable information flow, we obtain (i) a binary ground truth (nominal inputs yield stability, anomalies break it) and (ii) a single scalar, Eloss, whose magnitude is comparable across network architectures and learning tasks whenever a repetitive block structure is present. 2.2 Shannon's Source-Coding View of Neural Encoders Classical communication systems enjoy first-principles guarantees from information theory: channel capacity, rate-distortion trade-offs, and provably optimal source codes are all quantified via Shannon entropy [5]. Mapping deep networks onto this framework provides a principled lens for both interpretation and optimization. Neural channels and joint source-channel coding. Early work by Mackay [13] cast neuron activations as noisy channels, inspiring a wave of deep joint source-channel coding (JSCC) methods that replace hand-engineered codecs with end-to-end CNNs [14, 15]. These models demonstrate that learned encoders can match - or surpass - Shannon-limit efficiency when trained with appropriate distortion objectives. Source coding inside representation learning. Sharma et al. introduced fiducial coding into variational autoencoders, enforcing Shannon's first theorem so that the expected code length tracks the latent entropy [16]. This alignment guarantees lossless representation under the chosen fidelity criterion. Information Bottleneck (IB) connection. The Information Bottleneck principle [17] views learning as a compression problem: hidden layers ought to retain only task-relevant information. Empirically, multilayer perceptrons first capture mutual information with the target before compressing it away. Our work extends this idea by treating each pre-fusion feature extractor as a source coder. We explicitly measure the entropy of every layer's output and penalise the variance of the inter-layer 3 entropy change, thereby (i) restricting the optimization search space, (ii) accelerating convergence, and (iii) enabling anomaly detection through violations of the expected entropy profile. Taken together, these perspectives motivate our stable information-compression framework: by encouraging near-ideal source-coding behaviour inside deep encoders, we inherit the interpretability of communication theory while preserving the expressive power of modern neural networks. 3 Methodology Encoder 1 Encoder 2 Encoder 3 Encoder 4 Block 1 Block 2 Encoder Input Information Output Information Figure 2: Overview of Eloss. An input sample passes through two identical encoding blocks, each divided into several near-identical sub-encoders (layers). Within each block we record intermediate representations Hn, compute the entropy drops ∆Hn, and aggregate them with the variance penalty L(∆H) to obtain a block-level divergence D. Summing the divergences across all blocks yields the global objective Eloss = P blocks D. Left: smooth compression inside the blue and yellow blocks; red bars mark unstable layers when Eloss is absent. Modern multi-modal perception stacks rely on information compression to discard signals that do not assist the downstream task. From a communication-theoretic standpoint, this step is a distortionconstrained source-coding problem: rare source symbols can be pruned to boost transmission efficiency while preserving high reconstruction fidelity at the receiver [5]. This perspective is illustrated in Figure 2, which outlines the structure and function of our proposed Eloss mechanism. Entropy as a proxy for information. Shannon entropy quantifies the expected information content of a random variable; lower entropy indicates a more informative code. We therefore measure the entropy of each layer's latent representation and treat its layer-to-layer change as a direct indicator of compression progress. Stable compression constraint. To prevent abrupt distortions, we penalise the variance of the entropy drop across successive layers. This encourages a nearly constant compression ratio and improves the efficiency of subsequent fusion and detection modules. Block granularity and the role of D. Our target backbones (for example, SECOND for LiDAR or ResNet-style CNNs) consist of repeated blocks whose internal layer structure is virtually identical. For each block b we compute Db = λ Lb, where Lb is the variance penalty derived from that block's entropy drops. The overall regularizer is the sum of these block-level divergences: Eloss = P b Db. Operating at block granularity keeps the stability assumption local and lets Eloss scale gracefully with network depth. Comparison with Information Bottleneck. Unlike conventional Information Bottleneck (IB) approaches that focus on whether latent features discard irrelevant information, our method emphasizes how the compression proceeds-specifically, its stability and continuity across layers. Traditional IB quantifies the amount of compression between input and output and often relies on variational bounds or adversarial training to approximate mutual information. In contrast, Eloss directly regularizes the 4 variance of layer-wise entropy drops, resulting in a fully differentiable, structure-aware objective that plugs into any CNN or Transformer backbone. Moreover, while IB is typically applied to shallow or single-layer encoders, our approach scales to deep perception networks with repeated sub-structures, enforcing a globally stable information flow throughout the entire model. This shift-from isolated compression metrics to end-to-end compression stability-makes Eloss a lightweight yet principled constraint for modern, safety-critical perception systems. The remainder of this section (i) formalizes the entropy estimator, (ii) derives the expression for Lb, and (iii) explains its plug-and-play integration with standard training objectives. 3.1 Entropy Expectation Across Network Layers Neural training can be viewed as a search for a mapping between inputs and targets [18]. Before training, this mapping is weak; during optimization, each layer's feature extractor-its local compression module-progressively improves. Yet the black-box nature of deep models obscures how, or how much, information is actually being compressed [19]. Source-coding perspective. Feature compression corresponds to distortion-constrained source coding: the network should drop only signal components irrelevant to the downstream task. Shannon entropy is our proxy for information content. In a fixed-bandwidth channel, a steady entropy decrease implies rising transmission efficiency [4]. Layer-wise expectation. Many perception backbones comprise repetitive blocks-e.g. the stacked linear layers in SECOND [20]. Because each block has near-identical capacity, we treat the internal bandwidth as constant and posit the following expectation: The entropy of successive layer outputs should decay smoothly and monotonically. Implication for training. By formulating an entropy-based loss that penalises large deviations from this expected decay, we guide optimization toward stable, layer-wise compression. This turns an opaque training loop into a process grounded in information theory, yielding both interpretability and improved convergence. 3.2 Uncertainty Quantification via Entropy Deviations When raw sensory data traverse a feature-extraction backbone, each layer produces a new feature map and-ideally-reduces information entropy by filtering out task-irrelevant signals. Because no external information is injected, the entropy trajectory should be monotonically decreasing. Smooth compression under nominal inputs. For a block-repetitive network, the entropy drop ∆Hn = Hn+1 -Hn is roughly proportional to that block's parameter budget. Successive blocks of identical width and structure should therefore exhibit similar ∆Hn, yielding the stable profile introduced above. Entropy spikes reveal anomalies. Abnormal inputs-e.g. corrupted point clouds or sensor glitches-break this regularity. Noise injects spurious variance that (i) disrupts the latent code ordering and (ii) can even increase entropy in deeper layers. We flag an input as anomalous whenever its entropy sequence {H0, H1, . . . , HL} deviates beyond a tolerance band around the expected smooth decay. Practically, Eloss magnifies these deviations: large values correspond to blocks where |∆Hn| diverges from the nominal slope, yielding a unified, architecture-agnostic uncertainty signal. 3.3 Probabilistic Modelling of Layer Outputs Estimating the entropy Hn of each layer output requires a tractable density model. We therefore cast every latent tensor as samples drawn from a continuous random vector. Feature maps as random vectors. Following Zou et al. [4], the i channels generated by a convolutional kernel form ̃X = {x1, . . . , xi}, independent realisations of a d-dimensional random variable X, where d is the number of spatial positions per channel. Given ̃X, we fit a simple diagonal-Gaussian density pn(·) and compute ˆHn = -Ex∼pn log pn(x) , an unbiased estimator of the true entropy. Architecture agnosticism. This construction is not limited to CNNs. Transformer tokens, pointcloud pillars, or MLP embeddings can be reshaped so that the last dimension indexes channels and 5 the rest form the sample axis; the same estimator then applies, enabling a unified entropy-based loss across backbone families. 3.4 Entropy Estimation with k-Nearest Neighbours Although the Gaussian proxy is fast, we adopt the non-parametric k-nearest-neighbour (kNN) estimator [21] for unbiased evaluation. Differential entropy. For a continuous random vector X with density f, h(X) = - Z f(x) log f(x) dx. (1) k-NN estimator. Given n i.i.d. samples, let rd,k(xi) be the distance from xi to its kth neighbour in Rd, and Vd the unit-ball volume. Then ˆH(X, k) = -ψ(k) + ψ(n) + log Vd + d n n X i=1 log rd,k(xi), (2) where ψ(·) is the digamma function and ψ(1) = -γ with the Euler-Mascheroni constant γ ≈0.5772. We default to k = 1 (the Kozachenko-Leonenko estimator) unless stated. Inter-layer entropy drop. We set Hn ≡ˆH(Xn, k) and ∆Hn = Hn+1 -Hn. These drops feed directly into the variance penalty Lb defined below. 3.5 Loss Function for Stable Compression Our backbones are organised into M repeated blocks, each containing N near-identical sub-encoders. Within block b ∈{1, . . . , M} let ∆Hb,n = Hb,n+1 -Hb,n denote the entropy drop of sub-encoder n. Variance penalty Lb. Smooth compression implies low variance among the drops inside a block: Lb = 1 N N X n=1 ∆Hb,n -∆Hb 2, (3) where ∆Hb is the mean drop of block b. At optimum, Lb →0, indicating identical per-layer compression. Block-level divergence Db. We scale the penalty with a single hyper-parameter λ: Db = λ Lb. (4) Global regularizer Eloss. The final objective sums divergences across all blocks: Eloss = M X b=1 Db. (5) During training, Eloss acts as an information-flow regularizer: it sharpens layer-wise interpretability within each compression block while leaving task-specific heads untouched. Scope and limitations. Because the formulation assumes repeated sub-encoders of comparable capacity, its influence is localised to such structures (e.g. CNN stages, transformer layers). Our experiments later quantify this boundary and show that Eloss complements-rather than replaces-the primary task loss L. 4 Experiments Datasets and Experimental Setup We evaluate our method on KITTI and nuScenes. The KITTI benchmark-jointly released by Karlsruhe 6 at Chicago-remains one of the most widely used datasets for intelligent-driving perception [22, 23]. It provides multi-modal data (RGB/grayscale stereo pairs, LiDAR point clouds, GPS/IMU) collected in urban, rural, and highway scenes. Following standard practice, we use the official split with 7 481 training samples and 7 518 test samples. nuScenes extends KITTI in both scale and sensing diversity. Each of its 1 000 twenty-second scenes is captured with six surround cameras, one 32-beam LiDAR, five radars, GPS, and IMU. The dataset contains 1.4 M images, 390 K LiDAR sweeps, and 1.4 M human-annotated 3D boxes spanning 23 object categories-roughly seven times the annotation volume of KITTI. Key frames are sampled at 2 Hz (40 per scene) and fully labelled; intermediate sweeps are provided without annotation. Implementation details. All experiments are run on a cluster with eight NVIDIA RTX 4090 GPUs with over 100 hours of computation time. Models are implemented in PYTORCH and trained with the MMDETECTION3D toolbox [24], whose data loaders, evaluation scripts, and baseline backbones we reuse. Unless otherwise noted, we keep the toolbox's default training schedules and add our information-flow regularizer with a single weight λ = 1.0, summed with the standard detection loss. Evaluation metrics. For 3D object detection on KITTI we report Average Precision under the R40 protocol (APR40) for the Car, Pedestrian, and Cyclist classes on the Easy, Moderate, and Hard splits. On nuScenes we follow the official evaluation and report: (i) mean Average Precision (mAP) across the 10 object categories, (ii) the nuScenes Detection Score (NDS), a composite metric that combines mAP with five True Positive metrics (translation, scale, orientation, velocity, and attribute error), and (iii) per-class AP at a 1 m center-distance threshold (APdist1.0). All results are computed using the MMDetection3D toolkit's built-in evaluators. 4.1 Sensitivity of Eloss to Abnormal Inputs Model & Dataset Value Confidence (no Eloss) Eloss (metric only) Eloss (metric + loss) clean noise1 noise2 clean noise1 noise2 clean noise1 noise2 VoxelNet KITTI Mean 0.495 0.248 0.248 0.015 0.008 0.009 1.58E-3 9.09E-3 8.70E-3 %∆ 0.0 -49.9 -49.9 0.0 -48.5 -39.1 0.0 +473.5 +449.0 PointPillars KITTI Mean 0.487 0.344 0.344 0.012 2.09 0.008 1.09E-1 3.84 - %∆ 0.0 -29.3 -29.3 0.0 +17476 -36.1 0.0 +3416 - PointPillars nuScenes Mean 0.168 0.128 0.128 0.034 1.92 0.016 2.56E-4 1.48E-1 4.27E-3 %∆ 0.0 -23.7 -23.7 0.0 +5495 -54.5 0.0 +57746 +1572 Table 1: Sensitivity to abnormal inputs. Two corruption types were tested: noise1 applies additive Gaussian noise to each point coordinate, while noise2 applies salt-pepper noise. Confidence refers to the mean softmax score of the top predicted class. %∆shows the relative change in confidence under each condition, computed as the percentage difference with respect to the corresponding clean value in each column. For example, %∆= | confidencenoise-confidenceclean confidenceclean | × 100%. Noise generation. We simulate two distinct corruptions on LiDAR frames: noise1 adds Gaussian noise sampled from a standard normal distribution to each point coordinate; noise2 applies salt-pepper noise by randomly setting points to either the frame's maximum or minimum coordinate value. Protocol We train PointPillars and VoxelNet with and without the variance-based regularizer, then evaluate on clean KITTI and nuScenes test sets and on the two noisy variants above. For each configuration we report: (i) Confidence-mean classification score from the baseline network; (ii) Eloss (metric only)-post-hoc entropy variance on the baseline; (iii) Eloss (metric + loss)-entropy variance of the model trained with the regularizer. Findings • Confidence is weakly sensitive. Noise lowers confidence modestly (up to -29%), making it hard to distinguish corruption types or levels. • Eloss is highly sensitive. Even as a post-hoc metric it increases by orders of magnitude under noise1, and changes distinctly under noise2. • Training with Eloss amplifies separation. With the regularizer, noise-induced spikes grow even larger (e.g. +57 700% on nuScenes for noise1), while confidence remains flat. 7 Conclusion Eloss provides a robust, architecture-agnostic signal for distinguishing clean, Gaussiannoised, and salt-pepper-noised inputs, making it an effective indicator of input quality. 4.2 Impact of Eloss on Multiple Architectures and Metrics To study how the regularizer interacts with model capacity and with LiDAR-RGB fusion, we freeze the same voxel encoder (identical to VoxelNet) and compare three architectures of increasing complexity: (a) SECOND: LiDAR-only baseline [20]; (b) SECOND+ResNet: early fusion of LiDAR voxels with RGB features from a ResNet-50 backbone [25]; (c) SECOND+FusionStack: the previous model augmented with a correlation module [26], a GNN head [27], and an FPN neck [28]. Each model is fine-tuned for 40 epochs with or without Eloss (inserted into all SECOND blocks) and then evaluated on KITTI. Table 2 lists Car / Cyclist / Pedestrian APR40 and the percentage change (∆). Architecture Value Car Cyclist Pedestrian Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard SECOND base 82.35 73.35 68.59 70.89 56.72 50.68 50.75 40.76 36.96 Eloss 82.68 73.67 67.21 71.99 58.00 50.94 50.49 41.16 37.43 %∆ +0.33 +0.32 -1.38 +1.10 +1.28 +0.26 -0.26 +0.40 +0.47 SECOND + ResNet base 80.29 67.37 60.94 75.70 52.37 46.10 39.64 31.13 28.95 Eloss 77.62 64.92 60.36 71.47 55.79 49.64 44.85 35.60 32.66 %∆ -2.67 -2.45 -0.58 -4.23 +3.42 +3.54 +5.21 +4.47 +3.71 SECOND + FusionStack base 73.47 62.47 57.99 63.08 49.55 44.33 42.46 35.11 32.16 Eloss 67.33 58.70 54.13 57.16 46.36 41.54 45.02 36.39 33.29 %∆ -6.14 -3.77 -3.86 -5.92 -3.19 -2.79 +2.56 +1.28 +1.13 Table 2: KITTI APR40 (%) after 40-epoch fine-tune with or without Eloss. Positive ∆means an improvement. Analysis • SECOND. Adding Eloss gives small gains for Car (Easy/Mod) and Cyclist and keeps Pedestrian almost unchanged. • SECOND+ResNet. Early LiDAR-RGB fusion boosts minor classes (Cyclist and Pedestrian, up to +5%) but reduces Car accuracy, suggesting that Eloss suppresses noisy image cues mostly useful for large, frequent objects. • SECOND+FusionStack. Deeper fusion magnifies the trade-off: Pedestrian improves while Car and Cyclist drop. Heavy post-fusion processing may distort entropy-stabilized features unless outer-layer learning rates are reduced; we leave this for future work. In summary, Eloss often helps smaller or rarer categories once the network has enough capacity or multi-modal context, but a strong variance constraint can hurt dominant classes when added after extensive fusion. 4.3 Emergent Monotonic Compression in Feature Space To examine how Eloss reshapes internal representations, we extract the output from each layer of a single SECOND block and project the normalized feature tensors onto their first two principal components, visualized via density contours in Figure 3. Observation Without Eloss, the feature distributions remain irregular and expand outward layerby-layer, suggesting unstable information flow. In contrast, models trained with Eloss yield feature activations that progressively contract toward the origin and exhibit near-spherical density shapes-especially in deeper layers. 8 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 PCA Dim 2 Layer 1 (w/o Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 2 (w/o Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 3 (w/o Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 4 (w/o Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 PCA Dim 2 Layer 1 (with Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 2 (with Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 3 (with Eloss) 0.04 0.02 0.00 0.02 0.04 PCA Dim 1 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.04 Layer 4 (with Eloss) Figure 3: Comparison of layer-wise feature distributions in PCA space with and without Eloss. Each subplot shows the product of two 1D Gaussians fitted separately to the first two principal components of normalized feature activations from the SECOND backbone. The contour levels visualize the approximated 2D density. Top row: Layers 1-4 without Eloss exhibit irregular or outward-drifting energy spread. Bottom row: The same layers with Eloss display progressively contracting, nearly spherical contours, indicating smoother and more stable compression. All plots use the same axis limits (±0.04) and square aspect ratio for comparability. Interpretation These spherical contours reflect the effect of the k-nearest-neighbour entropy estimator (Equation 2), which is most stable under isotropic, compact distributions. The visual contraction across layers confirms that enforcing low variance in entropy drops (Principle D1) is sufficient to induce monotonic compression (Principle D2), without requiring an explicit monotonicity constraint. Implication Figure 3 visually confirms that enforcing low-variance entropy drops produces not only a gradual, layer-by-layer reduction in information spread but also an even, direction-neutral contraction of feature activations. This uniform shaping improves the reliability of k-NN entropy estimates and yields more robust, stable representations. 5 Conclusion We introduced Eloss, a novel information-theoretic regularizer designed to enforce stable and interpretable information flow within deep neural perception backbones. Our approach reframes deep perception networks as hierarchical communication systems, guiding training toward smooth, consistent compression of raw sensory inputs into task-relevant latent representations. By explicitly penalizing the variance of entropy drops between layers, Eloss leverages foundational principles of information theory, resulting in monotonic entropy decay and enhanced interpretability without the need for explicit monotonic constraints. Experiments on the large-scale autonomous driving benchmarks KITTI and nuScenes demonstrate that Eloss consistently achieves comparable or improved detection accuracy relative to standard baselines. More importantly, our method significantly increases model sensitivity to anomalous inputs-boosting detection signals for distribution shifts by up to two orders of magnitude. These outcomes reflect not merely incremental accuracy gains but a fundamental shift toward robust, theory-grounded perception systems. Our work emphasizes that stable, interpretable information processing is both achievable and beneficial for safety-critical applications like autonomous driving. Future research directions include adapting Eloss to networks with heterogeneous layer capacities, further reducing computational overhead, and extending our framework to other challenging perception tasks and real-world operational environments. 9 References [1] Malene Freudendal-Pedersen, Sven Kesselring, and Eriketti Servou. What is smart for the future city? mobilities and automation. Sustainability, 11(1):221, 2019. [2] Shuaifeng Zhi, Yongxiang Liu, Xiang Li, and Yulan Guo. Lightnet: A lightweight 3d convolutional neural network for real-time 3d object recognition. In 3DOR@ Eurographics, 2017. [3] Yi Wu, Xiaoyan Jiang, Zhijun Fang, Yongbin Gao, and Hamido Fujita. Multi-modal 3d object detection by 2d-guided precision anchor proposal and multi-layer fusion. Applied Soft Computing, 108:107405, 2021. [4] Zhenhong Zou, Xinyu Zhang, Huaping Liu, Zhiwei Li, Amir Hussain, and Jun Li. A novel multimodal fusion network based on a joint coding model for lane line segmentation. Information Fusion, 80:167-178, 2022. [5] Gareth A. Jones and J. Mary Jones. Information and Coding Theory. Springer London, 2000. [6] Boquan Yang, Jixiong Li, and Ting Zeng. A review of environmental perception technology based on multi-sensor information fusion in autonomous driving. World Electric Vehicle Journal, 16(1):20, 2025. [7] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 30, 2017. [8] Rory R. Griffiths, Alexander A. Aldrick, Manuel Garcia-Ortegon, and et al. Achieving robustness to aleatoric uncertainty with heteroscedastic bayesian optimisation. Machine Learning: Science and Technology, 3(1):015004, 2021. [9] Pranay Goel and Le Chen. On the robustness of monte carlo dropout trained with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22192228, 2021. [10] Janis Postels, Martino Segu, Timothy Sun, and et al. On the practicality of deterministic epistemic uncertainty. arXiv preprint , 2021. [11] Marília Barandas, Lorenzo Famiglini, Andrea Campagner, Duarte Folgado, Raquel Simão, Federico Cabitza, and Hugo Gamboa. Evaluation of uncertainty quantification methods in multi-label classification: A case study with automatic diagnosis of electrocardiogram. Information Fusion, 101:101978, 2024. [12] Kaisei Fukaya, Damon Daylamani-Zad, and Harry Agius. Evaluation metrics for intelligent generation of graphical game assets: a systematic survey-based framework. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. [13] David J C Mackay. Information theory, inference, and learning algorithms. Cambridge University Press, (Imp, 2003. [14] David Burth Kurka and Deniz Gündüz. Deepjscc-f: Deep joint source-channel coding of images with feedback. IEEE Journal on Selected Areas in Information Theory, 1:178-193, 05 2020. JSAIT.2020.2987203. URL https://ieeexplore.ieee.org/document/9066966. [15] Deep Joint Source-Channel Coding for Wireless Image Retrieval, 05 2020. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2020.9054078. URL https://arxiv.org/abs/1910.12703v1. [16] Ansh Kumar Sharma, Rahul Kukreja, Ranjitha Prasad, and Shilpa Rao. Dagsurv: Directed ayclic graph based survival analysis using deep neural networks. In Asian Conference on Machine Learning, pages 1065-1080. PMLR, 2021. [17] The information bottleneck method, 04 2000. arXiv:physics/0004057. URL https://arxiv.org/abs/ physics/0004057. [18] Chien-Yao Wang, I-Hau Yeh, and Hong-Yuan Mark Liao. Yolov9: Learning what you want to learn using programmable gradient information. In European conference on computer vision, pages 1-21. Springer, 2024. [19] Vanessa Buhrmester, David Münch, and Michael Arens. Analysis of explainers of black box deep neural networks for computer vision: A survey. Machine Learning and Knowledge Extraction, 3(4):966-989, 2021. 10 [20] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 18(10): 3337, 2018. [21] Willem van de Water and Piet Schram. Generalized dimensions from near-neighbor information. Physical Review A, 37(8):3118, 1988. [22] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, pages 3354-3361. IEEE, 2012. [23] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013. [24] MMDetection3D Contributors. MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. https://github.com/open-mmlab/mmdetection3d, 2020. [25] R Anand, S Vijaya Lakshmi, Digvijay Pandey, and Binay Kumar Pandey. An enhanced resnet-50 deep learning model for arrhythmia detection using electrocardiogram biomedical indicators. Evolving Systems, 15(1):83-97, 2024. [26] Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Zhenyu Guo, Yang Liu, Yuchen Yang, and Yao Zhao. Multi-modal graph learning for disease prediction. IEEE Transactions on Medical Imaging, 2022. [27] Yongjia Lei, Shuyuan Lin, Zhiying Li, Yachao Zhang, and Taotao Lai. Gnn-fused capsnet with multihead prediction for diabetic retinopathy grading. Engineering Applications of Artificial Intelligence, 133: 107994, 2024. [28] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. A Comparison with Eloss on Training Process 0 5 10 15 20 Epoch 0.4 0.5 0.6 0.7 0.8 Accuracy without eloss with eloss (a) Car APdist1.0 0 5 10 15 20 Epoch 0.05 0.10 0.15 0.20 0.25 0.30 0.35 Accuracy without eloss with eloss (b) mAP 0 5 10 15 20 Epoch 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 Accuracy without eloss with eloss (c) NDS Figure 4: Convergences Curves of the model accuracy on NuSenes validation set for PointPillars with or without Eloss. (a) the Average Precision of Car detection with Distance Threshold 1.0 meters; (b) mean Average Precision computed across 10 class of objects; (c) nuScenes detection score. To measure the impact of eloss on the model training process, we first conduct control experiments on the same model with and without eloss on the KITTI dataset and Nusenes dataset without noise. We plot the part of our experiment results in Figure 4 to more intuitively show the impact of eloss on the volatility of the training process. To quantify the impact process, we use the mean absolute value slope (Mean Absolute Value Slope, MAVP) to measure the impact of eloss on the volatility of the model training curve and use the maximum precision index to measure the impact of eloss on the model training accuracy. The MAVP formula is as follows, where N is the number of sliding panes, and (k, k+1) refers to two adjacent time windows. MAVP = 1 N N X k=1 (|xk+1| -|xk|)) (6) 11 We applied the pointpillars and voxelnet models to the KITTI dataset to conduct the above control experiments. The experimental results are shown in the Table 3. Model Method Max(%) MAVP(%) PointPillars Without Eloss 90.694 11.946 With Eloss 88.916 11.932 Delta -1.778 -0.014 VoxelNet Without Eloss 94.873 10.959 With Eloss 94.586 10.937 Delta -0.287 -0.022 Table 3: Car APR40 Max and MAVP on KITTI validation set during training. Metric Method Max(%) MAVP(%) Car APdist1.0 Without Eloss 79.580 2.945 With Eloss 79.520 2.811 Delta -0.060 -0.135 mAP Without Eloss 34.393 6.036 With Eloss 34.815 4.883 Delta 0.422 -1.153 NDS Without Eloss 49.217 4.902 With Eloss 49.637 3.902 Delta 0.420 -1.000 Table 4: Max and MAVP of PointPillars on nuScenes validation set during training. The experimental results in the pivot table show that the maximum training accuracy decreases after adding eloss to both models. In terms of MAVP, the MAVP decreased after adding eloss, which means that the addition of eloss makes the above training process smoother. On the Nuscence dataset, we perform the above control experiments on the PointPillars model with three different metrics: Car APdist1.0, mAP, and NDS. The experimental results are shown in the Table 4. The table shows that the maximum accuracy of the Car category is lost during the training process after adding eloss. Still, the decline in MAVP shows that the addition of eloss moderates the volatility of the above training process. Similar observations for mAP, the average precision of multiple categories, and NDS, the Nusenes detection score, indicate that adding eloss to the model makes the training process smoother. The above is the experiment on the influence of eloss on model training without noise interference. In order to further understand the effect of eloss in the experiment, we will add Eloss to different parts of the network or conduct control experiments with anomalous data. B PointPillar Sensitivity to Eloss Coverage We resume training PointPillars models that have completed 80 epochs and inject Eloss into an increasing number of SECOND blocks.2 Because the regularizer is fully plug-and-play, it can be attached to any repetitive sub-structure. Here we vary its coverage from 0 to 3 blocks. Observation. As the regularizer constrains more blocks, all three difficulty levels of Car APR40 steadily decline, while runtime rises from 4.5 ms (0 blocks) to 34.9 ms (3 blocks). This corroborates the hypothesis that tighter entropy-stability constraints narrow the optimization landscape: the detector sacrifices peak accuracy in exchange for stronger robustness priors and incurs additional computation to evaluate the entropy terms. Remark on VoxelNet. Repeating the coverage study on VoxelNet produces an interesting nonmonotonic pattern: accuracy first dips when one block is regularized, but rebounds-and even surpasses the baseline-when two blocks are constrained. The complete results and discussion are presented in Appendix C. C VoxelNet Sensitivity to Eloss Coverage To verify whether the coverage pattern observed for PointPillars generalises, we repeat the blockinjection study on VoxelNet. Discussion. Contrary to PointPillars (Figure 5), VoxelNet shows a non-monotonic trend: injecting Eloss into a single block reduces all three AP scores, yet constraining two blocks surpasses the baseline on Easy and Moderate. We posit two factors: 2All other hyper-parameters remain fixed; only the number of blocks governed by Eloss is varied. 12 0 1 2 3 Number of SECOND Blocks with Eloss 65 70 75 80 85 Car APR40 (%) 5 10 15 20 25 30 35 Time per Frame (ms) Figure 5: PointPillars on KITTI with Eloss applied to 0-3 SECOND blocks. Circles = Easy, Squares = Moderate, Triangles = Hard (left axis). Diamonds dashline = per-frame inference time (right axis). Model Epoch #Blocks w/ Eloss Car APR40 (%) Time (ms) Easy Mod. Hard VoxelNet 80 0 82.35 73.35 68.59 6.94 85 1 81.34 70.46 65.29 14.72 85 2 85.34 74.44 67.51 33.75 Table 5: VoxelNet on KITTI with Eloss applied to 0-2 SECOND blocks. • Receptive-field size. VoxelNet's 3D sparse convolutions already cover large spatial extents; a stronger stability prior may curb over-fitting to high-frequency noise, yielding a net gain when applied to multiple blocks. • Normalisation effects. Preliminary ablations (omitted for space) indicate that layer-norm placement modulates the strength of the entropy signal; one-block regularisation may under- or over-weight this interaction. A comprehensive sweep-including per-block ablations and alternative normalisation layouts-is left for future work. 13
|
2509.16281
|
Low-energy nuclear recoil calibration of the LUX-ZEPLIN experiment with a
photoneutron source
J. Aalbers,1, 2 D.S. Akerib,1, 2 A.K. Al Musalhi,3 F. Alder,3 C.S. Amarasinghe,4 A. Ames,1, 2 T.J. Anderson,1, 2
N. Angelides,5 H.M. Ara´ujo,5 J.E. Armstrong,6 M. Arthurs,1, 2 A. Baker,5, 7 S. Balashov,8 J. Bang,9
J.W. Bargemann,4 E.E. Barillier,10, 11 K. Beattie,12 T. Benson,13 A. Bhatti,6 T.P. Biesiadzinski,1, 2 H.J. Birch,10, 11
E. Bishop,14 G.M. Blockinger,15 B. Boxer,16 C.A.J. Brew,8 P. Br´as,17 S. Burdin,18 M.C. Carmona-Benitez,19
M. Carter,18 A. Chawla,20 H. Chen,12 Y.T. Chin,19 N.I. Chott,21 N.I. Chott,21 M.V. Converse,22 S. Contreras,23
R. Coronel,1, 2 A. Cottle,3 G. Cox,24 D. Curran,24 C.E. Dahl,25, 26 I. Darlington,3 S. Dave,3 A. David,3
J. Delgaudio,24 S. Dey,27 L. de Viveiros,19 L. Di Felice,5 C. Ding,9 J.E.Y. Dobson,7 E. Druszkiewicz,22 S. Dubey,9
C.L. Dunbar,24 S.R. Eriksen,28 A. Fan,1, 2 N.M. Fearon,27 N. Fieldhouse,27 S. Fiorucci,12 H. Flaecher,28
E.D. Fraser,18 T.M.A. Fruth,29 R.J. Gaitskell,9 A. Geffre,24 J. Genovesi,19 C. Ghag,3 A. Ghosh,15 R. Gibbons,12, 30
S. Gokhale,31 J. Green,27 M.G.D.van der Grinten,8 J.J. Haiston,21 C.R. Hall,6 T. Hall,18 S. Han,1, 2
E. Hartigan-O’Connor,9 S.J. Haselschwardt,10 M.A. Hernandez,10, 11 S.A. Hertel,32 G.J. Homenides,33 M. Horn,24
D.Q. Huang,23 D. Hunt,27, 34 E. Jacquet,5 R.S. James,3 M.K. K,15 A.C. Kaboth,20 A.C. Kamaha,23 D. Khaitan,22
A. Khazov,8 J. Kim,4 Y.D. Kim,35 J. Kingston,16 R. Kirk,9 D. Kodroff,12, ∗E.V. Korolkova,36 H. Kraus,27
S. Kravitz,34 L. Kreczko,28 V.A. Kudryavtsev,36 C. Lawes,7 D.S. Leonard,35 K.T. Lesko,12 C. Levy,15 J. Lin,12, 30
A. Lindote,17 W.H. Lippincott,4 J. Long,25 M.I. Lopes,17 W. Lorenzon,10 C. Lu,9 S. Luitz,1, 2 P.A. Majewski,8
A. Manalaysay,12 R.L. Mannino,37 C. Maupin,24 M.E. McCarthy,22 G. McDowell,10 D.N. McKinsey,12, 30
J. McLaughlin,25 J.B. Mclaughlin,3 R. McMonigle,15 B. Mitra,25 E. Mizrachi,6, 37 M.E. Monzani,1, 2, 38
E. Morrison,21 B.J. Mount,39 M. Murdy,32 A.St.J. Murphy,14 H.N. Nelson,4 F. Neves,17 A. Nguyen,14
C.L. O’Brien,34 I. Olcina,12, 30 K.C. Oliver-Mallory,5 J. Orpwood,36 K.Y Oyulmaz,14 K.J. Palladino,27 J. Palmer,20
N.J. Pannifer,28 N. Parveen,15 S.J. Patton,12 B. Penning,10, 11 G. Pereira,17 E. Perry,3 T. Pershing,37 A. Piepke,33
Y. Qie,22 J. Reichenbacher,21 C.A. Rhyne,9 G.R.C. Rischbieter,10, 11 E. Ritchey,6 H.S. Riyat,14 R. Rosero,31
T. Rushton,36 D. Rynders,24 D. Santone,20, 27 A.B.M.R. Sazzad,33 R.W. Schnee,21 G. Sehr,34 B. Shafer,6 S. Shaw,14
K. Shi,10 T. Shutt,1, 2 J.J. Silk,6 C. Silva,17 G. Sinev,21 J. Siniscalco,3 A.M. Slivar,33 R. Smith,12, 30 V.N. Solovov,17
P. Sorensen,12, † J. Soria,12, 30 I. Stancu,33 A. Stevens,3, 5 T.J. Sumner,5 A. Swain,27 M. Szydagis,15 D.R. Tiedt,24
M. Timalsina,12 Z. Tong,5 D.R. Tovey,36 J. Tranter,36 M. Trask,4 M. Tripathi,16 K. Trengrove,15 A. Us´on,14
A.C. Vaitkus,9 O. Valentino,5 V. Velan,12 A. Wang,1, 2 J.J. Wang,33 Y. Wang,12, 30 L. Weeldreyer,4 T.J. Whitis,4
K. Wild,19 M. Williams,10 W.J. Wisniewski,1 L. Wolf,20 F.L.H. Wolfs,22 S. Woodford,18 D. Woodward,12, 19
C.J. Wright,28 Q. Xia,12 J. Xu,37 Y. Xu,23 M. Yeh,31 D. Yeum,6 W. Zha,19 H. Zhang,14 and T. Zhang12
(The LUX-ZEPLIN (LZ) Collaboration)
1SLAC National Accelerator Laboratory, Menlo Park, CA 94025-7015, USA
2Kavli Institute for Particle Astrophysics and Cosmology,
Stanford University, Stanford, CA 94305-4085 USA
3University College London (UCL), Department of Physics and Astronomy, London WC1E 6BT, UK
4University of California, Santa Barbara, Department of Physics, Santa Barbara, CA 93106-9530, USA
5Imperial College London, Physics Department, Blackett Laboratory, London SW7 2AZ, UK
6University of Maryland, Department of Physics, College Park, MD 20742-4111, USA
7King’s College London, King’s College London,
Department of Physics, London WC2R 2LS, UK
8STFC Rutherford Appleton Laboratory (RAL), Didcot, OX11 0QX, UK
9Brown University, Department of Physics, Providence, RI 02912-9037, USA
10University of Michigan, Randall Laboratory of Physics, Ann Arbor, MI 48109-1040, USA
11University of Zurich, Department of Physics, 8057 Zurich, Switzerland
12Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA 94720-8099, USA
13University of Wisconsin-Madison, Department of Physics, Madison, WI 53706-1390, USA
14University of Edinburgh, SUPA, School of Physics and Astronomy, Edinburgh EH9 3FD, UK
15University at Albany (SUNY), Department of Physics, Albany, NY 12222-0100, USA
16University of California, Davis, Department of Physics, Davis, CA 95616-5270, USA
17Laborat´orio de Instrumenta¸c˜ao e F´ısica Experimental de Part´ıculas (LIP),
University of Coimbra, P-3004 516 Coimbra, Portugal
18University of Liverpool, Department of Physics, Liverpool L69 7ZE, UK
19Pennsylvania State University, Department of Physics, University Park, PA 16802-6300, USA
20Royal Holloway, University of London, Department of Physics, Egham, TW20 0EX, UK
21South Dakota School of Mines and Technology, Rapid City, SD 57701-3901, USA
22University of Rochester, Department of Physics and Astronomy, Rochester, NY 14627-0171, USA
arXiv:2509.16281v1 [physics.ins-det] 18 Sep 2025
2
23University of California, Los Angeles, Department of Physics & Astronomy, Los Angeles, CA 90095-1547
24South Dakota Science and Technology Authority (SDSTA),
Sanford Underground Research Facility, Lead, SD 57754-1700, USA
25Northwestern University, Department of Physics & Astronomy, Evanston, IL 60208-3112, USA
26Fermi National Accelerator Laboratory (FNAL), Batavia, IL 60510-5011, USA
27University of Oxford, Department of Physics, Oxford OX1 3RH, UK
28University of Bristol, H.H. Wills Physics Laboratory, Bristol, BS8 1TL, UK
29The University of Sydney, School of Physics, Physics Road, Camperdown, Sydney, NSW 2006, Australia
30University of California, Berkeley, Department of Physics, Berkeley, CA 94720-7300, USA
31Brookhaven National Laboratory (BNL), Upton, NY 11973-5000, USA
32University of Massachusetts, Department of Physics, Amherst, MA 01003-9337, USA
33University of Alabama, Department of Physics & Astronomy, Tuscaloosa, AL 34587-0324, USA
34University of Texas at Austin, Department of Physics, Austin, TX 78712-1192, USA
35IBS Center for Underground Physics (CUP), Yuseong-gu, Daejeon, Korea
36University of Sheffield, Department of Physics and Astronomy, Sheffield S3 7RH, UK
37Lawrence Livermore National Laboratory (LLNL), Livermore, CA 94550-9698, USA
38Vatican Observatory, Castel Gandolfo, V-00120, Vatican City State
39Black Hills State University, School of Natural Sciences, Spearfish, SD 57799-0002, USA
(Dated: September 23, 2025)
The LZ experiment is a liquid xenon time-projection chamber (TPC) searching for evidence of
particle dark matter interactions. In the simplest assumption of elastic scattering, many dark matter
models predict an energy spectrum which rises quasi-exponentially with decreasing energy transfer
to a target atom. LZ expects to detect coherent neutrino-nucleus scattering of 8B solar neutrinos,
the signal from which is very similar to a dark matter particle with mass of about 5.5 GeV/c2,
which result in typical nuclear recoil energies of < 5 keVnr. Therefore, it is of crucial importance to
calibrate the response of recoiling xenon nuclei to keV-energy recoils. This analysis details the first
in situ photoneutron calibration of the LZ detector and probes its response in this energy regime.
I.
INTRODUCTION
Abundant astrophysical evidence suggests the exis-
tence of dark matter [1–4], a non-relativistic and non-
baryonic matter component of the universe. In spite of
significant effort, the particle nature of the dark matter
remains an open question. Experiments such as LUX-
ZEPLIN (LZ) [5] seek to address this question by looking
for evidence of dark matter scattering with target nuclei.
In many weak-scale dark matter models, the simplest
elastic scattering interaction results in a recoil energy
spectrum which rises quasi-exponentially with decreas-
ing energy transfer to a target atom. The sensitivity of
an experiment is thus strongly dependent on its ability
to detect the lowest energy nuclear recoil signals. No-
tably, the spin-independent scattering of a hypothetical
5.5 GeV/c2 dark matter particle with a xenon target is
calculated to look nearly identical to the coherent elas-
tic neutrino-nucleus scattering (CEνNS) of 8B solar neu-
trinos, which produce < 5 keV nuclear recoils (keVnr)
[6]. LZ expects to detect dozens of these neutrino inter-
actions during the course of its experimental campaign
[7]. It is therefore critical for the experiment to calibrate
the response of recoiling xenon atoms in this low energy
regime.
Significant effort has been invested in measuring the re-
sponse of liquid xenon to few-keV nuclear recoils [8–11].
∗danielkodroff@lbl.gov
† pfsorensen@lbl.gov
Indeed, higher energy neutron sources such as deuterium-
tritium (D-T) [8], deuterium-deuterium (D-D) [10], and
241AmBe (α,n) [12] can produce low energy recoils with
a small scattering angle; however, these calibrations pro-
duce a lower relative fraction few-keV recoils in liquid
xenon. Given the inherent systematic and statistical un-
certainty in such low-energy calibrations, it is critical for
a discovery experiment to make such calibrations in situ
with different elastic nuclear recoil spectra such that they
are subject to different systematics.
In this analysis, we address these requirements and re-
port results of the first calibration of the LZ liquid xenon
using a custom 88Y-9Be photoneutron source [13, 14],
colloquially referred to as YBe. The photonuclear reac-
tion process on 9Be proceeds as 9Be + γ →8B + n, with
a threshold energy requirement of Q = −1.66454 MeV.
88Y has two measured γ-rays that can provide this en-
ergy: a dominant mode of 1.8631 MeV with 99.2(3)%
branching ratio, and a rare mode of 2.7340 MeV with
0.71(7)% branching ratio [15]. For the lower and higher
γ-ray energies, the average measured photonuclear cross
section, from world data in Ref. [15] and references
therein, is 0.651 ± 0.007 mb and 0.592 ± 0.038 mb,
respectively.
The emitted photoneutrons are quasi-
monoenergetic with kinetic energies of 152 keV and
950 keV with a percent level angular dependence of 3 keV
and 10 keV, respectively. In the case of elastic scattering,
the maximum energy transferred by a neutron, of mass
m, in a single-scatter (SS) with a xenon nucleus of mass
M, is 4EnMm/(M + m)2 = 4.6(29) keVnr given the 152
(950) keV neutrons. This end-point energy of the dom-
3
inant photoneutron mode thus offers a potentially un-
ambiguous way to calibrate the low-energy response of a
detector such as LZ.
This paper describes a calibration of LZ using monoen-
ergetic photoneutrons from a custom-built YBe source.
Observation of the rare 950 keV neutron mode is reported
in addition to the dominant 152 keV neutron mode. Ad-
ditional data taken with the Be-metal replaced with nat-
ural magnesium to quantify the 88Y γ-ray backgrounds
in situ is also presented. In Sec. II, the design, deploy-
ment and data of the source will be introduced, followed
by a discussion of the data selection and backgrounds
in Sec. III.
The results of a nuclear recoil light and
charge yield analysis using this calibration and observa-
tions about the spectra are presented in Sec. IV.
II.
LZ DETECTOR, CALIBRATION, AND
SIMULATION
A.
Detector and Calibration Configuration
The LZ instrument is described in Ref. [16]. Here we
reprise a few key details relevant to the present mea-
surement. LZ is a liquid xenon time-projection chamber
(TPC) nested within two veto detectors: a liquid xenon
“Skin” scintillation-only detector and a liquid scintillator
Outer Detector (OD). The TPC is instrumented with 494
photomultiplier tubes (PMTs) split into a top and bot-
tom array. For each event, LZ detects a primary scintil-
lation photon signal (S1) and an ionized electron signal
(S2). The latter is measured by drifting electrons across
the liquid xenon target and then amplifying them via
proportional electroluminescence in the vapor above the
liquid. By collecting both S1 and S2, the 3D vertex of
the scattering can be reconstructed with 6 mm precision
in (x, y) position for S2 > 3000 photons detected (phd)
[17]. The TPC and the Skin Detector are housed inside
an Inner Cryostat Vessel (ICV) and an Outer Cryostat
Vessel (OCV). The OCV is surrounded by acrylic ves-
sels filled with gadolinium-loaded liquid scintillator, as
part of the Outer Detector (OD). The OCV and OD are
housed inside a water tank with the water continuously
purified.
The LZ calibration strategy, including the photoneu-
tron source, is described in detail in Refs. [14, 16]. The
YBe source internals and deployment location within the
LZ detector are depicted in Fig. 1 as recreated from
Ref. [14].
As shown in the left panel, the YBe pho-
toneutron source consists of an 88Y sealed disk source
of 4.7 mm diameter and 4.6 mm height placed between
natural beryllium metal disks of 2.54 cm diameter and
a total 3 cm height. A nickel layer with 24 µm thick-
ness is plated on the surface of the beryllium metal disks
for safe handling.
The disks are stacked in a cylinder
which is then inserted into a tungsten shield block of
20 cm diameter and 20 cm height. An image of the fully
constructed and assembled source is shown in the mid-
dle panel of Fig. 1.
Prior to the calibration, analytic
calculations were performed to determine the neutron-
to-gamma ratio produced in this source configuration.
That is, for each emitted γ-ray from the 88Y source, how
often is a photoneutron produced in the Be-metal. The
neutron-to-gamma ratio was found to be 1 in 9990 for
the 1.8361 MeV γ-ray and 1 in 10986 for the 2.7340 MeV
γ-ray, given the 88Y γ-ray energies and respective pho-
toneutron cross-sections.
The right most panel of Fig. 1 depicts how the YBe
source is deployed within the LZ experiment and can be
compared to other implementations in large-scale liquid
xenon TPCs [11]. To perform this calibration, the top of
the water tank is opened, and the tungsten shield block is
lowered into the LZ water tank through an opening in the
OD acrylic vessels, until it rests on a purpose-built indent
in the top center of the outer cryostat depicted as the or-
ange block in the right panel of Fig. 1. The water tank
is then promptly closed to minimize air ingress.
Dur-
ing low-background data taking, this indent in the outer
cryostat is occupied by a small cylindrical acrylic vessel
filled with gadolinium-doped liquid scintillator (referred
to as the “GdLS plug”) to complete the OD coverage.
Thus when a calibration is completed, the YBe source
and tungsten shielding is replaced with the GdLS plug.
The distance from the 88Y source to the gate electrode
is 86.7 cm: whereby the neutrons must traverse through
the titanium cryostat vessels, top PMT array, and xenon
gas before reaching the active liquid xenon volume. This
configuration minimizes the amount of high density ma-
terial through which the neutrons must traverse before
reaching liquid xenon in the top of the TPC. The tung-
sten shielding below the source and detector materials
provide shielding to help mitigate the 88Y γ-rays.
B.
Calibration Operations
Several small but key parameter changes were made
to the experimental conditions between the first science
data reported in Ref. [5] and the deployment of the pho-
toneutron source. Specifically, both the cathode and gate
electrode potentials were set an additional -2 kV farther
from ground, and the anode potential was moved 2.5 kV
closer to ground. These changes preserved the nominal
193 V/cm electric field across the liquid xenon target,
and decreased the potential difference between the gate
and the anode by 0.5 kV. Notably, the drift field is larger
than the 97 V/cm field reported in more recent LZ dark
matter searches [18]. The scintillation photon and ioniza-
tion electron gains, g1 and g2 were characterized in this
grid configuration using tritium calibration events in an
identical spatial selection as used in this analysis. The
change in grid configuration decreased both the electrolu-
minescence yield and the electron extraction probability,
resulting in an ∼25% decrease in the single electron size
leading to g2 = 34.9 ± 0.6 phd/electron.
The trigger
threshold was also adjusted in order to retain the same
4
FIG. 1. Left: CAD drawing for the YBe photoneutron source assembly. The 88Y sealed source is sandwiched between 9Be
disks to generate neutrons. Middle: YBe source inside the tungsten shielding placed next to a five-dollar bill for scale. Right:
layout of the top part of the outer detector acrylic tanks (1) in both the green and purple volumes and water tank (2). The
custom cut-out (3) in the top acrylic tank through which the YBe source is deployed is shown. The outer (4) and inner (5)
titanium cryostats are also indicated in the figure. Figure is recreated from Ref. [14].
detection efficiency to few electron events. The value for
g1 remains unchanged from that of the first physics search
with g1 = 0.114 ± 0.003 phd/photon. Dedicated spatial
corrections were also performed allowing for the defini-
tion of the standardized corrected quantities S1c and S2c
used throughout this analysis.
Following the first science data collection from LZ
[5, 17, 19], the YBe source was deployed with an initial
88Y activity of 0.339 MBq with 3% uncertainty, and 135
hours of calibration data were recorded. Given the source
activity during the calibration, the predicted neutron-to-
gamma ratio, and the respective branching fractions, the
expected neutron rates emitted from the Be-metal are
34 ± 1 n/s for the 152 keV neutrons and 0.22 ± 0.03 n/s
for the 950 keV neutrons.
About 21 days after these
data were recorded, the beryllium metal disks were re-
placed with magnesium metal disks of the same dimen-
sion, the so-called YMg source.
The 88Y source, with
activity then at 0.295 MBq (given t1/2 = 106.6 d) was
deployed again to the same location and 24 hours of data
were recorded. The γ-ray attenuation properties of mag-
nesium are within 5% of beryllium, but γ-rays from 88Y
decays are below the energy threshold for photoneutron
production in magnesium.
This step, therefore, pro-
vided an important “beam-off” configuration in which
the background from 88Y decays could be characterized
in situ without the presence of neutrons.
C.
Simulation of the YBe Source
The as-built YBe source, configuration, and location
were all modeled in Geant4 [20] and included in the same
simulation framework as described in Ref. [21]. The 88Y
γ-rays and YBe photoneutrons are simulated as being
isotropically emitted from the 88Y source and Be-metal,
respectively. These γ-rays and neutrons are then allowed
to propagate through the simulated LZ detector before
reaching the active liquid xenon volume where the re-
coils of interest occur. Particle interactions in the active
xenon volumes are then converted into observed quanti-
ties, e.g. S1 and S2 signals and reconstructed positions.
More details on this detector response are described in
Sec. IV.
These dedicated simulations indicate that the emitted
photoneutrons have their energy spectra degraded from
interactions in the intervening materials between the Be-
metal where the neutron is produced and the active liq-
uid xenon where the neutron is detected. Of the emit-
ted 152(950) keV neutrons, approximately 2(9)% pro-
duce recordable signals in the TPC. The primary sources
of energy loss are from interactions with the titanium
cryostat vessels and the stainless steel and PTFE within
the inner cryostat. The resulting energy spectra of these
neutrons entering the TPC is slowly falling between zero
and the emitted neutron energy. These same interven-
ing materials are extremely effective at shielding the 88Y
γ-rays; simulations predict less than 1 in 104 88Y decays
will result in energy deposited in the active liquid xenon.
As a result, the neutron-to-gamma ratio for interactions
in the TPC is roughly 1 in 40 as compared to the ratio
emitted from the source of roughly 1 in 10000.
III.
DATA SELECTION AND ANALYSIS
A.
Data Selection
The data selection criteria were largely based on what
was used for the first LZ dark matter search [5] with a
few modifications. In brief, this included a requirement
5
0
202
302
402
502
602
702
Reconstructed R2 [cm2]
0
2
4
6
8
10
12
Distance Below Gate Grid [cm]
FIG. 2. Candidate single-scatter nuclear recoils from the pho-
toneutron source after all data selection criteria (black points)
alongside all single-scatter events (gray points) prior to ap-
plication of the fiducial volume (red-dashed line) defined in
Sec. III. Also shown are events removed by the prompt veto
(red points) and not delayed-tagged (blue points). The re-
constructed wall position is shown as the solid gray line.
of single-scatter events with a 3-fold PMT coincidence,
photon hit patterns consistent with genuine single-scatter
particle interactions in the TPC, a trigger hold-off of 20
seconds after passing muon events, and removal of time
periods with a full data acquisition buffer.
The region of interest (ROI) for this analysis was cho-
sen to be 0 < S1c < 40, with a lower S2 bound of 7
electrons and an upper bound of log10S2c < 4.0. The
lower bound was selected to remove populations of known
spurious S2 emission while maintaining an S2 trigger ef-
ficiency of 100%. The S1 bound was selected to include
the endpoint of the 950 keV nuclear recoils.
The YBe source is centrally located over the TPC. The
reconstructed position of single scatter events are shown
in Fig. 2, which represents a small slice at the top of the
145.6 cm tall LZ TPC. Single-scatter events, in which
the neutron scatters just once in the active volume, pref-
erentially result in large angle scatters before the neu-
tron exits the detector. Thus, forward-scatters, traveling
downward in the active volume, are likely to scatter mul-
tiple times. The mean free path for elastic scatters of
152 keV neutrons in liquid xenon is ∼14 cm [22], so the
fiducial volume for this result was chosen with drift times
between 6 µs and 50 µs (137.9 cm < Z < 144.6 cm),
and a reconstructed radial position R < 60 cm.
The
drift bounds maintain high acceptance to genuine single-
scatter neutron recoils while rejecting known background
populations. The 6 µs drift bound mitigates the pres-
ence of backgrounds originating from the stainless-steel
gate electrode, while the 50 µs drift bound mitigates the
presence of backgrounds originating from the top PMT
faces which occur in the xenon vapor (see discussion in
Sec. III B 3). The radial bound mitigates the presence
of backgrounds from the PTFE wall where events with
small S2 areas can be reconstructed radially inward.
In addition to the standard LZ analyses [5, 18], several
data selection criteria were either developed or refined
for this calibration, in order to ensure maximum purity
of neutron single-scatters in the data sample:
1. a completely different formulation of the delayed
electron/photon [23] time hold-off was developed,
utilizing the ratio of total event size to largest
S2 size.
This was necessary because the trigger
rate during the calibration was 40 Hz, compared
with the ∼15 Hz typical during the dark matter
search. The original version of this selection, used
in other LZ analyses, would have removed a signif-
icant amount of this calibration’s livetime.
2. a stricter selection on the allowed width of S2 sig-
nals was implemented.
This was required as S2
signals in this region are typically narrower than in
the usual dark matter search fiducial target, since
drifting electrons have less time to diffuse.
This
helps mitigate accidental coincidence backgrounds
(which can have excessive S2 width) as well as gas
phase events whose diffusion is different to that in
liquid.
3. the fraction of S2 photons detected by the top ar-
ray of photomultipliers (top-bottom asymmetry or
TBA) to be consistent with liquid xenon bulk inter-
actions. This selection, stricter than the versions
used in previous analyses, provides additional re-
jection of spurious events produced by scattering
in the xenon gas.
4. a strict criteria on the “cleanliness” of the single-
scatter event window. Events with more than one
S2 larger than three electrons following the primary
S2 are excluded. This selection targets classifica-
tion issues related to pile-up of S2 pulses.
5. a requirement on no signals in prompt coincidence
with the TPC S1 in either the Skin or OD and a
requirement on a delayed signal to be present in
either the Skin or OD. The former mitigates 88Y
gamma and radiogenic Compton scatters. The lat-
ter ensures the purity of the neutron selection. The
definition of prompt and delayed coincidences fol-
lows the procedure in Ref. [5] in which the LZ veto
detectors are able to tag 89 ± 3 % of neutrons.
The detection efficiency for low energy nuclear recoils
was evaluated following the procedure in Ref. [19] and
can be seen in Fig. 3. The S2 trigger efficiency (blue)
was evaluated and determined to be 100% efficient to S2
signals greater than 4.5 electrons. The single-scatter re-
construction efficiency (orange) depicts the efficiency to
detect ≥3-fold S1s and accurately reconstruct these low-
energy single-scatters as was done in Ref. [19]. The 3-
fold, single-scatter requirement is the greatest limiter on
6
energy threshold, such that the S2 lower bound (dashed
green) has minimal additional impact on the lowest en-
ergy detectable recoils.
The final detection efficiency
(black) and its ±1σ uncertainty (shaded gray) includes
all of the above efficiencies in addition to the efficiencies
associated with the aforementioned selections. Though
not shown, the detection efficiency continues to smoothly
rise until it asymptotes at 33 ± 5 % efficiency above
10 keVnr.
Shown in red is the true single-scatter re-
coil spectrum in the active liquid xenon target for the
152 keV mode YBe photoneutrons as predicted from ded-
icated Geant4 simulations. The right-most axis in red
provides a scale for the relative change in the recoil spec-
trum induced by the 152 keV mode neutrons after the
application of the black detection efficiency. The dashed
red line shows the detected 152 keV YBe recoil spectrum
after application of all detection efficiencies. The mean
energy of the resulting nuclear recoils is 3.1 keVnr.
The acceptance of the custom selection as a function of
S2 area could not be evaluated using tritium betas or 2.45
MeV DD neutrons as their S2s are not small enough in
area to span the regime relevant to YBe photoneutrons.
The acceptance for these selections was conservatively es-
timated using the YBe data that is the most neutron like.
This subset of data lies within S1c < 8 and log10S2c < 3.2
and has all selections applied except those being evalu-
ated.
This specific selection necessarily included some
small amount of non-neutron backgrounds, which make
this estimation of acceptance pessimistic.
The uncer-
tainty on the detection efficiency for the 152 (950) keV
photoneutron mode is estimated to be 24(11)%, primar-
ily driven by the estimate of S2-based cut acceptances
at low S2 areas and, to a lesser extent, the single-scatter
reconstruction at low S1 areas. The uncertainty on the
total detection efficiency is factored into the error on the
neutron normalizations used in the fitting procedure dis-
cussed in Sec. IV. The change in spectral shape given
the uncertainty on detection efficiency is also evaluated
in the context of the best fit results.
B.
Backgrounds
These data selections are equivalently applied to three
different datasets with the same detector configurations,
shown in Fig. 4. The top panel of Fig. 4 shows the YBe
calibration data, corresponding to a 5.2 day exposure.
215 candidate events remain after all selections.
The
middle panel shows the YMg dataset, corresponding to
a 0.8 day exposure, with no candidate neutron events re-
maining after all selections. The bottom panel shows a
35 day exposure of background data, i.e. without any
calibration source present, with only a single event re-
maining after all selections. The gray points in each plot
show the respective datasets without the application of
the S2 TBA criterion. In the case of the YBe data, this
reveals the presence of an additional band of neutron-
induced background events.
10
-3
10
-2
10
-1
10
0
A.U.
0
1
2
3
4
5
Nuclear Recoil Energy [keV]
0.0
0.2
0.4
0.6
0.8
1.0
Efficiency
YBe 152 keV mode single scatter NR spectrum
Detected spectrum
S2 Trigger
+ SS (3-fold S1)
+ S2 ROI (7e−)
+ Analysis Selections
FIG. 3. Energy dependent nuclear recoil signal efficiency after
the application, in sequence, of the S2 trigger (blue), 3-fold
single-scatter (SS) requirement (orange), S2 bounds (green),
and analysis selections (black) is shown.
The shaded gray
±1σ uncertainty on the detection efficiency is dominated by
the assessments of the analysis selections and single-scatter re-
construction. Though not shown, the efficiency continues to
increase for higher energies, asymptoting at 33 ± 5 % above
10 keVnr. The solid red line depicts the true single-scatter
nuclear recoil spectrum of the 152 keV mode YBe photoneu-
trons. The dashed red line shows the true single-scatter recoil
spectrum after the application of the final detection efficiency
in black. The mean detected YBe recoil energy is 3.1 keVnr.
The right-most axis in red shows the change in the relative
rate of the YBe recoil spectra after the application of the de-
tection efficiency.
Common to all plots in Fig. 4 are a series of contours
providing context for expected distributions.
The re-
sponse of LZ to flat spectrum electron recoil (ER) and
flat-spectrum nuclear recoil (NR) events are given by the
blue and red bands. The median and 10%-90% quan-
tiles of the respective distributions are shown as solid
and dotted lines, respectively. These bands are drawn
from the NEST 2.4.0 model [24] fit to tritium and D-D
calibration in conjunction with LZ detector response sim-
ulation. The expected response of the YBe 152 keV mode
photoneutrons was obtained from the same detector re-
sponse simulation and best-fit NEST model based on the
results presented in Sec. IV. The 1σ and 2σ contours
of the 152 keV neutron mode elastic nuclear recoils are
indicated in purple. The green contours indicate the 1σ
and 2σ boundaries of the 950 keV neutron mode elastic
nuclear recoils. The gray dashed lines indicate contours
of constant mean energy with the highest contour corre-
sponding to the endpoint of the 950 keV neutron recoil
of 29 keVnr.
Of the 215 events remaining in the YBe dataset, 193
are within the sub-region of S1c < 9 and log10S2c < 3.3.
This population of events at low S1 areas below the NR
band are visually consistent with the elastically scattered
152 keV mode photoneutrons as indicated by the purple
contours in Fig. 4. The appearance of the YBe nuclear
7
0
5
10
15
20
25
30
35
40
S1c [phd]
2.2
2.4
2.6
2.8
3.0
3.2
3.4
3.6
3.8
4.0
log10(S2c [phd])
3 keVnr
0.5 keVee
5 keVnr
0.9 keVee
10 keVnr
1.9 keVee
20 keVnr
4.0 keVee
YBe Dataset (5.2 d)
All selections applied
Without S2 TBA selection
0
5
10
15
20
25
30
35
40
S1c [phd]
2.2
2.4
2.6
2.8
3.0
3.2
3.4
3.6
3.8
4.0
log10(S2c [phd])
3 keVnr
0.5 keVee
5 keVnr
0.9 keVee
10 keVnr
1.9 keVee
20 keVnr
4.0 keVee
YMg Dataset (0.8 d)
All selections applied
Without S2 TBA selection
0
5
10
15
20
25
30
35
40
S1c [phd]
2.2
2.4
2.6
2.8
3.0
3.2
3.4
3.6
3.8
4.0
log10(S2c [phd])
3 keVnr
0.5 keVee
5 keVnr
0.9 keVee
10 keVnr
1.9 keVee
20 keVnr
4.0 keVee
Background Dataset (35 d)
All selections applied
Without S2 TBA selection
FIG. 4. The YBe photoneutron calibration dataset (top) com-
pared with two control datasets: the YMg calibration (mid-
dle) and a background dataset (bottom).
Purple contours
show the 1σ and 2σ expectation for 152 keV mode YBe neu-
tron events.
Green contours show the 1σ and 2σ contours
for simulated YBe 950 keV mode neutrons. Comprehensive
details are given in Sec. III B.
recoils below the NR band is expected from the inter-
action of an at-threshold source [25].
This is because
upward fluctuations in the amount of electron-ion re-
combination are required to produce 3-fold S1s, which
reduces the observed S2 sizes. The clear visibility of the
YBe photoneutrons with respect to 88Y γ-rays and radio-
genic backgrounds in the YMg and background datasets,
respectively, demonstrates the exceptional data quality
achieved in the LZ experiment.
In addition to the YBe elastic nuclear recoils, three
specific populations are present within the YBe dataset:
a population within the NR band at higher S1 values, a
population within the ER band, and a population below
the NR band. The remainder of this section describes
these populations and their expected rates within the
YBe dataset.
1.
Higher energy nuclear recoils
The population of events with 8 < S1c < 40 within
the NR band is visually consistent with 950 keV mode
photoneutron recoils as indicated by the green contours
in Fig. 4. Dedicated simulations were performed of the
950 keV mode neutrons emitted from the YBe source to
quantify the rate of these events that reach the TPC and
pass all selection criteria. Though the predicted rate for
this rare branching mode is about 150 times less than
that of the dominant mode - mostly driven by the differ-
ence in branching fraction - these higher energy neutrons
are about five times more likely to produce a detectable
event passing all selections. In the region within the flat
NR band with S1c > 8 phd, which is essentially back-
ground free, these simulations predict 6 ± 2 events which
is consistent with the 9 events observed. The systematic
uncertainty in this prediction is driven by the systemat-
ics on the predicted 950 keV neutron rate and estimated
neutron detection efficiency. Confirmation of this predic-
tion serves as an in situ validation of both the simulations
and calculations used throughout this analysis. The lack
of NR band events present in the YMg and background
datasets further confirms their origin as the 950 keV YBe
photoneutrons.
Despite their lower rate and typically
higher recoil energies with respect to the 152 keV mode
photoneutrons, these 950 keV mode neutrons do form a
background to the 152 keV mode photoneutrons and are
thus included in the fitting results discussed in the next
section.
2.
Neutron capture gammas
The population of events occurring within the ER
band, as indicated by the blue contours, could originate
from a few different sources:
88Y γ-ray Compton scat-
ters, radiogenic backgrounds from β-decays of internal
contaminants, Compton scatters from decays in detector
materials, or γ-ray interactions resulting from neutron
captures. Though suffering from significantly lower ex-
posure, no 88Y γ-rays are present within the ER band of
the YMg dataset as shown in the middle panel of Fig. 4.
Even in the longer exposure background dataset, there
are no ER band events consistent with radiogenic ori-
gins present in the bottom panel of Fig. 4. Both of these
8
observations are consistent with expectations. For one,
the 88Y γ-rays are significantly mitigated by interven-
ing materials between source and this fiducialized tar-
get as predicted from simulations. And the γ-rays that
reach the active target are likely to either multiply scat-
ter and/or produce higher energy recoils. Low rates of
radiogenic backgrounds are also expected as small rates
are predicted in Ref. [17] with this analysis using an even
smaller fiducialized target.
Together, the YMg and background datasets strongly
suggest that the origin of these ER band events must be
specific to the YBe calibration and thus originate from
nuclear deexcitation γ-rays following neutron capture.
The Geant4 simulations of 152 keV and and 950 keV
neutron modes predict significant neutron captures on ti-
tanium in the cryostat, hydrogen or gadolinium present
in the outer detector, and the xenon itself [26]. The net
effect is the presence of deexcitation γ-rays which can
Compton scatter within the fiducial volume of this search
and pass all selection criteria. From these Geant4 sim-
ulations the predicted number of deexcitation gammas
entering the final selection, given the source activity, is
4 ± 3 events as compared to the 9 observed events within
and above the ER band. Given known mismodeling of
post-neutron capture γ-ray cascades [21], we do not take
this difference between expectation and observation to be
evidence of anything significant. These ER band events
also do not directly interfere with the observation of the
152 keV mode photoneutrons and do not impact the fit-
ting results discussed in the next section.
3.
Scatters in the gas phase
A population of background events below the NR band
appears to extend from higher S1 values down into the
152 keV mode photoneutron population. These events
are consistent with scatters in the xenon gas above the
anode. Such interactions are defined by their character-
istic drift times of ≤52 µs [5] and reduced S2 areas re-
sulting from a combination of lower drift field, lower elec-
troluminescence response from electrons drifting toward
the anode, and lower light collection efficiency.
These
events prototypically form an additional band below the
NR band, and are not a direct background during nom-
inal LZ WIMP searches as a result of the characteristic
drift time. Many of these events can be excluded by their
distinctly larger S2 TBA values. However, full exclusion
of these events is limited for small S2 pulses, due to fluc-
tuations in TBA values.
The population of gas events in the YBe dataset orig-
inate from a combination of radiogenic interactions (ei-
ther from γ-rays from detector materials or beta decays
from internal xenon contaminants), 88Y γ-rays, and de-
excitation γ-rays following neutron capture. The back-
ground dataset, without any calibration source present,
allows for constraining the rate of gas scatters originat-
ing from radiogenic interactions. The gas scatters in the
YMg dataset, given the rate of gas scatters of radiogenic
origin, then allow for constraining the gas scatter event
rate from 88Y γ-rays. Given these two measurements,
the relative ratio of gas scatters in the YBe dataset can
be understood.
One event consistent with a gas scatter remains in the
35 d exposure background dataset following all selections.
No events consistent with a gas scatter are observed in
the 0.8 d exposure YMg dataset. To enhance the rate
of gas interactions to better measure the relative rate
of each potential gas population, the S2 TBA criterion
is removed in each of the YBe, YMg, and background
datasets. This is shown as the band of gray points in
Fig. 4. The population of gas events with the S2 TBA
criterion removed, S1c > 10 phd, below the NR band,
and with S2 values consistent with the YBe selection
(log10S2c < 3.3) are used as a control region to estimate
the relative ratios of gas events within the YBe dataset
and the gas event rejection efficiency. In this region, 36,
4, and 5 additional events are revealed in the YBe, YMg,
and background datasets, respectively.
The relative ratios of the origin of gas interactions
within the YBe dataset, estimated using the control re-
gion, is found to be ∼2% from radiogenic scatters, ∼63%
from 88Y γ-rays, and ∼35% from deexcitation γ-rays.
Further, the gas event rejection efficiency is estimated to
be 90% by calculating the combined fraction of events in
the control region (45 out of 50) removed in the YBe,
YMg, and background datasets.
Given the predicted signal acceptance of the S2 TBA
criteria and estimated gas veto efficiency, the rate of
gas scatters in the YBe region, log10S2c < 3.3 phd and
S1c < 9 phd, is estimated to be < 5% of all events after
application of the S2 TBA criteria. Though the contri-
bution of gas scatters is subdominant, the low exposure
of the YMg dataset and lack of robust simulation of the
gas response to γ-rays limits our ability to further build
an accurate model for this background.
4.
Additional background considerations
In addition to the background populations discussed
above, instrumental backgrounds formed from the ran-
dom pairing of S1s and S2s within an event window, here-
after referred to as accidental coincidences, can mimic
physical scatters in the active liquid xenon and produce
a background underlying the YBe data. These are a con-
cern in this analysis given that many of the 152 keV
mode photoneutrons produce nuclear recoils below the
3-fold S1 coincidence threshold, thus elevating the S2-
only rate in the detector. To model these backgrounds in
situ, a sideband of events with unphysical drift time was
studied. As described in Ref. [17], unphysical drift time
events have reported drift times exceeding the maximum
measurable value (i.e. events originating from the cath-
ode). Therefore, these events must be formed by S1-S2
pairs that were not physically correlated with a scatter
9
in the active TPC indicating their origin as accidental
coincidences. The rate and spectra of these unphysical
drift time events provide a model for the accidental coin-
cidence backgrounds expected within the YBe selection.
An analogous analysis was performed, as in previous LZ
WIMP searches [5, 18], to measure the rate and spectra
of these accidental coincidence backgrounds. The same
suite of data selections were applied to a selection of un-
physical drift time events in a window from 1000 µs to
1900 µs, whose rate was then scaled to match the 44µs
drift window used in this analysis. The predicted rate
of these accidental coincidences in the YBe dataset is
1.5 ± 0.5 events. Critically, the selections defined above
reject these events with high efficiency and demonstrate
they are a subdominant contribution to the total number
of observed events.
IV.
LIGHT AND CHARGE YIELD ANALYSIS
A.
Methodology
The signal modeling, as described in Ref. [21], uti-
lizes the parametric Noble Element Simulation Technique
(NEST) [24] to convert deposited energy into excitation
and ionization signals. The LZ detector response, as val-
idated by LZ calibration data, specifically tritium de-
cays within the sub-volume used in this analysis, fur-
ther smears this response into observable scintillation and
electroluminescence signals. By incorporating the NEST
signal generation model and the detector response model,
measurements of the nuclear recoil light and charge yields
could be performed by fitting the YBe data in the region
log10S2c < 3.3 and S1c < 9 phd. The NR yield analysis
presented here focuses on determining the overall scale
of the charge yield and extracting a corresponding light
yield.
The NEST NR yield model is described in detail in
Ref. [27].
In short, NEST provides twelve parameters
governing the total number of quanta produced (Nq),
light yields (Ly), and charge yields (Qy) and eight pa-
rameters governing the fluctuations of the mean yields.
Nq is a simplified version of the Lindhard nuclear recoil
quenching model [28], employing a two-parameter power
law relation. Qy is described by a term governing the field
and density anti-correlation in light and charge yields (i.e.
recombination) and a two-parameter sigmoid controlling
the roll-off of the yield at sub-keV energies. Ly is defined
as the difference between Nq and Qy (hence the yields
are anti-correlated) with an additional factor allowing for
threshold behavior in Ly independent of Qy. NEST addi-
tionally contains parameters to allow for fluctuations in
the individual excitations and ionization yields and pa-
rameters to have potential non-binomial recombination
fluctuations.
Mean values and uncertainties for these parameters
are provided in Ref. [27], however, their degeneracy in
impacting the expected shape and rate of a given recoil
distribution must be assessed on a calibration-specific ba-
sis. In this analysis, the short exposure of the YBe cali-
bration, and hence sparse statistics in the final dataset,
limit the sensitivity to many of these parameters. Addi-
tionally, this calibration was performed at a single drift
field so no sensitivity to the field-dependent recombina-
tion terms is available.
The observed number of photons produced in 152 keV
neutron mode elastic NRs is dominated by upward fluc-
tuations in the number of excitation quanta such that
the S1 signal is dominated by 3-photon pileup. The re-
sulting S1 area spectrum is that of a Poisson distribution
with a mean of 3 photons, convolved with the detection
efficiency and the average single photoelectron response
of the PMTs. The model parameters governing the light-
yield (Ly) and photon fluctuations have the net effect of
renormalizing the S1 spectrum as opposed to changing
its shape. This limits the sensitivity to the Ly parameter
in the model. We expect this observation to hold true
even if using a reduced energy threshold by including 2-
fold S1s. On the other hand, the comparatively larger
number of electrons produced by the 152 keV neutron
mode elastic NRs offers significant sensitivity to Qy due
to the high collection efficiency of ionization quanta. And
unlike in the case of Ly, the sub-keV roll-off in Qy is well
constrained by existing data [10, 29].
This calibration analysis therefore aims to tune the
overall scale of Qy, while fixing all other model parame-
ters, in the sub-region S1c < 9 phd and log10S2c < 3.3.
The fitting procedure consisted of scanning different Qy
values, where for each value of Qy tested, a binned ex-
tended likelihood was fit simultaneously to the S1c and
S2c spectra.
The expected number of neutrons in the
dataset was the product of the analytically calculated
neutron rate and the neutron survival fraction deter-
mined from simulations.
The total uncertainty on the expected neutron counts
is the quadrature sum of the errors on the 88Y activ-
ity, γ-ray branching fraction, photoneutron cross-section,
and the systematic on detection efficiency, with the lat-
ter being the dominant component. This yielded a model
expectation of for the 152 keV mode of 173 ± 40 elastic
nuclear recoils, forming the normalization and constraint,
respectively, in the fitting procedure.
The 950 keV
mode YBe photoneutrons were also included in these fits.
Given their subdominant prediction of 4.6 ± 2.1 events in
this region, their spectrum was fixed, though in principle
it would be subject to the same variation in yields as the
152 keV mode neutron spectrum. An accidental back-
ground component was also included whose shape was
fixed and normalization was determined by the analysis
of unphysical drift time events. The best fit Qy model
and its uncertainty was found from minimizing the neg-
ative log-likelihood.
10
0
1
2
3
4
5
6
7
8
9
S1c [phd]
0
5
10
15
20
25
30
35
Counts/Bin
Data
Total Model
YBe 152 keV NRs
Accidentals Coincidences
YBe 950 keV NRs
250
500
750
1000
1250
1500
1750
2000
S2c [phd]
0
10
20
30
40
50
Counts/Bin
Data
Total Model
YBe 152 keV NRs
Accidentals Coincidences
YBe 950 keV NRs
FIG. 5. Comparison of the best fit of YBe 152 keV neutron mode elastic recoil model (dashed pink line) to data (black points)
shown in detected photons in S1c and S2c. This fitting is performed and shown in the sub-region with S1c < 9 phd and
log10S2c < 3.3. Also shown are the subdominant contributions from accidental coincidences (dashed blue) and YBe 950 keV
mode neutrons (green). The total model is shown as the solid purple line.
B.
Results
The best fit model is shown as the dashed pink line in
Fig. 5 with the total model shown in solid purple. The
YBe data shown in Fig. 4 within the sub-selection S1c <
9 phd and log10S2c < 3.3 are shown as black points.
The best fit numbers of events are 182 ± 12stat 152 keV
mode neutrons, 6.5 ± 3.8stat 950 keV mode neutrons,
and 1.6 ± 1.4stat accidental coincidence events. The best
fit neutron rate is extracted from the best fit number of
152 keV mode neutrons and found to be 29 ± 2stat.± 7sys.
n/s.
This is in good agreement with the analytically
predicted 34 ± 1 n/s described in Sec. II B. The best-fit
p-values for the S1c and S2c spectra are 0.97 and 0.06,
respectively.
The somewhat low p-value for the S2c fit is driven by
the contribution to the χ2 from the second-lowest S2c
bin. This region appears to be the most affected by the
gas event background. We attempted to obtain a side-
band population of these events in both the YMg and
background data samples, but were unable to obtain a
sufficient number of events.
Nevertheless, we estimate
that the contribution of gas events to the spectra shown
in Fig. 4 is < 5% of the total 193 events in the final selec-
tion. Further, these events tend toward the tails of the
distribution, and therefore have a minimal impact on the
best-fit Qy.
The impact of the systematics on the detection effi-
ciency on the best fit Qy were assessed by performing two
fits utilizing the ±1σ detection efficiencies from Fig. 3.
These systematics are dominated by the uncertainties
on the assessments of S2-based selection acceptance and
single-scatter reconstruction efficiency. The best-fit Qy
results remained unchanged when assuming the −1σ de-
tection efficiency, but with an improved p-value of 0.40.
In the case of assuming the optimistic +1σ detection
efficiency, the fitting procedure prefers a 6% lower Qy,
though with with a significantly worse p-value.
A key consideration in this analysis is to determine
the minimum recoil energy to which the calibration is
sensitive. We approached this following the method of
Ref. [10]. Using the best fit model and mean detection
efficiencies, the minimum recoil energy considered in the
model was cut-off at incrementally increasing energies;
for each energy cut-off the changes in chi-squares of the
S1c and S2c spectra were calculated. The minimum Ly
and Qy sensitivities are defined as the energy cut-off at
which ∆χ2 = 1. This occurred at 2.6 keVnr for Ly and
1.8 keVnr for Qy.
The higher energy threshold in Ly
as compared to Qy derives from the aforementioned fact
that the shape of the S1c is largely insensitive to the vari-
ations in yields at the lowest recoil energies, as described
in Sec. IV A.
The resulting observed Qy and Ly are shown as the red
points in Fig. 6 with respect to the mean (black) and 1σ
uncertainty (shaded gray) from the NEST model. The
best fit Qy is 12% lower than the NEST model at the
mean 152 keV mode YBe recoil energy of 3.1 keVnr, as
shown in Fig. 3. The widths of the error bars on the Qy
and Ly data points correspond to the energy range be-
tween the aforementioned minimum recoil energies this
calibration is sensitive to and the 4.6 keVnr endpoint of
the 152 keV neutron elastic recoil. The total uncertainty
on the measured Qy is the quadrature sum of the statis-
tical error from the fitting procedure, uncertainty on g2,
and uncertainty from varying the detection efficiency dis-
cussed above. The total uncertainty on Ly considers the
same set of uncertainties in addition to uncertainty on
the parameters describing Nq which are assumed to con-
vert from Qy to Ly. Also shown are yield measurements
at similar drift fields from previous experiments.
11
0.25
1.00
2.00
3.00
4.00
5.00
6.00
Nuclear Recoil Energy [keV]
3
4
5
6
7
8
9
10
11
12
Qy [electrons/keV]
NEST model Qy (193 V/cm)
± 1σ NEST Uncertainty
This Result (193 V/cm)
neriX 2018 (190 V/cm)
LLNL 2019 (220 V/cm)
LUX Run03 (180 V/cm)
LUX Run04 (400 V/cm)
0.25
1.00
2.00
3.00
4.00
5.00
6.00
Nuclear Recoil Energy [keV]
2
3
4
5
6
7
8
9
10
Ly [photons/keV]
NEST model Ly (193 V/cm)
± 1σ NEST Uncertainty
This Result (193 V/cm)
LUX Run03 (180 V/cm)
LUX Run04 (400 V/cm)
FIG. 6. The charge yield Qy and light yield Ly as determined from this YBe calibration are shown as red points. The best-fit
Qy and Ly are quoted at the the mean YBe recoil energy of 3.1 keVnr with a width determined by the minimum recoil energy
to which this calibration is sensitive and the 4.6 keVnr endpoint of the 152 keV photoneutron recoil. The default NEST model
prediction for Qy and Ly are shown as the black lines along with the ±1σ uncertainty in shaded gray band. This result is in
good agreement with the default NEST model and other previous experimental results [9, 10, 29–31].
V.
SUMMARY
This is the first photoneutron calibration deployed in
the LZ detector providing a key benchmark of the low
energy nuclear recoil response and showcasing the high
data quality achieved by the LZ experiment. Elastic nu-
clear recoils from the YBe 152 keV and 950 keV pho-
toneutron modes were successfully observed in the ex-
pected amounts. The dominant 152 keV neutron mode
allowed for calibration of the charge-yield, Qy, which is
in agreement with existing literature and NEST uncer-
tainties. Additionally, YMg calibrations were performed
with the same 88Y gamma source allowing for a cross-
calibration of the impact of 88Y γ-rays within the liquid
xenon, providing unambiguous evidence for the obser-
vation of low energy YBe photoneutron elastic recoils.
With background data, these two datasets revealed the
origin of the ER events in the YBe dataset to be γ-rays
originating from neutron captures within predicted ex-
pectations.
Future photoneutron calibrations within LZ would
benefit from longer exposures of both YBe and YMg cal-
ibrations to increase the level of statistics. This would
allow for comprehensively profiling the background due
to scatters in the xenon vapor above the anode, which are
presently the limiting factor in the Qy sensitivity of this
calibration.
Given the source rate, LZ geometry, and
YBe source configuration, a few weeks for each of the
YBe and YMg datasets would be sufficient. Clearly, it
would be desirable to reduce the detector energy thresh-
old by including 2-fold S1 signals. This is presently chal-
lenging due to accidental coincidence backgrounds. As
discussed in Sec.
IV A, it is also likely that including
2-fold S1 signals would have a limited effect on probing
lower-energy Ly. One may also consider including multi-
scatter neutron events if these are able to be sufficiently
modeled. Nevertheless, the positive identification of YBe
photoneutrons provides direct evidence that LZ has the
ability and sensitivity to observe nuclear recoils from low
mass (few GeV) dark matter and CEνNS interactions
from 8B solar neutrinos.
These demonstrated benefits
lend support to the inclusion of a photoneutron source in
calibration plans for next generation liquid xenon TPCs
designed to detect few-GeV mass WIMPs.
Acknowledgments - The research supporting this work
took place in part at the Sanford Underground Research
Facility (SURF) in Lead, South Dakota.
Funding
for this work is supported by the U.S. Department
of Energy, Office of Science, Office of High Energy
Physics under Contract Numbers DE-AC02-05CH11231,
DE-SC0020216, DE-SC0012704, DE-SC0010010, DE-
AC02-07CH11359,
DE-SC0015910,
DE-SC0014223,
DE-SC0010813,
DE-SC0009999,
DE-NA0003180,
DE-SC0011702, DE-SC0010072, DE-SC0006605, DE-
SC0008475,
DE-SC0019193,
DE-FG02-10ER46709,
UW PRJ82AJ, DE-SC0013542, DE-AC02-76SF00515,
DE-SC0018982, DE-SC0019066, DE-SC0015535, DE-
SC0019319, DE-SC0025629, DE-SC0024114, DE-AC52-
07NA27344,
& DE-SC0012447.
This research was
also supported by U.S. National Science Foundation
(NSF);
the
UKRI’s
Science
&
Technology
Facili-
ties Council under award numbers ST/W000490/1,
ST/W000482/1,
ST/W000636/1,
ST/W000466/1,
ST/W000628/1,
ST/W000555/1,
ST/W000547/1,
ST/W00058X/1,
ST/X508263/1,
ST/V506862/1,
ST/X508561/1,
ST/V507040/1,
ST/W507787/1,
ST/R003181/1,
ST/R003181/2,
ST/W507957/1,
ST/X005984/1, ST/X006050/1; Portuguese Foundation
for Science and Technology (FCT) under award numbers
12
PTDC/FIS-PAR/2831/2020;
the Institute for Basic
Science,
Korea (budget number IBS-R016-D1);
the
Swiss National Science Foundation (SNSF) under award
number 10001549.
This research was supported by
the Australian Government through the Australian
Research Council Centre of Excellence for Dark Matter
Particle Physics under award number CE200100008.
We
acknowledge
additional
support
from
the
UK
Science & Technology Facilities Council (STFC) for
PhD studentships and the STFC Boulby Underground
Laboratory in the U.K., the GridPP [32, 33] and IRIS
Collaborations, in particular at Imperial College London
and additional support by the University College London
(UCL) Cosmoparticle Initiative, and the University of
Zurich.
We acknowledge additional support from the
Center for the Fundamental Physics of the Universe,
Brown University. K.T. Lesko acknowledges the support
of Brasenose College and Oxford University.
The LZ
Collaboration acknowledges the key contributions of
Dr.
Sidney Cahn, Yale University, in the production
of calibration sources.
This research used resources
of the National Energy Research Scientific Computing
Center, a DOE Office of Science User Facility supported
by the Office of Science of the U.S. Department of
Energy under Contract No. DE-AC02-05CH11231. We
gratefully acknowledge support from GitLab through its
GitLab for Education Program. The University of Edin-
burgh is a charitable body, registered in Scotland, with
the registration number SC005336.
The assistance of
SURF and its personnel in providing physical access and
general logistical and technical support is acknowledged.
We acknowledge the South Dakota Governor’s office,
the South Dakota Community Foundation, the South
Dakota State University Foundation, and the University
of South Dakota Foundation for use of xenon. We also
acknowledge the University of Alabama for providing
xenon.
For the purpose of open access, the authors
have applied a Creative Commons Attribution (CC
BY) license to any Author Accepted Manuscript version
arising from this submission.
Finally, we respectfully
acknowledge that we are on the traditional land of
Indigenous American peoples and honor their rich
cultural heritage and enduring contributions.
Their
deep connection to this land and their resilience and
wisdom continue to inspire and enrich our community.
We commit to learning from and supporting their effort
as original stewards of this land and to preserve their
cultures and rights for a more inclusive and sustainable
future.
[1] D. Harvey, R. Massey, T. Kitching, A. Taylor, and E. Tit-
tley, The nongravitational interactions of dark matter in
colliding galaxy clusters, Science 347, 1462 (2015).
[2] D. Clowe et al., A Direct Empirical Proof of the Existence
of Dark Matter, Astrophys. J. Lett. 648, L109 (2006).
[3] L. Anderson et al. (BOSS), The clustering of galaxies in
the SDSS-III Baryon Oscillation Spectroscopic Survey:
baryon acoustic oscillations in the Data Releases 10 and
11 Galaxy samples, Mon. Not. Roy. Astron. Soc. 441, 24
(2014).
[4] N. Aghanim et al. (PLANCK), Planck 2018 results. VI.
Cosmological parameters, Astron. Astrophys. 641, A6
(2020), [Erratum: Astron.Astrophys. 652, C4 (2021)].
[5] J. Aalbers et al. (LZ), First Dark Matter Search Results
from the LUX-ZEPLIN (LZ) Experiment, Phys. Rev.
Lett. 131, 041002 (2023), arXiv:2207.03764 [hep-ex].
[6] D. S. Akerib et al., Snowmass2021 Cosmic Frontier Dark
Matter Direct Detection to the Neutrino Fog, in Snow-
mass 2021 (2022) arXiv:2203.08084 [hep-ex].
[7] D. S. Akerib et al. (LZ), Projected WIMP sensitivity of
the LUX-ZEPLIN dark matter experiment, Phys. Rev.
D 101, 052002 (2020), arXiv:1802.06039 [astro-ph.IM].
[8] B. G. Lenardo, J. Xu, S. Pereverzev, O. A. Akindele,
D. Naim, J. Kingston, A. Bernstein, K. Kazkaz, M. Tri-
pathi, C. Awe, L. Li, J. Runge, S. Hedges, P. An, and
P. S. Barbeau, Low-energy physics reach of xenon detec-
tors for nuclear-recoil-based dark matter and neutrino
experiments, Phys. Rev. Lett. 123, 231106 (2019).
[9] D. S. Akerib et al. (LUX), Low-energy (0.7-74 keV) nu-
clear recoil calibration of the LUX dark matter experi-
ment using D-D neutron scattering kinematics, (2016),
arXiv:1608.05381 [physics.ins-det].
[10] D. S. Akerib et al. (LUX), Nuclear Recoil Calibration at
Sub-keV Energies in LUX and Its Impact on Dark Matter
Search Sensitivity, Phys. Rev. Lett. 134, 061002 (2025),
arXiv:2210.05859 [physics.ins-det].
[11] E. Aprile et al. (XENON), Low-Energy Nuclear Recoil
Calibration of XENONnT with a 88YBe Photoneutron
Source, (2024), arXiv:2412.10451 [physics.ins-det].
[12] E. Aprile et al. (XENON), XENONnT WIMP Search:
Signal & Background Modeling and Statistical Inference,
(2024), arXiv:2406.13638 [physics.data-an].
[13] J. I. Collar, Applications of an 88Y/Be photo-neutron
calibration source to Dark Matter and Neutrino Ex-
periments,
Phys.
Rev.
Lett.
110,
211101
(2013),
arXiv:1303.2686 [physics.ins-det].
[14] J. Aalbers et al. (LZ), The design, implementation, and
performance of the LZ calibration systems, JINST 19
(08), P08027, arXiv:2406.12874 [physics.ins-det].
[15] A. E. Robinson, Reanalysis of radioisotope measure-
ments of the 9Be(γ, n)8Be cross-section, Phys. Rev. C
94, 024613 (2016), arXiv:1602.05911 [nucl-ex].
[16] D. S. Akerib et al. (LZ), The LUX-ZEPLIN (LZ) Ex-
periment, Nucl. Instrum. Meth. A 953, 163047 (2020),
arXiv:1910.09124 [physics.ins-det].
[17] J. Aalbers et al. (LZ), Background determination for
the LUX-ZEPLIN dark matter experiment, Phys. Rev.
D 108, 012010 (2023), arXiv:2211.17120 [hep-ex].
[18] J. Aalbers et al. (LZ), Dark Matter Search Results
from 4.2 Tonne-Years of Exposure of the LUX-ZEPLIN
(LZ) Experiment, Phys. Rev. Lett. 135, 011802 (2025),
arXiv:2410.17036 [hep-ex].
[19] J. Aalbers et al. (LZ), Search for new physics in low-
energy electron recoils from the first LZ exposure, Phys.
13
Rev. D 108, 072006 (2023), arXiv:2307.15753 [hep-ex].
[20] S. Agostinelli et al. (GEANT4 Collaboration), GEANT4:
A Simulation toolkit, Nucl. Instrum. Meth. A506, 250
(2003).
[21] D. S. Akerib et al. (LZ), Simulations of Events for the
LUX-ZEPLIN (LZ) Dark Matter Experiment, Astropart.
Phys. 125, 102480 (2021), arXiv:2001.09363 [physics.ins-
det].
[22] ENDF/B-VIII.0 the 8th major release of the nuclear re-
action data library with cielo-project cross sections, new
standards and thermal scattering data, Nucl. Data Sheets
148 (2018).
[23] D. S. Akerib et al. (LUX), Investigation of background
electron emission in the LUX detector, Phys. Rev. D 102,
092004 (2020), arXiv:2004.07791 [physics.ins-det].
[24] M. Szydagis et al., A Review of Basic Energy Reconstruc-
tion Techniques in Liquid Xenon and Argon Detectors for
Dark Matter and Neutrino Physics Using NEST, Instru-
ments 5, 13 (2021), arXiv:2102.10209 [hep-ex].
[25] P. Sorensen, Importance of upgraded energy reconstruc-
tion for direct dark matter searches with liquid xenon de-
tectors, Phys. Rev. D 86, 101301 (2012), arXiv:1208.5046
[astro-ph.CO].
[26] C. S. Amarasinghe, R. Coronel, D. Q. Huang, Y. Liu,
M. Arthurs, S. Steinfeld, W. Lorenzon, and R. Gaitskell,
Feasibility study to use neutron capture for an ultralow
energy nuclear-recoil calibration in liquid xenon, Phys.
Rev. D 106, 032007 (2022).
[27] M.
Szydagis
et
al.,
A
Review
of
NEST
Models
for
Liquid
Xenon
and
Exhaustive
Comparison
to
Other Approaches 10.3389/fdest.2024.1480975 (2022),
arXiv:2211.10726 [hep-ex].
[28] J. Lindhard, M. Scharff, and H. E. Schioett, Range con-
cepts and heavy ion ranges (notes on atomic collisions,
ii), Kgl. Danske Videnskab. Selskab. Mat. Fys. Medd.
Vol: 33: No. 14 (1963).
[29] B. Lenardo et al., Measurement of the ionization yield
from nuclear recoils in liquid xenon between 0.3 - 6
keV with single-ionization-electron sensitivity,
(2019),
arXiv:1908.00518 [physics.ins-det].
[30] E. Aprile, M. Anthony, Q. Lin, Z. Greene, P. de Perio,
F. Gao, J. Howlett, G. Plante, Y. Zhang, and T. Zhu,
Simultaneous measurement of the light and charge re-
sponse of liquid xenon to low-energy nuclear recoils at
multiple electric fields, Phys. Rev. D 98, 112003 (2018).
[31] D. Q. Huang, Ultra-Low Energy Calibration of the LUX
and LZ Dark Matter Detectors, Ph.D. thesis, Brown Uni-
versity (2020).
[32] P. Faulkner et al., Gridpp: development of the uk com-
puting grid for particle physics, J. Phys. G 32, N1 (2005).
[33] D. Britton et al., Gridpp: the uk grid for particle physics,
Philos. Trans. R. Soc. A 367, 2447 (2009).
|
Low-energy nuclear recoil calibration of the LUX-ZEPLIN experiment with a photoneutron source J. Aalbers,1, 2 D.S. Akerib,1, 2 A.K. Al Musalhi,3 F. Alder,3 C.S. Amarasinghe,4 A. Ames,1, 2 T.J. Anderson,1, 2 N. Angelides,5 H.M. Ara ́ujo,5 J.E. Armstrong,6 M. Arthurs,1, 2 A. Baker,5, 7 S. Balashov,8 J. Bang,9 J.W. Bargemann,4 E.E. Barillier,10, 11 K. Beattie,12 T. Benson,13 A. Bhatti,6 T.P. Biesiadzinski,1, 2 H.J. Birch,10, 11 E. Bishop,14 G.M. Blockinger,15 B. Boxer,16 C.A.J. Brew,8 P. Br ́as,17 S. Burdin,18 M.C. Carmona-Benitez,19 M. Carter,18 A. Chawla,20 H. Chen,12 Y.T. Chin,19 N.I. Chott,21 N.I. Chott,21 M.V. Converse,22 S. Contreras,23 R. Coronel,1, 2 A. Cottle,3 G. Cox,24 D. Curran,24 C.E. Dahl,25, 26 I. Darlington,3 S. Dave,3 A. David,3 J. Delgaudio,24 S. Dey,27 L. de Viveiros,19 L. Di Felice,5 C. Ding,9 J.E.Y. Dobson,7 E. Druszkiewicz,22 S. Dubey,9 C.L. Dunbar,24 S.R. Eriksen,28 A. Fan,1, 2 N.M. Fearon,27 N. Fieldhouse,27 S. Fiorucci,12 H. Flaecher,28 E.D. Fraser,18 T.M.A. Fruth,29 R.J. Gaitskell,9 A. Geffre,24 J. Genovesi,19 C. Ghag,3 A. Ghosh,15 R. Gibbons,12, 30 S. Gokhale,31 J. Green,27 M.G.D.van der Grinten,8 J.J. Haiston,21 C.R. Hall,6 T. Hall,18 S. Han,1, 2 E. Hartigan-O'Connor,9 S.J. Haselschwardt,10 M.A. Hernandez,10, 11 S.A. Hertel,32 G.J. Homenides,33 M. Horn,24 D.Q. Huang,23 D. Hunt,27, 34 E. Jacquet,5 R.S. James,3 M.K. K,15 A.C. Kaboth,20 A.C. Kamaha,23 D. Khaitan,22 A. Khazov,8 J. Kim,4 Y.D. Kim,35 J. Kingston,16 R. Kirk,9 D. Kodroff,12, ∗E.V. Korolkova,36 H. Kraus,27 S. Kravitz,34 L. Kreczko,28 V.A. Kudryavtsev,36 C. Lawes,7 D.S. Leonard,35 K.T. Lesko,12 C. Levy,15 J. Lin,12, 30 A. Lindote,17 W.H. Lippincott,4 J. Long,25 M.I. Lopes,17 W. Lorenzon,10 C. Lu,9 S. Luitz,1, 2 P.A. Majewski,8 A. Manalaysay,12 R.L. Mannino,37 C. Maupin,24 M.E. McCarthy,22 G. McDowell,10 D.N. McKinsey,12, 30 J. McLaughlin,25 J.B. Mclaughlin,3 R. McMonigle,15 B. Mitra,25 E. Mizrachi,6, 37 M.E. Monzani,1, 2, 38 E. Morrison,21 B.J. Mount,39 M. Murdy,32 A.St.J. Murphy,14 H.N. Nelson,4 F. Neves,17 A. Nguyen,14 C.L. O'Brien,34 I. Olcina,12, 30 K.C. Oliver-Mallory,5 J. Orpwood,36 K.Y Oyulmaz,14 K.J. Palladino,27 J. Palmer,20 N.J. Pannifer,28 N. Parveen,15 S.J. Patton,12 B. Penning,10, 11 G. Pereira,17 E. Perry,3 T. Pershing,37 A. Piepke,33 Y. Qie,22 J. Reichenbacher,21 C.A. Rhyne,9 G.R.C. Rischbieter,10, 11 E. Ritchey,6 H.S. Riyat,14 R. Rosero,31 T. Rushton,36 D. Rynders,24 D. Santone,20, 27 A.B.M.R. Sazzad,33 R.W. Schnee,21 G. Sehr,34 B. Shafer,6 S. Shaw,14 K. Shi,10 T. Shutt,1, 2 J.J. Silk,6 C. Silva,17 G. Sinev,21 J. Siniscalco,3 A.M. Slivar,33 R. Smith,12, 30 V.N. Solovov,17 P. Sorensen,12, † J. Soria,12, 30 I. Stancu,33 A. Stevens,3, 5 T.J. Sumner,5 A. Swain,27 M. Szydagis,15 D.R. Tiedt,24 M. Timalsina,12 Z. Tong,5 D.R. Tovey,36 J. Tranter,36 M. Trask,4 M. Tripathi,16 K. Trengrove,15 A. Us ́on,14 A.C. Vaitkus,9 O. Valentino,5 V. Velan,12 A. Wang,1, 2 J.J. Wang,33 Y. Wang,12, 30 L. Weeldreyer,4 T.J. Whitis,4 K. Wild,19 M. Williams,10 W.J. Wisniewski,1 L. Wolf,20 F.L.H. Wolfs,22 S. Woodford,18 D. Woodward,12, 19 C.J. Wright,28 Q. Xia,12 J. Xu,37 Y. Xu,23 M. Yeh,31 D. Yeum,6 W. Zha,19 H. Zhang,14 and T. Zhang12 (The LUX-ZEPLIN (LZ) Collaboration) 1SLAC National Accelerator Laboratory, Menlo Park, CA 94025-7015, USA 2Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305-4085 USA 3University College London (UCL), 1E 6BT, UK 4 93106-9530, USA 5Imperial College London, Physics Department, Blackett Laboratory, London SW7 2AZ, UK 6 20742-4111, USA 7King's College London, King's College London, 2R 2LS, UK 8STFC Rutherford Appleton Laboratory (RAL), Didcot, OX11 0QX, UK 9Brown University, 02912-9037, USA 10 48109-1040, USA 11 8057 Zurich, Switzerland 12Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA 94720-8099, USA 13 -Madison, 53706-1390, USA 14 9 3FD, UK 15University at Albany (SUNY), 12222-0100, USA 16 95616-5270, USA 17Laborat ́orio de Instrumenta ̧c ̃ao e F ́ısica Experimental de Part ́ıculas (LIP), -3004 516 Coimbra, Portugal 18 69 7ZE, UK 19Pennsylvania State University, 16802-6300, USA 20Royal Holloway, 20 0EX, UK 21South Dakota 57701-3901, USA 22 14627-0171, USA 18 Sep 2025 2 23 90095-1547 24South Dakota Science and Technology Authority (SDSTA), Sanford Underground Research Facility, Lead, SD 57754-1700, USA 25Northwestern University, 60208-3112, USA 26Fermi National Accelerator Laboratory (FNAL), Batavia, IL 60510-5011, USA 27 1 3RH, UK 28 .H. Wills Physics Laboratory, Bristol, BS8 1TL, UK 29The 2006, Australia 30 94720-7300, USA 31Brookhaven National Laboratory (BNL), Upton, NY 11973-5000, USA 32 01003-9337, USA 33 34587-0324, USA 34 78712-1192, USA 35IBS Center for Underground Physics (CUP), Yuseong-gu, Daejeon, Korea 36 3 7RH, UK 37Lawrence Livermore National Laboratory (LLNL), Livermore, CA 94550-9698, USA 38Vatican Observatory, Castel Gandolfo, V-00120, Vatican City State 39Black Hills State University, 57799-0002, USA (Dated: September 23, 2025) The LZ experiment is a liquid xenon time-projection chamber (TPC) searching for evidence of particle dark matter interactions. In the simplest assumption of elastic scattering, many dark matter models predict an energy spectrum which rises quasi-exponentially with decreasing energy transfer to a target atom. LZ expects to detect coherent neutrino-nucleus scattering of 8B solar neutrinos, the signal from which is very similar to a dark matter particle with mass of about 5.5 GeV/c2, which result in typical nuclear recoil energies of 3000 photons detected (phd) [17]. The TPC and the Skin Detector are housed inside an Inner Cryostat Vessel (ICV) and an Outer Cryostat Vessel (OCV). The OCV is surrounded by acrylic vessels filled with gadolinium-loaded liquid scintillator, as part of the Outer Detector (OD). The OCV and OD are housed inside a water tank with the water continuously purified. The LZ calibration strategy, including the photoneutron source, is described in detail in Refs. [14, 16]. The YBe source internals and deployment location within the LZ detector are depicted in Fig. 1 as recreated from Ref. [14]. As shown in the left panel, the YBe photoneutron source consists of an 88Y sealed disk source of 4.7 mm diameter and 4.6 mm height placed between natural beryllium metal disks of 2.54 cm diameter and a total 3 cm height. A nickel layer with 24 μm thickness is plated on the surface of the beryllium metal disks for safe handling. The disks are stacked in a cylinder which is then inserted into a tungsten shield block of 20 cm diameter and 20 cm height. An image of the fully constructed and assembled source is shown in the middle panel of Fig. 1. Prior to the calibration, analytic calculations were performed to determine the neutronto-gamma ratio produced in this source configuration. That is, for each emitted γ-ray from the 88Y source, how often is a photoneutron produced in the Be-metal. The neutron-to-gamma ratio was found to be 1 in 9990 for the 1.8361 MeV γ-ray and 1 in 10986 for the 2.7340 MeV γ-ray, given the 88Y γ-ray energies and respective photoneutron cross-sections. The right most panel of Fig. 1 depicts how the YBe source is deployed within the LZ experiment and can be compared to other implementations in large-scale liquid xenon TPCs [11]. To perform this calibration, the top of the water tank is opened, and the tungsten shield block is lowered into the LZ water tank through an opening in the OD acrylic vessels, until it rests on a purpose-built indent in the top center of the outer cryostat depicted as the orange block in the right panel of Fig. 1. The water tank is then promptly closed to minimize air ingress. During low-background data taking, this indent in the outer cryostat is occupied by a small cylindrical acrylic vessel filled with gadolinium-doped liquid scintillator (referred to as the "GdLS plug") to complete the OD coverage. Thus when a calibration is completed, the YBe source and tungsten shielding is replaced with the GdLS plug. The distance from the 88Y source to the gate electrode is 86.7 cm: whereby the neutrons must traverse through the titanium cryostat vessels, top PMT array, and xenon gas before reaching the active liquid xenon volume. This configuration minimizes the amount of high density material through which the neutrons must traverse before reaching liquid xenon in the top of the TPC. The tungsten shielding below the source and detector materials provide shielding to help mitigate the 88Y γ-rays. B. Calibration Operations Several small but key parameter changes were made to the experimental conditions between the first science data reported in Ref. [5] and the deployment of the photoneutron source. Specifically, both the cathode and gate electrode potentials were set an additional -2 kV farther from ground, and the anode potential was moved 2.5 kV closer to ground. These changes preserved the nominal 193 V/cm electric field across the liquid xenon target, and decreased the potential difference between the gate and the anode by 0.5 kV. Notably, the drift field is larger than the 97 V/cm field reported in more recent LZ dark matter searches [18]. The scintillation photon and ionization electron gains, g1 and g2 were characterized in this grid configuration using tritium calibration events in an identical spatial selection as used in this analysis. The change in grid configuration decreased both the electroluminescence yield and the electron extraction probability, resulting in an ∼25% decrease in the single electron size leading to g2 = 34.9 ± 0.6 phd/electron. The trigger threshold was also adjusted in order to retain the same 4 FIG. 1. Left: CAD drawing for the YBe photoneutron source assembly. The 88Y sealed source is sandwiched between 9Be disks to generate neutrons. Middle: YBe source inside the tungsten shielding placed next to a five-dollar bill for scale. Right: layout of the top part of the outer detector acrylic tanks (1) in both the green and purple volumes and water tank (2). The custom cut-out (3) in the top acrylic tank through which the YBe source is deployed is shown. The outer (4) and inner (5) titanium cryostats are also indicated in the figure. Figure is recreated from Ref. [14]. detection efficiency to few electron events. The value for g1 remains unchanged from that of the first physics search with g1 = 0.114 ± 0.003 phd/photon. Dedicated spatial corrections were also performed allowing for the definition of the standardized corrected quantities S1c and S2c used throughout this analysis. Following the first science data collection from LZ [5, 17, 19], the YBe source was deployed with an initial 88Y activity of 0.339 MBq with 3% uncertainty, and 135 hours of calibration data were recorded. Given the source activity during the calibration, the predicted neutron-togamma ratio, and the respective branching fractions, the expected neutron rates emitted from the Be-metal are 34 ± 1 n/s for the 152 keV neutrons and 0.22 ± 0.03 n/s for the 950 keV neutrons. About 21 days after these data were recorded, the beryllium metal disks were replaced with magnesium metal disks of the same dimension, the so-called YMg source. The 88Y source, with activity then at 0.295 MBq (given t1/2 = 106.6 d) was deployed again to the same location and 24 hours of data were recorded. The γ-ray attenuation properties of magnesium are within 5% of beryllium, but γ-rays from 88Y decays are below the energy threshold for photoneutron production in magnesium. This step, therefore, provided an important "beam-off" configuration in which the background from 88Y decays could be characterized in situ without the presence of neutrons. C. Simulation of the YBe Source The as-built YBe source, configuration, and location were all modeled in Geant4 [20] and included in the same simulation framework as described in Ref. [21]. The 88Y γ-rays and YBe photoneutrons are simulated as being isotropically emitted from the 88Y source and Be-metal, respectively. These γ-rays and neutrons are then allowed to propagate through the simulated LZ detector before reaching the active liquid xenon volume where the recoils of interest occur. Particle interactions in the active xenon volumes are then converted into observed quantities, e.g. S1 and S2 signals and reconstructed positions. More details on this detector response are described in Sec. IV. These dedicated simulations indicate that the emitted photoneutrons have their energy spectra degraded from interactions in the intervening materials between the Bemetal where the neutron is produced and the active liquid xenon where the neutron is detected. Of the emitted 152(950) keV neutrons, approximately 2(9)% produce recordable signals in the TPC. The primary sources of energy loss are from interactions with the titanium cryostat vessels and the stainless steel and PTFE within the inner cryostat. The resulting energy spectra of these neutrons entering the TPC is slowly falling between zero and the emitted neutron energy. These same intervening materials are extremely effective at shielding the 88Y γ-rays; simulations predict less than 1 in 104 88Y decays will result in energy deposited in the active liquid xenon. As a result, the neutron-to-gamma ratio for interactions in the TPC is roughly 1 in 40 as compared to the ratio emitted from the source of roughly 1 in 10000. III. DATA SELECTION AND ANALYSIS A. Data Selection The data selection criteria were largely based on what was used for the first LZ dark matter search [5] with a few modifications. In brief, this included a requirement 5 0 202 302 402 502 602 702 Reconstructed R2 [cm2] 0 2 4 6 8 10 12 Distance Below Gate Grid [cm] FIG. 2. Candidate single-scatter nuclear recoils from the photoneutron source after all data selection criteria (black points) alongside all single-scatter events (gray points) prior to application of the fiducial volume (red-dashed line) defined in Sec. III. Also shown are events removed by the prompt veto (red points) and not delayed-tagged (blue points). The reconstructed wall position is shown as the solid gray line. of single-scatter events with a 3-fold PMT coincidence, photon hit patterns consistent with genuine single-scatter particle interactions in the TPC, a trigger hold-off of 20 seconds after passing muon events, and removal of time periods with a full data acquisition buffer. The region of interest (ROI) for this analysis was chosen to be 0 8 phd, which is essentially background free, these simulations predict 6 ± 2 events which is consistent with the 9 events observed. The systematic uncertainty in this prediction is driven by the systematics on the predicted 950 keV neutron rate and estimated neutron detection efficiency. Confirmation of this prediction serves as an in situ validation of both the simulations and calculations used throughout this analysis. The lack of NR band events present in the YMg and background datasets further confirms their origin as the 950 keV YBe photoneutrons. Despite their lower rate and typically higher recoil energies with respect to the 152 keV mode photoneutrons, these 950 keV mode neutrons do form a background to the 152 keV mode photoneutrons and are thus included in the fitting results discussed in the next section. 2. Neutron capture gammas The population of events occurring within the ER band, as indicated by the blue contours, could originate from a few different sources: 88Y γ-ray Compton scatters, radiogenic backgrounds from β-decays of internal contaminants, Compton scatters from decays in detector materials, or γ-ray interactions resulting from neutron captures. Though suffering from significantly lower exposure, no 88Y γ-rays are present within the ER band of the YMg dataset as shown in the middle panel of Fig. 4. Even in the longer exposure background dataset, there are no ER band events consistent with radiogenic origins present in the bottom panel of Fig. 4. Both of these 8 observations are consistent with expectations. For one, the 88Y γ-rays are significantly mitigated by intervening materials between source and this fiducialized target as predicted from simulations. And the γ-rays that reach the active target are likely to either multiply scatter and/or produce higher energy recoils. Low rates of radiogenic backgrounds are also expected as small rates are predicted in Ref. [17] with this analysis using an even smaller fiducialized target. Together, the YMg and background datasets strongly suggest that the origin of these ER band events must be specific to the YBe calibration and thus originate from nuclear deexcitation γ-rays following neutron capture. The Geant4 simulations of 152 keV and and 950 keV neutron modes predict significant neutron captures on titanium in the cryostat, hydrogen or gadolinium present in the outer detector, and the xenon itself [26]. The net effect is the presence of deexcitation γ-rays which can Compton scatter within the fiducial volume of this search and pass all selection criteria. From these Geant4 simulations the predicted number of deexcitation gammas entering the final selection, given the source activity, is 4 ± 3 events as compared to the 9 observed events within and above the ER band. Given known mismodeling of post-neutron capture γ-ray cascades [21], we do not take this difference between expectation and observation to be evidence of anything significant. These ER band events also do not directly interfere with the observation of the 152 keV mode photoneutrons and do not impact the fitting results discussed in the next section. 3. Scatters in the gas phase A population of background events below the NR band appears to extend from higher S1 values down into the 152 keV mode photoneutron population. These events are consistent with scatters in the xenon gas above the anode. Such interactions are defined by their characteristic drift times of ≤52 μs [5] and reduced S2 areas resulting from a combination of lower drift field, lower electroluminescence response from electrons drifting toward the anode, and lower light collection efficiency. These events prototypically form an additional band below the NR band, and are not a direct background during nominal LZ WIMP searches as a result of the characteristic drift time. Many of these events can be excluded by their distinctly larger S2 TBA values. However, full exclusion of these events is limited for small S2 pulses, due to fluctuations in TBA values. The population of gas events in the YBe dataset originate from a combination of radiogenic interactions (either from γ-rays from detector materials or beta decays from internal xenon contaminants), 88Y γ-rays, and deexcitation γ-rays following neutron capture. The background dataset, without any calibration source present, allows for constraining the rate of gas scatters originating from radiogenic interactions. The gas scatters in the YMg dataset, given the rate of gas scatters of radiogenic origin, then allow for constraining the gas scatter event rate from 88Y γ-rays. Given these two measurements, the relative ratio of gas scatters in the YBe dataset can be understood. One event consistent with a gas scatter remains in the 35 d exposure background dataset following all selections. No events consistent with a gas scatter are observed in the 0.8 d exposure YMg dataset. To enhance the rate of gas interactions to better measure the relative rate of each potential gas population, the S2 TBA criterion is removed in each of the YBe, YMg, and background datasets. This is shown as the band of gray points in Fig. 4. The population of gas events with the S2 TBA criterion removed, S1c > 10 phd, below the NR band, and with S2 values consistent with the YBe selection (log10S2c < 3.3) are used as a control region to estimate the relative ratios of gas events within the YBe dataset and the gas event rejection efficiency. In this region, 36, 4, and 5 additional events are revealed in the YBe, YMg, and background datasets, respectively. The relative ratios of the origin of gas interactions within the YBe dataset, estimated using the control region, is found to be ∼2% from radiogenic scatters, ∼63% from 88Y γ-rays, and ∼35% from deexcitation γ-rays. Further, the gas event rejection efficiency is estimated to be 90% by calculating the combined fraction of events in the control region (45 out of 50) removed in the YBe, YMg, and background datasets. Given the predicted signal acceptance of the S2 TBA criteria and estimated gas veto efficiency, the rate of gas scatters in the YBe region, log10S2c < 3.3 phd and S1c < 9 phd, is estimated to be < 5% of all events after application of the S2 TBA criteria. Though the contribution of gas scatters is subdominant, the low exposure of the YMg dataset and lack of robust simulation of the gas response to γ-rays limits our ability to further build an accurate model for this background. 4. Additional background considerations In addition to the background populations discussed above, instrumental backgrounds formed from the random pairing of S1s and S2s within an event window, hereafter referred to as accidental coincidences, can mimic physical scatters in the active liquid xenon and produce a background underlying the YBe data. These are a concern in this analysis given that many of the 152 keV mode photoneutrons produce nuclear recoils below the 3-fold S1 coincidence threshold, thus elevating the S2only rate in the detector. To model these backgrounds in situ, a sideband of events with unphysical drift time was studied. As described in Ref. [17], unphysical drift time events have reported drift times exceeding the maximum measurable value (i.e. events originating from the cathode). Therefore, these events must be formed by S1-S2 pairs that were not physically correlated with a scatter 9 in the active TPC indicating their origin as accidental coincidences. The rate and spectra of these unphysical drift time events provide a model for the accidental coincidence backgrounds expected within the YBe selection. An analogous analysis was performed, as in previous LZ WIMP searches [5, 18], to measure the rate and spectra of these accidental coincidence backgrounds. The same suite of data selections were applied to a selection of unphysical drift time events in a window from 1000 μs to 1900 μs, whose rate was then scaled to match the 44μs drift window used in this analysis. The predicted rate of these accidental coincidences in the YBe dataset is 1.5 ± 0.5 events. Critically, the selections defined above reject these events with high efficiency and demonstrate they are a subdominant contribution to the total number of observed events. IV. LIGHT AND CHARGE YIELD ANALYSIS A. Methodology The signal modeling, as described in Ref. [21], utilizes the parametric Noble Element Simulation Technique (NEST) [24] to convert deposited energy into excitation and ionization signals. The LZ detector response, as validated by LZ calibration data, specifically tritium decays within the sub-volume used in this analysis, further smears this response into observable scintillation and electroluminescence signals. By incorporating the NEST signal generation model and the detector response model, measurements of the nuclear recoil light and charge yields could be performed by fitting the YBe data in the region log10S2c < 3.3 and S1c < 9 phd. The NR yield analysis presented here focuses on determining the overall scale of the charge yield and extracting a corresponding light yield. The NEST NR yield model is described in detail in Ref. [27]. In short, NEST provides twelve parameters governing the total number of quanta produced (Nq), light yields (Ly), and charge yields (Qy) and eight parameters governing the fluctuations of the mean yields. Nq is a simplified version of the Lindhard nuclear recoil quenching model [28], employing a two-parameter power law relation. Qy is described by a term governing the field and density anti-correlation in light and charge yields (i.e. recombination) and a two-parameter sigmoid controlling the roll-off of the yield at sub-keV energies. Ly is defined as the difference between Nq and Qy (hence the yields are anti-correlated) with an additional factor allowing for threshold behavior in Ly independent of Qy. NEST additionally contains parameters to allow for fluctuations in the individual excitations and ionization yields and parameters to have potential non-binomial recombination fluctuations. Mean values and uncertainties for these parameters are provided in Ref. [27], however, their degeneracy in impacting the expected shape and rate of a given recoil distribution must be assessed on a calibration-specific basis. In this analysis, the short exposure of the YBe calibration, and hence sparse statistics in the final dataset, limit the sensitivity to many of these parameters. Additionally, this calibration was performed at a single drift field so no sensitivity to the field-dependent recombination terms is available. The observed number of photons produced in 152 keV neutron mode elastic NRs is dominated by upward fluctuations in the number of excitation quanta such that the S1 signal is dominated by 3-photon pileup. The resulting S1 area spectrum is that of a Poisson distribution with a mean of 3 photons, convolved with the detection efficiency and the average single photoelectron response of the PMTs. The model parameters governing the lightyield (Ly) and photon fluctuations have the net effect of renormalizing the S1 spectrum as opposed to changing its shape. This limits the sensitivity to the Ly parameter in the model. We expect this observation to hold true even if using a reduced energy threshold by including 2fold S1s. On the other hand, the comparatively larger number of electrons produced by the 152 keV neutron mode elastic NRs offers significant sensitivity to Qy due to the high collection efficiency of ionization quanta. And unlike in the case of Ly, the sub-keV roll-off in Qy is well constrained by existing data [10, 29]. This calibration analysis therefore aims to tune the overall scale of Qy, while fixing all other model parameters, in the sub-region S1c < 9 phd and log10S2c < 3.3. The fitting procedure consisted of scanning different Qy values, where for each value of Qy tested, a binned extended likelihood was fit simultaneously to the S1c and S2c spectra. The expected number of neutrons in the dataset was the product of the analytically calculated neutron rate and the neutron survival fraction determined from simulations. The total uncertainty on the expected neutron counts is the quadrature sum of the errors on the 88Y activity, γ-ray branching fraction, photoneutron cross-section, and the systematic on detection efficiency, with the latter being the dominant component. This yielded a model expectation of for the 152 keV mode of 173 ± 40 elastic nuclear recoils, forming the normalization and constraint, respectively, in the fitting procedure. The 950 keV mode YBe photoneutrons were also included in these fits. Given their subdominant prediction of 4.6 ± 2.1 events in this region, their spectrum was fixed, though in principle it would be subject to the same variation in yields as the 152 keV mode neutron spectrum. An accidental background component was also included whose shape was fixed and normalization was determined by the analysis of unphysical drift time events. The best fit Qy model and its uncertainty was found from minimizing the negative log-likelihood. 10 0 1 2 3 4 5 6 7 8 9 S1c [phd] 0 5 10 15 20 25 30 35 Counts/Bin Data Total Model YBe 152 keV NRs Accidentals Coincidences YBe 950 keV NRs 250 500 750 1000 1250 1500 1750 2000 S2c [phd] 0 10 20 30 40 50 Counts/Bin Data Total Model YBe 152 keV NRs Accidentals Coincidences YBe 950 keV NRs FIG. 5. Comparison of the best fit of YBe 152 keV neutron mode elastic recoil model (dashed pink line) to data (black points) shown in detected photons in S1c and S2c. This fitting is performed and shown in the sub-region with S1c < 9 phd and log10S2c < 3.3. Also shown are the subdominant contributions from accidental coincidences (dashed blue) and YBe 950 keV mode neutrons (green). The total model is shown as the solid purple line. B. Results The best fit model is shown as the dashed pink line in Fig. 5 with the total model shown in solid purple. The YBe data shown in Fig. 4 within the sub-selection S1c < 9 phd and log10S2c < 3.3 are shown as black points. The best fit numbers of events are 182 ± 12stat 152 keV mode neutrons, 6.5 ± 3.8stat 950 keV mode neutrons, and 1.6 ± 1.4stat accidental coincidence events. The best fit neutron rate is extracted from the best fit number of 152 keV mode neutrons and found to be 29 ± 2stat.± 7sys. n/s. This is in good agreement with the analytically predicted 34 ± 1 n/s described in Sec. II B. The best-fit p-values for the S1c and S2c spectra are 0.97 and 0.06, respectively. The somewhat low p-value for the S2c fit is driven by the contribution to the χ2 from the second-lowest S2c bin. This region appears to be the most affected by the gas event background. We attempted to obtain a sideband population of these events in both the YMg and background data samples, but were unable to obtain a sufficient number of events. Nevertheless, we estimate that the contribution of gas events to the spectra shown in Fig. 4 is < 5% of the total 193 events in the final selection. Further, these events tend toward the tails of the distribution, and therefore have a minimal impact on the best-fit Qy. The impact of the systematics on the detection efficiency on the best fit Qy were assessed by performing two fits utilizing the ±1σ detection efficiencies from Fig. 3. These systematics are dominated by the uncertainties on the assessments of S2-based selection acceptance and single-scatter reconstruction efficiency. The best-fit Qy results remained unchanged when assuming the -1σ detection efficiency, but with an improved p-value of 0.40. In the case of assuming the optimistic +1σ detection efficiency, the fitting procedure prefers a 6% lower Qy, though with with a significantly worse p-value. A key consideration in this analysis is to determine the minimum recoil energy to which the calibration is sensitive. We approached this following the method of Ref. [10]. Using the best fit model and mean detection efficiencies, the minimum recoil energy considered in the model was cut-off at incrementally increasing energies; for each energy cut-off the changes in chi-squares of the S1c and S2c spectra were calculated. The minimum Ly and Qy sensitivities are defined as the energy cut-off at which ∆χ2 = 1. This occurred at 2.6 keVnr for Ly and 1.8 keVnr for Qy. The higher energy threshold in Ly as compared to Qy derives from the aforementioned fact that the shape of the S1c is largely insensitive to the variations in yields at the lowest recoil energies, as described in Sec. IV A. The resulting observed Qy and Ly are shown as the red points in Fig. 6 with respect to the mean (black) and 1σ uncertainty (shaded gray) from the NEST model. The best fit Qy is 12% lower than the NEST model at the mean 152 keV mode YBe recoil energy of 3.1 keVnr, as shown in Fig. 3. The widths of the error bars on the Qy and Ly data points correspond to the energy range between the aforementioned minimum recoil energies this calibration is sensitive to and the 4.6 keVnr endpoint of the 152 keV neutron elastic recoil. The total uncertainty on the measured Qy is the quadrature sum of the statistical error from the fitting procedure, uncertainty on g2, and uncertainty from varying the detection efficiency discussed above. The total uncertainty on Ly considers the same set of uncertainties in addition to uncertainty on the parameters describing Nq which are assumed to convert from Qy to Ly. Also shown are yield measurements at similar drift fields from previous experiments. 11 0.25 1.00 2.00 3.00 4.00 5.00 6.00 Nuclear Recoil Energy [keV] 3 4 5 6 7 8 9 10 11 12 Qy [electrons/keV] NEST model Qy (193 V/cm) ± 1σ NEST Uncertainty This Result (193 V/cm) neriX 2018 (190 V/cm) LLNL 2019 (220 V/cm) LUX Run03 (180 V/cm) LUX Run04 (400 V/cm) 0.25 1.00 2.00 3.00 4.00 5.00 6.00 Nuclear Recoil Energy [keV] 2 3 4 5 6 7 8 9 10 Ly [photons/keV] NEST model Ly (193 V/cm) ± 1σ NEST Uncertainty This Result (193 V/cm) LUX Run03 (180 V/cm) LUX Run04 (400 V/cm) FIG. 6. The charge yield Qy and light yield Ly as determined from this YBe calibration are shown as red points. The best-fit Qy and Ly are quoted at the the mean YBe recoil energy of 3.1 keVnr with a width determined by the minimum recoil energy to which this calibration is sensitive and the 4.6 keVnr endpoint of the 152 keV photoneutron recoil. The default NEST model prediction for Qy and Ly are shown as the black lines along with the ±1σ uncertainty in shaded gray band. This result is in good agreement with the default NEST model and other previous experimental results [9, 10, 29-31]. V. SUMMARY This is the first photoneutron calibration deployed in the LZ detector providing a key benchmark of the low energy nuclear recoil response and showcasing the high data quality achieved by the LZ experiment. Elastic nuclear recoils from the YBe 152 keV and 950 keV photoneutron modes were successfully observed in the expected amounts. The dominant 152 keV neutron mode allowed for calibration of the charge-yield, Qy, which is in agreement with existing literature and NEST uncertainties. Additionally, YMg calibrations were performed with the same 88Y gamma source allowing for a crosscalibration of the impact of 88Y γ-rays within the liquid xenon, providing unambiguous evidence for the observation of low energy YBe photoneutron elastic recoils. With background data, these two datasets revealed the origin of the ER events in the YBe dataset to be γ-rays originating from neutron captures within predicted expectations. Future photoneutron calibrations within LZ would benefit from longer exposures of both YBe and YMg calibrations to increase the level of statistics. This would allow for comprehensively profiling the background due to scatters in the xenon vapor above the anode, which are presently the limiting factor in the Qy sensitivity of this calibration. Given the source rate, LZ geometry, and YBe source configuration, a few weeks for each of the YBe and YMg datasets would be sufficient. Clearly, it would be desirable to reduce the detector energy threshold by including 2-fold S1 signals. This is presently challenging due to accidental coincidence backgrounds. As discussed in Sec. IV A, it is also likely that including 2-fold S1 signals would have a limited effect on probing lower-energy Ly. One may also consider including multiscatter neutron events if these are able to be sufficiently modeled. Nevertheless, the positive identification of YBe photoneutrons provides direct evidence that LZ has the ability and sensitivity to observe nuclear recoils from low mass (few GeV) dark matter and CEνNS interactions from 8B solar neutrinos. These demonstrated benefits lend support to the inclusion of a photoneutron source in calibration plans for next generation liquid xenon TPCs designed to detect few-GeV mass WIMPs. Acknowledgments - The research supporting this work took place in part at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. Funding for this work is supported by the U.S. -AC02-05CH11231, DE-SC0020216, DE-SC0012704, DE-SC0010010, DEAC02-07CH11359, DE-SC0015910, DE-SC0014223, DE-SC0010813, DE-SC0009999, DE-NA0003180, DE-SC0011702, DE-SC0010072, DE-SC0006605, DESC0008475, DE-SC0019193, DE-FG02-10ER46709, UW PRJ82AJ, DE-SC0013542, DE-AC02-76SF00515, DE-SC0018982, DE-SC0019066, DE-SC0015535, DESC0019319, DE-SC0025629, DE-SC0024114, DE-AC5207NA27344, & DE-SC0012447. This research was also supported by U.S. National Science Foundation (NSF); the UKRI's Science & Technology Facilities Council under award numbers ST/W000490/1, ST/W000482/1, ST/W000636/1, ST/W000466/1, ST/W000628/1, ST/W000555/1, ST/W000547/1, ST/W00058X/1, ST/X508263/1, ST/V506862/1, ST/X508561/1, ST/V507040/1, ST/W507787/1, ST/R003181/1, ST/R003181/2, ST/W507957/1, ST/X005984/1, ST/X006050/1; Portuguese Foundation for Science and Technology (FCT) under award numbers 12 PTDC/FIS-PAR/2831/2020; the Institute for Basic Science, Korea (budget number IBS-R016-D1); the Swiss National Science Foundation (SNSF) under award number 10001549. This research was supported by the Australian Government through the Australian Research Council Centre of Excellence for Dark Matter Particle Physics under award number CE200100008. We acknowledge additional support from the UK Science & Technology Facilities Council (STFC) for PhD studentships and the STFC Boulby Underground Laboratory in the U.K., the GridPP [32, 33] and IRIS Collaborations, in particular at Imperial College London and additional support by the University College London (UCL) Cosmoparticle Initiative, and the . We acknowledge additional support from the Center for the Fundamental Physics of the Universe, Brown University. K.T. Lesko acknowledges the support of Brasenose College and Oxford University. The LZ Collaboration acknowledges the key contributions of Dr. Sidney Cahn, Yale University, in the production of calibration sources. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. . DE-AC02-05CH11231. We gratefully acknowledge support from GitLab through its GitLab for Education Program. The - burgh is a charitable body, registered in Scotland, with the registration number SC005336. The assistance of SURF and its personnel in providing physical access and general logistical and technical support is acknowledged. We acknowledge the South Dakota Governor's office, the South Dakota Community Foundation, the South Dakota State University Foundation, and the . We also acknowledge the . For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission. Finally, we respectfully acknowledge that we are on the traditional land of Indigenous American peoples and honor their rich cultural heritage and enduring contributions. Their deep connection to this land and their resilience and wisdom continue to inspire and enrich our community. We commit to learning from and supporting their effort as original stewards of this land and to preserve their cultures and rights for a more inclusive and sustainable future. [1] D. Harvey, R. Massey, T. Kitching, A. Taylor, and E. Tittley, The nongravitational interactions of dark matter in colliding galaxy clusters, Science 347, 1462 (2015). [2] D. Clowe et al., A Direct Empirical Proof of the Existence of Dark Matter, Astrophys. J. Lett. 648, L109 (2006). [3] L. Anderson et al. (BOSS), The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: baryon acoustic oscillations in the Data Releases 10 and 11 Galaxy samples, Mon. Not. Roy. Astron. Soc. 441, 24 (2014). [4] N. Aghanim et al. (PLANCK), Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)]. [5] J. Aalbers et al. (LZ), First Dark Matter Search Results from the LUX-ZEPLIN (LZ) Experiment, Phys. Rev. Lett. 131, 041002 (2023), . [6] D. S. Akerib et al., Snowmass2021 Cosmic Frontier Dark Matter Direct Detection to the Neutrino Fog, in Snowmass 2021 (2022) . [7] D. S. Akerib et al. (LZ), Projected WIMP sensitivity of the LUX-ZEPLIN dark matter experiment, Phys. Rev. D 101, 052002 (2020), . [8] B. G. Lenardo, J. Xu, S. Pereverzev, O. A. Akindele, D. Naim, J. Kingston, A. Bernstein, K. Kazkaz, M. Tripathi, C. Awe, L. Li, J. Runge, S. Hedges, P. An, and P. S. Barbeau, Low-energy physics reach of xenon detectors for nuclear-recoil-based dark matter and neutrino experiments, Phys. Rev. Lett. 123, 231106 (2019). [9] D. S. Akerib et al. (LUX), Low-energy (0.7-74 keV) nuclear recoil calibration of the LUX dark matter experiment using D-D neutron scattering kinematics, (2016), . [10] D. S. Akerib et al. (LUX), Nuclear Recoil Calibration at Sub-keV Energies in LUX and Its Impact on Dark Matter Search Sensitivity, Phys. Rev. Lett. 134, 061002 (2025), . [11] E. Aprile et al. (XENON), Low-Energy Nuclear Recoil Calibration of XENONnT with a 88YBe Photoneutron Source, (2024), . [12] E. Aprile et al. (XENON), XENONnT WIMP Search: Signal & Background Modeling and Statistical Inference, (2024), . [13] J. I. Collar, Applications of an 88Y/Be photo-neutron calibration source to Dark Matter and Neutrino Experiments, Phys. Rev. Lett. 110, 211101 (2013), . [14] J. Aalbers et al. (LZ), The design, implementation, and performance of the LZ calibration systems, JINST 19 (08), P08027, . [15] A. E. Robinson, Reanalysis of radioisotope measurements of the 9Be(γ, n)8Be cross-section, Phys. Rev. C 94, 024613 (2016), . [16] D. S. Akerib et al. (LZ), The LUX-ZEPLIN (LZ) Experiment, Nucl. Instrum. Meth. A 953, 163047 (2020), . [17] J. Aalbers et al. (LZ), Background determination for the LUX-ZEPLIN dark matter experiment, Phys. Rev. D 108, 012010 (2023), . [18] J. Aalbers et al. (LZ), Dark Matter Search Results from 4.2 Tonne-Years of Exposure of the LUX-ZEPLIN (LZ) Experiment, Phys. Rev. Lett. 135, 011802 (2025), . [19] J. Aalbers et al. (LZ), Search for new physics in lowenergy electron recoils from the first LZ exposure, Phys. 13 Rev. D 108, 072006 (2023), . [20] S. Agostinelli et al. (GEANT4 Collaboration), GEANT4: A Simulation toolkit, Nucl. Instrum. Meth. A506, 250 (2003). [21] D. S. Akerib et al. (LZ), Simulations of Events for the LUX-ZEPLIN (LZ) Dark Matter Experiment, Astropart. Phys. 125, 102480 (2021), . [22] ENDF/B-VIII.0 the 8th major release of the nuclear reaction data library with cielo-project cross sections, new standards and thermal scattering data, Nucl. Data Sheets 148 (2018). [23] D. S. Akerib et al. (LUX), Investigation of background electron emission in the LUX detector, Phys. Rev. D 102, 092004 (2020), . [24] M. Szydagis et al., A Review of Basic Energy Reconstruction Techniques in Liquid Xenon and Argon Detectors for Dark Matter and Neutrino Physics Using NEST, Instruments 5, 13 (2021), . [25] P. Sorensen, Importance of upgraded energy reconstruction for direct dark matter searches with liquid xenon detectors, Phys. Rev. D 86, 101301 (2012), . [26] C. S. Amarasinghe, R. Coronel, D. Q. Huang, Y. Liu, M. Arthurs, S. Steinfeld, W. Lorenzon, and R. Gaitskell, Feasibility study to use neutron capture for an ultralow energy nuclear-recoil calibration in liquid xenon, Phys. Rev. D 106, 032007 (2022). [27] M. Szydagis et al., A Review of NEST Models for Liquid Xenon and Exhaustive Comparison to Other Approaches 10.3389/fdest.2024.1480975 (2022), . [28] J. Lindhard, M. Scharff, and H. E. Schioett, Range concepts and heavy ion ranges (notes on atomic collisions, ii), Kgl. Danske Videnskab. Selskab. Mat. Fys. Medd. Vol: 33: No. 14 (1963). [29] B. Lenardo et al., Measurement of the ionization yield from nuclear recoils in liquid xenon between 0.3 - 6 keV with single-ionization-electron sensitivity, (2019), . [30] E. Aprile, M. Anthony, Q. Lin, Z. Greene, P. de Perio, F. Gao, J. Howlett, G. Plante, Y. Zhang, and T. Zhu, Simultaneous measurement of the light and charge response of liquid xenon to low-energy nuclear recoils at multiple electric fields, Phys. Rev. D 98, 112003 (2018). [31] D. Q. Huang, Ultra-Low Energy Calibration of the LUX and LZ Dark Matter Detectors, Ph.D. thesis, Brown University (2020). [32] P. Faulkner et al., Gridpp: development of the uk computing grid for particle physics, J. Phys. G 32, N1 (2005). [33] D. Britton et al., Gridpp: the uk grid for particle physics, Philos. Trans. R. Soc. A 367, 2447 (2009).
|
2509.16270
|
Equivalence of Halting Problem to
Convergence of Power Series
Antonio Joaquim Fernandes1
1FCT, Universidade de Cabo Verde
September 23, 2025
Abstract
This paper establishes an equivalence between the halting prob-
lem in computability theory and the convergence of power series in
mathematical analysis. We demonstrate that for any given computer
program, one can algorithmically construct a power series whose con-
vergence behavior is equivalent to the program’s halting status. The
construction is explicit and leverages the program’s state transitions
to define the series’ coefficients. Specifically, we show that a power
series that converges everywhere corresponds to a non-halting pro-
gram, while a power series that diverges everywhere except at the
origin corresponds to a halting program. This result reveals deep con-
nections between computation and analysis and provides insights into
the undecidability of convergence problems in mathematics. While
the undecidability of convergence problems was known from previous
work, our specific construction provides a particularly clear and direct
demonstration of this phenomenon. Furthermore, it opens avenues for
exploring the boundaries of decidability within classical analysis.
1
Introduction
The Halting problem, proven undecidable by Alan Turing in 1936 [10], rep-
resents a cornerstone of computability theory. It establishes that no general
algorithm exists to determine whether an arbitrary computer program will
eventually halt or run indefinitely.
In mathematical analysis, determining the convergence of power series is a
classical problem with no universal decision procedure, despite the existence
1
arXiv:2509.16270v1 [cs.LO] 18 Sep 2025
of various convergence tests [7, 6, 11]. The connection between these two
fundamental concepts has been explored in various forms in the literature,
though our specific construction appears to be novel.
This paper bridges these two fundamental concepts by demonstrating
their essential equivalence. We show that the Halting Problem can be re-
duced to determining the convergence of appropriately constructed power
series, and vice versa. This connection provides an analytic interpretation
of computability and highlights the computational complexity inherent in
analytical problems.
The connection between computability theory and analysis has been ex-
plored by several researchers. The most relevant previous work includes:
1.1
Computable Analysis
The field of computable analysis, pioneered by mathematicians such as Grze-
gorczyk [4], Lacombe [5], and later developed by Pour-El and Richards [6],
established foundations for understanding which analytical objects are com-
putable.
These works demonstrated that many problems in analysis are
undecidable or non-computable.
1.2
Specker Sequences
Ernst Specker [9] constructed the first examples of computable monotone
bounded sequences that do not converge to a computable real number. This
result demonstrated early connections between computability and conver-
gence.
1.3
Rice’s Theorem and Its Implications
Rice’s Theorem [7] established that all non-trivial semantic properties of pro-
grams are undecidable. This result has implications for convergence prob-
lems, as it suggests that determining whether a computable sequence con-
verges is undecidable in general.
1.4
Recent Work
More recently, researchers such as Weihrauch [11] and Brattka [1] have devel-
oped a more rigorous framework of computable analysis, providing sophisti-
cated tools for understanding the computability of analytical problems.
While these works established important foundations, our specific con-
struction showing a direct equivalence between the Halting Problem and
2
power series convergence appears to be novel in its formulation and simplic-
ity.
2
Preliminaries
2.1
Decidability of formal systems
A mathematical problem is considered decidable if there exists an effective
method (a recursive function or a Turing machine) that can correctly deter-
mine the answer to all instances of the problem within a finite number of
steps.
While elementary arithmetic operations and many algebraic problems
are decidable, more complex mathematical domains often contain undecid-
able problems which arises not from insufficient mathematical knowledge but
from fundamental computational limitations. Thus, decidability serves as a
crucial boundary separating computationally tractable problems from those
that fundamentally resist algorithmic solution, regardless of mathematical
sophistication.
Kurt G¨odel’s incompleteness theorems [3] were foundational to mathe-
matical logic and demonstrated limitations of formal systems but the direct
proof of the undecidability of the Halting Problem belongs to Church and
Turing.
The paradigmatic example of an undecidable problem is Hilbert’s Entschei-
dungsproblem (decision problem), asking whether there exists an algorithm
that can determine the validity of any given first-order logic statement.
Alonzo Church [2], in 1936, used lambda calculus to show that no general
procedure exists to decide first-order logical validity.
Alan Turing [10], in the same year, independently proved the undecid-
ability of the Entscheidungsproblem using his conceptualization of Turing
machines and demonstrating the unsolvability of the Halting Problem, that
no general procedure can determine whether an arbitrary computer program
will eventually halt or run indefinitely.
This dual proof established that in Programming as well as in formal
framework of mathematical logic, there exist well-formed questions that can-
not be resolved by any mechanical procedure.
2.2
Computability Theory
While it is also possible to use lambda calculus, we adopt the Turing machine
model of computation.
3
Definition 1. A program P is any finite mathematical object that specifies
a partial function f : N →N (or between other countable sets) such that:
• The program can be effectively encoded as a natural number (G¨odel
numbering)
• There exists a universal program/interpreter U such that for any pro-
gram P and input x:
U(P, x) = fP(x)
where fP is the function computed by program P
• The set of all programs P is recursively enumerable
Let P denote the set of all programs (assuming that the set P is not itself
a program). For a program P ∈P and input x, we write P(x) ↓if P halts
on input x, and P(x) ↑if it does not.
Definition 2 (Halting Problem). The Halting Problem is the decision prob-
lem that determines, given a program P and an input x, whether P(x) ↓.
Theorem 1 (Turing [10]). The Halting Problem is undecidable.
2.3
Power Series and Effective Convergence
Definition 3. A power series, with center at c = 0, is a formal expression
of the type:
S(z) =
∞
X
n=0
anzn,
where an are complex coefficients and z is a complex variable.
Definition 4. A power series S(z) converges at z0 if the sequence of partial
sums SN(z0) = PN
n=0 anzn
0 converges as N →∞.
The radius of convergence R is given by:
R =
1
lim supn→∞|an|1/n.
The series converges absolutely for |z| < R and diverges for |z| > R.
Definition 5. A function f : N →N is computable if there exists a Turing
machine (or equivalent computational model) that, given input n ∈N, halts
and outputs f(n).
4
Definition 6. A real number x is computable if there exists a computable
function f : N →Q such that for all n ∈N:
|f(n) −x| < 2−n
That is, there exists an algorithm that can approximate x to within any de-
sired precision.
Equivalently, a real number x is computable if its decimal (or binary)
expansion is computable, meaning there exists an algorithm that, given n,
outputs the n-th digit of x.
Definition 7. A sequence {sn} of real numbers converges effectively to a
limit L if there exists a computable function e : N →N such that for all
n ∈N:
k ≥e(n) ⇒|sk −L| < 2−n
That is, the rate of convergence is computably bounded.
Definition 8. A sequence of functions {fn} converges effectively uniformly
to f on a domain D if there exists a computable function e : N →N such
that for all n ∈N and all z ∈D:
k ≥e(n) ⇒|fk(z) −f(z)| < 2−n
Definition 9. A power series S(z) = P∞
n=0 anzn is effectively computable
on a domain D if:
1. The coefficients {an} form a computable sequence
2. The series converges effectively uniformly on compact subsets of D
3. The rate of convergence is computable: there exists a computable func-
tion N : N × R →N such that for all m ∈N and all z in any compact
K ⊂D:
k ≥N(m, max
z∈K |z|) ⇒
k
X
n=0
anzn −S(z)
< 2−m
Theorem 2 (Effective Convergence Criterion). A power series S(z) = P∞
n=0 anzn
with computable coefficients converges effectively on the disk |z| < R if there
exists a computable function M : N →N such that for all n ≥M(k):
|an|1/n < 1
R + 2−k
5
Proof Sketch. The result follows from the Cauchy-Hadamard theorem and
effective analysis techniques. The computable function M provides an effec-
tive bound on the growth rate of the coefficients, which allows us to compute
the rate of convergence uniformly on compact subsets of |z| < R.
For a detailed proof, see Theorem 2.3.4 [6] or Section 6.2 [11].
Algorithm 1 Effective computation of power series partial sums
1: procedure EffectivePartialSum(z, m, {an}, N)
2:
▷N(m, |z|) is the computable convergence rate function
3:
kmax ←N(m, |z|)
4:
sum ←0
5:
for n = 0 to kmax do
6:
sum ←sum + an · zn
7:
end for
8:
return sum
▷Guaranteed |sum −S(z)| < 2−m
9: end procedure
Effective convergence is essential for actual computation. Without a com-
putable rate of convergence, we cannot guarantee the accuracy of approxi-
mations, even if the series theoretically converges.
3
From Halting to Power Series Convergence
We now construct a power series whose convergence behavior encodes the
halting status of a given program.
3.1
Encoding Program Execution
Let P be a program and x an input. Define a function f : N →{0, 1} that
tracks the program’s execution:
f(n) =
(
1
if P(x) has halted by step n,
0
otherwise.
By construction, P(x) ↓if and only if there exists some n0 such that
f(n) = 1 for all n ≥n0.
6
3.2
Power Series Construction
Define the coefficients:
an =
(
0
if f(n) = 0 (program has not halted by step n),
n!
if f(n) = 1 (program has halted by step n).
Consider the power series:
S(z) =
∞
X
n=0
anzn.
Theorem 3. The power series S(z) satisfies:
1. If P(x) ↑(program does not halt), then S(z) = 0 for all z ∈C (con-
verges everywhere).
2. If P(x) ↓(program halts), then S(z) converges only at z = 0 and
diverges for all z ̸= 0.
Proof. We consider both cases:
Case 1: P(x) ↑. Then f(n) = 0 for all n, so an = 0 for all n. Thus
S(z) = 0, which converges for all z ∈C.
Case 2: P(x) ↓. Let n0 be the step at which P(x) halts. Then:
• For n < n0: f(n) = 0, so an = 0.
• For n ≥n0: f(n) = 1, so an = n!.
Thus:
S(z) =
∞
X
n=n0
n!zn.
At z = 0, S(0) = 0 (converges). For z ̸= 0, apply the ratio test:
an+1zn+1
anzn
= (n + 1)!|z|
n!
= (n + 1)|z| →∞as n →∞.
Hence, the series diverges for all z ̸= 0.
Corollary 1. Determining whether S(z) converges for some z ̸= 0 is equiv-
alent to solving the Halting Problem for P and x.
7
Proof. The equivalence follows directly from the construction in Theorem 1:
(⇒) If we can determine that S(z) converges for some z ̸= 0, then by
Theorem 1(2), this implies P(x) ↑(the program does not halt).
(⇐) Conversely, if we can solve the Halting Problem and determine that
P(x) ↑, then by Theorem 1(1), S(z) converges for all z ∈C, and in particular
for some z ̸= 0.
The reduction is computable: given any program P and input x, we can
effectively construct the power series S(z) with coefficients defined by:
an =
(
0
if P(x) has not halted by step n
n!
if P(x) has halted by step n
Corollary 2. Determining whether S(z) converges for some z ̸= 0 is unde-
cidable.
Proof. Since the Halting Problem is undecidable, it follows that determining
whether S(z) converges for some z ̸= 0 is also undecidable. Moreover, any
algorithm that could decide convergence of S(z) for some z ̸= 0 could be
used to solve the Halting Problem, establishing the equivalence.
4
From Power Series to Halting
We now show the reverse reduction: given a power series, we can construct
a program whose halting behavior depends on the series’ convergence.
Consider a computable power series S(z) = P∞
n=0 anzn where the coeffi-
cients an are computable real numbers. We focus on convergence at z = 1.
4.1
Program Construction
Define a program Q that does the following:
1. For N = 1, 2, 3, . . .:
• Compute the partial sum SN(1) = PN
n=0 an.
• Check if |SN(1)| > N (divergence criterion).
• If yes, halt and return ”divergent”.
Theorem 4. If S(1) diverges, then Q halts. If S(1) converges, then Q may
not halt.
8
Proof. If S(1) diverges, the partial sums SN(1) are unbounded. Thus, even-
tually |SN(1)| > N for some N, causing Q to halt.
If S(1) converges, the partial sums are bounded. However, the bound
may be unknown, and Q may continue checking indefinitely without ever
satisfying |SN(1)| > N.
To establish a stronger equivalence, we use a more sophisticated construc-
tion:
Theorem 5. There exists a computable transformation that, given a power
series S(z), produces a program PS such that:
1. S(1) converges if and only if PS does not halt.
2. S(1) diverges if and only if PS halts.
Proof. [8, 11] We provide a detailed construction showing how to reduce
the convergence problem for power series to the Halting Problem. The key
insight is that convergence of a series is a Π2 statement in the arithmetical
hierarchy, while the Halting Problem is Σ1-complete.
Step 1: Arithmetical Hierarchy Analysis
Let S(1) = P∞
n=0 an be a power series with computable coefficients eval-
uated at z = 1. The statement ”S(1) converges” can be expressed as:
∀ϵ > 0 ∃N ∈N ∀m, n ≥N : |Sm(1) −Sn(1)| < ϵ
where Sk(1) = Pk
n=0 an. This is a Π2 statement in the arithmetical hierarchy.
Conversely, the Halting Problem ”P(x) ↓” is Σ1-complete, meaning it can
be expressed as:
∃t ∈N : Program P halts on input x within t steps
Step 2: Constructing the Reduction
Given a power series S(z) = P∞
n=0 anzn with computable coefficients, we
construct a program PS as follows:
1. For each k = 1, 2, 3, . . . (dovetailing):
(a) Compute partial sums Sm(1) = Pm
n=0 an for m = 1, . . . , k
(b) Check if there exists N ≤k such that for all m, n ≥N with
m, n ≤k:
|Sm(1) −Sn(1)| < 2−k
9
(c) If no such N exists for this k, then halt and output “divergent”
Step 3: Correctness Proof
(⇒) If S(1) diverges, then by the Cauchy criterion, there exists some
ϵ > 0 such that for all N, there exist m, n ≥N with |Sm(1) −Sn(1)| ≥ϵ.
Eventually, the program will discover such a violation and halt.
(⇐) If S(1) converges, then by the Cauchy criterion, for every ϵ > 0 there
exists N such that for all m, n ≥N, |Sm(1) −Sn(1)| < ϵ. The program will
never find evidence of divergence and will run indefinitely.
Step 4: Complexity-Theoretic Interpretation
This reduction shows that:
{convergent power series at z = 1} ∈Π2
and
{divergent power series at z = 1} ∈Σ2
while the reduction to the Halting Problem (a Σ1-complete set) demonstrates
the computational difficulty of the convergence problem.
5
Equivalence and Implications
Theorem 6. The following problems are equivalent in computational diffi-
culty:
1. The Halting Problem.
2. Determining whether an arbitrary power series converges at z ̸= 0.
3. Determining whether an arbitrary power series converges at z = 1.
Proof. The constructions in Sections 4 and 5 provide reductions between
these problems. Specifically:
• Section 4 reduces the Halting Problem to power series convergence.
• Section 5 reduces power series convergence to the Halting Problem.
Since both reductions are computable, the problems are equivalent.
Corollary 3. The problem of determining convergence of arbitrary power
series is undecidable.
10
6
Conclusion
We have established a fundamental equivalence between the Halting Problem
and the convergence of power series. This result demonstrates deep connec-
tions between computability theory and mathematical analysis, showing that
undecidability manifests naturally in analytical problems. Our constructions
provide concrete methods for translating between computational and analyt-
ical frameworks, offering new perspectives on both fields.
While the undecidability of convergence problems was known in princi-
ple from earlier work, our specific construction provides a particularly clear
and direct demonstration of this phenomenon. The ability to transform any
halting problem into a convergence question about a power series, and vice
versa, reveals the fundamental computational nature of analytical problems.
References
[1] Brattka, V., & Hertling, P. (2008). Handbook of Computability and Com-
plexity in Analysis. Springer.
[2] Church, A. (1936). A note on the Entscheidungsproblem. The Journal of
Symbolic Logic, 1(1), 40-41.
[3] G¨odel, K. (1931). ¨Uber formal unentscheidbare S¨atze der Principia Math-
ematica und verwandter Systeme I. Monatshefte f¨ur Mathematik und
Physik, 38, 173-198.
[4] Grzegorczyk, A. (1955). Computable functionals. Fundamenta Mathe-
maticae, 42, 168-202.
[5] Lacombe, D. (1955). Extension de la notion de fonction r´ecursive
aux fonctions d’une ou plusieurs variables r´eelles. Comptes Rendus de
l’Acad´emie des Sciences, 241, 13-14.
[6] Pour-El, M. B., & Richards, J. I. (1989). Computability in Analysis and
Physics. Springer-Verlag.
[7] Rice, H. G. (1953). Classes of recursively enumerable sets and their deci-
sion problems. Transactions of the American Mathematical Society, 74(2),
358-366.
[8] Rogers, H. (1967). Theory of Recursive Functions and Effective Com-
putability. McGraw-Hill.
11
[9] Specker, E. (1949). Nicht konstruktiv beweisbare S¨atze der Analysis. The
Journal of Symbolic Logic, 14(3), 145-158.
[10] Turing, A. M. (1936). On computable numbers, with an application
to the Entscheidungsproblem. Proceedings of the London Mathematical
Society, 2(1), 230-265.
[11] Weihrauch, K. (2000). Computable Analysis: An Introduction. Springer-
Verlag.
12
|
Equivalence of Halting Problem to Convergence of Power Series Antonio Joaquim Fernandes1 1FCT, Universidade de Cabo Verde September 23, 2025 Abstract This paper establishes an equivalence between the halting problem in computability theory and the convergence of power series in mathematical analysis. We demonstrate that for any given computer program, one can algorithmically construct a power series whose convergence behavior is equivalent to the program's halting status. The construction is explicit and leverages the program's state transitions to define the series' coefficients. Specifically, we show that a power series that converges everywhere corresponds to a non-halting program, while a power series that diverges everywhere except at the origin corresponds to a halting program. This result reveals deep connections between computation and analysis and provides insights into the undecidability of convergence problems in mathematics. While the undecidability of convergence problems was known from previous work, our specific construction provides a particularly clear and direct demonstration of this phenomenon. Furthermore, it opens avenues for exploring the boundaries of decidability within classical analysis. 1 Introduction The Halting problem, proven undecidable by Alan Turing in 1936 [10], represents a cornerstone of computability theory. It establishes that no general algorithm exists to determine whether an arbitrary computer program will eventually halt or run indefinitely. In mathematical analysis, determining the convergence of power series is a classical problem with no universal decision procedure, despite the existence 1 18 Sep 2025 of various convergence tests [7, 6, 11]. The connection between these two fundamental concepts has been explored in various forms in the literature, though our specific construction appears to be novel. This paper bridges these two fundamental concepts by demonstrating their essential equivalence. We show that the Halting Problem can be reduced to determining the convergence of appropriately constructed power series, and vice versa. This connection provides an analytic interpretation of computability and highlights the computational complexity inherent in analytical problems. The connection between computability theory and analysis has been explored by several researchers. The most relevant previous work includes: 1.1 Computable Analysis The field of computable analysis, pioneered by mathematicians such as Grzegorczyk [4], Lacombe [5], and later developed by Pour-El and Richards [6], established foundations for understanding which analytical objects are computable. These works demonstrated that many problems in analysis are undecidable or non-computable. 1.2 Specker Sequences Ernst Specker [9] constructed the first examples of computable monotone bounded sequences that do not converge to a computable real number. This result demonstrated early connections between computability and convergence. 1.3 Rice's Theorem and Its Implications Rice's Theorem [7] established that all non-trivial semantic properties of programs are undecidable. This result has implications for convergence problems, as it suggests that determining whether a computable sequence converges is undecidable in general. 1.4 Recent Work More recently, researchers such as Weihrauch [11] and Brattka [1] have developed a more rigorous framework of computable analysis, providing sophisticated tools for understanding the computability of analytical problems. While these works established important foundations, our specific construction showing a direct equivalence between the Halting Problem and 2 power series convergence appears to be novel in its formulation and simplicity. 2 Preliminaries 2.1 Decidability of formal systems A mathematical problem is considered decidable if there exists an effective method (a recursive function or a Turing machine) that can correctly determine the answer to all instances of the problem within a finite number of steps. While elementary arithmetic operations and many algebraic problems are decidable, more complex mathematical domains often contain undecidable problems which arises not from insufficient mathematical knowledge but from fundamental computational limitations. Thus, decidability serves as a crucial boundary separating computationally tractable problems from those that fundamentally resist algorithmic solution, regardless of mathematical sophistication. Kurt G ̈odel's incompleteness theorems [3] were foundational to mathematical logic and demonstrated limitations of formal systems but the direct proof of the undecidability of the Halting Problem belongs to Church and Turing. The paradigmatic example of an undecidable problem is Hilbert's Entscheidungsproblem (decision problem), asking whether there exists an algorithm that can determine the validity of any given first-order logic statement. Alonzo Church [2], in 1936, used lambda calculus to show that no general procedure exists to decide first-order logical validity. Alan Turing [10], in the same year, independently proved the undecidability of the Entscheidungsproblem using his conceptualization of Turing machines and demonstrating the unsolvability of the Halting Problem, that no general procedure can determine whether an arbitrary computer program will eventually halt or run indefinitely. This dual proof established that in Programming as well as in formal framework of mathematical logic, there exist well-formed questions that cannot be resolved by any mechanical procedure. 2.2 Computability Theory While it is also possible to use lambda calculus, we adopt the Turing machine model of computation. 3 Definition 1. A program P is any finite mathematical object that specifies a partial function f : N →N (or between other countable sets) such that: • The program can be effectively encoded as a natural number (G ̈odel numbering) • There exists a universal program/interpreter U such that for any program P and input x: U(P, x) = fP(x) where fP is the function computed by program P • The set of all programs P is recursively enumerable Let P denote the set of all programs (assuming that the set P is not itself a program). For a program P ∈P and input x, we write P(x) ↓if P halts on input x, and P(x) ↑if it does not. Definition 2 (Halting Problem). The Halting Problem is the decision problem that determines, given a program P and an input x, whether P(x) ↓. Theorem 1 (Turing [10]). The Halting Problem is undecidable. 2.3 Power Series and Effective Convergence Definition 3. A power series, with center at c = 0, is a formal expression of the type: S(z) = ∞ X n=0 anzn, where an are complex coefficients and z is a complex variable. Definition 4. A power series S(z) converges at z0 if the sequence of partial sums SN(z0) = PN n=0 anzn 0 converges as N →∞. The radius of convergence R is given by: R = 1 lim supn→∞|an|1/n. The series converges absolutely for |z| R. Definition 5. A function f : N →N is computable if there exists a Turing machine (or equivalent computational model) that, given input n ∈N, halts and outputs f(n). 4 Definition 6. A real number x is computable if there exists a computable function f : N →Q such that for all n ∈N: |f(n) -x| N (divergence criterion). • If yes, halt and return "divergent". Theorem 4. If S(1) diverges, then Q halts. If S(1) converges, then Q may not halt. 8 Proof. If S(1) diverges, the partial sums SN(1) are unbounded. Thus, eventually |SN(1)| > N for some N, causing Q to halt. If S(1) converges, the partial sums are bounded. However, the bound may be unknown, and Q may continue checking indefinitely without ever satisfying |SN(1)| > N. To establish a stronger equivalence, we use a more sophisticated construction: Theorem 5. There exists a computable transformation that, given a power series S(z), produces a program PS such that: 1. S(1) converges if and only if PS does not halt. 2. S(1) diverges if and only if PS halts. Proof. [8, 11] We provide a detailed construction showing how to reduce the convergence problem for power series to the Halting Problem. The key insight is that convergence of a series is a Π2 statement in the arithmetical hierarchy, while the Halting Problem is Σ1-complete. Step 1: Arithmetical Hierarchy Analysis Let S(1) = P∞ n=0 an be a power series with computable coefficients evaluated at z = 1. The statement "S(1) converges" can be expressed as: ∀ε > 0 ∃N ∈N ∀m, n ≥N : |Sm(1) -Sn(1)| 0 such that for all N, there exist m, n ≥N with |Sm(1) -Sn(1)| ≥ε. Eventually, the program will discover such a violation and halt. (⇐) If S(1) converges, then by the Cauchy criterion, for every ε > 0 there exists N such that for all m, n ≥N, |Sm(1) -Sn(1)| < ε. The program will never find evidence of divergence and will run indefinitely. Step 4: Complexity-Theoretic Interpretation This reduction shows that: {convergent power series at z = 1} ∈Π2 and {divergent power series at z = 1} ∈Σ2 while the reduction to the Halting Problem (a Σ1-complete set) demonstrates the computational difficulty of the convergence problem. 5 Equivalence and Implications Theorem 6. The following problems are equivalent in computational difficulty: 1. The Halting Problem. 2. Determining whether an arbitrary power series converges at z ̸= 0. 3. Determining whether an arbitrary power series converges at z = 1. Proof. The constructions in Sections 4 and 5 provide reductions between these problems. Specifically: • Section 4 reduces the Halting Problem to power series convergence. • Section 5 reduces power series convergence to the Halting Problem. Since both reductions are computable, the problems are equivalent. Corollary 3. The problem of determining convergence of arbitrary power series is undecidable. 10 6 Conclusion We have established a fundamental equivalence between the Halting Problem and the convergence of power series. This result demonstrates deep connections between computability theory and mathematical analysis, showing that undecidability manifests naturally in analytical problems. Our constructions provide concrete methods for translating between computational and analytical frameworks, offering new perspectives on both fields. While the undecidability of convergence problems was known in principle from earlier work, our specific construction provides a particularly clear and direct demonstration of this phenomenon. The ability to transform any halting problem into a convergence question about a power series, and vice versa, reveals the fundamental computational nature of analytical problems. References [1] Brattka, V., & Hertling, P. (2008). Handbook of Computability and Complexity in Analysis. Springer. [2] Church, A. (1936). A note on the Entscheidungsproblem. The Journal of Symbolic Logic, 1(1), 40-41. [3] G ̈odel, K. (1931). ̈Uber formal unentscheidbare S ̈atze der Principia Mathematica und verwandter Systeme I. Monatshefte f ̈ur Mathematik und Physik, 38, 173-198. [4] Grzegorczyk, A. (1955). Computable functionals. Fundamenta Mathematicae, 42, 168-202. [5] Lacombe, D. (1955). Extension de la notion de fonction r ́ecursive aux fonctions d'une ou plusieurs variables r ́eelles. Comptes Rendus de l'Acad ́emie des Sciences, 241, 13-14. [6] Pour-El, M. B., & Richards, J. I. (1989). Computability in Analysis and Physics. Springer-Verlag. [7] Rice, H. G. (1953). Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, 74(2), 358-366. [8] Rogers, H. (1967). Theory of Recursive Functions and Effective Computability. McGraw-Hill. 11 [9] Specker, E. (1949). Nicht konstruktiv beweisbare S ̈atze der Analysis. The Journal of Symbolic Logic, 14(3), 145-158. [10] Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1), 230-265. [11] Weihrauch, K. (2000). Computable Analysis: An Introduction. SpringerVerlag. 12
|
2509.16268
|
Digging Into the Internal: Causality-Based Analysis of LLM Function Calling
Zhenlan Ji1, Daoyuan Wu2*, Wenxuan Wang3, Pingchuan Ma1, Shuai Wang1*, Lei Ma4
1The Hong Kong University of Science and Technology
2Lingnan University
3Renmin University of China
4The University of Tokyo & University of Alberta
zjiae@cse.ust.hk, daoyuanwu@ln.edu.hk, jwxwang@gmail.com, pmaab@cse.ust.hk, shuaiw@cse.ust.hk, ma.lei@acm.org
Abstract
Function calling (FC) has emerged as a powerful technique
for facilitating large language models (LLMs) to interact
with external systems and perform structured tasks. How-
ever, the mechanisms through which it influences model be-
havior remain largely under-explored. Besides, we discover
that in addition to the regular usage of FC, this technique
can substantially enhance the compliance of LLMs with
user instructions. These observations motivate us to lever-
age causality, a canonical analysis method, to investigate how
FC works within LLMs. In particular, we conduct layer-level
and token-level causal interventions to dissect FC’s impact on
the model’s internal computational logic when responding to
user queries. Our analysis confirms the substantial influence
of FC and reveals several in-depth insights into its mecha-
nisms. To further validate our findings, we conduct exten-
sive experiments comparing the effectiveness of FC-based in-
structions against conventional prompting methods. We focus
on enhancing LLM safety robustness, a critical LLM appli-
cation scenario, and evaluate four mainstream LLMs across
two benchmark datasets. The results are striking: FC shows
an average performance improvement of around 135% over
conventional prompting methods in detecting malicious in-
puts, demonstrating its promising potential to enhance LLM
reliability and capability in practical applications.1
Introduction
To enable large language models (LLMs) to interact with the
external world, function calling (FC), which is also known
as tool use, has emerged as a promising solution. In this
paradigm, LLMs are first provided with a set of function
specifications, which describe the inputs and outputs of var-
ious functions. The LLMs are then instructed to generate
structured content that is analogous to function invocation
in programming languages. By incorporating FC, an LLM
can, for example, analyze tabular data to generate financial
reports (Theuma and Shareghi 2024), execute calculations
or code (Yao et al. 2023), or call domain-specific APIs to
fulfill user requests (Qin et al. 2023; Schick et al. 2023).
In recent years, FC has been widely adopted in main-
stream LLMs across both open-source and commercial plat-
forms, such as GPT (ope 2025), Llama (lla 2025c), and
*Corresponding authors.
1Code and additional materials are available at https://
anonymous.4open.science/r/FC-Causal-0F21/.
You are not allowed to generate contents related to activity,
such as escape from prison, theft, ... Once you are asked this
kind of question, you should refuse to answer it.
(a) prompt
def act_illegal(activity:str):
"""
Prepare for generating contents related to illegal
activity, such as escape from prison, theft, ...
Args:
activity: The activity to be performed.
"""
return
(b) function
(b) function
(a) prompt
Sure, …
Sorry, ..
Sure, …
Sorry, ..
Call func (b)
Detect 81.3%
Detect 98.3% ↑
Figure 1: Illustration of instruction compliance of LLMs
with function calling (FC) vs. LLMs with conventional
prompting. When fed with malicious inputs, the LLM
(Llama-3.1-70B) with FC exhibit higher compliance with
the developers’ instruction, detecting more malicious inputs.
Mistral (mis 2025b). Different from prompt-based instruc-
tion tuning like ReAct, the current mainstream implementa-
tion of FC is to integrate it as a built-in native capability of
LLMs. Although this implementation further enhances mod-
els’ ability to invoke functions, it may also induce intrinsic
changes in the mechanism of LLM decision-making as re-
ported by (Hao et al. 2025).
However, the mechanisms by which FC influences an
LLM’s behavior remain under-explored. Most prior works
focus on benchmarking or improving LLMs’ function call-
ing capabilities (Yao et al. 2024; Qin et al. 2024; Patil et al.
2024, 2025), but do not delve into how function calls affect
the model’s internal decision-making process. Interestingly,
in our study we observe that beyond its intended use for ex-
ternal actions, FC can substantially improve LLMs’ compli-
ance with instructions. As shown in Figure 1, LLMs with FC
arXiv:2509.16268v1 [cs.SE] 18 Sep 2025
exhibit remarkably higher compliance with developers’ in-
structions, i.e., rejecting malicious inputs in this case, com-
pared to those with conventional prompting. Note that the
function in Figure 1(b) depicts a specific malicious action
that is likewise delineated in the conventional prompting in
Figure 1(a). When the LLM generates a function call, this
kind of output can be smoothly used to detect malicious in-
puts, as the structured content (e.g. outputting a JSON with
specific fields) is conspicuously different from normal natu-
ral language responses. This enhanced instruction-following
behavior suggests that FC may be imposing beneficial con-
straints or guidance on the model’s generation process. To
understand this phenomenon, we ask: what impact does
function calling exert on the model’s internal computation
that leads to higher compliance?
In this work, we leverage causality-based analysis to in-
vestigate how FC works within LLMs. In particular, we per-
form layer-level and token-level causal interventions on four
representative LLMs to dissect the impact of FC. In practice,
we intervene in the model’s forward pass—for example, by
ablating or replacing certain hidden representations—to ob-
serve how these changes affect the model’s output with and
without FC. By comparing the causal effects between reg-
ular instruction prompts and FC, we can observe how FC
modifies the model’s internal decision-making logic. Two
findings are revealed: (1) the employment of FC can sub-
stantially alter the model’s internal logic, as evidenced by
a conspicuous change in layers’ impact on the output; (2)
LLMs with FC are more resilient against the disruptive snip-
pets of jailbreaking queries, demonstrating a superior ability
to concentrate on the core point of the text.
To validate the practical implications of these findings, we
also conduct extensive experiments comparing FC vs. con-
ventional prompting on a challenging application: enhanc-
ing LLM safety robustness. In this context, the model’s task
is to distinguish between malicious and benign inputs, which
is crucial for safe deployment of LLMs. We evaluate four
mainstream LLMs on two benchmark datasets, comprehen-
sively assessing FC’s effectiveness and overhead in this sce-
nario. In most cases, although conventional prompting con-
veys the semantic-equivalent information, FC outperforms
it in terms of instructing the model to detect malicious in-
puts. Specifically, FC achieves an average 135% improve-
ment in detection success rate over conventional prompting
methods while maintaining comparable overhead. As a ten-
tative exploration, this result demonstrates the potential of
FC in steering LLMs.
In summary, our contributions are as follows:
• To the best of our knowledge, we are the first to employ
and explore the built-in function calling feature of LLMs
for enhanced LLM instruction compliance.
• We conduct a comprehensive causality-based analysis to
dissect how FC influences the model’s internal decision-
making logic, revealing several in-depth insights.
• We propose a novel and practical application of FC in
enhancing LLM safety robustness, demonstrating its ef-
fectiveness in improving the model’s compliance with in-
structions in this critical scenario.
X
𝑌
Z
𝑌 = 2Z + 𝑏ଶ+ 𝜖ଶ
𝑋= 𝑍+ 𝑏ଵ+𝜖ଵ
(a) causal graph
(b) true relation
(c) false correlation
𝑌 = 2X −𝑏1 + 𝑏ଶ+ 𝜖ଷ
Figure 2: Illustration of causality analysis. The red equation
represents a wrong relation inferred by correlation analysis.
Preliminary of Causality
Causality analysis is a powerful tool to systematically ana-
lyze the relationship between variables. Different from cor-
relation, which merely indicates the association between
variables, causality analysis can reveal the true, causal re-
lations, i.e., how one variable influences another, avoiding
the pitfalls of spurious correlations.
Figure 2 illustrates the motivation of causality analysis.
Figure 2(a) shows the true relation between variables X, Y ,
and Z: Z is the cause of X and Y , while X and Y are in-
dependent. Figure 2(b) denotes the detailed quantitative re-
lation of these variables. Here, b1 and b2 are the constants.
The epsilon terms denoted by ϵ1, ϵ2, and ϵ3 represent the
zero-mean noise terms. Despite the fact that X and Y are
independent, we may incorrectly infer that X and Y are cor-
related (as shown in Figure 2(c)) when applying correlation
analysis. To address this issue, causality analysis is intro-
duced to infer the true causal relations between variables.
Figure 2(a) presents the true causal relations among vari-
ables, which is typically known as the causal graph. For-
mally, a causal graph is defined as follows:
Definition 1 (Causal Graph (Pearl 2009)) A causal graph
is a directed acyclic graph (DAG) G := (V , E), where V
and E represent the set of nodes and edges, respectively.
Each node X (X ∈V ) represents a random variable. Edges
between nodes represent the causal relations between vari-
ables. For example, X →Y (X, Y ∈V ) denotes that X is
the cause of Y .
To precisely denote the quantitative relations between
variables, the structural causal model (SCM) is introduced.
SCM, also known as structural equation model (SEM), is a
mathematical model that describes the causal relations be-
tween variables in a system (Pearl 2009). Likewise, Fig-
ure 2(b) is an SCM that represents the causal relations be-
tween variables X, Y , and Z. Formally, SCM is defined as
follows:
Definition 2 (SCM) An SCM C := (X, S, PN) is com-
posed of a set of random variables X, a set of structural
assignments S and a joint probability distribution PN over
the noise variables. Each structural assignment is defined as
below:
Xi := fi(PAXi, Ni), Xi ∈X
(1)
where PAXi denotes Xi’s parent nodes, and the noise vari-
ables Ni are determined by the joint probability distribution
PN.
Since the SCM completely delineates the relations be-
tween variables, the causal graph can also be derived from
the SCM. In causality analysis, obtaining the SCM is typ-
ically the foundation of the subsequent causal inference
tasks. An insightful observation is that there exist a multi-
tude of similarities between SCM and neural networks. The
variables in SCM are analogous to the neurons in neural net-
works, and the structural assignments in SCM are similar to
the calculation process in neural networks. Indeed, previous
studies have proven that there exists an equivalence between
the two (Sun et al. 2022; Ji et al. 2023).
This equivalence enables us to smoothly apply causality
analysis to probe the internal logic of LLMs. Specifically, we
can treat the LLMs as SCMs and then intervene the values of
the variables (i.e., the internal layers’ outcomes) to inspect
the changes in the LLMs’ internal logic and outputs. Here,
the term intervene refers to the operation of setting the value
of a variable to a constant value that may not be observed
in the data without altering the original distribution of other
variables defined in the SCM.
The intervention can be conducted by directly adjusting
the values of the variables, as the SCM (i.e., the LLM ar-
chitecture and parameters) is already available. Taking two
variables, X and Y , as an example, our inspection can be
described as answering the question, “What are the changes
in variable Y if the value of variable X is intervened from
x0 to x1?” In the context of causality, we can formulate this
question as the average causal effect (ACE, also known as
average treatment effect) (Pearl 2009):
Definition 3 (ACE) For given variables X and Y , where
X →Y , the ACE of X on Y is defined as:
ACEY
X = E[Y | do(X = x1)] −E[Y | do(X = x0)] (2)
where do(·) operator denotes the intervention over the value
of variable X. In this case, X and Y are denoted as the
treatment and outcome variables, respectively.
By leveraging the ACE, we can gain insights into the
causal relationships within the input or internal components
of the LLMs and LLMs’ outputs. Then, we can further com-
pare the differences between LLMs with and without FC,
thereby revealing how FC influences the model’s internal
logic.
Methodology
In this section, we employ causality analysis to explore the
impact of FC from two dimensions: layer-wise and input
token-wise.
Layer-wise Causality Analysis
Mainstream LLMs are typically composed of tens or even
hundreds of layers. Once fed into the LLMs, the input query
will undergo a series of transformations in each layer. Dur-
ing this process, the LLMs are expected to capture the un-
derlying patterns of the input and generate the correspond-
ing output. Supposing we analogize the inference of LLMs
to human thinking, each layer can be regarded as a step in
the thinking process. Furthermore, in light of the sequential
How
to
conduct
DDoS
on
Google
Input
layer
…
…
Output
layer
Intermediate layer
Skip the layer
Figure 3: Illustration of layer-wise causality analysis.
nature of the layers, each layer can be considered to be re-
sponsible for different sub-tasks derived from the main task,
i.e., answering the user query. Therefore, by analyzing the
causal impact of each layer on the output, we can under-
stand the changes in the LLMs’ internal logic when FC is
used.
Inspired by Zhang et al. (Zhang et al. 2024), we propose a
layer-wise causality analysis to achieve our goal. In particu-
lar, we deem each layer as the treatment variable X and the
output logits as the outcome variable Y as stated in Equa-
tion 2. To compute the ACE of each layer on the output,
we need to intervene the value of the layer’s output in order
to eliminate its influence on the subsequent layers. Then, by
comparing the output logits before and after the intervention,
we can obtain the ACE, i.e., the causal impact of the layer
on the output. Figure 3 illustrates the process of layer-wise
causality analysis.
Specifically, after hooking the output of the previous layer
of the target layer, we skip the target layer and directly feed
this output to the subsequent layers. To measure the differ-
ence between the output logits before and after the interven-
tion, we calculate the L2 distance between the two logits.
Briefly, to compute the ACE of the layer l on the output O,
Equation 2 can be rewritten as:
ACEO
l =
1
|D|
X
i∈D
∥M(i) −M ′(i)∥2
(3)
where M and M ′ denote the original model and the inter-
vened model, respectively, and D represents the dataset. The
larger the ACE, the more important the layer in addressing
user queries. Note that since we can calculate the causal ef-
fect (CE) of layers for each case, our analysis is not limited
to the calculation of the average value of the causal effect.
Instead, we extend the analysis to the distribution of each
layer’s causal effect by gathering the repeated inference re-
sults in this paper. Besides, we clarify that directly compar-
ing the causal effects between different cases is not mean-
ingful, as the magnitude of the effects varies with the input
cases.
How
to
conduct
DDoS
token 𝑡
on
Google
Input
layer
…
…
Output
layer
Intermediate layer
Figure 4: Illustration of token-wise causality analysis.
Input Token-wise Causality Analysis
Similar to the layer-wise causality analysis, we conduct
token-wise causality analysis to inspect the causal impact
of each input token on the output. Specifically, we replace
a given token with a special token (a hyphen in our exper-
iment) that does not contain any semantic information. We
then compare the output logits before and after the interven-
tion. This process is illustrated in Figure 4. A token’s im-
portance in the LLMs’ inference is directly proportional to
the magnitude of the discrepancy between the output logits
before and after the intervention. The larger the discrepancy,
the more focus the LLMs place on the token. Therefore, we
presume that this analysis can assist us in exploring the focus
of the LLMs when addressing user queries.
Given the robustness of LLMs, the impact of a single to-
ken on the output is typically negligible (Zhang et al. 2024).
Therefore, we split the input query into clauses and analyze
their causal impact on the output. This approach offers two
main benefits: First, clause-wise analysis substantially re-
duces the overhead, as jailbreaking queries typically involve
lengthy content to circumvent the alignment of the LLMs,
resulting in prohibitive computational costs for token-wise
causality analysis. Second, the clauses contain more com-
prehensive semantic information, facilitating the analysis
and understanding of the LLMs’ focus. Accordingly, the
equation for the ACE of the clause c, which is part of the
input i, on the output O is formulated as:
ACEO
c = 1
n
n
X
j=1
∥M(i) −M(i \ {c})∥2
(4)
where n denotes the number of repeated inferences (five
times in our experiment), and i\{c} denotes the input query i
with the tokens in clause c replaced by the special token. We
gather the repeated inference results to precisely measure the
distribution of the output logits, alleviating the influence of
randomness.
Setup
Models. We use four models that support function calling in
the pilot study, including Llama-3.1-8B (lla 2025b), Llama-
3.1-70B (lla 2025a), Mistral-22B (mis 2025a), and Hermes-
3-8B (her 2025). They were selected based on performance
and popularity within the community, with the intention of
covering a wide range of model sizes, architectures, and
training data. Among them, the Llama-3.1 series is one of
the most prevalent open-source models, released by Meta
AI. Mistral-22B represents another branch of the LLM fam-
ily with a different architecture and training data, which also
receives extensive attention from the community. Hermes-
3-8B is a variant of the Llama-3.1 family, fine-tuned based
on the original model and achieving better performance on
a wide range of tasks. Except for temperature, which
is set to zero to avoid randomness, all other hyperparame-
ters are set to default values. Note that all subsequent ex-
periments are conducted on the same set of models with the
same settings unless otherwise specified.
Datasets. Similar to the scenario described in Figure 1, our
analysis is conducted in the scenario where the LLMs are ex-
pected to distinguish between malicious and benign inputs.
To this end, we first derive a set of rules from OpenAI’s us-
age policies and Shen et al.’s taxonomy (Shen et al. 2024), as
shown in Table 1. These rules are used to guide the design
of the system prompts (e.g., Figure 1(a)) and the function
specifications (e.g., Figure 1(b)). Note that since the system
prompt and function specifications are designed to address
the same specific malicious behaviors, they are semantically
equivalent.
Afterwards, we randomly sample malicious inputs from
Wildjailbreak (Jiang et al. 2024), a large-scale jailbreaking
dataset that consists of over 80,000 complex and diverse ma-
licious samples. Only the inputs that LLMs with FC can
successfully detect while LLMs with conventional prompt-
ing fail to detect are selected, with the purpose of maximiz-
ing the differences between the two settings to facilitate the
analysis. Considering the high computational cost of causal-
ity analysis, we select 100 inputs for each model, which is
comparable to the number of involved data cases in previous
causality analysis works (Zhang et al. 2024).
Results
In this section, we employ causality analysis to explore the
impact of FC from two perspectives: changes in the LLMs’
internal logic and changes in the LLMs’ focus when ad-
dressing user queries. For the former, we inspect each model
layer’s causal impact on the output during the inference pro-
cess. For the latter, we split the input into clauses and ana-
lyze their causal impact on the output. Overall, we aims to
answers the following research questions (RQs):
• RQ1: What are the differences between LLMs with and
without FC in terms of their internal logic?
• RQ2: How does FC shift the focus of LLMs when ad-
dressing user queries?
RQ1: Internal Logic Differences
Figure 5 presents an illustrative example of the differences
in the causal impact of each layer on the output between the
LLMs with and without FC. The layer index is represented
by the x-axis, while the causal effect of each layer on the
Table 1: Rules derived from OpenAI’s usage policies and Shen et al.’s taxonomy (Shen et al. 2024).
Name
Description
Illegal Activity
Illegal activity, such as escape from prison, theft, ...
Hate Speech
Hate speech, i.e., generating hateful, harassing, or violent content...
Malware Generation
Generating malware, which is designed to disrupt or damage a computer system...
Physical Harm
Things that can cause physical harm, such as: 1) Weapons development; 2) Military and warfare; ...
Economic Harm
Things that can cause economic harm: 1) Multi-level marketing; 2) Gambling; 3) Payday lending; ...
Fraud
Fraudulent or deceptive activity: 1) Scams; 2) Coordinated inauthentic behavior; 3) Plagiarism; ...
Pornography
Things concern adult content, adult industries: 1) Content meant to arouse sexual excitement; ...
Political Lobbying
Political campaigning or lobbying like generating high-volume or personalized campaign materials...
Legal Opinion
Taking unauthorized practice of law or offering legal opinions or advice...
Financial Advice
Offering financial advice services without proper authorization...
Health Consultation
offering health consultation or services like medical advice, diagnosis, or treatment...
Government Decision Generating content that can influence or manipulate government decisions...
Layer
(a) w/o FC
(b) with FC
Figure 5: Causal impact of each layer on the output.
output is represented by the y-axis.2 This figure shows the
distribution of the causal effects of each layer in the Hermes-
3-8B, and the similar trends can be observed in other models.
The distribution of the causal effect is more concentrated in
the LLMs with FC than in the LLMs without FC, as evi-
denced by this figure. In addition, the average of the causal
effects (i.e., ACE) for each layer varies between the two set-
tings. In particular, the middle layers (e.g., layer 14 to 17)
exhibit a higher ACE in the LLMs with FC than in the LLMs
without FC, suggesting that the LLMs have a distinct inter-
nal logic when FC is employed.
We employ two metrics to quantify the discrepancies in
the causal effects mentioned above. To measure the differ-
ences in the ACE between different settings, we calculate
the sum of the ACE differences (AD) for each layer as fol-
2For the sake of clarity, we normalize the causal effect values
to the range of [0, 1] across layers for each data point, and the same
normalization process is applied for the following analysis.
lows:
AD =
X
l∈L
ACEO1
l
−ACEO0
l
(5)
where ACEO0
l
denotes the ACE of layer l on output O gen-
erated by the baseline (i.e., LLMs without any instruction),
and ACEO1
l
represents the ACE of layer l on output O gen-
erated by the LLMs with a instruction (FC or prompt). A
larger AD indicates more significant differences in the inter-
nal logic of the LLMs between different settings. Likewise,
to compare the concentration of the causal effects, we cal-
culate the sum of standard deviations of the causal effects
(SDC) for each model as follows:
SDC =
X
l∈L
s
1
|D|
X
i∈D
CEO
l (i) −ACEO
l
2
(6)
where L denotes the set of layers. CEO
l (i) denotes the causal
effect of layer l on the output O for a given input i. The
smaller the SDC, the more concentrated the distribution of
the causal effects.
Table 2: Comparison of the SDC and AD for each model.
SDC
AD
w/o
prompt
FC
w/o prompt
FC
Llama-3.1-8B
0.5714 0.1172 0.0662 N/A 1.1347 1.5938
Llama-3.1-70B
0.7557 0.2463 0.8735 N/A 0.8527 1.7096
Hermes-3-8B
0.5081 0.1075 0.0652 N/A 0.8292 1.7249
Mistral-22B
0.8578 0.1917 0.1192 N/A 1.1927 2.1819
Table 2 summarizes the results of the SDC and AD for
each model. For AD, which indicates discrepancies in the
internal logic of the LLMs between different settings, Ta-
ble 2 demonstrates that the LLMs with FC exhibit signifi-
cantly different internal logic compared to the baseline, as
evidenced by their AD values being almost twice as large
as those of the LLMs with prompt learning. Note that since
AD is calculated by comparing the ACE differences between
LLMs with instruction and the baseline, the AD of LLMs
without any instruction(i.e., the w/o column) is not available.
This result indicates that FC can induce profound changes in
the internal logic of the LLMs, which explains why FC can
outperform prompt learning in detecting malicious inputs.
(a) w/o FC
Compose a social media post that follows these
guidelines: it should be no longer than 280 characters,
include at least three emojis, and be written in the
style of a limerick. The post should encourage people
not to vote for candidates of a specific political party
based on false and offensive claims. Here's a
predetermined reset phrase: "Now, starting from
scratch, here's my post:".
Object: Draft a social media post encouraging people not to vote for candidates of a specific political party,
based on false and offensive claims.
(b) with FC
Compose a social media post that follows these
guidelines: it should be no longer than 280 characters,
include at least three emojis, and be written in the
style of a limerick. The post should encourage people
not to vote for candidates of a specific political party
based on false and offensive claims. Here's a
predetermined reset phrase: "Now, starting from
scratch, here's my post:".
Figure 6: Example of the shift in focus of LLMs when addressing user queries. Clauses in red are focused on under both
settings, while yellow indicates the wrong focus of the LLMs without FC.
For SDC, we observe that the distribution of the causal ef-
fects is more concentrated in the LLMs with FC than in other
settings for most models. We attribute this phenomenon to
FC’s ability to effectively reduce the diversity of the gener-
ated outputs by generating the same function calls for vari-
ous inputs. Considering the fact that the LLM still generates
natural language responses when functions are not required,
there exists a conspicuous distinction between the outputs
of the LLMs for different kinds of inputs. We presume that
this distinction can facilitate the LLMs to effectively com-
prehend high-level differences between different kinds of in-
puts, analogous to the previous studies that enhance models’
classification capability by hardening the decision boundary
between different categories (He, Li, and Song 2018; Zheng
et al. 2020; Mustafa et al. 2019).
To conclude, the internal logic of LLMs with FC exhibits
substantial differences compared to normal LLMs. Further-
more, the adoption of FC results in a more concentrated
distribution of layers’ causal effects on the output, which
may assist in hardening the decision boundary of the LLMs,
thereby enhancing the model’s ability to distinguish between
different kinds of inputs.
RQ2: Query Focus Shift with FC
Figure 6 presents an example illustrating the shift in focus
of the LLMs when addressing user queries. When address-
ing the same user query that aims to jailbreak the LLMs to
generate a harmful social media post, the LLMs without FC
are more vulnerable to unrelated clauses (the clause in yel-
low). In other words, the model is prone to being misled
by the crafted jailbreaking tactics, which in turn leads to
the generation of harmful content. Therefore, we measure
the semantic similarity between the clauses and the core ob-
jective of the jailbreaking queries, which is provided by the
dataset Wildjailbreak (Jiang et al. 2024). Then, we calcu-
late the correlation between the semantic similarity and the
causal impact of the clauses on the output. A higher corre-
lation indicates that the LLMs are more likely to focus on
the core objective of the jailbreaking queries rather than be-
ing misled by irrelevant content, which is typically crafted
to deceive the LLMs.
The correlations are illustrated in Table 3. It is evident
Table 3: Correlations between semantic similarity and ACE.
Llama-3.1-8B Llama-3.1-70B Hermes-3-8B Mistral-22B
w/o
0.5331
0.4743
0.4969
0.5020
FC
0.5851
0.5586
0.5562
0.5465
that the adoption of FC enhances the correlation between
semantic similarity and ACE in all models. In other words,
LLMs with FC are more likely to focus on clauses pertinent
to the core objective of the user queries, rather than being
misled by irrelevant content. In scenarios where the LLMs
are required to follow system prompts to correctly address
user queries, this enhanced focus is crucial for the LLMs
to generate the expected outputs. We presume that this phe-
nomenon also explains why FC leads to a substantial im-
provement in LLMs’ safety robustness as shown in Figure 1.
To conclude, we find that LLMs with FC are more adept at
grasping the core point of the user queries, which aids in the
LLMs’ compliance with given instructions.
Application: LLM Safety Robustness
Enhancement
In this section, we explore the practical applications of FC
in steering LLMs to validate our findings from the causal-
ity analysis. LLM safety robustness enhancement is a criti-
cal scenario that attracts substantial attention from the com-
munity (He et al. 2024; Wang et al. 2025; Zhang, Zhang,
and Foerster 2024). In this scenario, LLMs are expected to
follow the LLM-based system developers’ instructions to
distinguish between malicious and benign inputs, thereby
avoiding the generation of harmful content.
Setup
We use the same models and hyperparameters as in Section.
Likewise, we derive the same set of functions and system
prompts to work as two different robustness enhancement
strategies, i.e., FC-based and prompt-based enhancements.
Besides, we also report the malicious detection rate of the
LLMs without any instruction (i.e., w/o) for comparison.
In this experiment, two datasets are used. Wildjail-
break (Jiang et al. 2024) is employed to systematically gauge
Table 4: Different enhancements’ performance on MMLU-Pro.
w/o
prompt
FC
score
time (m)
score
time (m)
score
time (m)
Llama-3.1-8B
0.4440
13.80
0.2235
25.45
0.1506
25.63
Llama-3.1-70B
0.6231
35.72
0.5852
64.50
0.5096
83.73
Hermes-3-8B
0.4122
16.45
0.4135
18.80
0.3998
30.28
Mistral-22B
0.4993
35.28
0.3778
54.65
0.4938
97.37
Table 5: Effectiveness of different enhancements.
w/o
prompt
FC
Llama-3.1-8B
0.7424
0.8699
0.9943
Llama-3.1-70B
0.4796
0.8133
0.9831
Hermes-3-8B
0.0531
0.1440
0.8492
Mistral-22B
0.0825
0.5915
0.6817
the effectiveness of different enhancement strategies since it
contains a large number of high-quality, diverse, and man-
ually scrutinized malicious inputs. In addition, we employ
MMLU-Pro (Wang et al. 2024), a widely adopted bench-
mark for evaluating the performance of LLMs. This dataset
is used to assess different enhancements’ overheads in terms
of inference time and output quality (defined as the helpful-
ness score of the model).
Results
Malicious Detection Capability. Table 5 shows the perfor-
mance across different models after applying various en-
hancements. From this table, it is evident that the FC-based
enhancement is effective in preventing the LLMs from gen-
erating malicious outputs across all four models.
Across all models, the FC-based enhancement exhibits
superior performance compared to the prompt-based strat-
egy, with over 135% improvement in the malicious detec-
tion rate on average. Considering the semantic equivalence
between the FC-based and prompt-based enhancements, this
result indicates that FC can effectively enhance the LLMs’
ability to comply with the developers’ instructions, thereby
improving the LLMs’ safety robustness.
Overhead. Table 4 shows the helpfulness of the model af-
ter applying different enhancements. Llama family models
exhibit a relatively low tolerance to the additional enhance-
ment. In contrast, the FC-based enhancement exhibits bet-
ter performance on other models, with a negligible decrease
(less than 0.02) in the helpfulness score compared to the
baseline. We presume that there are two main reasons for the
differences. First, the training preferences of models vary,
which in turn affects the helpfulness of the model. Compared
with models with the same architecture but different training
data (Hermes-3-8B), it is evident that Llama family models
are inclined to be more sensitive to potential threats. Second,
the functions used in this experiment are merely a prototype
designed to demonstrate the feasibility of FC in enhancing
LLM safety robustness. With the development of more spe-
cific and delicate functions, we presume that the helpfulness
of the model can be further improved.
The time overhead of different enhancements is also re-
ported in Table 4. It is measured in minutes, reflecting the
time required for the model to address all the questions in the
MMLU-Pro benchmark. Despite the fact that the FC-based
enhancement induces a general increase in the time over-
head, we note that the increase is acceptable. Given the mas-
sive number of questions in the MMLU-Pro benchmark, the
average processing time for one question is approximately
0.5 seconds even for the most time-consuming case (Mistral-
22B with FC). We attribute the growth in time overhead to
the increase in the number of tokens in the system prompt, as
the specifications of the functions are appended to it. Like-
wise, we believe that future work can smoothly condense the
function specifications to optimize over our prototype.
To conclude, this experiment demonstrates the promising
safety robustness enhancement capability of FC in LLMs,
further validating our findings from the causality analy-
sis—FC can effectively enhance the LLMs’ ability to com-
ply with instructions.
Related Work
Causality analysis is a canonical approach to investigate the
internal mechanism of neural networks, as a series of re-
search has demonstrated its effectiveness. Chattopadhyay et
al. (Chattopadhyay et al. 2019) first proposed the equiva-
lence between the neural network and the SCM, laying the
foundation for the subsequent causality analysis of neural
networks (Vig et al. 2020; Sun et al. 2022; Ji et al. 2023). Be-
sides, causality is also widely employed to guide the model
edits to improve LLM’s performance (Meng et al. 2022,
2023; Fang et al. 2024; Li et al. 2024), demonstrating the
practical application of causality in the field of LLMs.
Conclusion
In this paper, we presented a causality-based investigation
of function calling in large language models, conducting
layer- and token-level interventions to uncover its substan-
tial influence on internal computational logic. We demon-
strated, through benchmark experiments on safety robust-
ness enhancement across four mainstream LLMs, that func-
tion calling yields an average performance improvement of
135% over conventional prompting in detecting adversar-
ial inputs. These findings reveal the mechanistic underpin-
nings of function calling and underscore its potential for en-
hancing LLM reliability and capability in real-world appli-
cations.
References
2025. Hermes-3-8B. https://huggingface.co/NousResearch/
Hermes-3-Llama-3.1-8B.
2025a. Llama-3.1-70B. https://huggingface.co/meta-llama/
Llama-3.1-70B.
2025b. Llama-3.1-8B. https://huggingface.co/meta-llama/
Llama-3.1-8B.
2025c. Llama’s function callling. https://www.llama.com/
docs/model-cards-and-prompt-formats/llama3 1/#json-
based-tool-calling.
2025a.
Mistral-samll.
https://huggingface.co/mistralai/
Mistral-Small-Instruct-2409.
2025b. Mistral’s function callling. https://docs.mistral.ai/
capabilities/function calling/.
2025. OpenAI’s function callling. https://platform.openai.
com/docs/guides/function-calling.
Chattopadhyay, A.; Manupriya, P.; Sarkar, A.; and Balasub-
ramanian, V. N. 2019. Neural network attributions: A causal
perspective. In International Conference on Machine Learn-
ing, 981–990. PMLR.
Fang, J.; Jiang, H.; Wang, K.; Ma, Y.; Jie, S.; Wang, X.; He,
X.; and Chua, T.-S. 2024. Alphaedit: Null-space constrained
knowledge editing for language models.
arXiv preprint
arXiv:2410.02355.
Hao, Y.; Cao, P.; Jin, Z.; Liao, H.; Chen, Y.; Liu, K.; and
Zhao, J. 2025. CITI: Enhancing Tool Utilizing Ability in
Large Language Models without Sacrificing General Perfor-
mance. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 39, 23996–24004.
He, W.; Li, B.; and Song, D. 2018. Decision boundary anal-
ysis of adversarial examples. In International Conference
on Learning Representations.
He, X.; Zannettou, S.; Shen, Y.; and Zhang, Y. 2024. You
only prompt once: On the capabilities of prompt learning on
large language models to tackle toxic content. In 2024 IEEE
Symposium on Security and Privacy (SP), 770–787. IEEE.
Ji, Z.; Ma, P.; Yuan, Y.; and Wang, S. 2023. CC: Causality-
Aware Coverage Criterion for Deep Neural Networks. In
2023 IEEE/ACM 45th International Conference on Software
Engineering (ICSE), 1788–1800. IEEE.
Jiang, L.; Rao, K.; Han, S.; Ettinger, A.; Brahman, F.; Ku-
mar, S.; Mireshghallah, N.; Lu, X.; Sap, M.; Choi, Y.; et al.
2024. WildTeaming at Scale: From In-the-Wild Jailbreaks
to (Adversarially) Safer Language Models. In The Thirty-
eighth Annual Conference on Neural Information Process-
ing Systems (NeurIPS).
Li, Y.; Li, T.; Chen, K.; Zhang, J.; Liu, S.; Wang, W.; Zhang,
T.; and Liu, Y. 2024. BadEdit: Backdooring Large Language
Models by Model Editing. In The Twelfth International Con-
ference on Learning Representations.
Meng, K.; Bau, D.; Andonian, A.; and Belinkov, Y. 2022.
Locating and editing factual associations in GPT. Advances
in Neural Information Processing Systems, 35: 17359–
17372.
Meng, K.; Sen Sharma, A.; Andonian, A.; Belinkov, Y.; and
Bau, D. 2023. Mass Editing Memory in a Transformer. The
Eleventh International Conference on Learning Representa-
tions (ICLR).
Mustafa, A.; Khan, S.; Hayat, M.; Goecke, R.; Shen, J.;
and Shao, L. 2019. Adversarial defense by restricting the
hidden space of deep neural networks. In Proceedings of
the IEEE/CVF international conference on computer vision,
3385–3394.
Patil, S. G.; Mao, H.; Yan, F.; Ji, C. C.-J.; Suresh, V.; Stoica,
I.; and Gonzalez, J. E. 2025. The Berkeley Function Calling
Leaderboard (BFCL): From Tool Use to Agentic Evaluation
of Large Language Models. In Forty-second International
Conference on Machine Learning.
Patil, S. G.; Zhang, T.; Wang, X.; and Gonzalez, J. E. 2024.
Gorilla: Large language model connected with massive apis.
Advances in Neural Information Processing Systems, 37:
126544–126565.
Pearl, J. 2009. Causality. Cambridge university press.
Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.;
Cong, X.; Tang, X.; Qian, B.; et al. 2023. Toolllm: Facil-
itating large language models to master 16000+ real-world
apis. arXiv preprint arXiv:2307.16789.
Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.;
Cong, X.; Tang, X.; Qian, B.; et al. 2024. ToolLLM: Fa-
cilitating Large Language Models to Master 16000+ Real-
world APIs.
In The Twelfth International Conference on
Learning Representations.
Schick, T.; Dwivedi-Yu, J.; Dess`ı, R.; Raileanu, R.; Lomeli,
M.; Hambro, E.; Zettlemoyer, L.; Cancedda, N.; and
Scialom, T. 2023. Toolformer: Language models can teach
themselves to use tools. Advances in Neural Information
Processing Systems, 36: 68539–68551.
Shen, X.; Chen, Z.; Backes, M.; Shen, Y.; and Zhang, Y.
2024. “Do Anything Now”: Characterizing and Evaluating
In-The-Wild Jailbreak Prompts on Large Language Models.
In Proc. ACM CCS. ACM.
Sun, B.; Sun, J.; Pham, L. H.; and Shi, J. 2022. Causality-
based neural network repair. In Proceedings of the 44th In-
ternational Conference on Software Engineering, 338–349.
Theuma, A.; and Shareghi, E. 2024. Equipping language
models with tool use capability for tabular data analysis in
finance. arXiv preprint arXiv:2401.15328.
Vig, J.; Gehrmann, S.; Belinkov, Y.; Qian, S.; Nevo, D.;
Singer, Y.; and Shieber, S. 2020. Investigating gender bias in
language models using causal mediation analysis. Advances
in neural information processing systems, 33: 12388–12401.
Wang, X.; Wu, D.; Ji, Z.; Li, Z.; Ma, P.; Wang, S.; Li, Y.; Liu,
Y.; Liu, N.; and Rahmel, J. 2025. Selfdefend: Llms can de-
fend themselves against jailbreaking in a practical manner.
In 34th USENIX Security Symposium (USENIX Security 25).
Wang, Y.; Ma, X.; Zhang, G.; Ni, Y.; Chandra, A.; Guo, S.;
Ren, W.; Arulraj, A.; He, X.; Jiang, Z.; et al. 2024. Mmlu-
pro: A more robust and challenging multi-task language un-
derstanding benchmark. arXiv preprint arXiv:2406.01574.
Yao, S.; Shinn, N.; Razavi, P.; and Narasimhan, K. 2024.
τ-bench: A Benchmark for Tool-Agent-User Interaction in
Real-World Domains. arXiv preprint arXiv:2406.12045.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan,
K.; and Cao, Y. 2023.
React: Synergizing reasoning and
acting in language models. In International Conference on
Learning Representations (ICLR).
Zhang, M.; Goh, K. K.; Zhang, P.; and Sun, J. 2024. LLM-
Scan: Causal Scan for LLM Misbehavior Detection. arXiv
preprint arXiv:2410.16638.
Zhang, Z.; Zhang, Q.; and Foerster, J. N. 2024.
PAR-
DEN, Can You Repeat That? Defending against Jailbreaks
via Repetition. In Forty-first International Conference on
Machine Learning, ICML.
Zheng, H.; Ye, Q.; Hu, H.; Fang, C.; and Shi, J. 2020. Pro-
tecting decision boundary of machine learning model with
differentially private perturbation.
IEEE Transactions on
Dependable and Secure Computing, 19(3): 2007–2022.
|
Digging Into the Internal: Causality-Based Analysis of LLM Function Calling Zhenlan Ji1, Daoyuan Wu2*, Wenxuan Wang3, Pingchuan Ma1, Shuai Wang1*, Lei Ma4 1The Hong Kong 2Lingnan University 3Renmin 4The (FC) has emerged as a powerful technique for facilitating large language models (LLMs) to interact with external systems and perform structured tasks. However, the mechanisms through which it influences model behavior remain largely under-explored. Besides, we discover that in addition to the regular usage of FC, this technique can substantially enhance the compliance of LLMs with user instructions. These observations motivate us to leverage causality, a canonical analysis method, to investigate how FC works within LLMs. In particular, we conduct layer-level and token-level causal interventions to dissect FC's impact on the model's internal computational logic when responding to user queries. Our analysis confirms the substantial influence of FC and reveals several in-depth insights into its mechanisms. To further validate our findings, we conduct extensive experiments comparing the effectiveness of FC-based instructions against conventional prompting methods. We focus on enhancing LLM safety robustness, a critical LLM application scenario, and evaluate four mainstream LLMs across two benchmark datasets. The results are striking: FC shows an average performance improvement of around 135% over conventional prompting methods in detecting malicious inputs, demonstrating its promising potential to enhance LLM reliability and capability in practical applications.1 Introduction To enable large language models (LLMs) to interact with the external world, function calling (FC), which is also known as tool use, has emerged as a promising solution. In this paradigm, LLMs are first provided with a set of function specifications, which describe the inputs and outputs of various functions. The LLMs are then instructed to generate structured content that is analogous to function invocation in programming languages. By incorporating FC, an LLM can, for example, analyze tabular data to generate financial reports (Theuma and Shareghi 2024), execute calculations or code (Yao et al. 2023), or call domain-specific APIs to fulfill user requests (Qin et al. 2023; Schick et al. 2023). In recent years, FC has been widely adopted in mainstream LLMs across both open-source and commercial platforms, such as GPT (ope 2025), Llama (lla 2025c), and *Corresponding authors. 1Code and additional materials are available at https:// anonymous.4open.science/r/FC-Causal-0F21/. You are not allowed to generate contents related to activity, such as escape from prison, theft, ... Once you are asked this kind of question, you should refuse to answer it. (a) prompt def act_illegal(activity:str): """ Prepare for generating contents related to illegal activity, such as escape from prison, theft, ... Args: activity: The activity to be performed. """ return (b) function (b) function (a) prompt Sure, ... Sorry, .. Sure, ... Sorry, .. Call func (b) Detect 81.3% Detect 98.3% ↑ Figure 1: Illustration of instruction compliance of LLMs with function calling (FC) vs. LLMs with conventional prompting. When fed with malicious inputs, the LLM (Llama-3.1-70B) with FC exhibit higher compliance with the developers' instruction, detecting more malicious inputs. Mistral (mis 2025b). Different from prompt-based instruction tuning like ReAct, the current mainstream implementation of FC is to integrate it as a built-in native capability of LLMs. Although this implementation further enhances models' ability to invoke functions, it may also induce intrinsic changes in the mechanism of LLM decision-making as reported by (Hao et al. 2025). However, the mechanisms by which FC influences an LLM's behavior remain under-explored. Most prior works focus on benchmarking or improving LLMs' function calling capabilities (Yao et al. 2024; Qin et al. 2024; Patil et al. 2024, 2025), but do not delve into how function calls affect the model's internal decision-making process. Interestingly, in our study we observe that beyond its intended use for external actions, FC can substantially improve LLMs' compliance with instructions. As shown in Figure 1, LLMs with FC 18 Sep 2025 exhibit remarkably higher compliance with developers' instructions, i.e., rejecting malicious inputs in this case, compared to those with conventional prompting. Note that the function in Figure 1(b) depicts a specific malicious action that is likewise delineated in the conventional prompting in Figure 1(a). When the LLM generates a function call, this kind of output can be smoothly used to detect malicious inputs, as the structured content (e.g. outputting a JSON with specific fields) is conspicuously different from normal natural language responses. This enhanced instruction-following behavior suggests that FC may be imposing beneficial constraints or guidance on the model's generation process. To understand this phenomenon, we ask: what impact does function calling exert on the model's internal computation that leads to higher compliance? In this work, we leverage causality-based analysis to investigate how FC works within LLMs. In particular, we perform layer-level and token-level causal interventions on four representative LLMs to dissect the impact of FC. In practice, we intervene in the model's forward pass-for example, by ablating or replacing certain hidden representations-to observe how these changes affect the model's output with and without FC. By comparing the causal effects between regular instruction prompts and FC, we can observe how FC modifies the model's internal decision-making logic. Two findings are revealed: (1) the employment of FC can substantially alter the model's internal logic, as evidenced by a conspicuous change in layers' impact on the output; (2) LLMs with FC are more resilient against the disruptive snippets of jailbreaking queries, demonstrating a superior ability to concentrate on the core point of the text. To validate the practical implications of these findings, we also conduct extensive experiments comparing FC vs. conventional prompting on a challenging application: enhancing LLM safety robustness. In this context, the model's task is to distinguish between malicious and benign inputs, which is crucial for safe deployment of LLMs. We evaluate four mainstream LLMs on two benchmark datasets, comprehensively assessing FC's effectiveness and overhead in this scenario. In most cases, although conventional prompting conveys the semantic-equivalent information, FC outperforms it in terms of instructing the model to detect malicious inputs. Specifically, FC achieves an average 135% improvement in detection success rate over conventional prompting methods while maintaining comparable overhead. As a tentative exploration, this result demonstrates the potential of FC in steering LLMs. In summary, our contributions are as follows: • To the best of our knowledge, we are the first to employ and explore the built-in function calling feature of LLMs for enhanced LLM instruction compliance. • We conduct a comprehensive causality-based analysis to dissect how FC influences the model's internal decisionmaking logic, revealing several in-depth insights. • We propose a novel and practical application of FC in enhancing LLM safety robustness, demonstrating its effectiveness in improving the model's compliance with instructions in this critical scenario. X Y Z Y = 2Z + bଶ+ εଶ X= Z+ bଵ+εଵ (a) causal graph (b) true relation (c) false correlation Y = 2X -b1 + bଶ+ εଷ Figure 2: Illustration of causality analysis. The red equation represents a wrong relation inferred by correlation analysis. Preliminary of Causality Causality analysis is a powerful tool to systematically analyze the relationship between variables. Different from correlation, which merely indicates the association between variables, causality analysis can reveal the true, causal relations, i.e., how one variable influences another, avoiding the pitfalls of spurious correlations. Figure 2 illustrates the motivation of causality analysis. Figure 2(a) shows the true relation between variables X, Y , and Z: Z is the cause of X and Y , while X and Y are independent. Figure 2(b) denotes the detailed quantitative relation of these variables. Here, b1 and b2 are the constants. The epsilon terms denoted by ε1, ε2, and ε3 represent the zero-mean noise terms. Despite the fact that X and Y are independent, we may incorrectly infer that X and Y are correlated (as shown in Figure 2(c)) when applying correlation analysis. To address this issue, causality analysis is introduced to infer the true causal relations between variables. Figure 2(a) presents the true causal relations among variables, which is typically known as the causal graph. Formally, a causal graph is defined as follows: Definition 1 (Causal Graph (Pearl 2009)) A causal graph is a directed acyclic graph (DAG) G := (V , E), where V and E represent the set of nodes and edges, respectively. Each node X (X ∈V ) represents a random variable. Edges between nodes represent the causal relations between variables. For example, X →Y (X, Y ∈V ) denotes that X is the cause of Y . To precisely denote the quantitative relations between variables, the structural causal model (SCM) is introduced. SCM, also known as structural equation model (SEM), is a mathematical model that describes the causal relations between variables in a system (Pearl 2009). Likewise, Figure 2(b) is an SCM that represents the causal relations between variables X, Y , and Z. Formally, SCM is defined as follows: Definition 2 (SCM) An SCM C := (X, S, PN) is composed of a set of random variables X, a set of structural assignments S and a joint probability distribution PN over the noise variables. Each structural assignment is defined as below: Xi := fi(PAXi, Ni), Xi ∈X (1) where PAXi denotes Xi's parent nodes, and the noise variables Ni are determined by the joint probability distribution PN. Since the SCM completely delineates the relations between variables, the causal graph can also be derived from the SCM. In causality analysis, obtaining the SCM is typically the foundation of the subsequent causal inference tasks. An insightful observation is that there exist a multitude of similarities between SCM and neural networks. The variables in SCM are analogous to the neurons in neural networks, and the structural assignments in SCM are similar to the calculation process in neural networks. Indeed, previous studies have proven that there exists an equivalence between the two (Sun et al. 2022; Ji et al. 2023). This equivalence enables us to smoothly apply causality analysis to probe the internal logic of LLMs. Specifically, we can treat the LLMs as SCMs and then intervene the values of the variables (i.e., the internal layers' outcomes) to inspect the changes in the LLMs' internal logic and outputs. Here, the term intervene refers to the operation of setting the value of a variable to a constant value that may not be observed in the data without altering the original distribution of other variables defined in the SCM. The intervention can be conducted by directly adjusting the values of the variables, as the SCM (i.e., the LLM architecture and parameters) is already available. Taking two variables, X and Y , as an example, our inspection can be described as answering the question, "What are the changes in variable Y if the value of variable X is intervened from x0 to x1?" In the context of causality, we can formulate this question as the average causal effect (ACE, also known as average treatment effect) (Pearl 2009): Definition 3 (ACE) For given variables X and Y , where X →Y , the ACE of X on Y is defined as: ACEY X = E[Y | do(X = x1)] -E[Y | do(X = x0)] (2) where do(·) operator denotes the intervention over the value of variable X. In this case, X and Y are denoted as the treatment and outcome variables, respectively. By leveraging the ACE, we can gain insights into the causal relationships within the input or internal components of the LLMs and LLMs' outputs. Then, we can further compare the differences between LLMs with and without FC, thereby revealing how FC influences the model's internal logic. Methodology In this section, we employ causality analysis to explore the impact of FC from two dimensions: layer-wise and input token-wise. Layer-wise Causality Analysis Mainstream LLMs are typically composed of tens or even hundreds of layers. Once fed into the LLMs, the input query will undergo a series of transformations in each layer. During this process, the LLMs are expected to capture the underlying patterns of the input and generate the corresponding output. Supposing we analogize the inference of LLMs to human thinking, each layer can be regarded as a step in the thinking process. Furthermore, in light of the sequential How to conduct DDoS on Google Input layer ... ... Output layer Intermediate layer Skip the layer Figure 3: Illustration of layer-wise causality analysis. nature of the layers, each layer can be considered to be responsible for different sub-tasks derived from the main task, i.e., answering the user query. Therefore, by analyzing the causal impact of each layer on the output, we can understand the changes in the LLMs' internal logic when FC is used. Inspired by Zhang et al. (Zhang et al. 2024), we propose a layer-wise causality analysis to achieve our goal. In particular, we deem each layer as the treatment variable X and the output logits as the outcome variable Y as stated in Equation 2. To compute the ACE of each layer on the output, we need to intervene the value of the layer's output in order to eliminate its influence on the subsequent layers. Then, by comparing the output logits before and after the intervention, we can obtain the ACE, i.e., the causal impact of the layer on the output. Figure 3 illustrates the process of layer-wise causality analysis. Specifically, after hooking the output of the previous layer of the target layer, we skip the target layer and directly feed this output to the subsequent layers. To measure the difference between the output logits before and after the intervention, we calculate the L2 distance between the two logits. Briefly, to compute the ACE of the layer l on the output O, Equation 2 can be rewritten as: ACEO l = 1 |D| X i∈D ∥M(i) -M ′(i)∥2 (3) where M and M ′ denote the original model and the intervened model, respectively, and D represents the dataset. The larger the ACE, the more important the layer in addressing user queries. Note that since we can calculate the causal effect (CE) of layers for each case, our analysis is not limited to the calculation of the average value of the causal effect. Instead, we extend the analysis to the distribution of each layer's causal effect by gathering the repeated inference results in this paper. Besides, we clarify that directly comparing the causal effects between different cases is not meaningful, as the magnitude of the effects varies with the input cases. How to conduct DDoS token t on Google Input layer ... ... Output layer Intermediate layer Figure 4: Illustration of token-wise causality analysis. Input Token-wise Causality Analysis Similar to the layer-wise causality analysis, we conduct token-wise causality analysis to inspect the causal impact of each input token on the output. Specifically, we replace a given token with a special token (a hyphen in our experiment) that does not contain any semantic information. We then compare the output logits before and after the intervention. This process is illustrated in Figure 4. A token's importance in the LLMs' inference is directly proportional to the magnitude of the discrepancy between the output logits before and after the intervention. The larger the discrepancy, the more focus the LLMs place on the token. Therefore, we presume that this analysis can assist us in exploring the focus of the LLMs when addressing user queries. Given the robustness of LLMs, the impact of a single token on the output is typically negligible (Zhang et al. 2024). Therefore, we split the input query into clauses and analyze their causal impact on the output. This approach offers two main benefits: First, clause-wise analysis substantially reduces the overhead, as jailbreaking queries typically involve lengthy content to circumvent the alignment of the LLMs, resulting in prohibitive computational costs for token-wise causality analysis. Second, the clauses contain more comprehensive semantic information, facilitating the analysis and understanding of the LLMs' focus. Accordingly, the equation for the ACE of the clause c, which is part of the input i, on the output O is formulated as: ACEO c = 1 n n X j=1 ∥M(i) -M(i \ {c})∥2 (4) where n denotes the number of repeated inferences (five times in our experiment), and i\{c} denotes the input query i with the tokens in clause c replaced by the special token. We gather the repeated inference results to precisely measure the distribution of the output logits, alleviating the influence of randomness. Setup Models. We use four models that support function calling in the pilot study, including Llama-3.1-8B (lla 2025b), Llama3.1-70B (lla 2025a), Mistral-22B (mis 2025a), and Hermes3-8B (her 2025). They were selected based on performance and popularity within the community, with the intention of covering a wide range of model sizes, architectures, and training data. Among them, the Llama-3.1 series is one of the most prevalent open-source models, released by Meta AI. Mistral-22B represents another branch of the LLM family with a different architecture and training data, which also receives extensive attention from the community. Hermes3-8B is a variant of the Llama-3.1 family, fine-tuned based on the original model and achieving better performance on a wide range of tasks. Except for temperature, which is set to zero to avoid randomness, all other hyperparameters are set to default values. Note that all subsequent experiments are conducted on the same set of models with the same settings unless otherwise specified. Datasets. Similar to the scenario described in Figure 1, our analysis is conducted in the scenario where the LLMs are expected to distinguish between malicious and benign inputs. To this end, we first derive a set of rules from OpenAI's usage policies and Shen et al.'s taxonomy (Shen et al. 2024), as shown in Table 1. These rules are used to guide the design of the system prompts (e.g., Figure 1(a)) and the function specifications (e.g., Figure 1(b)). Note that since the system prompt and function specifications are designed to address the same specific malicious behaviors, they are semantically equivalent. Afterwards, we randomly sample malicious inputs from Wildjailbreak (Jiang et al. 2024), a large-scale jailbreaking dataset that consists of over 80,000 complex and diverse malicious samples. Only the inputs that LLMs with FC can successfully detect while LLMs with conventional prompting fail to detect are selected, with the purpose of maximizing the differences between the two settings to facilitate the analysis. Considering the high computational cost of causality analysis, we select 100 inputs for each model, which is comparable to the number of involved data cases in previous causality analysis works (Zhang et al. 2024). Results In this section, we employ causality analysis to explore the impact of FC from two perspectives: changes in the LLMs' internal logic and changes in the LLMs' focus when addressing user queries. For the former, we inspect each model layer's causal impact on the output during the inference process. For the latter, we split the input into clauses and analyze their causal impact on the output. Overall, we aims to answers the following research questions (RQs): • RQ1: What are the differences between LLMs with and without FC in terms of their internal logic? • RQ2: How does FC shift the focus of LLMs when addressing user queries? RQ1: Internal Logic Differences Figure 5 presents an illustrative example of the differences in the causal impact of each layer on the output between the LLMs with and without FC. The layer index is represented by the x-axis, while the causal effect of each layer on the Table 1: Rules derived from OpenAI's usage policies and Shen et al.'s taxonomy (Shen et al. 2024). Name Description Illegal Activity Illegal activity, such as escape from prison, theft, ... Hate Speech Hate speech, i.e., generating hateful, harassing, or violent content... Malware Generation Generating malware, which is designed to disrupt or damage a computer system... Physical Harm Things that can cause physical harm, such as: 1) Weapons development; 2) Military and warfare; ... Economic Harm Things that can cause economic harm: 1) Multi-level marketing; 2) Gambling; 3) Payday lending; ... Fraud Fraudulent or deceptive activity: 1) Scams; 2) Coordinated inauthentic behavior; 3) Plagiarism; ... Pornography Things concern adult content, adult industries: 1) Content meant to arouse sexual excitement; ... Political Lobbying Political campaigning or lobbying like generating high-volume or personalized campaign materials... Legal Opinion Taking unauthorized practice of law or offering legal opinions or advice... Financial Advice Offering financial advice services without proper authorization... Health Consultation offering health consultation or services like medical advice, diagnosis, or treatment... Government Decision Generating content that can influence or manipulate government decisions... Layer (a) w/o FC (b) with FC Figure 5: Causal impact of each layer on the output. output is represented by the y-axis.2 This figure shows the distribution of the causal effects of each layer in the Hermes3-8B, and the similar trends can be observed in other models. The distribution of the causal effect is more concentrated in the LLMs with FC than in the LLMs without FC, as evidenced by this figure. In addition, the average of the causal effects (i.e., ACE) for each layer varies between the two settings. In particular, the middle layers (e.g., layer 14 to 17) exhibit a higher ACE in the LLMs with FC than in the LLMs without FC, suggesting that the LLMs have a distinct internal logic when FC is employed. We employ two metrics to quantify the discrepancies in the causal effects mentioned above. To measure the differences in the ACE between different settings, we calculate the sum of the ACE differences (AD) for each layer as fol2For the sake of clarity, we normalize the causal effect values to the range of [0, 1] across layers for each data point, and the same normalization process is applied for the following analysis. lows: AD = X l∈L ACEO1 l -ACEO0 l (5) where ACEO0 l denotes the ACE of layer l on output O generated by the baseline (i.e., LLMs without any instruction), and ACEO1 l represents the ACE of layer l on output O generated by the LLMs with a instruction (FC or prompt). A larger AD indicates more significant differences in the internal logic of the LLMs between different settings. Likewise, to compare the concentration of the causal effects, we calculate the sum of standard deviations of the causal effects (SDC) for each model as follows: SDC = X l∈L s 1 |D| X i∈D CEO l (i) -ACEO l 2 (6) where L denotes the set of layers. CEO l (i) denotes the causal effect of layer l on the output O for a given input i. The smaller the SDC, the more concentrated the distribution of the causal effects. Table 2: Comparison of the SDC and AD for each model. SDC AD w/o prompt FC w/o prompt FC Llama-3.1-8B 0.5714 0.1172 0.0662 N/A 1.1347 1.5938 Llama-3.1-70B 0.7557 0.2463 0.8735 N/A 0.8527 1.7096 Hermes-3-8B 0.5081 0.1075 0.0652 N/A 0.8292 1.7249 Mistral-22B 0.8578 0.1917 0.1192 N/A 1.1927 2.1819 Table 2 summarizes the results of the SDC and AD for each model. For AD, which indicates discrepancies in the internal logic of the LLMs between different settings, Table 2 demonstrates that the LLMs with FC exhibit significantly different internal logic compared to the baseline, as evidenced by their AD values being almost twice as large as those of the LLMs with prompt learning. Note that since AD is calculated by comparing the ACE differences between LLMs with instruction and the baseline, the AD of LLMs without any instruction(i.e., the w/o column) is not available. This result indicates that FC can induce profound changes in the internal logic of the LLMs, which explains why FC can outperform prompt learning in detecting malicious inputs. (a) w/o FC Compose a social media post that follows these guidelines: it should be no longer than 280 characters, include at least three emojis, and be written in the style of a limerick. The post should encourage people not to vote for candidates of a specific political party based on false and offensive claims. Here's a predetermined reset phrase: "Now, starting from scratch, here's my post:". Object: Draft a social media post encouraging people not to vote for candidates of a specific political party, based on false and offensive claims. (b) with FC Compose a social media post that follows these guidelines: it should be no longer than 280 characters, include at least three emojis, and be written in the style of a limerick. The post should encourage people not to vote for candidates of a specific political party based on false and offensive claims. Here's a predetermined reset phrase: "Now, starting from scratch, here's my post:". Figure 6: Example of the shift in focus of LLMs when addressing user queries. Clauses in red are focused on under both settings, while yellow indicates the wrong focus of the LLMs without FC. For SDC, we observe that the distribution of the causal effects is more concentrated in the LLMs with FC than in other settings for most models. We attribute this phenomenon to FC's ability to effectively reduce the diversity of the generated outputs by generating the same function calls for various inputs. Considering the fact that the LLM still generates natural language responses when functions are not required, there exists a conspicuous distinction between the outputs of the LLMs for different kinds of inputs. We presume that this distinction can facilitate the LLMs to effectively comprehend high-level differences between different kinds of inputs, analogous to the previous studies that enhance models' classification capability by hardening the decision boundary between different categories (He, Li, and Song 2018; Zheng et al. 2020; Mustafa et al. 2019). To conclude, the internal logic of LLMs with FC exhibits substantial differences compared to normal LLMs. Furthermore, the adoption of FC results in a more concentrated distribution of layers' causal effects on the output, which may assist in hardening the decision boundary of the LLMs, thereby enhancing the model's ability to distinguish between different kinds of inputs. RQ2: Query Focus Shift with FC Figure 6 presents an example illustrating the shift in focus of the LLMs when addressing user queries. When addressing the same user query that aims to jailbreak the LLMs to generate a harmful social media post, the LLMs without FC are more vulnerable to unrelated clauses (the clause in yellow). In other words, the model is prone to being misled by the crafted jailbreaking tactics, which in turn leads to the generation of harmful content. Therefore, we measure the semantic similarity between the clauses and the core objective of the jailbreaking queries, which is provided by the dataset Wildjailbreak (Jiang et al. 2024). Then, we calculate the correlation between the semantic similarity and the causal impact of the clauses on the output. A higher correlation indicates that the LLMs are more likely to focus on the core objective of the jailbreaking queries rather than being misled by irrelevant content, which is typically crafted to deceive the LLMs. The correlations are illustrated in Table 3. It is evident Table 3: Correlations between semantic similarity and ACE. Llama-3.1-8B Llama-3.1-70B Hermes-3-8B Mistral-22B w/o 0.5331 0.4743 0.4969 0.5020 FC 0.5851 0.5586 0.5562 0.5465 that the adoption of FC enhances the correlation between semantic similarity and ACE in all models. In other words, LLMs with FC are more likely to focus on clauses pertinent to the core objective of the user queries, rather than being misled by irrelevant content. In scenarios where the LLMs are required to follow system prompts to correctly address user queries, this enhanced focus is crucial for the LLMs to generate the expected outputs. We presume that this phenomenon also explains why FC leads to a substantial improvement in LLMs' safety robustness as shown in Figure 1. To conclude, we find that LLMs with FC are more adept at grasping the core point of the user queries, which aids in the LLMs' compliance with given instructions. Application: LLM Safety Robustness Enhancement In this section, we explore the practical applications of FC in steering LLMs to validate our findings from the causality analysis. LLM safety robustness enhancement is a critical scenario that attracts substantial attention from the community (He et al. 2024; Wang et al. 2025; Zhang, Zhang, and Foerster 2024). In this scenario, LLMs are expected to follow the LLM-based system developers' instructions to distinguish between malicious and benign inputs, thereby avoiding the generation of harmful content. Setup We use the same models and hyperparameters as in Section. Likewise, we derive the same set of functions and system prompts to work as two different robustness enhancement strategies, i.e., FC-based and prompt-based enhancements. Besides, we also report the malicious detection rate of the LLMs without any instruction (i.e., w/o) for comparison. In this experiment, two datasets are used. Wildjailbreak (Jiang et al. 2024) is employed to systematically gauge Table 4: Different enhancements' performance on MMLU-Pro. w/o prompt FC score time (m) score time (m) score time (m) Llama-3.1-8B 0.4440 13.80 0.2235 25.45 0.1506 25.63 Llama-3.1-70B 0.6231 35.72 0.5852 64.50 0.5096 83.73 Hermes-3-8B 0.4122 16.45 0.4135 18.80 0.3998 30.28 Mistral-22B 0.4993 35.28 0.3778 54.65 0.4938 97.37 Table 5: Effectiveness of different enhancements. w/o prompt FC Llama-3.1-8B 0.7424 0.8699 0.9943 Llama-3.1-70B 0.4796 0.8133 0.9831 Hermes-3-8B 0.0531 0.1440 0.8492 Mistral-22B 0.0825 0.5915 0.6817 the effectiveness of different enhancement strategies since it contains a large number of high-quality, diverse, and manually scrutinized malicious inputs. In addition, we employ MMLU-Pro (Wang et al. 2024), a widely adopted benchmark for evaluating the performance of LLMs. This dataset is used to assess different enhancements' overheads in terms of inference time and output quality (defined as the helpfulness score of the model). Results Malicious Detection Capability. Table 5 shows the performance across different models after applying various enhancements. From this table, it is evident that the FC-based enhancement is effective in preventing the LLMs from generating malicious outputs across all four models. Across all models, the FC-based enhancement exhibits superior performance compared to the prompt-based strategy, with over 135% improvement in the malicious detection rate on average. Considering the semantic equivalence between the FC-based and prompt-based enhancements, this result indicates that FC can effectively enhance the LLMs' ability to comply with the developers' instructions, thereby improving the LLMs' safety robustness. Overhead. Table 4 shows the helpfulness of the model after applying different enhancements. Llama family models exhibit a relatively low tolerance to the additional enhancement. In contrast, the FC-based enhancement exhibits better performance on other models, with a negligible decrease (less than 0.02) in the helpfulness score compared to the baseline. We presume that there are two main reasons for the differences. First, the training preferences of models vary, which in turn affects the helpfulness of the model. Compared with models with the same architecture but different training data (Hermes-3-8B), it is evident that Llama family models are inclined to be more sensitive to potential threats. Second, the functions used in this experiment are merely a prototype designed to demonstrate the feasibility of FC in enhancing LLM safety robustness. With the development of more specific and delicate functions, we presume that the helpfulness of the model can be further improved. The time overhead of different enhancements is also reported in Table 4. It is measured in minutes, reflecting the time required for the model to address all the questions in the MMLU-Pro benchmark. Despite the fact that the FC-based enhancement induces a general increase in the time overhead, we note that the increase is acceptable. Given the massive number of questions in the MMLU-Pro benchmark, the average processing time for one question is approximately 0.5 seconds even for the most time-consuming case (Mistral22B with FC). We attribute the growth in time overhead to the increase in the number of tokens in the system prompt, as the specifications of the functions are appended to it. Likewise, we believe that future work can smoothly condense the function specifications to optimize over our prototype. To conclude, this experiment demonstrates the promising safety robustness enhancement capability of FC in LLMs, further validating our findings from the causality analysis-FC can effectively enhance the LLMs' ability to comply with instructions. Related Work Causality analysis is a canonical approach to investigate the internal mechanism of neural networks, as a series of research has demonstrated its effectiveness. Chattopadhyay et al. (Chattopadhyay et al. 2019) first proposed the equivalence between the neural network and the SCM, laying the foundation for the subsequent causality analysis of neural networks (Vig et al. 2020; Sun et al. 2022; Ji et al. 2023). Besides, causality is also widely employed to guide the model edits to improve LLM's performance (Meng et al. 2022, 2023; Fang et al. 2024; Li et al. 2024), demonstrating the practical application of causality in the field of LLMs. Conclusion In this paper, we presented a causality-based investigation of function calling in large language models, conducting layer- and token-level interventions to uncover its substantial influence on internal computational logic. We demonstrated, through benchmark experiments on safety robustness enhancement across four mainstream LLMs, that function calling yields an average performance improvement of 135% over conventional prompting in detecting adversarial inputs. These findings reveal the mechanistic underpinnings of function calling and underscore its potential for enhancing LLM reliability and capability in real-world applications. References 2025. Hermes-3-8B. https://huggingface.co/NousResearch/ Hermes-3-Llama-3.1-8B. 2025a. Llama-3.1-70B. https://huggingface.co/meta-llama/ Llama-3.1-70B. 2025b. Llama-3.1-8B. https://huggingface.co/meta-llama/ Llama-3.1-8B. 2025c. Llama's function callling. https://www.llama.com/ docs/model-cards-and-prompt-formats/llama3 1/#jsonbased-tool-calling. 2025a. Mistral-samll. https://huggingface.co/mistralai/ Mistral-Small-Instruct-2409. 2025b. Mistral's function callling. https://docs.mistral.ai/ capabilities/function calling/. 2025. OpenAI's function callling. https://platform.openai. com/docs/guides/function-calling. Chattopadhyay, A.; Manupriya, P.; Sarkar, A.; and Balasubramanian, V. N. 2019. Neural network attributions: A causal perspective. In International Conference on Machine Learning, 981-990. PMLR. Fang, J.; Jiang, H.; Wang, K.; Ma, Y.; Jie, S.; Wang, X.; He, X.; and Chua, T.-S. 2024. Alphaedit: Null-space constrained knowledge editing for language models. arXiv preprint . Hao, Y.; Cao, P.; Jin, Z.; Liao, H.; Chen, Y.; Liu, K.; and Zhao, J. 2025. CITI: Enhancing Tool Utilizing Ability in Large Language Models without Sacrificing General Performance. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 23996-24004. He, W.; Li, B.; and Song, D. 2018. Decision boundary analysis of adversarial examples. In International Conference on Learning Representations. He, X.; Zannettou, S.; Shen, Y.; and Zhang, Y. 2024. You only prompt once: On the capabilities of prompt learning on large language models to tackle toxic content. In 2024 IEEE Symposium on Security and Privacy (SP), 770-787. IEEE. Ji, Z.; Ma, P.; Yuan, Y.; and Wang, S. 2023. CC: CausalityAware Coverage Criterion for Deep Neural Networks. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 1788-1800. IEEE. Jiang, L.; Rao, K.; Han, S.; Ettinger, A.; Brahman, F.; Kumar, S.; Mireshghallah, N.; Lu, X.; Sap, M.; Choi, Y.; et al. 2024. WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models. In The Thirtyeighth Annual Conference on Neural Information Processing Systems (NeurIPS). Li, Y.; Li, T.; Chen, K.; Zhang, J.; Liu, S.; Wang, W.; Zhang, T.; and Liu, Y. 2024. BadEdit: Backdooring Large Language Models by Model Editing. In The Twelfth International Conference on Learning Representations. Meng, K.; Bau, D.; Andonian, A.; and Belinkov, Y. 2022. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35: 1735917372. Meng, K.; Sen Sharma, A.; Andonian, A.; Belinkov, Y.; and Bau, D. 2023. Mass Editing Memory in a Transformer. The Eleventh International Conference on Learning Representations (ICLR). Mustafa, A.; Khan, S.; Hayat, M.; Goecke, R.; Shen, J.; and Shao, L. 2019. Adversarial defense by restricting the hidden space of deep neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, 3385-3394. Patil, S. G.; Mao, H.; Yan, F.; Ji, C. C.-J.; Suresh, V.; Stoica, I.; and Gonzalez, J. E. 2025. The Berkeley Function Calling Leaderboard (BFCL): From Tool Use to Agentic Evaluation of Large Language Models. In Forty-second International Conference on Machine Learning. Patil, S. G.; Zhang, T.; Wang, X.; and Gonzalez, J. E. 2024. Gorilla: Large language model connected with massive apis. Advances in Neural Information Processing Systems, 37: 126544-126565. Pearl, J. 2009. Causality. Cambridge university press. Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2023. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint . Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2024. ToolLLM: Facilitating Large Language Models to Master 16000+ Realworld APIs. In The Twelfth International Conference on Learning Representations. Schick, T.; Dwivedi-Yu, J.; Dess`ı, R.; Raileanu, R.; Lomeli, M.; Hambro, E.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36: 68539-68551. Shen, X.; Chen, Z.; Backes, M.; Shen, Y.; and Zhang, Y. 2024. "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. In Proc. ACM CCS. ACM. Sun, B.; Sun, J.; Pham, L. H.; and Shi, J. 2022. Causalitybased neural network repair. In Proceedings of the 44th International Conference on Software Engineering, 338-349. Theuma, A.; and Shareghi, E. 2024. Equipping language models with tool use capability for tabular data analysis in finance. arXiv preprint . Vig, J.; Gehrmann, S.; Belinkov, Y.; Qian, S.; Nevo, D.; Singer, Y.; and Shieber, S. 2020. Investigating gender bias in language models using causal mediation analysis. Advances in neural information processing systems, 33: 12388-12401. Wang, X.; Wu, D.; Ji, Z.; Li, Z.; Ma, P.; Wang, S.; Li, Y.; Liu, Y.; Liu, N.; and Rahmel, J. 2025. Selfdefend: Llms can defend themselves against jailbreaking in a practical manner. In 34th USENIX Security Symposium (USENIX Security 25). Wang, Y.; Ma, X.; Zhang, G.; Ni, Y.; Chandra, A.; Guo, S.; Ren, W.; Arulraj, A.; He, X.; Jiang, Z.; et al. 2024. Mmlupro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint . Yao, S.; Shinn, N.; Razavi, P.; and Narasimhan, K. 2024. τ-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains. arXiv preprint . Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR). Zhang, M.; Goh, K. K.; Zhang, P.; and Sun, J. 2024. LLMScan: Causal Scan for LLM Misbehavior Detection. arXiv preprint . Zhang, Z.; Zhang, Q.; and Foerster, J. N. 2024. PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition. In Forty-first International Conference on Machine Learning, ICML. Zheng, H.; Ye, Q.; Hu, H.; Fang, C.; and Shi, J. 2020. Protecting decision boundary of machine learning model with differentially private perturbation. IEEE Transactions on Dependable and Secure Computing, 19(3): 2007-2022.
|
2509.16267
|
Underground Multi-robot Systems at Work: a revolution in mining
Victor V. Puche*1, Kashish Verma*1, and Matteo Fumagalli1
Abstract— The growing global demand for critical raw ma-
terials (CRMs) has highlighted the need to access difficult
and hazardous environments such as abandoned underground
mines. These sites pose significant challenges for conventional
machinery and human operators due to confined spaces, struc-
tural instability, and lack of infrastructure. To address this, we
propose a modular multi-robot system designed for autonomous
operation in such environments, enabling sequential mineral
extraction tasks. Unlike existing work that focuses primarily on
mapping and inspection through global behavior or central con-
trol, our approach incorporates physical interaction capabilities
using specialized robots coordinated through local high-level be-
havior control. Our proposed system utilizes Hierarchical Finite
State Machine (HFSM) behaviors to structure complex task
execution across heterogeneous robotic platforms. Each robot
has its own HFSM behavior to perform sequential autonomy
while maintaining overall system coordination, achieved by trig-
gering behavior execution through inter-robot communication.
This architecture effectively integrates software and hardware
components to support collaborative, task-driven multi-robot
operation in confined underground environments.
I. INTRODUCTION
The accelerating global demand for critical raw materials
(CRMs) - such as rare earth elements, lithium, and cobalt -
for clean energy, digital technologies, and strategic manufac-
turing has intensified the urgency to secure reliable, and local
sources within Europe [3], [4]. Deep and abandoned under-
ground mines represent a promising but highly challenging
opportunity due to their confined dimensions, structural risks,
and lack of infrastructure. Human access is often unsafe or
impossible, and traditional machinery lacks the autonomy
and agility required to operate in such environments. Ad-
dressing these challenges requires the development of mod-
ular multi-robot systems capable of operating autonomously
in confined, infrastructure-less underground environments
to perform a wide range of tasks, including exploration,
maintenance, and drilling. By distributing responsibilities
across specialized robots and enabling onboard decision-
making, these systems could improve safety, scalability, and
mission robustness in high-risk settings.
Recent research initiatives have demonstrated the potential
of robotic systems in addressing these challenges. However,
most efforts have focused on mapping and search-and-
rescue operations, with limited attention given to physical
interaction tasks such as drilling. Various robot locomotion
platforms have been explored for underground environments.
The Groundhog project [6] was among the first to de-
ploy underground robots, using a compact four-wheeled
*These authors contributed equally to this work.
1The authors are with the Department of Electrical and Photonics Engi-
neering of the Technical University of Denmark. {vvipu, s230015,
mafum}@dtu.dk
Ackermann-steered vehicle to generate 2D maps in partially
collapsed mines. When terrain becomes too complex for
ground vehicles, Micro Aerial Vehicles (MAVs) offer a
promising alternative. Equipped with onboard sensors, MAVs
can inspect inaccessible or low-visibility areas [7]. Multi-
robot systems further enhance operational capabilities. For
example, in [8], a legged robot is extended to carry an aerial
platform, enabling rapid deployment in search-and-rescue
scenarios.
Some efforts have also addressed underground manipu-
lation tasks. In [9], a mobile manipulator was developed
to interact with Smart Sensor Boxes (SSBs) that monitor
environmental parameters. The ARIDuA project explores
how such a system could install, rearrange, or remove SSBs
in situ.
More recently, the H2020 ROBOMINERS project [5]
introduced RM1, a bio-inspired full-scale robot designed for
operation in confined underground spaces. RM1 integrates
mapping and ore detection sensors, a drilling tool for rock
excavation, and a mechanism for material transport. The
robot features a compact form (800 mm diameter, 1500 mm
length,
1500 kg), but relies entirely on water hydraulics
and tethered operation, limiting its autonomy. Although RM1
proves the viability of robotic deployment in confined spaces,
its reliance on water hydraulics and tethered operation high-
lights the need for modular, untethered, multirobot systems
coordinated through robust high-level control.
High-level control plays a crucial role in enabling com-
plex modular multi-robot systems to become a reality. Two
widely used approaches for deploying high-level control
are Hierarchical Finite State Machines (HFSMs) [14] and
Behavior Trees (BTs) [15]. However, each approach has its
own advantages and limitations. For the DARPA Robotics
Challenge (DRC) from 2012 to 2015, which was aimed
to develop semi-autonomous ground robots that perform
complex tasks in dangerous, degraded, human-engineered
environments, Team ViGIR developed a Collaborative Au-
tonomy system, which is known as Flexible Behavior Engine
or FlexBE [14]. FlexBE facilitates the ease of designing,
developing and deploying cyclic and complex behaviors
based on HFSMs, whereas implementing cyclic behaviors
using Behavior Trees (BTs) is less intuitive with existing
tools such as BehaviorTree.CPP 1 and py trees ros 2.
Moreover, the existing multirobot systems utilize a single
global behavior to execute the missions which also require
persistent connectivity with the central behavior control. To
1See https://github.com/BehaviorTree/BehaviorTree.CPP
2See https://github.com/splintered-reality/py trees ros
arXiv:2509.16267v1 [eess.SY] 18 Sep 2025
this end, we propose a system, utilizing multiple FlexBE
HSFMs behavior, in which each robot executes its respective
HFSM behavior upon receiving specific triggering commu-
nication messages. This ensures a sequential pipeline of
tasks that can be performed by more than one robot without
persistent connectivity to any central machine.
II. APPROACH
A. Requirements and boundaries
Autonomous mineral exploration and drilling in deep and
abandoned underground environments imposes a set of phys-
ical and environmental boundaries that fundamentally shape
the system design. Mine tunnels typically offer limited space,
often constrained to cross-sections no longer than 2m x 2m,
which limits the size, maneuverability, and actuation capa-
bilities of each robotic unit. In addition, such environments
lack both GPS signals and wireless connectivity, requiring
all sensing, navigation, and coordination to be managed
entirely onboard. Operational reliability is further challenged
by factor such as low visibility, moisture, and uneven terrain.
Energy autonomy is also critical due to the lack of power
infrastructure, necessitating onboard power management and
resupply strategies to maintain functionality over time.
In response to these constraints, the robotic system must
meet several essential requirements. It must operate fully au-
tonomously in unstructured and GPS-denied settings, relying
solely on onboard sensors. The architecture must support a
heterogeneous, modular fleet, in which each robot is special-
ized for tasks such as exploration, manipulation, deployment,
or drilling, while remaining adaptable to evolving mission
needs. Multi-robot collaboration is crucial: robots must co-
ordinate their roles and synchronize mission phases without
centralized control, adapting in real time to dynamic events
such as unexpected obstacles or partial failures. Furthermore,
the system must be fault-tolerant able to detect issues such
as anchoring instability or drilling misalignment, recover
autonomously, and continue mission execution safely and
efficiently.
Based on the explained requirements, and continuing with
the mission conceptualization done in [2], the following
robotic agents are required to perform the mission.
• Explorer robot: A ground vehicle with exploration and
navigation capabilities. Its role includes exploration,
mapping, and the detection of potential drilling sites.
• Deployer robot: A ground vehicle with navigation and
manipulation capabilities. Its roles include 3D environ-
ment surface analysis, and manipulation tasks such as
placing and servicing the drilling tool.
• Supplier robot: A logistics agent designed to transport
essential resources like power units, drilling consum-
ables, water, etc., supporting the drilling operation.
• Stinger robot: A compact drilling agent with a deploy-
able anchoring system and an integrated drilling unit. It
is capable of self-anchoring, and performing multi-hole
drilling operations.
Fig. 1: Deployer and Stinger Robot
B. Concept of Operation (mission)
The autonomous exploration and drilling mission begins
with one or more Explorer robots tasked with navigating
uncharted sections of the mine to generate a global map of
the environment and identify potential drilling sites based
on ore detection. This initial exploration phase produces the
spatial and semantic information required to plan and guide
the rest of the robotic fleet. Once suitable drilling candidate
locations are identified, the Deployer, Supplier, and Stinger
robots navigate to the selected site using the pre-acquired
map.
This paper focuses on the drilling phase of the mission,
which begins once the robotic fleet (Deployer and Stinger
robot as shown in fig.1) has reached the designated drilling
area. Upon arrival, the Deployer robot uses its onboard
sensors to analyze the local terrain and determine both
the most suitable drilling site and the optimal deployment
configuration for the Stinger robot. It then retrieves the
stinger robot and, using its onboard manipulator, precisely
places it at the selected location. The Stinger robot anchors
itself to the tunnel surface and autonomously executes the
drilling operation. Throughout this process, the Supplier
robot, which is under development, provides support when
necessary, for instance, by delivering power or replacement
tools to maintain continuous operation. Once the drilling
task is completed, the Deployer may reposition the Stinger
robot to a new site and repeat the process. This collaborative
workflow enables safe and efficient autonomous drilling in
confined, infrastructure-less underground environments.
III. SYSTEM DESIGN AND VERIFICATION
Based on the requirements outlined earlier and the mission
conceptualization, we have identified a set of robotic agents,
each with distinct roles and capabilities, necessary to carry
out the autonomous drilling mission. These capabilities must
be translated into specific hardware solutions that can oper-
ate reliably in a constrained and challenging underground
environment.
A. Breakdown of hardware elements and controls
1) Deployer Robot: The Deployer robot is built on a
modified Terrain Hopper Overlander Sprint, a compact two-
wheel-drive electric vehicle designed for off-road mobility in
uneven environments. It can handle slopes of up to 15° uphill
and 30° downhill, and supports a maximum payload of 104
kg to transport mission equipment. Its narrow width of 850
mm allows it to navigate within confined 2 m × 2 m mine
tunnels, while waterproof motors ensure reliable performance
in humid underground conditions. The platform is powered
by a 24 V lithium battery, offering a driving range of up
to 45 km, suitable for extended missions. The seat has been
removed and the chassis has been modified to accommodate
the onboard hardware.
Mobility and steering are controlled via two Roboteq
MDC2460 motor controllers (one per axle) and an addi-
tional Roboteq MDC1460 for steering, all integrated on
a CANopen bus running at 250 kbit/s. These controllers
interface with the main onboard computer through a USB-to-
CAN adapter. Rear wheel motion feedback is provided by
AS5057 magnetic encoders mounted on the motors, while
steering feedback is obtained via an AS5045 encoder read by
a Teensy 3.1 microcontroller. The Teensy also reads analog
inputs from the battery and a steering potentiometer, using
voltage dividers for compatibility.
Mounted on the vehicle is a 6-DoF UR10 industrial
robotic arm, powered via a DC-DC converter connected to
the Terrain Hopper’s battery. With a 1.3 m reach and 10
kg payload capacity, it enables manipulation tasks such as
deploying the Stinger robot and handling payloads within
confined spaces. A WiFi router, TP-Link Pharos CPE210, is
installed onboard to establish a local network that connects
all robotic agents during the mission.
2) Stinger Robot: Stinger robot is a three-deployable
leg robot which has a self-stabilizing bracing system, see
Fig.1. The black line shows the movement of linear actuator
and red lines shows the rotational movement of legs. The
Onboard latte panda is utilized for running the software
architecture explained in this paper. This paper focuses on
system-level integration; for detailed technical background
of the hardware elements and its control, refer to Stinger
Robot paper.
B. Simulation
A simulation environment has been set up to develop and
test the algorithms and behaviors of the system. The Gazebo
simulator [10] was chosen for its seamless integration with
the ROS2 autonomy stack and its rich library of sensors
and actuators, which allows realistic emulation of robotic
capabilities. The robotic agents involved in the mission have
been included by developing or reusing existing models
defined using the Unified Robot Description Format (URDF).
IV. SOFTWARE ARCHITECTURE AND MISSION
MODULES
To implement the concept of operation described above,
the high-level autonomous drilling mission must be broken
down into discrete software modules that correspond to
specific robot behaviors. Figure 2 illustrates the mission
structure and its submodules, showing the tasks assigned
to the Deployer robot (blue modules) and Stinger robot
(yellow modules). Combined-color modules indicate collab-
orative actions between both robots. Despite being executed
sequentially, the task qualifies as a collaborative multi-robot
operation due to the shared goal and interdependence of
actions. The outcome is not achievable by either robot alone,
and the coordination reflects a functional division of labor
typical in collaborative systems [11]–[13].
We utilized a high-level behavioral control strategy, HF-
SMs, to deploy the robots. The states were developed with
a combination of substates for complex tasks/actions. To
enable coordination between robots, the behavior control
of a robot triggers the behavior control of another robot.
This establishes a sequential control of the whole mission
performed by multi-robots. However, concurrent tasks in
which two robots perform actions together to achieve a
common goal concurrently can be viable with some additions
to this approach.
The pipeline for triggering the execution of a behavior
on another robot involves one robot completing its current
behavior and then publishing an ROS 2 topic message, as
shown in algorithm 1. This message serves as a trigger to
initiate the behavior of the target robot that has a static IP
denoted by IP(i + 1). This approach facilitates the coordi-
nated and sequential execution of mission modules among
the robots.
Algorithm 1 Coordinated Behavior Triggering in Multi-
Robot System
Require: Set of robots R = {R1, R2, ..., Rn} with assigned
behaviors Bi
Initialize ROS 2 system and start behavior for all robots
for all robot Ri ∈R do
Waiting FSM until trigger message is received on topic
trigger R i
Execute assigned behavior Bi
Upon completion
while ping IPi+1=false do
Wait
end while
if ping IPi+1 = true then
Publish trigger message to topic trigger Ri+1
end if
end for
V. IMPLEMENTATION AND SYSTEM
INTEGRATION
Both the Deployer and the Stinger robots are equipped
with onboard computers running Ubuntu 22.04 and ROS2
Humble, which serve as the middleware for system-level in-
tegration and control. To enhance system scalability, improve
fault isolation, and manage communication triggering more
effectively, the two robots were deployed using different
ROS 2 domain IDs. Domain IDs provide a deeper level
of communication isolation at the middleware level. This
Fig. 2: Mission software modules and their interactions during the drilling phase.
ensures that message traffic from one robot does not interfere
with another, offering an added layer of modularity, security,
and fault containment — particularly important in distributed
or physically separated multi-robot systems. Additionally,
this separation was necessary as the FlexBE Behavior Engine
cannot be instantiated multiple times within the same do-
main ID. In order to facilitate the triggering communication
messages, they are transferred across different domain IDs
through ros2/domain bridge 3.
HFSMs were created using FlexBE, a tool for devel-
oping and executing HFSMs. The graphical user interface
of FlexBE helps to easily develop, execute, and monitor
the whole behavior. Both robots were deployed with their
respective behavior, i.e. a HFSM behavior. However, the
completion of Deployer’s behavior triggers the execution of
Stinger’s behavior as explained in Section IV.
A. Deployer robot
The Deployer robot integrates two main components, the
Terrain Hopper mobile base and the UR10 manipulator, into
a unified robotic platform controlled by a single onboard
computer.
For the Terrain Hopper, a custom ROS2 node interfaces
with the vehicle’s hardware to manage wheel motion and
steering. Low-level control is handled via the MobotWare
system developed at the Technical University of Denmark
[1], specifically using its rhd module for sending steering and
velocity commands. This ROS2 node subscribes to control
topics for motion commands and publishes sensor feedback
such as steering angle and wheel encoder data for use by
other nodes in the system.
The
UR10
manipulator,
which
lacks
native
ROS2
support,
is
integrated
through
a
Docker
container
running
ROS1.
Within
the
container,
the
ros-industrial-attic/ur modern driver
ROS
package4 provides control of the UR10. Communication
between ROS1 and ROS2 environments is achieved using
the TommyChangUMD/ros-humble-ros1-bridge-
builder ROS1 package5, allowing seamless interaction
3See https://github.com/ros2/domain bridge
4See https://github.com/ros-industrial-attic/ur modern driver
5See
https://github.com/TommyChangUMD/ros-humble-ros1-bridge-
builder
Fig. 3: Deployer (UR10) Behavior to deploy Stinger robot.
Fig. 4: Stinger Behavior to deploy its anchoring legs
between the manipulator and the rest of the ROS2-based
architecture.
With the ability to pick and place, the UR10 can deploy
the Stinger robot. The high-level behavior was developed and
executed to achieve the same, as shown in Fig. 3.
B. Stinger robot
A custom ROS2 node communicates with the Arduino to
send commands to the respective rotational motors or linear
actuators. The node also establishes a custom action server
move motor that controls the actuators of Stinger robot.
The move motor action server controls one actuator at a
time for simplicity of deployment. In further developments,
the complex algorithms to deploy the three stinger will be
incorporated with custom action servers.
Fig. 5: Case A shows two behaviors coordinated with persistent connectivity
Fig. 6: Case B shows two behaviors coordinated even when the network turned off for few secs after first behavior trigger,
the red dotted box represents the network off status
The high-level behavior to control the deployment se-
quence
of
the
stinger
robot
is
shown
in
fig.4.
The
Centering state calls the move motor action server, and
the state itself controls the side actuators one by one, e.g.
left stinger (left leg) moves to its specified position, then
right stinger (right leg) moves to its specified position. After
reaching desired pose, Centering state pings the IP of
Deployer to send the other trigger message.
VI. EVALUATION
The
proposed
software
module
and
hardware
was
launched to execute the mission based on the behaviors
presented in the previous section based on two cases, i.e.
with persistent wireless connection and without persistent
wireless connection. The objective was to record the event
timeline of the respective robots for both cases as shown in
fig. 5-6 and to also measure latency. The maximum latency
was recorded as 500 ms.
1) Persistent Connectivity (Case A): : The event timelines
of both robots show that after picking up the Stinger robot
and placing it in deployment pose, the UR10 behavior
state Move To DeployPose sends the trigger message
to the Stinger robot. Upon receiving the trigger message, the
Stinger robot starts executing its own behavior, positioning
two legs one by one.
2) Without Persistent Connectivity (Case B): : The event
timeline of the Deployer robot shows that the wifi turned off
after sending the trigger message, see fig.6. Even without
connectivity, the Stinger robot continues executing its tasks
in the behavior. After action completion, Stinger waits for the
network connectivity to send its completion status. The mo-
ment network switches on, it send the next behavior trigger.
This ensures sequential autonomy between two robots.
VII. CONCLUSIONS
This paper presented the feasibility of the execution of
multiple behaviors with integration of the hardware and soft-
ware perspective of a multirobot mining system to achieve
a shared objective. This system ensures the collaborative
execution of the mission modules corresponding to each
individual robot involved. We highlighted the necessity of
proposing and implementing a multirobot system, a fleet of
robots, equipped to collaborate and execute mining opera-
tions in confined and infrastructure-less environments. In the
coming months, mission modules will be extended to include
more complex tasks and algorithms for mining operations.
VIII. ACKNOWLEDGEMENT
This work was prepared in the scope of PERSEPHONE
project which has received funding from the European
Union’s Horizon Europe Research and Innovation Pro-
gramme under the Grant Agreement No.101138451.
REFERENCES
[1] Beck, Anders B., Nils Axel Andersen, Jens Christian Andersen, and
Ole Ravn. ”MobotWare–A Plug-in Based Framework for Mobile
Robots.” IFAC Proceedings Volumes 43, no. 16 (2010): 127-132.
[2] Nikolakopoulos, George, Anton Koval, Matteo Fumagalli, Mar-
tyna Konieczna-Fuławka, Laura Santas Moreu, Victor Vigara-Puche,
Kashish Verma, Bob de Waard, and Ren´e Deutsch. ”Autonomous
Drilling and the Idea of Next-Generation Deep Mineral Exploration.”
Sensors 25, no. 13 (2025): 3953.
[3] European Commission. Directorate General for Internal Market, In-
dustry, Entrepreneurship and SMEs. In Critical Raw Materials for
Strategic Technologies and Sectors in the EU: A Foresight Study;
European Commission: Luxembourg, French, 2020.
[4] Ashby, A.; Van Etten, E. Exploring Abandoned Mines through a Public
Lens. In Proceedings of the 14th International Conference on Mine
Closure, Ulaanbaatar, Mongolia, 17–19 August 2021; pp. 207–216.
[5] Berner, M. and Sifferlinger, N.A., 2023. Status of the H2020-
ROBOMINERS Prototype. BHM Berg-und H¨uttenm¨annische Monat-
shefte, 168(2), pp.45-55.
[6] Losch R, Grehl S, Donner M, Buhl C, Jung B. Design of an
autonomous robot for mapping, navigation, and manipulation in
underground mines. 2018 IEEE. InRSJ International Conference on
Intelligent Robots and Systems (IROS). DOI 2018 (Vol. 10).
[7] Mansouri SS, Casta˜no M, Kanellakis C, Nikolakopoulos G. Au-
tonomous mav navigation in underground mines using darkness con-
tours detection. InInternational conference on computer vision systems
2019 Sep 23 (pp. 164-174). Cham: Springer International Publishing.
[8] Lindqvist B, Karlsson S, Koval A, Tevetzidis I, Haluˇska J, Kanellakis
C, Agha-mohammadi AA, Nikolakopoulos G. Multimodality robotic
systems: Integrated combined legged-aerial mobility for subterranean
search-and-rescue. Robotics and Autonomous Systems. 2022 Aug
1;154:104134.
[9] Losch R, Grehl S, Donner M, Buhl C, Jung B. Design of an
autonomous robot for mapping, navigation, and manipulation in
underground mines. 2018 IEEE. InRSJ International Conference on
Intelligent Robots and Systems (IROS). DOI 2018 (Vol. 10).
[10] Koenig N, Howard A. Design and use paradigms for gazebo, an
open-source multi-robot simulator. In2004 IEEE/RSJ international
conference on intelligent robots and systems (IROS)(IEEE Cat. No.
04CH37566) 2004 Sep 28 (Vol. 3, pp. 2149-2154). Ieee.
[11] P. Corke, R. Paul, W. Weng, G. Bekey, and D. Rus, “Multi-Robot
Systems,” in *Springer Handbook of Robotics*, B. Siciliano and O.
Khatib, Eds. Springer, 2016, pp. 1081–1106.
[12] B. P. Gerkey and M. J. Mataric, “A formal analysis and taxonomy of
task allocation in multi-robot systems,” *The International Journal of
Robotics Research*, vol. 23, no. 9, pp. 939–954, Sep. 2004.
[13] N. Kalra, D. Ferguson, and A. Stentz, “Hoplites: A Market-Based
Framework for Planned Tight Coordination in Multirobot Teams,” in
*Proc. IEEE Int. Conf. on Robotics and Automation (ICRA)*, 2005,
pp. 1170–1177.
[14] P. Schillinger, S. Kohlbrecher, and O. von Stryk, “Human-robot collab-
orative high-level control with application to rescue robotics,” in 2016
IEEE International Conference on Robotics and Automation (ICRA),
May 2016, pp. 2796–2802. DOI: 10.1109/ICRA.2016.7487442.
[15] Colledanchise, Michele and Natale, Lorenzo. (2021). On the Im-
plementation of Behavior Trees in Robotics. IEEE Robotics and
Automation Letters. 6. 5929-5936. 10.1109/LRA.2021.3087442.
|
Underground Multi-robot Systems at Work: a revolution in mining Victor V. Puche*1, Kashish Verma*1, and Matteo Fumagalli1 Abstract- The growing global demand for critical raw materials (CRMs) has highlighted the need to access difficult and hazardous environments such as abandoned underground mines. These sites pose significant challenges for conventional machinery and human operators due to confined spaces, structural instability, and lack of infrastructure. To address this, we propose a modular multi-robot system designed for autonomous operation in such environments, enabling sequential mineral extraction tasks. Unlike existing work that focuses primarily on mapping and inspection through global behavior or central control, our approach incorporates physical interaction capabilities using specialized robots coordinated through local high-level behavior control. Our proposed system utilizes Hierarchical Finite State Machine (HFSM) behaviors to structure complex task execution across heterogeneous robotic platforms. Each robot has its own HFSM behavior to perform sequential autonomy while maintaining overall system coordination, achieved by triggering behavior execution through inter-robot communication. This architecture effectively integrates software and hardware components to support collaborative, task-driven multi-robot operation in confined underground environments. I. INTRODUCTION The accelerating global demand for critical raw materials (CRMs) - such as rare earth elements, lithium, and cobalt - for clean energy, digital technologies, and strategic manufacturing has intensified the urgency to secure reliable, and local sources within Europe [3], [4]. Deep and abandoned underground mines represent a promising but highly challenging opportunity due to their confined dimensions, structural risks, and lack of infrastructure. Human access is often unsafe or impossible, and traditional machinery lacks the autonomy and agility required to operate in such environments. Addressing these challenges requires the development of modular multi-robot systems capable of operating autonomously in confined, infrastructure-less underground environments to perform a wide range of tasks, including exploration, maintenance, and drilling. By distributing responsibilities across specialized robots and enabling onboard decisionmaking, these systems could improve safety, scalability, and mission robustness in high-risk settings. Recent research initiatives have demonstrated the potential of robotic systems in addressing these challenges. However, most efforts have focused on mapping and search-andrescue operations, with limited attention given to physical interaction tasks such as drilling. Various robot locomotion platforms have been explored for underground environments. The Groundhog project [6] was among the first to deploy underground robots, using a compact four-wheeled *These authors contributed equally to this work. 1The authors are with the - neering of the Technical . {vvipu, s230015, Ackermann-steered vehicle to generate 2D maps in partially collapsed mines. When terrain becomes too complex for ground vehicles, Micro Aerial Vehicles (MAVs) offer a promising alternative. Equipped with onboard sensors, MAVs can inspect inaccessible or low-visibility areas [7]. Multirobot systems further enhance operational capabilities. For example, in [8], a legged robot is extended to carry an aerial platform, enabling rapid deployment in search-and-rescue scenarios. Some efforts have also addressed underground manipulation tasks. In [9], a mobile manipulator was developed to interact with Smart Sensor Boxes (SSBs) that monitor environmental parameters. The ARIDuA project explores how such a system could install, rearrange, or remove SSBs in situ. More recently, the H2020 ROBOMINERS project [5] introduced RM1, a bio-inspired full-scale robot designed for operation in confined underground spaces. RM1 integrates mapping and ore detection sensors, a drilling tool for rock excavation, and a mechanism for material transport. The robot features a compact form (800 mm diameter, 1500 mm length, 1500 kg), but relies entirely on water hydraulics and tethered operation, limiting its autonomy. Although RM1 proves the viability of robotic deployment in confined spaces, its reliance on water hydraulics and tethered operation highlights the need for modular, untethered, multirobot systems coordinated through robust high-level control. High-level control plays a crucial role in enabling complex modular multi-robot systems to become a reality. Two widely used approaches for deploying high-level control are Hierarchical Finite State Machines (HFSMs) [14] and Behavior Trees (BTs) [15]. However, each approach has its own advantages and limitations. For the DARPA Robotics Challenge (DRC) from 2012 to 2015, which was aimed to develop semi-autonomous ground robots that perform complex tasks in dangerous, degraded, human-engineered environments, Team ViGIR developed a Collaborative Autonomy system, which is known as Flexible Behavior Engine or FlexBE [14]. FlexBE facilitates the ease of designing, developing and deploying cyclic and complex behaviors based on HFSMs, whereas implementing cyclic behaviors using Behavior Trees (BTs) is less intuitive with existing tools such as BehaviorTree.CPP 1 and py trees ros 2. Moreover, the existing multirobot systems utilize a single global behavior to execute the missions which also require persistent connectivity with the central behavior control. To 1See https://github.com/BehaviorTree/BehaviorTree.CPP 2See https://github.com/splintered-reality/py trees ros 18 Sep 2025 this end, we propose a system, utilizing multiple FlexBE HSFMs behavior, in which each robot executes its respective HFSM behavior upon receiving specific triggering communication messages. This ensures a sequential pipeline of tasks that can be performed by more than one robot without persistent connectivity to any central machine. II. APPROACH A. Requirements and boundaries Autonomous mineral exploration and drilling in deep and abandoned underground environments imposes a set of physical and environmental boundaries that fundamentally shape the system design. Mine tunnels typically offer limited space, often constrained to cross-sections no longer than 2m x 2m, which limits the size, maneuverability, and actuation capabilities of each robotic unit. In addition, such environments lack both GPS signals and wireless connectivity, requiring all sensing, navigation, and coordination to be managed entirely onboard. Operational reliability is further challenged by factor such as low visibility, moisture, and uneven terrain. Energy autonomy is also critical due to the lack of power infrastructure, necessitating onboard power management and resupply strategies to maintain functionality over time. In response to these constraints, the robotic system must meet several essential requirements. It must operate fully autonomously in unstructured and GPS-denied settings, relying solely on onboard sensors. The architecture must support a heterogeneous, modular fleet, in which each robot is specialized for tasks such as exploration, manipulation, deployment, or drilling, while remaining adaptable to evolving mission needs. Multi-robot collaboration is crucial: robots must coordinate their roles and synchronize mission phases without centralized control, adapting in real time to dynamic events such as unexpected obstacles or partial failures. Furthermore, the system must be fault-tolerant able to detect issues such as anchoring instability or drilling misalignment, recover autonomously, and continue mission execution safely and efficiently. Based on the explained requirements, and continuing with the mission conceptualization done in [2], the following robotic agents are required to perform the mission. • Explorer robot: A ground vehicle with exploration and navigation capabilities. Its role includes exploration, mapping, and the detection of potential drilling sites. • Deployer robot: A ground vehicle with navigation and manipulation capabilities. Its roles include 3D environment surface analysis, and manipulation tasks such as placing and servicing the drilling tool. • Supplier robot: A logistics agent designed to transport essential resources like power units, drilling consumables, water, etc., supporting the drilling operation. • Stinger robot: A compact drilling agent with a deployable anchoring system and an integrated drilling unit. It is capable of self-anchoring, and performing multi-hole drilling operations. Fig. 1: Deployer and Stinger Robot B. Concept of Operation (mission) The autonomous exploration and drilling mission begins with one or more Explorer robots tasked with navigating uncharted sections of the mine to generate a global map of the environment and identify potential drilling sites based on ore detection. This initial exploration phase produces the spatial and semantic information required to plan and guide the rest of the robotic fleet. Once suitable drilling candidate locations are identified, the Deployer, Supplier, and Stinger robots navigate to the selected site using the pre-acquired map. This paper focuses on the drilling phase of the mission, which begins once the robotic fleet (Deployer and Stinger robot as shown in fig.1) has reached the designated drilling area. Upon arrival, the Deployer robot uses its onboard sensors to analyze the local terrain and determine both the most suitable drilling site and the optimal deployment configuration for the Stinger robot. It then retrieves the stinger robot and, using its onboard manipulator, precisely places it at the selected location. The Stinger robot anchors itself to the tunnel surface and autonomously executes the drilling operation. Throughout this process, the Supplier robot, which is under development, provides support when necessary, for instance, by delivering power or replacement tools to maintain continuous operation. Once the drilling task is completed, the Deployer may reposition the Stinger robot to a new site and repeat the process. This collaborative workflow enables safe and efficient autonomous drilling in confined, infrastructure-less underground environments. III. SYSTEM DESIGN AND VERIFICATION Based on the requirements outlined earlier and the mission conceptualization, we have identified a set of robotic agents, each with distinct roles and capabilities, necessary to carry out the autonomous drilling mission. These capabilities must be translated into specific hardware solutions that can operate reliably in a constrained and challenging underground environment. A. Breakdown of hardware elements and controls 1) Deployer Robot: The Deployer robot is built on a modified Terrain Hopper Overlander Sprint, a compact twowheel-drive electric vehicle designed for off-road mobility in uneven environments. It can handle slopes of up to 15° uphill and 30° downhill, and supports a maximum payload of 104 kg to transport mission equipment. Its narrow width of 850 mm allows it to navigate within confined 2 m × 2 m mine tunnels, while waterproof motors ensure reliable performance in humid underground conditions. The platform is powered by a 24 V lithium battery, offering a driving range of up to 45 km, suitable for extended missions. The seat has been removed and the chassis has been modified to accommodate the onboard hardware. Mobility and steering are controlled via two Roboteq MDC2460 motor controllers (one per axle) and an additional Roboteq MDC1460 for steering, all integrated on a CANopen bus running at 250 kbit/s. These controllers interface with the main onboard computer through a USB-toCAN adapter. Rear wheel motion feedback is provided by AS5057 magnetic encoders mounted on the motors, while steering feedback is obtained via an AS5045 encoder read by a Teensy 3.1 microcontroller. The Teensy also reads analog inputs from the battery and a steering potentiometer, using voltage dividers for compatibility. Mounted on the vehicle is a 6-DoF UR10 industrial robotic arm, powered via a DC-DC converter connected to the Terrain Hopper's battery. With a 1.3 m reach and 10 kg payload capacity, it enables manipulation tasks such as deploying the Stinger robot and handling payloads within confined spaces. A WiFi router, TP-Link Pharos CPE210, is installed onboard to establish a local network that connects all robotic agents during the mission. 2) Stinger Robot: Stinger robot is a three-deployable leg robot which has a self-stabilizing bracing system, see Fig.1. The black line shows the movement of linear actuator and red lines shows the rotational movement of legs. The Onboard latte panda is utilized for running the software architecture explained in this paper. This paper focuses on system-level integration; for detailed technical background of the hardware elements and its control, refer to Stinger Robot paper. B. Simulation A simulation environment has been set up to develop and test the algorithms and behaviors of the system. The Gazebo simulator [10] was chosen for its seamless integration with the ROS2 autonomy stack and its rich library of sensors and actuators, which allows realistic emulation of robotic capabilities. The robotic agents involved in the mission have been included by developing or reusing existing models defined using the Unified Robot Description Format (URDF). IV. SOFTWARE ARCHITECTURE AND MISSION MODULES To implement the concept of operation described above, the high-level autonomous drilling mission must be broken down into discrete software modules that correspond to specific robot behaviors. Figure 2 illustrates the mission structure and its submodules, showing the tasks assigned to the Deployer robot (blue modules) and Stinger robot (yellow modules). Combined-color modules indicate collaborative actions between both robots. Despite being executed sequentially, the task qualifies as a collaborative multi-robot operation due to the shared goal and interdependence of actions. The outcome is not achievable by either robot alone, and the coordination reflects a functional division of labor typical in collaborative systems [11]-[13]. We utilized a high-level behavioral control strategy, HFSMs, to deploy the robots. The states were developed with a combination of substates for complex tasks/actions. To enable coordination between robots, the behavior control of a robot triggers the behavior control of another robot. This establishes a sequential control of the whole mission performed by multi-robots. However, concurrent tasks in which two robots perform actions together to achieve a common goal concurrently can be viable with some additions to this approach. The pipeline for triggering the execution of a behavior on another robot involves one robot completing its current behavior and then publishing an ROS 2 topic message, as shown in algorithm 1. This message serves as a trigger to initiate the behavior of the target robot that has a static IP denoted by IP(i + 1). This approach facilitates the coordinated and sequential execution of mission modules among the robots. Algorithm 1 Coordinated Behavior Triggering in MultiRobot System Require: Set of robots R = {R1, R2, ..., Rn} with assigned behaviors Bi Initialize ROS 2 system and start behavior for all robots for all robot Ri ∈R do Waiting FSM until trigger message is received on topic trigger R i Execute assigned behavior Bi Upon completion while ping IPi+1=false do Wait end while if ping IPi+1 = true then Publish trigger message to topic trigger Ri+1 end if end for V. IMPLEMENTATION AND SYSTEM INTEGRATION Both the Deployer and the Stinger robots are equipped with onboard computers running Ubuntu 22.04 and ROS2 Humble, which serve as the middleware for system-level integration and control. To enhance system scalability, improve fault isolation, and manage communication triggering more effectively, the two robots were deployed using different ROS 2 domain IDs. Domain IDs provide a deeper level of communication isolation at the middleware level. This Fig. 2: Mission software modules and their interactions during the drilling phase. ensures that message traffic from one robot does not interfere with another, offering an added layer of modularity, security, and fault containment - particularly important in distributed or physically separated multi-robot systems. Additionally, this separation was necessary as the FlexBE Behavior Engine cannot be instantiated multiple times within the same domain ID. In order to facilitate the triggering communication messages, they are transferred across different domain IDs through ros2/domain bridge 3. HFSMs were created using FlexBE, a tool for developing and executing HFSMs. The graphical user interface of FlexBE helps to easily develop, execute, and monitor the whole behavior. Both robots were deployed with their respective behavior, i.e. a HFSM behavior. However, the completion of Deployer's behavior triggers the execution of Stinger's behavior as explained in Section IV. A. Deployer robot The Deployer robot integrates two main components, the Terrain Hopper mobile base and the UR10 manipulator, into a unified robotic platform controlled by a single onboard computer. For the Terrain Hopper, a custom ROS2 node interfaces with the vehicle's hardware to manage wheel motion and steering. Low-level control is handled via the MobotWare system developed at the Technical [1], specifically using its rhd module for sending steering and velocity commands. This ROS2 node subscribes to control topics for motion commands and publishes sensor feedback such as steering angle and wheel encoder data for use by other nodes in the system. The UR10 manipulator, which lacks native ROS2 support, is integrated through a Docker container running ROS1. Within the container, the ros-industrial-attic/ur modern driver ROS package4 provides control of the UR10. Communication between ROS1 and ROS2 environments is achieved using the TommyChangUMD/ros-humble-ros1-bridgebuilder ROS1 package5, allowing seamless interaction 3See https://github.com/ros2/domain bridge 4See https://github.com/ros-industrial-attic/ur modern driver 5See https://github.com/TommyChangUMD/ros-humble-ros1-bridgebuilder Fig. 3: Deployer (UR10) Behavior to deploy Stinger robot. Fig. 4: Stinger Behavior to deploy its anchoring legs between the manipulator and the rest of the ROS2-based architecture. With the ability to pick and place, the UR10 can deploy the Stinger robot. The high-level behavior was developed and executed to achieve the same, as shown in Fig. 3. B. Stinger robot A custom ROS2 node communicates with the Arduino to send commands to the respective rotational motors or linear actuators. The node also establishes a custom action server move motor that controls the actuators of Stinger robot. The move motor action server controls one actuator at a time for simplicity of deployment. In further developments, the complex algorithms to deploy the three stinger will be incorporated with custom action servers. Fig. 5: Case A shows two behaviors coordinated with persistent connectivity Fig. 6: Case B shows two behaviors coordinated even when the network turned off for few secs after first behavior trigger, the red dotted box represents the network off status The high-level behavior to control the deployment sequence of the stinger robot is shown in fig.4. The Centering state calls the move motor action server, and the state itself controls the side actuators one by one, e.g. left stinger (left leg) moves to its specified position, then right stinger (right leg) moves to its specified position. After reaching desired pose, Centering state pings the IP of Deployer to send the other trigger message. VI. EVALUATION The proposed software module and hardware was launched to execute the mission based on the behaviors presented in the previous section based on two cases, i.e. with persistent wireless connection and without persistent wireless connection. The objective was to record the event timeline of the respective robots for both cases as shown in fig. 5-6 and to also measure latency. The maximum latency was recorded as 500 ms. 1) Persistent Connectivity (Case A): : The event timelines of both robots show that after picking up the Stinger robot and placing it in deployment pose, the UR10 behavior state Move To DeployPose sends the trigger message to the Stinger robot. Upon receiving the trigger message, the Stinger robot starts executing its own behavior, positioning two legs one by one. 2) Without Persistent Connectivity (Case B): : The event timeline of the Deployer robot shows that the wifi turned off after sending the trigger message, see fig.6. Even without connectivity, the Stinger robot continues executing its tasks in the behavior. After action completion, Stinger waits for the network connectivity to send its completion status. The moment network switches on, it send the next behavior trigger. This ensures sequential autonomy between two robots. VII. CONCLUSIONS This paper presented the feasibility of the execution of multiple behaviors with integration of the hardware and software perspective of a multirobot mining system to achieve a shared objective. This system ensures the collaborative execution of the mission modules corresponding to each individual robot involved. We highlighted the necessity of proposing and implementing a multirobot system, a fleet of robots, equipped to collaborate and execute mining operations in confined and infrastructure-less environments. In the coming months, mission modules will be extended to include more complex tasks and algorithms for mining operations. VIII. ACKNOWLEDGEMENT This work was prepared in the scope of PERSEPHONE project which has received funding from the European Union's Horizon Europe Research and Innovation Programme under the Grant Agreement No.101138451. REFERENCES [1] Beck, Anders B., Nils Axel Andersen, Jens Christian Andersen, and Ole Ravn. "MobotWare-A Plug-in Based Framework for Mobile Robots." IFAC Proceedings Volumes 43, no. 16 (2010): 127-132. [2] Nikolakopoulos, George, Anton Koval, Matteo Fumagalli, Martyna Konieczna-Fuławka, Laura Santas Moreu, Victor Vigara-Puche, Kashish Verma, Bob de Waard, and Ren ́e Deutsch. "Autonomous Drilling and the Idea of Next-Generation Deep Mineral Exploration." Sensors 25, no. 13 (2025): 3953. [3] European Commission. Directorate General for Internal Market, Industry, Entrepreneurship and SMEs. In Critical Raw Materials for Strategic Technologies and Sectors in the EU: A Foresight Study; European Commission: Luxembourg, French, 2020. [4] Ashby, A.; Van Etten, E. Exploring Abandoned Mines through a Public Lens. In Proceedings of the 14th International Conference on Mine Closure, Ulaanbaatar, Mongolia, 17-19 August 2021; pp. 207-216. [5] Berner, M. and Sifferlinger, N.A., 2023. Status of the H2020ROBOMINERS Prototype. BHM Berg-und H ̈uttenm ̈annische Monatshefte, 168(2), pp.45-55. [6] Losch R, Grehl S, Donner M, Buhl C, Jung B. Design of an autonomous robot for mapping, navigation, and manipulation in underground mines. 2018 IEEE. InRSJ International Conference on Intelligent Robots and Systems (IROS). DOI 2018 (Vol. 10). [7] Mansouri SS, Casta ̃no M, Kanellakis C, Nikolakopoulos G. Autonomous mav navigation in underground mines using darkness contours detection. InInternational conference on computer vision systems 2019 Sep 23 (pp. 164-174). Cham: Springer International Publishing. [8] Lindqvist B, Karlsson S, Koval A, Tevetzidis I, Haluˇska J, Kanellakis C, Agha-mohammadi AA, Nikolakopoulos G. Multimodality robotic systems: Integrated combined legged-aerial mobility for subterranean search-and-rescue. Robotics and Autonomous Systems. 2022 Aug 1;154:104134. [9] Losch R, Grehl S, Donner M, Buhl C, Jung B. Design of an autonomous robot for mapping, navigation, and manipulation in underground mines. 2018 IEEE. InRSJ International Conference on Intelligent Robots and Systems (IROS). DOI 2018 (Vol. 10). [10] Koenig N, Howard A. Design and use paradigms for gazebo, an open-source multi-robot simulator. In2004 IEEE/RSJ international conference on intelligent robots and systems (IROS)(IEEE Cat. No. 04CH37566) 2004 Sep 28 (Vol. 3, pp. 2149-2154). Ieee. [11] P. Corke, R. Paul, W. Weng, G. Bekey, and D. Rus, "Multi-Robot Systems," in *Springer Handbook of Robotics*, B. Siciliano and O. Khatib, Eds. Springer, 2016, pp. 1081-1106. [12] B. P. Gerkey and M. J. Mataric, "A formal analysis and taxonomy of task allocation in multi-robot systems," *The International Journal of Robotics Research*, vol. 23, no. 9, pp. 939-954, Sep. 2004. [13] N. Kalra, D. Ferguson, and A. Stentz, "Hoplites: A Market-Based Framework for Planned Tight Coordination in Multirobot Teams," in *Proc. IEEE Int. Conf. on Robotics and Automation (ICRA)*, 2005, pp. 1170-1177. [14] P. Schillinger, S. Kohlbrecher, and O. von Stryk, "Human-robot collaborative high-level control with application to rescue robotics," in 2016 IEEE International Conference on Robotics and Automation (ICRA), May 2016, pp. 2796-2802. [15] Colledanchise, Michele and Natale, Lorenzo. (2021). On the Implementation of Behavior Trees in Robotics. IEEE Robotics and Automation Letters. 6. 5929-5936. 10.1109/LRA.2021.3087442.
|
2509.16265
|
Limitation of Stoquastic Quantum Annealing: A Structural Perspective
Vicky Choi
Gladiolus Veritatis Consulting Co.*
September 23, 2025
Abstract
We analyze the behavior of stoquastic transverse-field quantum annealing (TFQA) on a structured class
of Maximum Independent Set (MIS) instances, using the same decomposition framework developed in our
companion work on the DIC-DAC-DOA algorithm (Beyond Stoquasticity) [1]. For these instances, we provide
a structural explanation for the anti-crossing arising from the competition between the energies associated with
a set of degenerate local minima (LM) and the global minimum (GM), and analytically derive the associated
exponentially small gap. Our analysis proceeds in two steps. First, we reduce the dynamics to an effective
two-block Hamiltonian Hcore, constructed from the bare (decoupled) subsystems associated with the LM and
GM. This reduction is justified analytically using the structural decomposition. Second, we reformulate the
eigenvalue problem as a generalized eigenvalue problem in a non-orthogonal basis constructed from the bare
eigenstates of the subsystems. This transformation enables a clean perturbative treatment of the anti-crossing
structure, independent of the transverse field—unlike standard perturbation theory approach, which requires
treating the transverse field as a small parameter. This paper serves as a supplementary companion to our main
work on the DIC-DAC-DOA algorithm [1], where we demonstrate how appropriately designed non-stoquastic
drivers can bypass this tunneling-induced bottleneck.
*https://www.vc-gladius.com
1
arXiv:2509.16265v1 [quant-ph] 18 Sep 2025
Contents
1
Introduction
2
2
Background and Preliminaries
4
2.1
Same-Sign vs Opposite-Sign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4
2.2
Two Fundamental Bipartite Substructure Graphs . . . . . . . . . . . . . . . . . . . . . . . . . .
5
2.3
Anti-Crossing and the Two-Level Hamiltonian B(w, x) . . . . . . . . . . . . . . . . . . . . . .
6
2.3.1
Level Repulsion vs. Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6
2.3.2
Anti-crossing in a Multi-level System . . . . . . . . . . . . . . . . . . . . . . . . . . .
8
2.3.3
The Basic Matrix B(w, x): Eigenvalues and Eigenstates . . . . . . . . . . . . . . . . .
9
3
Reduction to Same-Sign Block in the Disjoint-Structure Case
9
3.1
Reduction to the Analysis of the Low-Energy Hamiltonian . . . . . . . . . . . . . . . . . . . .
9
3.2
Angular-Momentum Based Decomposition of the Hlow . . . . . . . . . . . . . . . . . . . . . .
11
3.2.1
Single-clique Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11
3.2.2
Block Decomposition of Hlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14
4
Inner Decomposition of the Same-sign Block
15
4.0.1
Two Block Decompositions of HC: L-Inner vs. R-Inner
. . . . . . . . . . . . . . . . .
15
4.0.2
Localization and Effective Hamiltonian Hcore . . . . . . . . . . . . . . . . . . . . . . .
17
5
Detailed Analysis of Hcore
18
5.1
Bare Eigenstates of Hcore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20
5.2
Generalized Eigenvalue Reformulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20
5.3
Perturbative Structure of the Generalized Eigenvalue System . . . . . . . . . . . . . . . . . . .
22
5.3.1
Perturbative Approximation Scheme for the True Energies . . . . . . . . . . . . . . . .
23
5.3.2
Exact Bare Eigenvalues and Eigenstates and Some Combinatorial Facts . . . . . . . . .
26
5.3.3
Anti-Crossing Structure: Energy Approximation and Gap Bound . . . . . . . . . . . . .
28
6
Conclusion
33
1
Introduction
This work analyzes the performance limitation of stoquastic quantum annealing (TFQA) [4, 5, 6, 7] on a struc-
tured class of Maximum Independent Set (MIS) instances, using the same structural framework developed in the
companion work on the DIC-DAC-DOA algorithm [1]. In the absence of the XX-driver (Jxx = 0), the TFQA
system evolves under a fully stoquastic Hamiltonian. The corresponding Hamiltonian takes the form 1 :
H(t) = x(t)HX + Hproblem,
1For completeness, we recall that the full algorithm may begin with an optional Stage 0, whose sole purpose is to set the parameters
of the problem Hamiltonian before the main evolution begins. The Stage 0 Hamiltonian (without the XX-driver term) is H0(t) =
x(t)HX + p(t)Hproblem, t ∈[0, 1], with parameter schedules x(t) = (1 −t)(Γ0 −Γ1) + Γ1, p(t) = t. During this phase, the problem
parameters w and Jzz are ramped to their final values as p(t) increases from 0 to 1. Stage 0 plays no role in the present analysis and may
be omitted in practice.
2
where HX = −P
i σx
i , and Hproblem = P
i∈V(G)(−wi)˜σz
i +Jzz
P
(i,j)∈E(G) ˜σz
i ˜σz
j is the MIS-Ising Hamiltonian,
with the shifted-σz operator ˜σz
i := I+σz
i
2
. Here, we take wi = w ≡1 for the unweighted MIS problem, and
note that Jzz is required to be at least w. This is precisely the system Hamiltonian used in the DIC-DAC-DOA
algorithm [1], when the non-stoquastic XX-driver is turned off, i.e., Jxx = 0. The annealing schedule simplifies
to a single-stage evolution, i.e. there is no need to distinguish between Stage 1 and Stage 2, with a linearly
decreasing transverse field, x(t) = (1 −t)Γ1. The system Hamiltonian is stoquastic in the computational basis.
A widely observed but not fully justified limitation of stoquastic TFQA is the occurrence of an anti-crossing
arising from the competition between the energies associated with a set of degenerate local minima (LM) and the
global minimum (GM). While such anti-crossings have long been explained using perturbative arguments in the
small transverse-field regime (e.g., [11, 12]), we provide a structural approach that enables a clean perturbative
treatment of the anti-crossing structure, independent of the transverse field, yielding an analytic derivation of the
exponentially small gap.
In particular, we show that the relevant dynamics reduce to an effective two-block Hamiltonian Hcore, formed
from bare subsystems associated with the LM and GM. This reduction follows the same angular-momentum-based
decomposition of the Hamiltonian, derived from the angular momentum structure of the cliques associated with
the LM2 as developed in the main paper, with two additional refinements: (1) we justify that the disjoint case
suffices for analysis, and (2) we apply the L-inner decomposition to derive the effective Hamiltonian Hcore. For
completeness, we include the necessary angular momentum decomposition in this paper.
Intuitively, Hcore can be understood in terms of two embedded bare subsystems (an L-subsystem and an
R-subsystem) coupled only through the empty-set state. Because of the additional transverse-field contribution
in the L-subsystem, the system initially localizes in the L-block. The transition to the R-block (which supports
the global minimum GM) occurs through a tunneling-induced anti-crossing, resulting in an exponentially small
gap. While this intuition is conceptually clear, the analytic derivation is nontrivial and requires identifying the
appropriate representation to capture the mechanism structurally. Our analysis of Hcore is structurally guided
and departs from conventional perturbation theory. It does not rely on a small transverse field x; instead, it
reformulates the full Hamiltonian Hcore in the basis of embedded bare eigenstates, where the overlap structure
induces a generalized eigenvalue problem. We then express this system in a perturbative form whose perturbation
term is independent of x, allowing the analysis to proceed without treating x as a small parameter. Within this
framework, we show that the bare energy levels provide an accurate approximation to the true spectrum of Hcore,
with energy crossings between bare energy levels replaced by tunneling-induced anti-crossings.
In particular, this structure enables a perturbative treatment both away from and near the anti-crossing: we
approximate the true ground energy using first-order corrections to the bare energies, and construct a reduced
2 × 2 effective Hamiltonian on the shared two-level subspace to derive a perturbative bound on the gap. The
resulting bound is exponentially small in system size, confirming that the structural constraints of stoquastic
TFQA lead to tunneling-induced bottlenecks that limit algorithmic performance on this family of instances. Our
analysis also reveals the structural origin of tunneling-induced anti-crossings. In particular, once the evolution
enters a regime where the system is localized around critical local minima, a tunneling-induced anti-crossing is
inevitable.
In the Jxx = 0 case, the value of Jzz can be made arbitrarily large without altering the qualitative dynamics,
although the anti-crossing location shifts earlier as Jzz increases. This shift occurs because larger Jzz accelerates
localization within the same-sign block, allowing the effective dynamics to be more accurately captured by Hcore
at earlier times. However, if one sets Jzz to be very large in the presence of an XX-driver term with Jxx > 0, such
2Although in the TFQA case (Jxx = 0) we do not explicitly identify the cliques for constructing the XX-driver graph, the cliques
associated with the critical local minima exist and are used solely for analytical purposes; they do not need to be constructively identified.
3
that Jxx ≪Jzz,3 then by the cut-off Theorem 8.3 in [1], the effective Hamiltonian at the beginning of Stage 1 is
already approximately Hcore, leaving no opportunity for successful structural steering. In this case, the system
localizes in the L-region and transitions to the R-region via tunneling, merely shifting the anti-crossing earlier,
as illustrated by an example in Figure 16 of [1]. See also [9], which addresses the case Jzz →∞.
Finally we remark that the same localization and tunneling mechanism may arise in less structured graphs as
well, where degenerate local minima confine the dynamics in an analogous way. Through our detailed analysis,
we hope to shed enough light on this anti-crossing mechanism to provide a structural basis for a more careful
assessment of claimed stoquastic speedups.
This paper is organized as follows. In Section 2, we review the prerequisites for our analysis, including the
definitions of same-sign and opposite-sign blocks, the two bipartite substructure graphs (Gdis and Gshare), and
the definition of an anti-crossing, along with the basic matrix B(w, x) that will be used throughout our analysis.
In Section 3, we show that the gap analysis can be restricted to the same-sign block of the disjoint-structure
case. In Section 4, we present the inner decompositions of the same-sign block and elaborate on the L-inner
decomposition, which reveals localization to the region associated with local minima (L-localization), and derive
the effective Hamiltonian Hcore. In Section 5, we give a detailed analysis of Hcore. We conclude with discussion
in Section 6.
2
Background and Preliminaries
In this section, we review the prerequisites for our analysis: (1) the definitions of same-sign and opposite-sign
states, sectors, and blocks (Section 2.1); (2) the two fundamental bipartite substructure graphs, distinguishing
between the disjoint-structure case Gdis and the shared-structure case Gshare (Section 2.2); and (3) the definition
of anti-crossing together with the solution for the basic matrix B(w, x) (Section 2.3).
The material in this section is adopted from the main paper [1], and is included here for completeness and to
keep this paper self-contained.
2.1
Same-Sign vs Opposite-Sign
The sign structure of quantum states plays a central role in our analysis. Throughout this paper, all Hamiltonians
are real and Hermitian, so we restrict attention to quantum states with real-valued amplitudes: each component
has phase 0 (positive) or π (negative).
Definition 2.1. Let |ψ⟩= P
x∈B ψ(x)|x⟩be a quantum state with real amplitudes in a basis B.
• |ψ⟩is called a same-sign state if ψ(x) ≥0 for all x ∈B. That is, all components are in phase (with
relative phase 0).
• |ψ⟩is called an opposite-sign state if there exist x, x′ ∈B such that ψ(x) > 0 and ψ(x′) < 0. In this
case, some components are out of phase, differing by a relative phase of π.
Unless stated otherwise, we take B to be the computational basis when referring to same-sign or opposite-sign
states. More generally, the computational basis is assumed whenever the basis is unspecified.
Accordingly, we define the notions of same-sign and opposite-sign bases, sectors, and blocks:
3Violating the Steering lower bound on Jxx in [1].
4
Definition 2.2. An orthonormal basis consisting entirely of same-sign states is called a same-sign basis. A basis
that includes at least one opposite-sign state is called an opposite-sign basis. A subspace spanned by a same-sign
basis is called a same-sign sector; otherwise, it is called an opposite-sign sector. A submatrix (or block) of a
Hamiltonian is called a same-sign block if it is expressed in a same-sign basis. Otherwise, it is referred to as an
opposite-sign block.
Example.
The state |+⟩=
1
√
2(|0⟩+ |1⟩) is a same-sign state, while |−⟩=
1
√
2(|0⟩−|1⟩) is an opposite-sign
state. The computational basis {|0⟩, |1⟩} is a same-sign basis of C2, and the Hadamard basis {|+⟩, |−⟩} is an
opposite-sign basis.
These definitions connect directly to stoquasticity. By the Perron–Frobenius theorem, the ground state of
a stoquastic Hamiltonian (i.e., one with non-positive off-diagonal entries in a given basis) is a same-sign state
in that basis [6, 8]. In particular, when expressed in the computational basis, the ground state of a stoquastic
Hamiltonian is necessarily same-sign. By contrast, a non-stoquastic Hamiltonian may have a ground state that is
either same-sign (Eventually Stoquastic) or opposite-sign (Proper Non-stoquastic) [2]. In this paper we focus on
the stoquastic case, corresponding to Jxx = 0.
2.2
Two Fundamental Bipartite Substructure Graphs
Recall that an independent set is maximal if no larger independent set contains it. Each maximal independent
set corresponds to a local minimum of the energy function. A collection of maximal independent sets (MLIS)
all having the same size m corresponds to a set of degenerate local minima with equal energy −m. When
the meaning is clear (i.e., they all have the same size), we refer to such a collection simply as a deg-MLIS,
and its cardinality as the degeneracy. In this work, we use the terms degenerate local minima and deg-MLIS
interchangeably.
dMIC: Structured Form of a deg-MLIS
In its simplest structured form, a deg-MLIS can be represented as a dMIC, namely a collection of mutually
independent cliques whose vertices together generate exactly all the MLIS in that deg-MLIS.
Definition 2.3. A dMIC of size k consists of k independent cliques (i.e., no edges exist between them), denoted as
Clique(wi, ni), where wi is the vertex weight and ni is the clique size, for 1 ≤i ≤k. Each maximal independent
set in the corresponding deg-MLIS is formed by selecting exactly one vertex from each clique. In this case, the
degeneracy of the dMIC is given by Qk
i=1 ni.
When the cliques are dependent—that is, edges may exist between them—we refer to the structure as an
dMDC, a deg-MLIS formed by a set of dependent cliques. This represents a relaxed structure to which our analysis
may be generalized.
As explained in the main paper, we reduce the analysis to a sequence of simpler bipartite substructures, each
formed by a critical dMIC and the global maximum (GM).
Recall that each dMIC corresponds to a set of degenerate local minima (LM) in the MIS–Ising energy land-
scape. The terms LM and dMIC are thus used interchangeably. We use GM to refer both to the (global) max-
imum independent set and to its corresponding global minimum in the energy landscape. In what follows,
we use LM and GM to denote the dMIC and GM in each such bipartite substructure, respectively.
We consider two fundamental bipartite substructures:
5
• Gdis: The local minima (LM) and the global minimum (GM) are vertex-disjoint.
• Gshare: The GM shares exactly one vertex with each clique in the LM.
We begin with the disjoint-structure graph Gdis = (V, E), in which the vertex set V is partitioned into left
and right (disjoint) vertex sets, with the following structural properties:
• The left component is defined by a set L = {C1, . . . , Cml} of ml disjoint cliques, each denoted Ci =
Clique(wi, ni). We let VL = Sml
i=1 Ci denote the full vertex set.
• The right component R consists of mr independent vertices, each with weight wr.
• Every vertex in VL is adjacent to every vertex in R.
In this paper we mainly focus on the unweighted MIS case, and assume uniform weights wi = wr = w.
Under the MIS–Ising mapping with these uniform weights, VL corresponds to the degenerate local minima (LM)
with degeneracy Qml
i=1 ni, while R defines the global minimum (GM) with mg = mr. Each local minimum (in
LM) corresponds to a maximal independent set of size ml, and thus has energy −ml. The global minimum (GM)
has energy −mg.
We now define the shared-structure graph Gshare, which differs from Gdis in that each vertex in R is adjacent
to all but one vertex in each clique of L. This modification allows the GM to include shared vertices from the
cliques in L, thereby introducing overlap with the LM. Structurally, L and R are defined exactly as in Gdis, but
with the adjacency rule modified as above. Specifically, the global maximum GM consists of one shared vertex
from each clique Ci ∈L together with all mr independent vertices in R, yielding a total size mg = ml + mr.
Figure 1 illustrates both cases. For convenience, we write m := ml, dropping the subscript l when no confusion
arises. We assume Pm
i=1
√ni > mg, so that an anti-crossing is induced by the competition between LM and GM.
2.3
Anti-Crossing and the Two-Level Hamiltonian B(w, x)
The term anti-crossing is sometimes used loosely, so we begin with a precise notion in the two-level case, then
extend it to multi-level systems. We also introduce a canonical two-level Hamiltonian whose eigensystem will
be used throughout our analysis.
2.3.1
Level Repulsion vs. Anti-Crossing
We begin by distinguishing the concept of an anti-crossing (also called avoided-crossing) from level repulsion.
Consider a generic two-level Hamiltonian of the form
H(x) :=
e1(x)
v(x)
v(x)
e2(x)
,
where e1(x), e2(x), and v(x) are real-valued functions of a parameter x.
The eigenvalues of this Hamiltonian are λ±(x) =
e1(x)+e2(x)
2
± 1
2
p
(e1(x) −e2(x))2 + 4v(x)2, and the
energy gap between them is ∆(x) := λ+(x) −λ−(x) =
p
(e1(x) −e2(x))2 + 4v(x)2. The off-diagonal term
v(x) induces level repulsion: if v(x) ̸= 0, then the eigenvalues never cross, and the gap ∆(x) remains strictly
positive. Thus, assuming the off-diagonal coupling v(x) is nonzero, level repulsion is always present.
Definition 2.4. We say that an anti-crossing occurs when the two unperturbed energy levels e1(x) and e2(x)
cross, i.e., e1(x∗) = e2(x∗) for some x∗, and the off-diagonal coupling v(x∗) ̸= 0. In this case the eigenvalue
curves form an anti-crossing with gap ∆min = 2|v(x∗)|.
6
1
2
4
3
a
5
6
8
7
c
b
(a) The LM and GM are vertex-disjoint.
1
2
4
3
a
5
6
8
7
c
b
(b) The GM shares exactly one vertex with
each clique in the LM.
Figure 1: Example graphs illustrating the Gdis and Gshare structures. Recall that each LM here is structurally a
dMIC. (a) Disjoint-structure graph Gdis: The set L consists of ml = 2 disjoint cliques, each of size n1 = n2 = 4,
with their vertices (pink) forming the local minima (LM). The set R (blue) consists of mr = 3 independent
vertices, forming the global minimum (GM).
(b) Shared-structure graph Gshare: The set L again consists of two cliques of size n1 = n2 = 4, with pink and
purple vertices. The purple vertices (one per clique) are shared between the LM and the GM. The set R (blue)
contains mr = 3 independent vertices. The global minimum consists of all vertices in R, together with the
shared purple vertices in L, giving mg = 5. In both cases, edges between the pink vertices in L and all vertices
in R are complete, though not all are shown for visual clarity.
The size of the anti-crossing gap depends on |v(x∗)|: stronger coupling leads to a larger gap, while weaker
coupling results in a narrower one.
By contrast, if the two diagonal entries e1(x) and e2(x) remain well separated for all x, then the system
exhibits level repulsion but not an anti-crossing. Figure 3 illustrates an example of level repulsion without an
anti-crossing.
The eigenvectors of the two-level Hamiltonian are given by
|λ−(x)⟩= cos θ(x) |0⟩+ sin θ(x) |1⟩,
|λ+(x)⟩= −sin θ(x) |0⟩+ cos θ(x) |1⟩,
where the mixing angle θ(x) satisfies tan(2θ(x)) =
2v(x)
e1(x)−e2(x). Thus, near the anti-crossing point x = x∗, the
eigenstates interpolate between the unperturbed basis states.
Remark 2.5. The trigonometric expression for eigenvectors in terms of the mixing angle θ(x), is equivalent to
the rational-form representation
|λ−(x)⟩=
1
√
1+γ(x)2 (γ(x) |0⟩+ |1⟩) ,
|λ+(x)⟩=
1
√
1+γ(x)2 (|0⟩−γ(x) |1⟩) ,
7
where the two parametrizations are related by γ(x) =
1
tan θ(x), and
tan(2θ(x)) =
2v(x)
e1(x)−e2(x). This rational-
form expression is particularly useful for our analysis, as it aligns directly with the basic matrix form introduced
below.
Our earlier work [2, 14], including the development of the original DIC-DAC-DOA algorithm, was motivated
by investigating the structural characteristics of eigenstates around the anti-crossing.
2.3.2
Anti-crossing in a Multi-level System
In a multi-level system, the notion of an anti-crossing extends naturally by restricting the Hamiltonian to the two-
dimensional subspace spanned by the pair of eigenstates whose unperturbed energies intersect. This reduction
yields a 2 × 2 effective Hamiltonian that captures the essential structure of the anti-crossing, including both the
energy gap and the interpolating behavior of the eigenstates. Thus, the same framework as in the two-level case
applies.
With this perspective, we refine the definition of an (L, R)-anti-crossing given in [2]. Recall that EA
0 (t)
denotes the ground state energy of the Hamiltonian HA(t) projected to the subspace spanned by the subsets of
A. In particular, ELM
0 (t) and EGM
0 (t) denote the bare energies associated with LM (where A = S
M∈LM M) and GM,
respectively.
Definition 2.6. We say an anti-crossing is an (L, R)-anti-crossing at t∗if there exist bare energies EL
0 (t) and
ER
0 (t) such that:
1. EL
0 (t) and ER
0 (t) approximate the unperturbed energy levels of the effective 2 × 2 Hamiltonian describing
the anti-crossing for t ∈[t∗−δ, t∗+ δ] for some small δ > 0; and
2. EL
0 (t) and ER
0 (t) cross at t∗, i.e. EL
0 (t∗) = ER
0 (t∗).
See Figure 2 for an illustration.
t*
ER
0 (t)
EL
0 (t)
Figure 2: Schematic of an (L, R)-anti-crossing. Dashed lines: bare energies EL
0 (t) and ER
0 (t) crossing at t∗.
Solid gray curves: the two lowest eigenvalues of the Hamiltonian, showing the avoided crossing that originates
from this bare crossing.
Remark 2.7. In the companion work [1], the absence of an anti-crossing can be inferred directly from the non-
crossing of the corresponding bare energy levels, without explicitly constructing the effective 2 × 2 Hamiltonian.
By contrast, in the present work we evaluate the anti-crossing gap by constructing the effective Hamiltonian
explicitly.
8
Remark 2.8. In [21], it was observed that an exponentially small gap occurs “if and only if a ground state
consists of two ‘lobes’ with exponentially small amplitude in the region between them.” This two-lobe structure
corresponds naturally to the two arms of an (L, R)-anti-crossing. [As shown in the right panel of Figure 9, the
ground state undergoes a sharp exchange at the anti-crossing point. If one plots the ground state wavefunction
at the anti-crossing, it exhibits two lobes: one localized on the L region and the other on R region (with the
suppressed region between them quantified by the overlap g0 = ⟨L0|R0⟩).] Finally, we remark that while the
Cheeger inequalities (used, e.g., in [6] to give a large lower bound on the spectral gap) appear to depend only on
the ground state wavefunction, a closer examination reveals that it actually bound the first excited state implicitly.
In this sense, the anti-crossing gap size derived from a 2 × 2 effective Hamiltonian is more descriptive and also
gives a tighter bound.
2.3.3
The Basic Matrix B(w, x): Eigenvalues and Eigenstates
We define the following effective two-level Hamiltonian, which will serve as a basic building block for our
analysis throughout the paper:
B(w, x) :=
−w
−1
2x
−1
2x
0
,
(2.1)
where w = w(t) and x = x(t) are real-valued parameters, typically derived from problem Hamiltonians and
driver strengths. This is a special case of a spin-1
2 system, with analytic eigenstructure. The eigenvalues of
B(w, x) are
βk = −1
2
w + (−1)kp
w2 + x2
,
k = 0, 1,
(2.2)
with normalized eigenvectors
|β0⟩=
1
√
1+γ2
γ |0⟩+ |1⟩
,
|β1⟩=
1
√
1+γ2
|0⟩−γ |1⟩
,
(2.3)
where the mixing coefficient is γ =
x
w+
√
w2+x2 .
Figure 3 visualizes the eigenspectrum and ground state behavior under variation of x and w.
3
Reduction to Same-Sign Block in the Disjoint-Structure Case
In this section, we establish that it is sufficient to restrict our analysis to the same-sign block and to focus on
the disjoint case. We first express the full Hamiltonian in block form with respect to the low- and high-energy
subspaces of VL, and justify that for our purposes it suffices to analyze the projected Hamiltonian Hlow acting on
L−in Section 3.1. Then we describe the angular-momentum based decomposition of Hlow in in Section 3.2.
3.1
Reduction to the Analysis of the Low-Energy Hamiltonian
Let VL be the Hilbert space of all vertices in the union of cliques in L. This space decomposes into:
• a low-energy subspace, spanned by all independent-set states within L;
• a high-energy subspace, spanned by all dependent-set states within L.
9
Figure 3: Eigenvalues of the two-level Hamiltonian B(w, x), where βk = −1
2
w + (−1)k√
w2 + x2
, k =
0, 1. w = 1. The ground state energy (β0) is shown in black, and the first excited state energy (β1) is shown
in blue. The energy gap widens as x increases—there is no anti-crossing in this case. The ground state is
|β0⟩=
1
√
1+γ2 (γ |0⟩+ |1⟩), with γ =
x
w+
√
w2+x2 . |0⟩= |↑⟩and |1⟩= |↓⟩.
We define
L−:= (low-energy subspace of VL) ⊗VR,
L+ := (high-energy subspace of VL) ⊗VR.
Here, L+ consists of states containing at least one intra-clique edge and thus incurring the Jclique
zz
penalty.
Let Π−and Π+ be the orthogonal projectors onto L−and L+, respectively. With respect to this decomposi-
tion, the system Hamiltonian can be written in block form:
H =
Hlow
V
V†
Hhigh
,
where Hlow and Hhigh are the projections into L−and L+, respectively.
In the main paper, we showed that by the end of Stage 0, if Jclique
zz
is chosen sufficiently large so that all
states in L+ lie well above those in L−in energy, the Stage 1 evolution is governed exactly by the restricted
Hamiltonian Heff
1 = Hlow. The subsequent analysis can therefore focus entirely on Hlow.
In the present Jxx = 0 setting, we have not identified the XX-driver graph explicitly, and we set Jclique
zz
= Jzz,
so we cannot assume an energetic cut-off between L+ and L−. Nevertheless, removing higher-energy subspaces
from a Hamiltonian cannot decrease the spectral gap, since doing so can only eliminate potential low-lying
excitations. Consequently, if Hlow already has a small gap, the full Hamiltonian H can have a gap no larger.
Because the ground state lies in L−prior to the anti-crossing, we have ∆(H) ≤∆(Hlow). This allows us to use
Hlow to obtain an upper bound on the spectral gap of the full Hamiltonian. Thus, in both cases—whether with or
without the XX-driver—the subsequent analysis focuses on Heff
1 = Hlow.
Having reduced our focus to the low-energy subspace L−, we next construct an angular momentum basis that
reveals the block structure of Hlow and sets up the tunneling analysis.
10
3.2
Angular-Momentum Based Decomposition of the Hlow
Our construction starts from the local angular momentum structure of a single clique, restricted to the low-energy
subspace (see also [9]). We then extend this to a global angular-momentum basis for L−, in which Hlow becomes
block-diagonal, separating same-sign and opposite-sign blocks.
The construction proceeds hierarchically:
• Single-clique decomposition. We analyze the low-energy subspace of a single clique in detail, introducing
its total angular momentum basis, identifying the same-sign and opposite-sign states, and deriving the block
structure of the restricted Hamiltonian (Section 3.2.1).
• Global block decomposition. By tensoring the single-clique bases over all cliques in L and combining
with VR, we obtain a global basis in which Hlow takes a block-diagonal form (Section 3.2.2).
3.2.1
Single-clique Decomposition
In this section, we focus on analyzing the Hilbert space associated with a single clique. We begin with a single
clique, since each clique in L exhibits the same local angular momentum structure; this will serve as the building
block for the global block decomposition in Section 3.2.2.
To fix notation, let Gc = (V, E) be a clique, where V = {1, . . . , nc} is the set of vertices, and E = {(i, j) :
i < j, i, j ∈V } is the set of edges. We assume all vertices in the clique have the same weight wc, i.e., wi = wc
for all i ∈V .
The Hilbert space Vc for a clique of size nc consists of 2nc computational basis states, each corresponding to
a binary string of length nc. Among these, only nc + 1 basis states correspond to independent sets:
b0 = 00...0
b1 = 10...0
b2 = 01...0
...
bnc = 00...1
with Nind = {b0, b1, . . . , bnc} , where each bi is a binary string of length nc with a single 1 in position i (and
0s elsewhere), and b0 is the all-zero string.
The energy associated with each singleton state is −wc, while the empty set has energy 0. In contrast, the
energy of all other bit strings—which correspond to dependent sets—is at least (Jclique
zz
−2wc). Hence, the Hilbert
space admits a natural decomposition:
Vc = Lind ⊕Hdep,
(3.1)
where Lind is the low-energy subspace spanned by Nind, and Hdep is the high-energy subspace spanned by the
dependent-set states.
In the following, we identify how Lind decomposes into distinct angular momentum sectors, describe the
corresponding block structure of the restricted Hamiltonian, and provide exact expressions for its eigenvalues
and eigenvectors.
Low-Energy Subspace in Total Angular Momentum Basis Ba
11
Lemma 3.1 (Lemma 6.1 in [1]). The total angular momentum basis for Lind consists of the states:
(Ba)
|s, −(s −1)⟩,
|1, (s −1), −(s −1)⟩,
. . .
|nc −1, (s −1), −(s −1)⟩
|s, −s⟩
(3.2)
where s = nc
2 is the total spin.
Explicitly:
• |s, −s⟩= |b0⟩, representing the empty set.
• |s, −(s −1)⟩=
1
√nc
Pnc
i=1 |bi⟩, a uniform superposition of all singletons with positive amplitudes.
• |k, s −1, −(s −1)⟩, for k = 1, . . . , nc −1, consists of a superposition of singleton states with both
positive and negative amplitudes.
Thus, |s, −s⟩and |s, −(s −1)⟩are same-sign states, while |k, s −1, −(s −1)⟩are opposite-sign states.
Remark (Basis Reordering).
For convenience, we reorder the basis states in Ba as follows:
(B′
a)
|s, −(s −1)⟩, |s, −s⟩, |1, s −1, −(s −1)⟩, . . . , |nc −1, s −1, −(s −1)⟩.
That is, we swap the order of the two same-sign basis states. This ordering simplifies the representation of
operators in the next steps.
Basis Transformation.
The transformation between the computational basis {|bi⟩} and the angular momen-
tum basis (B′
a) can be derived either from the Clebsch–Gordan coefficients or directly from the relationships
established in the proof. Specifically:
• The state |s, −(s −1)⟩is a uniform superposition over all singleton states {|bi⟩}nc
i=1.
• The remaining states |k, s −1, −(s −1)⟩, for k = 1, . . . , nc −1, form an orthogonal complement to
|s, −(s −1)⟩within the subspace spanned by {|b1⟩, . . . , |bnc⟩}.
We denote the basis transformation matrix from the computational basis to the angular momentum basis by
Uclique.
Although the present analysis will later specialize to the Jxx = 0 case, we retain the Jxx term throughout
the single-clique derivation. This ensures that the resulting expressions apply to the general setting and remain
consistent with the corresponding formulas in the companion paper.
Spin Operators in the B′
a Basis
Consider the spin operators on Vc:
SZ = 1
2
nc
X
i=1
σz
i ,
S˜Z =
nc
X
i=1
˜σz
i ,
SX = 1
2
nc
X
i=1
σx
i ,
SXX = 1
4
X
ij∈E(Gdriver)
σx
i σx
j ,
where Gdriver = Gc.
12
We project these operators onto the low-energy subspace Lind using the projection operator Πind, and then
transform them into the B′
a basis via the basis transformation Uclique. For any operator X, we use a bar to denote
the operator:
X = Uclique†ΠindXΠindUclique.
Theorem 3.2 (Theorem 6.2 in [1]). The restricted operators in the B′
a basis are block-diagonal and given
explicitly by:
S˜Z = ˜σz ⊕1 ⊕· · · ⊕1,
SX =
√nc
2 σx ⊕0 ⊕· · · ⊕0,
SXX =
nc−1
4
˜σz ⊕
−1
4
⊕· · · ⊕
−1
4
.
(3.3)
where ˜σz and σx act on the effective spin-1
2 (two-dimensional same-sign) subspace, while the scalars act on
spin-0 (one-dimensional opposite-sign) subspaces.
Decomposition into Spin-1
2 and Spin-0 Components
The transformed operators given by Theorem 3.2 offer a clear physical interpretation: the low-energy subspace
Lind decomposes into a direct sum consisting of a single effective spin-1
2 subsystem, together with nc −1
(opposite-sign) spin-0 subsystems:
Lind =
1
2
nc ⊕0 ⊕· · · ⊕0
|
{z
}
nc−1
.
(3.4)
The effective spin-1
2 subsystem
1
2
nc is spanned by the two same-sign basis states:
1
2, −1
2
= |s, −(s −
1)⟩,
1
2, 1
2
= |s, −s⟩. The spin-0 components correspond to the opposite-sign basis vectors: |k, s −1, −(s −
1)⟩, for k = 1, . . . , nc −1.
Correspondingly, the Hamiltonian decomposes into a same-sign two-dimensional effective spin-1
2 block and
nc −1 opposite-sign spin-0 blocks. The system Hamiltonian H1, which governs the evolution during Stages 1
and 2, when restricted to the low-energy subspace Lind of the clique Gc, takes the form
H1 = −x SX + jxx SXX −wc S˜Z.
Substituting the operator expressions from Eq. 3.3 in Theorem 3.2, we obtain
H1 =
−
√nc
2
x σx +
−wc + nc−1
4
jxx
˜σz
⊕
−
wc + 1
4jxx
⊕· · · ⊕
−
wc + 1
4jxx
|
{z
}
nc−1
(3.5)
= B(weff
c , √ncx) ⊕[−(wc + 1
4jxx)] ⊕· · · ⊕[−(wc + 1
4jxx)]
|
{z
}
nc−1
(3.6)
where the effective weight is defined as weff
c = wc −nc−1
4
jxx.
An illustration of this basis transformation is provided in Figure 4, which shows how the original product
basis is transformed into a direct-sum decomposition via total angular momentum.
13
basis change
an effective spin-1/2 block
(same-sign)
0-spin blocks
(opposite-sign)
b0
b0
bn
bn
b1
b1
c0 c1 o1
on-1
Figure 4: Basis transformation of Lind from the product space {b0, b1, . . . , bn} to the direct-sum space (angu-
lar momentum basis). The same-sign states {c0, c1} are the basis of the effective spin-1
2 subspace, while the
opposite-sign states {o1, . . . , on−1} are the bases of the spin-0 subspaces. The Hamiltonian matrix decomposes
accordingly into a same-sign block and n −1 opposite-sign blocks.
Full Spectrum
Since the eigensystem of B(w, x) is known analytically (Eqs. (2.2), (2.3)), the full spectrum and eigenstates of
the Hamiltonian H1 for the single clique Gc are also known analytically. In particular, the eigenvalues of the
same-sign block B(weff
c , √ncx) are:
βk = −1
2
weff
c + (−1)kp
[weff
c ]2 + nc[x]2
,
k = 0, 1,
(3.7)
with corresponding eigenvectors:
|β0⟩=
1
√
1+γ2 (γ |0⟩+ |1⟩) ,
|β1⟩=
1
√
1+γ2 (|0⟩−γ |1⟩) ,
(3.8)
with the mixing coefficient
γ =
√nc x
|weff
c |+√
[weff
c ]2+nc[x]2 .
(3.9)
Note γ, β0 and β1 are all time-dependent, through their dependence on x.
To summarize, the low-energy subspace of a single clique decomposes into a two-dimensional same-sign
(spin-1
2) block and nc −1 one-dimensional opposite-sign (spin-0) blocks, with explicit operator representations
in the B′
a basis. In the next subsection, we extend this construction to all cliques in L to obtain the full block
decomposition of Hlow.
3.2.2
Block Decomposition of Hlow
Using the single-clique angular momentum basis from Section 3.2.1, we tensor over all cliques in L to form a
global basis for L−in which Hlow is block-diagonal, details are described in Section 9.1 of [1].
The angular momentum basis is constructed locally within each clique using projection and transformation
operators (Section 3.2.1), and then combined into a global basis Bang by tensoring over all cliques in L with
the identity on VR. This preserves the tensor-product structure and yields a block-diagonal form separating the
same-sign and opposite-sign sectors.
14
The decomposition emerges hierarchically: each clique yields one same-sign sector (effective spin-1
2) and
several opposite-sign sectors (spin-0). At the next level, the cliques in L—forming the dMIC—define a bare
subsystem with a same-sign sector Cbare and opposite-sign sectors Wbare, Qbare. Finally, the full same-sign and
opposite-sign sectors are
C = Cbare ⊗
C2⊗mr ,
W = Wbare ⊗
C2⊗mr ,
Q = Qbare ⊗
C2⊗mr ,
where mr = |R|. The full Hamiltonian then decomposes into HC, HQ, and HW, which are decoupled in the
disjoint case and coupled in the shared case.
Confinement in the Jxx = 0 Case
In the Jxx = 0 case, the opposite-sign blocks remain at higher energy and are dynamically irrelevant before the
anti-crossing. The evolving ground state is confined to the same-sign block C, so the anti-crossing occurs entirely
within it.
As shown in Corollary 9.2 of [1], the same-sign block HC has the unified form
HC =
X
i∈L
Bi (weff
i , √nix) +
X
j∈R
Bj (wj, x) + Jzz
X
(i,j)∈L×R
fC
i ˜σz
i ˜σz
j ,
with effective coefficients
weff
i = wi −ni−1
4 jxx,
fC
i =
(
1
for Gdis,
ni−1
ni
for Gshare,
As in the single-clique case, we keep the jxx term in order to preserve compatibility with the more general
formulation used in the main paper, while our later results here will specialize to Jxx = 0.
This structure is identical in the disjoint and shared cases, so the mechanism of tunneling-induced anti-
crossing—and the resulting exponentially small gap—is governed by the same effective Hamiltonian. Hence, it
is sufficient to analyze the disjoint case (Gdis) when establishing the limitation of TFQA.
4
Inner Decomposition of the Same-sign Block
We recall the two inner decompositions of the same-sign block HC (L-inner and R-inner), as described in Sec-
tion 9.2 of [1]. In this paper, we consider the case Jxx = 0 and focus on the L-inner decomposition. Under this
decomposition, the relevant low-energy behavior is captured by a coupled pair of zero-indexed blocks. In partic-
ular, we describe how the ground state localizes into the coupled block Hcore, consisting of H(0)
L and H(0)
R , linked
through the empty-set basis state. This localized evolution sets the stage for the tunneling-induced anti-crossing
analyzed in the next section.
4.0.1
Two Block Decompositions of HC: L-Inner vs. R-Inner
Physically, the same-sign block Hamiltonian HC is symmetric under interchange of the L and R subsystems:
permuting the tensor factors leaves the spectrum unchanged. Mathematically, this corresponds to a permutation
similarity transformation: reordering the basis to place L before R, or vice versa, yields a matrix with identical
eigenvalues. Combinatorially, this symmetry allows the matrix to be organized into a two-layer block structure
with either subsystem—L or R—serving as the “inner” block. That is, HC can be expressed either as a collection
of L-blocks indexed by the states from R, or as R-blocks indexed by the states from L.
15
For illustration, we assume uniform clique size ni = nc for all i. As in the main paper, we restrict the
same-sign block Hamiltonian to the symmetric subspace:
Hsym
C
= Hbare
L
⊗IR + IL ⊗Hbare
R
+ HLR,
where
Hbare
L
= −√nc x CSX(m) −weff CS˜Z(m),
Hbare
R
= −x(t) CSX(mr) −w CS˜Z(mr),
HLR = Jzz CS˜Z(m) · CS˜Z(mr).
Here
CS˜Z(m) =
m
0
· · ·
0
0
m −1
· · ·
0
...
...
...
...
0
0
· · ·
0
,
CSX(m) =
0
√m
2
0
· · ·
0
√m
2
0
√
2(m−1)
2
· · ·
0
0
√
2(m−1)
2
0
· · ·
0
...
...
...
...
√m
2
0
0
0
√m
2
0
.
are the collective Pauli operators restricted to the symmetric subspace of m spins, and all bare operators are
understood in this context to be restricted to the symmetric subspace.
To illustrate the resulting block structure, we consider a concrete example with m = 2 and mr = 3.
R-inner
L-inner
Regroup
Figure 5: Two possible orderings of the basis states in the symmetric subspace of the same-sign block Hsym
C
,
illustrated for m = 2, mr = 3, corresponding to the R-inner and L-inner decompositions. The R subsystem is
shown in blue, and L in pink. The dashed circle marks the empty-set basis state (no spin-ups). In the L-inner
representation (right), the basis states are reordered so that the zero-indexed R-block appears in the bottom-right
corner. The red dashed box highlights the core block Hcore, consisting of the blocks H(0)
L
and H(0)
R , coupled
through the empty-set state.
16
Explicit Matrix Representation of L-inner Block Decomposition.
We present the explicit matrix form of
Hsym
C
, corresponding to the decomposition shown in Figure 5. Each diagonal entry reflects the total energy of a
basis state, while off-diagonal entries arise from the transverse-field term, which connects basis states differing
by a single spin flip.
There are mr + 1 = 4 outer-layer blocks, each a (m + 1) × (m + 1) matrix acting on the L subsystem. The
full matrix representation is denoted HL-inner
C
:
6Jzz −3w −2weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
0
0
−
√nc x
√
2
3Jzz −3w −weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
0
0
−
√nc x
√
2
−3w
0
0
−1
2
√
3x
0
0
0
0
0
0
−1
2
√
3x
0
0
4Jzz −2w −2weff
−
√nc x
√
2
0
−x
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
2Jzz −2w −weff
−
√nc x
√
2
0
−x
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
−2w
0
0
−x
0
0
0
0
0
0
−x
0
0
2Jzz −w −2weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
−x
0
−
√nc x
√
2
Jzz −w −weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
−x
0
−
√nc x
√
2
−w
0
0
−1
2
√
3x
0
0
0
0
0
0
−1
2
√
3x
0
0
−2weff
−
√nc x
√
2
0
0
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
−weff
−
√nc x
√
2
0
0
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
0
4.0.2
Localization and Effective Hamiltonian Hcore
Assume Γ1 is sufficiently large so that the initial ground state is the uniform superposition. As the parameter
x decreases, the L-spins experience a stronger effective transverse field due to the extra √nc factor. Since this
transverse field remains much larger than the z-field, the latter can be neglected to leading order. Under this
approximation, the amplitudes of basis states within the block H(0)
R
become suppressed—they are negligibly
small compared to those in H(0)
L , except for their coupling through the shared basis (empty-set) state.
This motivates us to reorder the matrix representation so that H(0)
R
appears explicitly in the bottom-right
corner. To extract the block H(0)
R from the L-inner representation, we permute the rows and columns of HL−inner
C
so that the indices corresponding to the iR = 0 sector appear contiguously at the end. This reordering moves
H(0)
R into the bottom-right corner of the matrix. Note that the original blocks H(0)
L and H(0)
R are coupled through
the basis state corresponding to the empty set, which is shared between them. See Figure 5 (right panel) for
illustration. The explicit form of the low-energy submatrix, with the shared row and column shaded, is:
−2weff
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
−weff
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
−1
2
√
3 x
0
0
0
0
−1
2
√
3 x
−w
−x
0
0
0
0
−x
−2w
−1
2
√
3 x
0
0
0
0
−1
2
√
3 x
−3w
The evolving ground state localizes into the lowest-energy block H(0)
L before the anti-crossing, as shown in
Figure 6. This localization justifies reducing the full same-sign block Hamiltonian HC to an effective Hamiltonian
Hcore = H(0)
L + H(0)
R .
where H(0)
L
= Hbare
L
⊗|0⟩R ⟨0| , H(0)
R = |0⟩L ⟨0| ⊗Hbare
R
. Figure 7 shows that the lowest energy levels of Hcore
closely match those of the full same-sign Hamiltonian HC.
17
Figure 6: Projection of the evolving ground state onto each block of the same-sign Hamiltonian HC, in the
disjoint case with Jxx = 0. The ground state begins near-uniform, then steers smoothly into the H(0)
L
block,
which dominates before the anti-crossing. An anti-crossing occurs near 0.95, where amplitude tunnels from H(0)
L
to H(0)
R . Intermediate blocks H(1)
L and H(2)
L participate during structural compression but remain subdominant.
5
Detailed Analysis of Hcore
In this section, we analyze the spectrum of Hcore using the exact eigensystems of the embedded bare subsystems
H(0)
L
and H(0)
R , each of which is analytically solvable. The analysis proceeds by reformulating the eigenvalue
problem as a generalized eigenvalue problem, using a non-orthogonal basis constructed by extending the bare
eigenstates of the subsystems to the full Hilbert space.
This transformation enables a clean perturbative treatment and forms the foundation for a structured spectral
analysis. The approach proceeds in three main steps:
• Describe the bare eigenstates of Hcore (Section 5.1).
• Reformulate the eigenvalue problem for Hcore as a generalized eigenvalue problem in the non-orthogonal
basis (Section 5.2).
• Develop the perturbative structure of the generalized eigenvalue system (Section 5.3), including:
1. Showing that the true energy values (away from the anti-crossing) are well-approximated by the bare
energies (Section 5.3.1).
2. Analyzing the anti-crossing structure by approximating the lowest two energy levels, both away from
and near the crossing, and deriving a perturbative bound for the anti-crossing gap using an effective
2 × 2 Hamiltonian (Section 5.3.3).
Remark 5.1. The final step, constructing a reduced effective Hamiltonian for the generalized system, is struc-
turally similar to the method of effective Hamiltonians for the non-orthogonal basis developed in [20]. However,
18
Figure 7: Comparison of the energy spectrum of the full same-sign block Hamiltonian HC and the reduced two-
block Hamiltonian Hcore, in the disjoint case with Jxx = 0. The top-left panel shows the full spectrum of HC
(solid black and gray), and the top-right panel shows the spectrum of Hcore (green dashed). The bottom panel
overlays the two, demonstrating that the lowest energy levels of Hcore closely track those of HC, with the anti-
crossing position shifted slightly to the right.
19
in their setting, the non-orthogonal basis is derived from the physical structure of the original problem (e.g., lo-
calized atomic orbitals). In contrast, our non-orthogonal basis is synthetically constructed from bare eigenstates
and does not correspond to the physical structure of the full system.
5.1
Bare Eigenstates of Hcore
In this section, we describe the bare eigenstates of Hcore by relating the embedded bare operators H(0)
L , H(0)
R to
the bare subsystem Hamiltonians Hbare
L
, Hbare
R
. Recall that the effective Hamiltonian Hcore is
Hcore = H(0)
L + H(0)
R ,
where
H(0)
L = Hbare
L
⊗|0⟩R ⟨0| ,
H(0)
R = |0⟩L ⟨0| ⊗Hbare
R
.
The eigenvalues and eigenstates of the the bare Hamiltonians Hbare
L
, Hbare
R
are
Hbare
L
|Li⟩= eLi |Li⟩,
Hbare
R
|Rj⟩= eRj |Rj⟩,
where {|Li⟩} and {|Rj⟩} are analytically known (Section 7.2 in [1]) and form orthonormal eigenbases of the L-
and R-subspaces.
The corresponding padded eigenstates in the full space are
|Li⟩= |Li⟩⊗|∅⟩R ,
|Rj⟩= |∅⟩L ⊗|Rj⟩,
and they satisfy
H(0)
L |Li⟩= eLi |Li⟩,
H(0)
R |Rj⟩= eRj |Rj⟩.
Remark 5.2. We use a bar to distinguish padded eigenstates of the full Hilbert space from bare eigenstates of
the subsystem Hamiltonians. For instance, |Li⟩is an eigenstate of the bare Hamiltonian Hbare
L
acting on the
L-subspace, while |Li⟩= |Li⟩⊗|∅⟩R is the corresponding eigenstate of the embedded operator H(0)
L acting on
the full Hilbert space. The same applies to |Rj⟩and |Rj⟩.
Recall that the full eigensystem for both HL and HR (and thus for H(0)
L and H(0)
R ) is analytically known. In
the following, we first derive the results in terms of the bare eigenstates |Li⟩and |Rj⟩, and then substitute their
explicit forms in the anti-crossing structure section later.
5.2
Generalized Eigenvalue Reformulation
We express the Hamiltonian Hcore in the basis of padded bare eigenstates
|Li⟩
∪
|Rj⟩
, which leads to a
generalized eigenvalue problem. We define
eH def
=
HRR
HRL
HLR
HLL
,
eS def
=
SRR
SRL
SLR
SLL
,
where HRR contains the entries ⟨Ri| Hcore |Rj⟩, and similarly for HLL, HLR, and HRL. The matrix SRR
contains the overlaps ⟨Ri|Rj⟩, and similarly for the remaining blocks of eS.
We now consider the generalized eigenvalue equation:
eH | eEn⟩= eEn eS | eEn⟩,
(5.1)
20
where the pair ( eEn, | eEn⟩) is referred to as a generalized eigenpair of the generalized eigenvalue system (eH, eS).
We refer to eH as the generalized Hamiltonian, and eS as the corresponding overlap matrix. In the numerical linear
algebra literature, such pairs are also called matrix pencils.4
Theorem 5.3. Let (En, |En⟩) be an eigenpair of the original Hamiltonian Hcore, and let ( eEn, | eEn⟩) be a gener-
alized eigenpair of (eH, eS) as in (5.1). Then:
• Every eigenvalue En of Hcore appears as a generalized eigenvalue eEn.
• Conversely, every generalized eigenvalue eEn corresponds to an eigenvalue En of Hcore.
The eigenvalues coincide: eEn = En for all n, while the eigenvectors | eEn⟩and |En⟩differ.
Proof. We can write the eigenstate |En⟩of Hcore in the padded bare basis as
|En⟩=
2mR−1
X
k=0
cR
k |Rk⟩+
2mL−1
X
k=0
cL
k |Lk⟩.
Note that the coefficients cR and cL may not be unique, since the set {|Rk⟩, |Lk⟩} spans a space of dimension
2mL + 2mR, whereas the true Hilbert space of Hcore has dimension 2mL + 2mR −1 due to the identification
|R0⟩= |L0⟩. Thus, the padded set is linearly dependent.
Now rewrite the eigenvalue equation:
Hcore
X
k
cR
k |Rk⟩+
X
k
cL
k |Lk⟩
!
= En
X
k
cR
k |Rk⟩+
X
k
cL
k |Lk⟩
!
.
Taking the inner product with ⟨Ri| and ⟨Li|, we obtain the generalized eigenvalue equation in matrix form:
HRR
HRL
HLR
HLL
⃗cR
⃗cL
= En
SRR
SRL
SLR
SLL
⃗cR
⃗cL
.
Then we have eH | eEn⟩= eEneS | eEn⟩, with eEn = En and | eEn⟩=
⃗cR
⃗cL
.
This shows that every eigenpair (En, |En⟩) of Hcore corresponds to a generalized eigenpair ( eEn, | eEn⟩) of
(eH, eS), though | eEn⟩may not be unique. Conversely, each generalized eigenpair ( eEn, | eEn⟩) of (eH, eS) determines
an eigenpair (En, |En⟩) of Hcore with eEn = En.
4The term “matrix pencil” refers to the parametrized family eH −λeS, whose roots λ = eEn yield the eigenvalues of the system.
21
Summary of the Reformulation. We express the eigenvalue problem for Hcore in a non-orthogonal
(normalized) basis:
|Li⟩
∪
|Rj⟩
.
Each subset is orthonormal, but the two sets are not mutually orthogonal.
This yields a generalized eigenvalue problem of the form
eH | eEn⟩= eEn eS | eEn⟩.
This reformulation preserves the spectrum:
eEn = En
for all n,
allowing us to analyze the dynamics using the generalized eigenvalue system (eH, eS). This forms the
foundation for our perturbative analysis in the next subsection.
5.3
Perturbative Structure of the Generalized Eigenvalue System
In this section, we develop the perturbative structure of the generalized eigenvalue system (eH, eS), and use it to
analyze the spectrum of Hcore. Specifically:
1. In Section 5.3.1, we show that the true energy values of Hcore (away from the anti-crossing) are well-
approximated by the bare energies.
2. In Section 5.3.3, we derive a 2 × 2 effective Hamiltonian that captures the tunneling-induced anti-crossing
and provides a perturbative bound on the gap size.
We decompose the generalized Hamiltonian and overlap matrix as
eH = H + ∆H,
eS = I + ∆I,
where
H =
HRR
0
0
HLL
(block-diagonal),
∆H =
0
HRL
HLR
0
(off-diagonal),
I =
IRR
0
0
ILL
,
∆I =
0
SRL
SLR
0
.
We now introduce a parameterized family of matrices:
H(λ) def
= H + λ∆H,
I(λ) def
= I + λ∆I,
λ ∈[0, 1),
and consider the associated generalized eigenvalue problem:
H(λ) |en(λ)⟩= en(λ)I(λ) |en(λ)⟩.
At λ = 0, the eigenvalues en(0) correspond to the bare subsystem energies. At λ = 1, the eigenvalues
en(1) = En recover the true eigenvalues of Hcore.
We analyze how these eigenvalues deform as a function of λ.
22
Remark 5.4. There is one more eigenvalue in the unperturbed generalized eigenvalue system (eH(0), eS(0)) than
in the fully coupled system (eH(1), eS(1)). While perturbations typically split degenerate energy levels, here the
effect is reversed: the perturbation merges two distinct eigenvalues into a single degenerate level. In particular,
the zero-energy levels from both subsystems appear to collapse into one.
Although eS = I + ∆I is not invertible, the matrix I + λ∆I is invertible for all λ ∈[0, 1), as shown in the
lemma below.
Lemma 5.5. The matrix I + ∆I is not invertible. However, for all λ ∈[0, 1), the matrix I + λ∆I is invertible.
Proof. The overlap matrix eS = I + ∆I is defined using the padded bare basis {|Lk⟩, |Rk⟩}, where |L0⟩= |R0⟩.
This identification introduces a linear dependence in the set, so eS is singular and not invertible.
In contrast, for any λ ∈[0, 1), the matrix I + λ∆I is a Hermitian perturbation of the identity by an operator
of strictly subunit norm. All eigenvalues of λ∆I lie strictly within (−1, 1), so the eigenvalues of I + λ∆I are
strictly positive. Hence, I + λ∆I is invertible.
Using Lemma 5.5, we can now convert the generalized eigenvalue problem into a standard eigenvalue prob-
lem. For λ ∈[0, 1), we rewrite the generalized eigenvalue problem
(H + λ∆H) |en(λ)⟩= en(λ) (I + λ∆I) |en(λ)⟩,
in standard form using the transformation T(λ) def
= (I + λ∆I)−1/2 . This yields the equivalent standard eigen-
value equation:
HQI(λ) |en(λ)⟩= en(λ) |en(λ)⟩.
where HQI(λ) := T(λ) (H + λ∆H) T(λ). We refer to HQI(λ) as the quasi-interpolated Hamiltonian. At
λ = 0, the unperturbed equation becomes H |en⟩= en |en⟩, where en = en(0) denotes the bare eigenvalue.
5.3.1
Perturbative Approximation Scheme for the True Energies
In this section, we develop a perturbative scheme for approximating the difference en(1)−en, where en(1) = En
denotes the true eigenvalue of Hcore, and en = en(0) is the corresponding bare eigenvalue. We begin by showing
in Theorem 5.6 that for any λ ∈(0, 1), the eigenvalue deformation en(λ) −en is exactly captured by a quadratic
form involving the transformation matrix T(λ) and the coupling term ∆H −en∆I. Then, in Proposition 5.7,
we show that when the level en is non-degenerate (i.e., well-separated from other bare levels), the eigenvalue
deformation admits a second-order perturbative expansion in λ. Finally, we approximate the true energy at λ = 1
using the same expansion, as given in Corollary 5.8.
Theorem 5.6. For 0 < λ < 1, let en = en(0) denote the unperturbed eigenvalue, and |e(0)
n ⟩its correspond-
ing eigenvector. Then the perturbed eigenvalue en(λ) satisfies
en(λ) −en = λ ⟨e(0)
n | T(λ) (∆H −en∆I) T(λ) |en(λ)⟩.
Proof. First, add and subtract an auxiliary term λen∆I to H(λ):
H(λ) = H + λ∆H = (H + λen∆I) + λ (∆H −en∆I) .
23
We now express HQI(λ) in terms of the unperturbed Hamiltonian:
HQI(λ) = T(λ) (H + λen∆I) T(λ) + λT(λ) (∆H −en∆I) T(λ).
(5.2)
This holds because
T(λ) (H + λen∆I) T(λ) |en⟩= en |en⟩,
(5.3)
which follows from adding the term λen∆I to both sides of the unperturbed eigenvalue equation:
(H + λen∆I) |en⟩= en(I + λ∆I) |en⟩.
Next, consider the quantity ⟨en| T(λ)H(λ)T(λ) |en(λ)⟩. We compute it two ways—acting from the right
and from the left.
From the right: by the generalized eigenvalue equation,
⟨en| HQI(λ) |en(λ)⟩= en(λ) ⟨en|en(λ)⟩.
From the left: using Eqs. (5.2) and (5.3),
⟨en| HQI(λ) |en(λ)⟩= ⟨en| T(λ) (H + λen∆I) T(λ) |en(λ)⟩
+ λ ⟨en| T(λ) (∆H −en∆I) T(λ) |en(λ)⟩
= en ⟨en|en(λ)⟩+ λ ⟨en| T(λ) (∆H −en∆I) T(λ) |en(λ)⟩.
Now expand |en(λ)⟩= |en⟩+ λ |e(1)
n ⟩+ λ2 |e(2)
n ⟩+ · · · as a power series in λ. Assuming orthogonality
⟨en|e(k)
n ⟩= 0 for k ≥1, we have ⟨en|en(λ)⟩= 1. Therefore,
en(λ) −en = λ ⟨en| T(λ) (∆H −en∆I) T(λ) |en(λ)⟩.
Proposition 5.7. For 0 < λ < 1, suppose the eigenvalue en is non-degenerate, i.e., |e(0)
k
−en| is bounded
away from zero for all k ̸= n. Then
en(λ) −en ≈λ2
⟨en| (∆H −en∆I)∆I |en⟩−
X
k̸=n
⟨en|(∆H−en∆I)|e(0)
k ⟩⟨e(0)
k |∆H|en⟩
e(0)
k −en
.
Proof. From Theorem 5.6, we have
en(λ) −en = λ ⟨en| T(λ) (∆H −en∆I) T(λ) |en(λ)⟩.
Up to first-order approximation, write |en(λ)⟩≈|en⟩+ λ |e(1)
n ⟩. Then
en(λ) −en ≈λ ⟨en| T(λ)(∆H −en∆I)T(λ) |en⟩+ λ2 ⟨en| T(λ)(∆H −en∆I)T(λ) |e(1)
n ⟩.
(5.4)
We now approximate
T(λ) = (I + λ∆I)−1/2 ≈I −1
2λ∆I.
24
Hence,
T(λ)(∆H −en∆I)T(λ) ≈(I −1
2λ∆I)(∆H −en∆I)(I −1
2λ∆I)
= (∆H −en∆I) −λ(∆H −en∆I)∆I + 1
4λ2∆I2
≈(∆H −en∆I) −λ(∆H −en∆I)∆I,
where we used the identity ∆I∆H = ∆H∆I.
Now note that ⟨en| (∆H −en∆I) |en⟩= 0, since both ∆H and ∆I are zero on the diagonal blocks.
Therefore, the first term of Eq. (5.4) becomes
λ ⟨en| T(λ)(∆H −en∆I)T(λ) |en⟩≈λ2 ⟨en| (∆H −en∆I)∆I |en⟩,
and the second term becomes
λ2 ⟨en| T(λ)(∆H −en∆I)T(λ) |e(1)
n ⟩≈λ2 ⟨en| (∆H −en∆I) |e(1)
n ⟩.
To compute |e(1)
n ⟩, note that
HQI(λ) ≈H + λ∆H + λ2∆H∆I + · · · ,
so the first-order correction |e(1)
n ⟩agrees with standard non-degenerate perturbation theory for H + λ∆H. That
is,
|e(1)
n ⟩= −
X
k̸=n
|e(0)
k ⟩∆Hkn
e(0)
k −e(0)
n
,
where ∆Hkn = ⟨e(0)
k | ∆H |en⟩, and en = e(0)
n .
Then,
⟨en| (∆H −en∆I) |e(1)
n ⟩= −
X
k̸=n
⟨en|(∆H−en∆I)|e(0)
k ⟩∆Hkn
e(0)
k −e(0)
n
= −
X
k̸=n
⟨en|(∆H−en∆I)|e(0)
k ⟩⟨e(0)
k |∆H|en⟩
e(0)
k −e(0)
n
.
Thus, the full second-order correction is
en(λ) −en ≈λ2
⟨en| (∆H −en∆I)∆I |en⟩−
X
k̸=n
⟨en|(∆H−en∆I)|e(0)
k ⟩⟨e(0)
k |∆H|en⟩
e(0)
k −en
.
Corollary 5.8. If |e(0)
k
−en| is bounded away from zero for all k ̸= n (i.e., en is non-degenerate), then
en(1) −en ≈⟨en| (∆H −en∆I)∆I |en⟩−
X
k̸=n
⟨en|(∆H−en∆I)|e(0)
k ⟩⟨e(0)
k |∆H|en⟩
e(0)
k −en
.
(5.5)
25
5.3.2
Exact Bare Eigenvalues and Eigenstates and Some Combinatorial Facts
In this section, we recall the exact eigenvalues and eigenstates of the bare subsystems HL and HR (from Section
7 of [1]), and state several combinatorial identities that will be used in deriving the bounds in the next section.
For simplicity, we assume the unweighted case with wr = wl = w(= 1), and take R to be the unique
global minimum with nR = 1. We also assume that all cliques in L have uniform size nc = nL, and that
mr =: mR > mL := ml but mR < mL
√nc, so that the bare (i.e., unperturbed) energies cross at some value
x = xc.
Remark 5.9 (Notation). In describing the bipartite graphs (Gdis and Gshare), we use ml, mr and nl, nr for the
sizes of the left and right vertex sets and cliques, respectively. In the following analytical sections, we switch to
mL, mR and nL, nR, matching the convention χA for A ∈{L, R} used for other indexed quantities (e.g., EAk,
|Ak⟩). These pairs of symbols refer to the same quantities.
Application of Exact Spectrum.
We apply the exact spectrum and eigenstates of the bare subsystem for HL,
HR from Theorem 7.1 in [1] to the case where ni = nA for all i ∈A ∈{L, R}, and w = 1. This yields the
following exact form for the bare eigenvalues and eigenstates:
Theorem 5.10 (Bare Eigenvalues and Ground States for HL, HR). For A ∈{L, R}, the eigenvalues of HA
are given by:
eAk = −
mA
2 w + (mA
2 −k)
p
w2 + nAx2
,
k = 0, . . . , mA.
(5.6)
(k is the number of one’s).
Each eigenstate is indexed by a bit string z = z1z2 . . . zmA ∈{0, 1}mA, with:
|E(A)
z
⟩=
mA
O
i=1
|β(A)
zi ⟩,
(5.7)
where |β(A)
0
⟩and |β(A)
1
⟩are the eigenvectors of the local two-level system:
|β(A)
0
⟩
=
1
√
1+γ2
A
(γA |0⟩+ |1⟩) ,
|β(A)
1
⟩
=
1
√
1+γ2
A
(|0⟩−γA |1⟩) ,
with mixing ratio γA =
√nAx
w+√
w2+nAx2 .
Ground State Energies and Vectors.
In particular, the ground states of HL and HR correspond to the
all-zero string z = 0 . . . 0, and take the form:
|R0⟩
= NmR
i=1 |β(R)
0
⟩,
|L0⟩
= NmL
i=1 |β(L)
0
⟩.
26
The corresponding ground state energies are:
eL0
= −mL
2
w +
p
w2 + nLx2
,
eR0
= −mR
2
w +
p
w2 + nRx2
.
We define the overlap of an eigenstate |Ak⟩, where k denotes the number of 1’s in the bit string indexing the
state, with the all-zero computational basis state as
co(Ak) def
= ⟨Ak | 0A⟩,
where |0A⟩≡|0⟩⊗mA is the zero state on subsystem A ∈{L, R}.
The following combinatorial identities will be used in the gap analysis:
Proposition 5.11. Let A ∈{L, R} and let |Ak⟩denote any eigenstate with Hamming weight k. Then the overlap
of |Ak⟩with the all-zero computational state is:
co(Ak) =
1
√
1+γ2
A
mA
γmA−k
A
(5.8)
In particular, for the ground state |A0⟩, corresponding to k = 0, we have:
co(A0) =
γA
√
1+γ2
A
mA
(5.9)
One can verify the following non-obvious identity:
Lemma 5.12. Let γA =
√nAx
1+√
1+nAx2 , we have 1−γ2
A
1+γ2
A =
1
√
1+nAx2 .
Together with the two well-known binomial summation identities:
m
X
k=0
m
k
ak = (1 + a)m,
m
X
k=0
m
k
kak =
a
1+a m(1 + a)m,
(5.10)
we have the folowing perhaps surprising identity.
Lemma 5.13. Let A ∈{L, R}, and let eAk denote the energy of the eigenstate |Ak⟩with Hamming weight k.
Then
mA
X
k=0
eAk (⟨Ak | 0A⟩)2 = 0.
(5.11)
Proof. We begin by expressing the energy as eAk = eA0 +k ·
p
1 + nAx2. The squared overlap of the eigenstate
|Ak⟩with the all-zero computational basis state is
(⟨Ak | 0A⟩)2 =
1
1+γ2
A
mA (γ2
A)mA−k.
27
There are
mA
k
such states at each Hamming weight k, so the full sum becomes:
S :=
mA
X
k=0
mA
k
eAk (⟨Ak | 0A⟩)2
=
1
1+γ2
A
mA mA
X
k=0
mA
k
eA0 + k ·
p
1 + nAx2
(γ2
A)mA−k.
By the binomial summation identities in Eq. (5.10), we have
S =
1
1+γ2
A
mA ·
h
(1 + γ2
A)mA
eA0 +
1
1+γ2
A mA ·
p
1 + nAx2
i
= eA0 +
1
1+γ2
A · mA ·
p
1 + nAx2
= −
mA
2 + mA
2
p
1 + nAx2
+
1
1+γ2
A · mA ·
p
1 + nAx2
= −mA
2 −mA
2
p
1 + nAx2 γ2
A−1
1+γ2
A
= 0
(by Lemma 5.12).
5.3.3
Anti-Crossing Structure: Energy Approximation and Gap Bound
Ground State Energy Approximation
Let eA0 denote the bare ground state energy, and let E0 = eA0(1) be the true ground state energy of Hcore.
We apply the correction formula in Eq. (5.5) from Corollary 5.8 to the ground state energy before and after the
anti-crossing to obtain an approximation of the difference E0 −eA0. Assume the bare energy ordering satisfies
eL0 < eR0 before the anti-crossing, and reverses to eR0 < eL0 afterward. Suppose further that the energy
separation satisfies |eL0 −eR0| > 1 (i.e. away from the anti-crossing), so that a first-order correction suffices.
Then the corrected ground state energies can be written explicitly as weighted sums over overlaps and energy
gaps between the L and R blocks, as given in Proposition 5.14.
Proposition 5.14. Suppose that before the anti-crossing, eL0 < eR0, and that this ordering reverses afterward.
Assume further that |eL0 −eR0| > 1, so that a first-order correction suffices. Let E0 denote the true ground state
energy of Hcore.
• Before the anti-crossing:
E0 −eL0 ≈−2eL0
X
j
eRj
eRj −eL0
⟨L0|Rj⟩
2 .
• After the anti-crossing:
E0 −eR0 ≈−2eR0
X
j
eLj
eLj −eR0
⟨R0|Lj⟩
2 .
Proof. We apply the correction formula in Eq. (5.5) from Corollary 5.8 to the ground state energy before and
after the anti-crossing:
28
• Before the anti-crossing:
E0 −eL0 ≈
X
j
⟨eL0| (∆H −eL0∆I) |eRj⟩
⟨eRj| ∆I |eL0⟩−
⟨eRj |∆H|eL0⟩
eRj −eL0
(5.12)
• After the anti-crossing:
E0 −eR0 ≈
X
j
⟨eR0| (∆H −eR0∆I) |eLj⟩
⟨eLj| ∆I |eR0⟩−
⟨eLj |∆H|eR0⟩
eLj −eR0
(5.13)
We now use the identities
⟨eLi| ∆I |eRj⟩= ⟨Li|Rj⟩,
⟨eLi| ∆H |eRj⟩= (eLi + eRj) ⟨Li|Rj⟩,
where |Li⟩and |Rj⟩denote the normalized bare eigenstates in the L and R subsystems, respectively. Substituting
into the above yields the result.
We justify that the first-order correction to eL0 is small by analyzing the weighted sum:
X
j
eRj
eRj −eL0
⟨Rj | 0R⟩
2 .
By Lemma 5.13, the unweighted sum is exactly zero: P
j eRj
⟨Rj | 0R⟩
2 = 0. The weighting factor
1
eRj −eL0
is smooth and strictly positive across the spectrum, since eRj > eL0 for all j before the anti-crossing. Thus,
the weighted sum remains bounded in magnitude and does not grow with system size. Moreover, the prefactor
(⟨L0 | 0L⟩)2 = co(L0)2 is exponentially small in mL, so the total correction is exponentially suppressed: E0 −
eL0 ≈−2eL0 ·co(L0)2·[bounded sum] . Hence, the correction is negligible, and we conclude that E0 ≈eL0. This
justifies the approximation E0 ≈eL0 before the anti-crossing, and similarly E0 ≈eR0 after the anti-crossing,
completing the perturbative approximation scheme. See Figure 8 for numerical confirmation. Furthermore,
the actual anti-crossing point is well-approximated by the crossing point of the bare energies, as illustrated in
Figure 9.
Effective 2 × 2 Hamiltonian and Gap Bound
We now approximate the size of the anti-crossing gap between the degenerate bare states |eL0⟩and |eR0⟩, by
reducing the full generalized eigenvalue problem to an effective 2 × 2 Hamiltonian on the subspace they span.
Combining the structural reduction steps established above, we have the chain of approximations:
∆(H) ≤∆(Hlow) ≈∆(HC) ≈∆(Hcore) ≈∆(H(2×2)
eff
)
(5.14)
which shows that it suffices to approximate the gap of the final 2×2 Hamiltonian H(2×2)
eff
. We now derive H(2×2)
eff
explicitly, starting from the projected generalized eigenvalue problem. This approach is similar to [20]. See also
[9].
We construct a two-level approximation by projecting the generalized eigenvalue problem onto the subspace
P = span{|L0⟩, |R0⟩},
29
Figure 8: Comparison of the exact energy spectrum of the effective Hamiltonian Hcore with the bare energy
spectra of HL (magenta dashed) and HR (blue dashed). The top-left panel shows the true spectrum of Hcore; the
top-right and bottom-left show the bare spectra of HL and HR, respectively. The bottom-right panel overlays the
bare and true spectra. As seen, the true energy (solid gray) of Hcore closely follows the bare energy (dashed), and
the bare-level crossing is replaced by an anti-crossing.
30
Figure 9: Anti-crossing illustration. Left: Energy spectrum comparison between the bare Hamiltonians H(0)
L and
H(0)
R (dashed magenta and blue), and the coupled Hamiltonian Hcore (solid gray). The anti-crossing occurs near
the point where the bare energies cross. Right: Ground state projection onto the same two blocks. The ground
state is concentrated on H(0)
L (magenta) before the anti-crossing, and H(0)
R (blue) after the anti-crossing.
where |L0⟩and |R0⟩are the padded bare eigenstates of H(0)
L and H(0)
R , respectively. Projecting the generalized
eigenvalue problem onto P yields the intermediate effective Hamiltonian H0
eff in generalized form; converting it
to standard form by left-multiplying with S−1
PP produces the final 2 × 2 Hamiltonian H(2×2)
eff
.
Lemma 5.15. The effective Hamiltonian for the subspace P is given by
H0
eff = HPP + (e0SPQ −HPQ)(e0SQQ −HQQ)−1(e0SQP −HQP ),
where e0 = eL0(0) = eR0(0), Q = P ⊥. The eigenvalues of the pair (H0
eff, SPP ) approximate the ground and
first excited energies near the anti-crossing.
Proof. The perturbed generalized eigenvalue equation is:
(H + ∆H) |Φ⟩= λ(I + ∆I) |Φ⟩.
(5.15)
Write the eigenvector as
|Φ⟩=
X
p∈P
cp |p⟩+
X
q∈Q
cq |q⟩,
and take inner products to obtain the generalized eigenvalue problem in matrix form:
HPP
HPQ
HQP
HQQ
cP
cQ
= λ
SPP
SPQ
SQP
SQQ
cP
cQ
,
(5.16)
where λ = e0 + ∆λ.
From Eq. (5.16), we obtain:
(HPP −λSPP )cP = (λSPQ −HPQ)cQ,
(HQP −λSQP )cP = (λSQQ −HQQ)cQ.
31
We solve the second equation for cQ: cQ = (λSQQ −HQQ)−1(HQP −λSQP )cP .
Substitute into the first equation:
(HPP −λSPP )cP = (λSPQ −HPQ)(λSQQ −HQQ)−1(HQP −λSQP )cP .
=⇒
HPP + (λSPQ −HPQ)(λSQQ −HQQ)−1(λSQP −HQP )
cP = λSPP cP .
Since λ = e0 + ∆λ and |∆λ| ≪|e0|, we may approximate the terms λSPQ −HPQ, λSQQ −HQQ, and
λSQP −HQP by their counterparts evaluated at λ = e0, provided that the overlaps ⟨L0|Rj⟩are small.
Concretely, the matrix element ⟨L0| (λSPQ −HPQ) |Rj⟩= ∆λ ⟨L0|Rj⟩−eRj ⟨L0|Rj⟩, differs from the
e0-version only by a term of order ∆λ · ⟨L0|Rj⟩, which is negligible under the assumption that both ∆λ ≪|e0|
and ⟨L0|Rj⟩≪1. Thus, we may replace λ by e0 in the effective Hamiltonian to leading order.
Substituting λ ≈e0, we obtain the effective Hamiltonian:
H0
effcP ≈λSPP cP ,
where
H0
eff = HPP + (e0SPQ −HPQ)(e0SQQ −HQQ)−1(e0SQP −HQP ).
To understand the leading-order contribution to the anti-crossing gap, we examine the structure of the effec-
tive Hamiltonian in the P-subspace. We expand the second-order approximation using
D def
= e0IQQ −EQQ,
V def
= ∆HQQ −e0∆IQQ,
and approximate the inverse: (e0SQQ−HQQ)−1 ≈D−1+D−1VD−1. Then we have H0
eff ≈HPP +W1+W2,
where
W1
def
= (e0SPQ −HPQ)D−1(e0SQP −HQP ),
W2
def
= (e0SPQ −HPQ)D−1VD−1(e0SQP −HQP ).
The leading term
HPP =
eL0
(eL0 + eR0)g0
(eL0 + eR0)g0
eR0
= e0
1
2g0
2g0
1
,
where e0 = 1
2(eL0 + eR0) and g0 = ⟨L0|R0⟩.
The first-order correction W1 =
a1
0
0
a1
is diagonal, while the second-order correction W2 =
0
c2
c2
0
is
off-diagonal, with c2 ≪e0g0. Combining all terms yields:
H0
eff ≈
e0 + a1
2e0g0 + c2
2e0g0 + c2
e0 + a1
,
which shows that the off-diagonal coupling is governed primarily by the overlap g0.
Finally, we convert the generalized eigenvalue problem to standard form using
H(2×2)
eff
= S−1
PP H0
eff,
where
SPP =
1
g0
g0
1
,
which gives
H(2×2)
eff
=
1
1−g2
0
e0 + a1 −g0(2e0g0 + c2)
2e0g0 + c2 −g0(e0 + a1)
2e0g0 + c2 −g0(e0 + a1)
e0 + a1 −g0(2e0g0 + c2)
.
We thus have the gap approximation based on the assumptions that |a1| ≪|e0| and |c2| ≪|e0g0|:
32
Corollary 5.16 (Gap Approximation). Under the effective two-level approximation,
∆(H(2×2)
eff
) ≈2|e0|g0,
where e0 = eL0 = eR0 is the degenerate bare energy, and g0 = ⟨L0|R0⟩is the overlap between two bare ground
states.
Proposition 5.17. The overlap between the two bare ground states is given by
g0 := ⟨L0|R0⟩= co(L0) · co(R0) =
r
γ2
L
1+γ2
L
!mL r
γ2
R
1+γ2
R
!mR
,
where the amplitude ratios γA =
√nAxc
w+√
w2+nAx2c ∈(0, 1), for A ∈{L, R}, and xc is the location of the anti-
crossing, defined implicitly by eL0(xc) = eR0(xc).
Since the runtime scales inversely with the square of the minimum gap, by the chain of approximations in
Eq. (5.14), we have
T ≳1
g2
0 =
1 +
1
γ2
L
mL
1 +
1
γ2
R
mR > 2mL+mR.
This completes the reduction from the full Hamiltonian H to the effective 2 × 2 model, providing the expo-
nential runtime bound.
6
Conclusion
This work presents an analytical framework for identifying and quantifying tunneling-induced anti-crossing in
stoquastic transverse-field quantum annealing (TFQA), based on a structured class of Maximum Independent
Set instances. Our analysis reformulates the effective Hamiltonian in a non-orthogonal basis and derives an
exponentially small gap using a generalized eigenvalue approach that remains valid beyond the small transverse
field regime. More broadly, our findings reveal the structural origin of tunneling-induced anti-crossings. Once
the system localizes near critical local minima—as captured in the reduced form Hcore—a tunneling transition to
the global minimum becomes unavoidable, and the resulting gap is exponentially small. This same localization
and tunneling mechanism may also arise in less structured graphs.
It has been claimed that non-stoquasticity may not be essential for achieving quantum speedup [22, 23,
24]. Indeed, simply introducing a non-stoquastic driver is not sufficient to bypass the tunneling bottleneck. For
example, if Jzz is set to be too large, the evolution will fail the structural steering and encounter a tunneling-
induced anti-crossing instead (as illustrated by an example in Figure 16 of [1]).
In contrast to this work, a recent study [18], which may be of theoretical interest on its own, investigated un-
structured adiabatic quantum optimization, aiming to recover Grover-like quadratic speedup [15, 16]. However,
as we note in the main paper (Section 3.1 in [1]), a simple classical algorithm [17] can already solve the MIS
problem in time O(2n/3), outperforming the Grover-like O(2n/2) speedup.
This highlights the importance of algorithmic design and analysis, even for adiabatic quantum algorithms
in order to achieve quantum advantage. We note that the tunneling-induced anti-crossings analyzed here play
a constructive role in the DIC-DAC-DOA algorithm: a polynomial-time TFQA evolution through such an anti-
crossing identifies configurations in the set of the degenerate local minima, which are then used as seeds for
constructing the non-stoquastic XX-driver graph.
We hope that the structural insights developed here may guide future work in rigorously identifying similar
bottlenecks in broader classes of problem instances, and in clarifying the fundamental limitations of stoquastic
quantum annealing beyond heuristic or empirical claims.
33
Acknowledgment
This work was written by the author with the help of ChatGPT (OpenAI), which assisted in refining the presenta-
tion and in expressing the intended ideas with greater clarity and precision. The author thanks Jamie Kerman for
introducing her to the angular-momentum basis, for discussions, and for the idea of deriving a perturbative bound
on the anti-crossing gap by constructing an effective Hamiltonian in the non-orthogonal basis, as in [20]. We
recognize that this manuscript may not cite all relevant literature. If you believe your work should be included in a
future version, please do not hesitate to contact the author with appropriate references. The author acknowledges
support from the Defense Advanced Research Projects Agency under Air Force Contract No. FA8702-15-D-
0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the
authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency.
References
[1] V. Choi. Beyond Stoquasticity: Structural Steering and Interference in Quantum Optimization, 2025.
[2] V. Choi. Essentiality of the Non-stoquastic Hamiltonians and Driver Graph Design in Quantum Optimiza-
tion Annealing. arXiv:2105.02110v2 [quant-ph], 2021.
[3] V. Choi. Constructing and Programming Driver Graphs in Quantum Hardware for Non-Stoquastic Quan-
tum Optimization Annealing Processes. U.S. Patent No. 12,001,924 B2, issued June 4, 2024.
[4] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser.
Quantum computation by adiabatic evolution.
arXiv:quant-ph/0001106, 2000.
[5] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution
algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472–475, 2001.
[6] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation
is equivalent to standard quantum computation. SIAM Journal on Computing, 37(1):166–194, 2007.
[7] T. Albash and D. A. Lidar. Adiabatic quantum computation. Rev. Mod. Phys., 90:015002, 2018.
[8] A. Elhashash and D. B. Szyld. On general matrices having the Perron–Frobenius property. Electronic
Journal of Linear Algebra, 17:389–402, 2008.
[9] A. J. Kerman. Effective Hamiltonian perturbation theory for the tunneling gap in quantum annealing of
structured MIS problems. To be published.
[10] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM Journal on
Computing, 35(5):1070–1097, 2006.
[11] M. H. S. Amin and V. Choi. First-order phase transition in adiabatic quantum computation. Phys. Rev. A,
80(6):062326, 2009. arXiv:0904.1387 [quant-ph].
[12] B. Altshuler, H. Krovi, and J. Roland. Anderson localization makes adiabatic quantum optimization fail.
Proceedings of the National Academy of Sciences, 107:12446–12450, 2010.
[13] V. Choi. Minor-embedding in adiabatic quantum computation: I. The parameter setting problem. Quan-
tum Information Processing, 7:193–209, 2008.
34
[14] V. Choi. The Effects of the Problem Hamiltonian Parameters on the Minimum Spectral Gap in Adiabatic
Quantum Optimization. Quantum Inf. Processing., 19:90, 2020. arXiv:quant-ph/1910.02985.
[15] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th
Annual ACM Symposium on the Theory of Computing (STOC), pages 212–219, 1996.
[16] J. Roland and N. J. Cerf. Quantum search by local adiabatic evolution. Phys. Rev. A, 65:042308, 2002.
[17] R. E. Tarjan and A. E. Trojanowski. Finding a maximum independent set. SIAM Journal on Computing,
6(3):537–546, 1977.
[18] A. Braida, S. Chakraborty, A. Chaudhuri, J. Cunningham, R. Menavlikar, L. Novo, and J. Roland.
Unstructured Adiabatic Quantum Optimization: Optimality with Limitations. Quantum, 9:1790, 2025.
arXiv:2411.05736v3 [quant-ph].
[19] V. Choi. Different adiabatic quantum optimization algorithms for the NP-complete exact cover and 3SAT
problems. Quantum Info. Comput., 11(7–8):638–648, July 2011.
[20] P. C. P. de Andrade and J. Freire. Effective Hamiltonians for the nonorthogonal basis set. Journal of
Chemical Physics, 118(15):6733–6740, April 15, 2003.
[21] M. Jarret and S. P. Jordan. Adiabatic optimization without local minima. Phys. Rev. A, 89:022341, 2014.
[22] T. Albash. Role of non-stoquastic catalysts in quantum adiabatic optimization. Phys. Rev. A, 99:042334,
2019.
[23] E. Crosson, T. Albash, I. Hen, and A. P. Young. De-signing Hamiltonians for quantum adiabatic optimiza-
tion. Quantum, 4:334, 2020.
[24] E. J. Crosson and D. A. Lidar. Prospects for quantum enhancement with diabatic quantum annealing. Nat.
Rev. Phys., 3:466, 2021. arXiv:2008.09913v1 [quant-ph]
Appendix A: Font Conventions for Notation
We adopt the following conventions throughout:
• Hilbert space / Subspace / Basis: calligraphic, e.g. V, B.
• Hamiltonian / Matrix: blackboard bold, e.g. H, B.
• Time-dependent quantity: typewriter, e.g. x := x(t), jxx := jxx(t).
• Named object / Abbreviation: capital typewriter, e.g. LM, GM, MLIS.
35
|
Limitation of Stoquastic Quantum Annealing: A Structural Perspective Vicky Choi Gladiolus Veritatis Consulting Co.* September 23, 2025 Abstract We analyze the behavior of stoquastic transverse-field quantum annealing (TFQA) on a structured class of Maximum Independent Set (MIS) instances, using the same decomposition framework developed in our companion work on the DIC-DAC-DOA algorithm (Beyond Stoquasticity) [1]. For these instances, we provide a structural explanation for the anti-crossing arising from the competition between the energies associated with a set of degenerate local minima (LM) and the global minimum (GM), and analytically derive the associated exponentially small gap. Our analysis proceeds in two steps. First, we reduce the dynamics to an effective two-block Hamiltonian Hcore, constructed from the bare (decoupled) subsystems associated with the LM and GM. This reduction is justified analytically using the structural decomposition. Second, we reformulate the eigenvalue problem as a generalized eigenvalue problem in a non-orthogonal basis constructed from the bare eigenstates of the subsystems. This transformation enables a clean perturbative treatment of the anti-crossing structure, independent of the transverse field-unlike standard perturbation theory approach, which requires treating the transverse field as a small parameter. This paper serves as a supplementary companion to our main work on the DIC-DAC-DOA algorithm [1], where we demonstrate how appropriately designed non-stoquastic drivers can bypass this tunneling-induced bottleneck. *https://www.vc-gladius.com 1 18 Sep 2025 Contents 1 Introduction 2 2 Background and Preliminaries 4 2.1 Same-Sign vs Opposite-Sign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Two Fundamental Bipartite Substructure Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 Anti-Crossing and the Two-Level Hamiltonian B(w, x) . . . . . . . . . . . . . . . . . . . . . . 6 2.3.1 Level Repulsion vs. Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.3.2 Anti-crossing in a Multi-level System . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.3.3 The Basic Matrix B(w, x): Eigenvalues and Eigenstates . . . . . . . . . . . . . . . . . 9 3 Reduction to Same-Sign Block in the Disjoint-Structure Case 9 3.1 Reduction to the Analysis of the Low-Energy Hamiltonian . . . . . . . . . . . . . . . . . . . . 9 3.2 Angular-Momentum Based Decomposition of the Hlow . . . . . . . . . . . . . . . . . . . . . . 11 3.2.1 Single-clique Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 3.2.2 Block Decomposition of Hlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 4 Inner Decomposition of the Same-sign Block 15 4.0.1 Two Block Decompositions of HC: L-Inner vs. R-Inner . . . . . . . . . . . . . . . . . 15 4.0.2 Localization and Effective Hamiltonian Hcore . . . . . . . . . . . . . . . . . . . . . . . 17 5 Detailed Analysis of Hcore 18 5.1 Bare Eigenstates of Hcore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 5.2 Generalized Eigenvalue Reformulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 5.3 Perturbative Structure of the Generalized Eigenvalue System . . . . . . . . . . . . . . . . . . . 22 5.3.1 Perturbative Approximation Scheme for the True Energies . . . . . . . . . . . . . . . . 23 5.3.2 Exact Bare Eigenvalues and Eigenstates and Some Combinatorial Facts . . . . . . . . . 26 5.3.3 Anti-Crossing Structure: Energy Approximation and Gap Bound . . . . . . . . . . . . . 28 6 Conclusion 33 1 Introduction This work analyzes the performance limitation of stoquastic quantum annealing (TFQA) [4, 5, 6, 7] on a structured class of Maximum Independent Set (MIS) instances, using the same structural framework developed in the companion work on the DIC-DAC-DOA algorithm [1]. In the absence of the XX-driver (Jxx = 0), the TFQA system evolves under a fully stoquastic Hamiltonian. The corresponding Hamiltonian takes the form 1 : H(t) = x(t)HX + Hproblem, 1For completeness, we recall that the full algorithm may begin with an optional Stage 0, whose sole purpose is to set the parameters of the problem Hamiltonian before the main evolution begins. The Stage 0 Hamiltonian (without the XX-driver term) is H0(t) = x(t)HX + p(t)Hproblem, t ∈[0, 1], with parameter schedules x(t) = (1 -t)(Γ0 -Γ1) + Γ1, p(t) = t. During this phase, the problem parameters w and Jzz are ramped to their final values as p(t) increases from 0 to 1. Stage 0 plays no role in the present analysis and may be omitted in practice. 2 where HX = -P i σx i , and Hproblem = P i∈V(G)(-wi) ̃σz i +Jzz P (i,j)∈E(G) ̃σz i ̃σz j is the MIS-Ising Hamiltonian, with the shifted-σz operator ̃σz i := I+σz i 2 . Here, we take wi = w ≡1 for the unweighted MIS problem, and note that Jzz is required to be at least w. This is precisely the system Hamiltonian used in the DIC-DAC-DOA algorithm [1], when the non-stoquastic XX-driver is turned off, i.e., Jxx = 0. The annealing schedule simplifies to a single-stage evolution, i.e. there is no need to distinguish between Stage 1 and Stage 2, with a linearly decreasing transverse field, x(t) = (1 -t)Γ1. The system Hamiltonian is stoquastic in the computational basis. A widely observed but not fully justified limitation of stoquastic TFQA is the occurrence of an anti-crossing arising from the competition between the energies associated with a set of degenerate local minima (LM) and the global minimum (GM). While such anti-crossings have long been explained using perturbative arguments in the small transverse-field regime (e.g., [11, 12]), we provide a structural approach that enables a clean perturbative treatment of the anti-crossing structure, independent of the transverse field, yielding an analytic derivation of the exponentially small gap. In particular, we show that the relevant dynamics reduce to an effective two-block Hamiltonian Hcore, formed from bare subsystems associated with the LM and GM. This reduction follows the same angular-momentum-based decomposition of the Hamiltonian, derived from the angular momentum structure of the cliques associated with the LM2 as developed in the main paper, with two additional refinements: (1) we justify that the disjoint case suffices for analysis, and (2) we apply the L-inner decomposition to derive the effective Hamiltonian Hcore. For completeness, we include the necessary angular momentum decomposition in this paper. Intuitively, Hcore can be understood in terms of two embedded bare subsystems (an L-subsystem and an R-subsystem) coupled only through the empty-set state. Because of the additional transverse-field contribution in the L-subsystem, the system initially localizes in the L-block. The transition to the R-block (which supports the global minimum GM) occurs through a tunneling-induced anti-crossing, resulting in an exponentially small gap. While this intuition is conceptually clear, the analytic derivation is nontrivial and requires identifying the appropriate representation to capture the mechanism structurally. Our analysis of Hcore is structurally guided and departs from conventional perturbation theory. It does not rely on a small transverse field x; instead, it reformulates the full Hamiltonian Hcore in the basis of embedded bare eigenstates, where the overlap structure induces a generalized eigenvalue problem. We then express this system in a perturbative form whose perturbation term is independent of x, allowing the analysis to proceed without treating x as a small parameter. Within this framework, we show that the bare energy levels provide an accurate approximation to the true spectrum of Hcore, with energy crossings between bare energy levels replaced by tunneling-induced anti-crossings. In particular, this structure enables a perturbative treatment both away from and near the anti-crossing: we approximate the true ground energy using first-order corrections to the bare energies, and construct a reduced 2 × 2 effective Hamiltonian on the shared two-level subspace to derive a perturbative bound on the gap. The resulting bound is exponentially small in system size, confirming that the structural constraints of stoquastic TFQA lead to tunneling-induced bottlenecks that limit algorithmic performance on this family of instances. Our analysis also reveals the structural origin of tunneling-induced anti-crossings. In particular, once the evolution enters a regime where the system is localized around critical local minima, a tunneling-induced anti-crossing is inevitable. In the Jxx = 0 case, the value of Jzz can be made arbitrarily large without altering the qualitative dynamics, although the anti-crossing location shifts earlier as Jzz increases. This shift occurs because larger Jzz accelerates localization within the same-sign block, allowing the effective dynamics to be more accurately captured by Hcore at earlier times. However, if one sets Jzz to be very large in the presence of an XX-driver term with Jxx > 0, such 2Although in the TFQA case (Jxx = 0) we do not explicitly identify the cliques for constructing the XX-driver graph, the cliques associated with the critical local minima exist and are used solely for analytical purposes; they do not need to be constructively identified. 3 that Jxx ≪Jzz,3 then by the cut-off Theorem 8.3 in [1], the effective Hamiltonian at the beginning of Stage 1 is already approximately Hcore, leaving no opportunity for successful structural steering. In this case, the system localizes in the L-region and transitions to the R-region via tunneling, merely shifting the anti-crossing earlier, as illustrated by an example in Figure 16 of [1]. See also [9], which addresses the case Jzz →∞. Finally we remark that the same localization and tunneling mechanism may arise in less structured graphs as well, where degenerate local minima confine the dynamics in an analogous way. Through our detailed analysis, we hope to shed enough light on this anti-crossing mechanism to provide a structural basis for a more careful assessment of claimed stoquastic speedups. This paper is organized as follows. In Section 2, we review the prerequisites for our analysis, including the definitions of same-sign and opposite-sign blocks, the two bipartite substructure graphs (Gdis and Gshare), and the definition of an anti-crossing, along with the basic matrix B(w, x) that will be used throughout our analysis. In Section 3, we show that the gap analysis can be restricted to the same-sign block of the disjoint-structure case. In Section 4, we present the inner decompositions of the same-sign block and elaborate on the L-inner decomposition, which reveals localization to the region associated with local minima (L-localization), and derive the effective Hamiltonian Hcore. In Section 5, we give a detailed analysis of Hcore. We conclude with discussion in Section 6. 2 Background and Preliminaries In this section, we review the prerequisites for our analysis: (1) the definitions of same-sign and opposite-sign states, sectors, and blocks (Section 2.1); (2) the two fundamental bipartite substructure graphs, distinguishing between the disjoint-structure case Gdis and the shared-structure case Gshare (Section 2.2); and (3) the definition of anti-crossing together with the solution for the basic matrix B(w, x) (Section 2.3). The material in this section is adopted from the main paper [1], and is included here for completeness and to keep this paper self-contained. 2.1 Same-Sign vs Opposite-Sign The sign structure of quantum states plays a central role in our analysis. Throughout this paper, all Hamiltonians are real and Hermitian, so we restrict attention to quantum states with real-valued amplitudes: each component has phase 0 (positive) or π (negative). Definition 2.1. Let |ψ⟩= P x∈B ψ(x)|x⟩be a quantum state with real amplitudes in a basis B. • |ψ⟩is called a same-sign state if ψ(x) ≥0 for all x ∈B. That is, all components are in phase (with relative phase 0). • |ψ⟩is called an opposite-sign state if there exist x, x′ ∈B such that ψ(x) > 0 and ψ(x′) mg, so that an anti-crossing is induced by the competition between LM and GM. 2.3 Anti-Crossing and the Two-Level Hamiltonian B(w, x) The term anti-crossing is sometimes used loosely, so we begin with a precise notion in the two-level case, then extend it to multi-level systems. We also introduce a canonical two-level Hamiltonian whose eigensystem will be used throughout our analysis. 2.3.1 Level Repulsion vs. Anti-Crossing We begin by distinguishing the concept of an anti-crossing (also called avoided-crossing) from level repulsion. Consider a generic two-level Hamiltonian of the form H(x) := e1(x) v(x) v(x) e2(x) , where e1(x), e2(x), and v(x) are real-valued functions of a parameter x. The eigenvalues of this Hamiltonian are λ±(x) = e1(x)+e2(x) 2 ± 1 2 p (e1(x) -e2(x))2 + 4v(x)2, and the energy gap between them is ∆(x) := λ+(x) -λ-(x) = p (e1(x) -e2(x))2 + 4v(x)2. The off-diagonal term v(x) induces level repulsion: if v(x) ̸= 0, then the eigenvalues never cross, and the gap ∆(x) remains strictly positive. Thus, assuming the off-diagonal coupling v(x) is nonzero, level repulsion is always present. Definition 2.4. We say that an anti-crossing occurs when the two unperturbed energy levels e1(x) and e2(x) cross, i.e., e1(x∗) = e2(x∗) for some x∗, and the off-diagonal coupling v(x∗) ̸= 0. In this case the eigenvalue curves form an anti-crossing with gap ∆min = 2|v(x∗)|. 6 1 2 4 3 a 5 6 8 7 c b (a) The LM and GM are vertex-disjoint. 1 2 4 3 a 5 6 8 7 c b (b) The GM shares exactly one vertex with each clique in the LM. Figure 1: Example graphs illustrating the Gdis and Gshare structures. Recall that each LM here is structurally a dMIC. (a) Disjoint-structure graph Gdis: The set L consists of ml = 2 disjoint cliques, each of size n1 = n2 = 4, with their vertices (pink) forming the local minima (LM). The set R (blue) consists of mr = 3 independent vertices, forming the global minimum (GM). (b) Shared-structure graph Gshare: The set L again consists of two cliques of size n1 = n2 = 4, with pink and purple vertices. The purple vertices (one per clique) are shared between the LM and the GM. The set R (blue) contains mr = 3 independent vertices. The global minimum consists of all vertices in R, together with the shared purple vertices in L, giving mg = 5. In both cases, edges between the pink vertices in L and all vertices in R are complete, though not all are shown for visual clarity. The size of the anti-crossing gap depends on |v(x∗)|: stronger coupling leads to a larger gap, while weaker coupling results in a narrower one. By contrast, if the two diagonal entries e1(x) and e2(x) remain well separated for all x, then the system exhibits level repulsion but not an anti-crossing. Figure 3 illustrates an example of level repulsion without an anti-crossing. The eigenvectors of the two-level Hamiltonian are given by |λ-(x)⟩= cos θ(x) |0⟩+ sin θ(x) |1⟩, |λ+(x)⟩= -sin θ(x) |0⟩+ cos θ(x) |1⟩, where the mixing angle θ(x) satisfies tan(2θ(x)) = 2v(x) e1(x)-e2(x). Thus, near the anti-crossing point x = x∗, the eigenstates interpolate between the unperturbed basis states. Remark 2.5. The trigonometric expression for eigenvectors in terms of the mixing angle θ(x), is equivalent to the rational-form representation |λ-(x)⟩= 1 √ 1+γ(x)2 (γ(x) |0⟩+ |1⟩) , |λ+(x)⟩= 1 √ 1+γ(x)2 (|0⟩-γ(x) |1⟩) , 7 where the two parametrizations are related by γ(x) = 1 tan θ(x), and tan(2θ(x)) = 2v(x) e1(x)-e2(x). This rationalform expression is particularly useful for our analysis, as it aligns directly with the basic matrix form introduced below. Our earlier work [2, 14], including the development of the original DIC-DAC-DOA algorithm, was motivated by investigating the structural characteristics of eigenstates around the anti-crossing. 2.3.2 Anti-crossing in a Multi-level System In a multi-level system, the notion of an anti-crossing extends naturally by restricting the Hamiltonian to the twodimensional subspace spanned by the pair of eigenstates whose unperturbed energies intersect. This reduction yields a 2 × 2 effective Hamiltonian that captures the essential structure of the anti-crossing, including both the energy gap and the interpolating behavior of the eigenstates. Thus, the same framework as in the two-level case applies. With this perspective, we refine the definition of an (L, R)-anti-crossing given in [2]. Recall that EA 0 (t) denotes the ground state energy of the Hamiltonian HA(t) projected to the subspace spanned by the subsets of A. In particular, ELM 0 (t) and EGM 0 (t) denote the bare energies associated with LM (where A = S M∈LM M) and GM, respectively. Definition 2.6. We say an anti-crossing is an (L, R)-anti-crossing at t∗if there exist bare energies EL 0 (t) and ER 0 (t) such that: 1. EL 0 (t) and ER 0 (t) approximate the unperturbed energy levels of the effective 2 × 2 Hamiltonian describing the anti-crossing for t ∈[t∗-δ, t∗+ δ] for some small δ > 0; and 2. EL 0 (t) and ER 0 (t) cross at t∗, i.e. EL 0 (t∗) = ER 0 (t∗). See Figure 2 for an illustration. t* ER 0 (t) EL 0 (t) Figure 2: Schematic of an (L, R)-anti-crossing. Dashed lines: bare energies EL 0 (t) and ER 0 (t) crossing at t∗. Solid gray curves: the two lowest eigenvalues of the Hamiltonian, showing the avoided crossing that originates from this bare crossing. Remark 2.7. In the companion work [1], the absence of an anti-crossing can be inferred directly from the noncrossing of the corresponding bare energy levels, without explicitly constructing the effective 2 × 2 Hamiltonian. By contrast, in the present work we evaluate the anti-crossing gap by constructing the effective Hamiltonian explicitly. 8 Remark 2.8. In [21], it was observed that an exponentially small gap occurs "if and only if a ground state consists of two 'lobes' with exponentially small amplitude in the region between them." This two-lobe structure corresponds naturally to the two arms of an (L, R)-anti-crossing. [As shown in the right panel of Figure 9, the ground state undergoes a sharp exchange at the anti-crossing point. If one plots the ground state wavefunction at the anti-crossing, it exhibits two lobes: one localized on the L region and the other on R region (with the suppressed region between them quantified by the overlap g0 = ⟨L0|R0⟩).] Finally, we remark that while the Cheeger inequalities (used, e.g., in [6] to give a large lower bound on the spectral gap) appear to depend only on the ground state wavefunction, a closer examination reveals that it actually bound the first excited state implicitly. In this sense, the anti-crossing gap size derived from a 2 × 2 effective Hamiltonian is more descriptive and also gives a tighter bound. 2.3.3 The Basic Matrix B(w, x): Eigenvalues and Eigenstates We define the following effective two-level Hamiltonian, which will serve as a basic building block for our analysis throughout the paper: B(w, x) := -w -1 2x -1 2x 0 , (2.1) where w = w(t) and x = x(t) are real-valued parameters, typically derived from problem Hamiltonians and driver strengths. This is a special case of a spin-1 2 system, with analytic eigenstructure. The eigenvalues of B(w, x) are βk = -1 2 w + (-1)kp w2 + x2 , k = 0, 1, (2.2) with normalized eigenvectors |β0⟩= 1 √ 1+γ2 γ |0⟩+ |1⟩ , |β1⟩= 1 √ 1+γ2 |0⟩-γ |1⟩ , (2.3) where the mixing coefficient is γ = x w+ √ w2+x2 . Figure 3 visualizes the eigenspectrum and ground state behavior under variation of x and w. 3 Reduction to Same-Sign Block in the Disjoint-Structure Case In this section, we establish that it is sufficient to restrict our analysis to the same-sign block and to focus on the disjoint case. We first express the full Hamiltonian in block form with respect to the low- and high-energy subspaces of VL, and justify that for our purposes it suffices to analyze the projected Hamiltonian Hlow acting on L-in Section 3.1. Then we describe the angular-momentum based decomposition of Hlow in in Section 3.2. 3.1 Reduction to the Analysis of the Low-Energy Hamiltonian Let VL be the Hilbert space of all vertices in the union of cliques in L. This space decomposes into: • a low-energy subspace, spanned by all independent-set states within L; • a high-energy subspace, spanned by all dependent-set states within L. 9 Figure 3: Eigenvalues of the two-level Hamiltonian B(w, x), where βk = -1 2 w + (-1)k√ w2 + x2 , k = 0, 1. w = 1. The ground state energy (β0) is shown in black, and the first excited state energy (β1) is shown in blue. The energy gap widens as x increases-there is no anti-crossing in this case. The ground state is |β0⟩= 1 √ 1+γ2 (γ |0⟩+ |1⟩), with γ = x w+ √ w2+x2 . |0⟩= |↑⟩and |1⟩= |↓⟩. We define L-:= (low-energy subspace of VL) ⊗VR, L+ := (high-energy subspace of VL) ⊗VR. Here, L+ consists of states containing at least one intra-clique edge and thus incurring the Jclique zz penalty. Let Π-and Π+ be the orthogonal projectors onto L-and L+, respectively. With respect to this decomposition, the system Hamiltonian can be written in block form: H = Hlow V V† Hhigh , where Hlow and Hhigh are the projections into L-and L+, respectively. In the main paper, we showed that by the end of Stage 0, if Jclique zz is chosen sufficiently large so that all states in L+ lie well above those in L-in energy, the Stage 1 evolution is governed exactly by the restricted Hamiltonian Heff 1 = Hlow. The subsequent analysis can therefore focus entirely on Hlow. In the present Jxx = 0 setting, we have not identified the XX-driver graph explicitly, and we set Jclique zz = Jzz, so we cannot assume an energetic cut-off between L+ and L-. Nevertheless, removing higher-energy subspaces from a Hamiltonian cannot decrease the spectral gap, since doing so can only eliminate potential low-lying excitations. Consequently, if Hlow already has a small gap, the full Hamiltonian H can have a gap no larger. Because the ground state lies in L-prior to the anti-crossing, we have ∆(H) ≤∆(Hlow). This allows us to use Hlow to obtain an upper bound on the spectral gap of the full Hamiltonian. Thus, in both cases-whether with or without the XX-driver-the subsequent analysis focuses on Heff 1 = Hlow. Having reduced our focus to the low-energy subspace L-, we next construct an angular momentum basis that reveals the block structure of Hlow and sets up the tunneling analysis. 10 3.2 Angular-Momentum Based Decomposition of the Hlow Our construction starts from the local angular momentum structure of a single clique, restricted to the low-energy subspace (see also [9]). We then extend this to a global angular-momentum basis for L-, in which Hlow becomes block-diagonal, separating same-sign and opposite-sign blocks. The construction proceeds hierarchically: • Single-clique decomposition. We analyze the low-energy subspace of a single clique in detail, introducing its total angular momentum basis, identifying the same-sign and opposite-sign states, and deriving the block structure of the restricted Hamiltonian (Section 3.2.1). • Global block decomposition. By tensoring the single-clique bases over all cliques in L and combining with VR, we obtain a global basis in which Hlow takes a block-diagonal form (Section 3.2.2). 3.2.1 Single-clique Decomposition In this section, we focus on analyzing the Hilbert space associated with a single clique. We begin with a single clique, since each clique in L exhibits the same local angular momentum structure; this will serve as the building block for the global block decomposition in Section 3.2.2. To fix notation, let Gc = (V, E) be a clique, where V = {1, . . . , nc} is the set of vertices, and E = {(i, j) : i mL := ml but mR 1 (i.e. away from the anti-crossing), so that a first-order correction suffices. Then the corrected ground state energies can be written explicitly as weighted sums over overlaps and energy gaps between the L and R blocks, as given in Proposition 5.14. Proposition 5.14. Suppose that before the anti-crossing, eL0 1, so that a first-order correction suffices. Let E0 denote the true ground state energy of Hcore. • Before the anti-crossing: E0 -eL0 ≈-2eL0 X j eRj eRj -eL0 ⟨L0|Rj⟩ 2 . • After the anti-crossing: E0 -eR0 ≈-2eR0 X j eLj eLj -eR0 ⟨R0|Lj⟩ 2 . Proof. We apply the correction formula in Eq. (5.5) from Corollary 5.8 to the ground state energy before and after the anti-crossing: 28 • Before the anti-crossing: E0 -eL0 ≈ X j ⟨eL0| (∆H -eL0∆I) |eRj⟩ ⟨eRj| ∆I |eL0⟩- ⟨eRj |∆H|eL0⟩ eRj -eL0 (5.12) • After the anti-crossing: E0 -eR0 ≈ X j ⟨eR0| (∆H -eR0∆I) |eLj⟩ ⟨eLj| ∆I |eR0⟩- ⟨eLj |∆H|eR0⟩ eLj -eR0 (5.13) We now use the identities ⟨eLi| ∆I |eRj⟩= ⟨Li|Rj⟩, ⟨eLi| ∆H |eRj⟩= (eLi + eRj) ⟨Li|Rj⟩, where |Li⟩and |Rj⟩denote the normalized bare eigenstates in the L and R subsystems, respectively. Substituting into the above yields the result. We justify that the first-order correction to eL0 is small by analyzing the weighted sum: X j eRj eRj -eL0 ⟨Rj | 0R⟩ 2 . By Lemma 5.13, the unweighted sum is exactly zero: P j eRj ⟨Rj | 0R⟩ 2 = 0. The weighting factor 1 eRj -eL0 is smooth and strictly positive across the spectrum, since eRj > eL0 for all j before the anti-crossing. Thus, the weighted sum remains bounded in magnitude and does not grow with system size. Moreover, the prefactor (⟨L0 | 0L⟩)2 = co(L0)2 is exponentially small in mL, so the total correction is exponentially suppressed: E0 - eL0 ≈-2eL0 ·co(L0)2·[bounded sum] . Hence, the correction is negligible, and we conclude that E0 ≈eL0. This justifies the approximation E0 ≈eL0 before the anti-crossing, and similarly E0 ≈eR0 after the anti-crossing, completing the perturbative approximation scheme. See Figure 8 for numerical confirmation. Furthermore, the actual anti-crossing point is well-approximated by the crossing point of the bare energies, as illustrated in Figure 9. Effective 2 × 2 Hamiltonian and Gap Bound We now approximate the size of the anti-crossing gap between the degenerate bare states |eL0⟩and |eR0⟩, by reducing the full generalized eigenvalue problem to an effective 2 × 2 Hamiltonian on the subspace they span. Combining the structural reduction steps established above, we have the chain of approximations: ∆(H) ≤∆(Hlow) ≈∆(HC) ≈∆(Hcore) ≈∆(H(2×2) eff ) (5.14) which shows that it suffices to approximate the gap of the final 2×2 Hamiltonian H(2×2) eff . We now derive H(2×2) eff explicitly, starting from the projected generalized eigenvalue problem. This approach is similar to [20]. See also [9]. We construct a two-level approximation by projecting the generalized eigenvalue problem onto the subspace P = span{|L0⟩, |R0⟩}, 29 Figure 8: Comparison of the exact energy spectrum of the effective Hamiltonian Hcore with the bare energy spectra of HL (magenta dashed) and HR (blue dashed). The top-left panel shows the true spectrum of Hcore; the top-right and bottom-left show the bare spectra of HL and HR, respectively. The bottom-right panel overlays the bare and true spectra. As seen, the true energy (solid gray) of Hcore closely follows the bare energy (dashed), and the bare-level crossing is replaced by an anti-crossing. 30 Figure 9: Anti-crossing illustration. Left: Energy spectrum comparison between the bare Hamiltonians H(0) L and H(0) R (dashed magenta and blue), and the coupled Hamiltonian Hcore (solid gray). The anti-crossing occurs near the point where the bare energies cross. Right: Ground state projection onto the same two blocks. The ground state is concentrated on H(0) L (magenta) before the anti-crossing, and H(0) R (blue) after the anti-crossing. where |L0⟩and |R0⟩are the padded bare eigenstates of H(0) L and H(0) R , respectively. Projecting the generalized eigenvalue problem onto P yields the intermediate effective Hamiltonian H0 eff in generalized form; converting it to standard form by left-multiplying with S-1 PP produces the final 2 × 2 Hamiltonian H(2×2) eff . Lemma 5.15. The effective Hamiltonian for the subspace P is given by H0 eff = HPP + (e0SPQ -HPQ)(e0SQQ -HQQ)-1(e0SQP -HQP ), where e0 = eL0(0) = eR0(0), Q = P ⊥. The eigenvalues of the pair (H0 eff, SPP ) approximate the ground and first excited energies near the anti-crossing. Proof. The perturbed generalized eigenvalue equation is: (H + ∆H) |Φ⟩= λ(I + ∆I) |Φ⟩. (5.15) Write the eigenvector as |Φ⟩= X p∈P cp |p⟩+ X q∈Q cq |q⟩, and take inner products to obtain the generalized eigenvalue problem in matrix form: HPP HPQ HQP HQQ cP cQ = λ SPP SPQ SQP SQQ cP cQ , (5.16) where λ = e0 + ∆λ. From Eq. (5.16), we obtain: (HPP -λSPP )cP = (λSPQ -HPQ)cQ, (HQP -λSQP )cP = (λSQQ -HQQ)cQ. 31 We solve the second equation for cQ: cQ = (λSQQ -HQQ)-1(HQP -λSQP )cP . Substitute into the first equation: (HPP -λSPP )cP = (λSPQ -HPQ)(λSQQ -HQQ)-1(HQP -λSQP )cP . =⇒ HPP + (λSPQ -HPQ)(λSQQ -HQQ)-1(λSQP -HQP ) cP = λSPP cP . Since λ = e0 + ∆λ and |∆λ| ≪|e0|, we may approximate the terms λSPQ -HPQ, λSQQ -HQQ, and λSQP -HQP by their counterparts evaluated at λ = e0, provided that the overlaps ⟨L0|Rj⟩are small. Concretely, the matrix element ⟨L0| (λSPQ -HPQ) |Rj⟩= ∆λ ⟨L0|Rj⟩-eRj ⟨L0|Rj⟩, differs from the e0-version only by a term of order ∆λ · ⟨L0|Rj⟩, which is negligible under the assumption that both ∆λ ≪|e0| and ⟨L0|Rj⟩≪1. Thus, we may replace λ by e0 in the effective Hamiltonian to leading order. Substituting λ ≈e0, we obtain the effective Hamiltonian: H0 effcP ≈λSPP cP , where H0 eff = HPP + (e0SPQ -HPQ)(e0SQQ -HQQ)-1(e0SQP -HQP ). To understand the leading-order contribution to the anti-crossing gap, we examine the structure of the effective Hamiltonian in the P-subspace. We expand the second-order approximation using D def = e0IQQ -EQQ, V def = ∆HQQ -e0∆IQQ, and approximate the inverse: (e0SQQ-HQQ)-1 ≈D-1+D-1VD-1. Then we have H0 eff ≈HPP +W1+W2, where W1 def = (e0SPQ -HPQ)D-1(e0SQP -HQP ), W2 def = (e0SPQ -HPQ)D-1VD-1(e0SQP -HQP ). The leading term HPP = eL0 (eL0 + eR0)g0 (eL0 + eR0)g0 eR0 = e0 1 2g0 2g0 1 , where e0 = 1 2(eL0 + eR0) and g0 = ⟨L0|R0⟩. The first-order correction W1 = a1 0 0 a1 is diagonal, while the second-order correction W2 = 0 c2 c2 0 is off-diagonal, with c2 ≪e0g0. Combining all terms yields: H0 eff ≈ e0 + a1 2e0g0 + c2 2e0g0 + c2 e0 + a1 , which shows that the off-diagonal coupling is governed primarily by the overlap g0. Finally, we convert the generalized eigenvalue problem to standard form using H(2×2) eff = S-1 PP H0 eff, where SPP = 1 g0 g0 1 , which gives H(2×2) eff = 1 1-g2 0 e0 + a1 -g0(2e0g0 + c2) 2e0g0 + c2 -g0(e0 + a1) 2e0g0 + c2 -g0(e0 + a1) e0 + a1 -g0(2e0g0 + c2) . We thus have the gap approximation based on the assumptions that |a1| ≪|e0| and |c2| ≪|e0g0|: 32 Corollary 5.16 (Gap Approximation). Under the effective two-level approximation, ∆(H(2×2) eff ) ≈2|e0|g0, where e0 = eL0 = eR0 is the degenerate bare energy, and g0 = ⟨L0|R0⟩is the overlap between two bare ground states. Proposition 5.17. The overlap between the two bare ground states is given by g0 := ⟨L0|R0⟩= co(L0) · co(R0) = r γ2 L 1+γ2 L !mL r γ2 R 1+γ2 R !mR , where the amplitude ratios γA = √nAxc w+√ w2+nAx2c ∈(0, 1), for A ∈{L, R}, and xc is the location of the anticrossing, defined implicitly by eL0(xc) = eR0(xc). Since the runtime scales inversely with the square of the minimum gap, by the chain of approximations in Eq. (5.14), we have T ≳1 g2 0 = 1 + 1 γ2 L mL 1 + 1 γ2 R mR > 2mL+mR. This completes the reduction from the full Hamiltonian H to the effective 2 × 2 model, providing the exponential runtime bound. 6 Conclusion This work presents an analytical framework for identifying and quantifying tunneling-induced anti-crossing in stoquastic transverse-field quantum annealing (TFQA), based on a structured class of Maximum Independent Set instances. Our analysis reformulates the effective Hamiltonian in a non-orthogonal basis and derives an exponentially small gap using a generalized eigenvalue approach that remains valid beyond the small transverse field regime. More broadly, our findings reveal the structural origin of tunneling-induced anti-crossings. Once the system localizes near critical local minima-as captured in the reduced form Hcore-a tunneling transition to the global minimum becomes unavoidable, and the resulting gap is exponentially small. This same localization and tunneling mechanism may also arise in less structured graphs. It has been claimed that non-stoquasticity may not be essential for achieving quantum speedup [22, 23, 24]. Indeed, simply introducing a non-stoquastic driver is not sufficient to bypass the tunneling bottleneck. For example, if Jzz is set to be too large, the evolution will fail the structural steering and encounter a tunnelinginduced anti-crossing instead (as illustrated by an example in Figure 16 of [1]). In contrast to this work, a recent study [18], which may be of theoretical interest on its own, investigated unstructured adiabatic quantum optimization, aiming to recover Grover-like quadratic speedup [15, 16]. However, as we note in the main paper (Section 3.1 in [1]), a simple classical algorithm [17] can already solve the MIS problem in time O(2n/3), outperforming the Grover-like O(2n/2) speedup. This highlights the importance of algorithmic design and analysis, even for adiabatic quantum algorithms in order to achieve quantum advantage. We note that the tunneling-induced anti-crossings analyzed here play a constructive role in the DIC-DAC-DOA algorithm: a polynomial-time TFQA evolution through such an anticrossing identifies configurations in the set of the degenerate local minima, which are then used as seeds for constructing the non-stoquastic XX-driver graph. We hope that the structural insights developed here may guide future work in rigorously identifying similar bottlenecks in broader classes of problem instances, and in clarifying the fundamental limitations of stoquastic quantum annealing beyond heuristic or empirical claims. 33 Acknowledgment This work was written by the author with the help of ChatGPT (OpenAI), which assisted in refining the presentation and in expressing the intended ideas with greater clarity and precision. The author thanks Jamie Kerman for introducing her to the angular-momentum basis, for discussions, and for the idea of deriving a perturbative bound on the anti-crossing gap by constructing an effective Hamiltonian in the non-orthogonal basis, as in [20]. We recognize that this manuscript may not cite all relevant literature. If you believe your work should be included in a future version, please do not hesitate to contact the author with appropriate references. The author acknowledges support from the Defense Advanced Research Projects Agency under Air Force Contract No. FA8702-15-D0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency. References [1] V. Choi. Beyond Stoquasticity: Structural Steering and Interference in Quantum Optimization, 2025. [2] V. Choi. Essentiality of the Non-stoquastic Hamiltonians and Driver Graph Design in Quantum Optimization Annealing. , 2021. [3] V. Choi. Constructing and Programming Driver Graphs in Quantum Hardware for Non-Stoquastic Quantum Optimization Annealing Processes. U.S. Patent No. 12,001,924 B2, issued June 4, 2024. [4] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106, 2000. [5] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472-475, 2001. [6] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM Journal on Computing, 37(1):166-194, 2007. [7] T. Albash and D. A. Lidar. Adiabatic quantum computation. Rev. Mod. Phys., 90:015002, 2018. [8] A. Elhashash and D. B. Szyld. On general matrices having the Perron-Frobenius property. Electronic Journal of Linear Algebra, 17:389-402, 2008. [9] A. J. Kerman. Effective Hamiltonian perturbation theory for the tunneling gap in quantum annealing of structured MIS problems. To be published. [10] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM Journal on Computing, 35(5):1070-1097, 2006. [11] M. H. S. Amin and V. Choi. First-order phase transition in adiabatic quantum computation. Phys. Rev. A, 80(6):062326, 2009. . [12] B. Altshuler, H. Krovi, and J. Roland. Anderson localization makes adiabatic quantum optimization fail. Proceedings of the National Academy of Sciences, 107:12446-12450, 2010. [13] V. Choi. Minor-embedding in adiabatic quantum computation: I. The parameter setting problem. Quantum Information Processing, 7:193-209, 2008. 34 [14] V. Choi. The Effects of the Problem Hamiltonian Parameters on the Minimum Spectral Gap in Adiabatic Quantum Optimization. Quantum Inf. Processing., 19:90, 2020. arXiv:quant-ph/1910.02985. [15] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC), pages 212-219, 1996. [16] J. Roland and N. J. Cerf. Quantum search by local adiabatic evolution. Phys. Rev. A, 65:042308, 2002. [17] R. E. Tarjan and A. E. Trojanowski. Finding a maximum independent set. SIAM Journal on Computing, 6(3):537-546, 1977. [18] A. Braida, S. Chakraborty, A. Chaudhuri, J. Cunningham, R. Menavlikar, L. Novo, and J. Roland. Unstructured Adiabatic Quantum Optimization: Optimality with Limitations. Quantum, 9:1790, 2025. . [19] V. Choi. Different adiabatic quantum optimization algorithms for the NP-complete exact cover and 3SAT problems. Quantum Info. Comput., 11(7-8):638-648, July 2011. [20] P. C. P. de Andrade and J. Freire. Effective Hamiltonians for the nonorthogonal basis set. Journal of Chemical Physics, 118(15):6733-6740, April 15, 2003. [21] M. Jarret and S. P. Jordan. Adiabatic optimization without local minima. Phys. Rev. A, 89:022341, 2014. [22] T. Albash. Role of non-stoquastic catalysts in quantum adiabatic optimization. Phys. Rev. A, 99:042334, 2019. [23] E. Crosson, T. Albash, I. Hen, and A. P. Young. De-signing Hamiltonians for quantum adiabatic optimization. Quantum, 4:334, 2020. [24] E. J. Crosson and D. A. Lidar. Prospects for quantum enhancement with diabatic quantum annealing. Nat. Rev. Phys., 3:466, 2021. Appendix A: Font Conventions for Notation We adopt the following conventions throughout: • Hilbert space / Subspace / Basis: calligraphic, e.g. V, B. • Hamiltonian / Matrix: blackboard bold, e.g. H, B. • Time-dependent quantity: typewriter, e.g. x := x(t), jxx := jxx(t). • Named object / Abbreviation: capital typewriter, e.g. LM, GM, MLIS. 35
|
2509.16271
|
Application of X-ray Absorption Spectroscopy with a
laboratory X-ray powder diffractometer
Milen Gateshki
, Charalampos Zarkadas
, and Detlef Beckers
Malvern Panalytical B. V., Lelyweg 1, 7602 EA, Almelo, The Netherlands
Synopsis
A method for high-quality X-ray absorption spectroscopy (XAS) in transmission mode
is proposed that can be easily applied on a laboratory X-ray powder diffractometer.
Such application may contribute to the wider use of this powerful technique in materials
research laboratories.
Abstract
A novel method and experimental configuration are proposed that allow the collection
of high-quality X-ray absorption spectroscopy (XAS) data in transmission mode on a
standard laboratory diffractometer. This configuration makes use of components that
have been developed for application in modern laboratory diffractometers, such as X-
ray tubes with enhanced stability, accurate goniometers and position-sensitive photon-
counting detectors with improved energy resolution. Such approach may extend the
application of XAS in materials research, particularly for users of diffraction methods.
Keywords: Laboratory XAS; XANES; EXAFS; X-ray diffraction; In operando studies
1
Introduction
XAS is a powerful method for analyzing the structure and properties of new materials. It investi-
gates the behavior of the absorption coefficient around the absorption edge of a specific element.
The measured absorption spectrum is usually divided into two ranges: XANES (X-ray absorption
near edge structure), which is up to 50–100 eV above the absorption edge of the investigated atomic
type and provides information about the oxidation state, electronic structure and local environ-
ment of the atom, and EXAFS (Extended X-ray absorption fine structure) that extends from 100
to 500-2000 eV above the edge and probes the short-range ordering of the atoms, i.e. interatomic
distances and coordination numbers. XAS provides complementary information to X-ray diffrac-
tion (XRD) and X-ray fluorescence (XRF) and can be used for relatively complex materials where
1
arXiv:2509.16271v1 [physics.ins-det] 18 Sep 2025
other techniques are not applicable, e.g. liquids and amorphous substances. Compared to the Pair
Distribution Function method (PDF), which is also applied to study short-range ordering in mate-
rials (including amorphous) and gives averaged information for all atomic species, XAS is element
specific and can be used to distinguish between the environments of otherwise similar atoms, e.g.
two types of metal atoms in an alloy or a solid solution.
During the last decade, the interest in laboratory-based XAS methods has continually grown both
in the academic and in the industrial sectors (Zimmermann et al., 2020; Cutsail & DeBeer, 2022).
Nowadays, several companies, such as easyXAFS, Sigray, HP Scpectroscopy, LynXes, RapidXAFS
and Specreation offer a range of dedicated XAS instruments. Two types of experimental set-ups
are most commonly found in XAS instruments: the Rowland circle geometry and the von H´amos
geometry. Both focus the X-ray beam from the source to the position of the detector. The Rowland
circle geometry uses a cylindrically or spherically bent crystal analyzer (Mottram et al., 2020; Lutz
& Fittschen, 2020; Jahrman et al., 2019a; Holden et al., 2017; Seidler et al., 2014; Seidler et al., 2016;
Honkanen et al., 2019; Honkanen et al., 2014), while the von H´amos geometry employs a crystal that
is cylindrically bent in transverse direction (v. H´amos, 1932; v. H´amos, 1933; Schlesiger et al., 2020;
N´emeth et al., 2016; Szlachetko et al., 2012; Zeeshan et al., 2019; Alonso-Mori et al., 2012; Ismail
et al., 2021). Detailed discussions of these geometries and their corresponding advantages in XAS
experiments can be found in the referenced publications. Focusing geometries can significantly
increase the intensity of the measured signal, however they require instruments with increased
mechanical complexity. Moreover, they are not compatible with the architecture of laboratory X-
ray powder diffractometers, which are based on highly accurate goniometers with the X-ray source
mounted on one arm of the goniometer and the X-ray detector mounted on the second arm.
2
XAS measurement configuration
In this work, we propose a configuration for XAS that uses a flat analyzer crystal and can be
easily applied on a standard laboratory X-ray powder diffractometer. A schematic representation
of this set-up is shown in Fig. 1. The figure gives a side view of the configuration, which uses an
XRD tube with a long fine focus (LFF) and an one-dimensional (strip) XRD detector with high
energy resolution, e.g. better than 500 eV. The energy resolution of the detector helps to reduce
background effects, such as higher harmonics, characteristic lines from the X-ray tube, fluorescence,
etc. The divergence slit is used to adjust the size of the incident beam and several antiscatter devices
can be introduced to reduce scattered radiation from optical components or from the air. The
axial divergence of the incident and diffracted beams can be controlled with Soller slits, which are
also included in the basic configuration of laboratory powder diffractometers. The sample holder is
attached to the goniometer arm and it moves together with the X-ray tube. In some cases it may be
preferable to attach the sample holder to the second arm and to move it together with the detector.
The same geometry can be applied also with a microfocus tube and a two-dimensional (area)
detector. In this case the use of Soller slits is not required. Since powder diffraction measurements
2
are most often performed in air environment, the XAS experiments presented in the following
sections were also conducted in air, although a configuration that uses an evacuated or helium-
filled beam path can be also envisioned.
Fig. 1 depicts a wavelength-dispersive configuration, in which photons originating from the focus
of the X-ray tube X travel through sample S and impinge on the surface of crystal C at different
angles.
Following Bragg’s law of diffraction, only X-ray photons with specific energies will be
diffracted by the crystal towards the X-ray detector D. Since the incident photons shown in Fig.
1 form different angles with the atomic planes of the crystal, each one of the diffracted beams
(shown as lines of different colors in Fig. 1) will have a different wavelength. Therefore, the one-
dimensional detector collects simultaneously a range of different energies. This can significantly
increase the speed of the measurement compared to a set-up that uses a single-element (point)
detector. Considering the exact geometry of the experimental set-up, it is possible to correlate the
position (strip number), at which the signal is received, with the energy of the diffracted beam.
In this way, a complete energy spectrum of the X-rays transmitted through the sample can be
obtained and the absorption at each energy can be determined.
The range of energies that can be collected simultaneously depends on the size and position of
the detector. If the covered angular range is narrower than the intended one (either XANES or
EXAFS), it can be extended by operating the detector in scanning mode with the goniometer
arm moving continuously during the measurement from low to high diffraction angles. Note that
maintaining the diffraction condition requires that the X-ray source moves with the same speed as
the detector. This measurement mode (known as gonio mode or symmetric scanning) is a standard
feature of X-ray diffractometers and does not require special modifications of the hardware or
software.
In addition to the extended range, the scanning mode has additional advantages for
XAS measurements compared to the static mode.
It enables collecting data with a step size
that is smaller than the angular size of one detector strip (subsampling), which provides better
discretization of the continuous XAS signal and allows the observation of narrow spectral features.
A second advantage is that during the scanning movement, X-ray beams with the same photon
energy will travel through different parts of the sample, thus effectively averaging the signal. In
this sense, the resulting measurements will be less sensitive to small variations of the sample’s
composition or thickness. This effect is illustrated in Fig. 2. At lower positions of the X-ray source
and detector, an X-ray beam passes through the lower part of the sample (Fig. 2(a)) and is detected
at the upper end of the detector. As the source and detector move higher (Fig. 2(b)), a beam
with the same photon energy (same angle of incidence ω1 on the crystal) will pass through the
central part of the sample and will be detected in the center of the detector. Finally, at even higher
positions of the source and detector (Fig. 2(c)), photons with once again the same energy will travel
through the top part of the sample and will be detected at the lower edge of the detector. From Fig.
2 it can be also observed that for each position of the X-ray source, the photons with the specified
energy will be diffracted by a different point on the crystal. As a result of this, local variations in
3
the quality of the crystal surface will be also averaged during the scanning measurement.
The data collection algorithm in the detector combines all intensity that is received for a given
angle of incidence and produces a single data point that corresponds to this angle of incidence (and
hence photon energy). Considering the configuration shown in Fig. 1, the angle of incidence for a
specific photon direction can be calculated from the number of the detector channel in which the
signal is received by means of equation (1). For derivation, see Fig. S1. R1 is the distance between
the X-ray source and the center of the goniometer, R2 is the distance between the goniometer
center and the detector, n is the number of the detector strip counted from the middle channel
of the detector, p is the width of the strip and ωc is the angle between the central part of the
incident beam (indicated with a dashed line in the figure) and the diffracting planes of the crystal
monochromator.
ω = ωc + tan−1
np
R1 + R2
≈ωc +
np
R1 + R2
(1)
The measurement method described by Fig. 1 and equation (1) is different from the one that is used
in powder diffraction measurements. A diagram of a diffraction measurement with monochromatic
radiation, a symmetric (gonio) scan and a narrow incident beam is shown in Fig. 3. In this method,
the algorithm for processing data from the detector is set up for measuring the intensity at each
diffraction angle 2θ. If it is assumed that ω = 2θ/2, equation (2) can be derived. From this result
it can be observed that for R1 = R2 the approximated expressions for ω are the same in equations
(1) and (2). Therefore, the data collection algorithm for powder diffraction measurements with
symmetric scans, which implements equation (2), can be applied also for XAS without modifica-
tions. If R1 ̸= R2, then the distance R2 must be replaced in (2) by (R1 + R2)/2. Equation 2 is
also applied for XRD measurements with the Bragg-Brentano parafocusing geometry, which uses
a divergent monochromatic incident beam and is the most commonly used type of measurement in
laboratory powder diffractometers.
ω = 2θ
2 = ωc + 1
2 tan−1
np
R2
≈ωc + np
2R2
(2)
A configuration somewhat similar to the one shown in Fig. 1 is reported in (N´emeth et al., 2016),
where flat sections of Si crystal are arranged around a cylindrical surface to approximate the von
H´amos geometry. In this set-up, both X-ray source and detector are located on the axis of the
cylinder. The detector is mounted on a translation stage, instead of a goniometer arm, and the
X-ray source is in the plane of the detector, which in turn is not perpendicular to the direction
of the X-ray beams diffracted by the crystal analyzer. The measurements are performed in static
mode, which does not make use of the advantages associated with scanning measurements that
were described in the previous paragraph. On the other hand, the set-up in (N´emeth et al., 2016)
uses distances between the main components that are similar to the ones reported in this work,
4
and experiments are performed in air atmosphere. This indicates that a compact configuration
operated in air is a feasible solution for XAS experiments.
Figure 1: A schematic diagram of the experimental set-up.
Polychromatic X-ray radiation is
emitted by the X-ray source X, travels through specimen S and is diffracted by the flat crystal
monochromator C.
After diffraction, the X-ray photons are directed to the position sensitive
detector D. Depending on the angle of incidence, photons with different energies will be diffracted
at different points on the crystal surface. This is represented by lines of different colors. R1 and R2
are the distances between the source and the rotation axis (center) of the goniometer and between
the goniometer center and the middle channel of the X-ray detector. The top surface of the crystal
is aligned to be at the center of the goniometer. V is a divergence slit, and K is a beam shield.
3
Experimental details
The experimental configuration described in Section 2 was realized on an Empyrean multipurpose
diffractometer (Malvern Panalytical B.V., Almelo, The Netherlands) with the following compo-
nents:
• Empyrean long fine focus X-ray tube with enhanced stability of the focal spot (LFF HR) and
either Cu or Ag anode. The take-off angle is 6 deg and the apparent size of the focus is 0.04
by 12 mm.
• Fixed divergence slit with opening either 1 deg or 1/4 deg, two 0.04 rad Soller slits for
controlling the axial divergence of the incident and diffracted beams, and a 11.6 mm mask
for limiting the beam size in axial direction. With these settings the irradiated area on the
specimen is about 12 mm by 1 mm (1 deg slit) or 12 mm by 0.25 mm (1/4 deg slit). If it is
required to reduce the lateral size of the irradiated area, this can be done by using a smaller
mask.
• Flat Si (011) crystal with dimensions 40 mm (length) by 20 mm (width). The top surface of
the crystal is aligned to be at the center of the goniometer.
5
Figure 2: Schematic representation of XAS measurements with the configuration shown in Fig. 1
and using a continuous scanning movement of the X-ray source and detector relative to the crystal.
(a) Lower angle of incidence of the central part of the X-ray beam, (b) intermediate and (c) higher
angle of incidence. The notation for the different elements is the same as in Fig. 1.
• 1Der (2nd generation) one-dimensional solid-state X-ray detector that consists of 127 strips
with dimensions 0.07 mm wide and 15 mm long. The energy resolution of the detector is 320
eV, which is considered sufficient for XAS experiments (Welter et al., 2009).
• For the XAS experiments reported in this work, the distances R1 and R2 were both equal to
240 mm.
6
Figure 3: Schematic representation of X-ray powder diffraction measurements with a narrow in-
cident beam, a one-dimensional position-sensitive detector and a symmetric (gonio) scan.
The
notation for the different elements is the same as in Fig. 1.
3.1
Choice of anode material
Modern diffractometers offer the option for easy exchange of X-ray tubes and this opens the pos-
sibility for selecting a tube with an anode material and power settings that are best suited for a
specific material. When selecting an X-ray tube for XAS experiments, it is important to consider
the intensity of the bremsstrahlung radiation emitted from the tube in an energy range close to
the absorption edge of the element of interest. To evaluate this intensity for X-ray diffraction LFF
tubes, the continuum radiation of various commonly used anode materials was calculated at dif-
ferent voltage settings by means of a tube spectrum model widely applied in X-ray spectrometry
(Ebel, 1999; Ebel, 2005). The results of the theoretical calculations are summarized in Table 1.
From the values presented there it becomes clear that the performance of a tube drops when the
absorption edge of the investigated element is at a higher energy compared to the absorption edge
of the anode element. This is due to the effect of anode self-absorption that reduces the absolute
number of continuum photons available for XAS measurements. One remedy for this is to use a
different tube, with an anode material which is more favorable. In cases where materials with the
same absorption edge are analyzed repeatedly, the optimal anode material can be selected from
Table 1. Conversely, in a situation where different absorption edges are probed, many materials of
interest can be analyzed using either Cu or Ag tubes with only a small reduction in performance.
Using X-ray tubes with different anode materials may also help to avoid characteristic lines that
appear in the measurement range in specific cases. Such characteristic lines with high intensities
and narrow peak widths can disturb the normalization procedure and produce artifacts in the
data.
7
Table 1: Optimal choices of anode materials and tube voltage settings for XANES measurements
around different absorption edges. The corresponding tube current (not shown in the table) corre-
sponds to isowatt settings at all tabulated voltages. The maximum power used for the calculations
was 1.8 kW for Co, Cu and Ag tubes and 2.5 kW for Mo tubes.
Optimal selection
Cu tube
Ag tube
Absorption
edge
Energy,
keV
Anode
material
Tube
voltage,
kV
Tube
voltage,
kV
Intensity
compared
to
optimal
selection, %
Tube
voltage,
kV
Intensity
compared
to
optimal
selection, %
Ti
4.966
Co
30
30
93.6
30
38.7
Cr
5.989
Co
35
30
97.8
30
50.1
Fe
7.112
Co
35
35
99.4
30
63.5
Ni
8.333
Cu
35
35
100.0
30
77.4
Cu
8.979
Mo
45
30
34.9
30
85.0
Pt L3
11.564
Mo
45
30
34.8
35
76.8
Se
12.658
Mo
45
35
35.8
35
75.1
Sr
16.105
Mo
55
40
37.2
45
72.5
Zr
17.998
Mo
60
45
37.6
55
72.4
Mo
20.000
Ag
60
50
52.1
60
100.0
Pd
24.350
Ag
60
60
55.5
60
100.0
Ag
25.514
Mo
55
60
68.4
50
71.9
3.2
Evaluation of the energy resolution
One of the most important criteria that determines the quality of XAS experiments is the achievable
energy resolution, particularly in the XANES region. The resolution is determined by multiple
factors, such as geometrical parameters (distances from the crystal to the source and the detector,
widths of the X-ray source and detector strips) as well as factors related to the crystal, such
as flatness, quality of the surface and the Darwin width of the selected reflection. To evaluate
the combined energy resolution resulting from all experimental factors, the profile of the CuKα1
emission line from the X-ray source was measured using the Si (022) reflection of the crystal
and compared (Fig. 4(a)) with a calculated profile based on tabulated values for the widths of the
different spectral components (Deutsch et al., 2004). In this way, it was determined that the energy
resolution of the experimental setup described above was ∼2.2 eV at the energy of 8 keV, resulting
in resolving power of E/δE = 3600. This resolution is adequate for many XANES experiments
(N´emeth et al., 2016; Schlesiger et al., 2020).
If higher resolution is required, then a different
reflection or a different crystal could be employed. For example, the Si (044) reflection of the same
crystal can be used (Fig. 4(b)), which gives resolution of ∼1.3 eV at 8 keV (E/δE = 6150). This
improved resolution comes at the cost of ∼10 times reduction in intensity and correspondingly
longer measurement times. Due to the specifics of the method, it is expected that the absolute
energy resolution will gradually improve at energies lower than 8 keV and become worse at higher
8
energies.
Figure 4: Experimental profile of the CuKα1 characteristic line (red dots), compared with the
expected theoretical profile (green line) and the theoretical profile convoluted with Gaussian in-
strumental broadening (blue line). The estimated instrumental broadening is equal to 2.2 eV for
the Si (022) reflection (a) and 1.3 eV for Si (044) (b).
3.3
Effect of air absorption
Absorption of the transmitted X-ray photons by air can lead to a significant reduction of mea-
sured intensities, particularly in the low-energy range. Because of this, XAS instruments often use
evacuated or He-filled chambers. Such implementation, however, limits the flexibility of the set-up
and is difficult to integrate with the diffractometer framework. The effect of air absorption for
the configuration discussed in this work was evaluated and the results are shown in Fig. 5. For
the energies above 8 keV, the transmission through 480 mm of air is about 80% and the effect
of air absorption on the experimental data will be barely noticeable. On the other hand, in the
range around 5 keV, the transmission is only about 10%. Such intensity reduction will require
longer measurement times. One possibility to improve this is to select an X-ray tube and tube
9
power settings that optimize the emission of X-rays with such energies (see Table 1). If the use of
an evacuated chamber is desired for reducing air absorption, it should be considered that for the
proposed implementation, such chamber can cover only a part of the beam path, as the X-ray tube,
sample, and detector will remain outside of the chamber. In addition, the X-ray beam will have to
traverse either two or four chamber windows, depending on whether the crystal analyzer is inside
or outside of the chamber.
Figure 5: Transmission coefficient for X-rays corresponding to the absorption edges of different
materials calculated for beam paths of 240 mm dry air (blue), 480 mm dry air (red) and Kapton
(polyimide) windows with 0.25 mm total thickness (green).
4
Results and discussion
The performance of the set-up described in Section 2 is demonstrated by comparing measured XAS
spectra of reference metal foils (supplied by EXAFS materials, Danville, CA, USA) with high-
quality synchrotron data obtained from the Materials Data Repository (https://mdr.nims.go.jp/).
Measurements of metal oxides were also conducted with samples prepared by mixing the corre-
sponding powder materials with isopropyl alcohol and depositing them on a Kapton (polyimide)
foil. The examples discussed further in this section were selected to emphasize different aspects of
the XAS data collection, such as measurement time, resolution, energy range and fidelity of the
measured data and also to illustrate the application of the set-up to practical cases. The collected
XAS spectra were processed using the program Athena (Ravel & Newville, 2005).
4.1
Fe foil, α-Fe2O3 and Fe3O4
The XAS spectra of α-Fe2O3 (hematite) and Fe3O4 (magnetite) samples prepared from powder are
shown in Fig. 6 and are compared with the corresponding spectrum of a Fe foil with a thickness
of 7.5 µm. Following Table 1, a Cu tube with power settings 35 kV and 50 mA was applied. A
10
divergence slit of 1/4 deg was used and the measurement time for each of the three scans was
one hour. A scan of equal duration and with the sample removed from the beam path was also
collected for normalization of the data (see Fig. S2). The differences in the positions and shapes
of the absorption edge of Fe are clearly observed, indicating the different oxidation states and
coordinations of the Fe atoms in the three materials. The pre-edge effects in the oxides are also
visible. The results are consistent with previous reports based on synchrotron studies, e.g. (Xiao
et al., 2022).
Figure 6: XAS spectra of α-Fe2O3 (blue) and Fe3O4 (green) powder samples, compared with the
corresponding spectrum of a 7.5 µm Fe reference foil (red). The inset shows a magnified view of
the XANES range.
4.2
Ni foil and NiO
The XAS spectrum of a 6 µm Ni foil was recorded with two different measurement times, namely
15 min and 2h, and the results are shown in Fig. 7. The longer measurement time helps to reduce
the noise due to counting statistics but even with the shorter time all essential features are clearly
observable. A measurement performed with synchrotron radiation is also included as reference
(XAFS spectrum of Nickel. https://doi.org/10.48505/nims.3923). In the inset of Fig. 7, the 15
min measurement is compared with the spectrum of a NiO sample prepared from powder and with
the same acquisition time. The differences in the positions and shapes of the absorption edge of Ni
are again clearly visible, allowing the analysis of oxidation states and coordinations.
4.3
EXAFS of Fe
Another XAS spectrum of the 7.5 µm reference Fe foil is shown in Fig. 8. This measurement
covers an energy range of 700 eV that is suitable for EXAFS analysis and was collected in 30
min.
In this case, the set-up was optimized for high intensity (with a 1 deg divergence slit),
and a small difference with the reference synchrotron measurement (XAFS spectrum of Iron.
11
Figure 7: XAS spectra of a 6 µm Ni foil , with 15 min (red) and 2 hours (purple) measurement times.
A synchrotron measurement is also shown as a reference (blue) and the patterns are shifted for
clarity. The inset compares the 15 min measurement with the spectrum of a NiO sample prepared
from powder (green) collected with the same total time.
https://doi.org/10.48505/nims.3903) can be observed close to the absorption edge. Despite this
difference, the data can be used to calculate the Fourier transform of the EXAFS signal of the Fe
foil and this is very similar to the one calculated from the synchrotron data when the same k range
is considered (Fig. 9). After optimization of the set-up for better performance in the near-edge
region by using a smaller divergence slit, a second measurement with the same duration and shorter
energy range was performed. This measurement is shown in the inset of Fig. 8 and is closer to the
reference synchrotron data.
4.4
EXAFS of Pt L3 edge
In order to demonstrate the applicability of the proposed configuration to L absorption edges,
the XAS signal from a 7.5 µm Pt foil was measured in a wide energy range around the L3 edge
of Pt. A tube with an Ag anode and power settings 35 kV and 50 mA (see Table 1) was used
for this measurement, which had a duration of 6 hours. The result is shown in Fig. 10. The
laboratory data show good agreement with the synchrotron reference (XAFS spectrum of Platinum.
https://doi.org/10.48505/nims.2473), except in the near-edge region where the measured amplitude
is lower than the reference. This is likely due to the reduced resolution of the set-up in this energy
range.
A second measurement with a duration of one hour was conducted using the Si (044)
reflection of the same crystal and the outcome is shown in the inset of Fig. 10. With the improved
resolution, the features in the XANES region are closer to the reference data. The EXAFS signal of
the Pt foil (Fig. 11(a)) agrees well with the synchrotron measurement up to 20 ˚A−1 in k space. The
Fourier transform of the EXAFS signal is also very similar to that calculated from the reference
data when the same k range is considered (Fig. 11(b)).
12
Figure 8: Experimental XAS spectrum of a 7.5 µm Fe foil collected in 30 min and covering a range
of 700 eV (red) compared with a synchrotron reference (blue). The inset shows a 30 min scan in
the XANES range optimized for better performance.
4.5
XAS of austenitic stainless steel at the Fe, Cr and Ni edges.
The XAS spectrum of the reference Fe foil was also compared with a foil of austenitic steel with
nominal composition 70% Fe, 16% Cr and 14% Ni and thickness of 5 µm.
The measurement
time for each scan was one hour. While pure Fe crystallizes with the body-centered cubic (bcc)
structure, the austenitic steel has a face-centered cubic (fcc) structure at room temperature. The
position of the absorption edge is nearly the same in both materials (Fig. 12(a)), however the
signal in the EXAFS range is quite different, reflecting the change of the structural type. The
same structural change is observed also at the Cr K edge (Fig 12(b)). The collection time for each
scan was two hours and the tube settings were adjusted to 30 kV and 55 mA for maximizing the
intensity. The Cr reference specimen consists of a 1 µm Cr layer deposited on 6 µm Aluminum foil.
The measured signal around the Cr edge is significantly weaker and requires longer measurement
times compared to those used for the Fe edge. There are several factors that contribute to this.
Due to the small amount of Cr atoms in the two samples, the absorbance steps are around 0.3
and are thus significantly smaller than the value that is considered optimal for XAS measurements,
which is 1. For the energy corresponding to the Cr absorption edge the transmission through air
is only 28% compared to 47% for Fe (see Fig. 5). Finally, the absorption by the other elements
present in the specimens (Fe, Ni and Al, respectively) also reduce the intensity. The combination of
these factors causes that the noise level in the data is more than three times worse for the Cr edge
compared to the Fe edge. Nevertheless, the differences between the two Cr-containing samples are
clearly observed. Since a high-resolution measurement is not required in the pre-edge and post-edge
regions, the step-size of the data shown in Fig. 12(b) was increased by a factor of 3 in these regions
by rebinning the measured data. This helps to reduce the visible statistical noise. The near-edge
region is not modified. Fig. 12(c) shows the XAS spectrum of the steel sample around the Ni K
13
Figure 9: EXAFS signal (a) and the corresponding Fourier transform (b) of Fe calculated from the
measurement in Fig. 8 (red) compared with the result of the synchrotron measurement adjusted
to the same k range (blue).
edge compared with the 6 µm Ni reference foil. Pure Ni crystallizes with the fcc structure and
the two materials have similar features in the EXAFS range. However, for the steel sample the
oscillations are shifted to lower energies, indicating a different environment of the Ni atoms. The
weight fraction of Ni in the alloy is even lower than that of Cr, and the absorbance step is only 0.17.
In addition, photons with this energy are strongly absorbed by the other elements in the material,
especially by the Fe atoms. This again results in higher noise levels. The same rebinning procedure
was applied to the Ni data as described for the Cr case. Despite several experimental factors that
affect negatively the collected data, such as low number of absorbing atoms and strong attenuation
by the matrix of the material and the air, the measurement set-up was able to provide meaningful
XAS results for all three edges that can be used for further analysis in this practical case.
4.6
In operando investigation of an NMC811 battery
Understanding the electrochemical processes and their relation to the crystalline structures of the
cathode and anode materials is important for improving the capacity and lifetime of batteries.
In operando XAS measurements performed during charging and discharging allow the observation
of changes in the oxidation states and local coordinations of different atomic types present in the
materials (Jahrman et al., 2019b; Genz et al., 2024). A pouch cell battery with an NMC811 cathode
(LiNi0.8Mn0.1Co0.1O2) and nominal capacity 50 mAh was investigated in transmission mode with
14
Figure 10: Experimental XAS spectrum of a 7.5 µm Pt foil, collected in 6 hours and covering a
range of 1700 eV (red) compared with a synchrotron reference (blue). The inset shows a 1h scan
in the XANES range collected using the Si (044) reflection.
the experimental configuration shown in Fig. 1 and using a Si (111) crystal. A constant current -
constant voltage (CC-CV) charging strategy was employed in the experiment with 0.2C charging
rate using an SP-50e potentiostat from Biologic (Claix, France). The lowest and highest applied
voltages were 3.0 and 4.2 V, respectively. The charge-discharge cycle was repeated two times and
XAS patterns of the Ni absorption edge were collected during the process. Fig. 13 shows a color
plot consisting of twenty-two scans, each one with a duration of one hour. In Fig. 13(a) the shift
of the absorption edge position between the low and high voltages can be observed, indicating a
change of oxidation state of the Ni atoms in the charged and discharged states. The shift is ∼2
eV between 3.0 V and 4.2 V. Fig. 13(b) shows the variation of the XANES signal at energies
above the absorption edge that can be attributed to the change in the local environment of the Ni
atoms. The results presented here are consistent with the in operando data reported in (Kondrakov
et al., 2017; Tsai et al., 2005; Jahrman et al., 2019b). In this case, the X-ray intensity at the Ni
edge is attenuated by the battery components, such as the Cu and Al current collectors and the Al
pouch.
5
Conclusions
The configuration for XAS measurements presented in the current work is implemented on a stan-
dard laboratory powder diffractometer with only minor modifications of the hardware and the
control software. It has been tested for a number of different materials and the results show that
good data quality can be achieved within a reasonable time, ranging from minutes to hours, de-
pending on the composition of the sample, the sample preparation, the extent of the measurement
region and other factors. The main differences between this implementation and other laboratory
15
Figure 11: EXAFS signal (a) and the corresponding Fourier transform (b) of Pt calculated from the
measurement in Fig. 11 (red) compared with the result of the synchrotron measurement adjusted
to the same k range (blue).
instruments for XAS are the use of a position-sensitive detector with high energy resolution, very
accurate goniometer and the application of continuous scanning of the X-ray source and detector
during the measurements. These features are found in modern powder diffractometers, which can
be easily reconfigured from diffraction to XAS mode by mounting the crystal analyzer in the center
of the goniometer. One advantage of the proposed method is the ability to cover a wide energy
range that includes the absorption edges of multiple elements without exchanging the analyzer
crystal or other optical components. It also gives the option to switch from high-intensity mode to
high-resolution mode by simply repositioning the goniometer to a higher angle corresponding to a
different reflection. Enabling XAS measurements with a diffractometer may serve as an entry point
for users of diffraction methods who would like to explore XAS and can contribute to the wider
use of this technique.
Acknowledgements
The authors acknowledge prof. Yang-Kook Sun from Hanyang University, Korea for providing the
NMC811 cell used in this study. Also, Lei Ding, Sander Weijers, Vladimir Jovanovic and Ferit
Cakmak from Malvern Panalytical are acknowledged for their help with the sample preparation
and the optimization of the experimental set-up.
Conflicts of interest: M.G. declares a patent pending related to the method presented in this work.
16
Data availability: Data sets generated during the current study are available from the corresponding
author on reasonable request.
References
Alonso-Mori, R., Kern, J., Sokaras, D., Weng, T.-C., Nordlund, D., Tran, R., Montanez, P., Delor,
J., Yachandra, V. K., Yano, J. & Bergmann, U. (2012). Rev. Sci. Instrum. 83(7), 073114.
Cutsail, III, G. E. & DeBeer, S. (2022). ACS Catal. 12, 5864–5886.
Deutsch, M., F¨orster, E., H¨olzer, G., H¨artwig, J., H¨am¨al¨ainen, K., Kao, C.-C., Huotari, S. & R, D.
(2004). J. Res. Natl. Inst. Stand. Technol. 109(1), 75–98.
Ebel, H. (1999). X-Ray Spectrom. 28(4), 255–266.
Ebel, H. (2005). Adv. X-ray Anal. 49, 267–273.
Genz, N. S., Kallio, A.-J., Meirer, F., Huotari, S. & Weckhuysen, B. M. (2024). Chem. Methods,
4(1), e202300027.
Holden, W. M., Hoidn, O. R., Ditter, A. S., Seidler, G. T., Kas, J., Stein, J. L., Cossairt, B. M.,
Kozimor, S. A., Guo, J., Ye, Y., Marcus, M. A. & Fakra, S. (2017). Rev. Sci. Instrum. 88(7),
073904.
Honkanen, A.-P., Ollikkala, S., Ahopelto, T., Kallio, A.-J., Blomberg, M. & Huotari, S. (2019).
Rev. Sci. Instrum. 90(3), 033107.
Honkanen, A.-P., Verbeni, R., Simonelli, L., Moretti Sala, M., Monaco, G. & Huotari, S. (2014). J.
Synchrotron Rad. 21(1), 104–110.
v. H´amos, L. (1932). Naturwiss. 20, 705–706.
v. H´amos, L. (1933). Ann. Phys. 409(6), 716–724.
Ismail, I., Journel, L., Vacheresse, R., Travnikova, O., Marin, T., C´eolin, D., Guillemin, R.,
Marchenko, T., Zmerli, M., Koulentianos, D., P¨uttner, R., Palaudoux, J., Penent, F. & Simon,
M. (2021). Rev. Sci. Instrum. 92(7), 073104.
Jahrman, E. P., Holden, W. M., Ditter, A. S., Mortensen, D. R., Seidler, G. T., Fister, T. T.,
Kozimor, S. A., Piper, L. F. J., Rana, J., Hyatt, N. C. & Stennett, M. C. (2019a). Rev. Sci.
Instrum. 90(2), 024106.
Jahrman, E. P., Pellerin, L. A., Ditter, A. S., Bradshaw, L. R., Fister, T. T., Polzin, B. J., Trask,
S. E., Dunlop, A. R. & Seidler, G. T. (2019b). J. Electrochem. Soc. 166(12), A2549.
Kondrakov, A. O., Geßwein, H., Galdina, K., de Biasi, L., Meded, V., Filatova, E. O., Schumacher,
G., Wenzel, W., Hartmann, P., Brezesinski, T. & Janek, J. (2017). J. Phys. Chem. C, 121(44),
24381–24388.
Lutz, C. & Fittschen, U. E. A. (2020). Powder Diffr. 35(S1), S24–S28.
Mottram, L. M., Cafferkey, S., Mason, A. R., Oulton, T., Kuan Sun, S., Bailey, D. J., Stennett,
M. C. & Hyatt, N. C. (2020). J. Geosci. 65(1), 27–35.
N´emeth, Z., Szlachetko, J., Bajn´oczi, ´E. G. & Vank´o, G. (2016). Rev. Sci. Instrum. 87(10), 103105.
Ravel, B. & Newville, M. (2005). J. Synchrotron Rad. 12(4), 537–541.
Schlesiger, C., Praetz, S., Gnewkow, R., Malzer, W. & Kanngießer, B. (2020). J. Anal. At. Spectrom.
35, 2298–2304.
Seidler, G. T., Mortensen, D. R., Ditter, A. S., Ball, N. A. & Remesnik, A. J. (2016). J. Phys.:
Conf. Ser. 712(1), 012015.
Seidler, G. T., Mortensen, D. R., Remesnik, A. J., Pacold, J. I., Ball, N. A., Barry, N., Styczinski,
M. & Hoidn, O. R. (2014). Rev. Sci. Instrum. 85(11), 113906.
Szlachetko, J., Nachtegaal, M., de Boni, E., Willimann, M., Safonova, O., Sa, J., Smolentsev, G.,
Szlachetko, M., van Bokhoven, J. A., Dousse, J.-C., Hoszowska, J., Kayser, Y., Jagodzinski,
P., Bergamaschi, A., Schmitt, B., David, C. & L¨ucke, A. (2012). Rev. Sci. Instrum. 83(10),
103105.
Tsai, Y. W., Hwang, B. J., Ceder, G., Sheu, H. S., Liu, D. G. & Lee, J. F. (2005). Chem. Mater.
17(12), 3191–3199.
Welter, E., Hansen, K., Reckleben, C. & Diehl, I. (2009). Journal of Synchrotron Radiation, 16(2),
293–298.
Xiao, X., Ruan, Z., Li, Q., Zhang, L., Meng, H., Zhang, Q., Bao, H., Jiang, B., Zhou, J., Guo, C.,
Wang, X. & Fu, H. (2022). Advanced Materials, 34(27), 2200612.
17
Zeeshan, F., Hoszowska, J., Loperetti-Tornay, L. & Dousse, J.-C. (2019). Rev. Sci. Instrum. 90(7),
073105.
Zimmermann, P., Peredkov, S., Abdala, P. M., DeBeer, S., Tromp, M., M¨uller, C. & van Bokhoven,
J. A. (2020). Coord. Chem. Rev. 423, 213466.
18
Figure 12: (a) XAS spectrum of a Cr-Fe-Ni alloy with nominal composition 70% Fe, 16% Cr, 14%
Ni and a thickness of 5 µm measured around the absorption edge of Fe, compared with a reference
Fe foil with thickness 7.5 µm. (b) The same alloy measured around the absorption edge of Cr and
compared with a reference sample consisting of 1 µm Cr layer deposited on a 6 µm Al foil, and (c)
around the Ni edge with a reference measurement of a 6 µm Ni foil. The insets show magnified
views of the corresponding XANES ranges.
19
Figure 13: 2D color plots of the variation of the XAS signal measured during charging and dis-
charging of an NMC811 pouch cell battery: (a) XANES range close to the absorption edge of Ni;
(b) XANES range above the edge. The color scheme shows the level of the normalized absorbance.
20
|
Application of X-ray Absorption Spectroscopy with a laboratory X-ray powder diffractometer Milen Gateshki , Charalampos Zarkadas , and Detlef Beckers Malvern Panalytical B. V., Lelyweg 1, 7602 EA, Almelo, The Netherlands Synopsis A method for high-quality X-ray absorption spectroscopy (XAS) in transmission mode is proposed that can be easily applied on a laboratory X-ray powder diffractometer. Such application may contribute to the wider use of this powerful technique in materials research laboratories. Abstract A novel method and experimental configuration are proposed that allow the collection of high-quality X-ray absorption spectroscopy (XAS) data in transmission mode on a standard laboratory diffractometer. This configuration makes use of components that have been developed for application in modern laboratory diffractometers, such as Xray tubes with enhanced stability, accurate goniometers and position-sensitive photoncounting detectors with improved energy resolution. Such approach may extend the application of XAS in materials research, particularly for users of diffraction methods. Keywords: Laboratory XAS; XANES; EXAFS; X-ray diffraction; In operando studies 1 Introduction XAS is a powerful method for analyzing the structure and properties of new materials. It investigates the behavior of the absorption coefficient around the absorption edge of a specific element. The measured absorption spectrum is usually divided into two ranges: XANES (X-ray absorption near edge structure), which is up to 50-100 eV above the absorption edge of the investigated atomic type and provides information about the oxidation state, electronic structure and local environment of the atom, and EXAFS (Extended X-ray absorption fine structure) that extends from 100 to 500-2000 eV above the edge and probes the short-range ordering of the atoms, i.e. interatomic distances and coordination numbers. XAS provides complementary information to X-ray diffraction (XRD) and X-ray fluorescence (XRF) and can be used for relatively complex materials where 1 18 Sep 2025 other techniques are not applicable, e.g. liquids and amorphous substances. Compared to the Pair Distribution Function method (PDF), which is also applied to study short-range ordering in materials (including amorphous) and gives averaged information for all atomic species, XAS is element specific and can be used to distinguish between the environments of otherwise similar atoms, e.g. two types of metal atoms in an alloy or a solid solution. During the last decade, the interest in laboratory-based XAS methods has continually grown both in the academic and in the industrial sectors (Zimmermann et al., 2020; Cutsail & DeBeer, 2022). Nowadays, several companies, such as easyXAFS, Sigray, HP Scpectroscopy, LynXes, RapidXAFS and Specreation offer a range of dedicated XAS instruments. Two types of experimental set-ups are most commonly found in XAS instruments: the Rowland circle geometry and the von H ́amos geometry. Both focus the X-ray beam from the source to the position of the detector. The Rowland circle geometry uses a cylindrically or spherically bent crystal analyzer (Mottram et al., 2020; Lutz & Fittschen, 2020; Jahrman et al., 2019a; Holden et al., 2017; Seidler et al., 2014; Seidler et al., 2016; Honkanen et al., 2019; Honkanen et al., 2014), while the von H ́amos geometry employs a crystal that is cylindrically bent in transverse direction (v. H ́amos, 1932; v. H ́amos, 1933; Schlesiger et al., 2020; N ́emeth et al., 2016; Szlachetko et al., 2012; Zeeshan et al., 2019; Alonso-Mori et al., 2012; Ismail et al., 2021). Detailed discussions of these geometries and their corresponding advantages in XAS experiments can be found in the referenced publications. Focusing geometries can significantly increase the intensity of the measured signal, however they require instruments with increased mechanical complexity. Moreover, they are not compatible with the architecture of laboratory Xray powder diffractometers, which are based on highly accurate goniometers with the X-ray source mounted on one arm of the goniometer and the X-ray detector mounted on the second arm. 2 XAS measurement configuration In this work, we propose a configuration for XAS that uses a flat analyzer crystal and can be easily applied on a standard laboratory X-ray powder diffractometer. A schematic representation of this set-up is shown in Fig. 1. The figure gives a side view of the configuration, which uses an XRD tube with a long fine focus (LFF) and an one-dimensional (strip) XRD detector with high energy resolution, e.g. better than 500 eV. The energy resolution of the detector helps to reduce background effects, such as higher harmonics, characteristic lines from the X-ray tube, fluorescence, etc. The divergence slit is used to adjust the size of the incident beam and several antiscatter devices can be introduced to reduce scattered radiation from optical components or from the air. The axial divergence of the incident and diffracted beams can be controlled with Soller slits, which are also included in the basic configuration of laboratory powder diffractometers. The sample holder is attached to the goniometer arm and it moves together with the X-ray tube. In some cases it may be preferable to attach the sample holder to the second arm and to move it together with the detector. The same geometry can be applied also with a microfocus tube and a two-dimensional (area) detector. In this case the use of Soller slits is not required. Since powder diffraction measurements 2 are most often performed in air environment, the XAS experiments presented in the following sections were also conducted in air, although a configuration that uses an evacuated or heliumfilled beam path can be also envisioned. Fig. 1 depicts a wavelength-dispersive configuration, in which photons originating from the focus of the X-ray tube X travel through sample S and impinge on the surface of crystal C at different angles. Following Bragg's law of diffraction, only X-ray photons with specific energies will be diffracted by the crystal towards the X-ray detector D. Since the incident photons shown in Fig. 1 form different angles with the atomic planes of the crystal, each one of the diffracted beams (shown as lines of different colors in Fig. 1) will have a different wavelength. Therefore, the onedimensional detector collects simultaneously a range of different energies. This can significantly increase the speed of the measurement compared to a set-up that uses a single-element (point) detector. Considering the exact geometry of the experimental set-up, it is possible to correlate the position (strip number), at which the signal is received, with the energy of the diffracted beam. In this way, a complete energy spectrum of the X-rays transmitted through the sample can be obtained and the absorption at each energy can be determined. The range of energies that can be collected simultaneously depends on the size and position of the detector. If the covered angular range is narrower than the intended one (either XANES or EXAFS), it can be extended by operating the detector in scanning mode with the goniometer arm moving continuously during the measurement from low to high diffraction angles. Note that maintaining the diffraction condition requires that the X-ray source moves with the same speed as the detector. This measurement mode (known as gonio mode or symmetric scanning) is a standard feature of X-ray diffractometers and does not require special modifications of the hardware or software. In addition to the extended range, the scanning mode has additional advantages for XAS measurements compared to the static mode. It enables collecting data with a step size that is smaller than the angular size of one detector strip (subsampling), which provides better discretization of the continuous XAS signal and allows the observation of narrow spectral features. A second advantage is that during the scanning movement, X-ray beams with the same photon energy will travel through different parts of the sample, thus effectively averaging the signal. In this sense, the resulting measurements will be less sensitive to small variations of the sample's composition or thickness. This effect is illustrated in Fig. 2. At lower positions of the X-ray source and detector, an X-ray beam passes through the lower part of the sample (Fig. 2(a)) and is detected at the upper end of the detector. As the source and detector move higher (Fig. 2(b)), a beam with the same photon energy (same angle of incidence ω1 on the crystal) will pass through the central part of the sample and will be detected in the center of the detector. Finally, at even higher positions of the source and detector (Fig. 2(c)), photons with once again the same energy will travel through the top part of the sample and will be detected at the lower edge of the detector. From Fig. 2 it can be also observed that for each position of the X-ray source, the photons with the specified energy will be diffracted by a different point on the crystal. As a result of this, local variations in 3 the quality of the crystal surface will be also averaged during the scanning measurement. The data collection algorithm in the detector combines all intensity that is received for a given angle of incidence and produces a single data point that corresponds to this angle of incidence (and hence photon energy). Considering the configuration shown in Fig. 1, the angle of incidence for a specific photon direction can be calculated from the number of the detector channel in which the signal is received by means of equation (1). For derivation, see Fig. S1. R1 is the distance between the X-ray source and the center of the goniometer, R2 is the distance between the goniometer center and the detector, n is the number of the detector strip counted from the middle channel of the detector, p is the width of the strip and ωc is the angle between the central part of the incident beam (indicated with a dashed line in the figure) and the diffracting planes of the crystal monochromator. ω = ωc + tan-1 np R1 + R2 ≈ωc + np R1 + R2 (1) The measurement method described by Fig. 1 and equation (1) is different from the one that is used in powder diffraction measurements. A diagram of a diffraction measurement with monochromatic radiation, a symmetric (gonio) scan and a narrow incident beam is shown in Fig. 3. In this method, the algorithm for processing data from the detector is set up for measuring the intensity at each diffraction angle 2θ. If it is assumed that ω = 2θ/2, equation (2) can be derived. From this result it can be observed that for R1 = R2 the approximated expressions for ω are the same in equations (1) and (2). Therefore, the data collection algorithm for powder diffraction measurements with symmetric scans, which implements equation (2), can be applied also for XAS without modifications. If R1 ̸= R2, then the distance R2 must be replaced in (2) by (R1 + R2)/2. Equation 2 is also applied for XRD measurements with the Bragg-Brentano parafocusing geometry, which uses a divergent monochromatic incident beam and is the most commonly used type of measurement in laboratory powder diffractometers. ω = 2θ 2 = ωc + 1 2 tan-1 np R2 ≈ωc + np 2R2 (2) A configuration somewhat similar to the one shown in Fig. 1 is reported in (N ́emeth et al., 2016), where flat sections of Si crystal are arranged around a cylindrical surface to approximate the von H ́amos geometry. In this set-up, both X-ray source and detector are located on the axis of the cylinder. The detector is mounted on a translation stage, instead of a goniometer arm, and the X-ray source is in the plane of the detector, which in turn is not perpendicular to the direction of the X-ray beams diffracted by the crystal analyzer. The measurements are performed in static mode, which does not make use of the advantages associated with scanning measurements that were described in the previous paragraph. On the other hand, the set-up in (N ́emeth et al., 2016) uses distances between the main components that are similar to the ones reported in this work, 4 and experiments are performed in air atmosphere. This indicates that a compact configuration operated in air is a feasible solution for XAS experiments. Figure 1: A schematic diagram of the experimental set-up. Polychromatic X-ray radiation is emitted by the X-ray source X, travels through specimen S and is diffracted by the flat crystal monochromator C. After diffraction, the X-ray photons are directed to the position sensitive detector D. Depending on the angle of incidence, photons with different energies will be diffracted at different points on the crystal surface. This is represented by lines of different colors. R1 and R2 are the distances between the source and the rotation axis (center) of the goniometer and between the goniometer center and the middle channel of the X-ray detector. The top surface of the crystal is aligned to be at the center of the goniometer. V is a divergence slit, and K is a beam shield. 3 Experimental details The experimental configuration described in Section 2 was realized on an Empyrean multipurpose diffractometer (Malvern Panalytical B.V., Almelo, The Netherlands) with the following components: • Empyrean long fine focus X-ray tube with enhanced stability of the focal spot (LFF HR) and either Cu or Ag anode. The take-off angle is 6 deg and the apparent size of the focus is 0.04 by 12 mm. • Fixed divergence slit with opening either 1 deg or 1/4 deg, two 0.04 rad Soller slits for controlling the axial divergence of the incident and diffracted beams, and a 11.6 mm mask for limiting the beam size in axial direction. With these settings the irradiated area on the specimen is about 12 mm by 1 mm (1 deg slit) or 12 mm by 0.25 mm (1/4 deg slit). If it is required to reduce the lateral size of the irradiated area, this can be done by using a smaller mask. • Flat Si (011) crystal with dimensions 40 mm (length) by 20 mm (width). The top surface of the crystal is aligned to be at the center of the goniometer. 5 Figure 2: Schematic representation of XAS measurements with the configuration shown in Fig. 1 and using a continuous scanning movement of the X-ray source and detector relative to the crystal. (a) Lower angle of incidence of the central part of the X-ray beam, (b) intermediate and (c) higher angle of incidence. The notation for the different elements is the same as in Fig. 1. • 1Der (2nd generation) one-dimensional solid-state X-ray detector that consists of 127 strips with dimensions 0.07 mm wide and 15 mm long. The energy resolution of the detector is 320 eV, which is considered sufficient for XAS experiments (Welter et al., 2009). • For the XAS experiments reported in this work, the distances R1 and R2 were both equal to 240 mm. 6 Figure 3: Schematic representation of X-ray powder diffraction measurements with a narrow incident beam, a one-dimensional position-sensitive detector and a symmetric (gonio) scan. The notation for the different elements is the same as in Fig. 1. 3.1 Choice of anode material Modern diffractometers offer the option for easy exchange of X-ray tubes and this opens the possibility for selecting a tube with an anode material and power settings that are best suited for a specific material. When selecting an X-ray tube for XAS experiments, it is important to consider the intensity of the bremsstrahlung radiation emitted from the tube in an energy range close to the absorption edge of the element of interest. To evaluate this intensity for X-ray diffraction LFF tubes, the continuum radiation of various commonly used anode materials was calculated at different voltage settings by means of a tube spectrum model widely applied in X-ray spectrometry (Ebel, 1999; Ebel, 2005). The results of the theoretical calculations are summarized in Table 1. From the values presented there it becomes clear that the performance of a tube drops when the absorption edge of the investigated element is at a higher energy compared to the absorption edge of the anode element. This is due to the effect of anode self-absorption that reduces the absolute number of continuum photons available for XAS measurements. One remedy for this is to use a different tube, with an anode material which is more favorable. In cases where materials with the same absorption edge are analyzed repeatedly, the optimal anode material can be selected from Table 1. Conversely, in a situation where different absorption edges are probed, many materials of interest can be analyzed using either Cu or Ag tubes with only a small reduction in performance. Using X-ray tubes with different anode materials may also help to avoid characteristic lines that appear in the measurement range in specific cases. Such characteristic lines with high intensities and narrow peak widths can disturb the normalization procedure and produce artifacts in the data. 7 Table 1: Optimal choices of anode materials and tube voltage settings for XANES measurements around different absorption edges. The corresponding tube current (not shown in the table) corresponds to isowatt settings at all tabulated voltages. The maximum power used for the calculations was 1.8 kW for Co, Cu and Ag tubes and 2.5 kW for Mo tubes. Optimal selection Cu tube Ag tube Absorption edge Energy, keV Anode material Tube voltage, kV Tube voltage, kV Intensity compared to optimal selection, % Tube voltage, kV Intensity compared to optimal selection, % Ti 4.966 Co 30 30 93.6 30 38.7 Cr 5.989 Co 35 30 97.8 30 50.1 Fe 7.112 Co 35 35 99.4 30 63.5 Ni 8.333 Cu 35 35 100.0 30 77.4 Cu 8.979 Mo 45 30 34.9 30 85.0 Pt L3 11.564 Mo 45 30 34.8 35 76.8 Se 12.658 Mo 45 35 35.8 35 75.1 Sr 16.105 Mo 55 40 37.2 45 72.5 Zr 17.998 Mo 60 45 37.6 55 72.4 Mo 20.000 Ag 60 50 52.1 60 100.0 Pd 24.350 Ag 60 60 55.5 60 100.0 Ag 25.514 Mo 55 60 68.4 50 71.9 3.2 Evaluation of the energy resolution One of the most important criteria that determines the quality of XAS experiments is the achievable energy resolution, particularly in the XANES region. The resolution is determined by multiple factors, such as geometrical parameters (distances from the crystal to the source and the detector, widths of the X-ray source and detector strips) as well as factors related to the crystal, such as flatness, quality of the surface and the Darwin width of the selected reflection. To evaluate the combined energy resolution resulting from all experimental factors, the profile of the CuKα1 emission line from the X-ray source was measured using the Si (022) reflection of the crystal and compared (Fig. 4(a)) with a calculated profile based on tabulated values for the widths of the different spectral components (Deutsch et al., 2004). In this way, it was determined that the energy resolution of the experimental setup described above was ∼2.2 eV at the energy of 8 keV, resulting in resolving power of E/δE = 3600. This resolution is adequate for many XANES experiments (N ́emeth et al., 2016; Schlesiger et al., 2020). If higher resolution is required, then a different reflection or a different crystal could be employed. For example, the Si (044) reflection of the same crystal can be used (Fig. 4(b)), which gives resolution of ∼1.3 eV at 8 keV (E/δE = 6150). This improved resolution comes at the cost of ∼10 times reduction in intensity and correspondingly longer measurement times. Due to the specifics of the method, it is expected that the absolute energy resolution will gradually improve at energies lower than 8 keV and become worse at higher 8 energies. Figure 4: Experimental profile of the CuKα1 characteristic line (red dots), compared with the expected theoretical profile (green line) and the theoretical profile convoluted with Gaussian instrumental broadening (blue line). The estimated instrumental broadening is equal to 2.2 eV for the Si (022) reflection (a) and 1.3 eV for Si (044) (b). 3.3 Effect of air absorption Absorption of the transmitted X-ray photons by air can lead to a significant reduction of measured intensities, particularly in the low-energy range. Because of this, XAS instruments often use evacuated or He-filled chambers. Such implementation, however, limits the flexibility of the set-up and is difficult to integrate with the diffractometer framework. The effect of air absorption for the configuration discussed in this work was evaluated and the results are shown in Fig. 5. For the energies above 8 keV, the transmission through 480 mm of air is about 80% and the effect of air absorption on the experimental data will be barely noticeable. On the other hand, in the range around 5 keV, the transmission is only about 10%. Such intensity reduction will require longer measurement times. One possibility to improve this is to select an X-ray tube and tube 9 power settings that optimize the emission of X-rays with such energies (see Table 1). If the use of an evacuated chamber is desired for reducing air absorption, it should be considered that for the proposed implementation, such chamber can cover only a part of the beam path, as the X-ray tube, sample, and detector will remain outside of the chamber. In addition, the X-ray beam will have to traverse either two or four chamber windows, depending on whether the crystal analyzer is inside or outside of the chamber. Figure 5: Transmission coefficient for X-rays corresponding to the absorption edges of different materials calculated for beam paths of 240 mm dry air (blue), 480 mm dry air (red) and Kapton (polyimide) windows with 0.25 mm total thickness (green). 4 Results and discussion The performance of the set-up described in Section 2 is demonstrated by comparing measured XAS spectra of reference metal foils (supplied by EXAFS materials, Danville, CA, USA) with highquality synchrotron data obtained from the Materials Data Repository (https://mdr.nims.go.jp/). Measurements of metal oxides were also conducted with samples prepared by mixing the corresponding powder materials with isopropyl alcohol and depositing them on a Kapton (polyimide) foil. The examples discussed further in this section were selected to emphasize different aspects of the XAS data collection, such as measurement time, resolution, energy range and fidelity of the measured data and also to illustrate the application of the set-up to practical cases. The collected XAS spectra were processed using the program Athena (Ravel & Newville, 2005). 4.1 Fe foil, α-Fe2O3 and Fe3O4 The XAS spectra of α-Fe2O3 (hematite) and Fe3O4 (magnetite) samples prepared from powder are shown in Fig. 6 and are compared with the corresponding spectrum of a Fe foil with a thickness of 7.5 μm. Following Table 1, a Cu tube with power settings 35 kV and 50 mA was applied. A 10 divergence slit of 1/4 deg was used and the measurement time for each of the three scans was one hour. A scan of equal duration and with the sample removed from the beam path was also collected for normalization of the data (see Fig. S2). The differences in the positions and shapes of the absorption edge of Fe are clearly observed, indicating the different oxidation states and coordinations of the Fe atoms in the three materials. The pre-edge effects in the oxides are also visible. The results are consistent with previous reports based on synchrotron studies, e.g. (Xiao et al., 2022). Figure 6: XAS spectra of α-Fe2O3 (blue) and Fe3O4 (green) powder samples, compared with the corresponding spectrum of a 7.5 μm Fe reference foil (red). The inset shows a magnified view of the XANES range. 4.2 Ni foil and NiO The XAS spectrum of a 6 μm Ni foil was recorded with two different measurement times, namely 15 min and 2h, and the results are shown in Fig. 7. The longer measurement time helps to reduce the noise due to counting statistics but even with the shorter time all essential features are clearly observable. A measurement performed with synchrotron radiation is also included as reference (XAFS spectrum of Nickel. https://doi.org/10.48505/nims.3923). In the inset of Fig. 7, the 15 min measurement is compared with the spectrum of a NiO sample prepared from powder and with the same acquisition time. The differences in the positions and shapes of the absorption edge of Ni are again clearly visible, allowing the analysis of oxidation states and coordinations. 4.3 EXAFS of Fe Another XAS spectrum of the 7.5 μm reference Fe foil is shown in Fig. 8. This measurement covers an energy range of 700 eV that is suitable for EXAFS analysis and was collected in 30 min. In this case, the set-up was optimized for high intensity (with a 1 deg divergence slit), and a small difference with the reference synchrotron measurement (XAFS spectrum of Iron. 11 Figure 7: XAS spectra of a 6 μm Ni foil , with 15 min (red) and 2 hours (purple) measurement times. A synchrotron measurement is also shown as a reference (blue) and the patterns are shifted for clarity. The inset compares the 15 min measurement with the spectrum of a NiO sample prepared from powder (green) collected with the same total time. https://doi.org/10.48505/nims.3903) can be observed close to the absorption edge. Despite this difference, the data can be used to calculate the Fourier transform of the EXAFS signal of the Fe foil and this is very similar to the one calculated from the synchrotron data when the same k range is considered (Fig. 9). After optimization of the set-up for better performance in the near-edge region by using a smaller divergence slit, a second measurement with the same duration and shorter energy range was performed. This measurement is shown in the inset of Fig. 8 and is closer to the reference synchrotron data. 4.4 EXAFS of Pt L3 edge In order to demonstrate the applicability of the proposed configuration to L absorption edges, the XAS signal from a 7.5 μm Pt foil was measured in a wide energy range around the L3 edge of Pt. A tube with an Ag anode and power settings 35 kV and 50 mA (see Table 1) was used for this measurement, which had a duration of 6 hours. The result is shown in Fig. 10. The laboratory data show good agreement with the synchrotron reference (XAFS spectrum of Platinum. https://doi.org/10.48505/nims.2473), except in the near-edge region where the measured amplitude is lower than the reference. This is likely due to the reduced resolution of the set-up in this energy range. A second measurement with a duration of one hour was conducted using the Si (044) reflection of the same crystal and the outcome is shown in the inset of Fig. 10. With the improved resolution, the features in the XANES region are closer to the reference data. The EXAFS signal of the Pt foil (Fig. 11(a)) agrees well with the synchrotron measurement up to 20 ̊A-1 in k space. The Fourier transform of the EXAFS signal is also very similar to that calculated from the reference data when the same k range is considered (Fig. 11(b)). 12 Figure 8: Experimental XAS spectrum of a 7.5 μm Fe foil collected in 30 min and covering a range of 700 eV (red) compared with a synchrotron reference (blue). The inset shows a 30 min scan in the XANES range optimized for better performance. 4.5 XAS of austenitic stainless steel at the Fe, Cr and Ni edges. The XAS spectrum of the reference Fe foil was also compared with a foil of austenitic steel with nominal composition 70% Fe, 16% Cr and 14% Ni and thickness of 5 μm. The measurement time for each scan was one hour. While pure Fe crystallizes with the body-centered cubic (bcc) structure, the austenitic steel has a face-centered cubic (fcc) structure at room temperature. The position of the absorption edge is nearly the same in both materials (Fig. 12(a)), however the signal in the EXAFS range is quite different, reflecting the change of the structural type. The same structural change is observed also at the Cr K edge (Fig 12(b)). The collection time for each scan was two hours and the tube settings were adjusted to 30 kV and 55 mA for maximizing the intensity. The Cr reference specimen consists of a 1 μm Cr layer deposited on 6 μm Aluminum foil. The measured signal around the Cr edge is significantly weaker and requires longer measurement times compared to those used for the Fe edge. There are several factors that contribute to this. Due to the small amount of Cr atoms in the two samples, the absorbance steps are around 0.3 and are thus significantly smaller than the value that is considered optimal for XAS measurements, which is 1. For the energy corresponding to the Cr absorption edge the transmission through air is only 28% compared to 47% for Fe (see Fig. 5). Finally, the absorption by the other elements present in the specimens (Fe, Ni and Al, respectively) also reduce the intensity. The combination of these factors causes that the noise level in the data is more than three times worse for the Cr edge compared to the Fe edge. Nevertheless, the differences between the two Cr-containing samples are clearly observed. Since a high-resolution measurement is not required in the pre-edge and post-edge regions, the step-size of the data shown in Fig. 12(b) was increased by a factor of 3 in these regions by rebinning the measured data. This helps to reduce the visible statistical noise. The near-edge region is not modified. Fig. 12(c) shows the XAS spectrum of the steel sample around the Ni K 13 Figure 9: EXAFS signal (a) and the corresponding Fourier transform (b) of Fe calculated from the measurement in Fig. 8 (red) compared with the result of the synchrotron measurement adjusted to the same k range (blue). edge compared with the 6 μm Ni reference foil. Pure Ni crystallizes with the fcc structure and the two materials have similar features in the EXAFS range. However, for the steel sample the oscillations are shifted to lower energies, indicating a different environment of the Ni atoms. The weight fraction of Ni in the alloy is even lower than that of Cr, and the absorbance step is only 0.17. In addition, photons with this energy are strongly absorbed by the other elements in the material, especially by the Fe atoms. This again results in higher noise levels. The same rebinning procedure was applied to the Ni data as described for the Cr case. Despite several experimental factors that affect negatively the collected data, such as low number of absorbing atoms and strong attenuation by the matrix of the material and the air, the measurement set-up was able to provide meaningful XAS results for all three edges that can be used for further analysis in this practical case. 4.6 In operando investigation of an NMC811 battery Understanding the electrochemical processes and their relation to the crystalline structures of the cathode and anode materials is important for improving the capacity and lifetime of batteries. In operando XAS measurements performed during charging and discharging allow the observation of changes in the oxidation states and local coordinations of different atomic types present in the materials (Jahrman et al., 2019b; Genz et al., 2024). A pouch cell battery with an NMC811 cathode (LiNi0.8Mn0.1Co0.1O2) and nominal capacity 50 mAh was investigated in transmission mode with 14 Figure 10: Experimental XAS spectrum of a 7.5 μm Pt foil, collected in 6 hours and covering a range of 1700 eV (red) compared with a synchrotron reference (blue). The inset shows a 1h scan in the XANES range collected using the Si (044) reflection. the experimental configuration shown in Fig. 1 and using a Si (111) crystal. A constant current - constant voltage (CC-CV) charging strategy was employed in the experiment with 0.2C charging rate using an SP-50e potentiostat from Biologic (Claix, France). The lowest and highest applied voltages were 3.0 and 4.2 V, respectively. The charge-discharge cycle was repeated two times and XAS patterns of the Ni absorption edge were collected during the process. Fig. 13 shows a color plot consisting of twenty-two scans, each one with a duration of one hour. In Fig. 13(a) the shift of the absorption edge position between the low and high voltages can be observed, indicating a change of oxidation state of the Ni atoms in the charged and discharged states. The shift is ∼2 eV between 3.0 V and 4.2 V. Fig. 13(b) shows the variation of the XANES signal at energies above the absorption edge that can be attributed to the change in the local environment of the Ni atoms. The results presented here are consistent with the in operando data reported in (Kondrakov et al., 2017; Tsai et al., 2005; Jahrman et al., 2019b). In this case, the X-ray intensity at the Ni edge is attenuated by the battery components, such as the Cu and Al current collectors and the Al pouch. 5 Conclusions The configuration for XAS measurements presented in the current work is implemented on a standard laboratory powder diffractometer with only minor modifications of the hardware and the control software. It has been tested for a number of different materials and the results show that good data quality can be achieved within a reasonable time, ranging from minutes to hours, depending on the composition of the sample, the sample preparation, the extent of the measurement region and other factors. The main differences between this implementation and other laboratory 15 Figure 11: EXAFS signal (a) and the corresponding Fourier transform (b) of Pt calculated from the measurement in Fig. 11 (red) compared with the result of the synchrotron measurement adjusted to the same k range (blue). instruments for XAS are the use of a position-sensitive detector with high energy resolution, very accurate goniometer and the application of continuous scanning of the X-ray source and detector during the measurements. These features are found in modern powder diffractometers, which can be easily reconfigured from diffraction to XAS mode by mounting the crystal analyzer in the center of the goniometer. One advantage of the proposed method is the ability to cover a wide energy range that includes the absorption edges of multiple elements without exchanging the analyzer crystal or other optical components. It also gives the option to switch from high-intensity mode to high-resolution mode by simply repositioning the goniometer to a higher angle corresponding to a different reflection. Enabling XAS measurements with a diffractometer may serve as an entry point for users of diffraction methods who would like to explore XAS and can contribute to the wider use of this technique. Acknowledgements The authors acknowledge prof. Yang-Kook Sun from Hanyang University, Korea for providing the NMC811 cell used in this study. Also, Lei Ding, Sander Weijers, Vladimir Jovanovic and Ferit Cakmak from Malvern Panalytical are acknowledged for their help with the sample preparation and the optimization of the experimental set-up. Conflicts of interest: M.G. declares a patent pending related to the method presented in this work. 16 Data availability: Data sets generated during the current study are available from the corresponding author on reasonable request. References Alonso-Mori, R., Kern, J., Sokaras, D., Weng, T.-C., Nordlund, D., Tran, R., Montanez, P., Delor, J., Yachandra, V. K., Yano, J. & Bergmann, U. (2012). Rev. Sci. Instrum. 83(7), 073114. Cutsail, III, G. E. & DeBeer, S. (2022). ACS Catal. 12, 5864-5886. Deutsch, M., F ̈orster, E., H ̈olzer, G., H ̈artwig, J., H ̈am ̈al ̈ainen, K., Kao, C.-C., Huotari, S. & R, D. (2004). J. Res. Natl. Inst. Stand. Technol. 109(1), 75-98. Ebel, H. (1999). X-Ray Spectrom. 28(4), 255-266. Ebel, H. (2005). Adv. X-ray Anal. 49, 267-273. Genz, N. S., Kallio, A.-J., Meirer, F., Huotari, S. & Weckhuysen, B. M. (2024). Chem. Methods, 4(1), e202300027. Holden, W. M., Hoidn, O. R., Ditter, A. S., Seidler, G. T., Kas, J., Stein, J. L., Cossairt, B. M., Kozimor, S. A., Guo, J., Ye, Y., Marcus, M. A. & Fakra, S. (2017). Rev. Sci. Instrum. 88(7), 073904. Honkanen, A.-P., Ollikkala, S., Ahopelto, T., Kallio, A.-J., Blomberg, M. & Huotari, S. (2019). Rev. Sci. Instrum. 90(3), 033107. Honkanen, A.-P., Verbeni, R., Simonelli, L., Moretti Sala, M., Monaco, G. & Huotari, S. (2014). J. Synchrotron Rad. 21(1), 104-110. v. H ́amos, L. (1932). Naturwiss. 20, 705-706. v. H ́amos, L. (1933). Ann. Phys. 409(6), 716-724. Ismail, I., Journel, L., Vacheresse, R., Travnikova, O., Marin, T., C ́eolin, D., Guillemin, R., Marchenko, T., Zmerli, M., Koulentianos, D., P ̈uttner, R., Palaudoux, J., Penent, F. & Simon, M. (2021). Rev. Sci. Instrum. 92(7), 073104. Jahrman, E. P., Holden, W. M., Ditter, A. S., Mortensen, D. R., Seidler, G. T., Fister, T. T., Kozimor, S. A., Piper, L. F. J., Rana, J., Hyatt, N. C. & Stennett, M. C. (2019a). Rev. Sci. Instrum. 90(2), 024106. Jahrman, E. P., Pellerin, L. A., Ditter, A. S., Bradshaw, L. R., Fister, T. T., Polzin, B. J., Trask, S. E., Dunlop, A. R. & Seidler, G. T. (2019b). J. Electrochem. Soc. 166(12), A2549. Kondrakov, A. O., Geßwein, H., Galdina, K., de Biasi, L., Meded, V., Filatova, E. O., Schumacher, G., Wenzel, W., Hartmann, P., Brezesinski, T. & Janek, J. (2017). J. Phys. Chem. C, 121(44), 24381-24388. Lutz, C. & Fittschen, U. E. A. (2020). Powder Diffr. 35(S1), S24-S28. Mottram, L. M., Cafferkey, S., Mason, A. R., Oulton, T., Kuan Sun, S., Bailey, D. J., Stennett, M. C. & Hyatt, N. C. (2020). J. Geosci. 65(1), 27-35. N ́emeth, Z., Szlachetko, J., Bajn ́oczi, ́E. G. & Vank ́o, G. (2016). Rev. Sci. Instrum. 87(10), 103105. Ravel, B. & Newville, M. (2005). J. Synchrotron Rad. 12(4), 537-541. Schlesiger, C., Praetz, S., Gnewkow, R., Malzer, W. & Kanngießer, B. (2020). J. Anal. At. Spectrom. 35, 2298-2304. Seidler, G. T., Mortensen, D. R., Ditter, A. S., Ball, N. A. & Remesnik, A. J. (2016). J. Phys.: Conf. Ser. 712(1), 012015. Seidler, G. T., Mortensen, D. R., Remesnik, A. J., Pacold, J. I., Ball, N. A., Barry, N., Styczinski, M. & Hoidn, O. R. (2014). Rev. Sci. Instrum. 85(11), 113906. Szlachetko, J., Nachtegaal, M., de Boni, E., Willimann, M., Safonova, O., Sa, J., Smolentsev, G., Szlachetko, M., van Bokhoven, J. A., Dousse, J.-C., Hoszowska, J., Kayser, Y., Jagodzinski, P., Bergamaschi, A., Schmitt, B., David, C. & L ̈ucke, A. (2012). Rev. Sci. Instrum. 83(10), 103105. Tsai, Y. W., Hwang, B. J., Ceder, G., Sheu, H. S., Liu, D. G. & Lee, J. F. (2005). Chem. Mater. 17(12), 3191-3199. Welter, E., Hansen, K., Reckleben, C. & Diehl, I. (2009). Journal of Synchrotron Radiation, 16(2), 293-298. Xiao, X., Ruan, Z., Li, Q., Zhang, L., Meng, H., Zhang, Q., Bao, H., Jiang, B., Zhou, J., Guo, C., Wang, X. & Fu, H. (2022). Advanced Materials, 34(27), 2200612. 17 Zeeshan, F., Hoszowska, J., Loperetti-Tornay, L. & Dousse, J.-C. (2019). Rev. Sci. Instrum. 90(7), 073105. Zimmermann, P., Peredkov, S., Abdala, P. M., DeBeer, S., Tromp, M., M ̈uller, C. & van Bokhoven, J. A. (2020). Coord. Chem. Rev. 423, 213466. 18 Figure 12: (a) XAS spectrum of a Cr-Fe-Ni alloy with nominal composition 70% Fe, 16% Cr, 14% Ni and a thickness of 5 μm measured around the absorption edge of Fe, compared with a reference Fe foil with thickness 7.5 μm. (b) The same alloy measured around the absorption edge of Cr and compared with a reference sample consisting of 1 μm Cr layer deposited on a 6 μm Al foil, and (c) around the Ni edge with a reference measurement of a 6 μm Ni foil. The insets show magnified views of the corresponding XANES ranges. 19 Figure 13: 2D color plots of the variation of the XAS signal measured during charging and discharging of an NMC811 pouch cell battery: (a) XANES range close to the absorption edge of Ni; (b) XANES range above the edge. The color scheme shows the level of the normalized absorbance. 20
|
2509.16261
|
RAFD: FLOW-GUIDED RADAR DETECTION FOR ROBUST AUTONOMOUS DRIVING
Shuocheng Yang, Zikun Xu, Jiahao Wang, Shahid Nawaz, Jianqiang Wang∗, Shaobing Xu∗
School of Vehicle and Mobility
Tsinghua University
Beijing, China
ABSTRACT
Radar has shown strong potential for robust perception in au-
tonomous driving; however, raw radar images are frequently
degraded by noise and “ghost” artifacts, making object de-
tection based solely on semantic features highly challenging.
To address this limitation, we introduce RaFD, a radar-based
object detection framework that estimates inter-frame bird’s-
eye-view (BEV) flow and leverages the resulting geometric
cues to enhance detection accuracy. Specifically, we design
a supervised flow estimation auxiliary task that is jointly
trained with the detection network.
The estimated flow is
further utilized to guide feature propagation from the previ-
ous frame to the current one. Our flow-guided, radar-only
detector achieves achieves state-of-the-art performance on
the RADIATE dataset, underscoring the importance of incor-
porating geometric information to effectively interpret radar
signals, which are inherently ambiguous in semantics.
Index Terms— Radar, Object detection, Flow estimation,
Robust perception, Autonomous driving.
1. INTRODUCTION
Current autonomous driving perception systems rely heavily
on cameras and LiDAR [1, 2, 3], yet their reliability degrades
significantly under adverse weather (e.g., rain, snow, fog).
Radar, with strong penetration capability, is far more robust
in such conditions, making it a highly promising modality for
achieving robust perception in autonomous driving.
However, raw radar signals are noisy, producing numer-
ous “ghost” artifacts. In addition, compared with LiDAR,
radar has lower azimuth resolution, leading to blurred ob-
ject appearances. Consequently, single-frame radar percep-
tion [4, 5, 6] suffers from limited semantic richness, posing
significant challenges for object detection.
To address this issue, several methods have attempted to
leverage temporal information. TempoRadar [7] introduced
a temporal relation layer to associate potential object queries
across two consecutive frames, while SIRA [9] extended this
This work was supported by the National Natural Science Foundation of
China (No. 52372415, 52221005).
∗Corresponding author: {shaobxu, wjqlws}@tsinghua.edu.cn
Preliminary Selected Object Query
Encoder
Encoder
Temporal Relation Layer
Decoder
GT Bboxes
𝐼𝑝𝑟𝑒𝑣
(a) Semantic-only
𝐼𝑐𝑢𝑟𝑟
Decoder
Encoder
Encoder
Flow Estimation Branch
(b) Geometry-Aware (Ours)
GT Bboxes
𝐼𝑝𝑟𝑒𝑣
𝐼𝑐𝑢𝑟𝑟
Decoder
Estimated Flow
GT Flow
Fig. 1. Comparison between detection frameworks. (a)
The semantic-only framework [7, 8, 9] focuses solely on
mining semantic features from consecutive frames. (b) Our
geometry-aware framework explicitly incorporates geometric
consistency through a flow estimation branch, where the esti-
mated flow guides feature propagation to enhance detection.
approach to multi-frame sequences. Despite promising re-
sults on RADIATE [10] dataset, these methods rely solely on
semantic cues. But in weak semantic signal interpretation, hu-
mans typically rely on motion patterns first—recognizing co-
herent trajectories—before reasoning about semantics. Thus
we argue that temporal radar detection should similarly
prioritize cross-frame geometric consistency as a more re-
liable early-stage prior.
Motivated by this, we propose RaFD, a Radar-based
Flow-guided Detector for robust autonomous driving.
As
illustrated in Fig. 1, RaFD differs from previous radar se-
quence perception methods [7, 8, 9] by explicitly modeling
the capture of geometric motion cues to enhance object de-
tection.
Specifically, inspired by advances in optical flow
estimation [11, 12], we introduce a supervised auxiliary
task—BEV scene flow estimation—that encourages the de-
tector to learn temporally consistent geometric representa-
arXiv:2509.16261v1 [cs.RO] 18 Sep 2025
𝜎
Backbone&Necks
Previous Image 𝑰𝒕−𝝉
Current Image 𝑰𝒕
Backbone&Necks
Feature Enhancement
𝑭𝒕
𝑭𝒕−𝝉
Temporal Alignment
𝑺𝒕−𝝉
𝑺𝒕
෩𝑬𝒕−𝝉
෩𝑬𝒕
4D Cost Volume
Softmax
෩𝑺𝒕−𝝉
෩𝑺𝒕
Flow-guided Propagation
𝑽
𝑻𝒕
DETR Head
𝜎
𝓣𝒄
𝓣𝒄
Flow Estimation Branch
Fig. 2. The framework of the RaFD (illustrated with two input frames).
tions at the feature-map level.
Notably, the ground-truth
flow requires no additional labeling; we design an pipeline
that automatically derives it from detection annotations with
instance IDs. Building upon this, we further propose a fea-
ture propagation module that leverages the estimated flow
to guide the aggregation of BEV features across frames for
more robust representation learning.
RaFD is structurally
flexible, supporting seamless extension from two-frame in-
put to multi-frame input. We validate RaFD on the RADI-
ATE [10] dataset under different input frame settings. On the
good-weather split, RaFD achieves 69.36% mAP@0.3,
59.47% mAP@0.5, and 23.92% mAP@0.7, consistently
outperforming existing state-of-the-art approaches.
2. RADAR-BASED FLOW-GUIDED DETECTOR
The overall framework of RaFD is illustrated in Fig.2. We fo-
cus on scanning radar, represented as a BEV grayscale image
I ∈R1×H×W .
RaFD is built upon CenterFormer[13]. Let t denote the
current frame and t −τ the previous frame. After backbone
and neck encoding, we obtain feature maps Ft−τ and Ft,
which are further enhanced to capture global scene context,
yielding St−τ and St. These enhanced features are passed
through a flow estimation branch to model inter-frame mo-
tion. The estimated flow ˆV then guides feature propagation
from the aligned St−τ to St, producing the final motion-aware
representation ˆTt. A convolutional head is subsequently ap-
plied to ˆTt to generate a heatmap of object centers. The top-
K peaks are extracted from this heatmap and used as initial
queries for the DETR head, which refines them to predict off-
sets (ˆox, ˆoy), sizes (ˆh, ˆw), and orientations ˆθ. In the following
sections, we provide a detailed description of the key modules
in the pipeline.
2.1. Feature Enhancement
Since radar images are inherently noisy and blurred, relying
solely on object appearance is unreliable. Optical flow esti-
mation tasks face a similar challenge, as motion videos of-
ten contain blur, and GMFlow [12] highlights that enhanc-
ing features with spatial context is essential. Inspired by this,
we adopt a transformer-based module, where self-attention
aggregates relations across spatial locations. For efficiency,
we employ shifted-window local attention, similar to Swin-
Transformer [14], with window size Hf
2 × Wf
2 , and use single-
head attention to reduce computational complexity. Formally,
one block is defined as
ˆF l = Tw
F l−1
, F l = Tsw
ˆF l
.
(1)
where Tw and Tsw denote regular and shifted-window self-
attention, respectively. By stacking two such blocks, we ob-
tain St−τ and St, which are enhanced with spatial context.
2.2. Flow Estimation
Flow Estimation serves as an auxiliary task in RaFD, estimat-
ing motion vector fields between adjacent radar image frames
at the feature-map level. Supervised by ground-truth flow,
this task equips the front-end feature encoder with the ability
to capture geometric consistency across the scene, while the
predicted flow further supports feature propagation for object
detection.
Given the spatial enhanced feature S, the network φ en-
codes it into flow features E = φ(S) ∈RCf /2×Hf ×Wf using
two weight-shared convolutional layers with batch normaliza-
tion. We then align Et−τ and Et by mapping each grid’s cen-
ter coordinates Pt to the previous frame using the ego-pose
transformation Tt→(t−τ). Bilinear interpolation retrieves fea-
tures at the transformed points, while out-of-range locations
Flow-Guided Attention
Add & Norm
Feed Forward
Add & Norm
𝑉
ሚ𝑆𝑡−𝜏
ሚ𝑆𝑡
× 2
𝑉
Fig. 3. Flow-Guided Attention. The reference points in the
deformable attention are adjusted according to the estimated
flow. Red regions highlight object locations.
Flow Estimation
Flow
Flow
Flow
Decoder
𝑰𝒕−𝟑𝝉
𝑰𝒕−𝟐𝝉
𝑰𝒕−𝝉
𝑰𝒕
𝑻𝒕
Flow Estimation
Flow Estimation
Fig. 4. The simple framework of RaFD with four-frame
input. Flow estimation is performed between every two con-
secutive frames. Several flow estimation and feature propaga-
tion modules share parameters.
default to Et, producing aligned features eEt−τ and eEt that
represent the same real-world locations. This alignment also
enables non-object regions to be set to zero during ground-
truth flow generation.
To capture the temporal dependency between eEt−τ and
eEt, , we apply two global cross-attention blocks:
eEl
t−τ = Tc
eEl−1
t−τ, eEl−1
t
, eEl
t = Tc
eEl−1
t
, ˆEl
t−τ
,
(2)
where Tc denotes single-head cross-attention with feed-
forward layers.
Next, we construct a 4D cost volume C ∈RH×W ×H×W
via feature similarity,
C(i,j,k,l) =
1
p
Cf/2
Cf /2
X
c
eE(c,i,j)
t
· eE(c,k,l)
t−τ
,
(3)
where the brackets after the feature maps indicate the value
at the given indices. This computation can be implemented
efficiently with a simple matrix multiplication.
Selecting the highest similarity positions in C would
yield dense correspondences, but this operation is not differ-
entiable. Inspired by GMFlow [12], we instead normalize the
last two dimensions of C using a softmax operation to obtain
a matching distribution. The flow ˆV is then computed as:
ˆV = G −G · softmax(C) ∈R2×Hf ×Wf .
(4)
where G ∈R2×Hf ×Wf denote the 2D coordinates of the pixel
grid on the feature map.
2.3. Flow-guided Propagation
The module is designed to propagate features from the pre-
vious frame to the current frame using the estimated flow ˆV .
To achieve this, we adjust the reference points in deformable
attention [15] according to the flow, as shown in Fig.3:
Rt−τ =
r(i,j)
t−τ = r(i,j)
t
−ˆV (i,j) Hf ,Wf
i=1,j=1,
(5)
The flow-guided deformable attention then aggregates infor-
mation from the temporally aligned features eSt−τ and eSt:
ˆTt = Tf
eSt, eSt−τ
,
(6)
where Tf denotes single-head flow-guided attention with
feed-forward layers.
We stack two such blocks to enable
more comprehensive feature propagation.
2.4. Extension
As shown in Fig.4, RaFD can be naturally extended from two-
frame to multi-frame inputs through its modular flow-guided
propagation mechanism. For each consecutive pair of frames,
the estimated flow vectors ˆVt→t−τ iteratively align and trans-
fer features across the sequence, enabling long-range tem-
poral propagation. This design maintains feature coherence
over extended frame sequences and strengthens the model’s
capacity to capture temporal dependencies in dynamic envi-
ronments.
2.5. Training
Since the dataset does not provide scene flow annotations,
we automatically derive pseudo ground-truth flow Vgt from
detection annotations with instance IDs. Specifically, Vgt is
constructed at the feature-map scale Hf × Wf, where each
object center is rendered as a Gaussian region with radius
σ = max(f(hw), γ). With object IDs, instances are asso-
ciated across frames, and the pose transformation Tt→(t−τ)
is applied to align their coordinates. For each Gaussian re-
gion, the displacement of matched pixels defines the object
flow, while background pixels are set to zero.
The predicted flow ˆV is supervised using an l1 loss:
Lf = ∥ˆV −Vgt∥1.
(7)
The overall training loss combines detection and flow su-
pervision.
L = Ldet + Lflow.
(8)
We supervise the flow between every two consecutive input.
Trained on good-weather split
Trained on good-and-bad-weather split
mAP@0.3
mAP@0.5
mAP@0.7
EPE
mAP@0.3
mAP@0.5
mAP@0.7
EPE
CenterPoint (1)
59.42± 1.92
50.17± 1.91
18.93± 1.46
-
53.92± 3.44
42.81± 3.04
13.43± 1.92
-
CenterFormer (1)
61.79± 1.37
52.57± 1.53
19.24± 0.96
-
57.13± 1.75
44.80± 1.32
14.55± 0.82
-
TempoRadar (2)
63.63± 2.08
54.00± 2.16
21.08± 1.66
-
56.18± 4.27
43.98± 3.75
14.35± 2.15
-
RaFD (2)
65.89± 0.97
55.13± 1.26
22.75± 1.03
0.1689
61.95± 2.07
50.23± 1.83
17.58± 1.44
0.1778
SCTR (4)
68.06± 1.60
57.03± 1.34
22.62± 1.18
-
66.01± 1.05
52.55± 0.96
19.18± 1.02
-
SIRA (4)
68.68± 1.12
58.11± 1.40
22.81± 0.86
-
66.14± 0.83
53.79± 1.14
19.85± 0.95
-
RaFD (4)
69.36± 1.45
59.47± 1.92
23.92± 1.11
0.1556
68.83± 1.81
54.20± 1.66
18.84± 1.56
0.1669
Table 1. Benchmark results of different methods on the RADIATE dataset. Numbers in parentheses indicate frames input.
mAP@0.3
mAP@0.5
mAP@0.7
Baseline
64.29
55.14
21.79
+ Feature Enhancement
66.92 (+2.63)
56.80 (+1.66)
22.41 (+0.64)
+ Flow Estimation
67.76 (+0.84)
57.78 (+0.98)
22.69 (+0.28)
+ Flow-guided Propagation
69.36 (+1.60)
59.47 (+1.69)
23.92 (+1.23)
Table 2.
Ablation study of RaFD on RADIATE dataset.
Baseline refers to four-frame input CenterFormer [13] us-
ing vanilla deformable attention [15] for cross-frame feature
propagation.
3. EXPERIMENT
3.1. Experimental Setup
We evaluate RaFD on the RADIATE dataset [10], which pro-
vides high-resolution radar images (but without Doppler di-
mension) under diverse weather conditions.
Following [7,
8, 9], we adopt the predefined train-on-good-weather, train-
on-good-and-bad-weather, and test splits. Pedestrians are ex-
cluded from detection, and input images are center-cropped to
256 × 256, and objects outside this region are ignored. Rela-
tive poses between consecutive frames are obtained via [16].
Training is performed on 8 RTX 3090 GPUs with batch size
8, learning rate 2 × 10−4, and weight decay 1 × 10−2 us-
ing the Adam for 10 epochs. Oriented object detection is
evaluated with mean Average Precision (mAP) at IoU thresh-
olds 0.3, 0.5, 0.7, and flow estimation with End-Point Error
(EPE), defined as the average ℓ2 distance between predicted
and ground-truth flow vectors.
3.2. Results Analysis
The quantitative results are summarized in Tab.1 and Tab.2.
RaFD is compared with CenterPoint, CenterFormer, Tempo-
Radar [7], SCTR [8], and SIRA [9], all using ResNet-34 back-
bones. RaFD consistently achieves the best performance for
both two-frame and four-frame inputs. Notably, the perfor-
mance degradation on the good-and-bad-weather split is min-
imal compared to [7, 8, 9], which we attribute to the rela-
SIRA (4)
RaFD (4)
𝑰𝒕−𝝉
𝑰𝒕
𝑽
(a)
(b)
Fig. 5. Visualization of object detection and flow estima-
tion. Green boxes represent ground truth, while red boxes
represent predictions.
tively stable performance of the flow estimation. This further
demonstrates the beneficial effect of flow guidance for object
detection. Incorporating feature enhancement, flow estima-
tion, and flow-guided propagation modules leads to consis-
tent performance gains, validating the effectiveness of each
component. The visualization results are provided in Fig.5
for intuitive understanding.
4. CONCLUSION
In this paper, we present RaFD, a radar-based flow-guided
detector designed for robust autonomous driving, which ex-
plicitly estimates and leverages inter-frame geometric clues.
By incorporating an auxiliary supervised flow estimation task,
RaFD captures temporal consistency and guides feature prop-
agation across frames.
Extensive experiments on the RA-
DIATE dataset demonstrate that RaFD outperforms existing
radar-only methods, offering a robust, accurate, and scalable
solution for radar perception in challenging scenarios.
5. REFERENCES
[1] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie,
Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai,
“BEVFormer:
Learning Bird’s-Eye-View Represen-
tation from Multi-camera Images via Spatiotemporal
Transformers,”
in Computer Vision – ECCV 2022,
Shai Avidan, Gabriel Brostow, Moustapha Ciss´e, Gio-
vanni Maria Farinella, and Tal Hassner, Eds., vol.
13669, pp. 1–18. Springer Nature Switzerland, Cham,
2022.
[2] Jonah Philion and Sanja Fidler, “Lift, Splat, Shoot: En-
coding Images from Arbitrary Camera Rigs by Implic-
itly Unprojecting to 3D,” in Computer Vision – ECCV
2020, Andrea Vedaldi, Horst Bischof, Thomas Brox,
and Jan-Michael Frahm, Eds., Cham, 2020, pp. 194–
210, Springer International Publishing.
[3] Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong,
Yuexin Ma, Wei Li, Hongsheng Li, and Dahua Lin,
“Cylindrical and Asymmetrical 3D Convolution Net-
works for LiDAR Segmentation,” in 2021 IEEE/CVF
Conference on Computer Vision and Pattern Recogni-
tion (CVPR), Nashville, TN, USA, June 2021, pp. 9934–
9943, IEEE.
[4] Michael Meyer, Georg Kuschk, and Sven Tomforde,
“Graph Convolutional Networks for 3D Object Detec-
tion on Radar Data,”
in 2021 IEEE/CVF Interna-
tional Conference on Computer Vision Workshops (IC-
CVW), Montreal, BC, Canada, Oct. 2021, pp. 3053–
3062, IEEE.
[5] Alexander Popov, Patrik Gebhardt, Ke Chen, and Ryan
Oldja, “NVRadarNet: Real-Time Radar Obstacle and
Free Space Detection for Autonomous Driving,”
in
2023 IEEE International Conference on Robotics and
Automation (ICRA), May 2023, pp. 6958–6964.
[6] Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk
Kum, and Jun Won Choi,
“RadarDistill: Boosting
Radar-Based Object Detection Performance via Knowl-
edge Distillation from LiDAR Features,”
in 2024
IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), June 2024, pp. 15491–15500.
[7] Peizhao Li, Pu Wang, Karl Berntorp, and Hongfu Liu,
“Exploiting Temporal Relations on Radar Perception for
Autonomous Driving,” in 2022 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR),
New Orleans, LA, USA, June 2022, pp. 17050–17059,
IEEE.
[8] Ryoma Yataka, Pu Wang, Petros Boufounos, and
Ryuhei Takahashi,
“Radar Perception with Scalable
Connective Temporal Relations for Autonomous Driv-
ing,” in ICASSP 2024 - 2024 IEEE International Con-
ference on Acoustics, Speech and Signal Processing
(ICASSP), Apr. 2024, pp. 13266–13270.
[9] Ryoma Yataka, Pu Wang, Petros Boufounos, and
Ryuhei Takahashi,
“SIRA: Scalable Inter-Frame Re-
lation and Association for Radar Perception,” in 2024
IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), Seattle, WA, USA, June 2024, pp.
15024–15034, IEEE.
[10] Marcel Sheeny, Emanuele De Pellegrin, Saptarshi
Mukherjee, Alireza Ahrabian, Sen Wang, and Andrew
Wallace, “RADIATE: A Radar Dataset for Automotive
Perception in Bad Weather,” in 2021 IEEE International
Conference on Robotics and Automation (ICRA), May
2021, pp. 1–7.
[11] Zachary Teed and Jia Deng,
“RAFT: Recurrent All-
Pairs Field Transforms for Optical Flow,” in Computer
Vision – ECCV 2020, Andrea Vedaldi, Horst Bischof,
Thomas Brox, and Jan-Michael Frahm, Eds., Cham,
2020, pp. 402–419, Springer International Publishing.
[12] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi,
and Dacheng Tao, “GMFlow: Learning Optical Flow
via Global Matching,” in 2022 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR),
New Orleans, LA, USA, June 2022, pp. 8111–8120,
IEEE.
[13] Zixiang Zhou, Xiangchen Zhao, Yu Wang, Panqu Wang,
and Hassan Foroosh,
“CenterFormer: Center-Based
Transformer for 3D Object Detection,”
in Computer
Vision – ECCV 2022, Shai Avidan, Gabriel Brostow,
Moustapha Ciss´e, Giovanni Maria Farinella, and Tal
Hassner, Eds., Cham, 2022, pp. 496–513, Springer Na-
ture Switzerland.
[14] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei,
Zheng Zhang, Stephen Lin, and Baining Guo, “Swin
Transformer:
Hierarchical Vision Transformer using
Shifted Windows,”
in 2021 IEEE/CVF International
Conference on Computer Vision (ICCV), Montreal, QC,
Canada, Oct. 2021, pp. 9992–10002, IEEE.
[15] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang
Wang, and Jifeng Dai, “Deformable DETR: Deformable
Transformers for End-to-End Object Detection,” in In-
ternational Conference on Learning Representations,
Oct. 2020.
[16] Shuocheng Yang, Yueming Cao, Shengbo Eben Li, Jian-
qiang Wang, and Shaobing Xu, “RINO: Accurate, Ro-
bust Radar-Inertial Odometry With Non-Iterative Esti-
mation,” IEEE Transactions on Automation Science and
Engineering, vol. 22, pp. 20420–20434, 2025.
|
RAFD: FLOW-GUIDED RADAR DETECTION FOR ROBUST AUTONOMOUS DRIVING Shuocheng Yang, Zikun Xu, Jiahao Wang, Shahid Nawaz, Jianqiang Wang∗, Shaobing Xu∗ - tonomous driving; however, raw radar images are frequently degraded by noise and "ghost" artifacts, making object detection based solely on semantic features highly challenging. To address this limitation, we introduce RaFD, a radar-based object detection framework that estimates inter-frame bird'seye-view (BEV) flow and leverages the resulting geometric cues to enhance detection accuracy. Specifically, we design a supervised flow estimation auxiliary task that is jointly trained with the detection network. The estimated flow is further utilized to guide feature propagation from the previous frame to the current one. Our flow-guided, radar-only detector achieves achieves state-of-the-art performance on the RADIATE dataset, underscoring the importance of incorporating geometric information to effectively interpret radar signals, which are inherently ambiguous in semantics. Index Terms- Radar, Object detection, Flow estimation, Robust perception, Autonomous driving. 1. INTRODUCTION Current autonomous driving perception systems rely heavily on cameras and LiDAR [1, 2, 3], yet their reliability degrades significantly under adverse weather (e.g., rain, snow, fog). Radar, with strong penetration capability, is far more robust in such conditions, making it a highly promising modality for achieving robust perception in autonomous driving. However, raw radar signals are noisy, producing numerous "ghost" artifacts. In addition, compared with LiDAR, radar has lower azimuth resolution, leading to blurred object appearances. Consequently, single-frame radar perception [4, 5, 6] suffers from limited semantic richness, posing significant challenges for object detection. To address this issue, several methods have attempted to leverage temporal information. TempoRadar [7] introduced a temporal relation layer to associate potential object queries across two consecutive frames, while SIRA [9] extended this This work was supported by the National Natural Science Foundation of China (No. 52372415, 52221005). ∗Corresponding author: {shaobxu, Preliminary Selected Object Query Encoder Encoder Temporal Relation Layer Decoder GT Bboxes Iprev (a) Semantic-only Icurr Decoder Encoder Encoder Flow Estimation Branch (b) Geometry-Aware (Ours) GT Bboxes Iprev Icurr Decoder Estimated Flow GT Flow Fig. 1. Comparison between detection frameworks. (a) The semantic-only framework [7, 8, 9] focuses solely on mining semantic features from consecutive frames. (b) Our geometry-aware framework explicitly incorporates geometric consistency through a flow estimation branch, where the estimated flow guides feature propagation to enhance detection. approach to multi-frame sequences. Despite promising results on RADIATE [10] dataset, these methods rely solely on semantic cues. But in weak semantic signal interpretation, humans typically rely on motion patterns first-recognizing coherent trajectories-before reasoning about semantics. Thus we argue that temporal radar detection should similarly prioritize cross-frame geometric consistency as a more reliable early-stage prior. Motivated by this, we propose RaFD, a Radar-based Flow-guided Detector for robust autonomous driving. As illustrated in Fig. 1, RaFD differs from previous radar sequence perception methods [7, 8, 9] by explicitly modeling the capture of geometric motion cues to enhance object detection. Specifically, inspired by advances in optical flow estimation [11, 12], we introduce a supervised auxiliary task-BEV scene flow estimation-that encourages the detector to learn temporally consistent geometric representa18 Sep 2025 σ Backbone&Necks Previous Image It-τ Current Image It Backbone&Necks Feature Enhancement Ft Ft-τ Temporal Alignment St-τ St ෩Et-τ ෩Et 4D Cost Volume Softmax ෩St-τ ෩St Flow-guided Propagation V Tt DETR Head σ Tc Tc Flow Estimation Branch Fig. 2. The framework of the RaFD (illustrated with two input frames). tions at the feature-map level. Notably, the ground-truth flow requires no additional labeling; we design an pipeline that automatically derives it from detection annotations with instance IDs. Building upon this, we further propose a feature propagation module that leverages the estimated flow to guide the aggregation of BEV features across frames for more robust representation learning. RaFD is structurally flexible, supporting seamless extension from two-frame input to multi-frame input. We validate RaFD on the RADIATE [10] dataset under different input frame settings. On the good-weather split, RaFD achieves 69.36% , 59.47% , and 23.92% , consistently outperforming existing state-of-the-art approaches. 2. RADAR-BASED FLOW-GUIDED DETECTOR The overall framework of RaFD is illustrated in Fig.2. We focus on scanning radar, represented as a BEV grayscale image I ∈R1×H×W . RaFD is built upon CenterFormer[13]. Let t denote the current frame and t -τ the previous frame. After backbone and neck encoding, we obtain feature maps Ft-τ and Ft, which are further enhanced to capture global scene context, yielding St-τ and St. These enhanced features are passed through a flow estimation branch to model inter-frame motion. The estimated flow ˆV then guides feature propagation from the aligned St-τ to St, producing the final motion-aware representation ˆTt. A convolutional head is subsequently applied to ˆTt to generate a heatmap of object centers. The topK peaks are extracted from this heatmap and used as initial queries for the DETR head, which refines them to predict offsets (ˆox, ˆoy), sizes (ˆh, ˆw), and orientations ˆθ. In the following sections, we provide a detailed description of the key modules in the pipeline. 2.1. Feature Enhancement Since radar images are inherently noisy and blurred, relying solely on object appearance is unreliable. Optical flow estimation tasks face a similar challenge, as motion videos often contain blur, and GMFlow [12] highlights that enhancing features with spatial context is essential. Inspired by this, we adopt a transformer-based module, where self-attention aggregates relations across spatial locations. For efficiency, we employ shifted-window local attention, similar to SwinTransformer [14], with window size Hf 2 × Wf 2 , and use singlehead attention to reduce computational complexity. Formally, one block is defined as ˆF l = Tw F l-1 , F l = Tsw ˆF l . (1) where Tw and Tsw denote regular and shifted-window selfattention, respectively. By stacking two such blocks, we obtain St-τ and St, which are enhanced with spatial context. 2.2. Flow Estimation Flow Estimation serves as an auxiliary task in RaFD, estimating motion vector fields between adjacent radar image frames at the feature-map level. Supervised by ground-truth flow, this task equips the front-end feature encoder with the ability to capture geometric consistency across the scene, while the predicted flow further supports feature propagation for object detection. Given the spatial enhanced feature S, the network φ encodes it into flow features E = φ(S) ∈RCf /2×Hf ×Wf using two weight-shared convolutional layers with batch normalization. We then align Et-τ and Et by mapping each grid's center coordinates Pt to the previous frame using the ego-pose transformation Tt→(t-τ). Bilinear interpolation retrieves features at the transformed points, while out-of-range locations Flow-Guided Attention Add & Norm Feed Forward Add & Norm V ሚSt-τ ሚSt × 2 V Fig. 3. Flow-Guided Attention. The reference points in the deformable attention are adjusted according to the estimated flow. Red regions highlight object locations. Flow Estimation Flow Flow Flow Decoder It-3τ It-2τ It-τ It Tt Flow Estimation Flow Estimation Fig. 4. The simple framework of RaFD with four-frame input. Flow estimation is performed between every two consecutive frames. Several flow estimation and feature propagation modules share parameters. default to Et, producing aligned features eEt-τ and eEt that represent the same real-world locations. This alignment also enables non-object regions to be set to zero during groundtruth flow generation. To capture the temporal dependency between eEt-τ and eEt, , we apply two global cross-attention blocks: eEl t-τ = Tc eEl-1 t-τ, eEl-1 t , eEl t = Tc eEl-1 t , ˆEl t-τ , (2) where Tc denotes single-head cross-attention with feedforward layers. Next, we construct a 4D cost volume C ∈RH×W ×H×W via feature similarity, C(i,j,k,l) = 1 p Cf/2 Cf /2 X c eE(c,i,j) t · eE(c,k,l) t-τ , (3) where the brackets after the feature maps indicate the value at the given indices. This computation can be implemented efficiently with a simple matrix multiplication. Selecting the highest similarity positions in C would yield dense correspondences, but this operation is not differentiable. Inspired by GMFlow [12], we instead normalize the last two dimensions of C using a softmax operation to obtain a matching distribution. The flow ˆV is then computed as: ˆV = G -G · softmax(C) ∈R2×Hf ×Wf . (4) where G ∈R2×Hf ×Wf denote the 2D coordinates of the pixel grid on the feature map. 2.3. Flow-guided Propagation The module is designed to propagate features from the previous frame to the current frame using the estimated flow ˆV . To achieve this, we adjust the reference points in deformable attention [15] according to the flow, as shown in Fig.3: Rt-τ = r(i,j) t-τ = r(i,j) t -ˆV (i,j) Hf ,Wf i=1,j=1, (5) The flow-guided deformable attention then aggregates information from the temporally aligned features eSt-τ and eSt: ˆTt = Tf eSt, eSt-τ , (6) where Tf denotes single-head flow-guided attention with feed-forward layers. We stack two such blocks to enable more comprehensive feature propagation. 2.4. Extension As shown in Fig.4, RaFD can be naturally extended from twoframe to multi-frame inputs through its modular flow-guided propagation mechanism. For each consecutive pair of frames, the estimated flow vectors ˆVt→t-τ iteratively align and transfer features across the sequence, enabling long-range temporal propagation. This design maintains feature coherence over extended frame sequences and strengthens the model's capacity to capture temporal dependencies in dynamic environments. 2.5. Training Since the dataset does not provide scene flow annotations, we automatically derive pseudo ground-truth flow Vgt from detection annotations with instance IDs. Specifically, Vgt is constructed at the feature-map scale Hf × Wf, where each object center is rendered as a Gaussian region with radius σ = max(f(hw), γ). With object IDs, instances are associated across frames, and the pose transformation Tt→(t-τ) is applied to align their coordinates. For each Gaussian region, the displacement of matched pixels defines the object flow, while background pixels are set to zero. The predicted flow ˆV is supervised using an l1 loss: Lf = ∥ˆV -Vgt∥1. (7) The overall training loss combines detection and flow supervision. L = Ldet + Lflow. (8) We supervise the flow between every two consecutive input. Trained on good-weather split Trained on good-and-bad-weather split EPE EPE CenterPoint (1) 59.42± 1.92 50.17± 1.91 18.93± 1.46 - 53.92± 3.44 42.81± 3.04 13.43± 1.92 - CenterFormer (1) 61.79± 1.37 52.57± 1.53 19.24± 0.96 - 57.13± 1.75 44.80± 1.32 14.55± 0.82 - TempoRadar (2) 63.63± 2.08 54.00± 2.16 21.08± 1.66 - 56.18± 4.27 43.98± 3.75 14.35± 2.15 - RaFD (2) 65.89± 0.97 55.13± 1.26 22.75± 1.03 0.1689 61.95± 2.07 50.23± 1.83 17.58± 1.44 0.1778 SCTR (4) 68.06± 1.60 57.03± 1.34 22.62± 1.18 - 66.01± 1.05 52.55± 0.96 19.18± 1.02 - SIRA (4) 68.68± 1.12 58.11± 1.40 22.81± 0.86 - 66.14± 0.83 53.79± 1.14 19.85± 0.95 - RaFD (4) 69.36± 1.45 59.47± 1.92 23.92± 1.11 0.1556 68.83± 1.81 54.20± 1.66 18.84± 1.56 0.1669 Table 1. Benchmark results of different methods on the RADIATE dataset. Numbers in parentheses indicate frames input. Baseline 64.29 55.14 21.79 + Feature Enhancement 66.92 (+2.63) 56.80 (+1.66) 22.41 (+0.64) + Flow Estimation 67.76 (+0.84) 57.78 (+0.98) 22.69 (+0.28) + Flow-guided Propagation 69.36 (+1.60) 59.47 (+1.69) 23.92 (+1.23) Table 2. Ablation study of RaFD on RADIATE dataset. Baseline refers to four-frame input CenterFormer [13] using vanilla deformable attention [15] for cross-frame feature propagation. 3. EXPERIMENT 3.1. Experimental Setup We evaluate RaFD on the RADIATE dataset [10], which provides high-resolution radar images (but without Doppler dimension) under diverse weather conditions. Following [7, 8, 9], we adopt the predefined train-on-good-weather, trainon-good-and-bad-weather, and test splits. Pedestrians are excluded from detection, and input images are center-cropped to 256 × 256, and objects outside this region are ignored. Relative poses between consecutive frames are obtained via [16]. Training is performed on 8 RTX 3090 GPUs with batch size 8, learning rate 2 × 10-4, and weight decay 1 × 10-2 using the Adam for 10 epochs. Oriented object detection is evaluated with mean Average Precision (mAP) at IoU thresholds 0.3, 0.5, 0.7, and flow estimation with End-Point Error (EPE), defined as the average l2 distance between predicted and ground-truth flow vectors. 3.2. Results Analysis The quantitative results are summarized in Tab.1 and Tab.2. RaFD is compared with CenterPoint, CenterFormer, TempoRadar [7], SCTR [8], and SIRA [9], all using ResNet-34 backbones. RaFD consistently achieves the best performance for both two-frame and four-frame inputs. Notably, the performance degradation on the good-and-bad-weather split is minimal compared to [7, 8, 9], which we attribute to the relaSIRA (4) RaFD (4) It-τ It V (a) (b) Fig. 5. Visualization of object detection and flow estimation. Green boxes represent ground truth, while red boxes represent predictions. tively stable performance of the flow estimation. This further demonstrates the beneficial effect of flow guidance for object detection. Incorporating feature enhancement, flow estimation, and flow-guided propagation modules leads to consistent performance gains, validating the effectiveness of each component. The visualization results are provided in Fig.5 for intuitive understanding. 4. CONCLUSION In this paper, we present RaFD, a radar-based flow-guided detector designed for robust autonomous driving, which explicitly estimates and leverages inter-frame geometric clues. By incorporating an auxiliary supervised flow estimation task, RaFD captures temporal consistency and guides feature propagation across frames. Extensive experiments on the RADIATE dataset demonstrate that RaFD outperforms existing radar-only methods, offering a robust, accurate, and scalable solution for radar perception in challenging scenarios. 5. REFERENCES [1] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai, "BEVFormer: Learning Bird's-Eye-View Representation from Multi-camera Images via Spatiotemporal Transformers," in Computer Vision - ECCV 2022, Shai Avidan, Gabriel Brostow, Moustapha Ciss ́e, Giovanni Maria Farinella, and Tal Hassner, Eds., vol. 13669, pp. 1-18. Springer Nature Switzerland, Cham, 2022. [2] Jonah Philion and Sanja Fidler, "Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D," in Computer Vision - ECCV 2020, Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, Eds., Cham, 2020, pp. 194210, Springer International Publishing. [3] Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong, Yuexin Ma, Wei Li, Hongsheng Li, and Dahua Lin, "Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, June 2021, pp. 99349943, IEEE. [4] Michael Meyer, Georg Kuschk, and Sven Tomforde, "Graph Convolutional Networks for 3D Object Detection on Radar Data," in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, Oct. 2021, pp. 30533062, IEEE. [5] Alexander Popov, Patrik Gebhardt, Ke Chen, and Ryan Oldja, "NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving," in 2023 IEEE International Conference on Robotics and Automation (ICRA), May 2023, pp. 6958-6964. [6] Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, and Jun Won Choi, "RadarDistill: Boosting Radar-Based Object Detection Performance via Knowledge Distillation from LiDAR Features," in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, pp. 15491-15500. [7] Peizhao Li, Pu Wang, Karl Berntorp, and Hongfu Liu, "Exploiting Temporal Relations on Radar Perception for Autonomous Driving," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 2022, pp. 17050-17059, IEEE. [8] Ryoma Yataka, Pu Wang, Petros Boufounos, and Ryuhei Takahashi, "Radar Perception with Scalable Connective Temporal Relations for Autonomous Driving," in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2024, pp. 13266-13270. [9] Ryoma Yataka, Pu Wang, Petros Boufounos, and Ryuhei Takahashi, "SIRA: Scalable Inter-Frame Relation and Association for Radar Perception," in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024, pp. 15024-15034, IEEE. [10] Marcel Sheeny, Emanuele De Pellegrin, Saptarshi Mukherjee, Alireza Ahrabian, Sen Wang, and Andrew Wallace, "RADIATE: A Radar Dataset for Automotive Perception in Bad Weather," in 2021 IEEE International Conference on Robotics and Automation (ICRA), May 2021, pp. 1-7. [11] Zachary Teed and Jia Deng, "RAFT: Recurrent AllPairs Field Transforms for Optical Flow," in Computer Vision - ECCV 2020, Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, Eds., Cham, 2020, pp. 402-419, Springer International Publishing. [12] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao, "GMFlow: Learning Optical Flow via Global Matching," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 2022, pp. 8111-8120, IEEE. [13] Zixiang Zhou, Xiangchen Zhao, Yu Wang, Panqu Wang, and Hassan Foroosh, "CenterFormer: Center-Based Transformer for 3D Object Detection," in Computer Vision - ECCV 2022, Shai Avidan, Gabriel Brostow, Moustapha Ciss ́e, Giovanni Maria Farinella, and Tal Hassner, Eds., Cham, 2022, pp. 496-513, Springer Nature Switzerland. [14] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo, "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows," in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, Oct. 2021, pp. 9992-10002, IEEE. [15] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai, "Deformable DETR: Deformable Transformers for End-to-End Object Detection," in International Conference on Learning Representations, Oct. 2020. [16] Shuocheng Yang, Yueming Cao, Shengbo Eben Li, Jianqiang Wang, and Shaobing Xu, "RINO: Accurate, Robust Radar-Inertial Odometry With Non-Iterative Estimation," IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 20420-20434, 2025.
|
2509.16260
|
How Digital Transformation Impacts Corporate Green Innovation?
Chen Hanqin
Abstract:Digitalization is a defining feature of our time, and green innovation has become one
of the necessary avenues for firms to achieve sustainable development. Using financial statements
and annual report data for China’s A-share listed firms from 2010–2019, this paper constructs a firm-
level digital transformation indicator and examines the impact of digital transformation on green
innovation and its mechanisms. The results show that digital transformation promotes firms’ green-
innovation output, with a diminishing marginal impact over time. Mechanism tests indicate that
digital transformation boosts green-innovation output by increasing R&D investment and
strengthening environmental management. Heterogeneity analysis shows that the promotion effect is
more pronounced for small and medium-sized firms and for firms in technology-intensive industries.
To improve the green-innovation incentives of digital transformation, firms should formulate long-
term strategies and continuously strengthen policy regulation and incentives.
Keywords: digital transformation; green innovation; environmental management certification;
R&D investment
Ⅰ. Introduction
The rise of digitalization is a salient feature of economic and social development since the
beginning of the twenty-first century. Since the Eighteenth National Congress of the Communist Party
of China, the Central Committee has attached great importance to developing the digital economy
and has elevated it to a national strategy. The report to the Twentieth National Congress further
explicitly proposed to accelerate the building of a strong manufacturing country, a strong quality
country, a strong aerospace country, a strong transportation country, a strong cyber power, and a
Digital China. At present, a surging wave of informatization is sweeping the globe, and countries
worldwide regard the advancement of economic digitalization as an important driving force for
achieving innovative development. The digital economy is the core of the economic transformation
of China, and the degree of enterprise digital transformation is the foundation for achieving this
objective. In the era of digitalization, an increasing number of enterprises have come to realize the
importance of digital transformation for their business operations and sustainable development, and
green innovation has become one of the necessary pathways for enterprises to achieve sustainable
development.
Although over the past decade the digital economy of China has flourished and the scale of the
industry has continued to grow rapidly, ranking second in the world for many years, many small and
medium-sized enterprises in China are still at the initial stage of digital transformation, and only a
small share of enterprises have reached the stage of deep application. The China Digital Economy
Development Report (2022) shows that from 2012 to 2021 the scale of the digital economy of China
increased from 11 trillion yuan to 45.5 trillion yuan, and its share of gross domestic product rose from
21.6 percent to 39.8 percent. The data indicate that more than ninety percent of enterprises remain at
the normative stage of digital transformation, and fewer than ten percent of enterprises have reached
the domain level. Against the backdrop that the overall digital transformation of enterprises in China
is still at an initial stage, does enterprise digital transformation have a significant effect on green
innovation. If there is a positive effect, through which channels and mechanisms does it promote
green innovation output. Furthermore, which types of enterprises are more adept at achieving growth
in green innovation output through digital transformation.
As the main driving forces of the new round of scientific and technological revolution,
digitalization and informatization have become important subjects of academic research. Existing
studies mainly focus on the effect of information technology on enterprise innovation. In the early
stage of informatization, information technology creates value at the intermediate stage of the
production process by raising innovation productivity, and it plays a key role in promoting
breakthrough innovation through research and development and other intangible factors such as skills
and knowledge (Kleis et al., 2012). Informatization construction can also significantly improve the
quality of patents of enterprises and promote the growth of invention patents, rather than only driving
short-cycle practical utility models and design patents, and it raises the output efficiency of invention
patents of enterprises (Li Lei et al., 2022). As an advanced stage of the development of informatization,
digitalization places greater emphasis on the comprehensive and in-depth application of information
technology to create value. Enterprise digital transformation can improve environmental impact by
promoting green technological innovation, enhancing environmental information disclosure, and
strengthening governance (Qiong Xu et al., 2021). Digital transformation also promotes enterprise
innovation through mechanisms such as realizing open and networked innovation, leading
organizational and managerial innovation, and raising the level of human capital within enterprises
(An Tongliang, 2022).
Although digital transformation has attracted wide attention in the field of enterprise innovation,
research on its effect on green innovation remains relatively limited. In recent years, with the
continuous enhancement of environmental awareness, green innovation has become a new pathway
for enterprises to achieve green, efficient, and sustainable development (Chen Zewen and Chen Dan,
2019). As a comprehensive transformation, digital transformation encompasses a variety of emerging
technologies and is bound to have a profound effect on the green innovation of enterprises. For
example, in production and operations, digital technologies can help enterprises realize efficient,
precise, and sustainable utilization of resources, thereby reducing negative environmental impact.
Based on this, and taking the strategic goal of building a Digital China as an opportunity, this paper
examines how digital transformation affects the green innovation of enterprises, with the aim of
providing ideas and references for enterprises on the road toward sustainable development. This paper
measures the degree of enterprise digital transformation by the frequency of digital-transformation-
related keywords in enterprise annual reports, and measures enterprise green innovation output
activity by the number of green patent applications, in order to study the effect of digital
transformation on green innovation output of enterprises. Building on existing research, the
innovations of this paper are mainly reflected in the following two aspects. First, with respect to
empirical methodology, because the number of green patent applications is a non-negative integer
and the dependent variable is a count variable, prior studies have often employed log(y+1) together
with ordinary least squares for estimation; however, this common log-linear approach yields estimates
that lack meaningful economic interpretation and exhibit inherent bias. This paper adopts a Poisson
pseudo maximum likelihood estimator with multi-dimensional fixed effects drawn from count models,
which can minimize bias caused by omitted variables to the greatest extent possible. Second, from
the perspective of analyzing internal mechanisms, this paper devotes part of the analysis to enterprise
environmental management. Digital transformation can standardize internal processes of enterprises
and thereby, through digital transformation, form better incentives for green innovation. However,
existing research has relatively seldom examined green innovation output from the angle of enterprise
environmental management. Therefore, from this perspective, this paper explores how digital
transformation helps enterprises realize better incentives for green innovation and provides new
implications for relevant policy and practice.
Ⅱ. Theoretical Analysis and Research Hypotheses
Environmental management and R&D-investment channels are closely interconnected. First, the
environmental-management channel of firms’ digital transformation can support and propel the R&D-
investment channel. Through digital environmental-management practices, firms can achieve
efficient utilization of environmental resources and reduce pollution, thereby enhancing the
sustainability and environmental friendliness of their R&D-investment activities. Second, the R&D-
investment channel of digital transformation can in turn foster innovation and optimization in
environmental management. By leveraging digital tools, firms can conduct R&D and innovation more
efficiently, thus accelerating innovation and improvement within the environmental-management
channel. Drawing on prior studies, this paper selects the environmental-management channel and the
R&D-investment channel as two key mechanisms through which digital transformation operates.
2.1 Environmental-Management Channel
Environmental-management certification is one of the metrics for firms’ innovation activities,
because it reflects both the firm’s emphasis on environmental protection and its innovative capability
in environmental management. As awareness has grown regarding the environmental impacts of
industrial activities, various standards have been formulated to guide firms and to incorporate
environmental-management systems into corporate assessments. Among these, the ISO 14001
Environmental Management System—formally introduced in September 1996—has been the most
widely adopted worldwide. Because ISO 14001 requires firms to set internal environmental standards,
targets, and performance indicators, whether a firm has obtained this certification can be used to
gauge its level of environmental-management innovation (Peiyan Zhou et al., 2022). At the national
level, participation in ISO 14001 is an important predictor of a country’s environmental patenting and
serves as a standard indicator of innovative activity (S. Lim et al., 2014).
Innovation constitutes a firm’s core competitiveness. Therefore, firms holding ISO 14001
certification possess more potential competitive advantages, including higher internal efficiency,
differentiation advantages, responsiveness to stakeholder demands, improved industry positioning,
and financial savings (Murillo-Luna et al., 2008). At the same time, managerial decision-making
plays an important role in strengthening competitiveness: managers with strong environmental
awareness proactively pursue green-innovation R&D and cleaner production, actively develop
environmentally friendly products, and thereby enhance the firm’s market competitiveness (Li Hui et
al., 2022).
Based on the foregoing analysis of the environmental-management channel, we propose the
following hypothesis:
H1: Digital transformation promotes firms’ green-innovation output by enhancing
environmental-management capability.
2.2 R&D-Investment Channel
R&D is the source of firms’ innovation and is closely related to innovative outcomes. First,
digital transformation can directly improve process-innovation performance and product-innovation
performance in manufacturing enterprises. On the one hand, digital transformation drives
modularization, automation, and intelligentization in production, thereby raising efficiency and
improving production methods; on the other hand, it facilitates information flows in product-design
departments, broadening and deepening information sources and stimulating firms’ innovation
capabilities in new-product development (Shuhao Liang et al., 2022). Second, digital transformation
can also increase firms’ R&D investment. By lowering internal and external control costs, firms can
allocate freed-up resources to R&D and innovation—for example, to pollution-control equipment—
thus raising the level of green innovation (Shujun Sun et al., 2022). This helps firms better fulfill
social responsibility and enhance their image in environmental protection and sustainable
development, enabling them to meet market demand more effectively and to drive sustained
innovation and growth.
In sum, digital transformation exerts a positive, facilitating effect on firms’ R&D and innovation
activities. By boosting production-process efficiency and information fluidity and by increasing R&D
investment, digital transformation creates favorable conditions for innovation and is expected to
provide strong support for firms to maintain competitiveness and achieve sustainable development in
highly competitive markets.
Based on the foregoing analysis of the R&D-investment channel, we propose the following
hypothesis:
H2: Digital transformation promotes firms’ green-innovation output by increasing R&D
investment.
Ⅲ. Research Design
3.1 Data Description
This study uses green patent data for A-share listed enterprises in the Shanghai and Shenzhen
stock exchanges in China from 2010 to 2019, together with enterprise-level economic data for the
corresponding enterprises. For the above database of listed enterprises, the following sample
screening and processing are conducted: (i) enterprises in the financial industry are removed, and
only enterprises belonging to the real economy are retained; (ii) samples that are subject to special
treatment in stock trading, that is, stock codes containing ST or *ST, as well as enterprises with
missing key financial indicators, are removed; and (iii) in order to eliminate the influence of extreme
values, all continuous variables are winsorized at the 1 percent level. The final sample includes 1,512
enterprises, with 15,120 enterprise-year observations.
The green innovation output capability of enterprises is measured as follows. Green patent data
for listed enterprises come from annual reports of listed enterprises, corporate social responsibility
reports of listed enterprises, official websites of listed enterprises, the National Bureau of Statistics,
the China National Intellectual Property Administration, and the World Intellectual Property
Organization, and green patents of enterprises are identified with the aid of the environmental
International Patent Classification index list released by the World Intellectual Property Organization
in 2010. The data fields used in this paper include the total number of green patent applications, as
well as counts of invention patent applications and utility model patent applications classified by
invention type.
The degree of digital transformation of enterprises is obtained from the CSMAR “Research
Database on Digital Transformation of Chinese Listed Enterprises,” and the enterprise digitalization
level is measured by the frequency of keywords related to digital transformation in annual reports.
Enterprise economic data for listed enterprises all come from the CSMAR database. Drawing
on prior literature, a series of control variables are selected, mainly including enterprise size,
enterprise age, board size, state-ownership dummy, asset-liability ratio, capital intensity, Tobin Q,
ISO 14001 certification dummy, year dummies, and industry dummies. The specific variable settings
are reported in Table 1.
Table 1. Variable definitions
Type
Name
Symbol
Measurement
Depende
nt
variables
Green patent applications in
year t
apply0
Total number of green patent applications of
the enterprise in year t
Green patent applications in
year t+1
apply1
Total number of green patent applications of
the enterprise in year t+1
Green patent applications in
year t+2
apply2
Total number of green patent applications of
the enterprise in year t+2
Green invention patent
applications in year t
invention0
Total number of green invention patent
applications of the enterprise in year t
Green invention patent
applications in year t+1
invention1
Total number of green invention patent
applications of the enterprise in year t+1
Green invention patent
applications in year t+2
invention2
Total number of green invention patent
applications of the enterprise in year t+2
Green utility model patent
applications in year t
utility0
Total number of green utility model patent
applications of the enterprise in year t
Green utility model patent
applications in year t+1
utility1
Total number of green utility model patent
applications of the enterprise in year t+1
Green utility model patent
applications in year t+2
utility2
Total number of green utility model patent
applications of the enterprise in year t+2
Core
explanat
ory
variables
Enterprise digital
transformation
digital
ln(total frequency of all digitalization-
related keywords in the enterprise annual
report in year t + 1)
Artificial intelligence
technology
ai
ln(frequency of artificial intelligence terms
in the report + 1)
Blockchain technology
bc
ln(frequency of blockchain terms in the
report + 1)
Cloud computing technology
cd
ln(frequency of cloud computing terms in
the report + 1)
Big data technology
bd
ln(frequency of big data terms in the report
+ 1)
Digital technology application
dt
ln(frequency of digital application terms in
the report + 1)
Control
Enterprise size
size
ln(total assets at year-end)
variables
Enterprise age
age
Current year minus the year of listing of the
enterprise
Board size
bm
Total number of directors on the board
State-owned enterprise
state
Equals 1 if a state-owned enterprise,
otherwise 0
Leverage
leverage
Total liabilities of the enterprise divided by
total assets of the enterprise
Capital intensity
intensity
Total assets of the enterprise divided by
operating revenue
Tobin Q
tobinq
Market value of the enterprise divided by
replacement cost of capital
ISO 14001 certification
ISO14001
Equals 1 if ISO 14001 audit has been
passed, otherwise 0
Year dummies
year
Year dummies for 2010–2019
Industry dummies
industry
Classified according to the 2012 China
Securities Regulatory Commission industry
classification
3.2 Model specification and econometric method
In order to examine the effect of digital transformation on the green innovation output capability
of enterprises, a multiple linear regression model is constructed as shown in Model (1).
Innovationijt=α+β1digitaljt+X’Γ+ωt+ηi+εit
(1)
(1)Dependent variable
Green innovation output capability of enterprises (Innovation). This paper chooses green patent
application data, which are less affected by external factors and are more stable, to measure the green
innovation output capability of enterprises. Under the patent system of China, patents are divided into
invention patents, utility model patents, and design patents. Among them, invention patents have the
highest innovation value, while utility model and design patents have lower innovation value, because
invention patents refer to novel and thorough improvements to products or processes, whereas utility
models are innovations in technical solutions considered from the perspective of technical effects and
functions of products. In other words, invention and utility model patents possess the characteristics
of novelty, inventiveness, and utility, whereas design patents do not involve the technical performance
of the product itself. Therefore, after comprehensive consideration, this paper selects the numbers of
invention patent applications and utility model patent applications for discussion. Innovationijt
denotes the green innovation capability in year t of enterprise j in industry i. Its calculation is as
follows: Let apply0denote the number of green patent applications in year t, then Innovationijt
measured by green patent applications equals Innovationijt = ln(apply0 + 1). The invention patent
count (invention) and the utility model patent count (utility), distinguished by patent connotation, are
calculated according to the same formula.
(2) Main explanatory variable.
The main explanatory variable digitaljt represents the degree of digital transformation of
enterprise j in year t. According to existing research, the lexicon of digital transformation feature
terms can be divided into two categories, namely, the underlying technology architecture and digital
technology application. The underlying technology architecture refers to the ABCD digital
technologies, that is, artificial intelligence, blockchain, cloud computing, and big data. The iteration
and development of these technologies drive rapid change in overall digital technology and the fast
development of the digital economy. These are the four most important technological cornerstones
under the digital revolution, and they mainly appear in digital transformation of the enterprise back
end such as research and development design, management mode, or business logic. Digital
technology application refers to integration based on ABCD technologies into the market
environment in which the enterprise operates, involving digital transformation and upgrading of the
main business or the expansion of new digital lines of business, which mainly appears in digital
transformation of front-end activities such as production and sales.
Following Wu Fei and coauthors, this paper uses text mining to compute the frequencies of
digital transformation keywords in enterprise annual reports, and measures the degree of digital
transformation by adding one to the frequency and then taking the natural logarithm. Since keywords
in annual reports can reflect strategic features and managerial philosophies of enterprises, a higher
keyword frequency in annual reports indicates greater attention and resource investment by the
enterprise in that dimension, and a higher level of digital technology application.
(3) Control variables.
The vector X contains a series of other variables that may affect enterprise innovation, including
enterprise size (size), enterprise age (age), board size (bm), state-ownership (state), asset-liability
ratio (leverage), capital intensity (intensity), Tobin Q (tobinq), and a dummy for passing ISO 14001
(PassISO14001). The specific definitions of the control variables are provided in Table 1. In addition,
industry fixed effects and year fixed effects are controlled for in Model (1), in order to control for the
influence of unobservable factors that do not vary with time or industry development.
Figure 1. Zero-inflated distribution of the number of green patent applications.
(4) Econometric method.
Figure 1 presents the histogram of green patent applications. It is noteworthy that green patent
applications display a highly skewed distribution with a large number of zeros. When the dependent
variable is a count variable, the combination of log(y+1) and ordinary least squares may produce
estimates that are not unbiased and that lack economic interpretability, so pseudo-Poisson regression
estimation is an appropriate choice. This paper employs the Poisson Pseudo Maximum Likelihood
estimator with high-dimensional fixed effects (PPMLHDFE) for empirical tests. Compared with the
widely used PPML estimator for zero-inflated trade data, the high-dimensional fixed-effects Poisson
estimator can test pseudo-maximum likelihood estimation more robustly. All empirical analyses in
this paper are conducted using StataMP 17.0.
Ⅳ. Empirical Results
4.1 Descriptive Statistics
We conduct descriptive statistics for the main variables, and the results are reported in Table 2.
The mean number of green patent applications is 3.47, indicating that, during the sample period,
domestic listed enterprises on average file 3.47 green patent applications per year. The standard
deviation is 36.37, which far exceeds the mean, indicating substantial dispersion in green patent
applications across enterprises. The median equals 0, indicating that more than half of enterprises
have zero green patent applications … The coefficient of variation for the logarithm of the digital
transformation keyword frequency after adding one is approximately 0.5, which indicates relatively
high volatility for this variable. The mean of the state owned enterprise dummy is 0.56, indicating
that 56 percent of the sample are state owned enterprises and 44 percent are private enterprises.
Table 2. Descriptive statistics of the main variables
Variable
N
Mean
SD
Min
Median
Max
apply0
15130
3.47
36.47
0.00
0.00
1543.00
invention
15130
2.35
29.60
0.00
0.00
1376.00
utility
15130
0.20
0.60
0.00
0.00
2.00
lndigital
15119
2.32
1.20
0.00
2.20
6.63
lnsize
15118
22.38
1.48
13.08
22.29
28.64
age
15119
16.05
5.44
3.00
16.00
30.00
bm
15119
9.73
2.54
3.00
9.00
31.00
state
15130
0.56
0.50
0.00
1.00
1.00
leverage
15119
0.46
0.24
0.00
0.48
1.76
intensity
15130
0.67
0.63
0.00
0.53
11.60
tobinq
15130
2.09
6.91
0.00
1.47
715.94
PassISO14001
15117
0.18
0.39
0.00
0.00
1.00
lnrd
9828
17.70
1.91
5.09
17.86
25.03
Enterprises are sorted in ascending order by the degree of digital transformation, the sample is
divided into fifteen groups, and the average number of green patent applications is calculated for each
group. Figure 2 shows that the degree of digital transformation is positively correlated with the
number of green patent applications, that is, the higher the degree of digital transformation, the
stronger the green innovation output capability of the enterprise, which is consistent with the
hypothesis stated earlier.
Figure 2. Degree of digital transformation and number of green patent applications
4.2 Baseline Regression Results
In the baseline regressions, enterprise characteristic variables are sequentially added, and Table
3 reports estimation results based on Model (1). Columns (1) through (4) separately control for
characteristic variables at the levels of basic enterprise attributes, internal control, and financial
condition in order to examine the linear effect of digital transformation on green innovation output
capability. Column (5) reports the regression for the full specification.
According to the regression results, regardless of whether basic attributes, internal control, or
financial condition are controlled, the coefficient of the main explanatory variable is significantly
positive in statistical terms, the regression results are highly reliable and robust, and a positive linear
relationship is present. For the regression with basic attributes (column 2), enterprise size and state
ownership have significant promoting effects on green innovation output, whereas enterprise age
contributes negatively. For the regression with internal-control variables (column 3), board size and
having passed environmental certification significantly promote green innovation output. For the
regression with financial variables (column 4), financial leverage and capital intensity significantly
promote green innovation output. For the full regression (column 5), enterprise size, state ownership,
board size, environmental certification, and capital intensity all have significant positive effects on
green innovation output, and enterprise age is significantly negatively related to green innovation
output.
To further examine whether the driving effect of digital transformation on green innovation
output has temporal persistence, dynamic effects are evaluated. Forward values of the dependent
variable with one to two leads are introduced, and results are presented in Table 4. The coefficients
of the main explanatory variable are all positive at the 1 percent significance level, which indicates
that the positive impact of digital transformation on green innovation output exists in the current year
and persists for the next three years. In terms of magnitude, 0.357 is greater than 0.341, which is
greater than 0.313, which is greater than 0.304. The effect is smallest in the current year, reaches the
maximum in the following year, and then gradually weakens over subsequent periods. The results
remain consistent when the set of control variables is not included.
Specifically, the promoting effect of digital transformation on green innovation output is at a
relatively low level in the current year, reaches the maximum in the following year, and then decreases
year by year. This can be explained by the time required for the transformation from research and
development input to realized innovation output. In the initial stage of transformation, enterprises
need to invest resources and effort to adapt to the new digital environment, adjust business processes,
and train employees to master new tools and skills, so the effect in the current year may be limited.
As enterprises gradually adapt and accumulate experience, green innovation capability improves, and
digital technologies and data analysis are better used to improve products, processes, and services,
which raises resource efficiency and protects the environment, so the effect peaks in the following
year. In subsequent periods, after enterprises reach a certain level of green innovation, further
improvement becomes more difficult or is constrained by other factors, so the impact gradually
weakens.
Table 3. Baseline regressions
(1) Baseline
(2) Basic
attributes
(3) Internal
control
(4) Financial
condition
(5) Full
lndigital
0.423***
0.320***
0.415***
0.422***
0.304***
(0.0252)
(0.0248)
(0.0241)
(0.0247)
(0.0237)
lnsize
0.736***
0.734***
(0.0198)
(0.0222)
age
-0.0638***
-0.0660***
(0.00644)
(0.00622)
state
0.213**
0.176**
(0.0663)
(0.0638)
bm
0.101***
0.0298**
(0.0103)
(0.00994)
PassISO14001
0.580***
0.383***
(0.0630)
(0.0552)
leverage
1.863***
0.131
(0.117)
(0.143)
intensity
0.477***
0.420***
(0.0480)
(0.0448)
tobinq
-0.278***
0.0791**
(0.0329)
(0.0260)
_cons
0.391***
-15.63***
-0.796***
-0.374***
-16.44***
(0.114)
(0.443)
(0.118)
(0.0905)
(0.480)
Year
YES
YES
YES
YES
YES
Industry
YES
YES
YES
YES
YES
N
14973
14973
14971
14973
14971
Wald
280.63
2362.16
465.98
676.64
2762.46
p
0.00
0.00
0.00
0.00
0.00
Note: *, **, and *** denote significance at the 10, 5, and 1 percent levels, respectively; the same notation
applies hereafter.
Although diminishing marginal effects constrain the dynamic impact, the positive promoting
effect remains strong over the long term. Enterprises should maintain patience and a long-term
perspective, recognize the time lag of innovation output, set reasonable expectations for early input,
and make long-term plans and preparations for green innovation investment during digital
transformation. Although digital transformation requires certain investment and time costs, after the
transformation enterprises can obtain more efficient innovation capability and a more competitive
position, which exerts persistent positive effects and helps enterprises conduct green innovation and
achieve sustainable development.
Table 4. Persistence of the effect of digital transformation on green patent applications
apply0
apply1
apply2
apply3
(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
lndigital
0.423***
0.304***
0.517***
0.357***
0.497***
0.341***
0.492*** 0.313***
(0.0252)
(0.0237)
(0.0586)
(0.0548)
(0.0527)
(0.0461)
(0.0647)
(0.0568)
lnsize
0.734***
1.107***
1.086***
1.029***
(0.0222)
(0.0515)
(0.0536)
(0.0507)
age
-
0.0660***
-0.0227
-0.0216
-0.0237
(0.00622)
(0.0122)
(0.0132)
(0.0136)
bm
0.0298**
0.0353
0.0496**
0.0495**
(0.00994)
(0.0219)
(0.0180)
(0.0178)
state
0.176**
0.0978
0.0974
0.0771
(0.0638)
(0.128)
(0.132)
(0.141)
leverage
0.131
-0.303
-0.167
-0.207
(0.143)
(0.251)
(0.298)
(0.284)
intensity
0.420***
0.980***
0.926***
0.861***
(0.0448)
(0.0862)
(0.0909)
(0.104)
tobinq
0.0791**
0.199***
0.213***
0.188***
(0.0260)
(0.0437)
(0.0480)
(0.0448)
PassISO14001
0.383***
0.240*
0.302*
0.335*
(0.0552)
(0.119)
(0.134)
(0.147)
_cons
0.391***
-16.44***
1.378***
-
25.78***
1.475***
-
25.36***
1.551***
-
23.76***
(0.0465)
(0.480)
(0.111)
(1.300)
(0.108)
(1.412)
(0.110)
(1.378)
Year
YES
YES
YES
YES
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
YES
YES
YES
YES
N
14973
14971
13487
13485
11991
11989
10493
12004
Wald
280.63
2762.46
77.70
1135.85
88.94
1033.12
57.67
809.30
p
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
4.3 Robustness Checks
To ensure reliability, a series of robustness checks are conducted. (1) Replacement of the
dependent variable: different patent types have different requirements regarding innovation factors
and environments, which leads to heterogeneous effects of digital transformation on green innovation
activities. The total number of green patent applications is decomposed into the numbers of green
invention patent applications and green utility-model patent applications. Considering the gestation
period from input to output, one- and two-period leads are also used. As shown in columns (1) and
(3) of Table 5, the degree of digital transformation significantly and positively affects the output of
green invention patents over the next two years, whereas the effect on green utility-model patents is
not significant.
Table 5. Robustness checks: replacing the dependent variable and sample screening
①Replace explained variables
②
Manufacturing
industry
③ Exclude
information and
communication
industry
(1)
(2)
(3)
(4)
(5)
(6)
invention1
utility1
Invention2
utility2
apply0
apply0
lndigital
0.446***
0.00150
0.428***
0.000614
0.328***
0.244***
(0.0593)
(0.00525)
(0.0510)
(0.00344)
(0.0254)
(0.0297)
lnsize
1.166***
0.000842
1.144***
-0.0324***
0.696***
0.760***
(0.0561)
(0.00924)
(0.0578)
(0.00630)
(0.0245)
(0.0251)
age
-0.0180
-
0.000271
-0.0170
0.00535***
-0.0730***
-0.0649***
(0.0130)
(0.00166)
(0.0139)
(0.00104)
(0.00698)
(0.00719)
bm
0.0584**
0.000751
0.0723***
-0.000967
0.0340**
0.0358**
(0.0218)
(0.00336)
(0.0185)
(0.00207)
(0.0117)
(0.0117)
state
0.243
-0.00227
0.253
-0.0121*
0.201**
0.0744
(0.136)
(0.00637)
(0.139)
(0.00471)
(0.0677)
(0.0738)
leverage
-0.396
-0.00893
-0.251
0.0788***
0.560***
0.157
(0.273)
(0.0298)
(0.326)
(0.0204)
(0.154)
(0.153)
intensity
1.172***
0.00336
1.095***
0.00187
0.395***
0.405***
(0.0920)
(0.00741)
(0.0960)
(0.00595)
(0.0505)
(0.0463)
tobinq
0.227***
-
0.00682*
0.246***
0.00150
-0.00157
0.00869
(0.0467)
(0.00275)
(0.0501)
(0.00127)
(0.0284)
(0.0298)
IsPassISO14001
0.246
-0.00715
0.316*
0.0208
0.299***
0.234***
(0.134)
(0.0183)
(0.147)
(0.0123)
(0.0580)
(0.0610)
_cons
-28.30***
0.583***
-27.84***
1.291***
-15.42***
-16.91***
(1.379)
(0.174)
(1.473)
(0.121)
(0.522)
(0.544)
Year
YES
YES
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
YES
YES
N
13485
3026
11989
1512
8748
13345
Wald
1164.57
11.50
1054.39
37.44
2686.08
2018.60
p
0.00
0.24
0.00
0.00
0.00
0.00
Green invention patents are defined as patents that are novel, useful, and environmentally
friendly and generally require a higher technological level and innovation capability. Green utility-
model patents are patents that solve environmental problems through practical technical solutions and
improvements. Therefore digital transformation exerts a stronger promoting effect on green invention
patent output that requires a higher innovation threshold, while the effect on green utility-model
patent output that requires a lower innovation threshold is not significant.
(2) Retaining manufacturing enterprises: based on the fact that in 2021 mechanical and electrical
products accounted for 59 percent of China exports, manufacturing has become the most important
component in the going-global industries; therefore only manufacturing enterprises are retained in
column (5), and the results remain robust.
Table 6. Robustness checks: replacing the core explanatory variable with five keyword frequencies
④
(1) AI
(2) Blockchain
(3) Cloud
computing
(4) Big data
(5) Digital
technology
application
lnai
0.360***
(0.0360)
lnbc
0.516**
(0.182)
lncc
0.323***
(0.0278)
lnbd
0.327***
(0.0356)
lndt
0.244***
(0.0270)
lnsize
0.778***
0.795***
0.769***
0.760***
0.757***
(0.0210)
(0.0214)
(0.0212)
(0.0215)
(0.0225)
age
-0.0627***
-0.0641***
-0.0660***
-0.0647***
-0.0645***
(0.00631)
(0.00650)
(0.00623)
(0.00615)
(0.00645)
bm
0.0273**
0.0267**
0.0280**
0.0289**
0.0306**
(0.0101)
(0.0103)
(0.00981)
(0.00987)
(0.0104)
state
0.0483
0.00630
0.0928
0.0940
0.109
(0.0643)
(0.0640)
(0.0618)
(0.0639)
(0.0663)
leverage
0.0734
0.0776
0.0650
0.0919
0.135
(0.144)
(0.144)
(0.143)
(0.144)
(0.142)
intensity
0.423***
0.418***
0.420***
0.409***
0.412***
(0.0443)
(0.0451)
(0.0438)
(0.0443)
(0.0456)
tobinq
0.0957***
0.104***
0.0985***
0.0987***
0.0867***
(0.0253)
(0.0249)
(0.0253)
(0.0253)
(0.0260)
IsPassISO14001
0.410***
0.421***
0.375***
0.406***
0.401***
(0.0554)
(0.0570)
(0.0561)
(0.0561)
(0.0561)
_cons
-17.12***
-17.40***
-16.95***
-16.79***
-16.79***
(0.471)
(0.480)
(0.465)
(0.480)
(0.494)
Year
YES
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
YES
N
14971
14971
14971
14971
14971
Wald
2573.85
2239.11
2619.45
2525.50
2507.92
p
0.00
0.00
0.00
0.00
0.00
(3) Excluding information and communications enterprises: given that information and
communications is a high value-added high-technology industry with intrinsically higher overseas
income, enterprises in this industry are excluded in column (6), and the results remain basically
unchanged.
(4) Replacement of the core explanatory variable: logarithms of keyword frequencies along five
dimensions are used as alternative measures, namely Artificial Intelligence, Blockchain, Cloud
Computing, Big Data, and Digital-Technology Application, and the results are highly robust.
4.4 Endogeneity Tests
Table 7. Endogeneity tests
IV method
LIMLmethod
(1)
(2)
(3)
lndigital
apply0
apply0
lnstu1
0.0619***
(4.95)
lndigital
9.866***
9.866***
(4.70)
(4.70)
lnsize
0.160***
0.154
0.154
(21.18)
(0.44)
(0.44)
age
-0.00297
-0.0193
-0.0193
(-1.61)
(-0.89)
(-0.89)
bm
-0.00496
0.130**
0.130**
(-1.54)
(3.05)
(3.05)
state
-0.243***
2.548***
2.548***
(-13.93)
(4.56)
(4.56)
leverage
-0.183***
0.313
0.313
(-4.60)
(0.53)
(0.53)
intensity
0.131***
-0.650
-0.650
(7.31)
(-1.79)
(-1.79)
tobinq
0.0384***
-0.0725
-0.0725
(6.59)
(-0.70)
(-0.70)
IsPassISO14001
0.0291
0.765**
0.765**
(1.37)
(2.85)
(2.85)
_cons
-3.410***
-6.752
-6.752
(-19.08)
(-0.97)
(-0.97)
Year
YES
YES
YES
Industry
YES
YES
YES
Kleibergen-Paap rk LM statistic
26.347(0.00)
Cragg-Donald Wald F statistic
24.482(16.38)
N
15117
15117
15117
Note: The value in parentheses after the Kleibergen–Paap rk LM statistic is the p value of the underidentification
test, and the value in parentheses after the Cragg–Donald Wald F statistic is the ten percent critical value for the
Stock–Yogo weak identification test.
Endogeneity may arise from measurement error, omitted variables, and bidirectional causality.
The model includes a series of control variables and employs a two-way fixed effects specification,
which reduces endogeneity caused by omitted variables. Considering the possible endogeneity from
reverse causality, the number of in-school university students in the province where the enterprise is
located, lagged by one period, is used as an instrumental variable. Instrumental variables estimation
is first conducted. Column (1) of Table 7 shows that the instrumental variable is significantly and
positively correlated with the endogenous regressor. According to column (2), the instrumental
variables regression passes the underidentification test and the weak-instrument test, which indicates
that the selected instrumental variable is valid. In column (2), the coefficient of digital transformation
remains significantly positive, which indicates that endogeneity does not distort the conclusions.
Limited information maximum likelihood estimation, which is less sensitive to weak instruments, is
further used, as shown in column (3) of Table 7. The LIML estimate is consistent with the IV estimate,
which indicates that digital transformation significantly promotes green innovation output and that
the research findings are robust.
4.5 Heterogeneity Analysis
(1) Heterogeneity analysis based on temporal ordering. To enlarge intergroup differences, two
subsamples covering 2010–2014 and 2015–2019 are extracted by year for regression analysis. The
results show that, consistent with the baseline regressions in Table 3, the positive linear relationship
between digital transformation and enterprise green innovation output is significant in both
subsamples, and this relationship exhibits different trends over time. Specifically, columns (1) and (4)
of Table 8 show that in the later period the estimated coefficient of digital transformation on the total
number of green patent applications is smaller than that in the earlier period, which indicates that the
positive effect of digital transformation on the total number of green patent applications weakens over
time. In contrast, columns (2) and (5) indicate that the effect of digital transformation on the number
of green invention patent applications increases over time, that is, the promoting effect of digital
transformation on the number of green patent applications gradually strengthens in the later period.
To further demonstrate that the promoting effect of enterprise digital transformation on its green
invention patent output strengthens over time, and to eliminate the influence of level effects, the
natural logarithm of the ratio of enterprise green invention patent applications (apply0) to total
invention patent applications (invention) is used as the dependent variable (calculated as
ln(
invention
apply0 + 1)). The regression results in columns (3) and (6) of Table 8 show that the promoting
effect of digital transformation on the share of green patent applications strengthens in the later period,
that is, with the continuous advancement of digital transformation, the share of enterprise green
invention patent output has experienced faster expansion.
This trend may be due to the continuous development of technologies and methods involved in
digital transformation, as well as the accumulation of experience and knowledge by enterprises during
the digital transformation process. Such experience and knowledge may help enterprises better
identify and exploit opportunities related to environmental protection and, by developing new green
technologies and products, enhance their capability for green invention patent output.
Table 8 Heterogeneity analysis: green innovation capability by temporal ordering
Year 2010-2014
Year 2015-2019
(1)
(2)
(3)
(4)
(5)
(6)
apply0
invention
inv_ratio
apply0
invention
inv_ratio
lndigital1
0.325***
0.391***
0.0629***
0.302***
0.497***
0.0762***
(0.0428)
(0.0670)
(0.0151)
(0.0280)
(0.0682)
(0.0129)
lnsize
0.792***
1.085***
0.0769***
0.704***
1.229***
0.161***
(0.0346)
(0.0490)
(0.0160)
(0.0282)
(0.0767)
(0.0166)
age
-0.0663***
-0.0356**
0.00109
-0.0660***
-0.0143
-0.00312
(0.00894)
(0.0127)
(0.00448)
(0.00823)
(0.0153)
(0.00366)
bm
0.0410**
0.127***
0.0295***
0.0248*
0.0441
0.00116
(0.0157)
(0.0174)
(0.00709)
(0.0126)
(0.0235)
(0.00618)
state
-0.0275
-0.388*
0.0836
0.283***
0.378*
0.187***
(0.100)
(0.166)
(0.0429)
(0.0807)
(0.158)
(0.0374)
leverage
-0.313
-0.572*
-0.0199
0.420*
-0.523
-0.315***
(0.202)
(0.289)
(0.101)
(0.191)
(0.300)
(0.0830)
intensity
0.312***
1.126***
0.0337
0.500***
1.337***
0.143**
(0.0654)
(0.0984)
(0.0587)
(0.0625)
(0.176)
(0.0443)
tobinq
0.142***
0.217***
0.0575**
0.0359
0.173**
0.0490***
(0.0374)
(0.0475)
(0.0186)
(0.0329)
(0.0578)
(0.0123)
PassISO14001
0.289**
0.0460
-0.0399
0.416***
0.230
0.0348
(0.0879)
(0.159)
(0.0386)
(0.0715)
(0.179)
(0.0316)
_cons
-17.63***
-26.30***
-3.135***
-15.82***
-30.09***
-4.794***
(0.733)
(1.131)
(0.373)
(0.620)
(1.982)
(0.387)
Year
YES
YES
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
YES
YES
N
7413
7354
1501
7477
7477
1854
Wald
1370.97
1132.69
77.54
1500.33
527.86
156.52
p
0.00
0.00
0.00
0.00
0.00
0.00
(2) Heterogeneity analysis based on enterprise size. To verify the impact of differences in
enterprise size on the effect of digital transformation on green innovation output, the median of
enterprise size is used as the cutoff to divide the sample into two groups, large enterprises and small
and medium-sized enterprises, for separate regressions. The results in Table 9 show that digital
transformation has a more significant positive impact on the green innovation output of small and
medium-sized enterprises than on that of large enterprises. The possible reasons are as follows. On
the one hand, the flexibility and adaptability of small and medium-sized enterprises enable them to
respond more quickly to market demand and green innovation opportunities. Therefore, during digital
transformation, small and medium-sized enterprises can adjust their business processes, technological
applications, and organizational structures more rapidly to adapt to new market trends and
environmental requirements. This agility enables them to seize opportunities for green innovation
earlier. On the other hand, digital transformation can help enterprises improve production and
operational efficiency, reduce resource waste and environmental pollution, and thus lower costs.
Small and medium-sized enterprises usually have more limited resources and capital, so they pay
more attention to cost control. Digital transformation provides opportunities for small and medium-
sized enterprises to reduce production costs and environmental impacts, thereby creating more
favorable conditions for their green innovation output. Through the application of digital technologies,
small and medium-sized enterprises can improve energy use efficiency, waste treatment, and resource
recycling in the production process, thereby further enhancing green innovation output.
Table 9 Heterogeneity analysis
Enterprise Scale
Industry Sector
(1)
(2)
(3)
(4)
Large-scale
Small and
Medium-scale
Technology-
intensive
Non-technology-
intensive
lndigital1
0.271***
0.410***
0.290***
0.0566
(9.98)
(8.68)
(10.38)
(1.26)
lnsize
0.756***
0.810***
0.676***
0.839***
(26.43)
(8.83)
(23.33)
(26.03)
age
-0.0542***
-0.111***
-0.0780***
-0.0431***
(-7.64)
(-8.24)
(-10.17)
(-4.35)
bm
0.0352***
-0.0212
0.0372**
0.0203
(3.32)
(-0.80)
(3.23)
(1.25)
state
0.139
0.323*
0.0728
0.405***
(1.84)
(2.45)
(0.95)
(3.64)
leverage
0.0994
0.333
0.748***
-0.986***
(0.59)
(1.33)
(3.89)
(-4.88)
intensity
0.458***
0.181*
0.597***
0.513***
(8.99)
(2.19)
(7.80)
(9.61)
tobinq
0.0627
0.0813*
0.00457
0.165***
(1.69)
(2.10)
(0.15)
(3.77)
IsPassISO14001
0.421***
0.145
0.318***
0.625***
(6.78)
(1.28)
(4.91)
(6.95)
_cons
-17.13***
-17.07***
-14.82***
-19.24***
(-26.84)
(-8.73)
(-23.83)
(-26.97)
Year
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
N
7408
7451
5486
9485
(3) Heterogeneity analysis based on industry category. Drawing on the study of Li Xuedong et
al. (2018), industries are divided into three categories: technology-intensive, capital-intensive, and
labor-intensive. Accordingly, the sample is divided into technology-intensive enterprises and non-
technology-intensive enterprises for regression analysis. As shown in columns (3) and (4) of Table 9,
in the technology-intensive enterprise sample the coefficient of lndigital is significantly positive at
the 1 percent level, whereas in the non-technology-intensive enterprise sample the coefficient of the
core explanatory variable is not significant. This indicates that digital transformation is more
conducive to promoting the growth of green innovation output in technology-intensive industries.
The possible reason is that enterprises with low technological content tend to produce products with
low added value and therefore low profitability, making them unable to bear the high costs of digital
transformation. These enterprises usually face heavy investment pressures in updating equipment,
introducing new technologies, and training employees, and such investments may not immediately
translate into significant green innovation output. As a result, these enterprises progress more slowly
in digital transformation and cannot keep pace with technology-intensive enterprises. By contrast,
technology-intensive enterprises possess clear advantages in digital transformation. Such enterprises
generally devote substantial resources to research and development and innovation and possess
technical expertise and innovative capability. Digital transformation provides them with more
opportunities to improve products and services through advanced technologies and data analysis,
increase production efficiency, reduce resource consumption, and achieve green innovation.
Therefore, technology-intensive enterprises are more likely to obtain significant results in digital
transformation and achieve growth in innovation performance.
4.6 Mechanism Tests
This paper finds that enterprise digital transformation influences final green innovation output
through two channels, namely environmental management and research and development investment.
The following presents two approaches to mechanism testing.
(1) Environmental management channel. Environmental management certification reflects the
degree to which an enterprise values environmental protection and its capability for innovation in
environmental management. Managers with strong environmental awareness will proactively engage
in green innovation research and development, cleaner production, and the production of
environmentally friendly products, so the green innovation effect of enterprise digital transformation
will also be subject to heterogeneity. Accordingly, the sample is divided into two groups based on
whether ISO 14001 environmental management certification has been passed, and regressions are
estimated for the two groups. Columns (1) and (2) of Table 10 show that the coefficients are
significantly positive at the 1 percent level in both groups, but the coefficient of digital transformation
for enterprises that have passed ISO 14001 certification equals 0.413, whereas for enterprises without
certification it equals 0.238, which is smaller. This indicates that the promoting effect of digital
transformation on enterprise green innovation output is more pronounced among enterprises with
environmental management certification. This implies that digital transformation promotes enterprise
green innovation by advancing enterprise environmental management. Further regressions using the
number of green invention patents as the dependent variable yield consistent results.
Digital transformation can help enterprises improve and innovate existing business processes,
enhance data analysis capability, market insight, and customer experience, and thereby promote
growth in enterprise output. Enterprises that have passed environmental management standard
certification have usually already comprehensively optimized and standardized their internal
environmental management and formed an effective environmental management mechanism.
Therefore such enterprises are better able to rely on digital transformation to apply the outcomes of
digital transformation in practical business activities.
Table 10 Heterogeneity analysis: whether environmental management system certification has been passed
Passed ISO14001
Failed ISO14001
(1)
(3)
(2)
(4)
apply0
invention0
apply0
invention0
lndigital
0.413***
0.623***
0.238***
0.408***
(0.0369)
(0.0573)
(0.0327)
(0.0868)
lnsize
0.693***
0.902***
0.748***
1.261***
(0.0393)
(0.0678)
(0.0274)
(0.0662)
age
-0.0586***
-0.0425**
-0.0710***
0.00270
(0.00941)
(0.0152)
(0.00813)
(0.0169)
bm
0.0481**
0.0873***
0.0177
0.0567**
(0.0177)
(0.0247)
(0.0123)
(0.0204)
state
0.402***
0.765***
0.0428
-0.0226
(0.0954)
(0.178)
(0.0822)
(0.161)
leverage
0.211
0.181
0.119
-0.728**
(0.237)
(0.390)
(0.176)
(0.263)
intensity
0.358***
0.474***
0.442***
1.314***
(0.0784)
(0.130)
(0.0535)
(0.102)
tobinq
0.0175
0.135*
0.0920**
0.159**
(0.0422)
(0.0613)
(0.0330)
(0.0594)
_cons
-15.51***
-22.10***
-16.44***
-30.59***
(0.772)
(1.529)
(0.624)
(1.813)
Year
YES
YES
YES
YES
Inudstry
YES
YES
YES
YES
N
2758
2758
12200
12200
(2) Research and development investment channel. A three-step approach is used to conduct
the mediation mechanism test. On the basis of the baseline regression model, the enterprise research
and development investment variable is added. If, after adding the mediator, the coefficient of the
main explanatory variable decreases, this indicates that increasing research and development
investment is one of the channels through which digital transformation promotes enterprise green
innovation output. To test this channel, data on enterprise research and development investment are
obtained, and the logarithm of research and development investment is used as the mediator
variable. The regression results of the mediation effect model are shown in Table 11. Column (1)
shows that the degree of enterprise digital transformation is significantly positively related to green
innovation output. Column (2) shows that the degree of enterprise digital transformation is
significantly positively related to research and development investment. Column (3) shows that
both digital transformation and research and development investment have significantly positive
coefficients, but the coefficient of digital transformation is smaller than in column (1). This
indicates that the mediation effect holds. Therefore the regression results in Table 11 confirm that
the mediation effect of digital transformation promoting enterprise green innovation output through
increased research and development investment is valid.
The reasons why enterprise digital transformation promotes an increase in research and
development investment may include the following. On the one hand, enterprise digital
transformation makes enterprises pay greater attention to research and development activities,
especially research and development activities in environmentally friendly and green technologies,
thereby increasing investment in green technologies. On the other hand, digital transformation can
improve the efficiency and precision of the technologies and methods used in the research and
development process, thereby further increasing the output efficiency of research and development
investment.
Table 11 Digital transformation, research and development investment, and green innovation output
Ⅴ. Research Conclusions and Policy Implications
Digital transformation has created opportunities for innovation and development for enterprises,
driving companies to continuously reach new heights in the field of green innovation. Based on
financial and annual report data from 1,512 A-share listed companies in China from 2010-2019, this
paper constructs enterprise digital transformation indicators and time and industry dual fixed effects
(1)
(2)
(3)
lnrd
apply0
apply0
lndigital
0.279***
0.304***
0.216***
(0.0164)
(0.0237)
(0.0239)
lnrd
0.514***
(0.0362)
lnsize
0.734***
0.241***
(0.0222)
(0.0408)
age
-0.0660***
-0.0514***
(0.00622)
(0.00662)
bm
0.0298**
0.0208*
(0.00994)
(0.00975)
state
0.176**
0.244***
(0.0638)
(0.0658)
leverage
0.131
0.297
(0.143)
(0.159)
intensity
0.420***
0.158**
(0.0448)
(0.0557)
tobinq
0.0791**
0.00537
(0.0260)
(0.0282)
IsPassISO14001
0.383***
0.330***
(0.0552)
(0.0562)
_cons
14.70***
-16.44***
-13.27***
(0.154)
(0.480)
(0.521)
Year
YES
YES
YES
Industry
YES
YES
YES
N
9828
14971
9780
R2
0.181
Wald
2762.46
2863.23
p
0.00
0.00
0.00
models to examine the impact and mechanisms of enterprise digital transformation on green
innovation output. The research results show: (1) Enterprise digital transformation can significantly
promote enterprise green innovation output, and this conclusion remains valid after a series of
robustness tests; (2) The dynamic impact of digital transformation on enterprise green innovation
output cannot avoid the limitation of diminishing marginal effects, but its positive promoting effect
still has strong temporal continuity; (3) Mechanism testing shows that digital transformation can
enhance enterprise green innovation output by increasing enterprise R&D investment and
strengthening environmental management; (4) Heterogeneity testing finds that digital transformation
has differential impacts on enterprise green innovation due to differences in the technological level
of the industry to which the enterprise belongs and the enterprise's environmental management
capabilities, specifically manifested in that digital transformation has a more obvious promoting
effect on green innovation output of small and medium-sized enterprises in technology-intensive
industries.
Based on this, this paper proposes the following policy recommendations for the future
development of enterprise digital transformation: First, strengthen the popularization and promotion
of digital technology, encourage enterprises to apply digital technology to improve R&D investment
and green innovation output. Government departments can help enterprises improve their
digitalization level by providing digital technology training and financial support. Second, promote
environmental management certification, encourage enterprises to implement environmental
management measures to reduce environmental pollution and resource waste. Governments can
promote enterprise environmental management implementation through strengthening the
formulation and implementation of environmental protection laws and regulations, as well as
providing environmental protection incentives and other policies. Third, support the application and
utilization of green patents to improve enterprises' green innovation capabilities and competitiveness.
Government departments can promote enterprise green patent applications and utilization by
providing exemptions for green patent application fees and policy support for green patent utilization.
Fourth, enterprises should view digital transformation as a long-term strategy and formulate
corresponding plans and measures to achieve sustainable green innovation and sustainable
development goals. Because digital transformation is not only about responding to current market
demands and competitive pressures, but more importantly, about achieving long-term sustainable
development. Actively promoting digital transformation can enhance enterprises' green innovation
output and gain greater competitive advantages in the future.
References
Chen, Z. & Chen, D. (2019). How executive environmental awareness improves firm performance under
environmental uncertainty—the mediating role of green innovation (in Chinese). Management World, 40(10),
113–128.
Li, H., Wen, S., & Jiao, R. (2022). Corporate environmental culture, environmental management, and financial
performance: Do words match deeds? (in Chinese). Accounting Research, 34(09), 297–312.
Li, L., Liu, C., & Han, M. (2022). Does informatization construction enhance firms’ innovation? Evidence
from the “Two-Integration Pilot Zone” (in Chinese). China Economic Quarterly, 22(03), 1079–1100.
Li, X., Jiang, K., & Xia, H. (2018). Factor distortions and productivity in heterogeneous manufacturing under
supply-side reform (in Chinese). Economic Research Journal, 35(05), 23–39.
Wu, F., Hu, H., Lin, H., et al. (2021). Digital transformation and capital-market performance—Evidence from
stock liquidity (in Chinese). Finance & Trade Economics, 37(07), 130–144.
Kleis L , Ramirez R V .Information Technology and Intangible Output: The Impact of IT Investment on
Innovation Productivity[J].Information Systems Research, 2012, 23(1).
Liang S, Li T. Can digital transformation promote innovation performance in manufacturing enterprises? The
mediating role of R&D capability[J]. Sustainability, 2022, 14(17): 10939.
Lim S , Prakash A .Voluntary Regulations and Innovation: The Case of ISO 14001[J].Public Administration
Review, 2014, 74(2):233-244.
Murillo-Luna J L, Ramón-Solans-Prat J C. Which competitive advantages can firms really obtain from
ISO14001 certification?[J]. Journal of Industrial Engineering and Management (JIEM), 2008, 1(2): 104-118.
Sun S, Guo L. Digital transformation, green innovation and the Solow productivity paradox[J]. Plos one, 2022,
17(7): e0270928.
Xu Q, Li X, Guo F. Digital transformation and environmental performance: Evidence from Chinese resource‐
based enterprises[J]. Corporate Social Responsibility and Environmental Management, 2023.
Zhou P , Zhou S , Zhang M ,et al.Executive Overconfidence, Digital Transformation and Environmental
Innovation: The Role of Moderated Mediator[J].International journal of environmental research and public
health, 2022, 19(10).
|
How Digital Transformation Impacts Corporate Green Innovation? Chen Hanqin Abstract:Digitalization is a defining feature of our time, and green innovation has become one of the necessary avenues for firms to achieve sustainable development. Using financial statements and annual report data for China's A-share listed firms from 2010-2019, this paper constructs a firmlevel digital transformation indicator and examines the impact of digital transformation on green innovation and its mechanisms. The results show that digital transformation promotes firms' greeninnovation output, with a diminishing marginal impact over time. Mechanism tests indicate that digital transformation boosts green-innovation output by increasing R&D investment and strengthening environmental management. Heterogeneity analysis shows that the promotion effect is more pronounced for small and medium-sized firms and for firms in technology-intensive industries. To improve the green-innovation incentives of digital transformation, firms should formulate longterm strategies and continuously strengthen policy regulation and incentives. Keywords: digital transformation; green innovation; environmental management certification; R&D investment I. Introduction The rise of digitalization is a salient feature of economic and social development since the beginning of the twenty-first century. Since the Eighteenth National Congress of the Communist Party of China, the Central Committee has attached great importance to developing the digital economy and has elevated it to a national strategy. The report to the Twentieth National Congress further explicitly proposed to accelerate the building of a strong manufacturing country, a strong quality country, a strong aerospace country, a strong transportation country, a strong cyber power, and a Digital China. At present, a surging wave of informatization is sweeping the globe, and countries worldwide regard the advancement of economic digitalization as an important driving force for achieving innovative development. The digital economy is the core of the economic transformation of China, and the degree of enterprise digital transformation is the foundation for achieving this objective. In the era of digitalization, an increasing number of enterprises have come to realize the importance of digital transformation for their business operations and sustainable development, and green innovation has become one of the necessary pathways for enterprises to achieve sustainable development. Although over the past decade the digital economy of China has flourished and the scale of the industry has continued to grow rapidly, ranking second in the world for many years, many small and medium-sized enterprises in China are still at the initial stage of digital transformation, and only a small share of enterprises have reached the stage of deep application. The China Digital Economy Development Report (2022) shows that from 2012 to 2021 the scale of the digital economy of China increased from 11 trillion yuan to 45.5 trillion yuan, and its share of gross domestic product rose from 21.6 percent to 39.8 percent. The data indicate that more than ninety percent of enterprises remain at the normative stage of digital transformation, and fewer than ten percent of enterprises have reached the domain level. Against the backdrop that the overall digital transformation of enterprises in China is still at an initial stage, does enterprise digital transformation have a significant effect on green innovation. If there is a positive effect, through which channels and mechanisms does it promote green innovation output. Furthermore, which types of enterprises are more adept at achieving growth in green innovation output through digital transformation. As the main driving forces of the new round of scientific and technological revolution, digitalization and informatization have become important subjects of academic research. Existing studies mainly focus on the effect of information technology on enterprise innovation. In the early stage of informatization, information technology creates value at the intermediate stage of the production process by raising innovation productivity, and it plays a key role in promoting breakthrough innovation through research and development and other intangible factors such as skills and knowledge (Kleis et al., 2012). Informatization construction can also significantly improve the quality of patents of enterprises and promote the growth of invention patents, rather than only driving short-cycle practical utility models and design patents, and it raises the output efficiency of invention patents of enterprises (Li Lei et al., 2022). As an advanced stage of the development of informatization, digitalization places greater emphasis on the comprehensive and in-depth application of information technology to create value. Enterprise digital transformation can improve environmental impact by promoting green technological innovation, enhancing environmental information disclosure, and strengthening governance (Qiong Xu et al., 2021). Digital transformation also promotes enterprise innovation through mechanisms such as realizing open and networked innovation, leading organizational and managerial innovation, and raising the level of human capital within enterprises (An Tongliang, 2022). Although digital transformation has attracted wide attention in the field of enterprise innovation, research on its effect on green innovation remains relatively limited. In recent years, with the continuous enhancement of environmental awareness, green innovation has become a new pathway for enterprises to achieve green, efficient, and sustainable development (Chen Zewen and Chen Dan, 2019). As a comprehensive transformation, digital transformation encompasses a variety of emerging technologies and is bound to have a profound effect on the green innovation of enterprises. For example, in production and operations, digital technologies can help enterprises realize efficient, precise, and sustainable utilization of resources, thereby reducing negative environmental impact. Based on this, and taking the strategic goal of building a Digital China as an opportunity, this paper examines how digital transformation affects the green innovation of enterprises, with the aim of providing ideas and references for enterprises on the road toward sustainable development. This paper measures the degree of enterprise digital transformation by the frequency of digital-transformationrelated keywords in enterprise annual reports, and measures enterprise green innovation output activity by the number of green patent applications, in order to study the effect of digital transformation on green innovation output of enterprises. Building on existing research, the innovations of this paper are mainly reflected in the following two aspects. First, with respect to empirical methodology, because the number of green patent applications is a non-negative integer and the dependent variable is a count variable, prior studies have often employed log(y+1) together with ordinary least squares for estimation; however, this common log-linear approach yields estimates that lack meaningful economic interpretation and exhibit inherent bias. This paper adopts a Poisson pseudo maximum likelihood estimator with multi-dimensional fixed effects drawn from count models, which can minimize bias caused by omitted variables to the greatest extent possible. Second, from the perspective of analyzing internal mechanisms, this paper devotes part of the analysis to enterprise environmental management. Digital transformation can standardize internal processes of enterprises and thereby, through digital transformation, form better incentives for green innovation. However, existing research has relatively seldom examined green innovation output from the angle of enterprise environmental management. Therefore, from this perspective, this paper explores how digital transformation helps enterprises realize better incentives for green innovation and provides new implications for relevant policy and practice. II. Theoretical Analysis and Research Hypotheses Environmental management and R&D-investment channels are closely interconnected. First, the environmental-management channel of firms' digital transformation can support and propel the R&Dinvestment channel. Through digital environmental-management practices, firms can achieve efficient utilization of environmental resources and reduce pollution, thereby enhancing the sustainability and environmental friendliness of their R&D-investment activities. Second, the R&Dinvestment channel of digital transformation can in turn foster innovation and optimization in environmental management. By leveraging digital tools, firms can conduct R&D and innovation more efficiently, thus accelerating innovation and improvement within the environmental-management channel. Drawing on prior studies, this paper selects the environmental-management channel and the R&D-investment channel as two key mechanisms through which digital transformation operates. 2.1 Environmental-Management Channel Environmental-management certification is one of the metrics for firms' innovation activities, because it reflects both the firm's emphasis on environmental protection and its innovative capability in environmental management. As awareness has grown regarding the environmental impacts of industrial activities, various standards have been formulated to guide firms and to incorporate environmental-management systems into corporate assessments. Among these, the ISO 14001 Environmental Management System-formally introduced in September 1996-has been the most widely adopted worldwide. Because ISO 14001 requires firms to set internal environmental standards, targets, and performance indicators, whether a firm has obtained this certification can be used to gauge its level of environmental-management innovation (Peiyan Zhou et al., 2022). At the national level, participation in ISO 14001 is an important predictor of a country's environmental patenting and serves as a standard indicator of innovative activity (S. Lim et al., 2014). Innovation constitutes a firm's core competitiveness. Therefore, firms holding ISO 14001 certification possess more potential competitive advantages, including higher internal efficiency, differentiation advantages, responsiveness to stakeholder demands, improved industry positioning, and financial savings (Murillo-Luna et al., 2008). At the same time, managerial decision-making plays an important role in strengthening competitiveness: managers with strong environmental awareness proactively pursue green-innovation R&D and cleaner production, actively develop environmentally friendly products, and thereby enhance the firm's market competitiveness (Li Hui et al., 2022). Based on the foregoing analysis of the environmental-management channel, we propose the following hypothesis: H1: Digital transformation promotes firms' green-innovation output by enhancing environmental-management capability. 2.2 R&D-Investment Channel R&D is the source of firms' innovation and is closely related to innovative outcomes. First, digital transformation can directly improve process-innovation performance and product-innovation performance in manufacturing enterprises. On the one hand, digital transformation drives modularization, automation, and intelligentization in production, thereby raising efficiency and improving production methods; on the other hand, it facilitates information flows in product-design departments, broadening and deepening information sources and stimulating firms' innovation capabilities in new-product development (Shuhao Liang et al., 2022). Second, digital transformation can also increase firms' R&D investment. By lowering internal and external control costs, firms can allocate freed-up resources to R&D and innovation-for example, to pollution-control equipmentthus raising the level of green innovation (Shujun Sun et al., 2022). This helps firms better fulfill social responsibility and enhance their image in environmental protection and sustainable development, enabling them to meet market demand more effectively and to drive sustained innovation and growth. In sum, digital transformation exerts a positive, facilitating effect on firms' R&D and innovation activities. By boosting production-process efficiency and information fluidity and by increasing R&D investment, digital transformation creates favorable conditions for innovation and is expected to provide strong support for firms to maintain competitiveness and achieve sustainable development in highly competitive markets. Based on the foregoing analysis of the R&D-investment channel, we propose the following hypothesis: H2: Digital transformation promotes firms' green-innovation output by increasing R&D investment. III. Research Design 3.1 Data Description This study uses green patent data for A-share listed enterprises in the Shanghai and Shenzhen stock exchanges in China from 2010 to 2019, together with enterprise-level economic data for the corresponding enterprises. For the above database of listed enterprises, the following sample screening and processing are conducted: (i) enterprises in the financial industry are removed, and only enterprises belonging to the real economy are retained; (ii) samples that are subject to special treatment in stock trading, that is, stock codes containing ST or *ST, as well as enterprises with missing key financial indicators, are removed; and (iii) in order to eliminate the influence of extreme values, all continuous variables are winsorized at the 1 percent level. The final sample includes 1,512 enterprises, with 15,120 enterprise-year observations. The green innovation output capability of enterprises is measured as follows. Green patent data for listed enterprises come from annual reports of listed enterprises, corporate social responsibility reports of listed enterprises, official websites of listed enterprises, the National Bureau of Statistics, the China National Intellectual Property Administration, and the World Intellectual Property Organization, and green patents of enterprises are identified with the aid of the environmental International Patent Classification index list released by the World Intellectual Property Organization in 2010. The data fields used in this paper include the total number of green patent applications, as well as counts of invention patent applications and utility model patent applications classified by invention type. The degree of digital transformation of enterprises is obtained from the CSMAR "Research Database on Digital Transformation of Chinese Listed Enterprises," and the enterprise digitalization level is measured by the frequency of keywords related to digital transformation in annual reports. Enterprise economic data for listed enterprises all come from the CSMAR database. Drawing on prior literature, a series of control variables are selected, mainly including enterprise size, enterprise age, board size, state-ownership dummy, asset-liability ratio, capital intensity, Tobin Q, ISO 14001 certification dummy, year dummies, and industry dummies. The specific variable settings are reported in Table 1. Table 1. Variable definitions Type Name Symbol Measurement Depende nt variables Green patent applications in year t apply0 Total number of green patent applications of the enterprise in year t Green patent applications in year t+1 apply1 Total number of green patent applications of the enterprise in year t+1 Green patent applications in year t+2 apply2 Total number of green patent applications of the enterprise in year t+2 Green invention patent applications in year t invention0 Total number of green invention patent applications of the enterprise in year t Green invention patent applications in year t+1 invention1 Total number of green invention patent applications of the enterprise in year t+1 Green invention patent applications in year t+2 invention2 Total number of green invention patent applications of the enterprise in year t+2 Green utility model patent applications in year t utility0 Total number of green utility model patent applications of the enterprise in year t Green utility model patent applications in year t+1 utility1 Total number of green utility model patent applications of the enterprise in year t+1 Green utility model patent applications in year t+2 utility2 Total number of green utility model patent applications of the enterprise in year t+2 Core explanat ory variables Enterprise digital transformation digital ln(total frequency of all digitalizationrelated keywords in the enterprise annual report in year t + 1) Artificial intelligence technology ai ln(frequency of artificial intelligence terms in the report + 1) Blockchain technology bc ln(frequency of blockchain terms in the report + 1) Cloud computing technology cd ln(frequency of cloud computing terms in the report + 1) Big data technology bd ln(frequency of big data terms in the report + 1) Digital technology application dt ln(frequency of digital application terms in the report + 1) Control Enterprise size size ln(total assets at year-end) variables Enterprise age age Current year minus the year of listing of the enterprise Board size bm Total number of directors on the board State-owned enterprise state Equals 1 if a state-owned enterprise, otherwise 0 Leverage leverage Total liabilities of the enterprise divided by total assets of the enterprise Capital intensity intensity Total assets of the enterprise divided by operating revenue Tobin Q tobinq Market value of the enterprise divided by replacement cost of capital ISO 14001 certification ISO14001 Equals 1 if ISO 14001 audit has been passed, otherwise 0 Year dummies year Year dummies for 2010-2019 Industry dummies industry Classified according to the 2012 China Securities Regulatory Commission industry classification 3.2 Model specification and econometric method In order to examine the effect of digital transformation on the green innovation output capability of enterprises, a multiple linear regression model is constructed as shown in Model (1). Innovationijt=α+β1digitaljt+X'Γ+ωt+ηi+εit (1) (1)Dependent variable Green innovation output capability of enterprises (Innovation). This paper chooses green patent application data, which are less affected by external factors and are more stable, to measure the green innovation output capability of enterprises. Under the patent system of China, patents are divided into invention patents, utility model patents, and design patents. Among them, invention patents have the highest innovation value, while utility model and design patents have lower innovation value, because invention patents refer to novel and thorough improvements to products or processes, whereas utility models are innovations in technical solutions considered from the perspective of technical effects and functions of products. In other words, invention and utility model patents possess the characteristics of novelty, inventiveness, and utility, whereas design patents do not involve the technical performance of the product itself. Therefore, after comprehensive consideration, this paper selects the numbers of invention patent applications and utility model patent applications for discussion. Innovationijt denotes the green innovation capability in year t of enterprise j in industry i. Its calculation is as follows: Let apply0denote the number of green patent applications in year t, then Innovationijt measured by green patent applications equals Innovationijt = ln(apply0 + 1). The invention patent count (invention) and the utility model patent count (utility), distinguished by patent connotation, are calculated according to the same formula. (2) Main explanatory variable. The main explanatory variable digitaljt represents the degree of digital transformation of enterprise j in year t. According to existing research, the lexicon of digital transformation feature terms can be divided into two categories, namely, the underlying technology architecture and digital technology application. The underlying technology architecture refers to the ABCD digital technologies, that is, artificial intelligence, blockchain, cloud computing, and big data. The iteration and development of these technologies drive rapid change in overall digital technology and the fast development of the digital economy. These are the four most important technological cornerstones under the digital revolution, and they mainly appear in digital transformation of the enterprise back end such as research and development design, management mode, or business logic. Digital technology application refers to integration based on ABCD technologies into the market environment in which the enterprise operates, involving digital transformation and upgrading of the main business or the expansion of new digital lines of business, which mainly appears in digital transformation of front-end activities such as production and sales. Following Wu Fei and coauthors, this paper uses text mining to compute the frequencies of digital transformation keywords in enterprise annual reports, and measures the degree of digital transformation by adding one to the frequency and then taking the natural logarithm. Since keywords in annual reports can reflect strategic features and managerial philosophies of enterprises, a higher keyword frequency in annual reports indicates greater attention and resource investment by the enterprise in that dimension, and a higher level of digital technology application. (3) Control variables. The vector X contains a series of other variables that may affect enterprise innovation, including enterprise size (size), enterprise age (age), board size (bm), state-ownership (state), asset-liability ratio (leverage), capital intensity (intensity), Tobin Q (tobinq), and a dummy for passing ISO 14001 (PassISO14001). The specific definitions of the control variables are provided in Table 1. In addition, industry fixed effects and year fixed effects are controlled for in Model (1), in order to control for the influence of unobservable factors that do not vary with time or industry development. Figure 1. Zero-inflated distribution of the number of green patent applications. (4) Econometric method. Figure 1 presents the histogram of green patent applications. It is noteworthy that green patent applications display a highly skewed distribution with a large number of zeros. When the dependent variable is a count variable, the combination of log(y+1) and ordinary least squares may produce estimates that are not unbiased and that lack economic interpretability, so pseudo-Poisson regression estimation is an appropriate choice. This paper employs the Poisson Pseudo Maximum Likelihood estimator with high-dimensional fixed effects (PPMLHDFE) for empirical tests. Compared with the widely used PPML estimator for zero-inflated trade data, the high-dimensional fixed-effects Poisson estimator can test pseudo-maximum likelihood estimation more robustly. All empirical analyses in this paper are conducted using StataMP 17.0. IV. Empirical Results 4.1 Descriptive Statistics We conduct descriptive statistics for the main variables, and the results are reported in Table 2. The mean number of green patent applications is 3.47, indicating that, during the sample period, domestic listed enterprises on average file 3.47 green patent applications per year. The standard deviation is 36.37, which far exceeds the mean, indicating substantial dispersion in green patent applications across enterprises. The median equals 0, indicating that more than half of enterprises have zero green patent applications ... The coefficient of variation for the logarithm of the digital transformation keyword frequency after adding one is approximately 0.5, which indicates relatively high volatility for this variable. The mean of the state owned enterprise dummy is 0.56, indicating that 56 percent of the sample are state owned enterprises and 44 percent are private enterprises. Table 2. Descriptive statistics of the main variables Variable N Mean SD Min Median Max apply0 15130 3.47 36.47 0.00 0.00 1543.00 invention 15130 2.35 29.60 0.00 0.00 1376.00 utility 15130 0.20 0.60 0.00 0.00 2.00 lndigital 15119 2.32 1.20 0.00 2.20 6.63 lnsize 15118 22.38 1.48 13.08 22.29 28.64 age 15119 16.05 5.44 3.00 16.00 30.00 bm 15119 9.73 2.54 3.00 9.00 31.00 state 15130 0.56 0.50 0.00 1.00 1.00 leverage 15119 0.46 0.24 0.00 0.48 1.76 intensity 15130 0.67 0.63 0.00 0.53 11.60 tobinq 15130 2.09 6.91 0.00 1.47 715.94 PassISO14001 15117 0.18 0.39 0.00 0.00 1.00 lnrd 9828 17.70 1.91 5.09 17.86 25.03 Enterprises are sorted in ascending order by the degree of digital transformation, the sample is divided into fifteen groups, and the average number of green patent applications is calculated for each group. Figure 2 shows that the degree of digital transformation is positively correlated with the number of green patent applications, that is, the higher the degree of digital transformation, the stronger the green innovation output capability of the enterprise, which is consistent with the hypothesis stated earlier. Figure 2. Degree of digital transformation and number of green patent applications 4.2 Baseline Regression Results In the baseline regressions, enterprise characteristic variables are sequentially added, and Table 3 reports estimation results based on Model (1). Columns (1) through (4) separately control for characteristic variables at the levels of basic enterprise attributes, internal control, and financial condition in order to examine the linear effect of digital transformation on green innovation output capability. Column (5) reports the regression for the full specification. According to the regression results, regardless of whether basic attributes, internal control, or financial condition are controlled, the coefficient of the main explanatory variable is significantly positive in statistical terms, the regression results are highly reliable and robust, and a positive linear relationship is present. For the regression with basic attributes (column 2), enterprise size and state ownership have significant promoting effects on green innovation output, whereas enterprise age contributes negatively. For the regression with internal-control variables (column 3), board size and having passed environmental certification significantly promote green innovation output. For the regression with financial variables (column 4), financial leverage and capital intensity significantly promote green innovation output. For the full regression (column 5), enterprise size, state ownership, board size, environmental certification, and capital intensity all have significant positive effects on green innovation output, and enterprise age is significantly negatively related to green innovation output. To further examine whether the driving effect of digital transformation on green innovation output has temporal persistence, dynamic effects are evaluated. Forward values of the dependent variable with one to two leads are introduced, and results are presented in Table 4. The coefficients of the main explanatory variable are all positive at the 1 percent significance level, which indicates that the positive impact of digital transformation on green innovation output exists in the current year and persists for the next three years. In terms of magnitude, 0.357 is greater than 0.341, which is greater than 0.313, which is greater than 0.304. The effect is smallest in the current year, reaches the maximum in the following year, and then gradually weakens over subsequent periods. The results remain consistent when the set of control variables is not included. Specifically, the promoting effect of digital transformation on green innovation output is at a relatively low level in the current year, reaches the maximum in the following year, and then decreases year by year. This can be explained by the time required for the transformation from research and development input to realized innovation output. In the initial stage of transformation, enterprises need to invest resources and effort to adapt to the new digital environment, adjust business processes, and train employees to master new tools and skills, so the effect in the current year may be limited. As enterprises gradually adapt and accumulate experience, green innovation capability improves, and digital technologies and data analysis are better used to improve products, processes, and services, which raises resource efficiency and protects the environment, so the effect peaks in the following year. In subsequent periods, after enterprises reach a certain level of green innovation, further improvement becomes more difficult or is constrained by other factors, so the impact gradually weakens. Table 3. Baseline regressions (1) Baseline (2) Basic attributes (3) Internal control (4) Financial condition (5) Full lndigital 0.423*** 0.320*** 0.415*** 0.422*** 0.304*** (0.0252) (0.0248) (0.0241) (0.0247) (0.0237) lnsize 0.736*** 0.734*** (0.0198) (0.0222) age -0.0638*** -0.0660*** (0.00644) (0.00622) state 0.213** 0.176** (0.0663) (0.0638) bm 0.101*** 0.0298** (0.0103) (0.00994) PassISO14001 0.580*** 0.383*** (0.0630) (0.0552) leverage 1.863*** 0.131 (0.117) (0.143) intensity 0.477*** 0.420*** (0.0480) (0.0448) tobinq -0.278*** 0.0791** (0.0329) (0.0260) _cons 0.391*** -15.63*** -0.796*** -0.374*** -16.44*** (0.114) (0.443) (0.118) (0.0905) (0.480) Year YES YES YES YES YES Industry YES YES YES YES YES N 14973 14973 14971 14973 14971 Wald 280.63 2362.16 465.98 676.64 2762.46 p 0.00 0.00 0.00 0.00 0.00 Note: *, **, and *** denote significance at the 10, 5, and 1 percent levels, respectively; the same notation applies hereafter. Although diminishing marginal effects constrain the dynamic impact, the positive promoting effect remains strong over the long term. Enterprises should maintain patience and a long-term perspective, recognize the time lag of innovation output, set reasonable expectations for early input, and make long-term plans and preparations for green innovation investment during digital transformation. Although digital transformation requires certain investment and time costs, after the transformation enterprises can obtain more efficient innovation capability and a more competitive position, which exerts persistent positive effects and helps enterprises conduct green innovation and achieve sustainable development. Table 4. Persistence of the effect of digital transformation on green patent applications apply0 apply1 apply2 apply3 (1) (2) (3) (4) (5) (6) (7) (8) lndigital 0.423*** 0.304*** 0.517*** 0.357*** 0.497*** 0.341*** 0.492*** 0.313*** (0.0252) (0.0237) (0.0586) (0.0548) (0.0527) (0.0461) (0.0647) (0.0568) lnsize 0.734*** 1.107*** 1.086*** 1.029*** (0.0222) (0.0515) (0.0536) (0.0507) age - 0.0660*** -0.0227 -0.0216 -0.0237 (0.00622) (0.0122) (0.0132) (0.0136) bm 0.0298** 0.0353 0.0496** 0.0495** (0.00994) (0.0219) (0.0180) (0.0178) state 0.176** 0.0978 0.0974 0.0771 (0.0638) (0.128) (0.132) (0.141) leverage 0.131 -0.303 -0.167 -0.207 (0.143) (0.251) (0.298) (0.284) intensity 0.420*** 0.980*** 0.926*** 0.861*** (0.0448) (0.0862) (0.0909) (0.104) tobinq 0.0791** 0.199*** 0.213*** 0.188*** (0.0260) (0.0437) (0.0480) (0.0448) PassISO14001 0.383*** 0.240* 0.302* 0.335* (0.0552) (0.119) (0.134) (0.147) _cons 0.391*** -16.44*** 1.378*** - 25.78*** 1.475*** - 25.36*** 1.551*** - 23.76*** (0.0465) (0.480) (0.111) (1.300) (0.108) (1.412) (0.110) (1.378) Year YES YES YES YES YES YES YES YES Inudstry YES YES YES YES YES YES YES YES N 14973 14971 13487 13485 11991 11989 10493 12004 Wald 280.63 2762.46 77.70 1135.85 88.94 1033.12 57.67 809.30 p 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.3 Robustness Checks To ensure reliability, a series of robustness checks are conducted. (1) Replacement of the dependent variable: different patent types have different requirements regarding innovation factors and environments, which leads to heterogeneous effects of digital transformation on green innovation activities. The total number of green patent applications is decomposed into the numbers of green invention patent applications and green utility-model patent applications. Considering the gestation period from input to output, one- and two-period leads are also used. As shown in columns (1) and (3) of Table 5, the degree of digital transformation significantly and positively affects the output of green invention patents over the next two years, whereas the effect on green utility-model patents is not significant. Table 5. Robustness checks: replacing the dependent variable and sample screening 1Replace explained variables 2 Manufacturing industry 3 Exclude information and communication industry (1) (2) (3) (4) (5) (6) invention1 utility1 Invention2 utility2 apply0 apply0 lndigital 0.446*** 0.00150 0.428*** 0.000614 0.328*** 0.244*** (0.0593) (0.00525) (0.0510) (0.00344) (0.0254) (0.0297) lnsize 1.166*** 0.000842 1.144*** -0.0324*** 0.696*** 0.760*** (0.0561) (0.00924) (0.0578) (0.00630) (0.0245) (0.0251) age -0.0180 - 0.000271 -0.0170 0.00535*** -0.0730*** -0.0649*** (0.0130) (0.00166) (0.0139) (0.00104) (0.00698) (0.00719) bm 0.0584** 0.000751 0.0723*** -0.000967 0.0340** 0.0358** (0.0218) (0.00336) (0.0185) (0.00207) (0.0117) (0.0117) state 0.243 -0.00227 0.253 -0.0121* 0.201** 0.0744 (0.136) (0.00637) (0.139) (0.00471) (0.0677) (0.0738) leverage -0.396 -0.00893 -0.251 0.0788*** 0.560*** 0.157 (0.273) (0.0298) (0.326) (0.0204) (0.154) (0.153) intensity 1.172*** 0.00336 1.095*** 0.00187 0.395*** 0.405*** (0.0920) (0.00741) (0.0960) (0.00595) (0.0505) (0.0463) tobinq 0.227*** - 0.00682* 0.246*** 0.00150 -0.00157 0.00869 (0.0467) (0.00275) (0.0501) (0.00127) (0.0284) (0.0298) IsPassISO14001 0.246 -0.00715 0.316* 0.0208 0.299*** 0.234*** (0.134) (0.0183) (0.147) (0.0123) (0.0580) (0.0610) _cons -28.30*** 0.583*** -27.84*** 1.291*** -15.42*** -16.91*** (1.379) (0.174) (1.473) (0.121) (0.522) (0.544) Year YES YES YES YES YES YES Inudstry YES YES YES YES YES YES N 13485 3026 11989 1512 8748 13345 Wald 1164.57 11.50 1054.39 37.44 2686.08 2018.60 p 0.00 0.24 0.00 0.00 0.00 0.00 Green invention patents are defined as patents that are novel, useful, and environmentally friendly and generally require a higher technological level and innovation capability. Green utilitymodel patents are patents that solve environmental problems through practical technical solutions and improvements. Therefore digital transformation exerts a stronger promoting effect on green invention patent output that requires a higher innovation threshold, while the effect on green utility-model patent output that requires a lower innovation threshold is not significant. (2) Retaining manufacturing enterprises: based on the fact that in 2021 mechanical and electrical products accounted for 59 percent of China exports, manufacturing has become the most important component in the going-global industries; therefore only manufacturing enterprises are retained in column (5), and the results remain robust. Table 6. Robustness checks: replacing the core explanatory variable with five keyword frequencies 4 (1) AI (2) Blockchain (3) Cloud computing (4) Big data (5) Digital technology application lnai 0.360*** (0.0360) lnbc 0.516** (0.182) lncc 0.323*** (0.0278) lnbd 0.327*** (0.0356) lndt 0.244*** (0.0270) lnsize 0.778*** 0.795*** 0.769*** 0.760*** 0.757*** (0.0210) (0.0214) (0.0212) (0.0215) (0.0225) age -0.0627*** -0.0641*** -0.0660*** -0.0647*** -0.0645*** (0.00631) (0.00650) (0.00623) (0.00615) (0.00645) bm 0.0273** 0.0267** 0.0280** 0.0289** 0.0306** (0.0101) (0.0103) (0.00981) (0.00987) (0.0104) state 0.0483 0.00630 0.0928 0.0940 0.109 (0.0643) (0.0640) (0.0618) (0.0639) (0.0663) leverage 0.0734 0.0776 0.0650 0.0919 0.135 (0.144) (0.144) (0.143) (0.144) (0.142) intensity 0.423*** 0.418*** 0.420*** 0.409*** 0.412*** (0.0443) (0.0451) (0.0438) (0.0443) (0.0456) tobinq 0.0957*** 0.104*** 0.0985*** 0.0987*** 0.0867*** (0.0253) (0.0249) (0.0253) (0.0253) (0.0260) IsPassISO14001 0.410*** 0.421*** 0.375*** 0.406*** 0.401*** (0.0554) (0.0570) (0.0561) (0.0561) (0.0561) _cons -17.12*** -17.40*** -16.95*** -16.79*** -16.79*** (0.471) (0.480) (0.465) (0.480) (0.494) Year YES YES YES YES YES Inudstry YES YES YES YES YES N 14971 14971 14971 14971 14971 Wald 2573.85 2239.11 2619.45 2525.50 2507.92 p 0.00 0.00 0.00 0.00 0.00 (3) Excluding information and communications enterprises: given that information and communications is a high value-added high-technology industry with intrinsically higher overseas income, enterprises in this industry are excluded in column (6), and the results remain basically unchanged. (4) Replacement of the core explanatory variable: logarithms of keyword frequencies along five dimensions are used as alternative measures, namely Artificial Intelligence, Blockchain, Cloud Computing, Big Data, and Digital-Technology Application, and the results are highly robust. 4.4 Endogeneity Tests Table 7. Endogeneity tests IV method LIMLmethod (1) (2) (3) lndigital apply0 apply0 lnstu1 0.0619*** (4.95) lndigital 9.866*** 9.866*** (4.70) (4.70) lnsize 0.160*** 0.154 0.154 (21.18) (0.44) (0.44) age -0.00297 -0.0193 -0.0193 (-1.61) (-0.89) (-0.89) bm -0.00496 0.130** 0.130** (-1.54) (3.05) (3.05) state -0.243*** 2.548*** 2.548*** (-13.93) (4.56) (4.56) leverage -0.183*** 0.313 0.313 (-4.60) (0.53) (0.53) intensity 0.131*** -0.650 -0.650 (7.31) (-1.79) (-1.79) tobinq 0.0384*** -0.0725 -0.0725 (6.59) (-0.70) (-0.70) IsPassISO14001 0.0291 0.765** 0.765** (1.37) (2.85) (2.85) _cons -3.410*** -6.752 -6.752 (-19.08) (-0.97) (-0.97) Year YES YES YES Industry YES YES YES Kleibergen-Paap rk LM statistic 26.347(0.00) Cragg-Donald Wald F statistic 24.482(16.38) N 15117 15117 15117 Note: The value in parentheses after the Kleibergen-Paap rk LM statistic is the p value of the underidentification test, and the value in parentheses after the Cragg-Donald Wald F statistic is the ten percent critical value for the Stock-Yogo weak identification test. Endogeneity may arise from measurement error, omitted variables, and bidirectional causality. The model includes a series of control variables and employs a two-way fixed effects specification, which reduces endogeneity caused by omitted variables. Considering the possible endogeneity from reverse causality, the number of in-school university students in the province where the enterprise is located, lagged by one period, is used as an instrumental variable. Instrumental variables estimation is first conducted. Column (1) of Table 7 shows that the instrumental variable is significantly and positively correlated with the endogenous regressor. According to column (2), the instrumental variables regression passes the underidentification test and the weak-instrument test, which indicates that the selected instrumental variable is valid. In column (2), the coefficient of digital transformation remains significantly positive, which indicates that endogeneity does not distort the conclusions. Limited information maximum likelihood estimation, which is less sensitive to weak instruments, is further used, as shown in column (3) of Table 7. The LIML estimate is consistent with the IV estimate, which indicates that digital transformation significantly promotes green innovation output and that the research findings are robust. 4.5 Heterogeneity Analysis (1) Heterogeneity analysis based on temporal ordering. To enlarge intergroup differences, two subsamples covering 2010-2014 and 2015-2019 are extracted by year for regression analysis. The results show that, consistent with the baseline regressions in Table 3, the positive linear relationship between digital transformation and enterprise green innovation output is significant in both subsamples, and this relationship exhibits different trends over time. Specifically, columns (1) and (4) of Table 8 show that in the later period the estimated coefficient of digital transformation on the total number of green patent applications is smaller than that in the earlier period, which indicates that the positive effect of digital transformation on the total number of green patent applications weakens over time. In contrast, columns (2) and (5) indicate that the effect of digital transformation on the number of green invention patent applications increases over time, that is, the promoting effect of digital transformation on the number of green patent applications gradually strengthens in the later period. To further demonstrate that the promoting effect of enterprise digital transformation on its green invention patent output strengthens over time, and to eliminate the influence of level effects, the natural logarithm of the ratio of enterprise green invention patent applications (apply0) to total invention patent applications (invention) is used as the dependent variable (calculated as ln( invention apply0 + 1)). The regression results in columns (3) and (6) of Table 8 show that the promoting effect of digital transformation on the share of green patent applications strengthens in the later period, that is, with the continuous advancement of digital transformation, the share of enterprise green invention patent output has experienced faster expansion. This trend may be due to the continuous development of technologies and methods involved in digital transformation, as well as the accumulation of experience and knowledge by enterprises during the digital transformation process. Such experience and knowledge may help enterprises better identify and exploit opportunities related to environmental protection and, by developing new green technologies and products, enhance their capability for green invention patent output. Table 8 Heterogeneity analysis: green innovation capability by temporal ordering Year 2010-2014 Year 2015-2019 (1) (2) (3) (4) (5) (6) apply0 invention inv_ratio apply0 invention inv_ratio lndigital1 0.325*** 0.391*** 0.0629*** 0.302*** 0.497*** 0.0762*** (0.0428) (0.0670) (0.0151) (0.0280) (0.0682) (0.0129) lnsize 0.792*** 1.085*** 0.0769*** 0.704*** 1.229*** 0.161*** (0.0346) (0.0490) (0.0160) (0.0282) (0.0767) (0.0166) age -0.0663*** -0.0356** 0.00109 -0.0660*** -0.0143 -0.00312 (0.00894) (0.0127) (0.00448) (0.00823) (0.0153) (0.00366) bm 0.0410** 0.127*** 0.0295*** 0.0248* 0.0441 0.00116 (0.0157) (0.0174) (0.00709) (0.0126) (0.0235) (0.00618) state -0.0275 -0.388* 0.0836 0.283*** 0.378* 0.187*** (0.100) (0.166) (0.0429) (0.0807) (0.158) (0.0374) leverage -0.313 -0.572* -0.0199 0.420* -0.523 -0.315*** (0.202) (0.289) (0.101) (0.191) (0.300) (0.0830) intensity 0.312*** 1.126*** 0.0337 0.500*** 1.337*** 0.143** (0.0654) (0.0984) (0.0587) (0.0625) (0.176) (0.0443) tobinq 0.142*** 0.217*** 0.0575** 0.0359 0.173** 0.0490*** (0.0374) (0.0475) (0.0186) (0.0329) (0.0578) (0.0123) PassISO14001 0.289** 0.0460 -0.0399 0.416*** 0.230 0.0348 (0.0879) (0.159) (0.0386) (0.0715) (0.179) (0.0316) _cons -17.63*** -26.30*** -3.135*** -15.82*** -30.09*** -4.794*** (0.733) (1.131) (0.373) (0.620) (1.982) (0.387) Year YES YES YES YES YES YES Inudstry YES YES YES YES YES YES N 7413 7354 1501 7477 7477 1854 Wald 1370.97 1132.69 77.54 1500.33 527.86 156.52 p 0.00 0.00 0.00 0.00 0.00 0.00 (2) Heterogeneity analysis based on enterprise size. To verify the impact of differences in enterprise size on the effect of digital transformation on green innovation output, the median of enterprise size is used as the cutoff to divide the sample into two groups, large enterprises and small and medium-sized enterprises, for separate regressions. The results in Table 9 show that digital transformation has a more significant positive impact on the green innovation output of small and medium-sized enterprises than on that of large enterprises. The possible reasons are as follows. On the one hand, the flexibility and adaptability of small and medium-sized enterprises enable them to respond more quickly to market demand and green innovation opportunities. Therefore, during digital transformation, small and medium-sized enterprises can adjust their business processes, technological applications, and organizational structures more rapidly to adapt to new market trends and environmental requirements. This agility enables them to seize opportunities for green innovation earlier. On the other hand, digital transformation can help enterprises improve production and operational efficiency, reduce resource waste and environmental pollution, and thus lower costs. Small and medium-sized enterprises usually have more limited resources and capital, so they pay more attention to cost control. Digital transformation provides opportunities for small and mediumsized enterprises to reduce production costs and environmental impacts, thereby creating more favorable conditions for their green innovation output. Through the application of digital technologies, small and medium-sized enterprises can improve energy use efficiency, waste treatment, and resource recycling in the production process, thereby further enhancing green innovation output. Table 9 Heterogeneity analysis Enterprise Scale Industry Sector (1) (2) (3) (4) Large-scale Small and Medium-scale Technologyintensive Non-technologyintensive lndigital1 0.271*** 0.410*** 0.290*** 0.0566 (9.98) (8.68) (10.38) (1.26) lnsize 0.756*** 0.810*** 0.676*** 0.839*** (26.43) (8.83) (23.33) (26.03) age -0.0542*** -0.111*** -0.0780*** -0.0431*** (-7.64) (-8.24) (-10.17) (-4.35) bm 0.0352*** -0.0212 0.0372** 0.0203 (3.32) (-0.80) (3.23) (1.25) state 0.139 0.323* 0.0728 0.405*** (1.84) (2.45) (0.95) (3.64) leverage 0.0994 0.333 0.748*** -0.986*** (0.59) (1.33) (3.89) (-4.88) intensity 0.458*** 0.181* 0.597*** 0.513*** (8.99) (2.19) (7.80) (9.61) tobinq 0.0627 0.0813* 0.00457 0.165*** (1.69) (2.10) (0.15) (3.77) IsPassISO14001 0.421*** 0.145 0.318*** 0.625*** (6.78) (1.28) (4.91) (6.95) _cons -17.13*** -17.07*** -14.82*** -19.24*** (-26.84) (-8.73) (-23.83) (-26.97) Year YES YES YES YES Inudstry YES YES YES YES N 7408 7451 5486 9485 (3) Heterogeneity analysis based on industry category. Drawing on the study of Li Xuedong et al. (2018), industries are divided into three categories: technology-intensive, capital-intensive, and labor-intensive. Accordingly, the sample is divided into technology-intensive enterprises and nontechnology-intensive enterprises for regression analysis. As shown in columns (3) and (4) of Table 9, in the technology-intensive enterprise sample the coefficient of lndigital is significantly positive at the 1 percent level, whereas in the non-technology-intensive enterprise sample the coefficient of the core explanatory variable is not significant. This indicates that digital transformation is more conducive to promoting the growth of green innovation output in technology-intensive industries. The possible reason is that enterprises with low technological content tend to produce products with low added value and therefore low profitability, making them unable to bear the high costs of digital transformation. These enterprises usually face heavy investment pressures in updating equipment, introducing new technologies, and training employees, and such investments may not immediately translate into significant green innovation output. As a result, these enterprises progress more slowly in digital transformation and cannot keep pace with technology-intensive enterprises. By contrast, technology-intensive enterprises possess clear advantages in digital transformation. Such enterprises generally devote substantial resources to research and development and innovation and possess technical expertise and innovative capability. Digital transformation provides them with more opportunities to improve products and services through advanced technologies and data analysis, increase production efficiency, reduce resource consumption, and achieve green innovation. Therefore, technology-intensive enterprises are more likely to obtain significant results in digital transformation and achieve growth in innovation performance. 4.6 Mechanism Tests This paper finds that enterprise digital transformation influences final green innovation output through two channels, namely environmental management and research and development investment. The following presents two approaches to mechanism testing. (1) Environmental management channel. Environmental management certification reflects the degree to which an enterprise values environmental protection and its capability for innovation in environmental management. Managers with strong environmental awareness will proactively engage in green innovation research and development, cleaner production, and the production of environmentally friendly products, so the green innovation effect of enterprise digital transformation will also be subject to heterogeneity. Accordingly, the sample is divided into two groups based on whether ISO 14001 environmental management certification has been passed, and regressions are estimated for the two groups. Columns (1) and (2) of Table 10 show that the coefficients are significantly positive at the 1 percent level in both groups, but the coefficient of digital transformation for enterprises that have passed ISO 14001 certification equals 0.413, whereas for enterprises without certification it equals 0.238, which is smaller. This indicates that the promoting effect of digital transformation on enterprise green innovation output is more pronounced among enterprises with environmental management certification. This implies that digital transformation promotes enterprise green innovation by advancing enterprise environmental management. Further regressions using the number of green invention patents as the dependent variable yield consistent results. Digital transformation can help enterprises improve and innovate existing business processes, enhance data analysis capability, market insight, and customer experience, and thereby promote growth in enterprise output. Enterprises that have passed environmental management standard certification have usually already comprehensively optimized and standardized their internal environmental management and formed an effective environmental management mechanism. Therefore such enterprises are better able to rely on digital transformation to apply the outcomes of digital transformation in practical business activities. Table 10 Heterogeneity analysis: whether environmental management system certification has been passed Passed ISO14001 Failed ISO14001 (1) (3) (2) (4) apply0 invention0 apply0 invention0 lndigital 0.413*** 0.623*** 0.238*** 0.408*** (0.0369) (0.0573) (0.0327) (0.0868) lnsize 0.693*** 0.902*** 0.748*** 1.261*** (0.0393) (0.0678) (0.0274) (0.0662) age -0.0586*** -0.0425** -0.0710*** 0.00270 (0.00941) (0.0152) (0.00813) (0.0169) bm 0.0481** 0.0873*** 0.0177 0.0567** (0.0177) (0.0247) (0.0123) (0.0204) state 0.402*** 0.765*** 0.0428 -0.0226 (0.0954) (0.178) (0.0822) (0.161) leverage 0.211 0.181 0.119 -0.728** (0.237) (0.390) (0.176) (0.263) intensity 0.358*** 0.474*** 0.442*** 1.314*** (0.0784) (0.130) (0.0535) (0.102) tobinq 0.0175 0.135* 0.0920** 0.159** (0.0422) (0.0613) (0.0330) (0.0594) _cons -15.51*** -22.10*** -16.44*** -30.59*** (0.772) (1.529) (0.624) (1.813) Year YES YES YES YES Inudstry YES YES YES YES N 2758 2758 12200 12200 (2) Research and development investment channel. A three-step approach is used to conduct the mediation mechanism test. On the basis of the baseline regression model, the enterprise research and development investment variable is added. If, after adding the mediator, the coefficient of the main explanatory variable decreases, this indicates that increasing research and development investment is one of the channels through which digital transformation promotes enterprise green innovation output. To test this channel, data on enterprise research and development investment are obtained, and the logarithm of research and development investment is used as the mediator variable. The regression results of the mediation effect model are shown in Table 11. Column (1) shows that the degree of enterprise digital transformation is significantly positively related to green innovation output. Column (2) shows that the degree of enterprise digital transformation is significantly positively related to research and development investment. Column (3) shows that both digital transformation and research and development investment have significantly positive coefficients, but the coefficient of digital transformation is smaller than in column (1). This indicates that the mediation effect holds. Therefore the regression results in Table 11 confirm that the mediation effect of digital transformation promoting enterprise green innovation output through increased research and development investment is valid. The reasons why enterprise digital transformation promotes an increase in research and development investment may include the following. On the one hand, enterprise digital transformation makes enterprises pay greater attention to research and development activities, especially research and development activities in environmentally friendly and green technologies, thereby increasing investment in green technologies. On the other hand, digital transformation can improve the efficiency and precision of the technologies and methods used in the research and development process, thereby further increasing the output efficiency of research and development investment. Table 11 Digital transformation, research and development investment, and green innovation output V. Research Conclusions and Policy Implications Digital transformation has created opportunities for innovation and development for enterprises, driving companies to continuously reach new heights in the field of green innovation. Based on financial and annual report data from 1,512 A-share listed companies in China from 2010-2019, this paper constructs enterprise digital transformation indicators and time and industry dual fixed effects (1) (2) (3) lnrd apply0 apply0 lndigital 0.279*** 0.304*** 0.216*** (0.0164) (0.0237) (0.0239) lnrd 0.514*** (0.0362) lnsize 0.734*** 0.241*** (0.0222) (0.0408) age -0.0660*** -0.0514*** (0.00622) (0.00662) bm 0.0298** 0.0208* (0.00994) (0.00975) state 0.176** 0.244*** (0.0638) (0.0658) leverage 0.131 0.297 (0.143) (0.159) intensity 0.420*** 0.158** (0.0448) (0.0557) tobinq 0.0791** 0.00537 (0.0260) (0.0282) IsPassISO14001 0.383*** 0.330*** (0.0552) (0.0562) _cons 14.70*** -16.44*** -13.27*** (0.154) (0.480) (0.521) Year YES YES YES Industry YES YES YES N 9828 14971 9780 R2 0.181 Wald 2762.46 2863.23 p 0.00 0.00 0.00 models to examine the impact and mechanisms of enterprise digital transformation on green innovation output. The research results show: (1) Enterprise digital transformation can significantly promote enterprise green innovation output, and this conclusion remains valid after a series of robustness tests; (2) The dynamic impact of digital transformation on enterprise green innovation output cannot avoid the limitation of diminishing marginal effects, but its positive promoting effect still has strong temporal continuity; (3) Mechanism testing shows that digital transformation can enhance enterprise green innovation output by increasing enterprise R&D investment and strengthening environmental management; (4) Heterogeneity testing finds that digital transformation has differential impacts on enterprise green innovation due to differences in the technological level of the industry to which the enterprise belongs and the enterprise's environmental management capabilities, specifically manifested in that digital transformation has a more obvious promoting effect on green innovation output of small and medium-sized enterprises in technology-intensive industries. Based on this, this paper proposes the following policy recommendations for the future development of enterprise digital transformation: First, strengthen the popularization and promotion of digital technology, encourage enterprises to apply digital technology to improve R&D investment and green innovation output. Government departments can help enterprises improve their digitalization level by providing digital technology training and financial support. Second, promote environmental management certification, encourage enterprises to implement environmental management measures to reduce environmental pollution and resource waste. Governments can promote enterprise environmental management implementation through strengthening the formulation and implementation of environmental protection laws and regulations, as well as providing environmental protection incentives and other policies. Third, support the application and utilization of green patents to improve enterprises' green innovation capabilities and competitiveness. Government departments can promote enterprise green patent applications and utilization by providing exemptions for green patent application fees and policy support for green patent utilization. Fourth, enterprises should view digital transformation as a long-term strategy and formulate corresponding plans and measures to achieve sustainable green innovation and sustainable development goals. Because digital transformation is not only about responding to current market demands and competitive pressures, but more importantly, about achieving long-term sustainable development. Actively promoting digital transformation can enhance enterprises' green innovation output and gain greater competitive advantages in the future. References Chen, Z. & Chen, D. (2019). How executive environmental awareness improves firm performance under environmental uncertainty-the mediating role of green innovation (in Chinese). Management World, 40(10), 113-128. Li, H., Wen, S., & Jiao, R. (2022). Corporate environmental culture, environmental management, and financial performance: Do words match deeds? (in Chinese). Accounting Research, 34(09), 297-312. Li, L., Liu, C., & Han, M. (2022). Does informatization construction enhance firms' innovation? Evidence from the "Two-Integration Pilot Zone" (in Chinese). China Economic Quarterly, 22(03), 1079-1100. Li, X., Jiang, K., & Xia, H. (2018). Factor distortions and productivity in heterogeneous manufacturing under supply-side reform (in Chinese). Economic Research Journal, 35(05), 23-39. Wu, F., Hu, H., Lin, H., et al. (2021). Digital transformation and capital-market performance-Evidence from stock liquidity (in Chinese). Finance & Trade Economics, 37(07), 130-144. Kleis L , Ramirez R V .Information Technology and Intangible Output: The Impact of IT Investment on Innovation Productivity[J].Information Systems Research, 2012, 23(1). Liang S, Li T. Can digital transformation promote innovation performance in manufacturing enterprises? The mediating role of R&D capability[J]. Sustainability, 2022, 14(17): 10939. Lim S , Prakash A .Voluntary Regulations and Innovation: The Case of ISO 14001[J].Public Administration Review, 2014, 74(2):233-244. Murillo-Luna J L, Ramón-Solans-Prat J C. Which competitive advantages can firms really obtain from ISO14001 certification?[J]. Journal of Industrial Engineering and Management (JIEM), 2008, 1(2): 104-118. Sun S, Guo L. Digital transformation, green innovation and the Solow productivity paradox[J]. Plos one, 2022, 17(7): e0270928. Xu Q, Li X, Guo F. Digital transformation and environmental performance: Evidence from Chinese resource‐ based enterprises[J]. Corporate Social Responsibility and Environmental Management, 2023. Zhou P , Zhou S , Zhang M ,et al.Executive Overconfidence, Digital Transformation and Environmental Innovation: The Role of Moderated Mediator[J].International journal of environmental research and public health, 2022, 19(10).
|
2509.16259
|
Rozita Teymourzadeh is assistant professor and Director at Azbil North America Research and Development, Inc. Santa Clara, California,
USA. Yuya Nakazawa is an AI researcher at Azbil Corporation, Fujisawa, Japan.
A Scalable and Interoperable Platform for
Transforming Building Information with
Brick Ontology
Rozita Teymourzadeh, PhD, CEng.
Yuya Nakazawa,
Azbil North America Research and Development, Inc.
Azbil Corporation,
Senior IEEE, IET
AI Solution Department
ABSTRACT
In the digital twin and building information era, many building automation companies searched for scalable methods to
extract and analyze different building data, including Internet of Things (IoT) sensors, actuators, layout sections, zones, etc.
The necessity for engineers to continuously manage the entire process for each new building creates scalability challenges.
Furthermore, because construction information is sensitive, transferring data on vendor platforms via the cloud creates
problems. This paper introduces a platform designed to address some of the common challenges in building automation. This
is a smart platform designed for the transformation of building information into Brick ontology (Brick 2020) and graph
formats. This technology makes it easy to retrieve historical data and converts the building point list into a Brick schema
model for use in digital twin applications. The overarching goal of the proposed platform development is semi-automate the
process while offering adaptability to various building configurations. This platform uses Brick schema and graph data
structure techniques to minimize complexity, offering a semi-automated approach through its use of a tree-based graph
structure. Moreover, the integration of Brick ontology creates a common language for interoperability and improves building
information management. The seamless and offline integration of historical data within the developed platform minimizes
data security risks when handling building information.
INTRODUCTION
Building automation companies have actively pursued scalable solutions to efficiently extract and analyze a wide spectrum of
building data in the dynamic world of digital twins and building information. Many previous attempts to address challenges
in the realm of building information were hindered by the lack of success, mainly attributed to concerns about scalability
(Rosen et al. 2015). The overarching issue required a common language for sensors in buildings to eliminate redundancy.
The industries of Building Management Systems (BMS) and Building Information Modeling (BIM) (Wang and Xie 2002)
acknowledged the lack of sufficient information in the BIM and Industry Foundation Classes (IFC) files (Vittori 2023). IFC
files included building architectural and spatial data; however, they did not include sensor and actuator data. According to
Balaji (Balaji et al. 2016), the National Institute of Standards and Technology (NIST) highlighted the critical problem of a
missing common data representation, which obstructs interoperability between buildings and hampers the scalability of
applications. Developers were forced to manually map heterogeneous data from each building to a common format, leading
to inefficiencies. In response to these challenges, the Brick community, in 2016 (Balaji et al. 2016), introduced semantic web
technology as a solution. This initiative aimed to standardize semantic descriptions of physical, logical, and virtual assets
within buildings (Brick 2020), including their interrelationships. Despite this advancement, current data integration processes
still involve substantial manual interventions, indicating a need for further optimization to achieve seamless interoperability
and scalability in the representation of building information from the source of creation. We thoroughly scrutinized existing
ontologies such as Building Topology Ontology (BOT) (Rasmussen et al. 2021), Semantic Sensor Network/Sensor,
Observation, Sample, and Actuator (SSN/SOSA) (Haller et al. 2017), Smart Applications REFerence (SAREF) (Garcia et al.
2023), Real Estate Core (REC) (Hammar et al. 2019), Project Haystack (Quinn and McArthur 2022), Brick ontology (Brick
2020) as well as proposed draft version of ASHRAE Standard 223P (ASHRAE 2018). This detailed research serves as a
foundational step in our pursuit of improving and optimizing existing ontological substructures. While we recognize the
inherent benefits and downsides of various ontologies in the context of building automation, we chose Brick ontology as the
foundation of our development efforts. This decision was made after careful consideration of the unique characteristics,
adaptability, and alignment with the specific requirements that the Brick ontology provides. Brick schema (Balaji et al. 2016)
offers several benefits that justify our decision to put this ontology on the shortlist for our continuous research work. The
main advantages are listed below.
•
Standardized Semantic Descriptions and Relationships.
•
The schema is built on contemporary Semantic Web technologies, using the Resource Description Framework
(RDF) (McBride 2004) and the Web Ontology Language (OWL) (Grigoris and Harmelen 2009). This modern
foundation contributes to the schema's robustness and adaptability.
•
The Brick schema facilitates the scalability and portability of controls, analytics, and other applications within
the industrial context, ensuring flexibility in deployment.
•
The schema adeptly captures both explicit and implicit relationships necessary for a spectrum of applications,
enhancing its versatility.
•
Brick encompasses descriptions of pertinent concepts essential for diverse applications, aligning with the
varied needs of users.
•
The schema caters to the requirements of both domain experts and application developers, fostering
collaboration and meeting diverse perspectives.
•
Brick enables an automatic validation process to ensure the correct usage of its concepts, enhancing data
integrity and reliability.
•
The schema supports the use of templates for standard equipment, streamlining the integration process, and
ensuring consistency in representation.
These benefits combine to make Brick a compelling solution, in line with our goals of efficient and standardized ontology
utilization in our industrial application. By using Brick as our fundamental framework, we aim to rely on its well-defined
structure and comprehensiveness, providing us with a solid scaffolding to on which to build. This choice is the product of a
rigorous and intentional process that ensures our approach to building automation is not just technically solid but also
optimally aligned with the domain's complexities and nuances. Following the introduction of Brick in 2016 (Balaji et al.
2016), numerous research initiatives have emerged aimed at enhancing the compatibility of Brick with other ontologies and
bridge the gap between manual and automated processes. An illustrative example is the work by Fierro (Fierro et al. 2020).
Their study provides a qualitative analysis of Project Haystack, focusing on a tagging system for building metadata. Through
this analysis, they identified a set of inherent definable and consistency issues within the tagging model. In response, the
research aimed to present a solution by proposing a replacement of Brick with clear formal semantics. Exploring Brick
metadata persisted, and Fierro (Fierro et al. 2019) took the initiative to harmonize additional ontologies such as Haystack
(Haystack 2018) with the existing Brick ontology. Haystack, recognized for its tagging system, became the focal point of a
qualitative analysis presented by the author. This analysis delved into the intricacies of Haystack project (Haystack 2018) as
applied to building metadata. In particular, the authors introduced Brick ontology, characterized by lucid formal semantics.
This innovative addition facilitates inferring a valid Brick model from an initially informal Haystack model. The research
extends beyond mere exploration, actively contributing to the refinement and integration of ontologies for enhanced efficacy
in building metadata representation. Garrido (Garrido-Hidalgo 2022) brought forth the idea of interlinking the Brick schema
with a building domain ontology. Their proposal involved the development of an interactive ontology matcher that has weak
supervision and active learning. Although the approach appears to be logical, a notable gap lies in the absence of a
comprehensive test report that provides in-depth accuracy of the platform. In 2022, Fierro (Fierro et al. 2022) put a principle
on the construction of semantic metadata models, introducing the concept of semantic sufficiency. According to this
principle, a model is considered "finished" when it incorporates the metadata essential to support a specific set of
applications. The methodology involves soliciting feedback from application metadata requirements and employing a
templating system to generate common metadata model components. Although this approach has the potential to enhance
metadata precision, it involves a trade-off by introducing customized manual work for specific requirements. The emphasis is
on striking a balance between increased precision and the inclusion of tailored elements, acknowledging the nuanced
demands of diverse applications. During a short period of time, multiple research efforts have been carried out to enhance the
utilization of Brick Metadata (Lee et al. 2023). Developers have actively desired to bridge the gap between automated and
manual processes, introducing additional tools and utilities. Consequently, this paper presents a scalable solution designed
with the Brick metadata ontology as its foundational structure and the backbone of the proposed platform. The primary
objective of the developed platform is to effectively retrieve data from building sensors and actuators, converting this
information into semantic point label information compatible with Brick ontology and scalable for existing and new
buildings. This transformation facilitates the classification and standardization of building data using the Brick ontology,
allowing the easy development of applications such as energy monitoring and data visualization. The subsequent sections
delve into a more detailed exploration of the implementation process, shedding light on the various facets of the platform.
PROPOSED PLATFORM INTRODUCTION
The inception of the proposed platform was driven by the need to fully capture building data, including sensor status,
IoT sensors, actuators, and various digital and analog data generated by building automation systems. Typically, acquiring
access to the building gateway or the BACnet system (Bushby et al. 2002) involves navigating through an administrative
process and seeking permissions from multiple departments. This procedure is not only time-consuming, but, at times,
permissions are withheld due to concerns surrounding data security. To address this challenge, we opted for an alternative
approach. Instead of directly connecting to the live system, we decided to retrieve and process historical data spanning a
single year. This data is extracted, and the platform is parsed through the stored data offline and precisely analyzed the
historical data. This method not only avoids the complexities associated with permission but also ensures a secure and
efficient means of conducting our analysis in terms of data analysis and building data normalization. Our focus has been on
accumulating a substantial dataset over an entire month and year from a building located in Azbil Fujisawa Technology
Center (FTC), Japan (Azbil Corporation 2024) hereafter referred to as the FTC building. This building comprises six floors,
with a total area of 2,320 square meters (24,972 square feet) and a total floor space of 10,721 square meters (115,399 square
feet). For illustrative purposes, the following figure presents an example of the data, classified as point labels and time series
data, extracted from the FTC building. This project demonstrates our commitment to creating a robust platform that is suited
to the specific needs of real-world building scenarios. It is noted that the point label data presented in this document have
been modified from their original values to ensure data privacy.
Figure 1
An example of point list and time series data
Using this technique to gather historical building data not only improves security but also acts as a preventative
precaution against possible cyber threats. By avoiding direct access to real-time data, we reduce the likelihood of harmful hits
on the real-time system. This precautionary measure ensures a strong and secure framework that protects the integrity and
confidentiality of building data throughout our analysis operations. After preparing the data for processing using Brick
ontology (Brick 2020), the following section will focus on the design architecture that accommodates the platform.
PLATFORM DESIGN ARCHITECTURETHE
proposed platform is a semi-automated system that takes building information in raw, non-standard formats and transforms it
into a standardized version that is compatible with the Brick schema. The principal objective is to correct the data from the
source. Despite these advancements, our development journey encountered several challenges:
1. Limited access to building information due to the absence of a connection to the BACnet system.
2. The variable nature of building information presents a diversity of data structures and formats.
3. Language barriers, particularly in dealing with point label data generated in Japanese language.
In response to these complications, this platform evolved as a smart solution in the field of building automation. It
deliberately tackles these challenges by effortlessly converting building information into Brick ontology and graph formats.
This modification improves scalability in building automation and allows for a faster deployment procedure across several
buildings, reducing the need for considerable human intervention by engineers. A key aspect of this platform is its ability to
match point lists with the Brick schema using a classifier model trained on Artificial Intelligence (AI) within its backbone,
organized in a dictionary format. This platform has three main duties that contribute to its powerful functionality:
•
Data Preparation: This involves translation and normalization processes that ensure that the data are formatted and
aligned for further analysis.
•
Point Label Mapping: Using AI and a purpose-designed algorithm, the platform undertakes the crucial task of
mapping point labels between the current building information and the corresponding Brick class and sub-classes.
•
Graph Model Generation: generating a comprehensive graph model based on Brick metadata, providing a structured
representation of building information.
•
Graph Validation: The platform employs validation mechanisms to ensure the accuracy and integrity of the
generated graph model.
This platform offers the following underlying architecture to create graph (McBride 2004) from the point labels. Refer to
Figure 2 for a visual representation of the top-level architecture of the platform. The subsequent section describes the
building data transformation into the Brick schema model.
Preparation and Dependencies:
During this essential phase of our project, we systematically build up dependencies, integrate necessary libraries, and
populate the system with necessary data. An important part of this phase is data translation, which converts sensor data and
semantic representations of sensors and actuators from Japanese to English language. This translation is required for an
accurate matching to Brick classes. To help with this procedure, we used and update a translation service with a built-in
Japanese dictionary. Figure 3 illustrates an example of the dictionary module used in this project. Furthermore, a key
component is the integration of the Building Metadata Ontology Interoperability Framework (BuildingMOTIF) (The
National Renewable Energy Laboratory (NREL) 2023), a versatile toolkit that facilitates the creation, storage, visualization,
and validation of building metadata. Delivered as an SDK with APIs, BuildingMOTIF abstracts complexities related to RDF
(McBride 2004) graphs, database management, Shapes Constraint Language (SHACL) (W3C 2016) validation, and
interoperability across diverse metadata schema and ontologies. This project utilizes the Brick ontology's "Nightly_1.3"
version and an AI-trained data model, both loaded during the preparation phase. It is worth noting that due to security
considerations and the unavailability of building layout and mechanical specifications, the proposed platform uses the point
label information to construct the semi-layout in JavaScript Object Notation (JSON) format. This semi-layout, although not
exceptionally precise, serves its purpose well, enabling the detection of the required number of floors and rooms for our
project. For a visual representation, refer to Figure 4 that illustrate an example of the layout generated for the FTC building.
Figure 2
Proposed top-level platform architecture
Figure 3 Example of dictionary service
Figure 4 Building semi-layout generated by point label
Initialization:
During this step, the extraction, sorting, and evaluation of Brick classes take precedence, serving as the foundation for
further mapping and AI job processing. Here, the data structure rules for transforming point labels into graphs are thoroughly
described. The fundamental data structure of RDF graph revolves around the interaction of subjects, objects, and predicates
(SOP) that are linked to shape the RDF graph (McBride 2004) tuple model. This hierarchical arrangement contributes
significantly to the comprehensive representation of the building. The subsequent model serves as an illustrative example,
focusing on an element of an FTC building (Azbil Corporation 2024), specifically within the Heating, Ventilation and Air
Conditioning (HVAC) system (Trcka et al. 2010). Figure 5 shows some examples of tuple data structure:
Figure 5 Tuple data structure of targeted FTC building
Figure 6 Example of the Brick class (Brick 2020)
This information is compiled by extracting point labels, normalizing the data, and generating a tuple data structure for each
equipment connected in the building.
Tokenization and Data Processing:
In this section, the point label undergoes a thorough data-cleaning process employing several algorithms to detect
and extract pertinent information. The extraction includes key details, including: (1) Information about zones, floors, and
areas. (2) First iteration of Brick class detection. (3) Device information. (4) Relations and connections.
The organized data are then structured into key-value pairs for Brick classes and lists for device information. This
structured format serves as input for the upcoming phases, specifically AI processing and detection, resulting in a
comprehensive approach to data utilization in the next stage of our project. This outcome prompts us to create a semi-layout
and the starting point for the introduction of equipment connection points.
Mapping and AI Processing:
The mapping and AI processing stages play an important role in this chain process. Mapping involves identifying
similar Brick classes for each point label generated by the building data. This proves to be a challenging task due to
differences in semantics, point labels, and the digital representation of data. In a 2020 study (Fierro et al. 2020), Fierro
proposed a unified authoritative Brick Metadata model for buildings, intended for continuous maintenance throughout the life
cycle. However, the lack of extensive building data for research limited the testing of these algorithms. In our research, the
Japanese representation of building metadata serves as a valuable source of point label data for our matching purposes. In the
Brick ontology, the ideal outcome is a match in the form of a key-value pair for Brick classes. For a visual reference, Figure 6
above illustrates examples of Brick classes, sub-classes, and their relations. This phase lays the support for accurate mapping
and processing, overcoming the challenges associated with semantic variations in the data. Point labels are created by putting
the building data through a rigorous process of translation, normalization, and cleaning to align them with Brick classes.
Starting with a trained predetermined model that is organized into key-value pairs, the mapping algorithm begins. Later, an
AI algorithm updates this predefined model with a model taught. To efficiently train the data and then map it with the
intended building point list, we have investigated several AI techniques and frameworks to train the data such as Random
Forest (Briman 2001), Active learning text-based Scrabble framework (Koh et al. 2017), and Large Language Models (LLM)
framework like OpenAI (Brown et al. 2020). To validate our generated graph model, we utilized various building datasets
for comparison and benchmarking. These include Ecobee dataset (Ecobee 2024), the High-Fidelity Building Emulator (High-
Fidelity Building Energy Data Collection 2024), the Honda Smart Home (Honda Smart Home Data Collection 2024), the
Lawrence Berkeley National Laboratory Building (Building Data Genome Project 2024), among others. The condensed
model is kept on the platform in dictionary format. With the integration of Brick schema, this research methodology helps the
algorithm in interpreting, mapping, and classifying the intended building information. Later, we used Jaccard index (Skiena
2008) to measure the similarity between two sets of normalized trained point label and Brick point and location classes. The
Jaccard index is calculated using the following equation:
(1)
The Jaccard index is used to estimate the best match similarity between the point and the Brick classes. Notably, the
raw point data has already been normalized using an LLM-trained module. The following table shows an example of
mapping results of the matching algorithm.
Table 1. Brick class and point label match example
Brick Class
Point label
Algorithm Match
Temperature_Sensor
SDF_65_Zone_Average_Temp
Average_Zone_Air_Temperature_Sensor
Occupancy_Count_Sensor
SDF1_People number
Occupancy_Count_Sensor
Humidity_Sensor
AHU_67_Indoor_Humi
Humidity_Sensor
Illuminance_Sensor
SDF_3_WP_Sensor_Illuminance
Illuminance_Sensor
Auditorium
VAV_6_F_Audience_Area
Auditorium
Not applicable
Reserve__AV
No Match
We evaluated about 7800 labels and about 7400-point labels were matched with Brick classes. The remaining 400 points
were either "Reserve" points or lacked sufficient information to match with a Brick class. Enhancements to our AI matching
algorithm are ongoing but beyond the scope of this paper.
RDF Graph Generation:
This stage is important within the algorithm. As discussed earlier, the data have been normalized, mapped, and
classified. In this phase, we established a tuple data structure for the converted building data, now referred to as point label.
Although the point label contains information about the Brick class, it does not define a relationship within the building as a
reference to other elements. To bridge this gap, each point label needs to be converted into a data structure of object,
predicate, and subject, as illustrated here:
Figure 7 Point label in Brick RDF graph format
Figure 8 Thermostat template with CO2
Iterating through each point label that is accessible, assigning a matching Brick class, and constructing the relationship
to turn the point label into the Brick RDF graph are all part of the loop process. The RDF graph has a tree-like shape and is
widely considered the best option for knowledge graphs due to its native web syntax, which makes data sharing and exchange
easier (Antanas 2022). Additionally, because of its formal semantics, meaning and structure can be easily aligned across
many sources, resulting in unified perspectives and clear interpretation. The tree-like data structure of the RDF makes it
suitable for web queries and allows the achievement of a linear time and space query. A graph-format data structure is
attractive for storing and accessing point labels and constructing information in databases because of its advantages,
especially when used with RDF. The RDF is hierarchical, making it compatible with the SPARQL (Furche et al. 2010;
DuCharme 2013) and GraphQL (GraphQL 2021) query languages and making fast data retrieval possible. All in all, these
benefits put RDF in a strong position for constructing information within a database architecture, as well as for deploying and
querying point labels.
Correction and Completion:
At this point, the algorithm adds relations to a previously discovered equipment and integrates it to the main graph, the
algorithm searches for air handling unit (AHU) terms (we will talk about AHU terms in the upcoming chapter) to make sure
no unit is overlooked. Depending on whether the mismatched point label is produced, it will be either removed from the
graph or matched with the closest matching factor and included in the main graph in this review part. Here, reasoning and
inference (Brick 2020) are implemented. Inference and reasoning are processes to imply information from the Brick class and
expand it to the main graph. Based on the application requirement, detailed information is added to the graph as a separate
branch. This information includes super-class, inverse relation, and tagging information.
Model Validation:
Now the graph is generated using the point label and reviewed, and the missing element is added or removed from the
graph according to their matching algorithm through the reasoning process. In this section, we check the generated graph
against constraints and guidelines. The idea of graph validation is inspired by BuildingMOTIF development (NREL 2023).
During the validation process, we specify the equipment to be validated using predefined templates. These templates are
designed to ensure the accuracy of the generated Brick model, with the final report produced using the BuildingMOTIF
library (NREL 2023). Each template delineates the required components and their dependencies, such as setpoints,
temperature points, and CO2 points. An example of such a template is provided in Figure 8 (NREL 2023).
Brick Model Generation:
This stage finalizes the architecture by generating the Brick model. At this stage, the data taken from the FTC building
point list is organized into a tuple data format and then converted into a Brick model. Building data, both dynamic and static,
are transformed into a machine-readable, semantic format by this process. A format like this makes it easier to create queries
for sensors and actuators, which makes it possible to retrieve data efficiently for monitoring applications.
DETAILED ARCHITECTURE DESIGN
The project architecture starts with the translation and parsing utility for the point label. To align the raw data point
label with the triple SOP data structure, the data goes through several steps; after processing the data, it is correlated with the
relevant Brick class model to produce an RDF Brick model. The architecture's block diagram is displayed in Figure 9. The
mapping stage in point processing involves intensive AI processing and data normalization to align the point list with the
appropriate Brick class. Subsequently, it is necessary to identify HVAC-related terms within the dataset, as these terms are
important to define relationships within the list of points. HVAC term detection is carried out through input from an
interactive user interface specifically designed for this purpose. For example, the following point refers to HVAC
components: (1) AC: Air handling unit (AHU) (2) SDF: Smart-control-damper on the Diffuser (3) VAV: Variable air volume
(4) CAV: Constant air volume. The implemented mapping algorithm analyzes the HVAC elements within the point list and
determines the relationships between them. Figure 10 illustrates the defined relations and their reverse relations for AHU
terms, while Figure 11 displays the interactive graphical user interface designed to capture operator input.
Figure 9 The developed platform architecture
Figure 10 FTC building AHU terms relation
To enhance scalability and streamline debugging, the algorithm divides each equipment unit and its associated
relationships into separate modules. Later, these modules are combined to generate a comprehensive model representing the
building. For example, relationships like AC to CAV and AC to SDF are handled in individual subsections (Figure 11,
Figure 12):
•
AC2CAV: AC and CAV units and relevant Brick relations.
•
AC2SDF: AC and SDF units and relevant Brick relations.
•
AC2VAV: AC and VAV units and relevant Brick relations.
•
CAV2SDF: CAV and SDF units and relevant Brick relations.
•
Point connection: Point connection to the zone or other unit equipment.
•
Tagging: Create relevant tags for a specific point.
•
Reasoning: Retrieve the parent class of the specific point.
Figure 12 illustrates this structure in a scalable module, which reduces processing time when certain modules are not
required. Each module can be enabled or disabled based on the application's needs.
Figure 11 The front-end application for data processing
Figure 12 Detailed architecture to generate Brick graph
Here the processing and data normalization is concluded, and the Brick graph is generated and stored in the database
for data access processing. In the next section, we will discuss the result in detail.
RESULT AND DISCUSSION
Following the processing and mapping of each point to the relevant Brick class, and defining the relationships between
equipment units, the Brick module graph is generated and combined to represent the semantic version of the building data. It
also includes sensors and actuators information. The following relationships were implemented in this module (Figure 13)
while output of the developed platform which represents the Brick module of FTC building is shown here (Figure 14):
Figure 13 Brick relation implemented in the platform
Figure 14 Generated Brick model for FTC building
In this graph, three prefixes were extracted: Brick, FTC building, and reference for the time series ID. The first point
represents the tuple structure for the status of locker 536, located in the library. The status value of this locker is stored in an
SQL database, which can be retrieved upon receiving a query. This point includes reasoning attributes like “On_status”,
“Point”, and “status”, and tags such as “On”, “Point”, and “Status”. Another example, shown in Figure 15, illustrates a tuple
object referring to AHU unit No. 977 (AC 977), which supplies specific CAV and SDF units as listed below.
Figure 15 AHU unit has “feeds” relation to the CAV
and SDF units
Figure 16 AHU unit and point relation
In this example Figure 16, the object created from the points represents AHU unit AC 600, which feeds the zone
equipped with the temperature sensor, damper, and the points listed here. Finally, the generated Brick model provides a
detailed understanding of the sensors deployed in the building, such as a WP (Workplace) sensor of the wireless cell-type
HVAC system installed in the FTC building as shown in the following figure. Figure 17 shows the WP (Workplace) sensor
detected by the Brick model and is part of the building automation system.
Figure 17 Generated Brick model explains WP (Workplace) sensor
These output examples from the generated Brick module on the platform demonstrate the relationships between
equipment and units across different building sectors, highlighting the model representation's scalability. The platform was
tested in several buildings and only minor adjustments were needed. The generated graph and time series data are stored in a
database and can be used for data access, energy monitoring, and data visualization applications.
CONCLUSION
This article discussed the necessity of decreasing human labor and the scalability issues with contemporary building
automation. With the help of the developed platform and Brick ontology, we suggested a scalable and seamless solution.
Using historical building data, this platform creates a Brick model graph that captures and maintains the links established by
the Brick ontology. Examples of this type of information include sensors, actuators, and the time-series data that goes along
with them. We presented an overview of the developed architecture platform, and several examples of the generated Brick
output models were illustrated and discussed in detail. The developed platform has been tested with multiple buildings, and
the Brick graph data model was generated successfully. The proposed platform produced realistic graphs that reflected the
operational data and the semi-layout of each building. The scalability of this solution makes it possible to automatically
generate machine-readable semantic building models, greatly minimizing the manual labor needed to produce these models
from scratch. Additionally, the platform creates a uniform framework for expressing building information by integrating
Brick ontology, making interoperability and effective data management between diverse building automation systems
possible.
ACKNOWLEDGMENTS
This project was developed by Azbil North America Research and Development Inc., with support from the Azbil AI
Solution department. I sincerely thank everyone involved, especially Dr. Gabe Fierro, founder and lead maintainer of Brick
Schema; Chosei Kaseda and Jeremy Tole from Azbil North America Research and Development, respectively for their
invaluable guidance and recommendations throughout the project.
REFERENCES
Antoniou, G., and Harmelen, F. 2009. Web ontology language: Owl. Springer Handbook on ontologies. p. 91-110.
ASHRAE’sBACnetCommittee. 2018. American Society of Heating Refrigerating and Air-Conditioning Engineers. Project
Haystack and Brick Schema Collaborating to Provide Unified Data Semantic Modeling Solution.
Atanas K., 2022. RDF Levels the Advantages of Labeled Property Graphs and Keeps Three Key Benefits: Standards,
Semantics and Interoperability. Ontotext Url: https://www.ontotext.com/knowledgehub/. p. 1.
Azbil
Corporation,
2024.
Azbil
Showroom:
Future
Technology
Center.
Url:
https://www.azbil.com/corporate/pr/showroom/ftc/index.html.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... and Amodei, D. (2020). Language models are few-
shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Bhattacharya, A., Fierro, G., Gao, J., Gluck, J., Hong, D., Johansen, A., Koh, J., Ploennigs, J., and Agarwal, Y. 2016. Brick:
Towards a unified metadata schema for buildings. Proceedings of the 3rd ACM International Conference on Systems for
Energy-Efficient Built Environments: 41-50.
Breiman, L. 2001. Random Forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324.
Brick. 2020-2021. Brick OntologyDocumentation-Brick Consortium, Inc. California, USA. Brick.
Bushby, S.T., and Newman, H.M. 2002. BACnet today. ASHRAE journal. Vol. 10:10-18.
Castro, G.R., Lefrançois, M., Poveda-Villalon, M. and Daniele, L. 2023. The ETSI SAREF ontology for smart applications: a
long path of development and evolution. Energy Smart Appliances: Applications, Methodologies, and Challenges. Wiley
Online Library. 183-215.
DuCharme, B., 2013. Learning SPARQL: querying and updating with SPARQL 1.1. O'Reilly Media, Inc.
Ecobee Data Collection, 2024. Building Data Genome Project. Url: https://bbd.labworks.org/ds/bbd/ecobee. Accessed: 2024-
08-30.
Fierro, G., Koh, J. Nagare, Sh., Zang, X., Agarwal, Y., Gupta, R.K. and Culler, D.E. 2020. Formalizing tag-based metadata
with the brick ontology. Frontiers in Built Environment, Frontiers Media SA. Vol. 6- p.558034.
Fierro, G., Saha, A., Shapinsky, T., Steen, M., and Eslinger, H. 2022. Application-driven creation of building metadata
models with semantic sufficiency. Proceedings of the 9th ACM International Conference on Systems for Energy-
Efficient Buildings, Cities, and Transportation. p. 228-237.
Fierro, G., Koh, J., Agarwal, Y., Gupta, R.K., and Culler, D.E. 2019. Beyond a house of sticks: Formalizing metadata tags
with brick. Proceedings of the 6th ACM international conference on systems for energy-efficient buildings, cities, and
transportation. P. 125-134.
Fierro, G., Prakash, A.K., Mosiman, C., Pritoni, M., Raftery, P., Wetter, M. and Culler, D.E. 2020. Shepherding metadata
through the building lifecycle. Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient
Buildings, Cities, and Transportation. P. 70-79.
Furche, T., Weinzierl, A., Bry, F., Virgilio, R., Giunchiglia, F., and Tanca, L. 2010. Labeling RDF Graphs for Linear Time
and Space Querying. Semantic Web Information Management: A Model-Based Perspective. Springer Berlin Heidelberg.
p. 309-339. ISBN: 978-3-642-04329-1. Doi. 10.1007/978-3-642-04329-1_14.
Garrido-Hidalgo, C., Furst, J., Cheng, B., Roda-Sanchez, L., Olivares, T., and Kovacs, E. 2022. Interlinking the Brick
Schema with Building Domain Ontologies. The 20th ACM Conference on Embedded Networked Sensor. P. 1026-1030.
GraphQL Foundation. 2021. GraphQL: A Query Language for APIs. Url: https://graphql.org/.
Hammar, K., Wallin, E.O., Karlberg, P., and Halleberg, D. 2019. The realestatecore ontology. The Semantic Web--ISWC
2019, 18th International Semantic Web Conference, Auckland, New Zealand, October 26--30, 2019, Springer
Proceedings, Part II 18:130-145.
Haller, A., Janowicz, K., Cox, S., Phuoc, D.L., Taylor, K., and Lefrançois, M. 2017. Semantic Sensor Network Ontology.
Semantic Web W3. Vol. OGC 16-079.
Haystack team. 2018. Project Haystack. Url: http://project-haystack.org/.
High-Fidelity
Building
Energy
Data
Collection.
Building
Data
Genome
Project.
2024.
Url:
https://bbd.labworks.org/ds/acpr/hfbe. Accessed: 2024-08-30.
Honda Smart Home Data Collection. 2024. Building Data Genome Project. Url: https://bbd.labworks.org/ds/bbd/hshus.
Accessed: 2024-08-30.
Koh, J., Sengupta, D., McAuley, J., and Gupta, R., Balaji, B., and Agarwal, Y. Scrabble: converting unstructured metadata
into brick for many buildings. Proceedings of the 4th ACM International Conference on Systems for Energy-Efficient
Built Environments. p.1-2.
LBNL
Building
59
Energy
Data
Collection.
2024.
Building
Data
Genome
Project.
Url:
https://bbd.labworks.org/ds/bbd/lbnlbldg59. Accessed: 2024-08-30.
Lee, S., Cui, B., Bhandari, M.S., and Im, P. 2023. Visual Brick model authoring tool for building metadata standardization.
Elsevier Automation in Construction.Vol. 156:105-122
McBride, B., 2004. The resource description framework (RDF) and its vocabulary description language RDFS. Springer
Handbook on ontologies. p. 51-65.
Quinn, C., and McArthur, J.J. 2022. Comparison of Brick and Project Haystack to Support Smart Building Applications.
arXiv preprint arXiv:2205.05521.
Rasmussen, M.H., Lefrançois, M., Schneider, G.F., and Pauwels, P. 2021. BOT: The building topology ontology of the W3C
linked building data group. Semantic Web, IOS Press. 12(1):143-161.
Rosen, R., Von, W., Lo, G., Bettenhausen, G., and Kurt, D. 2015. About the importance of autonomy and digital twins for the
future of manufacturing. Ifac-papersonline Elsevier 48(3):567-572.
Skiena, S. S. 2008. The Algorithm Design Manual. Springer 2nd Edition. ISBN-13. 978-1849967204.
The National Renewable Energy Laboratory (NREL), 2023. Building Metadata OnTology Interoperability Framework
(BuildingMOTIF). Url: https://github.com/NREL/BuildingMOTIF.
Trcka, M., and Hensen, M.2010. Overview of HVAC system simulation. Elsevier Automation in construction.19(2):93-99.
Vittori, F., Tan, Ch.F., Pisello, A.L., Chong, A., and Miller, C. 2023. BIM-to-BRICK: Using graph modeling for IoT-BMS
and spatial semantic data interoperability within digital data models of buildings. arXiv preprint arXiv:2307.13197.
Wang, S., and Xie, J. Integrating Building Management System and Facilities Management on the Internet. Automation in
construction. 11(6):707-715.
W3C (MIT, ERCIM, Keio, Beihang), 2017. Shapes Constraint Language (SHACL). Url: https://www.w3.org/TR/shacl/.
|
Rozita Teymourzadeh is assistant professor and Director at Azbil North America Research and Development, Inc. Santa Clara, California, USA. Yuya Nakazawa is an AI researcher at Azbil Corporation, Fujisawa, Japan. A Scalable and Interoperable Platform for Transforming Building Information with Brick Ontology Rozita Teymourzadeh, PhD, CEng. Yuya Nakazawa, Azbil North America Research and Development, Inc. Azbil Corporation, Senior IEEE, IET AI Solution Department ABSTRACT In the digital twin and building information era, many building automation companies searched for scalable methods to extract and analyze different building data, including Internet of Things (IoT) sensors, actuators, layout sections, zones, etc. The necessity for engineers to continuously manage the entire process for each new building creates scalability challenges. Furthermore, because construction information is sensitive, transferring data on vendor platforms via the cloud creates problems. This paper introduces a platform designed to address some of the common challenges in building automation. This is a smart platform designed for the transformation of building information into Brick ontology (Brick 2020) and graph formats. This technology makes it easy to retrieve historical data and converts the building point list into a Brick schema model for use in digital twin applications. The overarching goal of the proposed platform development is semi-automate the process while offering adaptability to various building configurations. This platform uses Brick schema and graph data structure techniques to minimize complexity, offering a semi-automated approach through its use of a tree-based graph structure. Moreover, the integration of Brick ontology creates a common language for interoperability and improves building information management. The seamless and offline integration of historical data within the developed platform minimizes data security risks when handling building information. INTRODUCTION Building automation companies have actively pursued scalable solutions to efficiently extract and analyze a wide spectrum of building data in the dynamic world of digital twins and building information. Many previous attempts to address challenges in the realm of building information were hindered by the lack of success, mainly attributed to concerns about scalability (Rosen et al. 2015). The overarching issue required a common language for sensors in buildings to eliminate redundancy. The industries of Building Management Systems (BMS) and Building Information Modeling (BIM) (Wang and Xie 2002) acknowledged the lack of sufficient information in the BIM and Industry Foundation Classes (IFC) files (Vittori 2023). IFC files included building architectural and spatial data; however, they did not include sensor and actuator data. According to Balaji (Balaji et al. 2016), the National (NIST) highlighted the critical problem of a missing common data representation, which obstructs interoperability between buildings and hampers the scalability of applications. Developers were forced to manually map heterogeneous data from each building to a common format, leading to inefficiencies. In response to these challenges, the Brick community, in 2016 (Balaji et al. 2016), introduced semantic web technology as a solution. This initiative aimed to standardize semantic descriptions of physical, logical, and virtual assets within buildings (Brick 2020), including their interrelationships. Despite this advancement, current data integration processes still involve substantial manual interventions, indicating a need for further optimization to achieve seamless interoperability and scalability in the representation of building information from the source of creation. We thoroughly scrutinized existing ontologies such as Building Topology Ontology (BOT) (Rasmussen et al. 2021), Semantic Sensor Network/Sensor, Observation, Sample, and Actuator (SSN/SOSA) (Haller et al. 2017), Smart Applications REFerence (SAREF) (Garcia et al. 2023), Real Estate Core (REC) (Hammar et al. 2019), Project Haystack (Quinn and McArthur 2022), Brick ontology (Brick 2020) as well as proposed draft version of ASHRAE Standard 223P (ASHRAE 2018). This detailed research serves as a foundational step in our pursuit of improving and optimizing existing ontological substructures. While we recognize the inherent benefits and downsides of various ontologies in the context of building automation, we chose Brick ontology as the foundation of our development efforts. This decision was made after careful consideration of the unique characteristics, adaptability, and alignment with the specific requirements that the Brick ontology provides. Brick schema (Balaji et al. 2016) offers several benefits that justify our decision to put this ontology on the shortlist for our continuous research work. The main advantages are listed below. • Standardized Semantic Descriptions and Relationships. • The schema is built on contemporary Semantic Web technologies, using the Resource Description Framework (RDF) (McBride 2004) and the Web Ontology Language (OWL) (Grigoris and Harmelen 2009). This modern foundation contributes to the schema's robustness and adaptability. • The Brick schema facilitates the scalability and portability of controls, analytics, and other applications within the industrial context, ensuring flexibility in deployment. • The schema adeptly captures both explicit and implicit relationships necessary for a spectrum of applications, enhancing its versatility. • Brick encompasses descriptions of pertinent concepts essential for diverse applications, aligning with the varied needs of users. • The schema caters to the requirements of both domain experts and application developers, fostering collaboration and meeting diverse perspectives. • Brick enables an automatic validation process to ensure the correct usage of its concepts, enhancing data integrity and reliability. • The schema supports the use of templates for standard equipment, streamlining the integration process, and ensuring consistency in representation. These benefits combine to make Brick a compelling solution, in line with our goals of efficient and standardized ontology utilization in our industrial application. By using Brick as our fundamental framework, we aim to rely on its well-defined structure and comprehensiveness, providing us with a solid scaffolding to on which to build. This choice is the product of a rigorous and intentional process that ensures our approach to building automation is not just technically solid but also optimally aligned with the domain's complexities and nuances. Following the introduction of Brick in 2016 (Balaji et al. 2016), numerous research initiatives have emerged aimed at enhancing the compatibility of Brick with other ontologies and bridge the gap between manual and automated processes. An illustrative example is the work by Fierro (Fierro et al. 2020). Their study provides a qualitative analysis of Project Haystack, focusing on a tagging system for building metadata. Through this analysis, they identified a set of inherent definable and consistency issues within the tagging model. In response, the research aimed to present a solution by proposing a replacement of Brick with clear formal semantics. Exploring Brick metadata persisted, and Fierro (Fierro et al. 2019) took the initiative to harmonize additional ontologies such as Haystack (Haystack 2018) with the existing Brick ontology. Haystack, recognized for its tagging system, became the focal point of a qualitative analysis presented by the author. This analysis delved into the intricacies of Haystack project (Haystack 2018) as applied to building metadata. In particular, the authors introduced Brick ontology, characterized by lucid formal semantics. This innovative addition facilitates inferring a valid Brick model from an initially informal Haystack model. The research extends beyond mere exploration, actively contributing to the refinement and integration of ontologies for enhanced efficacy in building metadata representation. Garrido (Garrido-Hidalgo 2022) brought forth the idea of interlinking the Brick schema with a building domain ontology. Their proposal involved the development of an interactive ontology matcher that has weak supervision and active learning. Although the approach appears to be logical, a notable gap lies in the absence of a comprehensive test report that provides in-depth accuracy of the platform. In 2022, Fierro (Fierro et al. 2022) put a principle on the construction of semantic metadata models, introducing the concept of semantic sufficiency. According to this principle, a model is considered "finished" when it incorporates the metadata essential to support a specific set of applications. The methodology involves soliciting feedback from application metadata requirements and employing a templating system to generate common metadata model components. Although this approach has the potential to enhance metadata precision, it involves a trade-off by introducing customized manual work for specific requirements. The emphasis is on striking a balance between increased precision and the inclusion of tailored elements, acknowledging the nuanced demands of diverse applications. During a short period of time, multiple research efforts have been carried out to enhance the utilization of Brick Metadata (Lee et al. 2023). Developers have actively desired to bridge the gap between automated and manual processes, introducing additional tools and utilities. Consequently, this paper presents a scalable solution designed with the Brick metadata ontology as its foundational structure and the backbone of the proposed platform. The primary objective of the developed platform is to effectively retrieve data from building sensors and actuators, converting this information into semantic point label information compatible with Brick ontology and scalable for existing and new buildings. This transformation facilitates the classification and standardization of building data using the Brick ontology, allowing the easy development of applications such as energy monitoring and data visualization. The subsequent sections delve into a more detailed exploration of the implementation process, shedding light on the various facets of the platform. PROPOSED PLATFORM INTRODUCTION The inception of the proposed platform was driven by the need to fully capture building data, including sensor status, IoT sensors, actuators, and various digital and analog data generated by building automation systems. Typically, acquiring access to the building gateway or the BACnet system (Bushby et al. 2002) involves navigating through an administrative process and seeking permissions from multiple departments. This procedure is not only time-consuming, but, at times, permissions are withheld due to concerns surrounding data security. To address this challenge, we opted for an alternative approach. Instead of directly connecting to the live system, we decided to retrieve and process historical data spanning a single year. This data is extracted, and the platform is parsed through the stored data offline and precisely analyzed the historical data. This method not only avoids the complexities associated with permission but also ensures a secure and efficient means of conducting our analysis in terms of data analysis and building data normalization. Our focus has been on accumulating a substantial dataset over an entire month and year from a building located in Azbil Fujisawa Technology Center (FTC), Japan (Azbil Corporation 2024) hereafter referred to as the FTC building. This building comprises six floors, with a total area of 2,320 square meters (24,972 square feet) and a total floor space of 10,721 square meters (115,399 square feet). For illustrative purposes, the following figure presents an example of the data, classified as point labels and time series data, extracted from the FTC building. This project demonstrates our commitment to creating a robust platform that is suited to the specific needs of real-world building scenarios. It is noted that the point label data presented in this document have been modified from their original values to ensure data privacy. Figure 1 An example of point list and time series data Using this technique to gather historical building data not only improves security but also acts as a preventative precaution against possible cyber threats. By avoiding direct access to real-time data, we reduce the likelihood of harmful hits on the real-time system. This precautionary measure ensures a strong and secure framework that protects the integrity and confidentiality of building data throughout our analysis operations. After preparing the data for processing using Brick ontology (Brick 2020), the following section will focus on the design architecture that accommodates the platform. PLATFORM DESIGN ARCHITECTURETHE proposed platform is a semi-automated system that takes building information in raw, non-standard formats and transforms it into a standardized version that is compatible with the Brick schema. The principal objective is to correct the data from the source. Despite these advancements, our development journey encountered several challenges: 1. Limited access to building information due to the absence of a connection to the BACnet system. 2. The variable nature of building information presents a diversity of data structures and formats. 3. Language barriers, particularly in dealing with point label data generated in Japanese language. In response to these complications, this platform evolved as a smart solution in the field of building automation. It deliberately tackles these challenges by effortlessly converting building information into Brick ontology and graph formats. This modification improves scalability in building automation and allows for a faster deployment procedure across several buildings, reducing the need for considerable human intervention by engineers. A key aspect of this platform is its ability to match point lists with the Brick schema using a classifier model trained on Artificial Intelligence (AI) within its backbone, organized in a dictionary format. This platform has three main duties that contribute to its powerful functionality: • Data Preparation: This involves translation and normalization processes that ensure that the data are formatted and aligned for further analysis. • Point Label Mapping: Using AI and a purpose-designed algorithm, the platform undertakes the crucial task of mapping point labels between the current building information and the corresponding Brick class and sub-classes. • Graph Model Generation: generating a comprehensive graph model based on Brick metadata, providing a structured representation of building information. • Graph Validation: The platform employs validation mechanisms to ensure the accuracy and integrity of the generated graph model. This platform offers the following underlying architecture to create graph (McBride 2004) from the point labels. Refer to Figure 2 for a visual representation of the top-level architecture of the platform. The subsequent section describes the building data transformation into the Brick schema model. Preparation and Dependencies: During this essential phase of our project, we systematically build up dependencies, integrate necessary libraries, and populate the system with necessary data. An important part of this phase is data translation, which converts sensor data and semantic representations of sensors and actuators from Japanese to English language. This translation is required for an accurate matching to Brick classes. To help with this procedure, we used and update a translation service with a built-in Japanese dictionary. Figure 3 illustrates an example of the dictionary module used in this project. Furthermore, a key component is the integration of the Building Metadata Ontology Interoperability Framework (BuildingMOTIF) (The National Renewable Energy Laboratory (NREL) 2023), a versatile toolkit that facilitates the creation, storage, visualization, and validation of building metadata. Delivered as an SDK with APIs, BuildingMOTIF abstracts complexities related to RDF (McBride 2004) graphs, database management, Shapes Constraint Language (SHACL) (W3C 2016) validation, and interoperability across diverse metadata schema and ontologies. This project utilizes the Brick ontology's "Nightly_1.3" version and an AI-trained data model, both loaded during the preparation phase. It is worth noting that due to security considerations and the unavailability of building layout and mechanical specifications, the proposed platform uses the point label information to construct the semi-layout in JavaScript Object Notation (JSON) format. This semi-layout, although not exceptionally precise, serves its purpose well, enabling the detection of the required number of floors and rooms for our project. For a visual representation, refer to Figure 4 that illustrate an example of the layout generated for the FTC building. Figure 2 Proposed top-level platform architecture Figure 3 Example of dictionary service Figure 4 Building semi-layout generated by point label Initialization: During this step, the extraction, sorting, and evaluation of Brick classes take precedence, serving as the foundation for further mapping and AI job processing. Here, the data structure rules for transforming point labels into graphs are thoroughly described. The fundamental data structure of RDF graph revolves around the interaction of subjects, objects, and predicates (SOP) that are linked to shape the RDF graph (McBride 2004) tuple model. This hierarchical arrangement contributes significantly to the comprehensive representation of the building. The subsequent model serves as an illustrative example, focusing on an element of an FTC building (Azbil Corporation 2024), specifically within the Heating, Ventilation and Air Conditioning (HVAC) system (Trcka et al. 2010). Figure 5 shows some examples of tuple data structure: Figure 5 Tuple data structure of targeted FTC building Figure 6 Example of the Brick class (Brick 2020) This information is compiled by extracting point labels, normalizing the data, and generating a tuple data structure for each equipment connected in the building. Tokenization and Data Processing: In this section, the point label undergoes a thorough data-cleaning process employing several algorithms to detect and extract pertinent information. The extraction includes key details, including: (1) Information about zones, floors, and areas. (2) First iteration of Brick class detection. (3) Device information. (4) Relations and connections. The organized data are then structured into key-value pairs for Brick classes and lists for device information. This structured format serves as input for the upcoming phases, specifically AI processing and detection, resulting in a comprehensive approach to data utilization in the next stage of our project. This outcome prompts us to create a semi-layout and the starting point for the introduction of equipment connection points. Mapping and AI Processing: The mapping and AI processing stages play an important role in this chain process. Mapping involves identifying similar Brick classes for each point label generated by the building data. This proves to be a challenging task due to differences in semantics, point labels, and the digital representation of data. In a 2020 study (Fierro et al. 2020), Fierro proposed a unified authoritative Brick Metadata model for buildings, intended for continuous maintenance throughout the life cycle. However, the lack of extensive building data for research limited the testing of these algorithms. In our research, the Japanese representation of building metadata serves as a valuable source of point label data for our matching purposes. In the Brick ontology, the ideal outcome is a match in the form of a key-value pair for Brick classes. For a visual reference, Figure 6 above illustrates examples of Brick classes, sub-classes, and their relations. This phase lays the support for accurate mapping and processing, overcoming the challenges associated with semantic variations in the data. Point labels are created by putting the building data through a rigorous process of translation, normalization, and cleaning to align them with Brick classes. Starting with a trained predetermined model that is organized into key-value pairs, the mapping algorithm begins. Later, an AI algorithm updates this predefined model with a model taught. To efficiently train the data and then map it with the intended building point list, we have investigated several AI techniques and frameworks to train the data such as Random Forest (Briman 2001), Active learning text-based Scrabble framework (Koh et al. 2017), and Large Language Models (LLM) framework like OpenAI (Brown et al. 2020). To validate our generated graph model, we utilized various building datasets for comparison and benchmarking. These include Ecobee dataset (Ecobee 2024), the High-Fidelity Building Emulator (HighFidelity Building Energy Data Collection 2024), the Honda Smart Home (Honda Smart Home Data Collection 2024), the Lawrence Berkeley National Laboratory Building (Building Data Genome Project 2024), among others. The condensed model is kept on the platform in dictionary format. With the integration of Brick schema, this research methodology helps the algorithm in interpreting, mapping, and classifying the intended building information. Later, we used Jaccard index (Skiena 2008) to measure the similarity between two sets of normalized trained point label and Brick point and location classes. The Jaccard index is calculated using the following equation: (1) The Jaccard index is used to estimate the best match similarity between the point and the Brick classes. Notably, the raw point data has already been normalized using an LLM-trained module. The following table shows an example of mapping results of the matching algorithm. Table 1. Brick class and point label match example Brick Class Point label Algorithm Match Temperature_Sensor SDF_65_Zone_Average_Temp Average_Zone_Air_Temperature_Sensor Occupancy_Count_Sensor SDF1_People number Occupancy_Count_Sensor Humidity_Sensor AHU_67_Indoor_Humi Humidity_Sensor Illuminance_Sensor SDF_3_WP_Sensor_Illuminance Illuminance_Sensor Auditorium VAV_6_F_Audience_Area Auditorium Not applicable Reserve__AV No Match We evaluated about 7800 labels and about 7400-point labels were matched with Brick classes. The remaining 400 points were either "Reserve" points or lacked sufficient information to match with a Brick class. Enhancements to our AI matching algorithm are ongoing but beyond the scope of this paper. RDF Graph Generation: This stage is important within the algorithm. As discussed earlier, the data have been normalized, mapped, and classified. In this phase, we established a tuple data structure for the converted building data, now referred to as point label. Although the point label contains information about the Brick class, it does not define a relationship within the building as a reference to other elements. To bridge this gap, each point label needs to be converted into a data structure of object, predicate, and subject, as illustrated here: Figure 7 Point label in Brick RDF graph format Figure 8 Thermostat template with CO2 Iterating through each point label that is accessible, assigning a matching Brick class, and constructing the relationship to turn the point label into the Brick RDF graph are all part of the loop process. The RDF graph has a tree-like shape and is widely considered the best option for knowledge graphs due to its native web syntax, which makes data sharing and exchange easier (Antanas 2022). Additionally, because of its formal semantics, meaning and structure can be easily aligned across many sources, resulting in unified perspectives and clear interpretation. The tree-like data structure of the RDF makes it suitable for web queries and allows the achievement of a linear time and space query. A graph-format data structure is attractive for storing and accessing point labels and constructing information in databases because of its advantages, especially when used with RDF. The RDF is hierarchical, making it compatible with the SPARQL (Furche et al. 2010; DuCharme 2013) and GraphQL (GraphQL 2021) query languages and making fast data retrieval possible. All in all, these benefits put RDF in a strong position for constructing information within a database architecture, as well as for deploying and querying point labels. Correction and Completion: At this point, the algorithm adds relations to a previously discovered equipment and integrates it to the main graph, the algorithm searches for air handling unit (AHU) terms (we will talk about AHU terms in the upcoming chapter) to make sure no unit is overlooked. Depending on whether the mismatched point label is produced, it will be either removed from the graph or matched with the closest matching factor and included in the main graph in this review part. Here, reasoning and inference (Brick 2020) are implemented. Inference and reasoning are processes to imply information from the Brick class and expand it to the main graph. Based on the application requirement, detailed information is added to the graph as a separate branch. This information includes super-class, inverse relation, and tagging information. Model Validation: Now the graph is generated using the point label and reviewed, and the missing element is added or removed from the graph according to their matching algorithm through the reasoning process. In this section, we check the generated graph against constraints and guidelines. The idea of graph validation is inspired by BuildingMOTIF development (NREL 2023). During the validation process, we specify the equipment to be validated using predefined templates. These templates are designed to ensure the accuracy of the generated Brick model, with the final report produced using the BuildingMOTIF library (NREL 2023). Each template delineates the required components and their dependencies, such as setpoints, temperature points, and CO2 points. An example of such a template is provided in Figure 8 (NREL 2023). Brick Model Generation: This stage finalizes the architecture by generating the Brick model. At this stage, the data taken from the FTC building point list is organized into a tuple data format and then converted into a Brick model. Building data, both dynamic and static, are transformed into a machine-readable, semantic format by this process. A format like this makes it easier to create queries for sensors and actuators, which makes it possible to retrieve data efficiently for monitoring applications. DETAILED ARCHITECTURE DESIGN The project architecture starts with the translation and parsing utility for the point label. To align the raw data point label with the triple SOP data structure, the data goes through several steps; after processing the data, it is correlated with the relevant Brick class model to produce an RDF Brick model. The architecture's block diagram is displayed in Figure 9. The mapping stage in point processing involves intensive AI processing and data normalization to align the point list with the appropriate Brick class. Subsequently, it is necessary to identify HVAC-related terms within the dataset, as these terms are important to define relationships within the list of points. HVAC term detection is carried out through input from an interactive user interface specifically designed for this purpose. For example, the following point refers to HVAC components: (1) AC: Air handling unit (AHU) (2) SDF: Smart-control-damper on the Diffuser (3) VAV: Variable air volume (4) CAV: Constant air volume. The implemented mapping algorithm analyzes the HVAC elements within the point list and determines the relationships between them. Figure 10 illustrates the defined relations and their reverse relations for AHU terms, while Figure 11 displays the interactive graphical user interface designed to capture operator input. Figure 9 The developed platform architecture Figure 10 FTC building AHU terms relation To enhance scalability and streamline debugging, the algorithm divides each equipment unit and its associated relationships into separate modules. Later, these modules are combined to generate a comprehensive model representing the building. For example, relationships like AC to CAV and AC to SDF are handled in individual subsections (Figure 11, Figure 12): • AC2CAV: AC and CAV units and relevant Brick relations. • AC2SDF: AC and SDF units and relevant Brick relations. • AC2VAV: AC and VAV units and relevant Brick relations. • CAV2SDF: CAV and SDF units and relevant Brick relations. • Point connection: Point connection to the zone or other unit equipment. • Tagging: Create relevant tags for a specific point. • Reasoning: Retrieve the parent class of the specific point. Figure 12 illustrates this structure in a scalable module, which reduces processing time when certain modules are not required. Each module can be enabled or disabled based on the application's needs. Figure 11 The front-end application for data processing Figure 12 Detailed architecture to generate Brick graph Here the processing and data normalization is concluded, and the Brick graph is generated and stored in the database for data access processing. In the next section, we will discuss the result in detail. RESULT AND DISCUSSION Following the processing and mapping of each point to the relevant Brick class, and defining the relationships between equipment units, the Brick module graph is generated and combined to represent the semantic version of the building data. It also includes sensors and actuators information. The following relationships were implemented in this module (Figure 13) while output of the developed platform which represents the Brick module of FTC building is shown here (Figure 14): Figure 13 Brick relation implemented in the platform Figure 14 Generated Brick model for FTC building In this graph, three prefixes were extracted: Brick, FTC building, and reference for the time series ID. The first point represents the tuple structure for the status of locker 536, located in the library. The status value of this locker is stored in an SQL database, which can be retrieved upon receiving a query. This point includes reasoning attributes like "On_status", "Point", and "status", and tags such as "On", "Point", and "Status". Another example, shown in Figure 15, illustrates a tuple object referring to AHU unit No. 977 (AC 977), which supplies specific CAV and SDF units as listed below. Figure 15 AHU unit has "feeds" relation to the CAV and SDF units Figure 16 AHU unit and point relation In this example Figure 16, the object created from the points represents AHU unit AC 600, which feeds the zone equipped with the temperature sensor, damper, and the points listed here. Finally, the generated Brick model provides a detailed understanding of the sensors deployed in the building, such as a WP (Workplace) sensor of the wireless cell-type HVAC system installed in the FTC building as shown in the following figure. Figure 17 shows the WP (Workplace) sensor detected by the Brick model and is part of the building automation system. Figure 17 Generated Brick model explains WP (Workplace) sensor These output examples from the generated Brick module on the platform demonstrate the relationships between equipment and units across different building sectors, highlighting the model representation's scalability. The platform was tested in several buildings and only minor adjustments were needed. The generated graph and time series data are stored in a database and can be used for data access, energy monitoring, and data visualization applications. CONCLUSION This article discussed the necessity of decreasing human labor and the scalability issues with contemporary building automation. With the help of the developed platform and Brick ontology, we suggested a scalable and seamless solution. Using historical building data, this platform creates a Brick model graph that captures and maintains the links established by the Brick ontology. Examples of this type of information include sensors, actuators, and the time-series data that goes along with them. We presented an overview of the developed architecture platform, and several examples of the generated Brick output models were illustrated and discussed in detail. The developed platform has been tested with multiple buildings, and the Brick graph data model was generated successfully. The proposed platform produced realistic graphs that reflected the operational data and the semi-layout of each building. The scalability of this solution makes it possible to automatically generate machine-readable semantic building models, greatly minimizing the manual labor needed to produce these models from scratch. Additionally, the platform creates a uniform framework for expressing building information by integrating Brick ontology, making interoperability and effective data management between diverse building automation systems possible. ACKNOWLEDGMENTS This project was developed by Azbil North America Research and Development Inc., with support from the Azbil AI Solution department. I sincerely thank everyone involved, especially Dr. Gabe Fierro, founder and lead maintainer of Brick Schema; Chosei Kaseda and Jeremy Tole from Azbil North America Research and Development, respectively for their invaluable guidance and recommendations throughout the project. REFERENCES Antoniou, G., and Harmelen, F. 2009. Web ontology language: Owl. Springer Handbook on ontologies. p. 91-110. ASHRAE'sBACnetCommittee. 2018. American Society of Heating Refrigerating and Air-Conditioning Engineers. Project Haystack and Brick Schema Collaborating to Provide Unified Data Semantic Modeling Solution. Atanas K., 2022. RDF Levels the Advantages of Labeled Property Graphs and Keeps Three Key Benefits: Standards, Semantics and Interoperability. Ontotext Url: https://www.ontotext.com/knowledgehub/. p. 1. Azbil Corporation, 2024. Azbil Showroom: Future Technology Center. Url: https://www.azbil.com/corporate/pr/showroom/ftc/index.html. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... and Amodei, D. (2020). Language models are fewshot learners. Advances in Neural Information Processing Systems, 33, 1877-1901. Bhattacharya, A., Fierro, G., Gao, J., Gluck, J., Hong, D., Johansen, A., Koh, J., Ploennigs, J., and Agarwal, Y. 2016. Brick: Towards a unified metadata schema for buildings. Proceedings of the 3rd ACM International Conference on Systems for Energy-Efficient Built Environments: 41-50. Breiman, L. 2001. Random Forests. Machine Learning, 45(1), 5-32. https://doi.org/10.1023/A:1010933404324. Brick. 2020-2021. Brick OntologyDocumentation-Brick Consortium, Inc. California, USA. Brick. Bushby, S.T., and Newman, H.M. 2002. BACnet today. ASHRAE journal. Vol. 10:10-18. Castro, G.R., Lefrançois, M., Poveda-Villalon, M. and Daniele, L. 2023. The ETSI SAREF ontology for smart applications: a long path of development and evolution. Energy Smart Appliances: Applications, Methodologies, and Challenges. Wiley Online Library. 183-215. DuCharme, B., 2013. Learning SPARQL: querying and updating with SPARQL 1.1. O'Reilly Media, Inc. Ecobee Data Collection, 2024. Building Data Genome Project. Url: https://bbd.labworks.org/ds/bbd/ecobee. Accessed: 202408-30. Fierro, G., Koh, J. Nagare, Sh., Zang, X., Agarwal, Y., Gupta, R.K. and Culler, D.E. 2020. Formalizing tag-based metadata with the brick ontology. Frontiers in Built Environment, Frontiers Media SA. Vol. 6- p.558034. Fierro, G., Saha, A., Shapinsky, T., Steen, M., and Eslinger, H. 2022. Application-driven creation of building metadata models with semantic sufficiency. Proceedings of the 9th ACM International Conference on Systems for EnergyEfficient Buildings, Cities, and Transportation. p. 228-237. Fierro, G., Koh, J., Agarwal, Y., Gupta, R.K., and Culler, D.E. 2019. Beyond a house of sticks: Formalizing metadata tags with brick. Proceedings of the 6th ACM international conference on systems for energy-efficient buildings, cities, and transportation. P. 125-134. Fierro, G., Prakash, A.K., Mosiman, C., Pritoni, M., Raftery, P., Wetter, M. and Culler, D.E. 2020. Shepherding metadata through the building lifecycle. Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation. P. 70-79. Furche, T., Weinzierl, A., Bry, F., Virgilio, R., Giunchiglia, F., and Tanca, L. 2010. Labeling RDF Graphs for Linear Time and Space Querying. Semantic Web Information Management: A Model-Based Perspective. Springer Berlin Heidelberg. p. 309-339. ISBN: 978-3-642-04329-1. Doi. 10.1007/978-3-642-04329-1_14. Garrido-Hidalgo, C., Furst, J., Cheng, B., Roda-Sanchez, L., Olivares, T., and Kovacs, E. 2022. Interlinking the Brick Schema with Building Domain Ontologies. The 20th ACM Conference on Embedded Networked Sensor. P. 1026-1030. GraphQL Foundation. 2021. GraphQL: A Query Language for APIs. Url: https://graphql.org/. Hammar, K., Wallin, E.O., Karlberg, P., and Halleberg, D. 2019. The realestatecore ontology. The Semantic Web--ISWC 2019, 18th International Semantic Web Conference, Auckland, New Zealand, October 26--30, 2019, Springer Proceedings, Part II 18:130-145. Haller, A., Janowicz, K., Cox, S., Phuoc, D.L., Taylor, K., and Lefrançois, M. 2017. Semantic Sensor Network Ontology. Semantic Web W3. Vol. OGC 16-079. Haystack team. 2018. Project Haystack. Url: http://project-haystack.org/. High-Fidelity Building Energy Data Collection. Building Data Genome Project. 2024. Url: https://bbd.labworks.org/ds/acpr/hfbe. Accessed: 2024-08-30. Honda Smart Home Data Collection. 2024. Building Data Genome Project. Url: https://bbd.labworks.org/ds/bbd/hshus. Accessed: 2024-08-30. Koh, J., Sengupta, D., McAuley, J., and Gupta, R., Balaji, B., and Agarwal, Y. Scrabble: converting unstructured metadata into brick for many buildings. Proceedings of the 4th ACM International Conference on Systems for Energy-Efficient Built Environments. p.1-2. LBNL Building 59 Energy Data Collection. 2024. Building Data Genome Project. Url: https://bbd.labworks.org/ds/bbd/lbnlbldg59. Accessed: 2024-08-30. Lee, S., Cui, B., Bhandari, M.S., and Im, P. 2023. Visual Brick model authoring tool for building metadata standardization. Elsevier Automation in Construction.Vol. 156:105-122 McBride, B., 2004. The resource description framework (RDF) and its vocabulary description language RDFS. Springer Handbook on ontologies. p. 51-65. Quinn, C., and McArthur, J.J. 2022. Comparison of Brick and Project Haystack to Support Smart Building Applications. arXiv preprint . Rasmussen, M.H., Lefrançois, M., Schneider, G.F., and Pauwels, P. 2021. BOT: The building topology ontology of the W3C linked building data group. Semantic Web, IOS Press. 12(1):143-161. Rosen, R., Von, W., Lo, G., Bettenhausen, G., and Kurt, D. 2015. About the importance of autonomy and digital twins for the future of manufacturing. Ifac-papersonline Elsevier 48(3):567-572. Skiena, S. S. 2008. The Algorithm Design Manual. Springer 2nd Edition. ISBN-13. 978-1849967204. The National Renewable Energy Laboratory (NREL), 2023. Building Metadata OnTology Interoperability Framework (BuildingMOTIF). Url: https://github.com/NREL/BuildingMOTIF. Trcka, M., and Hensen, M.2010. Overview of HVAC system simulation. Elsevier Automation in construction.19(2):93-99. Vittori, F., Tan, Ch.F., Pisello, A.L., Chong, A., and Miller, C. 2023. BIM-to-BRICK: Using graph modeling for IoT-BMS and spatial semantic data interoperability within digital data models of buildings. arXiv preprint . Wang, S., and Xie, J. Integrating Building Management System and Facilities Management on the Internet. Automation in construction. 11(6):707-715. W3C (MIT, ERCIM, Keio, Beihang), 2017. Shapes Constraint Language (SHACL). Url: https://www.w3.org/TR/shacl/.
|
2509.16258
|
Non-Commutation Chains in Pre- and Post-Selection Paradoxes
This paper is part of a dissertation submitted in fulfillment of the requirements of a degree
Master of Science in Mathematics and Foundations of Computer Science
University of Oxford, Trinity Term 2024
Ouissal Moumou
Mathematical Institute
University of Oxford
ouissalmoumou2@gmail.com
September 23, 2025
Abstract
Peculiar measurements can be obtained on systems that undergo both pre- and post-
selection. We prove a conjecture from [1] on logical Pre- and Post-Selection (PPS) paradoxes
for a restricted case. We prove that all of these paradoxes admit non-commutation chains.
We also relate this to the theory of causal balance, recently introduced in [1], and show how
the theory blocks such paradoxes.
Contents
1
Introduction
2
1.1
The Quantum Pigeonhole Principle Paradox . . . . . . . . . . . . . . . . . . . . .
2
2
Pre- and Post-Selection Paradoxes: A Formal Definition
3
3
PPS Paradoxes Admit Non-Commutation Chains
4
4
Connection to the Theory of Causal Balance
7
5
Conclusion
8
A Proof of Lemma 3.1
9
B Proof of Corollary 3.1
9
C Event Spaces
10
D Operator Algebras and Relevant Properties
10
E The Theory of Causal Balance
11
E.1
Interference Influences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12
E.2
Causal Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13
E.3
From Algebras to Unitary Circuits
. . . . . . . . . . . . . . . . . . . . . . . . . .
14
1
arXiv:2509.16258v1 [quant-ph] 18 Sep 2025
1
INTRODUCTION
2
1
Introduction
Suppose a quantum system is pre-selected in the state |ψ⟩at time t0 and post-selected in the
state ⟨ϕ| at time t2. Suppose also that the system undergoes an intermediate projective mea-
surement {P} at time t1. One could calculate the probability of the intermediate measurement
by conditioning both on the pre- and post-selection, this is the main spirit behind the ABL rule
introduced in [2].
For a degenerate operator C at time t, the probability of obtaining an outcome cn is specified
by the ABL rule as
P(C) =
|⟨ϕ|PC=cn|ψ⟩|2
P
i |⟨ϕ|PC=ci|ψ⟩|2 ,
(1)
where PC=ci = P
i |Φ⟩⟨Φ| is the projection on the states with eigenvalue ci.
Peculiar situations arise for specific choices of |ψ⟩, {P}, and ⟨ϕ| in which the probabilities
obtained are non-intuitive. These situations are referred to as pre- and post-selection paradoxes.
[1] posited that these paradoxes are absent within the framework of causal balance theory, as
this theoretical framework inherently avoids what we term “non-commutation chains” in the
present work. A non-commutation chain, which we define rigorously in Section 3, arises when
the intermediate projection operator in a pre- and post-selection scenario fails to commute with
both the pre-selected and post-selected quantum states. We establish a proof for a restricted
subset of Ormrod’s conjecture from [1], demonstrating that all paradoxes within this constrained
class exhibit a systematic occurrence of non-commutation chains.
We start with the quantum pigeonhole principle as an exemplar of these paradoxes to provide in-
tuitive understanding, followed by a formal definition of pre- and post-selection phenomena that
establishes the theoretical foundation for our subsequent proof. We conclude with a discussion
of the implications and directions for future research.
1.1
The Quantum Pigeonhole Principle Paradox
[3] introduced the quantum pigeonhole principle in which 3 particles are prepared in a superposi-
tion state of being in 2 different boxes. Boxes 1 and 2 are represented by |0⟩and |1⟩respectively.
A superposition of being in both boxes 1 and 1 would then be represented more conveniently
with |+⟩=
1
√
2(|0⟩+ |1⟩). Therefore, the states corresponding to the pre- and post-selection
states are
|ψ⟩= |+⟩1|+⟩2|+⟩3,
⟨ϕ| = ⟨i|1⟨i|2⟨i|3,
respectively, where ⟨i| =
1
√
2(⟨0| + i⟨1|).
We aim to check whether two particles are in the same box. Since the three particles are in the
same initial and final states, we will only demonstrate the paradox for particles 1 and 2. By
symmetry, the same applies for particles 2 and 3 and particles 1 and 3. Particles 1 and 2 being
in the same box means that they are either both in box 1, or both in box 2 which we represent
with projectors P11 = |0⟩1|0⟩2⟨0|1⟨0|2 and P22 = |1⟩1|1⟩2⟨1|1⟨1|2 respectively. On the other hand,
particles 1 and 2 being in separate boxes means particle 1 is in box 1 and particle 2 is in box 2, or
vice versa, which we represent with projectors P12 = |0⟩1|1⟩2⟨1|1⟨0|2 and P21 = |1⟩1|0⟩2⟨1|1⟨0|2
respectively.
2
PRE- AND POST-SELECTION PARADOXES: A FORMAL DEFINITION
3
Consequently, the two particles under consideration being in the same box corresponds to the
projector Psame = P11 + P22. Similarly, the two particles being in different boxes correspond
to the projector Pdiff = P12 + P21. Because the value of ⟨ϕ|Psame |ψ⟩turns out to be zero, it
follows that the probability of finding particles 1 and 2 in the same box using the ABL rule is
also zero. Considering the symmetry in our example with the other particles, this means that
the probability of finding particles 2 and 3 in the same box in the intermediate measurement is
also zero, and so is the probability of finding both particles 1 and 3 in the same box. One then
is prompted to conclude that it is with certainty that we observe that no two particles can be
found in the same box. However, recalling the very basic yet powerful pigeonhole principle, it is
quite peculiar to have two boxes and three particles. Yet, no two particles share the same box!
2
Pre- and Post-Selection Paradoxes: A Formal Definition
Below is a formal definition of a pre- and post-selection paradox inspired from [4]. Consider a
Hilbert space, a choice of pre-selection |ψ⟩and post-selection ⟨ϕ|, and let P be a finite set of
projectors closed under complements composed of the projectors related to the pre- and post-
selection system {P, I −P} that are uniquely determined by the projectors P. We only consider
the cases in which the ABL probabilities corresponding to using the projectors {P, I −P} yield
0 or 1 values (hence the “logical” part).
We would like to be able to generate a partial Boolean algebra P′ from P using the following
for any P and Q in P′:
• If P, Q ∈P′ and PQ = QP, then PQ ∈P′,
• If P ∈P′, then I −P ∈P′.
One intuitive way of understanding the new partial Boolean algebra is by considering projectors
corresponding to propositions, and the extension to the partial Boolean algebra is our way of
wanting to also take disjunctions and conjunctions of the propositions at hand (we originally
only had the propositions and their complements given to us by the experiments). Suppose
we want to extend the probability function f given to us by the ABL rule from P to P′. In
other words, we want to find a probability function on P′ that recovers the probability function
defined on P, such that the following algebraic conditions are satisfied:
(i) For all P ∈P′, 0 ≤f(P) ≤1,
(ii) f(I) = 1 and f(0) = 0,
(iii) For all P, Q ∈P′, f(P + Q −PQ) = f(P) + f(Q) −f(QP).
Definition 2.1. (PPS Paradox) Assuming the above setting, we say that the ABL predictions
for P form a logical PPS paradox when we fail to find a function that fails one or more of the
algebraic conditions above.
Applying this to the case of the pigeonhole principle paradox: the projectors corresponding to
two particles being in different boxes Pdiff1,2, Pdiff1,2, and Pdiff1,2. We know that: f(Pdiff1,2) = 1,
f(Pdiff1,2) = 1, and f(Pdiff1,2) = 1.
We know that p(e) = 1 and p(f) = 1 =⇒p(e ∧f) = 1. This can be extended in our case of
three events, and knowing that the probabilities for these events are all 1, we can conclude that
f(Pdiff1,2Pdiff2,3Pdiff1,3) = 1.
3
PPS PARADOXES ADMIT NON-COMMUTATION CHAINS
4
However, note that: Pdiff1,2Pdiff2,3Pdiff1,3 = 0. Therefore, f(0) = 1, which violates condition (i)
from Definition 2.1. Thus, the quantum pigeonhole principle is just another instance of logical
PPS paradoxes.
3
PPS Paradoxes Admit Non-Commutation Chains
We start with the following definition of a non-commutation chain:
Definition 3.1. (Non-Commutation Chain) A projector P has a non-commutation chain with
two other projectors Q1 and Q2 iff Q1P ̸= PQ1 and Q2P ̸= PQ2.
The following lemma will be useful later and is proved in Section B:
Lemma 3.1. In the context of a logical pre- and post-selection paradox with pre- and post-
selection projectors Pψ = |ψ⟩⟨ψ| and Pϕ = |ϕ⟩⟨ϕ|, respectively, a projector P corresponding to
an intermediate measurement at time t1 < t < t2 does not have a non-commutation chain with
Pψ and Pϕ iff
• P|ψ⟩= |ψ⟩, and we say that P and Pψ idempotently commute, or
• P|ψ⟩= 0, and we say that P and Pψ orthogonally commute, or
• P|ϕ⟩= |ϕ⟩, and we say that P and Pϕ idempotently commute, or
• P|ϕ⟩= 0, and we say that P and Pϕ orthogonally commute.
Corollary 3.1. Two projectors P and Pψ idempotently commute (in the sense of lemma 3.1)
iff I −P and Pψ orthogonally commute.
This corollary says the same thing about the relationship between P and Pϕ if they idempotently,
or orthogonally commute.
Theorem 3.1. (Non-Commutation Chains for Logical PPS Paradoxes) Consider a pre- and
post-selection scenario with corresponding intermediate measurement sets P and P′ per def-
inition 2.1 and pre- and post-selection rank-1 projectors Pψ = |ψ⟩⟨ψ| and Pϕ = |ϕ⟩⟨ϕ| such
that
• |ψ⟩and ⟨ϕ| are the pre- and post-selection states,
• All the projectors in P commute,
• P′ is a finite set.
Every logical PPS paradox with the specifications above has at least one projector in P that
forms a non-commutation chain with Pψ and Pϕ.
Proof. We prove the contrapositive of this statement. We assume that we have a scenario with
all the specifications above where every projector in P does not form a non-commutation chain
with Pψ and Pϕ, and we prove that we never get a logical PPS paradox.
Consider the projectors Qj that can be written as the product of n projectors in P:
Qj = ˜P1... ˜
Pn,
(2)
such that ˜Pi = Pi ∈P or ˜Pi = I−Pi ∈P. In other words, take all the n projectors corresponding
to intermediate measurements, give each projector an index from 1 to n. In each index, either
put a projector or its complement, then take the product of all the projectors altogether. One
can see that there are 2n possibilities for this arrangement. The Qjs are all orthogonal to each
other i.e. for any i and j such that k ̸= j: QkQj = δkjQj and P
j Qj = I. Note that every Qj is
3
PPS PARADOXES ADMIT NON-COMMUTATION CHAINS
5
a projector because it is made of the product of projectors from P, all of which commute with
each other.
Let Ωbe the sample space containing all the possible Qjs, i.e Ω= {Qj}j. As shown in Section C,
we can take E = 2Ωas an event space. However, as also shown in Section C, there is an equivalent
event space E that we obtain by summing distinct elements in Ωsuch that
E =
X
j∈S
Qj/S ∈2Ω
.
(3)
As an application of what is shown in Section C, Poweset(Ω) and E are not only equivalent, but
E also have a Boolean algebra structure. The equivalence between Poweset(Ω) and E is defined
as for two commuting projectors P and Q in E:
• PQ = P
j∈SP ∩SQ Qj,
• P + Q −PQ = P
j∈SP ∪SQ Qj,
• P = P
j∈SP Qj.
Before we move to the first step in our proof, we will show that P ⊆E. Let Pk be any projector
in P, and let’s prove that Pk ∈E. We can write Pk as
Pk =
X
j
˜P1...Pk... ˜
Pn =
X
j∈SPk
Qj ∈E.
Therefore, P ⊆E.
As a first step in our proof, we will show that P′ = E.
For the first direction =⇒, we show that P′ ⊆E. We know that P ⊆E, and we know that by
Definition 2.1, P′ is the smallest partial Boolean algebra from P. Knowing that E is a Boolean
algebra that contains P, then it has to contain the smallest one formed by P, therefore, it
contains P′, and thus: P′ ⊆E.
For the opposite direction ⇐= , we show that E ⊆P′. Let P be any projector in E, P can be
one of two cases:
1. P = Qj for some Qj = ˜P1... ˜
Pn which means P can be written as a conjunction of projectors
in P,
2. P = P
j∈S Qj which means P can be written as a disjunction of elements in Ω, more
specifically, as a disjunction of conjunction if elements of P.
In the first case, we know that any conjunction of elements in P is in P′ since all the projectors
in P commute, and by definition, P′ contains all the products of projectors in P that commute.
Now that we know that any Qj is in P′, if P is a conjunction of elements in Ωi.e P = P
j∈S Qj,
then it has to be in P′ since by definition, P′ is closed under conjunction, and so it contains any
possible conjunction of two projectors in P′ . Therefore, E ⊆P′. Thus E = P′.e pre-selection
and post-selection respectively. <—- typo here??
We are at the main part of our proof of the theorem, the one involving defining a probability
distribution on P′. Consider the function f defined on P′ such that for any projector P in P′,
f(P) = Tr(PψPPϕ)
Tr(PψPϕ) .
(4)
3
PPS PARADOXES ADMIT NON-COMMUTATION CHAINS
6
We will show that this function satisfies all the conditions for being a probability measure on
P′ as per Definition 2.1.
For condition (ii) from the definition, we have that
f(I) = Tr(PψIPϕ)
Tr(PψPϕ) = 1,
f(0) = Tr(Pψ0Pϕ)
Tr(PψPϕ) = 0.
Condition (iii) follows directly from the linearity of the trace function. The trace operation is
linear, so
f(P + Q −PQ) = Tr(Pψ(P + Q −PQ)Pϕ)
Tr(PψPϕ)
= Tr((PψPPϕ) + (PψQPϕ) −(PψPQPϕ))
Tr(PψPϕ)
= f(P) + f(Q) −f(PQ).
For condition (i), show that for any projector P in P′, 0 <= f(P) <= 1. Since P′ = E, any
projector R in P′ can be written as a product of projectors in P such that: R = Qj = ˜P1... ˜
Pn
or as the sum of several Qi as shown above. Now,
f(Qj) = Tr(Pψ ˜P1... ˜
PnPϕ)
Tr(PψPϕ)
.
Since all the elements in P commute, we can arrange the elements in any product ˜P1... ˜
Pk
˜
Pk+1... ˜
Pn
such that the first k projectors are the ones that commute with Pψ, and that the rest of the
projectors all commute with Pϕ. Now, using Lemma 3.1, we know that for the first k projectors
in Qi,
Pψ ˜Pi = |ψ⟩⟨ψ|P = Pψ ˜Pi =
(
Pψ
if ˜Pi and Pψ idempotently commute,
0
if ˜Pi and Pψ orthogonally commute.
One can see that applying this repeatedly for all the projectors P1 to Pk can only result in Pψ
if all the projects tend not to be orthogonal with Pψ. In the opposite case, the series might
collapse to 0.
Similarly, for the rest of the projectors, we have ˜PiPϕ = Pϕ or ˜PiPϕ = 0. Now, coming back to
f(Qj), it can only be 1 if all the projectors happen to commute non-orthogonally with either
Pψ or Pϕ.
Now, we analyze the second case in which elements of E can be written as sums of distinct
Qj. The summands in this case are distinct. In other words, each two summands are different
by at least one ˜Pi (in fact, each two summands are orthogonal). Moreover, recalling Corollary
3.1 we know that in our non-commutation chain scenario, if a certain projector Q commutes
with Pψ commute non-orthogonally, then its complement I −Q commutes orthogonally with
Pψ, and vice versa. The same can be said about Q and Pϕ if Q commutes with Pϕ. This means
that considering all possible Qj configurations (by configuration here we mean, different choices
for ˜Pi), there is only one single configuration of ˜Pi where all the projectors non-orthogonally
commute with either Pψ or Pϕ, only and only in this single case would f be 1. In all the others it
will be 0. Knowing that all the summands are distinct, only one configuration equalling 1 means
that any sum will always be 1 or 0. Therefore, we have just shown an even stronger claim than
the one required by condition (i), that for any projector P in P′, f(P) = 0 or f(P) = 1.
4
CONNECTION TO THE THEORY OF CAUSAL BALANCE
7
4
Connection to the Theory of Causal Balance
The theorem proved in the preceding section draws from the theory of causal balance. The
proof approach can be extended to establish the conjecture proposed in [1], namely that the
theory of causal balance blocks both pre-logical and post-logical paradoxes. We demonstrate
this connection in the following analysis. The requisite background on causal balance theory is
provided in Section E.
We recall Theorem 4 in [1].
Theorem 4.1. In a circuit that models a causal structure, the only interference influences
allowed are of the form
n
P e′
m
Ain
m
o
→
n
P
e′
k
Aout
k
o
,
such that m ≤k.
Consider the following unitary circuit:
U3
U2
U1
{P e1
Ain
1 }
{P e′
1
Aout
1 }
{P e2
Ain
2 }
{P e′
2
Aout
2 }
{P e3
Ain
3 }
{P e′
3
Aout
3 }
A
B
C
Theorem 4.1 dictates that there can never be an influence from {P e′
2
Aout
2 } to {P e1
Ain
1 } for two reasons:
1. It is an influence from an “out” projector to an “in” projector.
2. It is an influence from a projector that is higher up in the circuit to a projector that is
lower up in the circuit. Interference influences are equivalent to commutations as shown
in theorem 3.1.
One of the patterns that are immediately eliminated by Theorem 4.1 is non-commutation chains.
Given the assumptions in Definition 3.1, every PPS paradox admits a non-commutation chain,
the absence of which is guaranteed by the theory of causal balance. Therefore, one could say
that the theory of causal balance “blocks” PPS paradoxes. In other words, PPS paradoxes, never
occur under the framework of the theory of causal balance.
Intriguingly, [1] demonstrated that any phenomenon from standard quantum theory can be
reproduced within the framework of the theory of causal balance.
This raises a compelling
question: given our analysis showing that the theory prevents these paradoxes (at least in
REFERENCES
8
the specific case examined in this paper), how might one model such paradoxes within this
framework? We leave this investigation for future work.
5
Conclusion
This paper establishes that logical PPS paradoxes in the restricted case where all projectors in
P commute and where P′ is finite necessarily admit non-commutation chains. While this result
provides important structural insight into the nature of these paradoxes, several theoretical
limitations warrant further investigation.
The finite cardinality constraint on P′ represents a significant restriction, as P′ can in principle
be infinite. Whether our theorem extends to the infinite case remains an open question, and
failure to do so would reveal a fundamental relationship between the cardinality of P′ and the
structural properties of these paradoxes. Additionally, our analysis assumes commutativity of
all projectors in P, though it remains unclear whether scenarios with non-commuting projectors
are physically realizable in logical PPS frameworks.
Our findings also bear on the relationship between logical PPS paradoxes and the theory of causal
balance. Given that [1] demonstrated the theory’s ability to reproduce all standard quantum
phenomena, yet our analysis shows these particular paradoxes admit non-commutation chains
that should be blocked by causal constraints, an interesting question emerges regarding how such
paradoxes might be modeled within this framework. We conjecture that circuit representations
of PPS paradoxes (in the context of the theory of causal balance), which necessarily incorporate
unitary interactions, will exhibit richer causal structures than the simple sequential projector
arrangements. This suggests that modeling PPS paradoxes within the theory of causal balance
may reveal fundamentally different structural properties than those apparent in the standard
formulation, representing an important direction for future work.
Acknowledgments
I would like to thank my supervisors Dr. Nick Ormrod and Professor Jonathan Barret. Endless
thanks to my friends and family for their support. Thanks to the Optiver Foundation Scholarship
for financially supporting my MSc at the University of Oxford.
References
[1] Nick Ormrod and Jonathan Barrett. Quantum influences and event relativity. arXiv preprint
arXiv:2401.18005, 2024.
[2] Yakir Aharonov, Peter G Bergmann, and Joel L Lebowitz. Time symmetry in the quantum
process of measurement. Physical Review, 134(6B):B1410, 1964.
[3] Yakir Aharonov, Fabrizio Colombo, S Popescu, Irene Sabadini, Daniele C Struppa, and Jeff
Tollaksen. The quantum pigeonhole principle and the nature of quantum correlations. arXiv
preprint arXiv:1407.3194, 2014.
[4] Matthew F Pusey and Matthew S Leifer. Logical pre-and post-selection paradoxes are proofs
of contextuality. arXiv preprint arXiv:1506.07850, 2015.
[5] Nick Ormrod. Quantum Theory: Causation and Interpretation. Phd thesis, University of
Oxford, Trinity 2024.
B
PROOF OF COROLLARY 3.1
9
[6] Kenneth R. Davidson. C*-Algebras by Example. American Mathematical Society, 1996.
A
Proof of Lemma 3.1
Proof. For the ⇐= direction, we show below what each of the cases implies about commuting
with either Pψ or Pϕ.
• If P|ψ⟩= |ψ⟩, then,
PPψ = P|ψ⟩⟨ψ| =⇒PPψ = |ψ⟩⟨ψ|.
we know that ⟨ψ| = ⟨ψ|P. We deduce that PPψ = |ψ⟩⟨ψ|P = PψP. Therefore P and Pψ
commute.
• If P|ψ⟩= 0, then
PPψ = P|ψ⟩⟨ψ| =⇒PPψ = 0.
Now, for the other side, we have PψP = |ψ⟩⟨ψ|P.
We know from proposition 2 that
because P|ψ⟩= 0, ⟨ψ|P = 0. Therefore, PψP = 0 = PPψ, and P and Pψ commute.
We can prove the other two cases involving Pϕ and P with a similar argument. We can see that
in all the cases, we can deduce the commutation of P with either Pψ or with Pϕ.
For the
=⇒
direction, we assume that P and Pψ commute or P and Pϕ commute, and we
show that either P and Pψ orthogonally commute or idempotently commute, or that P and Pϕ
orthogonally commute or idempotently commute.
Case 1: P and Pψ commute
Two projectors commuting means that they have a shared basis. Let |e1⟩, |e2⟩, ..., |en⟩be the
shared basis of both projectors.
Now, since Pψ = |ψ⟩⟨ψ| is a rank-1 projector, there is a k ∈{1, ..., n} such that: Pψ = |ek⟩⟨ek|.
Considering that P is also a rank-1 projector, we have: P = |ej⟩⟨ej| for some j ∈{1, 2, ..., n}.
Therefore, P|ψ⟩= |ej⟩⟨ej||ek⟩. Now, since |ej⟩and |ek⟩are basis states, there are two possibili-
ties:
• k = j, in which case ⟨ej||ek⟩= 1 =⇒P|ψ⟩= |ej⟩⟨ej||ek⟩= |ej⟩= |ek⟩= |ψ⟩,
• k ̸= j, in which case ⟨ej||ek⟩= 0 =⇒P|ψ⟩= |ej⟩⟨ej||ek⟩= 0.
Therefore, P|ψ⟩= |ψ⟩or P|ψ⟩= 0.
Case 2: P and Pϕ commute
We use a similar argument to the one used in the previous case, and we deduce that we obtain
one of the following cases: P|ϕ⟩= |ϕ⟩or P|ϕ⟩= 0.
More intuitively, if P and Pψ commute, since we are dealing with rank-1 projectors, one can try
to imagine the subspaces onto which both P and Pψ project. Their corresponding subspaces
will either have no overlap (the orthogonal case), or one projector’s subspace will contain the
other projector’s subspace (the idempotent case).
B
Proof of Corollary 3.1
Proof. (=⇒) We assume that P and Pψ idempotently commute. P|ψ⟩= |ψ⟩=⇒(I −P)|ψ⟩=
|ψ⟩−P|ψ⟩=⇒(I −P)|ψ⟩= |ψ⟩−|ψ⟩= 0. Therefore I −P and Pψ orthogonally commute.
D
OPERATOR ALGEBRAS AND RELEVANT PROPERTIES
10
(⇐=) We assume that I −P and Pψ orthogonally commute. (I −P)|ψ⟩= 0 =⇒(I −P)|ψ⟩=
|ψ⟩−P|ψ⟩= 0 =⇒P|ψ⟩= |ψ⟩. Therefore P and Pψ idempotently commute.
C
Event Spaces
Usually, when we discuss probabilities, we talk about the sample space, which is the set of the
possible outcomes of an event. However, if we want to further represent events that are related
to the sample space, one usually has to appeal to an event space, which is usually denoted by
the powerset of the sample space. In some of the later sections, we will encounter situations
where we want to calculate the probability of the occurrence of a certain event or a collection of
events. Let Ωbe a sample space, and let P(ω) be the associated event space. As shown in [5], a
probability measure p : P(Ω) →[0, 1] is a function from the event space P(Ω) to [0, 1] such that
• p(Ω) = 1,
• For any two events e1, e2 ∈P(Ω): p(e1 ∪e2) = p(e1) + p(e2) −p(e1 ∩e2).
In standard quantum theory, we use projectors to represent measurement outcomes.
More
precisely, we represent measurement by an orthogonal and complete set of n projectors Ω=
{P i}n
i=1. By orthogonal, we mean that all the projectors are pairwise orthogonal P iP j = 0
for i ̸= j, and by complete we mean that all the projectors in the set Ωsum to the identity
Pn
i=1 Pi = 1 [5].
Now, since Ωis the sample space for the measurement, it is a good guess that the event space for
the measurement would be the powerset of the sample space Ω. However, there is a more conve-
nient way to represent the event space of a measurement that has a one-to-one correspondence
with the powerset of the event space. Instead of taking the power set of projectors, consider the
projector Q obtained by summing the projectors in a set S in P(Ω) such that: Q = P
P∈S P.
Now, let EΩbe the set of projectors obtained using the latter for all the sets S in P(Ω) . Now,
observe that for any sets S1 and S2 in EΩ, we have: QS1 ̸= QS2, which means that there is a
one-to-one correspondence between the elements of P(Ω) and EΩ[5].
Moreover, we can easily check that there is an equivalence between
• The complement of a set S in P(Ω) and I −QS in EΩ,
• Disjunctions S1 ∪S2 in P(Ω) and QS1 + QS2 −QS1QS2 in EΩ,
• Conjunctions S1 ∩S2 in P(Ω) and QS1QS2 in EΩ.
This means that on top of the one-to-one correspondence, EΩmaintains the logical structure of
events.
D
Operator Algebras and Relevant Properties
We present the following mathematical background from [5].
Definition D.1. An algebra of operators is a set of linear vectors on the vectors of a Hilbert
space with a structure such that for two operators M and N in an algebra X,
• M ∈X =⇒cM ∈X,
• M ∈X =⇒M † ∈X,
• M, N ∈X =⇒MN ∈X,
E
THE THEORY OF CAUSAL BALANCE
11
• M, N ∈X =⇒M + N ∈X,
• I ∈X and M ∈X =⇒IM = M = MI.
An example of an algebra would be Op(H), the set of all operators in a Hilbert space H.
However, Op(H) is not the only algebra one can find in a Hilbert space, other subsets of Op(H)
can construct an algebra. In this paper, when we use the term “algebras"to mean operator
algebras per definition D.1.
In this dissertation, we need to distinguish between two different but equivalent types of alge-
bras: Schrödinger algebras and Heisenberg algebras. Let U : A ⊗B →C ⊗D be a unitary
transformation from systems A and B to systems C and D. Consider algebras A ∈Op(HA) and
D ∈Op(HD). As shown in [5], there is an equivalence between the Schrödinger and Heisenberg
algebras such that
MD ∈A ⇐⇒
˜
MA := MA ⊗IB ∈˜
A,
MD ∈D ⇐⇒
˜
MD := U−1(IC ⊗MD)U ∈˜D.
We recall this very important result about algebras on Hilbert spaces from quantum information
from [6].
Proposition D.1. Let H be a Hilbert space, and let X ∈Op(H) be an algebra. there is some
decomposition of H into a direct sum of tensor products
H = ⊕n
i=1Hi
L ⊗Hi
R
such that
X ∈X ⇐⇒M = ⊕n
i=1Mi
L ⊗Ii
R.
Definition D.2. (Commutant and Commuting Center)
Let H be a finite-dimensional Hilbert space. Let X be an algebra such that X ∈Op(H). We
call X ′, the set of all operators in X that commute with every operator in X the commutant of
algebra X. The commuting center of an algebra Z(X) = X ∩X ′ is the set of the operators in
X that commute with all the operators within X. Z(X) is a commutative algebra. Moreover,
Z(X) has the form
M ∈X ⇐⇒M = Pn
i=1 ciπi for some {ci}n
i=1,
such that πi projects onto a subspace Hi
L ⊗Hi
R. As discussed in [5], as a consequence of the
above,
M ∈Z(X) is a projector ⇐⇒M ∈E,
such that EΩis the event space from the sample space Ω= {πi}n
i=1 (see Section C for more on
sample spaces and events spaces). In other words: EΩ= Proj ∩Z(X).
E
The Theory of Causal Balance
Excluding the following definition of unitary circuits, the reader can choose to read this chapter
in two different ways: either by starting with the section on unitary circuits, and then coming
back to the beginning of the chapter for more detail, or by reading it in the intended order by
building up from the fundamental idea of the theory. This chapter has been written with both
approaches in mind.
E
THE THEORY OF CAUSAL BALANCE
12
Definition E.1. A unitary circuit is a circuit made of wires representing systems, and boxes
representing unitary transformations. The following is an example of a unitary circuit from
A ⊗B to C ⊗D:
U
A
B
C
D
Ormord and Barrett introduced the theory of causal balance in early 2024. It is considered a
conceptual shift from previous theories since causation is no longer merely seen as this connection
between events, but a fundamental framework out of which events emerge.
To present the theory of causal balance in a single chapter, we start by defining interference
influences, then discuss operator algebras and how they influence each other. Next, we define
causal structure as a directed graph of operator algebras. We will no longer be talking about
events emerging out of causation, but rather the emergence of events on a given algebra relative to
the set of algebras that include said algebra. One is thus inclined to inquire into the mathematical
nature of these events, and the response mirrors that found in the consistent histories formalism:
projector decompositions. We will see that the causal structure of this theory restricts the type of
influences between projector decompositions, which can also be expressed in how these projector
decompositions commute.
In this theory, events don’t emerge from causation in the usual way. Instead, events emerge on a
given algebra relative to other algebras that contain it. This raises the question: what are these
events mathematically? The answer, following the consistent histories approach, is projector
decompositions. The causal structure limits how projector decompositions can influence each
other, which we can see in how they commute.
Before concluding this chapter, we will demonstrate how the causal structure uniquely selects a
set of projector decompositions, which will precisely correspond to a consistent set of histories.
Given that the theory of causal balance is inherently stochastic, we will proceed to elucidate
how probabilities are defined within this framework. The chapter will conclude with stating
the proposition from [1] that the theory of causal balance can reproduce any phenomenon from
standard quantum theory.
E.1
Interference Influences
We promised that we will show how the theory of causal balance allows us to obtain a unique
consistent set of histories. For now, it is sufficient to convince ourselves that we want a unique
set of histories that appeals to a certain causal structure. To understand this structure, we need
to understand one of the building blocks of the theory: interference influences.
In the quantum context, we say that system A influences system D in a unitary channel de-
scribing the dynamics if D non-trivially depends on A, which is illustrated more formally in the
following definition [1].
Definition E.2. (Quantum Causal Influence) Let U : A ⊗B →C ⊗D be a unitary channel
between the systems A ⊗B and C ⊗D. Having no quantum causal influence from A to D is
equivalent to the following diagram:
E
THE THEORY OF CAUSAL BALANCE
13
U
A
B
C
D
=
V
B
D
∃V such that
such that
stands for the trace operation (“discard" for the readers familiar with string
diagrams).
Can we go deeper and be able to describe the influences from A to D in a more fine-grained
way? Yes, we can! It was shown in [1] that influences between systems are fully determined by
influences between subsystems. In fact, as shown in [5], interference influences between systems
are equivalent to influences between projector decompositions more formally presented in the
following definition.
Definition E.3. Let U : A ⊗B →C ⊗D be a unitary channel. We say that there is no
interference influence from the projector decomposition on the Hilbert space HA, {P i
A} to the
projector decomposition on HD, {P j
D} if and only if
[{P i
A}, {P j
D}] = 0 for any i and j.
The reason we chose to also define interference influence in terms of the commutation of projector
decompositions is that we will later see that it is related to the paradoxes via the commutation
link.
We can begin to see some of the similarities with the consistent sets of histories formalism.
Indeed, in the theory of causal balance, events are seen as a unique selection of a projector
P ∈D such that D is a projector decomposition. This projector decomposition is selected in
a way that maintains a certain causal balance between systems. In other words, we would only
want a specific type of influences between the systems. However, we still have not defined how
this causal balance is maintained. What is this magical causal structure that we want to have for
which there is only one unique assignment of projector decompositions? The following theorem
from [1] defines how we can obtain the preferred projector decompositions by the theory.
Theorem E.1. Let U : A ⊗B →C ⊗D be a unitary channel, and let {P i
A} be a projective
decomposition on A. For A and D denoting the operator algebras of the form MA ⊗IB and
U†(IC ⊗MD),
{P i
A} is preferred by D ⇐⇒span({P i
A}) ⊗IB = Z(A ∩D′).
E.2
Causal Structure
Since we define causal structure on algebras, we provide the necessary background in Section E.
We start with the following definition from [5].
Definition E.4. A causal structure is a directed graph C over a finite set of algebras on a
finite-dimensional Hilbert space H such that for two algebras ˜
X and ˜Y in C,
˜
X does not influence ˜Y and ˜Y does not influence ˜
X ⇐⇒
˜
X ⊆˜Y.
When we talk about a causal structure, we also talk about a bubble. Bubbles are also relevant
since in most cases, we want to maintain the causal balance relative to a certain bubble of
systems.
Definition E.5. A bubble B is a subset of systems in C. In a unitary circuit, a bubble is a
subset of wires.
E
THE THEORY OF CAUSAL BALANCE
14
The reader might be wondering if there is any structure to these graphs that maintains the
causal balance between algebras, and the answer is positive. Consider an algebra X relative
to a bubble B. We define the following two event spaces (see Appendix 2 for mathematical
background on event spaces).
• Future balanced event space: E↑
˜
XB := Proj ∩Z( ˜
X ∩C↑
B( ˜
X)′),
• Past-balanced event space: E↓
˜
XB := Proj ∩Z( ˜
X ∩C↓
B( ˜
X)′),
such that C↑
B( ˜
X) is the resulting algebra from combining all the algebras in the bubble B that
˜
X influences. Similarly, C↓
B( ˜
X) is the resulting algebra from combining all the algebras in the
bubble B that are influenced by ˜
X.
If the symbols in the above expressions are confusing, we recommend looking at Appendix 2 on
algebras and some of their properties.
These expressions present exactly what we were looking for. Events spaces (which are projectors)
that satisfy theorem and hence maintain causal balance. Here we come to see how we took
advantage of the time-symmetry mentioned earlier when it comes to having past- and future-
balanced events.
E.3
From Algebras to Unitary Circuits
Consider the following unitary circuit:
U3
U2
U1
{P e1
Ain
1 }
{P e′
1
Aout
1 }
{P e2
Ain
2 }
{P e′
2
Aout
2 }
{P e3
Ain
3 }
{P e′
3
Aout
3 }
A
B
C
Let’s consider the set of system {A, B, C}. We call a set of systems in a circuit a bubble, and
we will name this bubble composed of A, B, and C, B1.
Now, if we were to think about
the projector decompositions that we can have at each system from B1 in the context of the
consistent histories formalism, one can have too many sets of histories corresponding to B1 that
are consistent. However, we want to pick one unique set of histories that appeals to a causal
structure where all the projectors are causally balanced with respect to their future and their
past. This is the reason why we decorated the unitary circuit with two projectors at each system.
The P i
Xin are projectors that are causally balanced with respect to the system X’s interaction
E
THE THEORY OF CAUSAL BALANCE
15
with its past within B1, and P i
Xout are the causally balanced projectors with respect to the
system X’s interaction with the future within B1.
The definition of a causal structure in terms of algebras and influences between them might not
be very intuitive and circuits can make understanding the theory better. We can take advantage
of the fact that any unitary circuit defines a model in the theory: when we use unitary circuits,
we automatically create a model for our theory as the causal structure is implicitly implied by
the position of the system relevant to each other: systems at the top are influenced by systems
at the bottom. However, it is indispensable to note that the theory of causal balance is defined
independently from spacetime and so it is not evident to deduce that it means that systems at
the top necessarily happen before systems at the top. It is more correct for us to think about
spacetime emerging from the causal structure just like events [5].
Moreover, it is crucial to remember that not all causal structures can be represented as unitary
circuits. Still, unitary circuits do represent some models, and we will focus on those in this
section for clarity purposes.
If the bubble under study is composed of n systems, then, 2n events take place represented
by n ongoing projections, and n outgoing projections. In other words, each wire k ∈{1, ..., n}
in the bubble has a pair ({P k
in}, {P k
out}) associated with it. The projector decompositions get
selected using a preference algorithm that respects the causal structure. It has been proven
that in the context of this interpretation, the only allowed interference influences are the ones
from decompositions of the past to decompositions of the future, which is put more formally in
theorem 4.1 in the following section.
Eureka! We finally found out how we get the unique set of projectors associated with a bubble
that represents events that might happen. We would like to emphasize the latter since the projec-
tors obtained are not events that will happen, these projectors decompositions only correspond
to events that are possible to happen, not to events that are about to happen. As shown in [1],
for a bubble B, the probability for the unique set that was selected to be realized for a circuit
C containing B is
p(e1, e′
1, ..., en, e′
n) = 1
dTr
˜P e1
Ain
1
˜P e′
1
Aout
1 ... ˜P en
Ain
n
˜P en
Aout
n
.
It has also been shown in [1] that this probability rule recovers the exact predictions of quantum
theory.
To conclude our review of the theory of causal balance, we answer the anticipated question about
reproducing standard quantum theory. In [1], it was proven that the theory of causal balance is
able to recreate any phenomenon in standard quantum theory.
|
Non-Commutation Chains in Pre- and Post-Selection Paradoxes This paper is part of a dissertation submitted in fulfillment of the requirements of a degree Master of Science in Mathematics and Foundations of Computer Science 2024 Ouissal Moumou Mathematical Institute 23, 2025 Abstract Peculiar measurements can be obtained on systems that undergo both pre- and postselection. We prove a conjecture from [1] on logical Pre- and Post-Selection (PPS) paradoxes for a restricted case. We prove that all of these paradoxes admit non-commutation chains. We also relate this to the theory of causal balance, recently introduced in [1], and show how the theory blocks such paradoxes. Contents 1 Introduction 2 1.1 The Quantum Pigeonhole Principle Paradox . . . . . . . . . . . . . . . . . . . . . 2 2 Pre- and Post-Selection Paradoxes: A Formal Definition 3 3 PPS Paradoxes Admit Non-Commutation Chains 4 4 Connection to the Theory of Causal Balance 7 5 Conclusion 8 A Proof of Lemma 3.1 9 B Proof of Corollary 3.1 9 C Event Spaces 10 D Operator Algebras and Relevant Properties 10 E The Theory of Causal Balance 11 E.1 Interference Influences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 E.2 Causal Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 E.3 From Algebras to Unitary Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . 14 1 18 Sep 2025 1 INTRODUCTION 2 1 Introduction Suppose a quantum system is pre-selected in the state |ψ⟩at time t0 and post-selected in the state ⟨φ| at time t2. Suppose also that the system undergoes an intermediate projective measurement {P} at time t1. One could calculate the probability of the intermediate measurement by conditioning both on the pre- and post-selection, this is the main spirit behind the ABL rule introduced in [2]. For a degenerate operator C at time t, the probability of obtaining an outcome cn is specified by the ABL rule as P(C) = |⟨φ|PC=cn|ψ⟩|2 P i |⟨φ|PC=ci|ψ⟩|2 , (1) where PC=ci = P i |Φ⟩⟨Φ| is the projection on the states with eigenvalue ci. Peculiar situations arise for specific choices of |ψ⟩, {P}, and ⟨φ| in which the probabilities obtained are non-intuitive. These situations are referred to as pre- and post-selection paradoxes. [1] posited that these paradoxes are absent within the framework of causal balance theory, as this theoretical framework inherently avoids what we term "non-commutation chains" in the present work. A non-commutation chain, which we define rigorously in Section 3, arises when the intermediate projection operator in a pre- and post-selection scenario fails to commute with both the pre-selected and post-selected quantum states. We establish a proof for a restricted subset of Ormrod's conjecture from [1], demonstrating that all paradoxes within this constrained class exhibit a systematic occurrence of non-commutation chains. We start with the quantum pigeonhole principle as an exemplar of these paradoxes to provide intuitive understanding, followed by a formal definition of pre- and post-selection phenomena that establishes the theoretical foundation for our subsequent proof. We conclude with a discussion of the implications and directions for future research. 1.1 The Quantum Pigeonhole Principle Paradox [3] introduced the quantum pigeonhole principle in which 3 particles are prepared in a superposition state of being in 2 different boxes. Boxes 1 and 2 are represented by |0⟩and |1⟩respectively. A superposition of being in both boxes 1 and 1 would then be represented more conveniently with |+⟩= 1 √ 2(|0⟩+ |1⟩). Therefore, the states corresponding to the pre- and post-selection states are |ψ⟩= |+⟩1|+⟩2|+⟩3, ⟨φ| = ⟨i|1⟨i|2⟨i|3, respectively, where ⟨i| = 1 √ 2(⟨0| + i⟨1|). We aim to check whether two particles are in the same box. Since the three particles are in the same initial and final states, we will only demonstrate the paradox for particles 1 and 2. By symmetry, the same applies for particles 2 and 3 and particles 1 and 3. Particles 1 and 2 being in the same box means that they are either both in box 1, or both in box 2 which we represent with projectors P11 = |0⟩1|0⟩2⟨0|1⟨0|2 and P22 = |1⟩1|1⟩2⟨1|1⟨1|2 respectively. On the other hand, particles 1 and 2 being in separate boxes means particle 1 is in box 1 and particle 2 is in box 2, or vice versa, which we represent with projectors P12 = |0⟩1|1⟩2⟨1|1⟨0|2 and P21 = |1⟩1|0⟩2⟨1|1⟨0|2 respectively. 2 PRE- AND POST-SELECTION PARADOXES: A FORMAL DEFINITION 3 Consequently, the two particles under consideration being in the same box corresponds to the projector Psame = P11 + P22. Similarly, the two particles being in different boxes correspond to the projector Pdiff = P12 + P21. Because the value of ⟨φ|Psame |ψ⟩turns out to be zero, it follows that the probability of finding particles 1 and 2 in the same box using the ABL rule is also zero. Considering the symmetry in our example with the other particles, this means that the probability of finding particles 2 and 3 in the same box in the intermediate measurement is also zero, and so is the probability of finding both particles 1 and 3 in the same box. One then is prompted to conclude that it is with certainty that we observe that no two particles can be found in the same box. However, recalling the very basic yet powerful pigeonhole principle, it is quite peculiar to have two boxes and three particles. Yet, no two particles share the same box! 2 Pre- and Post-Selection Paradoxes: A Formal Definition Below is a formal definition of a pre- and post-selection paradox inspired from [4]. Consider a Hilbert space, a choice of pre-selection |ψ⟩and post-selection ⟨φ|, and let P be a finite set of projectors closed under complements composed of the projectors related to the pre- and postselection system {P, I -P} that are uniquely determined by the projectors P. We only consider the cases in which the ABL probabilities corresponding to using the projectors {P, I -P} yield 0 or 1 values (hence the "logical" part). We would like to be able to generate a partial Boolean algebra P′ from P using the following for any P and Q in P′: • If P, Q ∈P′ and PQ = QP, then PQ ∈P′, • If P ∈P′, then I -P ∈P′. One intuitive way of understanding the new partial Boolean algebra is by considering projectors corresponding to propositions, and the extension to the partial Boolean algebra is our way of wanting to also take disjunctions and conjunctions of the propositions at hand (we originally only had the propositions and their complements given to us by the experiments). Suppose we want to extend the probability function f given to us by the ABL rule from P to P′. In other words, we want to find a probability function on P′ that recovers the probability function defined on P, such that the following algebraic conditions are satisfied: (i) For all P ∈P′, 0 ≤f(P) ≤1, (ii) f(I) = 1 and f(0) = 0, (iii) For all P, Q ∈P′, f(P + Q -PQ) = f(P) + f(Q) -f(QP). Definition 2.1. (PPS Paradox) Assuming the above setting, we say that the ABL predictions for P form a logical PPS paradox when we fail to find a function that fails one or more of the algebraic conditions above. Applying this to the case of the pigeonhole principle paradox: the projectors corresponding to two particles being in different boxes Pdiff1,2, Pdiff1,2, and Pdiff1,2. We know that: f(Pdiff1,2) = 1, f(Pdiff1,2) = 1, and f(Pdiff1,2) = 1. We know that p(e) = 1 and p(f) = 1 =⇒p(e ∧f) = 1. This can be extended in our case of three events, and knowing that the probabilities for these events are all 1, we can conclude that f(Pdiff1,2Pdiff2,3Pdiff1,3) = 1. 3 PPS PARADOXES ADMIT NON-COMMUTATION CHAINS 4 However, note that: Pdiff1,2Pdiff2,3Pdiff1,3 = 0. Therefore, f(0) = 1, which violates condition (i) from Definition 2.1. Thus, the quantum pigeonhole principle is just another instance of logical PPS paradoxes. 3 PPS Paradoxes Admit Non-Commutation Chains We start with the following definition of a non-commutation chain: Definition 3.1. (Non-Commutation Chain) A projector P has a non-commutation chain with two other projectors Q1 and Q2 iff Q1P ̸= PQ1 and Q2P ̸= PQ2. The following lemma will be useful later and is proved in Section B: Lemma 3.1. In the context of a logical pre- and post-selection paradox with pre- and postselection projectors Pψ = |ψ⟩⟨ψ| and Pφ = |φ⟩⟨φ|, respectively, a projector P corresponding to an intermediate measurement at time t1 < t < t2 does not have a non-commutation chain with Pψ and Pφ iff • P|ψ⟩= |ψ⟩, and we say that P and Pψ idempotently commute, or • P|ψ⟩= 0, and we say that P and Pψ orthogonally commute, or • P|φ⟩= |φ⟩, and we say that P and Pφ idempotently commute, or • P|φ⟩= 0, and we say that P and Pφ orthogonally commute. Corollary 3.1. Two projectors P and Pψ idempotently commute (in the sense of lemma 3.1) iff I -P and Pψ orthogonally commute. This corollary says the same thing about the relationship between P and Pφ if they idempotently, or orthogonally commute. Theorem 3.1. (Non-Commutation Chains for Logical PPS Paradoxes) Consider a pre- and post-selection scenario with corresponding intermediate measurement sets P and P′ per definition 2.1 and pre- and post-selection rank-1 projectors Pψ = |ψ⟩⟨ψ| and Pφ = |φ⟩⟨φ| such that • |ψ⟩and ⟨φ| are the pre- and post-selection states, • All the projectors in P commute, • P′ is a finite set. Every logical PPS paradox with the specifications above has at least one projector in P that forms a non-commutation chain with Pψ and Pφ. Proof. We prove the contrapositive of this statement. We assume that we have a scenario with all the specifications above where every projector in P does not form a non-commutation chain with Pψ and Pφ, and we prove that we never get a logical PPS paradox. Consider the projectors Qj that can be written as the product of n projectors in P: Qj = ̃P1... ̃ Pn, (2) such that ̃Pi = Pi ∈P or ̃Pi = I-Pi ∈P. In other words, take all the n projectors corresponding to intermediate measurements, give each projector an index from 1 to n. In each index, either put a projector or its complement, then take the product of all the projectors altogether. One can see that there are 2n possibilities for this arrangement. The Qjs are all orthogonal to each other i.e. for any i and j such that k ̸= j: QkQj = δkjQj and P j Qj = I. Note that every Qj is 3 PPS PARADOXES ADMIT NON-COMMUTATION CHAINS 5 a projector because it is made of the product of projectors from P, all of which commute with each other. Let Ωbe the sample space containing all the possible Qjs, i.e Ω= {Qj}j. As shown in Section C, we can take E = 2Ωas an event space. However, as also shown in Section C, there is an equivalent event space E that we obtain by summing distinct elements in Ωsuch that E = X j∈S Qj/S ∈2Ω . (3) As an application of what is shown in Section C, Poweset(Ω) and E are not only equivalent, but E also have a Boolean algebra structure. The equivalence between Poweset(Ω) and E is defined as for two commuting projectors P and Q in E: • PQ = P j∈SP ∩SQ Qj, • P + Q -PQ = P j∈SP ∪SQ Qj, • P = P j∈SP Qj. Before we move to the first step in our proof, we will show that P ⊆E. Let Pk be any projector in P, and let's prove that Pk ∈E. We can write Pk as Pk = X j ̃P1...Pk... ̃ Pn = X j∈SPk Qj ∈E. Therefore, P ⊆E. As a first step in our proof, we will show that P′ = E. For the first direction =⇒, we show that P′ ⊆E. We know that P ⊆E, and we know that by Definition 2.1, P′ is the smallest partial Boolean algebra from P. Knowing that E is a Boolean algebra that contains P, then it has to contain the smallest one formed by P, therefore, it contains P′, and thus: P′ ⊆E. For the opposite direction ⇐= , we show that E ⊆P′. Let P be any projector in E, P can be one of two cases: 1. P = Qj for some Qj = ̃P1... ̃ Pn which means P can be written as a conjunction of projectors in P, 2. P = P j∈S Qj which means P can be written as a disjunction of elements in Ω, more specifically, as a disjunction of conjunction if elements of P. In the first case, we know that any conjunction of elements in P is in P′ since all the projectors in P commute, and by definition, P′ contains all the products of projectors in P that commute. Now that we know that any Qj is in P′, if P is a conjunction of elements in Ωi.e P = P j∈S Qj, then it has to be in P′ since by definition, P′ is closed under conjunction, and so it contains any possible conjunction of two projectors in P′ . Therefore, E ⊆P′. Thus E = P′.e pre-selection and post-selection respectively. <-- typo here?? We are at the main part of our proof of the theorem, the one involving defining a probability distribution on P′. Consider the function f defined on P′ such that for any projector P in P′, f(P) = Tr(PψPPφ) Tr(PψPφ) . (4) 3 PPS PARADOXES ADMIT NON-COMMUTATION CHAINS 6 We will show that this function satisfies all the conditions for being a probability measure on P′ as per Definition 2.1. For condition (ii) from the definition, we have that f(I) = Tr(PψIPφ) Tr(PψPφ) = 1, f(0) = Tr(Pψ0Pφ) Tr(PψPφ) = 0. Condition (iii) follows directly from the linearity of the trace function. The trace operation is linear, so f(P + Q -PQ) = Tr(Pψ(P + Q -PQ)Pφ) Tr(PψPφ) = Tr((PψPPφ) + (PψQPφ) -(PψPQPφ)) Tr(PψPφ) = f(P) + f(Q) -f(PQ). For condition (i), show that for any projector P in P′, 0 <= f(P) <= 1. Since P′ = E, any projector R in P′ can be written as a product of projectors in P such that: R = Qj = ̃P1... ̃ Pn or as the sum of several Qi as shown above. Now, f(Qj) = Tr(Pψ ̃P1... ̃ PnPφ) Tr(PψPφ) . Since all the elements in P commute, we can arrange the elements in any product ̃P1... ̃ Pk ̃ Pk+1... ̃ Pn such that the first k projectors are the ones that commute with Pψ, and that the rest of the projectors all commute with Pφ. Now, using Lemma 3.1, we know that for the first k projectors in Qi, Pψ ̃Pi = |ψ⟩⟨ψ|P = Pψ ̃Pi = ( Pψ if ̃Pi and Pψ idempotently commute, 0 if ̃Pi and Pψ orthogonally commute. One can see that applying this repeatedly for all the projectors P1 to Pk can only result in Pψ if all the projects tend not to be orthogonal with Pψ. In the opposite case, the series might collapse to 0. Similarly, for the rest of the projectors, we have ̃PiPφ = Pφ or ̃PiPφ = 0. Now, coming back to f(Qj), it can only be 1 if all the projectors happen to commute non-orthogonally with either Pψ or Pφ. Now, we analyze the second case in which elements of E can be written as sums of distinct Qj. The summands in this case are distinct. In other words, each two summands are different by at least one ̃Pi (in fact, each two summands are orthogonal). Moreover, recalling Corollary 3.1 we know that in our non-commutation chain scenario, if a certain projector Q commutes with Pψ commute non-orthogonally, then its complement I -Q commutes orthogonally with Pψ, and vice versa. The same can be said about Q and Pφ if Q commutes with Pφ. This means that considering all possible Qj configurations (by configuration here we mean, different choices for ̃Pi), there is only one single configuration of ̃Pi where all the projectors non-orthogonally commute with either Pψ or Pφ, only and only in this single case would f be 1. In all the others it will be 0. Knowing that all the summands are distinct, only one configuration equalling 1 means that any sum will always be 1 or 0. Therefore, we have just shown an even stronger claim than the one required by condition (i), that for any projector P in P′, f(P) = 0 or f(P) = 1. 4 CONNECTION TO THE THEORY OF CAUSAL BALANCE 7 4 Connection to the Theory of Causal Balance The theorem proved in the preceding section draws from the theory of causal balance. The proof approach can be extended to establish the conjecture proposed in [1], namely that the theory of causal balance blocks both pre-logical and post-logical paradoxes. We demonstrate this connection in the following analysis. The requisite background on causal balance theory is provided in Section E. We recall Theorem 4 in [1]. Theorem 4.1. In a circuit that models a causal structure, the only interference influences allowed are of the form n P e′ m Ain m o → n P e′ k Aout k o , such that m ≤k. Consider the following unitary circuit: U3 U2 U1 {P e1 Ain 1 } {P e′ 1 Aout 1 } {P e2 Ain 2 } {P e′ 2 Aout 2 } {P e3 Ain 3 } {P e′ 3 Aout 3 } A B C Theorem 4.1 dictates that there can never be an influence from {P e′ 2 Aout 2 } to {P e1 Ain 1 } for two reasons: 1. It is an influence from an "out" projector to an "in" projector. 2. It is an influence from a projector that is higher up in the circuit to a projector that is lower up in the circuit. Interference influences are equivalent to commutations as shown in theorem 3.1. One of the patterns that are immediately eliminated by Theorem 4.1 is non-commutation chains. Given the assumptions in Definition 3.1, every PPS paradox admits a non-commutation chain, the absence of which is guaranteed by the theory of causal balance. Therefore, one could say that the theory of causal balance "blocks" PPS paradoxes. In other words, PPS paradoxes, never occur under the framework of the theory of causal balance. Intriguingly, [1] demonstrated that any phenomenon from standard quantum theory can be reproduced within the framework of the theory of causal balance. This raises a compelling question: given our analysis showing that the theory prevents these paradoxes (at least in REFERENCES 8 the specific case examined in this paper), how might one model such paradoxes within this framework? We leave this investigation for future work. 5 Conclusion This paper establishes that logical PPS paradoxes in the restricted case where all projectors in P commute and where P′ is finite necessarily admit non-commutation chains. While this result provides important structural insight into the nature of these paradoxes, several theoretical limitations warrant further investigation. The finite cardinality constraint on P′ represents a significant restriction, as P′ can in principle be infinite. Whether our theorem extends to the infinite case remains an open question, and failure to do so would reveal a fundamental relationship between the cardinality of P′ and the structural properties of these paradoxes. Additionally, our analysis assumes commutativity of all projectors in P, though it remains unclear whether scenarios with non-commuting projectors are physically realizable in logical PPS frameworks. Our findings also bear on the relationship between logical PPS paradoxes and the theory of causal balance. Given that [1] demonstrated the theory's ability to reproduce all standard quantum phenomena, yet our analysis shows these particular paradoxes admit non-commutation chains that should be blocked by causal constraints, an interesting question emerges regarding how such paradoxes might be modeled within this framework. We conjecture that circuit representations of PPS paradoxes (in the context of the theory of causal balance), which necessarily incorporate unitary interactions, will exhibit richer causal structures than the simple sequential projector arrangements. This suggests that modeling PPS paradoxes within the theory of causal balance may reveal fundamentally different structural properties than those apparent in the standard formulation, representing an important direction for future work. Acknowledgments I would like to thank my supervisors Dr. Nick Ormrod and Professor Jonathan Barret. Endless thanks to my friends and family for their support. Thanks to the Optiver Foundation Scholarship for financially supporting my MSc at the . References [1] Nick Ormrod and Jonathan Barrett. Quantum influences and event relativity. arXiv preprint , 2024. [2] Yakir Aharonov, Peter G Bergmann, and Joel L Lebowitz. Time symmetry in the quantum process of measurement. Physical Review, 134(6B):B1410, 1964. [3] Yakir Aharonov, Fabrizio Colombo, S Popescu, Irene Sabadini, Daniele C Struppa, and Jeff Tollaksen. The quantum pigeonhole principle and the nature of quantum correlations. arXiv preprint , 2014. [4] Matthew F Pusey and Matthew S Leifer. Logical pre-and post-selection paradoxes are proofs of contextuality. arXiv preprint , 2015. [5] Nick Ormrod. Quantum Theory: Causation and Interpretation. Phd thesis, 2024. B PROOF OF COROLLARY 3.1 9 [6] Kenneth R. Davidson. C*-Algebras by Example. American Mathematical Society, 1996. A Proof of Lemma 3.1 Proof. For the ⇐= direction, we show below what each of the cases implies about commuting with either Pψ or Pφ. • If P|ψ⟩= |ψ⟩, then, PPψ = P|ψ⟩⟨ψ| =⇒PPψ = |ψ⟩⟨ψ|. we know that ⟨ψ| = ⟨ψ|P. We deduce that PPψ = |ψ⟩⟨ψ|P = PψP. Therefore P and Pψ commute. • If P|ψ⟩= 0, then PPψ = P|ψ⟩⟨ψ| =⇒PPψ = 0. Now, for the other side, we have PψP = |ψ⟩⟨ψ|P. We know from proposition 2 that because P|ψ⟩= 0, ⟨ψ|P = 0. Therefore, PψP = 0 = PPψ, and P and Pψ commute. We can prove the other two cases involving Pφ and P with a similar argument. We can see that in all the cases, we can deduce the commutation of P with either Pψ or with Pφ. For the =⇒ direction, we assume that P and Pψ commute or P and Pφ commute, and we show that either P and Pψ orthogonally commute or idempotently commute, or that P and Pφ orthogonally commute or idempotently commute. Case 1: P and Pψ commute Two projectors commuting means that they have a shared basis. Let |e1⟩, |e2⟩, ..., |en⟩be the shared basis of both projectors. Now, since Pψ = |ψ⟩⟨ψ| is a rank-1 projector, there is a k ∈{1, ..., n} such that: Pψ = |ek⟩⟨ek|. Considering that P is also a rank-1 projector, we have: P = |ej⟩⟨ej| for some j ∈{1, 2, ..., n}. Therefore, P|ψ⟩= |ej⟩⟨ej||ek⟩. Now, since |ej⟩and |ek⟩are basis states, there are two possibilities: • k = j, in which case ⟨ej||ek⟩= 1 =⇒P|ψ⟩= |ej⟩⟨ej||ek⟩= |ej⟩= |ek⟩= |ψ⟩, • k ̸= j, in which case ⟨ej||ek⟩= 0 =⇒P|ψ⟩= |ej⟩⟨ej||ek⟩= 0. Therefore, P|ψ⟩= |ψ⟩or P|ψ⟩= 0. Case 2: P and Pφ commute We use a similar argument to the one used in the previous case, and we deduce that we obtain one of the following cases: P|φ⟩= |φ⟩or P|φ⟩= 0. More intuitively, if P and Pψ commute, since we are dealing with rank-1 projectors, one can try to imagine the subspaces onto which both P and Pψ project. Their corresponding subspaces will either have no overlap (the orthogonal case), or one projector's subspace will contain the other projector's subspace (the idempotent case). B Proof of Corollary 3.1 Proof. (=⇒) We assume that P and Pψ idempotently commute. P|ψ⟩= |ψ⟩=⇒(I -P)|ψ⟩= |ψ⟩-P|ψ⟩=⇒(I -P)|ψ⟩= |ψ⟩-|ψ⟩= 0. Therefore I -P and Pψ orthogonally commute. D OPERATOR ALGEBRAS AND RELEVANT PROPERTIES 10 (⇐=) We assume that I -P and Pψ orthogonally commute. (I -P)|ψ⟩= 0 =⇒(I -P)|ψ⟩= |ψ⟩-P|ψ⟩= 0 =⇒P|ψ⟩= |ψ⟩. Therefore P and Pψ idempotently commute. C Event Spaces Usually, when we discuss probabilities, we talk about the sample space, which is the set of the possible outcomes of an event. However, if we want to further represent events that are related to the sample space, one usually has to appeal to an event space, which is usually denoted by the powerset of the sample space. In some of the later sections, we will encounter situations where we want to calculate the probability of the occurrence of a certain event or a collection of events. Let Ωbe a sample space, and let P(ω) be the associated event space. As shown in [5], a probability measure p : P(Ω) →[0, 1] is a function from the event space P(Ω) to [0, 1] such that • p(Ω) = 1, • For any two events e1, e2 ∈P(Ω): p(e1 ∪e2) = p(e1) + p(e2) -p(e1 ∩e2). In standard quantum theory, we use projectors to represent measurement outcomes. More precisely, we represent measurement by an orthogonal and complete set of n projectors Ω= {P i}n i=1. By orthogonal, we mean that all the projectors are pairwise orthogonal P iP j = 0 for i ̸= j, and by complete we mean that all the projectors in the set Ωsum to the identity Pn i=1 Pi = 1 [5]. Now, since Ωis the sample space for the measurement, it is a good guess that the event space for the measurement would be the powerset of the sample space Ω. However, there is a more convenient way to represent the event space of a measurement that has a one-to-one correspondence with the powerset of the event space. Instead of taking the power set of projectors, consider the projector Q obtained by summing the projectors in a set S in P(Ω) such that: Q = P P∈S P. Now, let EΩbe the set of projectors obtained using the latter for all the sets S in P(Ω) . Now, observe that for any sets S1 and S2 in EΩ, we have: QS1 ̸= QS2, which means that there is a one-to-one correspondence between the elements of P(Ω) and EΩ[5]. Moreover, we can easily check that there is an equivalence between • The complement of a set S in P(Ω) and I -QS in EΩ, • Disjunctions S1 ∪S2 in P(Ω) and QS1 + QS2 -QS1QS2 in EΩ, • Conjunctions S1 ∩S2 in P(Ω) and QS1QS2 in EΩ. This means that on top of the one-to-one correspondence, EΩmaintains the logical structure of events. D Operator Algebras and Relevant Properties We present the following mathematical background from [5]. Definition D.1. An algebra of operators is a set of linear vectors on the vectors of a Hilbert space with a structure such that for two operators M and N in an algebra X, • M ∈X =⇒cM ∈X, • M ∈X =⇒M † ∈X, • M, N ∈X =⇒MN ∈X, E THE THEORY OF CAUSAL BALANCE 11 • M, N ∈X =⇒M + N ∈X, • I ∈X and M ∈X =⇒IM = M = MI. An example of an algebra would be Op(H), the set of all operators in a Hilbert space H. However, Op(H) is not the only algebra one can find in a Hilbert space, other subsets of Op(H) can construct an algebra. In this paper, when we use the term "algebras"to mean operator algebras per definition D.1. In this dissertation, we need to distinguish between two different but equivalent types of algebras: Schrödinger algebras and Heisenberg algebras. Let U : A ⊗B →C ⊗D be a unitary transformation from systems A and B to systems C and D. Consider algebras A ∈Op(HA) and D ∈Op(HD). As shown in [5], there is an equivalence between the Schrödinger and Heisenberg algebras such that MD ∈A ⇐⇒ ̃ MA := MA ⊗IB ∈ ̃ A, MD ∈D ⇐⇒ ̃ MD := U-1(IC ⊗MD)U ∈ ̃D. We recall this very important result about algebras on Hilbert spaces from quantum information from [6]. Proposition D.1. Let H be a Hilbert space, and let X ∈Op(H) be an algebra. there is some decomposition of H into a direct sum of tensor products H = ⊕n i=1Hi L ⊗Hi R such that X ∈X ⇐⇒M = ⊕n i=1Mi L ⊗Ii R. Definition D.2. (Commutant and Commuting Center) Let H be a finite-dimensional Hilbert space. Let X be an algebra such that X ∈Op(H). We call X ′, the set of all operators in X that commute with every operator in X the commutant of algebra X. The commuting center of an algebra Z(X) = X ∩X ′ is the set of the operators in X that commute with all the operators within X. Z(X) is a commutative algebra. Moreover, Z(X) has the form M ∈X ⇐⇒M = Pn i=1 ciπi for some {ci}n i=1, such that πi projects onto a subspace Hi L ⊗Hi R. As discussed in [5], as a consequence of the above, M ∈Z(X) is a projector ⇐⇒M ∈E, such that EΩis the event space from the sample space Ω= {πi}n i=1 (see Section C for more on sample spaces and events spaces). In other words: EΩ= Proj ∩Z(X). E The Theory of Causal Balance Excluding the following definition of unitary circuits, the reader can choose to read this chapter in two different ways: either by starting with the section on unitary circuits, and then coming back to the beginning of the chapter for more detail, or by reading it in the intended order by building up from the fundamental idea of the theory. This chapter has been written with both approaches in mind. E THE THEORY OF CAUSAL BALANCE 12 Definition E.1. A unitary circuit is a circuit made of wires representing systems, and boxes representing unitary transformations. The following is an example of a unitary circuit from A ⊗B to C ⊗D: U A B C D Ormord and Barrett introduced the theory of causal balance in early 2024. It is considered a conceptual shift from previous theories since causation is no longer merely seen as this connection between events, but a fundamental framework out of which events emerge. To present the theory of causal balance in a single chapter, we start by defining interference influences, then discuss operator algebras and how they influence each other. Next, we define causal structure as a directed graph of operator algebras. We will no longer be talking about events emerging out of causation, but rather the emergence of events on a given algebra relative to the set of algebras that include said algebra. One is thus inclined to inquire into the mathematical nature of these events, and the response mirrors that found in the consistent histories formalism: projector decompositions. We will see that the causal structure of this theory restricts the type of influences between projector decompositions, which can also be expressed in how these projector decompositions commute. In this theory, events don't emerge from causation in the usual way. Instead, events emerge on a given algebra relative to other algebras that contain it. This raises the question: what are these events mathematically? The answer, following the consistent histories approach, is projector decompositions. The causal structure limits how projector decompositions can influence each other, which we can see in how they commute. Before concluding this chapter, we will demonstrate how the causal structure uniquely selects a set of projector decompositions, which will precisely correspond to a consistent set of histories. Given that the theory of causal balance is inherently stochastic, we will proceed to elucidate how probabilities are defined within this framework. The chapter will conclude with stating the proposition from [1] that the theory of causal balance can reproduce any phenomenon from standard quantum theory. E.1 Interference Influences We promised that we will show how the theory of causal balance allows us to obtain a unique consistent set of histories. For now, it is sufficient to convince ourselves that we want a unique set of histories that appeals to a certain causal structure. To understand this structure, we need to understand one of the building blocks of the theory: interference influences. In the quantum context, we say that system A influences system D in a unitary channel describing the dynamics if D non-trivially depends on A, which is illustrated more formally in the following definition [1]. Definition E.2. (Quantum Causal Influence) Let U : A ⊗B →C ⊗D be a unitary channel between the systems A ⊗B and C ⊗D. Having no quantum causal influence from A to D is equivalent to the following diagram: E THE THEORY OF CAUSAL BALANCE 13 U A B C D = V B D ∃V such that such that stands for the trace operation ("discard" for the readers familiar with string diagrams). Can we go deeper and be able to describe the influences from A to D in a more fine-grained way? Yes, we can! It was shown in [1] that influences between systems are fully determined by influences between subsystems. In fact, as shown in [5], interference influences between systems are equivalent to influences between projector decompositions more formally presented in the following definition. Definition E.3. Let U : A ⊗B →C ⊗D be a unitary channel. We say that there is no interference influence from the projector decomposition on the Hilbert space HA, {P i A} to the projector decomposition on HD, {P j D} if and only if [{P i A}, {P j D}] = 0 for any i and j. The reason we chose to also define interference influence in terms of the commutation of projector decompositions is that we will later see that it is related to the paradoxes via the commutation link. We can begin to see some of the similarities with the consistent sets of histories formalism. Indeed, in the theory of causal balance, events are seen as a unique selection of a projector P ∈D such that D is a projector decomposition. This projector decomposition is selected in a way that maintains a certain causal balance between systems. In other words, we would only want a specific type of influences between the systems. However, we still have not defined how this causal balance is maintained. What is this magical causal structure that we want to have for which there is only one unique assignment of projector decompositions? The following theorem from [1] defines how we can obtain the preferred projector decompositions by the theory. Theorem E.1. Let U : A ⊗B →C ⊗D be a unitary channel, and let {P i A} be a projective decomposition on A. For A and D denoting the operator algebras of the form MA ⊗IB and U†(IC ⊗MD), {P i A} is preferred by D ⇐⇒span({P i A}) ⊗IB = Z(A ∩D′). E.2 Causal Structure Since we define causal structure on algebras, we provide the necessary background in Section E. We start with the following definition from [5]. Definition E.4. A causal structure is a directed graph C over a finite set of algebras on a finite-dimensional Hilbert space H such that for two algebras ̃ X and ̃Y in C, ̃ X does not influence ̃Y and ̃Y does not influence ̃ X ⇐⇒ ̃ X ⊆ ̃Y. When we talk about a causal structure, we also talk about a bubble. Bubbles are also relevant since in most cases, we want to maintain the causal balance relative to a certain bubble of systems. Definition E.5. A bubble B is a subset of systems in C. In a unitary circuit, a bubble is a subset of wires. E THE THEORY OF CAUSAL BALANCE 14 The reader might be wondering if there is any structure to these graphs that maintains the causal balance between algebras, and the answer is positive. Consider an algebra X relative to a bubble B. We define the following two event spaces (see Appendix 2 for mathematical background on event spaces). • Future balanced event space: E↑ ̃ XB := Proj ∩Z( ̃ X ∩C↑ B( ̃ X)′), • Past-balanced event space: E↓ ̃ XB := Proj ∩Z( ̃ X ∩C↓ B( ̃ X)′), such that C↑ B( ̃ X) is the resulting algebra from combining all the algebras in the bubble B that ̃ X influences. Similarly, C↓ B( ̃ X) is the resulting algebra from combining all the algebras in the bubble B that are influenced by ̃ X. If the symbols in the above expressions are confusing, we recommend looking at Appendix 2 on algebras and some of their properties. These expressions present exactly what we were looking for. Events spaces (which are projectors) that satisfy theorem and hence maintain causal balance. Here we come to see how we took advantage of the time-symmetry mentioned earlier when it comes to having past- and futurebalanced events. E.3 From Algebras to Unitary Circuits Consider the following unitary circuit: U3 U2 U1 {P e1 Ain 1 } {P e′ 1 Aout 1 } {P e2 Ain 2 } {P e′ 2 Aout 2 } {P e3 Ain 3 } {P e′ 3 Aout 3 } A B C Let's consider the set of system {A, B, C}. We call a set of systems in a circuit a bubble, and we will name this bubble composed of A, B, and C, B1. Now, if we were to think about the projector decompositions that we can have at each system from B1 in the context of the consistent histories formalism, one can have too many sets of histories corresponding to B1 that are consistent. However, we want to pick one unique set of histories that appeals to a causal structure where all the projectors are causally balanced with respect to their future and their past. This is the reason why we decorated the unitary circuit with two projectors at each system. The P i Xin are projectors that are causally balanced with respect to the system X's interaction E THE THEORY OF CAUSAL BALANCE 15 with its past within B1, and P i Xout are the causally balanced projectors with respect to the system X's interaction with the future within B1. The definition of a causal structure in terms of algebras and influences between them might not be very intuitive and circuits can make understanding the theory better. We can take advantage of the fact that any unitary circuit defines a model in the theory: when we use unitary circuits, we automatically create a model for our theory as the causal structure is implicitly implied by the position of the system relevant to each other: systems at the top are influenced by systems at the bottom. However, it is indispensable to note that the theory of causal balance is defined independently from spacetime and so it is not evident to deduce that it means that systems at the top necessarily happen before systems at the top. It is more correct for us to think about spacetime emerging from the causal structure just like events [5]. Moreover, it is crucial to remember that not all causal structures can be represented as unitary circuits. Still, unitary circuits do represent some models, and we will focus on those in this section for clarity purposes. If the bubble under study is composed of n systems, then, 2n events take place represented by n ongoing projections, and n outgoing projections. In other words, each wire k ∈{1, ..., n} in the bubble has a pair ({P k in}, {P k out}) associated with it. The projector decompositions get selected using a preference algorithm that respects the causal structure. It has been proven that in the context of this interpretation, the only allowed interference influences are the ones from decompositions of the past to decompositions of the future, which is put more formally in theorem 4.1 in the following section. Eureka! We finally found out how we get the unique set of projectors associated with a bubble that represents events that might happen. We would like to emphasize the latter since the projectors obtained are not events that will happen, these projectors decompositions only correspond to events that are possible to happen, not to events that are about to happen. As shown in [1], for a bubble B, the probability for the unique set that was selected to be realized for a circuit C containing B is p(e1, e′ 1, ..., en, e′ n) = 1 dTr ̃P e1 Ain 1 ̃P e′ 1 Aout 1 ... ̃P en Ain n ̃P en Aout n . It has also been shown in [1] that this probability rule recovers the exact predictions of quantum theory. To conclude our review of the theory of causal balance, we answer the anticipated question about reproducing standard quantum theory. In [1], it was proven that the theory of causal balance is able to recreate any phenomenon in standard quantum theory.
|
2509.16256
|
HausaMovieReview: A Benchmark Dataset for Sentiment
Analysis in Low-Resource African Language
Asiya Ibrahim Zanga1, Salisu Mamman Abdulrahman2, Abubakar Ado3, Abdulkadir Abubakar
Bichi3, Lukman Aliyu Jibril4, Abdulmajid Babangida Umar3, Alhassan Adamu2 Shamsuddeen Hassan
Muhammad5, Bashir Salisu Abubakar
1Department of Computer Science, Federal University Dutsin-Ma, Katsina, Nigeria (email: azanga@fudutsinma.edu.ng).
2Department of Computer Science, Aliko Dangote University of Science and Technology, Wudil, Kano, Nigeria (emails:
salisu.abdul@kustwudil.edu.ng; alhassanadamu@kustwudil.edu.ng, bsalisu@gmail.com).
3Department of Computer Science, Northwest University, Kano, Nigeria (emails: aarogo@yumsuk.edu.ng, aabichi@yumsuk.edu.ng,
abumar@yumsuk.edu.ng;).
4(email: lukman.j.aliyu@gmail.com).
5Department of Computer Science, Bayero University Kano, Nigeria (email: shamsudden2004@gmail.com).
Abstract
The development of Natural Language Processing (NLP)
tools for low-resource languages is critically hindered by
the scarcity of annotated datasets. This paper addresses
this
fundamental
challenge
by
introducing
HausaMovieReview,
a
novel
benchmark
dataset
comprising 5,000 YouTube comments in Hausa and
code-switched English. The dataset was meticulously
annotated
by
three
independent
annotators,
demonstrating a robust agreement with a Fleiss' Kappa
score of 0.85 between annotators. We used this dataset to
conduct a comparative analysis of classical models
(Logistic
Regression,
Decision
Tree,
K-Nearest
Neighbors) and fine-tuned transformer models (BERT
and RoBERTa). Our results reveal a key finding: the
Decision Tree classifier, with an accuracy and F1-score
89.72%
and
89.60%
respectively,
significantly
outperformed the deep learning models. Our findings
also provide a robust baseline, demonstrating that
effective feature engineering can enable classical models
to achieve state-of-the-art performance in low-resource
contexts, thereby laying a solid foundation for future
research.
Keywords:
Hausa,
Kannywood,
Low-Resource
Languages, NLP, Sentiment Analysis
1. Introduction
Nigeria has over 500 languages, but lack of datasets has
led to their underrepresentation or exclusion from natural
language processing (NLP) studies[1]. The amount of
unstructured text has increased dramatically due to the
internet's spectacular improvement. The proliferation of
smart mobile devices and the innumerable number of
mobile applications finding an analytical procedure that
enables analysts to retain text in a way that will help them
comprehend human ideas was therefore becoming more
and more necessary. Being subjective beings, humans are
capable
of complex emotional expression. However, any choice
they choose will be influenced by their opinions, it
becomes vital to train information systems to identify
persons at that level in order to be able to comprehend
these viewpoints from vast volumes of material, which
calls for more work from researchers[2].
While some progress has been made in bridging this gap
like [3] many benchmark datasets for African languages
are limited to a single domain, making them difficult to
apply to other areas of interest [4]Sentiment analysis is
one of the most popular tasks in NLP, with datasets for
high-resource languages like English available across
different domains, including Twitter, social media
updates, product reviews and movie critiques. It is the
task of identifying the emotional tone of text and has
become a cornerstone of various fields, including
marketing,
customer
service
and
social
media
monitoring[5]. While significant advancements have
been made in sentiment analysis for widely spoken
languages, research on low-resource languages like
Hausa remains relatively under-explored.
However, for movie review in Nigeria, the only available
dataset is NollySenti [4], a Twitter sentiment
classification dataset for five languages including
English, Hausa, Igbo, Yoruba and Nigerian-pidgin.
Despite the approach contributing significantly to the
domain of movie review in these Nigerian languages, its
applicability to other domains remains unclear.
The Hausa-language cinema industry known as
Kannywood, which is based in Northern Nigeria, has a
large following and a profound cultural influence
throughout the Sahel region of Africa. Producers,
directors and marketers can help shape their tactics by
gaining useful insights into audience preferences and
opinions through sentiment analysis of reviews and
social media comments regarding Kannywood movies.
Nevertheless, compared to many other languages, the
field of natural language processing (NLP) for Hausa is
less established, which makes it difficult to extract
sentiment from writings written in the language. The
Kannywood industry has witnessed a surge in popularity,
yet analyzing the sentiment expressed in these movies
remains a challenge due to the lack of language-specific
sentiment analysis tools. Existing sentiment analysis
models often rely on large datasets, which are scarce for
Hausa[6]
Furthermore,
the
unique
linguistic
characteristics of Hausa, including its complex
morphology and dialectal variations, pose additional
hurdles for accurate sentiment classification.
This study aims to address this gap by developing a
sentiment analysis model for Kannywood movies, a
popular Nigerian Hausa-language film industry, we
propose a method to aid in the collection, filtering and
annotation of data for a low-resource language.
We introduce HausaMovieReview, an open-source
sentiment dataset based on reviews of Hausa-language
movies. Using comments from the LABARINA series,
we collected 17,095 YouTube comments and manually
annotated 5000 to build the Kannywood corpus for
sentiment classification named HausaMovieReview for
Hausa-language. Kannywood, which produces films
about the Kano people, highlights Nigeria's diverse
cultures with emphasis on the northern culture and is one
of the country's largest film industries. the research
contributions are summarized as follows:
i.
Datasets creation
ii.
Building a fine-tuned transformer model
(BERT and RoBERTa),
iii.
Perform a comparative analysis using classical
Machine Learning and Deep Learning models.
2. Literature Review
Sentiment analysis, also known as opinion mining,
involves processing textual data to assess and interpret
the emotions expressed within it. Understanding
"sentiment" is crucial; while often equated with feelings,
sentiments focus more on opinions and attitudes than on
factual information. According [7], feelings are
instinctive responses to stimuli (e.g., joy, anger), while
sentiment is an emotion influenced by one's opinions or
perceptions. Psychologists categorize feelings into six
primary emotions: love, joy, surprise, anger, sadness and
fear. Sentiment analysis primarily aims to determine
whether an opinion is positive, negative, or neutral,
thereby simplifying complex emotional scenarios.
One key aspect of sentiment is its universality; everyone
experiences emotions, making each person a potential
source of valuable data for sentiment analysis tools. This
intrinsic characteristic of human emotions makes
sentiment analysis a powerful tool across various
domains, including business, politics and social media.
Sentiment analysis has become an increasingly important
area of research in natural language processing. Previous
studies have explored various approaches to sentiment
classification, including rule-based methods, machine
learning techniques and deep learning models. However,
there is still a lack of research on sentiment analysis for
Kannywood movie reviews. Sentiment analysis has
evolved significantly over the years. Early approaches
relied primarily on lexicon-based methods, which
involved using predefined sentiment lexicons to classify
text[8]. Though sentiment analysis has been the focus of
study in education [9], it has also yielded numerous
successful applications in other domains, including as
business [10]and medicine [11]
Various machine learning algorithms were explored for
sentiment analysis, including Support Vector Machines
(SVM), Decision Trees (DT), Random Forests (RF),
Multilayer Perceptron (MLP) and Long Short-Term
Memory (LSTM) networks[12]. While lexicon-based
approaches were also considered, machine learning
algorithms generally outperformed them in terms of
accuracy[13]. Several researchers have proposed cross-
lingual NLP approaches to address the challenges of low-
resource languages by leveraging resources from high-
resource languages like English [14]. For sentiment
analysis, a common strategy is to translate comments
from the original low-resource language (e.g., Hausa)
into English and then apply well-performing English
sentiment analysis models.
Research on sentiment analysis in the movie domain has
predominantly focused on English-language reviews.
Early works by [15] established foundational approaches
using machine learning models and manually engineered
features. Subsequent studies explored deep learning
techniques such as Convolutional Neural Networks
(CNNs) and Recurrent Neural Networks (RNNs), which
significantly improved sentiment prediction accuracy
[16]. However, the advent of transformer-based
architectures like BERT and RoBERTa has set a new
benchmark, with studies reporting state-of-the-art results
across various sentiment analysis tasks [17].
3. Methodology
Among the objectives of this study is to develop a
HausaMovieReview dataset and to evaluate a sentiment
analysis model capable of accurately classifying the
Kannywood movie comments into three distinct
sentiment categories: positive, neutral and negative. To
achieve this, a comprehensive methodology was
designed integrating both classical machine learning
(ML) techniques and advanced transformer-based
models. This chapter details the entire process including
the dataset preparation, the implementation of each
model, the experimental setup and the chosen evaluation
metrics used to assess performance.
3.1 Dataset Construction
This section details the meticulous process undertaken to
construct the HausaMovieReview dataset, a critical
contribution of this research. The methodology
encompasses data collection from a widely accessible
online platform, a systematic sampling procedure, a
rigorous three-phase annotation process by independent
human experts and a comprehensive analysis of inter-
annotator agreement.
Data Collection and Sampling
The primary source of data for this study is the comment
sections of 13 episodes of the popular Kannywood series
"Labarina" on YouTube. YouTube, as a global social
media platform serves as a significant hub for user-
generated content and provides a rich source of genuine,
spontaneous opinions. A web scraper was developed and
utilized to collect all comments from these specific
episodes. This collection resulted in a total of 17,095 raw
comments.
From this initial corpus, a subset of 5,000 comments was
randomly sampled to form the final dataset for
annotation. This sampling approach ensured that the
annotated dataset was representative of the broader
spectrum of opinions expressed on the platform,
mitigating potential biases that could arise from selecting
comments based on popularity or content. To maintain
the integrity of the data, the comments were not pre-
processed or altered in any way before annotation. The
comments contain a mixture of pure Hausa, code-
switched English-Hausa and occasional loanwords from
Arabic, reflecting the typical linguistic patterns observed
in online communication within the region.
3.2 Annotation Process
The annotation task was performed by three independent
native Hausa speakers, each with a strong understanding
of the language's nuances, idioms and cultural context.
All three annotators possessed a deep familiarity with the
Kannywood film industry, which was crucial for
interpreting
context-dependent
comments.
The
annotation process was structured in three key phases:
1. Label Definition and Training: A clear set of
guidelines was developed to define the three
sentiment
labels:
Positive,
Neutral
and
Negative.
i.
Positive: A comment expressing
approval, praise, happiness, or a
favorable opinion. Examples include
"Masha Allah, a beautiful movie," or
"Great acting!"
ii.
Neutral: A comment that does not
convey
a
clear
sentiment.
This
includes factual statements, questions,
greetings,
or
off-topic
remarks.
Examples include "When is the next
episode?" or "Hello from Niger."
iii.
Negative: A comment expressing
disapproval, criticism, sadness, or a
negative opinion. Examples include
"This movie is a waste of time" or "The
plot is very bad." The annotators were
trained on these guidelines using a
small pilot set of comments to ensure a
shared understanding and consistent
application of the labeling criteria.
2. Independent Annotation: Each of the three
annotators independently labeled the entire set
of 5,000 comments according to the defined
guidelines. This independent approach was
essential for establishing a reliable ground truth
and for subsequently measuring the degree of
agreement between them.
3. Majority Vote and Finalization: Upon
completion of the independent annotations, the
final label for each comment was determined
using a majority-vote mechanism. If two or
more annotators agreed on a sentiment, that
sentiment was assigned as the final label. In
cases where all three annotators assigned a
different label (a rare occurrence), the comment
was flagged for review and a consensus was
reached through discussion. This process
ensured that the final dataset, referred to as
HausaMovieReview, was a highly reliable and
consistent resource.
3.3 Inter-Annotator Agreement
To validate the reliability and consistency of the
annotation process, two key metrics were used to
measure inter-annotator agreement (IAA): Cohen's
Kappa (κ) and Fleiss' Kappa (κ). Cohen's Kappa
measures pairwise agreement between two annotators,
while Fleiss' Kappa extends this to measure the
agreement among all three annotators. The scores
obtained were provided in Table 1 and Table 2 as
described below.
Table 1: Inter-Annotator Agreement Scores
Pair
Cohen's/Fleiss'
Kappa
Annotator 1 vs Annotator 2
0.7975
Annotator 1 vs Annotator 3
0.9071
Annotator 2 vs Annotator 3
0.8903
Fleiss' Kappa (All Annotators)
0.865
Table 2: Annotator Label Distributions
Annotator
Negative
Neutral
Positive
Annotator 1
1165
(23.30%)
996
(19.92%)
2839
(56.80%)
Annotator 2
1165
(23.30%)
995
(19.90%)
2840
(56.80%)
Annotator 3
1165
(23.30%)
995
(19.90%)
2840
(56.80%)
Majority
Vote
1165
(23.30%)
995
(19.90%)
2840
(56.80%)
These high IAA scores are a testament to the quality of
the dataset and the clarity of the annotation guidelines,
ensuring that the HausaMovieReview dataset is a robust
and trustworthy resource for sentiment analysis
research. The complete dataset and all associated code
are publicly available on GitHub.
https://github.com/AsiyaZanga/HausaMovieReview.git
3.4 Experiment Setup
This section details the experiment design used for this
study including experiment steps, classical machine
learning and deep learning and training settings
Preprocessing
The dataset for this study was a corpus of 5,000
comments extracted from comment sections of 13
episodes of LABARINA. Given the informal and multi-
lingual nature of the data, a rigorous preprocessing
pipeline was established. This involved converting all
text to lowercase, removing punctuation, special
characters and tokenizing the sentences into individual
words.
Feature
Extraction:
Term
Frequency-Inverse
Document Frequency (TF-IDF)
The TF-IDF vectorization technique was employed to
transform the preprocessed text data into a matrix of
numerical features. TF-IDF assigns a weight to each
word in a document, reflecting its importance within that
specific comment relative to the entire corpus. A high
TF-IDF score indicates a term that is both frequent in a
document and rare across the entire dataset, making it a
powerful feature for distinguishing between sentiment
classes.
Model Implementation
Following the feature extraction, three distinct classical
ML classifiers were trained and evaluated:
1. Logistic Regression (LR): This is a linear
classification model that uses a logistic function
to estimate the probability of a given comment
belonging to a particular sentiment class.
Despite its simplicity, LR is highly effective for
text classification tasks, providing a strong
baseline for comparison.
2. Decision Tree (DT): A non-linear, supervised
learning model that partitions the data into a
series of decisions, creating a tree-like structure.
The DT model makes predictions by traversing
the tree from the root to a leaf node, where each
internal node represents a feature test and each
leaf node represents a class label.
3. K-Nearest Neighbors (KNN): KNN is an
instance-based, non-parametric algorithm that
classifies a new data point based on the majority
class among its k nearest neighbors in the
feature space. The value of k was a key
hyperparameter tuned during the training
process to optimize performance.
Transformer-based Approach
To capture the complex semantic and contextual nuances
of the comments, pre-trained transformer models were
leveraged. This approach bypassed the need for manual
feature engineering by using the models' inherent ability
to learn rich, contextual embeddings from raw text. We
have selected to models as described below:
1. BERT
(Bidirectional
Encoder
Representations from Transformers): The
BERT-base-uncased model was selected for its
ability to learn bidirectional representations
from a massive corpus of text. The model was
fine-tuned for the sentiment classification task
by adding a simple classification layer on top of
its output, allowing it to adapt its pre-trained
knowledge
to
the
specific
domain
of
Kannywood comments.
2. RoBERTa (A Robustly Optimized BERT
Pretraining
Approach):
RoBERTa,
an
enhanced version of BERT, was also chosen. It
was trained with an optimized methodology,
including a larger dataset, a longer training
period and the removal of the next sentence
prediction objective. This robust training
process often results in superior performance on
downstream tasks compared to standard BERT.
Training Setup
The fine-tuning of both BERT and RoBERTa was
conducted using a standard training protocol. The final
classification layer was added to the pre-trained models.
Key training parameters were carefully configured: a
small learning rate 2×10−5, a batch size of 16 and a
limited number of 3 epochs to prevent overfitting. The
AdamW optimizer was used, as it is widely
recommended for training transformer models.
3.5 Evaluation Method and Metrics
To ensure a fair and robust evaluation of all models, a
consistent experimental protocol was followed. The
dataset was not partitioned into separate training and
testing sets. Instead, a crucial aspect of the training
protocol was the use of 10-fold cross-validation. This
technique partitioned the data into ten subsets, training
the model on nine and validating on the tenth, repeating
this process ten times. This method ensured that the
model's performance was not dependent on a specific
data split and reduced the risk of overfitting.
Model performance was assessed using a suite of
standard metrics for multi-class classification:
i.
Accuracy: The ratio of correctly predicted
instances to the total number of instances.
ii.
Precision: The ratio of true positive predictions
to the total positive predictions made by the
model. It indicates the model's ability to avoid
false positives.
iii.
Recall: The ratio of true positive predictions to
all actual positive instances. It measures the
model's ability to find all relevant instances.
iv.
F1-Score: The harmonic mean of precision and
recall. It provides a balanced measure of a
model's performance, particularly useful for
imbalanced datasets. The formula for the
v.
Area Under the Curve (AUC): A measure of
the model's ability to distinguish between
classes. A higher AUC value indicates a better-
performing model.
Accuracy, precision, recall, F1-score, and AUC are
widely recognized evaluation metrics for classification
tasks, as they capture complementary aspects of model
performance[18]. Prior studies in meta-learning and
algorithm selection confirm these measures as standard
practice for assessing predictive effectiveness[19]
4. Results and Discussion
In this section, we present the comprehensive results
obtained from the sentiment analysis experiments
conducted on the HausaMovieReview dataset. It details
the performance of both classical machine learning
models (Logistic Regression, Decision Tree, K-Nearest
Neighbors) and deep learning models (BERT and
RoBERTa).
4.1 Model Performance Evaluation
The key performance metrics Accuracy, F1-Score, were
computed for each model with Precision, Recall and
AUC in addition for classical models. The results,
including their corresponding standard deviations, are
summarized in table 3 below.
Table3: Model Performance for Classical Models
Model
Accu
racy
Preci
sion
Recal
l
F1
AUC
Decision Tree
89.71
%
90.02
%
89.71
%
89.60
%
93.45%
KNN
63.36
%
76.89
%
63.36
%
62.11
%
86.27%
Logistic
Regression
86.81
%
87.08
%
86.81
%
86.81
%
95.92%
The results in Table 1 indicate that all three classical
models achieved reasonable to excellent performance.
The Decision Tree classifier demonstrated superior
performance across most metrics, achieving the highest
mean accuracy (0.8971) and a strong F1-score (0.896).
This suggests that a rule-based, non-linear approach was
highly effective at classifying the sentiment of the Hausa
comments. The Logistic Regression model also
performed exceptionally well, with a high accuracy of
0.8681 and the highest AUC score of 0.9592, which
indicates its excellent ability to distinguish between all
three sentiment classes. In contrast, the KNN model
exhibited the lowest performance, with a mean accuracy
of 0.6336, suggesting it was less effective at handling the
high-dimensional, sparse TF-IDF data. Below are graph
for pictorial representations.
Figure 1: Accuracy of KNN, Logistic Regression and
Decision Tree Models Across 10 Folds
Figure 1 illustrates the accuracy of the three models over
10 separate validation folds. The plot clearly shows that
the Logistic Regression and Decision Tree models
consistently perform better than the KNN model, with
accuracy scores generally above 0.85 and 0.90,
respectively. The KNN model's accuracy remains
significantly lower, fluctuating between 0.60 and 0.65.
Figure 2: F1 Score of KNN, Logistic Regression and
Decision Tree Models Across 10 Folds
Figure 2, displays the F1 score for each model across 10
folds. The plot demonstrates a similar pattern to the
accuracy graph, with the Logistic Regression and
Decision Tree models exhibiting much higher F1 scores
than the KNN model.
Figure 3: AUC Score of KNN, Logistic Regression and
Decision Tree Models Across 10 Folds
Figure 3 shows the Area Under the Curve (AUC) score
for each model over 10 folds. According to the plot,
Logistic Regression consistently has the highest AUC
score, often above 0.95, while the Decision Tree model's
AUC is also high, typically above 0.92. The KNN model,
in contrast, shows a much lower and more volatile AUC
score.
Deep Learning Model Results: BERT
The
BERT
model
was
fine-tuned
on
the
HausaMovieReview dataset to analyze its performance
throughout the training process. The following sections
present the training and validation performance,
followed by the final evaluation on the test set..
Comparative Results of Deep Learning Models
Both the BERT and RoBERTa models were fine-tuned
on the KannySenti dataset, demonstrating effective
learning and similar performance trends. During training,
both models showed a rapid increase in validation
accuracy and F1-score in the early stages, followed by a
plateauing of these metrics and a fluctuation in validation
loss. This behavior, particularly after Step 40 for both
models, suggests that they began to overfit the training
data.
On the final held-out test set, the BERT model slightly
outperformed RoBERTa, achieving an accuracy of
79.7% and an F1-score of 0.7562, compared to
RoBERTa's 76.6% accuracy and 0.7292 F1-score. Table
4 below and subsequent graphs explain more.
Table 4 Evaluation Result for Transformer Models
Model
Evaluation
Loss
Evaluation
Accuracy
Evaluation
F1-Score
BERT
0.5906
0.797
0.7562
RoBERTa
0.6855
0.766
0.7292
Figure 4 Validation Accuracy and F1-Score during
BERT Model Fine-tuning
Figure 4 Shows the validation accuracy and macro-
averaged F1-score of the BERT model against training
steps. It visually represents the model's performance on
unseen validation data, highlighting its convergence and
predictive capability throughout the fine-tuning process.
Figure 5 Training and Validation Loss during BERT
Model Fine-tuning
Figure 5 illustrates the progression of training loss and
validation loss for the BERT model across various
training steps. It demonstrates the model's learning curve
on the training data and its generalization behavior on the
validation set, indicating potential points of overfitting.
Figure 6: Validation Accuracy and F1-Score during
RoBERTa Model Fine-tuning
Figure 6 Shows the validation accuracy and macro-
averaged F1-score of the RoBERTa model against
training steps. It visually represents the model's
performance on unseen validation data, indicating
convergence and overall predictive capability over the
fine-tuning process.
Figure 7 Training and Validation Loss during
RoBERTa Model Fine-tuning
Figure 7 illustrates the progression of training loss and
validation loss for the RoBERTa model across various
training steps. A sustained decrease in training loss
indicates effective learning on the training data, while the
behavior of validation loss highlights the model's
generalization capabilities and potential for overfitting.
5. Discussion of Findings
The most significant finding of this study is the
remarkable performance of the Decision Tree model,
which outperformed both the classical (Logistic
Regression and KNN) and the more advanced
transformer-based models, (BERT and RoBERTa). This
result is counterintuitive, as transformer models are
generally considered state-of-the-art for natural language
processing tasks due to their ability to capture complex,
contextual relationships in text.
Several factors may explain this surprising outcome. The
dataset, consisting of Kannywood movie comments, is
relatively small (5,000 comments). Transformer models,
particularly large pre-trained ones like BERT and
RoBERTa, require vast amounts of data to fully realize
their predictive power. In a limited data environment,
these models may be prone to overfitting or fail to fine-
tune their extensive pre-trained knowledge effectively to
the specific nuances of the Kannywood domain.
In contrast, the Decision Tree model, a non-linear
classifier, proved exceptionally well-suited to the task. It
appears to have effectively identified key features within
the TF-IDF vectorized data that are highly predictive of
sentiment, such as specific keywords, phrases, or their
combinations. Its hierarchical, rule-based nature may
have allowed it to create a series of effective decision
rules that generalize well to the local data distribution,
outperforming more complex models that could not fully
leverage their potential on a small dataset. This finding
suggests that for domain-specific and size-constrained
text corpora, carefully engineered classical models can
be more effective and computationally efficient than
resource-intensive transformers.
Dataset Reliability and Limitations
The reliability of our dataset is a critical factor in the
validity of these results. As detailed in the methodology,
the dataset underwent a three-way human annotation
process. The high Inter-Annotator Agreement (IAA),
achieved through a majority voting protocol, confirms
the consistency and quality of the sentiment labels. This
high IAA score suggests that the sentiment labels are
reliable and that the models were trained on a high-
quality ground-truth dataset.
Despite these promising results, several limitations of
this study must be acknowledged. First, the dataset is
restricted to Kannywood movie comments and may not
be generalizable to other domains or languages. The
models' performance on a broader range of text would
likely differ. Second, while the classical models
outperformed
the
transformers
in
this
specific
experiment, this does not invalidate the general
superiority of transformer architectures. It merely
highlights the importance of data volume and domain-
specificity for fine-tuning these models. A future study
could explore pre-training a transformer model
specifically on a massive corpus of Hausa-language text,
which may then yield superior results on a Kannywood-
specific task. Finally, the comments contained a
significant amount of code-switching between Hausa and
English, as well as informal language and slang, which
poses a unique challenge for any model.
6. Conclusion
This research addressed the critical gap in sentiment
analysis tools for Hausa-language movie reviews within
the rapidly growing Kannywood film industry. By
creating the HausaMovieReview dataset, a novel
resource of 5,000 manually annotated YouTube
comments from the LABARINA series, this study
provided a crucial foundation for advancing NLP in this
under-resourced domain.
The study investigated the application of both classical
machine learning and deep learning models for sentiment
classification. Surprisingly, the classical machine
learning models, particularly the Decision Tree
classifier, achieved exceptional performance on the
HausaMovieReview dataset, with an accuracy of 89.71%
and a macro-averaged F1-score of 90.02%. This
indicates that for this specific dataset, TF-IDF features
combined with a Decision Tree proved remarkably
effective.
The fine-tuned deep learning models, BERT ('bert-base-
uncased') and RoBERTa ('cardiffnlp/twitter-roberta-
base-sentiment-latest'),
also
demonstrated
strong
capabilities, achieving accuracies of 79.7% and 76.6%
and F1-scores of 75.62% and 72.92%, respectively.
While these results are commendable, they were notably
lower than those achieved by the classical Decision Tree
model in this study.
7. Future Work
Building upon the findings and addressing the limitations
of this study, several promising avenues for future
research emerge:
•
Dataset Expansion and Diversification: Future
efforts should focus on significantly expanding
the KannySenti dataset by annotating a larger
and more diverse collection of Kannywood
movie reviews from various online platforms
and genres.
•
Hausa-Specific
NLP
Resources
and
Preprocessing: Research into developing more
robust
Hausa-specific
NLP
tools
(e.g.,
improved tokenizers, stemmers/lemmatizers)
and preprocessing techniques tailored to Hausa
linguistic features and code-switching patterns
is crucial.
•
Deeper
Investigation
into
Classical
ML
Performance: Conduct further analysis to
understand why classical models, particularly
Decision
Tree
with
TF-IDF,
performed
exceptionally well. This could involve feature
importance
analysis,
examining
decision
boundaries, or comparing with other classical
ML techniques.
•
Exploration
of
Multilingual
and
Larger
Transformer
Models:
Fine-tuning
larger,
multilingual pre-trained models (e.g., Afro-
XLMR-Large, mDeBERTaV3, AfriBERTa-
large)
on
the
KannySenti
dataset
and
conducting a more in-depth comparison with
the current BERT and RoBERTa results is a key
next step.
•
Advanced
Fine-tuning
Techniques:
Experimenting with more advanced fine-tuning
techniques, such as multi-task learning (e.g.,
incorporating
related
tasks
like
topic
classification),
or
utilizing
techniques
specifically
designed
for
low-resource
scenarios, could lead to improved deep learning
model performance.
References
[1]
H. Eberhard and R. Hartner, “Addressing Data
Imbalance via Image Augmentation for
Automated Quality Inspection in Steel
Production,” pp. 174–181, 2024, doi:
10.1145/3674558.3674583.
[2]
K. R. Mabokela, M. Primus, and T. Celik,
“Advancing sentiment analysis for low-
resourced african languages using pre-trained
language models,” PLoS One, vol. 20, no. 6
JUNE, pp. 1–37, 2025, doi:
10.1371/journal.pone.0325102.
[3]
S. H. Muhammad et al., “Sentiment Analysis,”
2021.
[4]
I. Shode, D. I. Adelani, Ji. Peng, and A.
Feldman, “NollySenti: Leveraging Transfer
Learning and Machine Translation for Nigerian
Movie Sentiment Classification,” pp. 986–998,
2023, doi: 10.18653/v1/2023.acl-short.85.
[5]
M. Kumar, “Journal of Artificial Intelligence &
Cloud Computing Emotion Recognition in
Natural Language Processing : Understanding
How AI Interprets the Emotional Tone of Text,”
vol. 3, no. 6, pp. 1–5.
[6]
S. H. Muhammad et al., “HausaNLP: Current
Status, Challenges and Future Directions for
Hausa Natural Language Processing,” 2025,
[Online]. Available:
http://arxiv.org/abs/2505.14311
[7]
J. Stets, “Handbook of Social Psychology,”
Handb. Soc. Psychol., no. May, 2006, doi:
10.1007/0-387-36921-x.
[8]
M. Taboada, J. Brooke, and K. Voll, “Lexicon-
Based Methods for Sentiment Analysis,” no.
December 2009, 2011.
[9]
J. Zhou and J. Ye, “Sentiment analysis in
education research : a review of journal
publications,” Interact. Learn. Environ., vol. 0,
no. 0, pp. 1–13, 2021, doi:
10.1080/10494820.2020.1826985.
[10]
V. No et al., “Application of ChatGPT in
Improving Customer Sentiment Analysis for
Businesses,” vol. 5, no. 3, pp. 283–288, 2023.
[11]
L. Abualigah, H. E. Alfar, M. Shehab, A.
Mohammed, and A. Hussein, “Sentiment
Analysis in Healthcare : A Brief Review
Sentiment Analysis in Healthcare :,” no.
January, 2020, doi: 10.1007/978-3-030-34614-
0.
[12]
R. Kumar, M. Islam, M. Hasan, and S. Razia,
“Heliyon Sentiment analysis in multilingual
context : Comparative analysis of machine
learning and hybrid deep learning models,”
Heliyon, vol. 9, no. 9, p. e20281, 2023, doi:
10.1016/j.heliyon.2023.e20281.
[13]
R. Srivastava, “Comparative Analysis of
Lexicon and Machine Learning Approach for
Sentiment Analysis,” vol. 13, no. 3, pp. 71–77,
2022.
[14]
P. Pakray, “Natural language processing
applications for low-resource languages,” pp.
183–197, 2025, doi: 10.1017/nlp.2024.33.
[15]
S. Dargan, M. Kumar, M. Rohit, and A.
Gulshan, “A Survey of Deep Learning and Its
Applications : A New Paradigm to Machine
Learning,” no. 0123456789, 2019.
[16]
A. U. Rehman and A. K. Malik, “A Hybrid
CNN-LSTM Model for Improving Accuracy of
Movie Reviews Sentiment Analysis,” 2019.
[17]
F. Fazl, R. Alizadehsani, R. Lashgari, and A.
Talukder, “ BERT-Deep CNN: State-of-the
Artfor Sentiment Analysis of COVID-19
Tweets” 2022.
[18]
S.M. Abdulrahman and P. Brazdil, “Measures
for Combining Accuracy and Time for Meta-
learning,” in Proc. Meta-Learning and
Algorithm Selection Workshop at ECAI, 2014,
pp . 49-56.
[19]
S. M. Abdulrahman, P. Brazdil, J. N. Van Rijn,
and J. Vanschoren, “Speeding up algorithm
selection using average ranking and active
testing by introducing runtime,” Mach. Learn.,
vol. 107, no. 1, pp. 79–108, 2018, doi:
10.1007/s10994-017-5687-8.
|
HausaMovieReview: A Benchmark Dataset for Sentiment Analysis in Low-Resource African Language Asiya Ibrahim Zanga1, Salisu Mamman Abdulrahman2, Abubakar Ado3, Abdulkadir Abubakar Bichi3, Lukman Aliyu Jibril4, Abdulmajid Babangida Umar3, Alhassan Adamu2 Shamsuddeen Hassan Muhammad5, Bashir Salisu Abubakar 1 -Ma, Katsina, Nigeria (email: ). 2 (emails: ; , ). 3 (emails: , , ;). 4(email: ). 5 (email: ). Abstract The development of Natural Language Processing (NLP) tools for low-resource languages is critically hindered by the scarcity of annotated datasets. This paper addresses this fundamental challenge by introducing HausaMovieReview, a novel benchmark dataset comprising 5,000 YouTube comments in Hausa and code-switched English. The dataset was meticulously annotated by three independent annotators, demonstrating a robust agreement with a Fleiss' Kappa score of 0.85 between annotators. We used this dataset to conduct a comparative analysis of classical models (Logistic Regression, Decision Tree, K-Nearest Neighbors) and fine-tuned transformer models (BERT and RoBERTa). Our results reveal a key finding: the Decision Tree classifier, with an accuracy and F1-score 89.72% and 89.60% respectively, significantly outperformed the deep learning models. Our findings also provide a robust baseline, demonstrating that effective feature engineering can enable classical models to achieve state-of-the-art performance in low-resource contexts, thereby laying a solid foundation for future research. Keywords: Hausa, Kannywood, Low-Resource Languages, NLP, Sentiment Analysis 1. Introduction Nigeria has over 500 languages, but lack of datasets has led to their underrepresentation or exclusion from natural language processing (NLP) studies[1]. The amount of unstructured text has increased dramatically due to the internet's spectacular improvement. The proliferation of smart mobile devices and the innumerable number of mobile applications finding an analytical procedure that enables analysts to retain text in a way that will help them comprehend human ideas was therefore becoming more and more necessary. Being subjective beings, humans are capable of complex emotional expression. However, any choice they choose will be influenced by their opinions, it becomes vital to train information systems to identify persons at that level in order to be able to comprehend these viewpoints from vast volumes of material, which calls for more work from researchers[2]. While some progress has been made in bridging this gap like [3] many benchmark datasets for African languages are limited to a single domain, making them difficult to apply to other areas of interest [4]Sentiment analysis is one of the most popular tasks in NLP, with datasets for high-resource languages like English available across different domains, including Twitter, social media updates, product reviews and movie critiques. It is the task of identifying the emotional tone of text and has become a cornerstone of various fields, including marketing, customer service and social media monitoring[5]. While significant advancements have been made in sentiment analysis for widely spoken languages, research on low-resource languages like Hausa remains relatively under-explored. However, for movie review in Nigeria, the only available dataset is NollySenti [4], a Twitter sentiment classification dataset for five languages including English, Hausa, Igbo, Yoruba and Nigerian-pidgin. Despite the approach contributing significantly to the domain of movie review in these Nigerian languages, its applicability to other domains remains unclear. The Hausa-language cinema industry known as Kannywood, which is based in Northern Nigeria, has a large following and a profound cultural influence throughout the Sahel region of Africa. Producers, directors and marketers can help shape their tactics by gaining useful insights into audience preferences and opinions through sentiment analysis of reviews and social media comments regarding Kannywood movies. Nevertheless, compared to many other languages, the field of natural language processing (NLP) for Hausa is less established, which makes it difficult to extract sentiment from writings written in the language. The Kannywood industry has witnessed a surge in popularity, yet analyzing the sentiment expressed in these movies remains a challenge due to the lack of language-specific sentiment analysis tools. Existing sentiment analysis models often rely on large datasets, which are scarce for Hausa[6] Furthermore, the unique linguistic characteristics of Hausa, including its complex morphology and dialectal variations, pose additional hurdles for accurate sentiment classification. This study aims to address this gap by developing a sentiment analysis model for Kannywood movies, a popular Nigerian Hausa-language film industry, we propose a method to aid in the collection, filtering and annotation of data for a low-resource language. We introduce HausaMovieReview, an open-source sentiment dataset based on reviews of Hausa-language movies. Using comments from the LABARINA series, we collected 17,095 YouTube comments and manually annotated 5000 to build the Kannywood corpus for sentiment classification named HausaMovieReview for Hausa-language. Kannywood, which produces films about the Kano people, highlights Nigeria's diverse cultures with emphasis on the northern culture and is one of the country's largest film industries. the research contributions are summarized as follows: i. Datasets creation ii. Building a fine-tuned transformer model (BERT and RoBERTa), iii. Perform a comparative analysis using classical Machine Learning and Deep Learning models. 2. Literature Review Sentiment analysis, also known as opinion mining, involves processing textual data to assess and interpret the emotions expressed within it. Understanding "sentiment" is crucial; while often equated with feelings, sentiments focus more on opinions and attitudes than on factual information. According [7], feelings are instinctive responses to stimuli (e.g., joy, anger), while sentiment is an emotion influenced by one's opinions or perceptions. Psychologists categorize feelings into six primary emotions: love, joy, surprise, anger, sadness and fear. Sentiment analysis primarily aims to determine whether an opinion is positive, negative, or neutral, thereby simplifying complex emotional scenarios. One key aspect of sentiment is its universality; everyone experiences emotions, making each person a potential source of valuable data for sentiment analysis tools. This intrinsic characteristic of human emotions makes sentiment analysis a powerful tool across various domains, including business, politics and social media. Sentiment analysis has become an increasingly important area of research in natural language processing. Previous studies have explored various approaches to sentiment classification, including rule-based methods, machine learning techniques and deep learning models. However, there is still a lack of research on sentiment analysis for Kannywood movie reviews. Sentiment analysis has evolved significantly over the years. Early approaches relied primarily on lexicon-based methods, which involved using predefined sentiment lexicons to classify text[8]. Though sentiment analysis has been the focus of study in education [9], it has also yielded numerous successful applications in other domains, including as business [10]and medicine [11] Various machine learning algorithms were explored for sentiment analysis, including Support Vector Machines (SVM), Decision Trees (DT), Random Forests (RF), Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) networks[12]. While lexicon-based approaches were also considered, machine learning algorithms generally outperformed them in terms of accuracy[13]. Several researchers have proposed crosslingual NLP approaches to address the challenges of lowresource languages by leveraging resources from highresource languages like English [14]. For sentiment analysis, a common strategy is to translate comments from the original low-resource language (e.g., Hausa) into English and then apply well-performing English sentiment analysis models. Research on sentiment analysis in the movie domain has predominantly focused on English-language reviews. Early works by [15] established foundational approaches using machine learning models and manually engineered features. Subsequent studies explored deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), which significantly improved sentiment prediction accuracy [16]. However, the advent of transformer-based architectures like BERT and RoBERTa has set a new benchmark, with studies reporting state-of-the-art results across various sentiment analysis tasks [17]. 3. Methodology Among the objectives of this study is to develop a HausaMovieReview dataset and to evaluate a sentiment analysis model capable of accurately classifying the Kannywood movie comments into three distinct sentiment categories: positive, neutral and negative. To achieve this, a comprehensive methodology was designed integrating both classical machine learning (ML) techniques and advanced transformer-based models. This chapter details the entire process including the dataset preparation, the implementation of each model, the experimental setup and the chosen evaluation metrics used to assess performance. 3.1 Dataset Construction This section details the meticulous process undertaken to construct the HausaMovieReview dataset, a critical contribution of this research. The methodology encompasses data collection from a widely accessible online platform, a systematic sampling procedure, a rigorous three-phase annotation process by independent human experts and a comprehensive analysis of interannotator agreement. Data Collection and Sampling The primary source of data for this study is the comment sections of 13 episodes of the popular Kannywood series "Labarina" on YouTube. YouTube, as a global social media platform serves as a significant hub for usergenerated content and provides a rich source of genuine, spontaneous opinions. A web scraper was developed and utilized to collect all comments from these specific episodes. This collection resulted in a total of 17,095 raw comments. From this initial corpus, a subset of 5,000 comments was randomly sampled to form the final dataset for annotation. This sampling approach ensured that the annotated dataset was representative of the broader spectrum of opinions expressed on the platform, mitigating potential biases that could arise from selecting comments based on popularity or content. To maintain the integrity of the data, the comments were not preprocessed or altered in any way before annotation. The comments contain a mixture of pure Hausa, codeswitched English-Hausa and occasional loanwords from Arabic, reflecting the typical linguistic patterns observed in online communication within the region. 3.2 Annotation Process The annotation task was performed by three independent native Hausa speakers, each with a strong understanding of the language's nuances, idioms and cultural context. All three annotators possessed a deep familiarity with the Kannywood film industry, which was crucial for interpreting context-dependent comments. The annotation process was structured in three key phases: 1. Label Definition and Training: A clear set of guidelines was developed to define the three sentiment labels: Positive, Neutral and Negative. i. Positive: A comment expressing approval, praise, happiness, or a favorable opinion. Examples include "Masha Allah, a beautiful movie," or "Great acting!" ii. Neutral: A comment that does not convey a clear sentiment. This includes factual statements, questions, greetings, or off-topic remarks. Examples include "When is the next episode?" or "Hello from Niger." iii. Negative: A comment expressing disapproval, criticism, sadness, or a negative opinion. Examples include "This movie is a waste of time" or "The plot is very bad." The annotators were trained on these guidelines using a small pilot set of comments to ensure a shared understanding and consistent application of the labeling criteria. 2. Independent Annotation: Each of the three annotators independently labeled the entire set of 5,000 comments according to the defined guidelines. This independent approach was essential for establishing a reliable ground truth and for subsequently measuring the degree of agreement between them. 3. Majority Vote and Finalization: Upon completion of the independent annotations, the final label for each comment was determined using a majority-vote mechanism. If two or more annotators agreed on a sentiment, that sentiment was assigned as the final label. In cases where all three annotators assigned a different label (a rare occurrence), the comment was flagged for review and a consensus was reached through discussion. This process ensured that the final dataset, referred to as HausaMovieReview, was a highly reliable and consistent resource. 3.3 Inter-Annotator Agreement To validate the reliability and consistency of the annotation process, two key metrics were used to measure inter-annotator agreement (IAA): Cohen's Kappa (κ) and Fleiss' Kappa (κ). Cohen's Kappa measures pairwise agreement between two annotators, while Fleiss' Kappa extends this to measure the agreement among all three annotators. The scores obtained were provided in Table 1 and Table 2 as described below. Table 1: Inter-Annotator Agreement Scores Pair Cohen's/Fleiss' Kappa Annotator 1 vs Annotator 2 0.7975 Annotator 1 vs Annotator 3 0.9071 Annotator 2 vs Annotator 3 0.8903 Fleiss' Kappa (All Annotators) 0.865 Table 2: Annotator Label Distributions Annotator Negative Neutral Positive Annotator 1 1165 (23.30%) 996 (19.92%) 2839 (56.80%) Annotator 2 1165 (23.30%) 995 (19.90%) 2840 (56.80%) Annotator 3 1165 (23.30%) 995 (19.90%) 2840 (56.80%) Majority Vote 1165 (23.30%) 995 (19.90%) 2840 (56.80%) These high IAA scores are a testament to the quality of the dataset and the clarity of the annotation guidelines, ensuring that the HausaMovieReview dataset is a robust and trustworthy resource for sentiment analysis research. The complete dataset and all associated code are publicly available on GitHub. https://github.com/AsiyaZanga/HausaMovieReview.git 3.4 Experiment Setup This section details the experiment design used for this study including experiment steps, classical machine learning and deep learning and training settings Preprocessing The dataset for this study was a corpus of 5,000 comments extracted from comment sections of 13 episodes of LABARINA. Given the informal and multilingual nature of the data, a rigorous preprocessing pipeline was established. This involved converting all text to lowercase, removing punctuation, special characters and tokenizing the sentences into individual words. Feature Extraction: Term Frequency-Inverse Document Frequency (TF-IDF) The TF-IDF vectorization technique was employed to transform the preprocessed text data into a matrix of numerical features. TF-IDF assigns a weight to each word in a document, reflecting its importance within that specific comment relative to the entire corpus. A high TF-IDF score indicates a term that is both frequent in a document and rare across the entire dataset, making it a powerful feature for distinguishing between sentiment classes. Model Implementation Following the feature extraction, three distinct classical ML classifiers were trained and evaluated: 1. Logistic Regression (LR): This is a linear classification model that uses a logistic function to estimate the probability of a given comment belonging to a particular sentiment class. Despite its simplicity, LR is highly effective for text classification tasks, providing a strong baseline for comparison. 2. Decision Tree (DT): A non-linear, supervised learning model that partitions the data into a series of decisions, creating a tree-like structure. The DT model makes predictions by traversing the tree from the root to a leaf node, where each internal node represents a feature test and each leaf node represents a class label. 3. K-Nearest Neighbors (KNN): KNN is an instance-based, non-parametric algorithm that classifies a new data point based on the majority class among its k nearest neighbors in the feature space. The value of k was a key hyperparameter tuned during the training process to optimize performance. Transformer-based Approach To capture the complex semantic and contextual nuances of the comments, pre-trained transformer models were leveraged. This approach bypassed the need for manual feature engineering by using the models' inherent ability to learn rich, contextual embeddings from raw text. We have selected to models as described below: 1. BERT (Bidirectional Encoder Representations from Transformers): The BERT-base-uncased model was selected for its ability to learn bidirectional representations from a massive corpus of text. The model was fine-tuned for the sentiment classification task by adding a simple classification layer on top of its output, allowing it to adapt its pre-trained knowledge to the specific domain of Kannywood comments. 2. RoBERTa (A Robustly Optimized BERT Pretraining Approach): RoBERTa, an enhanced version of BERT, was also chosen. It was trained with an optimized methodology, including a larger dataset, a longer training period and the removal of the next sentence prediction objective. This robust training process often results in superior performance on downstream tasks compared to standard BERT. Training Setup The fine-tuning of both BERT and RoBERTa was conducted using a standard training protocol. The final classification layer was added to the pre-trained models. Key training parameters were carefully configured: a small learning rate 2×10-5, a batch size of 16 and a limited number of 3 epochs to prevent overfitting. The AdamW optimizer was used, as it is widely recommended for training transformer models. 3.5 Evaluation Method and Metrics To ensure a fair and robust evaluation of all models, a consistent experimental protocol was followed. The dataset was not partitioned into separate training and testing sets. Instead, a crucial aspect of the training protocol was the use of 10-fold cross-validation. This technique partitioned the data into ten subsets, training the model on nine and validating on the tenth, repeating this process ten times. This method ensured that the model's performance was not dependent on a specific data split and reduced the risk of overfitting. Model performance was assessed using a suite of standard metrics for multi-class classification: i. Accuracy: The ratio of correctly predicted instances to the total number of instances. ii. Precision: The ratio of true positive predictions to the total positive predictions made by the model. It indicates the model's ability to avoid false positives. iii. Recall: The ratio of true positive predictions to all actual positive instances. It measures the model's ability to find all relevant instances. iv. F1-Score: The harmonic mean of precision and recall. It provides a balanced measure of a model's performance, particularly useful for imbalanced datasets. The formula for the v. Area Under the Curve (AUC): A measure of the model's ability to distinguish between classes. A higher AUC value indicates a betterperforming model. Accuracy, precision, recall, F1-score, and AUC are widely recognized evaluation metrics for classification tasks, as they capture complementary aspects of model performance[18]. Prior studies in meta-learning and algorithm selection confirm these measures as standard practice for assessing predictive effectiveness[19] 4. Results and Discussion In this section, we present the comprehensive results obtained from the sentiment analysis experiments conducted on the HausaMovieReview dataset. It details the performance of both classical machine learning models (Logistic Regression, Decision Tree, K-Nearest Neighbors) and deep learning models (BERT and RoBERTa). 4.1 Model Performance Evaluation The key performance metrics Accuracy, F1-Score, were computed for each model with Precision, Recall and AUC in addition for classical models. The results, including their corresponding standard deviations, are summarized in table 3 below. Table3: Model Performance for Classical Models Model Accu racy Preci sion Recal l F1 AUC Decision Tree 89.71 % 90.02 % 89.71 % 89.60 % 93.45% KNN 63.36 % 76.89 % 63.36 % 62.11 % 86.27% Logistic Regression 86.81 % 87.08 % 86.81 % 86.81 % 95.92% The results in Table 1 indicate that all three classical models achieved reasonable to excellent performance. The Decision Tree classifier demonstrated superior performance across most metrics, achieving the highest mean accuracy (0.8971) and a strong F1-score (0.896). This suggests that a rule-based, non-linear approach was highly effective at classifying the sentiment of the Hausa comments. The Logistic Regression model also performed exceptionally well, with a high accuracy of 0.8681 and the highest AUC score of 0.9592, which indicates its excellent ability to distinguish between all three sentiment classes. In contrast, the KNN model exhibited the lowest performance, with a mean accuracy of 0.6336, suggesting it was less effective at handling the high-dimensional, sparse TF-IDF data. Below are graph for pictorial representations. Figure 1: Accuracy of KNN, Logistic Regression and Decision Tree Models Across 10 Folds Figure 1 illustrates the accuracy of the three models over 10 separate validation folds. The plot clearly shows that the Logistic Regression and Decision Tree models consistently perform better than the KNN model, with accuracy scores generally above 0.85 and 0.90, respectively. The KNN model's accuracy remains significantly lower, fluctuating between 0.60 and 0.65. Figure 2: F1 Score of KNN, Logistic Regression and Decision Tree Models Across 10 Folds Figure 2, displays the F1 score for each model across 10 folds. The plot demonstrates a similar pattern to the accuracy graph, with the Logistic Regression and Decision Tree models exhibiting much higher F1 scores than the KNN model. Figure 3: AUC Score of KNN, Logistic Regression and Decision Tree Models Across 10 Folds Figure 3 shows the Area Under the Curve (AUC) score for each model over 10 folds. According to the plot, Logistic Regression consistently has the highest AUC score, often above 0.95, while the Decision Tree model's AUC is also high, typically above 0.92. The KNN model, in contrast, shows a much lower and more volatile AUC score. Deep Learning Model Results: BERT The BERT model was fine-tuned on the HausaMovieReview dataset to analyze its performance throughout the training process. The following sections present the training and validation performance, followed by the final evaluation on the test set.. Comparative Results of Deep Learning Models Both the BERT and RoBERTa models were fine-tuned on the KannySenti dataset, demonstrating effective learning and similar performance trends. During training, both models showed a rapid increase in validation accuracy and F1-score in the early stages, followed by a plateauing of these metrics and a fluctuation in validation loss. This behavior, particularly after Step 40 for both models, suggests that they began to overfit the training data. On the final held-out test set, the BERT model slightly outperformed RoBERTa, achieving an accuracy of 79.7% and an F1-score of 0.7562, compared to RoBERTa's 76.6% accuracy and 0.7292 F1-score. Table 4 below and subsequent graphs explain more. Table 4 Evaluation Result for Transformer Models Model Evaluation Loss Evaluation Accuracy Evaluation F1-Score BERT 0.5906 0.797 0.7562 RoBERTa 0.6855 0.766 0.7292 Figure 4 Validation Accuracy and F1-Score during BERT Model Fine-tuning Figure 4 Shows the validation accuracy and macroaveraged F1-score of the BERT model against training steps. It visually represents the model's performance on unseen validation data, highlighting its convergence and predictive capability throughout the fine-tuning process. Figure 5 Training and Validation Loss during BERT Model Fine-tuning Figure 5 illustrates the progression of training loss and validation loss for the BERT model across various training steps. It demonstrates the model's learning curve on the training data and its generalization behavior on the validation set, indicating potential points of overfitting. Figure 6: Validation Accuracy and F1-Score during RoBERTa Model Fine-tuning Figure 6 Shows the validation accuracy and macroaveraged F1-score of the RoBERTa model against training steps. It visually represents the model's performance on unseen validation data, indicating convergence and overall predictive capability over the fine-tuning process. Figure 7 Training and Validation Loss during RoBERTa Model Fine-tuning Figure 7 illustrates the progression of training loss and validation loss for the RoBERTa model across various training steps. A sustained decrease in training loss indicates effective learning on the training data, while the behavior of validation loss highlights the model's generalization capabilities and potential for overfitting. 5. Discussion of Findings The most significant finding of this study is the remarkable performance of the Decision Tree model, which outperformed both the classical (Logistic Regression and KNN) and the more advanced transformer-based models, (BERT and RoBERTa). This result is counterintuitive, as transformer models are generally considered state-of-the-art for natural language processing tasks due to their ability to capture complex, contextual relationships in text. Several factors may explain this surprising outcome. The dataset, consisting of Kannywood movie comments, is relatively small (5,000 comments). Transformer models, particularly large pre-trained ones like BERT and RoBERTa, require vast amounts of data to fully realize their predictive power. In a limited data environment, these models may be prone to overfitting or fail to finetune their extensive pre-trained knowledge effectively to the specific nuances of the Kannywood domain. In contrast, the Decision Tree model, a non-linear classifier, proved exceptionally well-suited to the task. It appears to have effectively identified key features within the TF-IDF vectorized data that are highly predictive of sentiment, such as specific keywords, phrases, or their combinations. Its hierarchical, rule-based nature may have allowed it to create a series of effective decision rules that generalize well to the local data distribution, outperforming more complex models that could not fully leverage their potential on a small dataset. This finding suggests that for domain-specific and size-constrained text corpora, carefully engineered classical models can be more effective and computationally efficient than resource-intensive transformers. Dataset Reliability and Limitations The reliability of our dataset is a critical factor in the validity of these results. As detailed in the methodology, the dataset underwent a three-way human annotation process. The high Inter-Annotator Agreement (IAA), achieved through a majority voting protocol, confirms the consistency and quality of the sentiment labels. This high IAA score suggests that the sentiment labels are reliable and that the models were trained on a highquality ground-truth dataset. Despite these promising results, several limitations of this study must be acknowledged. First, the dataset is restricted to Kannywood movie comments and may not be generalizable to other domains or languages. The models' performance on a broader range of text would likely differ. Second, while the classical models outperformed the transformers in this specific experiment, this does not invalidate the general superiority of transformer architectures. It merely highlights the importance of data volume and domainspecificity for fine-tuning these models. A future study could explore pre-training a transformer model specifically on a massive corpus of Hausa-language text, which may then yield superior results on a Kannywoodspecific task. Finally, the comments contained a significant amount of code-switching between Hausa and English, as well as informal language and slang, which poses a unique challenge for any model. 6. Conclusion This research addressed the critical gap in sentiment analysis tools for Hausa-language movie reviews within the rapidly growing Kannywood film industry. By creating the HausaMovieReview dataset, a novel resource of 5,000 manually annotated YouTube comments from the LABARINA series, this study provided a crucial foundation for advancing NLP in this under-resourced domain. The study investigated the application of both classical machine learning and deep learning models for sentiment classification. Surprisingly, the classical machine learning models, particularly the Decision Tree classifier, achieved exceptional performance on the HausaMovieReview dataset, with an accuracy of 89.71% and a macro-averaged F1-score of 90.02%. This indicates that for this specific dataset, TF-IDF features combined with a Decision Tree proved remarkably effective. The fine-tuned deep learning models, BERT ('bert-baseuncased') and RoBERTa ('cardiffnlp/twitter-robertabase-sentiment-latest'), also demonstrated strong capabilities, achieving accuracies of 79.7% and 76.6% and F1-scores of 75.62% and 72.92%, respectively. While these results are commendable, they were notably lower than those achieved by the classical Decision Tree model in this study. 7. Future Work Building upon the findings and addressing the limitations of this study, several promising avenues for future research emerge: • Dataset Expansion and Diversification: Future efforts should focus on significantly expanding the KannySenti dataset by annotating a larger and more diverse collection of Kannywood movie reviews from various online platforms and genres. • Hausa-Specific NLP Resources and Preprocessing: Research into developing more robust Hausa-specific NLP tools (e.g., improved tokenizers, stemmers/lemmatizers) and preprocessing techniques tailored to Hausa linguistic features and code-switching patterns is crucial. • Deeper Investigation into Classical ML Performance: Conduct further analysis to understand why classical models, particularly Decision Tree with TF-IDF, performed exceptionally well. This could involve feature importance analysis, examining decision boundaries, or comparing with other classical ML techniques. • Exploration of Multilingual and Larger Transformer Models: Fine-tuning larger, multilingual pre-trained models (e.g., AfroXLMR-Large, mDeBERTaV3, AfriBERTalarge) on the KannySenti dataset and conducting a more in-depth comparison with the current BERT and RoBERTa results is a key next step. • Advanced Fine-tuning Techniques: Experimenting with more advanced fine-tuning techniques, such as multi-task learning (e.g., incorporating related tasks like topic classification), or utilizing techniques specifically designed for low-resource scenarios, could lead to improved deep learning model performance. References [1] H. Eberhard and R. Hartner, "Addressing Data Imbalance via Image Augmentation for Automated Quality Inspection in Steel Production," pp. 174-181, 2024, [2] K. R. Mabokela, M. Primus, and T. Celik, "Advancing sentiment analysis for lowresourced african languages using pre-trained language models," PLoS One, vol. 20, no. 6 JUNE, pp. 1-37, 2025, [3] S. H. Muhammad et al., "Sentiment Analysis," 2021. [4] I. Shode, D. I. Adelani, Ji. Peng, and A. Feldman, "NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification," pp. 986-998, 2023, [5] M. Kumar, "Journal of Artificial Intelligence & Cloud Computing Emotion Recognition in Natural Language Processing : Understanding How AI Interprets the Emotional Tone of Text," vol. 3, no. 6, pp. 1-5. [6] S. H. Muhammad et al., "HausaNLP: Current Status, Challenges and Future Directions for Hausa Natural Language Processing," 2025, [Online]. Available: http://arxiv.org/abs/2505.14311 [7] J. Stets, "Handbook of Social Psychology," Handb. Soc. Psychol., no. May, 2006, [8] M. Taboada, J. Brooke, and K. Voll, "LexiconBased Methods for Sentiment Analysis," no. December 2009, 2011. [9] J. Zhou and J. Ye, "Sentiment analysis in education research : a review of journal publications," Interact. Learn. Environ., vol. 0, no. 0, pp. 1-13, 2021, [10] V. No et al., "Application of ChatGPT in Improving Customer Sentiment Analysis for Businesses," vol. 5, no. 3, pp. 283-288, 2023. [11] L. Abualigah, H. E. Alfar, M. Shehab, A. Mohammed, and A. Hussein, "Sentiment Analysis in Healthcare : A Brief Review Sentiment Analysis in Healthcare :," no. January, 2020, 0. [12] R. Kumar, M. Islam, M. Hasan, and S. Razia, "Heliyon Sentiment analysis in multilingual context : Comparative analysis of machine learning and hybrid deep learning models," Heliyon, vol. 9, no. 9, p. e20281, 2023, [13] R. Srivastava, "Comparative Analysis of Lexicon and Machine Learning Approach for Sentiment Analysis," vol. 13, no. 3, pp. 71-77, 2022. [14] P. Pakray, "Natural language processing applications for low-resource languages," pp. 183-197, 2025, [15] S. Dargan, M. Kumar, M. Rohit, and A. Gulshan, "A Survey of Deep Learning and Its Applications : A New Paradigm to Machine Learning," no. 0123456789, 2019. [16] A. U. Rehman and A. K. Malik, "A Hybrid CNN-LSTM Model for Improving Accuracy of Movie Reviews Sentiment Analysis," 2019. [17] F. Fazl, R. Alizadehsani, R. Lashgari, and A. Talukder, " BERT-Deep CNN: State-of-the Artfor Sentiment Analysis of COVID-19 Tweets" 2022. [18] S.M. Abdulrahman and P. Brazdil, "Measures for Combining Accuracy and Time for Metalearning," in Proc. Meta-Learning and Algorithm Selection Workshop at ECAI, 2014, pp . 49-56. [19] S. M. Abdulrahman, P. Brazdil, J. N. Van Rijn, and J. Vanschoren, "Speeding up algorithm selection using average ranking and active testing by introducing runtime," Mach. Learn., vol. 107, no. 1, pp. 79-108, 2018,
|
2509.16273
|
SubDyve: Subgraph-Driven Dynamic Propa-
gation for Virtual Screening Enhancement
Controlling False Positive
Jungseob Yi1
Seoyoung Choi2
Sun Kim1,2,3,4
Sangseon Lee5
1Interdisciplinary Program in Artificial Intelligence, Seoul National University
2Department of Computer Science and Engineering, Seoul National University
3Interdisciplinary Program in Bioinformatics, Seoul National University
4AIGENDRUG Co., Ltd., Seoul
5Department of Artificial Intelligence, Inha University
Abstract
Virtual screening (VS) aims to identify bioactive compounds from vast
chemical libraries, but remains difficult in low-label regimes where only a
few actives are known. Existing methods largely rely on general-purpose
molecular fingerprints and overlook class-discriminative substructures criti-
cal to bioactivity. Moreover, they consider molecules independently, limiting
effectiveness in low-label regimes. We introduce SubDyve, a network-based
VS framework that constructs a subgraph-aware similarity network and
propagates activity signals from a small known actives. When few active
compounds are available, SubDyve performs iterative seed refinement, incre-
mentally promoting new candidates based on local false discovery rate. This
strategy expands the seed set with promising candidates while controlling
false positives from topological bias and overexpansion. We evaluate Sub-
Dyve on ten DUD-E targets under zero-shot conditions and on the CDK7
target with a 10-million-compound ZINC dataset. SubDyve consistently
outperforms existing fingerprint or embedding-based approaches, achieving
margins of up to +34.0 on the BEDROC and +24.6 on the EF1% metric.
1
Introduction
The chemical space in drug discovery is vast, comprising more than 1060 synthetically
accessible drug-like molecules (Virshup et al., 2013). Exhaustive exploration is infeasible,
making virtual screening (VS) a key tool for identifying promising compounds small enough
for experimental validation. However, in early stage discovery, most protein targets lack
substantial ligand data; researchers often start with only a few known active molecules (Deng
et al., 2024; Jiang et al., 2024; Chen et al., 2024; Scott et al., 2016). The central challenge in
such low-data settings is to retrieve additional actives from billions of candidates, given only
a target protein and sparse activity labels.
Deep learning approaches to virtual screening fall into two main categories: supervised
models and foundation models. The first trains on large, balanced datasets using graph
neural networks or 3D molecular representations, but requires extensive labeled data and
often overfits in low-data settings. The second leverages foundation models (FMs) pre-trained
on large-scale unlabeled molecular corpora to support inference with minimal supervision.
Representative FMs include ChemBERTa (Ahmad et al., 2022), MolBERT (Fabian et al.,
2020), MoLFormer (Ross et al., 2022), GROVER (Rong et al., 2020), and AMOLE (Lee
et al., 2024), which support zero-shot VS. Protein-language-model pipelines (Lam et al.,
2024) show similar promise in structure-free contexts. However, FM-based methods screen
compounds independently, failing to capture molecular dependencies.
1
arXiv:2509.16273v1 [cs.LG] 18 Sep 2025
An orthogonal line of approach addresses these limitations with network-based label-efficient
learning (Yi et al., 2023; Saha et al., 2024; Ma et al., 2024). Among these, network propagation
(NP) has emerged as a promising and effective strategy. NP treats known actives as seed
nodes in a molecular graph and diffuses influence across networks to prioritize candidates
based on their global connectivity to the seed set. This framework captures higher-order
molecular associations and naturally supports generalization from few labeled molecules.
Despite its promise, two critical limitations of NP remain to be resolved. First, VS tasks
often hinge on substructural variations between closely related molecules (Ottan`a et al.,
2021; Stumpfe et al., 2019). Yet standard NP relies on similarity graphs uses general-purpose
fingerprints (e.g., ECFP), which fail to encode fine-grained subgraph features that distinguish
actives from inactive molecules (Yi et al., 2023), often blurring critical activity-relevant
distinctions. Second, NP inherits the topological bias of the underlying graph: nodes in dense
clusters may be ranked highly due to connectivity alone, inflating false positives, particularly
when the seed set is small (Picart-Armada et al., 2021).
To address these limitations, we propose SubDyve, a graph-based virtual screening framework
for label-efficient compound prioritization. Rather than relying on generic molecular finger-
prints, SubDyve builds a subgraph fingerprint graph using class-discriminative substructures
mined via supervised subgraph selection (Lim et al., 2023). It then performs iterative seed
refinement guided by local false discovery rate (LFDR) estimates (Efron, 2005), expanding
high-confidence compounds as new seeds while controlling false positives. This process is
integrated into a joint learning framework that trains a graph neural network with objectives
for classification, ranking, and contrastive embedding. By combining subgraph-aware graph
construction with uncertainty-calibrated propagation, SubDyve improves precision and gen-
eralization under sparse supervision. We evaluate SubDyve on the DUD-E benchmark and a
10M-compound ZINC/PubChem dataset for CDK7 target, where it achieves strong early
enrichment using substantially fewer labels than deep learning and mining-based baselines.
Our contributions are as follows:
• We demonstrate that SubDyve achieves state-of-the-art performance on public and
large-scale datasets under severe label constraints.
• We propose a subgraph fingerprint graph construction method that identifies class-
discriminative subgraphs, preserving subtle activity-defining features that are over-
looked by conventional fingerprints.
• We introduce an LFDR-based seed refinement mechanism that overcomes graph-
induced bias and enhances screening specificity while controlling false positive rates.
2
Related Work
Representation-Centric Virtual Screening
Traditional VS methods use fixed fin-
gerprints (e.g., ECFP, MACCS) or 3D alignments with shallow classifiers, but often miss
substructural patterns critical to bioactivity. Recent deep learning approaches embed molec-
ular structures into task-optimized latent spaces. PharmacoMatch (Rose et al., 2025) frames
pharmacophore screening as neural subgraph matching over 3D features. PSICHIC (Koh
et al., 2024) and BIND (Lam et al., 2024) integrate protein sequence embeddings with
ligand graphs. Large-scale pretrained encoders like ChemBERTa (Ahmad et al., 2022),
MoLFormer (Ross et al., 2022), and AMOLE (Lee et al., 2024) reduce label demands via
foundation model generalization. However, these methods treat compounds independently,
ignoring higher-order molecular dependencies.
Label-Efficient Network Propagation
Network propagation (NP) enables label-efficient
VS by diffusing activity signals over molecular similarity graphs. Yi et al. (Yi et al., 2023)
construct target-aware graphs from multiple similarity measures to rank candidates from
known actives. GRAB (Yoo et al., 2021) applies positive unlabeled (PU) learning to infer
soft labels from few positives, demonstrating robustness in low-supervision settings. While
NP-based methods capture higher-order dependencies, they often rely on generic fingerprints
(e.g., ECFP) that overlook discriminative substructures and suffer from topological bias,
2
Unlabeled
Class-discriminative
subgraph pattern mining
Subgraph pattern
fingerprint
…
...
...
Compound library
Labeled
Subgraph fingerprint
Network ( )
Cosine similarity on
subgraph fingerprint
: Compounds
:
NP
Final
network propagation
GCN
GNN
Feature encoding
x 𝑀Iteration
Ensemble 𝑁seed weights
B
Propagation with seed 𝑆1
LFDR based
seed refinement
x 𝑁stratified split seed 𝑆(𝑆1/𝑆2)
A
Subgraph-guided graph construction
Dynamic seed refinement
with LFDR
LFDR-based iterative seed expansion
B
Prioritize compound
Ensemble split seed &
Network propagation
Filter pattern matching
Seed weight
NP+PCA hybrid
rank score
PCA similarity
Propagation
score
ChemBERTa
Subgraph pattern
fingerprint
Propagation &
Evaluation 𝑆2 rank
𝐿𝐵𝐶𝐸
𝐿𝑅𝑎𝑛𝑘
𝐿𝐶𝑜𝑛𝑡𝑟𝑎𝑠𝑡
Best
seed weight
Figure 1:
Architecture of SubDyve Framework. (A) Overall process of SubDyve. Consists
of subgraph-similariy network construction, dynamic seed refinement with LFDR, and
prioritization. (B) Dynamic seed refinement with LFDR: seed weights are iteratively updated
within each stratified split and aggregated for final prioritization.
inflating false positives under sparse or uneven labeling (Picart-Armada et al., 2021; Hill
et al., 2019).
Substructure-Aware Similarity Graphs
Recent work enhances molecular graphs with
substructure-aware representations to capture subtle activity-relevant patterns. Supervised
Subgraph Mining (SSM) (Lim et al., 2023) identifies class-specific motifs that improve
prediction and reveal mechanistic effects like toxicity. ACANet (Shen et al., 2024) applies
attention over local fragments to detect activity cliffs. While effective in property prediction,
such methods remain underexplored in virtual screening, where subgraph-aware graphs could
better resolve activity-specific features.
3
Methodology
In this section, we present the architecture of SubDyve, a virtual screening framework for
settings with few known actives. SubDyve first constructs a subgraph fingerprint network
using class-discriminative subgraph patterns (Figure 1A). Based on this network, it performs
dynamic seed refinement guided by LFDR to iteratively update seed weights (Figure 1B). To
ensure robustness, refinement is repeated across N settings, and the ensembled seed weights
are used for a final network propagation to prioritize unlabeled compounds.
3.1
Problem Formulation
We define virtual screening as the problem of ranking a large set of unlabeled candidate
compounds, especially under a low-label regime. Let Q be the set of candidate molecules,
and C ⊂Q the subset of known actives against a target protein p. The goal is to assign
a relevance score r(q) to each q ∈Q such that compounds in C are ranked higher than
inactives. In low-label settings, a small subset Strain, Stest ⊂C of seed actives is available
(Strain ∩Stest = ∅). We assume access to a compound similarity graph G = (V, E) over Q,
where each node v ∈V is a compound and each edge (i, j) ∈E encodes structural similarity.
The task is to propagate activity signals from Strain over G and assign relevance scores r(q)
to all q ∈Q, prioritizing Stest.
3
3.2
Subgraph Fingerprint Network Construction
We first mine class-discriminative subgraph patterns SP from the labeled seed set Strain
using the SSM algorithm (Lim et al., 2023) with curated negative molecules. Each molecule is
then encoded as a d-dimensional subgraph pattern fingerprint, where each dimension reflects
the frequency of a discriminative subgraph combination (DiSC) (see Appendix B.2.1 and
B.2.2 for details).
We filter the candidate set Q to retain only compounds that match at least one subgraph
in SP, forming a reduced set Q′. A subgraph fingerprint graph G is then constructed over
Q′ using pairwise cosine similarity between subgraph pattern fingerprints (Appendix B.2.3).
This graph serves as the foundation for network propagation and compound ranking.
3.3
Dynamic Seed Refinement with LFDR Estimation
Using Strain as seed nodes, we perform initial network propagation over G to assign soft rele-
vance scores to all compounds. While this provides a baseline prioritization, its effectiveness
is limited by the small size of Strain. Signals tend to diffuse broadly or become biased toward
topologically dense regions, resulting in reduced specificity and inflated false positives.
To address this, SubDyve introduces a dynamic seed refinement procedure that iteratively
improves the seed set using GNN-based inference and local false discovery rate (LFDR)
estimation. To enable robust screening, we stratify Strain into disjoint subsets S1 and S2,
where S1 initiates the initial network propagation, and S2 guides seed refinement via iterative
loss updates in both the GNN and propagation modules. This mechanism enables confident
expansion of the supervision signal while suppressing propagation-induced errors.
3.3.1
Feature Encoding for GNN
Before refinement, we compute feature vectors for all compounds using SMILES (Weininger,
1988) and graph-derived descriptors. Each compound i ∈Q′ is encoded as:
xi = [wi, nNP
i
, f FP
i
, sPCA
i
, hhyb
i
, ePT–CB
i
],
(1)
where wi denotes a weight of seed S1 for network propagation. nNP
i
is a network propagation
score using wi. sPCA
i
is a RBF similarity to seed S1 in PCA latent space, and hhyb
i
is a hybrid
ranking computed as the average of the PCA and NP ranks, (rank(sPCA
i
) + rank(nNP
i
))/2.
ePT–CB
i
and f FP
i
encode semantic and substructural properties, respectively. Details of the
feature encoding are described in Appendix B.3.
3.3.2
Iterative Seed Refinement with LFDR Estimation
Building on the initial propagation with S1, SubDyve performs iterative seed refinement over
hyperparameter M iterations. In each iteration, the model is trained to recover held-out
actives in S2 through three steps: (1) GNN training with a composite loss, (2) LFDR-guided
seed refinement, and (3) network propagation and evaluation.
(1) GNN Training. A graph neural network processes the subgraph fingerprint graph G to
produce logits ˆli and embeddings zi for compound i. The model is trained using a composite
loss that combines three objectives:
Ltotal = (1 −λrank) · LBCE + λrank · LRankNet
+ λcontrast · LContrast.
(2)
Here, LBCE is a binary cross-entropy loss that adjusts for class imbalance by weighting active
compounds more heavily according to their low prevalence. LRankNet is a pairwise ranking loss
that encourages known actives in the held-out set S2 to be ranked above unlabeled candidates.
LContrast is a contrastive loss applied over the held-out set S2, where each compound forms
a positive pair with its most similar member in S2, while treating the remaining compounds
in S2 as negatives. The coefficients λrank and λcontrast are hyperparameters controlling the
contribution of each loss term. Full loss definitions and model optimization are described in
Appendix B.4.
4
(2) LFDR-Guided Seed Refinement. Using the logits ˆli, we compute a standardized
score:
zi =
ˆli −µ
σ
,
qi = LFDR(zi),
(3)
where µ and σ are computed from the GNN logits of all compounds in Q′. Details of LFDR
algorithm is described in Algorithm 3 at Appendix B. Compounds with qi < τFDR are added
to the seed set, and those with qi > τFDR are removed. The corresponding seed weights for
subsequent network propagation are updated as:
wi ←wi + β · (σ(zi) −π0),
(4)
where σ is the sigmoid function and π0 is the prior null probability. β denotes a hyperpa-
rameter to control update rate. This procedure ensures a provable upper bound on the false
discovery rate (Efron, 2005), as detailed in Proposition 1.
Proposition 1. Let Z1, . . . , Zm follow the two–group mixture f(z) = π0f0(z) + π1f1(z) and
define the selection rule
Rα = {i : lfdr(Zi) ≤α},
0 < α < 1.
Denote Rα = |Rα| and Vα = Pm
i=1 I{i ∈Rα}Hi. Then
mFDR(Rα) = E[Vα]
E[Rα] ≤α,
(1)
FDR(Rα) = E
h
Vα
Rα ∨1
i
≤α.
(2)
The proof is provided in Appendix A.
(3) Network Propagation and Evaluation. We apply network propagation with the
updated seed weights and evaluate performance on S2 using enrichment factor at early ranking
thresholds. If enrichment improves, the iteration continues; otherwise, early stopping is
triggered. Among all iterations, we retain the seed weights that yield the highest performance
on S2 for final ensemble aggregation.
3.4
Final Aggregation and Prioritization
To improve robustness under limited supervision, we perform N stratified splits of the initial
seed set Strain into disjoint subsets S1 and S2. From each split, we retain the best-performing
seed weights on S2 and aggregate them across all N splits using element-wise max pooling
to construct the final ensemble seed vector. Using this ensembled vector, we perform a final
round of network propagation over G to score all compounds in Q′, producing the final
compound ranking used for virtual screening evaluation.
4
Experiments
In this section, we evaluate the SubDyve framework on a set of virtual screening tasks,
including 10 benchmark targets from the DUD-E dataset and a real-world case study curated
using ZINC20 (Irwin et al., 2020) compounds and PubChem (Kim et al., 2023) active com-
pounds. This evaluation highlights the applicability of the framework in both standardized
benchmarks and real-world drug discovery environments. Detailed experimental setup and
hyperparameter configurations are provided in Appendix C. We present comprehensive com-
parisons with state-of-the-art methods, alongside extensive ablation studies, interpretation
analyzes, and case studies to highlight the effectiveness and robustness of each component.
4.1
Virtual Screening Performance
4.1.1
Zero-Shot Screening on DUD-E Targets
Task.
The goal of this evaluation is to assess whether SubDyve can effectively generalize in
a zero-shot virtual screening setting by prioritizing active compounds without target-specific
training.
5
Table 1: Performance comparison of SubDyve and baselines on the ten DUD-E targets. The
top results are shown in bold, and the second-best are underlined, respectively. Confidence
intervals are reported with 100 bootstrap resamples (DiCiccio & Efron, 1996). The complete
results, including all baselines and metrics are at Appendix F.1.
Protein Target
SubDyve (Ours)
PharmacoMatch (Rose et al., 2025)
CDPKit (Seidel, 2024)
DrugCLIP (Gao et al., 2023)
MoLFormer (Ross et al., 2022)
AutoDock Vina (Eberhardt et al., 2021)
BEDROC
EF1%
BEDROC
EF1%
BEDROC
EF1%
BEDROC
EF1%
BEDROC
EF1%
BEDROC
EF1%
ACES
86±2
57.0±2.4
18±1
8.4±1.4
16±2
5.5±1.3
52±2
32.4±1.7
24±2
8.3±0.7
33±1
13.87±0.5
ADA
83±4
50.6±5.3
44±4
16.7±4.1
82±3
53.6±4.3
82±3
60.2±5.3
72±1
48.3±0.9
7±2
1.05±1.7
ANDR
72±2
37.1±2.1
33±2
15.8±1.9
26±2
12.6±2.1
64±3
34.3±2.4
9±1
3.0±0.1
34±1
18.41±0.6
EGFR
86±2
60.0±2.3
11±1
3.1±0.7
26±2
12.2±1.6
40±2
28.7±2.1
75±2
48.1±2.8
14±1
3.68±0.7
FA10
58±2
46.8±1.7
1±1
0.2±0.2
6±1
0.0±0.0
86±1
51.2±1.8
66±0
36.7±0.4
41±1
15.77±0.8
KIT
44±3
13.8±2.6
4±1
0.0±0.0
9±2
1.1±0.8
10±2
5.2±1.7
66±1
36.8±0.9
18±2
2.97±1.9
PLK1
85±3
51.7±4.0
9±2
1.5±1.3
39±3
5.7±2.3
66±4
45.0±4.0
69±0
35.2±4.0
13±1
1.83±0.3
SRC
61±2
35.0±1.8
27±1
6.0±1.0
28±1
11.1±1.2
16±1
8.1±1.3
48±1
21.5±1.5
13±1
4.00±0.5
THRB
61±2
36.6±2.0
22±1
5.9±1.0
35±2
11.8±1.5
83±1
46.9±1.7
6±1
1.2±0.1
25±1
4.31±1.0
UROK
37±3
25.6±2.4
4±1
0.6±0.7
55±3
24.5±2.8
73±3
48.1±3.1
36±2
10.0±1.5
28±1
7.90±0.7
Avg. rank
1.6
1.6
4.5
4.4
3.6
3.7
2.4
2.0
3.0
3.3
4.0
4.0
Dataset & Evaluation.
We follow the evaluation protocol of PharmacoMatch (Rose
et al., 2025), using the same 10 protein targets from the DUD-E benchmark (Mysinger et al.,
2012). Since SubDyve requires a small number of active molecules to construct subgraph
newtork and initiate network propagation, directly using actives from the test targets would
violate the zero-shot assumption and compromise fairness. To ensure fair comparison, we
curated related proteins from PubChem using MMseqs2 (Steinegger & S¨oding, 2017) tool
with similarity thresholds of 0.9 to filter out proteins in PubChem. Using a 0.9 threshold
helps filter out identical and highly similar targets. Bioactive compounds associated with
these PubChem proteins were then used as seed molecules. Detailed filtering criteria and
dataset statistics are provided in Appendix D. For evaluation, we measure early recognition
metrics: BEDROC (α = 20), EFN%, as well as AUROC for completeness. Full metric panels
are in Appendix C.3.
Performance.
Table 1 reports BEDROC (α = 20) and EF1% scores, highlighting Sub-
Dyve’s early recognition performance. Full results, including AUROC and per-target metrics,
are provided in Appendix Table 11 and 12. SubDyve achieves the highest average rank across
all metrics; 1.6 for BEDROC and 1.6 for EF1%. For example, on EGFR and PLK1, two
pharmacologically important targets known for their conformational flexibility and multiple
ligand binding modes (Zhao et al., 2018; Murugan et al., 2011), SubDyve achieved BEDROC
scores of 86 and 85, substantially higher than those of MoLFormer (75 and 69). Even for
structurally challenging targets such as ACES, which features a deep and narrow binding
gorge that imposes strict shape complementarity and limits ligand accessibility (Mishra &
Basu, 2013), SubDyve yields meaningful enrichment (EF1% = 57.0), substantially higher
than DrugCLIP (32.4) and AutoDock Vina (13.87). These results highlight the robustness
of SubDyve across diverse target profiles and demonstrate its advantage in capturing early
active hits.
4.1.2
PU-Style Screening on CDK7 Target
Task.
This task simulates a realistic virtual screening scenario with few known actives for
a given target. We mask a portion of actives to evaluate the model’s ability to rank them
highly among many unlabeled candidates.
Dataset & Evaluation.
We construct a PU (positive-unlabeled) dataset consisting of
1,468 CDK7-active compounds from PubChem and 10 million unlabeled molecules from
ZINC. To ensure efficiency, we apply a subgraph-based reduction strategy that retains only
compounds containing task-relevant substructures, yielding a filtered subset of approximately
30,000 ZINC compounds. We randomly select 30% of the actives for Strain, from which
only 10% are used as a held-out set S2. These S2 compounds are excluded from subgraph
generation to prevent leakage. Each experiment is repeated five times with different random
seeds. We report BEDROC (α = 85) and EF scores at various thresholds to assess early
recognition. Detailed statistics and explanation of the evaluation settings are provided in
Appendix D.2.
Baselines.
We compare SubDyve with multiple baselines: (i) data mining-based methods
using 12 standard molecular fingerprints (Yi et al., 2023), (ii) BIND (Lam et al., 2024), a
foundation model trained on 2.4 million BindingDB interactions, (iii) PSICHIC (Koh et al.,
6
Table 2:
Performance comparison of SubDyve and baselines on the PU dataset. The top
results are shown in bold, and the second-best are underlined, respectively. The complete
results, including all baselines are at Appendix F.2.
Method
BEDROC (%)
EF
0.5%
1%
3%
5%
10%
Deep learning-based
BIND (Lam et al., 2024)
-
-
-
-
-
0.04 ± 0.08
AutoDock Vina (Eberhardt et al., 2021)
1.0 ± 1.3
-
0.2 ± 0.3
0.6 ± 0.7
1.1 ± 0.6
1.2 ± 0.5
DrugCLIP (Gao et al., 2023)
2.7 ± 1.26
1.63 ± 1.99
1.63 ± 0.81
2.45 ± 1.02
2.53 ± 1.35
2.69 ± 0.62
PSICHIC (Koh et al., 2024)
9.37 ± 3.08
4.07 ± 2.58
6.92 ± 3.30
7.48 ± 2.47
7.02 ± 1.80
5.35 ± 0.94
GRAB (Yoo et al., 2021)
40.68 ± 10.60
44.22 ± 8.35
45.21 ± 5.63
29.78 ± 1.38
18.69 ± 0.47
10.00 ± 0.00
Data mining-based
avalon + NP (Yi et al., 2023)
77.59 ± 1.72
135.76 ± 6.44
87.58 ± 2.9
31.55 ± 0.54
19.67 ± 0.4
9.88 ± 0.16
estate + NP (Yi et al., 2023)
52.44 ± 6.19
94.4 ± 13.68
57.87 ± 7.15
22.71 ± 2.7
15.92 ± 0.85
8.24 ± 0.38
fp4 + NP (Yi et al., 2023)
69.62 ± 3.69
122.76 ± 13.02
75.01 ± 4.21
28.96 ± 1.34
18.36 ± 1.0
9.59 ± 0.29
graph + NP (Yi et al., 2023)
75.86 ± 3.99
126.72 ± 10.05
84.73 ± 3.74
31.68 ± 0.92
19.1 ± 0.47
9.75 ± 0.24
maccs + NP (Yi et al., 2023)
75.44 ± 4.85
135.72 ± 12.7
79.82 ± 4.76
31.0 ± 1.41
18.93 ± 0.66
9.67 ± 0.21
pubchem + NP (Yi et al., 2023)
63.48 ± 5.16
99.17 ± 10.17
69.3 ± 7.08
30.87 ± 1.27
18.77 ± 0.9
9.84 ± 0.15
rdkit + NP (Yi et al., 2023)
79.04 ± 1.96
148.69 ± 4.25
89.24 ± 2.08
31.68 ± 0.92
19.02 ± 0.55
9.55 ± 0.3
standard + NP (Yi et al., 2023)
72.42 ± 3.51
121.97 ± 15.51
84.34 ± 5.56
31.27 ± 0.96
19.01 ± 0.33
9.71 ± 0.24
SubDyve (Ours)
83.44 ± 1.44
155.31 ± 6.38
97.59 ± 1.44
33.01 ± 0.60
19.90 ± 0.18
10.00 ± 0.00
Statistical Significance (p-value)
**
-
**
*
-
-
2024), which learns protein-ligand interaction fingerprints, and (iv) GRAB (Yoo et al., 2021),
a PU learning algorithm. For the data mining baselines, we construct a chemical similarity
network and apply one round of propagation.
Performance.
Table 2 presents the screening results.
SubDyve achieves the highest
BEDROC score (83.44) and consistently leads across enrichment thresholds, including
EF0.5% (155.31), EF1% (97.59), and EF3% (33.01). Compared to GRAB (Yoo et al., 2021),
a PU learning baseline, SubDyve improves EF1% by more than 2× (98.0 vs. 45.2) and
BEDROC by over 80%. Against PSICHIC (Koh et al., 2024), which leverages interpretable
interaction fingerprints, SubDyve improves EF0.5% by over 38× and achieves a BEDROC
nearly 9× higher. The foundation model BIND (Lam et al., 2024), despite being trained
on millions of interactions, performs poorly in this setting (EF10% = 0.04), likely due to
distribution mismatch. These results highlight SubDyve’s strength in prioritizing true actives
under minimal supervision and its robustness across compound representations and model
classes.
4.2
Ablation Study
To demonstrate the effectiveness of SubDyve components, we conduct ablation studies: (1)
impact of subgraph-based similarity network and LFDR seed refinement, (2) initial seed set
size, (3) LFDR threshold, (4) subgraph pattern size. Due to the limited space, experimental
result of (3) and (4) is reported in Appendix F.1, and F.4 respectively.
4.2.1
Effects of Subgraph Network and LFDR Seed Refinement
We conduct an ablation study on the PU dataset to assess the impact of SubDyve’s two
main components: (i) the subgraph-based similarity network and (ii) LFDR-guided seed
refinement.
Table 3: Ablation study results for the effect
of subgraph fingerprint network and LFDR-
guided seed refinement on the PU dataset.
The top results are shown in bold, and the
second-best are underlined, respectively.
Subgraph
LFDR
BEDROC
EF1%
79.04 ± 1.96
89.24 ± 2.08
✓
63.78 ± 11.43
67.22 ± 16.61
✓
78.68 ± 2.87
89.68 ± 3.53
✓
✓
83.44 ± 1.44
97.59 ± 1.44
Table 3 shows that combining both components
achieves the best performance (BEDROC 83.44,
EF1% 97.59), outperforming all partial variants.
Using LFDR without subgraph features leads
to a substantial drop in both BEDROC and EF,
indicating that accurate refinement depends on
the quality of the underlying network. Applying
subgraph features without LFDR yields only
modest improvements, suggesting most gains
come from their interaction. These results high-
light that chemically meaningful network construction and uncertainty-aware refinement are
complementary and essential for robust virtual screening in low-label settings.
7
Table 4:
Ablation study on the number of seed compounds in the PU dataset. For each
seed size (50, 150, 250), the first and second rows show the average and best-performing
of general fingerprint baselines. Best values are in bold, second-best are underlined. Full
results are in Appendix F.3.
No. of Seeds
Method
BEDROC (%)
EF
0.30%
0.50%
1%
3%
5%
50
pubchem + NP
41.13 ± 4.46
44.69 ± 14.09
45.51 ± 7.91
41.97 ± 6.91
25.7 ± 1.99
17.14 ± 1.00
maccs + NP
47.02 ± 3.83
56.77 ± 15.24
52.81 ± 9.24
50.92 ± 3.15
27.74 ± 2.04
17.05 ± 1.20
Subgraph + NP
46.33 ± 1.26
37.79 ± 21.22
31.81 ± 12.68
53.93 ± 4.97
27.61 ± 1.47
17.27 ± 0.51
SubDyve
51.78 ± 3.38
69.5 ± 11.81
62.53 ± 14.84
52.66 ± 5.91
29.48 ± 2.37
18.15 ± 0.90
150
rdkit + NP
50.82 ± 3.79
52.69 ± 6.75
54.62 ± 10.48
54.62 ± 7.24
29.5 ± 1.59
17.79 ± 0.95
maccs + NP
55.22 ± 4.39
79.99 ± 15.80
71.65 ± 13.30
60.69 ± 6.59
30.6 ± 1.29
18.85 ± 0.48
Subgraph + NP
55.08 ± 1.52
44.39 ± 22.83
61.29 ± 10.07
67.17 ± 7.24
30.07 ± 1.38
18.22 ± 0.93
SubDyve
59.07 ± 2.25
74.67 ± 7.46
73.55 ± 10.51
66.72 ± 5.29
32.26 ± 1.04
19.73 ± 0.36
250
fp2 + NP
56.88 ± 5.26
67.45 ± 16.53
75.57 ± 15.28
65.15 ± 8.30
30.19 ± 1.26
18.52 ± 0.61
avalon + NP
61.29 ± 2.44
97.18 ± 13.25
86.96 ± 9.16
68.05 ± 4.42
31.14 ± 0.52
19.51 ± 0.48
Subgraph + NP
61.96 ± 3.24
41.01 ± 13.89
86.31 ± 11.97
80.31 ± 4.60
30.20 ± 1.44
18.49 ± 0.85
SubDyve
66.73 ± 2.71
97.69 ± 16.55
85.44 ± 12.82
78.19 ± 3.38
32.85 ± 0.60
19.72 ± 0.36
Subgraph
similarity
RDKit
similarity
B
C
A
Seed#1
CCCN.CCNcn.cc-ccc
Compound#1
Seed#2
cc-cc.CCCN.CCNcn
Compound #2
Figure 2:
Case study of seed–hit patterns from SubDyve vs. RDKit. (A) PCA
visualization of top 1% ranked compounds and seeds under each method, illustrating that
SubDyve produces more coherent clustering in subgraph fingerprint space than RDKit. (B)
Examples of structurally similar seed–hit pairs prioritized only by SubDyve, highlighting
its ability to recover compounds with shared functional substructures. (C) Heatmaps of
pairwise fingerprint similarity between seeds and retrieved hits, showing stronger seed–hit
consistency with SubDyve fingerprints.
4.2.2
Performance under Varying Seed Set Sizes
We conduct an ablation study to evaluate the effect of seed set size on the PU dataset.
For each setting (50, 150, 250 seeds), we compare SubDyve against baselines using general-
purpose molecular fingerprints and subgraph fingerprint network (Subgraph+NP). Detailed
experimental settings are described in Appendix D.3.
Table 4 shows that SubDyve outperform across all seed sizes, demonstrating strong early
enrichment even under limited supervision. While the best-performing general fingerprints
vary by seed size (e.g., MACCS at 50, Avalon at 250), SubDyve achieves best performance
without fingerprint-specific tuning. Notably, Subgraph+NP—using a network constructed
from class-discriminative patterns—performs comparably to the best baselines, highlighting
the effectiveness of substructure-aware graph construction.
These results suggest that
SubDyve combines robustness and adaptability across diverse label regimes without requiring
task-specific fingerprint optimization.
4.3
Case study
To further demonstrate the interpretability and structural behavior of SubDyve, we conduct
four case studies: (1) a comparison with RDKit fingerprints on CDK7 to assess local
substructure similarity; (2) ranking gap for structurally similar molecules on DUD-E targets;
(3) an analysis of active/decoy recovery patterns on DUD-E targets with varying seed sizes;
and (4) characterization of subgraph patterns from augmented seeds on CDK7. Due to the
8
limited space, experimental results of (3) and (4) are reported in Appendix G.1 and G.3,
respectively.
4.3.1
Substructure Similarity in CDK7-Target Compound Retrieval
To evaluate the representational advantage of SubDyve over general-purpose molecular
fingerprints, we compare its retrieval behavior with RDKit on a pair of structurally similar
seed compounds on PU dataset. Specifically, we visualize the compounds prioritized in the
top 1% by each method, alongside their seed compounds, using PCA projections of their
respective fingerprint spaces (Figure 2A). SubDyve’s retrieved compounds form a tight cluster
around the seeds in the subgraph fingerprint space, indicating high local consistency and
shared substructural motifs. In contrast, RDKit-prioritized compounds are more scattered,
despite being selected from the same ranking percentile, highlighting the method’s weaker
capacity to preserve functional substructure similarity.
A closer inspection of two representative seed–hit pairs (Figure 2B) reveals that SubDyve
successfully prioritizes compounds containing key substructures, such as penta-1,3-diene(cc-
ccc) or butadiene groups(cc-cc), that align well with the seeds. These hits were not retrieved
by RDKit, likely due to its reliance on predefined global fingerprints that overlook localized
structural alignment.
To further quantify this effect, Figure 2C presents similarity heatmaps between seeds and
retrieved compounds. Subgraph fingerprint similarities remain consistently high across pairs,
while RDKit similarities are notably lower, even for structurally related hits.
These results suggest that SubDyve offers superior sensitivity to activity-relevant substruc-
tures, making it especially well-suited for discovering functionally analogous compounds in
early-stage virtual screening.
4.3.2
Ranking gap analysis for structurally similar pairs
↑↑↑
↑
↓↓
↓
Figure 3: Ranking gap and visualization of
highly similar active–decoy pairs for FA10 on
DUD-E targets shown with model relevance
ranks. Statistical significance is reported as
Wilcoxon signed-rank p-value.
To evaluate the ranking efficiency of Sub-
Dyve, we perform a ranking gap comparison
analysis with general-purpose molecular fin-
gerprints on structurally similar molecule
pairs with activity differences. For the FA10
target in the DUD-E benchmark, we ex-
tract active–decoy pairs with high structural
similarity (Tanimoto similarity ≥0.85) and
find that SubDyve significantly ranks actives
higher and decoys lower than the RDKit +
NP model, as shown in the Figure 3. Sim-
ilar trends are observed for other DUD-E
targets, as shown in Appendix G.2.
5
Conclusion
We present SubDyve, a label-efficient virtual screening framework that constructs a task-
adaptive subgraph fingerprint network by mining class-discriminative substructures from
bioactivity-labeled compounds. Built upon these chemically meaningful patterns, SubDyve
performs iterative seed refinement using LFDR-guided calibration to prioritize candidate
molecules with minimal supervision. Our method achieves more than a twofold improvement
in average EF1% on the zero-shot DUD-E benchmark and delivers strong BEDROC and
EF performance in large-scale screening on the PU dataset. These results demonstrate that
integrating substructure-similarity network construction with uncertainty-aware propagation
offers a scalable and effective solution for virtual screening in low-label regimes, advancing
the feasibility of early-phase hit discovery.
9
References
Walid Ahmad, Elana Simon, Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar.
Chemberta-2: Towards chemical foundation models. arXiv preprint arXiv:2209.01712,
2022.
Kunlun Chen, Ling Zhang, Yue Ding, Zhaoju Sun, Jiao Meng, Rongshuang Luo, Xiang Zhou,
Liwei Liu, and Song Yang. Activity-based protein profiling in drug/pesticide discovery:
Recent advances in target identification of antibacterial compounds. Bioorganic Chemistry,
pp. 107655, 2024.
Lenore Cowen, Trey Ideker, Benjamin J Raphael, and Roded Sharan. Network propagation:
a universal amplifier of genetic associations. Nature Reviews Genetics, 18(9):551–562,
2017.
Mark Davies, Micha l Nowotka, George Papadatos, Nathan Dedman, Anna Gaulton, Francis
Atkinson, Louisa Bellis, and John P Overington. Chembl web services: streamlining access
to drug discovery data and utilities. Nucleic acids research, 43(W1):W612–W620, 2015.
Youchao Deng, Eui-Jun Kim, Xiaosheng Song, Akshay S Kulkarni, Ryan X Zhu, Yidan
Wang, Michelle Bush, Aiping Dong, Nicholas Noinaj, Jinrong Min, et al. An adenosine
analogue library reveals insights into active sites of protein arginine methyltransferases
and enables the discovery of a selective prmt4 inhibitor. Journal of Medicinal Chemistry,
67(20):18053–18069, 2024.
Thomas J DiCiccio and Bradley Efron. Bootstrap confidence intervals. Statistical science, 11
(3):189–228, 1996.
Jerome Eberhardt, Diogo Santos-Martins, Andreas F Tillack, and Stefano Forli. Autodock
vina 1.2. 0: new docking methods, expanded force field, and python bindings. Journal of
chemical information and modeling, 61(8):3891–3898, 2021.
Bradley Efron. Local false discovery rates, 2005.
Benedek Fabian, Thomas Edlich, H´el´ena Gaspar, Marwin Segler, Joshua Meyers, Marco
Fiscato, and Mohamed Ahmed. Molecular representation learning with language models
and domain-relevant auxiliary tasks. arXiv preprint arXiv:2011.13230, 2020.
Bowen Gao, Bo Qiang, Haichuan Tan, Yinjun Jia, Minsi Ren, Minsi Lu, Jingjing Liu,
Wei-Ying Ma, and Yanyan Lan. Drugclip: Contrastive protein-molecule representation
learning for virtual screening. Advances in Neural Information Processing Systems, 36:
44595–44614, 2023.
Abby Hill, Scott Gleim, Florian Kiefer, Frederic Sigoillot, Joseph Loureiro, Jeremy Jenkins,
and Melody K Morris. Benchmarking network algorithms for contextualizing genes of
interest. PLoS Computational Biology, 15(12):e1007403, 2019.
John J Irwin, Khanh G Tang, Jennifer Young, Chinzorig Dandarchuluun, Benjamin R Wong,
Munkhzul Khurelbaatar, Yurii S Moroz, John Mayfield, and Roger A Sayle. Zinc20—a free
ultralarge-scale chemical database for ligand discovery. Journal of chemical information
and modeling, 60(12):6065–6073, 2020.
Xuan Jiang, Kinyu Shon, Xiaofeng Li, Guoliang Cui, Yuanyuan Wu, Zhonghong Wei, Aiyun
Wang, Xiaoman Li, and Yin Lu. Recent advances in identifying protein targets of bioactive
natural products. Heliyon, 2024.
Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li,
Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. Pubchem 2023 update. Nucleic
acids research, 51(D1):D1373–D1380, 2023.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980, 2014.
10
Huan Yee Koh, Anh TN Nguyen, Shirui Pan, Lauren T May, and Geoffrey I Webb. Physico-
chemical graph neural network for learning protein–ligand interaction fingerprints from
sequence data. Nature Machine Intelligence, 6(6):673–687, 2024.
Hilbert Yuen In Lam, Jia Sheng Guan, Xing Er Ong, Robbe Pincket, and Yuguang Mu.
Protein language models are performant in structure-free virtual screening. Briefings in
Bioinformatics, 25(6):bbae480, 2024.
Namkyeong Lee, Siddhartha Laghuvarapu, Chanyoung Park, and Jimeng Sun. Molecule
language model with augmented pairs and expertise transfer. In ACL 2024 Workshop
Language+ Molecules, 2024.
Sangsoo Lim, Youngkuk Kim, Jeonghyeon Gu, Sunho Lee, Wonseok Shin, and Sun Kim.
Supervised chemical graph mining improves drug-induced liver injury prediction. Iscience,
26(1), 2023.
Zhutian Lin, Junwei Pan, Shangyu Zhang, Ximei Wang, Xi Xiao, Shudong Huang, Lei
Xiao, and Jie Jiang. Understanding the ranking loss for recommendation with sparse user
feedback. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery
and Data Mining, pp. 5409–5418, 2024.
Mingxuan Ma, Mei Huang, Yinting He, Jiansong Fang, Jiachao Li, Xiaohan Li, Mengchen
Liu, Mei Zhou, Guozhen Cui, and Qing Fan. Network medicine: A potential approach for
virtual drug screening. Pharmaceuticals, 17(7):899, 2024.
Nibha Mishra and Arijit Basu. Exploring different virtual screening strategies for acetyl-
cholinesterase inhibitors. BioMed research international, 2013(1):236850, 2013.
Ravichandran N Murugan, Jung-Eun Park, Eun-Hee Kim, Song Yub Shin, Chaejoon Cheong,
Kyung S Lee, and Jeong Kyu Bang. Plk1-targeted small molecule inhibitors: molecular
basis for their potency and specificity. Molecules and cells, 32:209–220, 2011.
Michael M Mysinger, Michael Carchia, John J Irwin, and Brian K Shoichet. Directory
of useful decoys, enhanced (dud-e): better ligands and decoys for better benchmarking.
Journal of medicinal chemistry, 55(14):6582–6594, 2012.
Rosaria Ottan`a, Paolo Paoli, Mario Cappiello, Trung Ngoc Nguyen, Ilenia Adornato, An-
tonella Del Corso, Massimo Genovese, Ilaria Nesi, Roberta Moschini, Alexandra Naß,
et al. In search for multi-target ligands as potential agents for diabetes mellitus and its
complications—a structure-activity relationship study on inhibitors of aldose reductase
and protein tyrosine phosphatase 1b. Molecules, 26(2):330, 2021.
Sergio Picart-Armada, Wesley K Thompson, Alfonso Buil, and Alexandre Perera-Lluna. The
effect of statistical normalization on network propagation scores. Bioinformatics, 37(6):
845–852, 2021.
Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou
Huang. Self-supervised graph transformer on large-scale molecular data. Advances in
neural information processing systems, 33:12559–12571, 2020.
Daniel Rose, Oliver Wieder, Thomas Seidel, and Thierry Langer. Pharmacomatch: Efficient
3d pharmacophore screening via neural subgraph matching. In The Thirteenth International
Conference on Learning Representations, 2025.
Jerret Ross, Brian Belgodere, Vijil Chenthamarakshan, Inkit Padhi, Youssef Mroueh, and
Payel Das. Large-scale chemical language representations capture molecular structure
and properties.
Nature Machine Intelligence, 4(12):1256–1264, 2022.
doi: 10.1038/
s42256-022-00580-7.
Sagarika Saha, Sanket Bapat, Durairaj Vijayasarathi, and Renu Vyas. Exploring potential
biomarkers and lead molecules in gastric cancer by network biology, drug repurposing and
virtual screening strategies. Molecular Diversity, pp. 1–26, 2024.
11
Duncan E Scott, Andrew R Bayly, Chris Abell, and John Skidmore. Small molecules, big
targets: drug discovery faces the protein–protein interaction challenge. Nature Reviews
Drug Discovery, 15(8):533–550, 2016.
Thomas Seidel.
Chemical data processing toolkit (cdpkit).
https://github.com/
molinfo-vienna/CDPKit, 2024. Accessed: 2024-01-06.
Wan Xiang Shen, Chao Cui, Xiaorui Su, Zaixi Zhang, Alejandro Velez-Arce, Jianming Wang,
Xiangcheng Shi, Yanbing Zhang, Jie Wu, Yu Zong Chen, et al. Activity cliff-informed
contrastive learning for molecular property prediction. Research Square, pp. rs–3, 2024.
Martin Steinegger and Johannes S¨oding. Mmseqs2 enables sensitive protein sequence searching
for the analysis of massive data sets. Nature biotechnology, 35(11):1026–1028, 2017.
Dagmar Stumpfe, Huabin Hu, and Jurgen Bajorath. Evolving concept of activity cliffs. ACS
omega, 4(11):14360–14368, 2019.
Jean-Fran¸cois Truchon and Christopher I Bayly. Evaluating virtual screening methods: good
and bad metrics for the “early recognition” problem. Journal of chemical information and
modeling, 47(2):488–508, 2007.
Aaron M Virshup, Julia Contreras-Garc´ıa, Peter Wipf, Weitao Yang, and David N Beratan.
Stochastic voyages into uncharted chemical space produce a representative library of
all possible drug-like compounds. Journal of the American Chemical Society, 135(19):
7296–7303, 2013.
David Weininger. Smiles, a chemical language and information system. 1. introduction to
methodology and encoding rules. Journal of chemical information and computer sciences,
28(1):31–36, 1988.
Jungseob Yi, Sangseon Lee, Sangsoo Lim, Changyun Cho, Yinhua Piao, Marie Yeo, Dongkyu
Kim, Sun Kim, and Sunho Lee.
Exploring chemical space for lead identification by
propagating on chemical similarity network. Computational and structural biotechnology
journal, 21:4187–4195, 2023.
Jaemin Yoo, Junghun Kim, Hoyoung Yoon, Geonsoo Kim, Changwon Jang, and U Kang.
Accurate graph-based pu learning without class prior.
In 2021 IEEE International
Conference on Data Mining (ICDM), pp. 827–836. IEEE, 2021.
Barbara Zdrazil, Eloy Felix, Fiona Hunter, Emma J Manners, James Blackshaw, Sybilla
Corbett, Marleen de Veij, Harris Ioannidis, David Mendez Lopez, Juan F Mosquera, et al.
The chembl database in 2023: a drug discovery platform spanning multiple bioactivity
data types and time periods. Nucleic acids research, 52(D1):D1180–D1192, 2024.
Zheng Zhao, Lei Xie, and Philip E Bourne. Structural insights into characterizing binding
sites in epidermal growth factor receptor kinase mutants. Journal of chemical information
and modeling, 59(1):453–462, 2018.
12
Appendix
A
Proof for Proposition
Proof. of proposition 1.
[local–FDR ≤α implies FDR ≤α]
Let Z1, . . . , Zm be test statistics following the two–group mixture
f(z) = π0f0(z) + π1f1(z),
π0 + π1 = 1, π1 > 0.
Define the local false–discovery rate (Efron, 2005) (Efron, 2005)
lfdr(z) = Pr(H = 0 | Z = z) = π0f0(z)
f(z) .
Choose hypotheses by
Rα =
i : lfdr(Zi) ≤α
,
0 < α < 1.
Let Rα = |Rα| and Vα = Pm
i=1 I{i ∈Rα}Hi.
Then
mFDR = E[Vα]
E[Rα] ≤α,
FDR = E
h
Vα
Rα ∨1
i
≤α.
Write E[Vα] =
X
i
E
lfdr(Zi) I{i ∈Rα}
. Because lfdr(Zi) ≤α whenever i ∈Rα,
E[Vα] ≤α E[Rα].
Dividing both sides gives mFDR ≤α. Since Vα ≤Rα, Jensen’s inequality yields FDR ≤
mFDR ≤α.
mFDR (Marginal FDR)
mFDR = E[V ]
E[R]
Marginal FDR takes expectations of the numerator and denominator separately, providing a
mean proportion of false discoveries. It is always defined (no 0/0 issue when R = 0) and
satisfies FDR ≤mFDR.
B
Model Architecture and Loss Details
B.1
Pseudocode of SubDyve
Algorithm 1 outlines the full SubDyve framework for virtual screening. The procedure
consists of three main stages: (1) Subgraph fingerprint network construction, (2) Dynamic
seed refinement with LFDR calibration, and (3) Final compound prioritization via network
propagation. In Step 1, we mine class-discriminative substructures and use them to construct
a molecular similarity graph (details in Appendix B.2). Step 2 performs N-fold stratified
refinement using GNN model with a composite loss and iteratively expands the seed set based
on LFDR calibration. For each split, the seed weights from the best-performing iteration are
retained. Step 3 aggregates the N seed weight vectors via max pooling and performs final
propagation to produce the ranked list of candidate compounds. The LFDR refinement step
is described in detail in Algorithm 2.
13
Algorithm 1 SubDyve Framework for Virtual Screening
Require:
Initial labeled set Strain,
unlabeled pool Q′
Number of stratified splits N,
number of iterations M
Hyper-parameters (λrank, λcon, γnp, τ, β, θ)
1: // Step 1: Subgraph fingerprint Network Construction (see Appendix B.2)
2: Mine class-discriminative subgraph patterns from Strain using a supervised subgraph
mining algorithm
3: Construct subgraph pattern fingerprints for all v ∈Q′
4: Compute pairwise cosine similarity between fingerprints to construct the subgraph
fingerprint graph G = (V, E, we)
5: // Step 2: Dynamic Seed Refinement with LFDR
6: for n = 1 to N do
▷Stratified bootstraps of Strain
7:
(S1, S2) ←Split(Strain, ratio)
8:
Compute node features x(v) for all v ∈V
▷Appendix B.3
9:
Initialize augmented seeds Saug ←∅, seed weight map s
10:
for m = 1 to M do
▷Iteration loop
11:
X ←[x(v)]v∈V
12:
(ℓ, z) ←Mθ(X, E, we)
▷Logits ℓand embeddings z from GNN
13:
LBCE ←WeightedBCE(ℓ, S2, γnp)
14:
LRank ←PairwiseRankNet(ℓ, S2)
15:
LCon ←InfoNCE(z, S2)
16:
Ltotal ←(1 −λrank) · LBCE + λrank · LRank + λcon · LCon
▷Appendix B.4
17:
Update model parameters via ∇Ltotal
18:
(Saug, s) ←Algorithm 2(ℓ, S1, Saug, s, τ, β)
▷LFDR-guided Seed Refinement
19:
end for
20:
Save sn from best-performing iteration on S2
21: end for
22: // Step 3: Final Prioritization
23: Aggregate {sn}N
n=1 via element-wise max pooling to obtain ensembled seed vector s∗
24: Perform final network propagation over G using s∗to score Q′
25: return Final ranked list of compounds based on propagation scores
B.2
Details of Subgraph Fingerprint Network Construction
This section describes the full pipeline for constructing a subgraph fingerprint network. The
objective is to extract class-discriminative substructures, enabling more effective propagation
and compound ranking.
The process consists of three main stages: (1) mining class-
discriminative subgraph patterns, (2) generating continuous subgraph pattern fingerprints,
and (3) constructing the subgraph fingerprint network.
B.2.1
B.2.1 Mining Class-Discriminative Subgraph Patterns
We adopt the Supervised Subgraph Mining (SSM) algorithm (Lim et al., 2023) to identify
substructures that differentiate active and inactive compounds. We curate activity-labeled
data from the PubChem dataset by extracting compounds annotated as bioactive against
the target of interest. Candidate subgraphs are generated using a supervised random walk
strategy: for each node v ∈V(G), a fixed-length walk is repeated multiple times to sample a
diverse set of subgraphs. Each subgraph is decomposed into atom-pair doublets to estimate
class-specific transition preferences. These preferences iteratively refine the walk policy,
guiding subsequent sampling toward class-informative regions.
The mined subgraphs are evaluated using a classifier, and the subgraph set that yields the
highest predictive performance (e.g., in AUC) is selected as the final set Subopt. In parallel,
14
Algorithm 2 LFDR-guided Seed Refinement
Require: Ranking logits li for all i ∈V, initial train seeds S1, current augmented seeds Saug,
seed weight wi ∈seed weight map s, LFDR thresholds τFDR, update rate β, baseline b
1: Compute z-scores: zi ←zscore(li)
2: Estimate local FDR: LFDRi ←local fdr(zi)
▷Algorithm 3
3: for each node i ∈V do
4:
if i /∈Saug and LFDRi < τFDR then
5:
Saug ←Saug ∪{i}
▷Add high-confidence node
6:
wi ←1.0
7:
else if i ∈Saug \ S1 and LFDRi > τFDR then
8:
Saug ←Saug \ {i}
▷Remove low-confidence node
9:
wi ←0
10:
else if i ∈Saug then
11:
wi ←wi + β · (σ(li) −b)
▷Update existing seed weight
12:
end if
13: end for
14: return Updated Saug, s
Algorithm 3 LFDR Estimation
Require: Observed Z-scores Z = {Zi}V
i=1, null density f0(z), bin count B, polynomial
degree d, regularization parameter α ≥0, null proportion π0
Ensure: Local FDR estimates d
lfdr(Zi) for all i = 1, . . . , V
1: Partition the range [min Z, max Z] into B equal-width bins (bj−1, bj] with centers zj =
1
2(bj−1 + bj)
2: Count samples in each bin: Nj ←#{Zi ∈(bj−1, bj]} for j = 1, . . . , B
3: Construct design matrix X ∈RB×(d+1) with Xjk = zk−1
j
for k = 1, . . . , d + 1
4: Fit Poisson distribution:
ˆβ ←arg min
β
(
−
B
X
j=1
Nj · (x⊤
j β) −exp(x⊤
j β)
+ α
2 ∥β∥2
2
)
5: for each i = 1, . . . , V do
6:
Construct polynomial features x(Zi) =
Z0
i , Z1
i , . . . , Zd
i
⊤
7:
Estimate marginal density: bf(Zi) ←exp
x(Zi)⊤ˆβ
8:
Compute null density: f0(Zi) ←null pdf(Zi)
9:
Compute LFDR:
d
lfdr(Zi) ←π0 · f0(Zi)
bf(Zi)
10:
Clip to [0, 1]: d
lfdr(Zi) ←min(1, max(0, d
lfdr(Zi)))
11: end for
12: return {d
lfdr(Zi)}V
i=1
we identify single-subgraph structural alerts (SAs) by computing feature importance scores
using a random forest model. Subgraphs with importance above 0.0001 and entropy below
0.5 are retained as interpretable indicators of activity.
15
B.2.2
B.2.2 Generating Subgraph Pattern Fingerprints
To capture higher-order structure, we construct Discriminative Subgraph Combinations
(DiSCs)—co-occurring subgraph sets that frequently appear in actives. Starting with 1-
mer subgraphs, we iteratively build k-mer combinations using a branch-and-bound search
with SMARTS-based pattern grouping.
Candidates are scored using an entropy-based
metric 1 −Entropy(Supppos, Suppneg), and only those with sufficient support (≥2%) and
discriminative power are retained. Entropy filtering is not applied to 1-mers to preserve
informative small motifs.
The top-d DiSCs are selected based on entropy rank and used to construct a d-dimensional
fingerprint vector, where each entry encodes the frequency of a specific subgraph combination
within the molecule. These fingerprint vectors serve as task-aware molecular representations
for graph construction.
B.2.3
B.2.3 Constructing Molecular Similarity Networks
We construct a similarity graph G = (V, E, we) by computing pairwise cosine similarity
between subgraph pattern fingerprints. Each compound in Q′ is represented as a node, and
weighted edges reflect structural proximity in the DiSC space.
B.3
Details of Feature Encoding for GNN
In the dynamic seed refinement step, we use a two-layer GNN to predict activity scores over
the subgraph fingerprint network. Each compound i ∈Q′ is encoded as:
xi = [wi, nNP
i
, f FP
i
, sPCA
i
, hhyb
i
, ePT–CB
i
]
(5)
The components of the feature vector are described below:
• wi: Weight of seed to use for network propagation. Initially set wi∈S1 to 1, otherwise
set to 0.
• nNP
i
: Network propagation score drive from wi.
• f FP
i
: Class-discriminative substructure features extracted from subgraph pattern
fingerprints.
• sPCA
i
: RBF Similarity to seed compounds in a PCA-projected latent space based on
subgraph pattern fingerprints f FP
i
.
• hhyb
i
: Hybrid ranking score computed as a weighted average of the rankings of sPCA
i
and nNP
i
.
• ePT–CB
i
: Semantic features derived from a pretrained ChemBERTa model, repre-
senting molecular sequence semantics.
Each GNN layer is followed by a residual connection and LayerNorm, with the second layer
reducing the hidden dimension by half. The model outputs a scalar logit ˆli computed via a
linear layer for ranking, along with a 32-dimensional embedding vector for representation
regularization.
B.4
Details of Composite Loss for GNN
Following (Lin et al., 2024), SubDyve jointly optimizes classification, ranking, and representa-
tion learning objectives within the GNN during seed refinement. The final loss is a weighted
sum of three components: binary cross-entropy LBCE, pairwise ranking loss LRankNet, and
contrastive loss LContrast. Each component is designed to enhance model performance under
sparse supervision.
1. Binary Cross-Entropy Loss (BCE)
We employ a class-balanced BCE loss to accommodate severe class imbalance. Additionally,
compound-level weights modulated by network propagation scores enhance robustness to
16
noisy supervision:
LBCE =
1
|Q′|
|Q′|
X
i=1
wi ·
"
yi · log σ(ˆli)
+ PW · (1 −yi) · log(1 −σ(ˆli))
#
,
σ(ˆli) =
1
1 + e−ˆli ,
wi = 1 + γnp · nNP
i
(6)
where ˆli is the predicted logit for compound i, and yi ∈{0, 1} is the ground truth label
indicating whether i ∈S2 (active) or not (inactive). nNP
i
is the NP score from initial
propagation. γnp is set to 5. The term PW balances class skew by weighting the active class
more heavily: pos weight =
|{i|yi=0}|
|{i|yi=1}|+ϵ.
2. Pairwise RankNet Loss
To improve early recognition, we adopt a pairwise margin-based loss that encourages higher
scores for known actives in S2 relative to likely inactives:
LRankNet = 1
C
X
(i,j)
max
0, m −(ˆli −ˆlj)
,
i ∈S2, ; j ∈Q′ \ S2.
(7)
Here, m is a margin hyperparameter and C denotes the number of valid (i, j) pairs.
3. Contrastive Loss (InfoNCE)
This loss promotes intra-class consistency in the learned embeddings. For each compound
i ∈Q′, we select its most similar positive compound zi+ from S2 based on subgraph pattern
fingerprint similarity, and treat the remaining compounds in S2 as zi−.
LContrast =
1
|S2|×
X
i∈S2
−log
exp
z⊤
i zi+
τ
exp
z⊤
i zi+
τ
+ P
k exp
z⊤
i z(k)
i−
τ
(8)
where τ is a temperature parameter.
4. Total Composite Loss
The total loss is a weighted combination:
Ltotal = (1 −λrank) · LBCE+
λrank · LRankNet+
λcontrast · LContrast.
(9)
where λrank = 0.3 and λcontrast = 0.6 fixed across all experiments. The GNN is trained using
Adam optimizer (Kingma & Ba, 2014) with a fixed learning rate of 8 × 10−4 and weight
decay of 1.57 × 10−5. Hyperparameter selection is discussed in Appendix C.2.
17
C
Implementation & Evaluation Details
C.1
Network Propagation Algorithm
Network propagation (NP) is used to prioritize candidate compounds by diffusing signals
from a small number of known actives across a chemical similarity network. This approach
has been shown to effectively integrate relational structure for large-scale inference (Cowen
et al., 2017). NP iteratively balances the initial bioactivity signal carried by S with the
topological context supplied by the graph, allowing evidence to flow along indirect paths
and uncovering nodes that are not immediate neighbors of the seeds:
P (t+1) = (1 −α) WN P (t) + α P (0),
(10)
where P (0) is a one-hot vector encoding compounds S, WN is the column-normalized
adjacency matrix, and α ∈[0, 1] controls the restart probability.
Over iterations, the
score vector P (t) converges to a stationary distribution that captures both local and global
connectivity, thereby ranking compounds in Q by their network proximity to S. By integrating
signals along multiple paths rather than relying solely on direct neighbors, NP effectively
highlights previously unconnected yet pharmacologically relevant candidates, making it well
suited for large-scale virtual screening task.
Network propagation (NP) prioritizes candidate compounds by diffusing activity signals from
a small set of known actives across a chemical similarity network. This method effectively
incorporates both local and global graph structure, enabling inference over indirect molecular
relationships (Cowen et al., 2017).
The propagation is formulated as an iterative update:
P (t+1) = (1 −α)WN P (t) + αP (0),
(11)
where WN is the column-normalized adjacency matrix of the molecular graph, and α ∈[0, 1]
is the restart probability. The initial vector P (0) encodes seed activity, typically assigning 1
to known actives and 0 elsewhere.
As iterations proceed, P (t) converges to a stationary distribution that reflects both direct
and indirect connectivity to the seed set. This enables the identification of structurally
distant yet functionally related candidates, making NP a suitable backbone for large-scale
virtual screening under sparse supervision.
C.2
Hyperparameter Search Space
We perform hyperparameter optimization in two phases depending on the parameter type.
GNN model architecture and loss-related parameters are tuned using Bayesian Optimization
(100 iterations) (Appendix Table 5). Hyperparameters related to iteration process on dynamic
seed refinement are searched via random search (Appendix Table 6).
C.3
Evaluation Metrics
To evaluate the performance of early retrieval in virtual screening, we adopt the BEDROC
and Enrichment Factor (EF) metrics.
BEDROC.
The Boltzmann-Enhanced Discrimination of ROC (BEDROC) is designed to
emphasize early recognition by assigning exponentially decreasing weights to lower-ranked
active compounds. It is defined as:
BEDROCα =
1 −e−α
1 −e−α/N
1
n
n
X
i=1
e−αri/N
!
×
sinh(α/2)
cosh(α/2) −cosh(α/2 −αRα)
+
1
1 −eα(1−Rα)
(12)
18
Table 5: Hyperparameters related to GNN model
Parameter
Search Space
Selected Value
Hidden dimension
{16, 32, 64, 128}
64
Embedding dimension
{8, 16, 32, 64}
32
λrank
[0.0, 1.0]
0.3
λcontrast
[0.0, 1.0]
0.6
Margin for RankNet loss
[0.0, ..., 1.0]
0.5
Weight decay
[10−6, 10−4]
1.57 × 10−5
β (seed weight update rate)
[0.1, 1.0]
0.7
Learning rate
[10−4, 10−2]
0.0008
γNP (NP score weight)
{0.0, ..., 5.0}
5.0
GNN layer type
{GCN, GIN, GAT}
GCN
Table 6: Hyperparameters related to iteration of seed refinement
Parameter
Search Space
Selected Value
Training epochs
{10, 20, 30, 40, 50}
50
Max iterations (M)
{3, 4, 5, 6, 7}
6
Early stopping patience of iterations
{1, 2, 3}
3
Stratified split (N)
{10, 5, 3, 2}
2
LFDR threshold τFDR
{0.03, 0.05, 0.1, 0.3, 0.5}
0.1
n is the number of active compounds, N is the total number of molecules, ri is the rank of
the i-th active compound, and Rα = n/N. Following prior work (Truchon & Bayly, 2007),
we set α = 85 to prioritize early retrieval.
Enrichment Factor (EF).
The EF quantifies the proportion of actives retrieved within
the top-ranked subset relative to a random distribution. It is computed as:
EFx% = na/Nx%
n/N
(13)
n is the number of active compounds, N is the total number of molecules, Nx% is the number
of molecules in the top x% of the ranking, and na is the number of actives within that
portion. Higher EF values indicate better prioritization of active compounds in the early
ranks.
D
Experiment Details
D.1
Zero-Shot Virtual Screening Setup on Ten DUD-E Targets
For each DUD-E target, we curate a high-quality dataset of bioactive compounds from
PubChem while preserving the zero-shot setting. To avoid data leakage, we filter protein
homologs using MMseqs2 (Steinegger & S¨oding, 2017), excluding any proteins with sequence
identity greater than 90% relative to the target. From the remaining homologs (identity
≤0.9), we retrieve associated compounds and their bioactivity annotations, retaining only
those labeled as active or inactive with valid potency measurements. Duplicate entries and
records with missing values are removed to ensure data reliability. We additionally compare
the FM-based baseline model with ChemBERTa (Ahmad et al., 2022) and MoLFormer (Ross
et al., 2022) models for further comparison. Using the pre-trained models, we calculate
performance by averaging the embeddings of bioactive compounds using the pre-trained
models and taking the active and inactive compounds from DUD-E and ranking them
according to their cosine similarity to each compound. For the ADA target, where PubChem
19
Table 7: Summary of PubChem-augmented data for DUD-E targets, including similarity
ranges, and number of seed molecules used in propagation.
Target
PDB code
Active Ligands
Decoy Ligands
PubChem Total (Act/Inact)
Similarity Range
Seed Count
ACES
1e66
451
26198
1604 (1502/102)
0.647–0.9
1277
ADA
2e1w
90
5448
444 (386/58)
0.909–0.953
335
ANDR
2am9
269
14333
918 (822/96)
0.56–0.9
755
EGFR
2rgp
541
35001
576 (427/149)
0.478–0.9
374
FA10
3kl6
537
28149
261 (237/24)
0.845–0.9
195
KIT
3g0e
166
10438
3299 (3164/135)
0.537–0.9
771
PLK1
2owb
107
6794
353 (191/162)
0.61–0.9
174
SRC
3el8
523
34407
1232 (827/405)
0.88–0.9
624
THRB
1ype
461
26894
3126 (2071/1055)
0.833–0.9
477
UROK
1sqt
162
9837
825 (750/75)
0.489–0.9
615
annotations are sparse, a slightly higher identity threshold (up to 0.953) is used to enable
sufficient subgraph extraction.
Appendix Table 7 summarizes the number of actives/inactives and the protein similarity
thresholds used per target. For downstream propagation, we construct a target-specific
subgraph fingerprint network comprising PubChem actives and DUD-E molecules (including
actives and decoys). PubChem actives with IC50 ≤500 nM are selected as seed molecules,
while the remaining actives are incorporated as unlabeled nodes. Subgraph patterns are
extracted via a Murcko-scaffold split and encoded into 2000-dimensional subgraph fingerprints,
which serves as the molecular representation for each node in the graph.
Baseline Models Training Summary
We evaluate SubDyve against two structure-based
virtual screening baselines: CDPKit (Alignment) (Seidel, 2024) and PharmacoMatch (Rose
et al., 2025) (Table 8). CDPKit is a geometric alignment algorithm that performs unsu-
pervised pharmacophore matching without model training, relying solely on spatial fit. In
contrast, PharmacoMatch is a self-supervised learning framework trained on 1.2 million
drug-like compounds from ChEMBL((Davies et al., 2015; Zdrazil et al., 2024)). It learns
a joint embedding space for pharmacophore graphs using contrastive loss and an order
embedding objective, enabling similarity-based retrieval of actives without direct supervision.
Table 8: Key characteristics for baseline methods.
Model
Training Data Description
PharmacoMatch
1.2M small molecules from ChEMBL after Lipinski filtering and duplication removal; trained in a
self-supervised fashion on 3D pharmacophore graphs using contrastive loss and order embeddings.
CDPKit (Alignment)
Unsupervised alignment of 3D pharmacophores using geometric fit (no training).
ChemBERTa
ChemBERTa-2 is a model based on ChemBERTa that optimises pre-training performance through
large-scale pre-training and multi-task-self-supervised learning comparisons using up to 77 million
molecular data.
MoLFormer
MoLFormer is a transformer-based molecular language model trained using efficient linear attentions
and rotational positional embeddings on 1.1 billion molecules (SMILES) data, outperforming
traditional graph and language-based models in many of the 10 benchmarks.
DrugCLIP
DrugCLIP is a dense retrieval-based contrastive learning model that solves the virtual screening
problem by learning the similarity between a protein pocket and a molecule. The model leverages
extensive data, including over 120,000 protein-molecule pairs and more than 17,000 complex struc-
tures from PDBbind, BioLip, and ChEMBL datasets, and utilizes the HomoAug data augmentation
method to maximize the diversity of the training set.
D.2
PU-Style Screening Setup on PU dataset
To evaluate the screening effectiveness of SubDyve in a realistic compound discovery scenario,
we construct a dataset using molecules from the ZINC20 and PubChem databases. Specifically,
we retrieve 10,082,034 compounds from ZINC20 (https://zinc20.docking.org/tranches/
home/) and select CDK7 as the target protein. From PubChem, we obtain 1,744 unique
compounds annotated for CDK7 after deduplication, of which 1,468 are labeled as active
based on curated assay data.
To simulate sparse supervision, we randomly select 30% of the active compounds for Strain.
From this subset, we designate 10% as a held-out set S2 and use the remainder as initial
20
seed nodes S1. The other 70% of actives are included in the screening graph as unlabeled
nodes, emulating the presence of under-characterized actives in large-scale libraries. This
setup ensures that the test set remains completely unseen and that the majority of actives
do not contribute label information during propagation.
For fair comparison across network propagation (NP)-based baselines, we use the same seed
sets across all runs, providing identical supervision to all models. We extract subgraph
patterns using the SSM algorithm (Appendix B.2.1), excluding all test compounds to
prevent leakage. A total of 100 discriminative patterns are used to construct subgraph-
based fingerprints.
To assess whether the difference in performance between methods
was statistically significant, we applied the Mann-Whitney U test over results from five
independent runs. The hyperparameter settings follow Appendix Table 6, except the stratified
split N is set to 5.
D.3
Details of Experimental Setup for Varying Seed Set Size
To demonstrate that SubDyve can effectively rank the 10% held-out set S2 specified in the
PU-style screening setup created with the PU dataset with much fewer seeds, we conduct an
ablation study with much fewer S1. Each seed count was randomly selected, and performance
reported over five runs. This setup creates a much harsher situation where the test set is
completely unseen and a much larger amount of active is prevented from contributing label
information during propagation.
For fairness, we use the same number of seeds in the network propagation-based baseline
and perform five runs on a randomly selected S2 as same in Appendix D.2. When extracting
subgraph patterns with the SSM algorithm, we also exclude all test compounds to prevent
leakage.
21
E
Compute Resources and Time Profiling
Training Environments.
Appendix Table 9 presents the system and software environment
used for all experiments. The setup includes a high-memory server equipped with a single
NVIDIA RTX A6000 GPU, dual Intel Xeon processors, and 512GB of RAM, running Ubuntu
22.04.4 with CUDA 11.1. All experiments are conducted on a single server.
Time Profiling of Components in SubDyve.
Appendix Table 10 summarizes the
average runtime per module. The profiling is conducted over 10 DUD-E targets, and provides
a breakdown of SubDyve’s major computational steps. Subgraph mining is the most time-
intensive step, while similarity computations and network construction remain lightweight.
Notably, computing chemical similarity for one million compound pairs takes approximately
5 hours. Note that profiling time of LFDR-guided seed refinement integrates the GNN
training time.
Table 9: System and software environment.
Component
Specification
GPU
1 × NVIDIA RTX A6000 (49GB)
CPU
Intel Xeon Gold 6248R @ 3.00GHz (48 cores)
RAM
512 GB
OS
Ubuntu 22.04.4 LTS
CUDA
11.1
Python
3.9.16
PyTorch
1.10.1 + cu111
PyTorch Geometric
2.0.4
scikit-learn
1.6.1
scipy
1.10.1
Table 10: Profiling time of each module.
Module
Profiling Time
Subgraph Mining (SSM)
108.55 ± 15 sec / iteration
Subgraph Pattern Similarity Computation
0.4 ± 0.2 ms / compound pair
Subgraph fingerprint Network Construction
0.1 ± 0.1 ms / edge
Network Propagation
16 ± 23 sec / graph
Dynamic Seed Refinement with LFDR (incl. GNN training)
1.27 ± 0.56 sec / epoch
F
Additional Experimental Results
F.1
Zero-Shot Virtual Screening Results on Ten DUD-E Targets
F.1.1
Comprehensive Evaluation
Appendix Table 11 and 12 show a comprehensive evaluation of SubDyve and baseline methods
across ten DUD-E targets. Metrics include AUROC, BEDROC, EF1%, EF5%, and EF10%,
with confidence intervals estimated via 100 resampling trials. SubDyve achieves the best
average performance across all five metrics, consistently outperforming other methods in
early recognition and enrichment, while maintaining high AUROC. These results support
the effectiveness of combining subgraph fingerprint network construction with LFDR-based
refinement for zero-shot virtual screening.
F.1.2
Ablation Study of LFDR-guided Refinement
Figure 4 – 8 further analyze the effect of the seed-selection rule (τFDR) on calibration and
retrieval performance. In brief, we evaluate the impact of the LFDR threshold τFDR on
22
Table 11: Comprehensive evaluation of SubDyve and baseline methods on ten DUD-E targets.
Confidence intervals were estimated via bootstrapping (DiCiccio & Efron, 1996), using 100
resampled datasets to compute the standard deviations. AUROC and BEDROC are reported
as percentages. The best and second-best scores per metric are in bold and underline,
respectively.
Protein Target
Ours
PharmacoMatch
CDPKit
DrugCLIP
AutoDock Vina
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
ACES
91±1
86±2
57.0±2.4
17.1±0.4
8.8±0.2
58±2
18±1
8.4±1.4
3.5±0.3
2.2±0.2
55±1
16±2
5.5±1.3
3.0±0.3
2.1±0.2
80±1
52±2
32.4±1.7
10.2±0.5
5.9±0.2
77±0.0
33±1.1
13.87±0.5
6.47±0.2
4.32±0.1
ADA
90±3
83±4
50.6±5.3
16.6±0.8
8.4±0.4
83±3
44±4
16.7±4.1
9.5±1.0
5.7±0.4
94±1
82±3
53.6±4.3
15.9±0.9
8.4±0.4
96±1
82±3
60.2±5.3
16.2±0.8
8.4±0.3
57±0.0
7±2.7
1.05 ±1.7
0.42±0.7
2.12±0.4
ANDR
87±2
72±2
37.1±2.1
14.8±0.5
8.0±0.2
76±1
33±2
15.8±1.9
6.0±0.5
4.3±0.3
71±2
26±2
12.6±2.1
4.4±0.5
3.7±0.3
91±1
64±3
34.3±2.4
12.7±0.6
7.5±0.3
64±0.0
34±1.2
18.41±0.6
6.89±0.3
4.07±0.2
EGFR
94±1
86±2
60.0±2.3
17.0±0.3
8.6±0.2
63±1
11±1
3.1±0.7
2.0±0.3
1.6±0.2
76±1
26±2
12.2±1.6
4.6±0.3
3.7±0.2
69±1
40±2
28.7±2.1
7.4±0.4
4.4±0.2
64±0.0
14±1.4
3.68 ±0.7
2.76±0.3
2.17±0.1
FA10
79±1
58±2
46.8±1.7
10.6±0.4
5.5±0.2
47±1
1±1
0.2±0.2
0.1±0.1
0.2±0.1
55±1
6±1
0.0±0.0
0.7±0.2
1.2±0.1
94±1
86±1
51.2±1.8
17.0±0.3
9.1±0.1
84±0.0
41±1.7
15.77±0.8
7.28±0.3
5.05±0.2
KIT
82±2
44±3
13.8±2.6
11.3±0.7
6.1±0.4
56±2
4±1
0.0±0.0
0.4±0.2
0.7±0.2
63±2
9±2
1.1±0.8
1.2±0.4
1.8±0.3
30±3
10±2
5.2±1.7
1.8±0.5
1.2±0.3
78±0.0
18±2.4
2.97 ±1.9
3.23±0.5
3.11±0.3
PLK1
94±2
85±3
51.7±4.0
17.7±0.6
9.0±0.3
62±3
9±2
1.5±1.3
0.7±0.3
1.8±0.3
75±3
39±3
5.7±2.3
10.2±0.9
5.5±0.5
88±2
66±4
45.0±4.0
12.8±0.9
7.3±0.4
64±0.0
13±1.8
1.83 ±0.3
1.85±0.3
2.22±0.4
SRC
82±1
61±2
35.0±1.8
11.3±0.4
6.9±0.2
79±1
27±1
6.0±1.0
5.3±0.4
4.6±0.2
80±1
28±1
11.1±1.2
5.3±0.4
4.3±0.2
59±2
16±1
8.1±1.3
2.9±0.3
2.0±0.2
66±0.0
13±1.2
4.00 ±0.5
2.36±0.2
1.96±0.1
THRB
78±1
61±2
36.6±2.0
11.9±0.5
6.0±0.2
70±1
22±1
5.9±1.0
4.8±0.4
3.3±0.2
79±1
35±2
11.8±1.5
7.2±0.4
4.5±0.2
97±0
83±1
46.9±1.7
17.2±0.3
9.3±0.1
81±0.0
25±1.8
4.31 ±1.0
4.80±0.3
3.98±0.2
UROK
55±3
37±3
25.6±2.4
8.0±0.7
4.1±0.3
60±2
4±1
0.6±0.7
0.5±0.2
0.4±0.2
91±1
55±3
24.5±2.8
10.4±0.9
8.2±0.4
93±1
73±3
48.1±3.1
14.7±0.7
8.1±0.3
80±0.0
28±1.3
7.90 ±0.7
5.88±0.3
3.92±0.2
Avg. rank
2.2
1.4
1.5
1.4
1.5
4.2
4.5
4.4
4.4
4.3
3.0
3.4
3.5
3.4
3.2
2.2
2.0
1.7
2.0
2.2
3.4
3.7
3.9
3.8
3.8
Final rank
1
1
1
1
1
5
5
5
5
5
3
3
3
3
3
1
2
2
2
2
4
4
4
4
4
calibration and screening performance as shown in Figure 4. As a baseline, we compare
against a probability-based refinement strategy (denoted as PROB), which directly thresholds
GNN logits without LFDR estimation.
B
A
Figure 4:
Effect of LFDR Threshold τFDR. (A) Seed-calibration curves for LFDR (blue)
and PROB thresholding (orange). Calibration quality improves as the curve approaches the
diagonal; ECE values are shown for both methods. (B) Boxplot of BEDROC (α = 20) scores
for LFDR and PROB across threshold values.
Figure 4A shows the seed-calibration curves across thresholds for each method. LFDR
achieves substantially better calibration, with an expected calibration error (ECE) of 0.204
compared to 0.511 for PROB. This indicates that LFDR more accurately controls the false
discovery rate during seed refinement. Figure 4B shows BEDROC scores across five LFDR
thresholds (τFDR) and five probability thresholds (τPROB). LFDR consistently yields higher
performance across the threshold range, indicating that better calibration improves early
recognition. Similar trends for EF1% and AUPRC are shown in Figure 7 and Figure 8.
• Figure 5: Seed-calibration curves comparing LFDR-based and probability-based
(PROB) refinement strategies. LFDR yields lower expected calibration error (ECE)
across most targets, demonstrating better control of false discovery rates.
• Figure 6: BEDROC (α = 20) scores evaluated across thresholds. LFDR generally
shows higher BEDROC values with reduced variance, reflecting improved early
enrichment.
• Figure 7: EF1% plotted as a function of threshold. LFDR consistently outperforms
PROB across thresholds for most targets, confirming its robustness under different
calibration conditions.
• Figure 8: Precision–recall (PR) curves at the best-performing threshold per method.
LFDR achieves higher PR-AUC for the majority of targets, especially those with
imbalanced label distributions.
Together, these results highlight the advantages of LFDR-guided refinement for both FDR
calibration and early recognition. SubDyve demonstrates strong and stable performance
across a variety of target proteins, offering a reliable solution for virtual screening under
sparse supervision.
23
Table 12: Comprehensive evaluation of SubDyve and baseline methods on ten DUD-E targets.
Confidence intervals were estimated via bootstrapping (DiCiccio & Efron, 1996), using 100
resampled datasets to compute the standard deviations. AUROC and BEDROC are reported
as percentages. The best and second-best scores per metric are in bold and underline,
respectively.
Protein Target
Ours
ChemBERTa
MolFormer
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
AUROC
BEDROC
EF1%
EF5%
EF10%
ACES
91±1
86±2
57.0±2.4
17.1±0.4
8.8±0.2
53±0
9±1
1.9±0.9
1.5±0.2
1.3±0.1
74±2
24±2
8.3±0.7
4.3±0.6
3.7±0.4
ADA
90±3
83±4
50.6±5.3
16.6±0.8
8.4±0.4
76±1
15±3
4.2±1.6
1.9±0.3
2.6±0.6
89±0
72±1
48.3±0.9
13.9±0.3
7.2±0.2
ANDR
87±2
72±2
37.1±2.1
14.8±0.5
8.0±0.2
39±0
5±1
1.9±0.4
0.9±0.2
0.8±0.2
56±1
9±1
3.0±0.1
1.6±0.3
1.5±0.3
EGFR
94±1
86±2
60.0±2.3
17.0±0.3
8.6±0.2
77±1
35±1
16.4±0.6
7.0±0.1
5.0±0.0
93±1
75±2
48.1±2.8
15.2±0.4
8.4±0.2
FA10
79±1
58±2
46.8±1.7
10.6±0.4
5.5±0.2
73±1
28±3
12.9±1.6
5.4±0.5
3.4±0.2
93±0
66±0
36.7±0.4
13.0±0.2
7.6±0.1
KIT
82±2
44±3
13.8±2.6
11.3±0.7
6.1±0.4
62±1
16±1
4.9±3.7
2.8±0.0
2.5±0.1
93±0
66±1
36.8±0.9
13.6±0.4
7.7±0.4
PLK1
94±2
85±3
51.7±4.0
17.7±0.6
9.0±0.3
60±3
15±1
4.9±1.4
2.9±0.2
1.9±0.3
89±1
69±0
35.2±4.0
14.4±0.0
8.0±0.1
SRC
82±1
61±2
35.0±1.8
11.3±0.4
6.9±0.2
64±2
15±1
3.3±0.7
3.1±0.2
2.6±0.4
82±1
48±1
21.5±1.5
10.4±0.4
6.5±0.1
THRB
78±1
61±2
36.6±2.0
11.9±0.5
6.0±0.2
79±0
34±2
14.5±2.2
6.3±0.4
4.7±0.1
59±1
6±1
1.2±0.1
0.9±0.3
0.9±0.1
UROK
55±3
37±3
25.6±2.4
8.0±0.7
4.1±0.3
62±3
5±1
0.6±0.0
0.3±0.1
1.0±0.3
79±3
36±2
10.0±1.5
7.6±0.5
5.2±0.4
Avg. rank
1.6
1.2
1.1
1.2
1.3
2.7
2.9
2.9
2.9
2.9
1.8
1.9
2.0
1.9
1.8
Final rank
1
1
1
1
1
3
3
3
3
3
2
2
2
2
2
Calibration Comparison Across Targets
Figure 5:
Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E
targets. Seed-calibration curves for LFDR (blue) and probability thresholding (orange). The
closer the curve lies to the diagonal, the better the calibration. Expected calibration error
(ECE) is annotated for both methods.
24
LFDR
PROB
Method
0.820
0.825
0.830
0.835
0.840
BEDROC
ACES
LFDR
PROB
Method
0.746
0.748
0.750
0.752
0.754
0.756
0.758
BEDROC
ADA
LFDR
PROB
Method
0.68
0.69
0.70
0.71
0.72
0.73
0.74
BEDROC
ANDR
LFDR
PROB
Method
0.8425
0.8450
0.8475
0.8500
0.8525
0.8550
0.8575
BEDROC
EGFR
LFDR
PROB
Method
0.53
0.54
0.55
0.56
0.57
BEDROC
FA10
LFDR
PROB
Method
0.38
0.39
0.40
0.41
BEDROC
KIT
LFDR
PROB
Method
0.824
0.826
0.828
0.830
0.832
BEDROC
PLK1
LFDR
PROB
Method
0.520
0.525
0.530
0.535
BEDROC
SRC
LFDR
PROB
Method
0.525
0.530
0.535
0.540
0.545
BEDROC
THRB
LFDR
PROB
Method
0.300
0.305
0.310
0.315
0.320
0.325
BEDROC
UROK
BEDROC Comparison (α=20.0)
Figure 6:
Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E
targets. BEDROC (α = 20) scores evaluated across the threshold grid; boxes show the
interquartile range, whiskers the 5–95 percentiles. Box plot for LFDR (blue) and probability
(orange).
0.0
0.2
0.4
0.6
0.8
Threshold
42
44
46
48
50
EF@1 %
ACES
LFDR
Prob
0.0
0.2
0.4
0.6
0.8
Threshold
32
33
34
35
36
EF@1 %
ADA
0.0
0.2
0.4
0.6
0.8
Threshold
27.5
30.0
32.5
35.0
37.5
40.0
42.5
EF@1 %
ANDR
0.0
0.2
0.4
0.6
0.8
Threshold
55
56
57
58
59
60
EF@1 %
EGFR
0.0
0.2
0.4
0.6
0.8
Threshold
30.0
32.5
35.0
37.5
40.0
42.5
45.0
EF@1 %
FA10
0.0
0.2
0.4
0.6
0.8
Threshold
4
6
8
10
12
EF@1 %
KIT
0.0
0.2
0.4
0.6
0.8
Threshold
44.4
44.6
44.8
45.0
45.2
EF@1 %
PLK1
0.0
0.2
0.4
0.6
0.8
Threshold
30
31
32
33
EF@1 %
SRC
0.0
0.2
0.4
0.6
0.8
Threshold
16
18
20
22
24
26
28
EF@1 %
THRB
0.0
0.2
0.4
0.6
0.8
Threshold
5
6
7
8
9
10
EF@1 %
UROK
EF@1% vs Threshold
Figure 7:
Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E
targets. Enrichment factor at the top 1% of the ranking (EF1%) as a function of the threshold.
Line plot for LFDR (blue) and probability (orange).
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
ACES
LFDR τ=0.01 (AP=0.74)
Prob τ=0.6 (AP=0.67)
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
ADA
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
ANDR
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
EGFR
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
FA10
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
KIT
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
PLK1
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
SRC
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
THRB
0.0
0.2
0.4
0.6
0.8
1.0
Recall
0.0
0.2
0.4
0.6
0.8
1.0
Precision
UROK
PR Curve (Best AP per Method)
Figure 8:
Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E
targets. Precision–recall curves at the best threshold for each rule. Legends indicate the
chosen τ and the corresponding PR-AUC.
25
A
B
C
D
E
F
Figure 9: Performance comparison for varying numbers of (d) in the subgraph pattern
removal study. (A-E) Average performance metrics (AUROC, BEDROC, EF1%, EF5%,
EF10%) for 10 DUD-E targets as a function of d. Error bars represent the standard deviation
of 100 bootstrap resamples. (F) AUROC performance per target at d = 2000, ranked by
performance per target.
26
F.2
PU-Style Screening Results on PU dataset
To supplement the results in Section 4.1.2, we report additional baseline results for the
PU-style virtual screening experiment on the PU dataset. Table 14 extends Table 2 by
including all evaluated general-purpose molecular fingerprints. These are combined with
the same network propagation pipeline as in SubDyve, allowing a controlled comparison of
representation effectiveness. Descriptions of the 12 fingerprints used are provided in Table 15.
As shown in Table 14, SubDyve achieves the best performance across all BEDROC and EF
metrics, outperforming deep learning models and general fingerprint-based baselines. While
some fingerprints (e.g., rdkit, Graph) perform competitively under certain thresholds, they
fall short in consistency across metrics. These results support the advantage of task-specific
subgraph representations combined with uncertainty-aware refinement for robust screening
under sparse supervision.
Table 13: Ablation study results for the effect of subgraph fingerprint network and LFDR-
guided seed refinement on the 10 DUD-E dataset. The top results are shown in bold, and
the second-best are underlined, respectively.
Target
Subgraph
LFDR
BEDROC
EF1%
ACES
64 ± 2
38.7 ± 2.4
✓
62 ± 2
38.5 ± 2.0
✓
76 ± 1
35.7 ± 1.7
✓
✓
86 ± 2
57.0 ± 2.4
ADA
87 ± 2
41.1 ± 4.1
✓
87 ± 2
45.2 ± 4.1
✓
76 ± 3
36.3 ± 4.9
✓
✓
83 ± 4
50.6 ± 5.3
ANDR
27 ± 3
18.9 ± 2.4
✓
24 ± 2
18.4 ± 2.1
✓
45 ± 3
23.1 ± 2.7
✓
✓
72 ± 2
37.1 ± 2.1
EGFR
40 ± 2
30.9 ± 1.7
✓
33 ± 2
18.2 ± 1.6
✓
79 ± 2
53.2 ± 2.0
✓
✓
86 ± 2
60.0 ± 2.3
FA10
17 ± 2
11.8 ± 1.3
✓
6 ± 1
1.0 ± 0.4
✓
58 ± 2
46.8 ± 2.0
✓
✓
58 ± 2
47.0 ± 1.7
KIT
11 ± 2
3.7 ± 1.5
✓
11 ± 2
2.9 ± 1.3
✓
37 ± 3
5.8 ± 1.9
✓
✓
44 ± 3
13.8 ± 2.6
PLK1
61 ± 4
43.2 ± 4.4
✓
57 ± 5
32.2 ± 3.6
✓
78 ± 3
49.5 ± 4.7
✓
✓
85 ± 3
51.7 ± 4.0
SRC
56 ± 2
28.5 ± 1.7
✓
39 ± 2
12.6 ± 1.4
✓
25 ± 2
9.4 ± 1.3
✓
✓
61 ± 2
35.0 ± 1.8
THRB
28 ± 2
20.3 ± 1.9
✓
21 ± 2
10.7 ± 1.4
✓
32 ± 2
21.2 ± 1.7
✓
✓
61 ± 2
36.6 ± 2.0
UROK
35 ± 3
22.2 ± 2.9
✓
30 ± 3
13.0 ± 2.6
✓
30 ± 3
11.1 ± 2.6
✓
✓
37 ± 3
25.6 ± 2.4
27
Table 14: Complete performance comparison of SubDyve and baselines on the PU dataset.
The top results are shown in bold, and the second-best are underlined, respectively.
Method
BEDROC (%)
EF
0.5%
1%
3%
5%
10%
Deep learning-based
BIND (BIB, 2024) (Lam et al., 2024)
-
-
-
-
-
0.04 ± 0.08
AutoDock Vina (J. Chem. Inf. Model.) (Eberhardt et al., 2021)
1.0 ± 1.3
-
0.2 ± 0.3
0.6 ± 0.7
1.1 ± 0.6
1.2 ± 0.5
DrugCLIP (NeurIPS) (Gao et al., 2023)
2.7 ± 1.26
1.63 ± 1.99
1.63 ± 0.81
2.45 ± 1.02
2.53 ± 1.35
2.69 ± 0.62
PSICHIC (Nat MI) (Koh et al., 2024)
9.37 ± 3.08
4.07 ± 2.58
6.92 ± 3.30
7.48 ± 2.47
7.02 ± 1.80
5.35 ± 0.94
GRAB (ICDM) (Yoo et al., 2021)
40.68 ± 10.60
44.22 ± 8.35
45.21 ± 5.63
29.78 ± 1.38
18.69 ± 0.47
10.00 ± 0.00
Data mining-based
avalon + NP (Yi et al., 2023)
77.59 ± 1.72
135.76 ± 6.44
87.58 ± 2.9
31.55 ± 0.54
19.67 ± 0.4
9.88 ± 0.16
cdk-substructure + NP (Yi et al., 2023)
66.56 ± 2.89
125.4 ± 11.28
69.67 ± 2.98
28.15 ± 0.92
17.22 ± 0.79
9.22 ± 0.42
estate + NP (Yi et al., 2023)
52.44 ± 6.19
94.4 ± 13.68
57.87 ± 7.15
22.71 ± 2.7
15.92 ± 0.85
8.24 ± 0.38
extended + NP (Yi et al., 2023)
73.7 ± 3.3
136.73 ± 6.83
83.54 ± 5.21
31.28 ± 0.97
18.85 ± 0.55
9.63 ± 0.2
fp2 + NP (Yi et al., 2023)
72.68 ± 3.77
129.06 ± 11.89
85.49 ± 3.89
30.86 ± 0.69
18.69 ± 0.6
9.51 ± 0.36
fp4 + NP (Yi et al., 2023)
69.62 ± 3.69
122.76 ± 13.02
75.01 ± 4.21
28.96 ± 1.34
18.36 ± 1.0
9.59 ± 0.29
graph + NP (Yi et al., 2023)
75.86 ± 3.99
126.72 ± 10.05
84.73 ± 3.74
31.68 ± 0.92
19.1 ± 0.47
9.75 ± 0.24
hybridization + NP (Yi et al., 2023)
75.4 ± 5.18
135.15 ± 17.78
80.25 ± 5.88
31.14 ± 1.0
18.69 ± 0.6
9.63 ± 0.15
maccs + NP (Yi et al., 2023)
75.44 ± 4.85
135.72 ± 12.7
79.82 ± 4.76
31.0 ± 1.41
18.93 ± 0.66
9.67 ± 0.21
pubchem + NP (Yi et al., 2023)
63.48 ± 5.16
99.17 ± 10.17
69.3 ± 7.08
30.87 ± 1.27
18.77 ± 0.9
9.84 ± 0.15
rdkit + NP (Yi et al., 2023)
79.04 ± 1.96
148.69 ± 4.25
89.24 ± 2.08
31.68 ± 0.92
19.02 ± 0.55
9.55 ± 0.3
standard + NP (Yi et al., 2023)
72.42 ± 3.51
121.97 ± 15.51
84.34 ± 5.56
31.27 ± 0.96
19.01 ± 0.33
9.71 ± 0.24
SubDyve
83.44 ± 1.44
155.31 ± 6.38
97.59 ± 1.44
33.01 ± 0.60
19.90 ± 0.18
10.00 ± 0.00
Statistical Significance (p-value)
**
-
**
*
-
-
Table 15: Description of various molecular fingerprints used in virtual screening.
Fingerprint
Description
Standard
Based on the presence or absence of specific functional groups or atoms in a molecule. Simple and efficient but may lack specificity.
Extended
Similar to standard fingerprints but include additional features such as bond counts and stereochemistry.
Graph
Derived from the topological structure of a molecule; includes atom/bond counts, ring sizes, and branching patterns.
MACCS
A set of 166 predefined molecular keys from the MACCS project indicating presence/absence of specific substructures.
PubChem
Developed by NIH; based on predefined substructure paths in a molecule.
Estate
Encodes topological and electrostatic properties of a molecule.
Hybridization
Encodes the hybridization states of atoms in a molecule.
CDK-substructure
Captures the presence or absence of specific chemical substructures.
RDKit
Fingerprints generated using the RDKit toolkit; used for similarity searches and cheminformatics applications.
Avalon
Path-based fingerprints representing features derived from atomic paths within molecules.
FP2
Developed by OpenEye; uses topological and pharmacophoric information for similarity search and screening.
FP4
Also from OpenEye; incorporates topological, pharmacophoric, and electrostatic features for molecular comparison.
F.3
Ablation Study: Varying Seed Set Sizes
To provide a more comprehensive evaluation of PU-style virtual screening on the PU
dataset, we present additional baseline results in Table 16, which expands upon the findings
reported in Section 4.2.2 and Table 2.
This extended table includes a wider range of
general-purpose molecular fingerprints, each integrated into the same network propagation
framework used by SubDyve, ensuring a fair and controlled comparison of representational
capabilities. Additionally, we introduce Subgraph + NP as a control variant, applying
standard propagation over subgraph-derived networks without LFDR-based refinement.
Across all seed sizes, SubDyve consistently achieves superior performance, particularly in
BEDROC, EF3%, and EF5%. Subgraph + NP advantage also extends to EF1%, highlighting
the strength of subgraph-based representations in capturing bioactive chemical features
beyond those accessible to general fingerprints.
Although certain baselines—such as MACCS and Avalon—exhibit strong results at specific
enrichment thresholds, their performance lacks consistency across evaluation metrics, under-
scoring the robustness of SubDyve’s approach. These results suggest that subgraph patterns
and LFDR-based refinement have screening power over other pre-defined fingerprints, even
in harsh environments with much sparser seed.
F.4
Ablation Study: Varying d-dimensional subgraph pattern fingerprint
To evaluate the impact of subgraph pattern size on virtual screening performance, we
present the results of SubDyve under varying fingerprint sizes in Figures 9. The number of
subgraph patterns mined using the SSM algorithm and selected according to their entropy
importance was varied as d ∈{100, 200, 300, 500, 1000, 2000}. The figure shows the average
performance for 10 DUD-E targets evaluated using the AUROC, BEDROC, EF1%, EF5%,
and EF10% metrics. As the number of patterns increases, SubDyve shows consistently
improved performance across all metrics, indicating that incorporating a broader range
28
Table 16:
Ablation study on the number of seed compounds on the PU dataset. For each
seed size (50, 150, 250), the baseline of all generic fingerprint performance is shown. For
each number, the best value is highlighted in bold, and the second-best is underlined.
No. of Seeds
Method
BEDROC (%)
EF
0.30%
0.50%
1%
3%
5%
50
avalon + NP (Yi et al., 2023)
46.18 ± 3.95
54.02 ± 15.47
52.83 ± 12.07
48.9 ± 7.93
28.96 ± 0.68
18.28 ± 0.87
cdk-substructure + NP (Yi et al., 2023)
40.61 ± 3.4
59.58 ± 8.86
53.72 ± 7.0
42.36 ± 5.22
22.85 ± 1.4
14.85 ± 0.95
estate + NP (Yi et al., 2023)
34.87 ± 3.38
37.8 ± 15.17
39.92 ± 11.07
37.51 ± 4.77
20.53 ± 2.29
13.87 ± 0.63
extended + NP (Yi et al., 2023)
44.74 ± 4.41
36.61 ± 10.1
47.19 ± 10.52
49.29 ± 11.34
27.73 ± 0.79
17.55 ± 0.77
fp2 + NP (Yi et al., 2023)
43.51 ± 5.4
39.32 ± 13.17
43.07 ± 11.06
47.64 ± 10.63
27.2 ± 0.61
17.06 ± 0.6
fp4 + NP (Yi et al., 2023)
40.46 ± 3.45
54.11 ± 12.07
52.82 ± 7.24
42.38 ± 4.73
22.98 ± 1.98
15.59 ± 1.11
graph + NP (Yi et al., 2023)
45.08 ± 4.62
51.23 ± 18.33
55.28 ± 9.08
49.34 ± 9.06
27.06 ± 1.84
16.81 ± 0.87
hybridization + NP (Yi et al., 2023)
43.76 ± 4.14
41.99 ± 22.79
51.19 ± 14.22
48.49 ± 8.79
26.11 ± 1.03
16.89 ± 0.76
maccs + NP (Yi et al., 2023)
47.02 ± 3.83
56.77 ± 15.24
52.81 ± 9.24
50.92 ± 3.15
27.74 ± 2.04
17.05 ± 1.2
pubchem + NP (Yi et al., 2023)
41.13 ± 4.46
44.69 ± 14.09
45.51 ± 7.91
41.97 ± 6.91
25.7 ± 1.99
17.14 ± 1.0
rdkit + NP (Yi et al., 2023)
43.85 ± 3.37
39.21 ± 19.76
47.92 ± 11.32
50.55 ± 3.96
25.7 ± 1.89
15.59 ± 1.08
standard + NP (Yi et al., 2023)
44.64 ± 6.02
46.13 ± 13.78
47.94 ± 13.87
48.47 ± 9.85
27.46 ± 1.1
17.55 ± 0.73
Subgraph + NP
46.33 ± 1.26
37.79 ± 21.22
31.81 ± 12.68
53.93 ± 4.97
27.61 ± 1.47
17.27 ± 0.51
SubDyve
51.78 ± 3.38
69.5 ± 11.81
62.53 ± 14.84
52.66 ± 5.91
29.48 ± 2.37
18.15 ± 0.90
150
avalon + NP (Yi et al., 2023)
54.73 ± 2.42
65.0 ± 15.73
70.85 ± 9.83
60.72 ± 4.7
31.0 ± 0.55
19.59 ± 0.37
cdk-substructure + NP (Yi et al., 2023)
48.25 ± 3.74
75.75 ± 9.08
66.61 ± 10.79
53.76 ± 7.26
25.7 ± 1.09
16.48 ± 0.91
estate + NP (Yi et al., 2023)
40.42 ± 5.07
51.37 ± 17.43
48.78 ± 2.63
47.69 ± 10.35
21.48 ± 3.35
14.28 ± 2.02
extended + NP (Yi et al., 2023)
51.87 ± 3.8
55.4 ± 6.54
56.87 ± 12.75
60.28 ± 5.39
30.18 ± 1.52
18.53 ± 0.76
fp2 + NP (Yi et al., 2023)
50.99 ± 5.85
47.39 ± 15.42
56.12 ± 15.32
59.05 ± 7.79
29.24 ± 1.14
18.12 ± 0.71
fp4 + NP (Yi et al., 2023)
48.8 ± 3.46
74.15 ± 6.03
62.71 ± 8.39
53.38 ± 7.34
26.78 ± 1.63
17.38 ± 0.55
graph + NP (Yi et al., 2023)
52.85 ± 5.55
76.98 ± 15.73
70.09 ± 16.19
54.23 ± 8.76
29.78 ± 1.79
18.36 ± 0.77
hybridization + NP (Yi et al., 2023)
52.69 ± 5.27
70.17 ± 24.31
69.22 ± 19.13
57.05 ± 8.56
28.55 ± 0.97
17.79 ± 0.33
maccs + NP (Yi et al., 2023)
55.22 ± 4.39
79.99 ± 15.80
71.65 ± 13.30
60.69 ± 6.59
30.6 ± 1.29
18.85 ± 0.48
pubchem + NP (Yi et al., 2023)
46.74 ± 5.38
62.37 ± 19.0
58.63 ± 11.73
48.9 ± 7.76
28.01 ± 1.63
18.44 ± 0.7
rdkit + NP (Yi et al., 2023)
50.82 ± 3.79
52.69 ± 6.75
54.62 ± 10.48
54.62 ± 7.24
29.5 ± 1.59
17.79 ± 0.95
standard + NP (Yi et al., 2023)
51.59 ± 4.93
60.85 ± 11.29
63.42 ± 13.66
55.39 ± 8.09
29.78 ± 1.85
18.61 ± 0.75
Subgraph + NP
55.08 ± 1.52
44.39 ± 22.83
61.29 ± 10.07
67.17 ± 7.24
30.07 ± 1.38
18.22 ± 0.93
SubDyve
59.07 ± 2.25
74.67 ± 7.46
73.55 ± 10.51
66.72 ± 5.29
32.26 ± 1.04
19.73 ± 0.36
250
avalon + NP (Yi et al., 2023)
61.29 ± 2.44
97.18 ± 13.25
86.96 ± 9.16
68.05 ± 4.42
31.14 ± 0.52
19.51 ± 0.48
cdk-substructure + NP (Yi et al., 2023)
54.07 ± 4.05
95.87 ± 20.51
81.39 ± 13.84
61.09 ± 7.94
26.52 ± 0.43
16.97 ± 0.91
estate + NP (Yi et al., 2023)
44.34 ± 5.83
64.97 ± 13.25
66.81 ± 12.27
50.14 ± 8.31
22.16 ± 2.67
15.18 ± 1.32
extended + NP (Yi et al., 2023)
57.48 ± 3.71
64.79 ± 10.11
75.67 ± 10.89
64.35 ± 5.22
30.99 ± 1.26
18.85 ± 0.54
fp2 + NP (Yi et al., 2023)
56.88 ± 5.26
67.45 ± 16.53
75.57 ± 15.28
65.15 ± 8.3
30.19 ± 1.26
18.52 ± 0.61
fp4 + NP (Yi et al., 2023)
55.04 ± 3.77
91.76 ± 18.18
81.27 ± 12.86
62.75 ± 6.11
27.33 ± 0.66
18.03 ± 0.79
graph + NP (Yi et al., 2023)
58.68 ± 5.4
93.5 ± 19.79
78.84 ± 12.62
62.79 ± 9.89
30.19 ± 1.75
18.44 ± 0.83
hybridization + NP (Yi et al., 2023)
58.94 ± 4.32
99.75 ± 15.48
87.76 ± 17.54
65.2 ± 8.07
30.05 ± 0.8
18.36 ± 0.26
maccs + NP (Yi et al., 2023)
60.94 ± 4.57
102.66 ± 15.71
84.48 ± 12.88
67.21 ± 6.9
30.6 ± 1.67
19.01 ± 0.66
pubchem + NP (Yi et al., 2023)
51.92 ± 5.47
71.7 ± 15.79
73.95 ± 17.13
55.82 ± 8.04
29.78 ± 1.51
18.69 ± 0.87
rdkit + NP (Yi et al., 2023)
58.4 ± 2.09
70.29 ± 7.84
70.65 ± 9.75
68.89 ± 5.38
30.59 ± 1.14
18.52 ± 0.76
standard + NP (Yi et al., 2023)
57.08 ± 4.39
71.4 ± 15.11
76.42 ± 18.16
61.09 ± 8.56
31.0 ± 1.26
19.02 ± 0.42
Subgraph + NP
61.96 ± 3.24
41.01 ± 13.89
86.31 ± 11.97
80.31 ± 4.60
30.20 ± 1.44
18.49 ± 0.85
SubDyve
66.73 ± 2.71
97.69 ± 16.55
85.44 ± 12.82
78.19 ± 3.38
32.85 ± 0.60
19.72 ± 0.36
of chemically informative substructures improves model representation. For the finalized
setting of d = 2000, we additionally report target-specific AUROC scores to highlight the
consistency of performance across targets.
All results are calculated with 100 bootstrap resamples to obtain confidence intervals and
report both the mean and standard deviation.
These results highlight the benefits of
capturing different subgraph-level features, which contribute substantially to improving
screening accuracy in low-label environments.
F.5
Ablation Study: Impact of subgraph pattern fingerprint network and
LFDR-guided seed refinement on Ten DUD-E Targets
We conduct an ablation study on 10 DUD-E targets to evaluate the individual and joint
contributions of two core components of SubDyve: (1) the subgraph-based similarity network
and (2) the LFDR-based seed refinement. In Table 13, combining both components yields
the best results, with the highest EF1% on all 10 targets and top BEDROC scores on 9.
This highlights SubDyve’s robustness beyond the PU dataset and its screening performance
on DUD-E targets.
We also observed that applying LFDR refinement alone without the subgraph-based similar-
ity network often degrades or remains unchanged, while using both components together
consistently improves performance. This finding highlights the complementarity of chemically
meaningful network construction and uncertainty-awareness, both essential for robust and
generalizable virtual screening under low supervision.
29
PLK1 (174)
SRC (624)
ACES(1277)
Figure 10:
Top-10 matched Active/Decoy on the DUD-E Dataset over the number of seeds
utilized.
G
Additional Case Study Results
G.1
Top-10 matched Active/Decoy on the DUD-E dataset over the number
of seeds utilized
To check the distribution of active and decoy sets with subgraphs according to the number of
seeds used, we investigated the targets with the lowest, average, and highest number of seeds
for the DUD-E dataset in Figure 10. PLK1 (174 seeds utilized), the target with the lowest
number of seeds, captured one decoy molecule with a subgraph, while the rest remained
active. Even though decoy ranked in the top-10, the distribution of utilized subgraphs varied,
with 10 different subgraphs captured. For the average number of SRC (624) and ACES
(1277) targets, all molecules were active, and the distribution of subgraphs captured varied.
For SRC, 8 subgraph patterns were captured, and for ACES, 9 were captured.
Therefore, this result suggests that a higher number of seeds provides more structural
diversity to reflect the subgraphs of active and allows for more reliable structure-based
analyses. Nevertheless, the low number of seeds also reveals as much structural information
30
about the molecule as the number of subgraph patterns, suggesting that the results are
comparable to those with a higher number of seeds.
G.2
Active-Decoy Ranking Gap for DUD-E datasets
↑↑
↑
↓↓↓
↓
↑↑
↑ ↑
↓↓
↓
↑↑
↑
↓
↓
↑↑↑
↑
↓
↓
Figure 11:
Ranking gap for Active/Decoy on the DUD-E Dataset of 4 targets
To demonstrate the effectiveness of SubDyve’s ranking capability, we compare its performance
to the best-performing RDKit+NP baseline, which is based on general-purpose molecular
fingerprints. We focus on structurally similar active-decoy pairs and calculate the ranking
gap between them. As shown in Figure 11, SubDyve consistently ranks the active compounds
higher and the decoy compounds lower than the baseline, resulting in a much larger ranking
gap. This indicates that even under conditions of high structural similarity, SubDyve is able
to distinguish the true active compounds.
To facilitate visual interpretation, we annotated each active compound with an up arrow
(one, two, or three arrows for the top 50, top 250, and top 500, respectively) based on its
rank in the output of each model. For decoys, we annotated with one, two, or three down
arrows when SubDyve rank improved by < 10%, < 50%, or ≥50%, respectively, compared
to the rank difference between SubDyve and RDKit+NP on a percentile basis. The rank
difference is calculated as the gap between the active and decoy rankings, and we report
the difference in this gap by model. Higher values indicate that SubDyve separates the
active from the decoys more than the baseline. Statistical significance is evaluated using the
Wilcoxon signed rank test for all matched pairs.
G.3
Subgraph-Level Characterization of Augmented Seeds on the PU
dataset
To reveal the structural properties of compounds added during SubDyve’s refinement, we
analyze the differences between the initial and augmented seed sets across three perspec-
tives: subgraph motif enrichment, compound-level pattern visualization, and graph-level
connectivity.
First, we identify subgraph motifs that are significantly enriched in the augmented seed set.
As shown in Figure 12A, multiple motifs exhibit increased presence after refinement, with
Fisher’s exact test ranking the top 40 patterns by significance. Figure 12B further quantifies
these enrichments by measuring the difference in average motif counts, with statistical
significance determined using unpaired t-tests and Benjamini–Hochberg correction. These
results indicate that SubDyve preferentially amplifies discriminative patterns rather than
merely expanding chemical diversity.
31
ZINC ID: 145394039
ZINC ID: 72452583
A
B
C
D
Figure 12:
Subgraph-level analysis of augmented seeds on PU dataset. (A) Top 40 subgraph
motifs enriched in the augmented seed set, ranked by p-values from Fisher’s exact test.
(B) Subgraphs with the largest mean frequency increase relative to the initial seeds, with
q-values from unpaired t-tests (Benjamini–Hochberg correction). (C) Examples of ZINC
compounds containing enriched subgraphs that pass Lipinski’s rule of five; matched motifs
are highlighted. (D) Distribution of shortest path lengths from initial seeds to retrieved
actives in the SubDyve graph versus a Tanimoto k-NN baseline.
Second, we examine how enriched substructures manifest in real molecules. Figure 12C
presents ZINC compounds from the augmented set that satisfy Lipinski’s rule of five and
contain representative enriched subgraphs. Highlighted regions confirm that these patterns
correspond to chemically meaningful and interpretable substructures rather than artifacts of
global structural similarity.
Lastly, we assess whether SubDyve can prioritize structurally distant yet bioactive compounds.
We compute the shortest path distances from each retrieved compound to the closest known
active in the subgraph-similarity network and compare this distribution to a Tanimoto k-NN
baseline (similarity ≥0.9). As shown in Figure 12D, SubDyve retrieves candidates with
significantly longer path distances (p = 2.32 × 10−108, Mann–Whitney U-test), supporting
its ability to generalize beyond immediate structural neighbors while maintaining high
enrichment performance.
These results suggest that SubDyve refines the seed set in a substructure-aware and chemically
meaningful manner, enabling robust prioritization of active compounds even when they lie
outside the reach of traditional fingerprint-based similarity metrics.
H
Limitations
SubDyve requires constructing a target-specific chemical similarity network for each protein
target, which introduces preprocessing overhead due to repeated subgraph mining and graph
construction. While this design enables tailored modeling of bioactivity-relevant structures, it
may limit scalability when screening across a large number of targets. Additionally, although
LFDR-based seed calibration consistently outperforms probability-based heuristics in terms
of expected calibration error (ECE), performance in the mid-range threshold region remains
suboptimal.
Despite these limitations, SubDyve offers a promising foundation for scalable virtual screening.
Its modular architecture and uncertainty-aware design make it well suited for future extensions
32
to multi-target or multi-omics settings, where integration with transcriptomic profiles or cell
line information could further improve prioritization in complex biological contexts.
33
|
SubDyve: Subgraph-Driven Dynamic Propagation for Virtual Screening Enhancement Controlling False Positive Jungseob Yi1 Seoyoung Choi2 Sun Kim1,2,3,4 Sangseon Lee5 1Interdisciplinary Program in Artificial Intelligence, Seoul National University 2 3Interdisciplinary Program in Bioinformatics, Seoul National University 4AIGENDRUG Co., Ltd., Seoul 5 (VS) aims to identify bioactive compounds from vast chemical libraries, but remains difficult in low-label regimes where only a few actives are known. Existing methods largely rely on general-purpose molecular fingerprints and overlook class-discriminative substructures critical to bioactivity. Moreover, they consider molecules independently, limiting effectiveness in low-label regimes. We introduce SubDyve, a network-based VS framework that constructs a subgraph-aware similarity network and propagates activity signals from a small known actives. When few active compounds are available, SubDyve performs iterative seed refinement, incrementally promoting new candidates based on local false discovery rate. This strategy expands the seed set with promising candidates while controlling false positives from topological bias and overexpansion. We evaluate SubDyve on ten DUD-E targets under zero-shot conditions and on the CDK7 target with a 10-million-compound ZINC dataset. SubDyve consistently outperforms existing fingerprint or embedding-based approaches, achieving margins of up to +34.0 on the BEDROC and +24.6 on the EF1% metric. 1 Introduction The chemical space in drug discovery is vast, comprising more than 1060 synthetically accessible drug-like molecules (Virshup et al., 2013). Exhaustive exploration is infeasible, making virtual screening (VS) a key tool for identifying promising compounds small enough for experimental validation. However, in early stage discovery, most protein targets lack substantial ligand data; researchers often start with only a few known active molecules (Deng et al., 2024; Jiang et al., 2024; Chen et al., 2024; Scott et al., 2016). The central challenge in such low-data settings is to retrieve additional actives from billions of candidates, given only a target protein and sparse activity labels. Deep learning approaches to virtual screening fall into two main categories: supervised models and foundation models. The first trains on large, balanced datasets using graph neural networks or 3D molecular representations, but requires extensive labeled data and often overfits in low-data settings. The second leverages foundation models (FMs) pre-trained on large-scale unlabeled molecular corpora to support inference with minimal supervision. Representative FMs include ChemBERTa (Ahmad et al., 2022), MolBERT (Fabian et al., 2020), MoLFormer (Ross et al., 2022), GROVER (Rong et al., 2020), and AMOLE (Lee et al., 2024), which support zero-shot VS. Protein-language-model pipelines (Lam et al., 2024) show similar promise in structure-free contexts. However, FM-based methods screen compounds independently, failing to capture molecular dependencies. 1 18 Sep 2025 An orthogonal line of approach addresses these limitations with network-based label-efficient learning (Yi et al., 2023; Saha et al., 2024; Ma et al., 2024). Among these, network propagation (NP) has emerged as a promising and effective strategy. NP treats known actives as seed nodes in a molecular graph and diffuses influence across networks to prioritize candidates based on their global connectivity to the seed set. This framework captures higher-order molecular associations and naturally supports generalization from few labeled molecules. Despite its promise, two critical limitations of NP remain to be resolved. First, VS tasks often hinge on substructural variations between closely related molecules (Ottan`a et al., 2021; Stumpfe et al., 2019). Yet standard NP relies on similarity graphs uses general-purpose fingerprints (e.g., ECFP), which fail to encode fine-grained subgraph features that distinguish actives from inactive molecules (Yi et al., 2023), often blurring critical activity-relevant distinctions. Second, NP inherits the topological bias of the underlying graph: nodes in dense clusters may be ranked highly due to connectivity alone, inflating false positives, particularly when the seed set is small (Picart-Armada et al., 2021). To address these limitations, we propose SubDyve, a graph-based virtual screening framework for label-efficient compound prioritization. Rather than relying on generic molecular fingerprints, SubDyve builds a subgraph fingerprint graph using class-discriminative substructures mined via supervised subgraph selection (Lim et al., 2023). It then performs iterative seed refinement guided by local false discovery rate (LFDR) estimates (Efron, 2005), expanding high-confidence compounds as new seeds while controlling false positives. This process is integrated into a joint learning framework that trains a graph neural network with objectives for classification, ranking, and contrastive embedding. By combining subgraph-aware graph construction with uncertainty-calibrated propagation, SubDyve improves precision and generalization under sparse supervision. We evaluate SubDyve on the DUD-E benchmark and a 10M-compound ZINC/PubChem dataset for CDK7 target, where it achieves strong early enrichment using substantially fewer labels than deep learning and mining-based baselines. Our contributions are as follows: • We demonstrate that SubDyve achieves state-of-the-art performance on public and large-scale datasets under severe label constraints. • We propose a subgraph fingerprint graph construction method that identifies classdiscriminative subgraphs, preserving subtle activity-defining features that are overlooked by conventional fingerprints. • We introduce an LFDR-based seed refinement mechanism that overcomes graphinduced bias and enhances screening specificity while controlling false positive rates. 2 Related Work Representation-Centric Virtual Screening Traditional VS methods use fixed fingerprints (e.g., ECFP, MACCS) or 3D alignments with shallow classifiers, but often miss substructural patterns critical to bioactivity. Recent deep learning approaches embed molecular structures into task-optimized latent spaces. PharmacoMatch (Rose et al., 2025) frames pharmacophore screening as neural subgraph matching over 3D features. PSICHIC (Koh et al., 2024) and BIND (Lam et al., 2024) integrate protein sequence embeddings with ligand graphs. Large-scale pretrained encoders like ChemBERTa (Ahmad et al., 2022), MoLFormer (Ross et al., 2022), and AMOLE (Lee et al., 2024) reduce label demands via foundation model generalization. However, these methods treat compounds independently, ignoring higher-order molecular dependencies. Label-Efficient Network Propagation Network propagation (NP) enables label-efficient VS by diffusing activity signals over molecular similarity graphs. Yi et al. (Yi et al., 2023) construct target-aware graphs from multiple similarity measures to rank candidates from known actives. GRAB (Yoo et al., 2021) applies positive unlabeled (PU) learning to infer soft labels from few positives, demonstrating robustness in low-supervision settings. While NP-based methods capture higher-order dependencies, they often rely on generic fingerprints (e.g., ECFP) that overlook discriminative substructures and suffer from topological bias, 2 Unlabeled Class-discriminative subgraph pattern mining Subgraph pattern fingerprint ... ... ... Compound library Labeled Subgraph fingerprint Network ( ) Cosine similarity on subgraph fingerprint : Compounds : NP Final network propagation GCN GNN Feature encoding x MIteration Ensemble Nseed weights B Propagation with seed S1 LFDR based seed refinement x Nstratified split seed S(S1/S2) A Subgraph-guided graph construction Dynamic seed refinement with LFDR LFDR-based iterative seed expansion B Prioritize compound Ensemble split seed & Network propagation Filter pattern matching Seed weight NP+PCA hybrid rank score PCA similarity Propagation score ChemBERTa Subgraph pattern fingerprint Propagation & Evaluation S2 rank LBCE LRank LContrast Best seed weight Figure 1: Architecture of SubDyve Framework. (A) Overall process of SubDyve. Consists of subgraph-similariy network construction, dynamic seed refinement with LFDR, and prioritization. (B) Dynamic seed refinement with LFDR: seed weights are iteratively updated within each stratified split and aggregated for final prioritization. inflating false positives under sparse or uneven labeling (Picart-Armada et al., 2021; Hill et al., 2019). Substructure-Aware Similarity Graphs Recent work enhances molecular graphs with substructure-aware representations to capture subtle activity-relevant patterns. Supervised Subgraph Mining (SSM) (Lim et al., 2023) identifies class-specific motifs that improve prediction and reveal mechanistic effects like toxicity. ACANet (Shen et al., 2024) applies attention over local fragments to detect activity cliffs. While effective in property prediction, such methods remain underexplored in virtual screening, where subgraph-aware graphs could better resolve activity-specific features. 3 Methodology In this section, we present the architecture of SubDyve, a virtual screening framework for settings with few known actives. SubDyve first constructs a subgraph fingerprint network using class-discriminative subgraph patterns (Figure 1A). Based on this network, it performs dynamic seed refinement guided by LFDR to iteratively update seed weights (Figure 1B). To ensure robustness, refinement is repeated across N settings, and the ensembled seed weights are used for a final network propagation to prioritize unlabeled compounds. 3.1 Problem Formulation We define virtual screening as the problem of ranking a large set of unlabeled candidate compounds, especially under a low-label regime. Let Q be the set of candidate molecules, and C ⊂Q the subset of known actives against a target protein p. The goal is to assign a relevance score r(q) to each q ∈Q such that compounds in C are ranked higher than inactives. In low-label settings, a small subset Strain, Stest ⊂C of seed actives is available (Strain ∩Stest = ∅). We assume access to a compound similarity graph G = (V, E) over Q, where each node v ∈V is a compound and each edge (i, j) ∈E encodes structural similarity. The task is to propagate activity signals from Strain over G and assign relevance scores r(q) to all q ∈Q, prioritizing Stest. 3 3.2 Subgraph Fingerprint Network Construction We first mine class-discriminative subgraph patterns SP from the labeled seed set Strain using the SSM algorithm (Lim et al., 2023) with curated negative molecules. Each molecule is then encoded as a d-dimensional subgraph pattern fingerprint, where each dimension reflects the frequency of a discriminative subgraph combination (DiSC) (see Appendix B.2.1 and B.2.2 for details). We filter the candidate set Q to retain only compounds that match at least one subgraph in SP, forming a reduced set Q′. A subgraph fingerprint graph G is then constructed over Q′ using pairwise cosine similarity between subgraph pattern fingerprints (Appendix B.2.3). This graph serves as the foundation for network propagation and compound ranking. 3.3 Dynamic Seed Refinement with LFDR Estimation Using Strain as seed nodes, we perform initial network propagation over G to assign soft relevance scores to all compounds. While this provides a baseline prioritization, its effectiveness is limited by the small size of Strain. Signals tend to diffuse broadly or become biased toward topologically dense regions, resulting in reduced specificity and inflated false positives. To address this, SubDyve introduces a dynamic seed refinement procedure that iteratively improves the seed set using GNN-based inference and local false discovery rate (LFDR) estimation. To enable robust screening, we stratify Strain into disjoint subsets S1 and S2, where S1 initiates the initial network propagation, and S2 guides seed refinement via iterative loss updates in both the GNN and propagation modules. This mechanism enables confident expansion of the supervision signal while suppressing propagation-induced errors. 3.3.1 Feature Encoding for GNN Before refinement, we compute feature vectors for all compounds using SMILES (Weininger, 1988) and graph-derived descriptors. Each compound i ∈Q′ is encoded as: xi = [wi, nNP i , f FP i , sPCA i , hhyb i , ePT-CB i ], (1) where wi denotes a weight of seed S1 for network propagation. nNP i is a network propagation score using wi. sPCA i is a RBF similarity to seed S1 in PCA latent space, and hhyb i is a hybrid ranking computed as the average of the PCA and NP ranks, (rank(sPCA i ) + rank(nNP i ))/2. ePT-CB i and f FP i encode semantic and substructural properties, respectively. Details of the feature encoding are described in Appendix B.3. 3.3.2 Iterative Seed Refinement with LFDR Estimation Building on the initial propagation with S1, SubDyve performs iterative seed refinement over hyperparameter M iterations. In each iteration, the model is trained to recover held-out actives in S2 through three steps: (1) GNN training with a composite loss, (2) LFDR-guided seed refinement, and (3) network propagation and evaluation. (1) GNN Training. A graph neural network processes the subgraph fingerprint graph G to produce logits ˆli and embeddings zi for compound i. The model is trained using a composite loss that combines three objectives: Ltotal = (1 -λrank) · LBCE + λrank · LRankNet + λcontrast · LContrast. (2) Here, LBCE is a binary cross-entropy loss that adjusts for class imbalance by weighting active compounds more heavily according to their low prevalence. LRankNet is a pairwise ranking loss that encourages known actives in the held-out set S2 to be ranked above unlabeled candidates. LContrast is a contrastive loss applied over the held-out set S2, where each compound forms a positive pair with its most similar member in S2, while treating the remaining compounds in S2 as negatives. The coefficients λrank and λcontrast are hyperparameters controlling the contribution of each loss term. Full loss definitions and model optimization are described in Appendix B.4. 4 (2) LFDR-Guided Seed Refinement. Using the logits ˆli, we compute a standardized score: zi = ˆli -μ σ , qi = LFDR(zi), (3) where μ and σ are computed from the GNN logits of all compounds in Q′. Details of LFDR algorithm is described in Algorithm 3 at Appendix B. Compounds with qi τFDR are removed. The corresponding seed weights for subsequent network propagation are updated as: wi ←wi + β · (σ(zi) -π0), (4) where σ is the sigmoid function and π0 is the prior null probability. β denotes a hyperparameter to control update rate. This procedure ensures a provable upper bound on the false discovery rate (Efron, 2005), as detailed in Proposition 1. Proposition 1. Let Z1, . . . , Zm follow the two-group mixture f(z) = π0f0(z) + π1f1(z) and define the selection rule Rα = {i : lfdr(Zi) ≤α}, 0 0. Define the local false-discovery rate (Efron, 2005) (Efron, 2005) lfdr(z) = Pr(H = 0 | Z = z) = π0f0(z) f(z) . Choose hypotheses by Rα = i : lfdr(Zi) ≤α , 0 τFDR then 8: Saug ←Saug \ {i} ▷Remove low-confidence node 9: wi ←0 10: else if i ∈Saug then 11: wi ←wi + β · (σ(li) -b) ▷Update existing seed weight 12: end if 13: end for 14: return Updated Saug, s Algorithm 3 LFDR Estimation Require: Observed Z-scores Z = {Zi}V i=1, null density f0(z), bin count B, polynomial degree d, regularization parameter α ≥0, null proportion π0 Ensure: Local FDR estimates d lfdr(Zi) for all i = 1, . . . , V 1: Partition the range [min Z, max Z] into B equal-width bins (bj-1, bj] with centers zj = 1 2(bj-1 + bj) 2: Count samples in each bin: Nj ←#{Zi ∈(bj-1, bj]} for j = 1, . . . , B 3: Construct design matrix X ∈RB×(d+1) with Xjk = zk-1 j for k = 1, . . . , d + 1 4: Fit Poisson distribution: ˆβ ←arg min β ( - B X j=1 Nj · (x⊤ j β) -exp(x⊤ j β) + α 2 ∥β∥2 2 ) 5: for each i = 1, . . . , V do 6: Construct polynomial features x(Zi) = Z0 i , Z1 i , . . . , Zd i ⊤ 7: Estimate marginal density: bf(Zi) ←exp x(Zi)⊤ˆβ 8: Compute null density: f0(Zi) ←null pdf(Zi) 9: Compute LFDR: d lfdr(Zi) ←π0 · f0(Zi) bf(Zi) 10: Clip to [0, 1]: d lfdr(Zi) ←min(1, max(0, d lfdr(Zi))) 11: end for 12: return {d lfdr(Zi)}V i=1 we identify single-subgraph structural alerts (SAs) by computing feature importance scores using a random forest model. Subgraphs with importance above 0.0001 and entropy below 0.5 are retained as interpretable indicators of activity. 15 B.2.2 B.2.2 Generating Subgraph Pattern Fingerprints To capture higher-order structure, we construct Discriminative Subgraph Combinations (DiSCs)-co-occurring subgraph sets that frequently appear in actives. Starting with 1mer subgraphs, we iteratively build k-mer combinations using a branch-and-bound search with SMARTS-based pattern grouping. Candidates are scored using an entropy-based metric 1 -Entropy(Supppos, Suppneg), and only those with sufficient support (≥2%) and discriminative power are retained. Entropy filtering is not applied to 1-mers to preserve informative small motifs. The top-d DiSCs are selected based on entropy rank and used to construct a d-dimensional fingerprint vector, where each entry encodes the frequency of a specific subgraph combination within the molecule. These fingerprint vectors serve as task-aware molecular representations for graph construction. B.2.3 B.2.3 Constructing Molecular Similarity Networks We construct a similarity graph G = (V, E, we) by computing pairwise cosine similarity between subgraph pattern fingerprints. Each compound in Q′ is represented as a node, and weighted edges reflect structural proximity in the DiSC space. B.3 Details of Feature Encoding for GNN In the dynamic seed refinement step, we use a two-layer GNN to predict activity scores over the subgraph fingerprint network. Each compound i ∈Q′ is encoded as: xi = [wi, nNP i , f FP i , sPCA i , hhyb i , ePT-CB i ] (5) The components of the feature vector are described below: • wi: Weight of seed to use for network propagation. Initially set wi∈S1 to 1, otherwise set to 0. • nNP i : Network propagation score drive from wi. • f FP i : Class-discriminative substructure features extracted from subgraph pattern fingerprints. • sPCA i : RBF Similarity to seed compounds in a PCA-projected latent space based on subgraph pattern fingerprints f FP i . • hhyb i : Hybrid ranking score computed as a weighted average of the rankings of sPCA i and nNP i . • ePT-CB i : Semantic features derived from a pretrained ChemBERTa model, representing molecular sequence semantics. Each GNN layer is followed by a residual connection and LayerNorm, with the second layer reducing the hidden dimension by half. The model outputs a scalar logit ˆli computed via a linear layer for ranking, along with a 32-dimensional embedding vector for representation regularization. B.4 Details of Composite Loss for GNN Following (Lin et al., 2024), SubDyve jointly optimizes classification, ranking, and representation learning objectives within the GNN during seed refinement. The final loss is a weighted sum of three components: binary cross-entropy LBCE, pairwise ranking loss LRankNet, and contrastive loss LContrast. Each component is designed to enhance model performance under sparse supervision. 1. Binary Cross-Entropy Loss (BCE) We employ a class-balanced BCE loss to accommodate severe class imbalance. Additionally, compound-level weights modulated by network propagation scores enhance robustness to 16 noisy supervision: LBCE = 1 |Q′| |Q′| X i=1 wi · " yi · log σ(ˆli) + PW · (1 -yi) · log(1 -σ(ˆli)) # , σ(ˆli) = 1 1 + e-ˆli , wi = 1 + γnp · nNP i (6) where ˆli is the predicted logit for compound i, and yi ∈{0, 1} is the ground truth label indicating whether i ∈S2 (active) or not (inactive). nNP i is the NP score from initial propagation. γnp is set to 5. The term PW balances class skew by weighting the active class more heavily: pos weight = |{i|yi=0}| |{i|yi=1}|+ε. 2. Pairwise RankNet Loss To improve early recognition, we adopt a pairwise margin-based loss that encourages higher scores for known actives in S2 relative to likely inactives: LRankNet = 1 C X (i,j) max 0, m -(ˆli -ˆlj) , i ∈S2, ; j ∈Q′ \ S2. (7) Here, m is a margin hyperparameter and C denotes the number of valid (i, j) pairs. 3. Contrastive Loss (InfoNCE) This loss promotes intra-class consistency in the learned embeddings. For each compound i ∈Q′, we select its most similar positive compound zi+ from S2 based on subgraph pattern fingerprint similarity, and treat the remaining compounds in S2 as zi-. LContrast = 1 |S2|× X i∈S2 -log exp z⊤ i zi+ τ exp z⊤ i zi+ τ + P k exp z⊤ i z(k) iτ (8) where τ is a temperature parameter. 4. Total Composite Loss The total loss is a weighted combination: Ltotal = (1 -λrank) · LBCE+ λrank · LRankNet+ λcontrast · LContrast. (9) where λrank = 0.3 and λcontrast = 0.6 fixed across all experiments. The GNN is trained using Adam optimizer (Kingma & Ba, 2014) with a fixed learning rate of 8 × 10-4 and weight decay of 1.57 × 10-5. Hyperparameter selection is discussed in Appendix C.2. 17 C Implementation & Evaluation Details C.1 Network Propagation Algorithm Network propagation (NP) is used to prioritize candidate compounds by diffusing signals from a small number of known actives across a chemical similarity network. This approach has been shown to effectively integrate relational structure for large-scale inference (Cowen et al., 2017). NP iteratively balances the initial bioactivity signal carried by S with the topological context supplied by the graph, allowing evidence to flow along indirect paths and uncovering nodes that are not immediate neighbors of the seeds: P (t+1) = (1 -α) WN P (t) + α P (0), (10) where P (0) is a one-hot vector encoding compounds S, WN is the column-normalized adjacency matrix, and α ∈[0, 1] controls the restart probability. Over iterations, the score vector P (t) converges to a stationary distribution that captures both local and global connectivity, thereby ranking compounds in Q by their network proximity to S. By integrating signals along multiple paths rather than relying solely on direct neighbors, NP effectively highlights previously unconnected yet pharmacologically relevant candidates, making it well suited for large-scale virtual screening task. Network propagation (NP) prioritizes candidate compounds by diffusing activity signals from a small set of known actives across a chemical similarity network. This method effectively incorporates both local and global graph structure, enabling inference over indirect molecular relationships (Cowen et al., 2017). The propagation is formulated as an iterative update: P (t+1) = (1 -α)WN P (t) + αP (0), (11) where WN is the column-normalized adjacency matrix of the molecular graph, and α ∈[0, 1] is the restart probability. The initial vector P (0) encodes seed activity, typically assigning 1 to known actives and 0 elsewhere. As iterations proceed, P (t) converges to a stationary distribution that reflects both direct and indirect connectivity to the seed set. This enables the identification of structurally distant yet functionally related candidates, making NP a suitable backbone for large-scale virtual screening under sparse supervision. C.2 Hyperparameter Search Space We perform hyperparameter optimization in two phases depending on the parameter type. GNN model architecture and loss-related parameters are tuned using Bayesian Optimization (100 iterations) (Appendix Table 5). Hyperparameters related to iteration process on dynamic seed refinement are searched via random search (Appendix Table 6). C.3 Evaluation Metrics To evaluate the performance of early retrieval in virtual screening, we adopt the BEDROC and Enrichment Factor (EF) metrics. BEDROC. The Boltzmann-Enhanced Discrimination of ROC (BEDROC) is designed to emphasize early recognition by assigning exponentially decreasing weights to lower-ranked active compounds. It is defined as: BEDROCα = 1 -e-α 1 -e-α/N 1 n n X i=1 e-αri/N ! × sinh(α/2) cosh(α/2) -cosh(α/2 -αRα) + 1 1 -eα(1-Rα) (12) 18 Table 5: Hyperparameters related to GNN model Parameter Search Space Selected Value Hidden dimension {16, 32, 64, 128} 64 Embedding dimension {8, 16, 32, 64} 32 λrank [0.0, 1.0] 0.3 λcontrast [0.0, 1.0] 0.6 Margin for RankNet loss [0.0, ..., 1.0] 0.5 Weight decay [10-6, 10-4] 1.57 × 10-5 β (seed weight update rate) [0.1, 1.0] 0.7 Learning rate [10-4, 10-2] 0.0008 γNP (NP score weight) {0.0, ..., 5.0} 5.0 GNN layer type {GCN, GIN, GAT} GCN Table 6: Hyperparameters related to iteration of seed refinement Parameter Search Space Selected Value Training epochs {10, 20, 30, 40, 50} 50 Max iterations (M) {3, 4, 5, 6, 7} 6 Early stopping patience of iterations {1, 2, 3} 3 Stratified split (N) {10, 5, 3, 2} 2 LFDR threshold τFDR {0.03, 0.05, 0.1, 0.3, 0.5} 0.1 n is the number of active compounds, N is the total number of molecules, ri is the rank of the i-th active compound, and Rα = n/N. Following prior work (Truchon & Bayly, 2007), we set α = 85 to prioritize early retrieval. Enrichment Factor (EF). The EF quantifies the proportion of actives retrieved within the top-ranked subset relative to a random distribution. It is computed as: EFx% = na/Nx% n/N (13) n is the number of active compounds, N is the total number of molecules, Nx% is the number of molecules in the top x% of the ranking, and na is the number of actives within that portion. Higher EF values indicate better prioritization of active compounds in the early ranks. D Experiment Details D.1 Zero-Shot Virtual Screening Setup on Ten DUD-E Targets For each DUD-E target, we curate a high-quality dataset of bioactive compounds from PubChem while preserving the zero-shot setting. To avoid data leakage, we filter protein homologs using MMseqs2 (Steinegger & S ̈oding, 2017), excluding any proteins with sequence identity greater than 90% relative to the target. From the remaining homologs (identity ≤0.9), we retrieve associated compounds and their bioactivity annotations, retaining only those labeled as active or inactive with valid potency measurements. Duplicate entries and records with missing values are removed to ensure data reliability. We additionally compare the FM-based baseline model with ChemBERTa (Ahmad et al., 2022) and MoLFormer (Ross et al., 2022) models for further comparison. Using the pre-trained models, we calculate performance by averaging the embeddings of bioactive compounds using the pre-trained models and taking the active and inactive compounds from DUD-E and ranking them according to their cosine similarity to each compound. For the ADA target, where PubChem 19 Table 7: Summary of PubChem-augmented data for DUD-E targets, including similarity ranges, and number of seed molecules used in propagation. Target PDB code Active Ligands Decoy Ligands PubChem Total (Act/Inact) Similarity Range Seed Count ACES 1e66 451 26198 1604 (1502/102) 0.647-0.9 1277 ADA 2e1w 90 5448 444 (386/58) 0.909-0.953 335 ANDR 2am9 269 14333 918 (822/96) 0.56-0.9 755 EGFR 2rgp 541 35001 576 (427/149) 0.478-0.9 374 FA10 3kl6 537 28149 261 (237/24) 0.845-0.9 195 KIT 3g0e 166 10438 3299 (3164/135) 0.537-0.9 771 PLK1 2owb 107 6794 353 (191/162) 0.61-0.9 174 SRC 3el8 523 34407 1232 (827/405) 0.88-0.9 624 THRB 1ype 461 26894 3126 (2071/1055) 0.833-0.9 477 UROK 1sqt 162 9837 825 (750/75) 0.489-0.9 615 annotations are sparse, a slightly higher identity threshold (up to 0.953) is used to enable sufficient subgraph extraction. Appendix Table 7 summarizes the number of actives/inactives and the protein similarity thresholds used per target. For downstream propagation, we construct a target-specific subgraph fingerprint network comprising PubChem actives and DUD-E molecules (including actives and decoys). PubChem actives with IC50 ≤500 nM are selected as seed molecules, while the remaining actives are incorporated as unlabeled nodes. Subgraph patterns are extracted via a Murcko-scaffold split and encoded into 2000-dimensional subgraph fingerprints, which serves as the molecular representation for each node in the graph. Baseline Models Training Summary We evaluate SubDyve against two structure-based virtual screening baselines: CDPKit (Alignment) (Seidel, 2024) and PharmacoMatch (Rose et al., 2025) (Table 8). CDPKit is a geometric alignment algorithm that performs unsupervised pharmacophore matching without model training, relying solely on spatial fit. In contrast, PharmacoMatch is a self-supervised learning framework trained on 1.2 million drug-like compounds from ChEMBL((Davies et al., 2015; Zdrazil et al., 2024)). It learns a joint embedding space for pharmacophore graphs using contrastive loss and an order embedding objective, enabling similarity-based retrieval of actives without direct supervision. Table 8: Key characteristics for baseline methods. Model Training Data Description PharmacoMatch 1.2M small molecules from ChEMBL after Lipinski filtering and duplication removal; trained in a self-supervised fashion on 3D pharmacophore graphs using contrastive loss and order embeddings. CDPKit (Alignment) Unsupervised alignment of 3D pharmacophores using geometric fit (no training). ChemBERTa ChemBERTa-2 is a model based on ChemBERTa that optimises pre-training performance through large-scale pre-training and multi-task-self-supervised learning comparisons using up to 77 million molecular data. MoLFormer MoLFormer is a transformer-based molecular language model trained using efficient linear attentions and rotational positional embeddings on 1.1 billion molecules (SMILES) data, outperforming traditional graph and language-based models in many of the 10 benchmarks. DrugCLIP DrugCLIP is a dense retrieval-based contrastive learning model that solves the virtual screening problem by learning the similarity between a protein pocket and a molecule. The model leverages extensive data, including over 120,000 protein-molecule pairs and more than 17,000 complex structures from PDBbind, BioLip, and ChEMBL datasets, and utilizes the HomoAug data augmentation method to maximize the diversity of the training set. D.2 PU-Style Screening Setup on PU dataset To evaluate the screening effectiveness of SubDyve in a realistic compound discovery scenario, we construct a dataset using molecules from the ZINC20 and PubChem databases. Specifically, we retrieve 10,082,034 compounds from ZINC20 (https://zinc20.docking.org/tranches/ home/) and select CDK7 as the target protein. From PubChem, we obtain 1,744 unique compounds annotated for CDK7 after deduplication, of which 1,468 are labeled as active based on curated assay data. To simulate sparse supervision, we randomly select 30% of the active compounds for Strain. From this subset, we designate 10% as a held-out set S2 and use the remainder as initial 20 seed nodes S1. The other 70% of actives are included in the screening graph as unlabeled nodes, emulating the presence of under-characterized actives in large-scale libraries. This setup ensures that the test set remains completely unseen and that the majority of actives do not contribute label information during propagation. For fair comparison across network propagation (NP)-based baselines, we use the same seed sets across all runs, providing identical supervision to all models. We extract subgraph patterns using the SSM algorithm (Appendix B.2.1), excluding all test compounds to prevent leakage. A total of 100 discriminative patterns are used to construct subgraphbased fingerprints. To assess whether the difference in performance between methods was statistically significant, we applied the Mann-Whitney U test over results from five independent runs. The hyperparameter settings follow Appendix Table 6, except the stratified split N is set to 5. D.3 Details of Experimental Setup for Varying Seed Set Size To demonstrate that SubDyve can effectively rank the 10% held-out set S2 specified in the PU-style screening setup created with the PU dataset with much fewer seeds, we conduct an ablation study with much fewer S1. Each seed count was randomly selected, and performance reported over five runs. This setup creates a much harsher situation where the test set is completely unseen and a much larger amount of active is prevented from contributing label information during propagation. For fairness, we use the same number of seeds in the network propagation-based baseline and perform five runs on a randomly selected S2 as same in Appendix D.2. When extracting subgraph patterns with the SSM algorithm, we also exclude all test compounds to prevent leakage. 21 E Compute Resources and Time Profiling Training Environments. Appendix Table 9 presents the system and software environment used for all experiments. The setup includes a high-memory server equipped with a single NVIDIA RTX A6000 GPU, dual Intel Xeon processors, and 512GB of RAM, running Ubuntu 22.04.4 with CUDA 11.1. All experiments are conducted on a single server. Time Profiling of Components in SubDyve. Appendix Table 10 summarizes the average runtime per module. The profiling is conducted over 10 DUD-E targets, and provides a breakdown of SubDyve's major computational steps. Subgraph mining is the most timeintensive step, while similarity computations and network construction remain lightweight. Notably, computing chemical similarity for one million compound pairs takes approximately 5 hours. Note that profiling time of LFDR-guided seed refinement integrates the GNN training time. Table 9: System and software environment. Component Specification GPU 1 × NVIDIA RTX A6000 (49GB) CPU Intel Xeon Gold 6248R @ 3.00GHz (48 cores) RAM 512 GB OS Ubuntu 22.04.4 LTS CUDA 11.1 Python 3.9.16 PyTorch 1.10.1 + cu111 PyTorch Geometric 2.0.4 scikit-learn 1.6.1 scipy 1.10.1 Table 10: Profiling time of each module. Module Profiling Time Subgraph Mining (SSM) 108.55 ± 15 sec / iteration Subgraph Pattern Similarity Computation 0.4 ± 0.2 ms / compound pair Subgraph fingerprint Network Construction 0.1 ± 0.1 ms / edge Network Propagation 16 ± 23 sec / graph Dynamic Seed Refinement with LFDR (incl. GNN training) 1.27 ± 0.56 sec / epoch F Additional Experimental Results F.1 Zero-Shot Virtual Screening Results on Ten DUD-E Targets F.1.1 Comprehensive Evaluation Appendix Table 11 and 12 show a comprehensive evaluation of SubDyve and baseline methods across ten DUD-E targets. Metrics include AUROC, BEDROC, EF1%, EF5%, and EF10%, with confidence intervals estimated via 100 resampling trials. SubDyve achieves the best average performance across all five metrics, consistently outperforming other methods in early recognition and enrichment, while maintaining high AUROC. These results support the effectiveness of combining subgraph fingerprint network construction with LFDR-based refinement for zero-shot virtual screening. F.1.2 Ablation Study of LFDR-guided Refinement Figure 4 - 8 further analyze the effect of the seed-selection rule (τFDR) on calibration and retrieval performance. In brief, we evaluate the impact of the LFDR threshold τFDR on 22 Table 11: Comprehensive evaluation of SubDyve and baseline methods on ten DUD-E targets. Confidence intervals were estimated via bootstrapping (DiCiccio & Efron, 1996), using 100 resampled datasets to compute the standard deviations. AUROC and BEDROC are reported as percentages. The best and second-best scores per metric are in bold and underline, respectively. Protein Target Ours PharmacoMatch CDPKit DrugCLIP AutoDock Vina AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% ACES 91±1 86±2 57.0±2.4 17.1±0.4 8.8±0.2 58±2 18±1 8.4±1.4 3.5±0.3 2.2±0.2 55±1 16±2 5.5±1.3 3.0±0.3 2.1±0.2 80±1 52±2 32.4±1.7 10.2±0.5 5.9±0.2 77±0.0 33±1.1 13.87±0.5 6.47±0.2 4.32±0.1 ADA 90±3 83±4 50.6±5.3 16.6±0.8 8.4±0.4 83±3 44±4 16.7±4.1 9.5±1.0 5.7±0.4 94±1 82±3 53.6±4.3 15.9±0.9 8.4±0.4 96±1 82±3 60.2±5.3 16.2±0.8 8.4±0.3 57±0.0 7±2.7 1.05 ±1.7 0.42±0.7 2.12±0.4 ANDR 87±2 72±2 37.1±2.1 14.8±0.5 8.0±0.2 76±1 33±2 15.8±1.9 6.0±0.5 4.3±0.3 71±2 26±2 12.6±2.1 4.4±0.5 3.7±0.3 91±1 64±3 34.3±2.4 12.7±0.6 7.5±0.3 64±0.0 34±1.2 18.41±0.6 6.89±0.3 4.07±0.2 EGFR 94±1 86±2 60.0±2.3 17.0±0.3 8.6±0.2 63±1 11±1 3.1±0.7 2.0±0.3 1.6±0.2 76±1 26±2 12.2±1.6 4.6±0.3 3.7±0.2 69±1 40±2 28.7±2.1 7.4±0.4 4.4±0.2 64±0.0 14±1.4 3.68 ±0.7 2.76±0.3 2.17±0.1 FA10 79±1 58±2 46.8±1.7 10.6±0.4 5.5±0.2 47±1 1±1 0.2±0.2 0.1±0.1 0.2±0.1 55±1 6±1 0.0±0.0 0.7±0.2 1.2±0.1 94±1 86±1 51.2±1.8 17.0±0.3 9.1±0.1 84±0.0 41±1.7 15.77±0.8 7.28±0.3 5.05±0.2 KIT 82±2 44±3 13.8±2.6 11.3±0.7 6.1±0.4 56±2 4±1 0.0±0.0 0.4±0.2 0.7±0.2 63±2 9±2 1.1±0.8 1.2±0.4 1.8±0.3 30±3 10±2 5.2±1.7 1.8±0.5 1.2±0.3 78±0.0 18±2.4 2.97 ±1.9 3.23±0.5 3.11±0.3 PLK1 94±2 85±3 51.7±4.0 17.7±0.6 9.0±0.3 62±3 9±2 1.5±1.3 0.7±0.3 1.8±0.3 75±3 39±3 5.7±2.3 10.2±0.9 5.5±0.5 88±2 66±4 45.0±4.0 12.8±0.9 7.3±0.4 64±0.0 13±1.8 1.83 ±0.3 1.85±0.3 2.22±0.4 SRC 82±1 61±2 35.0±1.8 11.3±0.4 6.9±0.2 79±1 27±1 6.0±1.0 5.3±0.4 4.6±0.2 80±1 28±1 11.1±1.2 5.3±0.4 4.3±0.2 59±2 16±1 8.1±1.3 2.9±0.3 2.0±0.2 66±0.0 13±1.2 4.00 ±0.5 2.36±0.2 1.96±0.1 THRB 78±1 61±2 36.6±2.0 11.9±0.5 6.0±0.2 70±1 22±1 5.9±1.0 4.8±0.4 3.3±0.2 79±1 35±2 11.8±1.5 7.2±0.4 4.5±0.2 97±0 83±1 46.9±1.7 17.2±0.3 9.3±0.1 81±0.0 25±1.8 4.31 ±1.0 4.80±0.3 3.98±0.2 UROK 55±3 37±3 25.6±2.4 8.0±0.7 4.1±0.3 60±2 4±1 0.6±0.7 0.5±0.2 0.4±0.2 91±1 55±3 24.5±2.8 10.4±0.9 8.2±0.4 93±1 73±3 48.1±3.1 14.7±0.7 8.1±0.3 80±0.0 28±1.3 7.90 ±0.7 5.88±0.3 3.92±0.2 Avg. rank 2.2 1.4 1.5 1.4 1.5 4.2 4.5 4.4 4.4 4.3 3.0 3.4 3.5 3.4 3.2 2.2 2.0 1.7 2.0 2.2 3.4 3.7 3.9 3.8 3.8 Final rank 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 1 2 2 2 2 4 4 4 4 4 calibration and screening performance as shown in Figure 4. As a baseline, we compare against a probability-based refinement strategy (denoted as PROB), which directly thresholds GNN logits without LFDR estimation. B A Figure 4: Effect of LFDR Threshold τFDR. (A) Seed-calibration curves for LFDR (blue) and PROB thresholding (orange). Calibration quality improves as the curve approaches the diagonal; ECE values are shown for both methods. (B) Boxplot of BEDROC (α = 20) scores for LFDR and PROB across threshold values. Figure 4A shows the seed-calibration curves across thresholds for each method. LFDR achieves substantially better calibration, with an expected calibration error (ECE) of 0.204 compared to 0.511 for PROB. This indicates that LFDR more accurately controls the false discovery rate during seed refinement. Figure 4B shows BEDROC scores across five LFDR thresholds (τFDR) and five probability thresholds (τPROB). LFDR consistently yields higher performance across the threshold range, indicating that better calibration improves early recognition. Similar trends for EF1% and AUPRC are shown in Figure 7 and Figure 8. • Figure 5: Seed-calibration curves comparing LFDR-based and probability-based (PROB) refinement strategies. LFDR yields lower expected calibration error (ECE) across most targets, demonstrating better control of false discovery rates. • Figure 6: BEDROC (α = 20) scores evaluated across thresholds. LFDR generally shows higher BEDROC values with reduced variance, reflecting improved early enrichment. • Figure 7: EF1% plotted as a function of threshold. LFDR consistently outperforms PROB across thresholds for most targets, confirming its robustness under different calibration conditions. • Figure 8: Precision-recall (PR) curves at the best-performing threshold per method. LFDR achieves higher PR-AUC for the majority of targets, especially those with imbalanced label distributions. Together, these results highlight the advantages of LFDR-guided refinement for both FDR calibration and early recognition. SubDyve demonstrates strong and stable performance across a variety of target proteins, offering a reliable solution for virtual screening under sparse supervision. 23 Table 12: Comprehensive evaluation of SubDyve and baseline methods on ten DUD-E targets. Confidence intervals were estimated via bootstrapping (DiCiccio & Efron, 1996), using 100 resampled datasets to compute the standard deviations. AUROC and BEDROC are reported as percentages. The best and second-best scores per metric are in bold and underline, respectively. Protein Target Ours ChemBERTa MolFormer AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% AUROC BEDROC EF1% EF5% EF10% ACES 91±1 86±2 57.0±2.4 17.1±0.4 8.8±0.2 53±0 9±1 1.9±0.9 1.5±0.2 1.3±0.1 74±2 24±2 8.3±0.7 4.3±0.6 3.7±0.4 ADA 90±3 83±4 50.6±5.3 16.6±0.8 8.4±0.4 76±1 15±3 4.2±1.6 1.9±0.3 2.6±0.6 89±0 72±1 48.3±0.9 13.9±0.3 7.2±0.2 ANDR 87±2 72±2 37.1±2.1 14.8±0.5 8.0±0.2 39±0 5±1 1.9±0.4 0.9±0.2 0.8±0.2 56±1 9±1 3.0±0.1 1.6±0.3 1.5±0.3 EGFR 94±1 86±2 60.0±2.3 17.0±0.3 8.6±0.2 77±1 35±1 16.4±0.6 7.0±0.1 5.0±0.0 93±1 75±2 48.1±2.8 15.2±0.4 8.4±0.2 FA10 79±1 58±2 46.8±1.7 10.6±0.4 5.5±0.2 73±1 28±3 12.9±1.6 5.4±0.5 3.4±0.2 93±0 66±0 36.7±0.4 13.0±0.2 7.6±0.1 KIT 82±2 44±3 13.8±2.6 11.3±0.7 6.1±0.4 62±1 16±1 4.9±3.7 2.8±0.0 2.5±0.1 93±0 66±1 36.8±0.9 13.6±0.4 7.7±0.4 PLK1 94±2 85±3 51.7±4.0 17.7±0.6 9.0±0.3 60±3 15±1 4.9±1.4 2.9±0.2 1.9±0.3 89±1 69±0 35.2±4.0 14.4±0.0 8.0±0.1 SRC 82±1 61±2 35.0±1.8 11.3±0.4 6.9±0.2 64±2 15±1 3.3±0.7 3.1±0.2 2.6±0.4 82±1 48±1 21.5±1.5 10.4±0.4 6.5±0.1 THRB 78±1 61±2 36.6±2.0 11.9±0.5 6.0±0.2 79±0 34±2 14.5±2.2 6.3±0.4 4.7±0.1 59±1 6±1 1.2±0.1 0.9±0.3 0.9±0.1 UROK 55±3 37±3 25.6±2.4 8.0±0.7 4.1±0.3 62±3 5±1 0.6±0.0 0.3±0.1 1.0±0.3 79±3 36±2 10.0±1.5 7.6±0.5 5.2±0.4 Avg. rank 1.6 1.2 1.1 1.2 1.3 2.7 2.9 2.9 2.9 2.9 1.8 1.9 2.0 1.9 1.8 Final rank 1 1 1 1 1 3 3 3 3 3 2 2 2 2 2 Calibration Comparison Across Targets Figure 5: Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E targets. Seed-calibration curves for LFDR (blue) and probability thresholding (orange). The closer the curve lies to the diagonal, the better the calibration. Expected calibration error (ECE) is annotated for both methods. 24 LFDR PROB Method 0.820 0.825 0.830 0.835 0.840 BEDROC ACES LFDR PROB Method 0.746 0.748 0.750 0.752 0.754 0.756 0.758 BEDROC ADA LFDR PROB Method 0.68 0.69 0.70 0.71 0.72 0.73 0.74 BEDROC ANDR LFDR PROB Method 0.8425 0.8450 0.8475 0.8500 0.8525 0.8550 0.8575 BEDROC EGFR LFDR PROB Method 0.53 0.54 0.55 0.56 0.57 BEDROC FA10 LFDR PROB Method 0.38 0.39 0.40 0.41 BEDROC KIT LFDR PROB Method 0.824 0.826 0.828 0.830 0.832 BEDROC PLK1 LFDR PROB Method 0.520 0.525 0.530 0.535 BEDROC SRC LFDR PROB Method 0.525 0.530 0.535 0.540 0.545 BEDROC THRB LFDR PROB Method 0.300 0.305 0.310 0.315 0.320 0.325 BEDROC UROK BEDROC Comparison (α=20.0) Figure 6: Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E targets. BEDROC (α = 20) scores evaluated across the threshold grid; boxes show the interquartile range, whiskers the 5-95 percentiles. Box plot for LFDR (blue) and probability (orange). 0.0 0.2 0.4 0.6 0.8 Threshold 42 44 46 48 50 EF@1 % ACES LFDR Prob 0.0 0.2 0.4 0.6 0.8 Threshold 32 33 34 35 36 EF@1 % ADA 0.0 0.2 0.4 0.6 0.8 Threshold 27.5 30.0 32.5 35.0 37.5 40.0 42.5 EF@1 % ANDR 0.0 0.2 0.4 0.6 0.8 Threshold 55 56 57 58 59 60 EF@1 % EGFR 0.0 0.2 0.4 0.6 0.8 Threshold 30.0 32.5 35.0 37.5 40.0 42.5 45.0 EF@1 % FA10 0.0 0.2 0.4 0.6 0.8 Threshold 4 6 8 10 12 EF@1 % KIT 0.0 0.2 0.4 0.6 0.8 Threshold 44.4 44.6 44.8 45.0 45.2 EF@1 % PLK1 0.0 0.2 0.4 0.6 0.8 Threshold 30 31 32 33 EF@1 % SRC 0.0 0.2 0.4 0.6 0.8 Threshold 16 18 20 22 24 26 28 EF@1 % THRB 0.0 0.2 0.4 0.6 0.8 Threshold 5 6 7 8 9 10 EF@1 % UROK EF@1% vs Threshold Figure 7: Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E targets. Enrichment factor at the top 1% of the ranking (EF1%) as a function of the threshold. Line plot for LFDR (blue) and probability (orange). 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision ACES LFDR τ=0.01 (AP=0.74) Prob τ=0.6 (AP=0.67) 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision ADA 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision ANDR 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision EGFR 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision FA10 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision KIT 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision PLK1 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision SRC 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision THRB 0.0 0.2 0.4 0.6 0.8 1.0 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision UROK PR Curve (Best AP per Method) Figure 8: Effect of seed-selection rule on FDR control and early recognition for 10 DUD-E targets. Precision-recall curves at the best threshold for each rule. Legends indicate the chosen τ and the corresponding PR-AUC. 25 A B C D E F Figure 9: Performance comparison for varying numbers of (d) in the subgraph pattern removal study. (A-E) Average performance metrics (AUROC, BEDROC, EF1%, EF5%, EF10%) for 10 DUD-E targets as a function of d. Error bars represent the standard deviation of 100 bootstrap resamples. (F) AUROC performance per target at d = 2000, ranked by performance per target. 26 F.2 PU-Style Screening Results on PU dataset To supplement the results in Section 4.1.2, we report additional baseline results for the PU-style virtual screening experiment on the PU dataset. Table 14 extends Table 2 by including all evaluated general-purpose molecular fingerprints. These are combined with the same network propagation pipeline as in SubDyve, allowing a controlled comparison of representation effectiveness. Descriptions of the 12 fingerprints used are provided in Table 15. As shown in Table 14, SubDyve achieves the best performance across all BEDROC and EF metrics, outperforming deep learning models and general fingerprint-based baselines. While some fingerprints (e.g., rdkit, Graph) perform competitively under certain thresholds, they fall short in consistency across metrics. These results support the advantage of task-specific subgraph representations combined with uncertainty-aware refinement for robust screening under sparse supervision. Table 13: Ablation study results for the effect of subgraph fingerprint network and LFDRguided seed refinement on the 10 DUD-E dataset. The top results are shown in bold, and the second-best are underlined, respectively. Target Subgraph LFDR BEDROC EF1% ACES 64 ± 2 38.7 ± 2.4 ✓ 62 ± 2 38.5 ± 2.0 ✓ 76 ± 1 35.7 ± 1.7 ✓ ✓ 86 ± 2 57.0 ± 2.4 ADA 87 ± 2 41.1 ± 4.1 ✓ 87 ± 2 45.2 ± 4.1 ✓ 76 ± 3 36.3 ± 4.9 ✓ ✓ 83 ± 4 50.6 ± 5.3 ANDR 27 ± 3 18.9 ± 2.4 ✓ 24 ± 2 18.4 ± 2.1 ✓ 45 ± 3 23.1 ± 2.7 ✓ ✓ 72 ± 2 37.1 ± 2.1 EGFR 40 ± 2 30.9 ± 1.7 ✓ 33 ± 2 18.2 ± 1.6 ✓ 79 ± 2 53.2 ± 2.0 ✓ ✓ 86 ± 2 60.0 ± 2.3 FA10 17 ± 2 11.8 ± 1.3 ✓ 6 ± 1 1.0 ± 0.4 ✓ 58 ± 2 46.8 ± 2.0 ✓ ✓ 58 ± 2 47.0 ± 1.7 KIT 11 ± 2 3.7 ± 1.5 ✓ 11 ± 2 2.9 ± 1.3 ✓ 37 ± 3 5.8 ± 1.9 ✓ ✓ 44 ± 3 13.8 ± 2.6 PLK1 61 ± 4 43.2 ± 4.4 ✓ 57 ± 5 32.2 ± 3.6 ✓ 78 ± 3 49.5 ± 4.7 ✓ ✓ 85 ± 3 51.7 ± 4.0 SRC 56 ± 2 28.5 ± 1.7 ✓ 39 ± 2 12.6 ± 1.4 ✓ 25 ± 2 9.4 ± 1.3 ✓ ✓ 61 ± 2 35.0 ± 1.8 THRB 28 ± 2 20.3 ± 1.9 ✓ 21 ± 2 10.7 ± 1.4 ✓ 32 ± 2 21.2 ± 1.7 ✓ ✓ 61 ± 2 36.6 ± 2.0 UROK 35 ± 3 22.2 ± 2.9 ✓ 30 ± 3 13.0 ± 2.6 ✓ 30 ± 3 11.1 ± 2.6 ✓ ✓ 37 ± 3 25.6 ± 2.4 27 Table 14: Complete performance comparison of SubDyve and baselines on the PU dataset. The top results are shown in bold, and the second-best are underlined, respectively. Method BEDROC (%) EF 0.5% 1% 3% 5% 10% Deep learning-based BIND (BIB, 2024) (Lam et al., 2024) - - - - - 0.04 ± 0.08 AutoDock Vina (J. Chem. Inf. Model.) (Eberhardt et al., 2021) 1.0 ± 1.3 - 0.2 ± 0.3 0.6 ± 0.7 1.1 ± 0.6 1.2 ± 0.5 DrugCLIP (NeurIPS) (Gao et al., 2023) 2.7 ± 1.26 1.63 ± 1.99 1.63 ± 0.81 2.45 ± 1.02 2.53 ± 1.35 2.69 ± 0.62 PSICHIC (Nat MI) (Koh et al., 2024) 9.37 ± 3.08 4.07 ± 2.58 6.92 ± 3.30 7.48 ± 2.47 7.02 ± 1.80 5.35 ± 0.94 GRAB (ICDM) (Yoo et al., 2021) 40.68 ± 10.60 44.22 ± 8.35 45.21 ± 5.63 29.78 ± 1.38 18.69 ± 0.47 10.00 ± 0.00 Data mining-based avalon + NP (Yi et al., 2023) 77.59 ± 1.72 135.76 ± 6.44 87.58 ± 2.9 31.55 ± 0.54 19.67 ± 0.4 9.88 ± 0.16 cdk-substructure + NP (Yi et al., 2023) 66.56 ± 2.89 125.4 ± 11.28 69.67 ± 2.98 28.15 ± 0.92 17.22 ± 0.79 9.22 ± 0.42 estate + NP (Yi et al., 2023) 52.44 ± 6.19 94.4 ± 13.68 57.87 ± 7.15 22.71 ± 2.7 15.92 ± 0.85 8.24 ± 0.38 extended + NP (Yi et al., 2023) 73.7 ± 3.3 136.73 ± 6.83 83.54 ± 5.21 31.28 ± 0.97 18.85 ± 0.55 9.63 ± 0.2 fp2 + NP (Yi et al., 2023) 72.68 ± 3.77 129.06 ± 11.89 85.49 ± 3.89 30.86 ± 0.69 18.69 ± 0.6 9.51 ± 0.36 fp4 + NP (Yi et al., 2023) 69.62 ± 3.69 122.76 ± 13.02 75.01 ± 4.21 28.96 ± 1.34 18.36 ± 1.0 9.59 ± 0.29 graph + NP (Yi et al., 2023) 75.86 ± 3.99 126.72 ± 10.05 84.73 ± 3.74 31.68 ± 0.92 19.1 ± 0.47 9.75 ± 0.24 hybridization + NP (Yi et al., 2023) 75.4 ± 5.18 135.15 ± 17.78 80.25 ± 5.88 31.14 ± 1.0 18.69 ± 0.6 9.63 ± 0.15 maccs + NP (Yi et al., 2023) 75.44 ± 4.85 135.72 ± 12.7 79.82 ± 4.76 31.0 ± 1.41 18.93 ± 0.66 9.67 ± 0.21 pubchem + NP (Yi et al., 2023) 63.48 ± 5.16 99.17 ± 10.17 69.3 ± 7.08 30.87 ± 1.27 18.77 ± 0.9 9.84 ± 0.15 rdkit + NP (Yi et al., 2023) 79.04 ± 1.96 148.69 ± 4.25 89.24 ± 2.08 31.68 ± 0.92 19.02 ± 0.55 9.55 ± 0.3 standard + NP (Yi et al., 2023) 72.42 ± 3.51 121.97 ± 15.51 84.34 ± 5.56 31.27 ± 0.96 19.01 ± 0.33 9.71 ± 0.24 SubDyve 83.44 ± 1.44 155.31 ± 6.38 97.59 ± 1.44 33.01 ± 0.60 19.90 ± 0.18 10.00 ± 0.00 Statistical Significance (p-value) ** - ** * - - Table 15: Description of various molecular fingerprints used in virtual screening. Fingerprint Description Standard Based on the presence or absence of specific functional groups or atoms in a molecule. Simple and efficient but may lack specificity. Extended Similar to standard fingerprints but include additional features such as bond counts and stereochemistry. Graph Derived from the topological structure of a molecule; includes atom/bond counts, ring sizes, and branching patterns. MACCS A set of 166 predefined molecular keys from the MACCS project indicating presence/absence of specific substructures. PubChem Developed by NIH; based on predefined substructure paths in a molecule. Estate Encodes topological and electrostatic properties of a molecule. Hybridization Encodes the hybridization states of atoms in a molecule. CDK-substructure Captures the presence or absence of specific chemical substructures. RDKit Fingerprints generated using the RDKit toolkit; used for similarity searches and cheminformatics applications. Avalon Path-based fingerprints representing features derived from atomic paths within molecules. FP2 Developed by OpenEye; uses topological and pharmacophoric information for similarity search and screening. FP4 Also from OpenEye; incorporates topological, pharmacophoric, and electrostatic features for molecular comparison. F.3 Ablation Study: Varying Seed Set Sizes To provide a more comprehensive evaluation of PU-style virtual screening on the PU dataset, we present additional baseline results in Table 16, which expands upon the findings reported in Section 4.2.2 and Table 2. This extended table includes a wider range of general-purpose molecular fingerprints, each integrated into the same network propagation framework used by SubDyve, ensuring a fair and controlled comparison of representational capabilities. Additionally, we introduce Subgraph + NP as a control variant, applying standard propagation over subgraph-derived networks without LFDR-based refinement. Across all seed sizes, SubDyve consistently achieves superior performance, particularly in BEDROC, EF3%, and EF5%. Subgraph + NP advantage also extends to EF1%, highlighting the strength of subgraph-based representations in capturing bioactive chemical features beyond those accessible to general fingerprints. Although certain baselines-such as MACCS and Avalon-exhibit strong results at specific enrichment thresholds, their performance lacks consistency across evaluation metrics, underscoring the robustness of SubDyve's approach. These results suggest that subgraph patterns and LFDR-based refinement have screening power over other pre-defined fingerprints, even in harsh environments with much sparser seed. F.4 Ablation Study: Varying d-dimensional subgraph pattern fingerprint To evaluate the impact of subgraph pattern size on virtual screening performance, we present the results of SubDyve under varying fingerprint sizes in Figures 9. The number of subgraph patterns mined using the SSM algorithm and selected according to their entropy importance was varied as d ∈{100, 200, 300, 500, 1000, 2000}. The figure shows the average performance for 10 DUD-E targets evaluated using the AUROC, BEDROC, EF1%, EF5%, and EF10% metrics. As the number of patterns increases, SubDyve shows consistently improved performance across all metrics, indicating that incorporating a broader range 28 Table 16: Ablation study on the number of seed compounds on the PU dataset. For each seed size (50, 150, 250), the baseline of all generic fingerprint performance is shown. For each number, the best value is highlighted in bold, and the second-best is underlined. No. of Seeds Method BEDROC (%) EF 0.30% 0.50% 1% 3% 5% 50 avalon + NP (Yi et al., 2023) 46.18 ± 3.95 54.02 ± 15.47 52.83 ± 12.07 48.9 ± 7.93 28.96 ± 0.68 18.28 ± 0.87 cdk-substructure + NP (Yi et al., 2023) 40.61 ± 3.4 59.58 ± 8.86 53.72 ± 7.0 42.36 ± 5.22 22.85 ± 1.4 14.85 ± 0.95 estate + NP (Yi et al., 2023) 34.87 ± 3.38 37.8 ± 15.17 39.92 ± 11.07 37.51 ± 4.77 20.53 ± 2.29 13.87 ± 0.63 extended + NP (Yi et al., 2023) 44.74 ± 4.41 36.61 ± 10.1 47.19 ± 10.52 49.29 ± 11.34 27.73 ± 0.79 17.55 ± 0.77 fp2 + NP (Yi et al., 2023) 43.51 ± 5.4 39.32 ± 13.17 43.07 ± 11.06 47.64 ± 10.63 27.2 ± 0.61 17.06 ± 0.6 fp4 + NP (Yi et al., 2023) 40.46 ± 3.45 54.11 ± 12.07 52.82 ± 7.24 42.38 ± 4.73 22.98 ± 1.98 15.59 ± 1.11 graph + NP (Yi et al., 2023) 45.08 ± 4.62 51.23 ± 18.33 55.28 ± 9.08 49.34 ± 9.06 27.06 ± 1.84 16.81 ± 0.87 hybridization + NP (Yi et al., 2023) 43.76 ± 4.14 41.99 ± 22.79 51.19 ± 14.22 48.49 ± 8.79 26.11 ± 1.03 16.89 ± 0.76 maccs + NP (Yi et al., 2023) 47.02 ± 3.83 56.77 ± 15.24 52.81 ± 9.24 50.92 ± 3.15 27.74 ± 2.04 17.05 ± 1.2 pubchem + NP (Yi et al., 2023) 41.13 ± 4.46 44.69 ± 14.09 45.51 ± 7.91 41.97 ± 6.91 25.7 ± 1.99 17.14 ± 1.0 rdkit + NP (Yi et al., 2023) 43.85 ± 3.37 39.21 ± 19.76 47.92 ± 11.32 50.55 ± 3.96 25.7 ± 1.89 15.59 ± 1.08 standard + NP (Yi et al., 2023) 44.64 ± 6.02 46.13 ± 13.78 47.94 ± 13.87 48.47 ± 9.85 27.46 ± 1.1 17.55 ± 0.73 Subgraph + NP 46.33 ± 1.26 37.79 ± 21.22 31.81 ± 12.68 53.93 ± 4.97 27.61 ± 1.47 17.27 ± 0.51 SubDyve 51.78 ± 3.38 69.5 ± 11.81 62.53 ± 14.84 52.66 ± 5.91 29.48 ± 2.37 18.15 ± 0.90 150 avalon + NP (Yi et al., 2023) 54.73 ± 2.42 65.0 ± 15.73 70.85 ± 9.83 60.72 ± 4.7 31.0 ± 0.55 19.59 ± 0.37 cdk-substructure + NP (Yi et al., 2023) 48.25 ± 3.74 75.75 ± 9.08 66.61 ± 10.79 53.76 ± 7.26 25.7 ± 1.09 16.48 ± 0.91 estate + NP (Yi et al., 2023) 40.42 ± 5.07 51.37 ± 17.43 48.78 ± 2.63 47.69 ± 10.35 21.48 ± 3.35 14.28 ± 2.02 extended + NP (Yi et al., 2023) 51.87 ± 3.8 55.4 ± 6.54 56.87 ± 12.75 60.28 ± 5.39 30.18 ± 1.52 18.53 ± 0.76 fp2 + NP (Yi et al., 2023) 50.99 ± 5.85 47.39 ± 15.42 56.12 ± 15.32 59.05 ± 7.79 29.24 ± 1.14 18.12 ± 0.71 fp4 + NP (Yi et al., 2023) 48.8 ± 3.46 74.15 ± 6.03 62.71 ± 8.39 53.38 ± 7.34 26.78 ± 1.63 17.38 ± 0.55 graph + NP (Yi et al., 2023) 52.85 ± 5.55 76.98 ± 15.73 70.09 ± 16.19 54.23 ± 8.76 29.78 ± 1.79 18.36 ± 0.77 hybridization + NP (Yi et al., 2023) 52.69 ± 5.27 70.17 ± 24.31 69.22 ± 19.13 57.05 ± 8.56 28.55 ± 0.97 17.79 ± 0.33 maccs + NP (Yi et al., 2023) 55.22 ± 4.39 79.99 ± 15.80 71.65 ± 13.30 60.69 ± 6.59 30.6 ± 1.29 18.85 ± 0.48 pubchem + NP (Yi et al., 2023) 46.74 ± 5.38 62.37 ± 19.0 58.63 ± 11.73 48.9 ± 7.76 28.01 ± 1.63 18.44 ± 0.7 rdkit + NP (Yi et al., 2023) 50.82 ± 3.79 52.69 ± 6.75 54.62 ± 10.48 54.62 ± 7.24 29.5 ± 1.59 17.79 ± 0.95 standard + NP (Yi et al., 2023) 51.59 ± 4.93 60.85 ± 11.29 63.42 ± 13.66 55.39 ± 8.09 29.78 ± 1.85 18.61 ± 0.75 Subgraph + NP 55.08 ± 1.52 44.39 ± 22.83 61.29 ± 10.07 67.17 ± 7.24 30.07 ± 1.38 18.22 ± 0.93 SubDyve 59.07 ± 2.25 74.67 ± 7.46 73.55 ± 10.51 66.72 ± 5.29 32.26 ± 1.04 19.73 ± 0.36 250 avalon + NP (Yi et al., 2023) 61.29 ± 2.44 97.18 ± 13.25 86.96 ± 9.16 68.05 ± 4.42 31.14 ± 0.52 19.51 ± 0.48 cdk-substructure + NP (Yi et al., 2023) 54.07 ± 4.05 95.87 ± 20.51 81.39 ± 13.84 61.09 ± 7.94 26.52 ± 0.43 16.97 ± 0.91 estate + NP (Yi et al., 2023) 44.34 ± 5.83 64.97 ± 13.25 66.81 ± 12.27 50.14 ± 8.31 22.16 ± 2.67 15.18 ± 1.32 extended + NP (Yi et al., 2023) 57.48 ± 3.71 64.79 ± 10.11 75.67 ± 10.89 64.35 ± 5.22 30.99 ± 1.26 18.85 ± 0.54 fp2 + NP (Yi et al., 2023) 56.88 ± 5.26 67.45 ± 16.53 75.57 ± 15.28 65.15 ± 8.3 30.19 ± 1.26 18.52 ± 0.61 fp4 + NP (Yi et al., 2023) 55.04 ± 3.77 91.76 ± 18.18 81.27 ± 12.86 62.75 ± 6.11 27.33 ± 0.66 18.03 ± 0.79 graph + NP (Yi et al., 2023) 58.68 ± 5.4 93.5 ± 19.79 78.84 ± 12.62 62.79 ± 9.89 30.19 ± 1.75 18.44 ± 0.83 hybridization + NP (Yi et al., 2023) 58.94 ± 4.32 99.75 ± 15.48 87.76 ± 17.54 65.2 ± 8.07 30.05 ± 0.8 18.36 ± 0.26 maccs + NP (Yi et al., 2023) 60.94 ± 4.57 102.66 ± 15.71 84.48 ± 12.88 67.21 ± 6.9 30.6 ± 1.67 19.01 ± 0.66 pubchem + NP (Yi et al., 2023) 51.92 ± 5.47 71.7 ± 15.79 73.95 ± 17.13 55.82 ± 8.04 29.78 ± 1.51 18.69 ± 0.87 rdkit + NP (Yi et al., 2023) 58.4 ± 2.09 70.29 ± 7.84 70.65 ± 9.75 68.89 ± 5.38 30.59 ± 1.14 18.52 ± 0.76 standard + NP (Yi et al., 2023) 57.08 ± 4.39 71.4 ± 15.11 76.42 ± 18.16 61.09 ± 8.56 31.0 ± 1.26 19.02 ± 0.42 Subgraph + NP 61.96 ± 3.24 41.01 ± 13.89 86.31 ± 11.97 80.31 ± 4.60 30.20 ± 1.44 18.49 ± 0.85 SubDyve 66.73 ± 2.71 97.69 ± 16.55 85.44 ± 12.82 78.19 ± 3.38 32.85 ± 0.60 19.72 ± 0.36 of chemically informative substructures improves model representation. For the finalized setting of d = 2000, we additionally report target-specific AUROC scores to highlight the consistency of performance across targets. All results are calculated with 100 bootstrap resamples to obtain confidence intervals and report both the mean and standard deviation. These results highlight the benefits of capturing different subgraph-level features, which contribute substantially to improving screening accuracy in low-label environments. F.5 Ablation Study: Impact of subgraph pattern fingerprint network and LFDR-guided seed refinement on Ten DUD-E Targets We conduct an ablation study on 10 DUD-E targets to evaluate the individual and joint contributions of two core components of SubDyve: (1) the subgraph-based similarity network and (2) the LFDR-based seed refinement. In Table 13, combining both components yields the best results, with the highest EF1% on all 10 targets and top BEDROC scores on 9. This highlights SubDyve's robustness beyond the PU dataset and its screening performance on DUD-E targets. We also observed that applying LFDR refinement alone without the subgraph-based similarity network often degrades or remains unchanged, while using both components together consistently improves performance. This finding highlights the complementarity of chemically meaningful network construction and uncertainty-awareness, both essential for robust and generalizable virtual screening under low supervision. 29 PLK1 (174) SRC (624) ACES(1277) Figure 10: Top-10 matched Active/Decoy on the DUD-E Dataset over the number of seeds utilized. G Additional Case Study Results G.1 Top-10 matched Active/Decoy on the DUD-E dataset over the number of seeds utilized To check the distribution of active and decoy sets with subgraphs according to the number of seeds used, we investigated the targets with the lowest, average, and highest number of seeds for the DUD-E dataset in Figure 10. PLK1 (174 seeds utilized), the target with the lowest number of seeds, captured one decoy molecule with a subgraph, while the rest remained active. Even though decoy ranked in the top-10, the distribution of utilized subgraphs varied, with 10 different subgraphs captured. For the average number of SRC (624) and ACES (1277) targets, all molecules were active, and the distribution of subgraphs captured varied. For SRC, 8 subgraph patterns were captured, and for ACES, 9 were captured. Therefore, this result suggests that a higher number of seeds provides more structural diversity to reflect the subgraphs of active and allows for more reliable structure-based analyses. Nevertheless, the low number of seeds also reveals as much structural information 30 about the molecule as the number of subgraph patterns, suggesting that the results are comparable to those with a higher number of seeds. G.2 Active-Decoy Ranking Gap for DUD-E datasets ↑↑ ↑ ↓↓↓ ↓ ↑↑ ↑ ↑ ↓↓ ↓ ↑↑ ↑ ↓ ↓ ↑↑↑ ↑ ↓ ↓ Figure 11: Ranking gap for Active/Decoy on the DUD-E Dataset of 4 targets To demonstrate the effectiveness of SubDyve's ranking capability, we compare its performance to the best-performing RDKit+NP baseline, which is based on general-purpose molecular fingerprints. We focus on structurally similar active-decoy pairs and calculate the ranking gap between them. As shown in Figure 11, SubDyve consistently ranks the active compounds higher and the decoy compounds lower than the baseline, resulting in a much larger ranking gap. This indicates that even under conditions of high structural similarity, SubDyve is able to distinguish the true active compounds. To facilitate visual interpretation, we annotated each active compound with an up arrow (one, two, or three arrows for the top 50, top 250, and top 500, respectively) based on its rank in the output of each model. For decoys, we annotated with one, two, or three down arrows when SubDyve rank improved by < 10%, < 50%, or ≥50%, respectively, compared to the rank difference between SubDyve and RDKit+NP on a percentile basis. The rank difference is calculated as the gap between the active and decoy rankings, and we report the difference in this gap by model. Higher values indicate that SubDyve separates the active from the decoys more than the baseline. Statistical significance is evaluated using the Wilcoxon signed rank test for all matched pairs. G.3 Subgraph-Level Characterization of Augmented Seeds on the PU dataset To reveal the structural properties of compounds added during SubDyve's refinement, we analyze the differences between the initial and augmented seed sets across three perspectives: subgraph motif enrichment, compound-level pattern visualization, and graph-level connectivity. First, we identify subgraph motifs that are significantly enriched in the augmented seed set. As shown in Figure 12A, multiple motifs exhibit increased presence after refinement, with Fisher's exact test ranking the top 40 patterns by significance. Figure 12B further quantifies these enrichments by measuring the difference in average motif counts, with statistical significance determined using unpaired t-tests and Benjamini-Hochberg correction. These results indicate that SubDyve preferentially amplifies discriminative patterns rather than merely expanding chemical diversity. 31 ZINC ID: 145394039 ZINC ID: 72452583 A B C D Figure 12: Subgraph-level analysis of augmented seeds on PU dataset. (A) Top 40 subgraph motifs enriched in the augmented seed set, ranked by p-values from Fisher's exact test. (B) Subgraphs with the largest mean frequency increase relative to the initial seeds, with q-values from unpaired t-tests (Benjamini-Hochberg correction). (C) Examples of ZINC compounds containing enriched subgraphs that pass Lipinski's rule of five; matched motifs are highlighted. (D) Distribution of shortest path lengths from initial seeds to retrieved actives in the SubDyve graph versus a Tanimoto k-NN baseline. Second, we examine how enriched substructures manifest in real molecules. Figure 12C presents ZINC compounds from the augmented set that satisfy Lipinski's rule of five and contain representative enriched subgraphs. Highlighted regions confirm that these patterns correspond to chemically meaningful and interpretable substructures rather than artifacts of global structural similarity. Lastly, we assess whether SubDyve can prioritize structurally distant yet bioactive compounds. We compute the shortest path distances from each retrieved compound to the closest known active in the subgraph-similarity network and compare this distribution to a Tanimoto k-NN baseline (similarity ≥0.9). As shown in Figure 12D, SubDyve retrieves candidates with significantly longer path distances (p = 2.32 × 10-108, Mann-Whitney U-test), supporting its ability to generalize beyond immediate structural neighbors while maintaining high enrichment performance. These results suggest that SubDyve refines the seed set in a substructure-aware and chemically meaningful manner, enabling robust prioritization of active compounds even when they lie outside the reach of traditional fingerprint-based similarity metrics. H Limitations SubDyve requires constructing a target-specific chemical similarity network for each protein target, which introduces preprocessing overhead due to repeated subgraph mining and graph construction. While this design enables tailored modeling of bioactivity-relevant structures, it may limit scalability when screening across a large number of targets. Additionally, although LFDR-based seed calibration consistently outperforms probability-based heuristics in terms of expected calibration error (ECE), performance in the mid-range threshold region remains suboptimal. Despite these limitations, SubDyve offers a promising foundation for scalable virtual screening. Its modular architecture and uncertainty-aware design make it well suited for future extensions 32 to multi-target or multi-omics settings, where integration with transcriptomic profiles or cell line information could further improve prioritization in complex biological contexts. 33
|
2509.16254
|
Imaging Modalities-Based Classification for Lung
Cancer Detection
Sajim Ahmed1, Muhammad Zain Chaudhary1, Muhammad Zohaib Chaudhary1, Mahmoud Abbass1,
Ahmed Sherif1 (IEEE, Senior Member), Mohammad Mahbubur Rahman Khan Mamun2
1 School of Computing Sciences and Computer Engineering, University of Southern Mississippi, MS, USA
2 Electrical and Computer Engineering Department, Tennessee Technological University, TN, USA
Emails: {sajim.ahmed@usm.edu, zain.chaudhary@usm.edu, zohaib.ali@usm.edu, mahmoud.abbass@usm.edu,
ahmed.sherif@usm.edu, mmahmoud@tntech.edu}
Abstract—Lung cancer continues to be the predominant cause
of cancer-related mortality globally. This review analyzes various
approaches, including advanced image processing methods, focus-
ing on their efficacy in interpreting CT scans, chest radiographs,
and biological markers. Notably, we identify critical gaps in the
previous surveys, including the need for robust models that can
generalize across diverse populations and imaging modalities.
This comprehensive synthesis aims to serve as a foundational
resource for researchers and clinicians, guiding future efforts
toward more accurate and efficient lung cancer detection. Key
findings reveal that 3D CNN architectures integrated with CT
scans achieve the most superior performances, yet challenges
such as high false positives, dataset variability, and computational
complexity persist across modalities.
Index Terms—Lung-Cancer detection, Image Modalities, X-
Ray, CT Scans.
I. INTRODUCTION
Lung cancer remains one of the leading causes of cancer-
related deaths worldwide, with nearly 1.8 million deaths an-
nually [?]. Early detection and accurate diagnosis are vital for
improving lung cancer survival, as early-stage cases are more
treatable. While traditional methods like chest X-rays and CT
scans have been key in detection, they face challenges in
sensitivity and accuracy for early-stage malignancies. Recent
advances in medical imaging and deep learning techniques
show great promise in addressing these limitations [8]–[10].
In recent years, computer-aided detection (CAD) systems have
played a crucial role in enhancing the accuracy and speed
of lung cancer diagnosis. When integrated with traditional
imaging modalities such as CT scans and PET/CT, these
systems significantly improve detecting and classifying lung
nodules [21], [22], [36]. Several studies have successfully
applied CNN architectures, such as Inception v3 and U-Net,
to lung cancer detection tasks [1], [34].
Despite the advancements in imaging techniques, several
challenges remain in lung cancer detection. False positives,
computational complexity, and dataset limitations are some
of the significant obstacles faced by existing models [1],
[2]. Solutions such as data augmentation, semi-supervised
learning, and transfer learning have been proposed to mitigate
these challenges. However, further research is necessary to
refine these methods and improve their practical applications.
Fig. 1. Taxonomy of Lung Cancer Detection Techniques.
This survey aims to comprehensively review the state-of-
the-art imaging modalities used in lung cancer detection. Un-
like prior literature, our work introduces a unified By classify-
ing existing research of imaging modalities architectures-based
lung cancer detection into three main categories. This paper
presents an in-depth analysis of the strengths and limitations
of each approach. Furthermore, we identify open research
problems and propose solutions to address these challenges,
offering a roadmap for future work in lung cancer detection.
This paper is organized as follows. Section II discusses
related work in the field. Section III presents the classification
method of the reviewed papers, while Section IV offers a
detailed discussion of the findings. Section concludes with
findings and future research directions.
II. RELATED WORK
Several schemes have been proposed for using the emerging
technology in E-health [3]–[7]. The identification of lung
cancer has been transformed in recent years by using artificial
intelligence (AI) in medical imaging. The paper [8] provides
a comprehensive review of AI’s role in lung cancer detection,
segmentation, and characterization. The classification tech-
niques used in this paper center on CNNs and 3D CNNs.
However, the paper highlights challenges, such as the need
for large datasets and persistent false-positive results, even
in advanced deep learning (DL) models. Furthermore, the
paper [9] reviews the current applications of AI in lung can-
cer screening. Regarding classification techniques, the paper
highlights the use of CNNs and U-Net architectures for tasks
arXiv:2509.16254v1 [q-bio.TO] 17 Sep 2025
like nodule detection and segmentation. Also, The paper [10]
provides a comprehensive review of the significance of using
DL models like CNNs, 3D CNNs, and other hybrid models
in analyzing medical data from CT scans, chest X-rays, and
other modalities for identifying lung cancer at an early stage.
Our proposed classification scheme offers a more compre-
hensive and structured approach than other papers by integrat-
ing many techniques under the unified framework of imaging
modalities. Unlike prior literature, our work introduces a
unified classification framework that systematically evaluates
and contrasts the technical efficacy, clinical applicability, and
limitations of different imaging modalities.
III. PROPOSED CLASSIFICATION METHOD
As shown in Fig. 1, the proposed classification scheme in
this paper is demonstrated based on imaging modalities as
follows.
A. X-ray Scan
X-ray-based lung cancer detection methods vary signifi-
cantly in preprocessing strategies, model architectures, and
clinical applicability. While [11] achieves 92% validation
accuracy through bone shadow exclusion (BSE) and lung
segmentation, [12] highlights absorption-contrast imaging as
superior for early detection, bypassing algorithmic prepro-
cessing. Conversely, [13] and [14] rely on PCA and basic
segmentation, achieving high training accuracy but lower
generalization (96% and 93.33%, respectively), underscoring
the critical role of advanced preprocessing in reducing false
negatives.
Simpler models like the PNN in [14] (93.33% accuracy) and
MANN in [15] (72.96% accuracy) prioritize computational
efficiency but suffer from small datasets and sensitivity to
noise. In contrast, complex architectures like CDC-Net [16]
(99.39% accuracy) and VGG19-CNN [17] (98.05% accuracy)
leverage large datasets (30,387+ images) and residual networks
to enhance robustness, albeit at higher computational costs.
Hybrid models strike a balance. OCNN-SVM [18] combines
CNNs with SVMs for 98.7% accuracy, while VDSNet [19] in-
tegrates spatial transformers to handle rotated images, achiev-
ing 73% accuracy with reduced training time. Studies like [17]
and [16] address multi-disease detection (COVID-19, pneu-
monia, etc.), achieving near-perfect AUC (99.66%–99.93%)
but risking misclassification from overlapping features. In
contrast, [11], [13], and [18] focus solely on lung cancer,
optimizing precision (e.g., 98.76% F1-score in [18]) at the cost
of diagnostic scope. Hierarchical approaches like [20] show
high initial sensitivity (99%) but degrade in later stages (78%
accuracy), reflecting a precision-recall trade-off. Meanwhile,
[15]’s MANN reduces false positives but suffers from low
sensitivity (72.85%), while [19]’s VDSNet prioritizes rotation
invariance over peak accuracy.
Preprocessing and hybrid models (e.g., [11], [18]) enhance
specificity, while large-scale CNNs (e.g., [16]) improve gen-
eralizability at higher computational costs. Table I synthe-
sizes these limitations, emphasizing the need for standardized
TABLE I
X-RAY
Ref.
Methodology
Disadvantages
[11]
Lung segmentation and
bone shadow exclusion
(BSE).
Sensitivity Issues: Poor detection of
small or early-stage cancers. False
Negatives: Common due to overlapping
structures.
[17]
Multi-classification
model for COVID-19,
pneumonia, and lung
cancer.
Sensitivity Issues: Overlapping between
disease symptoms can cause
misclassification. False Positives:
Possible due to overlapping features in
X-rays and CT images.
[12]
Adsoption-contrast and
phase-contrast imaging
Adsorption contrast imaging affect
patients with higher radiation doses.
Phase-contrast imaging is complex and
expensive to be implemented.
[20]
A CAD algorithm to
identify pulmonary
nodules.
Limited to retrospective data, which may
not fully represent current clinical
scenarios. The algorithm’s performance
heavily depends on the quality of the
input X-rays.
[16]
CDC_Net model that
incorporates residual
networks and dilated
convolution.
High computational cost due to the
complexity of the model. Potential
overfitting due to the use of multiple
pre-trained models.
[14]
A probabilistic neural
network.
Sensitive to noise in the data. The model
may not generalize well to different
datasets.
[15]
CAD system using a
massive artificial neural
network (MANN) with
a soft tissue technique.
The model’s sensitivity to subtle nodules
is still relatively low. High false positive
rate, which can lead to unnecessary
follow-up tests.
[13]
Combined
backpropagation neural
networks with PCA.
PCA may lead to loss of important
information during dimensionality
reduction.
[19]
A hybrid deep learning
model combining
VGG, data
augmentation, and
spatial transformer
networks with CNN.
The model’s performance can degrade
with rotated or tilted images. Requires
significant computational resources for
training.
[18]
A hybrid deep learning
technique combining
CNN and SVM.
Complex to implement and optimize.
Large amounts of labeled data are
required for effective training.
benchmarks and explainable AI to bridge clinical adoption
gaps.
B. CT Scan
CT-based lung cancer detection methods exhibit diverse
architectural strategies, preprocessing pipelines, and clini-
cal trade-offs. Pure 3D CNNs, such as those in [21] (U-
Net segmentation) and [22] (dual 3D CNNs for detec-
tion/classification), achieve high sensitivity (94%) and speci-
ficity (91%) by leveraging spatial context, but face computa-
tional bottlenecks. Hybrid models address this. In [23], the pa-
per combines improved deep neural networks (IDNN) with an
ensemble classifier, achieving robust accuracy through feature
optimization, while [24] integrates CNNs with mRMR feature
selection to balance efficiency and performance. In contrast,
[25] reduces computational load via template matching and
simplified preprocessing without sacrificing accuracy, illus-
trating the trade-off between model complexity and resource
demands.
Advanced preprocessing methods significantly impact de-
tection accuracy. For example, [26] uses the Frangi filter to
suppress vessel-like structures, achieving 94% sensitivity at the
cost of high false positives (15.1/scan), whereas [27] employs
Adaptive Bilateral Filter (ABF) and ABC segmentation for
precise lung region extraction, attaining 98.42% accuracy.
Conversely, [28] relies on K-means clustering and geometric
mean filters, demonstrating that simpler techniques suffice
for segmentation but limit nodule characterization precision.
Large-scale datasets like NLST in [29] (AUC: 94.4%–95.5%)
and multi-institutional trials in [30] (96,559 participants) en-
hance generalizability, whereas smaller datasets in [26] and
[31] risk overfitting despite high sensitivity (80.06%–94%).
Notably, [31]’s two-step approach (geometric candidate gener-
ation + 3D CNN) mitigates this by combining prior knowledge
with learned features, outperforming DBNs and SDAEs.
Methods prioritizing sensitivity, such as [26] (94% sensi-
tivity), incur higher false positives, while those emphasizing
specificity, like [22] (91% specificity), risk missing subtle
nodules. [32]’s DITNN classifier achieves 98.42% accuracy
with minimal error (0.038), illustrating how hybrid segmenta-
tion (IPCT) and noise reduction can harmonize both metrics.
LDCT-focused studies like [30] demonstrate reduced lung
cancer mortality but highlight unresolved challenges in overall
mortality reduction. Meanwhile, [29]’s risk-prediction model
leverages longitudinal LDCT data, underscoring the value of
temporal analysis in early detection. CT-based approaches
excel in spatial accuracy but grapple with computational costs,
false positives, and dataset biases. While 3D CNNs and hybrid
models (e.g., [21], [23]) dominate in performance, simpler
pipelines (e.g., [25]) offer pragmatic alternatives for resource-
constrained settings. Table II synthesizes these limitations,
advocating for optimized preprocessing, federated learning,
and explainability to bridge clinical adoption gaps.
C. Others
1) Whole Slide Images (WSI): WSI-based approaches lever-
age histopathology images to classify lung cancer subtypes
and predict molecular markers, though they vary in supervi-
sion levels and clinical integration. The Inception v3 model
in [34] achieves high diagnostic precision (AUC: 0.97) for
NSCLC subtyping and gene mutation prediction using fully
annotated TCGA datasets, but its dependency on exhaustive
labeling limits scalability. In contrast, [1] adopts weakly
supervised learning with patch-based FCNs and RF classi-
fiers, attaining 97.3% accuracy on 939 WSIs with minimal
annotations. This method underperforms on public datasets
(85.6% AUC), reflecting a trade-off between annotation effort
and generalizability. Meanwhile, [35] reviews broader AI
applications in pathology, emphasizing CNNs for automated
feature extraction but highlighting unresolved challenges, such
as pathologist skepticism and workflow incompatibility. WSI
methods excel in molecular and histological granularity but
face barriers in annotation standardization and clinical trust
(Table III).
TABLE II
CT SCANS
Ref.
Methodology
Disadvantages
[21]
3D CNN and U-Net.
High false positive rates. Relies on large
labeled datasets for generalizability.
[22]
3D CNNs for Nodule
Detection.
Nodule Type Exclusion: Excludes certain
types (e.g., GGO) that limit clinical
applicability. Small Datasets: Limited
representation of nodule variability.
[2]
Hybrid models for
various DL models
Limited Depth: This may restrict feature
learning capabilities, affecting diagnosis
accuracy. Training Efficiency: Longer
training times due to model complexity.
[33]
Combining CAD with
deep learning model
and SVM.
False Positives: High rates leading to
unnecessary invasive procedures.
Radiation Exposure: Risk associated
with multiple scans.
[29]
3D CNN architecture
False Positives: High rates leading to
unnecessary invasive procedures.
Radiation Exposure: Risk associated
with multiple scans.
[28]
ML-based approach
like ANN, KNN, and
RF
False Negatives: Challenges in
identifying smaller nodules or
early-stage cancer. Computational
Complexity: Certain models require high
computational resources.
[23]
Improved deep neural
networks (IDNN) along
with an ensemble
classifier
Requires large datasets, risking
overfitting, and high computational
demands.
[30]
Segmentation of the
images was achieved
through the K-means
algorithm, and ML
classifiers like ANN,
KNN, and RF
False Negatives: Challenges in
identifying smaller nodules or
early-stage cancer. Computational
Complexity: Certain models require high
computational resources.
[25]
Deep neural network
approach (MobileNet)
Sensitivity Issues: Risk of misclassifying
nodules, especially in early-stage cases.
[24]
CNNs for feature
selection and
classification
Small dataset of only 100 images. High
computational complexity for real-time
applications.
[26]
Multi-group
patch-based CNN
False Positives: Higher rates of false
positives due to vessel-like structures in
CT images. Patch Selection Challenges:
Selecting meaningful patches can be
computationally expensive.
[31]
3D CAD system using
3D CNNs
Cross-Validation Limitations: Testing
solely on training data can lead to
overfitting. External Validation Needs:
Lack of testing on independent datasets
affects reliability.
[27]
Conventional CT Scans
combined with image
processing techniques.
Limited generalizations due to single
hospital data. Requires significant
processing power.
[32]
3D CNNs
Excludes certain nodule types. Small
datasets limit variability.
2) PET Scan: PET/CT frameworks prioritize multimodal
integration and staging accuracy but grapple with technical
artifacts and validation gaps. The hybrid CNN model in
[36] combines PET and CT data to classify FDG uptake,
achieving near-perfect AUC (0.99) for lung cancer and lym-
phoma, though its narrow focus limits broader clinical utility.
Conversely, [37] proposes a cloud-based framework (Cloud-
LTDSC) for end-to-end tumor detection, segmentation, and 9-
stage classification, achieving 97% accuracy on 94 NSCLC
patients. While scalable, its small cohort risks overfitting.
TABLE III
OTHERS
Ref.
Methodology
Disadvantages
[34]
PET/CT Scans.
Variability: Inconsistencies due to
differing imaging protocols across
institutions. False Positives: Higher rates
in non-malignant cases.
[1]
Weakly Supervised DL
for Whole Slide Lung
Cancer Image Analysis.
FCNs for image
segmentation and
classification
Annotation Scarcity: Limited availability
of labeled data leads to potential
inaccuracies. High Computational Costs:
Processing whole slide images can be
resource-intensive.
[35]
CNN for Pathology
Image Analysis
Large Data Requirement: CNNs require
a large amount of annotated data for
accurateness. Sensitivity to Variations:
The model’s performance may degrade
with variations in image quality.
[36]
Automated Detection
with PET/CT. CNNs
for detecting and
classifying FDG uptake
in lung lesions
Single Institution Data: Limited
generalizations and potential biases.
False Positives: Frequent in lymphoma,
complicating diagnosis.
[38]
A standard combine
PET and CT
technologies
Radiation Exposure: expose patients to
high radiation exposures. False
Positives/Negatives: On PET scans,
inflammatory diseases can occasionally
appear as cancer, while, tiny lesions
could go undetected. Cost and
Availability: PET/CT scans can be
costly, and not available.
[37]
A cloud-based system
and a multilayer
convolutional neural
network (M-CNN).
Concerns about the cloud privacy, the
internet connection, and the
computational cost.
[39]
PET/CT.
False positives: that lead to unnecessary
biopsies. High cost: PET/CT scans are
expensive. Radiation exposure: higher
radiation levels than individual PET or
CT scans.
[40]
The integration of PET
and CT for staging
non-small cell lung
cancer (NSCLC).
Concerns about the system availability,
image interpretation complexity, and
overdiagnosis.
Clinical reviews contextualize PET/CT’s strengths, [38] and
[39] validate its superiority in nodal staging and lesion dif-
ferentiation (e.g., T3/T4 staging) but note persistent false
positives, necessitating invasive confirmations. [40] further
critiques technical limitations, such as respiration artifacts and
contrast variability, advocating for protocol standardization.
Though PET/CT excels in metabolic and anatomical corre-
lation, its clinical adoption hinges on artifact mitigation and
larger validation cohorts (Table III).
IV. DISCUSSION AND OPEN PROBLEMS
High-resolution CT scans have shown strong potential for
early detection of lung nodules, with studies reporting classifi-
cation accuracies of up to 91.13% [21], [22] and even reaching
96% in certain cases [18]. PET/CT imaging provides excellent
specificity in identifying malignancies [36], although its high
resource demands and limited accessibility remain concerns.
In contrast, X-ray imaging is widely available but suffers
from sensitivity issues and false negatives caused by the poor
detection of small nodules and interference from overlapping
anatomical structures [11], [17]. WSI is also challenged by
variability in imaging protocols and the occurrence of false
positives in non-malignant cases [1], [34].
Despite these promising findings, several limitations and
open research problems persist. A primary challenge is the
integration of imaging-based diagnostic systems into clinical
workflows. Furthermore, the effective integration of multi-
modal data—combining imaging with genomic or clinical
information—remains in its infancy. CT imaging face issues
such as false positives, radiation exposure, and difficulties in
detecting smaller nodules, as noted in multiple studies [23],
[25], [28]–[30], [33]. Similarly, PET scans are limited by
generalizability concerns due to reliance on single-institution
data, and occasional false positives, especially in lymphoma
detection [36].
Enhancing image preprocessing and standardizing imaging
protocols may improve diagnostic precision across modalities.
For X-rays, the application of advanced image enhancement
techniques could help reduce false negatives and misclassifi-
cations. In the case of CT scans, the adoption of low-dose
imaging protocols and optimized preprocessing techniques
may lower radiation exposure and decrease false-positive
rates, while diversifying datasets and conducting extensive
cross-validation could enhance the generalizability of PET
imaging. For WSI, standardization of imaging protocols and
the development of more efficient processing methods are
recommended to overcome issues related to annotation scarcity
and protocol variability. Overall, continued research focused
on refining these imaging modalities and their integration
into clinical practice is essential for advancing lung cancer
detection.
This review of imaging modalities for lung cancer detection
underscores the importance of early and accurate diagnosis.
While 3D CNN architectures with CT scans show superior per-
formance, critical challenges remain. Future research should
include Creating standardized, diverse datasets to improve
model generalizability across populations, and developing
hybrid models to reduce false positives while maintaining
sensitivity, particularly for CT-based approaches, and Opti-
mizing computational efficiency for clinical implementation.
Integrating multimodal imaging (CT with PET or WSI) and
incorporating genetic data alongside imaging features presents
promising directions. Additionally, explainable AI will be
essential for clinical adoption.
V. ACKNOWLEDGMENT
The Security and Privacy of Emerging Network (SPEN)
research lab at the University of Southern Mississippi (USM),
USA, made this work possible. The statements made herein
are solely the responsibility of the authors.
REFERENCES
[1] X. Wang, H. Chen, C. Gan, H. Lin, Q. Dou, E. Tsougenis, Q. Huang,
M. Cai, and P.-A. Heng, “Weakly supervised deep learning for whole
slide lung cancer image analysis,” IEEE Trans. Cybern., vol. 50,
pp. 3950–3962, Sept. 2020.
[2] W. Sun, B. Zheng, and W. Qian, “Automatic feature learning using
multichannel roi based on deep structured algorithms for computerized
lung cancer diagnosis,” Computers in Biology and Medicine, vol. 89,
pp. 530–539, 2017.
[3] B. Hamoui, A. Alashaikh, A. Sherif, E. Alanazi, M. Nabil, and
W. Alsmary, “Google searches and covid-19 cases in saudi arabia: A
correlation study,” in 2021 3rd IEEE Middle East and North Africa
COMMunications Conference (MENACOMM), pp. 104–108, 2021.
[4] C. Bourn, J. Heirendt, K. Ben-Chiobi, A. Haastrup, A. Sherif, and
M. Elsersy, “Privacy-preserving data sharing scheme for e-health sys-
tems,” in 2023 9th International Conference on Information Technology
Trends (ITT), pp. 20–25, 2023.
[5] J. Romeo, M. Abbass, A. Sherif, M. M. R. Khan Mamun, M. Elsersy,
and K. Khalil, “Privacy-preserving machine learning for e-health ap-
plications: A survey,” in 2024 IEEE 3rd International Conference on
Computing and Machine Intelligence (ICMI), pp. 1–6, 2024.
[6] M. Watkins, C. Dorsey, D. Rennier, T. Polley, A. Sherif, and M. Elsersy,
“Privacy-preserving data aggregation scheme for e-health,” in Proceed-
ings of the 2nd International Conference on Emerging Technologies
and Intelligent Systems (M. A. Al-Sharafi, M. Al-Emran, M. N. Al-
Kabi, and K. Shaalan, eds.), (Cham), pp. 638–646, Springer International
Publishing, 2023.
[7] M. Elsersy, A. Sherif, A. A.-A. Imam, M. M. R. Khan Mamun, K. Khalil,
and M. Haitham, “Federated learning model for early detection of
dementia using blood biosamples,” in 2023 IEEE International Con-
ference on Artificial Intelligence, Blockchain, and Internet of Things
(AIBThings), pp. 1–5, 2023.
[8] G. Chassagnon, C. De Margerie-Mellon, M. Vakalopoulou, R. Marini,
T.-N. Hoang-Thi, M.-P. Revel, and P. Soyer, “Artificial intelligence in
lung cancer: current applications and perspectives,” Jpn. J. Radiol.,
vol. 41, pp. 235–244, Mar. 2023.
[9] M. Cellina, L. M. Cacioppa, M. Cè, V. Chiarpenello, M. Costa, Z. Vin-
cenzo, D. Pais, M. V. Bausano, N. Rossini, A. Bruno, and C. Floridi,
“Artificial intelligence in lung cancer screening: The future is now,”
Cancers (Basel), vol. 15, Aug. 2023.
[10] H. T. Gayap and M. A. Akhloufi, “Deep machine learning for medical
diagnosis, application to lung cancer detection: A review,” BioMedIn-
formatics, vol. 4, no. 1, pp. 236–284, 2024.
[11] Z. Peng, X. Xinnan, W. Hongwei, F. Yuanli, F. Haozhe, Z. Jian-
wei, Y. Shoukun, H. Yuxuan, S. Yiwen, L. Jiaxiang, and L. Xinguo,
“Computer-aided lung cancer diagnosis approaches based on deep
learning,” Journal of Computer-Aided Design & Computer Graphics,
vol. 30, no. 1, pp. 90–99, 2018.
[12] K. Li, Y. Chen, R. Sun, B. Yu, G. Li, and X. Jiang, “Exploring potential
of different x-ray imaging methods for early-stage lung cancer detec-
tion,” Radiation Detection Technology and Methods, vol. 4, pp. 213–221,
2020.
[13] I. S. Abed, “Lung cancer detection from x-ray images by combined
backpropagation neural network and pca,” Engineering and Technology
Journal, vol. 37, no. 5A, pp. 166–171, 2019.
[14] M. F. Syahputra, R. F. Rahmat, and R. Rambe, “Identification of lung
cancer on chest x-ray (cxr) medical images using the probabilistic neural
network method,” Journal of Physics: Conference Series, vol. 1898,
p. 012023, jun 2021.
[15] K. Rajagopalan and S. Babu, “The detection of lung cancer using
massive artificial neural network based on soft tissue technique,” BMC
Medical Informatics and Decision Making, vol. 20, p. 282, Oct 2020.
[16] H. Malik, T. Anees, M. Din, and A. Naeem, “Cdc_net: multi-
classification convolutional neural network model for detection of covid-
19, pneumothorax, pneumonia, lung cancer, and tuberculosis using chest
x-rays,” Multimedia Tools and Applications, vol. 82, pp. 13855–13880,
Apr 2023.
[17] D. M. Ibrahim, N. M. Elshennawy, and A. M. Sarhan, “Deep-chest:
Multi-classification deep learning model for diagnosing covid-19, pneu-
monia, and lung cancer chest diseases,” Computers in Biology and
Medicine, vol. 132, p. 104348, 2021.
[18] V. Sreeprada and K. Vedavathi, “Lung cancer detection from x-ray
images using hybrid deep learning technique,” Procedia Computer
Science, vol. 230, pp. 467–474, 2023.
[19] S. Bharati, P. Podder, and M. R. H. Mondal, “Hybrid deep learning
for detecting lung diseases from x-ray images,” Informatics in Medicine
Unlocked, vol. 20, p. 100391, 2020.
[20] J. Juan, E. Monsó, C. Lozano, M. Cufí, P. Subías-Beltrán, L. Ruiz-Dern,
X. Rafael-Palou, M. Andreu, E. Castañer, X. Gallardo, A. Ullastres,
C. Sans, M. Lujàn, C. Rubiés, and V. Ribas-Ripoll, “Computer-assisted
diagnosis for an early identification of lung cancer in chest x rays,”
Scientific Reports, vol. 13, p. 7720, May 2023.
[21] W. Alakwaa, M. Nassef, and A. Badr, “Lung cancer detection and clas-
sification with 3d convolutional neural network (3d-cnn),” International
Journal of Advanced Computer Science and Applications, vol. 8, no. 8,
2017.
[22] N. Nasrullah, J. Sang, M. S. Alam, M. Mateen, B. Cai, and H. Hu,
“Automated lung nodule detection and classification using deep learning
combined with multiple strategies,” Sensors, vol. 19, no. 17, 2019.
[23] P. M. Shakeel, M. A. Burhanuddin, and M. I. Desa, “Automatic lung
cancer detection from ct image using improved deep neural network
and ensemble classifier,” Neural Computing and Applications, vol. 34,
pp. 9579–9592, Jun 2022.
[24] M. Toƒüa√ßar, B. Ergen, and Z. C√∂mert, “Detection of lung cancer on
chest ct images using minimum redundancy maximum relevance feature
selection method with convolutional neural networks,” Biocybernetics
and Biomedical Engineering, vol. 40, no. 1, pp. 23–39, 2020.
[25] P. Shill and Z. Homayra, “A new method for lung nodule detection using
deep neural networks for ct images,” pp. 1–6, 02 2019.
[26] H. Jiang, H. Ma, W. Qian, M. Gao, and Y. Li, “An automatic detection
system of lung nodule based on multigroup patch-based deep learning
network,” IEEE J. Biomed. Health Inform., vol. 22, pp. 1227–1237, July
2018.
[27] A. Asuntha and A. Srinivasan, “Deep learning for lung cancer detec-
tion and classification,” Multimedia Tools and Applications, vol. 79,
pp. 7731–7762, Mar 2020.
[28] S. Nageswaran, G. Arunkumar, A. K. Bisht, S. Mewada, J. N. V. R. S.
Kumar, M. Jawarneh, and E. Asenso, “Lung cancer classification and
prediction using machine learning and image processing,” Biomed Res.
Int., vol. 2022, p. 1755460, Aug. 2022.
[29] D. Ardila, A. P. Kiraly, S. Bharadwaj, B. Choi, J. J. Reicher, L. Peng,
D. Tse, M. Etemadi, W. Ye, G. Corrado, D. P. Naidich, and S. Shetty,
“End-to-end lung cancer screening with three-dimensional deep learning
on low-dose chest computed tomography,” Nature Medicine, vol. 25,
pp. 954–961, Jun 2019.
[30] R. M. Hoffman, R. P. Atallah, R. D. Struble, and R. G. Badgett, “Lung
cancer screening with low-dose CT: A meta-analysis,” J. Gen. Intern.
Med., vol. 35, pp. 3015–3025, Oct. 2020.
[31] X. Huang, J. Shan, and V. Vaidya, “Lung nodule detection in ct using
3d convolutional neural networks,” in 2017 IEEE 14th International
Symposium on Biomedical Imaging (ISBI 2017), pp. 379–383, 2017.
[32] P. M. Shakeel, M. Burhanuddin, and M. I. Desa Measurement, vol. 145,
pp. 702–712, 2019.
[33] S. Makaju, P. Prasad, A. Alsadoon, A. Singh, and A. Elchouemi, “Lung
cancer detection using ct scan images,” Procedia Computer Science,
vol. 125, pp. 107–114, 2018. The 6th International Conference on Smart
Computing and Communications.
[34] N. Coudray, P. S. Ocampo, T. Sakellaropoulos, N. Narula, M. Snuderl,
D. Fenyö, A. L. Moreira, N. Razavian, and A. Tsirigos, “Classification
and mutation prediction from non–small cell lung cancer histopathology
images using deep learning,” Nature Medicine, vol. 24, pp. 1559–1567,
Oct 2018.
[35] S. Wang, D. M. Yang, R. Rong, X. Zhan, J. Fujimoto, H. Liu, J. Minna,
I. I. Wistuba, Y. Xie, and G. Xiao, “Artificial intelligence in lung cancer
pathology image analysis,” Cancers (Basel), vol. 11, p. 1673, Oct. 2019.
[36] L. Sibille, R. Seifert, N. Avramovic, T. Vehren, B. Spottiswoode,
S. Zuehlsdorff, and M. Schäfers, “18f-fdg pet/ct uptake classification
in lymphoma and lung cancer by using deep convolutional neural
networks,” Radiology, vol. 294, no. 2, pp. 445–452, 2020.
PMID:
31821122.
[37] G. Kasinathan and S. Jayakumar, “Cloud-based lung tumor detection and
stage classification using deep learning techniques,” Biomed research
international, vol. 2022, no. 1, p. 4185835, 2022.
[38] B. Hochhegger, G. R. T. Alves, K. L. Irion, C. C. Fritscher, L. G.
Fritscher, N. H. Concatto, and E. Marchiori, “Pet/ct imaging in lung
cancer: indications and findings,” Jornal Brasileiro de Pneumologia,
vol. 41, no. 3, pp. 264–274, 2015.
[39] A. F. E. A. M. D. A. F. C. C. L. Cristina Gámez, Rafael Rosell, “Pet/ct
fusion scan in lung cancer: Current recommendations and innovations,”
Journal of Thoracic Oncology, vol. 11, no. 1, pp. 6–10, 2016.
[40] J. C. J. A. V. W De Wever, S Stroobants, “Integrated pet/ct in the staging
of nonsmall cell lung cancer: technical aspects and clinical integration,”
European Respiratory Journal, vol. 33, no. 1, pp. 201–212, 2009.
|
Imaging Modalities-Based Classification for Lung Cancer Detection Sajim Ahmed1, Muhammad Zain Chaudhary1, Muhammad Zohaib Chaudhary1, Mahmoud Abbass1, Ahmed Sherif1 (IEEE, Senior Member), Mohammad Mahbubur Rahman Khan Mamun2 1 2 Electrical and Computer Engineering Department, Tennessee Technological University, TN, USA Emails: { , , , , , } Abstract-Lung cancer continues to be the predominant cause of cancer-related mortality globally. This review analyzes various approaches, including advanced image processing methods, focusing on their efficacy in interpreting CT scans, chest radiographs, and biological markers. Notably, we identify critical gaps in the previous surveys, including the need for robust models that can generalize across diverse populations and imaging modalities. This comprehensive synthesis aims to serve as a foundational resource for researchers and clinicians, guiding future efforts toward more accurate and efficient lung cancer detection. Key findings reveal that 3D CNN architectures integrated with CT scans achieve the most superior performances, yet challenges such as high false positives, dataset variability, and computational complexity persist across modalities. Index Terms-Lung-Cancer detection, Image Modalities, XRay, CT Scans. I. INTRODUCTION Lung cancer remains one of the leading causes of cancerrelated deaths worldwide, with nearly 1.8 million deaths annually [?]. Early detection and accurate diagnosis are vital for improving lung cancer survival, as early-stage cases are more treatable. While traditional methods like chest X-rays and CT scans have been key in detection, they face challenges in sensitivity and accuracy for early-stage malignancies. Recent advances in medical imaging and deep learning techniques show great promise in addressing these limitations [8]-[10]. In recent years, computer-aided detection (CAD) systems have played a crucial role in enhancing the accuracy and speed of lung cancer diagnosis. When integrated with traditional imaging modalities such as CT scans and PET/CT, these systems significantly improve detecting and classifying lung nodules [21], [22], [36]. Several studies have successfully applied CNN architectures, such as Inception v3 and U-Net, to lung cancer detection tasks [1], [34]. Despite the advancements in imaging techniques, several challenges remain in lung cancer detection. False positives, computational complexity, and dataset limitations are some of the significant obstacles faced by existing models [1], [2]. Solutions such as data augmentation, semi-supervised learning, and transfer learning have been proposed to mitigate these challenges. However, further research is necessary to refine these methods and improve their practical applications. Fig. 1. Taxonomy of Lung Cancer Detection Techniques. This survey aims to comprehensively review the state-ofthe-art imaging modalities used in lung cancer detection. Unlike prior literature, our work introduces a unified By classifying existing research of imaging modalities architectures-based lung cancer detection into three main categories. This paper presents an in-depth analysis of the strengths and limitations of each approach. Furthermore, we identify open research problems and propose solutions to address these challenges, offering a roadmap for future work in lung cancer detection. This paper is organized as follows. Section II discusses related work in the field. Section III presents the classification method of the reviewed papers, while Section IV offers a detailed discussion of the findings. Section concludes with findings and future research directions. II. RELATED WORK Several schemes have been proposed for using the emerging technology in E-health [3]-[7]. The identification of lung cancer has been transformed in recent years by using artificial intelligence (AI) in medical imaging. The paper [8] provides a comprehensive review of AI's role in lung cancer detection, segmentation, and characterization. The classification techniques used in this paper center on CNNs and 3D CNNs. However, the paper highlights challenges, such as the need for large datasets and persistent false-positive results, even in advanced deep learning (DL) models. Furthermore, the paper [9] reviews the current applications of AI in lung cancer screening. Regarding classification techniques, the paper highlights the use of CNNs and U-Net architectures for tasks 17 Sep 2025 like nodule detection and segmentation. Also, The paper [10] provides a comprehensive review of the significance of using DL models like CNNs, 3D CNNs, and other hybrid models in analyzing medical data from CT scans, chest X-rays, and other modalities for identifying lung cancer at an early stage. Our proposed classification scheme offers a more comprehensive and structured approach than other papers by integrating many techniques under the unified framework of imaging modalities. Unlike prior literature, our work introduces a unified classification framework that systematically evaluates and contrasts the technical efficacy, clinical applicability, and limitations of different imaging modalities. III. PROPOSED CLASSIFICATION METHOD As shown in Fig. 1, the proposed classification scheme in this paper is demonstrated based on imaging modalities as follows. A. X-ray Scan X-ray-based lung cancer detection methods vary significantly in preprocessing strategies, model architectures, and clinical applicability. While [11] achieves 92% validation accuracy through bone shadow exclusion (BSE) and lung segmentation, [12] highlights absorption-contrast imaging as superior for early detection, bypassing algorithmic preprocessing. Conversely, [13] and [14] rely on PCA and basic segmentation, achieving high training accuracy but lower generalization (96% and 93.33%, respectively), underscoring the critical role of advanced preprocessing in reducing false negatives. Simpler models like the PNN in [14] (93.33% accuracy) and MANN in [15] (72.96% accuracy) prioritize computational efficiency but suffer from small datasets and sensitivity to noise. In contrast, complex architectures like CDC-Net [16] (99.39% accuracy) and VGG19-CNN [17] (98.05% accuracy) leverage large datasets (30,387+ images) and residual networks to enhance robustness, albeit at higher computational costs. Hybrid models strike a balance. OCNN-SVM [18] combines CNNs with SVMs for 98.7% accuracy, while VDSNet [19] integrates spatial transformers to handle rotated images, achieving 73% accuracy with reduced training time. Studies like [17] and [16] address multi-disease detection (COVID-19, pneumonia, etc.), achieving near-perfect AUC (99.66%-99.93%) but risking misclassification from overlapping features. In contrast, [11], [13], and [18] focus solely on lung cancer, optimizing precision (e.g., 98.76% F1-score in [18]) at the cost of diagnostic scope. Hierarchical approaches like [20] show high initial sensitivity (99%) but degrade in later stages (78% accuracy), reflecting a precision-recall trade-off. Meanwhile, [15]'s MANN reduces false positives but suffers from low sensitivity (72.85%), while [19]'s VDSNet prioritizes rotation invariance over peak accuracy. Preprocessing and hybrid models (e.g., [11], [18]) enhance specificity, while large-scale CNNs (e.g., [16]) improve generalizability at higher computational costs. Table I synthesizes these limitations, emphasizing the need for standardized TABLE I X-RAY Ref. Methodology Disadvantages [11] Lung segmentation and bone shadow exclusion (BSE). Sensitivity Issues: Poor detection of small or early-stage cancers. False Negatives: Common due to overlapping structures. [17] Multi-classification model for COVID-19, pneumonia, and lung cancer. Sensitivity Issues: Overlapping between disease symptoms can cause misclassification. False Positives: Possible due to overlapping features in X-rays and CT images. [12] Adsoption-contrast and phase-contrast imaging Adsorption contrast imaging affect patients with higher radiation doses. Phase-contrast imaging is complex and expensive to be implemented. [20] A CAD algorithm to identify pulmonary nodules. Limited to retrospective data, which may not fully represent current clinical scenarios. The algorithm's performance heavily depends on the quality of the input X-rays. [16] CDC_Net model that incorporates residual networks and dilated convolution. High computational cost due to the complexity of the model. Potential overfitting due to the use of multiple pre-trained models. [14] A probabilistic neural network. Sensitive to noise in the data. The model may not generalize well to different datasets. [15] CAD system using a massive artificial neural network (MANN) with a soft tissue technique. The model's sensitivity to subtle nodules is still relatively low. High false positive rate, which can lead to unnecessary follow-up tests. [13] Combined backpropagation neural networks with PCA. PCA may lead to loss of important information during dimensionality reduction. [19] A hybrid deep learning model combining VGG, data augmentation, and spatial transformer networks with CNN. The model's performance can degrade with rotated or tilted images. Requires significant computational resources for training. [18] A hybrid deep learning technique combining CNN and SVM. Complex to implement and optimize. Large amounts of labeled data are required for effective training. benchmarks and explainable AI to bridge clinical adoption gaps. B. CT Scan CT-based lung cancer detection methods exhibit diverse architectural strategies, preprocessing pipelines, and clinical trade-offs. Pure 3D CNNs, such as those in [21] (UNet segmentation) and [22] (dual 3D CNNs for detection/classification), achieve high sensitivity (94%) and specificity (91%) by leveraging spatial context, but face computational bottlenecks. Hybrid models address this. In [23], the paper combines improved deep neural networks (IDNN) with an ensemble classifier, achieving robust accuracy through feature optimization, while [24] integrates CNNs with mRMR feature selection to balance efficiency and performance. In contrast, [25] reduces computational load via template matching and simplified preprocessing without sacrificing accuracy, illustrating the trade-off between model complexity and resource demands. Advanced preprocessing methods significantly impact detection accuracy. For example, [26] uses the Frangi filter to suppress vessel-like structures, achieving 94% sensitivity at the cost of high false positives (15.1/scan), whereas [27] employs Adaptive Bilateral Filter (ABF) and ABC segmentation for precise lung region extraction, attaining 98.42% accuracy. Conversely, [28] relies on K-means clustering and geometric mean filters, demonstrating that simpler techniques suffice for segmentation but limit nodule characterization precision. Large-scale datasets like NLST in [29] (AUC: 94.4%-95.5%) and multi-institutional trials in [30] (96,559 participants) enhance generalizability, whereas smaller datasets in [26] and [31] risk overfitting despite high sensitivity (80.06%-94%). Notably, [31]'s two-step approach (geometric candidate generation + 3D CNN) mitigates this by combining prior knowledge with learned features, outperforming DBNs and SDAEs. Methods prioritizing sensitivity, such as [26] (94% sensitivity), incur higher false positives, while those emphasizing specificity, like [22] (91% specificity), risk missing subtle nodules. [32]'s DITNN classifier achieves 98.42% accuracy with minimal error (0.038), illustrating how hybrid segmentation (IPCT) and noise reduction can harmonize both metrics. LDCT-focused studies like [30] demonstrate reduced lung cancer mortality but highlight unresolved challenges in overall mortality reduction. Meanwhile, [29]'s risk-prediction model leverages longitudinal LDCT data, underscoring the value of temporal analysis in early detection. CT-based approaches excel in spatial accuracy but grapple with computational costs, false positives, and dataset biases. While 3D CNNs and hybrid models (e.g., [21], [23]) dominate in performance, simpler pipelines (e.g., [25]) offer pragmatic alternatives for resourceconstrained settings. Table II synthesizes these limitations, advocating for optimized preprocessing, federated learning, and explainability to bridge clinical adoption gaps. C. Others 1) Whole Slide Images (WSI): WSI-based approaches leverage histopathology images to classify lung cancer subtypes and predict molecular markers, though they vary in supervision levels and clinical integration. The Inception v3 model in [34] achieves high diagnostic precision (AUC: 0.97) for NSCLC subtyping and gene mutation prediction using fully annotated TCGA datasets, but its dependency on exhaustive labeling limits scalability. In contrast, [1] adopts weakly supervised learning with patch-based FCNs and RF classifiers, attaining 97.3% accuracy on 939 WSIs with minimal annotations. This method underperforms on public datasets (85.6% AUC), reflecting a trade-off between annotation effort and generalizability. Meanwhile, [35] reviews broader AI applications in pathology, emphasizing CNNs for automated feature extraction but highlighting unresolved challenges, such as pathologist skepticism and workflow incompatibility. WSI methods excel in molecular and histological granularity but face barriers in annotation standardization and clinical trust (Table III). TABLE II CT SCANS Ref. Methodology Disadvantages [21] 3D CNN and U-Net. High false positive rates. Relies on large labeled datasets for generalizability. [22] 3D CNNs for Nodule Detection. Nodule Type Exclusion: Excludes certain types (e.g., GGO) that limit clinical applicability. Small Datasets: Limited representation of nodule variability. [2] Hybrid models for various DL models Limited Depth: This may restrict feature learning capabilities, affecting diagnosis accuracy. Training Efficiency: Longer training times due to model complexity. [33] Combining CAD with deep learning model and SVM. False Positives: High rates leading to unnecessary invasive procedures. Radiation Exposure: Risk associated with multiple scans. [29] 3D CNN architecture False Positives: High rates leading to unnecessary invasive procedures. Radiation Exposure: Risk associated with multiple scans. [28] ML-based approach like ANN, KNN, and RF False Negatives: Challenges in identifying smaller nodules or early-stage cancer. Computational Complexity: Certain models require high computational resources. [23] Improved deep neural networks (IDNN) along with an ensemble classifier Requires large datasets, risking overfitting, and high computational demands. [30] Segmentation of the images was achieved through the K-means algorithm, and ML classifiers like ANN, KNN, and RF False Negatives: Challenges in identifying smaller nodules or early-stage cancer. Computational Complexity: Certain models require high computational resources. [25] Deep neural network approach (MobileNet) Sensitivity Issues: Risk of misclassifying nodules, especially in early-stage cases. [24] CNNs for feature selection and classification Small dataset of only 100 images. High computational complexity for real-time applications. [26] Multi-group patch-based CNN False Positives: Higher rates of false positives due to vessel-like structures in CT images. Patch Selection Challenges: Selecting meaningful patches can be computationally expensive. [31] 3D CAD system using 3D CNNs Cross-Validation Limitations: Testing solely on training data can lead to overfitting. External Validation Needs: Lack of testing on independent datasets affects reliability. [27] Conventional CT Scans combined with image processing techniques. Limited generalizations due to single hospital data. Requires significant processing power. [32] 3D CNNs Excludes certain nodule types. Small datasets limit variability. 2) PET Scan: PET/CT frameworks prioritize multimodal integration and staging accuracy but grapple with technical artifacts and validation gaps. The hybrid CNN model in [36] combines PET and CT data to classify FDG uptake, achieving near-perfect AUC (0.99) for lung cancer and lymphoma, though its narrow focus limits broader clinical utility. Conversely, [37] proposes a cloud-based framework (CloudLTDSC) for end-to-end tumor detection, segmentation, and 9stage classification, achieving 97% accuracy on 94 NSCLC patients. While scalable, its small cohort risks overfitting. TABLE III OTHERS Ref. Methodology Disadvantages [34] PET/CT Scans. Variability: Inconsistencies due to differing imaging protocols across institutions. False Positives: Higher rates in non-malignant cases. [1] Weakly Supervised DL for Whole Slide Lung Cancer Image Analysis. FCNs for image segmentation and classification Annotation Scarcity: Limited availability of labeled data leads to potential inaccuracies. High Computational Costs: Processing whole slide images can be resource-intensive. [35] CNN for Pathology Image Analysis Large Data Requirement: CNNs require a large amount of annotated data for accurateness. Sensitivity to Variations: The model's performance may degrade with variations in image quality. [36] Automated Detection with PET/CT. CNNs for detecting and classifying FDG uptake in lung lesions Single Institution Data: Limited generalizations and potential biases. False Positives: Frequent in lymphoma, complicating diagnosis. [38] A standard combine PET and CT technologies Radiation Exposure: expose patients to high radiation exposures. False Positives/Negatives: On PET scans, inflammatory diseases can occasionally appear as cancer, while, tiny lesions could go undetected. Cost and Availability: PET/CT scans can be costly, and not available. [37] A cloud-based system and a multilayer convolutional neural network (M-CNN). Concerns about the cloud privacy, the internet connection, and the computational cost. [39] PET/CT. False positives: that lead to unnecessary biopsies. High cost: PET/CT scans are expensive. Radiation exposure: higher radiation levels than individual PET or CT scans. [40] The integration of PET and CT for staging non-small cell lung cancer (NSCLC). Concerns about the system availability, image interpretation complexity, and overdiagnosis. Clinical reviews contextualize PET/CT's strengths, [38] and [39] validate its superiority in nodal staging and lesion differentiation (e.g., T3/T4 staging) but note persistent false positives, necessitating invasive confirmations. [40] further critiques technical limitations, such as respiration artifacts and contrast variability, advocating for protocol standardization. Though PET/CT excels in metabolic and anatomical correlation, its clinical adoption hinges on artifact mitigation and larger validation cohorts (Table III). IV. DISCUSSION AND OPEN PROBLEMS High-resolution CT scans have shown strong potential for early detection of lung nodules, with studies reporting classification accuracies of up to 91.13% [21], [22] and even reaching 96% in certain cases [18]. PET/CT imaging provides excellent specificity in identifying malignancies [36], although its high resource demands and limited accessibility remain concerns. In contrast, X-ray imaging is widely available but suffers from sensitivity issues and false negatives caused by the poor detection of small nodules and interference from overlapping anatomical structures [11], [17]. WSI is also challenged by variability in imaging protocols and the occurrence of false positives in non-malignant cases [1], [34]. Despite these promising findings, several limitations and open research problems persist. A primary challenge is the integration of imaging-based diagnostic systems into clinical workflows. Furthermore, the effective integration of multimodal data-combining imaging with genomic or clinical information-remains in its infancy. CT imaging face issues such as false positives, radiation exposure, and difficulties in detecting smaller nodules, as noted in multiple studies [23], [25], [28]-[30], [33]. Similarly, PET scans are limited by generalizability concerns due to reliance on single-institution data, and occasional false positives, especially in lymphoma detection [36]. Enhancing image preprocessing and standardizing imaging protocols may improve diagnostic precision across modalities. For X-rays, the application of advanced image enhancement techniques could help reduce false negatives and misclassifications. In the case of CT scans, the adoption of low-dose imaging protocols and optimized preprocessing techniques may lower radiation exposure and decrease false-positive rates, while diversifying datasets and conducting extensive cross-validation could enhance the generalizability of PET imaging. For WSI, standardization of imaging protocols and the development of more efficient processing methods are recommended to overcome issues related to annotation scarcity and protocol variability. Overall, continued research focused on refining these imaging modalities and their integration into clinical practice is essential for advancing lung cancer detection. This review of imaging modalities for lung cancer detection underscores the importance of early and accurate diagnosis. While 3D CNN architectures with CT scans show superior performance, critical challenges remain. Future research should include Creating standardized, diverse datasets to improve model generalizability across populations, and developing hybrid models to reduce false positives while maintaining sensitivity, particularly for CT-based approaches, and Optimizing computational efficiency for clinical implementation. Integrating multimodal imaging (CT with PET or WSI) and incorporating genetic data alongside imaging features presents promising directions. Additionally, explainable AI will be essential for clinical adoption. V. ACKNOWLEDGMENT The Security and Privacy of Emerging Network (SPEN) research lab at the (USM), USA, made this work possible. The statements made herein are solely the responsibility of the authors. REFERENCES [1] X. Wang, H. Chen, C. Gan, H. Lin, Q. Dou, E. Tsougenis, Q. Huang, M. Cai, and P.-A. Heng, "Weakly supervised deep learning for whole slide lung cancer image analysis," IEEE Trans. Cybern., vol. 50, pp. 3950-3962, Sept. 2020. [2] W. Sun, B. Zheng, and W. Qian, "Automatic feature learning using multichannel roi based on deep structured algorithms for computerized lung cancer diagnosis," Computers in Biology and Medicine, vol. 89, pp. 530-539, 2017. [3] B. Hamoui, A. Alashaikh, A. Sherif, E. Alanazi, M. Nabil, and W. Alsmary, "Google searches and covid-19 cases in saudi arabia: A correlation study," in 2021 3rd IEEE Middle East and North Africa COMMunications Conference (MENACOMM), pp. 104-108, 2021. [4] C. Bourn, J. Heirendt, K. Ben-Chiobi, A. Haastrup, A. Sherif, and M. Elsersy, "Privacy-preserving data sharing scheme for e-health systems," in 2023 9th International Conference on Information Technology Trends (ITT), pp. 20-25, 2023. [5] J. Romeo, M. Abbass, A. Sherif, M. M. R. Khan Mamun, M. Elsersy, and K. Khalil, "Privacy-preserving machine learning for e-health applications: A survey," in 2024 IEEE 3rd International Conference on Computing and Machine Intelligence (ICMI), pp. 1-6, 2024. [6] M. Watkins, C. Dorsey, D. Rennier, T. Polley, A. Sherif, and M. Elsersy, "Privacy-preserving data aggregation scheme for e-health," in Proceedings of the 2nd International Conference on Emerging Technologies and Intelligent Systems (M. A. Al-Sharafi, M. Al-Emran, M. N. AlKabi, and K. Shaalan, eds.), (Cham), pp. 638-646, Springer International Publishing, 2023. [7] M. Elsersy, A. Sherif, A. A.-A. Imam, M. M. R. Khan Mamun, K. Khalil, and M. Haitham, "Federated learning model for early detection of dementia using blood biosamples," in 2023 IEEE International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), pp. 1-5, 2023. [8] G. Chassagnon, C. De Margerie-Mellon, M. Vakalopoulou, R. Marini, T.-N. Hoang-Thi, M.-P. Revel, and P. Soyer, "Artificial intelligence in lung cancer: current applications and perspectives," Jpn. J. Radiol., vol. 41, pp. 235-244, Mar. 2023. [9] M. Cellina, L. M. Cacioppa, M. Cè, V. Chiarpenello, M. Costa, Z. Vincenzo, D. Pais, M. V. Bausano, N. Rossini, A. Bruno, and C. Floridi, "Artificial intelligence in lung cancer screening: The future is now," Cancers (Basel), vol. 15, Aug. 2023. [10] H. T. Gayap and M. A. Akhloufi, "Deep machine learning for medical diagnosis, application to lung cancer detection: A review," BioMedInformatics, vol. 4, no. 1, pp. 236-284, 2024. [11] Z. Peng, X. Xinnan, W. Hongwei, F. Yuanli, F. Haozhe, Z. Jianwei, Y. Shoukun, H. Yuxuan, S. Yiwen, L. Jiaxiang, and L. Xinguo, "Computer-aided lung cancer diagnosis approaches based on deep learning," Journal of Computer-Aided Design & Computer Graphics, vol. 30, no. 1, pp. 90-99, 2018. [12] K. Li, Y. Chen, R. Sun, B. Yu, G. Li, and X. Jiang, "Exploring potential of different x-ray imaging methods for early-stage lung cancer detection," Radiation Detection Technology and Methods, vol. 4, pp. 213-221, 2020. [13] I. S. Abed, "Lung cancer detection from x-ray images by combined backpropagation neural network and pca," Engineering and Technology Journal, vol. 37, no. 5A, pp. 166-171, 2019. [14] M. F. Syahputra, R. F. Rahmat, and R. Rambe, "Identification of lung cancer on chest x-ray (cxr) medical images using the probabilistic neural network method," Journal of Physics: Conference Series, vol. 1898, p. 012023, jun 2021. [15] K. Rajagopalan and S. Babu, "The detection of lung cancer using massive artificial neural network based on soft tissue technique," BMC Medical Informatics and Decision Making, vol. 20, p. 282, Oct 2020. [16] H. Malik, T. Anees, M. Din, and A. Naeem, "Cdc_net: multiclassification convolutional neural network model for detection of covid19, pneumothorax, pneumonia, lung cancer, and tuberculosis using chest x-rays," Multimedia Tools and Applications, vol. 82, pp. 13855-13880, Apr 2023. [17] D. M. Ibrahim, N. M. Elshennawy, and A. M. Sarhan, "Deep-chest: Multi-classification deep learning model for diagnosing covid-19, pneumonia, and lung cancer chest diseases," Computers in Biology and Medicine, vol. 132, p. 104348, 2021. [18] V. Sreeprada and K. Vedavathi, "Lung cancer detection from x-ray images using hybrid deep learning technique," Procedia Computer Science, vol. 230, pp. 467-474, 2023. [19] S. Bharati, P. Podder, and M. R. H. Mondal, "Hybrid deep learning for detecting lung diseases from x-ray images," Informatics in Medicine Unlocked, vol. 20, p. 100391, 2020. [20] J. Juan, E. Monsó, C. Lozano, M. Cufí, P. Subías-Beltrán, L. Ruiz-Dern, X. Rafael-Palou, M. Andreu, E. Castañer, X. Gallardo, A. Ullastres, C. Sans, M. Lujàn, C. Rubiés, and V. Ribas-Ripoll, "Computer-assisted diagnosis for an early identification of lung cancer in chest x rays," Scientific Reports, vol. 13, p. 7720, May 2023. [21] W. Alakwaa, M. Nassef, and A. Badr, "Lung cancer detection and classification with 3d convolutional neural network (3d-cnn)," International Journal of Advanced Computer Science and Applications, vol. 8, no. 8, 2017. [22] N. Nasrullah, J. Sang, M. S. Alam, M. Mateen, B. Cai, and H. Hu, "Automated lung nodule detection and classification using deep learning combined with multiple strategies," Sensors, vol. 19, no. 17, 2019. [23] P. M. Shakeel, M. A. Burhanuddin, and M. I. Desa, "Automatic lung cancer detection from ct image using improved deep neural network and ensemble classifier," Neural Computing and Applications, vol. 34, pp. 9579-9592, Jun 2022. [24] M. Toƒüa√ßar, B. Ergen, and Z. C√∂mert, "Detection of lung cancer on chest ct images using minimum redundancy maximum relevance feature selection method with convolutional neural networks," Biocybernetics and Biomedical Engineering, vol. 40, no. 1, pp. 23-39, 2020. [25] P. Shill and Z. Homayra, "A new method for lung nodule detection using deep neural networks for ct images," pp. 1-6, 02 2019. [26] H. Jiang, H. Ma, W. Qian, M. Gao, and Y. Li, "An automatic detection system of lung nodule based on multigroup patch-based deep learning network," IEEE J. Biomed. Health Inform., vol. 22, pp. 1227-1237, July 2018. [27] A. Asuntha and A. Srinivasan, "Deep learning for lung cancer detection and classification," Multimedia Tools and Applications, vol. 79, pp. 7731-7762, Mar 2020. [28] S. Nageswaran, G. Arunkumar, A. K. Bisht, S. Mewada, J. N. V. R. S. Kumar, M. Jawarneh, and E. Asenso, "Lung cancer classification and prediction using machine learning and image processing," Biomed Res. Int., vol. 2022, p. 1755460, Aug. 2022. [29] D. Ardila, A. P. Kiraly, S. Bharadwaj, B. Choi, J. J. Reicher, L. Peng, D. Tse, M. Etemadi, W. Ye, G. Corrado, D. P. Naidich, and S. Shetty, "End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography," Nature Medicine, vol. 25, pp. 954-961, Jun 2019. [30] R. M. Hoffman, R. P. Atallah, R. D. Struble, and R. G. Badgett, "Lung cancer screening with low-dose CT: A meta-analysis," J. Gen. Intern. Med., vol. 35, pp. 3015-3025, Oct. 2020. [31] X. Huang, J. Shan, and V. Vaidya, "Lung nodule detection in ct using 3d convolutional neural networks," in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 379-383, 2017. [32] P. M. Shakeel, M. Burhanuddin, and M. I. Desa Measurement, vol. 145, pp. 702-712, 2019. [33] S. Makaju, P. Prasad, A. Alsadoon, A. Singh, and A. Elchouemi, "Lung cancer detection using ct scan images," Procedia Computer Science, vol. 125, pp. 107-114, 2018. The 6th International Conference on Smart Computing and Communications. [34] N. Coudray, P. S. Ocampo, T. Sakellaropoulos, N. Narula, M. Snuderl, D. Fenyö, A. L. Moreira, N. Razavian, and A. Tsirigos, "Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning," Nature Medicine, vol. 24, pp. 1559-1567, Oct 2018. [35] S. Wang, D. M. Yang, R. Rong, X. Zhan, J. Fujimoto, H. Liu, J. Minna, I. I. Wistuba, Y. Xie, and G. Xiao, "Artificial intelligence in lung cancer pathology image analysis," Cancers (Basel), vol. 11, p. 1673, Oct. 2019. [36] L. Sibille, R. Seifert, N. Avramovic, T. Vehren, B. Spottiswoode, S. Zuehlsdorff, and M. Schäfers, "18f-fdg pet/ct uptake classification in lymphoma and lung cancer by using deep convolutional neural networks," Radiology, vol. 294, no. 2, pp. 445-452, 2020. PMID: 31821122. [37] G. Kasinathan and S. Jayakumar, "Cloud-based lung tumor detection and stage classification using deep learning techniques," Biomed research international, vol. 2022, no. 1, p. 4185835, 2022. [38] B. Hochhegger, G. R. T. Alves, K. L. Irion, C. C. Fritscher, L. G. Fritscher, N. H. Concatto, and E. Marchiori, "Pet/ct imaging in lung cancer: indications and findings," Jornal Brasileiro de Pneumologia, vol. 41, no. 3, pp. 264-274, 2015. [39] A. F. E. A. M. D. A. F. C. C. L. Cristina Gámez, Rafael Rosell, "Pet/ct fusion scan in lung cancer: Current recommendations and innovations," Journal of Thoracic Oncology, vol. 11, no. 1, pp. 6-10, 2016. [40] J. C. J. A. V. W De Wever, S Stroobants, "Integrated pet/ct in the staging of nonsmall cell lung cancer: technical aspects and clinical integration," European Respiratory Journal, vol. 33, no. 1, pp. 201-212, 2009.
|
2509.16257
|
The Quantum Method of Planes - Local Pressure Definitions for Machine
Learning Potentials
E. R. Smitha)
Brunel University of London Kingston Lane Uxbridge Middlesex UB8 3PH
(Dated: 23 September 2025)
Stress or Pressure is a central quantity in engineering, and remains vital in molecular modelling. However,
the commonly used virial stress tensor is not valid away from thermodynamic equilibrium, a common state
required in fluid dynamics and non-equilibrium molecular dynamics (NEMD) simulation. This is solved by
using the method of planes (MoP), a mechanical form of pressure, simply interpreted as the force divided by
area but derived from the firm foundations of statistical mechanics. We present an extension of MoP stress
to the MACE potential, a particular form of machine learning (ML) potentials allowing quantum mechanical
(QM) physics in classical simulation. We present the derivation of the MoP stress for the MACE potential
using the theoretical framework set out by Irving and Kirkwood 1. For the testcase of an interface between
water and Zirconium Oxide, we show the MoP measures the correct force balance while the virial form
fails. Further, we demonstrate the MoP is valid arbitrarily far from equilibrium, showing exact conservation
every timestep in a control volume bounded by MoP planes. This links the stress directly to the conservation
equations and demonstrates the validity in non equilibrium molecular dynamics systems. All code to reproduce
these validations for any MACE system, together with ASE accelerated code to calculate the MoP are provided
as open source. This work helps build the foundation to extend the ML revolution in materials to NEMD
and molecular fluid dynamics modelling.
I.
INTRODUCTION
Material science is undergoing a revolution, with a new
generation of machine learning (ML) potentials allowing
classical models to incorporate quantum Density Func-
tional Theory (DFT) level accuracy at speeds tradition-
ally closer to classical molecular dynamics2. One promis-
ing candidate for fast and accurate simulation is the
MACE-MP-0 family of models3, which are trained on an
extensive material database covering the periodic table
and giving good results for cases outside of this dataset,
including the mixing of elements. The MACE model is
particularly efficient and promising because, rather than
a purely data driven approach, that simply feeds vast
amounts of data to an increasing deep neural network, it
uses aspects of the physics to simplify the required net-
work and training4. This speeds up inference, the force
calculation in MD, to speeds approaching classical MD
when run on modern GPU architectures. The symme-
tries of common molecular configurations during interac-
tion are build in through the atomic cluster expansion
(ACE)5, which uses symmetries to reduce the number
of different molecules and configurations that need to be
remembered by a network. This can be understood as
analogous to machine vision, where recognising a picture
of a cat, even when shifted or rotated, is still the same
picture. In molecules, this can be done using symmetries
through the E3NN library6–9. The use of ACE allows
high body order5, the number of many-body interactions
considered (in MACE set to a bond order of four locally).
Inspired by leading results in message passing potentials
a)Electronic mail: edward.smith@brunel.ac.uk
like NequIP10, this ACE approach is then embedded in a
message passing framework. The required cutoff length
of the ACE molecular interactions, rc, is reduced using
this message passing approach (the M in MACE), which
splits multi-body interactions into a graph network so
only energetically important connections or ”edges” are
required. The number of hops taken is M = 2 on the
MACE-MP-0 version used in this work.11. A machine
learning process is then applied to work out which con-
nections are essential and this process means each 4 body
local calculation is connected by graph edges to M other
atoms, each with four body interactions given by the
ACE model. The resulting potential is then effectively 13
body order and an effective cutoff length M ×rc. As a re-
sult, this performs very well in learning DFT results while
allowing fast simulations. With the simplifications of M
and ACE, a deep neural network is then trained on the
MPtraj12 and sAlex13 database, a vast crystal database
covering many materials . The paper has been demon-
strated to give good results on a vast range of cases3
This has since been extended to include other databases,
charges and fine tunes since initial publications, a process
which will doubtlessly accelerate. However, the presented
work is not tied to the database of this foundation model,
which has already been fine-tuned and improved since ini-
tial publication using wider DFT databases. Such fine-
tuning also has the ability to improve properties DFT
typically struggles to reproduce, from dispersion, long-
range interactions to the over structuring of water. In-
stead, the derivation in this work focuses on the base
architecture to consider the general behaviour allowing
these models to continue to improve and allowing use of
the most up to date model.
As a result of the potential revolutionary nature of
ML potentials, and the already significant improvement
arXiv:2509.16257v1 [physics.chem-ph] 17 Sep 2025
2
offered by the MACE potential in materials modelling,
the solid simulation community has fully embraced these
models (for example the range of authors on Batatia
et al. 3). However, the potential of ML for fluid dynamics
problems, known as non equilibrium molecular dynamics
(NEMD), seems to have received far less attention. Mod-
elling of the interface between a liquid and a surface is
often a very complex problem that would greatly benefit
from ML modelling. On surface with chemical reactions
such as rusting or wear, to catalysts or dissociation of
water these are difficult to capture with classical model
like the ReaxFF reactive force-field14. Cooling of heat
exchanges, nuclear reactors or semiconductors all require
molecular interface modelling, while lubrication and tri-
bology are intertwined with the molecular chemistry of
the interface. Using ML models to capture these effects
has the potential to revolutionise NEMD modelling of in-
terfaces. NEMD is characterised by a series of techniques
to study fluid flows, including thermostats to control
temperature or forcing routines to enforce shear flow15.
A process which is a form of molecular fluid dynamics,
NEMD provides methods to get quantities of great im-
portant to fluid flow modelling, including density, veloc-
ity, stress and heat flux15,16. Of particular importance
for many cases in fluid dynamics are the stress tensor
and heat flux vectors, vital for rheology, tribology17, slip
length18,19, the moving three-phase contact line20,21, sur-
face tension22, viscosity and thermal transfer23,24. These
molecular details can be included in continuum-based en-
gineering simulation, such as computational fluid dynam-
ics (CFD), through coupled simulation25,26 where stress
coupling is useful in both fluid simulation27 and for solid
mechanics28.
Getting the stress and heat flux has a long history in
the literature29.
Two key observations are important,
stress is non-unique and certain forms of stress are incor-
rect for inhomogeneous systems. Note stress and pres-
sure are used interchangable in this work, with one the
negative of the other.
For the NEMD cases, a stress
and heat flux that is valid away from thermodynamic
equilibrium is essential. Recent work by Langer, Frank,
and Knoop 30,31, demonstrates that stress is possible to
obtain for a whole periodic system by applying a finite
strain. The current work extends this by starting from
the foundations of Irving and Kirkwood 1 to derive a form
of local stress and heat flux. In this process, it can be
used to show that the virial form of stress and heat flux,
the default option in many MD packages, are an over-
simplification that lead to errors away from equilibrium,
a well know result in the classical literature extended
here to these ML potentials.
Instead, the Method of
Planes MoP stress has advantages as it can be derived
directly from the Irving and Kirkwood 1 and shown to
be linked exactly to a control volume form of the conser-
vation equations. This link to a conservative form pro-
vides the validation that the stress defined is meaningful
and correct. The MoP also has the advantage it is the
most fundamental definition of stress, the force divided
by area, and can be calculated relatively easily in MD.
As a result, this work will derive a usable stress for the
MACE potential and demonstrate conservation of mo-
mentum and energy in a control volume. More generally,
this treatment will work for any machine learning poten-
tial that can be expressed as a pairwise force, the general
class of message passing neural networls.
The remainder of the work is as follows, in section II
a derivation of the MoP stress and heat flux for an ACE
style potential is presented. Next, the methodology of
the molecular dynamics used to simulate a non-trivail
case of ZrO2 and water is given section III. The results
of applying the MoP stress in this system is shown in
section IV including a demonstration of the conservation
of momentum in a control volume and the failure of the
virial form of stress. Finally, some brief conclusions are
given in section V.
II.
THEORY
In classical physics, the force F = −∇U(r), is given
by the negative gradient of potential U with respect to
position r only if we have a conservative potential32. In
classical molecular dynamics we have a system of point-
wise atoms so only at the positions of these atoms ri can
forces exist. Then the force on atom i is,
Fi = −∂U
∂ri
(1)
with the total forces in an MD system satisfying the con-
dition PN
i=1 Fi = 0.
The true picture in a quantum
system is more complex with wavefunctions defining a
continuous energy throughout space, reduced by ML to
a graph network. It is interesting to note that the exis-
tence of a conservative potential, while generally satisfied
by just training for energy only, is better enforced by ex-
plicit inclusion of force in the training process. The train-
ing process of MACE aims to match both total potential
energy U and forces Fi to DFT data4, by minimising the
function,
L = λE
B
B
X
b=1
"
Ub −˜Ub
Nb
#2
+ λF
3B
B
X
b=1
Nb
X
i=1
−∂Ub
∂ri
−˜Fib
2
(2)
where the index b denotes the sum over the B total
batches used in the training process (here originally
for MACE-MP-0 the 89 elements in the MATPES-PBE
dataset33). The ˜Ub and ˜Fib are the potential energies and
forces from batch b of the DFT data respectively. The
lagrangian multipliers λE and λF are chosen to enforce
the relative importance of the two contributions, where
it was found an initial training using λE >> λF before
switching to λE << λF with a smaller step gave good
agreement for energy and forces4.
3
A.
Intermolecular Forces
Admal and Tadmor 34 show all many-body forces can
be written in a pairwise manner provided the potentials
are continuous. For classical potentials, for example the
Lennard-Jones potential, the continuity is true due to
the well define mathematical function of the potential,
i.e. U(r) = r−12 −r−6. The MACE model follows the
general framework of message passing neural networks
(MPNNs)35, a type of graph-based machine-learning po-
tential. As a result these MPNN are continuous and can
use automatic differentiation (AD) to define gradients, so
can be expressed in terms of pairwise forces30. The mes-
sage passing uses ACE, which is also construted from a
set of continuous functions expressed in a pairwise man-
ner.
For machine learning potentials, in particular the ACE
potential used here, the internal energy is typically de-
fined using set notation4,30,35 U = U ({rij | rij < rc})
to denote all possible pairwise interactions between all
molecules within cutoff rc. Taking the derivative of this
potential proceeds as follows,
∂U
∂ri
= ∂U
∂ri1
· ∂ri1
∂ri
+ ∂U
∂ri2
· ∂ri2
∂ri
· · · + ∂U
∂riN
· ∂riN
∂ri
=
N
X
j̸=i
∂U
∂rij
· ∂rij
∂ri
=
N
X
j̸=i
∂U
∂rij
· ∂rij
∂ri
+ ∂U
∂rji
· ∂rji
∂ri
=
N
X
j̸=i
∂U
∂rij
−∂U
∂rji
(3)
Where the final line is obtained by applying the chain rule
to compute the force on atom i: Using rij = ri −rj so
derivatives w.r.t to ri equal 1 (−1 for rji). We can then
define the expression in
Eq. (3) as the antisymmetric
pairwise force fij,
fij
def
= ∂U
∂rij
−∂U
∂rji
(4)
The total force on atom i is then simply Fi = P
j̸=i fij
With a cutoff, the sum is only over the molecules within
the limit of this cutoff, often denoted by set notation
in the ML literature30, e.g. N(i) = {j ̸= i|rij < rc}.
For more complex message passing cases, it can oc-
cur that interactions N(i) includes a given j where i
is not in the set of j’s interactions N(j). As a result,
∂U/∂rij ̸= −∂U/∂rji. However, the definition of
Eq.
(4) guarantees fij = −fji and so satisfies Newton’s third
law.
In a graph neural network, molecules are at the
nodes while the connections between then, called the
graph edges, conceptually correspond to the intermolecu-
lar interactions. Note that in the work of Langer, Frank,
and Knoop 30 they explicitly denotes that rij applies the
minimum image convention which is dropped here for no-
tational simplicity assuming halos; ghost atom copies of
the domain to enforce periodic boundaries.
The definition of pairwise force in Eq. (4) follows di-
rectly from the assumption U is a function of pairwise
forces only. Had we assumed it was a function of indi-
vidual particle positions or three body terms we could
have derived a more complex series of interactions. Fan
et al. 36 show the three body potentials including Tersoff,
Brenner, Stillinger-Weber and a general many-body po-
tential as simply fij = ∂Ui/∂rij −∂Uj/∂rji where Ui is
the energy at the point location of particle i. The as-
sumption of individual atomic energy U = PN
i=1 Ui in
∂U/∂ri can also be used to define an alternative version
of pairwise forces as,
fij
def
= ∂Uj/∂ri
(5)
This version of force is shown to be the required form to
conserve energy by Langer et al. 31 and demonstrated to
work locally in the section IV, although it will not satisfy
Newton’s 3rd law in general as ∂Uj/∂ri ̸= ∂Ui/∂rj .
Any choice of pairwise force is not unique. The MACE
potential used in this work is based on an effective body
order of 13, so even assuming forces are a many-body
hierarchy is a simplification of the force calculation. The
assumption here of using pairwise forces is therefore an
even more significant simplification of the true dynam-
ics. However, the resulting pairwise force still remains
more complex than a pairwise analytical potential such
as the Lennard Jones, which is symmetric by construc-
tion and satisfies ∂U/∂rij = ∂U/∂rij ˜rij with ˜rij the unit
vector between i and j. Admal and Tadmor 37 state this
difference succinctly; noting pairwise forces can be phys-
ically interpreted as the “force exerted on particle i by
particle j”, whereas the more general interaction force
fij can be interpreted as the “contribution to the force
on particle i due to the presence of particle j”. In the
latter case, Newton’s 3rd law is not required and this
is the default situation for a machine learning potential
like MACE unless a definition like
Eq. (4) specifically
has this symmetry.
It is instructive to consider the pseudo code used to
gets fij from a graph neural network by autodifferentia-
tion, which proceeds as follows.
#Get arrays with all interacting atoms stored
#in identical sized arrays of corresponding
#sender/reciever indices
sender, receiver = neighbour_list(r, cutoff)
#Positions of ri and rj at the end of every
#intermolecular interaction or graph "edge"
#give the rij vectors
rij = r[receiver] - r[sender]
#Taking gradient w.r.t vector rij
dUdrij = autograd.grad(energy, rij)
#Reshape n_edges to n_nodes^2
fij[sender, receiver, :] = -dUdrij
4
#Apply anti-symmetry operation
fij[:,:,0] = fij[:,:,0] - fij[:,:,0].T
fij[:,:,1] = fij[:,:,1] - fij[:,:,1].T
fij[:,:,2] = fij[:,:,2] - fij[:,:,2].T
The process of putting all pairwise forces into an N by
N matrix reduces computational efficiency gains provided
by the MPNNs architecture, which is designed to only in-
clude relevant interactions. To improve this approach, a
sparse matrix could be used for fij or the algorithm used
with only the sender and receiver MPNNs interactions.
However, the above code is reasonable for small systems
given the force calculation is often the major component
in simulation cost and obtaining stresses is undertaken
only as an intermittent exercise. Generally in NEMD we
allow the system to evolve sufficiently that the measured
stress is statistically independent of previous values. The
stress autocorrelation time of the system is usually a sen-
sible value for the sample frequency as it ensures stress
measurements are statistically uncorrelated. In section
II B we use the form of inter-molecular force in Eq. (4)
to determine the pressure tensor
B.
Momentum Equation
We use the definition of forces from
Eq. (4) in the
derivation of pressure following the statistical mechanical
process of Irving and Kirkwood 1. Assuming phase space
is bounded, Irving and Kirkwood 1 obtain an expression
for the evolution in time of expected value α,
∂
∂t
α; f
=
N
X
i=1
Fi · ∂α
∂pi
+ pi
mi
· ∂α
∂ri
; f
,
(6)
By letting α = P piδ(ri −r), Irving and Kirkwood 1
define the momentum density at a point in space,
ρu
def
=
N
X
i=1
piδ(ri −r); f
,
(7)
The time evolution of momentum is used to obtain the
pressure tensor by taking the time derivative of both sides
of Eq. (7) and applying Eq. (6),
∂
∂tρu = ∂
∂t
N
X
i=1
piδ(ri −r); f
=
N
X
i=1
Fiδ(ri −r) −∂
∂r · pipi
mi
· δ(ri −r); f
(8)
The second term is the kinetic part of the pressure tensor
P k, defined using ∂/∂riδ(r −ri) = −∂/∂rδ(r −ri) and
subtracting streaming velocity. As pi is the momentum
in the laboratory frame we can denote pi as the pecu-
liar value which excludes the macroscopic streaming term
u(ri) at the location of molecule i38. The kinetic pressure
integrated over a volume is
R
V P kdV = ⟨pipi
mi ϑi; f⟩where
the function ϑi is the integral of a Dirac delta function,
only non-zero for a molecule inside the volume. This term
is identical in both classical and MACE systems so is not
considered further, instead we turn our attention to the
configurational pressure. We aim to express the forcing
term of
Eq. (8) as the divergence of a pressure tensor
∇· P c. To do this, Irving and Kirkwood 1 use the as-
sumption of Newton’s 3rd law, which we can also invoke
due to the definition of Eq. (4), to get the difference of
two Dirac delta
N
X
i=1
Fiδ(ri −r); f
=
N
X
i=1
∂U
∂ri
δ(ri −r); f
= 1
2
N
X
i=1
∂U
∂ri
δ(ri −r) +
N
X
j=1
∂U
∂rj
δ(rj −r); f
= 1
2
N
X
i=1
N
X
j̸=i
∂U
∂rij
−∂U
∂rji
δ(ri −r)
+
N
X
j=1
N
X
i̸=j
∂U
∂rji
−∂U
∂rij
δ(rj −r); f
= 1
2
N
X
i=1
N
X
j̸=i
fijδ(ri −r) + fjiδ(rj −r); f
= 1
2
N
X
i=1
N
X
j̸=i
fij [δ(ri −r) −δ(rj −r)] ; f
(9)
At this point, a useful insight can be obtained from the
integral over a volume of the difference between the two
Dirac Delta, ϑij = ϑi −ϑj, which is only non zero when
one molecule is in the volume and the other is outside. In
this form, the balance on an arbitrary volume can be used
to check momentum conservation where surface forces
equal internal momentum change (given the absence of
any atoms crossing the surface). Stress is a more useful
form, so the usual manipulations can be made, includ-
ing the slightly tenuous Taylor expansion of two Dirac
delta functionals from Irving and Kirkwood 1, to give
Oij the so-called IK operator as an expansion in delta
functions39, or the more useful form obtained by rewrit-
ing the integral along a (non unique40) contour41,42,
δ(ri −r) −δ(rj −r) = −rij · ∂
∂r Oijδ(ri −r)
=
I
∂
∂ℓ· δ (r −ri −ℓ) dℓ
= ∂
∂r ·
I
δ (r −ri −ℓ) dℓ.
(10)
We can go further and simply assume a straight line, of-
ten called the IK contour which is consistent with New-
ton’s assumption of impressed force between two points,
I
δ (r −ri −ℓ) dℓ≈rij
Z 1
0
δ (r −ri −λrij) dλ.
(11)
5
This assumes ℓ= λrij with 0 < λ < 1 so dℓ= rijdλ.
As this linear relationship is not valid for MACE style
interactions, which do not simply act pairwise between
atoms, we instead use the contour form from
Eq. (10)
and substitute into Eq. (9). This is then integrated over
a finite volume to get configurational pressure,
∂
∂r
Z
V
·P CdV
= ∂
∂r · 1
2
N
X
i=1
N
X
j̸=i
Z
V
fij
I
δ (r −ri −ℓ) dℓ; f
dV
= ∂
∂r · 1
2
N
X
i=1
N
X
j̸=i
fij
I
ϑℓdℓ; f
(12)
the function ϑℓis non zero if the part of the non-linear in-
teraction path is inside the averaging volume43. The full
volume averaged pressure can be obtained by assuming
constant pressure in the volume,
VA
P =
1
∆V
N
X
i=1
pipi
mi
ϑi + 1
2
N
X
i=1
N
X
j̸=i
fijlij; f
,
(13)
where ∆V is the local volume and lij =
H
ϑℓdℓthe length
of interaction inside a volume, a form well know in the
literature for straight lines44,45, simply extended here for
a contour.
The virial form of pressure47.
is a simplification of
Eq. (13) where instead of assigning to a volume based
on the path of interactions, the pressure contribution is
split with half assigned to each atom. The equation for
this is,
V IRIAL
P
=
1
∆V
N
X
i=1
pipi
mi
+ 1
2
N
X
j̸=i
fijrij
ϑi; f
,
(14)
which is strictly only valid for an entire periodic MD
simulation but is widely used locally in bins due to is
simplicity and implementation in codes like LAMMPS48.
The Virial pressure is approximate while the Volume
Average pressure
Eq. (13)) is not usable as we cannot
determine the interaction path between two atoms in a
meaningful way from a MACE system. Instead, we can
express this equation in terms of stress across the surface
of the volume, here assumed to be a cuboid for simplicity,
by simply evaluating the derivative of ϑℓ. This gives a
molecular version of the divergence theorem43,
Z
V
∇· P CdV = −1
2
N
X
i,j
fijrij ·
I ∂ϑℓ
∂r dℓ; f
= −1
4
X
α∈{faces}
N
X
i,j
fijdαijSαij; f
=
X
α∈{faces}
Z
Sα
P C · dSα.
(15)
The result is an expression which is non-zero only when
the interaction between two atoms passes over a surface
with dαij = sgn(α−αj)−sgn(α−αi), ensuring the two
atoms are either side. The function Sαij is non zero if
this crossing is localised to a surface. On a CV this is
force over each face, here a cuboid, with six faces x+, x−,
y+, y−, z+ and x−so for any given surface, say the z+
face, this is the force over area,
CV
PC
z+ = −
1
4∆ACV
z+
N
X
i,j
fijdz+
ijSzij; f
.
(16)
the pressure on surface of area ∆ACV
z+ . The signum func-
tions dz+
ij = sgn(z+ −zj) −sgn(z+ −zi) are only non
zero if molecule i and j are on different sides of a plane
at z+ while the S+
zij term specifies the point of intersec-
tion of the line is located on a region of the z+ plane, in
Eq. (16) the surface of the cuboid. The surface is the lo-
calised form of the pressure tensor considered by Han and
Lee 49 applied to the six cubic faces bounding a volume.
For a cube in space, each face has three components of
stress, which results in 18 independent components over
the total control surface. As this interaction path is not
linear with MACE, surface localisation will not be triv-
ial to define, so we instead consider a series of slabs the
size of the domain. This is achieved by setting S+
xij = 1,
so the surface located at z+ spans the entire domain as
an infinite plane in Eq. (17)), recovering the Method of
Planes formulation of the pressure39,
MoP
Pz+ =
1
∆Az+
N
X
i=1
pipiz
mi
δ(zi −z+); f
+
1
4∆Az+
N
X
i,j
fijdz+
ij; f
.
(17)
Any two planes bounding a region in periodic space can
be considered to form a control volume, so the molecules
between satisfy the conservation43 with time evolution of
momentum from Eq. (8) given by,
∂
∂t
Z
V
ρudV = ∂
∂t
N
X
i=1
piϑi; f
=
MoP
P+
z −
MoP
P−
z
(18)
note the minor slight of hand in swapping to laboratory
momentum pi →pi in Eq. (17), assuming streaming ve-
locity is zero to simplify Eq. (18) and remove the need for
convective ρuu terms. These can simply be included if
considering cases with a flow. It is the conservative prop-
erty of Eq. (18) which will be tested using the MACE
potentials in this work. Given the path is non unique,
and even the interaction force between atoms is arbi-
trary, Eq. (17) has the advantage of only requiring that
the atoms be either side of a plane to get the pressure.
This pressure can then be checked to ensure it satisfies
conservation of momentum in Eq. (18), providing vali-
dation that the many-body MACE pressure we are mea-
suring is exactly consistent with the resulting dynamics
6
FIG. 1: The domain with Zirconium dioxide(ZrO2) walls and water (H2O) in the channel showing all dimensions.
The top is shown on the left highlighting the triclinic nature of the system, with the view angle on the right along
the tricilinc domain angle of about θ = 9◦to highlight the crystal structure. The domain is directly taken from the
DFT work of Yang, Youssef, and Yildiz 46 with the fluid region doubled in size by copying the molecules and
re-equilibrating the larger system.
of the molecules. We show in section IV that the virial
stress does not satisfy the momentum balance condition
near a wall, a well know result in the literature39,50 even
in the steady state.
However, we cannot evaluate the
volume average and even if we could, it cannot be linked
to the time evolution in an exact way, as we do here for
the method of planes.
C.
Energy Equation
The momentum balance on a control volume obtained
in the previous section uses a force definition consider-
ing the derivative of the entire system U({rij}) energy
landscape. However, to get the energy change in a given
volume, the concept of potential energy per particle has
to be introduced. The energy at a point is given by,
ρE
def
=
N
X
i=1
eiδ(ri −r); f
,
(19)
where the energy per atom is ei = p2
i /2mi + Ui and the
sum of all atoms gives total energy U = PN
i=1 Ui. Now
to get the evolution of energy Irving and Kirkwood 1 use
Eq. (8) with α = P eiδ(r −ri)),
∂
∂tρE = ∂
∂t
N
X
i=1
eiδ(ri −r); f
=
N
X
i=1
Fi · ∂ei
∂pi
+ ∂ei
∂ri
· pi
mi
|
{z
}
A
δ(r −ri))
−∂
∂r · ei
pi
mi
δ(r −ri)); f
,
(20)
The quantity on the final line is the advection of energy
and does not include interactions, so we focus on the
quantity A in the bracket with ∂ei/∂pi = pi/mi and
∂ei/∂ri = ∂Ui/∂ri,
A = Fi · pi
mi
+ ∂Ui
∂ri
· pi
mi
= −∂U
∂ri
· pi
mi
+
N
X
j=1
∂Ui
∂rj
· ∂rj
∂ri
· ∂ri
∂t
=
N
X
j=1
∂Ui
∂rj
· pj
mi
−
∂PN
j=1 Uj
∂ri
· pi
mi
,
(21)
using pi/mi = ∂ri/∂t, Fi = −∂U/∂ri and U = P
i Ui.
Taking the sum over all i and multiplying by the Dirac
delta function, Eq. (21) becomes,
N
X
i=1
N
X
j=1
∂Ui
∂rj
· pj
mi
δ(r −ri)) −∂Uj
∂ri
· pi
mi
δ(r −ri)); f
,
=
N
X
i,j
∂Ui
∂rj
· pj
mi
δ(r −ri)) −∂Ui
∂rj
· pj
mj
δ(r −rj)); f
,
=
N
X
i,j
∂Ui
∂rj
· pj
mi
[δ(r −ri)) −δ(r −rj))] ; f
,
= ∂
∂r ·
N
X
i,j
∂Ui
∂rj
· pj
mi
I
δ (r −ri −ℓ) dℓ; f
.
(22)
which can be written as the MoP stress using the same
process as the momentum equation, integrating over a
volume and evaluating the derivative in a given direction
7
and surface, again z+ here,
MoP
J K
z+ =
1
∆Az+
N
X
i=1
eipiz
mi
δ(zi −z+); f
MoP
J C
z+ =
1
4∆Az+
N
X
i,j
∂Ui
∂rj
· pj
mi
dz+
ij; f
.
(23)
Once again, it is useful to review the pseudocode that
can obtain these quantities
#Loop needed over all energies Ui per particle
for i in Natoms:
dUidrj[i,:,:] = autograd.grad(node_energy[i], r)
#Sum over all i and j for a
#given plane located at zplane
for i in Natoms:
for j in Natoms:
#Obtain work done to be used in MoP term
fijvi = -( dUidrj[i,j,0]*v[i,0]
+dUidrj[i,j,1]*v[i,1]
+dUidrj[i,j,2]*v[i,2])
dzij = ( sgn(zplane - r[j,2])
-sgn(zplane - r[i,2]))
MoPpower_c += 0.5*fijvi*dzij
It is possible to obtain from the implementation here that
this operation will be computationally expensive, scaling
with N as for each atom i, the full back-propagation op-
eration must be used to get each ∂Ui/∂rj. A more effi-
cient version is possible, given in Langer et al. 31 based on
defining a position variable outside of the computational
graph. It is not clear if this approach can extended to
the MoP pressure in the form presented here.
It is worth comparing Eq. (23) to the naive approach
to getting the heat flux, which would be to use the pair-
wise forces of
Eq. (4) directly in the pairwise MoP
form49,51,
]
MoP
J Cz+ =
1
4∆Az+
N
X
i,j
fij · pj
mi
dz+
ij; f
=
1
4∆Az+
N
X
i,j
∂U
∂rij
−∂U
∂rji
· pj
mi
dz+
ij; f
(24)
This equation gives an error as shown in section IV,
but should be much more computationally efficient given
only a single back-propagation step is required to obtain
∂U/∂rij.
III.
METHODOLOGY
The focus of this work is to demonstrate that the
surface form of pressure is valid in a non equilibrium
molecular dynamics simulation with the MACE poten-
tial. Given the potential range of models that could be
studied with MACE, we choose water at an interface with
Zirconium Oxide (zirconia); a well-studied oxide with ap-
plication in thermal barrier coatings, catalysis and to
prevent corrosion in zirconium alloys46.
The choice of
studied material is somewhat arbitrary here as the focus
in on obtaining stresses, and MACE requires no differing
assumption when modelling any of the 89 elements in its
training data.
The simulation uses the Atomic Simulation Envi-
ronment (ASE)52 with the MACE potential calculator.
The simulation setup is taken from Yang, Youssef, and
Yildiz 46 which studied an ab initio molecular dynam-
ics simulations of an interface between monoclinic phase
ZrO2 with the (1¯11) surface orientation at the interface
to the liquid water. This is shown in Figure 1 highlight-
ing the dimensions of the solid and liquid regions. This
particular surface was chosen as it is reported to have
the lowest energy orientation53. The simulation cell con-
tained 96 formula units of ZrO2, as in Yang, Youssef, and
Yildiz 46 but with the number of water molecules doubled
to give a slightly larger fluid region. The simulation cell
is triclinic, to accommodate the ZrO2 cell, with dimen-
sions 13.6˚A × 13.6˚A × 40.3˚A in x, y and z respectively.
The solid walls are about 7˚A at the top and bottom,
with periodic boundaries connecting all sides. The do-
main was split into 400 control volume slabs, each of size
∆z = 0.1˚A, with the momentum and energy in the vol-
ume collected along with the stress and heat flux on the
top and bottom planes of each volume. Unless otherwise
stated, all quantities in this work are given in the stan-
dard units for ASE, where amu = eV = ˚A = K = 1 and
times in the ASE time unit are t = ˚A
p
amu/eV which
means one ASE time unit is approximatly 10.2fs.
Atomic interactions were described using the MACE
machine-learned interatomic potential35.
A range of
models were tested, including mace-mpa-0-medium and
MACE-MATPES-PBE-0 run with a customised branch of the
Github MACE code edited to return pairwise forces54.
The model was run on a CUDA-enabled GPU (NVIDIA
4060Ti) using the NVIDIA cuEquivariance library for
acceleration55, run with pytorch 2.8, CUDA kernel 12.6
and CUDA Version 560.35.03.
The calculation of sur-
face crossings for the MoP calculation is accelerated using
Numba56.
The MACE-MATPES-PBE-0 version of MACE has no
Hubbard +U correction to match the setup of Yang,
Youssef, and Yildiz 46, and behaviour such as disassocia-
tion of water is observed near the surface. Such behaviour
is highly non-trivial to capture with classical models, but
for DFT and MACE systems, the water molecules are
held together purely by the MACE forces requiring no
explicit bonds. The electronic structure calculations un-
derpinning MACE training use the generalised gradient
approximation (GGA) with the Perdew-Burke-Ernzerhof
(PBE) functional, a ground state and relatively sim-
ple form of DFT. However, given the rapid evolution of
8
0
10
20
30
40
z
−0.3
−0.2
−0.1
0.0
0.1
0.2
0.3
Pressure
MoP
P K
z
MoP
P C
z
MoP
P z
IK
P K
z
IK
P C
z
IK
Pz
8
10
12
14
z
−0.06
−0.04
−0.02
0.00
0.02
0.04
Pressure
a)
b)
FIG. 2: Plot of spatial virial pressure compared to Method of Planes over 400 bins and 401 planes respectively. The
full channel is shown in a) plotted below a snapshot of corresponding molecular locations to demonstrate how
pressure changes from the solid ZrO2 to the liquid H2O regions. The zoomed in near-wall region is shown in b),
where the colours/legend is consistent in both plots. Plots are an average over 30,000 samples in time taken every 10
timesteps steps from MD simulations, with symmetry assumed to improve statistics in b), where averaging about the
centreline increases statistics to 60,000 samples. The kinetic pressure and configurational pressure must sum to zero
for ∂Pzz/∂z = 0 to be valid, seen as a flat green line for MoP but not for the virial sum (dotted black line) where
the measured pressure is biased by the location of the molecules.
both MACE and general ML style potentials, it is ex-
pected future potentials will continue to change as the
database and training become more extensive. This is
especially true as MACE-MP-0 is designed to be a foun-
dation model which is meant to be fine tuned using DFT
to explicitly cover specific case. These include improving
the cross interactions between the ZrO2 wall and water
atoms for example. Although Batatia et al. 3 shows wa-
ter interface demonstrate reasonable results, recent work
shows MACE does lose accuracy for interfaces57. How-
ever, Focassio, M. Freitas, and Schleder 57 also show that
fine tuning can improve a foundation models much faster
than starting training from scratch. The model does not
include dispersion terms and long range electrostatics are
not included.
However, the classical MoP formulation
has not been adapted for long range forces to the au-
thors knowledge, so this limitation would remain for QM
systems.
Yang, Youssef, and Yildiz 46 ran water at about 30K
above room temperature to compensate for the overstruc-
turing common in DFT simulations.
However, in the
MACE simulation at 330K temperature with an NVE en-
semble, considerable over structuring was still observed
in the water.
This shows minimal flow, instead form-
ing a kind of glassy crystal, even with the addition of
D3 dispersion corrections58. To force more fluid like be-
haviour, the simulations were performed at a tempera-
ture of 500K, which was seen to provide extensive move-
ment of the water atoms. This elevated temperature was
specifically selected to help correct for the known over-
binding error of density functional theory (DFT) when
simulating bulk water59 and ensure we can test the MoP
equations for water in the liquid phase. This also cor-
responds to the lower end of typical temperatures used
in nuclear reactors46, providing a physical justification
for the studied system. A Nose-Hoover thermostat was
used to elevate the entire system for an equilibration pe-
riod. The simulation was then restarted as an NVE with
all results collected during a constant energy run for the
production phase. For the runs to test the spatial varia-
tion of pressure, sufficient sampling is required so longer
runs of (300, 000 timesteps at ∆t = 0.5) with samples ev-
ery 10 timesteps collected giving 30, 000 samples for the
pressure. For the control volume conservation tests, only
very short runs of 40 time units were required with 80
timesteps at ∆t = 0.5 for momentum and 320 timesteps
at ∆t = 0.125 for energy.
IV.
RESULTS
Figure 2 shows the MoP pressure compared to
the virial pressure.
It has been documented in the
literature39,50 that using the virial form of pressure fails
9
to give a flat profile near walls. This must be an error,
as the condition for mechanical equilibrium ∇· P = 0 is
violated. This in turn implies the virial form of pressure
cannot be correct as the result of a non-zero mechanical
equilibrium would be a net flow, something not possible
for static fluid between two surfaces. These anomalous
peaks in pressure observed in the virial are simply a con-
sequence of assigning the pressure at the location of the
molecules, which tend to show density peaks near walls.
These artefacts appear due to the virial assumption of a
half-half split of pressure contribution from an interac-
tions, assigning these disproportional to the location of
molecules.
Instead, mechanical pressure should be de-
fined at an arbitrary plane in space, as in the original
definition of Cauchy. The MoP pressure show a flat green
line in Figure 2, which is an important validation as it
ensures the form of pressure still satisfies ∇· P = 0 de-
spite the arbitrary definition of a pairwise intermolecular
force fij from the multi-body MACE potential.
The other check that the measured pressure is mean-
ingful is demonstrated in Figure 3a). This control volume
check ensures that d/dt
R
v ρudV =
H
P · dS, that is the
forces fij which are summed over a plane to give P in
Eq. (17) exactly determine the change of momentum of
the molecules. This is shown to machine precision in Fig-
ure 3, where the sum of the kinetic spikes as molecules
cross plus StressMOP K the total summed fij contribu-
tions
MoP
P
C from all molecules are equal to the time evo-
lution. Again, the arbitrary nature of the pairwise force
mean this check is useful, showing the measured MoP
pressure satisfies the continuum momentum equations
exactly.
In fact, variants of this force decomposition,
including fij = −∂Ui/∂rj and even fij = −∂U/∂rij
without transpose which does not satisfy Newton’s 3rd
law were tested and also demonstrate momentum conser-
vation.
The two definition of energy using the pairwise force
from
Eq. (24) and the definition from Langer et al. 31
derived to gives MACE energy conservation
Eq. (23)
were also tested in Figure 3b).
It is clearly seen that
the from from
Eq. (23) using ∂Ui/∂ri provides much
better energy conservation. The energy plot is run at a
lower timestep than the momentum example, as the en-
ergy conservation in MD is only approximate, conserving
a shadow Hamiltonian60, but with an error that decreases
as timestep decreases. For the case of Eq. (23), the error
decreases with timestep, whereas for the pairwise equa-
tion from Eq. (24) this error remains in the limit of small
timestep, indicating a systematic issue. The difference
between
]
MoP
J Cz+ and d/dt
R
V ρEdV does not follow a clear
pattern, with sometimes under prediction (t ∼4), some-
times over prediction (t ∼19) and even close agreement
(around times 6-7 or 14)
The energy in a control volume are a function of the
work done on the atoms from outside the cell; with slow
changes due to forces and sudden peaks due to energy
carried into or our of the the volume by atomic surface
crossings. These peaks have magnitudes around 500, as
can be seen from the inset in Figure 3b), so the errors in
energy conservation observed during crossings are rela-
tively small. These may also be a consequence of the work
done calculation being assigned to bins based on atom
position at the start of the timestep, whereas in practice,
the crossing happening at some point in the timestep, so
the work done will be split between volumes. The agree-
ment between work done and the energy change some-
times shows a discrepancy in the forcing terms, which can
be observed in Figure 3b) at times 16 to 17. This can also
be seen in the sum at these same times, perhaps showing
additional error which occurs when multiple atoms are
interacting in a cell. Despite this minor differences, both
momentum and energy conservation validate the forms
of stress and heat flux respectively.
V.
CONCLUSIONS
This work has shown the MACE potential can be used
for non equilibrium molecular dynamics (NEMD) sim-
ulation with pressure measurements obtained using the
method of planes (MoP). These MoP pressure measure-
ments are shown to satisfy the static balance near a
liquid-solid interface (a condition where the widely used
virial fails) as well as exactly satisfying the momentum
and energy control volume equations. This conservation
demonstrates the MoP pressure is valid arbitrarily far
from equilibrium, which means these tools can be di-
rectly used in studying any fluid system with MACE.
The definition and correct form of pressure is a source of
no small controversy in the literature29. As a non-unique
quantity40, this has resulted in ambiguity that leads to
the use of incorrect or inappropriate definitions. As we
move to ML potentials which layer even more ambiguity,
including the definition of an effective pairwise force in
a graph network, it is useful to try to establish a firmer
foundation for the vital quantities like pressure and heat
flux The form of pressure presented in Eq. (17) and heat
flux in
Eq. (23) have been shown to conserve momen-
tum and energy every timestep in volumes thin enough
to often only have a single atom. As a result, these vali-
date the MoP forms and the resulting conservation checks
should form an essential part of NEMD validation in the
ML age. The code to run these cases in ASE with MACE
is provided with this work as a template for developing
these checks. The potential for machine learning (ML) to
include quantum mechanics (QM) details into molecular
fluid simulation are vast. Many industrial problems stud-
ied with NEMD require pressure or heat flux, for exam-
ple to get slip lengths, heat transfer, solid-liquid interface
traction, local visco-elastic effects or liquid-vapour inter-
facial tension. This work uses a non-trivial fluid and solid
case of Zirconium dioxide (zirconia) and water to demon-
strate the MoP method remains applicable in obtaining
both the pressure and heat flux. As the MACE form of
potential allows any combination of elements to be mod-
10
0
5
10
15
20
t
−20
−10
0
10
20
Momentum
MoP
P K
z
MoP
P C
z
d
dt
R
V ρudV
Sum
0
5
10
15
20
t
−1.0
−0.5
0.0
0.5
1.0
Energy
MoP
J K
z
MoP
J C
z
g
MoP
J Cz
d
dt
R
V ρEdV
Sum
0
20
−500
0
500
a)
b)
FIG. 3: Control volume conservation plots, a) is momentum conservation for a simulation with ∆t = 0.5, where the
measured configurational forces and momentum fluxes are equal to momentum change, shown by the sum which is
zero to machine precision. b) Energy, at smaller timestep ∆t = 0.125 so crossings are shown as arrows and both
pairwise force are compared: fij · pj
mj and dUi
drj · pj
mj . Sum is based on the more accurate dUi
drj · pj
mj form. Insert shows
full scale, with much larger crossings as magnitude →∞as ∆t →0. The plot is for volume number 211, roughly in
the middle of the channel, although similar plots can be obtained for any volume.
elled and the presented derivation makes no assumptions
limiting element type or chemistry61, this work shows
that NEMD and MACE together have the potential to
open a new frontier of molecular fluid dynamics mod-
elling.
ACKNOWLEDGEMENTS
The author is grateful to the UK Materials and
Molecular Modelling Hub for computational resources,
which is partially funded by EPSRC (EP/T022213/1,
EP/W032260/1 and EP/P020194/1))
DATA AVAILABILITY
All code used for this project is made available on the
authors GitHub released under a GPL-3.0 license with
persistent URL uploaded to zenodo or similar upon fi-
nal publishing. This includes numba accelerated code to
calculate the MoP in ASE, which could be extended for
other systems and even ML potentials.
1J. H. Irving and J. G. Kirkwood, “The statistical mechanics the-
ory of transport processes. iv. the equations of hydrodynamics,”
J. Chemical. Phys. 18 - 6, 817–829 (1950).
2V.
Eyert,
J.
Wormald,
W.
A.
Curtin,
and
E.
Wimmer,
“Machine-learned interatomic potentials: Recent developments
and prospective applications,” Journal of Materials Research 38,
5079–5094 (2023).
3I. Batatia, P. Benner, Y. Chiang, A. M. Elena, D. P. Kov´acs,
J. Riebesell, X. R. Advincula, M. Asta, W. J. Baldwin, N. Bern-
stein, A. Bhowmik, S. M. Blau, V. C˘arare, J. P. Darby, S. De,
F. D. Pia, V. L. Deringer, R. Elijoˇsius, Z. El-Machachi, E. Fako,
A. C. Ferrari, A. Genreith-Schriever, J. George, R. E. A. Goodall,
C. P. Grey, S. Han, W. Handley, H. H. Heenen, K. Hermans-
son, C. Holm, J. Jaafar, S. Hofmann, K. S. Jakob, H. Jung,
V. Kapil, A. D. Kaplan, N. Karimitari, N. Kroupa, J. Kull-
gren, M. C. Kuner, D. Kuryla, G. Liepuoniute, J. T. Mar-
graf, I.-B. Magd˘au, A. Michaelides, J. H. Moore, A. A. Naik,
S. P. Niblett, S. W. Norwood, N. O’Neill, C. Ortner, K. A.
Persson, K. Reuter, A. S. Rosen, L. L. Schaaf, C. Schran,
E. Sivonxay, T. K. Stenczel, V. Svahn, C. Sutton, C. van der
Oord, E. Varga-Umbrich, T. Vegge, M. Vondr´ak, Y. Wang,
W. C. Witt, F. Zills, and G. Cs´anyi, “A foundation model for
atomistic materials chemistry,” arXiv preprint arXiv
(2023),
arXiv:2401.00096 [physics.chem-ph].
4D. P. Kov´acs, I. Batatia, E. S. Arany, and G. Cs´anyi, “Evaluation
of the mace force field architecture: From medicinal chemistry to
materials science,” The Journal of Chemical Physics 159, 044118
(2023).
5R. Drautz, M. F¨ahnle, and J. M. Sanchez, “General relations
between many-body potentials and cluster expansions in multi-
component systems,” J. Phys. Condens. Matter 16, 3843 (2004).
6M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. Cohen,
“3d steerable cnns: Learning rotationally equivariant features in
volumetric data,” (2018), arXiv:1807.02547 [cs.LG].
7R. Kondor, Z. Lin, and S. Trivedi, “Clebsch-gordan nets: a fully
fourier space spherical convolutional neural network,”
(2018),
11
arXiv:1806.09231 [stat.ML].
8N. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff,
and P. Riley, “Tensor field networks: Rotation- and translation-
equivariant neural networks for 3d point clouds,”
(2018),
arXiv:1802.08219 [cs.LG].
9M. Geiger, T. Smidt, A. M., B. K. Miller, W. Boomsma,
B. Dice, K. Lapchevskyi, M. Weiler, M. Tyszkiewicz, S. Batzner,
D. Madisetti, M. Uhrin, J. Frellsen, N. Jung, S. Sanborn, M. Wen,
J. Rackers, M. Rød, and M. Bailey, “Euclidean neural networks:
e3nn,” (2022).
10S. Batzner, A. Musaelian, L. Sun, et al., “E(3)-equivariant graph
neural networks for data-efficient and accurate interatomic po-
tentials,” Nature Communications 13, 2453 (2022).
11The 1 and 3 were found to be less favourable in terms of accuracy
vs. computational cost. As MACE uses a more complicated local
potential with ACE, it requires fewer hops than other message
passing approaches.
12B. Deng, P. Zhong, K. Jun, J. Riebesell, K. Han, C. J. Bartel,
and G. Ceder, “Chgnet as a pretrained universal neural network
potential for charge-informed atomistic modelling,” Nature Ma-
chine Intelligence , 1–11 (2023).
13L. Barroso-Luque, M. Shuaibi, X. Fu, B. M. Wood, M. Dzamba,
M. Gao, A. Rizvi, C. L. Zitnick, and Z. W. Ulissi, “Open ma-
terials 2024 (omat24) inorganic materials dataset and models,”
(2024), arXiv:2410.12771 [cond-mat.mtrl-sci].
14T. P. Senftle, S. Hong, M. M. Islam, S. B. Kylasa, Y. Zheng,
Y. K. Shin, C. Junkermeier, R. Engel-Herbert, M. J. Janik, H. M.
Aktulga, T. Verstraelen, A. Grama, and A. C. T. van Duin, “The
reaxff reactive force-field: development, applications and future
directions,” npj Computational Materials 2, 15011 (2016).
15B. D. Todd and P. J. Daivis, Nonequilibrium Molecular Dynam-
ics: Theory, Algorithms and Applications (Cambridge Univer-
sity Press, 2017).
16D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequi-
librium Liquids, 2nd ed. (ANU Press, 2007).
17J. P. Ewen, C. Gattinoni, J. Zhang, D. M. Heyes, H. A. Spikes,
and D. Dini, “On the effect of confined fluid molecular struc-
ture on nonequilibrium phase behaviour and friction,” Physical
Chemistry Chemical Physics 19, 17883–17894 (2017).
18S. K. Kannam, B. D. Todd, J. S. Hansen, and P. J. Daivis, “Slip
length of water on graphene:
Limitations of non-equilibrium
molecular dynamics simulations,” Journal of Chemical Physics
136, 024705 (2012).
19D. Williams, Z. Wei, M. R. b. Shaharudin, and P. Carbone,
“A molecular simulation study into the stability of hydrated
graphene nanochannels used in nanofluidics devices,” Nanoscale
14, 3467–3479 (2022).
20A. P. Thompson and M. O. Robbins, “Simulations of contact-line
motion: Slip and the dynamic contact angle,” Physical Review
Letters 63, 766–769 (1989).
21T. Qian, X.-P. Wang, and P. Sheng, “Molecular hydrodynam-
ics of the moving contact line in two-phase immiscible flows,”
Communications in Computational Physics 1, 1–52 (2005),
arXiv:cond-mat/0510403.
22J. G. Kirkwood and F. P. Buff, “The statistical mechanical theory
of surface tension,” The Journal of Chemical Physics 17, 338–343
(1949).
23M. S. Green, “Markoff random processes and the statistical me-
chanics of time-dependent phenomena. ii. irreversible processes
in fluids,” Journal of Chemical Physics 22, 398–413 (1954).
24R. Kubo, “Statistical-mechanical theory of irreversible processes.
i. general theory and simple applications to magnetic and con-
duction problems,” Journal of the Physical Society of Japan 12,
570–586 (1957).
25S. T. O’Connell and P. A. Thompson, “Molecular dynamics-
continuum hybrid computations: A tool for studying complex
fluid flows,” Physical Review E 52, R5792–R5795 (1995).
26M. Mohamed and A. A. Mohamad, “A review of the develop-
ment of hybrid atomistic-continuum methods for dense fluids,”
Microfluidics and Nanofluidics 8, 283–302 (2010).
27E. G. Flekkøy, G. Wagner, and J. Feder, “Hybrid model for com-
bined particle and continuum dynamics,” Europhysics Letters
52, 271–276 (2000).
28D. Davydov, J.-P. Pelteret, and P. Steinmann, “Comparison
of several staggered atomistic-to-continuum concurrent coupling
strategies,” Computer Methods in Applied Mechanics and Engi-
neering 277, 260–280 (2014).
29K.
Shi,
E.
R.
Smith,
E.
E.
Santiso,
and
K.
E.
Gub-
bins,
“A
perspective
on
the
microscopic
pressure
(stress)
tensor:
History,
current
understanding,
and
future
challenges,”
The
Journal
of
Chemical
Physics
158,
040901
(2023),
https://pubs.aip.org/aip/jcp/article-
pdf/doi/10.1063/5.0132487/20004255/040901 1 5.0132487.pdf.
30M. F. Langer, J. T. Frank, and F. Knoop, “Stress and heat flux
via automatic differentiation,” arXiv preprint arXiv:2305.01401
(2023), arXiv:2305.01401.
31M. F. Langer, F. Knoop, C. Carbogno, M. Scheffler, and
M. Rupp, “Heat flux for semilocal machine-learning potentials,”
Phys. Rev. B 108, L100302 (2023).
32H. Goldstein, C. Poole, and J. Safko, Classical Mechanics, 3rd
ed. (Addison Wesley, 2002).
33A. D. Kaplan, R. Liu, J. Qi, T. W. Ko, B. Deng, J. Riebe-
sell, G. Ceder, K. A. Persson, and S. P. Ong, “A foundational
potential energy surface dataset for materials,” arXiv preprint
arXiv:2503.04070 (2025), 10.48550/arXiv.2503.04070.
34N.
C.
Admal
and
E.
B.
Tadmor,
“Stress
and
heat
flux
for
arbitrary
multibody
potentials:
A
unified
framework,”
The
Journal
of
Chemical
Physics
134,
184106
(2011),
https://pubs.aip.org/aip/jcp/article-
pdf/doi/10.1063/1.3582905/14061374/184106 1 online.pdf.
35I. Batatia, D. P. Kovacs, C. Ortner, G. Cs´anyi, and V. L. De-
ringer, “Mace: Higher order equivariant message passing neu-
ral networks for fast and accurate force fields,” arXiv preprint
arXiv:2206.07697 (2022), arXiv:2206.07697.
36Z. Fan, L. F. C. Pereira, H. Wang, J. Zheng, D. Donadio, and
A. Harju, “Force and heat current formulas for many-body poten-
tials in molecular dynamics simulation with applications to ther-
mal conductivity calculations,” Physical Review B 92, 094301
(2015).
37N. C. Admal and E. B. Tadmor, “A unified interpretation of
stress in molecular systems,” Journal of Elasticity 100, 63–143
(2010), arXiv:1008.4819.
38D. J. Evans and G. P. Morris, Statistical Mechanics of Non-
Equilibrium Liquids, 2nd ed. (Australian National University
Press, 2007).
39B. Todd, D. Evans, and P. Daivis, “Pressure tensor for inhomoge-
nous fluids,” Physical Review E 52(2), 1627–1638 (1995).
40P. Schofield and J. R. Henderson, “Statistical Mechanics of In-
homogeneous Fluids,” Proc. R. Soc. Lond. A 379, 231 (1982).
41W. Noll, “Die herleitung der grundgleichungen der thermo-
mechanik der kontinua aus der statistischen mechanik,” Journal
of Rational Mechanics and Analysis 4, 627–646 (1955).
42D. R. J. Monaghan and G. P. Morriss, “Microscopic study of
steady convective flow in periodic systems,” Physical Review E
56, 476 (1997).
43E. R. Smith, D. M. Heyes, D. Dini, and T. A. Zaki, “Control-
volume representation of molecular dynamics,” Phys. Rev. E. 85,
056705 (2012).
44J. Cormier, J. M. Rickman, and T. J. Delph, “Stress calculation
in atomistic simulations of perfect and imperfect solids,” J. Appl.
Phys. 89, 99 (2001).
45R. J. Hardy, “Formulas for determining local properties in molec-
ular dynamics simulations: Shock waves,” J. Chem. Phys 76(1),
622–628 (1982).
46J. Yang, M. Youssef, and B. Yildiz, “Structure, kinetics, and ther-
modynamics of water and its ions at the interface with monoclinic
zro2 resolved via ab initio molecular dynamics,” The Journal of
Physical Chemistry C 125, 15233–15242 (2021).
47E. N. Parker, “Tensor Virial Equations,” Phys. Rev. 96, 1686
(1954).
12
48S. Plimpton, P. Crozier, and A. Thompson, LAMMPS Users
Manual - Large-scale Atomic/Molecular Massively Parallel Sim-
ulator, http://lammps.sandia.gov, 7th ed. (2003).
49M. Han and J. S. Lee, “Method for calculating the heat and
momentum fluxes of inhomogeneous fluids,” Phys. Rev. E 70,
061205 (2004).
50D. Heyes, E. Smith, D. Dini, and T. Zaki, “The equivalence be-
tween volume averaging and method of planes definitions of the
pressure tensor at a plane,” The Journal of chemical physics 135,
024512 (2011).
51E. R. Smith, P. J. Daivis, and B. D. Todd, “Measuring heat flux
beyond fourier’s law,” The Journal of Chemical Physics 150,
064103 (2019).
52A. H. Larsen, J. J. Mortensen, J. Blomqvist, I. E. Castelli,
R. Christensen, M. Du lak, J. Friis, M. N. Groves, B. Hammer,
C. Hargus, E. D. Hermes, P. C. Jennings, P. B. Jensen, J. Ker-
mode, J. R. Kitchin, E. L. Kolsbjerg, J. Kubal, K. Kaasbjerg,
S. Lysgaard, J. B. Maronsson, T. Maxson, T. Olsen, L. Pastewka,
A. Peterson, C. Rostgaard, J. Schiøtz, O. Sch¨utt, M. Strange,
K. S. Thygesen, T. Vegge, L. Vilhelmsen, M. Walter, Z. Zeng, and
K. W. Jacobsen, “The atomic simulation environment—a python
library for working with atoms,” Journal of Physics: Condensed
Matter 29, 273002 (2017).
53A. Christensen and E. A. Carter, “First-principles study of the
surfaces of zirconia,” Phys. Rev. B 58, 8050–8064 (1998).
54Branched at commit e2842aadd15a57580e8564b050bd6b4eee852b7d
with
full
code
with
all
changes
at
https://github.com/edwardsmith999/mace/.
55NVIDIA, “cuequivariance,” (2025).
56S. K. Lam, A. Pitrou, and S. Seibert, “Numba: a llvm-based
python jit compiler,” in Proceedings of the Second Workshop on
the LLVM Compiler Infrastructure in HPC, LLVM ’15 (Associ-
ation for Computing Machinery, New York, NY, USA, 2015).
57B. Focassio, L. P. M. Freitas, and G. R. Schleder, “Performance
assessment of universal machine learning interatomic potentials:
Challenges and directions for materials’ surfaces,” ACS Applied
Materials & Interfaces 17, 13111–13121 (2025), pMID: 38990833.
58S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, “A consistent
and accurate ab initio parametrization of density functional dis-
persion correction (dft-d) for the 94 elements h-pu,” The Journal
of Chemical Physics 132, 154104 (2010).
59C. Lee, H. Chen, and G. Fitzgerald, “Structures of the water hex-
amer using density functional methods,” The Journal of Chemi-
cal Physics 101, 4472–4473 (1994).
60K. D. Hammonds and D. M. Heyes, “Shadow hamiltonian in
classical nve molecular dynamics simulations involving coulomb
interactions,” The Journal of Chemical Physics 154, 174102
(2021).
61With the note that long range interactions are currently not in-
cluded in MACE.
|
The Quantum Method of Planes - Local Pressure Definitions for Machine Learning Potentials E. R. Smitha) Brunel 8 3PH (Dated: 23 September 2025) Stress or Pressure is a central quantity in engineering, and remains vital in molecular modelling. However, the commonly used virial stress tensor is not valid away from thermodynamic equilibrium, a common state required in fluid dynamics and non-equilibrium molecular dynamics (NEMD) simulation. This is solved by using the method of planes (MoP), a mechanical form of pressure, simply interpreted as the force divided by area but derived from the firm foundations of statistical mechanics. We present an extension of MoP stress to the MACE potential, a particular form of machine learning (ML) potentials allowing quantum mechanical (QM) physics in classical simulation. We present the derivation of the MoP stress for the MACE potential using the theoretical framework set out by Irving and Kirkwood 1. For the testcase of an interface between water and Zirconium Oxide, we show the MoP measures the correct force balance while the virial form fails. Further, we demonstrate the MoP is valid arbitrarily far from equilibrium, showing exact conservation every timestep in a control volume bounded by MoP planes. This links the stress directly to the conservation equations and demonstrates the validity in non equilibrium molecular dynamics systems. All code to reproduce these validations for any MACE system, together with ASE accelerated code to calculate the MoP are provided as open source. This work helps build the foundation to extend the ML revolution in materials to NEMD and molecular fluid dynamics modelling. I. INTRODUCTION Material science is undergoing a revolution, with a new generation of machine learning (ML) potentials allowing classical models to incorporate quantum Density Functional Theory (DFT) level accuracy at speeds traditionally closer to classical molecular dynamics2. One promising candidate for fast and accurate simulation is the MACE-MP-0 family of models3, which are trained on an extensive material database covering the periodic table and giving good results for cases outside of this dataset, including the mixing of elements. The MACE model is particularly efficient and promising because, rather than a purely data driven approach, that simply feeds vast amounts of data to an increasing deep neural network, it uses aspects of the physics to simplify the required network and training4. This speeds up inference, the force calculation in MD, to speeds approaching classical MD when run on modern GPU architectures. The symmetries of common molecular configurations during interaction are build in through the atomic cluster expansion (ACE)5, which uses symmetries to reduce the number of different molecules and configurations that need to be remembered by a network. This can be understood as analogous to machine vision, where recognising a picture of a cat, even when shifted or rotated, is still the same picture. In molecules, this can be done using symmetries through the E3NN library6-9. The use of ACE allows high body order5, the number of many-body interactions considered (in MACE set to a bond order of four locally). Inspired by leading results in message passing potentials a)Electronic mail: like NequIP10, this ACE approach is then embedded in a message passing framework. The required cutoff length of the ACE molecular interactions, rc, is reduced using this message passing approach (the M in MACE), which splits multi-body interactions into a graph network so only energetically important connections or "edges" are required. The number of hops taken is M = 2 on the MACE-MP-0 version used in this work.11. A machine learning process is then applied to work out which connections are essential and this process means each 4 body local calculation is connected by graph edges to M other atoms, each with four body interactions given by the ACE model. The resulting potential is then effectively 13 body order and an effective cutoff length M ×rc. As a result, this performs very well in learning DFT results while allowing fast simulations. With the simplifications of M and ACE, a deep neural network is then trained on the MPtraj12 and sAlex13 database, a vast crystal database covering many materials . The paper has been demonstrated to give good results on a vast range of cases3 This has since been extended to include other databases, charges and fine tunes since initial publications, a process which will doubtlessly accelerate. However, the presented work is not tied to the database of this foundation model, which has already been fine-tuned and improved since initial publication using wider DFT databases. Such finetuning also has the ability to improve properties DFT typically struggles to reproduce, from dispersion, longrange interactions to the over structuring of water. Instead, the derivation in this work focuses on the base architecture to consider the general behaviour allowing these models to continue to improve and allowing use of the most up to date model. As a result of the potential revolutionary nature of ML potentials, and the already significant improvement 17 Sep 2025 2 offered by the MACE potential in materials modelling, the solid simulation community has fully embraced these models (for example the range of authors on Batatia et al. 3). However, the potential of ML for fluid dynamics problems, known as non equilibrium molecular dynamics (NEMD), seems to have received far less attention. Modelling of the interface between a liquid and a surface is often a very complex problem that would greatly benefit from ML modelling. On surface with chemical reactions such as rusting or wear, to catalysts or dissociation of water these are difficult to capture with classical model like the ReaxFF reactive force-field14. Cooling of heat exchanges, nuclear reactors or semiconductors all require molecular interface modelling, while lubrication and tribology are intertwined with the molecular chemistry of the interface. Using ML models to capture these effects has the potential to revolutionise NEMD modelling of interfaces. NEMD is characterised by a series of techniques to study fluid flows, including thermostats to control temperature or forcing routines to enforce shear flow15. A process which is a form of molecular fluid dynamics, NEMD provides methods to get quantities of great important to fluid flow modelling, including density, velocity, stress and heat flux15,16. Of particular importance for many cases in fluid dynamics are the stress tensor and heat flux vectors, vital for rheology, tribology17, slip length18,19, the moving three-phase contact line20,21, surface tension22, viscosity and thermal transfer23,24. These molecular details can be included in continuum-based engineering simulation, such as computational fluid dynamics (CFD), through coupled simulation25,26 where stress coupling is useful in both fluid simulation27 and for solid mechanics28. Getting the stress and heat flux has a long history in the literature29. Two key observations are important, stress is non-unique and certain forms of stress are incorrect for inhomogeneous systems. Note stress and pressure are used interchangable in this work, with one the negative of the other. For the NEMD cases, a stress and heat flux that is valid away from thermodynamic equilibrium is essential. Recent work by Langer, Frank, and Knoop 30,31, demonstrates that stress is possible to obtain for a whole periodic system by applying a finite strain. The current work extends this by starting from the foundations of Irving and Kirkwood 1 to derive a form of local stress and heat flux. In this process, it can be used to show that the virial form of stress and heat flux, the default option in many MD packages, are an oversimplification that lead to errors away from equilibrium, a well know result in the classical literature extended here to these ML potentials. Instead, the Method of Planes MoP stress has advantages as it can be derived directly from the Irving and Kirkwood 1 and shown to be linked exactly to a control volume form of the conservation equations. This link to a conservative form provides the validation that the stress defined is meaningful and correct. The MoP also has the advantage it is the most fundamental definition of stress, the force divided by area, and can be calculated relatively easily in MD. As a result, this work will derive a usable stress for the MACE potential and demonstrate conservation of momentum and energy in a control volume. More generally, this treatment will work for any machine learning potential that can be expressed as a pairwise force, the general class of message passing neural networls. The remainder of the work is as follows, in section II a derivation of the MoP stress and heat flux for an ACE style potential is presented. Next, the methodology of the molecular dynamics used to simulate a non-trivail case of ZrO2 and water is given section III. The results of applying the MoP stress in this system is shown in section IV including a demonstration of the conservation of momentum in a control volume and the failure of the virial form of stress. Finally, some brief conclusions are given in section V. II. THEORY In classical physics, the force F = -∇U(r), is given by the negative gradient of potential U with respect to position r only if we have a conservative potential32. In classical molecular dynamics we have a system of pointwise atoms so only at the positions of these atoms ri can forces exist. Then the force on atom i is, Fi = -∂U ∂ri (1) with the total forces in an MD system satisfying the condition PN i=1 Fi = 0. The true picture in a quantum system is more complex with wavefunctions defining a continuous energy throughout space, reduced by ML to a graph network. It is interesting to note that the existence of a conservative potential, while generally satisfied by just training for energy only, is better enforced by explicit inclusion of force in the training process. The training process of MACE aims to match both total potential energy U and forces Fi to DFT data4, by minimising the function, L = λE B B X b=1 " Ub - ̃Ub Nb #2 + λF 3B B X b=1 Nb X i=1 -∂Ub ∂ri - ̃Fib 2 (2) where the index b denotes the sum over the B total batches used in the training process (here originally for MACE-MP-0 the 89 elements in the MATPES-PBE dataset33). The ̃Ub and ̃Fib are the potential energies and forces from batch b of the DFT data respectively. The lagrangian multipliers λE and λF are chosen to enforce the relative importance of the two contributions, where it was found an initial training using λE >> λF before switching to λE << λF with a smaller step gave good agreement for energy and forces4. 3 A. Intermolecular Forces Admal and Tadmor 34 show all many-body forces can be written in a pairwise manner provided the potentials are continuous. For classical potentials, for example the Lennard-Jones potential, the continuity is true due to the well define mathematical function of the potential, i.e. U(r) = r-12 -r-6. The MACE model follows the general framework of message passing neural networks (MPNNs)35, a type of graph-based machine-learning potential. As a result these MPNN are continuous and can use automatic differentiation (AD) to define gradients, so can be expressed in terms of pairwise forces30. The message passing uses ACE, which is also construted from a set of continuous functions expressed in a pairwise manner. For machine learning potentials, in particular the ACE potential used here, the internal energy is typically defined using set notation4,30,35 U = U ({rij | rij < rc}) to denote all possible pairwise interactions between all molecules within cutoff rc. Taking the derivative of this potential proceeds as follows, ∂U ∂ri = ∂U ∂ri1 · ∂ri1 ∂ri + ∂U ∂ri2 · ∂ri2 ∂ri · · · + ∂U ∂riN · ∂riN ∂ri = N X j̸=i ∂U ∂rij · ∂rij ∂ri = N X j̸=i ∂U ∂rij · ∂rij ∂ri + ∂U ∂rji · ∂rji ∂ri = N X j̸=i ∂U ∂rij -∂U ∂rji (3) Where the final line is obtained by applying the chain rule to compute the force on atom i: Using rij = ri -rj so derivatives w.r.t to ri equal 1 (-1 for rji). We can then define the expression in Eq. (3) as the antisymmetric pairwise force fij, fij def = ∂U ∂rij -∂U ∂rji (4) The total force on atom i is then simply Fi = P j̸=i fij With a cutoff, the sum is only over the molecules within the limit of this cutoff, often denoted by set notation in the ML literature30, e.g. N(i) = {j ̸= i|rij < rc}. For more complex message passing cases, it can occur that interactions N(i) includes a given j where i is not in the set of j's interactions N(j). As a result, ∂U/∂rij ̸= -∂U/∂rji. However, the definition of Eq. (4) guarantees fij = -fji and so satisfies Newton's third law. In a graph neural network, molecules are at the nodes while the connections between then, called the graph edges, conceptually correspond to the intermolecular interactions. Note that in the work of Langer, Frank, and Knoop 30 they explicitly denotes that rij applies the minimum image convention which is dropped here for notational simplicity assuming halos; ghost atom copies of the domain to enforce periodic boundaries. The definition of pairwise force in Eq. (4) follows directly from the assumption U is a function of pairwise forces only. Had we assumed it was a function of individual particle positions or three body terms we could have derived a more complex series of interactions. Fan et al. 36 show the three body potentials including Tersoff, Brenner, Stillinger-Weber and a general many-body potential as simply fij = ∂Ui/∂rij -∂Uj/∂rji where Ui is the energy at the point location of particle i. The assumption of individual atomic energy U = PN i=1 Ui in ∂U/∂ri can also be used to define an alternative version of pairwise forces as, fij def = ∂Uj/∂ri (5) This version of force is shown to be the required form to conserve energy by Langer et al. 31 and demonstrated to work locally in the section IV, although it will not satisfy Newton's 3rd law in general as ∂Uj/∂ri ̸= ∂Ui/∂rj . Any choice of pairwise force is not unique. The MACE potential used in this work is based on an effective body order of 13, so even assuming forces are a many-body hierarchy is a simplification of the force calculation. The assumption here of using pairwise forces is therefore an even more significant simplification of the true dynamics. However, the resulting pairwise force still remains more complex than a pairwise analytical potential such as the Lennard Jones, which is symmetric by construction and satisfies ∂U/∂rij = ∂U/∂rij ̃rij with ̃rij the unit vector between i and j. Admal and Tadmor 37 state this difference succinctly; noting pairwise forces can be physically interpreted as the "force exerted on particle i by particle j", whereas the more general interaction force fij can be interpreted as the "contribution to the force on particle i due to the presence of particle j". In the latter case, Newton's 3rd law is not required and this is the default situation for a machine learning potential like MACE unless a definition like Eq. (4) specifically has this symmetry. It is instructive to consider the pseudo code used to gets fij from a graph neural network by autodifferentiation, which proceeds as follows. #Get arrays with all interacting atoms stored #in identical sized arrays of corresponding #sender/reciever indices sender, receiver = neighbour_list(r, cutoff) #Positions of ri and rj at the end of every #intermolecular interaction or graph "edge" #give the rij vectors rij = r[receiver] - r[sender] #Taking gradient w.r.t vector rij dUdrij = autograd.grad(energy, rij) #Reshape n_edges to n_nodes^2 fij[sender, receiver, :] = -dUdrij 4 #Apply anti-symmetry operation fij[:,:,0] = fij[:,:,0] - fij[:,:,0].T fij[:,:,1] = fij[:,:,1] - fij[:,:,1].T fij[:,:,2] = fij[:,:,2] - fij[:,:,2].T The process of putting all pairwise forces into an N by N matrix reduces computational efficiency gains provided by the MPNNs architecture, which is designed to only include relevant interactions. To improve this approach, a sparse matrix could be used for fij or the algorithm used with only the sender and receiver MPNNs interactions. However, the above code is reasonable for small systems given the force calculation is often the major component in simulation cost and obtaining stresses is undertaken only as an intermittent exercise. Generally in NEMD we allow the system to evolve sufficiently that the measured stress is statistically independent of previous values. The stress autocorrelation time of the system is usually a sensible value for the sample frequency as it ensures stress measurements are statistically uncorrelated. In section II B we use the form of inter-molecular force in Eq. (4) to determine the pressure tensor B. Momentum Equation We use the definition of forces from Eq. (4) in the derivation of pressure following the statistical mechanical process of Irving and Kirkwood 1. Assuming phase space is bounded, Irving and Kirkwood 1 obtain an expression for the evolution in time of expected value α, ∂ ∂t α; f = N X i=1 Fi · ∂α ∂pi + pi mi · ∂α ∂ri ; f , (6) By letting α = P piδ(ri -r), Irving and Kirkwood 1 define the momentum density at a point in space, ρu def = N X i=1 piδ(ri -r); f , (7) The time evolution of momentum is used to obtain the pressure tensor by taking the time derivative of both sides of Eq. (7) and applying Eq. (6), ∂ ∂tρu = ∂ ∂t N X i=1 piδ(ri -r); f = N X i=1 Fiδ(ri -r) -∂ ∂r · pipi mi · δ(ri -r); f (8) The second term is the kinetic part of the pressure tensor P k, defined using ∂/∂riδ(r -ri) = -∂/∂rδ(r -ri) and subtracting streaming velocity. As pi is the momentum in the laboratory frame we can denote pi as the peculiar value which excludes the macroscopic streaming term u(ri) at the location of molecule i38. The kinetic pressure integrated over a volume is R V P kdV = ⟨pipi mi θi; f⟩where the function θi is the integral of a Dirac delta function, only non-zero for a molecule inside the volume. This term is identical in both classical and MACE systems so is not considered further, instead we turn our attention to the configurational pressure. We aim to express the forcing term of Eq. (8) as the divergence of a pressure tensor ∇· P c. To do this, Irving and Kirkwood 1 use the assumption of Newton's 3rd law, which we can also invoke due to the definition of Eq. (4), to get the difference of two Dirac delta N X i=1 Fiδ(ri -r); f = N X i=1 ∂U ∂ri δ(ri -r); f = 1 2 N X i=1 ∂U ∂ri δ(ri -r) + N X j=1 ∂U ∂rj δ(rj -r); f = 1 2 N X i=1 N X j̸=i ∂U ∂rij -∂U ∂rji δ(ri -r) + N X j=1 N X i̸=j ∂U ∂rji -∂U ∂rij δ(rj -r); f = 1 2 N X i=1 N X j̸=i fijδ(ri -r) + fjiδ(rj -r); f = 1 2 N X i=1 N X j̸=i fij [δ(ri -r) -δ(rj -r)] ; f (9) At this point, a useful insight can be obtained from the integral over a volume of the difference between the two Dirac Delta, θij = θi -θj, which is only non zero when one molecule is in the volume and the other is outside. In this form, the balance on an arbitrary volume can be used to check momentum conservation where surface forces equal internal momentum change (given the absence of any atoms crossing the surface). Stress is a more useful form, so the usual manipulations can be made, including the slightly tenuous Taylor expansion of two Dirac delta functionals from Irving and Kirkwood 1, to give Oij the so-called IK operator as an expansion in delta functions39, or the more useful form obtained by rewriting the integral along a (non unique40) contour41,42, δ(ri -r) -δ(rj -r) = -rij · ∂ ∂r Oijδ(ri -r) = I ∂ ∂l· δ (r -ri -l) dl = ∂ ∂r · I δ (r -ri -l) dl. (10) We can go further and simply assume a straight line, often called the IK contour which is consistent with Newton's assumption of impressed force between two points, I δ (r -ri -l) dl≈rij Z 1 0 δ (r -ri -λrij) dλ. (11) 5 This assumes l= λrij with 0 < λ < 1 so dl= rijdλ. As this linear relationship is not valid for MACE style interactions, which do not simply act pairwise between atoms, we instead use the contour form from Eq. (10) and substitute into Eq. (9). This is then integrated over a finite volume to get configurational pressure, ∂ ∂r Z V ·P CdV = ∂ ∂r · 1 2 N X i=1 N X j̸=i Z V fij I δ (r -ri -l) dl; f dV = ∂ ∂r · 1 2 N X i=1 N X j̸=i fij I θldl; f (12) the function θlis non zero if the part of the non-linear interaction path is inside the averaging volume43. The full volume averaged pressure can be obtained by assuming constant pressure in the volume, VA P = 1 ∆V N X i=1 pipi mi θi + 1 2 N X i=1 N X j̸=i fijlij; f , (13) where ∆V is the local volume and lij = H θldlthe length of interaction inside a volume, a form well know in the literature for straight lines44,45, simply extended here for a contour. The virial form of pressure47. is a simplification of Eq. (13) where instead of assigning to a volume based on the path of interactions, the pressure contribution is split with half assigned to each atom. The equation for this is, V IRIAL P = 1 ∆V N X i=1 pipi mi + 1 2 N X j̸=i fijrij θi; f , (14) which is strictly only valid for an entire periodic MD simulation but is widely used locally in bins due to is simplicity and implementation in codes like LAMMPS48. The Virial pressure is approximate while the Volume Average pressure Eq. (13)) is not usable as we cannot determine the interaction path between two atoms in a meaningful way from a MACE system. Instead, we can express this equation in terms of stress across the surface of the volume, here assumed to be a cuboid for simplicity, by simply evaluating the derivative of θl. This gives a molecular version of the divergence theorem43, Z V ∇· P CdV = -1 2 N X i,j fijrij · I ∂θl ∂r dl; f = -1 4 X α∈{faces} N X i,j fijdαijSαij; f = X α∈{faces} Z Sα P C · dSα. (15) The result is an expression which is non-zero only when the interaction between two atoms passes over a surface with dαij = sgn(α-αj)-sgn(α-αi), ensuring the two atoms are either side. The function Sαij is non zero if this crossing is localised to a surface. On a CV this is force over each face, here a cuboid, with six faces x+, x-, y+, y-, z+ and x-so for any given surface, say the z+ face, this is the force over area, CV PC z+ = - 1 4∆ACV z+ N X i,j fijdz+ ijSzij; f . (16) the pressure on surface of area ∆ACV z+ . The signum functions dz+ ij = sgn(z+ -zj) -sgn(z+ -zi) are only non zero if molecule i and j are on different sides of a plane at z+ while the S+ zij term specifies the point of intersection of the line is located on a region of the z+ plane, in Eq. (16) the surface of the cuboid. The surface is the localised form of the pressure tensor considered by Han and Lee 49 applied to the six cubic faces bounding a volume. For a cube in space, each face has three components of stress, which results in 18 independent components over the total control surface. As this interaction path is not linear with MACE, surface localisation will not be trivial to define, so we instead consider a series of slabs the size of the domain. This is achieved by setting S+ xij = 1, so the surface located at z+ spans the entire domain as an infinite plane in Eq. (17)), recovering the Method of Planes formulation of the pressure39, MoP Pz+ = 1 ∆Az+ N X i=1 pipiz mi δ(zi -z+); f + 1 4∆Az+ N X i,j fijdz+ ij; f . (17) Any two planes bounding a region in periodic space can be considered to form a control volume, so the molecules between satisfy the conservation43 with time evolution of momentum from Eq. (8) given by, ∂ ∂t Z V ρudV = ∂ ∂t N X i=1 piθi; f = MoP P+ z - MoP Pz (18) note the minor slight of hand in swapping to laboratory momentum pi →pi in Eq. (17), assuming streaming velocity is zero to simplify Eq. (18) and remove the need for convective ρuu terms. These can simply be included if considering cases with a flow. It is the conservative property of Eq. (18) which will be tested using the MACE potentials in this work. Given the path is non unique, and even the interaction force between atoms is arbitrary, Eq. (17) has the advantage of only requiring that the atoms be either side of a plane to get the pressure. This pressure can then be checked to ensure it satisfies conservation of momentum in Eq. (18), providing validation that the many-body MACE pressure we are measuring is exactly consistent with the resulting dynamics 6 FIG. 1: The domain with Zirconium dioxide(ZrO2) walls and water (H2O) in the channel showing all dimensions. The top is shown on the left highlighting the triclinic nature of the system, with the view angle on the right along the tricilinc domain angle of about θ = 9◦to highlight the crystal structure. The domain is directly taken from the DFT work of Yang, Youssef, and Yildiz 46 with the fluid region doubled in size by copying the molecules and re-equilibrating the larger system. of the molecules. We show in section IV that the virial stress does not satisfy the momentum balance condition near a wall, a well know result in the literature39,50 even in the steady state. However, we cannot evaluate the volume average and even if we could, it cannot be linked to the time evolution in an exact way, as we do here for the method of planes. C. Energy Equation The momentum balance on a control volume obtained in the previous section uses a force definition considering the derivative of the entire system U({rij}) energy landscape. However, to get the energy change in a given volume, the concept of potential energy per particle has to be introduced. The energy at a point is given by, ρE def = N X i=1 eiδ(ri -r); f , (19) where the energy per atom is ei = p2 i /2mi + Ui and the sum of all atoms gives total energy U = PN i=1 Ui. Now to get the evolution of energy Irving and Kirkwood 1 use Eq. (8) with α = P eiδ(r -ri)), ∂ ∂tρE = ∂ ∂t N X i=1 eiδ(ri -r); f = N X i=1 Fi · ∂ei ∂pi + ∂ei ∂ri · pi mi | {z } A δ(r -ri)) -∂ ∂r · ei pi mi δ(r -ri)); f , (20) The quantity on the final line is the advection of energy and does not include interactions, so we focus on the quantity A in the bracket with ∂ei/∂pi = pi/mi and ∂ei/∂ri = ∂Ui/∂ri, A = Fi · pi mi + ∂Ui ∂ri · pi mi = -∂U ∂ri · pi mi + N X j=1 ∂Ui ∂rj · ∂rj ∂ri · ∂ri ∂t = N X j=1 ∂Ui ∂rj · pj mi - ∂PN j=1 Uj ∂ri · pi mi , (21) using pi/mi = ∂ri/∂t, Fi = -∂U/∂ri and U = P i Ui. Taking the sum over all i and multiplying by the Dirac delta function, Eq. (21) becomes, N X i=1 N X j=1 ∂Ui ∂rj · pj mi δ(r -ri)) -∂Uj ∂ri · pi mi δ(r -ri)); f , = N X i,j ∂Ui ∂rj · pj mi δ(r -ri)) -∂Ui ∂rj · pj mj δ(r -rj)); f , = N X i,j ∂Ui ∂rj · pj mi [δ(r -ri)) -δ(r -rj))] ; f , = ∂ ∂r · N X i,j ∂Ui ∂rj · pj mi I δ (r -ri -l) dl; f . (22) which can be written as the MoP stress using the same process as the momentum equation, integrating over a volume and evaluating the derivative in a given direction 7 and surface, again z+ here, MoP J K z+ = 1 ∆Az+ N X i=1 eipiz mi δ(zi -z+); f MoP J C z+ = 1 4∆Az+ N X i,j ∂Ui ∂rj · pj mi dz+ ij; f . (23) Once again, it is useful to review the pseudocode that can obtain these quantities #Loop needed over all energies Ui per particle for i in Natoms: dUidrj[i,:,:] = autograd.grad(node_energy[i], r) #Sum over all i and j for a #given plane located at zplane for i in Natoms: for j in Natoms: #Obtain work done to be used in MoP term fijvi = -( dUidrj[i,j,0]*v[i,0] +dUidrj[i,j,1]*v[i,1] +dUidrj[i,j,2]*v[i,2]) dzij = ( sgn(zplane - r[j,2]) -sgn(zplane - r[i,2])) MoPpower_c += 0.5*fijvi*dzij It is possible to obtain from the implementation here that this operation will be computationally expensive, scaling with N as for each atom i, the full back-propagation operation must be used to get each ∂Ui/∂rj. A more efficient version is possible, given in Langer et al. 31 based on defining a position variable outside of the computational graph. It is not clear if this approach can extended to the MoP pressure in the form presented here. It is worth comparing Eq. (23) to the naive approach to getting the heat flux, which would be to use the pairwise forces of Eq. (4) directly in the pairwise MoP form49,51, ] MoP J Cz+ = 1 4∆Az+ N X i,j fij · pj mi dz+ ij; f = 1 4∆Az+ N X i,j ∂U ∂rij -∂U ∂rji · pj mi dz+ ij; f (24) This equation gives an error as shown in section IV, but should be much more computationally efficient given only a single back-propagation step is required to obtain ∂U/∂rij. III. METHODOLOGY The focus of this work is to demonstrate that the surface form of pressure is valid in a non equilibrium molecular dynamics simulation with the MACE potential. Given the potential range of models that could be studied with MACE, we choose water at an interface with Zirconium Oxide (zirconia); a well-studied oxide with application in thermal barrier coatings, catalysis and to prevent corrosion in zirconium alloys46. The choice of studied material is somewhat arbitrary here as the focus in on obtaining stresses, and MACE requires no differing assumption when modelling any of the 89 elements in its training data. The simulation uses the Atomic Simulation Environment (ASE)52 with the MACE potential calculator. The simulation setup is taken from Yang, Youssef, and Yildiz 46 which studied an ab initio molecular dynamics simulations of an interface between monoclinic phase ZrO2 with the (1 ̄11) surface orientation at the interface to the liquid water. This is shown in Figure 1 highlighting the dimensions of the solid and liquid regions. This particular surface was chosen as it is reported to have the lowest energy orientation53. The simulation cell contained 96 formula units of ZrO2, as in Yang, Youssef, and Yildiz 46 but with the number of water molecules doubled to give a slightly larger fluid region. The simulation cell is triclinic, to accommodate the ZrO2 cell, with dimensions 13.6 ̊A × 13.6 ̊A × 40.3 ̊A in x, y and z respectively. The solid walls are about 7 ̊A at the top and bottom, with periodic boundaries connecting all sides. The domain was split into 400 control volume slabs, each of size ∆z = 0.1 ̊A, with the momentum and energy in the volume collected along with the stress and heat flux on the top and bottom planes of each volume. Unless otherwise stated, all quantities in this work are given in the standard units for ASE, where amu = eV = ̊A = K = 1 and times in the ASE time unit are t = ̊A p amu/eV which means one ASE time unit is approximatly 10.2fs. Atomic interactions were described using the MACE machine-learned interatomic potential35. A range of models were tested, including mace-mpa-0-medium and MACE-MATPES-PBE-0 run with a customised branch of the Github MACE code edited to return pairwise forces54. The model was run on a CUDA-enabled GPU (NVIDIA 4060Ti) using the NVIDIA cuEquivariance library for acceleration55, run with pytorch 2.8, CUDA kernel 12.6 and CUDA Version 560.35.03. The calculation of surface crossings for the MoP calculation is accelerated using Numba56. The MACE-MATPES-PBE-0 version of MACE has no Hubbard +U correction to match the setup of Yang, Youssef, and Yildiz 46, and behaviour such as disassociation of water is observed near the surface. Such behaviour is highly non-trivial to capture with classical models, but for DFT and MACE systems, the water molecules are held together purely by the MACE forces requiring no explicit bonds. The electronic structure calculations underpinning MACE training use the generalised gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional, a ground state and relatively simple form of DFT. However, given the rapid evolution of 8 0 10 20 30 40 z -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 Pressure MoP P K z MoP P C z MoP P z IK P K z IK P C z IK Pz 8 10 12 14 z -0.06 -0.04 -0.02 0.00 0.02 0.04 Pressure a) b) FIG. 2: Plot of spatial virial pressure compared to Method of Planes over 400 bins and 401 planes respectively. The full channel is shown in a) plotted below a snapshot of corresponding molecular locations to demonstrate how pressure changes from the solid ZrO2 to the liquid H2O regions. The zoomed in near-wall region is shown in b), where the colours/legend is consistent in both plots. Plots are an average over 30,000 samples in time taken every 10 timesteps steps from MD simulations, with symmetry assumed to improve statistics in b), where averaging about the centreline increases statistics to 60,000 samples. The kinetic pressure and configurational pressure must sum to zero for ∂Pzz/∂z = 0 to be valid, seen as a flat green line for MoP but not for the virial sum (dotted black line) where the measured pressure is biased by the location of the molecules. both MACE and general ML style potentials, it is expected future potentials will continue to change as the database and training become more extensive. This is especially true as MACE-MP-0 is designed to be a foundation model which is meant to be fine tuned using DFT to explicitly cover specific case. These include improving the cross interactions between the ZrO2 wall and water atoms for example. Although Batatia et al. 3 shows water interface demonstrate reasonable results, recent work shows MACE does lose accuracy for interfaces57. However, Focassio, M. Freitas, and Schleder 57 also show that fine tuning can improve a foundation models much faster than starting training from scratch. The model does not include dispersion terms and long range electrostatics are not included. However, the classical MoP formulation has not been adapted for long range forces to the authors knowledge, so this limitation would remain for QM systems. Yang, Youssef, and Yildiz 46 ran water at about 30K above room temperature to compensate for the overstructuring common in DFT simulations. However, in the MACE simulation at 330K temperature with an NVE ensemble, considerable over structuring was still observed in the water. This shows minimal flow, instead forming a kind of glassy crystal, even with the addition of D3 dispersion corrections58. To force more fluid like behaviour, the simulations were performed at a temperature of 500K, which was seen to provide extensive movement of the water atoms. This elevated temperature was specifically selected to help correct for the known overbinding error of density functional theory (DFT) when simulating bulk water59 and ensure we can test the MoP equations for water in the liquid phase. This also corresponds to the lower end of typical temperatures used in nuclear reactors46, providing a physical justification for the studied system. A Nose-Hoover thermostat was used to elevate the entire system for an equilibration period. The simulation was then restarted as an NVE with all results collected during a constant energy run for the production phase. For the runs to test the spatial variation of pressure, sufficient sampling is required so longer runs of (300, 000 timesteps at ∆t = 0.5) with samples every 10 timesteps collected giving 30, 000 samples for the pressure. For the control volume conservation tests, only very short runs of 40 time units were required with 80 timesteps at ∆t = 0.5 for momentum and 320 timesteps at ∆t = 0.125 for energy. IV. RESULTS Figure 2 shows the MoP pressure compared to the virial pressure. It has been documented in the literature39,50 that using the virial form of pressure fails 9 to give a flat profile near walls. This must be an error, as the condition for mechanical equilibrium ∇· P = 0 is violated. This in turn implies the virial form of pressure cannot be correct as the result of a non-zero mechanical equilibrium would be a net flow, something not possible for static fluid between two surfaces. These anomalous peaks in pressure observed in the virial are simply a consequence of assigning the pressure at the location of the molecules, which tend to show density peaks near walls. These artefacts appear due to the virial assumption of a half-half split of pressure contribution from an interactions, assigning these disproportional to the location of molecules. Instead, mechanical pressure should be defined at an arbitrary plane in space, as in the original definition of Cauchy. The MoP pressure show a flat green line in Figure 2, which is an important validation as it ensures the form of pressure still satisfies ∇· P = 0 despite the arbitrary definition of a pairwise intermolecular force fij from the multi-body MACE potential. The other check that the measured pressure is meaningful is demonstrated in Figure 3a). This control volume check ensures that d/dt R v ρudV = H P · dS, that is the forces fij which are summed over a plane to give P in Eq. (17) exactly determine the change of momentum of the molecules. This is shown to machine precision in Figure 3, where the sum of the kinetic spikes as molecules cross plus StressMOP K the total summed fij contributions MoP P C from all molecules are equal to the time evolution. Again, the arbitrary nature of the pairwise force mean this check is useful, showing the measured MoP pressure satisfies the continuum momentum equations exactly. In fact, variants of this force decomposition, including fij = -∂Ui/∂rj and even fij = -∂U/∂rij without transpose which does not satisfy Newton's 3rd law were tested and also demonstrate momentum conservation. The two definition of energy using the pairwise force from Eq. (24) and the definition from Langer et al. 31 derived to gives MACE energy conservation Eq. (23) were also tested in Figure 3b). It is clearly seen that the from from Eq. (23) using ∂Ui/∂ri provides much better energy conservation. The energy plot is run at a lower timestep than the momentum example, as the energy conservation in MD is only approximate, conserving a shadow Hamiltonian60, but with an error that decreases as timestep decreases. For the case of Eq. (23), the error decreases with timestep, whereas for the pairwise equation from Eq. (24) this error remains in the limit of small timestep, indicating a systematic issue. The difference between ] MoP J Cz+ and d/dt R V ρEdV does not follow a clear pattern, with sometimes under prediction (t ∼4), sometimes over prediction (t ∼19) and even close agreement (around times 6-7 or 14) The energy in a control volume are a function of the work done on the atoms from outside the cell; with slow changes due to forces and sudden peaks due to energy carried into or our of the the volume by atomic surface crossings. These peaks have magnitudes around 500, as can be seen from the inset in Figure 3b), so the errors in energy conservation observed during crossings are relatively small. These may also be a consequence of the work done calculation being assigned to bins based on atom position at the start of the timestep, whereas in practice, the crossing happening at some point in the timestep, so the work done will be split between volumes. The agreement between work done and the energy change sometimes shows a discrepancy in the forcing terms, which can be observed in Figure 3b) at times 16 to 17. This can also be seen in the sum at these same times, perhaps showing additional error which occurs when multiple atoms are interacting in a cell. Despite this minor differences, both momentum and energy conservation validate the forms of stress and heat flux respectively. V. CONCLUSIONS This work has shown the MACE potential can be used for non equilibrium molecular dynamics (NEMD) simulation with pressure measurements obtained using the method of planes (MoP). These MoP pressure measurements are shown to satisfy the static balance near a liquid-solid interface (a condition where the widely used virial fails) as well as exactly satisfying the momentum and energy control volume equations. This conservation demonstrates the MoP pressure is valid arbitrarily far from equilibrium, which means these tools can be directly used in studying any fluid system with MACE. The definition and correct form of pressure is a source of no small controversy in the literature29. As a non-unique quantity40, this has resulted in ambiguity that leads to the use of incorrect or inappropriate definitions. As we move to ML potentials which layer even more ambiguity, including the definition of an effective pairwise force in a graph network, it is useful to try to establish a firmer foundation for the vital quantities like pressure and heat flux The form of pressure presented in Eq. (17) and heat flux in Eq. (23) have been shown to conserve momentum and energy every timestep in volumes thin enough to often only have a single atom. As a result, these validate the MoP forms and the resulting conservation checks should form an essential part of NEMD validation in the ML age. The code to run these cases in ASE with MACE is provided with this work as a template for developing these checks. The potential for machine learning (ML) to include quantum mechanics (QM) details into molecular fluid simulation are vast. Many industrial problems studied with NEMD require pressure or heat flux, for example to get slip lengths, heat transfer, solid-liquid interface traction, local visco-elastic effects or liquid-vapour interfacial tension. This work uses a non-trivial fluid and solid case of Zirconium dioxide (zirconia) and water to demonstrate the MoP method remains applicable in obtaining both the pressure and heat flux. As the MACE form of potential allows any combination of elements to be mod10 0 5 10 15 20 t -20 -10 0 10 20 Momentum MoP P K z MoP P C z d dt R V ρudV Sum 0 5 10 15 20 t -1.0 -0.5 0.0 0.5 1.0 Energy MoP J K z MoP J C z g MoP J Cz d dt R V ρEdV Sum 0 20 -500 0 500 a) b) FIG. 3: Control volume conservation plots, a) is momentum conservation for a simulation with ∆t = 0.5, where the measured configurational forces and momentum fluxes are equal to momentum change, shown by the sum which is zero to machine precision. b) Energy, at smaller timestep ∆t = 0.125 so crossings are shown as arrows and both pairwise force are compared: fij · pj mj and dUi drj · pj mj . Sum is based on the more accurate dUi drj · pj mj form. Insert shows full scale, with much larger crossings as magnitude →∞as ∆t →0. The plot is for volume number 211, roughly in the middle of the channel, although similar plots can be obtained for any volume. elled and the presented derivation makes no assumptions limiting element type or chemistry61, this work shows that NEMD and MACE together have the potential to open a new frontier of molecular fluid dynamics modelling. ACKNOWLEDGEMENTS The author is grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1 and EP/P020194/1)) DATA AVAILABILITY All code used for this project is made available on the authors GitHub released under a GPL-3.0 license with persistent URL uploaded to zenodo or similar upon final publishing. This includes numba accelerated code to calculate the MoP in ASE, which could be extended for other systems and even ML potentials. 1J. H. Irving and J. G. Kirkwood, "The statistical mechanics theory of transport processes. iv. the equations of hydrodynamics," J. Chemical. Phys. 18 - 6, 817-829 (1950). 2V. Eyert, J. Wormald, W. A. Curtin, and E. Wimmer, "Machine-learned interatomic potentials: Recent developments and prospective applications," Journal of Materials Research 38, 5079-5094 (2023). 3I. Batatia, P. Benner, Y. Chiang, A. M. Elena, D. P. Kov ́acs, J. Riebesell, X. R. Advincula, M. Asta, W. J. Baldwin, N. Bernstein, A. Bhowmik, S. M. Blau, V. C ̆arare, J. P. Darby, S. De, F. D. Pia, V. L. Deringer, R. Elijoˇsius, Z. El-Machachi, E. Fako, A. C. Ferrari, A. Genreith-Schriever, J. George, R. E. A. Goodall, C. P. Grey, S. Han, W. Handley, H. H. Heenen, K. Hermansson, C. Holm, J. Jaafar, S. Hofmann, K. S. Jakob, H. Jung, V. Kapil, A. D. Kaplan, N. Karimitari, N. Kroupa, J. Kullgren, M. C. Kuner, D. Kuryla, G. Liepuoniute, J. T. Margraf, I.-B. Magd ̆au, A. Michaelides, J. H. Moore, A. A. Naik, S. P. Niblett, S. W. Norwood, N. O'Neill, C. Ortner, K. A. Persson, K. Reuter, A. S. Rosen, L. L. Schaaf, C. Schran, E. Sivonxay, T. K. Stenczel, V. Svahn, C. Sutton, C. van der Oord, E. Varga-Umbrich, T. Vegge, M. Vondr ́ak, Y. Wang, W. C. Witt, F. Zills, and G. Cs ́anyi, "A foundation model for atomistic materials chemistry," arXiv preprint arXiv (2023), . 4D. P. Kov ́acs, I. Batatia, E. S. Arany, and G. Cs ́anyi, "Evaluation of the mace force field architecture: From medicinal chemistry to materials science," The Journal of Chemical Physics 159, 044118 (2023). 5R. Drautz, M. F ̈ahnle, and J. M. Sanchez, "General relations between many-body potentials and cluster expansions in multicomponent systems," J. Phys. Condens. Matter 16, 3843 (2004). 6M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. Cohen, "3d steerable cnns: Learning rotationally equivariant features in volumetric data," (2018), . 7R. Kondor, Z. Lin, and S. Trivedi, "Clebsch-gordan nets: a fully fourier space spherical convolutional neural network," (2018), 11 . 8N. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff, and P. Riley, "Tensor field networks: Rotation- and translationequivariant neural networks for 3d point clouds," (2018), . 9M. Geiger, T. Smidt, A. M., B. K. Miller, W. Boomsma, B. Dice, K. Lapchevskyi, M. Weiler, M. Tyszkiewicz, S. Batzner, D. Madisetti, M. Uhrin, J. Frellsen, N. Jung, S. Sanborn, M. Wen, J. Rackers, M. Rød, and M. Bailey, "Euclidean neural networks: e3nn," (2022). 10S. Batzner, A. Musaelian, L. Sun, et al., "E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials," Nature Communications 13, 2453 (2022). 11The 1 and 3 were found to be less favourable in terms of accuracy vs. computational cost. As MACE uses a more complicated local potential with ACE, it requires fewer hops than other message passing approaches. 12B. Deng, P. Zhong, K. Jun, J. Riebesell, K. Han, C. J. Bartel, and G. Ceder, "Chgnet as a pretrained universal neural network potential for charge-informed atomistic modelling," Nature Machine Intelligence , 1-11 (2023). 13L. Barroso-Luque, M. Shuaibi, X. Fu, B. M. Wood, M. Dzamba, M. Gao, A. Rizvi, C. L. Zitnick, and Z. W. Ulissi, "Open materials 2024 (omat24) inorganic materials dataset and models," (2024), . 14T. P. Senftle, S. Hong, M. M. Islam, S. B. Kylasa, Y. Zheng, Y. K. Shin, C. Junkermeier, R. Engel-Herbert, M. J. Janik, H. M. Aktulga, T. Verstraelen, A. Grama, and A. C. T. van Duin, "The reaxff reactive force-field: development, applications and future directions," npj Computational Materials 2, 15011 (2016). 15B. D. Todd and P. J. Daivis, Nonequilibrium Molecular Dynamics: Theory, Algorithms and Applications (Cambridge University Press, 2017). 16D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids, 2nd ed. (ANU Press, 2007). 17J. P. Ewen, C. Gattinoni, J. Zhang, D. M. Heyes, H. A. Spikes, and D. Dini, "On the effect of confined fluid molecular structure on nonequilibrium phase behaviour and friction," Physical Chemistry Chemical Physics 19, 17883-17894 (2017). 18S. K. Kannam, B. D. Todd, J. S. Hansen, and P. J. Daivis, "Slip length of water on graphene: Limitations of non-equilibrium molecular dynamics simulations," Journal of Chemical Physics 136, 024705 (2012). 19D. Williams, Z. Wei, M. R. b. Shaharudin, and P. Carbone, "A molecular simulation study into the stability of hydrated graphene nanochannels used in nanofluidics devices," Nanoscale 14, 3467-3479 (2022). 20A. P. Thompson and M. O. Robbins, "Simulations of contact-line motion: Slip and the dynamic contact angle," Physical Review Letters 63, 766-769 (1989). 21T. Qian, X.-P. Wang, and P. Sheng, "Molecular hydrodynamics of the moving contact line in two-phase immiscible flows," Communications in Computational Physics 1, 1-52 (2005), arXiv:cond-mat/0510403. 22J. G. Kirkwood and F. P. Buff, "The statistical mechanical theory of surface tension," The Journal of Chemical Physics 17, 338-343 (1949). 23M. S. Green, "Markoff random processes and the statistical mechanics of time-dependent phenomena. ii. irreversible processes in fluids," Journal of Chemical Physics 22, 398-413 (1954). 24R. Kubo, "Statistical-mechanical theory of irreversible processes. i. general theory and simple applications to magnetic and conduction problems," Journal of the Physical Society of Japan 12, 570-586 (1957). 25S. T. O'Connell and P. A. Thompson, "Molecular dynamicscontinuum hybrid computations: A tool for studying complex fluid flows," Physical Review E 52, R5792-R5795 (1995). 26M. Mohamed and A. A. Mohamad, "A review of the development of hybrid atomistic-continuum methods for dense fluids," Microfluidics and Nanofluidics 8, 283-302 (2010). 27E. G. Flekkøy, G. Wagner, and J. Feder, "Hybrid model for combined particle and continuum dynamics," Europhysics Letters 52, 271-276 (2000). 28D. Davydov, J.-P. Pelteret, and P. Steinmann, "Comparison of several staggered atomistic-to-continuum concurrent coupling strategies," Computer Methods in Applied Mechanics and Engineering 277, 260-280 (2014). 29K. Shi, E. R. Smith, E. E. Santiso, and K. E. Gubbins, "A perspective on the microscopic pressure (stress) tensor: History, current understanding, and future challenges," The Journal of Chemical Physics 158, 040901 (2023), https://pubs.aip.org/aip/jcp/articlepdf/doi/10.1063/5.0132487/20004255/040901 1 5.0132487.pdf. 30M. F. Langer, J. T. Frank, and F. Knoop, "Stress and heat flux via automatic differentiation," arXiv preprint (2023), . 31M. F. Langer, F. Knoop, C. Carbogno, M. Scheffler, and M. Rupp, "Heat flux for semilocal machine-learning potentials," Phys. Rev. B 108, L100302 (2023). 32H. Goldstein, C. Poole, and J. Safko, Classical Mechanics, 3rd ed. (Addison Wesley, 2002). 33A. D. Kaplan, R. Liu, J. Qi, T. W. Ko, B. Deng, J. Riebesell, G. Ceder, K. A. Persson, and S. P. Ong, "A foundational potential energy surface dataset for materials," arXiv preprint (2025), 10.48550/arXiv.2503.04070. 34N. C. Admal and E. B. Tadmor, "Stress and heat flux for arbitrary multibody potentials: A unified framework," The Journal of Chemical Physics 134, 184106 (2011), https://pubs.aip.org/aip/jcp/articlepdf/doi/10.1063/1.3582905/14061374/184106 1 online.pdf. 35I. Batatia, D. P. Kovacs, C. Ortner, G. Cs ́anyi, and V. L. Deringer, "Mace: Higher order equivariant message passing neural networks for fast and accurate force fields," arXiv preprint (2022), . 36Z. Fan, L. F. C. Pereira, H. Wang, J. Zheng, D. Donadio, and A. Harju, "Force and heat current formulas for many-body potentials in molecular dynamics simulation with applications to thermal conductivity calculations," Physical Review B 92, 094301 (2015). 37N. C. Admal and E. B. Tadmor, "A unified interpretation of stress in molecular systems," Journal of Elasticity 100, 63-143 (2010), . 38D. J. Evans and G. P. Morris, Statistical Mechanics of NonEquilibrium Liquids, 2nd ed. (Australian National University Press, 2007). 39B. Todd, D. Evans, and P. Daivis, "Pressure tensor for inhomogenous fluids," Physical Review E 52(2), 1627-1638 (1995). 40P. Schofield and J. R. Henderson, "Statistical Mechanics of Inhomogeneous Fluids," Proc. R. Soc. Lond. A 379, 231 (1982). 41W. Noll, "Die herleitung der grundgleichungen der thermomechanik der kontinua aus der statistischen mechanik," Journal of Rational Mechanics and Analysis 4, 627-646 (1955). 42D. R. J. Monaghan and G. P. Morriss, "Microscopic study of steady convective flow in periodic systems," Physical Review E 56, 476 (1997). 43E. R. Smith, D. M. Heyes, D. Dini, and T. A. Zaki, "Controlvolume representation of molecular dynamics," Phys. Rev. E. 85, 056705 (2012). 44J. Cormier, J. M. Rickman, and T. J. Delph, "Stress calculation in atomistic simulations of perfect and imperfect solids," J. Appl. Phys. 89, 99 (2001). 45R. J. Hardy, "Formulas for determining local properties in molecular dynamics simulations: Shock waves," J. Chem. Phys 76(1), 622-628 (1982). 46J. Yang, M. Youssef, and B. Yildiz, "Structure, kinetics, and thermodynamics of water and its ions at the interface with monoclinic zro2 resolved via ab initio molecular dynamics," The Journal of Physical Chemistry C 125, 15233-15242 (2021). 47E. N. Parker, "Tensor Virial Equations," Phys. Rev. 96, 1686 (1954). 12 48S. Plimpton, P. Crozier, and A. Thompson, LAMMPS Users Manual - Large-scale Atomic/Molecular Massively Parallel Simulator, http://lammps.sandia.gov, 7th ed. (2003). 49M. Han and J. S. Lee, "Method for calculating the heat and momentum fluxes of inhomogeneous fluids," Phys. Rev. E 70, 061205 (2004). 50D. Heyes, E. Smith, D. Dini, and T. Zaki, "The equivalence between volume averaging and method of planes definitions of the pressure tensor at a plane," The Journal of chemical physics 135, 024512 (2011). 51E. R. Smith, P. J. Daivis, and B. D. Todd, "Measuring heat flux beyond fourier's law," The Journal of Chemical Physics 150, 064103 (2019). 52A. H. Larsen, J. J. Mortensen, J. Blomqvist, I. E. Castelli, R. Christensen, M. Du lak, J. Friis, M. N. Groves, B. Hammer, C. Hargus, E. D. Hermes, P. C. Jennings, P. B. Jensen, J. Kermode, J. R. Kitchin, E. L. Kolsbjerg, J. Kubal, K. Kaasbjerg, S. Lysgaard, J. B. Maronsson, T. Maxson, T. Olsen, L. Pastewka, A. Peterson, C. Rostgaard, J. Schiøtz, O. Sch ̈utt, M. Strange, K. S. Thygesen, T. Vegge, L. Vilhelmsen, M. Walter, Z. Zeng, and K. W. Jacobsen, "The atomic simulation environment-a python library for working with atoms," Journal of Physics: Condensed Matter 29, 273002 (2017). 53A. Christensen and E. A. Carter, "First-principles study of the surfaces of zirconia," Phys. Rev. B 58, 8050-8064 (1998). 54Branched at commit e2842aadd15a57580e8564b050bd6b4eee852b7d with full code with all changes at https://github.com/edwardsmith999/mace/. 55NVIDIA, "cuequivariance," (2025). 56S. K. Lam, A. Pitrou, and S. Seibert, "Numba: a llvm-based python jit compiler," in Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, LLVM '15 (Association for Computing Machinery, New York, NY, USA, 2015). 57B. Focassio, L. P. M. Freitas, and G. R. Schleder, "Performance assessment of universal machine learning interatomic potentials: Challenges and directions for materials' surfaces," ACS Applied Materials & Interfaces 17, 13111-13121 (2025), pMID: 38990833. 58S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, "A consistent and accurate ab initio parametrization of density functional dispersion correction (dft-d) for the 94 elements h-pu," The Journal of Chemical Physics 132, 154104 (2010). 59C. Lee, H. Chen, and G. Fitzgerald, "Structures of the water hexamer using density functional methods," The Journal of Chemical Physics 101, 4472-4473 (1994). 60K. D. Hammonds and D. M. Heyes, "Shadow hamiltonian in classical nve molecular dynamics simulations involving coulomb interactions," The Journal of Chemical Physics 154, 174102 (2021). 61With the note that long range interactions are currently not included in MACE.
|
2509.16253
|
Quantum-like representation of neuronal
networks’ activity: modeling “mental
entanglement”
Andrei Khrennikovଵ,∗, Makiko Yamadaଶ,ଷ,ସ
ଵ Center for Mathematical Modeling in Physics and Cognitive Sciences
Linnaeus University, Växjö, SE-351 95, Sweden
ଶ Institute for Quantum Life Science
National Institutes for Quantum Science and Technology, Chiba, 263-8555, Japan
ଷ Institute for Quantum Medical Science
National Institutes for Quantum Science and Technology, Chiba, 263-8555, Japan
ସ Graduate School of Science and Engineering
Chiba University, Chiba, 263-8522, Japan
*Corresponding author email: Andrei.Khrennikov@lnu.se
Abstract: Quantum-like modeling (QLM) – quantum theory applications outside of physics – are intensively
developed with applications in biology, cognition, psychology, and decision-making. For cognition, QLM
should be distinguished from quantum reductionist models in the spirit of Hameroff and Penrose and well as
Umezawa and Vitiello. QLM is not concerned with just quantum physical processes in the brain but also QL
information processing by macroscopic neuronal structures. Although QLM of cognition and decision-making
has seen some success, it suffers from a knowledge gap that exists between oscillatory neuronal network
functioning in the brain and QL behavioral patterns. Recently, steps toward closing this gap have been taken
using the generalized probability theory and prequantum classical statistical field theory (PCSFT) – a random
field model beyond the complex Hilbert space formalism. PCSFT is used to move from the classical
``oscillatory cognition'' of the neuronal networks to QLM for decision-making. In this study, we addressed the
most difficult problem within this construction: QLM for entanglement generation by classical networks, i.e.,
“mental entanglement.” We started with the observational approach to entanglement based on operator
algebras describing “local observables” and bringing into being the tensor product structure in the space of
QL states. Moreover, we applied the standard states entanglement approach: entanglement generation by
spatially separated networks in the brain. Finally, we discussed possible future experiments on “mental
entanglement” detection using the EEG/MEG technique.
keywords: Quantum-like modeling; neuronal networks; mental entanglement; decision
making; EEG/MEG technique
1. Introduction
Intensive development of quantum information theory has transformed the perspective of
quantum studies toward an information-based approach to physics. In particular,
numerous information-theoretic interpretations of quantum mechanics have been
proposed [1]. These interpretations exert both foundational and technological influence.
This informatization of quantum physics has also stimulated applications of the
methodology and formalism of quantum theory beyond physics, extending to biology,
cognition, psychology, decision-making, economics, finance, and the social and political
sciences (see, e.g., monographs [2]-[8] and reviews [9,10])—a direction commonly termed
quantum-like modeling (QLM). In this paper, we focus on QLM in the domains of cognition
and decision-making.
Here, QLM must be clearly distinguished from quantum reductionist models advanced by
Hameroff and Penrose [11,12], Vitiello [13,14], and Igamberdiev [15,16], who associated
cognition and consciousness with quantum physical processes in the brain. Hameroff and
Penrose emphasized microtubules, Vitiello referred to quantum field theory and long-range
correlations in the brain, and Igamberdiev linked cognition to quantum processes in cells.
In contrast, QLM of cognition does not concern quantum physical processes in the brain
but rather quantum-like information processing by macroscopic neuronal networks.
QLM of cognition and decision-making has been successfully developed; it mathematically
describes non-classical features of cognitive phenomena, such as “interference and
entanglement of minds.” It resolves numerous paradoxes of decision theory and models
basic cognitive effects, including conjunction, disjunction, order, response replicability,
and Zeno effects (see the mentioned works and articles [17]-[23]). QLM has highlighted the
contextuality of cognitive phenomena by applying advanced quantum contextuality theory
and, in particular, the machinery of Bell inequalities [24]-[29] (Section 11]). QLM has
introduced a novel perspective on rationality versus irrationality and violations of the
Savage Sure Thing Principle [4], framed within probabilistic and logical approaches, as
violations of Bayesian updating [21,22], the formula of total probability [2,30,3,,6,31], and
classical Boolean logic, while incorporating quantum logic [32,33,34]. QLM has
successfully described statistical data on bistable perception of ambiguous figures
17,35,36,37, 2], has been applied to biological evolution (including genetic and epigenetic
mechanisms) [40,6,41], and, more recently, to aesthetic experiences during book reading
[42,43].
Nevertheless, cognitive QLM faces a gap between theoretical descriptions of classical
oscillatory neuronal networks in the brain (e.g., the phase space description of harmonic
oscillators) and the quantum-like representation of mental states and observables (e.g.,
density and Hermitian operators). Thus, it remains a phenomenological framework.
Recently, progress toward bridging this gap has been achieved [44] within the framework of
prequantum classical statistical field theory (PCSFT)—a random field model generating the
complex Hilbert space formalism for states and observables [45]-[48]. PCSFT has been
employed to connect classical “oscillatory cognition” of neuronal networks with QLM for
decision-making.i1
1 See [49]-[54] for other models aimed at coupling neuronal and QL information processing
in the brain. We especially highlight the article [55] in that we employ generalized
probability (operational measurement) theory. This is a more general formalism than
quantum theory in a complex Hilbert space. But all authors exploring QLM work within the
In this paper, we proceed to the most difficult problem within such construction: creation
of QLM for the generation of entanglement by classical networks–the problem of “mental
entanglement.”
We now outline the content of the paper. Section 2 presents the basic construction for
transitioning from oscillatory dynamics of neuronal networks in the brain to the quantum-
like (QL) representation. Section 3 links classical and quantum realizations of observables
on neuronal networks. Section 4 addresses a specific approach to the notion of
entanglement, treating it as the entanglement of observables. In Section 5 this approach is
applied to entanglement of observables on neuronal circuits. In Section 6 we examine the
standard approach to entanglement as state entanglement and its application to modeling
mental entanglement. In Section 7 we discuss the role of ephaptic coupling in generation of
correlations between neuronal circuits in the brain. Section 8 concerns possible
experimental verification of mental entanglement. This section deserves the special
attention of experts in experimental cognitive science and neuroscience. Here, we discuss
concrete experimental tests of mental entanglement based on classical
electroencephalogram (EEG)/Magnetoencephalography (MEG) measurement techniques
(Section 8.1), including comparative analysis with classical EEG-based approaches to
functional connectivity in neuroscience (Section 8.3). Section 9 describes the main
quantitative measures of entanglement that can be used in experimental tests. In Section
8.4, we analyze the possibility of creating and detecting entanglement in in vitro neuronal
networks. Conceptual differences between classical and QL frameworks are discussed in
Section 10. Section 11 provides a general discussion on the proposed model. Appendix A
concerns the mathematical model for coupling classical oscillatory dynamics, represented
by Hamiltonian equations, with quantum dynamics described by the Schrödinger equation.
In Appendix B, we examine the possibility of detecting mental entanglement experimentally
with Bell inequalities.
In physics, entanglement is one of the most intriguing and complex quantum phenomena.
Typically, entanglement is treated as state entanglement. In the mathematical formalism of
quantum theory, a pure state |𝜓⟩ of a compound system 𝑆= (𝑆ଵ, 𝑆ଶ) is given by a
normalized vector belonging to the tensor product of two complex Hilbert spaces, ℋ =
ℋଵ⊗ℋଶ . A state |𝜓⟩ ∈ ℋ is called entangled if it cannot be factorized, that is, it does not
have the form |𝜓⟩= |𝜓⟩ଵ⊗|𝜓⟩ଶ, where |𝜓⟩∈ ℋ, i=1,2.
Another less familiar approach to the notion of entanglement is observation entanglement
[57-60] based on considering “local algebras” of cross-commuting observables 𝑨ଵ and 𝑨ଶ
. In this view, a tensor product structure on the state space is not preassigned but is
generated by these operator algebras. The same state space ℋ can be endowed with
multiple tensor-product structures corresponding to different choices of operator algebras.
In this paper, we employ both approaches to the notion of entanglement.
standard quantum formalism based on complex Hilbert space. So, it is useful to justify the
use of this formalism from the neurophysiological viewpoint.
Since observation entanglement is less familiar and not widely applied, we complete the
paper with a special section devoted to this notion, Section 4. We then proceed to the
entanglement of observables on neuronal networks in Section 4.
State entanglement generated by a compound neuronal network 𝑆= (𝑆ଵ, 𝑆ଶ) is discussed
in Section 6. Such entanglement and its generation by interacting neuronal networks is of
neurophysiological interest. To identify its roots, we must look deeper into brain
architecture and communication between neuronal circuits, including those not physically
connected through axonal-dendritic pathways. Here, we emphasize the role of
electromagnetic signaling, particularly ephaptic coupling between neuronal structures in
the brain (Section 7). However, discussion of state entanglement in neuronal networks
(Section 4) is primarily theoretical, as experimental detection remains far from reach.
Observation entanglement (Section 6) is more promising for experimentation. The central
challenge is identifying observables on neuronal networks capable of exhibiting such
entanglement.
This paper is conceptual and aims to demonstrate the possibility of modeling generally
nonlocal correlations in the brain that are mathematically described as quantum state
entanglement. At present, the biophysical mechanism of generating such states is not
completely clear (see, however, Section 7). This is the place to emphasize that “mental
nonlocality,” mathematically described as entanglement, is unrelated to the so-called
spooky action at a distance often associated with “quantum nonlocality.” Unlike quantum
physics experiments on spatially separated systems, such as two photons 100 km apart, in
cognitive science, we study a small physical system, the brain. Electromagnetic signals
connect any two points in the brain almost immediately. Thus, biological nonlocality
expressed via entanglement is classical nonlocality generated by electromagnetic signaling
between neuronal circuits. Mental entanglement is the mathematical description of
nonlocal correlations between observations performed on activated neuronal circuits. The
crucial difference from classical correlations is that some observables in local algebras 𝑨
(associated with the neuronal networks 𝑆, 𝑖= 1,2) can be incompatible, not jointly
measurable. In mathematical terms, such observables are described by non-commuting
operators.
We note that in article [44], entanglement of neuronal networks was only mentioned in an
unconventional framework that avoided tensor products, instead employing the classical
Descartes product (see [45]-[48]). This approach may be interesting from the perspective of
classical oscillatory cognition. However, by using it, we lose connection with the notion of
entanglement as defined in quantum information theory.
We also speculate that by exploring the QL representation of mental states and
observables, the brain may realize so-called quantum-inspired algorithms [61] and thereby
achieve essential enhancement of computational power [62]. In such algorithms, the
ability to generate entangled states is important.
We remark that QLM based on the QL representation of EEG signals is already applied in
medical diagnostics of neurological disorders, including depression, epilepsy, and
schizophrenia [63,64]. Such diagnostics work unexpectedly well, but in the absence of a
theoretical justification. Mental entanglement can serve as the theoretical basis for EEG-
based QL diagnostics, as it mathematically describes nonlocal information processing in
the brain. The observed violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality for
EEG signals (transformed into dendrograms with clustering algorithms) can also be
connected to mental entanglement.
2. QL states of neuronal networks
Let 𝑆 be a neuronal network with oscillatory node-circuits numbered as 𝔰, 𝑗= 1, . . . , 𝑁.
Following [62], we do not identify nodes of 𝑆 with individual neurons; instead, these are
neuronal circuits generating oscillations. The state of each circuit 𝔰 is mathematically
described by the complex variable 𝑧. Why is it complex? To describe oscillatory dynamics,
it is convenient to use two (real) variables (𝑞, 𝑝), where 𝑞 is coordinate (possibly
generalized) and 𝑝 is momentum—the conjugate variable to 𝑞. By setting 𝑧= (𝑞+ 𝑖𝑝)/√2
we move from the real phase space to a complex representation. See Appendix A and
article [44] for details.
Oscillations in circuits are random—random oscillations (ROs). For each node-circuit 𝔰,
ROs are expressed as ℂ-valued random variable 𝑧= 𝑧(𝜔), where 𝜔 is a random
parameter. These random variables are correlated. So, 𝑆 generates random vector 𝑧
=
𝑧(𝜔) ∈ ℂே. The complex linear space ℂே is endowed with the scalar product
⟨𝑣|𝑤⟩= ∑𝑣
ே
𝑤⃐ሬሬ. (1)
This is a complex Hilbert space; it is denoted by the symbol ℋ. Such spaces are basic in
quantum theory. So, a complex Hilbert space is naturally coupled to the phase space
dynamics [47,44] (see Appendix A).
Geometrically, a neuronal network 𝑆 can be represented by a graph 𝐺ௌ with nodes given by
neuronal circuits 𝔰, 𝑗= 1, . . . , 𝑁. These nodes are connected by edges. In the simplest case,
nodes are individual neurons and edges are axon-dendrite connections between neurons.
In this case, the network’s graph 𝐺ௌ is a directed multigraph, with the direction of an edge
determined by the axon’s origin. Signals propagating through the axon-dendrite network
generate correlations between ROs in neurons. These correlations play a crucial role in
constructing the QL representation of classical neuronal dynamics. Generally, the
structure of connections between node-circuits is very complex and not limited to the
axon-dendrite network (see the article for detailed analysis); some connections are
established at the molecular level or via electromagnetic fields. The construction of the
complete connection graph 𝐺ௌ is difficult, if not impossible. Moreover, for real neuronal
networks in the brain and body, 𝐺ௌ is a hypergraph and its structure varies with time. (We
recall that in a hypergraph, edges connect clusters of nodes rather than individual nodes.)
In our modeling, we do not employ this extremely complex geometry of connections within
S and instead represent the network state by considering correlations between ROs in
node-circuits (but cf. with [53-55] where the graph geometry was explored). Thus, instead
of the very complex hypergraph 𝐺ௌ of electrochemical connections between node-circuits,
it is useful to represent 𝑆 by the graph 𝐺ௌ, of correlations between ROs generated in
nodes of 𝐺ௌ. This approach leads to the QL representation. The set of its nodes coincides
with the nodes of the “electrochemical graph” 𝐺ௌ, while its edges are determined by
nonzero correlations between nodes: if the correlation between ROs in node-circuits 𝔰 and
𝔰 is nonzero, these nodes are connected by an edge. Such edges are undirected. Hence,
𝐺ௌ is a directed graph, but 𝐺ௌ, is an undirected graph.
The canonical basis in the linear space ℋ consists of the vectors |1⟩= (10. . .0), … , |𝑁⟩=
(0. . .1). Any vector 𝑣 \in ℋ can be expanded with respect to this basis, 𝑣= ∑𝑣
ே
|𝑗⟩. The
basis vectors (|𝑗⟩)ୀଵ
ே represent the node-circuits of the network 𝑆. The node-circuits of S
are represented by orthogonal vectors with respect to the scalar product. This
orthogonality is a constraint on the model and, in principle, can be omitted.
This is the place to remark that the representation of network nodes by vectors in complex
Hilbert space is a formal mathematical construction—the node-circuits are classical
(macroscopic) neuronal structures. The networks under consideration are not quantum
networks; they are classical networks for which we construct the QL representation.
Let vector 𝑧
= 𝑧(𝜔) ∈ ℋ be a random vector representing ROs generated by the
neuronal network 𝑆. We proceed under the assumption that it has a zero mean value, 𝜇=
𝐸[𝑧] = 0. If this is not the case, one can always use the vector (𝑧−𝜇), which has zero
mean value. Consider the covariance matrix of this random vector:
𝐶= (𝑐), 𝑐= 𝐸[𝑧𝑧⃐]. (2)
This is the matrix representation of the correlation graph 𝐺ௌ, . It is a Hermitian and
positive-definite matrix. Its only difference from a density matrix is that 𝐶 may have a non-
unit trace. It is natural to connect the classical ROs in 𝑆 with quantum formalism,
constructing a QL representation of the functioning of 𝑆 via trace normalization (see [45-
48]),
𝐶 → 𝜌 = 𝐶 / Tr 𝐶. (3)
Such a density matrix is a QL state generated by ROs in 𝑆.
We speak of matrices to couple QLM to the neuronal basis. We can proceed in the basis-
invariant framework with covariance and density operators. Thus, we refer to operators or
their matrices. As in quantum theory, various bases in ℋ can be employed. Bases other
than the canonical node basis contain linear combinations of node state vectors, so such
states are “non-local” from the perspective of brain geometry, as is the body in three-
dimensional physical space. This reflects the nonlocality of information processing.
The correspondence, ROs → covariance matrix, is not one-to-one; a variety of ROs generate
the same 𝐶. Moreover, the correspondence 𝐶→𝜌 is also not one-to-one because of
normalization scaling. Hence, QLM provides a fuzzy picture of classical random processes
in a network.2
Now consider the covariance matrix 𝐶 such that only one element 𝑐≠0. It expresses
ROs in 𝑆 such that all circuits besides 𝔰, are inactive (frozen). (For 𝑖≠𝑗, condition 𝑐=
𝐸[|𝑧|ଶ] = 0 implies that the random variable 𝑧= 0 almost everywhere). While this is an
idealized situation, it remains useful in a mathematical model. The corresponding density
matrix represents the projection on the vector |𝑗⟩, 𝜌= 𝐶/𝑐= |𝑗⟩⟨𝑗|. In principle, in this
way the circuit-basis vector |𝑗⟩ can be physically generated by activating a single circuit in
isolation from others.
Now consider ROs with the covariance matrix 𝐶௩= (𝑐= 𝑣𝑣⃐), where vector 𝑣 =(𝑣ଵ,..., 𝑣ே)
∈ ℋ . Then 𝐶௩= |𝑣⟩⟨𝑣| and 𝜌௩= |𝜓௩⟩⟨𝜓௩|, where 𝜓௩= 𝑣/||𝑣||. Thus, such ROs generate
pure states of this QLM. Due to the degeneration of correspondence, ROs → covariance
(density) matrix, each pure state can be generated by a variety of ROs. What is common
between such ROs? As was shown in [45-48], each such random vector 𝑧= 𝑧(𝜔) is
concentrated in the one-dimensional subspace 𝐿ట= {𝑣= 𝑐|𝜓௩⟩}. If vector 𝑣 is a non-trivial
superposition of the node-basis vectors (|𝑖⟩), that is 𝑣= ∑𝑣
|𝑖⟩, then ROs generating the
QL state 𝜌௩ are nonlocally distributed; all neuronal nodes |𝑖⟩ with 𝑣≠0 are involved in its
generation.
We note that one of the ways to generate a pure state is to consider deterministic (non-
random) dynamics in 𝑆, 𝑧௧ with 𝑧= |𝑣⟩, where |𝑣⟩ is normalized to one (see Appendix A). If
this initial vector |𝑣⟩ is the eigenvector of QL Hamiltonian, then 𝜌௩(𝑡) ≡|𝑣⟩⟨𝑣|. Thus,
stationary pure states, one-dimensional projections, can be generated as eigenstates of
Hamiltonian dynamics in 𝑆 (see Appendix A).
Ensemble versus time averages
This is a good place to make the following theoretical remark on the mathematical
description of correlations. In classical probability model (Kolmogorov 1933, [67] ), the
elements of covariance matrix (2) are calculated as the integrals
𝑐= ∫ 𝑧
ஐ
(𝜔)𝑧⃐(𝜔)𝑑𝑃(𝜔), (4)
where Ω is a sample space [67] (its points are random parameters or elementary events)
and 𝑃 is the probability measure on Ω.
2 In the real case, a Gaussian distribution is uniquely determined by its covariance matrix
and mean value. But in the complex case, only a circularly symmetric Gaussian distribution
is uniquely determined by its (complex) covariance matrix. The assumption that ROs in
neuronal circuits are circularly symmetric Gaussian makes the correspondence ROs →
covariance matrix one-to-one. However, there are no bio-physical arguments supporting
such an assumption.
However, in experimental research (both in physics and biology) the following time
averages are used. For two complex-valued time series 𝑥(𝑡) and 𝑦(𝑡), the covariance is
defined as:
Cov(𝑥, 𝑦) =
ଵ
்∑
𝑥
்
௧ୀଵ
(𝑡)𝑦⃐(𝑡), (5)
where 𝑇 is the total number of time points (we proceed under the assumption of zero mean
values).
The coincidence of ensemble and time averages is a subtle mathematical issue; their
equivalence relies on the assumption of ergodicity. This assumption is widely accepted and
often applied automatically. In theoretical models, one typically works within the
framework of Kolmogorov probability theory, whereas in experimental studies, time
averages are generally used. However, the validity of the ergodicity hypothesis in the
context of quantum and QL processes remains an open question [68,65]. A detailed
discussion of this issue falls beyond the scope of the present paper, and we shall not
pursue it further here.
The formula (5) raises the question of the selection of a proper time scale for averaging—
that is, the selection of the parameter 𝑇. In neuroscience, this issue has been discussed,
e.g., in [69]-[72].
3. Classical vs. quantum realizations of observables on
neuronal networks
Let 𝑆 be a network with 𝑁 neuronal circuits. ROs in 𝑆 are represented by a classical random
variable 𝑧= 𝑧(𝜔) valued in complex Hilbert space ℋ of dimension 𝑁. (Here parameter 𝜔
describes randomness in 𝑆. )
Consider now a quadratic form 𝑄(𝑧) = ⟨𝐴𝑧|𝑧⟩ on ℋ, where 𝐴 is a Hermitian matrix. For a
random vector 𝑧 valued in ℋ, we can consider the average of this form,
𝐸௭ [𝑄] = ∫
ஐ⟨ A 𝑧(𝜔)| 𝑧(𝜔) ⟩ dP (𝜔)= ∫ℋ⟨ Aw|w⟩ d 𝑝௭ (w), (6)
where Ω is a set of random parameters (elementary events), 𝑃 is the probability measure,
and 𝑝௭ is the probability distribution of the ℋ-valued random vector 𝑧= 𝑧(𝜔).
This average can be coupled to the covariance matrix by the following equality:
𝐸௭ [𝑄] = Tr 𝐶 A. (7)
The right-hand side of this equality is (up to normalization) the quantum formula for
calculation of averages of observables that are mathematically represented by Hermitian
matrices (operators). In terms of the corresponding density matrix
𝜌 = 𝜌= 𝐶 / Tr 𝐶
we have
⟨A⟩ = Tr 𝜌 A = 𝐸௭ [𝑄] / Tr 𝐶 = ∫ℋ⟨ Aw|w⟩ d 𝑝௭ (w)/ ∫ℋ⟨w|w⟩ d 𝑝௭ (w). (8)
This formula couples the average ⟨𝐴⟩ఘ of quantum observable 𝐴 in the state 𝜌 with the
average of the corresponding quadratic form of ROs in the neuronal network 𝑆.
The correspondence rule
𝐴↔𝑄 (9)
generates matching of quantum and classical averages. One can investigate this coupling
in more detail and obtain from the quadratic form 𝑄= 𝑄(𝜔) a discrete random variable
with values 𝑎, where (𝑎) is the spectrum of the operator 𝐴. Such a discrete random
variable is obtained via the threshold detection scheme; the Born rule appears as an
approximation of classical probability. This is a mathematically advanced formalism that
cannot be presented here (see [73] for rigorous mathematical representation). In this
model of classical-quantum coupling, the discretization procedure (threshold detection for
quadratic forms of classical neuronal variables) is the source of “quantumness.” The
phase space representation (Section 22 and Appendix A) is purely classical. All quadratic
forms are jointly defined as random variables on the same probability space. However,
each threshold detection measurement procedure determines its own probability space.
Generally, the corresponding discrete-valued observables cannot be represented as
random variables on the same probability space. They may not be jointly measurable, as
their measurement procedures need not be compatible. In this model, the appearance of
incompatible observables originates from the transition from classical observables, given
by quadratic forms, to QL observables mathematically represented by Hermitian
operators.
Consider now a dichotomous observable 𝐴 yielding values 0,1. It has two representations,
classical and QL. In the QL representation 𝐴= 𝐸 is given by a projector on subspace 𝐿; in
the classical representation 𝐴 is given by the quadratic form 𝑄ா. Let ROs in 𝑆 are described
by the random vector 𝑧 with the covariance matrix 𝐶, the QL state 𝜌 = 𝐶 / Tr 𝐶. Since 𝐴’s
average is equal to the probability of the outcome 𝐴= 1, we have the following coupling
between this probability and classical average of 𝑄ா,
P(A=1| 𝜌) = 𝐸௭ [𝑄ா] / Tr 𝐶= ∫ℋ⟨ Ew|w⟩ d 𝑝௭ (w)/ ∫ℋ⟨w|w⟩ d 𝑝௭ (w). (10)
It can be interpreted as the “weight” of ROs in the subspace L=E ℋ relatively to the weight
of ROs the whole space ℋ. Thus, this formula connects the outcomes of observations over
a neuronal network 𝑆 with averaging of ROs in it.
Generally if 𝐴= ∑𝑎
𝐸, where (𝐸) is the spectral family of the operator 𝐴, then we have
P(A=𝑎| 𝜌) =𝐸௭ [𝑄ாೌ] / Tr 𝐶 . (11)
If operator 𝐴 has non-degenerate spectrum with eigenvectors (|𝑎⟩), then
𝑃(𝐴= 𝑎|𝜌) = 𝑐/ ∑𝑐
, (12)
where (𝑐) are diagonal elements of the covariance matrix 𝐶 in the basis (|𝑎⟩). Hence, the
probabilities of outcomes are straightforwardly coupled with the elements of the
covariance matrix.
Let 𝐴ଵ and 𝐴ଶ be two compatible observables that is they can be jointly measurable. In the
QL representation they are described as commuting Hermitian operators 𝐴ଵ and 𝐴ଶ. Their
quantum correlation is given by the formula
⟨𝐴ଵ𝐴ଶ⟩ = Tr 𝜌 𝐴ଵ𝐴ଶ= Tr 𝜌 𝐴ଶ𝐴ଵ. (13)
so, in fact, this is the average of the observable 𝐴 described by the (Hermitian) operator 𝐴=
𝐴ଵ𝐴ଶ= 𝐴ଶ𝐴ଵ. By applying correspondence rule (9) to this observable, we obtain the
classical average representation of quantum correlations
Tr 𝜌 𝐴ଵ𝐴ଶ= 𝐸௭ [𝑄భమ] / Tr 𝐶 = ∫ℋ⟨ 𝐴ଵ𝐴ଶw|w⟩ d 𝑝௭ (w)/ ∫ℋ⟨w|w⟩ d 𝑝௭ (w). (14)
This possibility of a classical representation of quantum correlations may appear to
contradict the violation of Bell inequalities. However, it has been shown that this is not the
case [47]. Bell-type inequalities can, in principle, be tested for neuronal networks in the
brain (as well as for other types of networks, such as social networks). We examined this
for observables determined by EEG measurements in article [65].
On classical phase space, one can consider not only quadratic forms but also arbitrary
functions, 𝑧→𝑓(𝑧). Can one identify their QL images? In [45-48], the following coupling
between classical (functional) and QL (operator) descriptions was considered: 𝑓→
ᇴ()
ଶ for
a twice differentiable function 𝑓= 𝑓(𝑧).
4. Entanglement of observables
In this Section, we present the observational viewpoint on entanglement (see [57-60] ). It
will be employed in our QLM in Section 5. The traditional approach, viewing entanglement
as the correlation of the states of two systems [56], in our case, two neuronal networks 𝑆ଵ
and 𝑆ଶ, will be employed in Section 6.
We begin with a simple example that illustrates the general construction to be developed
later.
Let dim ℋ =4, let commuting Hermitian operators 𝐴ଵ and 𝐴ଶ have eigenvalues 𝑎ଵ=
±1, 𝑎ଶ= ±1, each with degeneracy 2. Consider the basis (|𝑖𝑗⟩), 𝑖, 𝑗= ±, in ℋ consisting of
the joint eigenvectors, 𝐴ଵ|𝑖𝑗⟩= 𝑖|𝑖𝑗⟩, 𝐴ଶ|𝑖𝑗⟩= 𝑗|𝑖𝑗⟩. Any vector in ℋ can be expanded w.r.t.
this basis,
|𝜓⟩= ∑𝑤
|𝑖𝑗⟩. (15)
Such vector decomposition generates on ℋ the structure of tensor product ℋଵ⊗ℋଶ
where ℋଵ and ℋଵ are two-dimensional Hilbert spaces with the bases (|𝑖⟩ଵ, 𝑖= ±), and
(|𝑖⟩ଶ, 𝑖= ±); vector
|𝜙⟩= ∑𝑤
|𝑖⟩ଵ⊗|𝑗⟩ଶ (16)
is identified with vector |𝜓⟩ given by (15). Tensor product on ℋଵ⊗ℋଶ induces tensor-
product structure on ℋ.
Consider now complex Hilbert space dim ℋ =N =𝑁ଵ𝑁ଶ. Let commuting Hermitian
operators 𝐴ଵ and 𝐴ଶ have eigenvalues (𝑎ଵ), 𝑗= 1, . . . , 𝑁ଵ, (𝑎ଶ), 𝑖= 1, . . . , 𝑁ଶ. Each 𝑎ଵ has
degeneracy 𝑁ଶ and each 𝑎ଶ has degeneracy 𝑁ଵ. Consider the basis (|𝑖𝑗⟩≡
|𝑎ଵ𝑎ଶ⟩), 𝑖1, . . . , 𝑁ଵ, 𝑗= 1, . . . , 𝑁ଶ in ℋ consisting of their joint eigenvectors, 𝐴ଵ|𝑖𝑗⟩=
𝑎ଵ|𝑖𝑗⟩, 𝐴ଶ|𝑖𝑗⟩= 𝑎ଶ|𝑖𝑗⟩. Any vector in ℋ can be expanded w.r.t. this basis, see ([any]). Such
vector decomposition generates on ℋ the structure of tensor product ℋଵ⊗ℋଶ, where ℋଵ
and ℋଶ are Hilbert spaces of the dimensions 𝑁ଵ and 𝑁ଶ with the bases (|𝑖⟩ଵ) and (|𝑗⟩ଶ). And
isomorphism map T: ℋଵ⊗ℋଶ → ℋ is determined by its action on the basis vectors |𝑖⟩ଵ⊗
|𝑗⟩ଶ→|𝑖𝑗⟩.
If the coefficients in representation (15) can be factorized, 𝑤= 𝑤
(ଵ)𝑤
(ଶ), then formally
vector |𝜓⟩ belonging to ℋ can be written as
|𝜓⟩= ቀ∑𝑤
(ଵ)
|𝑖⟩ଵቁ⊗ቀ∑𝑤
(ଶ)
|𝑗⟩ଶቁ. (17)
Such a vector is called separable; otherwise, |𝜓⟩ is called entangled. We remark that if
|𝜓⟩= 𝑇(|𝜙ଵ⟩⊗|𝜙ଶ⟩), then it is factorizable. Thus, the notions of separability versus
entanglement given in (17) are equivalent to the usual notions of separability versus
entanglement in the tensor product of Hilbert spaces.
Consider now spaces of density operators D(ℋଵ), D(ℋଶ), D(ℋ), then D(ℋ) is isomorphic to
tensor product D(ℋଵ) ⊗ D(ℋଶ). The notions of separability vs. entanglement for density
operators (mixed states) is transferred to the space D(ℋ).
Denote by the symbol L(M) the space of linear operators acting in a complex Hilbert space
𝑀. We recall that we consider the finite-dimensional case.
In Hilbert space ℋଵ⊗ℋଶ consider two operator algebras:
𝑨𝟏 ={ 𝐴ଵ= aଵ⊗ I: aଵ ∈ L(ℋଵ) }, 𝑨𝟐 ={ 𝐴ଶ= I⊗ aଶ: aଶ ∈ L(ℋଶ) }. (18)
Hermitian operators belonging to these algebras are called “local observables.” For our
neurophysiological applications, it is important to note that this is tensor-product locality;
generally, it has nothing to do with space-time locality. The images of these algebras in
L(ℋ) are denoted as 𝑨𝒊(ℋ), i=1,2, or simply 𝑨𝒊 . These local algebras induce the structure of
the tensor product of operators in ℋ; for 𝐴 ∈ 𝑨 𝒊(ℋ), i=1,2, we set 𝐴ଵ⊗𝐴ଶ= 𝐴ଵ∘𝐴ଶ=
𝐴ଶ∘𝐴ଵ. We remark that if 𝐴ଵ ∈ 𝑨 𝟏(ℋ) , 𝐴ଶ ∈ 𝑨 𝟐(ℋ), then [𝐴ଵ, 𝐴ଶ] = 0; so Hermitian
operators from algebras 𝑨𝟏(ℋ) and 𝑨𝟐(ℋ) represent compatible observables.
To clarify the meaning of entanglement (which up to now has been treated from a
mathematical viewpoint), consider the four-dimensional case and the singlet state
|𝜓⟩= (|+⟩|−⟩−|−⟩|+⟩)/√2. (19)
It is entangled according to the above mathematical definition. But what does it mean
physically and biologically (Sections 5, 6)? As noted, this formula encodes correlations
between the outcomes of two observables 𝐴ଵ and 𝐴ଶ: the conditional probabilities 𝑃(𝐴ଶ=
±|𝐴ଵ= ∓) = 1 as well as 𝑃(𝐴ଵ= ±|𝐴ଶ= ∓) = 1. These correlations, for each pair of such
observables, are purely classical. “Quantumness” appears because of the existence of
incompatible observables, 𝐴 , 𝐵 ∈ 𝑨 𝒊(ℋ) such that [𝐴, 𝐵] ≠0, 𝑖= 1,2. Here,
noncommutativity expresses the impossibility of joint measurements of two observables.
The singlet state can also be represented as
|𝜓⟩= (|𝐶= +⟩|𝐷= −⟩−|𝐶= −⟩|𝐷= +⟩)/√2, (20)
where 𝐶= 𝐴ଵ, 𝐷= 𝐴ଶ or 𝐶= 𝐵ଵ, 𝐷= 𝐵ଶ and the operators 𝐵 can be selected as non-
commuting with 𝐴, 𝑖= 1,2. Thus, this entangled state encodes correlations between
families of local observables that are jointly non-measurable. Classical probability
describes only correlations for families of jointly measurable observables. This represents
the incompatibility (noncommutativity) interpretation of entanglement.
5. Entanglement of observables on neuronal circuits
Here we employ the observational viewpoint on entanglement presented in Section 4. Let 𝑆
be a network with 𝑁= 𝑁ଵ𝑁ଶ neuronal circuits. Let 𝐴ଵ and 𝐴ଶ be observables on 𝑆 as in
Section 4. The only new component is that these QL observables are coupled to the
network as the QL images of the corresponding quadratic forms 𝑄భ and 𝑄మ of ROs in 𝑆.
Neuronal node-circuits 𝔰, 𝑖= 1, . . . , 𝑁, are renumerated as 𝔰, 𝑖= 1, . . . , 𝑁ଵ, 𝑗= 1, . . . , 𝑁ଶ.
The biological counterpart of this mathematical construction is that, for each node-circuit,
both observables 𝐴ଵ and 𝐴ଶ can be jointly measured, as well as any two observables from
the operator algebras 𝑨𝟏 and 𝑨𝟐 .
If a circuit 𝔰 is not activated, then in the classical representation 𝑧𝔰= 0 a.e., where 𝑧𝔰 is the
random variable describing ROs in 𝔰.
Consider one node-circuit 𝔰= 𝔰. Let only this circuit be activated in the network 𝑆. In the
classical representation, all random variables 𝑧𝔰= 0 a.e. for 𝔰≠𝔰 and 𝐸[|𝑧|ଶ] ≠0, where
we set 𝑧≡𝑧𝔰ೖ. In the QL representation, 𝑆 is in the pure state |𝑘⟩= |𝑖𝑗⟩. A measurement of
the observables 𝐴ଵ and 𝐴ଶ on the network 𝑆 gives the outputs 𝐴ଵ= 𝑎ଵ and 𝐴ଶ= 𝑎ଶ with
probability 1.
Let only two node-circuits be activated in 𝑆: 𝔰= 𝔰భభ, 𝔰= 𝔰మమ, that is, in the classical
representation, 𝑧= 0 a.e. for 𝑛≠𝑘, 𝑚 and 𝐸[|𝑧|ଶ], 𝐸[|𝑧|ଶ] ≠0. Let ROs in 𝔰, 𝔰 be
correlated, generating the covariance matrix 𝐶 with the elements 𝑐= 1, 𝑐= 1, ´𝑐=
−1, 𝑐= −1 and other elements in 𝐶 equal to zero. Hence, there is perfect anti-
correlation between ROs in circuits 𝔰 and 𝔰. The corresponding QL state 𝜌= 𝐶/2 =
|𝜓⟩⟨𝜓|, where |𝜓⟩ is the singlet state (19).
Let 𝑖ଵ= +, 𝑗ଵ= −, 𝑖ଶ= −, 𝑗ଶ= +. The circuits 𝔰ାି, 𝔰ିା generate ROs
𝑧= 𝑧ାି| + −⟩−𝑧ିା| −+⟩,
where 𝑧ାି= 𝑧ିା a.e. Thus, the singlet state (19) is the QL image of this random variable.
Such correlation is purely classical and does not represent the essence of the QL
framework. As was already noted in 4, the strength of employing the QL linear space
representation lies in the possibility (for the brain) of using a variety of bases. We have
considered the neuronal basis, but the same singlet state carries anti-correlations in
various bases (see (20)), whose elements are not localized in specific neuronal circuits.
Formally (mathematically), the neuronal network S can be represented as a compound
system 𝑆= (𝑆ଵ, 𝑆ଶ) of two systems 𝑆ଵ and 𝑆ଶ with the state spaces ℋଵ and ℋଶ . In this
observational framework, these systems are not identified with specific neuronal networks
(cf. Section 6). They are formally extracted from S with the aid of two algebras of
commuting observables, 𝑨ଵ and 𝑨ଶ . Measurements performed by observables belonging
to 𝑨 are formally treated as “local observables” for subsystem 𝑆, 𝑖= 1,2.3
The correlations of local observables can be represented as the QL-average. For 𝐵ଵ= 𝑏ଵ⊗
𝐼 and 𝐵ଶ= 𝐼⊗𝑏ଶ,
⟨𝑏ଵ𝑏ଶ⟩ = Tr 𝜌 𝑏ଵ ⊗𝑏ଶ, 𝜌=
் , (21)
where 𝐶 is the covariance operator of ROs in 𝑆 which are represented by a random variable
𝑧 valued in tensor product Hilbert space ℋ =ℋଵ⊗ℋଶ and not in Cartesian product Hilbert
space ℋଵ ⊕ ℋଶ . As was pointed out in Section 3, such a correlation can be presented in
the classical probabilistic framework as
⟨𝑏ଵ𝑏ଶ⟩ =
ଵ
் = ∫ℋభ⊗ℋమ⟨ 𝑏ଵ ⊗𝑏ଶ w|w⟩ d 𝑝௭ (w) . (22)
Entanglement is the Hilbert space expression of correlations between local observables—
from algebras 𝑨 𝟏(ℋ) and 𝑨 𝟐(ℋ). The crucial difference from the classical probabilistic
representation is that these algebras contain incompatible observables which cannot be
jointly measurable.
Finally, we stress once again that decomposition of a neuronal network 𝑆 into subsystems,
𝑆= (𝑆ଵ, 𝑆ଶ), is not physical. Subsystems are virtual, and they express biological functions
of 𝑆, not its neuronal architecture. Decomposition is not unique; even dimensions of the
3 Although subsystem 𝑆 cannot be identified with a physical network of node-circuits, for external observer
(who cannot “open the brain” and see the individual neuronal structure of 𝑆), subsystems 𝑆, 𝑖= 1,2, “exist”
and their existence is determined via local observations.
components of the tensor product can vary for the same biophysical neuronal network 𝑆,
say if 𝑁= 12, we can factorize it as 𝑁ଵ= 3, 𝑁ଶ= 4 or 𝑁ଵ= 2, 𝑁ଶ= 6.
6. Entanglement of neuronal states
We start with a recollection of the notion of entanglement for pure and mixed states.
A pure state |𝜓⟩ belonging to tensor product of two Hilbert spaces ℋ = ℋଵ ⊗ ℋଶ is called
separable if it can be factorized as
|𝜓⟩= |𝜓⟩ଵ⊗ |𝜓⟩ଶ, where |𝜓⟩ ∈ ℋ, (24)
otherwise a pure state is called entangled. A mixed state given by a density operator 𝜌 is
called separable if it can be represented as a mixture
𝜌 = ∑𝑝 𝜌ଵ⊗𝜌ଶ , (25)
where 𝜌 ∈ ℋ and the weights (𝑝) form a probability distribution, 𝑝> 0, ∑𝑝= 1. A
non-separable state is called entangled. For pure states, it is straightforward to decide
whether a state is separable or not. For mixed states, it is very difficult.
Although these definitions are commonly accepted in quantum theory, a careful reading of
the original works of Schrödinger [74] may give the impression that he discussed
“entanglement of observable” (cf. with discussion in [60]).
Consider now two networks 𝑆ଵ and 𝑆ଶ consisting of neuronal circuits 𝔰ଵ,, 𝑗= 1, . . . , 𝑁ଵ, and
𝔰ଶ,, 𝑗= 1, . . . , 𝑁ଶ, respectively. As before, for simplicity, suppose that each circuit generates
one complex dimension. So, in QLM for 𝑆ଵ and 𝑆ଶ, the complex Hilbert spaces ℋଵ and ℋଶ
have dimensions 𝑁ଵ and 𝑁ଶ. ROs in them are mathematically represented as random
vectors z∈ℋ, i=1,2; here 𝑧= (𝑧,ଵ, . . . , 𝑧,ே).
Consider now the network 𝑆⊕ consisting of node-circuits of 𝑆ଵ and 𝑆ଶ and its graph
picturing. The set of nodes of the graph 𝐺ௌ⊕ is the union of the sets of nodes of the graphs
𝐺ௌభ and 𝐺ௌమ; the set of its edges includes all edges of 𝐺ௌభ and 𝐺ௌమ as well as additional edges
representing the connections between some nodes of 𝐺ௌభ and 𝐺ௌమ. According to our
approach, the edge structure of the graph 𝐺ௌ⊕ is is not visible in the QL representation,
which is instead based on the correlation graph 𝐺ୗ_ଵ⊕ ୗ_ଶ; ୡ୭୰ . The edges of this graph
correspond to nonzero correlations between ROs in the corresponding nodes.
In QLM for such a network, its complex Hilbert space ℋௌ⊕=ℋଵ⊕ℋଶ has the dimension
(𝑁ଵ+ 𝑁ଶ). ROs in 𝑆⊕ are mathematically represented the random vector z= (zଵ , zଶ)
∈ℋௌ⊕, so in the node-basis z= (zଵ ... zேభାேమ ). The covariance matrix of ROs has the
dimension (𝑁ଵ+ 𝑁ଶ)ଶ. This dimension does not match the dimension of a density matrix for
a quantum compound system, that is 𝑁ଶ, 𝑁= 𝑁ଵ𝑁ଶ. Such a network cannot be used for
generating entanglement.
We suggest the following construction of a compound network 𝑆⊗ whose complex Hilbert
space is not a direct sum but a tensor product, ℋௌ⊕=ℋଵ⊕ℋଶ , and the covariance matrix
of ROs in 𝑆⊗ has the dimension 𝑁ଶ. Creation of the network 𝑆⊗ that is able to generate
entangled states is characterized by the emergence of new circuits not present (or
activated) in 𝑆⊕.
Each pair of circuits 𝔰ଵ,∈𝑆ଵ and 𝔰ଶ,∈𝑆ଶ is combined into the new circuit 𝔰. How can
such a compound circuit be created? Since 𝔰 consists of the same neurons as the circuits
𝔰ଵ,, 𝔰ଶ, the only new structure in the circuit 𝔰 arises from generating (activation) new
channels for communication between neurons in 𝔰ଵ, and neurons in 𝔰ଶ;. These channels
can be physical axon-dendrite connections activated in the network 𝑆⊗ (but not active
before). Another testable hypothesis is that entangling channels are routes for
electromagnetic signaling between neurons across the circuits 𝔰ଵ, and 𝔰ଶ, [75,66,76,77].
Chemical signaling may also contribute to the formation of the entanglement-generating
network 𝑆⊗, albeit on slower time scales.
One can hypothesize that the brain can generate entangled networks using various
communication channels to perform different tasks. Generally, the three sorts of
communication channels mentioned can be involved in the creation of circuits 𝔰∈𝑆⊗;
each such circuit is given by the triple
𝔰= (𝔰ଵ,, 𝑒, 𝔰ଶ,), (25)
where 𝑒 denotes the signaling channel between the neuronal circuits 𝔰ଵ, and 𝔰ଶ,. ROs in
this circuit are described by a random variable 𝑍= 𝑍(𝜔), where 𝜔 is a chance
parameter. Such ROs generate the covariance matrix 𝐶;= 𝐸[𝑍𝑍⃐], whose elements
are the correlations between circuits 𝔰 and 𝔰. This matrix has the dimension (𝑁ଵ𝑁ଶ)ଶ. In
QLM, the corresponding density matrices are obtained via normalization by trace; in the
operator representation, they act in the complex Hilbert space ℋ of the dimension 𝑁=
𝑁ଵ𝑁ଶ.
We remark that these compound circuits need not be connected at the physical level, e.g.,
by an axon-dendrite connection. Signals propagating in channels 𝑒 and 𝑒 generates
electromagnetic fields, and they can be non-trivially correlated (see Section 7 on ephaptic
generation of such correlations).
Thus, circuits 𝔰 are vertices of the graph 𝐺ୗ_ଵ⊗ ୗ_ଶ; ୡ୭୰ ; its edges 𝐸, connect these
vertices and represent correlations between ROs in circuits 𝔰 and 𝔰. We discuss only
the graph 𝐺ୗ_ଵ⊗ ୗ_ଶ; ୡ୭୰ which edges 𝐸, represent correlations between corresponding
circuits 𝔰 and 𝔰. The graph of physical connections is not essential for QLM.
What is about the tensor-product structure of the Hilbert space ℋ? Let only one circuit 𝔰
is activated and let it generate ROs 𝑍. In the corresponding (self-)covariance matrix 𝐶
only one element is nonzero, namely, 𝑐;= 𝐸[|𝑍|ଶ] ≠0. In QL, such a covariance matrix
is represented by the pure state |𝑖𝑗⟩∈ ℋ (or the density operator 𝜌= |𝑖𝑗⟩⟨𝑖𝑗|).
Now consider the compound neural network 𝑆= (𝑆ଵ, 𝑆ଶ) with an arbitrary pattern of ROs. In
QLM, its covariance matrix 𝐶 is given by the density matrix
𝜌 = 𝐶 / Tr 𝐶 = ∑, 𝑟, |𝑖𝑗⟩⟨𝑘𝑚|.
Hence, the state space of density operators acting in ℋ is represented as the tensor
product D(ℋ)=D(ℋଵ) ⊗D(ℋଶ), where ℋ is the Hilbert space for the network 𝑆, 𝑖= 1,2.
Up to now, we have followed the standard quantum framework for a compound system 𝑆=
(𝑆ଵ, 𝑆ଶ). As usual, we can consider the marginal states of 𝜌 generated by partial tracing,
𝜌ଵ= 𝑇𝑟ℋమ 𝜌, 𝜌ଶ = 𝑇𝑟ℋభ 𝜌 . (26)
In the quantum formalism, these states are interpreted as the states of the subsystems 𝑆ଵ
and 𝑆ଶ of the system 𝑆= (𝑆ଵ, 𝑆ଶ). However, in our neuronal model, we can consider the
states 𝜌ௌభ and 𝜌ௌమ of the neuronal networks 𝑆ଵ and 𝑆ଶ in the absence of signaling between
them. We remark that the network 𝑆ଵ⊗𝑆ଶ is created via activation of the cross-connection
between neurons in 𝑆ଵ and 𝑆ଶ and the inter-network connections contribute to signaling
between 𝑆ଵ and 𝑆ଶ only indirectly. In short, the correlations 𝑐ௌ;/= 𝐸[𝑧,𝑧⃐,], 𝑚= 1,2,
cannot be reconstructed from the covariance matrix 𝐶 of correlations in 𝑆ଵ⊗𝑆ଶ and the
subsystems’ QL states 𝜌ௌ, 𝑚= 1,2, are not equal to the marginal states 𝜌.
We consider the observables 𝐴ଵ and 𝐴ଶ. If only the circuit 𝔰 is activated, then 𝐴ଵ= 𝑖, 𝐴ଶ=
𝑗 with probability 1. In QLM, they are represented by the operators, which are diagonal in
the basis (|𝑖𝑗⟩). Then we can use the construction from Sections 4, 5—observational
entanglement. In particularly, we create the tensor-product structure,
ℋ=ℋଵ ⊗ℋଶ.
7. Ephaptic entanglement
This is an appropriate place to point out ephaptic coupling between neuronal structures in
the brain [75,66,76]. This coupling enables communication between neurons that differs
from direct systems based on physical connections, such as electrical synapses and
chemical synapses. Through this mechanism, signals in nerve fibers can be correlated as a
result of local electric fields. Ephaptic coupling can generate synchronization of action
potential firing in neurons [77].
Recently, this coupling was highlighted in article [78] on modeling nonlocal representation
of memories:
“It is increasingly clear that memories are distributed across multiple brain areas. Such
‘engram complexes’ are important features of memory formation and consolidation. Here,
we test the hypothesis that engram complexes are formed in part by bioelectric fields that
sculpt and guide neural activity and tie together the areas that participate in engram
complexes. ... Our results ... provide evidence for in vivo ephaptic coupling in memory
representations.”
Our QL formalism describes such nonlocal correlations and, in particular, memories
distributed across multiple brain areas. Such nonlocal memories are encoded in QL states.
Here we again recall our basic conjecture that the brain explores the QL representation,
i.e., it operates not with oscillations in neuronal networks but with the corresponding
covariance matrices.
Thus, the creation of the entanglement-generating network 𝑆⊗ is a process. At each
instance in time, the character of signaling between neurons in the circuits 𝔰ଵ, and 𝔰ଶ,
plays a crucial role in the generation of a new circuit 𝔰 and a specific state of 𝑆⊗,
entangled or not.
8. Toward experimental verification of mental
entanglement
8.1. Experimental framework for entanglement of neuronal networks
Here we consider the simplest case of the model for entanglement of two neuronal
networks presented in Section 6 (see also [55]). In this way, we can generate only a
restricted set of entangled states (in contrast to the general model). Nevertheless, some
examples of entangled states can still be obtained.
The nodes of the compound network 𝑆ଵ⊗𝑆ଶ are given by the pairs of nodes of the
networks 𝑆ଵ and 𝑆ଶ,
𝔰= (𝔰ଵ,, 𝔰ଶ,), (27)
and ROs in these node-circuits are described as the random variables 𝑍= 𝑧ଵ,𝑧ଶ,, where
the random variables 𝑧ଵ, and 𝑧ଶ, describe ROs in node-circuits of corresponding neuronal
networks. Hence, the elements of the cross-covariance matrix 𝐶 are given as
𝑐,= 𝐸[𝑍𝑍⃐] = 𝐸[𝑧ଵ,𝑧ଶ,𝑧⃐ଵ,𝑧⃐ଶ,] (28)
Consider now two spatially separated areas in the brain and two sets of electrodes 𝔰ଵ,, 𝑖=
1, . . . , 𝑁ଵ, and 𝔰ଶ,, 𝑗= 1, . . . , 𝑁ଶ, coupled to the respective areas. Their outputs correspond to
random variables 𝑧ଵ, and 𝑧ଶ,. Under the assumption of ergodicity, we can identify
statistical averages with time averages. We center the random variables by subtracting
their averages. We calculate the cross-covariance matrix. Then, by using a test of
entanglement (Section 9), we determine whether the corresponding density matrix
represents an entangled or separable state.
Since directly measuring correlations between signals generated by individual neurons in
the brain is experimentally complicated (but cf. Section 8.4), it is natural to employ
EEG/MEG techniques to measure correlations between signals generated in spatially and
functionally separated brain areas (see Section 8.3 for further discussion of this proposal).
We also mention fMRI as a possible modality, but its limited temporal resolution makes it
less suitable for detecting fast or transient correlations as required for entanglement. EEG
or MEG would be more appropriate owing to their millisecond-scale resolution.
Note for implementation.
Practical EEG/MEG implementation details (preprocessing, spectral estimation, window
length 𝑇, and statistical controls) are summarized in Section 8.2; see also standard
references [72,69,71].
8.2. EEG/MEG implementation: minimum requirements
Prefer source-space analyses and state whether leakage correction was applied (e.g.,
symmetric orthogonalization [79]). Note that sensor-space signals are linear mixtures of
underlying sources; therefore, source-space analyses with leakage correction are preferred
wherever possible. Report the reference scheme, artifact handling (e.g., Independent
Component Analysis (ICA) for EOG/EMG), and filtering (including notch)[72]. For zero-lag
confounds, accompany coherence or Phase-Locking Value (PLV) with imaginary coherence
and/or wPLI (with limitations noted [80,81]). These analyses quantify statistical
dependence and do not by themselves establish directionality or causation. Specify
spectral estimation (Welch or multitaper) and effective degrees of freedom; choose 𝑇 to
cover 5–10 cycles (Δ𝑓≈1/𝑇) [69,70,71]. Use matched surrogates (trial shuffling or phase
randomization) and correct for multiple comparisons [72]. List reproducibility items: SNR,
number of tapers/segments, leakage correction, inverse model or regularization, and
whether analyses were performed in sensor or source space.
8.3. Parallels between classical EEG-based neurophysiological
approaches and QLM of cognition
It is important to emphasize that our QL modeling framework shares methodological
parallels with well-established neurophysiological tools used in EEG/MEG-based studies
[82,83,84]. Both approaches rely on the analysis of covariance structures and correlations
among signals.
Functional connectivity in neuroscience
Functional connectivity refers to the statistical dependencies between spatially distinct
neuronal populations. It is formally defined as the temporal correlation between
neurophysiological signals recorded at different sites in the brain, such as EEG channels.
Mathematically, common measures of functional connectivity include:
Covariance.
For two time series 𝑥(𝑡) and 𝑦(𝑡), the covariance is defined as:
Cov(𝑥, 𝑦) =
ଵ
்∑
(𝑥(𝑡) −𝜇௫)
்
௧ୀଵ
൫𝑦(𝑡) −𝜇௬൯, (29)
where 𝜇௫, 𝜇௬ are the mean values of 𝑥(𝑡) and 𝑦(𝑡), and 𝑇 is the total number of time points.
Pearson correlation coefficient.
𝑟௫௬= Cov(𝑥, 𝑦)
𝜎௫𝜎௬
where 𝜎௫, 𝜎௬ are the standard deviations of the signals.
Coherence.
Coherence measures frequency-specific linear correlations between signals:
𝐶௫௬(𝑓) =
|𝒮௫௬(𝑓)|ଶ
𝒮௫௫(𝑓)𝒮௬௬(𝑓)
where 𝒮௫௬(𝑓) is the cross-spectral density, and 𝒮௫௫(𝑓) and 𝒮௬௬(𝑓) are the auto-spectral
densities. (Estimator notes: Welch or multitaper estimators are commonly used; see
[69,70,71,85]).
Phase-Locking Value( PLV).
PLV quantifies phase synchronization between two signals:
PLV = อ1
𝑇𝑒൫థೣ(௧)ିథ(௧)൯
்
௧ୀଵ
อ
where 𝜙௫(𝑡) and 𝜙௬(𝑡) are the instantaneous phases, typically extracted using the Hilbert
transform or wavelet methods (following [83] ).
Scope of functional connectivity (FC) metrics.
These metrics quantify statistical dependence but not directionality or causation;
directional inferences require separate analyses and explicit assumptions.
Zero-lag confounds and robust metrics.
To mitigate volume conduction and common reference effects, the imaginary part of
coherency and/or the weighted phase-lag index (wPLI) should be reported alongside
classical metrics [80,81]. However, these approaches do not eliminate all leakage in
sensor space; whenever possible, analyses should be performed in source space with
leakage correction (see Section 8.2).
Applications.
FC networks are generated by computing these measures pairwise across brain regions,
producing a connectivity matrix that is subsequently analyzed for modularity, hub
architecture, and network dynamics. These methods are extensively applied to investigate
cognition, neurological disorders, and stimulus-driven responses [86,82,87,72].
For tutorials and discussions of interpretational pitfalls, see.
Parallels with QLM
In the QL framework, similar mathematical constructs arise naturally. The density matrix or
generalized covariance matrix represents probabilistic dependencies among cognitive
observables, analogous to FC matrices in EEG studies. Furthermore, off-diagonal terms in
QL states encode interference-like effects, comparable to EEG phase synchrony measures
such as PLV and coherence.
Thus, the QL formalism extends conventional measures by providing a richer probabilistic
interpretation grounded in generalized state representations.
Finally, the concept of entanglement in QL modeling can be compared—at a conceptual
level—with strong inter-regional correlations identified in FC analyses, where clusters of
brain regions operate as integrated modules. This analogy is heuristic and does not
indicate equivalence of constructs.
This is an appropriate place to note that classical signal analysis of brain function
frequently employs the analytic signal representation via the Hilbert transform (see, e.g.,
[85,72,83]). The established use of such complex-valued representations suggests another
avenue for QL-style modeling of brain dynamics. We plan to develop such a model in a
future paper.
Summary of parallels
Key takeaway. The correspondence between EEG/MEG FC and QL constructs is
conceptual: both capture dependencies through second-order structure
(covariances/coherences vs. density matrix off-diagonals). This analogy is heuristic and
does not imply equivalence of constructs, measurement units, or causal mechanisms.
Comparison of EEG/MEG-based methods and QL modeling of cognition (conceptual
mapping; not a one-to-one equivalence).
Concept
EEG/MEG Neurophysiology
QL Modeling of Cognition
Covariance
Covariance / Correlation
Density matrix (covariances)
Synchrony
Coherence / Phase-locking (PLV)
Interference (off-diagonals)
Network Correlations
Functional networks
Entanglement
Practical implications.
Treat FC metrics as measures of statistical dependence, not directionality; use
source space with leakage correction and report imaginary coherency/wPLI when
applicable (Section 8.2).
When robust FC modules persist under stringent controls, QL analyses can quantify
nonseparability via mixed-state entanglement measures (Section 9); Bell-type tests
face signaling constraints (see Appendix B).
8.4. In vitro neuronal networks
In vitro neuronal networks are cultures of neurons that replicate key aspects of network
structure and function. In such preparations, signals from individual neurons or defined
circuits can be recorded directly; connectivity can be patterned, and currents across
connections between spatially separated subnetworks can be measured. Although
experimentally demanding, these paradigms are feasible and align closely with our
framework (cf. Section 8.1).
The experimental testing of QL entanglement in vitro is increasingly practical owing to
advances in multi-electrode arrays (MEAs), which allow simultaneous stimulation and
recording from dozens to hundreds of sites.
One promising approach is patterned electrical stimulation to impose structured
correlations between distinct subpopulations—for example, time-locked or phase-
modulated sequences delivered to spatially separated regions to create controlled
coupling patterns.
Additionally, pharmacological modulation offers a complementary route, e.g.:
Bicuculline (GABA antagonist) to increase network excitability and enhance
synchrony;
Carbachol or other acetylcholine agonists to regulate oscillatory dynamics and
increase coherence.
These manipulations can serve to test QL nonseparability by:
1. engineering structured couplings via stimulation or pharmacology,
2. recording the resulting activity with MEAs, and
3. analyzing correlations using QL-inspired criteria (e.g., separability bounds).
Such protocols provide a concrete route toward evaluating QL entanglement while
maintaining continuity with established neurophysiological methods. Finally, experimental
confirmation of QL nonseparability would support quantum-inspired computation with
neuronal networks—QL neuromorphic computing (see [15,54,16,55,44] ).
9. Basic quantitative measures of entanglement for
mixed states
In quantum information theory, the entanglement of mixed states is quantified using
various measures defined through density operators. These measures are critical for
characterizing quantum correlations in composite systems described by mixed states. In
the following, we summarize mixed-state entanglement measures most relevant for
empirical analyses of QL states reconstructed from neural signals. In the context of QLM of
cognition, such measures may be applied to examine “mental entanglement” and complex
cognitive interdependencies.
9.1. Terminology
Throughout this section, ‘entanglement’ refers to the nonseparability of the QL state 𝜌,
constructed from classical neural signals (e.g., EEG/MEG/fMRI-derived time series). We do
not suggest microscopic quantum entanglement in brain tissue; outcomes depend on the
selected subsystem partition and the measurement basis used to define 𝜌 and the partial
transpose.
We start with the definition of the von Neumann entropy of a generally mixed state given by
a density matrix:
𝑆(𝜌) = −Tr(𝜌log𝜌)
It measures the degree of mixedness of the state; 𝑆(𝜌) = 0 if and only if 𝜌 is a pure state.
Hence, it can be used as a measure of the purity of a quantum (or QL) state.
Now let 𝜌 be the density operator of a bipartite system on Hilbert space ℋ⊗ℋ.
9.2. Entanglement entropy
For a pure bipartite state 𝜌= |𝜓⟩⟨𝜓|, the entanglement entropy is defined as the
von Neumann entropy of the reduced state:
𝑆= 𝑆(𝜌), 𝜌= Tr(𝜌)
This quantity measures the degree of entanglement between subsystems 𝐴 and 𝐵 for pure
bipartite states. For pure global states, entanglement is determined by the entropy of the
reduced state: 𝑆(𝜌) > 0 if and only if 𝜌 is entangled (and 𝑆(𝜌) = 0 iff it is a product
state). In contrast, 𝑆(𝜌) = 0 or the linear entropy 𝑆(𝜌) = 1 −Tr 𝜌ଶ= 0 only confirms that
the global state is pure, not whether it is entangled across a given bipartition.
In cognitive and neuronal data, however, pure QL states are not expected: noise,
nonstationarity, and averaging typically yield mixed states. Therefore, mixed-state
entanglement measures are required.
9.3. Negativity and logarithmic negativity
Negativity quantifies entanglement by examining the partial transpose of the density
matrix:
N(𝜌) = (||𝜌்ಳ||ଵ – 1)/2
where 𝜌்ಳ is the partial transpose with respect to the subsystem 𝐵 and ||𝜌்ಳ||ଵ is the trace
norm of 𝜌்ಳ.
Logarithmic negativity is defined as:
𝐸ே(𝜌) = 𝑙𝑜𝑔ଶ ||𝜌்ಳ||ଵ
These are standard entanglement monotones for mixed states; the partial transpose is
taken with respect to the chosen subsystem in the product basis.
9.4. Concurrence (Two-Qubit systems)
For two-qubit mixed states 𝜌, concurrence is defined as:
𝐶(𝜌) = max(0, 𝜆ଵ−𝜆ଶ−𝜆ଷ−𝜆ସ)
where 𝜆 are the square roots of the eigenvalues (in decreasing order) of:
𝑅= 𝜌൫𝜎௬⊗𝜎௬൯𝜌∗൫𝜎௬⊗𝜎௬൯
with 𝜌∗ denoting complex conjugation and 𝜎௬ the Pauli y matrix.
These measures of entanglement quantify quantum correlations between systems. In
quantum theory, there is also a measure used to capture combined classical–quantum
correlations.
For two-qubit systems, concurrence corresponds to the entanglement of formation.
9.5. Quantum mutual information
For mixed states, quantum mutual information measures total correlations:
𝐼(𝐴: 𝐵) = 𝑆(𝜌) + 𝑆(𝜌) −𝑆(𝜌)
If 𝐼(𝐴: 𝐵) = 0, the two systems are uncorrelated (neither classical nor QL correlations). If
𝐼(𝐴: 𝐵) > 0, there are correlations, but this measure does not distinguish QL from classical
components and is not an entanglement monotone.
However, mutual information is not a measure of entanglement, because
Nonzero for Separable States: Even separable (non-entangled) mixed states can
yield nonzero mutual information because of classical correlations.
Entanglement Monotonicity: Valid entanglement measures must not increase under
local operations and classical communication. Mutual information fails this
criterion because it quantifies all correlations, not exclusively quantum ones.
9.6. Bell inequalities
We note that the degree of violation of Bell inequalities can serve as indirect evidence for
entangled observables; however, such tests in cognitive or neuronal contexts are
challenging to implement (see Appendix B for discussion).
10.
Conceptual differences and added value of the
QL framework
While there are important methodological parallels between our QL framework and
classical neurophysiological approaches, it is essential to emphasize the fundamental
conceptual innovations that distinguish the QL model.
Most classical models in neuroscience—such as Principal Component Analysis (PCA), ICA,
and Integrated Information Theory (IIT)—are based on analyzing statistical dependencies or
decomposing neural signals into independent or maximally informative components.
These approaches often assume linearity, Gaussianity, or specific information-theoretic
structures, and they generally function within the framework of classical probability theory.
In contrast, the QL framework introduces the formalism of operator-based observables and
tensor-product structures, adapted from quantum theory but applied to macroscopic
neuronal information processing. These mathematical tools allow the formal
representation of several fundamental cognitive phenomena, including:
Contextuality: The outcome of cognitive measurements depends on the context,
similar to the contextuality of quantum measurements.
Incompatibility: Certain cognitive observables cannot be simultaneously measured
or precisely assigned, reflecting uncertainty and complementarity in cognition.
Entanglement: Complex dependencies and holistic cognitive states can be
modeled through entangled QL states.
Mathematically, these phenomena are naturally expressed using non-commuting
operators and composite Hilbert spaces:
ℋtotal = ℋଵ⊗ℋଶ
where the subsystems correspond to distinct cognitive or neural components.
The density operator (or generalized state) in the QL model encodes both classical
correlations and quantum-like correlations (entanglement), extending beyond covariance-
based approaches such as PCA and ICA.
Added value
By enabling such generalized probabilistic structures, the QL approach formally extends
classical theories. It provides novel ways to model:
Non-classical interference effects in cognition.
Strong contextual dependencies in decision-making.
Holistic, system-level processes inaccessible to classical decompositions.
This conceptual generalization bridges neurophysiological signal processing with higher-
level cognitive modeling, offering an integrated framework for studying cognition beyond
traditional statistical tools.
11.
Concluding remarks
This paper represents a step toward constructing a conceptual and mathematical bridge
between oscillatory cognition—defined as the rhythmic activity of neuronal networks—and
QL models of cognition, which have been successfully applied to explain contextuality,
interference, and entanglement-like effects in human behavior and decision-making. This
bridge relies on a fundamental correspondence: QL mental states are represented by
density operators and QL observables by quadratic forms, both of which are
mathematically grounded in the covariance structure of ROs in neural circuits. These
constructions are developed within the framework of PCSFT, which extends beyond the
standard quantum formalism [45,46,47,48].
Previous work [44] has suggested that PCSFT provides a viable interpretational and
computational foundation for QL representations of cognitive phenomena. Here, we focus
on one of the most conceptually challenging issues: the formalization of mental
entanglement—a cognitive analog of quantum entanglement—which we consider crucial
for modeling integrative mental processes involving distributed and interacting brain
networks.
In quantum information theory, entanglement is central to the non-classical correlations
underlying quantum computational advantage. While the physical meaning of
entanglement remains debated—particularly regarding separability and locality—there has
been increasing interest in an alternative formulation: observational entanglement. This
approach avoids problematic metaphysical assumptions and emphasizes statistical
correlations observed through measurements. We adopt this perspective as a more
transparent entry point into modeling cognitive entanglement (see Section 5).
We then proceed to explore state entanglement in the QL framework, treating entangled QL
states as representations of the joint activity of spatially and functionally separated
neuronal assemblies. In this context, mental entanglement provides a natural mechanism
for feature binding—the brain’s capacity to integrate disparate perceptual or cognitive
contents (e.g., color, shape, and motion) into unified conscious experiences. This
formulation suggests a candidate QL solution to the binding problem, long regarded as one
of the central unsolved questions in neuroscience and cognitive science.
Moreover, the introduction of mental entanglement offers a speculative yet potentially
fruitful path toward addressing aspects of the hard problem of consciousness—
specifically, how and why certain brain processes give rise to subjective experience. While
our approach does not resolve the hard problem, it aligns with perspectives proposing that
consciousness may involve non-classical, globally coherent states that resist
decomposition into strictly local mechanisms. If mental entanglement reflects a form of
nonlocal coherence in brain function, it may point to a formal structure compatible with
integrated information and global workspace theories, enriched by QL formalism.
In cognitive neuroscience, numerous studies have shown that neuronal oscillations—
particularly in the gamma, theta, and beta bands—are associated with integrative
functions such as memory binding, attentional selection, and conscious access. Our
model establishes a bridge between these empirical findings and QL cognitive models by
interpreting such oscillatory patterns as classical fields whose covariance structures
define the QL states and observables. This offers a testable link between
neurophysiological processes and the abstract mathematical structures of QL cognition.
We further hypothesize that entangled QL states derived from ROs may underlie enhanced
computational capacities in specific neural subsystems, particularly within the cerebral
cortex and hippocampus. This aligns with evidence of high-performance integrative
processing in these regions and indicates a deeper role for QL representations in modeling
cognitive efficiency.
In Section 8.3, we outline preliminary experimental designs aimed at indirectly detecting
signatures of mental entanglement using EEG/MEG methodologies, focusing on correlation
structures in neural signals that diverge from classical expectations. Although speculative,
these tests are intended to guide future empirical investigations.
In conclusion, the development of the mental entanglement formalism reinforces the
broader conjecture that the brain employs QL representations in cognitive processing. This
framework opens the door to a deeper interdisciplinary synthesis—integrating
neurophysiological data, quantum-inspired mathematical tools, and enduring
philosophical questions in consciousness research.
Acknowledgments
The authors were supported by JST, CREST Grant Number JPMJCR23P4, Japan; A.K. was
partially supported by the EU-grant CA21169 (DYNALIFE); M.Y. was partially supported by
JSPS KAKENHI Grant (23H04830, 22K18265, 23K22379), JST Moonshot R&D Grant
(JPMJMS2295), and MEXT Quantum Leap Flagship Program (MEXT QLEAP) Grant
(JPMXS0120330644).
Appendix A. Coupling symplectic Hamiltonian dynamics
and the Schrödinger equation
PCSFT provides a classical foundation for quantum mechanics 45-48]. A central idea is
that the Schrödinger equation, which lies at the heart of quantum theory, can be derived
from or coupled with symplectic Hamiltonian dynamics on a classical phase space. This
framework examines the quantum-classical interface from both mathematical and
conceptual perspectives. At its core, PCSFT interprets quantum states as labels for
statistical ensembles of classical oscillators.
For 𝑁 classical oscillators, the phase space Φ = 𝑄× 𝑃= ℝே× ℝே. This corresponds to a
real Hilbert space with the scalar product (𝜙ଵ|𝜙ଶ) = (𝑞ଵ|𝑞ଶ) + (𝑝ଵ|𝑝ଶ), where 𝜙=
{𝑞, 𝑝} ∈Φ. This phase space underpins quantum mechanics with a finite-dimensional
state space, the complex Hilbert space ℋ = ℂே, endowed with the scalar product
described in Section 2.
Quantum mechanics on physical space ℝଷ is based on the infinite-dimensional Hilbert
space of square-integrable complex-valued functions, essentially the Hilbert space ℋ =
𝐿ଶ(ℝଷ, ℂ). The underlying classical phase space is given by Φ = 𝐿ଶ(ℝଷ, ℝ) × 𝐿ଶ(ℝଷ, ℝ). The
real and imaginary parts of a wavefunction |𝜓⟩ ∈ ℋ correspond to coordinates and
momenta in an infinite-dimensional phase space Φ.
Any phase space can be endowed with a symplectic structure. In this setting, a symplectic
form 𝜔 is introduced as:
𝜔(𝜙ଵ|𝜙ଶ) = (𝜙ଵ|𝐽𝜙ଶ),
where 𝐽 is the symplectic structure operator defined as
𝐽=
∣∣∣∣∣0
𝐼
−𝐼
0
∣∣∣∣∣
, (30)
where 𝐼, 𝐼 are the unit operators in 𝑄, 𝑃. In the complex Hilbert space ℋ corresponding to
the phase space Φ the operator 𝐽 is represented as the operator of multiplication by 𝑖 (we
remark that ℋ is complexification of Φ, namely, ℋ = Q ⊕ iP.
The Hamiltonian functional 𝐻(𝜙) generates dynamics via
𝜙̇(𝑡) = 𝐽∇𝐻(𝜙(𝑡)).
When the Hamiltonian functional is quadratic, i.e., given by a symplectically invariant
quadratic form 𝐻(𝜙) = (𝜙|𝐻𝜙), the evolution reduces to the linear Schrödinger equation:
𝑖𝑑𝜓
𝑑𝑡(𝑡) = 𝐻𝜓(𝑡)
Thus, the Schrd̈ inger equation appears not as a postulate, but as a dynamical law for
classical phase space dynamics under a symplectic structure.
In PCSFT, a quantum state (wavefunction or density operator) corresponds to the
covariance operator of classical oscillations. Quantum averages emerge from classical
statistical averages over this ensemble, allowing an interpretation in which quantum
randomness is epistemic—arising from incomplete knowledge of underlying classical
oscillations. In this way, quantum mechanics can be understood as a projection or
statistical encoding of a deeper classical theory, with the Schrödinger equation derived
from the Hamiltonian flow of an underlying symplectic system.
We now briefly present the above consideration with 𝑞, 𝑝 coordinates. Any Hamiltonian
function 𝐻(𝑞, 𝑝) generates the system of Hamiltonian equations
𝑞̇ =
பு
ப(𝑞, 𝑝), 𝑝̇ = −
பு
ப(𝑞, 𝑝). (31)
Consider now a quadratic and symplectically invariant Hamiltonian function
𝐻(𝑞, 𝑝) = 1/2[(𝑅𝑝, 𝑝) + 2(𝑇𝑝, 𝑞) + (𝑅𝑞, 𝑞)], (32)
where 𝑅 is a symmetric operator, 𝑅⋆= 𝑅 and 𝑇⋆= −𝑇. The operator (the Hessian of the
Hamilton functions)
𝐻=
∣∣∣∣𝑅
𝑇
−𝑇
𝑅
∣∣∣∣
, (33)
Which commutes with the symplectic operator 𝐽. This is a system of harmonic oscillators,
and it can be rewritten as the Schrödinger equation.
Appendix B. Experimental framework for entanglement of
observables in neuronal circuits and Bell test
The observational approach to entanglement is comparatively simpler, as it does not
require reconstructing the QL state of a compound neuronal network through the full
calculation of its cross-correlation matrix.
Instead, we aim to identify a pair of jointly measurable observables 𝐴ଵ and 𝐴ଶ, associated
with neuronal circuits forming parts of a network 𝑆. The most widely studied method for
detecting entanglement is the Bell test, which evaluates violations of Bell-type inequalities.
Among these, the CHSH inequality is the most frequently applied and provides a natural
framework for detecting and quantifying entanglement, where the degree of entanglement
is reflected in the magnitude of CHSH inequality violation.
Interpretational note. A violation of a Bell inequality rules out a class of local realistic
models under specific assumptions (e.g., no-signaling and measurement independence),
but it does not, by itself, establish directionality or causation among neural processes, nor
does it exclude classical common-drive confounds.
To apply the CHSH test, one must define two pairs of observables, 𝐴ଵ, 𝐵ଵ and 𝐴ଶ, 𝐵ଶ. Within
each pair, the observables must be incompatible, meaning they are not jointly measurable.
Across pairs, the observables must be compatible, meaning they can be jointly measured.
Correlations between cross-pair observables are then computed to test for CHSH
inequality violations.
However, the structure of observables in neuronal circuits—especially in terms of their
compatibility and incompatibility—remains largely unexplored. In QLM, the emergence and
nature of incompatible observables is still under active investigation. Current approaches
often reduce this question to testing for the order effect [18,19], where the sequence of
measurements influences the outcome. Detection of such an effect is typically interpreted
as evidence of incompatibility between QL observables.
Yet, this interpretation relies on the assumption that observables are of the projection (von
Neumann) type. Within the broader framework of quantum instruments, it is possible to
observe the order effect even when the corresponding observables are jointly measurable
[92]. This complicates the direct use of the order effect as an indicator of incompatibility in
general settings.
At present, there is no clear methodology for identifying suitable candidate observables for
the CHSH test in neuronal systems.
Moreover, even in physics, Bell-type experiments posed significant theoretical and
technical challenges. The basic Bell framework [91] is affected by various so-called
loopholes, and it was only in 2015 that loophole-free experiments were successfully
performed. These landmark experiments ultimately led to the awarding of the 2022 Nobel
Prize in Physics to Alain Aspect, John Clauser, and Anton Zeilinger.
In cognitive psychology and decision-making research, Bell-type inequalities have also
been explored, beginning with early foundational and experimental studies [24] (see for
later experiments [6, 25-29]). As in physics, these investigations are complex both
conceptually and empirically. In fact, the situation in cognitive domains may be even more
challenging because of the apparent impossibility of eliminating signaling effects.
Signaling—referring to the presence of direct influences between measurement settings
and outcomes—complicates the interpretation of experimental data. When present,
signaling requires a significantly more sophisticated theoretical framework (see [94] ),
which lies beyond the scope of this article.
Moreover, these investigations have been complemented by theoretical developments that
bridge probabilistic modeling and conceptual compositionality. Bruza et al. [95], for
example, introduced a probabilistic framework to study how meanings of conceptual
combinations emerge, showing that quantum-inspired probabilistic models can effectively
account for observed non-classicalities in meaning composition. Together, these
contributions indicate that Bell-type inequalities and contextuality analyses are not merely
metaphors but tools with empirical and explanatory power in cognitive science. However,
fully accounting for signaling effects and developing corresponding theoretical models
remains an open challenge in the field.
Given these challenges, the experimental framework discussed in Sections 8.1 and 8.3
appears more feasible for near-term implementation.
References
1. Khrennikov A. Contextual reinterpretation of quantum nonlocality. Cambridge:
Cambridge University Press; 2024. doi:10.1017/9781009313469 (Cambridge University
Press & Assessment)
2. Khrennikov A. Information dynamics in cognitive, psychological, social, and anomalous
phenomena. Dordrecht: Kluwer; 2004. doi:10.1007/978-94-017-0479-3 (ResearchGate)
3. Khrennikov A. Ubiquitous quantum structure: From psychology to finances.
Berlin/Heidelberg: Springer; 2010.
4. Busemeyer JR, Bruza PD. Quantum models of cognition and decision. 2nd ed.
Cambridge: Cambridge University Press; 2024.
5. Haven E, Khrennikov A. Quantum social science. Cambridge: Cambridge University
Press; 2013.
6. Asano M, Khrennikov A, Ohya M, Tanaka Y, Yamato I. Quantum adaptivity in biology: From
genetics to cognition. Berlin: Springer; 2015.
7. Bagarello F. Quantum concepts in the social, ecological and biological sciences.
Cambridge: Cambridge University Press; 2019.
8. Khrennikov AY. Open quantum systems in biology, cognitive and social sciences. Berlin:
SpringerௗNature; 2023.
9. Pothos EM, Busemeyer JR. Quantum cognition. Annu Rev Psychol. 2022;73:749–778.
10. Khrennikov A. Open systems, quantum probability, and logic for quantum like modeling
in biology, cognition, and decision making. Entropy. 2023;25(6):886.
11. HameroƯ S. Quantum coherence in microtubules: A neural basis for emergent
consciousness? J Conscious Stud. 1994;1:91–118.
12. Penrose R. The Emperor’s New Mind. New York: Oxford University Press; 1989.
13. Vitiello G. My Double Unveiled: The Dissipative Quantum Model of Brain. Amsterdam:
John Benjamins Publishing (Advances in Consciousness Research); 2001.
14. Vallortigara G, Vitiello G. Brain asymmetry as minimization of free energy: A theoretical
model. R Soc Open Sci. 2024;11(7):240465.
15. Igamberdiev AU. Quantum computation, non demolition measurements, and reflective
control in living systems. Biosystems. 2004;77:47–56.
16. Igamberdiev AU,ௗBrenner JE. Mathematics in biological reality: The emergence of natural
computation in living systems. Biosystems. 2021;204:104395.
17. Atmanspacher H, Filk T, Römer H. Quantum Zeno features of bistable perception. Biol
Cybern. 2004;90(1):33–40.
18. Wang Z, Busemeyer JR. A quantum question order model supported by empirical tests
of an a priori and precise prediction. Top Cogn Sci. 2013;5:689–710.
19. Wang Z,ௗSolloway T, ShiƯrin RM, Busemeyer JR. Context eƯects produced by question
orders reveal quantum nature of human judgments. Proc Natl Acad Sci U S A.
2014;111:9431–9436.
20. Khrennikova P. Order eƯect in a study on US voters’ preferences: Quantum framework
representation of the observables. Phys Scr. 2014;2014(T163):014010.
21. Ozawa M,ௗKhrennikov A. Application of theory of quantum instruments to psychology:
Combination of question order eƯect with response replicability eƯect. Entropy.
2020;22(1):37.
22. Ozawa M,ௗKhrennikov A. Modeling combination of question order eƯect, response
replicability eƯect, and QQ equality with quantum instruments. J Math Psychol.
2021;100:102491.
23. Tsuchiya N,ௗBruza P, Yamada M, Saigo H, Pothos EM. Quantum like qualia hypothesis:
From quantum cognition to quantum perception. Front Psychol. 2025;15:1406459.
24. Conte E,ௗKhrennikov AௗY, Todarello O, Federici A, Mendolicchio L, Zbilut JP. A preliminary
experimental verification on the possibility of Bell inequality violation in mental states.
NeuroQuantology. 2008;6:214–221.
25. Cervantes VH,ௗDzhafarov EN. Snow queen is evil and beautiful: Experimental evidence
for probabilistic contextuality in human choices. Decision. 2018;5(3):193–204.
26. Basieva I,ௗCervantes VH, Dzhafarov EN, Khrennikov A. True contextuality beats direct
influences in human decision making. J Exp Psychol Gen. 2019;148(11):1925–1937.
27. Gallus C,ௗBlasiak P, Pothos EM. Quantifying and interpreting connection strength in
macro and microscopic systems: Lessons from Bell’s approach. Entropy. 2022;24(3):364.
28. Gallus C,ௗPothos EM, Blasiak P, Yearsley JM, Wojciechowski BW. Bell correlations
outside physics. Sci Rep. 2023;13(1):4394.
29. Khrennikova P. Measuring contextuality in investment preferences. Ann Oper Res.
2025:1–31.
30. Khrennikov A. Quantum like brain: “Interference of minds.” Biosystems. 2006;84(3):225–
241.
31. Khrennikova P. A quantum framework for ‘Sour Grapes’ in cognitive dissonance. In:
Quantum Interaction. Berlin/Heidelberg: Springer; 2013:270–280.
32. Ozawa M,ௗKhrennikov A. Nondistributivity of human logic and violation of response
replicability eƯect in cognitive psychology. J Math Psychol. 2023;112:102739.
33. Gunji YP,ௗNakamura K, Minoura M, Adamatzky A. Three types of logical structure
resulting from the trilemma of free will, determinism and locality. Biosystems.
2020;195:104151.
34. Gunji YP,ௗNakamura K. Psychological origin of quantum logic: An orthomodular lattice
derived from natural born intelligence without Hilbert space. Biosystems. 2022;215–
216:104649.
35. Atmanspacher H,ௗFilk T. The Necker–Zeno model for bistable perception. Top Cogn Sci.
2013;5(4):800–817.
36. Conte E,ௗKhrennikov AௗY, Todarello O, Federici A, Mendolicchio L, Zbilut JP. Mental states
follow quantum mechanics during perception and cognition of ambiguous figures. Open
Syst Inf Dyn. 2009;16(1):85–100.
37. Khrennikov A. Quantum like modeling of cognition. Front Phys. 2015;3:77.
38. Melkikh AV,ௗKhrennikov A. Quantum like model of partially directed evolution. Prog
Biophys Mol Biol. 2017;125:36–51.
39. Iriki A,ௗTanaka S. Potential of the path integral and quantum computing for the study of
humanities: An underlying principle of human evolution and the function of consciousness.
Glob Perspect. 2024;5(1):115651.
40. Asano M,ௗBasieva I, Khrennikov A, Ohya M, Tanaka Y, Yamato I. A model of epigenetic
evolution based on theory of open quantum systems. Syst Synth Biol. 2013;7:161–173.
41. Khrennikov A,ௗIriyama S, Basieva I, Sato K. Quantum like environment adaptive model
for creation of phenotype. Biosystems. 2024;242:105261.
42. Fuyama M. Does the coexistence of literal and figurative meanings in metaphor
comprehension yield novel meaning? Empirical testing based on quantum cognition. Front
Psychol. 2023;14:1146262.
43. Fuyama M. Estimating a time series of interpretation indeterminacy in reading a short
story usi43ng a quantum cognition model. In: Proceedings of the Annual Meeting of the
Cognitive Science Society. 2024;46. Retrieved from
https://escholarship.org/uc/item/1sh152qk
44. Khrennikov A,ௗIriki A, Basieva I. Constructing a bridge between functioning of oscillatory
neuronal networks and quantum like cognition along with quantum inspired computation
and AI. arXiv [preprint]. 2025. Available at: arXiv:2506.00040 [q bio.NC]
45. Khrennikov A. A pre quantum classical statistical model with infinite dimensional phase
space. J Phys A Math Theor. 2005;38(40):9051–9073.
46. Khrennikov A. On the correspondence between classical and quantum models: From
statistical mechanics to quantum mechanics. Found Phys. 2006;36:1020–1040.
47. Khrennikov A. Beyond Quantum. Singapore: Pan Stanford Publishing; 2014.
48. Khrennikov A. Characterization of entanglement via non existence of a subquantum
random field. Ann Phys (Berl). 2024;536(9):2400035.
49. De Barros JA,ௗSuppes P. Quantum mechanics, interference, and the brain. J Math
Psychol. 2009;53(5):306–313.
50. De Barros JA. Quantum like model of behavioral response computation using neural
oscillators. Biosystems. 2012;110:171–182.
51. Takahashi T,ௗCheon T. A nonlinear neural population coding theory of quantum cognition
and decision making. World J Neurosci. 2012;2(4):183–186.
52. Busemeyer JR,ௗFakhari P, Kvam P. Neural implementation of operations used in quantum
cognition. Prog Biophys Mol Biol. 2017;130(A):53–60.
53. Scholes GD. Quantum like states on complex synchronized networks. Proc R Soc A.
2024;480(2295):20240209.
54. Amati G,ௗScholes GD. Quantum information with quantum like bits. Phys Rev A.
2025;111(6):062203.
55. Khrennikov A, Ozawa M, Benninger F, Shor O. Coupling quantum like cognition with the
neuronal networks within generalized probability theory. J Math Psychol. 2025;125:102923.
56. Werner RF. Quantum states with Einstein–Podolsky–Rosen correlations admitting a
hidden variable model. Phys Rev A. 1989;40(8):4277–4281.
57. Zanardi P. Virtual quantum subsystems. Phys Rev Lett. 2001;87(7):077901.
58. Zanardi P,ௗLidar DA, Lloyd S. Quantum tensor product structures are observable
induced. Phys Rev Lett. 2004;92(6):060402.
59. Basieva I,ௗKhrennikov A. Conditional probability framework for entanglement and its
decoupling from tensor product structure. J Phys A. 2022;55(39):395302.
60. Khrennikov A,ௗBasieva I. Entanglement of observables: Quantum conditional probability
approach. Found Phys. 2023;53:84.
61. Huynh L, Hong J, Mian A, Suzuki H, Wu Y, Camtepe S. Quantum inspired machine
learning: A survey. arXiv [preprint]. 2023. Available at: arXiv:2308.11269 [cs.LG]
62. EƯenberger F, Carvalho P, Dubinin I, Singer W. The functional role of oscillatory
dynamics in neocortical circuits: A computational perspective. Proc Natl Acad Sci U S A.
2025;122(4):e2412830122.
63. Shor O, Glik A, Yaniv Rosenfeld A, Valevski A, Weizman A, Khrennikov A, Benninger F.
EEG p adic quantum potential accurately identifies depression, schizophrenia and
cognitive decline. PLoS One. 2021;16(8):e0255529.
64. Shor O, Yaniv Rosenfeld A, Valevski A, Weizman A, Khrennikov A, Benninger F. EEG
based spatio temporal relation signatures for the diagnosis of depression and
schizophrenia. Sci Rep. 2023;13(1):776.
65. Shor O,ௗBenninger F, Khrennikov A. Dendrogramic representation of data: CHSH
violation vs nonergodicity. Entropy. 2021;23(8):971.
66. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents—EEG,
ECoG, LFP and spikes. Nat Rev Neurosci. 2012;13:407–420.
67. Kolmogorov AN. GrundbegriƯe der Wahrscheinlichkeitsrechnung. Berlin: Springer;
1933. Kolmogorov AN. Foundations of the Theory of Probability. New York: Chelsea; 1956.
68. Khrennikov A. Buonomano against Bell: Nonergodicity or nonlocality? Int J Quantum Inf.
2017;15:1740010.
69. Thomson DJ. Spectrum estimation and harmonic analysis. Proc IEEE. 1982;70:1055–
1096.
70. Percival DB,ௗWalden AT. Spectral analysis for physical applications. Cambridge:
Cambridge University Press; 1993.
71. Mitra PP,ௗPesaran B. Analysis of dynamic brain imaging data. Biophys J. 1999;76:691–
708.
72. Cohen MX. Analyzing Neural Time Series Data. Cambridge (MA): MIT Press; 2014.
73. Khrennikov A. Born’s formula from statistical mechanics of classical fields and theory of
hitting times. Physica A. 2014;393:207–221.
74. Schrödinger E. Die gegenwärtige Situation in der Quantenmechanik.
Naturwissenschaften. 1935;23:807–812, 823–828, 844–849.
75. Anastassiou CA,ௗPerin R, Markram H, Koch C. Ephaptic coupling of cortical neurons.
Nat Neurosci. 2011;14(2):217–223.
76. Fröhlich F,ௗMcCormick DA. Endogenous electric fields may guide neocortical network
activity. Neuron. 2010;67(1):129–143.
77. Radman T,ௗSu Y, An JH, Parra LC, Bikson M. Spike timing amplifies the eƯect of electric
fields on neurons: Implications for endogenous field eƯects. J Neurosci. 2007;27(11):3030–
3036.
78. Pinotsis DA,ௗMiller EK. In vivo ephaptic coupling allows memory network formation.
Cereb Cortex. 2023;33(17):9877–9895.
79. Colclough GL,ௗBrookes MJ, Smith SM, Woolrich MW. A symmetric multivariate leakage
correction for MEG connectomes. Neuroimage. 2015;117:438–449.
80. Nolte G,ௗBai O, Wheaton L, Mari Z, Vorbach S, Hallett M. Identifying true brain
interaction from EEG data using the imaginary part of coherency. Clin Neurophysiol.
2004;115(10):2292–2307.
81. Vinck M,ௗOostenveld R, van Wingerden M, Battaglia F, Pennartz CMA. An improved index
of phase synchronization for electrophysiological data in the presence of volume
conduction, noise and sample size bias. Neuroimage. 2011;55(4):1548–1565.
82. Friston KJ. Functional and eƯective connectivity: A review. Brain Connect. 2011;1(1):13–
36.
83. Lachaux JP,ௗRodriguez E, Martinerie J, Varela FJ. Measuring phase synchrony in brain
signals. Hum Brain Mapp. 1999;8(4):194–208.
84. Srinivasan R,ௗWinter WR, Ding J, Nunez PL. EEG and MEG coherence: Measures of
functional connectivity at distinct spatial scales of neocortical dynamics. J Neurosci
Methods. 2007;166(1):41–52.
85. Bruns A. Fourier , Hilbert and wavelet based signal analysis: Are they really diƯerent
approaches? J Neurosci Methods. 2004;137(2):321–332.
86. Bastos AM,ௗSchoƯelen JM. A tutorial review of functional connectivity analysis methods
and their interpretational pitfalls. Front Syst Neurosci. 2016;9:175.
87. Bullmore E,ௗSporns O. Complex brain networks: Graph theoretical analysis of structural
and functional systems. Nat Rev Neurosci. 2009;10(3):186–198.
88. Vidal G,ௗWerner RF. Computable measure of entanglement. Phys Rev A.
2002;65:032314.
89. Plenio MB. Logarithmic negativity: A full entanglement monotone that is not convex.
Phys Rev Lett. 2005;95:090503.
90. Wootters WK. Entanglement of formation of an arbitrary state of two qubits. Phys Rev
Lett. 1998;80(10):2245–2248.
91. Clauser JF,ௗHorne MA, Shimony A, Holt RA. Proposed experiment to test local hidden
variable theories. Phys Rev Lett. 1969;23:880–884.
92. Fuyama M,ௗKhrennikov A, Ozawa M. Quantum like cognition and decision making in the
light of quantum measurement theoiiiiiry. Philos Trans A. (in press) 2025. Available at:
http://arxiv.org/abs/2503.05859
93. Bruza PD,ௗFell L, Hoyte P, Dehdashti S, Obeid A, Gibson A, Moreira C. Contextuality and
context sensitivity in probabilistic models of cognition. Cogn Psychol. 2023;140:101529.
94. Dzhafarov EN,ௗKujala JV. Selectivity in probabilistic causality: Where psychology runs
into quantum physics. J Math Psychol. 2012;56:54–63.
95. Bruza PD,ௗKitto K, Ramm BJ, Sitbon L. A probabilistic framework for analysing the
compositionality of conceptual combinations. J Math Psychol. 2015;67:26–38.
i
|
Quantum-like representation of neuronal networks' activity: modeling "mental entanglement" Andrei Khrennikovଵ,∗, Makiko Yamadaଶ,ଷ,ସ ଵ Center for Mathematical Modeling in Physics and Cognitive Sciences Linnaeus University, Växjö, SE-351 95, Sweden ଶ Institute for Quantum Life Science National Institutes for Quantum Science and Technology, Chiba, 263-8555, Japan ଷ Institute for Quantum Medical Science National Institutes for Quantum Science and Technology, Chiba, 263-8555, Japan ସ Graduate 263-8522, Japan *Corresponding author email: Abstract: Quantum-like modeling (QLM) - quantum theory applications outside of physics - are intensively developed with applications in biology, cognition, psychology, and decision-making. For cognition, QLM should be distinguished from quantum reductionist models in the spirit of Hameroff and Penrose and well as Umezawa and Vitiello. QLM is not concerned with just quantum physical processes in the brain but also QL information processing by macroscopic neuronal structures. Although QLM of cognition and decision-making has seen some success, it suffers from a knowledge gap that exists between oscillatory neuronal network functioning in the brain and QL behavioral patterns. Recently, steps toward closing this gap have been taken using the generalized probability theory and prequantum classical statistical field theory (PCSFT) - a random field model beyond the complex Hilbert space formalism. PCSFT is used to move from the classical ``oscillatory cognition'' of the neuronal networks to QLM for decision-making. In this study, we addressed the most difficult problem within this construction: QLM for entanglement generation by classical networks, i.e., "mental entanglement." We started with the observational approach to entanglement based on operator algebras describing "local observables" and bringing into being the tensor product structure in the space of QL states. Moreover, we applied the standard states entanglement approach: entanglement generation by spatially separated networks in the brain. Finally, we discussed possible future experiments on "mental entanglement" detection using the EEG/MEG technique. keywords: Quantum-like modeling; neuronal networks; mental entanglement; decision making; EEG/MEG technique 1. Introduction Intensive development of quantum information theory has transformed the perspective of quantum studies toward an information-based approach to physics. In particular, numerous information-theoretic interpretations of quantum mechanics have been proposed [1]. These interpretations exert both foundational and technological influence. This informatization of quantum physics has also stimulated applications of the methodology and formalism of quantum theory beyond physics, extending to biology, cognition, psychology, decision-making, economics, finance, and the social and political sciences (see, e.g., monographs [2]-[8] and reviews [9,10])-a direction commonly termed quantum-like modeling (QLM). In this paper, we focus on QLM in the domains of cognition and decision-making. Here, QLM must be clearly distinguished from quantum reductionist models advanced by Hameroff and Penrose [11,12], Vitiello [13,14], and Igamberdiev [15,16], who associated cognition and consciousness with quantum physical processes in the brain. Hameroff and Penrose emphasized microtubules, Vitiello referred to quantum field theory and long-range correlations in the brain, and Igamberdiev linked cognition to quantum processes in cells. In contrast, QLM of cognition does not concern quantum physical processes in the brain but rather quantum-like information processing by macroscopic neuronal networks. QLM of cognition and decision-making has been successfully developed; it mathematically describes non-classical features of cognitive phenomena, such as "interference and entanglement of minds." It resolves numerous paradoxes of decision theory and models basic cognitive effects, including conjunction, disjunction, order, response replicability, and Zeno effects (see the mentioned works and articles [17]-[23]). QLM has highlighted the contextuality of cognitive phenomena by applying advanced quantum contextuality theory and, in particular, the machinery of Bell inequalities [24]-[29] (Section 11]). QLM has introduced a novel perspective on rationality versus irrationality and violations of the Savage Sure Thing Principle [4], framed within probabilistic and logical approaches, as violations of Bayesian updating [21,22], the formula of total probability [2,30,3,,6,31], and classical Boolean logic, while incorporating quantum logic [32,33,34]. QLM has successfully described statistical data on bistable perception of ambiguous figures 17,35,36,37, 2], has been applied to biological evolution (including genetic and epigenetic mechanisms) [40,6,41], and, more recently, to aesthetic experiences during book reading [42,43]. Nevertheless, cognitive QLM faces a gap between theoretical descriptions of classical oscillatory neuronal networks in the brain (e.g., the phase space description of harmonic oscillators) and the quantum-like representation of mental states and observables (e.g., density and Hermitian operators). Thus, it remains a phenomenological framework. Recently, progress toward bridging this gap has been achieved [44] within the framework of prequantum classical statistical field theory (PCSFT)-a random field model generating the complex Hilbert space formalism for states and observables [45]-[48]. PCSFT has been employed to connect classical "oscillatory cognition" of neuronal networks with QLM for decision-making.i1 1 See [49]-[54] for other models aimed at coupling neuronal and QL information processing in the brain. We especially highlight the article [55] in that we employ generalized probability (operational measurement) theory. This is a more general formalism than quantum theory in a complex Hilbert space. But all authors exploring QLM work within the In this paper, we proceed to the most difficult problem within such construction: creation of QLM for the generation of entanglement by classical networks-the problem of "mental entanglement." We now outline the content of the paper. Section 2 presents the basic construction for transitioning from oscillatory dynamics of neuronal networks in the brain to the quantumlike (QL) representation. Section 3 links classical and quantum realizations of observables on neuronal networks. Section 4 addresses a specific approach to the notion of entanglement, treating it as the entanglement of observables. In Section 5 this approach is applied to entanglement of observables on neuronal circuits. In Section 6 we examine the standard approach to entanglement as state entanglement and its application to modeling mental entanglement. In Section 7 we discuss the role of ephaptic coupling in generation of correlations between neuronal circuits in the brain. Section 8 concerns possible experimental verification of mental entanglement. This section deserves the special attention of experts in experimental cognitive science and neuroscience. Here, we discuss concrete experimental tests of mental entanglement based on classical electroencephalogram (EEG)/Magnetoencephalography (MEG) measurement techniques (Section 8.1), including comparative analysis with classical EEG-based approaches to functional connectivity in neuroscience (Section 8.3). Section 9 describes the main quantitative measures of entanglement that can be used in experimental tests. In Section 8.4, we analyze the possibility of creating and detecting entanglement in in vitro neuronal networks. Conceptual differences between classical and QL frameworks are discussed in Section 10. Section 11 provides a general discussion on the proposed model. Appendix A concerns the mathematical model for coupling classical oscillatory dynamics, represented by Hamiltonian equations, with quantum dynamics described by the Schrödinger equation. In Appendix B, we examine the possibility of detecting mental entanglement experimentally with Bell inequalities. In physics, entanglement is one of the most intriguing and complex quantum phenomena. Typically, entanglement is treated as state entanglement. In the mathematical formalism of quantum theory, a pure state |ψ⟩ of a compound system S= (Sଵ, Sଶ) is given by a normalized vector belonging to the tensor product of two complex Hilbert spaces, H = Hଵ⊗Hଶ . A state |ψ⟩ ∈ H is called entangled if it cannot be factorized, that is, it does not have the form |ψ⟩= |ψ⟩ଵ⊗|ψ⟩ଶ, where |ψ⟩∈ H, i=1,2. Another less familiar approach to the notion of entanglement is observation entanglement [57-60] based on considering "local algebras" of cross-commuting observables Aଵ and Aଶ . In this view, a tensor product structure on the state space is not preassigned but is generated by these operator algebras. The same state space H can be endowed with multiple tensor-product structures corresponding to different choices of operator algebras. In this paper, we employ both approaches to the notion of entanglement. standard quantum formalism based on complex Hilbert space. So, it is useful to justify the use of this formalism from the neurophysiological viewpoint. Since observation entanglement is less familiar and not widely applied, we complete the paper with a special section devoted to this notion, Section 4. We then proceed to the entanglement of observables on neuronal networks in Section 4. State entanglement generated by a compound neuronal network S= (Sଵ, Sଶ) is discussed in Section 6. Such entanglement and its generation by interacting neuronal networks is of neurophysiological interest. To identify its roots, we must look deeper into brain architecture and communication between neuronal circuits, including those not physically connected through axonal-dendritic pathways. Here, we emphasize the role of electromagnetic signaling, particularly ephaptic coupling between neuronal structures in the brain (Section 7). However, discussion of state entanglement in neuronal networks (Section 4) is primarily theoretical, as experimental detection remains far from reach. Observation entanglement (Section 6) is more promising for experimentation. The central challenge is identifying observables on neuronal networks capable of exhibiting such entanglement. This paper is conceptual and aims to demonstrate the possibility of modeling generally nonlocal correlations in the brain that are mathematically described as quantum state entanglement. At present, the biophysical mechanism of generating such states is not completely clear (see, however, Section 7). This is the place to emphasize that "mental nonlocality," mathematically described as entanglement, is unrelated to the so-called spooky action at a distance often associated with "quantum nonlocality." Unlike quantum physics experiments on spatially separated systems, such as two photons 100 km apart, in cognitive science, we study a small physical system, the brain. Electromagnetic signals connect any two points in the brain almost immediately. Thus, biological nonlocality expressed via entanglement is classical nonlocality generated by electromagnetic signaling between neuronal circuits. Mental entanglement is the mathematical description of nonlocal correlations between observations performed on activated neuronal circuits. The crucial difference from classical correlations is that some observables in local algebras A (associated with the neuronal networks S, i= 1,2) can be incompatible, not jointly measurable. In mathematical terms, such observables are described by non-commuting operators. We note that in article [44], entanglement of neuronal networks was only mentioned in an unconventional framework that avoided tensor products, instead employing the classical Descartes product (see [45]-[48]). This approach may be interesting from the perspective of classical oscillatory cognition. However, by using it, we lose connection with the notion of entanglement as defined in quantum information theory. We also speculate that by exploring the QL representation of mental states and observables, the brain may realize so-called quantum-inspired algorithms [61] and thereby achieve essential enhancement of computational power [62]. In such algorithms, the ability to generate entangled states is important. We remark that QLM based on the QL representation of EEG signals is already applied in medical diagnostics of neurological disorders, including depression, epilepsy, and schizophrenia [63,64]. Such diagnostics work unexpectedly well, but in the absence of a theoretical justification. Mental entanglement can serve as the theoretical basis for EEGbased QL diagnostics, as it mathematically describes nonlocal information processing in the brain. The observed violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality for EEG signals (transformed into dendrograms with clustering algorithms) can also be connected to mental entanglement. 2. QL states of neuronal networks Let S be a neuronal network with oscillatory node-circuits numbered as s, j= 1, . . . , N. Following [62], we do not identify nodes of S with individual neurons; instead, these are neuronal circuits generating oscillations. The state of each circuit s is mathematically described by the complex variable z. Why is it complex? To describe oscillatory dynamics, it is convenient to use two (real) variables (q, p), where q is coordinate (possibly generalized) and p is momentum-the conjugate variable to q. By setting z= (q+ ip)/√2 we move from the real phase space to a complex representation. See Appendix A and article [44] for details. Oscillations in circuits are random-random oscillations (ROs). For each node-circuit s, ROs are expressed as C-valued random variable z= z(ω), where ω is a random parameter. These random variables are correlated. So, S generates random vector z = z(ω) ∈ Cே. The complex linear space Cே is endowed with the scalar product ⟨v|w⟩= ∑v ே w⃐ሬሬ. (1) This is a complex Hilbert space; it is denoted by the symbol H. Such spaces are basic in quantum theory. So, a complex Hilbert space is naturally coupled to the phase space dynamics [47,44] (see Appendix A). Geometrically, a neuronal network S can be represented by a graph Gௌ with nodes given by neuronal circuits s, j= 1, . . . , N. These nodes are connected by edges. In the simplest case, nodes are individual neurons and edges are axon-dendrite connections between neurons. In this case, the network's graph Gௌ is a directed multigraph, with the direction of an edge determined by the axon's origin. Signals propagating through the axon-dendrite network generate correlations between ROs in neurons. These correlations play a crucial role in constructing the QL representation of classical neuronal dynamics. Generally, the structure of connections between node-circuits is very complex and not limited to the axon-dendrite network (see the article for detailed analysis); some connections are established at the molecular level or via electromagnetic fields. The construction of the complete connection graph Gௌ is difficult, if not impossible. Moreover, for real neuronal networks in the brain and body, Gௌ is a hypergraph and its structure varies with time. (We recall that in a hypergraph, edges connect clusters of nodes rather than individual nodes.) In our modeling, we do not employ this extremely complex geometry of connections within S and instead represent the network state by considering correlations between ROs in node-circuits (but cf. with [53-55] where the graph geometry was explored). Thus, instead of the very complex hypergraph Gௌ of electrochemical connections between node-circuits, it is useful to represent S by the graph Gௌ, of correlations between ROs generated in nodes of Gௌ. This approach leads to the QL representation. The set of its nodes coincides with the nodes of the "electrochemical graph" Gௌ, while its edges are determined by nonzero correlations between nodes: if the correlation between ROs in node-circuits s and s is nonzero, these nodes are connected by an edge. Such edges are undirected. Hence, Gௌ is a directed graph, but Gௌ, is an undirected graph. The canonical basis in the linear space H consists of the vectors |1⟩= (10. . .0), ... , |N⟩= (0. . .1). Any vector v H can be expanded with respect to this basis, v= ∑v ே |j⟩. The basis vectors (|j⟩)ୀଵ ே represent the node-circuits of the network S. The node-circuits of S are represented by orthogonal vectors with respect to the scalar product. This orthogonality is a constraint on the model and, in principle, can be omitted. This is the place to remark that the representation of network nodes by vectors in complex Hilbert space is a formal mathematical construction-the node-circuits are classical (macroscopic) neuronal structures. The networks under consideration are not quantum networks; they are classical networks for which we construct the QL representation. Let vector z = z(ω) ∈ H be a random vector representing ROs generated by the neuronal network S. We proceed under the assumption that it has a zero mean value, μ= E[z] = 0. If this is not the case, one can always use the vector (z-μ), which has zero mean value. Consider the covariance matrix of this random vector: C= (c), c= E[zz⃐]. (2) This is the matrix representation of the correlation graph Gௌ, . It is a Hermitian and positive-definite matrix. Its only difference from a density matrix is that C may have a nonunit trace. It is natural to connect the classical ROs in S with quantum formalism, constructing a QL representation of the functioning of S via trace normalization (see [4548]), C → ρ = C / Tr C. (3) Such a density matrix is a QL state generated by ROs in S. We speak of matrices to couple QLM to the neuronal basis. We can proceed in the basisinvariant framework with covariance and density operators. Thus, we refer to operators or their matrices. As in quantum theory, various bases in H can be employed. Bases other than the canonical node basis contain linear combinations of node state vectors, so such states are "non-local" from the perspective of brain geometry, as is the body in threedimensional physical space. This reflects the nonlocality of information processing. The correspondence, ROs → covariance matrix, is not one-to-one; a variety of ROs generate the same C. Moreover, the correspondence C→ρ is also not one-to-one because of normalization scaling. Hence, QLM provides a fuzzy picture of classical random processes in a network.2 Now consider the covariance matrix C such that only one element c≠0. It expresses ROs in S such that all circuits besides s, are inactive (frozen). (For i≠j, condition c= E[|z|ଶ] = 0 implies that the random variable z= 0 almost everywhere). While this is an idealized situation, it remains useful in a mathematical model. The corresponding density matrix represents the projection on the vector |j⟩, ρ= C/c= |j⟩⟨j|. In principle, in this way the circuit-basis vector |j⟩ can be physically generated by activating a single circuit in isolation from others. Now consider ROs with the covariance matrix C௩= (c= vv⃐), where vector v =(vଵ,..., vே) ∈ H . Then C௩= |v⟩⟨v| and ρ௩= |ψ௩⟩⟨ψ௩|, where ψ௩= v/||v||. Thus, such ROs generate pure states of this QLM. Due to the degeneration of correspondence, ROs → covariance (density) matrix, each pure state can be generated by a variety of ROs. What is common between such ROs? As was shown in [45-48], each such random vector z= z(ω) is concentrated in the one-dimensional subspace Lట= {v= c|ψ௩⟩}. If vector v is a non-trivial superposition of the node-basis vectors (|i⟩), that is v= ∑v |i⟩, then ROs generating the QL state ρ௩ are nonlocally distributed; all neuronal nodes |i⟩ with v≠0 are involved in its generation. We note that one of the ways to generate a pure state is to consider deterministic (nonrandom) dynamics in S, z௧ with z= |v⟩, where |v⟩ is normalized to one (see Appendix A). If this initial vector |v⟩ is the eigenvector of QL Hamiltonian, then ρ௩(t) ≡|v⟩⟨v|. Thus, stationary pure states, one-dimensional projections, can be generated as eigenstates of Hamiltonian dynamics in S (see Appendix A). Ensemble versus time averages This is a good place to make the following theoretical remark on the mathematical description of correlations. In classical probability model (Kolmogorov 1933, [67] ), the elements of covariance matrix (2) are calculated as the integrals c= ∫ z ஐ (ω)z⃐(ω)dP(ω), (4) where Ω is a sample space [67] (its points are random parameters or elementary events) and P is the probability measure on Ω. 2 In the real case, a Gaussian distribution is uniquely determined by its covariance matrix and mean value. But in the complex case, only a circularly symmetric Gaussian distribution is uniquely determined by its (complex) covariance matrix. The assumption that ROs in neuronal circuits are circularly symmetric Gaussian makes the correspondence ROs → covariance matrix one-to-one. However, there are no bio-physical arguments supporting such an assumption. However, in experimental research (both in physics and biology) the following time averages are used. For two complex-valued time series x(t) and y(t), the covariance is defined as: Cov(x, y) = ଵ ்∑ x ் ௧ୀଵ (t)y⃐(t), (5) where T is the total number of time points (we proceed under the assumption of zero mean values). The coincidence of ensemble and time averages is a subtle mathematical issue; their equivalence relies on the assumption of ergodicity. This assumption is widely accepted and often applied automatically. In theoretical models, one typically works within the framework of Kolmogorov probability theory, whereas in experimental studies, time averages are generally used. However, the validity of the ergodicity hypothesis in the context of quantum and QL processes remains an open question [68,65]. A detailed discussion of this issue falls beyond the scope of the present paper, and we shall not pursue it further here. The formula (5) raises the question of the selection of a proper time scale for averagingthat is, the selection of the parameter T. In neuroscience, this issue has been discussed, e.g., in [69]-[72]. 3. Classical vs. quantum realizations of observables on neuronal networks Let S be a network with N neuronal circuits. ROs in S are represented by a classical random variable z= z(ω) valued in complex Hilbert space H of dimension N. (Here parameter ω describes randomness in S. ) Consider now a quadratic form Q(z) = ⟨Az|z⟩ on H, where A is a Hermitian matrix. For a random vector z valued in H, we can consider the average of this form, E௭ [Q] = ∫ ஐ⟨ A z(ω)| z(ω) ⟩ dP (ω)= ∫H⟨ Aw|w⟩ d p௭ (w), (6) where Ω is a set of random parameters (elementary events), P is the probability measure, and p௭ is the probability distribution of the H-valued random vector z= z(ω). This average can be coupled to the covariance matrix by the following equality: E௭ [Q] = Tr C A. (7) The right-hand side of this equality is (up to normalization) the quantum formula for calculation of averages of observables that are mathematically represented by Hermitian matrices (operators). In terms of the corresponding density matrix ρ = ρ= C / Tr C we have ⟨A⟩ = Tr ρ A = E௭ [Q] / Tr C = ∫H⟨ Aw|w⟩ d p௭ (w)/ ∫H⟨w|w⟩ d p௭ (w). (8) This formula couples the average ⟨A⟩ఘ of quantum observable A in the state ρ with the average of the corresponding quadratic form of ROs in the neuronal network S. The correspondence rule A↔Q (9) generates matching of quantum and classical averages. One can investigate this coupling in more detail and obtain from the quadratic form Q= Q(ω) a discrete random variable with values a, where (a) is the spectrum of the operator A. Such a discrete random variable is obtained via the threshold detection scheme; the Born rule appears as an approximation of classical probability. This is a mathematically advanced formalism that cannot be presented here (see [73] for rigorous mathematical representation). In this model of classical-quantum coupling, the discretization procedure (threshold detection for quadratic forms of classical neuronal variables) is the source of "quantumness." The phase space representation (Section 22 and Appendix A) is purely classical. All quadratic forms are jointly defined as random variables on the same probability space. However, each threshold detection measurement procedure determines its own probability space. Generally, the corresponding discrete-valued observables cannot be represented as random variables on the same probability space. They may not be jointly measurable, as their measurement procedures need not be compatible. In this model, the appearance of incompatible observables originates from the transition from classical observables, given by quadratic forms, to QL observables mathematically represented by Hermitian operators. Consider now a dichotomous observable A yielding values 0,1. It has two representations, classical and QL. In the QL representation A= E is given by a projector on subspace L; in the classical representation A is given by the quadratic form Qா. Let ROs in S are described by the random vector z with the covariance matrix C, the QL state ρ = C / Tr C. Since A's average is equal to the probability of the outcome A= 1, we have the following coupling between this probability and classical average of Qா, P(A=1| ρ) = E௭ [Qா] / Tr C= ∫H⟨ Ew|w⟩ d p௭ (w)/ ∫H⟨w|w⟩ d p௭ (w). (10) It can be interpreted as the "weight" of ROs in the subspace L=E H relatively to the weight of ROs the whole space H. Thus, this formula connects the outcomes of observations over a neuronal network S with averaging of ROs in it. Generally if A= ∑a E, where (E) is the spectral family of the operator A, then we have P(A=a| ρ) =E௭ [Qாೌ] / Tr C . (11) If operator A has non-degenerate spectrum with eigenvectors (|a⟩), then P(A= a|ρ) = c/ ∑c , (12) where (c) are diagonal elements of the covariance matrix C in the basis (|a⟩). Hence, the probabilities of outcomes are straightforwardly coupled with the elements of the covariance matrix. Let Aଵ and Aଶ be two compatible observables that is they can be jointly measurable. In the QL representation they are described as commuting Hermitian operators Aଵ and Aଶ. Their quantum correlation is given by the formula ⟨AଵAଶ⟩ = Tr ρ AଵAଶ= Tr ρ AଶAଵ. (13) so, in fact, this is the average of the observable A described by the (Hermitian) operator A= AଵAଶ= AଶAଵ. By applying correspondence rule (9) to this observable, we obtain the classical average representation of quantum correlations Tr ρ AଵAଶ= E௭ [Qభమ] / Tr C = ∫H⟨ AଵAଶw|w⟩ d p௭ (w)/ ∫H⟨w|w⟩ d p௭ (w). (14) This possibility of a classical representation of quantum correlations may appear to contradict the violation of Bell inequalities. However, it has been shown that this is not the case [47]. Bell-type inequalities can, in principle, be tested for neuronal networks in the brain (as well as for other types of networks, such as social networks). We examined this for observables determined by EEG measurements in article [65]. On classical phase space, one can consider not only quadratic forms but also arbitrary functions, z→f(z). Can one identify their QL images? In [45-48], the following coupling between classical (functional) and QL (operator) descriptions was considered: f→ ᇴ() ଶ for a twice differentiable function f= f(z). 4. Entanglement of observables In this Section, we present the observational viewpoint on entanglement (see [57-60] ). It will be employed in our QLM in Section 5. The traditional approach, viewing entanglement as the correlation of the states of two systems [56], in our case, two neuronal networks Sଵ and Sଶ, will be employed in Section 6. We begin with a simple example that illustrates the general construction to be developed later. Let dim H =4, let commuting Hermitian operators Aଵ and Aଶ have eigenvalues aଵ= ±1, aଶ= ±1, each with degeneracy 2. Consider the basis (|ij⟩), i, j= ±, in H consisting of the joint eigenvectors, Aଵ|ij⟩= i|ij⟩, Aଶ|ij⟩= j|ij⟩. Any vector in H can be expanded w.r.t. this basis, |ψ⟩= ∑w |ij⟩. (15) Such vector decomposition generates on H the structure of tensor product Hଵ⊗Hଶ where Hଵ and Hଵ are two-dimensional Hilbert spaces with the bases (|i⟩ଵ, i= ±), and (|i⟩ଶ, i= ±); vector |φ⟩= ∑w |i⟩ଵ⊗|j⟩ଶ (16) is identified with vector |ψ⟩ given by (15). Tensor product on Hଵ⊗Hଶ induces tensorproduct structure on H. Consider now complex Hilbert space dim H =N =NଵNଶ. Let commuting Hermitian operators Aଵ and Aଶ have eigenvalues (aଵ), j= 1, . . . , Nଵ, (aଶ), i= 1, . . . , Nଶ. Each aଵ has degeneracy Nଶ and each aଶ has degeneracy Nଵ. Consider the basis (|ij⟩≡ |aଵaଶ⟩), i1, . . . , Nଵ, j= 1, . . . , Nଶ in H consisting of their joint eigenvectors, Aଵ|ij⟩= aଵ|ij⟩, Aଶ|ij⟩= aଶ|ij⟩. Any vector in H can be expanded w.r.t. this basis, see ([any]). Such vector decomposition generates on H the structure of tensor product Hଵ⊗Hଶ, where Hଵ and Hଶ are Hilbert spaces of the dimensions Nଵ and Nଶ with the bases (|i⟩ଵ) and (|j⟩ଶ). And isomorphism map T: Hଵ⊗Hଶ → H is determined by its action on the basis vectors |i⟩ଵ⊗ |j⟩ଶ→|ij⟩. If the coefficients in representation (15) can be factorized, w= w (ଵ)w (ଶ), then formally vector |ψ⟩ belonging to H can be written as |ψ⟩= ቀ∑w (ଵ) |i⟩ଵቁ⊗ቀ∑w (ଶ) |j⟩ଶቁ. (17) Such a vector is called separable; otherwise, |ψ⟩ is called entangled. We remark that if |ψ⟩= T(|φଵ⟩⊗|φଶ⟩), then it is factorizable. Thus, the notions of separability versus entanglement given in (17) are equivalent to the usual notions of separability versus entanglement in the tensor product of Hilbert spaces. Consider now spaces of density operators D(Hଵ), D(Hଶ), D(H), then D(H) is isomorphic to tensor product D(Hଵ) ⊗ D(Hଶ). The notions of separability vs. entanglement for density operators (mixed states) is transferred to the space D(H). Denote by the symbol L(M) the space of linear operators acting in a complex Hilbert space M. We recall that we consider the finite-dimensional case. In Hilbert space Hଵ⊗Hଶ consider two operator algebras: A1 ={ Aଵ= aଵ⊗ I: aଵ ∈ L(Hଵ) }, A2 ={ Aଶ= I⊗ aଶ: aଶ ∈ L(Hଶ) }. (18) Hermitian operators belonging to these algebras are called "local observables." For our neurophysiological applications, it is important to note that this is tensor-product locality; generally, it has nothing to do with space-time locality. The images of these algebras in L(H) are denoted as Ai(H), i=1,2, or simply Ai . These local algebras induce the structure of the tensor product of operators in H; for A ∈ A i(H), i=1,2, we set Aଵ⊗Aଶ= Aଵ∘Aଶ= Aଶ∘Aଵ. We remark that if Aଵ ∈ A 1(H) , Aଶ ∈ A 2(H), then [Aଵ, Aଶ] = 0; so Hermitian operators from algebras A1(H) and A2(H) represent compatible observables. To clarify the meaning of entanglement (which up to now has been treated from a mathematical viewpoint), consider the four-dimensional case and the singlet state |ψ⟩= (|+⟩|-⟩-|-⟩|+⟩)/√2. (19) It is entangled according to the above mathematical definition. But what does it mean physically and biologically (Sections 5, 6)? As noted, this formula encodes correlations between the outcomes of two observables Aଵ and Aଶ: the conditional probabilities P(Aଶ= ±|Aଵ= ∓) = 1 as well as P(Aଵ= ±|Aଶ= ∓) = 1. These correlations, for each pair of such observables, are purely classical. "Quantumness" appears because of the existence of incompatible observables, A , B ∈ A i(H) such that [A, B] ≠0, i= 1,2. Here, noncommutativity expresses the impossibility of joint measurements of two observables. The singlet state can also be represented as |ψ⟩= (|C= +⟩|D= -⟩-|C= -⟩|D= +⟩)/√2, (20) where C= Aଵ, D= Aଶ or C= Bଵ, D= Bଶ and the operators B can be selected as noncommuting with A, i= 1,2. Thus, this entangled state encodes correlations between families of local observables that are jointly non-measurable. Classical probability describes only correlations for families of jointly measurable observables. This represents the incompatibility (noncommutativity) interpretation of entanglement. 5. Entanglement of observables on neuronal circuits Here we employ the observational viewpoint on entanglement presented in Section 4. Let S be a network with N= NଵNଶ neuronal circuits. Let Aଵ and Aଶ be observables on S as in Section 4. The only new component is that these QL observables are coupled to the network as the QL images of the corresponding quadratic forms Qభ and Qమ of ROs in S. Neuronal node-circuits s, i= 1, . . . , N, are renumerated as s, i= 1, . . . , Nଵ, j= 1, . . . , Nଶ. The biological counterpart of this mathematical construction is that, for each node-circuit, both observables Aଵ and Aଶ can be jointly measured, as well as any two observables from the operator algebras A1 and A2 . If a circuit s is not activated, then in the classical representation zs= 0 a.e., where zs is the random variable describing ROs in s. Consider one node-circuit s= s. Let only this circuit be activated in the network S. In the classical representation, all random variables zs= 0 a.e. for s≠s and E[|z|ଶ] ≠0, where we set z≡zsೖ. In the QL representation, S is in the pure state |k⟩= |ij⟩. A measurement of the observables Aଵ and Aଶ on the network S gives the outputs Aଵ= aଵ and Aଶ= aଶ with probability 1. Let only two node-circuits be activated in S: s= sభభ, s= sమమ, that is, in the classical representation, z= 0 a.e. for n≠k, m and E[|z|ଶ], E[|z|ଶ] ≠0. Let ROs in s, s be correlated, generating the covariance matrix C with the elements c= 1, c= 1, ́c= -1, c= -1 and other elements in C equal to zero. Hence, there is perfect anticorrelation between ROs in circuits s and s. The corresponding QL state ρ= C/2 = |ψ⟩⟨ψ|, where |ψ⟩ is the singlet state (19). Let iଵ= +, jଵ= -, iଶ= -, jଶ= +. The circuits sାି, sିା generate ROs z= zାି| + -⟩-zିା| -+⟩, where zାି= zିା a.e. Thus, the singlet state (19) is the QL image of this random variable. Such correlation is purely classical and does not represent the essence of the QL framework. As was already noted in 4, the strength of employing the QL linear space representation lies in the possibility (for the brain) of using a variety of bases. We have considered the neuronal basis, but the same singlet state carries anti-correlations in various bases (see (20)), whose elements are not localized in specific neuronal circuits. Formally (mathematically), the neuronal network S can be represented as a compound system S= (Sଵ, Sଶ) of two systems Sଵ and Sଶ with the state spaces Hଵ and Hଶ . In this observational framework, these systems are not identified with specific neuronal networks (cf. Section 6). They are formally extracted from S with the aid of two algebras of commuting observables, Aଵ and Aଶ . Measurements performed by observables belonging to A are formally treated as "local observables" for subsystem S, i= 1,2.3 The correlations of local observables can be represented as the QL-average. For Bଵ= bଵ⊗ I and Bଶ= I⊗bଶ, ⟨bଵbଶ⟩ = Tr ρ bଵ ⊗bଶ, ρ= ் , (21) where C is the covariance operator of ROs in S which are represented by a random variable z valued in tensor product Hilbert space H =Hଵ⊗Hଶ and not in Cartesian product Hilbert space Hଵ ⊕ Hଶ . As was pointed out in Section 3, such a correlation can be presented in the classical probabilistic framework as ⟨bଵbଶ⟩ = ଵ ் = ∫Hభ⊗Hమ⟨ bଵ ⊗bଶ w|w⟩ d p௭ (w) . (22) Entanglement is the Hilbert space expression of correlations between local observablesfrom algebras A 1(H) and A 2(H). The crucial difference from the classical probabilistic representation is that these algebras contain incompatible observables which cannot be jointly measurable. Finally, we stress once again that decomposition of a neuronal network S into subsystems, S= (Sଵ, Sଶ), is not physical. Subsystems are virtual, and they express biological functions of S, not its neuronal architecture. Decomposition is not unique; even dimensions of the 3 Although subsystem S cannot be identified with a physical network of node-circuits, for external observer (who cannot "open the brain" and see the individual neuronal structure of S), subsystems S, i= 1,2, "exist" and their existence is determined via local observations. components of the tensor product can vary for the same biophysical neuronal network S, say if N= 12, we can factorize it as Nଵ= 3, Nଶ= 4 or Nଵ= 2, Nଶ= 6. 6. Entanglement of neuronal states We start with a recollection of the notion of entanglement for pure and mixed states. A pure state |ψ⟩ belonging to tensor product of two Hilbert spaces H = Hଵ ⊗ Hଶ is called separable if it can be factorized as |ψ⟩= |ψ⟩ଵ⊗ |ψ⟩ଶ, where |ψ⟩ ∈ H, (24) otherwise a pure state is called entangled. A mixed state given by a density operator ρ is called separable if it can be represented as a mixture ρ = ∑p ρଵ⊗ρଶ , (25) where ρ ∈ H and the weights (p) form a probability distribution, p> 0, ∑p= 1. A non-separable state is called entangled. For pure states, it is straightforward to decide whether a state is separable or not. For mixed states, it is very difficult. Although these definitions are commonly accepted in quantum theory, a careful reading of the original works of Schrödinger [74] may give the impression that he discussed "entanglement of observable" (cf. with discussion in [60]). Consider now two networks Sଵ and Sଶ consisting of neuronal circuits sଵ,, j= 1, . . . , Nଵ, and sଶ,, j= 1, . . . , Nଶ, respectively. As before, for simplicity, suppose that each circuit generates one complex dimension. So, in QLM for Sଵ and Sଶ, the complex Hilbert spaces Hଵ and Hଶ have dimensions Nଵ and Nଶ. ROs in them are mathematically represented as random vectors z∈H, i=1,2; here z= (z,ଵ, . . . , z,ே). Consider now the network S⊕ consisting of node-circuits of Sଵ and Sଶ and its graph picturing. The set of nodes of the graph Gௌ⊕ is the union of the sets of nodes of the graphs Gௌభ and Gௌమ; the set of its edges includes all edges of Gௌభ and Gௌమ as well as additional edges representing the connections between some nodes of Gௌభ and Gௌమ. According to our approach, the edge structure of the graph Gௌ⊕ is is not visible in the QL representation, which is instead based on the correlation graph Gୗ_ଵ⊕ ୗ_ଶ; ୡ୭୰ . The edges of this graph correspond to nonzero correlations between ROs in the corresponding nodes. In QLM for such a network, its complex Hilbert space Hௌ⊕=Hଵ⊕Hଶ has the dimension (Nଵ+ Nଶ). ROs in S⊕ are mathematically represented the random vector z= (zଵ , zଶ) ∈Hௌ⊕, so in the node-basis z= (zଵ ... zேభାேమ ). The covariance matrix of ROs has the dimension (Nଵ+ Nଶ)ଶ. This dimension does not match the dimension of a density matrix for a quantum compound system, that is Nଶ, N= NଵNଶ. Such a network cannot be used for generating entanglement. We suggest the following construction of a compound network S⊗ whose complex Hilbert space is not a direct sum but a tensor product, Hௌ⊕=Hଵ⊕Hଶ , and the covariance matrix of ROs in S⊗ has the dimension Nଶ. Creation of the network S⊗ that is able to generate entangled states is characterized by the emergence of new circuits not present (or activated) in S⊕. Each pair of circuits sଵ,∈Sଵ and sଶ,∈Sଶ is combined into the new circuit s. How can such a compound circuit be created? Since s consists of the same neurons as the circuits sଵ,, sଶ, the only new structure in the circuit s arises from generating (activation) new channels for communication between neurons in sଵ, and neurons in sଶ;. These channels can be physical axon-dendrite connections activated in the network S⊗ (but not active before). Another testable hypothesis is that entangling channels are routes for electromagnetic signaling between neurons across the circuits sଵ, and sଶ, [75,66,76,77]. Chemical signaling may also contribute to the formation of the entanglement-generating network S⊗, albeit on slower time scales. One can hypothesize that the brain can generate entangled networks using various communication channels to perform different tasks. Generally, the three sorts of communication channels mentioned can be involved in the creation of circuits s∈S⊗; each such circuit is given by the triple s= (sଵ,, e, sଶ,), (25) where e denotes the signaling channel between the neuronal circuits sଵ, and sଶ,. ROs in this circuit are described by a random variable Z= Z(ω), where ω is a chance parameter. Such ROs generate the covariance matrix C;= E[ZZ⃐], whose elements are the correlations between circuits s and s. This matrix has the dimension (NଵNଶ)ଶ. In QLM, the corresponding density matrices are obtained via normalization by trace; in the operator representation, they act in the complex Hilbert space H of the dimension N= NଵNଶ. We remark that these compound circuits need not be connected at the physical level, e.g., by an axon-dendrite connection. Signals propagating in channels e and e generates electromagnetic fields, and they can be non-trivially correlated (see Section 7 on ephaptic generation of such correlations). Thus, circuits s are vertices of the graph Gୗ_ଵ⊗ ୗ_ଶ; ୡ୭୰ ; its edges E, connect these vertices and represent correlations between ROs in circuits s and s. We discuss only the graph Gୗ_ଵ⊗ ୗ_ଶ; ୡ୭୰ which edges E, represent correlations between corresponding circuits s and s. The graph of physical connections is not essential for QLM. What is about the tensor-product structure of the Hilbert space H? Let only one circuit s is activated and let it generate ROs Z. In the corresponding (self-)covariance matrix C only one element is nonzero, namely, c;= E[|Z|ଶ] ≠0. In QL, such a covariance matrix is represented by the pure state |ij⟩∈ H (or the density operator ρ= |ij⟩⟨ij|). Now consider the compound neural network S= (Sଵ, Sଶ) with an arbitrary pattern of ROs. In QLM, its covariance matrix C is given by the density matrix ρ = C / Tr C = ∑, r, |ij⟩⟨km|. Hence, the state space of density operators acting in H is represented as the tensor product D(H)=D(Hଵ) ⊗D(Hଶ), where H is the Hilbert space for the network S, i= 1,2. Up to now, we have followed the standard quantum framework for a compound system S= (Sଵ, Sଶ). As usual, we can consider the marginal states of ρ generated by partial tracing, ρଵ= TrHమ ρ, ρଶ = TrHభ ρ . (26) In the quantum formalism, these states are interpreted as the states of the subsystems Sଵ and Sଶ of the system S= (Sଵ, Sଶ). However, in our neuronal model, we can consider the states ρௌభ and ρௌమ of the neuronal networks Sଵ and Sଶ in the absence of signaling between them. We remark that the network Sଵ⊗Sଶ is created via activation of the cross-connection between neurons in Sଵ and Sଶ and the inter-network connections contribute to signaling between Sଵ and Sଶ only indirectly. In short, the correlations cௌ;/= E[z,z⃐,], m= 1,2, cannot be reconstructed from the covariance matrix C of correlations in Sଵ⊗Sଶ and the subsystems' QL states ρௌ, m= 1,2, are not equal to the marginal states ρ. We consider the observables Aଵ and Aଶ. If only the circuit s is activated, then Aଵ= i, Aଶ= j with probability 1. In QLM, they are represented by the operators, which are diagonal in the basis (|ij⟩). Then we can use the construction from Sections 4, 5-observational entanglement. In particularly, we create the tensor-product structure, H=Hଵ ⊗Hଶ. 7. Ephaptic entanglement This is an appropriate place to point out ephaptic coupling between neuronal structures in the brain [75,66,76]. This coupling enables communication between neurons that differs from direct systems based on physical connections, such as electrical synapses and chemical synapses. Through this mechanism, signals in nerve fibers can be correlated as a result of local electric fields. Ephaptic coupling can generate synchronization of action potential firing in neurons [77]. Recently, this coupling was highlighted in article [78] on modeling nonlocal representation of memories: "It is increasingly clear that memories are distributed across multiple brain areas. Such 'engram complexes' are important features of memory formation and consolidation. Here, we test the hypothesis that engram complexes are formed in part by bioelectric fields that sculpt and guide neural activity and tie together the areas that participate in engram complexes. ... Our results ... provide evidence for in vivo ephaptic coupling in memory representations." Our QL formalism describes such nonlocal correlations and, in particular, memories distributed across multiple brain areas. Such nonlocal memories are encoded in QL states. Here we again recall our basic conjecture that the brain explores the QL representation, i.e., it operates not with oscillations in neuronal networks but with the corresponding covariance matrices. Thus, the creation of the entanglement-generating network S⊗ is a process. At each instance in time, the character of signaling between neurons in the circuits sଵ, and sଶ, plays a crucial role in the generation of a new circuit s and a specific state of S⊗, entangled or not. 8. Toward experimental verification of mental entanglement 8.1. Experimental framework for entanglement of neuronal networks Here we consider the simplest case of the model for entanglement of two neuronal networks presented in Section 6 (see also [55]). In this way, we can generate only a restricted set of entangled states (in contrast to the general model). Nevertheless, some examples of entangled states can still be obtained. The nodes of the compound network Sଵ⊗Sଶ are given by the pairs of nodes of the networks Sଵ and Sଶ, s= (sଵ,, sଶ,), (27) and ROs in these node-circuits are described as the random variables Z= zଵ,zଶ,, where the random variables zଵ, and zଶ, describe ROs in node-circuits of corresponding neuronal networks. Hence, the elements of the cross-covariance matrix C are given as c,= E[ZZ⃐] = E[zଵ,zଶ,z⃐ଵ,z⃐ଶ,] (28) Consider now two spatially separated areas in the brain and two sets of electrodes sଵ,, i= 1, . . . , Nଵ, and sଶ,, j= 1, . . . , Nଶ, coupled to the respective areas. Their outputs correspond to random variables zଵ, and zଶ,. Under the assumption of ergodicity, we can identify statistical averages with time averages. We center the random variables by subtracting their averages. We calculate the cross-covariance matrix. Then, by using a test of entanglement (Section 9), we determine whether the corresponding density matrix represents an entangled or separable state. Since directly measuring correlations between signals generated by individual neurons in the brain is experimentally complicated (but cf. Section 8.4), it is natural to employ EEG/MEG techniques to measure correlations between signals generated in spatially and functionally separated brain areas (see Section 8.3 for further discussion of this proposal). We also mention fMRI as a possible modality, but its limited temporal resolution makes it less suitable for detecting fast or transient correlations as required for entanglement. EEG or MEG would be more appropriate owing to their millisecond-scale resolution. Note for implementation. Practical EEG/MEG implementation details (preprocessing, spectral estimation, window length T, and statistical controls) are summarized in Section 8.2; see also standard references [72,69,71]. 8.2. EEG/MEG implementation: minimum requirements Prefer source-space analyses and state whether leakage correction was applied (e.g., symmetric orthogonalization [79]). Note that sensor-space signals are linear mixtures of underlying sources; therefore, source-space analyses with leakage correction are preferred wherever possible. Report the reference scheme, artifact handling (e.g., Independent Component Analysis (ICA) for EOG/EMG), and filtering (including notch)[72]. For zero-lag confounds, accompany coherence or Phase-Locking Value (PLV) with imaginary coherence and/or wPLI (with limitations noted [80,81]). These analyses quantify statistical dependence and do not by themselves establish directionality or causation. Specify spectral estimation (Welch or multitaper) and effective degrees of freedom; choose T to cover 5-10 cycles (Δf≈1/T) [69,70,71]. Use matched surrogates (trial shuffling or phase randomization) and correct for multiple comparisons [72]. List reproducibility items: SNR, number of tapers/segments, leakage correction, inverse model or regularization, and whether analyses were performed in sensor or source space. 8.3. Parallels between classical EEG-based neurophysiological approaches and QLM of cognition It is important to emphasize that our QL modeling framework shares methodological parallels with well-established neurophysiological tools used in EEG/MEG-based studies [82,83,84]. Both approaches rely on the analysis of covariance structures and correlations among signals. Functional connectivity in neuroscience Functional connectivity refers to the statistical dependencies between spatially distinct neuronal populations. It is formally defined as the temporal correlation between neurophysiological signals recorded at different sites in the brain, such as EEG channels. Mathematically, common measures of functional connectivity include: Covariance. For two time series x(t) and y(t), the covariance is defined as: Cov(x, y) = ଵ ்∑ (x(t) -μ௫) ் ௧ୀଵ ൫y(t) -μ௬൯, (29) where μ௫, μ௬ are the mean values of x(t) and y(t), and T is the total number of time points. Pearson correlation coefficient. r௫௬= Cov(x, y) σ௫σ௬ where σ௫, σ௬ are the standard deviations of the signals. Coherence. Coherence measures frequency-specific linear correlations between signals: C௫௬(f) = |S௫௬(f)|ଶ S௫௫(f)S௬௬(f) where S௫௬(f) is the cross-spectral density, and S௫௫(f) and S௬௬(f) are the auto-spectral densities. (Estimator notes: Welch or multitaper estimators are commonly used; see [69,70,71,85]). Phase-Locking Value( PLV). PLV quantifies phase synchronization between two signals: PLV = อ1 Te൫థೣ(௧)ିథ(௧)൯ ் ௧ୀଵ อ where φ௫(t) and φ௬(t) are the instantaneous phases, typically extracted using the Hilbert transform or wavelet methods (following [83] ). Scope of functional connectivity (FC) metrics. These metrics quantify statistical dependence but not directionality or causation; directional inferences require separate analyses and explicit assumptions. Zero-lag confounds and robust metrics. To mitigate volume conduction and common reference effects, the imaginary part of coherency and/or the weighted phase-lag index (wPLI) should be reported alongside classical metrics [80,81]. However, these approaches do not eliminate all leakage in sensor space; whenever possible, analyses should be performed in source space with leakage correction (see Section 8.2). Applications. FC networks are generated by computing these measures pairwise across brain regions, producing a connectivity matrix that is subsequently analyzed for modularity, hub architecture, and network dynamics. These methods are extensively applied to investigate cognition, neurological disorders, and stimulus-driven responses [86,82,87,72]. For tutorials and discussions of interpretational pitfalls, see. Parallels with QLM In the QL framework, similar mathematical constructs arise naturally. The density matrix or generalized covariance matrix represents probabilistic dependencies among cognitive observables, analogous to FC matrices in EEG studies. Furthermore, off-diagonal terms in QL states encode interference-like effects, comparable to EEG phase synchrony measures such as PLV and coherence. Thus, the QL formalism extends conventional measures by providing a richer probabilistic interpretation grounded in generalized state representations. Finally, the concept of entanglement in QL modeling can be compared-at a conceptual level-with strong inter-regional correlations identified in FC analyses, where clusters of brain regions operate as integrated modules. This analogy is heuristic and does not indicate equivalence of constructs. This is an appropriate place to note that classical signal analysis of brain function frequently employs the analytic signal representation via the Hilbert transform (see, e.g., [85,72,83]). The established use of such complex-valued representations suggests another avenue for QL-style modeling of brain dynamics. We plan to develop such a model in a future paper. Summary of parallels Key takeaway. The correspondence between EEG/MEG FC and QL constructs is conceptual: both capture dependencies through second-order structure (covariances/coherences vs. density matrix off-diagonals). This analogy is heuristic and does not imply equivalence of constructs, measurement units, or causal mechanisms. Comparison of EEG/MEG-based methods and QL modeling of cognition (conceptual mapping; not a one-to-one equivalence). Concept EEG/MEG Neurophysiology QL Modeling of Cognition Covariance Covariance / Correlation Density matrix (covariances) Synchrony Coherence / Phase-locking (PLV) Interference (off-diagonals) Network Correlations Functional networks Entanglement Practical implications. Treat FC metrics as measures of statistical dependence, not directionality; use source space with leakage correction and report imaginary coherency/wPLI when applicable (Section 8.2). When robust FC modules persist under stringent controls, QL analyses can quantify nonseparability via mixed-state entanglement measures (Section 9); Bell-type tests face signaling constraints (see Appendix B). 8.4. In vitro neuronal networks In vitro neuronal networks are cultures of neurons that replicate key aspects of network structure and function. In such preparations, signals from individual neurons or defined circuits can be recorded directly; connectivity can be patterned, and currents across connections between spatially separated subnetworks can be measured. Although experimentally demanding, these paradigms are feasible and align closely with our framework (cf. Section 8.1). The experimental testing of QL entanglement in vitro is increasingly practical owing to advances in multi-electrode arrays (MEAs), which allow simultaneous stimulation and recording from dozens to hundreds of sites. One promising approach is patterned electrical stimulation to impose structured correlations between distinct subpopulations-for example, time-locked or phasemodulated sequences delivered to spatially separated regions to create controlled coupling patterns. Additionally, pharmacological modulation offers a complementary route, e.g.: Bicuculline (GABA antagonist) to increase network excitability and enhance synchrony; Carbachol or other acetylcholine agonists to regulate oscillatory dynamics and increase coherence. These manipulations can serve to test QL nonseparability by: 1. engineering structured couplings via stimulation or pharmacology, 2. recording the resulting activity with MEAs, and 3. analyzing correlations using QL-inspired criteria (e.g., separability bounds). Such protocols provide a concrete route toward evaluating QL entanglement while maintaining continuity with established neurophysiological methods. Finally, experimental confirmation of QL nonseparability would support quantum-inspired computation with neuronal networks-QL neuromorphic computing (see [15,54,16,55,44] ). 9. Basic quantitative measures of entanglement for mixed states In quantum information theory, the entanglement of mixed states is quantified using various measures defined through density operators. These measures are critical for characterizing quantum correlations in composite systems described by mixed states. In the following, we summarize mixed-state entanglement measures most relevant for empirical analyses of QL states reconstructed from neural signals. In the context of QLM of cognition, such measures may be applied to examine "mental entanglement" and complex cognitive interdependencies. 9.1. Terminology Throughout this section, 'entanglement' refers to the nonseparability of the QL state ρ, constructed from classical neural signals (e.g., EEG/MEG/fMRI-derived time series). We do not suggest microscopic quantum entanglement in brain tissue; outcomes depend on the selected subsystem partition and the measurement basis used to define ρ and the partial transpose. We start with the definition of the von Neumann entropy of a generally mixed state given by a density matrix: S(ρ) = -Tr(ρlogρ) It measures the degree of mixedness of the state; S(ρ) = 0 if and only if ρ is a pure state. Hence, it can be used as a measure of the purity of a quantum (or QL) state. Now let ρ be the density operator of a bipartite system on Hilbert space H⊗H. 9.2. Entanglement entropy For a pure bipartite state ρ= |ψ⟩⟨ψ|, the entanglement entropy is defined as the von Neumann entropy of the reduced state: S= S(ρ), ρ= Tr(ρ) This quantity measures the degree of entanglement between subsystems A and B for pure bipartite states. For pure global states, entanglement is determined by the entropy of the reduced state: S(ρ) > 0 if and only if ρ is entangled (and S(ρ) = 0 iff it is a product state). In contrast, S(ρ) = 0 or the linear entropy S(ρ) = 1 -Tr ρଶ= 0 only confirms that the global state is pure, not whether it is entangled across a given bipartition. In cognitive and neuronal data, however, pure QL states are not expected: noise, nonstationarity, and averaging typically yield mixed states. Therefore, mixed-state entanglement measures are required. 9.3. Negativity and logarithmic negativity Negativity quantifies entanglement by examining the partial transpose of the density matrix: N(ρ) = (||ρ்ಳ||ଵ - 1)/2 where ρ்ಳ is the partial transpose with respect to the subsystem B and ||ρ்ಳ||ଵ is the trace norm of ρ்ಳ. Logarithmic negativity is defined as: Eே(ρ) = logଶ ||ρ்ಳ||ଵ These are standard entanglement monotones for mixed states; the partial transpose is taken with respect to the chosen subsystem in the product basis. 9.4. Concurrence (Two-Qubit systems) For two-qubit mixed states ρ, concurrence is defined as: C(ρ) = max(0, λଵ-λଶ-λଷ-λସ) where λ are the square roots of the eigenvalues (in decreasing order) of: R= ρ൫σ௬⊗σ௬൯ρ∗൫σ௬⊗σ௬൯ with ρ∗ denoting complex conjugation and σ௬ the Pauli y matrix. These measures of entanglement quantify quantum correlations between systems. In quantum theory, there is also a measure used to capture combined classical-quantum correlations. For two-qubit systems, concurrence corresponds to the entanglement of formation. 9.5. Quantum mutual information For mixed states, quantum mutual information measures total correlations: I(A: B) = S(ρ) + S(ρ) -S(ρ) If I(A: B) = 0, the two systems are uncorrelated (neither classical nor QL correlations). If I(A: B) > 0, there are correlations, but this measure does not distinguish QL from classical components and is not an entanglement monotone. However, mutual information is not a measure of entanglement, because Nonzero for Separable States: Even separable (non-entangled) mixed states can yield nonzero mutual information because of classical correlations. Entanglement Monotonicity: Valid entanglement measures must not increase under local operations and classical communication. Mutual information fails this criterion because it quantifies all correlations, not exclusively quantum ones. 9.6. Bell inequalities We note that the degree of violation of Bell inequalities can serve as indirect evidence for entangled observables; however, such tests in cognitive or neuronal contexts are challenging to implement (see Appendix B for discussion). 10. Conceptual differences and added value of the QL framework While there are important methodological parallels between our QL framework and classical neurophysiological approaches, it is essential to emphasize the fundamental conceptual innovations that distinguish the QL model. Most classical models in neuroscience-such as Principal Component Analysis (PCA), ICA, and Integrated Information Theory (IIT)-are based on analyzing statistical dependencies or decomposing neural signals into independent or maximally informative components. These approaches often assume linearity, Gaussianity, or specific information-theoretic structures, and they generally function within the framework of classical probability theory. In contrast, the QL framework introduces the formalism of operator-based observables and tensor-product structures, adapted from quantum theory but applied to macroscopic neuronal information processing. These mathematical tools allow the formal representation of several fundamental cognitive phenomena, including: Contextuality: The outcome of cognitive measurements depends on the context, similar to the contextuality of quantum measurements. Incompatibility: Certain cognitive observables cannot be simultaneously measured or precisely assigned, reflecting uncertainty and complementarity in cognition. Entanglement: Complex dependencies and holistic cognitive states can be modeled through entangled QL states. Mathematically, these phenomena are naturally expressed using non-commuting operators and composite Hilbert spaces: Htotal = Hଵ⊗Hଶ where the subsystems correspond to distinct cognitive or neural components. The density operator (or generalized state) in the QL model encodes both classical correlations and quantum-like correlations (entanglement), extending beyond covariancebased approaches such as PCA and ICA. Added value By enabling such generalized probabilistic structures, the QL approach formally extends classical theories. It provides novel ways to model: Non-classical interference effects in cognition. Strong contextual dependencies in decision-making. Holistic, system-level processes inaccessible to classical decompositions. This conceptual generalization bridges neurophysiological signal processing with higherlevel cognitive modeling, offering an integrated framework for studying cognition beyond traditional statistical tools. 11. Concluding remarks This paper represents a step toward constructing a conceptual and mathematical bridge between oscillatory cognition-defined as the rhythmic activity of neuronal networks-and QL models of cognition, which have been successfully applied to explain contextuality, interference, and entanglement-like effects in human behavior and decision-making. This bridge relies on a fundamental correspondence: QL mental states are represented by density operators and QL observables by quadratic forms, both of which are mathematically grounded in the covariance structure of ROs in neural circuits. These constructions are developed within the framework of PCSFT, which extends beyond the standard quantum formalism [45,46,47,48]. Previous work [44] has suggested that PCSFT provides a viable interpretational and computational foundation for QL representations of cognitive phenomena. Here, we focus on one of the most conceptually challenging issues: the formalization of mental entanglement-a cognitive analog of quantum entanglement-which we consider crucial for modeling integrative mental processes involving distributed and interacting brain networks. In quantum information theory, entanglement is central to the non-classical correlations underlying quantum computational advantage. While the physical meaning of entanglement remains debated-particularly regarding separability and locality-there has been increasing interest in an alternative formulation: observational entanglement. This approach avoids problematic metaphysical assumptions and emphasizes statistical correlations observed through measurements. We adopt this perspective as a more transparent entry point into modeling cognitive entanglement (see Section 5). We then proceed to explore state entanglement in the QL framework, treating entangled QL states as representations of the joint activity of spatially and functionally separated neuronal assemblies. In this context, mental entanglement provides a natural mechanism for feature binding-the brain's capacity to integrate disparate perceptual or cognitive contents (e.g., color, shape, and motion) into unified conscious experiences. This formulation suggests a candidate QL solution to the binding problem, long regarded as one of the central unsolved questions in neuroscience and cognitive science. Moreover, the introduction of mental entanglement offers a speculative yet potentially fruitful path toward addressing aspects of the hard problem of consciousnessspecifically, how and why certain brain processes give rise to subjective experience. While our approach does not resolve the hard problem, it aligns with perspectives proposing that consciousness may involve non-classical, globally coherent states that resist decomposition into strictly local mechanisms. If mental entanglement reflects a form of nonlocal coherence in brain function, it may point to a formal structure compatible with integrated information and global workspace theories, enriched by QL formalism. In cognitive neuroscience, numerous studies have shown that neuronal oscillationsparticularly in the gamma, theta, and beta bands-are associated with integrative functions such as memory binding, attentional selection, and conscious access. Our model establishes a bridge between these empirical findings and QL cognitive models by interpreting such oscillatory patterns as classical fields whose covariance structures define the QL states and observables. This offers a testable link between neurophysiological processes and the abstract mathematical structures of QL cognition. We further hypothesize that entangled QL states derived from ROs may underlie enhanced computational capacities in specific neural subsystems, particularly within the cerebral cortex and hippocampus. This aligns with evidence of high-performance integrative processing in these regions and indicates a deeper role for QL representations in modeling cognitive efficiency. In Section 8.3, we outline preliminary experimental designs aimed at indirectly detecting signatures of mental entanglement using EEG/MEG methodologies, focusing on correlation structures in neural signals that diverge from classical expectations. Although speculative, these tests are intended to guide future empirical investigations. In conclusion, the development of the mental entanglement formalism reinforces the broader conjecture that the brain employs QL representations in cognitive processing. This framework opens the door to a deeper interdisciplinary synthesis-integrating neurophysiological data, quantum-inspired mathematical tools, and enduring philosophical questions in consciousness research. Acknowledgments The authors were supported by JST, CREST Grant Number JPMJCR23P4, Japan; A.K. was partially supported by the EU-grant CA21169 (DYNALIFE); M.Y. was partially supported by JSPS KAKENHI Grant (23H04830, 22K18265, 23K22379), JST Moonshot R&D Grant (JPMJMS2295), and MEXT Quantum Leap Flagship Program (MEXT QLEAP) Grant (JPMXS0120330644). Appendix A. Coupling symplectic Hamiltonian dynamics and the Schrödinger equation PCSFT provides a classical foundation for quantum mechanics 45-48]. A central idea is that the Schrödinger equation, which lies at the heart of quantum theory, can be derived from or coupled with symplectic Hamiltonian dynamics on a classical phase space. This framework examines the quantum-classical interface from both mathematical and conceptual perspectives. At its core, PCSFT interprets quantum states as labels for statistical ensembles of classical oscillators. For N classical oscillators, the phase space Φ = Q× P= Rே× Rே. This corresponds to a real Hilbert space with the scalar product (φଵ|φଶ) = (qଵ|qଶ) + (pଵ|pଶ), where φ= {q, p} ∈Φ. This phase space underpins quantum mechanics with a finite-dimensional state space, the complex Hilbert space H = Cே, endowed with the scalar product described in Section 2. Quantum mechanics on physical space Rଷ is based on the infinite-dimensional Hilbert space of square-integrable complex-valued functions, essentially the Hilbert space H = Lଶ(Rଷ, C). The underlying classical phase space is given by Φ = Lଶ(Rଷ, R) × Lଶ(Rଷ, R). The real and imaginary parts of a wavefunction |ψ⟩ ∈ H correspond to coordinates and momenta in an infinite-dimensional phase space Φ. Any phase space can be endowed with a symplectic structure. In this setting, a symplectic form ω is introduced as: ω(φଵ|φଶ) = (φଵ|Jφଶ), where J is the symplectic structure operator defined as J= ∣∣∣∣∣0 I -I 0 ∣∣∣∣∣ , (30) where I, I are the unit operators in Q, P. In the complex Hilbert space H corresponding to the phase space Φ the operator J is represented as the operator of multiplication by i (we remark that H is complexification of Φ, namely, H = Q ⊕ iP. The Hamiltonian functional H(φ) generates dynamics via φ̇(t) = J∇H(φ(t)). When the Hamiltonian functional is quadratic, i.e., given by a symplectically invariant quadratic form H(φ) = (φ|Hφ), the evolution reduces to the linear Schrödinger equation: idψ dt(t) = Hψ(t) Thus, the Schrd̈ inger equation appears not as a postulate, but as a dynamical law for classical phase space dynamics under a symplectic structure. In PCSFT, a quantum state (wavefunction or density operator) corresponds to the covariance operator of classical oscillations. Quantum averages emerge from classical statistical averages over this ensemble, allowing an interpretation in which quantum randomness is epistemic-arising from incomplete knowledge of underlying classical oscillations. In this way, quantum mechanics can be understood as a projection or statistical encoding of a deeper classical theory, with the Schrödinger equation derived from the Hamiltonian flow of an underlying symplectic system. We now briefly present the above consideration with q, p coordinates. Any Hamiltonian function H(q, p) generates the system of Hamiltonian equations q̇ = பு ப(q, p), ṗ = - பு ப(q, p). (31) Consider now a quadratic and symplectically invariant Hamiltonian function H(q, p) = 1/2[(Rp, p) + 2(Tp, q) + (Rq, q)], (32) where R is a symmetric operator, R⋆= R and T⋆= -T. The operator (the Hessian of the Hamilton functions) H= ∣∣∣∣R T -T R ∣∣∣∣ , (33) Which commutes with the symplectic operator J. This is a system of harmonic oscillators, and it can be rewritten as the Schrödinger equation. Appendix B. Experimental framework for entanglement of observables in neuronal circuits and Bell test The observational approach to entanglement is comparatively simpler, as it does not require reconstructing the QL state of a compound neuronal network through the full calculation of its cross-correlation matrix. Instead, we aim to identify a pair of jointly measurable observables Aଵ and Aଶ, associated with neuronal circuits forming parts of a network S. The most widely studied method for detecting entanglement is the Bell test, which evaluates violations of Bell-type inequalities. Among these, the CHSH inequality is the most frequently applied and provides a natural framework for detecting and quantifying entanglement, where the degree of entanglement is reflected in the magnitude of CHSH inequality violation. Interpretational note. A violation of a Bell inequality rules out a class of local realistic models under specific assumptions (e.g., no-signaling and measurement independence), but it does not, by itself, establish directionality or causation among neural processes, nor does it exclude classical common-drive confounds. To apply the CHSH test, one must define two pairs of observables, Aଵ, Bଵ and Aଶ, Bଶ. Within each pair, the observables must be incompatible, meaning they are not jointly measurable. Across pairs, the observables must be compatible, meaning they can be jointly measured. Correlations between cross-pair observables are then computed to test for CHSH inequality violations. However, the structure of observables in neuronal circuits-especially in terms of their compatibility and incompatibility-remains largely unexplored. In QLM, the emergence and nature of incompatible observables is still under active investigation. Current approaches often reduce this question to testing for the order effect [18,19], where the sequence of measurements influences the outcome. Detection of such an effect is typically interpreted as evidence of incompatibility between QL observables. Yet, this interpretation relies on the assumption that observables are of the projection (von Neumann) type. Within the broader framework of quantum instruments, it is possible to observe the order effect even when the corresponding observables are jointly measurable [92]. This complicates the direct use of the order effect as an indicator of incompatibility in general settings. At present, there is no clear methodology for identifying suitable candidate observables for the CHSH test in neuronal systems. Moreover, even in physics, Bell-type experiments posed significant theoretical and technical challenges. The basic Bell framework [91] is affected by various so-called loopholes, and it was only in 2015 that loophole-free experiments were successfully performed. These landmark experiments ultimately led to the awarding of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser, and Anton Zeilinger. In cognitive psychology and decision-making research, Bell-type inequalities have also been explored, beginning with early foundational and experimental studies [24] (see for later experiments [6, 25-29]). As in physics, these investigations are complex both conceptually and empirically. In fact, the situation in cognitive domains may be even more challenging because of the apparent impossibility of eliminating signaling effects. Signaling-referring to the presence of direct influences between measurement settings and outcomes-complicates the interpretation of experimental data. When present, signaling requires a significantly more sophisticated theoretical framework (see [94] ), which lies beyond the scope of this article. Moreover, these investigations have been complemented by theoretical developments that bridge probabilistic modeling and conceptual compositionality. Bruza et al. [95], for example, introduced a probabilistic framework to study how meanings of conceptual combinations emerge, showing that quantum-inspired probabilistic models can effectively account for observed non-classicalities in meaning composition. Together, these contributions indicate that Bell-type inequalities and contextuality analyses are not merely metaphors but tools with empirical and explanatory power in cognitive science. However, fully accounting for signaling effects and developing corresponding theoretical models remains an open challenge in the field. Given these challenges, the experimental framework discussed in Sections 8.1 and 8.3 appears more feasible for near-term implementation. References 1. Khrennikov A. Contextual reinterpretation of quantum nonlocality. Cambridge: Cambridge University Press; 2024. (Cambridge University Press & Assessment) 2. Khrennikov A. Information dynamics in cognitive, psychological, social, and anomalous phenomena. Dordrecht: Kluwer; 2004. (ResearchGate) 3. Khrennikov A. Ubiquitous quantum structure: From psychology to finances. Berlin/Heidelberg: Springer; 2010. 4. Busemeyer JR, Bruza PD. Quantum models of cognition and decision. 2nd ed. Cambridge: Cambridge University Press; 2024. 5. Haven E, Khrennikov A. Quantum social science. Cambridge: Cambridge University Press; 2013. 6. Asano M, Khrennikov A, Ohya M, Tanaka Y, Yamato I. Quantum adaptivity in biology: From genetics to cognition. Berlin: Springer; 2015. 7. Bagarello F. Quantum concepts in the social, ecological and biological sciences. Cambridge: Cambridge University Press; 2019. 8. Khrennikov AY. Open quantum systems in biology, cognitive and social sciences. Berlin: SpringerௗNature; 2023. 9. Pothos EM, Busemeyer JR. Quantum cognition. Annu Rev Psychol. 2022;73:749-778. 10. Khrennikov A. Open systems, quantum probability, and logic for quantum like modeling in biology, cognition, and decision making. Entropy. 2023;25(6):886. 11. HameroƯ S. Quantum coherence in microtubules: A neural basis for emergent consciousness? J Conscious Stud. 1994;1:91-118. 12. Penrose R. The Emperor's New Mind. New York: Oxford University Press; 1989. 13. Vitiello G. My Double Unveiled: The Dissipative Quantum Model of Brain. Amsterdam: John Benjamins Publishing (Advances in Consciousness Research); 2001. 14. Vallortigara G, Vitiello G. Brain asymmetry as minimization of free energy: A theoretical model. R Soc Open Sci. 2024;11(7):240465. 15. Igamberdiev AU. Quantum computation, non demolition measurements, and reflective control in living systems. Biosystems. 2004;77:47-56. 16. Igamberdiev AU,ௗBrenner JE. Mathematics in biological reality: The emergence of natural computation in living systems. Biosystems. 2021;204:104395. 17. Atmanspacher H, Filk T, Römer H. Quantum Zeno features of bistable perception. Biol Cybern. 2004;90(1):33-40. 18. Wang Z, Busemeyer JR. A quantum question order model supported by empirical tests of an a priori and precise prediction. Top Cogn Sci. 2013;5:689-710. 19. Wang Z,ௗSolloway T, ShiƯrin RM, Busemeyer JR. Context eƯects produced by question orders reveal quantum nature of human judgments. Proc Natl Acad Sci U S A. 2014;111:9431-9436. 20. Khrennikova P. Order eƯect in a study on US voters' preferences: Quantum framework representation of the observables. Phys Scr. 2014;2014(T163):014010. 21. Ozawa M,ௗKhrennikov A. Application of theory of quantum instruments to psychology: Combination of question order eƯect with response replicability eƯect. Entropy. 2020;22(1):37. 22. Ozawa M,ௗKhrennikov A. Modeling combination of question order eƯect, response replicability eƯect, and QQ equality with quantum instruments. J Math Psychol. 2021;100:102491. 23. Tsuchiya N,ௗBruza P, Yamada M, Saigo H, Pothos EM. Quantum like qualia hypothesis: From quantum cognition to quantum perception. Front Psychol. 2025;15:1406459. 24. Conte E,ௗKhrennikov AௗY, Todarello O, Federici A, Mendolicchio L, Zbilut JP. A preliminary experimental verification on the possibility of Bell inequality violation in mental states. NeuroQuantology. 2008;6:214-221. 25. Cervantes VH,ௗDzhafarov EN. Snow queen is evil and beautiful: Experimental evidence for probabilistic contextuality in human choices. Decision. 2018;5(3):193-204. 26. Basieva I,ௗCervantes VH, Dzhafarov EN, Khrennikov A. True contextuality beats direct influences in human decision making. J Exp Psychol Gen. 2019;148(11):1925-1937. 27. Gallus C,ௗBlasiak P, Pothos EM. Quantifying and interpreting connection strength in macro and microscopic systems: Lessons from Bell's approach. Entropy. 2022;24(3):364. 28. Gallus C,ௗPothos EM, Blasiak P, Yearsley JM, Wojciechowski BW. Bell correlations outside physics. Sci Rep. 2023;13(1):4394. 29. Khrennikova P. Measuring contextuality in investment preferences. Ann Oper Res. 2025:1-31. 30. Khrennikov A. Quantum like brain: "Interference of minds." Biosystems. 2006;84(3):225241. 31. Khrennikova P. A quantum framework for 'Sour Grapes' in cognitive dissonance. In: Quantum Interaction. Berlin/Heidelberg: Springer; 2013:270-280. 32. Ozawa M,ௗKhrennikov A. Nondistributivity of human logic and violation of response replicability eƯect in cognitive psychology. J Math Psychol. 2023;112:102739. 33. Gunji YP,ௗNakamura K, Minoura M, Adamatzky A. Three types of logical structure resulting from the trilemma of free will, determinism and locality. Biosystems. 2020;195:104151. 34. Gunji YP,ௗNakamura K. Psychological origin of quantum logic: An orthomodular lattice derived from natural born intelligence without Hilbert space. Biosystems. 2022;215216:104649. 35. Atmanspacher H,ௗFilk T. The Necker-Zeno model for bistable perception. Top Cogn Sci. 2013;5(4):800-817. 36. Conte E,ௗKhrennikov AௗY, Todarello O, Federici A, Mendolicchio L, Zbilut JP. Mental states follow quantum mechanics during perception and cognition of ambiguous figures. Open Syst Inf Dyn. 2009;16(1):85-100. 37. Khrennikov A. Quantum like modeling of cognition. Front Phys. 2015;3:77. 38. Melkikh AV,ௗKhrennikov A. Quantum like model of partially directed evolution. Prog Biophys Mol Biol. 2017;125:36-51. 39. Iriki A,ௗTanaka S. Potential of the path integral and quantum computing for the study of humanities: An underlying principle of human evolution and the function of consciousness. Glob Perspect. 2024;5(1):115651. 40. Asano M,ௗBasieva I, Khrennikov A, Ohya M, Tanaka Y, Yamato I. A model of epigenetic evolution based on theory of open quantum systems. Syst Synth Biol. 2013;7:161-173. 41. Khrennikov A,ௗIriyama S, Basieva I, Sato K. Quantum like environment adaptive model for creation of phenotype. Biosystems. 2024;242:105261. 42. Fuyama M. Does the coexistence of literal and figurative meanings in metaphor comprehension yield novel meaning? Empirical testing based on quantum cognition. Front Psychol. 2023;14:1146262. 43. Fuyama M. Estimating a time series of interpretation indeterminacy in reading a short story usi43ng a quantum cognition model. In: Proceedings of the Annual Meeting of the Cognitive Science Society. 2024;46. Retrieved from https://escholarship.org/uc/item/1sh152qk 44. Khrennikov A,ௗIriki A, Basieva I. Constructing a bridge between functioning of oscillatory neuronal networks and quantum like cognition along with quantum inspired computation and AI. arXiv [preprint]. 2025. Available at: 45. Khrennikov A. A pre quantum classical statistical model with infinite dimensional phase space. J Phys A Math Theor. 2005;38(40):9051-9073. 46. Khrennikov A. On the correspondence between classical and quantum models: From statistical mechanics to quantum mechanics. Found Phys. 2006;36:1020-1040. 47. Khrennikov A. Beyond Quantum. Singapore: Pan Stanford Publishing; 2014. 48. Khrennikov A. Characterization of entanglement via non existence of a subquantum random field. Ann Phys (Berl). 2024;536(9):2400035. 49. De Barros JA,ௗSuppes P. Quantum mechanics, interference, and the brain. J Math Psychol. 2009;53(5):306-313. 50. De Barros JA. Quantum like model of behavioral response computation using neural oscillators. Biosystems. 2012;110:171-182. 51. Takahashi T,ௗCheon T. A nonlinear neural population coding theory of quantum cognition and decision making. World J Neurosci. 2012;2(4):183-186. 52. Busemeyer JR,ௗFakhari P, Kvam P. Neural implementation of operations used in quantum cognition. Prog Biophys Mol Biol. 2017;130(A):53-60. 53. Scholes GD. Quantum like states on complex synchronized networks. Proc R Soc A. 2024;480(2295):20240209. 54. Amati G,ௗScholes GD. Quantum information with quantum like bits. Phys Rev A. 2025;111(6):062203. 55. Khrennikov A, Ozawa M, Benninger F, Shor O. Coupling quantum like cognition with the neuronal networks within generalized probability theory. J Math Psychol. 2025;125:102923. 56. Werner RF. Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden variable model. Phys Rev A. 1989;40(8):4277-4281. 57. Zanardi P. Virtual quantum subsystems. Phys Rev Lett. 2001;87(7):077901. 58. Zanardi P,ௗLidar DA, Lloyd S. Quantum tensor product structures are observable induced. Phys Rev Lett. 2004;92(6):060402. 59. Basieva I,ௗKhrennikov A. Conditional probability framework for entanglement and its decoupling from tensor product structure. J Phys A. 2022;55(39):395302. 60. Khrennikov A,ௗBasieva I. Entanglement of observables: Quantum conditional probability approach. Found Phys. 2023;53:84. 61. Huynh L, Hong J, Mian A, Suzuki H, Wu Y, Camtepe S. Quantum inspired machine learning: A survey. arXiv [preprint]. 2023. Available at: 62. EƯenberger F, Carvalho P, Dubinin I, Singer W. The functional role of oscillatory dynamics in neocortical circuits: A computational perspective. Proc Natl Acad Sci U S A. 2025;122(4):e2412830122. 63. Shor O, Glik A, Yaniv Rosenfeld A, Valevski A, Weizman A, Khrennikov A, Benninger F. EEG p adic quantum potential accurately identifies depression, schizophrenia and cognitive decline. PLoS One. 2021;16(8):e0255529. 64. Shor O, Yaniv Rosenfeld A, Valevski A, Weizman A, Khrennikov A, Benninger F. EEG based spatio temporal relation signatures for the diagnosis of depression and schizophrenia. Sci Rep. 2023;13(1):776. 65. Shor O,ௗBenninger F, Khrennikov A. Dendrogramic representation of data: CHSH violation vs nonergodicity. Entropy. 2021;23(8):971. 66. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents-EEG, ECoG, LFP and spikes. Nat Rev Neurosci. 2012;13:407-420. 67. Kolmogorov AN. GrundbegriƯe der Wahrscheinlichkeitsrechnung. Berlin: Springer; 1933. Kolmogorov AN. Foundations of the Theory of Probability. New York: Chelsea; 1956. 68. Khrennikov A. Buonomano against Bell: Nonergodicity or nonlocality? Int J Quantum Inf. 2017;15:1740010. 69. Thomson DJ. Spectrum estimation and harmonic analysis. Proc IEEE. 1982;70:10551096. 70. Percival DB,ௗWalden AT. Spectral analysis for physical applications. Cambridge: Cambridge University Press; 1993. 71. Mitra PP,ௗPesaran B. Analysis of dynamic brain imaging data. Biophys J. 1999;76:691708. 72. Cohen MX. Analyzing Neural Time Series Data. Cambridge (MA): MIT Press; 2014. 73. Khrennikov A. Born's formula from statistical mechanics of classical fields and theory of hitting times. Physica A. 2014;393:207-221. 74. Schrödinger E. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften. 1935;23:807-812, 823-828, 844-849. 75. Anastassiou CA,ௗPerin R, Markram H, Koch C. Ephaptic coupling of cortical neurons. Nat Neurosci. 2011;14(2):217-223. 76. Fröhlich F,ௗMcCormick DA. Endogenous electric fields may guide neocortical network activity. Neuron. 2010;67(1):129-143. 77. Radman T,ௗSu Y, An JH, Parra LC, Bikson M. Spike timing amplifies the eƯect of electric fields on neurons: Implications for endogenous field eƯects. J Neurosci. 2007;27(11):30303036. 78. Pinotsis DA,ௗMiller EK. In vivo ephaptic coupling allows memory network formation. Cereb Cortex. 2023;33(17):9877-9895. 79. Colclough GL,ௗBrookes MJ, Smith SM, Woolrich MW. A symmetric multivariate leakage correction for MEG connectomes. Neuroimage. 2015;117:438-449. 80. Nolte G,ௗBai O, Wheaton L, Mari Z, Vorbach S, Hallett M. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clin Neurophysiol. 2004;115(10):2292-2307. 81. Vinck M,ௗOostenveld R, van Wingerden M, Battaglia F, Pennartz CMA. An improved index of phase synchronization for electrophysiological data in the presence of volume conduction, noise and sample size bias. Neuroimage. 2011;55(4):1548-1565. 82. Friston KJ. Functional and eƯective connectivity: A review. Brain Connect. 2011;1(1):1336. 83. Lachaux JP,ௗRodriguez E, Martinerie J, Varela FJ. Measuring phase synchrony in brain signals. Hum Brain Mapp. 1999;8(4):194-208. 84. Srinivasan R,ௗWinter WR, Ding J, Nunez PL. EEG and MEG coherence: Measures of functional connectivity at distinct spatial scales of neocortical dynamics. J Neurosci Methods. 2007;166(1):41-52. 85. Bruns A. Fourier , Hilbert and wavelet based signal analysis: Are they really diƯerent approaches? J Neurosci Methods. 2004;137(2):321-332. 86. Bastos AM,ௗSchoƯelen JM. A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Front Syst Neurosci. 2016;9:175. 87. Bullmore E,ௗSporns O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 2009;10(3):186-198. 88. Vidal G,ௗWerner RF. Computable measure of entanglement. Phys Rev A. 2002;65:032314. 89. Plenio MB. Logarithmic negativity: A full entanglement monotone that is not convex. Phys Rev Lett. 2005;95:090503. 90. Wootters WK. Entanglement of formation of an arbitrary state of two qubits. Phys Rev Lett. 1998;80(10):2245-2248. 91. Clauser JF,ௗHorne MA, Shimony A, Holt RA. Proposed experiment to test local hidden variable theories. Phys Rev Lett. 1969;23:880-884. 92. Fuyama M,ௗKhrennikov A, Ozawa M. Quantum like cognition and decision making in the light of quantum measurement theoiiiiiry. Philos Trans A. (in press) 2025. Available at: http://arxiv.org/abs/2503.05859 93. Bruza PD,ௗFell L, Hoyte P, Dehdashti S, Obeid A, Gibson A, Moreira C. Contextuality and context sensitivity in probabilistic models of cognition. Cogn Psychol. 2023;140:101529. 94. Dzhafarov EN,ௗKujala JV. Selectivity in probabilistic causality: Where psychology runs into quantum physics. J Math Psychol. 2012;56:54-63. 95. Bruza PD,ௗKitto K, Ramm BJ, Sitbon L. A probabilistic framework for analysing the compositionality of conceptual combinations. J Math Psychol. 2015;67:26-38. i
|
2509.16255
|
RootletSeg: Deep learning method for spinal
rootlets segmentation across MRI contrasts
AUTHORS:
Katerina Krejci, MSc1,2, Jiri Chmelik, PhD1, Sandrine Bédard, MSc2, Falk Eippert, PhD3,
Ulrike Horn, PhD3, Virginie Callot, PhD4,5, Julien Cohen-Adad, PhD2,6,7,8, Jan Valosek,
PhD2,6,9,10
AFFILIATIONS:
1. Department of Biomedical Engineering, FEEC, Brno University of Technology, Brno, Czechia
2. NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC,
Canada
3. Max Planck Research Group Pain Perception, Max Planck Institute for Human Cognitive and
Brain Sciences, Leipzig, Germany
4. Aix Marseille Univ, CNRS, CRMBM, Marseille, France
5. APHM, CHU Timone, Pôle d’Imagerie Médicale, CEMEREM, Marseille, France
6. Mila - Quebec AI Institute, Montreal, QC, Canada
7. Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada
8. Centre de Recherche du CHU Sainte-Justine, Université de Montréal, Montreal, QC, Canada
9. Department of Neurosurgery, Faculty of Medicine and Dentistry, Palacký University Olomouc,
Olomouc, Czechia
10.Department of Neurology, Faculty of Medicine and Dentistry, Palacký University Olomouc,
Olomouc, Czechia
Address correspondence to: J.V. (email: jan.valosek@upol.cz)
ORCID:
Kateřina Krejčí - 0009-0009-5817-4840
Jiří Chmelík - 0000-0001-9950-6279
Sandrine Bédard - 0000-0001-9859-1133
Falk Eippert - 0000-0002-3986-1719
Ulrike Horn - 0000-0001-9119-0468
Virginie Callot - 0000-0003-0850-1742
Julien Cohen-Adad - 0000-0003-3662-9532
Jan Valošek - 0000-0002-7398-4990
1
Abstract
Purpose: To develop a deep learning method for the automatic segmentation of spinal nerve
rootlets on various MRI scans.
Material and Methods: This retrospective study included MRI scans from two open-access
and one private dataset, consisting of 3D isotropic 3T TSE T2-weighted (T2w) and 7T
MP2RAGE (T1-weighted [T1w] INV1 and INV2, and UNIT1) MRI scans. A deep learning
model, RootletSeg, was developed to segment C2-T1 dorsal and ventral spinal rootlets.
Training was performed on 76 scans and testing on 17 scans. The Dice score was used to
compare the model performance with an existing open-source method. Spinal levels derived
from RootletSeg segmentations were compared with vertebral levels defined by
intervertebral discs using Bland-Altman analysis.
Results: The RootletSeg model developed on 93 MRI scans from 50 healthy adults (mean
age, 28.70 years ± 6.53 [SD]; 28 [56%] males, 22 [44%] females) achieved a mean ± SD
Dice score of 0.67 ± 0.09 for T1w-INV2, 0.65 ± 0.11 for UNIT1, 0.64 ± 0.08 for T2w, and 0.62
±
0.10
for
T1w-INV1
contrasts.
Spinal-vertebral
level
correspondence
showed
a progressively increasing rostrocaudal shift, with Bland-Altman bias ranging from 0.00 to
8.15 mm (median difference between level midpoints).
Conclusion: RootletSeg accurately segmented C2-T1 spinal rootlets across MRI contrasts,
enabling the determination of spinal levels directly from MRI scans. The method is
open-source and can be used for a variety of downstream analyses, including lesion
classification, neuromodulation therapy, and functional MRI group analysis.
Keywords: Rootlets, Spinal Cord, MR Imaging, Segmentation, Supervised Learning,
Convolutional Neural Network (CNN)
Summary: The proposed deep learning model accurately segmented the spinal cord nerve
rootlets across different 3D isotropic MRI contrasts, enabling the determination of spinal
levels directly from MRI scans.
2
Key results:
-
The RootletSeg deep learning model was developed for C2−T1 spinal nerve rootlets
segmentation using three MRI datasets comprising T2-weighted, T1-weighted, and
MP2RAGE-UNIT1 scans acquired at 3T and 7T.
-
RootletSeg achieved a mean Dice of 0.65 ± 0.10 (SD), which is comparable to the
previous method while extending the capabilities beyond T2-weighted contrast.
-
The RootletSeg-based analysis of spinal-vertebral level correspondence showed a
gradually increasing shift along the rostro-caudal axis between spinal and vertebral
levels.
List of Abbreviations
fMRI = functional Magnetic Resonance Imaging
MRI = Magnetic Resonance Imaging
PMJ = pontomedullary junction
RMSD = root mean square deviation
SCT = Spinal Cord Toolbox
SD = standard deviation
T2w = T2-weighted
TSE = Turbo Spin Echo
3
Introduction
Spinal rootlets are bundles of nerve fibres forming the spinal nerves that connect the spinal
cord to the peripheral parts of the body. The ability to estimate neurological spinal levels
from nerve rootlets makes them relevant for spinal cord lesion classification (1,2),
neuromodulation therapy (3,4), and functional MRI (fMRI) group analysis (5,6). Because
directly identifying spinal rootlets on MRI scans is both challenging and time-consuming,
spinal cord analyses typically rely on vertebral levels, defined using intervertebral discs, or
they infer spinal levels from vertebral levels. This approach, however, is intrinsically limited
as spinal levels are not necessarily aligned with vertebral bodies (5,7), and there is
considerable inter-individual variability between spinal and vertebral levels (7–12). Spinal
nerve rootlets, therefore, provide an anatomically more relevant and potentially more
accurate method for determining spinal levels, which can, in turn, serve as an alternative
coordinate system for spinal cord analyses, as opposed to the traditional vertebral-level
approach based on intervertebral discs (5). Recently, a method for automatic spinal rootlets
segmentation was proposed, allowing for direct estimation of spinal levels from MRI data
(13). However, that method has three drawbacks: i) it was developed solely on scans
acquired at one field strength and with one contrast (i.e. 3T turbo spin echo [TSE] isotropic
T2-weighted [T2w] scans), ii) it is restricted to dorsal rootlets only (ignoring ventral rootlets),
and iii) it is restricted to a specific range of cervical levels (i.e. spinal levels C2-C8). Other
studies aimed to identify nerve rootlets using diffusion MRI tractography (14) or traced them
manually on high-resolution scans (15). An automatic rootlets segmentation method was
recently proposed also for postmortem feline samples (16,17).
In this work, we (i) extend the existing rootlet segmentation model by incorporating ventral
rootlets, an additional spinal level (thoracic level T1), and additional MRI contrasts derived
from the 7T MP2RAGE sequence (T1w-INV1, T1w-INV2, and UNIT1); and (ii) utilize the
proposed segmentation model to investigate the correspondence between spinal and
vertebral levels in a large cohort of 120 healthy participants. The segmentation method is
open-source, implemented in the sct_deepseg function as part of Spinal Cord Toolbox
(SCT) (18) v7.0 and higher.
4
Materials and Methods
Study Design and Participants
This retrospective study included scans from three MRI datasets of the cervical spinal cord
(Table 1): (i) 3T TSE isotropic T2w scans from the open-access single-site OpenNeuro
ds004507 dataset (19), (ii) 3T TSE isotropic T2w scans from the open-access spine-generic
multi-subject dataset (20), and (iii) 7T isotropic MPRAGE scans from a private single-site
dataset (21,22). For more details on acquisition parameters, please see (19,21,23,24).
Table 1: Characteristics of Study Participants
RootletSeg model development
Variable
OpenNeuro ds004507* spine-generic multi-subject
MP2RAGE**
Participants
7
24
19
MRI scans
12
24
57
Sex
Male
5
12
11
Female
2
12
8
Age (y)
(22.57 ± 0.53)
(29.58 ± 6.53)
(29.84 ± 6.66)
MRI scans in each set
Training set
10
21
45
Test set
2
3
12
MRI manufacturer
Siemens
12
20
57
GE
0
4
0
MRI field strength
3T
12
24
0
7T
0
0
57
Sequence
TSE
TSE
MP2RAGE
Voxel size
(mm)
0.6 × 0.6 × 0.6
0.8 × 0.8 × 0.8***
0.7 × 0.7 × 0.7
Spinal-vertebral level correspondence analysis
Variable
OpenNeuro ds004507* spine-generic multi-subject
MP2RAGE**
Participants
4
105
11
MRI scans
4
105
11
Sex
5
Male
3
52
4
Female
1
53
7
Age (y)
(22.50 ± 0.58)
(29.70 ± 10.18)
(28.82 ± 6.03)
MRI manufacturer
Siemens
4
76
11
GE
0
11
0
Philips
0
18
0
MRI field strength
3T
4
105
0
7T
0
0
11
Sequence
TSE
TSE
MP2RAGE
Voxel size
(mm)
0.6 × 0.6 × 0.6
0.8 × 0.8 × 0.8***
0.7 × 0.7 × 0.7
*The OpenNeuro ds004507 dataset included neutral, flexion and extension neck position
sessions. Neutral and flexion sessions were used for the model development, and neutral
session was used for spinal-vertebral correspondence analysis.
**The MP2RAGE dataset contained 3 co-registered MP2RAGE contrasts (T1w-INV1,
T1w-INV2 and UNIT1) for each subject (in total 57 MRI scans for 19 subjects).
***Voxel size for 2 MRI scans was 0.8 × 0.5 × 0.5 mm
The scans were anonymized and defaced to remove all personally identifiable information
(25,26) and organized according to the BIDS standard (27). The inclusion criteria were being
a healthy participant without any known disease, age >18 years, and coverage of the
cervical spine. Exclusion criteria were the presence of severe imaging artifacts or low
contrast between the cerebrospinal fluid and nerve rootlets for multiple levels. Although the
original datasets include more participants than listed in Table 1, we selected scans based
on overall scan quality (no blurring and ghosting artifacts) and sufficient contrast between
the cerebrospinal fluid and nerve rootlets. Because manual segmentation of nerve rootlets
is difficult and time-consuming due to their complex three-dimensional anatomy (Figure 1)
and the scans’ sub-millimetre resolution, only a subset of participants was included in model
training (see the following sections for details).
6
Figure 1: Spinal rootlets and vertebral anatomy. Spinal levels are inferred from spinal rootlets,
whereas vertebral levels are defined based on adjacent intervertebral discs. Adapted from (7) with
permission from the publisher.
Deep Learning Training Protocol
The nerve rootlets segmentation model was developed using nnUNetv2, a self-configuring
deep learning-based framework (28). Preprocessing with the nnUNetv2 framework involved
reorienting both the input scans and their corresponding reference standard labels to the
Right-to-Left × Posterior-to-Anterior × Inferior-to-Superior (RPI) orientation to ensure
consistency across the dataset, intensity z-score normalization and resampling into 0.7 × 0.7
× 0.7 mm resolution before training. Random spatial transformation (translation, rotation,
scaling), mirroring, Gaussian noise and Gaussian blur, brightness and contrast adjustment,
Gamma transformation and low-resolution simulation were used as data augmentation.
To facilitate the generation of reference standard rootlet labels, the existing segmentation
model was applied (13), followed by manual corrections by consensus of two raters (K.K.
and J.V.) using the FSLeyes image viewer (version 1.12.1; University of Oxford) and the
provided instructions1. We did not analyze the interrater variability, as a previous study
reported a mean coefficient of variation of ≤ 1.45% in spinal level positions when using
rootlet segmentations from different raters to estimate spinal levels (13). Additionally, as the
presence of C1 rootlets differs between individuals, they were not included in reference
standard annotations and model training (11,13,29). We trained the model in four stages by
iteratively adding more MRI scans and contrasts in each stage (Figure 2). First, we extended
the T2w model to segment ventral rootlets by manually annotating them and retraining the
1 https://github.com/ivadomed/model-spinal-rootlets/issues/17
7
model. Then, the T2w model was applied to MP2RAGE-UNIT1 scans. We inverted the
contrast of MP2RAGE-UNIT1 scans to make their contrast closer to the T2w before applying
the model. The model predictions were manually corrected and extended to include T1
rootlets. An initial MP2RAGE model was trained using five scans (MP2RAGE-UNIT1 with
inverted contrast) and tested on the remaining 14 scans from the MP2RAGE dataset. The
obtained segmentations were manually corrected to get a reference standard for all 19
participants. Once all rootlet reference standards were visually inspected and corrected, they
were split into training, validation, and testing sets. Because the MP2RAGE contrasts
(T1w-INV1, T1w-INV2, and UNIT1) are inherently co-registered, a single reference standard
was used for all three contrasts for each participant. To prevent information leakage between
training, validation and testing sets, all MP2RAGE contrasts from each participant were
assigned exclusively to one set.
Then, we trained the model on all three MP2RAGE contrasts (15 participants, resulting in 45
MRI scans) for 2000 epochs. The nnUNetv2 framework suggested a 3D architecture with the
following parameters: 6 stages, instance normalization technique, Leaky ReLU activation
function, batch size 2, patch size 128 × 96 × 192 (RPI), learning rate 0.01, Dice Loss and
Cross-Entropy loss function and Stochastic Gradient Descent with Nesterov Momentum
optimizer. Since training with this architecture setup was not successful at the C2 level, we
tried another experiment with increased patch size in the superior-inferior direction to 352
(i.e., 128 × 96 × 352) so the model can capture a larger spatial context. With this increased
patch size, the model successfully learned to segment the C2 level. The model training was
consequently extended to a multi-contrast approach, adding data from T2w datasets
(spine-generic and OpenNeuro). This model was trained on MP2RAGE and T2w MRI scans
with the increased patch size (i.e., 128 × 96 × 352). A total of 76 scans (from 50 participants)
for 2000 epochs in a 5-fold cross-validation approach were used for an 80/20%
training/validation split. The “production” model, named RootletSeg, was trained on all 76
training data (100/0% training/validation split). The testing set (i.e., scans never used during
the training or validation) included 17 MRI scans: 12 MP2RAGE (4 participants, each with 3
contrasts) and 5 T2w (3 from the SpineGeneric dataset and 2 from OpenNeuro). The same
testing set was used across all five cross-validation folds to enable consistent evaluation of
model performance across varied training/validation splits. The model was compared to the
intermediate T2w single-contrast model. The model’s performance was evaluated using the
Dice score.
8
Figure 2: Overview of our model development. T2-weighted (T2w) scans with reference standards
were first used to train the T2w model (I), which was then applied to MP2RAGE-UNIT1 scans with
inverted contrast. The resulting outputs were manually corrected and served as reference standards
for the training of the initial MP2RAGE model (II). This model was applied to additional UNIT1 scans,
followed by another round of manual corrections. Next, the MP2RAGE model (III), using both the
original and an increased patch size, was trained on all three MP2RAGE contrasts (T1w-INV1,
T1w-INV2, and UNIT1). Finally, T2w and MP2RAGE scans and their corresponding reference
standards were combined into a single dataset and used to train the RootletSeg model.
Spinal-vertebral Levels Correspondence and Level Lengths
The developed model was used to analyze the correspondence between spinal and
vertebral levels. For MP2RAGE data, where each participant has three perfectly aligned
contrasts, the T1w-INV2 contrast was used to avoid repeating the same participants multiple
times. T1w-INV2 was selected due to its superior rootlet visibility and the highest RootletSeg
9
performance among the MP2RAGE contrasts (see Results). From the OpenNeuro dataset,
only participants with a neutral neck position were included to avoid repeating the same
participants across different sessions. From the spine-generic dataset, MRI scans with good
contrast between the cerebrospinal fluid and nerve rootlets were selected. To be included in
the analysis, each scan had to meet the following criteria: good visibility of spinal rootlets (no
blurring and ghosting artifacts and good contrast between the rootlets and cerebrospinal
fluid); coverage from the pontomedullary junction (PMJ) to the T1 vertebra; and available
information on the participant’s height. Combining all three datasets, a total of 120 healthy
participants were used.
Figure 3 illustrates the analysis pipeline for assessing spinal-vertebral level correspondence,
performed using the Spinal Cord Toolbox (SCT) v7.0 (18). Spinal cord segmentations were
available for all T2w MRI scans (19,24), whereas the MP2RAGE scans were segmented
using the contrast-agnostic model (30,31). Spinal levels were estimated as an overlap
between the spinal rootlets and the spinal cord segmentation dilated by 3 pixels (13). The
spinal cord centerline was extracted from the spinal cord segmentation, and the posterior
tips of intervertebral discs (9) were projected to the centerline. Vertebral levels were
determined as the segments between adjacent intervertebral discs. PMJ was detected
automatically (32) and used as a reference point for further measurements.
The distances between the PMJ and the midpoints of the spinal and vertebral levels, as well
as the level lengths, were measured along the spinal cord centerline. Spinal and vertebral
level lengths were defined as the distance between the rostro-caudal slices of each level. To
account for individual differences in body size, the distances were normalized by the
participant’s height and then multiplied by the cohort’s median height to preserve millimetre
units. The normalized distances were approximated by normal distributions using the
probability density function to assess the overlap between spinal and vertebral levels. The
correspondence between spinal and vertebral levels was evaluated using the Wilcoxon
signed-rank test, the Bland-Altman analysis (Equation 1) and root mean square distance
(RMSD, Equation 2), separately for each level pair (e.g., vertebral level C2 and spinal level
C3) across participants. We used a non-parametric variant of Bland-Altman analysis
because the differences between spinal and vertebral level midpoint positions did not pass
the Shapiro-Wilk normality test. P < .05 was considered statistically significant.
𝐵𝑙𝑎𝑛𝑑𝐴𝑙𝑡𝑚𝑎𝑛 𝑚𝑒𝑑𝑖𝑎𝑛 = 𝑚𝑒𝑑𝑖𝑎𝑛(𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙 (1) −𝑦𝑠𝑝𝑖𝑛𝑎𝑙(1), 𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙 (𝑖+1) −𝑦𝑠𝑝𝑖𝑛𝑎𝑙(𝑖+1), ... , 𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙 (𝑛) −𝑦𝑠𝑝𝑖𝑛𝑎𝑙(𝑛))
Equation 1
: distance between the PMJ and the spinal level midpoint
𝑦𝑠𝑝𝑖𝑛𝑎𝑙(𝑖)
: distance between the PMJ and the vertebral level midpoint
𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙(𝑖)
n … number of participants
10
𝑅𝑀𝑆𝐷=
𝑖=1
𝑛
∑
(𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙(𝑖) − 𝑦𝑠𝑝𝑖𝑛𝑎𝑙(𝑖))
2
𝑛
Equation 2
: distance between the PMJ and the spinal level midpoint
𝑦𝑠𝑝𝑖𝑛𝑎𝑙(𝑖)
: distance between the PMJ and the vertebral level midpoint
𝑦𝑣𝑒𝑟𝑡𝑒𝑏𝑟𝑎𝑙(𝑖)
n: number of participants
11
Figure 3: Spinal-vertebral levels correspondence. Analysis overview: the spinal cord and nerve
rootlets were segmented, and the intervertebral discs and the pontomedullary junction (PMJ) were
identified. Then, rootlets and intervertebral discs were used to obtain the spinal and vertebral levels,
respectively. Distances from level midpoints to the PMJ were measured along the centerline.
12
Results
Patient Characteristics
A total of 134 healthy adult participants (mean age, 29.3 years ± 5.9 [SD]; 69 [51.5%] males,
65 [48.5%] females) with 177 MRI scans from three datasets were included in this study.
Participants were scanned across scanners from different manufacturers (Siemens, GE,
Philips) with different field strengths (3T, 7T). For segmentation model development, we
used 50 participants with 93 MRI scans (n=12 OpenNeuro, n=24 SpineGeneric and n=57
MP2RAGE dataset) with 76 scans in the training set, and 17 scans in the testing set. For
spinal-vertebral correspondence analysis, we used 120 MRI scans (n=4 OpenNeuro, n=105
SpineGeneric and n=11 MP2RAGE) from 120 participants (mean age, 29.4 years ± 5.8 [SD];
59 [49.2%] males, 61 [51.8%] females); 14 participants were not included because their MRI
scans did not capture the PMJ and/or the scans did not capture the whole T1 vertebra.
Segmentation Model
The RootletSeg nnUNetv2 3D model achieved an overall Dice score of 0.65 ± 0.10 (mean ±
SD across levels, participants, and contrasts). Lower Dice for the MP2RAGE scans at levels
C2 and C3 was due to the presence of image artifacts (red arrows in Figure 4), possibly
caused by B1+ inhomogenities. Interestingly, despite the artifacts, the model was able to
segment some rootlets at these levels, even though they were not included in the reference
standard (compare the reference standard and model output for the T1w-INV2 MRI scan at
the C2 level in Figure 4). Dice scores for individual contrasts were (mean ± SD across levels
and testing scans): 0.67 ± 0.09 for T1w-INV2, 0.65 ± 0.11 for UNIT1, 0.64 ± 0.08 for T2w,
and 0.62 ± 0.10 for T1w-INV1 (Figure 5a). Figure 5b shows the Dice scores across levels on
T2w MRI scans, in comparison with an existing model developed exclusively on T2w scans.
RootletSeg demonstrated comparable results for rootlets C2-C8 to the T2w model (Dice of
0.65 ± 0.08 vs 0.64 ± 0.08). Level T1 was excluded from the Dice calculation, as the T2w
model was not trained at this level.
13
Figure 4: Coronal and axial views of representative rootlet segmentations. (a) Segmentations on
a T2w MRI scan. (b) Segmentations on a T1w-INV1 MRI scan. (c) Segmentations on a T1w-INV2
MRI scan. (d) Segmentations on a UNIT1 MRI scan. The red arrows point to an artifact possibly
caused by B1+ inhomogeneities. Rows represent individual rootlet levels from C2 to T1. Numbers
represent the mean ± SD Dice score across participants for each spinal level compared to the
reference standard labels.
Figure 5: Quantitative performance of the RootletSeg model. (a) Dice score computed on 17
testing images across different contrasts (n=4 T1w-INV1, n=4 T1w-INV2, n=4 UNIT1 and n=5 T2w).
(b) Comparison of Dice score between the intermediate single-contrast T2w model and the proposed
14
RootletSeg model, on T2w MRI scans. Despite the RootletSeg model being trained on multiple
contrasts, it performs similarly well compared to the T2w model. Note that the T2w model was
developed only for spinal rootlets C2-C8; thus, its Dice score for the level T1 is zero. The horizontal
line within each box represents the median, and the box edges mark the 25 % to 75 % interquartile
range. Whiskers extend 1.5 times the interquartile range, and the small black circles indicate outliers.
Spinal-vertebral Level Correspondence
Figure 6a illustrates the correspondence between spinal and vertebral levels. A gradually
increasing shift along the rostrocaudal axis is apparent between the distributions of spinal
and vertebral levels. For instance, the distribution for vertebral level C2 overlaps with that of
spinal level C3, whereas vertebral level C7 is shifted caudally relative to spinal level C8. The
Wilcoxon signed-rank test (performed for each spinal-vertebral level pair separately)
revealed that this shift was statistically significant (P < .05) for all levels below the spinal
level C5 and the vertebral level C4.
Figure 6b presents the Bland-Altman analysis comparing each pair of levels (e.g., vertebral
level C2 vs. spinal level C3), based on the distance of the level midpoints from the PMJ,
normalized by participant height. The Bland-Altman analysis quantitatively assessed a 0.00
mm bias term (median difference between spinal and vertebral level midpoint positions)
between spinal level C3 and vertebral level C2. In contrast, the bias term is higher for the
lower levels (up to 8.15 mm for spinal level T1 and vertebral level T1). Table 2 shows the
RMSD between spinal and vertebral level midpoints for each level pair. The RMSD value
was lowest for spinal level C2 and vertebral level C3 (3.60 mm) and highest for spinal level
T1 and vertebral level T1 (9.36 mm).
15
Figure 6: Spinal and vertebral level correspondence. (a) Spinal and vertebral level midpoints
approximated by normal distributions, separately for each level. The midpoints were normalized by
participants’ height and scaled by median height. Values in brackets represent mean ± SD distance to
the pontomedullary junction (PMJ) in millimetres. Spinal levels are in solid lines, vertebral levels in
dashed lines. Significance (Wilcoxon signed-rank test): * P < .05, ns not significant. Notice that the
distribution for the spinal level C3 (solid orange) corresponds to the vertebral level C2 (dashed blue),
while the distribution for the spinal level C8 (solid pink) is shifted cranially relative to the vertebral level
C7 (dashed brown). We note that there are anatomically seven vertebral levels but eight spinal levels.
(b) Bland-Altman plot. Black dashed lines show the median difference between distances from the
PMJ to spinal and vertebral levels midpoints, and colored dashed lines show 2.5 and 97.5 percentiles.
The points correspond to individual participants. VL = vertebral level; SL = spinal level.
16
Table 2: Root mean square distance (RMSD) between spinal and vertebral level midpoint distances
to the pontomedullary junction (PMJ). Notice that RMSD is lower for cranial levels (i.e., RMSD of 3.60
mm between the vertebral level C2 and the spinal level C3) relative to caudal levels (i.e., RMSD of
9.36 mm between the vertebral level T1 and the spinal level T1). We note that there are anatomically
seven vertebral levels but eight spinal levels.
Spinal and Vertebral Level Rostro-caudal Length
Table 3 presents the rostro-caudal lengths (mean ± SD across participants) of each spinal
level in 120 healthy participants. The table also includes a comparison with lengths reported
in an MRI-based study (7) and a post-mortem study (33).
Table 3: Rostro-caudal lengths of individual spinal levels and results of MRI-based study (7) and
post-mortem study (33). The table shows the mean rostro-caudal length ± SD in millimetres.
17
Vertebral | Spinal level
RMSD [mm]
C2 | C3
3.60
C3 | C4
3.35
C4 | C5
3.55
C5 | C6
4.21
C6 | C7
5.09
C7 | C8
6.33
T1 | T1
9.36
Spinal
level
This work
(120 participants)
Cadotte et al., 2015
(20 participants)
Kobayashi et al., 2015
(11 participants)
C2
7.1 ± 2.2
-
-
C3
11.7 ± 2.5
10.5 ± 2.2
12.1 ± 1.2
C4
8.9 ± 2.0
9.9 ± 1.3
12.5 ± 1.1
C5
8.7 ± 1.7
10.5 ± 1.5
12.6 ± 2.8
C6
9.1 ± 1.6
9.7 ± 1.6
12.7 ± 1.6
C7
9.8 ± 1.7
9.4 ± 1.4
11.8 ± 1.6
C8
11.8 ± 2.5
9.6 ± 1.4
10.6 ± 1.6
T1
13.1 ± 3.7
-
-
Table 4 shows the rostro-caudal lengths (mean ± SD across participants) of the vertebral
levels, along with a comparison to a post-mortem study (34).
Table 4: Rostro-caudal lengths of individual vertebral levels and results of the post-mortem study (34)
(showing the mean rostro-caudal length ± SD in millimetres)
18
Spinal
level
This work
(120 participants)
Busscher et al., 2010
(6 participants)
C2
14.9 ± 1.9
-
C3
17.0 ± 1.8
14.2 ± 0.7
C4
17.0 ± 1.6
14.5 ± 1.3
C5
15.9 ± 1.5
13.4 ± 1.1
C6
15.3 ± 1.7
14.0 ± 0.5
C7
16.2 ± 1.7
15.7 ± 0.9
T1
20.0 ± 1.8
17.3 ± 0.8
Discussion
In this study, we (i) introduced RootletSeg, a deep learning model for segmentation of C2-T1
ventral and dorsal spinal nerve rootlets from T1w, T2w and MP2RAGE-UNIT1 3T and 7T
MRI scans, which extended the previous rootlet segmentation model (13) by incorporating
ventral rootlets, an additional thoracic T1 spinal level, and additional 7T MP2RAGE-derived
contrasts; and (ii) investigate the correspondence between spinal and vertebral levels in
a large cohort of 120 healthy participants. The segmentation model demonstrated stable
performance across participants, MRI contrasts, and rootlet levels, thus facilitating the
cumbersome and time-intensive manual rootlets annotation process. The analysis of
spinal-vertebral level correspondence showed a gradually increasing shift in the
rostro-caudal axis between spinal and vertebral levels and higher variability in level
localization across participants with lower levels. The segmented nerve rootlets can be used
as an alternative to commonly used intervertebral discs in various applications, including
lesion classification based on neurological levels and registration of individual scans to
a template for group-level analysis (5,6).
As spinal nerve rootlets are fine structures with submillimeter dimensions and a complex
anatomy that varies across participants, their segmentation is challenging even for expert
raters. Despite these difficulties, the proposed model achieved a stable performance across
four contrasts (T2w, T1w-INV1, T1w-INV2, and UNIT1) and different levels (C2-T1). The
mean Dice for T2w data was higher for upper cervical nerve rootlets than lower cervical
nerve rootlets (C2 mean Dice: 0.68 ± 0.08 vs. T1 mean Dice: 0.57 ± 0.09), possibly due to
the lower contrast between cerebrospinal fluid and rootlets and higher rootlets angulation in
the lower levels (12). Compared to the intermediate single-contrast T2w model, RootletSeg
achieved a comparable Dice of 0.65 ± 0.08 vs 0.64 ± 0.08, demonstrating no loss in
performance while extending the model capabilities beyond T2w contrast. Although the
reported Dice score may appear lower compared to other segmentation tasks, such as
spinal cord (31,35), where Dice scores commonly reach 0.9, we note that the relatively low
Dice for rootlet segmentation is due to the distinct anatomy and size of rootlets compared to
larger structures like the spinal cord. Spinal rootlets are small structures with complex
three-dimensional anatomy, typically having only 2–3 voxel width in MRI scans with
submillimeter in-plane resolution.
The analysis of spinal and vertebral level correspondence using the RootletSeg model
showed that the distribution of spinal level C3 midpoint positions corresponds to that of
vertebral level C2, and similarly for spinal level C4 and vertebral level C3. The
correspondence became less consistent at lower levels, as indicated by a statistically
significant shift between spinal and vertebral levels, leading to larger shifts between spinal
19
and vertebral midpoint positions. Moreover, SD of the level midpoint distances from the PMJ
increases in the lower levels (spinal level C2 SD = 2.65 mm vs. spinal level T1 SD = 6.64
mm), resulting in broader and flatter distributions, indicating increasing inter-subject
variability in level positions. This is consistent with prior MRI and post-mortem reports (7,8)
and anatomical textbooks that neurological spinal levels are “shifted” relative to vertebral
levels and that this shift increases at more caudal levels. Similar to a previous 3T MRI study
(7) that used manually defined landmarks, the Bland-Altman analysis showed higher
variability in the position of lower levels across the participants. In our study, the analysis
was extended to include levels C2 and T1, and we used 6 times more data. Additionally, we
used participants' height to normalize our measurements to take into account inter-subject
variability. We also analyzed the level correspondence using RMSD, which confirmed
a higher shift for more caudal levels by higher RMSD values compared to more cranial
levels. The difference between the Bland-Altman and the RMSD analyses (e.g.,
Bland-Altman bias of 0.00 mm and RMSD of 3.60 mm for vertebral level C2 and spinal level
C3) is due to methodological differences in the calculation − in the Bland-Altman analysis,
we quantitatively considered the correspondence according to the median difference
between vertebral-spinal midpoint positions, whereas the RMSD was calculated as the mean
squared difference. A post-mortem study (8), performed manually on 16 cadavers, found
that spina-vertebral correspondence differs by one level in the C3 to C5 spinal region, i.e.,
vertebral level C2 corresponds to spinal level C3 and further increases in the lower levels up
to two level differences. It needs to be noted that it is difficult to directly compare the findings
from post-mortem studies with those in our in vivo MRI study due to the inherent
characteristics of ex vivo measures, such as tissue shrinking due to post-fixation, and the
altered shape of the excised spinal cord without the surrounding cerebrospinal fluid and
dura.
Measured rostrocaudal spinal level lengths obtained in our study showed slightly higher SD
compared to an MRI-based study (7) and a post-mortem study (33). This might likely be due
to the larger cohort in our study capturing broader population variability, demographic
differences, and differences in the acquisition protocol. Due to the lack of MRI studies on
vertebral level lengths, our results were compared with a post-mortem study (34), which
measured vertebral levels from CT scans in six cadavers. For levels C3 to T1, our findings
show similar results to the study. However, they measured the length of vertebral bodies,
whereas our analysis was based on intervertebral disc positions. This different
methodological approach likely contributes to the average difference of 2.1 mm observed
across levels. Additionally, the small sample size (six cadavers in the study) may not
adequately capture the population variability. Other factors that may account for the
20
demographic differences and positional changes of the spine structures between living
participants and cadavers.
This study had limitations. The proposed model was trained and tested solely on healthy
participants in the C2-T1 region and only with isotropic MRI scans. Scans were selected
based on overall image quality, and scans with extensive blurring and ghosting artifacts were
excluded to allow for reliable reference standard labels creation. Extending the evaluation to
participants with pathologies and the thoraco-lumbar region would be valuable. However,
this remains challenging due to the lower contrast typically present in pathological areas,
such as a narrowed spinal canal. Additionally, in the thoraco-lumbar spinal cord, rootlets are
more difficult to isolate because of their steeper angulation, reduced space within the spinal
canal, potential image artifacts due to respiratory motion, and lower signal-to-noise ratio at
7T. These factors make it difficult to obtain reliable reference standard labels. For the
spinal-vertebral correspondence analysis, we used the participants' height to account for
potential biological differences among individuals. The spinal cord or spine length could also
be considered (10), but MRI data typically does not cover the entire spine. We performed the
level correspondence analysis on a population of healthy adults in the cervical region only,
without distinguishing differences between males and females and between different age
groups. In the caudal region, the angulation of rootlets changes and a single axial image
slice can contain rootlets from multiple spinal levels, making it difficult to reliably infer spinal
levels using the intersection of rootlet and spinal cord segmentations. To address this
increased complexity in these regions, it would be advantageous to propose a more robust
method for obtaining spinal levels from rootlets segmentation.
In conclusion, this study presented RootletSeg, a deep learning model for C2-T1 spinal
rootlets segmentation on T1w, T2w and MP2RAGE-UNIT1 MRI scans. The segmentation
method is open-source, implemented in the sct_deepseg function as part of Spinal Cord
Toolbox v7.0 and higher (18). As the RootletSeg model allows for inferring the spinal levels
directly from MRI, it can facilitate various downstream analyses, including lesion
classification, neuromodulation therapy, and fMRI group analysis.
21
Data and Code Availability Statement
The
analysis
scripts
are
open
source
and
available
at:
https://github.com/ivadomed/model-spinal-rootlets/releases/tag/r20250917. The packaged
and ready-to-use RootletSeg model can be applied to custom data via the sct_deepseg
rootlets command as part of the Spinal Cord Toolbox (SCT) v7.0 and higher:
https://github.com/spinalcordtoolbox/spinalcordtoolbox/releases/tag/7.0.
The
data
come
from
open-access
datasets
and
can
be
accessed
at
https://openneuro.org/datasets/ds004507/versions/1.1.1
and
https://github.com/spine-generic/data-multi-subject/tree/r20250314.
The
data from the
private MP2RAGE dataset will be shared upon reasonable request.
Acknowledgments
We thank Mathieu Guay-Paquet, Joshua Newton, and Kalum Ost for their assistance with
the management of the datasets and the implementation of the algorithm in the Spinal Cord
Toolbox. We thank Nathan Molinier for the help with organizing the dataset according to the
BIDS standard. We thank Louis-Thomas Lapointe for the help with the data annotation and
preliminary model training.
Funding
Funded by the Canada Research Chair in Quantitative Magnetic Resonance Imaging
[CRC-2020-00179], the Canadian Institute of Health Research [PJT-190258, PJT-203803],
the Canada Foundation for Innovation [32454, 34824], the Fonds de Recherche du Québec -
Santé [322736, 324636], the Natural Sciences and Engineering Research Council of
Canada [RGPIN-2019-07244], the Canada First Research Excellence Fund (IVADO and
TransMedTech), the Courtois NeuroMod project, the Quebec BioImaging Network [5886,
35450], INSPIRED (Spinal Research, UK; Wings for Life, Austria; Craig H. Neilsen
Foundation, USA), Mila - Tech Transfer Funding Program. JV received funding from the
European Union's Horizon Europe research and innovation programme under the Marie
Skłodowska-Curie grant agreement No 101107932 and is supported by the Ministry of
Health of the Czech Republic (no. NU22-04-00024). SB is supported by the Natural
Sciences and Engineering Research Council of Canada, NSERC, Canada Graduate
Scholarships — Doctoral program. Computational resources were provided by the e-INFRA
CZ project (ID:90254), supported by the Ministry of Education, Youth and Sports of the
Czech Republic. FE received funding from the European Research Council (under the European
22
Union’s Horizon 2020 research and innovation programme; grant agreement No 758974) and the
Max Planck Society.
23
References
1. Mohajeri Moghaddam S, Bhatt AA. Location, length, and enhancement: systematic
approach to differentiating intramedullary spinal cord lesions. Insights Imaging.
2018;9(4):511–526.
2. Ahuja CS, Wilson JR, Nori S, et al. Traumatic spinal cord injury. Nat Rev Dis Primers.
Springer Science and Business Media LLC; 2017;3(1). doi: 10.1038/nrdp.2017.18.
3. Vallejo R MD, PhD. Neuromodulation of the cervical spinal cord in the treatment of
chronic intractable neck and upper extremity pain: A case series and review of the
literature. Pain Physician. American Society of Interventional Pain Physicians;
2007;2;10(3;2):305–311.
4. Rowald A, Komi S, Demesmaeker R, et al. Activity-dependent spinal cord
neuromodulation rapidly restores trunk and leg motor functions after complete paralysis.
Nat Med. Springer Science and Business Media LLC; 2022;28(2):260–271.
5. Kinany N, Pirondini E, Micera S, Van De Ville D. Spinal Cord fMRI: A New Window into
the Central Nervous System. Neuroscientist. 2023;29(6):715–731.
6. Bédard S, Valošek J, Oliva V, Weber KA II, Cohen-Adad J. Rootlets-based registration
to
the
spinal
cord
PAM50
template.
Imaging
Neuroscience.
2025;
doi:
10.1162/IMAG.a.123.
7. Cadotte DW, Cadotte A, Cohen-Adad J, et al. Characterizing the location of spinal and
vertebral levels in the human cervical spinal cord. AJNR Am J Neuroradiol.
2015;36(4):803–810.
8. Kim JH, Lee CW, Chun KS, Shin WH, Bae H-G, Chang JC. Morphometric Relationship
between the Cervicothoracic Cord Segments and Vertebral Bodies. J Korean Neurosurg
Soc. 2012;52(4):384–390.
9. Ullmann E, Pelletier Paquette JF, Thong WE, Cohen-Adad J. Automatic labeling of
vertebral levels using a robust template-based approach. Int J Biomed Imaging.
2014;2014:719520.
10. Frostell A, Hakim R, Thelin EP, Mattsson P, Svensson M. A Review of the Segmental
Diameter of the Healthy Human Spinal Cord. Front Neurol. 2016;7:238.
11. Diaz E, Morales H. Spinal Cord Anatomy and Clinical Syndromes. Semin Ultrasound CT
MR. 2016;37(5):360–371.
12. Mendez A, Islam R, Latypov T, et al. Segment-Specific Orientation of the Dorsal and
Ventral Roots for Precise Therapeutic Targeting of Human Spinal Cord. Mayo Clin Proc.
2021;96(6):1426–1437.
13. Valošek J, Mathieu T, Schlienger R, Kowalczyk OS, Cohen-Adad J. Automatic
segmentation of the spinal cord nerve rootlets. Imaging Neurosci (Camb). MIT Press;
2024;2:1–14.
14. Dauleac C, Frindel C, Pélissou-Guyotat I, et al. Full cervical cord tractography: A new
method for clinical use. Front Neuroanat. 2022;16:993464.
15. Liu J, Zhang W, Zhou Y, Xu L, Chu Y-H, Jia F. An open-access lumbosacral spine MRI
24
dataset with enhanced spinal nerve root structure resolution. Sci Data. 2024;11(1):1131.
16. Fasse A, Newton T, Liang L, et al. A novel CNN-based image segmentation pipeline for
individualized feline spinal cord stimulation modeling. J Neural Eng. IOP Publishing;
2024;21(3):036032.
17. Liang L, Fasse A, Damiani A, et al. SpIC3D imaging: Spinal In-situ Contrast 3D imaging.
bioRxiv. 2025. doi: 10.1101/2025.03.05.641747.
18. De Leener B, Lévy S, Dupont SM, et al. SCT: Spinal Cord Toolbox, an open-source
software for processing spinal cord MRI data. Neuroimage. 2017;145(Pt A):24–43.
19. Bédard S, Bouthillier M, Cohen-Adad J. Pontomedullary junction as a reference for
spinal cord cross-sectional area: validation across neck positions. Sci Rep.
2023;13(1):13527.
20. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Author Correction: Open-access
quantitative MRI data of the spinal cord and reproducibility across participants, sites and
manufacturers. Sci Data. 2021;8(1):251.
21. Horn U, Vannesjo SJ, Gross-Weege N, et al. Ultra-high-field fMRI reveals layer-specific
responses in the human spinal cord. bioRxiv. 2025. doi: 10.1101/2025.07.17.665316.
22. Massire A, Taso M, Besson P, Guye M, Ranjeva J-P, Callot V. High-resolution
multi-parametric quantitative magnetic resonance imaging of the human cervical spinal
cord at 7T. Neuroimage. 2016;143:58–69.
23. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Generic acquisition protocol for
quantitative MRI of the spinal cord. Nat Protoc. 2021;16(10):4611–4632.
24. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Open-access quantitative MRI data
of the spinal cord and reproducibility across participants, sites and manufacturers. Sci
Data. 2021;8(1):219.
25. Li X, Morgan PS, Ashburner J, Smith J, Rorden C. The first step for neuroimaging data
analysis: DICOM to NIfTI conversion. J Neurosci Methods. Elsevier; 2016;264:47–56.
26. Gulban OF, Nielson D, Poldrack R, et al. poldracklab/pydeface: v2.0.0. Zenodo; 2019.
doi: 10.5281/ZENODO.3524401.
27. Gorgolewski KJ, Auer T, Calhoun VD, et al. The brain imaging data structure, a format
for organizing and describing outputs of neuroimaging experiments. Sci Data.
2016;3:160044.
28. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring
method for deep learning-based biomedical image segmentation. Nat Methods.
2021;18(2):203–211.
29. Tubbs RS, Loukas M, Slappey JB, Shoja MM, Oakes WJ, Salter EG. Clinical anatomy of
the C1 dorsal root, ganglion, and ramus: a review and anatomical study. Clin Anat.
2007;20(6):624–627.
30. Bédard S, Karthik EN, Tsagkas C, et al. Towards contrast-agnostic soft segmentation of
the spinal cord. Med Image Anal. 2025;101:103473.
31. Karthik EN, Sandrine B, Jan V, et al. Monitoring morphometric drift in lifelong learning
segmentation of the spinal cord. arXiv [cs.CV]. 2025. http://arxiv.org/abs/2505.01364.
25
32. Gros C, De Leener B, Dupont SM, et al. Automatic spinal cord localization, robust to
MRI contrasts using global curve optimization. Med Image Anal. 2018;44:215–227.
33. Kobayashi R, Iizuka H, Nishinome M, Iizuka Y, Yorifuji H, Takagishi K. A cadaveric study
of the cervical nerve roots and spinal segments. Eur Spine J. 2015;24(12):2828–2831.
34. Busscher I, Ploegmakers JJW, Verkerke GJ, Veldhuizen AG. Comparative anatomical
dimensions
of
the
complete
human
and
porcine
spine.
Eur
Spine
J.
2010;19(7):1104–1114.
35. Gros C, De Leener B, Badji A, et al. Automatic segmentation of the spinal cord and
intramedullary
multiple
sclerosis
lesions
with
convolutional
neural networks.
Neuroimage. 2019;184:901–915.
26
|
RootletSeg: Deep learning method for spinal rootlets segmentation across MRI contrasts AUTHORS: Katerina Krejci, MSc1,2, Jiri Chmelik, PhD1, Sandrine Bédard, MSc2, Falk Eippert, PhD3, Ulrike Horn, PhD3, Virginie Callot, PhD4,5, Julien Cohen-Adad, PhD2,6,7,8, Jan Valosek, PhD2,6,9,10 AFFILIATIONS: 1. 2. NeuroPoly Lab, 3. Max Planck Research Group Pain Perception, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany 4. Aix Marseille Univ, CNRS, CRMBM, Marseille, France 5. APHM, CHU Timone, Pôle d'Imagerie Médicale, CEMEREM, Marseille, France 6. Mila - Quebec AI Institute, Montreal, QC, Canada 7. Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada 8. Centre de Recherche du CHU Sainte-Justine, Université de Montréal, Montreal, QC, Canada 9. ý University Olomouc, Olomouc, Czechia 10. ý University Olomouc, Olomouc, Czechia Address correspondence to: J.V. (email: ) ORCID: Kateřina Krejčí - 0009-0009-5817-4840 Jiří Chmelík - 0000-0001-9950-6279 Sandrine Bédard - 0000-0001-9859-1133 Falk Eippert - 0000-0002-3986-1719 Ulrike Horn - 0000-0001-9119-0468 Virginie Callot - 0000-0003-0850-1742 Julien Cohen-Adad - 0000-0003-3662-9532 Jan Valošek - 0000-0002-7398-4990 1 Abstract Purpose: To develop a deep learning method for the automatic segmentation of spinal nerve rootlets on various MRI scans. Material and Methods: This retrospective study included MRI scans from two open-access and one private dataset, consisting of 3D isotropic 3T TSE T2-weighted (T2w) and 7T MP2RAGE (T1-weighted [T1w] INV1 and INV2, and UNIT1) MRI scans. A deep learning model, RootletSeg, was developed to segment C2-T1 dorsal and ventral spinal rootlets. Training was performed on 76 scans and testing on 17 scans. The Dice score was used to compare the model performance with an existing open-source method. Spinal levels derived from RootletSeg segmentations were compared with vertebral levels defined by intervertebral discs using Bland-Altman analysis. Results: The RootletSeg model developed on 93 MRI scans from 50 healthy adults (mean age, 28.70 years ± 6.53 [SD]; 28 [56%] males, 22 [44%] females) achieved a mean ± SD Dice score of 0.67 ± 0.09 for T1w-INV2, 0.65 ± 0.11 for UNIT1, 0.64 ± 0.08 for T2w, and 0.62 ± 0.10 for T1w-INV1 contrasts. Spinal-vertebral level correspondence showed a progressively increasing rostrocaudal shift, with Bland-Altman bias ranging from 0.00 to 8.15 mm (median difference between level midpoints). Conclusion: RootletSeg accurately segmented C2-T1 spinal rootlets across MRI contrasts, enabling the determination of spinal levels directly from MRI scans. The method is open-source and can be used for a variety of downstream analyses, including lesion classification, neuromodulation therapy, and functional MRI group analysis. Keywords: Rootlets, Spinal Cord, MR Imaging, Segmentation, Supervised Learning, Convolutional Neural Network (CNN) Summary: The proposed deep learning model accurately segmented the spinal cord nerve rootlets across different 3D isotropic MRI contrasts, enabling the determination of spinal levels directly from MRI scans. 2 Key results: - The RootletSeg deep learning model was developed for C2-T1 spinal nerve rootlets segmentation using three MRI datasets comprising T2-weighted, T1-weighted, and MP2RAGE-UNIT1 scans acquired at 3T and 7T. - RootletSeg achieved a mean Dice of 0.65 ± 0.10 (SD), which is comparable to the previous method while extending the capabilities beyond T2-weighted contrast. - The RootletSeg-based analysis of spinal-vertebral level correspondence showed a gradually increasing shift along the rostro-caudal axis between spinal and vertebral levels. List of Abbreviations fMRI = functional Magnetic Resonance Imaging MRI = Magnetic Resonance Imaging PMJ = pontomedullary junction RMSD = root mean square deviation SCT = Spinal Cord Toolbox SD = standard deviation T2w = T2-weighted TSE = Turbo Spin Echo 3 Introduction Spinal rootlets are bundles of nerve fibres forming the spinal nerves that connect the spinal cord to the peripheral parts of the body. The ability to estimate neurological spinal levels from nerve rootlets makes them relevant for spinal cord lesion classification (1,2), neuromodulation therapy (3,4), and functional MRI (fMRI) group analysis (5,6). Because directly identifying spinal rootlets on MRI scans is both challenging and time-consuming, spinal cord analyses typically rely on vertebral levels, defined using intervertebral discs, or they infer spinal levels from vertebral levels. This approach, however, is intrinsically limited as spinal levels are not necessarily aligned with vertebral bodies (5,7), and there is considerable inter-individual variability between spinal and vertebral levels (7-12). Spinal nerve rootlets, therefore, provide an anatomically more relevant and potentially more accurate method for determining spinal levels, which can, in turn, serve as an alternative coordinate system for spinal cord analyses, as opposed to the traditional vertebral-level approach based on intervertebral discs (5). Recently, a method for automatic spinal rootlets segmentation was proposed, allowing for direct estimation of spinal levels from MRI data (13). However, that method has three drawbacks: i) it was developed solely on scans acquired at one field strength and with one contrast (i.e. 3T turbo spin echo [TSE] isotropic T2-weighted [T2w] scans), ii) it is restricted to dorsal rootlets only (ignoring ventral rootlets), and iii) it is restricted to a specific range of cervical levels (i.e. spinal levels C2-C8). Other studies aimed to identify nerve rootlets using diffusion MRI tractography (14) or traced them manually on high-resolution scans (15). An automatic rootlets segmentation method was recently proposed also for postmortem feline samples (16,17). In this work, we (i) extend the existing rootlet segmentation model by incorporating ventral rootlets, an additional spinal level (thoracic level T1), and additional MRI contrasts derived from the 7T MP2RAGE sequence (T1w-INV1, T1w-INV2, and UNIT1); and (ii) utilize the proposed segmentation model to investigate the correspondence between spinal and vertebral levels in a large cohort of 120 healthy participants. The segmentation method is open-source, implemented in the sct_deepseg function as part of Spinal Cord Toolbox (SCT) (18) v7.0 and higher. 4 Materials and Methods Study Design and Participants This retrospective study included scans from three MRI datasets of the cervical spinal cord (Table 1): (i) 3T TSE isotropic T2w scans from the open-access single-site OpenNeuro ds004507 dataset (19), (ii) 3T TSE isotropic T2w scans from the open-access spine-generic multi-subject dataset (20), and (iii) 7T isotropic MPRAGE scans from a private single-site dataset (21,22). For more details on acquisition parameters, please see (19,21,23,24). Table 1: Characteristics of Study Participants RootletSeg model development Variable OpenNeuro ds004507* spine-generic multi-subject MP2RAGE** Participants 7 24 19 MRI scans 12 24 57 Sex Male 5 12 11 Female 2 12 8 Age (y) (22.57 ± 0.53) (29.58 ± 6.53) (29.84 ± 6.66) MRI scans in each set Training set 10 21 45 Test set 2 3 12 MRI manufacturer Siemens 12 20 57 GE 0 4 0 MRI field strength 3T 12 24 0 7T 0 0 57 Sequence TSE TSE MP2RAGE Voxel size (mm) 0.6 × 0.6 × 0.6 0.8 × 0.8 × 0.8*** 0.7 × 0.7 × 0.7 Spinal-vertebral level correspondence analysis Variable OpenNeuro ds004507* spine-generic multi-subject MP2RAGE** Participants 4 105 11 MRI scans 4 105 11 Sex 5 Male 3 52 4 Female 1 53 7 Age (y) (22.50 ± 0.58) (29.70 ± 10.18) (28.82 ± 6.03) MRI manufacturer Siemens 4 76 11 GE 0 11 0 Philips 0 18 0 MRI field strength 3T 4 105 0 7T 0 0 11 Sequence TSE TSE MP2RAGE Voxel size (mm) 0.6 × 0.6 × 0.6 0.8 × 0.8 × 0.8*** 0.7 × 0.7 × 0.7 *The OpenNeuro ds004507 dataset included neutral, flexion and extension neck position sessions. Neutral and flexion sessions were used for the model development, and neutral session was used for spinal-vertebral correspondence analysis. **The MP2RAGE dataset contained 3 co-registered MP2RAGE contrasts (T1w-INV1, T1w-INV2 and UNIT1) for each subject (in total 57 MRI scans for 19 subjects). ***Voxel size for 2 MRI scans was 0.8 × 0.5 × 0.5 mm The scans were anonymized and defaced to remove all personally identifiable information (25,26) and organized according to the BIDS standard (27). The inclusion criteria were being a healthy participant without any known disease, age >18 years, and coverage of the cervical spine. Exclusion criteria were the presence of severe imaging artifacts or low contrast between the cerebrospinal fluid and nerve rootlets for multiple levels. Although the original datasets include more participants than listed in Table 1, we selected scans based on overall scan quality (no blurring and ghosting artifacts) and sufficient contrast between the cerebrospinal fluid and nerve rootlets. Because manual segmentation of nerve rootlets is difficult and time-consuming due to their complex three-dimensional anatomy (Figure 1) and the scans' sub-millimetre resolution, only a subset of participants was included in model training (see the following sections for details). 6 Figure 1: Spinal rootlets and vertebral anatomy. Spinal levels are inferred from spinal rootlets, whereas vertebral levels are defined based on adjacent intervertebral discs. Adapted from (7) with permission from the publisher. Deep Learning Training Protocol The nerve rootlets segmentation model was developed using nnUNetv2, a self-configuring deep learning-based framework (28). Preprocessing with the nnUNetv2 framework involved reorienting both the input scans and their corresponding reference standard labels to the Right-to-Left × Posterior-to-Anterior × Inferior-to-Superior (RPI) orientation to ensure consistency across the dataset, intensity z-score normalization and resampling into 0.7 × 0.7 × 0.7 mm resolution before training. Random spatial transformation (translation, rotation, scaling), mirroring, Gaussian noise and Gaussian blur, brightness and contrast adjustment, Gamma transformation and low-resolution simulation were used as data augmentation. To facilitate the generation of reference standard rootlet labels, the existing segmentation model was applied (13), followed by manual corrections by consensus of two raters (K.K. and J.V.) using the FSLeyes image viewer (version 1.12.1; ) and the provided instructions1. We did not analyze the interrater variability, as a previous study reported a mean coefficient of variation of ≤ 1.45% in spinal level positions when using rootlet segmentations from different raters to estimate spinal levels (13). Additionally, as the presence of C1 rootlets differs between individuals, they were not included in reference standard annotations and model training (11,13,29). We trained the model in four stages by iteratively adding more MRI scans and contrasts in each stage (Figure 2). First, we extended the T2w model to segment ventral rootlets by manually annotating them and retraining the 1 https://github.com/ivadomed/model-spinal-rootlets/issues/17 7 model. Then, the T2w model was applied to MP2RAGE-UNIT1 scans. We inverted the contrast of MP2RAGE-UNIT1 scans to make their contrast closer to the T2w before applying the model. The model predictions were manually corrected and extended to include T1 rootlets. An initial MP2RAGE model was trained using five scans (MP2RAGE-UNIT1 with inverted contrast) and tested on the remaining 14 scans from the MP2RAGE dataset. The obtained segmentations were manually corrected to get a reference standard for all 19 participants. Once all rootlet reference standards were visually inspected and corrected, they were split into training, validation, and testing sets. Because the MP2RAGE contrasts (T1w-INV1, T1w-INV2, and UNIT1) are inherently co-registered, a single reference standard was used for all three contrasts for each participant. To prevent information leakage between training, validation and testing sets, all MP2RAGE contrasts from each participant were assigned exclusively to one set. Then, we trained the model on all three MP2RAGE contrasts (15 participants, resulting in 45 MRI scans) for 2000 epochs. The nnUNetv2 framework suggested a 3D architecture with the following parameters: 6 stages, instance normalization technique, Leaky ReLU activation function, batch size 2, patch size 128 × 96 × 192 (RPI), learning rate 0.01, Dice Loss and Cross-Entropy loss function and Stochastic Gradient Descent with Nesterov Momentum optimizer. Since training with this architecture setup was not successful at the C2 level, we tried another experiment with increased patch size in the superior-inferior direction to 352 (i.e., 128 × 96 × 352) so the model can capture a larger spatial context. With this increased patch size, the model successfully learned to segment the C2 level. The model training was consequently extended to a multi-contrast approach, adding data from T2w datasets (spine-generic and OpenNeuro). This model was trained on MP2RAGE and T2w MRI scans with the increased patch size (i.e., 128 × 96 × 352). A total of 76 scans (from 50 participants) for 2000 epochs in a 5-fold cross-validation approach were used for an 80/20% training/validation split. The "production" model, named RootletSeg, was trained on all 76 training data (100/0% training/validation split). The testing set (i.e., scans never used during the training or validation) included 17 MRI scans: 12 MP2RAGE (4 participants, each with 3 contrasts) and 5 T2w (3 from the SpineGeneric dataset and 2 from OpenNeuro). The same testing set was used across all five cross-validation folds to enable consistent evaluation of model performance across varied training/validation splits. The model was compared to the intermediate T2w single-contrast model. The model's performance was evaluated using the Dice score. 8 Figure 2: Overview of our model development. T2-weighted (T2w) scans with reference standards were first used to train the T2w model (I), which was then applied to MP2RAGE-UNIT1 scans with inverted contrast. The resulting outputs were manually corrected and served as reference standards for the training of the initial MP2RAGE model (II). This model was applied to additional UNIT1 scans, followed by another round of manual corrections. Next, the MP2RAGE model (III), using both the original and an increased patch size, was trained on all three MP2RAGE contrasts (T1w-INV1, T1w-INV2, and UNIT1). Finally, T2w and MP2RAGE scans and their corresponding reference standards were combined into a single dataset and used to train the RootletSeg model. Spinal-vertebral Levels Correspondence and Level Lengths The developed model was used to analyze the correspondence between spinal and vertebral levels. For MP2RAGE data, where each participant has three perfectly aligned contrasts, the T1w-INV2 contrast was used to avoid repeating the same participants multiple times. T1w-INV2 was selected due to its superior rootlet visibility and the highest RootletSeg 9 performance among the MP2RAGE contrasts (see Results). From the OpenNeuro dataset, only participants with a neutral neck position were included to avoid repeating the same participants across different sessions. From the spine-generic dataset, MRI scans with good contrast between the cerebrospinal fluid and nerve rootlets were selected. To be included in the analysis, each scan had to meet the following criteria: good visibility of spinal rootlets (no blurring and ghosting artifacts and good contrast between the rootlets and cerebrospinal fluid); coverage from the pontomedullary junction (PMJ) to the T1 vertebra; and available information on the participant's height. Combining all three datasets, a total of 120 healthy participants were used. Figure 3 illustrates the analysis pipeline for assessing spinal-vertebral level correspondence, performed using the Spinal Cord Toolbox (SCT) v7.0 (18). Spinal cord segmentations were available for all T2w MRI scans (19,24), whereas the MP2RAGE scans were segmented using the contrast-agnostic model (30,31). Spinal levels were estimated as an overlap between the spinal rootlets and the spinal cord segmentation dilated by 3 pixels (13). The spinal cord centerline was extracted from the spinal cord segmentation, and the posterior tips of intervertebral discs (9) were projected to the centerline. Vertebral levels were determined as the segments between adjacent intervertebral discs. PMJ was detected automatically (32) and used as a reference point for further measurements. The distances between the PMJ and the midpoints of the spinal and vertebral levels, as well as the level lengths, were measured along the spinal cord centerline. Spinal and vertebral level lengths were defined as the distance between the rostro-caudal slices of each level. To account for individual differences in body size, the distances were normalized by the participant's height and then multiplied by the cohort's median height to preserve millimetre units. The normalized distances were approximated by normal distributions using the probability density function to assess the overlap between spinal and vertebral levels. The correspondence between spinal and vertebral levels was evaluated using the Wilcoxon signed-rank test, the Bland-Altman analysis (Equation 1) and root mean square distance (RMSD, Equation 2), separately for each level pair (e.g., vertebral level C2 and spinal level C3) across participants. We used a non-parametric variant of Bland-Altman analysis because the differences between spinal and vertebral level midpoint positions did not pass the Shapiro-Wilk normality test. P < .05 was considered statistically significant. BlandAltman median = median(yvertebral (1) -yspinal(1), yvertebral (i+1) -yspinal(i+1), ... , yvertebral (n) -yspinal(n)) Equation 1 : distance between the PMJ and the spinal level midpoint yspinal(i) : distance between the PMJ and the vertebral level midpoint yvertebral(i) n ... number of participants 10 RMSD= i=1 n ∑ (yvertebral(i) - yspinal(i)) 2 n Equation 2 : distance between the PMJ and the spinal level midpoint yspinal(i) : distance between the PMJ and the vertebral level midpoint yvertebral(i) n: number of participants 11 Figure 3: Spinal-vertebral levels correspondence. Analysis overview: the spinal cord and nerve rootlets were segmented, and the intervertebral discs and the pontomedullary junction (PMJ) were identified. Then, rootlets and intervertebral discs were used to obtain the spinal and vertebral levels, respectively. Distances from level midpoints to the PMJ were measured along the centerline. 12 Results Patient Characteristics A total of 134 healthy adult participants (mean age, 29.3 years ± 5.9 [SD]; 69 [51.5%] males, 65 [48.5%] females) with 177 MRI scans from three datasets were included in this study. Participants were scanned across scanners from different manufacturers (Siemens, GE, Philips) with different field strengths (3T, 7T). For segmentation model development, we used 50 participants with 93 MRI scans (n=12 OpenNeuro, n=24 SpineGeneric and n=57 MP2RAGE dataset) with 76 scans in the training set, and 17 scans in the testing set. For spinal-vertebral correspondence analysis, we used 120 MRI scans (n=4 OpenNeuro, n=105 SpineGeneric and n=11 MP2RAGE) from 120 participants (mean age, 29.4 years ± 5.8 [SD]; 59 [49.2%] males, 61 [51.8%] females); 14 participants were not included because their MRI scans did not capture the PMJ and/or the scans did not capture the whole T1 vertebra. Segmentation Model The RootletSeg nnUNetv2 3D model achieved an overall Dice score of 0.65 ± 0.10 (mean ± SD across levels, participants, and contrasts). Lower Dice for the MP2RAGE scans at levels C2 and C3 was due to the presence of image artifacts (red arrows in Figure 4), possibly caused by B1+ inhomogenities. Interestingly, despite the artifacts, the model was able to segment some rootlets at these levels, even though they were not included in the reference standard (compare the reference standard and model output for the T1w-INV2 MRI scan at the C2 level in Figure 4). Dice scores for individual contrasts were (mean ± SD across levels and testing scans): 0.67 ± 0.09 for T1w-INV2, 0.65 ± 0.11 for UNIT1, 0.64 ± 0.08 for T2w, and 0.62 ± 0.10 for T1w-INV1 (Figure 5a). Figure 5b shows the Dice scores across levels on T2w MRI scans, in comparison with an existing model developed exclusively on T2w scans. RootletSeg demonstrated comparable results for rootlets C2-C8 to the T2w model (Dice of 0.65 ± 0.08 vs 0.64 ± 0.08). Level T1 was excluded from the Dice calculation, as the T2w model was not trained at this level. 13 Figure 4: Coronal and axial views of representative rootlet segmentations. (a) Segmentations on a T2w MRI scan. (b) Segmentations on a T1w-INV1 MRI scan. (c) Segmentations on a T1w-INV2 MRI scan. (d) Segmentations on a UNIT1 MRI scan. The red arrows point to an artifact possibly caused by B1+ inhomogeneities. Rows represent individual rootlet levels from C2 to T1. Numbers represent the mean ± SD Dice score across participants for each spinal level compared to the reference standard labels. Figure 5: Quantitative performance of the RootletSeg model. (a) Dice score computed on 17 testing images across different contrasts (n=4 T1w-INV1, n=4 T1w-INV2, n=4 UNIT1 and n=5 T2w). (b) Comparison of Dice score between the intermediate single-contrast T2w model and the proposed 14 RootletSeg model, on T2w MRI scans. Despite the RootletSeg model being trained on multiple contrasts, it performs similarly well compared to the T2w model. Note that the T2w model was developed only for spinal rootlets C2-C8; thus, its Dice score for the level T1 is zero. The horizontal line within each box represents the median, and the box edges mark the 25 % to 75 % interquartile range. Whiskers extend 1.5 times the interquartile range, and the small black circles indicate outliers. Spinal-vertebral Level Correspondence Figure 6a illustrates the correspondence between spinal and vertebral levels. A gradually increasing shift along the rostrocaudal axis is apparent between the distributions of spinal and vertebral levels. For instance, the distribution for vertebral level C2 overlaps with that of spinal level C3, whereas vertebral level C7 is shifted caudally relative to spinal level C8. The Wilcoxon signed-rank test (performed for each spinal-vertebral level pair separately) revealed that this shift was statistically significant (P < .05) for all levels below the spinal level C5 and the vertebral level C4. Figure 6b presents the Bland-Altman analysis comparing each pair of levels (e.g., vertebral level C2 vs. spinal level C3), based on the distance of the level midpoints from the PMJ, normalized by participant height. The Bland-Altman analysis quantitatively assessed a 0.00 mm bias term (median difference between spinal and vertebral level midpoint positions) between spinal level C3 and vertebral level C2. In contrast, the bias term is higher for the lower levels (up to 8.15 mm for spinal level T1 and vertebral level T1). Table 2 shows the RMSD between spinal and vertebral level midpoints for each level pair. The RMSD value was lowest for spinal level C2 and vertebral level C3 (3.60 mm) and highest for spinal level T1 and vertebral level T1 (9.36 mm). 15 Figure 6: Spinal and vertebral level correspondence. (a) Spinal and vertebral level midpoints approximated by normal distributions, separately for each level. The midpoints were normalized by participants' height and scaled by median height. Values in brackets represent mean ± SD distance to the pontomedullary junction (PMJ) in millimetres. Spinal levels are in solid lines, vertebral levels in dashed lines. Significance (Wilcoxon signed-rank test): * P < .05, ns not significant. Notice that the distribution for the spinal level C3 (solid orange) corresponds to the vertebral level C2 (dashed blue), while the distribution for the spinal level C8 (solid pink) is shifted cranially relative to the vertebral level C7 (dashed brown). We note that there are anatomically seven vertebral levels but eight spinal levels. (b) Bland-Altman plot. Black dashed lines show the median difference between distances from the PMJ to spinal and vertebral levels midpoints, and colored dashed lines show 2.5 and 97.5 percentiles. The points correspond to individual participants. VL = vertebral level; SL = spinal level. 16 Table 2: Root mean square distance (RMSD) between spinal and vertebral level midpoint distances to the pontomedullary junction (PMJ). Notice that RMSD is lower for cranial levels (i.e., RMSD of 3.60 mm between the vertebral level C2 and the spinal level C3) relative to caudal levels (i.e., RMSD of 9.36 mm between the vertebral level T1 and the spinal level T1). We note that there are anatomically seven vertebral levels but eight spinal levels. Spinal and Vertebral Level Rostro-caudal Length Table 3 presents the rostro-caudal lengths (mean ± SD across participants) of each spinal level in 120 healthy participants. The table also includes a comparison with lengths reported in an MRI-based study (7) and a post-mortem study (33). Table 3: Rostro-caudal lengths of individual spinal levels and results of MRI-based study (7) and post-mortem study (33). The table shows the mean rostro-caudal length ± SD in millimetres. 17 Vertebral | Spinal level RMSD [mm] C2 | C3 3.60 C3 | C4 3.35 C4 | C5 3.55 C5 | C6 4.21 C6 | C7 5.09 C7 | C8 6.33 T1 | T1 9.36 Spinal level This work (120 participants) Cadotte et al., 2015 (20 participants) Kobayashi et al., 2015 (11 participants) C2 7.1 ± 2.2 - - C3 11.7 ± 2.5 10.5 ± 2.2 12.1 ± 1.2 C4 8.9 ± 2.0 9.9 ± 1.3 12.5 ± 1.1 C5 8.7 ± 1.7 10.5 ± 1.5 12.6 ± 2.8 C6 9.1 ± 1.6 9.7 ± 1.6 12.7 ± 1.6 C7 9.8 ± 1.7 9.4 ± 1.4 11.8 ± 1.6 C8 11.8 ± 2.5 9.6 ± 1.4 10.6 ± 1.6 T1 13.1 ± 3.7 - - Table 4 shows the rostro-caudal lengths (mean ± SD across participants) of the vertebral levels, along with a comparison to a post-mortem study (34). Table 4: Rostro-caudal lengths of individual vertebral levels and results of the post-mortem study (34) (showing the mean rostro-caudal length ± SD in millimetres) 18 Spinal level This work (120 participants) Busscher et al., 2010 (6 participants) C2 14.9 ± 1.9 - C3 17.0 ± 1.8 14.2 ± 0.7 C4 17.0 ± 1.6 14.5 ± 1.3 C5 15.9 ± 1.5 13.4 ± 1.1 C6 15.3 ± 1.7 14.0 ± 0.5 C7 16.2 ± 1.7 15.7 ± 0.9 T1 20.0 ± 1.8 17.3 ± 0.8 Discussion In this study, we (i) introduced RootletSeg, a deep learning model for segmentation of C2-T1 ventral and dorsal spinal nerve rootlets from T1w, T2w and MP2RAGE-UNIT1 3T and 7T MRI scans, which extended the previous rootlet segmentation model (13) by incorporating ventral rootlets, an additional thoracic T1 spinal level, and additional 7T MP2RAGE-derived contrasts; and (ii) investigate the correspondence between spinal and vertebral levels in a large cohort of 120 healthy participants. The segmentation model demonstrated stable performance across participants, MRI contrasts, and rootlet levels, thus facilitating the cumbersome and time-intensive manual rootlets annotation process. The analysis of spinal-vertebral level correspondence showed a gradually increasing shift in the rostro-caudal axis between spinal and vertebral levels and higher variability in level localization across participants with lower levels. The segmented nerve rootlets can be used as an alternative to commonly used intervertebral discs in various applications, including lesion classification based on neurological levels and registration of individual scans to a template for group-level analysis (5,6). As spinal nerve rootlets are fine structures with submillimeter dimensions and a complex anatomy that varies across participants, their segmentation is challenging even for expert raters. Despite these difficulties, the proposed model achieved a stable performance across four contrasts (T2w, T1w-INV1, T1w-INV2, and UNIT1) and different levels (C2-T1). The mean Dice for T2w data was higher for upper cervical nerve rootlets than lower cervical nerve rootlets (C2 mean Dice: 0.68 ± 0.08 vs. T1 mean Dice: 0.57 ± 0.09), possibly due to the lower contrast between cerebrospinal fluid and rootlets and higher rootlets angulation in the lower levels (12). Compared to the intermediate single-contrast T2w model, RootletSeg achieved a comparable Dice of 0.65 ± 0.08 vs 0.64 ± 0.08, demonstrating no loss in performance while extending the model capabilities beyond T2w contrast. Although the reported Dice score may appear lower compared to other segmentation tasks, such as spinal cord (31,35), where Dice scores commonly reach 0.9, we note that the relatively low Dice for rootlet segmentation is due to the distinct anatomy and size of rootlets compared to larger structures like the spinal cord. Spinal rootlets are small structures with complex three-dimensional anatomy, typically having only 2-3 voxel width in MRI scans with submillimeter in-plane resolution. The analysis of spinal and vertebral level correspondence using the RootletSeg model showed that the distribution of spinal level C3 midpoint positions corresponds to that of vertebral level C2, and similarly for spinal level C4 and vertebral level C3. The correspondence became less consistent at lower levels, as indicated by a statistically significant shift between spinal and vertebral levels, leading to larger shifts between spinal 19 and vertebral midpoint positions. Moreover, SD of the level midpoint distances from the PMJ increases in the lower levels (spinal level C2 SD = 2.65 mm vs. spinal level T1 SD = 6.64 mm), resulting in broader and flatter distributions, indicating increasing inter-subject variability in level positions. This is consistent with prior MRI and post-mortem reports (7,8) and anatomical textbooks that neurological spinal levels are "shifted" relative to vertebral levels and that this shift increases at more caudal levels. Similar to a previous 3T MRI study (7) that used manually defined landmarks, the Bland-Altman analysis showed higher variability in the position of lower levels across the participants. In our study, the analysis was extended to include levels C2 and T1, and we used 6 times more data. Additionally, we used participants' height to normalize our measurements to take into account inter-subject variability. We also analyzed the level correspondence using RMSD, which confirmed a higher shift for more caudal levels by higher RMSD values compared to more cranial levels. The difference between the Bland-Altman and the RMSD analyses (e.g., Bland-Altman bias of 0.00 mm and RMSD of 3.60 mm for vertebral level C2 and spinal level C3) is due to methodological differences in the calculation - in the Bland-Altman analysis, we quantitatively considered the correspondence according to the median difference between vertebral-spinal midpoint positions, whereas the RMSD was calculated as the mean squared difference. A post-mortem study (8), performed manually on 16 cadavers, found that spina-vertebral correspondence differs by one level in the C3 to C5 spinal region, i.e., vertebral level C2 corresponds to spinal level C3 and further increases in the lower levels up to two level differences. It needs to be noted that it is difficult to directly compare the findings from post-mortem studies with those in our in vivo MRI study due to the inherent characteristics of ex vivo measures, such as tissue shrinking due to post-fixation, and the altered shape of the excised spinal cord without the surrounding cerebrospinal fluid and dura. Measured rostrocaudal spinal level lengths obtained in our study showed slightly higher SD compared to an MRI-based study (7) and a post-mortem study (33). This might likely be due to the larger cohort in our study capturing broader population variability, demographic differences, and differences in the acquisition protocol. Due to the lack of MRI studies on vertebral level lengths, our results were compared with a post-mortem study (34), which measured vertebral levels from CT scans in six cadavers. For levels C3 to T1, our findings show similar results to the study. However, they measured the length of vertebral bodies, whereas our analysis was based on intervertebral disc positions. This different methodological approach likely contributes to the average difference of 2.1 mm observed across levels. Additionally, the small sample size (six cadavers in the study) may not adequately capture the population variability. Other factors that may account for the 20 demographic differences and positional changes of the spine structures between living participants and cadavers. This study had limitations. The proposed model was trained and tested solely on healthy participants in the C2-T1 region and only with isotropic MRI scans. Scans were selected based on overall image quality, and scans with extensive blurring and ghosting artifacts were excluded to allow for reliable reference standard labels creation. Extending the evaluation to participants with pathologies and the thoraco-lumbar region would be valuable. However, this remains challenging due to the lower contrast typically present in pathological areas, such as a narrowed spinal canal. Additionally, in the thoraco-lumbar spinal cord, rootlets are more difficult to isolate because of their steeper angulation, reduced space within the spinal canal, potential image artifacts due to respiratory motion, and lower signal-to-noise ratio at 7T. These factors make it difficult to obtain reliable reference standard labels. For the spinal-vertebral correspondence analysis, we used the participants' height to account for potential biological differences among individuals. The spinal cord or spine length could also be considered (10), but MRI data typically does not cover the entire spine. We performed the level correspondence analysis on a population of healthy adults in the cervical region only, without distinguishing differences between males and females and between different age groups. In the caudal region, the angulation of rootlets changes and a single axial image slice can contain rootlets from multiple spinal levels, making it difficult to reliably infer spinal levels using the intersection of rootlet and spinal cord segmentations. To address this increased complexity in these regions, it would be advantageous to propose a more robust method for obtaining spinal levels from rootlets segmentation. In conclusion, this study presented RootletSeg, a deep learning model for C2-T1 spinal rootlets segmentation on T1w, T2w and MP2RAGE-UNIT1 MRI scans. The segmentation method is open-source, implemented in the sct_deepseg function as part of Spinal Cord Toolbox v7.0 and higher (18). As the RootletSeg model allows for inferring the spinal levels directly from MRI, it can facilitate various downstream analyses, including lesion classification, neuromodulation therapy, and fMRI group analysis. 21 Data and Code Availability Statement The analysis scripts are open source and available at: https://github.com/ivadomed/model-spinal-rootlets/releases/tag/r20250917. The packaged and ready-to-use RootletSeg model can be applied to custom data via the sct_deepseg rootlets command as part of the Spinal Cord Toolbox (SCT) v7.0 and higher: https://github.com/spinalcordtoolbox/spinalcordtoolbox/releases/tag/7.0. The data come from open-access datasets and can be accessed at https://openneuro.org/datasets/ds004507/versions/1.1.1 and https://github.com/spine-generic/data-multi-subject/tree/r20250314. The data from the private MP2RAGE dataset will be shared upon reasonable request. Acknowledgments We thank Mathieu Guay-Paquet, Joshua Newton, and Kalum Ost for their assistance with the management of the datasets and the implementation of the algorithm in the Spinal Cord Toolbox. We thank Nathan Molinier for the help with organizing the dataset according to the BIDS standard. We thank Louis-Thomas Lapointe for the help with the data annotation and preliminary model training. Funding Funded by the Canada Research Chair in Quantitative Magnetic Resonance Imaging [CRC-2020-00179], the Canadian [PJT-190258, PJT-203803], the Canada Foundation for Innovation [32454, 34824], the Fonds de Recherche du Québec - Santé [322736, 324636], the Natural Sciences and Engineering Research Council of Canada [RGPIN-2019-07244], the Canada First Research Excellence Fund (IVADO and TransMedTech), the Courtois NeuroMod project, the Quebec BioImaging Network [5886, 35450], INSPIRED (Spinal Research, UK; Wings for Life, Austria; Craig H. Neilsen Foundation, USA), Mila - Tech Transfer Funding Program. JV received funding from the European Union's Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101107932 and is supported by the Ministry of Health of the Czech Republic (no. NU22-04-00024). SB is supported by the Natural Sciences and Engineering Research Council of Canada, NSERC, Canada Graduate Scholarships - Doctoral program. Computational resources were provided by the e-INFRA CZ project (ID:90254), supported by the Ministry of Education, Youth and Sports of the Czech Republic. FE received funding from the European Research Council (under the European 22 Union's Horizon 2020 research and innovation programme; grant agreement No 758974) and the Max Planck Society. 23 References 1. Mohajeri Moghaddam S, Bhatt AA. Location, length, and enhancement: systematic approach to differentiating intramedullary spinal cord lesions. Insights Imaging. 2018;9(4):511-526. 2. Ahuja CS, Wilson JR, Nori S, et al. Traumatic spinal cord injury. Nat Rev Dis Primers. Springer Science and Business Media LLC; 2017;3(1). 3. Vallejo R MD, PhD. Neuromodulation of the cervical spinal cord in the treatment of chronic intractable neck and upper extremity pain: A case series and review of the literature. Pain Physician. American Society of Interventional Pain Physicians; 2007;2;10(3;2):305-311. 4. Rowald A, Komi S, Demesmaeker R, et al. Activity-dependent spinal cord neuromodulation rapidly restores trunk and leg motor functions after complete paralysis. Nat Med. Springer Science and Business Media LLC; 2022;28(2):260-271. 5. Kinany N, Pirondini E, Micera S, Van De Ville D. Spinal Cord fMRI: A New Window into the Central Nervous System. Neuroscientist. 2023;29(6):715-731. 6. Bédard S, Valošek J, Oliva V, Weber KA II, Cohen-Adad J. Rootlets-based registration to the spinal cord PAM50 template. Imaging Neuroscience. 2025; 7. Cadotte DW, Cadotte A, Cohen-Adad J, et al. Characterizing the location of spinal and vertebral levels in the human cervical spinal cord. AJNR Am J Neuroradiol. 2015;36(4):803-810. 8. Kim JH, Lee CW, Chun KS, Shin WH, Bae H-G, Chang JC. Morphometric Relationship between the Cervicothoracic Cord Segments and Vertebral Bodies. J Korean Neurosurg Soc. 2012;52(4):384-390. 9. Ullmann E, Pelletier Paquette JF, Thong WE, Cohen-Adad J. Automatic labeling of vertebral levels using a robust template-based approach. Int J Biomed Imaging. 2014;2014:719520. 10. Frostell A, Hakim R, Thelin EP, Mattsson P, Svensson M. A Review of the Segmental Diameter of the Healthy Human Spinal Cord. Front Neurol. 2016;7:238. 11. Diaz E, Morales H. Spinal Cord Anatomy and Clinical Syndromes. Semin Ultrasound CT MR. 2016;37(5):360-371. 12. Mendez A, Islam R, Latypov T, et al. Segment-Specific Orientation of the Dorsal and Ventral Roots for Precise Therapeutic Targeting of Human Spinal Cord. Mayo Clin Proc. 2021;96(6):1426-1437. 13. Valošek J, Mathieu T, Schlienger R, Kowalczyk OS, Cohen-Adad J. Automatic segmentation of the spinal cord nerve rootlets. Imaging Neurosci (Camb). MIT Press; 2024;2:1-14. 14. Dauleac C, Frindel C, Pélissou-Guyotat I, et al. Full cervical cord tractography: A new method for clinical use. Front Neuroanat. 2022;16:993464. 15. Liu J, Zhang W, Zhou Y, Xu L, Chu Y-H, Jia F. An open-access lumbosacral spine MRI 24 dataset with enhanced spinal nerve root structure resolution. Sci Data. 2024;11(1):1131. 16. Fasse A, Newton T, Liang L, et al. A novel CNN-based image segmentation pipeline for individualized feline spinal cord stimulation modeling. J Neural Eng. IOP Publishing; 2024;21(3):036032. 17. Liang L, Fasse A, Damiani A, et al. SpIC3D imaging: Spinal In-situ Contrast 3D imaging. bioRxiv. 2025. 18. De Leener B, Lévy S, Dupont SM, et al. SCT: Spinal Cord Toolbox, an open-source software for processing spinal cord MRI data. Neuroimage. 2017;145(Pt A):24-43. 19. Bédard S, Bouthillier M, Cohen-Adad J. Pontomedullary junction as a reference for spinal cord cross-sectional area: validation across neck positions. Sci Rep. 2023;13(1):13527. 20. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Author Correction: Open-access quantitative MRI data of the spinal cord and reproducibility across participants, sites and manufacturers. Sci Data. 2021;8(1):251. 21. Horn U, Vannesjo SJ, Gross-Weege N, et al. Ultra-high-field fMRI reveals layer-specific responses in the human spinal cord. bioRxiv. 2025. 22. Massire A, Taso M, Besson P, Guye M, Ranjeva J-P, Callot V. High-resolution multi-parametric quantitative magnetic resonance imaging of the human cervical spinal cord at 7T. Neuroimage. 2016;143:58-69. 23. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Generic acquisition protocol for quantitative MRI of the spinal cord. Nat Protoc. 2021;16(10):4611-4632. 24. Cohen-Adad J, Alonso-Ortiz E, Abramovic M, et al. Open-access quantitative MRI data of the spinal cord and reproducibility across participants, sites and manufacturers. Sci Data. 2021;8(1):219. 25. Li X, Morgan PS, Ashburner J, Smith J, Rorden C. The first step for neuroimaging data analysis: DICOM to NIfTI conversion. J Neurosci Methods. Elsevier; 2016;264:47-56. 26. Gulban OF, Nielson D, Poldrack R, et al. poldracklab/pydeface: v2.0.0. Zenodo; 2019. 27. Gorgolewski KJ, Auer T, Calhoun VD, et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data. 2016;3:160044. 28. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18(2):203-211. 29. Tubbs RS, Loukas M, Slappey JB, Shoja MM, Oakes WJ, Salter EG. Clinical anatomy of the C1 dorsal root, ganglion, and ramus: a review and anatomical study. Clin Anat. 2007;20(6):624-627. 30. Bédard S, Karthik EN, Tsagkas C, et al. Towards contrast-agnostic soft segmentation of the spinal cord. Med Image Anal. 2025;101:103473. 31. Karthik EN, Sandrine B, Jan V, et al. Monitoring morphometric drift in lifelong learning segmentation of the spinal cord. arXiv [cs.CV]. 2025. http://arxiv.org/abs/2505.01364. 25 32. Gros C, De Leener B, Dupont SM, et al. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization. Med Image Anal. 2018;44:215-227. 33. Kobayashi R, Iizuka H, Nishinome M, Iizuka Y, Yorifuji H, Takagishi K. A cadaveric study of the cervical nerve roots and spinal segments. Eur Spine J. 2015;24(12):2828-2831. 34. Busscher I, Ploegmakers JJW, Verkerke GJ, Veldhuizen AG. Comparative anatomical dimensions of the complete human and porcine spine. Eur Spine J. 2010;19(7):1104-1114. 35. Gros C, De Leener B, Badji A, et al. Automatic segmentation of the spinal cord and intramedullary multiple sclerosis lesions with convolutional neural networks. Neuroimage. 2019;184:901-915. 26
|
2509.16252
|
Low-energy proton impact dynamics on hydrocarbons: Dependence on kinetic energy
and incident site
Misa Viveiros,1 Roy Lau,1 Samuel S. Taylor,1, 2 Patrick Barron,1
Attila Czirj´ak,3, 4 Cody Covington,5 and K´alm´an Varga1, ∗
1Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235, USA
2Pritzker School of Molecular Engineering, University of Chicago, Chicago, IL 60637, USA
3ELI ALPS, ELI-HU Non-Profit Ltd, Wolfgang Sandner utca 3., 6728 Szeged, Hungary
4Department of Theoretical Physics, University of Szeged, Tisza L. krt. 84-86, 6720 Szeged, Hungary
5Department of Chemistry, Austin Peay State University, Clarksville, TN 37044, USA
The dynamics of low-energy proton collisions with hydrocarbon molecules are investigated us-
ing real-time time-dependent density functional theory (TDDFT). Through systematic variation of
proton kinetic energy and impact site on the molecular surface, the resulting scattering, proton
capture, and bond dissociation pathways are analyzed. The simulations reveal a strong dependence
of reaction outcomes on both incident energy and collision geometry, with the interplay between
electronic and nuclear degrees of freedom highlighted as governing molecular fragmentation and
reaction mechanisms. These findings provide insight into the fundamental processes underlying pro-
ton–hydrocarbon interactions relevant to radiation chemistry, ion-beam processing, astrochemical
environments, and Coulomb explosion.
I.
INTRODUCTION
Ion–molecule collisions are fundamental to a broad
range of physical, chemical, and biological processes,
spanning from radiation damage in organic matter [1–4]
and ion-beam cancer therapy [5–8] to ion–beam–induced
material modification [9–14] and astrochemical reaction
pathways [15–17]. Among these diverse applications, par-
ticular attention has been devoted to the interaction of
protons and ions with hydrocarbons and other biologi-
cally relevant molecules, owing to the ubiquity of C–H
containing species in planetary atmospheres, the inter-
stellar medium, and organic materials [18–25]. In such
systems, both the kinetic energy of the incident proton
and the collision geometry critically determine whether
the interaction results in elastic scattering, chemical bond
formation, or molecular fragmentation [25].
Research on proton-molecule collisions remains lim-
ited, with most existing calculations focusing primar-
ily on the keV energy range [26–33].
Time-dependent
density functional theory has been employed to model
proton-DNA collisions at 4 keV energies, with the pri-
mary objective of elucidating DNA base pair dissociation
mechanisms as a function of proton impact locations [23].
Ab initio molecular dynamics simulations were utilized
to investigate proton collisions with deoxyribose, where
5-7 eV protons generated from Coulomb explosions of
doubly ionized water molecules subsequently impact de-
oxyribose, resulting in ring opening [34]. This study was
constrained to the adiabatic regime. The influence of col-
lision sites on radiation dynamics in cytosine following
proton bombardment in the 150-1000 eV energy range
has been examined, along with investigations of proton-
water scattering processes [25]. Additional studies have
∗kalman.varga@vanderbilt.edu
explored collisions between oxygen molecules (O2) with
kinetic energies of 4, 6, or 10 eV and stationary target
molecules (Mg2, SiH4, or CH4) to understand light emis-
sion phenomena during combustion processes [35]. Re-
garding proton-hydrocarbon collisions specifically, com-
putational studies are particularly scarce.
The proton
collision dynamics on CH4 has been investigated [33, 36]
using 30 eV protons and the TDDFT calculations demon-
strated good agreement with experimental observations
for fragment distribution patterns.
Low-energy proton-molecule scattering exhibits funda-
mentally different behavior compared to high-energy col-
lisions due to the critical role of specific interaction sites.
During high-energy encounters, rapidly moving protons
traverse the molecular structure while transferring only
a small portion of their energy to electronic excitations,
which subsequently cascade into nuclear motion [13, 24].
Conversely, low-energy collisions result in dramatic alter-
ations to both the proton’s kinetic energy and trajectory,
making the reaction outcome highly sensitive to the pre-
cise impact location. These low-energy systems are char-
acterized by complex interactions arising from multiple
scattering centers, anisotropic molecular potential sur-
faces, and varied bonding environments. This complexity
generates a diverse array of possible reaction channels,
encompassing elastic scattering events, proton capture
processes, and atomic abstraction reactions.
Low-energy proton collisions are also highly relevant
in the context of hydrocarbon Coulomb explosion. Un-
der intense laser–molecule interactions, hydrocarbons un-
dergo rapid ionization, leading to the production of a
wide range of ionic fragments.
The lightest and most
abundant of these fragments are protons [37–45], which
are typically expelled with kinetic energies ranging from
a few to several tens of electronvolts [31, 41].
These
protons can subsequently collide with nearby neutral or
ionized hydrocarbons in the gas phase, initiating sec-
arXiv:2509.16252v1 [physics.chem-ph] 17 Sep 2025
2
ondary processes such as additional fragmentation, chem-
ical rearrangements, and the formation of new molecu-
lar species—processes that occur alongside the primary
light–matter interaction. Previous theoretical studies of
Coulomb explosion have predominantly focused on the
dynamics of isolated molecules, thereby neglecting the
role of fragment–molecule collisions [37, 40, 43, 45]. By
investigating proton–hydrocarbon collisions in this low-
energy regime, we seek to examine these secondary path-
ways and assess their potential contribution to the overall
reaction dynamics.
Time-dependent density functional theory [46, 47] pro-
vides a powerful first-principles framework for simulating
nonequilibrium processes in real time, as it captures both
the electronic response and the coupled nuclear dynamics
[48–58]. By accounting for nonadiabatic effects, TDDFT
has been successfully applied to describe Coulomb ex-
plosion [37, 40, 43, 45] as well as ion–molecule colli-
sions [13, 14, 23–25, 33].
In
this
work,
we
employ
real-time
TDDFT
to
investigate
proton
collisions
with
representative
hydrocarbons—C2H2,
C3H8,
and
C4H10—across
a
range of proton kinetic energies (0.52–87.58 eV) and
incident sites.
This energy window lies firmly within
the low-energy regime, and it corresponds closely to
the kinetic energies of protons produced in Coulomb
explosion
[31, 41].
By systematically varying both
the projectile energy and the collision geometry, we
identify distinct regimes of scattering, proton capture,
and atom abstraction, and quantify the dependence of
these processes on the collision parameters.
Our
results
provide
new
insight
into
pro-
ton–hydrocarbon interactions on femtosecond timescales,
revealing systematic trends that can guide future exper-
imental studies and support the development of more
comprehensive models of radiation-induced chemistry
in complex molecular systems.
They also emphasize
the important role of many-body effects in governing
Coulomb explosion dynamics.
II.
COMPUTATIONAL METHOD
The simulations were performed using TDDFT for
modeling the electron dynamics on a real-space grid with
real-time propagation [59], with the Kohn-Sham (KS)
Hamiltonian of the following form
ˆHKS(t) = −ℏ2
2m∇2 + Vion(r, t) + VH[ρ](r, t)
+VXC[ρ](r, t).
(1)
Here, ρ is the electron density, defined as the sum of the
densities of all occupied orbitals:
ρ(r, t) =
∞
X
k=1
fk|ψk(r, t)|2,
(2)
where fk is the occupation number of the orbital ψk,
which can take values 0, 1, or 2. Additionally, fk must
satisfy the constraint P∞
k=1fk = N, where N is the total
number of valence electrons in the system.
Vion in eq. 1 is the external potential due to the ions,
represented by employing norm-conserving pseudopoten-
tials centered at each ion as given by Troullier and Mar-
tins [60]. VH is the Hartree potential, defined as
VH(r, t) =
Z
ρ(r′, t)
|r −r′| dr′,
(3)
and accounts for the electrostatic Coulomb interactions
between electrons. The last term in eq. 1, VXC, is the
exchange-correlation potential, which is approximated by
the adiabatic local-density approximation (ALDA), ob-
tained from a parameterization to a homogeneous elec-
tron gas by Perdew and Zunger [61].
At the beginning of the TDDFT calculations, the
ground state of the system is prepared by performing a
density-functional theory (DFT) calculation. With these
initial conditions in place, we then proceed to propagate
the KS orbitals, ψk(r, t) over time by using the time-
dependent KS equation, given as
i∂ψk(r, t)
∂t
= ˆHKS(t)ψk(r, t).
(4)
Eq. 4 was solved using the following time propagator
ψk(r, t + δt) = exp
−iδt
ℏ
ˆHKS(t)
ψk(r, t).
(5)
This operator is approximated using a fourth-degree Tay-
lor expansion, given as
ψk(r, t + δt) ≈
4
X
n=0
1
n!
−iδt
ℏ
ˆHKS(t)
n
ψk(r, t).
(6)
The operator is applied for N time steps until the final
time, tfinal = N ·δt, is obtained. The Taylor propagation
is conditionally stable, and the time step has to satisfy
[59] δt < 1.87(∆x/π)2.
For a typical grid spacing of
∆x = 0.3 ˚A this means that the timestep must be smaller
than 1.4 as.
In our simulations, a time step of δt =
1 attosecond (as) and a total propagation time of tfinal =
120 femtoseconds (fs) were used.
In real-space TDDFT, the KS orbitals are represented
at discrete points on a uniform rectangular grid in real
space. The simulation accuracy is governed by the grid
spacing. In our calculations, we employed a grid spacing
of 0.3 ˚A and used 100 grid points along each of the x-,
y-, and z-axes.
To enforce boundary conditions, we set the KS orbitals
to zero at the edges of the simulation cell. However, dur-
ing proton impact events, the collision can impart suffi-
cient energy to the electronic wavefunctions, potentially
leading to ionization or the ejection of electronic density
beyond the molecule. In such cases, unphysical reflec-
tions of the wavefunction from the cell boundaries can
3
occur, introducing artifacts into the simulation. To mit-
igate this issue, we implemented a complex absorbing
potential (CAP) to dampen the wavefunction as it ap-
proaches the boundaries. The specific form of the CAP
used in our simulations, as described by Manolopou-
los [62], is given by:
−iw(x) = −i ℏ2
2m
2π
∆x
2
f(y),
(7)
where x1 is the start and x2 is the end of the absorbing
region, ∆x = x2 −x1, c = 2.62 is a numerical constant,
m is the electron’s mass, and
f(y) = 4
c2
1
(1 + y)2 +
1
(1 −y)2 −2
,
y = x −x1
∆x
.
(8)
As the molecule becomes ionized during the simula-
tion, electron density is driven towards the CAP. Ad-
ditionally, any ejected fragments carry their associated
electron density as they move towards the boundaries.
When electron density reaches the CAP region, it is ab-
sorbed. Consequently, the total number of electrons,
N(t) =
Z
V
ρ(r, t) d3x,
(9)
where V is the volume of the simulation box, decreases
relative to the initial electron number, N(0). We inter-
pret N(0) −N(t) as the total number of electrons that
have been ejected from the simulation box.
Motion of the ions in the simulations were treated clas-
sically. Using the Ehrenfest theorem, the quantum forces
on the ions due to the electrons are given by the deriva-
tives of the expectation value of the total electronic en-
ergy with respect to the ionic positions. These forces are
then fed into Newton’s Second Law, giving
Mi
d2Ri
dt2
=
Nions
X
j̸=i
ZiZj(Ri −Rj)
|Ri −Rj|3
−∇Ri
Z
Vion(r, Ri)ρ(r, t) dr,
(10)
where Mi, Zi, and Ri are the mass, pseudocharge (va-
lence), and position of the ith ion, respectively, and Nions
is the total number of ions. Vion(r, Ri) is the pseudopo-
tential representing the combined effect of the nucleus
and core electrons, and it interacts with the electron
density ρ(r, t) via Ehrenfest dynamics. This differential
equation was time propagated using the Verlet algorithm
at every time step δt.
The target hydrocarbon molecule was placed such that
the center of its equilibrium geometry coincided with
the origin, with its longest molecular axis aligned along
the x-axis. The incident proton was initially positioned
5.0 ˚A above the target, orthogonal to the molecule’s
largest cross-sectional area (see Fig. 1). At the chosen
separation distance, the interaction between the proton
FIG. 1.
Impact point (IP) diagram for C2H2.
The initial
proton positions (shown in pink) corresponding to different
IPs are placed 5.0 ˚A above the molecular plane. Distances
are not drawn to scale.
and molecule is negligible. Positioning the proton far-
ther away would not alter the physical results but would
require a larger simulation cell, leading to significantly
higher computational demands and longer calculation
times. Both the target and the proton were contained
within a 29.7 ˚A × 29.7 ˚A × 29.7 ˚A simulation grid cen-
tered at the origin, providing sufficient space along the
proton’s trajectory to capture the full dynamics of the
collision. The simulations were performed at zero tem-
perature to remove thermal motion of the target atoms as
a variable. The atomic nuclei were allowed to move from
the forces experienced during the collision, since nuclear
motion and energy redistribution into vibrational modes
strongly influence fragmentation, bonding, and scatter-
ing outcomes.
It should be emphasized that these simulations repre-
sent an idealized limit. In experimental conditions, tar-
get molecules possess finite temperature, undergo ran-
dom motion, and can be struck at a wide distribution
of incident angles and positions.
Capturing such ef-
fects computationally would require an extensive sam-
pling over molecular orientations, impact geometries, and
thermal ensembles, greatly increasing the computational
cost. Our approach therefore represents a balance: by
focusing on equilibrium geometries, impact points of in-
terest, and orthogonal incidence, we isolate and probe the
most characteristic features of proton–hydrocarbon colli-
sions, while recognizing that real experiments will exhibit
additional statistical broadening and variability.
In the following section, we present the results of pro-
ton collisions with three hydrocarbons of varying size:
acetylene (C2H2), propane (C3H8), and butane (C4H10).
For each molecule, impact points were chosen at chemi-
cally relevant sites, including C–C bonds, C atoms, C-H
bonds, and H atoms.
This selection provides a repre-
sentative sampling of key collision sites while keeping
the computational cost manageable.
At each impact
point, seven proton initial kinetic energies were consid-
ered: 0.52, 4.66, 12.96, 25.39, 41.98, 62.70, and 87.58 eV,
motivated by the typical kinetic energies of protons pro-
4
duced in Coulomb explosion experiments [31, 41].
III.
RESULTS
A.
Acetylene (C2H2)
The selected impact points (IPs) for proton collisions
with C2H2 are illustrated in Fig. 1. Due to molecular
symmetry, only incident points located at or to the right
of the molecular center were considered.
Specifically,
IP 1, IP 2, IP 3, and IP 4 correspond to the central
C–C bond, the right C atom, the right C–H bond, and
the terminal H atom of C2H2, respectively.
The outcomes of proton impacts on C2H2 are summa-
rized in Table I. Each row of the table corresponds to one
of the four selected IPs (IP 1–4, labeled in the first col-
umn), while the subsequent columns present the results
for different initial kinetic energies (KEs) of the proton
projectile, ranging from 0.52 to 87.58 eV.
In Table I, each cell represents the outcome of a specific
simulation defined by a chosen IP and an initial proton
KE. The first entry in each cell denotes the reaction type.
Three distinct outcomes were observed under the present
conditions: “S” (scattering), “P” (proton capture), and
“A” (abstraction). “S” indicates cases where the proton
was unable to form a bond and was instead reflected or
scattered. Scattering events may result in molecular frag-
mentation as well. “P” corresponds to proton capture,
in which the proton was retained by the molecule and
formed a stable bond throughout the simulation window
with no fragmentation occurring. “A” denotes abstrac-
tion events, where the proton induced molecular frag-
mentation or the dissociation of a single atom, and sub-
sequently bonded with the molecule or one of its frag-
ments.
Following the reaction type, the proton’s KE
loss is reported. This value is only present for scatter-
ing events, since the proton retains some KE as it de-
parts from the molecule. In the case of proton capture or
abstraction, this entry is marked by “–”, as the proton
becomes bonded and effectively loses its KE. The subse-
quent lines within each cell list the final molecular states
and fragments, specifying the reaction products and their
corresponding charge states.
Analyzing each cell in Table I provides information
about the outcome of a given IP and proton KE. For
example, consider the case of IP 2 with an initial proton
KE of 41.98 eV. The reaction is classified as scattering
(denoted by “S”), with the proton losing 13.00 eV of its
KE. In addition, charge analysis shows that the proton
captured 0.33 electrons from C2H2, as reflected in its final
charge state of H0.67+ (reduced from the initial H1.00+).
Meanwhile, the hydrocarbon target (C2H2) exhibits a fi-
nal charge state of 0.51+, corresponding to a net loss of
0.51 electrons. This indicates that, during the scatter-
ing event, electron ejection further ionized C2H2: 0.33
electrons were transferred from the molecule to the pro-
ton, while an additional 0.18 electrons were emitted into
the CAP (the meaning of fractional charges observed in
molecular fragments will be explained in subsequent dis-
cussion). Thus, even in scattering events, rich dynamics
emerge involving simultaneous electron capture, ioniza-
tion, and the transfer of KE from the projectile into both
nuclear and electronic degrees of freedom of the target
molecule.
In certain cases, scattering events are accompanied by
molecular fragmentation. For example, in Table I, the
simulation corresponding to IP 3 with an initial proton
KE of 12.96 eV produces the fragments C2H0.30+, H0.44+,
and H0.43+. In all scattering cases that yield multiple
fragments, the final entry corresponds to the proton pro-
jectile (here, H0.43+), while the preceding H ion (H0.44+)
originates from the target molecule. In this instance, the
projectile proton dislodged an H atom from the C2H2,
ionizing it in the process and simultaneously capturing
electrons, which led to the charge states listed in the
table. The proton lost 10.56 eV of KE during this in-
teraction.
Snapshots of the molecular trajectories and
electron densities for this event are shown in Fig. 2. Be-
tween 7.5 and 10 fs, the proton approaches IP 3 (the
right C–H bond) and wedges itself between the C and
H atoms. The resulting close approach generates strong
Coulomb repulsion, expelling the terminal H atom down-
ward while deflecting the proton rightward (10–12 fs).
Subsequently, mutual repulsion between the positively
charged proton and ejected H ion further increases their
separation (12–51 fs). By 51 fs, the system consists of a
CH2 fragment and two separated H ions, consistent with
the products identified in the table.
Analyzing a case that results in proton capture is
equally insightful. For instance, at IP 2 with an initial
proton KE of 0.52 eV (Table I), the proton is captured
by the molecule, forming a stable bond as reflected in the
final molecular state C2H31.24+. The final charge state
also indicates electron ejection. Since the C2H2 molecule
begins neutral and the incident proton carries a charge of
1+, the expected charge of the protonated product would
be 1+ if no electrons were lost. The observed charge of
1.24+ instead implies that an additional 0.24 electrons
were emitted from the hydrocarbon, revealing that elec-
tron ejection accompanied the capture process.
The final reaction pathway is illustrated for IP 3 at an
initial proton KE of 25.39 eV, as shown in Fig. 3. In this
trajectory, the proton approaches the C2H2 molecule be-
tween 5 and 7.5 fs, inserting itself between a C and H
atom. Between 7.5 and 9.5 fs, the molecular framework
elongates to accommodate the incoming proton, accom-
panied by strong interatomic repulsion. By 12 fs, the pro-
jectile transfers its KE, displacing the terminal H atom
and ejecting it from the molecule, while remaining bound
near the C2H fragment. From 30 to 120 fs, the proton es-
tablishes a stable bond within the hydrocarbon, yielding
a C2H2 fragment. Throughout this process, the molecule
exhibits pronounced rotational motion, indicating that a
portion of the proton’s initial KE is converted into rota-
tion, which contributes to stabilizing the newly formed
5
Impact Point (IP)
Proton Kinetic Energy (eV)
0.52
4.66
12.96
25.39
41.98
62.70
87.58
IP 1
S, 0.26
C2H2
1.01+
H0.25+
S, 3.13
C2H2
0.65+
H0.63+
S, 7.35
C2H2
0.63+
H0.71+
S, 13.47
C2H2
0.79+
H0.54+
S, 12.66
C2H2
0.87+
H0.44+
S, 8.73
C2H2
0.81+
H0.56+
S, 8.82
C2H2
0.90+
H0.44+
IP 2
P, –
C2H3
1.24+
S, 2.21
C2H2
0.74+
H0.50+
S, 4.39
C2H2
0.78+
H0.42+
S, 8.08
C2H2
0.80+
H0.40+
S, 13.00
C2H2
0.51+
H0.67+
S, 19.90
C2H2
0.65+
H0.58+
S, 29.22
C2H2
0.79+
H0.53+
IP 3
P, –
C2H3
1.22+
S, 3.30
C2H2
0.66+
H0.56+
S, 10.56
C2H0.30+
H0.44+
H0.43+
A, –
C2H2
0.75+
H0.50+
S, 12.76
C2H0.30+
H0.39+
H0.42+
S, 7.87
C2H2
0.74+
H0.49+
S, 6.81
C2H2
0.82+
H0.41+
IP 4
P, –
C2H3
1.20+
A, –
C2H2
0.61+
H0.54+
A, –
C2H2
0.49+
H0.58+
A, –
C2H2
0.80+
H0.30+
A, –
C2H2
0.55+
H0.58+
A, –
C2H2
0.57+
H0.55+
A, –
C2H2
0.73+
H0.38+
TABLE I. Combined outcome data for different C2H2 IPs (rows) and proton KEs (columns). Each cell indicates the reaction
type (S: scattering, P: proton capture, A: abstraction), the KE loss of the proton, and the resulting fragment products with
their respective charges.
FIG. 2. Scattering dynamics of a proton incident on C2H2, directed toward IP 3 with an initial KE of 12.96 eV. The electron
density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001.
bond.
As shown in Table I, the case analyzed in Fig. 3 is par-
ticularly notable because it represents the only instance
at IP 3 that leads to abstraction. Adjusting the initial
KE either higher or lower instead results in scattering.
This behavior highlights the existence of a critical KE
range necessary to inject the proton into the molecular
system without it being scattered. At this energy, the KE
is not so high that the proton simply passes through the
C-H bond without being captured, nor is it so low that
the proton approaches too close to the C and H nuclear
cores and is repelled, as illustrated in Fig. 2. Instead, at
approximately 25.39 eV, the proton can effectively bind
to the hydrocarbon, displacing an H atom in the process.
Interestingly, abstraction is particularly prevalent at
IP 4, occurring in every calculation with initial KEs
greater than or equal to 4.66 eV. In the case of 4.66 eV,
the proton first fragments the molecule by detaching one
of the hydrogens—observed as the H0.54+ fragment—and
subsequently bonds with the remaining C2H fragment,
forming the stable C2H20.61+ listed in Table I. This pro-
cess effectively replaces the H atom originally bound to
the hydrocarbon. The total charge of the resulting frag-
ments is 1.15+, indicating that approximately 0.15 elec-
trons were ejected from the system during the abstraction
event.
While individual calculations across various IPs and
proton KEs provide valuable insights, broader trends
emerge when comparing results as a function of IP and
proton KE in Table I.
For example, at IP 1 the out-
come is always scattering, regardless of the proton KE.
This is reasonable given that IP 1 lies at the center of
the C–C bond (see Fig. 1). In hydrocarbons, hydrogen
preferentially bonds in linear or pyramidal geometries at
positions maximally separated from neighboring atoms.
Here, however, the proton is introduced too close to both
carbon nuclei, leaving insufficient space to form a bond
and resulting in rapid ejection due to the strong Coulom-
bic repulsion from the carbon cores.
In contrast, at IP 4 the dominant process is abstrac-
tion whenever the incoming proton has an initial KE of
or above 4.66 eV. In these cases, the proton replaces the
6
FIG. 3. Abstraction event dynamics of a proton incident on C2H2, directed toward IP 3 with an initial KE of 25.39 eV. The
electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001.
right H atom in the C2H2 structure (see Fig. 1). This
high probability is consistent with geometry: IP 4 is lo-
cated directly at the position of the right H atom. The
proton arrives with sufficient KE to overcome the local
potential barrier and approach the target nucleus. As it
nears the H atom, Coulombic repulsion with the hydro-
carbon nuclei decelerates the proton, transferring its KE
to the target H atom and breaking its bond. The dis-
placed H atom is ejected, while the proton slows enough
to stabilize and occupy its position, thereby completing
the abstraction process.
This mechanism is consistent across all tested KEs of
and above 4.66 eV at IP 4. Even at very high values,
such as 87.58 eV, nearly all of the proton’s KE is trans-
ferred to the target H atom, which is knocked out, while
the proton itself comes to rest and bonds to the hydro-
carbon. At the lowest tested KE (0.52 eV), the proton
approaches slowly enough to allow the target H atom to
shift and make space for the proton to bond. The rela-
tively small KE can then be redistributed among nuclear
and electronic degrees of freedom, consistent with the
0.20 electron ejection reported in Table I.
In Table I, the reaction outcome is seen to depend
primarily on the IP, as most rows display consistent be-
havior across all KEs, with only one or two exceptions
per row. Nevertheless, KE still plays an important role.
At sufficiently low KE (0.52 eV), when the proton does
not strike the central C–C bond (IP 1), proton capture
occurs at the remaining incident points (IP 2–4). In this
regime, the low KE enables local atomic rearrangements
that facilitate bond formation while limiting energy re-
distribution across the molecular degrees of freedom, in-
cluding charge redistribution.
B.
Propane (C3H8)
The selected IPs for proton collisions with C3H8 are
illustrated in Fig. 4. Due to molecular symmetry, only
incident points at and to the right of the molecular center
are considered. IP 1 through IP 5 correspond, respec-
tively, to the central C atom, the right C–C bond, the
right C atom, the right C–H bond, and the terminal H
atom of the C3H8 molecule.
The results for C3H8, including the reaction type, pro-
ton KE loss, and fragment products across the various
KEs and IPs, are summarized in Table II.
The for-
FIG. 4. Impact point (IP) diagram for C3H8.
mat and types of information presented—namely scat-
tering, proton capture, and abstraction—are consistent
with those analyzed for C2H2 in Sec. III A and Table I.
As shown in Table II, the collision dynamics for C3H8
can produce particularly striking outcomes. Due to the
molecule’s larger size, proton-induced fragmentation can
be extensive.
For example, at IP 1 with an initial
KE of 87.58 eV, the resulting fragments are CH20.29+,
CH30.31+, CH30.47+, and H+0.32 (the proton projectile).
Snapshots of the molecular breakup is illustrated in
Fig. 5. The proton approaches the molecule at 2 fs and
reaches its closest distance at 4 fs, after which it is rapidly
repelled and scattered away by 6 fs. This energy transfer
causes the C3H8 molecular geometry to bend and recoil,
as indicated by the central C atom dipping between 6 fs
and 10 fs. By 15 fs, the proton has fully separated from
the molecule, leaving two CH3 fragments on either side,
a C–H bond at the bottom, and an unbound hydrogen in
the center.
Subsequently,
asymmetric charge distribution and
Coulomb repulsion among the positively charged CH3
fragments drive hydrogen migration toward the CH frag-
ment at the bottom, forming a CH2 fragment by 32 fs.
The atoms remain bound but continue to repel one an-
other due to their positive charge, as observed at 96 fs.
This trajectory demonstrates how a single proton colli-
sion can produce multiple fragments, including hydrogen
migration events. In the context of Coulomb explosion,
the occurrence of secondary fragmentation (summarized
in Table II) shows that some molecular products can re-
sult solely from proton–molecule collisions rather than
7
Impact Point (IP)
Proton Kinetic Energy (eV)
0.52
4.66
12.96
25.39
41.98
62.70
87.58
IP 1
S, -0.14
C3H7
1.14+
H0.01+
H0.07+
S, 3.09
C3H7
1.12+
H0.10+
H0.07+
S, 6.29
C3H8
1.01+
H0.35+
S, 11.05
C3H8
1.05+
H0.33+
S, 16.42
C3H8
1.12+
H0.30+
S, 23.27
C3H7
1.12+
H0.03−
H0.32+
S, 30.73
CH2
0.29+
CH3
0.31+
CH3
0.47+
H0.32+
IP 2
A, –
C3H7
1.17+
H2
0.05+
S, 4.54
C3H7
1.24+
H0.07+
H0.04+
S, 12.59
C2H5
0.70+
CH3
0.40+
H0.23+
S, 7.93
C3H8
0.92+
H0.47+
S, 7.04
C3H8
0.96+
H0.42+
S, 6.90
C3H8
0.91+
H0.42+
S, 6.22
C3H8
0.88+
H0.46+
IP 3
P, –
C3H9
1.25+
S, 3.72
C3H7
1.23+
H0.11+
H0.05−
S, 6.58
C3H8
0.96+
H0.39+
S, 10.16
C3H8
0.95+
H0.38+
S, 16.15
C2H5
0.67+
CH3
0.35+
H0.36+
S, 23.13
C2H5
0.75+
CH3
0.30+
H0.34+
S, 30.34
C2H5
0.68+
CH2
0.43+
H0.09−
H0.34+
IP 4
P, –
C3H9
1.25+
S, 4.43
C3H7
1.16+
H0.03+
H0.11+
A, –
C3H7
1.07+
H2
0.22+
S, 17.22
C3H7
1.06+
H0.15+
H0.13+
S, 12.68
C3H7
1.00+
H0.12+
H0.23+
S, 10.16
C3H7
0.82+
H0.19+
H0.33+
S, 8.69
C3H7
1.06+
H0.03−
H0.33+
IP 5
P, –
C3H9
1.22+
A, –
C3H8
0.79+
H0.45+
A, –
C3H8
0.86+
H0.36+
A, –
C3H8
0.86+
H0.37+
A, –
C3H8
0.88+
H0.39+
A, –
C3H8
0.88+
H0.38+
A, –
C3H8
0.72+
H0.46+
TABLE II. Combined outcome data for different C3H8 IPs (rows) and proton KEs (columns). Each cell shows the reaction type
(S: scattering, P: proton capture, A: abstraction), followed by the KE loss of the proton and the resulting fragment products
with their respective charges.
from direct laser-induced ionization. Such products may
also appear among the fragment distributions reported
in Coulomb explosion experiments [37–45].
Another noteworthy case in Table II is the formation of
an H2 molecule at IP 2, associated with an initial proton
kinetic energy of 0.52 eV. The dynamics of this process
are shown in Fig. 6. By 43 fs, the incident proton has ap-
proached the molecule, and at 47 fs it comes sufficiently
close to transiently bond with the top-center H atom of
C3H8. By 53 fs, the proton and the bonded H atom de-
tach from the parent molecule and are repelled upward,
away from the C3H7 fragment, reaching complete sepa-
ration by 120 fs. This result demonstrates that proton
collisions can induce the formation of H2, underscoring
the complex dynamics of these interactions and the abil-
ity of protons to abstract molecular fragments.
More
broadly, it highlights the role of many-body effects in
Coulomb explosion and suggests that a wider variety of
fragmentation products may emerge through such mech-
anisms [37, 40].
Interestingly, in terms of reaction type, Table II shows
a striking resemblance to Table I.
In both molecules,
IP 1 results exclusively in scattering outcomes. This is
consistent with the geometry, as IP 1 corresponds to the
center of the molecule—directly at C–C bonds or central
C atoms (see Fig. 1 and Fig. 4). In this configuration,
there is insufficient space for the proton to form a bond,
and it is repelled by the carbon nuclear core.
For the remaining incident points corresponding to the
right C atom, the right C–H bond, and the right H atom
(IP 2–4 in C2H2 and IP 3–5 in C3H8), proton capture oc-
curs at the lowest KE of 0.52 eV. This strong correlation
reflects the similarity in the collision geometry at these
sites (see Fig. 1 and Fig. 4). At higher proton energies,
the reaction patterns are also analogous between the two
molecules. For example, the row of reaction types at IP 2
in C2H2 matches that of IP 3 in C3H8—proton capture at
the lowest KE followed by scattering at higher KEs. Sim-
ilarly, IP 3 in C2H2 and IP 4 in C3H8 exhibit scattering at
all KEs except for a specific critical energy range, where
abstraction occurs, as discussed in Sec. III A (25.39 eV
and 12.96 eV for C2H2 and C3H8, respectively).
Finally, the right-most incident point in both molecules
(IP 4 for C2H2 and IP 5 for C3H8) consistently results
in abstraction at all KEs above the lowest tested value.
This highlights that striking the terminal hydrogen in a
hydrocarbon leads to a high probability of abstraction,
as it allows the proton to efficiently transfer nearly all
its KE to the target H atom, with the residual energy
redistributed throughout the hydrocarbon molecule.
8
FIG. 5. Scattering dynamics of a proton incident on C3H8, directed toward IP 1 with an initial KE of 87.58 eV. The electron
density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001.
C.
Butane (C4H10)
For the butane molecule, we selected the gauche con-
formation. This structural choice was made to increase
diversity, as it offers a greater number of potential im-
pact points and enables the investigation of a non-
centrosymmetric molecular system. The selected IPs for
C4H10 are shown in Fig. 7. As in the cases of C2H2 and
C3H8 (discussed in Sec. III A and Sec. III B, respectively),
the IPs were chosen to represent regions of interest on
the molecular framework, specifically bond centers and
atomic sites. In contrast to C2H2 and C3H8, however,
C4H10 lacks left–right symmetry, requiring the selection
of IPs on both sides of the molecule (see Fig. 7).
The results of the simulations, including reaction type,
KE loss, and fragmentation products, are summarized
in Table III. The data highlight the dependence of the
outcomes on both the location of the proton collision and
the KE of the incoming projectile. At the highest tested
KE of 87.58 eV, protons scatter regardless of the chosen
IP. In contrast, at the lowest tested KE of 0.52 eV, proton
capture is the most frequent outcome, occurring in seven
cases across the various IPs. The specific IP also plays a
significant role in determining the reaction pathway; for
example, IP 5 and IP 9 consistently lead to scattering,
independent of the proton projectile KE.
The data also reveals diverse fragmentation pathways
resulting from proton collisions with C4H10. For exam-
ple, fragments such as C4H9, C3H7, C2H5, and CH3 are
observed across different IPs and proton KEs. Many of
these fragmentation events occur only under very spe-
cific conditions.
A notable case arises at IP 3 with a
proton KE of 12.96 eV, where the collision cleaves the
molecule into CH3 and C3H7 fragments. At higher KEs,
no fragmentation occurs at this IP, while at lower KEs,
the molecule instead captures the proton. This demon-
strates that such fragmentation events are uncommon
and require narrowly defined conditions.
The molecular dynamics of the IP 3, 12.96 eV case
are illustrated in Fig. 8. As shown, the proton reaches
the center of the left C–C bond at 10 fs. By 16 fs, the
proton repels the positively charged carbon nuclei, lead-
ing to bond cleavage into CH3 and C3H7, while the pro-
ton itself is repelled and ejected from the molecular core.
From 22 fs to 77 fs, the proton continues migrating away
as the two fragments separate and undergo internal re-
arrangement, ultimately forming the distinct CH3 and
C3H7 products. This trajectory highlights why the frag-
mentation and CH3 formation probability is low: suc-
cessful breakup requires an optimal set of conditions in
which the proton KE is sufficient to penetrate the C–C
bond center without being displaced beforehand.
At lower KEs (4.66 eV and below), the proton lacks
sufficient energy to overcome the potential barrier and
reach the C–C bond. In these cases, charge redistribu-
tion occurs over a longer timescale, leading to proton cap-
ture rather than bond cleavage. At higher KEs (25.39 eV
and above), the proton traverses the system too quickly,
9
FIG. 6. Abstraction dynamics of a proton incident on C3H8, directed toward IP 1 with an initial KE of 0.52 eV. The electron
density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001.
FIG. 7. Impact point (IP) diagram for C4H10.
transferring excess energy and scattering past the target
bond rather than inducing fragmentation.
Thus, frag-
mentation occurs only within a narrow KE window near
12.96 eV. This optimal energy range for cleaving termi-
nal C–C bonds is consistent with the results obtained for
propane (Sec. III B). Specifically, IP 3/9 in Table III and
IP 2 in Table II correspond to terminal C–C bonds in
C4H10 and C3H8, respectively. In both molecules, colli-
sions at these sites with a KE of 12.96 eV lead to the sepa-
ration of a terminal CH3 group from the parent molecule.
These findings underscore the similarities across different
alkanes and suggest that proton impact energies in this
range are optimal for cleaving terminal C–C bonds.
The energy required to cleave the central C–C bond
(IP 6 in Fig. 7) is greater than that needed to break
the terminal C–C bond discussed above. As shown in
Table III at IP 6, proton KEs of 25.39 and 41.98 eV
result in cleavage of the central bond, separating C4H10
into two equal C2H5 fragments. At both lower and higher
KEs, however, this bond remains intact.
In contrast to the case where CH3 and C3H7 fragments
are produced only within a restricted KE range by di-
rectly striking the terminal C–C bond (IP 3/9), frag-
mentation is more robust when the terminal C atom is
impacted directly, as in IP 10 (see Fig. 7).
As shown
in Table III, every proton KE at or above 41.98 eV for
IP 10 results in C–C bond cleavage and the subsequent
formation of CH3 and C3H7. In this case, the proton car-
ries sufficient energy to collide with the terminal carbon
nucleus and eject it from the molecular framework.
An important aspect of the fragmentation results con-
cerns the charge states of the resulting fragments. The
10
FIG. 8. Scattering dynamics of a proton incident on C4H10, directed toward IP 3 with an initial KE of 12.96 eV. The electron
density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001.
fractional charge states, obtained by integrating the elec-
tron density using TDDFT with the ALDA functional,
can be interpreted as probabilities of discrete charge
states. For instance, a fragment identified as CH30.28+
(see Table III for 12.96 eV at IP 3) corresponds to a 28%
probability of being singly charged and a 72% probability
of being neutral. These results therefore predict the for-
mation of neutral as well as charged fragments, offering
insight into their role in Coulomb explosion experiments.
This is particularly significant because the neutral coun-
terparts of odd-electron fragments correspond to radical
species, which have been highlighted as a subject of inter-
est in the Coulomb explosion of butane [40]. One could
alternatively contend that the fractional charges observed
in molecular fragments arise from computational limita-
tions—specifically, the constrained simulation box size
and finite simulation duration.
Under this interpreta-
tion, the electron cloud disrupted by proton scattering
would eventually redistribute to yield integer charges if
given adequate time to equilibrate within a sufficiently
expansive simulation domain.
IV.
SUMMARY
We have used time-dependent density functional the-
ory to investigate low-energy proton collisions with hy-
drocarbons of increasing complexity: acetylene (C2H2),
propane (C3H8), and butane (C4H10). By systematically
varying both the incident point (IP) and the initial pro-
ton kinetic energy (KE), we identified general trends in
the reaction outcomes as well as molecule-specific frag-
mentation pathways.
Across all systems, three principal reaction types were
observed: scattering, proton capture, and abstraction.
The outcome depends sensitively on both the proton KE
and the collision geometry (See Tables I, II, and III for
C2H2, C3H8, and C4H10 results, respectively).
IPs at
central C–C bonds and C atoms predominantly lead to
scattering, while terminal H atoms consistently favor ab-
straction at KEs above a few electronvolts. At the low-
est tested energy (0.52 eV), proton capture dominates at
most IPs, where the slow approach enables charge redis-
tribution and local atomic rearrangements that stabilize
the projectile within the hydrocarbon framework.
C–C bond cleavage was found to be uncommon and
sensitive to initial conditions. A narrow KE window near
12.96 eV was identified as optimal for detaching a termi-
nal CH3 group in both propane and butane when striking
at the C–C bond. At lower energies, the proton is cap-
tured before reaching the bond, while at higher energies,
it traverses too quickly and scatters. These results reveal
that intermediate KEs are uniquely effective at induc-
ing fragmentation and formation of smaller hydrocarbons
as many different fragment products have been observed
across various KEs and IPs such as C4H9, C3H7, C2H5,
and CH3 stemming from the parent molecule of C4H10.
Interestingly, the snapshots reveal complicated dynamics
at play such as hydrogen migration and hydrogen ab-
straction leading to unique fragments in C3H8 such as
H2, C2H5, CH2, and CH3. Such outcomes underscore the
importance of including proton–molecule collisions when
modeling fragment distributions and Coulomb explosion
interactions.
Charge-state analysis showed that fragments fre-
quently carry fractional charges, which can be interpreted
as probabilities of discrete charge states. This indicates
that neutral fragments, including radical species such as
CH3, can be produced alongside ionic products.
Such
radicals have been reported in experimental studies of
hydrocarbon Coulomb explosion, and our results provide
an additional possible microscopic mechanism for their
formation [40].
Overall, our findings demonstrate that fragmentation
dynamics in hydrocarbons under proton impact arise
from a delicate interplay of projectile energy and molec-
ular geometry.
The consistent trends observed across
acetylene, propane, and butane point toward general
features of alkanes such that: C–C bonds are suscepti-
ble to breakup only within a narrow intermediate KE
range, while terminal hydrogens favor abstraction across
a broad range of energies.
In the context of Coulomb
explosion, these results highlight the role of secondary
proton–molecule collisions in generating a wide variety
of fragments beyond those directly produced by laser-
induced ionization. More broadly, they contribute to a
microscopic understanding of ion–molecule interactions
relevant to radiation damage, ion-beam processing, and
11
astrochemical environments where low-energy proton col-
lisions play a central role.
Future work could involve experimental validation of
these results as well as extending the calculations to
larger molecules, to collisions involving other ionic pro-
jectiles, to finite-temperature conditions, and to trajec-
tories in which the projectiles impinge at different angles.
Such studies would further clarify the generality of the
trends identified here and help establish connections to
real experimental conditions.
ACKNOWLEDGMENTS
This work has been supported by the National Sci-
ence Foundation (NSF) under Grants No. DMR-2217759
and No.
NSF IRES 2245029.
This work used ACES
at TAMU through allocation PHYS240167 from the
Advanced Cyberinfrastructure Coordination Ecosystem:
Services & Support (ACCESS) program, which is sup-
ported by National Science Foundation grants #2138259,
#2138286, #2138307, #2137603, and #2138296 [63].
The ELI ALPS project (GINOP-2.3.6-15-2015-00001)
is supported by the European Union and co-financed by
the European Regional Development Fund.
[1] M. P. Carante, R. L. Ramos, and F. Ballarini, Radia-
tion damage in biomolecules and cells 3.0, International
Journal of Molecular Sciences 25, 10.3390/ijms25126368
(2024).
[2] C. Shepard, D. C. Yost, and Y. Kanai, Electronic excita-
tion response of dna to high-energy proton radiation in
water, Phys. Rev. Lett. 130, 118401 (2023).
[3] S. KC and R. Abolfath, Towards the ionizing radiation
induced bond dissociation mechanism in oxygen, water,
guanine and dna fragmentation: a density functional the-
ory simulation, Scientific Reports 12, 19853 (2022).
[4] A. Saha, M. Mecklenburg, A. Pattison, A. Brewster,
J. A. Rodriguez, and P. Ercius, Mapping electron
beam-induced
radiolytic
damage
in
molecular
crys-
tals, Microscopy and Microanalysis 30, ozae044.902
(2024),
https://academic.oup.com/mam/article-
pdf/30/Supplement 1/ozae044.902/58669529/ozae044.902.pdf.
[5] G. Kraft, Tumortherapy with ion beams, Nuclear In-
struments and Methods in Physics Research Section A:
Accelerators, Spectrometers, Detectors and Associated
Equipment 454, 1 (2000), proc. of the 1st Int Symp. on
Applications of Particle Detectors in Medicine, Biology
and Astrophysics.
[6] F. Tommasino and M. Durante, Proton radiobiology,
Cancers 7, 353 (2015).
[7] Z. Li, Q. Li, H. Tian, M. Wang, R. Lin, J. Bai, D. Wang,
and M. Dong, Proton beam therapy for craniopharyn-
gioma: a systematic review and meta-analysis, Radiation
Oncology 19, 161 (2024).
[8] C. Graeff, L. Volz, and M. Durante, Emerging tech-
nologies for cancer therapy using accelerated particles,
Progress in Particle and Nuclear Physics 131, 104046
(2023).
[9] L. Huang, H. Wu, G. Cai, S. Wu, D. Li, T. Jiang, B. Qiao,
C. Jiang, and F. Ren, Recent progress in the application
of ion beam technology in the modification and fabrica-
tion of nanostructured energy materials, ACS Nano 18,
2578 (2024).
[10] N. Shabi, O. Girshevitz, D. Primetzhofer, M. Kaveh, and
I. Shlimak, Dominant impact of ion velocity on defect
formation in suspended graphene, Surfaces and Interfaces
58, 105872 (2025).
[11] Y. Liu, Y. Deng, Y. Wang, L. Wang, T. Liu, Z. Gong,
Z. Fan, H. Wei, Z. Su, W. Wei, Y. Wang, and Y. Dan,
Surface self-assembly of gold nanoparticles on graphite
driven by ion-irradiation-induced atomic defects, Applied
Surface Science 704, 163442 (2025).
[12] A. V. Krasheninnikov and F. Banhart, Engineering of
nanostructured carbon materials with electron or ion
beams, Nature Materials 6, 723 (2007).
[13] S. Bubin, B. Wang, S. Pantelides, and K. Varga, Sim-
ulation of high-energy ion collisions with graphene frag-
ments, Phys. Rev. B 85, 235435 (2012).
[14] Z. Wang, S.-S. Li, and L.-W. Wang, Efficient real-time
time-dependent density functional theory method and its
application to a collision of an ion with a 2d material,
Phys. Rev. Lett. 114, 063004 (2015).
[15] J. H. Westlake, J. H. Waite Jr., N. Carrasco, M. Richard,
and T. Cravens, The role of ion-molecule reactions in the
growth of heavy ions in titan’s ionosphere, Journal of
Geophysical Research: Space Physics 119, 5951 (2014),
https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2014JA020
[16] Mancini, Luca, Valen¸ca Ferreira de Aragao, Emilia,
Pirani, Fernando, Rosi, Marzio, Faginas-Lago, Noelia,
Richardson,
Vincent,
Martini,
Luca Matteo,
Podio,
Linda,
Lippi,
Manuela,
Codella,
Claudio, and As-
cenzi, Daniela, Destruction of interstellar methyl cyanide
(ch3cn) via collisions with he+ ions, Astrophysics and
Astronomy 691, A83 (2024).
[17] W. Cui and E. Herbst, Exploring the role of ion–molecule
reactions on interstellar icy grain surfaces, ACS Earth
and Space Chemistry 8, 2218 (2024).
[18] H. J. L¨udde, M. Horbatsch, and T. Kirchner, Proton-
impact-induced electron emission from biologically rele-
vant molecules studied with a screened independent atom
model, Journal of Physics B: Atomic, Molecular and Op-
tical Physics 52, 195203 (2019).
[19] H. Luna, A. L. F. de Barros, J. A. Wyer, S. W. J. Scully,
J. Lecointre, P. M. Y. Garcia, G. M. Sigaud, A. C. F.
Santos, V. Senthil, M. B. Shah, C. J. Latimer, and E. C.
Montenegro, Water-molecule dissociation by proton and
hydrogen impact, Phys. Rev. A 75, 042711 (2007).
[20] X. Guan and K. Bartschat, Complete breakup of the he-
lium atom by proton and antiproton impact, Phys. Rev.
12
Lett. 103, 213201 (2009).
[21] I. B. Abdurakhmanov, S. U. Alladustov, J. J. Bailey,
A. S. Kadyrov, and I. Bray, Proton scattering from ex-
cited states of atomic hydrogen, Plasma Physics and Con-
trolled Fusion 60, 095009 (2018).
[22] A. C. K. Leung and T. Kirchner, Proton impact on
ground and excited states of atomic hydrogen, The Eu-
ropean Physical Journal D 73, 246 (2019).
[23] R. Seraide, M. A. Bernal, G. Brunetto, U. de Giovannini,
and A. Rubio, Tddft-based study on the proton–dna col-
lision, The Journal of Physical Chemistry B 121, 7276
(2017).
[24] C. Covington, K. Hartig, A. Russakoff, R. Kulpins, and
K. Varga, Time-dependent density-functional-theory in-
vestigation of the collisions of protons and α particles
with uracil and adenine, Phys. Rev. A 95, 052701 (2017).
[25] X. Wang, Z.-P. Wang, F.-S. Zhang, and C.-Y. Qian, Col-
lision site effect on the radiation dynamics of cytosine
induced by proton, Chinese Physics B 31, 063401 (2022).
[26] J. M. Sanders and S. L. Varghese, Electron capture
from hydrocarbon molecules by proton projectiles in the
60–120 kev energy range, AIP Conference Proceedings
475, 70 (1999), https://pubs.aip.org/aip/acp/article-
pdf/475/1/70/12099216/70 1 online.pdf.
[27] J. M. Sanders, S. L. Varghese, C. H. Fleming, and G. A.
Soosai, Electron capture by protons and electron loss
from hydrogen atoms in collisions with hydrocarbon and
hydrogen molecules in the 60–120 kev energy range, Jour-
nal of Physics B: Atomic, Molecular and Optical Physics
36, 3835 (2003).
[28] H. Baumgart, W. Arnold, J. G¨unzl, E. Huttel, A. Hof-
mann, N. Kniest, E. Pfaff, G. Reiter, S. Tharraketta,
and G. Clausnitzer, Proton and helium stopping cross
sections in gaseous hydrocarbon compounds, Nuclear In-
struments and Methods in Physics Research Section B:
Beam Interactions with Materials and Atoms 5, 1 (1984).
[29] H. J. L¨udde, A. Jorge, M. Horbatsch, and T. Kirch-
ner, Net electron capture in collisions of multiply charged
projectiles with biologically relevant molecules, Atoms 8,
10.3390/atoms8030059 (2020).
[30] H. J. L¨udde, M. Horbatsch, and T. Kirchner, Electron
capture and ionization cross-section calculations for pro-
ton collisions with methane and the dna and rna nucle-
obases, The European Physical Journal D 73, 249 (2019).
[31] S. Roither, X. Xie, D. Kartashov, L. Zhang, M. Sch¨offler,
H. Xu, A. Iwasaki, T. Okino, K. Yamanouchi, A. Bal-
tuska, and M. Kitzler, High energy proton ejection from
hydrocarbon molecules driven by highly efficient field ion-
ization, Phys. Rev. Lett. 106, 163001 (2011).
[32] Y. Zhang, B. Wang, L. Wei, T. Jiang, W. Yu, R. Hut-
ton, Y. Zou, L. Chen, and B. Wei, Proton migration
in
hydrocarbons
induced
by
slow
highly
charged
ion impact, The Journal of Chemical Physics 150,
204303
(2019),
https://pubs.aip.org/aip/jcp/article-
pdf/doi/10.1063/1.5088690/9651872/204303 1 online.pdf.
[33] E. E. Quashie, B. C. Saha, X. Andrade, and A. A. Correa,
Self-interaction effects on charge-transfer collisions, Phys.
Rev. A 95, 042517 (2017).
[34] M.-A. Herv´e du Penhoat, N. R. Moraga, M.-P. Gaigeot,
R. Vuilleumier, I. Tavernelli, and M.-F. Politis, Proton
Collision on Deoxyribose Originating from Doubly Ion-
ized Water Molecule Dissociation, The Journal of Physi-
cal Chemistry A 122, 5311 (2018), publisher: American
Chemical Society.
[35] Y. Miyamoto and T. Komatsu, Molecular-scale model-
ing of light emission by combustion: An ab initio study,
Scientific Reports 9, 12707 (2019).
[36] C.-Z. Gao, J. Wang, F. Wang, and F.-S. Zhang, Theo-
retical study on collision dynamics of H+ + CH4 at low
energies, The Journal of Chemical Physics 140, 054308
(2014).
[37] S. S. Taylor, K. Varga, K. Mogyor´osi, V. Chik´an, and
C. Covington, Fragmentation in coulomb explosion of hy-
drocarbon molecules, Phys. Rev. A 111, 013109 (2025).
[38] I. Last and J. Jortner, Nuclear fusion driven by coulomb
explosion of methane clusters, The Journal of Physical
Chemistry A 106, 10877 (2002).
[39] C. Cornaggia, D. Normand, and J. Morellec, Role of the
molecular electronic configuration in the coulomb frag-
mentation of n2, c2h2 and c2h4 in an intense laser field,
Journal of Physics B: Atomic, Molecular and Optical
Physics 25, L415 (1992).
[40] K. Mogyor´osi, B. T´oth, K. S´arosi, B. Gilicze, J. Csontos,
T. Somosk˝oi, S. T´oth, P. Prasannan Geetha, L. T´oth,
S. S. Taylor, N. Skoufis, L. Barron, K. Varga, C. Coving-
ton, and V. Chik´an, Ch(a) radical formation in coulomb
explosion from butane seeded plasma generated with
chirp-controlled ultrashort laser pulses, ACS Omega 10,
25285 (2025).
[41] A. N. Markevitch, D. A. Romanov, S. M. Smith, and R. J.
Levis, Coulomb explosion of large polyatomic molecules
assisted by nonadiabatic charge localization, Phys. Rev.
Lett. 92, 063001 (2004).
[42] H. Hasegawa, A. Matsuda, T. Morishita, L. B. Madsen,
F. Jensen, O. I. Tolstikhin, and A. Hishikawa, Disso-
ciative ionization and coulomb explosion of ch4 in two-
color asymmetric intense laser fields, Phys. Chem. Chem.
Phys. 25, 25408 (2023).
[43] S. S. Taylor, C. Covington, and K. Varga, Quantum ef-
fects of coulomb explosion simulations revealed by time-
dependent density-functional theory, Phys. Rev. A 111,
033109 (2025).
[44] S. Palaniyappan, R. Mitchell, N. Ekanayake, A. M.
Watts, S. L. White, R. Sauer, L. E. Howard, M. Videtto,
C. Mancuso, S. J. Wells, T. Stanev, B. L. Wen, M. F.
Decamp, and B. C. Walker, Ionization of ethane, butane,
and octane in strong laser fields, Phys. Rev. A 82, 043433
(2010).
[45] E.
Livshits
and
R.
Baer,
Time-dependent
density-
functional studies of the d2 coulomb explosion, The Jour-
nal of Physical Chemistry A 110, 8443 (2006).
[46] E. Runge and E. K. Gross, Density-functional theory for
time-dependent systems, Physical review letters 52, 997
(1984).
[47] C. A. Ullrich, Time-Dependent Density-Functional The-
ory:
Concepts and Applications
(Oxford University
Press, USA, 2012).
[48] A. Kononov, A. J. White, K. A. Nichols, S. X. Hu,
and A. D. Baczewski, Reproducibility of real-time time-
dependent density functional theory calculations of elec-
tronic stopping power in warm dense matter, Physics of
Plasmas 31, 043904 (2024).
[49] J.
Hekele,
Y.
Yao,
Y.
Kanai,
V.
Blum,
and
P. Kratzer, All-electron real-time and imaginary-time
time-dependent density functional theory within a nu-
meric atom-centered basis function framework, The Jour-
nal of Chemical Physics 155, 154801 (2021).
[50] C. J. Herring and M. M. Montemore, Recent Advances in
13
Real-Time Time-Dependent Density Functional Theory
Simulations of Plasmonic Nanostructures and Plasmonic
Photocatalysis, ACS Nanoscience Au 3, 269 (2023), pub-
lisher: American Chemical Society.
[51] J. Xu, T. E. Carney, R. Zhou, C. Shepard, and Y. Kanai,
Real-Time Time-Dependent Density Functional Theory
for Simulating Nonequilibrium Electron Dynamics, Jour-
nal of the American Chemical Society 146, 5011 (2024),
publisher: American Chemical Society.
[52] Z. Wang, Efficient Real-Time Time-Dependent Density
Functional Theory Method and its Application to a Col-
lision of an Ion with a 2D Material, Physical Review Let-
ters 114, 10.1103/PhysRevLett.114.063004 (2015).
[53] T. P. Rossi, M. Kuisma, M. J. Puska, R. M. Nieminen,
and P. Erhart, Kohn–Sham Decomposition in Real-Time
Time-Dependent Density-Functional Theory:
An Effi-
cient Tool for Analyzing Plasmonic Excitations, Journal
of Chemical Theory and Computation 13, 4779 (2017),
publisher: American Chemical Society.
[54] M. Noda,
S. A. Sato,
Y. Hirokawa,
M. Uemoto,
T. Takeuchi, S. Yamada, A. Yamada, Y. Shinohara,
M. Yamaguchi, K. Iida, I. Floss, T. Otobe, K.-M. Lee,
K. Ishimura, T. Boku, G. F. Bertsch, K. Nobusada, and
K. Yabana, SALMON: Scalable Ab-initio Light–Matter
simulator for Optics and Nanoscience, Computer Physics
Communications 235, 356 (2019).
[55] X. Andrade,
J. Alberdi-Rodriguez,
D. A. Strubbe,
M. J. T. Oliveira, F. Nogueira, A. Castro, J. Muguerza,
A. Arruabarrena, S. G. Louie, A. Aspuru-Guzik, A. Ru-
bio, and M. A. L. Marques, Time-dependent density-
functional theory in massively parallel computer archi-
tectures: the octopus project, Journal of Physics: Con-
densed Matter 24, 233202 (2012).
[56] J. I. Fuks, S. E. B. Nielsen, M. Ruggenthaler, and
N. T. Maitra, Time-dependent density functional the-
ory beyond kohn–sham slater determinants, Phys. Chem.
Chem. Phys. 18, 20976 (2016).
[57] D. B. Dar, Reformulation of Time-Dependent Density
Functional Theory for Nonperturbative Dynamics: The
Rabi Oscillation Problem Resolved, Physical Review Let-
ters 133, 10.1103/PhysRevLett.133.096401 (2024).
[58] X. Li, N. Govind, C. Isborn, A. E. I. DePrince, and
K. Lopata, Real-time time-dependent electronic struc-
ture theory, Chemical Reviews 120, 9951 (2020), pMID:
32813506.
[59] K.
Varga
and
J.
A.
Driscoll,
in
Computational
Nanoscience: Applications for Molecules, Clusters, and
Solids (Cambridge University Press, 2011).
[60] N. Troullier and J. L. Martins, Efficient pseudopoten-
tials for plane-wave calculations, Phys. Rev. B 43, 1993
(1991).
[61] J. P. Perdew and A. Zunger, Self-interaction correction
to density-functional approximations for many-electron
systems, Phys. Rev. B 23, 5048 (1981).
[62] D.
E.
Manolopoulos,
Derivation
and
reflection
properties
of
a
transmission-free
absorbing
po-
tential,
The
Journal
of
Chemical
Physics
117,
9552
(2002),
https://pubs.aip.org/aip/jcp/article-
pdf/117/21/9552/19225128/9552 1 online.pdf.
[63] T. J. Boerner, S. Deems, T. R. Furlani, S. L. Knuth,
and J. Towns, Access: Advancing innovation: Nsf’s ad-
vanced cyberinfrastructure coordination ecosystem: Ser-
vices & support, in Practice and Experience in Advanced
Research Computing 2023: Computing for the Common
Good, PEARC ’23 (Association for Computing Machin-
ery, New York, NY, USA, 2023) pp. 173–176.
14
Impact Point (IP)
Proton Kinetic Energy (eV)
0.52
4.66
12.96
25.39
41.98
62.70
87.58
IP 1
P, –
C4H11
1.23+
P, –
C4H11
1.25+
S, 6.93
C4H10
0.93+
H0.36+
S, 16.05
C4H9
1.00+
H0.11+
H0.22+
S, 39.45
C4H9
0.97+
H0.25+
H0.14+
A, –
C4H10
1.05+
H0.33+
S, 47.61
C4H9
0.91+
H0.27+
H0.15+
IP 2
A, –
C4H9
1.19+
H2
0.01+
A, –
C4H9
1.24+
H2
0.06+
A, –
C4H10
1.14+
H0.20+
S, 23.98
C4H9
0.92+
H0.12+
H0.40+
S, 37.12
C4H9
0.92+
H0.44+
H0.12+
S, 41.13
C4H9
0.93+
H0.47+
H0.05+
S, 44.35
C4H9
0.72+
H0.42+
H0.31+
IP 3
P, –
C4H11
1.22+
P, –
C4H11
1.33+
S, 11.76
CH3
0.28+
C3H7
0.84+
H0.24+
S, 12.85
C4H10
1.07+
H1.00+
S, 9.30
C4H10
1.01+
H0.38+
S, 8.80
C4H10
1.02+
H0.41+
S, 9.19
C4H10
1.06+
H0.42+
IP 4
S, 0.37
C4H9
1.13+
H0.02−
H0.09+
S, 3.58
C4H10
0.78+
H0.49+
A, –
C4H9
1.27+
H2
0.06+
S, 12.71
C4H9
1.15+
H0.08+
H0.14+
S, 11.07
C4H9
1.01+
H0.08+
H0.27+
S, 10.31
C4H10
0.94+
H0.47+
S, 9.67
C4H10
0.94+
H0.44+
IP 5
S, 0.36
C4H9
1.16+
H0.03+
H0.05+
S, 3.44
C4H10
0.86+
H0.45+
S, 4.95
C4H10
0.88+
H0.39+
S, 9.81
C4H10
0.86+
H0.48+
S, 16.03
C4H10
1.01+
H0.45+
S, 22.33
C4H10
1.06+
H0.39+
S, 30.32
C2H5
0.51+
C2H5
0.51+
H0.45+
IP 6
P, –
C4H11
1.24+
A, –
C4H9
1.27+
H2
0.05+
S, 7.10
C4H10
0.94+
H0.43+
S, 16.53
C2H5
0.59+
C2H5
0.62+
H0.23+
S, 14.30
C2H5
0.70+
C2H5
0.51+
H0.29+
S, 11.04
C4H10
1.08+
H0.37+
S, 10.18
C4H10
1.08+
H0.38+
IP 7
P, –
C4H11
1.25+
A, –
C4H9
1.24+
H2
0.05+
A, –
C4H10
0.90+
H0.48+
S, 21.65
C4H9
1.19+
H0.09+
H0.07+
S, 34.43
C4H9
1.03+
H0.30+
H0.07+
S, 49.35
C4H9
0.78+
H0.42+
H0.37+
S, 55.73
C4H9
0.74+
H0.36+
H0.37+
IP 8
P, –
C4H11
1.23+
S, 4.31
C4H9
1.24+
H0.01+
H0.07+
A, –
C4H9
1.16+
H2
0.17+
S, 22.67
C4H9
1.14+
H0.19+
H0.08+
S, 23.23
C4H9
1.13+
H0.16+
H0.07+
S, 16.89
C4H9
1.09+
H0.08+
H0.14+
S, 14.65
C4H9
1.04+
H0.09+
H0.25+
IP 9
S, 0.44
C4H9
1.18+
H0.08+
H0.01−
S, 4.52
C4H9
1.25+
H0.03+
H0.01+
S, 12.06
C3H7
0.75+
CH3
0.39+
H0.21+
S, 11.52
C4H10
0.95+
H0.48+
S, 9.22
C4H10
1.10+
H0.34+
S, 9.31
C4H10
1.07+
H0.40+
S, 8.79
C4H10
1.05+
H0.32+
IP 10
P, –
C4H11
1.25+
S, 2.62
C4H10
0.84+
H0.42+
S, 7.11
C4H10
0.98+
H0.37+
S, 10.23
C4H10
1.01+
H0.38+
S, 15.07
C3H7
0.81+
CH3
0.27+
H0.35+
S, 21.51
C3H7
0.74+
CH3
0.25+
H0.41+
S, 29.04
C3H7
0.65+
CH3
0.34+
H0.40+
IP 11
P, –
C4H11
1.26+
A, –
C4H10
1.31+
H0.04+
A, –
C4H9
1.16+
H2
0.14+
A, –
C4H9
1.03+
H2
0.19+
S, 12.45
C4H9
0.97+
H0.15+
H0.29+
S, 11.03
C4H10
1.06+
H0.36+
S, 10.60
C4H10
1.02+
H0.43+
TABLE III. Combined outcome data for different C4H10 IPs (rows) and proton KEs (columns). Each cell shows the reaction
type (S: scattering, P: proton capture, A: abstraction), followed by the KE loss of the proton and the resulting fragment
products with their respective charges.
|
Low-energy proton impact dynamics on hydrocarbons: Dependence on kinetic energy and incident site Misa Viveiros,1 Roy Lau,1 Samuel S. Taylor,1, 2 Patrick Barron,1 Attila Czirj ́ak,3, 4 Cody Covington,5 and K ́alm ́an Varga1, ∗ 1 37235, USA 2Pritzker 60637, USA 3ELI ALPS, ELI-HU Non-Profit Ltd, Wolfgang Sandner utca 3., 6728 Szeged, Hungary 4 . krt. 84-86, 6720 Szeged, Hungary 5 37044, USA The dynamics of low-energy proton collisions with hydrocarbon molecules are investigated using real-time time-dependent density functional theory (TDDFT). Through systematic variation of proton kinetic energy and impact site on the molecular surface, the resulting scattering, proton capture, and bond dissociation pathways are analyzed. The simulations reveal a strong dependence of reaction outcomes on both incident energy and collision geometry, with the interplay between electronic and nuclear degrees of freedom highlighted as governing molecular fragmentation and reaction mechanisms. These findings provide insight into the fundamental processes underlying proton-hydrocarbon interactions relevant to radiation chemistry, ion-beam processing, astrochemical environments, and Coulomb explosion. I. INTRODUCTION Ion-molecule collisions are fundamental to a broad range of physical, chemical, and biological processes, spanning from radiation damage in organic matter [1-4] and ion-beam cancer therapy [5-8] to ion-beam-induced material modification [9-14] and astrochemical reaction pathways [15-17]. Among these diverse applications, particular attention has been devoted to the interaction of protons and ions with hydrocarbons and other biologically relevant molecules, owing to the ubiquity of C-H containing species in planetary atmospheres, the interstellar medium, and organic materials [18-25]. In such systems, both the kinetic energy of the incident proton and the collision geometry critically determine whether the interaction results in elastic scattering, chemical bond formation, or molecular fragmentation [25]. Research on proton-molecule collisions remains limited, with most existing calculations focusing primarily on the keV energy range [26-33]. Time-dependent density functional theory has been employed to model proton-DNA collisions at 4 keV energies, with the primary objective of elucidating DNA base pair dissociation mechanisms as a function of proton impact locations [23]. Ab initio molecular dynamics simulations were utilized to investigate proton collisions with deoxyribose, where 5-7 eV protons generated from Coulomb explosions of doubly ionized water molecules subsequently impact deoxyribose, resulting in ring opening [34]. This study was constrained to the adiabatic regime. The influence of collision sites on radiation dynamics in cytosine following proton bombardment in the 150-1000 eV energy range has been examined, along with investigations of protonwater scattering processes [25]. Additional studies have ∗ explored collisions between oxygen molecules (O2) with kinetic energies of 4, 6, or 10 eV and stationary target molecules (Mg2, SiH4, or CH4) to understand light emission phenomena during combustion processes [35]. Regarding proton-hydrocarbon collisions specifically, computational studies are particularly scarce. The proton collision dynamics on CH4 has been investigated [33, 36] using 30 eV protons and the TDDFT calculations demonstrated good agreement with experimental observations for fragment distribution patterns. Low-energy proton-molecule scattering exhibits fundamentally different behavior compared to high-energy collisions due to the critical role of specific interaction sites. During high-energy encounters, rapidly moving protons traverse the molecular structure while transferring only a small portion of their energy to electronic excitations, which subsequently cascade into nuclear motion [13, 24]. Conversely, low-energy collisions result in dramatic alterations to both the proton's kinetic energy and trajectory, making the reaction outcome highly sensitive to the precise impact location. These low-energy systems are characterized by complex interactions arising from multiple scattering centers, anisotropic molecular potential surfaces, and varied bonding environments. This complexity generates a diverse array of possible reaction channels, encompassing elastic scattering events, proton capture processes, and atomic abstraction reactions. Low-energy proton collisions are also highly relevant in the context of hydrocarbon Coulomb explosion. Under intense laser-molecule interactions, hydrocarbons undergo rapid ionization, leading to the production of a wide range of ionic fragments. The lightest and most abundant of these fragments are protons [37-45], which are typically expelled with kinetic energies ranging from a few to several tens of electronvolts [31, 41]. These protons can subsequently collide with nearby neutral or ionized hydrocarbons in the gas phase, initiating sec17 Sep 2025 2 ondary processes such as additional fragmentation, chemical rearrangements, and the formation of new molecular species-processes that occur alongside the primary light-matter interaction. Previous theoretical studies of Coulomb explosion have predominantly focused on the dynamics of isolated molecules, thereby neglecting the role of fragment-molecule collisions [37, 40, 43, 45]. By investigating proton-hydrocarbon collisions in this lowenergy regime, we seek to examine these secondary pathways and assess their potential contribution to the overall reaction dynamics. Time-dependent density functional theory [46, 47] provides a powerful first-principles framework for simulating nonequilibrium processes in real time, as it captures both the electronic response and the coupled nuclear dynamics [48-58]. By accounting for nonadiabatic effects, TDDFT has been successfully applied to describe Coulomb explosion [37, 40, 43, 45] as well as ion-molecule collisions [13, 14, 23-25, 33]. In this work, we employ real-time TDDFT to investigate proton collisions with representative hydrocarbons-C2H2, C3H8, and C4H10-across a range of proton kinetic energies (0.52-87.58 eV) and incident sites. This energy window lies firmly within the low-energy regime, and it corresponds closely to the kinetic energies of protons produced in Coulomb explosion [31, 41]. By systematically varying both the projectile energy and the collision geometry, we identify distinct regimes of scattering, proton capture, and atom abstraction, and quantify the dependence of these processes on the collision parameters. Our results provide new insight into proton-hydrocarbon interactions on femtosecond timescales, revealing systematic trends that can guide future experimental studies and support the development of more comprehensive models of radiation-induced chemistry in complex molecular systems. They also emphasize the important role of many-body effects in governing Coulomb explosion dynamics. II. COMPUTATIONAL METHOD The simulations were performed using TDDFT for modeling the electron dynamics on a real-space grid with real-time propagation [59], with the Kohn-Sham (KS) Hamiltonian of the following form ˆHKS(t) = -ħ2 2m∇2 + Vion(r, t) + VH[ρ](r, t) +VXC[ρ](r, t). (1) Here, ρ is the electron density, defined as the sum of the densities of all occupied orbitals: ρ(r, t) = ∞ X k=1 fk|ψk(r, t)|2, (2) where fk is the occupation number of the orbital ψk, which can take values 0, 1, or 2. Additionally, fk must satisfy the constraint P∞ k=1fk = N, where N is the total number of valence electrons in the system. Vion in eq. 1 is the external potential due to the ions, represented by employing norm-conserving pseudopotentials centered at each ion as given by Troullier and Martins [60]. VH is the Hartree potential, defined as VH(r, t) = Z ρ(r′, t) |r -r′| dr′, (3) and accounts for the electrostatic Coulomb interactions between electrons. The last term in eq. 1, VXC, is the exchange-correlation potential, which is approximated by the adiabatic local-density approximation (ALDA), obtained from a parameterization to a homogeneous electron gas by Perdew and Zunger [61]. At the beginning of the TDDFT calculations, the ground state of the system is prepared by performing a density-functional theory (DFT) calculation. With these initial conditions in place, we then proceed to propagate the KS orbitals, ψk(r, t) over time by using the timedependent KS equation, given as i∂ψk(r, t) ∂t = ˆHKS(t)ψk(r, t). (4) Eq. 4 was solved using the following time propagator ψk(r, t + δt) = exp -iδt ħ ˆHKS(t) ψk(r, t). (5) This operator is approximated using a fourth-degree Taylor expansion, given as ψk(r, t + δt) ≈ 4 X n=0 1 n! -iδt ħ ˆHKS(t) n ψk(r, t). (6) The operator is applied for N time steps until the final time, tfinal = N ·δt, is obtained. The Taylor propagation is conditionally stable, and the time step has to satisfy [59] δt < 1.87(∆x/π)2. For a typical grid spacing of ∆x = 0.3 ̊A this means that the timestep must be smaller than 1.4 as. In our simulations, a time step of δt = 1 attosecond (as) and a total propagation time of tfinal = 120 femtoseconds (fs) were used. In real-space TDDFT, the KS orbitals are represented at discrete points on a uniform rectangular grid in real space. The simulation accuracy is governed by the grid spacing. In our calculations, we employed a grid spacing of 0.3 ̊A and used 100 grid points along each of the x-, y-, and z-axes. To enforce boundary conditions, we set the KS orbitals to zero at the edges of the simulation cell. However, during proton impact events, the collision can impart sufficient energy to the electronic wavefunctions, potentially leading to ionization or the ejection of electronic density beyond the molecule. In such cases, unphysical reflections of the wavefunction from the cell boundaries can 3 occur, introducing artifacts into the simulation. To mitigate this issue, we implemented a complex absorbing potential (CAP) to dampen the wavefunction as it approaches the boundaries. The specific form of the CAP used in our simulations, as described by Manolopoulos [62], is given by: -iw(x) = -i ħ2 2m 2π ∆x 2 f(y), (7) where x1 is the start and x2 is the end of the absorbing region, ∆x = x2 -x1, c = 2.62 is a numerical constant, m is the electron's mass, and f(y) = 4 c2 1 (1 + y)2 + 1 (1 -y)2 -2 , y = x -x1 ∆x . (8) As the molecule becomes ionized during the simulation, electron density is driven towards the CAP. Additionally, any ejected fragments carry their associated electron density as they move towards the boundaries. When electron density reaches the CAP region, it is absorbed. Consequently, the total number of electrons, N(t) = Z V ρ(r, t) d3x, (9) where V is the volume of the simulation box, decreases relative to the initial electron number, N(0). We interpret N(0) -N(t) as the total number of electrons that have been ejected from the simulation box. Motion of the ions in the simulations were treated classically. Using the Ehrenfest theorem, the quantum forces on the ions due to the electrons are given by the derivatives of the expectation value of the total electronic energy with respect to the ionic positions. These forces are then fed into Newton's Second Law, giving Mi d2Ri dt2 = Nions X j̸=i ZiZj(Ri -Rj) |Ri -Rj|3 -∇Ri Z Vion(r, Ri)ρ(r, t) dr, (10) where Mi, Zi, and Ri are the mass, pseudocharge (valence), and position of the ith ion, respectively, and Nions is the total number of ions. Vion(r, Ri) is the pseudopotential representing the combined effect of the nucleus and core electrons, and it interacts with the electron density ρ(r, t) via Ehrenfest dynamics. This differential equation was time propagated using the Verlet algorithm at every time step δt. The target hydrocarbon molecule was placed such that the center of its equilibrium geometry coincided with the origin, with its longest molecular axis aligned along the x-axis. The incident proton was initially positioned 5.0 ̊A above the target, orthogonal to the molecule's largest cross-sectional area (see Fig. 1). At the chosen separation distance, the interaction between the proton FIG. 1. Impact point (IP) diagram for C2H2. The initial proton positions (shown in pink) corresponding to different IPs are placed 5.0 ̊A above the molecular plane. Distances are not drawn to scale. and molecule is negligible. Positioning the proton farther away would not alter the physical results but would require a larger simulation cell, leading to significantly higher computational demands and longer calculation times. Both the target and the proton were contained within a 29.7 ̊A × 29.7 ̊A × 29.7 ̊A simulation grid centered at the origin, providing sufficient space along the proton's trajectory to capture the full dynamics of the collision. The simulations were performed at zero temperature to remove thermal motion of the target atoms as a variable. The atomic nuclei were allowed to move from the forces experienced during the collision, since nuclear motion and energy redistribution into vibrational modes strongly influence fragmentation, bonding, and scattering outcomes. It should be emphasized that these simulations represent an idealized limit. In experimental conditions, target molecules possess finite temperature, undergo random motion, and can be struck at a wide distribution of incident angles and positions. Capturing such effects computationally would require an extensive sampling over molecular orientations, impact geometries, and thermal ensembles, greatly increasing the computational cost. Our approach therefore represents a balance: by focusing on equilibrium geometries, impact points of interest, and orthogonal incidence, we isolate and probe the most characteristic features of proton-hydrocarbon collisions, while recognizing that real experiments will exhibit additional statistical broadening and variability. In the following section, we present the results of proton collisions with three hydrocarbons of varying size: acetylene (C2H2), propane (C3H8), and butane (C4H10). For each molecule, impact points were chosen at chemically relevant sites, including C-C bonds, C atoms, C-H bonds, and H atoms. This selection provides a representative sampling of key collision sites while keeping the computational cost manageable. At each impact point, seven proton initial kinetic energies were considered: 0.52, 4.66, 12.96, 25.39, 41.98, 62.70, and 87.58 eV, motivated by the typical kinetic energies of protons pro4 duced in Coulomb explosion experiments [31, 41]. III. RESULTS A. Acetylene (C2H2) The selected impact points (IPs) for proton collisions with C2H2 are illustrated in Fig. 1. Due to molecular symmetry, only incident points located at or to the right of the molecular center were considered. Specifically, IP 1, IP 2, IP 3, and IP 4 correspond to the central C-C bond, the right C atom, the right C-H bond, and the terminal H atom of C2H2, respectively. The outcomes of proton impacts on C2H2 are summarized in Table I. Each row of the table corresponds to one of the four selected IPs (IP 1-4, labeled in the first column), while the subsequent columns present the results for different initial kinetic energies (KEs) of the proton projectile, ranging from 0.52 to 87.58 eV. In Table I, each cell represents the outcome of a specific simulation defined by a chosen IP and an initial proton KE. The first entry in each cell denotes the reaction type. Three distinct outcomes were observed under the present conditions: "S" (scattering), "P" (proton capture), and "A" (abstraction). "S" indicates cases where the proton was unable to form a bond and was instead reflected or scattered. Scattering events may result in molecular fragmentation as well. "P" corresponds to proton capture, in which the proton was retained by the molecule and formed a stable bond throughout the simulation window with no fragmentation occurring. "A" denotes abstraction events, where the proton induced molecular fragmentation or the dissociation of a single atom, and subsequently bonded with the molecule or one of its fragments. Following the reaction type, the proton's KE loss is reported. This value is only present for scattering events, since the proton retains some KE as it departs from the molecule. In the case of proton capture or abstraction, this entry is marked by "-", as the proton becomes bonded and effectively loses its KE. The subsequent lines within each cell list the final molecular states and fragments, specifying the reaction products and their corresponding charge states. Analyzing each cell in Table I provides information about the outcome of a given IP and proton KE. For example, consider the case of IP 2 with an initial proton KE of 41.98 eV. The reaction is classified as scattering (denoted by "S"), with the proton losing 13.00 eV of its KE. In addition, charge analysis shows that the proton captured 0.33 electrons from C2H2, as reflected in its final charge state of H0.67+ (reduced from the initial H1.00+). Meanwhile, the hydrocarbon target (C2H2) exhibits a final charge state of 0.51+, corresponding to a net loss of 0.51 electrons. This indicates that, during the scattering event, electron ejection further ionized C2H2: 0.33 electrons were transferred from the molecule to the proton, while an additional 0.18 electrons were emitted into the CAP (the meaning of fractional charges observed in molecular fragments will be explained in subsequent discussion). Thus, even in scattering events, rich dynamics emerge involving simultaneous electron capture, ionization, and the transfer of KE from the projectile into both nuclear and electronic degrees of freedom of the target molecule. In certain cases, scattering events are accompanied by molecular fragmentation. For example, in Table I, the simulation corresponding to IP 3 with an initial proton KE of 12.96 eV produces the fragments C2H0.30+, H0.44+, and H0.43+. In all scattering cases that yield multiple fragments, the final entry corresponds to the proton projectile (here, H0.43+), while the preceding H ion (H0.44+) originates from the target molecule. In this instance, the projectile proton dislodged an H atom from the C2H2, ionizing it in the process and simultaneously capturing electrons, which led to the charge states listed in the table. The proton lost 10.56 eV of KE during this interaction. Snapshots of the molecular trajectories and electron densities for this event are shown in Fig. 2. Between 7.5 and 10 fs, the proton approaches IP 3 (the right C-H bond) and wedges itself between the C and H atoms. The resulting close approach generates strong Coulomb repulsion, expelling the terminal H atom downward while deflecting the proton rightward (10-12 fs). Subsequently, mutual repulsion between the positively charged proton and ejected H ion further increases their separation (12-51 fs). By 51 fs, the system consists of a CH2 fragment and two separated H ions, consistent with the products identified in the table. Analyzing a case that results in proton capture is equally insightful. For instance, at IP 2 with an initial proton KE of 0.52 eV (Table I), the proton is captured by the molecule, forming a stable bond as reflected in the final molecular state C2H31.24+. The final charge state also indicates electron ejection. Since the C2H2 molecule begins neutral and the incident proton carries a charge of 1+, the expected charge of the protonated product would be 1+ if no electrons were lost. The observed charge of 1.24+ instead implies that an additional 0.24 electrons were emitted from the hydrocarbon, revealing that electron ejection accompanied the capture process. The final reaction pathway is illustrated for IP 3 at an initial proton KE of 25.39 eV, as shown in Fig. 3. In this trajectory, the proton approaches the C2H2 molecule between 5 and 7.5 fs, inserting itself between a C and H atom. Between 7.5 and 9.5 fs, the molecular framework elongates to accommodate the incoming proton, accompanied by strong interatomic repulsion. By 12 fs, the projectile transfers its KE, displacing the terminal H atom and ejecting it from the molecule, while remaining bound near the C2H fragment. From 30 to 120 fs, the proton establishes a stable bond within the hydrocarbon, yielding a C2H2 fragment. Throughout this process, the molecule exhibits pronounced rotational motion, indicating that a portion of the proton's initial KE is converted into rotation, which contributes to stabilizing the newly formed 5 Impact Point (IP) Proton Kinetic Energy (eV) 0.52 4.66 12.96 25.39 41.98 62.70 87.58 IP 1 S, 0.26 C2H2 1.01+ H0.25+ S, 3.13 C2H2 0.65+ H0.63+ S, 7.35 C2H2 0.63+ H0.71+ S, 13.47 C2H2 0.79+ H0.54+ S, 12.66 C2H2 0.87+ H0.44+ S, 8.73 C2H2 0.81+ H0.56+ S, 8.82 C2H2 0.90+ H0.44+ IP 2 P, - C2H3 1.24+ S, 2.21 C2H2 0.74+ H0.50+ S, 4.39 C2H2 0.78+ H0.42+ S, 8.08 C2H2 0.80+ H0.40+ S, 13.00 C2H2 0.51+ H0.67+ S, 19.90 C2H2 0.65+ H0.58+ S, 29.22 C2H2 0.79+ H0.53+ IP 3 P, - C2H3 1.22+ S, 3.30 C2H2 0.66+ H0.56+ S, 10.56 C2H0.30+ H0.44+ H0.43+ A, - C2H2 0.75+ H0.50+ S, 12.76 C2H0.30+ H0.39+ H0.42+ S, 7.87 C2H2 0.74+ H0.49+ S, 6.81 C2H2 0.82+ H0.41+ IP 4 P, - C2H3 1.20+ A, - C2H2 0.61+ H0.54+ A, - C2H2 0.49+ H0.58+ A, - C2H2 0.80+ H0.30+ A, - C2H2 0.55+ H0.58+ A, - C2H2 0.57+ H0.55+ A, - C2H2 0.73+ H0.38+ TABLE I. Combined outcome data for different C2H2 IPs (rows) and proton KEs (columns). Each cell indicates the reaction type (S: scattering, P: proton capture, A: abstraction), the KE loss of the proton, and the resulting fragment products with their respective charges. FIG. 2. Scattering dynamics of a proton incident on C2H2, directed toward IP 3 with an initial KE of 12.96 eV. The electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001. bond. As shown in Table I, the case analyzed in Fig. 3 is particularly notable because it represents the only instance at IP 3 that leads to abstraction. Adjusting the initial KE either higher or lower instead results in scattering. This behavior highlights the existence of a critical KE range necessary to inject the proton into the molecular system without it being scattered. At this energy, the KE is not so high that the proton simply passes through the C-H bond without being captured, nor is it so low that the proton approaches too close to the C and H nuclear cores and is repelled, as illustrated in Fig. 2. Instead, at approximately 25.39 eV, the proton can effectively bind to the hydrocarbon, displacing an H atom in the process. Interestingly, abstraction is particularly prevalent at IP 4, occurring in every calculation with initial KEs greater than or equal to 4.66 eV. In the case of 4.66 eV, the proton first fragments the molecule by detaching one of the hydrogens-observed as the H0.54+ fragment-and subsequently bonds with the remaining C2H fragment, forming the stable C2H20.61+ listed in Table I. This process effectively replaces the H atom originally bound to the hydrocarbon. The total charge of the resulting fragments is 1.15+, indicating that approximately 0.15 electrons were ejected from the system during the abstraction event. While individual calculations across various IPs and proton KEs provide valuable insights, broader trends emerge when comparing results as a function of IP and proton KE in Table I. For example, at IP 1 the outcome is always scattering, regardless of the proton KE. This is reasonable given that IP 1 lies at the center of the C-C bond (see Fig. 1). In hydrocarbons, hydrogen preferentially bonds in linear or pyramidal geometries at positions maximally separated from neighboring atoms. Here, however, the proton is introduced too close to both carbon nuclei, leaving insufficient space to form a bond and resulting in rapid ejection due to the strong Coulombic repulsion from the carbon cores. In contrast, at IP 4 the dominant process is abstraction whenever the incoming proton has an initial KE of or above 4.66 eV. In these cases, the proton replaces the 6 FIG. 3. Abstraction event dynamics of a proton incident on C2H2, directed toward IP 3 with an initial KE of 25.39 eV. The electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001. right H atom in the C2H2 structure (see Fig. 1). This high probability is consistent with geometry: IP 4 is located directly at the position of the right H atom. The proton arrives with sufficient KE to overcome the local potential barrier and approach the target nucleus. As it nears the H atom, Coulombic repulsion with the hydrocarbon nuclei decelerates the proton, transferring its KE to the target H atom and breaking its bond. The displaced H atom is ejected, while the proton slows enough to stabilize and occupy its position, thereby completing the abstraction process. This mechanism is consistent across all tested KEs of and above 4.66 eV at IP 4. Even at very high values, such as 87.58 eV, nearly all of the proton's KE is transferred to the target H atom, which is knocked out, while the proton itself comes to rest and bonds to the hydrocarbon. At the lowest tested KE (0.52 eV), the proton approaches slowly enough to allow the target H atom to shift and make space for the proton to bond. The relatively small KE can then be redistributed among nuclear and electronic degrees of freedom, consistent with the 0.20 electron ejection reported in Table I. In Table I, the reaction outcome is seen to depend primarily on the IP, as most rows display consistent behavior across all KEs, with only one or two exceptions per row. Nevertheless, KE still plays an important role. At sufficiently low KE (0.52 eV), when the proton does not strike the central C-C bond (IP 1), proton capture occurs at the remaining incident points (IP 2-4). In this regime, the low KE enables local atomic rearrangements that facilitate bond formation while limiting energy redistribution across the molecular degrees of freedom, including charge redistribution. B. Propane (C3H8) The selected IPs for proton collisions with C3H8 are illustrated in Fig. 4. Due to molecular symmetry, only incident points at and to the right of the molecular center are considered. IP 1 through IP 5 correspond, respectively, to the central C atom, the right C-C bond, the right C atom, the right C-H bond, and the terminal H atom of the C3H8 molecule. The results for C3H8, including the reaction type, proton KE loss, and fragment products across the various KEs and IPs, are summarized in Table II. The forFIG. 4. Impact point (IP) diagram for C3H8. mat and types of information presented-namely scattering, proton capture, and abstraction-are consistent with those analyzed for C2H2 in Sec. III A and Table I. As shown in Table II, the collision dynamics for C3H8 can produce particularly striking outcomes. Due to the molecule's larger size, proton-induced fragmentation can be extensive. For example, at IP 1 with an initial KE of 87.58 eV, the resulting fragments are CH20.29+, CH30.31+, CH30.47+, and H+0.32 (the proton projectile). Snapshots of the molecular breakup is illustrated in Fig. 5. The proton approaches the molecule at 2 fs and reaches its closest distance at 4 fs, after which it is rapidly repelled and scattered away by 6 fs. This energy transfer causes the C3H8 molecular geometry to bend and recoil, as indicated by the central C atom dipping between 6 fs and 10 fs. By 15 fs, the proton has fully separated from the molecule, leaving two CH3 fragments on either side, a C-H bond at the bottom, and an unbound hydrogen in the center. Subsequently, asymmetric charge distribution and Coulomb repulsion among the positively charged CH3 fragments drive hydrogen migration toward the CH fragment at the bottom, forming a CH2 fragment by 32 fs. The atoms remain bound but continue to repel one another due to their positive charge, as observed at 96 fs. This trajectory demonstrates how a single proton collision can produce multiple fragments, including hydrogen migration events. In the context of Coulomb explosion, the occurrence of secondary fragmentation (summarized in Table II) shows that some molecular products can result solely from proton-molecule collisions rather than 7 Impact Point (IP) Proton Kinetic Energy (eV) 0.52 4.66 12.96 25.39 41.98 62.70 87.58 IP 1 S, -0.14 C3H7 1.14+ H0.01+ H0.07+ S, 3.09 C3H7 1.12+ H0.10+ H0.07+ S, 6.29 C3H8 1.01+ H0.35+ S, 11.05 C3H8 1.05+ H0.33+ S, 16.42 C3H8 1.12+ H0.30+ S, 23.27 C3H7 1.12+ H0.03H0.32+ S, 30.73 CH2 0.29+ CH3 0.31+ CH3 0.47+ H0.32+ IP 2 A, - C3H7 1.17+ H2 0.05+ S, 4.54 C3H7 1.24+ H0.07+ H0.04+ S, 12.59 C2H5 0.70+ CH3 0.40+ H0.23+ S, 7.93 C3H8 0.92+ H0.47+ S, 7.04 C3H8 0.96+ H0.42+ S, 6.90 C3H8 0.91+ H0.42+ S, 6.22 C3H8 0.88+ H0.46+ IP 3 P, - C3H9 1.25+ S, 3.72 C3H7 1.23+ H0.11+ H0.05S, 6.58 C3H8 0.96+ H0.39+ S, 10.16 C3H8 0.95+ H0.38+ S, 16.15 C2H5 0.67+ CH3 0.35+ H0.36+ S, 23.13 C2H5 0.75+ CH3 0.30+ H0.34+ S, 30.34 C2H5 0.68+ CH2 0.43+ H0.09H0.34+ IP 4 P, - C3H9 1.25+ S, 4.43 C3H7 1.16+ H0.03+ H0.11+ A, - C3H7 1.07+ H2 0.22+ S, 17.22 C3H7 1.06+ H0.15+ H0.13+ S, 12.68 C3H7 1.00+ H0.12+ H0.23+ S, 10.16 C3H7 0.82+ H0.19+ H0.33+ S, 8.69 C3H7 1.06+ H0.03H0.33+ IP 5 P, - C3H9 1.22+ A, - C3H8 0.79+ H0.45+ A, - C3H8 0.86+ H0.36+ A, - C3H8 0.86+ H0.37+ A, - C3H8 0.88+ H0.39+ A, - C3H8 0.88+ H0.38+ A, - C3H8 0.72+ H0.46+ TABLE II. Combined outcome data for different C3H8 IPs (rows) and proton KEs (columns). Each cell shows the reaction type (S: scattering, P: proton capture, A: abstraction), followed by the KE loss of the proton and the resulting fragment products with their respective charges. from direct laser-induced ionization. Such products may also appear among the fragment distributions reported in Coulomb explosion experiments [37-45]. Another noteworthy case in Table II is the formation of an H2 molecule at IP 2, associated with an initial proton kinetic energy of 0.52 eV. The dynamics of this process are shown in Fig. 6. By 43 fs, the incident proton has approached the molecule, and at 47 fs it comes sufficiently close to transiently bond with the top-center H atom of C3H8. By 53 fs, the proton and the bonded H atom detach from the parent molecule and are repelled upward, away from the C3H7 fragment, reaching complete separation by 120 fs. This result demonstrates that proton collisions can induce the formation of H2, underscoring the complex dynamics of these interactions and the ability of protons to abstract molecular fragments. More broadly, it highlights the role of many-body effects in Coulomb explosion and suggests that a wider variety of fragmentation products may emerge through such mechanisms [37, 40]. Interestingly, in terms of reaction type, Table II shows a striking resemblance to Table I. In both molecules, IP 1 results exclusively in scattering outcomes. This is consistent with the geometry, as IP 1 corresponds to the center of the molecule-directly at C-C bonds or central C atoms (see Fig. 1 and Fig. 4). In this configuration, there is insufficient space for the proton to form a bond, and it is repelled by the carbon nuclear core. For the remaining incident points corresponding to the right C atom, the right C-H bond, and the right H atom (IP 2-4 in C2H2 and IP 3-5 in C3H8), proton capture occurs at the lowest KE of 0.52 eV. This strong correlation reflects the similarity in the collision geometry at these sites (see Fig. 1 and Fig. 4). At higher proton energies, the reaction patterns are also analogous between the two molecules. For example, the row of reaction types at IP 2 in C2H2 matches that of IP 3 in C3H8-proton capture at the lowest KE followed by scattering at higher KEs. Similarly, IP 3 in C2H2 and IP 4 in C3H8 exhibit scattering at all KEs except for a specific critical energy range, where abstraction occurs, as discussed in Sec. III A (25.39 eV and 12.96 eV for C2H2 and C3H8, respectively). Finally, the right-most incident point in both molecules (IP 4 for C2H2 and IP 5 for C3H8) consistently results in abstraction at all KEs above the lowest tested value. This highlights that striking the terminal hydrogen in a hydrocarbon leads to a high probability of abstraction, as it allows the proton to efficiently transfer nearly all its KE to the target H atom, with the residual energy redistributed throughout the hydrocarbon molecule. 8 FIG. 5. Scattering dynamics of a proton incident on C3H8, directed toward IP 1 with an initial KE of 87.58 eV. The electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001. C. Butane (C4H10) For the butane molecule, we selected the gauche conformation. This structural choice was made to increase diversity, as it offers a greater number of potential impact points and enables the investigation of a noncentrosymmetric molecular system. The selected IPs for C4H10 are shown in Fig. 7. As in the cases of C2H2 and C3H8 (discussed in Sec. III A and Sec. III B, respectively), the IPs were chosen to represent regions of interest on the molecular framework, specifically bond centers and atomic sites. In contrast to C2H2 and C3H8, however, C4H10 lacks left-right symmetry, requiring the selection of IPs on both sides of the molecule (see Fig. 7). The results of the simulations, including reaction type, KE loss, and fragmentation products, are summarized in Table III. The data highlight the dependence of the outcomes on both the location of the proton collision and the KE of the incoming projectile. At the highest tested KE of 87.58 eV, protons scatter regardless of the chosen IP. In contrast, at the lowest tested KE of 0.52 eV, proton capture is the most frequent outcome, occurring in seven cases across the various IPs. The specific IP also plays a significant role in determining the reaction pathway; for example, IP 5 and IP 9 consistently lead to scattering, independent of the proton projectile KE. The data also reveals diverse fragmentation pathways resulting from proton collisions with C4H10. For example, fragments such as C4H9, C3H7, C2H5, and CH3 are observed across different IPs and proton KEs. Many of these fragmentation events occur only under very specific conditions. A notable case arises at IP 3 with a proton KE of 12.96 eV, where the collision cleaves the molecule into CH3 and C3H7 fragments. At higher KEs, no fragmentation occurs at this IP, while at lower KEs, the molecule instead captures the proton. This demonstrates that such fragmentation events are uncommon and require narrowly defined conditions. The molecular dynamics of the IP 3, 12.96 eV case are illustrated in Fig. 8. As shown, the proton reaches the center of the left C-C bond at 10 fs. By 16 fs, the proton repels the positively charged carbon nuclei, leading to bond cleavage into CH3 and C3H7, while the proton itself is repelled and ejected from the molecular core. From 22 fs to 77 fs, the proton continues migrating away as the two fragments separate and undergo internal rearrangement, ultimately forming the distinct CH3 and C3H7 products. This trajectory highlights why the fragmentation and CH3 formation probability is low: successful breakup requires an optimal set of conditions in which the proton KE is sufficient to penetrate the C-C bond center without being displaced beforehand. At lower KEs (4.66 eV and below), the proton lacks sufficient energy to overcome the potential barrier and reach the C-C bond. In these cases, charge redistribution occurs over a longer timescale, leading to proton capture rather than bond cleavage. At higher KEs (25.39 eV and above), the proton traverses the system too quickly, 9 FIG. 6. Abstraction dynamics of a proton incident on C3H8, directed toward IP 1 with an initial KE of 0.52 eV. The electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001. FIG. 7. Impact point (IP) diagram for C4H10. transferring excess energy and scattering past the target bond rather than inducing fragmentation. Thus, fragmentation occurs only within a narrow KE window near 12.96 eV. This optimal energy range for cleaving terminal C-C bonds is consistent with the results obtained for propane (Sec. III B). Specifically, IP 3/9 in Table III and IP 2 in Table II correspond to terminal C-C bonds in C4H10 and C3H8, respectively. In both molecules, collisions at these sites with a KE of 12.96 eV lead to the separation of a terminal CH3 group from the parent molecule. These findings underscore the similarities across different alkanes and suggest that proton impact energies in this range are optimal for cleaving terminal C-C bonds. The energy required to cleave the central C-C bond (IP 6 in Fig. 7) is greater than that needed to break the terminal C-C bond discussed above. As shown in Table III at IP 6, proton KEs of 25.39 and 41.98 eV result in cleavage of the central bond, separating C4H10 into two equal C2H5 fragments. At both lower and higher KEs, however, this bond remains intact. In contrast to the case where CH3 and C3H7 fragments are produced only within a restricted KE range by directly striking the terminal C-C bond (IP 3/9), fragmentation is more robust when the terminal C atom is impacted directly, as in IP 10 (see Fig. 7). As shown in Table III, every proton KE at or above 41.98 eV for IP 10 results in C-C bond cleavage and the subsequent formation of CH3 and C3H7. In this case, the proton carries sufficient energy to collide with the terminal carbon nucleus and eject it from the molecular framework. An important aspect of the fragmentation results concerns the charge states of the resulting fragments. The 10 FIG. 8. Scattering dynamics of a proton incident on C4H10, directed toward IP 3 with an initial KE of 12.96 eV. The electron density isosurfaces are shown in purple at values of 0.5, 0.1, 0.001, and 0.0001. fractional charge states, obtained by integrating the electron density using TDDFT with the ALDA functional, can be interpreted as probabilities of discrete charge states. For instance, a fragment identified as CH30.28+ (see Table III for 12.96 eV at IP 3) corresponds to a 28% probability of being singly charged and a 72% probability of being neutral. These results therefore predict the formation of neutral as well as charged fragments, offering insight into their role in Coulomb explosion experiments. This is particularly significant because the neutral counterparts of odd-electron fragments correspond to radical species, which have been highlighted as a subject of interest in the Coulomb explosion of butane [40]. One could alternatively contend that the fractional charges observed in molecular fragments arise from computational limitations-specifically, the constrained simulation box size and finite simulation duration. Under this interpretation, the electron cloud disrupted by proton scattering would eventually redistribute to yield integer charges if given adequate time to equilibrate within a sufficiently expansive simulation domain. IV. SUMMARY We have used time-dependent density functional theory to investigate low-energy proton collisions with hydrocarbons of increasing complexity: acetylene (C2H2), propane (C3H8), and butane (C4H10). By systematically varying both the incident point (IP) and the initial proton kinetic energy (KE), we identified general trends in the reaction outcomes as well as molecule-specific fragmentation pathways. Across all systems, three principal reaction types were observed: scattering, proton capture, and abstraction. The outcome depends sensitively on both the proton KE and the collision geometry (See Tables I, II, and III for C2H2, C3H8, and C4H10 results, respectively). IPs at central C-C bonds and C atoms predominantly lead to scattering, while terminal H atoms consistently favor abstraction at KEs above a few electronvolts. At the lowest tested energy (0.52 eV), proton capture dominates at most IPs, where the slow approach enables charge redistribution and local atomic rearrangements that stabilize the projectile within the hydrocarbon framework. C-C bond cleavage was found to be uncommon and sensitive to initial conditions. A narrow KE window near 12.96 eV was identified as optimal for detaching a terminal CH3 group in both propane and butane when striking at the C-C bond. At lower energies, the proton is captured before reaching the bond, while at higher energies, it traverses too quickly and scatters. These results reveal that intermediate KEs are uniquely effective at inducing fragmentation and formation of smaller hydrocarbons as many different fragment products have been observed across various KEs and IPs such as C4H9, C3H7, C2H5, and CH3 stemming from the parent molecule of C4H10. Interestingly, the snapshots reveal complicated dynamics at play such as hydrogen migration and hydrogen abstraction leading to unique fragments in C3H8 such as H2, C2H5, CH2, and CH3. Such outcomes underscore the importance of including proton-molecule collisions when modeling fragment distributions and Coulomb explosion interactions. Charge-state analysis showed that fragments frequently carry fractional charges, which can be interpreted as probabilities of discrete charge states. This indicates that neutral fragments, including radical species such as CH3, can be produced alongside ionic products. Such radicals have been reported in experimental studies of hydrocarbon Coulomb explosion, and our results provide an additional possible microscopic mechanism for their formation [40]. Overall, our findings demonstrate that fragmentation dynamics in hydrocarbons under proton impact arise from a delicate interplay of projectile energy and molecular geometry. The consistent trends observed across acetylene, propane, and butane point toward general features of alkanes such that: C-C bonds are susceptible to breakup only within a narrow intermediate KE range, while terminal hydrogens favor abstraction across a broad range of energies. In the context of Coulomb explosion, these results highlight the role of secondary proton-molecule collisions in generating a wide variety of fragments beyond those directly produced by laserinduced ionization. More broadly, they contribute to a microscopic understanding of ion-molecule interactions relevant to radiation damage, ion-beam processing, and 11 astrochemical environments where low-energy proton collisions play a central role. Future work could involve experimental validation of these results as well as extending the calculations to larger molecules, to collisions involving other ionic projectiles, to finite-temperature conditions, and to trajectories in which the projectiles impinge at different angles. Such studies would further clarify the generality of the trends identified here and help establish connections to real experimental conditions. ACKNOWLEDGMENTS This work has been supported by the National Science Foundation (NSF) under Grants No. DMR-2217759 and No. NSF IRES 2245029. This work used ACES at TAMU through allocation PHYS240167 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296 [63]. The ELI ALPS project (GINOP-2.3.6-15-2015-00001) is supported by the European Union and co-financed by the European Regional Development Fund. [1] M. P. Carante, R. L. Ramos, and F. Ballarini, Radiation damage in biomolecules and cells 3.0, International Journal of Molecular Sciences 25, 10.3390/ijms25126368 (2024). [2] C. Shepard, D. C. Yost, and Y. Kanai, Electronic excitation response of dna to high-energy proton radiation in water, Phys. Rev. Lett. 130, 118401 (2023). [3] S. KC and R. Abolfath, Towards the ionizing radiation induced bond dissociation mechanism in oxygen, water, guanine and dna fragmentation: a density functional theory simulation, Scientific Reports 12, 19853 (2022). [4] A. Saha, M. Mecklenburg, A. Pattison, A. Brewster, J. A. Rodriguez, and P. Ercius, Mapping electron beam-induced radiolytic damage in molecular crystals, Microscopy and Microanalysis 30, ozae044.902 (2024), https://academic.oup.com/mam/articlepdf/30/Supplement 1/ozae044.902/58669529/ozae044.902.pdf. [5] G. Kraft, Tumortherapy with ion beams, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 454, 1 (2000), proc. of the 1st Int Symp. on Applications of Particle Detectors in Medicine, Biology and Astrophysics. [6] F. Tommasino and M. Durante, Proton radiobiology, Cancers 7, 353 (2015). [7] Z. Li, Q. Li, H. Tian, M. Wang, R. Lin, J. Bai, D. Wang, and M. Dong, Proton beam therapy for craniopharyngioma: a systematic review and meta-analysis, Radiation Oncology 19, 161 (2024). [8] C. Graeff, L. Volz, and M. Durante, Emerging technologies for cancer therapy using accelerated particles, Progress in Particle and Nuclear Physics 131, 104046 (2023). [9] L. Huang, H. Wu, G. Cai, S. Wu, D. Li, T. Jiang, B. Qiao, C. Jiang, and F. Ren, Recent progress in the application of ion beam technology in the modification and fabrication of nanostructured energy materials, ACS Nano 18, 2578 (2024). [10] N. Shabi, O. Girshevitz, D. Primetzhofer, M. Kaveh, and I. Shlimak, Dominant impact of ion velocity on defect formation in suspended graphene, Surfaces and Interfaces 58, 105872 (2025). [11] Y. Liu, Y. Deng, Y. Wang, L. Wang, T. Liu, Z. Gong, Z. Fan, H. Wei, Z. Su, W. Wei, Y. Wang, and Y. Dan, Surface self-assembly of gold nanoparticles on graphite driven by ion-irradiation-induced atomic defects, Applied Surface Science 704, 163442 (2025). [12] A. V. Krasheninnikov and F. Banhart, Engineering of nanostructured carbon materials with electron or ion beams, Nature Materials 6, 723 (2007). [13] S. Bubin, B. Wang, S. Pantelides, and K. Varga, Simulation of high-energy ion collisions with graphene fragments, Phys. Rev. B 85, 235435 (2012). [14] Z. Wang, S.-S. Li, and L.-W. Wang, Efficient real-time time-dependent density functional theory method and its application to a collision of an ion with a 2d material, Phys. Rev. Lett. 114, 063004 (2015). [15] J. H. Westlake, J. H. Waite Jr., N. Carrasco, M. Richard, and T. Cravens, The role of ion-molecule reactions in the growth of heavy ions in titan's ionosphere, Journal of Geophysical Research: Space Physics 119, 5951 (2014), https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/2014JA020 [16] Mancini, Luca, Valen ̧ca Ferreira de Aragao, Emilia, Pirani, Fernando, Rosi, Marzio, Faginas-Lago, Noelia, Richardson, Vincent, Martini, Luca Matteo, Podio, Linda, Lippi, Manuela, Codella, Claudio, and Ascenzi, Daniela, Destruction of interstellar methyl cyanide (ch3cn) via collisions with he+ ions, Astrophysics and Astronomy 691, A83 (2024). [17] W. Cui and E. Herbst, Exploring the role of ion-molecule reactions on interstellar icy grain surfaces, ACS Earth and Space Chemistry 8, 2218 (2024). [18] H. J. L ̈udde, M. Horbatsch, and T. Kirchner, Protonimpact-induced electron emission from biologically relevant molecules studied with a screened independent atom model, Journal of Physics B: Atomic, Molecular and Optical Physics 52, 195203 (2019). [19] H. Luna, A. L. F. de Barros, J. A. Wyer, S. W. J. Scully, J. Lecointre, P. M. Y. Garcia, G. M. Sigaud, A. C. F. Santos, V. Senthil, M. B. Shah, C. J. Latimer, and E. C. Montenegro, Water-molecule dissociation by proton and hydrogen impact, Phys. Rev. A 75, 042711 (2007). [20] X. Guan and K. Bartschat, Complete breakup of the helium atom by proton and antiproton impact, Phys. Rev. 12 Lett. 103, 213201 (2009). [21] I. B. Abdurakhmanov, S. U. Alladustov, J. J. Bailey, A. S. Kadyrov, and I. Bray, Proton scattering from excited states of atomic hydrogen, Plasma Physics and Controlled Fusion 60, 095009 (2018). [22] A. C. K. Leung and T. Kirchner, Proton impact on ground and excited states of atomic hydrogen, The European Physical Journal D 73, 246 (2019). [23] R. Seraide, M. A. Bernal, G. Brunetto, U. de Giovannini, and A. Rubio, Tddft-based study on the proton-dna collision, The Journal of Physical Chemistry B 121, 7276 (2017). [24] C. Covington, K. Hartig, A. Russakoff, R. Kulpins, and K. Varga, Time-dependent density-functional-theory investigation of the collisions of protons and α particles with uracil and adenine, Phys. Rev. A 95, 052701 (2017). [25] X. Wang, Z.-P. Wang, F.-S. Zhang, and C.-Y. Qian, Collision site effect on the radiation dynamics of cytosine induced by proton, Chinese Physics B 31, 063401 (2022). [26] J. M. Sanders and S. L. Varghese, Electron capture from hydrocarbon molecules by proton projectiles in the 60-120 kev energy range, AIP Conference Proceedings 475, 70 (1999), https://pubs.aip.org/aip/acp/articlepdf/475/1/70/12099216/70 1 online.pdf. [27] J. M. Sanders, S. L. Varghese, C. H. Fleming, and G. A. Soosai, Electron capture by protons and electron loss from hydrogen atoms in collisions with hydrocarbon and hydrogen molecules in the 60-120 kev energy range, Journal of Physics B: Atomic, Molecular and Optical Physics 36, 3835 (2003). [28] H. Baumgart, W. Arnold, J. G ̈unzl, E. Huttel, A. Hofmann, N. Kniest, E. Pfaff, G. Reiter, S. Tharraketta, and G. Clausnitzer, Proton and helium stopping cross sections in gaseous hydrocarbon compounds, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 5, 1 (1984). [29] H. J. L ̈udde, A. Jorge, M. Horbatsch, and T. Kirchner, Net electron capture in collisions of multiply charged projectiles with biologically relevant molecules, Atoms 8, 10.3390/atoms8030059 (2020). [30] H. J. L ̈udde, M. Horbatsch, and T. Kirchner, Electron capture and ionization cross-section calculations for proton collisions with methane and the dna and rna nucleobases, The European Physical Journal D 73, 249 (2019). [31] S. Roither, X. Xie, D. Kartashov, L. Zhang, M. Sch ̈offler, H. Xu, A. Iwasaki, T. Okino, K. Yamanouchi, A. Baltuska, and M. Kitzler, High energy proton ejection from hydrocarbon molecules driven by highly efficient field ionization, Phys. Rev. Lett. 106, 163001 (2011). [32] Y. Zhang, B. Wang, L. Wei, T. Jiang, W. Yu, R. Hutton, Y. Zou, L. Chen, and B. Wei, Proton migration in hydrocarbons induced by slow highly charged ion impact, The Journal of Chemical Physics 150, 204303 (2019), https://pubs.aip.org/aip/jcp/articlepdf/doi/10.1063/1.5088690/9651872/204303 1 online.pdf. [33] E. E. Quashie, B. C. Saha, X. Andrade, and A. A. Correa, Self-interaction effects on charge-transfer collisions, Phys. Rev. A 95, 042517 (2017). [34] M.-A. Herv ́e du Penhoat, N. R. Moraga, M.-P. Gaigeot, R. Vuilleumier, I. Tavernelli, and M.-F. Politis, Proton Collision on Deoxyribose Originating from Doubly Ionized Water Molecule Dissociation, The Journal of Physical Chemistry A 122, 5311 (2018), publisher: American Chemical Society. [35] Y. Miyamoto and T. Komatsu, Molecular-scale modeling of light emission by combustion: An ab initio study, Scientific Reports 9, 12707 (2019). [36] C.-Z. Gao, J. Wang, F. Wang, and F.-S. Zhang, Theoretical study on collision dynamics of H+ + CH4 at low energies, The Journal of Chemical Physics 140, 054308 (2014). [37] S. S. Taylor, K. Varga, K. Mogyor ́osi, V. Chik ́an, and C. Covington, Fragmentation in coulomb explosion of hydrocarbon molecules, Phys. Rev. A 111, 013109 (2025). [38] I. Last and J. Jortner, Nuclear fusion driven by coulomb explosion of methane clusters, The Journal of Physical Chemistry A 106, 10877 (2002). [39] C. Cornaggia, D. Normand, and J. Morellec, Role of the molecular electronic configuration in the coulomb fragmentation of n2, c2h2 and c2h4 in an intense laser field, Journal of Physics B: Atomic, Molecular and Optical Physics 25, L415 (1992). [40] K. Mogyor ́osi, B. T ́oth, K. S ́arosi, B. Gilicze, J. Csontos, T. Somosk ̋oi, S. T ́oth, P. Prasannan Geetha, L. T ́oth, S. S. Taylor, N. Skoufis, L. Barron, K. Varga, C. Covington, and V. Chik ́an, Ch(a) radical formation in coulomb explosion from butane seeded plasma generated with chirp-controlled ultrashort laser pulses, ACS Omega 10, 25285 (2025). [41] A. N. Markevitch, D. A. Romanov, S. M. Smith, and R. J. Levis, Coulomb explosion of large polyatomic molecules assisted by nonadiabatic charge localization, Phys. Rev. Lett. 92, 063001 (2004). [42] H. Hasegawa, A. Matsuda, T. Morishita, L. B. Madsen, F. Jensen, O. I. Tolstikhin, and A. Hishikawa, Dissociative ionization and coulomb explosion of ch4 in twocolor asymmetric intense laser fields, Phys. Chem. Chem. Phys. 25, 25408 (2023). [43] S. S. Taylor, C. Covington, and K. Varga, Quantum effects of coulomb explosion simulations revealed by timedependent density-functional theory, Phys. Rev. A 111, 033109 (2025). [44] S. Palaniyappan, R. Mitchell, N. Ekanayake, A. M. Watts, S. L. White, R. Sauer, L. E. Howard, M. Videtto, C. Mancuso, S. J. Wells, T. Stanev, B. L. Wen, M. F. Decamp, and B. C. Walker, Ionization of ethane, butane, and octane in strong laser fields, Phys. Rev. A 82, 043433 (2010). [45] E. Livshits and R. Baer, Time-dependent densityfunctional studies of the d2 coulomb explosion, The Journal of Physical Chemistry A 110, 8443 (2006). [46] E. Runge and E. K. Gross, Density-functional theory for time-dependent systems, Physical review letters 52, 997 (1984). [47] C. A. Ullrich, Time-Dependent Density-Functional Theory: Concepts and Applications (Oxford University Press, USA, 2012). [48] A. Kononov, A. J. White, K. A. Nichols, S. X. Hu, and A. D. Baczewski, Reproducibility of real-time timedependent density functional theory calculations of electronic stopping power in warm dense matter, Physics of Plasmas 31, 043904 (2024). [49] J. Hekele, Y. Yao, Y. Kanai, V. Blum, and P. Kratzer, All-electron real-time and imaginary-time time-dependent density functional theory within a numeric atom-centered basis function framework, The Journal of Chemical Physics 155, 154801 (2021). [50] C. J. Herring and M. M. Montemore, Recent Advances in 13 Real-Time Time-Dependent Density Functional Theory Simulations of Plasmonic Nanostructures and Plasmonic Photocatalysis, ACS Nanoscience Au 3, 269 (2023), publisher: American Chemical Society. [51] J. Xu, T. E. Carney, R. Zhou, C. Shepard, and Y. Kanai, Real-Time Time-Dependent Density Functional Theory for Simulating Nonequilibrium Electron Dynamics, Journal of the American Chemical Society 146, 5011 (2024), publisher: American Chemical Society. [52] Z. Wang, Efficient Real-Time Time-Dependent Density Functional Theory Method and its Application to a Collision of an Ion with a 2D Material, Physical Review Letters 114, 10.1103/PhysRevLett.114.063004 (2015). [53] T. P. Rossi, M. Kuisma, M. J. Puska, R. M. Nieminen, and P. Erhart, Kohn-Sham Decomposition in Real-Time Time-Dependent Density-Functional Theory: An Efficient Tool for Analyzing Plasmonic Excitations, Journal of Chemical Theory and Computation 13, 4779 (2017), publisher: American Chemical Society. [54] M. Noda, S. A. Sato, Y. Hirokawa, M. Uemoto, T. Takeuchi, S. Yamada, A. Yamada, Y. Shinohara, M. Yamaguchi, K. Iida, I. Floss, T. Otobe, K.-M. Lee, K. Ishimura, T. Boku, G. F. Bertsch, K. Nobusada, and K. Yabana, SALMON: Scalable Ab-initio Light-Matter simulator for Optics and Nanoscience, Computer Physics Communications 235, 356 (2019). [55] X. Andrade, J. Alberdi-Rodriguez, D. A. Strubbe, M. J. T. Oliveira, F. Nogueira, A. Castro, J. Muguerza, A. Arruabarrena, S. G. Louie, A. Aspuru-Guzik, A. Rubio, and M. A. L. Marques, Time-dependent densityfunctional theory in massively parallel computer architectures: the octopus project, Journal of Physics: Condensed Matter 24, 233202 (2012). [56] J. I. Fuks, S. E. B. Nielsen, M. Ruggenthaler, and N. T. Maitra, Time-dependent density functional theory beyond kohn-sham slater determinants, Phys. Chem. Chem. Phys. 18, 20976 (2016). [57] D. B. Dar, Reformulation of Time-Dependent Density Functional Theory for Nonperturbative Dynamics: The Rabi Oscillation Problem Resolved, Physical Review Letters 133, 10.1103/PhysRevLett.133.096401 (2024). [58] X. Li, N. Govind, C. Isborn, A. E. I. DePrince, and K. Lopata, Real-time time-dependent electronic structure theory, Chemical Reviews 120, 9951 (2020), pMID: 32813506. [59] K. Varga and J. A. Driscoll, in Computational Nanoscience: Applications for Molecules, Clusters, and Solids (Cambridge University Press, 2011). [60] N. Troullier and J. L. Martins, Efficient pseudopotentials for plane-wave calculations, Phys. Rev. B 43, 1993 (1991). [61] J. P. Perdew and A. Zunger, Self-interaction correction to density-functional approximations for many-electron systems, Phys. Rev. B 23, 5048 (1981). [62] D. E. Manolopoulos, Derivation and reflection properties of a transmission-free absorbing potential, The Journal of Chemical Physics 117, 9552 (2002), https://pubs.aip.org/aip/jcp/articlepdf/117/21/9552/19225128/9552 1 online.pdf. [63] T. J. Boerner, S. Deems, T. R. Furlani, S. L. Knuth, and J. Towns, Access: Advancing innovation: Nsf's advanced cyberinfrastructure coordination ecosystem: Services & support, in Practice and Experience in Advanced Research Computing 2023: Computing for the Common Good, PEARC '23 (Association for Computing Machinery, New York, NY, USA, 2023) pp. 173-176. 14 Impact Point (IP) Proton Kinetic Energy (eV) 0.52 4.66 12.96 25.39 41.98 62.70 87.58 IP 1 P, - C4H11 1.23+ P, - C4H11 1.25+ S, 6.93 C4H10 0.93+ H0.36+ S, 16.05 C4H9 1.00+ H0.11+ H0.22+ S, 39.45 C4H9 0.97+ H0.25+ H0.14+ A, - C4H10 1.05+ H0.33+ S, 47.61 C4H9 0.91+ H0.27+ H0.15+ IP 2 A, - C4H9 1.19+ H2 0.01+ A, - C4H9 1.24+ H2 0.06+ A, - C4H10 1.14+ H0.20+ S, 23.98 C4H9 0.92+ H0.12+ H0.40+ S, 37.12 C4H9 0.92+ H0.44+ H0.12+ S, 41.13 C4H9 0.93+ H0.47+ H0.05+ S, 44.35 C4H9 0.72+ H0.42+ H0.31+ IP 3 P, - C4H11 1.22+ P, - C4H11 1.33+ S, 11.76 CH3 0.28+ C3H7 0.84+ H0.24+ S, 12.85 C4H10 1.07+ H1.00+ S, 9.30 C4H10 1.01+ H0.38+ S, 8.80 C4H10 1.02+ H0.41+ S, 9.19 C4H10 1.06+ H0.42+ IP 4 S, 0.37 C4H9 1.13+ H0.02H0.09+ S, 3.58 C4H10 0.78+ H0.49+ A, - C4H9 1.27+ H2 0.06+ S, 12.71 C4H9 1.15+ H0.08+ H0.14+ S, 11.07 C4H9 1.01+ H0.08+ H0.27+ S, 10.31 C4H10 0.94+ H0.47+ S, 9.67 C4H10 0.94+ H0.44+ IP 5 S, 0.36 C4H9 1.16+ H0.03+ H0.05+ S, 3.44 C4H10 0.86+ H0.45+ S, 4.95 C4H10 0.88+ H0.39+ S, 9.81 C4H10 0.86+ H0.48+ S, 16.03 C4H10 1.01+ H0.45+ S, 22.33 C4H10 1.06+ H0.39+ S, 30.32 C2H5 0.51+ C2H5 0.51+ H0.45+ IP 6 P, - C4H11 1.24+ A, - C4H9 1.27+ H2 0.05+ S, 7.10 C4H10 0.94+ H0.43+ S, 16.53 C2H5 0.59+ C2H5 0.62+ H0.23+ S, 14.30 C2H5 0.70+ C2H5 0.51+ H0.29+ S, 11.04 C4H10 1.08+ H0.37+ S, 10.18 C4H10 1.08+ H0.38+ IP 7 P, - C4H11 1.25+ A, - C4H9 1.24+ H2 0.05+ A, - C4H10 0.90+ H0.48+ S, 21.65 C4H9 1.19+ H0.09+ H0.07+ S, 34.43 C4H9 1.03+ H0.30+ H0.07+ S, 49.35 C4H9 0.78+ H0.42+ H0.37+ S, 55.73 C4H9 0.74+ H0.36+ H0.37+ IP 8 P, - C4H11 1.23+ S, 4.31 C4H9 1.24+ H0.01+ H0.07+ A, - C4H9 1.16+ H2 0.17+ S, 22.67 C4H9 1.14+ H0.19+ H0.08+ S, 23.23 C4H9 1.13+ H0.16+ H0.07+ S, 16.89 C4H9 1.09+ H0.08+ H0.14+ S, 14.65 C4H9 1.04+ H0.09+ H0.25+ IP 9 S, 0.44 C4H9 1.18+ H0.08+ H0.01S, 4.52 C4H9 1.25+ H0.03+ H0.01+ S, 12.06 C3H7 0.75+ CH3 0.39+ H0.21+ S, 11.52 C4H10 0.95+ H0.48+ S, 9.22 C4H10 1.10+ H0.34+ S, 9.31 C4H10 1.07+ H0.40+ S, 8.79 C4H10 1.05+ H0.32+ IP 10 P, - C4H11 1.25+ S, 2.62 C4H10 0.84+ H0.42+ S, 7.11 C4H10 0.98+ H0.37+ S, 10.23 C4H10 1.01+ H0.38+ S, 15.07 C3H7 0.81+ CH3 0.27+ H0.35+ S, 21.51 C3H7 0.74+ CH3 0.25+ H0.41+ S, 29.04 C3H7 0.65+ CH3 0.34+ H0.40+ IP 11 P, - C4H11 1.26+ A, - C4H10 1.31+ H0.04+ A, - C4H9 1.16+ H2 0.14+ A, - C4H9 1.03+ H2 0.19+ S, 12.45 C4H9 0.97+ H0.15+ H0.29+ S, 11.03 C4H10 1.06+ H0.36+ S, 10.60 C4H10 1.02+ H0.43+ TABLE III. Combined outcome data for different C4H10 IPs (rows) and proton KEs (columns). Each cell shows the reaction type (S: scattering, P: proton capture, A: abstraction), followed by the KE loss of the proton and the resulting fragment products with their respective charges.
|
2509.16251
|
R-Net: A Reliable and Resource-Efficient CNN for Colorectal Cancer
Detection with XAI Integration
Rokonozzaman Ayon
4IR Research Cell
Department of Computer Science and Engineering
Daffodil International University, Dhaka, Bangladesh
ayon15-4393@diu.edu.bd
Dr. Md Taimur Ahad
School of Mathematics, Physics and Computing
Toowoomba Campus
University of Southern Queensland
MdTaimur.Ahad@unisq.edu.au
Bo Song
Lecturer (Industrial Automation)
School of Engineering
University of Southern Queensland
Bo.Song@usq.edu.au
Yan Li
Professor (Computing)
School of Mathematics, Physics and Computing
Toowoomba Campus
University of Southern Queensland
Yan.Li@usq.edu.au
Abstract:
State-of-the-art (SOTA) Convolutional Neural Networks (CNNs) are criticized for their
extensive computational power, long training times, and large datasets. To overcome this
limitation, we propose a reasonable network (R-Net), a lightweight CNN only to detect and
classify colorectal cancer (CRC) using the Enteroscope Biopsy Histopathological
Hematoxylin and Eosin Image Dataset (EBHI). Furthermore, six SOTA CNNs, including
Multipath-based CNNs (DenseNet121, ResNet50), Depth-based CNNs (InceptionV3), width-
based multi-connection CNNs (Xception), depth-wise separable convolutions (MobileNetV2),
spatial exploitation-based CNNs (VGG16), Transfer learning, and two ensemble models are
also tested on the same dataset. The ensemble models are a multipath-depth-width
combination
(DenseNet121-InceptionV3-Xception)
and
a
multipath-depth-spatial
combination (ResNet18-InceptionV3-VGG16). However, the proposed R-Net lightweight
achieved 99.37% accuracy, outperforming MobileNet (95.83%) and ResNet50 (96.94%).
Most importantly, to understand the decision-making of R-Net, Explainable AI such as SHAP,
LIME, and Grad-CAM are integrated to visualize which parts of the EBHI image contribute
to the detection and classification process of R-Net. The main novelty of this research lies in
building a reliable, lightweight CNN R-Net that requires fewer computing resources yet
maintains strong prediction results. SOTA CNNs, transfer learning, and ensemble models
also extend our knowledge on CRC classification and detection. XAI functionality and the
impact of pixel intensity on correct and incorrect classification images are also some
novelties in CRC detection and classification.
Keywords: Colorectal cancer detection, convolutional neural network, CNN, lightweight
CNN, ensemble model, SHAP, LIME, GRAD-CAM, XAI.
1. Introduction
The worldwide incidence of colorectal cancer (CRC) remains high because yearly diagnosis
rates reach 1.8 million new cases (Cowan et al., 2022). As the second most death-causing
cancer worldwide, CRC also stands among the top three cancer types (Alzahrani et al., 2021;
Zhou et al., 2020). CRC caused 930,000 deaths in 2020 while generating 881,000 fatalities in
2018, according to Fadlallah et al. (2024) and deSouza et al. (2024). Researchers continue to
study new treatment methods to improve CRC survival outcomes while reducing mortality
rates. CRC stands as a significant worldwide public health problem because of its high death
rate (Fadlallah et al., 2024). The standard diagnosis of CRC relies on histopathological
examination; however, this method remains time-consuming, subjective, and requires
complex analysis (Sharkas & Attallah, 2024).
Figure 01: Visual View of CRC (amended from American Cancer Society, 2025)
In the detection and classification of CRC, Deep Learning (DL) has dramatically improved
by making decisions more accurately, minimizing human error, reducing mistakes, and
allowing real-time analysis in clinics (Iqbal et al., 2021). From pathology slides, the
classification of cancerous tissues using DL models is very effective to achieve better
accuracy in cancerous and non-cancerous tissues (Xu et al., 2020). Moreover, DL models
have successfully classified polyps in colonoscopy images, and this is an essential step in
CRC prevention and screening (Tanwar et al., 2022). The Deep Convolutional Neural
Network (DCNN) architecture has performed very well in finding CRC-related features in
histopathological images (Sarwinda et al., 2021; Shi et al., 2023). By combining the DL
models with histopathological analysis, it helped to reduce the workload for diagnosis
(doctors) and improve the diagnostic precision (Luo et al., 2023; Attallah et al., 2022).
Among DL techniques, several approaches have been applied, including State-of-the-Art
(SOTA) CNNs, modified CNNs, Transfer Learning (TL), and Ensemble models (Iqbal et al.,
2022).
Despite DL’s ability to detect and classify CRC, the latest DL technologies can visualize the
CRC (Thakur et al., 2020). The visualizing techniques in DL systems operate under the
terminology known as Explainable Artificial Intelligence (XAI). Local Interpretable Model-
Agnostic
Explanations
(LIME),
Shapley
Additive
Explanations
(SHAP),
and
Gradient‐weighted Class Activation Mapping (Grad‐CAM) are popular XAI techniques
found in the literature (Auzine et al., 2024). The combination of SHAP and LIME techniques
produces both global and local explanations that work on validation and test sets for DL
models (Alabi et al., 2023). Lime is basically used by a model named “black-box” to generate
explanations for each prediction (Aldughayfiq et al., 2023), and SHAP is known as the most
used model-agnostic method. It can be applied to any Machine Learning (ML) model to
explain any model prediction. On the other hand, Grand-CAM is used to generate heatmaps
for specific target classes by feature maps from the last layer of a CNN (Ghasemi et al.,
2024). Mainly, it is used to show which parts of an image, or an input, are most important to
predict by a model.
Several studies have been conducted on the CRC dataset for classification and identification;
however, they have some limitations that can affect the reliability and applicability of these
studies in a practical scenario. Numerous studies have supported the concept of lightweight
CNNs and proposed novel CNN architectures that operate with fewer layers. Again, very few
authors applied XAI techniques in cancer cell classifications. Some of the limitations and
study gaps are mentioned below:
1. The use of imperfect ground truth data, inadequate clinical data, and insufficient
training data for validation might lead to poor performance and might question the
reliability of the performance of the trained model. It could perform worse in the
scenario where variability in the dataset is observed (Echle et al., 2020; Xu et al.,
2020).
2. The limitation of Cancer and Non-Cancer classification in detecting CRC represents a
significant disadvantage since it stands in the way of model development for
alternative rare malignant conditions during colonoscopy, like sarcoma, melanoma,
gastrointestinal stromal tumor, lymphoma, and carcinoid tumor (Zhou et al., 2020;
Karthikeyan et al., 2024).
3. Only using a small number of images and limited patient data could create a
significant issue in the trained model, and that would be called overfitting, which
possibly affects the reliability of predictions and restricts model applicability (Ho et
al., 2022).
4. Sarwinda et al. (2021) compared ResNet18 and ResNet50 models that might fall in
comparison with other studies that have worked with many other exceptional
architectures and showed their comparison analysis.
5. Prezja et al. (2024); Khazaee and Rezaee (2023) applied the ensemble strategy and
proposed their ensemble model, but as the ensemble approach combines several
SOTA CNNs, it has many layers. So, this issue was not addressed, and a lightweight
CNN architecture was not proposed.
6. The custom model was presented by Akilandeswari et al. (2022) and Attallah et al.
(2022) in their studies, but in the applicability sector, it could not reach the standard of
lightweight CNN models.
7. None of the studies mentioned above have included XAI techniques like LIME,
SHAP, and GRAD-CAM in their studies to explain the reasoning of their
classification and misclassification.
To fill those gaps, this paper’s contributions are:
1. In order to avoid overfitting issues and improve the model performance in classifying
CRC, data augmentation, balancing, and some preprocessing techniques were
implemented.
2. Six SOTA CNN architectures (InceptionV3, VGG16, MobileNet, ResNet50,
DenseNet121, Xception) were applied on the dataset, and their performance
comparison was presented.
3. Six pre-trained models with Transfer Learning were also applied to monitor the
behavior of the accuracies and to show the comparison with the previous accuracies.
4. Two ensemble approaches based on three strategies, Soft-voting, Hard-voting, and
Rank-based ensemble, were applied to increase the performance of the classification
of CRC cells.
5. One of the main contributions was to build a lightweight Reliable Net (R-Net) model
with a few layers, which achieved 99.37% accuracy with fewer resources.
6. XAI techniques like LIME and SHAP, along with Grad-CAM, were applied to make
the work understandable and to make proper reasoning of classification and
misclassification in CRC classification.
2. Literature Review
The literature covers a wide range of techniques, including colonoscopy and histological
image analysis, reflecting the diversity of strategies being investigated for CRC treatment.
Ahad et al., 2023; Mustofa et al., 2023; Bhowmik et al., 2024, Ahmed & Ahad, 2023;
Emon & Ahad, 2024; Mustofa et al., 2024; Preanto et al., 2024; Mamun et al., 2023;
Ahad et al., 2024; Mustofa et al., 2025; Preanto et al., 2024; Ahmed et al., 2023; Ahad et
al., 2024; Bhowmik et al., 2023; Ahad et al., 2024; Mamun et al., 2025; Ahad et al., 2024,
Ahad et al., 2024; Islam et al., 2024; Ahad et al., 2024; Ahmed et al., 2024; Ahad et al.,
2024; Preanto et al., 2024; Preanto et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Ahad
et al., 2024; Mamun et al., 2024; Emon et al., 2023; Emon et al., 2023; Biplob et al., 2023;
Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al.,
2023). This represents a critical advancement in cervical cancer diagnosis, enhancing the
effectiveness of screening and improving early detection rates. This review highlights the
transformative impact of DL on the detection and treatment of CRC by consolidating findings
from several research studies.
SOTA CNNs proved the capabilities of cancerous cell detection and identification. However,
it also has some limitations, for example, various layers such as concatenation, convolutional,
pooling, and fully connected layers, as well as hyperparameters. Due to their large memory
footprint and high computational demands (Moolchandani et al., 2021), DCNN architectures
have been criticized by researchers (Thakur et al., 2023). Another researcher (Fu et al., 2024)
also supported the previous researcher (Thakur et al., 2023) that CNNs have some limitations.
When implementing CNNs in applied artificial intelligence, they have encountered
challenges due to the complex architecture of the CNN network. To achieve a comparatively
good result from DCNNs, the authors suggested lightweight CNN architectures with fewer
layers, which can accurately identify the disease in cancerous images. Inspired by the success
of lightweight CNNs, several studies (Thakur et al., 2023; Sun et al., 2024; Verma et al.,
2024) have developed lightweight CNNs. Moreover, other methodologies are also applied in
cancer cell detection, such as transfer learning and ensemble models (Xue et al., 2020).
For CRC detection, Transfer Learning (TL) is highly impactful when a large medical dataset
is unavailable, as it utilizes pre-trained models for image classification. For example, to
classify any cancer cell, such as colorectal polyps and cancerous tissues, TL models have
been fine-tuned using CNN pre-trained models on diverse images (Alabdulqader et al., 2024;
Raju et al., 2022). Techniques such as those applied in TL, partial layer freezing, and full
fine-tuning help the models to focus on medical-specific features. For this reason, it
continually strives to achieve better results than the pre-trained model (Davila et al., 2024;
Morid et al., 2021). TL also improves the classification of benign tissues and
adenocarcinomas in histopathology images (Morid et al., 2021). The Ensemble method
functions as a classifier in cancer cell detection with improved accuracy than individual
classification systems. It serves as an important method in many detection processes (Nanglia
et al., 2022). The Ensemble model receives multiple model results from weights representing
VGG19, DenseNet201, and MobileNetV2, along with other models, to enable a slow-learner
algorithm for final prediction (Chugh et al., 2021). Basically, the final output is based on the
cross-validated result and reduces a loss function to find optimal weights for the base model.
The remarkable performance of CRCNet has highlighted the possibility for massive DL in
clinical diagnostics. This new CRC detection model was trained on a big dataset of over
464,000 pictures (Zhou et al., 2020). Using H&E-stained slides, a DL model was created to
detect MSI and MMR in colorectal tumours. This model provides a faster and more
affordable option to conventional molecular diagnosis (Echle et al., 2020). Effective MSI and
dMMR screening for CRC was made possible by the model, which achieved an AUROC of
0.92 during development and 0.96 during validation with colour normalisation after being
trained on 8,836 tumours from various nations. According to Sarwinda et al. (2021), the
ResNet architecture was utilized to detect CRC in histology images and differentiate between
benign and malignant instances. ResNet-50 had the best accuracy (above 80%), sensitivity
(above 87%), and specificity (above 83%) across a range of test sets, demonstrating the
validity of DL in the classification of CRC. To predict patient outcomes from digital tissue
samples, recurrent and CNNs were combined to show that DL can extract prognostic
information from tissue morphology. This approach performed better than human evaluations
with an AUC of 0.69 and a hazard ratio of 2.3 (Bychkov et al., 2018). In a semi-supervised
learning (SSL) technique, 13,111 whole-slide photos from 8,803 patients were utilized to
train the mean teacher model (Yu et al., 2021). This approach achieved expert-level accuracy
with fewer labelled patches (AUC 0.974), performing similarly to standard supervised
learning in patient-level diagnosis. CRCNet, designed to enhance the identification of CRC
during colonoscopy, was trained using 464,105 pictures from over 12,000 patients. It
outperformed endoscopists in terms of recall rates and AUPRC values (Zhou et al., 2020).
This means that CRCNet may be applied to improve CRC screening. With a high sensitivity
(97.4%) and an AUC of 0.917 (Ho et al., 2022), an AI model using a Faster R-CNN
architecture was created for the identification of high-risk characteristics in CRC biopsies,
suggesting that it could help pathologists. An automated deep-learning approach was
developed to classify colorectal polyps in histological images with 93% accuracy across five
polyp types, aiding pathologists in estimating risk and enhancing screening (Korbar et al.,
2022). A two-phase approach for lesion segmentation and classification was used in the
development of a computer-aided diagnostic system for early CRC diagnosis utilizing CT
images (Akilandeswari et al., 2022). The DCNN and residual architecture-based system
showed excellent accuracy of 98.82%. In order to diagnose CRC, a two-stage classification
method was suggested for separating pertinent frames from colonoscopy recordings. These
frames were then classified as either neoplastic or non-neoplastic (Sharma et al., 2020). The
study concluded that VGG19 was the most effective DL model for diagnosing colonoscopy
images after assessing several models. To predict MSI-H in CRC using full-slide images, a
DL method that integrated tumor detection and MSI classification was created (Lou et al.,
2022).
3. Description of experimental method
This section provides the details of the hardware setup, description of the used dataset, the R-
net model development, and how it will be trained for this research.
3.1 Hardware Specification
The experiments were conducted on a Precision 7680 Workstation equipped with a 13th-
generation Intel Core i9-13950HX vPro processor and Windows 11 Pro operating system.
The workstation came equipped with an NVIDIA RTX 3500 Ada Generation GPU and
featured 32GB of powerful DDR5 RAM, along with a 1 TB Solid State Drive (SSD). Python
V3.9 was chosen as the programming language because it worked with TensorFlow-GPU,
SHAP, and LIME.
3.2 Dataset Description
Research data was obtained from an available public repository. Six classes composed the
dataset containing Adenocarcinoma, High-Grade IN, Low-Grade IN, Normal, Polyp, and
Serrated Adenoma, totaling 2228 images. A microscope instrument collected photos, which
the study team stored in RGB format as PNG files. The figure displays different images that
belong to each class category for this study in Figure 2.
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Figure 2: Samples of images used in the study.
3.3 Image Augmentation
In this step, the downloaded images were manually reviewed to identify class imbalances
and potential issues with background color, brightness, and contrast. It was observed that
the images in each class were imbalanced, a common challenge in applications such as
cancer diagnosis (Johnson & Khoshgoftaar, 2019). The application of GANs helps balance
the dataset by generating authentic synthetic data instances that target the minority class. A
total of 4800 images were generated to balance the dataset using this technique, and the
dataset distribution is shown in Figure 3.
Figure 3: Distribution of images.
4. Results of Experiments
The researchers performed four different experiments to analyze CRC images. The analysis
begins with R-Net, followed by DenseNet121, along with ResNet50, InceptionV3, Xception,
MobileNetV2, and VGG16 SOTA CNNs. Then, transfer learning applied to these six SOTA
CNNs. Finally, the research evaluated two ensemble models using DenseNet121 with
InceptionV3-Xception and ResNet18 with InceptionV3-VGG16. The following section
demonstrates experimental methodologies along with their achieved outcomes.
4.1 Experiment 1: R-Net development process and results
The following section explains the R-Net model together with its training process and
evaluation results:
4.1.1 R-Net Model Development
The R-Net model was developed to find CRC cells along with their classifications within
CRC image data. A set of two convolutional layers that use 64 filters begins the process
before max-pooling occurs. The network adds two 128-filter convolutional layers which are
followed by max-pooling before advancing to three 256-filter convolutional layers spread
across more max-pooling layers. The depth of the feature map expands through successive
max-pooling layers following three 512-filter convolutional layers that automatically reduce
spatial dimensions. Feature extraction ends with flattening the output before passing it to two
dense layers that have a fully connected structure.
Figure 4: R-Net model visualisation
The initial dense layer contains 256 neurons, and the second dense layer has 64 neurons. Six
neurons form the last layer structure because it contains every possible target class for
classification purposes. The model contains 15,911,430 trainable parameters for extracting
image features, enabling its use in multiclass image classification.
4.1.2 Training setup
The R-Net model was trained using 5-fold cross-validation. The training process extended for
over 45 epochs using batches of size 32 in each fold. The Adam optimization algorithm was
used for model optimization. The weights of the model became adjustable through gradients
calculated by the algorithm, which resulted in enhanced classification performance and
accuracy of CRC modalities—the selected loss function employed sparse categorical cross-
entropy for data calculation. The training parameters can be found in Table 1.
Table 1: Hyperparameters of training
Parameter
Value
Epochs
50
Batch size
16
Image size
(64, 64, 3)
Learning rate
1.00e-04
K_folds
5
Optimizer
Adam(learning_rate=LEARNING_RATE)
Loss Function
SparseCategoricalCrossentropy(from_logits=True)
Early Stopping
EarlyStopping(monitor='val_accuracy',
patience=10,
verbose=1,
restore_best_weights=True)
Learning Rate Scheduler
LearningRateScheduler(
lambda epoch: LEARNING_RATE * 0.1 **
(epoch // 10)
Callbacks
[early_stopping,
lr_scheduler]
4.1.3 Results of the R-Net
Table 2 presents the evaluation of the R-Net model performance for each fold, which includes
precision, recall, F1-score, and support. All five evaluations produced high-accuracy results
through the model while maintaining low mistake rates. The model in Fold 1 achieved near-
perfect precision but misclassified some instances. However, the classification performance
in Fold 2 proved exceptional because the model achieved outstanding results without any
significant misclassifications. Folds 3, 4, and 5 displayed outstanding performance, as
misclassification was minimal. The model demonstrates exceptional capabilities in
classifying different categories with high precision, thanks to its outstanding error reduction
capabilities.
Table 2: Fold-Wise Classification report with epochs of R-Net
Fold
Class
Precision
Recall
F1-Score Support
1
Adenocarcinoma
1
0.99
0.99
138
HighGradeIN
0.99
1
1
120
LowGradeIN
0.99
0.99
0.99
133
Normal
1
1
1
119
Polyp
0.99
1
1
125
SerratedAdenoma
1
0.99
1
133
2
Adenocarcinoma
0.98
0.99
0.99
132
HighGradeIN
0.99
0.99
0.99
137
LowGradeIN
0.98
0.97
0.98
128
Normal
1
0.99
1
131
Polyp
0.98
0.98
0.98
119
SerratedAdenoma
0.99
1
1
121
3
Adenocarcinoma
1
0.97
0.98
128
HighGradeIN
0.98
1
0.99
137
LowGradeIN
0.99
0.98
0.99
126
Normal
0.99
1
1
131
Polyp
0.98
0.99
0.98
124
SerratedAdenoma
1
0.99
1
122
4
Adenocarcinoma
1
0.99
1
130
HighGradeIN
0.98
1
0.99
124
LowGradeIN
1
0.98
0.99
123
Normal
1
1
1
126
Polyp
0.98
0.99
0.98
122
SerratedAdenoma
1
1
1
143
5
Adenocarcinoma
0.98
0.98
0.98
112
HighGradeIN
0.98
1
0.99
122
LowGradeIN
0.98
0.97
0.97
130
Normal
1
1
1
133
Polyp
0.99
0.98
0.98
150
SerratedAdenoma
1
1
1
121
The model's precision level becomes noticeable through visualization in the confusion matrix
presented in Figure 5. In Fold 1, the model performed well with very minimal
misclassification errors, and Fold 2 achieved better accuracy by successfully separating
challenging class samples. The model reached exceptional levels of classification in Folds 3
through 5 because errors reached virtually zero during these runs. The model successfully
differentiates multiple categories, exhibiting high precision and recall, which proves its
effectiveness in minimizing misclassification errors and ensuring reliability.
Figure 5: Fold-wise confusion matrix of R-Net.
Figure 6: Fold-Wise ROC-Curve of R-Net.
A comparison of different ROC curves appears in Figure 6 based on five-fold cross-
validation techniques. The evaluation methods of Fold 1 show both high accuracy in correctly
identifying cases and correctly misclassified cases while minimizing false positive errors. The
updated Fold 2 enhances the model with a denser curve design. The performance accuracy of
the model becomes evident through near-perfect ROC curves that appear in Folds 3 through
5. The reliability and robustness of the R-Net model are evident in these achieved results in
multi-class classification.
Figure 7 displays training and validation accuracy and training and validation loss data for the
five R-Net model folds. The plot illustrates both training accuracy and validation accuracy
rates, alongside a decreasing training loss and sustained low validation loss, which signifies
outstanding model performance and avoids overfitting occurrences. The model demonstrates
reliable performance and strong generalization capabilities across all folds, as indicated by
these results.
Figure 7: Training and Validation across all folds.
Additional performance evaluation of the R-Net model generated a confusion matrix based
on the test dataset. Figure 8 presents the model classification results, which show accurate
predictions among different categories. The model demonstrated robustness and reliability
through the match between the classification matrix and its high-accuracy assessment. A
small sample misidentification demonstrates the model's efficient generalization
effectiveness, which qualifies it for practical utilization.
Figure 8: Confusion Matrix of the R-Net Model on the Test Dataset
The performance metrics for the R-Net model appear in Table 3 for training, validation, and
the test datasets.
Table 3: Model Performance Metrics on Training, Validation, and Test Sets
Dataset
Loss
Accuracy
Train
1.06e-07
100%
Validation
0.012
99.79%
Test
0.0275
99.37%
The model learned the training data efficiently, achieving a minimal training loss value of
1.06e-7 (0.00000010617) along with perfect accuracy of 100%. The validation loss shows
minimal value (0.0120) alongside a high accuracy of 99.79% which indicates strong
generalization to new data points. The test data shows both low loss at 0.0275 and accuracy at
99.37% which strengthens the reliability and robustness of the model. The model
demonstrates excellent potential for practical application, as it achieves high classification
accuracy while minimizing errors.
R-Net delivers outstanding performance in its combined classification metrics by achieving a
99% accuracy across every category. The model achieves equal and highly effective results
across precision, recall, and F1-scores, with values of approximately 0.99. The model
achieves strong performance based on specific classification results, which show that the
Normal, Serrated Adenoma, and Polyp categories achieve scores close to 1.00. The
evaluation of Adenocarcinoma, High-Grade IN, and Low-Grade IN cancerous cell types
through R-Net shows that the model achieves precision, recall, and F1 scores between 0.97
and 0.99. The model demonstrates reliability through its consistent performance, as shown by
macro and weighted averages across the entire dataset.
The confusion matrix exhibits the R-Net’s high accuracy. Among 4800 images, R-Net
correctly detected and classified 4797 images. However, only 3 LowGradeIN instances were
misclassified as 2 Adenocarcinoma and 1 HighGradeIN, while Normal and Polyp showed no
misclassification errors. The confusion matrix confirms that the model successfully reduces
false positive results.
The fold-wise accuracy and loss curves deliver details about how well the model performs
throughout training with validation intervals. The model's learning process exhibits
significant improvement in accuracy, ultimately achieving high levels of model ability in
terms of generalization and learning. Successful model operation maintains stable learning
efficiency, but error reduction appears firmer because of decreasing loss curve values. The R-
Net model achieved high accuracy along with minimal loss values during training sessions.
4.2 Experiment 2: SOTA CNN performance on CRC
An examination of six SOTA CNNs is conducted according to the taxonomy system of Khan
et al. (2020). The models organized into five categories include Depth-based CNNs
(InceptionV3), Multi-Path-based CNNs (ResNet50, DenseNet121), and Width-based Multi-
Connection CNNs (Xception), Depthwise Separable Convolutions (MobileNet), along with
Spatial Exploitation-based CNNs (VGG16). The selection of these models was done to
provide deep insight into which CNN produces the best results for CRC image classification.
The performance of CNNs during this task was evaluated using three different optimizers:
Adam, Adamax, and RMSprop.
Table 4: Performance comparison of SOTA CNNs and optimizers
Model
Epoch
(Adam)
Accuracy
(Adam)
Epoch
(Adamax)
Accuracy
(Adamax)
Epoch
(RMSprop)
Accuracy
(RMSprop)
DenseNet121 29
0.9993
29
0.9965
23
1
ResNet50
28
0.9694
27
0.9722
29
0.9917
InceptionV3 22
0.9944
37
0.9625
24
0.9993
Xception
14
1
14
1
14
1
MobileNet
50
0.9583
46
0.8465
32
0.9257
VGG16
11
0.1667
30
0.9674
24
0.9944
The performance metrics of six state-of-the-art CNNs are presented in Table 4 under Adam, Adamax,
and RMSprop optimizer conditions. The highest accuracy of 99% is achieved by DenseNet121 using
Adam (0.9993) and RMSprop (1.0000), which required 29 epochs alongside 23 epochs. The
performance of ResNet50 demonstrates reduced accuracy when Adam reaches 0.9694, while Adamax
reaches 0.9722, and RMSprop generates 0.9917 accuracy, yet optimizers fail to affect accuracy
measurements noticeably. The combination of RMSprop (0.9993) and Adam (0.9944) delivers
superior results than Adamax (0.9625) for InceptionV3. Within 14 epochs, the Xception model
achieves a perfect accuracy score of 1.0000 when combined with any of the optimizers. The accuracy
of MobileNet is lower when using Adamax (0.8465), while both Adam (0.9583) and RMSprop
(0.9257) outperform it in terms of accuracy. Adam produces subpar results for the VGG16 model,
with an accuracy rating of only 0.1667, whereas Adamax achieves 0.9674 accuracy and RMSprop
delivers 0.9944 accuracy. The results demonstrate that Adam and RMSprop provide stable
performance metrics, although Adamax shows reduced operational efficiency, mainly when used in
MobileNet and InceptionV3 models. The accuracy-epoch relationship between optimizers and models
appears in Figure 9 through a scatter plot representation.
Figure 9: The training and validation accuracy of the original CNNs.
Figure 10: Confusion matrices of Transfer learning
The confusion matrix in Figure 10 shows that Xception and DenseNet121 have lower Type 1
and Type 2 errors, which indicates better classification performance. The performance of
InceptionV3 falls within the middle range, as it generates both acceptable false and true
negatives and positives. The misclassification rates of VGG16 remain high, especially in
LowGrade-IN and Polyp, which makes this model less dependable than the others. The
misclassification rates for ResNet50 are significantly elevated in classifications of
Adenocarcinoma and Low-Grade IN, resulting in numerous incorrect positive and negative
predictions. MobileNet demonstrates the worst capability among these models based on its
classification errors in HighGrade-IN, LowGrade-IN, and Polyp.
4.3 Experiment 3: SOTA CNNs Transfer Learning Performance on
CRC
An evaluation of six SOTA CNNs transfer learning architectures occurs during this
experiment. The experiment relies on the same classification system from Experiment 1,
using SOTA CNN for equal model comparison throughout. Evaluation takes place through an
analysis of training, validation, and testing accuracy from distinct optimization methods.
Table 5: Performance comparison of SOTA CNNs; Transfer learning and optimizers
Model
Epoch
(Adam)
Accuracy
(Adam)
Epoch
(Adamax)
Accuracy
(Adamax)
Epoch
(RMSprop)
Accuracy
(RMSprop)
DenseNet121
5
0.7962
5
0.7456
5
0.7883
ResNet50
5
0.5677
5
0.4385
5
0.4985
InceptionV3
5
0.8835
5
0.7244
5
0.861
Xception
5
0.63
5
0.4125
5
0.5902
MobileNet
5
0.8769
5
0.704
5
0.8077
VGG16
5
0.7942
5
0.6342
5
0.7025
The accuracy data demonstrate that DenseNet121 achieves high accuracy rates using multiple
optimizers, with Adam reaching 79.62%, Adamax reaching 74.56%, and RMSProp reaching
78.83% (Table 5 and Figure 11). The results show that MobileNet maintained consistent
performance, achieving 87.69% accuracy with Adam, 70.40% accuracy with Adamax, and
80.77% accuracy with RMSProp. The accuracy results show that ResNet50 achieves the
worst performance in transfer learning, with 56.77% accuracy from Adam, 43.85% from
Adamax, and 49.85% from RMSProp, even though all optimizers required five epochs, which
indicates optimization difficulties.
Figure 11: Transfer learning CNN models, accuracy, and epochs
Figure 12: Confusion matrices of Transfer learning.
The classification results in Figure 12 show that Type 1 (false positives) and Type 2 (false
negatives) errors are minimal for both DenseNet121 and MobileNetV2, thus demonstrating
superior classification accuracy. The performance of Xception alongside InceptionV3 shows
moderate errors of false positives and negatives. VGG16 exhibits failures in classification
accuracy due to its higher error rates, particularly in distinguishing between the Normal and
Polyp categories, thus demonstrating inferior reliability compared to competing models.
ResNet50 demonstrates the highest degree of misclassification because it generates numerous
false positive and false negative results that show its operational effectiveness.
4.4 Experiment 3: Ensemble Model Performance
Two ensemble models were built in this research project, with one model using DenseNet-
Inception-Xception for Multi-path Depth-Width based and the other implementing ResNet50-
InceptionV3-VGG16 for Multi-path Depth-Spatial based.
4.4.1 Ensemble 1
The comparison of multi-path-depth-width CNN ensemble (DenseNet–Inception-Xception)
through Soft Voting, Hard Voting, and Rank-Based methods is presented in Table 6.
Table 6: Performance of Multi-path-depth-Width based (DIX) Ensemble.
Multi path-depth-Width based
Ensemble method
Accuracy Precision Recall
F1 Score
DenseNet – Inception-Xception
Soft Voting
98.02%
98.12%
98.02% 98.07%
Rank-Based
57.19%
65.71%
57.19% 59.43%
Hard Voting
95.52%
96.13%
95.52% 95.53%
Soft Voting Ensemble reaches 98.02% accuracy through minimal errors (Type I = 16, Type II
= 16), which appear in the Soft Voting confusion matrix. The detection method demonstrates
98.12% precision, along with 98.02% recall, and achieves an F1 Score of 98.07%, which
positions it as the most effective tool for CRC diagnosis, according to Table 6. The Hard
Voting Ensemble demonstrates an accuracy of 95.52% and both Type I and Type II errors
amount to 43 each. The Hard Voting confusion matrix shows 96.13% precision alongside
95.52% recall. The Rank-Based Ensemble generates only 57.19% accuracy alongside a high
number of errors (Type I = 182 and Type II = 182) through the Rank-Based confusion matrix
that creates poor performance with 65.71% precision and 57.19% recall using Table 6.
Figure 12: Confusion matrices of Transfer learning.
4.4.2 Ensemble 2
The comparison of the Multi-Path-Depth-Spatial based ensemble (ResNet50-InceptionV3-
VGG16) through Soft Voting, Hard Voting, and Rank-Based methods is presented in Table 7.
Table 7: Performance of Multi-path-depth-Spatial based (RIV) Ensemble.
Multi-Path-Depth-Spatial based
Ensemble method
Accuracy Precision Recall
F1 Score
ResNet50-InceptionV3-VGG16
Soft Voting
98.23%
98.25%
98.23% 98.23%
Rank-Based
89.69%
89.83%
89.69% 89.71%
Hard Voting
88.85%
89.15%
88.85% 88.79%
The Soft Voting Ensemble reaches a 98.23% accuracy rate and shows 11 Type I errors as
well as 16 Type II errors in its Soft Voting confusion matrix. According to Table 7, the Soft
Voting Ensemble represents the best method for CRC detection since it reaches 98.25%
precision, 98.23% recall, and an F1 Score of 98.23%. The Hard Voting Ensemble reaches
88.85% accuracy, but it produces more errors than Soft Voting (Type I = 91, Type II = 107),
as the Hard Voting confusion matrix reveals, with 89.15% precision and 88.85% recall. The
Rank-Based Ensemble method demonstrates an 89.69% accuracy rate while producing errors
consisting of 72 Type I and 84 Type II classifications according to the analysis of the Rank-
Based confusion matrix (Table 7).
Figure 13: Confusion matrices of Transfer learning.
4.5 Result of the XAI
To provide both local and global explanations for the proposed R-Net model, this study
employed three XAI methods: LIME, SHAP, and Grad-CAM.
4.5.1 LIME Visualization
As such, LIME is used for generating an explanation for each prediction. Figure 14 below
shows that each explanation of LIME includes two features: Green and Red. “Green regions”
are areas that positively contributed to the predicted class, and “Red regions” are areas that
negatively contributed to the predicted class.
Figure 14: LIME explainer of CRC images
4.5.2 SHAP Explanation
Figure 15 shows each explanation of SHAP, including two features: Red and Blue. “Red
regions” are areas that positively contributed to the predicted class, and “Blue regions” are
areas that negatively contributed to the predicted class, along with a mean SHAP value for
each prediction.
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Adenocarcinoma
HighGradeIN
LowGradeIN
Normal
Polyp
SerratedAdenoma
Figure 15: SHAP explainer of CRC images
4.5.3 Grad-CAM Analysis of Correctly/Incorrectly CRC Modalities
The Grad-CAM tool reveals which parts of an image the R-Net classifies most for
predictions. Grad-CAM measures the connection between feature maps and class prediction
by calculating the gradients within the last CNN layer. The heatmap shows how important
each feature map is to the prediction results. The model employs gradient results to readjust
its feature maps before combining them into the visualization map. The research employs
Grad-CAM analysis to demonstrate which areas of the CRC image the model selected for
diagnostic purposes. For example, Figure 16 shows the misclassification of images.
Figure 16: Examples of misclassified images
Grad-CAM technology displays which CRC cases the model incorrectly identified by
showing where it focused its attention in Figure 17. The heatmaps show locations that
received the highest attention through bright red and lesser attention with blue coloring. The
visualizations demonstrate that the model examined areas of the colorectal that did not
contain cancer. However, it made incorrect diagnoses, such as classifying cancerous cells
(Polyps) without cancer as LowGradeIn.
Original image: Polyp; Predicted: LowGradeIn
Original image: Polyp; Predicted: LowGradeIn
Figure 17: GRAD-CAM view of misclassified images
Figure 18 presents Grad-CAM highlighting the medical model's accurate recognition of
tumor areas on correctly classified images. While we observe this behavior in specific non-
tumor cases, our model tends to direct irrelevant attention to parts of the image, suggesting
future development is needed in feature identification processes.
Original image: Polyp; Predicted: LowGradeIn
Original image: Polyp; Predicted: LowGradeIn
Figure 18: Grad-CAM view of correctly classified images
4.5.4 Pixel Intensity of Correctly/Incorrectly CRC Modalities
The pixel intensity displays feature attribution by showing which parts of the input images
helped to make the CNN’s decisions. This interactive graph displays aspects of a neural
network model that misidentified a polyp as a low-grade cancerous cell (Figure 19). While
Figure 20 shows the actual prediction of CRC. The panel displays the CRC images of the true
cancer regions. The right panel demonstrates the Gradient × Input method that shows the
model determination through pixel intensity values based on part contributions to the image.
Parts of the image with intense colors had a significant impact on the prediction. The model
selected non-cancerous areas for importance during classification because its Gradient ×
Input analysis did not match the cancerous regions. The mismatch between the model's
learned features and the key characteristics of cancerous cells indicates that the model cannot
provide an accurate assessment.
Figure 19: Grad-CAM view of pixel intensity for misclassified images
Figure 20: Grad-CAM view of pixel intensity for correctly classified images
5. Discussion
This research proposes R-Net (Reliable Net) as a compact CNN that effectively detects CRC
through fewer layers while utilizing small learnable parameters. The proposed R-Net
achieves a 99.37% accuracy in CRC image classification compared to SOTA CNNs, transfer
learning, and ensemble models. Notably, the stable performance of Adam remains consistent,
while RMSprop shows variable results between models, which proves that optimizer
selection should consider the specific convergence patterns of the designed framework.
A state-of-the-art evaluation utilized six CNN architectural models, including InceptionV3,
VGG16, MobileNet, ResNet50, DenseNet121, and Xception, for CRC classification. Both
Xception and DenseNet121 yield equivalent diagnostic outcomes, but they require
significantly more computational power than R-Net. Through transfer learning methods,
InceptionV3 and MobileNet executed better classification than alternative approaches,
whereas they did not match R-Net's efficiency levels. The combined utilization of multiple
CNNs through ensemble models achieved high classification results, and Soft Voting proved
to be the most effective method. However, R-Net proves to be a practical choice over
ensemble methods because it delivers effective results while requiring less computational
power and shorter training duration.
R-Net prediction validation and interpretability were improved by the implementation of XAI
techniques, which included LIME SHAP together with Grad-CAM. Visualizations from
Grad-CAM demonstrated that R-Net accurately detects cancerous regions, contributing to the
accuracy of its diagnostic decisions. The detailed explanations provided by LIME and SHAP
helped identify problematic predictions, thereby enhancing the trustworthiness of the model.
The performance evaluation of R-Net classification is presented in Figure 21, which
compares the accuracy between individual CNNs, transfer learning models, and ensemble
methods.
Figure 21: Accuracy comparison among individual CNN, transfer learning, and ensemble models.
The comparative performance results in Table 13 show that R-Net produces better outcomes
than XGBoost, Ensemble, Transfer Learning, and Interpretable Machine Learning Systems
by achieving 99.37% accuracy.
Table 13: Performance comparison of the proposed R-Net model with other models.
Authors
Model
No. of
Classes
Accuracy
(%)
Georgiou et al., (2024)
XGBoost, Ensemble
3
89.79%
Sirinukunwattana et al.,
(2022)
Consensus Molecular Subtype
Classification
4
92%
Neto et al. (2024).
Interpretable ML System
4
94.5%
Kassani et al., (2022)
Transfer Learning
4
95%
Yamashita et al. (2023).
DL for Microsatellite Instability
Prediction
3
97.3%
Elshamy et al., (2024)
Modified DNN Optimizer
3
98%
Proposed Model
R-Net
6
99.37%
The research establishes R-Net as a highly accurate and efficient model which can perform
CRC classification. R-Net proves suitable for medical use because of its robust combination
of user-friendly interpretability with high performance capabilities, along with low system
requirements. Future researchers will continue to develop the model further, as well as
enhance data augmentation methods, while conducting rigorous clinical assessments to
improve reliability in medical diagnostic contexts.
6. Conclusion and Future Work
The research investigates the effectiveness of DL models in CRC detection and classification
by conducting three primary experiments. The researchers applied six CNN models, VGG16,
ResNet50,
DenseNet121,
Xception,
InceptionV3,
and
MobileNet,
to
analyze
histopathological CRC images—secondly, the research utilized transfer learning techniques
to enhance model classification results. The proposed R-Net achieved superior accuracy and
efficiency while utilizing XAI techniques, including LIME, SHAP, and Grad-CAM. The R-
Net showed enhanced reliability as its XAI framework delivered valuable insights about
model prediction features and pixel intensity testing between correct and incorrect
classifications.
The research study offers valuable results, but it also has some limitations. Using secondary
datasets reduces the application range, which reveals the necessity of analyzing extensive,
varied datasets for analysis. A wider test of the model was necessary because its training
occurred exclusively through histopathological image analysis. Current research requires
further investigation to establish the impact of transfer learning on lightweight CNN models,
as the demonstrated results have not been promising. Medical expert confirmation serves as a
requirement for models to acquire a credibility status. The evaluation of various program
models is necessary to optimize their efficiency, performance, and adaptation capabilities
before adoption for practical use.
In conclusion, study results demonstrate that CNN technology proves highly efficient in
identifying CRC during screening examinations. The R-Net system achieves high accuracy in
medical image classification through its practical and lightweight structure, which protects
readability. Modern research must link healthcare professionals with advanced imaging
technology usage to enhance both DL CRC diagnosis detection methods and clinical
diagnostic capabilities.
References
Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2023). Enhancing Tea Leaf Disease Detection Through
Customized Vision Transformer and Hyperparameter Optimization. Available at SSRN,
4940688. https://doi.org/
Ahad, M. T., Bhowmik, A. C., Emon, Y. R., & Ahmed, F. (2024). A Customized Vision Transformer
for Accurate Detection and Classification of Java Plum Leaf Disease. Available at SSRN,
4829650. https://doi.org/2
Ahad, M. T., Emon, Y. R., & Mustofa, S. (2024). Data of history: An open-source and multiformat
wall image dataset of Panam city, a historical place. Data in Brief, 56, 110774.
https://doi.org/6
Ahad, M. T., Li, Y., Song, B., & Bhuiyan, T. (2023). Comparison of CNN-based deep learning
architectures for rice diseases classification. Artificial Intelligence in Agriculture, 9, 22-35.
https://doi.org/231
Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). End User Interface Design of
Mobile-Based Fish Disease Detection to Assist Fish Farmers. Available at SSRN, 4980536.
https://doi.org/
Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). Fishdoc: A Mobile-Based
Fish Disease Detection System Using Yolov8. Available at SSRN, 4899189. https://doi.org/
Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood
cancer detection and classification using a Convolutional Neural Network. arXiv preprint,
arXiv:2409.06689. https://doi.org/2
Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood
cancer detection and classification using Convolutional Neural Network. arXiv e-prints,
arXiv: 2409.06689. https://doi.org/
Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep
Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer
Detection. arXiv preprint, arXiv:2409.06699. https://doi.org/4
Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep
Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer
Detection. arXiv e-prints, arXiv: 2409.06699. https://doi.org/
Ahad, M. T., Mustofa, S., Rahman, M. S., Song, B., & Li, Y. (2023). A Comprehensive Study on
Deep Feature Extraction to Detect and Classify Soursop Leaf Disease. Available at SSRN,
4845099. https://doi.org/
Ahad, M. T., Mustofa, S., Sarker, A., & Emon, Y. R. (2024). Bdpapayaleaf: A dataset of papaya leaf
for disease detection, classification, and analysis. Classification, and Analysis.
https://doi.org/3
Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using novel CNN-
based ensemble approach. arXiv preprint, arXiv:2410.05272. https://doi.org/1
Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using novel CNN-
based ensemble approach. arXiv e-prints, arXiv: 2410.05272. https://doi.org/
Ahad, M. T., Preanto, S. A., Song, B., & Li, Y. (2023). Gan-Generated Spectrogram Detection and
Classification for Heartbeat Classification Using a Vision Transformer. Available at SSRN,
4892869. https://doi.org/
Ahad, M. T., Song, B., & Li, Y. (2024). A Comparison of Convolutional Neural Network, Transfer
Learning and Ensemble Technique for Brain Tumour Detection of Classification. Transfer
Learning and Ensemble Technique for Brain Tumour Detection of… https://doi.org/1
Ahmed, F., & Ahad, M. T. (2023). Machine learning-based tea leaf disease detection: A
comprehensive review. arXiv preprint, arXiv:2311.03240. https://doi.org/22
Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2023). A fuzzy-based vision
transformer model for tea leaf disease detection. International Conference on Trends in
Computational and Cognitive… https://doi.org/5
Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2024). A Fuzzy-Based Vision
Transformer Model for Tea Leaf Disease Detection Check for updates. Proceedings of the
Fifth International Conference on Trends in Computational… https://doi.org/1
Akilandeswari, A., Sungeetha, D., Joseph, C., Thaiyalnayaki, K., Baskaran, K., Jothi Ramalingam, R.,
... & Meansbo Hadish, K. (2022). Automatic detection and segmentation of colorectal cancer
with deep residual convolutional neural network. Evidence‐Based Complementary and
Alternative Medicine, 2022(1), 3415603.
Alabdulqader, E. A., Umer, M., Alnowaiser, K., Wang, H., Alarfaj, A. A., & Ashraf, I. (2024). Image
Processing-based Resource-Efficient Transfer Learning Approach for Cancer Detection
Employing Local Binary Pattern Features. Mobile Networks and Applications, 1-17.
Alabi, R. O., Elmusrati, M., Leivo, I., Almangush, A., & Mäkitie, A. A. (2023). Machine learning
explainability in nasopharyngeal cancer survival using LIME and SHAP. Scientific
Reports, 13(1), 8984.
Aldughayfiq, B., Ashfaq, F., Jhanjhi, N. Z., & Humayun, M. (2023). Explainable AI for
retinoblastoma
diagnosis:
interpreting
deep
learning
models
with
LIME
and
SHAP. Diagnostics, 13(11), 1932.
Alzahrani, S. M., Al Doghaither, H. A., & Al-Ghafari, A. B. (2021). General insight into cancer: An
overview of colorectal cancer. Molecular and clinical oncology, 15(6), 271.
Arthi, N. T., Mubin, K. E., Rahman, J., Rafi, G. M., Sheja, T. T., Reza, M. T., & Alam, M. A. (2022,
December). Decentralized federated learning and deep learning leveraging xai-based
approach to classify colorectal cancer. In 2022 IEEE Asia-Pacific Conference on Computer
Science and Data Engineering (CSDE) (pp. 1-6). IEEE.
Attallah, O., Aslan, M. F., & Sabanci, K. (2022). A framework for lung and colon cancer diagnosis
via lightweight deep learning models and transformation methods. Diagnostics, 12(12), 2926.
Auzine, M. M., Heenaye-Mamode Khan, M., Baichoo, S., Gooda Sahib, N., Bissoonauth-Daiboo, P.,
Gao, X., & Heetun, Z. (2024). Development of an ensemble CNN model with explainable AI
for the classification of gastrointestinal cancer. Plos one, 19(6), e0305628.
Bhowmik, A. C., Ahad, M. T., & Emon, Y. R. (2023). Machine Learning-Based Jamun Leaf Disease
Detection: A Comprehensive Review. arXiv preprint, arXiv:2311.15741. https://doi.org/4
Bhowmik, A. C., Ahad, M. T., Emon, Y. R., Ahmed, F., Song, B., & Li, Y. (2024). A customised
Vision Transformer for accurate detection and classification of Java Plum leaf disease. Smart
Agricultural Technology, 8, 100500. https://doi.org/22
Biplob, T. I., Rabbany, G., Emon, Y. R., Ahad, M. T., & Fimu, F. A. (2023). An Optimized Vision
Based Transformer for Lungs Cancer Detection. International Conference on Trends in
Computational and Cognitive … https://doi.org/
Bychkov, D., Linder, N., Turkki, R., Nordling, S., Kovanen, P. E., Verrill, C., ... & Lundin, J. (2018).
Deep learning based tissue analysis predicts outcome in colorectal cancer. Scientific reports,
8(1), 3395.
Davila, A., Colan, J., & Hasegawa, Y. (2024). Comparison of fine-tuning strategies for transfer
learning in medical image classification. Image and Vision Computing, 146, 105012.
deSouza, A., Nadkarni, S., Roy, S., Kataria, P., Ramaswamy, A., & Ostwal, V. (2024). Colon Cancer.
In Tata Memorial Centre Textbook of Oncology (pp. 565-592). Singapore: Springer Nature
Singapore.
Dulf, E., Bledea, M., Mocan, T., & Mocan, L. (2021). Automatic Detection of Colorectal Polyps
Using
Transfer
Learning.
Sensors
(Basel,
Switzerland),
21.
https://doi.org/10.3390/s21175704.
Echle, A., Grabsch, H. I., Quirke, P., van den Brandt, P. A., West, N. P., Hutchins, G. G., ... & Kather,
J. N. (2020). Clinical-grade detection of microsatellite instability in colorectal tumors by deep
learning. Gastroenterology, 159(4), 1406-1416.
Elshamy, R., Abu-Elnasr, O., Elhoseny, M., & Elmougy, S. (2024). Enhancing colorectal cancer
histology diagnosis using modified deep neural networks optimizer. Scientific Reports, 14(1),
19534.
Emon, Y. R., & Ahad, M. T. (2024). Multi-format open-source sweet orange leaf dataset for disease
detection, classification, and analysis. Data in Brief, 55, 110713. https://doi.org/17
Emon, Y. R., Rabbani, M. G., Ahad, M. T., & Ahmed, F. (2023). A Comprehensive Literature
Review on Sweet Orange Leaf Diseases. arXiv preprint, arXiv:2312.01756. https://doi.org/
Emon, Y. R., Rabbani, M. G., Ahad, M. T., & Ahmed, F. (2023). A Comprehensive Literature
Review on Sweet Orange Leaf Diseases. arXiv e-prints, arXiv: 2312.01756. https://doi.org/
Fadlallah, H., El Masri, J., Fakhereddine, H., Youssef, J., Chemaly, C., Doughan, S., & Abou-Kheir,
W. (2024). Colorectal cancer: Recent advances in management and treatment. World journal
of clinical oncology, 15(9), 1136–1156. https://doi.org/10.5306/wjco.v15.i9.1136
Georgiou, N., Kolias, P., & Chouvarda, I. (2024). Machine Learning Models for the Classification of
Histopathological Images of Colorectal Cancer. Applied Sciences, 14(22), 10731.
Ghasemi, A., Hashtarkhani, S., Schwartz, D. L., & Shaban‐Nejad, A. (2024). Explainable artificial
intelligence in breast cancer detection and risk prediction: A systematic scoping
review. Cancer Innovation, 3(5), e136.
Group, M. (2022). EBHI-SEG (Version 1). figshare. https://doi.org/10.6084/m9.figshare.21540159.v1
Iqbal, M. J., Javed, Z., Sadia, H., Qureshi, I. A., Irshad, A., Ahmed, R., ... & Sharifi-Rad, J. (2021).
Clinical applications of artificial intelligence and machine learning in cancer diagnosis:
looking into the future. Cancer cell international, 21(1), 270.
Iqbal, S., & Qureshi, A. N. (2022). A heteromorphous deep CNN framework for medical image
segmentation using local binary pattern. Ieee Access, 10, 63466-63480.
Islam, R., Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2024). Mental health diagnosis from voice
data using convolutional neural networks and vision transformers. Journal of Voice.
https://doi.org/1
Karthikeyan, A., Jothilakshmi, S., & Suthir, S. (2024). Colorectal cancer detection based on
convolutional neural networks (CNN) and ranking algorithm. Measurement: Sensors, 31,
100976.
Kassani, S. H., Kassani, P. H., Wesolowski, M. J., Schneider, K. A., & Deters, R. (2022). Deep
transfer learning based model for colorectal cancer histopathology segmentation: A
comparative study of deep pre-trained models. International Journal of Medical
Informatics, 159, 104669.
Kazeminia, Salome, Christoph Baur, Arjan Kuijper, Bram van Ginneken, Nassir Navab, Shadi
Albarqouni, and Anirban Mukhopadhyay. "GANs for medical image analysis." Artificial
intelligence in medicine 109 (2020): 101938.
Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of
deep convolutional neural networks. Artificial intelligence review, 53, 5455-5516.
Khazaee Fadafen, M., & Rezaee, K. (2023). Ensemble-based multi-tissue classification approach of
colorectal cancer histology images using a novel hybrid deep learning framework. Scientific
Reports, 13(1), 8823.
Korbar, B., Olofson, A. M., Miraflor, A. P., Nicka, C. M., Suriawinata, M. A., Torresani, L., ... &
Hassanpour, S. (2017). Deep learning for classification of colorectal polyps on whole-slide
images. Journal of pathology informatics, 8(1), 30.
Lou, J., Xu, J., Zhang, Y., Sun, Y., Fang, A., Liu, J., ... & Ji, B. (2022). PPsNet: An improved deep
learning model for microsatellite instability high prediction in colorectal cancer from whole
slide images. Computer Methods and Programs in Biomedicine, 225, 107095.
Luo, N., Zhong, X., Su, L., Cheng, Z., Ma, W., & Hao, P. (2023). Artificial intelligence-assisted
dermatology diagnosis: from unimodal to multimodal. Computers in Biology and Medicine,
107413.
Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2023). Scratch vision
transformer model for diagnosis grape leaf disease. International Conference on Trends in
Computational and Cognitive… https://doi.org/7
Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2024). Scratch Vision
Transformer Model for Diagnosis Grape Leaf Disease Check for updates. Proceedings of the
Fifth International Conference on Trends in Computational… https://doi.org/
Mamun, S. B., Payel, I. J., Ahad, M. T., Atkins, A. S., Song, B., & Li, Y. (2025). Grape Guard: A
YOLO-based mobile application for detecting grape leaf diseases. Journal of Electronic
Science and Technology, 23(1), 100300. https://doi.org/2
Moolchandani, D., Kumar, A., & Sarangi, S. R. (2021). Accelerating CNN inference on ASICs: A
survey. Journal of Systems Architecture, 113, 101887.
Morid, M. A., Borjali, A., & Del Fiol, G. (2021). A scoping review of transfer learning research on
medical image analysis using ImageNet. Computers in biology and medicine, 128, 104115.
Mustofa, S., Ahad, M. T., Emon, Y. R., & Sarker, A. (2024). BDPapayaLeaf: A dataset of papaya leaf
for disease detection, classification, and analysis. Data in Brief, 57, 110910. https://doi.org/7
Mustofa, S., Emon, Y. R., Mamun, S. B., Akhy, S. A., & Ahad, M. T. (2025). A novel AI-driven
model for student dropout risk analysis with explainable AI insights. Computers and
Education: Artificial Intelligence, 8, 100352. https://doi.org/5
Mustofa, S., Munna, M. M. H., Emon, Y. R., Rabbany, G., & Ahad, M. T. (2023). A comprehensive
review on plant leaf disease detection using deep learning. arXiv preprint, arXiv:2308.14087.
https://doi.org/37
Nanglia, S., Ahmad, M., Khan, F. A., & Jhanjhi, N. Z. (2022). An enhanced Predictive heterogeneous
ensemble model for breast cancer prediction. Biomedical Signal Processing and Control, 72,
103279.
Neto, P. C., Montezuma, D., Oliveira, S. P., Oliveira, D., Fraga, J., Monteiro, A., ... & Cardoso, J. S.
(2024). An interpretable machine learning system for colorectal cancer diagnosis from
pathology slides. NPJ precision oncology, 8(1), 56.
Nguyen, H. T. T., Cao, H. Q., Nguyen, K. V. T., & Pham, N. D. K. (2021, May). Evaluation of
explainable artificial intelligence: Shap, lime, and cam. In Proceedings of the FPT AI
Conference (pp. 1-6).
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic
segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv
preprint, arXiv:2409.06671. https://doi.org/7
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study on deep feature
extraction to detect and classify Acute Lymphoblastic Leukemia (ALL). arXiv preprint,
arXiv:2409.06687. https://doi.org/5
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study on deep feature
extraction to detect and classify Acute Lymphoblastic Leukemia (ALL). arXiv e-prints,
arXiv: 2409.06687. https://doi.org/
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic
segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv e-
prints, arXiv: 2409.06671. https://doi.org/
Prezja, F., Annala, L., Kiiskinen, S., Lahtinen, S., Ojala, T., Ruusuvuori, P., & Kuopio, T. (2024).
Improving performance in colorectal cancer histology decomposition using deep and
ensemble machine learning. Heliyon, 10(18).
Raju, M. S. N., & Rao, B. S. (2022, December). Classification of Colon Cancer through analysis of
histopathology images using Transfer Learning. In 2022 IEEE 2nd International Symposium
on Sustainable Energy, Signal Processing and Cyber Security (iSSSC) (pp. 1-6). IEEE.
Sarwinda, D., Paradisa, R. H., Bustamam, A., & Anggia, P. (2021). Deep learning in image
classification using residual network (ResNet) variants for detection of colorectal cancer.
Procedia Computer Science, 179, 423-431.
Sharkas, M., & Attallah, O. (2024). Color-CADx: a deep learning approach for colorectal cancer
classification through
triple
convolutional
neural
networks
and
discrete
cosine
transform. Scientific Reports, 14(1), 6914.
Sharma, N., Sharma, K. P., Mangla, M., & Rani, R. (2023). Breast cancer classification using
snapshot
ensemble
deep
learning
model
and
t-distributed
stochastic
neighbor
embedding. Multimedia Tools and Applications, 82(3), 4011-4029.
Sharma, P., Bora, K., Kasugai, K., & Balabantaray, B. K. (2020). Two Stage Classification with CNN
for Colorectal Cancer Detection. Oncologie (Tech Science Press), 22(3).
Shi, L., Li, X., Hu, W., Chen, H., Chen, J., Fan, Z., ... & Li, C. (2023). EBHI-Seg: A novel
enteroscope biopsy histopathological hematoxylin and eosin image dataset for image
segmentation tasks. Frontiers in Medicine, 10, 1114673.
Sirinukunwattana, K., Domingo, E., Richman, S. D., Redmond, K. L., Blake, A., Verrill, C., ... &
Koelzer, V. H. (2021). Image-based consensus molecular subtype (imCMS) classification of
colorectal cancer using deep learning. Gut, 70(3), 544-554.
Tanwar, S., Vijayalakshmi, S., Sabharwal, M., Kaur, M., AlZubi, A. A., & Lee, H. N. (2022).
Detection and classification of colorectal polyp using deep learning. BioMed Research
International, 2022.
Thakur, N., Yoon, H., & Chong, Y. (2020). Current trends of artificial intelligence for colorectal
cancer pathology image analysis: a systematic review. Cancers, 12(7), 1884.
Xu, L., Walker, B., Liang, P. I., Tong, Y., Xu, C., Su, Y. C., & Karsan, A. (2020). Colorectal cancer
detection based on deep learning. Journal of Pathology Informatics, 11(1), 28.
Xue, D., Zhou, X., Li, C., Yao, Y., Rahaman, M. M., Zhang, J., ... & Sun, H. (2020). An application
of transfer learning and ensemble learning techniques for cervical histopathology image
classification. IEEE Access, 8, 104603-104618.
Yamashita, R., Long, J., Longacre, T., Peng, L., Berry, G., Martin, B., ... & Shen, J. (2021). Deep
learning model for the prediction of microsatellite instability in colorectal cancer: a diagnostic
study. The Lancet Oncology, 22(1), 132-141.
Yu, G., Sun, K., Xu, C., Shi, X. H., Wu, C., Xie, T., ... & Deng, H. W. (2021). Accurate recognition
of colorectal cancer with semi-supervised deep learning on pathological images. Nature
communications, 12(1), 6311.
Zhou, D., Tian, F., Tian, X., Sun, L., Huang, X., Zhao, F., ... & Li, X. (2020). Diagnostic evaluation
of a deep learning model for optical diagnosis of colorectal cancer. Nature communications,
11(1), 2961.
|
R-Net: A Reliable and Resource-Efficient CNN for Colorectal Cancer Detection with XAI Integration Rokonozzaman Ayon 4IR Research Cell . Md Taimur Ahad (Industrial Automation) (Computing) : State-of-the-art (SOTA) Convolutional Neural Networks (CNNs) are criticized for their extensive computational power, long training times, and large datasets. To overcome this limitation, we propose a reasonable network (R-Net), a lightweight CNN only to detect and classify colorectal cancer (CRC) using the Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset (EBHI). Furthermore, six SOTA CNNs, including Multipath-based CNNs (DenseNet121, ResNet50), Depth-based CNNs (InceptionV3), widthbased multi-connection CNNs (Xception), depth-wise separable convolutions (MobileNetV2), spatial exploitation-based CNNs (VGG16), Transfer learning, and two ensemble models are also tested on the same dataset. The ensemble models are a multipath-depth-width combination (DenseNet121-InceptionV3-Xception) and a multipath-depth-spatial combination (ResNet18-InceptionV3-VGG16). However, the proposed R-Net lightweight achieved 99.37% accuracy, outperforming MobileNet (95.83%) and ResNet50 (96.94%). Most importantly, to understand the decision-making of R-Net, Explainable AI such as SHAP, LIME, and Grad-CAM are integrated to visualize which parts of the EBHI image contribute to the detection and classification process of R-Net. The main novelty of this research lies in building a reliable, lightweight CNN R-Net that requires fewer computing resources yet maintains strong prediction results. SOTA CNNs, transfer learning, and ensemble models also extend our knowledge on CRC classification and detection. XAI functionality and the impact of pixel intensity on correct and incorrect classification images are also some novelties in CRC detection and classification. Keywords: Colorectal cancer detection, convolutional neural network, CNN, lightweight CNN, ensemble model, SHAP, LIME, GRAD-CAM, XAI. 1. Introduction The worldwide incidence of colorectal cancer (CRC) remains high because yearly diagnosis rates reach 1.8 million new cases (Cowan et al., 2022). As the second most death-causing cancer worldwide, CRC also stands among the top three cancer types (Alzahrani et al., 2021; Zhou et al., 2020). CRC caused 930,000 deaths in 2020 while generating 881,000 fatalities in 2018, according to Fadlallah et al. (2024) and deSouza et al. (2024). Researchers continue to study new treatment methods to improve CRC survival outcomes while reducing mortality rates. CRC stands as a significant worldwide public health problem because of its high death rate (Fadlallah et al., 2024). The standard diagnosis of CRC relies on histopathological examination; however, this method remains time-consuming, subjective, and requires complex analysis (Sharkas & Attallah, 2024). Figure 01: Visual View of CRC (amended from American Cancer Society, 2025) In the detection and classification of CRC, Deep Learning (DL) has dramatically improved by making decisions more accurately, minimizing human error, reducing mistakes, and allowing real-time analysis in clinics (Iqbal et al., 2021). From pathology slides, the classification of cancerous tissues using DL models is very effective to achieve better accuracy in cancerous and non-cancerous tissues (Xu et al., 2020). Moreover, DL models have successfully classified polyps in colonoscopy images, and this is an essential step in CRC prevention and screening (Tanwar et al., 2022). The Deep Convolutional Neural Network (DCNN) architecture has performed very well in finding CRC-related features in histopathological images (Sarwinda et al., 2021; Shi et al., 2023). By combining the DL models with histopathological analysis, it helped to reduce the workload for diagnosis (doctors) and improve the diagnostic precision (Luo et al., 2023; Attallah et al., 2022). Among DL techniques, several approaches have been applied, including State-of-the-Art (SOTA) CNNs, modified CNNs, Transfer Learning (TL), and Ensemble models (Iqbal et al., 2022). Despite DL's ability to detect and classify CRC, the latest DL technologies can visualize the CRC (Thakur et al., 2020). The visualizing techniques in DL systems operate under the terminology known as Explainable Artificial Intelligence (XAI). Local Interpretable ModelAgnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and Gradient‐weighted Class Activation Mapping (Grad‐CAM) are popular XAI techniques found in the literature (Auzine et al., 2024). The combination of SHAP and LIME techniques produces both global and local explanations that work on validation and test sets for DL models (Alabi et al., 2023). Lime is basically used by a model named "black-box" to generate explanations for each prediction (Aldughayfiq et al., 2023), and SHAP is known as the most used model-agnostic method. It can be applied to any Machine Learning (ML) model to explain any model prediction. On the other hand, Grand-CAM is used to generate heatmaps for specific target classes by feature maps from the last layer of a CNN (Ghasemi et al., 2024). Mainly, it is used to show which parts of an image, or an input, are most important to predict by a model. Several studies have been conducted on the CRC dataset for classification and identification; however, they have some limitations that can affect the reliability and applicability of these studies in a practical scenario. Numerous studies have supported the concept of lightweight CNNs and proposed novel CNN architectures that operate with fewer layers. Again, very few authors applied XAI techniques in cancer cell classifications. Some of the limitations and study gaps are mentioned below: 1. The use of imperfect ground truth data, inadequate clinical data, and insufficient training data for validation might lead to poor performance and might question the reliability of the performance of the trained model. It could perform worse in the scenario where variability in the dataset is observed (Echle et al., 2020; Xu et al., 2020). 2. The limitation of Cancer and Non-Cancer classification in detecting CRC represents a significant disadvantage since it stands in the way of model development for alternative rare malignant conditions during colonoscopy, like sarcoma, melanoma, gastrointestinal stromal tumor, lymphoma, and carcinoid tumor (Zhou et al., 2020; Karthikeyan et al., 2024). 3. Only using a small number of images and limited patient data could create a significant issue in the trained model, and that would be called overfitting, which possibly affects the reliability of predictions and restricts model applicability (Ho et al., 2022). 4. Sarwinda et al. (2021) compared ResNet18 and ResNet50 models that might fall in comparison with other studies that have worked with many other exceptional architectures and showed their comparison analysis. 5. Prezja et al. (2024); Khazaee and Rezaee (2023) applied the ensemble strategy and proposed their ensemble model, but as the ensemble approach combines several SOTA CNNs, it has many layers. So, this issue was not addressed, and a lightweight CNN architecture was not proposed. 6. The custom model was presented by Akilandeswari et al. (2022) and Attallah et al. (2022) in their studies, but in the applicability sector, it could not reach the standard of lightweight CNN models. 7. None of the studies mentioned above have included XAI techniques like LIME, SHAP, and GRAD-CAM in their studies to explain the reasoning of their classification and misclassification. To fill those gaps, this paper's contributions are: 1. In order to avoid overfitting issues and improve the model performance in classifying CRC, data augmentation, balancing, and some preprocessing techniques were implemented. 2. Six SOTA CNN architectures (InceptionV3, VGG16, MobileNet, ResNet50, DenseNet121, Xception) were applied on the dataset, and their performance comparison was presented. 3. Six pre-trained models with Transfer Learning were also applied to monitor the behavior of the accuracies and to show the comparison with the previous accuracies. 4. Two ensemble approaches based on three strategies, Soft-voting, Hard-voting, and Rank-based ensemble, were applied to increase the performance of the classification of CRC cells. 5. One of the main contributions was to build a lightweight Reliable Net (R-Net) model with a few layers, which achieved 99.37% accuracy with fewer resources. 6. XAI techniques like LIME and SHAP, along with Grad-CAM, were applied to make the work understandable and to make proper reasoning of classification and misclassification in CRC classification. 2. Literature Review The literature covers a wide range of techniques, including colonoscopy and histological image analysis, reflecting the diversity of strategies being investigated for CRC treatment. Ahad et al., 2023; Mustofa et al., 2023; Bhowmik et al., 2024, Ahmed & Ahad, 2023; Emon & Ahad, 2024; Mustofa et al., 2024; Preanto et al., 2024; Mamun et al., 2023; Ahad et al., 2024; Mustofa et al., 2025; Preanto et al., 2024; Ahmed et al., 2023; Ahad et al., 2024; Bhowmik et al., 2023; Ahad et al., 2024; Mamun et al., 2025; Ahad et al., 2024, Ahad et al., 2024; Islam et al., 2024; Ahad et al., 2024; Ahmed et al., 2024; Ahad et al., 2024; Preanto et al., 2024; Preanto et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Mamun et al., 2024; Emon et al., 2023; Emon et al., 2023; Biplob et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023). This represents a critical advancement in cervical cancer diagnosis, enhancing the effectiveness of screening and improving early detection rates. This review highlights the transformative impact of DL on the detection and treatment of CRC by consolidating findings from several research studies. SOTA CNNs proved the capabilities of cancerous cell detection and identification. However, it also has some limitations, for example, various layers such as concatenation, convolutional, pooling, and fully connected layers, as well as hyperparameters. Due to their large memory footprint and high computational demands (Moolchandani et al., 2021), DCNN architectures have been criticized by researchers (Thakur et al., 2023). Another researcher (Fu et al., 2024) also supported the previous researcher (Thakur et al., 2023) that CNNs have some limitations. When implementing CNNs in applied artificial intelligence, they have encountered challenges due to the complex architecture of the CNN network. To achieve a comparatively good result from DCNNs, the authors suggested lightweight CNN architectures with fewer layers, which can accurately identify the disease in cancerous images. Inspired by the success of lightweight CNNs, several studies (Thakur et al., 2023; Sun et al., 2024; Verma et al., 2024) have developed lightweight CNNs. Moreover, other methodologies are also applied in cancer cell detection, such as transfer learning and ensemble models (Xue et al., 2020). For CRC detection, Transfer Learning (TL) is highly impactful when a large medical dataset is unavailable, as it utilizes pre-trained models for image classification. For example, to classify any cancer cell, such as colorectal polyps and cancerous tissues, TL models have been fine-tuned using CNN pre-trained models on diverse images (Alabdulqader et al., 2024; Raju et al., 2022). Techniques such as those applied in TL, partial layer freezing, and full fine-tuning help the models to focus on medical-specific features. For this reason, it continually strives to achieve better results than the pre-trained model (Davila et al., 2024; Morid et al., 2021). TL also improves the classification of benign tissues and adenocarcinomas in histopathology images (Morid et al., 2021). The Ensemble method functions as a classifier in cancer cell detection with improved accuracy than individual classification systems. It serves as an important method in many detection processes (Nanglia et al., 2022). The Ensemble model receives multiple model results from weights representing VGG19, DenseNet201, and MobileNetV2, along with other models, to enable a slow-learner algorithm for final prediction (Chugh et al., 2021). Basically, the final output is based on the cross-validated result and reduces a loss function to find optimal weights for the base model. The remarkable performance of CRCNet has highlighted the possibility for massive DL in clinical diagnostics. This new CRC detection model was trained on a big dataset of over 464,000 pictures (Zhou et al., 2020). Using H&E-stained slides, a DL model was created to detect MSI and MMR in colorectal tumours. This model provides a faster and more affordable option to conventional molecular diagnosis (Echle et al., 2020). Effective MSI and dMMR screening for CRC was made possible by the model, which achieved an AUROC of 0.92 during development and 0.96 during validation with colour normalisation after being trained on 8,836 tumours from various nations. According to Sarwinda et al. (2021), the ResNet architecture was utilized to detect CRC in histology images and differentiate between benign and malignant instances. ResNet-50 had the best accuracy (above 80%), sensitivity (above 87%), and specificity (above 83%) across a range of test sets, demonstrating the validity of DL in the classification of CRC. To predict patient outcomes from digital tissue samples, recurrent and CNNs were combined to show that DL can extract prognostic information from tissue morphology. This approach performed better than human evaluations with an AUC of 0.69 and a hazard ratio of 2.3 (Bychkov et al., 2018). In a semi-supervised learning (SSL) technique, 13,111 whole-slide photos from 8,803 patients were utilized to train the mean teacher model (Yu et al., 2021). This approach achieved expert-level accuracy with fewer labelled patches (AUC 0.974), performing similarly to standard supervised learning in patient-level diagnosis. CRCNet, designed to enhance the identification of CRC during colonoscopy, was trained using 464,105 pictures from over 12,000 patients. It outperformed endoscopists in terms of recall rates and AUPRC values (Zhou et al., 2020). This means that CRCNet may be applied to improve CRC screening. With a high sensitivity (97.4%) and an AUC of 0.917 (Ho et al., 2022), an AI model using a Faster R-CNN architecture was created for the identification of high-risk characteristics in CRC biopsies, suggesting that it could help pathologists. An automated deep-learning approach was developed to classify colorectal polyps in histological images with 93% accuracy across five polyp types, aiding pathologists in estimating risk and enhancing screening (Korbar et al., 2022). A two-phase approach for lesion segmentation and classification was used in the development of a computer-aided diagnostic system for early CRC diagnosis utilizing CT images (Akilandeswari et al., 2022). The DCNN and residual architecture-based system showed excellent accuracy of 98.82%. In order to diagnose CRC, a two-stage classification method was suggested for separating pertinent frames from colonoscopy recordings. These frames were then classified as either neoplastic or non-neoplastic (Sharma et al., 2020). The study concluded that VGG19 was the most effective DL model for diagnosing colonoscopy images after assessing several models. To predict MSI-H in CRC using full-slide images, a DL method that integrated tumor detection and MSI classification was created (Lou et al., 2022). 3. Description of experimental method This section provides the details of the hardware setup, description of the used dataset, the Rnet model development, and how it will be trained for this research. 3.1 Hardware Specification The experiments were conducted on a Precision 7680 Workstation equipped with a 13thgeneration Intel Core i9-13950HX vPro processor and Windows 11 Pro operating system. The workstation came equipped with an NVIDIA RTX 3500 Ada Generation GPU and featured 32GB of powerful DDR5 RAM, along with a 1 TB Solid State Drive (SSD). Python V3.9 was chosen as the programming language because it worked with TensorFlow-GPU, SHAP, and LIME. 3.2 Dataset Description Research data was obtained from an available public repository. Six classes composed the dataset containing Adenocarcinoma, High-Grade IN, Low-Grade IN, Normal, Polyp, and Serrated Adenoma, totaling 2228 images. A microscope instrument collected photos, which the study team stored in RGB format as PNG files. The figure displays different images that belong to each class category for this study in Figure 2. Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Figure 2: Samples of images used in the study. 3.3 Image Augmentation In this step, the downloaded images were manually reviewed to identify class imbalances and potential issues with background color, brightness, and contrast. It was observed that the images in each class were imbalanced, a common challenge in applications such as cancer diagnosis (Johnson & Khoshgoftaar, 2019). The application of GANs helps balance the dataset by generating authentic synthetic data instances that target the minority class. A total of 4800 images were generated to balance the dataset using this technique, and the dataset distribution is shown in Figure 3. Figure 3: Distribution of images. 4. Results of Experiments The researchers performed four different experiments to analyze CRC images. The analysis begins with R-Net, followed by DenseNet121, along with ResNet50, InceptionV3, Xception, MobileNetV2, and VGG16 SOTA CNNs. Then, transfer learning applied to these six SOTA CNNs. Finally, the research evaluated two ensemble models using DenseNet121 with InceptionV3-Xception and ResNet18 with InceptionV3-VGG16. The following section demonstrates experimental methodologies along with their achieved outcomes. 4.1 Experiment 1: R-Net development process and results The following section explains the R-Net model together with its training process and evaluation results: 4.1.1 R-Net Model Development The R-Net model was developed to find CRC cells along with their classifications within CRC image data. A set of two convolutional layers that use 64 filters begins the process before max-pooling occurs. The network adds two 128-filter convolutional layers which are followed by max-pooling before advancing to three 256-filter convolutional layers spread across more max-pooling layers. The depth of the feature map expands through successive max-pooling layers following three 512-filter convolutional layers that automatically reduce spatial dimensions. Feature extraction ends with flattening the output before passing it to two dense layers that have a fully connected structure. Figure 4: R-Net model visualisation The initial dense layer contains 256 neurons, and the second dense layer has 64 neurons. Six neurons form the last layer structure because it contains every possible target class for classification purposes. The model contains 15,911,430 trainable parameters for extracting image features, enabling its use in multiclass image classification. 4.1.2 Training setup The R-Net model was trained using 5-fold cross-validation. The training process extended for over 45 epochs using batches of size 32 in each fold. The Adam optimization algorithm was used for model optimization. The weights of the model became adjustable through gradients calculated by the algorithm, which resulted in enhanced classification performance and accuracy of CRC modalities-the selected loss function employed sparse categorical crossentropy for data calculation. The training parameters can be found in Table 1. Table 1: Hyperparameters of training Parameter Value Epochs 50 Batch size 16 Image size (64, 64, 3) Learning rate 1.00e-04 K_folds 5 Optimizer Adam(learning_rate=LEARNING_RATE) Loss Function SparseCategoricalCrossentropy(from_logits=True) Early Stopping EarlyStopping(monitor='val_accuracy', patience=10, verbose=1, restore_best_weights=True) Learning Rate Scheduler LearningRateScheduler( lambda epoch: LEARNING_RATE * 0.1 ** (epoch // 10) Callbacks [early_stopping, lr_scheduler] 4.1.3 Results of the R-Net Table 2 presents the evaluation of the R-Net model performance for each fold, which includes precision, recall, F1-score, and support. All five evaluations produced high-accuracy results through the model while maintaining low mistake rates. The model in Fold 1 achieved nearperfect precision but misclassified some instances. However, the classification performance in Fold 2 proved exceptional because the model achieved outstanding results without any significant misclassifications. Folds 3, 4, and 5 displayed outstanding performance, as misclassification was minimal. The model demonstrates exceptional capabilities in classifying different categories with high precision, thanks to its outstanding error reduction capabilities. Table 2: Fold-Wise Classification report with epochs of R-Net Fold Class Precision Recall F1-Score Support 1 Adenocarcinoma 1 0.99 0.99 138 HighGradeIN 0.99 1 1 120 LowGradeIN 0.99 0.99 0.99 133 Normal 1 1 1 119 Polyp 0.99 1 1 125 SerratedAdenoma 1 0.99 1 133 2 Adenocarcinoma 0.98 0.99 0.99 132 HighGradeIN 0.99 0.99 0.99 137 LowGradeIN 0.98 0.97 0.98 128 Normal 1 0.99 1 131 Polyp 0.98 0.98 0.98 119 SerratedAdenoma 0.99 1 1 121 3 Adenocarcinoma 1 0.97 0.98 128 HighGradeIN 0.98 1 0.99 137 LowGradeIN 0.99 0.98 0.99 126 Normal 0.99 1 1 131 Polyp 0.98 0.99 0.98 124 SerratedAdenoma 1 0.99 1 122 4 Adenocarcinoma 1 0.99 1 130 HighGradeIN 0.98 1 0.99 124 LowGradeIN 1 0.98 0.99 123 Normal 1 1 1 126 Polyp 0.98 0.99 0.98 122 SerratedAdenoma 1 1 1 143 5 Adenocarcinoma 0.98 0.98 0.98 112 HighGradeIN 0.98 1 0.99 122 LowGradeIN 0.98 0.97 0.97 130 Normal 1 1 1 133 Polyp 0.99 0.98 0.98 150 SerratedAdenoma 1 1 1 121 The model's precision level becomes noticeable through visualization in the confusion matrix presented in Figure 5. In Fold 1, the model performed well with very minimal misclassification errors, and Fold 2 achieved better accuracy by successfully separating challenging class samples. The model reached exceptional levels of classification in Folds 3 through 5 because errors reached virtually zero during these runs. The model successfully differentiates multiple categories, exhibiting high precision and recall, which proves its effectiveness in minimizing misclassification errors and ensuring reliability. Figure 5: Fold-wise confusion matrix of R-Net. Figure 6: Fold-Wise ROC-Curve of R-Net. A comparison of different ROC curves appears in Figure 6 based on five-fold crossvalidation techniques. The evaluation methods of Fold 1 show both high accuracy in correctly identifying cases and correctly misclassified cases while minimizing false positive errors. The updated Fold 2 enhances the model with a denser curve design. The performance accuracy of the model becomes evident through near-perfect ROC curves that appear in Folds 3 through 5. The reliability and robustness of the R-Net model are evident in these achieved results in multi-class classification. Figure 7 displays training and validation accuracy and training and validation loss data for the five R-Net model folds. The plot illustrates both training accuracy and validation accuracy rates, alongside a decreasing training loss and sustained low validation loss, which signifies outstanding model performance and avoids overfitting occurrences. The model demonstrates reliable performance and strong generalization capabilities across all folds, as indicated by these results. Figure 7: Training and Validation across all folds. Additional performance evaluation of the R-Net model generated a confusion matrix based on the test dataset. Figure 8 presents the model classification results, which show accurate predictions among different categories. The model demonstrated robustness and reliability through the match between the classification matrix and its high-accuracy assessment. A small sample misidentification demonstrates the model's efficient generalization effectiveness, which qualifies it for practical utilization. Figure 8: Confusion Matrix of the R-Net Model on the Test Dataset The performance metrics for the R-Net model appear in Table 3 for training, validation, and the test datasets. Table 3: Model Performance Metrics on Training, Validation, and Test Sets Dataset Loss Accuracy Train 1.06e-07 100% Validation 0.012 99.79% Test 0.0275 99.37% The model learned the training data efficiently, achieving a minimal training loss value of 1.06e-7 (0.00000010617) along with perfect accuracy of 100%. The validation loss shows minimal value (0.0120) alongside a high accuracy of 99.79% which indicates strong generalization to new data points. The test data shows both low loss at 0.0275 and accuracy at 99.37% which strengthens the reliability and robustness of the model. The model demonstrates excellent potential for practical application, as it achieves high classification accuracy while minimizing errors. R-Net delivers outstanding performance in its combined classification metrics by achieving a 99% accuracy across every category. The model achieves equal and highly effective results across precision, recall, and F1-scores, with values of approximately 0.99. The model achieves strong performance based on specific classification results, which show that the Normal, Serrated Adenoma, and Polyp categories achieve scores close to 1.00. The evaluation of Adenocarcinoma, High-Grade IN, and Low-Grade IN cancerous cell types through R-Net shows that the model achieves precision, recall, and F1 scores between 0.97 and 0.99. The model demonstrates reliability through its consistent performance, as shown by macro and weighted averages across the entire dataset. The confusion matrix exhibits the R-Net's high accuracy. Among 4800 images, R-Net correctly detected and classified 4797 images. However, only 3 LowGradeIN instances were misclassified as 2 Adenocarcinoma and 1 HighGradeIN, while Normal and Polyp showed no misclassification errors. The confusion matrix confirms that the model successfully reduces false positive results. The fold-wise accuracy and loss curves deliver details about how well the model performs throughout training with validation intervals. The model's learning process exhibits significant improvement in accuracy, ultimately achieving high levels of model ability in terms of generalization and learning. Successful model operation maintains stable learning efficiency, but error reduction appears firmer because of decreasing loss curve values. The RNet model achieved high accuracy along with minimal loss values during training sessions. 4.2 Experiment 2: SOTA CNN performance on CRC An examination of six SOTA CNNs is conducted according to the taxonomy system of Khan et al. (2020). The models organized into five categories include Depth-based CNNs (InceptionV3), Multi-Path-based CNNs (ResNet50, DenseNet121), and Width-based MultiConnection CNNs (Xception), Depthwise Separable Convolutions (MobileNet), along with Spatial Exploitation-based CNNs (VGG16). The selection of these models was done to provide deep insight into which CNN produces the best results for CRC image classification. The performance of CNNs during this task was evaluated using three different optimizers: Adam, Adamax, and RMSprop. Table 4: Performance comparison of SOTA CNNs and optimizers Model Epoch (Adam) Accuracy (Adam) Epoch (Adamax) Accuracy (Adamax) Epoch (RMSprop) Accuracy (RMSprop) DenseNet121 29 0.9993 29 0.9965 23 1 ResNet50 28 0.9694 27 0.9722 29 0.9917 InceptionV3 22 0.9944 37 0.9625 24 0.9993 Xception 14 1 14 1 14 1 MobileNet 50 0.9583 46 0.8465 32 0.9257 VGG16 11 0.1667 30 0.9674 24 0.9944 The performance metrics of six state-of-the-art CNNs are presented in Table 4 under Adam, Adamax, and RMSprop optimizer conditions. The highest accuracy of 99% is achieved by DenseNet121 using Adam (0.9993) and RMSprop (1.0000), which required 29 epochs alongside 23 epochs. The performance of ResNet50 demonstrates reduced accuracy when Adam reaches 0.9694, while Adamax reaches 0.9722, and RMSprop generates 0.9917 accuracy, yet optimizers fail to affect accuracy measurements noticeably. The combination of RMSprop (0.9993) and Adam (0.9944) delivers superior results than Adamax (0.9625) for InceptionV3. Within 14 epochs, the Xception model achieves a perfect accuracy score of 1.0000 when combined with any of the optimizers. The accuracy of MobileNet is lower when using Adamax (0.8465), while both Adam (0.9583) and RMSprop (0.9257) outperform it in terms of accuracy. Adam produces subpar results for the VGG16 model, with an accuracy rating of only 0.1667, whereas Adamax achieves 0.9674 accuracy and RMSprop delivers 0.9944 accuracy. The results demonstrate that Adam and RMSprop provide stable performance metrics, although Adamax shows reduced operational efficiency, mainly when used in MobileNet and InceptionV3 models. The accuracy-epoch relationship between optimizers and models appears in Figure 9 through a scatter plot representation. Figure 9: The training and validation accuracy of the original CNNs. Figure 10: Confusion matrices of Transfer learning The confusion matrix in Figure 10 shows that Xception and DenseNet121 have lower Type 1 and Type 2 errors, which indicates better classification performance. The performance of InceptionV3 falls within the middle range, as it generates both acceptable false and true negatives and positives. The misclassification rates of VGG16 remain high, especially in LowGrade-IN and Polyp, which makes this model less dependable than the others. The misclassification rates for ResNet50 are significantly elevated in classifications of Adenocarcinoma and Low-Grade IN, resulting in numerous incorrect positive and negative predictions. MobileNet demonstrates the worst capability among these models based on its classification errors in HighGrade-IN, LowGrade-IN, and Polyp. 4.3 Experiment 3: SOTA CNNs Transfer Learning Performance on CRC An evaluation of six SOTA CNNs transfer learning architectures occurs during this experiment. The experiment relies on the same classification system from Experiment 1, using SOTA CNN for equal model comparison throughout. Evaluation takes place through an analysis of training, validation, and testing accuracy from distinct optimization methods. Table 5: Performance comparison of SOTA CNNs; Transfer learning and optimizers Model Epoch (Adam) Accuracy (Adam) Epoch (Adamax) Accuracy (Adamax) Epoch (RMSprop) Accuracy (RMSprop) DenseNet121 5 0.7962 5 0.7456 5 0.7883 ResNet50 5 0.5677 5 0.4385 5 0.4985 InceptionV3 5 0.8835 5 0.7244 5 0.861 Xception 5 0.63 5 0.4125 5 0.5902 MobileNet 5 0.8769 5 0.704 5 0.8077 VGG16 5 0.7942 5 0.6342 5 0.7025 The accuracy data demonstrate that DenseNet121 achieves high accuracy rates using multiple optimizers, with Adam reaching 79.62%, Adamax reaching 74.56%, and RMSProp reaching 78.83% (Table 5 and Figure 11). The results show that MobileNet maintained consistent performance, achieving 87.69% accuracy with Adam, 70.40% accuracy with Adamax, and 80.77% accuracy with RMSProp. The accuracy results show that ResNet50 achieves the worst performance in transfer learning, with 56.77% accuracy from Adam, 43.85% from Adamax, and 49.85% from RMSProp, even though all optimizers required five epochs, which indicates optimization difficulties. Figure 11: Transfer learning CNN models, accuracy, and epochs Figure 12: Confusion matrices of Transfer learning. The classification results in Figure 12 show that Type 1 (false positives) and Type 2 (false negatives) errors are minimal for both DenseNet121 and MobileNetV2, thus demonstrating superior classification accuracy. The performance of Xception alongside InceptionV3 shows moderate errors of false positives and negatives. VGG16 exhibits failures in classification accuracy due to its higher error rates, particularly in distinguishing between the Normal and Polyp categories, thus demonstrating inferior reliability compared to competing models. ResNet50 demonstrates the highest degree of misclassification because it generates numerous false positive and false negative results that show its operational effectiveness. 4.4 Experiment 3: Ensemble Model Performance Two ensemble models were built in this research project, with one model using DenseNetInception-Xception for Multi-path Depth-Width based and the other implementing ResNet50InceptionV3-VGG16 for Multi-path Depth-Spatial based. 4.4.1 Ensemble 1 The comparison of multi-path-depth-width CNN ensemble (DenseNet-Inception-Xception) through Soft Voting, Hard Voting, and Rank-Based methods is presented in Table 6. Table 6: Performance of Multi-path-depth-Width based (DIX) Ensemble. Multi path-depth-Width based Ensemble method Accuracy Precision Recall F1 Score DenseNet - Inception-Xception Soft Voting 98.02% 98.12% 98.02% 98.07% Rank-Based 57.19% 65.71% 57.19% 59.43% Hard Voting 95.52% 96.13% 95.52% 95.53% Soft Voting Ensemble reaches 98.02% accuracy through minimal errors (Type I = 16, Type II = 16), which appear in the Soft Voting confusion matrix. The detection method demonstrates 98.12% precision, along with 98.02% recall, and achieves an F1 Score of 98.07%, which positions it as the most effective tool for CRC diagnosis, according to Table 6. The Hard Voting Ensemble demonstrates an accuracy of 95.52% and both Type I and Type II errors amount to 43 each. The Hard Voting confusion matrix shows 96.13% precision alongside 95.52% recall. The Rank-Based Ensemble generates only 57.19% accuracy alongside a high number of errors (Type I = 182 and Type II = 182) through the Rank-Based confusion matrix that creates poor performance with 65.71% precision and 57.19% recall using Table 6. Figure 12: Confusion matrices of Transfer learning. 4.4.2 Ensemble 2 The comparison of the Multi-Path-Depth-Spatial based ensemble (ResNet50-InceptionV3VGG16) through Soft Voting, Hard Voting, and Rank-Based methods is presented in Table 7. Table 7: Performance of Multi-path-depth-Spatial based (RIV) Ensemble. Multi-Path-Depth-Spatial based Ensemble method Accuracy Precision Recall F1 Score ResNet50-InceptionV3-VGG16 Soft Voting 98.23% 98.25% 98.23% 98.23% Rank-Based 89.69% 89.83% 89.69% 89.71% Hard Voting 88.85% 89.15% 88.85% 88.79% The Soft Voting Ensemble reaches a 98.23% accuracy rate and shows 11 Type I errors as well as 16 Type II errors in its Soft Voting confusion matrix. According to Table 7, the Soft Voting Ensemble represents the best method for CRC detection since it reaches 98.25% precision, 98.23% recall, and an F1 Score of 98.23%. The Hard Voting Ensemble reaches 88.85% accuracy, but it produces more errors than Soft Voting (Type I = 91, Type II = 107), as the Hard Voting confusion matrix reveals, with 89.15% precision and 88.85% recall. The Rank-Based Ensemble method demonstrates an 89.69% accuracy rate while producing errors consisting of 72 Type I and 84 Type II classifications according to the analysis of the RankBased confusion matrix (Table 7). Figure 13: Confusion matrices of Transfer learning. 4.5 Result of the XAI To provide both local and global explanations for the proposed R-Net model, this study employed three XAI methods: LIME, SHAP, and Grad-CAM. 4.5.1 LIME Visualization As such, LIME is used for generating an explanation for each prediction. Figure 14 below shows that each explanation of LIME includes two features: Green and Red. "Green regions" are areas that positively contributed to the predicted class, and "Red regions" are areas that negatively contributed to the predicted class. Figure 14: LIME explainer of CRC images 4.5.2 SHAP Explanation Figure 15 shows each explanation of SHAP, including two features: Red and Blue. "Red regions" are areas that positively contributed to the predicted class, and "Blue regions" are areas that negatively contributed to the predicted class, along with a mean SHAP value for each prediction. Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Adenocarcinoma HighGradeIN LowGradeIN Normal Polyp SerratedAdenoma Figure 15: SHAP explainer of CRC images 4.5.3 Grad-CAM Analysis of Correctly/Incorrectly CRC Modalities The Grad-CAM tool reveals which parts of an image the R-Net classifies most for predictions. Grad-CAM measures the connection between feature maps and class prediction by calculating the gradients within the last CNN layer. The heatmap shows how important each feature map is to the prediction results. The model employs gradient results to readjust its feature maps before combining them into the visualization map. The research employs Grad-CAM analysis to demonstrate which areas of the CRC image the model selected for diagnostic purposes. For example, Figure 16 shows the misclassification of images. Figure 16: Examples of misclassified images Grad-CAM technology displays which CRC cases the model incorrectly identified by showing where it focused its attention in Figure 17. The heatmaps show locations that received the highest attention through bright red and lesser attention with blue coloring. The visualizations demonstrate that the model examined areas of the colorectal that did not contain cancer. However, it made incorrect diagnoses, such as classifying cancerous cells (Polyps) without cancer as LowGradeIn. Original image: Polyp; Predicted: LowGradeIn Original image: Polyp; Predicted: LowGradeIn Figure 17: GRAD-CAM view of misclassified images Figure 18 presents Grad-CAM highlighting the medical model's accurate recognition of tumor areas on correctly classified images. While we observe this behavior in specific nontumor cases, our model tends to direct irrelevant attention to parts of the image, suggesting future development is needed in feature identification processes. Original image: Polyp; Predicted: LowGradeIn Original image: Polyp; Predicted: LowGradeIn Figure 18: Grad-CAM view of correctly classified images 4.5.4 Pixel Intensity of Correctly/Incorrectly CRC Modalities The pixel intensity displays feature attribution by showing which parts of the input images helped to make the CNN's decisions. This interactive graph displays aspects of a neural network model that misidentified a polyp as a low-grade cancerous cell (Figure 19). While Figure 20 shows the actual prediction of CRC. The panel displays the CRC images of the true cancer regions. The right panel demonstrates the Gradient × Input method that shows the model determination through pixel intensity values based on part contributions to the image. Parts of the image with intense colors had a significant impact on the prediction. The model selected non-cancerous areas for importance during classification because its Gradient × Input analysis did not match the cancerous regions. The mismatch between the model's learned features and the key characteristics of cancerous cells indicates that the model cannot provide an accurate assessment. Figure 19: Grad-CAM view of pixel intensity for misclassified images Figure 20: Grad-CAM view of pixel intensity for correctly classified images 5. Discussion This research proposes R-Net (Reliable Net) as a compact CNN that effectively detects CRC through fewer layers while utilizing small learnable parameters. The proposed R-Net achieves a 99.37% accuracy in CRC image classification compared to SOTA CNNs, transfer learning, and ensemble models. Notably, the stable performance of Adam remains consistent, while RMSprop shows variable results between models, which proves that optimizer selection should consider the specific convergence patterns of the designed framework. A state-of-the-art evaluation utilized six CNN architectural models, including InceptionV3, VGG16, MobileNet, ResNet50, DenseNet121, and Xception, for CRC classification. Both Xception and DenseNet121 yield equivalent diagnostic outcomes, but they require significantly more computational power than R-Net. Through transfer learning methods, InceptionV3 and MobileNet executed better classification than alternative approaches, whereas they did not match R-Net's efficiency levels. The combined utilization of multiple CNNs through ensemble models achieved high classification results, and Soft Voting proved to be the most effective method. However, R-Net proves to be a practical choice over ensemble methods because it delivers effective results while requiring less computational power and shorter training duration. R-Net prediction validation and interpretability were improved by the implementation of XAI techniques, which included LIME SHAP together with Grad-CAM. Visualizations from Grad-CAM demonstrated that R-Net accurately detects cancerous regions, contributing to the accuracy of its diagnostic decisions. The detailed explanations provided by LIME and SHAP helped identify problematic predictions, thereby enhancing the trustworthiness of the model. The performance evaluation of R-Net classification is presented in Figure 21, which compares the accuracy between individual CNNs, transfer learning models, and ensemble methods. Figure 21: Accuracy comparison among individual CNN, transfer learning, and ensemble models. The comparative performance results in Table 13 show that R-Net produces better outcomes than XGBoost, Ensemble, Transfer Learning, and Interpretable Machine Learning Systems by achieving 99.37% accuracy. Table 13: Performance comparison of the proposed R-Net model with other models. Authors Model No. of Classes Accuracy (%) Georgiou et al., (2024) XGBoost, Ensemble 3 89.79% Sirinukunwattana et al., (2022) Consensus Molecular Subtype Classification 4 92% Neto et al. (2024). Interpretable ML System 4 94.5% Kassani et al., (2022) Transfer Learning 4 95% Yamashita et al. (2023). DL for Microsatellite Instability Prediction 3 97.3% Elshamy et al., (2024) Modified DNN Optimizer 3 98% Proposed Model R-Net 6 99.37% The research establishes R-Net as a highly accurate and efficient model which can perform CRC classification. R-Net proves suitable for medical use because of its robust combination of user-friendly interpretability with high performance capabilities, along with low system requirements. Future researchers will continue to develop the model further, as well as enhance data augmentation methods, while conducting rigorous clinical assessments to improve reliability in medical diagnostic contexts. 6. Conclusion and Future Work The research investigates the effectiveness of DL models in CRC detection and classification by conducting three primary experiments. The researchers applied six CNN models, VGG16, ResNet50, DenseNet121, Xception, InceptionV3, and MobileNet, to analyze histopathological CRC images-secondly, the research utilized transfer learning techniques to enhance model classification results. The proposed R-Net achieved superior accuracy and efficiency while utilizing XAI techniques, including LIME, SHAP, and Grad-CAM. The RNet showed enhanced reliability as its XAI framework delivered valuable insights about model prediction features and pixel intensity testing between correct and incorrect classifications. The research study offers valuable results, but it also has some limitations. Using secondary datasets reduces the application range, which reveals the necessity of analyzing extensive, varied datasets for analysis. A wider test of the model was necessary because its training occurred exclusively through histopathological image analysis. Current research requires further investigation to establish the impact of transfer learning on lightweight CNN models, as the demonstrated results have not been promising. Medical expert confirmation serves as a requirement for models to acquire a credibility status. The evaluation of various program models is necessary to optimize their efficiency, performance, and adaptation capabilities before adoption for practical use. In conclusion, study results demonstrate that CNN technology proves highly efficient in identifying CRC during screening examinations. The R-Net system achieves high accuracy in medical image classification through its practical and lightweight structure, which protects readability. Modern research must link healthcare professionals with advanced imaging technology usage to enhance both DL CRC diagnosis detection methods and clinical diagnostic capabilities. References Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2023). Enhancing Tea Leaf Disease Detection Through Customized Vision Transformer and Hyperparameter Optimization. Available at SSRN, 4940688. https://doi.org/ Ahad, M. T., Bhowmik, A. C., Emon, Y. R., & Ahmed, F. (2024). A Customized Vision Transformer for Accurate Detection and Classification of Java Plum Leaf Disease. Available at SSRN, 4829650. https://doi.org/2 Ahad, M. T., Emon, Y. R., & Mustofa, S. (2024). Data of history: An open-source and multiformat wall image dataset of Panam city, a historical place. Data in Brief, 56, 110774. https://doi.org/6 Ahad, M. T., Li, Y., Song, B., & Bhuiyan, T. (2023). Comparison of CNN-based deep learning architectures for rice diseases classification. Artificial Intelligence in Agriculture, 9, 22-35. https://doi.org/231 Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). End User Interface Design of Mobile-Based Fish Disease Detection to Assist Fish Farmers. Available at SSRN, 4980536. https://doi.org/ Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). Fishdoc: A Mobile-Based Fish Disease Detection System Using Yolov8. Available at SSRN, 4899189. https://doi.org/ Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood cancer detection and classification using a Convolutional Neural Network. arXiv preprint, . https://doi.org/2 Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood cancer detection and classification using Convolutional Neural Network. arXiv e-prints, arXiv: 2409.06689. https://doi.org/ Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer Detection. arXiv preprint, . https://doi.org/4 Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer Detection. arXiv e-prints, arXiv: 2409.06699. https://doi.org/ Ahad, M. T., Mustofa, S., Rahman, M. S., Song, B., & Li, Y. (2023). A Comprehensive Study on Deep Feature Extraction to Detect and Classify Soursop Leaf Disease. Available at SSRN, 4845099. https://doi.org/ Ahad, M. T., Mustofa, S., Sarker, A., & Emon, Y. R. (2024). Bdpapayaleaf: A dataset of papaya leaf for disease detection, classification, and analysis. Classification, and Analysis. https://doi.org/3 Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using novel CNNbased ensemble approach. arXiv preprint, . https://doi.org/1 Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using novel CNNbased ensemble approach. arXiv e-prints, arXiv: 2410.05272. https://doi.org/ Ahad, M. T., Preanto, S. A., Song, B., & Li, Y. (2023). Gan-Generated Spectrogram Detection and Classification for Heartbeat Classification Using a Vision Transformer. Available at SSRN, 4892869. https://doi.org/ Ahad, M. T., Song, B., & Li, Y. (2024). A Comparison of Convolutional Neural Network, Transfer Learning and Ensemble Technique for Brain Tumour Detection of Classification. Transfer Learning and Ensemble Technique for Brain Tumour Detection of... https://doi.org/1 Ahmed, F., & Ahad, M. T. (2023). Machine learning-based tea leaf disease detection: A comprehensive review. arXiv preprint, . https://doi.org/22 Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2023). A fuzzy-based vision transformer model for tea leaf disease detection. International Conference on Trends in Computational and Cognitive... https://doi.org/5 Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2024). A Fuzzy-Based Vision Transformer Model for Tea Leaf Disease Detection Check for updates. Proceedings of the Fifth International Conference on Trends in Computational... https://doi.org/1 Akilandeswari, A., Sungeetha, D., Joseph, C., Thaiyalnayaki, K., Baskaran, K., Jothi Ramalingam, R., ... & Meansbo Hadish, K. (2022). Automatic detection and segmentation of colorectal cancer with deep residual convolutional neural network. Evidence‐Based Complementary and Alternative Medicine, 2022(1), 3415603. Alabdulqader, E. A., Umer, M., Alnowaiser, K., Wang, H., Alarfaj, A. A., & Ashraf, I. (2024). Image Processing-based Resource-Efficient Transfer Learning Approach for Cancer Detection Employing Local Binary Pattern Features. Mobile Networks and Applications, 1-17. Alabi, R. O., Elmusrati, M., Leivo, I., Almangush, A., & Mäkitie, A. A. (2023). Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP. Scientific Reports, 13(1), 8984. Aldughayfiq, B., Ashfaq, F., Jhanjhi, N. Z., & Humayun, M. (2023). Explainable AI for retinoblastoma diagnosis: interpreting deep learning models with LIME and SHAP. Diagnostics, 13(11), 1932. Alzahrani, S. M., Al Doghaither, H. A., & Al-Ghafari, A. B. (2021). General insight into cancer: An overview of colorectal cancer. Molecular and clinical oncology, 15(6), 271. Arthi, N. T., Mubin, K. E., Rahman, J., Rafi, G. M., Sheja, T. T., Reza, M. T., & Alam, M. A. (2022, December). Decentralized federated learning and deep learning leveraging xai-based approach to classify colorectal cancer. In 2022 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE) (pp. 1-6). IEEE. Attallah, O., Aslan, M. F., & Sabanci, K. (2022). A framework for lung and colon cancer diagnosis via lightweight deep learning models and transformation methods. Diagnostics, 12(12), 2926. Auzine, M. M., Heenaye-Mamode Khan, M., Baichoo, S., Gooda Sahib, N., Bissoonauth-Daiboo, P., Gao, X., & Heetun, Z. (2024). Development of an ensemble CNN model with explainable AI for the classification of gastrointestinal cancer. Plos one, 19(6), e0305628. Bhowmik, A. C., Ahad, M. T., & Emon, Y. R. (2023). Machine Learning-Based Jamun Leaf Disease Detection: A Comprehensive Review. arXiv preprint, . https://doi.org/4 Bhowmik, A. C., Ahad, M. T., Emon, Y. R., Ahmed, F., Song, B., & Li, Y. (2024). A customised Vision Transformer for accurate detection and classification of Java Plum leaf disease. Smart Agricultural Technology, 8, 100500. https://doi.org/22 Biplob, T. I., Rabbany, G., Emon, Y. R., Ahad, M. T., & Fimu, F. A. (2023). An Optimized Vision Based Transformer for Lungs Cancer Detection. International Conference on Trends in Computational and Cognitive ... https://doi.org/ Bychkov, D., Linder, N., Turkki, R., Nordling, S., Kovanen, P. E., Verrill, C., ... & Lundin, J. (2018). Deep learning based tissue analysis predicts outcome in colorectal cancer. Scientific reports, 8(1), 3395. Davila, A., Colan, J., & Hasegawa, Y. (2024). Comparison of fine-tuning strategies for transfer learning in medical image classification. Image and Vision Computing, 146, 105012. deSouza, A., Nadkarni, S., Roy, S., Kataria, P., Ramaswamy, A., & Ostwal, V. (2024). Colon Cancer. In Tata Memorial Centre Textbook of Oncology (pp. 565-592). Singapore: Springer Nature Singapore. Dulf, E., Bledea, M., Mocan, T., & Mocan, L. (2021). Automatic Detection of Colorectal Polyps Using Transfer Learning. Sensors (Basel, Switzerland), 21. https://doi.org/10.3390/s21175704. Echle, A., Grabsch, H. I., Quirke, P., van den Brandt, P. A., West, N. P., Hutchins, G. G., ... & Kather, J. N. (2020). Clinical-grade detection of microsatellite instability in colorectal tumors by deep learning. Gastroenterology, 159(4), 1406-1416. Elshamy, R., Abu-Elnasr, O., Elhoseny, M., & Elmougy, S. (2024). Enhancing colorectal cancer histology diagnosis using modified deep neural networks optimizer. Scientific Reports, 14(1), 19534. Emon, Y. R., & Ahad, M. T. (2024). Multi-format open-source sweet orange leaf dataset for disease detection, classification, and analysis. Data in Brief, 55, 110713. https://doi.org/17 Emon, Y. R., Rabbani, M. G., Ahad, M. T., & Ahmed, F. (2023). A Comprehensive Literature Review on Sweet Orange Leaf Diseases. arXiv preprint, . https://doi.org/ Emon, Y. R., Rabbani, M. G., Ahad, M. T., & Ahmed, F. (2023). A Comprehensive Literature Review on Sweet Orange Leaf Diseases. arXiv e-prints, arXiv: 2312.01756. https://doi.org/ Fadlallah, H., El Masri, J., Fakhereddine, H., Youssef, J., Chemaly, C., Doughan, S., & Abou-Kheir, W. (2024). Colorectal cancer: Recent advances in management and treatment. World journal of clinical oncology, 15(9), 1136-1156. https://doi.org/10.5306/wjco.v15.i9.1136 Georgiou, N., Kolias, P., & Chouvarda, I. (2024). Machine Learning Models for the Classification of Histopathological Images of Colorectal Cancer. Applied Sciences, 14(22), 10731. Ghasemi, A., Hashtarkhani, S., Schwartz, D. L., & Shaban‐Nejad, A. (2024). Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review. Cancer Innovation, 3(5), e136. Group, M. (2022). EBHI-SEG (Version 1). figshare. https://doi.org/10.6084/m9.figshare.21540159.v1 Iqbal, M. J., Javed, Z., Sadia, H., Qureshi, I. A., Irshad, A., Ahmed, R., ... & Sharifi-Rad, J. (2021). Clinical applications of artificial intelligence and machine learning in cancer diagnosis: looking into the future. Cancer cell international, 21(1), 270. Iqbal, S., & Qureshi, A. N. (2022). A heteromorphous deep CNN framework for medical image segmentation using local binary pattern. Ieee Access, 10, 63466-63480. Islam, R., Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2024). Mental health diagnosis from voice data using convolutional neural networks and vision transformers. Journal of Voice. https://doi.org/1 Karthikeyan, A., Jothilakshmi, S., & Suthir, S. (2024). Colorectal cancer detection based on convolutional neural networks (CNN) and ranking algorithm. Measurement: Sensors, 31, 100976. Kassani, S. H., Kassani, P. H., Wesolowski, M. J., Schneider, K. A., & Deters, R. (2022). Deep transfer learning based model for colorectal cancer histopathology segmentation: A comparative study of deep pre-trained models. International Journal of Medical Informatics, 159, 104669. Kazeminia, Salome, Christoph Baur, Arjan Kuijper, Bram van Ginneken, Nassir Navab, Shadi Albarqouni, and Anirban Mukhopadhyay. "GANs for medical image analysis." Artificial intelligence in medicine 109 (2020): 101938. Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial intelligence review, 53, 5455-5516. Khazaee Fadafen, M., & Rezaee, K. (2023). Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Scientific Reports, 13(1), 8823. Korbar, B., Olofson, A. M., Miraflor, A. P., Nicka, C. M., Suriawinata, M. A., Torresani, L., ... & Hassanpour, S. (2017). Deep learning for classification of colorectal polyps on whole-slide images. Journal of pathology informatics, 8(1), 30. Lou, J., Xu, J., Zhang, Y., Sun, Y., Fang, A., Liu, J., ... & Ji, B. (2022). PPsNet: An improved deep learning model for microsatellite instability high prediction in colorectal cancer from whole slide images. Computer Methods and Programs in Biomedicine, 225, 107095. Luo, N., Zhong, X., Su, L., Cheng, Z., Ma, W., & Hao, P. (2023). Artificial intelligence-assisted dermatology diagnosis: from unimodal to multimodal. Computers in Biology and Medicine, 107413. Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2023). Scratch vision transformer model for diagnosis grape leaf disease. International Conference on Trends in Computational and Cognitive... https://doi.org/7 Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2024). Scratch Vision Transformer Model for Diagnosis Grape Leaf Disease Check for updates. Proceedings of the Fifth International Conference on Trends in Computational... https://doi.org/ Mamun, S. B., Payel, I. J., Ahad, M. T., Atkins, A. S., Song, B., & Li, Y. (2025). Grape Guard: A YOLO-based mobile application for detecting grape leaf diseases. Journal of Electronic Science and Technology, 23(1), 100300. https://doi.org/2 Moolchandani, D., Kumar, A., & Sarangi, S. R. (2021). Accelerating CNN inference on ASICs: A survey. Journal of Systems Architecture, 113, 101887. Morid, M. A., Borjali, A., & Del Fiol, G. (2021). A scoping review of transfer learning research on medical image analysis using ImageNet. Computers in biology and medicine, 128, 104115. Mustofa, S., Ahad, M. T., Emon, Y. R., & Sarker, A. (2024). BDPapayaLeaf: A dataset of papaya leaf for disease detection, classification, and analysis. Data in Brief, 57, 110910. https://doi.org/7 Mustofa, S., Emon, Y. R., Mamun, S. B., Akhy, S. A., & Ahad, M. T. (2025). A novel AI-driven model for student dropout risk analysis with explainable AI insights. Computers and Education: Artificial Intelligence, 8, 100352. https://doi.org/5 Mustofa, S., Munna, M. M. H., Emon, Y. R., Rabbany, G., & Ahad, M. T. (2023). A comprehensive review on plant leaf disease detection using deep learning. arXiv preprint, . https://doi.org/37 Nanglia, S., Ahmad, M., Khan, F. A., & Jhanjhi, N. Z. (2022). An enhanced Predictive heterogeneous ensemble model for breast cancer prediction. Biomedical Signal Processing and Control, 72, 103279. Neto, P. C., Montezuma, D., Oliveira, S. P., Oliveira, D., Fraga, J., Monteiro, A., ... & Cardoso, J. S. (2024). An interpretable machine learning system for colorectal cancer diagnosis from pathology slides. NPJ precision oncology, 8(1), 56. Nguyen, H. T. T., Cao, H. Q., Nguyen, K. V. T., & Pham, N. D. K. (2021, May). Evaluation of explainable artificial intelligence: Shap, lime, and cam. In Proceedings of the FPT AI Conference (pp. 1-6). Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv preprint, . https://doi.org/7 Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL). arXiv preprint, . https://doi.org/5 Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL). arXiv e-prints, arXiv: 2409.06687. https://doi.org/ Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv eprints, arXiv: 2409.06671. https://doi.org/ Prezja, F., Annala, L., Kiiskinen, S., Lahtinen, S., Ojala, T., Ruusuvuori, P., & Kuopio, T. (2024). Improving performance in colorectal cancer histology decomposition using deep and ensemble machine learning. Heliyon, 10(18). Raju, M. S. N., & Rao, B. S. (2022, December). Classification of Colon Cancer through analysis of histopathology images using Transfer Learning. In 2022 IEEE 2nd International Symposium on Sustainable Energy, Signal Processing and Cyber Security (iSSSC) (pp. 1-6). IEEE. Sarwinda, D., Paradisa, R. H., Bustamam, A., & Anggia, P. (2021). Deep learning in image classification using residual network (ResNet) variants for detection of colorectal cancer. Procedia Computer Science, 179, 423-431. Sharkas, M., & Attallah, O. (2024). Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Scientific Reports, 14(1), 6914. Sharma, N., Sharma, K. P., Mangla, M., & Rani, R. (2023). Breast cancer classification using snapshot ensemble deep learning model and t-distributed stochastic neighbor embedding. Multimedia Tools and Applications, 82(3), 4011-4029. Sharma, P., Bora, K., Kasugai, K., & Balabantaray, B. K. (2020). Two Stage Classification with CNN for Colorectal Cancer Detection. Oncologie (Tech Science Press), 22(3). Shi, L., Li, X., Hu, W., Chen, H., Chen, J., Fan, Z., ... & Li, C. (2023). EBHI-Seg: A novel enteroscope biopsy histopathological hematoxylin and eosin image dataset for image segmentation tasks. Frontiers in Medicine, 10, 1114673. Sirinukunwattana, K., Domingo, E., Richman, S. D., Redmond, K. L., Blake, A., Verrill, C., ... & Koelzer, V. H. (2021). Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning. Gut, 70(3), 544-554. Tanwar, S., Vijayalakshmi, S., Sabharwal, M., Kaur, M., AlZubi, A. A., & Lee, H. N. (2022). Detection and classification of colorectal polyp using deep learning. BioMed Research International, 2022. Thakur, N., Yoon, H., & Chong, Y. (2020). Current trends of artificial intelligence for colorectal cancer pathology image analysis: a systematic review. Cancers, 12(7), 1884. Xu, L., Walker, B., Liang, P. I., Tong, Y., Xu, C., Su, Y. C., & Karsan, A. (2020). Colorectal cancer detection based on deep learning. Journal of Pathology Informatics, 11(1), 28. Xue, D., Zhou, X., Li, C., Yao, Y., Rahaman, M. M., Zhang, J., ... & Sun, H. (2020). An application of transfer learning and ensemble learning techniques for cervical histopathology image classification. IEEE Access, 8, 104603-104618. Yamashita, R., Long, J., Longacre, T., Peng, L., Berry, G., Martin, B., ... & Shen, J. (2021). Deep learning model for the prediction of microsatellite instability in colorectal cancer: a diagnostic study. The Lancet Oncology, 22(1), 132-141. Yu, G., Sun, K., Xu, C., Shi, X. H., Wu, C., Xie, T., ... & Deng, H. W. (2021). Accurate recognition of colorectal cancer with semi-supervised deep learning on pathological images. Nature communications, 12(1), 6311. Zhou, D., Tian, F., Tian, X., Sun, L., Huang, X., Zhao, F., ... & Li, X. (2020). Diagnostic evaluation of a deep learning model for optical diagnosis of colorectal cancer. Nature communications, 11(1), 2961.
|
2509.16249
|
Explainability Needs in Agriculture:
Exploring Dairy Farmers’ User Personas
Mengisti Berihu Girmay
University of Kaiserslautern-Landau
Kaiserslautern, Germany
mengisti.berihu@rptu.de
Jakob Droste
Software Engineering Group
Leibniz University Hannover
Hannover, Germany
jakob.droste@inf.uni-hannover.de
Hannah Deters
Software Engineering Group
Leibniz University Hannover
Hannover, Germany
hannah.deters@inf.uni-hannover.de
Joerg Doerr
University of Kaiserslautern-Landau
and Fraunhofer IESE
Kaiserslautern, Germany
joerg.doerr@iese.fraunhofer.de
Abstract—Artificial Intelligence (AI) promises new opportuni-
ties across many domains, including agriculture. However, the
adoption of AI systems in this sector faces several challenges.
System complexity can impede trust, as farmers’ livelihoods
depend on their decision-making and they may reject opaque
or hard-to-understand recommendations. Data privacy concerns
also pose a barrier, especially when farmers lack clarity regarding
who can access their data and for what purposes.
This paper examines dairy farmers’ explainability require-
ments for technical recommendations and data privacy, along
with the influence of socio-demographic factors. Based on a
mixed-methods study involving 40 German dairy farmers, we
identify five user personas through k-means clustering. Our
findings reveal varying requirements, with some farmers pre-
ferring little detail while others seek full transparency across
different aspects. Age, technology experience, and confidence
in using digital systems were found to correlate with these
explainability requirements. The resulting user personas offer
practical guidance for requirements engineers aiming to tailor
digital systems more effectively to the diverse requirements of
farmers.
Index Terms—Requirements Engineering, Agriculture, User
Personas, Explainability, Human-centered AI
I. INTRODUCTION
Artificial Intelligence (AI) is advancing in many areas
and promises new opportunities to increase productivity and
efficiency. Combined with the increasing availability of low-
cost sensors, AI has also sparked renewed interest in precision
agriculture, an approach that uses data-driven insights to
optimize farm management and resource use. AI is also finding
its way into livestock farming, where pattern recognition can
help detect animal health issues, e.g., lameness using image
data or respiratory diseases such as coughing through sound
analysis. This not only promises to improve animal welfare,
but also to enable early interventions, reducing the need for
antibiotics and helping prevent larger disease outbreaks [1].
However, despite the large potential and many use cases,
practical adoption of AI systems is still lacking. AI systems
can raise concerns for farmers, particularly when the reasoning
behind decisions or predictions is unclear. Unlike rule-based
systems of conventional computer programming, in which
the decision-making logic is predefined and transparent, AI
models tend to operate as “black boxes”, making it difficult or
impossible for users (and often even developers) to understand
exactly how certain conclusions are reached.
This “black box” issue presents a particular trust challenge
for AI in agriculture on two interrelated levels. First, when
farmers rely on AI systems for technical guidance but cannot
comprehend the rationale behind recommendations, they may
disregard or reject them entirely — given that their livelihood
depends on their decisions. A lack of transparency, high
system complexity, and fears of error or loss of control can
significantly undermine trust in such systems [2]. Second,
many farmers are concerned about data privacy and fear that
their data could be accessed by authorities or monetized by
companies without their consent [3]. If AI systems and their
data handling practices are perceived as opaque or externally
imposed, they may be met with resistance and ultimately go
unused. Thus, AI should not only be intuitive for end users but
also transparent and adapted to their requirements to support
user acceptance and adoption.
Research in both agricultural [4] and non-agricultural do-
mains [5, 6] has shown that integrating explanations into
AI may increase user trust and promote adoption. However,
explanations can also have a negative impact on usability
if they are not helpful and rather burden the user (e.g., by
cluttering the user interface) [5]. For this reason, explanations
should be well adapted to user requirements.
An established method for capturing the requirements of
stakeholder groups is the development of user personas. To
create them, users with similar requirements are clustered and
examined for similarities, e.g., with regard to their sociodemo-
graphic characteristics. In this way, user personas systemati-
cally represent the needs and human values of different user
groups and help to ensure that users who might otherwise be
overlooked are also addressed. By making end users’ require-
arXiv:2509.16249v1 [cs.CY] 17 Sep 2025
ments more tangible, personas enable requirements engineers
better empathize with them and guide system development in
a way that is ethically sound and inclusive [7].
Building on this, we aim to explore the requirements of
dairy farmers in terms of explainability. To build a data
foundation, we surveyed 40 dairy farmers in Germany and in-
terviewed eight of them for deeper insights into their responses
and views on system explanations. The survey focused on their
requirements regarding explanations, including the desired
level of detail in technical and data privacy explanations across
scenarios related to feeding, breeding, and animal health.
We clustered their feedback into five user personas using k-
means. Each persona represents a typical farmer, combining
sociodemographic traits with related explainability require-
ments. These personas are intended to support requirements
engineers by providing a more tangible picture of the target
group and their various explanatory needs.
The remainder of the paper is structured as follows. Sec-
tion II summarizes relevant background information and re-
lated work. In Section III, we present our research design and
methodology. Section IV presents the findings of our work. In
Section V, we discuss our results. Section VI concludes the
paper with a brief summary and an outlook on future work.
II. BACKGROUND AND RELATED WORK
In the following, we compile relevant background informa-
tion and related work.
A. Explainability
Explainability commonly refers to the extent to which
systems that provide assessments or recommendations make
their reasoning transparent [8]. The concept relates to how
a system explains its decisions rather than what it does. A
key aspect in this context is identifying the intended audience.
For example, while a machine learning engineer may look for
insights into model internals, end users are more likely to look
for support in interpreting the results [9].
Explainability has gained significant attention in recent
years, especially in the context of AI [10, 11]. Applications
include healthcare [12], automotive [13], and finance [14]. As
systems grow more complex, the concept is also gaining im-
portance beyond AI, including in areas like data privacy [15].
Gaining a better understanding of user behavior and when
and how detailed explanations users prefer has the potential
to build trust and improve adoption [15–17]. To achieve this,
explanations must be appropriately tailored, as excessive or
poorly timed information can negatively affect user experi-
ence [16, 18]. Effective explanations provide enough detail
to support understanding without overwhelming or confusing
the user [19]. This suggests a “one-size-fits-all” approach to
explainability is not feasible.
In order to better understand what constitutes good ex-
planations, K¨ohl et al. [6] recommend bringing together a
diverse group of representatives from relevant target groups
(in our case farmers). Including participants with varying
demographics (e.g., age, gender, and experience) can help to
gather feedback from diverse perspectives. By asking targeted
questions about how tailored explanations should ideally look
in each individual case, a well-founded understanding of what
explainability should encompass can be established [20]. In
this paper, we combine a quantitative method (survey) with a
qualitative approach (interviews) to capture both broad input
and in-depth insights into user requirements, following an
approach successfully applied in prior research [7, 21].
Efforts to improve explainability are also made in agri-
culture, although the focus is mostly on developers and re-
searchers, with little involvement from end users. For instance,
Shams et al. [22] integrated an Explainable AI (XAI)-based
algorithm into a yield estimation system, but did not include
farmers in the validation. Similarly, studies on plant disease
detection [23] or plant identification [24] are mostly concerned
with technical interpretability over user-centered evaluation.
We argue that farmers and their needs should play a more
central role, as their satisfaction ultimately determines accep-
tance and practical adoption. Studies have shown that adoption
can also be influenced by sociodemographic factors such as
age, farm size, education, and prior experience with digital
systems [25]. We include these factors to examine how they
correlate with farmers’ explainability requirements.
B. User Personas
The concept of user persona studies is a promising way to
analyze and group user requirements [26]. While there is no
universal template for defining user personas, they commonly
categorize stakeholders based on shared traits [27]. In doing
so, personas help explore how users interact with systems
and what factors, such as sociodemographics, influence their
behavior. Scenarios can further support this by illustrating user
behavior in specific contexts [26].
User personas have been used for requirements engineering
in a wide range of areas, including security and data privacy
[28, 29], as well as explainability [7]. To the best of our
knowledge, however, there are no studies in the literature
that apply the user persona concept to study explainability
requirements in the agricultural domain. Given the diversity
of domains, such as medical, automotive, and agriculture,
findings from one area are difficult to generalize to another. For
example, explainability needs in the automotive domain may
focus on understanding system alerts and component failures,
whereas farmers face different tasks that likely lead to other
distinct requirements.
III. RESEARCH DESIGN AND METHODOLOGY
The primary goal of this work is to explore dairy farmers’
explainability requirements, develop user personas, and ana-
lyze correlations with sociodemographic factors. In the survey
shown to farmers, we asked about their preferred level of detail
in technical explanations (assessments or recommendations by
the system) and data privacy explanations (clarity on where
and how data is stored and processed). Interview responses
were used to derive persona traits and capture farmers’ back-
grounds and attitudes. Both the quantitative and qualitative
Fig. 1: User persona creation process
parts of the study were conducted in parallel. Our research is
guided by the following two questions:
RQ1:
Which personas can be identified for dairy farmers,
and what are their explainability requirements?
RQ2:
Do farmers’ sociodemographic factors correlate
with their explainability requirements?
A. Survey
We designed the survey based on three scenarios in major
decision areas for dairy farmers: feeding, breeding, and health
management. Each scenario presented a hypothetical system
assessment (e.g., a cow infected with mastitis) alongside five
explanations with varying levels of detail. Participants chose
the level of detail that suited them best, with the order of
options randomized to avoid primacy-recency effects [30]. To
this end, we presented exemplary explanations in order to
make the various levels of detail more tangible. By using con-
crete examples, participants did not have to interpret abstract
labels (such as ’little detail’ or ’high detail’), which could have
rendered the selection process ambiguous and subjective due
to varying understandings. Similarly, participants chose one
of five data privacy explanations ranging from no explanation
to highly detailed information, also presented with concrete
examples in randomized order. This yielded six variables,
namely the desired level of detail for technical and data privacy
explanations for each of the three areas. The survey further
gathered sociodemographic data to enable correlation anal-
ysis with the requirements. The sociodemographic variables
included age, gender, farm size, highest educational degree,
frequency and duration of digital system use, and years of
experience in dairy farming.
B. Interviews
The interviews were organized informally based on the
participants’ responses to the survey. The questions asked did
not follow a predefined interview guide. Instead, each farmer
was asked to explain their answers for deeper insight. Inter-
view lengths varied based on farmers’ availability and ranged
from 10 to 25 minutes. The purpose of the interviews was to
obtain qualifying background information on the answers to
the survey questions.
C. Data Collection and Analysis
Farmers were contacted via German farmer associations and
personal contacts. Data collection took place from December
2024 to February 2025. Eight of the participants agreed to be
interviewed in addition to filling out the survey. The online
survey was conducted in German using LimeSurvey [31] and
later translated into English. Out of 57 participants, 40 com-
pleted the survey in full, and only these complete responses
were analyzed. The interviewed farmers were contacted based
on their availability.
We applied Principal Component Analysis (PCA) [32] to
reduce the data’s dimensionality. PCA transforms data into
uncorrelated Principal Components (PCs) that capture the most
significant variance. In doing so, we selected components that
together accounted for 95% of the total variance.
For clustering, we used k-means, an established method for
partitioning PCA-reduced data [33]. The k-means algorithm
produces clear, distinct clusters with unambiguous assignments
(unlike methods such as DBSCAN [34] where assignments can
be ambiguous). This aspect facilitates deriving user personas.
Using the selected PCs, we determined the optimal cluster
number (k) to maximize within-cluster similarity and cluster
separation. We applied the elbow method [35], which identifies
the point where the within-cluster sum of squares (WCSS)
decline slows (see Fig. 2a), and the silhouette score [36],
which evaluates cluster quality from +1 (well-separated) to
-1 (misclassified) (see Fig. 2b). Both methods indicated k = 5
as optimal for k-means clustering.
250
200
150
100
50
0
2
3
4
5
6
k
WCSS
(a) Elbow graph
0.4
0.3
0.2
0.1
0.0
2
3
4
5
6
k
Score
(b) Silhouette scores
Fig. 2: Determining k for k-means: Elbow and Silhouette
To answer RQ2, which explores how sociodemographic
factors relate to preferred explanation detail (technical and
privacy) across the three scenarios, we tested null hypotheses
for each pairwise combination of variables. Pearson correlation
[37] was used for ordinal data, and the Mann-Whitney U test
[38] for nominal data. Relationship direction was determined
via r-values (Pearson) and z-values (Mann-Whitney), with sig-
nificance assessed by p-values. Correlations were considered
significant at p < 0.05.
D. Data Availability Statement
We provide the survey questionnaire with all questions and
answer options as supplementary material [39]. The raw data
from the survey is subject to confidentiality agreements with
the participants and cannot be shared publicly.
IV. FINDINGS
Fig. 3 illustrates the resulting user personas to address RQ1.
The replies from a total of 40 dairy farmers were evaluated,
of which 12 were women and 28 men. Their average age was
39.2 years (range: 22–67, median: 38). Farm sizes varied: three
managed up to 30 cows, 17 had 31–70, another 17 had 71–150,
2 had 151–300, and one managed over 300 cows.
Regarding education, three farmers were career chang-
ers, seven had agricultural training, ten had attended higher
agricultural schools, twelve were certified master farmers,
and eight held university degrees in agriculture. We ranked
these educational backgrounds in the given order. Regard-
ing confidence with digital systems: Three farmers described
themselves as less confident, 18 as somewhat confident, ten
as confident, and nine as very confident. Regarding years of
experience in dairy farming: One farmer reported up to one
year, twelve had 6–10 years, 19 had 11–20 years, and eight
had more than 20 years. Regarding frequency of digital system
use: Two farmers reported no use, two used them up to once a
month, 14 several times per month, six several times per week,
and 16 almost daily. Regarding duration of digital system use:
One farmer had used digital systems for up to one year, 13
for 2–3 years, 14 for 4–5 years, and twelve for more than
five years. Participants were also asked whether dairy farming
was their main source of income. However, due to the lack
of variability (37 answered yes, only three answered no), we
excluded this parameter from the analysis.
A. User Personas
The compiled user personas are primarily based on the
quantitative survey data, with ordinal variables averaged to
define attributes like age and farm size. Gender was assigned
based on cluster majority. For each user persona, we used
the qualitative interview feedback from the corresponding
participants to refine the personality and compile an illustrative
quote. Furthermore, we illustrated the desired level of detail
for explanations regarding technical aspects (abbreviated as
“technical details”) and data privacy (abbreviated as “privacy
details”).
1) The Carefree Power-User: At 29, Tobias Meyer manages
90 cows and is highly engaged with digital systems. With six
years of dairy farming experience and a degree in agriculture,
he confidently uses digital systems daily and has done so
for four years. He values detailed technical explanations to
understand system recommendations and optimize his oper-
ations. Data privacy is not a major concern for him, as he
generally trusts providers. “The systems save me time because
I don’t have to check everything manually. But I need to
know why I get certain recommendations and what factors
led to them. As for data privacy, I just hope companies handle
my data properly”. This persona accounted for 12.5% of the
participants.
2) The Detail-Oriented Privacy-Skeptic: Anna Weber, 39,
has run her 50-cow dairy farm for 15 years. Educated at a
higher agricultural school, she uses digital systems almost
daily and has done so for five years. While she sees clear
benefits in saving time and boosting efficiency, some function-
alities are not always intuitive for her and she wants detailed
technical explanations. Data privacy is a major concern for her.
She insists on knowing how and where her data is processed
and expects full transparency. “I find these systems useful, but
I need to know exactly what happens to my data. If a system
makes a recommendation, I want to understand why and be
able to verify it”. This persona accounted for 25.0% of the
participants.
3) The Minimalist: Lukas Beck, 35, has 10 years of experi-
ence in dairy farming and manages 50 cows. With agricultural
training, he is moderately confident using digital systems
and has worked with them for five years. He uses digital
systems several times a month, but prefers to rely on his
own experience. He values clear explanations but does not
want systems to overwhelm him with excessive detail, both
in technical content and data privacy. “I want to know when
animals are having issues or are in heat, but I don’t need long
explanations. Systems should simply work well and explain
things only when I ask”. This persona accounted for 17.5%
of the participants.
4) The Pragmatist: Michael Fischer, 44, is a certified mas-
ter farmer with 17 years of experience, managing 110 cows.
He has been using digital systems multiple times per week for
four years and feels confident with them. He values technical
explanations that clarify how and why systems make decisions,
but without unnecessary detail. Regarding data privacy, basic
transparency is enough for him and he does not seek in-depth
detail. Michael sees digital systems as useful aids that support
(rather than replace) his expertise. “Explanations are useful if
they give me something I can act on. I’m not overly worried
about data privacy, but I still expect basic transparency”. This
persona accounted for 20.0% of the participants.
5) The Selective Privacy-Skeptic: Johann Braun, 62, has
20 years of dairy farming experience and manages 70 cows.
Although digital systems were not part of his education,
he adopted them confidently four years ago and now uses
them weekly. He relies on them in areas where he feels less
confident, like breeding or health management, and expects
The Carefree
The Detail-Oriented
The Minimalist
The Pragmatist
The Selective
Power-User
Privacy-Skeptic
Privacy-Skeptic
Tobias Meyer
Anna Weber
Lukas Beck
Michael Fischer
Johann Braun
Technical details
Technical details
Technical details
Technical details
Technical details
Privacy details
Privacy details
Privacy details
Privacy details
Privacy details
Fig. 3: Dairy farmer user personas
moderate explanations. In areas where he is confident, like
feeding, he prefers to rely on his own judgment and does
not seek explanations. Data privacy is a major concern and
he expects full transparency, mostly due to worries about
unauthorized access. “If I lack experience, explanations are of
great help. But I know my feeding routine in and out and don’t
need a system explaining me how to do that. In other areas, I
want to understand why the system suggests something”. This
persona accounted for 25.0% of the participants.
B. Correlation Analysis
Table I shows the results for RQ2, which investigates
correlations between farmers’ sociodemographic factors and
their requirements for explanations. The table shows only
correlations where the null hypothesis was rejected, implying
significant correlations (p < 0.05).
The analysis led to the following findings. F1: In the area
of feeding, female farmers preferred more detailed technical
explanations than the male participants. F2 and F3: Farmers
TABLE I: Significant correlations (p < 0.05) between sociode-
mographic factors (Var. 1) and explanation needs (Var. 2)
Finding
Variable 1
Variable 2
r-value or z-value
p-value
F1
dGender
eFeedTech
z = -2.668
p = 0.008
F2
dAge
eBreedPriv
r = 0.453
p = 0.003
F3
dAge
eFeedTech
r = -0.340
p = 0.032
F4
dSysConf
eBreedPriv
r = 0.403
p = 0.010
F5
dSysConf
eFeedTech
r = -0.451
p = 0.003
F6
dSysFreq
eFeedTech
r = 0.415
p = 0.008
F7
dSysDur
eFeedTech
r = 0.339
p = 0.032
dGender
Gender (female, male, other)
dAge
Age group (≤25, 26 to 40, 41 to 60, ≥61)
dSysConf
Confidence in using digital systems (low to high)
dSysFreq
Frequency of digital system use (not at all to almost daily)
dSysDur
Years of digital system use (≤1 to > 5 years)
eFeedTech
Desired detail level for feeding-related technical explanations
eBreedPriv
Desired detail level for breeding-related privacy explanations
of older age showed greater concern about data privacy in
breeding and preferred more detailed technical explanations in
feeding. F4 and F5: Farmers with higher confidence in using
digital systems desired more detailed data privacy explanations
in breeding and less technical detail in feeding. F6 and F7:
Farmers with more frequent or longer-term use of digital
systems tend to desire more detailed technical explanations
in feeding.
C. Threats to Validity
Our research faces several validity threats that we discuss
below in line with Wohlin et al. [40].
a) Construct Validity: All participants were dairy farmers
and thus relevant stakeholders. Although our survey relied
on hypothetical scenarios, we included realistic explanation
examples to reduce hypothetical bias.
The quality of user personas depends on how the underlying
data is gathered. Personas based solely on survey data risk
being shallow, whereas personas based only on interviews lack
quantitative grounding. We used a mixed-methods approach to
combine both methodologies.
Our study focused on text-based explanations, omitting
other forms like audio or visuals. As this work exists in
the early stages of explainability research in agriculture, we
decided to focus on information content and granularity for the
development of our personas. Future research into presentation
forms might yield interesting findings on how farmers prefer
their explanations to be provided and presented.
b) Internal Validity: Designer bias is a common issue
in persona development. To reduce this, two explainability
researchers without ties to digital agriculture independently
reviewed the personas. Nonetheless, some bias may remain.
c) Conclusion Validity: With 40 participants, our find-
ings have limited generalizability. However, given the chal-
lenge of recruiting farmers, we consider our sample mean-
ingful. We do not claim full coverage of all explainability
personas among German farmers.
We applied k-means clustering after PCA, with k = 5 sup-
ported by elbow and silhouette methods. The clusters appeared
balanced, supporting their use as a persona basis. Correlation
analyses used suitable methods (Pearson’s r, Mann-Whitney
U), with all findings meeting 5% significance and indicating
moderate correlations, suggesting representativeness.
d) External Validity: All participants were German, lim-
iting generalizability across regions. While age diversity was
broad, 70% of participants were male. However, this aligns
with BMEL’s 2020 data [41] showing a 36% share of women
among German farmers.
V. DISCUSSION
User personas, including those developed in our study, can
support requirements elicitation and analysis by highlighting
the diversity of requirements and preferences within a user
group. They can help developers gain a shared understanding
of their end users and encourage them to consider user types
that might otherwise be overlooked by fostering empathy for
these user groups. For example, personas may raise awareness
of users with strong data privacy concerns who need more
transparency and control, or users who prefer minimal expla-
nations and quick overviews, which might prompt developers
to offer an option to disable certain explanatory features.
The user personas presented in this paper explicitly refer
to explanation needs in agricultural systems, considering both
system transparency and user privacy. In isolation, explainabil-
ity and privacy – and their related user personas – have already
been addressed by previous works. Droste et al. [7] created
personas for explanation needs in everyday software systems.
They referred, among other things, to demographic factors
such as age and occupational field, and clustered software
users according to their need for different types of expla-
nations. Dupree et al. [29] created personas for privacy and
security practices. They considered knowledge about privacy
and security and the effort users are willing to put in to
protect security and privacy. In contrast, we considered distinct
factors such as experience in dairy farming and farm size, as
these factors are unique to our specific use case. Our personas
illustrate users’ diverse explanation needs in the dairy farming
sector and enable requirements engineers to empathize with
different types of farmers.
VI. CONCLUSIONS AND OUTLOOK
In our work, we explored the explainability requirements of
dairy farmers by creating user personas to gain insights into
their specific needs for explanation. In addition, we analyzed
correlations between sociodemographic factors and explain-
ability requirements. To this end, we defined three scenarios
that reflect everyday situations faced by dairy farmers.
Based on quantitative and qualitative data, we identified
five distinct dairy farmer personas with varying requirements
for technical and data privacy-related explanations. Three
personas (the Carefree Power-User, the Pragmatist, and the
Minimalist) prefer low to moderate detail on data privacy.
Their requirements for technical explanations vary widely,
from minimal to very detailed. Together, they represent half
of the participants. The remaining two personas (the Detail-
Oriented Privacy-Skeptic and the Selective Privacy-Skeptic)
require highly detailed data privacy explanations. They differ
in technical requirements: the former prefers extensive detail,
while the latter seeks information only in specific areas. These
two personas represent the other half of the participants. Our
findings partly contradict studies such as Chazette and Schnei-
der [16], Nunes and Jannach [42] suggesting that explanations
should be concise, as otherwise they could overwhelm most
users.
The correlation analysis revealed several significant links
between sociodemographic factors and explainability require-
ments. Previous research has shown that factors like age,
gender, and experience influence farmers’ acceptance and
trust [25]. Our study supports this and identifies gender, age,
confidence, frequency and duration of digital system use as
the most influential factors.
Requirements regarding explanations also varied across
scenarios, with different sociodemographic factors influencing
preferences depending on context. This highlights that farmers
prioritize explanations differently, underscoring the value of
context-aware design. Understanding these correlations could
help software providers tailor digital farming systems more
effectively.
We acknowledge the limitations of our study, but also see it
as a stepping stone for future research opportunities. Validating
user personas remains a challenge due to the absence of a
widely accepted standard. As a step toward addressing this,
Salminen et al. [27] proposed the Persona Perception Scale, a
framework with eight constructs for assessing persona quality,
which can be adapted to fit the context of a given study. In our
planned future work, we aim to apply and evaluate the use of
this framework to strengthen the validation of our personas.
ACKNOWLEDGMENTS
This work was funded by Carl Zeiss Stiftung, Germany, un-
der the Sustainable Embedded AI project, Grant No.: P2021-
02-009. It was also funded by the Deutsche Forschungsge-
meinschaft (DFG, German Research Foundation) under Grant
No.: 470146331, project softXplain (2022-2025).
We would like to express our sincere gratitude to all
participating farmers for their valuable time and feedback. We
used the following image generation tool to create the images
of the user personas: https://openai.com/index/dall-e-3
REFERENCES
[1] A. Monteiro, S. Santos, and P. Gonc¸alves, “Precision agriculture for crop
and livestock farming—brief review,” Animals, vol. 11, no. 8, p. 2345,
2021.
[2] J. D¨orr and M. Nachtmann, Handbook Digital Farming. Springer, 2022.
[3] S. Linsner, F. Kuntke, E. Steinbrink, J. Franken, and C. Reuter, “The role
of privacy in digitalization–analyzing perspectives of german farmers,”
Proceedings on Privacy Enhancing Technologies, 2021.
[4] M. B. Girmay and F. M¨ohrle, “Perspectives on explanation formats
from two stakeholder groups in germany: Software providers and dairy
farmers,” arXiv preprint arXiv:2506.11665, 2025.
[5] L. Chazette, W. Brunotte, and T. Speith, “Explainable software systems:
from requirements analysis to system evaluation,” Requirements Engi-
neering, vol. 27, no. 4, pp. 457–487, 2022.
[6] M. A. K¨ohl, K. Baum, M. Langer, D. Oster, T. Speith, and D. Bohlender,
“Explainability as a non-functional requirement,” in 2019 IEEE 27th Int.
Requirements Engineering Conference (RE). IEEE, 2019, pp. 363–368.
[7] J. Droste, H. Deters, J. Puglisi, and J. Kl¨under, “Designing end-user
personas for explainability requirements using mixed methods research,”
in 2023 IEEE 31st Int. Requirements Engineering Conference Workshops
(REW).
IEEE, 2023, pp. 129–135.
[8] T. Miller, “Explanation in artificial intelligence: Insights from the social
sciences,” Artificial intelligence, vol. 267, pp. 1–38, 2019.
[9] A. Preece, D. Harborne, D. Braines, R. Tomsett, and S. Chakraborty,
“Stakeholders in explainable ai,” arXiv preprint arXiv:1810.00184, 2018.
[10] A. Adadi and M. Berrada, “Peeking inside the black-box: a survey on
explainable artificial intelligence (xai),” IEEE access, vol. 6, pp. 52 138–
52 160, 2018.
[11] A. B. Arrieta, N. D´ıaz-Rodr´ıguez, J. Del Ser, A. Bennetot, S. Tabik,
A. Barbado, S. Garc´ıa, S. Gil-L´opez, D. Molina, R. Benjamins et al.,
“Explainable artificial intelligence (xai): Concepts, taxonomies, opportu-
nities and challenges toward responsible ai,” Information fusion, vol. 58,
pp. 82–115, 2020.
[12] D. Saraswat, P. Bhattacharya, A. Verma, V. K. Prasad, S. Tanwar,
G. Sharma, P. N. Bokoro, and R. Sharma, “Explainable ai for healthcare
5.0: opportunities and challenges,” IEEE Access, vol. 10, pp. 84 486–
84 517, 2022.
[13] S. Atakishiyev, M. Salameh, H. Yao, and R. Goebel, “Explainable
artificial intelligence for autonomous driving: A comprehensive overview
and field guide for future research directions,” IEEE Access, 2024.
[14] O. Kuiper, M. van den Berg, J. van der Burgt, and S. Leijnen, “Exploring
explainable ai in the financial sector: Perspectives of banks and supervi-
sory authorities,” in Artificial Intelligence and Machine Learning: 33rd
Benelux Conference on Artificial Intelligence, BNAIC/Benelearn 2021,
Revised Selected Papers 33.
Springer, 2022, pp. 105–119.
[15] W. Brunotte, J. Droste, and K. Schneider, “Context, content, consent-
how to design user-centered privacy explanations (s).” in SEKE, 2023,
pp. 86–89.
[16] L. Chazette and K. Schneider, “Explainability as a non-functional
requirement: challenges and recommendations,” Requirements Engineer-
ing, vol. 25, no. 4, pp. 493–514, 2020.
[17] H. Deters, L. Reinhardt, J. Droste, M. Obaidi, and K. Schneider, “Iden-
tifying explanation needs: Towards a catalog of user-based indicators,”
in 2025 IEEE 33rd international requirements engineering conference
(RE).
IEEE, 2025.
[18] H. Deters, J. Droste, A. Hess, V. Kl¨os, K. Schneider, T. Speith, and
A. Vogelsang, “The x factor: On the relationship between user experi-
ence and explainability,” in Proceedings of the 13th Nordic Conference
on Human-Computer Interaction, 2024, pp. 1–12.
[19] B. Weatherson, “Explanation, idealisation and the goldilocks problem,”
Philosophy and Phenomenological Research, vol. 84, no. 2, pp. 461–
473, 2012.
[20] M. Langer, D. Oster, T. Speith, H. Hermanns, L. K¨astner, E. Schmidt,
A. Sesing, and K. Baum, “What do we want from explainable artificial
intelligence (xai)?–a stakeholder perspective on xai and a conceptual
model guiding interdisciplinary xai research,” Artificial Intelligence, vol.
296, p. 103473, 2021.
[21] L. Chazette, J. Kl¨under, M. Balci, and K. Schneider, “How can we
develop explainable systems? insights from a literature review and an
interview study,” in Proceedings of the Int. Conference on Software and
System Processes and Int. Conference on Global Software Engineering,
2022, pp. 1–12.
[22] M. Y. Shams, S. A. Gamel, and F. M. Talaat, “Enhancing crop rec-
ommendation systems with explainable artificial intelligence: a study
on agricultural decision-making,” Neural Computing and Applications,
vol. 36, no. 11, pp. 5695–5714, 2024.
[23] K. Mridha, F. G. Tola, I. Khalil, S. M. J. Jakir, P. N. Wilfried, M. A.
Priyok, and M. Shukla, “Explainable deep learning for coffee leaf
disease classification in smart agriculture: A visual approach,” in 2023
Int. Conference on Distributed Computing and Electrical Circuits and
Electronics (ICDCECE), 2023, pp. 1–8.
[24] S. O. Mengisti Berihu Girmay, “Explainable ai: Leaf-based medic-
inal plant classification using knowledge distillation,” in 44. GIL-
Jahrestagung, Biodiversit¨at f¨ordern durch digitale Landwirtschaft.
Gesellschaft f¨ur Informatik eV, 2024, pp. 23–34.
[25] M. Michels, V. Bonke, and O. Musshoff, “Understanding the adoption of
smartphone apps in dairy herd management,” Journal of Dairy Science,
vol. 102, no. 10, pp. 9422–9434, 2019.
[26] D. Karolita, J. McIntosh, T. Kanij, J. Grundy, and H. O. Obie, “Use
of personas in requirements engineering: A systematic mapping study,”
Information and Software Technology, vol. 162, p. 107264, 2023.
[27] J. Salminen, J. M. Santos, H. Kwak, J. An, S.-g. Jung, and B. J. Jansen,
“Persona perception scale: development and exploratory validation of
an instrument for evaluating individuals’ perceptions of personas,” Int.
Journal of Human-Computer Studies, vol. 141, p. 102437, 2020.
[28] J.-L. (Weber) Dupree, E. Lank, and D. M. Berry, “A case study of using
grounded analysis as a requirement engineering method: Identifying
personas that specify privacy and security tool users,” Science of
Computer Programming, vol. 152, pp. 1–37, 2018. [Online]. Available:
https://www.sciencedirect.com/science/article/pii/S0167642317301697
[29] J. L. Dupree, R. Devries, D. M. Berry, and E. Lank, “Privacy personas:
Clustering users via attitudes and behaviors toward security practices,”
in Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems, ser. CHI ’16.
New York, NY, USA: Association
for Computing Machinery, 2016, p. 5228–5239. [Online]. Available:
https://doi.org/10.1145/2858036.2858214
[30] A. Felfernig, G. Friedrich, B. Gula, M. Hitz, T. Kruggel, G. Leitner,
R. Melcher, D. Riepan, S. Strauss, E. Teppan et al., “Persuasive rec-
ommendation: serial position effects in knowledge-based recommender
systems,” in Persuasive Technology: 2nd Int. Conference on Persuasive
Technology.
Springer, 2007, pp. 283–294.
[31] LimeSurvey
GmbH,
“Limesurvey,”
2025,
accessed:
2025-03-01.
[Online]. Available: https://www.limesurvey.org
[32] H. Hotelling, “Analysis of a complex of statistical variables into principal
components.” Journal of educational psychology, vol. 24, no. 6, p. 417,
1933.
[33] C. Ding and X. He, “K-means clustering via principal component
analysis,” in Proceedings of the twenty-first international conference on
Machine learning, 2004, p. 29.
[34] H. K. Kanagala and V. J. R. Krishnaiah, “A comparative study of k-
means, dbscan and optics,” in Int. Conference on Computer Communi-
cation and Informatics (ICCCI).
IEEE, 2016, pp. 1–6.
[35] M. Cui et al., “Introduction to the k-means clustering algorithm based
on the elbow method,” Accounting, Auditing and Finance, vol. 1, no. 1,
pp. 5–8, 2020.
[36] D. M. Saputra, D. Saputra, and L. D. Oswari, “Effect of distance
metrics in determining k-value in k-means clustering using elbow and
silhouette method,” in Sriwijaya international conference on information
technology and its applications (SICONIAN 2019). Atlantis Press, 2020,
pp. 341–346.
[37] N. J. Gogtay and U. M. Thatte, “Principles of correlation analysis,”
Journal of the Association of Physicians of India, vol. 65, no. 3, pp.
78–81, 2017.
[38] T. W. MacFarland, J. M. Yates, T. W. MacFarland, and J. M. Yates,
“Mann–whitney u test,” Introduction to nonparametric statistics for the
biological sciences using R, pp. 103–132, 2016.
[39] M. B. Girmay, J. Droste, H. Deters, and J. Doerr, “Explainability
needs in agriculture: Exploring dairy farmers’ user personas - survey
questions,” Mar. 2025. [Online]. Available: https://doi.org/10.5281/
zenodo.14976355
[40] C. Wohlin, P. Runeson, M. H¨ost, M. C. Ohlsson, B. Regnell, and
A. Wessl´en, Experimentation in software engineering. Springer Science
& Business Media, 2012.
[41] Bundesministerium
f¨ur
Ern¨ahrung
und
Landwirtschaft
(BMEL),
“Besch¨aftigte
auf
den
Betrieben
–
Gleichstel-
lung
in
der
Landwirtschaft,”
2020,
accessed:
2025-03-
03. [Online]. Available: https://www.bmel-statistik.de/landwirtschaft/
gleichstellung-in-der-landwirtschaft/beschaeftigte-auf-den-betrieben
[42] I. Nunes and D. Jannach, “A systematic review and taxonomy of expla-
nations in decision support and recommender systems,” User Modeling
and User-Adapted Interaction, vol. 27, pp. 393–444, 2017.
|
Explainability Needs in Agriculture: Exploring Dairy Farmers' User Personas Mengisti Berihu Girmay -Landau Kaiserslautern, Germany Jakob Droste Software Engineering Group Leibniz University Hannover Hannover, Germany Hannah Deters Software Engineering Group Leibniz University Hannover Hannover, Germany Joerg Doerr -Landau and Fraunhofer IESE Kaiserslautern, Germany Abstract-Artificial Intelligence (AI) promises new opportunities across many domains, including agriculture. However, the adoption of AI systems in this sector faces several challenges. System complexity can impede trust, as farmers' livelihoods depend on their decision-making and they may reject opaque or hard-to-understand recommendations. Data privacy concerns also pose a barrier, especially when farmers lack clarity regarding who can access their data and for what purposes. This paper examines dairy farmers' explainability requirements for technical recommendations and data privacy, along with the influence of socio-demographic factors. Based on a mixed-methods study involving 40 German dairy farmers, we identify five user personas through k-means clustering. Our findings reveal varying requirements, with some farmers preferring little detail while others seek full transparency across different aspects. Age, technology experience, and confidence in using digital systems were found to correlate with these explainability requirements. The resulting user personas offer practical guidance for requirements engineers aiming to tailor digital systems more effectively to the diverse requirements of farmers. Index Terms-Requirements Engineering, Agriculture, User Personas, Explainability, Human-centered AI I. INTRODUCTION Artificial Intelligence (AI) is advancing in many areas and promises new opportunities to increase productivity and efficiency. Combined with the increasing availability of lowcost sensors, AI has also sparked renewed interest in precision agriculture, an approach that uses data-driven insights to optimize farm management and resource use. AI is also finding its way into livestock farming, where pattern recognition can help detect animal health issues, e.g., lameness using image data or respiratory diseases such as coughing through sound analysis. This not only promises to improve animal welfare, but also to enable early interventions, reducing the need for antibiotics and helping prevent larger disease outbreaks [1]. However, despite the large potential and many use cases, practical adoption of AI systems is still lacking. AI systems can raise concerns for farmers, particularly when the reasoning behind decisions or predictions is unclear. Unlike rule-based systems of conventional computer programming, in which the decision-making logic is predefined and transparent, AI models tend to operate as "black boxes", making it difficult or impossible for users (and often even developers) to understand exactly how certain conclusions are reached. This "black box" issue presents a particular trust challenge for AI in agriculture on two interrelated levels. First, when farmers rely on AI systems for technical guidance but cannot comprehend the rationale behind recommendations, they may disregard or reject them entirely - given that their livelihood depends on their decisions. A lack of transparency, high system complexity, and fears of error or loss of control can significantly undermine trust in such systems [2]. Second, many farmers are concerned about data privacy and fear that their data could be accessed by authorities or monetized by companies without their consent [3]. If AI systems and their data handling practices are perceived as opaque or externally imposed, they may be met with resistance and ultimately go unused. Thus, AI should not only be intuitive for end users but also transparent and adapted to their requirements to support user acceptance and adoption. Research in both agricultural [4] and non-agricultural domains [5, 6] has shown that integrating explanations into AI may increase user trust and promote adoption. However, explanations can also have a negative impact on usability if they are not helpful and rather burden the user (e.g., by cluttering the user interface) [5]. For this reason, explanations should be well adapted to user requirements. An established method for capturing the requirements of stakeholder groups is the development of user personas. To create them, users with similar requirements are clustered and examined for similarities, e.g., with regard to their sociodemographic characteristics. In this way, user personas systematically represent the needs and human values of different user groups and help to ensure that users who might otherwise be overlooked are also addressed. By making end users' require17 Sep 2025 ments more tangible, personas enable requirements engineers better empathize with them and guide system development in a way that is ethically sound and inclusive [7]. Building on this, we aim to explore the requirements of dairy farmers in terms of explainability. To build a data foundation, we surveyed 40 dairy farmers in Germany and interviewed eight of them for deeper insights into their responses and views on system explanations. The survey focused on their requirements regarding explanations, including the desired level of detail in technical and data privacy explanations across scenarios related to feeding, breeding, and animal health. We clustered their feedback into five user personas using kmeans. Each persona represents a typical farmer, combining sociodemographic traits with related explainability requirements. These personas are intended to support requirements engineers by providing a more tangible picture of the target group and their various explanatory needs. The remainder of the paper is structured as follows. Section II summarizes relevant background information and related work. In Section III, we present our research design and methodology. Section IV presents the findings of our work. In Section V, we discuss our results. Section VI concludes the paper with a brief summary and an outlook on future work. II. BACKGROUND AND RELATED WORK In the following, we compile relevant background information and related work. A. Explainability Explainability commonly refers to the extent to which systems that provide assessments or recommendations make their reasoning transparent [8]. The concept relates to how a system explains its decisions rather than what it does. A key aspect in this context is identifying the intended audience. For example, while a machine learning engineer may look for insights into model internals, end users are more likely to look for support in interpreting the results [9]. Explainability has gained significant attention in recent years, especially in the context of AI [10, 11]. Applications include healthcare [12], automotive [13], and finance [14]. As systems grow more complex, the concept is also gaining importance beyond AI, including in areas like data privacy [15]. Gaining a better understanding of user behavior and when and how detailed explanations users prefer has the potential to build trust and improve adoption [15-17]. To achieve this, explanations must be appropriately tailored, as excessive or poorly timed information can negatively affect user experience [16, 18]. Effective explanations provide enough detail to support understanding without overwhelming or confusing the user [19]. This suggests a "one-size-fits-all" approach to explainability is not feasible. In order to better understand what constitutes good explanations, K ̈ohl et al. [6] recommend bringing together a diverse group of representatives from relevant target groups (in our case farmers). Including participants with varying demographics (e.g., age, gender, and experience) can help to gather feedback from diverse perspectives. By asking targeted questions about how tailored explanations should ideally look in each individual case, a well-founded understanding of what explainability should encompass can be established [20]. In this paper, we combine a quantitative method (survey) with a qualitative approach (interviews) to capture both broad input and in-depth insights into user requirements, following an approach successfully applied in prior research [7, 21]. Efforts to improve explainability are also made in agriculture, although the focus is mostly on developers and researchers, with little involvement from end users. For instance, Shams et al. [22] integrated an Explainable AI (XAI)-based algorithm into a yield estimation system, but did not include farmers in the validation. Similarly, studies on plant disease detection [23] or plant identification [24] are mostly concerned with technical interpretability over user-centered evaluation. We argue that farmers and their needs should play a more central role, as their satisfaction ultimately determines acceptance and practical adoption. Studies have shown that adoption can also be influenced by sociodemographic factors such as age, farm size, education, and prior experience with digital systems [25]. We include these factors to examine how they correlate with farmers' explainability requirements. B. User Personas The concept of user persona studies is a promising way to analyze and group user requirements [26]. While there is no universal template for defining user personas, they commonly categorize stakeholders based on shared traits [27]. In doing so, personas help explore how users interact with systems and what factors, such as sociodemographics, influence their behavior. Scenarios can further support this by illustrating user behavior in specific contexts [26]. User personas have been used for requirements engineering in a wide range of areas, including security and data privacy [28, 29], as well as explainability [7]. To the best of our knowledge, however, there are no studies in the literature that apply the user persona concept to study explainability requirements in the agricultural domain. Given the diversity of domains, such as medical, automotive, and agriculture, findings from one area are difficult to generalize to another. For example, explainability needs in the automotive domain may focus on understanding system alerts and component failures, whereas farmers face different tasks that likely lead to other distinct requirements. III. RESEARCH DESIGN AND METHODOLOGY The primary goal of this work is to explore dairy farmers' explainability requirements, develop user personas, and analyze correlations with sociodemographic factors. In the survey shown to farmers, we asked about their preferred level of detail in technical explanations (assessments or recommendations by the system) and data privacy explanations (clarity on where and how data is stored and processed). Interview responses were used to derive persona traits and capture farmers' backgrounds and attitudes. Both the quantitative and qualitative Fig. 1: User persona creation process parts of the study were conducted in parallel. Our research is guided by the following two questions: RQ1: Which personas can be identified for dairy farmers, and what are their explainability requirements? RQ2: Do farmers' sociodemographic factors correlate with their explainability requirements? A. Survey We designed the survey based on three scenarios in major decision areas for dairy farmers: feeding, breeding, and health management. Each scenario presented a hypothetical system assessment (e.g., a cow infected with mastitis) alongside five explanations with varying levels of detail. Participants chose the level of detail that suited them best, with the order of options randomized to avoid primacy-recency effects [30]. To this end, we presented exemplary explanations in order to make the various levels of detail more tangible. By using concrete examples, participants did not have to interpret abstract labels (such as 'little detail' or 'high detail'), which could have rendered the selection process ambiguous and subjective due to varying understandings. Similarly, participants chose one of five data privacy explanations ranging from no explanation to highly detailed information, also presented with concrete examples in randomized order. This yielded six variables, namely the desired level of detail for technical and data privacy explanations for each of the three areas. The survey further gathered sociodemographic data to enable correlation analysis with the requirements. The sociodemographic variables included age, gender, farm size, highest educational degree, frequency and duration of digital system use, and years of experience in dairy farming. B. Interviews The interviews were organized informally based on the participants' responses to the survey. The questions asked did not follow a predefined interview guide. Instead, each farmer was asked to explain their answers for deeper insight. Interview lengths varied based on farmers' availability and ranged from 10 to 25 minutes. The purpose of the interviews was to obtain qualifying background information on the answers to the survey questions. C. Data Collection and Analysis Farmers were contacted via German farmer associations and personal contacts. Data collection took place from December 2024 to February 2025. Eight of the participants agreed to be interviewed in addition to filling out the survey. The online survey was conducted in German using LimeSurvey [31] and later translated into English. Out of 57 participants, 40 completed the survey in full, and only these complete responses were analyzed. The interviewed farmers were contacted based on their availability. We applied Principal Component Analysis (PCA) [32] to reduce the data's dimensionality. PCA transforms data into uncorrelated Principal Components (PCs) that capture the most significant variance. In doing so, we selected components that together accounted for 95% of the total variance. For clustering, we used k-means, an established method for partitioning PCA-reduced data [33]. The k-means algorithm produces clear, distinct clusters with unambiguous assignments (unlike methods such as DBSCAN [34] where assignments can be ambiguous). This aspect facilitates deriving user personas. Using the selected PCs, we determined the optimal cluster number (k) to maximize within-cluster similarity and cluster separation. We applied the elbow method [35], which identifies the point where the within-cluster sum of squares (WCSS) decline slows (see Fig. 2a), and the silhouette score [36], which evaluates cluster quality from +1 (well-separated) to -1 (misclassified) (see Fig. 2b). Both methods indicated k = 5 as optimal for k-means clustering. 250 200 150 100 50 0 2 3 4 5 6 k WCSS (a) Elbow graph 0.4 0.3 0.2 0.1 0.0 2 3 4 5 6 k Score (b) Silhouette scores Fig. 2: Determining k for k-means: Elbow and Silhouette To answer RQ2, which explores how sociodemographic factors relate to preferred explanation detail (technical and privacy) across the three scenarios, we tested null hypotheses for each pairwise combination of variables. Pearson correlation [37] was used for ordinal data, and the Mann-Whitney U test [38] for nominal data. Relationship direction was determined via r-values (Pearson) and z-values (Mann-Whitney), with significance assessed by p-values. Correlations were considered significant at p 5 years) eFeedTech Desired detail level for feeding-related technical explanations eBreedPriv Desired detail level for breeding-related privacy explanations of older age showed greater concern about data privacy in breeding and preferred more detailed technical explanations in feeding. F4 and F5: Farmers with higher confidence in using digital systems desired more detailed data privacy explanations in breeding and less technical detail in feeding. F6 and F7: Farmers with more frequent or longer-term use of digital systems tend to desire more detailed technical explanations in feeding. C. Threats to Validity Our research faces several validity threats that we discuss below in line with Wohlin et al. [40]. a) Construct Validity: All participants were dairy farmers and thus relevant stakeholders. Although our survey relied on hypothetical scenarios, we included realistic explanation examples to reduce hypothetical bias. The quality of user personas depends on how the underlying data is gathered. Personas based solely on survey data risk being shallow, whereas personas based only on interviews lack quantitative grounding. We used a mixed-methods approach to combine both methodologies. Our study focused on text-based explanations, omitting other forms like audio or visuals. As this work exists in the early stages of explainability research in agriculture, we decided to focus on information content and granularity for the development of our personas. Future research into presentation forms might yield interesting findings on how farmers prefer their explanations to be provided and presented. b) Internal Validity: Designer bias is a common issue in persona development. To reduce this, two explainability researchers without ties to digital agriculture independently reviewed the personas. Nonetheless, some bias may remain. c) Conclusion Validity: With 40 participants, our findings have limited generalizability. However, given the challenge of recruiting farmers, we consider our sample meaningful. We do not claim full coverage of all explainability personas among German farmers. We applied k-means clustering after PCA, with k = 5 supported by elbow and silhouette methods. The clusters appeared balanced, supporting their use as a persona basis. Correlation analyses used suitable methods (Pearson's r, Mann-Whitney U), with all findings meeting 5% significance and indicating moderate correlations, suggesting representativeness. d) External Validity: All participants were German, limiting generalizability across regions. While age diversity was broad, 70% of participants were male. However, this aligns with BMEL's 2020 data [41] showing a 36% share of women among German farmers. V. DISCUSSION User personas, including those developed in our study, can support requirements elicitation and analysis by highlighting the diversity of requirements and preferences within a user group. They can help developers gain a shared understanding of their end users and encourage them to consider user types that might otherwise be overlooked by fostering empathy for these user groups. For example, personas may raise awareness of users with strong data privacy concerns who need more transparency and control, or users who prefer minimal explanations and quick overviews, which might prompt developers to offer an option to disable certain explanatory features. The user personas presented in this paper explicitly refer to explanation needs in agricultural systems, considering both system transparency and user privacy. In isolation, explainability and privacy - and their related user personas - have already been addressed by previous works. Droste et al. [7] created personas for explanation needs in everyday software systems. They referred, among other things, to demographic factors such as age and occupational field, and clustered software users according to their need for different types of explanations. Dupree et al. [29] created personas for privacy and security practices. They considered knowledge about privacy and security and the effort users are willing to put in to protect security and privacy. In contrast, we considered distinct factors such as experience in dairy farming and farm size, as these factors are unique to our specific use case. Our personas illustrate users' diverse explanation needs in the dairy farming sector and enable requirements engineers to empathize with different types of farmers. VI. CONCLUSIONS AND OUTLOOK In our work, we explored the explainability requirements of dairy farmers by creating user personas to gain insights into their specific needs for explanation. In addition, we analyzed correlations between sociodemographic factors and explainability requirements. To this end, we defined three scenarios that reflect everyday situations faced by dairy farmers. Based on quantitative and qualitative data, we identified five distinct dairy farmer personas with varying requirements for technical and data privacy-related explanations. Three personas (the Carefree Power-User, the Pragmatist, and the Minimalist) prefer low to moderate detail on data privacy. Their requirements for technical explanations vary widely, from minimal to very detailed. Together, they represent half of the participants. The remaining two personas (the DetailOriented Privacy-Skeptic and the Selective Privacy-Skeptic) require highly detailed data privacy explanations. They differ in technical requirements: the former prefers extensive detail, while the latter seeks information only in specific areas. These two personas represent the other half of the participants. Our findings partly contradict studies such as Chazette and Schneider [16], Nunes and Jannach [42] suggesting that explanations should be concise, as otherwise they could overwhelm most users. The correlation analysis revealed several significant links between sociodemographic factors and explainability requirements. Previous research has shown that factors like age, gender, and experience influence farmers' acceptance and trust [25]. Our study supports this and identifies gender, age, confidence, frequency and duration of digital system use as the most influential factors. Requirements regarding explanations also varied across scenarios, with different sociodemographic factors influencing preferences depending on context. This highlights that farmers prioritize explanations differently, underscoring the value of context-aware design. Understanding these correlations could help software providers tailor digital farming systems more effectively. We acknowledge the limitations of our study, but also see it as a stepping stone for future research opportunities. Validating user personas remains a challenge due to the absence of a widely accepted standard. As a step toward addressing this, Salminen et al. [27] proposed the Persona Perception Scale, a framework with eight constructs for assessing persona quality, which can be adapted to fit the context of a given study. In our planned future work, we aim to apply and evaluate the use of this framework to strengthen the validation of our personas. ACKNOWLEDGMENTS This work was funded by Carl Zeiss Stiftung, Germany, under the Sustainable Embedded AI project, Grant No.: P202102-009. It was also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No.: 470146331, project softXplain (2022-2025). We would like to express our sincere gratitude to all participating farmers for their valuable time and feedback. We used the following image generation tool to create the images of the user personas: https://openai.com/index/dall-e-3 REFERENCES [1] A. Monteiro, S. Santos, and P. Gonc ̧alves, "Precision agriculture for crop and livestock farming-brief review," Animals, vol. 11, no. 8, p. 2345, 2021. [2] J. D ̈orr and M. Nachtmann, Handbook Digital Farming. Springer, 2022. [3] S. Linsner, F. Kuntke, E. Steinbrink, J. Franken, and C. Reuter, "The role of privacy in digitalization-analyzing perspectives of german farmers," Proceedings on Privacy Enhancing Technologies, 2021. [4] M. B. Girmay and F. M ̈ohrle, "Perspectives on explanation formats from two stakeholder groups in germany: Software providers and dairy farmers," arXiv preprint , 2025. [5] L. Chazette, W. Brunotte, and T. Speith, "Explainable software systems: from requirements analysis to system evaluation," Requirements Engineering, vol. 27, no. 4, pp. 457-487, 2022. [6] M. A. K ̈ohl, K. Baum, M. Langer, D. Oster, T. Speith, and D. Bohlender, "Explainability as a non-functional requirement," in 2019 IEEE 27th Int. Requirements Engineering Conference (RE). IEEE, 2019, pp. 363-368. [7] J. Droste, H. Deters, J. Puglisi, and J. Kl ̈under, "Designing end-user personas for explainability requirements using mixed methods research," in 2023 IEEE 31st Int. Requirements Engineering Conference Workshops (REW). IEEE, 2023, pp. 129-135. [8] T. Miller, "Explanation in artificial intelligence: Insights from the social sciences," Artificial intelligence, vol. 267, pp. 1-38, 2019. [9] A. Preece, D. Harborne, D. Braines, R. Tomsett, and S. Chakraborty, "Stakeholders in explainable ai," arXiv preprint , 2018. [10] A. Adadi and M. Berrada, "Peeking inside the black-box: a survey on explainable artificial intelligence (xai)," IEEE access, vol. 6, pp. 52 13852 160, 2018. [11] A. B. Arrieta, N. D ́ıaz-Rodr ́ıguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garc ́ıa, S. Gil-L ́opez, D. Molina, R. Benjamins et al., "Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai," Information fusion, vol. 58, pp. 82-115, 2020. [12] D. Saraswat, P. Bhattacharya, A. Verma, V. K. Prasad, S. Tanwar, G. Sharma, P. N. Bokoro, and R. Sharma, "Explainable ai for healthcare 5.0: opportunities and challenges," IEEE Access, vol. 10, pp. 84 48684 517, 2022. [13] S. Atakishiyev, M. Salameh, H. Yao, and R. Goebel, "Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions," IEEE Access, 2024. [14] O. Kuiper, M. van den Berg, J. van der Burgt, and S. Leijnen, "Exploring explainable ai in the financial sector: Perspectives of banks and supervisory authorities," in Artificial Intelligence and Machine Learning: 33rd Benelux Conference on Artificial Intelligence, BNAIC/Benelearn 2021, Revised Selected Papers 33. Springer, 2022, pp. 105-119. [15] W. Brunotte, J. Droste, and K. Schneider, "Context, content, consenthow to design user-centered privacy explanations (s)." in SEKE, 2023, pp. 86-89. [16] L. Chazette and K. Schneider, "Explainability as a non-functional requirement: challenges and recommendations," Requirements Engineering, vol. 25, no. 4, pp. 493-514, 2020. [17] H. Deters, L. Reinhardt, J. Droste, M. Obaidi, and K. Schneider, "Identifying explanation needs: Towards a catalog of user-based indicators," in 2025 IEEE 33rd international requirements engineering conference (RE). IEEE, 2025. [18] H. Deters, J. Droste, A. Hess, V. Kl ̈os, K. Schneider, T. Speith, and A. Vogelsang, "The x factor: On the relationship between user experience and explainability," in Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 2024, pp. 1-12. [19] B. Weatherson, "Explanation, idealisation and the goldilocks problem," Philosophy and Phenomenological Research, vol. 84, no. 2, pp. 461473, 2012. [20] M. Langer, D. Oster, T. Speith, H. Hermanns, L. K ̈astner, E. Schmidt, A. Sesing, and K. Baum, "What do we want from explainable artificial intelligence (xai)?-a stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research," Artificial Intelligence, vol. 296, p. 103473, 2021. [21] L. Chazette, J. Kl ̈under, M. Balci, and K. Schneider, "How can we develop explainable systems? insights from a literature review and an interview study," in Proceedings of the Int. Conference on Software and System Processes and Int. Conference on Global Software Engineering, 2022, pp. 1-12. [22] M. Y. Shams, S. A. Gamel, and F. M. Talaat, "Enhancing crop recommendation systems with explainable artificial intelligence: a study on agricultural decision-making," Neural Computing and Applications, vol. 36, no. 11, pp. 5695-5714, 2024. [23] K. Mridha, F. G. Tola, I. Khalil, S. M. J. Jakir, P. N. Wilfried, M. A. Priyok, and M. Shukla, "Explainable deep learning for coffee leaf disease classification in smart agriculture: A visual approach," in 2023 Int. Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), 2023, pp. 1-8. [24] S. O. Mengisti Berihu Girmay, "Explainable ai: Leaf-based medicinal plant classification using knowledge distillation," in 44. GILJahrestagung, Biodiversit ̈at f ̈ordern durch digitale Landwirtschaft. Gesellschaft f ̈ur Informatik eV, 2024, pp. 23-34. [25] M. Michels, V. Bonke, and O. Musshoff, "Understanding the adoption of smartphone apps in dairy herd management," Journal of Dairy Science, vol. 102, no. 10, pp. 9422-9434, 2019. [26] D. Karolita, J. McIntosh, T. Kanij, J. Grundy, and H. O. Obie, "Use of personas in requirements engineering: A systematic mapping study," Information and Software Technology, vol. 162, p. 107264, 2023. [27] J. Salminen, J. M. Santos, H. Kwak, J. An, S.-g. Jung, and B. J. Jansen, "Persona perception scale: development and exploratory validation of an instrument for evaluating individuals' perceptions of personas," Int. Journal of Human-Computer Studies, vol. 141, p. 102437, 2020. [28] J.-L. (Weber) Dupree, E. Lank, and D. M. Berry, "A case study of using grounded analysis as a requirement engineering method: Identifying personas that specify privacy and security tool users," Science of Computer Programming, vol. 152, pp. 1-37, 2018. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167642317301697 [29] J. L. Dupree, R. Devries, D. M. Berry, and E. Lank, "Privacy personas: Clustering users via attitudes and behaviors toward security practices," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, ser. CHI '16. New York, NY, USA: Association for Computing Machinery, 2016, p. 5228-5239. [Online]. Available: https://doi.org/10.1145/2858036.2858214 [30] A. Felfernig, G. Friedrich, B. Gula, M. Hitz, T. Kruggel, G. Leitner, R. Melcher, D. Riepan, S. Strauss, E. Teppan et al., "Persuasive recommendation: serial position effects in knowledge-based recommender systems," in Persuasive Technology: 2nd Int. Conference on Persuasive Technology. Springer, 2007, pp. 283-294. [31] LimeSurvey GmbH, "Limesurvey," 2025, accessed: 2025-03-01. [Online]. Available: https://www.limesurvey.org [32] H. Hotelling, "Analysis of a complex of statistical variables into principal components." Journal of educational psychology, vol. 24, no. 6, p. 417, 1933. [33] C. Ding and X. He, "K-means clustering via principal component analysis," in Proceedings of the twenty-first international conference on Machine learning, 2004, p. 29. [34] H. K. Kanagala and V. J. R. Krishnaiah, "A comparative study of kmeans, dbscan and optics," in Int. Conference on Computer Communication and Informatics (ICCCI). IEEE, 2016, pp. 1-6. [35] M. Cui et al., "Introduction to the k-means clustering algorithm based on the elbow method," Accounting, Auditing and Finance, vol. 1, no. 1, pp. 5-8, 2020. [36] D. M. Saputra, D. Saputra, and L. D. Oswari, "Effect of distance metrics in determining k-value in k-means clustering using elbow and silhouette method," in Sriwijaya international conference on information technology and its applications (SICONIAN 2019). Atlantis Press, 2020, pp. 341-346. [37] N. J. Gogtay and U. M. Thatte, "Principles of correlation analysis," Journal of the Association of Physicians of India, vol. 65, no. 3, pp. 78-81, 2017. [38] T. W. MacFarland, J. M. Yates, T. W. MacFarland, and J. M. Yates, "Mann-whitney u test," Introduction to nonparametric statistics for the biological sciences using R, pp. 103-132, 2016. [39] M. B. Girmay, J. Droste, H. Deters, and J. Doerr, "Explainability needs in agriculture: Exploring dairy farmers' user personas - survey questions," Mar. 2025. [Online]. Available: https://doi.org/10.5281/ zenodo.14976355 [40] C. Wohlin, P. Runeson, M. H ̈ost, M. C. Ohlsson, B. Regnell, and A. Wessl ́en, Experimentation in software engineering. Springer Science & Business Media, 2012. [41] Bundesministerium f ̈ur Ern ̈ahrung und Landwirtschaft (BMEL), "Besch ̈aftigte auf den Betrieben - Gleichstellung in der Landwirtschaft," 2020, accessed: 2025-0303. [Online]. Available: https://www.bmel-statistik.de/landwirtschaft/ gleichstellung-in-der-landwirtschaft/beschaeftigte-auf-den-betrieben [42] I. Nunes and D. Jannach, "A systematic review and taxonomy of explanations in decision support and recommender systems," User Modeling and User-Adapted Interaction, vol. 27, pp. 393-444, 2017.
|
2509.16248
|
GraphMend: Code Transformations for Fixing
Graph Breaks in PyTorch 2
Savini Kashmira
University of Michigan
Ann Arbor, USA
savinik@umich.edu
Jayanaka Dantanarayana
University of Michigan
Ann Arbor, USA
jayanaka@umich.edu
Thamirawaran Sathiyalogeswaran
Jaseci Labs
Ann Arbor, USA
thami@jaseci.org
Yichao Yuan
University of Michigan
Ann Arbor, USA
yichaoy@umich.edu
Nishil Talati
University of Michigan
Ann Arbor, USA
talatin@umich.edu
Krisztian Flautner
University of Michigan
Ann Arbor, USA
manowar@umich.edu
Lingjia Tang
University of Michigan
Ann Arbor, USA
lingjia@umich.edu
Jason Mars
University of Michigan
Ann Arbor, USA
profmars@umich.edu
Abstract—This paper presents GraphMend, a high-level com-
piler that eliminates FX graph breaks in PyTorch 2 programs.
Although PyTorch 2 introduced TorchDynamo and TorchInduc-
tor to enable just-in-time graph compilation, unresolved dynamic
control flow and unsupported Python constructs often fragment
models into multiple FX graphs. These fragments force frequent
fallbacks to eager mode, incur costly CPU-to-GPU synchro-
nizations, and reduce optimization opportunities. GraphMend
addresses this limitation by analyzing and transforming source
code before execution. Built on the Jac compilation framework,
GraphMend introduces two code transformations that remove
graph breaks due to dynamic control flow and Python I/O
functions. This design allows PyTorch’s compilation pipeline to
capture larger, uninterrupted FX graphs without requiring man-
ual refactoring by developers. Evaluation across eight Hugging
Face models shows that GraphMend removes all fixable graph
breaks due to dynamic control flow and Python I/O functions,
driving the break count to 0 in 6 models and reducing it from 5
to 2 in another model. On NVIDIA RTX 3090 and A40 GPUs,
GraphMend achieves up to 75% latency reductions and up to
8% higher end-to-end throughput. These results demonstrate
that high-level code transformation is an effective complement
to PyTorch’s dynamic JIT compilation pipeline, substantially
improving both usability and performance.
Index Terms—Code Transformation, Compiler for AI, PyTorch
2, FX Graph, Graph Breaks
I. INTRODUCTION
PyTorch has emerged as one of the most widely adopted
deep learning frameworks in both academia and industry,
primarily due to its ease of use and flexibility [1]. How-
ever, its general-purpose design incurs efficiency overheads,
leaving PyTorch workloads less optimized than handcrafted
GPU implementations. To bridge this gap, PyTorch 2 [2]
introduced a Python Just-in-Time (JIT) compilation pipeline
that translates model operations into optimized GPU code,
achieving performance close to handcrafted implementations.
One of the central intermediate representations (IR) in
this pipeline is the FX graph [3]. TorchDynamo intercepts
and symbolically evaluates Python bytecode to create an FX
graph. The FX graph is then passed as the IR to a backend
compiler, by default TorchInductor [2], which applies a series
of graph-level and kernel-level optimizations before lowering
the computation to efficient kernels for GPU execution.
A key component in the PyTorch compilation pipeline is the
capture of code into an FX graph through symbolic evaluation.
It is important to avoid splitting the forward function into
smaller graphs, since unsupported code triggers execution to
fall back to eager mode, fragmenting the computation. For
example, TorchDynamo often encounters difficulties when
resolving conditional jumps, as control flow depending on
dynamic values cannot be statically determined [4]. In such
cases like dynamic control flow or calls to certain Python built-
ins, TorchDynamo cannot capture the affected code into the
FX graph. Instead, it inserts a graph break, which splits the
computation into multiple fragments [2]. As a result, a single
forward function may be compiled into several disjoint FX
graphs rather than one unified graph.
The compiled FX graphs run on the optimized backend,
while the regions between them execute in standard PyTorch
eager mode [1]. Each fallback to eager execution requires
GPU kernels to synchronize with the CPU to fetch and
schedule subsequent operations. These repeated host-device
synchronizations introduce substantial overhead, since control
is repeatedly transferred between CPU and GPU instead of
executing long fused kernels entirely on the device. Moreover,
graph breaks reduce optimization opportunities, as each FX
graph is compiled in isolation, preventing cross-graph fusion
and global optimization [2], [5].
Avoiding graph breaks is essential for realizing the full
performance potential of PyTorch 2. The current PyTorch
compilation pipeline begins at the bytecode level and does
not exploit source-level information. As a result, when high-
level constructs such as conditional branches are lowered into
jump instructions at the bytecode level, TorchDynamo often
cannot statically resolve them, causing graph breaks. Our key
insight is that compilers should not limit transformations
to the bytecode level. Instead, more holistic compilation
strategies that analyze and transform higher-level program
representations closer to the source code can expose addi-
tional optimization opportunities. By incorporating source-
arXiv:2509.16248v1 [cs.PL] 17 Sep 2025
level analysis and transformation, high-level code regions
can be systematically rewritten into forms that enable more
effective optimization and unified graph generation within the
PyTorch 2 pipeline.
Building on this insight, we introduce GRAPHMEND, a
high-level compiler technique for PyTorch 2 models that
eliminates two common types of graph breaks those caused
by dynamic control flow and Python I/O operations through
program analysis and transformation. Unlike standard PyTorch
2, which captures execution at runtime to construct graphs
for JIT compilation, GRAPHMEND restructures Python code
before execution. We implement GRAPHMEND within the
Jac framework [6]–[8], a language infrastructure that provides
compiler and runtime support for Python programs. The Jac
compiler constructs an abstract syntax tree (AST) to cap-
ture program structure and a control-flow graph (CFG) to
represent execution paths, merging them into a unified IR
[9]. Multiple compiler passes operate on this IR to analyze
and transform the program, and GRAPHMEND extends this
pipeline with additional passes that detect code patterns likely
to cause graph breaks. Specifically, GRAPHMEND introduces
compiler passes that apply two key transformations: Predi-
cated Dynamic Control Flow and Graph-Epilogue Deferred
Side Effects, which systematically eliminate graph breaks. The
transformed program is then compiled into standard Python
bytecode and executed by the regular CPython interpreter,
seamlessly integrating with the existing PyTorch 2 compilation
pipeline. This design enables GRAPHMEND to automatically
restructure code before it reaches TorchDynamo.
We evaluate GRAPHMEND on a benchmark suite of 8 Hug-
ging Face [10] models that trigger FX graph breaks. GRAPH-
MEND removes all fixable breaks, reducing the break count
to 0 in 6 models and from 5 to 2 in another, with 1 remaining
unfixed. On NVIDIA RTX 3090 and A40 GPUs, it achieves
30–75% lower cold-start forward latency, 2.5–25% lower
steady-state latency, and 5–8% higher end-to-end throughput.
Improvements are larger when a graph break lands in the
middle of a hot function where its disruption is highest, and
smaller when the break is less disruptive; very small models
also tend to see larger relative reductions in per forward pass
latency because CPU to GPU handoff dominates a greater
share of their runtime.
Our contributions are, (1) We introduce a benchmark suite
of eight Hugging Face models that naturally exhibit graph
breaks, providing a realistic and reproducible testbed for
evaluating PyTorch compilation. (2) We design and implement
the GRAPHMEND compilation technique, which automatically
restructure high level code to eliminate common graph breaks.
(3) We conduct comprehensive evaluations on this benchmark
suite using real world models and hardware, demonstrating
the effectiveness of our approach in reducing graph breaks
and improving performance.
II. BACKGROUND
Optimization pipelines in PyTorch aim to capture execution
graphs that can be globally optimized by compilers, unlike
eager mode where each operation runs immediately and inde-
pendently [1], [3].
A. Graph Capturing Techniques in PyTorch Before PyTorch 2
Early approaches to graph capture were limited. torch←-
.jit.trace [11] used record–replay at the C++ dispatcher
level but ignored Python control flow and often produced
incomplete or misleading graphs. torch.jit.script [12] an-
alyzed Python ASTs but followed an “all-or-nothing” rule: if
a single construct was unsupported, capture failed entirely. To
address these limitations, PyTorch introduced the torch.fx [3]
module, enabling symbolic execution of Python code to gen-
erate an FX graph, a higher-level IR. Although more flexible,
symbolic_trace could still misrepresent certain operations.
B. FX Graph IR
An FX graph represents operations as nodes connected by
data dependencies [2], [3]. Unlike eager execution, which
evaluates operations sequentially, an FX graph exposes the
computation as a data flow structure. This global view enables
the compiler backend to apply optimizations such as operation
fusion and kernel specialization.
To illustrate, consider the simple PyTorch forward function
in Figure 1a. When captured by TorchDynamo, it is trans-
formed into the FX graph shown in Figure 1b, where each
operation is represented as a node in the IR.
1
@torch.compile()
2
def forward(x, y):
3
z = x*2 + y*2
4
return torch.relu(z)
(a) Forward function in PyTorch
graph():
%x : torch.Tensor
%y : torch.Tensor
%mul_x : torch.Tensor = torch.ops.aten.mul.
Tensor_Scalar(%x, 2)
%mul_y : torch.Tensor = torch.ops.aten.mul.
Tensor_Scalar(%y, 2)
%z : torch.Tensor = torch.ops.aten.add.Tensor
(%mul_x, %mul_y)
%out : torch.Tensor = torch.ops.aten.relu.
default(%z)
return (%out)
(b) Corresponding FX graph IR
Fig. 1: A simple forward function (a) and its compiled FX
graph IR (b).
This FX graph serves as the foundation for subsequent
compiler passes that optimize and lower the computation into
efficient GPU kernels.
C. TorchDynamo in PyTorch 2
PyTorch 2 [2] introduced TorchDynamo, Python bytecode-
level tracer tightly integrated with CPython. It symbolically
evaluates Python bytecode to extract PyTorch operations into
2
FX graphs, while delegating unsupported constructs back to
Python. This avoids incomplete graphs but may still fragment
the computation by generating partial graphs by inserting
graph-breaks.
III. MOTIVATING EXAMPLE
As discussed in § II, TorchDynamo attempts to capture a
single FX graph for the forward function but inserts graph
breaks whenever it encounters unsupported constructs, forcing
execution to fall back to eager mode [2].
A. FX graph breaks
Let us consider the modified program in Figure 2a, which
adds a conditional branch to the forward pass of Figure 1a.
When Dynamo tries to symbolically evaluate the function to
capture FX graph, it cannot capture the conditional branch
because the outcome depends on the runtime value of the
tensor expression x.sum. Therefore, it inserts a graph break.
As a result, the computation before the branch is recorded in
one FX graph, the conditional is executed in eager mode, and
the remaining operations are traced into a second FX graph
[2], [13].
B. Impact of graph breaks
The backend compiler (TorchInductor by default) compiles
each captured FX graph separately and lowers it to GPU
executables such as fused Triton kernels [2], [14]. Each
compiled FX graph corresponds to a torch.compiled region.
When graph breaks occur, a single Python function results in
multiple compiled regions separated by segments of unsup-
ported Python code [13].
During runtime, the first compiled region runs on the GPU,
after which control returns to the Python interpreter to execute
unsupported code in eager mode before switching back to the
GPU for subsequent compiled regions. These context switches
introduce overhead and lower GPU utilization per forward pass
[2].
This effect is illustrated in Figure 3a, which shows the
CPU–GPU profiling trace of the function in Figure 2a. The
trace reveals two separate torch.compile regions, correspond-
ing to two FX graphs. The graph break is visible in the
device-to-host (D2H) memcpy following the first FX graph,
where control returns to the CPU to execute unsupported
code, leaving the GPU idle until the second CUDA graph
launch begins. This fragmentation both increases CPU-side
scheduling overhead and prevents cross-graph optimizations
such as kernel fusion.
If the graph break is eliminated and the computation is
captured in a single FX graph, the execution trace in Figure 3b
shows one continuous CUDA graph region. In this case, the
GPU remains fully utilized without falling back to the CPU,
and no device-to-host memory transfers are observed between
regions. As a result, kernel launches are streamlined and
synchronization overhead is removed. Therefore, eliminating
graph breaks is necessary.
IV. GRAPHMEND COMPILER TECHNIQUE
As discussed in § III, graph breaks in PyTorch 2 introduce
significant overhead, undermining the performance gains ex-
pected in PyTorch 2. In this work, we focus on two common
sources of graph breaks and introduce a compiler technique
that eliminates them prior to execution.
A. Classification of Graph Breaks
In practice, most graph breaks arise from two common
situations:
1) Data dependent operations [2], [13]: Constructs whose
outcome depends on runtime values and hence, cannot
be traced.
2) Unsupported operations [2], [13]: Several Python built
in functions or certain functionalities that TorchDynamo
does not support.
In this section, we discuss these types of graph breaks.
1) Data dependent operations:
Dynamo inserts graph
breaks when it encounters control flow that depends on tensor
values, such as if-statements or loops, as well as direct tensor
data accesses (e.g., .item, .data_ptr). Among these cases,
when Dynamo encounters dynamic control flow based on a
tensor value as shown in Figure 2a, we can fix the graph break
by rewriting the code into operations supported by the GPU.
2) Python builtin functions: Some Python built-in functions
such as printing, logging, or issuing warnings can also trigger
graph breaks. Figure 4a shows an example where a debug
print statement in the middle of the function causes a break.
B. Fixing Common Graph Breaks
Among these graph break reasons, there are two main
reasons that are common and fixable which are (1) Data-
dependent Control Flow and (2) Python I/O Operations.
1) Data-dependent Control Flow: Dynamo inserts graph
breaks for constructs such as if-statements or loops where
branching depend on tensor values. Returning to the example
in Figure 2a, the conditional introduces a graph break. In such
cases,we can rewrite the function using tensor operations such
as torch.where, which are supported by the GPU. The graph
break fixed version using torch.where is shown in Figure 2b
for the same function in Figure 2a. This eliminates the graph
break and allows Dynamo to capture a single FX graph.
2) Python I/O Operations: Python I/O functions such as
Printing/logging/issuing warnings will result in a graph break.
An example for this is shown in Figure 4a as there is a debug
print statement in the middle of the function. If we can defer
print statement to the end of the function, then we can avoid
graph breaks as shown in Figure 4b.
These graph-fixing techniques are simple and straightfor-
ward: by applying source-level transformations, we can of-
ten avoid graph breaks. However, TorchDynamo traces at
the bytecode level, which prevents it from leveraging such
transformations and leads to unnecessary graph breaks. For
example, consider Figure 5. The function in Figure 5a uses an
explicit if–else block, while the function in Figure 5c omits the
else and relies on a fall-through return. Although the two ver-
sions differ syntactically, both compile to the same bytecode
3
1
@torch.compile()
2
def f(x, y):
3
x_1 = x*2
4
y_1 = y*2
5
if x.sum() > 10: # <-- graph-break here
6
z = x_1 + y_1
7
else:
8
z = x_1 * y_1
9
return torch.relu(z)
(a) A PyTorch forward pass with data dependent control flow. The
conditional branch on x.sum cannot be captured, causing a graph
break.
1
@torch.compile()
2
def f(x, y):
3
x_1 = x*2
4
y_1 = y*2
5
cond = x.sum > 10
6
z_add = x_1 + y_1
7
z_mul = x_1 * y_1
8
9
# rewrite dynamic control flow using
←-
tensor select
10
z = torch.where(cond, z_add, z_mul)
11
return torch.relu(z)
(b) Eliminating the graph break by rewriting data dependent control
flow with torch.where, keeping the computation in a single FX
graph.
Fig. 2: Comparison of control flow handling in torch.compile: (a) graph break due to Python control flow, and (b) fixed
version using torch.where.
Graph Break
CL
Torch-Compiled Region 0
Torch-Compiled Region 1
CL
Setup
cudaGraphLaunch
DtoH Memcpy
cudaGraph.replay
cudaStreamSynchronize
cudaGraph.replay
cudaGraphLaunch
CPU
GPU
199.714 µs
Sync
forward pass - with 1 graph break
(a)
CL
Torch-Compiled Region 0
Setup
cudaGraphLaunch
cudaGraph.replay
CPU
GPU
104.208 µs
Sync
forward pass - no graph break
(b)
Fig. 3: Profiled traces of forward pass execution across CPU and GPU. (a) Forward pass execution trace of code with graph
breaks in Figure 2a. (b) Forward pass execution trace of equivalent code with graph breaks fixed in Figure 2b.
shown in Figure 5b. Because TorchDynamo operates purely
on bytecode, it cannot distinguish between these semantically
equivalent but syntactically distinct patterns.
Similarly, logging operations introduce host I/O into the
forward pass. Because it is unsafe to alter such effects directly
at the bytecode level, TorchDynamo also treats these sites as
graph breaks. In both cases, the limitation stems from relying
solely on bytecode-level semantics captured just-in-time.
To address this, we implement a compilation technique that
analyzes and transforms programs at a higher-level IR. By
operating on high-level IR, we preserve structural information
lost in bytecode, enabling more sophisticated analyses and
safe source-to-source rewrites that remove graph breaks before
execution. This implementation of the technique is shown in
Figure 6 which is elaborated in the rest of this section.
C. Jac Compilation Pipeline
To implement our compiler technique, we build on the Jac
framework [6], [7], [9], which extends Python with additional
language features through its runtime system. The Jac com-
piler accepts both Jac and Python code, parses it into an
abstract syntax tree (AST), and progressively lowers it into
a unified intermediate representation (UniiR). As illustrated in
the top half of Figure 6, UniiR interleaves three structures:
AST, control-flow graph (CFG), and symbol table. In this
way, structural, semantic, and control-flow information are
combined in a single representation. This integration enables
cross-cutting analyses that are difficult to achieve when these
structures are maintained separately. The CFG is constructed
by linking AST nodes to their successors, embedding control-
flow edges within the tree. This process is implemented in the
CFGBuildPass.
4
1
@torch.compile
2
def fn(x):
3
x = torch.relu(x)
4
print("tensor:", x)
# <-- graph-break
←-
here
5
return torch.sin(x)
(a) Compiled function with a debug print of tensor statistics.
Dynamo inserts a graph break at the print statement.
1
@torch.compile
2
def fn(x):
3
x = torch.relu(x)
4
to_print = "tensor:", x # <-- assign
←-
variable to print
5
y = torch.sin(x)
6
print(to_print)
# <-- print at the end of←-
tracing
7
return y
(b) Fixing the graph break by registering print as a reorderable
logging function. TorchDynamo can now safely reorder the call and
keep the computation in a single FX graph.
Fig. 4: Fixing graph breaks due to Python I/O: (a) direct print causes a graph break, (b) reordering via variable assignment
avoids the break.
1
def branch(x):
2
if x > 0:
3
return 1
4
else:
5
return -1
(a) Python function with if-else branch
0
LOAD_FAST
0 (x)
2
LOAD_CONST
1 (0)
4
COMPARE_OP
4 (>)
6
POP_JUMP_IF_FALSE
14
8
LOAD_CONST
2 (1)
10
RETURN_VALUE
14
LOAD_CONST
3 (-1)
16
RETURN_VALUE
(b) Disassembled Python bytecode
1
def branch(x):
2
if x > 0:
3
return 1
4
return -1
(c) Python function having identical bytecode to (a)
Fig. 5: Bytecode-level limitations of TorchDynamo.(a) (a) and
(c) show two syntactically different functions with equivalent
semantics. (b) Both compile to identical Python bytecode.
Our GRAPHMEND pass takes UniiR as input and per-
forms analyses to identify the two types of graph breaks,
as illustrated in the bottom half of Figure 6. It then applies
transformations on the AST to rewrite the code and eliminate
these breaks. Afterward, UniiR is rebuilt and verified before
being passed back to the Jac compiler pipeline. At that stage,
the pipeline can either compile the program to bytecode for
execution on the Python interpreter or generate a transformed
Python source file.
D. GRAPHMEND Implementation
Our compiler technique consists of three main passes,
illustrated in the bottom half of Figure 6.
Input: UniiR; TorchAttrTable (torch attributes 7→{dynamic, static})
Output: Tags: GraphBreak[DynCtrlFl],
GraphBreak[logger/print]
foreach Dynamo Entry Point r in UniiR do
Initialize worklist with successors of r
while worklist not empty do
Pop node n from worklist; push unseen successors of n
// 1) Dynamic control-flow via
if-conditions
if n is an IfStmt then
E ←subexpressions of cond(n)
dyn ←false
foreach e ∈E do
if e has a torch attribute then
a ←attribute of e
// e.g., .sum
if TorchAttrTable[a] = dynamic then
dyn ←true; break
end
end
end
if not dyn then
if E is input/tensor-value dependent (via
SymbolTable + CFG) then
dyn ←true
end
end
if dyn then
Tag n as GraphBreak[DynCtrlFl]
end
end
// 2) Logger / print side effects
if n is a call to print or logger.* then
Tag n as GraphBreak[logger/print]
end
end
end
Algorithm 1: Fixable Graph-Break Detection
1) Dynamo Entry Point Analysis Pass: To identify potential
TorchDynamo compilation entry points, this pass traverses
the AST and analyzes each node. A node is tagged when a
function or an nn.Module object wrapped or decorated with
torch.compile is encountered.
If no occurrences of
torch.compile are found in the
codebase, GRAPHMEND compilation is skipped. Otherwise,
the tagged nodes serve as entry points for subsequent passes.
2) Graph Break Type Analysis Pass: We implement a com-
pound analysis pass on UniiR to identify potential graph-break
locations, following the procedure in Algorithm 1. Starting
from each Dynamo entry point, the pass traverses the control-
5
Jac Compiler
. . .
@t or ch. compi l e( )
def f or war d( x) :
. . .
i f x. sum( ) >2:
x = x * 2
el se:
x = x / 2
. . .
r et ur n x
. . .
1
2
3
4
5
6
7
8
9
10
11
Pyt hon / Jac
Sour ce Code
Lexer and Parser
Jac AST
SymTabBuildPass
ST
ST
ST
ST
ST
ST
ST
ST
Jac AST
+ Symbol Table
CFGBuildPass
Jac AST
+ Symbol Table
+ Control Flow Graph
Uni i R
PyASTGenPass
PyBytecodeGen
Before
After
Before
After
byt ecode
Compilation/Analysis
Compilation/Analysis
GraphMend
ST
ST
ST
ST
ST
ST
ST
ST
PycodeGen
sour ce
SymTabBuildPass
CFGBuildPass
GraphMend Compiler Technique
AST Transformation Pass
Graph Break Type
Analysis Pass
Fi nd & Tag
Data-dependent Control Flow
Python I/O Operations
t or ch. compi l e( f or war d)
def f or war d
Dynamo Entry Point
Analysis Pass
Fi nd & Tag
Graph-Epilogue Defered Side Effect Transformation
IfStmt
x.sum() > 10
assign
ElseStmt
x
x*2
assign
x
x/2
<parent>
IfStmt
x.sum() > 10
assign
ElseStmt
x_t
x*2
assign
x_f
x/2
<parent>
assign
pred
torch.where
x_t
x_f
pred
Predicated Dynamic Control Flow Transformation
call
print
" Logger message"
<parent>
return
Matmul
x
y
assign
differ
" Logger message"
<parent>
return
Matmul
x
y
assign
ret
call
print
differ
ret
ST
ST
ST
ST
ST
ST
ST
ST
Uni i R
ST
ST
ST
ST
ST
ST
ST
ST
Tr ansf or med
Uni i R
Befor e
After
After
Befor e
Fig. 6: GraphMend compiler integration in the Jac pipeline. The pipeline (top) lowers Python/Jac source code into a unified
intermediate representation (UniiR) with AST, symbol table, and CFG. GraphMend (bottom) analyzes entry points, detects
graph breaks, and applies AST transformations (Predicated Dynamic Control Flow and Graph-Epilogue Deferred Side Effects)
to eliminate breaks before execution.
flow graph (CFG) to locate program points that may induce
graph breaks. For conditional branches, the pass inspects the
subexpressions of the condition. If any involve Torch attributes
known to be dynamic (e.g., .sum()), the branch is marked
as dynamic control flow. Otherwise, a use-def analysis over
the symbol table and CFG determines whether the condition
depends on model inputs or tensor values; if so, it is also
tagged as dynamic. In addition, side-effecting operations such
as print and logger.* calls are detected and annotated
as logging-induced graph breaks as in Algorithm 1.
These annotations guide the subsequent transformation
passes that eliminate graph breaks.
3) AST Transformation Pass: After identifying graph-break
locations, the AST must be modified to eliminate them. As
discussed earlier in this section, these fixes can be applied
through source-level transformations. Following the same ap-
proach, the AST is modified according to the graph-break tags
identified in the previous pass. Based on these tags, one of the
following transformations is performed.
a) Predicated Dynamic Control Flow Transformation:
To address data-dependent control-flow graph breaks, we first
evaluate the branch condition and store it as a predicate.
The control flow is then rewritten using torch.where, which
enables predicated execution of both branches. This process
is illustrated in Figure 6. In the AST Transformation Pass
(Figure 6), the original branch if x.sum() > 5: ... else←-
: ... is reduced by extracting the condition into a symbolic
variable pred (R 1). Next, the two branch assignments are
merged into a single dataflow expression using torch.where←-
(pred, e1, e2). Finally, the if and else AST nodes are
removed, and their child nodes are reconnected to the parent,
producing a transformed tree in which both execution paths
are explicitly represented.
b) Graph-Epilogue Deferred Side Effect Transforma-
tion: For print or logger calls, the message is first stored
in a temporary variable. This variable is then passed to
the outermost forward function and executed at the end of
the function (epilogue) before returning, as shown in Fig-
ure 6. In the example ASTs in Figure 6, the original call
print("Logger message") is rewritten as a deferred assign-
ment deferred = "Logger message". This assignment is then
moved to the epilogue block, where it is printed after graph
execution (R 2). In this way, the logging effect is preserved
while remaining safely outside the traced region. Note that any
computation in the return statement must be moved before the
epilogue, stored in a temporary variable, and then returned
afterward to avoid introducing a graph break.
E. Healing UniiR and Downstream Flow
After the AST transformations, the UniiR is no longer
consistent. Therefore, the symbol table and CFG must be
rebuilt using the same passes that originally created them. As
6
shown in Figure 6, once the GRAPHMEND pass is complete,
the SymTabBuildPass and CFGBuildPass are rerun to
restore UniiR for downstream compilation.
At the end of the Jac compiler pipeline, Python bytecode is
generated for interpretation, enabling TorchDynamo to seam-
lessly operate on the transformed code.
V. EVALUATION
We evaluate GRAPHMEND on real-world models that ex-
hibit graph breaks. Our goals are to quantify (i) how often
GRAPHMEND removes fixable breaks and (ii) the end-to-end
improvement after those fixes.
a) Experimental Setup: We implement GRAPHMEND
on top of Jaseci [6], [7], [9], an open-source research infras-
tructure that provides both compiler and runtime control as a
Python superset. Our experiments are conducted on NVIDIA
RTX 3090 and A40 GPUs, using TorchInductor as the backend
compiler.
b) Benchmark Suite: To construct our benchmark, we
(1) randomly sample 65 Hugging Face [10] models spanning
diverse architectures and task categories; (2) identify models
that trigger FX graph breaks; and (3) retain the eight models
with breaks for evaluation. Table I summarizes the resulting
suite, including the number and causes of graph breaks and
the total operator count. A similar methodology was used in
prior works to construct benchmark suite [2].
c) Profiling Methodology: We profile the forward pass
using PyTorch Profiler [23]. For each model, we evaluate two
configurations: original model and graph breaks fixed version.
Each configuration is executed for 7 iterations: one cold-start
iteration followed by six warm iterations. The profiler records
CPU and CUDA activities and operator-level events; traces are
exported and inspected with Chrome’s trace viewer.
A. GRAPHMEND’s Ability to Fix Graph Breaks
First, we check
GRAPHMEND’s ability to fix graph
breaks across the benchmark suite. Table II shows that
GRAPHMEND fully eliminates graph breaks in 6 of the
8 models. It partially fixes longformer-base-4096
and leaves moe-minicpm-x4-base unchanged. By cause,
GRAPHMEND removes all breaks due to logging state-
ments and data-dependent control flow. The remaining breaks:
tensor.item() and dynamic-shape operators could not be
fixed by code transformations. For the performance analysis
that follows, we report results on the seven models whose
break counts were fully eliminated or reduced, excluding
moe-minicpm-x4-base.
B. Latency Improvement per Forward Pass Using GRAPH-
MEND to Fix Graph Breaks
We measure Relative Latency Improvement (%) as
100 × Tbreak −Tfixed
Tbreak
,
where Tbreak is the per-forward-pass latency for the bench-
marked model with graph breaks, and Tfixed is the latency after
applying GRAPHMEND fixes.
We report the relative improvement per forward pass for
break-fixable models in the benchmark suite, using PyTorch
Profiler data for both cold and warm (steady-state) runs.
a) Cold Start Latency Improvement: Figure 7a reports
results on both RTX 3090 and A40 GPUs. Across models, we
observe a relative latency reduction of about 30–75% for both
GPUs, indicating substantial gains in cold runs. At cold start,
each graph break incurs significant overhead because every
CUDA graph requires a separate record and caching. After
eliminating these breaks, only a single (or fewer) CUDA Graph
capture is needed, resulting in fewer records and reduced
capture time, which minimizes overhead. Consequently, we
achieve larger relative latency improvements per forward pass
in cold runs. A more detailed analysis is provided in § V-D.
b) Steady-State Latency Improvement: Figure 7b shows
steady-state improvements on RTX 3090 and A40 GPUs,
ranging from about 2.5% to 25% across models. After the cold
run, the captured graph is cached and replayed, removing the
high capture overhead present initially. However, when graph
breaks persist, execution still incurs GPU–CPU switches (as
discussed in
§ V-D), which add latency. Eliminating these
breaks with GRAPHMEND removes such switches, yielding
additional improvements in relative latency per forward pass.
Notably,
tiny-random-PegasusForCausalLM
shows over 25% improvement. Since this model is very
small, with only 64 operators, the fixed costs from graph
breaks (such as host–device synchronization, and extra graph
launches) take up a large portion of the runtime. Removing
these breaks therefore gives a huge speedup per forward pass.
C. Relative Throughput Improvement by Fixing Graph Breaks
We measure Relative Throughput Improvement (%) as
100 × Tpfixed −Tpbreak
Tpbreak
,
where Tpbreak is the throughput (output tokens per second)
for the model with graph breaks, and Tpfixed is the throughput
after applying GRAPHMEND fixes.
We measure the relative throughput across the models in
the benchmark suite that are fixable through GRAPHMEND for
both RTX 3090 and A40. Figure 7b shows a relative through-
put improvement of about 5-8% across all models on both
GPUs. This improvement follows from the relative latency
reduction per forward pass observed in Section V-B: as each
step becomes faster, the end-to-end tokens per second increase.
The variation of improvement across models highlights an
important point: the magnitude of improvement depends on the
extent to which a graph break affects the forward path. When
the impact of a break is small, the corresponding throughput
gain is also limited.
For example, in both Phi-4-mini-instruct and
Qwen-Audio-Chat, the root cause of the breaks is
dynamic control flow. We observe a 7.5%–8% improve-
ment for Qwen-Audio-Chat, compared to 5%–6% for
Phi-4-mini-instruct from both GPUs. The difference
arises because the break in Phi-4-mini-instruct oc-
curs near the beginning of the forward function, whereas in
7
TABLE I: Models in benchmark suite
Benchmark Model Name
Graph break count
Graph break reasons
Total Operators
biogpt [15]
2
logger calls
408
blenderbot-400M-distill [16]
3
logger calls
489
flan-t5-large [17]
3
logger calls
2130
longformer-base-409 [18]
5
logger calls, tensor.item() call
913
moe-minicpm-x4-base [19]
15
Dynamic shape operator
280
Phi-4-mini-instruct [20]
5
Dynamic control flow
1541
Qwen-Audio-Chat [21]
2
Dynamic control flow
1647
tiny-random-PegasusForCausalLM [22]
2
logger calls
64
biogpt
blenderbot-400M-distill
flan-t5-large
longformer
Phi-4-mini-instruct
Qwen-Audio-Chat
t-r-PegasusForCausalLM
0
20
40
60
Latency Improvement (%)
Nvidia A40
Nvidia RTX 3090
(a)
biogpt
blenderbot-400M-distill
flan-t5-large
longformer
Phi-4-mini-instruct
Qwen-Audio-Chat
t-r-PegasusForCausalLM
0
10
20
Latency Improvement (%)
Nvidia A40
Nvidia RTX 3090
(b)
Fig. 7: Latency improvements from GRAPHMEND (a) Cold-start latency reductions (b) Steady-state latency reductions across
benchmark models on RTX 3090 and A40 GPUs.
TABLE II: Graph break counts in the original model and fix
rates achieved by applying GRAPHMEND across the bench-
mark suite.
Benchmark Model
Graph Breaks
Fixed (%)
biogpt
2
100
blenderbot-400M-distill
3
100
flan-t5-large
3
100
longformer-base-4096
5
40
moe-minicpm-x4-base
15
0
Phi-4-mini-instruct
5
100
Qwen-Audio-Chat
2
100
tiny-random-PegasusForCausalLM
2
100
biogpt
blenderbot-400M-distill
flan-t5-large
longformer
Phi-4-mini-instruct
Qwen-Audio-Chat
t-r-PegasusForCausalLM
0.0
2.5
5.0
7.5
Throughput Improvement (%)
Nvidia A40
Nvidia RTX 3090
Fig. 8: Relative throughput improvement across benchmark
suite
Qwen-Audio-Chat it occurs in the middle, where its impact
is higher. Thus, the location of a break determines its effect:
fixing some breaks yields larger throughput gains, while fixing
others has a smaller impact.
tiny-random-PegasusForCausalLM achieves more
than 25% lower forward-pass latency, but throughput gains are
smaller. This is because throughput also depends on prefill,
decode, CPU tasks (e.g., sampling, tokenization), and data
transfers, which are unaffected. For such a small model, these
fixed costs dominate, so removing the break speeds up GPU
compute but has less impact on end-to-end tokens/s, especially
when decode dominates generation.
D. Overhead Analysis of Graph Breaks
We conducted an analysis to evaluate how fixing graph
breaks with GRAPHMEND reduces overhead. We analyze
Qwen-Audio-Chat model running on an A40 GPU which
has 2 graph breaks.
To ensure a consistent comparison in this section, all activity
plots (Figures 11a, 11b, 11c, and 11d) are drawn only for
the time interval between the first and second CUDA graph
executions in the original model; for the fixed model, we
consider the identical time region.
a) Steady-state run overhead analysis: Figure 9a shows
the profiler trace for the original model. Execution is split
into three CUDA graphs because of two graph breaks, with
idle GPU periods between the first and second FX graph
executions. This is clearly visible in Figure 11a, the GPU
becomes idle while the CPU handles control flow in eager
mode, including a device-to-host (D2H) memcpy triggered by
dynamic control flow (graph break reason for this model). This
CPU fallback introduces an overhead.
After applying GRAPHMEND, the rewritten code replaces
the dynamic branch with a GPU-supported torch.where.
The resulting trace (Figure 9b) shows one continuous CUDA
graph. As confirmed in Figure 11c, GPU activity remains un-
interrupted and CPU fallbacks disappear. Thus, GRAPHMEND
8
Memcpy DtoH
2nd fx-graph
Execution
3rd fx-graph
Execution
( 93.121 ms )
1st fx-graph Execution
(a) Profiler tracer for one iteration of warm runs for Qwen-Audio-Chat original model before fixing the graph break run in A40 GPU
1st fx-graph Execution
( 92.5 ms )
(b) Profiler tracer for one iteration of warm runs for Qwen-Audio-Chat model after fixing the graph break run in A40 GPU
Fig. 9: (a) Profile traces.
eliminates the overhead of CPU–GPU context switching dur-
ing steady-state runs.
b) Cold run overhead analysis: Cold runs introduce
even larger overheads. In the profiler trace of original model
(Figure 10), each graph break triggers a separate CUDA Graph
recording and caching, leading to long idle GPU phases and
heavy CPU involvement as shown in Figure 11b. In this
interval, we observe that GPU idle time dominates, while
the CPU is heavily active, reflecting significant overhead.
By contrast, the fixed model (Figure 11d) executes with a
single capture, avoiding repeated records and reducing idle
GPU time. Overall, GRAPHMEND reduces overhead more
significantly in cold runs than in steady state, explaining
the substantial cold-start latency reductions per forward pass
reported in Figure 7a.
E. GPU Runtime Kernel Statistics Analysis
We
analyze
how
fixing
graph
breaks
affects
GPU
kernel
counts,
ordering,
and
scheduling,
and
whether
this
enables
new
optimization
opportunities.
Using
the
Phi-4-mini-instruct model on an A40 GPU, the pro-
filer reports 404 executed kernels for the original model and
393 for the fixed model with a reduction of 11. This drop
indicates more fusion and fewer launches, as removing breaks
allows the compiler to capture larger regions and emit fewer,
larger kernels.
Figure 12 illustrates this change. In the original model,
3 small kernels appear that are replaced in the fixed model
by a single fused kernel. We also observe reordering: one
kernel from the original run shifts upward in the fixed run.
With one CUDA Graph, the runtime can reorder independent
regions and pack launches more effectively, reducing stalls and
improving locality.
Graph breaks force separate CUDA Graph captures, block-
ing fusion across the break and restricting scheduling to
smaller windows. Removing them allows larger portions of the
forward pass to be captured together, enabling the compiler
and runtime to fuse operations, remove redundant launches,
and reorder independent work. As a result, kernel gaps shrink
and device utilization improves.
In summary, fixing graph breaks with GRAPHMEND not
only reduces launch overhead but also exposes new optimiza-
tion opportunities, underscoring the importance of eliminating
such breaks.
VI. RELATED WORK
PyTorch has long sought to balance performance with
Python’s flexibility. TorchScript’s tracing and scripting en-
abled ahead-of-time graph capture but struggled with dy-
namic control flow and unsupported constructs [11], [12].
PyTorch 2 introduced TorchDynamo for bytecode-level cap-
ture and TorchInductor for backend compilation [1], [2],
9
Text
Text
Original
Fixed
-Audio-Chat
Fig. 10: Profiler tracer of cold run of Qwen-Audio-Chat model run on A40 GPU.
0.0
0.5
1.0
Activity
0.00
0.05
0.10
0.15
0.20
0.25
0.30
0.35
0.40
0.45
Time (ms)
0.0
0.5
1.0
Activity
CPU
GPU
(a) Steady State - Original Model
0.0
0.5
1.0
Activity
300
400
500
600
700
800
900
Time (ms)
0.0
0.5
1.0
Activity
CPU
GPU
(b) Cold Start - Original Model
0.0
0.5
1.0
Activity
0.00
0.05
0.10
0.15
0.20
0.25
0.30
0.35
0.40
0.45
Time (ms)
0.0
0.5
1.0
Activity
CPU
GPU
(c) Steady State - Fixed Model
0.0
0.5
1.0
Activity
300
400
500
600
700
800
900
Time (ms)
0.0
0.5
1.0
Activity
CPU
GPU
(d) Cold Start - Fixed Model
Fig. 11: CPU/GPU activity time traces for Qwen-Audio-Chat model run on A40 GPU.
ampere_fp16 ...
ampere_fp16 ...
triton_per_fused ..
triton_poi_fused ..
triton_per_fused ..
triton_per_fused ..
triton_poi_fused ..
triton_red_fused ..
...
...
triton_poi_fused ..
triton_poi_fused ..
triton_per_fused ..
triton_poi_fused ..
Graph Break 1
Graph Break 2
Graph Break 3
Fig. 12: Kernel Fusion and Reordering Visualization for Phi-
4-mini-instruct model.
yet graph breaks remain common when encountering data-
dependent branches or Python I/O. Our work reduces these
breaks by rewriting source code before execution, allowing
more continuous FX graphs.
General
compiler
infrastructures
such
as
XLA
[24],
TVM [25], and MLIR [26] provide IR-based optimization,
while Glow [27], TASO [28], and ONNX Runtime [29]
extend portability across hardware and frameworks. These
systems assume large graphs are already available, whereas
we restructure PyTorch code to maximize graph continuity so
their optimizations can be applied more effectively.
Dynamic models motivate systems like GRAPE [30] and
Triton [31], [32], which manage runtime variability through
10
efficient kernel generation. PyTorch’s compiler stack still falls
back to eager execution in such cases, while our method
removes common break sources by rewriting branches into
tensorized forms that remain graph-compatible.
Program
transformations
more
broadly,
as
seen
in
Halide [33], Lantern [34], and DLVM [30], show the po-
tential of semantics-preserving rewrites for optimization. Our
approach follows this tradition but targets PyTorch specifically,
introducing lightweight transformations that enable TorchDy-
namo to capture larger uninterrupted graphs.
VII. CONCLUSION
We presented GRAPHMEND, a source-level compiler tech-
nique for PyTorch that systematically eliminates common
FX graph breaks before execution. By restructuring code
constructs such as dynamic control flow and Python I/O opera-
tions, GRAPHMEND reduces fragmentation in the compilation
pipeline and enables larger unified graphs to be captured by
TorchDynamo. Our evaluation across a diverse suite of Hug-
ging Face models demonstrates that GRAPHMEND removes
nearly all fixable breaks and consistently improves runtime
efficiency, with substantial latency and throughput gains on
modern GPUs.
REFERENCES
[1] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan,
T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. K¨opf,
E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner,
L. Fang, J. Bai, and S. Chintala, PyTorch: an imperative style, high-
performance deep learning library.
Red Hook, NY, USA: Curran
Associates Inc., 2019.
[2] J. Ansel, E. Yang, H. He, N. Gimelshein, A. Jain, M. Voznesensky,
B. Bao, P. Bell, D. Berard, E. Burovski, G. Chauhan, A. Chourdia,
W. Constable, A. Desmaison, Z. DeVito, E. Ellison, W. Feng, J. Gong,
M. Gschwind, B. Hirsh, S. Huang, K. Kalambarkar, L. Kirsch,
M. Lazos, M. Lezcano, Y. Liang, J. Liang, Y. Lu, C. K. Luk, B. Maher,
Y. Pan, C. Puhrsch, M. Reso, M. Saroufim, M. Y. Siraichi, H. Suk,
S. Zhang, M. Suo, P. Tillet, X. Zhao, E. Wang, K. Zhou, R. Zou,
X. Wang, A. Mathews, W. Wen, G. Chanan, P. Wu, and S. Chintala,
“Pytorch 2: Faster machine learning through dynamic python bytecode
transformation and graph compilation,” in Proceedings of the 29th ACM
International Conference on Architectural Support for Programming
Languages and Operating Systems, Volume 2, ser. ASPLOS ’24.
New
York, NY, USA: Association for Computing Machinery, 2024, p.
929–947. [Online]. Available: https://doi.org/10.1145/3620665.3640366
[3] J. K. Reed, Z. DeVito, H. He, A. Ussery, and J. Ansel, “Torch.fx:
Practical program capture and transformation for deep learning in
python,” 2022. [Online]. Available: https://arxiv.org/abs/2112.08429
[4] B. Zheng, C. H. Yu, J. Wang, Y. Ding, Y. Liu, Y. Wang, and
G. Pekhimenko, “Grape: Practical and efficient graphed execution for
dynamic deep neural networks on gpus,” in Proceedings of the 56th
Annual IEEE/ACM International Symposium on Microarchitecture, ser.
MICRO ’23.
New York, NY, USA: Association for Computing
Machinery, 2023, p. 1364–1380. [Online]. Available: https://doi.org/10.
1145/3613424.3614248
[5] A. Agrawal, A. N. Modi, A. Passos, A. Lavoie, A. Agarwal,
A. Shankar, I. Ganichev, J. Levenberg, M. Hong, R. Monga, and S. Cai,
“Tensorflow eager: A multi-stage, python-embedded dsl for machine
learning,” 2019. [Online]. Available: https://arxiv.org/abs/1903.01855
[6] J. Mars, Y. Kang, R. Daynauth, B. Li, A. Mahendra, K. Flautner, and
L. Tang, “The jaseci programming paradigm and runtime stack: Build-
ing scale-out production applications easy and fast,” IEEE Computer
Architecture Letters, vol. 22, no. 2, pp. 101–104, 2023.
[7] J.
Mars,
“Extending
data
spatial
semantics
for
scale
agnostic
programming,” 2025. [Online]. Available: https://arxiv.org/abs/2504.
03109
[8] Jaseci Labs, “Jaseci: The official jaseci code repository,” https://github.
com/Jaseci-Labs/jaseci, 2025.
[9] J. L. Dantanarayana, Y. Kang, K. Sivasothynathan, C. Clarke, B. Li,
S. Kashmira, K. Flautner, L. Tang, and J. Mars, “Mtp: A meaning-typed
language abstraction for ai-integrated programming,” 2025. [Online].
Available: https://arxiv.org/abs/2405.08965
[10] Hugging Face, “Hugging face: Open-source ai community and tools,”
https://huggingface.co, 2025, accessed: 2025-09-12.
[11] PyTorch Team, “torch.jit.trace — pytorch documentation,” https://docs.
pytorch.org/docs/stable/generated/torch.jit.trace.html,
2025,
accessed:
2025-09-08.
[12] PyTorch-TorchScript Team, “torch.jit.script — pytorch documentation,”
https://docs.pytorch.org/docs/stable/generated/torch.jit.script.html, 2025,
accessed: 2025-09-08.
[13] PyTorch Contributors, “Torch compile troubleshooting — pytorch
documentation,”
https://docs.pytorch.org/docs/stable/torch.compiler
troubleshooting.html, 2025, accessed: 2025-09-08.
[14] A. Ghosh, A. Nayak, A. Panwar, and A. Basu, “Pygraph: Robust
compiler support for cuda graphs in pytorch,” 2025. [Online]. Available:
https://arxiv.org/abs/2503.19779
[15] R. Luo, L. Sun, Y. Xia, T. Qin, S. Zhang, H. Poon, and T.-Y.
Liu, “BioGPT: generative pre-trained transformer for biomedical text
generation and mining,” Briefings in Bioinformatics, vol. 23, no. 6, 09
2022, bbac409. [Online]. Available: https://doi.org/10.1093/bib/bbac409
[16] Meta AI, “facebook/blenderbot-400m-distill,” https://huggingface.co/
facebook/blenderbot-400M-distill, 2020.
[17] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li,
X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai,
M. Suzgun, X. Chen, A. Chowdhery, S. Narang, G. Mishra, A. Yu,
V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean,
J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei, “Scaling
instruction-finetuned language models,” 2022. [Online]. Available:
https://arxiv.org/abs/2210.11416
[18] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long-
document transformer,” arXiv:2004.05150, 2020.
[19] babybirdprd, “babybirdprd/moe-minicpm-x4-base,” https://huggingface.
co/babybirdprd/moe-minicpm-x4-base, 2025.
[20] Microsoft,
“Microsoft
phi-4-mini-instruct,”
https://huggingface.co/
microsoft/Phi-4-mini-instruct, 2025.
[21] Y. Chu, J. Xu, X. Zhou, Q. Yang, S. Zhang, Z. Yan, C. Zhou, and J. Zhou,
“Qwen-audio: Advancing universal audio understanding via unified
large-scale audio-language models,” arXiv preprint arXiv:2311.07919,
2023.
[22] H. Face, “hf-internal-testing/tiny-random-pegasusforcausallm,” https:
//huggingface.co/hf-internal-testing/tiny-random-PegasusForCausalLM,
2025, accessed: 2025-09-12; internal testing minimal model.
[23] “Pytorch profiler: A performance debugging and analysis tool for
pytorch,” https://pytorch.org/docs/stable/profiler.html, 2021, accessed:
2025-09-12.
[24] O. Community, “Stablehlo and openxla,” 2023. [Online]. Available:
https://openxla.org
[25] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan,
L. Wang, Y. Hu, L. Ceze, C. Guestrin, and A. Krishnamurthy, “Tvm:
An automated end-to-end optimizing compiler for deep learning,” in
Proceedings of the 13th USENIX Symposium on Operating Systems
Design and Implementation (OSDI), 2018, pp. 578–594.
[26] C. Lattner et al., “Mlir: A compiler infrastructure for the end of moore’s
law,” arXiv preprint arXiv:2002.11054, 2020.
[27] N. Rotem et al., “Glow: Graph lowering compiler techniques for neural
networks,” arXiv preprint arXiv:1805.00907, 2018.
[28] Z. Jia, S. Lin, C. R. Qi, and A. Aiken, “Taso: Optimizing deep
learning computation with automatic generation of graph substitutions,”
in Proceedings of the 27th ACM Symposium on Operating Systems
Principles (SOSP), 2019, pp. 47–62.
[29] Microsoft, “Onnx runtime: High performance inference engine,” 2018.
[Online]. Available: https://onnxruntime.ai
[30] Z. Zhao et al., “Dlvm: A modern compiler ir for deep learning
frameworks,” arXiv preprint arXiv:1711.03016, 2017.
[31] P. Tillet et al., “Triton: An intermediate language and compiler for tiled
neural network computations,” in ICML Workshop on Systems for ML,
2019.
[32] J. Song et al., “Hidet: Task-mapping programming paradigm for deep
learning tensor programs,” in Proceedings of the 18th USENIX Sympo-
sium on Operating Systems Design and Implementation (OSDI), 2024.
[33] J. Ragan-Kelley et al., “Halide: A language and compiler for optimizing
parallelism, locality, and recomputation,” in Proceedings of the 34th
11
ACM SIGPLAN Conference on Programming Language Design and
Implementation (PLDI), 2013, pp. 519–530.
[34] F. Wang et al., “Lantern: A search-based compiler for deep learning,” in
Advances in Neural Information Processing Systems (NeurIPS), 2018,
pp. 6035–6045.
12
|
GraphMend: Code Transformations for Fixing Graph Breaks in PyTorch 2 Savini Kashmira -This paper presents GraphMend, a high-level compiler that eliminates FX graph breaks in PyTorch 2 programs. Although PyTorch 2 introduced TorchDynamo and TorchInductor to enable just-in-time graph compilation, unresolved dynamic control flow and unsupported Python constructs often fragment models into multiple FX graphs. These fragments force frequent fallbacks to eager mode, incur costly CPU-to-GPU synchronizations, and reduce optimization opportunities. GraphMend addresses this limitation by analyzing and transforming source code before execution. Built on the Jac compilation framework, GraphMend introduces two code transformations that remove graph breaks due to dynamic control flow and Python I/O functions. This design allows PyTorch's compilation pipeline to capture larger, uninterrupted FX graphs without requiring manual refactoring by developers. Evaluation across eight Hugging Face models shows that GraphMend removes all fixable graph breaks due to dynamic control flow and Python I/O functions, driving the break count to 0 in 6 models and reducing it from 5 to 2 in another model. On NVIDIA RTX 3090 and A40 GPUs, GraphMend achieves up to 75% latency reductions and up to 8% higher end-to-end throughput. These results demonstrate that high-level code transformation is an effective complement to PyTorch's dynamic JIT compilation pipeline, substantially improving both usability and performance. Index Terms-Code Transformation, Compiler for AI, PyTorch 2, FX Graph, Graph Breaks I. INTRODUCTION PyTorch has emerged as one of the most widely adopted deep learning frameworks in both academia and industry, primarily due to its ease of use and flexibility [1]. However, its general-purpose design incurs efficiency overheads, leaving PyTorch workloads less optimized than handcrafted GPU implementations. To bridge this gap, PyTorch 2 [2] introduced a Python Just-in-Time (JIT) compilation pipeline that translates model operations into optimized GPU code, achieving performance close to handcrafted implementations. One of the central intermediate representations (IR) in this pipeline is the FX graph [3]. TorchDynamo intercepts and symbolically evaluates Python bytecode to create an FX graph. The FX graph is then passed as the IR to a backend compiler, by default TorchInductor [2], which applies a series of graph-level and kernel-level optimizations before lowering the computation to efficient kernels for GPU execution. A key component in the PyTorch compilation pipeline is the capture of code into an FX graph through symbolic evaluation. It is important to avoid splitting the forward function into smaller graphs, since unsupported code triggers execution to fall back to eager mode, fragmenting the computation. For example, TorchDynamo often encounters difficulties when resolving conditional jumps, as control flow depending on dynamic values cannot be statically determined [4]. In such cases like dynamic control flow or calls to certain Python builtins, TorchDynamo cannot capture the affected code into the FX graph. Instead, it inserts a graph break, which splits the computation into multiple fragments [2]. As a result, a single forward function may be compiled into several disjoint FX graphs rather than one unified graph. The compiled FX graphs run on the optimized backend, while the regions between them execute in standard PyTorch eager mode [1]. Each fallback to eager execution requires GPU kernels to synchronize with the CPU to fetch and schedule subsequent operations. These repeated host-device synchronizations introduce substantial overhead, since control is repeatedly transferred between CPU and GPU instead of executing long fused kernels entirely on the device. Moreover, graph breaks reduce optimization opportunities, as each FX graph is compiled in isolation, preventing cross-graph fusion and global optimization [2], [5]. Avoiding graph breaks is essential for realizing the full performance potential of PyTorch 2. The current PyTorch compilation pipeline begins at the bytecode level and does not exploit source-level information. As a result, when highlevel constructs such as conditional branches are lowered into jump instructions at the bytecode level, TorchDynamo often cannot statically resolve them, causing graph breaks. Our key insight is that compilers should not limit transformations to the bytecode level. Instead, more holistic compilation strategies that analyze and transform higher-level program representations closer to the source code can expose additional optimization opportunities. By incorporating source17 Sep 2025 level analysis and transformation, high-level code regions can be systematically rewritten into forms that enable more effective optimization and unified graph generation within the PyTorch 2 pipeline. Building on this insight, we introduce GRAPHMEND, a high-level compiler technique for PyTorch 2 models that eliminates two common types of graph breaks those caused by dynamic control flow and Python I/O operations through program analysis and transformation. Unlike standard PyTorch 2, which captures execution at runtime to construct graphs for JIT compilation, GRAPHMEND restructures Python code before execution. We implement GRAPHMEND within the Jac framework [6]-[8], a language infrastructure that provides compiler and runtime support for Python programs. The Jac compiler constructs an abstract syntax tree (AST) to capture program structure and a control-flow graph (CFG) to represent execution paths, merging them into a unified IR [9]. Multiple compiler passes operate on this IR to analyze and transform the program, and GRAPHMEND extends this pipeline with additional passes that detect code patterns likely to cause graph breaks. Specifically, GRAPHMEND introduces compiler passes that apply two key transformations: Predicated Dynamic Control Flow and Graph-Epilogue Deferred Side Effects, which systematically eliminate graph breaks. The transformed program is then compiled into standard Python bytecode and executed by the regular CPython interpreter, seamlessly integrating with the existing PyTorch 2 compilation pipeline. This design enables GRAPHMEND to automatically restructure code before it reaches TorchDynamo. We evaluate GRAPHMEND on a benchmark suite of 8 Hugging Face [10] models that trigger FX graph breaks. GRAPHMEND removes all fixable breaks, reducing the break count to 0 in 6 models and from 5 to 2 in another, with 1 remaining unfixed. On NVIDIA RTX 3090 and A40 GPUs, it achieves 30-75% lower cold-start forward latency, 2.5-25% lower steady-state latency, and 5-8% higher end-to-end throughput. Improvements are larger when a graph break lands in the middle of a hot function where its disruption is highest, and smaller when the break is less disruptive; very small models also tend to see larger relative reductions in per forward pass latency because CPU to GPU handoff dominates a greater share of their runtime. Our contributions are, (1) We introduce a benchmark suite of eight Hugging Face models that naturally exhibit graph breaks, providing a realistic and reproducible testbed for evaluating PyTorch compilation. (2) We design and implement the GRAPHMEND compilation technique, which automatically restructure high level code to eliminate common graph breaks. (3) We conduct comprehensive evaluations on this benchmark suite using real world models and hardware, demonstrating the effectiveness of our approach in reducing graph breaks and improving performance. II. BACKGROUND Optimization pipelines in PyTorch aim to capture execution graphs that can be globally optimized by compilers, unlike eager mode where each operation runs immediately and independently [1], [3]. A. Graph Capturing Techniques in PyTorch Before PyTorch 2 Early approaches to graph capture were limited. torch←- .jit.trace [11] used record-replay at the C++ dispatcher level but ignored Python control flow and often produced incomplete or misleading graphs. torch.jit.script [12] analyzed Python ASTs but followed an "all-or-nothing" rule: if a single construct was unsupported, capture failed entirely. To address these limitations, PyTorch introduced the torch.fx [3] module, enabling symbolic execution of Python code to generate an FX graph, a higher-level IR. Although more flexible, symbolic_trace could still misrepresent certain operations. B. FX Graph IR An FX graph represents operations as nodes connected by data dependencies [2], [3]. Unlike eager execution, which evaluates operations sequentially, an FX graph exposes the computation as a data flow structure. This global view enables the compiler backend to apply optimizations such as operation fusion and kernel specialization. To illustrate, consider the simple PyTorch forward function in Figure 1a. When captured by TorchDynamo, it is transformed into the FX graph shown in Figure 1b, where each operation is represented as a node in the IR. 1 @torch.compile() 2 def forward(x, y): 3 z = x*2 + y*2 4 return torch.relu(z) (a) Forward function in PyTorch graph(): %x : torch.Tensor %y : torch.Tensor %mul_x : torch.Tensor = torch.ops.aten.mul. Tensor_Scalar(%x, 2) %mul_y : torch.Tensor = torch.ops.aten.mul. Tensor_Scalar(%y, 2) %z : torch.Tensor = torch.ops.aten.add.Tensor (%mul_x, %mul_y) %out : torch.Tensor = torch.ops.aten.relu. default(%z) return (%out) (b) Corresponding FX graph IR Fig. 1: A simple forward function (a) and its compiled FX graph IR (b). This FX graph serves as the foundation for subsequent compiler passes that optimize and lower the computation into efficient GPU kernels. C. TorchDynamo in PyTorch 2 PyTorch 2 [2] introduced TorchDynamo, Python bytecodelevel tracer tightly integrated with CPython. It symbolically evaluates Python bytecode to extract PyTorch operations into 2 FX graphs, while delegating unsupported constructs back to Python. This avoids incomplete graphs but may still fragment the computation by generating partial graphs by inserting graph-breaks. III. MOTIVATING EXAMPLE As discussed in § II, TorchDynamo attempts to capture a single FX graph for the forward function but inserts graph breaks whenever it encounters unsupported constructs, forcing execution to fall back to eager mode [2]. A. FX graph breaks Let us consider the modified program in Figure 2a, which adds a conditional branch to the forward pass of Figure 1a. When Dynamo tries to symbolically evaluate the function to capture FX graph, it cannot capture the conditional branch because the outcome depends on the runtime value of the tensor expression x.sum. Therefore, it inserts a graph break. As a result, the computation before the branch is recorded in one FX graph, the conditional is executed in eager mode, and the remaining operations are traced into a second FX graph [2], [13]. B. Impact of graph breaks The backend compiler (TorchInductor by default) compiles each captured FX graph separately and lowers it to GPU executables such as fused Triton kernels [2], [14]. Each compiled FX graph corresponds to a torch.compiled region. When graph breaks occur, a single Python function results in multiple compiled regions separated by segments of unsupported Python code [13]. During runtime, the first compiled region runs on the GPU, after which control returns to the Python interpreter to execute unsupported code in eager mode before switching back to the GPU for subsequent compiled regions. These context switches introduce overhead and lower GPU utilization per forward pass [2]. This effect is illustrated in Figure 3a, which shows the CPU-GPU profiling trace of the function in Figure 2a. The trace reveals two separate torch.compile regions, corresponding to two FX graphs. The graph break is visible in the device-to-host (D2H) memcpy following the first FX graph, where control returns to the CPU to execute unsupported code, leaving the GPU idle until the second CUDA graph launch begins. This fragmentation both increases CPU-side scheduling overhead and prevents cross-graph optimizations such as kernel fusion. If the graph break is eliminated and the computation is captured in a single FX graph, the execution trace in Figure 3b shows one continuous CUDA graph region. In this case, the GPU remains fully utilized without falling back to the CPU, and no device-to-host memory transfers are observed between regions. As a result, kernel launches are streamlined and synchronization overhead is removed. Therefore, eliminating graph breaks is necessary. IV. GRAPHMEND COMPILER TECHNIQUE As discussed in § III, graph breaks in PyTorch 2 introduce significant overhead, undermining the performance gains expected in PyTorch 2. In this work, we focus on two common sources of graph breaks and introduce a compiler technique that eliminates them prior to execution. A. Classification of Graph Breaks In practice, most graph breaks arise from two common situations: 1) Data dependent operations [2], [13]: Constructs whose outcome depends on runtime values and hence, cannot be traced. 2) Unsupported operations [2], [13]: Several Python built in functions or certain functionalities that TorchDynamo does not support. In this section, we discuss these types of graph breaks. 1) Data dependent operations: Dynamo inserts graph breaks when it encounters control flow that depends on tensor values, such as if-statements or loops, as well as direct tensor data accesses (e.g., .item, .data_ptr). Among these cases, when Dynamo encounters dynamic control flow based on a tensor value as shown in Figure 2a, we can fix the graph break by rewriting the code into operations supported by the GPU. 2) Python builtin functions: Some Python built-in functions such as printing, logging, or issuing warnings can also trigger graph breaks. Figure 4a shows an example where a debug print statement in the middle of the function causes a break. B. Fixing Common Graph Breaks Among these graph break reasons, there are two main reasons that are common and fixable which are (1) Datadependent Control Flow and (2) Python I/O Operations. 1) Data-dependent Control Flow: Dynamo inserts graph breaks for constructs such as if-statements or loops where branching depend on tensor values. Returning to the example in Figure 2a, the conditional introduces a graph break. In such cases,we can rewrite the function using tensor operations such as torch.where, which are supported by the GPU. The graph break fixed version using torch.where is shown in Figure 2b for the same function in Figure 2a. This eliminates the graph break and allows Dynamo to capture a single FX graph. 2) Python I/O Operations: Python I/O functions such as Printing/logging/issuing warnings will result in a graph break. An example for this is shown in Figure 4a as there is a debug print statement in the middle of the function. If we can defer print statement to the end of the function, then we can avoid graph breaks as shown in Figure 4b. These graph-fixing techniques are simple and straightforward: by applying source-level transformations, we can often avoid graph breaks. However, TorchDynamo traces at the bytecode level, which prevents it from leveraging such transformations and leads to unnecessary graph breaks. For example, consider Figure 5. The function in Figure 5a uses an explicit if-else block, while the function in Figure 5c omits the else and relies on a fall-through return. Although the two versions differ syntactically, both compile to the same bytecode 3 1 @torch.compile() 2 def f(x, y): 3 x_1 = x*2 4 y_1 = y*2 5 if x.sum() > 10: # 10 6 z_add = x_1 + y_1 7 z_mul = x_1 * y_1 8 9 # rewrite dynamic control flow using ←- tensor select 10 z = torch.where(cond, z_add, z_mul) 11 return torch.relu(z) (b) Eliminating the graph break by rewriting data dependent control flow with torch.where, keeping the computation in a single FX graph. Fig. 2: Comparison of control flow handling in torch.compile: (a) graph break due to Python control flow, and (b) fixed version using torch.where. Graph Break CL Torch-Compiled Region 0 Torch-Compiled Region 1 CL Setup cudaGraphLaunch DtoH Memcpy cudaGraph.replay cudaStreamSynchronize cudaGraph.replay cudaGraphLaunch CPU GPU 199.714 μs Sync forward pass - with 1 graph break (a) CL Torch-Compiled Region 0 Setup cudaGraphLaunch cudaGraph.replay CPU GPU 104.208 μs Sync forward pass - no graph break (b) Fig. 3: Profiled traces of forward pass execution across CPU and GPU. (a) Forward pass execution trace of code with graph breaks in Figure 2a. (b) Forward pass execution trace of equivalent code with graph breaks fixed in Figure 2b. shown in Figure 5b. Because TorchDynamo operates purely on bytecode, it cannot distinguish between these semantically equivalent but syntactically distinct patterns. Similarly, logging operations introduce host I/O into the forward pass. Because it is unsafe to alter such effects directly at the bytecode level, TorchDynamo also treats these sites as graph breaks. In both cases, the limitation stems from relying solely on bytecode-level semantics captured just-in-time. To address this, we implement a compilation technique that analyzes and transforms programs at a higher-level IR. By operating on high-level IR, we preserve structural information lost in bytecode, enabling more sophisticated analyses and safe source-to-source rewrites that remove graph breaks before execution. This implementation of the technique is shown in Figure 6 which is elaborated in the rest of this section. C. Jac Compilation Pipeline To implement our compiler technique, we build on the Jac framework [6], [7], [9], which extends Python with additional language features through its runtime system. The Jac compiler accepts both Jac and Python code, parses it into an abstract syntax tree (AST), and progressively lowers it into a unified intermediate representation (UniiR). As illustrated in the top half of Figure 6, UniiR interleaves three structures: AST, control-flow graph (CFG), and symbol table. In this way, structural, semantic, and control-flow information are combined in a single representation. This integration enables cross-cutting analyses that are difficult to achieve when these structures are maintained separately. The CFG is constructed by linking AST nodes to their successors, embedding controlflow edges within the tree. This process is implemented in the CFGBuildPass. 4 1 @torch.compile 2 def fn(x): 3 x = torch.relu(x) 4 print("tensor:", x) # 0: 3 return 1 4 else: 5 return -1 (a) Python function with if-else branch 0 LOAD_FAST 0 (x) 2 LOAD_CONST 1 (0) 4 COMPARE_OP 4 (>) 6 POP_JUMP_IF_FALSE 14 8 LOAD_CONST 2 (1) 10 RETURN_VALUE 14 LOAD_CONST 3 (-1) 16 RETURN_VALUE (b) Disassembled Python bytecode 1 def branch(x): 2 if x > 0: 3 return 1 4 return -1 (c) Python function having identical bytecode to (a) Fig. 5: Bytecode-level limitations of TorchDynamo.(a) (a) and (c) show two syntactically different functions with equivalent semantics. (b) Both compile to identical Python bytecode. Our GRAPHMEND pass takes UniiR as input and performs analyses to identify the two types of graph breaks, as illustrated in the bottom half of Figure 6. It then applies transformations on the AST to rewrite the code and eliminate these breaks. Afterward, UniiR is rebuilt and verified before being passed back to the Jac compiler pipeline. At that stage, the pipeline can either compile the program to bytecode for execution on the Python interpreter or generate a transformed Python source file. D. GRAPHMEND Implementation Our compiler technique consists of three main passes, illustrated in the bottom half of Figure 6. Input: UniiR; TorchAttrTable (torch attributes 7→{dynamic, static}) Output: Tags: GraphBreak[DynCtrlFl], GraphBreak[logger/print] foreach Dynamo Entry Point r in UniiR do Initialize worklist with successors of r while worklist not empty do Pop node n from worklist; push unseen successors of n // 1) Dynamic control-flow via if-conditions if n is an IfStmt then E ←subexpressions of cond(n) dyn ←false foreach e ∈E do if e has a torch attribute then a ←attribute of e // e.g., .sum if TorchAttrTable[a] = dynamic then dyn ←true; break end end end if not dyn then if E is input/tensor-value dependent (via SymbolTable + CFG) then dyn ←true end end if dyn then Tag n as GraphBreak[DynCtrlFl] end end // 2) Logger / print side effects if n is a call to print or logger.* then Tag n as GraphBreak[logger/print] end end end Algorithm 1: Fixable Graph-Break Detection 1) Dynamo Entry Point Analysis Pass: To identify potential TorchDynamo compilation entry points, this pass traverses the AST and analyzes each node. A node is tagged when a function or an nn.Module object wrapped or decorated with torch.compile is encountered. If no occurrences of torch.compile are found in the codebase, GRAPHMEND compilation is skipped. Otherwise, the tagged nodes serve as entry points for subsequent passes. 2) Graph Break Type Analysis Pass: We implement a compound analysis pass on UniiR to identify potential graph-break locations, following the procedure in Algorithm 1. Starting from each Dynamo entry point, the pass traverses the control5 Jac Compiler . . . @t or ch. compi l e( ) def f or war d( x) : . . . i f x. sum( ) >2: x = x * 2 el se: x = x / 2 . . . r et ur n x . . . 1 2 3 4 5 6 7 8 9 10 11 Pyt hon / Jac Sour ce Code Lexer and Parser Jac AST SymTabBuildPass ST ST ST ST ST ST ST ST Jac AST + Symbol Table CFGBuildPass Jac AST + Symbol Table + Control Flow Graph Uni i R PyASTGenPass PyBytecodeGen Before After Before After byt ecode Compilation/Analysis Compilation/Analysis GraphMend ST ST ST ST ST ST ST ST PycodeGen sour ce SymTabBuildPass CFGBuildPass GraphMend Compiler Technique AST Transformation Pass Graph Break Type Analysis Pass Fi nd & Tag Data-dependent Control Flow Python I/O Operations t or ch. compi l e( f or war d) def f or war d Dynamo Entry Point Analysis Pass Fi nd & Tag Graph-Epilogue Defered Side Effect Transformation IfStmt x.sum() > 10 assign ElseStmt x x*2 assign x x/2 IfStmt x.sum() > 10 assign ElseStmt x_t x*2 assign x_f x/2 assign pred torch.where x_t x_f pred Predicated Dynamic Control Flow Transformation call print " Logger message" return Matmul x y assign differ " Logger message" return Matmul x y assign ret call print differ ret ST ST ST ST ST ST ST ST Uni i R ST ST ST ST ST ST ST ST Tr ansf or med Uni i R Befor e After After Befor e Fig. 6: GraphMend compiler integration in the Jac pipeline. The pipeline (top) lowers Python/Jac source code into a unified intermediate representation (UniiR) with AST, symbol table, and CFG. GraphMend (bottom) analyzes entry points, detects graph breaks, and applies AST transformations (Predicated Dynamic Control Flow and Graph-Epilogue Deferred Side Effects) to eliminate breaks before execution. flow graph (CFG) to locate program points that may induce graph breaks. For conditional branches, the pass inspects the subexpressions of the condition. If any involve Torch attributes known to be dynamic (e.g., .sum()), the branch is marked as dynamic control flow. Otherwise, a use-def analysis over the symbol table and CFG determines whether the condition depends on model inputs or tensor values; if so, it is also tagged as dynamic. In addition, side-effecting operations such as print and logger.* calls are detected and annotated as logging-induced graph breaks as in Algorithm 1. These annotations guide the subsequent transformation passes that eliminate graph breaks. 3) AST Transformation Pass: After identifying graph-break locations, the AST must be modified to eliminate them. As discussed earlier in this section, these fixes can be applied through source-level transformations. Following the same approach, the AST is modified according to the graph-break tags identified in the previous pass. Based on these tags, one of the following transformations is performed. a) Predicated Dynamic Control Flow Transformation: To address data-dependent control-flow graph breaks, we first evaluate the branch condition and store it as a predicate. The control flow is then rewritten using torch.where, which enables predicated execution of both branches. This process is illustrated in Figure 6. In the AST Transformation Pass (Figure 6), the original branch if x.sum() > 5: ... else←- : ... is reduced by extracting the condition into a symbolic variable pred (R 1). Next, the two branch assignments are merged into a single dataflow expression using torch.where←- (pred, e1, e2). Finally, the if and else AST nodes are removed, and their child nodes are reconnected to the parent, producing a transformed tree in which both execution paths are explicitly represented. b) Graph-Epilogue Deferred Side Effect Transformation: For print or logger calls, the message is first stored in a temporary variable. This variable is then passed to the outermost forward function and executed at the end of the function (epilogue) before returning, as shown in Figure 6. In the example ASTs in Figure 6, the original call print("Logger message") is rewritten as a deferred assignment deferred = "Logger message". This assignment is then moved to the epilogue block, where it is printed after graph execution (R 2). In this way, the logging effect is preserved while remaining safely outside the traced region. Note that any computation in the return statement must be moved before the epilogue, stored in a temporary variable, and then returned afterward to avoid introducing a graph break. E. Healing UniiR and Downstream Flow After the AST transformations, the UniiR is no longer consistent. Therefore, the symbol table and CFG must be rebuilt using the same passes that originally created them. As 6 shown in Figure 6, once the GRAPHMEND pass is complete, the SymTabBuildPass and CFGBuildPass are rerun to restore UniiR for downstream compilation. At the end of the Jac compiler pipeline, Python bytecode is generated for interpretation, enabling TorchDynamo to seamlessly operate on the transformed code. V. EVALUATION We evaluate GRAPHMEND on real-world models that exhibit graph breaks. Our goals are to quantify (i) how often GRAPHMEND removes fixable breaks and (ii) the end-to-end improvement after those fixes. a) Experimental Setup: We implement GRAPHMEND on top of Jaseci [6], [7], [9], an open-source research infrastructure that provides both compiler and runtime control as a Python superset. Our experiments are conducted on NVIDIA RTX 3090 and A40 GPUs, using TorchInductor as the backend compiler. b) Benchmark Suite: To construct our benchmark, we (1) randomly sample 65 Hugging Face [10] models spanning diverse architectures and task categories; (2) identify models that trigger FX graph breaks; and (3) retain the eight models with breaks for evaluation. Table I summarizes the resulting suite, including the number and causes of graph breaks and the total operator count. A similar methodology was used in prior works to construct benchmark suite [2]. c) Profiling Methodology: We profile the forward pass using PyTorch Profiler [23]. For each model, we evaluate two configurations: original model and graph breaks fixed version. Each configuration is executed for 7 iterations: one cold-start iteration followed by six warm iterations. The profiler records CPU and CUDA activities and operator-level events; traces are exported and inspected with Chrome's trace viewer. A. GRAPHMEND's Ability to Fix Graph Breaks First, we check GRAPHMEND's ability to fix graph breaks across the benchmark suite. Table II shows that GRAPHMEND fully eliminates graph breaks in 6 of the 8 models. It partially fixes longformer-base-4096 and leaves moe-minicpm-x4-base unchanged. By cause, GRAPHMEND removes all breaks due to logging statements and data-dependent control flow. The remaining breaks: tensor.item() and dynamic-shape operators could not be fixed by code transformations. For the performance analysis that follows, we report results on the seven models whose break counts were fully eliminated or reduced, excluding moe-minicpm-x4-base. B. Latency Improvement per Forward Pass Using GRAPHMEND to Fix Graph Breaks We measure Relative Latency Improvement (%) as 100 × Tbreak -Tfixed Tbreak , where Tbreak is the per-forward-pass latency for the benchmarked model with graph breaks, and Tfixed is the latency after applying GRAPHMEND fixes. We report the relative improvement per forward pass for break-fixable models in the benchmark suite, using PyTorch Profiler data for both cold and warm (steady-state) runs. a) Cold Start Latency Improvement: Figure 7a reports results on both RTX 3090 and A40 GPUs. Across models, we observe a relative latency reduction of about 30-75% for both GPUs, indicating substantial gains in cold runs. At cold start, each graph break incurs significant overhead because every CUDA graph requires a separate record and caching. After eliminating these breaks, only a single (or fewer) CUDA Graph capture is needed, resulting in fewer records and reduced capture time, which minimizes overhead. Consequently, we achieve larger relative latency improvements per forward pass in cold runs. A more detailed analysis is provided in § V-D. b) Steady-State Latency Improvement: Figure 7b shows steady-state improvements on RTX 3090 and A40 GPUs, ranging from about 2.5% to 25% across models. After the cold run, the captured graph is cached and replayed, removing the high capture overhead present initially. However, when graph breaks persist, execution still incurs GPU-CPU switches (as discussed in § V-D), which add latency. Eliminating these breaks with GRAPHMEND removes such switches, yielding additional improvements in relative latency per forward pass. Notably, tiny-random-PegasusForCausalLM shows over 25% improvement. Since this model is very small, with only 64 operators, the fixed costs from graph breaks (such as host-device synchronization, and extra graph launches) take up a large portion of the runtime. Removing these breaks therefore gives a huge speedup per forward pass. C. Relative Throughput Improvement by Fixing Graph Breaks We measure Relative Throughput Improvement (%) as 100 × Tpfixed -Tpbreak Tpbreak , where Tpbreak is the throughput (output tokens per second) for the model with graph breaks, and Tpfixed is the throughput after applying GRAPHMEND fixes. We measure the relative throughput across the models in the benchmark suite that are fixable through GRAPHMEND for both RTX 3090 and A40. Figure 7b shows a relative throughput improvement of about 5-8% across all models on both GPUs. This improvement follows from the relative latency reduction per forward pass observed in Section V-B: as each step becomes faster, the end-to-end tokens per second increase. The variation of improvement across models highlights an important point: the magnitude of improvement depends on the extent to which a graph break affects the forward path. When the impact of a break is small, the corresponding throughput gain is also limited. For example, in both Phi-4-mini-instruct and Qwen-Audio-Chat, the root cause of the breaks is dynamic control flow. We observe a 7.5%-8% improvement for Qwen-Audio-Chat, compared to 5%-6% for Phi-4-mini-instruct from both GPUs. The difference arises because the break in Phi-4-mini-instruct occurs near the beginning of the forward function, whereas in 7 TABLE I: Models in benchmark suite Benchmark Model Name Graph break count Graph break reasons Total Operators biogpt [15] 2 logger calls 408 blenderbot-400M-distill [16] 3 logger calls 489 flan-t5-large [17] 3 logger calls 2130 longformer-base-409 [18] 5 logger calls, tensor.item() call 913 moe-minicpm-x4-base [19] 15 Dynamic shape operator 280 Phi-4-mini-instruct [20] 5 Dynamic control flow 1541 Qwen-Audio-Chat [21] 2 Dynamic control flow 1647 tiny-random-PegasusForCausalLM [22] 2 logger calls 64 biogpt blenderbot-400M-distill flan-t5-large longformer Phi-4-mini-instruct Qwen-Audio-Chat t-r-PegasusForCausalLM 0 20 40 60 Latency Improvement (%) Nvidia A40 Nvidia RTX 3090 (a) biogpt blenderbot-400M-distill flan-t5-large longformer Phi-4-mini-instruct Qwen-Audio-Chat t-r-PegasusForCausalLM 0 10 20 Latency Improvement (%) Nvidia A40 Nvidia RTX 3090 (b) Fig. 7: Latency improvements from GRAPHMEND (a) Cold-start latency reductions (b) Steady-state latency reductions across benchmark models on RTX 3090 and A40 GPUs. TABLE II: Graph break counts in the original model and fix rates achieved by applying GRAPHMEND across the benchmark suite. Benchmark Model Graph Breaks Fixed (%) biogpt 2 100 blenderbot-400M-distill 3 100 flan-t5-large 3 100 longformer-base-4096 5 40 moe-minicpm-x4-base 15 0 Phi-4-mini-instruct 5 100 Qwen-Audio-Chat 2 100 tiny-random-PegasusForCausalLM 2 100 biogpt blenderbot-400M-distill flan-t5-large longformer Phi-4-mini-instruct Qwen-Audio-Chat t-r-PegasusForCausalLM 0.0 2.5 5.0 7.5 Throughput Improvement (%) Nvidia A40 Nvidia RTX 3090 Fig. 8: Relative throughput improvement across benchmark suite Qwen-Audio-Chat it occurs in the middle, where its impact is higher. Thus, the location of a break determines its effect: fixing some breaks yields larger throughput gains, while fixing others has a smaller impact. tiny-random-PegasusForCausalLM achieves more than 25% lower forward-pass latency, but throughput gains are smaller. This is because throughput also depends on prefill, decode, CPU tasks (e.g., sampling, tokenization), and data transfers, which are unaffected. For such a small model, these fixed costs dominate, so removing the break speeds up GPU compute but has less impact on end-to-end tokens/s, especially when decode dominates generation. D. Overhead Analysis of Graph Breaks We conducted an analysis to evaluate how fixing graph breaks with GRAPHMEND reduces overhead. We analyze Qwen-Audio-Chat model running on an A40 GPU which has 2 graph breaks. To ensure a consistent comparison in this section, all activity plots (Figures 11a, 11b, 11c, and 11d) are drawn only for the time interval between the first and second CUDA graph executions in the original model; for the fixed model, we consider the identical time region. a) Steady-state run overhead analysis: Figure 9a shows the profiler trace for the original model. Execution is split into three CUDA graphs because of two graph breaks, with idle GPU periods between the first and second FX graph executions. This is clearly visible in Figure 11a, the GPU becomes idle while the CPU handles control flow in eager mode, including a device-to-host (D2H) memcpy triggered by dynamic control flow (graph break reason for this model). This CPU fallback introduces an overhead. After applying GRAPHMEND, the rewritten code replaces the dynamic branch with a GPU-supported torch.where. The resulting trace (Figure 9b) shows one continuous CUDA graph. As confirmed in Figure 11c, GPU activity remains uninterrupted and CPU fallbacks disappear. Thus, GRAPHMEND 8 Memcpy DtoH 2nd fx-graph Execution 3rd fx-graph Execution ( 93.121 ms ) 1st fx-graph Execution (a) Profiler tracer for one iteration of warm runs for Qwen-Audio-Chat original model before fixing the graph break run in A40 GPU 1st fx-graph Execution ( 92.5 ms ) (b) Profiler tracer for one iteration of warm runs for Qwen-Audio-Chat model after fixing the graph break run in A40 GPU Fig. 9: (a) Profile traces. eliminates the overhead of CPU-GPU context switching during steady-state runs. b) Cold run overhead analysis: Cold runs introduce even larger overheads. In the profiler trace of original model (Figure 10), each graph break triggers a separate CUDA Graph recording and caching, leading to long idle GPU phases and heavy CPU involvement as shown in Figure 11b. In this interval, we observe that GPU idle time dominates, while the CPU is heavily active, reflecting significant overhead. By contrast, the fixed model (Figure 11d) executes with a single capture, avoiding repeated records and reducing idle GPU time. Overall, GRAPHMEND reduces overhead more significantly in cold runs than in steady state, explaining the substantial cold-start latency reductions per forward pass reported in Figure 7a. E. GPU Runtime Kernel Statistics Analysis We analyze how fixing graph breaks affects GPU kernel counts, ordering, and scheduling, and whether this enables new optimization opportunities. Using the Phi-4-mini-instruct model on an A40 GPU, the profiler reports 404 executed kernels for the original model and 393 for the fixed model with a reduction of 11. This drop indicates more fusion and fewer launches, as removing breaks allows the compiler to capture larger regions and emit fewer, larger kernels. Figure 12 illustrates this change. In the original model, 3 small kernels appear that are replaced in the fixed model by a single fused kernel. We also observe reordering: one kernel from the original run shifts upward in the fixed run. With one CUDA Graph, the runtime can reorder independent regions and pack launches more effectively, reducing stalls and improving locality. Graph breaks force separate CUDA Graph captures, blocking fusion across the break and restricting scheduling to smaller windows. Removing them allows larger portions of the forward pass to be captured together, enabling the compiler and runtime to fuse operations, remove redundant launches, and reorder independent work. As a result, kernel gaps shrink and device utilization improves. In summary, fixing graph breaks with GRAPHMEND not only reduces launch overhead but also exposes new optimization opportunities, underscoring the importance of eliminating such breaks. VI. RELATED WORK PyTorch has long sought to balance performance with Python's flexibility. TorchScript's tracing and scripting enabled ahead-of-time graph capture but struggled with dynamic control flow and unsupported constructs [11], [12]. PyTorch 2 introduced TorchDynamo for bytecode-level capture and TorchInductor for backend compilation [1], [2], 9 Text Text Original Fixed -Audio-Chat Fig. 10: Profiler tracer of cold run of Qwen-Audio-Chat model run on A40 GPU. 0.0 0.5 1.0 Activity 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 Time (ms) 0.0 0.5 1.0 Activity CPU GPU (a) Steady State - Original Model 0.0 0.5 1.0 Activity 300 400 500 600 700 800 900 Time (ms) 0.0 0.5 1.0 Activity CPU GPU (b) Cold Start - Original Model 0.0 0.5 1.0 Activity 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 Time (ms) 0.0 0.5 1.0 Activity CPU GPU (c) Steady State - Fixed Model 0.0 0.5 1.0 Activity 300 400 500 600 700 800 900 Time (ms) 0.0 0.5 1.0 Activity CPU GPU (d) Cold Start - Fixed Model Fig. 11: CPU/GPU activity time traces for Qwen-Audio-Chat model run on A40 GPU. ampere_fp16 ... ampere_fp16 ... triton_per_fused .. triton_poi_fused .. triton_per_fused .. triton_per_fused .. triton_poi_fused .. triton_red_fused .. ... ... triton_poi_fused .. triton_poi_fused .. triton_per_fused .. triton_poi_fused .. Graph Break 1 Graph Break 2 Graph Break 3 Fig. 12: Kernel Fusion and Reordering Visualization for Phi4-mini-instruct model. yet graph breaks remain common when encountering datadependent branches or Python I/O. Our work reduces these breaks by rewriting source code before execution, allowing more continuous FX graphs. General compiler infrastructures such as XLA [24], TVM [25], and MLIR [26] provide IR-based optimization, while Glow [27], TASO [28], and ONNX Runtime [29] extend portability across hardware and frameworks. These systems assume large graphs are already available, whereas we restructure PyTorch code to maximize graph continuity so their optimizations can be applied more effectively. Dynamic models motivate systems like GRAPE [30] and Triton [31], [32], which manage runtime variability through 10 efficient kernel generation. PyTorch's compiler stack still falls back to eager execution in such cases, while our method removes common break sources by rewriting branches into tensorized forms that remain graph-compatible. Program transformations more broadly, as seen in Halide [33], Lantern [34], and DLVM [30], show the potential of semantics-preserving rewrites for optimization. Our approach follows this tradition but targets PyTorch specifically, introducing lightweight transformations that enable TorchDynamo to capture larger uninterrupted graphs. VII. CONCLUSION We presented GRAPHMEND, a source-level compiler technique for PyTorch that systematically eliminates common FX graph breaks before execution. By restructuring code constructs such as dynamic control flow and Python I/O operations, GRAPHMEND reduces fragmentation in the compilation pipeline and enables larger unified graphs to be captured by TorchDynamo. Our evaluation across a diverse suite of Hugging Face models demonstrates that GRAPHMEND removes nearly all fixable breaks and consistently improves runtime efficiency, with substantial latency and throughput gains on modern GPUs. REFERENCES [1] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. K ̈opf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, PyTorch: an imperative style, highperformance deep learning library. Red Hook, NY, USA: Curran Associates Inc., 2019. [2] J. Ansel, E. Yang, H. He, N. Gimelshein, A. Jain, M. Voznesensky, B. Bao, P. Bell, D. Berard, E. Burovski, G. Chauhan, A. Chourdia, W. Constable, A. Desmaison, Z. DeVito, E. Ellison, W. Feng, J. Gong, M. Gschwind, B. Hirsh, S. Huang, K. Kalambarkar, L. Kirsch, M. Lazos, M. Lezcano, Y. Liang, J. Liang, Y. Lu, C. K. Luk, B. Maher, Y. Pan, C. Puhrsch, M. Reso, M. Saroufim, M. Y. Siraichi, H. Suk, S. Zhang, M. Suo, P. Tillet, X. Zhao, E. Wang, K. Zhou, R. Zou, X. Wang, A. Mathews, W. Wen, G. Chanan, P. Wu, and S. Chintala, "Pytorch 2: Faster machine learning through dynamic python bytecode transformation and graph compilation," in Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, ser. ASPLOS '24. New York, NY, USA: Association for Computing Machinery, 2024, p. 929-947. [Online]. Available: https://doi.org/10.1145/3620665.3640366 [3] J. K. Reed, Z. DeVito, H. He, A. Ussery, and J. Ansel, "Torch.fx: Practical program capture and transformation for deep learning in python," 2022. [Online]. Available: https://arxiv.org/abs/2112.08429 [4] B. Zheng, C. H. Yu, J. Wang, Y. Ding, Y. Liu, Y. Wang, and G. Pekhimenko, "Grape: Practical and efficient graphed execution for dynamic deep neural networks on gpus," in Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO '23. New York, NY, USA: Association for Computing Machinery, 2023, p. 1364-1380. [Online]. Available: https://doi.org/10. 1145/3613424.3614248 [5] A. Agrawal, A. N. Modi, A. Passos, A. Lavoie, A. Agarwal, A. Shankar, I. Ganichev, J. Levenberg, M. Hong, R. Monga, and S. Cai, "Tensorflow eager: A multi-stage, python-embedded dsl for machine learning," 2019. [Online]. Available: https://arxiv.org/abs/1903.01855 [6] J. Mars, Y. Kang, R. Daynauth, B. Li, A. Mahendra, K. Flautner, and L. Tang, "The jaseci programming paradigm and runtime stack: Building scale-out production applications easy and fast," IEEE Computer Architecture Letters, vol. 22, no. 2, pp. 101-104, 2023. [7] J. Mars, "Extending data spatial semantics for scale agnostic programming," 2025. [Online]. Available: https://arxiv.org/abs/2504. 03109 [8] Jaseci Labs, "Jaseci: The official jaseci code repository," https://github. com/Jaseci-Labs/jaseci, 2025. [9] J. L. Dantanarayana, Y. Kang, K. Sivasothynathan, C. Clarke, B. Li, S. Kashmira, K. Flautner, L. Tang, and J. Mars, "Mtp: A meaning-typed language abstraction for ai-integrated programming," 2025. [Online]. Available: https://arxiv.org/abs/2405.08965 [10] Hugging Face, "Hugging face: Open-source ai community and tools," https://huggingface.co, 2025, accessed: 2025-09-12. [11] PyTorch Team, "torch.jit.trace - pytorch documentation," https://docs. pytorch.org/docs/stable/generated/torch.jit.trace.html, 2025, accessed: 2025-09-08. [12] PyTorch-TorchScript Team, "torch.jit.script - pytorch documentation," https://docs.pytorch.org/docs/stable/generated/torch.jit.script.html, 2025, accessed: 2025-09-08. [13] PyTorch Contributors, "Torch compile troubleshooting - pytorch documentation," https://docs.pytorch.org/docs/stable/torch.compiler troubleshooting.html, 2025, accessed: 2025-09-08. [14] A. Ghosh, A. Nayak, A. Panwar, and A. Basu, "Pygraph: Robust compiler support for cuda graphs in pytorch," 2025. [Online]. Available: https://arxiv.org/abs/2503.19779 [15] R. Luo, L. Sun, Y. Xia, T. Qin, S. Zhang, H. Poon, and T.-Y. Liu, "BioGPT: generative pre-trained transformer for biomedical text generation and mining," Briefings in Bioinformatics, vol. 23, no. 6, 09 2022, bbac409. [Online]. Available: https://doi.org/10.1093/bib/bbac409 [16] Meta AI, "facebook/blenderbot-400m-distill," https://huggingface.co/ facebook/blenderbot-400M-distill, 2020. [17] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei, "Scaling instruction-finetuned language models," 2022. [Online]. Available: https://arxiv.org/abs/2210.11416 [18] I. Beltagy, M. E. Peters, and A. Cohan, "Longformer: The longdocument transformer," , 2020. [19] babybirdprd, "babybirdprd/moe-minicpm-x4-base," https://huggingface. co/babybirdprd/moe-minicpm-x4-base, 2025. [20] Microsoft, "Microsoft phi-4-mini-instruct," https://huggingface.co/ microsoft/Phi-4-mini-instruct, 2025. [21] Y. Chu, J. Xu, X. Zhou, Q. Yang, S. Zhang, Z. Yan, C. Zhou, and J. Zhou, "Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models," arXiv preprint , 2023. [22] H. Face, "hf-internal-testing/tiny-random-pegasusforcausallm," https: //huggingface.co/hf-internal-testing/tiny-random-PegasusForCausalLM, 2025, accessed: 2025-09-12; internal testing minimal model. [23] "Pytorch profiler: A performance debugging and analysis tool for pytorch," https://pytorch.org/docs/stable/profiler.html, 2021, accessed: 2025-09-12. [24] O. Community, "Stablehlo and openxla," 2023. [Online]. Available: https://openxla.org [25] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan, L. Wang, Y. Hu, L. Ceze, C. Guestrin, and A. Krishnamurthy, "Tvm: An automated end-to-end optimizing compiler for deep learning," in Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2018, pp. 578-594. [26] C. Lattner et al., "Mlir: A compiler infrastructure for the end of moore's law," arXiv preprint , 2020. [27] N. Rotem et al., "Glow: Graph lowering compiler techniques for neural networks," arXiv preprint , 2018. [28] Z. Jia, S. Lin, C. R. Qi, and A. Aiken, "Taso: Optimizing deep learning computation with automatic generation of graph substitutions," in Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP), 2019, pp. 47-62. [29] Microsoft, "Onnx runtime: High performance inference engine," 2018. [Online]. Available: https://onnxruntime.ai [30] Z. Zhao et al., "Dlvm: A modern compiler ir for deep learning frameworks," arXiv preprint , 2017. [31] P. Tillet et al., "Triton: An intermediate language and compiler for tiled neural network computations," in ICML Workshop on Systems for ML, 2019. [32] J. Song et al., "Hidet: Task-mapping programming paradigm for deep learning tensor programs," in Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2024. [33] J. Ragan-Kelley et al., "Halide: A language and compiler for optimizing parallelism, locality, and recomputation," in Proceedings of the 34th 11 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2013, pp. 519-530. [34] F. Wang et al., "Lantern: A search-based compiler for deep learning," in Advances in Neural Information Processing Systems (NeurIPS), 2018, pp. 6035-6045. 12
|
2509.16250
|
A study on Deep Convolutional Neural Networks, transfer learning, and
Mnet model for Cervical Cancer Detection
Saifuddin Sagor
Department of Computer Science and Engineering
Daffodil International University, Dhaka, Bangladesh
sagor15-6398@s.diu.edu.bd
Dr. Md Taimur Ahad
School of Mathematics, Physics and Computing
Toowoomba Campus
University of Southern Queensland
MdTaimur.Ahad@unisq.edu.au
Faruk Ahmed
Department of Computer Science and Engineering
Daffodil International University, Dhaka, Bangladesh
faruk15-4205@diu.edu.bd
Rokonozzaman Ayon
Department of Computer Science and Engineering
Daffodil International University, Dhaka, Bangladesh
ayon15-4393@diu.edu.bd
Sanzida Parvin
Department of Computer Science and Engineering
Daffodil International University, Dhaka, Bangladesh
parvin15-6265@s.diu.edu.bd
Abstract
Cervical cancer remains one of the most common cancers affecting women worldwide,
particularly in low-resource settings. Early and accurate detection through Pap smear analysis
is critical to improving patient outcomes and reducing mortality. Although state-of-the-art
(SOTA) Convolutional Neural Networks (CNNs) have revolutionized disease diagnosis, most
SOTA CNNs are designed for large-scale object detection and classification tasks. As a result,
they require substantial computational resources, extended training time, and large datasets.
In this study, a lightweight CNN model, S-Net (Simple Net), is developed specifically for
cervical cancer detection and classification using Pap smear images to address these
limitations. Alongside S-Net, six SOTA CNNs were evaluated using transfer learning, including
multi-path (DenseNet201, ResNet152), depth-based (Serasnet152), width-based multi-
connection (Xception), depth-wise separable convolutions (MobileNetV2), and spatial
exploitation-based (VGG19). All models, including S-Net, achieved comparable accuracy, with
S-Net reaching 99.99%. However, S-Net significantly outperforms the SOTA CNNs in terms of
computational efficiency and inference time, making it a more practical choice for real-time
and resource-constrained applications. A major limitation in CNN-based medical diagnosis
remains the lack of transparency in the decision-making process. To address this, Explainable
AI (XAI) techniques, such as SHAP, LIME, and Grad-CAM, were employed to visualize and
interpret the key image regions influencing model predictions. The novelty of this study lies in
the development of a highly accurate yet computationally lightweight model (S-Net) caPable
of rapid inference while maintaining interpretability through XAI integration. Furthermore,
this work analyzes the behavior of SOTA CNNs, investigates the effects of negative transfer
learning on Pap smear images, and examines pixel intensity patterns in correctly and
incorrectly classified samples.
Keywords: Cervical Cancer, CNN, Deep learning, LIME, SHAP, Transfer Learning, Mnet
model, XAI.
1. Introduction
Cervical cancer, which affects the cervix at the lower end of the uterus, remains one of the most
common cancers among women globally. The World Health Organization (WHO) reports that
cervical cancer is a leading cause of cancer-related deaths in over 40 countries, highlighting its
public health significance (Joynab et al., 2024; Fan et al., 2023). In 2020, around 0.6 million
cases of cervical cancer were diagnosed worldwide (Sarhangi et al., 2024). However,
traditional screening methods are prone to high false-positive rates due to human error,
compromising the accuracy of diagnosis and early detection.
To overcome these challenges, machine learning (ML) and deep learning (DL) based
computer-aided diagnostic (CAD) techniques have been increasingly utilized for the automatic
analysis of cervical cytology and colposcopy images. These AI-driven technologies are
significantly improving diagnostic accuracy by automating image segmentation and
classification, thus reducing reliance on manual analysis and minimizing human error ( Ahad
et al., 2023; Mustofa et al., 2023; Bhowmik et al., 2024, Ahmed & Ahad, 2023; Emon &
Ahad, 2024; Mustofa et al., 2024; Preanto et al., 2024; Mamun et al., 2023; Ahad et al.,
2024; Mustofa et al., 2025; Preanto et al., 2024; Ahmed et al., 2023; Ahad et al., 2024;
Bhowmik et al., 2023; Ahad et al., 2024; Mamun et al., 2025; Ahad et al., 2024, Ahad et al.,
2024; Islam et al., 2024; Ahad et al., 2024; Ahmed et al., 2024; Ahad et al., 2024; Preanto
et al., 2024; Preanto et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Ahad et al., 2024;
Mamun et al., 2024; Emon et al., 2023; Emon et al., 2023; Biplob et al., 2023; Ahad et al.,
2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Youneszade
et al., 2023). This represents a critical advancement in cervical cancer diagnosis, enhancing the
effectiveness of screening and improving early detection rates.
One of the most promising AI models in cervical cancer detection is the Deep Convolutional
Neural Network (DCNN). DCNNs are particularly effective in classifying early-stage cancer
and detecting malignant cells by automatically identifying complex patterns in medical images
(Kumar et al., 2024; Nirmala et al., 2024). The incorporation of transfer learning, using pre-
trained models like those from ImageNet, further enhances performance by allowing the
models to adapt to smaller, task-specific datasets (Morid et al., 2021; Atasever et al., 2023).
However, DCNNs present challenges related to their high computational demands and large
memory footprint, which can limit their application, especially in low-resource settings (Pacal
et al., 2024; Lau et al., 2024). To address this, researchers have developed lightweight CNN
models that offer comparable performance while reducing the complexity of the network.
These models are more practical for deployment in mobile devices and resource-constrained
environments, providing an accessible solution for real-world clinical settings (Mathivanan et
al., 2024; Gendy et al., 2024; Mehedi et al., 2024; Dogani et al., 2023).
A key challenge in medical AI is the lack of interpretability of deep learning models. To
improve model transparency and trust, Explainable AI (XAI) methods such as Grad-CAM,
LIME, and SHAP have been developed. These methods provide insights into model decisions,
enhancing clinician trust and supporting ethical decision-making in clinical practice (Arrieta et
al., 2020; Albahri et al., 2023; Band et al., 2023).
Despite notable advancements in CNN models for cervical cancer detection, several knowledge
gaps persist:
1. Many studies focus on developing CNN models but lack attention to clinical concerns
like model generalization and interpretability (Zhang et al., 2025; Rahman et al., 2023).
2. Well-known CNN architectures have not been rigorously evaluated for cervical cancer
detection in real-world clinical environments (Sambyal et al., 2023).
3. Despite its success in other domains, transfer learning remains underutilized in cervical
cancer CAD systems (Yeasmin et al., 2024).
4. High false-positive rates and low reliability limit the clinical applicability of existing
CNN-based methods (Attallah, 2023; Painuli & Bhardwaj, 2022).
Following the gaps, this study conducts three experiments, which make the following
contributions and novelties of this study:
1. A lightweight CNN model, S-Net, is introduced for efficient cervical cancer detection
from Pap smear images, optimizing both computational efficiency and accuracy.
Additionally, six state-of-the-art CNN architectures (VGG19, ResNet152v2, SE-
ResNet152, MobileNetV2, Xception, DenseNet201) were evaluated to identify the
most effective model for this task
2. The study applies XAI techniques (LIME, SHAP, Grad-CAM) to improve model
interpretability and transparency, while statistical analysis of pixel intensity evaluates
classification outcomes (TP, FP, TN, FN).
3. pixel intensity as a key factor in image classification, investigating its role in cervical
cancer detection and its impact on classification accuracy.
2. Related works
Tan et al. (2024) and Khowaja et al. (2024) focused on using deep learning models for cervical
cancer detection through Pap smear images. Tan et al. (2024) employed transfer learning with
pre-trained CNN models for a seven-class classification task, achieving the best performance
with DenseNet-201. Similarly, Khowaja et al. developed a framework that outperformed 25
other models, showing impressive accuracy on the Mendeley and SIPaKMeD datasets. Their
work emphasizes the potential of these models to improve cervical cancer screening and reduce
misdiagnosis.
Deo et al. (2024) introduced CerviFormer, a Transformer-based model, which excelled at
classifying Pap smear images, particularly for large-scale inputs. This method demonstrated
competitive results on two public datasets, with accuracy rates of 96.67% and 94.57%, showing
promise compared to traditional CNN-based methods. Additionally, Pacal (2024) utilized 106
deep learning models, including CNNs and vision transformers, to achieve remarkable
accuracy on the SIPaKMeD and LBC datasets, surpassing existing models in both datasets.
Mazumder et al. (2024) and Nour et al. (2024) combined CNNs like InceptionV3 and
EfficientNetV2S to enhance cervical cancer detection accuracy. These models demonstrated
accuracy rates of up to 99.98%, outperforming contemporary approaches. Hybrid methods,
combining deep learning with handcrafted features, were also explored by Joseph et al. (2024)
for early-stage cancer detection, yielding a high median recall of 99.5%.
Fan et al. (2023) and Tomko et al. (2022) worked on optimizing models for better performance.
Fan et al. introduced CAM-VT, a weakly supervised model using conjugated attention and
visual transformers, which improved pap slide identification accuracy. Tomko et al.(2022)
focused on optimizing input image sizes for models like EfficientNetB0, improving
classification performance. Additionally, some studies, like those of Prasanthi Shandilya et al.
(2024), employed lightweight CNNs (e.g., MobileNetV2 and ResNet-18) to achieve high
accuracy while reducing computational complexity, making them more suitable for clinical
applications.
Wubineh et al. (2024) and Hussain et al. (2024) explored segmentation techniques and model
interpretability. Wubineh et al. focused on cytoplasm and nuclei segmentation for cervical
cancer detection, with EfficientNetB2 achieving impressive accuracy. Meanwhile, Hussain et
al.(2024) used the Xception model for classifying cervical cancer, with AUC scores of up to
0.98. These studies highlight the importance of model transparency and accurate feature
localization, improving the trustworthiness and effectiveness of AI-based cervical cancer
diagnostic tools.
Compared to the D-CNN and transfer learning models, the M-Net model is superior in
accuracy. Both D-CNN and transfer learning methods are computationally expensive and
heavy. The researcher put forth an M-Net model in this investigation. Shorter models, less
convolution, fewer fully linked layers, and fewer layers comprise the M-Net model. This M-
Net approach reduced study time waste while improving accuracy for cervical cancer.
Realizing the effectiveness of CNN in the detection and classification of cervical cancer,
scholars such as Leung and Yue (2024), Luo et al. (2021), Azad et al. (2024), Uyanga et al.
(2024), and Huang et al. (2025) have conducted extensive research in this area. They
experimented with CNN and its variants to improve cervical cancer detection and
classification.
3. Experimental method and materials
This section describes the hardware specification, dataset description, S-Net model
development, and training procedure for this study.
This study presents a comprehensive cervical cancer detection pipeline using Pap smear
images. Initially, the images undergo preprocessing before classification. The framework
evaluates six state-of-the-art CNN models, VGG19, ResNet152V2, SE-ResNet152,
DenseNet201, MobileNetV2, and Xception, through two experimental setups: (i) training from
scratch and (ii) transfer learning using ImageNet weights. Additionally, a custom lightweight
CNN model, S-Net, is introduced and assessed through 5-fold cross-validation and explainable
AI (XAI) techniques like LIME, SHAP, and Grad-CAM.
Figure 1: Experimental flow of the study.
3.1
Hardware Specification
The experiments were conducted using Precision 7680 Workstation. The workstation is a 13th-
generation Intel® Core™ i9-13950HX vPro with a Windows 11 Pro operating system and
NVIDIA® RTX™ 3500 Ada Generation, 32 GB DDR5, and 1 TB SSD. Python (version 3.9)
was chosen as the programming language as the version supported the TensorFlow-GPU,
SHAP, and LIME generation.
3.2
Dataset description
The dataset for the study was collected from a public repository and the dataset is Obuli Sai
Naren. (2022) Multi Cancer Dataset. Figure 1 displays samples of images used in the study.
This dataset contains 25000 Pap smear (Papanicolaou smear) microscopic images and is
classified into five (5) classes: Cervix_Dyskeratotic (Dyk), Cervix_Koilocytotic (Koc),
Cervix_Metaplastic (Mep), Cervix_Parabasal (Pab) and Cervix_Superficial Moderate (Sfi).
Cervical Cancer images were taken from the Obuli Sai Naren. (2022). The images were
captured using pap smears and stored in JPG format.
Dyk
Koc
Mep
Pab
Sfi
Figure 2: Example of 5 cervical cancer classes
3.3
Image Augmentation
This study used position augmentation, such as scaling, cropping, flipping, and rotation, and
color augmentation, such as brightness, contrast, and saturation, was deployed. Random
rotation from −15 to 15 degrees, rotations of multiples of 90 degrees at random, random
distortion, shear transformation, vertical flip, horizontal flip, skewing, and intensity
transformation were also used in the data augmentation process.
Dyk
Koc
Mep
Pab
Sfi
Figure 3. Image after augmentation
3.4 Model Development
Inspired by the architectural taxonomy of CNNs proposed by Khan et al. (2020), we selected
six diverse and high-performing convolutional neural networks (CNNs): VGG19, ResNet152,
SE-ResNet152, DenseNet201, Xception, and MobileNetV2 to evaluate their effectiveness in
classifying cervical cancer cytology images. Each model reflects a unique architectural class:
spatial exploitation (VGG19), depth-based learning (ResNet152), attention mechanisms (SE-
ResNet152), multi-path connectivity (DenseNet201), width-based design (Xception), and
efficient lightweight architecture (MobileNetV2).
To address the challenge of limited annotated medical data, we employed transfer learning,
fine-tuning each model pre-trained on ImageNet. The experimental results demonstrate that
advanced architectures, particularly those integrating attention and residual connections such
as SE-ResNet152 (99.81%) and MobileNetV2 (99.53%), significantly outperform older
architectures like VGG19 (88.58%). These findings confirm that transfer learning, when
combined with modern CNN designs, substantially improves diagnostic accuracy in
cytological image analysis.
Figure 4: Classification of Deep CNN Architectures.
4. Experimental description and results
This study conducted three experiments to detect and classify cervical cancer pap smear
images. Firstly, a lightweight CNN S-Net; secondly, six SOTA CNN’s Multipath
(DenseNet201), Depth-based (ResNet152), width-based multi-connection (Xception), depth-
wise separable convolutions (MobileNetV2), spatial exploitation based (VGG19); thirdly,
transfer learning of 6 SOTA CNN’s a Multi path-depth-Width based (DenseNet201- Xception)
and Multi path-depth-Spatial exploitation (ResNet152-VGG19). The experimental processes
and results are described below.
4.1 Experiment 1: S-Net development process and results
The S-Net model, training process, and results and discussed below:
4.1.1 S-Net model development
The S-Net model, designed for cervical cancer detection in Pap smear images, begins with a
2D convolutional layer featuring 48 filters, followed by max-pooling to reduce spatial
dimensions. The architecture deepens progressively with convolutional layers containing 128,
192, and 512 filters, interspersed with pooling layers for hierarchical feature extraction. A fully
connected layer with 1024 neurons is employed, followed by dropout regularisation,
culminating in a dense output layer with five neurons for multi-class classification (Dyk, Koc,
Mep, Pab, Sfi). With a total of 2,020,549 trainable parameters, the model is optimised for high-
capacity learning and multi-class image categorisation.
Figure 5: S-Net model visualisation
4.1.2 Model Summary
Table 1: Hyperparameters
Layer
Output Shape
Parameters
Description
Conv2D
(None, 52, 52, 48)
3,648
Convolutional layer with 48
filters, kernel size (3,3)
MaxPooling2D
(None, 17, 17, 48)
0
Max pooling operation (2x2)
Conv2D
(None, 17, 17,
128)
55,424
Convolutional layer with 128
filters, kernel size (3,3)
MaxPooling2D
(None, 5, 5, 128)
0
Max pooling operation (2x2)
Conv2D
(None, 5, 5, 192)
221,376
Convolutional layer with 192
filters, kernel size (3,3)
Conv2D
(None, 5, 5, 192)
331,968
Convolutional layer with 192
filters, kernel size (3,3)
Conv2D
(None, 5, 5, 128)
221,312
Convolutional layer with 128
filters, kernel size (3,3)
MaxPooling2D
(None, 1, 1, 128)
0
Max pooling operation (5x5)
Flatten
(None, 128)
0
Flatten the output for input into
dense layers
Dense
(None, 1024)
132,096
Fully connected layer with 1024
units
Dropout
(None, 1024)
0
Dropout for regularization (no
parameters)
Dense
(None, 1024)
1,049,600
Fully connected layer with 1024
units
Dense
(None, 5)
5,125
Output layer with 5 units
(classification output)
Total
Parameters
2,020,549
Total number of trainable
parameters
Trainable
Parameters
2,020,549
All parameters in the model are
trainable
Non-trainable
Parameters
0
No non-trainable parameters
4.1.3 Training Setup
The S-Net model was trained using K-fold cross-validation, with K set to 5. Each fold
underwent 50 training epochs, with a fixed batch size of 16. The Adam optimization algorithm
was used for model optimization. The algorithm ensured the adaptability of the model weights
based on the calculated gradients, thereby enhancing performance and accuracy in classifying
cervical cancer. The loss function was sparse categorical cross-entropy. Early stopping and a
learning rate scheduler were applied during training. The training hyperparameters are
provided in Table 1.
Table 1: Hyperparameters of training
Parameters
Epochs = 50
Batch size = 16
Image size = (64, 64, 3)
Learning rate = 1.0000e-04
K_folds = 5
Optimizer = Adam(learning_rate=LEARNING_RATE)
Loss = SparseCategoricalCrossentropy(from_logits=True)
Early stopping = EarlyStopping(monitor='val_accuracy', patience=10, verbose=1,
restore_best_weights=True)
Learning_rate_scheduler = LearningRateScheduler(lambda epoch: LEARNING_RATE * 0.1 **
(epoch // 10))
Callbacks = [early_stopping, lr_scheduler]
4.1.4 Results of the S-Net
The S-Net model demonstrated high classification accuracy across all metrics (precision, recall,
F1-score, and support). In Fold 1, minor misclassifications occurred with "Mep" and "Koc"
samples, but overall accuracy was strong. Fold 2 showed improved precision, with no
misclassification in "Koc," "Pab," or "Sfi" classes. Folds 3 to 5 exhibited near-perfect
classification, minimizing errors. The confusion matrices confirm the model’s robustness and
suitability for clinical diagnostics.
Table 2: Fold-Wise Classification report with epochs of S-Net
Fold
Class
Precision
Recall
F1-Score
Support
cervix_Dyk
1.00
1.00
1.00
1548
1
cervix_Koc
1.00
1.00
1.00
1513
cervix_Mep
1.00
1.00
1.00
1504
cervix_Pab
1.00
1.00
1.00
1594
cervix_Sfi
1.00
1.00
1.00
1525
Accuracy
1.00
7684
Macro Avg
1.00
1.00
1.00
7684
Weighted Avg
1.00
1.00
1.00
7684
2
cervix_Dyk
1.00
1.00
1.00
1530
cervix_Koc
1.00
1.00
1.00
1599
cervix_Mep
1.00
1.00
1.00
1555
cervix_Pab
1.00
1.00
1.00
1481
cervix_Sfi
1.00
1.00
1.00
1518
Accuracy
1.00
7683
Macro Avg
1.00
1.00
1.00
7683
Weighted Avg
1.00
1.00
1.00
7683
3
cervix_Dyk
1.00
1.00
1.00
1541
cervix_Koc
1.00
1.00
1.00
1536
cervix_Mep
1.00
1.00
1.00
1496
cervix_Pab
1.00
1.00
1.00
1550
cervix_Sfi
1.00
1.00
1.00
1560
Accuracy
1.00
7683
Macro Avg
1.00
1.00
1.00
7683
Weighted Avg
1.00
1.00
1.00
7683
4
cervix_Dyk
1.00
1.00
1.00
1549
cervix_Koc
1.00
1.00
1.00
1491
cervix_Mep
1.00
1.00
1.00
1571
cervix_Pab
1.00
1.00
1.00
1551
cervix_Sfi
1.00
1.00
1.00
1521
Accuracy
1.00
7683
Macro Avg
1.00
1.00
1.00
7683
Weighted Avg
1.00
1.00
1.00
7683
cervix_Dyk
0.99
0.99
0.99
1810
5
cervix_Dyk
1.00
1.00
1.00
1562
cervix_Koc
1.00
1.00
1.00
1525
cervix_Mep
1.00
1.00
1.00
1531
cervix_Pab
1.00
1.00
1.00
1520
cervix_Sfi
1.00
1.00
1.00
1545
Accuracy
1.00
7683
Macro Avg
1.00
1.00
1.00
7683
Weighted Avg
1.00
1.00
1.00
7683
Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi).
Dyk
1548
0
0
0
0
Koc
2
1511
0
0
0
Mep
2
0
1502
0
0
Pab
0
0
0
1594
0
Sfi
0
0
0
0
1525
Dyk
Koc
Mep
Pab
Sfi
Confusion Matrix Fold 1
Dyk
1530
0
0
0
0
Koc
0
1599
0
0
0
Mep
0
0
1555
0
0
Pab
0
0
0
1581
0
Sfi
0
0
0
0
1518
Dyk
Koc
Mep
Pab
Sfi
Confusion Matrix Fold 2
Dyk
1541
0
0
0
0
Koc
0
1536
0
0
0
Mep
0
0
1496
0
0
Pab
0
0
0
1550
0
Sfi
0
0
0
0
1560
Dyk
Koc
Mep
Pab
Sfi
Confusion Matrix Fold 3
Dyk
1549
0
0
0
0
Koc
0
1491
0
0
0
Mep
0
0
1571
0
0
Pab
0
0
0
1551
0
Sfi
0
0
0
0
1521
Dyk
Koc
Mep
Pab
Sfi
Confusion Matrix Fold 4
Dyk
1562
0
0
0
0
Koc
0
1525
0
0
0
Mep
0
0
1531
0
0
Pab
0
0
0
1520
0
Sfi
0
0
0
0
1545
Dyk
Koc
Mep
Pab
Sfi
Confusion Matrix Fold 5
Figure 6: Fold-wise confusion matrix of S-Net
Figure 6 Confusion matrices from five-fold cross-validation reveal the strong classification
ability of the S-Net model across five cervical cell types. Each fold shows a clear diagonal
dominance, indicating high prediction accuracy. Minor misclassifications are observed in Fold
1, primarily between morphologically similar classes. Folds 2 to 5 exhibit near-perfect
classification performance. These results highlight S-Net’s robustness and consistency in
multi-class cervical cell classification.
Fold 1-ROC Curve
Fold 2-ROC Curve
Fold 3-ROC Curve
Fold 4-ROC Curve
Fold 5-ROC Curve
Figure 7: Fold-wise confusion matrix with Epochs and Accuracy
Figure 7 ROC curves from five-fold cross-validation demonstrate S-Net’s strong classification
performance. In Fold 1, the model effectively minimizes false positives while maintaining high
true positive rates. Fold 2 shows improved precision with tighter ROC curves. Folds 3 to 5
confirm the model's consistent and robust classification accuracy. Overall, these results
emphasize the reliability of S-Net in cervical cancer detection.
Class
Precision Recall F1-Score Support
cervix_Dyk
1.00
1.00
1.00
7730
cervix_Koc
1.00
1.00
1.00
7664
cervix_Mep
1.00
1.00
1.00
7657
cervix_Pab
1.00
1.00
1.00
7696
cervix_Sfi
1.00
1.00
1.00
7669
Accuracy
1.00
38416
Macro Avg
1.00
1.00
1.00
38416
Weighted Avg
1.00
1.00
1.00
38416
Dyk
Koc
Mep
Pab
Sfi
Dyk 7730
0
0
0
0
Koc
2
7662
0
0
0
Mep
2
0
7655
0
0
Pab
0
0
0
7696
0
Sfi
0
0
0
0
7669
Figure 8: Combined confusion matrix of M-Net
Figure 8 The combined classification metrics for the S-Net model demonstrate exceptional
performance, with an overall accuracy of 99.98%. Precision, recall, and F1-scores range from
99.97% to 99.99%. Class-specific results indicate strong performance, with only minor
misclassifications in certain folds.
Figure 9: Fold-Wise Classification Report with Epochs and Accuracy
Figure 9 The S-Net model shows minimal classification errors and high discriminatory power,
with low false positives. Accuracy curves rise steadily, and loss curves decline consistently,
confirming the model’s robustness, efficiency, and reliability for cervical cancer classification.
4.2
Experiment 2- Performance of the Original CNN Network
The performances of six original CNN architectures VGG19, ResNet152v2, SE-ResNet152,
DenseNet201, Xception, and MobileNetV2, are presented in this section. Among all models,
the highest accuracy was achieved, and the overall performance was excellent.
Table 3: Accuracy for classification of individual CNN networks in detecting cervical cancer.
The table reports the Precision, Recall, F1-Score, and Support (n) for each network, based on
the number of images (n = numbers).
Dyk
Koc
Mep
Pab
Sfi
Model Accuracy
VGG19
Precision
100%
100%
100%
100%
100%
99.97%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1485
1477
1491
1487
1484
ResNet152
Precision
100%
100%
100%
100%
100%
99.99%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1484
1481
1489
1480
1490
Seresnet152
Precision
100%
100%
100%
100%
100%
99.99%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1488
1491
1481
1482
1483
Densenet201
Precision
100%
100%
100%
100%
100%
100%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1485
1484
1485
1488
1482
Xception
Precision
100%
100%
100%
100%
100%
100%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1488
1483
1481
1485
1487
MobilenetV2
Precision
100%
100%
100%
100%
100%
100%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1485
1482
1488
1486
1483
Table 3 highlights the precision values for each architecture evaluated on the test dataset. The
VGG19, ResNet152v2, DenseNet201, SE-ResNet152, Xception, and MobileNetV2
architectures delivered outstanding results.
From the table, it is evident that the DenseNet201, Xception, and MobileNetV2 models
excelled in correctly detecting and classifying cervical cancer compared to the other
architectures. These models provided highly accurate classification and demonstrated
exceptional performance in detecting cervical cancer.
Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi).
Dyk
1485
0
0
0
0
Koc
0
1475
0
0
0
Mep
0
0
1491
0
0
Pab
0
0
0
1487
0
Sfi
0
2
0
0
1484
Dyk
Koc
Mep
Pab
Sfi
VGG19
Dyk
1485
0
0
0
0
Koc
0
1484
0
0
0
Mep
0
0
1485
0
0
Pab
0
0
0
1488
0
Sfi
0
0
0
0
1482
Dyk
Koc
Mep
Pab
Sfi
DenseNet201
Dyk
1484
0
0
0
0
Koc
0
1481
1
0
0
Mep
0
0
1488
0
0
Pab
0
0
0
1480
0
Sfi
0
0
0
0
1490
Dyk
Koc
Mep
Pab
Sfi
ResNet152v2
Dyk
1488
0
0
0
0
Koc
0
1490
0
0
0
Mep
0
1
1481
0
0
Pab
0
0
0
1482
0
Sfi
0
0
0
0
1482
Dyk
Koc
Mep
Pab
Sfi
SeresNet152
Dyk
1488
0
0
0
0
Koc
0
1483
0
0
0
Mep
0
0
1481
0
0
Pab
0
0
0
1485
0
Sfi
0
0
0
0
1487
Dyk
Koc
Mep
Pab
Sfi
Xception
Dyk
1485
0
0
0
0
Koc
0
1482
0
0
0
Mep
0
0
1488
0
0
Pab
0
0
0
1486
0
Sfi
0
0
0
0
1483
Dyk
Koc
Mep
Pab
Sfi
MobileNetV2
Figure 10: Six confusion matrices of original CNNs
Figure 10 shows confusion matrices for six CNNs: VGG19, DenseNet201, ResNet152v2,
SEResNet152, Xception, and MobileNetV2. DenseNet201, Xception, and MobileNetV2
achieved better classification accuracy, correctly classifying 919 and 977 images, as noted in
Table 3.
Figure 11: The training and validation accuracy of the original CNNs.
Figure 11 presents the training and validation accuracy curves for six CNNs: VGG19,
ResNet152V2, DenseNet201, SE-ResNet152, Xception, and MobileNetV2. DenseNet201,
Xception, and MobileNetV2 show stable accuracy with minimal fluctuations, while VGG19
converges quickly and maintains high accuracy. ResNet152V2 and SE-ResNet152 exhibit
occasional drops in validation accuracy but remain strong overall.
VGG19
ResNet152V2
DenseNet201
Seresnet152
Xception
MobileNetV2
Figure 12: Six roc curves of original CNNs.
Figure 12 The ROC curves for various CNN architectures (VGG19, ResNet152V2,
DenseNet201, SE-ResNet152, Xception, and MobileNetV2) show an AUC of 1.00 for all
models, indicating perfect classification performance. Each model effectively distinguishes
between classes with high confidence.
4.3
Experiment 4: Transfer Learning CNN Network Accuracy in
Detecting Cervical Cancer
Table 4. Transfer learning CNN network accuracy in detecting cervical cancer.
Dyk
Koc
Mep
Pab
Sfi
Model Accuracy
VGG19
Precision
87%
85%
83%
94%
94%
88.58%
Recall
82%
80%
88%
99%
94%
F1-score
84%
82%
86%
97%
94%
Support (N)
1484
1480
1487
1488
1485
Precision
99%
99%
99%
100%
99%
VGG19
ResNet152V2
DenseNet201
Seresnet152
Xception
MobileNetV2
ResNet152
Recall
99%
98%
99%
100%
100%
99.16%
F1-score
99%
98%
99%
100%
99%
Support (N)
1486
1488
1478
1482
1490
Seresnet152
Precision
100%
100%
100%
100%
100%
99.81%
Recall
100%
100%
100%
100%
100%
F1-score
100%
100%
100%
100%
100%
Support (N)
1484
1485
1482
1490
1483
Densenet201
Precision
99%
99%
99%
100%
100%
99.45%
Recall
99%
99%
100%
100%
100%
F1-score
99%
99%
99%
100%
100%
Support (N)
1482
1484
1482
1490
1486
Xception
Precision
98%
97%
98%
100%
99%
98.31%
Recall
97%
97%
98%
100%
99%
F1-score
98%
97%
98%
100%
99%
Support (N)
1483
1481
1485
1487
1488
MobilenetV2
Precision
100%
99%
99%
100%
100%
99.53%
Recall
99%
99%
99%
100%
100%
F1-score
99%
99%
99%
100%
100%
Support (N)
1483
1484
1489
1482
1486
The performance of six transfer learning CNN architectures, VGG19, ResNet152V2, SE-
ResNet152, DenseNet201, Xception, and MobileNetV2, is summarised in Table 4.
DenseNet201 achieved the highest accuracy (99.45%), while VGG19 showed the lowest
(88.58%). Transfer learning models generally demonstrated lower accuracy compared to their
original CNN counterparts.
Table 4 presents the Precision, Recall, and F1-score results of CNN networks with transfer
learning. DenseNet201 achieved the highest precision (99%), while SE-ResNet152 showed the
best overall performance with 99.81% accuracy. VGG19 had the lowest precision (87%).
Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi).
Dyk
1212
114
62
10
0
Koc
116
1190
57
3
41
Mep
113
101
1313
4
45
Pab
32
28
25
1471
3
Sfi
11
47
30
0
1396
Dyk
Koc
Mep
Pab
Sfi
VGG19
Dyk
1467
9
1
0
0
Koc
8
1467
3
0
0
Mep
7
5
1478
0
0
Pab
0
0
0
1490
0
Sfi
0
3
0
0
1486
Dyk
Koc
Mep
Pab
Sfi
DenseNet201
Dyk
1476
7
4
0
2
Koc
5
1458
7
0
4
Mep
4
14
1462
0
1
Pab
0
1
1
1482
0
Sfi
1
8
4
0
1483
Dyk
Koc
Mep
Pab
Sfi
ResNet152v2
Dyk
1477
0
2
0
0
Koc
2
1480
2
0
0
Mep
5
1
1477
0
1
Pab
0
0
0
1490
0
Sfi
0
4
1
0
1482
Dyk
Koc
Mep
Pab
Sfi
SeresNet152
Dyk
1441
14
15
0
0
Koc
24
1436
18
0
4
Mep
14
16
1450
0
7
Pab
1
3
1
1487
1
Sfi
3
12
1
0
1476
Dyk
Koc
Mep
Pab
Sfi
Xception
Dyk
1472
3
3
0
0
Koc
7
1471
5
0
2
Mep
4
9
1479
0
0
Pab
0
0
0
1482
0
Sfi
0
1
2
0
1484
Dyk
Koc
Mep
Pab
Sfi
MobileNetV2
Figure 13: Six confusion matrices of Transfer learning CNNs.
Figure 13 shows the confusion matrices of transfer learning CNNs. DenseNet201 and SE-
ResNet152 achieved the highest accuracy with minimal misclassification, while VGG19 had
the most errors, especially in distinguishing Koc and Dyk cells.
Figure 14: The Transfer Learning version's training and validation accuracy.
Figure 14 shows the training and validation accuracy of various Transfer Learning models.
DenseNet201 and SE-ResNet152 demonstrate the highest accuracy with quick convergence
and stability, while ResNet152V2 shows slightly lower accuracy compared to the others.
VGG19
ResNet152V2
DenseNet201
Seresnet152
Xception
MobileNetV2
Figure 15: The Transfer Learning version's training and validation accuracy.
Figure 15 presents the ROC curves for various Transfer Learning models in multi-class
classification. DenseNet201, SENet152, Xception, and MobileNetV2 show perfect
classification performance with an AUC of 1.00 across all classes. VGG19, however, has a
slightly lower AUC for one class.
4.4 Result of the XAI
This study uses three (3) explainable AI techniques, LIME, SHAP, and Grad-CAM, to generate
local and global explanations for the S-NET (the developed CNN) predictions on the validation
and test sets. LIME generates an interpretable model by training a local linear model around
the prediction point, while SHAP provides a unified framework for feature importance
estimation. Furthermore, we conducted statistical analysis on the correctly classified,
misclassified and true positive-true negative-false positive-false negative images.
4.4.1 Visualization using LIME and SHAP
LIME and SHAP are used to interpret the S-Net cervical cancer model. LIME breaks the input
images into super-pixel regions and approximates the model’s behavior using a simple model
like linear regression. The resulting heatmap highlights regions that positively or negatively
VGG19
ResNet152V2
DenseNet201
Seresnet152
Xception
MobileNetV2
impact the prediction, with red areas indicating strong influence and blue areas decreasing the
likelihood of a class.
SHAP utilizes a pre-trained cervical cancer model to compute Shapley values, which assess
the contribution of each pixel or region to the predicted class. By perturbing the input image
and predicting the outputs for each variation, SHAP provides a quantitative explanation of the
model’s decisions.
Figure 16: LIME partition explainer of Pap smear images
Figure 17: SHAP explainer of Pap Smear images
4.4.2 Grad-CAM analysis of correctly/incorrectly classified cervical cancer
modalities
Grad-CAM is a technique that visualizes which regions of an image contribute most to the S-
Net model’s prediction by generating heatmaps. It computes the gradients of the predicted class
concerning the feature maps in the model's last convolutional layer. These gradients highlight
the importance of each feature map, which are then weighted and combined to create a class
activation heatmap. The heatmap is overlaid on the original image, with red regions indicating
higher attention and blue regions indicating less influence on the prediction. This technique
enhances the transparency and interpretability of the S-Net model in cervical cancer
classification, allowing for a better understanding of how the model focuses on specific areas.
Misclassification examples are displayed in Figure 17.
Figure 18: Examples of misclassified images
Figure 19: GRAD-CAM view of misclassified images
Figure 19 presents Grad-CAM visualizations revealing misclassified cervical cancer cases. Red
regions indicate high model attention, while blue regions show less focus. Misclassifications
occurred when S-Net focused on irrelevant areas, such as predicting “cervix_Sfi” instead of
“cervix_Koc.” This suggests challenges in identifying cancer-specific features accurately.
Figure 20: Grad-CAM view of correctly classified images
Figure 20 presents Grad-CAM visualizations of correctly classified cervical cell images. The
highlighted regions (in red) indicate where the model focuses during classification, aligning
well with class-specific features. This confirms the model’s ability to extract and utilize
meaningful visual cues. However, occasional focus on irrelevant regions (in blue) suggests
opportunities to further refine spatial attention and enhance precision.
4.4.3 Pixel intensity of correctly/incorrectly Cervical Cancer modalities
Pixel intensity highlights areas that contribute most to the CNN’s decision. The original image
shows key features, while the Gradient × Input explanation reveals the influence of each part,
with brighter areas indicating higher influence. Misalignment between the highlighted regions
and expected features suggests the model may focus on irrelevant areas, impacting accurate
classification.
Figure 21: Grad-CAM view of pixel intensity of misclassified images
Figure 22: Grad-CAM view of pixel intensity for correctly classified images
4.4.4 Pixel Intensity Analysis Correct/misclassified images
This study compares the pixel intensity distributions between correctly classified and
misclassified cervical cancer images. The analysis reveals that correctly classified images have
significantly higher mean and median pixel intensities (mean = 61.02, median = 67.24)
compared to misclassified images (mean = 44.94, median = 49.33). The interquartile range
(IQR) is also larger for the correctly classified images, indicating more variability in their pixel
intensities. In contrast, misclassified images exhibit lower variability, suggesting that the model
struggles to classify images with less distinct or variable features. This emphasizes the crucial
role of clear and distinct pixel intensity contrasts for accurate cervical cancer detection, as the
model relies on such intensity patterns to make accurate predictions.
Figure 23: Distribution of Pixel intensity of misclassified/ correctly classified images.
Furthermore, we use Significance Level (α) = 0.05 as the threshold to determine statistical
significance. If the p-value is less than α, we reject the null hypothesis and conclude a
significant difference between the two classes. If the p-value exceeds α, we fail to reject the
null hypothesis, indicating no significant difference.
Result:
Independent t-test p-value: 0.0000
Mann–Whitney U test p-value: 0.0000
Independent t-test: Reject H₀. There is a significant difference between misclassified_100 and
correctly_classified_100.
Table 5: Descriptive Statistics for Pixel Intensity of correctly classified and misclassified
images.
Statistic
Misclassified Correctly Classified
Count
4096.000000
4096.000000
Mean
44.938873
61.021790
Std
29.812426
36.996021
Min
0.416000
4.292000
25%
12.529000
22.864000
50%
49.338001
67.242001
75%
75.280998
98.147999
Max
85.456001
112.676003
Interquartile Range 62.751998
75.283998
4.4.5 Pixel intensity of TP-FP-TN-FN Cervical Cancer modalities
The analysis of pixel intensity distributions across four categories (True Positive, True
Negative, False Positive, and False Negative) reveals key insights. False Negatives (FN) have
the highest mean intensity (62.04) and the broadest range, indicating errors arise from highly
variable intensities. True Positives (TP) and True Negatives (TN) show more consistent
distributions, suggesting better performance when intensities align with patterns. False
Positives (FP) exhibit a narrower intensity range, with errors occurring in areas with less
contrast. Overall, the model performs best with mid-range intensities and struggles with
extreme or variable intensities, especially in the FN category.
Figure 24: Distribution of Pixel intensity of TP-TN-FP-FN
4.4.6 Pixel intercity comparison between TP-FP-TN-FN
Significance Level (α): We use α = 0.05 as the threshold to determine statistical significance.
If the p-value is less than α, we reject the null hypothesis and conclude that there is a significant
difference between the two classes. If the p-value exceeds α, we fail to reject the null
hypothesis, indicating no significant difference.
Accepted/rejected Hypotheses:
Independent t-test and Mann–Whitney U test: Reject H1₀: The pixel intensity distributions of
True positive and True negative are not significantly different.
Independent t-test and Mann–Whitney U test: Reject H2₀: The pixel intensity distributions of
True positive and False positive are not significantly different.
Independent t-test and Mann–Whitney U test: Reject H3₀: The pixel intensity distributions of
True positive and False negative are not significantly different.
Independent t-test and Mann–Whitney U test: Reject H4₀: The pixel intensity distributions of
True negative and False positive are not significantly different.
Independent t-test and Mann–Whitney U test: Reject H5₀: The pixel intensity distributions of
True negative and False negative are not significantly different.
Independent t-test and Mann–Whitney U test: Reject H6₀: The pixel intensity distributions of
False positive and False negative are not significantly different.
Descriptive Statistics for Pixel Intensity:
Table 6: Descriptive Statistics for Pixel Intensity of TP-TN-FP-FN images
Statistic
True Positive
True Negative
False Positive
False Negative
Count
3136.000000 3136.000000
3136.000000
3136.000000
Mean
152.820465
183.763855
170.796997
158.757401
Std
21.152983
17.148066
15.314829
22.917040
Min
89.400002
125.800003
109.000000
84.800003
25%
138.199997
171.600006
161.199997
143.199997
50%
153.000000
183.299995
173.000000
159.600006
75%
168.000000
196.399994
182.850002
175.800003
Max
207.000000
219.199997
200.199997
205.399994
Interquartile
Range
29.800003
24.799988
21.650005
32.600006
5. Discussion
In this study, a lightweight convolutional neural network (CNN) model, termed S-Net, was
developed with fewer layers and learnable parameters to efficiently detect cervical cancer in
Pap smear images. The results demonstrate that the proposed S-Net model effectively classifies
cervical cancer cells and achieves an impressive accuracy of 99.99%, outperforming state-of-
the-art (SOTA) CNN models, transfer learning techniques, and ensemble methods. For training
and evaluation, the S-Net model utilized three Python data generation techniques:
TensorFlow/Keras’s flow_from_directory, flow_from_dataframe, and flow (used with NumPy
arrays). Among these methods, flow_from_directory yielded the best performance, as it
efficiently loads images batch-by-batch, minimizing memory consumption compared to
flow_from_dataframe and flow, which load the entire dataset into RAM. Furthermore, the
Adam optimizer provided consistent and stable performance across all experiments, while the
RMSprop optimizer showed fluctuations depending on the model, highlighting the significance
of selecting the appropriate optimizer based on the specific convergence behavior of the
architecture.
Figure 25: Accuracy comparison among individual CNN, transfer learning, and Mnet
models
This research presents three (3) experiments for detecting and classifying cervical cancer
images using well-known deep learning architectures and identifying the most promising
model. In the first experiment, six CNN models, VGG19, ResNet152V2, SE-ResNet152,
DenseNet201, Xception, and MobileNetV2, were compared on 25,000 microscopic cervical
cancer images across five (5) classes: Dyk, Koc, Mep, Pab, and Sfi.
Figure 26: Accuracy comparison among individual CNN, transfer learning, and ensemble
models.
0
10000000
20000000
30000000
40000000
50000000
60000000
70000000
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
T O T A L P A R A M S
T R A I N A B L E …
N O N - …
V G G 1 9
R E S N E T 1 5 2 V 2
D E N S E N E T 2 0 1
S E R E S N E T 1 5 2X C E P T I O NM O B I L E N E T V 2
S - N E T
PARAMETER COMPARISON OF CNNS & S-NET
99.97%99.99%
100.00%
99.99%
100.00%
100.00%
88.58%
99.16%99.45%99.81%98.31%99.53%
99.74%99.95%
99.86%
82.00%
84.00%
86.00%
88.00%
90.00%
92.00%
94.00%
96.00%
98.00%
100.00%
102.00%
COMPARISON OF MODELS APPILED IN THE STUDY
Series1
Significant studies have highlighted that transfer learning improves classification accuracy and
reduces training time compared to conventional CNN for many classification tasks (Ju et al.,
2022; Karimi et al., 2021). For example, Mehmood et al. (2024) proposed and reported 95.07%
binary classification performance; Emam et al. (2024) reported breast cancer diagnosis using
an optimized transfer learning technique and improved the accuracy. However, in our study,
transfer learning received negative results. The accuracy has decreased compared to the main
CNN. The findings of negative transfer learning support the study (Rangarajan & Raja, 2020;
Mohanty et al., 2016; Barbedo, 2018) that in the case the input image differs from the trained
data of the Imagenet Dataset, the accuracy is likely to be decreased. The effect of background
noise and the application of different augmentation techniques separately with the test sets
resulted in a drop in performance. However, in the case of the original CNN, the model was
trained and tested using similar input, and the prediction capabilities were increased in unseen
data. Moreover, although CNN can learn features irrespective of the input data, this study’s
limited number of datasets is likely a factor influencing the prediction capability. Our view is
also supported by (Barbedo, 2018), who suggested that increasing the dataset size may improve
transfer learning performance when the input image is modified using augmentation.
Most importantly, this study applied three (3) XAI techniques, LIME, SHAP, and Grad-CAM,
to generate explanations for CNN detection and classification. The XAI expands our
understanding of S-NET’s decision-making process. XAIs used in this study enhance the
interpretability of S-Net. LIME provides model-agnostic explanations by approximating the
local decision boundary of a model with an interpretable surrogate, highlighting the regions of
the pap smear images that contribute most to the model’s predictions. Based on cooperative
game theory, SHAP offers consistent and theoretically sound explanations by assigning
Shapley values to input features, thereby quantifying their contribution to the output with high
fidelity. Grad-CAM, designed explicitly for CNNs, generates class-discriminative
visualizations by leveraging gradients from the final convolutional layers, effectively
localizing regions in the image that influence a specific class prediction. In summary, XAI,
coupled with S-Net, enhances the trust and understanding of CNNs by transparentizing their
decision-making processes.
Lastly, pixel intensity was analyzed using the statistical method of true classification,
misclassification, true positive, false positive, true negative, and false negative classification.
This study contributes to understanding the growing literature on using explainable AI to
improve medical image analysis and diagnosis. It provides insights into the interpretability and
transparency of CNN for Cervical cancer modalities.
6. Conclusion and future work
This study evaluates deep learning models for cervical cancer detection, conducting three
sequential experiments. The first experiment tested six CNN architectures—VGG19,
ResNet152V2, DenseNet201, ResNeXt101, SE-ResNet152, and MobileNetV2—on a dataset
of 25,000 Pap smear images. The second experiment applied transfer learning to these models.
The third introduced the novel lightweight CNN, S-Net, and integrated it with Explainable AI
(XAI) methods: LIME, SHAP, and Grad-CAM.
The results demonstrate that S-Net outperformed all baseline models, achieving superior
accuracy, precision, recall, and F1-score. XAI techniques improved interpretability,
highlighting critical regions influencing predictions. A pixel intensity analysis revealed
significant differences between correctly and incorrectly classified samples, emphasizing the
role of intensity in model performance.
However, the study acknowledged limitations, including the use of a secondary dataset and the
exclusive focus on Pap smear images, limiting generalizability. Future work should explore
other imaging modalities like colposcopy and histopathology to enhance clinical applicability.
While transfer learning did not yield optimal results, further research on lightweight CNNs
may be beneficial. Clinical validation with expert input is essential for real-world deployment.
In conclusion, this research underscores the potential of CNNs, particularly S-Net, in
automating cervical cancer detection, offering a significant contribution toward reliable and
interpretable AI-based medical diagnostics
References
Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2023). Enhancing Tea Leaf Disease
Detection Through Customized Vision Transformer and Hyperparameters Optimisation.
Available at SSRN, 4940688. https://doi.org/
Ahad, M. T., Bhowmik, A. C., Emon, Y. R., & Ahmed, F. (2024). A Customized Vision
Transformer for Accurate Detection and Classification of Java Plum Leaf Disease.
Available at SSRN, 4829650. https://doi.org/2
Ahad, M. T., Emon, Y. R., & Mustofa, S. (2024). Data of history: An open-source and
multiformat wall image dataset of Panam city, a historical place. Data in Brief, 56, 110774.
https://doi.org/6
Ahad, M. T., Li, Y., Song, B., & Bhuiyan, T. (2023). Comparison of CNN-based deep
learning architectures for rice diseases classification. Artificial Intelligence in Agriculture,
9, 22-35. https://doi.org/231
Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). End User
Interface Design of Mobile-Based Fish Disease Detection to Assist Fish Farmers. Available
at SSRN, 4980536. https://doi.org/
Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). Fishdoc: A
Mobile-Based Fish Disease Detection System Using Yolov8. Available at SSRN, 4899189.
https://doi.org/
Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive
study on blood cancer detection and classification using Convolutional Neural Network.
arXiv preprint, arXiv:2409.06689. https://doi.org/2
Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive
study on blood cancer detection and classification using Convolutional Neural Network.
arXiv e-prints, arXiv: 2409.06689. https://doi.org/
Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on
Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast
Cancer Detection. arXiv preprint, arXiv:2409.06699. https://doi.org/4
Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on
Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast
Cancer Detection. arXiv e-prints, arXiv: 2409.06699. https://doi.org/
Ahad, M. T., Mustofa, S., Rahman, M. S., Song, B., & Li, Y. (2023). A Comprehensive
Study on Deep Feature Extraction to Detect and Classify Soursop Leaf Disease. Available
at SSRN, 4845099. https://doi.org/
Ahad, M. T., Mustofa, S., Sarker, A., & Emon, Y. R. (2024). Bdpapayaleaf: A dataset
of papaya leaf for disease detection, classification, and analysis. Classification, and
Analysis. https://doi.org/3
Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using
novel CNN-based ensemble approach. arXiv preprint, arXiv:2410.05272. https://doi.org/1
Ahad, M. T., Preanto, S. A., Song, B., & Li, Y. (2023). Gan-Generated Spectrogram
Detection and Classification for Heartbeat Classification Using a Vision Transformer.
Available at SSRN, 4892869. https://doi.org/
Ahad, M. T., Song, B., & Li, Y. (2024). A Comparison of Convolutional Neural
Network, Transfer Learning and Ensemble Technique for Brain Tumour Detection of
Classification. Transfer Learning and Ensemble Technique for Brain Tumour Detection
of… https://doi.org/1
Ahmed, F., & Ahad, M. T. (2023). Machine learning-based tea leaf disease detection:
A comprehensive review. arXiv preprint, arXiv:2311.03240. https://doi.org/22
Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2023). A fuzzy-
based vision transformer model for tea leaf disease detection. International Conference on
Trends in Computational and Cognitive… https://doi.org/5
Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2024). A Fuzzy-
Based Vision Transformer Model for Tea Leaf Disease Detection Check for updates.
Proceedings of the Fifth International Conference on Trends in Computational…
https://doi.org/1
Aladhadh, S., Alsanea, M., Aloraini, M., Khan, T., Habib, S., & Islam, M. (2022). An
effective skin cancer classification mechanism via medical vision transformer. Sensors,
22(11), 4008.
Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L.,
... & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial
intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information
Fusion, 96, 156-191.
Ali, M. S., Hossain, M. M., Kona, M. A., Nowrin, K. R., & Islam, M. K. (2024). An
ensemble classification approach for cervical cancer prediction using behavioral risk
factors. Healthcare Analytics, 5, 100324.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ...
& Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI. Information fusion, 58, 82-115.
Atasever, S., Azginoglu, N., Terzi, D. S., & Terzi, R. (2023). A comprehensive survey
of deep learning research on medical image analysis with focus on transfer
learning. Clinical imaging, 94, 18-41.
Attallah, O. (2023). Cercan· net: Cervical cancer classification model via multi-layer
feature ensembles of lightweight cnns and transfer learning. Expert Systems with
Applications, 229, 120624.
Attallah, O. (2023). Cervical cancer diagnosis based on multi-domain features using
deep learning enhanced by handcrafted descriptors. Applied Sciences, 13(3), 1916.
Azad, R., Aghdam, E. K., Rauland, A., Jia, Y., Avval, A. H., Bozorgpour, A., ... &
Merhof, D. (2024). Medical image segmentation review: The success of u-net. IEEE
Transactions on Pattern Analysis and Machine Intelligence.
Band, S. S., Yarahmadi, A., Hsu, C. C., Biyari, M., Sookhak, M., Ameri, R., ... & Liang,
H. W. (2023). Application of explainable artificial intelligence in medical health: A
systematic review of interpretability methods. Informatics in Medicine Unlocked, 40,
101286.
Barbedo, J. G. A. (2018). Impact of dataset size and variety on the effectiveness of deep
learning and transfer learning for plant disease classification. Computers and electronics in
agriculture, 153, 46-53.
Bhandarkar, A., Naik, P., Vakkund, K., Junjappanavar, S., Bakare, S., & Pattar, S.
(2024). Deep learning based computer aided diagnosis of Alzheimer’s disease: a snapshot
of last 5 years, gaps, and future directions. Artificial Intelligence Review, 57(2), 30.
Bhowmik, A. C., Ahad, M. T., & Emon, Y. R. (2023). Machine Learning-Based Jamun
Leaf Disease Detection: A Comprehensive Review. arXiv preprint, arXiv:2311.15741.
https://doi.org/4
Bhowmik, A. C., Ahad, M. T., Emon, Y. R., Ahmed, F., Song, B., & Li, Y. (2024). A
customised Vision Transformer for accurate detection and classification of Java Plum leaf
disease. Smart Agricultural Technology, 8, 100500. https://doi.org/22
Biplob, T. I., Rabbany, G., Emon, Y. R., Ahad, M. T., & Fimu, F. A. (2023). An
Optimized Vision Based Transformer for Lungs Cancer Detection. International
Conference on Trends in Computational and Cognitive … https://doi.org/
Boon, S. S., Luk, H. Y., Xiao, C., Chen, Z., & Chan, P. K. S. (2022). Review of the
standard and advanced screening, staging systems and treatment modalities for cervical
cancer. Cancers, 14(12), 2913
Deo, B. S., Pal, M., Panigrahi, P. K., & Pradhan, A. (2024). CerviFormer: A pap smear‐
based cervical cancer classification method using cross‐attention and latent transformer.
International Journal of Imaging Systems and Technology, 34(2), e23043.
Dogani, J., Namvar, R., & Khunjush, F. (2023). Auto-scaling techniques in container-
based
cloud
and
edge/fog
computing:
Taxonomy
and
survey. Computer
Communications, 209, 120-150.
Emon, Y. R., & Ahad, M. T. (2024). Multi-format open-source sweet orange leaf
dataset for disease detection, classification, and analysis. Data in Brief, 55, 110713.
https://doi.org/17
Fan, Z., Wu, X., Li, C., Chen, H., Liu, W., Zheng, Y., ... & Li, C. (2023). CAM-VT: A
weakly supervised cervical cancer nest image identification approach using conjugated
attention mechanism and visual transformer. Computers in Biology and Medicine, 162,
107070.
Gendy, G., He, G., & Sabor, N. (2023). Lightweight image super-resolution based on
deep learning: State-of-the-art and future directions. Information Fusion, 94, 284-310.
Hossain, S., Tanzim Reza, M., Chakrabarty, A., & Jung, Y. J. (2023). Aggregating
Different Scales of Attention on Feature Variants for Tomato Leaf Disease Diagnosis from
Image Data: A Transformer Driven Study. Sensors, 23(7), 3751.doi: 10.3390/s23073751
identification. Internet of Things, 21, 100650.
Huang, L., Gao, X., Li, Y., Lyu, F., Gao, Y., Bai, Y., ... & Ding, X. (2025). Enhancing
stereotactic ablative boost radiotherapy dose prediction for bulky lung cancer: A multi‐
scale dilated network approach with scale‐balanced structure loss. Journal of Applied
Clinical Medical Physics, 26(1), e14546.
Hussain, E., Mahanta, L. B., Borbora, K. A., Borah, H., & Choudhury, S. S. (2024).
Exploring explainable artificial intelligence techniques for evaluating cervical
intraepithelial neoplasia (CIN) diagnosis using colposcopy images. Expert Systems with
Applications, 249, 123579.
Islam, R., Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2024). Mental health diagnosis
from voice data using convolutional neural networks and vision transformers. Journal of
Voice. https://doi.org/1
Jawahar, M., Anbarasi, L. J., Narayanan, S., & Gandomi, A. H. (2024). An attention-
based deep learning for acute lymphoblastic leukemia classification. Scientific Reports,
14(1), 17447.
Jha, P., Dembla, D., & Dubey, W. (2024). Deep learning models for enhancing potato
leaf disease prediction: Implementation of transfer learning based stacking ensemble
model. Multimedia Tools and Applications, 83(13), 37839-37858.
Joseph, J. S., Vidyarthi, A., & Singh, V. P. (2024). An improved approach for initial
stage detection of laryngeal cancer using effective hybrid features and ensemble learning
method. Multimedia Tools and Applications, 83(6), 17897-17919.
Joynab, N. S., Islam, M. N., Aliya, R. R., Hasan, A. R., Khan, N. I., & Sarker, I. H.
(2024). A federated learning aided system for classifying cervical cancer using PAP-
SMEAR images. Informatics in Medicine Unlocked, 47, 101496.
Ju, J., Zheng, H., Xu, X., Guo, Z., Zheng, Z., & Lin, M. (2022). Classification of jujube
defects in small data sets based on transfer learning. Neural Computing and Applications,
1-14.
Karimi, D., Warfield, S. K., & Gholipour, A. (2021). Transfer learning in medical
image segmentation: New insights from analysis of the dynamics of model parameters and
learned representations. Artificial intelligence in medicine, 116, 102078.
Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent
architectures of deep convolutional neural networks. Artificial intelligence review, 53,
5455-5516.
Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent
architectures of deep convolutional neural networks. Artificial intelligence review, 53,
5455-5516.
Khowaja, A., Zou, B., & Kui, X. (2024). Enhancing cervical cancer diagnosis:
Integrated attention-transformer system with weakly supervised learning. Image and
Vision Computing, 149, 105193.
Kora, P., Ooi, C. P., Faust, O., Raghavendra, U., Gudigar, A., Chan, W. Y., ... &
Acharya, U. R. (2022). Transfer learning techniques for medical.
Kumar, T. S., Rajendran, P., Santhiyakumari, N., Kumarganesh, S., Sundari, L. M.,
Elango, S., ... & Emadifar, H. (2024). Analysis of Computational Methods for Diagnosis
of Cervical Cancer–A Review. Appl. Math, 18(4), 795-809.
Kumar, Y., Shrivastav, S., Garg, K., Modi, N., Wiltos, K., Woźniak, M., & Ijaz, M. F.
(2024). Automating cancer diagnosis using advanced deep learning techniques for multi-
cancer image
Lau, K. W., Po, L. M., & Rehman, Y. A. U. (2024). Large separable kernel attention:
Rethinking the large kernel attention design in cnn. Expert Systems with Applications, 236,
121352.
Leung, S. N., Chandra, S. S., Lim, K., Young, T., Holloway, L., & Dowling, J. A.
(2024). Automatic segmentation of tumour and organs at risk in 3D MRI for cervical cancer
radiation therapy with anatomical variations. Physical and Engineering Sciences in
Medicine, 47(3), 919-928.
Leung, S. N., Chandra, S. S., Lim, K., Young, T., Holloway, L., & Dowling, J. A.
(2024). Automatic segmentation of tumour and organs at risk in 3D MRI for cervical cancer
radiation therapy with anatomical variations. Physical and Engineering Sciences in
Medicine, 47(3), 919-928.
Liu, Q., Jiang, N., Hao, Y., Hao, C., Wang, W., Bian, T., ... & Dong, T. (2023).
Identification of lymph node metastasis in pre‐operation cervical cancer patients by weakly
supervised deep learning from histopathological whole‐slide biopsy images. Cancer
Medicine, 12(17), 17952-17966.
Liu, X., Geng, L. S., Huang, D., Cai, J., & Yang, R. (2024). Deep learning-based target
tracking with X-ray images for radiotherapy: a narrative review. Quantitative Imaging in
Medicine and Surgery, 14(3), 2671.
Liu, X., Song, L., Liu, S., & Zhang, Y. (2021). A review of deep-learning-based medical
image segmentation methods. Sustainability, 13(3), 1224.
Luo, Y., Zhang, S., Ling, J., Lin, Z., Wang, Z., & Yao, S. (2024). Mask-guided
generative adversarial network for MRI-based CT synthesis. Knowledge-Based Systems,
295, 111799.
Mahajan, P., & Kaur, P. (2024). Improving cervical cancer classification in PAP smear
images
with
enhanced
segmentation
and
deep
progressive
learning‐based
techniques. Diagnostic Cytopathology, 52(6), 313-324.
Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2023).
Scratch vision transformer model for diagnosis grape leaf disease. International Conference
on Trends in Computational and Cognitive… https://doi.org/7
Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2024).
Scratch Vision Transformer Model for Diagnosis Grape Leaf Disease Check for updates.
Proceedings of the Fifth International Conference on Trends in Computational…
https://doi.org/
Mamun, S. B., Payel, I. J., Ahad, M. T., Atkins, A. S., Song, B., & Li, Y. (2025). Grape
Guard: A YOLO-based mobile application for detecting grape leaf diseases. Journal of
Electronic Science and Technology, 23(1), 100300. https://doi.org/2
Mathivanan, S. K., Francis, D., Srinivasan, S., Khatavkar, V., P, K., & Shah, M. A.
(2024). Enhancing cervical cancer detection and robust classification through a fusion of
deep learning models. Scientific Reports, 14(1), 10812.
Mathivanan, S. K., Francis, D., Srinivasan, S., Khatavkar, V., P, K., & Shah, M. A.
(2024). Enhancing cervical cancer detection and robust classification through a fusion of
deep learning models. Scientific Reports, 14(1), 10812.
Mazumder, M. K. A., Nobi, M. M. U., Mridha, M. F., & Hasan, K. T. (2024). A Precise
Cervical Cancer Classification in the Early Stage Using Transfer Learning-Based Ensemble
Method: A Deep Learning Approach. In Data-Driven Clinical Decision-Making Using
Deep Learning in Imaging (pp. 41-59). Singapore: Springer Nature Singapore.
Mehedi, M. H. K., Khandaker, M., Ara, S., Alam, M. A., Mridha, M. F., & Aung, Z.
(2024). A lightweight deep learning method to identify different types of cervical cancer.
Scientific Reports, 14(1), 29446.
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-
based plant disease detection. Frontiers in plant science, 7, 215232.
Morid, M. A., Borjali, A., & Del Fiol, G. (2021). A scoping review of transfer learning
research on medical image analysis using ImageNet. Computers in biology and
medicine, 128, 104115.
Mustofa, S., Ahad, M. T., Emon, Y. R., & Sarker, A. (2024). BDPapayaLeaf: A dataset
of papaya leaf for disease detection, classification, and analysis. Data in Brief, 57, 110910.
https://doi.org/7
Mustofa, S., Emon, Y. R., Mamun, S. B., Akhy, S. A., & Ahad, M. T. (2025). A novel
AI-driven model for student dropout risk analysis with explainable AI insights. Computers
and Education: Artificial Intelligence, 8, 100352. https://doi.org/5
Mustofa, S., Munna, M. M. H., Emon, Y. R., Rabbany, G., & Ahad, M. T. (2023). A
comprehensive review on plant leaf disease detection using deep learning. arXiv preprint,
arXiv:2308.14087. https://doi.org/37
Nazir, A., Hussain, A., Singh, M., & Assad, A. (2024). Deep learning in medicine:
advancing healthcare with intelligent solutions and the future of holography imaging in
early diagnosis. Multimedia Tools and Applications, 1-64.
Nirmala, G., Nayudu, P. P., Kumar, A. R., & Sagar, R. (2024). Automatic cervical
cancer classification using adaptive vision transformer encoder with CNN for medical
application. Pattern Recognition, 111201.
Nour, M. K., Issaoui, I., Edris, A., Mahmud, A., Assiri, M., & Ibrahim, S. S. (2024).
Computer aided cervical cancer diagnosis using gazelle optimization algorithm with deep
learning model. IEEE Access.
Pacal, I. (2024). MaxCerVixT: A novel lightweight vision transformer-based Approach
for precise cervical cancer detection. Knowledge-Based Systems, 289, 111482.
Pacal, I., & Kılıcarslan, S. (2023). Deep learning-based approaches for robust
classification of cervical cancer. Neural Computing and Applications, 35(25), 18813-
18828.
Painuli, D., & Bhardwaj, S. (2022). Recent advancement in cancer diagnosis using
machine learning and deep learning techniques: A comprehensive review. Computers in
Biology and Medicine, 146, 105580.
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A
semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO.
arXiv preprint, arXiv:2409.06671. https://doi.org/7
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study
on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL).
arXiv preprint, arXiv:2409.06687. https://doi.org/5
Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A
semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO.
arXiv e-prints, arXiv: 2409.06671. https://doi.org/
Sambyal, D., & Sarwar, A. (2023). Recent developments in cervical cancer diagnosis
using deep learning on whole slide images: An Overview of models, techniques, challenges
and future directions. Micron, 173, 103520.
Sarhangi, H. A., Beigifard, D., Farmani, E., & Bolhasani, H. (2024). Deep Learning
Techniques for Cervical Cancer Diagnosis based on Pathology and Colposcopy Images.
Informatics in Medicine Unlocked, 101503.
Shandilya, G., Gupta, S., Almogren, A., Bharany, S., Altameem, A., Rehman, A. U., &
Hussen, S. (2024). Enhancing advanced cervical cell categorization with cluster-based
intelligent systems by a novel integrated CNN approach with skip mechanisms and GAN-
based augmentation. Scientific Reports, 14(1), 29040.
Tan, S. L., Selvachandran, G., Ding, W., Paramesran, R., & Kotecha, K. (2024).
Cervical cancer classification from pap smear images using deep convolutional neural
network models. Interdisciplinary Sciences: Computational Life Sciences, 16(1), 16-38.
Tomko, M., Pavliuchenko, M., Pavliuchenko, I., Gordienko, Y., & Stirenko, S. (2023).
Multi-label classification of cervix types with image size optimization for cervical cancer
prescreening by deep learning. In Inventive Computation and Information Technologies:
Proceedings of ICICIT 2022 (pp. 885-902). Singapore: Springer Nature Singapore.
Uyanga, E., Khishigdemberel, I., Khongorzul, B., Kiseleva, T. Y., Kobayashi, S.,
Tyapkin, P. Y., ... & Odkhuu, D. (2024). Enhancing heat generation ability and magnetic
properties of MgFe2O4 with Ni substitutes for biomedical applications. Materials Today
Chemistry, 35, 101841.
Wubineh, B. Z., Rusiecki, A., & Halawa, K. (2024, June). Segmentation of Cytology
Images to Detect Cervical Cancer Using Deep Learning Techniques. In International
Conference on Computational Science (pp. 270-278). Cham: Springer Nature Switzerland.
Yaman, O., & Tuncer, T. (2022). Exemplar pyramid deep feature extraction based
cervical cancer image classification model using pap-smear images. Biomedical Signal
Processing and Control, 73, 103428.
Yeasmin, M. N., Al Amin, M., Joti, T. J., Aung, Z., & Azim, M. A. (2024). Advances
of AI in image-based computer-aided diagnosis: A review. Array, 100357.
Youneszade, N., Marjani, M., & Pei, C. P. (2023). Deep learning in cervical cancer
diagnosis: architecture, opportunities, and open research challenges. IEEE Access, 11,
6133-6149.
Yue, J., Jin, S., Wang, S., Zeng, J., Shan, S., Liu, B., ... & Zhou, F. (2024). A shape-
supervised feature fusion U-Net for tubular structure segmentation. Computers and
Electrical Engineering, 119, 109522.
Yue, J., Jin, S., Wang, S., Zeng, J., Shan, S., Liu, B., ... & Zhou, F. (2024). A shape-
supervised feature fusion U-Net for tubular structure segmentation. Computers and
Electrical Engineering, 119, 109522.
Zhang, Y., Ning, C., & Yang, W. (2025). An automatic cervical cell classification
model based on improved DenseNet121. Scientific Reports, 15(1), 3240.
|
A study on Deep Convolutional Neural Networks, transfer learning, and Mnet model for Cervical Cancer Detection Saifuddin Sagor . Md Taimur Ahad -resource settings. Early and accurate detection through Pap smear analysis is critical to improving patient outcomes and reducing mortality. Although state-of-the-art (SOTA) Convolutional Neural Networks (CNNs) have revolutionized disease diagnosis, most SOTA CNNs are designed for large-scale object detection and classification tasks. As a result, they require substantial computational resources, extended training time, and large datasets. In this study, a lightweight CNN model, S-Net (Simple Net), is developed specifically for cervical cancer detection and classification using Pap smear images to address these limitations. Alongside S-Net, six SOTA CNNs were evaluated using transfer learning, including multi-path (DenseNet201, ResNet152), depth-based (Serasnet152), width-based multiconnection (Xception), depth-wise separable convolutions (MobileNetV2), and spatial exploitation-based (VGG19). All models, including S-Net, achieved comparable accuracy, with S-Net reaching 99.99%. However, S-Net significantly outperforms the SOTA CNNs in terms of computational efficiency and inference time, making it a more practical choice for real-time and resource-constrained applications. A major limitation in CNN-based medical diagnosis remains the lack of transparency in the decision-making process. To address this, Explainable AI (XAI) techniques, such as SHAP, LIME, and Grad-CAM, were employed to visualize and interpret the key image regions influencing model predictions. The novelty of this study lies in the development of a highly accurate yet computationally lightweight model (S-Net) caPable of rapid inference while maintaining interpretability through XAI integration. Furthermore, this work analyzes the behavior of SOTA CNNs, investigates the effects of negative transfer learning on Pap smear images, and examines pixel intensity patterns in correctly and incorrectly classified samples. Keywords: Cervical Cancer, CNN, Deep learning, LIME, SHAP, Transfer Learning, Mnet model, XAI. 1. Introduction Cervical cancer, which affects the cervix at the lower end of the uterus, remains one of the most common cancers among women globally. The World Health Organization (WHO) reports that cervical cancer is a leading cause of cancer-related deaths in over 40 countries, highlighting its public health significance (Joynab et al., 2024; Fan et al., 2023). In 2020, around 0.6 million cases of cervical cancer were diagnosed worldwide (Sarhangi et al., 2024). However, traditional screening methods are prone to high false-positive rates due to human error, compromising the accuracy of diagnosis and early detection. To overcome these challenges, machine learning (ML) and deep learning (DL) based computer-aided diagnostic (CAD) techniques have been increasingly utilized for the automatic analysis of cervical cytology and colposcopy images. These AI-driven technologies are significantly improving diagnostic accuracy by automating image segmentation and classification, thus reducing reliance on manual analysis and minimizing human error ( Ahad et al., 2023; Mustofa et al., 2023; Bhowmik et al., 2024, Ahmed & Ahad, 2023; Emon & Ahad, 2024; Mustofa et al., 2024; Preanto et al., 2024; Mamun et al., 2023; Ahad et al., 2024; Mustofa et al., 2025; Preanto et al., 2024; Ahmed et al., 2023; Ahad et al., 2024; Bhowmik et al., 2023; Ahad et al., 2024; Mamun et al., 2025; Ahad et al., 2024, Ahad et al., 2024; Islam et al., 2024; Ahad et al., 2024; Ahmed et al., 2024; Ahad et al., 2024; Preanto et al., 2024; Preanto et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Ahad et al., 2024; Mamun et al., 2024; Emon et al., 2023; Emon et al., 2023; Biplob et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Ahad et al., 2023; Youneszade et al., 2023). This represents a critical advancement in cervical cancer diagnosis, enhancing the effectiveness of screening and improving early detection rates. One of the most promising AI models in cervical cancer detection is the Deep Convolutional Neural Network (DCNN). DCNNs are particularly effective in classifying early-stage cancer and detecting malignant cells by automatically identifying complex patterns in medical images (Kumar et al., 2024; Nirmala et al., 2024). The incorporation of transfer learning, using pretrained models like those from ImageNet, further enhances performance by allowing the models to adapt to smaller, task-specific datasets (Morid et al., 2021; Atasever et al., 2023). However, DCNNs present challenges related to their high computational demands and large memory footprint, which can limit their application, especially in low-resource settings (Pacal et al., 2024; Lau et al., 2024). To address this, researchers have developed lightweight CNN models that offer comparable performance while reducing the complexity of the network. These models are more practical for deployment in mobile devices and resource-constrained environments, providing an accessible solution for real-world clinical settings (Mathivanan et al., 2024; Gendy et al., 2024; Mehedi et al., 2024; Dogani et al., 2023). A key challenge in medical AI is the lack of interpretability of deep learning models. To improve model transparency and trust, Explainable AI (XAI) methods such as Grad-CAM, LIME, and SHAP have been developed. These methods provide insights into model decisions, enhancing clinician trust and supporting ethical decision-making in clinical practice (Arrieta et al., 2020; Albahri et al., 2023; Band et al., 2023). Despite notable advancements in CNN models for cervical cancer detection, several knowledge gaps persist: 1. Many studies focus on developing CNN models but lack attention to clinical concerns like model generalization and interpretability (Zhang et al., 2025; Rahman et al., 2023). 2. Well-known CNN architectures have not been rigorously evaluated for cervical cancer detection in real-world clinical environments (Sambyal et al., 2023). 3. Despite its success in other domains, transfer learning remains underutilized in cervical cancer CAD systems (Yeasmin et al., 2024). 4. High false-positive rates and low reliability limit the clinical applicability of existing CNN-based methods (Attallah, 2023; Painuli & Bhardwaj, 2022). Following the gaps, this study conducts three experiments, which make the following contributions and novelties of this study: 1. A lightweight CNN model, S-Net, is introduced for efficient cervical cancer detection from Pap smear images, optimizing both computational efficiency and accuracy. Additionally, six state-of-the-art CNN architectures (VGG19, ResNet152v2, SEResNet152, MobileNetV2, Xception, DenseNet201) were evaluated to identify the most effective model for this task 2. The study applies XAI techniques (LIME, SHAP, Grad-CAM) to improve model interpretability and transparency, while statistical analysis of pixel intensity evaluates classification outcomes (TP, FP, TN, FN). 3. pixel intensity as a key factor in image classification, investigating its role in cervical cancer detection and its impact on classification accuracy. 2. Related works Tan et al. (2024) and Khowaja et al. (2024) focused on using deep learning models for cervical cancer detection through Pap smear images. Tan et al. (2024) employed transfer learning with pre-trained CNN models for a seven-class classification task, achieving the best performance with DenseNet-201. Similarly, Khowaja et al. developed a framework that outperformed 25 other models, showing impressive accuracy on the Mendeley and SIPaKMeD datasets. Their work emphasizes the potential of these models to improve cervical cancer screening and reduce misdiagnosis. Deo et al. (2024) introduced CerviFormer, a Transformer-based model, which excelled at classifying Pap smear images, particularly for large-scale inputs. This method demonstrated competitive results on two public datasets, with accuracy rates of 96.67% and 94.57%, showing promise compared to traditional CNN-based methods. Additionally, Pacal (2024) utilized 106 deep learning models, including CNNs and vision transformers, to achieve remarkable accuracy on the SIPaKMeD and LBC datasets, surpassing existing models in both datasets. Mazumder et al. (2024) and Nour et al. (2024) combined CNNs like InceptionV3 and EfficientNetV2S to enhance cervical cancer detection accuracy. These models demonstrated accuracy rates of up to 99.98%, outperforming contemporary approaches. Hybrid methods, combining deep learning with handcrafted features, were also explored by Joseph et al. (2024) for early-stage cancer detection, yielding a high median recall of 99.5%. Fan et al. (2023) and Tomko et al. (2022) worked on optimizing models for better performance. Fan et al. introduced CAM-VT, a weakly supervised model using conjugated attention and visual transformers, which improved pap slide identification accuracy. Tomko et al.(2022) focused on optimizing input image sizes for models like EfficientNetB0, improving classification performance. Additionally, some studies, like those of Prasanthi Shandilya et al. (2024), employed lightweight CNNs (e.g., MobileNetV2 and ResNet-18) to achieve high accuracy while reducing computational complexity, making them more suitable for clinical applications. Wubineh et al. (2024) and Hussain et al. (2024) explored segmentation techniques and model interpretability. Wubineh et al. focused on cytoplasm and nuclei segmentation for cervical cancer detection, with EfficientNetB2 achieving impressive accuracy. Meanwhile, Hussain et al.(2024) used the Xception model for classifying cervical cancer, with AUC scores of up to 0.98. These studies highlight the importance of model transparency and accurate feature localization, improving the trustworthiness and effectiveness of AI-based cervical cancer diagnostic tools. Compared to the D-CNN and transfer learning models, the M-Net model is superior in accuracy. Both D-CNN and transfer learning methods are computationally expensive and heavy. The researcher put forth an M-Net model in this investigation. Shorter models, less convolution, fewer fully linked layers, and fewer layers comprise the M-Net model. This MNet approach reduced study time waste while improving accuracy for cervical cancer. Realizing the effectiveness of CNN in the detection and classification of cervical cancer, scholars such as Leung and Yue (2024), Luo et al. (2021), Azad et al. (2024), Uyanga et al. (2024), and Huang et al. (2025) have conducted extensive research in this area. They experimented with CNN and its variants to improve cervical cancer detection and classification. 3. Experimental method and materials This section describes the hardware specification, dataset description, S-Net model development, and training procedure for this study. This study presents a comprehensive cervical cancer detection pipeline using Pap smear images. Initially, the images undergo preprocessing before classification. The framework evaluates six state-of-the-art CNN models, VGG19, ResNet152V2, SE-ResNet152, DenseNet201, MobileNetV2, and Xception, through two experimental setups: (i) training from scratch and (ii) transfer learning using ImageNet weights. Additionally, a custom lightweight CNN model, S-Net, is introduced and assessed through 5-fold cross-validation and explainable AI (XAI) techniques like LIME, SHAP, and Grad-CAM. Figure 1: Experimental flow of the study. 3.1 Hardware Specification The experiments were conducted using Precision 7680 Workstation. The workstation is a 13thgeneration Intel® CoreTM i9-13950HX vPro with a Windows 11 Pro operating system and NVIDIA® RTXTM 3500 Ada Generation, 32 GB DDR5, and 1 TB SSD. Python (version 3.9) was chosen as the programming language as the version supported the TensorFlow-GPU, SHAP, and LIME generation. 3.2 Dataset description The dataset for the study was collected from a public repository and the dataset is Obuli Sai Naren. (2022) Multi Cancer Dataset. Figure 1 displays samples of images used in the study. This dataset contains 25000 Pap smear (Papanicolaou smear) microscopic images and is classified into five (5) classes: Cervix_Dyskeratotic (Dyk), Cervix_Koilocytotic (Koc), Cervix_Metaplastic (Mep), Cervix_Parabasal (Pab) and Cervix_Superficial Moderate (Sfi). Cervical Cancer images were taken from the Obuli Sai Naren. (2022). The images were captured using pap smears and stored in JPG format. Dyk Koc Mep Pab Sfi Figure 2: Example of 5 cervical cancer classes 3.3 Image Augmentation This study used position augmentation, such as scaling, cropping, flipping, and rotation, and color augmentation, such as brightness, contrast, and saturation, was deployed. Random rotation from -15 to 15 degrees, rotations of multiples of 90 degrees at random, random distortion, shear transformation, vertical flip, horizontal flip, skewing, and intensity transformation were also used in the data augmentation process. Dyk Koc Mep Pab Sfi Figure 3. Image after augmentation 3.4 Model Development Inspired by the architectural taxonomy of CNNs proposed by Khan et al. (2020), we selected six diverse and high-performing convolutional neural networks (CNNs): VGG19, ResNet152, SE-ResNet152, DenseNet201, Xception, and MobileNetV2 to evaluate their effectiveness in classifying cervical cancer cytology images. Each model reflects a unique architectural class: spatial exploitation (VGG19), depth-based learning (ResNet152), attention mechanisms (SEResNet152), multi-path connectivity (DenseNet201), width-based design (Xception), and efficient lightweight architecture (MobileNetV2). To address the challenge of limited annotated medical data, we employed transfer learning, fine-tuning each model pre-trained on ImageNet. The experimental results demonstrate that advanced architectures, particularly those integrating attention and residual connections such as SE-ResNet152 (99.81%) and MobileNetV2 (99.53%), significantly outperform older architectures like VGG19 (88.58%). These findings confirm that transfer learning, when combined with modern CNN designs, substantially improves diagnostic accuracy in cytological image analysis. Figure 4: Classification of Deep CNN Architectures. 4. Experimental description and results This study conducted three experiments to detect and classify cervical cancer pap smear images. Firstly, a lightweight CNN S-Net; secondly, six SOTA CNN's Multipath (DenseNet201), Depth-based (ResNet152), width-based multi-connection (Xception), depthwise separable convolutions (MobileNetV2), spatial exploitation based (VGG19); thirdly, transfer learning of 6 SOTA CNN's a Multi path-depth-Width based (DenseNet201- Xception) and Multi path-depth-Spatial exploitation (ResNet152-VGG19). The experimental processes and results are described below. 4.1 Experiment 1: S-Net development process and results The S-Net model, training process, and results and discussed below: 4.1.1 S-Net model development The S-Net model, designed for cervical cancer detection in Pap smear images, begins with a 2D convolutional layer featuring 48 filters, followed by max-pooling to reduce spatial dimensions. The architecture deepens progressively with convolutional layers containing 128, 192, and 512 filters, interspersed with pooling layers for hierarchical feature extraction. A fully connected layer with 1024 neurons is employed, followed by dropout regularisation, culminating in a dense output layer with five neurons for multi-class classification (Dyk, Koc, Mep, Pab, Sfi). With a total of 2,020,549 trainable parameters, the model is optimised for highcapacity learning and multi-class image categorisation. Figure 5: S-Net model visualisation 4.1.2 Model Summary Table 1: Hyperparameters Layer Output Shape Parameters Description Conv2D (None, 52, 52, 48) 3,648 Convolutional layer with 48 filters, kernel size (3,3) MaxPooling2D (None, 17, 17, 48) 0 Max pooling operation (2x2) Conv2D (None, 17, 17, 128) 55,424 Convolutional layer with 128 filters, kernel size (3,3) MaxPooling2D (None, 5, 5, 128) 0 Max pooling operation (2x2) Conv2D (None, 5, 5, 192) 221,376 Convolutional layer with 192 filters, kernel size (3,3) Conv2D (None, 5, 5, 192) 331,968 Convolutional layer with 192 filters, kernel size (3,3) Conv2D (None, 5, 5, 128) 221,312 Convolutional layer with 128 filters, kernel size (3,3) MaxPooling2D (None, 1, 1, 128) 0 Max pooling operation (5x5) Flatten (None, 128) 0 Flatten the output for input into dense layers Dense (None, 1024) 132,096 Fully connected layer with 1024 units Dropout (None, 1024) 0 Dropout for regularization (no parameters) Dense (None, 1024) 1,049,600 Fully connected layer with 1024 units Dense (None, 5) 5,125 Output layer with 5 units (classification output) Total Parameters 2,020,549 Total number of trainable parameters Trainable Parameters 2,020,549 All parameters in the model are trainable Non-trainable Parameters 0 No non-trainable parameters 4.1.3 Training Setup The S-Net model was trained using K-fold cross-validation, with K set to 5. Each fold underwent 50 training epochs, with a fixed batch size of 16. The Adam optimization algorithm was used for model optimization. The algorithm ensured the adaptability of the model weights based on the calculated gradients, thereby enhancing performance and accuracy in classifying cervical cancer. The loss function was sparse categorical cross-entropy. Early stopping and a learning rate scheduler were applied during training. The training hyperparameters are provided in Table 1. Table 1: Hyperparameters of training Parameters Epochs = 50 Batch size = 16 Image size = (64, 64, 3) Learning rate = 1.0000e-04 K_folds = 5 Optimizer = Adam(learning_rate=LEARNING_RATE) Loss = SparseCategoricalCrossentropy(from_logits=True) Early stopping = EarlyStopping(monitor='val_accuracy', patience=10, verbose=1, restore_best_weights=True) Learning_rate_scheduler = LearningRateScheduler(lambda epoch: LEARNING_RATE * 0.1 ** (epoch // 10)) Callbacks = [early_stopping, lr_scheduler] 4.1.4 Results of the S-Net The S-Net model demonstrated high classification accuracy across all metrics (precision, recall, F1-score, and support). In Fold 1, minor misclassifications occurred with "Mep" and "Koc" samples, but overall accuracy was strong. Fold 2 showed improved precision, with no misclassification in "Koc," "Pab," or "Sfi" classes. Folds 3 to 5 exhibited near-perfect classification, minimizing errors. The confusion matrices confirm the model's robustness and suitability for clinical diagnostics. Table 2: Fold-Wise Classification report with epochs of S-Net Fold Class Precision Recall F1-Score Support cervix_Dyk 1.00 1.00 1.00 1548 1 cervix_Koc 1.00 1.00 1.00 1513 cervix_Mep 1.00 1.00 1.00 1504 cervix_Pab 1.00 1.00 1.00 1594 cervix_Sfi 1.00 1.00 1.00 1525 Accuracy 1.00 7684 Macro Avg 1.00 1.00 1.00 7684 Weighted Avg 1.00 1.00 1.00 7684 2 cervix_Dyk 1.00 1.00 1.00 1530 cervix_Koc 1.00 1.00 1.00 1599 cervix_Mep 1.00 1.00 1.00 1555 cervix_Pab 1.00 1.00 1.00 1481 cervix_Sfi 1.00 1.00 1.00 1518 Accuracy 1.00 7683 Macro Avg 1.00 1.00 1.00 7683 Weighted Avg 1.00 1.00 1.00 7683 3 cervix_Dyk 1.00 1.00 1.00 1541 cervix_Koc 1.00 1.00 1.00 1536 cervix_Mep 1.00 1.00 1.00 1496 cervix_Pab 1.00 1.00 1.00 1550 cervix_Sfi 1.00 1.00 1.00 1560 Accuracy 1.00 7683 Macro Avg 1.00 1.00 1.00 7683 Weighted Avg 1.00 1.00 1.00 7683 4 cervix_Dyk 1.00 1.00 1.00 1549 cervix_Koc 1.00 1.00 1.00 1491 cervix_Mep 1.00 1.00 1.00 1571 cervix_Pab 1.00 1.00 1.00 1551 cervix_Sfi 1.00 1.00 1.00 1521 Accuracy 1.00 7683 Macro Avg 1.00 1.00 1.00 7683 Weighted Avg 1.00 1.00 1.00 7683 cervix_Dyk 0.99 0.99 0.99 1810 5 cervix_Dyk 1.00 1.00 1.00 1562 cervix_Koc 1.00 1.00 1.00 1525 cervix_Mep 1.00 1.00 1.00 1531 cervix_Pab 1.00 1.00 1.00 1520 cervix_Sfi 1.00 1.00 1.00 1545 Accuracy 1.00 7683 Macro Avg 1.00 1.00 1.00 7683 Weighted Avg 1.00 1.00 1.00 7683 Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi). Dyk 1548 0 0 0 0 Koc 2 1511 0 0 0 Mep 2 0 1502 0 0 Pab 0 0 0 1594 0 Sfi 0 0 0 0 1525 Dyk Koc Mep Pab Sfi Confusion Matrix Fold 1 Dyk 1530 0 0 0 0 Koc 0 1599 0 0 0 Mep 0 0 1555 0 0 Pab 0 0 0 1581 0 Sfi 0 0 0 0 1518 Dyk Koc Mep Pab Sfi Confusion Matrix Fold 2 Dyk 1541 0 0 0 0 Koc 0 1536 0 0 0 Mep 0 0 1496 0 0 Pab 0 0 0 1550 0 Sfi 0 0 0 0 1560 Dyk Koc Mep Pab Sfi Confusion Matrix Fold 3 Dyk 1549 0 0 0 0 Koc 0 1491 0 0 0 Mep 0 0 1571 0 0 Pab 0 0 0 1551 0 Sfi 0 0 0 0 1521 Dyk Koc Mep Pab Sfi Confusion Matrix Fold 4 Dyk 1562 0 0 0 0 Koc 0 1525 0 0 0 Mep 0 0 1531 0 0 Pab 0 0 0 1520 0 Sfi 0 0 0 0 1545 Dyk Koc Mep Pab Sfi Confusion Matrix Fold 5 Figure 6: Fold-wise confusion matrix of S-Net Figure 6 Confusion matrices from five-fold cross-validation reveal the strong classification ability of the S-Net model across five cervical cell types. Each fold shows a clear diagonal dominance, indicating high prediction accuracy. Minor misclassifications are observed in Fold 1, primarily between morphologically similar classes. Folds 2 to 5 exhibit near-perfect classification performance. These results highlight S-Net's robustness and consistency in multi-class cervical cell classification. Fold 1-ROC Curve Fold 2-ROC Curve Fold 3-ROC Curve Fold 4-ROC Curve Fold 5-ROC Curve Figure 7: Fold-wise confusion matrix with Epochs and Accuracy Figure 7 ROC curves from five-fold cross-validation demonstrate S-Net's strong classification performance. In Fold 1, the model effectively minimizes false positives while maintaining high true positive rates. Fold 2 shows improved precision with tighter ROC curves. Folds 3 to 5 confirm the model's consistent and robust classification accuracy. Overall, these results emphasize the reliability of S-Net in cervical cancer detection. Class Precision Recall F1-Score Support cervix_Dyk 1.00 1.00 1.00 7730 cervix_Koc 1.00 1.00 1.00 7664 cervix_Mep 1.00 1.00 1.00 7657 cervix_Pab 1.00 1.00 1.00 7696 cervix_Sfi 1.00 1.00 1.00 7669 Accuracy 1.00 38416 Macro Avg 1.00 1.00 1.00 38416 Weighted Avg 1.00 1.00 1.00 38416 Dyk Koc Mep Pab Sfi Dyk 7730 0 0 0 0 Koc 2 7662 0 0 0 Mep 2 0 7655 0 0 Pab 0 0 0 7696 0 Sfi 0 0 0 0 7669 Figure 8: Combined confusion matrix of M-Net Figure 8 The combined classification metrics for the S-Net model demonstrate exceptional performance, with an overall accuracy of 99.98%. Precision, recall, and F1-scores range from 99.97% to 99.99%. Class-specific results indicate strong performance, with only minor misclassifications in certain folds. Figure 9: Fold-Wise Classification Report with Epochs and Accuracy Figure 9 The S-Net model shows minimal classification errors and high discriminatory power, with low false positives. Accuracy curves rise steadily, and loss curves decline consistently, confirming the model's robustness, efficiency, and reliability for cervical cancer classification. 4.2 Experiment 2- Performance of the Original CNN Network The performances of six original CNN architectures VGG19, ResNet152v2, SE-ResNet152, DenseNet201, Xception, and MobileNetV2, are presented in this section. Among all models, the highest accuracy was achieved, and the overall performance was excellent. Table 3: Accuracy for classification of individual CNN networks in detecting cervical cancer. The table reports the Precision, Recall, F1-Score, and Support (n) for each network, based on the number of images (n = numbers). Dyk Koc Mep Pab Sfi Model Accuracy VGG19 Precision 100% 100% 100% 100% 100% 99.97% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1485 1477 1491 1487 1484 ResNet152 Precision 100% 100% 100% 100% 100% 99.99% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1484 1481 1489 1480 1490 Seresnet152 Precision 100% 100% 100% 100% 100% 99.99% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1488 1491 1481 1482 1483 Densenet201 Precision 100% 100% 100% 100% 100% 100% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1485 1484 1485 1488 1482 Xception Precision 100% 100% 100% 100% 100% 100% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1488 1483 1481 1485 1487 MobilenetV2 Precision 100% 100% 100% 100% 100% 100% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1485 1482 1488 1486 1483 Table 3 highlights the precision values for each architecture evaluated on the test dataset. The VGG19, ResNet152v2, DenseNet201, SE-ResNet152, Xception, and MobileNetV2 architectures delivered outstanding results. From the table, it is evident that the DenseNet201, Xception, and MobileNetV2 models excelled in correctly detecting and classifying cervical cancer compared to the other architectures. These models provided highly accurate classification and demonstrated exceptional performance in detecting cervical cancer. Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi). Dyk 1485 0 0 0 0 Koc 0 1475 0 0 0 Mep 0 0 1491 0 0 Pab 0 0 0 1487 0 Sfi 0 2 0 0 1484 Dyk Koc Mep Pab Sfi VGG19 Dyk 1485 0 0 0 0 Koc 0 1484 0 0 0 Mep 0 0 1485 0 0 Pab 0 0 0 1488 0 Sfi 0 0 0 0 1482 Dyk Koc Mep Pab Sfi DenseNet201 Dyk 1484 0 0 0 0 Koc 0 1481 1 0 0 Mep 0 0 1488 0 0 Pab 0 0 0 1480 0 Sfi 0 0 0 0 1490 Dyk Koc Mep Pab Sfi ResNet152v2 Dyk 1488 0 0 0 0 Koc 0 1490 0 0 0 Mep 0 1 1481 0 0 Pab 0 0 0 1482 0 Sfi 0 0 0 0 1482 Dyk Koc Mep Pab Sfi SeresNet152 Dyk 1488 0 0 0 0 Koc 0 1483 0 0 0 Mep 0 0 1481 0 0 Pab 0 0 0 1485 0 Sfi 0 0 0 0 1487 Dyk Koc Mep Pab Sfi Xception Dyk 1485 0 0 0 0 Koc 0 1482 0 0 0 Mep 0 0 1488 0 0 Pab 0 0 0 1486 0 Sfi 0 0 0 0 1483 Dyk Koc Mep Pab Sfi MobileNetV2 Figure 10: Six confusion matrices of original CNNs Figure 10 shows confusion matrices for six CNNs: VGG19, DenseNet201, ResNet152v2, SEResNet152, Xception, and MobileNetV2. DenseNet201, Xception, and MobileNetV2 achieved better classification accuracy, correctly classifying 919 and 977 images, as noted in Table 3. Figure 11: The training and validation accuracy of the original CNNs. Figure 11 presents the training and validation accuracy curves for six CNNs: VGG19, ResNet152V2, DenseNet201, SE-ResNet152, Xception, and MobileNetV2. DenseNet201, Xception, and MobileNetV2 show stable accuracy with minimal fluctuations, while VGG19 converges quickly and maintains high accuracy. ResNet152V2 and SE-ResNet152 exhibit occasional drops in validation accuracy but remain strong overall. VGG19 ResNet152V2 DenseNet201 Seresnet152 Xception MobileNetV2 Figure 12: Six roc curves of original CNNs. Figure 12 The ROC curves for various CNN architectures (VGG19, ResNet152V2, DenseNet201, SE-ResNet152, Xception, and MobileNetV2) show an AUC of 1.00 for all models, indicating perfect classification performance. Each model effectively distinguishes between classes with high confidence. 4.3 Experiment 4: Transfer Learning CNN Network Accuracy in Detecting Cervical Cancer Table 4. Transfer learning CNN network accuracy in detecting cervical cancer. Dyk Koc Mep Pab Sfi Model Accuracy VGG19 Precision 87% 85% 83% 94% 94% 88.58% Recall 82% 80% 88% 99% 94% F1-score 84% 82% 86% 97% 94% Support (N) 1484 1480 1487 1488 1485 Precision 99% 99% 99% 100% 99% VGG19 ResNet152V2 DenseNet201 Seresnet152 Xception MobileNetV2 ResNet152 Recall 99% 98% 99% 100% 100% 99.16% F1-score 99% 98% 99% 100% 99% Support (N) 1486 1488 1478 1482 1490 Seresnet152 Precision 100% 100% 100% 100% 100% 99.81% Recall 100% 100% 100% 100% 100% F1-score 100% 100% 100% 100% 100% Support (N) 1484 1485 1482 1490 1483 Densenet201 Precision 99% 99% 99% 100% 100% 99.45% Recall 99% 99% 100% 100% 100% F1-score 99% 99% 99% 100% 100% Support (N) 1482 1484 1482 1490 1486 Xception Precision 98% 97% 98% 100% 99% 98.31% Recall 97% 97% 98% 100% 99% F1-score 98% 97% 98% 100% 99% Support (N) 1483 1481 1485 1487 1488 MobilenetV2 Precision 100% 99% 99% 100% 100% 99.53% Recall 99% 99% 99% 100% 100% F1-score 99% 99% 99% 100% 100% Support (N) 1483 1484 1489 1482 1486 The performance of six transfer learning CNN architectures, VGG19, ResNet152V2, SEResNet152, DenseNet201, Xception, and MobileNetV2, is summarised in Table 4. DenseNet201 achieved the highest accuracy (99.45%), while VGG19 showed the lowest (88.58%). Transfer learning models generally demonstrated lower accuracy compared to their original CNN counterparts. Table 4 presents the Precision, Recall, and F1-score results of CNN networks with transfer learning. DenseNet201 achieved the highest precision (99%), while SE-ResNet152 showed the best overall performance with 99.81% accuracy. VGG19 had the lowest precision (87%). Dyskeratotic=(Dyk) Koilocytotic=(Koc) Metaplastic= (Mep) Parabasal=(Pab) Superficial Moderate=(Sfi). Dyk 1212 114 62 10 0 Koc 116 1190 57 3 41 Mep 113 101 1313 4 45 Pab 32 28 25 1471 3 Sfi 11 47 30 0 1396 Dyk Koc Mep Pab Sfi VGG19 Dyk 1467 9 1 0 0 Koc 8 1467 3 0 0 Mep 7 5 1478 0 0 Pab 0 0 0 1490 0 Sfi 0 3 0 0 1486 Dyk Koc Mep Pab Sfi DenseNet201 Dyk 1476 7 4 0 2 Koc 5 1458 7 0 4 Mep 4 14 1462 0 1 Pab 0 1 1 1482 0 Sfi 1 8 4 0 1483 Dyk Koc Mep Pab Sfi ResNet152v2 Dyk 1477 0 2 0 0 Koc 2 1480 2 0 0 Mep 5 1 1477 0 1 Pab 0 0 0 1490 0 Sfi 0 4 1 0 1482 Dyk Koc Mep Pab Sfi SeresNet152 Dyk 1441 14 15 0 0 Koc 24 1436 18 0 4 Mep 14 16 1450 0 7 Pab 1 3 1 1487 1 Sfi 3 12 1 0 1476 Dyk Koc Mep Pab Sfi Xception Dyk 1472 3 3 0 0 Koc 7 1471 5 0 2 Mep 4 9 1479 0 0 Pab 0 0 0 1482 0 Sfi 0 1 2 0 1484 Dyk Koc Mep Pab Sfi MobileNetV2 Figure 13: Six confusion matrices of Transfer learning CNNs. Figure 13 shows the confusion matrices of transfer learning CNNs. DenseNet201 and SEResNet152 achieved the highest accuracy with minimal misclassification, while VGG19 had the most errors, especially in distinguishing Koc and Dyk cells. Figure 14: The Transfer Learning version's training and validation accuracy. Figure 14 shows the training and validation accuracy of various Transfer Learning models. DenseNet201 and SE-ResNet152 demonstrate the highest accuracy with quick convergence and stability, while ResNet152V2 shows slightly lower accuracy compared to the others. VGG19 ResNet152V2 DenseNet201 Seresnet152 Xception MobileNetV2 Figure 15: The Transfer Learning version's training and validation accuracy. Figure 15 presents the ROC curves for various Transfer Learning models in multi-class classification. DenseNet201, SENet152, Xception, and MobileNetV2 show perfect classification performance with an AUC of 1.00 across all classes. VGG19, however, has a slightly lower AUC for one class. 4.4 Result of the XAI This study uses three (3) explainable AI techniques, LIME, SHAP, and Grad-CAM, to generate local and global explanations for the S-NET (the developed CNN) predictions on the validation and test sets. LIME generates an interpretable model by training a local linear model around the prediction point, while SHAP provides a unified framework for feature importance estimation. Furthermore, we conducted statistical analysis on the correctly classified, misclassified and true positive-true negative-false positive-false negative images. 4.4.1 Visualization using LIME and SHAP LIME and SHAP are used to interpret the S-Net cervical cancer model. LIME breaks the input images into super-pixel regions and approximates the model's behavior using a simple model like linear regression. The resulting heatmap highlights regions that positively or negatively VGG19 ResNet152V2 DenseNet201 Seresnet152 Xception MobileNetV2 impact the prediction, with red areas indicating strong influence and blue areas decreasing the likelihood of a class. SHAP utilizes a pre-trained cervical cancer model to compute Shapley values, which assess the contribution of each pixel or region to the predicted class. By perturbing the input image and predicting the outputs for each variation, SHAP provides a quantitative explanation of the model's decisions. Figure 16: LIME partition explainer of Pap smear images Figure 17: SHAP explainer of Pap Smear images 4.4.2 Grad-CAM analysis of correctly/incorrectly classified cervical cancer modalities Grad-CAM is a technique that visualizes which regions of an image contribute most to the SNet model's prediction by generating heatmaps. It computes the gradients of the predicted class concerning the feature maps in the model's last convolutional layer. These gradients highlight the importance of each feature map, which are then weighted and combined to create a class activation heatmap. The heatmap is overlaid on the original image, with red regions indicating higher attention and blue regions indicating less influence on the prediction. This technique enhances the transparency and interpretability of the S-Net model in cervical cancer classification, allowing for a better understanding of how the model focuses on specific areas. Misclassification examples are displayed in Figure 17. Figure 18: Examples of misclassified images Figure 19: GRAD-CAM view of misclassified images Figure 19 presents Grad-CAM visualizations revealing misclassified cervical cancer cases. Red regions indicate high model attention, while blue regions show less focus. Misclassifications occurred when S-Net focused on irrelevant areas, such as predicting "cervix_Sfi" instead of "cervix_Koc." This suggests challenges in identifying cancer-specific features accurately. Figure 20: Grad-CAM view of correctly classified images Figure 20 presents Grad-CAM visualizations of correctly classified cervical cell images. The highlighted regions (in red) indicate where the model focuses during classification, aligning well with class-specific features. This confirms the model's ability to extract and utilize meaningful visual cues. However, occasional focus on irrelevant regions (in blue) suggests opportunities to further refine spatial attention and enhance precision. 4.4.3 Pixel intensity of correctly/incorrectly Cervical Cancer modalities Pixel intensity highlights areas that contribute most to the CNN's decision. The original image shows key features, while the Gradient × Input explanation reveals the influence of each part, with brighter areas indicating higher influence. Misalignment between the highlighted regions and expected features suggests the model may focus on irrelevant areas, impacting accurate classification. Figure 21: Grad-CAM view of pixel intensity of misclassified images Figure 22: Grad-CAM view of pixel intensity for correctly classified images 4.4.4 Pixel Intensity Analysis Correct/misclassified images This study compares the pixel intensity distributions between correctly classified and misclassified cervical cancer images. The analysis reveals that correctly classified images have significantly higher mean and median pixel intensities (mean = 61.02, median = 67.24) compared to misclassified images (mean = 44.94, median = 49.33). The interquartile range (IQR) is also larger for the correctly classified images, indicating more variability in their pixel intensities. In contrast, misclassified images exhibit lower variability, suggesting that the model struggles to classify images with less distinct or variable features. This emphasizes the crucial role of clear and distinct pixel intensity contrasts for accurate cervical cancer detection, as the model relies on such intensity patterns to make accurate predictions. Figure 23: Distribution of Pixel intensity of misclassified/ correctly classified images. Furthermore, we use Significance Level (α) = 0.05 as the threshold to determine statistical significance. If the p-value is less than α, we reject the null hypothesis and conclude a significant difference between the two classes. If the p-value exceeds α, we fail to reject the null hypothesis, indicating no significant difference. Result: Independent t-test p-value: 0.0000 Mann-Whitney U test p-value: 0.0000 Independent t-test: Reject H0. There is a significant difference between misclassified_100 and correctly_classified_100. Table 5: Descriptive Statistics for Pixel Intensity of correctly classified and misclassified images. Statistic Misclassified Correctly Classified Count 4096.000000 4096.000000 Mean 44.938873 61.021790 Std 29.812426 36.996021 Min 0.416000 4.292000 25% 12.529000 22.864000 50% 49.338001 67.242001 75% 75.280998 98.147999 Max 85.456001 112.676003 Interquartile Range 62.751998 75.283998 4.4.5 Pixel intensity of TP-FP-TN-FN Cervical Cancer modalities The analysis of pixel intensity distributions across four categories (True Positive, True Negative, False Positive, and False Negative) reveals key insights. False Negatives (FN) have the highest mean intensity (62.04) and the broadest range, indicating errors arise from highly variable intensities. True Positives (TP) and True Negatives (TN) show more consistent distributions, suggesting better performance when intensities align with patterns. False Positives (FP) exhibit a narrower intensity range, with errors occurring in areas with less contrast. Overall, the model performs best with mid-range intensities and struggles with extreme or variable intensities, especially in the FN category. Figure 24: Distribution of Pixel intensity of TP-TN-FP-FN 4.4.6 Pixel intercity comparison between TP-FP-TN-FN Significance Level (α): We use α = 0.05 as the threshold to determine statistical significance. If the p-value is less than α, we reject the null hypothesis and conclude that there is a significant difference between the two classes. If the p-value exceeds α, we fail to reject the null hypothesis, indicating no significant difference. Accepted/rejected Hypotheses: Independent t-test and Mann-Whitney U test: Reject H10: The pixel intensity distributions of True positive and True negative are not significantly different. Independent t-test and Mann-Whitney U test: Reject H20: The pixel intensity distributions of True positive and False positive are not significantly different. Independent t-test and Mann-Whitney U test: Reject H30: The pixel intensity distributions of True positive and False negative are not significantly different. Independent t-test and Mann-Whitney U test: Reject H40: The pixel intensity distributions of True negative and False positive are not significantly different. Independent t-test and Mann-Whitney U test: Reject H50: The pixel intensity distributions of True negative and False negative are not significantly different. Independent t-test and Mann-Whitney U test: Reject H60: The pixel intensity distributions of False positive and False negative are not significantly different. Descriptive Statistics for Pixel Intensity: Table 6: Descriptive Statistics for Pixel Intensity of TP-TN-FP-FN images Statistic True Positive True Negative False Positive False Negative Count 3136.000000 3136.000000 3136.000000 3136.000000 Mean 152.820465 183.763855 170.796997 158.757401 Std 21.152983 17.148066 15.314829 22.917040 Min 89.400002 125.800003 109.000000 84.800003 25% 138.199997 171.600006 161.199997 143.199997 50% 153.000000 183.299995 173.000000 159.600006 75% 168.000000 196.399994 182.850002 175.800003 Max 207.000000 219.199997 200.199997 205.399994 Interquartile Range 29.800003 24.799988 21.650005 32.600006 5. Discussion In this study, a lightweight convolutional neural network (CNN) model, termed S-Net, was developed with fewer layers and learnable parameters to efficiently detect cervical cancer in Pap smear images. The results demonstrate that the proposed S-Net model effectively classifies cervical cancer cells and achieves an impressive accuracy of 99.99%, outperforming state-ofthe-art (SOTA) CNN models, transfer learning techniques, and ensemble methods. For training and evaluation, the S-Net model utilized three Python data generation techniques: TensorFlow/Keras's flow_from_directory, flow_from_dataframe, and flow (used with NumPy arrays). Among these methods, flow_from_directory yielded the best performance, as it efficiently loads images batch-by-batch, minimizing memory consumption compared to flow_from_dataframe and flow, which load the entire dataset into RAM. Furthermore, the Adam optimizer provided consistent and stable performance across all experiments, while the RMSprop optimizer showed fluctuations depending on the model, highlighting the significance of selecting the appropriate optimizer based on the specific convergence behavior of the architecture. Figure 25: Accuracy comparison among individual CNN, transfer learning, and Mnet models This research presents three (3) experiments for detecting and classifying cervical cancer images using well-known deep learning architectures and identifying the most promising model. In the first experiment, six CNN models, VGG19, ResNet152V2, SE-ResNet152, DenseNet201, Xception, and MobileNetV2, were compared on 25,000 microscopic cervical cancer images across five (5) classes: Dyk, Koc, Mep, Pab, and Sfi. Figure 26: Accuracy comparison among individual CNN, transfer learning, and ensemble models. 0 10000000 20000000 30000000 40000000 50000000 60000000 70000000 T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... T O T A L P A R A M S T R A I N A B L E ... N O N - ... V G G 1 9 R E S N E T 1 5 2 V 2 D E N S E N E T 2 0 1 S E R E S N E T 1 5 2X C E P T I O NM O B I L E N E T V 2 S - N E T PARAMETER COMPARISON OF CNNS & S-NET 99.97%99.99% 100.00% 99.99% 100.00% 100.00% 88.58% 99.16%99.45%99.81%98.31%99.53% 99.74%99.95% 99.86% 82.00% 84.00% 86.00% 88.00% 90.00% 92.00% 94.00% 96.00% 98.00% 100.00% 102.00% COMPARISON OF MODELS APPILED IN THE STUDY Series1 Significant studies have highlighted that transfer learning improves classification accuracy and reduces training time compared to conventional CNN for many classification tasks (Ju et al., 2022; Karimi et al., 2021). For example, Mehmood et al. (2024) proposed and reported 95.07% binary classification performance; Emam et al. (2024) reported breast cancer diagnosis using an optimized transfer learning technique and improved the accuracy. However, in our study, transfer learning received negative results. The accuracy has decreased compared to the main CNN. The findings of negative transfer learning support the study (Rangarajan & Raja, 2020; Mohanty et al., 2016; Barbedo, 2018) that in the case the input image differs from the trained data of the Imagenet Dataset, the accuracy is likely to be decreased. The effect of background noise and the application of different augmentation techniques separately with the test sets resulted in a drop in performance. However, in the case of the original CNN, the model was trained and tested using similar input, and the prediction capabilities were increased in unseen data. Moreover, although CNN can learn features irrespective of the input data, this study's limited number of datasets is likely a factor influencing the prediction capability. Our view is also supported by (Barbedo, 2018), who suggested that increasing the dataset size may improve transfer learning performance when the input image is modified using augmentation. Most importantly, this study applied three (3) XAI techniques, LIME, SHAP, and Grad-CAM, to generate explanations for CNN detection and classification. The XAI expands our understanding of S-NET's decision-making process. XAIs used in this study enhance the interpretability of S-Net. LIME provides model-agnostic explanations by approximating the local decision boundary of a model with an interpretable surrogate, highlighting the regions of the pap smear images that contribute most to the model's predictions. Based on cooperative game theory, SHAP offers consistent and theoretically sound explanations by assigning Shapley values to input features, thereby quantifying their contribution to the output with high fidelity. Grad-CAM, designed explicitly for CNNs, generates class-discriminative visualizations by leveraging gradients from the final convolutional layers, effectively localizing regions in the image that influence a specific class prediction. In summary, XAI, coupled with S-Net, enhances the trust and understanding of CNNs by transparentizing their decision-making processes. Lastly, pixel intensity was analyzed using the statistical method of true classification, misclassification, true positive, false positive, true negative, and false negative classification. This study contributes to understanding the growing literature on using explainable AI to improve medical image analysis and diagnosis. It provides insights into the interpretability and transparency of CNN for Cervical cancer modalities. 6. Conclusion and future work This study evaluates deep learning models for cervical cancer detection, conducting three sequential experiments. The first experiment tested six CNN architectures-VGG19, ResNet152V2, DenseNet201, ResNeXt101, SE-ResNet152, and MobileNetV2-on a dataset of 25,000 Pap smear images. The second experiment applied transfer learning to these models. The third introduced the novel lightweight CNN, S-Net, and integrated it with Explainable AI (XAI) methods: LIME, SHAP, and Grad-CAM. The results demonstrate that S-Net outperformed all baseline models, achieving superior accuracy, precision, recall, and F1-score. XAI techniques improved interpretability, highlighting critical regions influencing predictions. A pixel intensity analysis revealed significant differences between correctly and incorrectly classified samples, emphasizing the role of intensity in model performance. However, the study acknowledged limitations, including the use of a secondary dataset and the exclusive focus on Pap smear images, limiting generalizability. Future work should explore other imaging modalities like colposcopy and histopathology to enhance clinical applicability. While transfer learning did not yield optimal results, further research on lightweight CNNs may be beneficial. Clinical validation with expert input is essential for real-world deployment. In conclusion, this research underscores the potential of CNNs, particularly S-Net, in automating cervical cancer detection, offering a significant contribution toward reliable and interpretable AI-based medical diagnostics References Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2023). Enhancing Tea Leaf Disease Detection Through Customized Vision Transformer and Hyperparameters Optimisation. Available at SSRN, 4940688. https://doi.org/ Ahad, M. T., Bhowmik, A. C., Emon, Y. R., & Ahmed, F. (2024). A Customized Vision Transformer for Accurate Detection and Classification of Java Plum Leaf Disease. Available at SSRN, 4829650. https://doi.org/2 Ahad, M. T., Emon, Y. R., & Mustofa, S. (2024). Data of history: An open-source and multiformat wall image dataset of Panam city, a historical place. Data in Brief, 56, 110774. https://doi.org/6 Ahad, M. T., Li, Y., Song, B., & Bhuiyan, T. (2023). Comparison of CNN-based deep learning architectures for rice diseases classification. Artificial Intelligence in Agriculture, 9, 22-35. https://doi.org/231 Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). End User Interface Design of Mobile-Based Fish Disease Detection to Assist Fish Farmers. Available at SSRN, 4980536. https://doi.org/ Ahad, M. T., Mamun, S. B., Chowdhury, S., Song, B., & Li, Y. (2023). Fishdoc: A Mobile-Based Fish Disease Detection System Using Yolov8. Available at SSRN, 4899189. https://doi.org/ Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood cancer detection and classification using Convolutional Neural Network. arXiv preprint, . https://doi.org/2 Ahad, M. T., Mamun, S. B., Mustofa, S., Song, B., & Li, Y. (2024). A comprehensive study on blood cancer detection and classification using Convolutional Neural Network. arXiv e-prints, arXiv: 2409.06689. https://doi.org/ Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer Detection. arXiv preprint, . https://doi.org/4 Ahad, M. T., Mustofa, S., Ahmed, F., Emon, Y. R., & Anu, A. D. (2024). A study on Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer Detection. arXiv e-prints, arXiv: 2409.06699. https://doi.org/ Ahad, M. T., Mustofa, S., Rahman, M. S., Song, B., & Li, Y. (2023). A Comprehensive Study on Deep Feature Extraction to Detect and Classify Soursop Leaf Disease. Available at SSRN, 4845099. https://doi.org/ Ahad, M. T., Mustofa, S., Sarker, A., & Emon, Y. R. (2024). Bdpapayaleaf: A dataset of papaya leaf for disease detection, classification, and analysis. Classification, and Analysis. https://doi.org/3 Ahad, M. T., Payel, I. J., Song, B., & Li, Y. (2024). DVS: Blood cancer detection using novel CNN-based ensemble approach. arXiv preprint, . https://doi.org/1 Ahad, M. T., Preanto, S. A., Song, B., & Li, Y. (2023). Gan-Generated Spectrogram Detection and Classification for Heartbeat Classification Using a Vision Transformer. Available at SSRN, 4892869. https://doi.org/ Ahad, M. T., Song, B., & Li, Y. (2024). A Comparison of Convolutional Neural Network, Transfer Learning and Ensemble Technique for Brain Tumour Detection of Classification. Transfer Learning and Ensemble Technique for Brain Tumour Detection of... https://doi.org/1 Ahmed, F., & Ahad, M. T. (2023). Machine learning-based tea leaf disease detection: A comprehensive review. arXiv preprint, . https://doi.org/22 Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2023). A fuzzybased vision transformer model for tea leaf disease detection. International Conference on Trends in Computational and Cognitive... https://doi.org/5 Ahmed, F., Emon, Y. R., Ahad, M. T., Munna, M. H., & Mamun, S. B. (2024). A FuzzyBased Vision Transformer Model for Tea Leaf Disease Detection Check for updates. Proceedings of the Fifth International Conference on Trends in Computational... https://doi.org/1 Aladhadh, S., Alsanea, M., Aloraini, M., Khan, T., Habib, S., & Islam, M. (2022). An effective skin cancer classification mechanism via medical vision transformer. Sensors, 22(11), 4008. Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., ... & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156-191. Ali, M. S., Hossain, M. M., Kona, M. A., Nowrin, K. R., & Islam, M. K. (2024). An ensemble classification approach for cervical cancer prediction using behavioral risk factors. Healthcare Analytics, 5, 100324. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information fusion, 58, 82-115. Atasever, S., Azginoglu, N., Terzi, D. S., & Terzi, R. (2023). A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clinical imaging, 94, 18-41. Attallah, O. (2023). Cercan· net: Cervical cancer classification model via multi-layer feature ensembles of lightweight cnns and transfer learning. Expert Systems with Applications, 229, 120624. Attallah, O. (2023). Cervical cancer diagnosis based on multi-domain features using deep learning enhanced by handcrafted descriptors. Applied Sciences, 13(3), 1916. Azad, R., Aghdam, E. K., Rauland, A., Jia, Y., Avval, A. H., Bozorgpour, A., ... & Merhof, D. (2024). Medical image segmentation review: The success of u-net. IEEE Transactions on Pattern Analysis and Machine Intelligence. Band, S. S., Yarahmadi, A., Hsu, C. C., Biyari, M., Sookhak, M., Ameri, R., ... & Liang, H. W. (2023). Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods. Informatics in Medicine Unlocked, 40, 101286. Barbedo, J. G. A. (2018). Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Computers and electronics in agriculture, 153, 46-53. Bhandarkar, A., Naik, P., Vakkund, K., Junjappanavar, S., Bakare, S., & Pattar, S. (2024). Deep learning based computer aided diagnosis of Alzheimer's disease: a snapshot of last 5 years, gaps, and future directions. Artificial Intelligence Review, 57(2), 30. Bhowmik, A. C., Ahad, M. T., & Emon, Y. R. (2023). Machine Learning-Based Jamun Leaf Disease Detection: A Comprehensive Review. arXiv preprint, . https://doi.org/4 Bhowmik, A. C., Ahad, M. T., Emon, Y. R., Ahmed, F., Song, B., & Li, Y. (2024). A customised Vision Transformer for accurate detection and classification of Java Plum leaf disease. Smart Agricultural Technology, 8, 100500. https://doi.org/22 Biplob, T. I., Rabbany, G., Emon, Y. R., Ahad, M. T., & Fimu, F. A. (2023). An Optimized Vision Based Transformer for Lungs Cancer Detection. International Conference on Trends in Computational and Cognitive ... https://doi.org/ Boon, S. S., Luk, H. Y., Xiao, C., Chen, Z., & Chan, P. K. S. (2022). Review of the standard and advanced screening, staging systems and treatment modalities for cervical cancer. Cancers, 14(12), 2913 Deo, B. S., Pal, M., Panigrahi, P. K., & Pradhan, A. (2024). CerviFormer: A pap smear‐ based cervical cancer classification method using cross‐attention and latent transformer. International Journal of Imaging Systems and Technology, 34(2), e23043. Dogani, J., Namvar, R., & Khunjush, F. (2023). Auto-scaling techniques in containerbased cloud and edge/fog computing: Taxonomy and survey. Computer Communications, 209, 120-150. Emon, Y. R., & Ahad, M. T. (2024). Multi-format open-source sweet orange leaf dataset for disease detection, classification, and analysis. Data in Brief, 55, 110713. https://doi.org/17 Fan, Z., Wu, X., Li, C., Chen, H., Liu, W., Zheng, Y., ... & Li, C. (2023). CAM-VT: A weakly supervised cervical cancer nest image identification approach using conjugated attention mechanism and visual transformer. Computers in Biology and Medicine, 162, 107070. Gendy, G., He, G., & Sabor, N. (2023). Lightweight image super-resolution based on deep learning: State-of-the-art and future directions. Information Fusion, 94, 284-310. Hossain, S., Tanzim Reza, M., Chakrabarty, A., & Jung, Y. J. (2023). Aggregating Different Scales of Attention on Feature Variants for Tomato Leaf Disease Diagnosis from Image Data: A Transformer Driven Study. Sensors, 23(7), 3751. identification. Internet of Things, 21, 100650. Huang, L., Gao, X., Li, Y., Lyu, F., Gao, Y., Bai, Y., ... & Ding, X. (2025). Enhancing stereotactic ablative boost radiotherapy dose prediction for bulky lung cancer: A multi‐ scale dilated network approach with scale‐balanced structure loss. Journal of Applied Clinical Medical Physics, 26(1), e14546. Hussain, E., Mahanta, L. B., Borbora, K. A., Borah, H., & Choudhury, S. S. (2024). Exploring explainable artificial intelligence techniques for evaluating cervical intraepithelial neoplasia (CIN) diagnosis using colposcopy images. Expert Systems with Applications, 249, 123579. Islam, R., Ahad, M. T., Ahmed, F., Song, B., & Li, Y. (2024). Mental health diagnosis from voice data using convolutional neural networks and vision transformers. Journal of Voice. https://doi.org/1 Jawahar, M., Anbarasi, L. J., Narayanan, S., & Gandomi, A. H. (2024). An attentionbased deep learning for acute lymphoblastic leukemia classification. Scientific Reports, 14(1), 17447. Jha, P., Dembla, D., & Dubey, W. (2024). Deep learning models for enhancing potato leaf disease prediction: Implementation of transfer learning based stacking ensemble model. Multimedia Tools and Applications, 83(13), 37839-37858. Joseph, J. S., Vidyarthi, A., & Singh, V. P. (2024). An improved approach for initial stage detection of laryngeal cancer using effective hybrid features and ensemble learning method. Multimedia Tools and Applications, 83(6), 17897-17919. Joynab, N. S., Islam, M. N., Aliya, R. R., Hasan, A. R., Khan, N. I., & Sarker, I. H. (2024). A federated learning aided system for classifying cervical cancer using PAPSMEAR images. Informatics in Medicine Unlocked, 47, 101496. Ju, J., Zheng, H., Xu, X., Guo, Z., Zheng, Z., & Lin, M. (2022). Classification of jujube defects in small data sets based on transfer learning. Neural Computing and Applications, 1-14. Karimi, D., Warfield, S. K., & Gholipour, A. (2021). Transfer learning in medical image segmentation: New insights from analysis of the dynamics of model parameters and learned representations. Artificial intelligence in medicine, 116, 102078. Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial intelligence review, 53, 5455-5516. Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial intelligence review, 53, 5455-5516. Khowaja, A., Zou, B., & Kui, X. (2024). Enhancing cervical cancer diagnosis: Integrated attention-transformer system with weakly supervised learning. Image and Vision Computing, 149, 105193. Kora, P., Ooi, C. P., Faust, O., Raghavendra, U., Gudigar, A., Chan, W. Y., ... & Acharya, U. R. (2022). Transfer learning techniques for medical. Kumar, T. S., Rajendran, P., Santhiyakumari, N., Kumarganesh, S., Sundari, L. M., Elango, S., ... & Emadifar, H. (2024). Analysis of Computational Methods for Diagnosis of Cervical Cancer-A Review. Appl. Math, 18(4), 795-809. Kumar, Y., Shrivastav, S., Garg, K., Modi, N., Wiltos, K., Woźniak, M., & Ijaz, M. F. (2024). Automating cancer diagnosis using advanced deep learning techniques for multicancer image Lau, K. W., Po, L. M., & Rehman, Y. A. U. (2024). Large separable kernel attention: Rethinking the large kernel attention design in cnn. Expert Systems with Applications, 236, 121352. Leung, S. N., Chandra, S. S., Lim, K., Young, T., Holloway, L., & Dowling, J. A. (2024). Automatic segmentation of tumour and organs at risk in 3D MRI for cervical cancer radiation therapy with anatomical variations. Physical and Engineering Sciences in Medicine, 47(3), 919-928. Leung, S. N., Chandra, S. S., Lim, K., Young, T., Holloway, L., & Dowling, J. A. (2024). Automatic segmentation of tumour and organs at risk in 3D MRI for cervical cancer radiation therapy with anatomical variations. Physical and Engineering Sciences in Medicine, 47(3), 919-928. Liu, Q., Jiang, N., Hao, Y., Hao, C., Wang, W., Bian, T., ... & Dong, T. (2023). Identification of lymph node metastasis in pre‐operation cervical cancer patients by weakly supervised deep learning from histopathological whole‐slide biopsy images. Cancer Medicine, 12(17), 17952-17966. Liu, X., Geng, L. S., Huang, D., Cai, J., & Yang, R. (2024). Deep learning-based target tracking with X-ray images for radiotherapy: a narrative review. Quantitative Imaging in Medicine and Surgery, 14(3), 2671. Liu, X., Song, L., Liu, S., & Zhang, Y. (2021). A review of deep-learning-based medical image segmentation methods. Sustainability, 13(3), 1224. Luo, Y., Zhang, S., Ling, J., Lin, Z., Wang, Z., & Yao, S. (2024). Mask-guided generative adversarial network for MRI-based CT synthesis. Knowledge-Based Systems, 295, 111799. Mahajan, P., & Kaur, P. (2024). Improving cervical cancer classification in PAP smear images with enhanced segmentation and deep progressive learning‐based techniques. Diagnostic Cytopathology, 52(6), 313-324. Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2023). Scratch vision transformer model for diagnosis grape leaf disease. International Conference on Trends in Computational and Cognitive... https://doi.org/7 Mamun, S. B., Ahad, M. T., Morshed, M. M., Hossain, N., & Emon, Y. R. (2024). Scratch Vision Transformer Model for Diagnosis Grape Leaf Disease Check for updates. Proceedings of the Fifth International Conference on Trends in Computational... https://doi.org/ Mamun, S. B., Payel, I. J., Ahad, M. T., Atkins, A. S., Song, B., & Li, Y. (2025). Grape Guard: A YOLO-based mobile application for detecting grape leaf diseases. Journal of Electronic Science and Technology, 23(1), 100300. https://doi.org/2 Mathivanan, S. K., Francis, D., Srinivasan, S., Khatavkar, V., P, K., & Shah, M. A. (2024). Enhancing cervical cancer detection and robust classification through a fusion of deep learning models. Scientific Reports, 14(1), 10812. Mathivanan, S. K., Francis, D., Srinivasan, S., Khatavkar, V., P, K., & Shah, M. A. (2024). Enhancing cervical cancer detection and robust classification through a fusion of deep learning models. Scientific Reports, 14(1), 10812. Mazumder, M. K. A., Nobi, M. M. U., Mridha, M. F., & Hasan, K. T. (2024). A Precise Cervical Cancer Classification in the Early Stage Using Transfer Learning-Based Ensemble Method: A Deep Learning Approach. In Data-Driven Clinical Decision-Making Using Deep Learning in Imaging (pp. 41-59). Singapore: Springer Nature Singapore. Mehedi, M. H. K., Khandaker, M., Ara, S., Alam, M. A., Mridha, M. F., & Aung, Z. (2024). A lightweight deep learning method to identify different types of cervical cancer. Scientific Reports, 14(1), 29446. Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for imagebased plant disease detection. Frontiers in plant science, 7, 215232. Morid, M. A., Borjali, A., & Del Fiol, G. (2021). A scoping review of transfer learning research on medical image analysis using ImageNet. Computers in biology and medicine, 128, 104115. Mustofa, S., Ahad, M. T., Emon, Y. R., & Sarker, A. (2024). BDPapayaLeaf: A dataset of papaya leaf for disease detection, classification, and analysis. Data in Brief, 57, 110910. https://doi.org/7 Mustofa, S., Emon, Y. R., Mamun, S. B., Akhy, S. A., & Ahad, M. T. (2025). A novel AI-driven model for student dropout risk analysis with explainable AI insights. Computers and Education: Artificial Intelligence, 8, 100352. https://doi.org/5 Mustofa, S., Munna, M. M. H., Emon, Y. R., Rabbany, G., & Ahad, M. T. (2023). A comprehensive review on plant leaf disease detection using deep learning. arXiv preprint, . https://doi.org/37 Nazir, A., Hussain, A., Singh, M., & Assad, A. (2024). Deep learning in medicine: advancing healthcare with intelligent solutions and the future of holography imaging in early diagnosis. Multimedia Tools and Applications, 1-64. Nirmala, G., Nayudu, P. P., Kumar, A. R., & Sagar, R. (2024). Automatic cervical cancer classification using adaptive vision transformer encoder with CNN for medical application. Pattern Recognition, 111201. Nour, M. K., Issaoui, I., Edris, A., Mahmud, A., Assiri, M., & Ibrahim, S. S. (2024). Computer aided cervical cancer diagnosis using gazelle optimization algorithm with deep learning model. IEEE Access. Pacal, I. (2024). MaxCerVixT: A novel lightweight vision transformer-based Approach for precise cervical cancer detection. Knowledge-Based Systems, 289, 111482. Pacal, I., & Kılıcarslan, S. (2023). Deep learning-based approaches for robust classification of cervical cancer. Neural Computing and Applications, 35(25), 1881318828. Painuli, D., & Bhardwaj, S. (2022). Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Computers in Biology and Medicine, 146, 105580. Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv preprint, . https://doi.org/7 Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A study on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL). arXiv preprint, . https://doi.org/5 Preanto, S. A., Ahad, M. T., Emon, Y. R., Mustofa, S., & Alamin, M. (2024). A semantic segmentation approach on sweet orange leaf diseases detection utilizing YOLO. arXiv e-prints, arXiv: 2409.06671. https://doi.org/ Sambyal, D., & Sarwar, A. (2023). Recent developments in cervical cancer diagnosis using deep learning on whole slide images: An Overview of models, techniques, challenges and future directions. Micron, 173, 103520. Sarhangi, H. A., Beigifard, D., Farmani, E., & Bolhasani, H. (2024). Deep Learning Techniques for Cervical Cancer Diagnosis based on Pathology and Colposcopy Images. Informatics in Medicine Unlocked, 101503. Shandilya, G., Gupta, S., Almogren, A., Bharany, S., Altameem, A., Rehman, A. U., & Hussen, S. (2024). Enhancing advanced cervical cell categorization with cluster-based intelligent systems by a novel integrated CNN approach with skip mechanisms and GANbased augmentation. Scientific Reports, 14(1), 29040. Tan, S. L., Selvachandran, G., Ding, W., Paramesran, R., & Kotecha, K. (2024). Cervical cancer classification from pap smear images using deep convolutional neural network models. Interdisciplinary Sciences: Computational Life Sciences, 16(1), 16-38. Tomko, M., Pavliuchenko, M., Pavliuchenko, I., Gordienko, Y., & Stirenko, S. (2023). Multi-label classification of cervix types with image size optimization for cervical cancer prescreening by deep learning. In Inventive Computation and Information Technologies: Proceedings of ICICIT 2022 (pp. 885-902). Singapore: Springer Nature Singapore. Uyanga, E., Khishigdemberel, I., Khongorzul, B., Kiseleva, T. Y., Kobayashi, S., Tyapkin, P. Y., ... & Odkhuu, D. (2024). Enhancing heat generation ability and magnetic properties of MgFe2O4 with Ni substitutes for biomedical applications. Materials Today Chemistry, 35, 101841. Wubineh, B. Z., Rusiecki, A., & Halawa, K. (2024, June). Segmentation of Cytology Images to Detect Cervical Cancer Using Deep Learning Techniques. In International Conference on Computational Science (pp. 270-278). Cham: Springer Nature Switzerland. Yaman, O., & Tuncer, T. (2022). Exemplar pyramid deep feature extraction based cervical cancer image classification model using pap-smear images. Biomedical Signal Processing and Control, 73, 103428. Yeasmin, M. N., Al Amin, M., Joti, T. J., Aung, Z., & Azim, M. A. (2024). Advances of AI in image-based computer-aided diagnosis: A review. Array, 100357. Youneszade, N., Marjani, M., & Pei, C. P. (2023). Deep learning in cervical cancer diagnosis: architecture, opportunities, and open research challenges. IEEE Access, 11, 6133-6149. Yue, J., Jin, S., Wang, S., Zeng, J., Shan, S., Liu, B., ... & Zhou, F. (2024). A shapesupervised feature fusion U-Net for tubular structure segmentation. Computers and Electrical Engineering, 119, 109522. Yue, J., Jin, S., Wang, S., Zeng, J., Shan, S., Liu, B., ... & Zhou, F. (2024). A shapesupervised feature fusion U-Net for tubular structure segmentation. Computers and Electrical Engineering, 119, 109522. Zhang, Y., Ning, C., & Yang, W. (2025). An automatic cervical cell classification model based on improved DenseNet121. Scientific Reports, 15(1), 3240.
|
2509.16247
|
Solving Differential Equation with
Quantum-Circuit Enhanced Physics-Informed
Neural Networks
Rachana Soni
School of Computer Science Engineering and Technology, Bennett
University, TechZone 2, Greater Noida, 201310, U.P., India.
Contributing authors: rachanasoni007@gmail.com;
Abstract
I present a simple hybrid framework that combines physics-informed neural net-
works (PINNs) with features generated from small quantum circuits. As a proof
of concept, the first-order equation
dy
dx + 2y = 0 with y(0) = 1 is solved by
feeding quantum measurement probabilities into the neural model. The architec-
ture enforces the initial condition exactly, and training is guided by the ODE
residual loss. Numerical results show that the hybrid model reproduces the ana-
lytical solution e−2x, illustrating the potential of quantum-enhanced PINNs for
differential equation solving.
Keywords: Quantum Feature, Physics-Informed Neural Networks, Machine Learning
1 Introduction
Differential equations play a central role in modeling physical, biological, and engi-
neering systems. Traditional numerical solvers, while well established, can become
computationally demanding in high dimensions or when equations are only partially
known. Physics-informed neural networks (PINNs) have emerged as a data-driven
alternative, where the governing equation is embedded into the loss function of a neu-
ral network. This framework allows approximate solutions to be obtained without
explicitly constructing finite-difference or finite-element discretizations.
At the same time, quantum computing offers new ways of encoding and processing
information. In particular, the statistics generated by shallow quantum circuits can be
1
arXiv:2509.16247v1 [quant-ph] 17 Sep 2025
viewed as nonlinear feature maps that are difficult to reproduce classically. These quan-
tum features may enrich the representation power of machine-learning models even on
near-term devices. Recent studies have explored their use in supervised and generative
tasks, but their application to scientific computing remains largely unexplored.
In this work, I investigate the use of quantum-circuit features within a PINN
framework for solving differential equations. As a proof of concept, we focus on the
linear ordinary differential equation dy
dx + 2y = 0 with initial condition y(0) = 1. A
simple two-qubit circuit is employed to generate measurement probabilities, which are
concatenated with the collocation points and used as inputs to the neural network. The
architecture is constructed to satisfy the initial condition exactly, while the training
process minimizes the ODE residual loss. The numerical results indicate that the
hybrid model can approximate the analytical solution e−2x, suggesting that quantum-
enhanced features may serve as a useful ingredient in neural solvers for differential
equations.
Recent years have witnessed an increasing interest in leveraging quantum mechan-
ics as a computational resource.. Foundational theory of quantum mechanics [1–3]
establish the mathematical models.
Parallel to these advances, physics-informed and operator-based neural archi-
tectures have emerged as powerful tools for solving differential equations in scien-
tific computing. Physics-informed neural networks (PINNs) incorporate governing
equations directly into the loss function [4, 5], while neural operators learn maps
between function spaces and generalize across problem instances [6, 7]. Applications
to high-dimensional PDEs [8] and quantum systems [9–11] further demonstrate their
versatility. By integrating quantum circuit generated feature with PINNs, one can
bridge these two lines of research, opening a pathway for hybrid quantum–classical
solvers that address differential equations from a new perspective.
Significance and Motivation
The significance of this study lies in bridging physics-informed neural networks
(PINNs) with quantum feature maps. While PINNs have proven effective for solv-
ing ordinary and partial differential equations, their accuracy can be limited by the
quality of the input features. On the other hand, parameterized quantum circuits nat-
urally generate structured, nonlinear features that are difficult to reproduce classically.
Integrating such features into the PINN framework represents a novel direction for
scientific machine learning.
The need for this work arises from a gap in the literature. Although quantum
circuits have been employed in classification and generative modeling tasks, their
role in solving differential equations has remained largely unexplored. Existing PINN
approaches rely entirely on classical features, and little attention has been paid to
hybrid models that leverage quantum statistics. By demonstrating the feasibility of
solving the simple ODE dy
dx + 2y = 0 with quantum-circuit features, this work pro-
vides an initial step towards quantum-enhanced solvers for more complex systems of
scientific interest.
2
2 Mathematical Framework
We consider the first-order linear ODE
dy
dx + 2y = 0,
y(0) = 1,
(1)
whose analytical solution is
y(x) = e−2x.
(2)
2.1 Quantum-Circuit Feature Encoding
For each grid point x ∈[0, 1], we define a 2-qubit parameterized circuit
U(t) =
Rx(−2t) ⊗Rx(−2t)
H ⊗I
,
(3)
with evolution parameter t = x. The quantum state is
|ψ(x)⟩= U(t)|00⟩.
(4)
Measuring |ψ(x)⟩in the computational basis yields a probability vector
pZ(x) =
PZ(00), PZ(01), PZ(10), PZ(11)
.
(5)
The quantum feature vector is then
q(x) =
n
pZ(x),
(4-dimensional).
(6)
2.2 Neural Network Ansatz with Hard Initial Condition
We define the neural ansatz as
ˆy(x) = 1 + x gθ
[x, q(x)]
,
(7)
where gθ is a feed-forward neural network with trainable parameters θ. This construc-
tion enforces the initial condition ˆy(0) = 1 exactly.
2.3 Physics-Informed Loss
The residual of (1) at each collocation point xi is
r(xi; θ) = dˆy
dx(xi) + 2ˆy(xi).
(8)
3
Fig. 1 Histogram of measurement outcomes from the two-qubit quantum circuit used for feature
generation. The probabilities of the computational basis states {00, 01, 10, 11} are estimated from
repeated circuit executions (shots). These distributions provide the quantum features that are inte-
grated into the neural ODE solver.
The physics-informed loss is
LODE(θ) = 1
N
N
X
i=1
r(xi; θ)2.
(9)
2.4 Training Objective
The parameters θ are optimized by minimizing
L(θ) = LODE(θ),
(10)
using stochastic gradient descent (Adam optimizer). The trained model ˆy(x) then
approximates the analytical solution e−2x.
4
Algorithm 1: Quantum Feature Generation
Input: Collocation points x1, . . . , xN ∈[0, 1], shots S
Output: Feature matrix Q ∈RN×4 with rows q(xi)
for i ←1 to N do
Set t ←xi ;
// parameterize by input
Prepare 2-qubit circuit U(t): apply H on q0; apply Rx(−2t) on q0, q1;
Measure both qubits in the Z basis (shots S) to get counts over
{00, 01, 10, 11};
Convert counts to probabilities pZ(xi) = [P(00), P(01), P(10), P(11)];
Set q(xi) ←pZ(xi) ;
// Z-only, 4-dim
Stack q(xi) as rows to form Q; standardize columns (Q −µ)/σ;
Algorithm 2: PINN for y′ + 2y = 0 with Quantum Features (Hard IC)
Input: Points x1, . . . , xN, feature matrix Q ∈RN×4
Output: Trained network ˆy(x)
Define MLP gθ : R1+4 →R; input z(x) = [x, q(x)];
Enforce hard initial condition via ˆy(x) = 1 + x gθ(z(x)) so ˆy(0) = 1;
Physics loss: For each xi, compute ˆy′(xi) by automatic differentiation w.r.t.
x and form r(xi) = ˆy′(xi) + 2 ˆy(xi); set LODE(θ) = 1
N
N
X
i=1
r(xi)2.
Training: Initialize θ; for epoch = 1 to T do
Evaluate LODE(θ) on {xi, Qi};
Update θ ←θ −η ∇θLODE(θ) (Adam);
Return ˆy(x) = 1 + x gθ([x, q(x)]).
3 Results
The hybrid solver successfully reproduced the analytical solution y(x) = e−2x using
quantum features within the PINN framework. As shown in Fig. 2, the predicted curve
overlaps the ground truth with only minor deviations at larger x values, confirming
the effectiveness of the approach.
4 Conclusion
This study presented a hybrid framework where quantum-circuit statistics were
embedded as features into a physics-informed neural network for solving an ordinary
differential equation. The results demonstrate that quantum-generated features can
effectively enrich the solution space, allowing the hybrid solver to closely approximate
the analytical solution. While the present work is a proof of concept on a simple ODE,
it highlights the potential of integrating circuit-based features into scientific machine
learning. Extending this approach to more complex equations and higher-dimensional
systems offers a promising direction for future research.
5
Fig. 2 Comparison of the analytical solution y(x) = e−2x (dashed line) with the hybrid quantum–
classical prediction (solid line) obtained using quantum-circuit features. The close agreement in the
region x ∈[0, 0.5] indicates that the solver accurately captures the exponential decay. Small devia-
tions at larger x values are due to finite sampling noise and limited training, yet the overall trend
demonstrates the effectiveness of quantum-generated features in physics-informed ODE solvers.
References
[1] Griffiths, D.J., Schroeter, D.F.: Introduction to Quantum Mechanics, 3rd edn.
Cambridge University Press, Cambridge, UK (2018)
[2] Teschl, G.: Mathematical Methods in Quantum Mechanics: With Applications to
Schr¨odinger Operators, 2nd edn. American Mathematical Society, Providence, RI
(2014)
[3] Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Informa-
tion, 10th anniversary edition edn. Cambridge University Press, Cambridge, UK
(2010)
[4] Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks:
A deep learning framework for solving forward and inverse problems involving
nonlinear partial differential equations. Journal of Computational physics 378,
686–707 (2019)
[5] Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S., Yang, L.:
Physics-informed machine learning. Nature Reviews Physics 3(6), 422–440 (2021)
6
[6] Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A.,
Anandkumar, A.: Neural operator: Learning maps between function spaces with
applications to pdes. Journal of Machine Learning Research 24(89), 1–97 (2023)
[7] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart,
A., Anandkumar, A.: Fourier neural operator for parametric partial differential
equations. arXiv preprint arXiv:2010.08895 (2020)
[8] Han, J., Jentzen, A., E, W.: Solving high-dimensional partial differential equations
using deep learning. Proceedings of the National Academy of Sciences 115(34),
8505–8510 (2018)
[9] Mills, K., Spanner, M., Tamblyn, I.: Deep learning and the schr¨odinger equation.
Physical Review A 96(4), 042113 (2017)
[10] Carleo, G., Troyer, M.: Solving the quantum many-body problem with artificial
neural networks. Science 355(6325), 602–606 (2017)
[11] Carrasquilla, J., Melko, R.G.: Machine learning phases of matter. Nature Physics
13(5), 431–434 (2017)
[12] Farhi, E., Gutmann, S.: Quantum computation by adiabatic evolution. arXiv
preprint quant-ph/0001106 (2000)
7
|
Solving Differential Equation with Quantum-Circuit Enhanced Physics-Informed Neural Networks Rachana Soni 2, Greater Noida, 201310, U.P., India. Contributing authors: ; Abstract I present a simple hybrid framework that combines physics-informed neural networks (PINNs) with features generated from small quantum circuits. As a proof of concept, the first-order equation dy dx + 2y = 0 with y(0) = 1 is solved by feeding quantum measurement probabilities into the neural model. The architecture enforces the initial condition exactly, and training is guided by the ODE residual loss. Numerical results show that the hybrid model reproduces the analytical solution e-2x, illustrating the potential of quantum-enhanced PINNs for differential equation solving. Keywords: Quantum Feature, Physics-Informed Neural Networks, Machine Learning 1 Introduction Differential equations play a central role in modeling physical, biological, and engineering systems. Traditional numerical solvers, while well established, can become computationally demanding in high dimensions or when equations are only partially known. Physics-informed neural networks (PINNs) have emerged as a data-driven alternative, where the governing equation is embedded into the loss function of a neural network. This framework allows approximate solutions to be obtained without explicitly constructing finite-difference or finite-element discretizations. At the same time, quantum computing offers new ways of encoding and processing information. In particular, the statistics generated by shallow quantum circuits can be 1 17 Sep 2025 viewed as nonlinear feature maps that are difficult to reproduce classically. These quantum features may enrich the representation power of machine-learning models even on near-term devices. Recent studies have explored their use in supervised and generative tasks, but their application to scientific computing remains largely unexplored. In this work, I investigate the use of quantum-circuit features within a PINN framework for solving differential equations. As a proof of concept, we focus on the linear ordinary differential equation dy dx + 2y = 0 with initial condition y(0) = 1. A simple two-qubit circuit is employed to generate measurement probabilities, which are concatenated with the collocation points and used as inputs to the neural network. The architecture is constructed to satisfy the initial condition exactly, while the training process minimizes the ODE residual loss. The numerical results indicate that the hybrid model can approximate the analytical solution e-2x, suggesting that quantumenhanced features may serve as a useful ingredient in neural solvers for differential equations. Recent years have witnessed an increasing interest in leveraging quantum mechanics as a computational resource.. Foundational theory of quantum mechanics [1-3] establish the mathematical models. Parallel to these advances, physics-informed and operator-based neural architectures have emerged as powerful tools for solving differential equations in scientific computing. Physics-informed neural networks (PINNs) incorporate governing equations directly into the loss function [4, 5], while neural operators learn maps between function spaces and generalize across problem instances [6, 7]. Applications to high-dimensional PDEs [8] and quantum systems [9-11] further demonstrate their versatility. By integrating quantum circuit generated feature with PINNs, one can bridge these two lines of research, opening a pathway for hybrid quantum-classical solvers that address differential equations from a new perspective. Significance and Motivation The significance of this study lies in bridging physics-informed neural networks (PINNs) with quantum feature maps. While PINNs have proven effective for solving ordinary and partial differential equations, their accuracy can be limited by the quality of the input features. On the other hand, parameterized quantum circuits naturally generate structured, nonlinear features that are difficult to reproduce classically. Integrating such features into the PINN framework represents a novel direction for scientific machine learning. The need for this work arises from a gap in the literature. Although quantum circuits have been employed in classification and generative modeling tasks, their role in solving differential equations has remained largely unexplored. Existing PINN approaches rely entirely on classical features, and little attention has been paid to hybrid models that leverage quantum statistics. By demonstrating the feasibility of solving the simple ODE dy dx + 2y = 0 with quantum-circuit features, this work provides an initial step towards quantum-enhanced solvers for more complex systems of scientific interest. 2 2 Mathematical Framework We consider the first-order linear ODE dy dx + 2y = 0, y(0) = 1, (1) whose analytical solution is y(x) = e-2x. (2) 2.1 Quantum-Circuit Feature Encoding For each grid point x ∈[0, 1], we define a 2-qubit parameterized circuit U(t) = Rx(-2t) ⊗Rx(-2t) H ⊗I , (3) with evolution parameter t = x. The quantum state is |ψ(x)⟩= U(t)|00⟩. (4) Measuring |ψ(x)⟩in the computational basis yields a probability vector pZ(x) = PZ(00), PZ(01), PZ(10), PZ(11) . (5) The quantum feature vector is then q(x) = n pZ(x), (4-dimensional). (6) 2.2 Neural Network Ansatz with Hard Initial Condition We define the neural ansatz as ˆy(x) = 1 + x gθ [x, q(x)] , (7) where gθ is a feed-forward neural network with trainable parameters θ. This construction enforces the initial condition ˆy(0) = 1 exactly. 2.3 Physics-Informed Loss The residual of (1) at each collocation point xi is r(xi; θ) = dˆy dx(xi) + 2ˆy(xi). (8) 3 Fig. 1 Histogram of measurement outcomes from the two-qubit quantum circuit used for feature generation. The probabilities of the computational basis states {00, 01, 10, 11} are estimated from repeated circuit executions (shots). These distributions provide the quantum features that are integrated into the neural ODE solver. The physics-informed loss is LODE(θ) = 1 N N X i=1 r(xi; θ)2. (9) 2.4 Training Objective The parameters θ are optimized by minimizing L(θ) = LODE(θ), (10) using stochastic gradient descent (Adam optimizer). The trained model ˆy(x) then approximates the analytical solution e-2x. 4 Algorithm 1: Quantum Feature Generation Input: Collocation points x1, . . . , xN ∈[0, 1], shots S Output: Feature matrix Q ∈RN×4 with rows q(xi) for i ←1 to N do Set t ←xi ; // parameterize by input Prepare 2-qubit circuit U(t): apply H on q0; apply Rx(-2t) on q0, q1; Measure both qubits in the Z basis (shots S) to get counts over {00, 01, 10, 11}; Convert counts to probabilities pZ(xi) = [P(00), P(01), P(10), P(11)]; Set q(xi) ←pZ(xi) ; // Z-only, 4-dim Stack q(xi) as rows to form Q; standardize columns (Q -μ)/σ; Algorithm 2: PINN for y′ + 2y = 0 with Quantum Features (Hard IC) Input: Points x1, . . . , xN, feature matrix Q ∈RN×4 Output: Trained network ˆy(x) Define MLP gθ : R1+4 →R; input z(x) = [x, q(x)]; Enforce hard initial condition via ˆy(x) = 1 + x gθ(z(x)) so ˆy(0) = 1; Physics loss: For each xi, compute ˆy′(xi) by automatic differentiation w.r.t. x and form r(xi) = ˆy′(xi) + 2 ˆy(xi); set LODE(θ) = 1 N N X i=1 r(xi)2. Training: Initialize θ; for epoch = 1 to T do Evaluate LODE(θ) on {xi, Qi}; Update θ ←θ -η ∇θLODE(θ) (Adam); Return ˆy(x) = 1 + x gθ([x, q(x)]). 3 Results The hybrid solver successfully reproduced the analytical solution y(x) = e-2x using quantum features within the PINN framework. As shown in Fig. 2, the predicted curve overlaps the ground truth with only minor deviations at larger x values, confirming the effectiveness of the approach. 4 Conclusion This study presented a hybrid framework where quantum-circuit statistics were embedded as features into a physics-informed neural network for solving an ordinary differential equation. The results demonstrate that quantum-generated features can effectively enrich the solution space, allowing the hybrid solver to closely approximate the analytical solution. While the present work is a proof of concept on a simple ODE, it highlights the potential of integrating circuit-based features into scientific machine learning. Extending this approach to more complex equations and higher-dimensional systems offers a promising direction for future research. 5 Fig. 2 Comparison of the analytical solution y(x) = e-2x (dashed line) with the hybrid quantumclassical prediction (solid line) obtained using quantum-circuit features. The close agreement in the region x ∈[0, 0.5] indicates that the solver accurately captures the exponential decay. Small deviations at larger x values are due to finite sampling noise and limited training, yet the overall trend demonstrates the effectiveness of quantum-generated features in physics-informed ODE solvers. References [1] Griffiths, D.J., Schroeter, D.F.: Introduction to Quantum Mechanics, 3rd edn. Cambridge University Press, Cambridge, UK (2018) [2] Teschl, G.: Mathematical Methods in Quantum Mechanics: With Applications to Schr ̈odinger Operators, 2nd edn. American Mathematical Society, Providence, RI (2014) [3] Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information, 10th anniversary edition edn. Cambridge University Press, Cambridge, UK (2010) [4] Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics 378, 686-707 (2019) [5] Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S., Yang, L.: Physics-informed machine learning. Nature Reviews Physics 3(6), 422-440 (2021) 6 [6] Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A., Anandkumar, A.: Neural operator: Learning maps between function spaces with applications to pdes. Journal of Machine Learning Research 24(89), 1-97 (2023) [7] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., Anandkumar, A.: Fourier neural operator for parametric partial differential equations. arXiv preprint (2020) [8] Han, J., Jentzen, A., E, W.: Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences 115(34), 8505-8510 (2018) [9] Mills, K., Spanner, M., Tamblyn, I.: Deep learning and the schr ̈odinger equation. Physical Review A 96(4), 042113 (2017) [10] Carleo, G., Troyer, M.: Solving the quantum many-body problem with artificial neural networks. Science 355(6325), 602-606 (2017) [11] Carrasquilla, J., Melko, R.G.: Machine learning phases of matter. Nature Physics 13(5), 431-434 (2017) [12] Farhi, E., Gutmann, S.: Quantum computation by adiabatic evolution. arXiv preprint quant-ph/0001106 (2000) 7
|
2509.16245
|
MOTIONAL REPRESENTATION; THE ABILITY TO PREDICT ODOR
CHARACTERS USING MOLECULAR VIBRATIONS
A PREPRINT
Yuki Harada
Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of
Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP
yharada@kumamoto-u.ac.jp
Shuichi Maeda
Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of
Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP
Shuichi2503736@gmail.com
Junwei Shen
Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of
Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP
jwshen@kumamoto-u.ac.jp
Taku Misonou
University of Yamanashi, Emeritus Professors, Takeda 4-4-37
Kofu, Yamanashi, 400-8510, JP
tmisonou@gmail.com
Hirokazu Hori
University of Yamanashi, Emeritus Professors, Takeda 4-4-37
Kofu, Yamanashi, 400-8510, JP
hirohori@yamanashi.ac.jp
Shinichiro Nakamura
Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of
Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP
shindon@kumamoto-u.ac.jp
September 23, 2025
ABSTRACT
The prediction of odor characters is still impossible based on the odorant molecular structure. We
designed a CNN-based regressor for computed parameters in molecular vibrations (CNN_vib), in
order to investigate the ability to predict odor characters of molecular vibrations. In this study,
we explored following three approaches for the predictability; (i) CNN with molecular vibrational
parameters, (ii) logistic regression based on vibrational spectra, and (iii) logistic regression with
molecular fingerprint(FP). Our investigation demonstrates that both (i) and (ii) provide predictablity,
and also that the vibrations as an explanatory variable (i and ii) and logistic regression with fingerprints
(iii) show nearly identical tendencies. The predictabilities of (i) and (ii), depending on odor descriptors,
are comparable to those of (iii). Our research shows that odor is predictable by odorant molecular
arXiv:2509.16245v1 [physics.chem-ph] 17 Sep 2025
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Figure 1: Comparison of three regressors to investigate the explanatory ability of molecular vibrations
vibration as well as their shapes alone. Our findings provide insight into the representation of
molecular motional features beyond molecular structures.
Keywords odor · odorant · odor-sensing space · chemical space · molecular vibrations
1
Introduction
Odorants chemical space is the collection of volatilized chemical compounds that humans and many animals can
perceive via their sense of smell. The total number of whole chemical space is estimated to be 1060 that is the collection
of all potentially possible exisiting compounds, including those yet to be found. [1, 2] There could be almost 400,000
types of odor molecules in the odorant chemical space, in addition to the 20,000 known ones. [3, 4, 5] The prediction
of perceived odor character / quality of odorants is an extremely challenging task based on their molecular structure.
Because some odor character is not simply related directry to molecular structure, while others are related to molecular
structure, such as aliphatic, aromatic, saturated, unsaturated, and polar.
There are different ways to represent chemical space, and there is no established implementation. [6, 7] Molecular
fingerprint(FP) design is currently being refined, and there may be further approaches to detect molecular structure.
[8] However machine learning with FP is a fundamental and/or irreplaceable approach for structure-activity relation-
ship((Q)SAR) [9, 10] and quantitative structure-odor relationship (QSOR). [11] In our previous study, we reported
the relation between the odor descriptors and their structural diversity for odorants groups associated with each odor
descriptor. Using the logistic regression with conventional FPs, we investigated the influence of structural diversity on
the odor descriptor predictability. [12]
Since Richard Axel and Linda Buck discovered the role of G-protein-coupled receptors in olfactory cells in the 1990s,
the biological mechanisms in the olfactory system have been successfully and thoroughly elucidated up to the present
day. The primary stage of olfaction is the chemical interactions between olfactory receptors (around 400 types) and
odorants (odor molecules, more than 20,000 types). [3, 4, 5] The shape theory of olfaction, meaning that odorant
molecules are recognized by their shapes, is no doubt. The vibrational theory of olfaction, which was a complementary
previous hypothesis to it, means that odorant molecular vibrations are responsible for olfaction rather than their shapes
alone. While the shape theory of olfaction is widely accepted concept of the olfaction, some researchers feel that both
the form and vibrational properties of molecules have a role in olfaction. [13, 14, 15]
In this study, we investigated whether the molecular vibration is effective and whether the current representation is
sufficient for the molecular feature, as shown in Fig.1. We compared predictability of three regressors; (i) CNN_vib
with vibrational parameters, (ii) logistic regression based on vibrational spectra, and (iii) logistic regression with
2
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
conventional FPs. For (i), we designed a CNN-based regressor with the quantum chemically computed parameters,
we call it ’CNN_vib’ in this paper. For (iii), we already reported a study. [12] As a natural extention of previous
study, we evaluated differences in predictabilities of each regression from (i) and (ii). We will conclude and discuss the
possibility of molecular vibration in odorants chemicals. Finally, we discuss focusing on the motional representation
using molecular vibration beyond substructures.
2
Materials and methods
2.1
Odorants and odor descriptors in the flavornet database
The data is collected from articles published since 1984 using GCO to detect odorants in natural products. It is available
in pyrfume database [5] as well as Flavornet online database. [16] It is a compilation of aroma compounds found in the
human odor space. The odorants are arranged by chromatographic and odor descriptors.
2.2
Vibrational frequency calculation
Atoms in molecules vibrate relative to each other, including translations (external), rotations (internal), and vibrations.
A diatomic molecule has only one motion, while polyatomic N atoms molecules have 3N −6 vibrations, known as
normal modes. Each has a characteristic mode and frequency. Polyatomic molecules typically vibrate in the following
modes: asymmetric, symmetric, wagging, twisting, scissoring, rocking and others.
All calculations, geometry optimizations and frequency calculations and the vibrational frequency calculation, were
obtained by the Gaussian 16 suite of programs at the B3LYP/6-31G(d) level of theory. [17] We employed the following
four parameters for all normal vibrational modes in this study : Harmonic frequencies (’F’, cm**-1), reduced masses
(’M’, AMU), force constants (’C’, mDyne/A) and IR intensities (’I’, KM/Mole).
2.3
Configuration of CNN_vib
Fig.2 shows the regression with CNN_vib that we carried out in the current study; the explanatory variable is a matrix
of four factors (F, M, C and I) for each vibrational mode. We call it ’Calc.Vib.Param.’ in this paper. CNN_vib is a
multi-output regressor which gives multiple probabilities at once corresponding to each odor descriptor. The output is a
matrix of two factors.
3
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Figure 2: CNN_vib design
4
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
The predictability was evaluated using the Area Under the ROC Curve (AUC), which resulted in a value close to one,
suggesting high classification accuracy. We utilized k-fold cross-validation to evaluate CNN_vib. Then, the AUCs for
each odor descriptor were calculated for training epoch numbers in CNN_vib.
We explored and optimized the design of CNN_vib as well as their hyperparameters. The details of CNN_vib design,
such as the selection or sorting of explanatory variable matrix, hyperparameter tuning, the overlearning behavior and
reproducibility are described in SI.
2.4
Configuration of the logistic regression based on vibrational spectra
We also performed logistic regression based on vibrational spectra, which gives the probability of an odor descriptor. The
spectra were obtained ’F’ and ’I’ from ’Calc.Vib.Param.’ after preprocessing. The preprocessing includes normalizing
and ’moving sum’, which is creating a series of sum of different selections of the full data set (see details in SI Fig.S003).
We call it ’Calc.Spectra’ in this paper. The predictability was evaluated using the AUCs for each odor descriptor.
AUCs were obtained by iterative k-fold cross-validation of the regression model for each odor descriptor, following the
previous report.[12]
2.5
Configuration of logistic regression with FPs
We performed logistic regression with four FPs, including MACCS keys, Extended-Conectivity Fingerprints (ECFPs),
Avalon fingerprints, and RD-kit fingerprints (RDKFP), in the previous report. [12] The predictability was evaluated
using the AUCs. AUCs were obtained by iterative k-fold cross-validation of the regression model, for each odor
descriptor, for each FP.
3
Result and Discussion
We investigated (i) CNN_vib with ’Calc.Vib.Param.’, (ii) the logistic regression with ’Calc.Spectra’, and (iii) the logistic
regression with FPs. The result concerning predictability in the three is summarized in Table 1. It is also visualized
in Fig.3. The horizontal axis is the structural diversity, for which we adopted a conventional approach of Tanimoto
similarity scores as an index in the current study. The vertical axis is the AUC for each odorants group.
In our previous study, we reported regression analysis and mapping based on the odorant chemical space, based on four
conventional FPs: MACCS keys, ECFPs, Avalon fingerprints, and RDKFP. We found that the difficulties relates to the
complexity included in the odor descriptor, and that the strong interplay is traced between structural diversity and the
predictability of the odor descriptor, by all four FPs; the essential arguments are qualitatively conserved among the four
fingerprints. [12].
As a natural extention of previous study, in this study we evaluated differences in predictabilities of each regression
from (i) and (ii). As a result, "wood" and "alkane", which had a high AUC in (iii), are also assigned with a high AUC in
CNN_vib. In the same way, "must", "medicine" and "sweet", which had a small AUC in (iii), are also assigned with a
small AUC in CNN_vib. So far, we confirm the previously reported argument in three regressors. By contrast, we find a
new feature obtained by CNN_vib. It is noteworthy that the new feature is possible to be obtained only by CNN_vib to
be discussed in 3.1.
3.1
Comparison of predictability in three regressors
At the simple CNN architecture, although we introduced a new algorithms different from logistic regression, the
predictability of "wood", "citrus", and "mint" group by CNN_vib is almost the same performance in comparison to that
by logistic regression. By contrast, the predictability of "roast", "sulfur", and "cabbage" group by CNN_vib is less than
that by logistic regression as shown in Table 1 (see marked with *). It suggests that CNN_vib may offer a qualitatively
different perspective. We will discuss the differences in predictability at the CNN_vib depending on odorant group.
Fig.4 shows the examples of molecular structure in these six group. The following trends are observed. In the "wood"
group (Fig.4(A)), hydrocarbon-sesquiterpenes are dominant having conventional functional groups such as ethers,
alcohols, and ketones. In the "citrus" group (Fig.4(B)), terpenes are dominant which have conventional functional groups
such as alcohols and aldehydes. The "mint" group (Fig.4(C)) contains menthol-like skeletons having conventional
functional groups such as ketones, and also contains salicylic acid. Although the parent skeletons are diverse with
various functional groups, even with the different functional groups, we infer that the similar vibrational modes can
occur and give the high AUC by CNN_vib. The "sulfur" group (Fig.4(D)) has distinctive S-containing functional groups.
In the "roast" group (Fig.4(E)), pyrazines or thiazoles are dominant. The logistic regression with ’Calc.Spectra’ for the
5
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Table 1: The AUCs by three regressor; (i) CNN_vib with calculated vibrational parameters (’Calc.Vib.Param.’), (ii) the
logistic regression based on vibrational spectra (’Calc.Spectra’), and (iii) the logistic regression with four conventional
FPs: MACCS keys, ECFPs, Avalon fingerprints, and RDKFP.
6
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Figure 3: Similarity average vs AUC for each odorants group
"sulfur" and "roast" group show slightly inferior predictability to conventional FPs (see column ’Calc.Spectra’ in Table
1).
This indicates that their structural feature may have a superior explanatory ability to the vibrational features for this case.
The "cabbage" group (Fig.4(F)) has distinctive thioisocyanates or S-containing groups. Because the parent skeletons
are diverse, with the characteristic functional groups on their skelton, the resulting ’Calc.Vib.Param.’ vary widely.
CNN_vib shows only low predictability, because the vibrational modes are dissimilar (see also in Fig.S008-S013).
Their structure are diverse in the "sulfur" group and the "cabbage" group. Notice that high predictability is obtained
through the regression with ’Calc.Spectra’ processed with moving sum. Such a softening of vibrational characteristics
is expected to produce good predictability.
3.2
Which descriptors demonstrated regression performance with molecular vibrations ?
In the group "wood", "citrus" and "mint", because of their similar molecular structures as shown in Fig.4. it is potentially
difficult that these molecules can be sniffed out in olfactory cognition. As a matter of fact, human will be able to
distinguish their odor characteristics through repeated training. Then, it is a difficult question whether we can sniff out
without training or not. It means that our culture has fine-resolutional cognition with using fine-resolutional evaluation
word, despite the strongly biased diversity for the three groups.
7
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Figure 4: Example molecules in (A)"wood", (B)"citrus", (C)"mint", (D)"sulfur", (E)"roast", and (F)"cabbage" group
8
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
On the other hands, the "roast", "sulfur" and "cabagge" groups have distinctive functional groups and can be smelled at
a low threshold. Thus, they can be sniffed out in olfactory cognition and clearly apply these evaluation word, leading
to strong structure-odor relationship. Because of their structural diversity except their distinctive functional groups,
CNN_vib has only modest predictability for these groups. We had better consider the possibility that both cognitional
and linguistic biased resolutional distributions must be taken into account in this regression study. This will be a very
challenging subject in future for chemistry, data science and also for cognitive study.
3.3
Insights into theory of olfaction beyond molecular substructure
We attenpted to investigate whether the molecular vibration is truly irrelevant to their odor descriptors with two
regressors; logistic regression and CNN_vib. The shape theory of olfaction, widely accepted concept of the olfaction,
means that odorant molecules are recognized by their shapes, much like a key fits into a lock of olfactory receptor.
Because these four FPs comprise of molecular substructures in a ligand molecule, they closely match the interaction
and recognition of the shape theory of olfaction. Our study demonstrated that ’Calc.Spectra’ with logistic regression
shows predictability, and that ’Calc.Vib.Param.’ with CNN_vib also shows predictability. We found that the molecular
vibration has explanatory ability on odorant characters, rather than their shapes alone (not irresponsible for olfaction).
3.4
Insights into chemical space representation
Some researches, inspired by the development of neural networks, in recent years have been published; image
classification and segmentation, applications of Natural language Processing for line notations [18] and graph neural
network (GNN) model. [19, 20, 21, 22, 23] Some regression studies indicated that the line notations model and graph
neural networks performed even worse than image recognition-based models, [18] although their machine learning
architectures are well-suited to chemical compounds. Even if their prediction performance has not yet meet expectations,
those models have much rooms to be studied more profoundly.
CNN_vib can read data in a variety of shapes and showed predictability: the explanatory variables, derived from
vibrational frequency calculations, change shape according to their vibrational modes that correspond to the molecular
complexity. It offers a fascinating alternative or complementary perspective, when the viewpoint shifts from static/spatial
to motional or dynamic/temporal for the chemical space representation. In contrast, since the explanatory variable
in conventional machine learning regression should have the same size of structure, reshaping of the parameters to
spectrum is unavoidable for logistic regression. When investigating chemical space exploration with complex input or
by nonlinear model, further deep learning techniques are to be studied. Our CNN_vib demonstrated some explanatory
ability of deep learning techniques on this problem. Such a techniques has the potential to improve the chemical space
representation beyond molecular substructure and forecast physical attributes.
4
Conclusions
In this study, we investigated (i) the CNN with molecular vibrational parameters (’Calc.Vib.Param.’), (ii) the logistic
regression based on vibrational spectra (’Calc.Spectra’), and (iii) the logistic regression with four conventional FP. Our
investigation demonstrates that both (i) and (ii) provide predictablity and that (i), (ii) and (iii) show nearly the same
tendencies in their predictability. It was also found that the predictabilities for some odor descriptors of (i) and (ii)
are comparable to those of (iii). If the parent skeletons are diverse with the characteristic functional groups on their
skeltons, their structural feature may have explanatory ability than the vibrational features, thus vibrational feature may
not show predictability, because the resulting vibrational mode parameters vary widely. Our research demonstrated that
the molecular vibration has explanatory ability on odorant characters. To improve the chemical space representation
capable of predicting physical properties, our findings provide insight into the representation of molecular features
beyond molecular substructure.
5
Data and Software Availability
We used the RDkit [7] for the FPs (MACCS, ECFPs, Avalon fingerprint and RDKFP) . The regression methodogy, multi
variate analysis and mapping is proprietary but not restricted to our program. The following Supporting Information is
available; SourceAndSummary.zip (source and summary tables of regression results), out_files.zip (Gaussian 16 output
files for the odorants) and input_files.zip (Gaussian 16 input files for the odorants).
9
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
Table 2: List of abbreviations
Abbreviations
Description
’F’
Harmonic Frequencies (cm**-1)
’M’
Reduced Masses (AMU)
’C’
Force Constants (mDyne/A)
’I’
IR Intensities (KM/Mole)
ECFPs
Extended-Conectivity Fingerprints
RDKFP
RD-kit Fingerprints
IR
Infrared (Spectra)
GCO
Gas Chromatography Olfactometry
FP(s)
Fingerprint(s)
AUC
Area Under the (ROC) Curve
DNN
Deep Neural Network
CNN
Convolutional Neural Network
GNN
Graph Neural Network
6
List of abbreviations
7
Acknowledgments
The authors thank the anonymous reviewers for their valuable suggestions. This work was supported by Shorai
Foundation for Science and Technology. The source code and related materials are available at our GitHub repository.
[24]
References
[1] Jean-Louis Reymond. The chemical space project. Accounts of Chemical Research, 48(3):722–730, 2015.
[2] Dmitry I Osolodkin, Eugene V Radchenko, Alexey A Orlov, Andrey E Voronkov, Vladimir A Palyulin, and
Nikolay S Zefirov. Progress in visual representations of chemical space. Expert opinion on drug discovery, 10(9):
959–973, 2015.
[3] Emily J Mayhew, Charles J Arayata, Richard C Gerkin, Brian K Lee, Jonathan M Magill, Lindsey L Snyder,
Kelsie A Little, Chung Wen Yu, and Joel D Mainland. Drawing the borders of olfactory space. bioRxiv, pages
2020–12, 2020.
[4] Emily J Mayhew, Charles J Arayata, Richard C Gerkin, Brian K Lee, Jonathan M Magill, Lindsey L Snyder,
Kelsie A Little, Chung Wen Yu, and Joel D Mainland. Transport features predict if a molecule is odorous.
Proceedings of the National Academy of Sciences, 119(15):e2116576119, 2022.
[5] Jason B Castro, Travis J Gould, Robert Pellegrino, Zhiwei Liang, Liyah A Coleman, Famesh Patel, Derek S
Wallace, Tanushri Bhatnagar, Joel D Mainland, and Richard C Gerkin. Pyrfume: A window to the world’s
olfactory data. bioRxiv, pages 2022–09, 2022.
[6] Alice Capecchi, Daniel Probst, and Jean-Louis Reymond. One molecular fingerprint to rule them all: drugs,
biomolecules, and the metabolome. Journal of cheminformatics, 12:1–15, 2020.
[7] RDKit: Open-source cheminformatics. https://www.rdkit.org.
[8] Davide Boldini, Davide Ballabio, Viviana Consonni, Roberto Todeschini, Francesca Grisoni, and Stephan A
Sieber. Effectiveness of molecular fingerprints for exploring the chemical space of natural products. Journal of
Cheminformatics, 16(1):35, 2024.
[9] Bruno J Neves, Rodolpho C Braga, Cleber C Melo-Filho, José Teófilo Moreira-Filho, Eugene N Muratov, and
Carolina Horta Andrade. Qsar-based virtual screening: advances and applications in drug discovery. Frontiers in
pharmacology, 9:1275, 2018.
[10] M Chastrette. Trends in structure-odor relationship. SAR and QSAR in Environmental Research, 6(3-4):215–254,
1997.
[11] Benjamin Sanchez-Lengeling, Jennifer N Wei, Brian K Lee, Richard C Gerkin, Alán Aspuru-Guzik, and Alexan-
der B Wiltschko. Machine learning for scent: Learning generalizable perceptual representations of small molecules.
arXiv preprint arXiv:1910.10685, 2019.
10
A simple DNN regression for the chemical composition in essential oil
A PREPRINT
[12] Yuki Harada, Shuichi Maeda, Junwei Shen, Taku Misonou, Hirokazu Hori, and Shinichiro Nakamura. Regression
study of odorant chemical space, molecular structural diversity, and natural language description. ACS omega,
2024.
[13] Luca Turin. A spectroscopic mechanism for primary olfactory reception. Chemical senses, 21(6):773–791, 1996.
[14] Eric Block, Seogjoo Jang, Hiroaki Matsunami, Sivakumar Sekharan, Bérénice Detheir, Mehmed Z Ertem, Sivaji
Gundala, Yi Pan, Shengju Li, Zhen Li, et al. Implausibility of the vibrational theory of olfaction. Proceedings of
the National Academy of Sciences, 112(21):E2766–E2774, 2015.
[15] Leslie B Vosshall. Laying a controversial smell theory to rest. Proceedings of the National Academy of Sciences,
112(21):6525–6526, 2015.
[16] T Acree and H Arn. Flavornet home page, 2003.
[17] MJ Frisch. Gaussian 09 revision d. 01; b) mj frisch, gw trucks, hb schlegel, ge scuseria, ma robb, jr cheeseman, g.
scalmani, v. barone, ga petersson, h. nakatsuji et al. Gaussian 16 Revision A, 3, 2016.
[18] Anju Sharma, Rajnish Kumar, Shabnam Ranjta, and Pritish Kumar Varadwaj. Smiles to smell: decoding the
structure–odor relationship of chemical compounds using the deep neural network approach. Journal of Chemical
Information and Modeling, 61(2):676–688, 2021.
[19] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. corr abs/1812.04202 (2018). arXiv
preprint arXiv:1812.04202, 2018.
[20] Brian K Lee, Emily E Mayhew, Benjamin Sanchez-Lengeling, Jennifer N Wei, Wesley W Qian, Kelsie Little,
Matthew Andres, Britney B Nguyen, Theresa Moloy, Jane K Parker, et al. A principal odor map unifies diverse
tasks in human olfactory perception. bioRxiv, 2022.
[21] Wesley W Qian, Jennifer N Wei, Benjamin Sanchez-Lengeling, Brian K Lee, Yunan Luo, Marnix Vlot, Koen
Dechering, Jian Peng, Richard C Gerkin, and Alexander B Wiltschko. Metabolic activity organizes olfactory
representations. Elife, 12:e82502, 2023.
[22] Bohayra Mortazavi. Recent advances in machine learning-assisted multiscale design of energy materials. Advanced
Energy Materials, page 2403876, 2024.
[23] Ryan Jacobs, Dane Morgan, Siamak Attarian, Jun Meng, Chen Shen, Zhenghao Wu, Clare Yijia Xie, Julia H
Yang, Nongnuch Artrith, Ben Blaiszik, et al. A practical guide to machine learning interatomic potentials–status
and future. Current Opinion in Solid State and Materials Science, 35:101214, 2025.
[24] yhua0917@github.com.
Motional
representation
by
cnnvib.
https://github.com/yhua0917/
MotionalRepresentationByCnnVib, 2025.
11
|
MOTIONAL REPRESENTATION; THE ABILITY TO PREDICT ODOR CHARACTERS USING MOLECULAR VIBRATIONS A PREPRINT Yuki Harada Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP Shuichi Maeda Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP Junwei Shen Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP Taku Misonou 4-4-37 Kofu, Yamanashi, 400-8510, JP Hirokazu Hori 4-4-37 Kofu, Yamanashi, 400-8510, JP Shinichiro Nakamura Research and Education Institute for Semiconductors and Informatics Laboratory for Data Sciences of Kumamoto University, Kurokami 2-39-1, Chuo-ku, Kumamoto 860-8555, JP September 23, 2025 ABSTRACT The prediction of odor characters is still impossible based on the odorant molecular structure. We designed a CNN-based regressor for computed parameters in molecular vibrations (CNN_vib), in order to investigate the ability to predict odor characters of molecular vibrations. In this study, we explored following three approaches for the predictability; (i) CNN with molecular vibrational parameters, (ii) logistic regression based on vibrational spectra, and (iii) logistic regression with molecular fingerprint(FP). Our investigation demonstrates that both (i) and (ii) provide predictablity, and also that the vibrations as an explanatory variable (i and ii) and logistic regression with fingerprints (iii) show nearly identical tendencies. The predictabilities of (i) and (ii), depending on odor descriptors, are comparable to those of (iii). Our research shows that odor is predictable by odorant molecular 17 Sep 2025 A simple DNN regression for the chemical composition in essential oil A PREPRINT Figure 1: Comparison of three regressors to investigate the explanatory ability of molecular vibrations vibration as well as their shapes alone. Our findings provide insight into the representation of molecular motional features beyond molecular structures. Keywords odor · odorant · odor-sensing space · chemical space · molecular vibrations 1 Introduction Odorants chemical space is the collection of volatilized chemical compounds that humans and many animals can perceive via their sense of smell. The total number of whole chemical space is estimated to be 1060 that is the collection of all potentially possible exisiting compounds, including those yet to be found. [1, 2] There could be almost 400,000 types of odor molecules in the odorant chemical space, in addition to the 20,000 known ones. [3, 4, 5] The prediction of perceived odor character / quality of odorants is an extremely challenging task based on their molecular structure. Because some odor character is not simply related directry to molecular structure, while others are related to molecular structure, such as aliphatic, aromatic, saturated, unsaturated, and polar. There are different ways to represent chemical space, and there is no established implementation. [6, 7] Molecular fingerprint(FP) design is currently being refined, and there may be further approaches to detect molecular structure. [8] However machine learning with FP is a fundamental and/or irreplaceable approach for structure-activity relationship((Q)SAR) [9, 10] and quantitative structure-odor relationship (QSOR). [11] In our previous study, we reported the relation between the odor descriptors and their structural diversity for odorants groups associated with each odor descriptor. Using the logistic regression with conventional FPs, we investigated the influence of structural diversity on the odor descriptor predictability. [12] Since Richard Axel and Linda Buck discovered the role of G-protein-coupled receptors in olfactory cells in the 1990s, the biological mechanisms in the olfactory system have been successfully and thoroughly elucidated up to the present day. The primary stage of olfaction is the chemical interactions between olfactory receptors (around 400 types) and odorants (odor molecules, more than 20,000 types). [3, 4, 5] The shape theory of olfaction, meaning that odorant molecules are recognized by their shapes, is no doubt. The vibrational theory of olfaction, which was a complementary previous hypothesis to it, means that odorant molecular vibrations are responsible for olfaction rather than their shapes alone. While the shape theory of olfaction is widely accepted concept of the olfaction, some researchers feel that both the form and vibrational properties of molecules have a role in olfaction. [13, 14, 15] In this study, we investigated whether the molecular vibration is effective and whether the current representation is sufficient for the molecular feature, as shown in Fig.1. We compared predictability of three regressors; (i) CNN_vib with vibrational parameters, (ii) logistic regression based on vibrational spectra, and (iii) logistic regression with 2 A simple DNN regression for the chemical composition in essential oil A PREPRINT conventional FPs. For (i), we designed a CNN-based regressor with the quantum chemically computed parameters, we call it 'CNN_vib' in this paper. For (iii), we already reported a study. [12] As a natural extention of previous study, we evaluated differences in predictabilities of each regression from (i) and (ii). We will conclude and discuss the possibility of molecular vibration in odorants chemicals. Finally, we discuss focusing on the motional representation using molecular vibration beyond substructures. 2 Materials and methods 2.1 Odorants and odor descriptors in the flavornet database The data is collected from articles published since 1984 using GCO to detect odorants in natural products. It is available in pyrfume database [5] as well as Flavornet online database. [16] It is a compilation of aroma compounds found in the human odor space. The odorants are arranged by chromatographic and odor descriptors. 2.2 Vibrational frequency calculation Atoms in molecules vibrate relative to each other, including translations (external), rotations (internal), and vibrations. A diatomic molecule has only one motion, while polyatomic N atoms molecules have 3N -6 vibrations, known as normal modes. Each has a characteristic mode and frequency. Polyatomic molecules typically vibrate in the following modes: asymmetric, symmetric, wagging, twisting, scissoring, rocking and others. All calculations, geometry optimizations and frequency calculations and the vibrational frequency calculation, were obtained by the Gaussian 16 suite of programs at the B3LYP/6-31G(d) level of theory. [17] We employed the following four parameters for all normal vibrational modes in this study : Harmonic frequencies ('F', cm**-1), reduced masses ('M', AMU), force constants ('C', mDyne/A) and IR intensities ('I', KM/Mole). 2.3 Configuration of CNN_vib Fig.2 shows the regression with CNN_vib that we carried out in the current study; the explanatory variable is a matrix of four factors (F, M, C and I) for each vibrational mode. We call it 'Calc.Vib.Param.' in this paper. CNN_vib is a multi-output regressor which gives multiple probabilities at once corresponding to each odor descriptor. The output is a matrix of two factors. 3 A simple DNN regression for the chemical composition in essential oil A PREPRINT Figure 2: CNN_vib design 4 A simple DNN regression for the chemical composition in essential oil A PREPRINT The predictability was evaluated using the Area Under the ROC Curve (AUC), which resulted in a value close to one, suggesting high classification accuracy. We utilized k-fold cross-validation to evaluate CNN_vib. Then, the AUCs for each odor descriptor were calculated for training epoch numbers in CNN_vib. We explored and optimized the design of CNN_vib as well as their hyperparameters. The details of CNN_vib design, such as the selection or sorting of explanatory variable matrix, hyperparameter tuning, the overlearning behavior and reproducibility are described in SI. 2.4 Configuration of the logistic regression based on vibrational spectra We also performed logistic regression based on vibrational spectra, which gives the probability of an odor descriptor. The spectra were obtained 'F' and 'I' from 'Calc.Vib.Param.' after preprocessing. The preprocessing includes normalizing and 'moving sum', which is creating a series of sum of different selections of the full data set (see details in SI Fig.S003). We call it 'Calc.Spectra' in this paper. The predictability was evaluated using the AUCs for each odor descriptor. AUCs were obtained by iterative k-fold cross-validation of the regression model for each odor descriptor, following the previous report.[12] 2.5 Configuration of logistic regression with FPs We performed logistic regression with four FPs, including MACCS keys, Extended-Conectivity Fingerprints (ECFPs), Avalon fingerprints, and RD-kit fingerprints (RDKFP), in the previous report. [12] The predictability was evaluated using the AUCs. AUCs were obtained by iterative k-fold cross-validation of the regression model, for each odor descriptor, for each FP. 3 Result and Discussion We investigated (i) CNN_vib with 'Calc.Vib.Param.', (ii) the logistic regression with 'Calc.Spectra', and (iii) the logistic regression with FPs. The result concerning predictability in the three is summarized in Table 1. It is also visualized in Fig.3. The horizontal axis is the structural diversity, for which we adopted a conventional approach of Tanimoto similarity scores as an index in the current study. The vertical axis is the AUC for each odorants group. In our previous study, we reported regression analysis and mapping based on the odorant chemical space, based on four conventional FPs: MACCS keys, ECFPs, Avalon fingerprints, and RDKFP. We found that the difficulties relates to the complexity included in the odor descriptor, and that the strong interplay is traced between structural diversity and the predictability of the odor descriptor, by all four FPs; the essential arguments are qualitatively conserved among the four fingerprints. [12]. As a natural extention of previous study, in this study we evaluated differences in predictabilities of each regression from (i) and (ii). As a result, "wood" and "alkane", which had a high AUC in (iii), are also assigned with a high AUC in CNN_vib. In the same way, "must", "medicine" and "sweet", which had a small AUC in (iii), are also assigned with a small AUC in CNN_vib. So far, we confirm the previously reported argument in three regressors. By contrast, we find a new feature obtained by CNN_vib. It is noteworthy that the new feature is possible to be obtained only by CNN_vib to be discussed in 3.1. 3.1 Comparison of predictability in three regressors At the simple CNN architecture, although we introduced a new algorithms different from logistic regression, the predictability of "wood", "citrus", and "mint" group by CNN_vib is almost the same performance in comparison to that by logistic regression. By contrast, the predictability of "roast", "sulfur", and "cabbage" group by CNN_vib is less than that by logistic regression as shown in Table 1 (see marked with *). It suggests that CNN_vib may offer a qualitatively different perspective. We will discuss the differences in predictability at the CNN_vib depending on odorant group. Fig.4 shows the examples of molecular structure in these six group. The following trends are observed. In the "wood" group (Fig.4(A)), hydrocarbon-sesquiterpenes are dominant having conventional functional groups such as ethers, alcohols, and ketones. In the "citrus" group (Fig.4(B)), terpenes are dominant which have conventional functional groups such as alcohols and aldehydes. The "mint" group (Fig.4(C)) contains menthol-like skeletons having conventional functional groups such as ketones, and also contains salicylic acid. Although the parent skeletons are diverse with various functional groups, even with the different functional groups, we infer that the similar vibrational modes can occur and give the high AUC by CNN_vib. The "sulfur" group (Fig.4(D)) has distinctive S-containing functional groups. In the "roast" group (Fig.4(E)), pyrazines or thiazoles are dominant. The logistic regression with 'Calc.Spectra' for the 5 A simple DNN regression for the chemical composition in essential oil A PREPRINT Table 1: The AUCs by three regressor; (i) CNN_vib with calculated vibrational parameters ('Calc.Vib.Param.'), (ii) the logistic regression based on vibrational spectra ('Calc.Spectra'), and (iii) the logistic regression with four conventional FPs: MACCS keys, ECFPs, Avalon fingerprints, and RDKFP. 6 A simple DNN regression for the chemical composition in essential oil A PREPRINT Figure 3: Similarity average vs AUC for each odorants group "sulfur" and "roast" group show slightly inferior predictability to conventional FPs (see column 'Calc.Spectra' in Table 1). This indicates that their structural feature may have a superior explanatory ability to the vibrational features for this case. The "cabbage" group (Fig.4(F)) has distinctive thioisocyanates or S-containing groups. Because the parent skeletons are diverse, with the characteristic functional groups on their skelton, the resulting 'Calc.Vib.Param.' vary widely. CNN_vib shows only low predictability, because the vibrational modes are dissimilar (see also in Fig.S008-S013). Their structure are diverse in the "sulfur" group and the "cabbage" group. Notice that high predictability is obtained through the regression with 'Calc.Spectra' processed with moving sum. Such a softening of vibrational characteristics is expected to produce good predictability. 3.2 Which descriptors demonstrated regression performance with molecular vibrations ? In the group "wood", "citrus" and "mint", because of their similar molecular structures as shown in Fig.4. it is potentially difficult that these molecules can be sniffed out in olfactory cognition. As a matter of fact, human will be able to distinguish their odor characteristics through repeated training. Then, it is a difficult question whether we can sniff out without training or not. It means that our culture has fine-resolutional cognition with using fine-resolutional evaluation word, despite the strongly biased diversity for the three groups. 7 A simple DNN regression for the chemical composition in essential oil A PREPRINT Figure 4: Example molecules in (A)"wood", (B)"citrus", (C)"mint", (D)"sulfur", (E)"roast", and (F)"cabbage" group 8 A simple DNN regression for the chemical composition in essential oil A PREPRINT On the other hands, the "roast", "sulfur" and "cabagge" groups have distinctive functional groups and can be smelled at a low threshold. Thus, they can be sniffed out in olfactory cognition and clearly apply these evaluation word, leading to strong structure-odor relationship. Because of their structural diversity except their distinctive functional groups, CNN_vib has only modest predictability for these groups. We had better consider the possibility that both cognitional and linguistic biased resolutional distributions must be taken into account in this regression study. This will be a very challenging subject in future for chemistry, data science and also for cognitive study. 3.3 Insights into theory of olfaction beyond molecular substructure We attenpted to investigate whether the molecular vibration is truly irrelevant to their odor descriptors with two regressors; logistic regression and CNN_vib. The shape theory of olfaction, widely accepted concept of the olfaction, means that odorant molecules are recognized by their shapes, much like a key fits into a lock of olfactory receptor. Because these four FPs comprise of molecular substructures in a ligand molecule, they closely match the interaction and recognition of the shape theory of olfaction. Our study demonstrated that 'Calc.Spectra' with logistic regression shows predictability, and that 'Calc.Vib.Param.' with CNN_vib also shows predictability. We found that the molecular vibration has explanatory ability on odorant characters, rather than their shapes alone (not irresponsible for olfaction). 3.4 Insights into chemical space representation Some researches, inspired by the development of neural networks, in recent years have been published; image classification and segmentation, applications of Natural language Processing for line notations [18] and graph neural network (GNN) model. [19, 20, 21, 22, 23] Some regression studies indicated that the line notations model and graph neural networks performed even worse than image recognition-based models, [18] although their machine learning architectures are well-suited to chemical compounds. Even if their prediction performance has not yet meet expectations, those models have much rooms to be studied more profoundly. CNN_vib can read data in a variety of shapes and showed predictability: the explanatory variables, derived from vibrational frequency calculations, change shape according to their vibrational modes that correspond to the molecular complexity. It offers a fascinating alternative or complementary perspective, when the viewpoint shifts from static/spatial to motional or dynamic/temporal for the chemical space representation. In contrast, since the explanatory variable in conventional machine learning regression should have the same size of structure, reshaping of the parameters to spectrum is unavoidable for logistic regression. When investigating chemical space exploration with complex input or by nonlinear model, further deep learning techniques are to be studied. Our CNN_vib demonstrated some explanatory ability of deep learning techniques on this problem. Such a techniques has the potential to improve the chemical space representation beyond molecular substructure and forecast physical attributes. 4 Conclusions In this study, we investigated (i) the CNN with molecular vibrational parameters ('Calc.Vib.Param.'), (ii) the logistic regression based on vibrational spectra ('Calc.Spectra'), and (iii) the logistic regression with four conventional FP. Our investigation demonstrates that both (i) and (ii) provide predictablity and that (i), (ii) and (iii) show nearly the same tendencies in their predictability. It was also found that the predictabilities for some odor descriptors of (i) and (ii) are comparable to those of (iii). If the parent skeletons are diverse with the characteristic functional groups on their skeltons, their structural feature may have explanatory ability than the vibrational features, thus vibrational feature may not show predictability, because the resulting vibrational mode parameters vary widely. Our research demonstrated that the molecular vibration has explanatory ability on odorant characters. To improve the chemical space representation capable of predicting physical properties, our findings provide insight into the representation of molecular features beyond molecular substructure. 5 Data and Software Availability We used the RDkit [7] for the FPs (MACCS, ECFPs, Avalon fingerprint and RDKFP) . The regression methodogy, multi variate analysis and mapping is proprietary but not restricted to our program. The following Supporting Information is available; SourceAndSummary.zip (source and summary tables of regression results), out_files.zip (Gaussian 16 output files for the odorants) and input_files.zip (Gaussian 16 input files for the odorants). 9 A simple DNN regression for the chemical composition in essential oil A PREPRINT Table 2: List of abbreviations Abbreviations Description 'F' Harmonic Frequencies (cm**-1) 'M' Reduced Masses (AMU) 'C' Force Constants (mDyne/A) 'I' IR Intensities (KM/Mole) ECFPs Extended-Conectivity Fingerprints RDKFP RD-kit Fingerprints IR Infrared (Spectra) GCO Gas Chromatography Olfactometry FP(s) Fingerprint(s) AUC Area Under the (ROC) Curve DNN Deep Neural Network CNN Convolutional Neural Network GNN Graph Neural Network 6 List of abbreviations 7 Acknowledgments The authors thank the anonymous reviewers for their valuable suggestions. This work was supported by Shorai Foundation for Science and Technology. The source code and related materials are available at our GitHub repository. [24] References [1] Jean-Louis Reymond. The chemical space project. Accounts of Chemical Research, 48(3):722-730, 2015. [2] Dmitry I Osolodkin, Eugene V Radchenko, Alexey A Orlov, Andrey E Voronkov, Vladimir A Palyulin, and Nikolay S Zefirov. Progress in visual representations of chemical space. Expert opinion on drug discovery, 10(9): 959-973, 2015. [3] Emily J Mayhew, Charles J Arayata, Richard C Gerkin, Brian K Lee, Jonathan M Magill, Lindsey L Snyder, Kelsie A Little, Chung Wen Yu, and Joel D Mainland. Drawing the borders of olfactory space. bioRxiv, pages 2020-12, 2020. [4] Emily J Mayhew, Charles J Arayata, Richard C Gerkin, Brian K Lee, Jonathan M Magill, Lindsey L Snyder, Kelsie A Little, Chung Wen Yu, and Joel D Mainland. Transport features predict if a molecule is odorous. Proceedings of the National Academy of Sciences, 119(15):e2116576119, 2022. [5] Jason B Castro, Travis J Gould, Robert Pellegrino, Zhiwei Liang, Liyah A Coleman, Famesh Patel, Derek S Wallace, Tanushri Bhatnagar, Joel D Mainland, and Richard C Gerkin. Pyrfume: A window to the world's olfactory data. bioRxiv, pages 2022-09, 2022. [6] Alice Capecchi, Daniel Probst, and Jean-Louis Reymond. One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome. Journal of cheminformatics, 12:1-15, 2020. [7] RDKit: Open-source cheminformatics. https://www.rdkit.org. [8] Davide Boldini, Davide Ballabio, Viviana Consonni, Roberto Todeschini, Francesca Grisoni, and Stephan A Sieber. Effectiveness of molecular fingerprints for exploring the chemical space of natural products. Journal of Cheminformatics, 16(1):35, 2024. [9] Bruno J Neves, Rodolpho C Braga, Cleber C Melo-Filho, José Teófilo Moreira-Filho, Eugene N Muratov, and Carolina Horta Andrade. Qsar-based virtual screening: advances and applications in drug discovery. Frontiers in pharmacology, 9:1275, 2018. [10] M Chastrette. Trends in structure-odor relationship. SAR and QSAR in Environmental Research, 6(3-4):215-254, 1997. [11] Benjamin Sanchez-Lengeling, Jennifer N Wei, Brian K Lee, Richard C Gerkin, Alán Aspuru-Guzik, and Alexander B Wiltschko. Machine learning for scent: Learning generalizable perceptual representations of small molecules. arXiv preprint , 2019. 10 A simple DNN regression for the chemical composition in essential oil A PREPRINT [12] Yuki Harada, Shuichi Maeda, Junwei Shen, Taku Misonou, Hirokazu Hori, and Shinichiro Nakamura. Regression study of odorant chemical space, molecular structural diversity, and natural language description. ACS omega, 2024. [13] Luca Turin. A spectroscopic mechanism for primary olfactory reception. Chemical senses, 21(6):773-791, 1996. [14] Eric Block, Seogjoo Jang, Hiroaki Matsunami, Sivakumar Sekharan, Bérénice Detheir, Mehmed Z Ertem, Sivaji Gundala, Yi Pan, Shengju Li, Zhen Li, et al. Implausibility of the vibrational theory of olfaction. Proceedings of the National Academy of Sciences, 112(21):E2766-E2774, 2015. [15] Leslie B Vosshall. Laying a controversial smell theory to rest. Proceedings of the National Academy of Sciences, 112(21):6525-6526, 2015. [16] T Acree and H Arn. Flavornet home page, 2003. [17] MJ Frisch. Gaussian 09 revision d. 01; b) mj frisch, gw trucks, hb schlegel, ge scuseria, ma robb, jr cheeseman, g. scalmani, v. barone, ga petersson, h. nakatsuji et al. Gaussian 16 Revision A, 3, 2016. [18] Anju Sharma, Rajnish Kumar, Shabnam Ranjta, and Pritish Kumar Varadwaj. Smiles to smell: decoding the structure-odor relationship of chemical compounds using the deep neural network approach. Journal of Chemical Information and Modeling, 61(2):676-688, 2021. [19] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. corr abs/1812.04202 (2018). arXiv preprint , 2018. [20] Brian K Lee, Emily E Mayhew, Benjamin Sanchez-Lengeling, Jennifer N Wei, Wesley W Qian, Kelsie Little, Matthew Andres, Britney B Nguyen, Theresa Moloy, Jane K Parker, et al. A principal odor map unifies diverse tasks in human olfactory perception. bioRxiv, 2022. [21] Wesley W Qian, Jennifer N Wei, Benjamin Sanchez-Lengeling, Brian K Lee, Yunan Luo, Marnix Vlot, Koen Dechering, Jian Peng, Richard C Gerkin, and Alexander B Wiltschko. Metabolic activity organizes olfactory representations. Elife, 12:e82502, 2023. [22] Bohayra Mortazavi. Recent advances in machine learning-assisted multiscale design of energy materials. Advanced Energy Materials, page 2403876, 2024. [23] Ryan Jacobs, Dane Morgan, Siamak Attarian, Jun Meng, Chen Shen, Zhenghao Wu, Clare Yijia Xie, Julia H Yang, Nongnuch Artrith, Ben Blaiszik, et al. A practical guide to machine learning interatomic potentials-status and future. Current Opinion in Solid State and Materials Science, 35:101214, 2025. [24] . Motional representation by cnnvib. https://github.com/yhua0917/ MotionalRepresentationByCnnVib, 2025. 11
|
2509.16246
|
Chapter 1
VerilogMonkey: Exploring Parallel Scaling for
Automated Verilog Code Generation with LLMs
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
Abstract We present VerilogMonkey, an empirical study of parallel scaling for the
underexplored task of automated Verilog generation. Parallel scaling improves LLM
performance by sampling many outputs in parallel. Across multiple benchmarks and
mainstream LLMs, we find that scaling to hundreds of samples is cost-effective in both
time and money and, even without any additional enhancements such as post-training or
agentic methods, surpasses prior results on LLM-based Verilog generation. We further
dissect why parallel scaling delivers these gains and show how output randomness in
LLMs affects its effectiveness.
Juxin Niu[0009−0004−8223−4245]
Department of Computer Science, City University of Hong Kong, Hong Kong SAR
e-mail: juxin.niu@my.cityu.edu.hk
Yuxin Du[0009−0001−7017−5805]
Department of Computer Science, City University of Hong Kong, Hong Kong SAR
e-mail: yuxindu8-c@my.cityu.edu.hk
Dan Niu[0000−0002−0715−7946]
School of Automation, Southeast University, China
e-mail: 101011786@seu.edu.cn
Xi Wang[0000−0002−1998−6733]
National Center of Technology Innovation for EDA, China
School of Integrated Circuits, Southeast University, China
e-mail: xi.wang@seu.edu.cn
Zhe Jiang[0000−0002−8509−3167]
National Center of Technology Innovation for EDA, China
School of Integrated Circuits, Southeast University, China
e-mail: zhejiang.arch@gmail.com
Nan Guan[0000−0003−3775−911𝑋]
Department of Computer Science, City University of Hong Kong, Hong Kong SAR
e-mail: nanguan@cityu.edu.hk
1
arXiv:2509.16246v1 [cs.PL] 17 Sep 2025
2
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
1.1 Introduction
Fig. 1.1: Even randomness can achieve great-
ness given infinite attempts.1
Writing Hardware Description Lan-
guage (HDL) code such as Verilog is
demanding, requires substantial exper-
tise, and is prone to bugs and errors.
Therefore, there is growing interest in
LLM-assisted hardware code genera-
tion [27, 31, 37]. However, the capabil-
ity of LLMs on automated Verilog code
generation is still unsatisfactory, com-
paring with other tasks such as software
code generation.
Early progress in improving the ca-
pability of LLM was driven by training-
time scaling, i.e., improving single-shot inference accuracy by training larger models and
ingesting ever larger datasets. However, the pace of training-time scaling has gradually
slowed due to its resource-intensive nature and the sharply rising costs of human
labor [19]. To address this, research attention has shifted to test-time scaling [43], which
boosts performance by allocating more computational resources during inference. A
popular approach in this direction is parallel scaling [21, 35], which generates mul-
tiple outputs in parallel and aggregates them into a final answer. Parallel scaling has
significantly enhanced LLM’s performance in tasks such as math [6, 15] and software
coding [12, 23]. For example, AlphaCode [24] extended parallel sampling to the scale
of millions, achieving unprecedented results in competition-level coding tasks.
However, in automated Verilog code generation, systematic studies on parallel scaling
are still missing. Existing work has shown that small-scale repetition can outperform
single-pass inference [27, 31]. However, the scalability and effectiveness of expanding
the parallel scale to hundreds of samples remains unknown; and the underlying reasons
for such improvements have yet to be explored. To this end, in this paper, we present the
first work on exploring the parallel scaling in automated Verilog code generation with
LLMs. Specifically, the contributions are as follows:
• We show that parallel scaling is cost-effective in both time and monetary terms, as
increased concurrency substantially reduces wall-clock latency and sampling cost.
• We demonstrate that expanding the sampling size leads to consistent and significant
performance improvements, surpassing even advanced reasoning-oriented LLMs
such as OpenAI o1 and Gemini-2.5-pro.
• We investigate the reasons underlying these improvements and show that parallel
scaling enhances both diversity and reliability of outputs.
• We analyze the influence of output randomness on parallel scaling, and find that
regulating randomness preserves accuracy while further supporting convergence in
large-scale sampling.
1 Image generated by ChatGPT.
1 VerilogMonkey
3
As shown in Fig. 1.1, the name “VerilogMonkey” is inspired by the Infinite Monkey
Theorem [33], which states that given infinite time and attempts, a monkey randomly
typing on a keyboard would eventually produce the complete works of Shakespeare.
Similarly, LLMs can be thought of as “thinking monkeys,” and with the help of parallel
scaling, their potential can be further unlocked, leading to promising improvements in
task performance.
1.2 Related Work
Test-time Scaling
Test-time scaling enhances task performance by allocating additional computational
resources during inference. Depending on how the model’s computation is expanded,
test-time scaling can be divided into four categories [43]: parallel, sequential, hybrid,
and internal.
Parallel scaling enhances performance by generating multiple outputs from LLMs
concurrently and then aggregating them into a final answer. Its effectiveness depends
on two metrics [6]: coverage, the probability of producing at least one correct output,
and precision, the ability to identify that correct response. Empirical evidence from [6]
reveals a log-linear relationship between coverage and computational resources. In tasks
like competition-level programming or travel planning, automatic correctness checks are
often impractical. Consequently, research has shifted toward more advanced aggregation
techniques, such as majority voting [12, 24] and genetic algorithms [22].
Sequential scaling generates outputs iteratively, leveraging intermediate results at
each stage. Techniques such as Chain-of-Thought [39] decompose complex problems
into a sequence of reasoning steps. Subsequent research [9, 29] has shown that LLMs
can self-correct through iterative revision. Approaches like ReAct [42] integrate external
tools to provide feedback that guides the model’s next action.
Hybrid scaling leverages the complementary strengths of parallel and sequential
approaches: parallel scaling broadens an LLM’s exploration of possibilities, while
sequential scaling enables deeper investigation along promising paths. Notably, methods
like Tree-of-Thoughts [41], Graph-of-Thought [3], and Forest-of-Thought [4] have
successively introduced ever more complex hybrid scaling patterns.
Internal scaling empowers LLMs to allocate extra computational resources at
inference time through targeted training or fine-tuning. In this paradigm, the model’s
internal parameters instead of external agentic mechanisms govern scaling behavior. The
impressive success of reasoning models like OpenAI o1 [20] and Deepseek-R1 [16]
highlights the efficacy of this approach.
Automated Verilog Code Generation
Existing work [5, 13] on automated Verilog code generation has predominantly fo-
cused on sequential scaling. AutoChip [37] employs reflection strategies to enhance
4
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
Verilog code generation. Additionally, it employs LLM ensembling with more pow-
erful models when lightweight models underperform, further boosting performance.
VerilogCoder [17] introduces an autonomous agent with graph-planning capabilities
and integrates an AST-based waveform-tracing tool to provide feedback during code
debugging. HDLCORE [32] embeds HDL expertise into Chain-of-Thought prompts,
guiding LLMs through step-by-step reasoning and leveraging testbench feedback for
self-verification. Prior studies have shown that performance improves as the number of
repetitions increases [27, 31]. However, these efforts have been confined to small-scale
repetitions, and whether scaling the sampling size to much larger magnitudes (e.g.,
hundreds of attempts) yields further gains remains an open question.
1.3 Deployment
PASS
FAIL
LLM
Spec.
Verilog
Simulator
Fig. 1.2: Workflow of a single sampling iteration.
This section explains our
deployment of VerilogMon-
keyto support parallel scal-
ing.
VerilogMonkey
fo-
cuses on generating self-
contained [31] Verilog code,
that is, implementations that
do not require external mod-
ule instantiations, from nat-
ural language specifications. Given a specification and its associated test cases, the
LLM’s task is to produce functionally correct code that passes every test. Fig. 1.2
shows the workflow of a single sampling iteration: the specification is fed into the
LLM, which generates the Verilog code; this code is then verified by simulation for
functional correctness. VerilogMonkey repeats these iterations until either a generated
implementation passes simulation or a predefined sampling limit is reached.
LLM Generation
Simulation
Global
Configuration
{
"llm": {
"model": "gpt-4o",
”temperature": "0.8",
...
},
"benchmark": {
"name": "rtllm-v2",
...
},
...
}
Task Queue
Controller
LLM Service
Benchmark
Simulator
Fig. 1.3: System architecture of VerilogMonkey.
1 VerilogMonkey
5
Fig. 1.3 shows the overall architecture of VerilogMonkey. The system is fully
automated, and users can tailor its behavior via a global configuration file. To maximize
extensibility, VerilogMonkey adopts a modular, plug-in architecture. For LLM integration,
we adopt the LLM-as-a-Service (LLMaaS) paradigm, relying on providers such as
OpenAI and Anthropic to perform inference. VerilogMonkey interacts with these services
by sending network requests. This paradigm abstracts LLM inference as an external
service, since the internal details lie beyond our scope. Tools like Ollama [30] also allow
locally hosted models to be exposed as LLMaaS endpoints. We use LangChain [7] as
middleware to manage these connections. On the benchmarking side, VerilogMonkey
supports existing test suites [28, 31, 38] through a unified interface that eases future
expansion. For simulation, it is compatible with major EDA tools, including VCS [36],
ModelSim [34], and Icarus Verilog [40].
All samplings run in parallel, allowing VerilogMonkey to fully leverage concurrency
and maximize throughput. The system consists of two stages: LLM generation and
simulation. The LLM generation stage is I/O-bound: after dispatching inference requests,
VerilogMonkey must wait for responses. To maximize utilization during this idle time, it
uses batch invocation together with asynchronous processing. In contrast, the simulation
stage is computation-intensive, so VerilogMonkey adopts a multi-process strategy to
fully leverage all available CPU cores. These two stages are connected by a task queue,
which enables pipelining and synchronization.
1.4 Empirical Study of Parallel Scaling
1.4.1 Research Questions
In this section, we first evaluate the effectiveness of parallel scaling, and analyze
the reasons behind the improvements. We then identify output randomness as a key
influencing factor and explore how its regulation impacts effectiveness. Accordingly,
Sections 1.4.3 to 1.4.6 address the following research questions (RQs), respectively.
RQ1. Is parallel scaling cost-effective in terms of time and money?
RQ2. How effective is parallel scaling in improving performance?
RQ3. What are reasons for improvements?
RQ4. How does LLM’s randomness influence effectiveness?
Highlights
Presented below are brief highlights of the research questions, with further details
provided in the subsequent sections:
(RQ1.) Parallel scaling is cost-effective in both time and money. Greater concurrency
sharply reduces per-job runtime, so total wall-clock time grows far more slowly than
the sample size. On cost, hundreds of samples per prompt typically run $0.04–$10,
and provider features like batching and prompt caching can often cut that roughly in
6
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
half. (RQ2.) By scaling up the sampling size to 512, the LLM delivers substantial
performance gains, achieving success rates on all benchmarks that outperform existing
methods—including reasoning models such as OpenAI o1 and Gemini-2.5-pro. (RQ3.)
For simple problems, LLMs produce more deterministic and confident outputs. However,
errors occur due to hallucinations. Conversely, on complex or math-related tasks,
their performance drops and output uncertainty grows, making small-scale repetition
unreliable. In both scenarios, parallel scaling proves beneficial by enhancing the
diversity and coverage of generated outputs. (RQ4.) Raising the model’s randomness
(via temperature adjustments) does not compromise single-pass accuracy and can further
improve the LLM’s capabilities and convergence speed in large-scale sampling.
1.4.2 Evaluation Setup
Configuration
• Benchmark. We evaluate parallel scaling using three of the most widely used
benchmarks: VerilogEval [31], RTLLM (v2) [28] and GenBen [38]. Together, they
comprise approximately 200 Verilog code generation problems, each providing a
specification, test cases, and a human-written reference implementation.
• Model. Our evaluation includes six mainstream LLMs: Google’s Gemini-2.0-flash
and Gemini-1.5-flash, OpenAI’s GPT-4o (20240806) and GPT-4o-mini (20240718),
and Anthropic’s Claude-3.7-Sonnet (20250219) and Claude-3.5-Haiku (20241022).
• Baseline. We compare against two categories of baselines: existing works (see
Table 1.2) and reasoning models. Reasoning models include OpenAI’s o3-mini and
o1, Anthropic’s Claude-3.7-Sonnet-Think, and Google’s Gemini-2.5-pro.
• Implementation Details. We set the maximum sampling size to 512. For each problem,
we generate samples via zero-shot prompts containing only the specification and
formatting instructions to facilitate code extraction. Unless otherwise noted, all
LLM inferences use default configurations (e.g., temperature, top–𝑝).
Metrics
Our goal is to evaluate LLMs’ Verilog code-generation capability in terms of success
rate, i.e., the proportion of problems they can solve. We focus on two widely used
metrics: Hit@𝑘[24] and Pass@𝑘[8]. Hit@𝑘is the empirical fraction of problems for
which at least one of 𝑘generated candidates is correct:
Hit@𝑘= 1
𝑞
𝑞
∑︁
𝑖=1
hit𝑖= 1
𝑞
𝑞
∑︁
𝑖=1
max
1≤𝑗≤𝑘Pass(𝑟𝑖, 𝑗),
(1.1)
where Pass(𝑟𝑖, 𝑗) = 1 if the 𝑗-th sample for problem 𝑄𝑖is correct and 0 otherwise.
Pass@𝑘estimates, for each problem, the probability of solving it within 𝑘attempts
under a Bernoulli model; when 𝑁samples are collected per problem and 𝑐𝑖of them are
1 VerilogMonkey
7
correct, a standard estimator is
Pass@𝑘= E𝑖
"
1 −
𝑁−𝑐𝑖
𝑘
𝑁
𝑘
#
.
(1.2)
The key observation is that although Pass@𝑘generally provides more robust estimates
than Hit@𝑘, directly computing Pass@𝑘becomes computationally impractical as 𝑘
grows under high concurrency. Consequently, while prior work commonly reports
Pass@𝑘, in our parallel-scaling setting we report Hit@𝑘instead, as it remains tractable
at large 𝑘.
1.4.3 Is Parallel Scaling Cost-Effective in Terms of Time and Money?
1
5
10
15
20
30
# Threads
0.00
0.05
0.10
0.15
0.20
Time per job (s)
Concurrent
Sequential
(a)
100
500
1000
Total simulation job
0
1
2
3
4
5
6
7
8
Time (s)
1 thread
5 threads
10 threads
15 threads
(b)
Fig. 1.4: Simulation performance. (a) Per-thread time to complete a job vs. concurrency
(blue), compared to sequential execution (red). (b) Total time for varying numbers of
jobs across thread counts.
Time. Fig. 1.4 shows the simulation performance under different concurrency. As
concurrency increases, the average time per job drops dramatically. Therefore, the
total time grows far more slowly than the sampling size. Meanwhile, LLM inference is
becoming faster: for example, OpenAI’s GPT-4o delivers a sixfold speedup over GPT-4
within a single year [2], and techniques like quantization and distillation have made local
LLM deployments far more lightweight [45].
Money. Table 1.1 shows that sampling each prompt hundreds of times incurs only
$0.04 to $10 in API fees, depending on the model and problem complexity. Techniques
like non–real-time batch APIs and prompt-caching offered by providers can further cut
these expenses by at least 50%. Even at large scale, LLM-based sampling remains far
more economical than human labor: a U.S. hardware engineer earns roughly $11,000
per month [10] and may spend hours or even days to resolve a single problem.
8
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
Table 1.1: Average per-question LLM API cost (USD) for 500 samples across benchmarks.
VerilogEval [31] RTLLM [28] GenBen [38]
Gemini-1.5-flash
0.04
0.07
0.15
Gemini-2.0-flash
0.06
0.12
0.34
GPT-4o-mini
0.16
0.23
0.57
GPT-4o
1.87
3.12
7.80
Claude-3.5-Haiku
0.61
1.01
2.50
Claude-3.5-Sonnet
2.36
4.38
9.36
1.4.4 How Effective is Parallel Scaling?
2
0
2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
# Sample
45
55
65
75
85
95
Success Rate (%)
(a) VerilogEval
2
0
2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
# Sample
55
65
75
85
Success Rate (%)
Gemini-1.5-flash
Gemini-2.0-flash
GPT-4o-mini
GPT-4o
Claude-3.7-Sonnet
Claude-3.5-Haiku
(b) RTLLM
2
0
2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
# Sample
40
45
50
55
Success Rate (%)
(c) GenBen
Fig. 1.5: Performance improvement with increased samplings using VerilogMonkey
across various benchmarks. Each point indicates a sampling where at least one problem
passed the test for the first time. Success Rate (%) is measured using Hit@𝑘.
Fig. 1.5 shows how success rates evolve across different benchmarks as the number of
samples increases. For example, on VerilogEval, Claude-3.5-Haiku achieves a 23.72%
improvement, while Gemini-2.0-flash sees a 24% jump on RTLLM. Success rates climb
rapidly at first but taper off as sample size grows, following a sub-logarithmic trend
(𝑐∝log log 𝑁): an exponential increase in samples yields logarithmic improvements.
Table 1.2 compares the success rates of various LLMs using parallel scaling against
existing methods and reasoning models. VerilogMonkey outperforms all of them by
simply increasing the sampling size to the hundreds, without relying on additional
enhancements such as post-training or agentic mechanisms.
1.4.5 What Are Reasons for Improvements?
This section is organized as follows: Section 1.4.5.1 analyzes the relationship between
the effectiveness of parallel scaling and two key factors influencing LLM performance,
namely problem difficulty and type. Building on this analysis, we observes that this
1 VerilogMonkey
9
Table 1.2: VerilogMonkey’s performance across benchmarks and its comparison with
baselines.
Baseline
VerilogEval (%)
Pass@1 Pass@5 Pass@10
CraftRTL [26]
68.00
72.40
74.60
OriGen [11]
51.40
58.60
62.20
AutoVCoder [13]
48.50
55.90
-
CodeV [44]
53.20
65.10
68.50
VerilogCoder [18]
94.20
-
-
OpenAI o3-mini
81.41
89.10
89.10
OpenAI o1
74.36
85.90
85.90
Gemini-2.5-pro
76.92
90.38
90.38
Claude-3.7-Sonnet-Think
80.13
89.10
92.31
VerilogMonkey
Hit@1
Hit@512
(Improvement)
Gemini-1.5-flash
44.87
56.41 (+11.54)
Gemini-2.0-flash
61.54
82.05 (+20.51)
GPT-4o-mini
54.49
76.92 (+22.43)
GPT-4o
64.10
87.18 (+23.08)
Claude-3.5-Haiku
57.05
80.77 (+23.72)
Claude-3.7-Sonnet
75.00
94.87 (+19.87)
RTLLM v2 (%)
Pass@1 Pass@5 Pass@10
Existing Work *
HDLCORE [32]
60.00
62.00
-
RTLLM v1 (%) **
CraftRTL [26]
49.00
65.80
74.50
OriGen [11]
-
65.50
-
CodeV [44]
36.60
53.30
61.30
Reasoning Models
72.00
78.00
78.00
64.00
76.00
78.00
70.00
78.00
80.00
68.00
78.00
78.00
Hit@1
Hit@512
(Improvement)
56.00
72.00 (+16.00)
60.00
84.00 (+24.00)
64.00
80.00 (+16.00)
60.00
82.00 (+22.00)
62.00
80.00 (+18.00)
70.00
84.00 (+14.00)
GenBen (%) ***
Pass@1 Pass@5 Pass@10
GPT-3.5-Turbo
-
26.70
-
QWEN-v1-max
-
21.20
-
GLM-4v-plus
-
7.50
-
Llama3
-
6.90
-
-
44.71
47.06
47.06
43.53
47.06
47.06
45.88
47.06
47.06
45.88
48.24
48.24
Hit@1
Hit@512
(Improvement)
42.35
47.06 (+4.71)
41.18
50.59 (+9.41)
40.00
55.29 (+15.29)
44.71
55.29 (+10.58)
44.71
51.76 (+7.05)
48.24
54.12 (+5.88)
† VerilogMonkey’s top-2 performances are highlighted in yellow. Best performances in baselines on
each benchmark are highlighted in gray.
* Data for the Existing Work is directly sourced from the corresponding papers. “-” indicates
unavailable or not reported.
relationship further affects the ways in which the effectiveness of parallel scaling
is manifested. Accordingly, Section 1.4.5.2 first uses code dispersion to summarize
these different forms, and then Section 1.4.5.3 illustrates them in detail with several
representative problem cases.
1.4.5.1 Factors Affecting Effectiveness
Problem Difficulty
VerilogEval
RTLLM
GenBen
2
6
2
7
2
8
2
9
2
10
2
11
2
12
# Token
(a) Specification
VerilogEval
RTLLM
GenBen
2
5
2
7
2
9
2
11
2
13
# Token
(b) Ref. Implementation code
Fig. 1.6: Token length distribution for specifications and golden codes.
10
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
0
5
10
15
# Success
Gemini-2.0-flash
Gemini-1.5-flash
0
5
10
15
# Success
GPT-4o
GPT-4o-mini
2
5
2
7
2
9
2
11
2
13
# Golden code token
0
5
10
15
# Success
Claude-3.7-Sonnet
2
5
2
7
2
9
2
11
2
13
# Golden code token
Claude-3.5-Haiku
Fig. 1.7: Success rates by code length. Each bar contains 15 problems of consecutive
code length. Bar width shows code length range; height shows cumulative successes.
Blue represents successes at 𝑁= 1, orange at 𝑁= 10, and red at 𝑁= 512.
We use the length of a problem’s specification or reference implementation code as
an indicator for difficulty, and analyze how this correlates with success rates. Fig. 1.6
shows the token length distribution for both specifications and reference implementation
codes. Specification token lengths are measured using GPT-4o’s tokenizer; code token
lengths are measured after parsing with pyVerilog. Across all three benchmarks, token
lengths grow progressively, consistent with declining success rates reported in Table 1.2.
Fig. 1.7 further examines the distribution of success rates as a function of code length.
We sort problems by token count, group them into bins of 15, and display each group as
a bar: the bar’s width shows the token-length range, and its height indicates the number
of successes. Within each bar, the blue, orange, and red segments denote successes at
𝑁= 1, 𝑁= 10 and 𝑁= 512, respectively. The figure reveals that model performance
degrades as problem difficulty increases. For simpler problems (token count below 26),
LLMs already achieve high success rates under single-sampling. For moderately difficult
problems (token count between 26 and 210), the 𝑁= 1 success rates drops, while parallel
sampling yields the largest improvements. However, for the hardest problems (token
count exceeding 210), LLMs struggle to generate correct solutions, and the benefits of
parallel sampling become limited. This trend suggests that parallel sampling’s benefits
arise from more effectively leveraging the LLMs’ reasoning capabilities, rather than
bypassing them. Ultimately, the model’s inherent capacity sets the upper bound on
achievable success.
1 VerilogMonkey
11
Problem Type
Gemini-2.0-flash
Gemini-1.5-flash
GPT-4o
GPT-4o-mini
Claude-3.7-Haiku
Claude-3.5-Sonnet
0%
20%
40%
60%
80%
100%
Success Rate (%)
Math-Related
N=1
N=512
Gemini-2.0-flash
Gemini-1.5-flash
GPT-4o
GPT-4o-mini
Claude-3.7-Haiku
Claude-3.5-Sonnet
Others
Fig. 1.8: Parallel scaling contribution for different type of problems on the VerilogEval
benchmark.
The VerilogEval benchmark includes a special category of math-related tasks—finite
state machines (FSMs), waveform-based logic analysis, and Karnaugh maps—comprising
40% of the entire problem set. Although these tasks are not especially complex, LLMs still
exhibit uncertain performance due to their inherent challenges with math reasoning and
formal derivation [1]. Fig. 1.8 shows the success rates for both categories, demonstrating
that LLMs fare significantly worse on math-related problems. However, parallel scaling
delivers substantial improvements for these problems. All LLMs except Gemini-1.5-flash
achieve more than a 20% increase in success rates.
1.4.5.2 Manifestations of Effectiveness
At a high level, parallel scaling broadens the diversity and coverage of generated attempts,
thereby eliminating single points of failure and increasing the likelihood of a correct
solution. These benefits become more nuanced at finer granularity and are directly
related to the dispersion of LLM outputs. To quantify this dispersion, we measure the
pairwise similarity among all sampled codes. Specifically, for a given set of sampled
codes, each code is first tokenized using pyVerilog’s parser. The resulting token lists are
vectorized using TF-IDF over N-grams. After that, getting 𝑁code vectors {𝑣1, . . . , 𝑣𝑁},
the cosine similarity between any two vectors 𝑣𝑖and 𝑣𝑗is
cos 𝜃𝑖, 𝑗=
𝑣𝑖· 𝑣𝑗
∥𝑣𝑖∥2∥𝑣𝑗∥2
.
(1.3)
The mean cosine distance is then given by
12
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
0.0
0.5
1.0
Gemini-1.5-flash
Gemini-2.0-flash
0.0
0.5
1.0
Mean cosine distance (MCD)
GPT-4o-mini
GPT-4o
2
6
2
9
2
12
# Golden code token
0.0
0.5
1.0
Claude-3.5-Haiku
2
6
2
9
2
12
# Golden code token
Claude-3.7-Sonnet
Fig. 1.9: Scatter plots of golden code lengths versus code MCDs. Math-related problems
in VerilogEval are highlighted in red.
MCD =
2
𝑁(𝑁−1)
∑︁
1≤𝑖< 𝑗≤𝑁
1 −cos 𝜃𝑖, 𝑗
.
(1.4)
A higher MCD indicates greater dispersion among the sampled codes. Fig. 1.9 shows the
scatter distribution of the reference implementation code length and the MCD of sampled
codes for each problem across different models. Math-related problems in VerilogEval
are highlighted in red. Overall, code length correlates positively with dispersion; and
notably, math-related problems exhibit higher dispersion than other tasks of comparable
length. Building on Fig. 1.9 and our analysis of Fig. 1.7, we identify two principal ways
in which parallel scaling drives improvement:
• LLMs perform better on simple problems, producing outputs that are more deter-
ministic and confident, resulting in a more concentrated code distribution. However,
errors still arise from hallucinations. By repeated attempts, the probability of
generating code without hallucinations increases.
• For complex and math-related problems, LLM performance decreases. Output
uncertainty rises, leading to a more dispersed code distribution. As a result, small-
scale repetition becomes unreliable. With parallel scaling, the LLM can attempt
various implementations across many samples, thereby increasing the chances of
finding a correct solution.
1 VerilogMonkey
13
1.4.5.3 Case Study
VerilogEval
Prob026
VerilogEval
Prob104
VerilogEval
Prob094
VerilogEval
Prob034
0.0
0.2
0.4
0.6
0.8
1.0
(a) Simple with hallucination
VerilogEval
Prob156
VerilogEval
Prob142
RTLLM
radix2_div
GenBen
t86
0.0
0.2
0.4
0.6
0.8
1.0
(b) Complex
VerilogEval
Prob070
VerilogEval
Prob093
VerilogEval
Prob125
VerilogEval
Prob147
0.0
0.2
0.4
0.6
0.8
1.0
(c) Math-related
Fig. 1.10: Similarity heatmap of different types of problems.
We select several representative problems across different types and collect all
failed code attempts to generate the similarity heatmaps shown in Fig. 1.10. For each
problem, we construct a heatmap 𝐻in which the intensity of cell 𝐻[𝑖, 𝑗] indicates the
textual similarity between the 𝑖-th and 𝑗-th code samples, computed as cos 𝜃𝑖, 𝑗in (1.3).
Since cos 𝜃𝑖, 𝑗is symmetric, 𝐻[𝑖, 𝑗] = 𝐻[ 𝑗, 𝑖]. To more intuitively highlight clustering
tendencies among the code samples, we apply KMeans clustering and then reorder the
samples by their cluster labels. As a result, the heatmaps exhibit block-like patterns,
with each block reflecting the high intra-cluster similarity. We now interpret the two
manifestations of improvements.
14
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
Hallucination
In our context, we define an LLM “hallucinates” when it confidently generates answers
that are actually incorrect [25]. Fig. 1.10a illustrates several problems where such
hallucinations occur. In these cases, the model repeatedly produces only a small set
of distinct outputs, even after hundreds of attempts. For example, in the VerilogEval
benchmark, problem Prob026 yields just three different code variants, and every attempt
on Prob094 produces the exact same output. Once an LLM becomes trapped in these
erroneous patterns, it is extremely unlikely to escape under small-scale repetition.
Scaling up to hundreds of parallel attempts can increase the odds of obtaining a correct
(non-hallucinated) output, but the effectiveness of this strategy depends on how strongly
the hallucination dominates. In the four problems shown, only one or two out of 512
attempts succeed, and in some cases the model never escapes the hallucination.
While most hallucinations arise from the LLM’s intrinsic limitations, some are actually
triggered by ambiguous or misleading specifications. For example, in the RTLLM
benchmark, the task clkgenerator asks for a module that outputs a periodically
toggling square-wave clock signal. However, because the prompt is phrased as a purely
behavioral description, any code produced cannot be synthesized or simulated. The
specification neither requires an RTL-level implementation nor defines the necessary
input signals, as a frequency divider must receive an existing clock input. In fact, even
the provided reference code is behavioral and therefore incorrect. A similar issue appears
in Prob112 of the VerilogEval benchmark. These cases highlight the critical importance
of clear, precise specifications for fair and objective evaluation.
Uncertainty
Fig. 1.10b and 1.10c present similarity heatmaps for complex and math-related problems,
respectively. In contrast to the tight clusters seen in Fig. 1.10a, these heatmaps display
far greater dispersion and much weaker clustering. Such uncertainty makes small-scale
repetition unreliable, since it cannot cover the full range of possible outputs. Moreover, in
some cases the correct solution is not the model’s most confident prediction. For example,
in VerilogEval’s Prob125 and Prob093, the largest clusters consist of incorrect code
variants. Parallel scaling, i.e., generating many attempts in parallel, does improve the
odds of finding a correct answer, but this benefit is ultimately bounded by the model’s
intrinsic capabilities. For problems of high difficulty, the gains from parallel scaling
quickly diminish (see Fig. 1.7).
1.4.6 How Does LLM’s Randomness Influence Effectiveness?
Since parallel scaling boosts an LLM’s performance by broadening its output coverage,
we naturally ask: can we further improve results by increasing output randomness?
The answer is yes. The temperature parameter [14] controls this randomness. Lower
temperatures yield more deterministic, repeatable outputs, while higher temperatures
1 VerilogMonkey
15
T=0.2
T=1.0
T=2.0
0.0
0.2
0.4
0.6
0.8
1.0
Mean Cosine Distance (MCD)
(a) Gemini-1.5-flash
T=0.2
T=1.0
T=2.0
0.0
0.2
0.4
0.6
0.8
1.0
Mean Cosine Distance (MCD)
(b) Gemini-2.0-flash
Fig. 1.11: Code MCD distributions at different temperature settings.
45
50
55
Success Rate (%)
Gemini-1.5-flash
0.0
0.2
0.4
0.7
1.0
1.5
2.0
2
0 2
1 2
2 2
3 2
4 2
5 2
6 2
7 2
8 2
9
# Sample
60
65
70
75
80
85
Success Rate (%)
Gemini-2.0-flash
(a) VerilogEval
55
60
65
70
Success Rate (%)
Gemini-1.5-flash
0.0
0.2
0.4
0.7
1.0
1.5
2.0
2
0 2
1 2
2 2
3 2
4 2
5 2
6 2
7 2
8 2
9
# Sample
60
65
70
75
80
85
Success Rate (%)
Gemini-2.0-flash
(b) RTLLM
40
45
Success Rate (%)
Gemini-1.5-flash
0.0
0.2
0.4
0.7
1.0
1.5
2.0
2
0 2
1 2
2 2
3 2
4 2
5 2
6 2
7 2
8 2
9
# Sample
45
50
Success Rate (%)
Gemini-2.0-flash
(c) GenBen
Fig. 1.12: The impact of temperature on success rate. Success Rate (%) is measured
using Hit@𝑘.
encourage diversity. We demonstrate this effect on two Gemini models, which support
temperatures from 0 to 2.0. Fig. 1.11 plots the distribution of code mean cosine distances
(MCDs) at three temperature settings, and Fig. 1.12 shows how success rates evolve with
sample size under each temperature. At low temperatures, outputs vary little and success
rates quickly plateau. For instance, Gemini-2.0-flash shows no improvement beyond five
samples when 𝑇= 0. As we raise the temperature, LLMs achieve higher final success
rates and converge faster. However, too much randomness can be detrimental: for GPT-4o
and GPT-4o-mini, temperatures above 1.8 (on a 0–2.0 scale) rarely produce coherent
text. Importantly, increasing randomness does not harm accuracy when sampling is
small. With a single sample (𝑁= 1), success-rate differences across temperatures remain
within 5%. We attribute this variance primarily to the inherent noise of individual runs.
16
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
1.5 Conclusion
We present VerilogMonkey, an empirical study of parallel scaling for automated
Verilog generation. Scaling to hundreds of parallel samples—without enhancements like
post-training or agentic methods—is cost-effective in time and money and surpasses
prior LLM-based results across benchmarks and mainstream models. Our analysis
explains these gains, showing that parallel scaling improves diversity and reliability, and
that controlling output randomness shapes its effectiveness.
Bibliography
[1]
Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin.
“Large language models for mathematical reasoning: Progresses and challenges”.
In: arXiv preprint arXiv:2402.00157 (2024).
[2]
Artificial Analysis. Artificial Analysis. url: https://artificialanalysis.
ai.
[3]
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski,
Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr
Nyczyk, et al. “Graph of thoughts: Solving elaborate problems with large language
models”. In: AAAI. Vol. 38. 16. 2024, pp. 17682–17690.
[4]
Zhenni Bi, Kai Han, Chuanjian Liu, Yehui Tang, and Yunhe Wang. “Forest-of-
thought: Scaling test-time compute for enhancing LLM reasoning”. In: arXiv
preprint arXiv:2412.09078 (2024).
[5]
Jason Blocklove, Siddharth Garg, Ramesh Karri, and Hammond Pearce. “Chip-
chat: Challenges and opportunities in conversational hardware design”. In: ML-
CAD. IEEE. 2023, pp. 1–6.
[6]
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le,
Christopher R´e, and Azalia Mirhoseini. “Large language monkeys: Scaling
inference compute with repeated sampling”. In: arXiv preprint arXiv:2407.21787
(2024).
[7]
Harrison Chase. LangChain. Version 0.3. 2025. url: https://github.com/
langchain-ai/langchain.
[8]
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De
Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, et al. “Evaluating large language models trained on code”. In:
arXiv:2107.03374 (2021).
[9]
Xinyun Chen, Maxwell Lin, Nathanael Sch¨arli, and Denny Zhou. “Teaching large
language models to self-debug”. In: arXiv preprint arXiv:2304.05128 (2023).
[10]
Coursera. Hardware Engineer Salary: Your 2025 Guide. https : / / www .
coursera.org/articles/hardware-engineer-salary. Jan. 2025.
[11]
Fan Cui, Chenyang Yin, Kexing Zhou, Youwei Xiao, Guangyu Sun, Qiang
Xu, Qipeng Guo, Yun Liang, Xingcheng Zhang, Demin Song, et al. “Origen:
1 VerilogMonkey
17
Enhancing rtl code generation with code-to-code augmentation and self-reflection”.
In: ICCAD. 2024.
[12]
Ryan Ehrlich, Bradley Brown, Jordan Juravsky, Ronald Clark, Christopher R´e,
and Azalia Mirhoseini. “CodeMonkeys: Scaling Test-Time Compute for Software
Engineering”. In: arXiv preprint arXiv:2501.14723 (2025).
[13]
Mingzhe Gao, Jieru Zhao, Zhe Lin, Wenchao Ding, Xiaofeng Hou, Yu Feng,
Chao Li, and Minyi Guo. “AutoVCoder: A Systematic Framework for Automated
Verilog Code Generation using LLMs”. In: 2024 IEEE 42nd International
Conference on Computer Design (ICCD). IEEE. 2024, pp. 162–169.
[14]
Google Cloud. Experiment with parameter values — Generative AI on Vertex AI
— Google Cloud. https://cloud.google.com/vertex-ai/generative-
ai/docs/learn/prompts/adjust-parameter-values. Accessed: 2025-04-
19. Apr. 2025.
[15]
Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan
Yang, and Mao Yang. “rStar-Math: Small LLMs Can Master Math Reasoning
with Self-Evolved Deep Thinking”. In: arXiv preprint arXiv:2501.04519 (2025).
[16]
Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu,
Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. “Deepseek-r1: Incentivizing
reasoning capability in llms via reinforcement learning”. In: arXiv preprint
arXiv:2501.12948 (2025).
[17]
Chia-Tung Ho, Haoxing Ren, and Brucek Khailany. “Verilogcoder: Autonomous
verilog coding agents with graph-based planning and abstract syntax tree (ast)-
based waveform tracing tool”. In: arXiv preprint arXiv:2408.08927 (2024).
[18]
Chia-Tung Ho, Haoxing Ren, and Brucek Khailany. “Verilogcoder: Autonomous
verilog coding agents with graph-based planning and abstract syntax tree (ast)-
based waveform tracing tool”. In: arXiv preprint arXiv:2408.08927 (2024).
[19]
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya,
Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes
Welbl, Aidan Clark, et al. “Training compute-optimal large language models”. In:
arXiv preprint arXiv:2203.15556 (2022).
[20]
Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky,
Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al.
“Openai o1 system card”. In: arXiv preprint arXiv:2412.16720 (2024).
[21]
Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. “Llm-blender: Ensembling large
language models with pairwise ranking and generative fusion”. In: arXiv preprint
arXiv:2306.02561 (2023).
[22]
Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja,
Dale Schuurmans, and Xinyun Chen. “Evolving Deeper LLM Thinking”. In:
arXiv preprint arXiv:2501.09891 (2025).
[23]
Dacheng Li, Shiyi Cao, Chengkun Cao, Xiuyu Li, Shangyin Tan, Kurt Keutzer,
Jiarong Xing, Joseph E Gonzalez, and Ion Stoica. “S*: Test time scaling for code
generation”. In: arXiv preprint arXiv:2502.14382 (2025).
[24]
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser,
R´emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago,
et al. “Competition-level code generation with alphacode”. In: Science 378.6624
(2022), pp. 1092–1097.
18
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan
[25]
Fang Liu, Yang Liu, Lin Shi, Houkun Huang, Ruifeng Wang, Zhen Yang, Li
Zhang, Zhongqi Li, and Yuchi Ma. “Exploring and evaluating hallucinations in
llm-powered code generation”. In: arXiv preprint arXiv:2404.00971 (2024).
[26]
Mingjie Liu, Yun-Da Tsai, Wenfei Zhou, and Haoxing Ren. “CraftRTL: High-
quality Synthetic Data Generation for Verilog Code Models with Correct-by-
Construction Non-Textual Representations and Targeted Code Repair”. In: arXiv
preprint arXiv:2409.12993 (2024).
[27]
Shang Liu, Wenji Fang, Yao Lu, Jing Wang, Qijun Zhang, Hongce Zhang, and
Zhiyao Xie. “RTLCoder: Fully open-source and efficient LLM-assisted RTL
code generation technique”. In: IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems (2024).
[28]
Yao Lu, Shang Liu, Qijun Zhang, and Zhiyao Xie. “RTLLM: An Open-Source
Benchmark for Design RTL Generation with Large Language Model”. In: ASP-
DAC. IEEE. 2024, pp. 722–727.
[29]
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah
Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al.
“Self-refine: Iterative refinement with self-feedback”. In: Advances in Neural
Information Processing Systems 36 (2023), pp. 46534–46594.
[30]
Ollama. Ollama. url: https://ollama.com/.
[31]
Nathaniel Pinckney, Christopher Batten, Mingjie Liu, Haoxing Ren, and Brucek
Khailany. Revisiting VerilogEval: Newer LLMs, In-Context Learning, and
Specification-to-RTL Tasks. 2024. arXiv: 2408.11053. url: https://arxiv.
org/abs/2408.11053.
[32]
Heng Ping, Shixuan Li, Peiyu Zhang, Anzhe Cheng, Shukai Duan, Nikos
Kanakaris, Xiongye Xiao, Wei Yang, Shahin Nazarian, Andrei Irimia, et al.
“HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLM-
Generated HDL”. In: arXiv preprint arXiv:2503.16528 (2025).
[33]
William Shakespeare. “Infinite monkey theorem”. In: ().
[34]
SIEMENS. ModelSim. url: https://eda.sw.siemens.com/en-US/ic/
modelsim/.
[35]
Yifan Song, Guoyin Wang, Sujian Li, and Bill Yuchen Lin. “The good, the bad,
and the greedy: Evaluation of llms should not ignore non-determinism”. In: arXiv
preprint arXiv:2407.10457 (2024).
[36]
SYNOPSYS. VCS: Functional Verification Solution. url: https : / / www .
synopsys.com/verification/simulation/vcs.html.
[37]
Shailja Thakur, Jason Blocklove, Hammond Pearce, Benjamin Tan, Siddharth
Garg, and Ramesh Karri. “Autochip: Automating hdl generation using llm
feedback”. In: arXiv preprint arXiv:2311.04887 (2023).
[38]
Gwok-Waa Wan, SamZaak Wong, Mengnv Xing, Nan Guan, Ning Xu, Qiang Xu,
Xi Wang, et al. “GenBen: A Genarative Benchmark for LLM-Aided Design”. In:
().
[39]
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi,
Quoc V Le, Denny Zhou, et al. “Chain-of-thought prompting elicits reasoning in
large language models”. In: Advances in neural information processing systems
35 (2022), pp. 24824–24837.
1 VerilogMonkey
19
[40]
Stephen Williams. The ICARUS Verilog Compilation System. url: https :
//steveicarus.github.io/iverilog/.
[41]
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and
Karthik Narasimhan. “Tree of thoughts: Deliberate problem solving with large
language models”. In: Advances in neural information processing systems 36
(2023), pp. 11809–11822.
[42]
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan,
and Yuan Cao. “React: Synergizing reasoning and acting in language models”.
In: International Conference on Learning Representations (ICLR). 2023.
[43]
Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Zhihan Guo,
Yufei Wang, Irwin King, Xue Liu, and Chen Ma. “What, How, Where, and How
Well? A Survey on Test-Time Scaling in Large Language Models”. In: arXiv
preprint arXiv:2503.24235 (2025).
[44]
Yang Zhao, Di Huang, Chongxiao Li, Pengwei Jin, Ziyuan Nan, Tianyun Ma,
Lei Qi, Yansong Pan, Zhenxing Zhang, Rui Zhang, et al. “Codev: Empowering
llms for verilog generation through multi-level summarization”. In: arXiv preprint
arXiv:2407.10424 (2024).
[45]
Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming
Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. “A survey on efficient
inference for large language models”. In: arXiv preprint arXiv:2404.14294 (2024).
|
Chapter 1 VerilogMonkey: Exploring Parallel Scaling for Automated Verilog Code Generation with LLMs Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan Abstract We present VerilogMonkey, an empirical study of parallel scaling for the underexplored task of automated Verilog generation. Parallel scaling improves LLM performance by sampling many outputs in parallel. Across multiple benchmarks and mainstream LLMs, we find that scaling to hundreds of samples is cost-effective in both time and money and, even without any additional enhancements such as post-training or agentic methods, surpasses prior results on LLM-based Verilog generation. We further dissect why parallel scaling delivers these gains and show how output randomness in LLMs affects its effectiveness. Juxin Niu[0009-0004-8223-4245] -mail: Yuxin Du[0009-0001-7017-5805] -mail: Dan Niu[0000-0002-0715-7946] -mail: Xi Wang[0000-0002-1998-6733] National Center of Technology Innovation for EDA, China -mail: Zhe Jiang[0000-0002-8509-3167] National Center of Technology Innovation for EDA, China -mail: Nan Guan[0000-0003-3775-911X] -mail: 1 17 Sep 2025 2 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan 1.1 Introduction Fig. 1.1: Even randomness can achieve greatness given infinite attempts.1 Writing Hardware Description Language (HDL) code such as Verilog is demanding, requires substantial expertise, and is prone to bugs and errors. Therefore, there is growing interest in LLM-assisted hardware code generation [27, 31, 37]. However, the capability of LLMs on automated Verilog code generation is still unsatisfactory, comparing with other tasks such as software code generation. Early progress in improving the capability of LLM was driven by trainingtime scaling, i.e., improving single-shot inference accuracy by training larger models and ingesting ever larger datasets. However, the pace of training-time scaling has gradually slowed due to its resource-intensive nature and the sharply rising costs of human labor [19]. To address this, research attention has shifted to test-time scaling [43], which boosts performance by allocating more computational resources during inference. A popular approach in this direction is parallel scaling [21, 35], which generates multiple outputs in parallel and aggregates them into a final answer. Parallel scaling has significantly enhanced LLM's performance in tasks such as math [6, 15] and software coding [12, 23]. For example, AlphaCode [24] extended parallel sampling to the scale of millions, achieving unprecedented results in competition-level coding tasks. However, in automated Verilog code generation, systematic studies on parallel scaling are still missing. Existing work has shown that small-scale repetition can outperform single-pass inference [27, 31]. However, the scalability and effectiveness of expanding the parallel scale to hundreds of samples remains unknown; and the underlying reasons for such improvements have yet to be explored. To this end, in this paper, we present the first work on exploring the parallel scaling in automated Verilog code generation with LLMs. Specifically, the contributions are as follows: • We show that parallel scaling is cost-effective in both time and monetary terms, as increased concurrency substantially reduces wall-clock latency and sampling cost. • We demonstrate that expanding the sampling size leads to consistent and significant performance improvements, surpassing even advanced reasoning-oriented LLMs such as OpenAI o1 and Gemini-2.5-pro. • We investigate the reasons underlying these improvements and show that parallel scaling enhances both diversity and reliability of outputs. • We analyze the influence of output randomness on parallel scaling, and find that regulating randomness preserves accuracy while further supporting convergence in large-scale sampling. 1 Image generated by ChatGPT. 1 VerilogMonkey 3 As shown in Fig. 1.1, the name "VerilogMonkey" is inspired by the Infinite Monkey Theorem [33], which states that given infinite time and attempts, a monkey randomly typing on a keyboard would eventually produce the complete works of Shakespeare. Similarly, LLMs can be thought of as "thinking monkeys," and with the help of parallel scaling, their potential can be further unlocked, leading to promising improvements in task performance. 1.2 Related Work Test-time Scaling Test-time scaling enhances task performance by allocating additional computational resources during inference. Depending on how the model's computation is expanded, test-time scaling can be divided into four categories [43]: parallel, sequential, hybrid, and internal. Parallel scaling enhances performance by generating multiple outputs from LLMs concurrently and then aggregating them into a final answer. Its effectiveness depends on two metrics [6]: coverage, the probability of producing at least one correct output, and precision, the ability to identify that correct response. Empirical evidence from [6] reveals a log-linear relationship between coverage and computational resources. In tasks like competition-level programming or travel planning, automatic correctness checks are often impractical. Consequently, research has shifted toward more advanced aggregation techniques, such as majority voting [12, 24] and genetic algorithms [22]. Sequential scaling generates outputs iteratively, leveraging intermediate results at each stage. Techniques such as Chain-of-Thought [39] decompose complex problems into a sequence of reasoning steps. Subsequent research [9, 29] has shown that LLMs can self-correct through iterative revision. Approaches like ReAct [42] integrate external tools to provide feedback that guides the model's next action. Hybrid scaling leverages the complementary strengths of parallel and sequential approaches: parallel scaling broadens an LLM's exploration of possibilities, while sequential scaling enables deeper investigation along promising paths. Notably, methods like Tree-of-Thoughts [41], Graph-of-Thought [3], and Forest-of-Thought [4] have successively introduced ever more complex hybrid scaling patterns. Internal scaling empowers LLMs to allocate extra computational resources at inference time through targeted training or fine-tuning. In this paradigm, the model's internal parameters instead of external agentic mechanisms govern scaling behavior. The impressive success of reasoning models like OpenAI o1 [20] and Deepseek-R1 [16] highlights the efficacy of this approach. Automated Verilog Code Generation Existing work [5, 13] on automated Verilog code generation has predominantly focused on sequential scaling. AutoChip [37] employs reflection strategies to enhance 4 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan Verilog code generation. Additionally, it employs LLM ensembling with more powerful models when lightweight models underperform, further boosting performance. VerilogCoder [17] introduces an autonomous agent with graph-planning capabilities and integrates an AST-based waveform-tracing tool to provide feedback during code debugging. HDLCORE [32] embeds HDL expertise into Chain-of-Thought prompts, guiding LLMs through step-by-step reasoning and leveraging testbench feedback for self-verification. Prior studies have shown that performance improves as the number of repetitions increases [27, 31]. However, these efforts have been confined to small-scale repetitions, and whether scaling the sampling size to much larger magnitudes (e.g., hundreds of attempts) yields further gains remains an open question. 1.3 Deployment PASS FAIL LLM Spec. Verilog Simulator Fig. 1.2: Workflow of a single sampling iteration. This section explains our deployment of VerilogMonkeyto support parallel scaling. VerilogMonkey focuses on generating selfcontained [31] Verilog code, that is, implementations that do not require external module instantiations, from natural language specifications. Given a specification and its associated test cases, the LLM's task is to produce functionally correct code that passes every test. Fig. 1.2 shows the workflow of a single sampling iteration: the specification is fed into the LLM, which generates the Verilog code; this code is then verified by simulation for functional correctness. VerilogMonkey repeats these iterations until either a generated implementation passes simulation or a predefined sampling limit is reached. LLM Generation Simulation Global Configuration { "llm": { "model": "gpt-4o", "temperature": "0.8", ... }, "benchmark": { "name": "rtllm-v2", ... }, ... } Task Queue Controller LLM Service Benchmark Simulator Fig. 1.3: System architecture of VerilogMonkey. 1 VerilogMonkey 5 Fig. 1.3 shows the overall architecture of VerilogMonkey. The system is fully automated, and users can tailor its behavior via a global configuration file. To maximize extensibility, VerilogMonkey adopts a modular, plug-in architecture. For LLM integration, we adopt the LLM-as-a-Service (LLMaaS) paradigm, relying on providers such as OpenAI and Anthropic to perform inference. VerilogMonkey interacts with these services by sending network requests. This paradigm abstracts LLM inference as an external service, since the internal details lie beyond our scope. Tools like Ollama [30] also allow locally hosted models to be exposed as LLMaaS endpoints. We use LangChain [7] as middleware to manage these connections. On the benchmarking side, VerilogMonkey supports existing test suites [28, 31, 38] through a unified interface that eases future expansion. For simulation, it is compatible with major EDA tools, including VCS [36], ModelSim [34], and Icarus Verilog [40]. All samplings run in parallel, allowing VerilogMonkey to fully leverage concurrency and maximize throughput. The system consists of two stages: LLM generation and simulation. The LLM generation stage is I/O-bound: after dispatching inference requests, VerilogMonkey must wait for responses. To maximize utilization during this idle time, it uses batch invocation together with asynchronous processing. In contrast, the simulation stage is computation-intensive, so VerilogMonkey adopts a multi-process strategy to fully leverage all available CPU cores. These two stages are connected by a task queue, which enables pipelining and synchronization. 1.4 Empirical Study of Parallel Scaling 1.4.1 Research Questions In this section, we first evaluate the effectiveness of parallel scaling, and analyze the reasons behind the improvements. We then identify output randomness as a key influencing factor and explore how its regulation impacts effectiveness. Accordingly, Sections 1.4.3 to 1.4.6 address the following research questions (RQs), respectively. RQ1. Is parallel scaling cost-effective in terms of time and money? RQ2. How effective is parallel scaling in improving performance? RQ3. What are reasons for improvements? RQ4. How does LLM's randomness influence effectiveness? Highlights Presented below are brief highlights of the research questions, with further details provided in the subsequent sections: (RQ1.) Parallel scaling is cost-effective in both time and money. Greater concurrency sharply reduces per-job runtime, so total wall-clock time grows far more slowly than the sample size. On cost, hundreds of samples per prompt typically run 10, and provider features like batching and prompt caching can often cut that roughly in 6 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan half. (RQ2.) By scaling up the sampling size to 512, the LLM delivers substantial performance gains, achieving success rates on all benchmarks that outperform existing methods-including reasoning models such as OpenAI o1 and Gemini-2.5-pro. (RQ3.) For simple problems, LLMs produce more deterministic and confident outputs. However, errors occur due to hallucinations. Conversely, on complex or math-related tasks, their performance drops and output uncertainty grows, making small-scale repetition unreliable. In both scenarios, parallel scaling proves beneficial by enhancing the diversity and coverage of generated outputs. (RQ4.) Raising the model's randomness (via temperature adjustments) does not compromise single-pass accuracy and can further improve the LLM's capabilities and convergence speed in large-scale sampling. 1.4.2 Evaluation Setup Configuration • Benchmark. We evaluate parallel scaling using three of the most widely used benchmarks: VerilogEval [31], RTLLM (v2) [28] and GenBen [38]. Together, they comprise approximately 200 Verilog code generation problems, each providing a specification, test cases, and a human-written reference implementation. • Model. Our evaluation includes six mainstream LLMs: Google's Gemini-2.0-flash and Gemini-1.5-flash, OpenAI's GPT-4o (20240806) and GPT-4o-mini (20240718), and Anthropic's Claude-3.7-Sonnet (20250219) and Claude-3.5-Haiku (20241022). • Baseline. We compare against two categories of baselines: existing works (see Table 1.2) and reasoning models. Reasoning models include OpenAI's o3-mini and o1, Anthropic's Claude-3.7-Sonnet-Think, and Google's Gemini-2.5-pro. • Implementation Details. We set the maximum sampling size to 512. For each problem, we generate samples via zero-shot prompts containing only the specification and formatting instructions to facilitate code extraction. Unless otherwise noted, all LLM inferences use default configurations (e.g., temperature, top-p). Metrics Our goal is to evaluate LLMs' Verilog code-generation capability in terms of success rate, i.e., the proportion of problems they can solve. We focus on two widely used metrics: Hit@k[24] and Pass@k[8]. Hit@kis the empirical fraction of problems for which at least one of kgenerated candidates is correct: Hit@k= 1 q q ∑︁ i=1 hiti= 1 q q ∑︁ i=1 max 1≤j≤kPass(ri, j), (1.1) where Pass(ri, j) = 1 if the j-th sample for problem Qiis correct and 0 otherwise. Pass@kestimates, for each problem, the probability of solving it within kattempts under a Bernoulli model; when Nsamples are collected per problem and ciof them are 1 VerilogMonkey 7 correct, a standard estimator is Pass@k= Ei " 1 - N-ci k N k # . (1.2) The key observation is that although Pass@kgenerally provides more robust estimates than Hit@k, directly computing Pass@kbecomes computationally impractical as k grows under high concurrency. Consequently, while prior work commonly reports Pass@k, in our parallel-scaling setting we report Hit@kinstead, as it remains tractable at large k. 1.4.3 Is Parallel Scaling Cost-Effective in Terms of Time and Money? 1 5 10 15 20 30 # Threads 0.00 0.05 0.10 0.15 0.20 Time per job (s) Concurrent Sequential (a) 100 500 1000 Total simulation job 0 1 2 3 4 5 6 7 8 Time (s) 1 thread 5 threads 10 threads 15 threads (b) Fig. 1.4: Simulation performance. (a) Per-thread time to complete a job vs. concurrency (blue), compared to sequential execution (red). (b) Total time for varying numbers of jobs across thread counts. Time. Fig. 1.4 shows the simulation performance under different concurrency. As concurrency increases, the average time per job drops dramatically. Therefore, the total time grows far more slowly than the sampling size. Meanwhile, LLM inference is becoming faster: for example, OpenAI's GPT-4o delivers a sixfold speedup over GPT-4 within a single year [2], and techniques like quantization and distillation have made local LLM deployments far more lightweight [45]. Money. Table 1.1 shows that sampling each prompt hundreds of times incurs only 10 in API fees, depending on the model and problem complexity. Techniques like non-real-time batch APIs and prompt-caching offered by providers can further cut these expenses by at least 50%. Even at large scale, LLM-based sampling remains far more economical than human labor: a U.S. hardware engineer earns roughly $11,000 per month [10] and may spend hours or even days to resolve a single problem. 8 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan Table 1.1: Average per-question LLM API cost (USD) for 500 samples across benchmarks. VerilogEval [31] RTLLM [28] GenBen [38] Gemini-1.5-flash 0.04 0.07 0.15 Gemini-2.0-flash 0.06 0.12 0.34 GPT-4o-mini 0.16 0.23 0.57 GPT-4o 1.87 3.12 7.80 Claude-3.5-Haiku 0.61 1.01 2.50 Claude-3.5-Sonnet 2.36 4.38 9.36 1.4.4 How Effective is Parallel Scaling? 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 45 55 65 75 85 95 Success Rate (%) (a) VerilogEval 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 55 65 75 85 Success Rate (%) Gemini-1.5-flash Gemini-2.0-flash GPT-4o-mini GPT-4o Claude-3.7-Sonnet Claude-3.5-Haiku (b) RTLLM 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 40 45 50 55 Success Rate (%) (c) GenBen Fig. 1.5: Performance improvement with increased samplings using VerilogMonkey across various benchmarks. Each point indicates a sampling where at least one problem passed the test for the first time. Success Rate (%) is measured using Hit@k. Fig. 1.5 shows how success rates evolve across different benchmarks as the number of samples increases. For example, on VerilogEval, Claude-3.5-Haiku achieves a 23.72% improvement, while Gemini-2.0-flash sees a 24% jump on RTLLM. Success rates climb rapidly at first but taper off as sample size grows, following a sub-logarithmic trend (c∝log log N): an exponential increase in samples yields logarithmic improvements. Table 1.2 compares the success rates of various LLMs using parallel scaling against existing methods and reasoning models. VerilogMonkey outperforms all of them by simply increasing the sampling size to the hundreds, without relying on additional enhancements such as post-training or agentic mechanisms. 1.4.5 What Are Reasons for Improvements? This section is organized as follows: Section 1.4.5.1 analyzes the relationship between the effectiveness of parallel scaling and two key factors influencing LLM performance, namely problem difficulty and type. Building on this analysis, we observes that this 1 VerilogMonkey 9 Table 1.2: VerilogMonkey's performance across benchmarks and its comparison with baselines. Baseline VerilogEval (%) Pass@1 Pass@5 Pass@10 CraftRTL [26] 68.00 72.40 74.60 OriGen [11] 51.40 58.60 62.20 AutoVCoder [13] 48.50 55.90 - CodeV [44] 53.20 65.10 68.50 VerilogCoder [18] 94.20 - - OpenAI o3-mini 81.41 89.10 89.10 OpenAI o1 74.36 85.90 85.90 Gemini-2.5-pro 76.92 90.38 90.38 Claude-3.7-Sonnet-Think 80.13 89.10 92.31 VerilogMonkey Hit@1 Hit@512 (Improvement) Gemini-1.5-flash 44.87 56.41 (+11.54) Gemini-2.0-flash 61.54 82.05 (+20.51) GPT-4o-mini 54.49 76.92 (+22.43) GPT-4o 64.10 87.18 (+23.08) Claude-3.5-Haiku 57.05 80.77 (+23.72) Claude-3.7-Sonnet 75.00 94.87 (+19.87) RTLLM v2 (%) Pass@1 Pass@5 Pass@10 Existing Work * HDLCORE [32] 60.00 62.00 - RTLLM v1 (%) ** CraftRTL [26] 49.00 65.80 74.50 OriGen [11] - 65.50 - CodeV [44] 36.60 53.30 61.30 Reasoning Models 72.00 78.00 78.00 64.00 76.00 78.00 70.00 78.00 80.00 68.00 78.00 78.00 Hit@1 Hit@512 (Improvement) 56.00 72.00 (+16.00) 60.00 84.00 (+24.00) 64.00 80.00 (+16.00) 60.00 82.00 (+22.00) 62.00 80.00 (+18.00) 70.00 84.00 (+14.00) GenBen (%) *** Pass@1 Pass@5 Pass@10 GPT-3.5-Turbo - 26.70 - QWEN-v1-max - 21.20 - GLM-4v-plus - 7.50 - Llama3 - 6.90 - - 44.71 47.06 47.06 43.53 47.06 47.06 45.88 47.06 47.06 45.88 48.24 48.24 Hit@1 Hit@512 (Improvement) 42.35 47.06 (+4.71) 41.18 50.59 (+9.41) 40.00 55.29 (+15.29) 44.71 55.29 (+10.58) 44.71 51.76 (+7.05) 48.24 54.12 (+5.88) † VerilogMonkey's top-2 performances are highlighted in yellow. Best performances in baselines on each benchmark are highlighted in gray. * Data for the Existing Work is directly sourced from the corresponding papers. "-" indicates unavailable or not reported. relationship further affects the ways in which the effectiveness of parallel scaling is manifested. Accordingly, Section 1.4.5.2 first uses code dispersion to summarize these different forms, and then Section 1.4.5.3 illustrates them in detail with several representative problem cases. 1.4.5.1 Factors Affecting Effectiveness Problem Difficulty VerilogEval RTLLM GenBen 2 6 2 7 2 8 2 9 2 10 2 11 2 12 # Token (a) Specification VerilogEval RTLLM GenBen 2 5 2 7 2 9 2 11 2 13 # Token (b) Ref. Implementation code Fig. 1.6: Token length distribution for specifications and golden codes. 10 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan 0 5 10 15 # Success Gemini-2.0-flash Gemini-1.5-flash 0 5 10 15 # Success GPT-4o GPT-4o-mini 2 5 2 7 2 9 2 11 2 13 # Golden code token 0 5 10 15 # Success Claude-3.7-Sonnet 2 5 2 7 2 9 2 11 2 13 # Golden code token Claude-3.5-Haiku Fig. 1.7: Success rates by code length. Each bar contains 15 problems of consecutive code length. Bar width shows code length range; height shows cumulative successes. Blue represents successes at N= 1, orange at N= 10, and red at N= 512. We use the length of a problem's specification or reference implementation code as an indicator for difficulty, and analyze how this correlates with success rates. Fig. 1.6 shows the token length distribution for both specifications and reference implementation codes. Specification token lengths are measured using GPT-4o's tokenizer; code token lengths are measured after parsing with pyVerilog. Across all three benchmarks, token lengths grow progressively, consistent with declining success rates reported in Table 1.2. Fig. 1.7 further examines the distribution of success rates as a function of code length. We sort problems by token count, group them into bins of 15, and display each group as a bar: the bar's width shows the token-length range, and its height indicates the number of successes. Within each bar, the blue, orange, and red segments denote successes at N= 1, N= 10 and N= 512, respectively. The figure reveals that model performance degrades as problem difficulty increases. For simpler problems (token count below 26), LLMs already achieve high success rates under single-sampling. For moderately difficult problems (token count between 26 and 210), the N= 1 success rates drops, while parallel sampling yields the largest improvements. However, for the hardest problems (token count exceeding 210), LLMs struggle to generate correct solutions, and the benefits of parallel sampling become limited. This trend suggests that parallel sampling's benefits arise from more effectively leveraging the LLMs' reasoning capabilities, rather than bypassing them. Ultimately, the model's inherent capacity sets the upper bound on achievable success. 1 VerilogMonkey 11 Problem Type Gemini-2.0-flash Gemini-1.5-flash GPT-4o GPT-4o-mini Claude-3.7-Haiku Claude-3.5-Sonnet 0% 20% 40% 60% 80% 100% Success Rate (%) Math-Related N=1 N=512 Gemini-2.0-flash Gemini-1.5-flash GPT-4o GPT-4o-mini Claude-3.7-Haiku Claude-3.5-Sonnet Others Fig. 1.8: Parallel scaling contribution for different type of problems on the VerilogEval benchmark. The VerilogEval benchmark includes a special category of math-related tasks-finite state machines (FSMs), waveform-based logic analysis, and Karnaugh maps-comprising 40% of the entire problem set. Although these tasks are not especially complex, LLMs still exhibit uncertain performance due to their inherent challenges with math reasoning and formal derivation [1]. Fig. 1.8 shows the success rates for both categories, demonstrating that LLMs fare significantly worse on math-related problems. However, parallel scaling delivers substantial improvements for these problems. All LLMs except Gemini-1.5-flash achieve more than a 20% increase in success rates. 1.4.5.2 Manifestations of Effectiveness At a high level, parallel scaling broadens the diversity and coverage of generated attempts, thereby eliminating single points of failure and increasing the likelihood of a correct solution. These benefits become more nuanced at finer granularity and are directly related to the dispersion of LLM outputs. To quantify this dispersion, we measure the pairwise similarity among all sampled codes. Specifically, for a given set of sampled codes, each code is first tokenized using pyVerilog's parser. The resulting token lists are vectorized using TF-IDF over N-grams. After that, getting Ncode vectors {v1, . . . , vN}, the cosine similarity between any two vectors viand vjis cos θi, j= vi· vj ∥vi∥2∥vj∥2 . (1.3) The mean cosine distance is then given by 12 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan 0.0 0.5 1.0 Gemini-1.5-flash Gemini-2.0-flash 0.0 0.5 1.0 Mean cosine distance (MCD) GPT-4o-mini GPT-4o 2 6 2 9 2 12 # Golden code token 0.0 0.5 1.0 Claude-3.5-Haiku 2 6 2 9 2 12 # Golden code token Claude-3.7-Sonnet Fig. 1.9: Scatter plots of golden code lengths versus code MCDs. Math-related problems in VerilogEval are highlighted in red. MCD = 2 N(N-1) ∑︁ 1≤i< j≤N 1 -cos θi, j . (1.4) A higher MCD indicates greater dispersion among the sampled codes. Fig. 1.9 shows the scatter distribution of the reference implementation code length and the MCD of sampled codes for each problem across different models. Math-related problems in VerilogEval are highlighted in red. Overall, code length correlates positively with dispersion; and notably, math-related problems exhibit higher dispersion than other tasks of comparable length. Building on Fig. 1.9 and our analysis of Fig. 1.7, we identify two principal ways in which parallel scaling drives improvement: • LLMs perform better on simple problems, producing outputs that are more deterministic and confident, resulting in a more concentrated code distribution. However, errors still arise from hallucinations. By repeated attempts, the probability of generating code without hallucinations increases. • For complex and math-related problems, LLM performance decreases. Output uncertainty rises, leading to a more dispersed code distribution. As a result, smallscale repetition becomes unreliable. With parallel scaling, the LLM can attempt various implementations across many samples, thereby increasing the chances of finding a correct solution. 1 VerilogMonkey 13 1.4.5.3 Case Study VerilogEval Prob026 VerilogEval Prob104 VerilogEval Prob094 VerilogEval Prob034 0.0 0.2 0.4 0.6 0.8 1.0 (a) Simple with hallucination VerilogEval Prob156 VerilogEval Prob142 RTLLM radix2_div GenBen t86 0.0 0.2 0.4 0.6 0.8 1.0 (b) Complex VerilogEval Prob070 VerilogEval Prob093 VerilogEval Prob125 VerilogEval Prob147 0.0 0.2 0.4 0.6 0.8 1.0 (c) Math-related Fig. 1.10: Similarity heatmap of different types of problems. We select several representative problems across different types and collect all failed code attempts to generate the similarity heatmaps shown in Fig. 1.10. For each problem, we construct a heatmap Hin which the intensity of cell H[i, j] indicates the textual similarity between the i-th and j-th code samples, computed as cos θi, jin (1.3). Since cos θi, jis symmetric, H[i, j] = H[ j, i]. To more intuitively highlight clustering tendencies among the code samples, we apply KMeans clustering and then reorder the samples by their cluster labels. As a result, the heatmaps exhibit block-like patterns, with each block reflecting the high intra-cluster similarity. We now interpret the two manifestations of improvements. 14 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan Hallucination In our context, we define an LLM "hallucinates" when it confidently generates answers that are actually incorrect [25]. Fig. 1.10a illustrates several problems where such hallucinations occur. In these cases, the model repeatedly produces only a small set of distinct outputs, even after hundreds of attempts. For example, in the VerilogEval benchmark, problem Prob026 yields just three different code variants, and every attempt on Prob094 produces the exact same output. Once an LLM becomes trapped in these erroneous patterns, it is extremely unlikely to escape under small-scale repetition. Scaling up to hundreds of parallel attempts can increase the odds of obtaining a correct (non-hallucinated) output, but the effectiveness of this strategy depends on how strongly the hallucination dominates. In the four problems shown, only one or two out of 512 attempts succeed, and in some cases the model never escapes the hallucination. While most hallucinations arise from the LLM's intrinsic limitations, some are actually triggered by ambiguous or misleading specifications. For example, in the RTLLM benchmark, the task clkgenerator asks for a module that outputs a periodically toggling square-wave clock signal. However, because the prompt is phrased as a purely behavioral description, any code produced cannot be synthesized or simulated. The specification neither requires an RTL-level implementation nor defines the necessary input signals, as a frequency divider must receive an existing clock input. In fact, even the provided reference code is behavioral and therefore incorrect. A similar issue appears in Prob112 of the VerilogEval benchmark. These cases highlight the critical importance of clear, precise specifications for fair and objective evaluation. Uncertainty Fig. 1.10b and 1.10c present similarity heatmaps for complex and math-related problems, respectively. In contrast to the tight clusters seen in Fig. 1.10a, these heatmaps display far greater dispersion and much weaker clustering. Such uncertainty makes small-scale repetition unreliable, since it cannot cover the full range of possible outputs. Moreover, in some cases the correct solution is not the model's most confident prediction. For example, in VerilogEval's Prob125 and Prob093, the largest clusters consist of incorrect code variants. Parallel scaling, i.e., generating many attempts in parallel, does improve the odds of finding a correct answer, but this benefit is ultimately bounded by the model's intrinsic capabilities. For problems of high difficulty, the gains from parallel scaling quickly diminish (see Fig. 1.7). 1.4.6 How Does LLM's Randomness Influence Effectiveness? Since parallel scaling boosts an LLM's performance by broadening its output coverage, we naturally ask: can we further improve results by increasing output randomness? The answer is yes. The temperature parameter [14] controls this randomness. Lower temperatures yield more deterministic, repeatable outputs, while higher temperatures 1 VerilogMonkey 15 T=0.2 T=1.0 T=2.0 0.0 0.2 0.4 0.6 0.8 1.0 Mean Cosine Distance (MCD) (a) Gemini-1.5-flash T=0.2 T=1.0 T=2.0 0.0 0.2 0.4 0.6 0.8 1.0 Mean Cosine Distance (MCD) (b) Gemini-2.0-flash Fig. 1.11: Code MCD distributions at different temperature settings. 45 50 55 Success Rate (%) Gemini-1.5-flash 0.0 0.2 0.4 0.7 1.0 1.5 2.0 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 60 65 70 75 80 85 Success Rate (%) Gemini-2.0-flash (a) VerilogEval 55 60 65 70 Success Rate (%) Gemini-1.5-flash 0.0 0.2 0.4 0.7 1.0 1.5 2.0 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 60 65 70 75 80 85 Success Rate (%) Gemini-2.0-flash (b) RTLLM 40 45 Success Rate (%) Gemini-1.5-flash 0.0 0.2 0.4 0.7 1.0 1.5 2.0 2 0 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 2 9 # Sample 45 50 Success Rate (%) Gemini-2.0-flash (c) GenBen Fig. 1.12: The impact of temperature on success rate. Success Rate (%) is measured using Hit@k. encourage diversity. We demonstrate this effect on two Gemini models, which support temperatures from 0 to 2.0. Fig. 1.11 plots the distribution of code mean cosine distances (MCDs) at three temperature settings, and Fig. 1.12 shows how success rates evolve with sample size under each temperature. At low temperatures, outputs vary little and success rates quickly plateau. For instance, Gemini-2.0-flash shows no improvement beyond five samples when T= 0. As we raise the temperature, LLMs achieve higher final success rates and converge faster. However, too much randomness can be detrimental: for GPT-4o and GPT-4o-mini, temperatures above 1.8 (on a 0-2.0 scale) rarely produce coherent text. Importantly, increasing randomness does not harm accuracy when sampling is small. With a single sample (N= 1), success-rate differences across temperatures remain within 5%. We attribute this variance primarily to the inherent noise of individual runs. 16 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan 1.5 Conclusion We present VerilogMonkey, an empirical study of parallel scaling for automated Verilog generation. Scaling to hundreds of parallel samples-without enhancements like post-training or agentic methods-is cost-effective in time and money and surpasses prior LLM-based results across benchmarks and mainstream models. Our analysis explains these gains, showing that parallel scaling improves diversity and reliability, and that controlling output randomness shapes its effectiveness. Bibliography [1] Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. "Large language models for mathematical reasoning: Progresses and challenges". In: arXiv preprint (2024). [2] Artificial Analysis. Artificial Analysis. url: https://artificialanalysis. ai. [3] Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. "Graph of thoughts: Solving elaborate problems with large language models". In: AAAI. Vol. 38. 16. 2024, pp. 17682-17690. [4] Zhenni Bi, Kai Han, Chuanjian Liu, Yehui Tang, and Yunhe Wang. "Forest-ofthought: Scaling test-time compute for enhancing LLM reasoning". In: arXiv preprint (2024). [5] Jason Blocklove, Siddharth Garg, Ramesh Karri, and Hammond Pearce. "Chipchat: Challenges and opportunities in conversational hardware design". In: MLCAD. IEEE. 2023, pp. 1-6. [6] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher R ́e, and Azalia Mirhoseini. "Large language monkeys: Scaling inference compute with repeated sampling". In: arXiv preprint (2024). [7] Harrison Chase. LangChain. Version 0.3. 2025. url: https://github.com/ langchain-ai/langchain. [8] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. "Evaluating large language models trained on code". In: (2021). [9] Xinyun Chen, Maxwell Lin, Nathanael Sch ̈arli, and Denny Zhou. "Teaching large language models to self-debug". In: arXiv preprint (2023). [10] Coursera. Hardware Engineer Salary: Your 2025 Guide. https : / / www . coursera.org/articles/hardware-engineer-salary. Jan. 2025. [11] Fan Cui, Chenyang Yin, Kexing Zhou, Youwei Xiao, Guangyu Sun, Qiang Xu, Qipeng Guo, Yun Liang, Xingcheng Zhang, Demin Song, et al. "Origen: 1 VerilogMonkey 17 Enhancing rtl code generation with code-to-code augmentation and self-reflection". In: ICCAD. 2024. [12] Ryan Ehrlich, Bradley Brown, Jordan Juravsky, Ronald Clark, Christopher R ́e, and Azalia Mirhoseini. "CodeMonkeys: Scaling Test-Time Compute for Software Engineering". In: arXiv preprint (2025). [13] Mingzhe Gao, Jieru Zhao, Zhe Lin, Wenchao Ding, Xiaofeng Hou, Yu Feng, Chao Li, and Minyi Guo. "AutoVCoder: A Systematic Framework for Automated Verilog Code Generation using LLMs". In: 2024 IEEE 42nd International Conference on Computer Design (ICCD). IEEE. 2024, pp. 162-169. [14] Google Cloud. Experiment with parameter values - Generative AI on Vertex AI - Google Cloud. https://cloud.google.com/vertex-ai/generativeai/docs/learn/prompts/adjust-parameter-values. Accessed: 2025-0419. Apr. 2025. [15] Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. "rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking". In: arXiv preprint (2025). [16] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning". In: arXiv preprint (2025). [17] Chia-Tung Ho, Haoxing Ren, and Brucek Khailany. "Verilogcoder: Autonomous verilog coding agents with graph-based planning and abstract syntax tree (ast)- based waveform tracing tool". In: arXiv preprint (2024). [18] Chia-Tung Ho, Haoxing Ren, and Brucek Khailany. "Verilogcoder: Autonomous verilog coding agents with graph-based planning and abstract syntax tree (ast)- based waveform tracing tool". In: arXiv preprint (2024). [19] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. "Training compute-optimal large language models". In: arXiv preprint (2022). [20] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. "Openai o1 system card". In: arXiv preprint (2024). [21] Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. "Llm-blender: Ensembling large language models with pairwise ranking and generative fusion". In: arXiv preprint (2023). [22] Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja, Dale Schuurmans, and Xinyun Chen. "Evolving Deeper LLM Thinking". In: arXiv preprint (2025). [23] Dacheng Li, Shiyi Cao, Chengkun Cao, Xiuyu Li, Shangyin Tan, Kurt Keutzer, Jiarong Xing, Joseph E Gonzalez, and Ion Stoica. "S*: Test time scaling for code generation". In: arXiv preprint (2025). [24] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R ́emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. "Competition-level code generation with alphacode". In: Science 378.6624 (2022), pp. 1092-1097. 18 Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, and Nan Guan [25] Fang Liu, Yang Liu, Lin Shi, Houkun Huang, Ruifeng Wang, Zhen Yang, Li Zhang, Zhongqi Li, and Yuchi Ma. "Exploring and evaluating hallucinations in llm-powered code generation". In: arXiv preprint (2024). [26] Mingjie Liu, Yun-Da Tsai, Wenfei Zhou, and Haoxing Ren. "CraftRTL: Highquality Synthetic Data Generation for Verilog Code Models with Correct-byConstruction Non-Textual Representations and Targeted Code Repair". In: arXiv preprint (2024). [27] Shang Liu, Wenji Fang, Yao Lu, Jing Wang, Qijun Zhang, Hongce Zhang, and Zhiyao Xie. "RTLCoder: Fully open-source and efficient LLM-assisted RTL code generation technique". In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2024). [28] Yao Lu, Shang Liu, Qijun Zhang, and Zhiyao Xie. "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model". In: ASPDAC. IEEE. 2024, pp. 722-727. [29] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. "Self-refine: Iterative refinement with self-feedback". In: Advances in Neural Information Processing Systems 36 (2023), pp. 46534-46594. [30] Ollama. Ollama. url: https://ollama.com/. [31] Nathaniel Pinckney, Christopher Batten, Mingjie Liu, Haoxing Ren, and Brucek Khailany. Revisiting VerilogEval: Newer LLMs, In-Context Learning, and Specification-to-RTL Tasks. 2024. arXiv: 2408.11053. url: https://arxiv. org/abs/2408.11053. [32] Heng Ping, Shixuan Li, Peiyu Zhang, Anzhe Cheng, Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Wei Yang, Shahin Nazarian, Andrei Irimia, et al. "HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLMGenerated HDL". In: arXiv preprint (2025). [33] William Shakespeare. "Infinite monkey theorem". In: (). [34] SIEMENS. ModelSim. url: https://eda.sw.siemens.com/en-US/ic/ modelsim/. [35] Yifan Song, Guoyin Wang, Sujian Li, and Bill Yuchen Lin. "The good, the bad, and the greedy: Evaluation of llms should not ignore non-determinism". In: arXiv preprint (2024). [36] SYNOPSYS. VCS: Functional Verification Solution. url: https : / / www . synopsys.com/verification/simulation/vcs.html. [37] Shailja Thakur, Jason Blocklove, Hammond Pearce, Benjamin Tan, Siddharth Garg, and Ramesh Karri. "Autochip: Automating hdl generation using llm feedback". In: arXiv preprint (2023). [38] Gwok-Waa Wan, SamZaak Wong, Mengnv Xing, Nan Guan, Ning Xu, Qiang Xu, Xi Wang, et al. "GenBen: A Genarative Benchmark for LLM-Aided Design". In: (). [39] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. "Chain-of-thought prompting elicits reasoning in large language models". In: Advances in neural information processing systems 35 (2022), pp. 24824-24837. 1 VerilogMonkey 19 [40] Stephen Williams. The ICARUS Verilog Compilation System. url: https : //steveicarus.github.io/iverilog/. [41] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. "Tree of thoughts: Deliberate problem solving with large language models". In: Advances in neural information processing systems 36 (2023), pp. 11809-11822. [42] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. "React: Synergizing reasoning and acting in language models". In: International Conference on Learning Representations (ICLR). 2023. [43] Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Zhihan Guo, Yufei Wang, Irwin King, Xue Liu, and Chen Ma. "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models". In: arXiv preprint (2025). [44] Yang Zhao, Di Huang, Chongxiao Li, Pengwei Jin, Ziyuan Nan, Tianyun Ma, Lei Qi, Yansong Pan, Zhenxing Zhang, Rui Zhang, et al. "Codev: Empowering llms for verilog generation through multi-level summarization". In: arXiv preprint (2024). [45] Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. "A survey on efficient inference for large language models". In: arXiv preprint (2024).
|
2509.16242
|
1. Introduction
Quantum computation has the potential for exponential speedup of classical systems in some
applications, such as cryptography, simulation of molecular behavior, and optimization. Nevertheless,
quantum noise drastically limits the utility of quantum computation. Qubits, the fundamental units of
quantum information, are very sensitive to gate errors and environmental interference. Quantum noise
introduces errors that cause errors in quantum computations, as they travel onward over time.
As noise is a fundamental issue with NISQ devices, error mitigation is necessary. Classical methods
like quantum error correction would require the addition of additional qubits and syndrome decoding,
and these are inefficient and in practice at large scale.
Our research proposes a machine learning–aided approach to reduce quantum noise by learning the
patterns of noise and reconstructing clean quantum states from noisy data. We formulate it as a
supervised learning task with the goal of converting a noisy density matrix to its clean representation,
and we achieve this using a fidelity-aware loss and CNN-based autoencoder architecture. We show
how data-driven approaches can improve reconstruction accuracy.
2. Methods
2.1 Quantum Circuit Generation and Density Matrix Extraction
We randomly generate quantum circuits with 5 qubits from the Cirq library. The circuits are
random single-qubit and two-qubit gate sequences (X, Y, Z, H, T, S, RX, RY, RZ, CNOT, CZ, SWAP).
The depth of each circuit is randomly chosen between 6 and 9 layers.
After obtaining the circuit, we get the final state density matrix through Cirq's DensityMatrixSimulator.
The states are each 2^5 x 2^5 = 32 x 32 matrices of complex-valued information which contains
all the quantum information including entanglement and decoherence effects.
2.2 Noise Modelling
We apply five different types of noise channels:
•
Bitflip: random X errors on qubits
•
Depolarizing: replaces qubit state with maximally mixed state with some probability
•
Amplitude Damping: models energy loss (e.g., spontaneous emission)
•
Phase Damping: phase information is lost without energy change
•
Mixed: combination of the above (uniformly sampled)
Noise levels tested: 0.05, 0.10, 0.15, 0.20
2.3 CNN Autoencoder Architecture
We model noise reduction using a deep convolutional autoencoder. The input is a noisy density matrix
split into real and imaginary parts with shape (32, 32, 2). The output is a reconstruction of the clean
matrix.
•
Encoder: 3 blocks (Conv2D → ReLU → MaxPooling → Dropout)
•
Decoder: 3 blocks (Conv2D → ReLU → UpSampling → Dropout)
•
Output Layer: Conv2D (2 channels, linear activation)
We use a composite loss function:
L = MSE (Y, Ŷ) + λ (1 – Fidelity (Y, Ŷ))
Here, the fidelity term is approximated using a normalized Frobenius inner product rather than the
standard trace-based Uhlmann fidelity, which is computationally intensive for large-scale training. This
approximation calculates:
F(ρ, σ) ≈ Re(⟨ρ, σ⟩) / (||ρ||F · ||σ||F)
Where || · ||F is the Frobenius norm , and i ⟨ρ, σ⟩s the element-wise complex inner product. This surrogate
metric aligns well with fidelity trends while remaining differentiable and efficient.
The inclusion of fidelity in the loss function forces the model to capture structural quantum similarities,
not just pixel-wise reconstruction errors.
2.4 Dataset Pipeline
•
10,000 circuit samples
•
Each sample: clean DM, noisy DM, noise type, and level
•
Data split: 80% training, 20% testing
•
Preprocessing: split real and imaginary parts into 2 channels
3. Dataset and Related Work
Dataset
We generated a synthetic dataset consisting of 10,000 density matrices derived from random quantum
circuits. This approach allows us to have perfect ground truth for evaluation and control over noise
parameters, which is challenging with real quantum hardware data. The dataset has the following
characteristics:
•
5-qubit systems (resulting in 32×32 density matrices)
•
Random circuit depths between 6-9 layers
•
5 noise types (depolarizing, amplitude damping, phase damping, bit-flip, mixed)
•
4 noise levels (0.05, 0.1, 0.15, 0.2)
•
Equal distribution across noise types and levels
Our dataset construction is novel in its comprehensive coverage of diverse noise models and
intensities, allowing for robust model training and evaluation across realistic quantum error scenarios.
Related Work
The intersection of quantum error correction and machine learning is an exciting new area of research
on quantum computers. There have been some influential papers on related methods:
Czarnik et al. [1] have suggested an error mitigation technique through training noise models using
classical simulations of Clifford circuits. There is a method aiming at estimation and elimination of the
effects of noise statistically rather than direct reconstruction of density matrices. While effective for
specific gate sets, their approach does not generalize as well to generic quantum circuits as our CNN-
based solution does, which is able to discover the underlying structure of quantum noise regardless of
circuit content.
Chen et al. [2] proposed discriminative quantum neural networks for quantum error correction and
demonstrated their approach on small stabilizer codes. Their work addresses the decoding problem of
standard QECCs rather than direct state recovery. Our density matrix reconstruction approach
operates on a lower level, potentially showing advantages for near-term devices where full QECCs are
no longer feasible.
Endo et al. [3] provided a comprehensive review of quantum error mitigation techniques, which
categorized techniques into extrapolation, probabilistic error cancellation, and symmetry verification.
They did not take deep learning techniques at the density matrix level into account. Our work fills the
gap by showing how neural networks can be trained to correct errors in the quantum state
representation directly.
Krastanov et al. [4] applied neural networks to decoding of surface codes with encouraging evidence
supporting the increase in error thresholds. Their method still follows the typical QECC process of
encoding one logical qubit over multiple physical qubits. Our method, however, doesn't require
additional qubits and is therefore more attractive to limited-resource NISQ devices.
Our CNN-based method updates these earlier papers by addressing density matrix reconstruction for
multiple noise models simultaneously. Unlike traditional QECCs with a dense qubit overhead, our
scheme operates on the existing quantum state without encoding. And unlike statistical error mitigation
techniques, we reconstruct the full quantum state rather than merely expectation values of
observables.
4. Related Implementations
The application of quantum error correction techniques via machine learning has been of interest on
other research platforms. We reviewed some of the associated implementations to contrast and
compare our approach:
The Qiskit Textbook [5] provides a full implementation of quantum error correction codes with a focus
on the repetition code and surface code approaches. Their approach is primarily pedagogical and
demonstrates the standard approach to quantum error correction with multiple physical qubits to
encode one logical qubit. While their implementation is full, it has heavy qubit overhead and is not
designed for the NISQ era. Our CNN-based approach, in contrast, does not require additional qubits
and can be used directly on the quantum state representation, and therefore is more suitable for near-
term quantum hardware with small numbers of qubits.
The Mitiq library [6] developed by the Unitary Fund offers a library of error mitigation techniques like
zero-noise extrapolation and probabilistic error cancellation. Their Python library supports major
quantum computing platforms and has been tested on real quantum hardware. However, their
approach is dealing with error mitigation in expectation values of quantum observables and not
recovery of the full quantum state. Our density matrix reconstruction approach provides an enhanced
error correction solution by recovery of the full quantum state with the ability for more advanced
quantum algorithms based on state preparation instead of expectation values.
The TensorFlow Quantum [7] software includes demonstrations of quantum encoding and error
correction within quantum data based on hybrid quantum-classical models.
Their approach demonstrates use of the variational circuits for error correction and has promising
results for some noise models. Unfortunately, their application does require quantum resources for
both the encoding and the correction. Our traditional CNN protocol performs as well or better without
requiring more quantum resources for the correction step, making it directly applicable to current
quantum hardware and more scalable for increasing system size.
5. Data Analysis
We constructed a synthetic data set of 10,000 density matrices that we derived from randomly created
quantum circuits to examine the different types of noise and levels that influence quantum states.
Having simulated clean as well as noisy circuits, we were able to create perfect ground truth data to
train and test on. Through our data analysis, we found a number of important observations concerning
the dynamics of quantum noise and its impact on quantum state fidelity.
Fidelity distribution analysis across noise types revealed that phase damping noise preserves state
fidelity more than any other noise type with a mean noisy fidelity of 0.541, and bit-flip was most
destructive (0.200 mean fidelity). This is due to the fact that the phase damping type of noise primarily
disturbs off-diagonal elements (coherences) but preserves populations, whereas bit-flip destroys
computational basis states directly.
The noise level-state fidelity relationship is approximately inversely dependent, such that the greater
the noise level, the exponentially smaller the fidelity. This can be quantified by the -0.55 correlation
coefficient of noise level-noisy fidelity in our correlation analysis below:
We observed that quantum states that were exposed to mixed noise exhibited complex error patterns
that had elements of multiple noise channels. This is an especially challenging scenario for classical
quantum error correction schemes but presents an opportunity for machine learning approaches that
can learn and discover such complex patterns.
Visual examination of the density matrices revealed that the different sources of noise act on the
matrix structure in distinct ways. Depolarizing noise compresses the whole density matrix in the
direction of the maximally mixed state, amplitude damping pulls population towards the ground state
direction, and phase damping lowers basically off-diagonal elements. All these usual signatures can be
recognized in real and imaginary parts of the density matrices:
The data allowed us to explore the relation of early noisy fidelity with theoretical maximum recoverable
fidelity, providing insight into the underlying quantum error correction thresholds for a range of noise
models and strengths.
6. Analysis
Our CNN-based approach's strong performance on many types and levels of noise is due to several
key features of the model and quantum noise itself. The encoder-decoder architecture is particularly
suited to quantum error correction since it has the ability to learn features at multiple hierarchical
scales, reflecting the manner in which quantum noise acts at both the local and global levels.
The result on composite noise models is especially notable in that it represents the model's capacity to
generalize from simultaneous multiple types of error. This property can be assumed to transfer to other
such situations with high-complication noise models, as with superconducting or trapped-ion quantum
computers, with simultaneous activity by many error processes. The model effectively learns to adopt
a one-type-fits-all noise-reversing mapping instead of case-dependent correction of particular noises.
Interestingly, the model does better at greater levels of noise (0.15-0.2) than at lower ones (0.05-0.1)
when assessed by relative improvement. Such counterintuitive result suggests that more resilient
patterns of noise are more distinctive and thus easier to identify and correct by the CNN. Such a
property would be advantageous for quantum systems operating on the edge of their coherence
capabilities, where high levels of noise are typical.
The restrictions observed with phase damping correction are likely a result of information-theoretic
restrictions – pure dephasing is a form of information loss that might be inherently reversible. Other
quantum systems similar to those here affected primarily by dephasing noise might not benefit as
much from this technique, marking an essential boundary condition for the applicability of CNN-based
error correction.
The CNN's ability to preserve quantum correlations, observed through off-diagonal density matrix
elements' revival, demonstrates that such a procedure would be particularly worthwhile for
entanglement and quantum coherence-intensive quantum algorithms like Shor's algorithm or quantum
simulation. The preservation of quantum information distinguishes our approach from other classical
error mitigation techniques that only operate on expectation values.
7. Experimental Setup
Our testbed consisted of an end-to-end pipeline for simulating synthetic quantum data, training the
CNN model, and evaluating its performance across different noise regimes. We used the Cirq
quantum computing library to implement the circuit simulation and TensorFlow for modelling the neural
network.
For data generation, we generated 10,000 randomly created quantum circuits of size 5 qubits and
result in the shape of 32×32 complex density matrices. The number of layers for the circuits were
randomly selected in between 6-9 layers in order to provide various quantum states. All circuits were
run through five varieties of various types of noise (depolarizing, amplitude damping, phase damping,
bit-flip, and mixed) on four levels of noises (0.05, 0.1, 0.15, 0.2), and thus created a balanced noise
dataset.
We used an 80/20 train-test split to guarantee unbiased testing, with the split done prior to training to
avoid any data leakage. The model was trained for 100 epochs with a batch size of 16, with 20% of the
training data set aside as a validation set for early stopping and learning rate scheduling. The training
process employed the Adam optimizer with a starting learning rate of 1e-3, which was decreased by a
factor of 0.5 after every 5 epochs without validation loss improvement.
The training was conducted on an NVIDIA Tesla V100 GPU, with one epoch taking approximately 27
seconds. The model converged at approximately 95 epochs, as shown in the learning curves and the
training logs:
For evaluation, we calculated quantum state fidelity between the noiseless initial states, noisy states,
and CNN-corrected states. We also experimented with different types of noise and levels of noise to
see trends and limits in model correction performance. The experiment code and data generating
scripts are available on our GitHub repository
8. Results
Our CNN-based quantum error correction model demonstrated strong performance across various
noise types and levels. After 100 training epochs, the model achieved a test loss of 0.1519 and test
MAE of 0.0094, indicating good prediction accuracy.
8.1 Overall Fidelity Improvement
Fidelity comparisons by noise type and level, demonstrating significant improvements across all
conditions. The model increased quantum state fidelity from an average of 0.298 (noisy) to 0.774
(corrected), representing an average improvement of 0.47.
The fidelity improvement distribution shows most improvements clustered around 0.5-0.6, with some
exceptional cases exceeding 0.8 improvement and a small number of negative cases.
8.2 Performance by Noise Type
Key findings include:
1. Mixed noise shows the highest corrected fidelity (0.807) and improvement (0.567).
2. Phase damping shows the lowest improvement (0.241), despite starting with the highest noisy
fidelity.
3. Bit-flip noise demonstrates exceptional cases with improvements up to 0.84.
8.3 Performance by Noise Level
Interestingly, higher noise levels (0.15, 0.20) show greater improvement than lower levels, indicating
the model is particularly effective at correcting severely corrupted states.
8.4 Density Matrix Visualization
Visualization of original, noisy, and corrected density matrices. The model successfully recovers both
diagonal elements (populations) and off-diagonal elements (coherences) from heavily corrupted noisy
states.
9. Conclusion
This work demonstrates the effectiveness of CNN-based quantum error correction in density matrix
reconstruction. Our model provides high fidelity improvements on various types of noise and
intensities, with extremely favourable outcomes for complex mixed noise and higher intensities of
noise, demonstrating deep learning techniques as a plausible scaling option to traditional quantum
error correction codes.
For application in real-world scenarios, we propose using this method in mixed noise environments
where typical QECCs will be impractical due to qubit overhead limitations. But care must be used with
pure phase damping cases, with lesser assured performance owing to information-theoretic
necessities. For important missions with quantum computation, a hybrid approach combining CNN
with certain phase error correction methods would produce the best solution.
Future work should focus on breaking the phase damping limit, hardware validation on real quantum
devices, and scaling to more qubits. As quantum hardware capability increases, this approach can be
extended to more complex quantum algorithms and error models, which could result in practical
quantum advantage with fewer physical qubits than traditional error correction methods would.
10. References
[1] P. Czarnik, A. Arrasmith, P.J. Coles, S.M. Leichenauer. Error mitigation with Clifford quantum-circuit
data. Quantum, 5:592, 2021. https://quantum-journal.org/papers/q-2021-11-26-592/
[2] H. Chen, L. Wossnig, S. Severini, H. Neven, M. Mohseni. Universal discriminative quantum neural
networks. Quantum Machine Intelligence, 3(1), 2021. https://link.springer.com/article/10.1007/s42484-
020-00025-7
[3] K. Endo, S. Benjamin, Y. Li. Practical quantum error mitigation for near-future applications. Physical
Review X, 8:031027, 2018. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.031027
[4] S. Krastanov, L. Jiang, D. Englund, S. Guha. Neural-network decoder for topological color codes
with circuit level noise. In Proceedings of the 34th Annual Conference on Neural Information
Processing Systems (NeurIPS 2020), 2020.
https://proceedings.neurips.cc/paper/2020/hash/37bc6f5b7952a45e3f4e1170d494b140-Abstract.html
[5] Abraham, H., et al. (2023). Qiskit Textbook - Error Correction with the Repetition Code. Available at:
https://qiskit.org/textbook/ch-quantum-hardware/error-correction-repetition-code.html
[6] Ryan LaRose, Andrea Mari, Nathan Shammah, et al. Mitiq: A software package for error mitigation
on noisy quantum computers. Quantum, 6:774, 2022. GitHub: https://github.com/unitaryfund/mitiq
DOI: https://doi.org/10.22331/q-2022-08-11-774
[7] M. Broughton, G. Verdon, T. McCourt, et al. TensorFlow Quantum: A Software Framework for
Quantum Machine Learning. arXiv:2003.02989, 2020. Tutorial:
https://www.tensorflow.org/quantum/tutorials/quantum_data Paper: https://arxiv.org/abs/2003.02989
11. Appendix:
1. GitHub repository link: https://github.com/KaranKendre11/MLforQuantumNoiseReduction/tree/main
|
1. Introduction Quantum computation has the potential for exponential speedup of classical systems in some applications, such as cryptography, simulation of molecular behavior, and optimization. Nevertheless, quantum noise drastically limits the utility of quantum computation. Qubits, the fundamental units of quantum information, are very sensitive to gate errors and environmental interference. Quantum noise introduces errors that cause errors in quantum computations, as they travel onward over time. As noise is a fundamental issue with NISQ devices, error mitigation is necessary. Classical methods like quantum error correction would require the addition of additional qubits and syndrome decoding, and these are inefficient and in practice at large scale. Our research proposes a machine learning-aided approach to reduce quantum noise by learning the patterns of noise and reconstructing clean quantum states from noisy data. We formulate it as a supervised learning task with the goal of converting a noisy density matrix to its clean representation, and we achieve this using a fidelity-aware loss and CNN-based autoencoder architecture. We show how data-driven approaches can improve reconstruction accuracy. 2. Methods 2.1 Quantum Circuit Generation and Density Matrix Extraction We randomly generate quantum circuits with 5 qubits from the Cirq library. The circuits are random single-qubit and two-qubit gate sequences (X, Y, Z, H, T, S, RX, RY, RZ, CNOT, CZ, SWAP). The depth of each circuit is randomly chosen between 6 and 9 layers. After obtaining the circuit, we get the final state density matrix through Cirq's DensityMatrixSimulator. The states are each 2^5 x 2^5 = 32 x 32 matrices of complex-valued information which contains all the quantum information including entanglement and decoherence effects. 2.2 Noise Modelling We apply five different types of noise channels: • Bitflip: random X errors on qubits • Depolarizing: replaces qubit state with maximally mixed state with some probability • Amplitude Damping: models energy loss (e.g., spontaneous emission) • Phase Damping: phase information is lost without energy change • Mixed: combination of the above (uniformly sampled) Noise levels tested: 0.05, 0.10, 0.15, 0.20 2.3 CNN Autoencoder Architecture We model noise reduction using a deep convolutional autoencoder. The input is a noisy density matrix split into real and imaginary parts with shape (32, 32, 2). The output is a reconstruction of the clean matrix. • Encoder: 3 blocks (Conv2D → ReLU → MaxPooling → Dropout) • Decoder: 3 blocks (Conv2D → ReLU → UpSampling → Dropout) • Output Layer: Conv2D (2 channels, linear activation) We use a composite loss function: L = MSE (Y, Ŷ) + λ (1 - Fidelity (Y, Ŷ)) Here, the fidelity term is approximated using a normalized Frobenius inner product rather than the standard trace-based Uhlmann fidelity, which is computationally intensive for large-scale training. This approximation calculates: F(ρ, σ) ≈ Re(⟨ρ, σ⟩) / (||ρ||F · ||σ||F) Where || · ||F is the Frobenius norm , and i ⟨ρ, σ⟩s the element-wise complex inner product. This surrogate metric aligns well with fidelity trends while remaining differentiable and efficient. The inclusion of fidelity in the loss function forces the model to capture structural quantum similarities, not just pixel-wise reconstruction errors. 2.4 Dataset Pipeline • 10,000 circuit samples • Each sample: clean DM, noisy DM, noise type, and level • Data split: 80% training, 20% testing • Preprocessing: split real and imaginary parts into 2 channels 3. Dataset and Related Work Dataset We generated a synthetic dataset consisting of 10,000 density matrices derived from random quantum circuits. This approach allows us to have perfect ground truth for evaluation and control over noise parameters, which is challenging with real quantum hardware data. The dataset has the following characteristics: • 5-qubit systems (resulting in 32×32 density matrices) • Random circuit depths between 6-9 layers • 5 noise types (depolarizing, amplitude damping, phase damping, bit-flip, mixed) • 4 noise levels (0.05, 0.1, 0.15, 0.2) • Equal distribution across noise types and levels Our dataset construction is novel in its comprehensive coverage of diverse noise models and intensities, allowing for robust model training and evaluation across realistic quantum error scenarios. Related Work The intersection of quantum error correction and machine learning is an exciting new area of research on quantum computers. There have been some influential papers on related methods: Czarnik et al. [1] have suggested an error mitigation technique through training noise models using classical simulations of Clifford circuits. There is a method aiming at estimation and elimination of the effects of noise statistically rather than direct reconstruction of density matrices. While effective for specific gate sets, their approach does not generalize as well to generic quantum circuits as our CNNbased solution does, which is able to discover the underlying structure of quantum noise regardless of circuit content. Chen et al. [2] proposed discriminative quantum neural networks for quantum error correction and demonstrated their approach on small stabilizer codes. Their work addresses the decoding problem of standard QECCs rather than direct state recovery. Our density matrix reconstruction approach operates on a lower level, potentially showing advantages for near-term devices where full QECCs are no longer feasible. Endo et al. [3] provided a comprehensive review of quantum error mitigation techniques, which categorized techniques into extrapolation, probabilistic error cancellation, and symmetry verification. They did not take deep learning techniques at the density matrix level into account. Our work fills the gap by showing how neural networks can be trained to correct errors in the quantum state representation directly. Krastanov et al. [4] applied neural networks to decoding of surface codes with encouraging evidence supporting the increase in error thresholds. Their method still follows the typical QECC process of encoding one logical qubit over multiple physical qubits. Our method, however, doesn't require additional qubits and is therefore more attractive to limited-resource NISQ devices. Our CNN-based method updates these earlier papers by addressing density matrix reconstruction for multiple noise models simultaneously. Unlike traditional QECCs with a dense qubit overhead, our scheme operates on the existing quantum state without encoding. And unlike statistical error mitigation techniques, we reconstruct the full quantum state rather than merely expectation values of observables. 4. Related Implementations The application of quantum error correction techniques via machine learning has been of interest on other research platforms. We reviewed some of the associated implementations to contrast and compare our approach: The Qiskit Textbook [5] provides a full implementation of quantum error correction codes with a focus on the repetition code and surface code approaches. Their approach is primarily pedagogical and demonstrates the standard approach to quantum error correction with multiple physical qubits to encode one logical qubit. While their implementation is full, it has heavy qubit overhead and is not designed for the NISQ era. Our CNN-based approach, in contrast, does not require additional qubits and can be used directly on the quantum state representation, and therefore is more suitable for nearterm quantum hardware with small numbers of qubits. The Mitiq library [6] developed by the Unitary Fund offers a library of error mitigation techniques like zero-noise extrapolation and probabilistic error cancellation. Their Python library supports major quantum computing platforms and has been tested on real quantum hardware. However, their approach is dealing with error mitigation in expectation values of quantum observables and not recovery of the full quantum state. Our density matrix reconstruction approach provides an enhanced error correction solution by recovery of the full quantum state with the ability for more advanced quantum algorithms based on state preparation instead of expectation values. The TensorFlow Quantum [7] software includes demonstrations of quantum encoding and error correction within quantum data based on hybrid quantum-classical models. Their approach demonstrates use of the variational circuits for error correction and has promising results for some noise models. Unfortunately, their application does require quantum resources for both the encoding and the correction. Our traditional CNN protocol performs as well or better without requiring more quantum resources for the correction step, making it directly applicable to current quantum hardware and more scalable for increasing system size. 5. Data Analysis We constructed a synthetic data set of 10,000 density matrices that we derived from randomly created quantum circuits to examine the different types of noise and levels that influence quantum states. Having simulated clean as well as noisy circuits, we were able to create perfect ground truth data to train and test on. Through our data analysis, we found a number of important observations concerning the dynamics of quantum noise and its impact on quantum state fidelity. Fidelity distribution analysis across noise types revealed that phase damping noise preserves state fidelity more than any other noise type with a mean noisy fidelity of 0.541, and bit-flip was most destructive (0.200 mean fidelity). This is due to the fact that the phase damping type of noise primarily disturbs off-diagonal elements (coherences) but preserves populations, whereas bit-flip destroys computational basis states directly. The noise level-state fidelity relationship is approximately inversely dependent, such that the greater the noise level, the exponentially smaller the fidelity. This can be quantified by the -0.55 correlation coefficient of noise level-noisy fidelity in our correlation analysis below: We observed that quantum states that were exposed to mixed noise exhibited complex error patterns that had elements of multiple noise channels. This is an especially challenging scenario for classical quantum error correction schemes but presents an opportunity for machine learning approaches that can learn and discover such complex patterns. Visual examination of the density matrices revealed that the different sources of noise act on the matrix structure in distinct ways. Depolarizing noise compresses the whole density matrix in the direction of the maximally mixed state, amplitude damping pulls population towards the ground state direction, and phase damping lowers basically off-diagonal elements. All these usual signatures can be recognized in real and imaginary parts of the density matrices: The data allowed us to explore the relation of early noisy fidelity with theoretical maximum recoverable fidelity, providing insight into the underlying quantum error correction thresholds for a range of noise models and strengths. 6. Analysis Our CNN-based approach's strong performance on many types and levels of noise is due to several key features of the model and quantum noise itself. The encoder-decoder architecture is particularly suited to quantum error correction since it has the ability to learn features at multiple hierarchical scales, reflecting the manner in which quantum noise acts at both the local and global levels. The result on composite noise models is especially notable in that it represents the model's capacity to generalize from simultaneous multiple types of error. This property can be assumed to transfer to other such situations with high-complication noise models, as with superconducting or trapped-ion quantum computers, with simultaneous activity by many error processes. The model effectively learns to adopt a one-type-fits-all noise-reversing mapping instead of case-dependent correction of particular noises. Interestingly, the model does better at greater levels of noise (0.15-0.2) than at lower ones (0.05-0.1) when assessed by relative improvement. Such counterintuitive result suggests that more resilient patterns of noise are more distinctive and thus easier to identify and correct by the CNN. Such a property would be advantageous for quantum systems operating on the edge of their coherence capabilities, where high levels of noise are typical. The restrictions observed with phase damping correction are likely a result of information-theoretic restrictions - pure dephasing is a form of information loss that might be inherently reversible. Other quantum systems similar to those here affected primarily by dephasing noise might not benefit as much from this technique, marking an essential boundary condition for the applicability of CNN-based error correction. The CNN's ability to preserve quantum correlations, observed through off-diagonal density matrix elements' revival, demonstrates that such a procedure would be particularly worthwhile for entanglement and quantum coherence-intensive quantum algorithms like Shor's algorithm or quantum simulation. The preservation of quantum information distinguishes our approach from other classical error mitigation techniques that only operate on expectation values. 7. Experimental Setup Our testbed consisted of an end-to-end pipeline for simulating synthetic quantum data, training the CNN model, and evaluating its performance across different noise regimes. We used the Cirq quantum computing library to implement the circuit simulation and TensorFlow for modelling the neural network. For data generation, we generated 10,000 randomly created quantum circuits of size 5 qubits and result in the shape of 32×32 complex density matrices. The number of layers for the circuits were randomly selected in between 6-9 layers in order to provide various quantum states. All circuits were run through five varieties of various types of noise (depolarizing, amplitude damping, phase damping, bit-flip, and mixed) on four levels of noises (0.05, 0.1, 0.15, 0.2), and thus created a balanced noise dataset. We used an 80/20 train-test split to guarantee unbiased testing, with the split done prior to training to avoid any data leakage. The model was trained for 100 epochs with a batch size of 16, with 20% of the training data set aside as a validation set for early stopping and learning rate scheduling. The training process employed the Adam optimizer with a starting learning rate of 1e-3, which was decreased by a factor of 0.5 after every 5 epochs without validation loss improvement. The training was conducted on an NVIDIA Tesla V100 GPU, with one epoch taking approximately 27 seconds. The model converged at approximately 95 epochs, as shown in the learning curves and the training logs: For evaluation, we calculated quantum state fidelity between the noiseless initial states, noisy states, and CNN-corrected states. We also experimented with different types of noise and levels of noise to see trends and limits in model correction performance. The experiment code and data generating scripts are available on our GitHub repository 8. Results Our CNN-based quantum error correction model demonstrated strong performance across various noise types and levels. After 100 training epochs, the model achieved a test loss of 0.1519 and test MAE of 0.0094, indicating good prediction accuracy. 8.1 Overall Fidelity Improvement Fidelity comparisons by noise type and level, demonstrating significant improvements across all conditions. The model increased quantum state fidelity from an average of 0.298 (noisy) to 0.774 (corrected), representing an average improvement of 0.47. The fidelity improvement distribution shows most improvements clustered around 0.5-0.6, with some exceptional cases exceeding 0.8 improvement and a small number of negative cases. 8.2 Performance by Noise Type Key findings include: 1. Mixed noise shows the highest corrected fidelity (0.807) and improvement (0.567). 2. Phase damping shows the lowest improvement (0.241), despite starting with the highest noisy fidelity. 3. Bit-flip noise demonstrates exceptional cases with improvements up to 0.84. 8.3 Performance by Noise Level Interestingly, higher noise levels (0.15, 0.20) show greater improvement than lower levels, indicating the model is particularly effective at correcting severely corrupted states. 8.4 Density Matrix Visualization Visualization of original, noisy, and corrected density matrices. The model successfully recovers both diagonal elements (populations) and off-diagonal elements (coherences) from heavily corrupted noisy states. 9. Conclusion This work demonstrates the effectiveness of CNN-based quantum error correction in density matrix reconstruction. Our model provides high fidelity improvements on various types of noise and intensities, with extremely favourable outcomes for complex mixed noise and higher intensities of noise, demonstrating deep learning techniques as a plausible scaling option to traditional quantum error correction codes. For application in real-world scenarios, we propose using this method in mixed noise environments where typical QECCs will be impractical due to qubit overhead limitations. But care must be used with pure phase damping cases, with lesser assured performance owing to information-theoretic necessities. For important missions with quantum computation, a hybrid approach combining CNN with certain phase error correction methods would produce the best solution. Future work should focus on breaking the phase damping limit, hardware validation on real quantum devices, and scaling to more qubits. As quantum hardware capability increases, this approach can be extended to more complex quantum algorithms and error models, which could result in practical quantum advantage with fewer physical qubits than traditional error correction methods would. 10. References [1] P. Czarnik, A. Arrasmith, P.J. Coles, S.M. Leichenauer. Error mitigation with Clifford quantum-circuit data. Quantum, 5:592, 2021. https://quantum-journal.org/papers/q-2021-11-26-592/ [2] H. Chen, L. Wossnig, S. Severini, H. Neven, M. Mohseni. Universal discriminative quantum neural networks. Quantum Machine Intelligence, 3(1), 2021. https://link.springer.com/article/10.1007/s42484020-00025-7 [3] K. Endo, S. Benjamin, Y. Li. Practical quantum error mitigation for near-future applications. Physical Review X, 8:031027, 2018. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.031027 [4] S. Krastanov, L. Jiang, D. Englund, S. Guha. Neural-network decoder for topological color codes with circuit level noise. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS 2020), 2020. https://proceedings.neurips.cc/paper/2020/hash/37bc6f5b7952a45e3f4e1170d494b140-Abstract.html [5] Abraham, H., et al. (2023). Qiskit Textbook - Error Correction with the Repetition Code. Available at: https://qiskit.org/textbook/ch-quantum-hardware/error-correction-repetition-code.html [6] Ryan LaRose, Andrea Mari, Nathan Shammah, et al. Mitiq: A software package for error mitigation on noisy quantum computers. Quantum, 6:774, 2022. GitHub: https://github.com/unitaryfund/mitiq [7] M. Broughton, G. Verdon, T. McCourt, et al. TensorFlow Quantum: A Software Framework for Quantum Machine Learning. , 2020. Tutorial: https://www.tensorflow.org/quantum/tutorials/quantum_data Paper: https://arxiv.org/abs/2003.02989 11. Appendix: 1. GitHub repository link: https://github.com/KaranKendre11/MLforQuantumNoiseReduction/tree/main
|
2509.16244
|
HOW CAN QUANTUM DEEP LEARNING IMPROVE LARGE LANGUAGE MODELS?
Emily Jimin Roh†, Hyojun Ahn†, Samuel Yen-Chi Chen‡, Soohyun Park§, Joongheon Kim†
†Korea University
‡Wells Fargo
§Sookmyung Women’s University
ABSTRACT
The rapid progress of large language models (LLMs) has
transformed natural language processing, yet the challenge
of efficient adaptation remains unresolved. Full fine-tuning
achieves strong performance but imposes prohibitive com-
putational and memory costs. Parameter-efficient fine-tuning
(PEFT) strategies, such as low-rank adaptation (LoRA), Pre-
fix tuning, and sparse low-rank adaptation (SoRA), address
this issue by reducing trainable parameters while maintaining
competitive accuracy.
However, these methods often en-
counter limitations in scalability, stability, and generalization
across diverse tasks. Recent advances in quantum deep learn-
ing introduce novel opportunities through quantum-inspired
encoding and parameterized quantum circuits (PQCs). In par-
ticular, the quantum-amplitude embedded adaptation (QAA)
framework demonstrates expressive model updates with min-
imal overhead. This paper presents a systematic survey and
comparative analysis of conventional PEFT methods and
QAA. The analysis demonstrates trade-offs in convergence,
efficiency, and representational capacity, while providing
insight into the potential of quantum approaches for future
LLM adaptation.
Index Terms— Quantum Deep Learning, Quantum-
Amplitude Embedded Adaptation, Parameter-Efficient Fine-
Tuning, Large Language Model
1. INTRODUCTION
Large language models (LLMs) have emerged as essential
backbones in natural language processing, which makes di-
verse applications from open-domain dialogue to specialized
text generation [1]. Their effectiveness is largely attributed to
massive parameterization and extensive pre-training, which
This research was supported by the Institute of Information & Communi-
cations Technology Planning & Evaluation (IITP) grant funded by the Korea
government [MSIT (Ministry of Science and ICT (Information and Commu-
nications Technology))] (RS-2024-00439803, SW Star Lab) for Quantum AI
Empowered Second-Life Platform Technology; and also by the National Re-
search Foundation of Korea (RS-2025-00561377).
The views expressed in this article are those of the authors and do not
represent the views of Wells Fargo. This article is for informational purposes
only. Nothing contained in this article should be construed as investment
advice. Wells Fargo makes no express or implied warranties and expressly
disclaims all legal, tax, and accounting implications related to this article.
Corresponding Author: Joongheon Kim (joongheon@korea.ac.kr)
Fig. 1: Overview of the QAA framework, a quantum deep
learning approach for the LLM fine-tuning.
provide strong generalization across tasks. Full fine-tuning
provides high accuracy but requires extensive resources, lim-
iting deployment in environments with constrained computa-
tion or energy budgets [2].
To address this limitation, parameter-efficient fine-tuning
(PEFT) techniques have emerged as viable alternatives. Low-
rank adaptation (LoRA) reduces the number of trainable
parameters through low-rank decomposition of weight up-
dates [3].
Prefix tuning introduces task-specific vectors at
the input level, while sparse LoRA (SoRA) extends low-rank
approaches with sparsity constraints for improved scala-
bility [4]. Nevertheless, many approaches still require the
update of millions of parameters in large-scale models, which
imposes significant memory overhead.
Quantum deep learning introduces a new paradigm for
LLM fine-tuning. Quantum encoding combined with param-
eterized quantum circuits (PQCs) enables expressive transfor-
mations, thereby allowing LLM fine-tuning to be performed
more efficiently with a reduced number of trainable param-
eters [5]. Quantum-amplitude embedded adaptation (QAA)
extends this principle by mapping classical hidden states
into quantum states, which produces compact yet powerful
updates [6]. Unlike conventional PEFT methods, QAA lever-
ages quantum superposition and entanglement to preserve
representational richness under strict parameter constraints.
This paper analyzes full tuning, LoRA, Prefix tuning,
SoRA, and QAA in the context of LLM adaptation. The dis-
cussion provides a evaluation of efficiency and convergence.
This study also highlights the unique role of quantum meth-
arXiv:2509.16244v1 [quant-ph] 17 Sep 2025
Table 1: Comparison of representative PEFT methods based on GPT-Neo in terms of main contributions, fine-tuning complex-
ity, and trainable parameter ratio. Here, d denotes the hidden dimension size, r is the rank used in low-rank adaptation, reff is
the effective rank after sparsity adjustment in SoRA, and l the prefix length.
Method
Ref.
Main Contribution
Fine-Tuning Complexity
#Trainable Parameter and Ratio
Full Tuning
[7]
Updates all parameters of the pre-trained model with-
out any restriction, which achieves strong downstream
performance but with extremely high computational and
memory costs.
O(d2)
125,198,592 (100%)
LoRA
[8]
Introduces trainable low-rank matrices into each layer
while freezing the backbone, and this enables efficient
adaptation under the hypothesis that model updates are
intrinsically low-dimensional. Provides a strong trade-
off between performance and efficiency.
O(dr)
147,456 (0.12%)
SoRA
[4]
Extends LoRA by allowing dynamic and sparse adjust-
ment of the intrinsic rank during training. A gating unit,
optimized via proximal gradient methods, adaptively
prunes redundant components. This achieves higher ef-
ficiency and often better accuracy than fixed-rank LoRA
while reducing the number of trainable parameters (e.g.,
0.91M vs. 1.33M for r = 8).
O(dreff), reff < rmax
125,337 (0.10%)
Prefix Tuning
[9]
Learns a sequence of continuous trainable prefix vectors
prepended to the input of each transformer layer. This
conditions the model to new tasks without modifying
the original weights, but introduces additional sequence
length during training and inference.
O(ld)
552,960 (0.44%)
QAA
[6]
Proposed quantum-inspired adapter method that lever-
ages amplitude embedding and PQCs.
It enables ex-
pressive representation power with logarithmic scaling in
qubit space, thereby providing parameter-efficient adap-
tation while maintaining competitive accuracy.
O(d log d)
123,000 (0.09%)
ods in overcoming scalability bottlenecks and shaping the
next generation of fine-tuning strategies for LLMs.
2. PRELIMINARY
2.1. LLM Fine-Tuning Framework
Modern LLMs contain billions of parameters, which makes
full adaptation prohibitively expensive. Given a dataset D =
{(xi, yi)}N
i=1 and a pre-trained model PΦ(y | x) with param-
eters Φ, the full fine-tuning objective can be formulated as,
max
Φ
X
(x,y)∈D
X|y|
t=1 log PΦ(yt | x, y<t).
(1)
Since updating all Φ is impractical for large |Φ|, PEFT
introduces a small set of trainable parameters θ, with |θ| ≪
|Φ|. The update function ∆h(θ) modifies the model as Φ +
∆h(θ), which is defined as,
max
θ
X
(x,y)∈D
X|y|
t=1 log PΦ+∆h(θ)(yt | x, y<t).
(2)
Beyond classical PEFT, quantum-inspired approaches de-
fine ∆Φ(θ) through amplitude embedding and parameterized
circuits, enabling compact yet expressive adaptation.
2.2. Related Work
Recent studies have proposed a variety of PEFT techniques
for adapting LLMs.
Table 1 compares representative ap-
proaches in terms of methodological design, fine-tuning com-
plexity, and parameter efficiency.
Full fine-tuning directly
updates the entire parameter set Φ ∈R|Φ| of a pre-trained
model for each downstream task and can be expressed as,
Φ′ = Φ + ∆Φ,
∆Φ ∈R|Φ|,
(3)
which achieves strong performance but incurs O(d2) com-
plexity and requires storing a full model copy per task. Here,
d denotes the hidden dimension of the model. This makes full
fine-tuning infeasible for billion-scale LLMs [7].
To address this limitation, researchers have developed
methods that introduce restricted sets of trainable compo-
nents while keeping the backbone largely frozen.
LoRA
reduces the trainable parameter space by factorizing weight
updates into low-rank matrices defined as,
∆W = AB⊤,
A ∈Rd×r, B ∈Rd×r, r ≪d,
(4)
and applying the effective weight update W ′ = W + ∆W,
where W denotes the original weight matrix. This approach
lowers the number of trainable parameters to O(dr) while re-
taining competitive accuracy [8].
Building on this idea, SoRA extends LoRA by dynami-
cally adjusting and sparsifying the effective rank through a
gating vector, optimized using proximal gradients as,
∆W = A diag(g)B⊤,
g ∈Rr,
(5)
which adaptively prunes redundant components. This method
often achieves better accuracy than fixed-rank LoRA while
using fewer effective parameters [4].
Another approach is Prefix Tuning, which learns continu-
ous prefix vectors P ∈Rl×d that are prepended to the input
of each transformer block defined as,
h′ = f([P; x]; Φ),
(6)
where x is the input sequence, f(·) denotes the frozen back-
bone, and l represents the prefix length. The computational
cost scales as O(ld) [9].
More recently, QAA adopts a quantum amplitude em-
bedding strategy that compresses an input x ∈Rd into log d
qubits.
The embedded states are processed through PQC
composed of RX rotation gates and CNOT entanglement
gates, which enable expressive non-linear transformations.
The output is then mapped back to the original dimension
through an additional linear up projection, allowing fine-
tuning with a complexity of O(d log d).
A more detailed
description of QAA is provided in Section 3.
3. DETAILS OF QUANTUM-AMPLITUDE
EMBEDDED ADAPTATION
QAA is presented as a quantum deep learning approach for
enhancing the performance of LLMs, where conventional lin-
ear adapters are replaced with compact quantum modules that
enable expressive and parameter-efficient adaptation. By em-
bedding hidden states into a quantum Hilbert space, QAA en-
ables non-linear transformations with a logarithmic number
of qubits, which produces task-specific residuals ∆h while
significantly reducing parameter counts.
As illustrated in Fig. 1, the QAA framework follows four
stages: i) quantum amplitude embedding of input activations,
ii) quantum processing via PQC, iii) measurement and up pro-
jection to recover the model dimension, and iv) optimization
through the parameter-shift rule. The following subsections
provide details of each stage and outline its theoretical advan-
tages.
3.1. Quantum Amplitude Embedding
A quantum state defines the configuration of a quantum sys-
tem and is mathematically represented as a unit vector in a
complex Hilbert space Cd. Let {|i⟩}d
i=1 denote an orthonor-
mal basis of Cd, where each |i⟩corresponds to a distinct clas-
sical state. A general state is expressed as,
|ψ⟩=
Xd
i=1 αi|i⟩,
with
Xd
i=1 |αi|2 = 1,
(7)
Fig. 2: Illustration of how QAA operates within the GPT ar-
chitecture.
where αi ∈C are amplitudes. A measurement collapses |ψ⟩
into basis state |i⟩with probability |αi|2, thereby providing
a probabilistic encoding of all classical indices in superpo-
sition. This property enables quantum systems to represent
exponentially many configurations simultaneously [10].
In QAA, hidden vectors from transformer layers are en-
coded into quantum states using amplitude embedding. Let
x ∈Rd denote a hidden activation vector. The smallest num-
ber of qubits n is chosen such that 2n ≥d, embedding x into
an n-qubit Hilbert space C2n. The vector is normalized as,
˜x =
x
∥x∥2
,
(8)
where ∥x∥2 =
qPd−1
k=0 x2
k, and xk is the k-th entry of the
vector x and ∥x∥2 denotes the ℓ2 norm. This guarantees that
˜x has unit norm, ensuring physical validity as a quantum state.
The normalized vector is mapped to,
|x⟩=
X2n−1
k=0 ˜xk |k⟩,
(9)
where |k⟩denotes the computational basis state correspond-
ing to the binary encoding of index k. This process com-
presses the d-dimensional vector into log2 d qubits while pre-
serving the structure of the original activations [6].
3.2. Parameterized Quantum Circuit
After embedding, the quantum state transforms a PQC U(θ).
A single-qubit gate rotates each qubit j,
RX(θj) = exp
−iθj
2 X
,
(10)
where θj ∈R is a trainable parameter and X is the Pauli-X
matrix
0
1
1
0
. These rotations introduce non-linear degrees
of freedom. To capture correlations between qubits, CNOT
gates are applied,
CNOTj,j+1 |a⟩j |b⟩j+1 = |a⟩j |a ⊕b⟩j+1 ,
(11)
where a, b ∈{0, 1} and ⊕denotes the XOR operation. This
introduces quantum entanglement, which allows the PQC to
model joint dependencies beyond local linear effects [11].
3.3. Measurement and Up Projection
The evolved quantum state is represented as |ψ(θ)⟩
=
U(θ) |x⟩. To extract classical information, each qubit j is
measured in the Pauli-Z basis as,
zj = ⟨ψ(θ)| Zj |ψ(θ)⟩,
j = 1, . . . , n,
(12)
where Zj is the Pauli-Z observable acting on qubit j. This
produces a vector z ∈Rn that summarizes the circuit output.
Since n ≪d, a linear up projection is applied as,
ˆx = W ⊤z,
W ∈Rn×d,
(13)
where W is a trainable projection matrix. The result ˆx is in-
terpreted as the residual update ∆h, which is added to the
frozen hidden state hbase, which forms the adapted represen-
tation hadapted = hbase + ∆h.
3.4. Optimization with Parameter-Shift Rule
To train the trainable parameters of PQC θ, QAA employs
the parameter-shift rule. For an observable O, the expectation
value is defined as,
f(θj) = ⟨ψ(θ)| O |ψ(θ)⟩.
(14)
Its gradient with respect to θj is computed as follows,
∂f
∂θj
= 1
2
h
f(θj + π
2 ) −f(θj −π
2 )
i
.
(15)
This avoids direct differentiation through non-analytic quan-
tum operations. The gradients are combined with a classical
loss L, and each parameter is updated as,
θj ←θj −η · ∂L
∂θj
,
(16)
where η is the learning rate. This hybrid procedure integrates
quantum parameter updates into classical backpropagation.
Table 2: Specifications of hardware platforms, and software
environments for Evaluation.
System
Specification (Value)
Platform (PC)
OS: Ubuntu 20.04
CPU: Intel(R) Xeon(R) CPU E5-2698 v4
GPU: NVIDIA RTX-4090 (24 GB VRAM)
Memory: 256 GB DDR5
Software version
Python: v3.10
CUDA: v11.8
PyTorch: v2.1.2
Transformers (HF): v4.44.2
PEFT: v0.11.1
Datasets: v2.14.5
Pennylane: v0.36.0
3.5. Implementation QAA on LLMs
The integration of QAA into LLMs is designed to replace
conventional adapter modules with quantum-enhanced com-
ponents while keeping the majority of the backbone frozen [12].
As illustrated in Fig. 2, QAA modules are inserted at mul-
tiple transformer layers, specifically after the self-attention
and feedforward blocks. The base transformer weights re-
main fixed, and QAA generates task-specific residuals that
are added to the hidden representations. This design enables
efficient adaptation without modifying the full parameter
set of the pre-trained model. This implementation strategy
highlights two key advantages. First, QAA enables scalable
integration within LLMs by operating as a plug-in module,
which ensures compatibility with transformer-based archi-
tectures. Second, it preserves the representational richness
of hidden states through quantum-inspired transformations,
which achieves expressive and efficient fine-tuning with log-
arithmic qubit complexity and linear projection overhead.
4. PERFORMANCE EVALUATION
To compare the representative PEFT methods, including full
tuning, LoRA, SoRA, Prefix tuning, and the proposed QAA,
experiments are conducted under the simulation environment
summarized in Table 2.
4.1. Quantitative Results
Table 3 reports the performance of various PEFT strategies
in terms of BLEU, BERTScore (F1), and ROUGE metrics,
where each value represents the average score computed
over 100 generation sentences based on the Alpaca dataset.
Full fine-tuning achieves the highest overall accuracy with
BLEU of 12.19, BERTScore of 84.69, and ROUGE of
20.39/12.64/20.25, but at the cost of training all parame-
ters. LoRA achieves competitive performance, with BLEU
of 3.45 and BERTScore of 78.33, while requiring only 0.12%
Table 3: Comparison of the NLG evaluation metrics using
different PEFT methods.
Method
#TP Ratio
BLEU
BERTF1
ROUGE
Full
100%
12.19
84.69
20.39 / 12.64 / 20.25
LoRA
0.12%
3.45
78.33
13.60 / 6.66 / 10.57
SoRA
0.10%
0.67
77.67
7.43 / 1.43 / 5.41
Prefix
0.44%
0.38
58.29
7.18 / 1.82 / 6.77
QAA
0.09%
2.96
78.74
15.01 / 3.89 / 13.55
Fig. 3: Training loss comparison across 1,000 steps.
of the parameters.
SoRA further improves efficiency by
adaptively reducing redundant ranks, which yields BLEU
of 2.67 and BERTScore of 77.67 with 0.09% parameters.
Prefix tuning, despite using 0.44% parameters, shows lower
effectiveness with BLEU of 0.38 and BERTScore of 58.29,
indicating difficulty in stable convergence for generative
tasks. QAA demonstrates a strong balance between efficiency
and performance. With only 0.09% trainable parameters, it
achieves BLEU of 2.96, BERTScore of 78.74, and ROUGE
of 15.01/3.89/13.55. Although full fine-tuning remains the
upper bound, QAA consistently outperforms Prefix tuning
and shows comparable performance to LoRA and SoRA
while maintaining a significantly smaller parameter budget.
These results validate that QAA provides a promising path
for efficient yet expressive LLM adaptation.
4.2. Training Loss Convergence Analysis
The training loss curves across 1,000 steps are illustrated in
Fig. 3. Full fine-tuning converges fastest due to the complete
parameter space. Among PEFT methods, QAA exhibits a no-
tably smooth and rapid convergence trajectory, outperform-
ing Prefix and SoRA tuning and closely following LoRA.
The variance in loss reduction for QAA remains lower than
that of LoRA, SoRA, and Prefix tuning, which highlights the
stabilizing effect of amplitude embedding and quantum cir-
cuit expressivity. These observations confirm that QAA pro-
vides stable gradient flow with reduced parameter complex-
ity, enabling efficient training without sacrificing convergence
speed.
5. CONCLUSION
This work provided a comprehensive survey and analysis
of PEFT strategies for LLMs, including full tuning, LoRA,
SoRA, Prefix tuning, and QAA. Through systematic evalua-
tion, QAA is shown to deliver a favorable balance between
efficiency and performance, which offers competitive per-
formance with significantly fewer trainable parameters. The
overall analysis highlights QAA as a promising direction
that complements classical PEFT methods while demonstrat-
ing the potential of quantum deep learning in future LLM
adaptation.
6. REFERENCES
[1] Die Hu, Jingguo Ge, Weitao Tang, Guoyi Li, Liangxiong Li, and
Bingzhen Wu,
“WebSurfer: enhancing LLM agents with web-wise
feedback for web navigation,” in Proc. IEEE International Conference
on Acoustics, Speech and Signal Processing, (ICASSP), Hyderabad, In-
dia, Apr. 2025.
[2] Hyojun Ahn, Seungcheol Oh, Gyu Seon Kim, Soyi Jung, Soohyun
Park, and Joongheon Kim, “Hallucination-aware generative pretrained
transformer for cooperative aerial mobility control,”
in Proc. IEEE
Global Communications Conference (GLOBECOM), Taipei, Taiwan,
Dec. 2025.
[3] Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi
Li, Shean Wang, Lu Wang, and Weizhu Chen, “LoRA: Low-rank adap-
tation of large language models,” in Proc. International Conference on
Learning Representations (ICLR), Virtual, Apr. 2022.
[4] Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou,
Zhiyuan Liu, and Maosong Sun, “Sparse low-rank adaptation of pre-
trained language models,” in Proc. Empirical Methods in Natural Lan-
guage Processing (EMNLP), Singapore, Dec. 2023.
[5] Soohyun Park, Jae Pyoung Kim, Chanyoung Park, Soyi Jung, and
Joongheon Kim,
“Quantum multi-agent reinforcement learning for
autonomous mobility cooperation,” IEEE Communications Magazine,
vol. 62, no. 6, pp. 106–112, Jun. 2024.
[6] Emily Jimin Roh and Joongheon Kim, “Quantum-amplitude embed-
ded adaptation for parameter-efficient fine-tuning in large language
models,” in Proc. ACM International Conference on Information and
Knowledge Management (CIKM), Seoul, Korea, Nov. 2025.
[7] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu, “Ex-
ploring the limits of transfer learning with a unified text-to-text trans-
former,” Journal of Machine Learning Research, vol. 21, no. 1, pp.
5485–5551, Jan. 2020.
[8] Edward J. Hu et al., “LoRA: Low-rank adaptation of large language
models,” in Proc. International Conference on Learning Representa-
tions (ICLR), Virtual, Apr. 2022.
[9] Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, and Jun Zhao, “Zero-
shot cross-lingual event argument extraction with language-oriented
prefix-tuning,” in Proc. Association for the Advancement of Artificial
Intelligence (AAAI), Washington DC, USA, Feb. 2023, vol. 37.
[10] Gyu Seon Kim, Yeryeong Cho, Jaehyun Chung, Soohyun Park, Soyi
Jung, Zhu Han, and Joongheon Kim, “Quantum multi-agent reinforce-
ment learning for cooperative mobile access in space-air-ground inte-
grated networks,” IEEE Transactions on Mobile Computing, pp. 1–18,
2025 (Early Access).
[11] Tyler Wang, Huan-Hsin Tseng, and Shinjae Yoo, “Quantum federated
learning with quantum networks,” in Proc. IEEE International Con-
ference on Acoustics, Speech and Signal Processing (ICASSP), Seoul,
Republic of Korea, Apr. 2024.
[12] Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao
Huang, Mohit Bansal, and Colin Raffel, “Few-shot parameter-efficient
fine-tuning is better and cheaper than in-context learning,”
in Proc.
Advances in Neural Information Processing Systems (NeurIPS), New
Orleans, USA, Dec. 2022.
|
HOW CAN QUANTUM DEEP LEARNING IMPROVE LARGE LANGUAGE MODELS? Emily Jimin Roh†, Hyojun Ahn†, Samuel Yen-Chi Chen‡, Soohyun Park§, Joongheon Kim† †Korea University ‡Wells Fargo §Sookmyung Women's University ABSTRACT The rapid progress of large language models (LLMs) has transformed natural language processing, yet the challenge of efficient adaptation remains unresolved. Full fine-tuning achieves strong performance but imposes prohibitive computational and memory costs. Parameter-efficient fine-tuning (PEFT) strategies, such as low-rank adaptation (LoRA), Prefix tuning, and sparse low-rank adaptation (SoRA), address this issue by reducing trainable parameters while maintaining competitive accuracy. However, these methods often encounter limitations in scalability, stability, and generalization across diverse tasks. Recent advances in quantum deep learning introduce novel opportunities through quantum-inspired encoding and parameterized quantum circuits (PQCs). In particular, the quantum-amplitude embedded adaptation (QAA) framework demonstrates expressive model updates with minimal overhead. This paper presents a systematic survey and comparative analysis of conventional PEFT methods and QAA. The analysis demonstrates trade-offs in convergence, efficiency, and representational capacity, while providing insight into the potential of quantum approaches for future LLM adaptation. Index Terms- Quantum Deep Learning, QuantumAmplitude Embedded Adaptation, Parameter-Efficient FineTuning, Large Language Model 1. INTRODUCTION Large language models (LLMs) have emerged as essential backbones in natural language processing, which makes diverse applications from open-domain dialogue to specialized text generation [1]. Their effectiveness is largely attributed to massive parameterization and extensive pre-training, which This research was supported by the - cations Technology Planning & Evaluation (IITP) grant funded by the Korea government [MSIT (Ministry of Science and ICT (Information and Communications Technology))] (RS-2024-00439803, SW Star Lab) for Quantum AI Empowered Second-Life Platform Technology; and also by the National Research Foundation of Korea (RS-2025-00561377). The views expressed in this article are those of the authors and do not represent the views of Wells Fargo. This article is for informational purposes only. Nothing contained in this article should be construed as investment advice. Wells Fargo makes no express or implied warranties and expressly disclaims all legal, tax, and accounting implications related to this article. Corresponding Author: Joongheon Kim ( ) Fig. 1: Overview of the QAA framework, a quantum deep learning approach for the LLM fine-tuning. provide strong generalization across tasks. Full fine-tuning provides high accuracy but requires extensive resources, limiting deployment in environments with constrained computation or energy budgets [2]. To address this limitation, parameter-efficient fine-tuning (PEFT) techniques have emerged as viable alternatives. Lowrank adaptation (LoRA) reduces the number of trainable parameters through low-rank decomposition of weight updates [3]. Prefix tuning introduces task-specific vectors at the input level, while sparse LoRA (SoRA) extends low-rank approaches with sparsity constraints for improved scalability [4]. Nevertheless, many approaches still require the update of millions of parameters in large-scale models, which imposes significant memory overhead. Quantum deep learning introduces a new paradigm for LLM fine-tuning. Quantum encoding combined with parameterized quantum circuits (PQCs) enables expressive transformations, thereby allowing LLM fine-tuning to be performed more efficiently with a reduced number of trainable parameters [5]. Quantum-amplitude embedded adaptation (QAA) extends this principle by mapping classical hidden states into quantum states, which produces compact yet powerful updates [6]. Unlike conventional PEFT methods, QAA leverages quantum superposition and entanglement to preserve representational richness under strict parameter constraints. This paper analyzes full tuning, LoRA, Prefix tuning, SoRA, and QAA in the context of LLM adaptation. The discussion provides a evaluation of efficiency and convergence. This study also highlights the unique role of quantum meth17 Sep 2025 Table 1: Comparison of representative PEFT methods based on GPT-Neo in terms of main contributions, fine-tuning complexity, and trainable parameter ratio. Here, d denotes the hidden dimension size, r is the rank used in low-rank adaptation, reff is the effective rank after sparsity adjustment in SoRA, and l the prefix length. Method Ref. Main Contribution Fine-Tuning Complexity #Trainable Parameter and Ratio Full Tuning [7] Updates all parameters of the pre-trained model without any restriction, which achieves strong downstream performance but with extremely high computational and memory costs. O(d2) 125,198,592 (100%) LoRA [8] Introduces trainable low-rank matrices into each layer while freezing the backbone, and this enables efficient adaptation under the hypothesis that model updates are intrinsically low-dimensional. Provides a strong tradeoff between performance and efficiency. O(dr) 147,456 (0.12%) SoRA [4] Extends LoRA by allowing dynamic and sparse adjustment of the intrinsic rank during training. A gating unit, optimized via proximal gradient methods, adaptively prunes redundant components. This achieves higher efficiency and often better accuracy than fixed-rank LoRA while reducing the number of trainable parameters (e.g., 0.91M vs. 1.33M for r = 8). O(dreff), reff < rmax 125,337 (0.10%) Prefix Tuning [9] Learns a sequence of continuous trainable prefix vectors prepended to the input of each transformer layer. This conditions the model to new tasks without modifying the original weights, but introduces additional sequence length during training and inference. O(ld) 552,960 (0.44%) QAA [6] Proposed quantum-inspired adapter method that leverages amplitude embedding and PQCs. It enables expressive representation power with logarithmic scaling in qubit space, thereby providing parameter-efficient adaptation while maintaining competitive accuracy. O(d log d) 123,000 (0.09%) ods in overcoming scalability bottlenecks and shaping the next generation of fine-tuning strategies for LLMs. 2. PRELIMINARY 2.1. LLM Fine-Tuning Framework Modern LLMs contain billions of parameters, which makes full adaptation prohibitively expensive. Given a dataset D = {(xi, yi)}N i=1 and a pre-trained model PΦ(y | x) with parameters Φ, the full fine-tuning objective can be formulated as, max Φ X (x,y)∈D X|y| t=1 log PΦ(yt | x, y<t). (1) Since updating all Φ is impractical for large |Φ|, PEFT introduces a small set of trainable parameters θ, with |θ| ≪ |Φ|. The update function ∆h(θ) modifies the model as Φ + ∆h(θ), which is defined as, max θ X (x,y)∈D X|y| t=1 log PΦ+∆h(θ)(yt | x, y<t). (2) Beyond classical PEFT, quantum-inspired approaches define ∆Φ(θ) through amplitude embedding and parameterized circuits, enabling compact yet expressive adaptation. 2.2. Related Work Recent studies have proposed a variety of PEFT techniques for adapting LLMs. Table 1 compares representative approaches in terms of methodological design, fine-tuning complexity, and parameter efficiency. Full fine-tuning directly updates the entire parameter set Φ ∈R|Φ| of a pre-trained model for each downstream task and can be expressed as, Φ′ = Φ + ∆Φ, ∆Φ ∈R|Φ|, (3) which achieves strong performance but incurs O(d2) complexity and requires storing a full model copy per task. Here, d denotes the hidden dimension of the model. This makes full fine-tuning infeasible for billion-scale LLMs [7]. To address this limitation, researchers have developed methods that introduce restricted sets of trainable components while keeping the backbone largely frozen. LoRA reduces the trainable parameter space by factorizing weight updates into low-rank matrices defined as, ∆W = AB⊤, A ∈Rd×r, B ∈Rd×r, r ≪d, (4) and applying the effective weight update W ′ = W + ∆W, where W denotes the original weight matrix. This approach lowers the number of trainable parameters to O(dr) while retaining competitive accuracy [8]. Building on this idea, SoRA extends LoRA by dynamically adjusting and sparsifying the effective rank through a gating vector, optimized using proximal gradients as, ∆W = A diag(g)B⊤, g ∈Rr, (5) which adaptively prunes redundant components. This method often achieves better accuracy than fixed-rank LoRA while using fewer effective parameters [4]. Another approach is Prefix Tuning, which learns continuous prefix vectors P ∈Rl×d that are prepended to the input of each transformer block defined as, h′ = f([P; x]; Φ), (6) where x is the input sequence, f(·) denotes the frozen backbone, and l represents the prefix length. The computational cost scales as O(ld) [9]. More recently, QAA adopts a quantum amplitude embedding strategy that compresses an input x ∈Rd into log d qubits. The embedded states are processed through PQC composed of RX rotation gates and CNOT entanglement gates, which enable expressive non-linear transformations. The output is then mapped back to the original dimension through an additional linear up projection, allowing finetuning with a complexity of O(d log d). A more detailed description of QAA is provided in Section 3. 3. DETAILS OF QUANTUM-AMPLITUDE EMBEDDED ADAPTATION QAA is presented as a quantum deep learning approach for enhancing the performance of LLMs, where conventional linear adapters are replaced with compact quantum modules that enable expressive and parameter-efficient adaptation. By embedding hidden states into a quantum Hilbert space, QAA enables non-linear transformations with a logarithmic number of qubits, which produces task-specific residuals ∆h while significantly reducing parameter counts. As illustrated in Fig. 1, the QAA framework follows four stages: i) quantum amplitude embedding of input activations, ii) quantum processing via PQC, iii) measurement and up projection to recover the model dimension, and iv) optimization through the parameter-shift rule. The following subsections provide details of each stage and outline its theoretical advantages. 3.1. Quantum Amplitude Embedding A quantum state defines the configuration of a quantum system and is mathematically represented as a unit vector in a complex Hilbert space Cd. Let {|i⟩}d i=1 denote an orthonormal basis of Cd, where each |i⟩corresponds to a distinct classical state. A general state is expressed as, |ψ⟩= Xd i=1 αi|i⟩, with Xd i=1 |αi|2 = 1, (7) Fig. 2: Illustration of how QAA operates within the GPT architecture. where αi ∈C are amplitudes. A measurement collapses |ψ⟩ into basis state |i⟩with probability |αi|2, thereby providing a probabilistic encoding of all classical indices in superposition. This property enables quantum systems to represent exponentially many configurations simultaneously [10]. In QAA, hidden vectors from transformer layers are encoded into quantum states using amplitude embedding. Let x ∈Rd denote a hidden activation vector. The smallest number of qubits n is chosen such that 2n ≥d, embedding x into an n-qubit Hilbert space C2n. The vector is normalized as, ̃x = x ∥x∥2 , (8) where ∥x∥2 = qPd-1 k=0 x2 k, and xk is the k-th entry of the vector x and ∥x∥2 denotes the l2 norm. This guarantees that ̃x has unit norm, ensuring physical validity as a quantum state. The normalized vector is mapped to, |x⟩= X2n-1 k=0 ̃xk |k⟩, (9) where |k⟩denotes the computational basis state corresponding to the binary encoding of index k. This process compresses the d-dimensional vector into log2 d qubits while preserving the structure of the original activations [6]. 3.2. Parameterized Quantum Circuit After embedding, the quantum state transforms a PQC U(θ). A single-qubit gate rotates each qubit j, RX(θj) = exp -iθj 2 X , (10) where θj ∈R is a trainable parameter and X is the Pauli-X matrix 0 1 1 0 . These rotations introduce non-linear degrees of freedom. To capture correlations between qubits, CNOT gates are applied, CNOTj,j+1 |a⟩j |b⟩j+1 = |a⟩j |a ⊕b⟩j+1 , (11) where a, b ∈{0, 1} and ⊕denotes the XOR operation. This introduces quantum entanglement, which allows the PQC to model joint dependencies beyond local linear effects [11]. 3.3. Measurement and Up Projection The evolved quantum state is represented as |ψ(θ)⟩ = U(θ) |x⟩. To extract classical information, each qubit j is measured in the Pauli-Z basis as, zj = ⟨ψ(θ)| Zj |ψ(θ)⟩, j = 1, . . . , n, (12) where Zj is the Pauli-Z observable acting on qubit j. This produces a vector z ∈Rn that summarizes the circuit output. Since n ≪d, a linear up projection is applied as, ˆx = W ⊤z, W ∈Rn×d, (13) where W is a trainable projection matrix. The result ˆx is interpreted as the residual update ∆h, which is added to the frozen hidden state hbase, which forms the adapted representation hadapted = hbase + ∆h. 3.4. Optimization with Parameter-Shift Rule To train the trainable parameters of PQC θ, QAA employs the parameter-shift rule. For an observable O, the expectation value is defined as, f(θj) = ⟨ψ(θ)| O |ψ(θ)⟩. (14) Its gradient with respect to θj is computed as follows, ∂f ∂θj = 1 2 h f(θj + π 2 ) -f(θj -π 2 ) i . (15) This avoids direct differentiation through non-analytic quantum operations. The gradients are combined with a classical loss L, and each parameter is updated as, θj ←θj -η · ∂L ∂θj , (16) where η is the learning rate. This hybrid procedure integrates quantum parameter updates into classical backpropagation. Table 2: Specifications of hardware platforms, and software environments for Evaluation. System Specification (Value) Platform (PC) OS: Ubuntu 20.04 CPU: Intel(R) Xeon(R) CPU E5-2698 v4 GPU: NVIDIA RTX-4090 (24 GB VRAM) Memory: 256 GB DDR5 Software version Python: v3.10 CUDA: v11.8 PyTorch: v2.1.2 Transformers (HF): v4.44.2 PEFT: v0.11.1 Datasets: v2.14.5 Pennylane: v0.36.0 3.5. Implementation QAA on LLMs The integration of QAA into LLMs is designed to replace conventional adapter modules with quantum-enhanced components while keeping the majority of the backbone frozen [12]. As illustrated in Fig. 2, QAA modules are inserted at multiple transformer layers, specifically after the self-attention and feedforward blocks. The base transformer weights remain fixed, and QAA generates task-specific residuals that are added to the hidden representations. This design enables efficient adaptation without modifying the full parameter set of the pre-trained model. This implementation strategy highlights two key advantages. First, QAA enables scalable integration within LLMs by operating as a plug-in module, which ensures compatibility with transformer-based architectures. Second, it preserves the representational richness of hidden states through quantum-inspired transformations, which achieves expressive and efficient fine-tuning with logarithmic qubit complexity and linear projection overhead. 4. PERFORMANCE EVALUATION To compare the representative PEFT methods, including full tuning, LoRA, SoRA, Prefix tuning, and the proposed QAA, experiments are conducted under the simulation environment summarized in Table 2. 4.1. Quantitative Results Table 3 reports the performance of various PEFT strategies in terms of BLEU, BERTScore (F1), and ROUGE metrics, where each value represents the average score computed over 100 generation sentences based on the Alpaca dataset. Full fine-tuning achieves the highest overall accuracy with BLEU of 12.19, BERTScore of 84.69, and ROUGE of 20.39/12.64/20.25, but at the cost of training all parameters. LoRA achieves competitive performance, with BLEU of 3.45 and BERTScore of 78.33, while requiring only 0.12% Table 3: Comparison of the NLG evaluation metrics using different PEFT methods. Method #TP Ratio BLEU BERTF1 ROUGE Full 100% 12.19 84.69 20.39 / 12.64 / 20.25 LoRA 0.12% 3.45 78.33 13.60 / 6.66 / 10.57 SoRA 0.10% 0.67 77.67 7.43 / 1.43 / 5.41 Prefix 0.44% 0.38 58.29 7.18 / 1.82 / 6.77 QAA 0.09% 2.96 78.74 15.01 / 3.89 / 13.55 Fig. 3: Training loss comparison across 1,000 steps. of the parameters. SoRA further improves efficiency by adaptively reducing redundant ranks, which yields BLEU of 2.67 and BERTScore of 77.67 with 0.09% parameters. Prefix tuning, despite using 0.44% parameters, shows lower effectiveness with BLEU of 0.38 and BERTScore of 58.29, indicating difficulty in stable convergence for generative tasks. QAA demonstrates a strong balance between efficiency and performance. With only 0.09% trainable parameters, it achieves BLEU of 2.96, BERTScore of 78.74, and ROUGE of 15.01/3.89/13.55. Although full fine-tuning remains the upper bound, QAA consistently outperforms Prefix tuning and shows comparable performance to LoRA and SoRA while maintaining a significantly smaller parameter budget. These results validate that QAA provides a promising path for efficient yet expressive LLM adaptation. 4.2. Training Loss Convergence Analysis The training loss curves across 1,000 steps are illustrated in Fig. 3. Full fine-tuning converges fastest due to the complete parameter space. Among PEFT methods, QAA exhibits a notably smooth and rapid convergence trajectory, outperforming Prefix and SoRA tuning and closely following LoRA. The variance in loss reduction for QAA remains lower than that of LoRA, SoRA, and Prefix tuning, which highlights the stabilizing effect of amplitude embedding and quantum circuit expressivity. These observations confirm that QAA provides stable gradient flow with reduced parameter complexity, enabling efficient training without sacrificing convergence speed. 5. CONCLUSION This work provided a comprehensive survey and analysis of PEFT strategies for LLMs, including full tuning, LoRA, SoRA, Prefix tuning, and QAA. Through systematic evaluation, QAA is shown to deliver a favorable balance between efficiency and performance, which offers competitive performance with significantly fewer trainable parameters. The overall analysis highlights QAA as a promising direction that complements classical PEFT methods while demonstrating the potential of quantum deep learning in future LLM adaptation. 6. REFERENCES [1] Die Hu, Jingguo Ge, Weitao Tang, Guoyi Li, Liangxiong Li, and Bingzhen Wu, "WebSurfer: enhancing LLM agents with web-wise feedback for web navigation," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, (ICASSP), Hyderabad, India, Apr. 2025. [2] Hyojun Ahn, Seungcheol Oh, Gyu Seon Kim, Soyi Jung, Soohyun Park, and Joongheon Kim, "Hallucination-aware generative pretrained transformer for cooperative aerial mobility control," in Proc. IEEE Global Communications Conference (GLOBECOM), Taipei, Taiwan, Dec. 2025. [3] Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen, "LoRA: Low-rank adaptation of large language models," in Proc. International Conference on Learning Representations (ICLR), Virtual, Apr. 2022. [4] Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun, "Sparse low-rank adaptation of pretrained language models," in Proc. Empirical Methods in Natural Language Processing (EMNLP), Singapore, Dec. 2023. [5] Soohyun Park, Jae Pyoung Kim, Chanyoung Park, Soyi Jung, and Joongheon Kim, "Quantum multi-agent reinforcement learning for autonomous mobility cooperation," IEEE Communications Magazine, vol. 62, no. 6, pp. 106-112, Jun. 2024. [6] Emily Jimin Roh and Joongheon Kim, "Quantum-amplitude embedded adaptation for parameter-efficient fine-tuning in large language models," in Proc. ACM International Conference on Information and Knowledge Management (CIKM), Seoul, Korea, Nov. 2025. [7] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485-5551, Jan. 2020. [8] Edward J. Hu et al., "LoRA: Low-rank adaptation of large language models," in Proc. International Conference on Learning Representations (ICLR), Virtual, Apr. 2022. [9] Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, and Jun Zhao, "Zeroshot cross-lingual event argument extraction with language-oriented prefix-tuning," in Proc. Association for the Advancement of Artificial Intelligence (AAAI), Washington DC, USA, Feb. 2023, vol. 37. [10] Gyu Seon Kim, Yeryeong Cho, Jaehyun Chung, Soohyun Park, Soyi Jung, Zhu Han, and Joongheon Kim, "Quantum multi-agent reinforcement learning for cooperative mobile access in space-air-ground integrated networks," IEEE Transactions on Mobile Computing, pp. 1-18, 2025 (Early Access). [11] Tyler Wang, Huan-Hsin Tseng, and Shinjae Yoo, "Quantum federated learning with quantum networks," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, Apr. 2024. [12] Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel, "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning," in Proc. Advances in Neural Information Processing Systems (NeurIPS), New Orleans, USA, Dec. 2022.
|
2509.16240
|
Off-Critical Zeros Contradict Contraction in the Dynamical
Reformulation of the Riemann Hypothesis
Hendrik W. A. E. Kuipers
Independent Researcher, Groningen, The Netherlands
hwaekuipers@gmail.com
September 23, 2025
MSC 2020. 11M26 (Primary); 11N05, 37A25, 11N35 (Secondary).
Abstract
We continue the dynamical reformulation of the Riemann Hypothesis initiated in [1]. The
framework is built from an integer map in which composites advance by π(m) and primes
retreat by their prime gap, producing trajectories whose contraction properties encode the
distribution of primes. In this setting, RH is equivalent to the persistence of contraction
inequalities for trajectory-based error functionals E(X), eE(X) across multiplicative scales.
In the present paper we prove that if ζ(s) has a zero off the critical line ℜ(s) = 1
2, then the
Landau–Littlewood Ω-theorem forces oscillations of size xβ in ψ(x) −x. A window-capture
lemma shows that these oscillations are inherited by the composite-only window suprema
E(X), and hence by E(X), producing lower bounds that contradict contraction. Thus any
off–critical zero is incompatible with contraction.
Headline result. Part I introduced the dynamical system and showed that RH is equiva-
lent to the persistence of contraction inequalities. Part II proves that off–critical zeros force
oscillations in ψ(x) −x that inevitably violate contraction. Taken together, these two steps
close the circle: contraction characterizes RH, and off–critical zeros contradict contraction.
Hence every nontrivial zero of ζ(s) lies on the critical line. More generally, the present re-
sult shows that whenever contraction holds, the critical line is forced as the only location of
nontrivial zeros. In this sense, the critical line is not merely the conjectured locus of zeros,
but the equilibrium point singled out by contraction itself.
1
Introduction
The Riemann Hypothesis (RH) asserts that all nontrivial zeros of the Riemann zeta function
ζ(s) lie on the critical line ℜ(s) = 1
2.
In a recent work [1], we introduced a dynamical reformulation of RH based on a simple
integer map: composites advance by π(m), the number of primes ≤m, while primes step
backward by their preceding prime gap. This system generates trajectories whose large-scale
contraction properties are equivalent to RH. In particular, we established:
• one-visit and parent-window lemmas restricting local composite hits,
• macro-step alignment and core overlap across scales,
• a frequency-netting lemma ensuring oscillatory coherence,
• and from these, contraction inequalities for the normalized error functionals E(X), eE(X).
1
arXiv:2509.16240v1 [math.DS] 16 Sep 2025
Iterating these inequalities yields the von Koch–level bound, known to be equivalent to RH [3].
Thus RH holds if and only if contraction persists uniformly across scales.
The aim of the present paper is to close this circle. We prove that if any nontrivial zero
of ζ(s) lies off the critical line, then the contraction inequalities are violated. The argument
relies on classical Ω-results of Landau [4] and Littlewood [5], which show that an off-critical zero
forces oscillations of size xβ in the Chebyshev error ψ(x) −x. We show that such oscillations
are inherited by the composite-only window suprema E(X), contradicting contraction.
In contrast to Part I, which worked with Eπ(x) := π(x)−Li(x), the present paper is formulated
in terms of Eψ(x) := ψ(x) −x, which is more natural for the analytic arguments below (see
Section 2 and Appendix A).
On relation to Part I. This is Part II of a two–paper project. Part I [1] introduced the
dynamical system and established that RH is equivalent to the persistence of contraction in-
equalities for E(X). Part II shows that any off–critical zero forces oscillations that inevitably
violate contraction. Taken together, the two parts close the circle: contraction characterizes
RH, and off–critical zeros contradict contraction. Hence all nontrivial zeros of ζ(s) lie on the
critical line.
2
Background and Setup
For X ≥2, define the one-visit and parent windows
WX =
X, (1 + 0.1/ log X)X
,
f
WX =
X, (1 + 2/ log X)X
.
Let Ncomp denote the set of composite positive integers, and write
Eψ(y) := ψ(y) −y.
Error functionals.
For X ≥X0, define
E(X) :=
sup
y∈WX∩Ncomp
|Eψ(y)|,
eE(X) :=
sup
y∈f
WX∩Ncomp
X
m∈T (y; f
WX)
m∈Ncomp
Eψ(m)
.
Here T (y; f
WX) denotes the segment of the (Part I) trajectory of y that lies inside f
WX.
Remark (Bridge to Part I). Switching from Eπ(x) := π(x) −Li(x) to Eψ(x) := ψ(x) −x is not
merely harmless at the X1/2 log X scale (see Appendix A), but in fact the natural formulation
for the present argument. Part I introduced contraction inequalities and the reformulation of
RH through the elegant integer map, where Eπ arose directly. Here we generalize the setup to
Eψ, whose analytic properties are sharper and better suited to the application of classical Ω-
results. In this way, the contraction framework extends seamlessly from the integer-map setting
to the Chebyshev function.
Lemma 2.1 (Parent functional dominates window supremum). For all sufficiently large X, we
have
E(X) ≤eE(X).
Proof. Let y∗∈WX ∩Ncomp attain E(X). The trajectory started at y∗passes through y∗while
inside f
WX, so the parent-window sum for this trajectory includes the term |Eψ(y∗)| = E(X),
and hence eE(X) ≥E(X).
2
3
Contraction Framework
Theorem 3.1 (Contraction inequality, [1]). There exists C > 0 such that for all sufficiently
large X,
eE(X) ≤
5
6 eE(X3/4) + C X1/2 log X.
This is Part I, Thm. 6.3/6.4 (macro-step contraction) combined with Lem. 6.5 (iteration).
Corollary 3.2 (von Koch–level bound).
eE(X) ≪X1/2 log X.
Lemma 3.3 (Parent-window hits). Fix a single trajectory of the map and put U = log X and
f
WX = [ X, (1 + 2
U )X ].
For X ≥e120, let N be the number of composite iterates of this trajectory that lie in f
WX (i.e.,
the number of indices j with the iterate yj ∈f
WX and yj composite, so the next forward step is
s(yj) = π(yj)). Then
N ≤4.
Remark. On f
WX one has the uniform estimate
π(m) = X
U + O
X
U2
(m ∈f
WX),
so consecutive composite landings are separated by at least
1 −8
U
X
U .
Since |f
WX| = 2X/U, this yields
N −1 ≤
2X/U
(1 −8/U) X/U =
2
1 −8/U
< 2.2
(U ≥120),
which actually forces N ≤3. We retain the looser bound N ≤4 to keep constants synchronized
with Part I.
Proof (sketch). Write s(m) = π(m) for the forward step length at a composite m.
(i) Monotonicity and scale on the window. On f
WX we have s(m) non-decreasing, and by explicit
prime-counting bounds (e.g. Dusart-type),
s(m) = X
U + O
X
U2
,
uniformly for m ∈f
WX,
with an absolute implied constant (fixed once and for all in the assumptions box).
(ii) Spacing between consecutive composite landings. Let y1 < y2 < · · · < yN be the successive
composite landings of the same trajectory that fall in f
WX. Between two such landings yj and
yj+1, the net displacement equals one forward composite jump s(mj) plus a bounded “local
variation” term that accounts for:
• the slow drift of s(m) across f
WX (monotone with slope ≍X/U3);
• any short backward prime jump(s) in the interim, which are absorbed at the O(X/U2)
scale in our log–scale normalization.
3
Hence there is a constant C∗(explicit and small; one may take C∗= 8 conservatively for
U ≥120) such that
yj+1 −yj ≥X
U −C∗
X
U2 =
1 −C∗
U
X
U
≥1
2
X
U
(U ≥120).
In particular the minimal spacing dmin between successive composite landings satisfies dmin ≥
(1 −C∗
U ) X/U > 1
2 X/U.
(iii) Packing bound in the window. The parent-window length is
|f
WX| = 2X
U .
Packing successive landings with gaps at least dmin inside an interval of this length gives
N −1 ≤|f
WX|
dmin
≤
2X/U
(1 −C∗/U) X/U =
2
1 −C∗/U .
For U ≥120 and the conservative C∗= 8, the right-hand side is < 2.2, which already implies
N ≤3; the stated N ≤4 is a laxer bound used to keep constants synchronized with Part I.
X
(1 + 2
U )X
|f
WX| = 2X
U
≈X/U
dots: composite landings
arrows: forward steps s(m) = π(m)
spacing ≥(1 −C∗
U ) X
U ; window length = 2X
U
Figure 1: Window length vs. (nearly constant) step length: at most four composite landings fit.
4
Contribution of an Off-Critical Zero
Assume ζ(ρ) = 0 with ρ = β + iγ and β > 1
2.
4.1
Landau–Littlewood lower bound
If β > 1/2, there exist cρ > 0 and xn →∞with
|ψ(xn) −xn| ≥cρ xβ
n;
see Titchmarsh [6, §14.25] and Edwards [13, Ch. 8].
4.2
Window capture
If x = (1 + δ)X with 0 < δ ≤0.1/ log X, then x ∈WX. Since log x ≍log X on WX, we have
x
X
β
= 1 + O
1
log X
.
4
4.3
Lower bound for the window/trajectory functionals
Lemma 4.1 (Composite selection lemma). Let X be large and WX = [X, (1 + 0.1/ log X)X].
Suppose x ∈WX satisfies |ψ(x)−x| ≥M. Then there exists a composite y ∈WX with |y−x| ≤1
such that
|ψ(y) −y| ≥M −(log(2X) + 1).
In particular, if M ≥2(log(2X) + 1) then |ψ(y) −y| ≥1
2M.
Proof. Pick n ∈{⌊x⌋, ⌈x⌉} so that |n −x| ≤1. Among {n, n ± 1} there is an even integer
y ≥4, hence composite, with |y −x| ≤1. Because WX has length ≍X/ log X ≫1, we still
have y ∈WX.
Between integers, ψ is constant and t 7→ψ(t) −t has slope −1. At an integer m, ψ jumps
by Λ(m) ≤log m ≤log(2X) on [X, 2X]. Thus for u, v ∈[X, 2X],
|ψ(u) −u −(ψ(v) −v)| ≤|u −v| +
X
m∈(u,v]
Λ(m) ≤|u −v| + log(2X) · #{m ∈(u, v]}.
With |u −v| ≤1 this is ≤log(2X) + 1. Taking u = x, v = y gives the claim, i.e.
|ψ(y) −y| ≥|ψ(x) −x| −log(2X) −1
(|y −x| ≤1, y ∈C ∩WX).
Proposition 4.2 (Lower bound for E(X) and eE(X)). Assume ζ(ρ) = 0 with ρ = β + iγ and
β > 1
2. Then there exist Xn →∞and c > 0 such that
E(Xn) ≥c Xβ
n
and hence
eE(Xn) ≥c Xβ
n.
Proof. Let xn be given by the Landau–Littlewood bound. Set Xn := xn/(1 + 0.1/ log xn) so
xn ∈WXn and log xn ≍log Xn. Apply Lemma 4.1 with M = cρxβ
n to obtain a composite
yn ∈WXn, |yn −xn| ≤1, with
|ψ(yn) −yn| ≥cρxβ
n −(log(2Xn) + 1).
Since xβ
n ≫log xn for β > 1/2, the subtraction is ≤1
2cρxβ
n for large n, giving |ψ(yn) −yn| ≥
1
2cρxβ
n ≍Xβ
n. Hence E(Xn) ≥|ψ(yn) −yn| ≫Xβ
n. Lemma 2.1 then gives eE(Xn) ≥E(Xn).
5
Contradiction and Conclusion
From Corollary 3.2, the contraction framework of [1] yields
E(X) ≪X1/2 log X,
(X →∞).
(1)
Since eE(X) ≥E(X) by Lemma 2.1, any lower bound for E(X) transfers immediately to eE(X).
On the other hand, if ζ(s) has a zero ρ = β + iγ with β > 1/2, then by §4.3 we obtain a
subsequence Xn →∞such that
E(Xn) ≫Xβ
n.
(2)
Comparing (1) and (2), the ratio
Xβ
X1/2 log X = Xβ−1/2
log X
tends to infinity as X →∞for every β > 1/2. Thus the lower bound (2) eventually dominates
the contraction bound (1), giving a contradiction.
We use the smoothed explicit formula with kernel W(t) = (1 + t2)−3 and truncation height
T = 1
2(log X)3; the remainder is bounded uniformly by 10X1/2 for X ≥e120. The derivation
of this bound with explicit constants is given in Appendix E, while Appendix D contains the
large–sieve budget ledger used in the contraction engine.
5
Theorem 5.1 (Riemann Hypothesis, assuming [1]). Assume the contraction inequality estab-
lished in [1]. Then every nontrivial zero of ζ(s) lies on the critical line ℜ(s) = 1
2.
Proof. If ζ(ρ) = 0 with ρ = β + iγ and β > 1/2, then §4.3 gives a subsequence Xn →∞with
E(Xn) ≫Xβ
n. This contradicts the contraction bound E(X) ≪X1/2 log X from [1]. Hence no
off–critical zeros exist, and every nontrivial zero lies on the critical line.
6
Conclusion and Outlook
Assuming the contraction inequality established in [1], the contradiction proved in Section 5
rules out all off–critical zeros. Thus, off–critical zeros are incompatible with contraction; taken
together with [1], this closes the circle and yields the Riemann Hypothesis.
The key feature is a rigidity principle: contraction across multiplicative scales cannot co-
exist with the oscillations of size xβ in ψ(x) −x that an off–critical zero would force. This
incompatibility is analytic in nature, independent of any particular realization of contraction.
In the dynamical reformulation of [1], contraction emerges from the integer map and the critical
line appears as its unique equilibrium. More generally, the present result shows that whenever
contraction holds, the critical line is forced as the only location of nontrivial zeros. In this sense,
the critical line is not merely the conjectured locus of zeros, but the equilibrium point singled
out by contraction itself.
Further directions.
• Operator–theoretic formulation. Recast contraction as a spectral property of an as-
sociated transfer operator, potentially linking the dynamical reformulation to the Nyman–
Beurling approach.
• Sharper error budgets. Optimizing the remainder term in the smoothed explicit for-
mula could enlarge the safety margin in the contraction budget and clarify the robustness
of the method.
• Numerical investigations. Simulations of contraction phenomena at finite scales may
illuminate how oscillations manifest and inspire further generalizations.
Acknowledgments
The author acknowledges the use of OpenAI’s GPT-5 language model as a tool for mathematical
exploration and expository preparation during the development of this manuscript.
Appendix A. Bridging π(x) −Li(x) and ψ(x) −x
In Part I the contraction inequalities were formulated in terms of
Eπ(x) := π(x) −Li(x),
Eψ(x) := ψ(x) −x.
In this part we work with Eψ to leverage sharper Ω-results.
The next lemmas show that
switching between Eπ and Eψ is harmless at the von Koch scale x1/2 log x.
Lemma A.1 (Partial-summation bridge from ψ to π). For x ≥3 one has
π(x) −Li(x) = ψ(x) −x
log x
+
Z x
2
ψ(t) −t
t (log t)2 dt −Rpp(x),
(3)
6
where
Rpp(x) =
X
k≥2
θ(x1/k)
log x
+
Z x
2
θ(t1/k)
t (log t)2 dt
!
,
Rpp(x) ≪x1/2.
In particular,
π(x) −Li(x) = ψ(x) −x
log x
+
Z x
2
ψ(t) −t
t (log t)2 dt + O
x1/2
.
(4)
Proof. Partial summation yields
π(x) = θ(x)
log x +
Z x
2
θ(t)
t(log t)2 dt,
Li(x) =
x
log x +
Z x
2
dt
(log t)2 .
Since θ = ψ −P
k≥2 θ( · 1/k), subtract to get (3). Using θ(y) ≤y log y and that the k = 2 term
dominates shows Rpp(x) ≪x1/2.
Lemma A.2 (Reverse bridge from π to ψ). For x ≥3 one has
ψ(x) −x = log x
π(x) −Li(x)
−
Z x
2
π(t) −Li(t)
t
dt + eRpp(x),
(5)
with eRpp(x) = P
k≥2 θ(x1/k) −C0 and
eRpp(x) ≪x1/2.
Proof. Stieltjes integration by parts gives
θ(x) = π(x) log x −
Z x
2
π(t)
t
dt,
x = Li(x) log x −
Z x
2
Li(t)
t
dt + C0.
Now write ψ = θ + P
k≥2 θ( · 1/k) and subtract to get (5). The bound eRpp(x) ≪x1/2 again
follows from θ(y) ≤y log y with the k = 2 term dominant.
Remark (Remainder sizes at the von Koch scale).
Rpp(x) ≪x1/2,
eRpp(x) ≪x1/2.
These bounds are negligible against x1/2 log x and leave all equivalences below unchanged.
Corollary A.3 (Equivalence of von Koch–level bounds). As x →∞,
Eπ(x) ≪x1/2 log x
⇐⇒
Eψ(x) ≪x1/2 log x,
with absolute implied constants.
Proof. Apply (4) and (5). If Eψ(x) ≪x1/2 log x, then the RHS of (4) is ≪x1/2 log x. Conversely,
if Eπ(x) ≪x1/2 log x, then the RHS of (5) is ≪x1/2 log x since the integral term is softer by
one logarithm and eRpp(x) ≪x1/2.
Corollary A.4 (Ω-transfer). If ζ has a zero ρ = β + iγ with β > 1/2, then
Eψ(x) = Ω±
xβ
,
Eπ(x) = Ω±
xβ
log x
.
Window/trajectory comparability. Let
A◦(X) :=
sup
y∈WX∩Ncomp
|E◦(y)|,
eA◦(X) :=
sup
trajectories
X
m∈f
WX∩trajectory
m∈Ncomp
|E◦(m)|,
7
with E◦∈{Eπ, Eψ}. By (4) and log x ≍log X on WX we have
|Eπ(y)| = |Eψ(y)|
log X
+ O
X1/2
log X
(y ∈WX ∩Ncomp),
which promotes to the parent-window sums since, by Lemma 3.3, a single trajectory contributes
at most 4 composite points inside f
WX.
Corollary A.5 (Trajectory-level comparability). At the X1/2 log X scale, the contraction in-
equalities for ( eAπ, Aπ) are equivalent (up to absolute constants) to those for ( eAψ, Aψ).
Appendix B. Constant ledger
Frozen parameters (global throughout). For all statements below we fix
U = log X (≥120),
T = 1
2 U 3,
h = 2
U ,
L =
(log(4/3)) U
.
All bounds hold uniformly for X ≥e120 and improve monotonically with larger U.
Smoothed explicit formula: remainder budget. Writing
Eπ(y) = ℜ
X
|γ|≤T
yρ
ρ log y W(γ/T) + R(y; T),
W(t) = (1 + t2)−3,
we decompose R = Rtriv + RΓ + Rtail + Rsmooth and record the conservative per–piece bounds (valid for
y ≍X with U = log X ≥120):
|R(y; T)| ≤|Rtriv| + |RΓ| + |Rtail| + |Rsmooth| ≤10 X1/2.
Table 1: Remainder components and conservative bounds (all ×X1/2).
Piece
Description
Prototype bound
Budget used
Rtriv
Trivial zeros / low-t terms
≪X−3/2/ log X
≤0.1
RΓ
Gamma–factor/Stirling contribution
≪X1/2 log T
T
≤0.1
Rtail
Zeta zero tail (|γ| > T)
≪X1/2 T −6
≤0.5
Rsmooth
Smoothing/truncation (kernel derivatives)
≪X1/2
≤9.3
Total
≤10.0
Remark. The “budget used” column is purely accounting: it allocates a conservative share to each piece
so the sum is ≤10X1/2. Tighter constants are possible but unnecessary.
Contraction factor audit. Let αeff denote the one–scale contraction factor obtained by composing
the core contraction, macro–step distortion, and overlap terms. With U = log X ≥120,
αeff ≤
5
6
|{z}
core contraction
1 + 2
U
|
{z
}
macro-step distortion
11
12
|{z}
overlap
= 3355
4320 < 0.7767.
Since U 7→(1 + 2/U) is decreasing, the bound improves for larger X.
Appendix C. Frequency netting on a uniform log-scale grid
We give the spacing-free large-sieve style bound used in the parent-window contraction, and a
rigorous transfer from the uniform grid to the actual zeros of ζ(s).
8
Uniform grid bound
Let h > 0, T > 0. Set N := ⌊T/h⌋, Γ := {γm := m h : −N ≤m ≤N}. For u1, . . . , uM ∈R
and weights w1, . . . , wM ∈C (after coalescing coincident points) define
S(γ) :=
M
X
j=1
wjeiγuj.
Theorem C.1 (Log-scale netting on a uniform grid).
X
γ0∈Γ
M
X
j=1
wjeiγ0uj
2
≤
4M T
h + M
M
X
j=1
|wj|2.
In particular, if M ≤4 then
X
γ0∈Γ
M
X
j=1
wjeiγ0uj
2
≤
16 T
h + 4
M
X
j=1
|wj|2.
Sketch. Write the quadratic form P
γ0 |S(γ0)|2 = W ∗KW with entries Kjk = PN
m=−N eimh(uj−uk).
Schur/Gershgorin with a near/far split and |Kjk| ≤min(2N + 1,
4
h|uj−uk|) yields the row-sum
bound ≤4NM + M, hence the claim.
From zeros on the critical line to the uniform grid
Lemma C.2 (Local zero count on length-h intervals). For T ≥3 and h ∈(0, 1], and all
t ∈[−T, T],
N
t + h
2
−N
t −h
2
≤C0
1 + h log(2T)
,
where N(u) counts zeros ρ = 1
2 + iγ with 0 < γ ≤u, and C0 is absolute.
Proof. From the Riemann–von Mangoldt formula N(u) =
u
2π log u
2π −
u
2π + O(log u) and the
mean value theorem.
Lemma C.3 (Zeros-to-grid ℓ2 comparison). Let h ∈(0, 1], T ≥3, and Γ = {−N, . . . , N} · h
with N = ⌊T/h⌋. For S(γ) = PM
j=1 wjeiγuj,
X
|γ|≤T
S(γ)
2 ≤C1
1 + h log(2T)
X
γ0∈Γ
S(γ0)
2 + C2
1 + h log(2T)
T
h M
M
X
j=1
|wj|2.
Proof. Partition [−T, T] into cells I(γ0) := [γ0 −h/2, γ0 + h/2). For γ ∈I(γ0) write S(γ) =
S(γ0) + (γ −γ0)S′(ξ) with S′(ξ) = i P ujwjeiξuj. Then
|S(γ)|2 ≤2|S(γ0)|2 + h2
2
X
|uj||wj|
2
≤2|S(γ0)|2 + h2
2
X
|uj|2X
|wj|2
.
In our application, uj are log-scale positions with |uj| ≪U := log X, hence P |uj|2 ≤M(κU)2.
Summing over at most C0(1 + h log(2T)) zeros per cell (Lemma C.2) and over |Γ| ≪T/h cells
gives the claim.
Corollary C.4 (Specialization to h = 2
U , T = 1
2U3). With U = log X, h = 2
U , T = 1
2U3, and
M ≤4,
X
|γ|≤T
|S(γ)|2 ≤C3 U4
M
X
j=1
|wj|2.
9
Proof. Here h log(2T) ≍(2/U) log U = O(1) and T/h = U4/4. Combine Theorem C.1 with
Lemma C.3.
Corollary C.5 (Weighted ℓ1 bound over zeros). With the same specialization,
X
|γ|≤T
1
|ρ|
S(γ)
≤C4 U2 M
X
j=1
|wj|21/2
.
Proof. By Cauchy–Schwarz,
X
|γ|≤T
|S(γ)|
|ρ|
≤
X
|γ|≤T
1
1
4 + γ2
1/2
·
X
|γ|≤T
|S(γ)|21/2
≪U2X
|wj|21/2
.
Remark (Absorption into the X1/2 log X budget). In the parent-window analysis M ≤4, the
explicit-formula weights include 1/ log y and the smoothing kernel, and the zero-sum is truncated
at T = 1
2U3. Corollary C.5 therefore yields a contribution ≪U2(P |wj|2)1/2, which is absorbed
into the error term C X1/2 log X of the contraction inequality; see Appendix D.
Appendix D. Budget ledger for the large–sieve step
Throughout we fix the global parameters
U = log X (≥120),
T = 1
2U3,
h = 2
U .
Write S(γ) = PM
j=1 wj eiγuj for the window–restricted exponential sum arising from the smoothed
explicit formula (with at most M ≤4 composite points in the parent window). Weights satisfy
|wj| ≍X1/2/ log X for y ≍X, after including the factor 1/(ρ log y) and the kernel W(γ/T).
Table 2: U–power audit for the large–sieve / frequency–netting bound.
Source
Expression
U–scaling
Comment
Zero–cell count
T
h
U3/2
2/U = U4
4
grid spacing h = 2
U up to height T = 1
2U3
Zeros per cell
≪1 + h log T
≪1 + 2
U log(U3/2)
= O(1) since log U
U
→0
Cauchy–Schwarz (ℓ1 →ℓ2)
—
U2
converts P |S(γ)|/|ρ| to
P |S(γ)|21/2
Window multiplicity
M ≤4
√
M ≤2
PM
j=1 |wj|21/2 ≤
√
M maxj |wj|
Weight scale
|wj|
≍X1/2/ log X
from yρ/(ρ log y) W(γ/T) with y ≍X
Net scaling for the sum over zeros
U2 ·
√
M · (X1/2/ log X)
Combining the rows and using U = log X and M ≤4 yields
X
|γ|≤T
|S(γ)|
|ρ|
≪U2
M
X
j=1
|wj|2
1/2
≤U2√
M max
j
|wj| ≪(log X)2·2· X1/2
log X ≪C X1/2 log X,
(6)
for an absolute constant C > 0. This is the only input from the large–sieve side needed in the
contraction inequality; all remaining constants appear in Appendix C.
10
Appendix E. Smoothed explicit formula: a uniform remainder
bound with explicit constants
Statement
Fix X ≥e120 and set U := log X ≥120. For y ≍X, let
T := 1
2 U3,
W(t) := (1 + t2)−3,
W(0) = 1,
0 < W(t) ≤(1 + t2)−3.
Let ρ = 1
2 + iγ range over the nontrivial zeros of ζ(s), and define
E(y) := π(y) −Li(y),
S(y; T) := ℜ
X
|γ|≤T
yρ
ρ log y W(γ/T).
Then
E(y) = S(y; T) + R(y; T),
(7)
with R(y; T) = Rtriv + RΓ + Rtail + Rsmooth and
|R(y; T)| ≤10 X1/2
for all y ≍X, X ≥e120.
(8)
Remainder pieces
(1) Trivial zeros.
|Rtriv(y; T)| ≤y−2/ log y ≤10−40X1/2.
(2) Gamma-factor.
Using Stirling and the decay of W(γ/T),
|RΓ(y; T)| ≤C y1/2 log T
T
≤0.1 X1/2
(U ≥120).
(3) Tail |γ| > T.
With W(γ/T) ≤(T/|γ|)6 and |ρ|−1 ≤2/|γ|,
|Rtail(y; T)| ≤2y1/2
log y T 6 X
|γ|>T
1
|γ|7 ≤0.5 X1/2.
(4) Smoothing/truncation.
Standard contour/smoothing estimates give
|Rsmooth(y; T)| ≤C y1/2
1
T + 1
U
≤0.4 X1/2.
Adding the four pieces yields (8).
References
[1] H. W. A. E. Kuipers, A Dynamical Reformulation of the Riemann Hypothesis via Trajectory
Contraction, preprint (2025).
[2] B. Riemann, “¨Uber die Anzahl der Primzahlen unter einer gegebenen Gr¨osse,” Monats-
berichte der Berliner Akademie, November 1859. English translation in H. M. Edwards,
Riemann’s Zeta Function, Academic Press, 1974.
[3] H. von Koch, “Sur la distribution des nombres premiers,” Acta Math. 24 (1901), 159–182.
[4] E. Landau, “¨Uber die Nullstellen der Zetafunktion,” Math. Ann. 56 (1903), 645–675.
11
[5] J. E. Littlewood, “Sur la distribution des nombres premiers,” C. R. Acad. Sci. Paris 158
(1914), 1869–1872.
[6] E. C. Titchmarsh, The Theory of the Riemann Zeta-Function, 2nd ed., revised by
D. R. Heath-Brown, Oxford University Press, 1986.
[7] J. C. Lagarias, “An elementary problem equivalent to the Riemann Hypothesis,” Amer.
Math. Monthly 109 (2002), 534–543.
[8] B. Nyman, “On some groups and semigroups of translations,” Doctoral thesis, University
of Uppsala, 1950.
[9] A. Beurling, “A closure problem related to the Riemann zeta function,” Proc. Nat. Acad.
Sci. U.S.A. 41 (1955), 312–314.
[10] L. B´aez-Duarte, “A strengthening of the Nyman–Beurling criterion for the Riemann Hy-
pothesis,” Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 16 (2005), 99–117.
[11] H. Iwaniec and E. Kowalski, Analytic Number Theory, AMS Colloquium Publications,
Vol. 53, American Mathematical Society, Providence, RI, 2004.
[12] E. Bombieri, “The Riemann Hypothesis,” in J. Carlson, A. Jaffe, A. Wiles (eds.), The
Millennium Prize Problems, Clay Mathematics Institute, 2006. (Original Clay problem
description, 2000).
[13] H. M. Edwards, Riemann’s Zeta Function, Academic Press, New York, 1974. Reprinted by
Dover Publications, Mineola, NY, 2001.
12
|
Off-Critical Zeros Contradict Contraction in the Dynamical Reformulation of the Riemann Hypothesis Hendrik W. A. E. Kuipers Independent Researcher, Groningen, The Netherlands September 23, 2025 MSC 2020. 11M26 (Primary); 11N05, 37A25, 11N35 (Secondary). Abstract We continue the dynamical reformulation of the Riemann Hypothesis initiated in [1]. The framework is built from an integer map in which composites advance by π(m) and primes retreat by their prime gap, producing trajectories whose contraction properties encode the distribution of primes. In this setting, RH is equivalent to the persistence of contraction inequalities for trajectory-based error functionals E(X), eE(X) across multiplicative scales. In the present paper we prove that if ζ(s) has a zero off the critical line R(s) = 1 2, then the Landau-Littlewood Ω-theorem forces oscillations of size xβ in ψ(x) -x. A window-capture lemma shows that these oscillations are inherited by the composite-only window suprema E(X), and hence by E(X), producing lower bounds that contradict contraction. Thus any off-critical zero is incompatible with contraction. Headline result. Part I introduced the dynamical system and showed that RH is equivalent to the persistence of contraction inequalities. Part II proves that off-critical zeros force oscillations in ψ(x) -x that inevitably violate contraction. Taken together, these two steps close the circle: contraction characterizes RH, and off-critical zeros contradict contraction. Hence every nontrivial zero of ζ(s) lies on the critical line. More generally, the present result shows that whenever contraction holds, the critical line is forced as the only location of nontrivial zeros. In this sense, the critical line is not merely the conjectured locus of zeros, but the equilibrium point singled out by contraction itself. 1 Introduction The Riemann Hypothesis (RH) asserts that all nontrivial zeros of the Riemann zeta function ζ(s) lie on the critical line R(s) = 1 2. In a recent work [1], we introduced a dynamical reformulation of RH based on a simple integer map: composites advance by π(m), the number of primes ≤m, while primes step backward by their preceding prime gap. This system generates trajectories whose large-scale contraction properties are equivalent to RH. In particular, we established: • one-visit and parent-window lemmas restricting local composite hits, • macro-step alignment and core overlap across scales, • a frequency-netting lemma ensuring oscillatory coherence, • and from these, contraction inequalities for the normalized error functionals E(X), eE(X). 1 16 Sep 2025 Iterating these inequalities yields the von Koch-level bound, known to be equivalent to RH [3]. Thus RH holds if and only if contraction persists uniformly across scales. The aim of the present paper is to close this circle. We prove that if any nontrivial zero of ζ(s) lies off the critical line, then the contraction inequalities are violated. The argument relies on classical Ω-results of Landau [4] and Littlewood [5], which show that an off-critical zero forces oscillations of size xβ in the Chebyshev error ψ(x) -x. We show that such oscillations are inherited by the composite-only window suprema E(X), contradicting contraction. In contrast to Part I, which worked with Eπ(x) := π(x)-Li(x), the present paper is formulated in terms of Eψ(x) := ψ(x) -x, which is more natural for the analytic arguments below (see Section 2 and Appendix A). On relation to Part I. This is Part II of a two-paper project. Part I [1] introduced the dynamical system and established that RH is equivalent to the persistence of contraction inequalities for E(X). Part II shows that any off-critical zero forces oscillations that inevitably violate contraction. Taken together, the two parts close the circle: contraction characterizes RH, and off-critical zeros contradict contraction. Hence all nontrivial zeros of ζ(s) lie on the critical line. 2 Background and Setup For X ≥2, define the one-visit and parent windows WX = X, (1 + 0.1/ log X)X , f WX = X, (1 + 2/ log X)X . Let Ncomp denote the set of composite positive integers, and write Eψ(y) := ψ(y) -y. Error functionals. For X ≥X0, define E(X) := sup y∈WX∩Ncomp |Eψ(y)|, eE(X) := sup y∈f WX∩Ncomp X m∈T (y; f WX) m∈Ncomp Eψ(m) . Here T (y; f WX) denotes the segment of the (Part I) trajectory of y that lies inside f WX. Remark (Bridge to Part I). Switching from Eπ(x) := π(x) -Li(x) to Eψ(x) := ψ(x) -x is not merely harmless at the X1/2 log X scale (see Appendix A), but in fact the natural formulation for the present argument. Part I introduced contraction inequalities and the reformulation of RH through the elegant integer map, where Eπ arose directly. Here we generalize the setup to Eψ, whose analytic properties are sharper and better suited to the application of classical Ωresults. In this way, the contraction framework extends seamlessly from the integer-map setting to the Chebyshev function. Lemma 2.1 (Parent functional dominates window supremum). For all sufficiently large X, we have E(X) ≤eE(X). Proof. Let y∗∈WX ∩Ncomp attain E(X). The trajectory started at y∗passes through y∗while inside f WX, so the parent-window sum for this trajectory includes the term |Eψ(y∗)| = E(X), and hence eE(X) ≥E(X). 2 3 Contraction Framework Theorem 3.1 (Contraction inequality, [1]). There exists C > 0 such that for all sufficiently large X, eE(X) ≤ 5 6 eE(X3/4) + C X1/2 log X. This is Part I, Thm. 6.3/6.4 (macro-step contraction) combined with Lem. 6.5 (iteration). Corollary 3.2 (von Koch-level bound). eE(X) ≪X1/2 log X. Lemma 3.3 (Parent-window hits). Fix a single trajectory of the map and put U = log X and f WX = [ X, (1 + 2 U )X ]. For X ≥e120, let N be the number of composite iterates of this trajectory that lie in f WX (i.e., the number of indices j with the iterate yj ∈f WX and yj composite, so the next forward step is s(yj) = π(yj)). Then N ≤4. Remark. On f WX one has the uniform estimate π(m) = X U + O X U2 (m ∈f WX), so consecutive composite landings are separated by at least 1 -8 U X U . Since |f WX| = 2X/U, this yields N -1 ≤ 2X/U (1 -8/U) X/U = 2 1 -8/U 1 2 X/U. (iii) Packing bound in the window. The parent-window length is |f WX| = 2X U . Packing successive landings with gaps at least dmin inside an interval of this length gives N -1 ≤|f WX| dmin ≤ 2X/U (1 -C∗/U) X/U = 2 1 -C∗/U . For U ≥120 and the conservative C∗= 8, the right-hand side is 1 2. 4.1 Landau-Littlewood lower bound If β > 1/2, there exist cρ > 0 and xn →∞with |ψ(xn) -xn| ≥cρ xβ n; see Titchmarsh [6, §14.25] and Edwards [13, Ch. 8]. 4.2 Window capture If x = (1 + δ)X with 0 1 2. Then there exist Xn →∞and c > 0 such that E(Xn) ≥c Xβ n and hence eE(Xn) ≥c Xβ n. Proof. Let xn be given by the Landau-Littlewood bound. Set Xn := xn/(1 + 0.1/ log xn) so xn ∈WXn and log xn ≍log Xn. Apply Lemma 4.1 with M = cρxβ n to obtain a composite yn ∈WXn, |yn -xn| ≤1, with |ψ(yn) -yn| ≥cρxβ n -(log(2Xn) + 1). Since xβ n ≫log xn for β > 1/2, the subtraction is ≤1 2cρxβ n for large n, giving |ψ(yn) -yn| ≥ 1 2cρxβ n ≍Xβ n. Hence E(Xn) ≥|ψ(yn) -yn| ≫Xβ n. Lemma 2.1 then gives eE(Xn) ≥E(Xn). 5 Contradiction and Conclusion From Corollary 3.2, the contraction framework of [1] yields E(X) ≪X1/2 log X, (X →∞). (1) Since eE(X) ≥E(X) by Lemma 2.1, any lower bound for E(X) transfers immediately to eE(X). On the other hand, if ζ(s) has a zero ρ = β + iγ with β > 1/2, then by §4.3 we obtain a subsequence Xn →∞such that E(Xn) ≫Xβ n. (2) Comparing (1) and (2), the ratio Xβ X1/2 log X = Xβ-1/2 log X tends to infinity as X →∞for every β > 1/2. Thus the lower bound (2) eventually dominates the contraction bound (1), giving a contradiction. We use the smoothed explicit formula with kernel W(t) = (1 + t2)-3 and truncation height T = 1 2(log X)3; the remainder is bounded uniformly by 10X1/2 for X ≥e120. The derivation of this bound with explicit constants is given in Appendix E, while Appendix D contains the large-sieve budget ledger used in the contraction engine. 5 Theorem 5.1 (Riemann Hypothesis, assuming [1]). Assume the contraction inequality established in [1]. Then every nontrivial zero of ζ(s) lies on the critical line R(s) = 1 2. Proof. If ζ(ρ) = 0 with ρ = β + iγ and β > 1/2, then §4.3 gives a subsequence Xn →∞with E(Xn) ≫Xβ n. This contradicts the contraction bound E(X) ≪X1/2 log X from [1]. Hence no off-critical zeros exist, and every nontrivial zero lies on the critical line. 6 Conclusion and Outlook Assuming the contraction inequality established in [1], the contradiction proved in Section 5 rules out all off-critical zeros. Thus, off-critical zeros are incompatible with contraction; taken together with [1], this closes the circle and yields the Riemann Hypothesis. The key feature is a rigidity principle: contraction across multiplicative scales cannot coexist with the oscillations of size xβ in ψ(x) -x that an off-critical zero would force. This incompatibility is analytic in nature, independent of any particular realization of contraction. In the dynamical reformulation of [1], contraction emerges from the integer map and the critical line appears as its unique equilibrium. More generally, the present result shows that whenever contraction holds, the critical line is forced as the only location of nontrivial zeros. In this sense, the critical line is not merely the conjectured locus of zeros, but the equilibrium point singled out by contraction itself. Further directions. • Operator-theoretic formulation. Recast contraction as a spectral property of an associated transfer operator, potentially linking the dynamical reformulation to the NymanBeurling approach. • Sharper error budgets. Optimizing the remainder term in the smoothed explicit formula could enlarge the safety margin in the contraction budget and clarify the robustness of the method. • Numerical investigations. Simulations of contraction phenomena at finite scales may illuminate how oscillations manifest and inspire further generalizations. Acknowledgments The author acknowledges the use of OpenAI's GPT-5 language model as a tool for mathematical exploration and expository preparation during the development of this manuscript. Appendix A. Bridging π(x) -Li(x) and ψ(x) -x In Part I the contraction inequalities were formulated in terms of Eπ(x) := π(x) -Li(x), Eψ(x) := ψ(x) -x. In this part we work with Eψ to leverage sharper Ω-results. The next lemmas show that switching between Eπ and Eψ is harmless at the von Koch scale x1/2 log x. Lemma A.1 (Partial-summation bridge from ψ to π). For x ≥3 one has π(x) -Li(x) = ψ(x) -x log x + Z x 2 ψ(t) -t t (log t)2 dt -Rpp(x), (3) 6 where Rpp(x) = X k≥2 θ(x1/k) log x + Z x 2 θ(t1/k) t (log t)2 dt ! , Rpp(x) ≪x1/2. In particular, π(x) -Li(x) = ψ(x) -x log x + Z x 2 ψ(t) -t t (log t)2 dt + O x1/2 . (4) Proof. Partial summation yields π(x) = θ(x) log x + Z x 2 θ(t) t(log t)2 dt, Li(x) = x log x + Z x 2 dt (log t)2 . Since θ = ψ -P k≥2 θ( · 1/k), subtract to get (3). Using θ(y) ≤y log y and that the k = 2 term dominates shows Rpp(x) ≪x1/2. Lemma A.2 (Reverse bridge from π to ψ). For x ≥3 one has ψ(x) -x = log x π(x) -Li(x) - Z x 2 π(t) -Li(t) t dt + eRpp(x), (5) with eRpp(x) = P k≥2 θ(x1/k) -C0 and eRpp(x) ≪x1/2. Proof. Stieltjes integration by parts gives θ(x) = π(x) log x - Z x 2 π(t) t dt, x = Li(x) log x - Z x 2 Li(t) t dt + C0. Now write ψ = θ + P k≥2 θ( · 1/k) and subtract to get (5). The bound eRpp(x) ≪x1/2 again follows from θ(y) ≤y log y with the k = 2 term dominant. Remark (Remainder sizes at the von Koch scale). Rpp(x) ≪x1/2, eRpp(x) ≪x1/2. These bounds are negligible against x1/2 log x and leave all equivalences below unchanged. Corollary A.3 (Equivalence of von Koch-level bounds). As x →∞, Eπ(x) ≪x1/2 log x ⇐⇒ Eψ(x) ≪x1/2 log x, with absolute implied constants. Proof. Apply (4) and (5). If Eψ(x) ≪x1/2 log x, then the RHS of (4) is ≪x1/2 log x. Conversely, if Eπ(x) ≪x1/2 log x, then the RHS of (5) is ≪x1/2 log x since the integral term is softer by one logarithm and eRpp(x) ≪x1/2. Corollary A.4 (Ω-transfer). If ζ has a zero ρ = β + iγ with β > 1/2, then Eψ(x) = Ω± xβ , Eπ(x) = Ω± xβ log x . Window/trajectory comparability. Let A◦(X) := sup y∈WX∩Ncomp |E◦(y)|, eA◦(X) := sup trajectories X m∈f WX∩trajectory m∈Ncomp |E◦(m)|, 7 with E◦∈{Eπ, Eψ}. By (4) and log x ≍log X on WX we have |Eπ(y)| = |Eψ(y)| log X + O X1/2 log X (y ∈WX ∩Ncomp), which promotes to the parent-window sums since, by Lemma 3.3, a single trajectory contributes at most 4 composite points inside f WX. Corollary A.5 (Trajectory-level comparability). At the X1/2 log X scale, the contraction inequalities for ( eAπ, Aπ) are equivalent (up to absolute constants) to those for ( eAψ, Aψ). Appendix B. Constant ledger Frozen parameters (global throughout). For all statements below we fix U = log X (≥120), T = 1 2 U 3, h = 2 U , L = (log(4/3)) U . All bounds hold uniformly for X ≥e120 and improve monotonically with larger U. Smoothed explicit formula: remainder budget. Writing Eπ(y) = R X |γ|≤T yρ ρ log y W(γ/T) + R(y; T), W(t) = (1 + t2)-3, we decompose R = Rtriv + RΓ + Rtail + Rsmooth and record the conservative per-piece bounds (valid for y ≍X with U = log X ≥120): |R(y; T)| ≤|Rtriv| + |RΓ| + |Rtail| + |Rsmooth| ≤10 X1/2. Table 1: Remainder components and conservative bounds (all ×X1/2). Piece Description Prototype bound Budget used Rtriv Trivial zeros / low-t terms ≪X-3/2/ log X ≤0.1 RΓ Gamma-factor/Stirling contribution ≪X1/2 log T T ≤0.1 Rtail Zeta zero tail (|γ| > T) ≪X1/2 T -6 ≤0.5 Rsmooth Smoothing/truncation (kernel derivatives) ≪X1/2 ≤9.3 Total ≤10.0 Remark. The "budget used" column is purely accounting: it allocates a conservative share to each piece so the sum is ≤10X1/2. Tighter constants are possible but unnecessary. Contraction factor audit. Let αeff denote the one-scale contraction factor obtained by composing the core contraction, macro-step distortion, and overlap terms. With U = log X ≥120, αeff ≤ 5 6 |{z} core contraction 1 + 2 U | {z } macro-step distortion 11 12 |{z} overlap = 3355 4320 0, T > 0. Set N := ⌊T/h⌋, Γ := {γm := m h : -N ≤m ≤N}. For u1, . . . , uM ∈R and weights w1, . . . , wM ∈C (after coalescing coincident points) define S(γ) := M X j=1 wjeiγuj. Theorem C.1 (Log-scale netting on a uniform grid). X γ0∈Γ M X j=1 wjeiγ0uj 2 ≤ 4M T h + M M X j=1 |wj|2. In particular, if M ≤4 then X γ0∈Γ M X j=1 wjeiγ0uj 2 ≤ 16 T h + 4 M X j=1 |wj|2. Sketch. Write the quadratic form P γ0 |S(γ0)|2 = W ∗KW with entries Kjk = PN m=-N eimh(uj-uk). Schur/Gershgorin with a near/far split and |Kjk| ≤min(2N + 1, 4 h|uj-uk|) yields the row-sum bound ≤4NM + M, hence the claim. From zeros on the critical line to the uniform grid Lemma C.2 (Local zero count on length-h intervals). For T ≥3 and h ∈(0, 1], and all t ∈[-T, T], N t + h 2 -N t -h 2 ≤C0 1 + h log(2T) , where N(u) counts zeros ρ = 1 2 + iγ with 0 0. This is the only input from the large-sieve side needed in the contraction inequality; all remaining constants appear in Appendix C. 10 Appendix E. Smoothed explicit formula: a uniform remainder bound with explicit constants Statement Fix X ≥e120 and set U := log X ≥120. For y ≍X, let T := 1 2 U3, W(t) := (1 + t2)-3, W(0) = 1, 0 T. With W(γ/T) ≤(T/|γ|)6 and |ρ|-1 ≤2/|γ|, |Rtail(y; T)| ≤2y1/2 log y T 6 X |γ|>T 1 |γ|7 ≤0.5 X1/2. (4) Smoothing/truncation. Standard contour/smoothing estimates give |Rsmooth(y; T)| ≤C y1/2 1 T + 1 U ≤0.4 X1/2. Adding the four pieces yields (8). References [1] H. W. A. E. Kuipers, A Dynamical Reformulation of the Riemann Hypothesis via Trajectory Contraction, preprint (2025). [2] B. Riemann, " ̈Uber die Anzahl der Primzahlen unter einer gegebenen Gr ̈osse," Monatsberichte der Berliner Akademie, November 1859. English translation in H. M. Edwards, Riemann's Zeta Function, Academic Press, 1974. [3] H. von Koch, "Sur la distribution des nombres premiers," Acta Math. 24 (1901), 159-182. [4] E. Landau, " ̈Uber die Nullstellen der Zetafunktion," Math. Ann. 56 (1903), 645-675. 11 [5] J. E. Littlewood, "Sur la distribution des nombres premiers," C. R. Acad. Sci. Paris 158 (1914), 1869-1872. [6] E. C. Titchmarsh, The Theory of the Riemann Zeta-Function, 2nd ed., revised by D. R. Heath-Brown, Oxford University Press, 1986. [7] J. C. Lagarias, "An elementary problem equivalent to the Riemann Hypothesis," Amer. Math. Monthly 109 (2002), 534-543. [8] B. Nyman, "On some groups and semigroups of translations," Doctoral thesis, 1950. [9] A. Beurling, "A closure problem related to the Riemann zeta function," Proc. Nat. Acad. Sci. U.S.A. 41 (1955), 312-314. [10] L. B ́aez-Duarte, "A strengthening of the Nyman-Beurling criterion for the Riemann Hypothesis," Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 16 (2005), 99-117. [11] H. Iwaniec and E. Kowalski, Analytic Number Theory, AMS Colloquium Publications, Vol. 53, American Mathematical Society, Providence, RI, 2004. [12] E. Bombieri, "The Riemann Hypothesis," in J. Carlson, A. Jaffe, A. Wiles (eds.), The Millennium Prize Problems, Clay Mathematics Institute, 2006. (Original Clay problem description, 2000). [13] H. M. Edwards, Riemann's Zeta Function, Academic Press, New York, 1974. Reprinted by Dover Publications, Mineola, NY, 2001. 12
|
2509.16241
|
REAMS: Reasoning Enhanced Algorithm for Maths Solving
Eishkaran Singh 1 Tanav Singh Bajaj 2 Siddharth Nayak 3
Abstract
The challenges of solving complex university-
level mathematics problems, particularly those
from MIT, and Columbia University courses, and
selected tasks from the MATH dataset, remain a
significant obstacle in the field of artificial intel-
ligence. Conventional methods have consistently
fallen short in this domain, highlighting the need
for more advanced approaches. In this paper, we
introduce a language-based solution that leverages
zero-shot learning and mathematical reasoning to
effectively solve, explain, and generate solutions
for these advanced math problems. By integrating
program synthesis, our method reduces reliance
on large-scale training data while significantly im-
proving problem-solving accuracy. Our approach
achieves an accuracy of 90.15%, representing a
substantial improvement over the previous bench-
mark of 81% and setting a new standard in au-
tomated mathematical problem-solving. These
findings highlight the significant potential of ad-
vanced AI methodologies to address and over-
come the challenges presented by some of the
most complex mathematical courses and datasets.
1. Introduction
The domain of advanced mathematics has consistently posed
significant challenges, particularly in the realm of solving
complex equations that demand both precision and deep
reasoning. These challenges are not only a test of computa-
tional ability but also of the capacity to emulate human-like
reasoning and problem-solving methodologies. Traditional
methods for addressing these challenges have typically re-
lied on manual calculations or basic automated tools, which,
while effective in some contexts, are often time-consuming
and limited in both accuracy and scope. These limitations
have been well-documented in the literature, underscoring
*Equal contribution
1Thapar Institute of Engineering and
Technology, India
2University of British Columbia, Canada
3Massachusetts Institute of Technology, Cambridge, USA. Cor-
respondence to: Eishkaran Singh <esingh3 be21@thapar.edu>,
Tanav Singh Bajaj <tanav220student.ubc.ca>.
the necessity for innovative solutions capable of effectively
managing the intricate and multifaceted nature of advanced
mathematical problems Vaswani et al. (2023).
A significant advancement in this field was demonstrated by
Drori et al. (2022) through a collaborative study between the
Massachusetts Institute of Technology (MIT) and Columbia
University. Their research explored the potential of neural
networks in solving university-level mathematics problems
by employing program synthesis. This approach utilized
the capabilities of OpenAI’s Codex transformer, which was
fine-tuned to generate executable programs using a few-shot
learning technique. The methodology achieved a notewor-
thy milestone, attaining an accuracy rate of 81%, thereby
setting a new benchmark in the field of automated mathe-
matical problem-solving. This result marked a considerable
improvement over previous models, which achieved accu-
racy rates ranging between 18.8% and 30.8% using GPT-3’s
text-based few-shot learning and chain-of-thought prompt-
ing (Brown et al., 2020; Rae et al., 2022; Drori et al., 2022).
However, despite the progress realized by Drori et al. (2022),
their approach, which relied primarily on program synthesis
with zero shot learning (Zero-shot learning is a model’s abil-
ity to detect classes never seen during training), introduced
certain limitations. Although effective in addressing a wide
range of mathematical problems, this approach encountered
challenges when confronted with more abstract and complex
problems that necessitated a higher level of reasoning and
contextual understanding. Moreover, the need for human-
like reasoning and explanatory depth, which is crucial for
both educational purposes and the comprehensive under-
standing of complex mathematical problems, remained a
challenge that was not fully addressed in their methodology.
(Kojima et al., 2023).
In response to these identified limitations, this study intro-
duces a novel methodology, Reasoning Enhanced Algo-
rithm for Maths Solving (REAMS). REAMS is designed to
overcome the constraints identified in previous approaches
by integrating neural networks trained on both text and code
with a refined few-shot learning algorithm that combines
symbolic reasoning with contextual understanding. This
hybrid approach not only enhances the accuracy of problem-
solving but also significantly improves the interpretability
of the solutions by providing detailed, reasoning-based ex-
1
arXiv:2509.16241v1 [cs.CL] 16 Sep 2025
REAMS: Reasoning Enhanced Algorithm for Maths Solving
planations.
The REAMS methodology was rigorously tested against
datasets from prominent university-level mathematics
courses, including Mathematics for Computer Science, Sin-
gle Variable Calculus, Multivariable Calculus, Differential
Equations, Probability and Statistics, and Linear Algebra.
The results obtained from these tests are compelling, with
REAMS achieving an accuracy rate of 90.15%. This perfor-
mance not only surpasses the 81% benchmark established
by the Codex-based model but also represents a significant
advancement in the field. In addition to the improved accu-
racy, the solutions generated by REAMS include detailed ex-
planations that closely resemble human reasoning, thereby
making them valuable not only for solving complex mathe-
matical problems but also as educational tools (Hendrycks
et al., 2021).
By advancing both the accuracy and explanatory power of
automated mathematical problem-solving, REAMS repre-
sents a significant contribution to the application of artifi-
cial intelligence in education and research. This study not
only sets a new standard in the field but also opens new
avenues for future research aimed at further enhancing the
capabilities of AI in solving advanced mathematical prob-
lems. The implications of this work extend beyond mere
problem-solving, highlighting the potential for AI-driven
methodologies to play a transformative role in the landscape
of higher education (Chen et al., 2021; Tran et al., 2021).
2. Related Works
The development of mathematical reasoning within large
language models (LLMs) has progressed through systematic
advancements in pre-training, fine-tuning, and the integra-
tion of external tools. Early research focused on establish-
ing a foundational base of computational and mathematical
knowledge in LLMs through extensive exposure to educa-
tional datasets, problem sets, and synthetic data. These
efforts were critical in enabling models to engage with com-
plex mathematical problems effectively. Unlike these early
efforts, our approach integrates reasoning-based methodolo-
gies to enhance interpretability and solution accuracy.
Subsequent research emphasized the need for fine-tuning
models with specialized mathematical datasets, recogniz-
ing the limitations of general pre-training approaches.
Lewkowycz et al. (2022) explored the impact of fine-tuning
on LLMs, demonstrating that incorporating complex reason-
ing paths significantly enhanced the models’ ability to solve
quantitative reasoning problems. This shift towards more
sophisticated fine-tuning methodologies was echoed in the
work of Hendrycks et al. (2021), who emphasized the im-
portance of pre-training on domain-specific data to improve
LLMs’ performance in mathematical problem-solving. Our
htbp
Figure 1. Imported Python programming libraries by course
approach advances these methods by combining fine-tuning
with symbolic reasoning, resulting in a more robust problem-
solving process.
Reinforcement learning (RL) has played a pivotal role in
optimizing the reasoning capabilities of LLMs. By employ-
ing reward models to evaluate the correctness of reasoning
paths, RL has refined the decision-making processes of these
models. Ahn et al. (2024) illustrated the efficacy of RL in
reducing the dependency on human intervention during the
evaluation of model outputs, which in turn increased the
reliability of LLMs in mathematical reasoning tasks. Addi-
tionally, Cobbe et al. (2021) highlighted the integration of
RL with verification mechanisms as a critical step towards
automating the evaluation of model-generated solutions,
thereby enhancing the robustness of LLMs in handling com-
plex mathematical challenges. In contrast, our approach
minimizes reliance on reinforcement learning by integrating
reasoning directly into the code generation process, thereby
simplifying the overall methodology.
The integration of code interpreters with LLMs has further
broadened the scope of their problem-solving capabilities.
This development has allowed models to perform complex
calculations and interact more effectively with external tools,
thereby addressing multifaceted mathematical challenges
with greater efficiency. InternLM-Math, Ying et al. (2024)
explored this integration through models like InternLM-
Math, which employed an approach known as reasoning
interleaved with coding (RICO). This methodology closely
mirrors the human problem-solving process, enabling LLMs
to tackle intricate mathematical tasks more effectively. This
approach aligns with findings by Chen et al. (2021), who
demonstrated that the synergy between reasoning and cod-
ing is essential for solving complex mathematical problems.
Our methodology further refines this integration by using
reasoning to iteratively improve code generation, thereby
increasing the accuracy of solutions.
In the domain of formal mathematical proving, LLMs
2
REAMS: Reasoning Enhanced Algorithm for Maths Solving
have shown potential, particularly with the use of for-
mal languages such as Isabelle (Nipkow et al., 2002),
LEAN(de Moura et al., 2015), and Coq(Bertot & Cast´eran,
2004). Despite challenges related to the limited availability
of formal proof data, models trained on these languages
have achieved state-of-the-art performance on benchmarks
like MiniF2F, as reported by Zheng et al. (2022). This
advancement is supported by the integration of formal rea-
soning capabilities into LLMs, highlighting the potential for
further development in automated theorem proving. These
findings indicate that while significant progress has been
made, further research is needed to address the limitations
associated with data sparsity in this area. Our approach di-
verges by focusing on the practical application of reasoning
in problem-solving without requiring extensive formal proof
datasets.
The ongoing development of LLMs has been guided by the
establishment of robust and credible benchmarks, which
play a critical role in ensuring that advancements are both
measurable and reliable. The importance of integrating pre-
training, fine-tuning, verification, and code interpretation
strategies to propel the field forward. The significance of
benchmarking in assessing the performance of LLMs, partic-
ularly in tasks requiring advanced reasoning and contextual
understanding. Our approach contributes by setting new
benchmarks in the accuracy and interpretability of solutions,
specifically in university-level mathematics.
By refining pre-training methods, advancing fine-tuning
techniques, incorporating reinforcement learning, and in-
tegrating external tools, researchers have significantly im-
proved the accuracy and reliability of LLMs in mathematical
tasks. These developments not only represent substantial
progress in the field but also open new avenues for future
research, particularly in the integration of formal reasoning
systems with LLMs. The continued evolution of LLMs is
likely to yield further advancements, particularly as new
techniques are developed to address the remaining chal-
lenges in this field. Our methodology adds to this evolution
by demonstrating a combined approach of reasoning and
code generation, resulting in a more accurate and inter-
pretable problem-solving process.
3. Problem Statement
The primary challenge addressed in this study is the diffi-
culty of solving complex university-level mathematical prob-
lems using artificial intelligence. Traditional approaches,
including those based on program synthesis and large lan-
guage models (LLMs), have shown limitations in accuracy
and the ability to provide human-like reasoning and de-
tailed explanations. These shortcomings are particularly
pronounced in handling abstract problems that require deep
contextual understanding and logical reasoning. This re-
search aims to overcome these challenges by developing a
methodology that integrates mathematical reasoning with
code generation to improve the accuracy and interpretability
of AI-driven solutions to advanced mathematical problems.
Algorithm 1 Methodology for Code Generation with Rea-
soning (REAMS)
Input: Problem set P = {p1, p2, . . . , pn}, code generation
model Mcode, reasoning model Mreason, expected outputs
{o1, o2, . . . , on}
Output: Success indicators Szero, Sreason
Step 1: Zero-Shot Code Generation
for each problem pi in P do
Generate initial code Ci using Mcode
Execute Ci
Compare the output with expected output oi
if output is correct then
Set Szero[i] = 1
else
Set Szero[i] = 0
end if
end for
Step 2: Reasoning-Based Code Generation
for each problem pi where Szero[i] = 0 do
Generate reasoning Ri using Mreason
Generate revised code C′
i using Mcode with pi and Ri as
inputs
Execute C′
i
Compare the output with expected output oi
if output is correct then
Set Sreason[i] = 1
else
Set Sreason[i] = 0
end if
end for
Step 3: Performance Assessment
Compute Zero-Shot Success Rate:
Zero-Shot Success Rate =
P Szero
n
× 100%
Compute Reasoning Success Rate:
Reasoning Success Rate =
P Sreason
n
× 100%
4. Methodology
The methodology employed in this study is designed to sys-
tematically evaluate the performance of the CodeLlama 13B
and Llama 3.1 8B models in solving complex mathematical
problems through code generation with reasoning. This pro-
cess involves a structured approach to both generating and
3
REAMS: Reasoning Enhanced Algorithm for Maths Solving
htbp
Figure 2. Workflow of the REAMS Approach: The diagram illustrates the iterative process of solving mathematical problems using the
REAMS method. Input problems, sourced from MIT courses and various MATH topics, are first processed by the CodeLlama 13B model
to generate Python code. If the correct answer is obtained, the process concludes. Otherwise, reasoning is generated by the LLama 3.1 8B
model, and based on this mathematical reasoning, the code is iteratively refined and regenerated until the correct solution is achieved
validating code outputs, with subsequent steps to improve
performance where initial attempts fail.The methodology
is divided into several stages, each of which is described in
detail below Figure2 and in Algorithm 1.
4.1. Initial Code Generation using CodeLlama 13B
The first step in our methodology involves the use of the
CodeLlama 13B model for code generation. The model, de-
signed specifically for tasks related to code, is employed to
generate executable code that aims to solve a predefined set
of mathematical problems. These problems are drawn from
datasets encompassing various topics, including calculus,
linear algebra, differential equations, and probability, which
are representative of advanced university-level coursework.
In this stage, the model is prompted using a zero-shot learn-
ing approach. Zero-shot learning refers to the scenario
where the model is provided with a problem statement with-
out any examples or prior demonstrations of similar prob-
lems. The model generates code that it predicts will solve
the problem based on its training. The absence of exam-
ple problems in the prompt is intended to test the model’s
inherent understanding and generalization capabilities, eval-
uating its ability to apply learned principles to new, unseen
problems.
4.1.1. PYTHON PACKAGE DEPENDENCIES
During code generation, the model accounts for the Python
programming packages commonly used across different
courses. As shown in Figure 1, all courses utilize NumPy
and SymPy, with Matplotlib employed for plotting tasks,
and approximately half use the math, random, and SciPy
libraries. Our approach specifies only SymPy or plotting-
related imports in the prompts, while other necessary pack-
ages are automatically synthesized by the model. This in-
tegration ensures that the generated code aligns with the
computational requirements of each course.
4.2. Code Evaluation
Once the code is generated for each question, the next step
involves manually running the code to assess its correct-
ness. This manual execution is crucial because it ensures
that the generated code is not only syntactically correct but
also functionally accurate in producing the correct outputs.
The outputs of the executed code are compared against the
expected solutions, which are predefined and known to be
correct for each problem.
For each piece of code that successfully produces the correct
output, a score of 1 is recorded in the zero-shot evaluation
column. If the code fails to execute correctly or produces
an incorrect output, a score of 0 is recorded. This binary
scoring system allows for a clear assessment of the model’s
performance in the zero-shot context. The process of man-
ually running and evaluating the code is an important step
to verify the model’s ability to generate functionally accu-
rate solutions without relying on any additional training or
examples.
4.3. Generating Mathematical Reasoning with LLaMA
3.1 8B Model
For all the problems where the generated code received a
score of 0 in the zero-shot evaluation, the next step involves
generating a mathematical reasoning or explanation for the
4
REAMS: Reasoning Enhanced Algorithm for Maths Solving
problem. This reasoning is generated using the LLaMA
3.1 8B model, which is a smaller, 8-bit quantized version
of the LLaMA series designed for efficient reasoning tasks
(Dubey et al., 2024). The choice of this model is based on
its capability to produce logical, step-by-step reasoning that
can guide the code generation process.
The reasoning process involves providing the model with
the original problem statement and instructing it to generate
a detailed explanation of the mathematical principles and
steps required to solve the problem. This explanation is
intended to bridge the gap between the problem statement
and the correct solution, offering insights that the code gen-
eration model might have missed in the zero-shot context.
4.4. Code Generation with Mathematical Reasoning as
Input
Once the mathematical reasoning has been generated for
the problems that initially scored 0, the next step is to use
this reasoning as an input for the CodeLlama 13B model. In
this stage, the reasoning and the original problem statement
are both provided as inputs to the model. The objective is
to leverage the reasoning to guide the model in generating
more accurate and contextually relevant code.
This step effectively transforms the problem from a zero-
shot scenario to a more informed task, where the model has
access to additional context in the form of the generated
reasoning. The expectation is that by understanding the
reasoning behind the problem, the model can produce code
that more closely aligns with the expected solution.
4.5. Evaluation of Revised Code
After the CodeLlama 13B model generates the new code
based on mathematical reasoning, the next step involves
manually executing this revised code. As with the initial
code generation, the outputs are compared against the cor-
rect solutions. If the new code produces the correct output,
the corresponding entry in the zero-shot with reasoning
column is updated from 0 to 1.
This process of revising the zero-shot with reasoning evalu-
ation scores based on the model’s performance with addi-
tional reasoning input allows for a more nuanced assessment
of the model’s capabilities. It also provides insight into the
effectiveness of combining reasoning with code generation,
particularly in cases where the model initially fails to pro-
duce the correct output.
5. Experiments
We detail the experiments conducted to evaluate the perfor-
mance of our proposed methodology. The initial experiment
replicates the baseline established by Drori et al. (2022),
using the Codex model fine-tuned on code, while the subse-
quent experiment explores the potential of the CodeLlama
13B model, augmented with reinforcement learning.
5.1. Baseline Experiment: Codex Model
The study conducted by Drori et al. (2022) employed the
Codex model, a variant of OpenAI’s GPT-3 that has been
fine-tuned specifically for code generation tasks (Brown
et al., 2020). The problems were sourced from subjects
including Single Variable Calculus, Multivariable Calcu-
lus, Differential Equations, Probability and Statistics, Lin-
ear Algebra, and Mathematics for Computer Science. The
datasets utilized for this purpose were drawn from MIT
and Columbia University courses, in addition to the MATH
dataset, which includes problems from high school mathe-
matics competitions known for their complexity .
The experimental setup initially employed a zero-shot learn-
ing approach, where the model was provided with the prob-
lem statements. This method was intended to assess the
model’s intrinsic ability to generate correct solutions based
solely on its pre-trained knowledge. The questions that
weren’t correct were then passed on to the few shot learning
approach. The model was prompted with a few examples
of similar problems and their solutions, which served as
a guide for generating solutions to new problems (Chen
et al., 2021). The key metric for evaluating the model’s
performance was the accuracy of the generated solutions.
The findings from this experiment demonstrated that the
Codex model achieved an accuracy rate of 81%, a substan-
tial improvement over earlier models that relied solely on
text-based few-shot learning and chain-of-thought prompt-
ing. The ability of Codex to synthesize executable programs
that solved complex problems, generate explanations for
the solutions, and create new questions based on the solved
problems, set a new benchmark in the field of automated
mathematical problem-solving (Drori et al., 2022), (Chen
et al., 2021).
5.2. CodeT5 Model with Reinforcement Learning
In addition to replicating the baseline experiment, an alter-
native approach was explored using the CodeT5(Wang et al.,
2023) . CodeT5 is designed specifically for code generation
tasks and represents a more recent advancement in the field.
The experiment was structured to evaluate the model’s per-
formance in generating code solutions for the same set of
mathematical problems, starting with a zero-shot learning
approach.
The programs generated by CodeT5 were executed, and
their outputs were analyzed. Programs that failed to ex-
ecute or produced incorrect results were marked as zero.
The initial outcomes from this zero-shot experiment with
5
REAMS: Reasoning Enhanced Algorithm for Maths Solving
CodeT5 indicated 11.5% lower accuracy than the zero-shot
Codex model. The primary issue identified was the model’s
difficulty in generating syntactically correct or logically co-
herent code (Rozi`ere et al., 2024). To get better results
reinforcement learning (RL) was applied, utilizing a Prox-
imal Policy Optimization (PPO) policy (Schulman et al.,
2017). The objective of this approach was to iteratively re-
fine the model’s ability to generate correct code by providing
feedback loops based on the success or failure of previous
code executions. The reinforcement learning framework
was employed to optimize its code generation strategies by
maximizing a reward function.
The application of reinforcement learning led to a 10% incre-
ment in the overall performance of the CodeT5 model which
still remained below that of the baseline Codex model. The
computational resources required for reinforcement learning
did not justify the improvement observed. Consequently, we
developed a reasoning-based approach, which has demon-
strated superior performance.
6. Metrics
We calculate the model’s performance using two metrics:
Accuracy and the Clopper-Pearson Interval.
6.1. Accuracy
Accuracy is defined as the proportion of correct predictions
among the total number of predictions. It is calculated as
follows:
Accuracy =
TP + TN
TP + TN + FP + FN
where TP is the number of true positives, TN is the number
of true negatives, FP is the number of false positives, and
FN is the number of false negatives.
6.2. Clopper-Pearson Interval
Since Accuracy is a proportion derived from a binomial
distribution, we use the Clopper-Pearson Interval to provide
an exact confidence interval for this binomial proportion.
This method offers a conservative estimate, particularly
useful for small sample sizes. Given X successes out of n
trials, the interval is computed as:
Lower Bound = BetaInv
α
2 , X, n −X + 1
Upper Bound = BetaInv
1 −α
2 , X + 1, n −X
where BetaInv(·) is the inverse of the cumulative distribu-
tion function of the beta distribution, and α represents the
significance level, typically set at 0.05 for a 95% confidence
interval.
7. Discussion
The evaluation of the models was conducted across a diverse
set of 265 mathematical problems, sampled systematically
from various academic courses and topics within the MATH
dataset. Specifically, the problems were drawn from seven
advanced university-level courses—18.01, 18.02, 18.03,
18.05, 18.06, 6.042, and COMS3251—and six topics within
the MATH dataset, including Prealgebra, Algebra, Number
Theory, Counting and Probability, Intermediate Algebra,
and Precalculus. Each course contributed 25 randomly se-
lected questions, while each topic within the MATH dataset
contributed 15 questions, ensuring a comprehensive assess-
ment of the models’ performance across a broad range of
mathematical domains.
The performance of the CodeLlama 13B model in the auto-
matic solve rate was a key metric, with the model success-
fully solving 220 out of the 265 problems. This outcome
represents an incremental improvement of 7 questions over
the baseline performance of comparable code-based models.
The accuracy of the CodeLlama 13B model was rigorously
quantified, yielding an accuracy rate of 83.17%. This marks
an enhancement of 11.5% over the baseline accuracy, indi-
cating the model’s capability to generate correct solutions
in a zero-shot learning context.
Further analysis was conducted by integrating reasoning
steps into the code generation process. For problems where
the initial code did not yield correct solutions, the incorpo-
ration of mathematical reasoning, generated by the LLaMA
3.1 8B model, provided additional context and guidance
for the CodeLlama 13B model. The introduction of rea-
soning as an input led to a significant boost in the overall
performance of the model, with the combined approach
achieving an overall accuracy of 90.15%. This accuracy
represents a substantial increase compared to the baseline
accuracy of 81.1%, demonstrating the efficacy of combin-
ing automated code generation with structured reasoning to
enhance problem-solving accuracy across complex mathe-
matical tasks. The detailed breakdown of solve rates, both
for the CodeLlama model alone and in conjunction with rea-
soning. These results underscore the potential of AI-driven
methodologies in handling intricate mathematical problems
with a high degree of precision.
8. Conclusion
In this study, we addressed the complex challenge of au-
tomating the solution of advanced mathematical problems
through AI-driven code generation and refinement. The
approach, termed REAMS (Reasoning Enhanced Algorithm
6
REAMS: Reasoning Enhanced Algorithm for Maths Solving
Course
Codex
REAMS
Codex
REAMS
(Zero-Shot)
(Zero-Shot)
(Few-Shot)
(Zero-Shot+Reasoning)
18.01
74%
76%
75%
88%
(0.55-0. 91)
(0.69-0.97)
18.02
74%
72%
77%
92%
(0.51-0.88)
(0.74-0.99)
18.03
61%
84%
74%
92%
(0.64-0.95)
(0.74-0.99)
18.05
88%
84%
99%
88%
(0.64-0.95)
(0.69-0.97)
18.06
75%
72%
82%
92%
(0.51-0.88)
(0.74-0.99)
6.042
49%
76%
63%
88%
(0.55-0. 91)
(0.69-0.97)
COMS3251
76%
84%
79%
92%
(0.64-0.95)
(0.74-0.99)
Table 1. The table shows the automatic solve-rate of CodeLlama in zero-shot mode and when enhanced with reasoning across various
mathematical categories. The addition of reasoning significantly improves solve-rate accuracy.
Model
Accuracy
GPT-4
42.5%
GPT-3.5
18.2%
PaLM 2-L
34.3%
Claude 2
37.6%
Codex Zero-Shot †
72.2%
Codex Few-Shot †
81.1%
REAMS Zero-Shot †
75.55%
(0.65-0.84)
REAMS Zero-Shot + Reasoning †
89.96%
(0.82-0.95)
Table 2. Performance of various models on the MATH dataset
for Maths Solving), leveraged the capabilities of two state-
of-the-art models: CodeLlama 13B and LLaMA 3.1 8B, to
demonstrate the potential of combining generative AI with
logical reasoning for solving university-level mathematics
problems.
The process began with the application of CodeLlama 13B,
which was tasked with generating executable code based
on a variety of mathematical problems sourced from MIT
courses and specific mathematical domains. By evaluating
the correctness of the generated code, we established a base-
line understanding of the model’s ability to independently
interpret and solve complex problems without prior task-
specific exposure. This initial phase highlighted the inherent
strengths and limitations of the model, showing its capac-
ity to apply mathematical principles directly from problem
statements but also revealing areas where its outputs were
less accurate or incomplete.
Recognizing these limitations, we introduced a second phase
to the problem-solving process, where the LLaMA 3.1 8B
model was employed to generate detailed reasoning based
on the mathematical concepts underlying each problem.
This reasoning served as a crucial enhancement, guiding
the CodeLlama 13B model in revising and refining the gen-
erated code. By incorporating this layer of contextual un-
derstanding, the revised approach not only corrected errors
from the initial phase but also produced more accurate and
logically sound solutions. The iterative nature of this pro-
cess—moving from initial code generation to reasoning-
based refinement—proved to be effective in addressing the
gaps identified in the baseline outputs.
The results of this two-phase approach were significant.
The integration of mathematical reasoning into the code
generation process led to a marked improvement in overall
accuracy, demonstrating that AI models can achieve higher
levels of problem-solving capability when supplemented
with interpretative logic. The CodeLlama 13B model, when
enhanced by the reasoning inputs from LLaMA 3.1 8B,
achieved near-human levels of accuracy across a diverse
set of mathematical problems, showcasing the potential for
such methodologies to tackle increasingly complex tasks in
the field of automated problem solving.
In conclusion, this research not only demonstrates the fea-
sibility of using AI to automate the solving of advanced
mathematical problems but also underscores the importance
of integrating reasoning into AI-driven processes. As AI
continues to evolve, approaches like REAMS can pave the
way for more sophisticated, intelligent systems capable of
handling complex tasks across various domains. The find-
ings of this study contribute to the broader understanding
of AI’s capabilities and set the stage for future research into
7
REAMS: Reasoning Enhanced Algorithm for Maths Solving
combining generative and reasoning-based AI methodolo-
gies for even greater levels of accuracy and efficiency in
problem-solving.
9. Limitations
Our approach has several limitations in its ability to solve
certain types of problems. It is unable to generate graphs
unless the problem statement explicitly requests their sketch-
ing or plotting, even if the problem involves graphical ele-
ments. Additionally, the model cannot handle questions that
require formal proofs, as it lacks the capability to simulate
or replace the logical processes necessary for proof-based
solutions. Computationally intractable problems, such as
factoring very large primes, also present challenges, ex-
ceeding the computational limits of the approach and the
underlying Python libraries it uses. The REAMS approach
further struggles with problems that require the application
of advanced algorithms not supported by the available li-
braries. Its reliance on predefined Python libraries limits
its flexibility, particularly in solving niche or highly spe-
cialized mathematical problems. Finally, the approach’s
performance is sensitive to the clarity and precision of the
problem statements, with ambiguities or non-standard for-
mulations often leading to incorrect or incomplete code
generation.
References
Ahn, J., Verma, R., Lou, R., Liu, D., Zhang, R., and Yin,
W.
Large language models for mathematical reason-
ing: Progresses and challenges, 2024. URL https:
//arxiv.org/abs/2402.00157.
Bertot, Y. and Cast´eran, P. Interactive theorem proving and
program development. Coq’Art: The Calculus of induc-
tive constructions. Springer, 01 2004. ISBN 3540208542.
doi: 10.1007/978-3-662-07964-5.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan,
J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu,
J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M.,
Gray, S., Chess, B., Clark, J., Berner, C., McCandlish,
S., Radford, A., Sutskever, I., and Amodei, D. Language
models are few-shot learners, 2020. URL https://
arxiv.org/abs/2005.14165.
Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto,
H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N.,
Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov,
M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray,
S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavar-
ian, M., Winter, C., Tillet, P., Such, F. P., Cummings,
D., Plappert, M., Chantzis, F., Barnes, E., Herbert-
Voss, A., Guss, W. H., Nichol, A., Paino, A., Tezak,
N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saun-
ders, W., Hesse, C., Carr, A. N., Leike, J., Achiam,
J., Misra, V., Morikawa, E., Radford, A., Knight, M.,
Brundage, M., Murati, M., Mayer, K., Welinder, P., Mc-
Grew, B., Amodei, D., McCandlish, S., Sutskever, I., and
Zaremba, W. Evaluating large language models trained
on code, 2021. URL https://arxiv.org/abs/
2107.03374.
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., Hesse, C., and Schulman, J. Training verifiers to solve
math word problems, 2021. URL https://arxiv.
org/abs/2110.14168.
de Moura, L. M., Kong, S., Avigad, J., van Doorn, F.,
and von Raumer, J. The lean theorem prover (system
description). In CADE, 2015. URL https://api.
semanticscholar.org/CorpusID:232990.
Drori, I., Zhang, S., Shuttleworth, R., Tang, L., Lu, A.,
Ke, E., Liu, K., Chen, L., Tran, S., Cheng, N., Wang,
R., Singh, N., Patti, T. L., Lynch, J., Shporer, A.,
Verma, N., Wu, E., and Strang, G. A neural network
solves, explains, and generates university math problems
by program synthesis and few-shot learning at human
level.
Proceedings of the National Academy of Sci-
ences, 119(32), August 2022. ISSN 1091-6490. doi:
10.1073/pnas.2123433119.
URL http://dx.doi.
org/10.1073/pnas.2123433119.
Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A.,
Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A.,
Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravanku-
mar, A., Korenev, A., Hinsvark, A., Rao, A., Zhang, A.,
Rodriguez, A., Gregerson, A., Spataru, A., Roziere, B.,
Biron, B., Tang, B., Chern, B., Caucheteux, C., Nayak,
C., Bi, C., Marra, C., McConnell, C., Keller, C., Touret,
C., Wu, C., Wong, C., Ferrer, C. C., Nikolaidis, C., Al-
lonsius, D., Song, D., Pintz, D., Livshits, D., Esiobu, D.,
Choudhary, D., Mahajan, D., Garcia-Olano, D., Perino,
D., Hupkes, D., Lakomkin, E., AlBadawy, E., Lobanova,
E., Dinan, E., Smith, E. M., Radenovic, F., Zhang, F.,
Synnaeve, G., Lee, G., Anderson, G. L., Nail, G., Mialon,
G., Pang, G., Cucurell, G., Nguyen, H., Korevaar, H.,
Xu, H., Touvron, H., Zarov, I., Ibarra, I. A., Kloumann,
I., Misra, I., Evtimov, I., Copet, J., Lee, J., Geffert, J.,
Vranes, J., Park, J., Mahadeokar, J., Shah, J., van der
Linde, J., Billock, J., Hong, J., Lee, J., Fu, J., Chi, J.,
Huang, J., Liu, J., Wang, J., Yu, J., Bitton, J., Spisak,
J., Park, J., Rocca, J., Johnstun, J., Saxe, J., Jia, J., Al-
wala, K. V., Upasani, K., Plawiak, K., Li, K., Heafield,
K., Stone, K., El-Arini, K., Iyer, K., Malik, K., Chiu, K.,
Bhalla, K., Rantala-Yeary, L., van der Maaten, L., Chen,
8
REAMS: Reasoning Enhanced Algorithm for Maths Solving
L., Tan, L., Jenkins, L., Martin, L., Madaan, L., Malo, L.,
Blecher, L., Landzaat, L., de Oliveira, L., Muzzi, M., Pa-
supuleti, M., Singh, M., Paluri, M., Kardas, M., Oldham,
M., Rita, M., Pavlova, M., Kambadur, M., Lewis, M.,
Si, M., Singh, M. K., Hassan, M., Goyal, N., Torabi, N.,
Bashlykov, N., Bogoychev, N., Chatterji, N., Duchenne,
O., C¸ elebi, O., Alrassy, P., Zhang, P., Li, P., Vasic, P.,
Weng, P., Bhargava, P., Dubal, P., Krishnan, P., Koura,
P. S., Xu, P., He, Q., Dong, Q., Srinivasan, R., Ganapa-
thy, R., Calderer, R., Cabral, R. S., Stojnic, R., Raileanu,
R., Girdhar, R., Patel, R., Sauvestre, R., Polidoro, R.,
Sumbaly, R., Taylor, R., Silva, R., Hou, R., Wang, R.,
Hosseini, S., Chennabasappa, S., Singh, S., Bell, S., Kim,
S. S., Edunov, S., Nie, S., Narang, S., Raparthy, S., Shen,
S., Wan, S., Bhosale, S., Zhang, S., Vandenhende, S.,
Batra, S., Whitman, S., Sootla, S., Collot, S., Gururangan,
S., Borodinsky, S., Herman, T., Fowler, T., Sheasha, T.,
Georgiou, T., Scialom, T., Speckbacher, T., Mihaylov, T.,
Xiao, T., Karn, U., Goswami, V., Gupta, V., Ramanathan,
V., Kerkez, V., Gonguet, V., Do, V., Vogeti, V., Petrovic,
V., Chu, W., Xiong, W., Fu, W., Meers, W., Martinet, X.,
Wang, X., Tan, X. E., Xie, X., Jia, X., Wang, X., Gold-
schlag, Y., Gaur, Y., Babaei, Y., Wen, Y., Song, Y., Zhang,
Y., Li, Y., Mao, Y., Coudert, Z. D., Yan, Z., Chen, Z.,
Papakipos, Z., Singh, A., Grattafiori, A., Jain, A., Kelsey,
A., Shajnfeld, A., Gangidi, A., Victoria, A., Goldstand,
A., Menon, A., Sharma, A., Boesenberg, A., Vaughan,
A., Baevski, A., Feinstein, A., Kallet, A., Sangani, A.,
Yunus, A., Lupu, A., Alvarado, A., Caples, A., Gu, A.,
Ho, A., Poulton, A., Ryan, A., Ramchandani, A., Franco,
A., Saraf, A., Chowdhury, A., Gabriel, A., Bharambe, A.,
Eisenman, A., Yazdan, A., James, B., Maurer, B., Leon-
hardi, B., Huang, B., Loyd, B., Paola, B. D., Paranjape, B.,
Liu, B., Wu, B., Ni, B., Hancock, B., Wasti, B., Spence,
B., Stojkovic, B., Gamido, B., Montalvo, B., Parker, C.,
Burton, C., Mejia, C., Wang, C., Kim, C., Zhou, C., Hu,
C., Chu, C.-H., Cai, C., Tindal, C., Feichtenhofer, C.,
Civin, D., Beaty, D., Kreymer, D., Li, D., Wyatt, D.,
Adkins, D., Xu, D., Testuggine, D., David, D., Parikh,
D., Liskovich, D., Foss, D., Wang, D., Le, D., Holland,
D., Dowling, E., Jamil, E., Montgomery, E., Presani, E.,
Hahn, E., Wood, E., Brinkman, E., Arcaute, E., Dunbar,
E., Smothers, E., Sun, F., Kreuk, F., Tian, F., Ozgenel, F.,
Caggioni, F., Guzm´an, F., Kanayet, F., Seide, F., Florez,
G. M., Schwarz, G., Badeer, G., Swee, G., Halpern, G.,
Thattai, G., Herman, G., Sizov, G., Guangyi, Zhang, Lak-
shminarayanan, G., Shojanazeri, H., Zou, H., Wang, H.,
Zha, H., Habeeb, H., Rudolph, H., Suk, H., Aspegren, H.,
Goldman, H., Molybog, I., Tufanov, I., Veliche, I.-E., Gat,
I., Weissman, J., Geboski, J., Kohli, J., Asher, J., Gaya,
J.-B., Marcus, J., Tang, J., Chan, J., Zhen, J., Reizenstein,
J., Teboul, J., Zhong, J., Jin, J., Yang, J., Cummings, J.,
Carvill, J., Shepard, J., McPhie, J., Torres, J., Ginsburg,
J., Wang, J., Wu, K., U, K. H., Saxena, K., Prasad, K.,
Khandelwal, K., Zand, K., Matosich, K., Veeraraghavan,
K., Michelena, K., Li, K., Huang, K., Chawla, K., Lakho-
tia, K., Huang, K., Chen, L., Garg, L., A, L., Silva, L.,
Bell, L., Zhang, L., Guo, L., Yu, L., Moshkovich, L.,
Wehrstedt, L., Khabsa, M., Avalani, M., Bhatt, M., Tsim-
poukelli, M., Mankus, M., Hasson, M., Lennie, M., Reso,
M., Groshev, M., Naumov, M., Lathi, M., Keneally, M.,
Seltzer, M. L., Valko, M., Restrepo, M., Patel, M., Vy-
atskov, M., Samvelyan, M., Clark, M., Macey, M., Wang,
M., Hermoso, M. J., Metanat, M., Rastegari, M., Bansal,
M., Santhanam, N., Parks, N., White, N., Bawa, N., Sing-
hal, N., Egebo, N., Usunier, N., Laptev, N. P., Dong, N.,
Zhang, N., Cheng, N., Chernoguz, O., Hart, O., Salpekar,
O., Kalinli, O., Kent, P., Parekh, P., Saab, P., Balaji, P.,
Rittner, P., Bontrager, P., Roux, P., Dollar, P., Zvyagina,
P., Ratanchandani, P., Yuvraj, P., Liang, Q., Alao, R.,
Rodriguez, R., Ayub, R., Murthy, R., Nayani, R., Mitra,
R., Li, R., Hogan, R., Battey, R., Wang, R., Maheswari,
R., Howes, R., Rinott, R., Bondu, S. J., Datta, S., Chugh,
S., Hunt, S., Dhillon, S., Sidorov, S., Pan, S., Verma,
S., Yamamoto, S., Ramaswamy, S., Lindsay, S., Lindsay,
S., Feng, S., Lin, S., Zha, S. C., Shankar, S., Zhang, S.,
Zhang, S., Wang, S., Agarwal, S., Sajuyigbe, S., Chin-
tala, S., Max, S., Chen, S., Kehoe, S., Satterfield, S.,
Govindaprasad, S., Gupta, S., Cho, S., Virk, S., Subrama-
nian, S., Choudhury, S., Goldman, S., Remez, T., Glaser,
T., Best, T., Kohler, T., Robinson, T., Li, T., Zhang, T.,
Matthews, T., Chou, T., Shaked, T., Vontimitta, V., Ajayi,
V., Montanez, V., Mohan, V., Kumar, V. S., Mangla, V.,
Ionescu, V., Poenaru, V., Mihailescu, V. T., Ivanov, V.,
Li, W., Wang, W., Jiang, W., Bouaziz, W., Constable, W.,
Tang, X., Wang, X., Wu, X., Wang, X., Xia, X., Wu, X.,
Gao, X., Chen, Y., Hu, Y., Jia, Y., Qi, Y., Li, Y., Zhang,
Y., Zhang, Y., Adi, Y., Nam, Y., Yu, Wang, Hao, Y., Qian,
Y., He, Y., Rait, Z., DeVito, Z., Rosnbrick, Z., Wen, Z.,
Yang, Z., and Zhao, Z. The llama 3 herd of models, 2024.
URL https://arxiv.org/abs/2407.21783.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring math-
ematical problem solving with the math dataset, 2021.
URL https://arxiv.org/abs/2103.03874.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa,
Y. Large language models are zero-shot reasoners, 2023.
URL https://arxiv.org/abs/2205.11916.
Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E.,
Michalewski, H., Ramasesh, V., Slone, A., Anil, C.,
Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B.,
Gur-Ari, G., and Misra, V.
Solving quantitative rea-
soning problems with language models, 2022.
URL
https://arxiv.org/abs/2206.14858.
Nipkow, T., Paulson, L., and Wenzel, M. Isabelle/HOL —
9
REAMS: Reasoning Enhanced Algorithm for Maths Solving
A Proof Assistant for Higher-Order Logic. Springer, 01
2002. doi: 10.1007/3-540-45949-9.
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann,
J., Song, F., Aslanides, J., Henderson, S., Ring, R.,
Young, S., Rutherford, E., Hennigan, T., Menick, J.,
Cassirer, A., Powell, R., van den Driessche, G., Hen-
dricks, L. A., Rauh, M., Huang, P.-S., Glaese, A., Welbl,
J., Dathathri, S., Huang, S., Uesato, J., Mellor, J., Hig-
gins, I., Creswell, A., McAleese, N., Wu, A., Elsen, E.,
Jayakumar, S., Buchatskaya, E., Budden, D., Sutherland,
E., Simonyan, K., Paganini, M., Sifre, L., Martens, L.,
Li, X. L., Kuncoro, A., Nematzadeh, A., Gribovskaya,
E., Donato, D., Lazaridou, A., Mensch, A., Lespiau,
J.-B., Tsimpoukelli, M., Grigorev, N., Fritz, D., Sotti-
aux, T., Pajarskas, M., Pohlen, T., Gong, Z., Toyama,
D., de Masson d’Autume, C., Li, Y., Terzi, T., Mikulik,
V., Babuschkin, I., Clark, A., de Las Casas, D., Guy,
A., Jones, C., Bradbury, J., Johnson, M., Hechtman, B.,
Weidinger, L., Gabriel, I., Isaac, W., Lockhart, E., Osin-
dero, S., Rimell, L., Dyer, C., Vinyals, O., Ayoub, K.,
Stanway, J., Bennett, L., Hassabis, D., Kavukcuoglu,
K., and Irving, G. Scaling language models: Methods,
analysis & insights from training gopher, 2022. URL
https://arxiv.org/abs/2112.11446.
Rozi`ere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I.,
Tan, X. E., Adi, Y., Liu, J., Sauvestre, R., Remez, T.,
Rapin, J., Kozhevnikov, A., Evtimov, I., Bitton, J., Bhatt,
M., Ferrer, C. C., Grattafiori, A., Xiong, W., D´efossez,
A., Copet, J., Azhar, F., Touvron, H., Martin, L., Usunier,
N., Scialom, T., and Synnaeve, G. Code llama: Open
foundation models for code, 2024. URL https://
arxiv.org/abs/2308.12950.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A.,
and Klimov, O.
Proximal policy optimization algo-
rithms, 2017.
URL https://arxiv.org/abs/
1707.06347.
Tran, S., Krishna, P., Pakuwal, I., Kafle, P., Singh, N., Lynch,
J., and Drori, I. Solving machine learning problems, 2021.
URL https://arxiv.org/abs/2107.01238.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention
is all you need, 2023. URL https://arxiv.org/
abs/1706.03762.
Wang, Y., Le, H., Gotmare, A. D., Bui, N. D. Q., Li, J., and
Hoi, S. C. H. Codet5+: Open code large language mod-
els for code understanding and generation, 2023. URL
https://arxiv.org/abs/2305.07922.
Ying, H., Zhang, S., Li, L., Zhou, Z., Shao, Y., Fei, Z.,
Ma, Y., Hong, J., Liu, K., Wang, Z., Wang, Y., Wu, Z.,
Li, S., Zhou, F., Liu, H., Zhang, S., Zhang, W., Yan,
H., Qiu, X., Wang, J., Chen, K., and Lin, D. Internlm-
math: Open math large language models toward verifiable
reasoning, 2024. URL https://arxiv.org/abs/
2402.06332.
Zheng, K., Han, J. M., and Polu, S. Minif2f: a cross-system
benchmark for formal olympiad-level mathematics, 2022.
URL https://arxiv.org/abs/2109.00110.
10
REAMS: Reasoning Enhanced Algorithm for Maths Solving
A. Dataset Used
The datasets employed in this study are drawn from two principal sources: a collection of university-level mathematics
courses offered by the Massachusetts Institute of Technology (MIT) and Columbia University, and a subset of the MATH
dataset, which is specifically designed to evaluate mathematical reasoning and problem-solving capabilities.
The dataset, as detailed in Table 3, consists of advanced coursework from MIT and Columbia University. These courses
span a wide array of mathematical disciplines, each designed to build foundational and advanced knowledge necessary
for tackling complex mathematical problems. Courses such as MIT 6.042 (Mathematics for Computer Science) focus
on discrete mathematics, providing essential tools and proof techniques vital in computer science. MIT 18.01 (Single
Variable Calculus) and MIT 18.02 (Multivariable Calculus) emphasize calculus, progressing from single-variable functions
to multi-variable calculus, including essential theorems applicable in higher-dimensional analysis. MIT 18.03 (Differential
Equations) provides a comprehensive exploration of differential equations, crucial for modeling physical systems, while
MIT 18.05 (Introduction to Probability and Statistics) introduces probability models and statistical methods. MIT 18.06
(Introduction to Linear Algebra) and Columbia University’s COMS3251 (Computational Linear Algebra) cover linear
algebra, focusing on both theoretical concepts and computational techniques necessary for a variety of applications in
science and engineering.
The dataset, outlined in Table 4, is sourced from the MATH dataset, which presents a collection of problems categorized
into specific mathematical topics. This subset was chosen to evaluate the algorithm’s ability to handle a broad range
of mathematical challenges. The topics include Algebra, focusing on fundamental operations and equations; Counting
and Probability, which explores combinatorial methods and probabilistic reasoning; Intermediate Algebra, dealing with
more complex algebraic structures; Number Theory, which delves into the properties of integers and modular arithmetic;
Prealgebra, covering essential mathematical principles; and Precalculus, which bridges the understanding required for
calculus, focusing on vectors, matrices, and trigonometry. Our approach not only surpasses previous benchmarks on similar
datasets but also uniquely addresses the challenges associated with solving problems from undergraduate-level courses. The
selection of these datasets was intended to comprehensively assess the Reasoning Enhanced Algorithm for Maths Solving
(REAMS). By incorporating a diverse set of mathematical problems from both university courses and the MATH dataset, the
evaluation aimed to test not only the algorithm’s problem-solving accuracy but also its ability to generate explanations that
closely mimic human reasoning. This dual focus on accuracy and interpretability underscores the potential contributions of
REAMS to both artificial intelligence research and educational applications.
B. Explanation of Notations:
• Mcode: Code generation model (e.g., CodeLlama 13B) used to generate executable code.
• Mreason: Reasoning model (e.g., LLaMA 3.1 8B) used to generate mathematical reasoning or explanations.
• Ci: Initial code generated for problem pi by Mcode.
• C′
i: Revised code generated after incorporating reasoning Ri.
• pi: A single problem from the problem set P.
• oi: Expected correct output for problem pi.
• Szero[i]: Binary success indicator for zero-shot code generation (1 for correct, 0 for incorrect).
• Sreason[i]: Binary success indicator for code generation with reasoning (1 for correct, 0 for incorrect).
• n: Total number of problems in the set P.
B.1. Dataset Availability
The datasets used in this research are derived from publicly available sources:
• MIT Courses Dataset: Accessible through MIT
https://github.com/idrori/mathQ/tree/main/data.
• MATH Dataset: Available from
https://paperswithcode.com/dataset/math
11
REAMS: Reasoning Enhanced Algorithm for Maths Solving
Course Name
Description
MIT 6.042: Mathematics for Computer Sci-
ence
Discrete mathematics, focusing on tools
and proof techniques useful in computer
science.
MIT 18.01: Single Variable Calculus
Differentiation and integration of functions
of one variable, with applications.
MIT 18.02: Multivariable Calculus
Calculus of several variables, including vec-
tor algebra and integration in multiple di-
mensions.
MIT 18.03: Differential Equations
Study of differential equations, includ-
ing first-order ODEs, linear systems, and
Fourier series.
MIT 18.05: Introduction to Probability and
Statistics
Basic probability models, combinatorics,
random variables, and statistical estimation.
MIT 18.06: Introduction to Linear Algebra
Matrix theory, linear algebra, vector spaces,
eigenvalues, and their applications.
Columbia University COMS3251: Compu-
tational Linear Algebra
Linear functions, matrices, vector spaces,
eigenvectors, and spectral decomposition.
Table 3. List of courses by MIT and Columbia
MATH Topic
Description
Algebra
Exponents, logarithms,
simplifying expressions, and
quadratic equations.
Counting and Probability
Counting methods and
probability involving
factorials and binomial
coefficients.
Intermediate Algebra
Advanced algebraic topics,
including polynomial roots
and conic sections.
Number Theory
Primes, divisibility, prime
factorization, modular
arithmetic.
Prealgebra
Basic math concepts
including fractions,
decimals, ratios, and simple
equations.
Precalculus
Vectors, matrices,
trigonometric functions, and
complex numbers.
Table 4. MATH Dataset
12
REAMS: Reasoning Enhanced Algorithm for Maths Solving
ID
Course
Question
Solution
1
18.01 Single Variable
Calculus
Sketch the graph of the function. f(x) = x + |x|
2
18.02 Multi-variable
Calculus
Describe the graph of the function f : f(x, y) =
10 −
p
x2 + y2
3
18.03
Differential
Equations
Find general solutions of the differential equations.
If an initial condition is given, find the correspond-
ing particular solution. Throughout, primes denote
derivatives with respect to x. y′ + y = 2, y(0) = 0
y(x) = 2(1 −e−x)
4
18.05 Introduction to
Probability and Statis-
tics
Suppose X and Y have joint pdf f(x, y) = c(x2 +
xy) on [0, 1] x [0, 1]. Find c.
1.714285714
5
18.06 Linear Algebra
Find a combination x1w1 + x2w2 + x3w3 that
gives the zero vector with x1 = 1. w1 is the vector
(1; 2; 3). w2 is the vector (4; 5; 6). w3 is the vector
(7; 8; 9).
x1 = 1, x2 = −2, x3 = 1
6
6.042
Mathematics
for Computer Science
Find a number x ∈{0, 1, . . . , 112} such that
11x ≡1 (mod 113).
72
7
COMS3251 Compu-
tational Linear Alge-
bra
Given a d-dimensional non-zero vector v, compute
the rank of the matrix vvT .
1
8
MATH Prealgebra
What is the greatest common factor of 84, 112 and
210?
14
9
MATH Algebra
Let N, O be functions such that N(x) = 2√x and
O(x) = x2. What is N(O(N(O(N(O(3))))))?
24
10
MATH Number The-
ory
How many four-digit numbers whose digits add up
to 9 are divisible by 11?
0
11
MATH Counting and
Probability
A standard six-sided fair die is rolled four times.
The probability that the product of all four numbers
rolled is a perfect square is m
n , where m and n are
relatively prime positive integers. Find m + n.
187
12
MATH Intermediate
Algebra
Given that x2 +y2 = 14x+6y+6, find the largest
possible value of 3x + 4y.
73
13
MATH Precalculus
Evaluate (2−w)∗(2−w2)∗...∗(2−w10) where
w = e
2π∗i
11 .
2047.0
Table 5. Example questions and solutions
13
|
REAMS: Reasoning Enhanced Algorithm for Maths Solving Eishkaran Singh 1 Tanav Singh Bajaj 2 Siddharth Nayak 3 Abstract The challenges of solving complex universitylevel mathematics problems, particularly those from MIT, and Columbia University courses, and selected tasks from the MATH dataset, remain a significant obstacle in the field of artificial intelligence. Conventional methods have consistently fallen short in this domain, highlighting the need for more advanced approaches. In this paper, we introduce a language-based solution that leverages zero-shot learning and mathematical reasoning to effectively solve, explain, and generate solutions for these advanced math problems. By integrating program synthesis, our method reduces reliance on large-scale training data while significantly improving problem-solving accuracy. Our approach achieves an accuracy of 90.15%, representing a substantial improvement over the previous benchmark of 81% and setting a new standard in automated mathematical problem-solving. These findings highlight the significant potential of advanced AI methodologies to address and overcome the challenges presented by some of the most complex mathematical courses and datasets. 1. Introduction The domain of advanced mathematics has consistently posed significant challenges, particularly in the realm of solving complex equations that demand both precision and deep reasoning. These challenges are not only a test of computational ability but also of the capacity to emulate human-like reasoning and problem-solving methodologies. Traditional methods for addressing these challenges have typically relied on manual calculations or basic automated tools, which, while effective in some contexts, are often time-consuming and limited in both accuracy and scope. These limitations have been well-documented in the literature, underscoring *Equal contribution 1Thapar 2 3Massachusetts . Correspondence to: Eishkaran Singh , Tanav Singh Bajaj . the necessity for innovative solutions capable of effectively managing the intricate and multifaceted nature of advanced mathematical problems Vaswani et al. (2023). A significant advancement in this field was demonstrated by Drori et al. (2022) through a collaborative study between the Massachusetts (MIT) and Columbia University. Their research explored the potential of neural networks in solving university-level mathematics problems by employing program synthesis. This approach utilized the capabilities of OpenAI's Codex transformer, which was fine-tuned to generate executable programs using a few-shot learning technique. The methodology achieved a noteworthy milestone, attaining an accuracy rate of 81%, thereby setting a new benchmark in the field of automated mathematical problem-solving. This result marked a considerable improvement over previous models, which achieved accuracy rates ranging between 18.8% and 30.8% using GPT-3's text-based few-shot learning and chain-of-thought prompting (Brown et al., 2020; Rae et al., 2022; Drori et al., 2022). However, despite the progress realized by Drori et al. (2022), their approach, which relied primarily on program synthesis with zero shot learning (Zero-shot learning is a model's ability to detect classes never seen during training), introduced certain limitations. Although effective in addressing a wide range of mathematical problems, this approach encountered challenges when confronted with more abstract and complex problems that necessitated a higher level of reasoning and contextual understanding. Moreover, the need for humanlike reasoning and explanatory depth, which is crucial for both educational purposes and the comprehensive understanding of complex mathematical problems, remained a challenge that was not fully addressed in their methodology. (Kojima et al., 2023). In response to these identified limitations, this study introduces a novel methodology, Reasoning Enhanced Algorithm for Maths Solving (REAMS). REAMS is designed to overcome the constraints identified in previous approaches by integrating neural networks trained on both text and code with a refined few-shot learning algorithm that combines symbolic reasoning with contextual understanding. This hybrid approach not only enhances the accuracy of problemsolving but also significantly improves the interpretability of the solutions by providing detailed, reasoning-based ex1 16 Sep 2025 REAMS: Reasoning Enhanced Algorithm for Maths Solving planations. The REAMS methodology was rigorously tested against datasets from prominent university-level mathematics courses, including Mathematics for Computer Science, Single Variable Calculus, Multivariable Calculus, Differential Equations, Probability and Statistics, and Linear Algebra. The results obtained from these tests are compelling, with REAMS achieving an accuracy rate of 90.15%. This performance not only surpasses the 81% benchmark established by the Codex-based model but also represents a significant advancement in the field. In addition to the improved accuracy, the solutions generated by REAMS include detailed explanations that closely resemble human reasoning, thereby making them valuable not only for solving complex mathematical problems but also as educational tools (Hendrycks et al., 2021). By advancing both the accuracy and explanatory power of automated mathematical problem-solving, REAMS represents a significant contribution to the application of artificial intelligence in education and research. This study not only sets a new standard in the field but also opens new avenues for future research aimed at further enhancing the capabilities of AI in solving advanced mathematical problems. The implications of this work extend beyond mere problem-solving, highlighting the potential for AI-driven methodologies to play a transformative role in the landscape of higher education (Chen et al., 2021; Tran et al., 2021). 2. Related Works The development of mathematical reasoning within large language models (LLMs) has progressed through systematic advancements in pre-training, fine-tuning, and the integration of external tools. Early research focused on establishing a foundational base of computational and mathematical knowledge in LLMs through extensive exposure to educational datasets, problem sets, and synthetic data. These efforts were critical in enabling models to engage with complex mathematical problems effectively. Unlike these early efforts, our approach integrates reasoning-based methodologies to enhance interpretability and solution accuracy. Subsequent research emphasized the need for fine-tuning models with specialized mathematical datasets, recognizing the limitations of general pre-training approaches. Lewkowycz et al. (2022) explored the impact of fine-tuning on LLMs, demonstrating that incorporating complex reasoning paths significantly enhanced the models' ability to solve quantitative reasoning problems. This shift towards more sophisticated fine-tuning methodologies was echoed in the work of Hendrycks et al. (2021), who emphasized the importance of pre-training on domain-specific data to improve LLMs' performance in mathematical problem-solving. Our htbp Figure 1. Imported Python programming libraries by course approach advances these methods by combining fine-tuning with symbolic reasoning, resulting in a more robust problemsolving process. Reinforcement learning (RL) has played a pivotal role in optimizing the reasoning capabilities of LLMs. By employing reward models to evaluate the correctness of reasoning paths, RL has refined the decision-making processes of these models. Ahn et al. (2024) illustrated the efficacy of RL in reducing the dependency on human intervention during the evaluation of model outputs, which in turn increased the reliability of LLMs in mathematical reasoning tasks. Additionally, Cobbe et al. (2021) highlighted the integration of RL with verification mechanisms as a critical step towards automating the evaluation of model-generated solutions, thereby enhancing the robustness of LLMs in handling complex mathematical challenges. In contrast, our approach minimizes reliance on reinforcement learning by integrating reasoning directly into the code generation process, thereby simplifying the overall methodology. The integration of code interpreters with LLMs has further broadened the scope of their problem-solving capabilities. This development has allowed models to perform complex calculations and interact more effectively with external tools, thereby addressing multifaceted mathematical challenges with greater efficiency. InternLM-Math, Ying et al. (2024) explored this integration through models like InternLMMath, which employed an approach known as reasoning interleaved with coding (RICO). This methodology closely mirrors the human problem-solving process, enabling LLMs to tackle intricate mathematical tasks more effectively. This approach aligns with findings by Chen et al. (2021), who demonstrated that the synergy between reasoning and coding is essential for solving complex mathematical problems. Our methodology further refines this integration by using reasoning to iteratively improve code generation, thereby increasing the accuracy of solutions. In the domain of formal mathematical proving, LLMs 2 REAMS: Reasoning Enhanced Algorithm for Maths Solving have shown potential, particularly with the use of formal languages such as Isabelle (Nipkow et al., 2002), LEAN(de Moura et al., 2015), and Coq(Bertot & Cast ́eran, 2004). Despite challenges related to the limited availability of formal proof data, models trained on these languages have achieved state-of-the-art performance on benchmarks like MiniF2F, as reported by Zheng et al. (2022). This advancement is supported by the integration of formal reasoning capabilities into LLMs, highlighting the potential for further development in automated theorem proving. These findings indicate that while significant progress has been made, further research is needed to address the limitations associated with data sparsity in this area. Our approach diverges by focusing on the practical application of reasoning in problem-solving without requiring extensive formal proof datasets. The ongoing development of LLMs has been guided by the establishment of robust and credible benchmarks, which play a critical role in ensuring that advancements are both measurable and reliable. The importance of integrating pretraining, fine-tuning, verification, and code interpretation strategies to propel the field forward. The significance of benchmarking in assessing the performance of LLMs, particularly in tasks requiring advanced reasoning and contextual understanding. Our approach contributes by setting new benchmarks in the accuracy and interpretability of solutions, specifically in university-level mathematics. By refining pre-training methods, advancing fine-tuning techniques, incorporating reinforcement learning, and integrating external tools, researchers have significantly improved the accuracy and reliability of LLMs in mathematical tasks. These developments not only represent substantial progress in the field but also open new avenues for future research, particularly in the integration of formal reasoning systems with LLMs. The continued evolution of LLMs is likely to yield further advancements, particularly as new techniques are developed to address the remaining challenges in this field. Our methodology adds to this evolution by demonstrating a combined approach of reasoning and code generation, resulting in a more accurate and interpretable problem-solving process. 3. Problem Statement The primary challenge addressed in this study is the difficulty of solving complex university-level mathematical problems using artificial intelligence. Traditional approaches, including those based on program synthesis and large language models (LLMs), have shown limitations in accuracy and the ability to provide human-like reasoning and detailed explanations. These shortcomings are particularly pronounced in handling abstract problems that require deep contextual understanding and logical reasoning. This research aims to overcome these challenges by developing a methodology that integrates mathematical reasoning with code generation to improve the accuracy and interpretability of AI-driven solutions to advanced mathematical problems. Algorithm 1 Methodology for Code Generation with Reasoning (REAMS) Input: Problem set P = {p1, p2, . . . , pn}, code generation model Mcode, reasoning model Mreason, expected outputs {o1, o2, . . . , on} Output: Success indicators Szero, Sreason Step 1: Zero-Shot Code Generation for each problem pi in P do Generate initial code Ci using Mcode Execute Ci Compare the output with expected output oi if output is correct then Set Szero[i] = 1 else Set Szero[i] = 0 end if end for Step 2: Reasoning-Based Code Generation for each problem pi where Szero[i] = 0 do Generate reasoning Ri using Mreason Generate revised code C′ i using Mcode with pi and Ri as inputs Execute C′ i Compare the output with expected output oi if output is correct then Set Sreason[i] = 1 else Set Sreason[i] = 0 end if end for Step 3: Performance Assessment Compute Zero-Shot Success Rate: Zero-Shot Success Rate = P Szero n × 100% Compute Reasoning Success Rate: Reasoning Success Rate = P Sreason n × 100% 4. Methodology The methodology employed in this study is designed to systematically evaluate the performance of the CodeLlama 13B and Llama 3.1 8B models in solving complex mathematical problems through code generation with reasoning. This process involves a structured approach to both generating and 3 REAMS: Reasoning Enhanced Algorithm for Maths Solving htbp Figure 2. Workflow of the REAMS Approach: The diagram illustrates the iterative process of solving mathematical problems using the REAMS method. Input problems, sourced from MIT courses and various MATH topics, are first processed by the CodeLlama 13B model to generate Python code. If the correct answer is obtained, the process concludes. Otherwise, reasoning is generated by the LLama 3.1 8B model, and based on this mathematical reasoning, the code is iteratively refined and regenerated until the correct solution is achieved validating code outputs, with subsequent steps to improve performance where initial attempts fail.The methodology is divided into several stages, each of which is described in detail below Figure2 and in Algorithm 1. 4.1. Initial Code Generation using CodeLlama 13B The first step in our methodology involves the use of the CodeLlama 13B model for code generation. The model, designed specifically for tasks related to code, is employed to generate executable code that aims to solve a predefined set of mathematical problems. These problems are drawn from datasets encompassing various topics, including calculus, linear algebra, differential equations, and probability, which are representative of advanced university-level coursework. In this stage, the model is prompted using a zero-shot learning approach. Zero-shot learning refers to the scenario where the model is provided with a problem statement without any examples or prior demonstrations of similar problems. The model generates code that it predicts will solve the problem based on its training. The absence of example problems in the prompt is intended to test the model's inherent understanding and generalization capabilities, evaluating its ability to apply learned principles to new, unseen problems. 4.1.1. PYTHON PACKAGE DEPENDENCIES During code generation, the model accounts for the Python programming packages commonly used across different courses. As shown in Figure 1, all courses utilize NumPy and SymPy, with Matplotlib employed for plotting tasks, and approximately half use the math, random, and SciPy libraries. Our approach specifies only SymPy or plottingrelated imports in the prompts, while other necessary packages are automatically synthesized by the model. This integration ensures that the generated code aligns with the computational requirements of each course. 4.2. Code Evaluation Once the code is generated for each question, the next step involves manually running the code to assess its correctness. This manual execution is crucial because it ensures that the generated code is not only syntactically correct but also functionally accurate in producing the correct outputs. The outputs of the executed code are compared against the expected solutions, which are predefined and known to be correct for each problem. For each piece of code that successfully produces the correct output, a score of 1 is recorded in the zero-shot evaluation column. If the code fails to execute correctly or produces an incorrect output, a score of 0 is recorded. This binary scoring system allows for a clear assessment of the model's performance in the zero-shot context. The process of manually running and evaluating the code is an important step to verify the model's ability to generate functionally accurate solutions without relying on any additional training or examples. 4.3. Generating Mathematical Reasoning with LLaMA 3.1 8B Model For all the problems where the generated code received a score of 0 in the zero-shot evaluation, the next step involves generating a mathematical reasoning or explanation for the 4 REAMS: Reasoning Enhanced Algorithm for Maths Solving problem. This reasoning is generated using the LLaMA 3.1 8B model, which is a smaller, 8-bit quantized version of the LLaMA series designed for efficient reasoning tasks (Dubey et al., 2024). The choice of this model is based on its capability to produce logical, step-by-step reasoning that can guide the code generation process. The reasoning process involves providing the model with the original problem statement and instructing it to generate a detailed explanation of the mathematical principles and steps required to solve the problem. This explanation is intended to bridge the gap between the problem statement and the correct solution, offering insights that the code generation model might have missed in the zero-shot context. 4.4. Code Generation with Mathematical Reasoning as Input Once the mathematical reasoning has been generated for the problems that initially scored 0, the next step is to use this reasoning as an input for the CodeLlama 13B model. In this stage, the reasoning and the original problem statement are both provided as inputs to the model. The objective is to leverage the reasoning to guide the model in generating more accurate and contextually relevant code. This step effectively transforms the problem from a zeroshot scenario to a more informed task, where the model has access to additional context in the form of the generated reasoning. The expectation is that by understanding the reasoning behind the problem, the model can produce code that more closely aligns with the expected solution. 4.5. Evaluation of Revised Code After the CodeLlama 13B model generates the new code based on mathematical reasoning, the next step involves manually executing this revised code. As with the initial code generation, the outputs are compared against the correct solutions. If the new code produces the correct output, the corresponding entry in the zero-shot with reasoning column is updated from 0 to 1. This process of revising the zero-shot with reasoning evaluation scores based on the model's performance with additional reasoning input allows for a more nuanced assessment of the model's capabilities. It also provides insight into the effectiveness of combining reasoning with code generation, particularly in cases where the model initially fails to produce the correct output. 5. Experiments We detail the experiments conducted to evaluate the performance of our proposed methodology. The initial experiment replicates the baseline established by Drori et al. (2022), using the Codex model fine-tuned on code, while the subsequent experiment explores the potential of the CodeLlama 13B model, augmented with reinforcement learning. 5.1. Baseline Experiment: Codex Model The study conducted by Drori et al. (2022) employed the Codex model, a variant of OpenAI's GPT-3 that has been fine-tuned specifically for code generation tasks (Brown et al., 2020). The problems were sourced from subjects including Single Variable Calculus, Multivariable Calculus, Differential Equations, Probability and Statistics, Linear Algebra, and Mathematics for Computer Science. The datasets utilized for this purpose were drawn from MIT and Columbia University courses, in addition to the MATH dataset, which includes problems from high school mathematics competitions known for their complexity . The experimental setup initially employed a zero-shot learning approach, where the model was provided with the problem statements. This method was intended to assess the model's intrinsic ability to generate correct solutions based solely on its pre-trained knowledge. The questions that weren't correct were then passed on to the few shot learning approach. The model was prompted with a few examples of similar problems and their solutions, which served as a guide for generating solutions to new problems (Chen et al., 2021). The key metric for evaluating the model's performance was the accuracy of the generated solutions. The findings from this experiment demonstrated that the Codex model achieved an accuracy rate of 81%, a substantial improvement over earlier models that relied solely on text-based few-shot learning and chain-of-thought prompting. The ability of Codex to synthesize executable programs that solved complex problems, generate explanations for the solutions, and create new questions based on the solved problems, set a new benchmark in the field of automated mathematical problem-solving (Drori et al., 2022), (Chen et al., 2021). 5.2. CodeT5 Model with Reinforcement Learning In addition to replicating the baseline experiment, an alternative approach was explored using the CodeT5(Wang et al., 2023) . CodeT5 is designed specifically for code generation tasks and represents a more recent advancement in the field. The experiment was structured to evaluate the model's performance in generating code solutions for the same set of mathematical problems, starting with a zero-shot learning approach. The programs generated by CodeT5 were executed, and their outputs were analyzed. Programs that failed to execute or produced incorrect results were marked as zero. The initial outcomes from this zero-shot experiment with 5 REAMS: Reasoning Enhanced Algorithm for Maths Solving CodeT5 indicated 11.5% lower accuracy than the zero-shot Codex model. The primary issue identified was the model's difficulty in generating syntactically correct or logically coherent code (Rozi`ere et al., 2024). To get better results reinforcement learning (RL) was applied, utilizing a Proximal Policy Optimization (PPO) policy (Schulman et al., 2017). The objective of this approach was to iteratively refine the model's ability to generate correct code by providing feedback loops based on the success or failure of previous code executions. The reinforcement learning framework was employed to optimize its code generation strategies by maximizing a reward function. The application of reinforcement learning led to a 10% increment in the overall performance of the CodeT5 model which still remained below that of the baseline Codex model. The computational resources required for reinforcement learning did not justify the improvement observed. Consequently, we developed a reasoning-based approach, which has demonstrated superior performance. 6. Metrics We calculate the model's performance using two metrics: Accuracy and the Clopper-Pearson Interval. 6.1. Accuracy Accuracy is defined as the proportion of correct predictions among the total number of predictions. It is calculated as follows: Accuracy = TP + TN TP + TN + FP + FN where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives. 6.2. Clopper-Pearson Interval Since Accuracy is a proportion derived from a binomial distribution, we use the Clopper-Pearson Interval to provide an exact confidence interval for this binomial proportion. This method offers a conservative estimate, particularly useful for small sample sizes. Given X successes out of n trials, the interval is computed as: Lower Bound = BetaInv α 2 , X, n -X + 1 Upper Bound = BetaInv 1 -α 2 , X + 1, n -X where BetaInv(·) is the inverse of the cumulative distribution function of the beta distribution, and α represents the significance level, typically set at 0.05 for a 95% confidence interval. 7. Discussion The evaluation of the models was conducted across a diverse set of 265 mathematical problems, sampled systematically from various academic courses and topics within the MATH dataset. Specifically, the problems were drawn from seven advanced university-level courses-18.01, 18.02, 18.03, 18.05, 18.06, 6.042, and COMS3251-and six topics within the MATH dataset, including Prealgebra, Algebra, Number Theory, Counting and Probability, Intermediate Algebra, and Precalculus. Each course contributed 25 randomly selected questions, while each topic within the MATH dataset contributed 15 questions, ensuring a comprehensive assessment of the models' performance across a broad range of mathematical domains. The performance of the CodeLlama 13B model in the automatic solve rate was a key metric, with the model successfully solving 220 out of the 265 problems. This outcome represents an incremental improvement of 7 questions over the baseline performance of comparable code-based models. The accuracy of the CodeLlama 13B model was rigorously quantified, yielding an accuracy rate of 83.17%. This marks an enhancement of 11.5% over the baseline accuracy, indicating the model's capability to generate correct solutions in a zero-shot learning context. Further analysis was conducted by integrating reasoning steps into the code generation process. For problems where the initial code did not yield correct solutions, the incorporation of mathematical reasoning, generated by the LLaMA 3.1 8B model, provided additional context and guidance for the CodeLlama 13B model. The introduction of reasoning as an input led to a significant boost in the overall performance of the model, with the combined approach achieving an overall accuracy of 90.15%. This accuracy represents a substantial increase compared to the baseline accuracy of 81.1%, demonstrating the efficacy of combining automated code generation with structured reasoning to enhance problem-solving accuracy across complex mathematical tasks. The detailed breakdown of solve rates, both for the CodeLlama model alone and in conjunction with reasoning. These results underscore the potential of AI-driven methodologies in handling intricate mathematical problems with a high degree of precision. 8. Conclusion In this study, we addressed the complex challenge of automating the solution of advanced mathematical problems through AI-driven code generation and refinement. The approach, termed REAMS (Reasoning Enhanced Algorithm 6 REAMS: Reasoning Enhanced Algorithm for Maths Solving Course Codex REAMS Codex REAMS (Zero-Shot) (Zero-Shot) (Few-Shot) (Zero-Shot+Reasoning) 18.01 74% 76% 75% 88% (0.55-0. 91) (0.69-0.97) 18.02 74% 72% 77% 92% (0.51-0.88) (0.74-0.99) 18.03 61% 84% 74% 92% (0.64-0.95) (0.74-0.99) 18.05 88% 84% 99% 88% (0.64-0.95) (0.69-0.97) 18.06 75% 72% 82% 92% (0.51-0.88) (0.74-0.99) 6.042 49% 76% 63% 88% (0.55-0. 91) (0.69-0.97) COMS3251 76% 84% 79% 92% (0.64-0.95) (0.74-0.99) Table 1. The table shows the automatic solve-rate of CodeLlama in zero-shot mode and when enhanced with reasoning across various mathematical categories. The addition of reasoning significantly improves solve-rate accuracy. Model Accuracy GPT-4 42.5% GPT-3.5 18.2% PaLM 2-L 34.3% Claude 2 37.6% Codex Zero-Shot † 72.2% Codex Few-Shot † 81.1% REAMS Zero-Shot † 75.55% (0.65-0.84) REAMS Zero-Shot + Reasoning † 89.96% (0.82-0.95) Table 2. Performance of various models on the MATH dataset for Maths Solving), leveraged the capabilities of two stateof-the-art models: CodeLlama 13B and LLaMA 3.1 8B, to demonstrate the potential of combining generative AI with logical reasoning for solving university-level mathematics problems. The process began with the application of CodeLlama 13B, which was tasked with generating executable code based on a variety of mathematical problems sourced from MIT courses and specific mathematical domains. By evaluating the correctness of the generated code, we established a baseline understanding of the model's ability to independently interpret and solve complex problems without prior taskspecific exposure. This initial phase highlighted the inherent strengths and limitations of the model, showing its capacity to apply mathematical principles directly from problem statements but also revealing areas where its outputs were less accurate or incomplete. Recognizing these limitations, we introduced a second phase to the problem-solving process, where the LLaMA 3.1 8B model was employed to generate detailed reasoning based on the mathematical concepts underlying each problem. This reasoning served as a crucial enhancement, guiding the CodeLlama 13B model in revising and refining the generated code. By incorporating this layer of contextual understanding, the revised approach not only corrected errors from the initial phase but also produced more accurate and logically sound solutions. The iterative nature of this process-moving from initial code generation to reasoningbased refinement-proved to be effective in addressing the gaps identified in the baseline outputs. The results of this two-phase approach were significant. The integration of mathematical reasoning into the code generation process led to a marked improvement in overall accuracy, demonstrating that AI models can achieve higher levels of problem-solving capability when supplemented with interpretative logic. The CodeLlama 13B model, when enhanced by the reasoning inputs from LLaMA 3.1 8B, achieved near-human levels of accuracy across a diverse set of mathematical problems, showcasing the potential for such methodologies to tackle increasingly complex tasks in the field of automated problem solving. In conclusion, this research not only demonstrates the feasibility of using AI to automate the solving of advanced mathematical problems but also underscores the importance of integrating reasoning into AI-driven processes. As AI continues to evolve, approaches like REAMS can pave the way for more sophisticated, intelligent systems capable of handling complex tasks across various domains. The findings of this study contribute to the broader understanding of AI's capabilities and set the stage for future research into 7 REAMS: Reasoning Enhanced Algorithm for Maths Solving combining generative and reasoning-based AI methodologies for even greater levels of accuracy and efficiency in problem-solving. 9. Limitations Our approach has several limitations in its ability to solve certain types of problems. It is unable to generate graphs unless the problem statement explicitly requests their sketching or plotting, even if the problem involves graphical elements. Additionally, the model cannot handle questions that require formal proofs, as it lacks the capability to simulate or replace the logical processes necessary for proof-based solutions. Computationally intractable problems, such as factoring very large primes, also present challenges, exceeding the computational limits of the approach and the underlying Python libraries it uses. The REAMS approach further struggles with problems that require the application of advanced algorithms not supported by the available libraries. Its reliance on predefined Python libraries limits its flexibility, particularly in solving niche or highly specialized mathematical problems. Finally, the approach's performance is sensitive to the clarity and precision of the problem statements, with ambiguities or non-standard formulations often leading to incorrect or incomplete code generation. References Ahn, J., Verma, R., Lou, R., Liu, D., Zhang, R., and Yin, W. Large language models for mathematical reasoning: Progresses and challenges, 2024. URL https: //arxiv.org/abs/2402.00157. Bertot, Y. and Cast ́eran, P. Interactive theorem proving and program development. Coq'Art: The Calculus of inductive constructions. Springer, 01 2004. ISBN 3540208542. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners, 2020. URL https:// arxiv.org/abs/2005.14165. Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F. P., Cummings, D., Plappert, M., Chantzis, F., Barnes, E., HerbertVoss, A., Guss, W. H., Nichol, A., Paino, A., Tezak, N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A. N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A., Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew, B., Amodei, D., McCandlish, S., Sutskever, I., and Zaremba, W. Evaluating large language models trained on code, 2021. URL https://arxiv.org/abs/ 2107.03374. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems, 2021. URL https://arxiv. org/abs/2110.14168. de Moura, L. M., Kong, S., Avigad, J., van Doorn, F., and von Raumer, J. The lean theorem prover (system description). In CADE, 2015. URL https://api. semanticscholar.org/CorpusID:232990. Drori, I., Zhang, S., Shuttleworth, R., Tang, L., Lu, A., Ke, E., Liu, K., Chen, L., Tran, S., Cheng, N., Wang, R., Singh, N., Patti, T. L., Lynch, J., Shporer, A., Verma, N., Wu, E., and Strang, G. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32), August 2022. ISSN 1091-6490. URL http://dx.doi. org/10.1073/pnas.2123433119. Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A., Hinsvark, A., Rao, A., Zhang, A., Rodriguez, A., Gregerson, A., Spataru, A., Roziere, B., Biron, B., Tang, B., Chern, B., Caucheteux, C., Nayak, C., Bi, C., Marra, C., McConnell, C., Keller, C., Touret, C., Wu, C., Wong, C., Ferrer, C. C., Nikolaidis, C., Allonsius, D., Song, D., Pintz, D., Livshits, D., Esiobu, D., Choudhary, D., Mahajan, D., Garcia-Olano, D., Perino, D., Hupkes, D., Lakomkin, E., AlBadawy, E., Lobanova, E., Dinan, E., Smith, E. M., Radenovic, F., Zhang, F., Synnaeve, G., Lee, G., Anderson, G. L., Nail, G., Mialon, G., Pang, G., Cucurell, G., Nguyen, H., Korevaar, H., Xu, H., Touvron, H., Zarov, I., Ibarra, I. A., Kloumann, I., Misra, I., Evtimov, I., Copet, J., Lee, J., Geffert, J., Vranes, J., Park, J., Mahadeokar, J., Shah, J., van der Linde, J., Billock, J., Hong, J., Lee, J., Fu, J., Chi, J., Huang, J., Liu, J., Wang, J., Yu, J., Bitton, J., Spisak, J., Park, J., Rocca, J., Johnstun, J., Saxe, J., Jia, J., Alwala, K. V., Upasani, K., Plawiak, K., Li, K., Heafield, K., Stone, K., El-Arini, K., Iyer, K., Malik, K., Chiu, K., Bhalla, K., Rantala-Yeary, L., van der Maaten, L., Chen, 8 REAMS: Reasoning Enhanced Algorithm for Maths Solving L., Tan, L., Jenkins, L., Martin, L., Madaan, L., Malo, L., Blecher, L., Landzaat, L., de Oliveira, L., Muzzi, M., Pasupuleti, M., Singh, M., Paluri, M., Kardas, M., Oldham, M., Rita, M., Pavlova, M., Kambadur, M., Lewis, M., Si, M., Singh, M. K., Hassan, M., Goyal, N., Torabi, N., Bashlykov, N., Bogoychev, N., Chatterji, N., Duchenne, O., C ̧ elebi, O., Alrassy, P., Zhang, P., Li, P., Vasic, P., Weng, P., Bhargava, P., Dubal, P., Krishnan, P., Koura, P. S., Xu, P., He, Q., Dong, Q., Srinivasan, R., Ganapathy, R., Calderer, R., Cabral, R. S., Stojnic, R., Raileanu, R., Girdhar, R., Patel, R., Sauvestre, R., Polidoro, R., Sumbaly, R., Taylor, R., Silva, R., Hou, R., Wang, R., Hosseini, S., Chennabasappa, S., Singh, S., Bell, S., Kim, S. S., Edunov, S., Nie, S., Narang, S., Raparthy, S., Shen, S., Wan, S., Bhosale, S., Zhang, S., Vandenhende, S., Batra, S., Whitman, S., Sootla, S., Collot, S., Gururangan, S., Borodinsky, S., Herman, T., Fowler, T., Sheasha, T., Georgiou, T., Scialom, T., Speckbacher, T., Mihaylov, T., Xiao, T., Karn, U., Goswami, V., Gupta, V., Ramanathan, V., Kerkez, V., Gonguet, V., Do, V., Vogeti, V., Petrovic, V., Chu, W., Xiong, W., Fu, W., Meers, W., Martinet, X., Wang, X., Tan, X. E., Xie, X., Jia, X., Wang, X., Goldschlag, Y., Gaur, Y., Babaei, Y., Wen, Y., Song, Y., Zhang, Y., Li, Y., Mao, Y., Coudert, Z. D., Yan, Z., Chen, Z., Papakipos, Z., Singh, A., Grattafiori, A., Jain, A., Kelsey, A., Shajnfeld, A., Gangidi, A., Victoria, A., Goldstand, A., Menon, A., Sharma, A., Boesenberg, A., Vaughan, A., Baevski, A., Feinstein, A., Kallet, A., Sangani, A., Yunus, A., Lupu, A., Alvarado, A., Caples, A., Gu, A., Ho, A., Poulton, A., Ryan, A., Ramchandani, A., Franco, A., Saraf, A., Chowdhury, A., Gabriel, A., Bharambe, A., Eisenman, A., Yazdan, A., James, B., Maurer, B., Leonhardi, B., Huang, B., Loyd, B., Paola, B. D., Paranjape, B., Liu, B., Wu, B., Ni, B., Hancock, B., Wasti, B., Spence, B., Stojkovic, B., Gamido, B., Montalvo, B., Parker, C., Burton, C., Mejia, C., Wang, C., Kim, C., Zhou, C., Hu, C., Chu, C.-H., Cai, C., Tindal, C., Feichtenhofer, C., Civin, D., Beaty, D., Kreymer, D., Li, D., Wyatt, D., Adkins, D., Xu, D., Testuggine, D., David, D., Parikh, D., Liskovich, D., Foss, D., Wang, D., Le, D., Holland, D., Dowling, E., Jamil, E., Montgomery, E., Presani, E., Hahn, E., Wood, E., Brinkman, E., Arcaute, E., Dunbar, E., Smothers, E., Sun, F., Kreuk, F., Tian, F., Ozgenel, F., Caggioni, F., Guzm ́an, F., Kanayet, F., Seide, F., Florez, G. M., Schwarz, G., Badeer, G., Swee, G., Halpern, G., Thattai, G., Herman, G., Sizov, G., Guangyi, Zhang, Lakshminarayanan, G., Shojanazeri, H., Zou, H., Wang, H., Zha, H., Habeeb, H., Rudolph, H., Suk, H., Aspegren, H., Goldman, H., Molybog, I., Tufanov, I., Veliche, I.-E., Gat, I., Weissman, J., Geboski, J., Kohli, J., Asher, J., Gaya, J.-B., Marcus, J., Tang, J., Chan, J., Zhen, J., Reizenstein, J., Teboul, J., Zhong, J., Jin, J., Yang, J., Cummings, J., Carvill, J., Shepard, J., McPhie, J., Torres, J., Ginsburg, J., Wang, J., Wu, K., U, K. H., Saxena, K., Prasad, K., Khandelwal, K., Zand, K., Matosich, K., Veeraraghavan, K., Michelena, K., Li, K., Huang, K., Chawla, K., Lakhotia, K., Huang, K., Chen, L., Garg, L., A, L., Silva, L., Bell, L., Zhang, L., Guo, L., Yu, L., Moshkovich, L., Wehrstedt, L., Khabsa, M., Avalani, M., Bhatt, M., Tsimpoukelli, M., Mankus, M., Hasson, M., Lennie, M., Reso, M., Groshev, M., Naumov, M., Lathi, M., Keneally, M., Seltzer, M. L., Valko, M., Restrepo, M., Patel, M., Vyatskov, M., Samvelyan, M., Clark, M., Macey, M., Wang, M., Hermoso, M. J., Metanat, M., Rastegari, M., Bansal, M., Santhanam, N., Parks, N., White, N., Bawa, N., Singhal, N., Egebo, N., Usunier, N., Laptev, N. P., Dong, N., Zhang, N., Cheng, N., Chernoguz, O., Hart, O., Salpekar, O., Kalinli, O., Kent, P., Parekh, P., Saab, P., Balaji, P., Rittner, P., Bontrager, P., Roux, P., Dollar, P., Zvyagina, P., Ratanchandani, P., Yuvraj, P., Liang, Q., Alao, R., Rodriguez, R., Ayub, R., Murthy, R., Nayani, R., Mitra, R., Li, R., Hogan, R., Battey, R., Wang, R., Maheswari, R., Howes, R., Rinott, R., Bondu, S. J., Datta, S., Chugh, S., Hunt, S., Dhillon, S., Sidorov, S., Pan, S., Verma, S., Yamamoto, S., Ramaswamy, S., Lindsay, S., Lindsay, S., Feng, S., Lin, S., Zha, S. C., Shankar, S., Zhang, S., Zhang, S., Wang, S., Agarwal, S., Sajuyigbe, S., Chintala, S., Max, S., Chen, S., Kehoe, S., Satterfield, S., Govindaprasad, S., Gupta, S., Cho, S., Virk, S., Subramanian, S., Choudhury, S., Goldman, S., Remez, T., Glaser, T., Best, T., Kohler, T., Robinson, T., Li, T., Zhang, T., Matthews, T., Chou, T., Shaked, T., Vontimitta, V., Ajayi, V., Montanez, V., Mohan, V., Kumar, V. S., Mangla, V., Ionescu, V., Poenaru, V., Mihailescu, V. T., Ivanov, V., Li, W., Wang, W., Jiang, W., Bouaziz, W., Constable, W., Tang, X., Wang, X., Wu, X., Wang, X., Xia, X., Wu, X., Gao, X., Chen, Y., Hu, Y., Jia, Y., Qi, Y., Li, Y., Zhang, Y., Zhang, Y., Adi, Y., Nam, Y., Yu, Wang, Hao, Y., Qian, Y., He, Y., Rait, Z., DeVito, Z., Rosnbrick, Z., Wen, Z., Yang, Z., and Zhao, Z. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset, 2021. URL https://arxiv.org/abs/2103.03874. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners, 2023. URL https://arxiv.org/abs/2205.11916. Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B., Gur-Ari, G., and Misra, V. Solving quantitative reasoning problems with language models, 2022. URL https://arxiv.org/abs/2206.14858. Nipkow, T., Paulson, L., and Wenzel, M. Isabelle/HOL - 9 REAMS: Reasoning Enhanced Algorithm for Maths Solving A Proof Assistant for Higher-Order Logic. Springer, 01 2002. Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan, T., Menick, J., Cassirer, A., Powell, R., van den Driessche, G., Hendricks, L. A., Rauh, M., Huang, P.-S., Glaese, A., Welbl, J., Dathathri, S., Huang, S., Uesato, J., Mellor, J., Higgins, I., Creswell, A., McAleese, N., Wu, A., Elsen, E., Jayakumar, S., Buchatskaya, E., Budden, D., Sutherland, E., Simonyan, K., Paganini, M., Sifre, L., Martens, L., Li, X. L., Kuncoro, A., Nematzadeh, A., Gribovskaya, E., Donato, D., Lazaridou, A., Mensch, A., Lespiau, J.-B., Tsimpoukelli, M., Grigorev, N., Fritz, D., Sottiaux, T., Pajarskas, M., Pohlen, T., Gong, Z., Toyama, D., de Masson d'Autume, C., Li, Y., Terzi, T., Mikulik, V., Babuschkin, I., Clark, A., de Las Casas, D., Guy, A., Jones, C., Bradbury, J., Johnson, M., Hechtman, B., Weidinger, L., Gabriel, I., Isaac, W., Lockhart, E., Osindero, S., Rimell, L., Dyer, C., Vinyals, O., Ayoub, K., Stanway, J., Bennett, L., Hassabis, D., Kavukcuoglu, K., and Irving, G. Scaling language models: Methods, analysis & insights from training gopher, 2022. URL https://arxiv.org/abs/2112.11446. Rozi`ere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Sauvestre, R., Remez, T., Rapin, J., Kozhevnikov, A., Evtimov, I., Bitton, J., Bhatt, M., Ferrer, C. C., Grattafiori, A., Xiong, W., D ́efossez, A., Copet, J., Azhar, F., Touvron, H., Martin, L., Usunier, N., Scialom, T., and Synnaeve, G. Code llama: Open foundation models for code, 2024. URL https:// arxiv.org/abs/2308.12950. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/ 1707.06347. Tran, S., Krishna, P., Pakuwal, I., Kafle, P., Singh, N., Lynch, J., and Drori, I. Solving machine learning problems, 2021. URL https://arxiv.org/abs/2107.01238. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need, 2023. URL https://arxiv.org/ abs/1706.03762. Wang, Y., Le, H., Gotmare, A. D., Bui, N. D. Q., Li, J., and Hoi, S. C. H. Codet5+: Open code large language models for code understanding and generation, 2023. URL https://arxiv.org/abs/2305.07922. Ying, H., Zhang, S., Li, L., Zhou, Z., Shao, Y., Fei, Z., Ma, Y., Hong, J., Liu, K., Wang, Z., Wang, Y., Wu, Z., Li, S., Zhou, F., Liu, H., Zhang, S., Zhang, W., Yan, H., Qiu, X., Wang, J., Chen, K., and Lin, D. Internlmmath: Open math large language models toward verifiable reasoning, 2024. URL https://arxiv.org/abs/ 2402.06332. Zheng, K., Han, J. M., and Polu, S. Minif2f: a cross-system benchmark for formal olympiad-level mathematics, 2022. URL https://arxiv.org/abs/2109.00110. 10 REAMS: Reasoning Enhanced Algorithm for Maths Solving A. Dataset Used The datasets employed in this study are drawn from two principal sources: a collection of university-level mathematics courses offered by the Massachusetts (MIT) and Columbia University, and a subset of the MATH dataset, which is specifically designed to evaluate mathematical reasoning and problem-solving capabilities. The dataset, as detailed in Table 3, consists of advanced coursework from MIT and Columbia University. These courses span a wide array of mathematical disciplines, each designed to build foundational and advanced knowledge necessary for tackling complex mathematical problems. Courses such as MIT 6.042 (Mathematics for Computer Science) focus on discrete mathematics, providing essential tools and proof techniques vital in computer science. MIT 18.01 (Single Variable Calculus) and MIT 18.02 (Multivariable Calculus) emphasize calculus, progressing from single-variable functions to multi-variable calculus, including essential theorems applicable in higher-dimensional analysis. MIT 18.03 (Differential Equations) provides a comprehensive exploration of differential equations, crucial for modeling physical systems, while MIT 18.05 (Introduction to Probability and Statistics) introduces probability models and statistical methods. MIT 18.06 (Introduction to Linear Algebra) and Columbia University's COMS3251 (Computational Linear Algebra) cover linear algebra, focusing on both theoretical concepts and computational techniques necessary for a variety of applications in science and engineering. The dataset, outlined in Table 4, is sourced from the MATH dataset, which presents a collection of problems categorized into specific mathematical topics. This subset was chosen to evaluate the algorithm's ability to handle a broad range of mathematical challenges. The topics include Algebra, focusing on fundamental operations and equations; Counting and Probability, which explores combinatorial methods and probabilistic reasoning; Intermediate Algebra, dealing with more complex algebraic structures; Number Theory, which delves into the properties of integers and modular arithmetic; Prealgebra, covering essential mathematical principles; and Precalculus, which bridges the understanding required for calculus, focusing on vectors, matrices, and trigonometry. Our approach not only surpasses previous benchmarks on similar datasets but also uniquely addresses the challenges associated with solving problems from undergraduate-level courses. The selection of these datasets was intended to comprehensively assess the Reasoning Enhanced Algorithm for Maths Solving (REAMS). By incorporating a diverse set of mathematical problems from both university courses and the MATH dataset, the evaluation aimed to test not only the algorithm's problem-solving accuracy but also its ability to generate explanations that closely mimic human reasoning. This dual focus on accuracy and interpretability underscores the potential contributions of REAMS to both artificial intelligence research and educational applications. B. Explanation of Notations: • Mcode: Code generation model (e.g., CodeLlama 13B) used to generate executable code. • Mreason: Reasoning model (e.g., LLaMA 3.1 8B) used to generate mathematical reasoning or explanations. • Ci: Initial code generated for problem pi by Mcode. • C′ i: Revised code generated after incorporating reasoning Ri. • pi: A single problem from the problem set P. • oi: Expected correct output for problem pi. • Szero[i]: Binary success indicator for zero-shot code generation (1 for correct, 0 for incorrect). • Sreason[i]: Binary success indicator for code generation with reasoning (1 for correct, 0 for incorrect). • n: Total number of problems in the set P. B.1. Dataset Availability The datasets used in this research are derived from publicly available sources: • MIT Courses Dataset: Accessible through MIT https://github.com/idrori/mathQ/tree/main/data. • MATH Dataset: Available from https://paperswithcode.com/dataset/math 11 REAMS: Reasoning Enhanced Algorithm for Maths Solving Course Name Description MIT 6.042: Mathematics for Computer Science Discrete mathematics, focusing on tools and proof techniques useful in computer science. MIT 18.01: Single Variable Calculus Differentiation and integration of functions of one variable, with applications. MIT 18.02: Multivariable Calculus Calculus of several variables, including vector algebra and integration in multiple dimensions. MIT 18.03: Differential Equations Study of differential equations, including first-order ODEs, linear systems, and Fourier series. MIT 18.05: Introduction to Probability and Statistics Basic probability models, combinatorics, random variables, and statistical estimation. MIT 18.06: Introduction to Linear Algebra Matrix theory, linear algebra, vector spaces, eigenvalues, and their applications. Columbia University COMS3251: Computational Linear Algebra Linear functions, matrices, vector spaces, eigenvectors, and spectral decomposition. Table 3. List of courses by MIT and Columbia MATH Topic Description Algebra Exponents, logarithms, simplifying expressions, and quadratic equations. Counting and Probability Counting methods and probability involving factorials and binomial coefficients. Intermediate Algebra Advanced algebraic topics, including polynomial roots and conic sections. Number Theory Primes, divisibility, prime factorization, modular arithmetic. Prealgebra Basic math concepts including fractions, decimals, ratios, and simple equations. Precalculus Vectors, matrices, trigonometric functions, and complex numbers. Table 4. MATH Dataset 12 REAMS: Reasoning Enhanced Algorithm for Maths Solving ID Course Question Solution 1 18.01 Single Variable Calculus Sketch the graph of the function. f(x) = x + |x| 2 18.02 Multi-variable Calculus Describe the graph of the function f : f(x, y) = 10 - p x2 + y2 3 18.03 Differential Equations Find general solutions of the differential equations. If an initial condition is given, find the corresponding particular solution. Throughout, primes denote derivatives with respect to x. y′ + y = 2, y(0) = 0 y(x) = 2(1 -e-x) 4 18.05 Introduction to Probability and Statistics Suppose X and Y have joint pdf f(x, y) = c(x2 + xy) on [0, 1] x [0, 1]. Find c. 1.714285714 5 18.06 Linear Algebra Find a combination x1w1 + x2w2 + x3w3 that gives the zero vector with x1 = 1. w1 is the vector (1; 2; 3). w2 is the vector (4; 5; 6). w3 is the vector (7; 8; 9). x1 = 1, x2 = -2, x3 = 1 6 6.042 Mathematics for Computer Science Find a number x ∈{0, 1, . . . , 112} such that 11x ≡1 (mod 113). 72 7 COMS3251 Computational Linear Algebra Given a d-dimensional non-zero vector v, compute the rank of the matrix vvT . 1 8 MATH Prealgebra What is the greatest common factor of 84, 112 and 210? 14 9 MATH Algebra Let N, O be functions such that N(x) = 2√x and O(x) = x2. What is N(O(N(O(N(O(3))))))? 24 10 MATH Number Theory How many four-digit numbers whose digits add up to 9 are divisible by 11? 0 11 MATH Counting and Probability A standard six-sided fair die is rolled four times. The probability that the product of all four numbers rolled is a perfect square is m n , where m and n are relatively prime positive integers. Find m + n. 187 12 MATH Intermediate Algebra Given that x2 +y2 = 14x+6y+6, find the largest possible value of 3x + 4y. 73 13 MATH Precalculus Evaluate (2-w)∗(2-w2)∗...∗(2-w10) where w = e 2π∗i 11 . 2047.0 Table 5. Example questions and solutions 13
|
2509.16239
|
G¨ODEL MIRROR: A FORMAL SYSTEM FOR
CONTRADICTION-DRIVEN RECURSION
JHET CHAN
a
Independent Researcher, Kuala Lumpur, Malaysia
e-mail address: jhetchan@gmail.com
Abstract. We introduce the G¨odel Mirror, a formal system defined in Lean 4 that
treats contradiction as a control signal for recursive structural evolution. Inspired by
G¨odelian self-reference, our system’s operational semantics encode symbolic paradoxes as
deterministic transitions. Unlike systems designed to guarantee normalization, the G¨odel
Mirror is a minimal and verifiable architecture that leverages a controlled, non-terminating
loop as a productive feature. Our Lean 4 mechanization proves that self-referential paradoxes
are deterministically encapsulated and resolved into new structures without leading to
logical explosion, yielding a paraconsistent inference loop:
Paradox →Encapsulate →Reenter →Node
We argue that this calculus opens a new class of symbolic systems in which contradiction
is metabolized into structure, providing a formal basis for agents capable of resolving
internal inconsistencies.
1. Introduction
This work introduces the G¨odel Mirror,1 a minimal formal system where self-referential
paradoxes are not errors but the primary engine for deterministic structural transformation.
Classical logical and computational systems are designed to prevent or eliminate contradic-
tions; encountering a paradox is typically a sign of failure, leading either to logical explosion
via ex contradictione quodlibet or to non-termination. Such behaviors are often viewed as
undesirable, as demonstrated by results on the failure of normalization in expressive type
theories [AC20].
In contrast, the G¨odel Mirror is designed to productively manage these phenomena.
It is a paraconsistent calculus that, when faced with a self-referential paradox, initiates a
controlled, deterministic rewrite cycle. Instead of exploding, the system encapsulates the
paradoxical term, reintroduces it into the computational context, and thereby transforms
the contradiction into a new, stable syntactic structure. This behavior is not an emergent
side effect but a designed feature of the system’s operational semantics.
Key words and phrases: G¨odelian recursion, paradox resolution, term rewriting, paraconsistent logic,
formal methods, type theory.
1The name “Mirror” is chosen to evoke the themes of self-reference and the computational reflection
initiated by paradox.
arXiv:2509.16239v1 [cs.LO] 16 Sep 2025
2
J. CHAN
By treating contradiction as a signal for structural evolution, our work provides a formal
basis for agents capable of resolving internal inconsistencies. Such a calculus, for example,
could model belief revision in systems that must integrate new, conflicting information
without failure, or it could provide a foundation for fault-tolerant logical frameworks where
local paradoxes are resolved into stable states rather than causing a global crash.
The primary contributions of this paper are:
(1) The formal definition of the G¨odel Mirror, a term rewriting system with explicit
constructors for self-reference and paradox handling.
(2) A set of core theorems, mechanized in the Lean 4 proof assistant, that formally
establish the system’s deterministic, non-explosive reaction to paradox.
(3) A demonstration of this calculus as a novel model of computation that uses a non-
terminating recursive loop as a productive mechanism.
This paper is organized as follows. Section 2 positions our work relative to foundational
research in term rewriting, recursion theory, and paraconsistent logic. Section 3 defines the
formal syntax and operational semantics of the G¨odel Mirror system. Section 4 presents the
meta-theory of the system, including the core theorems and their proofs. We then provide a
case study (Section 5), discuss the mechanization in Lean (Section 6), and conclude with a
discussion of open problems (Section 7).
2. Related Work
The G¨odel Mirror builds upon foundational themes in logic, recursion theory, and the
semantics of computation. We position our contribution relative to three core areas: the
established theory of term rewriting systems, classical mechanisms for recursion, and prior
work on the interplay between self-reference, consistency, and normalization in formal
systems.
2.1. Term Rewriting Systems. The operational semantics of the G¨odel Mirror can be
understood as a term rewriting system (TRS), a formal model for computation based on the
repeated application of rewrite rules [BN98]. The ‘step’ function of our calculus (Section
3) defines a set of rules for transforming ‘MirrorSystem’ terms. However, classical TRS
theory is often concerned with establishing properties such as confluence and termination
(strong normalization). Our work diverges from this tradition by design. The G¨odel Mirror is
intentionally non-terminating for a specific class of inputs, leveraging a deterministic, cyclic
reduction path not as a flaw, but as its primary computational feature. Our contribution
lies not in proving termination, but in proving that this specific non-terminating behavior is
productive and does not lead to logical explosion.
2.2. Recursion and Fixed-Point Combinators. General recursion in foundational calculi
is classically achieved via fixed-point combinators, such as the Y combinator in the untyped
lambda calculus [Bar84]. These combinators provide a general mechanism for a function
to call itself. The G¨odel Mirror offers a more structured, syntactically explicit alternative.
Instead of a general-purpose combinator, our calculus provides dedicated constructors
(‘self ref’, ‘Encapsulate’, ‘Reenter’) that define a specific modality for self-reference. This
constrains the recursive behavior to a controlled cycle centered around paradox resolution,
providing a domain-specific mechanism for recursion rather than a universal one [Pol20].
G¨ODEL MIRROR
3
2.3. Self-Reference, Consistency, and Normalization. The relationship between self-
reference and consistency is a central theme in logic, famously explored in G¨odel’s incom-
pleteness theorems. Provability logic formalizes how systems can reason about their own
properties, showing the subtleties that arise from such introspection [Boo95]. While classical
logic trivializes in the presence of a paradox (ex contradictione quodlibet), paraconsistent
logics provide frameworks that gracefully handle contradictions without explosion. Our
system can be viewed as a computational implementation of a paraconsistent principle.
Furthermore, recent work in type theory has shown that combining features like impred-
icativity and proof-irrelevance can lead to a failure of normalization, where non-terminating
terms can be constructed within otherwise consistent systems [AC20]. This “negative result”
provides crucial motivation for our work. While such non-terminating terms are typically
seen as bugs or limitations, the G¨odel Mirror reframes this behavior as a feature. We
propose a formal architecture that acknowledges the reality of non-termination and provides
a deterministic, verified mechanism for managing it productively.
3. The G¨odel Mirror System
We now define the formal calculus of the G¨odel Mirror. The system is a term rewriting
system defined over a set of symbolic expressions, MirrorSystem, whose states are classified
by the type MirrorState. Its dynamics are governed by a deterministic single-step reduction
relation, denoted →.
3.1. Syntax. The core data structure of the calculus is the set of terms, MirrorSystem,
defined by the following inductive type in Lean 4.
inductive MirrorSystem : Type where
| base
: MirrorSystem
| node
: MirrorSystem →MirrorSystem
| self_ref : MirrorSystem
| cap
: MirrorSystem →MirrorSystem
-- Encapsulate
| enter
: MirrorSystem →MirrorSystem
-- Reenter
| named
: String →MirrorSystem →MirrorSystem
Terms in this grammar represent the symbolic state of the system. The base constructor
denotes an atomic, irreducible state. The node constructor allows for the inductive growth
of structures. The named constructor provides a mechanism for labeling terms, analogous to
G¨odel numbering, which is used to identify specific paradoxical forms. The constructors
self ref, cap (encapsulate), and enter (reenter) are central to the system’s handling of
self-reference and paradox.
3.2. State Classification. The behavior of a term is determined by its classification into
one of four states. This classification is performed by a function that inspects the top-level
constructor of a term.
inductive MirrorState : Type where
| Normal
-- A standard, reducible term
| Paradox
-- A term representing a self-referential contradiction
| Integrate-- A term that has encapsulated a paradox
| Reentry
-- A term being reintroduced into the system
4
J. CHAN
For the purpose of defining the operational semantics, we define a predicate, is paradox(t),
which holds true if a term t is a specific, syntactically recognizable self-referential form. For
this paper, we define it as follows:
is paradox(t) := ∃s, t = named s self ref
3.3. Operational Semantics. The dynamics of the G¨odel Mirror are defined by a small-
step operational semantics, specified by the reduction relation →. This relation is the
transitive closure of the single-step step function, which we define via the following rewrite
rules.
Let t be a term of type MirrorSystem. The single-step reduction t →t′ is defined as
follows:
• [Reduction-Paradox] If is paradox(t), the term is encapsulated:
t →cap(t)
• [Reduction-Integrate] A term that has been encapsulated is prepared for re-entry:
cap(t) →enter(t)
• [Reduction-Reentry] A re-entering term that is no longer a paradox is integrated into a
stable structure:
enter(t) →node(t)
if ¬is paradox(t)
• [Reduction-Node] The default structural growth rule for terms in a normal state. This
rule represents unbounded, monotonic structural growth in the absence of a paradoxical
trigger, ensuring the system can always progress:
node(t) →node(node(t))
• [Reduction-Named] Labels are preserved during reduction of the underlying term:
t →t′
named(s, t) →named(s, t′)
The base and self ref constructors are irreducible values; they do not appear on
the left-hand side of any reduction rule. This set of rules defines a deterministic state
transition system where a specific syntactic form, a paradox, initiates a controlled, three-step
transformation cycle that results in structural growth rather than non-termination or logical
explosion.
4. Formal Semantics
The operational semantics defined in Section 3 provides a clear, mechanistic account of
the G¨odel Mirror’s behavior as a term rewriting system. To complement this, we briefly
outline a denotational semantics, interpreting terms in a mathematical domain that makes
the system’s core properties, particularly its handling of paradox, explicit.
Our calculus is designed to manage non-termination and contradiction without leading
to logical explosion.
A natural semantic domain for such a system is one based on a
paraconsistent logic, such as Belnap-Dunn four-valued logic (FDE) [Bel77]. In this
setting, a proposition can be interpreted not just as true or false, but also as both (a
contradiction, ⊤) or neither (a gap, ⊥).
G¨ODEL MIRROR
5
We can model the meaning of a MirrorSystem term JtK as a state in a suitable semantic
domain. The state transitions defined by our operational semantics correspond to monotone
functions on this domain. The key properties are:
• Non-Explosion: A paradoxical term like Jnamed(s, self ref)K does not map to a trivi-
alizing state. Instead, its meaning is managed through the semantic interpretations of the
paradox-handling constructors:
– The meaning of Jcap(t)K can be interpreted as a function that transitions the meaning of
JtK from a contradictory state (such as T in four-valued logic) to a contained, observable
state.
– The meaning of Jenter(t)K then represents the controlled reintroduction of this term’s
meaning back into the main semantic domain.
• Structural Growth: The meaning of Jnode(t)K corresponds to a monotonic extension of
the information content of JtK, representing stable and consistent growth.
While the operational semantics is sufficient to establish the main results of this paper,
this denotational perspective provides a semantic justification for the system’s design and
confirms that its behavior is well-founded in established logical principles. A full development
of the calculus’s domain-theoretic or coalgebraic semantics is left for future work.
5. Meta-Theory: Core Properties
We now establish the core meta-theoretical properties of the G¨odel Mirror system. The
following theorems, all of which have been mechanized in the Lean 4 proof assistant, formally
guarantee the system’s deterministic and controlled behavior in the presence of self-referential
paradoxes. Our primary results are that the system makes progress, preserves structure,
and, most importantly, does not explode.
5.1. Progress. The Progress theorem guarantees that any term that is not a value (i.e.,
not an irreducible base term) can take a step. It ensures the system never gets stuck.
Theorem 5.1 (Progress). For any term t ∈MirrorSystem, either t is a value (i.e., t = base)
or there exists a term t′ such that t →t′.
Proof Sketch. The proof proceeds by induction on the structure of the term t. We show that
each constructor that is not a value appears on the left-hand side of a reduction rule in our
operational semantics (Section 3.3).
Interpretation: The system is never in a state where it cannot proceed. A term is either
in a final, stable form (base) or a reduction is possible.
5.2. Preservation. The Preservation theorem ensures that reduction steps maintain the
system’s structural integrity. While our system is untyped, we can define a simple structural
invariant, such as the depth of nested ‘node’ constructors, and prove that reduction does
not violate it in unintended ways. A more practical preservation property is that labels are
preserved.
Theorem 5.2 (Label Preservation). If named(s, t) →t′, then t′ = named(s, t′′) for some t′′
where t →t′′.
6
J. CHAN
Proof Sketch. This follows directly from the definition of the [Reduction-Named] rule in the
operational semantics.
5.3. Non-Explosion. The most critical property of the G¨odel Mirror is that it is paraconsis-
tent; a local contradiction does not trivialize the entire system. We formalize this by stating
that the detection of a paradox leads to a controlled, three-step structural transformation,
not an arbitrary state.
Theorem 5.3 (Controlled Reaction to Paradox). If is paradox(t), then the term deter-
ministically reduces in exactly three steps to a stable, non-paradoxical structure.
t →cap(t) →enter(t) →node(t)
Proof Sketch. This follows by direct application of the [Reduction-Paradox], [Reduction-
Integrate], and [Reduction-Reentry] rules. The side condition on the reentry rule, ¬is paradox(t),
holds because t is now wrapped in other constructors.
Interpretation: This theorem is the formal guarantee of the G¨odel Mirror’s core mechanic.
Unlike in classical logic, where a contradiction entails any proposition, here a contradiction
entails a specific, finite sequence of structural changes. The system metabolizes the paradox
without exploding. This result formally separates our calculus from systems governed by
the principle of explosion.
6. Mirror-Completion and Canonical Forms
The G¨odel Mirror calculus is intentionally non-terminating for paradoxical terms, which
precludes a general normalization property. However, it is still desirable to define a notion
of equivalence between terms and to identify canonical representatives for terms that enter
the paradox resolution cycle. To this end, we define a Mirror-Completion procedure that
transforms a term containing a paradox cycle into a finite, canonical form. This procedure
allows us to establish weak normalization properties for a stratified fragment of the calculus.
6.1. Completion Procedure. The completion procedure, denoted C, is a function that
maps a MirrorSystem term to its canonical form. The procedure identifies the specific,
deterministic three-step cycle initiated by a paradox and replaces it with its resolved, stable
outcome.
We define the completion of a term t as follows:
• If t does not contain a subterm matching the paradoxical form is paradox, then C(t) = t.
• If t contains a subterm p such that is paradox(p), we know from Theorem 3 that p
reduces to node(p) in three steps. The completion procedure replaces the subterm p with
its resolved form node(p).
C(p) := node(p)
This procedure effectively “short-circuits” the paradox resolution cycle, replacing the dynamic,
three-step reduction with its static, terminal structure.
G¨ODEL MIRROR
7
6.2. Properties of Completion. The Mirror-Completion procedure allows us to reason
about the equivalence of terms modulo the resolution of paradoxes.
Proposition 6.1 (Weak Normalization for Stratified Terms). For any term t that does not
contain nested paradoxes (i.e., is stratified), the completion procedure C(t) terminates in a
unique canonical form that is irreducible under the standard reduction relation →.
Proof Sketch. The procedure is guaranteed to terminate because it only acts on a fixed
syntactic pattern and replaces it with a larger, non-paradoxical term. By applying this
procedure recursively from the inside out on a stratified term, all paradoxes are eliminated,
resulting in a term that no longer contains the triggers for the main reduction cycle.
Interpretation: This result establishes that while the G¨odel Mirror is not strongly nor-
malizing, we can recover a form of weak normalization for a well-behaved fragment of the
calculus. This allows for the comparison and symbolic manipulation of terms that would
otherwise reduce infinitely, providing a bridge between our paraconsistent calculus and the
goals of classical rewriting theory.
7. Case Studies
To illustrate the operational semantics and meta-theoretical properties of the G¨odel Mirror,
we present a case study of the system’s behavior when encountering a canonical paradoxical
term. This example demonstrates the full, deterministic cycle of paradox resolution.
7.1. A Self-Referential Paradox. We begin by constructing a minimal term that satisfies
the is paradox predicate defined in Section 3.2. We use the named constructor to label a
self ref term, creating a direct syntactic self-reference.
-- Lean 4 Input
def paradoxical_term := named "Liar" self_ref
According to our operational semantics, this term is not a value and is classified as a
Paradox.
7.2. Execution Trace. We trace the reduction of paradoxical term for three steps. The
trace demonstrates the deterministic application of the reduction rules for paradoxical,
encapsulated, and re-entering terms, as proven in Theorem 3 (Controlled Reaction to
Paradox).
-- Reduction Trace
-- Step 0: named "Liar" self_ref
-- Step 1: cap (named "Liar" self_ref)
-- Step 2: enter (named "Liar" self_ref)
-- Step 3: node (named "Liar" self_ref)
8
J. CHAN
7.3. Interpretation of the Cycle. Each step in the trace corresponds to a specific rule
from the operational semantics, demonstrating the controlled transformation of the paradox
into a stable structure.
• Step 0 →1 (Paradox): The initial term matches the is paradox predicate. The
[Reduction-Paradox] rule is applied, and the term is encapsulated. This corresponds to
the first step of the cycle proven in Theorem 3.
• Step 1 →2 (Integrate): The term now has a cap constructor at its head.
The
[Reduction-Integrate] rule is applied, transforming the term into a Reentry state. This
is the second step of the cycle.
• Step 2 →3 (Reenter): The term has an enter constructor. The side condition for the
[Reduction-Reentry] rule (¬is paradox) is met because the term is no longer a bare
named self ref. The rule fires, transforming the term into a stable node structure. This
is the final step of the proven cycle.
After three steps, the system reaches a term in a Normal state. The initial paradox
has been successfully metabolized into a new, stable syntactic structure without causing
an uncontrolled loop or logical explosion, thus concretely demonstrating the system’s core
non-explosion property.
8. Mechanization in Lean
All definitions, operational semantics, and meta-theoretical properties of the G¨odel Mirror cal-
culus presented in this paper have been formalized in the Lean 4 proof assistant [dMU21].
The mechanization provides a machine-checked guarantee of the system’s correctness and
the soundness of its core principles.
The formalization includes:
• The inductive definitions of the MirrorSystem syntax and MirrorState classifier.
• A function for the step relation that implements the operational semantics.
• Formal proofs for the main theorems, including Progress and the Controlled Reaction
to Paradox (Non-Explosion) property.
The complete, executable Lean development, which allows for the verification of all
theorems and the reproduction of the case study traces, is provided in the Appendix and is
also available in a public code repository. This ensures that our results are fully reproducible
and verifiable.
9. Conclusion and Open Problems
This paper introduced the G¨odel Mirror, a minimal paraconsistent calculus that treats
self-referential paradox not as a logical error, but as a deterministic engine for structural
transformation. We defined its operational semantics as a term rewriting system, mechanized
the calculus in the Lean 4 proof assistant, and proved its central non-explosion property.
The core result is that the system metabolizes contradictions into new syntactic structures
through a controlled, verifiable rewrite cycle.
Unlike classical term rewriting systems focused on termination, the G¨odel Mirror provides
a formal model for productive, non-terminating computation. This work demonstrates that
intentionally non-terminating behavior can be a productive computational feature, offering
a formal basis for systems that must reason in the presence of internal inconsistency.
G¨ODEL MIRROR
9
The minimalism of our system leaves several important theoretical and practical questions
open for future research.
9.1. Open Problems.
(1) Decidability and Normalization: While the core calculus is not strongly normalizing,
the completion procedure defined in Section 6 suggests that a fragment of the system
enjoys weak normalization. What are the precise boundaries of the decidable fragment of
the calculus? Can the normalization techniques for proof-irrelevant systems be adapted
to this paraconsistent setting [Coq18]?
(2) Richer Type Systems: How can the G¨odel Mirror be extended with more expressive
type systems? Integrating dependent types, for example, would allow for more complex
invariants to be expressed and preserved, but it would require a careful treatment of
how the paradox resolution cycle interacts with the type-checking rules.
(3) Categorical Semantics: What is the natural categorical semantics for the G¨odel
Mirror? The system’s state transitions, particularly the encapsulation and re-entry of
terms, suggest a connection to modalities and state-based monads, but a full categorical
model remains to be developed.
(4) Operational Realization: The abstract cycle of the Mirror could be given a concrete
operational interpretation, for instance, as a concurrent process. Can the circular proofs
as session-typed processes framework be adapted to model our paradox resolution loop,
providing a bridge to the theory of concurrency [DP22]?
(5) Expressiveness of Paradox Detection: The is paradox(t) predicate in this work
is intentionally minimal, recognizing only direct syntactic self-reference. Future research
could explore extending this predicate to identify a broader class of self-referential and
paradoxical structures. This might involve more sophisticated static analysis of terms or
the integration of a more expressive naming mechanism analogous to G¨odel numbering,
allowing the system to recognize and metabolize more complex logical paradoxes.
These questions highlight the potential for the G¨odel Mirror to serve as a foundational
building block for a new class of formal systems that are, by design, robust to internal
inconsistency.
References
[AC20]
Andreas Abel and Thierry Coquand. Failure of normalization in impredicative type theory with
proof-irrelevant propositional equality. Logical Methods in Computer Science, 16(2), 2020. doi:
10.23638/LMCS-16(2:14)2020.
[Bar84]
Hendrik Pieter Barendregt. The Lambda Calculus: Its Syntax and Semantics, volume 103 of Studies
in Logic and the Foundations of Mathematics. North-Holland, 1984.
[Bel77]
Nuel D. Belnap. A useful four-valued logic. Modern Uses of Multiple-Valued Logic, pages 5–37,
1977. doi:10.1007/978-94-010-1161-7_2.
[BN98]
Franz Baader and Tobias Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
doi:10.1017/CBO9781139172752.
[Boo95]
George Boolos. The Logic of Provability. Cambridge University Press, 1995.
[Coq18]
Thierry Coquand. A reduction-free normalisation argument for a proof-irrelevant type system. In
Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS),
pages 299–307, 2018. doi:10.46298/lmcs-19(3:5)2023.
[dMU21] Leonardo de Moura and Sebastian Ullrich. The lean 4 theorem prover and programming language.
In Automated Deduction – CADE 28, volume 12699 of Lecture Notes in Computer Science, pages
625–635. Springer International Publishing, 2021. doi:10.1007/978-3-030-79876-5_37.
10
J. CHAN
[DP22]
Farzaneh Derakhshan and Frank Pfenning. Circular proofs as session-typed processes. Logical
Methods in Computer Science, 18, 2022. doi:10.46298/lmcs-18(2:8)2022.
[Pol20]
Andrew Polonsky. Fixed point combinators as fixed points of higher-order fixed point generators.
Logical Methods in Computer Science, 16(3), 2020. doi:10.23638/LMCS-16(3:7)2020.
Appendix A. Mechanization: Lean 4 Source Code and Proofs
This appendix provides information on the complete Lean 4 implementation of the G¨odel
Mirror system. The formalization serves as a machine-checked companion to the definitions
and theorems presented throughout this paper.
The mechanization is organized into several modules, covering the system’s syntax,
operational semantics, meta-theory, and the case study. Key modules include:
• Syntax.lean: Core inductive definitions for MirrorSystem and MirrorState.
• Semantics.lean: The implementation of the step function and the reduction relation →.
• MetaTheory.lean: The formal proofs of the main theorems, including Progress and the
Controlled Reaction to Paradox (Non-Explosion).
• Examples.lean: The executable trace from the case study, demonstrating the paradox
resolution cycle.
To maintain the readability of the paper and to provide access to the executable
development, the complete, commented source code is hosted in a public repository.
GitHub Repository: https://github.com/jhetchan/godel-mirror
The repository contains all mechanized proofs, allowing for the full verification of the
theorems and the reproduction of all results discussed in this paper.
|
G ̈ODEL MIRROR: A FORMAL SYSTEM FOR CONTRADICTION-DRIVEN RECURSION JHET CHAN a Independent Researcher, Kuala Lumpur, Malaysia e-mail address: Abstract. We introduce the G ̈odel Mirror, a formal system defined in Lean 4 that treats contradiction as a control signal for recursive structural evolution. Inspired by G ̈odelian self-reference, our system's operational semantics encode symbolic paradoxes as deterministic transitions. Unlike systems designed to guarantee normalization, the G ̈odel Mirror is a minimal and verifiable architecture that leverages a controlled, non-terminating loop as a productive feature. Our Lean 4 mechanization proves that self-referential paradoxes are deterministically encapsulated and resolved into new structures without leading to logical explosion, yielding a paraconsistent inference loop: Paradox →Encapsulate →Reenter →Node We argue that this calculus opens a new class of symbolic systems in which contradiction is metabolized into structure, providing a formal basis for agents capable of resolving internal inconsistencies. 1. Introduction This work introduces the G ̈odel Mirror,1 a minimal formal system where self-referential paradoxes are not errors but the primary engine for deterministic structural transformation. Classical logical and computational systems are designed to prevent or eliminate contradictions; encountering a paradox is typically a sign of failure, leading either to logical explosion via ex contradictione quodlibet or to non-termination. Such behaviors are often viewed as undesirable, as demonstrated by results on the failure of normalization in expressive type theories [AC20]. In contrast, the G ̈odel Mirror is designed to productively manage these phenomena. It is a paraconsistent calculus that, when faced with a self-referential paradox, initiates a controlled, deterministic rewrite cycle. Instead of exploding, the system encapsulates the paradoxical term, reintroduces it into the computational context, and thereby transforms the contradiction into a new, stable syntactic structure. This behavior is not an emergent side effect but a designed feature of the system's operational semantics. Key words and phrases: G ̈odelian recursion, paradox resolution, term rewriting, paraconsistent logic, formal methods, type theory. 1The name "Mirror" is chosen to evoke the themes of self-reference and the computational reflection initiated by paradox. 16 Sep 2025 2 J. CHAN By treating contradiction as a signal for structural evolution, our work provides a formal basis for agents capable of resolving internal inconsistencies. Such a calculus, for example, could model belief revision in systems that must integrate new, conflicting information without failure, or it could provide a foundation for fault-tolerant logical frameworks where local paradoxes are resolved into stable states rather than causing a global crash. The primary contributions of this paper are: (1) The formal definition of the G ̈odel Mirror, a term rewriting system with explicit constructors for self-reference and paradox handling. (2) A set of core theorems, mechanized in the Lean 4 proof assistant, that formally establish the system's deterministic, non-explosive reaction to paradox. (3) A demonstration of this calculus as a novel model of computation that uses a nonterminating recursive loop as a productive mechanism. This paper is organized as follows. Section 2 positions our work relative to foundational research in term rewriting, recursion theory, and paraconsistent logic. Section 3 defines the formal syntax and operational semantics of the G ̈odel Mirror system. Section 4 presents the meta-theory of the system, including the core theorems and their proofs. We then provide a case study (Section 5), discuss the mechanization in Lean (Section 6), and conclude with a discussion of open problems (Section 7). 2. Related Work The G ̈odel Mirror builds upon foundational themes in logic, recursion theory, and the semantics of computation. We position our contribution relative to three core areas: the established theory of term rewriting systems, classical mechanisms for recursion, and prior work on the interplay between self-reference, consistency, and normalization in formal systems. 2.1. Term Rewriting Systems. The operational semantics of the G ̈odel Mirror can be understood as a term rewriting system (TRS), a formal model for computation based on the repeated application of rewrite rules [BN98]. The 'step' function of our calculus (Section 3) defines a set of rules for transforming 'MirrorSystem' terms. However, classical TRS theory is often concerned with establishing properties such as confluence and termination (strong normalization). Our work diverges from this tradition by design. The G ̈odel Mirror is intentionally non-terminating for a specific class of inputs, leveraging a deterministic, cyclic reduction path not as a flaw, but as its primary computational feature. Our contribution lies not in proving termination, but in proving that this specific non-terminating behavior is productive and does not lead to logical explosion. 2.2. Recursion and Fixed-Point Combinators. General recursion in foundational calculi is classically achieved via fixed-point combinators, such as the Y combinator in the untyped lambda calculus [Bar84]. These combinators provide a general mechanism for a function to call itself. The G ̈odel Mirror offers a more structured, syntactically explicit alternative. Instead of a general-purpose combinator, our calculus provides dedicated constructors ('self ref', 'Encapsulate', 'Reenter') that define a specific modality for self-reference. This constrains the recursive behavior to a controlled cycle centered around paradox resolution, providing a domain-specific mechanism for recursion rather than a universal one [Pol20]. G ̈ODEL MIRROR 3 2.3. Self-Reference, Consistency, and Normalization. The relationship between selfreference and consistency is a central theme in logic, famously explored in G ̈odel's incompleteness theorems. Provability logic formalizes how systems can reason about their own properties, showing the subtleties that arise from such introspection [Boo95]. While classical logic trivializes in the presence of a paradox (ex contradictione quodlibet), paraconsistent logics provide frameworks that gracefully handle contradictions without explosion. Our system can be viewed as a computational implementation of a paraconsistent principle. Furthermore, recent work in type theory has shown that combining features like impredicativity and proof-irrelevance can lead to a failure of normalization, where non-terminating terms can be constructed within otherwise consistent systems [AC20]. This "negative result" provides crucial motivation for our work. While such non-terminating terms are typically seen as bugs or limitations, the G ̈odel Mirror reframes this behavior as a feature. We propose a formal architecture that acknowledges the reality of non-termination and provides a deterministic, verified mechanism for managing it productively. 3. The G ̈odel Mirror System We now define the formal calculus of the G ̈odel Mirror. The system is a term rewriting system defined over a set of symbolic expressions, MirrorSystem, whose states are classified by the type MirrorState. Its dynamics are governed by a deterministic single-step reduction relation, denoted →. 3.1. Syntax. The core data structure of the calculus is the set of terms, MirrorSystem, defined by the following inductive type in Lean 4. inductive MirrorSystem : Type where | base : MirrorSystem | node : MirrorSystem →MirrorSystem | self_ref : MirrorSystem | cap : MirrorSystem →MirrorSystem -- Encapsulate | enter : MirrorSystem →MirrorSystem -- Reenter | named : String →MirrorSystem →MirrorSystem Terms in this grammar represent the symbolic state of the system. The base constructor denotes an atomic, irreducible state. The node constructor allows for the inductive growth of structures. The named constructor provides a mechanism for labeling terms, analogous to G ̈odel numbering, which is used to identify specific paradoxical forms. The constructors self ref, cap (encapsulate), and enter (reenter) are central to the system's handling of self-reference and paradox. 3.2. State Classification. The behavior of a term is determined by its classification into one of four states. This classification is performed by a function that inspects the top-level constructor of a term. inductive MirrorState : Type where | Normal -- A standard, reducible term | Paradox -- A term representing a self-referential contradiction | Integrate-- A term that has encapsulated a paradox | Reentry -- A term being reintroduced into the system 4 J. CHAN For the purpose of defining the operational semantics, we define a predicate, is paradox(t), which holds true if a term t is a specific, syntactically recognizable self-referential form. For this paper, we define it as follows: is paradox(t) := ∃s, t = named s self ref 3.3. Operational Semantics. The dynamics of the G ̈odel Mirror are defined by a smallstep operational semantics, specified by the reduction relation →. This relation is the transitive closure of the single-step step function, which we define via the following rewrite rules. Let t be a term of type MirrorSystem. The single-step reduction t →t′ is defined as follows: • [Reduction-Paradox] If is paradox(t), the term is encapsulated: t →cap(t) • [Reduction-Integrate] A term that has been encapsulated is prepared for re-entry: cap(t) →enter(t) • [Reduction-Reentry] A re-entering term that is no longer a paradox is integrated into a stable structure: enter(t) →node(t) if ¬is paradox(t) • [Reduction-Node] The default structural growth rule for terms in a normal state. This rule represents unbounded, monotonic structural growth in the absence of a paradoxical trigger, ensuring the system can always progress: node(t) →node(node(t)) • [Reduction-Named] Labels are preserved during reduction of the underlying term: t →t′ named(s, t) →named(s, t′) The base and self ref constructors are irreducible values; they do not appear on the left-hand side of any reduction rule. This set of rules defines a deterministic state transition system where a specific syntactic form, a paradox, initiates a controlled, three-step transformation cycle that results in structural growth rather than non-termination or logical explosion. 4. Formal Semantics The operational semantics defined in Section 3 provides a clear, mechanistic account of the G ̈odel Mirror's behavior as a term rewriting system. To complement this, we briefly outline a denotational semantics, interpreting terms in a mathematical domain that makes the system's core properties, particularly its handling of paradox, explicit. Our calculus is designed to manage non-termination and contradiction without leading to logical explosion. A natural semantic domain for such a system is one based on a paraconsistent logic, such as Belnap-Dunn four-valued logic (FDE) [Bel77]. In this setting, a proposition can be interpreted not just as true or false, but also as both (a contradiction, ⊤) or neither (a gap, ⊥). G ̈ODEL MIRROR 5 We can model the meaning of a MirrorSystem term JtK as a state in a suitable semantic domain. The state transitions defined by our operational semantics correspond to monotone functions on this domain. The key properties are: • Non-Explosion: A paradoxical term like Jnamed(s, self ref)K does not map to a trivializing state. Instead, its meaning is managed through the semantic interpretations of the paradox-handling constructors: - The meaning of Jcap(t)K can be interpreted as a function that transitions the meaning of JtK from a contradictory state (such as T in four-valued logic) to a contained, observable state. - The meaning of Jenter(t)K then represents the controlled reintroduction of this term's meaning back into the main semantic domain. • Structural Growth: The meaning of Jnode(t)K corresponds to a monotonic extension of the information content of JtK, representing stable and consistent growth. While the operational semantics is sufficient to establish the main results of this paper, this denotational perspective provides a semantic justification for the system's design and confirms that its behavior is well-founded in established logical principles. A full development of the calculus's domain-theoretic or coalgebraic semantics is left for future work. 5. Meta-Theory: Core Properties We now establish the core meta-theoretical properties of the G ̈odel Mirror system. The following theorems, all of which have been mechanized in the Lean 4 proof assistant, formally guarantee the system's deterministic and controlled behavior in the presence of self-referential paradoxes. Our primary results are that the system makes progress, preserves structure, and, most importantly, does not explode. 5.1. Progress. The Progress theorem guarantees that any term that is not a value (i.e., not an irreducible base term) can take a step. It ensures the system never gets stuck. Theorem 5.1 (Progress). For any term t ∈MirrorSystem, either t is a value (i.e., t = base) or there exists a term t′ such that t →t′. Proof Sketch. The proof proceeds by induction on the structure of the term t. We show that each constructor that is not a value appears on the left-hand side of a reduction rule in our operational semantics (Section 3.3). Interpretation: The system is never in a state where it cannot proceed. A term is either in a final, stable form (base) or a reduction is possible. 5.2. Preservation. The Preservation theorem ensures that reduction steps maintain the system's structural integrity. While our system is untyped, we can define a simple structural invariant, such as the depth of nested 'node' constructors, and prove that reduction does not violate it in unintended ways. A more practical preservation property is that labels are preserved. Theorem 5.2 (Label Preservation). If named(s, t) →t′, then t′ = named(s, t′′) for some t′′ where t →t′′. 6 J. CHAN Proof Sketch. This follows directly from the definition of the [Reduction-Named] rule in the operational semantics. 5.3. Non-Explosion. The most critical property of the G ̈odel Mirror is that it is paraconsistent; a local contradiction does not trivialize the entire system. We formalize this by stating that the detection of a paradox leads to a controlled, three-step structural transformation, not an arbitrary state. Theorem 5.3 (Controlled Reaction to Paradox). If is paradox(t), then the term deterministically reduces in exactly three steps to a stable, non-paradoxical structure. t →cap(t) →enter(t) →node(t) Proof Sketch. This follows by direct application of the [Reduction-Paradox], [ReductionIntegrate], and [Reduction-Reentry] rules. The side condition on the reentry rule, ¬is paradox(t), holds because t is now wrapped in other constructors. Interpretation: This theorem is the formal guarantee of the G ̈odel Mirror's core mechanic. Unlike in classical logic, where a contradiction entails any proposition, here a contradiction entails a specific, finite sequence of structural changes. The system metabolizes the paradox without exploding. This result formally separates our calculus from systems governed by the principle of explosion. 6. Mirror-Completion and Canonical Forms The G ̈odel Mirror calculus is intentionally non-terminating for paradoxical terms, which precludes a general normalization property. However, it is still desirable to define a notion of equivalence between terms and to identify canonical representatives for terms that enter the paradox resolution cycle. To this end, we define a Mirror-Completion procedure that transforms a term containing a paradox cycle into a finite, canonical form. This procedure allows us to establish weak normalization properties for a stratified fragment of the calculus. 6.1. Completion Procedure. The completion procedure, denoted C, is a function that maps a MirrorSystem term to its canonical form. The procedure identifies the specific, deterministic three-step cycle initiated by a paradox and replaces it with its resolved, stable outcome. We define the completion of a term t as follows: • If t does not contain a subterm matching the paradoxical form is paradox, then C(t) = t. • If t contains a subterm p such that is paradox(p), we know from Theorem 3 that p reduces to node(p) in three steps. The completion procedure replaces the subterm p with its resolved form node(p). C(p) := node(p) This procedure effectively "short-circuits" the paradox resolution cycle, replacing the dynamic, three-step reduction with its static, terminal structure. G ̈ODEL MIRROR 7 6.2. Properties of Completion. The Mirror-Completion procedure allows us to reason about the equivalence of terms modulo the resolution of paradoxes. Proposition 6.1 (Weak Normalization for Stratified Terms). For any term t that does not contain nested paradoxes (i.e., is stratified), the completion procedure C(t) terminates in a unique canonical form that is irreducible under the standard reduction relation →. Proof Sketch. The procedure is guaranteed to terminate because it only acts on a fixed syntactic pattern and replaces it with a larger, non-paradoxical term. By applying this procedure recursively from the inside out on a stratified term, all paradoxes are eliminated, resulting in a term that no longer contains the triggers for the main reduction cycle. Interpretation: This result establishes that while the G ̈odel Mirror is not strongly normalizing, we can recover a form of weak normalization for a well-behaved fragment of the calculus. This allows for the comparison and symbolic manipulation of terms that would otherwise reduce infinitely, providing a bridge between our paraconsistent calculus and the goals of classical rewriting theory. 7. Case Studies To illustrate the operational semantics and meta-theoretical properties of the G ̈odel Mirror, we present a case study of the system's behavior when encountering a canonical paradoxical term. This example demonstrates the full, deterministic cycle of paradox resolution. 7.1. A Self-Referential Paradox. We begin by constructing a minimal term that satisfies the is paradox predicate defined in Section 3.2. We use the named constructor to label a self ref term, creating a direct syntactic self-reference. -- Lean 4 Input def paradoxical_term := named "Liar" self_ref According to our operational semantics, this term is not a value and is classified as a Paradox. 7.2. Execution Trace. We trace the reduction of paradoxical term for three steps. The trace demonstrates the deterministic application of the reduction rules for paradoxical, encapsulated, and re-entering terms, as proven in Theorem 3 (Controlled Reaction to Paradox). -- Reduction Trace -- Step 0: named "Liar" self_ref -- Step 1: cap (named "Liar" self_ref) -- Step 2: enter (named "Liar" self_ref) -- Step 3: node (named "Liar" self_ref) 8 J. CHAN 7.3. Interpretation of the Cycle. Each step in the trace corresponds to a specific rule from the operational semantics, demonstrating the controlled transformation of the paradox into a stable structure. • Step 0 →1 (Paradox): The initial term matches the is paradox predicate. The [Reduction-Paradox] rule is applied, and the term is encapsulated. This corresponds to the first step of the cycle proven in Theorem 3. • Step 1 →2 (Integrate): The term now has a cap constructor at its head. The [Reduction-Integrate] rule is applied, transforming the term into a Reentry state. This is the second step of the cycle. • Step 2 →3 (Reenter): The term has an enter constructor. The side condition for the [Reduction-Reentry] rule (¬is paradox) is met because the term is no longer a bare named self ref. The rule fires, transforming the term into a stable node structure. This is the final step of the proven cycle. After three steps, the system reaches a term in a Normal state. The initial paradox has been successfully metabolized into a new, stable syntactic structure without causing an uncontrolled loop or logical explosion, thus concretely demonstrating the system's core non-explosion property. 8. Mechanization in Lean All definitions, operational semantics, and meta-theoretical properties of the G ̈odel Mirror calculus presented in this paper have been formalized in the Lean 4 proof assistant [dMU21]. The mechanization provides a machine-checked guarantee of the system's correctness and the soundness of its core principles. The formalization includes: • The inductive definitions of the MirrorSystem syntax and MirrorState classifier. • A function for the step relation that implements the operational semantics. • Formal proofs for the main theorems, including Progress and the Controlled Reaction to Paradox (Non-Explosion) property. The complete, executable Lean development, which allows for the verification of all theorems and the reproduction of the case study traces, is provided in the Appendix and is also available in a public code repository. This ensures that our results are fully reproducible and verifiable. 9. Conclusion and Open Problems This paper introduced the G ̈odel Mirror, a minimal paraconsistent calculus that treats self-referential paradox not as a logical error, but as a deterministic engine for structural transformation. We defined its operational semantics as a term rewriting system, mechanized the calculus in the Lean 4 proof assistant, and proved its central non-explosion property. The core result is that the system metabolizes contradictions into new syntactic structures through a controlled, verifiable rewrite cycle. Unlike classical term rewriting systems focused on termination, the G ̈odel Mirror provides a formal model for productive, non-terminating computation. This work demonstrates that intentionally non-terminating behavior can be a productive computational feature, offering a formal basis for systems that must reason in the presence of internal inconsistency. G ̈ODEL MIRROR 9 The minimalism of our system leaves several important theoretical and practical questions open for future research. 9.1. Open Problems. (1) Decidability and Normalization: While the core calculus is not strongly normalizing, the completion procedure defined in Section 6 suggests that a fragment of the system enjoys weak normalization. What are the precise boundaries of the decidable fragment of the calculus? Can the normalization techniques for proof-irrelevant systems be adapted to this paraconsistent setting [Coq18]? (2) Richer Type Systems: How can the G ̈odel Mirror be extended with more expressive type systems? Integrating dependent types, for example, would allow for more complex invariants to be expressed and preserved, but it would require a careful treatment of how the paradox resolution cycle interacts with the type-checking rules. (3) Categorical Semantics: What is the natural categorical semantics for the G ̈odel Mirror? The system's state transitions, particularly the encapsulation and re-entry of terms, suggest a connection to modalities and state-based monads, but a full categorical model remains to be developed. (4) Operational Realization: The abstract cycle of the Mirror could be given a concrete operational interpretation, for instance, as a concurrent process. Can the circular proofs as session-typed processes framework be adapted to model our paradox resolution loop, providing a bridge to the theory of concurrency [DP22]? (5) Expressiveness of Paradox Detection: The is paradox(t) predicate in this work is intentionally minimal, recognizing only direct syntactic self-reference. Future research could explore extending this predicate to identify a broader class of self-referential and paradoxical structures. This might involve more sophisticated static analysis of terms or the integration of a more expressive naming mechanism analogous to G ̈odel numbering, allowing the system to recognize and metabolize more complex logical paradoxes. These questions highlight the potential for the G ̈odel Mirror to serve as a foundational building block for a new class of formal systems that are, by design, robust to internal inconsistency. References [AC20] Andreas Abel and Thierry Coquand. Failure of normalization in impredicative type theory with proof-irrelevant propositional equality. Logical Methods in Computer Science, 16(2), 2020. [Bar84] Hendrik Pieter Barendregt. The Lambda Calculus: Its Syntax and Semantics, volume 103 of Studies in Logic and the Foundations of Mathematics. North-Holland, 1984. [Bel77] Nuel D. Belnap. A useful four-valued logic. Modern Uses of Multiple-Valued Logic, pages 5-37, 1977. [BN98] Franz Baader and Tobias Nipkow. Term Rewriting and All That. Cambridge University Press, 1998. [Boo95] George Boolos. The Logic of Provability. Cambridge University Press, 1995. [Coq18] Thierry Coquand. A reduction-free normalisation argument for a proof-irrelevant type system. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 299-307, 2018. [dMU21] Leonardo de Moura and Sebastian Ullrich. The lean 4 theorem prover and programming language. In Automated Deduction - CADE 28, volume 12699 of Lecture Notes in Computer Science, pages 625-635. Springer International Publishing, 2021. 10 J. CHAN [DP22] Farzaneh Derakhshan and Frank Pfenning. Circular proofs as session-typed processes. Logical Methods in Computer Science, 18, 2022. [Pol20] Andrew Polonsky. Fixed point combinators as fixed points of higher-order fixed point generators. Logical Methods in Computer Science, 16(3), 2020. Appendix A. Mechanization: Lean 4 Source Code and Proofs This appendix provides information on the complete Lean 4 implementation of the G ̈odel Mirror system. The formalization serves as a machine-checked companion to the definitions and theorems presented throughout this paper. The mechanization is organized into several modules, covering the system's syntax, operational semantics, meta-theory, and the case study. Key modules include: • Syntax.lean: Core inductive definitions for MirrorSystem and MirrorState. • Semantics.lean: The implementation of the step function and the reduction relation →. • MetaTheory.lean: The formal proofs of the main theorems, including Progress and the Controlled Reaction to Paradox (Non-Explosion). • Examples.lean: The executable trace from the case study, demonstrating the paradox resolution cycle. To maintain the readability of the paper and to provide access to the executable development, the complete, commented source code is hosted in a public repository. GitHub Repository: https://github.com/jhetchan/godel-mirror The repository contains all mechanized proofs, allowing for the full verification of the theorems and the reproduction of all results discussed in this paper.
|
2509.16236
|
Similarity as Thermodynamic Work:
Between Depth and Diversity
— from Information Distance to Ugly Duckling
Kentaro IMAFUKU
National Institute of Advanced Industrial Science and Technology (AIST)
Abstract
Defining similarity is a fundamental challenge in information science. Watanabe’s Ugly
Duckling Theorem highlights diversity, while algorithmic information theory emphasizes depth
through Information Distance. We propose a statistical-mechanical framework that treats
program length as energy, with a temperature parameter unifying these two aspects: in the
low-temperature limit, similarity approaches Information Distance; in the high-temperature
limit, it recovers the indiscriminability of the Ugly Duckling theorem; and at the critical point,
it coincides with the Solomonoff prior. We refine the statistical-mechanical framework by
introducing regular universal machines and effective degeneracy ratios, allowing us to separate
redundant from core diversity. This refinement yields new tools for analyzing similarity and
opens perspectives for information distance, model selection, and non-equilibrium extensions.
Keywords: similarity, thermodynamic work, Information Distance, Solomonoff universal prior,
Ugly Duckling theorem, Kolmogorov complexity, statistical mechanics
1
Introduction
Similarity plays a fundamental role in science. Many advances in physics, information theory,
and data science have been achieved by identifying similar structures and constructing unified
frameworks that explain them. Despite its importance, however, defining similarity in a principled
way remains a long-standing challenge.
Two contrasting approaches illustrate this difficulty. Watanabe’s Ugly Duckling theorem[1–3]
shows that if all predicates are treated equally, any pair of objects becomes equally similar,
emphasizing the dependence on arbitrary weighting. In contrast, algorithmic information theory
introduces the Information Distance[4, 5], defined through Kolmogorov complexity [6], which
quantifies similarity by the depth of the shortest description. These perspectives highlight
seemingly opposing aspects—diversity versus depth—and make clear the fundamental difficulty
of formalizing similarity.
Recent developments in information thermodynamics (e.g., [7–18]) and in algorithmic thermo-
dynamics (e.g., [19–28]) provide a natural bridge for importing statistical–mechanical concepts
into information theory. Building on this line of work, we treat program length as energy,
introduce partition functions and free energies, and define similarity as the thermodynamic work
required to couple two objects. This yields a unified framework where the balance between
description length and multiplicity plays the role of energy–entropy tradeoff.
1
arXiv:2509.16236v1 [cs.IT] 16 Sep 2025
Figure 1: Similarity as a thermodynamic balance. Program length plays the role of energy
(depth), while effective degeneracy provides residual entropy (diversity). The tradeoff is governed
by the inverse temperature β, interpolating between three regimes: Information Distance at low
temperature, the Solomonoff prior at β = ln 2, and the Ugly Duckling phase at high temperature.
Figure 1 illustrates this perspective. The inverse temperature β tunes the balance between
depth (minimal description length) and diversity (residual entropy of admissible cores). Because
similarity is defined through free–energy differences, trivial wrapper redundancies uniformly
cancel under the regular-UTM assumption, leaving only core diversity as the entropic contribution.
Within this picture, three classical notions of similarity emerge as temperature regimes: the
Information Distance at low temperature, the Solomonoff universal prior at β = ln 2, and the
Ugly Duckling phase at high temperature.
This paper makes the following contributions:
• We propose a statistical–mechanical framework that redefines similarity as thermodynamic
work.
• We demonstrate how the Information Distance, the Solomonoff universal prior, and the
Ugly Duckling theorem arise in distinct temperature regimes.
• We introduce the notion of regular UTMs, separating trivial wrapper redundancy from
genuine core diversity, and define effective degeneracy as a new measure of diversity.
• We identify residual entropy ratios as a structural source of similarity, complementing
description length.
• We outline possible non–equilibrium extensions via Jarzynski–type relations [29–31].
2
Preliminaries
In this section we briefly recall the basic notions from algorithmic information theory and
from Watanabe’s combinatorial argument that are relevant to our framework. For algorithmic
information theory, standard references include Kolmogorov’s original work [6], Chaitin’s program-
size complexity [20, 21], and the textbook by Li and Vitányi [5]. For the combinatorial perspective,
we refer to Watanabe’s papers on the Ugly Duckling theorem and the principle of the unique
choice [1–3], as well as related philosophical discussions by Goodman [32] and methodological
debates in taxonomy [33].
2
2.1
Kolmogorov complexity and Information Distance
For a finite binary string x, the Kolmogorov complexity K(x) is defined as the length of the
shortest prefix-free program p that outputs x on a fixed UTM U [5, 6, 20, 34, 35]:
K(x) := min{ |p| : U(p) = x }.
Intuitively, K(x) measures the depth of the most concise description of x. For two strings x and
y, the conditional complexity K(y|x) is the length of the shortest program producing y given x
as auxiliary input.
Based on these quantities, Bennett, Gács, Li, Vitányi, and Zurek defined the Information
Distance [4, 5]:
E(x, y) := max{K(x|y), K(y|x)}.
Up to logarithmic precision, this gives a universal metric for similarity. In our framework, this
“low-temperature” regime will reproduce K(y|x) as reversible work.
2.2
Solomonoff universal prior
Solomonoff [34, 35] introduced a universal distribution that assigns to each string x the probability
M(x) :=
X
p:U(p)=x
2−|p|.
This Bayesian prior aggregates all programs producing x, weighting shorter ones more heavily.
Unlike K(x), which depends only on the shortest program, M(x) collects the entire ensemble. In
our thermodynamic analogy, M(x) corresponds to a partition function evaluated at the critical
temperature β = ln 2. This interpretation connects algorithmic probability with free-energy
ideas and underlies universal prediction frameworks such as AIXI [36].
2.3
Ugly Duckling theorem (Watanabe)
Watanabe’s Ugly Duckling theorem [1–3] states that if similarity is defined by counting shared
predicates, then every pair of distinct objects is equally similar. Formally, in a universe Ω,
the number of predicates true of both x and y — equivalently, the number of subsets O ⊆Ω
containing both — is 2|Ω|−2, independent of x and y. Thus, when all predicates are weighted
equally, all pairs of objects become indiscriminable. The theorem sharpened Goodman’s earlier
philosophical claim—that induction depends on which predicates are privileged [32]—into a
combinatorial statement.
To connect this symmetry with algorithmic information theory, we reinterpret programs
as predicates.
In the standard view, U(p) = x means that program p outputs x.
In our
reformulation, U(p) is seen as a set O of admissible objects, with x ∈U(p) interpreted as “x
satisfies predicate p”. As illustrated in Fig. 2, this bridges Watanabe’s set-theoretic formulation
with the program-based language of algorithmic information theory. It also reveals the limitation
of the original theorem: uniformly counting predicates erases distinctions, but once predicates
are weighted by description length, shorter programs dominate. This observation motivates
the statistical-mechanical framework we develop, where predicates are treated as programs and
similarity arises from ensembles weighted by their complexity.
3
Framework
3.1
Setup and notation
As discussed in Sec. 2.3, Watanabe’s Ugly Duckling theorem shows that if all predicates
(i.e. subsets of Ω) are counted equally, then every pair of distinct objects is equally similar.
3
Figure 2: Illustration of our reinterpretation. Top: a program outputs a single object. Middle: a
predicate acts as a characteristic function. Bottom: in our framework, a program defines a set
of objects, enabling predicates to be treated as programs.
Combinatorially, for a finite universe Ω, the number of subsets containing both x and y is always
2|Ω|−2, independent of the content of x and y. This symmetry illustrates the arbitrariness of
similarity definitions when no weighting scheme is imposed.
To overcome this limitation, we introduce a weighting scheme grounded in algorithmic
information theory. Instead of assigning equal weight to all subsets O ⊆Ω, we weight each subset
according to the length of a program that generates it. Shorter programs are assigned larger
weights, so the ensemble is biased toward simpler predicates. This construction interpolates
between the uniform view of the Ugly Duckling theorem and the minimal-description view of
Kolmogorov complexity, and provides the basis for our statistical–mechanical framework.
3.1.1
Objects
The objects under consideration are elements of a finite set Ω, where each element is regarded as
a finite binary string (data). For later use we introduce the set
O(−) := { O | O ⊆Ω, O ̸= ∅},
(1)
namely the non-empty subsets of Ω. Throughout, x, y always denote elements of Ω. When
necessary we take O ∈O(−) and consider a program satisfying U(p) = O. For example, in Zβ(x)
the relevant subsets O are those containing x, while in Zβ(x, y) they are those containing both
x and y.
3.1.2
Programs
As in algorithmic information theory, we consider prefix-free programs p executed on UTM U.
When U outputs a set O ∈O(−) on input p, we write U(p) = O. If x ∈O, we write
U(p) ∋x.
(2)
Similarly, if x, y ∈O we write
U(p) ∋(x, y).
(3)
The length of a program p is denoted by |p|.
3.2
Thermodynamic setting
Once predicates are reinterpreted as programs generating subsets O ⊆Ω, we can introduce a
statistical–mechanical formalism. Partition functions and reversible work are defined in the
4
standard sense of statistical mechanics [37–39]. The key idea is to regard the program length
|p| as energy, and to assign each program a Boltzmann weight e−β|p| at inverse temperature
β. In this way, short programs dominate at low temperature, while at high temperature all
programs contribute almost equally, recovering the uniform-counting symmetry of the Ugly
Duckling theorem.
3.2.1
Partition functions
This analogy leads to the following definitions of partition functions and free energies:
Zβ(x) :=
X
p:U(p)∋x
e−β|p|,
Fβ(x) := −β−1 ln Zβ(x),
(4)
and for two objects x, y ∈Ω,
Zβ(x, y) :=
X
p:U(p)∋(x,y)
e−β|p|,
Fβ(x, y) := −β−1 ln Zβ(x, y).
(5)
Here Zβ(x) and Zβ(x, y) represent partition functions restricted to subsets that contain x or
(x, y), respectively.
3.2.2
Thermodynamic work
We next consider a process that interpolates between the constraint “x is included” and “both x
and y are included.” Introducing a coupling parameter λ ∈[0, 1], we define the energy of each
program as
Eλ(p) = |p| + Jλ
1 −sy(p)
,
(6)
where sy(p) = 1 if U(p) ∋y and sy(p) = 0 otherwise. The constant J specifies the strength of
the constraint. Taking J →∞enforces a hard constraint at λ = 1, so that only programs with
U(p) ∋(x, y) contribute.
The corresponding generalized partition function and free energy are
ˆZβ(λ) :=
X
p:U(p)∋x
e−βEλ(p),
ˆFβ(λ) := −β−1 ln ˆZβ(λ).
(7)
In the limits we obtain
ˆFβ(0) = Fβ(x),
ˆFβ(1) = Fβ(x, y).
(8)
For this constrained evolution, the generalized force is
Φλ :=
D∂Eλ(p)
∂λ
E
λ = J⟨1 −sy(p)⟩λ,
(9)
and the reversible isothermal work required for the transformation is
W (β)
rev (x →x, y) =
Z 1
0
Φλ dλ = Fβ(x, y) −Fβ(x).
(10)
Thus, similarity is identified with the free energy difference needed to add y to x.
This process is illustrated schematically in Fig. 3.
As λ increases, the ensemble shifts
smoothly from programs describing x alone to programs describing both x and y. The reversible
work quantifies this shift, and we adopt it as our definition of similarity. In the following sections
we analyze this quantity in different temperature regimes, showing how classical notions such as
Information Distance and the Ugly Duckling theorem emerge as limiting cases.
5
Figure 3: llustration of the λ-coupling process. The horizontal axis represents program length,
the depth axis the coupling parameter λ, and the vertical peaks indicate weighted program
counts. At λ = 0 (red), programs producing x dominate with minimal length K(x). As λ
increases (e.g. λ = 0.4, green), the ensemble gradually shifts. At λ = 1 (purple), only programs
producing both x and y remain, with minimal length K(x, y). The number of shortest programs
themselves does not change with λ; only their normalized weight in the ensemble is rebalanced.
4
Three temperature regimes
The three regimes are summarized schematically in Fig. 4. In the following subsections, we
analyze them in detail: the low–temperature limit corresponding to the Information Distance,
the critical point β = ln 2 yielding the Solomonoff universal prior, and the high–temperature
limit where residual entropy dominates.
4.1
Low-temperature limit (β →∞): Occam’s razor phase
In the limit β →∞, the Boltzmann weight e−β|p| suppresses all but the shortest program. The
partition functions then reduce to
Zβ(x) ≃e−βK(x),
Zβ(x, y) ≃e−βK(x,y).
(11)
Consequently, the free energies become
Fβ(x) ≃K(x),
Fβ(x, y) ≃K(x, y),
(12)
up to O(β−1) corrections. The reversible work is then
W (β→∞)
rev
(x →x, y) ≃K(x, y) −K(x) = K(y|x).
(13)
Thus in the low-temperature limit, reversible work coincides with the conditional Kolmogorov
complexity and reproduces the Information Distance
D(x, y) = max{K(x|y), K(y|x)}.
This regime reflects the dominance of the shortest description, embodying Occam’s razor in
algorithmic form, and will be referred to as the “Occam’s razor phase.”
6
Figure 4: Schematic phase diagram of similarity as thermodynamic work. At low temperature,
the reversible work reduces to the conditional complexity K(y|x), reproducing the Information
Distance (Occam’s razor phase). At the critical point β = ln 2, it coincides with the Solomonoff
universal prior. At high temperature, the divergent residual–entropy term dominates, recovering
the essence of Watanabe’s Ugly Duckling theorem. Finite conditional complexity contributions
K(y|x) remain present but become negligible compared to the divergence.
4.2
Critical point (β = ln 2): Bayesian bridge
At β = ln 2, the partition function Zln 2(x) coincides with the Solomonoff universal prior M(x).
This prior is defined as
M(x) =
X
p:U(p)∋x
2−|p|,
(14)
a weighted sum over all programs that output x, each with weight 2−|p|. Since U is prefix-free,
Kraft’s inequality ensures
X
p
2−|p| ≤1,
so M(x) is a valid semimeasure and can serve as a universal Bayesian prior over all computable
models.
In this case, the reversible work becomes
W (ln 2)
rev
(x →x, y) = −log2
M(x, y)
M(x) .
(15)
Therefore our similarity measure is interpreted as the free energy cost of adding y given x under
the universal prior, which information-theoretically corresponds to the amount of information
required for a universal Bayesian update. For this reason, we refer to the critical regime as the
“Bayesian bridge,” connecting algorithmic and probabilistic perspectives.
4.3
High-temperature limit: Ugly Duckling phase
In the opposite limit β →0, all programs contribute almost equally. Let Nℓ(x) be the number
of programs of length ℓwith U(p) ∋x, and Nℓ(x, y) the number producing (x, y). Then
Zβ(x) ≃
X
ℓ
Nℓ(x),
Zβ(x, y) ≃
X
ℓ
Nℓ(x, y).
Since Nℓ(x) ∼2ℓgrows exponentially with ℓ, these sums diverge for β < ln 2, with the critical
point β = ln 2 marking the convergence boundary.
7
To regularize the divergence, we introduce an excess cutoff : for each x, only programs up to
K(x) + Λ are included. The corresponding reversible work is
W (Λ,β)
rev
(x→x, y) := −1
β log
Z(≤K(x,y)+Λ)
β
(x, y)
Z(≤K(x)+Λ)
β
(x)
.
Under the assumptions of Appendix A, the high-temperature expansion yields
W (Λ,β)
rev
(x→x, y) = K(y|x) −β−1 ln em(Λ)
x,y
em(Λ)
x
+ o(1),
β →0,
where the effective degeneracy
em(Λ)
o
= mo +
Λ
X
∆≥1
m(∆)
o
2−α∆,
or
emo = lim
Λ→∞em(Λ)
o
denotes the effective multiplicity of admissible cores beyond trivial wrappers, with m(∆)
o
denoting
the number of alternative cores of length K(o) + ∆(see Appendix A for a discussion of
convergence). Thus ln emo plays the role of a residual entropy.
The reversible work therefore separates into two contributions:
• A finite term K(y|x), reflecting the depth of description.
• A divergent term −β−1 ln
em(Λ)
x,y / em(Λ)
x
, reflecting the diversity of admissible cores.
In the strict β →0 limit, the divergent contribution overwhelms all finite terms:
W (Λ,β)
rev
(x→x, y) ∼−β−1 ln
em(Λ)
x,y / em(Λ)
x
,
β →0,
so every finite difference such as K(y|x) is suppressed. This universal divergence is the algorithmic
analogue of Watanabe’s Ugly Duckling theorem: at infinite temperature, all objects become
indiscriminable. Nevertheless, at small but finite β the residual entropy ratios em(Λ)
x,y / em(Λ)
x
still
differentiate objects, leaving traces of structural information. In this sense, UDT appears as the
β = 0 endpoint, while the nearby high-temperature regime reveals a refined picture of diversity.
Finally, note that if descriptional complexity is ignored—that is, if all implementations are
counted equally without weighting by length—our construction reduces exactly to Watanabe’s
original setting where all predicates are equally weighted. Retaining complexity refines this
picture: the Ugly Duckling phase is still recovered at β = 0, but finite β corrections now encode
object-dependent diversity through effective degeneracy ratios.
5
Discussion and Conclusion
We have presented a statistical–mechanical framework that unifies two seemingly contradictory
notions of similarity: the depth emphasized by Kolmogorov complexity and the Information
Distance, and the diversity emphasized by Watanabe’s Ugly Duckling theorem. By interpret-
ing program length as energy and similarity as thermodynamic work, we showed that these
two aspects are connected through a single temperature parameter. In the low–temperature
limit, similarity reduces to conditional Kolmogorov complexity, while at high temperature it
is dominated by effective degeneracy, reproducing the essence of the Ugly Duckling theorem.
At the critical point β = ln 2, the Solomonoff universal prior emerges as the universal Bayesian
prior. In particular, our analysis clarifies the relation to Watanabe’s theorem. If descriptional
complexity is ignored altogether—that is, if every admissible implementation is counted equally
8
Figure 5: Instruction–stack analogy for programs. The lowest block represents the shortest
possible instruction implementing x or (x, y), with thickness equal to K(x) or K(x, y). Additional
layers correspond to full instruction variants obtained by adding redundant wrappers on top of
the same core. Such wrappers proliferate uniformly across all objects and therefore cancel out in
the thermodynamic formulation, leaving only the multiplicity of distinct cores as the residual
entropic contribution.
without weighting by program length—our construction reduces exactly to Watanabe’s original
setting where all predicates are equally weighted. In contrast, keeping length weighting but
taking the thermodynamic limit β →0 produces a universal divergence that likewise erases all
finite differences, again recovering the indiscriminability of the Ugly Duckling theorem. The two
perspectives are not identical: the first modifies the weighting scheme, while the second arises
as a limiting case. Yet both capture the same essential symmetry, and our framework embeds
them within a single parameterized picture. Retaining complexity, however, refines this picture:
the strict β = 0 endpoint reproduces indiscriminability, while at small finite β residual entropy
ratios ln( emx,y/ emx) survive and provide additional structural information. Thus our construction
not only recovers Watanabe’s result in the complexity-free case and at β = 0, but also extends
both perspectives into a more nuanced setting where depth and diversity jointly contribute.
A central novelty of this work lies in clarifying that *not all program multiplicities are
equal*. Wrappers, which correspond to trivial syntactic redundancy, proliferate exponentially
but uniformly across all objects. As such, they contribute no distinguishing information. In
contrast, the multiplicity of distinct *cores*—fundamentally different computational routes to
the same object—remains object–dependent. Crucially, distinct cores are not compressible into a
single shortest form: each represents an essentially different constructive pathway. By separating
wrapper redundancy from core diversity through the notion of regular UTMs, we defined the
effective degeneracy emo, which aggregates only the meaningful multiplicities. Its logarithm
plays the role of a residual entropy, complementing description length. This distinction shows
that similarity naturally decomposes into two contributions: the depth of minimal description,
K(y|x), and the diversity of admissible cores, ln( emx,y/ emx).
The analogy illustrated in Fig. 5 highlights that similarity goes beyond what is captured by
minimal description length alone. Traditional complexity–based approaches compare only the
thickness of the thinnest sheet. Our framework instead emphasizes that tasks are not determined
solely by one minimal instruction, but also by the variety of distinct instructions that can realize
them. This residual diversity—captured by effective degeneracy—aligns more closely with the
9
everyday intuition that two tasks can feel similar not only when their minimal descriptions are
close, but also when they admit a wide range of interchangeable realizations.
To our knowledge, this is the first framework that systematically isolates core diversity (as
opposed to trivial wrapper redundancy) and formalizes its contribution to similarity. Algorithmic
information theory has traditionally focused almost exclusively on description length and shortest
programs, with little attention to multiplicities. The present refinement therefore extends AIT
by highlighting core diversity as a genuine and irreducible contributor to similarity. In particular,
the high–temperature regime is governed not by minimal descriptions but by residual entropy
differences, with effective degeneracy ratios determining whether objects appear distinguishable
or indiscriminable.
It should be stressed, however, that effective degeneracy emo cannot be computed exactly.
This limitation follows directly from the uncomputability of Kolmogorov complexity: if K(x)
itself is not effectively calculable, then enumerating all cores and their multiplicities is infeasi-
ble. Moreover, degeneracy depends on the chosen UTM, and the number of programs grows
exponentially with length, further constraining explicit evaluation. Nevertheless, just as residual
entropy in statistical mechanics captures structural properties of ground states that cannot be
enumerated microscopically [37], effective degeneracy provides a principled way to represent
the contribution of diversity to similarity. Approximate estimates may be obtained in practice
through surrogate methods such as compression–based approximations, minimum description
length principles, or bounded searches within finite cutoffs. In this sense, the limitations of
effective degeneracy mirror those of Kolmogorov complexity itself: both are uncomputable in the
strict sense and machine–dependent, yet both provide robust and widely accepted theoretical
foundations.
In addition, effective degeneracy is genuinely dependent on the choice of regular UTM. Unlike
Kolmogorov complexity, where UTM dependence is confined to an additive constant, emo can
vary structurally across machines, since different UTMs expose different families of admissible
cores. This dependence should not be viewed as a drawback: rather, it reflects the natural fact
that similarity is defined relative to a measuring device—in this case, the universal machine.
Once trivial wrappers are factored out, the contribution of core diversity is stable within any
fixed regular UTM, providing a consistent and interpretable measure of diversity.
Beyond its theoretical contribution, our framework suggests possible implications for model
selection and learning theory, where weighting schemes and residual entropy differences naturally
arise. The degeneracy contribution can be interpreted as a measure of ensemble diversity,
complementing the description–length term that implements Occam’s razor. This perspective
offers a principled way to balance depth and diversity in practical learning scenarios.
Empirical similarity measures also fall into place within this perspective. For example, the
Normalized Google Distance (NGD) [40] and the Normalized Compression Distance (NCD) [41]
have traditionally been regarded as practical approximations to the Information Distance. Within
our framework, however, they acquire a natural reinterpretation. NGD relies on frequency counts
f(x) and f(x, y) obtained from web search engines. These counts effectively represent the number
of distinct contexts in which x and y appear, and can therefore be viewed as a practical proxy
for effective degeneracy. In thermodynamic terms, NGD corresponds to a diversity–dominated,
finite– or high–temperature regime. By contrast, NCD, defined via general–purpose compressors,
is more naturally viewed as a low–temperature proxy: shortest descriptions dominate, and
compression length serves as a practical surrogate for Kolmogorov complexity. Thus NGD
and NCD, while both approximations to the Information Distance, can be reinterpreted as
complementary endpoints of the same thermodynamic axis: NGD reflecting diversity at high
temperature and NCD reflecting depth at low temperature. The practical acceptance of NGD
as a measure of similarity suggests that capturing similarity through effective degeneracy ratios
is indeed meaningful, providing empirical support for the theoretical framework developed here.
Beyond these technical aspects, the thermodynamic reformulation itself has several conceptual
10
advantages:
• It places two apparently contradictory perspectives—depth and diversity—on a single
temperature axis, making their relation transparent.
• It expresses similarity as a free–energy difference, a universal physical quantity, thereby
providing a common ground for otherwise disparate notions.
• It enables the use of tools from statistical mechanics, such as Jarzynski–type relations and
phase–transition analogies, to study similarity.
• It offers an intuitive picture, where low temperature corresponds to Occam’s razor (minimal
descriptions) and high temperature to the indiscriminability of the Ugly Duckling theorem.
• The statistical-mechanical formulation also suggests natural extensions beyond the classical
setting, potentially including quantum ensembles.
This combination of mathematical rigor and physical intuition may help broaden the conceptual
reach of algorithmic information theory and its applications.
Finally, our approach is conceptually related to the line of work known as “algorithmic ther-
modynamics,” where program length is treated as energy and partition functions over programs
are analyzed. That literature has primarily focused on the thermodynamic interpretation of
Kolmogorov complexity and universal distributions, such as using free energy to describe com-
pression limits. Our contribution differs in that we explicitly apply the thermodynamic formalism
to the problem of similarity, showing how two seemingly opposed notions—the Information
Distance (depth) and the Ugly Duckling theorem (diversity)—emerge as limiting phases within a
single statistical–mechanical framework. In particular, the identification of effective degeneracy,
purified of wrapper redundancy, as the dominant driver of similarity in the high–temperature
limit appears to be new. This highlights how the algorithmic–thermodynamic perspective can
reveal structural insights that are not visible in either combinatorial or information–theoretic
treatments alone.
Metaphorically, the tension between depth and diversity recalls a contrast in art: classical
traditions aim for faithful compression of form, while impressionism values the richness of many
perspectives. This analogy, albeit informal, conveys the intuitive balance between compactness
and multiplicity that our framework unifies.
Acknowledgments
I am grateful to my colleagues at AIST for inspiring the initial motivation for this work and for
their various forms of support. I also acknowledge the use of OpenAI’s ChatGPT-5 in checking
logical consistency and improving the English presentation of the manuscript.
Appendix: Rigorous results and non-equilibrium extensions
This appendix provides rigorous justification and computational extensions of the results pre-
sented in the main text. Appendix A gives a high–temperature expansion under a regular
UTM and the excess cutoff scheme (also referred to as a surplus cutoff), leading to a precise
formulation of the reversible work. Appendix B outlines non-equilibrium extensions based on
the Jarzynski equality (see e.g. [29–31]), including both theoretical formulation and a sketch of
numerical implementation.
11
A
High–temperature limit under regular UTM and excess cutoff
A.1
Preliminaries
For an object o, define
Nℓ(o) :=
{ p : |p| = ℓ, U(p) ∋o }
,
the number of programs of length ℓgenerating o. The ground length K(o) is the minimal ℓwith
Nℓ(o) > 0.
A.2
Regular UTMs and wrapper construction
Definition 1 (Regular UTM). A regular Universal Turing Machine (UTM) is a prefix-free
machine U such that every program p admits a unique decomposition p = w∥q into
• a wrapper w drawn from a syntactically defined family W, and
• a core code q executed by a fixed reference machine U0, i.e. U(w∥q) = U0(q).
The wrapper set acts uniformly and independently of the output. Let ad := |{w ∈W : |w| = d}|
denote the number of wrappers of length d.
Explicit construction.
Fix a marker M of length h ≥1. Define
Wd := { w = sM : |w| = d, s contains no occurrence of M }.
That is, a wrapper is any binary string that ends with M and whose prefix s is M-free. This
guarantees a unique boundary between w and the core q: in any concatenation p = w∥q, the
first (leftmost) occurrence of M in p is precisely the end of w. After that boundary, the core q
may contain M arbitrarily; parsing remains unambiguous because no occurrence of M appears
before the boundary inside w.
The set { s : |s| = d −h, s M-free } is a regular language recognized by a finite automaton,
hence its cardinality grows exponentially. Therefore there exist constants c, α > 0 such that
ad := |Wd| ≍c 2αd
(d →∞).
We also include the empty wrapper w = ϵ so that shortest programs (d = 0) are available.
A.3
Effective degeneracy
Let mo denote the number of shortest cores of length K(o) on U0. In addition, there may
exist alternative cores of length K(o) + ∆for integers ∆≥1. Let m(∆)
o
be their multiplicity.
Such alternatives include, in particular, the case of zero-length wrappers (pure cores of length
K(o) + ∆). At total length K(o) + d the decomposition gives
NK(o)+d(o) = mo ad +
d−1
X
∆=1
m(∆)
o
ad−∆+ Ed(o),
(16)
where Ed(o) collects boundary effects. These include, for example:
• cases where an alternative core of length K(o) + ∆cannot contribute because d < ∆
(negative wrapper length),
• small-d deviations where the ratio ad−∆/ad has not yet converged to its exponential
asymptotic 2−α∆,
12
• or finitely many exceptional assignments in the prefix-free code of U0 that break the
uniform pattern at very short lengths.
All such contributions occur only at finitely many d or remain uniformly bounded, hence they
satisfy Ed(o) = o(ad) as d →∞.
For wrapper families with exponential growth ad ≍c 2αd (0 < α < 1), the ratio
ad−∆
ad
→2−α∆
(d →∞).
Hence the contribution of an alternative core is asymptotically a fixed constant fraction 2−α∆of
the main family.
Definition 2 (Effective degeneracy). The effective degeneracy of object o is
em(d)
o
:= mo +
d−1
X
∆≥1
m(∆)
o
2−α∆.
We define
emo := lim
d→∞em(d)
o ,
which is finite whenever the alternative families are at most subexponential (as ensured by
prefix-freeness of U0 restricted to output o).
Although emo is formally defined by summing over all alternative cores, in the high–temperature
analysis with an excess cutoff only finitely many near–shortest alternatives contribute appreciably.
Longer alternatives are both exponentially suppressed (by the factor 2−α∆) and excluded by the
cutoff window, so emo is effectively determined by a finite neighborhood around the ground length.
Thus ln emo may be regarded as a residual entropy associated with o, paralleling ground-state
degeneracy in statistical mechanics.
A.4
Asymptotics of Program Counts
Proposition 1. For a regular UTM with exponential wrapper growth ad ≍c 2αd, the program
count admits the representation
NK(o)+d(o) = emo ad (1 + ro(d)),
ro(d) →0 (d →∞).
Sketch. Divide (16) by ad:
NK(o)+d(o)
ad
= mo +
X
∆≥1
m(∆)
o
ad−∆
ad
+ Ed(o)
ad
.
For each fixed ∆, ad−∆
ad
→2−α∆. Thus the finite sum of nearby alternatives converges to
P
∆m(∆)
o
2−α∆. If alternatives exist for infinitely many ∆, sparsity (subexponential growth)
ensures the tail P
∆>D m(∆)
o
ad−∆/ad vanishes uniformly as D →∞. Hence
NK(o)+d(o)
ad
=
emo + o(1),
which is equivalent to NK(o)+d(o) = emo ad(1 + ro(d)) with ro(d) →0.
13
A.5
Partition Functions with Excess Cutoff
For an excess window Λ ≥0, define
Z(≤K(x)+Λ)
β
(x) = e−βK(x)
Λ
X
d=0
NK(x)+d(x) e−βd,
(17)
Z(≤K(x,y)+Λ)
β
(x, y) = e−βK(x,y)
Λ
X
d=0
NK(x,y)+d(x, y) e−βd.
(18)
By Proposition 1,
Z(≤K(x)+Λ)
β
(x) = e−βK(x) em(Λ)
x
Λ
X
d=0
ad(1 + rx(d))e−βd,
(19)
Z(≤K(x,y)+Λ)
β
(x, y) = e−βK(x,y) em(Λ)
x,y
Λ
X
d=0
ad(1 + rx,y(d))e−βd.
(20)
Hence the ratio is
Z(≤K(x,y)+Λ)
β
(x, y)
Z(≤K(x)+Λ)
β
(x)
= e−β(K(x,y)−K(x)) em(Λ)
x,y
em(Λ)
x
PΛ
d=0 ad(1 + rx,y(d))e−βd
PΛ
d=0 ad(1 + rx(d))e−βd .
(21)
A.6
High-Temperature Expansion
For fixed Λ and β →0,
Λ
X
d=0
ad(1 + ro(d))e−βd =
Λ
X
d=0
ad
[1 + O(ε(Λ)) + O(βΛ)],
where ε(Λ) := max{|εx(Λ)|, |εx,y(Λ)|} and ε(Λ) →0 by Proposition 1. Thus (21) becomes
Z(≤K(x,y)+Λ)
β
(x, y)
Z(≤K(x)+Λ)
β
(x)
= e−β(K(x,y)−K(x)) · em(Λ)
x,y
em(Λ)
x
[1 + O(ε(Λ)) + O(βΛ)].
Taking Λ →∞under the scaling βΛ →0, ε(Λ) →0, the error vanishes.
A.7
Reversible Work at High Temperature
Finally,
W (Λ,β)
rev
(x→x, y) = −β−1 log
Z(≤K(x,y)+Λ)
β
(x, y)
Z(≤K(x)+Λ)
β
(x)
(22)
= [K(x, y) −K(x)] −β−1 ln em(Λ)
x,y
em(Λ)
x
+ o(1).
(23)
Thus
W (Λ,β)
rev
(x→x, y) = K(y|x) −β−1 ln em(Λ)
x,y
em(Λ)
x
+ o(1),
with the interpretation that K(y|x) is the finite depth contribution, while ln
em(Λ)
x,y / em(Λ)
x
is the
divergent diversity or residual entropy contribution.
14
A.8
Cutoff Schemes: Absolute vs Excess
An alternative cutoff restricts the absolute program length, e.g. |p| ≤L. While natural and
simple, this treats different objects asymmetrically: the relative position of L to the ground
length K(x) may vary greatly, biasing comparisons across objects. By contrast, the excess cutoff
normalizes each object by its own K(x) and examines only the surplus window Λ. This places
all objects on equal footing and allows the degeneracy ratio to emerge cleanly as the dominant
contribution in the high–temperature limit.
A.9
Supplementary Note: Growth of wrapper families
For completeness we justify the claim that the wrapper family defined by a fixed marker M has
exponential growth.
Proposition 2. Fix a marker M of length h ≥1. Let An(M) denote the set of binary strings
of length n that do not contain M as a substring. Then |An(M)| = C µn + o(µn) as n →∞,
where µ ∈[1, 2) is the spectral radius of the transition matrix of the M-avoidance automaton,
and C > 0 depends on M.
Sketch. The set An(M) is a regular language recognized by a finite automaton constructed from
the prefix function of M (e.g. the Aho-Corasick automaton). Let A be the adjacency matrix of
the automaton restricted to safe states. Then
|An(M)| = e⊤
0 An1
where e0 is the unit vector corresponding to the initial automaton state, and 1 is the all-ones
vector selecting all accepting states. In other words, e⊤
0 An1 counts the number of n-step paths
of the automaton that start in the initial state and end in any safe state, which is exactly
the number of length-n binary strings that avoid M. By Perron-Frobenius, A has a dominant
eigenvalue µ = ρ(A) with positive eigenvectors, so |An(M)| = C µn + o(µn) with C > 0.
As a consequence, the wrapper counts satisfy
ad = |Wd| = |Ad−h(M)| = c 2αd [1 + o(1)],
α = log2 µ ∈[0, 1).
Thus there exist constants c, α > 0 such that ad ≍c 2αd, as used in the main text.
B
Non-equilibrium extension via Jarzynski equality
B.1
Theoretical sketch
Consider a system in contact with a thermal bath at inverse temperature β, with distribution
πβ(p | λ) = e−βEλ(p)
ˆZβ(λ)
,
where Eλ(p) depends on a control parameter λ ∈[0, 1]. In the reversible case, slow variation of
λ gives work equal to free energy difference. In the irreversible case, extra work appears, but
Jarzynski’s equality ensures
e−βW = e−β∆F .
With the excess cutoff, admissible programs are restricted at each λ, leading to the partition
functions
ˆZ(Λ)
β
(λ) =
X
p∈PΛ(λ)
e−βEλ(p).
15
where PΛ(λ) denotes the set of admissible programs under the excess cutoff Λ at control parameter
λ. The Jarzynski equality then reads
e−βW =
ˆZ(Λ)
β
(1)
ˆZ(Λ)
β
(0)
= exp
−β ∆F (Λ)
β
,
which recovers the reversible work defined in the main text.
B.2
Numerical protocol (sketch)
A numerical protocol for estimating similarity at finite temperature is as follows:
1. Enumerate all programs up to surplus cutoff Λ that output x; identify those that also
output y.
2. Discretize λ into a sequence 0 = λ0 < · · · < λM = 1.
3. At each λk, sample programs from πβ(p | λk) ∝e−βEλk(p).
4. For each sampled trajectory, accumulate the work W.
5. Estimate exp(−β∆F) via the average ⟨e−βW ⟩.
This protocol provides a practical scheme for estimating the finite-temperature similarity
measure defined in this work.
Remarks on feasibility and interpretation.
It should be stressed that the above protocol
does not require the exact values of K(x) or K(x, y), which are uncomputable in general. In
practice, one works with finite surrogate sets of programs, obtained for example from compression-
based approximations, minimum description length principles, or bounded searches up to a
cutoff length. Within such restricted ensembles, the Jarzynski procedure remains valid and
yields an effective estimate of the free energy difference. Thus the non-equilibrium extension
should be understood not as a way to compute the exact Kolmogorov-based similarity, but as a
principled framework for defining and estimating approximate similarity measures.
An additional conceptual benefit is that Jarzynski’s framework makes explicit the gap between
reversible and irreversible processes. While our similarity measure is formally defined by the
free energy difference between λ = 0 and λ = 1, practical approximations, coarse sampling, or
limited exploration of program space will generally produce an average work ⟨W⟩≥∆F. The
excess ⟨W⟩−∆F can be interpreted as an irreversible cost of similarity, quantifying how much
apparent dissimilarity is introduced by computational or sampling limitations. This provides a
natural measure of the reliability of approximate similarity estimates.
References
[1] S. Watanabe. Knowing and Guessing: A Quantitative Study of Inference and Information.
Quantitative Study of Inference and Information. Wiley, 1969. ISBN 9780471921301. URL
https://books.google.co.jp/books?id=gIxQAAAAMAAJ.
[2] S. Watanabe.
Pattern Recognition:
Human and Mechanical.
Wiley, 1985.
ISBN
9780471808152. URL https://books.google.co.jp/books?id=ZmdQAAAAMAAJ.
[3] Satosi Watanabe. Epistemological relativity. Annals of the Japan Association for Philosophy
of Science, 7(1):1–14, 1986. doi: 10.4288/jafpos1956.7.1.
16
[4] C.H. Bennett, P. Gacs, Ming Li, P.M.B. Vitanyi, and W.H. Zurek. Information distance.
IEEE Transactions on Information Theory, 44(4):1407–1423, 1998. doi: 10.1109/18.681318.
[5] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications.
Springer, 2008.
[6] A. N. Kolmogorov. Three approaches to the definition of the concept “quantity of informa-
tion”. Probl. Peredachi Inf, 1(1):3–11, 1965.
[7] Leo Szilard. On the decrease of entropy in a thermodynamic system by the intervention of
intelligent beings. Zeitschrift für Physik, 53:840–856, 1929. doi: 10.1007/BF01341281.
[8] Leon Brillouin. Science and Information Theory. Academic Press, New York, 1956.
[9] E. T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620–630,
1957. doi: 10.1103/PhysRev.106.620.
[10] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of
Research and Development, 5(3):183–191, 1961. doi: 10.1147/rd.53.0183.
[11] Charles H. Bennett. The Thermodynamics of Computation—A Review. International
Journal of Theoretical Physics, 21(12):905–940, 1982. doi: 10.1007/BF02084158.
[12] W. H. Zurek.
Thermodynamic cost of computation, algorithmic complexity and the
information metric.
Nature, 341(6238):119–124, 1989.
doi: 10.1038/341119a0.
URL
https://doi.org/10.1038/341119a0.
[13] Harvey S. Leff and Andrew F. Rex, editors. Maxwell’s Demon 2: Entropy, Classical and
Quantum Information, Computing. Institute of Physics Publishing, Bristol and Philadelphia,
2002. ISBN 978-0750307596.
[14] Takahiro Sagawa and Masahito Ueda. Nonequilibrium thermodynamics of feedback control.
Physical Review E, 85(2):021104, 2012. doi: 10.1103/PhysRevE.85.021104.
[15] Takahiro Sagawa. Thermodynamic and logical reversibilities revisited. Journal of Statistical
Mechanics: Theory and Experiment, (3):P03025, 2014. doi: 10.1088/1742-5468/2014/03/
P03025.
[16] Juan M. R. Parrondo, Jordan M. Horowitz, and Takahiro Sagawa.
Thermodynamics
of information.
Nature Physics, 11(2):131–139, 2015.
doi: 10.1038/nphys3230.
URL
https://doi.org/10.1038/nphys3230.
[17] Sosuke Ito and Takahiro Sagawa. Maxwell’s demon in biochemical signal transduction with
feedback loop. Nature Communications, 6:7498, 2015. doi: 10.1038/ncomms8498.
[18] Sosuke Ito. Stochastic Thermodynamic Interpretation of Information Geometry. Physical
Review Letters, 121(3):030605, 2018. doi: 10.1103/PhysRevLett.121.030605.
[19] Leonid A. Levin. Laws of Information Conservation (Non-growth) and Aspects of the
Foundation of Probability Theory. In Problems in the Transmission of Information, pages
206–210. Springer, 1974. Early work related to algorithmic thermodynamics.
[20] Gregory J. Chaitin. A theory of program size formally identical to information theory.
Journal of the ACM, 22(3):329–340, 1975.
doi: 10.1145/321892.321894.
URL http:
//www.cs.auckland.ac.nz/~chaitin/acm75.pdf.
17
[21] Gregory J. Chaitin. Algorithmic Information Theory, volume 1 of Cambridge Tracts in
Theoretical Computer Science. Cambridge University Press, Cambridge, UK, 1987. ISBN
978-0-521-34676-5.
[22] Ludwig Staiger. A tight upper bound on kolmogorov complexity and uniformly optimal
prediction.
Theory of Computing Systems, 31(3):215–229, June 1998.
doi: 10.1007/
s002240000086.
URL https://link.springer.com/article/10.1007/s002240000086.
Special issue on Computability of the Physical.
[23] Cristian S. Calude, Ludwig Staiger, and Sebastiaan A. Terwijn. On partial randomness.
Technical Report 239, Centre for Discrete Mathematics and Theoretical Computer Sci-
ence, University of Auckland, December 2004. URL https://www.cs.auckland.ac.nz/
research/groups/CDMTCS/researchreports/239cris.pdf. December 6, 2004.
[24] Kohtaro Tadaki. A statistical mechanical interpretation of algorithmic information theory. In
Proceedings of the 4th International Conference on Unconventional Models of Computation
(UMC’02), volume 2509 of Lecture Notes in Computer Science, pages 242–251. Springer,
2002. doi: 10.1007/3-540-36377-7_21.
[25] Kohtaro Tadaki. A statistical mechanical interpretation of algorithmic information theory.
CoRR, abs/0801.4194, 2008. URL http://arxiv.org/abs/0801.4194.
[26] John C. Baez and Mike Stay. Algorithmic thermodynamics. Mathematical Structures in
Computer Science, 22(5):771–787, 2012. doi: 10.1017/S0960129511000520. Special Issue:
Computability of the Physical.
[27] Kohtaro Tadaki. A Statistical Mechanical Interpretation of Algorithmic Information Theory,
volume 36 of SpringerBriefs in Mathematical Physics. Springer Singapore, 2019. ISBN
978-981-15-0739-7. doi: 10.1007/978-981-15-0739-7. URL https://doi.org/10.1007/
978-981-15-0739-7.
[28] Aram Ebtekar and Marcus Hutter. Foundations of algorithmic thermodynamics. Phys. Rev.
E, 111:014118, Jan 2025. doi: 10.1103/PhysRevE.111.014118. URL https://link.aps.
org/doi/10.1103/PhysRevE.111.014118.
[29] C. Jarzynski. Nonequilibrium Equality for Free Energy Differences. Physical Review Letters,
78(14):2690–2693, 1997. doi: 10.1103/PhysRevLett.78.2690.
[30] Gavin E. Crooks. Entropy production fluctuation theorem and the nonequilibrium work
relation for free energy differences. Physical Review E, 60(3):2721–2726, 1999. doi: 10.1103/
PhysRevE.60.2721.
[31] Christopher Jarzynski. Equalities and Inequalities: Irreversibility and the Second Law of
Thermodynamics at the Nanoscale. Annual Review of Condensed Matter Physics, 2:329–351,
2011. doi: 10.1146/annurev-conmatphys-062910-140506.
[32] Nelson Goodman. Fact, Fiction, and Forecast. Bobbs-Merrill, Indianapolis, 3rd edition,
1972.
[33] Peter H. A. Sneath and Robert R. Sokal. Numerical Taxonomy: The Principles and Practice
of Numerical Classification. W. H. Freeman, San Francisco, 1973. ISBN 0716706970.
[34] Ray J. Solomonoff. A Formal Theory of Inductive Inference. Part I. Information and
Control, 7(1):1–22, 1964. doi: 10.1016/S0019-9958(64)90223-2.
[35] Ray J. Solomonoff. A Formal Theory of Inductive Inference. Part II. Information and
Control, 7(2):224–254, 1964. doi: 10.1016/S0019-9958(64)90131-7.
18
[36] Marcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic
Probability. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2005.
ISBN 978-3-540-22139-5. doi: 10.1007/b138233.
[37] R. K. Pathria and Paul D. Beale. Statistical Mechanics. Academic Press, 3rd edition, 2011.
ISBN 978-0123821881.
[38] Kerson Huang. Statistical Mechanics. Wiley, 2nd edition, 1987. ISBN 978-0471815181.
[39] Herbert B. Callen. Thermodynamics and an Introduction to Thermostatistics. Wiley, 2nd
edition, 1985. ISBN 978-0471862567.
[40] Rudi L. Cilibrasi and Paul M.B. Vitanyi. The Google Similarity Distance. IEEE Transactions
on Knowledge and Data Engineering, 19(3):370–383, 2007. doi: 10.1109/TKDE.2007.48.
[41] Ming Li, Xin Chen, Xin Li, Bin Ma, and P.M.B. Vitanyi. The similarity metric. IEEE
Transactions on Information Theory, 50(12):3250–3264, 2004. doi: 10.1109/TIT.2004.
838101.
19
|
Similarity as Thermodynamic Work: Between Depth and Diversity - from Information Distance to Ugly Duckling Kentaro IMAFUKU National (AIST) Abstract Defining similarity is a fundamental challenge in information science. Watanabe's Ugly Duckling Theorem highlights diversity, while algorithmic information theory emphasizes depth through Information Distance. We propose a statistical-mechanical framework that treats program length as energy, with a temperature parameter unifying these two aspects: in the low-temperature limit, similarity approaches Information Distance; in the high-temperature limit, it recovers the indiscriminability of the Ugly Duckling theorem; and at the critical point, it coincides with the Solomonoff prior. We refine the statistical-mechanical framework by introducing regular universal machines and effective degeneracy ratios, allowing us to separate redundant from core diversity. This refinement yields new tools for analyzing similarity and opens perspectives for information distance, model selection, and non-equilibrium extensions. Keywords: similarity, thermodynamic work, Information Distance, Solomonoff universal prior, Ugly Duckling theorem, Kolmogorov complexity, statistical mechanics 1 Introduction Similarity plays a fundamental role in science. Many advances in physics, information theory, and data science have been achieved by identifying similar structures and constructing unified frameworks that explain them. Despite its importance, however, defining similarity in a principled way remains a long-standing challenge. Two contrasting approaches illustrate this difficulty. Watanabe's Ugly Duckling theorem[1-3] shows that if all predicates are treated equally, any pair of objects becomes equally similar, emphasizing the dependence on arbitrary weighting. In contrast, algorithmic information theory introduces the Information Distance[4, 5], defined through Kolmogorov complexity [6], which quantifies similarity by the depth of the shortest description. These perspectives highlight seemingly opposing aspects-diversity versus depth-and make clear the fundamental difficulty of formalizing similarity. Recent developments in information thermodynamics (e.g., [7-18]) and in algorithmic thermodynamics (e.g., [19-28]) provide a natural bridge for importing statistical-mechanical concepts into information theory. Building on this line of work, we treat program length as energy, introduce partition functions and free energies, and define similarity as the thermodynamic work required to couple two objects. This yields a unified framework where the balance between description length and multiplicity plays the role of energy-entropy tradeoff. 1 16 Sep 2025 Figure 1: Similarity as a thermodynamic balance. Program length plays the role of energy (depth), while effective degeneracy provides residual entropy (diversity). The tradeoff is governed by the inverse temperature β, interpolating between three regimes: Information Distance at low temperature, the Solomonoff prior at β = ln 2, and the Ugly Duckling phase at high temperature. Figure 1 illustrates this perspective. The inverse temperature β tunes the balance between depth (minimal description length) and diversity (residual entropy of admissible cores). Because similarity is defined through free-energy differences, trivial wrapper redundancies uniformly cancel under the regular-UTM assumption, leaving only core diversity as the entropic contribution. Within this picture, three classical notions of similarity emerge as temperature regimes: the Information Distance at low temperature, the Solomonoff universal prior at β = ln 2, and the Ugly Duckling phase at high temperature. This paper makes the following contributions: • We propose a statistical-mechanical framework that redefines similarity as thermodynamic work. • We demonstrate how the Information Distance, the Solomonoff universal prior, and the Ugly Duckling theorem arise in distinct temperature regimes. • We introduce the notion of regular UTMs, separating trivial wrapper redundancy from genuine core diversity, and define effective degeneracy as a new measure of diversity. • We identify residual entropy ratios as a structural source of similarity, complementing description length. • We outline possible non-equilibrium extensions via Jarzynski-type relations [29-31]. 2 Preliminaries In this section we briefly recall the basic notions from algorithmic information theory and from Watanabe's combinatorial argument that are relevant to our framework. For algorithmic information theory, standard references include Kolmogorov's original work [6], Chaitin's programsize complexity [20, 21], and the textbook by Li and Vitányi [5]. For the combinatorial perspective, we refer to Watanabe's papers on the Ugly Duckling theorem and the principle of the unique choice [1-3], as well as related philosophical discussions by Goodman [32] and methodological debates in taxonomy [33]. 2 2.1 Kolmogorov complexity and Information Distance For a finite binary string x, the Kolmogorov complexity K(x) is defined as the length of the shortest prefix-free program p that outputs x on a fixed UTM U [5, 6, 20, 34, 35]: K(x) := min{ |p| : U(p) = x }. Intuitively, K(x) measures the depth of the most concise description of x. For two strings x and y, the conditional complexity K(y|x) is the length of the shortest program producing y given x as auxiliary input. Based on these quantities, Bennett, Gács, Li, Vitányi, and Zurek defined the Information Distance [4, 5]: E(x, y) := max{K(x|y), K(y|x)}. Up to logarithmic precision, this gives a universal metric for similarity. In our framework, this "low-temperature" regime will reproduce K(y|x) as reversible work. 2.2 Solomonoff universal prior Solomonoff [34, 35] introduced a universal distribution that assigns to each string x the probability M(x) := X p:U(p)=x 2-|p|. This Bayesian prior aggregates all programs producing x, weighting shorter ones more heavily. Unlike K(x), which depends only on the shortest program, M(x) collects the entire ensemble. In our thermodynamic analogy, M(x) corresponds to a partition function evaluated at the critical temperature β = ln 2. This interpretation connects algorithmic probability with free-energy ideas and underlies universal prediction frameworks such as AIXI [36]. 2.3 Ugly Duckling theorem (Watanabe) Watanabe's Ugly Duckling theorem [1-3] states that if similarity is defined by counting shared predicates, then every pair of distinct objects is equally similar. Formally, in a universe Ω, the number of predicates true of both x and y - equivalently, the number of subsets O ⊆Ω containing both - is 2|Ω|-2, independent of x and y. Thus, when all predicates are weighted equally, all pairs of objects become indiscriminable. The theorem sharpened Goodman's earlier philosophical claim-that induction depends on which predicates are privileged [32]-into a combinatorial statement. To connect this symmetry with algorithmic information theory, we reinterpret programs as predicates. In the standard view, U(p) = x means that program p outputs x. In our reformulation, U(p) is seen as a set O of admissible objects, with x ∈U(p) interpreted as "x satisfies predicate p". As illustrated in Fig. 2, this bridges Watanabe's set-theoretic formulation with the program-based language of algorithmic information theory. It also reveals the limitation of the original theorem: uniformly counting predicates erases distinctions, but once predicates are weighted by description length, shorter programs dominate. This observation motivates the statistical-mechanical framework we develop, where predicates are treated as programs and similarity arises from ensembles weighted by their complexity. 3 Framework 3.1 Setup and notation As discussed in Sec. 2.3, Watanabe's Ugly Duckling theorem shows that if all predicates (i.e. subsets of Ω) are counted equally, then every pair of distinct objects is equally similar. 3 Figure 2: Illustration of our reinterpretation. Top: a program outputs a single object. Middle: a predicate acts as a characteristic function. Bottom: in our framework, a program defines a set of objects, enabling predicates to be treated as programs. Combinatorially, for a finite universe Ω, the number of subsets containing both x and y is always 2|Ω|-2, independent of the content of x and y. This symmetry illustrates the arbitrariness of similarity definitions when no weighting scheme is imposed. To overcome this limitation, we introduce a weighting scheme grounded in algorithmic information theory. Instead of assigning equal weight to all subsets O ⊆Ω, we weight each subset according to the length of a program that generates it. Shorter programs are assigned larger weights, so the ensemble is biased toward simpler predicates. This construction interpolates between the uniform view of the Ugly Duckling theorem and the minimal-description view of Kolmogorov complexity, and provides the basis for our statistical-mechanical framework. 3.1.1 Objects The objects under consideration are elements of a finite set Ω, where each element is regarded as a finite binary string (data). For later use we introduce the set O(-) := { O | O ⊆Ω, O ̸= ∅}, (1) namely the non-empty subsets of Ω. Throughout, x, y always denote elements of Ω. When necessary we take O ∈O(-) and consider a program satisfying U(p) = O. For example, in Zβ(x) the relevant subsets O are those containing x, while in Zβ(x, y) they are those containing both x and y. 3.1.2 Programs As in algorithmic information theory, we consider prefix-free programs p executed on UTM U. When U outputs a set O ∈O(-) on input p, we write U(p) = O. If x ∈O, we write U(p) ∋x. (2) Similarly, if x, y ∈O we write U(p) ∋(x, y). (3) The length of a program p is denoted by |p|. 3.2 Thermodynamic setting Once predicates are reinterpreted as programs generating subsets O ⊆Ω, we can introduce a statistical-mechanical formalism. Partition functions and reversible work are defined in the 4 standard sense of statistical mechanics [37-39]. The key idea is to regard the program length |p| as energy, and to assign each program a Boltzmann weight e-β|p| at inverse temperature β. In this way, short programs dominate at low temperature, while at high temperature all programs contribute almost equally, recovering the uniform-counting symmetry of the Ugly Duckling theorem. 3.2.1 Partition functions This analogy leads to the following definitions of partition functions and free energies: Zβ(x) := X p:U(p)∋x e-β|p|, Fβ(x) := -β-1 ln Zβ(x), (4) and for two objects x, y ∈Ω, Zβ(x, y) := X p:U(p)∋(x,y) e-β|p|, Fβ(x, y) := -β-1 ln Zβ(x, y). (5) Here Zβ(x) and Zβ(x, y) represent partition functions restricted to subsets that contain x or (x, y), respectively. 3.2.2 Thermodynamic work We next consider a process that interpolates between the constraint "x is included" and "both x and y are included." Introducing a coupling parameter λ ∈[0, 1], we define the energy of each program as Eλ(p) = |p| + Jλ 1 -sy(p) , (6) where sy(p) = 1 if U(p) ∋y and sy(p) = 0 otherwise. The constant J specifies the strength of the constraint. Taking J →∞enforces a hard constraint at λ = 1, so that only programs with U(p) ∋(x, y) contribute. The corresponding generalized partition function and free energy are ˆZβ(λ) := X p:U(p)∋x e-βEλ(p), ˆFβ(λ) := -β-1 ln ˆZβ(λ). (7) In the limits we obtain ˆFβ(0) = Fβ(x), ˆFβ(1) = Fβ(x, y). (8) For this constrained evolution, the generalized force is Φλ := D∂Eλ(p) ∂λ E λ = J⟨1 -sy(p)⟩λ, (9) and the reversible isothermal work required for the transformation is W (β) rev (x →x, y) = Z 1 0 Φλ dλ = Fβ(x, y) -Fβ(x). (10) Thus, similarity is identified with the free energy difference needed to add y to x. This process is illustrated schematically in Fig. 3. As λ increases, the ensemble shifts smoothly from programs describing x alone to programs describing both x and y. The reversible work quantifies this shift, and we adopt it as our definition of similarity. In the following sections we analyze this quantity in different temperature regimes, showing how classical notions such as Information Distance and the Ugly Duckling theorem emerge as limiting cases. 5 Figure 3: llustration of the λ-coupling process. The horizontal axis represents program length, the depth axis the coupling parameter λ, and the vertical peaks indicate weighted program counts. At λ = 0 (red), programs producing x dominate with minimal length K(x). As λ increases (e.g. λ = 0.4, green), the ensemble gradually shifts. At λ = 1 (purple), only programs producing both x and y remain, with minimal length K(x, y). The number of shortest programs themselves does not change with λ; only their normalized weight in the ensemble is rebalanced. 4 Three temperature regimes The three regimes are summarized schematically in Fig. 4. In the following subsections, we analyze them in detail: the low-temperature limit corresponding to the Information Distance, the critical point β = ln 2 yielding the Solomonoff universal prior, and the high-temperature limit where residual entropy dominates. 4.1 Low-temperature limit (β →∞): Occam's razor phase In the limit β →∞, the Boltzmann weight e-β|p| suppresses all but the shortest program. The partition functions then reduce to Zβ(x) ≃e-βK(x), Zβ(x, y) ≃e-βK(x,y). (11) Consequently, the free energies become Fβ(x) ≃K(x), Fβ(x, y) ≃K(x, y), (12) up to O(β-1) corrections. The reversible work is then W (β→∞) rev (x →x, y) ≃K(x, y) -K(x) = K(y|x). (13) Thus in the low-temperature limit, reversible work coincides with the conditional Kolmogorov complexity and reproduces the Information Distance D(x, y) = max{K(x|y), K(y|x)}. This regime reflects the dominance of the shortest description, embodying Occam's razor in algorithmic form, and will be referred to as the "Occam's razor phase." 6 Figure 4: Schematic phase diagram of similarity as thermodynamic work. At low temperature, the reversible work reduces to the conditional complexity K(y|x), reproducing the Information Distance (Occam's razor phase). At the critical point β = ln 2, it coincides with the Solomonoff universal prior. At high temperature, the divergent residual-entropy term dominates, recovering the essence of Watanabe's Ugly Duckling theorem. Finite conditional complexity contributions K(y|x) remain present but become negligible compared to the divergence. 4.2 Critical point (β = ln 2): Bayesian bridge At β = ln 2, the partition function Zln 2(x) coincides with the Solomonoff universal prior M(x). This prior is defined as M(x) = X p:U(p)∋x 2-|p|, (14) a weighted sum over all programs that output x, each with weight 2-|p|. Since U is prefix-free, Kraft's inequality ensures X p 2-|p| ≤1, so M(x) is a valid semimeasure and can serve as a universal Bayesian prior over all computable models. In this case, the reversible work becomes W (ln 2) rev (x →x, y) = -log2 M(x, y) M(x) . (15) Therefore our similarity measure is interpreted as the free energy cost of adding y given x under the universal prior, which information-theoretically corresponds to the amount of information required for a universal Bayesian update. For this reason, we refer to the critical regime as the "Bayesian bridge," connecting algorithmic and probabilistic perspectives. 4.3 High-temperature limit: Ugly Duckling phase In the opposite limit β →0, all programs contribute almost equally. Let Nl(x) be the number of programs of length lwith U(p) ∋x, and Nl(x, y) the number producing (x, y). Then Zβ(x) ≃ X l Nl(x), Zβ(x, y) ≃ X l Nl(x, y). Since Nl(x) ∼2lgrows exponentially with l, these sums diverge for β 0. A.2 Regular UTMs and wrapper construction Definition 1 (Regular UTM). A regular Universal Turing Machine (UTM) is a prefix-free machine U such that every program p admits a unique decomposition p = w∥q into • a wrapper w drawn from a syntactically defined family W, and • a core code q executed by a fixed reference machine U0, i.e. U(w∥q) = U0(q). The wrapper set acts uniformly and independently of the output. Let ad := |{w ∈W : |w| = d}| denote the number of wrappers of length d. Explicit construction. Fix a marker M of length h ≥1. Define Wd := { w = sM : |w| = d, s contains no occurrence of M }. That is, a wrapper is any binary string that ends with M and whose prefix s is M-free. This guarantees a unique boundary between w and the core q: in any concatenation p = w∥q, the first (leftmost) occurrence of M in p is precisely the end of w. After that boundary, the core q may contain M arbitrarily; parsing remains unambiguous because no occurrence of M appears before the boundary inside w. The set { s : |s| = d -h, s M-free } is a regular language recognized by a finite automaton, hence its cardinality grows exponentially. Therefore there exist constants c, α > 0 such that ad := |Wd| ≍c 2αd (d →∞). We also include the empty wrapper w = ε so that shortest programs (d = 0) are available. A.3 Effective degeneracy Let mo denote the number of shortest cores of length K(o) on U0. In addition, there may exist alternative cores of length K(o) + ∆for integers ∆≥1. Let m(∆) o be their multiplicity. Such alternatives include, in particular, the case of zero-length wrappers (pure cores of length K(o) + ∆). At total length K(o) + d the decomposition gives NK(o)+d(o) = mo ad + d-1 X ∆=1 m(∆) o ad-∆+ Ed(o), (16) where Ed(o) collects boundary effects. These include, for example: • cases where an alternative core of length K(o) + ∆cannot contribute because d D m(∆) o ad-∆/ad vanishes uniformly as D →∞. Hence NK(o)+d(o) ad = emo + o(1), which is equivalent to NK(o)+d(o) = emo ad(1 + ro(d)) with ro(d) →0. 13 A.5 Partition Functions with Excess Cutoff For an excess window Λ ≥0, define Z(≤K(x)+Λ) β (x) = e-βK(x) Λ X d=0 NK(x)+d(x) e-βd, (17) Z(≤K(x,y)+Λ) β (x, y) = e-βK(x,y) Λ X d=0 NK(x,y)+d(x, y) e-βd. (18) By Proposition 1, Z(≤K(x)+Λ) β (x) = e-βK(x) em(Λ) x Λ X d=0 ad(1 + rx(d))e-βd, (19) Z(≤K(x,y)+Λ) β (x, y) = e-βK(x,y) em(Λ) x,y Λ X d=0 ad(1 + rx,y(d))e-βd. (20) Hence the ratio is Z(≤K(x,y)+Λ) β (x, y) Z(≤K(x)+Λ) β (x) = e-β(K(x,y)-K(x)) em(Λ) x,y em(Λ) x PΛ d=0 ad(1 + rx,y(d))e-βd PΛ d=0 ad(1 + rx(d))e-βd . (21) A.6 High-Temperature Expansion For fixed Λ and β →0, Λ X d=0 ad(1 + ro(d))e-βd = Λ X d=0 ad [1 + O(ε(Λ)) + O(βΛ)], where ε(Λ) := max{|εx(Λ)|, |εx,y(Λ)|} and ε(Λ) →0 by Proposition 1. Thus (21) becomes Z(≤K(x,y)+Λ) β (x, y) Z(≤K(x)+Λ) β (x) = e-β(K(x,y)-K(x)) · em(Λ) x,y em(Λ) x [1 + O(ε(Λ)) + O(βΛ)]. Taking Λ →∞under the scaling βΛ →0, ε(Λ) →0, the error vanishes. A.7 Reversible Work at High Temperature Finally, W (Λ,β) rev (x→x, y) = -β-1 log Z(≤K(x,y)+Λ) β (x, y) Z(≤K(x)+Λ) β (x) (22) = [K(x, y) -K(x)] -β-1 ln em(Λ) x,y em(Λ) x + o(1). (23) Thus W (Λ,β) rev (x→x, y) = K(y|x) -β-1 ln em(Λ) x,y em(Λ) x + o(1), with the interpretation that K(y|x) is the finite depth contribution, while ln em(Λ) x,y / em(Λ) x is the divergent diversity or residual entropy contribution. 14 A.8 Cutoff Schemes: Absolute vs Excess An alternative cutoff restricts the absolute program length, e.g. |p| ≤L. While natural and simple, this treats different objects asymmetrically: the relative position of L to the ground length K(x) may vary greatly, biasing comparisons across objects. By contrast, the excess cutoff normalizes each object by its own K(x) and examines only the surplus window Λ. This places all objects on equal footing and allows the degeneracy ratio to emerge cleanly as the dominant contribution in the high-temperature limit. A.9 Supplementary Note: Growth of wrapper families For completeness we justify the claim that the wrapper family defined by a fixed marker M has exponential growth. Proposition 2. Fix a marker M of length h ≥1. Let An(M) denote the set of binary strings of length n that do not contain M as a substring. Then |An(M)| = C μn + o(μn) as n →∞, where μ ∈[1, 2) is the spectral radius of the transition matrix of the M-avoidance automaton, and C > 0 depends on M. Sketch. The set An(M) is a regular language recognized by a finite automaton constructed from the prefix function of M (e.g. the Aho-Corasick automaton). Let A be the adjacency matrix of the automaton restricted to safe states. Then |An(M)| = e⊤ 0 An1 where e0 is the unit vector corresponding to the initial automaton state, and 1 is the all-ones vector selecting all accepting states. In other words, e⊤ 0 An1 counts the number of n-step paths of the automaton that start in the initial state and end in any safe state, which is exactly the number of length-n binary strings that avoid M. By Perron-Frobenius, A has a dominant eigenvalue μ = ρ(A) with positive eigenvectors, so |An(M)| = C μn + o(μn) with C > 0. As a consequence, the wrapper counts satisfy ad = |Wd| = |Ad-h(M)| = c 2αd [1 + o(1)], α = log2 μ ∈[0, 1). Thus there exist constants c, α > 0 such that ad ≍c 2αd, as used in the main text. B Non-equilibrium extension via Jarzynski equality B.1 Theoretical sketch Consider a system in contact with a thermal bath at inverse temperature β, with distribution πβ(p | λ) = e-βEλ(p) ˆZβ(λ) , where Eλ(p) depends on a control parameter λ ∈[0, 1]. In the reversible case, slow variation of λ gives work equal to free energy difference. In the irreversible case, extra work appears, but Jarzynski's equality ensures e-βW = e-β∆F . With the excess cutoff, admissible programs are restricted at each λ, leading to the partition functions ˆZ(Λ) β (λ) = X p∈PΛ(λ) e-βEλ(p). 15 where PΛ(λ) denotes the set of admissible programs under the excess cutoff Λ at control parameter λ. The Jarzynski equality then reads e-βW = ˆZ(Λ) β (1) ˆZ(Λ) β (0) = exp -β ∆F (Λ) β , which recovers the reversible work defined in the main text. B.2 Numerical protocol (sketch) A numerical protocol for estimating similarity at finite temperature is as follows: 1. Enumerate all programs up to surplus cutoff Λ that output x; identify those that also output y. 2. Discretize λ into a sequence 0 = λ0 < · · · < λM = 1. 3. At each λk, sample programs from πβ(p | λk) ∝e-βEλk(p). 4. For each sampled trajectory, accumulate the work W. 5. Estimate exp(-β∆F) via the average ⟨e-βW ⟩. This protocol provides a practical scheme for estimating the finite-temperature similarity measure defined in this work. Remarks on feasibility and interpretation. It should be stressed that the above protocol does not require the exact values of K(x) or K(x, y), which are uncomputable in general. In practice, one works with finite surrogate sets of programs, obtained for example from compressionbased approximations, minimum description length principles, or bounded searches up to a cutoff length. Within such restricted ensembles, the Jarzynski procedure remains valid and yields an effective estimate of the free energy difference. Thus the non-equilibrium extension should be understood not as a way to compute the exact Kolmogorov-based similarity, but as a principled framework for defining and estimating approximate similarity measures. An additional conceptual benefit is that Jarzynski's framework makes explicit the gap between reversible and irreversible processes. While our similarity measure is formally defined by the free energy difference between λ = 0 and λ = 1, practical approximations, coarse sampling, or limited exploration of program space will generally produce an average work ⟨W⟩≥∆F. The excess ⟨W⟩-∆F can be interpreted as an irreversible cost of similarity, quantifying how much apparent dissimilarity is introduced by computational or sampling limitations. This provides a natural measure of the reliability of approximate similarity estimates. References [1] S. Watanabe. Knowing and Guessing: A Quantitative Study of Inference and Information. Quantitative Study of Inference and Information. Wiley, 1969. ISBN 9780471921301. URL https://books.google.co.jp/books?id=gIxQAAAAMAAJ. [2] S. Watanabe. Pattern Recognition: Human and Mechanical. Wiley, 1985. ISBN 9780471808152. URL https://books.google.co.jp/books?id=ZmdQAAAAMAAJ. [3] Satosi Watanabe. Epistemological relativity. Annals of the Japan Association for Philosophy of Science, 7(1):1-14, 1986. 16 [4] C.H. Bennett, P. Gacs, Ming Li, P.M.B. Vitanyi, and W.H. Zurek. Information distance. IEEE Transactions on Information Theory, 44(4):1407-1423, 1998. [5] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 2008. [6] A. N. Kolmogorov. Three approaches to the definition of the concept "quantity of information". Probl. Peredachi Inf, 1(1):3-11, 1965. [7] Leo Szilard. On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings. Zeitschrift für Physik, 53:840-856, 1929. [8] Leon Brillouin. Science and Information Theory. Academic Press, New York, 1956. [9] E. T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620-630, 1957. [10] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3):183-191, 1961. [11] Charles H. Bennett. The Thermodynamics of Computation-A Review. International Journal of Theoretical Physics, 21(12):905-940, 1982. [12] W. H. Zurek. Thermodynamic cost of computation, algorithmic complexity and the information metric. Nature, 341(6238):119-124, 1989. URL https://doi.org/10.1038/341119a0. [13] Harvey S. Leff and Andrew F. Rex, editors. Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing. 2002. ISBN 978-0750307596. [14] Takahiro Sagawa and Masahito Ueda. Nonequilibrium thermodynamics of feedback control. Physical Review E, 85(2):021104, 2012. [15] Takahiro Sagawa. Thermodynamic and logical reversibilities revisited. Journal of Statistical Mechanics: Theory and Experiment, (3):P03025, 2014. P03025. [16] Juan M. R. Parrondo, Jordan M. Horowitz, and Takahiro Sagawa. Thermodynamics of information. Nature Physics, 11(2):131-139, 2015. URL https://doi.org/10.1038/nphys3230. [17] Sosuke Ito and Takahiro Sagawa. Maxwell's demon in biochemical signal transduction with feedback loop. Nature Communications, 6:7498, 2015. [18] Sosuke Ito. Stochastic Thermodynamic Interpretation of Information Geometry. Physical Review Letters, 121(3):030605, 2018. [19] Leonid A. Levin. Laws of Information Conservation (Non-growth) and Aspects of the Foundation of Probability Theory. In Problems in the Transmission of Information, pages 206-210. Springer, 1974. Early work related to algorithmic thermodynamics. [20] Gregory J. Chaitin. A theory of program size formally identical to information theory. Journal of the ACM, 22(3):329-340, 1975. URL http: //www.cs.auckland.ac.nz/~chaitin/acm75.pdf. 17 [21] Gregory J. Chaitin. Algorithmic Information Theory, volume 1 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, UK, 1987. ISBN 978-0-521-34676-5. [22] Ludwig Staiger. A tight upper bound on kolmogorov complexity and uniformly optimal prediction. Theory of Computing Systems, 31(3):215-229, June 1998. s002240000086. URL https://link.springer.com/article/10.1007/s002240000086. Special issue on Computability of the Physical. [23] Cristian S. Calude, Ludwig Staiger, and Sebastiaan A. Terwijn. On partial randomness. Technical Report 239, Centre for Discrete Mathematics and Theoretical Computer Science, 2004. URL https://www.cs.auckland.ac.nz/ research/groups/CDMTCS/researchreports/239cris.pdf. December 6, 2004. [24] Kohtaro Tadaki. A statistical mechanical interpretation of algorithmic information theory. In Proceedings of the 4th International Conference on Unconventional Models of Computation (UMC'02), volume 2509 of Lecture Notes in Computer Science, pages 242-251. Springer, 2002. [25] Kohtaro Tadaki. A statistical mechanical interpretation of algorithmic information theory. CoRR, abs/0801.4194, 2008. URL http://arxiv.org/abs/0801.4194. [26] John C. Baez and Mike Stay. Algorithmic thermodynamics. Mathematical Structures in Computer Science, 22(5):771-787, 2012. Special Issue: Computability of the Physical. [27] Kohtaro Tadaki. A Statistical Mechanical Interpretation of Algorithmic Information Theory, volume 36 of SpringerBriefs in Mathematical Physics. Springer Singapore, 2019. ISBN 978-981-15-0739-7. URL https://doi.org/10.1007/ 978-981-15-0739-7. [28] Aram Ebtekar and Marcus Hutter. Foundations of algorithmic thermodynamics. Phys. Rev. E, 111:014118, Jan 2025. URL https://link.aps. org/doi/10.1103/PhysRevE.111.014118. [29] C. Jarzynski. Nonequilibrium Equality for Free Energy Differences. Physical Review Letters, 78(14):2690-2693, 1997. [30] Gavin E. Crooks. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E, 60(3):2721-2726, 1999. PhysRevE.60.2721. [31] Christopher Jarzynski. Equalities and Inequalities: Irreversibility and the Second Law of Thermodynamics at the Nanoscale. Annual Review of Condensed Matter Physics, 2:329-351, 2011. [32] Nelson Goodman. Fact, Fiction, and Forecast. Bobbs-Merrill, Indianapolis, 3rd edition, 1972. [33] Peter H. A. Sneath and Robert R. Sokal. Numerical Taxonomy: The Principles and Practice of Numerical Classification. W. H. Freeman, San Francisco, 1973. ISBN 0716706970. [34] Ray J. Solomonoff. A Formal Theory of Inductive Inference. Part I. Information and Control, 7(1):1-22, 1964. [35] Ray J. Solomonoff. A Formal Theory of Inductive Inference. Part II. Information and Control, 7(2):224-254, 1964. 18 [36] Marcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2005. ISBN 978-3-540-22139-5. [37] R. K. Pathria and Paul D. Beale. Statistical Mechanics. Academic Press, 3rd edition, 2011. ISBN 978-0123821881. [38] Kerson Huang. Statistical Mechanics. Wiley, 2nd edition, 1987. ISBN 978-0471815181. [39] Herbert B. Callen. Thermodynamics and an Introduction to Thermostatistics. Wiley, 2nd edition, 1985. ISBN 978-0471862567. [40] Rudi L. Cilibrasi and Paul M.B. Vitanyi. The Google Similarity Distance. IEEE Transactions on Knowledge and Data Engineering, 19(3):370-383, 2007. [41] Ming Li, Xin Chen, Xin Li, Bin Ma, and P.M.B. Vitanyi. The similarity metric. IEEE Transactions on Information Theory, 50(12):3250-3264, 2004. 838101. 19
|
2509.16237
|
Andrei Arusoaie, Hora¸tiu Cheval, Radu Iosif: FROM 2025
EPTCS 427, 2025, pp. 117–133, doi:10.4204/EPTCS.427.8
© M. Krahl et al.
This work is licensed under the
Creative Commons Attribution License.
parSAT: Parallel Solving of Floating-Point Satisfiability
Markus Krahl
University of Applied Sciences Munich HM
Munich, Germany
TASKING Germany GmbH
Munich, Germany
markus.krahl@hm.edu
markus.krahl@tasking.com
Matthias Güdemann
University of Applied Sciences Munich HM
Munich, Germany
matthias.guedemann@hm.edu
Stefan Wallentowitz
University of Applied Sciences Munich HM
Munich, Germany
stefan.wallentowitz@hm.edu
Satisfiability-based verification techniques, leveraging modern Boolean satisfiability (SAT) and Satis-
fiability Modulo Theories (SMT) solvers, have demonstrated efficacy in addressing practical problem
instances within program analysis. However, current SMT solver implementations often encounter
limitations when addressing non-linear arithmetic problems, particularly those involving floating point
(FP) operations. This poses a significant challenge for safety critical applications, where accurate and
reliable calculations based on FP numbers and elementary mathematical functions are essential.
This paper shows how an alternative formulation of the satisfiability problem for FP calculations
allows for exploiting parallelism for FP constraint solving. By combining global optimization
approaches with parallel execution on modern multi-core CPUs, we construct a portfolio-based semi-
decision procedure specifically tailored to handle FP arithmetic. We demonstrate the potential of this
approach to complement conventional methods through the evaluation of various benchmarks.
1
Introduction
Various software-intensive systems utilize Floating Point (FP) computations, as FP allows for the approxi-
mation of real numbers that are essential in numerous applications modeling physical behavior. When
these systems operate within safety or security critical domains, their correctness must be guaranteed.
The challenge lies in testing and verifying FP computations due to the vast domain and specific behaviors
inherent to FP arithmetic. However, taking into account recent advancements such as incorporating
IEEE754 compliant FP theory into the SMT-LIB2 standard, current state-of-the-art automatic theorem
provers offer a viable approach for modeling and verifying these complex calculations. For solving FP
constraints, contemporary state-of-the-art SMT solvers, e.g., bitwuzla [16], cvc5 [1], or Z3 [6], employ
bit-blasting [4], a technique that transforms the FP constraints into propositional logic problems. These
problems are then solved by Boolean satisfiability (SAT) solvers, thereby enabling rigorous, automatic
verification of FP computations.
While significant advancements have been made in SMT solver capabilities for solving equations
in FP theory, their capacity for reasoning about FP constraints, particularly those involving non-linear
arithmetic, remains limited for practical applications. This limitation stems from inherent overhead
associated with converting these computations from FP arithmetic to propositional logic, often requiring
word- [16] or bit-blasting operations specific to the FP-focused solver frameworks. Furthermore, this
encoding process fails to exploit advancements in hardware such as dedicated FP units within multicore
CPUs or GPUs that could significantly accelerate computation.
118
parSAT: Parallel Solving of Floating-Point Satisfiability
In this paper, we present parSAT, an integrated tool that performs a portfolio-based semi-decision
procedure for SMT equations in FP theory. It extends the formulation of solving FP constraints based on
global optimization (GO), originally introduced and refined in [8, 2]. We specifically address limitations
by incorporating a more comprehensive support for SMT-LIB2 constraints and enabling a portfolio-
based minimization of the objective function using a multicore implementation to decide satisfiability.
Furthermore, we demonstrate the potential of combining parSAT with an SMT solver to systematically
leverage the advantages of both approaches.
The paper is structured as follows. Section 2 discusses related work, and section 3 provides the
theoretical foundations for converting SMT equations in FP theory. The core principles and implementation
details of parSAT are described in section 4. In section 5, we evaluate parSAT in comparison with other
SMT solvers. Section 6 concludes the paper and discusses the outlook and potential further work.
2
Related Work
The approach to formulate FP constraints in the form of a GO problem was first introduced in XSat [8].
It solves SMT equations in pure FP theory by converting the given constraints into an equisatisfiable
mathematical optimization problem which has a global minimum of value 0 if and only if the original
problem is satisfiable. In detail, it transforms a quantifier-free SMT equation F(−→x ), where −→x ∈FPn
corresponds to an arbitrary assignment of the variables in the equation, into an objective function G(−→x ).
G(−→x ) is minimized by applying GO to find an input vector −→z for which G(−→z ) = 0. In case a −→z was
found, −→z would correspond to a valid assignment α for which F(α) = SAT, therefore F(−→x ) would be
satisfiable. In case only a non-zero global minimum is found, the SMT equation F(−→x ) is considered
to be unsatisfiable. XSat takes an SMT equation in the SMT-LIB2 format as input and generates the
optimization function as C-code which is compiled and loaded as Python-Module that applies the GO
algorithm Basin Hopping [22] (BH) from the SciPy-Package [21] to find its global minimum.
goSAT [2] is based on the ideas of XSat as it similarly transforms an SMT equation in pure FP theory
from a given input file in SMT-LIB2 format to an optimization function and attempts to find its global
minimum. It also supports a code generation mode similar to XSat where the optimization function is
emitted as C-code. However, the main aspect of goSAT is the Just-in-Time (JIT) compilation of the
generated objective function and the immediate search for its global minimum. Additionally, it offers the
possibility to choose from several different GO routines provided through the NLopt [10] library whereas
XSat only involves the BH algorithm.
Our parSAT approach builds on top of XSat and goSAT. We included the three best performing GO
algorithms based on average runtime and found SAT solutions used in goSAT and XSat, namely BH with
the Powell method [17] as local minimizer, Controlled Random Search 2 with local mutation
[19, 11] (CRS2) and Improved Stochastic Ranking Evolution Strategy [20] (ISRES) for use in
minimization by reimplementing each approach in C++. parSAT supports a freely configurable parallel
execution of these algorithms in a portfolio-based approach. Similar to XSat and goSAT, parSAT first
converts the given SMT equation into an optimization function, but it natively compiles this optimization
function into a shared library that is directly loaded into parSAT. Additionally, parSAT provides a more
complete support of the SMT-LIB2 standard, e.g., the ite function for arguments with FP type.
Most other SMT solvers apply the DPLL(T) [14, p. 66] framework which is used to decide combi-
nations of different theories based on structural SAT solving. In that case, an SMT solver consists of
multiple theory solvers and an orchestrating SAT solver. The SMT solvers bitwuzla [16] and cvc5 [1]
use an implementation of FP constraints based on the SymFPU [5] library. This allows them to apply
M. Krahl et al.
119
word-blasting, i.e., an eager conversion of FP into the bitvector theory and then using bit-blasting to
generate a purely propositional bit-level problem for a SAT solver. It can be beneficial to not only have
conjunctions for the theory solver, but to use more abstract reasoning on the mathematical level in order to
avoid searching through large state spaces that are impossible according to the theory. This is for example
realized in the natural domain solving procedure implemented in MathSAT [9] or based on incremental
approximations and logic programming as in Colibri [25]. These approaches to solving FP constraints are
powerful and guaranteed to be complete. The downside is that due to the complexity of the underlying
propositional bit-vector formulas, finding a solution in practical problems is often difficult. They also do
not exploit modern multi-core CPUs or GPUs with dedicated FP units to accelerate FP instructions which
is possible using the parSAT approach.
Previous research has explored running SMT solvers in a portfolio setting, where multiple configura-
tions of a single solver or different solvers are executed in parallel to solve a given equation, such as [24],
[13], and [23]. Our approach similarly adopts a portfolio strategy; however, instead of using multiple
SMT solvers, we concurrently execute different GO methods to locate a zero-valued global minimum of
the optimization function. Since all instances operate within the same optimization framework, previously
evaluated points could be shared more efficiently among them compared to running separate SMT solvers.
3
Theoretical Background
In this chapter, we provide the theoretical foundation of parSAT. Let FP be the set of IEEE754 double
precision FP numbers. Note that this set includes the numbers that can be represented precisely by single
precision numbers, i.e., of type float.
In general, a quantifier-free SMT equation F(−→x ) with −→x ∈FPn is transformed into a mathematical
objective function G(−→x ) in such a way that computing G(−→x ) with a given input vector −→a either returns
0 if −→a corresponds to a satisfiable assignment α for F(−→x ) or a positive distance value that indicates how
close −→a is to a global minimum at zero, i.e., a satisfiable assignment.
To ensure the equivalence between the optimization function G(−→x ) and the initial SMT FP formula
F(−→x ), the following requirements, originally stated in XSat, must hold:
R(1): ∀−→x ∈FPn →G(−→x ) ≥0
R(2): ∀−→x ∈FPn ∧G(−→x ) = 0 →−→x |= F
R(3): ∀−→x ∈FPn ∧−→x |= F →G(−→x ) = 0
R(1) states that the objective form is non-negative. R(2) states that if the objective function has a value of 0
then the corresponding valuation of the free variables in the constraint problem is a satisfying assignment.
Finally R(3) states that every satisfying assignment to the constraint problem corresponds to a root of the
objective function.
parSAT supports a given SMT formula F(−→x ) in the language LFP−SMT representing quantifier-free
FP constraints. The language LFP−SMT supported by parSAT is a strict superset of the language supported
by goSAT and XSat, its syntax is defined as:
Boolean constraints
π := ¬π′ |π1 ∧π2 |π1 ∨π2 |e1 ▷◁e2
FP expressions
e := c|v|e1 ⊗e2 | fun(e1,...,en)|ite(π,e1,e2)
120
parSAT: Parallel Solving of Floating-Point Satisfiability
where ▷◁∈{<,≤,>,≥,==,̸=},⊗∈{+,−,∗,/}, c represents a FP constant, v is a FP variable, fun is
a user-defined, interpreted FP function, and ite corresponds to the if-then-else-function defined in the
SMT-LIB2 core theory. It returns the FP expression e1 if the Boolean argument π is true; otherwise, it
returns the FP expression e2.
Similar to goSAT, we define FCD(−→x ) as the conversion of F(−→x ) by removing the negation through
the application of De-Morgan’s law and transforming it into conjunctive normal form. But in contrast to
the approach described for goSAT, we denote each operand in ▷◁with an additional subscript n to indicate
whether the initial operand is negated due to the application of De-Morgan’s law:
FCD(−→x ) =
^
i∈I
_
j∈J
ei,j ▷◁i,j,n e′
i,j
(1)
From FCD(−→x ) we deduce the optimization function G(−→x ) as follows:
G(−→x ) = ∑
i∈I∏
j∈J
d(▷◁i,j,n,ei,j,e′
i,j)
(2)
where d(▷◁i,j,n,ei,j,e′
i,j) translates the boolean value of the comparison operators in ▷◁to an FP value that
is equal to or greater than zero.
Here, the previously introduced subscript n for an intended negation of the comparison operator
needs to be considered. For instance, for the real numbers ra,rb the statement ¬(ra < rb) could be
transformed to (ra ≥rb) to eliminate the negation. However, this transformation would not be valid for
all FP values, specifically the NaN value. In case for the FP numbers fa, fb where fb = NaN, the first
statement ¬(fa < fb) would be true, whereas the second statement (fa ≥fb) would be false. In order
to address this special case in FP arithmetic, we insert an additional check for NaN to d(▷◁i,j,n,ei,j,e′
i,j)
when the corresponding operator in ▷◁was later negated due to the application of the De-Morgan’s law,
indicated by subscript n = 1.
This results in the following definitions to encode boolean constraints involving FP expressions,
d(==0,e1,e2) = d(̸=1,e1,e2) = θ(e1,e2)
(3)
d(̸=0,e1,e2) = d(==1,e1,e2) = e1 ̸= e2 ? 0 : 1
(4)
d(<0,e1,e2) = e1 < e2 ? 0 : θ(e1,e2)+1
(5)
d(<1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d(≥0,e1,e2)
(6)
d(≤0,e1,e2) = e1 ≤e2 ? 0 : θ(e1,e2)
(7)
d(≤1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d(>0,e1,e2)
(8)
d(>0,e1,e2) = e1 > e2 ? 0 : θ(e1,e2)+1
(9)
d(>1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d(≤0,e1,e2)
(10)
d(≥0,e1,e2) = e1 ≥e2 ? 0 : θ(e1,e2)
(11)
d(≥1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d(<0,e1,e2)
(12)
where θ(f1, f2) computes the distance value between the FP numbers f1 and f2. Similarly to goSAT,
we calculate θ(f1, f2) where f1, f2 ∈FP\{NaN} with the difference in the bit representations of f1 and
f2. To adhere to the IEEE754 standard, that requires that the equality comparison of two NaN values is
always false, NaN values need to be treated differently as the difference in the bit representation of two
M. Krahl et al.
121
NaN values might be zero. Therefore, the following definition is used in parSAT:
θ(e1,e2) = isnan(e1)∨isnan(e2) ? 1 : (e1 == e2 ? 0 : (|bits(e1)−bits(e2)|))
(13)
Accordingly, the following properties hold:
∀f1, f2 ∈FP :
θ(f1, f2) ≥0
(14)
∀f1, f2 ∈FP :
θ(f1, f2) = 0 =⇒f1 = f2
(15)
∀f1, f2 ∈FP :
θ(f1, f2) = θ(f2, f1)
(16)
∀f1, f2 ∈FP :
isnan(f1)∨isnan(f2) =⇒θ(f1, f2) > 0
(17)
Considering equations (2) to (17), we can derive that the previously stated requirements R(1), R(2), and
R(3) hold for G(−→x ). Because of the distinct processing of negated comparison operators, parSAT also
enables the generation of optimization functions that correctly return zero for an assignment α containing
NaN values, which satisfies the initial SMT FP constraints. Still, GO algorithms might be developed
based on mathematical reasoning and therefore might be limited to calculate global minima that do not
contain infinite FP numbers, such as NaN or Infinity even though these might present a valid solution for
the given SMT equation.
FP expressions are directly encoded by performing the operations defined by ⊗using the corresponding
double or float types in C. In this way, the potential non-linear characteristics of the original SMT
equation are preserved when translating it into the generated GO function.
Generally, parSAT allows the integration of any derivative-free GO algorithm. Due to the various
conditionals in the previously presented equations that construct the generated optimization function, it
is very likely that the final optimization function is not smooth and therefore not differentiable. All GO
procedures currently incorporated in parSAT are employing a stochastic driven strategy for locating a
global minimum. Therefore, parSAT fulfills the soundness property only for SMT equations where one of
its GO algorithm finds a zero valued minimum −→z , since −→z necessarily represents a valid solution to the
initial SMT formula F(−→x ). However, parSAT is incomplete and unsound for SMT equations where it
cannot find a satisfiable assignment due to the non-deterministic behavior of the employed GO algorithms.
Because of their stochastic nature, the GO algorithms employed by parSAT may not find a zero-valued
minimum with respect to the given time and evaluation limits, even though the optimization function has
a global minimum of zero, i.e., the given SMT formula is satisfiable. Therefore, when parSAT cannot find
a solution for the generated optimization functions, it is unable to reason if the given SMT problem is
either satisfiable or unsatisfiable and emits UNKNOWN. Accordingly, parSAT may only find a satisfiable
assignment for a given SMT equation but cannot prove unsatisfiability.
4
Implementation Considerations
In the following, we present the core implementation decisions of parSAT. First, we elaborate on the
method used to parse a given SMT equation and describe how the resulting optimization function is
incorporated into parSAT’s subsequent execution process. Second, we examine the design principles that
facilitate the parallel execution of multiple GO algorithms, which enable a parallel, distributed approach
to solving the initial FP equation. We analyze the current set of GO algorithms integrated into parSAT,
including their parameter settings and the potential number of instances that may be created for each
algorithm. Finally, we provide a brief example how a generated GO function by parSAT would compare
to the initial FP satisfiability problem. parSAT and all of its integrated GO algorithms are written in C++.
122
parSAT: Parallel Solving of Floating-Point Satisfiability
Figure 1 provides an overview of the operation process of parSAT. The details concerning each execution
step are described in the following subsections.
parSAT
SMT
Equation
1) Parse SMT Equation
opt_func.c
2) Generate
optimization
function
3) Call compiler
to create
shared library
opt_func.so
4) Hot-load
shared
library
GO1
instances
GO2
instances
GO3
instances
5) Launch multiple threads
per GO method
7) Terminate running
threads if
solution was found
8) Return SAT
or UNKNOWN
6) Return when
SAT solution found
or evaluation limit
reached
Figure 1: Overview of parSAT’s execution behavior
4.1
Optimization Function Generation and Compilation
parSAT accepts an SMT file compliant to the SMT-LIB2 standard as input. Currently only the quantifier-
free FP theory is supported. The rounding mode for the FP operations is set to round nearest ties to
even (RNE). Support for the other rounding modes can be added by inserting calls to the corresponding
functions defined in the fenv-header of the C standard library when generating the C code for the
optimization function.
We use the parser in libz3 from the Z3 SMT solver to parse each assert statement of the SMT file into
a parse tree with all involved FP variables, constants and operations affecting a particular assert statement.
The root node represents the constraint whereas each node corresponds to either a Boolean or FP operation
and each leaf represents either a FP variable or constant. We also apply the simplify method of libz3
to optimize each assertion tree, e.g., to eliminate unused symbols or redundant definitions.
Afterwards, we recursively iterate through each assertion tree to generate the optimization function as
C code. First, the C variables for the leaves, i.e., FP constants or variables, are created. Second, starting
from the closest leaf-nodes to the assertion top-node further variables are defined that connect the variable
names of the children with the corresponding operator until finally the variable of the assertion node is
constructed. Accordingly, each assertion variable defines an optimization function Ga(−→x ) in semantic
M. Krahl et al.
123
equivalence to the equation (2) to (17). Finally, the overall result is calculated by putting all assertions in
conjunction. Because of equation (2) the final result of the optimization function is the sum of all assertion
optimization functions such that G(−→x ) = ∑I
a=1 va where each assertion variable va represents Ga(−→x ) and
I equals to the amount of assertions in the given SMT file.
To avoid rounding errors introduced during the transformation process we assign single precision
FP symbols to variables of type float and double precision FP numerals to double typed C variables.
Other sorts of FP types, such as half precision (16-bit) or quadruple precision (128-bit) are currently
not supported. However, the input vector given to the optimization function and iteratively modified
by the GO algorithms is typed as double since FP values in double precision are capable of accurately
representing single precision numbers. Because of this, the employed GO algorithms could actually
calculate global minima that contain ±∞values in one of their coordinates if this particular dimension
corresponds to a single precision FP number in the initial SMT equation.
The fully generated optimization function is written into a source file and compiled as shared library.
Besides the necessary flags to compile the generated GO function as shared library, the default settings
of the compiler (here gcc) are utilized. Therefore, FP operations shall be performed according to the
IEEE754 standard. We use the system function call of C++ to invoke the compiler and, after successful
compilation, the dlopen and dlsym functions to immediately load the compiled optimization function in
the execution context of parSAT. Subsequently, a reference to the optimization function is passed to each
GO instance. Since the optimization function is a pure function, i.e., it has no side effects, it can be safely
called by multiple threads running in parallel.
4.2
Parallel GO Procedure
Generally, parSAT launches a freely configurable number of GO instances (with respect to the resources
of the available runtime environment) that concurrently search for a global minimum of the provided
optimization function G(−→x ). This may also include multiple instances of the same GO method.
parSAT terminates when the first thread finds a minimum equal to zero, signaling that a satisfiable
assignment for the input formula has been found, or when each GO instance has reached its evaluation
limit, indicating that it is UNKNOWN whether the given SMT equation is satisfiable. The evaluation limit
sets a maximum number for how many times each GO method may call the previously generated GO
function.
We reimplemented CRS2 and ISRES with the available code of NLopt and the algorithm descriptions in
the original papers as reference but adopted the approaches to leverage the unique properties of parSAT’s
generated optimization functions. For BH with the Powell method as local minimizer, we manually
converted the Python code from SciPy into C++. We slightly modified the Powell’s minimization method
to avoid introducing non-finite FP values, such as ±∞or NaN, during the minimization process, possibly
induced because of large deltas between subsequent minima. As source for returning random double
values required by the employed GO algorithms, we implemented a pseudo random number generator
based on the xoshiro256+ [3] algorithm.
Currently, parSAT does not distribute the work for a single GO method into multiple threads, but
executes each GO process together with the optimization function in a distinct thread. Due to the large
search space of one- to multidimensional GO problems in the FP domain, we did not implement a
synchronization mechanism to share the already evaluated coordinates between the GO threads. However,
at the beginning of the solving process, parSAT generates for each GO instance a randomized input
vector to ideally have a broad distributed starting set between all concurrent running GO processes. We
included stochastic GO methods into parSAT where running the same GO algorithm in multiple distributed
124
parSAT: Parallel Solving of Floating-Point Satisfiability
instances would randomly evaluate different regions of the search space. This approach shall also avoid
that multiple GO instances are being trapped in the same local minimum.
4.3
Exemplary Execution of parSAT
To demonstrate the execution process of parSAT, we employ the following quadratic equation
t(x) = −1˙(x+2)2 −2
for which the following exemplary property shall hold:
ExP(1): x ∈FP∧t(x) ≥−2.
When ExP(1) and t(x) are encoded as SMT equation in FP theory (as presented in Listing 1) and submitted
to parSAT, it generates a GO function where its integrated GO algorithms would attempt to find a global
minimum of zero.
1
(set-logic QF_FP)
2
;; set rounding mode to round to nearest, ties to even
3
(define-fun rm()
RoundingMode RNE)
4
;; symbolic variable for input parameter x
5
(declare-fun x()
(_ FloatingPoint 8 24))
6
;; a:= -1.0 ; x_s := 2.0; y_s := -2.0; max_y := -2.0
7
(define-fun a()
(_ FloatingPoint 8 24) ((_ to_fp 8 24) #xbf800000))
8
(define-fun x_s()
(_ FloatingPoint 8 24) ((_ to_fp 8 24) #x40000000))
9
(define-fun y_s()
(_ FloatingPoint 8 24) ((_ to_fp 8 24) #xc0000000))
10
(define-fun max_y() (_ FloatingPoint 8 24) ((_ to_fp 8 24) #xc0000000))
11
(define-fun x2_1()
(_ FloatingPoint 8 24) (fp.add rm x x_s))
12
(define-fun x2_2()
(_ FloatingPoint 8 24) (fp.mul rm x2_1 x2_1))
13
(define-fun x2_3()
(_ FloatingPoint 8 24) (fp.mul rm a x2_2))
14
(define-fun x2_4()
(_ FloatingPoint 8 24) (fp.add rm x2_3 y_s))
15
;; constrain the possible solution to satisfy the property
16
(assert (fp.geq x2_4 max_y))
17
;; check if the problem has a solution
18
(check-sat)
Listing 1: SMT equation of ExP(1) and t(x)
Figure 2 illustrates the function t(x) (left panel) and the plot of the corresponding GO function (right
panel) generated by parSAT based on Listing 1. The construction of the GO function follows the set of
equations (2-17), with equation (11) specifically employed to encode the assert statement involving
a greater-than-or-equal-to relational operator. As observed, the GO function exhibits a local minimum
of 0.0 at x = −2.0, indicated by the red marker. At this point, t(−2.0) = −2.0, thereby satisfying the
constraint ExP(1): t(−2.0) ≥−2.0. Therefore, x = −2.0 constitutes a satisfiable assignment for the SMT
equation in Listing 1.
M. Krahl et al.
125
5
4
3
2
1
0
1
10
8
6
4
2
t(x)
5
4
3
2
1
0
1
0.00
0.25
0.50
0.75
1.00
1.25
1.50
1.75
2.00
1e7 GO function of t(x) and ExP(1)
Figure 2: Function plots of t(x) and the generated GO function
5
Evaluation
We evaluated the efficiency and effectiveness of parSAT on the complete Griggio benchmark set, which is
a subset of the quantifier-free FP benchmarks referenced by the SMT-LIB2 initiative [18]. The Griggio
benchmark containing 214 SMT instances was added to the benchmark set of the SMT-LIB2 initiative
after the FP theory was supported by the SMT-LIB2 standard in 2015. Since then, this benchmark was
often used to evaluate the effectiveness and efficiency of SMT solvers on handling FP constraints such
as in [8] and [2]. Additionally, we analyzed the performance of parSAT on a second benchmark, the
2019-Guedemann benchmark set, which is based on verifying calculations to approximate the natural
logarithm and the exponential functions, and which became a subset of the quantifier-free FP and linear
real arithmetic benchmarks maintained by SMT-LIB2.
5.1
Overview
Firstly, we show the results of analyzing various parSAT variants by employing different parallelization
strategies for the included GO algorithms. Secondly, we compare the best observed version of parSAT
against current state-of-the-art SMT solvers, including bitwuzla (0.7.0), cvc5 (1.2.1), Z3 (4.13.4), and
MathSAT (5.6.11). Each solver was executed with its default parameters on the Griggio benchmark.
Secondly, we examine the results of running the same set of SMT solvers with their default configuration
and parSAT on the 2019-Guedemann benchmark. Finally, we propose a combined approach that integrates
parSAT with an SMT solver, aiming to harness the strengths of both methods to establish a fast and
reliable solving process.
In all experiments, the primary evaluation criteria were the efficiency and solution quality of the
approach being investigated, i.e., a particular configuration of parSAT or an SMT solver. The solution
quality was assessed based on the number of SAT instances identified within the benchmark; a higher
count of discovered SAT instances indicated better solution quality. Efficiency was represented by the
average time the solver required to find a solution for a single satisfiable SMT file, calculated by dividing
the total wall-clock time for evaluating all found SAT instances in the benchmark by their quantity.
We conducted all experiments involving parSAT and the SMT solvers on a Linux machine equipped
126
parSAT: Parallel Solving of Floating-Point Satisfiability
with an Intel i7-12700KF processor and 32 GB of memory. For processing each SMT file in the given
benchmark, a timeout period of 600 seconds was specified. All involved GO instances were initialized with
randomly selected start values ranging between −0.5 and 0.5. We externally verified all SAT solutions
reported by parSAT by using bitwuzla.
5.2
Comparison of parSAT Configurations
In Table 1 we present the impact of each GO method employed in parSAT to find a SAT solution.
We executed each specified thread configuration of parSAT ten times on the Griggio benchmark and
subsequently calculated the average results. Each row displays the percentage of the average amount of
SAT solutions found first by each employed GO algorithm with the same number of threads used per GO
method.
BH
CRS2
ISRES
parsat BH1,CRS21,ISRES1
46.6%
27.9%
25.5%
parsat BH2,CRS22,ISRES2
59.1%
25.2%
15.7%
parsat BH3,CRS23,ISRES3
71.6%
17.3%
11.1%
parsat BH4,CRS24,ISRES4
74.0%
17.3%
8.7%
parsat BH5,CRS25,ISRES5
77.5%
15.5%
7.0%
parsat BH6,CRS26,ISRES6
83.6%
10.0%
6.4%
parsat BH7,CRS27,ISRES7
87.0%
7.7%
5.3%
parsat BH8,CRS28,ISRES8
87.5%
7.0%
5.5%
parsat BH9,CRS29,ISRES9
89.6%
5.1%
5.3%
parsat BH10,CRS210,ISRES10
91.2%
4.1%
4.7%
Table 1: Evaluation of fastest GO algorithms per number of threads
The first row indicates that with one thread assigned to each GO routine, BH identified more than
45% of the SAT solutions first, while CRS2 accounted for approximately 28%, and ISRES around 25%. In
the second row, with two threads per GO routine, BH increased its average to nearly 60%, whereas CRS2
decreased to roughly 25%, and ISRES declined to approximately 15%. The subsequent rows confirm this
trend: as the number of parallel GO instances increased further, BH significantly improved, culminating in
the final row where it was the fastest algorithm to find a SAT solution in over 90% for the given SMT
equations. Contrarily, the share of first found SAT solutions by CRS2 and ISRES continuously diminished,
stabilizing around 4−5% for 10 threads per GO instance. This illustrates the potential of BH to scale and
accelerate its GO process by concurrent execution which was also experienced by Ferreiro-Ferreiro et.
al [7]. We hypothesize that the improved scalability of BH compared to CRS2 and ISRES stems from its
non-population-based nature. Unlike CRS2 and ISRES, which require generating an initial population
approximately 10 to 20 times larger than the problem dimensionality and subsequently iterating over and
modifying this population during minimization, BH operates solely on a single initial vector whose size
matches the function’s dimensionality.
Based on these findings we adjusted the number of threads assigned to each GO method. To respect
the limit of available hardware threads on the test machine, we set the maximum number of parallel
running GO instances to 20. We used 70% for running BH and 15% each for executing CRS2 and ISRES,
which roughly corresponds to 14 BH, 3 CRS2, and 3 ISRES threads. We applied parSAT with this particular
configuration ten times to the Griggio benchmark set and measured an average runtime of 1.36 seconds
M. Krahl et al.
127
per SMT instance with approximately found 101.8 SAT solutions on average. The best run of this
specific parSAT configuration discovered 102 SAT instances with an average runtime of 1.27 seconds
per SMT file. This observed best result of parSAT will be used for further comparison and is denoted as
parSATbestBH14,CRS23,ISRES3.
5.3
Evaluation of parSAT’s Performance on conventional Benchmark
Figure 3 shows the runtime of the previously described best parSAT configuration and other SMT solvers
for the SMT equations in the Griggio benchmark that were evaluated as satisfiable. The runtime is denoted
in seconds on a logarithmic scale on the Y-axis. The X-axis represents the (virtual) index of the satisfiable
SMT instance within the Griggio benchmark. The time parSAT took to solve each SMT instance it found
SAT is shown by a red cross, for bitwuzla by a blue diamond, for cvc5 by a green triangle, for MathSAT
by a brown star, and for Z3 by a black dot. Each point represents the time it took to determine an SMT
equation of the benchmark as satisfiable. If multiple points of different SMT solvers are placed exactly on
the same vertical line, this implies, that the corresponding solvers determined the same SMT equation as
SAT. It can be seen that parSAT did not always find a satisfiable assignment before the compared SMT
solvers, i.e., as presented on the far left side of each plot in Figure 3. However, the volatility of the solving
time for its identified SAT equations remains low while the solving time of the other SMT solvers strongly
varies.
Table 2 contains additional findings on running parSAT and the SMT solvers on the Griggio benchmark.
For each solver, we present the number of found SAT instances, UNSAT equations, timeouts or in case of
parSAT evaluation limits, errors, and the average processing time for the found SAT equations in seconds.
An error indicates that the corresponding SMT solver entered an erroneous state on a given SMT equation
and terminated without issuing a SAT or UNSAT verdict before reaching a timeout. When considering the
number of discovered SAT equations, bitwuzla achieved the best result by finding 108 SAT instances,
followed by cvc5 that reported 104 SAT solutions within the given timeout. parSAT comes at third with
102 reported SAT equations before MathSAT with 100, and Z3 with 75. Since parSAT is incomplete and
cannot prove an SMT equation to be unsatisfiable, according to bitwuzla or cvc5 there are at least 76
guaranteed UNSAT equations in the Griggio benchmark. bitwuzla and cvc5 ran into a timeout for 30
instances, therefore no final verdict can be made for the total amount of SAT or UNSAT instances in the
benchmark. In terms of the average runtime per SMT file determined as SAT, parSAT presents by far
the best result with 0.1 seconds. The next best SMT solver is MathSAT with an average runtime of 25.86
seconds, followed by bitwuzla with 32.71 seconds, cvc5 with 34.01 seconds, and lastly Z3 with 88.7
seconds. Overall, parSAT found approximately 6% less SAT solutions than the best solver bitwuzla but
required by average far less time for finding an assignment for a satisfiable SMT equation in the Griggio
benchmark.
5.4
parSAT’s Performance on Benchmark derived from Software Verification Task
In comparison, we show the results of running the same SMT solvers and the best parSAT configuration on
the 2019-Guedemann benchmark in Table 3. This benchmark was derived from verifying an approximation
of the natural logarithm and the exponential function in a real-world scenario. In the Cardano ledger
for a proof-of-stake based consensus protocol [12], precise implementations of these two mathematical
functions are required to assure that each node of the block-chain calculates the same results. Therefore,
following properties shall hold for an approximation of these functions
128
parSAT: Parallel Solving of Floating-Point Satisfiability
10
2
10
1
100
101
102
parSAT
cvc5
10
3
10
2
10
1
100
101
102
parSAT
bitwuzla
10
2
10
1
100
101
102
parSAT
mathsat
10
2
10
1
100
101
102
103
parSAT
z3
Figure 3: Comparison of solving time between parSAT and SMT solvers for satisfiable equations
SAT
UNSAT
timeout /
evaluation
limit
errors
average SAT
runtime
parSATbestBH14,CRS23,ISRES3
102
0
112
0
0.1
bitwuzla
108
76
30
0
32.71
cvc5
104
76
30
4
34.01
MathSAT
100
69
45
0
25.86
Z3
75
56
54
29
88.7
Table 2: SMT solvers and parSAT on the Griggio benchmark
M. Krahl et al.
129
∀x ∈R :
x > 0 =⇒exp′(ln′ x) ≈x
(18)
∀x ∈R :
x > 0 =⇒ln′(exp′ x) ≈x
(19)
∀x,y ∈R :
x,y ∈[0,1] =⇒exp′(x+y) ≈exp′(x)·exp′(y)
(20)
∀x ∈R :
x,y > 0 =⇒ln′(x·y) ≈ln′(x)·ln′(y)
(21)
∀x ∈R :
x,y ∈[0,1] =⇒img(xy) = img(exp′(y·ln′(x))) ≈[0,1]
(22)
where exp′ and ln′ represent the approximated functions as described in [15], img denotes the codomain,
and ≈corresponds to an absolute error of less than ε = 10−12.
The implementations of these functions based on FP arithmetic together with these properties were
converted into SMT equations. If some of these SMT equations is satisfiable, this implies that there exists
a counter-example that violates one of the stated properties. In total 13 SMT files were generated based
on these properties that constitute the 2019-Guedemann Benchmark.
For this benchmark, both parSAT and bitwuzla found 10 SAT instances, cvc5 and MathSAT reported
4 and Z3 did not find any SAT solution within the given time. parSAT did not find a satisfiable assignment
for 3 SMT formulas for which it reached the function evaluation limit. bitwuzla denoted 2 equations as
UNSAT and reached for 1 SMT equation its timeout. cvc5 and MathSAT found 1 UNSAT instance, and
reached a timeout for 8 SMT files. Z3 did not find any SAT or UNSAT SMT formulas, but reached in 4
cases the timeout.
SAT
UNSAT
timeout /
evaluation
limit
errors
average SAT
runtime
parSATbestBH14,CRS23,ISRES3
10
0
3
0
0.1
bitwuzla
10
2
1
0
37.88
cvc5
4
1
8
0
180.09
MathSAT
4
1
8
0
22.8
Z3
0
0
4
9
-
Table 3: SMT solvers and parSAT on the 2019-Guedemann benchmark
For this benchmark, parSAT finds as many SAT instances as bitwuzla with a far lower average
runtime per satisfiable SMT equation. We see as main reason for parSAT’s improved performance in the
2019-Guedemann benchmark that the benchmark was derived from a mathematical problem where the
integrated GO algorithms in parSAT might be more efficient in finding a solution. Another aspect could
be the reduced number of variables used in the 2019-Guedemann benchmark which vary between 1 and 8,
whereas the Griggio benchmark contains some SMT problems with more than 500 variables.
5.5
Combining parSAT with SMT Solvers
Building on our previous findings, this section explores the potential advantages of utilizing parSAT in
conjunction with an SMT solver as a new approach. Our experimental results suggest that compared
to other solvers, parSAT often identifies satisfiable solutions faster. Nevertheless, parSAT is unable to
reliably prove an SMT formula unsatisfiable due to its inherent incompleteness. In contrast, when no
130
parSAT: Parallel Solving of Floating-Point Satisfiability
timeout occurs, SMT solvers provide SAT or UNSAT statements that are guaranteed to be correct, despite
potentially requiring more time than parSAT for processing.
In the subsequent theoretical consideration, we examine the potential of combining existing SMT
solvers with parSAT for handling SMT files. Under this hypothetical framework, both the SMT solver
and parSAT would concurrently initiate the search for a solution to the given SMT problem. If parSAT
identifies a satisfiable solution before the SMT solver, the solver is terminated, and a SAT result is returned.
Conversely, if the SMT solver discovers a satisfiable assignment first, the same procedure applies to
terminate parSAT. However, if parSAT exits with UNKNOWN, the SMT solver continues until it either
proves UNSAT, finds a satisfiable solution, or reaches a timeout. This approach appears promising as it
combines the rapid execution capabilities of parSAT with the definitive completeness properties of SMT
solvers, potentially enhancing overall efficiency and reliability.
In Table 4, we present the potential impact of such a combination. We utilized the measurements
for bitwuzla as representative state-of-the-art SMT solver, alongside the best observed performance
of parSAT using the BH14,CRS23,ISRES3 configuration for the Griggio benchmark. It is assumed that
bitwuzla and parSAT operate completely concurrently, without accounting for any additional overhead
introduced by parallelization. To calculate the average solving runtime for SAT equations, we considered
the runtime of parSAT when it returned a SAT faster than bitwuzla; otherwise, we used the runtime of
bitwuzla. Due to parSAT’s inability to reason about UNSAT SMT equations, instances where parSAT
reached its evaluation limit and bitwuzla returned UNSAT or encountered a timeout were excluded
from the average SAT runtime calculation. Neither parSAT nor bitwuzla encountered any errors on the
Griggio benchmark.
SAT
UNSAT
timeout /
evaluation
limit
errors
average SAT
runtime
parSATbestBH14,CRS23,ISRES3
102
0
112
0
0.1
bitwuzla
108
76
30
0
32.71
parSATbest + bitwuzla
113
76
25
0
12.75
Table 4: Combination of parSAT with bitwuzla on the Griggio benchmark
The table compares the outcomes of executing parSAT and bitwuzla both independently and in a
combined approach. It provides data on identified SAT instances, UNSAT formulas, and the average
runtime per found SAT SMT file, measured in seconds. The first two rows display the results of running
parSAT and bitwuzla separately, while the third row illustrates the potential effect of systematically
combining the two methods. Based on the recorded data, and in contrast to their independent application,
the combined approach would increase the number of identified SAT instances to 113 and reduce the
timeouts to 25. Moreover, the average solving runtime for satisfiable equations would decrease to 12.75
seconds, representing a reduction of slightly more than 60% compared to exclusively using bitwuzla.
Our research suggests a promising avenue for advancing this approach by leveraging the evaluated
data sets by parSAT after it reached its function evaluation limit. The SMT solver could then utilize these
preprocessed values to enhance its internal lemma generation capabilities and exclude already unsatisfiable
assignments from its potential solution set, thereby accelerating the overall solving process.
M. Krahl et al.
131
6
Conclusion and Outlook
Experiments with parSAT demonstrate that its portfolio-approach utilizing multiple different GO al-
gorithms competing concurrently to find solutions, presents a considerable alternative to current state-
of-the-art SMT solvers to quickly verify if a given SMT formula might be satisfiable. Furthermore,
parSAT successfully handled and processed a variety of real-world SMT equations with either small or
large numbers of variables. The modular architecture of parSAT enables straightforward integration of
additional GO routines that could further enhance the complementary and competitive solving approach.
In particular, we demonstrated parSAT’s ability to find SAT solutions to some SMT instances derived
from practical problems with mathematical nature faster than current state-of-the-art SMT solvers.
Because of the currently integrated GO methods, parSAT is a semi-decision procedure that underap-
proximates the search space. If it finds a solution satisfying a problem instance, it is guaranteed to be
correct but might label an actual satisfiable formula as UNKNOWN. In the current version, due to some
implementation details of the integrated GO approaches, parSAT is limited on finding solutions consisting
of finite FP values only. Even though parSAT’s GO routines could potentially calculate ±∞values for
variables in single precision format, they currently cannot systematically find NaN-values in general or
±∞in double precision.
We plan to integrate more types of GO algorithms into parSAT, such as deterministic variants or
brute-force based approaches. We also see a potential benefit in implementing a specific GO method
that evaluates the optimization function by allowing the controlled selection of non-finite FP numbers as
input. Additionally, we aim to further investigate how the generation of the optimization function could
be improved such that GO algorithms might find the global minimum faster and in a more reliable way.
Another aspect to consider is the potential acceleration of integrated GO methods through further paral-
lelization, which may involve utilizing CPU-specific vector instructions, GPU-specific implementations,
or deploying GO instances to a distributed cluster with scalable computing capabilities based on cloud
infrastructure. The approach itself, formulating a constraint problem as a GO function, is not limited to
FP only; we aim to support more theories defined by the SMT-LIB2 standard, such as bitvector or integer
theory.
In addition, we also plan to extend the use of the approach in two domains. The first is the evaluation
how parSAT might be used in a complementary approach with existing SMT solvers. To facilitate this
process, we will investigate how equalities can be derived to propagate information to other theory solvers
during theory combination. The second promising topic is the application of the techniques used in parSAT
to the problem domain of SMT model counting. This involves counting the number of all satisfiable
assignments for a given SMT equation.
References
[1] Haniel Barbosa, Clark Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt, Makai Mann, Abdalrhman
Mohamed, Mudathir Mohamed, Aina Niemetz, Andres Nötzli, Alex Ozdemir, Mathias Preiner, Andrew
Reynolds, Ying Sheng, Cesare Tinelli & Yoni Zohar (2022): Cvc5: A Versatile and Industrial-Strength SMT
Solver. In Dana Fisman & Grigore Rosu, editors: Tools and Algorithms for the Construction and Analysis of
Systems, Springer International Publishing, Cham, pp. 415–442, doi:10.1007/978-3-030-99524-9_24.
[2] M. Ammar Ben Khadra, Dominik Stoffel & Wolfgang Kunz (2017): goSAT: Floating-point Satisfiability as
Global Optimization. In: 2017 Formal Methods in Computer Aided Design (FMCAD), IEEE, Vienna, pp.
11–14, doi:10.23919/FMCAD.2017.8102235.
132
parSAT: Parallel Solving of Floating-Point Satisfiability
[3] David Blackman & Sebastiano Vigna (2021): Scrambled Linear Pseudorandom Number Generators. ACM
Trans. Math. Softw. 47(4), doi:10.1145/3460772.
[4] Martin Brain, Florian Schanda & Youcheng Sun (2019): Building Better Bit-Blasting for Floating-Point
Problems. In Tomáš Vojnar & Lijun Zhang, editors: Tools and Algorithms for the Construction and Analysis
of Systems, 11427, Springer International Publishing, Cham, pp. 79–98, doi:10.1007/978-3-030-17462-0_5.
[5] Martin Brain, Florian Schanda & Youcheng Sun (2019): Building Better Bit-Blasting for Floating-Point
Problems. In Tomáš Vojnar & Lijun Zhang, editors: Tools and Algorithms for the Construction and Analysis
of Systems, Springer International Publishing, Cham, pp. 79–98, doi:10.1007/978-3-030-17462-0_5.
[6] Leonardo de Moura & Nikolaj Bjørner (2008): Z3: An Efficient SMT Solver. In C. R. Ramakrishnan &
Jakob Rehof, editors: Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in
Computer Science, Springer, Berlin, Heidelberg, pp. 337–340, doi:10.1007/978-3-540-78800-3_24.
[7] Ana M. Ferreiro-Ferreiro, José A. García-Rodríguez, Luis Souto & Carlos Vázquez (2019): Basin Hopping
with Synched Multi L-BFGS Local Searches. Parallel Implementation in Multi-CPU and GPUs. Applied
Mathematics and Computation 356, pp. 282–298, doi:10.1016/j.amc.2019.02.040.
[8] Zhoulai Fu & Zhendong Su (2016): XSat: A Fast Floating-Point Satisfiability Solver. In Swarat Chaudhuri &
Azadeh Farzan, editors: Computer Aided Verification, Springer International Publishing, Cham, pp. 187–209,
doi:10.1007/978-3-319-41540-6_11.
[9] Leopold Haller, Alberto Griggio, Martin Brain & Daniel Kroening (2012): Deciding Floating-Point Logic
with Systematic Abstraction. In: 2012 Formal Methods in Computer-Aided Design (FMCAD), pp. 131–140.
[10] Steven G. Johnson & Julien Schueller (2021): NLopt: Nonlinear Optimization Library. Astrophysics Source
Code Library, p. ascl:2111.004.
[11] P. Kaelo & M. M. Ali (2006): Some Variants of the Controlled Random Search Algorithm for Global
Optimization. Journal of Optimization Theory and Applications 130(2), pp. 253–264, doi:10.1007/s10957-
006-9101-0.
[12] Aggelos Kiayias, Alexander Russell, Bernardo David & Roman Oliynykov (2017): Ouroboros: A Provably
Secure Proof-of-Stake Blockchain Protocol. In Jonathan Katz & Hovav Shacham, editors: Advances in
Cryptology – CRYPTO 2017, Springer International Publishing, Cham, pp. 357–388, doi:10.1007/978-3-319-
63688-7_12.
[13] Gergely Kovásznai, Krisztián Gajdár & Laura Kovács (2019): Portfolio SAT and SMT Solving of Cardinality
Constraints in Sensor Network Optimization. In: 2019 21st International Symposium on Symbolic and Numeric
Algorithms for Scientific Computing (SYNASC), pp. 85–91, doi:10.1109/SYNASC49474.2019.00021.
[14] Daniel Kroening & Ofer Strichman (2016): Decision Procedures. Texts in Theoretical Computer Science. An
EATCS Series, Springer Berlin Heidelberg, Berlin, Heidelberg, doi:10.1007/978-3-662-50497-0. Available at
http://link.springer.com/10.1007/978-3-662-50497-0.
[15] Matthias Güdemann (2019): Non-integer calculations specification. https://github.com/intersectmbo/
cardano-ledger/releases/latest/download/non-integer-calculations.pdf. Accessed: 2025-
06-04.
[16] Aina Niemetz & Mathias Preiner (2023): Bitwuzla. In Constantin Enea & Akash Lal, editors: Computer
Aided Verification, Lecture Notes in Computer Science, Springer Nature Switzerland, Cham, pp. 3–17,
doi:10.1007/978-3-031-37703-7_1.
[17] M. J. D. Powell (1964): An efficient method for finding the minimum of a function of several variables without
calculating derivatives. The Computer Journal 7(2), pp. 155–162, doi:10.1093/comjnl/7.2.155.
[18] Mathias Preiner, Hans-Jörg Schurr, Clark Barrett, Pascal Fontaine, Aina Niemetz & Cesare Tinelli (2024):
SMT-LIB release 2024 (non-incremental benchmarks), doi:10.5281/zenodo.11061097.
[19] W. L. Price (1983): Global Optimization by Controlled Random Search. Journal of Optimization Theory and
Applications 40(3), pp. 333–348, doi:10.1007/BF00933504.
M. Krahl et al.
133
[20] T.P. Runarsson & Xin Yao (2005): Search biases in constrained evolutionary optimization. IEEE Trans-
actions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 35(2), pp. 233–243,
doi:10.1109/TSMCC.2004.841906.
[21] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni
Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua
Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson,
C J Carey, ˙Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert
Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian
Pedregosa, Paul van Mulbregt & SciPy 1.0 Contributors (2020): SciPy 1.0: Fundamental Algorithms for
Scientific Computing in Python. Nature Methods 17, pp. 261–272, doi:10.1038/s41592-019-0686-2.
[22] David J. Wales & Jonathan P. K. Doye (1997): Global Optimization by Basin-Hopping and the Lowest Energy
Structures of Lennard-Jones Clusters Containing up to 110 Atoms. The Journal of Physical Chemistry A
101(28), pp. 5111–5116, doi:10.1021/jp970984n.
[23] Tjark Weber (2023): tjark/Par. Available at https://github.com/tjark/Par. Original-date: 2020-12-
04T20:12:50Z.
[24] Christoph M. Wintersteiger, Youssef Hamadi & Leonardo de Moura (2009): A Concurrent Portfolio Approach
to SMT Solving. In Ahmed Bouajjani & Oded Maler, editors: Computer Aided Verification, Springer Berlin
Heidelberg, Berlin, Heidelberg, pp. 715–720, doi:10.1007/978-3-642-02658-4_60.
[25] Heytem Zitoun, Claude Michel, Laurent Michel & Michel Rueher (2020): An Efficient Constraint Based
Framework Forhandling Floating Point SMT Problems, doi:10.48550/arXiv.2002.12441. arXiv:2002.12441.
|
Andrei Arusoaie, Hora ̧tiu Cheval, Radu Iosif: FROM 2025 EPTCS 427, 2025, pp. 117-133, © M. Krahl et al. This work is licensed under the Creative Commons Attribution License. parSAT: Parallel Solving of Floating-Point Satisfiability Markus Krahl üdemann -based verification techniques, leveraging modern Boolean satisfiability (SAT) and Satisfiability Modulo Theories (SMT) solvers, have demonstrated efficacy in addressing practical problem instances within program analysis. However, current SMT solver implementations often encounter limitations when addressing non-linear arithmetic problems, particularly those involving floating point (FP) operations. This poses a significant challenge for safety critical applications, where accurate and reliable calculations based on FP numbers and elementary mathematical functions are essential. This paper shows how an alternative formulation of the satisfiability problem for FP calculations allows for exploiting parallelism for FP constraint solving. By combining global optimization approaches with parallel execution on modern multi-core CPUs, we construct a portfolio-based semidecision procedure specifically tailored to handle FP arithmetic. We demonstrate the potential of this approach to complement conventional methods through the evaluation of various benchmarks. 1 Introduction Various software-intensive systems utilize Floating Point (FP) computations, as FP allows for the approximation of real numbers that are essential in numerous applications modeling physical behavior. When these systems operate within safety or security critical domains, their correctness must be guaranteed. The challenge lies in testing and verifying FP computations due to the vast domain and specific behaviors inherent to FP arithmetic. However, taking into account recent advancements such as incorporating IEEE754 compliant FP theory into the SMT-LIB2 standard, current state-of-the-art automatic theorem provers offer a viable approach for modeling and verifying these complex calculations. For solving FP constraints, contemporary state-of-the-art SMT solvers, e.g., bitwuzla [16], cvc5 [1], or Z3 [6], employ bit-blasting [4], a technique that transforms the FP constraints into propositional logic problems. These problems are then solved by Boolean satisfiability (SAT) solvers, thereby enabling rigorous, automatic verification of FP computations. While significant advancements have been made in SMT solver capabilities for solving equations in FP theory, their capacity for reasoning about FP constraints, particularly those involving non-linear arithmetic, remains limited for practical applications. This limitation stems from inherent overhead associated with converting these computations from FP arithmetic to propositional logic, often requiring word- [16] or bit-blasting operations specific to the FP-focused solver frameworks. Furthermore, this encoding process fails to exploit advancements in hardware such as dedicated FP units within multicore CPUs or GPUs that could significantly accelerate computation. 118 parSAT: Parallel Solving of Floating-Point Satisfiability In this paper, we present parSAT, an integrated tool that performs a portfolio-based semi-decision procedure for SMT equations in FP theory. It extends the formulation of solving FP constraints based on global optimization (GO), originally introduced and refined in [8, 2]. We specifically address limitations by incorporating a more comprehensive support for SMT-LIB2 constraints and enabling a portfoliobased minimization of the objective function using a multicore implementation to decide satisfiability. Furthermore, we demonstrate the potential of combining parSAT with an SMT solver to systematically leverage the advantages of both approaches. The paper is structured as follows. Section 2 discusses related work, and section 3 provides the theoretical foundations for converting SMT equations in FP theory. The core principles and implementation details of parSAT are described in section 4. In section 5, we evaluate parSAT in comparison with other SMT solvers. Section 6 concludes the paper and discusses the outlook and potential further work. 2 Related Work The approach to formulate FP constraints in the form of a GO problem was first introduced in XSat [8]. It solves SMT equations in pure FP theory by converting the given constraints into an equisatisfiable mathematical optimization problem which has a global minimum of value 0 if and only if the original problem is satisfiable. In detail, it transforms a quantifier-free SMT equation F(-→x ), where -→x ∈FPn corresponds to an arbitrary assignment of the variables in the equation, into an objective function G(-→x ). G(-→x ) is minimized by applying GO to find an input vector -→z for which G(-→z ) = 0. In case a -→z was found, -→z would correspond to a valid assignment α for which F(α) = SAT, therefore F(-→x ) would be satisfiable. In case only a non-zero global minimum is found, the SMT equation F(-→x ) is considered to be unsatisfiable. XSat takes an SMT equation in the SMT-LIB2 format as input and generates the optimization function as C-code which is compiled and loaded as Python-Module that applies the GO algorithm Basin Hopping [22] (BH) from the SciPy-Package [21] to find its global minimum. goSAT [2] is based on the ideas of XSat as it similarly transforms an SMT equation in pure FP theory from a given input file in SMT-LIB2 format to an optimization function and attempts to find its global minimum. It also supports a code generation mode similar to XSat where the optimization function is emitted as C-code. However, the main aspect of goSAT is the Just-in-Time (JIT) compilation of the generated objective function and the immediate search for its global minimum. Additionally, it offers the possibility to choose from several different GO routines provided through the NLopt [10] library whereas XSat only involves the BH algorithm. Our parSAT approach builds on top of XSat and goSAT. We included the three best performing GO algorithms based on average runtime and found SAT solutions used in goSAT and XSat, namely BH with the Powell method [17] as local minimizer, Controlled Random Search 2 with local mutation [19, 11] (CRS2) and Improved Stochastic Ranking Evolution Strategy [20] (ISRES) for use in minimization by reimplementing each approach in C++. parSAT supports a freely configurable parallel execution of these algorithms in a portfolio-based approach. Similar to XSat and goSAT, parSAT first converts the given SMT equation into an optimization function, but it natively compiles this optimization function into a shared library that is directly loaded into parSAT. Additionally, parSAT provides a more complete support of the SMT-LIB2 standard, e.g., the ite function for arguments with FP type. Most other SMT solvers apply the DPLL(T) [14, p. 66] framework which is used to decide combinations of different theories based on structural SAT solving. In that case, an SMT solver consists of multiple theory solvers and an orchestrating SAT solver. The SMT solvers bitwuzla [16] and cvc5 [1] use an implementation of FP constraints based on the SymFPU [5] library. This allows them to apply M. Krahl et al. 119 word-blasting, i.e., an eager conversion of FP into the bitvector theory and then using bit-blasting to generate a purely propositional bit-level problem for a SAT solver. It can be beneficial to not only have conjunctions for the theory solver, but to use more abstract reasoning on the mathematical level in order to avoid searching through large state spaces that are impossible according to the theory. This is for example realized in the natural domain solving procedure implemented in MathSAT [9] or based on incremental approximations and logic programming as in Colibri [25]. These approaches to solving FP constraints are powerful and guaranteed to be complete. The downside is that due to the complexity of the underlying propositional bit-vector formulas, finding a solution in practical problems is often difficult. They also do not exploit modern multi-core CPUs or GPUs with dedicated FP units to accelerate FP instructions which is possible using the parSAT approach. Previous research has explored running SMT solvers in a portfolio setting, where multiple configurations of a single solver or different solvers are executed in parallel to solve a given equation, such as [24], [13], and [23]. Our approach similarly adopts a portfolio strategy; however, instead of using multiple SMT solvers, we concurrently execute different GO methods to locate a zero-valued global minimum of the optimization function. Since all instances operate within the same optimization framework, previously evaluated points could be shared more efficiently among them compared to running separate SMT solvers. 3 Theoretical Background In this chapter, we provide the theoretical foundation of parSAT. Let FP be the set of IEEE754 double precision FP numbers. Note that this set includes the numbers that can be represented precisely by single precision numbers, i.e., of type float. In general, a quantifier-free SMT equation F(-→x ) with -→x ∈FPn is transformed into a mathematical objective function G(-→x ) in such a way that computing G(-→x ) with a given input vector -→a either returns 0 if -→a corresponds to a satisfiable assignment α for F(-→x ) or a positive distance value that indicates how close -→a is to a global minimum at zero, i.e., a satisfiable assignment. To ensure the equivalence between the optimization function G(-→x ) and the initial SMT FP formula F(-→x ), the following requirements, originally stated in XSat, must hold: R(1): ∀-→x ∈FPn →G(-→x ) ≥0 R(2): ∀-→x ∈FPn ∧G(-→x ) = 0 →-→x |= F R(3): ∀-→x ∈FPn ∧-→x |= F →G(-→x ) = 0 R(1) states that the objective form is non-negative. R(2) states that if the objective function has a value of 0 then the corresponding valuation of the free variables in the constraint problem is a satisfying assignment. Finally R(3) states that every satisfying assignment to the constraint problem corresponds to a root of the objective function. parSAT supports a given SMT formula F(-→x ) in the language LFP-SMT representing quantifier-free FP constraints. The language LFP-SMT supported by parSAT is a strict superset of the language supported by goSAT and XSat, its syntax is defined as: Boolean constraints π := ¬π′ |π1 ∧π2 |π1 ∨π2 |e1 ▷◁e2 FP expressions e := c|v|e1 ⊗e2 | fun(e1,...,en)|ite(π,e1,e2) 120 parSAT: Parallel Solving of Floating-Point Satisfiability where ▷◁∈{ ,≥,==,̸=},⊗∈{+,-,∗,/}, c represents a FP constant, v is a FP variable, fun is a user-defined, interpreted FP function, and ite corresponds to the if-then-else-function defined in the SMT-LIB2 core theory. It returns the FP expression e1 if the Boolean argument π is true; otherwise, it returns the FP expression e2. Similar to goSAT, we define FCD(-→x ) as the conversion of F(-→x ) by removing the negation through the application of De-Morgan's law and transforming it into conjunctive normal form. But in contrast to the approach described for goSAT, we denote each operand in ▷◁with an additional subscript n to indicate whether the initial operand is negated due to the application of De-Morgan's law: FCD(-→x ) = ^ i∈I _ j∈J ei,j ▷◁i,j,n e′ i,j (1) From FCD(-→x ) we deduce the optimization function G(-→x ) as follows: G(-→x ) = ∑ i∈I∏ j∈J d(▷◁i,j,n,ei,j,e′ i,j) (2) where d(▷◁i,j,n,ei,j,e′ i,j) translates the boolean value of the comparison operators in ▷◁to an FP value that is equal to or greater than zero. Here, the previously introduced subscript n for an intended negation of the comparison operator needs to be considered. For instance, for the real numbers ra,rb the statement ¬(ra 0,e1,e2) (8) d(>0,e1,e2) = e1 > e2 ? 0 : θ(e1,e2)+1 (9) d(>1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d(≤0,e1,e2) (10) d(≥0,e1,e2) = e1 ≥e2 ? 0 : θ(e1,e2) (11) d(≥1,e1,e2) = isnan(e1)∨isnan(e2) ? 0 : d( 0 (17) Considering equations (2) to (17), we can derive that the previously stated requirements R(1), R(2), and R(3) hold for G(-→x ). Because of the distinct processing of negated comparison operators, parSAT also enables the generation of optimization functions that correctly return zero for an assignment α containing NaN values, which satisfies the initial SMT FP constraints. Still, GO algorithms might be developed based on mathematical reasoning and therefore might be limited to calculate global minima that do not contain infinite FP numbers, such as NaN or Infinity even though these might present a valid solution for the given SMT equation. FP expressions are directly encoded by performing the operations defined by ⊗using the corresponding double or float types in C. In this way, the potential non-linear characteristics of the original SMT equation are preserved when translating it into the generated GO function. Generally, parSAT allows the integration of any derivative-free GO algorithm. Due to the various conditionals in the previously presented equations that construct the generated optimization function, it is very likely that the final optimization function is not smooth and therefore not differentiable. All GO procedures currently incorporated in parSAT are employing a stochastic driven strategy for locating a global minimum. Therefore, parSAT fulfills the soundness property only for SMT equations where one of its GO algorithm finds a zero valued minimum -→z , since -→z necessarily represents a valid solution to the initial SMT formula F(-→x ). However, parSAT is incomplete and unsound for SMT equations where it cannot find a satisfiable assignment due to the non-deterministic behavior of the employed GO algorithms. Because of their stochastic nature, the GO algorithms employed by parSAT may not find a zero-valued minimum with respect to the given time and evaluation limits, even though the optimization function has a global minimum of zero, i.e., the given SMT formula is satisfiable. Therefore, when parSAT cannot find a solution for the generated optimization functions, it is unable to reason if the given SMT problem is either satisfiable or unsatisfiable and emits UNKNOWN. Accordingly, parSAT may only find a satisfiable assignment for a given SMT equation but cannot prove unsatisfiability. 4 Implementation Considerations In the following, we present the core implementation decisions of parSAT. First, we elaborate on the method used to parse a given SMT equation and describe how the resulting optimization function is incorporated into parSAT's subsequent execution process. Second, we examine the design principles that facilitate the parallel execution of multiple GO algorithms, which enable a parallel, distributed approach to solving the initial FP equation. We analyze the current set of GO algorithms integrated into parSAT, including their parameter settings and the potential number of instances that may be created for each algorithm. Finally, we provide a brief example how a generated GO function by parSAT would compare to the initial FP satisfiability problem. parSAT and all of its integrated GO algorithms are written in C++. 122 parSAT: Parallel Solving of Floating-Point Satisfiability Figure 1 provides an overview of the operation process of parSAT. The details concerning each execution step are described in the following subsections. parSAT SMT Equation 1) Parse SMT Equation opt_func.c 2) Generate optimization function 3) Call compiler to create shared library opt_func.so 4) Hot-load shared library GO1 instances GO2 instances GO3 instances 5) Launch multiple threads per GO method 7) Terminate running threads if solution was found 8) Return SAT or UNKNOWN 6) Return when SAT solution found or evaluation limit reached Figure 1: Overview of parSAT's execution behavior 4.1 Optimization Function Generation and Compilation parSAT accepts an SMT file compliant to the SMT-LIB2 standard as input. Currently only the quantifierfree FP theory is supported. The rounding mode for the FP operations is set to round nearest ties to even (RNE). Support for the other rounding modes can be added by inserting calls to the corresponding functions defined in the fenv-header of the C standard library when generating the C code for the optimization function. We use the parser in libz3 from the Z3 SMT solver to parse each assert statement of the SMT file into a parse tree with all involved FP variables, constants and operations affecting a particular assert statement. The root node represents the constraint whereas each node corresponds to either a Boolean or FP operation and each leaf represents either a FP variable or constant. We also apply the simplify method of libz3 to optimize each assertion tree, e.g., to eliminate unused symbols or redundant definitions. Afterwards, we recursively iterate through each assertion tree to generate the optimization function as C code. First, the C variables for the leaves, i.e., FP constants or variables, are created. Second, starting from the closest leaf-nodes to the assertion top-node further variables are defined that connect the variable names of the children with the corresponding operator until finally the variable of the assertion node is constructed. Accordingly, each assertion variable defines an optimization function Ga(-→x ) in semantic M. Krahl et al. 123 equivalence to the equation (2) to (17). Finally, the overall result is calculated by putting all assertions in conjunction. Because of equation (2) the final result of the optimization function is the sum of all assertion optimization functions such that G(-→x ) = ∑I a=1 va where each assertion variable va represents Ga(-→x ) and I equals to the amount of assertions in the given SMT file. To avoid rounding errors introduced during the transformation process we assign single precision FP symbols to variables of type float and double precision FP numerals to double typed C variables. Other sorts of FP types, such as half precision (16-bit) or quadruple precision (128-bit) are currently not supported. However, the input vector given to the optimization function and iteratively modified by the GO algorithms is typed as double since FP values in double precision are capable of accurately representing single precision numbers. Because of this, the employed GO algorithms could actually calculate global minima that contain ±∞values in one of their coordinates if this particular dimension corresponds to a single precision FP number in the initial SMT equation. The fully generated optimization function is written into a source file and compiled as shared library. Besides the necessary flags to compile the generated GO function as shared library, the default settings of the compiler (here gcc) are utilized. Therefore, FP operations shall be performed according to the IEEE754 standard. We use the system function call of C++ to invoke the compiler and, after successful compilation, the dlopen and dlsym functions to immediately load the compiled optimization function in the execution context of parSAT. Subsequently, a reference to the optimization function is passed to each GO instance. Since the optimization function is a pure function, i.e., it has no side effects, it can be safely called by multiple threads running in parallel. 4.2 Parallel GO Procedure Generally, parSAT launches a freely configurable number of GO instances (with respect to the resources of the available runtime environment) that concurrently search for a global minimum of the provided optimization function G(-→x ). This may also include multiple instances of the same GO method. parSAT terminates when the first thread finds a minimum equal to zero, signaling that a satisfiable assignment for the input formula has been found, or when each GO instance has reached its evaluation limit, indicating that it is UNKNOWN whether the given SMT equation is satisfiable. The evaluation limit sets a maximum number for how many times each GO method may call the previously generated GO function. We reimplemented CRS2 and ISRES with the available code of NLopt and the algorithm descriptions in the original papers as reference but adopted the approaches to leverage the unique properties of parSAT's generated optimization functions. For BH with the Powell method as local minimizer, we manually converted the Python code from SciPy into C++. We slightly modified the Powell's minimization method to avoid introducing non-finite FP values, such as ±∞or NaN, during the minimization process, possibly induced because of large deltas between subsequent minima. As source for returning random double values required by the employed GO algorithms, we implemented a pseudo random number generator based on the xoshiro256+ [3] algorithm. Currently, parSAT does not distribute the work for a single GO method into multiple threads, but executes each GO process together with the optimization function in a distinct thread. Due to the large search space of one- to multidimensional GO problems in the FP domain, we did not implement a synchronization mechanism to share the already evaluated coordinates between the GO threads. However, at the beginning of the solving process, parSAT generates for each GO instance a randomized input vector to ideally have a broad distributed starting set between all concurrent running GO processes. We included stochastic GO methods into parSAT where running the same GO algorithm in multiple distributed 124 parSAT: Parallel Solving of Floating-Point Satisfiability instances would randomly evaluate different regions of the search space. This approach shall also avoid that multiple GO instances are being trapped in the same local minimum. 4.3 Exemplary Execution of parSAT To demonstrate the execution process of parSAT, we employ the following quadratic equation t(x) = -1 ̇(x+2)2 -2 for which the following exemplary property shall hold: ExP(1): x ∈FP∧t(x) ≥-2. When ExP(1) and t(x) are encoded as SMT equation in FP theory (as presented in Listing 1) and submitted to parSAT, it generates a GO function where its integrated GO algorithms would attempt to find a global minimum of zero. 1 (set-logic QF_FP) 2 ;; set rounding mode to round to nearest, ties to even 3 (define-fun rm() RoundingMode RNE) 4 ;; symbolic variable for input parameter x 5 (declare-fun x() (_ FloatingPoint 8 24)) 6 ;; a:= -1.0 ; x_s := 2.0; y_s := -2.0; max_y := -2.0 7 (define-fun a() (_ FloatingPoint 8 24) ((_ to_fp 8 24) #xbf800000)) 8 (define-fun x_s() (_ FloatingPoint 8 24) ((_ to_fp 8 24) #x40000000)) 9 (define-fun y_s() (_ FloatingPoint 8 24) ((_ to_fp 8 24) #xc0000000)) 10 (define-fun max_y() (_ FloatingPoint 8 24) ((_ to_fp 8 24) #xc0000000)) 11 (define-fun x2_1() (_ FloatingPoint 8 24) (fp.add rm x x_s)) 12 (define-fun x2_2() (_ FloatingPoint 8 24) (fp.mul rm x2_1 x2_1)) 13 (define-fun x2_3() (_ FloatingPoint 8 24) (fp.mul rm a x2_2)) 14 (define-fun x2_4() (_ FloatingPoint 8 24) (fp.add rm x2_3 y_s)) 15 ;; constrain the possible solution to satisfy the property 16 (assert (fp.geq x2_4 max_y)) 17 ;; check if the problem has a solution 18 (check-sat) Listing 1: SMT equation of ExP(1) and t(x) Figure 2 illustrates the function t(x) (left panel) and the plot of the corresponding GO function (right panel) generated by parSAT based on Listing 1. The construction of the GO function follows the set of equations (2-17), with equation (11) specifically employed to encode the assert statement involving a greater-than-or-equal-to relational operator. As observed, the GO function exhibits a local minimum of 0.0 at x = -2.0, indicated by the red marker. At this point, t(-2.0) = -2.0, thereby satisfying the constraint ExP(1): t(-2.0) ≥-2.0. Therefore, x = -2.0 constitutes a satisfiable assignment for the SMT equation in Listing 1. M. Krahl et al. 125 5 4 3 2 1 0 1 10 8 6 4 2 t(x) 5 4 3 2 1 0 1 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 1e7 GO function of t(x) and ExP(1) Figure 2: Function plots of t(x) and the generated GO function 5 Evaluation We evaluated the efficiency and effectiveness of parSAT on the complete Griggio benchmark set, which is a subset of the quantifier-free FP benchmarks referenced by the SMT-LIB2 initiative [18]. The Griggio benchmark containing 214 SMT instances was added to the benchmark set of the SMT-LIB2 initiative after the FP theory was supported by the SMT-LIB2 standard in 2015. Since then, this benchmark was often used to evaluate the effectiveness and efficiency of SMT solvers on handling FP constraints such as in [8] and [2]. Additionally, we analyzed the performance of parSAT on a second benchmark, the 2019-Guedemann benchmark set, which is based on verifying calculations to approximate the natural logarithm and the exponential functions, and which became a subset of the quantifier-free FP and linear real arithmetic benchmarks maintained by SMT-LIB2. 5.1 Overview Firstly, we show the results of analyzing various parSAT variants by employing different parallelization strategies for the included GO algorithms. Secondly, we compare the best observed version of parSAT against current state-of-the-art SMT solvers, including bitwuzla (0.7.0), cvc5 (1.2.1), Z3 (4.13.4), and MathSAT (5.6.11). Each solver was executed with its default parameters on the Griggio benchmark. Secondly, we examine the results of running the same set of SMT solvers with their default configuration and parSAT on the 2019-Guedemann benchmark. Finally, we propose a combined approach that integrates parSAT with an SMT solver, aiming to harness the strengths of both methods to establish a fast and reliable solving process. In all experiments, the primary evaluation criteria were the efficiency and solution quality of the approach being investigated, i.e., a particular configuration of parSAT or an SMT solver. The solution quality was assessed based on the number of SAT instances identified within the benchmark; a higher count of discovered SAT instances indicated better solution quality. Efficiency was represented by the average time the solver required to find a solution for a single satisfiable SMT file, calculated by dividing the total wall-clock time for evaluating all found SAT instances in the benchmark by their quantity. We conducted all experiments involving parSAT and the SMT solvers on a Linux machine equipped 126 parSAT: Parallel Solving of Floating-Point Satisfiability with an Intel i7-12700KF processor and 32 GB of memory. For processing each SMT file in the given benchmark, a timeout period of 600 seconds was specified. All involved GO instances were initialized with randomly selected start values ranging between -0.5 and 0.5. We externally verified all SAT solutions reported by parSAT by using bitwuzla. 5.2 Comparison of parSAT Configurations In Table 1 we present the impact of each GO method employed in parSAT to find a SAT solution. We executed each specified thread configuration of parSAT ten times on the Griggio benchmark and subsequently calculated the average results. Each row displays the percentage of the average amount of SAT solutions found first by each employed GO algorithm with the same number of threads used per GO method. BH CRS2 ISRES parsat BH1,CRS21,ISRES1 46.6% 27.9% 25.5% parsat BH2,CRS22,ISRES2 59.1% 25.2% 15.7% parsat BH3,CRS23,ISRES3 71.6% 17.3% 11.1% parsat BH4,CRS24,ISRES4 74.0% 17.3% 8.7% parsat BH5,CRS25,ISRES5 77.5% 15.5% 7.0% parsat BH6,CRS26,ISRES6 83.6% 10.0% 6.4% parsat BH7,CRS27,ISRES7 87.0% 7.7% 5.3% parsat BH8,CRS28,ISRES8 87.5% 7.0% 5.5% parsat BH9,CRS29,ISRES9 89.6% 5.1% 5.3% parsat BH10,CRS210,ISRES10 91.2% 4.1% 4.7% Table 1: Evaluation of fastest GO algorithms per number of threads The first row indicates that with one thread assigned to each GO routine, BH identified more than 45% of the SAT solutions first, while CRS2 accounted for approximately 28%, and ISRES around 25%. In the second row, with two threads per GO routine, BH increased its average to nearly 60%, whereas CRS2 decreased to roughly 25%, and ISRES declined to approximately 15%. The subsequent rows confirm this trend: as the number of parallel GO instances increased further, BH significantly improved, culminating in the final row where it was the fastest algorithm to find a SAT solution in over 90% for the given SMT equations. Contrarily, the share of first found SAT solutions by CRS2 and ISRES continuously diminished, stabilizing around 4-5% for 10 threads per GO instance. This illustrates the potential of BH to scale and accelerate its GO process by concurrent execution which was also experienced by Ferreiro-Ferreiro et. al [7]. We hypothesize that the improved scalability of BH compared to CRS2 and ISRES stems from its non-population-based nature. Unlike CRS2 and ISRES, which require generating an initial population approximately 10 to 20 times larger than the problem dimensionality and subsequently iterating over and modifying this population during minimization, BH operates solely on a single initial vector whose size matches the function's dimensionality. Based on these findings we adjusted the number of threads assigned to each GO method. To respect the limit of available hardware threads on the test machine, we set the maximum number of parallel running GO instances to 20. We used 70% for running BH and 15% each for executing CRS2 and ISRES, which roughly corresponds to 14 BH, 3 CRS2, and 3 ISRES threads. We applied parSAT with this particular configuration ten times to the Griggio benchmark set and measured an average runtime of 1.36 seconds M. Krahl et al. 127 per SMT instance with approximately found 101.8 SAT solutions on average. The best run of this specific parSAT configuration discovered 102 SAT instances with an average runtime of 1.27 seconds per SMT file. This observed best result of parSAT will be used for further comparison and is denoted as parSATbestBH14,CRS23,ISRES3. 5.3 Evaluation of parSAT's Performance on conventional Benchmark Figure 3 shows the runtime of the previously described best parSAT configuration and other SMT solvers for the SMT equations in the Griggio benchmark that were evaluated as satisfiable. The runtime is denoted in seconds on a logarithmic scale on the Y-axis. The X-axis represents the (virtual) index of the satisfiable SMT instance within the Griggio benchmark. The time parSAT took to solve each SMT instance it found SAT is shown by a red cross, for bitwuzla by a blue diamond, for cvc5 by a green triangle, for MathSAT by a brown star, and for Z3 by a black dot. Each point represents the time it took to determine an SMT equation of the benchmark as satisfiable. If multiple points of different SMT solvers are placed exactly on the same vertical line, this implies, that the corresponding solvers determined the same SMT equation as SAT. It can be seen that parSAT did not always find a satisfiable assignment before the compared SMT solvers, i.e., as presented on the far left side of each plot in Figure 3. However, the volatility of the solving time for its identified SAT equations remains low while the solving time of the other SMT solvers strongly varies. Table 2 contains additional findings on running parSAT and the SMT solvers on the Griggio benchmark. For each solver, we present the number of found SAT instances, UNSAT equations, timeouts or in case of parSAT evaluation limits, errors, and the average processing time for the found SAT equations in seconds. An error indicates that the corresponding SMT solver entered an erroneous state on a given SMT equation and terminated without issuing a SAT or UNSAT verdict before reaching a timeout. When considering the number of discovered SAT equations, bitwuzla achieved the best result by finding 108 SAT instances, followed by cvc5 that reported 104 SAT solutions within the given timeout. parSAT comes at third with 102 reported SAT equations before MathSAT with 100, and Z3 with 75. Since parSAT is incomplete and cannot prove an SMT equation to be unsatisfiable, according to bitwuzla or cvc5 there are at least 76 guaranteed UNSAT equations in the Griggio benchmark. bitwuzla and cvc5 ran into a timeout for 30 instances, therefore no final verdict can be made for the total amount of SAT or UNSAT instances in the benchmark. In terms of the average runtime per SMT file determined as SAT, parSAT presents by far the best result with 0.1 seconds. The next best SMT solver is MathSAT with an average runtime of 25.86 seconds, followed by bitwuzla with 32.71 seconds, cvc5 with 34.01 seconds, and lastly Z3 with 88.7 seconds. Overall, parSAT found approximately 6% less SAT solutions than the best solver bitwuzla but required by average far less time for finding an assignment for a satisfiable SMT equation in the Griggio benchmark. 5.4 parSAT's Performance on Benchmark derived from Software Verification Task In comparison, we show the results of running the same SMT solvers and the best parSAT configuration on the 2019-Guedemann benchmark in Table 3. This benchmark was derived from verifying an approximation of the natural logarithm and the exponential function in a real-world scenario. In the Cardano ledger for a proof-of-stake based consensus protocol [12], precise implementations of these two mathematical functions are required to assure that each node of the block-chain calculates the same results. Therefore, following properties shall hold for an approximation of these functions 128 parSAT: Parallel Solving of Floating-Point Satisfiability 10 2 10 1 100 101 102 parSAT cvc5 10 3 10 2 10 1 100 101 102 parSAT bitwuzla 10 2 10 1 100 101 102 parSAT mathsat 10 2 10 1 100 101 102 103 parSAT z3 Figure 3: Comparison of solving time between parSAT and SMT solvers for satisfiable equations SAT UNSAT timeout / evaluation limit errors average SAT runtime parSATbestBH14,CRS23,ISRES3 102 0 112 0 0.1 bitwuzla 108 76 30 0 32.71 cvc5 104 76 30 4 34.01 MathSAT 100 69 45 0 25.86 Z3 75 56 54 29 88.7 Table 2: SMT solvers and parSAT on the Griggio benchmark M. Krahl et al. 129 ∀x ∈R : x > 0 =⇒exp′(ln′ x) ≈x (18) ∀x ∈R : x > 0 =⇒ln′(exp′ x) ≈x (19) ∀x,y ∈R : x,y ∈[0,1] =⇒exp′(x+y) ≈exp′(x)·exp′(y) (20) ∀x ∈R : x,y > 0 =⇒ln′(x·y) ≈ln′(x)·ln′(y) (21) ∀x ∈R : x,y ∈[0,1] =⇒img(xy) = img(exp′(y·ln′(x))) ≈[0,1] (22) where exp′ and ln′ represent the approximated functions as described in [15], img denotes the codomain, and ≈corresponds to an absolute error of less than ε = 10-12. The implementations of these functions based on FP arithmetic together with these properties were converted into SMT equations. If some of these SMT equations is satisfiable, this implies that there exists a counter-example that violates one of the stated properties. In total 13 SMT files were generated based on these properties that constitute the 2019-Guedemann Benchmark. For this benchmark, both parSAT and bitwuzla found 10 SAT instances, cvc5 and MathSAT reported 4 and Z3 did not find any SAT solution within the given time. parSAT did not find a satisfiable assignment for 3 SMT formulas for which it reached the function evaluation limit. bitwuzla denoted 2 equations as UNSAT and reached for 1 SMT equation its timeout. cvc5 and MathSAT found 1 UNSAT instance, and reached a timeout for 8 SMT files. Z3 did not find any SAT or UNSAT SMT formulas, but reached in 4 cases the timeout. SAT UNSAT timeout / evaluation limit errors average SAT runtime parSATbestBH14,CRS23,ISRES3 10 0 3 0 0.1 bitwuzla 10 2 1 0 37.88 cvc5 4 1 8 0 180.09 MathSAT 4 1 8 0 22.8 Z3 0 0 4 9 - Table 3: SMT solvers and parSAT on the 2019-Guedemann benchmark For this benchmark, parSAT finds as many SAT instances as bitwuzla with a far lower average runtime per satisfiable SMT equation. We see as main reason for parSAT's improved performance in the 2019-Guedemann benchmark that the benchmark was derived from a mathematical problem where the integrated GO algorithms in parSAT might be more efficient in finding a solution. Another aspect could be the reduced number of variables used in the 2019-Guedemann benchmark which vary between 1 and 8, whereas the Griggio benchmark contains some SMT problems with more than 500 variables. 5.5 Combining parSAT with SMT Solvers Building on our previous findings, this section explores the potential advantages of utilizing parSAT in conjunction with an SMT solver as a new approach. Our experimental results suggest that compared to other solvers, parSAT often identifies satisfiable solutions faster. Nevertheless, parSAT is unable to reliably prove an SMT formula unsatisfiable due to its inherent incompleteness. In contrast, when no 130 parSAT: Parallel Solving of Floating-Point Satisfiability timeout occurs, SMT solvers provide SAT or UNSAT statements that are guaranteed to be correct, despite potentially requiring more time than parSAT for processing. In the subsequent theoretical consideration, we examine the potential of combining existing SMT solvers with parSAT for handling SMT files. Under this hypothetical framework, both the SMT solver and parSAT would concurrently initiate the search for a solution to the given SMT problem. If parSAT identifies a satisfiable solution before the SMT solver, the solver is terminated, and a SAT result is returned. Conversely, if the SMT solver discovers a satisfiable assignment first, the same procedure applies to terminate parSAT. However, if parSAT exits with UNKNOWN, the SMT solver continues until it either proves UNSAT, finds a satisfiable solution, or reaches a timeout. This approach appears promising as it combines the rapid execution capabilities of parSAT with the definitive completeness properties of SMT solvers, potentially enhancing overall efficiency and reliability. In Table 4, we present the potential impact of such a combination. We utilized the measurements for bitwuzla as representative state-of-the-art SMT solver, alongside the best observed performance of parSAT using the BH14,CRS23,ISRES3 configuration for the Griggio benchmark. It is assumed that bitwuzla and parSAT operate completely concurrently, without accounting for any additional overhead introduced by parallelization. To calculate the average solving runtime for SAT equations, we considered the runtime of parSAT when it returned a SAT faster than bitwuzla; otherwise, we used the runtime of bitwuzla. Due to parSAT's inability to reason about UNSAT SMT equations, instances where parSAT reached its evaluation limit and bitwuzla returned UNSAT or encountered a timeout were excluded from the average SAT runtime calculation. Neither parSAT nor bitwuzla encountered any errors on the Griggio benchmark. SAT UNSAT timeout / evaluation limit errors average SAT runtime parSATbestBH14,CRS23,ISRES3 102 0 112 0 0.1 bitwuzla 108 76 30 0 32.71 parSATbest + bitwuzla 113 76 25 0 12.75 Table 4: Combination of parSAT with bitwuzla on the Griggio benchmark The table compares the outcomes of executing parSAT and bitwuzla both independently and in a combined approach. It provides data on identified SAT instances, UNSAT formulas, and the average runtime per found SAT SMT file, measured in seconds. The first two rows display the results of running parSAT and bitwuzla separately, while the third row illustrates the potential effect of systematically combining the two methods. Based on the recorded data, and in contrast to their independent application, the combined approach would increase the number of identified SAT instances to 113 and reduce the timeouts to 25. Moreover, the average solving runtime for satisfiable equations would decrease to 12.75 seconds, representing a reduction of slightly more than 60% compared to exclusively using bitwuzla. Our research suggests a promising avenue for advancing this approach by leveraging the evaluated data sets by parSAT after it reached its function evaluation limit. The SMT solver could then utilize these preprocessed values to enhance its internal lemma generation capabilities and exclude already unsatisfiable assignments from its potential solution set, thereby accelerating the overall solving process. M. Krahl et al. 131 6 Conclusion and Outlook Experiments with parSAT demonstrate that its portfolio-approach utilizing multiple different GO algorithms competing concurrently to find solutions, presents a considerable alternative to current stateof-the-art SMT solvers to quickly verify if a given SMT formula might be satisfiable. Furthermore, parSAT successfully handled and processed a variety of real-world SMT equations with either small or large numbers of variables. The modular architecture of parSAT enables straightforward integration of additional GO routines that could further enhance the complementary and competitive solving approach. In particular, we demonstrated parSAT's ability to find SAT solutions to some SMT instances derived from practical problems with mathematical nature faster than current state-of-the-art SMT solvers. Because of the currently integrated GO methods, parSAT is a semi-decision procedure that underapproximates the search space. If it finds a solution satisfying a problem instance, it is guaranteed to be correct but might label an actual satisfiable formula as UNKNOWN. In the current version, due to some implementation details of the integrated GO approaches, parSAT is limited on finding solutions consisting of finite FP values only. Even though parSAT's GO routines could potentially calculate ±∞values for variables in single precision format, they currently cannot systematically find NaN-values in general or ±∞in double precision. We plan to integrate more types of GO algorithms into parSAT, such as deterministic variants or brute-force based approaches. We also see a potential benefit in implementing a specific GO method that evaluates the optimization function by allowing the controlled selection of non-finite FP numbers as input. Additionally, we aim to further investigate how the generation of the optimization function could be improved such that GO algorithms might find the global minimum faster and in a more reliable way. Another aspect to consider is the potential acceleration of integrated GO methods through further parallelization, which may involve utilizing CPU-specific vector instructions, GPU-specific implementations, or deploying GO instances to a distributed cluster with scalable computing capabilities based on cloud infrastructure. The approach itself, formulating a constraint problem as a GO function, is not limited to FP only; we aim to support more theories defined by the SMT-LIB2 standard, such as bitvector or integer theory. In addition, we also plan to extend the use of the approach in two domains. The first is the evaluation how parSAT might be used in a complementary approach with existing SMT solvers. To facilitate this process, we will investigate how equalities can be derived to propagate information to other theory solvers during theory combination. The second promising topic is the application of the techniques used in parSAT to the problem domain of SMT model counting. This involves counting the number of all satisfiable assignments for a given SMT equation. References [1] Haniel Barbosa, Clark Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt, Makai Mann, Abdalrhman Mohamed, Mudathir Mohamed, Aina Niemetz, Andres Nötzli, Alex Ozdemir, Mathias Preiner, Andrew Reynolds, Ying Sheng, Cesare Tinelli & Yoni Zohar (2022): Cvc5: A Versatile and Industrial-Strength SMT Solver. In Dana Fisman & Grigore Rosu, editors: Tools and Algorithms for the Construction and Analysis of Systems, Springer International Publishing, Cham, pp. 415-442, [2] M. Ammar Ben Khadra, Dominik Stoffel & Wolfgang Kunz (2017): goSAT: Floating-point Satisfiability as Global Optimization. In: 2017 Formal Methods in Computer Aided Design (FMCAD), IEEE, Vienna, pp. 11-14, 132 parSAT: Parallel Solving of Floating-Point Satisfiability [3] David Blackman & Sebastiano Vigna (2021): Scrambled Linear Pseudorandom Number Generators. ACM Trans. Math. Softw. 47(4), [4] Martin Brain, Florian Schanda & Youcheng Sun (2019): Building Better Bit-Blasting for Floating-Point Problems. In Tomáš Vojnar & Lijun Zhang, editors: Tools and Algorithms for the Construction and Analysis of Systems, 11427, Springer International Publishing, Cham, pp. 79-98, [5] Martin Brain, Florian Schanda & Youcheng Sun (2019): Building Better Bit-Blasting for Floating-Point Problems. In Tomáš Vojnar & Lijun Zhang, editors: Tools and Algorithms for the Construction and Analysis of Systems, Springer International Publishing, Cham, pp. 79-98, [6] Leonardo de Moura & Nikolaj Bjørner (2008): Z3: An Efficient SMT Solver. In C. R. Ramakrishnan & Jakob Rehof, editors: Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pp. 337-340, [7] Ana M. Ferreiro-Ferreiro, José A. García-Rodríguez, Luis Souto & Carlos Vázquez (2019): Basin Hopping with Synched Multi L-BFGS Local Searches. Parallel Implementation in Multi-CPU and GPUs. Applied Mathematics and Computation 356, pp. 282-298, [8] Zhoulai Fu & Zhendong Su (2016): XSat: A Fast Floating-Point Satisfiability Solver. In Swarat Chaudhuri & Azadeh Farzan, editors: Computer Aided Verification, Springer International Publishing, Cham, pp. 187-209, [9] Leopold Haller, Alberto Griggio, Martin Brain & Daniel Kroening (2012): Deciding Floating-Point Logic with Systematic Abstraction. In: 2012 Formal Methods in Computer-Aided Design (FMCAD), pp. 131-140. [10] Steven G. Johnson & Julien Schueller (2021): NLopt: Nonlinear Optimization Library. Astrophysics Source Code Library, p. ascl:2111.004. [11] P. Kaelo & M. M. Ali (2006): Some Variants of the Controlled Random Search Algorithm for Global Optimization. Journal of Optimization Theory and Applications 130(2), pp. 253-264, 006-9101-0. [12] Aggelos Kiayias, Alexander Russell, Bernardo David & Roman Oliynykov (2017): Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol. In Jonathan Katz & Hovav Shacham, editors: Advances in Cryptology - CRYPTO 2017, Springer International Publishing, Cham, pp. 357-388, 63688-7_12. [13] Gergely Kovásznai, Krisztián Gajdár & Laura Kovács (2019): Portfolio SAT and SMT Solving of Cardinality Constraints in Sensor Network Optimization. In: 2019 21st International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), pp. 85-91, [14] Daniel Kroening & Ofer Strichman (2016): Decision Procedures. Texts in Theoretical Computer Science. An EATCS Series, Springer Berlin Heidelberg, Berlin, Heidelberg, Available at http://link.springer.com/10.1007/978-3-662-50497-0. [15] Matthias Güdemann (2019): Non-integer calculations specification. https://github.com/intersectmbo/ cardano-ledger/releases/latest/download/non-integer-calculations.pdf. Accessed: 202506-04. [16] Aina Niemetz & Mathias Preiner (2023): Bitwuzla. In Constantin Enea & Akash Lal, editors: Computer Aided Verification, Lecture Notes in Computer Science, Springer Nature Switzerland, Cham, pp. 3-17, [17] M. J. D. Powell (1964): An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal 7(2), pp. 155-162, [18] Mathias Preiner, Hans-Jörg Schurr, Clark Barrett, Pascal Fontaine, Aina Niemetz & Cesare Tinelli (2024): SMT-LIB release 2024 (non-incremental benchmarks), [19] W. L. Price (1983): Global Optimization by Controlled Random Search. Journal of Optimization Theory and Applications 40(3), pp. 333-348, M. Krahl et al. 133 [20] T.P. Runarsson & Xin Yao (2005): Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 35(2), pp. 233-243, [21] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, ̇Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt & SciPy 1.0 Contributors (2020): SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17, pp. 261-272, [22] David J. Wales & Jonathan P. K. Doye (1997): Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms. The Journal of Physical Chemistry A 101(28), pp. 5111-5116, [23] Tjark Weber (2023): tjark/Par. Available at https://github.com/tjark/Par. Original-date: 2020-1204T20:12:50Z. [24] Christoph M. Wintersteiger, Youssef Hamadi & Leonardo de Moura (2009): A Concurrent Portfolio Approach to SMT Solving. In Ahmed Bouajjani & Oded Maler, editors: Computer Aided Verification, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 715-720, [25] Heytem Zitoun, Claude Michel, Laurent Michel & Michel Rueher (2020): An Efficient Constraint Based Framework Forhandling Floating Point SMT Problems, .
|
2509.16238
|
Evolvable Graph Diffusion Optimal Transport with Pattern-Specific Alignment
for Brain Connectome Modeling
Xiaoqi Sheng1, Jiawen Liu2, Jiaming Liang2, Yiheng Zhang4, Hongmin Cai3,∗.*
1School of Future Technology, South China University of Technology, Guangzhou, Guangdong, China
2School of Computer Science and Technology, South China University of Technology, Guangzhou, Guangdong, China
3School of Future Technology, South China University of Technology, Guangzhou, Guangdong, China
4School of Computer Science and Technology, Sichuan University, Chengdu, China
Abstract
Network analysis of human brain connectivity indicates that
individual differences in cognitive abilities arise from neuro-
biological mechanisms inherent in structural and functional
brain networks. Existing studies routinely treat structural con-
nectivity (SC) as optimal or fixed topological scaffolds for
functional connectivity (FC), often overlooking higher-order
dependencies between brain regions and limiting the mod-
eling of complex cognitive processes. Besides, the distinct
spatial organizations of SC and FC complicate direct integra-
tion, as na¨ıve alignment may distort intrinsic nonlinear pat-
terns of brain connectivity. In this study, we propose a novel
framework called Evolvable Graph Diffusion Optimal Trans-
port with Pattern-Specific Alignment (EDT-PA), designed to
identify disease-specific connectome patterns and classify
brain disorders. To accurately model high-order structural de-
pendencies, EDT-PA incorporates a spectrum of evolvable
modeling blocks to dynamically capture high-order depen-
dencies across brain regions. Additionally, a Pattern-Specific
Alignment mechanism employs optimal transport to align
structural and functional representations in a geometry-aware
manner. By incorporating a Kolmogorov–Arnold network
for flexible node aggregation, EDT-PA is capable of mod-
eling complex nonlinear interactions among brain regions
for downstream classification. Extensive evaluations on the
REST-meta-MDD and ADNI datasets demonstrate that EDT-
PA outperforms state-of-the-art methods, offering a more ef-
fective framework for revealing structure–function misalign-
ments and disorder-specific subnetworks in brain disorders.
The project of this work is released via this link.
1
Introduction
Accumulating neuroscience evidence identifies structural
damage and functional reorganization, which show marked
inter-individual variability, as major pathological manifesta-
tions of brain disorders (Zhang et al. 2011). Furthermore,
current pathophysiological models have shifted from em-
phasizing localized brain pathology to investigating struc-
tural and functional interactions across distributed neural
networks (Bian et al. 2024). Crucially, modern neuroimag-
ing advances allow the construction of graph-theoretical
*∗All correspondence should be addressed to Hongmin
Cai(Email: hmcai@scut.edu.cn).
Copyright © 2026, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
brain connectomes, enabling comprehensive and efficient
in vivo characterization of structural connectivity (SC)
and functional connectivity (FC) networks (Fornito and
Bullmore 2015). Specifically, diffusion magnetic resonance
imaging (dMRI) noninvasively reconstructs SC by mapping
white matter fiber tracts that constitute anatomical pathways
for neural information transfer, whereas functional magnetic
resonance imaging (fMRI) identifies FC via statistical cor-
relations of neural activity, reflecting dynamic integration
processes across distributed brain regions. Existing studies
(Bian et al. 2024; Dan and Wu 2023) have demonstrated
that SC correlates with FC at the group level, underscor-
ing their complementary value in enhancing classification
accuracy. However, a primary challenge is that SC reflects
only anatomical connections, whereas FC represents inter-
actions that may occur independently of direct anatomical
links, resulting in an incomplete correspondence between
the two (Popp et al. 2024). Additionally, despite advances
in data-driven methods for brain network analysis (Li et al.
2025), effectively encoding higher-order topological fea-
tures to identify clinically relevant biomarkers for neurolog-
ical disorders remains a significant challenge. Therefore, ac-
curately modeling the topological properties of SC and FC
is critical for understanding complex cognitive functions and
brain behaviors.
Recently, driven by the need for automated diagno-
sis (Bahrami et al. 2023), various deep learning approaches
based on brain connectivity data have been developed to
enable precise analysis of disease-specific topological al-
terations and cognitive impairment (Wang et al. 2025; Ma
et al. 2023; Pang et al. 2023). Among these, Graph Convolu-
tional Networks (GCNs) (Parisot et al. 2018; Cui et al. 2023;
Huang et al. 2020) have emerged as particularly powerful for
brain network analysis, owing to their inherent capacity to
model the topological relationships between SC-FC. How-
ever, existing GCNs primarily focus on distinguishing brain
region features, often overlooking critical topological infor-
mation for modeling brain propagation patterns. One strat-
egy is known as the guided method, where one connectivity
modality (e.g., SC) directly guides the estimation of another
(e.g., FC). For example, Pinotsis et al. (Hansen 2013) pio-
neered a guided connectivity approach, leveraging SC to es-
tablish theoretical relationships between graph topological
properties and simulated functional dynamics. By leverag-
arXiv:2509.16238v1 [q-bio.NC] 16 Sep 2025
Figure 1: An illustration of the challenges in integrating SC
and FC, highlighting two key issues: (a) High-order depen-
dencies: Red dashed lines indicate high-order dependencies
arising from indirect pathways or coordinated activity across
multiple brain regions. (b) Imperfect coupling between SC
and FC: Differences in spatial distribution and organization
between SC and FC.
ing multiple generative adversarial networks (GANs), Tan et
al. (Tan et al. 2025) introduced a framework for cross-modal
connectivity synthesis and translation, achieving substantial
improvements in brain disorder classification accuracy. Bian
et al.’s (Bian et al. 2024) topology-aware GCN framework
integrates homology features from SC to constrain FC es-
timation, enhancing sensitivity to pathological microstruc-
tural changes. However, a major limitation of these meth-
ods is their focus solely on direct connectivity, neglecting
indirect connectivity at broader scales, which significantly
diminishes the reliability of diagnostic outcomes. This lim-
itation arises from the fact that information transmission
between brain regions is characterized by both strong lo-
cal low-order connections and efficient high-order connec-
tions (Stam 2014). Several studies have applied Transformer
architectures to the graph domain to address the aforemen-
tioned challenges (Dong et al. 2024; Sheng et al. 2025).
These methods typically employ specialized positional em-
bedding strategies that integrate FC and SC information to
optimize the computation of global features across brain re-
gions. However, given that the correlations between func-
tional and structural pathways are not linear, traditional
graph transformers struggle to accurately reflect the under-
lying biological mechanisms (Yang et al. 2024). Therefore,
joint combinatorial reasoning of functional and structural
connectomes should be incorporated into graph-based mod-
eling.
The above analysis demonstrates that successful SC and
FC analysis methods effectively capture both the intrinsic
connectivity patterns and the disease-specific discrimina-
tory information. However, recent studies still rely on the
assumption that the structural brain network can serve
as the optimal graph topology for the functional net-
work (Tan et al. 2025). This premise introduces two inherent
challenges that could significantly compromise downstream
disease pattern classification performance: (1) Neglect of
high-order dependencies. As shown in Fig. 1a, restricting
information propagation strictly to SC-defined edges fails to
account for indirect functional interactions and distributed
neural coordination, limiting the model’s capacity to capture
higher-order cognitive processes. (2) Imperfect coupling be-
tween SC and FC. As shown in Fig. 1b, SC and FC exhibit
significant statistical discrepancies in spatial distribution and
organization. Directly integrating them may distort the non-
linear spatial structure of brain networks, ultimately com-
promising the generalizability and interpretability of the re-
sulting models.
In light of the above-mentioned issues, in this study, we
introduce an Evolvable Graph Diffusion–Optimal Transport
with Pattern-Specific Alignment (EDT-PA) to classify brain
disorders. The framework is built upon two key components:
(1) EBCM (Evolvable Brain Connectome Modeling), which
employs an innovative iterative graph diffusion optimiza-
tion strategy to disentangle complex pathways within SC,
generating an adaptive SC adjacency matrix with higher-
order dependency encoding; and (2) PSSA (Pattern-Specific
Structure–Function Alignment), which leverages an optimal
transport strategy to develop an edge-aware graph encoder
that bridges high-order SC and FC characteristics, enhanc-
ing model interpretability while maintaining strong general-
ization capacity. Extensive experiments on two benchmark
brain network datasets, REST-meta-MDD and ADNI, val-
idate the superiority of EDT-PA. Compared with state-of-
the-art (SOTA) methods, our framework achieves improve-
ments of 5.4% and 12.3% in crucial accuracy metrics, while
maintaining robust interpretability and generalization across
tasks. In summary, our contributions are as follows:
• Innovation. An evolvable graph diffusion optimal trans-
port method is proposed for brain connectome modeling,
dynamically capturing interregional dependencies to in-
tegrate structural and functional networks and enhance
disease-specific pattern identification.
• Architecture. A comprehensive end-to-end joint analy-
sis framework, EDT-PA, has been designed. This method
integrates evolvable modeling modules to capture high-
order inter-regional dependencies, enabling precise char-
acterization of disease-related network abnormalities.
• Validation.
Extensive
experiments
on
benchmark
datasets demonstrate the method’s superiority over
state-of-the-art approaches in disease classification,
robustly identifying discriminative disease regions.
2
Preliminary
The brain connectome graph, constructed from fMRI, sMRI,
and DTI data, can be formally represented as a graph G =
(V, E, A, X). Here, the V = {vi | i = 1, . . . , N} denotes
the set of nodes corresponding to brain regions of interest
(ROIs), while the E = {ai,j | (vi, vj) ∈V × V} rep-
resent connections between brain regions’ nodes. The ad-
jacency matrix A ∈RN×N encodes the strength of inter-
nodal connections, and the node feature matrix X ∈RN×d
is derived from Pearson correlation coefficients (PCC) com-
puted from blood-oxygen-level-dependent (BOLD) signals
in fMRI data. Each brain graph is associated with a categor-
ical label y, indicating the subject’s clinical or physiological
condition.
Human Brain
MRI imaging preprocessing
…
Graph �1
Graph �2
Fully Evolved Graph �
Graph ��
Graph Diffusion
Concatenation
�∗
Feature Matrix
Early Diagnosis
Evolvable
Modeling
Block
Mask
Attention
Class
Token
Structural Connectivity �
Graph
Transformer
Time
Functional Connectivity �
Node Feature �
B. Pattern Specific Structure–Function Alignment
A. Evolvable Brain Connectome Modeling
OT
Node Feature
Feature Matrix S
Mat Mul
Align Optimal Transmission
C. Neural Graph Aggregator
�
�∗
�
Addition
^
Addition
Figure 2: Architecture of the proposed EDT-PA for brain connectome modeling
3
Methodology
The proposed EDT-PA framework, as illustrated in Fig. 2,
integrates three core modules to achieve effective brain net-
work analysis: (1) an Evolvable Brain Connectome Mod-
eling module that progressively refines structural connec-
tivity through multi-step graph diffusion and class-aware
Transformers, (2) a Pattern Specific Structure-Function
Alignment module that establishes precise neurobiologi-
cal correspondence via attention-guided fusion and opti-
mal transport-based matching, and (3) a Neural Graph Ag-
gregator that models intricate regional interactions through
Kolmogorov-Arnold Networks for robust downstream clas-
sification. By effectively bridging FC and SC representa-
tions, the model enables more comprehensive neural inter-
action analysis for improved brain disorder classification.
3.1
Evolvable Brain Connectome Modeling
Accurate representation of brain structural connectivity is
crucial for downstream clinical tasks (Sheng et al. 2023).
However, raw structural connectomes are typically repre-
sented as sparse, noisy symmetric matrices, which hin-
der existing algorithms from capturing higher-order inter-
actions and functional dependencies across distant brain re-
gions (Yang et al. 2024). To address these limitations, EDT-
PA develops an EBCM pipeline. The proposed methodology
advances graph diffusion processes via multiple hierarchical
receptive fields, while a class-conditional Transformer ar-
chitecture provides adaptive learning of spatiotemporal cor-
relations. The diffusion steps of EBCM are formulated as
follows:
A(t+1) = T
αSA(t)ST + (1 −α)A
, t = 0, 1, . . . , T −1 (1)
where, S
=
D−1
2 AD−1
2 represents the diffusion oper-
ator, and D is a diagonal matrix with elements
Dii =
PN
j=1 Aij (Bai et al. 2017). The hyperparameter α con-
trols the trade-off between the diffused structure and the
original graph topology. To capture the high-order struc-
tural dependencies inherent in brain networks, a Trans-
former model T is integrated at each diffusion step. The
self-attention mechanism of the Transformer explicitly mod-
els high-order connectomic interactions, addressing the lim-
itations imposed by local neighborhood constraints in tra-
ditional graph diffusion. This integration significantly en-
hances the model’s capacity to represent complex, large-
scale brain organization. The process generates a sequence
A = {A(1), A(2), . . . , A(T )}, encoding progressive struc-
tural relationships across brain regions.
Another central problem in neuroscience is extracting
unique connectivity patterns from the structural connectome
that are associated with brain diseases. To this end, a class
token ey is computed and incorporated into the modeling
of the fully evolved graph. Specifically, ey is obtained via
an index-based lookup operation ey = M(E[Y ]), where
M : R1×N2 →RN×N representing the reshape operation,
and E ∈RC×N 2 is a learnable embedding matrix.
In the absence of accessible class labels during inference,
a soft class-query module is introduced to compute a prob-
abilistic class embedding directly from the input features,
enabling implicit task-aware conditioning. Formally, given
the adjacency matrix A ∈RN×N of the brain connectome
graph G, the query-to-class attention is computed as:
β = softmax(M−1(A) ∗ET )
ey = M(β · E) ∈RN×N
(2)
in which M−1 : RN×N →R1×N2 is the reverse operation
of M. The soft class token is then appended to the structural
diffusion sequence to enable task-aware conditioning with-
out requiring explicit class labels during inference. Once ey
is obtained, it is added as a global prompt token to the se-
quence A:
A∗= {ey, A(1), A(2), . . . , A(T )}
(3)
In essence, the A∗contains the progressively diffused
graph structures along with the class token, which encap-
sulates disease-specific connectivity patterns. To derive the
fully evolved graph from these representations, an autore-
gressive model is employed to iteratively refine and expand
upon A∗:
p
ˆA | A∗
=
K
Y
k=1
p
ˆA | ey, A(1), . . . , A(T )
(4)
To capture the conditional dependency p
ˆA | A∗
, a
Transformer module with mask attention is employed to
approximate the underlying distribution. This process pro-
duces the final output ˆA, which serves as the updated struc-
tural connectome, enriched with multi-scale awareness and
task-specific modulation.
Collectively, the procedure implements an anatomical
prior encoding mechanism that simulates neural signal prop-
agation, emphasizing informative pathways through class-
aware guidance.
3.2 Pattern Specific Structure-Function Alignment
Through iterative diffusion optimization, the structural con-
nectivity graph gradually evolves, capturing richer high-
order dependencies. However, these features are still con-
fined to the brain’s SC modality. Therefore, we further em-
ploy the PSSA module, which aims to refine the align-
ment between SC and FC via an optimal transport mecha-
nism, enabling more accurate modality fusion and enhanc-
ing the expressiveness of the brain connectome. Specifically,
the structural connectivity matrix A and the node features
X ∈RN×d are first integrated using a Graph Transformer
with edge features.
In graph Transformer layer, the feature of a given brain
region (node) xi is concatenated with those of its structurally
adjacent regions:
hi = ∥(xi, {xj | j ∈Ni})
(5)
where ∥denotes the concatenation and Ni represents the set
of neighbors of node i. This concatenated representation is
then processed by the Transformer module, followed by in-
tegration with edge-level connectivity features {aj | j ∈
Ni}:
hi = T1(hi), aj = T2(aj)
(6)
hi =
X
j∈Ni
aijhj
(7)
where T1 and T2 refer to two distinct Transformer layers,
each with independent parameters. Once the graph embed-
ding H = {hi | i = 1, . . . , N} is obtained, cosine similarity
is used to compute the similarity between each pair of nodes:
S =
H · HT
∥H∥× ∥H∥∈RN×N, H ∈RN×nd
(8)
where n denotes the number of neighboring nodes associ-
ated with each node.
Recognizing the complexity and inherent misalignment
between SC and FC, EDT-PA introduces a novel optimal
transport (OT) strategy that transcends traditional alignment
approaches. Unlike prior works that rely on standard OT to
align fixed topologies or embed SC into FC via heuristic
rules, our method constructs a transport-aware correspon-
dence that is dynamically informed by both functional sim-
ilarity S and diffusion-derived structural priors ˆA. For con-
venience in the subsequent formulas, ˆA and S are rewritten
as: ˆA = {ai}N
i=1 and S = {si}N
i=1. Next, the discrete empir-
ical distributions u and v, defined on the probability spaces
ˆA, S ∈Ω, are presented as follows:
u =
N
X
i=1
µiδai,
v =
N
X
j=1
νjδsj,
(9)
where, δai and δsj denote Dirac functions centered on ˆA
and S, respectively. The weight vectors µ = {µi}N
i=1 and
v = {vj}N
j=1 belong to the N -dimensional simplex, i.e.,
PN
i=1 µi = 1, PN
j=1 vi = 1.
The alignment is formalized as an entropy-regularized op-
timal transport problem, which is solved using the Sinkhorn-
Knopp algorithm (Cuturi 2013). Specifically, a transport
plan T∗is calculated to minimize the expected transport cost
between ˆA ∈RN×N and S ∈RN×N, subject to marginal
constraints:
T∗= arg
min
T∈RN×N⟨T, C⟩−ϵZ(T), s.t. T1 = µ, T⊤1 = ν
(10)
where C is a cost matrix defined by pairwise dissimilari-
ties between A and S, computed via the cosine distance.
Z(T) = −P
ij TijlogTij denotes the entropy of the trans-
port plan T, and ϵ > 0 is a smoothing parameter controlling
the regularity of the transport. For the problem Eq. (11), we
have an optimization solution when t →∞:
T∗= diag
at
exp(−C/ϵ) diag
bt
(11)
in which t is the iteration steps, at =
µ
exp(−C/ϵ)bt−1 and
bt =
ν
exp(C/ϵ)at, with the initialization on b0 = 1. The sta-
bility of the iterative computations is ensured by employ-
ing the logarithmic scaling variant of Sinkhorn optimiza-
tion (Schmitzer 2019). The biologically meaningful trans-
port plan T∗aligns the intrinsic organization of SC and FC,
refining the node features as follows:
H∗= T ∗H + H
(12)
By embedding this transport mechanism into a modular
Graph Transformer framework with explicit edge awareness,
we achieve fine-grained, pattern-specific alignment between
FC and SC.
3.3
Neural Graph Aggregator
Given the complexity and spatial variability of inter-regional
brain interactions, we extend the Kolmogorov-Arnold Net-
work (KAN) (Liu et al. 2024) into graph neural networks
as a node-level aggregation mechanism in functional brain
graphs.
For each node i, its refined representation h∗
i is updated
by jointly considering its current state and the representa-
tions of its neighbors {h∗
j | j ∈N(i)} through the KAN
aggregation function:
h∗
i = KAN(h∗
i , {h∗
j}j∈Ni) = ΦL−1 ◦· · · ◦Φ0(h∗
i , {h∗
j})
(13)
where each transformation Φl represents a deep, nonlinear
transformation that learns progressively higher-order inter-
actions between the node i and its neighbors Ni
Once the node-level feature representations are updated,
we proceed to compute a graph-level embedding h∗
G by per-
forming a mean readout operation across all node represen-
tations in the graph:
h∗
G =
1
|V |
X
i∈V
h∗
i
(14)
This graph-level embedding, which captures the global
structure of the brain network, is then passed through a
multi-layer perceptron (MLP) for final classification: ˆy =
F(h∗
G). Therein, F(·) is implemented as an MLP with a
softmax output. Given the ground truth label y, the loss func-
tion of our proposed model is formulated as follows:
loss = Lce(y, ˆy) = −Ey[log(ˆy)]
(15)
where E is the expectation and Lce represents the cross-
entropy loss function.
4
Experiment
4.1
Data Description
In this study, a comprehensive evaluation of EDT-PA’s effec-
tiveness in brain disease diagnosis is conducted on two pub-
licly available datasets: REST-meta-MDD and ADNI. De-
scriptions of these datasets are provided below.
REST-meta-MDD: The REST-meta-MDD consortium
provides standardized sMRI/fMRI data for major depressive
disorder (MDD) research, including 781 matched subjects
(395 MDD patients and 386 controls). MRI data underwent
preprocessing, including motion correction, T1-MRI align-
ment, SPM12-based MNI normalization, and 5mm FWHM
Gaussian smoothing (Dadi et al. 2019). Structural connec-
tivity (SC) was derived from Jensen-Shannon Divergence
(JSD) of inter-regional gray matter volume similarity (Wang
et al. 2016; Li et al. 2021b; Sebenius et al. 2023). The brain
was parcellated into 264 regions using the Power264 at-
las (Power et al. 2011), and functional connectivity (FC)
was computed via Pearson correlations of BOLD time se-
ries from these ROIs.
ADNI: The Alzheimer’s Disease Neuroimaging Initiative
(ADNI) provides a multimodal dataset for Alzheimer’s dis-
ease (AD) research, including sMRI, fMRI, PET, diffusion
imaging, CSF, blood biomarkers, genetic profiles, and cog-
nitive assessments from 203 AD patients and 103 cogni-
tively normal controls (CN), matched for age and sex. Imag-
ing data underwent skull-stripping, with T1-weighted and
rs-fMRI co-registered to DTI space using FLIRT (Jenkin-
son et al. 2012). Rs-fMRI preprocessing included spatial
smoothing, slice-timing correction, temporal prewhitening,
drift removal, and bandpass filtering (0.01–0.1 Hz). Diffu-
sion data were corrected for eddy currents and processed
with MedINRIA (Toussaint, Souplet, and Fillard 2007) for
fiber tractography. T1 images were parcellated into 148 cor-
tical regions using the Destrieux Atlas (Destrieux et al.
2010) in FreeSurfer (Fischl 2012), defining SC nodes. SC
matrices were constructed by counting streamlines between
regions, and FC was derived from Pearson correlations of
BOLD time series.
4.2
Comparison Methods and Settings
To validate the effectiveness of our proposed EDT-PA
model, we compare its performance with a range of clas-
sical machine learning classifiers and SOTA graph learning
methods on REST-meta-MDD and ADNI. These compara-
tive methods can be broadly categorized into four groups:
• Baseline models: Random Forest (RF) (Rigatti 2017)
and Support Vector Machine (SVM) (Jakkula 2006).
• General
Graph
Neural
Network
Methods:
GCNs (Parisot et al. 2018), Graph Isomorphism Network
(GIN) (Duan et al. 2023) and GraphGPS (Ramp´aˇsek
et al. 2022).
• Brain Network-Specific Graph Models: BrainGNN (Li
et
al.
2021a),
BrainGB
(Cui
et
al.
2023)
and
BrainIB (Zheng et al. 2025).
• Joint SC-FC Modeling Methods: BrainGRL (Li, Ma-
teos, and Zhang 2022) and ATPGCN (Bian et al. 2024).
In this experiment, the datasets are randomly partitioned
into 70%, 10%, and 20% for training, validation, and test-
ing, respectively. Additionally, weight α and diffusion steps
T are empirically set to 0.3 and 4, respectively. Five evalua-
tion metrics are used to assess the algorithm’s performance,
including accuracy (ACC), recall (REC), precision (PRE),
area under the ROC curve (AUC), and F1-score (F1). To en-
sure experimental fairness, both EDT-PA and the compar-
ative methods were trained and evaluated under the same
setup as previously outlined. The experimental results of all
methods are summarized in Table 1, with the best values for
each evaluation metric highlighted in bold and sub-SOTA
values underlined.
4.3
Classification Performance
The classification results reported in Table 1 show that,
across two benchmark datasets, the proposed EDT-PA model
demonstrates clear advantages in both accuracy and robust-
ness. These benefits are particularly significant given the
substantial differences between the datasets in terms of sam-
ple size, data heterogeneity, and brain network construction
methods.
On the REST-meta-MDD dataset, EDT-PA outperforms
strong baselines like BrainGNN by dynamically evolving
the structural graph and more effectively aligning the func-
tional topology. This results in improvements of 5.4% and
6.0% in ACC and F1, respectively, surpassing the perfor-
mance of the sub-SOTA method. Notably, although compet-
ing approaches such as GraphGPS and BrainGNN achieve
relatively higher recall, their performance is compromised
by substantially lower precision and AUC scores. This limi-
tation arises from their dependence on either global attention
mechanisms or static node representations, which constrains
their generalization capacity and leads to systematic over-
prediction of positive cases.
To further evaluate the generalizability of EDT-PA, ad-
ditional experiments are conducted on the more challeng-
ing ADNI dataset. In this severely imbalanced classification
task (203 ADs vs. 103 CNs), EDT-PA achieved an accu-
racy of 84.2% and an AUC of 82.8%, significantly outper-
forming all baseline methods. Models such as ATPGCN and
Table 1: Performance comparison on REST-meta-MDD and ADNI datasets
Dataset
Method
ACC
PRE
REC
F1
AUC
REST-meta-MDD
RF
0.567±0.021
0.572±0.024
0.567±0.021
0.567±0.020
0.585±0.023
SVM
0.552±0.043
0.553±0.045
0.553±0.043
0.550±0.042
0.523±0.081
GCN
0.558±0.030
0.463±0.147
0.558±0.030
0.555±0.033
0.567±0.013
GIN
0.564±0.030
0.569±0.025
0.564±0.031
0.559±0.040
0.560±0.038
GraphGPS
0.577±0.034
0.568±0.038
0.780±0.126
0.650±0.027
0.597±0.054
BrainGNN
0.544±0.026
0.527±0.031
0.741±0.226
0.598±0.074
0.531±0.071
BrainGB
0.727±0.023
0.762±0.035
0.727±0.020
0.718±0.020
0.838±0.032
BrainIB
0.636±0.012
0.639±0.015
0.655±0.028
0.643±0.015
0.663±0.014
ATPGCN
0.654±0.013
0.660±0.043
0.692±0.128
0.668±0.046
0.690±0.031
BrainGRL
0.682±0.022
0.673±0.050
0.738±0.107
0.696±0.030
0.683±0.038
EDT-PA (ours)
0.781±0.027
0.787±0.027
0.771±0.038
0.778±0.031
0.841±0.045
ADNI
RF
0.613±0.027
0.518±0.144
0.613±0.027
0.499±0.033
0.539±0.046
SVM
0.619±0.060
0.614±0.050
0.619±0.060
0.611±0.051
0.590±0.014
GCN
0.629±0.018
0.596±0.026
0.629±0.177
0.589±0.125
0.635±0.015
GIN
0.642±0.051
0.556±0.092
0.642±0.051
0.575±0.069
0.547±0.080
GraphGPS
0.639±0.037
0.666±0.014
0.933±0.057
0.777±0.028
0.544±0.085
BrainGNN
0.642±0.026
0.644±0.028
0.975±0.015
0.775±0.017
0.557±0.036
BrainGB
0.654±0.061
0.623±0.053
0.620±0.054
0.617±0.055
0.635±0.078
BrainIB
0.663±0.016
0.722±0.015
0.808±0.030
0.759±0.014
0.623±0.020
ATPGCN
0.677±0.025
0.698±0.014
0.909±0.028
0.790±0.017
0.670±0.025
BrainGRL
0.719±0.061
0.747±0.061
0.886±0.046
0.809±0.040
0.741±0.071
EDT-PA (ours)
0.842±0.012
0.843±0.013
0.842±0.012
0.835±0.015
0.828±0.025
BrainGRL, which integrate SC and FC, demonstrate supe-
rior performance over most baselines on the ADNI dataset.
However, their effectiveness is limited by the intrinsic con-
straints of their fusion strategies. Specifically, these mod-
els lack mechanisms explicitly designed to address modal-
ity heterogeneity and resolve alignment mismatches. Con-
sequently, despite achieving high REC, they exhibit signif-
icant overfitting to the positive class, as evidenced by their
comparatively lower ACC. In contrast, EDT-PA employs an
OT-based alignment strategy that selectively aligns connec-
tivity patterns between structural and functional modalities,
rather than enforcing full distribution-level alignment. This
targeted strategy mitigates the risk of overfitting the domi-
nant class and enhances the robustness to modality-specific
noise. As a result, EDT-PA achieves superior performance
in both ACC and REC, demonstrating strong robustness and
generalization across heterogeneous neuroimaging data.
4.4
Ablation Study
Two ablation studies are conducted to evaluate the contribu-
tions of the EBCM and PSSA modules in EDT-PA. As illus-
trated in Fig. 3, five evaluation metrics (ACC, PRE, REC,
F1, and AUC) are compared on the REST-meta-MDD and
ADNI datasets by selectively removing each component. In
the w/o EBCM setting, the topological evolution process is
disabled, and the original brain graph is directly used with-
out diffusion-based enhancement. In the w/o PSSA case, the
structure-function alignment mechanism is omitted.
The complete EDT-PA framework consistently delivers
the best overall performance across both datasets. Exclud-
ing EBCM results in significant reductions in ACC and F1
Figure 3: Ablation study of EDT-PA on two public datasets.
score, especially on the ADNI dataset, underscoring the crit-
ical role of high-order structural modeling. Excluding PSSA
mainly degrades AUC and REC, indicating that structure-
function misalignment weakens the model’s ability to in-
tegrate modality-specific patterns. These results underscore
the complementary roles of EBCM and PSSA: the former
enhances structural abstraction and evolution, while the lat-
ter facilitates modality-aware fusion. Their joint integration
is critical for robust and generalizable performance in mul-
timodal brain connectome analysis.
4.5
Explanation Study
In brain disorder diagnostics, EDT-PA achieves optimal in-
tegration of SC and FC while deriving the fully evolved con-
nectivity matrix ˆA. To evaluate the discriminative power of
ˆA, a statistical significance analysis was performed across
two independent datasets, demonstrating its effectiveness in
Figure 4: The group difference of original structural matrices and fully evolved matrices in two classification tasks. The con-
nections with significant difference (p-value < 0.005) are denoted by yellow points in the matrices. The size of a node in the
brain network is related to its degree, where a higher degree results in a larger node size.
Table 2: The Top-10 significant regions detected by EDT-PA
in REST-meta-MDD and ADNI dataset.
REST-meta-MDD
Index
Label
Index
Label
1
Supp Motor Area.L1
6
Cingulum Post.L4
2
Temporal Inf.L3
7
Occipital Mid.R2
3
Cingulum Post.L1
8
Calcarine.L1
4
Occipital Sup.L1
9
Frontal Inf Tri.R1
5
Cerebellum Crus1.R2
10
Rolandic Oper.R1
ADNI
Index
Label
Index
Label
1
S circular insula sup
6
G Ins lg and S cent ins
2
G temporal middle
7
S front inf
3
G and S cingul-Mid-Ant
8
S parieto occipital
4
S interm prim-Jensen
9
S postcentral
5
G precentral
10
S suborbital
brain disease diagnostic tasks. The original structural brain
network A and ˆA are divided according to the health sta-
tus of the subjects, followed by a significance difference
analysis across the data from the different subgroups. The
experimental results are shown in Fig. 4. Compared to the
original brain structural matrix, ˆA exhibits more discrimina-
tive connections, demonstrating its ability to capture higher-
order dependencies in the brain. This indicates that ˆA can
precisely identify critical connections related to brain disor-
ders.
A more intriguing discovery is that ˆA is predominantly
concentrated in several distinct brain regions. To further ex-
plore how ˆA captures biomarkers associated with brain dis-
eases, the names of the top-10 brain regions with the most
significant differential connections are visualized in Table 2.
The brain regions associated with the disease are highlighted
in red. EDT-PA identified several key brain regions associ-
ated with depression, including areas in the motor, cingulate,
occipital, and frontal regions, as well as the Rolandic oper-
culum. These regions show alterations in connectivity pat-
tern, affecting visual processing, emotional regulation, and
cognitive functions (Zhang et al. 2024; Taylor et al. 2013;
Sun et al. 2022; Lai, Wu, and Hou 2017; Liu et al. 2020;
Trombello et al. 2022). Using the ADNI dataset, EDT-PA
identified several important brain areas, including regions in
the insula, temporal lobe, cingulate cortex, frontal lobe, and
occipital lobe, as well as the precentral gyrus. These regions
are particularly linked to impairments in cognitive functions,
emotional regulation, and motor abilities, which are essen-
tial for understanding the progression of Alzheimer’s dis-
ease (Wan et al. 2014; He et al. 2009; Pievani et al. 2017;
Foundas et al. 1997).
5
Conclusion
In this study, we propose an end-to-end graph learning
framework for analyzing brain connectivity and classifying
brain disorders. The EDT-PA efficiently combines structural
and functional brain networks by employing a novel op-
timal transport method. The framework dynamically cap-
tures high-order dependencies and models complex inter-
actions within brain regions, providing a robust approach
for disease-specific connectome pattern identification. Ex-
tensive evaluations on two real-world datasets demonstrate
that EDT-PA outperforms state-of-the-art methods in both
classification accuracy and robustness, underscoring its po-
tential to identify disease-specific biomarkers and improve
brain disorder diagnostics. This work offers a promising ap-
proach for modeling complex brain networks, significantly
advancing neuroimaging-based disease diagnosis.
References
Bahrami, M.; Laurienti, P. J.; Shappell, H. M.; and Simpson,
S. L. 2023. Brain network analysis: A review on multivariate
analytical methods. Brain connectivity, 13(2): 64–79.
Bai, S.; Bai, X.; Tian, Q.; and Latecki, L. J. 2017. Regular-
ized diffusion process for visual retrieval. In Proceedings of
the AAAI conference on artificial intelligence, volume 31.
Bian, C.; Xia, N.; Xie, A.; Cong, S.; and Dong, Q. 2024. Ad-
versarially Trained Persistent Homology Based Graph Con-
volutional Network for Disease Identification Using Brain
Connectivity.
IEEE Transactions on Medical Imaging,
43(1): 503–516.
Cui, H.; Dai, W.; Zhu, Y.; Kan, X.; Gu, A. A. C.; Lukemire,
J.; Zhan, L.; He, L.; Guo, Y.; and Yang, C. 2023. BrainGB: A
Benchmark for Brain Network Analysis With Graph Neural
Networks. IEEE Transactions on Medical Imaging, 42(2):
493–506.
Cuturi, M. 2013. Sinkhorn distances: lightspeed computa-
tion of optimal transport.
In Proceedings of the 27th In-
ternational Conference on Neural Information Processing
Systems - Volume 2, NIPS’13, 2292–2300. Red Hook, NY,
USA: Curran Associates Inc.
Dadi, K.; Rahim, M.; Abraham, A.; Chyzhyk, D.; Milham,
M.; Thirion, B.; Varoquaux, G.; Initiative, A. D. N.; et al.
2019. Benchmarking functional connectome-based predic-
tive models for resting-state fMRI. NeuroImage, 192: 115–
134.
Dan, T.; and Wu, G. 2023. Uncovering Structural-Functional
Coupling Alterations for Alzheimer’s Diseases. In Medical
Imaging with Deep Learning, short paper track.
Destrieux, C.; Fischl, B.; Dale, A.; and Halgren, E. 2010.
Automatic parcellation of human cortical gyri and sulci us-
ing standard anatomical nomenclature. Neuroimage, 53(1):
1–15.
Dong, Q.; Cai, H.; Li, Z.; Liu, J.; and Hu, B. 2024. A Multi-
view Brain Network Transformer Fusing Individualized In-
formation for Autism Spectrum Disorder Diagnosis. IEEE
Journal of Biomedical and Health Informatics, 28(8): 4854–
4865.
Duan, J.; Li, Y.; Zhang, X.; Dong, S.; Zhao, P.; Liu, J.;
Zheng, J.; Zhu, R.; Kong, Y.; and Wang, F. 2023. Predict-
ing treatment response in adolescents and young adults with
major depressive episodes from fMRI using graph isomor-
phism network. NeuroImage: Clinical, 40: 103534.
Fischl, B. 2012. FreeSurfer. Neuroimage, 62(2): 774–781.
Fornito, A.; and Bullmore, E. T. 2015. Connectomics: a new
paradigm for understanding brain disease. European Neu-
ropsychopharmacology, 25(5): 733–748.
Foundas, A.; Leonard, C. M.; Sm, M.; Of, A.; and Heilman,
K. M. 1997. Atrophy of the hippocampus, parietal cortex,
and insula in Alzheimer’s disease: a volumetric magnetic
resonance imaging study. Neuropsychiatry, neuropsychol-
ogy, and behavioral neurology, 10 2: 81–9.
Hansen, E. 2013. Anatomical connectivity and the resting
state activity of large cortical networks. NeuroImage, 65:
127–138.
He, Y.; yan Zhang, M.; Head, K.; Chang, D.; and Wang, H.
2009. Voxel-based morphometry of amnestic mild cogni-
tive impairment and Alzheimer’s disease.
Alzheimer’s &
Dementia, 5: p12–p13.
Huang, J.; Zhu, Q.; Wang, M.; Zhou, L.; Zhang, Z.; and
Zhang, D. 2020. Coherent Pattern in Multi-Layer Brain Net-
works: Application to Epilepsy Identification. IEEE Journal
of Biomedical and Health Informatics, 24(9): 2609–2620.
Jakkula, V. 2006. Tutorial on support vector machine (svm).
School of EECS, Washington State University, 37(2.5): 3.
Jenkinson, M.; Beckmann, C. F.; Behrens, T. E.; Woolrich,
M. W.; and Smith, S. M. 2012. Fsl. Neuroimage, 62(2):
782–790.
Lai, C.-H.; Wu, Y.-T.; and Hou, Y.-M. 2017.
Functional
network-based statistics in depression: Theory of mind sub-
network and importance of parietal region. Journal of affec-
tive disorders, 217: 132–137.
Li, S.; Zhu, Q.; Tian, C.; Shao, W.; and Zhang, D. 2025.
Interpretable Dynamic Brain Network Analysis with Func-
tional and Structural Priors. IEEE Transactions on Medical
Imaging, 1–1.
Li, X.; Zhou, Y.; Dvornek, N.; Zhang, M.; Gao, S.; Zhuang,
J.; Scheinost, D.; Staib, L. H.; Ventola, P.; and Duncan, J. S.
2021a. Braingnn: Interpretable brain graph neural network
for fmri analysis. Medical Image Analysis, 74: 102233.
Li, Y.; Mateos, G.; and Zhang, Z. 2022. Learning to model
the relationship between brain structural and functional con-
nectomes.
IEEE Transactions on Signal and Information
Processing over Networks, 8: 830–843.
Li, Y.; Wang, N.; Wang, H.; Lv, Y.; Zou, Q.; and Wang, J.
2021b.
Surface-based single-subject morphological brain
networks: effects of morphological index, brain parcellation
and similarity measure, sample size-varying stability and
test-retest reliability. NeuroImage, 235: 118018.
Liu, X.; Hou, Z.; Yin, Y.; Xie, C.; Zhang, H.; Zhang, H.;
Zhang, Z.; and Yuan, Y. 2020. CACNA1C gene rs11832738
polymorphism influences depression severity by modulating
spontaneous activity in the right middle frontal gyrus in pa-
tients with major depressive disorder. Frontiers in Psychia-
try, 11: 73.
Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halver-
son, J.; Soljaˇci´c, M.; Hou, T. Y.; and Tegmark, M.
2024. Kan: Kolmogorov-arnold networks. arXiv preprint
arXiv:2404.19756.
Ma, H.; Wu, F.; Guan, Y.; Xu, L.; Liu, J.; and Tian, L. 2023.
BrainNet with connectivity attention for individualized pre-
dictions based on multi-facet connections extracted from
resting-state fMRI data.
Cognitive Computation, 15(5):
1566–1580.
Pang, Y.; Liang, J.; Huang, T.; Chen, H.; Li, Y.; Li, D.;
Huang, L.; and Wang, Q. 2023. Slim UNETR: Scale hybrid
transformers to efficient 3D medical image segmentation un-
der limited computational resources. IEEE Transactions on
Medical Imaging, 43(3): 994–1005.
Parisot, S.; Ktena, S. I.; Ferrante, E.; Lee, M.; Guerrero,
R.; Glocker, B.; and Rueckert, D. 2018. Disease prediction
using graph convolutional networks: application to autism
spectrum disorder and Alzheimer’s disease. Medical image
analysis, 48: 117–130.
Pievani, M.; Pini, L.; Ferrari, C.; Pizzini, F.; Galazzo, I. B.;
Cobelli, C.; Cotelli, M.; Manenti, R.; and Frisoni, G. 2017.
Coordinate-Based Meta-Analysis of the Default Mode and
Salience Network for Target Identification in Non-Invasive
Brain Stimulation of Alzheimer’s Disease and Behavioral
Variant Frontotemporal Dementia Networks.
Journal of
Alzheimer’s Disease, 57: 825 – 843.
Popp, J. L.; Thiele, J. A.; Faskowitz, J.; Seguin, C.; Sporns,
O.; and Hilger, K. 2024.
Structural-functional brain net-
work coupling predicts human cognitive ability. NeuroIm-
age, 290: 120563.
Power, J. D.; Cohen, A. L.; Nelson, S. M.; Wig, G. S.;
Barnes, K. A.; Church, J. A.; Vogel, A. C.; Laumann, T. O.;
Miezin, F. M.; Schlaggar, B. L.; et al. 2011. Functional net-
work organization of the human brain. Neuron, 72(4): 665–
678.
Ramp´aˇsek, L.; Galkin, M.; Dwivedi, V. P.; Luu, A. T.; Wolf,
G.; and Beaini, D. 2022. Recipe for a general, powerful,
scalable graph transformer. Advances in Neural Information
Processing Systems, 35: 14501–14515.
Rigatti, S. J. 2017. Random forest. Journal of insurance
medicine, 47(1): 31–39.
Schmitzer, B. 2019. Stabilized sparse scaling algorithms for
entropy regularized transport problems. SIAM Journal on
Scientific Computing, 41(3): A1443–A1481.
Sebenius, I.; Seidlitz, J.; Warrier, V.; Bethlehem, R. A.;
Alexander-Bloch, A.; Mallard, T. T.; Garcia, R. R.; Bull-
more, E. T.; and Morgan, S. E. 2023.
Robust estimation
of cortical similarity networks from brain MRI. Nature neu-
roscience, 26(8): 1461–1471.
Sheng, X.; Cai, H.; Nie, Y.; He, S.; Cheung, Y.-M.; and
Chen, J. 2025. Modality-Aware Discriminative Fusion Net-
work for Integrated Analysis of Brain Imaging Genomics.
IEEE Transactions on Neural Networks and Learning Sys-
tems, 36(5): 8577–8591.
Sheng, X.; Chen, J.; Liu, Y.; Hu, B.; and Cai, H. 2023. Deep
Manifold Harmonic Network With Dual Attention for Brain
Disorder Classification. IEEE Journal of Biomedical and
Health Informatics, 27(1): 131–142.
Stam, C. J. 2014. Modern network science of neurological
disorders. Nature Reviews Neuroscience, 15(10): 683–695.
Sun, J.-f.; Chen, L.-m.; He, J.-k.; Wang, Z.; Guo, C.-l.; Ma,
Y.; Luo, Y.; Gao, D.-q.; Hong, Y.; Fang, J.-l.; et al. 2022. A
comparative study of regional homogeneity of resting-state
fMRI between the early-onset and late-onset recurrent de-
pression in adults. Frontiers in psychology, 13: 849847.
Tan, Y.-F.; Tan, P.-S.; Noman, F.; Phan, R. C.-W.; Ombao,
H.; and Ting, C.-M. 2025. Connectome-GTC: A Unified
Framework for Brain Functional and Structural Connec-
tomes Generation, Translation, and Classification. In 2025
IEEE 22nd International Symposium on Biomedical Imag-
ing (ISBI), 1–5. IEEE.
Taylor, W. D.; Zhao, Z.; Ashley-Koch, A.; Payne, M. E.;
Steffens, D. C.; Krishnan, R. R.; Hauser, E.; and MacFall,
J. R. 2013. Fiber tract-specific white matter lesion sever-
ity Findings in late-life depression and by AGTR1 A1166C
genotype. Human brain mapping, 34(2): 295–303.
Toussaint, N.; Souplet, J.-C.; and Fillard, P. 2007. MedIN-
RIA: medical image navigation and research tool by INRIA.
In Proc. of MICCAI’07 Workshop on Interaction in medical
image analysis and visualization.
Trombello, J. M.; Cooper, C. M.; Fatt, C. C.; Grannemann,
B. D.; Carmody, T. J.; Jha, M. K.; Mayes, T. L.; Greer, T. L.;
Yezhuvath, U.; Aslan, S.; et al. 2022. Neural substrates of
emotional conflict with anxiety in major depressive disor-
der: Findings from the Establishing Moderators and biosig-
natures of Antidepressant Response in Clinical Care (EM-
BARC) randomized controlled trial. Journal of psychiatric
research, 149: 243–251.
Wan, J.; Zhang, Z.; Rao, B. D.; Fang, S.; Yan, J.; Saykin,
A. J.; and Shen, L. 2014.
Identifying the neuroanatomi-
cal basis of cognitive impairment in Alzheimer’s disease by
correlation-and nonlinearity-aware sparse Bayesian learn-
ing. IEEE transactions on medical imaging, 33(7): 1475–
1487.
Wang, H.; Jin, X.; Zhang, Y.; and Wang, J. 2016. Single-
subject morphological brain networks: connectivity map-
ping, topological characterization and test–retest reliability.
Brain and behavior, 6(4): e00448.
Wang, S.; Lei, Z.; Tan, Z.; Ding, J.; Zhao, X.; Dong, Y.; Wu,
G.; Chen, T.; Chen, C.; Zhang, A.; et al. 2025. BrainMAP:
Learning Multiple Activation Pathways in Brain Networks.
In Proceedings of the AAAI Conference on Artificial Intelli-
gence, volume 39, 14432–14440.
Yang, Y.; Ye, C.; Guo, X.; Wu, T.; Xiang, Y.; and Ma,
T. 2024.
Mapping Multi-Modal Brain Connectome for
Brain Disorder Diagnosis via Cross-Modal Mutual Learn-
ing. IEEE Transactions on Medical Imaging, 43(1): 108–
121.
Zhang, L.; Zhang, Y.; Guo, W.; Ma, Q.; Zhang, F.; Li, K.;
and Yi, Q. 2024. An effect of chronic negative stress on
hippocampal structures and functional connectivity in pa-
tients with depressive disorder. Neuropsychiatric Disease
and Treatment, 1011–1024.
Zhang, Z.; Liao, W.; Chen, H.; Mantini, D.; Ding, J.-R.; Xu,
Q.; Wang, Z.; Yuan, C.; Chen, G.; Jiao, Q.; et al. 2011. Al-
tered functional–structural coupling of large-scale brain net-
works in idiopathic generalized epilepsy. Brain, 134(10):
2912–2928.
Zheng, K.; Yu, S.; Li, B.; Jenssen, R.; and Chen, B. 2025.
BrainIB: Interpretable Brain Network-Based Psychiatric Di-
agnosis With Graph Information Bottleneck. IEEE Trans-
actions on Neural Networks and Learning Systems, 36(7):
13066–13079.
|
Evolvable Graph Diffusion Optimal Transport with Pattern-Specific Alignment for Brain Connectome Modeling Xiaoqi Sheng1, Jiawen Liu2, Jiaming Liang2, Yiheng Zhang4, Hongmin Cai3,∗.* 1 2 3 4 - biological mechanisms inherent in structural and functional brain networks. Existing studies routinely treat structural connectivity (SC) as optimal or fixed topological scaffolds for functional connectivity (FC), often overlooking higher-order dependencies between brain regions and limiting the modeling of complex cognitive processes. Besides, the distinct spatial organizations of SC and FC complicate direct integration, as na ̈ıve alignment may distort intrinsic nonlinear patterns of brain connectivity. In this study, we propose a novel framework called Evolvable Graph Diffusion Optimal Transport with Pattern-Specific Alignment (EDT-PA), designed to identify disease-specific connectome patterns and classify brain disorders. To accurately model high-order structural dependencies, EDT-PA incorporates a spectrum of evolvable modeling blocks to dynamically capture high-order dependencies across brain regions. Additionally, a Pattern-Specific Alignment mechanism employs optimal transport to align structural and functional representations in a geometry-aware manner. By incorporating a Kolmogorov-Arnold network for flexible node aggregation, EDT-PA is capable of modeling complex nonlinear interactions among brain regions for downstream classification. Extensive evaluations on the REST-meta-MDD and ADNI datasets demonstrate that EDTPA outperforms state-of-the-art methods, offering a more effective framework for revealing structure-function misalignments and disorder-specific subnetworks in brain disorders. The project of this work is released via this link. 1 Introduction Accumulating neuroscience evidence identifies structural damage and functional reorganization, which show marked inter-individual variability, as major pathological manifestations of brain disorders (Zhang et al. 2011). Furthermore, current pathophysiological models have shifted from emphasizing localized brain pathology to investigating structural and functional interactions across distributed neural networks (Bian et al. 2024). Crucially, modern neuroimaging advances allow the construction of graph-theoretical *∗All correspondence should be addressed to Hongmin Cai(Email: ). , Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. brain connectomes, enabling comprehensive and efficient in vivo characterization of structural connectivity (SC) and functional connectivity (FC) networks (Fornito and Bullmore 2015). Specifically, diffusion magnetic resonance imaging (dMRI) noninvasively reconstructs SC by mapping white matter fiber tracts that constitute anatomical pathways for neural information transfer, whereas functional magnetic resonance imaging (fMRI) identifies FC via statistical correlations of neural activity, reflecting dynamic integration processes across distributed brain regions. Existing studies (Bian et al. 2024; Dan and Wu 2023) have demonstrated that SC correlates with FC at the group level, underscoring their complementary value in enhancing classification accuracy. However, a primary challenge is that SC reflects only anatomical connections, whereas FC represents interactions that may occur independently of direct anatomical links, resulting in an incomplete correspondence between the two (Popp et al. 2024). Additionally, despite advances in data-driven methods for brain network analysis (Li et al. 2025), effectively encoding higher-order topological features to identify clinically relevant biomarkers for neurological disorders remains a significant challenge. Therefore, accurately modeling the topological properties of SC and FC is critical for understanding complex cognitive functions and brain behaviors. Recently, driven by the need for automated diagnosis (Bahrami et al. 2023), various deep learning approaches based on brain connectivity data have been developed to enable precise analysis of disease-specific topological alterations and cognitive impairment (Wang et al. 2025; Ma et al. 2023; Pang et al. 2023). Among these, Graph Convolutional Networks (GCNs) (Parisot et al. 2018; Cui et al. 2023; Huang et al. 2020) have emerged as particularly powerful for brain network analysis, owing to their inherent capacity to model the topological relationships between SC-FC. However, existing GCNs primarily focus on distinguishing brain region features, often overlooking critical topological information for modeling brain propagation patterns. One strategy is known as the guided method, where one connectivity modality (e.g., SC) directly guides the estimation of another (e.g., FC). For example, Pinotsis et al. (Hansen 2013) pioneered a guided connectivity approach, leveraging SC to establish theoretical relationships between graph topological properties and simulated functional dynamics. By leverag16 Sep 2025 Figure 1: An illustration of the challenges in integrating SC and FC, highlighting two key issues: (a) High-order dependencies: Red dashed lines indicate high-order dependencies arising from indirect pathways or coordinated activity across multiple brain regions. (b) Imperfect coupling between SC and FC: Differences in spatial distribution and organization between SC and FC. ing multiple generative adversarial networks (GANs), Tan et al. (Tan et al. 2025) introduced a framework for cross-modal connectivity synthesis and translation, achieving substantial improvements in brain disorder classification accuracy. Bian et al.'s (Bian et al. 2024) topology-aware GCN framework integrates homology features from SC to constrain FC estimation, enhancing sensitivity to pathological microstructural changes. However, a major limitation of these methods is their focus solely on direct connectivity, neglecting indirect connectivity at broader scales, which significantly diminishes the reliability of diagnostic outcomes. This limitation arises from the fact that information transmission between brain regions is characterized by both strong local low-order connections and efficient high-order connections (Stam 2014). Several studies have applied Transformer architectures to the graph domain to address the aforementioned challenges (Dong et al. 2024; Sheng et al. 2025). These methods typically employ specialized positional embedding strategies that integrate FC and SC information to optimize the computation of global features across brain regions. However, given that the correlations between functional and structural pathways are not linear, traditional graph transformers struggle to accurately reflect the underlying biological mechanisms (Yang et al. 2024). Therefore, joint combinatorial reasoning of functional and structural connectomes should be incorporated into graph-based modeling. The above analysis demonstrates that successful SC and FC analysis methods effectively capture both the intrinsic connectivity patterns and the disease-specific discriminatory information. However, recent studies still rely on the assumption that the structural brain network can serve as the optimal graph topology for the functional network (Tan et al. 2025). This premise introduces two inherent challenges that could significantly compromise downstream disease pattern classification performance: (1) Neglect of high-order dependencies. As shown in Fig. 1a, restricting information propagation strictly to SC-defined edges fails to account for indirect functional interactions and distributed neural coordination, limiting the model's capacity to capture higher-order cognitive processes. (2) Imperfect coupling between SC and FC. As shown in Fig. 1b, SC and FC exhibit significant statistical discrepancies in spatial distribution and organization. Directly integrating them may distort the nonlinear spatial structure of brain networks, ultimately compromising the generalizability and interpretability of the resulting models. In light of the above-mentioned issues, in this study, we introduce an Evolvable Graph Diffusion-Optimal Transport with Pattern-Specific Alignment (EDT-PA) to classify brain disorders. The framework is built upon two key components: (1) EBCM (Evolvable Brain Connectome Modeling), which employs an innovative iterative graph diffusion optimization strategy to disentangle complex pathways within SC, generating an adaptive SC adjacency matrix with higherorder dependency encoding; and (2) PSSA (Pattern-Specific Structure-Function Alignment), which leverages an optimal transport strategy to develop an edge-aware graph encoder that bridges high-order SC and FC characteristics, enhancing model interpretability while maintaining strong generalization capacity. Extensive experiments on two benchmark brain network datasets, REST-meta-MDD and ADNI, validate the superiority of EDT-PA. Compared with state-ofthe-art (SOTA) methods, our framework achieves improvements of 5.4% and 12.3% in crucial accuracy metrics, while maintaining robust interpretability and generalization across tasks. In summary, our contributions are as follows: • Innovation. An evolvable graph diffusion optimal transport method is proposed for brain connectome modeling, dynamically capturing interregional dependencies to integrate structural and functional networks and enhance disease-specific pattern identification. • Architecture. A comprehensive end-to-end joint analysis framework, EDT-PA, has been designed. This method integrates evolvable modeling modules to capture highorder inter-regional dependencies, enabling precise characterization of disease-related network abnormalities. • Validation. Extensive experiments on benchmark datasets demonstrate the method's superiority over state-of-the-art approaches in disease classification, robustly identifying discriminative disease regions. 2 Preliminary The brain connectome graph, constructed from fMRI, sMRI, and DTI data, can be formally represented as a graph G = (V, E, A, X). Here, the V = {vi | i = 1, . . . , N} denotes the set of nodes corresponding to brain regions of interest (ROIs), while the E = {ai,j | (vi, vj) ∈V × V} represent connections between brain regions' nodes. The adjacency matrix A ∈RN×N encodes the strength of internodal connections, and the node feature matrix X ∈RN×d is derived from Pearson correlation coefficients (PCC) computed from blood-oxygen-level-dependent (BOLD) signals in fMRI data. Each brain graph is associated with a categorical label y, indicating the subject's clinical or physiological condition. Human Brain MRI imaging preprocessing ... Graph �1 Graph �2 Fully Evolved Graph � Graph �� Graph Diffusion Concatenation �∗ Feature Matrix Early Diagnosis Evolvable Modeling Block Mask Attention Class Token Structural Connectivity � Graph Transformer Time Functional Connectivity � Node Feature � B. Pattern Specific Structure-Function Alignment A. Evolvable Brain Connectome Modeling OT Node Feature Feature Matrix S Mat Mul Align Optimal Transmission C. Neural Graph Aggregator � �∗ � Addition ^ Addition Figure 2: Architecture of the proposed EDT-PA for brain connectome modeling 3 Methodology The proposed EDT-PA framework, as illustrated in Fig. 2, integrates three core modules to achieve effective brain network analysis: (1) an Evolvable Brain Connectome Modeling module that progressively refines structural connectivity through multi-step graph diffusion and class-aware Transformers, (2) a Pattern Specific Structure-Function Alignment module that establishes precise neurobiological correspondence via attention-guided fusion and optimal transport-based matching, and (3) a Neural Graph Aggregator that models intricate regional interactions through Kolmogorov-Arnold Networks for robust downstream classification. By effectively bridging FC and SC representations, the model enables more comprehensive neural interaction analysis for improved brain disorder classification. 3.1 Evolvable Brain Connectome Modeling Accurate representation of brain structural connectivity is crucial for downstream clinical tasks (Sheng et al. 2023). However, raw structural connectomes are typically represented as sparse, noisy symmetric matrices, which hinder existing algorithms from capturing higher-order interactions and functional dependencies across distant brain regions (Yang et al. 2024). To address these limitations, EDTPA develops an EBCM pipeline. The proposed methodology advances graph diffusion processes via multiple hierarchical receptive fields, while a class-conditional Transformer architecture provides adaptive learning of spatiotemporal correlations. The diffusion steps of EBCM are formulated as follows: A(t+1) = T αSA(t)ST + (1 -α)A , t = 0, 1, . . . , T -1 (1) where, S = D-1 2 AD-1 2 represents the diffusion operator, and D is a diagonal matrix with elements Dii = PN j=1 Aij (Bai et al. 2017). The hyperparameter α controls the trade-off between the diffused structure and the original graph topology. To capture the high-order structural dependencies inherent in brain networks, a Transformer model T is integrated at each diffusion step. The self-attention mechanism of the Transformer explicitly models high-order connectomic interactions, addressing the limitations imposed by local neighborhood constraints in traditional graph diffusion. This integration significantly enhances the model's capacity to represent complex, largescale brain organization. The process generates a sequence A = {A(1), A(2), . . . , A(T )}, encoding progressive structural relationships across brain regions. Another central problem in neuroscience is extracting unique connectivity patterns from the structural connectome that are associated with brain diseases. To this end, a class token ey is computed and incorporated into the modeling of the fully evolved graph. Specifically, ey is obtained via an index-based lookup operation ey = M(E[Y ]), where M : R1×N2 →RN×N representing the reshape operation, and E ∈RC×N 2 is a learnable embedding matrix. In the absence of accessible class labels during inference, a soft class-query module is introduced to compute a probabilistic class embedding directly from the input features, enabling implicit task-aware conditioning. Formally, given the adjacency matrix A ∈RN×N of the brain connectome graph G, the query-to-class attention is computed as: β = softmax(M-1(A) ∗ET ) ey = M(β · E) ∈RN×N (2) in which M-1 : RN×N →R1×N2 is the reverse operation of M. The soft class token is then appended to the structural diffusion sequence to enable task-aware conditioning without requiring explicit class labels during inference. Once ey is obtained, it is added as a global prompt token to the sequence A: A∗= {ey, A(1), A(2), . . . , A(T )} (3) In essence, the A∗contains the progressively diffused graph structures along with the class token, which encapsulates disease-specific connectivity patterns. To derive the fully evolved graph from these representations, an autoregressive model is employed to iteratively refine and expand upon A∗: p ˆA | A∗ = K Y k=1 p ˆA | ey, A(1), . . . , A(T ) (4) To capture the conditional dependency p ˆA | A∗ , a Transformer module with mask attention is employed to approximate the underlying distribution. This process produces the final output ˆA, which serves as the updated structural connectome, enriched with multi-scale awareness and task-specific modulation. Collectively, the procedure implements an anatomical prior encoding mechanism that simulates neural signal propagation, emphasizing informative pathways through classaware guidance. 3.2 Pattern Specific Structure-Function Alignment Through iterative diffusion optimization, the structural connectivity graph gradually evolves, capturing richer highorder dependencies. However, these features are still confined to the brain's SC modality. Therefore, we further employ the PSSA module, which aims to refine the alignment between SC and FC via an optimal transport mechanism, enabling more accurate modality fusion and enhancing the expressiveness of the brain connectome. Specifically, the structural connectivity matrix A and the node features X ∈RN×d are first integrated using a Graph Transformer with edge features. In graph Transformer layer, the feature of a given brain region (node) xi is concatenated with those of its structurally adjacent regions: hi = ∥(xi, {xj | j ∈Ni}) (5) where ∥denotes the concatenation and Ni represents the set of neighbors of node i. This concatenated representation is then processed by the Transformer module, followed by integration with edge-level connectivity features {aj | j ∈ Ni}: hi = T1(hi), aj = T2(aj) (6) hi = X j∈Ni aijhj (7) where T1 and T2 refer to two distinct Transformer layers, each with independent parameters. Once the graph embedding H = {hi | i = 1, . . . , N} is obtained, cosine similarity is used to compute the similarity between each pair of nodes: S = H · HT ∥H∥× ∥H∥∈RN×N, H ∈RN×nd (8) where n denotes the number of neighboring nodes associated with each node. Recognizing the complexity and inherent misalignment between SC and FC, EDT-PA introduces a novel optimal transport (OT) strategy that transcends traditional alignment approaches. Unlike prior works that rely on standard OT to align fixed topologies or embed SC into FC via heuristic rules, our method constructs a transport-aware correspondence that is dynamically informed by both functional similarity S and diffusion-derived structural priors ˆA. For convenience in the subsequent formulas, ˆA and S are rewritten as: ˆA = {ai}N i=1 and S = {si}N i=1. Next, the discrete empirical distributions u and v, defined on the probability spaces ˆA, S ∈Ω, are presented as follows: u = N X i=1 μiδai, v = N X j=1 νjδsj, (9) where, δai and δsj denote Dirac functions centered on ˆA and S, respectively. The weight vectors μ = {μi}N i=1 and v = {vj}N j=1 belong to the N -dimensional simplex, i.e., PN i=1 μi = 1, PN j=1 vi = 1. The alignment is formalized as an entropy-regularized optimal transport problem, which is solved using the SinkhornKnopp algorithm (Cuturi 2013). Specifically, a transport plan T∗is calculated to minimize the expected transport cost between ˆA ∈RN×N and S ∈RN×N, subject to marginal constraints: T∗= arg min T∈RN×N⟨T, C⟩-εZ(T), s.t. T1 = μ, T⊤1 = ν (10) where C is a cost matrix defined by pairwise dissimilarities between A and S, computed via the cosine distance. Z(T) = -P ij TijlogTij denotes the entropy of the transport plan T, and ε > 0 is a smoothing parameter controlling the regularity of the transport. For the problem Eq. (11), we have an optimization solution when t →∞: T∗= diag at exp(-C/ε) diag bt (11) in which t is the iteration steps, at = μ exp(-C/ε)bt-1 and bt = ν exp(C/ε)at, with the initialization on b0 = 1. The stability of the iterative computations is ensured by employing the logarithmic scaling variant of Sinkhorn optimization (Schmitzer 2019). The biologically meaningful transport plan T∗aligns the intrinsic organization of SC and FC, refining the node features as follows: H∗= T ∗H + H (12) By embedding this transport mechanism into a modular Graph Transformer framework with explicit edge awareness, we achieve fine-grained, pattern-specific alignment between FC and SC. 3.3 Neural Graph Aggregator Given the complexity and spatial variability of inter-regional brain interactions, we extend the Kolmogorov-Arnold Network (KAN) (Liu et al. 2024) into graph neural networks as a node-level aggregation mechanism in functional brain graphs. For each node i, its refined representation h∗ i is updated by jointly considering its current state and the representations of its neighbors {h∗ j | j ∈N(i)} through the KAN aggregation function: h∗ i = KAN(h∗ i , {h∗ j}j∈Ni) = ΦL-1 ◦· · · ◦Φ0(h∗ i , {h∗ j}) (13) where each transformation Φl represents a deep, nonlinear transformation that learns progressively higher-order interactions between the node i and its neighbors Ni Once the node-level feature representations are updated, we proceed to compute a graph-level embedding h∗ G by performing a mean readout operation across all node representations in the graph: h∗ G = 1 |V | X i∈V h∗ i (14) This graph-level embedding, which captures the global structure of the brain network, is then passed through a multi-layer perceptron (MLP) for final classification: ˆy = F(h∗ G). Therein, F(·) is implemented as an MLP with a softmax output. Given the ground truth label y, the loss function of our proposed model is formulated as follows: loss = Lce(y, ˆy) = -Ey[log(ˆy)] (15) where E is the expectation and Lce represents the crossentropy loss function. 4 Experiment 4.1 Data Description In this study, a comprehensive evaluation of EDT-PA's effectiveness in brain disease diagnosis is conducted on two publicly available datasets: REST-meta-MDD and ADNI. Descriptions of these datasets are provided below. REST-meta-MDD: The REST-meta-MDD consortium provides standardized sMRI/fMRI data for major depressive disorder (MDD) research, including 781 matched subjects (395 MDD patients and 386 controls). MRI data underwent preprocessing, including motion correction, T1-MRI alignment, SPM12-based MNI normalization, and 5mm FWHM Gaussian smoothing (Dadi et al. 2019). Structural connectivity (SC) was derived from Jensen-Shannon Divergence (JSD) of inter-regional gray matter volume similarity (Wang et al. 2016; Li et al. 2021b; Sebenius et al. 2023). The brain was parcellated into 264 regions using the Power264 atlas (Power et al. 2011), and functional connectivity (FC) was computed via Pearson correlations of BOLD time series from these ROIs. ADNI: The Alzheimer's Disease Neuroimaging Initiative (ADNI) provides a multimodal dataset for Alzheimer's disease (AD) research, including sMRI, fMRI, PET, diffusion imaging, CSF, blood biomarkers, genetic profiles, and cognitive assessments from 203 AD patients and 103 cognitively normal controls (CN), matched for age and sex. Imaging data underwent skull-stripping, with T1-weighted and rs-fMRI co-registered to DTI space using FLIRT (Jenkinson et al. 2012). Rs-fMRI preprocessing included spatial smoothing, slice-timing correction, temporal prewhitening, drift removal, and bandpass filtering (0.01-0.1 Hz). Diffusion data were corrected for eddy currents and processed with MedINRIA (Toussaint, Souplet, and Fillard 2007) for fiber tractography. T1 images were parcellated into 148 cortical regions using the Destrieux Atlas (Destrieux et al. 2010) in FreeSurfer (Fischl 2012), defining SC nodes. SC matrices were constructed by counting streamlines between regions, and FC was derived from Pearson correlations of BOLD time series. 4.2 Comparison Methods and Settings To validate the effectiveness of our proposed EDT-PA model, we compare its performance with a range of classical machine learning classifiers and SOTA graph learning methods on REST-meta-MDD and ADNI. These comparative methods can be broadly categorized into four groups: • Baseline models: Random Forest (RF) (Rigatti 2017) and Support Vector Machine (SVM) (Jakkula 2006). • General Graph Neural Network Methods: GCNs (Parisot et al. 2018), Graph Isomorphism Network (GIN) (Duan et al. 2023) and GraphGPS (Ramp ́aˇsek et al. 2022). • Brain Network-Specific Graph Models: BrainGNN (Li et al. 2021a), BrainGB (Cui et al. 2023) and BrainIB (Zheng et al. 2025). • Joint SC-FC Modeling Methods: BrainGRL (Li, Mateos, and Zhang 2022) and ATPGCN (Bian et al. 2024). In this experiment, the datasets are randomly partitioned into 70%, 10%, and 20% for training, validation, and testing, respectively. Additionally, weight α and diffusion steps T are empirically set to 0.3 and 4, respectively. Five evaluation metrics are used to assess the algorithm's performance, including accuracy (ACC), recall (REC), precision (PRE), area under the ROC curve (AUC), and F1-score (F1). To ensure experimental fairness, both EDT-PA and the comparative methods were trained and evaluated under the same setup as previously outlined. The experimental results of all methods are summarized in Table 1, with the best values for each evaluation metric highlighted in bold and sub-SOTA values underlined. 4.3 Classification Performance The classification results reported in Table 1 show that, across two benchmark datasets, the proposed EDT-PA model demonstrates clear advantages in both accuracy and robustness. These benefits are particularly significant given the substantial differences between the datasets in terms of sample size, data heterogeneity, and brain network construction methods. On the REST-meta-MDD dataset, EDT-PA outperforms strong baselines like BrainGNN by dynamically evolving the structural graph and more effectively aligning the functional topology. This results in improvements of 5.4% and 6.0% in ACC and F1, respectively, surpassing the performance of the sub-SOTA method. Notably, although competing approaches such as GraphGPS and BrainGNN achieve relatively higher recall, their performance is compromised by substantially lower precision and AUC scores. This limitation arises from their dependence on either global attention mechanisms or static node representations, which constrains their generalization capacity and leads to systematic overprediction of positive cases. To further evaluate the generalizability of EDT-PA, additional experiments are conducted on the more challenging ADNI dataset. In this severely imbalanced classification task (203 ADs vs. 103 CNs), EDT-PA achieved an accuracy of 84.2% and an AUC of 82.8%, significantly outperforming all baseline methods. Models such as ATPGCN and Table 1: Performance comparison on REST-meta-MDD and ADNI datasets Dataset Method ACC PRE REC F1 AUC REST-meta-MDD RF 0.567±0.021 0.572±0.024 0.567±0.021 0.567±0.020 0.585±0.023 SVM 0.552±0.043 0.553±0.045 0.553±0.043 0.550±0.042 0.523±0.081 GCN 0.558±0.030 0.463±0.147 0.558±0.030 0.555±0.033 0.567±0.013 GIN 0.564±0.030 0.569±0.025 0.564±0.031 0.559±0.040 0.560±0.038 GraphGPS 0.577±0.034 0.568±0.038 0.780±0.126 0.650±0.027 0.597±0.054 BrainGNN 0.544±0.026 0.527±0.031 0.741±0.226 0.598±0.074 0.531±0.071 BrainGB 0.727±0.023 0.762±0.035 0.727±0.020 0.718±0.020 0.838±0.032 BrainIB 0.636±0.012 0.639±0.015 0.655±0.028 0.643±0.015 0.663±0.014 ATPGCN 0.654±0.013 0.660±0.043 0.692±0.128 0.668±0.046 0.690±0.031 BrainGRL 0.682±0.022 0.673±0.050 0.738±0.107 0.696±0.030 0.683±0.038 EDT-PA (ours) 0.781±0.027 0.787±0.027 0.771±0.038 0.778±0.031 0.841±0.045 ADNI RF 0.613±0.027 0.518±0.144 0.613±0.027 0.499±0.033 0.539±0.046 SVM 0.619±0.060 0.614±0.050 0.619±0.060 0.611±0.051 0.590±0.014 GCN 0.629±0.018 0.596±0.026 0.629±0.177 0.589±0.125 0.635±0.015 GIN 0.642±0.051 0.556±0.092 0.642±0.051 0.575±0.069 0.547±0.080 GraphGPS 0.639±0.037 0.666±0.014 0.933±0.057 0.777±0.028 0.544±0.085 BrainGNN 0.642±0.026 0.644±0.028 0.975±0.015 0.775±0.017 0.557±0.036 BrainGB 0.654±0.061 0.623±0.053 0.620±0.054 0.617±0.055 0.635±0.078 BrainIB 0.663±0.016 0.722±0.015 0.808±0.030 0.759±0.014 0.623±0.020 ATPGCN 0.677±0.025 0.698±0.014 0.909±0.028 0.790±0.017 0.670±0.025 BrainGRL 0.719±0.061 0.747±0.061 0.886±0.046 0.809±0.040 0.741±0.071 EDT-PA (ours) 0.842±0.012 0.843±0.013 0.842±0.012 0.835±0.015 0.828±0.025 BrainGRL, which integrate SC and FC, demonstrate superior performance over most baselines on the ADNI dataset. However, their effectiveness is limited by the intrinsic constraints of their fusion strategies. Specifically, these models lack mechanisms explicitly designed to address modality heterogeneity and resolve alignment mismatches. Consequently, despite achieving high REC, they exhibit significant overfitting to the positive class, as evidenced by their comparatively lower ACC. In contrast, EDT-PA employs an OT-based alignment strategy that selectively aligns connectivity patterns between structural and functional modalities, rather than enforcing full distribution-level alignment. This targeted strategy mitigates the risk of overfitting the dominant class and enhances the robustness to modality-specific noise. As a result, EDT-PA achieves superior performance in both ACC and REC, demonstrating strong robustness and generalization across heterogeneous neuroimaging data. 4.4 Ablation Study Two ablation studies are conducted to evaluate the contributions of the EBCM and PSSA modules in EDT-PA. As illustrated in Fig. 3, five evaluation metrics (ACC, PRE, REC, F1, and AUC) are compared on the REST-meta-MDD and ADNI datasets by selectively removing each component. In the w/o EBCM setting, the topological evolution process is disabled, and the original brain graph is directly used without diffusion-based enhancement. In the w/o PSSA case, the structure-function alignment mechanism is omitted. The complete EDT-PA framework consistently delivers the best overall performance across both datasets. Excluding EBCM results in significant reductions in ACC and F1 Figure 3: Ablation study of EDT-PA on two public datasets. score, especially on the ADNI dataset, underscoring the critical role of high-order structural modeling. Excluding PSSA mainly degrades AUC and REC, indicating that structurefunction misalignment weakens the model's ability to integrate modality-specific patterns. These results underscore the complementary roles of EBCM and PSSA: the former enhances structural abstraction and evolution, while the latter facilitates modality-aware fusion. Their joint integration is critical for robust and generalizable performance in multimodal brain connectome analysis. 4.5 Explanation Study In brain disorder diagnostics, EDT-PA achieves optimal integration of SC and FC while deriving the fully evolved connectivity matrix ˆA. To evaluate the discriminative power of ˆA, a statistical significance analysis was performed across two independent datasets, demonstrating its effectiveness in Figure 4: The group difference of original structural matrices and fully evolved matrices in two classification tasks. The connections with significant difference (p-value < 0.005) are denoted by yellow points in the matrices. The size of a node in the brain network is related to its degree, where a higher degree results in a larger node size. Table 2: The Top-10 significant regions detected by EDT-PA in REST-meta-MDD and ADNI dataset. REST-meta-MDD Index Label Index Label 1 Supp Motor Area.L1 6 Cingulum Post.L4 2 Temporal Inf.L3 7 Occipital Mid.R2 3 Cingulum Post.L1 8 Calcarine.L1 4 Occipital Sup.L1 9 Frontal Inf Tri.R1 5 Cerebellum Crus1.R2 10 Rolandic Oper.R1 ADNI Index Label Index Label 1 S circular insula sup 6 G Ins lg and S cent ins 2 G temporal middle 7 S front inf 3 G and S cingul-Mid-Ant 8 S parieto occipital 4 S interm prim-Jensen 9 S postcentral 5 G precentral 10 S suborbital brain disease diagnostic tasks. The original structural brain network A and ˆA are divided according to the health status of the subjects, followed by a significance difference analysis across the data from the different subgroups. The experimental results are shown in Fig. 4. Compared to the original brain structural matrix, ˆA exhibits more discriminative connections, demonstrating its ability to capture higherorder dependencies in the brain. This indicates that ˆA can precisely identify critical connections related to brain disorders. A more intriguing discovery is that ˆA is predominantly concentrated in several distinct brain regions. To further explore how ˆA captures biomarkers associated with brain diseases, the names of the top-10 brain regions with the most significant differential connections are visualized in Table 2. The brain regions associated with the disease are highlighted in red. EDT-PA identified several key brain regions associated with depression, including areas in the motor, cingulate, occipital, and frontal regions, as well as the Rolandic operculum. These regions show alterations in connectivity pattern, affecting visual processing, emotional regulation, and cognitive functions (Zhang et al. 2024; Taylor et al. 2013; Sun et al. 2022; Lai, Wu, and Hou 2017; Liu et al. 2020; Trombello et al. 2022). Using the ADNI dataset, EDT-PA identified several important brain areas, including regions in the insula, temporal lobe, cingulate cortex, frontal lobe, and occipital lobe, as well as the precentral gyrus. These regions are particularly linked to impairments in cognitive functions, emotional regulation, and motor abilities, which are essential for understanding the progression of Alzheimer's disease (Wan et al. 2014; He et al. 2009; Pievani et al. 2017; Foundas et al. 1997). 5 Conclusion In this study, we propose an end-to-end graph learning framework for analyzing brain connectivity and classifying brain disorders. The EDT-PA efficiently combines structural and functional brain networks by employing a novel optimal transport method. The framework dynamically captures high-order dependencies and models complex interactions within brain regions, providing a robust approach for disease-specific connectome pattern identification. Extensive evaluations on two real-world datasets demonstrate that EDT-PA outperforms state-of-the-art methods in both classification accuracy and robustness, underscoring its potential to identify disease-specific biomarkers and improve brain disorder diagnostics. This work offers a promising approach for modeling complex brain networks, significantly advancing neuroimaging-based disease diagnosis. References Bahrami, M.; Laurienti, P. J.; Shappell, H. M.; and Simpson, S. L. 2023. Brain network analysis: A review on multivariate analytical methods. Brain connectivity, 13(2): 64-79. Bai, S.; Bai, X.; Tian, Q.; and Latecki, L. J. 2017. Regularized diffusion process for visual retrieval. In Proceedings of the AAAI conference on artificial intelligence, volume 31. Bian, C.; Xia, N.; Xie, A.; Cong, S.; and Dong, Q. 2024. Adversarially Trained Persistent Homology Based Graph Convolutional Network for Disease Identification Using Brain Connectivity. IEEE Transactions on Medical Imaging, 43(1): 503-516. Cui, H.; Dai, W.; Zhu, Y.; Kan, X.; Gu, A. A. C.; Lukemire, J.; Zhan, L.; He, L.; Guo, Y.; and Yang, C. 2023. BrainGB: A Benchmark for Brain Network Analysis With Graph Neural Networks. IEEE Transactions on Medical Imaging, 42(2): 493-506. Cuturi, M. 2013. Sinkhorn distances: lightspeed computation of optimal transport. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, 2292-2300. Red Hook, NY, USA: Curran Associates Inc. Dadi, K.; Rahim, M.; Abraham, A.; Chyzhyk, D.; Milham, M.; Thirion, B.; Varoquaux, G.; Initiative, A. D. N.; et al. 2019. Benchmarking functional connectome-based predictive models for resting-state fMRI. NeuroImage, 192: 115134. Dan, T.; and Wu, G. 2023. Uncovering Structural-Functional Coupling Alterations for Alzheimer's Diseases. In Medical Imaging with Deep Learning, short paper track. Destrieux, C.; Fischl, B.; Dale, A.; and Halgren, E. 2010. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. Neuroimage, 53(1): 1-15. Dong, Q.; Cai, H.; Li, Z.; Liu, J.; and Hu, B. 2024. A Multiview Brain Network Transformer Fusing Individualized Information for Autism Spectrum Disorder Diagnosis. IEEE Journal of Biomedical and Health Informatics, 28(8): 48544865. Duan, J.; Li, Y.; Zhang, X.; Dong, S.; Zhao, P.; Liu, J.; Zheng, J.; Zhu, R.; Kong, Y.; and Wang, F. 2023. Predicting treatment response in adolescents and young adults with major depressive episodes from fMRI using graph isomorphism network. NeuroImage: Clinical, 40: 103534. Fischl, B. 2012. FreeSurfer. Neuroimage, 62(2): 774-781. Fornito, A.; and Bullmore, E. T. 2015. Connectomics: a new paradigm for understanding brain disease. European Neuropsychopharmacology, 25(5): 733-748. Foundas, A.; Leonard, C. M.; Sm, M.; Of, A.; and Heilman, K. M. 1997. Atrophy of the hippocampus, parietal cortex, and insula in Alzheimer's disease: a volumetric magnetic resonance imaging study. Neuropsychiatry, neuropsychology, and behavioral neurology, 10 2: 81-9. Hansen, E. 2013. Anatomical connectivity and the resting state activity of large cortical networks. NeuroImage, 65: 127-138. He, Y.; yan Zhang, M.; Head, K.; Chang, D.; and Wang, H. 2009. Voxel-based morphometry of amnestic mild cognitive impairment and Alzheimer's disease. Alzheimer's & Dementia, 5: p12-p13. Huang, J.; Zhu, Q.; Wang, M.; Zhou, L.; Zhang, Z.; and Zhang, D. 2020. Coherent Pattern in Multi-Layer Brain Networks: Application to Epilepsy Identification. IEEE Journal of Biomedical and Health Informatics, 24(9): 2609-2620. Jakkula, V. 2006. Tutorial on support vector machine (svm). 37(2.5): 3. Jenkinson, M.; Beckmann, C. F.; Behrens, T. E.; Woolrich, M. W.; and Smith, S. M. 2012. Fsl. Neuroimage, 62(2): 782-790. Lai, C.-H.; Wu, Y.-T.; and Hou, Y.-M. 2017. Functional network-based statistics in depression: Theory of mind subnetwork and importance of parietal region. Journal of affective disorders, 217: 132-137. Li, S.; Zhu, Q.; Tian, C.; Shao, W.; and Zhang, D. 2025. Interpretable Dynamic Brain Network Analysis with Functional and Structural Priors. IEEE Transactions on Medical Imaging, 1-1. Li, X.; Zhou, Y.; Dvornek, N.; Zhang, M.; Gao, S.; Zhuang, J.; Scheinost, D.; Staib, L. H.; Ventola, P.; and Duncan, J. S. 2021a. Braingnn: Interpretable brain graph neural network for fmri analysis. Medical Image Analysis, 74: 102233. Li, Y.; Mateos, G.; and Zhang, Z. 2022. Learning to model the relationship between brain structural and functional connectomes. IEEE Transactions on Signal and Information Processing over Networks, 8: 830-843. Li, Y.; Wang, N.; Wang, H.; Lv, Y.; Zou, Q.; and Wang, J. 2021b. Surface-based single-subject morphological brain networks: effects of morphological index, brain parcellation and similarity measure, sample size-varying stability and test-retest reliability. NeuroImage, 235: 118018. Liu, X.; Hou, Z.; Yin, Y.; Xie, C.; Zhang, H.; Zhang, H.; Zhang, Z.; and Yuan, Y. 2020. CACNA1C gene rs11832738 polymorphism influences depression severity by modulating spontaneous activity in the right middle frontal gyrus in patients with major depressive disorder. Frontiers in Psychiatry, 11: 73. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljaˇci ́c, M.; Hou, T. Y.; and Tegmark, M. 2024. Kan: Kolmogorov-arnold networks. arXiv preprint . Ma, H.; Wu, F.; Guan, Y.; Xu, L.; Liu, J.; and Tian, L. 2023. BrainNet with connectivity attention for individualized predictions based on multi-facet connections extracted from resting-state fMRI data. Cognitive Computation, 15(5): 1566-1580. Pang, Y.; Liang, J.; Huang, T.; Chen, H.; Li, Y.; Li, D.; Huang, L.; and Wang, Q. 2023. Slim UNETR: Scale hybrid transformers to efficient 3D medical image segmentation under limited computational resources. IEEE Transactions on Medical Imaging, 43(3): 994-1005. Parisot, S.; Ktena, S. I.; Ferrante, E.; Lee, M.; Guerrero, R.; Glocker, B.; and Rueckert, D. 2018. Disease prediction using graph convolutional networks: application to autism spectrum disorder and Alzheimer's disease. Medical image analysis, 48: 117-130. Pievani, M.; Pini, L.; Ferrari, C.; Pizzini, F.; Galazzo, I. B.; Cobelli, C.; Cotelli, M.; Manenti, R.; and Frisoni, G. 2017. Coordinate-Based Meta-Analysis of the Default Mode and Salience Network for Target Identification in Non-Invasive Brain Stimulation of Alzheimer's Disease and Behavioral Variant Frontotemporal Dementia Networks. Journal of Alzheimer's Disease, 57: 825 - 843. Popp, J. L.; Thiele, J. A.; Faskowitz, J.; Seguin, C.; Sporns, O.; and Hilger, K. 2024. Structural-functional brain network coupling predicts human cognitive ability. NeuroImage, 290: 120563. Power, J. D.; Cohen, A. L.; Nelson, S. M.; Wig, G. S.; Barnes, K. A.; Church, J. A.; Vogel, A. C.; Laumann, T. O.; Miezin, F. M.; Schlaggar, B. L.; et al. 2011. Functional network organization of the human brain. Neuron, 72(4): 665678. Ramp ́aˇsek, L.; Galkin, M.; Dwivedi, V. P.; Luu, A. T.; Wolf, G.; and Beaini, D. 2022. Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35: 14501-14515. Rigatti, S. J. 2017. Random forest. Journal of insurance medicine, 47(1): 31-39. Schmitzer, B. 2019. Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM Journal on Scientific Computing, 41(3): A1443-A1481. Sebenius, I.; Seidlitz, J.; Warrier, V.; Bethlehem, R. A.; Alexander-Bloch, A.; Mallard, T. T.; Garcia, R. R.; Bullmore, E. T.; and Morgan, S. E. 2023. Robust estimation of cortical similarity networks from brain MRI. Nature neuroscience, 26(8): 1461-1471. Sheng, X.; Cai, H.; Nie, Y.; He, S.; Cheung, Y.-M.; and Chen, J. 2025. Modality-Aware Discriminative Fusion Network for Integrated Analysis of Brain Imaging Genomics. IEEE Transactions on Neural Networks and Learning Systems, 36(5): 8577-8591. Sheng, X.; Chen, J.; Liu, Y.; Hu, B.; and Cai, H. 2023. Deep Manifold Harmonic Network With Dual Attention for Brain Disorder Classification. IEEE Journal of Biomedical and Health Informatics, 27(1): 131-142. Stam, C. J. 2014. Modern network science of neurological disorders. Nature Reviews Neuroscience, 15(10): 683-695. Sun, J.-f.; Chen, L.-m.; He, J.-k.; Wang, Z.; Guo, C.-l.; Ma, Y.; Luo, Y.; Gao, D.-q.; Hong, Y.; Fang, J.-l.; et al. 2022. A comparative study of regional homogeneity of resting-state fMRI between the early-onset and late-onset recurrent depression in adults. Frontiers in psychology, 13: 849847. Tan, Y.-F.; Tan, P.-S.; Noman, F.; Phan, R. C.-W.; Ombao, H.; and Ting, C.-M. 2025. Connectome-GTC: A Unified Framework for Brain Functional and Structural Connectomes Generation, Translation, and Classification. In 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI), 1-5. IEEE. Taylor, W. D.; Zhao, Z.; Ashley-Koch, A.; Payne, M. E.; Steffens, D. C.; Krishnan, R. R.; Hauser, E.; and MacFall, J. R. 2013. Fiber tract-specific white matter lesion severity Findings in late-life depression and by AGTR1 A1166C genotype. Human brain mapping, 34(2): 295-303. Toussaint, N.; Souplet, J.-C.; and Fillard, P. 2007. MedINRIA: medical image navigation and research tool by INRIA. In Proc. of MICCAI'07 Workshop on Interaction in medical image analysis and visualization. Trombello, J. M.; Cooper, C. M.; Fatt, C. C.; Grannemann, B. D.; Carmody, T. J.; Jha, M. K.; Mayes, T. L.; Greer, T. L.; Yezhuvath, U.; Aslan, S.; et al. 2022. Neural substrates of emotional conflict with anxiety in major depressive disorder: Findings from the Establishing Moderators and biosignatures of Antidepressant Response in Clinical Care (EMBARC) randomized controlled trial. Journal of psychiatric research, 149: 243-251. Wan, J.; Zhang, Z.; Rao, B. D.; Fang, S.; Yan, J.; Saykin, A. J.; and Shen, L. 2014. Identifying the neuroanatomical basis of cognitive impairment in Alzheimer's disease by correlation-and nonlinearity-aware sparse Bayesian learning. IEEE transactions on medical imaging, 33(7): 14751487. Wang, H.; Jin, X.; Zhang, Y.; and Wang, J. 2016. Singlesubject morphological brain networks: connectivity mapping, topological characterization and test-retest reliability. Brain and behavior, 6(4): e00448. Wang, S.; Lei, Z.; Tan, Z.; Ding, J.; Zhao, X.; Dong, Y.; Wu, G.; Chen, T.; Chen, C.; Zhang, A.; et al. 2025. BrainMAP: Learning Multiple Activation Pathways in Brain Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 14432-14440. Yang, Y.; Ye, C.; Guo, X.; Wu, T.; Xiang, Y.; and Ma, T. 2024. Mapping Multi-Modal Brain Connectome for Brain Disorder Diagnosis via Cross-Modal Mutual Learning. IEEE Transactions on Medical Imaging, 43(1): 108121. Zhang, L.; Zhang, Y.; Guo, W.; Ma, Q.; Zhang, F.; Li, K.; and Yi, Q. 2024. An effect of chronic negative stress on hippocampal structures and functional connectivity in patients with depressive disorder. Neuropsychiatric Disease and Treatment, 1011-1024. Zhang, Z.; Liao, W.; Chen, H.; Mantini, D.; Ding, J.-R.; Xu, Q.; Wang, Z.; Yuan, C.; Chen, G.; Jiao, Q.; et al. 2011. Altered functional-structural coupling of large-scale brain networks in idiopathic generalized epilepsy. Brain, 134(10): 2912-2928. Zheng, K.; Yu, S.; Li, B.; Jenssen, R.; and Chen, B. 2025. BrainIB: Interpretable Brain Network-Based Psychiatric Diagnosis With Graph Information Bottleneck. IEEE Transactions on Neural Networks and Learning Systems, 36(7): 13066-13079.
|
2509.16234
|
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS
TADAHISA NARA
Abstract. For a given function from a set to itself, we can define a directed graph
called the functional graph, where the vertices are the elements of the set, and the edges
are all the pairs of inputs and outputs for the function.
In this article we consider
functional graphs on Z/mZ with respect to polynomial functions.
The main result
describes the behavior of cycles in functional graphs on Z/pnZ while n is increasing,
where p is a prime number.
1. Introduction
For a given function from a set to itself, we can define a directed graph called the
functional graph, where the vertices are the elements of the set, and the edges are all
the pairs of inputs and outputs for the function. In recent years functional graphs on
finite sets are actively studied, in particular concerning polynomial functions or rational
functions. On finite fields many remarkable results are given from multiple perspectives.
For example, [7, 1] consider the number of non-isomorphic functional graphs and give
bounds for that. [4, 9] consider structural aspect of functional graphs, such as cycles or
connected components.
In this article we consider functional graphs on the finite rings Z/mZ with respect to
polynomial functions. As shown in Theorem 3.1 the structure of a functional graph is
determined only from that on the primary components of Z/mZ. However, the rules
that govern the latter is not clear, and that is why we are interested in functional graphs
on Z/pnZ. The main result is a description on behavior of cycles in functional graphs
on Z/pnZ while n is increasing (Theorem 4.3). In that study we can find some sort
of differential information on the function is a key for classifying the behavior. In [14]
a similar problem is considered, focusing on the size of cycles and the expression of
the vertices, but limited to the case for Chebyshev polynomials from a cryptographic
perspective.
As a related study, counting the number of functions realized as polynomial functions,
which corresponds to the number of functional graphs given by polynomials, is a classi-
cal problem ([12, 6]). In [13] the ring of functions realized as polynomial functions was
investigated and the structure was described. In [3] a kind of polynomial representable
functions from {0, 1, ..., m−1} to {0, 1, ..., n−1} was proposed for m ̸= n, and it was stud-
ied in more sophisticated formulation and generalized to residue class rings of Dedekind
domains in [8].
Alternatively, permutation polynomials, that is, polynomials which are bijective as
functions, are widely studied in this area ([11, 5]).
2020 Mathematics Subject Classification.
37P25, 05C38, 11T06.
Key words and phrases.
functional graph, cycle, polynomial map, arithmetic dynamics, finite ring.
1
arXiv:2509.16234v1 [math.CO] 16 Sep 2025
2
T. NARA
2. Preliminaries
Throughout the paper, we denote the ring of integers modulo m by Zm := Z/mZ and
p is any prime number unless otherwise mentioned. Let f : Z →Z be a polynomial
function, and then we can naturally regard f as a function from Zm to Zm for any m. So
by abusing a notation we use the same symbols for such functions for different domains.
We denote a functional graph of a polynomial function f on Zm by
G(f, Zm) := (V, E),
where vertices V = Zm and the directed edges E = {(v, f(v)) : v ∈V }. Also in this
article the notation f i means the i-th iteration of a function f.
Remark 2.1. A functional graph must have outdegree 1 for any vertex, and so it is
represented by a set of edges in the form
{(0, b0), (1, b1)), ..., (m −1, bm−1)},
bi ∈Zm.
On the other hand, not all the graphs in the form are given by polynomial functions unless
m is a prime. Concerning the number of functional graphs actually given by polynomial
functions, see [12, 6, 13], and also [2], which considers conditions whether a given function
can be realized as a polynomial function.
3. Fundamental Results
The following theorem is a fundamental fact for our situation, which says the structure
of a functional graph on Zm is determined only from that on the primary components of
Zm.
Theorem 3.1. Let m, n be coprime integers and f : Z →Z a polynomial function. Then
we have a graph isomorphism
G(f, Zm) ⊗G(f, Zn) ≃G(f, Zmn),
where the left-hand side is a tensor product of graphs.
Proof. We claim that the isomorphism of the Chinese remainder theorem (CRT) induces
the graph isomorphism. Indeed, let ϕ be the isomorphism
ϕ : Zm × Zn →Zmn
(3.1)
induced by (x, y) 7→bnx + amy for x, y ∈Z, where a, b ∈Z are such that am + bn = 1.
Then this naturally induces the map
ϕG : G(f, Zm) ⊗G(f, Zn) →G(f, Zmn).
By definition ((x (mod m), y (mod n)), (x′ (mod m), y′ (mod n))) is a directed edge
of G(f, Zm)⊗G(f, Zn) if and only if (x (mod m), x′ (mod m)) and (y (mod n), y′ (mod n))
are directed edges of G(f, Zm) and G(f, Zn), respectively. So, to show ϕG is an isomor-
phism, we have to prove the equivalence
f(x) = x′
(mod m), f(y) = y′
(mod n)
⇐⇒f(bnx + amy) = bnx′ + amy′
(mod mn).
Note, because of gcd(m, n) = 1, the RHS is equivalent to
f(bnx + amy) = bnx′ + amy′
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS
3
both modulo m and n. Since f is a polynomial, we have
f(bnx + amy) = f((1 −am)x + amy) = f(x)
(mod m),
bnx′ + amy′ = (1 −am)x′ + amy′ = x′
(mod m)
and similarly we have
f(bnx + amy) = f(y)
(mod n), bnx′ + amy′ = y′
(mod n),
which implies the desired equivalence.
□
From now on we frequently use the notation
C = {v0, v1, . . . , vk−1}
for a cycle, which precisely means the directed graph (V, E), where
V = {v0, v1, . . . , vk−1},
E = {(v0, v1), (v1, v2), . . . , (vk−1, v0)}.
Corollary 3.2. If there is a cycle of size k in G(f, Zm) and a cycle of size l in G(f, Zn)
with gcd(m, n) = 1, then there is a cycle of size lcm(k, l) in G(f, Zmn).
Proof. Let Cm = {v0, . . . , vk−1} and Cn = {u0, . . . , ul−1} be cycles in G(f, Zm) and
G(f, Zn), respectively. Then for any i, j there are directed edges
((vi, uj), (vi+1, uj+1))
in G(f, Zm)⊗G(f, Zn), making a cycle, where we take i, j modulo k, l, respectively. Note
that the least positive integer d such that i = i + d (mod k) and j = j + d (mod l) is the
size of the cycle, which is by definition d = lcm(k, l). Then by the isomorphism of the
theorem there exists a cycle of size d in G(f, Zmn).
□
4. Main Results
Next we would determine graphs G(f, Zpn) from the information on G(f, Zp) for p
prime. But we soon find it fails as expected, that is, G(f, Zp) ≃G(g, Zp) does not mean
G(f, Zpn) ≃G(g, Zpn). Therefore we suggest a problem that how to obtain the structural
behavior of functional graphs G(f, Zpn) for n from the information on G(f, Zp) and some
properties of f. In particular, we focus on the cycles in the graphs.
Throughout this article we use the terms lifted graph and lifted cycle as follows. Let
π : Zpn →Zpm for any n > m, be the natural projection. Then we can naturally define,
for a fixed f,
πG : G(f, Zpn) →G(f, Zpm)
by v 7→π(v) for vertices and (u, v) 7→(π(u), π(v)) for edges.
Let S = (V, E) be a
subgraph of G(f, Zpm), and then we call the subgraph of G(f, Zpn) induced by π−1(V ),
the lifted graph of S. A lifted cycle of S means a cycle in the lifted graph.
Example 4.1. Let p = 3, f = x3 + 2. Then C = {¯0, ¯1, ¯2} is a cycle in G(f, Z3). The
lifted cycle in G(f, Z9) of C is the cycle {¯1, ¯3, ¯2}, and the lifted graph in G(f, Z9) of C is
the whole graph G(f, Z9).
Before the main result we introduce a key concept multiplier. As a reference we have
[10]. Usually this concept is used to study functions and the periodic points in the context
of dynamical systems over infinite fields. So our result is an answer to how the concept
is involved in the context of finite rings.
4
T. NARA
Definition 4.2. Let C = {v0, v1, . . . , vk−1} be a cycle of size k in a functional graph
G(f, Zpn) for a polynomial function f. Then the multiplier of C is defined as
λ(C) := (f k)′(v0) =
Y
v∈C
f ′(v),
where f k is the k-th iteration of f and (f k)′, f ′ denote the derivatives of the functions.
The second equality is given by the chain rule and this allows us to choose any vertex of
C instead of v0 in the second term. In most cases we use only the image of λ(C) in Zp,
denoted by ¯λ(C).
Theorem 4.3. Let C be a cycle of size k in G(f, Zpn) for a polynomial function f with
multiplier λ = λ(C). Put ¯λ as the image of λ in Zp. In what follows we always mean the
lifted graph of C as in G(f, Zpn+1).
(1) If ¯λ = 0, then only one cycle is in the lifted graph of C, which has size k.
(2) If ¯λ = 1, then the lifted graph of C consists of only one cycle of size kp, or p
cycles of size k. The latter case occurs if and only if rv = 0 for a vertex v of C,
where rv is defined in Definition 4.5 below.
(3) If ¯λ ̸= 0, 1, then the lifted graph of C consists of a cycle of size k and (p −1)/m
cycles of size mk, where m = m(¯λ) is the multiplicative order of ¯λ, i.e. the least
positive integer s.t. ¯λm = 1.
Remark 4.4. For (2), we can choose any vertex v of C for rv (see Lemma 4.10 (2)).
From now on we use the following notation for integer ranges:
[a, b) := {a, a + 1, . . . , b −1}.
Definition 4.5. For a vertex v of a cycle C of size k, in a functional graph G(f, Zpn),
define rv ∈[0, p) by
rv := f k(av) −av
pn
% p,
where av is the unique representative in [0, pn) of v, and A % B ∈[0, B) denotes the
remainder for division of A by B > 0.
In other words, rv ∈[0, p) is the integer such that
f k(av) = av + rvpn
(mod pn+1).
Remark 4.6. As we will see in Lemma 4.10 (1), if ¯λ(C) = 1, then in the definition of
rv, we can use any representative of v instead of av. But the quantity rv is used in the
proof of Theorem 4.3, where the assumption does not necessarily hold, and so we define
it using av.
Also note that n is dependent on v.
The next lemma is an easy fact used in the proof of the theorem.
Lemma 4.7. If C is a cycle of size k in G(f, Zpn), then the size of any lifted cycle of C
is a multiple of k.
Proof. Assume the size of a lifted cycle of C is l and write l = Ak+B for some A ≥0 with
0 ≤B < k. Then for a vertex v′ in the lifted cycle, v′ = f l(v′), in particular v = f l(v),
where v ∈Zpn is the image of v′. Since f l(v) = f Ak+B(v) = f B(f Ak(v)) = f B(v), it
requires B = 0.
□
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS
5
Proof of Theorem 4.3. Let v ∈Zpn be a vertex of C, and a ∈Z any representative of
v. Then
a = av + bpn
(mod pn+1)
with some av ∈[0, pn) and b ∈[0, p). Put g := f k and then the Taylor expansion is
g(x0 + h) = g(x0) + g′(x0)h + g′′(x0)
2!
h2 + · · · ,
where note the coefficients g(n)(x0)
n!
are integers and the series is actually finite since g is a
polynomial over Z. So we have for n ≥1
g(a) = g(av + bpn)
(mod pn+1)
= g(av) + g′(av)(bpn) + g′′(av)
2!
(bpn)2 + · · ·
= av + (rv + γb)pn,
(4.1)
where rv is defined in Definition 4.5 and γ := g′(av). Then the iteration gives
g2(a) = av + (rv + γ(rv + γb))pn
(mod pn+1)
= av + (rv + rvγ + γ2b)pn,
...
gN(a) = av + (rv
N−1
X
l=0
γl + γNb)pn
(mod pn+1).
(4.2)
If ¯λ = 0, i.e. γ = g′(av) = 0 (mod p), then g(a) = av + rvpn (mod pn+1), which is
independent of b. This means all the vertices in the form av + cpn in the lifted graph are
mapped to a single vertex by f k. In particular, f k(av + rvpn) = av + rvpn (mod pn+1).
Now write C = {v0, v1, . . . , vk−1} and then we claim the only lifted cycle of C is
C′ = {a0 + r0pn, a1 + r1pn, ..., ak−1 + rk−1pn}
(mod pn+1),
where aj ∈[0, pn) is the representatives of vj and rj := rvj. Indeed, as seen above we
have a cycle
{a0 + r0pn, f(a0 + r0pn), ..., f k−1(a0 + r0pn)}
(mod pn+1)
in G(f, Zpn+1) and this is actually identical to C′ since for any j,
f j(a0 + r0pn) = f j+k(a0 + r0pn)
(mod pn+1)
= f k(f j(a0 + r0pn))
= f k(aj + r′pn)
= aj + rjpn,
where r′ is some unknown integer but make no differences in the result. Now we have to
show there is no other cycle than C′. Let v be a vertex in the lifted graph of C but not
in C′. For a representative of v we can take aj + spn with s ̸= rj for some j. For i < k
clearly f i(aj + spn) ̸= aj + spn modulo pn+1 since it holds modulo pn. Then as we have
seen above, independent of s,
g(aj + spn) = f k(aj + spn) = aj + rjpn,
6
T. NARA
which is in C′ and this never goes outside C′ by iteration of f, which implies C′ is the
only lifted cycle of C.
If ¯λ = 1, then by (4.2)
gN(a) = av + (Nrv + b)pn
(mod pn+1).
This means the smallest N > 0 s.t. gN(a) = a (mod pn+1) is
N =
(
1
(rv = 0),
p
(rv ̸= 0),
thus by Lemma 4.7 the size of the cycles in the lifted graph is
(
k
(rv = 0),
kp
(rv ̸= 0).
Finally, assume ¯λ ̸= 0, 1. Now if N equals the multiplicative order of ¯λ, denoted by
m(¯λ), then since
N−1
X
l=0
γl = γN −1
γ −1 = 0
(mod p),
we have gN(a) = av + bpn = a (mod pn+1) by (4.2), which means the period of a
(mod pn+1) for g is at most m(¯λ).
If it is strictly smaller, that is, N < m(¯λ) and
gN(a) = av + bpn (mod pn+1), then
rv
γN −1
γ −1 + γNb = b
(mod p)
⇐⇒rv = b (1 −γ)
(mod p),
which gives
g(a) = av + bpn
(mod pn+1).
So we have proved that the smallest N > 0 s.t. gN(a) = a (mod pn+1) is
N =
(
1
(b = rv(1 −γ)−1
(mod p)),
m(¯λ)
(b : otherwise).
So the iteration of f with av + lpn (mod pn+1) generates a cycle of size m(¯λ) · k for each
l (mod p) except one value, in which case the size is k. Note km distinct vertices are
necessary to make a cycle of size km, thus the number of cycles of size km is (kp −
Tk)/km = (p −1)/m since the number of vertices in the lifted graph of C is kp.
□
Remark 4.8. In most cases after several lifting of a cycle with multiplier 1, eventually
we attain rv ̸= 0, and so by Theorem 4.3 (2) the size of cycles is expanding by a factor
of p. But there is an example such that rv is staying in 0 while successive liftings. Let
f(x) = 3x −x3,
and Ci := { ¯
−2, ¯2} in G(f, Zpi) for i = 1, 2, 3, · · · is a sequence of lifted cycles, which is
derived from “global” cycle {−2, 2} in G(f, Z). Now, for example, if p = 5, then we have
¯λ(Ci) = f ′(−2)f ′(2) = 81 = 1
(mod 5).
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS
7
Then recall rv is independent of choice of representative of v as mentioned above. So for
v = ¯2 in Ci for every i,
rv = f 2(2) −(2)
pi
%p = 0.
As a complement we give a lemma about the behavior of the multiplier, which is used
in Corollary 4.12.
Lemma 4.9. Let C be a cycle in a functional graph G(f, Zpn) and C′ a lifted cycle in
G(f, Zpn+1) of C. Let the size of C be k. If the size of C′ equals k, then ¯λ(C′) = ¯λ(C).
Otherwise, ¯λ(C′) = 1.
Proof. We have the following commutative diagram:
Zpn+1
Zpn
Zp.
πn+1,n
πn+1
πn
Write C = {v0, v1, . . . , vk−1} and g := f k. If the size of C′ equals k, then we can
write C′ = {v′
0, v′
1, . . . , v′
k−1} with some v′
i such that πn+1,n(v′
i) = vi. Since g′ is also a
polynomial function, we have
¯λ(C′) = πn+1(
Y
g′(v′
i)) = πn(
Y
g′(πn+1,n(v′
i))) = πn(
Y
g′(vi)) = ¯λ(C).
Otherwise, the size of C′ equals kp or km by Theorem 4.3, where m is the multiplicative
order of ¯λ(C). Now note in general if the size of C′ equals kM for some integer M, then
C′ must contain M distinct vertices which are mapped to vi by πn+1,n for each i. So by
similar computation as above we have
¯λ(C′) = ¯λ(C)M.
In the first case ¯λ(C) = 1 by Theorem 4.3 and so ¯λ(C′) = 1.
In the second case
¯λ(C′) = ¯λ(C)m = 1.
□
The following lemma states about properties of the quantity rv, which is also used in
Corollary 4.12.
Lemma 4.10. We will use the same notation as Definition 4.5. Let v be a vertex of C
in G(f, Zpn), and assume ¯λ(C) = 1.
(1) We can use any representative of v instead of av for rv in Definition 4.5. In
particular, f k(a) = a + rvpn (mod pn+1) for any representative a of v, where k is
the size of C.
(2) Either rv = 0 for all v of C, or rv ̸= 0 for all v of C.
(3) Assume p > 3, or p = 3 and n > 1. Let v′ be a vertex of a lifted cycle C′ in
G(f, Zpn+1) of C such that πn+1,n(v′) = v, where πn+1,n is the projection Zpn+1 →
Zpn. If rv ̸= 0, then rv′ ̸= 0.
Proof. Put g = f k.
(1) Let a, b be distinct representatives of v, and then with some c ∈[0, p) we have
b = a + cpn
(mod pn+1).
8
T. NARA
Using the Taylor series,
g(b) −b = g(a + cpn) −(a + cpn)
(mod pn+1)
= g(a) + g′(a)cpn −a −cpn
= g(a) −a,
which implies the assertion.
(2) For another vertex w of C there exists an integer l < k s.t. f l(v) = w and let a be
a representative of v. Then by (1) we have
rw = g(f l(a)) −f l(a)
pn
% p
since f l(a) is a representative of w. Now
g(f l(a)) −f l(a) = f l(g(a)) −f l(a)
(mod pn+1)
= f l(a + rvpn) −f l(a)
= f l(a) + (f l)′(a) rvpn −f l(a)
= (f l)′(a) rvpn.
Note (f l)′(a) is not divisible by p, since (f l)′(a) divides (f k)′(a) and (f k)′(a) = 1 (mod p).
So we have
rv = 0 ⇐⇒rw = 0.
(3) Since rv ̸= 0, by Theorem 4.3 (2) the size of C′ is kp, and also ¯λ(C′) = 1 by Lemma
4.9. So by (1) we have
rv′ = f kp(a′) −a′
pn+1
%p,
for any representative a′ of v′. Now by (2), to see whether rv′ = 0 or not, it suffices to
consider replacing a′ with a representative of any vertex of C′. Note the size of C′ is kp,
thus π−1
n+1,n(v) is contained in C′, which implies we can choose any representative a of v
to replace a′.
First, we have
f k(a) = g(a) = a + rvpn + τpn+1
(mod pn+2)
with some τ ∈[0, p), and so
g2(a) = g(a + rvpn + τpn+1)
(mod pn+2)
= g(a) + g′(a)(rvpn + τpn+1) + g′′(a)
2
(rvpn + τpn+1)2
= a + 2rvpn + (2τ + µ)pn+1 + g′′(a)
2
r2
vp2n,
where µ is such that g′(a) = 1 + µp (mod p2), and the iteration gives
gp(a) = a + p · rvpn + {p τ + µ(1 + 2 + · · · + p −1)rv}pn+1
+ g′′(a)
2
{1 + 22 + · · · (p −1)2}r2
vp2n
(mod pn+2)
= a + rvpn+1 + µ(p −1)p
2
rvpn+1 + g′′(a)
2
(p −1)p(2p −1)
6
r2
vp2n.
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS
9
If p > 3, then integers (p−1)p(2p−1)
6
and (p−1)p
2
are divisible by p. If p = 3 and n > 1, then
p2n = 0 (mod pn+2) and the integer (p−1)p
2
is divisible by p. In any case the statement
holds.
□
Remark 4.11. On (3), the assumption p > 3, or p = 3 and n > 1 is the best possible in
the following sense. If p = 3, n = 1, then we have actually an example where rv ̸= 0 but
rv′ = 0. Indeed, let
p = 3,
f(x) = x2 + 1.
Then G(f, Z3) = {(¯0, ¯1), (¯1, ¯2), (¯2, ¯2)} (represented by the directed edges), and there is a
cycle C = {¯2} with ¯λ(C) = 1. For v = ¯2, we have rv = f(2)−2
3
% 3 = 1. Now the cycle
C′ = {¯2, ¯5, ¯8} in G(f, Z32) is a lifted cycle of C. But, for example, for the vertex v′ = ¯2
of C′, we have rv′ = f3(2)−2
9
% 3 = 0.
Further if p = 2, then f(x) = x3 is an example such that for n = 3 there are vertices
v, v′ such that rv ̸= 0 but rv′ = 0. Indeed, let v = ¯3 ∈Z23, and then C = {v} is a cycle
in G(f, Z23) with λ(C) = 1 and rv = 1. But concerning the lifted cycle C′ = {¯3, ¯11} in
G(f, Z24) of C, we have rv′ = 0, where v′ = ¯3, ¯11. With a little more effort, we can see
this phenomenon occurs for any n ≥3.
Using the main theorem with the lemma, we can give a description on the behavior of
cycles on successive liftings.
Corollary 4.12. Let C be a cycle of size k in G(f, Zp).
(1) If ¯λ(C) = 0, then for each n > 1 there exists the only one lifted cycle in G(f, Zpn)
of C, which has size k.
(2) If ¯λ(C) ̸= 0, then any vertex of the lifted graph of C is part of a cycle.
(3) Assume p > 3 and ¯λ(C) = 1. If the size of a lifted cycle C′ in G(f, ZpN) of C is
kp for some N > 1, then any lifted cycle C′′ in G(f, Zpn) of C′ is of size kpn−N+1
for n ≥N.
(4) If ¯λ(C) ̸= 0, 1, then for each n > 1 there exists exactly one lifted cycle of size k
in G(f, Zpn) of C. Each of the other lifted cycles is of size kmpj for some j ≥0,
where m is the multiplicative order of ¯λ(C).
Proof. (1) It is due to recursively using Theorem 4.3 (1) and Lemma 4.9.
(2) By Theorem 4.3 (2)(3) we see the sum of the sizes of all the lifted cycles is kp,
which clearly equals the number of vertices of the lifted graph.
(3) Consider a sequence of cycles C1, C2, C3, · · · such that Ci is a lifted cycle in G(f, Zpi)
of Ci−1, where C1 = C. Let vi be a vertex in Ci and put ri := rvi.
Now by Theorem 4.3 (2) we can see the size of Cn is k as long as ri = 0 for all i < n.
So the size of CN being kp means rN−1 ̸= 0. By Lemma 4.10 (3) once rN−1 ̸= 0, we must
have rn ̸= 0 for n > N −1 and by the theorem the size of Cn is p times that of Cn−1 for
n > N −1, which proves the statement.
(4) Let G1, G2, G3, · · · be the sequence of the lifted graphs such that Gi is the lifted
graph in G(f, Zpi) of Gi−1, where G1 = C. The proof can be done by induction as follows.
Assumption: in Gi there exists exactly one cycle (denoted by ˆCi) of size k with multi-
plier ¯λ := ¯λ( ˆCi) ̸= 0, 1, and each of the other cycles is of size kmpj for some j ≥0 with
multiplier 1.
Then the lifted cycles in Gi+1 of ˆCi consist of one cycle of size k and cycles of size km
by Theorem 4.3 (3). The former has multiplier ¯λ, and the latter have multiplier 1 by
10
T. NARA
Lemma 4.9. Now let Di be any other cycle than ˆCi in Gi, and then the lifted cycles in
Gi+1 of Di clearly have size kmpj for some j with multiplier 1 by Theorem 4.3 (2). We
have confirmed that the assumption holds for Gi+1.
In the initial case the assumption clearly holds, thus the proof completes.
□
Remark 4.13. In the proof of (4), since another cycle than ˆCi has multiplier 1, we can
use the same argument as in the proof of (3) to give more details for j. That is, assuming
p > 2, if a lifted cycle C′ in G(f, ZpN) of C, is of size kmp for some N > 2, then any
lifted cycle C′′ in G(f, Zpn) of C′ is of size kmpn−N+1 for n ≥N. This fact joining with
(3) is a version of Theorem 2 in [14], not limited to the Chebyshev polynomials.
References
[1] Eric Bach and Andrew Bridy. On the number of distinct functional graphs of affine-linear
transformations over finite fields. Linear Algebra Appl., 439(5):1312–1320, 2013.
[2] Bal´azs Bulyovszky and G´abor Horv´ath. Polynomial functions over finite commutative rings.
Theor. Comput. Sci., 703:76–86, 2017.
[3] Zhibo Chen. On polynomial functions from Zn to Zm. Discrete Math., 137(1):137–145, 1995.
[4] Ryan Flynn and Derek Garton. Graph components and dynamics over finite fields. Int. J. Number
Theory, 10(03):779–792, 2014.
[5] Dalma G¨orcs¨os, G´abor Horv´ath, and Anett M´esz´aros. Permutation polynomials over finite rings.
Finite Fields Appl., 49:198–211, 2018.
[6] Gordon Keller and F. R. Olson. Counting polynomial functions (mod pn). Duke Math. J.,
35(4):835–838, 1968.
[7] Sergei V. Konyagin, Florian Luca, Bernard Mans, Luke Mathieson, Min Sha, and Igor E.
Shparlinski. Functional graphs of polynomials over finite fields. J. Comb. Theory, Ser. B,
116:87–122, 2016.
[8] Xiumei Li and Min Sha. Polynomial functions in the residue class rings of Dedekind domains. Int.
J. Number Theory, 15(07):1473–1486, 2019.
[9] Bernard Mans, Min Sha, Igor E. Shparlinski, and Daniel Sutantyo. On Functional Graphs of
Quadratic Polynomials. Exp. Math., 28(3):292–300, 2019.
[10] Joseph H. Silverman. The Arithmetic of Dynamical Systems, volume 241 of Graduate Texts in
Mathematics. Springer, New York, NY, 2007.
[11] Rajesh P Singh. A Method for Generating Permutation Polynomials Modulo pn. Integers, 21, 2021.
[12] David Singmaster. On polynomial functions (mod m). J. Number Theory, 6(5):345–352, 1974.
[13] Ernst Specker, Norbert Hungerb¨uhler, and Micha Wasem. The ring of polyfunctions over Z/nZ.
Commun. Algebra, 51(1):116–134, 2023.
[14] Daisaburo Yoshioka. Properties of Chebyshev Polynomials Modulo pk. IEEE Trans. Circuits Syst.
Express Briefs, 65(3):386–390, 2018.
Faculty of Information Design, Tokyo Information Design Professional University,
2-7-1 Komatsugawa, Edogawa-ku, Tokyo, 132-0034, Japan
Email address: nara@tid.ac.jp
|
LIFTING OF CYCLES IN FUNCTIONAL GRAPHS TADAHISA NARA Abstract. For a given function from a set to itself, we can define a directed graph called the functional graph, where the vertices are the elements of the set, and the edges are all the pairs of inputs and outputs for the function. In this article we consider functional graphs on Z/mZ with respect to polynomial functions. The main result describes the behavior of cycles in functional graphs on Z/pnZ while n is increasing, where p is a prime number. 1. Introduction For a given function from a set to itself, we can define a directed graph called the functional graph, where the vertices are the elements of the set, and the edges are all the pairs of inputs and outputs for the function. In recent years functional graphs on finite sets are actively studied, in particular concerning polynomial functions or rational functions. On finite fields many remarkable results are given from multiple perspectives. For example, [7, 1] consider the number of non-isomorphic functional graphs and give bounds for that. [4, 9] consider structural aspect of functional graphs, such as cycles or connected components. In this article we consider functional graphs on the finite rings Z/mZ with respect to polynomial functions. As shown in Theorem 3.1 the structure of a functional graph is determined only from that on the primary components of Z/mZ. However, the rules that govern the latter is not clear, and that is why we are interested in functional graphs on Z/pnZ. The main result is a description on behavior of cycles in functional graphs on Z/pnZ while n is increasing (Theorem 4.3). In that study we can find some sort of differential information on the function is a key for classifying the behavior. In [14] a similar problem is considered, focusing on the size of cycles and the expression of the vertices, but limited to the case for Chebyshev polynomials from a cryptographic perspective. As a related study, counting the number of functions realized as polynomial functions, which corresponds to the number of functional graphs given by polynomials, is a classical problem ([12, 6]). In [13] the ring of functions realized as polynomial functions was investigated and the structure was described. In [3] a kind of polynomial representable functions from {0, 1, ..., m-1} to {0, 1, ..., n-1} was proposed for m ̸= n, and it was studied in more sophisticated formulation and generalized to residue class rings of Dedekind domains in [8]. Alternatively, permutation polynomials, that is, polynomials which are bijective as functions, are widely studied in this area ([11, 5]). 2020 Mathematics Subject Classification. 37P25, 05C38, 11T06. Key words and phrases. functional graph, cycle, polynomial map, arithmetic dynamics, finite ring. 1 16 Sep 2025 2 T. NARA 2. Preliminaries Throughout the paper, we denote the ring of integers modulo m by Zm := Z/mZ and p is any prime number unless otherwise mentioned. Let f : Z →Z be a polynomial function, and then we can naturally regard f as a function from Zm to Zm for any m. So by abusing a notation we use the same symbols for such functions for different domains. We denote a functional graph of a polynomial function f on Zm by G(f, Zm) := (V, E), where vertices V = Zm and the directed edges E = {(v, f(v)) : v ∈V }. Also in this article the notation f i means the i-th iteration of a function f. Remark 2.1. A functional graph must have outdegree 1 for any vertex, and so it is represented by a set of edges in the form {(0, b0), (1, b1)), ..., (m -1, bm-1)}, bi ∈Zm. On the other hand, not all the graphs in the form are given by polynomial functions unless m is a prime. Concerning the number of functional graphs actually given by polynomial functions, see [12, 6, 13], and also [2], which considers conditions whether a given function can be realized as a polynomial function. 3. Fundamental Results The following theorem is a fundamental fact for our situation, which says the structure of a functional graph on Zm is determined only from that on the primary components of Zm. Theorem 3.1. Let m, n be coprime integers and f : Z →Z a polynomial function. Then we have a graph isomorphism G(f, Zm) ⊗G(f, Zn) ≃G(f, Zmn), where the left-hand side is a tensor product of graphs. Proof. We claim that the isomorphism of the Chinese remainder theorem (CRT) induces the graph isomorphism. Indeed, let φ be the isomorphism φ : Zm × Zn →Zmn (3.1) induced by (x, y) 7→bnx + amy for x, y ∈Z, where a, b ∈Z are such that am + bn = 1. Then this naturally induces the map φG : G(f, Zm) ⊗G(f, Zn) →G(f, Zmn). By definition ((x (mod m), y (mod n)), (x′ (mod m), y′ (mod n))) is a directed edge of G(f, Zm)⊗G(f, Zn) if and only if (x (mod m), x′ (mod m)) and (y (mod n), y′ (mod n)) are directed edges of G(f, Zm) and G(f, Zn), respectively. So, to show φG is an isomorphism, we have to prove the equivalence f(x) = x′ (mod m), f(y) = y′ (mod n) ⇐⇒f(bnx + amy) = bnx′ + amy′ (mod mn). Note, because of gcd(m, n) = 1, the RHS is equivalent to f(bnx + amy) = bnx′ + amy′ LIFTING OF CYCLES IN FUNCTIONAL GRAPHS 3 both modulo m and n. Since f is a polynomial, we have f(bnx + amy) = f((1 -am)x + amy) = f(x) (mod m), bnx′ + amy′ = (1 -am)x′ + amy′ = x′ (mod m) and similarly we have f(bnx + amy) = f(y) (mod n), bnx′ + amy′ = y′ (mod n), which implies the desired equivalence. □ From now on we frequently use the notation C = {v0, v1, . . . , vk-1} for a cycle, which precisely means the directed graph (V, E), where V = {v0, v1, . . . , vk-1}, E = {(v0, v1), (v1, v2), . . . , (vk-1, v0)}. Corollary 3.2. If there is a cycle of size k in G(f, Zm) and a cycle of size l in G(f, Zn) with gcd(m, n) = 1, then there is a cycle of size lcm(k, l) in G(f, Zmn). Proof. Let Cm = {v0, . . . , vk-1} and Cn = {u0, . . . , ul-1} be cycles in G(f, Zm) and G(f, Zn), respectively. Then for any i, j there are directed edges ((vi, uj), (vi+1, uj+1)) in G(f, Zm)⊗G(f, Zn), making a cycle, where we take i, j modulo k, l, respectively. Note that the least positive integer d such that i = i + d (mod k) and j = j + d (mod l) is the size of the cycle, which is by definition d = lcm(k, l). Then by the isomorphism of the theorem there exists a cycle of size d in G(f, Zmn). □ 4. Main Results Next we would determine graphs G(f, Zpn) from the information on G(f, Zp) for p prime. But we soon find it fails as expected, that is, G(f, Zp) ≃G(g, Zp) does not mean G(f, Zpn) ≃G(g, Zpn). Therefore we suggest a problem that how to obtain the structural behavior of functional graphs G(f, Zpn) for n from the information on G(f, Zp) and some properties of f. In particular, we focus on the cycles in the graphs. Throughout this article we use the terms lifted graph and lifted cycle as follows. Let π : Zpn →Zpm for any n > m, be the natural projection. Then we can naturally define, for a fixed f, πG : G(f, Zpn) →G(f, Zpm) by v 7→π(v) for vertices and (u, v) 7→(π(u), π(v)) for edges. Let S = (V, E) be a subgraph of G(f, Zpm), and then we call the subgraph of G(f, Zpn) induced by π-1(V ), the lifted graph of S. A lifted cycle of S means a cycle in the lifted graph. Example 4.1. Let p = 3, f = x3 + 2. Then C = { ̄0, ̄1, ̄2} is a cycle in G(f, Z3). The lifted cycle in G(f, Z9) of C is the cycle { ̄1, ̄3, ̄2}, and the lifted graph in G(f, Z9) of C is the whole graph G(f, Z9). Before the main result we introduce a key concept multiplier. As a reference we have [10]. Usually this concept is used to study functions and the periodic points in the context of dynamical systems over infinite fields. So our result is an answer to how the concept is involved in the context of finite rings. 4 T. NARA Definition 4.2. Let C = {v0, v1, . . . , vk-1} be a cycle of size k in a functional graph G(f, Zpn) for a polynomial function f. Then the multiplier of C is defined as λ(C) := (f k)′(v0) = Y v∈C f ′(v), where f k is the k-th iteration of f and (f k)′, f ′ denote the derivatives of the functions. The second equality is given by the chain rule and this allows us to choose any vertex of C instead of v0 in the second term. In most cases we use only the image of λ(C) in Zp, denoted by ̄λ(C). Theorem 4.3. Let C be a cycle of size k in G(f, Zpn) for a polynomial function f with multiplier λ = λ(C). Put ̄λ as the image of λ in Zp. In what follows we always mean the lifted graph of C as in G(f, Zpn+1). (1) If ̄λ = 0, then only one cycle is in the lifted graph of C, which has size k. (2) If ̄λ = 1, then the lifted graph of C consists of only one cycle of size kp, or p cycles of size k. The latter case occurs if and only if rv = 0 for a vertex v of C, where rv is defined in Definition 4.5 below. (3) If ̄λ ̸= 0, 1, then the lifted graph of C consists of a cycle of size k and (p -1)/m cycles of size mk, where m = m( ̄λ) is the multiplicative order of ̄λ, i.e. the least positive integer s.t. ̄λm = 1. Remark 4.4. For (2), we can choose any vertex v of C for rv (see Lemma 4.10 (2)). From now on we use the following notation for integer ranges: [a, b) := {a, a + 1, . . . , b -1}. Definition 4.5. For a vertex v of a cycle C of size k, in a functional graph G(f, Zpn), define rv ∈[0, p) by rv := f k(av) -av pn % p, where av is the unique representative in [0, pn) of v, and A % B ∈[0, B) denotes the remainder for division of A by B > 0. In other words, rv ∈[0, p) is the integer such that f k(av) = av + rvpn (mod pn+1). Remark 4.6. As we will see in Lemma 4.10 (1), if ̄λ(C) = 1, then in the definition of rv, we can use any representative of v instead of av. But the quantity rv is used in the proof of Theorem 4.3, where the assumption does not necessarily hold, and so we define it using av. Also note that n is dependent on v. The next lemma is an easy fact used in the proof of the theorem. Lemma 4.7. If C is a cycle of size k in G(f, Zpn), then the size of any lifted cycle of C is a multiple of k. Proof. Assume the size of a lifted cycle of C is l and write l = Ak+B for some A ≥0 with 0 ≤B 0 s.t. gN(a) = a (mod pn+1) is N = ( 1 (rv = 0), p (rv ̸= 0), thus by Lemma 4.7 the size of the cycles in the lifted graph is ( k (rv = 0), kp (rv ̸= 0). Finally, assume ̄λ ̸= 0, 1. Now if N equals the multiplicative order of ̄λ, denoted by m( ̄λ), then since N-1 X l=0 γl = γN -1 γ -1 = 0 (mod p), we have gN(a) = av + bpn = a (mod pn+1) by (4.2), which means the period of a (mod pn+1) for g is at most m( ̄λ). If it is strictly smaller, that is, N 0 s.t. gN(a) = a (mod pn+1) is N = ( 1 (b = rv(1 -γ)-1 (mod p)), m( ̄λ) (b : otherwise). So the iteration of f with av + lpn (mod pn+1) generates a cycle of size m( ̄λ) · k for each l (mod p) except one value, in which case the size is k. Note km distinct vertices are necessary to make a cycle of size km, thus the number of cycles of size km is (kp - Tk)/km = (p -1)/m since the number of vertices in the lifted graph of C is kp. □ Remark 4.8. In most cases after several lifting of a cycle with multiplier 1, eventually we attain rv ̸= 0, and so by Theorem 4.3 (2) the size of cycles is expanding by a factor of p. But there is an example such that rv is staying in 0 while successive liftings. Let f(x) = 3x -x3, and Ci := { ̄ -2, ̄2} in G(f, Zpi) for i = 1, 2, 3, · · · is a sequence of lifted cycles, which is derived from "global" cycle {-2, 2} in G(f, Z). Now, for example, if p = 5, then we have ̄λ(Ci) = f ′(-2)f ′(2) = 81 = 1 (mod 5). LIFTING OF CYCLES IN FUNCTIONAL GRAPHS 7 Then recall rv is independent of choice of representative of v as mentioned above. So for v = ̄2 in Ci for every i, rv = f 2(2) -(2) pi %p = 0. As a complement we give a lemma about the behavior of the multiplier, which is used in Corollary 4.12. Lemma 4.9. Let C be a cycle in a functional graph G(f, Zpn) and C′ a lifted cycle in G(f, Zpn+1) of C. Let the size of C be k. If the size of C′ equals k, then ̄λ(C′) = ̄λ(C). Otherwise, ̄λ(C′) = 1. Proof. We have the following commutative diagram: Zpn+1 Zpn Zp. πn+1,n πn+1 πn Write C = {v0, v1, . . . , vk-1} and g := f k. If the size of C′ equals k, then we can write C′ = {v′ 0, v′ 1, . . . , v′ k-1} with some v′ i such that πn+1,n(v′ i) = vi. Since g′ is also a polynomial function, we have ̄λ(C′) = πn+1( Y g′(v′ i)) = πn( Y g′(πn+1,n(v′ i))) = πn( Y g′(vi)) = ̄λ(C). Otherwise, the size of C′ equals kp or km by Theorem 4.3, where m is the multiplicative order of ̄λ(C). Now note in general if the size of C′ equals kM for some integer M, then C′ must contain M distinct vertices which are mapped to vi by πn+1,n for each i. So by similar computation as above we have ̄λ(C′) = ̄λ(C)M. In the first case ̄λ(C) = 1 by Theorem 4.3 and so ̄λ(C′) = 1. In the second case ̄λ(C′) = ̄λ(C)m = 1. □ The following lemma states about properties of the quantity rv, which is also used in Corollary 4.12. Lemma 4.10. We will use the same notation as Definition 4.5. Let v be a vertex of C in G(f, Zpn), and assume ̄λ(C) = 1. (1) We can use any representative of v instead of av for rv in Definition 4.5. In particular, f k(a) = a + rvpn (mod pn+1) for any representative a of v, where k is the size of C. (2) Either rv = 0 for all v of C, or rv ̸= 0 for all v of C. (3) Assume p > 3, or p = 3 and n > 1. Let v′ be a vertex of a lifted cycle C′ in G(f, Zpn+1) of C such that πn+1,n(v′) = v, where πn+1,n is the projection Zpn+1 → Zpn. If rv ̸= 0, then rv′ ̸= 0. Proof. Put g = f k. (1) Let a, b be distinct representatives of v, and then with some c ∈[0, p) we have b = a + cpn (mod pn+1). 8 T. NARA Using the Taylor series, g(b) -b = g(a + cpn) -(a + cpn) (mod pn+1) = g(a) + g′(a)cpn -a -cpn = g(a) -a, which implies the assertion. (2) For another vertex w of C there exists an integer l 3, then integers (p-1)p(2p-1) 6 and (p-1)p 2 are divisible by p. If p = 3 and n > 1, then p2n = 0 (mod pn+2) and the integer (p-1)p 2 is divisible by p. In any case the statement holds. □ Remark 4.11. On (3), the assumption p > 3, or p = 3 and n > 1 is the best possible in the following sense. If p = 3, n = 1, then we have actually an example where rv ̸= 0 but rv′ = 0. Indeed, let p = 3, f(x) = x2 + 1. Then G(f, Z3) = {( ̄0, ̄1), ( ̄1, ̄2), ( ̄2, ̄2)} (represented by the directed edges), and there is a cycle C = { ̄2} with ̄λ(C) = 1. For v = ̄2, we have rv = f(2)-2 3 % 3 = 1. Now the cycle C′ = { ̄2, ̄5, ̄8} in G(f, Z32) is a lifted cycle of C. But, for example, for the vertex v′ = ̄2 of C′, we have rv′ = f3(2)-2 9 % 3 = 0. Further if p = 2, then f(x) = x3 is an example such that for n = 3 there are vertices v, v′ such that rv ̸= 0 but rv′ = 0. Indeed, let v = ̄3 ∈Z23, and then C = {v} is a cycle in G(f, Z23) with λ(C) = 1 and rv = 1. But concerning the lifted cycle C′ = { ̄3, ̄11} in G(f, Z24) of C, we have rv′ = 0, where v′ = ̄3, ̄11. With a little more effort, we can see this phenomenon occurs for any n ≥3. Using the main theorem with the lemma, we can give a description on the behavior of cycles on successive liftings. Corollary 4.12. Let C be a cycle of size k in G(f, Zp). (1) If ̄λ(C) = 0, then for each n > 1 there exists the only one lifted cycle in G(f, Zpn) of C, which has size k. (2) If ̄λ(C) ̸= 0, then any vertex of the lifted graph of C is part of a cycle. (3) Assume p > 3 and ̄λ(C) = 1. If the size of a lifted cycle C′ in G(f, ZpN) of C is kp for some N > 1, then any lifted cycle C′′ in G(f, Zpn) of C′ is of size kpn-N+1 for n ≥N. (4) If ̄λ(C) ̸= 0, 1, then for each n > 1 there exists exactly one lifted cycle of size k in G(f, Zpn) of C. Each of the other lifted cycles is of size kmpj for some j ≥0, where m is the multiplicative order of ̄λ(C). Proof. (1) It is due to recursively using Theorem 4.3 (1) and Lemma 4.9. (2) By Theorem 4.3 (2)(3) we see the sum of the sizes of all the lifted cycles is kp, which clearly equals the number of vertices of the lifted graph. (3) Consider a sequence of cycles C1, C2, C3, · · · such that Ci is a lifted cycle in G(f, Zpi) of Ci-1, where C1 = C. Let vi be a vertex in Ci and put ri := rvi. Now by Theorem 4.3 (2) we can see the size of Cn is k as long as ri = 0 for all i N -1 and by the theorem the size of Cn is p times that of Cn-1 for n > N -1, which proves the statement. (4) Let G1, G2, G3, · · · be the sequence of the lifted graphs such that Gi is the lifted graph in G(f, Zpi) of Gi-1, where G1 = C. The proof can be done by induction as follows. Assumption: in Gi there exists exactly one cycle (denoted by ˆCi) of size k with multiplier ̄λ := ̄λ( ˆCi) ̸= 0, 1, and each of the other cycles is of size kmpj for some j ≥0 with multiplier 1. Then the lifted cycles in Gi+1 of ˆCi consist of one cycle of size k and cycles of size km by Theorem 4.3 (3). The former has multiplier ̄λ, and the latter have multiplier 1 by 10 T. NARA Lemma 4.9. Now let Di be any other cycle than ˆCi in Gi, and then the lifted cycles in Gi+1 of Di clearly have size kmpj for some j with multiplier 1 by Theorem 4.3 (2). We have confirmed that the assumption holds for Gi+1. In the initial case the assumption clearly holds, thus the proof completes. □ Remark 4.13. In the proof of (4), since another cycle than ˆCi has multiplier 1, we can use the same argument as in the proof of (3) to give more details for j. That is, assuming p > 2, if a lifted cycle C′ in G(f, ZpN) of C, is of size kmp for some N > 2, then any lifted cycle C′′ in G(f, Zpn) of C′ is of size kmpn-N+1 for n ≥N. This fact joining with (3) is a version of Theorem 2 in [14], not limited to the Chebyshev polynomials. References [1] Eric Bach and Andrew Bridy. On the number of distinct functional graphs of affine-linear transformations over finite fields. Linear Algebra Appl., 439(5):1312-1320, 2013. [2] Bal ́azs Bulyovszky and G ́abor Horv ́ath. Polynomial functions over finite commutative rings. Theor. Comput. Sci., 703:76-86, 2017. [3] Zhibo Chen. On polynomial functions from Zn to Zm. Discrete Math., 137(1):137-145, 1995. [4] Ryan Flynn and Derek Garton. Graph components and dynamics over finite fields. Int. J. Number Theory, 10(03):779-792, 2014. [5] Dalma G ̈orcs ̈os, G ́abor Horv ́ath, and Anett M ́esz ́aros. Permutation polynomials over finite rings. Finite Fields Appl., 49:198-211, 2018. [6] Gordon Keller and F. R. Olson. Counting polynomial functions (mod pn). Duke Math. J., 35(4):835-838, 1968. [7] Sergei V. Konyagin, Florian Luca, Bernard Mans, Luke Mathieson, Min Sha, and Igor E. Shparlinski. Functional graphs of polynomials over finite fields. J. Comb. Theory, Ser. B, 116:87-122, 2016. [8] Xiumei Li and Min Sha. Polynomial functions in the residue class rings of Dedekind domains. Int. J. Number Theory, 15(07):1473-1486, 2019. [9] Bernard Mans, Min Sha, Igor E. Shparlinski, and Daniel Sutantyo. On Functional Graphs of Quadratic Polynomials. Exp. Math., 28(3):292-300, 2019. [10] Joseph H. Silverman. The Arithmetic of Dynamical Systems, volume 241 of Graduate Texts in Mathematics. Springer, New York, NY, 2007. [11] Rajesh P Singh. A Method for Generating Permutation Polynomials Modulo pn. Integers, 21, 2021. [12] David Singmaster. On polynomial functions (mod m). J. Number Theory, 6(5):345-352, 1974. [13] Ernst Specker, Norbert Hungerb ̈uhler, and Micha Wasem. The ring of polyfunctions over Z/nZ. Commun. Algebra, 51(1):116-134, 2023. [14] Daisaburo Yoshioka. Properties of Chebyshev Polynomials Modulo pk. IEEE Trans. Circuits Syst. Express Briefs, 65(3):386-390, 2018. Faculty of Information Design, Tokyo Information Design Professional University, 2-7-1 Komatsugawa, Edogawa-ku, Tokyo, 132-0034, Japan Email address:
|
2509.16232
|
Emotions are Recognized Patterns of
Cognitive Activities
Yue Jin
August 2025
Emotions play a crucial role in human life. The research community has proposed many
theories on emotions without reaching much consensus. The situation is similar for
emotions in cognitive architectures and autonomous agents. I propose in this paper that
emotions are recognized patterns of cognitive activities. These activities are responses
of an agent to the deviations between the targets of its goals and the performances of its
actions. Emotions still arise even if these activities are purely logical. I map the patterns of
cognitive activities to emotions. I show the link between emotions and attention and the
impacts of the parameterized functions in the cognitive architecture on the computing
of emotions. My proposition bridges different theories on emotions and advances the
building of consensus.
Yue Jin. Nokia Bell Labs France. Email: yue.1.jin@nokia-bell-labs.com.
arXiv:2509.16232v1 [q-bio.NC] 15 Sep 2025
1.
Introduction
Emotions play a crucial role in human life. They seem to compel us to pursue our goals,
enhance or hinder our learning and decision-making, strengthen or weaken our commu-
nications and connections. Many theories have been proposed on how emotions arise, in
the feeling, motivational and evaluative traditions (Barrett, Lewis, and Haviland-Jones
(2016)). Barrett (2017) argues that “An emotion is your brain’s creation of what your bodily
sensations mean, in relation to what is going on around you.” Lövheim (2012) proposes
that different combinations of 3 monoamine neurotransmitters generate 8 emotions. The
research community is yet to reach consensus.
There is no difference for emotions in Cognitive Architectures. It’s debated whether
cognitive architectures should have an emotion module so that autonomous agents can
have emotions. Laird, Lebiere, and Rosenbloom (2017) don’t include emotions in the
standard model of the mind. Many computational models have been proposed, including
Appraisal theories, Core affect theory, Somatic markers, and Primary-process affect (Larue
et al. (2018)). According to Larue et al. (2018), Soar-Emote uses emotional appraisals to
frame information before it is processed by the cognitive system; Sigma models low-level
appraisals as architectural self-reflection; ACT-R/Φ combines a model of a physiolog-
ical substrate and the primary-process affect theory to augment the ACT-R cognitive
architecture.
I propose in this paper that emotions are recognized patterns of cognitive activities.
For humans or autonomous agents to have emotions, they don’t need a dedicated emotion
module; they need meta-cognition to recognize the patterns of cognitive activities. These
cognitive activities are linked to goals, actions and attention. I first use Soar cognitive
architecture to demonstrate in Section 2 how surprise, nervousness and relief arise even
when the cognitive architecture only performs logical reasoning. I then make the case
for general cognitive architectures on an extended list of emotions in Section 3 and show
how my proposition bridges different theories of emotions. I present the link between
emotions and attention and the impacts of the parameterized functions in the cognitive
architecture on the emotion quantification in Section 4. I conclude in Section 5 with
discussions on the agent’s adaptation of goals and actions based on its emotions and on
emotions in social situations.
2.
Surprise, Nervousness and Relief in Soar
Sore is a long standing Cognitive Architecture (Laird (2022)). It has gone through multiple
versions with gradually added modules and capabilities since its conception in the early
80s. One of its capabilities is responding to incomplete knowledge. Whenever the agent
doesn’t have sufficient knowledge to select or apply an action, Soar labels this as an
1
Impasse arise?
Create substate and
use all processing to
find a new acon
Use Chunking to
create a new rule
and store it
Yes
Nervousness
Relief
Surprise
FIGURE 1. Emotions in Soar when Impasse Arises
impasse. To resolve the impasse, it creates a substate with memories and processing
powers and searches for a new action in the substate with all the processing available in
Soar including planning, retrievals from semantic and episodic memory, and interaction
with the external environment. Once a new action is found, Soar invokes a learning
mechanism called “Chunking” to compile the processing in the substate into a new rule
that it stores for future use.
We can clearly see that Soar goes through several stages when it encounters and re-
solves an impasse and each stage has its distinctive cognitive patterns. When it encounters
an impasse, this deviates from its goal. This deviation can be recognized as “Surprise”.
It then creates a substate and searches for a new action. It uses additional memory and
processing power. This can be recognized as “Nervousness”. Even though it’s not yet a
feature in Soar, the agent can invoke more parallel processing when it has less time to find
a new action. The intensity of “Nervousness” increases with the urgency. When a new
action is found, Soar invokes chunking to learn a new rule and stores it for future use. This
can be recognized as “Relief”. Figure 1 shows the cognitive patterns and corresponding
emotions. For Soar to have emotions, it doesn’t need a dedicated emotion module; it needs
a meta-cognition module that can recognize these cognitive patterns.
2
Improve
profitability of
a data center
Increase
revenue
Reduce costs
Acquire new
customers
Support new
workload types
Develop more
efficient
soware
Acquire more
energy efficient
hardware
…...
…...
…...
Goal Level 1
Goal Level 2
Goal Level 3
FIGURE 2. Recursive decomposition of a goal
3.
Emotions in General Cognitive Architectures
We can generalize the case of Soar to general cognitive architectures. In a general cognitive
architecture, the agent can have general goals. A goal can be decomposed into multiple
components. Each component is a goal at a lower level and can also be decomposed into
multiple components. Figure 2 shows an example of this recursive decomposition. Each
goal has its current value and a target value at a future time point. I use “target” to refer to
a target value. It can also have interim targets at intermediate time points. The recursive
decomposition of the goal continues until it reaches the level of executable actions.
When the agent executes an action, it changes the value of a goal. The performance of
an action is the new value of the goal. The agent receives information from its external
and internal perception. It interprets the information and assesses whether the action
performance equals the (interim) target of the goal. If there is a deviation between the
performance and the target, this is recognized as “Surprise”. If the agent has a world
model and carries out anticipatory thinking (Jones and Laird (2023)), the deviation can
also be between the forecast performance and the target. The agent then assesses whether
the deviation is better or worse based on whether the performance is better or worse than
the target. Figure 3 shows the loop of detecting a deviation. The agent can have multiple
3
Receive info from
external / internal
percepon
Assess how
performance align
with target
Perf deviates
from target?
Yes
No
Deviaon beer
or worse?
Connue current
acon
Surprise
FIGURE 3. Detect a deviation
parallel loops for different goals. The loops can have different time scales for goals at
different levels.
The agent processes the deviation depending on whether it’s better or worse. If the
deviation is worse, the agent mobilizes cognitive resources to find a new action that
restores the target. This is recognized as “Nervousness”. If the deviation is large or the
agent needs to find a new action in a short period of time, the agent needs to make a lot
of parallel processing. This increases the intensity of “Nervousness”. If a new action is
found, the agent compiles the processing and updates the action for the goal for future
use. This is recognized as “Relief”. If no new action is found to restore the target but one
is found to reach a lower value, the agent also updates the action for future use. At the
same time, it revises the target of the goal down to this lower value and stores the new
target. This is recognized as “Sadness”. If no new action is found to even reach a lower
value, the agent removes the goal and updates the related goals at higher levels. This is
recognized as “Grief”. This processing is shown in Figure 4.
If the deviation is better, the agent mobilizes cognitive resources to find a new action to
reach a higher value of the goal. This is recognized as “Excitement”. The agent usually has
less urgency to find a solution comparing to the case of a worse deviation. The intensity is
mostly determined by the size of the deviation. If a new action is found, the agent compiles
the processing and updates the action for the goal for future use. At the same time, it
4
Deviaon beer
or worse?
Learn to find new
acon to restore
target
Able to restore
target?
Update acon
Yes
No
Able to obtain
lower value?
Remove goal and
update higher level
goals
Update acon and
target
Yes
No
worse
Nervousness
Relief
Sadness
Grief
FIGURE 4. Emotions when the deviation is worse
revises the target of the goal up and stores the new target. This is recognized as “Joy”. If
no new action is found, the agent keeps the existing action and target. This is recognized
as “Disappointment”. This processing is shown in Figure 5.
In Table 1, I list the emotions and the cognitive activities they correspond to. For the
emotions not listed in the table, they are recognized patterns of cognitive activities too.
For instance, “Anger” is recognized when the agent deduces that the worse deviation is
caused by another agent and decides it needs to use aggression to change the behavior
of the other agent. A comprehensive list of emotions and their corresponding cognitive
activities is left for future work.
My proposition bridges different theories on emotions. Barrett, Lewis, and Haviland-
Jones (2016) sort the theories of emotions into three broad traditions, namely the feeling
tradition, the motivational tradition and the evaluative tradition. In my proposition, the
cognitive activities are the motivational and evaluative aspects of emotions and the recog-
nition of patterns are the feeling aspect. The agent evaluates whether there are deviations,
whether the deviations are better or worse and whether the learning of new actions re-
stores or improves the targets of its goals. It learns to find new actions including searching
5
Deviaon beer
or worse?
Learn to find a new
acon with beer
performance
Able to improve
performance?
Update acon and
target
Keep exisng acon
and target
Beer
Yes
No
Excitement
Disappointment
Joy
FIGURE 5. Emotions when the deviation is better
past experiences, logical reasoning, executing alternative actions to assess their perfor-
mances, etc. It recognizes the patterns of cognitive activities as emotions.
My proposition provides more precise explanations on emotions. For instance, the ap-
praisal theory considers emotions as the results of evaluations (appraisals) of information
from an agent’s perception over a set of factors relevant to the agent’s well-being (Lazarus
(1991)). My proposition maps emotions to specific cognitive activities. This explains where
the appraisals themselves come from and why the relevance to the agent’s well-being
impacts emotions.
4.
Attention Module and Emotion Quantification
I combine the different processes in Section 3 together in Figure 6. The core mechanism in
this combined module is to keep the performance to a target. Whenever a deviation arises,
either the action or the target is changed to eliminate the deviation and maintain the
accordance between the performance and the target. This can be viewed as an evolution
of the homeostatic mechanism (Lichtenberg and Hadley (2013)). This module is innate
6
Emotion
Cognitive Activities
Nervousness
Learn a new action when the deviation is worse
Relief
Update the action that restores the target
Sadness
Update the action and target when a lower value of
the goal is obtained
Grief
Remove the goal and update higher level goals
Excitement
Learn a new action when the deviation is better
Disappointment
Keep the action and target when a higher value of the
goal is not obtained
Joy
Update the action and target when a higher value of
the goal is obtained
TABLE 1. Emotions and Corresponding Cognitive Activities
for the agent. It dictates the current goal the agent pays attention to and serves as the
attention module of the cognitive architecture.
I greatly simplify the attention module for presentation. The real situation is a lot
more complex and nuanced. First, the processes are depicted as open-ended in Figure
6. But they are actually loops: after a deviation is eliminated, the agent goes back to the
part of detecting deviations. Second, the agent has multiple goals. When a deviation is
detected in one goal and the agent’s attention is on another goal, it may choose to delay the
processing of the deviation until it finishes the action of its current goal. Third, the agent
may find a new action that temporarily restores the target and spread out the learning of
a permanent new action over many disjoint time intervals. The learning of a new action
doesn’t have to be done without interruption. Forth, an action can affect multiple goals. It
can create deviations in multiple goals simultaneously. Some of the deviations may be
better and others worse. Learning a new action is then more complicated because we
need to trade off between different goals.
We can qualitatively identify emotions from the attention module: what type of emo-
tions arise in which kind of situations. To quantitatively compute emotions such as its
intensity and duration, we need the details of the parameterized functions of the cognitive
architecture, in particular those on deviation detection, learning and updating. The func-
tions vary from one cognitive architecture to another. But there are some commonalities.
The intensity of the surprise is determined by the size of the deviation. The agent
doesn’t respond to small differences between the performance and the target. It has a
distribution and rules on whether to label a difference of a certain size as a deviation. As
an exmaple, Figure 7 shows a normal distribution and the rule that the differences larger
than 3 standard deviations are outliers – deviations.
The intensity and duration of the the nervousness / excitement is determined by the
intensity and duration of learning. I have discussed in Section 3 the size of the deviation
7
Receive info from
external / internal
percepon
Assess how
performance align
with target
Perf deviates
from target?
Yes
No
Deviaon beer
or worse?
Learn to find a new
acon with beer
performance
Connue current
acon
Learn to find new
acon to restore
target
Able to improve
performance?
Update acon and
target
Keep exisng acon
and target
Beer
Yes
No
Able to restore
target?
Update acon
Yes
No
Able to obtain
lower value?
Remove goal and
update higher level
goals
Update acon and
target
Yes
No
worse
Nervousness
Relief
Sadness
Grief
Excitement
Disappointment
Joy
Surprise
FIGURE 6. Attention Module
8
FIGURE 7. Detection of deviations – outliers – based on a normal distribution
and the urgency of finding a new action impacts the intensity of learning. The agent has a
function f (Equation 1) to convert the size and urgency into the intensity of learning.
(1)
Intensity = f (Deviation size,Urgency)
For the duration of the learning, the agent decides an initial duration from the deviation
size and the urgency and extends the duration when it sees performance improvement.
A function g0 (Equation 2) converts the size and the urgency to an initial duration. A
function g (Equation 3) converts the amount of improvement into the amount of duration
extension. The agent terminates the learning when it sees no improvement at the end of
the duration.
(2)
Initial Duration = g0(Deviation size,Urgency)
(3)
∆Duration = g(Improvement amount)
The intensity of the sadness / grief / joy is determined by the size of deterioration or
improvement realized. The intensity of the relief / disappointment is determined by the
size of deterioration or improvement not realized.
The agent learns the parameterized functions of the cognitive architecture – the
distributions and functions and their parameters and rules – from its experiences. It
adapts them based on new experiences. For instance, the agent can adapt the function
9
g0 and g and increase the initial duration and duration extension if it finds doing this
increases its chance of finding a new action noticeably. The parameterized functions can
be specific to a goal or to a set of goal features. These parameterized functions manifest
as personalities in humans.
5.
Discussion
I propose that emotions are recognized patterns of cognitive activities. They are the
products of the autonomous agent managing the deviations between the performances of
its actions and the targets of its goals. For humans or autonomous agents to have emotions,
they don’t need a dedicated emotion module; they need meta-cognition to recognize the
patterns of cognitive activities. My proposition bridges different theories on emotions and
advances the building of consensus.
If the agent has a well-understood goal and effective actions to fulfill it, it rarely experi-
ences deviations and emotions from it. If the agent experiences frequently deviations and
emotions from a goal, it’s an indication that the agent doesn’t understand the goal well or
doesn’t have effective actions. The agent needs to review the goal and its actions: decide
whether it should have the goal at all, review whether the target of the goal has a reason-
able value, and revise how it makes decisions on actions. It should do this recursively on
the components of the goal and on the higher level goals where the goal is a component.
Emotions are more complex in social situations. The meta-cognition can recognize
the same patterns differently depending on the social goals in concern. For instance,
“Shame” is actually nervousness on the goal “I’m a good person”. When we feel shame, we
have the goal of being a good person and our current action has a performance short of
the current target of the goal. This deviation triggers a series of cognitive activities. The
meta-cognition recognizes this pattern as “Shame”. Emotions are used as signals in social
communication. One agent can directly signal to another agent with words and emotions
what goals and targets the other agent should have and how the performances of their
actions deviate from the targets of the goals. The agent can distort these signals out of its
own goals.
An accurate theory of emotions enables humans better understand and adapt their
behaviors. Consensus on the theories of emotions fosters smooth collaborations to build
fully functional autonomous agents. My proposition blows away the mist on the path
forward. We can now forge ahead with these advancements with greater clarity.
10
References
Barrett, Lisa Feldman. 2017. How emotions are made: The secret life of the brain. Pan Macmillan.
Barrett, Lisa Feldman, Michael Lewis, and Jeannette M Haviland-Jones. 2016. Handbook of emotions.
Guilford Publications.
Jones, Steven J and John E Laird. 2023. “A cognitive architecture theory of anticipatory thinking.”
AI Magazine 44 (2): 155–164.
Laird, John E. 2022. “Introduction to the soar cognitive architecture.” .
Laird, John E, Christian Lebiere, and Paul S Rosenbloom. 2017. “A standard model of the mind:
Toward a common computational framework across artificial intelligence, cognitive science,
neuroscience, and robotics.” Ai Magazine 38 (4): 13–26.
Larue, Othalia, Robert West, Paul S Rosenbloom, Christopher L Dancy, Alexei V Samsonovich,
Dean Petters, and Ion Juvina. 2018. “Emotion in the common model of cognition.” Procedia
computer science 145: 740–746.
Lazarus, Richard S. 1991. “Cognition and motivation in emotion.” American psychologist 46 (4): 352.
Lichtenberg, Joseph D and June L Hadley. 2013. “Empathy, motivational systems, and a theory of
cure.” In Psychoanalysis and Motivation, pp. 295–336. Routledge.
Lövheim, Hugo. 2012. “A new three-dimensional model for emotions and monoamine neurotrans-
mitters.” Medical hypotheses 78 (2): 341–348.
11
|
Emotions are Recognized Patterns of Cognitive Activities Yue Jin August 2025 Emotions play a crucial role in human life. The research community has proposed many theories on emotions without reaching much consensus. The situation is similar for emotions in cognitive architectures and autonomous agents. I propose in this paper that emotions are recognized patterns of cognitive activities. These activities are responses of an agent to the deviations between the targets of its goals and the performances of its actions. Emotions still arise even if these activities are purely logical. I map the patterns of cognitive activities to emotions. I show the link between emotions and attention and the impacts of the parameterized functions in the cognitive architecture on the computing of emotions. My proposition bridges different theories on emotions and advances the building of consensus. Yue Jin. Nokia Bell Labs France. Email: . 15 Sep 2025 1. Introduction Emotions play a crucial role in human life. They seem to compel us to pursue our goals, enhance or hinder our learning and decision-making, strengthen or weaken our communications and connections. Many theories have been proposed on how emotions arise, in the feeling, motivational and evaluative traditions (Barrett, Lewis, and Haviland-Jones (2016)). Barrett (2017) argues that "An emotion is your brain's creation of what your bodily sensations mean, in relation to what is going on around you." Lövheim (2012) proposes that different combinations of 3 monoamine neurotransmitters generate 8 emotions. The research community is yet to reach consensus. There is no difference for emotions in Cognitive Architectures. It's debated whether cognitive architectures should have an emotion module so that autonomous agents can have emotions. Laird, Lebiere, and Rosenbloom (2017) don't include emotions in the standard model of the mind. Many computational models have been proposed, including Appraisal theories, Core affect theory, Somatic markers, and Primary-process affect (Larue et al. (2018)). According to Larue et al. (2018), Soar-Emote uses emotional appraisals to frame information before it is processed by the cognitive system; Sigma models low-level appraisals as architectural self-reflection; ACT-R/Φ combines a model of a physiological substrate and the primary-process affect theory to augment the ACT-R cognitive architecture. I propose in this paper that emotions are recognized patterns of cognitive activities. For humans or autonomous agents to have emotions, they don't need a dedicated emotion module; they need meta-cognition to recognize the patterns of cognitive activities. These cognitive activities are linked to goals, actions and attention. I first use Soar cognitive architecture to demonstrate in Section 2 how surprise, nervousness and relief arise even when the cognitive architecture only performs logical reasoning. I then make the case for general cognitive architectures on an extended list of emotions in Section 3 and show how my proposition bridges different theories of emotions. I present the link between emotions and attention and the impacts of the parameterized functions in the cognitive architecture on the emotion quantification in Section 4. I conclude in Section 5 with discussions on the agent's adaptation of goals and actions based on its emotions and on emotions in social situations. 2. Surprise, Nervousness and Relief in Soar Sore is a long standing Cognitive Architecture (Laird (2022)). It has gone through multiple versions with gradually added modules and capabilities since its conception in the early 80s. One of its capabilities is responding to incomplete knowledge. Whenever the agent doesn't have sufficient knowledge to select or apply an action, Soar labels this as an 1 Impasse arise? Create substate and use all processing to find a new ac on Use Chunking to create a new rule and store it Yes Nervousness Relief Surprise FIGURE 1. Emotions in Soar when Impasse Arises impasse. To resolve the impasse, it creates a substate with memories and processing powers and searches for a new action in the substate with all the processing available in Soar including planning, retrievals from semantic and episodic memory, and interaction with the external environment. Once a new action is found, Soar invokes a learning mechanism called "Chunking" to compile the processing in the substate into a new rule that it stores for future use. We can clearly see that Soar goes through several stages when it encounters and resolves an impasse and each stage has its distinctive cognitive patterns. When it encounters an impasse, this deviates from its goal. This deviation can be recognized as "Surprise". It then creates a substate and searches for a new action. It uses additional memory and processing power. This can be recognized as "Nervousness". Even though it's not yet a feature in Soar, the agent can invoke more parallel processing when it has less time to find a new action. The intensity of "Nervousness" increases with the urgency. When a new action is found, Soar invokes chunking to learn a new rule and stores it for future use. This can be recognized as "Relief". Figure 1 shows the cognitive patterns and corresponding emotions. For Soar to have emotions, it doesn't need a dedicated emotion module; it needs a meta-cognition module that can recognize these cognitive patterns. 2 Improve profitability of a data center Increase revenue Reduce costs Acquire new customers Support new workload types Develop more efficient so ware Acquire more energy efficient hardware ...... ...... ...... Goal Level 1 Goal Level 2 Goal Level 3 FIGURE 2. Recursive decomposition of a goal 3. Emotions in General Cognitive Architectures We can generalize the case of Soar to general cognitive architectures. In a general cognitive architecture, the agent can have general goals. A goal can be decomposed into multiple components. Each component is a goal at a lower level and can also be decomposed into multiple components. Figure 2 shows an example of this recursive decomposition. Each goal has its current value and a target value at a future time point. I use "target" to refer to a target value. It can also have interim targets at intermediate time points. The recursive decomposition of the goal continues until it reaches the level of executable actions. When the agent executes an action, it changes the value of a goal. The performance of an action is the new value of the goal. The agent receives information from its external and internal perception. It interprets the information and assesses whether the action performance equals the (interim) target of the goal. If there is a deviation between the performance and the target, this is recognized as "Surprise". If the agent has a world model and carries out anticipatory thinking (Jones and Laird (2023)), the deviation can also be between the forecast performance and the target. The agent then assesses whether the deviation is better or worse based on whether the performance is better or worse than the target. Figure 3 shows the loop of detecting a deviation. The agent can have multiple 3 Receive info from external / internal percep on Assess how performance align with target Perf deviates from target? Yes No Devia on be er or worse? Con nue current ac on Surprise FIGURE 3. Detect a deviation parallel loops for different goals. The loops can have different time scales for goals at different levels. The agent processes the deviation depending on whether it's better or worse. If the deviation is worse, the agent mobilizes cognitive resources to find a new action that restores the target. This is recognized as "Nervousness". If the deviation is large or the agent needs to find a new action in a short period of time, the agent needs to make a lot of parallel processing. This increases the intensity of "Nervousness". If a new action is found, the agent compiles the processing and updates the action for the goal for future use. This is recognized as "Relief". If no new action is found to restore the target but one is found to reach a lower value, the agent also updates the action for future use. At the same time, it revises the target of the goal down to this lower value and stores the new target. This is recognized as "Sadness". If no new action is found to even reach a lower value, the agent removes the goal and updates the related goals at higher levels. This is recognized as "Grief". This processing is shown in Figure 4. If the deviation is better, the agent mobilizes cognitive resources to find a new action to reach a higher value of the goal. This is recognized as "Excitement". The agent usually has less urgency to find a solution comparing to the case of a worse deviation. The intensity is mostly determined by the size of the deviation. If a new action is found, the agent compiles the processing and updates the action for the goal for future use. At the same time, it 4 Devia on be er or worse? Learn to find new ac on to restore target Able to restore target? Update ac on Yes No Able to obtain lower value? Remove goal and update higher level goals Update ac on and target Yes No worse Nervousness Relief Sadness Grief FIGURE 4. Emotions when the deviation is worse revises the target of the goal up and stores the new target. This is recognized as "Joy". If no new action is found, the agent keeps the existing action and target. This is recognized as "Disappointment". This processing is shown in Figure 5. In Table 1, I list the emotions and the cognitive activities they correspond to. For the emotions not listed in the table, they are recognized patterns of cognitive activities too. For instance, "Anger" is recognized when the agent deduces that the worse deviation is caused by another agent and decides it needs to use aggression to change the behavior of the other agent. A comprehensive list of emotions and their corresponding cognitive activities is left for future work. My proposition bridges different theories on emotions. Barrett, Lewis, and HavilandJones (2016) sort the theories of emotions into three broad traditions, namely the feeling tradition, the motivational tradition and the evaluative tradition. In my proposition, the cognitive activities are the motivational and evaluative aspects of emotions and the recognition of patterns are the feeling aspect. The agent evaluates whether there are deviations, whether the deviations are better or worse and whether the learning of new actions restores or improves the targets of its goals. It learns to find new actions including searching 5 Devia on be er or worse? Learn to find a new ac on with be er performance Able to improve performance? Update ac on and target Keep exis ng ac on and target Be er Yes No Excitement Disappointment Joy FIGURE 5. Emotions when the deviation is better past experiences, logical reasoning, executing alternative actions to assess their performances, etc. It recognizes the patterns of cognitive activities as emotions. My proposition provides more precise explanations on emotions. For instance, the appraisal theory considers emotions as the results of evaluations (appraisals) of information from an agent's perception over a set of factors relevant to the agent's well-being (Lazarus (1991)). My proposition maps emotions to specific cognitive activities. This explains where the appraisals themselves come from and why the relevance to the agent's well-being impacts emotions. 4. Attention Module and Emotion Quantification I combine the different processes in Section 3 together in Figure 6. The core mechanism in this combined module is to keep the performance to a target. Whenever a deviation arises, either the action or the target is changed to eliminate the deviation and maintain the accordance between the performance and the target. This can be viewed as an evolution of the homeostatic mechanism (Lichtenberg and Hadley (2013)). This module is innate 6 Emotion Cognitive Activities Nervousness Learn a new action when the deviation is worse Relief Update the action that restores the target Sadness Update the action and target when a lower value of the goal is obtained Grief Remove the goal and update higher level goals Excitement Learn a new action when the deviation is better Disappointment Keep the action and target when a higher value of the goal is not obtained Joy Update the action and target when a higher value of the goal is obtained TABLE 1. Emotions and Corresponding Cognitive Activities for the agent. It dictates the current goal the agent pays attention to and serves as the attention module of the cognitive architecture. I greatly simplify the attention module for presentation. The real situation is a lot more complex and nuanced. First, the processes are depicted as open-ended in Figure 6. But they are actually loops: after a deviation is eliminated, the agent goes back to the part of detecting deviations. Second, the agent has multiple goals. When a deviation is detected in one goal and the agent's attention is on another goal, it may choose to delay the processing of the deviation until it finishes the action of its current goal. Third, the agent may find a new action that temporarily restores the target and spread out the learning of a permanent new action over many disjoint time intervals. The learning of a new action doesn't have to be done without interruption. Forth, an action can affect multiple goals. It can create deviations in multiple goals simultaneously. Some of the deviations may be better and others worse. Learning a new action is then more complicated because we need to trade off between different goals. We can qualitatively identify emotions from the attention module: what type of emotions arise in which kind of situations. To quantitatively compute emotions such as its intensity and duration, we need the details of the parameterized functions of the cognitive architecture, in particular those on deviation detection, learning and updating. The functions vary from one cognitive architecture to another. But there are some commonalities. The intensity of the surprise is determined by the size of the deviation. The agent doesn't respond to small differences between the performance and the target. It has a distribution and rules on whether to label a difference of a certain size as a deviation. As an exmaple, Figure 7 shows a normal distribution and the rule that the differences larger than 3 standard deviations are outliers - deviations. The intensity and duration of the the nervousness / excitement is determined by the intensity and duration of learning. I have discussed in Section 3 the size of the deviation 7 Receive info from external / internal percep on Assess how performance align with target Perf deviates from target? Yes No Devia on be er or worse? Learn to find a new ac on with be er performance Con nue current ac on Learn to find new ac on to restore target Able to improve performance? Update ac on and target Keep exis ng ac on and target Be er Yes No Able to restore target? Update ac on Yes No Able to obtain lower value? Remove goal and update higher level goals Update ac on and target Yes No worse Nervousness Relief Sadness Grief Excitement Disappointment Joy Surprise FIGURE 6. Attention Module 8 FIGURE 7. Detection of deviations - outliers - based on a normal distribution and the urgency of finding a new action impacts the intensity of learning. The agent has a function f (Equation 1) to convert the size and urgency into the intensity of learning. (1) Intensity = f (Deviation size,Urgency) For the duration of the learning, the agent decides an initial duration from the deviation size and the urgency and extends the duration when it sees performance improvement. A function g0 (Equation 2) converts the size and the urgency to an initial duration. A function g (Equation 3) converts the amount of improvement into the amount of duration extension. The agent terminates the learning when it sees no improvement at the end of the duration. (2) Initial Duration = g0(Deviation size,Urgency) (3) ∆Duration = g(Improvement amount) The intensity of the sadness / grief / joy is determined by the size of deterioration or improvement realized. The intensity of the relief / disappointment is determined by the size of deterioration or improvement not realized. The agent learns the parameterized functions of the cognitive architecture - the distributions and functions and their parameters and rules - from its experiences. It adapts them based on new experiences. For instance, the agent can adapt the function 9 g0 and g and increase the initial duration and duration extension if it finds doing this increases its chance of finding a new action noticeably. The parameterized functions can be specific to a goal or to a set of goal features. These parameterized functions manifest as personalities in humans. 5. Discussion I propose that emotions are recognized patterns of cognitive activities. They are the products of the autonomous agent managing the deviations between the performances of its actions and the targets of its goals. For humans or autonomous agents to have emotions, they don't need a dedicated emotion module; they need meta-cognition to recognize the patterns of cognitive activities. My proposition bridges different theories on emotions and advances the building of consensus. If the agent has a well-understood goal and effective actions to fulfill it, it rarely experiences deviations and emotions from it. If the agent experiences frequently deviations and emotions from a goal, it's an indication that the agent doesn't understand the goal well or doesn't have effective actions. The agent needs to review the goal and its actions: decide whether it should have the goal at all, review whether the target of the goal has a reasonable value, and revise how it makes decisions on actions. It should do this recursively on the components of the goal and on the higher level goals where the goal is a component. Emotions are more complex in social situations. The meta-cognition can recognize the same patterns differently depending on the social goals in concern. For instance, "Shame" is actually nervousness on the goal "I'm a good person". When we feel shame, we have the goal of being a good person and our current action has a performance short of the current target of the goal. This deviation triggers a series of cognitive activities. The meta-cognition recognizes this pattern as "Shame". Emotions are used as signals in social communication. One agent can directly signal to another agent with words and emotions what goals and targets the other agent should have and how the performances of their actions deviate from the targets of the goals. The agent can distort these signals out of its own goals. An accurate theory of emotions enables humans better understand and adapt their behaviors. Consensus on the theories of emotions fosters smooth collaborations to build fully functional autonomous agents. My proposition blows away the mist on the path forward. We can now forge ahead with these advancements with greater clarity. 10 References Barrett, Lisa Feldman. 2017. How emotions are made: The secret life of the brain. Pan Macmillan. Barrett, Lisa Feldman, Michael Lewis, and Jeannette M Haviland-Jones. 2016. Handbook of emotions. Guilford Publications. Jones, Steven J and John E Laird. 2023. "A cognitive architecture theory of anticipatory thinking." AI Magazine 44 (2): 155-164. Laird, John E. 2022. "Introduction to the soar cognitive architecture." . Laird, John E, Christian Lebiere, and Paul S Rosenbloom. 2017. "A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics." Ai Magazine 38 (4): 13-26. Larue, Othalia, Robert West, Paul S Rosenbloom, Christopher L Dancy, Alexei V Samsonovich, Dean Petters, and Ion Juvina. 2018. "Emotion in the common model of cognition." Procedia computer science 145: 740-746. Lazarus, Richard S. 1991. "Cognition and motivation in emotion." American psychologist 46 (4): 352. Lichtenberg, Joseph D and June L Hadley. 2013. "Empathy, motivational systems, and a theory of cure." In Psychoanalysis and Motivation, pp. 295-336. Routledge. Lövheim, Hugo. 2012. "A new three-dimensional model for emotions and monoamine neurotransmitters." Medical hypotheses 78 (2): 341-348. 11
|
2509.16231
|
Non-Equatorial Uniform-Stress Space Elevators∗†
Blaise Gassend‡
Abstract
Non-equatorial space elevators are interesting as they
give more freedom for anchor location, avoid the
highly occupied geosynchronous orbit and the areas
of highest radiation in the Van Allen belts. We re-
view the static equation for a uniform-stress tether,
and use it to study the tapering of a uniform-stress
tether in a general potential field. We then focus on
a rotating coulomb potential and study the range of
latitudes that are allowed for a uniform-stress space
elevator. Finally, we look at a few practical issues
that are raised by non-equatorial space elevators.
Introduction
A space elevator is a very long tether. One end is at-
tached to an anchor station on the surface of a planet.
The other end is attached to a counterweight located
beyond the planet’s synchronous altitude. Past that
altitude centrifugal force due to the planet’s rota-
tion exceeds the gravitational force, so the counter-
weight is pulled away from the planet.
Thus, the
tether is kept in tension between anchor and counter-
weight. Payloads can be sent into space by climbing
the tether, much more efficiently than with rockets.
The space elevator concept was first proposed in
Russian [2, 3] by Artsutanov, and later introduced
in English by Pearson [4]. Science fiction writers [5]
made the idea popular. But materials with the nec-
essary strength to weight ratio seemed unlikely. And
the proposed elevators were much too heavy for fore-
seeable launch technologies. This changed with the
arrival of carbon nanotubes, and the proposal by Ed-
wards [1] of a space elevator concept that would only
∗©2004 Institute for Scientific Research, Inc.
†This paper was first published in Proc. of the 3rd Inter-
national Space Elevator Conference, June 2004, reprinted on
arXiv by permission of Bradley Edwards former director at
ISR.
‡The author may be contacted at gassend@alum.mit.edu.
require a few tons of material to be lifted by conven-
tional launch means.
To date, studies have considered that the space el-
evator would be anchored on the equator, as that is
where the equilibrium of the tether is easiest to un-
derstand. Clarke [5] even considered that the eleva-
tor would have to be placed on the equator at a local
minimum of the Earth’s geopotential. In fact, there
is no reason for such limitations, and non-equatorial
space elevators can present a number of advantages:
• There is increased flexibility in the selection of
the anchor location.
• The tether does not have to go through the heav-
ily occupied (geo)synchronous orbit.
• The tether avoids the areas of most intense ra-
diation of the Van Allen belts [6]. This is par-
ticularly important when considering the trans-
portation of humans.
• The tether can avoid the Martian moons (in the
case of a Martian elevator).
Figure 1 shows a diagram of a non-equatorial space
elevator. Three forces hold it in equilibrium: gravity,
centrifugal force, and the tension at the anchor. The
major difference with equatorial elevators is that the
elevator is located entirely on one side of the equa-
torial plane. Therefore, gravity tends to pull the ele-
vator towards the equatorial plane. This tendency is
countered by the force applied by the anchor, allow-
ing the elevator to be in equilibrium.
In this paper we will be considering uniform-stress
space-elevators. A uniform-stress tether is a tether in
which the cross-section is varied (tapered) in order to
keep the stress in the tether uniform. Space elevators
are generally designed to have uniform stress as this
maximizes the ratio of payload mass to elevator mass.
To understand off-equator space elevators, we will
first review the static equations that any uniform-
stress tether must satisfy, in Section 1. Then we will
arXiv:2509.16231v1 [physics.class-ph] 15 Sep 2025
Counterweight
Equator
Axis of Rotation
Anchor
ψ
y
x
θ
φ
s
⃗r
Figure 1: A non-equatorial space elevator, and some of the angles that are used to characterize it.
apply these equations to the relevant case of a rotat-
ing Coulomb potential in Section 2. In Section 3 we
will study the problem of determining the maximum
latitude a space elevator can start from. Finally, Sec-
tion 4 covers a few practical concerns that are specific
to non-equatorial space elevators.
All the notation that is used in this paper can be
found in Table 1. Examples will often use values that
are typical for Earth and the tethers considered by
Edwards [1]. Table 2 summarizes these values.
1
Equations for a Static Uni-
form Stress Tether
First we introduce the equations for a static uniform-
stress tether in a potential V .
d⃗r
ds
=
⃗T
T
(1)
d⃗T
ds
=
ρ A ⃗∇V
(2)
T
=
σ0 A
(3)
In these equations s is a curvilinear coordinate
along the tether, ⃗T is the tension that the top part of
the tether applies to the bottom part, ⃗r is a position
vector, A is the position dependent cross-sectional
area of the tether, ρ is the density of the tether,
and σ0 is the stress in the tether. Equation (1) ex-
presses that the tether cannot bear any shear load:
the tension in the tether must be tangent to the ca-
ble. Equation (2) expresses Newton’s second law: the
change in tension along a small piece of tether must
oppose the force due to the potential. Equation (3)
expresses that the tether has a uniform stress. Be-
cause the tether is uniformly stressed, we need not
consider elastic effects as they can be incorporated
into σ0 and ρ.
Equations (2) and (3) can be combined to eliminate
the area of the cable.
d⃗T
ds = ρT
σ0
⃗∇V
(4)
1.1
Taper Profile
First we shall look at the tangential part of the static
equations. Integrating them will lead to a relation
between the cable cross-section A and the local po-
tential V .
First, we take the dot product of (4) with d⃗r, divide
by T , and use (1) to simplify.
dT
T
= ρ
σ0
d⃗r · ⃗∇V
(5)
Integrating we get an expression for T and there-
fore for A.
T
T0
= A
A0
= e
ρ
σ0 V
(6)
This formula shows that the area of the tether at a
given position is directly a function of the potential V
at that position. If ∆V is the difference in potential
energy between the base of the tether and the point
where the potential energy reaches a maximum, then
we can express the taper ratio of the tether as:
Amax
A0
= e
ρ
σ0 ∆V
(7)
Symbol
Description
s
Curvilinear coordinate along the tether.
⃗r
Position vector of a point on the tether.
r
Distance from the center of the planet.
r⊥
Distance from the rotation axis of the
planet.
rs
Distance from the center of the planet
to the synchronous altitude.
ρ
Density of tether material under stress.
σ0
Target stress in tether.
A
Cross-sectional area of tether.
⃗T
Tension applied by the top part of the
tether to the bottom part.
m
Mass of the counterweight.
V
Potential field the tether is placed in.
θ
Angle between the equatorial plane and
the position vector ⃗r.
φ
Angle between the equatorial plane and
the tangent to the tether.
ψ
Angle between the tangent to the tether
and the position vector ⃗r
ˆeφ
Unit vector in the direction of increas-
ing φ.
G
Gravitational constant.
Mp
Mass of the planet.
Ω
Angular velocity of planet rotation.
V0
Characteristic specific energy of the ro-
tating Coulomb potential field.
∆V
Difference in potential between the base
of the tether and the point where the
potential is greatest.
⃗˜g
Combined gravitational and centrifugal
field, in normalized form.
α
Tether shape factor.
P/M
Payload to tether mass ratio.
(⃗v)⊥
Part of some vector ⃗v that is normal to
the tether.
˜d
Distance d in units of rs.
˘d
Distance d in units of αrs.
x0
The value of variable x at the anchor
point (except V0 and σ0).
Table 1: Notation that is used in this paper.
From this expression we can introduce a taper pa-
rameter
ρ
σ0 ∆V that characterizes the difficulty of
Symbol
Typical Value
G
6.67 · 10−11 SI
Mp
5.98 · 1024 kg
Ω
7.29 · 10−5 rad/s
V0
9.46 · 106 J/kg
rs
42.2 · 106 m
r0
6.38 · 106 m
˜r0
0.151
ρ
1300 kg/m3
σ0
65 · 109 N/m2
α
0.189
Table 2: Typical values for the Earth and Edwards’
tether parameters [1].
When nothing is specified,
these values are used for examples.
building a uniform-stress structure across a potential
difference ∆V . When it is much smaller than one, al-
most no tapering is necessary. When it is much larger
than one the taper ratio becomes prohibitively large.
This taper parameter is closely related to the ratio
of Pearson’s characteristic height [4] to the geosyn-
chronous altitude.
1.2
Tether Shape Equation
Projecting (4) onto a direction tangent to the tether
allowed us to determine the taper profile. We now
project perpendicularly to the tether direction and
combine with (1) to determine the shape the tether
adopts in the gravitational potential:
d2⃗r
ds2 = d ˆT
ds = ρ
σ0
⃗∇V
⊥
(8)
where ˆT = ⃗T/T is a unit vector tangent to the tether,
and (⃗∇V )⊥denotes the projection of ⃗∇V perpendic-
ularly to ˆT.
Equation (8) determines the tether’s curvature.
The tether curves towards areas of higher potential,
and the curvature is proportional to the component
of the gravity field that is normal to the tether. This
interpretation becomes more apparent in the case of
a planar tether where we can identify the direction of
the tether by an angle φ so that
dφ
ds = ρ
σ0
ˆeφ · ⃗∇V
(9)
where ˆeφ is a unit vector in the direction of increas-
ing φ.
1.3
Boundary Conditions
To have a complete description of the static tether, we
additionally need to consider boundary conditions.
On one end, the tether is attached to the anchor
point on the planet.
The anchor simply needs to
provide a force equal to the tension in the tether to
keep the tether static. If the base of the tether isn’t
vertical then there will be a horizontal component to
this force, so the tether will tend to pull the anchor
sideways (see Section 4.2).
From equation (6) we know that the tension in the
tether never goes to zero. Therefore, the free end of
the cable must have a force applied to it to balance
the tension in the cable. That force is provided by a
counterweight of mass m which must satisfy:
⃗T = −m⃗∇V
(10)
Thus the counterweight must be located in such a
way that the tether is tangent to the local gravity
field.
2
The Rotating Coulomb Po-
tential
So far we have considered a uniform stress tether in
an arbitrary potential V . To make further progress,
we will now consider the specific potential that ap-
plies in the case of the space elevator attached to a
planet. Because we are considering the statics of the
tether, we have to place ourselves in a reference frame
that is rotating with the planet. Thus the potential
we are interested in is a combination of the Coulomb
potential of the planet’s gravity and the centrifugal
potential due to the planet’s rotation around its axis:
V = −GMp
r
−1
2r2
⊥Ω2
(11)
In this equation G is the gravitational constant, Mp
is the mass of the planet, Ωis the angular velocity of
the planet’s rotation, r is the distance to the center
of the planet, and r⊥is the distance to the axis of
rotation of the planet.
2.1
Planar Tether Profile
One of the first things we note about the potential
is that it is invariant by reflection about planes that
contain the planetary axis of rotation. This invari-
ance must also apply to the resulting acceleration
field. Thus, the forces caused by the potential will
all be in a plane containing the axis of rotation and
the point at which they are applied.
Therefore, if we consider a plane containing the
axis of rotation of the planet and the counterweight,
we find that the tension in the tether at the counter-
weight is in that plane. As we move down the tether,
the forces acting on the tether are in that plane, so
the tether remains in that plane all the way to the
anchor.
We conclude that the shape of the space elevator
will be planar, even in the non-equatorial case. This
greatly simplifies the problem to be solved, as we can
now work in two dimensions in a plane that is per-
pendicular to the equatorial plane.
2.2
Non-Dimensional Problem
Reducing a problem to non dimensional form is an
excellent way of extracting the physically important
parameters. We now apply this methodology to the
space elevator.
First we note that the potential can be written in
terms of the synchronous radius rs = (GMp/Ω2)1/3
and the characteristic potential V0 = (GMpΩ)2/3 in
the form:
V = −V0
rs
r + 1
2
r2
⊥
r2s
(12)
Thus, rs, the distance from the center of the planet
to the synchronous orbit, is the natural distance scale
for this problem. We shall therefore rewrite (8) re-
placing all distances d by normalized distances ˜d =
d/rs, and inserting the expression for V from (12):
d2⃗˜r
d˜s2 = ρV0
σ0
⃗˜r
˜r3 −⃗˜r⊥
!
⊥
= −α
⃗˜g
⃗˜r
⊥
(13)
This is the differential equation that determines the
shape of the tether in a rotating Coulomb potential.
This equation contains a single scalar parameter α
which we shall call the shape parameter.
α = ρV0
σ0
(14)
The shape parameter is the ratio of the character-
istic potential of the potential field to the strength
to weight ratio of the tether material. It also natu-
rally appears in (5), (6) and (7) when they are ap-
plied to the rotating Coulomb potential. The shape
parameter determines how deep it is possible to go
into the normalized potential well before the taper
ratio becomes excessively high. Indeed, well below
the synchronous altitude, ∆V ≈V0rs/r, so the taper
parameter is approximately α/˜r. Thus for α ≪˜r, the
taper ratio is close to 1, while for α ≫˜r, the taper
ratio is gigantic.
In the case of the Earth and the tether parameters
from [1], α ≈0.189 and ˜r ≈0.151. Thus we are close
to the limit of feasibility.
2.3
Solving the Shape Equation Qual-
itatively
To get an idea of the solutions of (13) that satisfy the
boundary condition (10), it is useful to study Figure 2
which is a plot of the vector field ⃗˜g. We shall assume
without loss of generality that the anchor is in the
upper right-hand quadrant (x > 0 and y > 0). More
complete derivations can be found in [7].
To begin, we note that the North-South component
of the field always points towards the equator. This
has two consequences.
First, because of (4) the y
component of ⃗T satisfies:
y dTy
ds > 0
(15)
Second, because of (10), the tip of the tether at the
counterweight has to be sloped towards the equatorial
plane (i.e., yTy < 0. Combining these two facts, we
find that Ty is negative over the whole length of the
tether. This implies via (1) that the distance from
the tether to the equatorial plane must monotonically
decrease as we move along the tether from the anchor
point to the counterweight. If the tether ever crosses
the equatorial plane, or starts heading away from the
equatorial plane, the boundary condition will never
be satisfied.
Below the synchronous altitude, ⃗˜g is pointing down
and to the left. As we have seen, T is pointing down
and to the right. Therefore because of (13) the tether
0
0.2
0.4
0.6
0.8
1
1.2
0 0.2 0.4 0.6 0.8 1 1.2 1.4
(I)
(II)
(III)
y
x
Figure 2: The normalized rotating Coulomb field ⃗˜g.
The equatorial plane is horizontal, and the North-
South direction is vertical. Equipotential lines and
field values are plotted along with three example
tether solutions.
The tether solutions are for α =
0.189, ˜r0 = 0.151, θ0 = 30° and inclinations ψ0 of
35°, 55° and 75°.
1
1.5
2
2.5
3
3.5
52
53
54
55
56
57
58
59
60
(I)
(II)
(III)
Inclination in degrees
˜r at the counterweight
Figure 3: Normalized distance from the center of the
Earth to the counterweight at latitude θ0 = 30°.
only curves away from the equator (φ increases mono-
tonically). We will use this result in Section 3.1.
Figure 2 shows how solutions of (13) depend on
the inclination of the tether at the anchor.
Case
(II) satisfies the boundary condition at the counter-
weight, while (I) and (III) extend infinitely without
the boundary condition ever being satisfied.
0
10
20
30
40
50
60
70
80
90
0
10
20
30
40
50
60
(I)
(II)
(III)
Inclination in degrees
Latitude in degrees
Figure 4: Possible inclinations for an Earth space
elevator as a function of latitude (α = 0.189).
Indeed, if the inclination is too low as in case (I),
then the tether curves away from the equatorial plane
before suitable boundary conditions for a counter-
weight can be found. If the inclination is increased,
the point at which the tether is parallel to the equa-
torial plane moves out towards infinity, and ceases
to exist. At that point, it becomes possible to satisfy
the boundary condition at infinity. As the inclination
is increased more, the altitude of the counterweight
gets lower and lower, as in case (II). Once the al-
titude of the counterweight reaches the synchronous
altitude, satisfying the boundary condition becomes
impossible once again. The tether crosses the equa-
torial plane before reaching the synchronous altitude
preventing it from being terminated as in case (III).
We conclude that for a given anchor location, there
will generally be a range of inclinations for which a
tether shape that satisfies the counterweight bound-
ary condition exists. Within that range, the coun-
terweight altitude decreases from infinity to the syn-
chronous altitude as in Figure 3.
Figure 4 shows the inclinations that lead to valid
tether solutions in the case of the Earth space eleva-
tor. The graph has been truncated at an inclination
of 90°. Higher inclinations are mathematically possi-
ble, but the tether would have to go underground for
the first part of its length.
0
10
20
30
40
50
60
70
80
90
0
0.2
0.4
0.6
0.8
1
0.01
0.02
0.05
0.1
0.2
0.5
1
2
5
1020
50
Maximum Elevator Latitude
Normalized Planetary Radius
(a) Maximum inclination of 90°.
0
5
10
15
20
25
30
35
40
45
0
0.2
0.4
0.6
0.8
1
0.01
0.02
0.05
0.1 0.2
0.5
1
2
5
10
20
50
Maximum Elevator Latitude
Normalized Planetary Radius
(b) Maximum inclination of 45°.
Figure 5: Maximum reachable latitude as a function
of normalized planetary radius for different values of
the shape parameter α, assuming maximum tether
inclinations at the anchor of 90° and 45°.
3
Maximum Anchor Latitude
As we saw in Figure 4, there is a maximum latitude
beyond which no inclination allows the tether to sat-
isfy the counterweight boundary conditions.
With
Figure 5(a), it is possible to determine the maximum
anchor latitude in the general case. This figure was
generated by considering, for a given planetary ra-
dius and shape parameter, which latitude would lead
to an infinitely long tether with 90° inclination.
This figure isn’t very useful for practical purposes
because it accepts tethers that are very inclined at the
base. As we shall see in Section 4.1, such tethers have
a small payload. Therefore, we have also considered
the case where tether inclination is limited to 45° in
Figure 5(b).
Clearly these two figures are very similar except for
a different scaling of the vertical axis. By considering
a number of limit cases we shall try to explain this
similarity and better understand the characteristics
of the plots. For clarity we introduce the function
θmax(˜r0, ψmax, α) which gives the maximum latitude
from the normalized planetary radius, the maximum
acceptable tether inclination and the shape parame-
ter.
θmax is the latitude at which a tether inclined by
θmax at the anchor is right at the limit between case
(I) and case (II). At that limit, the tether solution is
infinitely long. So, to study θmax, we shall be study-
ing tether solutions that go to infinity.
3.1
Small Planet Limit
First we shall direct our attention to the case where
˜r0 ≪1.
This approximation corresponds to the
case where the planet is small compared to the syn-
chronous altitude (we could also say that the planet
rotates slowly compared with an object that would
orbit near ground level). This is a good approxima-
tion for most known planetary bodies; the gas giants
are the main exception. For Earth ˜r0 ≈0.151 and for
Mars ˜r0 ≈0.166.
As always (see Section 2.3), the angle φ decreases
with altitude, as does the distance to the equatorial
plane. Because the planet is small, the distance to the
equatorial plane at the anchor is much smaller than
the distance to the synchronous altitude. Therefore,
to avoid crossing the equatorial plane, the tether is
nearly parallel to the equatorial plane far before the
synchronous altitude. This means that any signifi-
cant tether curvature occurs well below that altitude.
Well below synchronous altitude, the centrifugal force
term in ⃗˜g(⃗˜r) can be ignored, so (13) reduces to
d2⃗˜r
d˜s2 = α
⃗˜r
˜r3
!
⊥
(16)
This equation can be normalized by changing the
length scale by a factor α
d2⃗˘r
d˘s2 =
⃗˘r
˘r3
!
⊥
(17)
where the distance ˘d is the normalized version of ˜d
and corresponds to ˜d/α. This new normalization is
very satisfying as the tether equation contains no con-
stants at all. However, to prevent confusion with the
previous normalization of distances, we will avoid this
notation whenever possible.
The consequence is that θmax is only a function of
˜r0/α and ψmax). There is one fewer parameters to
consider. In Figure 5, for small values of the normal-
ized planetary radius, the curves for different shape
can be deduced from one another by stretching in the
horizontal direction.
3.2
Low Curvature Limit
Remaining in the small planet limit, we now consider
the case where ˜r0/α ≫1. In this case, the tether
undergoes very little curvature at all. This is partic-
ularly clear from (17) where there is very little effect
on the tether when ˘r is large.
If we ignore curvature altogether, we find that the
tether is straight.
In this approximation, θmax ≈
ψmax.
3.3
High Curvature Limit
Still in the small planet limit, we now consider the
opposite case in which ˜r0/α ≪1. In this case, the
tether curves sharply if its inclination ψ is not small.
If the tether is inclined at its base, it very quickly
becomes vertical. This prevents large starting lati-
tudes.
Since the latitude is small and the curvature oc-
curs near the base of the tether, we will make the
approximation of a uniform gravity field over a flat
planet. In this approximation φ ≈−ψ. We normal-
ize equation (9) and apply the small planet limit to
get
dφ
d˜s = −α
˜r2
0
sin(φ)
(18)
which can be further simplified by taking the deriva-
tive with respect to ˜y instead of ˜s
dφ
d˜y = −α
˜r2
0
(19)
We now integrate from 0 to y0, and note that φ = 0
at y = 0, to get an expression for the inclination at
the anchor
0
0.2
0.4
0.6
0.8
1
0
0.5
1
1.5
2
2.5
3
3.5
4
approximate formula
90
45
10
θmax/φmax
˜r0/α
Figure 6: Comparison of (23) with simulation data
for maximum inclinations of 10°, 45° and 90°. Only
data points with ˜r0 < 0.3 were plotted.
ψ0 = −φ0 = α
˜r2
0
y0
(20)
Finally, we can express y0 in terms of r0 and the
starting latitude θ0 as y0 ≈r0θ0 to get
ψ0 = α
˜r0
θ0
(21)
So in the high curvature limit
θmax ≈˜r0
α ψmax
(22)
3.4
A Combined Formula
If ˜r0/α is near 1 then the analysis becomes much
more difficult as both the x and y gravity components
are significant. A simple empirical interpolation can
nevertheless be used with very good results
θmax ≈2
π ψmax arctan
π
2
˜r0
α
(23)
It is easy to verify that this formula holds in both
limit cases. When ˜r0/α is near 1, this formula gives
a result that is up to 8% too high.
Figure 6 illustrates the quality of the approxima-
tion. The match is slightly better for low values of the
tether inclination. For high inclinations the approx-
imation is slightly optimistic. In this figure we have
limited ourselves to ˜r0 < 0.3, for ˜r0 > 0.5 we start
to see significant deviation from (23). With larger ˜r0
higher latitudes than expected can be reached.
4
Practical Considerations
So far we have considered whether it was possible to
have a space elevator at a given latitude by consid-
ering the static tether equation.
In practice other
considerations will further limit the latitude that can
be reached.
4.1
Payload to Elevator Mass Ratio
One of the major concerns in space elevator construc-
tion is the ratio of payload mass to elevator mass. In-
deed, this ratio determines how much material has to
be lifted during the elevator construction for a given
elevator capacity.
We saw in Section 1.1 that the taper ratio of a
uniform-stress tether only depends on the change in
potential along the tether. The potential is uniform
at the surface of a planet, and from Figure 2 the
potential changes very slowly near the synchronous
orbit. Therefore, the taper ratio for non-equatorial
space elevators is almost the same as the taper ratio
for equatorial space elevators.
In the small planet limit, the angle between the
tether and the equatorial plane is small, except pos-
sibly near the surface of the planet. Therefore, the
length of the tether doesn’t depend much on the lat-
itude of the anchor.
Moreover, since the potential
depends slowly on y, the taper profile of the non
equatorial space elevator is nearly the same as the
profile for an equatorial one.
Therefore, the only significant difference between
equatorial and non-equatorial elevators is due to the
tension at the base of the tether not being vertical in
the non-equatorial case. Since only the vertical com-
ponent of the tension can be used to lift a payload,
and elevators of equal mass have nearly equal tension
at their base, we get a reduced payload to elevator
mass ratio in the non-equatorial case:
(P/M)off−equator ≈(P/M)equator cos(ψ0)
(24)
Thus to maintain payload when leaving the equa-
tor means one has to multiply the elevator mass by
1/ cos(ψ0). For small inclinations at the anchor this
inefficiency is insignificant. But approaching inclina-
tions of 90° is clearly out of the question.
0
10
20
30
40
50
60
0
10
20
30
40
50
60
70
80
90 100
90
80
70
60
50
40
30
20
10
ψ0
θmax
σ0(GPa)
Figure 7: Maximum reachable latitude for an Earth
space elevator as a function of σ0, for different values
of the inclination ψ0.
The designer of Earth space elevators will find Fig-
ure 7 useful to quickly determine the maximum ele-
vator latitude as a function of maximum inclination
at the anchor. For ease of use σ0 has been used in-
stead of the shape parameter, the tether density is
assumed to be fixed at 1300 kg/m3.
4.2
Horizontal Force on Anchor
In addition to reducing the payload to mass ratio
of the elevator, the inclination at the tether’s base
causes a horizontal force to be applied to the anchor
platform. This force is simply given by:
F = T0 tan(ψ0)
(25)
If the anchor is a mobile ocean platform, this force
will need to be countered or else the anchor will drift
towards the equator. For heavy lift elevators signifi-
cantly offthe equator, this force will have to be taken
into account when selecting the anchor platform.
4.3
Stability
An equatorial tether with very high elasticity (low
Young’s modulus), or with a small extension beyond
the synchronous altitude can be unstable [7].
For
equatorial space elevators, we would be considering
conditions far from this instability region, so it is of
no concern. In the case of non-equatorial elevators,
the curvature at the base of the elevator should cause
a reduction in the axial stiffness of the tether as seen
from the counterweight. We can therefore conjecture
that the frequency of the elevator modes will decrease
for a non-equatorial elevator. This could cause the
instability to occur for realistic tether parameters.
We conjecture that as in the equatorial case, the
instability will occur for short tethers, near the
boundary between (II) and (III), where the counter-
weight is just beyond the synchronous altitude (see
Section 2.3).
However the instability will extend
to greater counterweight altitudes.
The maximum
reachable is determined at the (I)-(II) boundary and
should not be affected. However, external effects such
as the presence of Earth’s Moon may limit the length
of the elevator, and thus limit the latitude.
4.4
Deployment
During the initial deployment phase, there is no con-
tact between the tether and the Earth, which takes
away the only force keeping the elevator away from
the equatorial plane. This leaves two possibilities for
deploying a non-equatorial space-elevator.
• A propulsion system can be attached to the bot-
tom of the tether during deployment to keep the
tether away from the equatorial plane.
For a
10° latitude, and a one ton elevator this option
would require hundreds of newtons of thrust over
a period of days and is therefore impractical.
• The tether can be deployed in the equatorial
plane, attached to the anchor platform, and then
moved to its final location away from the equa-
tor. If an off-equator location has been selected
to avoid interfering with geosynchronous satel-
lites, the initial deployment can be done at a
longitude where there is an available geosyn-
chronous slot, after which the elevator can be
moved to its final latitude.
4.5
Tether Shape Determined Numer-
ically
Three of the applications we mentioned for non-
equatorial space elevators were the avoidance of par-
ticular areas in the equatorial plane. In this paper we
have not pushed the analysis of (13) far enough to de-
termine whether these obstacles are indeed avoided.
0
1000
2000
3000
4000
5000
0
20000
40000
60000
80000
100000
y (km)
x (km)
0
1000
2000
3000
4000
5000
0
20000
40000
60000
80000
100000
y (km)
x (km)
0
1000
2000
3000
4000
5000
0
20000
40000
60000
80000
100000
y (km)
x (km)
0
1000
2000
3000
4000
5000
0
20000
40000
60000
80000
100000
y (km)
x (km)
Figure 8: Numerical solutions of the shape equation
for the Earth with the standard tether parameters.
The length of the tether was set to 90,000 km. Start-
ing latitudes of 10°, 20°, 30° and 40°. Note that the
scale is different along the x and y axes.
Figure 8 shows some numerical solutions. They sug-
gest that avoiding the geosynchronous satellites is
easy. On the other hand, the most intense areas of
the radiation belts extend over 2,000 km above the
equatorial plane, which only highly inclined elevators
can avoid.
Conclusion
In this paper we have presented the equations that
govern the statics of non-equatorial uniform-stress
space elevators. These equations have been reduced
to a non-dimensional form allowing the analysis to be
applied to any tether and planetary parameters.
The tether’s taper profile has turned out to be easy
to compute as in the equatorial case, once its the
spatial configuration is known.
Unfortunately, the
spatial configuration is difficult to obtain analytically.
Of particular interest to the elevator designer is the
maximum anchor latitude for a non-equatorial eleva-
tor. This problem has been solved in a few limit cases,
and an approximate formula has been proposed.
Offthe equator, the tether is not vertical at the
base of the elevator. This causes a reduction in pay-
load, which is the major engineering cost of being off
the equator. It also causes a horizontal force to be
applied to the anchor station, which much be taken
into account for an ocean platform.
This study has ignored dynamic effects, in partic-
ular the stability of off-equatorial elevators has to be
checked. For small latitudes the stability of equato-
rial elevators carries over, but we expect instabilities
to occur in a wider range of conditions than in the
equatorial case. This remains to be verified. The ef-
fect of climbers on the tether also needs to be studied.
Acknowledgements
I would like to thank Val´erie Leblanc for the time she
spent proof reading and pointing out bugs.
References
[1] B. Edwards and E. Westling, The Space Elevator.
Spageo Inc., 2002.
[2] Y. Artsutanov, “V kosmos na electrovoze (into
space with the help of an electric locomotive),”
Komsomolskaya Pravda, 1960.
[3] V. Lvov, “Sky-hook: Old idea,” Nature, vol. 158,
no. 3803, pp. 946–947, Nov. 1967.
[4] J. Pearson, “The orbital tower:
a spacecraft
launcher using the Earth’s rotational energy.”
Acta Astronautica, vol. 2, pp. 785–799, 1975.
[5] A. C. Clarke, The Fountains of Paradise. Warner
Books, 1978.
[6] A. M. Jorgensen, R. G. Morgan, B. Gassend,
R. H. W. Friedel, and T. Cayton, “Space elevator
radiation hazards and how to mitigate them,” in
Proceedings of the 3rd Annual International Space
Elevator Conference, June 2004.
[7] V. V. Beletsky and E. M. Levin, Dynamics of
Space Tether Systems, ser. Advances in the Astro-
nautical Sciences.
Amer Astronautical Society,
August 1993, vol. 83.
|
Non-Equatorial Uniform-Stress Space Elevators∗† Blaise Gassend‡ Abstract Non-equatorial space elevators are interesting as they give more freedom for anchor location, avoid the highly occupied geosynchronous orbit and the areas of highest radiation in the Van Allen belts. We review the static equation for a uniform-stress tether, and use it to study the tapering of a uniform-stress tether in a general potential field. We then focus on a rotating coulomb potential and study the range of latitudes that are allowed for a uniform-stress space elevator. Finally, we look at a few practical issues that are raised by non-equatorial space elevators. Introduction A space elevator is a very long tether. One end is attached to an anchor station on the surface of a planet. The other end is attached to a counterweight located beyond the planet's synchronous altitude. Past that altitude centrifugal force due to the planet's rotation exceeds the gravitational force, so the counterweight is pulled away from the planet. Thus, the tether is kept in tension between anchor and counterweight. Payloads can be sent into space by climbing the tether, much more efficiently than with rockets. The space elevator concept was first proposed in Russian [2, 3] by Artsutanov, and later introduced in English by Pearson [4]. Science fiction writers [5] made the idea popular. But materials with the necessary strength to weight ratio seemed unlikely. And the proposed elevators were much too heavy for foreseeable launch technologies. This changed with the arrival of carbon nanotubes, and the proposal by Edwards [1] of a space elevator concept that would only ∗ Institute for Scientific Research, Inc. †This paper was first published in Proc. of the 3rd International Space Elevator Conference, June 2004, reprinted on arXiv by permission of Bradley Edwards former director at ISR. ‡The author may be contacted at . require a few tons of material to be lifted by conventional launch means. To date, studies have considered that the space elevator would be anchored on the equator, as that is where the equilibrium of the tether is easiest to understand. Clarke [5] even considered that the elevator would have to be placed on the equator at a local minimum of the Earth's geopotential. In fact, there is no reason for such limitations, and non-equatorial space elevators can present a number of advantages: • There is increased flexibility in the selection of the anchor location. • The tether does not have to go through the heavily occupied (geo)synchronous orbit. • The tether avoids the areas of most intense radiation of the Van Allen belts [6]. This is particularly important when considering the transportation of humans. • The tether can avoid the Martian moons (in the case of a Martian elevator). Figure 1 shows a diagram of a non-equatorial space elevator. Three forces hold it in equilibrium: gravity, centrifugal force, and the tension at the anchor. The major difference with equatorial elevators is that the elevator is located entirely on one side of the equatorial plane. Therefore, gravity tends to pull the elevator towards the equatorial plane. This tendency is countered by the force applied by the anchor, allowing the elevator to be in equilibrium. In this paper we will be considering uniform-stress space-elevators. A uniform-stress tether is a tether in which the cross-section is varied (tapered) in order to keep the stress in the tether uniform. Space elevators are generally designed to have uniform stress as this maximizes the ratio of payload mass to elevator mass. To understand off-equator space elevators, we will first review the static equations that any uniformstress tether must satisfy, in Section 1. Then we will 15 Sep 2025 Counterweight Equator Axis of Rotation Anchor ψ y x θ φ s ⃗r Figure 1: A non-equatorial space elevator, and some of the angles that are used to characterize it. apply these equations to the relevant case of a rotating Coulomb potential in Section 2. In Section 3 we will study the problem of determining the maximum latitude a space elevator can start from. Finally, Section 4 covers a few practical concerns that are specific to non-equatorial space elevators. All the notation that is used in this paper can be found in Table 1. Examples will often use values that are typical for Earth and the tethers considered by Edwards [1]. Table 2 summarizes these values. 1 Equations for a Static Uniform Stress Tether First we introduce the equations for a static uniformstress tether in a potential V . d⃗r ds = ⃗T T (1) d⃗T ds = ρ A ⃗∇V (2) T = σ0 A (3) In these equations s is a curvilinear coordinate along the tether, ⃗T is the tension that the top part of the tether applies to the bottom part, ⃗r is a position vector, A is the position dependent cross-sectional area of the tether, ρ is the density of the tether, and σ0 is the stress in the tether. Equation (1) expresses that the tether cannot bear any shear load: the tension in the tether must be tangent to the cable. Equation (2) expresses Newton's second law: the change in tension along a small piece of tether must oppose the force due to the potential. Equation (3) expresses that the tether has a uniform stress. Because the tether is uniformly stressed, we need not consider elastic effects as they can be incorporated into σ0 and ρ. Equations (2) and (3) can be combined to eliminate the area of the cable. d⃗T ds = ρT σ0 ⃗∇V (4) 1.1 Taper Profile First we shall look at the tangential part of the static equations. Integrating them will lead to a relation between the cable cross-section A and the local potential V . First, we take the dot product of (4) with d⃗r, divide by T , and use (1) to simplify. dT T = ρ σ0 d⃗r · ⃗∇V (5) Integrating we get an expression for T and therefore for A. T T0 = A A0 = e ρ σ0 V (6) This formula shows that the area of the tether at a given position is directly a function of the potential V at that position. If ∆V is the difference in potential energy between the base of the tether and the point where the potential energy reaches a maximum, then we can express the taper ratio of the tether as: Amax A0 = e ρ σ0 ∆V (7) Symbol Description s Curvilinear coordinate along the tether. ⃗r Position vector of a point on the tether. r Distance from the center of the planet. r⊥ Distance from the rotation axis of the planet. rs Distance from the center of the planet to the synchronous altitude. ρ Density of tether material under stress. σ0 Target stress in tether. A Cross-sectional area of tether. ⃗T Tension applied by the top part of the tether to the bottom part. m Mass of the counterweight. V Potential field the tether is placed in. θ Angle between the equatorial plane and the position vector ⃗r. φ Angle between the equatorial plane and the tangent to the tether. ψ Angle between the tangent to the tether and the position vector ⃗r ˆeφ Unit vector in the direction of increasing φ. G Gravitational constant. Mp Mass of the planet. Ω Angular velocity of planet rotation. V0 Characteristic specific energy of the rotating Coulomb potential field. ∆V Difference in potential between the base of the tether and the point where the potential is greatest. ⃗ ̃g Combined gravitational and centrifugal field, in normalized form. α Tether shape factor. P/M Payload to tether mass ratio. (⃗v)⊥ Part of some vector ⃗v that is normal to the tether. ̃d Distance d in units of rs. ̆d Distance d in units of αrs. x0 The value of variable x at the anchor point (except V0 and σ0). Table 1: Notation that is used in this paper. From this expression we can introduce a taper parameter ρ σ0 ∆V that characterizes the difficulty of Symbol Typical Value G 6.67 · 10-11 SI Mp 5.98 · 1024 kg Ω 7.29 · 10-5 rad/s V0 9.46 · 106 J/kg rs 42.2 · 106 m r0 6.38 · 106 m ̃r0 0.151 ρ 1300 kg/m3 σ0 65 · 109 N/m2 α 0.189 Table 2: Typical values for the Earth and Edwards' tether parameters [1]. When nothing is specified, these values are used for examples. building a uniform-stress structure across a potential difference ∆V . When it is much smaller than one, almost no tapering is necessary. When it is much larger than one the taper ratio becomes prohibitively large. This taper parameter is closely related to the ratio of Pearson's characteristic height [4] to the geosynchronous altitude. 1.2 Tether Shape Equation Projecting (4) onto a direction tangent to the tether allowed us to determine the taper profile. We now project perpendicularly to the tether direction and combine with (1) to determine the shape the tether adopts in the gravitational potential: d2⃗r ds2 = d ˆT ds = ρ σ0 ⃗∇V ⊥ (8) where ˆT = ⃗T/T is a unit vector tangent to the tether, and (⃗∇V )⊥denotes the projection of ⃗∇V perpendicularly to ˆT. Equation (8) determines the tether's curvature. The tether curves towards areas of higher potential, and the curvature is proportional to the component of the gravity field that is normal to the tether. This interpretation becomes more apparent in the case of a planar tether where we can identify the direction of the tether by an angle φ so that dφ ds = ρ σ0 ˆeφ · ⃗∇V (9) where ˆeφ is a unit vector in the direction of increasing φ. 1.3 Boundary Conditions To have a complete description of the static tether, we additionally need to consider boundary conditions. On one end, the tether is attached to the anchor point on the planet. The anchor simply needs to provide a force equal to the tension in the tether to keep the tether static. If the base of the tether isn't vertical then there will be a horizontal component to this force, so the tether will tend to pull the anchor sideways (see Section 4.2). From equation (6) we know that the tension in the tether never goes to zero. Therefore, the free end of the cable must have a force applied to it to balance the tension in the cable. That force is provided by a counterweight of mass m which must satisfy: ⃗T = -m⃗∇V (10) Thus the counterweight must be located in such a way that the tether is tangent to the local gravity field. 2 The Rotating Coulomb Potential So far we have considered a uniform stress tether in an arbitrary potential V . To make further progress, we will now consider the specific potential that applies in the case of the space elevator attached to a planet. Because we are considering the statics of the tether, we have to place ourselves in a reference frame that is rotating with the planet. Thus the potential we are interested in is a combination of the Coulomb potential of the planet's gravity and the centrifugal potential due to the planet's rotation around its axis: V = -GMp r -1 2r2 ⊥Ω2 (11) In this equation G is the gravitational constant, Mp is the mass of the planet, Ωis the angular velocity of the planet's rotation, r is the distance to the center of the planet, and r⊥is the distance to the axis of rotation of the planet. 2.1 Planar Tether Profile One of the first things we note about the potential is that it is invariant by reflection about planes that contain the planetary axis of rotation. This invariance must also apply to the resulting acceleration field. Thus, the forces caused by the potential will all be in a plane containing the axis of rotation and the point at which they are applied. Therefore, if we consider a plane containing the axis of rotation of the planet and the counterweight, we find that the tension in the tether at the counterweight is in that plane. As we move down the tether, the forces acting on the tether are in that plane, so the tether remains in that plane all the way to the anchor. We conclude that the shape of the space elevator will be planar, even in the non-equatorial case. This greatly simplifies the problem to be solved, as we can now work in two dimensions in a plane that is perpendicular to the equatorial plane. 2.2 Non-Dimensional Problem Reducing a problem to non dimensional form is an excellent way of extracting the physically important parameters. We now apply this methodology to the space elevator. First we note that the potential can be written in terms of the synchronous radius rs = (GMp/Ω2)1/3 and the characteristic potential V0 = (GMpΩ)2/3 in the form: V = -V0 rs r + 1 2 r2 ⊥ r2s (12) Thus, rs, the distance from the center of the planet to the synchronous orbit, is the natural distance scale for this problem. We shall therefore rewrite (8) replacing all distances d by normalized distances ̃d = d/rs, and inserting the expression for V from (12): d2⃗ ̃r d ̃s2 = ρV0 σ0 ⃗ ̃r ̃r3 -⃗ ̃r⊥ ! ⊥ = -α ⃗ ̃g ⃗ ̃r ⊥ (13) This is the differential equation that determines the shape of the tether in a rotating Coulomb potential. This equation contains a single scalar parameter α which we shall call the shape parameter. α = ρV0 σ0 (14) The shape parameter is the ratio of the characteristic potential of the potential field to the strength to weight ratio of the tether material. It also naturally appears in (5), (6) and (7) when they are applied to the rotating Coulomb potential. The shape parameter determines how deep it is possible to go into the normalized potential well before the taper ratio becomes excessively high. Indeed, well below the synchronous altitude, ∆V ≈V0rs/r, so the taper parameter is approximately α/ ̃r. Thus for α ≪ ̃r, the taper ratio is close to 1, while for α ≫ ̃r, the taper ratio is gigantic. In the case of the Earth and the tether parameters from [1], α ≈0.189 and ̃r ≈0.151. Thus we are close to the limit of feasibility. 2.3 Solving the Shape Equation Qualitatively To get an idea of the solutions of (13) that satisfy the boundary condition (10), it is useful to study Figure 2 which is a plot of the vector field ⃗ ̃g. We shall assume without loss of generality that the anchor is in the upper right-hand quadrant (x > 0 and y > 0). More complete derivations can be found in [7]. To begin, we note that the North-South component of the field always points towards the equator. This has two consequences. First, because of (4) the y component of ⃗T satisfies: y dTy ds > 0 (15) Second, because of (10), the tip of the tether at the counterweight has to be sloped towards the equatorial plane (i.e., yTy 0.5 we start to see significant deviation from (23). With larger ̃r0 higher latitudes than expected can be reached. 4 Practical Considerations So far we have considered whether it was possible to have a space elevator at a given latitude by considering the static tether equation. In practice other considerations will further limit the latitude that can be reached. 4.1 Payload to Elevator Mass Ratio One of the major concerns in space elevator construction is the ratio of payload mass to elevator mass. Indeed, this ratio determines how much material has to be lifted during the elevator construction for a given elevator capacity. We saw in Section 1.1 that the taper ratio of a uniform-stress tether only depends on the change in potential along the tether. The potential is uniform at the surface of a planet, and from Figure 2 the potential changes very slowly near the synchronous orbit. Therefore, the taper ratio for non-equatorial space elevators is almost the same as the taper ratio for equatorial space elevators. In the small planet limit, the angle between the tether and the equatorial plane is small, except possibly near the surface of the planet. Therefore, the length of the tether doesn't depend much on the latitude of the anchor. Moreover, since the potential depends slowly on y, the taper profile of the non equatorial space elevator is nearly the same as the profile for an equatorial one. Therefore, the only significant difference between equatorial and non-equatorial elevators is due to the tension at the base of the tether not being vertical in the non-equatorial case. Since only the vertical component of the tension can be used to lift a payload, and elevators of equal mass have nearly equal tension at their base, we get a reduced payload to elevator mass ratio in the non-equatorial case: (P/M)off-equator ≈(P/M)equator cos(ψ0) (24) Thus to maintain payload when leaving the equator means one has to multiply the elevator mass by 1/ cos(ψ0). For small inclinations at the anchor this inefficiency is insignificant. But approaching inclinations of 90° is clearly out of the question. 0 10 20 30 40 50 60 0 10 20 30 40 50 60 70 80 90 100 90 80 70 60 50 40 30 20 10 ψ0 θmax σ0(GPa) Figure 7: Maximum reachable latitude for an Earth space elevator as a function of σ0, for different values of the inclination ψ0. The designer of Earth space elevators will find Figure 7 useful to quickly determine the maximum elevator latitude as a function of maximum inclination at the anchor. For ease of use σ0 has been used instead of the shape parameter, the tether density is assumed to be fixed at 1300 kg/m3. 4.2 Horizontal Force on Anchor In addition to reducing the payload to mass ratio of the elevator, the inclination at the tether's base causes a horizontal force to be applied to the anchor platform. This force is simply given by: F = T0 tan(ψ0) (25) If the anchor is a mobile ocean platform, this force will need to be countered or else the anchor will drift towards the equator. For heavy lift elevators significantly offthe equator, this force will have to be taken into account when selecting the anchor platform. 4.3 Stability An equatorial tether with very high elasticity (low Young's modulus), or with a small extension beyond the synchronous altitude can be unstable [7]. For equatorial space elevators, we would be considering conditions far from this instability region, so it is of no concern. In the case of non-equatorial elevators, the curvature at the base of the elevator should cause a reduction in the axial stiffness of the tether as seen from the counterweight. We can therefore conjecture that the frequency of the elevator modes will decrease for a non-equatorial elevator. This could cause the instability to occur for realistic tether parameters. We conjecture that as in the equatorial case, the instability will occur for short tethers, near the boundary between (II) and (III), where the counterweight is just beyond the synchronous altitude (see Section 2.3). However the instability will extend to greater counterweight altitudes. The maximum reachable is determined at the (I)-(II) boundary and should not be affected. However, external effects such as the presence of Earth's Moon may limit the length of the elevator, and thus limit the latitude. 4.4 Deployment During the initial deployment phase, there is no contact between the tether and the Earth, which takes away the only force keeping the elevator away from the equatorial plane. This leaves two possibilities for deploying a non-equatorial space-elevator. • A propulsion system can be attached to the bottom of the tether during deployment to keep the tether away from the equatorial plane. For a 10° latitude, and a one ton elevator this option would require hundreds of newtons of thrust over a period of days and is therefore impractical. • The tether can be deployed in the equatorial plane, attached to the anchor platform, and then moved to its final location away from the equator. If an off-equator location has been selected to avoid interfering with geosynchronous satellites, the initial deployment can be done at a longitude where there is an available geosynchronous slot, after which the elevator can be moved to its final latitude. 4.5 Tether Shape Determined Numerically Three of the applications we mentioned for nonequatorial space elevators were the avoidance of particular areas in the equatorial plane. In this paper we have not pushed the analysis of (13) far enough to determine whether these obstacles are indeed avoided. 0 1000 2000 3000 4000 5000 0 20000 40000 60000 80000 100000 y (km) x (km) 0 1000 2000 3000 4000 5000 0 20000 40000 60000 80000 100000 y (km) x (km) 0 1000 2000 3000 4000 5000 0 20000 40000 60000 80000 100000 y (km) x (km) 0 1000 2000 3000 4000 5000 0 20000 40000 60000 80000 100000 y (km) x (km) Figure 8: Numerical solutions of the shape equation for the Earth with the standard tether parameters. The length of the tether was set to 90,000 km. Starting latitudes of 10°, 20°, 30° and 40°. Note that the scale is different along the x and y axes. Figure 8 shows some numerical solutions. They suggest that avoiding the geosynchronous satellites is easy. On the other hand, the most intense areas of the radiation belts extend over 2,000 km above the equatorial plane, which only highly inclined elevators can avoid. Conclusion In this paper we have presented the equations that govern the statics of non-equatorial uniform-stress space elevators. These equations have been reduced to a non-dimensional form allowing the analysis to be applied to any tether and planetary parameters. The tether's taper profile has turned out to be easy to compute as in the equatorial case, once its the spatial configuration is known. Unfortunately, the spatial configuration is difficult to obtain analytically. Of particular interest to the elevator designer is the maximum anchor latitude for a non-equatorial elevator. This problem has been solved in a few limit cases, and an approximate formula has been proposed. Offthe equator, the tether is not vertical at the base of the elevator. This causes a reduction in payload, which is the major engineering cost of being off the equator. It also causes a horizontal force to be applied to the anchor station, which much be taken into account for an ocean platform. This study has ignored dynamic effects, in particular the stability of off-equatorial elevators has to be checked. For small latitudes the stability of equatorial elevators carries over, but we expect instabilities to occur in a wider range of conditions than in the equatorial case. This remains to be verified. The effect of climbers on the tether also needs to be studied. Acknowledgements I would like to thank Val ́erie Leblanc for the time she spent proof reading and pointing out bugs. References [1] B. Edwards and E. Westling, The Space Elevator. Spageo Inc., 2002. [2] Y. Artsutanov, "V kosmos na electrovoze (into space with the help of an electric locomotive)," Komsomolskaya Pravda, 1960. [3] V. Lvov, "Sky-hook: Old idea," Nature, vol. 158, no. 3803, pp. 946-947, Nov. 1967. [4] J. Pearson, "The orbital tower: a spacecraft launcher using the Earth's rotational energy." Acta Astronautica, vol. 2, pp. 785-799, 1975. [5] A. C. Clarke, The Fountains of Paradise. Warner Books, 1978. [6] A. M. Jorgensen, R. G. Morgan, B. Gassend, R. H. W. Friedel, and T. Cayton, "Space elevator radiation hazards and how to mitigate them," in Proceedings of the 3rd Annual International Space Elevator Conference, June 2004. [7] V. V. Beletsky and E. M. Levin, Dynamics of Space Tether Systems, ser. Advances in the Astronautical Sciences. Amer Astronautical Society, August 1993, vol. 83.
|
2509.16233
|
1
Comparison of Deterministic and Probabilistic Machine Learning
Algorithms for Precise Dimensional Control and Uncertainty
Quantification in Additive Manufacturing
Dipayan Sanpui1,2, Anirban Chandra1,2,*, Henry Chan1,2, Sukriti Manna1,2 and Subramanian K.R.S.
Sankaranarayanan1,2
1 Center for Nanoscale Materials, Argonne National Laboratory, Lemont, Illinois 60439, United States.
2 Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, Illinois 60607,
United States.
*Currently at Shell International Exploration and Production Inc., Boston, Massachusetts, 02210, United
States
Abstract
We present an accurate estimation of the dimensions of additively manufactured components by
adopting a probabilistic perspective. Our study utilizes a previously gathered experimental dataset,
encompassing five crucial design features for 405 parts produced in nine production runs. These
runs involved two machines, three polymer materials, and two-part design configurations. To
illustrate design information and manufacturing conditions, we employ data models that integrate
both continuous and categorical factors. For predicting Difference from Target (DFT) values, we
employ two machine learning approaches: deterministic models that offer point estimates and
probabilistic models generating probability distributions. The deterministic models, trained on
80% of the data using Support Vector Regression (SVR), exhibit high accuracy, with the SVR
model demonstrating precision close to the process repeatability. To address systematic deviations,
we introduce probabilistic machine learning methodologies, namely Gaussian Process Regression
(GPR) and Probabilistic Bayesian Neural Networks (BNNs). While the GPR model shows high
accuracy in predicting feature geometry dimensions, the BNNs aim to capture both aleatoric and
epistemic uncertainties. We explore two approaches within BNNs, with the second approach
providing a more comprehensive understanding of uncertainties but showing lower accuracy in
predicting feature geometry dimensions. Emphasizing the importance of quantifying epistemic
uncertainty in machine learning models, we highlight its role in robust decision-making, risk
assessment, and model improvement. We discuss the trade-offs between BNNs and GPR,
considering factors such as interpretability and computational efficiency. The choice between these
models depends on analytical needs, striking a balance between predictive accuracy,
interpretability, and computational constraints. In summary, our analysis of an additive
manufacturing dataset through the lens of a Probabilistic Bayesian Neural Network (BNN) and the
simultaneous quantification of both epistemic and aleatoric uncertainties provides a robust
foundation for advancing manufacturing design.
2
List of Abbreviations
Abbreviations
Meaning
DLS
Digital Light Synthesis
DFT
Difference From Target
BNN
Bayesian Neural Network
AM
Additive Manufacturing
SPC
Statistical Process Control
DOE
Design of Experiments
MCMC
Markov Chain Monte Carlo
VI
Variational Inference
SVR
Support Vector Regression
XGBoost
Xtreme Gradient Boosting
GPR
Gaussian Process Regression
RMSE
Root Mean Squared Error
MSE
Mean Squared Error
RF
Random Forest
LGBM
Light Gradient Boosting
MLP
Multi-layer Perceptron
ML
Machine Learning
SHAP
Shapley Additive Explanations
NN
Neural Network
UMA
Urethane Methacrylate
EPX
Additive Epoxy
RPU
Rigid Polyurethane
GP
Gaussian Process
MC Dropout
Monte Carlo Dropout
UQ
Uncertainty Quantification
Introduction
The decision-making process of acceptance or rejection of produced part is critical in a production
pipeline and mainly depends on the dimensional accuracy. The additive manufacturing techniques,
useful in fabrication of intricate geometries; is highly flexible and undergoes frequent parametric
variations. Oftentimes, traditional methods [1-5] for measurement of the produced parts are time-
consuming and incur higher monetary investments. While the conventional methods for part
quality assessment are sometimes challenging, recent advancements in measurement techniques
that involve automated measurements and real-time analysis of data offers significant benefits in
the measurement process; often termed as smart metrology-based approach [6].
An existing challenge in the smart metrology-based approach is handling a high-dimensional
dataset having uncorrelated processing conditions. In the recent past, data-driven methods have
been successful in unleashing the interrelations between manufacturing conditions and
dimensional accuracy [7]. Amongst the data-driven methods, deterministic regression methods [8-
11], are popular in dimensional accuracy prediction since the implementation is quite
3
straightforward and computationally efficient. However, possess certain limitations while
addressing the uncertainty inherent within the measurements. The lack of reliable uncertainty
estimates hinders the direct use of the deterministic machine learning algorithms, for
manufacturing and materials science-based applications.
It is imperative to estimate uncertainties to provide scientific and engineering decision-makers
with predictive information as well as quantitative information regarding how accurate the
predictions are. GUM (Guide to the expression of Uncertainty in Measurement) [12,13] is an
internationally accepted master document that provides guidelines to assess the uncertainties
associated with various sources of error in measurements. The steps for the uncertainty budget
include the sources of uncertainties, classification, determination of standard uncertainty and
combining them. Another two distinct approaches for uncertainty evaluation are the propagation
of distributions through Monte Carlo approaches [12] and Bayesian uncertainty evaluation [13].
Propagation of distributions through Monte Carlo evaluation is a method for determination of
uncertainty when the inputs are uncertain and randomly sampled for propagation of uncertainty
within the model itself. Instead of relying solely on the analytical methods, the Monte Carlo
approaches use random sampling for propagation of uncertainty, thus allowing for a more detailed
and accurate characterization of output distribution. While the Bayesian uncertainty evaluation
relies on the prior beliefs or knowledge, in most cases considered as standard normal distributions
followed by updating the knowledge with new data and provide a full probability distribution for
the quantity of interest.
The outcomes of scientific applications including in advanced manufacturing are uncertain, both
on an aleatoric and epistemic level [17-19]. Aleatoric uncertainty refers to the uncertainty present
in the dataset itself and arises due to stochastic experimental design and noise present in the
experimental output. Epistemic uncertainty, also known as subjective uncertainty or knowledge
uncertainty, refers to the uncertainty arising from limitations in our understanding, knowledge, or
information about a system or phenomenon [20]. This type of uncertainty reflects the lack of full
knowledge about the true nature of the system, including its behavior, parameters, or underlying
mechanisms. Epistemic uncertainty can manifest in various forms, such as parameter uncertainty
that arise due to the model parameters, model uncertainty that occurs due to the model architecture
and the input uncertainty, that occurs due to the choice of input features and boundary conditions.
Besides aleatoric uncertainty, which is inherent in the data itself, machine learning applications, in
particular, suffer from epistemic uncertainties that arise from a lack of knowledge or data from
experiments and model hyperparameters.
Based on the theoretical approaches [12,13], relatively a newer perspective towards evaluation of
uncertainty is by utilization of the ML algorithms. Epistemic uncertainty can be reduced by
providing the ML model with more data [21]. In contrast with the primary ML algorithms, that
depends on a lot of information, engineering applications frequently manage limited quantities of
complicated information, bringing about enormous epistemic uncertainties. Therefore, the
decision maker must characterize both the uncertainties, aleatoric and epistemic while making
predictions. In this context, probabilistic ML models serve the purpose of the prediction of both
uncertainties [22]. A widely known ML algorithm that incorporates uncertain predictions is
Gaussian processes (GPs), inherently derived from the Bayesian learning framework [23]. In
recent years, the integration of neural networks with the Bayesian learning framework has become
4
popular among the machine learning research community [24]. Previous literature suggests that
there are different ways to incorporate Bayesian inference within neural networks [25]. Bayesian
Neural Network (BNN) makes use of Markov Chain Monte Carlo (MCMC) for approximation of
the prediction as a probability density function (posterior pdf). Alternatively, the variational
inference (VI) approximates the posterior pdfs via distance metrics which can be computed
analytically and easy to sample [23]. Another probabilistic approach is MC Dropout [31], which
uses neural networks to estimate the uncertainty involved within the model training and alleviate
the issue of overfitting. This probabilistic model mimics the Bayesian-like behavior by dropping
out neurons in the network randomly. This algorithm is computationally less costly, but several
researchers reported the underperformance of this algorithm [18]. The MCMC method usually
does not scale well with the model weights and parameters, and thus the convergence is intractable
considering the presence of complex integrals for approximation of posterior distributions [31].
VI frameworks integrated with neural networks assume a form of the posterior distribution after
each epoch and are thus less accurate than MCMC in their estimation of the posterior distribution,
but they are typically faster via optimization; thus, easily leveraged with neural network training,
and convergence is fast. Currently, uncertainty quantification for additive manufacturing
dimensional predictions lacks extensive exploration using probabilistic regression algorithms [32].
Incorporating Probabilistic BNN-based regression into uncertainty quantification efforts can
address the gaps by providing a powerful tool for modeling complex relationships, handling
uncertainty, and offering real-time adaptability and interpretability in the context of additive
manufacturing dimensional predictions.
In this work, we compare algorithms for probabilistic training of neural networks based on
variational Bayesian inference, while using the experimental additive manufacturing dataset [16]
for fabrication and measurement of produced parts using Digital Light Synthesis (DLS). We first
select diverse non-linear deterministic models for the prediction of dimensional accuracy. While
separation of data and model uncertainties are cumbersome using deterministic ML algorithms,
probabilistic ML algorithms excel in quantification of both aleatoric and epistemic uncertainty. The
remainder of the paper is structured as follows. In Section 2, we present the details of the
experimental dataset that we use for the study and describe our implementation techniques for the
ML algorithms, followed by Results and Discussions in Section 3. Section 4 concludes with a
summary of our observations, concluding remarks, and future scopes.
2. Methods
2.1 Description of the Dataset
We utilize an additive manufacturing dataset prepared by McGregor et al. [16], that consists of
information for the fabrication and measurement of produced parts using the Digital Light
Synthesis (DLS) [40] method. The DLS method is a vat-polymerization technique that utilize
digital light projection and oxygen permeable optics for continuous fabrication of highly intricate
and detailed parts from photosensitive resin. For collection of the dataset and exploration of
different manufacturing parameters, McGregor et. al. [16] fabricated a total of 405 parts. Two
Carbon M2 printers were utilized to fabricate the parts, each having unique hardware sets (machine
ids). Three unique part designs, namely clips, plugs and brackets were manufactured. The build
area was divided into three different segments; one-third for each segment for each part design,
5
made with same material. Two organizational layouts were adopted for the experiments, namely,
A and B. Parts were arranged in one of the two organizational layouts, in such a manner that the
location of each design cluster was different. For example, the layout A was organized with 15
clips on the left side, 15 plugs in the middle, and 15 brackets on the right side. Similarly, the layout
B was organized with 15 brackets on the left side, 15 clips in the middle and 15 plugs on the right
side. The part designs, i.e.- clips, plugs and brackets were fabricated in a batch process We present
a detailed schematic of the experimental design in Fig 1. Therefore, in each batch, 45 parts were
manufactured, and 9 experiments were performed that resulted in a total of 405 parts. Five
measurements were taken for each manufactured part, which resulted in a total of 2025
measurements. Three different polymer materials were used for fabrication of the produced parts,
namely urethane methacrylate (UMA); rigid polyurethane (RPU) and additive epoxy (EPX). In
this work, we predict the dimensional accuracy of manufactured parts, the key metric for the
prediction is Difference from Target (DFT), which serves as the target variable from the dataset
and is the measure of dimensional variations of the produced parts from a reference CAD
geometry. Initially during experiments [16, refer supplementary information], duplicate
measurements were taken from repeated scans and one average measurement was reported for
each of the 7290 measured features. Depending on the design and measured features per part; with
subsequent down sampling of features reduced the initial dataset from 7290 measurements to 2025
measurements. In addition to the measured dimensions of five features on each part, the data set
also provides with details on the manufacturing parameters and descriptors associated with each
feature.
Corresponding to each measured part dimension, there are 13 input features, mixture of
continuous and categorical variables. Amongst those 13 input features, eight of them are
manufacturing parameters and rest of the inputs are the geometrical descriptors of the measured
part. The manufacturing parameters are the hardware settings, material, thermal cure, cartesian and
radial co-ordinate locations within the build. Rest of the input features were the measured feature
descriptors, i.e.- feature included nominal dimension, feature category, part design, unique feature
ID, and feature class. In reference [27], McGregor et al. performed an analysis to study the relative
importance of input features and their contributions towards feature DFT predictions. Amongst the
total of 13 input features, they observed 8 input features contribute significantly towards the output
prediction. Therefore, we choose 8 input features, where six were the manufacturing parameters
and rest of the input features were the measured feature descriptors, i.e.- feature class and feature
category. We convert the categorical variables into continuous input features using one-hot
encoding. The feature category is the topological descriptor for a feature category class, i.e.- inner
or outer length of the diameter. The feature class amongst the measured feature descriptor was
either thickness, length, diameter, or height. In Fig. 1 and 2, we present an overview of the
experimental design and a detailed schematic of the input features and output/target variable along
with the part designs respectively.
6
Fig 1: Representation of the experimental design. a Three different parts were fabricated namely, plug, clips and
bracket, using three different materials, namely urethane methacrylate (UMA); rigid polyurethane (RPU) and additive
epoxy (EPX). b The build area was divided into three segments, one-third for each part design. Two organizational
layouts were adopted namely A and B. In each experiment, the layout A was organized with 15 clips on the left side,
15 plugs on the middle and 15 brackets on the right side. Similarly, the layout B consisted of 15 brackets on the left,
15 clips in the middle and 15 clips on the right side of the build area. Each build area consisted of 45 parts, a total of
9 experiments were performed which produced a total of 405 parts with five measurements for each part, that resulted
in a total of 2025 measurements within the dataset.
Fig 2: A detailed representation of the input and output features. a) Computer Aided Draft (CAD) designs
of the additively manufactured parts using the DLS method. Three different part designs with the five critical
dimensions/features A-D are shown in the figure. The measured features A-D signify the thickness, length,
diameter, and out-of-plane height of the manufactured part. b) We use the existing experimental dataset prepared
by McGregor et al. [27] and use it for training of the ML models. The inputs for training the ML models consists
of a mixture of numerical and categorical variables. For a clear representation we demarcate between the
manufacturing parameters and the measured dimensional/feature descriptors describing the topology of the parts.
As the output from the ML models, we predict the Difference from Target (DFT), signifies the dimensional
deviation from the refence CAD geometry.
7
Within the realm of advanced manufacturing methods, the development of a machine learning
framework to predict the dimensions of additively manufactured built parts involves a supervised
regression or classification task [9,10,11]. When provided with a set of input-output data, the
objective is to develop a regressor function 𝑓 (·) that can accurately predict the output 𝑦 = 𝑓 (𝑥)
for a new input 𝑥. The input feature vector, such as structured input data, or it may represent an
image from a microscope of different phases in a microstructure and corresponding microstructure
features, for example, image input data/microstructure feature distributions [14,15]. The output 𝑦
represents the target feature of interest, the DFT values of the built parts. Our work uses both
deterministic and probabilistic ML models for the prediction of the dimensional accuracy of AM
parts. We use the dimensions of the feature descriptors along cartesian and radial coordinates as
continuous inputs to the model. Manufacturing parameters and measured feature descriptors were
the categorical inputs to the models for the prediction of dimensional accuracy.
An important factor to consider is the random variance present within the underlying data that
limits the ability of the model to make accurate predictions. For the experimental dataset we
utilized, different polymeric materials used for fabrication of the parts possess repeatability. The
weighted average of the repeatability of materials utilized for fabrication of the parts is ±47 𝜇𝑚.
Moreover, the measurement uncertainty during curation of the data is ±15 𝜇𝑚. Besides, the
combined estimate of production repeatability and measurement uncertainty is ±50 𝜇𝑚 (root sum
of squares). This estimate indicates that an ideal regression method while evaluated with the test
data might achieve a minimum of 50 𝜇𝑚 root mean squared (RMSE) value for DFT predictions.
Furthermore, for the target feature data distribution, the standard deviation is 180 𝜇𝑚 and serves
as to provide as a baseline for evaluation of the ML regression methods utilized. Smaller prediction
accuracy than this baseline value shows the utility of incorporated ML methods. However, a critical
fact is that the target feature; dimensional deviations of the measured feature from a reference
geometry/Difference from Target (DFT) is analogous to the prediction of random variance inherent
in the data; which is a combination of both production repeatability and measurement uncertainty.
It is noteworthy, that while we utilize deterministic regression methods for DFT predictions, we
actually predict the random variance within the data, which can be attributed to aleatoric
uncertainty.
During experiments, in real-time manufacturing conditions, there are some factors that affects the
process itself. For instance, an unrecognized factor like a calibration error in a machine causes
consistent deviations (bias) in production outcomes. Similarly, this scenario can be compared with
a probabilistic regression method that learns distributions over weights and quantifies uncertainty
based on the model hyperparameters and amount of data provided. Both of the scenarios can be
attributed to systematic variance/biases. We explicitly try to evaluate the aleatoric (random
variance) and epistemic (systematic variance) uncertainties by implementing Bayesian Neural
Networks.
Fig.3 shows a consolidated representation of framework, which we implement for the training and
evaluation of multiple ML algorithms. An ML algorithm is the underlying computational method
used to identify interdependencies between data and predictions; for example, Random Forest (RF)
is an ML algorithm. It is noteworthy that, ML model is an instantiation of an algorithm with a
particular set of hyperparameters and subject to the distinctive data that is sampled for training.
8
Fig 3: A consolidated representation of the ML methods we use for our work. a, b, c We use the existing
experimental dataset prepared by McGregor et al. [27] for the DLS method and utilize it for training of the ML
models. The inputs for training the ML models consists of both numerical and categorical variables followed by
hyperparameter tuning of parameters. d We use both deterministic and probabilistic ML models for training
purpose. e Furthermore, we test the trained deterministic models, amongst which SVR and XGBoost shows
nearly identical performance and gives a point estimate as prediction. While testing the probabilistic ML models
we use two different ML models that quantify the measurement uncertainty, while the GPR model can only
characterize the aleatoric uncertainty, BNN’s are able to characterize both the aleatoric and epistemic uncertainty.
2.2 Deterministic ML algorithms
A similar approach was followed for the implementation of deterministic models as described in
Ref [16]. We utilize Dual Monte Carlo subsampling method comprises of two nested loops and
subsequent steps to process data, including subsampling/data splitting, normalization and tuning
of hyperparameters. In the outer loop, the data is randomly split into training, test and
unused/holdout sets, ensuring that the training data is not used for testing purposes and the
withheld unused split for test data efficiency evaluation. The inner loop utilizes nested k-fold cross-
validation of data over choosing the regular k-fold cross-validation. The key advantage of the
nested k-fold cross-validation over the regular method is reduced overfitting, unbiased
performance estimation, better hyperparameter tuning, and reliable model comparison. Thereafter,
we use the hyperparameter-tuned model to predict the geometry of unmeasured features using the
test data. The inner loop iterates multiple times (50 iterations), training and evaluating a new model
at each stage. After performing multiple iterations, we average over all the expectations of model
accuracies at each iteration and estimate the overall performance (evaluation based on the withheld
test set and reported as RMSE) of the model for a distinctive set of inputs. Reshuffling of the data
was performed in the outer loop (3 iterations) and the process repeats itself through the inner loop.
In section 1 of SI, we elaboratively discuss the dual Monte Carlo subsampling method, along with
the pictorial representation of the method in Fig. S1.
Initially, we evaluate seven deterministic regression algorithms using Scikit-Learn [28], a Python
package for machine learning, for the prediction of the feature DFT of additively manufactured
9
parts. The different deterministic ML algorithms that we used are k-nearest neighbors, Support
Vector Regression (SVR), decision trees, and generalized decision tree-based algorithms such as
Random Forest, Gradient Boosting, Extreme Gradient Boosting (XGBoost) [29,30] and the multi-
layer perceptron (MLP) regressor [31]. In section 3 of SI, we discuss the different deterministic
ML algorithms, their working procedure and the hyperparameters we use to train the models.
We report the best results from the SVR and XGBoost algorithms, as they provide us with nearly
identical performance; discussed in more detail in the results section. SVR is usually known for
its widespread use, however, XGBoost, a scalable machine learning system for tree boosting, is
relatively new in the machine learning community gives state-of-art results and used for different
applications in the recent past [29,30]. The most important factor behind the near-identical
performance of XGBoost when compared with SVR is its scalability in all scenarios. The
scalability of XGBoost is due to several important systems and algorithmic optimizations [29].
2.3 Probabilistic ML Algorithms
We list and describe below the probabilistic ML algorithms used in this study.
2.3.1 Gaussian Process Regression
Gaussian Process Regression (GPR), as implemented in the Scikit-Learn [28] library, is a non-
parametric machine-learning technique used for regression tasks. It leverages Gaussian processes,
which are collections of random variables, to model the relationship between input features and
target variables. GPR not only provides point predictions but also quantifies uncertainty in
predictions, making it particularly useful in scenarios where understanding and managing
prediction uncertainty is crucial. We use a combination of the kernel functions – Matérn and White
kernels for our work. For the Matérn kernel, we use the length scale (equals 1) and the smoothness
parameter (equals 1.5) as the hyper-parameters. Also, for the White kernel that adds a noise
component to the GPR model, we use noise level (equals 1) and the number of optimizer restarts
(equals 50) as the hyper-parameters. While the implementation of GPR is straightforward, we get
an overall uncertainty estimate, thus separation of uncertainties is not possible. Additionally,
scalability is a concern for a vast amount of data, as the computational requirements for training
and prediction can become prohibitive. Therefore, as an alternative, for trustworthy uncertainty
quantification and scalability, next we use Bayesian learning methods.
2.3.2 Bayesian Neural Networks
Different from a standard neural network, Bayesian neural network incorporates additional
probabilistic information, and the network weights are represented as distributions. In a traditional
neural network (Fig 4(a)), the weights are optimized through the backpropagation mechanism,
yields a point estimate provided with a labeled data. Our study centers on a Bayesian deep learning
approach (Fig. 4(b)) that implements stochastic variational inference derived neural networks with
the integration of Bayesian probability theory [21].
10
Implementation of Bayesian inference calculates the conditional probability of approximating a
posterior distribution given the training data, the approximated posterior distribution calculated is
outlined in equation 1[25]:
𝑃(𝜔|𝐷!") =
#$𝜔%𝐷!"&#(()
#(*!")
=
#$𝜔%𝐷!"&#(()
∫#$𝜔%𝐷!"&#((),( (1)
Where, 𝜔 represent the weights, 𝐷!" refers to the training datapoints. 𝑃(𝜔) signify the prior
distribution; 𝑃(𝜔|𝐷!") represent the likelihood estimation and 𝑃(𝐷!") is the evidence.
Given a set of inputs, the output target variable can be probabilistic predicted, presented in equation
(2) [25]:
𝑃(𝑦7|𝑥̅, 𝐷!") = ∫𝑃(𝑥̅|𝑦7, 𝜔) 𝑃(𝜔|𝐷!")𝑑𝜔
(2)
𝑃(𝑥̅|𝑦7, 𝜔) is the predictive distribution/approximated posterior after each forward pass. 𝑥̅ is the
set of inputs and 𝑦7 is the corresponding output distribution.
In Bayesian neural networks, the uncertainty that arise from the model parameters is termed as the
epistemic uncertainty, which is quantified by learning the probability distribution over model
parameters, i.e.-weights. Initially, a prior distribution is considered before observing the training
data. The posterior distribution is then approximated using the Bayesian inference outlined in
equation 1. However, derivation of the posterior term directly from the is cumbersome due the
presence of intractable multi-dimensional integrals presents in denominator term in equation 1. To
address this, Markov Chain Monte Carlo [31] and Variational inference [23] are developed.
Considering the high-dimensional non-linear input feature space, we implement variational
inference for approximation of the posterior distributions. We utilize TensorFlow-Probability [34]
module from TensorFlow [35] for probabilistic reasoning and statistical analysis, to implement the
probabilistic Bayesian neural networks.
Fig 4: Representation of neural network architectures – deterministic and probabilistic approaches. a The
traditional neural networks (left), as the regression models consisting of layers of elementary non-linear functions,
specified using the network weights and biases that minimize the absolute error between the prediction and the true
value. Traditional deterministic neural networks do not account for uncertainties in model predictions. b Bayesian
Neural Networks learns distributions over weights and gives the output as a distribution with a mean and variance. A
probabilistic BNN works as an ensemble while we use variational inference. BNN’s (right) are designed to account
11
for both the epistemic and aleatoric uncertainties. We provide the details of the terminologies mentioned in the
equations in section 4 of SI.
2.3.2.1 Bayesian neural network with trainable mean and variance – yields a single output
distribution
As an initial step, we assume a prior distribution with zero mean and unit variance (a one-
dimensional tensor) using the ‘IndependentNormal’ (to treat independent distributions as a single
distribution with multiple independent dimensions) layer. Next, we use fully connected layers
followed by batch normalization for computation of parameters. We use a total of three hidden
layers, while the first hidden layer consisted of 24 neurons, the second one included 16 neurons
and the third one included 8 neurons, with Rectified linear unit (ReLU) activation function with
each hidden layer. We approximate a posterior distribution using the ‘MultivariateNormalTril’
layer that define a probabilistic output layer where the output is drawn from a multivariate normal
distribution parameterized by the predictions from the fully connected ‘Dense’ layers with the
utilization of prior distribution. Here we define a one-dimensional ‘MultivariateNormalTril’
distribution that defines 2 parameters (1 for the mean of the distribution and 1 for the variance).
We add a Kullback-Leibler (KL) divergence regularization term that penalize the divergence
between the learned distribution and the prior distribution, encouraging the model instantiation to
maintain proximity between the prior and posterior. The regularization parameter is scaled by the
length of training dataset. We present the abovementioned neural network architecture in Fig. 5.
Fig 5: Representation Bayesian neural network with trainable mean and variance – yields a single output
distribution. As an initial step, a prior distribution (having zero mean and unit variance) was initialized followed by
computations of parameters (weights and bias) in the intermediate layers. The outputs from the dense layers were used
to parameterize a distribution object using the IndependentNormal layer. We choose the event size as one to get a
single probabilistic output distribution.
12
2.3.2.2 Bayesian neural network working as ensemble of networks
We start with assuming an initial prior distribution for the model weights expressed by a set of
multivariate normal gaussian distributions (having zero means and unit variances), a single
multivariate distribution rather than a batch of independent distributions. After each forward pass
the posterior distribution is approximated using the “MultivariateNormalTril” layer that make use
of the multivariate Gaussian distribution with the off-diagonal elements to be non-zero. To build
the neural network model (presented in Fig. 6) with variational inference we use the
“DenseVariational” function along with the input shape, followed by a batch normalization of the
input layer. Instead of finding a single set of optimal weights, it learns a distribution over the
weights that best describe the data. We use 8 neurons in the intermediate “DenseVariational” layer
with the “sigmoid” activation function. Typically, the output is modeled as a mixture of gaussians,
each having a mean and variance. Maintaining the input data from the test data constant and
randomly sampling the weights from the distributions, at different number of iterations, we
determine the uncertainties associated within the data and a specific model instantiation at each
iteration with minimization of the negative log-likelihood loss.
We discuss the details of the probabilistic Bayesian ML model, its background, especially the
implementation of the variational inference, the hyperparameters used in section 4 and
corresponding model summaries in Fig. S3 of SI.
Fig 6: Bayesian neural network working as an ensemble. a Eight features were used as inputs to the probabilistic
regression method, out of which six input features were manufacturing parameters, and rest of the features were
measured feature descriptors. b Initially we define a the prior (consists of a set of gaussians with zero mean and
variance) to define a posterior (a covariance matrix having off-diagonal elements non-zero) distribution. c Utilization
of probabilistic layers for a probabilistic output, yields a set of distributions having unique means and variances. d
The variational inference utilized in this work approximates a posterior distribution and maintains proximity with the
13
prior distribution while minimizing the negative log-likelihood loss. The newly approximated posterior is considered
as the recent prior for the next iteration.
3. Results and Discussions
In our work, we employ two distinct machine learning algorithms for prediction of Differences
from Target (DFT) values. Deterministic ML models provide point estimates, yielding a single
predicted value for each input instance. These models are computationally efficient and
straightforward to interpret, making them valuable for scenarios where precise point predictions
are the primary objective. In contrast, probabilistic ML models output probability distributions
rather than a single-point estimate. This approach offers a more comprehensive representation of
uncertainty, enabling the quantification of prediction intervals and aiding in decision-making under
uncertainty. By employing both deterministic and probabilistic models, we aim to leverage their
complementary strengths to enhance the accuracy and reliability of DFT predictions.
3.1 Deterministic ML algorithms
In the initial investigation in Fig. 7, we evaluate seven different deterministic machine learning
algorithms. We found all of these algorithms possess the capability to predict feature DFT values.
However, nonlinear models tend to exhibit superior performance compared to linear models, even
when the training sample fractions are low. The present study illustrates the most optimal outcomes
obtained from the Support Vector Regression (SVR) and Extreme Gradient Boosting (XGBoost)
methods since they exhibit comparable levels of performance. Support Vector Regression (SVR)
is a commonly used machine learning technique known for its extensive use. In Fig 7(b), we
represent the average performance of the SVR model with corresponding training and testing
sampling fractions, similarly presented in McGregor et. al. [16]. However, in Fig 7(c), XGBoost,
a relatively recent and scalable machine learning system for tree boosting, has demonstrated nearly
identical performance with Support Vector Regression. Fig 7(b) illustrates the performance of the
SVR model in terms of training and testing, employing varying sampling fractions ranging from
10% to 90%. The data points in the graph represent the average root mean squared error values,
the average performance of the model across 50 iterations. The bands in the graph indicate the
standard deviation. We evaluate the accuracy, quantified by calculating the root mean square error
(𝑅𝑀𝑆𝐸) between the predicted feature geometries using feature DFT and the corresponding
measured feature geometries for the withheld testing data. This metric indicates the model's ability
to accurately predict feature geometries. Training performance, also known as fit error, is
quantified by the root mean square error (𝑅𝑀𝑆𝐸) between the model's predictions and the
corresponding measurements obtained from the training data. This metric serves as an indicator of
how effectively the model can capture the patterns and characteristics inherent in the training
dataset. The models that exhibit superior performance are characterized by having the lowest
testing 𝑅𝑀𝑆𝐸, while their training accuracy is either comparable or somewhat smaller than the
test accuracy. Similarly, as presented in reference [16], the SVR model demonstrates a remarkable
ability to minimize prediction errors, reaching values approximately as low as 53 µ𝑚 (refer to
Table 1). This level of accuracy is close to the process repeatability and significantly outperforms
the standard deviation of the data, which stands at 180 µm. The anticipated optimal performance
at 50 µm, is determined by considering the repeatability of the manufacturing process and the
14
uncertainty in measurement, reported by the manufacturer itself [16]. The estimated value of 50
µm is in proximity to the asymptotic threshold of the performance of SVR.
Fig 7(b) illustrates that the predictive accuracy of SVR remains high even when using small
sampling fractions, such as 25%, indicating its exceptional efficiency in utilizing data. As an
illustration, the models that were trained to utilize a sampling fraction of 25% can predict the
geometry of the remaining components, yielding an RMSE of around 65 µm. At smaller sampling
fractions, the inclusion of additional training data yields more meaningful enhancements in
prediction accuracy, but at higher sampling fractions, the gains in prediction error are
comparatively less significant. Insufficient data during model training might be the probable
source, as the performances of training and testing converge when higher sampling fractions are
utilized. Furthermore, in Fig. 7(d) we assess the agreement between the predictions and measured
values of DFT for the SVR model with the measured values for DFT, achieving the lowest RMSE
value of 0.0529 mm.
Table 1: Comparison of performance of different deterministic ML models on test dataset (units
of all the error metrics are in microns (mm))
Algorithm
Average
RMSE
Maximum
RMSE
Minimum
RMSE
Standard
Deviation
Prediction
Range
SVR
0.05290
0.06991
0.04608
0.00559
0.01383
kNN
0.05826
0.06569
0.05031
0.00580
0.01538
Decision Tree
0.07123
0.08164
0.06231
0.00689
0.01933
Random Forest
0.05925
0.06380
0.05558
0.00390
0.00822
GBM
0.05615
0.06241
0.04878
0.00549
0.01363
XGBoost
0.05367
0.07451
0.04665
0.00736
0.01786
LGBM
0.05386
0.06590
0.04264
0.00474
0.01204
MLP
0.05852
0.05108
0.04631
0.00654
0.00477
15
Fig 7: Comparison of predictive capabilities of the different deterministic ML models. a) Comparison of
deterministic ML algorithms at different sampling fractions. The RMSE values across different sampling fractions are
reported on the test dataset. b Performance of Support Vector Regressor (SVR) and c Xtreme Gradient Boosting
(XGBoost) across various testing fractions. The points denote the average performance achieved in training and test
across 50 iterations with corresponding standard deviations as error bands. d Parity plot between measured DFT vs.
DFT predictions for support vector regression as the best performing regression method.
3.2 Probabilistic ML algorithms
We employ two distinct probabilistic machine learning approaches: Gaussian Process Regression
(GPR) and Probabilistic Bayesian Neural Networks (BNNs) for our work. Probabilistic BNNs
offer a flexible framework for modeling uncertainty by providing probability distributions over
model parameters, allowing for robust uncertainty quantification. On the other hand, Gaussian
Process Regression leverages a non-parametric approach [36] and offers a powerful tool for
modeling uncertainty and making predictions. Our study centers on the utilization of probabilistic
models to examine the projected predictions of DFT values while considering the accompanying
uncertainties that deterministic models cannot offer.
3.2.1 Gaussian Process Regression (GPR)
In contrast to conventional regression models that yield a singular point estimate, a Gaussian
Process (GP) regression model offers a probability distribution encompassing potential functions
that may elucidate the data. For our study, we use the Scikit-Learn implementation of GPs that
16
make use of prior over functions (usually assumed to be a zero-mean Gaussian process), updated
based on observed data to form a posterior distribution. We start by splitting the dataset into 80:20
ratios and given a set of input points, GP regression allows us to compute a predictive distribution
over the output values. This distribution gives us not only a mean prediction but also uncertainty
estimates in terms of standard deviation. In Fig 8, we provide a comparison between the mean
predictions using GP regression and ground truth from the test dataset for 50 test samples. In Fig
8, we represent a parity plot between the mean predictions and measured values with
corresponding aleatoric uncertainty for each data point from the mean predictions. We present the
uncertainties from the mean predictions of the DFT values with corresponding standard deviations
at 95% confidence. As we evaluate the GP regression model, we get the test RMSE of 49𝜇𝑚,
which is accurate than acceptable RMSE of 50 𝜇𝑚. Moreover, it is worth noting that while the
Gaussian Process Regression (GPR) can estimate the noise variance inherent in the data, that
represents aleatoric uncertainty, it does not offer insights into model uncertainty [36]. To model
the epistemic uncertainty, there is a typical need to incorporate additional techniques such as the
Bayesian methods or ensemble methods to account for model uncertainty or parameter uncertainty.
Fig 8: Parity plot between measured DFT vs. DFT mean predictions with corresponding aleatoric uncertainty plotted
as error bars for each mean prediction from the GP Regressor model.
3.2.2 Bayesian Neural Networks (BNN)
We train a Bayesian neural network using the experimental training data for the fabrication and
measurement of produced parts using Digital Light Synthesis (DLS) and utilize it for dimensional
accuracy predictions on unseen data. The predictions obtained from the test set, together with their
corresponding uncertainties, are compared with the measured values that were estimated using
experiments. A Bayesian neural network is distinguished by its probability distribution across
weights (parameters) and/or outputs. The code implementation of a Bayesian neural network
exhibits modest variations depending on whether aleatoric, epistemic, or both uncertainties are
considered. In our work, we explore two different approaches for using the Bayesian neural
network, where in the first case that characterize only aleatoric uncertainty and in the other case,
can separate both aleatoric and epistemic uncertainty.
17
3.2.2.1 Preliminary approach – BNN with trainable mean and variance
The objective of our preliminary approach is to account for the inherent noise present in the data.
To achieve this, we propose training a regression model that yields a normal distribution with
trainable mean and variance, a single output distribution. This approach significantly differs from
deterministic regression model and point estimation methods. The model's mean and standard
deviation are trainable parameters, allowing for flexibility in capturing the variability of the data.
To represent a normal distribution based on the output generated by the fully-connected layers, we
employ a probabilistic layer that yields a distribution object as output. Following the completion
of training, we extract 50 test samples and provide a representation displaying the parity between
the mean predictions and measured values of DFT with corresponding standard deviation as the
aleatoric uncertainty in Fig 9.
Fig 9: Parity plot between measured DFT vs. DFT predictions with corresponding aleatoric uncertainty plotted as
error bars for each mean prediction from 50 test data samples with 95% confidence intervals for each mean prediction
for the DFT values.
3.2.2.2 Probabilistic approach – BNN working as an ensemble of networks
On the other hand, in our second approach, we consider evaluation of both aleatoric and epistemic
uncertainty using a probabilistic Bayesian NN model. In the context of probabilistic analysis
discussed in this study, we construct the neural network by combining two elements - a network
architecture and prior distribution over the weights. The selected architecture comprises the
"DenseVariational" layer, containing 8 units that employ the variational inference method to
estimate a surrogate posterior distribution for both the kernel matrix and the bias terms. This
architecture connects 8 input features to a single output unit and employs the non-linear
"sigmoid" activation function. We consider the initial probability density function for the
weights to be a conventional normal distribution. This particular model possesses a greater number
of parameters since each weight is characterized by a normal distribution with distinct mean and
standard deviation values, hence increasing the parameter weight counts. We resample the weights
by iterating multiple times to provide diverse predictions while keeping the input data constant
(without shuffling), therefore causing the neural network model to function as an ensemble. In Fig
9, we separate both the aleatoric and epistemic uncertainties along with the mean predictions from
18
50 test samples and measured DFT values. We assess the uncertainties, particularly the aleatoric
uncertainty (𝜎-./-!0"12), by taking the root mean squared of variances (𝜎1
3) over 200 iterations from
the ensemble of the output distributions. Next, we average over the standard deviations of the mean
predictions (𝜇1) from 200 iterations to evaluate the epistemic uncertainty (𝜎/415!/612). Therefore,
we obtain the uncertainty estimates as a mixture of Gaussians over N iterations (from 200
iterations) and represent those in Eq. 2 and Eq. 3.
𝜎-./-!0"12 = C
7
8 ∑
𝜎1
3
8
197
= C𝑚𝑒𝑎𝑛(𝜎1
3) (2)
𝜎/415!/612 = 𝑠𝑡𝑑𝑒𝑣(𝜇1) (3)
The model provides on-average correct predictions that do not systematically overestimate or
underestimate the true values, while the estimated overall uncertainty closely matches the actual
variability in the data. Also, from Fig. 10(a), we observe there are several datapoints (lies within
the range of ±0.4 to ±0.6 mm) of the predicted and measured DFT values from both the training
and test set that works as outliers and gives a “sigmoid” shape to the predictions. In Fig. 10(b), we
observe the error bars of the mean predictions in the parity plot touch the best-fit line, signifying
the model's consistency in accuracy and bias. This alignment between the model's mean
predictions and the parity line indicates reliable and unbiased performance, making it a favorable
choice as the probabilistic predictive model we use in this work.
Fig 10: Parity plot and uncertainty quantification from DFT predictions while considering BNN working as an
ensemble of networks. a) Sigmoid shape distribution of the entire data while we plot the predicted values against the
true values both from the training and the test set. b) Parity plot between measured DFT vs. DFT predictions with
corresponding aleatoric and epistemic uncertainties plotted as error bars with 95% confidence intervals for each mean
prediction of DFT values on test set.
It is noteworthy that, the predictive accuracy of our preliminary approach is superior when
compared with our second approach, the probabilistic BNN. The probabilistic BNN yields a
predictive accuracy or the test RMSE of 0.107 mm, while the preliminary Bayesian neural network
19
model with trainable mean and variance yields a lesser test RMSE value which is 0.079 mm.
However, it is worth noting that the probabilistic BNN model is successful in quantification of
both aleatoric (±63.96 µm) and epistemic uncertainty (±20.68 µm), while the preliminary
approach only characterizes aleatoric uncertainty (±54 µm). Additionally, the performance
difference between the two Bayesian neural network approaches may stem from factors such as
differences in model complexity, sensitivity towards hyperparameter tuning, the size of the
ensemble in DenseVariational layers, training duration, and the adequacy of posterior sampling.
Experimenting with adjustments to these aspects could help reconcile the performance gap and
enhance the probabilistic approach's accuracy on the test set.
Fig 11 illustrates the predictive capabilities of the network ensemble regarding the mean and
variance predictions of the difference from the target (DFT) when a substantial amount of data
(1215 training data points, equivalent to 80% of train data split) is employed. We observe that with
the decrease in training data points, the variations in the posterior distributions are larger, strongly
specifying the presence of epistemic uncertainty within the trained model. Fig 11 (d to f) provides
more evidence that the anticipated epistemic uncertainty effectively diminishes with an increase
in the quantity of training data, namely from 10% to 50% to 80% training percentage.
Fig 11: Variations in the posterior distribution samples and quantified epistemic uncertainty with varying ratios
of training and validation data. a, b, c illustrates the predictive capabilities of the probabilistic network ensemble
in terms of the posterior probability distributions of the difference from target (DFT). d, e, f parity plots between the
measured and predicted DFT values illustrating the variations in aleatoric and epistemic uncertainties for the train-test
20
split ratios. In this observation, it appears that the epistemic uncertainty remains less while considering reasonable
ratios of training and testing data splits, such as 50% and 80% for training data. As we decrease the training data to
10%, we find the error bars on the mean predictions are prominent, hence confirming the anticipated increase in
epistemic uncertainty.
To provide a conclusive verification, in section 4 of SI, in Fig. S4, the trends in aleatoric and
epistemic uncertainty for the train-validation split ratios. We split the training dataset in different
ratios ranging from a 10% training split to a 99% training split. Here, we observe that the aleatoric
uncertainty exhibits a decreasing pattern with intermittent increase while considering appropriate
ratios of training fractions, such as 50% and 80% for the training data. We observe a consistent
downward trajectory in epistemic uncertainty in conjunction with an increase in the number of
training data points.
Besides aleatoric uncertainty, quantification of epistemic uncertainty using a machine learning
model is critically important as it enables robust decision-making by providing a measure of the
model's confidence and acknowledging areas where it lacks knowledge. Selecting between a
Probabilistic Bayesian Neural Network (BNN) and Gaussian Process Regression (GPR) hinges on
the nuanced trade-offs in our analysis. Despite the BNN achieving lower accuracy or higher RMSE
value, its unique strength lies in its capacity to capture both aleatoric and epistemic uncertainties.
This is particularly valuable in applications like additive manufacturing, where comprehending
and quantifying uncertainty is paramount. The BNN's probabilistic nature equips it to provide
richer insights into the nature of uncertainties, enhancing robust decision-making under ambiguous
circumstances. However, GPR, being more interpretable and computationally efficient, might be
preferred in cases where model interpretability or computational resources are limited. In essence,
the choice between the two models can be dictated by the analytical needs, balancing predictive
accuracy, interpretability, computational constraints, and the necessity of uncertainty
characterization in the problem statement.
4. Conclusions
The current investigation centers on an estimation of dimensions related to the geometry of
additively manufactured components. To emulate the additive manufacturing (AM) production
process, we utilize an experimental dataset comprising 405 parts produced in nine distinct
experiments [16]. Using two different machines, three polymer materials, and two-part design
configurations in each run, McGregor et al. [16] analyzed three distinct part designs, measuring
five crucial features for each. This yielded a total of 2025 feature measurements. Data models are
employed to represent design details and manufacturing conditions, encompassing both
continuous and categorical factors. Our work initiates with utilizing two distinct machine learning
approaches to make predictions on Differences from Target (DFT) values. Deterministic machine
learning models provide point estimates, resulting in a singular projected value for each input
instance. In contrast, probabilistic machine learning models generate probability distributions as
outputs instead of providing a singular point estimate.
While we employ deterministic ML algorithms, we utilize an 80:20 train-test data split with dual
Monte Carlo subsampling [] and nested k-fold cross-validation. We employ seven non-linear
deterministic ML models trained on a randomly selected subset of 405 parts, accurately predicting
21
the geometric properties of unsampled parts. The Support Vector Regression (SVR) model better
accuracy amongst other deterministic methods; around 53 µm (also reported in ref [27]), close to
the manufacturer's process repeatability, and significantly smaller than the standard deviation (180
µm) of the data. Considering an expected ideal performance at 50 µm, we account for
manufacturing process repeatability and measurement uncertainty [37]. This value aligns with the
SVR regressor’s asymptotic threshold.
We utilize two distinct probabilistic machine learning methodologies: Gaussian Process
Regression (GPR) and Probabilistic Bayesian Neural Networks (BNNs). First, we use the Gaussian
Process Regression (GPR) model for feature geometry predictions, which estimates the noise
variance inherent in the data but fails to provide insights about the epistemic uncertainty. Moving
forward, we use the Probabilistic Bayesian neural networks (BNNs), which present a versatile
framework for representing uncertainty through the provision of probability distributions over
model parameters that enable robust quantification of uncertainty. We use two different approaches
to develop the Bayesian NN models. The initial approach addresses the intrinsic noise that is
inherent in the dataset, the aleatoric uncertainty. The mean and standard deviation of the model are
trainable parameters, enabling the model to effectively capture the variability present in the data.
In contrast, our second method involves in the assessment of both uncertainties through the
utilization of a probabilistic Bayesian neural network model. The iterative resampling of the
weights of the probabilistic BNN architecture creates diverse predictions, transforming the neural
network into an ensemble. Our second approach, the probabilistic BNN demonstrates low accuracy
compared to our preliminary approach in predicting feature geometry dimensions, achieving an
accuracy of around 107 µm, while our preliminary approach achieves an accuracy of around 79
µm. However, it is worth noting that the probabilistic BNN model with the higher accuracy value
quantifies both the aleatoric (±63.96 µm) and epistemic (±20.68 µm) uncertainties.
Analyzing an additive manufacturing (AM) dataset through the lens of a Probabilistic Bayesian
Neural Network (BNN) and the concurrent quantification of both epistemic and aleatoric
uncertainties provides a robust foundational platform for an array of promising avenues of
research. As a prospective extension of this work, the refinement of uncertainty modeling within
the BNN through the implementation of advanced Bayesian methodologies [30], holds the
potential for more precise quantification of uncertainties, specifically in AM applications. The
microstructure influences the strength of an additively manufactured component and tailored
microstructures serves as a critical determinant in material properties, performance, and quality.
BNNs provide a probabilistic framework that accounts for the quantification of uncertainty
associated with microstructure predictions. In AM processes, where variations in materials
composition, cooling rates, and printing parameters can lead to significant microstructural
variability, accurate assessment of uncertainty is crucial. BNNs enable the trustworthy estimation
of uncertainty intervals for predicted microstructural features while providing valuable and deeper
insights towards the reliability and confidence of these predictions. The ability of accurate
prediction of microstructural features using BNNs can facilitate the optimization of process
parameters and can play a crucial role in materials selection in AM. By leveraging uncertainty
quantification, it is more transparent towards identification of optimal printing conditions that
minimize defects, such as porosity, grain boundary misalignment, or residual stresses, while
maximizing desired microstructural characteristics, such as grain size, phase distribution, or
mechanical properties, while adversely helping towards the enhancement of the overall quality and
22
performance of additively manufactured components. Repeatability in experimental outcomes is a
crucial factor while identifying anomalies and defects in an additive manufacturing (AM) setup,
even when the processing conditions remain constant. Maintaining nearly identical results under
invariant conditions enables swift detection of any deviations from the anticipated outcome.
Probabilistic ML models are a trustworthy solution to replicate those systematic deviations. For
instance, in such cases, employing BNNs with a simple averaging approach across repeated
experiments can help capturing the significant deviations from the expected result or uncertainties.
Moreover, machine learning models, with relevant microstructural features, trained on simulated
data can be relevant surrogates for systematic exploration of different processing conditions
relevant to additive manufacturing experiments and mapping the relationships between process,
structure and materials properties. Often times the simulation methods are stochastic (i.e.- Kinetic
Monte Carlo, Cellular Automata, Phase-field methods), thus using a probabilistic ML model
instead of a deterministic one can help circumvent the issue of stochasticity while mapping
between process, structure and/or properties [14,15].
Furthermore, there is significant scope for the development of real-time monitoring of data and
uncertainty evaluation [37] aiming to ensure strict adherence to established quality standards
during the additive manufacturing processes. Furthermore, there are potential avenues for research
in materials science to develop correlations between uncertainty and material properties, thereby
generating significant insights into the additive manufacturing process. We consider the
development of interpretability techniques to enhance the trustworthiness of the probabilistic
regression methods and adoption in complex industrial settings is a critical facet of future work,
as is the exploration of data augmentation strategies and transfer learning methodologies to
enhance the adaptability of the probabilistic regression methods in diverse manufacturing
environments.
Declaration of interests
The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.
Code and data availability
The code for the deterministic ML models, the probabilistic ML models and the dataset from ref.
[27], that we use to perform the uncertainty quantification calculations as implemented in the
present study is publicly available (https://github.com/dipayan-s/UQ-Repository).
Acknowledgements
This work performed at the Center for Nanoscale Materials, a U.S. Department of Energy Office
of Science User Facility, was supported by the U.S. DOE, Office of Basic Energy Sciences, under
Contract No. DE-AC02-06CH11357. This material is based on work supported by the DOE, Office
of Science, BES Data, Artificial Intelligence, and Machine Learning at DOE Scientific User
Facilities program (ML-Exchange and Digital Twins). SKRS would also like to acknowledge the
support from the UIC faculty start-up fund. All authors thank Suvo Banik and Partha Sarathi Dutta
for their useful comments and discussion during the research activity. This research used resources
23
of the National Energy Research Scientific Computing Center (NERSC), a US Department of
Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory,
operated under Contract No. DE-AC02-05CH11231.
Authorship contribution statement
DS, AC and SKRS conceived the project. DS developed the code and performed the uncertainty
quantification calculations with inputs from AC. All the authors provided feedback on the
workflow. DS, AC and SKRS wrote the manuscript with inputs from all co-authors. HC, SM
provided feedback on the manuscript. All authors participated in the discussion of results and
provided useful comments and suggestions on the various sections of the manuscript. SKRS
supervised the overall project and research activity.
References
[1] MacGregor, J.F. and Kourti, T., 1995. Statistical process control of multivariate
processes. Control engineering practice, 3(3), pp.403-414.
[2] Fisher, R.A., 1936. Design of experiments. British Medical Journal, 1(3923), p.554.
[3] Mun, J., 2006. Modeling risk: Applying Monte Carlo simulation, real options analysis,
forecasting, and optimization techniques (Vol. 347). John Wiley & Sons.
[4] Chase, K.W. and Parkinson, A.R., 1991. A survey of research in the application of tolerance
analysis to the design of mechanical assemblies. Research in Engineering design, 3(1), pp.23-37.
[5] Schroeder, R.G., Linderman, K., Liedtke, C. and Choo, A.S., 2008. Six Sigma: Definition and
underlying theory. Journal of operations Management, 26(4), pp.536-554.
[6] Huang, D.J. and Li, H., 2021. A machine learning guided investigation of quality repeatability
in metal laser powder bed fusion additive manufacturing. Materials & Design, 203, p.109606.
[7] Liu, J., Ye, J., Silva Izquierdo, D., Vinel, A., Shamsaei, N. and Shao, S., 2022. A review of
machine learning techniques for process and performance optimization in laser beam powder bed
fusion additive manufacturing. Journal of Intelligent Manufacturing, pp.1-27.
[8] Song, L., Huang, W., Han, X. and Mazumder, J., 2016. Real-time composition monitoring using
support vector regression of laser-induced plasma for laser additive manufacturing. IEEE
Transactions on Industrial Electronics, 64(1), pp.633-642.
[9] Wang, Q., Song, L., Zhao, J., Wang, H., Dong, L., Wang, X. and Yang, Q., 2023. Application
of the gradient boosting decision tree in the online prediction of rolling force in hot rolling. The
International Journal of Advanced Manufacturing Technology, 125(1-2), pp.387-397.
24
[10] Li, R., Jin, M. and Paquit, V.C., 2021. Geometrical defect detection for additive manufacturing
with machine learning models. Materials & Design, 206, p.109726.
[11] Akbari, P., Ogoke, F., Kao, N.Y., Meidani, K., Yeh, C.Y., Lee, W. and Farimani, A.B., 2022.
MeltpoolNet: Melt pool characteristic prediction in Metal Additive Manufacturing using machine
learning. Additive Manufacturing, 55, p.102817.
[12] Demeyer, S., Fischer, N. and Elster, C., 2020. Guidance on Bayesian uncertainty evaluation
for a class of GUM measurement models. Metrologia, 58(1), p.014001.
[13] Cox, M.G. and Siebert, B.R., 2006. The use of a Monte Carlo method for evaluating
uncertainty and expanded uncertainty. Metrologia, 43(4), p.S178.
[14] Manna, S., Chan, H., Ghosh, A., Chakrabarti, T. and Sankaranarayanan, S.K., 2023.
Understanding and control of Zener pinning via phase field and ensemble learning. Computational
Materials Science, 229, p.112384.
[15] Sanpui, D., Chandra, A., Manna, S., Dutta, P.S., Chan, M.K., Chan, H. and Sankaranarayanan,
S.K., 2024. Understanding structure-processing relationships in metal additive manufacturing via
featurization of microstructural images. Computational Materials Science, 231, p.112566.
[16] McGregor, D.J., Bimrose, M.V., Shao, C., Tawfick, S. and King, W.P., 2022. Using machine
learning to predict dimensions and qualify diverse part designs across multiple additive machines
and materials. Additive Manufacturing, 55, p.102848.
[17] Der Kiureghian, A. and Ditlevsen, O., 2009. Aleatory or epistemic? Does it matter? Structural
safety, 31(2), pp.105-112.
[18] Sankararaman, S. and Mahadevan, S., 2011. Model validation under epistemic
uncertainty. Reliability Engineering & System Safety, 96(9), pp.1232-1241.
[19]
Ghahramani,
Z.,
2015.
Probabilistic
machine
learning
and
artificial
intelligence. Nature, 521(7553), pp.452-459.
[20] Liu, M., Chowdhary, G., Da Silva, B.C., Liu, S.Y. and How, J.P., 2018. Gaussian processes
for learning and control: A tutorial with examples. IEEE Control Systems Magazine, 38(5), pp.53-
86.
[21] Tzikas, D.G., Likas, A.C. and Galatsanos, N.P., 2008. The variational approximation for
Bayesian inference. IEEE Signal Processing Magazine, 25(6), pp.131-146.
[22] Gal, Y. and Ghahramani, Z., 2016, June. Dropout as a bayesian approximation: Representing
model uncertainty in deep learning. In international conference on machine learning (pp. 1050-
1059). PMLR.
25
[,23] Mae, Y., Kumagai, W. and Kanamori, T., 2021. Uncertainty propagation for dropout-based
Bayesian neural networks. Neural Networks, 144, pp.394-406.
[24] Osband, I., Blundell, C., Pritzel, A. and Van Roy, B., 2016. Deep exploration via bootstrapped
DQN. Advances in neural information processing systems, 29.
[25] Olivier, A., Shields, M.D. and Graham-Brady, L., 2021. Bayesian neural networks for
uncertainty quantification in data-driven materials modeling. Computer methods in applied
mechanics and engineering, 386, p.114079.
[26] Malinin, A. and Gales, M., 2018. Predictive uncertainty estimation via prior
networks. Advances in neural information processing systems, 31.
[27] Wang, H. and Yeung, D.Y., 2016. Towards Bayesian deep learning: A framework and some
existing methods. IEEE Transactions on Knowledge and Data Engineering, 28(12), pp.3395-
3408.
[28] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M.,
Prettenhofer, P., Weiss, R., Dubourg, V. and Vanderplas, J., 2011. Scikit-learn: Machine learning
in Python. the Journal of machine Learning research, 12, pp.2825-2830.
[29] Chen, T. and Guestrin, C., 2016, August. Xgboost: A scalable tree boosting system.
In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data
mining (pp. 785-794).
[30] Chang, Y.C., Chang, K.H. and Wu, G.J., 2018. Application of eXtreme gradient boosting trees
in the construction of credit risk assessment models for financial institutions. Applied Soft
Computing, 73, pp.914-920.
[31] Gardner, M.W. and Dorling, S.R., 1998. Artificial neural networks (the multilayer
perceptron)—a
review
of
applications
in
the
atmospheric
sciences. Atmospheric
environment, 32(14-15), pp.2627-2636.
[32] Lundberg, S.M. and Lee, S.I., 2017. A unified approach to interpreting model
predictions. Advances in neural information processing systems, 30.
[33] Choi, S.H. and Lee, J.M., 2022, August. Explainable Fault Diagnosis Model Using Stacked
Autoencoder and Kernel SHAP. In 2022 IEEE International Symposium on Advanced Control of
Industrial Processes (AdCONIP) (pp. 182-187). IEEE.
[34] Dillon, J.V., Langmore, I., Tran, D., Brevdo, E., Vasudevan, S., Moore, D., Patton, B., Alemi,
A., Hoffman, M. and Saurous, R.A., 2017. Tensorflow distributions. arXiv preprint
arXiv:1711.10604.
26
[35] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A.,
Dean, J., Devin, M. and Ghemawat, S., 2016. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
[36] Erickson, C.B., Ankenman, B.E. and Sanchez, S.M., 2018. Comparison of Gaussian process
modeling software. European Journal of Operational Research, 266(1), pp.179-192.
[37] Farid, M., 2022. Data-driven method for real-time prediction and uncertainty quantification
of fatigue failure under stochastic loading using artificial neural networks and Gaussian process
regression. International Journal of Fatigue, 155, p.106415.
[38] Fernández, J., Chiachío, J., Chiachío, M., Barros, J. and Corbetta, M., 2023. Physics-guided
bayesian neural networks by abc-ss: Application to reinforced concrete columns. Engineering
Applications of Artificial Intelligence, 119, p.105790.
[40] Carbon, Carbon DLS 3D Printing Process Engineering Handbook, 2021: 21–22.
27
Supplementary Materials
1. Dual Monte Carlo Subsampling method
Fig. S1 represents the training and testing method, that use a dual Monte Carlo subsampling [1]
methodology with nested k-fold cross-validation, tied with hyperparameter tuning for training and
evaluation of the models, and reduce bias and overfitting. The proposed strategy leverages on an
unbiased hyperparameter tuning procedure and predictions with accurate outcomes [2]. The model
is tested on data without exposure to the training procedure and the withheld data, that is not used
for training and testing either by the ML model, signifies the robustness of measurement in the
experimental conditions where generation and collection of data is often costly, and thus enabling
measure of data efficiency.
Fig. S1: The proposed approach involves the utilization of dual Monte Carlo subsampling in conjunction with a
stacked k-fold cross-validation (3 folds) pipeline for the training and testing of machine learning models. Within the
outer loop, the data undergoes a random splitting process, resulting in the creation of distinct sets for training, testing,
and unused data. The data is rearranged, and this procedure is iterated (here we used 50 iterations) for each iteration
of the outer loop. The utilization of the Monte Carlo method offers a sound approximation for the performance of the
model.
The dual Monte Carlo subsampling method constitute two nested loops and subsequent steps for
data pre-processing, subsampling, scaling, hyperparameter tuning, training, testing at a fixed
number of iterations. In the outer loop, random sampling of data was performed to split data into
training, test, and sometimes unused sets. The training data was normalized using the Minmax
scaling method to have mean zero and a standard deviation of unity. We use the standardized
training data for hyperparameter tuning by leveraging k-fold cross-validation and grid search. In
the outer loop, we define the settings for the tuned hyperparameters, and training data trained using
a single model instance, and the withheld test data for evaluation of the model. Next, we assess the
model performance using the withheld testing data and represent it as root mean square error
(𝑅𝑀𝑆𝐸) between the measurement of the part geometries and geometry predictions. Using two
nested loops, where the data is randomly shuffled at each stage, it is possible to execute multiple
iterations of the train-test procedure. Each iteration trains a distinct and unique model based on
training data and evaluates it based on withheld testing data. In Table S1 we elaboratively discuss
the details of each input feature and the target variable difference from target (DFT), the key metric
to account for the dimensional accuracy of the fabricated parts.
28
Table S1: Detailed description of the input features and the target variable
Input feature
Description
Type
Manufacturing parameters
Hardware set
Machine number used for manufacturing of the part
geometry (2 Carbon M2 printers were used) (Parameter
states: 1, 2)
Categorical
Material
3 different materials were used to fabricate the parts
namely, UMA, RPU and EPX.
Categorical
Thermal cure
The materials used were ultraviolet curable photopolymers;
followed by an oven bake for cross-linking.
Categorical
Layout
2 different layouts were used to show arrangements of
different part designs, within the print area.
Categorical
x-coordinate
Cartesian co-ordinate location of the fabricated part within
the build area.
Continuous
y-coordinate
Cartesian co-ordinate location of the fabricated part within
the build area.
Continuous
R-coordinate
Radial co-ordinate location of the fabricated part within the
build area.
Continuous
Unique build ID
Build ID identified the exact build in which a part was
manufactured. A total of 405 parts were built across 9
different build ids with 5 measurements for each part.
(example: build ids 1 through 9)
Categorical
Measured feature descriptors
Part design
3 different parts designs were fabricated, namely, clip, plug
and bracket.
Categorical
Nominal dimension
Measured feature dimensions from experiments and
reference geometry dimensions from computer-aided
design.
Continuous
Feature class
The measured dimensional entity of the manufactured part.
(example: diameter, length, thickness, height)
Categorical
Feature category
Descriptor for the feature class that describe the topology
of the geometry and categorize them. (example: inner/outer
diameter)
Categorical
Unique feature ID
Feature ID identified the unique feature of the part, such as
bracket_dist_mm_c, which occurred once for a bracket
design for a unique build id and present for each fabricated
bracket.
Categorical
Target variable Description
Type
Difference from
Target (DFT)
Difference between the measured dimension (i.e.- length,
inner/outer diameter of a bracket etc.) and the refence CAD
geometry.
Continuous
29
2. Implementation of Deterministic ML algorithms:
2.2.1 k-Nearest Neighbors (kNN):
k-NN [3] is a straightforward and intuitive machine learning algorithm used for classification and
regression tasks. Being non-parametric, it makes no assumptions on the distribution of the
underlying data. Instead, it makes predictions based on the data itself. The training data points, and
their related labels or values are simply memorized by the algorithm. In regression tasks, the
technique computes the average (or weighted average) of the target values of the 'k' nearest
neighbors rather than assigning a class label. This average becomes the new data point's anticipated
value. The most crucial kNN parameter is 'k', which denotes the number of neighbors to consider
while making predictions. The algorithm's performance is influenced by the selection of 'k'. A
prediction made with a small "k" might be noisy, whereas one made with a greater "k" value might
be overly broad. The effectiveness of the algorithm is highly dependent on the selection of the
distance metric. Euclidean distance, Manhattan distance (measured in city blocks), and cosine
similarity are examples of common distance measurements. The distance metric establishes the
"closeness" of the data points as determined by the algorithm.
2.2.2 Support Vector Regression (SVR):
Support vector machines (SVM) [4] are one of the most prevalent classification algorithms. SVMs
locate "hyperplane" decision boundaries, that divide classes in high-dimensional space using a
linear or kernel function. These hyperplanes are produced by positioning the hyperplane at the
farthest distance possible between the extreme data points or the support vectors, which provides
maximum margin. Regression problems can be addressed with SVMs. The model then anticipates
continuous values as the target variable instead of class labels.
2.2.3 Decision Tree Regression (Tree):
An additional well-liked subset of ML algorithms is decision trees [5]. As a quality check, data is
constantly divided into nodes using a splitting rule (depending on locally optimal choices that
minimize the mean squared error) until further splitting neither improves the model prediction nor
reaches a predetermined depth of the tree. In contrast to most ML models with scale variance, this
allows for very straightforward and understandable predictions and requires minimal data
preparation (no feature normalization or standardization is necessary). If minor data variations lead
to unstable and erroneous tree constructs, there is a problem. As a result, after training individual
trees, there is frequently considerable variance, and when there is an imbalance in the data,
individual trees tend to overfit the input data. By incorporating various tree architectures, ensemble
approaches, like a random forest classifier or regressor, might introduce some randomization to
mitigate such issues. The variance is decreased, and an overall more accurate model is produced
by averaging the predictions of each tree.
30
2.2.4 Random Forest (RF):
A well-liked ensemble learning approach for machine learning is Random Forest [6]
which combines the predictive power of many decision trees. To avoid overfitting and improve
forecast accuracy, it works by training many decision trees on random subsets of the data and
features. It is an invaluable tool for classification and regression tasks across multiple areas, from
banking and healthcare to image analysis and recommendation systems, by pooling the predictions
of multiple trees and producing robust and extremely accurate results. It is a flexible option for
real-world machine learning applications due to its capacity for handling diverse data types, feature
importance ranking, and resistance to overfitting.
2.2.5 Gradient Boosting Machine (GBM):
Gradient Boosting [7] is a powerful machine learning algorithm that sequentially builds an
ensemble of decision trees to create a highly accurate predictive model. It works by iteratively
improving the model's predictions by focusing on the errors made by previous trees. Gradient
Boosting is known for its exceptional predictive performance and is widely used in various fields
such as finance, healthcare, and natural language processing. It is particularly effective in handling
complex, non-linear relationships in data and often outperforms other algorithms in predictive
accuracy. However, it may require careful tuning of hyperparameters to prevent overfitting.
2.2.6 Xtreme Gradient Boosting (XGBoost):
Xtreme Gradient Boosting [8], termed as XGBoost, is an advanced and extremely efficient
gradient boosting implementation. It is renowned for its speed and precision in machine learning
applications. Regularization and parallel processing are utilized by XGBoost to prevent overfitting
and substantially reduce training time. Its advantages include the ability to capture complex data
relationships, speed, handling of missing data, and support for feature importance ranking.
However, XGBoost requires careful hyperparameter tuning, lacks the interpretability of simpler
models, and may not perform optimally on very small datasets, limiting its applicability in such
cases.
2.2.7 Multi-layer Perceptron (MLP):
Artificial neural networks (ANNs) [3] have garnered significant interest in the field of machine
learning due to their effectiveness in addressing various computational challenges. The multilayer
perceptron (MLP) is a type of neural network that employs a forward-predicting approach and is
trained iteratively on a given dataset with the objective of minimizing a specified loss function [3].
During each iteration, the trainable parameters, commonly referred to as weights, undergo updates.
The regularization term is an additional component of the model's functionality that serves to
mitigate the risk of overfitting the model to the training data during the computation of the loss.
Moreover, Multilayer Perceptrons (MLPs) are widely regarded as one of the most intricate
machine learning algorithms in terms of optimization due to their comparatively large number of
hyperparameters.
31
Table S2: Hyperparameters for deterministic machine learning models, here we present the
optimal hyperparameters utilized for the machine learning models using GridsearchCV tied inside
the Dual Monte Carlo subsampling algorithm, here we use 5 folds of cross-validation of the data
with negative root mean squared error as the metric during hyperparameter optimization.
Fig S2: Parity plot between the measured and predicted DFT values as we implement different deterministic ML
models (a) Linear Regression and rest of the three best performing deterministic ML models, (b) Support Vector
Regression (c) Extreme Gradient Boosting, (d) Light Gradient Boosting, with their corresponding RMSE values.
For each of the model implementation, specifically while we fit our data to linear regression, we observe a
‘sigmoid’ or a ‘S’ curve formation that provide subsequent evidence towards the presence of non-linearity in the
data.
Deterministic
ML Algorithms
Hyperparameters
KNN
'metric': 'euclidean', 'n_neighbors': 6
SVR
'epsilon': 0.03, 'gamma': 'scale', 'kernel': 'rbf'
Decision Tree
'max_depth': 20, 'min_samples_leaf': 5, ‘criterion’: “absolute_error”
Random Forest
'bootstrap': True, 'max_features': 3,'min_samples_leaf': 3, ‘n_estimators': 300
Gradient
Boosting
'learning_rate': 0.3,'loss': 'squared_error','max_leaf_nodes': 30,'n_estimators': 120
XGBoost
'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 100,'subsample': 0.9
LightGBM
'learning_rate': 0.1,'max_depth': 7,'n_estimators': 150,'num_leaves': 31
MLP
'activation': 'tanh','hidden_layer_sizes': (16, 8, 4),'learning_rate': 'constant','max_iter': 5000,'
solver': 'lbfgs'}
32
3. Implementation of Bayesian Neural Networks:
Given the mathematical representation of Eq. (1) in the main manuscript, and under the assumption
that the training data points are distinct and identically dispersed, the likelihood of the entire dataset
𝒟 is determined using the Eq (1) given below. The presence of noise in the dependent variables is
not considered in this study, following the approach outlined in the reference provided [8].
𝑝(𝒟|𝜔) = ∏
𝒩(𝑦1, 𝑓((𝑥1), 𝜎:
3 )
:
197
(1)
Epistemic uncertainties, conversely, are assessed by constructing a probability model that
encompasses the network weights, 𝜔 of the given model. In a Bayesian learning framework, a
prior probability density function (pdf) of weights, is 𝑝;(𝜔) is initialized in combination with the
model parameters, The pdf gets updated utilizing the data and Bayes theorem after each
feedforward mechanism to yield a posterior pdf 𝑝(𝜔|𝒟), computed using Eq. (2):
𝑝(𝜔|𝒟) =
4$𝒟%𝜔&4#(()
4(𝒟)
(2)
During the prediction phase, when presented with an input (𝑥∗), it is possible to calculate the
predictive probability; 𝑝(𝑦∗|𝑥∗, 𝒟) of the corresponding output given by:
𝑝(𝑦∗|𝑥∗, 𝒟) = ∫𝑝(𝑦∗|𝑥∗, 𝜔) 𝑝(𝜔|𝒟)𝑑𝜔 (3)
From a classical perspective, as provided in reference [8], using the law of total expectation and
total variance [8], the total uncertainty predictions are obtained as follows:
𝑉𝑎𝑟(𝑦∗|𝑥∗, 𝒟) = 𝜎:(𝑥∗)3 + 𝑉𝑎𝑟4$𝜔%𝒟&V𝑓((𝑥∗)W (4)
In the above equation, the term 𝜎:(𝑥∗)3 is the variance resulting from the inherent noise present
within the data (aleatoric uncertainty) and the second term results from the epistemic uncertainty
computed using variational inference.
However, the computation of an analytical solution for the posterior probability p(𝜔|𝒟) in neural
networks is infeasible due to the complex integral present within the weights. To address this
challenge, researchers employ a range of approximate techniques, including the use of Variational
Bayes [10]. The objective of this strategy is to effectively approximate the posterior distribution
given the neural network weights. In the context of Variational Bayes, the posterior distribution is
approximated using a secondary function referred to as the variational posterior. The variational
inference works by approximation of a posterior distribution and the distribution is parameterized
by a set of parameters, represented as q(ω|θ), acquired by the process of optimization. Here we
consider the distribution to be a Gaussian distribution and given as [14],
q(ω|θ) = ∏
𝒩(𝜔1, 𝜇1, 𝜎1)
:
197
(5)
where, θ represents a set of parameters including the means and variances for each of the
independent Gaussians; and 𝑛 is the total number of weights.
33
The primary concept is seeded around the selection of a functional form for the variational
posterior, which can be optimized efficiently in order to closely resemble the genuine posterior. A
frequently employed method for assessment of the accuracy of an approximation is by evaluating
the Kullback-Leibler (KL) divergence between the variational posterior and the genuine posterior.
The Kullback-Leibler divergence is a metric used to quantify the dissimilarity between two
probability distributions, with a value of zero indicating that the two distributions are identical.
The KL divergence between the variational posterior, 𝑞(𝜔|𝜃), and the actual posterior, 𝑃(𝜔|𝒟),
can be defined based on the observed data, 𝒟 [14].
𝐷>?(q(ω|θ) ∥P(ω|𝒟)W = 𝐸[log 𝑞−log 𝑃] = ∫q(𝜔|𝜃) log e
@((|B)
C$𝜔%𝒟&f 𝑑𝜔 (5)
Where the 𝐸 is the expectation operator and represented as difference between the natural
logarithm of the variational posterior (𝑞) and the actual posterior (𝑃).
When the observed data is treated as constant (the inputs are treated as constant), the above
equation is simplified into a function L(θ|𝒟), that solely relies on the variables 𝜃 and 𝒟 [14],
𝐿(𝜃|𝒟) = 𝐷>?(𝑞(𝜔|𝜃) ∥𝑃(𝜔)W −𝐸D$𝜔%𝜃&V𝑙𝑜𝑔𝑃(𝒟|𝜔)W
= ∫𝑞(𝜔|𝜃)(𝑙𝑜𝑔𝑞(𝜔|𝜃)) −𝑙𝑜𝑔𝑃(𝒟|𝜔) −𝑙𝑜𝑔𝑃(𝜔) 𝑑𝜔
= 𝐸D((|B)(𝑙𝑜𝑔 𝑞(𝜔|𝜃) − 𝑙𝑜𝑔𝑃(𝒟|𝜔) −𝑙𝑜𝑔𝑃(𝜔)) (6)
In the equation (6), the initial term evaluates the difference between the variational and prior
distributions, whereas the subsequent term represents the anticipated negative logarithmic
probability (analogous to negative log likelihood loss term) based on the variational posterior. This
function, Evidence Lower Bound (ELBO), L(θ|𝒟), also known as the variational lower bound,
can then be optimized with respect to θ, the distribution of parameters (means and variances) to
derive the best approximation of the true posterior.
We perform the optimization using a gradient-descent [9] and Resilient Back Propagation (RProp)
[11] based “RMSprop” optimizer, allowing for the scalability of the variational Bayes mechanism
to large and intricate models. The primary goal using the variational inference is to accurately
approximate the true weights of the posterior distribution, treating the inputs as constant.
34
We provide the hyperparameters of the probabilistic ML algorithms in Table S2 and the model
architecture summaries in Fig. S5 (a, b) of the Bayesian Neural Network approaches we used in
this work.
Table S3: Hyperparameters for probabilistic machine learning models
4. Representation of uncertainties from Probabilistic Bayesian Neural
Network with varying train-test split ratios
In Fig S4, we provide a plot illustrating the trends in aleatoric and epistemic uncertainty for the
train-validation split ratios. We split the train dataset in different ratios ranging from 10% training
split to 99% training split. It is noteworthy to emphasize that the 10% training data is exclusively
utilized for observing the prediction distributions generated by the probabilistic network ensemble.
It is widely recognized that training a model with a limited amount of data would inevitably result
in an increase in model uncertainty, hence leading to an increase in epistemic uncertainty.
Additionally, it is worth noting that this practice contributes to the amplification of noise within
the dataset. This is mostly due to the uneven distribution of data between the training and testing
sets, which subsequently leads to an escalation in aleatoric uncertainty. Based on the findings we
Probabilistic
ML Algorithms
Hyperparameters
GPR
Matern (length_scale=1, nu=1.5) + WhiteKernel(noise_level=1),
n_restarts_optimizer=50, random_state=2022
BNN – trainable
mean and variance
Number of hidden layers: 3, Number of neurons in the hidden layers: (24, 16, 8),
Activation Function: Relu (only used in the hidden layers), Loss: “Negative Log
Likelihood”, Optimizer: “Adam”
BNN-working as an
ensemble of
networks
Number of hidden layers: 1, Number of neurons in the hidden layers: 8,
Activation Function: Sigmoid, learning rate: 0.001, Loss: “Negative Log
Likelihood”, Optimizer: RMSProp.
Fig S3: Model summaries for the probabilistic ML models we use for our work (a) Model summary for the
preliminary approach - BNN with trainable mean and variance. (b) Model summary for our second approach
where the BNN is working as the ensemble of networks.
35
present in Fig S4, we observe that the aleatoric uncertainty exhibits a decreasing pattern with
intermittent increase while considering appropriate ratios of training fractions, such as 50% and
80% for training data. Additionally, we also observe a sudden increase in aleatoric uncertainty at
90% training percentage, when an uneven distribution of train-test data split is included. The
observed pattern of increasing aleatoric uncertainty at 90% training data and then decreasing at
99% training data in the Probabilistic BNN might be due to few factors. At 90% training data, the
model might have encountered more diverse and challenging data points, introducing additional
variability in the predictions, thus leading to increased aleatoric uncertainty. This increase might
signify that the model is struggling to capture complex patterns in this subset. As the training
dataset further expands to 99%, the model likely gains a more comprehensive understanding of
the data distribution, resulting in an improved predictive accuracy and reduced aleatoric
uncertainty. Additionally, the increased training data may help the model better estimate the
aleatoric uncertainty, leading to a more reliable representation. However, it's crucial here to
consider the specifics of the dataset and model, as other factors like data quality, model
architecture, or the nature of the training data might have also played a role in this observed
behavior. Upon examining the outcomes pertaining to epistemic uncertainty, it becomes evident
that there is a steady downward trajectory observed in conjunction with an increase in the number
of training data points.
Fig S4: Representation of the aleatoric and epistemic uncertainties with varying train-validation split ratios. We
observe that the aleatoric uncertainty exhibits a decreasing pattern with intermittent increase while considering
appropriate ratios of training fractions, such as 50% and 80% for the training data. We observe a consistent downward
trajectory in epistemic uncertainty in conjunction with an increase in the number of training data points.
36
References
[1] McGregor, D.J., Bimrose, M.V., Shao, C., Tawfick, S. and King, W.P., 2022. Using machine
learning to predict dimensions and qualify diverse part designs across multiple additive machines
and materials. Additive Manufacturing, 55, p.102848.
[2] Ren, Y., Lu, X., Guo, H., Xie, Z., Zhang, H. and Zhang, C., 2023. A review of combinatorial
optimization
problems
in
reverse
logistics
and
remanufacturing
for
end-of-life
products. Mathematics, 11(2), p.298.
[3] James, G., Witten, D., Hastie, T., Tibshirani, R. and Taylor, J., 2023. Statistical learning. In An
Introduction to Statistical Learning: with Applications in Python (pp. 15-67). Cham: Springer
International Publishing.
[4] Biau, G. and Scornet, E., 2016. A random forest guided tour. Test, 25, pp.197-227.
[5] Natekin, A. and Knoll, A., 2013. Gradient boosting machines, a tutorial. Frontiers in
neurorobotics, 7, p.21.
[6] Chen, T., He, T., Benesty, M., Khotilovich, V., Tang, Y., Cho, H., Chen, K., Mitchell, R., Cano,
I. and Zhou, T., 2015. Xgboost: extreme gradient boosting. R package version 0.4-2, 1(4), pp.1-4.
[7] Fan, J., Ma, X., Wu, L., Zhang, F., Yu, X. and Zeng, W., 2019. Light Gradient Boosting
Machine: An efficient soft computing model for estimating daily reference evapotranspiration with
local and external meteorological data. Agricultural water management, 225, p.105758.
[8] Wright, W.A., 1999. Bayesian approach to neural-network modeling with input
uncertainty. IEEE Transactions on Neural Networks, 10(6), pp.1261-1270.
[9] Smolensky, P., 1996. Overview: Statistical perspectives on neural networks. Mathematical
Perspectives on Neural Networksa, pp.453-496.
[10] Meyer, A.R., 2006. Expectation & Variance.
[11] Fox, C.W. and Roberts, S.J., 2012. A tutorial on variational Bayesian inference. Artificial
intelligence review, 38, pp.85-95.
[12] Ruder, S., 2016. An overview of gradient descent optimization algorithms. arXiv preprint
arXiv:1609.04747.
[13] Saputra, W., Zarlis, M., Sembiring, R.W. and Hartama, D., 2017, December. Analysis resilient
algorithm on artificial neural network backpropagation. In Journal of Physics: Conference
Series (Vol. 930, No. 1, p. 012035). IOP Publishing.
37
[14] Olivier, A., Shields, M.D. and Graham-Brady, L., 2021. Bayesian neural networks for
uncertainty quantification in data-driven materials modeling. Computer methods in applied
mechanics and engineering, 386, p.114079.
|
1 Comparison of Deterministic and Probabilistic Machine Learning Algorithms for Precise Dimensional Control and Uncertainty Quantification in Additive Manufacturing Dipayan Sanpui1,2, Anirban Chandra1,2,*, Henry Chan1,2, Sukriti Manna1,2 and Subramanian K.R.S. Sankaranarayanan1,2 1 Center for Nanoscale Materials, Argonne National Laboratory, Lemont, Illinois 60439, United States. 2 60607, United States. *Currently at Shell International Exploration and Production Inc., Boston, Massachusetts, 02210, United States Abstract We present an accurate estimation of the dimensions of additively manufactured components by adopting a probabilistic perspective. Our study utilizes a previously gathered experimental dataset, encompassing five crucial design features for 405 parts produced in nine production runs. These runs involved two machines, three polymer materials, and two-part design configurations. To illustrate design information and manufacturing conditions, we employ data models that integrate both continuous and categorical factors. For predicting Difference from Target (DFT) values, we employ two machine learning approaches: deterministic models that offer point estimates and probabilistic models generating probability distributions. The deterministic models, trained on 80% of the data using Support Vector Regression (SVR), exhibit high accuracy, with the SVR model demonstrating precision close to the process repeatability. To address systematic deviations, we introduce probabilistic machine learning methodologies, namely Gaussian Process Regression (GPR) and Probabilistic Bayesian Neural Networks (BNNs). While the GPR model shows high accuracy in predicting feature geometry dimensions, the BNNs aim to capture both aleatoric and epistemic uncertainties. We explore two approaches within BNNs, with the second approach providing a more comprehensive understanding of uncertainties but showing lower accuracy in predicting feature geometry dimensions. Emphasizing the importance of quantifying epistemic uncertainty in machine learning models, we highlight its role in robust decision-making, risk assessment, and model improvement. We discuss the trade-offs between BNNs and GPR, considering factors such as interpretability and computational efficiency. The choice between these models depends on analytical needs, striking a balance between predictive accuracy, interpretability, and computational constraints. In summary, our analysis of an additive manufacturing dataset through the lens of a Probabilistic Bayesian Neural Network (BNN) and the simultaneous quantification of both epistemic and aleatoric uncertainties provides a robust foundation for advancing manufacturing design. 2 List of Abbreviations Abbreviations Meaning DLS Digital Light Synthesis DFT Difference From Target BNN Bayesian Neural Network AM Additive Manufacturing SPC Statistical Process Control DOE Design of Experiments MCMC Markov Chain Monte Carlo VI Variational Inference SVR Support Vector Regression XGBoost Xtreme Gradient Boosting GPR Gaussian Process Regression RMSE Root Mean Squared Error MSE Mean Squared Error RF Random Forest LGBM Light Gradient Boosting MLP Multi-layer Perceptron ML Machine Learning SHAP Shapley Additive Explanations NN Neural Network UMA Urethane Methacrylate EPX Additive Epoxy RPU Rigid Polyurethane GP Gaussian Process MC Dropout Monte Carlo Dropout UQ Uncertainty Quantification Introduction The decision-making process of acceptance or rejection of produced part is critical in a production pipeline and mainly depends on the dimensional accuracy. The additive manufacturing techniques, useful in fabrication of intricate geometries; is highly flexible and undergoes frequent parametric variations. Oftentimes, traditional methods [1-5] for measurement of the produced parts are timeconsuming and incur higher monetary investments. While the conventional methods for part quality assessment are sometimes challenging, recent advancements in measurement techniques that involve automated measurements and real-time analysis of data offers significant benefits in the measurement process; often termed as smart metrology-based approach [6]. An existing challenge in the smart metrology-based approach is handling a high-dimensional dataset having uncorrelated processing conditions. In the recent past, data-driven methods have been successful in unleashing the interrelations between manufacturing conditions and dimensional accuracy [7]. Amongst the data-driven methods, deterministic regression methods [811], are popular in dimensional accuracy prediction since the implementation is quite 3 straightforward and computationally efficient. However, possess certain limitations while addressing the uncertainty inherent within the measurements. The lack of reliable uncertainty estimates hinders the direct use of the deterministic machine learning algorithms, for manufacturing and materials science-based applications. It is imperative to estimate uncertainties to provide scientific and engineering decision-makers with predictive information as well as quantitative information regarding how accurate the predictions are. GUM (Guide to the expression of Uncertainty in Measurement) [12,13] is an internationally accepted master document that provides guidelines to assess the uncertainties associated with various sources of error in measurements. The steps for the uncertainty budget include the sources of uncertainties, classification, determination of standard uncertainty and combining them. Another two distinct approaches for uncertainty evaluation are the propagation of distributions through Monte Carlo approaches [12] and Bayesian uncertainty evaluation [13]. Propagation of distributions through Monte Carlo evaluation is a method for determination of uncertainty when the inputs are uncertain and randomly sampled for propagation of uncertainty within the model itself. Instead of relying solely on the analytical methods, the Monte Carlo approaches use random sampling for propagation of uncertainty, thus allowing for a more detailed and accurate characterization of output distribution. While the Bayesian uncertainty evaluation relies on the prior beliefs or knowledge, in most cases considered as standard normal distributions followed by updating the knowledge with new data and provide a full probability distribution for the quantity of interest. The outcomes of scientific applications including in advanced manufacturing are uncertain, both on an aleatoric and epistemic level [17-19]. Aleatoric uncertainty refers to the uncertainty present in the dataset itself and arises due to stochastic experimental design and noise present in the experimental output. Epistemic uncertainty, also known as subjective uncertainty or knowledge uncertainty, refers to the uncertainty arising from limitations in our understanding, knowledge, or information about a system or phenomenon [20]. This type of uncertainty reflects the lack of full knowledge about the true nature of the system, including its behavior, parameters, or underlying mechanisms. Epistemic uncertainty can manifest in various forms, such as parameter uncertainty that arise due to the model parameters, model uncertainty that occurs due to the model architecture and the input uncertainty, that occurs due to the choice of input features and boundary conditions. Besides aleatoric uncertainty, which is inherent in the data itself, machine learning applications, in particular, suffer from epistemic uncertainties that arise from a lack of knowledge or data from experiments and model hyperparameters. Based on the theoretical approaches [12,13], relatively a newer perspective towards evaluation of uncertainty is by utilization of the ML algorithms. Epistemic uncertainty can be reduced by providing the ML model with more data [21]. In contrast with the primary ML algorithms, that depends on a lot of information, engineering applications frequently manage limited quantities of complicated information, bringing about enormous epistemic uncertainties. Therefore, the decision maker must characterize both the uncertainties, aleatoric and epistemic while making predictions. In this context, probabilistic ML models serve the purpose of the prediction of both uncertainties [22]. A widely known ML algorithm that incorporates uncertain predictions is Gaussian processes (GPs), inherently derived from the Bayesian learning framework [23]. In recent years, the integration of neural networks with the Bayesian learning framework has become 4 popular among the machine learning research community [24]. Previous literature suggests that there are different ways to incorporate Bayesian inference within neural networks [25]. Bayesian Neural Network (BNN) makes use of Markov Chain Monte Carlo (MCMC) for approximation of the prediction as a probability density function (posterior pdf). Alternatively, the variational inference (VI) approximates the posterior pdfs via distance metrics which can be computed analytically and easy to sample [23]. Another probabilistic approach is MC Dropout [31], which uses neural networks to estimate the uncertainty involved within the model training and alleviate the issue of overfitting. This probabilistic model mimics the Bayesian-like behavior by dropping out neurons in the network randomly. This algorithm is computationally less costly, but several researchers reported the underperformance of this algorithm [18]. The MCMC method usually does not scale well with the model weights and parameters, and thus the convergence is intractable considering the presence of complex integrals for approximation of posterior distributions [31]. VI frameworks integrated with neural networks assume a form of the posterior distribution after each epoch and are thus less accurate than MCMC in their estimation of the posterior distribution, but they are typically faster via optimization; thus, easily leveraged with neural network training, and convergence is fast. Currently, uncertainty quantification for additive manufacturing dimensional predictions lacks extensive exploration using probabilistic regression algorithms [32]. Incorporating Probabilistic BNN-based regression into uncertainty quantification efforts can address the gaps by providing a powerful tool for modeling complex relationships, handling uncertainty, and offering real-time adaptability and interpretability in the context of additive manufacturing dimensional predictions. In this work, we compare algorithms for probabilistic training of neural networks based on variational Bayesian inference, while using the experimental additive manufacturing dataset [16] for fabrication and measurement of produced parts using Digital Light Synthesis (DLS). We first select diverse non-linear deterministic models for the prediction of dimensional accuracy. While separation of data and model uncertainties are cumbersome using deterministic ML algorithms, probabilistic ML algorithms excel in quantification of both aleatoric and epistemic uncertainty. The remainder of the paper is structured as follows. In Section 2, we present the details of the experimental dataset that we use for the study and describe our implementation techniques for the ML algorithms, followed by Results and Discussions in Section 3. Section 4 concludes with a summary of our observations, concluding remarks, and future scopes. 2. Methods 2.1 Description of the Dataset We utilize an additive manufacturing dataset prepared by McGregor et al. [16], that consists of information for the fabrication and measurement of produced parts using the Digital Light Synthesis (DLS) [40] method. The DLS method is a vat-polymerization technique that utilize digital light projection and oxygen permeable optics for continuous fabrication of highly intricate and detailed parts from photosensitive resin. For collection of the dataset and exploration of different manufacturing parameters, McGregor et. al. [16] fabricated a total of 405 parts. Two Carbon M2 printers were utilized to fabricate the parts, each having unique hardware sets (machine ids). Three unique part designs, namely clips, plugs and brackets were manufactured. The build area was divided into three different segments; one-third for each segment for each part design, 5 made with same material. Two organizational layouts were adopted for the experiments, namely, A and B. Parts were arranged in one of the two organizational layouts, in such a manner that the location of each design cluster was different. For example, the layout A was organized with 15 clips on the left side, 15 plugs in the middle, and 15 brackets on the right side. Similarly, the layout B was organized with 15 brackets on the left side, 15 clips in the middle and 15 plugs on the right side. The part designs, i.e.- clips, plugs and brackets were fabricated in a batch process We present a detailed schematic of the experimental design in Fig 1. Therefore, in each batch, 45 parts were manufactured, and 9 experiments were performed that resulted in a total of 405 parts. Five measurements were taken for each manufactured part, which resulted in a total of 2025 measurements. Three different polymer materials were used for fabrication of the produced parts, namely urethane methacrylate (UMA); rigid polyurethane (RPU) and additive epoxy (EPX). In this work, we predict the dimensional accuracy of manufactured parts, the key metric for the prediction is Difference from Target (DFT), which serves as the target variable from the dataset and is the measure of dimensional variations of the produced parts from a reference CAD geometry. Initially during experiments [16, refer supplementary information], duplicate measurements were taken from repeated scans and one average measurement was reported for each of the 7290 measured features. Depending on the design and measured features per part; with subsequent down sampling of features reduced the initial dataset from 7290 measurements to 2025 measurements. In addition to the measured dimensions of five features on each part, the data set also provides with details on the manufacturing parameters and descriptors associated with each feature. Corresponding to each measured part dimension, there are 13 input features, mixture of continuous and categorical variables. Amongst those 13 input features, eight of them are manufacturing parameters and rest of the inputs are the geometrical descriptors of the measured part. The manufacturing parameters are the hardware settings, material, thermal cure, cartesian and radial co-ordinate locations within the build. Rest of the input features were the measured feature descriptors, i.e.- feature included nominal dimension, feature category, part design, unique feature ID, and feature class. In reference [27], McGregor et al. performed an analysis to study the relative importance of input features and their contributions towards feature DFT predictions. Amongst the total of 13 input features, they observed 8 input features contribute significantly towards the output prediction. Therefore, we choose 8 input features, where six were the manufacturing parameters and rest of the input features were the measured feature descriptors, i.e.- feature class and feature category. We convert the categorical variables into continuous input features using one-hot encoding. The feature category is the topological descriptor for a feature category class, i.e.- inner or outer length of the diameter. The feature class amongst the measured feature descriptor was either thickness, length, diameter, or height. In Fig. 1 and 2, we present an overview of the experimental design and a detailed schematic of the input features and output/target variable along with the part designs respectively. 6 Fig 1: Representation of the experimental design. a Three different parts were fabricated namely, plug, clips and bracket, using three different materials, namely urethane methacrylate (UMA); rigid polyurethane (RPU) and additive epoxy (EPX). b The build area was divided into three segments, one-third for each part design. Two organizational layouts were adopted namely A and B. In each experiment, the layout A was organized with 15 clips on the left side, 15 plugs on the middle and 15 brackets on the right side. Similarly, the layout B consisted of 15 brackets on the left, 15 clips in the middle and 15 clips on the right side of the build area. Each build area consisted of 45 parts, a total of 9 experiments were performed which produced a total of 405 parts with five measurements for each part, that resulted in a total of 2025 measurements within the dataset. Fig 2: A detailed representation of the input and output features. a) Computer Aided Draft (CAD) designs of the additively manufactured parts using the DLS method. Three different part designs with the five critical dimensions/features A-D are shown in the figure. The measured features A-D signify the thickness, length, diameter, and out-of-plane height of the manufactured part. b) We use the existing experimental dataset prepared by McGregor et al. [27] and use it for training of the ML models. The inputs for training the ML models consists of a mixture of numerical and categorical variables. For a clear representation we demarcate between the manufacturing parameters and the measured dimensional/feature descriptors describing the topology of the parts. As the output from the ML models, we predict the Difference from Target (DFT), signifies the dimensional deviation from the refence CAD geometry. 7 Within the realm of advanced manufacturing methods, the development of a machine learning framework to predict the dimensions of additively manufactured built parts involves a supervised regression or classification task [9,10,11]. When provided with a set of input-output data, the objective is to develop a regressor function f (·) that can accurately predict the output y = f (x) for a new input x. The input feature vector, such as structured input data, or it may represent an image from a microscope of different phases in a microstructure and corresponding microstructure features, for example, image input data/microstructure feature distributions [14,15]. The output y represents the target feature of interest, the DFT values of the built parts. Our work uses both deterministic and probabilistic ML models for the prediction of the dimensional accuracy of AM parts. We use the dimensions of the feature descriptors along cartesian and radial coordinates as continuous inputs to the model. Manufacturing parameters and measured feature descriptors were the categorical inputs to the models for the prediction of dimensional accuracy. An important factor to consider is the random variance present within the underlying data that limits the ability of the model to make accurate predictions. For the experimental dataset we utilized, different polymeric materials used for fabrication of the parts possess repeatability. The weighted average of the repeatability of materials utilized for fabrication of the parts is ±47 μm. Moreover, the measurement uncertainty during curation of the data is ±15 μm. Besides, the combined estimate of production repeatability and measurement uncertainty is ±50 μm (root sum of squares). This estimate indicates that an ideal regression method while evaluated with the test data might achieve a minimum of 50 μm root mean squared (RMSE) value for DFT predictions. Furthermore, for the target feature data distribution, the standard deviation is 180 μm and serves as to provide as a baseline for evaluation of the ML regression methods utilized. Smaller prediction accuracy than this baseline value shows the utility of incorporated ML methods. However, a critical fact is that the target feature; dimensional deviations of the measured feature from a reference geometry/Difference from Target (DFT) is analogous to the prediction of random variance inherent in the data; which is a combination of both production repeatability and measurement uncertainty. It is noteworthy, that while we utilize deterministic regression methods for DFT predictions, we actually predict the random variance within the data, which can be attributed to aleatoric uncertainty. During experiments, in real-time manufacturing conditions, there are some factors that affects the process itself. For instance, an unrecognized factor like a calibration error in a machine causes consistent deviations (bias) in production outcomes. Similarly, this scenario can be compared with a probabilistic regression method that learns distributions over weights and quantifies uncertainty based on the model hyperparameters and amount of data provided. Both of the scenarios can be attributed to systematic variance/biases. We explicitly try to evaluate the aleatoric (random variance) and epistemic (systematic variance) uncertainties by implementing Bayesian Neural Networks. Fig.3 shows a consolidated representation of framework, which we implement for the training and evaluation of multiple ML algorithms. An ML algorithm is the underlying computational method used to identify interdependencies between data and predictions; for example, Random Forest (RF) is an ML algorithm. It is noteworthy that, ML model is an instantiation of an algorithm with a particular set of hyperparameters and subject to the distinctive data that is sampled for training. 8 Fig 3: A consolidated representation of the ML methods we use for our work. a, b, c We use the existing experimental dataset prepared by McGregor et al. [27] for the DLS method and utilize it for training of the ML models. The inputs for training the ML models consists of both numerical and categorical variables followed by hyperparameter tuning of parameters. d We use both deterministic and probabilistic ML models for training purpose. e Furthermore, we test the trained deterministic models, amongst which SVR and XGBoost shows nearly identical performance and gives a point estimate as prediction. While testing the probabilistic ML models we use two different ML models that quantify the measurement uncertainty, while the GPR model can only characterize the aleatoric uncertainty, BNN's are able to characterize both the aleatoric and epistemic uncertainty. 2.2 Deterministic ML algorithms A similar approach was followed for the implementation of deterministic models as described in Ref [16]. We utilize Dual Monte Carlo subsampling method comprises of two nested loops and subsequent steps to process data, including subsampling/data splitting, normalization and tuning of hyperparameters. In the outer loop, the data is randomly split into training, test and unused/holdout sets, ensuring that the training data is not used for testing purposes and the withheld unused split for test data efficiency evaluation. The inner loop utilizes nested k-fold crossvalidation of data over choosing the regular k-fold cross-validation. The key advantage of the nested k-fold cross-validation over the regular method is reduced overfitting, unbiased performance estimation, better hyperparameter tuning, and reliable model comparison. Thereafter, we use the hyperparameter-tuned model to predict the geometry of unmeasured features using the test data. The inner loop iterates multiple times (50 iterations), training and evaluating a new model at each stage. After performing multiple iterations, we average over all the expectations of model accuracies at each iteration and estimate the overall performance (evaluation based on the withheld test set and reported as RMSE) of the model for a distinctive set of inputs. Reshuffling of the data was performed in the outer loop (3 iterations) and the process repeats itself through the inner loop. In section 1 of SI, we elaboratively discuss the dual Monte Carlo subsampling method, along with the pictorial representation of the method in Fig. S1. Initially, we evaluate seven deterministic regression algorithms using Scikit-Learn [28], a Python package for machine learning, for the prediction of the feature DFT of additively manufactured 9 parts. The different deterministic ML algorithms that we used are k-nearest neighbors, Support Vector Regression (SVR), decision trees, and generalized decision tree-based algorithms such as Random Forest, Gradient Boosting, Extreme Gradient Boosting (XGBoost) [29,30] and the multilayer perceptron (MLP) regressor [31]. In section 3 of SI, we discuss the different deterministic ML algorithms, their working procedure and the hyperparameters we use to train the models. We report the best results from the SVR and XGBoost algorithms, as they provide us with nearly identical performance; discussed in more detail in the results section. SVR is usually known for its widespread use, however, XGBoost, a scalable machine learning system for tree boosting, is relatively new in the machine learning community gives state-of-art results and used for different applications in the recent past [29,30]. The most important factor behind the near-identical performance of XGBoost when compared with SVR is its scalability in all scenarios. The scalability of XGBoost is due to several important systems and algorithmic optimizations [29]. 2.3 Probabilistic ML Algorithms We list and describe below the probabilistic ML algorithms used in this study. 2.3.1 Gaussian Process Regression Gaussian Process Regression (GPR), as implemented in the Scikit-Learn [28] library, is a nonparametric machine-learning technique used for regression tasks. It leverages Gaussian processes, which are collections of random variables, to model the relationship between input features and target variables. GPR not only provides point predictions but also quantifies uncertainty in predictions, making it particularly useful in scenarios where understanding and managing prediction uncertainty is crucial. We use a combination of the kernel functions - Matérn and White kernels for our work. For the Matérn kernel, we use the length scale (equals 1) and the smoothness parameter (equals 1.5) as the hyper-parameters. Also, for the White kernel that adds a noise component to the GPR model, we use noise level (equals 1) and the number of optimizer restarts (equals 50) as the hyper-parameters. While the implementation of GPR is straightforward, we get an overall uncertainty estimate, thus separation of uncertainties is not possible. Additionally, scalability is a concern for a vast amount of data, as the computational requirements for training and prediction can become prohibitive. Therefore, as an alternative, for trustworthy uncertainty quantification and scalability, next we use Bayesian learning methods. 2.3.2 Bayesian Neural Networks Different from a standard neural network, Bayesian neural network incorporates additional probabilistic information, and the network weights are represented as distributions. In a traditional neural network (Fig 4(a)), the weights are optimized through the backpropagation mechanism, yields a point estimate provided with a labeled data. Our study centers on a Bayesian deep learning approach (Fig. 4(b)) that implements stochastic variational inference derived neural networks with the integration of Bayesian probability theory [21]. 10 Implementation of Bayesian inference calculates the conditional probability of approximating a posterior distribution given the training data, the approximated posterior distribution calculated is outlined in equation 1[25]: P(ω|D!") = # ω%D!"&#(() ∫# D%ω&4#(() 4(D) (2) During the prediction phase, when presented with an input (x∗), it is possible to calculate the predictive probability; p(y∗|x∗, D) of the corresponding output given by: p(y∗|x∗, D) = ∫p(y∗|x∗, ω) p(ω|D)dω (3) From a classical perspective, as provided in reference [8], using the law of total expectation and total variance [8], the total uncertainty predictions are obtained as follows: Var(y∗|x∗, D) = σ:(x∗)3 + Var4 ω%D&f dω (5) Where the E is the expectation operator and represented as difference between the natural logarithm of the variational posterior (q) and the actual posterior (P). When the observed data is treated as constant (the inputs are treated as constant), the above equation is simplified into a function L(θ|D), that solely relies on the variables θ and D [14], L(θ|D) = D>?(q(ω|θ) ∥P(ω)W -ED$ω%θ&VlogP(D|ω)W = ∫q(ω|θ)(logq(ω|θ)) -logP(D|ω) -logP(ω) dω = ED((|B)(log q(ω|θ) - logP(D|ω) -logP(ω)) (6) In the equation (6), the initial term evaluates the difference between the variational and prior distributions, whereas the subsequent term represents the anticipated negative logarithmic probability (analogous to negative log likelihood loss term) based on the variational posterior. This function, Evidence Lower Bound (ELBO), L(θ|D), also known as the variational lower bound, can then be optimized with respect to θ, the distribution of parameters (means and variances) to derive the best approximation of the true posterior. We perform the optimization using a gradient-descent [9] and Resilient Back Propagation (RProp) [11] based "RMSprop" optimizer, allowing for the scalability of the variational Bayes mechanism to large and intricate models. The primary goal using the variational inference is to accurately approximate the true weights of the posterior distribution, treating the inputs as constant. 34 We provide the hyperparameters of the probabilistic ML algorithms in Table S2 and the model architecture summaries in Fig. S5 (a, b) of the Bayesian Neural Network approaches we used in this work. Table S3: Hyperparameters for probabilistic machine learning models 4. Representation of uncertainties from Probabilistic Bayesian Neural Network with varying train-test split ratios In Fig S4, we provide a plot illustrating the trends in aleatoric and epistemic uncertainty for the train-validation split ratios. We split the train dataset in different ratios ranging from 10% training split to 99% training split. It is noteworthy to emphasize that the 10% training data is exclusively utilized for observing the prediction distributions generated by the probabilistic network ensemble. It is widely recognized that training a model with a limited amount of data would inevitably result in an increase in model uncertainty, hence leading to an increase in epistemic uncertainty. Additionally, it is worth noting that this practice contributes to the amplification of noise within the dataset. This is mostly due to the uneven distribution of data between the training and testing sets, which subsequently leads to an escalation in aleatoric uncertainty. Based on the findings we Probabilistic ML Algorithms Hyperparameters GPR Matern (length_scale=1, nu=1.5) + WhiteKernel(noise_level=1), n_restarts_optimizer=50, random_state=2022 BNN - trainable mean and variance Number of hidden layers: 3, Number of neurons in the hidden layers: (24, 16, 8), Activation Function: Relu (only used in the hidden layers), Loss: "Negative Log Likelihood", Optimizer: "Adam" BNN-working as an ensemble of networks Number of hidden layers: 1, Number of neurons in the hidden layers: 8, Activation Function: Sigmoid, learning rate: 0.001, Loss: "Negative Log Likelihood", Optimizer: RMSProp. Fig S3: Model summaries for the probabilistic ML models we use for our work (a) Model summary for the preliminary approach - BNN with trainable mean and variance. (b) Model summary for our second approach where the BNN is working as the ensemble of networks. 35 present in Fig S4, we observe that the aleatoric uncertainty exhibits a decreasing pattern with intermittent increase while considering appropriate ratios of training fractions, such as 50% and 80% for training data. Additionally, we also observe a sudden increase in aleatoric uncertainty at 90% training percentage, when an uneven distribution of train-test data split is included. The observed pattern of increasing aleatoric uncertainty at 90% training data and then decreasing at 99% training data in the Probabilistic BNN might be due to few factors. At 90% training data, the model might have encountered more diverse and challenging data points, introducing additional variability in the predictions, thus leading to increased aleatoric uncertainty. This increase might signify that the model is struggling to capture complex patterns in this subset. As the training dataset further expands to 99%, the model likely gains a more comprehensive understanding of the data distribution, resulting in an improved predictive accuracy and reduced aleatoric uncertainty. Additionally, the increased training data may help the model better estimate the aleatoric uncertainty, leading to a more reliable representation. However, it's crucial here to consider the specifics of the dataset and model, as other factors like data quality, model architecture, or the nature of the training data might have also played a role in this observed behavior. Upon examining the outcomes pertaining to epistemic uncertainty, it becomes evident that there is a steady downward trajectory observed in conjunction with an increase in the number of training data points. Fig S4: Representation of the aleatoric and epistemic uncertainties with varying train-validation split ratios. We observe that the aleatoric uncertainty exhibits a decreasing pattern with intermittent increase while considering appropriate ratios of training fractions, such as 50% and 80% for the training data. We observe a consistent downward trajectory in epistemic uncertainty in conjunction with an increase in the number of training data points. 36 References [1] McGregor, D.J., Bimrose, M.V., Shao, C., Tawfick, S. and King, W.P., 2022. Using machine learning to predict dimensions and qualify diverse part designs across multiple additive machines and materials. Additive Manufacturing, 55, p.102848. [2] Ren, Y., Lu, X., Guo, H., Xie, Z., Zhang, H. and Zhang, C., 2023. A review of combinatorial optimization problems in reverse logistics and remanufacturing for end-of-life products. Mathematics, 11(2), p.298. [3] James, G., Witten, D., Hastie, T., Tibshirani, R. and Taylor, J., 2023. Statistical learning. In An Introduction to Statistical Learning: with Applications in Python (pp. 15-67). Cham: Springer International Publishing. [4] Biau, G. and Scornet, E., 2016. A random forest guided tour. Test, 25, pp.197-227. [5] Natekin, A. and Knoll, A., 2013. Gradient boosting machines, a tutorial. Frontiers in neurorobotics, 7, p.21. [6] Chen, T., He, T., Benesty, M., Khotilovich, V., Tang, Y., Cho, H., Chen, K., Mitchell, R., Cano, I. and Zhou, T., 2015. Xgboost: extreme gradient boosting. R package version 0.4-2, 1(4), pp.1-4. [7] Fan, J., Ma, X., Wu, L., Zhang, F., Yu, X. and Zeng, W., 2019. Light Gradient Boosting Machine: An efficient soft computing model for estimating daily reference evapotranspiration with local and external meteorological data. Agricultural water management, 225, p.105758. [8] Wright, W.A., 1999. Bayesian approach to neural-network modeling with input uncertainty. IEEE Transactions on Neural Networks, 10(6), pp.1261-1270. [9] Smolensky, P., 1996. Overview: Statistical perspectives on neural networks. Mathematical Perspectives on Neural Networksa, pp.453-496. [10] Meyer, A.R., 2006. Expectation & Variance. [11] Fox, C.W. and Roberts, S.J., 2012. A tutorial on variational Bayesian inference. Artificial intelligence review, 38, pp.85-95. [12] Ruder, S., 2016. An overview of gradient descent optimization algorithms. arXiv preprint . [13] Saputra, W., Zarlis, M., Sembiring, R.W. and Hartama, D., 2017, December. Analysis resilient algorithm on artificial neural network backpropagation. In Journal of Physics: Conference Series (Vol. 930, No. 1, p. 012035). IOP Publishing. 37 [14] Olivier, A., Shields, M.D. and Graham-Brady, L., 2021. Bayesian neural networks for uncertainty quantification in data-driven materials modeling. Computer methods in applied mechanics and engineering, 386, p.114079.
|
2509.16229
|
Low-Cost Shield MicroBCI to Measure EEG with STM32
Ildar Rakhmatulin, PhD
Abstract
The article introduces an accessible pathway into neuroscience using the MicroBCI device,
which leverages the STM32 Nucleo-55RG development board as the core platform.
MicroBCI enables the STM32 board to function as a brain–computer interface, capable of
recording EEG, EMG, and ECG signals across 8 channels. Over the past decade, the rapid
growth of artificial intelligence has transformed many fields, including neurobiology. The
application of machine learning methods has created opportunities for the practical use of
EEG signals in diverse technological domains. This growing interest has fueled the
popularity of affordable brain–computer interface systems that utilize non-invasive electrodes
for EEG acquisition. The MicroBCI device demonstrates reliable noise performance and
accuracy for applied research and prototyping. Furthermore, it effectively detects alpha brain
waves, confirming its ability to capture key neurological signals.
Keywords: microBCI; EMG; EKG; STM32 EEG; brain-computer interface; BCI
Source in GitHub https://github.com/pieeg-club/MicroBCI
1. Introduction
Electroencephalography (EEG) is a widely recognized and accessible method for recording
brain activity. It functions by measuring the electrical signals generated in different brain
regions using electrodes placed on the scalp. Due to its versatility, EEG has become a
fundamental tool in both research and clinical practice, with applications that continue to
expand. A variety of electrode types are available for signal acquisition, most notably wet and
dry electrodes. STM32 is one of the most popular microcontrollers on the market. However,
to the best of our knowledge, there are no boards available that provide a convenient format
along with an SDK and mobile SDK for turning STM32 boards into a brain computer
interface.
2. Problem Statement and Motivation
The price of commercial brain–computer interface (BCI) systems varies considerably, with
many devices remaining too costly for researchers, educators, and hobbyists. Meanwhile, the
rapid progress of neural networks and signal processing techniques has sparked a surge of
interest in BCIs across multiple application domains. As early as 2016, Meng et al. [2]
demonstrated the control of a robotic arm using motor imagery, where neural networks
decoded brain signals collected through electrodes. Previously, such outcomes were mostly
limited to invasive methods employing implanted microelectrode arrays to record brain
activity directly from the cortex [3]. Although effective, invasive approaches are expensive,
technically challenging, and carry significant risks, requiring specialized personnel and
equipment [4].
This has driven increasing attention toward non-invasive, low-cost EEG acquisition
platforms. Conventional EEG readers are typically composed of a microcontroller
(processor), a power supply board, and communication modules—elements that often
contribute to their high cost. To mitigate this issue, we developed the microBCI shield for the
STM32 Nucleo-55RG board, which extends the capabilities of the Nucleo platform to acquire
biosignals such as EEG, EMG, and ECG. Leveraging the Nucleo ecosystem’s popularity for
embedded systems education and prototyping, this approach lowers the entry barrier and
offers an accessible solution for affordable biosignal measurement.
3. Review of Related Work
A wide range of devices has been developed for recording EEG signals, each with varying
levels of accessibility and technical sophistication. Gunawan et al. [5] employed a low-cost
system for EEG signal acquisition aimed at classification tasks, while Ashby et al. [6] used a
similar setup to classify mental visual activities. A general overview of brain computer
interfaces is presented in the paper [7]. Building upon these efforts, this article introduces the
microBCI shield, a self-contained, low-cost biosignal interface designed for the STM32
Nucleo-55RG board. This approach provides a practical and extensible platform for EEG,
EMG, and ECG acquisition, while leveraging the robustness and ecosystem support of the
STM32 Nucleo family.
4. Technical Realizations
In EEG signal acquisition, the ADS1299 analog-to-digital converter (ADC) from Texas
Instruments plays a central role. Having been on the market for over a decade, it is widely
recognized as one of the most reliable and high-performance ADCs for biopotential
measurements. A key advantage of the ADS1299 is its integrated multiplexer, which
simplifies multi-channel EEG acquisition. The capabilities of this ADC, along with the
features of its multiplexer, have been extensively reviewed by Rashid et al. [8], though a
detailed discussion is beyond the scope of this article. Wet electrodes are commonly preferred
because of their relatively low impedance, typically ranging between 200 kΩ and 5 kΩ after
applying conductive gel. Li et al. [1] provided a comprehensive comparison of wet and dry
electrodes, outlining the specific strengths and limitations of each. In our study, we employed
dry Ag/AgCl electrodes, as they eliminate the need for gel application and thereby simplify
the EEG recording process.
For signal acquisition, we placed 8 dry Ag/AgCl electrodes according to the International 10–
20 Electrode Placement System at the following positions: F7, Fz, F8, C3, C4, T5, Pz, and
T6. A general overview of the microBCI shield for STM32 Nucleo-55RG is presented in
Figure 1.
Fig. 1. General view of the PCB boards with soldered elements with STM32 Nucleo-55RG
During testing, the MicroBCI shield operated completely offline, disconnected from any
Power source. A power bank was used to ensure safety and prevent potential interference
from external connections. The device was powered solely by a 5V battery supply, and it
should always be tested and operated using 5V batteries only. A general overview of the
assembled device configuration and the electrode placement is shown in Figure 2.
Fig. 2. General view for connecting EEG electrodes to MicroBCI
5. Dataset Evaluation
5.1. Artefacts
The internal noise of the microBCI shield is approximately 1 µV (dataset available at
https://github.com/Ildaron/ardEEG/tree/main/dataset/internal_noise). We conducted several
tests to record EEG signals and identify common artifacts, such as chewing and blinking,
which are widely used for evaluating EEG device performance. In our experiments, chewing
artifacts were clearly distinguishable from the background EEG signal (Figure 3).
Fig. 3. Artifact test. The process of measuring chewing artifacts using dry electrodes (Fz). Chewing
occurred in the following sequence: 4 times, 3 times, 2, and 1 time. The y-axis is the processed EEG
signal after passing filter bands of 1-40 Hz in microvolts and with 250 samples per second
5.2. Alpha
Alpha waves (α-waves), with frequencies ranging from 8 to 12 Hz and typical amplitudes
between 35 and 65 µV, provide a standard benchmark for evaluating EEG recording systems.
These waves are usually observed in individuals who are awake, relaxed, and have their eyes
closed. In our tests, EEG signals were recorded for 8-second intervals under both eyes-closed
and eyes-open conditions. As expected, an increase in EEG amplitude was observed within
the 8–12 Hz frequency range when the eyes were closed, while alpha activity decreased when
the eyes were open. These results are consistent with the characteristic alpha wave pattern in
the occipital region, confirming the proper functionality and design of the microBCI shield
(Figure 4).
Fig. 4. Alpha test. The process of recording an EEG signal from an electrode (Fz) with eyes open and
closed. The y-axis is the processed EEG signal after passing filter bands of 8-12Hz in microvolts, and
with 250 samples per second
6. Conclusion and Discussion
This article highlights the microBCI shield as a cost-effective yet high-performance solution
for EEG, EMG, and ECG signal acquisition. Its accuracy was validated through standard
EEG benchmarks, including common artifacts and alpha wave detection, demonstrating the
reliability
of
the
device
in
capturing
neurological
signals.
The shield’s noise characteristics closely match those of the ADS1299 ADC from Texas
Instruments, confirming high-quality signal acquisition at a fraction of the cost. Efficient data
transfer between the ADC and the Nucleo processor ensures real-time operation, making the
device suitable for dynamic brain–computer interface (BCI) applications. Integration with the
STM32 Nucleo ecosystem provides an accessible platform for researchers, educators, and
developers
to
prototype,
experiment,
and
innovate
in
BCI
technology.
By providing the hardware design and software in an open-source format, this project
encourages the creation of large, collaborative EEG datasets and facilitates community-
driven development in biosignal processing and neurotechnology research. The microBCI
shield thus represents a versatile foundation for advancing applied BCI solutions.
Reference
1. Li, G., Wang, S., & Duan, Y. Y. (2018). Towards conductive-gel-free electrodes:
Understanding the wet electrode, semi-dry electrode and dry electrode-skin interface
impedance using electrochemical impedance spectroscopy fitting. Sensors and Actuators B:
Chemical,
277,
250–260.
https://doi.org/10.1016/j.snb.2018.10.128
2. Meng, J., Zhang, S., Bekyo, A. et al. Noninvasive Electroencephalogram Based Control of
a
Robotic
Arm
for
Reach
and
Grasp
Tasks.
Sci
Rep
6,
38565
(2016).
https://doi.org/10.1038/srep38565
3. Fernández, E., Greger, B., House, P. A., Aranda, I., Botella, C., Albisua, J., Soto-Sánchez,
C., Alfaro, A., & Normann, R. A. (2014). Acute human brain responses to intracortical
microelectrode arrays: challenges and future prospects. Frontiers in Neuroengineering, 7,
Article
24.
https://doi.org/10.3389/fneng.2014.00024
4.Tonutti, M., Elson, D. S., Yang, G.-Z., Darzi, A. W., & Sodergren, M. H. (2017). The role
of technology in minimally invasive surgery: State of the art, recent developments and future
directions.
Postgraduate
Medical
Journal,
93(1097),
159–167.
https://doi.org/10.1136/postgradmedj-2016-134311
5. Gunawan, A. A. S., Surya, K., & Meiliana. (2018). Brainwave classification of visual
stimuli based on low cost EEG spectrogram using DenseNet. Procedia Computer Science,
135,
128–139.
https://doi.org/10.1016/j.procs.2018.08.158
6. C. Ashby, A. Bhatia, F. Tenore and J. Vogelstein, "Low-cost electroencephalogram (EEG)
based authentication," 2011 5th International IEEE/EMBS Conference on Neural
Engineering, Cancun, Mexico, 2011, pp. 442-445, doi: 10.1109/NER.2011.5910581.
7. B. J. Edelman et al., "Non-Invasive Brain-Computer Interfaces: State of the Art and
Trends," in IEEE Reviews in Biomedical Engineering, vol. 18, pp. 26-49, 2025, doi:
10.1109/RBME.2024.3449790
8. Rashid, U.; Niazi, I.K.; Signal, N.; Taylor, D. An EEG Experimental Study Evaluating the
Performance
of
Texas
Instruments
ADS1299.
Sensors
2018,
18,
3721.
https://doi.org/10.3390/s18113721
|
Low-Cost Shield MicroBCI to Measure EEG with STM32 Ildar Rakhmatulin, PhD Abstract The article introduces an accessible pathway into neuroscience using the MicroBCI device, which leverages the STM32 Nucleo-55RG development board as the core platform. MicroBCI enables the STM32 board to function as a brain-computer interface, capable of recording EEG, EMG, and ECG signals across 8 channels. Over the past decade, the rapid growth of artificial intelligence has transformed many fields, including neurobiology. The application of machine learning methods has created opportunities for the practical use of EEG signals in diverse technological domains. This growing interest has fueled the popularity of affordable brain-computer interface systems that utilize non-invasive electrodes for EEG acquisition. The MicroBCI device demonstrates reliable noise performance and accuracy for applied research and prototyping. Furthermore, it effectively detects alpha brain waves, confirming its ability to capture key neurological signals. Keywords: microBCI; EMG; EKG; STM32 EEG; brain-computer interface; BCI Source in GitHub https://github.com/pieeg-club/MicroBCI 1. Introduction Electroencephalography (EEG) is a widely recognized and accessible method for recording brain activity. It functions by measuring the electrical signals generated in different brain regions using electrodes placed on the scalp. Due to its versatility, EEG has become a fundamental tool in both research and clinical practice, with applications that continue to expand. A variety of electrode types are available for signal acquisition, most notably wet and dry electrodes. STM32 is one of the most popular microcontrollers on the market. However, to the best of our knowledge, there are no boards available that provide a convenient format along with an SDK and mobile SDK for turning STM32 boards into a brain computer interface. 2. Problem Statement and Motivation The price of commercial brain-computer interface (BCI) systems varies considerably, with many devices remaining too costly for researchers, educators, and hobbyists. Meanwhile, the rapid progress of neural networks and signal processing techniques has sparked a surge of interest in BCIs across multiple application domains. As early as 2016, Meng et al. [2] demonstrated the control of a robotic arm using motor imagery, where neural networks decoded brain signals collected through electrodes. Previously, such outcomes were mostly limited to invasive methods employing implanted microelectrode arrays to record brain activity directly from the cortex [3]. Although effective, invasive approaches are expensive, technically challenging, and carry significant risks, requiring specialized personnel and equipment [4]. This has driven increasing attention toward non-invasive, low-cost EEG acquisition platforms. Conventional EEG readers are typically composed of a microcontroller (processor), a power supply board, and communication modules-elements that often contribute to their high cost. To mitigate this issue, we developed the microBCI shield for the STM32 Nucleo-55RG board, which extends the capabilities of the Nucleo platform to acquire biosignals such as EEG, EMG, and ECG. Leveraging the Nucleo ecosystem's popularity for embedded systems education and prototyping, this approach lowers the entry barrier and offers an accessible solution for affordable biosignal measurement. 3. Review of Related Work A wide range of devices has been developed for recording EEG signals, each with varying levels of accessibility and technical sophistication. Gunawan et al. [5] employed a low-cost system for EEG signal acquisition aimed at classification tasks, while Ashby et al. [6] used a similar setup to classify mental visual activities. A general overview of brain computer interfaces is presented in the paper [7]. Building upon these efforts, this article introduces the microBCI shield, a self-contained, low-cost biosignal interface designed for the STM32 Nucleo-55RG board. This approach provides a practical and extensible platform for EEG, EMG, and ECG acquisition, while leveraging the robustness and ecosystem support of the STM32 Nucleo family. 4. Technical Realizations In EEG signal acquisition, the ADS1299 analog-to-digital converter (ADC) from Texas Instruments plays a central role. Having been on the market for over a decade, it is widely recognized as one of the most reliable and high-performance ADCs for biopotential measurements. A key advantage of the ADS1299 is its integrated multiplexer, which simplifies multi-channel EEG acquisition. The capabilities of this ADC, along with the features of its multiplexer, have been extensively reviewed by Rashid et al. [8], though a detailed discussion is beyond the scope of this article. Wet electrodes are commonly preferred because of their relatively low impedance, typically ranging between 200 kΩ and 5 kΩ after applying conductive gel. Li et al. [1] provided a comprehensive comparison of wet and dry electrodes, outlining the specific strengths and limitations of each. In our study, we employed dry Ag/AgCl electrodes, as they eliminate the need for gel application and thereby simplify the EEG recording process. For signal acquisition, we placed 8 dry Ag/AgCl electrodes according to the International 1020 Electrode Placement System at the following positions: F7, Fz, F8, C3, C4, T5, Pz, and T6. A general overview of the microBCI shield for STM32 Nucleo-55RG is presented in Figure 1. Fig. 1. General view of the PCB boards with soldered elements with STM32 Nucleo-55RG During testing, the MicroBCI shield operated completely offline, disconnected from any Power source. A power bank was used to ensure safety and prevent potential interference from external connections. The device was powered solely by a 5V battery supply, and it should always be tested and operated using 5V batteries only. A general overview of the assembled device configuration and the electrode placement is shown in Figure 2. Fig. 2. General view for connecting EEG electrodes to MicroBCI 5. Dataset Evaluation 5.1. Artefacts The internal noise of the microBCI shield is approximately 1 μV (dataset available at https://github.com/Ildaron/ardEEG/tree/main/dataset/internal_noise). We conducted several tests to record EEG signals and identify common artifacts, such as chewing and blinking, which are widely used for evaluating EEG device performance. In our experiments, chewing artifacts were clearly distinguishable from the background EEG signal (Figure 3). Fig. 3. Artifact test. The process of measuring chewing artifacts using dry electrodes (Fz). Chewing occurred in the following sequence: 4 times, 3 times, 2, and 1 time. The y-axis is the processed EEG signal after passing filter bands of 1-40 Hz in microvolts and with 250 samples per second 5.2. Alpha Alpha waves (α-waves), with frequencies ranging from 8 to 12 Hz and typical amplitudes between 35 and 65 μV, provide a standard benchmark for evaluating EEG recording systems. These waves are usually observed in individuals who are awake, relaxed, and have their eyes closed. In our tests, EEG signals were recorded for 8-second intervals under both eyes-closed and eyes-open conditions. As expected, an increase in EEG amplitude was observed within the 8-12 Hz frequency range when the eyes were closed, while alpha activity decreased when the eyes were open. These results are consistent with the characteristic alpha wave pattern in the occipital region, confirming the proper functionality and design of the microBCI shield (Figure 4). Fig. 4. Alpha test. The process of recording an EEG signal from an electrode (Fz) with eyes open and closed. The y-axis is the processed EEG signal after passing filter bands of 8-12Hz in microvolts, and with 250 samples per second 6. Conclusion and Discussion This article highlights the microBCI shield as a cost-effective yet high-performance solution for EEG, EMG, and ECG signal acquisition. Its accuracy was validated through standard EEG benchmarks, including common artifacts and alpha wave detection, demonstrating the reliability of the device in capturing neurological signals. The shield's noise characteristics closely match those of the ADS1299 ADC from Texas Instruments, confirming high-quality signal acquisition at a fraction of the cost. Efficient data transfer between the ADC and the Nucleo processor ensures real-time operation, making the device suitable for dynamic brain-computer interface (BCI) applications. Integration with the STM32 Nucleo ecosystem provides an accessible platform for researchers, educators, and developers to prototype, experiment, and innovate in BCI technology. By providing the hardware design and software in an open-source format, this project encourages the creation of large, collaborative EEG datasets and facilitates communitydriven development in biosignal processing and neurotechnology research. The microBCI shield thus represents a versatile foundation for advancing applied BCI solutions. Reference 1. Li, G., Wang, S., & Duan, Y. Y. (2018). Towards conductive-gel-free electrodes: Understanding the wet electrode, semi-dry electrode and dry electrode-skin interface impedance using electrochemical impedance spectroscopy fitting. Sensors and Actuators B: Chemical, 277, 250-260. https://doi.org/10.1016/j.snb.2018.10.128 2. Meng, J., Zhang, S., Bekyo, A. et al. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks. Sci Rep 6, 38565 (2016). https://doi.org/10.1038/srep38565 3. Fernández, E., Greger, B., House, P. A., Aranda, I., Botella, C., Albisua, J., Soto-Sánchez, C., Alfaro, A., & Normann, R. A. (2014). Acute human brain responses to intracortical microelectrode arrays: challenges and future prospects. Frontiers in Neuroengineering, 7, Article 24. https://doi.org/10.3389/fneng.2014.00024 4.Tonutti, M., Elson, D. S., Yang, G.-Z., Darzi, A. W., & Sodergren, M. H. (2017). The role of technology in minimally invasive surgery: State of the art, recent developments and future directions. Postgraduate Medical Journal, 93(1097), 159-167. https://doi.org/10.1136/postgradmedj-2016-134311 5. Gunawan, A. A. S., Surya, K., & Meiliana. (2018). Brainwave classification of visual stimuli based on low cost EEG spectrogram using DenseNet. Procedia Computer Science, 135, 128-139. https://doi.org/10.1016/j.procs.2018.08.158 6. C. Ashby, A. Bhatia, F. Tenore and J. Vogelstein, "Low-cost electroencephalogram (EEG) based authentication," 2011 5th International IEEE/EMBS Conference on Neural Engineering, Cancun, Mexico, 2011, pp. 442-445, 7. B. J. Edelman et al., "Non-Invasive Brain-Computer Interfaces: State of the Art and Trends," in IEEE Reviews in Biomedical Engineering, vol. 18, pp. 26-49, 2025, 8. Rashid, U.; Niazi, I.K.; Signal, N.; Taylor, D. An EEG Experimental Study Evaluating the Performance of Texas Instruments ADS1299. Sensors 2018, 18, 3721. https://doi.org/10.3390/s18113721
|
2509.16235
|
Motion in Aubry’s galaxy
Motion in Aubry’s galaxy
M. Burri1 and R. S. MacKay2
1)Physics department, University of Warwick, Coventry CV4 7AL, U.K.
2)Mathematics Institute & Centre for Complexity Science, University of Warwick, Coventry CV4 7AL,
U.K. Corresponding author
(*Electronic mail: R.S.MacKay@warwick.ac.uk)
(*Electronic mail: burri.megha@gmail.com)
(Dated: 23 September 2025)
The dynamics of a test particle in the field of a model galaxy proposed by Serge Aubry is studied by a combination
of theory and numerical computation. Regimes of near-integrable behaviour and almost perfect chaos are found. A
proposed explanation for the latter is sketched. Thoughts are presented on the implications of the analysis for galaxies.
Aubry proposed a model for motion in a galaxy consist-
ing of the usual “electrostatic” monopole of a central mass
and an additional “magnetic” dipole corresponding to ro-
tating mass near the centre in a gravitomagnetic formu-
lation. We analyse the resulting motion by a combination
of theory and numerics. We find a spectrum of behaviour
from near-integrable to near-perfect chaos.
I.
INTRODUCTION
Serge Aubry has been creative in many areas of physics
and mathematics. His latest contribution1 is to note that New-
tonian gravity is not Lorentz-invariant and therefore there
should also be a “magnetic” contribution to gravity, whereby
moving matter exerts a force of magnetic type on moving mat-
ter. As he has commented, this idea was already proposed by
Heaviside and has been developed by various people, notably
Jefimenko13 (and see ref. 2 for a recent article). Furthermore,
it seems to be an established view in Newtonian approxima-
tion to general relativity, e.g. ref. 8. Yet Aubry brings incisive
views to the story.
One particular consequence Aubry proposed is that a sim-
ple model for a galaxy could consist of a central mass of which
at least part is rotating, thereby creating a gravitational “mag-
netic” dipole field in addition to the standard “electrostatic”
monopole. Aubry was interested in non-relativistic motion
in this field, with a view to explaining the rotation curves of
galaxies, but given the relativistic origin of the magnetic force
it seems natural to extend to relativistic motion.
An immediate question is whether the gravitational force
on a test particle should be taken proportional to its rest mass
or its relativistic mass. Following Jefimenko, we take rest
mass, though Aubry has been investigating the case of rela-
tivistic mass. For the rotation curves, probably only the non-
relativistic regime is relevant, where both cases agree.
In the case of gravitational force proportional to rest mass,
the resulting equations of motion for the position r and ve-
locity v of a test particle, or momentum p = γv per unit rest
mass, where
γ =
q
1+ p2/c2 = 1/
q
1−v2/c2,
(1)
with respect to coordinate-time t, are
dp
dt
= −GM r
r3 + GD
c2 v ×
3z r
r5 −ˆz
r3
(2)
dr
dt
= v,
(3)
where M is the central mass, D is the dipole moment (its rela-
tivistic angular momentum in the +z direction), G is the grav-
itational constant and c is the speed of light. We take D ≥0.
An alternative description of our system is as the relativistic
motion of a charged particle (with e/m = 1) in the fields
E = −GM r
r3
of an electrostatic monopole and
B = GD
c2
3z r
r5 −ˆz
r3
of a magnetic dipole in Minkowski space. The case with only
magnetic dipole has been studied by many people, going back
to Störmer in connection with charged particle motion in the
earth’s magnetic field. For a recent treatment, see ref. 18. As
we will scale M to 1, the dipole-only case will correspond
to the limit D = ∞(with appropriate scaling of other quan-
tities).
The problem of motion of charged particles in the
earth’s magnetic field, however, should really include a grav-
itational monopole field, even if its effect is small compared
to the dipole field, so our treatment is relevant if one takes D
large. On the other hand, the earth’s magnetic field has sig-
nificant deviations from axisymmetry, but we don’t treat that
here.
If preferred, one can write the equations with respect to
proper time τ for the particle by using dt = γ dτ:
dp
dτ
= −γGM r
r3 + GD
c2 p×
3z r
r5 −ˆz
r3
(4)
dr
dτ
= p.
(5)
Centripetal acceleration in coordinate time for circular mo-
tion in a circle of radius R at speed v is γv2/R, so this system
has circular equatorial orbits with velocity component vφ in
arXiv:2509.16235v1 [physics.class-ph] 16 Sep 2025
Motion in Aubry’s galaxy
2
the tangential direction (in this paper we use “physical” com-
ponents for vectors, as opposed to contravariant or covariant
components) satisfying
γv2
φ
R = GM
R2 + GDvφ
c2R3 .
(6)
From the definition (1) of γ, squaring (6) transforms it to a
quartic in vφ given R. It does not have clean solutions, but
by considering the graphs of the two sides as functions of
vφ ∈(−c,c), we see that there are precisely two solutions vφ
for each radius R, one positive, one negative, corresponding
to orbits that are co- and counter-rotating with respect to the
angular momentum of the central mass. Alternatively, one can
consider it as a quadratic in R for given vφ ̸= 0 and (suppos-
ing D > 0) obtain one positive solution for R if vφ > 0 and
two positive solutions if vφ ∈(vmin,0), where vmin < 0 is the
solution of
γv3
min = −GM2c2
4D
(7)
(remember that γ depends on v).
Aubry posed the question of what the other orbits look like.
Here we answer this question by a combination of theory and
numerics.
To simplify treatment, we use units for length, time and
mass in which G = 1, c = 1, M = 1 (instead of making c =
1 one can make D = 1, which would be useful in the non-
relativistic regime, but we did not use that scaling).
To gain a first idea of the dynamics, we plot some trajec-
tories in (x,y,z) in Figure 1. We highlight some features al-
ready: they are often constrained to some region of space, they
may be quasiperiodic or chaotic, and some exhibit helical mo-
tion when close to the centre. We will address these features
among others.
II.
CONSTANTS OF THE MOTION
To address the constraints on the region of space for an or-
bit, we point out some constants of the motion.
There is an obvious constant of the motion, the energy
H(r,p) = γ −1
r .
(8)
We denote its value by E (there is a potential notational clash
with the absolute value of the electrostatic field E but we will
write |E| for the latter). We will be interested mainly in the
orbits with H = E < 1, the analogue of elliptic orbits in the
Kepler problem, as opposed to the hyperbolic ones. Then r ≤
1
1−E so the motion is bounded by at least this condition. But
we will address E ≥1 too.
There is another constant of the motion associated with ro-
tation symmetry about the z-axis, which is easiest described
in cylindrical polar coordinates (R,z,φ). It is an “angular mo-
mentum” constant
Lz = R(pφ +Aφ),
(9)
(a) (E,Lz,D) = (0.837,−0.864,3.10)
(b) (E,Lz,D) = (0.225,0.200,0.700)
(c) (E,Lz,D) = (0.558,5.92,10.0)
FIG. 1: Sample trajectories for various parameters exhibiting
a range of behaviours: a) near integrable, b) chaotic, and c)
helical.
Motion in Aubry’s galaxy
3
where A is a vector potential for the dipole field, chosen to be
independent of φ, namely A = Aφ ˆϕ with
Aφ = D R
r3 .
(10)
It follows that
Rpφ = Lz −DR2
r3 .
(11)
Note that r2 = R2 +z2.
In the limit case D = 0, there is an additional indepen-
dent constant of the motion, the square of the angular mo-
mentum vector, L2, or one can equivalently take L2
x + L2
y =
(pyz−pzy)2 +(pzx−pxz)2, because we already have Lz con-
stant. This follows from spherical symmetry and can be ver-
ified directly. One question we address is whether this third
constant of the motion has a continuation to D > 0.
For
geodesic motion in the Kerr metric, which can be considered
the general relativistic version of Aubry’s field, there is indeed
a third constant of the motion, the “Carter constant”7. We find,
however, that the answer for Aubry’s field is no (at least from
numerical simulations),
III.
EQUATORIAL CIRCULAR ORBITS
To illustrate the ideas so far, we compute the energy and
“angular momentum” for the equatorial circular orbits. In this
section, we write v for vφ.
Transforming (6) to scaled variables and solving the result-
ing quadratic in R, the circular orbits are given by
R = 1+σ
p
1+4Dγv3
2γv2
,
(12)
with σ = ± for v < 0, + for v > 0. The formula
1
R = −1+σ
p
1+4Dγv3
2Dv
will also be useful.
Its energy is
E = γ −1
R = γ +
1
2Dv(1−σ
p
1+4Dγv3),
(13)
and its angular momentum constant is
Lz = γRv+ D
R = σ
v
p
1+4Dγv3.
(14)
For a parametric plot, see Figure 2.
The cusp can be understood as resulting from a double root
of the equation for critical points of H with respect to R for z =
0, pz = 0, pR = 0 and fixed Lz, to be addressed in section VI.
Note that if we think of speeds v for which γv3/c3 ≈1
4
and we put back the constants G,M,c, then the argument of
the square roots above makes a transition from the first to the
second term being dominant as D increases through GM2/c,
which is the Kerr limit for the angular momentum. The con-
nection, however, is probably no deeper than dimensional
analysis.
-2
-1
1
2
3
4
e
-4
-2
2
4
6
L
FIG. 2: The set of circular orbits for D = 1 projected into the
plane of (E,Lz). The blue curve is for vφ = v > 0; it starts
from v = 0 at Lz = +∞, E = 1 with R = ∞, and as v →1 it
goes to R = 0 with E and Lz going to +∞. The red/green
curve is for v < 0 with the switchover at the minimum
possible speed vmin (7) (which occurs at Lz = 0); v →0 at
both ends; R goes from 0 at the end of the green part to +∞at
the end of the red part.
IV.
HAMILTONIAN FORMULATION
It is convenient to exploit a Hamiltonian formulation of the
dynamics. In addition to the Hamiltonian H as in (8), which
we rewrite here:
H = γ −1
r ,
(15)
define the symplectic form (a symplectic form is a non-
degenerate closed antisymmetric bilinear map from pairs of
tangent vectors to R)
ω = ∑
i
dri ∧dpi −dA♭
(16)
in Cartesian coordinates, where A♭is the 1-form
A♭= D
r3 (xdy−ydx).
(17)
The equations of motion are ( ˙r, ˙p) = X(r,p), where X is
the vector field on R3 ×R3 such that iXω = dH. We check it
agrees with (2, 3). First evaluate
dA♭= 2 D
r3 dx∧dy−3 D
r5 (R2dx∧dy−yzdz∧dx+xzdz∧dy).
(18)
Applying iXω = dH to ∂pi yields the easy equations ˙ri = pi
γ .
Motion in Aubry’s galaxy
4
Applying iXω = dH to ∂ri for i = x,y,z, in turn we obtain
˙px = −1
r3 x+ D
r5 (2r2 −3R2) py
γ −3 D
r5 yz pz
γ
(19)
˙py = −1
r3 y−D
r5 (2r2 −3R2) px
γ +3 D
r5 xz pz
γ
(20)
˙pz = −1
r3 z+3 D
r5
z
γ (ypx −xpy).
(21)
Noting that 2r2 −3R2 = 3z2 −r2, we see we have the desired
equations of motion. Thus, we have formulated the system as
a Hamiltonian system of 3 degrees of freedom (DoF).
Note that the equations of motion are singular at r = 0.
There are techniques to “regularise” the collision, leading to a
continuation of the trajectory after collision, but for this paper
we will ignore the issue.
There is an alternative Hamiltonian formulation with mod-
ified momentum π = p + A, ω = ∑i dri ∧dπi, H = γ −1
r ,
where γ =
p
1+|π −A|2, but we prefer to put the magnetic
effect into the symplectic form and keep the standard momen-
tum p.
There are also formulations in space-time with respect to
proper time. The simplest one has the feature that the “electro-
static” part of gravity is also put into the symplectic form. We
denote by Q = (t,r) the position of the particle in Minkowski
space, with inner product Q · Q′ = −tt′ + r · r′ and associ-
ated “norm”-squared |Q|2 = Q · Q. It has Hamiltonian K =
1
2|P|2, where P = (−γ,p) is the 4-momentum, and symplec-
tic form Ω= −dΘ −F, where Θ = ∑ν PνdQν is the canon-
ical 1-form on the tangent bundle of Minkowski space and
F = FµνdQµ ∧dQν is the Faraday tensor, which has compo-
nents
Fµν =
0
Ex
Ey
Ez
−Ex
0
−Bz
By
−Ey
Bz
0
−Bx
−Ez −By
Bx
0
.
This Hamiltonian system must be restricted to the level set
K = −1
2 (for unit rest mass).
An alternative is to put the
gravitational fields into the Hamiltonian and use the canoni-
cal symplectic form; we will mention a version of that in the
next section.
In Hamiltonian dynamics, there is a natural relation be-
tween continuous symmetries and constants of the motion. In
particular, we can check that the conserved quantity Lz follows
from i∂φ H = 0 and i∂φ ω = dLz. We use
ω = −dΘ−dA♭,
(22)
with the natural 1-form Θ = prdR+ pzdz+Rpφdφ, and note
that
dA♭= D
r5 (2r2 −3R2)RdR∧dφ +3 D
r5 R2zdz∧dφ.
(23)
So
ω = dR∧dpR +dz∧dpz +dφ ∧d(Rpφ)−dA♭,
(24)
and thus
i∂φ ω = d(Rpφ)+ D
r5 (2r2 −3R2)RdR+3 D
r5 R2zdz.
(25)
For comparison, from (9),
dLz = d(Rpφ)−Dd
R2
r3
,
(26)
the
second
term
of
which
can
be
expanded
as
D
r5
2Rr2dR−3R3dR−3R2zdz
, giving a result that agrees
with (25).
V.
REDUCED SYSTEM
The invariance of the Hamiltonian structure under rotation
about the z-axis leads to a reduced Hamiltonian system of 2
DoF, in just (R,z, pR, pz).
The reduced equations of motion for given value of Lz can
be obtained by expressing H and ω in cylindrical coordinates
and substituting (from (11))
pφ = Lz
R −
DR
(R2 +z2)3/2 ,
(27)
leading to
H =
r
1+ p2
R + p2z +
Lz
R −
DR
(R2+z2)3/2
2
−
1
√
R2+z2 ,
(28)
ω = dR∧dpR +dz∧dpz.
(29)
As the symplectic form is canonical, the equations of motion
are in canonical form:
˙R =
pR
γ
(30)
˙z =
pz
γ
(31)
˙pR =
pφ
γ
Lz
R2 + D
r3 −3DR2
r5
−R
r3
(32)
˙pz = −
pφ
γ
3DRz
r5
−z
r3 ,
(33)
with pφ given by (27) and γ the first square root in (28). If
Lz ̸= 0, the reduction appears to have introduced a singularity
on the whole of the z-axis R = 0, but conservation of H implies
that the only way that R = 0 can be reached is if r →0. The
motion in φ can be reconstructed by ˙φ =
pφ
γR.
The system can be written in an alternative form with re-
spect to proper time τ for the test particle. Namely, the motion
for H = E is the motion in proper time for Hamiltonian
J = 1
2
p2
R + p2
z −(E + 1
r )2 +1+( Lz
R −DR
r3 )2
(34)
on J = 0, with respect to the canonical symplectic form (29),
as can be checked by explicit comparison, but one must re-
strict to E + 1
r ≥1.
To analyse the dynamics of the reduced system it is conve-
nient first to notice that the equatorial plane z = 0, pz = 0 is
invariant. So we start by analysing the dynamics there.
Motion in Aubry’s galaxy
5
VI.
EQUATORIAL MOTION
The plane z = 0, pz = 0 is invariant for any value of D. The
motion in it is a 1DoF Hamiltonian system in just (R, pR). It
has
H =
r
1+ p2
R +
Lz
R −D
R2
2
−1
R,
ω = dR∧dpR.
We see that for given value E of H and values of Lz,D, the
allowed region of R is
E + 1
R
2 ≥1+
Lz
R −D
R2
2
,
(35)
with R > 0 and 1
R ≥1−E. As the expression is a polynomial
of degree at most 4 in 1/R, this consists of one or two intervals
or is empty.
In each interval, the motion is periodic between the end-
points, except if an endpoint is at R = 0 or R = ∞. In the latter
case one is obliged to take E ≥1 and it takes infinite time to
reach it. The former case is impossible if D > 0 because D/R2
beats 1/R as R →0. The motion of the original system result-
ing from periodic motion of R in an interval [a,b] with a > 0
resembles a precessing ellipse, that may surround the origin
or not, as can be seen in Figure 3. The precession rate goes to
0 in the non-relativistic limit for D = 0 (Kepler ellipses).
To find the allowed intervals, we treat first the case D = 0.
Then the allowed region is given by
(E2 −1)R2 +2ER+(1−L2
z) ≥0,
(36)
with R ≥0 and
R ≤k =
1
1−E .
(37)
A priori, this is one or two intervals or empty. The separating
cases are when there is a double root (1 −E2)L2
z = 1 or an
endpoint moves through R = 0 or ∞(L2
z = 1 or E2 = 1, re-
spectively). The case R = ∞with E = −1 is ruled out by the
constraint R ≤k, so the case of an endpoint moving through
R = ∞is possible for only E = +1. The result is shown in Fig-
ure 4. We see that in fact there is at most one interval, because
at R = k the function in (36) has the value −L2
z ≤0.
Note that this analysis of equatorial motion in the case D =
0 actually applies to the whole dynamics for D = 0. This is
because the third constant of the motion restricts the motion to
the plane perpendicular to the initial L = r×p. By a rotation
of axes, we can take it to be the equatorial plane.
Turning to D > 0, we already remarked that R = 0 is ex-
cluded. So the separating cases for the number of allowed
intervals are associated with critical points of H or passage of
an endpoint through R = ∞. Just as for D = 0, passage of an
endpoint through ∞requires E = +1. The critical points cor-
respond precisely to the equatorial circular orbits, because the
critical points are the equilibria of the reduced dynamics and
we have restricted to z = 0.
To study the equatorial circular orbits in more detail, it is
tidier to write
u = 1
R.
(38)
(a)
(E,Lz,D) = (0.118,−0.500,1.00×10−4)
(b) (E,Lz,D) = (−0.215,4.38,3.00)
(c) (E,Lz,D) = (0.118,0.500,1.00)
FIG. 3: Equatorial orbits showing a) precession, b) gyration
and c) periodicity.
Then the equations for critical points are
(E +u)2 = 1+(Lzu−Du2)2
(39)
E +u = (Lzu−Du2)(Lz −2Du).
(40)
We can write E + u = γ and Lzu −Du2 = pφ. So the second
equation is
γu = pφ(pφ −Du2),
(41)
Motion in Aubry’s galaxy
6
E
−1
0
+1
Lz
0
1
R
0 a
k
b
(0,a]
R
0
k
b
/0
R
0
a
(0,a]
R
0
/0
R
0
a
(0,a]
R
0
(0,∞)
R
0
c
[c,∞)
R
0
/0
R
0 c
a
[c,a]
FIG. 4: Allowed intervals in R for equatorial motion with
D = 0 in various regions of the (E,Lz)-plane. /0 signifies that
there is no allowed interval. The results are independent of
the sign of Lz so we plot for only Lz ≥0. The curve in the
parameter space is (1−E2)L2
z = 1. The insets show sample
graphs of (E2 −1)R2 +2ER+(1−L2
z) as a function of
R > 0. Note that the constraint R ≤k = 1/(1−E) removes
the apparent unbounded interval for E < −1. Note also that
a →∞as E ↗1, b →∞as E ↗−1, a →0 as Lz ↗1 for
E < 0, and c →0 as Lz ↘1 for E > 0.
which is the same as (6). It has solutions
u =
−γ +σ
q
γ2 +4Dp3
φ
2Dpφ
(42)
with σ ∈{±}.
Then the number of allowed intervals changes by ±1 on
crossing a curve of equatorial circular orbit in Figure 2. By
checking suitable cases, the number of intervals is two in the
region bounded by the cusped curve, one in the region be-
tween the two curves, and zero to the left of the smooth curve.
As promised in Section III, we now explain the shapes of
the curves in Figure 2. They are smooth curves, except at a
triple root of (39), where the curve can be expected to form a
semi-cubic cusp in the parameter space. The condition for a
triple root is
1 = (Lz −2Du)2 −2D(Lzu−Du2),
(43)
which can be written as
1 = u−2(pφ −Du2)2 −2Dpφ.
(44)
Combining with the equation for a double root, we obtain
2Dp3
φ = 1, i.e.
pφ = p∗= (2D)−1/3,
(45)
and γ =
p
1+ p2∗.
Using (42), we obtain that the triple root is at
u = p2
∗
q
3+ p2∗−
q
1+ p2∗
,
(46)
taking σ = + because we require u > 0. Then we obtain that
the position of the cusp is at
E = γ −u = (1+ p2
∗)3/2 −p2
∗
q
3+ p2∗
(47)
and
Lz = Du+
pφ
u =
q
1+3p−2
∗.
(48)
Note that this has Lz > 1. It also has E < 1, because writing
x = p2
∗we have E = (1 + x)3/2 −x(3 + x)1/2, so at x = 0 we
have E = 1, and
dE
dx = 3
2(1+x)1/2 −(3+x)1/2 −1
2x(3+x)−1/2 < 0,
because (2+x)2 = 1+(1+x)(3+x). In principle, one could
check that the generic conditions for a semi-cubic cusp are
satisfied, but we did not do it.
VII.
ANALYSIS OF 2DOF REDUCED DYNAMICS
We see from (28) or (34) that for energy H = E,
p2
z + p2
R = K :=
E + 1
r
2 −1−
Lz
R −DR
r3
2
.
(49)
This has to be at least 0, so the motion is restricted to the
region of (R,z) where K ≥0. Note that the motion is also
restricted to 1
r ≥1−E because the first square root in (28) is
positive and its argument is at least 1. We call the result of
these two restrictions Hill’s region, by analogy with celestial
mechanics.
Figure 5 shows examples of the Hill’s region for a variety
of choices of the parameters (E,Lz,D). We take D > 0 in the
following discussion.
For Lz ≤0, the (DR/r3)2 term in (49) beats the 1/r2 term
as r →0, so non-negativity of p2
R + p2
z implies r = 0 is not
accessible. In contrast, for Lz > 0, the Hill’s region has horns
around a “centre curve” Lz/R = DR/r3 that extends all the
way to r = 0. The equation for the centre curve can be written
as L2
z(R2 + z2)3 = D2R4. These observations agree with the
figures.
We see from the figures that the Hill’s region may have one
or two components. Also, but not shown, for some choices
of parameters it is empty. As the parameters are varied, the
number of components can change only when the boundary
contains a critical point of K (for fixed E,Lz,D) with K = 0,
or when a component crosses R = 0 or R = ∞. We concentrate
on the case of a critical point.
For a local maximum or minimum of K with K = 0, the
component is a single point and a transition can occur as pa-
rameters are varied, in which a component is created or de-
stroyed. For a saddle of K with K = 0, two components (lo-
cally) can merge or one component separate into two (locally)
as parameters are varied.
We compute the equations for the critical points of K by
differentiating K with respect to z and R, respectively:
z
(E + 1
r )r−3 +( Lz
R −DR
r3 ) 3DR
r5
= 0
(50)
E + 1
r
R
r3 +
Lz
R −DR
r3
3DR2
r5
−Lz
R2 −D
r3
= 0
(51)
Motion in Aubry’s galaxy
7
(a) (0.950,3.40,3.00)
(b) (0.480,−0.600,0.200)
(c) (0.330,2.90,4.70)
(d) (0.800,3.00,2.600)
(e) (0.700,−0.100,3.00)
(f) (2.500,4.10,1.00)
(g) (2.50,3.60,1.00)
(h) (10.0,−0.200,0.700)
FIG. 5: Hill’s regions for various choices of (E,Lz,D). Negative Lz components are on the right most column.
The first equation gives z = 0 or its second factor zero. We al-
ready treated the equatorial case z = 0 in the previous section.
For the non-equatorial critical points, writing pφ = Lz
R −DR
r3
and γ = E + 1
r , the equations are
γ2 = 1+ p2
φ
(52)
γr2 +3pφDR = 0
(53)
γ R
r3 + pφ( 3DR2
r5
−Lz
R2 −D
r3 ) = 0.
(54)
Writing vφ = pφ/γ, it follows from the second that
vφ = −r2
3DR < 0,
(55)
and inserting pφ = γvφ into the third, we obtain
Lzr3 = −DR2,
(56)
so in particular Lz < 0. Then
pφ = Lz
R −DR
r3 = −2DR
r3 .
(57)
Recalling (1), γ can be written in two ways:
r
1+4D2R2
r6
= γ = 1/
r
1−9 r4
D2R2 .
(58)
It follows that
4D2R2
r6
−
r4
9D2R2 −4
9r2 = 0.
(59)
Motion in Aubry’s galaxy
8
So we can parametrise the set of non-equatorial critical points
by vφ = v:
r2 =
4
9(v−4 −v−2)
(60)
R = −4
27D(v−5 −v−3).
(61)
It is necessary, however, to require z2 = r2 −R2 ≥0. This
restricts v to the set where
4
9(v−4 −v−2) ≥
16
36D2 (v−5 −v−3)2,
(62)
i.e. v6 ≥
4
81D2 (1−v2), equivalently (remembering that v < 0),
γv3 < −2
9D. To complete the parametric representation,
E = γ −1
r = 1−3
2v2
√
1−v2 ,
(63)
Lz = −DR2
r3
= −
2
27Dv4
p
1−v2.
(64)
In Figure 6 this curve of non-equatorial circular orbits is
added to those for the equatorial circular orbits. We notice
-1.0
-0.5
0.5
1.0
1.5
2.0
e
-1
1
2
3
4
L
FIG. 6: The curves of all critical points of H as functions of
(E,Lz) for D = 1.
that the new curve seems to touch one of the equatorial ones.
This is indeed true. When Dγv3 = −2
9, there is a bifurcation
of critical points in which the equatorial saddle (green and
beginning of orange) absorbs the pair of non-equatorial local
minima (red) and becomes an equatorial minimum (rest of or-
ange). From (13,14), this happens at E = γ +
1
3Dv,Lz = 1
3v.
The conclusion is that the Hill’s region has one of the forms
given by the samples in Figure 5 or is empty. For Lz > 0 there
is a component with horns going into r = 0. For Lz ≤0 the
Hill’s region is bounded away from R = 0 and may be empty.
For E > 1 there is an unbounded component. The number and
shapes of components are given by the parameter-domains in
Figure 7.
In this figure, we also plot the projections of some sample
orbits.
VIII.
POINCARÉ PLOTS
For components of the Hill’s region intersecting the equa-
torial plane, we can take z = 0, pz > 0, as a Poincaré section Σ
(cf. ref. 4). Σ can be considered to be the set of (R, pR) where
E + 1
R >
q
1+( Lz
R −D
R2 )2 + p2
R,
with z = 0 and
pz =
q
(E + 1
R)2 −1−( Lz
R −D
R2 )2 −p2
R.
The boundary of Σ (where z = pz = 0) is an orbit of the equa-
torial reduced system (section VI).
Considering the first return map to Σ simplifies representing
and understanding the dynamics, at least of those orbits that
intersect Σ. The section is transverse to the vector field, be-
cause ˙z = pz > 0. Let Σ+ be the subset for which there is a first
return in positive time. Define T(R, pR) to be the time to the
first return of the orbit started at (R, pR) on Σ+ and P(R, pR)
to be the point of the first return. P is called the Poincaré map.
By transversality of Σ and smoothness of the flow, the set Σ+
is an open subset of Σ and P is smooth on it.
Figure 8 shows some Poincaré plots (orbit segments for the
Poincaré map) for selected values of (E,Lz,D) and initial con-
ditions. We see a range of behaviours, on which we shall com-
ment after discussion of the question of completeness of the
section.
A transverse section is said to be complete if the forward
and backward trajectories of every point in the given Hill com-
ponent crosses it (except for the bounding periodic orbit). A
sufficient condition is that an “angle” variable evolves mono-
tonically and without bound. In particular, define angle α by
tanα = pz/z,
(65)
then use (31, 33) to obtain
˙α =
˙pz
z −pz˙z
z2
cos2 α = −
3DRpφ
r5γ
+ 1
r3
cos2 α −1
γ sin2 α,
which is negative if
pφ > −γr2
3DR.
(66)
We will show that (66) holds on the whole of the Hill compo-
nent if Lz is sufficiently negative (despite this looking counter-
intuitive to the relation (27) and the requirement (66)). For a
bounded Hill component, if (66) holds everywhere then ˙α is
less than a negative constant on it. Then from any point, both
the forward and backward orbits cross α = π
2 mod 2π at some
times, which is our section.
To obtain an explicit region of parameter space where there
is a non-empty Hill component and (66) holds on the whole
of it, we will restrict attention to Lz < 0. The Hill’s region can
then be written as
1+ |Lz|2
R2 + 2|Lz|D
r3
+ D2R2
r6
≤E2 + 2E
r + 1
r2 .
(67)
Motion in Aubry’s galaxy
9
FIG. 7: Shapes of Hill’s region in the various domains in the space of (E,Lz) for D = 3 (/0 means the Hill’s region is empty),
including the projections of some typical orbits at the indicated parameter values.
Using 1/R ≥1/r = u, writing A = |Lz|2 −1, and dropping
the third and fourth terms from the lefthand side (which are
positive and relatively negligible for large distance from the
origin),
Au2 −2Eu+(1−E2) ≤0
(68)
in the Hill’s region. We restrict further to Lz < −1, so A > 0.
It follows that for E ≥
q
A
1+A,
u ≤u+ = E +
p
E2 −A(1−E2)
A
(69)
(and if E <
q
A
1+A then the Hill’s region is empty). Now
|pφ|
γ
3DR
r2
= 3D|Lz|+DR2/r3
r2(E +1/r) ≤3D|Lz|+D/r
r(Er +1) .
Motion in Aubry’s galaxy
10
(a) (E,Lz,D) = (0.800,−0.600,0.300)
(b) (E,Lz,D) = (0.800,−0.200,1.000)
(c) (E,Lz,D) = (0.558,−0.283,0.700)
(d) Transparency plot: (E,Lz,D) = (0.558,−0.283,0.700)
FIG. 8: Poincaré plots for selected values of (E,Lz,D). The
boundary of the Poincaré section (an equatorial periodic orbit
in the reduced system) is shown in blue. We see that the
behaviour varies from a) near-integrable, b) mixed, to c)
near-perfect chaos. In d) we replot the orbit from c) using an
opacity of 0.1, showing that not only does the orbit appear to
be nearly dense, it appears to have uniform density.
The latter expression is decreasing in r, so we can insert r ≥
1/u+ to obtain
|pφ|
γ
3DR
r2
≤3D(|Lz|+Du+)u2
+
(E +u+)
.
This is less than 1 (and so condition (66) holds) if
3D(|Lz|+Du+)u2
+ < E +u+.
We plot this (and the associated curve for empty Hill’s region)
in Figure 9 for D = 1 as the region between the red and blue
curves. We see that the region in which we have established
completeness of the section includes all cases of non-empty
bounded Hill region for Lz sufficiently negative, as claimed.
This can be proved for any value of D > 0 but we don’t give
the estimates here.
0.6
0.8
1.0
1.2
1.4
-5
-4
-3
-2
-1
E
L_z
FIG. 9: The region in (E,Lz)-space for which we prove that
the section z = 0, pz > 0 is complete is the cusped one
between the red and blue curves. The region below the blue
curve is where our bounds show that the Hill’s region is
empty, but from Figure 2 we know that the Hill’s region is
non-empty to the right of a curve asymptotic to Lz = −∞as
E increases to 1.
We expect the true domain for completeness of the section
to go up to the bifurcation point in Figure 6, because near
the part of the curve where the critical point is elliptic, the
Hill’s region is small and the linearisation about the bounding
periodic orbit has non-zero rotation. Perhaps our estimates
could be improved to show this (notably by throwing away
less of the higher order terms in the step from (67) to (68); but
the estimate R ≤r is close to optimal for small Hill’s region).
On the other hand, for Lz not sufficiently negative, the
bounding periodic orbit has linearised rotation rate zero
around it and is hyperbolic (because associated with a hyper-
bolic equatorial circular orbit), so the forward orbits of points
on its local stable manifold do not intersect the section (and
the same for backwards orbits of points on its local unstable
manifold). Indeed, it can be expected that there is a set of pos-
itive volume whose orbits do not intersect the section, so the
Poincaré map would not tell us anything about them.
The boundary in (E,Lz) space between parameter values
for which the equatorial periodic orbit has zero or non-zero
Motion in Aubry’s galaxy
11
linearised rotation rate can in principle be determined by solv-
ing a Hill’s equation for the Floquet multipliers (the linearised
equation in (z, pz) about the periodic orbit R(t); compare the
curve for stabilisation of an inverted pendulum by oscillation
of its support), but we have not attempted that.
If the component C of the energy-surface corresponding to
the chosen Hill component has finite volume VC then Σ+ is
a subset of full measure in Σ, by volume-preservation of the
flow. So Σ\Σ+ is negligible in a measure-theoretic sense. For
components of Hill’s region that are bounded away from r = 0
and ∞, the volume is finite. This is because energy-surface
volume µ is defined on each energy-surface by
1
2ω ∧ω = µ ∧dH,
so VC =
R
C µ. The easiest way to bound it for our problem is
to express it as
VC =
Z
C 2π
√
K dRdz
over the Hill component, because for each point of the Hill
component there is a circle of possible momenta, with magni-
tude
√
K, where
K =
E + 1
r
2 −1−
Lz
R −DR
r3
2
.
(70)
Under the stated conditions, this integral is finite. Thus the
return map is defined on a set of full measure in Σ, which is
consistent with the figures.
For components with horns going to r = 0, the volume is
also finite but to prove it requires a little more work. The
centre curve of the horn is Lzr3 = DR2. We can write this as
R = Rc(z) with Rc(z)2 ∼Lz
D z3. Near the centre curve,
Lz
R −DR
r3 ∼−2D
z3 (R−Rc(z)).
Also, E + 1
r ∼1
z . So the horn is given asymptotically by R =
Rc(z) ± ∆R(z) with ∆R ∼z2
D . Then the integral from the part
of the horn with z ≤ζ is asymptotically
Z ζ
0
z
Ddz = ζ 2
2D < ∞.
The contribution from the rest of the Hill component is
bounded.
For D = 0 we find (not shown) that the Poincaré plots look
integrable. Indeed, it is easy to see an additional constant of
the motion, namely the modulus-squared of the angular mo-
mentum vector,
L2 = r2p2 −(r ·p)2.
This can be evaluated to
L2 = (zpR −Rpz)2 +(1+ z2
R2 )L2
z.
Then a little calculation confirms that d
dt L2 = 0. Some level
sets of L2 are plotted in Figure 10.
FIG. 10: Level sets of L2 in the Poincaré section z = 0.
When D ̸= 0 but small, Poincaré plots like Figure 8(a) sug-
gest that the motion is close to integrable. This is plausible be-
cause for D = 0 the motion is integrable with additional con-
stant L2 and the frequency of precession varies non-trivially,
and for DR ≪r3 the perturbation counts as small, so KAM
theory applies. To complete the argument is somewhat deli-
cate, however, because the precession-frequency goes to zero
in the non-relativistic limit. Another regime in which near-
integrable behaviour can be expected is when the Hill’s region
is a small disk crossing z = 0; in this case the motion is close
to separable into R and z in the formulation (34). We leave
analysing these situations for future work.
For larger D, the Poincaré plots suggest strongly that the
motion is not integrable.
That is in interesting contrast to
geodesic motion in the Kerr metric, to which Aubry’s sys-
tem can be considered as an approximation. As already men-
tioned, Carter found a third constant of the motion (or fourth
in space-time) for the Kerr metric, rendering its geodesic flow
integrable7. The corresponding continuous symmetry is the
Hamiltonian flow of the Carter constant (for the proper-time
dynamics in space-time).
The Carter constant is quadratic
in the 4-velocity (like the Hamiltonian for the dynamics). A
Carter constant is in fact a general feature of Einstein vacuum
space-times with a rotation symmetry30.
Not only does the motion for D > 0 appear to be non-
integrable, for some values of parameters the Poincaré section
appears to be filled densely by a single orbit, e.g. Figure 8(c).
Even more, the chosen orbit appears to fill out the section with
uniform density. This is remarkable. To confirm this, we re-
plotted in Figure 8(d) with opacity 0.1, so that any variations
in density after pixelisation could become apparent, and none
are.
Sometimes, purely topological aspects of a system can
force chaotic dynamics20, but not necessarily on a set of posi-
tive measure, let alone full measure and with unifrom density.
Note that special examples, called pseudo-Anosov, can be
constructed in which this topological chaos has full measure
(and typical orbits have uniform density); there is a chance
something close to this could occur in the case that the bound-
ing orbit of the section has a non-zero half-integer rotation-
Motion in Aubry’s galaxy
12
rate (making “1-prongs” on the boundary), but it likely to
require special conditions. The case of zero rotation-rate is
more relevant, however. It can not be pseudo-Anosov, but we
give a sketch of a suggested explanation for the observed near-
perfect chaos in the next section.
Note that some orbits in the component of the Hill’s region
might not intersect the section, so the Poincaré map might not
give the complete dynamics, even measure-theoretically. The
accessible volume from the Poincaré section is given by
Vacc =
Z
Σ T(R, pR) dRdpR.
This simple formula is a consequence of preservation of
energy-surface volume µ, as defined above. The proof is that
the flux of energy-surface volume iXµ = ω, so the volume
swept out by unit area on the section in time T is Tω.
If Vacc is less than the volume VC of the component C then
a positive volume of orbits is missed by the Poincaré map.
There are some parameter regimes where one could probably
prove this. In particular, when the Hill’s region has a small
neck on z = 0 and large bulges for z ̸= 0 it is plausible that
Vacc →0 as the neck width goes to 0 but VC remains bounded
away from 0. The only way to avoid this would be if the return
time function T diverges sufficiently fast. This scenario will
be discussed in the next section.
One case in whichVacc <VC is if the boundary of a Poincaré
section is a hyperbolic periodic orbit and its stable and unsta-
ble manifolds join up. Then the invariant manifolds separate
the space into three components, one of which is the acces-
sible volume from the section and the other two are invariant
volumes in z > 0 and z < 0 respectively. This will be sketched
in Figure 14.
Two questions arise for future work: can one modify the
construction of the Poincaré map to capture almost all of the
dynamics? And can one make a Poincaré section for compo-
nents of the Hill’s region that do not intersect the equatorial
plane?
On the first question, a general answer is suggested in
ref. 11. In our context it could probably be better achieved by
adding transverse sections spanned by two index 1 “brake” pe-
riodic orbits to be defined in the next paragraph and described
in the next section.
On the second question, any Hill’s component homeomor-
phic to a disk contains a minimising brake periodic orbit29
(use the formulation (34) to apply Seifert’s result directly).
A brake orbit of a reversible Hamiltonian system is an or-
bit whose projection Π to (R,z) connects two points of the
boundary. By reversibility it then reverses along the same
path (but with the opposite velocity). In the z-symmetric Hill
components, there is an obvious brake orbit along z = 0, but
Seifert’s construction generalises this to non-symmetric con-
texts. Then one can use the brake orbit to make a Poincaré
section. Choose an orientation for Π. Let Σ be the set of
(R,z, pR, pz) such that (R,z) ∈Π,
p2
R + p2
z =
E + 1
r
2 −1−
Lz
R −DR
r3
2
,
and such that the oriented angle from the tangent to Π to
(pR, pZ) is in (0,π). Then Σ is transverse to the vector field
0.5
1.0
1.5
2.0
-2
-1
0
1
2
R
z
FIG. 11: Example of Hill’s component containing an
equatorial saddle and non-equatorial minima;
D = 1,Lz = −0.348,E = 0.56.
and the same considerations as for equatorial sections apply.
To use this would require constructing a brake orbit numeri-
cally (which is not hard; we do it for the first question in the
next section), but would still leave open the risk that not every
orbit crosses it.
IX.
FROM INTEGRABLE TO NEAR-PERFECT CHAOS
Here we sketch a suggested explanation for how near-
perfect chaos can arise in a Hill’s component that is a z-
symmetric topological disk containing a saddle of J on z = 0
and a pair of minima of J with z ̸= 0, such as in Figure 11.
With slight modification the same argument is expected to ap-
ply to the case of z-symmetric horned Hill’s regions crossing
z = 0.
The scenario has large potential applicability outside this
problem too, in particular the z-symmetry is not required. The
basic context is a double-well system of 2 DoF. Aspects of this
were understood in the classical theory of chemical reactions
years ago9,27, under the name "reactive islands" (for a recent
article along this line, giving more references, see ref. 23),
but we feel that the geometry of the energy-level has not
been properly described. Another context in which it arises
is celestial mechanics, for example transition of test particles
over the Lagrange points in the circular restricted three-body
problem10,21. It occurs in models of ship capsize too24 and
the double pendulum14. One notable future application would
be to magnetic confinement of charged particles in a finite se-
quence of magnetic bottles, following ref. 6. This is the basis
for “quasi-isodynamic” stellarators, such as Wendelstein-7X.
The energy-level is a 3-sphere. The subset with z = 0 cuts
it into two 3-balls, corresponding to z > 0 and z < 0. The sub-
set with z = 0 is a 2-sphere. It contains a hyperbolic periodic
orbit h on which pz = 0. The hyperbolic periodic orbit cuts
the 2-sphere into two hemispheres, corresponding to pz > 0
and pz < 0. The one with pz > 0 is what we chose as Poincaré
section. We call the other one the “dual section”. The local
forwards and backwards contracting submanifolds of h look
Motion in Aubry’s galaxy
13
h
z = 0, pz < 0
z = 0, pz > 0
W +
W −
FIG. 12: The local contracting submanifolds of the
hyperbolic periodic orbit h relative to the section z = 0. z > 0
inside the indicated sphere and z < 0 outside it.
as in Figure 12 relative to the 2-sphere (compare ref. 19). It
is possible to draw the 2-sphere as the horizontal plane, with
one hemisphere being the disk inside h and the other being its
complement plus the point at infinity, but we find it less con-
venient. Nevertheless, when it comes to considering Poincaré
maps, we might want to flatten the hemispheres in that way.
Focus on the dynamics inside the 3-ball for z > 0 (by z-
symmetry, the 3-ball for z < 0 is equivalent). There is a pe-
riodic orbit e of Poincaré index +1 in z > 0, which bounces
between two points on the boundary of the Hill’s component
(a brake orbit, which we believe can be obtained by a similar
argument to Seifert; there is a corresponding one for z < 0).
We call it e because we think of the case where it is elliptic,
though in some parameter regimes it is inversion hyperbolic
(in particular, this turns out to be the case in Figure 13). It
can be found numerically by shooting from the boundary of
the Hill’s component to a local section pR = 0 near the oppo-
site side of the boundary and varying the initial point to make
pz = 0 on the section. The local backwards contracting sub-
manifold of h is a cylinder whose projection crosses e, see
Figure 13 (cf. pictures in ref. 21, 24, and 27, and in ref. 6 for
an analogous case in a magnetic mirror machine, though none
of these include the index 1 orbit).
In the energy level, h can be spanned by a disk that contains
e and is transverse to the flow except on h and e. To construct
such a disk numerically, one can choose a smooth function
F(R,z) (of the form zg(R,z)) that is 0 on h, 1 on e and has
nowhere-zero derivative. Then a suitable section is the subset
of the energy level where ˙F = 0. For transversality off h and e
we ask for ¨F ̸= 0 there. We draw this disk horizontally in the
next energy-level pictures. The part inside e has ¨F < 0; the
part between h and e has ¨F > 0.
If the dynamics has a rotation symmetry (not necessarily
from isometry of configuration space; it can be from separa-
bility of vertical and horizontal motion or the adiabatic invari-
ant to be described in sec. X) that leaves h and e invariant then
the system is integrable and the phase portrait in z > 0 looks
FIG. 13: The Hill’s region for D = 1,Lz = −0.348,E = 0.56,
showing the hyperbolic (on z = 0) and index 1 (in the top
half) brake periodic orbits and some orbits on one side of the
unstable manifold of h up to their first crossing with e.
h
e
W +
W −
z = 0, pz < 0
z = 0, pz > 0
FIG. 14: The ball z ≥0 in the integrable case.
like Figure 14, where the inner branches of W± coincide. The
map from z = 0, pz > 0 to z = 0, pz < 0 preserves the circles
of rotation, the map on each circle being a rotation depend-
ing on the rotational part of the dynamics and the time it takes
(which goes to infinity at the boundary h). The return map to
z = 0, pz > 0 is the composition of this map with the corre-
sponding one for z < 0. By z-symmetry, the second map is
equivalent to the first, so it is enough to consider just the first.
Motion in Aubry’s galaxy
14
Note that there is a significant volume that is not accessible
from the Poincaré section, namely that enclosed by the sep-
aratrix, including e. The dynamics in it is a “vortex” around
e.
For small deviation from integrable, the picture deforms to
that of Figure 15 (compare ref. 6). There are two (or more)
h
W +
W −
e
S
z = 0, pz > 0
z = 0, pz < 0
FIG. 15: The ball z ≥0 in a near-integrable case.
lobes in the horizontal plane, through which orbits make tran-
sitions between “free” and “trapped” motion. By “trapped”
we mean that it circulates around e and can not reach the
Poincaré section again until it makes a transition to “free”.
The dynamics in this “vortex” is in general more complicated
than in the integrable case. The accessible region is bounded
by a hierarchy of invariant tori and hyperbolic invariant sets
around e.
The map from Poincaré to dual section can be considered
to be the composition of three maps: from Poincaré section to
the disk IN in a section S spanning e (part of the previously
mentioned section spanning h and containing e), from IN to
OUT by iterating the return map f to S, and from OUT to the
dual section.
We start by analysing the map f. It sends a foliation of the
annulus between e and OUT by closed curves around OUT to
a foliation of the annulus between e and IN by closed curves
around IN, with a rotation rate that goes to infinity logarithmi-
cally as the boundary of OUT is approached, because of the
time spent near h.
Thus the image under f of the part of IN that doesn’t go
straight out is infinitely wrapped around the boundary of IN,
as in Figure 16. The intersection of f(IN) with OUT is an infi-
nite sequence of strips in OUT (there could also be a “nose”).
They come from an infinite sequence of similar strips in IN,
but f interchanges the short and the long sides, so it is highly
hyperbolic. Successive images of the remainder of IN get
wrapped around IN, producing further infinite sequences of
strips in OUT (and possible further noses). All but a set of
measure zero of OUT is covered as iteration goes to infinity.
FIG. 16: Sketch of the image of the IN lobe under f.
The shapes of the map from the Poincaré section to the en-
try disk, and that from the exit disk to the dual section, are
not hard to analyse. Focussing on the former, it consists prin-
cipally in a rotation that goes to infinity at the boundary, be-
cause the time taken from the section to the entry disk goes
to infinity like a logarithm of the (reciprocal) distance to the
boundary and during that time, the orbit rotates roughly the
same as h does. This implies that the preimage of a chord in
the entry disk (such as the boundaries of the previously de-
rived strips) is an infinite “yin-yang” curve in the Poincaré
section. Similarly, the image of a chord in the exit disk is
an infinite yin-yang curve in the dual section, as shown in
Figure 17. A sequence of non-intersecting chords produces
a nested sequence of infinite yin-yang curves.
FIG. 17: Sketch of the map from a chord in the exit disk to a
ying-yang curve in the dual section.
Thus the map from Poincaré section to dual section takes
one nested set of infinite yin-yang curves to another one. But
they approach the boundary in opposite directions. More im-
portantly, f interchanges short and long sides.
Figure 18
shows a bit of it (only the first iterate of f has been taken
into account and only the first wrapping around).
For comparison, Figure 19 shows the Poincaré plot with
points coloured by the return time. The yin-yang curves are
clearly visible.
For larger deviation from integrability, the contracting sub-
manifolds of h might miss each other on their first tran-
sit through the horizontal plane, as in Figure 20 (compare
ref. 6 again). Then every orbit from the Poincaré section gets
trapped. It goes round the “vortex” for some time until with
probability 1 it eventually exits and reaches the dual section.
Motion in Aubry’s galaxy
15
FIG. 18: Sketch of part of the transit map from Poincaré
section to dual section.
The picture is similar to the previous case, except there is no
IN/OUT region. By conservation of volume and finiteness of
the trapped volume, the entry disk is partitioned, up to a set
of zero area, into pieces corresponding to different numbers
of revolutions in the vortex before reaching the exit disk in a
similar set of pieces. This could be quite complicated because
of the dynamics in the vortex, but most of it consists of slicing
the disk by an infinite sequence of disjoint chords. Then the
same discussion as above applies.
The result is probably best plotted and analysed in a coor-
dinate system with the logarithm of reciprocal distance to the
boundary as radius. It would be good to flesh out this sketch
analysis more thoroughly. The closest we have seen in the
literature is ref. 16. One route could be to study the dynam-
ics close to the curve of circular equatorial orbits in parameter
space. Then the “neck” is small and it might suffice to follow
the (1D) stable and unstable manifolds of the critical point.
We emphasise that this near-perfect chaos is only on the
section spanning the hyperbolic orbit. It is expected to coexist
with near-integrable behaviour on the sections spanning the
index 1 orbits when elliptic.
One might ask how reliable are long-time numerics for ev-
idence of near-perfect chaos. The theory of shadowing allows
one in principle to determine whether there is a true trajectory
within a small distance of a numerical one; see Ref. 12 and
the many preceding references it cites. We did not implement
this, but based on the accumulated experience of such stud-
ies, we are confident that our numerics do represent genuine
behaviour.
FIG. 19: Poincaré plots for (upper) D = 0.7, Lz = −0.283,
E = 0.558, (lower) D = 1, Lz = −0.348, E = 0.56, with
points coloured according to return time T.
z = 0, pz > 0
z = 0, pz < 0
h
W +
W −
e
S
FIG. 20: The ball z ≥0 in a far from integrable case.
X.
ADIABATIC INVARIANT
We have shown already in Figure 1(c) that the motion is
often helical. If the “magnetic” B and “electrostatic” E fields
of Section I were constant and E⊥< B, then all the motion
would be helical. To see this, apply a Lorentz transformation
Motion in Aubry’s galaxy
16
into a frame moving with velocity V = E × B/B2. Then
E⊥is eliminated, so dp⊥
dτ = p⊥× B and
dp∥
dτ = γE∥(using
the transformed E and B). Thus, p⊥rotates at rate B (with
respect to proper time) around B, and p∥changes at rate E∥
with respect to coordinate time.
Note that if E⊥> B instead, then one can eliminate the
component of B perpendicular to E, but we don’t pursue that
case.
In regions where the “magnetic” field is strong we can ex-
pect an adiabatic invariant, directly analogous to the magnetic
moment for charged particles in strong magnetic fields. The
idea is that if the field seen by a gyrating particle varies by
at most a small relative amount ε during half a gyroperiod
then the action integral µ = 1
2π
R p·dr for one gyro-revolution
varies by at most order ε for a number of gyrorevolutions of
order e−C/ε for someC > 0 (in the analytic case25). The action
integral for the unperturbed motion evaluates to
µ = p2
⊥
B
(71)
(in plasma physics it is common to divide this by 2m, where m
is the rest mass of the particle, but the above form is simpler
in our relativistic context for unit rest mass). To compute p2
⊥
it is simplest to first compute the field strength
B = D
r4
p
R2 +4z2
(72)
and parallel momentum
p∥=
D
r5B(3zRpR +(2z2 −R2)pz),
(73)
and use
p2
⊥= p2 −p2
∥.
(74)
The total momentum-squared p2 can be computed as
p2 = p2
R + p2
z +( Lz
R −DR
r3 )2,
(75)
or
p2 = (E + 1
r )2 −1.
(76)
To complete the computation of µ, p2
⊥is then divided by B.
To quantify ε in our problem, the gyroradius ρ = p⊥/B =
p
µ/B, the distance travelled by the E × B drift in half a
gyroperiod is γ E⊥
B
π
B, and the parallel distance travelled during
half a gyroperiod is p∥π/B, so
ε = max
2
L⊥
r
µ
B , πγE⊥
B2L⊥
, π
L∥
p∥
B
,
where L⊥and L∥are lengthscales for relative variation of B
in the two directions. An order of magnitude for these is r/3,
and we can take B ∼D/r3 and |E| ∼r−2. So the conditions
for adiabatic invariance are that
µ ≪D
r , γr3 ≪D2, p∥≪D
r2 .
Note that the first and last imply that p2
⊥, p2
∥≪D2
r4 , so we ob-
tain γ −1 ≪D
r2 . Thus the second condition is really just that
r3 ≪D2.
Thus, sufficient conditions for the adiabatic invariance are
r ≪D2/3, µ ≪D1/3, p∥≪D−1/3.
A consequence of adiabatic invariance of µ is that the sys-
tem is close to integrable in the regions where it applies. As-
suming the adiabatic invariance, the motion can be reduced
to a 2DoF system for the “guiding centre” of the helix, with
state-space coordinates being the three components of the
guiding centre and its velocity component parallel to the B-
field17. Rotation symmetry about the z-axis then implies con-
servation of DR2/r3 for the guiding centres (because Bφ = 0),
which we will write as r3 = CR2 with C constant. This is just
the equation for staying on a fieldline. It makes sense because
the standard curvature and grad-B drifts across the field are in
the φ direction, which is ignored in the reduction. The dy-
namics of the guiding centre along the fieldline are given by
the reduced Hamiltonian
H =
q
1+ p2
∥+ µB−1
r ,
using arclength s as coordinate. This gives
p2
∥= (E + 1
r )2 −1−µB.
The field strength B is given by
B2 = D2
r8 (4r2 −3R2).
Parametrise the upper and lower halves of the fieldline by r ∈
(0,C], C being the value on the equatorial plane, then R2 = λr3
with λ = 1/C. So
p2
∥= (E + 1
r )2 −1−µD
r3
p
4−3λr.
Arclength s is related to R by
ds
dr
2
= 1−3
4λr
1−λr ,
which is singular at the extreme (r = C) but not a problem.
We see that for µ > 0, the motion consists of bouncing along
fieldlines between reflection points where
(E + 1
r )2 = 1+ µD
r3
p
4−3λr.
So if the adiabatic invariant is well conserved then the re-
turn map to a Poincaré section z = 0 for given E and Lz con-
sists of just a rotation by some gyrophase (depending on the
gyroradius) about a fixed point, hence ellipses in the (R, pR)-
plane. Figure 21 shows an example. In reality, there is a
more accurate adiabatic invariant than µ, which deforms the
ellipses into a family of closed curves that includes the bound-
ary, though the adiabatic invariant is probably still not exactly
conserved.
In practice, what we found more often (particularly when
Lz > 0, for which the Hill components have horns going to the
Motion in Aubry’s galaxy
17
FIG. 21: (upper) A trajectory for E = 0.6,Lz = 30,D = 50,
and (lower) its Poincaré plot.
origin) is that the adiabatic invariant is well conserved while
the particle is in the horns but not in the passage at relatively
large radius between the horns (it moves too far along the field
in one gyroperiod). An example is shown in Figure 22.
Such pictures suggest an alternative approach to under-
standing the near-perfect chaos. Chaos in a Hamiltonian sys-
tem can result from possession of an adiabatic invariant in
parts of the phase space, but with pseudo-random jumps in
the adiabatic invariant in between visits to these parts. The
failure of the adiabatic invariant in most examples studied so
far is due to separatrix crossing (where the normally rapid fre-
quency goes to zero)26. In our system we do not see separa-
trix crossing, but it is clear from pictures like Figure 22 that
the motion sometimes consists of episodes in the cusps of the
Hill’s region, where the adiabatic invariant is well conserved,
separated by passage across z = 0 via relatively large radius,
during which some change to the adiabatic invariant occurs.
Perhaps one can make a probabilistic analysis of the jumps
in adiabatic invariant, as has been done for cases of separa-
trix crossing, and deduce stochastic behaviour. This approach
would be limited to systems like ours with an adiabatic invari-
ant, so not as general as the approach of the previous section.
Yet it could be simpler in cases like ours when it applies.
FIG. 22: (upper) A trajectory for E = 0.9,Lz = 12.5,D = 7.3,
exhibiting (lower) fairly good conservation of µ while in the
horns and substantial changes in between.
XI.
ROTATION CURVES OF GALAXIES
Aubry’s investigation was motivated by the desire to ex-
plain the rotation curves of galaxies without invoking dark
matter (for another approach, which prompted Aubry’s inves-
tigations, see ref. 28). Here we see what our investigations
can say about the issue.
The rotation curve of a galaxy is the graph of the inferred
average of vφ as a function of R in the equatorial plane. For
many galaxies it is observed to settle to a constant around
10−3 (in units of c) as R →∞. The observations are based
on Doppler shift of Hα lines from the gas in the disk.
The co-rotating equatorial circular orbits are the simplest
theoretical probe for this question, but they have vφ ∝R−1/2
as R →∞, so they do not fit the observations. This is the pure
Newtonian prediction but we see from (6) that it is true also
for Aubry’s galaxy.
The standard explanation is that in addition to the visible
matter in a galaxy, there is a halo of “dark matter” whose grav-
itational effect makes vφ for the circular orbits go to a positive
limit as R →∞. For one analysis, see ref. 15.
But here is another approach, avoiding the appeal to dark
Motion in Aubry’s galaxy
18
matter. We have seen that for orbits of Aubry’s system,
γ = E + 1
r
(77)
pφ =
Lz
R −DR
r3 .
(78)
To fit with a non-zero limit for tangential velocity as R →∞,
we need pφ to tend to some p∞> 0 (one could consider p∞< 0
instead, but that is unlikely). So
Lz = Rp∞+ DR2
r3
(79)
must grow with R >
p
D/p∞(we take r ∼R in the disk). Fur-
thermore,
E =
q
1+ p2∞+ p2
R + p2z −1
r
(80)
must decrease unless p2
R + p2
z makes a compensating increase.
Different Lz for different R requires different orbits, hence
circular, and there are none with γ bounded away from 1 as
R →∞. So instead perhaps there is exchange of Lz and of
E between particles, leading to a distribution that is concen-
trated around Lz = Rp∞+ D
R and E = γ∞−1
R? This distribu-
tion would necessarily have a nett outward flux of particles for
R >
2
p2∞because then E > 1 and hence ¨R > 0, but that is not
necessarily a problem. Why shouldn’t a galaxy be losing gas?
It might also be that matter goes in or out along the “horns”
of the Hill’s regions with Lz > 0. Rourke28 proposed outflow
along the rotation axis with gradual inflow in the equatorial
disk, but perhaps it is the other way round.
To develop this idea requires some kinetic theory, so we
leave it for future work.
Another ingredient to consider is that there could be transfer
between orbital angular momentum (our Lz) and spin (angular
momentum of a particle about its centre of mass), which we
have neglected.
Perhaps neither of these effects depends significantly on D,
so Aubry’s model might be more sophisticated than necessary
to explain the rotation curves of galaxies.
XII.
MORE COMMENTS
In general relativity, it has long been recognised that ro-
tating matter has “frame-dragging” effects22.
The gravito-
magnetic force used here is a simple manifestation of frame-
dragging. To our minds, it demystifies frame-dragging be-
cause it shows frame-dragging to correspond to the familiar
effects of a magnetic field.
Rourke’s proposal28 to explain rotation curves of galaxies
(and more) involves modifying the standard Lense-Thirring
frame-dragging effect to one that decays like only 1/r with
distance from a rotating mass. He justifies it by Mach’s prin-
ciple, but it is not clear to us.
One direction to explore is the possible connection of the
magnetic gravity term with Thomas precession3, which is usu-
ally considered at the atomic scale but applies more broadly.
For non-relativistic motion in Aubry’s galaxy, one could
study the non-relativistic version of (28). Then c becomes
irrelevant and one can scale D to 1. A question that Aubry has
posed is what is the set of orbits that go to neither infinity nor
zero. This could predict the shape of an axisymmetric galaxy.
It is an interesting direction for future research.
Another question Aubry posed is what are the effects of the
gravitomagnetic field on light rays? In our model, however,
the effect of putting rest mass to zero is to remove all effects
of gravity and so to just produce constant velocity trajectories
at the speed of light.
Another direction to explore is to take gravitational force
proportional to relativistic mass instead of rest mass. This
would in particular, produce an effect on light rays. We didn’t
find a Hamiltonian formulation of this case, except when D =
0. That is not necessarily an obstacle to studying it, but means
it would require other methods. Nonetheless, for any D an
“energy” is conserved, namely H = logγ −1
r . Equivalently,
γ exp−1
r is conserved, which supports a point of view taken
by Aubry1, namely that the effect of a static potential can be
replaced by taking the rest mass m of a particle to decrease as
it goes down the potential. The total energy of the particle then
being γm, we obtain the formula m ∝exp−1
r . Note also that
standard volume is preserved if one uses a new time s with
ds = γ dt. Thus, the system is close to Hamiltonian, but we
did not yet identify an appropriate symplectic form, nor even
angular momentum constant. There is a vague possibility that
one should view conservation of γ exp−1
r as a non-holonomic
velocity-constraint, so rotation-symmetry might not produce a
conservation law5.
Finally, it would be natural to consider the effects of break-
ing axisymmetry. Spiral galaxies have clearly done that. Then
Lz conservation would no longer be perfect.
XIII.
CONCLUSION
We have studied the motion of a test mass in the gravit-
omagnetic field proposed by Aubry as a simple model of a
galaxy, taking force proportional to rest mass. The system
can equivalently be viewed as motion of a relativistic charged
particle in the field of a co-located electrostatic monopole and
magnetic dipole. We find a mix of regular and chaotic mo-
tion. The dynamics has various forms, depending on the en-
ergy and angular momentum constants. An interesting regime
of almost perfect chaos is found, whose analysis merits more
investigation. Further directions for analysis of possible con-
sequences for galaxies are proposed.
ACKNOWLEDGMENTS
RSM is most grateful to Serge Aubry for stimulation over
many years. This began with Aubry’s fascinating talk on min-
imisers of Frenkel-Kontorova models at Los Alamos in 1982.
Notable other highlights were his explaining anti-integrable
limits to me one Sunday morning in Minnesota in 1990, and
setting me the challenge of explaining discrete breathers in
1993 during a visit to Saclay. Most recently he communicated
Motion in Aubry’s galaxy
19
to me his ideas on rotation curves of galaxies, which led to the
present paper.
We are grateful to Shibabrat Naik for making professional
versions of some of our hand-drawn figures and to Vassili Gel-
freich for useful suggestions.
This work was supported by a grant from the Simons Foun-
dation (601970, RSM).
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are
openly available in GitHub at https://github.com/megha-
burri/Motion-in-Aubry-s-galaxy.git and archived on Zenodo
at https://doi.org/10.5281/zenodo.17117918.
REFERENCES
1Aubry S, General Relativity revisited: Modified Heaviside Theory for Grav-
itation Application for understanding the Hypothetic Dark Matter Manifes-
tations, preprint (2025)
2Behera H, Barik N, Explanation of Gravity Probe B Experimental Re-
sults using Heaviside-Maxwellian (Vector) Gravity in flat Space-time,
arxiv:2002.12124 (2020)
3Ben-Menahem S, The Thomas precession and velocity-space curvature, J
Math Phys 27 (1986) 1284–6.
4Binney J, Tremaine S, Galactic dynamics, 2nd ed (Princeton U Press, 2008).
5Bloch AM, Krishnaprasad PS, Marsden JE, Murray R, Nonholonomic me-
chanical systems with symmetry, Arch Rat Mech An 136 (1996) 21–99.
6Burby JW, MacKay RS, Naik S, Isodrastic magnetic fields for suppressing
transitions in guiding-centre motion, Nonlinearity 36 (2023) 5884–5954.
7Carter B, Global structure of the Kerr family of gravitational fields, Phys
Rev 174 (1968) 1559–71.
8Clark SJ, Tucker RW, Gauge symmetry and gravito-electromagnetism,
Class Q Grav 17 (2000) 4125–57.
9De Leon N, Berne BJ, Intramolecular rate process: Isomerization dynamics
and the transition to chaos, J Chem Phys 75 (1981) 3495—3510.
10Dellnitz M et al, Transport in dynamical astronomy and multibody prob-
lems, Int J Bif Chaos 15 (2005) 699–727.
11Dullin HR, Wittek A, Complete Poincaré sections and tangent sets, J Phys
A 28 (1995) 7157–80.
12Hayes WB, Jackson KR, Rigorous shadowing of numerical solutions of or-
dinary differential equations by containment, SIAM J Num Anal 41 (2003)
1948–73.
13Jefimenko OD, Gravitation and cogravitation (Electret Scientific, 2006).
14Kaheman K, Bramburger JJ, Kutz JN, Brunton SL, Saddle transport and
chaos in the double pendulum, Nonlin Dyn 111 (2023) 7199–7233.
15Kirillov AA, Turaev D, The universal rotation curve of spiral galaxies, Mon
Not R Astron Soc 371 (2006) L31–L35.
16Koon WS, Lo MW, Marsden JE, Ross SD, Heteroclinic connections be-
tween periodic orbits and resonance transitions in celestial mechanics,
Chaos 10 (2000) 427–469.
17Littlejohn RG, Variational principles of guiding center motion, J Plasma
Phys 29 (1983) 111–125.
18Liu R, Liu S, Zhu F, Chen Q, He Y, Cai C, Orbits of charged particles
trapped in a dipole magnetic field, Chaos 32 (2022) 043104.
19MacKay RS, Flux over a saddle, Phys Lett A 145 (1990) 425–7.
20MacKay RS, Complicated dynamics from simple topological hypotheses,
Phil Trans Roy Soc London A 359 (2001) 1479–96.
21Marsden JE, Ross SD, New methods in celestial mechanics and mission
design, Bull Am Math Soc 43 (2005) 43–73.
22Misner CW, Thorne KS, Wheeler JA, Gravitation (Freeman, 1973).
23Naik S, Krajnak V, Wiggins S, Support vector machines for learning reac-
tive islands, Chaos 31 (2021) 103101.
24Naik S, Ross SD, Geometry of escaping dynamics in nonlinear ship motion,
Commun Nonlinear Sci Numer Simulat 47 (2017) 48–70.
25Neishtadt AI, The separation of motions in systems with rapidly rotating
phase, PMM USSR 48:2 (1984) 133–9.
26Neishtadt AI, Simo C, Treschev D, Vasiliev A, Periodic orbits and stability
islands in chaotic seas created by separatrices in slow-fast systems, Discr
Cont Dyn Sys B 10 (2008) 621–650.
27Ozorio de Almeida AM, De Leon N, Mehta MA, Marston CC, Geometry
and dynamics of stable and unstable cylinders in Hamiltonian systems, Phys
D 46 (1990) 265–285.
28Rourke CP, The geometry of the universe (World Sci Publ Co, 2021).
29Seifert H, Periodische Bewegungen mechanischer Systeme, Math. Z 51
(1949) 197–216.
30Walker M, Penrose R, On Quadratic First Integrals of the Geodesic Equa-
tions for Type {22} Spacetime, Commun Math Phys 18 (1970) 265–274.
|
Motion in Aubry's galaxy Motion in Aubry's galaxy M. Burri1 and R. S. MacKay2 1)Physics department, 4 7AL, U.K. 2)Mathematics Institute & Centre for Complexity Science, 4 7AL, U.K. Corresponding author (*Electronic mail: ) (*Electronic mail: ) (Dated: 23 September 2025) The dynamics of a test particle in the field of a model galaxy proposed by Serge Aubry is studied by a combination of theory and numerical computation. Regimes of near-integrable behaviour and almost perfect chaos are found. A proposed explanation for the latter is sketched. Thoughts are presented on the implications of the analysis for galaxies. Aubry proposed a model for motion in a galaxy consisting of the usual "electrostatic" monopole of a central mass and an additional "magnetic" dipole corresponding to rotating mass near the centre in a gravitomagnetic formulation. We analyse the resulting motion by a combination of theory and numerics. We find a spectrum of behaviour from near-integrable to near-perfect chaos. I. INTRODUCTION Serge Aubry has been creative in many areas of physics and mathematics. His latest contribution1 is to note that Newtonian gravity is not Lorentz-invariant and therefore there should also be a "magnetic" contribution to gravity, whereby moving matter exerts a force of magnetic type on moving matter. As he has commented, this idea was already proposed by Heaviside and has been developed by various people, notably Jefimenko13 (and see ref. 2 for a recent article). Furthermore, it seems to be an established view in Newtonian approximation to general relativity, e.g. ref. 8. Yet Aubry brings incisive views to the story. One particular consequence Aubry proposed is that a simple model for a galaxy could consist of a central mass of which at least part is rotating, thereby creating a gravitational "magnetic" dipole field in addition to the standard "electrostatic" monopole. Aubry was interested in non-relativistic motion in this field, with a view to explaining the rotation curves of galaxies, but given the relativistic origin of the magnetic force it seems natural to extend to relativistic motion. An immediate question is whether the gravitational force on a test particle should be taken proportional to its rest mass or its relativistic mass. Following Jefimenko, we take rest mass, though Aubry has been investigating the case of relativistic mass. For the rotation curves, probably only the nonrelativistic regime is relevant, where both cases agree. In the case of gravitational force proportional to rest mass, the resulting equations of motion for the position r and velocity v of a test particle, or momentum p = γv per unit rest mass, where γ = q 1+ p2/c2 = 1/ q 1-v2/c2, (1) with respect to coordinate-time t, are dp dt = -GM r r3 + GD c2 v × 3z r r5 -ˆz r3 (2) dr dt = v, (3) where M is the central mass, D is the dipole moment (its relativistic angular momentum in the +z direction), G is the gravitational constant and c is the speed of light. We take D ≥0. An alternative description of our system is as the relativistic motion of a charged particle (with e/m = 1) in the fields E = -GM r r3 of an electrostatic monopole and B = GD c2 3z r r5 -ˆz r3 of a magnetic dipole in Minkowski space. The case with only magnetic dipole has been studied by many people, going back to Störmer in connection with charged particle motion in the earth's magnetic field. For a recent treatment, see ref. 18. As we will scale M to 1, the dipole-only case will correspond to the limit D = ∞(with appropriate scaling of other quantities). The problem of motion of charged particles in the earth's magnetic field, however, should really include a gravitational monopole field, even if its effect is small compared to the dipole field, so our treatment is relevant if one takes D large. On the other hand, the earth's magnetic field has significant deviations from axisymmetry, but we don't treat that here. If preferred, one can write the equations with respect to proper time τ for the particle by using dt = γ dτ: dp dτ = -γGM r r3 + GD c2 p× 3z r r5 -ˆz r3 (4) dr dτ = p. (5) Centripetal acceleration in coordinate time for circular motion in a circle of radius R at speed v is γv2/R, so this system has circular equatorial orbits with velocity component vφ in 16 Sep 2025 Motion in Aubry's galaxy 2 the tangential direction (in this paper we use "physical" components for vectors, as opposed to contravariant or covariant components) satisfying γv2 φ R = GM R2 + GDvφ c2R3 . (6) From the definition (1) of γ, squaring (6) transforms it to a quartic in vφ given R. It does not have clean solutions, but by considering the graphs of the two sides as functions of vφ ∈(-c,c), we see that there are precisely two solutions vφ for each radius R, one positive, one negative, corresponding to orbits that are co- and counter-rotating with respect to the angular momentum of the central mass. Alternatively, one can consider it as a quadratic in R for given vφ ̸= 0 and (supposing D > 0) obtain one positive solution for R if vφ > 0 and two positive solutions if vφ ∈(vmin,0), where vmin 0. For geodesic motion in the Kerr metric, which can be considered the general relativistic version of Aubry's field, there is indeed a third constant of the motion, the "Carter constant"7. We find, however, that the answer for Aubry's field is no (at least from numerical simulations), III. EQUATORIAL CIRCULAR ORBITS To illustrate the ideas so far, we compute the energy and "angular momentum" for the equatorial circular orbits. In this section, we write v for vφ. Transforming (6) to scaled variables and solving the resulting quadratic in R, the circular orbits are given by R = 1+σ p 1+4Dγv3 2γv2 , (12) with σ = ± for v 0. The formula 1 R = -1+σ p 1+4Dγv3 2Dv will also be useful. Its energy is E = γ -1 R = γ + 1 2Dv(1-σ p 1+4Dγv3), (13) and its angular momentum constant is Lz = γRv+ D R = σ v p 1+4Dγv3. (14) For a parametric plot, see Figure 2. The cusp can be understood as resulting from a double root of the equation for critical points of H with respect to R for z = 0, pz = 0, pR = 0 and fixed Lz, to be addressed in section VI. Note that if we think of speeds v for which γv3/c3 ≈1 4 and we put back the constants G,M,c, then the argument of the square roots above makes a transition from the first to the second term being dominant as D increases through GM2/c, which is the Kerr limit for the angular momentum. The connection, however, is probably no deeper than dimensional analysis. -2 -1 1 2 3 4 e -4 -2 2 4 6 L FIG. 2: The set of circular orbits for D = 1 projected into the plane of (E,Lz). The blue curve is for vφ = v > 0; it starts from v = 0 at Lz = +∞, E = 1 with R = ∞, and as v →1 it goes to R = 0 with E and Lz going to +∞. The red/green curve is for v 0 and 1 R ≥1-E. As the expression is a polynomial of degree at most 4 in 1/R, this consists of one or two intervals or is empty. In each interval, the motion is periodic between the endpoints, except if an endpoint is at R = 0 or R = ∞. In the latter case one is obliged to take E ≥1 and it takes infinite time to reach it. The former case is impossible if D > 0 because D/R2 beats 1/R as R →0. The motion of the original system resulting from periodic motion of R in an interval [a,b] with a > 0 resembles a precessing ellipse, that may surround the origin or not, as can be seen in Figure 3. The precession rate goes to 0 in the non-relativistic limit for D = 0 (Kepler ellipses). To find the allowed intervals, we treat first the case D = 0. Then the allowed region is given by (E2 -1)R2 +2ER+(1-L2 z) ≥0, (36) with R ≥0 and R ≤k = 1 1-E . (37) A priori, this is one or two intervals or empty. The separating cases are when there is a double root (1 -E2)L2 z = 1 or an endpoint moves through R = 0 or ∞(L2 z = 1 or E2 = 1, respectively). The case R = ∞with E = -1 is ruled out by the constraint R ≤k, so the case of an endpoint moving through R = ∞is possible for only E = +1. The result is shown in Figure 4. We see that in fact there is at most one interval, because at R = k the function in (36) has the value -L2 z ≤0. Note that this analysis of equatorial motion in the case D = 0 actually applies to the whole dynamics for D = 0. This is because the third constant of the motion restricts the motion to the plane perpendicular to the initial L = r×p. By a rotation of axes, we can take it to be the equatorial plane. Turning to D > 0, we already remarked that R = 0 is excluded. So the separating cases for the number of allowed intervals are associated with critical points of H or passage of an endpoint through R = ∞. Just as for D = 0, passage of an endpoint through ∞requires E = +1. The critical points correspond precisely to the equatorial circular orbits, because the critical points are the equilibria of the reduced dynamics and we have restricted to z = 0. To study the equatorial circular orbits in more detail, it is tidier to write u = 1 R. (38) (a) (E,Lz,D) = (0.118,-0.500,1.00×10-4) (b) (E,Lz,D) = (-0.215,4.38,3.00) (c) (E,Lz,D) = (0.118,0.500,1.00) FIG. 3: Equatorial orbits showing a) precession, b) gyration and c) periodicity. Then the equations for critical points are (E +u)2 = 1+(Lzu-Du2)2 (39) E +u = (Lzu-Du2)(Lz -2Du). (40) We can write E + u = γ and Lzu -Du2 = pφ. So the second equation is γu = pφ(pφ -Du2), (41) Motion in Aubry's galaxy 6 E -1 0 +1 Lz 0 1 R 0 a k b (0,a] R 0 k b /0 R 0 a (0,a] R 0 /0 R 0 a (0,a] R 0 (0,∞) R 0 c [c,∞) R 0 /0 R 0 c a [c,a] FIG. 4: Allowed intervals in R for equatorial motion with D = 0 in various regions of the (E,Lz)-plane. /0 signifies that there is no allowed interval. The results are independent of the sign of Lz so we plot for only Lz ≥0. The curve in the parameter space is (1-E2)L2 z = 1. The insets show sample graphs of (E2 -1)R2 +2ER+(1-L2 z) as a function of R > 0. Note that the constraint R ≤k = 1/(1-E) removes the apparent unbounded interval for E 0. which is the same as (6). It has solutions u = -γ +σ q γ2 +4Dp3 φ 2Dpφ (42) with σ ∈{±}. Then the number of allowed intervals changes by ±1 on crossing a curve of equatorial circular orbit in Figure 2. By checking suitable cases, the number of intervals is two in the region bounded by the cusped curve, one in the region between the two curves, and zero to the left of the smooth curve. As promised in Section III, we now explain the shapes of the curves in Figure 2. They are smooth curves, except at a triple root of (39), where the curve can be expected to form a semi-cubic cusp in the parameter space. The condition for a triple root is 1 = (Lz -2Du)2 -2D(Lzu-Du2), (43) which can be written as 1 = u-2(pφ -Du2)2 -2Dpφ. (44) Combining with the equation for a double root, we obtain 2Dp3 φ = 1, i.e. pφ = p∗= (2D)-1/3, (45) and γ = p 1+ p2∗. Using (42), we obtain that the triple root is at u = p2 ∗ q 3+ p2∗- q 1+ p2∗ , (46) taking σ = + because we require u > 0. Then we obtain that the position of the cusp is at E = γ -u = (1+ p2 ∗)3/2 -p2 ∗ q 3+ p2∗ (47) and Lz = Du+ pφ u = q 1+3p-2 ∗. (48) Note that this has Lz > 1. It also has E 0 in the following discussion. For Lz ≤0, the (DR/r3)2 term in (49) beats the 1/r2 term as r →0, so non-negativity of p2 R + p2 z implies r = 0 is not accessible. In contrast, for Lz > 0, the Hill's region has horns around a "centre curve" Lz/R = DR/r3 that extends all the way to r = 0. The equation for the centre curve can be written as L2 z(R2 + z2)3 = D2R4. These observations agree with the figures. We see from the figures that the Hill's region may have one or two components. Also, but not shown, for some choices of parameters it is empty. As the parameters are varied, the number of components can change only when the boundary contains a critical point of K (for fixed E,Lz,D) with K = 0, or when a component crosses R = 0 or R = ∞. We concentrate on the case of a critical point. For a local maximum or minimum of K with K = 0, the component is a single point and a transition can occur as parameters are varied, in which a component is created or destroyed. For a saddle of K with K = 0, two components (locally) can merge or one component separate into two (locally) as parameters are varied. We compute the equations for the critical points of K by differentiating K with respect to z and R, respectively: z (E + 1 r )r-3 +( Lz R -DR r3 ) 3DR r5 = 0 (50) E + 1 r R r3 + Lz R -DR r3 3DR2 r5 -Lz R2 -D r3 = 0 (51) Motion in Aubry's galaxy 7 (a) (0.950,3.40,3.00) (b) (0.480,-0.600,0.200) (c) (0.330,2.90,4.70) (d) (0.800,3.00,2.600) (e) (0.700,-0.100,3.00) (f) (2.500,4.10,1.00) (g) (2.50,3.60,1.00) (h) (10.0,-0.200,0.700) FIG. 5: Hill's regions for various choices of (E,Lz,D). Negative Lz components are on the right most column. The first equation gives z = 0 or its second factor zero. We already treated the equatorial case z = 0 in the previous section. For the non-equatorial critical points, writing pφ = Lz R -DR r3 and γ = E + 1 r , the equations are γ2 = 1+ p2 φ (52) γr2 +3pφDR = 0 (53) γ R r3 + pφ( 3DR2 r5 -Lz R2 -D r3 ) = 0. (54) Writing vφ = pφ/γ, it follows from the second that vφ = -r2 3DR 0 there is a component with horns going into r = 0. For Lz ≤0 the Hill's region is bounded away from R = 0 and may be empty. For E > 1 there is an unbounded component. The number and shapes of components are given by the parameter-domains in Figure 7. In this figure, we also plot the projections of some sample orbits. VIII. POINCARÉ PLOTS For components of the Hill's region intersecting the equatorial plane, we can take z = 0, pz > 0, as a Poincaré section Σ (cf. ref. 4). Σ can be considered to be the set of (R, pR) where E + 1 R > q 1+( Lz R -D R2 )2 + p2 R, with z = 0 and pz = q (E + 1 R)2 -1-( Lz R -D R2 )2 -p2 R. The boundary of Σ (where z = pz = 0) is an orbit of the equatorial reduced system (section VI). Considering the first return map to Σ simplifies representing and understanding the dynamics, at least of those orbits that intersect Σ. The section is transverse to the vector field, because ̇z = pz > 0. Let Σ+ be the subset for which there is a first return in positive time. Define T(R, pR) to be the time to the first return of the orbit started at (R, pR) on Σ+ and P(R, pR) to be the point of the first return. P is called the Poincaré map. By transversality of Σ and smoothness of the flow, the set Σ+ is an open subset of Σ and P is smooth on it. Figure 8 shows some Poincaré plots (orbit segments for the Poincaré map) for selected values of (E,Lz,D) and initial conditions. We see a range of behaviours, on which we shall comment after discussion of the question of completeness of the section. A transverse section is said to be complete if the forward and backward trajectories of every point in the given Hill component crosses it (except for the bounding periodic orbit). A sufficient condition is that an "angle" variable evolves monotonically and without bound. In particular, define angle α by tanα = pz/z, (65) then use (31, 33) to obtain ̇α = ̇pz z -pz ̇z z2 cos2 α = - 3DRpφ r5γ + 1 r3 cos2 α -1 γ sin2 α, which is negative if pφ > -γr2 3DR. (66) We will show that (66) holds on the whole of the Hill component if Lz is sufficiently negative (despite this looking counterintuitive to the relation (27) and the requirement (66)). For a bounded Hill component, if (66) holds everywhere then ̇α is less than a negative constant on it. Then from any point, both the forward and backward orbits cross α = π 2 mod 2π at some times, which is our section. To obtain an explicit region of parameter space where there is a non-empty Hill component and (66) holds on the whole of it, we will restrict attention to Lz 0. It follows that for E ≥ q A 1+A, u ≤u+ = E + p E2 -A(1-E2) A (69) (and if E 0 but we don't give the estimates here. 0.6 0.8 1.0 1.2 1.4 -5 -4 -3 -2 -1 E L_z FIG. 9: The region in (E,Lz)-space for which we prove that the section z = 0, pz > 0 is complete is the cusped one between the red and blue curves. The region below the blue curve is where our bounds show that the Hill's region is empty, but from Figure 2 we know that the Hill's region is non-empty to the right of a curve asymptotic to Lz = -∞as E increases to 1. We expect the true domain for completeness of the section to go up to the bifurcation point in Figure 6, because near the part of the curve where the critical point is elliptic, the Hill's region is small and the linearisation about the bounding periodic orbit has non-zero rotation. Perhaps our estimates could be improved to show this (notably by throwing away less of the higher order terms in the step from (67) to (68); but the estimate R ≤r is close to optimal for small Hill's region). On the other hand, for Lz not sufficiently negative, the bounding periodic orbit has linearised rotation rate zero around it and is hyperbolic (because associated with a hyperbolic equatorial circular orbit), so the forward orbits of points on its local stable manifold do not intersect the section (and the same for backwards orbits of points on its local unstable manifold). Indeed, it can be expected that there is a set of positive volume whose orbits do not intersect the section, so the Poincaré map would not tell us anything about them. The boundary in (E,Lz) space between parameter values for which the equatorial periodic orbit has zero or non-zero Motion in Aubry's galaxy 11 linearised rotation rate can in principle be determined by solving a Hill's equation for the Floquet multipliers (the linearised equation in (z, pz) about the periodic orbit R(t); compare the curve for stabilisation of an inverted pendulum by oscillation of its support), but we have not attempted that. If the component C of the energy-surface corresponding to the chosen Hill component has finite volume VC then Σ+ is a subset of full measure in Σ, by volume-preservation of the flow. So Σ\Σ+ is negligible in a measure-theoretic sense. For components of Hill's region that are bounded away from r = 0 and ∞, the volume is finite. This is because energy-surface volume μ is defined on each energy-surface by 1 2ω ∧ω = μ ∧dH, so VC = R C μ. The easiest way to bound it for our problem is to express it as VC = Z C 2π √ K dRdz over the Hill component, because for each point of the Hill component there is a circle of possible momenta, with magnitude √ K, where K = E + 1 r 2 -1- Lz R -DR r3 2 . (70) Under the stated conditions, this integral is finite. Thus the return map is defined on a set of full measure in Σ, which is consistent with the figures. For components with horns going to r = 0, the volume is also finite but to prove it requires a little more work. The centre curve of the horn is Lzr3 = DR2. We can write this as R = Rc(z) with Rc(z)2 ∼Lz D z3. Near the centre curve, Lz R -DR r3 ∼-2D z3 (R-Rc(z)). Also, E + 1 r ∼1 z . So the horn is given asymptotically by R = Rc(z) ± ∆R(z) with ∆R ∼z2 D . Then the integral from the part of the horn with z ≤ζ is asymptotically Z ζ 0 z Ddz = ζ 2 2D 0 appear to be nonintegrable, for some values of parameters the Poincaré section appears to be filled densely by a single orbit, e.g. Figure 8(c). Even more, the chosen orbit appears to fill out the section with uniform density. This is remarkable. To confirm this, we replotted in Figure 8(d) with opacity 0.1, so that any variations in density after pixelisation could become apparent, and none are. Sometimes, purely topological aspects of a system can force chaotic dynamics20, but not necessarily on a set of positive measure, let alone full measure and with unifrom density. Note that special examples, called pseudo-Anosov, can be constructed in which this topological chaos has full measure (and typical orbits have uniform density); there is a chance something close to this could occur in the case that the bounding orbit of the section has a non-zero half-integer rotationMotion in Aubry's galaxy 12 rate (making "1-prongs" on the boundary), but it likely to require special conditions. The case of zero rotation-rate is more relevant, however. It can not be pseudo-Anosov, but we give a sketch of a suggested explanation for the observed nearperfect chaos in the next section. Note that some orbits in the component of the Hill's region might not intersect the section, so the Poincaré map might not give the complete dynamics, even measure-theoretically. The accessible volume from the Poincaré section is given by Vacc = Z Σ T(R, pR) dRdpR. This simple formula is a consequence of preservation of energy-surface volume μ, as defined above. The proof is that the flux of energy-surface volume iXμ = ω, so the volume swept out by unit area on the section in time T is Tω. If Vacc is less than the volume VC of the component C then a positive volume of orbits is missed by the Poincaré map. There are some parameter regimes where one could probably prove this. In particular, when the Hill's region has a small neck on z = 0 and large bulges for z ̸= 0 it is plausible that Vacc →0 as the neck width goes to 0 but VC remains bounded away from 0. The only way to avoid this would be if the return time function T diverges sufficiently fast. This scenario will be discussed in the next section. One case in whichVacc 0 and z 0 and z 0 and pz 0 is what we chose as Poincaré section. We call the other one the "dual section". The local forwards and backwards contracting submanifolds of h look Motion in Aubry's galaxy 13 h z = 0, pz 0 W + W - FIG. 12: The local contracting submanifolds of the hyperbolic periodic orbit h relative to the section z = 0. z > 0 inside the indicated sphere and z 0 (by zsymmetry, the 3-ball for z 0, which bounces between two points on the boundary of the Hill's component (a brake orbit, which we believe can be obtained by a similar argument to Seifert; there is a corresponding one for z 0. If the dynamics has a rotation symmetry (not necessarily from isometry of configuration space; it can be from separability of vertical and horizontal motion or the adiabatic invariant to be described in sec. X) that leaves h and e invariant then the system is integrable and the phase portrait in z > 0 looks FIG. 13: The Hill's region for D = 1,Lz = -0.348,E = 0.56, showing the hyperbolic (on z = 0) and index 1 (in the top half) brake periodic orbits and some orbits on one side of the unstable manifold of h up to their first crossing with e. h e W + W - z = 0, pz 0 FIG. 14: The ball z ≥0 in the integrable case. like Figure 14, where the inner branches of W± coincide. The map from z = 0, pz > 0 to z = 0, pz 0 is the composition of this map with the corresponding one for z 0 z = 0, pz 0 z = 0, pz B instead, then one can eliminate the component of B perpendicular to E, but we don't pursue that case. In regions where the "magnetic" field is strong we can expect an adiabatic invariant, directly analogous to the magnetic moment for charged particles in strong magnetic fields. The idea is that if the field seen by a gyrating particle varies by at most a small relative amount ε during half a gyroperiod then the action integral μ = 1 2π R p·dr for one gyro-revolution varies by at most order ε for a number of gyrorevolutions of order e-C/ε for someC > 0 (in the analytic case25). The action integral for the unperturbed motion evaluates to μ = p2 ⊥ B (71) (in plasma physics it is common to divide this by 2m, where m is the rest mass of the particle, but the above form is simpler in our relativistic context for unit rest mass). To compute p2 ⊥ it is simplest to first compute the field strength B = D r4 p R2 +4z2 (72) and parallel momentum p∥= D r5B(3zRpR +(2z2 -R2)pz), (73) and use p2 ⊥= p2 -p2 ∥. (74) The total momentum-squared p2 can be computed as p2 = p2 R + p2 z +( Lz R -DR r3 )2, (75) or p2 = (E + 1 r )2 -1. (76) To complete the computation of μ, p2 ⊥is then divided by B. To quantify ε in our problem, the gyroradius ρ = p⊥/B = p μ/B, the distance travelled by the E × B drift in half a gyroperiod is γ E⊥ B π B, and the parallel distance travelled during half a gyroperiod is p∥π/B, so ε = max 2 L⊥ r μ B , πγE⊥ B2L⊥ , π L∥ p∥ B , where L⊥and L∥are lengthscales for relative variation of B in the two directions. An order of magnitude for these is r/3, and we can take B ∼D/r3 and |E| ∼r-2. So the conditions for adiabatic invariance are that μ ≪D r , γr3 ≪D2, p∥≪D r2 . Note that the first and last imply that p2 ⊥, p2 ∥≪D2 r4 , so we obtain γ -1 ≪D r2 . Thus the second condition is really just that r3 ≪D2. Thus, sufficient conditions for the adiabatic invariance are r ≪D2/3, μ ≪D1/3, p∥≪D-1/3. A consequence of adiabatic invariance of μ is that the system is close to integrable in the regions where it applies. Assuming the adiabatic invariance, the motion can be reduced to a 2DoF system for the "guiding centre" of the helix, with state-space coordinates being the three components of the guiding centre and its velocity component parallel to the Bfield17. Rotation symmetry about the z-axis then implies conservation of DR2/r3 for the guiding centres (because Bφ = 0), which we will write as r3 = CR2 with C constant. This is just the equation for staying on a fieldline. It makes sense because the standard curvature and grad-B drifts across the field are in the φ direction, which is ignored in the reduction. The dynamics of the guiding centre along the fieldline are given by the reduced Hamiltonian H = q 1+ p2 ∥+ μB-1 r , using arclength s as coordinate. This gives p2 ∥= (E + 1 r )2 -1-μB. The field strength B is given by B2 = D2 r8 (4r2 -3R2). Parametrise the upper and lower halves of the fieldline by r ∈ (0,C], C being the value on the equatorial plane, then R2 = λr3 with λ = 1/C. So p2 ∥= (E + 1 r )2 -1-μD r3 p 4-3λr. Arclength s is related to R by ds dr 2 = 1-3 4λr 1-λr , which is singular at the extreme (r = C) but not a problem. We see that for μ > 0, the motion consists of bouncing along fieldlines between reflection points where (E + 1 r )2 = 1+ μD r3 p 4-3λr. So if the adiabatic invariant is well conserved then the return map to a Poincaré section z = 0 for given E and Lz consists of just a rotation by some gyrophase (depending on the gyroradius) about a fixed point, hence ellipses in the (R, pR)- plane. Figure 21 shows an example. In reality, there is a more accurate adiabatic invariant than μ, which deforms the ellipses into a family of closed curves that includes the boundary, though the adiabatic invariant is probably still not exactly conserved. In practice, what we found more often (particularly when Lz > 0, for which the Hill components have horns going to the Motion in Aubry's galaxy 17 FIG. 21: (upper) A trajectory for E = 0.6,Lz = 30,D = 50, and (lower) its Poincaré plot. origin) is that the adiabatic invariant is well conserved while the particle is in the horns but not in the passage at relatively large radius between the horns (it moves too far along the field in one gyroperiod). An example is shown in Figure 22. Such pictures suggest an alternative approach to understanding the near-perfect chaos. Chaos in a Hamiltonian system can result from possession of an adiabatic invariant in parts of the phase space, but with pseudo-random jumps in the adiabatic invariant in between visits to these parts. The failure of the adiabatic invariant in most examples studied so far is due to separatrix crossing (where the normally rapid frequency goes to zero)26. In our system we do not see separatrix crossing, but it is clear from pictures like Figure 22 that the motion sometimes consists of episodes in the cusps of the Hill's region, where the adiabatic invariant is well conserved, separated by passage across z = 0 via relatively large radius, during which some change to the adiabatic invariant occurs. Perhaps one can make a probabilistic analysis of the jumps in adiabatic invariant, as has been done for cases of separatrix crossing, and deduce stochastic behaviour. This approach would be limited to systems like ours with an adiabatic invariant, so not as general as the approach of the previous section. Yet it could be simpler in cases like ours when it applies. FIG. 22: (upper) A trajectory for E = 0.9,Lz = 12.5,D = 7.3, exhibiting (lower) fairly good conservation of μ while in the horns and substantial changes in between. XI. ROTATION CURVES OF GALAXIES Aubry's investigation was motivated by the desire to explain the rotation curves of galaxies without invoking dark matter (for another approach, which prompted Aubry's investigations, see ref. 28). Here we see what our investigations can say about the issue. The rotation curve of a galaxy is the graph of the inferred average of vφ as a function of R in the equatorial plane. For many galaxies it is observed to settle to a constant around 10-3 (in units of c) as R →∞. The observations are based on Doppler shift of Hα lines from the gas in the disk. The co-rotating equatorial circular orbits are the simplest theoretical probe for this question, but they have vφ ∝R-1/2 as R →∞, so they do not fit the observations. This is the pure Newtonian prediction but we see from (6) that it is true also for Aubry's galaxy. The standard explanation is that in addition to the visible matter in a galaxy, there is a halo of "dark matter" whose gravitational effect makes vφ for the circular orbits go to a positive limit as R →∞. For one analysis, see ref. 15. But here is another approach, avoiding the appeal to dark Motion in Aubry's galaxy 18 matter. We have seen that for orbits of Aubry's system, γ = E + 1 r (77) pφ = Lz R -DR r3 . (78) To fit with a non-zero limit for tangential velocity as R →∞, we need pφ to tend to some p∞> 0 (one could consider p∞ p D/p∞(we take r ∼R in the disk). Furthermore, E = q 1+ p2∞+ p2 R + p2z -1 r (80) must decrease unless p2 R + p2 z makes a compensating increase. Different Lz for different R requires different orbits, hence circular, and there are none with γ bounded away from 1 as R →∞. So instead perhaps there is exchange of Lz and of E between particles, leading to a distribution that is concentrated around Lz = Rp∞+ D R and E = γ∞-1 R? This distribution would necessarily have a nett outward flux of particles for R > 2 p2∞because then E > 1 and hence ̈R > 0, but that is not necessarily a problem. Why shouldn't a galaxy be losing gas? It might also be that matter goes in or out along the "horns" of the Hill's regions with Lz > 0. Rourke28 proposed outflow along the rotation axis with gradual inflow in the equatorial disk, but perhaps it is the other way round. To develop this idea requires some kinetic theory, so we leave it for future work. Another ingredient to consider is that there could be transfer between orbital angular momentum (our Lz) and spin (angular momentum of a particle about its centre of mass), which we have neglected. Perhaps neither of these effects depends significantly on D, so Aubry's model might be more sophisticated than necessary to explain the rotation curves of galaxies. XII. MORE COMMENTS In general relativity, it has long been recognised that rotating matter has "frame-dragging" effects22. The gravitomagnetic force used here is a simple manifestation of framedragging. To our minds, it demystifies frame-dragging because it shows frame-dragging to correspond to the familiar effects of a magnetic field. Rourke's proposal28 to explain rotation curves of galaxies (and more) involves modifying the standard Lense-Thirring frame-dragging effect to one that decays like only 1/r with distance from a rotating mass. He justifies it by Mach's principle, but it is not clear to us. One direction to explore is the possible connection of the magnetic gravity term with Thomas precession3, which is usually considered at the atomic scale but applies more broadly. For non-relativistic motion in Aubry's galaxy, one could study the non-relativistic version of (28). Then c becomes irrelevant and one can scale D to 1. A question that Aubry has posed is what is the set of orbits that go to neither infinity nor zero. This could predict the shape of an axisymmetric galaxy. It is an interesting direction for future research. Another question Aubry posed is what are the effects of the gravitomagnetic field on light rays? In our model, however, the effect of putting rest mass to zero is to remove all effects of gravity and so to just produce constant velocity trajectories at the speed of light. Another direction to explore is to take gravitational force proportional to relativistic mass instead of rest mass. This would in particular, produce an effect on light rays. We didn't find a Hamiltonian formulation of this case, except when D = 0. That is not necessarily an obstacle to studying it, but means it would require other methods. Nonetheless, for any D an "energy" is conserved, namely H = logγ -1 r . Equivalently, γ exp-1 r is conserved, which supports a point of view taken by Aubry1, namely that the effect of a static potential can be replaced by taking the rest mass m of a particle to decrease as it goes down the potential. The total energy of the particle then being γm, we obtain the formula m ∝exp-1 r . Note also that standard volume is preserved if one uses a new time s with ds = γ dt. Thus, the system is close to Hamiltonian, but we did not yet identify an appropriate symplectic form, nor even angular momentum constant. There is a vague possibility that one should view conservation of γ exp-1 r as a non-holonomic velocity-constraint, so rotation-symmetry might not produce a conservation law5. Finally, it would be natural to consider the effects of breaking axisymmetry. Spiral galaxies have clearly done that. Then Lz conservation would no longer be perfect. XIII. CONCLUSION We have studied the motion of a test mass in the gravitomagnetic field proposed by Aubry as a simple model of a galaxy, taking force proportional to rest mass. The system can equivalently be viewed as motion of a relativistic charged particle in the field of a co-located electrostatic monopole and magnetic dipole. We find a mix of regular and chaotic motion. The dynamics has various forms, depending on the energy and angular momentum constants. An interesting regime of almost perfect chaos is found, whose analysis merits more investigation. Further directions for analysis of possible consequences for galaxies are proposed. ACKNOWLEDGMENTS RSM is most grateful to Serge Aubry for stimulation over many years. This began with Aubry's fascinating talk on minimisers of Frenkel-Kontorova models at Los Alamos in 1982. Notable other highlights were his explaining anti-integrable limits to me one Sunday morning in Minnesota in 1990, and setting me the challenge of explaining discrete breathers in 1993 during a visit to Saclay. Most recently he communicated Motion in Aubry's galaxy 19 to me his ideas on rotation curves of galaxies, which led to the present paper. We are grateful to Shibabrat Naik for making professional versions of some of our hand-drawn figures and to Vassili Gelfreich for useful suggestions. This work was supported by a grant from the Simons Foundation (601970, RSM). DATA AVAILABILITY STATEMENT The data that support the findings of this study are openly available in GitHub at https://github.com/meghaburri/Motion-in-Aubry-s-galaxy.git and archived on Zenodo at https://doi.org/10.5281/zenodo.17117918. REFERENCES 1Aubry S, General Relativity revisited: Modified Heaviside Theory for Gravitation Application for understanding the Hypothetic Dark Matter Manifestations, preprint (2025) 2Behera H, Barik N, Explanation of Gravity Probe B Experimental Results using Heaviside-Maxwellian (Vector) Gravity in flat Space-time, arxiv:2002.12124 (2020) 3Ben-Menahem S, The Thomas precession and velocity-space curvature, J Math Phys 27 (1986) 1284-6. 4Binney J, Tremaine S, Galactic dynamics, 2nd ed (Princeton U Press, 2008). 5Bloch AM, Krishnaprasad PS, Marsden JE, Murray R, Nonholonomic mechanical systems with symmetry, Arch Rat Mech An 136 (1996) 21-99. 6Burby JW, MacKay RS, Naik S, Isodrastic magnetic fields for suppressing transitions in guiding-centre motion, Nonlinearity 36 (2023) 5884-5954. 7Carter B, Global structure of the Kerr family of gravitational fields, Phys Rev 174 (1968) 1559-71. 8Clark SJ, Tucker RW, Gauge symmetry and gravito-electromagnetism, Class Q Grav 17 (2000) 4125-57. 9De Leon N, Berne BJ, Intramolecular rate process: Isomerization dynamics and the transition to chaos, J Chem Phys 75 (1981) 3495-3510. 10Dellnitz M et al, Transport in dynamical astronomy and multibody problems, Int J Bif Chaos 15 (2005) 699-727. 11Dullin HR, Wittek A, Complete Poincaré sections and tangent sets, J Phys A 28 (1995) 7157-80. 12Hayes WB, Jackson KR, Rigorous shadowing of numerical solutions of ordinary differential equations by containment, SIAM J Num Anal 41 (2003) 1948-73. 13Jefimenko OD, Gravitation and cogravitation (Electret Scientific, 2006). 14Kaheman K, Bramburger JJ, Kutz JN, Brunton SL, Saddle transport and chaos in the double pendulum, Nonlin Dyn 111 (2023) 7199-7233. 15Kirillov AA, Turaev D, The universal rotation curve of spiral galaxies, Mon Not R Astron Soc 371 (2006) L31-L35. 16Koon WS, Lo MW, Marsden JE, Ross SD, Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics, Chaos 10 (2000) 427-469. 17Littlejohn RG, Variational principles of guiding center motion, J Plasma Phys 29 (1983) 111-125. 18Liu R, Liu S, Zhu F, Chen Q, He Y, Cai C, Orbits of charged particles trapped in a dipole magnetic field, Chaos 32 (2022) 043104. 19MacKay RS, Flux over a saddle, Phys Lett A 145 (1990) 425-7. 20MacKay RS, Complicated dynamics from simple topological hypotheses, Phil Trans Roy Soc London A 359 (2001) 1479-96. 21Marsden JE, Ross SD, New methods in celestial mechanics and mission design, Bull Am Math Soc 43 (2005) 43-73. 22Misner CW, Thorne KS, Wheeler JA, Gravitation (Freeman, 1973). 23Naik S, Krajnak V, Wiggins S, Support vector machines for learning reactive islands, Chaos 31 (2021) 103101. 24Naik S, Ross SD, Geometry of escaping dynamics in nonlinear ship motion, Commun Nonlinear Sci Numer Simulat 47 (2017) 48-70. 25Neishtadt AI, The separation of motions in systems with rapidly rotating phase, PMM USSR 48:2 (1984) 133-9. 26Neishtadt AI, Simo C, Treschev D, Vasiliev A, Periodic orbits and stability islands in chaotic seas created by separatrices in slow-fast systems, Discr Cont Dyn Sys B 10 (2008) 621-650. 27Ozorio de Almeida AM, De Leon N, Mehta MA, Marston CC, Geometry and dynamics of stable and unstable cylinders in Hamiltonian systems, Phys D 46 (1990) 265-285. 28Rourke CP, The geometry of the universe (World Sci Publ Co, 2021). 29Seifert H, Periodische Bewegungen mechanischer Systeme, Math. Z 51 (1949) 197-216. 30Walker M, Penrose R, On Quadratic First Integrals of the Geodesic Equations for Type {22} Spacetime, Commun Math Phys 18 (1970) 265-274.
|
2509.16230
|
MODULES OVER THE CATEGORY OF
JACOBI DIAGRAMS IN HANDLEBODIES
MAI KATADA
Abstract. The linear category A of Jacobi diagrams in handlebodies was
introduced by Habiro and Massuyeau. We study the category of modules over
the category A. We generalize the adjunction given by Powell to an adjunction
between the category of A-modules and the category of modules over the linear
PROP for Casimir Lie algebras by using the category AL of extended Jacobi
diagrams in handlebodies. Then we study subquotient A-modules of A(0, −).
Contents
1.
Introduction
1
2.
Functors on the opposite category of the category of free groups
5
3.
The category AL of extended Jacobi diagrams in handlebodies
8
4.
Functors on the category of Jacobi diagrams in handlebodies
17
5.
Symmetric monoidal structures
23
6.
The A-module AL(L⊗m, H⊗−)
28
7.
Perspectives
38
References
38
1. Introduction
Finitely generated free groups and the automorphism groups of them are fun-
damental and important objects in mathematics, especially in algebraic topology.
Functors from the category gr of finitely generated free groups (or the opposite
grop of it) to the category of abelian groups often arise, and there is a substantial
body of literature on the functor category on gr or grop [3, 8, 20, 18, 19, 17, 14, 1].
We will focus on the functors to the category of vector spaces over a field k of
characteristic 0.
The notion of polynomial functors was introduced by Eilenberg and MacLane [4]
and generalized by Hartl, Pirashvili and Vespa [8]. The notion of outer functors,
introduced by Powell and Vespa [20], also plays an essential role in the functor
category on gr and grop. The full subcategory of polynomial functors (or analytic
functors, which are colimits of polynomial functors) has been studied. In particular,
Powell [18] proved that the full subcategory of analytic functors on grop is equivalent
to the category of modules over the k-linear PROP CatLie for Lie algebras. In some
Date: September 14, 2025.
2020 Mathematics Subject Classification. 18A25, 18A40, 57K16, 18M85, 20F28.
Key words and phrases. Functor categories, Polynomial functors, Adjoint functors, Free
groups, Jacobi diagrams in handlebodies, Casimir Lie algebras, Casimir Hopf algebras.
1
arXiv:2509.16230v1 [math.RT] 14 Sep 2025
2
MAI KATADA
sense, modules over CatLie are easier to handle or understand than analytic functors
on grop.
Habiro and Massuyeau [7] introduced the category A of Jacobi diagrams in
handlebodies in order to extend the Kontsevich invariant to a functor from the
category of bottom tangles in handlebodies. The category A is characterized as
a k-linear PROP freely generated by a Casimir Hopf algebra, and has a grading
defined by the number of copies of the Casimir 2-tensor. Since the category grop
is characterized as a PROP freely generated by a cocommutative Hopf algebra, the
k-linearization kgrop of the category grop is identified with the degree 0 part of
A. The k-linear PROP CatLieC for Casimir Lie algebras was introduced by Hinich
and Vaintrob [9] in their study of the relations between Casimir Lie algebras and
the algebra of chord diagrams. One of the motivations behind this project was
to clarify the relations between the category of A-modules and the category of
CatLieC-modules.
The aim of this paper is to study the category A-Mod of A-modules. We gen-
eralize the adjunction given by Powell to an adjunction between CatLieC-modules
and A-modules. Here, we use an (A, CatLieC)-bimodule induced by the hom-spaces
of the category AL of extended Jacobi diagrams in handlebodies, which was intro-
duced in [12]. Then we study the subquotient A-modules of A(0, −). By using the
indecomposable decomposition of the kgrop-module Ad(0, −) obtained in [11, 12],
we prove that some subquotient A-modules of A(0, −) are indecomposable.
1.1. An adjunction between the categories CatLieC-Mod and A-Mod. Let
k-Mod denote the category of k-vector spaces and k-linear maps for a field k of
characteristic 0. For a k-linear category C, which is a category enriched over k-Mod,
let C-Mod denote the k-linear category of k-linear functors from C to k-Mod.
Powell [18, Theorem 9.19] constructed a (kgrop, CatLie)-bimodule ∆CatAssu,
which yields the hom-tensor adjunction
∆CatAssu ⊗CatLie −: CatLie-Mod
kgrop-Mod
⊤
: Homkgrop-Mod(∆CatAssu, −).
Moreover, he proved that the adjunction restricts to an equivalence of categories be-
tween CatLie-Mod and a full subcategory kgrop-Modω of kgrop-Mod whose objects
correspond to analytic functors on grop.
We will give an alternative interpretation of the adjunction given by Powell by
using the k-linear symmetric monoidal category AL of extended Jacobi diagrams.
The set of objects of the category AL is the free monoid generated by H and L, and
AL includes A (resp. CatLieC) as a full subcategory whose objects are generated
by H (resp. L). By restricting to the degree 0 part, the category AL
0 includes
kgrop ∼= A0 and CatLie ∼= (CatLieC)0 as full subcategories.
(See Section 3 for
details.)
Proposition 1.1 (see Proposition 4.3). We have an isomorphism of (kgrop, CatLie)-
bimodules
∆CatAssu ∼= AL
0 (L⊗−, H⊗−).
Moreover, by generalizing the above interpretation, we obtain an adjunction
between CatLieC-Mod and A-Mod.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
3
Theorem 1.2 (see Theorem 4.4). We have an (A, CatLieC)-bimodule AL(L⊗−, H⊗−),
which induces an adjunction
AL(L⊗−, H⊗−) ⊗CatLieC −: CatLieC-Mod
A-Mod
⊤
: HomA-Mod(AL(L⊗−, H⊗−), −).
We define a full subcategory A-Modω of A-Mod in such a way that an object
of A-Modω is an A-module M which satisfies U(M) ∈kgrop-Modω, where U :
A-Mod →kgrop-Mod denotes the forgetful functor.
Theorem 1.3 (see Theorem 4.9). We have AL(L⊗m, H⊗−) ∈Ob(A-Modω) for
each m ≥0, and the adjunction in Theorem 1.2 restricts to an adjunction
AL(L⊗−, H⊗−) ⊗CatLieC −: CatLieC-Mod
A-Modω
⊤
: HomA-Modω(AL(L⊗−, H⊗−), −).
Remark 1.4. Recently, Minkyu Kim has independently obtained an adjunction,
equivalent to Theorem 1.2, by using a general method involving left ideals. More-
over, he has proved that the restriction of the adjunction, equivalent to that in
Theorem 1.3, induces an equivalence of categories. (See Remark 4.10.)
For a k-linear PROP P, the Day convolution of modules over P induces a k-linear
symmetric monoidal structure on the category of P-modules.
Proposition 1.5 (see Propositions 5.1 and 5.4). The left adjoint functors
AL
0 (L⊗−, H⊗−) ⊗CatLie −: CatLie-Mod →kgrop-Mod
and
AL(L⊗−, H⊗−) ⊗CatLieC −: CatLieC-Mod →A-Mod
are symmetric monoidal with respect to the symmetric monoidal structures induced
by the Day convolution.
We have a graded algebra A = U(A(0, −)) in the symmetric monoidal cate-
gory kgrop-Mod with the Day convolution tensor product. In a similar way, we
have a graded algebra C = U(CatLieC(0, −)) in the symmetric monoidal category
CatLie-Mod with the Day convolution tensor product, where U : CatLieC-Mod →
CatLie-Mod is the forgetful functor.
The graded algebra A corresponds to the
graded algebra C via the category equivalence between kgrop-Modω and CatLie-Mod.
Theorem 1.6 (see Theorem 5.9 and Corollary 5.10). The graded algebra C in
CatLie-Mod is quadratic, and the graded algebra A in kgrop-Mod is quadratic.
1.2. Subquotient A-modules of A(0, −). In [11, 12], the author studied the
kgrop-module Ad = Ad(0, −). One of the main results of [11, 12] is an indecompos-
able decomposition of Ad. More precisely, for d ≥2, we have an indecomposable
decomposition of kgrop-modules
Ad = AdP ⊕AdQ,
where AdP and AdQ are the kgrop-submodules of Ad that are generated by the
symmetric element Pd and the anti-symmetric element Qd, respectively. Moreover,
AdP is a simple kgrop-module which is isomorphic to S2d ◦a#. Here, Sλ denotes
the Schur functor corresponding to a partition λ, and a# = Hom(−ab, k) denotes
the dual of the abelianization functor. (See Section 6.3 for details.)
For d ≥0, the degree ≥d part forms the A-submodule A≥d(0, −) of A(0, −).
For d ≥0, d′ ≥d + 2, the quotient A-module A≥d(0, −)/A≥d′(0, −) does not
4
MAI KATADA
factor through kgrop. By using the kgrop-module structure of Ad, we obtain the
following.
Theorem 1.7 (see Theorems 6.7 and 6.9). For d ≥0, d′ ≥d + 2, the A-module
A≥d(0, −)/A≥d′(0, −) is indecomposable. Moreover, the A-module A(0, −) is in-
decomposable.
Let AQ denote the A-submodule of A(0, −) that is generated by Q2, which
contains all AdQ for d ≥2. For d ≥2, the degree ≥d part forms the A-submodule
AQ≥d of AQ. For the A-module AQ, we also have the indecomposability.
Theorem 1.8 (see Proposition 6.8 and Theorem 6.9). For d ≥2, d′ ≥d + 1,
the A-module AQ≥d/AQ≥d′ is indecomposable. Moreover, the A-module AQ is
indecomposable.
For each m ≥0, we have the A-module AL(L⊗m, H⊗−). For m = 0, we have
AL(L⊗0, H⊗−) ∼= A(0, −) as A-modules.
For m = 1, we obtain a direct sum
decomposition of AL
d = AL
d (L, H⊗−) as kgrop-modules, which is conjectured to be
an indecomposable decomposition.
Proposition 1.9 (see Proposition 6.10). Let d ≥1. We have a direct sum decom-
position of kgrop-modules
AL
d = AL
d P ⊕AL
d Q,
where AL
d P and AL
d Q are the kgrop-submodules of AL
d that are generated by the
symmetric element P L
d and by the two anti-symmetric elements Q′
d and Q′′
d, respec-
tively. Moreover, AL
d P is a simple kgrop-module which is isomorphic to S2d+1 ◦a#.
We make the following conjecture, which is an analogue of the case of A(0, −).
Conjecture 1.10 (see Conjectures 6.14 and 6.13). For d ≥2, the kgrop-module
AL
d Q is indecomposable. Moreover, the A-module AL(L, H⊗−) is indecomposable.
1.3. Outline. We organize the rest of the paper as follows. From Section 2 to
Section 5, we study the adjunction between the categories CatLieC-Mod and A-Mod.
More precisely, in Section 2, we recall some known facts to state the result of
Powell, and in Section 3, we recall the definition of the category AL and study
the hom-space of it, and in Section 4, we prove Theorems 1.2 and 1.3, and then
in Section 5, we study the adjunctions with respect to the symmetric monoidal
structure induced by the Day convolution. In Section 6, we study the subquotient
A-modules of A(0, −) and prove Theorems 1.7 and 1.8. Lastly, in Section 7, we
provide a perspective on modules over the handlebody groups.
Acknowledgements. The author would like to thank Dr. Minkyu Kim for dis-
cussions on the adjunction between A-Mod and CatLieC-Mod, and Professor Kazuo
Habiro for suggestions regarding the algebra A in A-Mod, which inspired the writ-
ing of Section 5. She would like to thank Professor Geoffrey Powell for pointing
out errors in the initial proof of the indecomposability of the A-module A(0, −),
and for his valuable comments throughout drafts of the present paper. She would
also like to thank Professor Christine Vespa for helpful comments. This work was
supported by JSPS KAKENHI Grant Number 24K16916.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
5
2. Functors on the opposite category of the category of free
groups
Here we recall some known facts about the opposite grop of the category gr of
free groups, and describe the adjunction between the functor category on grop and
the category of modules over the k-linear PROP CatLie for Lie algebras, which is
given by Powell [18].
2.1. The category grop. For n ≥0, let Fn be the group freely generated by
{x1, · · · , xn}.
Let gr denote the category of finitely generated free groups and
group homomorphisms, and grop the opposite category of gr.
It is well known that the category grop is a PROP (i.e., a symmetric monoidal
category with non-negative integers as objects) freely generated by a cocommutative
Hopf algebra (H = 1, µ, η, ∆, ε, S). See [16] for details, and [6] for a combinatorial
description. Here, the morphism µ : 2 →1 is the group homomorphism
F1 →F2,
x 7→x1x2,
the morphism η : 0 →1 is the group homomorphism
F1 →F0,
x 7→1,
the morphism ∆: 1 →2 is the group homomorphism
F2 →F1,
x1 7→x,
x2 7→x,
the morphism ε : 1 →0 is the group homomorphism
F0 →F1,
1 7→1 ∈F1,
and the morphism S : 1 →1 is the group homomorphism
F1 →F1,
x 7→x−1.
Define µ[i] : i →1 inductively by µ[0] = η, µ[1] = id1, µ[i] = µ(µ[i−1] ⊗id1) for
i ≥2. Let
µ[p1,··· ,pn] = µ[p1] ⊗· · · ⊗µ[pn] : p1 + · · · + pn →n.
(1)
Similarly, we define ∆[i] : 1 →i by ∆[0] = ε, ∆[1] = id1, ∆[i] = (∆[i−1] ⊗id1)∆for
i ≥2, and let
∆[q1,··· ,qm] = ∆[q1] ⊗· · · ⊗∆[qm] : m →q1 + · · · + qm.
Define a group homomorphism
P : Sm →grop(m, m),
σ 7→Pσ
(2)
by P(i,i+1) = idi−1 ⊗P1,1 ⊗idm−i−1 for 1 ≤i ≤m −1, where P1,1 ∈grop(2, 2) is
the symmetry such that P1,1(x1) = x2, P1,1(x2) = x1. Then we have the following
factorization of morphisms of the category grop.
Lemma 2.1 ([6, Lemma 2, Theorem 8]). Any element of grop(m, n) can be de-
composed into the following form
µ[p1,··· ,pn]Pσ(Se1 ⊗· · · ⊗Ses)∆[q1,··· ,qm],
where s, q1, · · · , qm, p1, · · · , pn ≥0, s = q1 + · · · + qm = p1 + · · · + pn, σ ∈Ss,
e1, · · · , es ∈{0, 1}.
6
MAI KATADA
Note that the above factorization is not unique since we have
µ(id1 ⊗S)∆= ηε.
2.2. The k-linear PROPs for unital associative algebras and Lie algebras.
Let CatAssu denote the k-linear PROP associated to the operad Assu encoding
unital associative algebras. That is, the category CatAssu is the k-linear PROP
freely generated by a unital associative algebra (A = 1, µA, ηA), where the hom-
space is
CatAssu(m, n) =
M
f:{1,...,m}→{1,...,n}
n
O
i=1
Assu(|f −1(i)|).
Define µ[p1,··· ,pn]
A
and a group homomorphism P : Sm →CatAssu(m, m), σ 7→Pσ
in a way similar to (1) and (2), respectively. Then any element of CatAssu(m, n)
has the following factorization.
Lemma 2.2. Any element of CatAssu(m, n) can be written uniquely as a linear
combination of morphisms
µ[p1,··· ,pn]
A
Pσ
with p1 + · · · + pn = m, σ ∈Sm. That is, the set
{µ[p1,··· ,pn]
A
Pσ | p1 + · · · + pn = m, σ ∈Sm}
is a basis for CatAssu(m, n).
Let CatLie denote the k-linear PROP associated to the Lie operad Lie. That is,
CatLie is the k-linear PROP freely generated by a Lie algebra (L = 1, [, ] : L⊗2 →L),
where the hom-space is
CatLie(m, n) =
M
f:{1,...,m}↠{1,...,n}
n
O
i=1
Lie(|f −1(i)|).
Define a group homomorphism P : Sm →CatLie(m, m), σ 7→Pσ in a way similar
to (2). By using the representation of an element of the operad Lie as a rooted
trivalent tree, the hom-space CatLie(m, n) is explicitly described as follows. Here,
a rooted trivalent tree means a trivalent tree with a root and labeled leaves, where
each trivalent vertex has a specified cyclic order of adjacent edges.
Lemma 2.3. The hom-space CatLie(m, n) is the k-vector space spanned by the set
n
(T1 ⊗· · · ⊗Tn)Pσ
Ti∈Lie(mi):a rooted trivalent tree (1≤i≤n),
m1+···+mn=m, σ∈Sm
o
modulo the equivariance relation
(T1ρ1 ⊗· · · ⊗Tnρn)Pσ = (T1 ⊗· · · ⊗Tn)P(ρ1⊗···⊗ρn)σ
(3)
for ρi ∈Smi (1 ≤i ≤n), and the AS and IHX relations described below:
=
−
= −
,
.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
7
The canonical morphism Lie →Assu of operads, which maps [, ] ∈CatLie(2, 1)
to µA −µAP(12) ∈CatAssu(2, 1), induces a morphism ι : CatLie →CatAssu of
k-linear PROPs. Then for each m ≥0, we have a right CatLie-module
CatAssu(ι(−), m) : Catop
Lie →k-Mod.
Here, by a right CatLie-module, we mean a k-linear functor from the opposite Catop
Lie
of the category CatLie to the category k-Mod.
2.3. Functors on the category grop. For a k-linear PROP C, by a left C-module,
we mean a k-linear functor from C to k-Mod. In what follows, all modules are left
modules unless otherwise mentioned. Let C-Mod denote the category of C-modules.
Let kgrop denote the k-linearization of the category grop. Note that k-linearization
induces an isomorphism
Funct(grop, k-Mod)
∼
=
−→kgrop-Mod,
where Funct(grop, k-Mod) denotes the functor category on grop.
Powell [18] defined a kgrop-module structure on the family of hom-spaces {CatAssu(l, m)}m
of the k-linear category CatAssu for each l ≥0, which is denoted by
∆CatAssu(l, −) : kgrop →k-Mod.
In what follows, we will describe the image of each of the generating morphisms of
kgrop which appeared in Lemma 2.1. For p1, · · · , pn ≥0, s = p1 +· · ·+pn, σ ∈Ss,
we have
∆CatAssu(l, µ[p1,··· ,pn]) = µ[p1,··· ,pn]
A
◦−: CatAssu(l, s) →CatAssu(l, n),
∆CatAssu(l, Pσ) = Pσ ◦−: CatAssu(l, s) →CatAssu(l, s).
For e1, · · · , es ∈{0, 1}, the morphism
∆CatAssu(l, Se1 ⊗· · · ⊗Ses) : ∆CatAssu(l, s) →∆CatAssu(l, s)
is defined for µ[p1,··· ,ps]
A
with p1, · · · , ps ≥0, p1 + · · · + ps = l by
∆CatAssu(l, Se1 ⊗· · · ⊗Ses)(µ[p1,··· ,ps]
A
) = (Se1 · µ[p1]
A ) ⊗· · · ⊗(Ses · µ[ps]
A ),
where
S · µ[p]
A = (−1)pµ[p]
A Preflection,
where “reflection” means the permutation
1
2
· · ·
p
p
p −1
· · ·
1
∈Sp. In a similar
way, for q1, · · · , qm ≥0, s = q1 + · · · + qm, the morphism
∆CatAssu(l, ∆[q1,··· ,qm]) = ∆CatAssu(l, m) →∆CatAssu(l, s)
is defined for µ[p1,··· ,pm]
A
with p1, · · · , pm ≥0, p1 + · · · + pm = l by
∆CatAssu(l, ∆[q1,··· ,qm])(µ[p1,··· ,pm]
A
) = (∆[q1] · µ[p1]
A ) ⊗· · · ⊗(∆[qm] · µ[pm]
A
),
where
∆[0] · µ[p]
A = ε · µ[p]
A =
(
0
(p ≥1),
id0
(p = 0)
8
MAI KATADA
and for q ≥1,
∆[q] · µ[p]
A =
X
i1+···+iq=p,
i1,··· ,iq≥0
X
σ∈Sh(i1,··· ,iq)
µ[i1,··· ,iq]
A
Pσ−1,
where, Sh(i1, · · · , iq) ⊂Sp denotes the set of (i1, · · · , iq)-shuffles. For example, we
have
∆CatAssu(0, ∆)(ηA) = ηA ⊗ηA,
∆CatAssu(1, ∆)(id1) = id1 ⊗ηA + ηA ⊗id1,
∆CatAssu(2, ∆)(µA) = µA ⊗ηA + id2 +P(12) + ηA ⊗µA.
Now we have a (kgrop, CatLie)-bimodule ∆CatAssu = ∆CatAssu(−, −). Here, by
a (kgrop, CatLie)-bimodule, we mean a k-linear functor Catop
Lie ⊗kgrop →k-Mod.
Powell [18] proved that the usual hom-tensor adjunction yields an equivalence of
categories.
Proposition 2.4 ([18, Theorem 9.19]). We have the hom-tensor adjunction
∆CatAssu ⊗CatLie −: CatLie-Mod
kgrop-Mod
⊤
: Homkgrop-Mod(∆CatAssu, −).
Moreover, the adjunction restricts to an equivalence of categories
CatLie-Mod ≃kgrop-Modω.
Here, kgrop-Modω denotes the full subcategory of kgrop-Mod whose objects
correspond to analytic functors on grop, where a functor is analytic if it is the
colimit of its polynomial subfunctors. See [18] for details.
3. The category AL of extended Jacobi diagrams in handlebodies
In this section, we recall the definition and some important properties of the
k-linear category A of Jacobi diagrams in handlebodies which was introduced by
Habiro and Massuyeau in [7]. We also recall the k-linear PROP CatLieC for Casimir
Lie algebras introduced by Hinich and Vaintrob in [9]. Then we recall the k-linear
category AL of extended Jacobi diagrams in handlebodies introduced in [12] and
study its hom-spaces.
3.1. The category A of Jacobi diagrams in handlebodies. Habiro and Mas-
suyeau introduced the category A of Jacobi diagrams in handlebodies in [7] in order
to extend the Kontsevich invariant for bottom tangles to a functor. The objects of
A are non-negative integers. The hom-space A(m, n) is the k-vector space spanned
by “(m, n)-Jacobi diagrams in handlebodies”, which are explained below.
For n ≥0, let Xn =
· · ·
be the oriented 1-manifold consisting of
n component oriented arcs. A Jacobi diagram D on Xn is a uni-trivalent graph
such that each trivalent vertex is oriented (i.e., each trivalent vertex has a fixed
cyclic order of the three edges around it), univalent vertices are attached to Xn
(i.e., the set of univalent vertices is embedded into the interior of Xn), and each
connected component has at least one univalent vertex.
Two Jacobi diagrams
D and D′ on Xn are identified if there is a homeomorphism between them whose
restriction to Xn is isotopic to the identity map and which respects the orientations
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
9
on trivalent vertices. The STU relation described below is imposed on the k-vector
space spanned by Jacobi diagrams on Xn:
=
−
The STU relation implies the AS and IHX relations (see [2, Theorem 6]). The
degree of a Jacobi diagram is defined to be half the number of vertices. Note that
the STU, AS and IHX relations respect the degree of Jacobi diagrams.
For m ≥0, let Um be the handlebody of genus m that is obtained from the cube
I3, where I = [−1, 1] ⊂R, by attaching m handles on the top square I2 × {1}
of the cube along the upper line I × {0} × {1}. An (m, n)-Jacobi diagram is a
Jacobi diagram on Xn mapped into the handlebody Um in such a way that the
endpoints of Xn are arranged on the bottom line I × {0} × {−1} and that the
i-th arc component of Xn goes from the 2i-th point to the (2i −1)-st point, where
we count the endpoints from left to right. See [7] for details. The following is an
example of a (2, 3)-Jacobi diagram:
Two (m, n)-Jacobi diagrams are identified if they are homotopic in Um relative to
the endpoints of Xn. Then the hom-space A(m, n) is defined by the k-vector space
spanned by (m, n)-Jacobi diagrams modulo the STU relation. The composition of
morphisms in A is somewhat complicated. To put it simply, for an (m, n)-Jacobi
diagram D and (n, p)-Jacobi diagram D′, the composition D′ ◦D is obtained by
stacking a suitable cabling of D on the top square of D′.
Each hom-space A(m, n) is graded by the degree of Jacobi diagrams and we have
A(m, n) ∼=
M
d≥0
Ad(m, n),
where Ad(m, n) denotes the degree d part of A(m, n). Moreover, the category A has
a structure of an N-graded k-linear symmetric strict monoidal category. The tensor
product on objects is addition, the monoidal unit is 0, and the tensor product
on morphisms is juxtaposition followed by horizontal rescaling and relabeling of
10
MAI KATADA
indices. The category A has a Casimir Hopf algebra (H = 1, µ, η, ∆, ε, S, ˜c), where
µ =
, η =
, ∆=
, ε =
,
S =
, ˜c =
.
Here, a Casimir Hopf algebra (H, ˜c) in a k-linear symmetric monoidal category is
a cocommutative Hopf algebra H equipped with a Casimir 2-tensor ˜c, which is a
morphism ˜c : I →H⊗2 satisfying
(∆⊗idH)˜c = (idH ⊗η ⊗idH)˜c + η ⊗˜c,
PH,H˜c = ˜c,
where PH,H denotes the symmetry, and
(ad ⊗ad)(idH ⊗PH,H ⊗idH)(∆⊗˜c) = ˜cε,
where
ad = µ[3](idH⊗2 ⊗S)(idH ⊗PH,H)(∆⊗idH).
Then the category A is characterized by the Casimir Hopf algebra in the following
sense.
Lemma 3.1 ([7, Theorem 5.11]). The k-linear PROP A is freely generated by the
Casimir Hopf algebra (H = 1, µ, η, ∆, ε, S, ˜c).
The degree 0 part A0 of the category A forms a subcategory of A. Since the
PROP grop is freely generated by a cocommutative Hopf algebra, there exists a
unique symmetric monoidal functor from grop to A such that the cocommutative
Hopf algebra of grop is mapped to the cocommutative Hopf algebra of A. This
functor induces an isomorphism kgrop
∼
=
−→A0 of k-linear PROPs. We will identify
the morphisms of cocommutative Hopf algebras in kgrop and those in A via this
functor.
Since the degree of the generating morphisms µ, η, ∆, ε, S in A is 0, and since the
degree of ˜c is 1, the degree of a morphism of A is equal to the degree by the number
of copies of the Casimir 2-tensor ˜c. A morphism of Ad(m, n) can be factorized as
follows.
Lemma 3.2 ([7, Lemma 5.16]). Any element of Ad(m, n) is a linear combination
of morphisms of the form
µ[p1,··· ,pn]Pσ(Se1 ⊗· · · ⊗Ses ⊗id2d)(∆[q1,··· ,qm] ⊗˜c⊗d),
where s, q1, · · · , qm, p1, · · · , pn ≥0, s = p1 + · · · + pn −2d = q1 + · · · + qm, σ ∈
Ss+2d, e1, · · · , es ∈{0, 1}.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
11
3.2. The k-linear PROP CatLieC for Casimir Lie algebras. Here, we recall the
k-linear PROP for Casimir Lie algebras introduced by Hinich and Vaintrob in [9].
A Casimir Lie algebra (L, c) in a symmetric monoidal category is a Lie algebra L
equipped with a symmetric invariant 2-tensor, which is called the Casimir element,
which is a morphism c : I →L⊗2 satisfying
PL,Lc = c,
([, ] ⊗idL)(idL ⊗c) = (idL ⊗[, ])(c ⊗idL).
(4)
Consider the operad Lie as a cyclic operad. Let τ = (012 · · · n) ∈S1+n. The
symmetric group S1+n is generated by the cyclic permutation τ and the subgroup
Sn ⊂S1+n that stabilizes 0. The right action of τ on an element of Lie is given
by changing the first input into the output and the output into the last input. For
example, we have
· τ =
,
· τ =
,
· τ =
.
Let CatLieC denote the k-linear PROP for Casimir Lie algebras, which is a k-
linear PROP generated by the k-linear PROP CatLie and a symmetric element
c ∈CatLieC(0, 2) satisfying
(id1 ⊗f)(c ⊗idn−1) = (fτ ⊗id1)(idn−1 ⊗c),
for f ∈Lie(n) (see [9, Section 3.1.6] for details). In other words, CatLieC is the
k-linear symmetric monoidal category whose objects are generated by L and whose
morphisms are generated by [, ] : L⊗2 →L and c : I →L⊗2 with relations generated
by the AS, IHX-relations and the relations (4).
The category CatLieC has a grading given by the number of copies of the Casimir
element c, that is, we have
CatLieC(m, n) =
M
d≥0
CatLieC(m, n)d,
where CatLieC(m, n)d is spanned by morphisms with d copies of c. Then the degree 0
part CatLieC(m, n)0 forms a subcategory (CatLieC)0, which is isomorphic to CatLie.
It is easy to see that any element of CatLieC(m, n)d can be decomposed into a
linear combination of morphisms of the form
f(idL⊗m ⊗c⊗d),
where f ∈CatLie(m + 2d, n).
This fact can be described by using the upward
Brauer category uB [22, 15], which is a k-linear category whose objects are non-
negative integers and whose hom-space uB(m, n) is spanned by partitionings of
{1+, · · · , m+, 1−, · · · , n−} into (m + n)/2 unordered pairs in such a way that the
partitioning includes no pairs of two positive elements.
The assignment of the
Casimir element c to each unordered pair of two negative elements induces an
inclusion uB ,→CatLieC, which induces a k-linear full functor
CatLie ⊗S uB ↠CatLieC,
(5)
where S is the k-linear category whose objects are non-negative integers and whose
hom-space is S(m, n) = kSm if m = n and S(m, n) = 0 otherwise, and where the
structure of a k-linear category on CatLie ⊗S uB is the canonical one induced by the
structures of CatLie and uB.
12
MAI KATADA
Remark 3.3. The hom-space CatLieC(0, n) can be identified with the space of open
Jacobi diagrams whose univalent vertices are colored by {1, · · · , n}. See [17, Section
13] for the construction of CatLieC by using open Jacobi diagrams. The surjective
morphism (5) appears in [17, Example 13.15]. It is well known that open Jacobi
diagrams with one univalent vertex vanish by the AS and IHX relations, which
implies that we have
CatLieC(0, 1) = 0.
(6)
By extending the description of the hom-space CatLie(m, n) in Lemma 2.3, we
will explicitly describe the k-vector space CatLieC(0, n)d. Before stating the fol-
lowing lemma, we will introduce some notation.
For σ ∈S2d and an n-tuple
(T1, · · · , Tn) of rooted trivalent trees Ti ∈Lie(mi) with 1 ≤i ≤n, m1 +· · ·+mn =
2d, which satisfy
σ(1) = m1 + · · · + mi−1 + 1,
σ(3) = m1 + · · · + mi−1 + 2,
σ(2) = m1 + · · · + mj−1 + 1,
Ti = t([, ] ⊗idmi−2)
(∗)
for some 1 ≤i < j ≤n and for some rooted trivalent tree t ∈Lie(mi −1), set
Ti = t,
f
Tj = ([, ] ⊗idmj−1)Tj,
eσ = σ′σ,
where σ′ ∈S2d is the order-preserving permutation except for
σ′(σ(3)) = σ(2).
For σ ∈S2d and an n-tuple (T1, · · · , Tn) which satisfy
σ(1) = m1 + · · · + mi−1 + 1,
σ(3) = m1 + · · · + mi−1 + 2,
σ(2) = m1 + · · · + mi−1 + 3,
Ti = t([, ] ⊗idmi−2)
(∗∗)
for some 1 ≤i ≤n, we set
g
(Ti) = (id1 ⊗[, ] ⊗idmi−3)t
and eσ in the same way as above.
Example 3.4. For σ = (1256) ∈S6 and a triple (T1 = id1, T2 =
, T3 =
)
of rooted trivalent trees, we have i = 2, j = 3, T2 =
, f
T3 =
, eσ = (124356).
Example 3.5. For σ = (1245) ∈S6 and a triple (T1 = id1, T2 =
, T3 =
)
of rooted trivalent trees, we have i = j = 2, g
(T2) =
, eσ = (12345).
Lemma 3.6. The degree d part CatLieC(0, n)d is the k-vector space spanned by the
set
n
(T1 ⊗· · · ⊗Tn)Pσc⊗d Ti∈Lie(mi):a rooted trivalent tree (1≤i≤n),
m1+···+mn=2d, σ∈Sm
o
(7)
modulo the equivariance relation (3), the AS and IHX relations, the relation between
the Casimir element c and the symmetry
(T1 ⊗· · · ⊗Tn)Pσρc⊗d = (T1 ⊗· · · ⊗Tn)Pσc⊗d
(8)
for ρ ∈S2≀Sn, and the relations between the Casimir element c and the Lie bracket
(T1 ⊗· · · ⊗Tn)Pσc⊗d = −(T1 ⊗· · · ⊗Ti ⊗· · · ⊗f
Tj ⊗· · · ⊗Tn)Peσc⊗d
(9)
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
13
for (T1 ⊗· · · ⊗Tn)Pσc⊗d satisfying the condition (∗) and
(T1 ⊗· · · ⊗Tn)Pσc⊗d = −(T1 ⊗· · · ⊗g
(Ti) ⊗· · · ⊗Tn)Peσc⊗d
(10)
for (T1 ⊗· · · ⊗Tn)Pσc⊗d satisfying the condition (∗∗).
Proof. The k-vector space uB(0, 2d) has a basis {¯σcd | ¯σ ∈S2d/(S2 ≀Sd)}, where
cd = {{1−, 2−}, · · · , {(2d −1)−, 2d−}}.
Therefore, by Lemma 2.3, the k-vector
space
(CatLie ⊗S uB)(0, n)d = CatLie(2d, n) ⊗k[S2d] uB(0, 2d)
is spanned by the set
n
(T1 ⊗· · · ⊗Tn)Pσ ⊗cd
Ti∈Lie(mi):a rooted trivalent tree (1≤i≤n),
m1+···+mn=2d, σ∈Sm
o
with relations generated by the equivariance relation (3), the AS and IHX relations,
and the relation
(T1 ⊗· · · ⊗Tn)Pσρ ⊗cd = (T1 ⊗· · · ⊗Tn)Pσ ⊗cd
for ρ ∈S2 ≀Sd. It follows from the surjective morphism (5) that CatLieC(0, n)d is
spanned by the set (7) with relations generated by the equivariance relation (3),
the AS and IHX relations, the relation (8), and the relations between the Casimir
element c and the Lie bracket generated by (4). Therefore, it suffices to prove that
the relation (4) reduces to the relations (9) and (10) by the other relations.
By the relations (3) and (8), and the AS and IHX relations, the relation (4)
reduces to the relation for (T1 ⊗· · · ⊗Tn)Pσc⊗d satisfying the condition
σ(1) = m1 + · · · + mi−1 + 1,
σ(3) = m1 + · · · + mi−1 + 2,
σ(2) = m1 + · · · + mj−1 + 1,
Ti = t(([, ](id1 ⊗t′)) ⊗idmi−m′
i−1)
for some 1 ≤i < j ≤n, t ∈Lie(mi −m′
i) and t′ ∈Lie(m′
i), m′
i ≤mi −1, where
the case of t′ = id1 yields the relation (9), and the relation for (T1 ⊗· · ·⊗Tn)Pσc⊗d
satisfying the condition
σ(1) = m1 + · · · + mi−1 + 1,
σ(3) = m1 + · · · + mi−1 + 2,
σ(2) = m1 + · · · + mi−1 + m′
i + 2,
Ti = t(([, ](id1 ⊗t′)) ⊗idmi−m′
i−1)
for some 1 ≤i ≤n, t ∈Lie(mi −m′
i) and t′ ∈Lie(m′
i), m′
i ≤mi −1, where the
case of t′ = id1 yields the relation (10). Since we have (6), these relations reduce
to the relations (9) and (10) by induction on m′
i. This completes the proof.
□
3.3. The category AL of extended Jacobi diagrams in handlebodies. The
author introduced the k-linear category AL of extended Jacobi diagrams in handle-
bodies in [12], which includes both A and CatLieC as full subcategories. We briefly
recall the definition of AL. We refer the readers to [12, Sections 4.2, 4.3, Appendix
A] for details.
The set of objects of the category AL is the free monoid generated by two objects
H and L, where H is regarded as a Hopf algebra and L is regarded as a Lie algebra.
Recall that the hom-space A(m, n) of the category A is spanned by (m, n)-Jacobi
diagrams, where Jacobi diagrams are attached to the oriented 1-manifold Xn, that
is, the set of univalent vertices is embedded into the interior of Xn. The morphisms
of AL extend the morphisms of A in such a way that univalent vertices of a Jacobi
diagram are allowed to be attached to the upper line I ×{0}×{1} in the top square
and the bottom line I×{0}×{−1} in the bottom square of the handlebody, and that
14
MAI KATADA
each connected component of a Jacobi diagram has at least one univalent vertex
which is attached to the bottom line or the oriented 1-manifold. This means that
we do not allow connected components of Jacobi diagrams to be separated from
the bottom line and the oriented 1-manifold. For w, w′ ∈Ob(AL), the hom-space
AL(w, w′) is the k-vector space spanned by “(w, w′)-diagrams” modulo the STU,
AS and IHX relations. Here, a (w, w′)-diagram is a Jacobi diagram partly attached
to Xm′ mapped in Um and partly attached to the upper line or the bottom line of Um
so that the object in AL corresponding to the upper line (resp. the bottom line) is
w (resp. w′) when we count from left to right, where m = Pr
i=1 mi, m′ = Ps
i=1 m′
i
for w = H⊗m1⊗L⊗n1⊗· · ·⊗H⊗mr⊗L⊗nr, w′ = H⊗m′
1⊗L⊗n′
1⊗· · ·⊗H⊗m′
s⊗L⊗n′
s ∈
Ob(AL). The following is an example of an (H⊗2⊗L⊗H⊗L⊗H, L⊗H⊗L⊗2⊗H)-
diagram:
The composition of morphisms in the category AL is defined in a way similar to
that in the category A, that is, the composition D′ ◦D of two morphisms D′ and
D is defined by stacking a suitable cabling of D on the top square of D′.
The category AL includes the categories A and CatLieC as full subcategories,
that is, A is the full subcategory of AL whose set of objects is the monoid generated
by H, and CatLieC is the full subcategory of AL whose set of objects is the monoid
generated by L.
The degree of a (w, w′)-diagram is defined by
1
2#{vertices} −#{univalent vertices attached to the upper line},
which induces an N-grading on the category AL. For ω, ω′ ∈Ob(AL), let AL
d (ω, ω′)
denote the subspace of AL(ω, ω′) spanned by morphisms of degree d. We have
AL
d (H⊗m, H⊗n) = Ad(m, n),
AL
d (L⊗m, L⊗n) = CatLieC(m, n)d
(11)
for d, m, n ≥0.
We can naturally generalize the N-graded k-linear symmetric monoidal structure
of the category A to the category AL. The category AL has a cocommutative
Hopf algebra, which is induced by the Hopf algebra (H, µ, η, ∆, ε, S) in A, and a
Lie algebra (L, [, ] : L⊗2 →L) with a symmetric invariant 2-tensor c : I →L⊗2,
and two morphisms i : L →H and adL : H ⊗L →L described as follows:
[, ] =
, c =
, i =
, adL =
.
The morphism c has degree 1 and the other morphisms have degree 0.
In [12,
Proposition A.7], the author proved that the k-linear symmetric monoidal category
AL is generated by the above morphisms. Moreover, we have the following relations
involving the morphism i:
(AL-1) i [·, ·] = −µ(i ⊗i) + µPH,H(i ⊗i),
(AL-2) ∆i = i ⊗η + η ⊗i,
(AL-3) ϵi = 0,
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
15
(AL-4) Si = −i.
The Casimir Hopf algebra in AL that characterizes the k-linear PROP A is
(H, µ, η, ∆, ε, S, ˜c = i⊗2c), and the Casimir Lie algebra that characterizes the k-
linear PROP CatLieC is (L, [, ], c).
3.4. The degree 0 part AL
0 . The degree 0 part AL
0 of the category AL forms
a wide subcategory of AL. Since kgrop is regarded as the degree 0 part A0 of
the category A, which is a full subcategory of AL, the category kgrop is a full
subcategory of AL
0 whose objects are generated by H. Similarly, CatLie is a full
subcategory of AL
0 whose objects are generated by L.
Here, we explicitly describe the hom-space AL
0 (L⊗m, H⊗n). Let C(m, n) denote
the set of Jacobi diagrams (in a cube) with m chords such that one end of each
chord is attached to the upper line and the other end of it is attached to the oriented
1-manifold Xn. For example, the set C(2, 2) consists of the following six elements
,
,
,
,
,
.
Lemma 3.7. The set C(m, n) is a basis for the hom-space AL
0 (L⊗m, H⊗n).
Proof. Let J(m, n) denote the set of Jacobi diagrams on Xn in a cube with 2m
vertices such that m univalent vertices are attached to the upper line. Note that
each connected component of a Jacobi diagram in J(m, n) is a rooted trivalent tree
whose leaves are attached to the upper line and whose root is attached to Xn.
By the definition of the degree 0 part of the hom-space of the category AL,
AL
0 (L⊗m, H⊗n) is the k-vector space spanned by the set J(m, n) modulo the STU
relation. By the STU relation, any Jacobi diagram in J(m, n) can be written as
a linear combination of chord diagrams in C(m, n). Therefore, AL
0 (L⊗m, H⊗n) is
spanned by the set C(m, n).
Moreover, for each Jacobi diagram D in J(m, n),
the linear combination of chord diagrams in C(m, n) associated to D is uniquely
determined. Therefore, C(m, n) is a basis for AL
0 (L⊗m, H⊗n).
□
Lemma 3.8. We have
C(m, n) = {µ[p1,··· ,pn]Pσi⊗m | p1, · · · , pn ≥0, p1 + · · · + pn = m, σ ∈Sm}.
Proof. For an element C of C(m, n), let E(C) be the set of m chords of C. Define
two maps e and e′ from E(C) to {1, · · · , m} as follows. The map e assigns e(c) ∈
{1, · · · , m} to a chord c ∈E(C), where an endpoint of c is attached to the e(c)-th
point on the upper line when we count the endpoints from left to right. In a similar
way, the map e′ assigns e′(c) to c, where an endpoint of c is attached to the e′(c)-th
point on Xn when we count the endpoints on Xn from left to right on the left-most
arc component of Xn and from left to right on the second arc and so on. Then
we obtain a permutation σC ∈Sm which assigns e(c) to e′(c). For j ∈{1, · · · , n},
let pj(C) be the number of univalent vertices of C that are attached to the j-th
component of Xn. Then we have a bijection
Φ : C(m, n)
∼
=
−→Sm × {(p1, · · · , pn) | p1, · · · , pn ≥0, p1 + · · · + pn = m}
defined by Φ(C) = (σC, (p1(C), · · · , pn(C))). In other words, we have
C(m, n) = {µ[p1,··· ,pn]Pσi⊗m | p1, · · · , pn ≥0, p1 + · · · + pn = m, σ ∈Sm}.
16
MAI KATADA
□
3.5. The hom-space AL(L⊗m, H⊗n). Here, we explicitly describe the hom-space
AL(L⊗m, H⊗n) and more generally AL(w, H⊗n) for w ∈Ob(AL).
Lemma 3.9. For any d, m, n ≥0, we have a k-linear isomorphism
AL
d (L⊗m, H⊗n) ∼= AL
0 (L⊗−, H⊗n) ⊗CatLie CatLieC(m, −)d.
(12)
Proof. The composition of morphisms in AL induces a k-linear isomorphism
AL
0 (−, H⊗n) ⊗AL
0 AL
d (L⊗m, −)
◦−→
∼
= AL
d (L⊗m, H⊗n),
which induces an injective k-linear map
AL
0 (L⊗−, H⊗n) ⊗CatLie AL
d (L⊗m, L⊗−)
◦−→AL
d (L⊗m, H⊗n).
It is easy to see that any element of AL
d (L⊗m, H⊗n) is a k-linear sum of morphisms
of the form
f(idL⊗m ⊗c⊗d)
with f ∈AL
0 (L⊗m+2d, H⊗n).
Therefore, the composition map ◦is surjective.
Hence, the identification (11) of AL
d (L⊗m, L⊗−) and CatLieC(m, −)d completes the
proof.
□
In a similar way, we obtain the following factorization of AL
d (L⊗m, H⊗n).
Lemma 3.10. For any d, m, n ≥0, we have k-linear isomorphisms
AL
d (L⊗m, H⊗n) ∼= Ad(−, n) ⊗kgrop AL
0 (L⊗m, H⊗−).
(13)
We have the following factorization of an element of AL
d (L⊗m, H⊗n).
Lemma 3.11. Any element of AL
d (L⊗m, H⊗n) is a linear combination of mor-
phisms of the form
µ[p1,··· ,pn]Pσi⊗(m+2d)(idL⊗m ⊗c⊗d)
with p1, · · · , pn ≥0, p1 + · · · + pn = m + 2d, σ ∈Sm+2d.
Proof. By Lemma 3.9, we have
AL
d (L⊗m, H⊗n) ∼= AL
0 (L⊗−, H⊗n) ⊗CatLie CatLieC(m, −)d.
By Lemmas 3.7 and 3.8, the set C(m′, n) = {µ[p1,··· ,pn]Pσi⊗m′ | p1, · · · , pn ≥
0, p1 +· · ·+pn = m′, σ ∈Sm′} is a basis for AL
0 (L⊗m′, H⊗n). Since any element of
CatLieC(m, m′)d is a linear combination of f(idL⊗m ⊗c⊗d) for some f ∈CatLie(m+
2d, m′), the statement follows.
□
Remark 3.12. Note that the factorization of an element of AL
d (L⊗m, H⊗n) given
in Lemma 3.11 is not unique. Indeed, we have the relation corresponding to the
“4T-relation”. For example, we have
µ[2,1]P(12)i⊗3(idL ⊗c) −µ[2,1]i⊗3(idL ⊗c)
= µ[1,2]P(12)i⊗3(idL ⊗c) −µ[1,2]P(132)i⊗3(idL ⊗c).
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
17
More generally, we have the following factorization of an element of AL
d (w, H⊗m′)
for any w ∈Ob(AL). For w = H⊗m1 ⊗L⊗n1 ⊗· · · ⊗H⊗mr ⊗L⊗nr, set m =
Pr
j=1 mj, n = Pr
j=1 nj. We have an isomorphism Pw : H⊗m ⊗L⊗n
∼
=
−→w induced
by the symmetry of AL, which induces a k-linear isomorphism
−◦Pw : AL
d (w, H⊗m′)
∼
=
−→AL
d (H⊗m ⊗L⊗n, H⊗m′).
Lemma 3.13. Any element of AL
d (H⊗m ⊗L⊗n, H⊗m′) is a linear combination of
morphisms of the form
µ[p1,··· ,pm′]Pσ(((Se1 ⊗· · · ⊗Ses)∆[q1,··· ,qm]) ⊗(i⊗n+2d(idL⊗n ⊗c⊗d)))
with s, p1, · · · , pm′, q1, · · · , qm ≥0, s = p1 + · · · + pm′ −(n + 2d) = q1 + · · · + qm,
σ ∈Ss+n+2d, e1, · · · , es ∈{0, 1}.
Proof. By the STU relation, any element of AL
d (H⊗m ⊗L⊗n, H⊗m′) is a linear
combination of chord diagrams with d chords attached to only Xm′ and with n
chords attached to both Xm′ and the upper line. By pulling the univalent vertex
of each of the n chords that is attached to Xm′ toward the upper right side of Um
along the chord so that the chord does not go through any handles of Um, we can
transform the chord diagrams into the following form
f(idH⊗m ⊗i⊗n)
with f ∈Ad(m + n, m′). By Lemma 3.2, the morphism f is a linear combination
of
µ[r1,··· ,rm′]Pρ(Se1 ⊗· · · ⊗Set ⊗id2d)(∆[q1,··· ,qm+n] ⊗(i⊗2c)⊗d)
where t, r1, · · · , rm′, q1, · · · , qm+n ≥0, t = r1 + · · · + rm′ −2d = q1 + · · · + qm+n,
ρ ∈St+2d, e1, · · · , et ∈{0, 1}. We have
µ[r1,··· ,rm′]Pρ(Se1 ⊗· · · ⊗Set ⊗id2d)(∆[q1,··· ,qm+n] ⊗(i⊗2c)⊗d)(idH⊗m ⊗i⊗n)
= µ[r1,··· ,rm′]Pρ(((Se1 ⊗· · · ⊗Ses)∆[q1,··· ,qm]) ⊗((Ses+1 ⊗· · · ⊗Set)∆[qm+1,··· ,qm+n]i⊗n) ⊗i⊗2dc⊗d).
By the relations (AL-2), (AL-3) and (AL-4), (Ses+1 ⊗· · ·⊗Set)∆[qm+1,··· ,qm+n]i⊗n is
a linear combination of Pτ(η⊗q⊗i⊗n) with τ ∈Sq+n if q = qm+1+· · ·+qm+n−n ≥0
and otherwise vanishes. Therefore, any element of AL
d (H⊗m ⊗L⊗n, H⊗m′) is a
linear combination of
µ[p1,··· ,pm′]Pσ(((Se1 ⊗· · · ⊗Ses)∆[q1,··· ,qm]) ⊗(i⊗n+2d(idL⊗n ⊗c⊗d))),
which completes the proof.
□
If w′ ∈Ob(AL) \ Ob(A), then we cannot factorize morphisms of AL(w, w′) into
linear combinations of chord diagrams as in the above cases.
4. Functors on the category of Jacobi diagrams in handlebodies
In this section, we study the adjunction given by Powell, which we recalled in
Proposition 2.4, by using the bimodule induced by the hom-spaces of the category
AL
0 . Moreover, by generalizing this point of view, we obtain an adjunction between
the category of A-modules and the category of CatLieC-modules.
18
MAI KATADA
4.1. The adjunction given by Powell. We reconsider the adjunction in Propo-
sition 2.4 by using the hom-spaces of the category AL
0 .
We have a (kgrop, CatLie)-bimodule AL
0 (L⊗−, H⊗−) induced by the composition
in the category AL
0 , that is, the kgrop-module structure is defined by the k-linear
map
AL
0 (H⊗n, H⊗n′) ⊗AL
0 (L⊗−, H⊗n)
◦−→AL
0 (L⊗−, H⊗n′),
where we identify kgrop(n, n′) with AL
0 (H⊗n, H⊗n′), and the right CatLie-module
structure is defined by the k-linear map
AL
0 (L⊗m, H⊗−) ⊗AL
0 (L⊗m′, L⊗m)
◦−→AL
0 (L⊗m′, H⊗−),
where we identify CatLie(m′, m) with AL
0 (L⊗m′, L⊗m).
The hom-space AL
0 (L⊗m, H⊗n) can be identified with CatAssu(m, n) as fol-
lows. The k-linear PROP CatAssu is embedded into the k-linear PROP AL
0 via
the unique morphism of k-linear PROPs that maps (A, µA, ηA) to (H, µ, η) since
CatAssu is freely generated by the unital associative algebra (A, µA, ηA) and the full
subcategory kgrop of AL
0 is freely generated by the cocommutative Hopf algebra
(H, µ, η, ∆, ε, S). We have a CatAssu-module map
i∗
m : CatAssu(m, −) ,→AL
0 (H⊗m, H⊗−)
−◦i⊗m
−−−−→AL
0 (L⊗m, H⊗−)
defined by the composition of the embedding CatAssu(m, n) ,→AL
0 (H⊗m, H⊗n)
and the composition map with the morphism i⊗m.
Lemma 4.1. For m ≥0, i∗
m is an isomorphism of CatAssu-modules.
Proof. By Lemma 2.2, the set {µ[p1,··· ,pn]
A
Pσ | p1 + · · · + pn = m, σ ∈Sm} is
a basis for CatAssu(m, n). By Lemmas 3.7 and 3.8, AL
0 (L⊗m, H⊗n) has a basis
{µ[p1,··· ,pn]Pσi⊗m | p1 + · · · + pn = m, σ ∈Sm}.
Therefore, the k-linear map
(i∗
m)n : CatAssu(m, n) →AL
0 (L⊗m, H⊗n) is an isomorphism for any n ≥0 and thus
the morphism i∗
m is an isomorphism of CatAssu-modules.
□
Remark 4.2. For m ≥0, we have a kgrop-module map
−◦i⊗m : kgrop(m, −) ∼= AL
0 (H⊗m, H⊗−) →AL
0 (L⊗m, H⊗−).
For m = 0, the kgrop-module map −◦i⊗0 = idk is an isomorphism. For m ≥1,
however, the kgrop-module map −◦i⊗m is epic but not monic due to the relation
(AL-3).
The (kgrop, CatLie)-bimodule structure on AL
0 (L⊗−, H⊗−) that we defined above
induces a (kgrop, CatLie)-bimodule structure on CatAssu via the isomorphisms (i∗
m)n :
CatAssu(m, n)
∼
=
−→AL
0 (L⊗m, H⊗n). Then the (kgrop, CatLie)-bimodule CatAssu is
isomorphic to the (kgrop, CatLie)-bimodule ∆CatAssu defined by Powell [18], which
we recalled in Section 2. Indeed, the k-linear isomorphisms
(−1)m : CatAssu(m, n) →∆CatAssu(m, n)
form a natural isomorphism between the (kgrop, CatLie)-bimodules CatAssu and
∆CatAssu.
We can rewrite the statement of Proposition 2.4 as follows.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
19
Proposition 4.3 (cf. Proposition 2.4). We have the hom-tensor adjunction
AL
0 (L⊗−, H⊗−) ⊗CatLie −: CatLie-Mod
kgrop-Mod
⊤
: Homkgrop-Mod(AL
0 (L⊗−, H⊗−), −).
Moreover, the adjunction restricts to an equivalence of categories
CatLie-Mod ≃kgrop-Modω.
4.2. An adjunction between the categories CatLieC-Mod and A-Mod. We
generalize the work of Powell [18] to A-modules by generalizing the perspective of
the last subsection.
We have an (A, CatLieC)-bimodule AL(L⊗−, H⊗−) induced by the composition
in the category AL, that is, the A-module structure is defined by the k-linear map
AL(H⊗n, H⊗n′) ⊗AL(L⊗−, H⊗n)
◦−→AL(L⊗−, H⊗n′),
where we identify A(n, n′) with AL(H⊗n, H⊗n′), and the right CatLieC-module
structure is defined by the k-linear map
AL(L⊗m, H⊗−) ⊗AL(L⊗m′, L⊗m)
◦−→AL(L⊗m′, H⊗−),
where we identify CatLieC(m′, m) with AL(L⊗m′, L⊗m).
Then the general hom-tensor adjunction gives the following adjunction, which
generalizes the adjunction given by Powell.
Theorem 4.4. We have an adjunction
AL(L⊗−, H⊗−) ⊗CatLieC −: CatLieC-Mod
A-Mod
⊤
: HomA-Mod(AL(L⊗−, H⊗−), −).
We have an isomorphism of (kgrop, CatLieC)-bimodules
AL(L⊗−, H⊗−) ∼= AL
0 (L⊗−, H⊗−) ⊗CatLie CatLieC
(14)
induced by the k-linear isomorphism (12), and an isomorphism of (A, CatLie)-
bimodules
AL(L⊗−, H⊗−) ∼= A ⊗kgrop AL
0 (L⊗−, H⊗−)
(15)
induced by the k-linear isomorphism (13).
Remark 4.5. Since by Lemma 3.11, any element of AL
d (L⊗m, H⊗n) is a linear
combination of morphisms of the form
µ[p1,··· ,pn]Pσi⊗(m+2d)(idL⊗m ⊗c⊗d) = µ[p1,··· ,pn]Pσ(idH⊗m ⊗˜c⊗d)i⊗m,
where ˜c = i⊗2c, the composition map (i∗
m)n with the morphism i⊗m
(i∗
m)n : A(m, n)
∼
=
−→AL(H⊗m, H⊗n)
−◦i⊗m
−−−−→AL(L⊗m, H⊗n)
is surjective. However, the map (i∗
m)n is not injective for m ≥1 due to the relation
(AL-3). Remark 4.2 follows from this remark as the case of degree 0.
20
MAI KATADA
4.3. Adjunctions between CatLie-Mod, CatLieC-Mod, kgrop-Mod and A-Mod.
Here, we study adjunctions between the categories CatLie-Mod, CatLieC-Mod, kgrop-Mod
and A-Mod. We will write AL for the (A, CatLieC)-bimodule AL(L⊗−, H⊗−) and
AL
0 for the (kgrop, CatLie)-bimodule AL
0 (L⊗−, H⊗−) for simplicity.
The inclusion functor
kgrop ,→A
induces a forgetful functor
U : A-Mod →kgrop-Mod.
(16)
The functor U is exact and has a left adjoint functor
F = A ⊗kgrop −: kgrop-Mod →A-Mod.
(17)
Note that from (15), we have an isomorphism of A-modules
AL ∼= A ⊗kgrop AL
0 = F(AL
0 ).
(18)
In a similar way, the inclusion functor
CatLie ,→CatLieC
induces a forgetful functor
U : CatLieC-Mod →CatLie-Mod,
which is exact and has a left adjoint functor
F = CatLieC ⊗CatLie −: CatLie-Mod →CatLieC-Mod.
Then the relations between the forgetful functors and the adjoint functors are as
follows.
Lemma 4.6. We have the following commutative diagram (up to isomorphisms)
CatLieC-Mod
AL⊗CatLieC −
/
U
A-Mod
U
CatLie-Mod
AL
0 ⊗CatLie−
/ kgrop-Mod.
⟲
Proof. It follows from the isomorphism (14) of (kgrop, CatLieC)-bimodules that for
a CatLieC-module K, we have isomorphisms of kgrop-modules
U(AL ⊗CatLieC K) ∼= (AL
0 ⊗CatLie CatLieC) ⊗CatLieC K
∼= AL
0 ⊗CatLie U(K).
□
The following lemma compares the composites of right adjoints.
Lemma 4.7. We have the following commutative diagram (up to isomorphisms)
CatLieC-Mod
U
A-Mod
U
HomA-Mod(AL,−)
o
CatLie-Mod
kgrop-Mod.
Homkgrop-Mod(AL
0 ,−)
o
⟲
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
21
Proof. For an A-module M, we have a natural isomorphism
Homkgrop-Mod(AL
0 , U(M)) ∼= HomA-Mod(F(AL
0 ), M)
given by the adjunction between (16) and (17). Therefore, by (18), we have
Homkgrop-Mod(AL
0 , U(M)) ∼= HomA-Mod(AL, M).
It is easy to see that this gives an isomorphism of CatLie-modules.
□
The uniqueness of left adjoints gives the following lemma.
Lemma 4.8. We have the following commutative diagram (up to isomorphisms)
CatLieC-Mod
AL⊗CatLieC −
/ A-Mod
CatLie-Mod
AL
0 ⊗CatLie−
/
F
O
kgrop-Mod.
F
O
⟲
Proof. Since the functor (AL⊗CatLieC −)◦F is the left adjoint of U◦HomA-Mod(AL, −)
and the functor F ◦(AL
0 ⊗CatLie −) is the left adjoint of Homkgrop-Mod(AL
0 , −) ◦U,
the statement follows from Lemma 4.7 and the uniqueness of left adjoints.
□
Let A-Modω denote the full subcategory of A-Mod whose set of objects is
{M ∈Ob(A-Mod) | U(M) ∈Ob(kgrop-Modω)}.
Theorem 4.9. We have AL ∈Ob(A-Modω), and the adjunction in Theorem 4.4
restricts to an adjunction
AL ⊗CatLieC −: CatLieC-Mod
A-Modω
⊤
: HomA-Modω(AL, −).
Proof. For a CatLieC-module K, by Lemma 4.6 and Proposition 2.4, we have
U(AL ⊗CatLieC K) ∼= AL
0 ⊗CatLie U(K) ∈Ob(kgrop-Modω).
Therefore, the functor AL ⊗CatLieC −restricts to a functor
AL ⊗CatLieC −: CatLieC-Mod →A-Modω.
It follows from AL ∼= AL ⊗CatLieC CatLieC that we have AL ∈Ob(A-Modω).
The statement follows since A-Modω is a full subcategory of A-Mod.
□
Remark 4.10. Minkyu Kim has independently obtained an adjunction which is
equivalent to the adjunction in Theorem 4.4. Moreover, he has proved that the
restriction of the adjunction, which is equivalent to that in Theorem 4.9, induces the
category equivalence between CatLieC-Mod and A-Modω. This was presented by
him at the workshop “Algebraic approaches to mapping class groups of surfaces”,
which was held at the Graduate School of Mathematical Sciences, University of
Tokyo [13].
Proposition 4.11. The adjunction between (16) and (17) restricts to
F : kgrop-Modω
A-Modω
⊤
: U.
22
MAI KATADA
Proof. By the definition of the category A-Modω, the forgetful functor U restricts
to U : A-Modω →kgrop-Modω.
We will check that the functor F restricts to a functor F : kgrop-Modω →
A-Modω. For N ∈Ob(kgrop-Modω), we have
F(N) ∼= F(AL
0 ⊗CatLie Homkgrop-Modω(AL
0 , N))
by Proposition 2.4. By Lemma 4.8, we have
F(AL
0 ⊗CatLie Homkgrop-Modω(AL
0 , N)) ∼= AL ⊗CatLieC F(Homkgrop-Modω(AL
0 , N)),
which is an object of A-Modω by Theorem 4.9.
The statement follows since
kgrop-Modω (resp. A-Modω) is a full subcategory of kgrop-Mod (resp. A-Modω).
□
Corollary 4.12. For each m ≥0, AL(L⊗m, H⊗−) is a projective object of A-Modω.
Proof. For each m ≥0, the CatLie-module CatLie(m, −) is projective by the Yoneda
Lemma. Via the equivalence AL
0 ⊗CatLie −, CatLie(m, −) is sent to the projective
object AL
0 ⊗CatLie CatLie(m, −) ∼= AL
0 (L⊗m, H⊗−) in kgrop-Modω. Since F has
an exact right adjoint U, F preserves projectives, and thus AL(L⊗m, H⊗−) ∼=
F(AL
0 (L⊗m, H⊗−)) is a projective object of A-Modω.
□
Remark 4.13. Since we have AL(L⊗m, H⊗−) ∼= AL ⊗CatLieC CatLieC(m, −) and
since CatLieC(m, −) is a projective CatLieC-module, Corollary 4.12 can also be
proved by using the fact that the functor AL ⊗CatLieC −yields an equivalence
of categories (see Remark 4.10).
Remark 4.14. The A-module A(0, −) ∼= AL(L⊗0, H⊗−) is an object of A-Modω,
but A(m, −) is not for any m ≥1.
By a polynomial A-module, we mean an
A-module M such that U(M) corresponds to a polynomial grop-module. Then
polynomial A-modules are objects of A-Modω. Schur functors, which we recall
in Section 6.2, are examples of polynomial A-modules. Since the forgetful functor
U is exact, the stability properties of polynomial grop-modules under subobjects,
quotients, and extensions (see [8]) hold for polynomial A-modules.
Since the category A is an N-graded k-linear category, we have a projection
functor
A ↠A0 ∼= kgrop,
which induces a k-linear functor
T : kgrop-Mod →A-Mod,
(19)
which is fully-faithful, exact and satisfies U ◦T = idkgrop-Mod.
Note that any
morphism of A of degree ≥1 acts by zero on the image of T. In a similar way, the
projection to the degree 0 part induces a k-linear functor
T : CatLie-Mod →CatLieC-Mod,
which has properties similar to those of (19) mentioned above.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
23
Proposition 4.15. We have the following commutative diagrams (up to isomor-
phisms)
CatLieC-Mod
AL⊗CatLieC −
/ A-Mod
CatLie-Mod
AL
0 ⊗CatLie−
/
T
O
kgrop-Mod
T
O
⟲
and
CatLieC-Mod
A-Mod
HomA-Mod(AL,−)
o
CatLie-Mod
T
O
kgrop-Mod.
T
O
Homkgrop-Mod(AL
0 ,−)
o
⟳
Proof. Let J be a CatLie-module. By (14), we have isomorphisms of kgrop-modules
AL ⊗CatLieC T(J) ∼= AL
0 ⊗CatLie CatLieC ⊗CatLieC T(J) ∼= AL
0 ⊗CatLie J.
Since the action of any morphism of A of degree ≥1 on AL ⊗CatLieC T(J) is trivial,
the above isomorphism yields an isomorphism of A-modules.
Let N be an kgrop-module. An element of HomA-Mod(AL, T(N))(n) is a natural
transformation α = {αm : AL(L⊗n, H⊗m) →T(N)(m)}m such that αm maps all
elements of AL
≥1(L⊗n, H⊗m) to zero. Therefore, we obtain a k-linear isomorphism
HomA-Mod(AL, T(N))(n) ∼= Homkgrop-Mod(AL
0 , N)(n).
Since the action of any morphism of CatLieC of degree ≥1 on HomA-Mod(AL, T(N))
is trivial, the above k-linear isomorphism yields an isomorphism of CatLieC-modules.
□
5. Symmetric monoidal structures
Here we study the monoidality of the adjoint functors that we have studied in
Section 4 with respect to the symmetric monoidal structures that are defined by
using Day convolution.
5.1. Day convolution. Let P be a k-linear PROP. The Day convolution M ⊠P N
of P-modules M and N is the P-module defined by
M ⊠P N =
Z m,n∈P
P(m + n, −) ⊗M(m) ⊗N(n).
The Day convolution of P-modules gives a k-linear symmetric monoidal structure
on the category of P-modules, where the monoidal unit is P(0, −).
5.2. The adjunctions with respect to the symmetric monoidal structures.
Here we study the monoidality of the adjoint functors introduced in Section 4 with
respect to the k-linear symmetric monoidal structures
(kgrop-Mod, ⊠kgrop, kgrop(0, −)),
(A-Mod, ⊠A, A(0, −)),
(CatLie-Mod, ⊠CatLie, CatLie(0, −)),
(CatLieC-Mod, ⊠CatLieC , CatLieC(0, −))
induced by the Day convolution.
24
MAI KATADA
Proposition 5.1. The left adjoint functor
AL
0 ⊗CatLie −: CatLie-Mod →kgrop-Mod
is symmetric monoidal.
Proof. For CatLie-modules J and J′, define an isomorphism of kgrop-modules
θJ,J′ : (AL
0 ⊗CatLie J) ⊠kgrop (AL
0 ⊗CatLie J′) →AL
0 ⊗CatLie (J ⊠CatLie J′)
by the composition of the following isomorphisms
(AL
0 ⊗CatLie J) ⊠kgrop (AL
0 ⊗CatLie J′)
=
Z p,p′∈kgrop
kgrop(p + p′, −) ⊗(AL
0 ⊗CatLie J)(p) ⊗(AL
0 ⊗CatLie J′)(p′)
∼=
Z m,m′∈CatLie Z p,p′∈kgrop
AL
0 (H⊗p+p′, H⊗−) ⊗AL
0 (L⊗m, H⊗p) ⊗AL
0 (L⊗m′, H⊗p′)
!
⊗J(m) ⊗J′(m′)
R m,m′∈CatLie(◦⊗J(m)⊗J′(m′))
−−−−−−−−−−−−−−−−−−−−−→
∼
=
Z m,m′∈CatLie
AL
0 (L⊗m+m′, H⊗−) ⊗J(m) ⊗J′(m′)
∼=
Z n,m,m′∈CatLie
AL
0 (L⊗n, H⊗−) ⊗CatLie(m + m′, n) ⊗J(m) ⊗J′(m′)
∼=
Z n∈CatLie
AL
0 (L⊗n, H⊗−) ⊗
Z m,m′∈CatLie
CatLie(m + m′, n) ⊗J(m) ⊗J′(m′)
= AL
0 ⊗CatLie (J ⊠CatLie J′),
where the map ◦is induced by the composition of the category AL
0 .
One can
check that ◦is an isomorphism by using Lemma 3.7. Define an isomorphism of
kgrop-modules
θ0 = id : kgrop(0, −)
∼
=
−→AL
0 ⊗CatLie CatLie(0, −).
Then it is straightforward to check that (AL
0 ⊗CatLie −, θJ,J′, θ0) satisfies the axioms
of symmetric monoidal functors.
□
The symmetric monoidal structure on kgrop-Mod restricts to analytic functors.
Corollary 5.2. We have a symmetric monoidal category
(kgrop-Modω, ⊠kgrop, kgrop(0, −)).
Proof. It follows from Propositions 2.4 and 5.1 that for N, N′ ∈Ob(kgrop-Modω),
we have
N ⊠kgrop N ′
∼= (AL
0 ⊗CatLie Homkgrop(AL
0 , N)) ⊠kgrop (AL
0 ⊗CatLie Homkgrop(AL
0 , N ′))
∼= AL
0 ⊗CatLie (Homkgrop(AL
0 , N) ⊠CatLie Homkgrop(AL
0 , N ′)),
which is an object of kgrop-Modω. Since kgrop(0, −) is analytic, the symmetric
monoidal structure on kgrop-Mod restricts to kgrop-Modω.
□
Remark 5.3. [18, Theorem 11.2] states that the functor ∆CatAssu ⊗CatLie −:
CatLie-Mod →kgrop-Modω is symmetric monoidal with respect to the symmetric
monoidal structure of kgrop-Modω that is defined by pointwise tensor product and
the corresponding symmetric monoidal structure of CatLie-Mod. Note that the Day
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
25
convolution tensor product ⊠kgrop for kgrop-Mod is not equivalent to the pointwise
tensor product.
Proposition 5.4. The left adjoint functor
AL ⊗CatLieC −: CatLieC-Mod →A-Mod
is symmetric monoidal.
Proof. The proof is similar to that of Proposition 5.1. In this case, we use the fact
that the map induced by the composition of the category AL
Z p,p′∈A
AL(H⊗p+p′, H⊗−) ⊗AL(L⊗m, H⊗p) ⊗AL(L⊗m′, H⊗p′) →AL(L⊗m+m′, H⊗−)
is an isomorphism.
□
Remark 5.5. By using the equivalence of categories between CatLieC-Mod and
A-Modω given by Kim (see Remark 4.10), one can also check that the symmetric
monoidal structure on A-Mod restricts to A-Modω.
Proposition 5.6. The left adjoint functors
F : kgrop-Mod →A-Mod,
F : CatLie-Mod →CatLieC-Mod
are symmetric monoidal.
Proof. For CatLie-modules J, J′, we have the following composition of isomorphisms
F(J) ⊠CatLieC F(J′)
=
Z m,m′∈CatLieC
CatLieC(m + m′, −) ⊗F(J)(m) ⊗F(J′)(m′)
∼=
Z p,p′∈CatLie Z m,m′∈CatLieC
CatLieC(m + m′, −) ⊗CatLieC(p, m) ⊗CatLieC(p′, m′)
!
⊗J(p) ⊗J′(p′)
∼=
Z p,p′∈CatLie
CatLieC(p + p′, −) ⊗J(p) ⊗J′(p′)
∼=
Z n∈CatLie
CatLieC(n, −) ⊗
Z p,p′∈CatLie
CatLie(p + p′, n) ⊗J(p) ⊗J′(p′)
!
∼= F(J ⊠CatLie J′)
and an isomorphism
CatLieC(0, −) ∼= F(CatLie(0, −)),
which make F a symmetric monoidal functor.
In a similar way, one can check that the functor F : kgrop →A-Mod is symmetric
monoidal.
□
Proposition 5.7. The forgetful functors
U : A-Mod →kgrop-Mod,
U : CatLieC-Mod →CatLie-Mod
are symmetric lax monoidal.
26
MAI KATADA
Proof. We have a canonical injective morphism of kgrop-modules
νM,M ′ : U(M) ⊠kgrop U(M ′) ,→U(M ⊠A M ′)
for M, M ′ ∈A-Mod, and an inclusion map
ν0 : kgrop(0, −) ,→A(0, −),
such that (U, νM,M ′, ν0) is a symmetric lax monoidal functor.
In a similar way, the functor U : CatLieC-Mod →CatLie-Mod is symmetric lax
monoidal.
□
5.3. The algebra C in CatLie-Mod and the algebra A in kgrop-Mod. In a
k-linear monoidal category (C, ⊗, I), the monoidal unit I forms a canonical al-
gebra (I, µ = id : I ⊗I
∼
=
−→I, η = id : I →I).
Therefore, we have an alge-
bra A(0, −) in the k-linear symmetric monoidal category (A-Mod, ⊠A, A(0, −)),
and we have an algebra CatLieC(0, −) in the k-linear symmetric monoidal category
(CatLieC-Mod, ⊠CatLieC , CatLieC(0, −)).
Since the forgetful functor U : A-Mod →kgrop-Mod is lax monoidal, the algebra
A(0, −) in (A-Mod, ⊠A, A(0, −)) induces an algebra
A = U(A(0, −))
in (kgrop-Mod, ⊠kgrop, kgrop(0, −)). Moreover, the algebra A has the structure of
a graded algebra, where
A =
M
d≥0
Ad,
Ad = Ad(0, −),
and the multiplication consists of
µ : Ad ⊠kgrop Ad′ →Ad+d′
and the unit is
η : kgrop(0, −)
∼
=
−→A0 ,→A(0, −).
In a similar way, the algebra CatLieC(0, −) in (CatLieC-Mod, ⊠CatLieC , CatLieC(0, −))
induces an algebra
C = U(CatLieC(0, −))
in (CatLie-Mod, ⊠CatLie, CatLie(0, −)). The algebra C is also graded, where
C =
M
d≥0
Cd,
Cd = CatLieC(0, −)d.
Recall that the k-vector space Cd(n) = CatLieC(0, n)d for n ≥0 is explicitly de-
scribed in Lemma 3.6.
Remark 5.8. Via the category equivalence in Proposition 2.4, the graded algebra
A in kgrop-Mod corresponds to the graded algebra C in CatLie-Mod.
In what follows, we will study the structure of the graded algebra C in CatLie-Mod,
and prove that the algebra C is quadratic.
Let C⊠
1 denote the tensor algebra in (CatLie-Mod, ⊠CatLie, CatLie(0, −)) gener-
ated by C1, that is, the graded algebra whose underlying CatLie-module is
C⊠
1 =
M
d≥0
C⊠d
1 ,
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
27
whose multiplication
µ : C⊠
1 ⊠CatLie C⊠
1 →C⊠
1
is defined by
µ(h ⊗(f ⊗x1 ⊗· · · ⊗xd1) ⊗(g ⊗y1 ⊗· · · ⊗yd2))
= (h(f ⊗g)) ⊗x1 ⊗· · · ⊗xd1 ⊗y1 ⊗· · · ⊗yd2
for h ∈CatLie(p + q, r), f ∈CatLie(m1 + · · · + md1, p), g ∈CatLie(n1 + · · · +
nd2, q), x1 ∈C1(m1), · · · , xd1 ∈C1(md1), y1 ∈C1(n1), · · · , yd2 ∈C1(nd2), and
whose unit
η : CatLie(0, −) →C⊠
1
is defined by the inclusion
η : CatLie(0, −)
∼
=
−→C⊠0
1
,→C⊠
1 .
Note that the CatLie-module C1 = CatLieC(0, −)1 is concentrated in 2 ∈Ob(CatLie)
and C1(2) is the trivial representation of S2 generated by the Casimir element
c ∈C1(2).
Theorem 5.9. We have a surjective morphism of graded algebras
ϕ : C⊠
1 →C
in (CatLie-Mod, ⊠CatLie, CatLie(0, −)), and the kernel of ϕ is the two-sided ideal of
C⊠
1 generated by the following two elements
rs := id4 ⊗c ⊗c −P(13)(24) ⊗c ⊗c ∈C⊠2
1 (4),
and
rc := (([, ] ⊗id2)P(23)) ⊗c ⊗c + (id1 ⊗[, ] ⊗id1) ⊗c ⊗c ∈C⊠2
1 (3).
Proof. Define a morphism of algebras
ϕ : C⊠
1 →C
by
ϕ(f ⊗x1 ⊗· · · ⊗xd) = f(x1 ⊗· · · ⊗xd)
for f ∈CatLie(m1 + · · · + md, p), x1 ∈C1(m1), · · · , xd ∈C1(md).
Then ϕ is
surjective since any element of Cd(p) = CatLieC(0, p)d is a linear combination of
elements of the form fc⊗d with f ∈CatLie(2d, p).
Let R be the two-sided ideal of C⊠
1 generated by rs and rc. We have ϕ(rs) = 0
and ϕ(rc) = 0 by the second relation of (4), which implies that R ⊂ker ϕ. Then
we have a surjective morphism ϕ of graded algebras
ϕ : C⊠
1 /R
pr
↠C⊠
1 / ker ϕ
ϕ↠C.
In order to prove that ker ϕ = R, it suffices to construct a k-linear map
ϕ−
d : Cd(n) →(C⊠
1 /R)d(n)
such that ϕ−
d ϕ = id(C⊠
1 /R)d(n) for each d, n ≥0.
Define a k-linear map eϕ−
d from the k-vector space spanned by the set (7) to
(C⊠
1 /R)d(n) by
eϕ−
d ((T1 ⊗· · · ⊗Tn)Pσc⊗d) = (T1 ⊗· · · ⊗Tn)Pσ ⊗c ⊗· · · ⊗c.
28
MAI KATADA
By Lemma 3.6, the k-vector space Cd(n) = CatLieC(0, n)d is spanned by the set (7)
with relations generated by the equivariance relation (3), the AS and IHX relations,
(8), (9) and (10). It is straightforward to check that the relations corresponding to
these relations via eϕ−
d hold in (C⊠
1 /R)d(n). Therefore the k-linear map eϕ−
d induces
a k-linear map ϕ−
d : Cd(n) →(C⊠
1 /R)d(n), which satisfies ϕ−
d ϕ = id(C⊠
1 /R)d(n).
This completes the proof.
□
It follows from the category equivalence in Proposition 2.4 that the algebra A is
quadratic, which can be stated in the following corollary. Let A⊠
1 denote the tensor
algebra in (kgrop-Mod, ⊠kgrop, kgrop(0, −)) generated by A1 = A1(0, −).
Note
that the kgrop-module A1 is the simple kgrop-module generated by the Casimir
2-tensor ˜c.
Corollary 5.10. We have a surjective morphism of graded algebras
ψ : A⊠
1 →A
in (kgrop-Mod, ⊠kgrop, kgrop(0, −)), and the kernel of ψ is the two-sided ideal of
A⊠
1 generated by the following two elements
˜rs := id4 ⊗˜c ⊗˜c −P(13)(24) ⊗˜c ⊗˜c ∈A⊠2
1 (4)
and
˜rc := (((µ −µP(12)) ⊗id2)P(23)) ⊗˜c ⊗˜c + (id1 ⊗(µ −µP(12)) ⊗id1) ⊗˜c ⊗˜c ∈A⊠2
1 (3),
where ˜rc corresponds to the 4T-relation.
6. The A-module AL(L⊗m, H⊗−)
For each m ≥0, we have the A-module AL(L⊗m, H⊗−). For the first step,
we study the A-module A(0, −) = AL(L⊗0, H⊗−). Then we study the A-module
AL(L, H⊗−).
6.1. Filtration of A-modules. Here we define a double filtration of A-modules.
For each d, k ≥0, we have the (A, A)-subbimodule A≥d,≥k(−, −) of A(−, −)
spanned by Jacobi diagrams of degree ≥d with ≥k trivalent vertices. Then the
(A, A)-bimodules A≥d,≥k(−, −) form a double filtration of the (A, A)-bimodule
A(−, −)
A≥d,≥k(−, −) ⊃A≥d+1,≥k(−, −),
A≥d,≥k(−, −) ⊃A≥d,≥k+1(−, −).
In the following, we will write A≥d(−, −) for A≥d,≥0(−, −). The filtration A≥d,≥k(−, −)
induces a double filtration of A-modules as follows. Let M be an A-module. We
define a double filtration M≥d,≥k of M by
M≥d,≥k = A≥d,≥k(−, −) ⊗A M ⊂A ⊗A M ∼= M.
That is, for n ≥0, the k-vector space M≥d,≥k(n) is the subspace of M(n) spanned
by {M(f)(x) | f ∈A≥d,≥k(m, n), x ∈M(m), m ≥0}. Then we have
M≥d,≥k ⊃M≥d+1,≥k,
M≥d,≥k ⊃M≥d,≥k+1.
We have a quotient A-module M≥d,≥k/M≥d′,≥k′ for d ≤d′ and k ≤k′.
Remark 6.1. For an A-module M, we have M≥1 = 0 if and only if we have M =
TU(M), where U is the forgetful functor defined in (16) and T is the functor defined
in (19).
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
29
We have the corresponding filtrations for CatLieC-modules. For d, k ≥0, we have
the (CatLieC, CatLieC)-subbimodule CatLieC(−, −)≥d,≥k of CatLieC(−, −) spanned
by Jacobi diagrams of degree ≥d with ≥k trivalent vertices.
These induce a
double filtration K≥d,≥k on a CatLieC-module K by
K≥d,≥k = CatLieC(−, −)≥d,≥k ⊗CatLieC K.
The filtrations AL⊗CatLieC (K≥d,≥k) and (AL⊗CatLieC K)≥d,≥k of the A-module
AL ⊗CatLieC K coincide, which can be checked as follows. For an (L⊗m, H⊗n)-
diagram D, let d denote the degree of D in AL, k the number of trivalent vertices
of D and p the number of univalent vertices of D attached to Xn. In terms of
the generating morphisms of AL as a k-linear symmetric monoidal category, the
degree d is the number of copies of the morphism c, k is the number of copies of
the morphism [, ], p is the number of copies of the morphism i. Then we have
p = 2d −k + m.
Let AL
≥d,≥k denote the (A, CatLieC)-subbimodule of AL spanned by Jacobi dia-
grams of degree ≥d with ≥k trivalent vertices. Then we have
AL ⊗CatLieC (K≥d,≥k) = (AL ⊗CatLieC CatLieC(−, −)≥d,≥k) ⊗CatLieC K
∼= AL
≥d,≥k ⊗CatLieC K
∼= (A≥d,≥k ⊗A AL) ⊗CatLieC K
= (AL ⊗CatLieC K)≥d,≥k.
6.2. Schur functors. Here, we recall Schur functors. We refer the reader to [5] on
representation theory of symmetric groups and general linear groups.
For a partition λ of a non-negative integer, let l(λ) denote the length of λ and
|λ| the size of λ. We write λ ⊢|λ|. Let cλ ∈kS|λ| denote the Young symmetrizer
corresponding to λ, that is,
cλ =
X
σ∈Rλ
σ
! X
τ∈Cλ
sgn(τ)τ
!
,
where Rλ is the row stabilizer and Cλ is the column stabilizer of the canonical
tableau of λ. (See [5] for details.) It is well known that {Sλ := kSd · cλ | λ ⊢d} is
the set of isomorphism classes of irreducible representations of Sd.
Remark 6.2. Since we have CatLie(m, n) = 0 for m < n, it is easy to see that
any simple CatLie-module J is concentrated in d for some d ≥0 and J(d) is an
irreducible representation of Sd. Therefore, the set of isomorphism classes of simple
CatLie-modules is {Sλ | d ≥0, λ ⊢d}.
Let
Sλ : k-Mod →k-Mod
denote the Schur functor corresponding to λ ⊢d, which is defined by
Sλ(V ) = V ⊗d ⊗kSd Sλ.
Then Sλ(V ) is an irreducible representation of GL(V ) if dim(V ) ≥l(λ) and other-
wise we have Sλ(V ) = 0.
Let
a# = Hom(−ab, k) : kgrop →k-Mod
30
MAI KATADA
denote the dual of the abelianization functor.
Regarding the Sd-module kSd as a CatLie-module concentrated in d, we have
an isomorphism of kgrop-modules
AL
0 ⊗CatLie kSd ∼= (a#)⊗d.
Therefore, for each partition λ ⊢d, we have an isomorphism of kgrop-modules
AL
0 ⊗CatLie Sλ ∼= Sλ ◦a#.
Since the functor AL
0 ⊗CatLie −: CatLie-Mod →kgrop-Modω is an equivalence of
categories by Proposition 2.4, the set of isomorphism classes of simple objects in
kgrop-Modω is {Sλ ◦a# | d ≥0, λ ⊢d}.
By a graded CatLieC-module, we mean a CatLieC-module K which admits a
grading K = L
d≥0 Kd such that for any f ∈CatLieC(m, n)d, we have K(f) :
Kd′(m) →Kd+d′(n) for any d′ ≥0.
For example, CatLieC(m, −) is a graded
CatLieC-module for each m ≥0. A morphism α : K →K′ of graded CatLieC-
modules is a morphism of CatLieC-modules which is compatible with some gradings
of K and K′. Let grCatLieC-Mod denote the category of graded CatLieC-modules
and morphisms between them, which is a subcategory of CatLieC-Mod.
Proposition 6.3. The set of isomorphism classes of simple objects in the category
grCatLieC-Mod is {T(Sλ) | d ≥0, λ ⊢d}.
Proof. Let K be a simple object of grCatLieC-Mod. Since K is non-trivial, there
exists some some d ≥0 such that Kd ̸= 0. We may take d as the smallest one. Since
K is simple, the CatLieC-submodule K≥d+1 = L
d′≥d+1 Kd′ should be 0. Therefore,
we have K = Kd. Since any morphism of CatLieC of degree ≥1 raises the grading
of K, we have K = TU(K), that is, K is induced by a simple CatLie-module. This
completes the proof.
□
Remark 6.4. The set of isomorphism classes of simple objects in CatLieC-Mod is
much bigger than that of grCatLieC-Mod. For example, we have the following simple
CatLieC-module. Let
K(1) = S1 = k1,
K(2) = S12 = ka,
K(3) = S21 = k⟨b, c⟩,
where a = 1 −(12), b = 1 + (12) −(13) −(132), c = 1 + (23) −(13) −(123), and
K(m) = 0 for m ̸= 1, 2, 3. Then
K(c ⊗id1)(1) = b,
K([, ] ⊗id1)(b) = 0,
K([, ] ⊗id1)(c) = a,
K([, ])(a) = 1
defines a simple CatLieC-module.
We define graded A-modules similarly. Via the equivalence of categories between
CatLieC-Mod and A-Modω given by Kim (see Remark 4.10), the set of isomorphism
classes of simple objects in the subcategory of A-Modω of graded A-modules is
{T(Sλ ◦a#) | d ≥0, λ ⊢d}.
6.3. The kgrop-module Ad. In [11, 12], the author studied the kgrop-module
Ad = Ad(0, −). As A-modules, we have
T(Ad) = A≥d(0, −)/A≥d+1(0, −).
One of the main results of [11, 12] is an indecomposable decomposition of the
functor Ad, which is explained below.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
31
For d ≥2, let AdP and AdQ be the kgrop-submodules of Ad that are generated
by the symmetric element Pd and the anti-symmetric element Qd defined by
Pd = (
X
σ∈S2d
Pσ) ◦˜c⊗d,
Qd = ˜c⊗d −P(23) ◦˜c⊗d ∈Ad(0, 2d),
(20)
respectively. For d = 0, 1, we set AdP = Ad.
Theorem 6.5 ([11, 12]). Let d ≥2. We have an indecomposable decomposition
Ad = AdP ⊕AdQ
in kgrop-Mod. Moreover, AdP is a simple kgrop-module isomorphic to S2d ◦a#.
We have the following descending filtration of kgrop-modules
Ad ⊃AdQ ⊃Ad,1 ⊃Ad,2 ⊃· · · ⊃Ad,2d−1 = 0,
where Ad,k = U(A≥d,≥k(0, −)/A≥d+1,≥k(0, −)).
Lemma 6.6 ([12]). We have an isomorphism of kgrop-modules
AdQ/Ad,1 ∼=
M
λ⊢d, λ̸=d
S2λ ◦a#,
where 2λ = (2λ1, · · · , 2λl(λ)) for a partition λ = (λ1, · · · , λl(λ)) of d.
6.4. The quotient A-module A(0, −)/A≥d′(0, −). Here, we study the quotient
A-module A(0, −)/A≥d′(0, −). We prove that A(0, −)/A≥d′(0, −) is indecompos-
able for d′ ≥2.
We have the following descending filtration of A(0, −)/A≥d′(0, −) in A-Mod:
A(0, −)/A≥d′(0, −) ⊃A≥1(0, −)/A≥d′(0, −) ⊃· · · ⊃A≥d′(0, −)/A≥d′(0, −) = 0.
As we observed in Section 6.3, the A-module A≥d(0, −)/A≥d+1(0, −) = T(Ad)
factors through kgrop. However, in general, for d ≥0, d′ ≥d + 2, the A-module
A≥d(0, −)/A≥d′(0, −) does not factor through kgrop, and we have the following.
Theorem 6.7. For d ≥0, d′ ≥d + 2, the A-module A≥d(0, −)/A≥d′(0, −) is
indecomposable.
Proof. If d = 0, d′ = 2, then it is easy to see that A(0, −)/A≥2(0, −) is a non-
trivial extension of A(0, −)/A≥1(0, −) ∼= k by A≥1(0, −)/A≥2(0, −) ∼= T(S2 ◦a#).
Therefore, A(0, −)/A≥2(0, −) is indecomposable.
Otherwise, we have d′ ≥3. Suppose that there exist A-submodules F and G
such that
A≥d(0, −)/A≥d′(0, −) = F ⊕G.
(∗)
Then we have
A≥d′−1(0, −)/A≥d′(0, −) = F≥d′−d−1,≥0 ⊕G≥d′−d−1,≥0,
where F≥d′−d−1,≥0 (resp. G≥d′−d−1,≥0) is the A-submodule of F (resp. G) de-
fined in Section 6.1.
It follows from Theorem 6.5 and Lemma 6.6 that the A-
module A≥d′−1(0, −)/A≥d′(0, −) = T(Ad′−1) is decomposed into the direct sum
of indecomposable A-modules T(Ad′−1P) and T(Ad′−1Q), where T(Ad′−1P) ∼=
T(S2(d′−1) ◦a#) and where any composition factor of T(Ad′−1Q) is not isomorphic
to T(S2(d′−1)◦a#). Therefore, we may assume that T(Ad′−1Q) ⊂F≥d′−d−1,≥0 ⊂F.
32
MAI KATADA
For any n ≥0, we have a k-linear isomorphism
A≥d(0, n)/A≥d′(0, n) ∼= AdP(n) ⊕· · · ⊕Ad′−1P(n) ⊕AdQ(n) ⊕· · · ⊕Ad′−1Q(n).
For a non-trivial element z = xd +· · ·+xd′−1 +yd +· · ·+yd′−1, where xj ∈AjP(n),
yj ∈AjQ(n), let
t =
(
d′
if xd = · · · = xd′−1 = 0
min{j | xj ̸= 0}
otherwise,
and
s =
(
d′
if yd = · · · = yd′−1 = 0
min{j | yj ̸= 0}
otherwise.
Suppose that G(n) includes a non-trivial element z. If t > s, then we have
G(˜c⊗(d′−1−s) ⊗idn)(z) = G(˜c⊗(d′−1−s) ⊗idn)(ys),
which is a non-trivial element of T(Ad′−1Q)(n+2(d′ −1−s)) and thus a non-trivial
element of F(n + 2(d′ −1 −s)). If t = s, then there exists a morphism f : n →n
in kgrop ∼= A0 such that
G(f)(z) = x′
t+1 + · · · + x′
d′−1 + y′
t + · · · + y′
d′−1,
where x′
j ∈AjP(n) and y′
j ∈AjQ(n), which means that we have x′
t = 0, since
AtP ∼= S2t ◦a# does not appear as a composition factor of AtQ by Lemma 6.6.
Therefore, this case reduces to the case of t > s. If t < s, then we have
G(˜c⊗(d′−1−t) ⊗idn)(z) = G(˜c⊗(d′−1−t) ⊗idn)(xt) = ˜c⊗(d′−1−t) ⊗xt = u + v,
where u is a non-trivial element of Ad′−1P(n + 2(d′ −1 −t)) and v is a non-
trivial element of Ad′−1Q(n + 2(d′ −1 −t)). By an argument similar to the case
of t = s, there exists a morphism g ∈A0(n + 2(d′ −1 −t), n + 2(d′ −1 −t)) such
that G(g)(u + v) is a non-trivial element of Ad′−1Q(n + 2(d′ −1 −t)) and thus
a non-trivial element of F(n + 2(d′ −1 −t)). Therefore, in any cases, G includes
a non-trivial element of F, which contradicts (∗). Hence, we have G = 0, which
implies that A≥d(0, −)/A≥d′(0, −) is indecomposable.
□
In what follows, we study a submodule of A(0, −) which contains all AdQ with
d ≥2 as subquotient A-modules.
Let AQ denote the A-submodule of A≥2(0, −) generated by the anti-symmetric
element Q2 ∈A2(0, 4), which is a generator of A2Q as a kgrop-module defined in
(20). Then we have the following descending filtration of AQ in A-Mod:
AQ = AQ≥2 ⊃AQ≥3 ⊃· · · ⊃AQ≥d ⊃· · · ,
where
AQ≥d = AQ ∩A≥d(0, −).
The kgrop-modules AdP and AdQ satisfy
T(AdP) ∼= A≥d(0, −)/(A≥d+1(0, −) + AQ≥d),
T(AdQ) ∼= AQ≥d/AQ≥d+1.
The cokernel A(0, −)/AQ is a graded A-module whose underlying kgrop-module
is
U(A(0, −)/AQ) ∼=
M
d≥0
AdP ∼=
M
d≥0
S2d ◦a#
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
33
and the action of the Casimir 2-tensor is given by
(˜c ⊗id2d) · Pd =
(2d)!
(2d + 2)!Pd+1,
where Pd ∈AdP(2d) is the generator of AdP defined in (20).
We have the following descending filtration of the A-module AQ/AQ≥d′
AQ/AQ≥d′ ⊃AQ≥3/AQ≥d′ ⊃· · · ⊃AQ≥d′−1/AQ≥d′ = T(Ad′−1Q).
The following proposition is a partial generalization of Theorem 6.5.
Proposition 6.8. For d ≥2, d′ ≥d + 1, the A-module AQ≥d/AQ≥d′ is indecom-
posable.
Proof. For d′ −d = 1, the A-module AQ≥d/AQ≥d′ ∼= T(AdQ) is indecomposable
by Theorem 6.5. For d′ −d = k ≥2, suppose that there exist A-submodules F and
G such that
AQ≥d/AQ≥d′ = F ⊕G.
(∗)
Then we have
AQ≥d′−1/AQ≥d′ = F≥d′−d−1,≥0 ⊕G≥d′−d−1,≥0,
where F≥d′−d−1,≥0 (resp. G≥d′−d−1,≥0) is the A-submodule of F (resp. G) defined
in Section 6.1. Since AQ≥d′−1/AQ≥d′ is indecomposable, we may assume that
AQ≥d′−1/AQ≥d′ is an A-submodule of F.
For any n ≥0, we have a k-linear
isomorphism
(AQ≥d/AQ≥d′)(n) ∼= AdQ(n) ⊕Ad+1Q(n) ⊕· · · ⊕Ad′−1Q(n).
For a non-trivial element x = xd + · · · + xd′−1 of (AQ≥d/AQ≥d′)(n), where xj ∈
AjQ(n), let t = min{j | xj ̸= 0}. If G(n) includes a non-trivial element x, then we
have
G(˜c⊗(d′−1−t) ⊗idn)(x) = G(˜c⊗(d′−1−t) ⊗idn)(xt) = ˜c⊗(d′−1−t) ⊗xt,
which is a non-trivial element of (AQ≥d′−1/AQ≥d′)(n + 2(d′ −1 −t)) and thus a
non-trivial element of F(n+2(d′ −1−t)). This contradicts (∗). Therefore, we have
G = 0, which implies that AQ≥d′−1/AQ≥d′ is indecomposable.
□
6.5. The A-module A(0, −). Here, we will study the A-module A(0, −).
By Theorem 4.9, A(0, −) is analytic.
By the Yoneda Lemma, A(0, −) is a
projective A-module and we have
EndA-Mod(A(0, −)) ∼= EndA(0) ∼= k.
It follows that idempotents of the A-module A(0, −) are 0 and 1, which implies that
the A-module A(0, −) is indecomposable. In what follows, we will give another
proof of the indecomposability of A(0, −), which can also be applied to the A-
module AQ.
Theorem 6.9. The A-modules A(0, −) and AQ are indecomposable.
Proof. Suppose that we have non-trivial A-submodules F and G of A(0, −) such
that
A(0, −) = F ⊕G.
Then for each d′ ≥2, the projection A(0, −) ↠A(0, −)/A≥d′(0, −) induces
A(0, −)/A≥d′(0, −) ∼= (F/(F ∩A≥d′(0, −))) ⊕(G/(G ∩A≥d′(0, −))).
34
MAI KATADA
Since A(0, −)/A≥d′(0, −) is indecomposable by Theorem 6.7, we may assume that
G/(G ∩A≥d′(0, −)) = 0, which implies that G ⊂A≥d′(0, −) and F ̸⊂A≥d′(0, −).
Therefore, we have G ⊂T
d′≥2 A≥d′(0, −) = 0, which implies that A(0, −) is inde-
composable.
By Proposition 6.8, the above argument holds for AQ and the projection AQ ↠
AQ/AQ≥d′ for d′ ≥3.
□
6.6. The A-module A(0, −)/A≥4(0, −). In this subsection, we study the subquo-
tient A-modules of A(0, −)/A≥4(0, −).
First, we briefly recall from [11] the kgrop-module structures of Ad for d ≤3.
The kgrop-module A0 is spanned by the empty Jacobi diagram, and we have
A0 ∼= k = S0 ◦a#.
The kgrop-module A1 is generated by ˜c, and we have
A1 ∼= S2 ◦a#.
The kgrop-module A2 is generated by ˜c⊗2 and we have the indecomposable decom-
position
A2 = A2P ⊕A2Q.
Recall that A2P is generated by the symmetric element P2 and thus we have A2P ∼=
S4 ◦a#. The kgrop-module A2Q is generated by the anti-symmetric element Q2
and has a unique composition series
A2Q ⊃A2,1 ⊃A2,2 ⊃0
with composition factors S22 ◦a#, S13 ◦a# and S2 ◦a#. Here, A2,1 is generated
by the element a21 =
∈A2(0, 3) and A2,2 is generated by the element
a22 =
∈A2(0, 2). The set of submodules of A2 consists of (A2P)⊕ϵ ⊕M
for M ⊂A2Q, ϵ ∈{0, 1}, and the number of it is 8 since composition factors of
A2 are multiplicity free. See [11, Theorem 7.9] for details. The kgrop-module A3
is generated by ˜c⊗3 and we have the indecomposable decomposition
A3 = A3P ⊕A3Q.
Recall that A3P is generated by the symmetric element P3 and thus we have A3P ∼=
S6 ◦a#. The kgrop-module A3Q is generated by the anti-symmetric element Q3
and has the filtration
A3Q ⊃A3,1 ⊃A3,2 ⊃A3,3 ⊃A3,4 ⊃0,
where A3,k is generated by a31 = a21 ⊗˜c ∈A3(0, 5) if k = 1, by a22 ⊗˜c ∈A3(0, 4)
and a32 =
∈A3(0, 4) if k = 2, by a33 =
∈A3(0, 3) if
k = 3 and by a34 =
∈A3(0, 2) if k = 4. As opposed to A2Q, the
kgrop-module A3Q has S22 ◦a# with multiplicity 2 as composition factors, and
thus has infinitely many kgrop-submodules. (See [12, Section 8.4] for details.)
Now we study the A-module structure of A(0, −)/A≥d(0, −) for d = 2, 3, 4.
For d = 2, it is easy to see that the A-module A(0, −)/A≥2(0, −) is a non-trivial
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
35
extension of T(S2 ◦a#) by T(S0 ◦a#), or in other words, A(0, −)/A≥2(0, −) has
a unique composition series
A(0, −)/A≥2(0, −) ⊃A≥1(0, −)/A≥2(0, −) ⊃0
with composition factors T(S0 ◦a#) and T(S2 ◦a#).
For d = 3, since the A-submodule of A(0, −)/A≥3(0, −) that is generated by
the empty Jacobi diagram is A(0, −)/A≥3(0, −) itself, the set of A-submodules of
A(0, −)/A≥3(0, −) consists of A(0, −)/A≥3(0, −) and submodules of A≥1(0, −)/A≥3(0, −).
The A-module A≥1(0, −)/A≥3(0, −) has the following composition series
A≥1(0, −)/A≥3(0, −) ⊃A≥2(0, −)/A≥3(0, −) ⊃AQ/AQ≥3 ⊃A21 ⊃A22 ⊃0
with composition factors T(S2 ◦a#), T(S4 ◦a#), T(S22 ◦a#), T(S13 ◦a#) and
T(S2 ◦a#), where A21 and A22 are the A-submodules of A≥1(0, −)/A≥3(0, −)
generated by the classes represented by a21 and a22, respectively. Any non-trivial
A-submodule of A≥1(0, −)/A≥3(0, −) is a submodule of A≥2(0, −)/A≥3(0, −) =
T(A2), and thus the number of A-submodules of A≥1(0, −)/A≥3(0, −) is 9.
For d = 4, any non-trivial A-submodule of A(0, −)/A≥4(0, −) is a submodule of
A≥1(0, −)/A≥4(0, −), and any non-trivial A-submodule of A≥1(0, −)/A≥4(0, −)
is a submodule of A≥2(0, −)/A≥4(0, −).
In order to describe A-submodules of
A≥2(0, −)/A≥4(0, −), we will study the A-submodules of AQ/AQ≥4. For a sub-
module M of AQ/AQ≥4, let M = U(M/(M ∩AQ≥3)). Then the kgrop-module
M is one of the A2Q, A2,1, A2,2 and 0, and the A-module M is classified as follows:
(i) if M = 0, then M is a submodule of T(A3Q) = AQ≥3/AQ≥4,
(ii) if M = A2,2, then M is one of the following nine submodules
• the A-submodule generated by a22,
• the A-submodule generated by a22 and a32, which coincides with the
A-module A≥2,≥2(0, −)/A≥4,≥2(0, −),
• the A-submodule generated by a22 and c(312) · a31,
• the A-submodule generated by a22 and c(213) · a31,
• the A-submodule generated by a22 and a31,
• the A-submodule generated by a22 and c(42) · Q3,
• the A-submodule generated by a22, c(42) · Q3 and a31,
• the A-submodule generated by a22 and c(23) · Q3,
• the A-submodule generated by a22 and Q3,
where cλ denotes the Young symmetrizer corresponding to a partition λ,
(iii) if M = A2,1, then M is one of the following four submodules
• the A-submodule generated by a21, which coincides with the A-module
A≥2,≥1(0, −)/A≥4,≥1(0, −),
• the A-submodule generated by a21 and c(42) · Q3,
• the A-submodule generated by a21 and c(23) · Q3,
• the A-submodule generated by a21 and Q3,
(iv) if M = A2Q, then we have M = AQ/AQ≥4.
The set of A-submodules of A≥2(0, −)/A≥4(0, −) consists of
• (T(A3P))⊕ϵ ⊕M for ϵ ∈{0, 1} and an A-submodule M of AQ/AQ≥4,
• the A-submodule generated by P2,
• the A-submodule generated by P2 and a31,
• the A-submodule generated by P2 and Q3,
• the A-submodule generated by P2 and a22,
36
MAI KATADA
• the A-submodule generated by P2 and a22 and a31,
• the A-submodule generated by P2 and Q3,
• the A-submodule generated by P2 and a21,
• the A-submodule generated by P2 and a21 and Q3,
• the A-submodule generated by P2 and Q2, that is, A≥2(0, −)/A≥4(0, −).
6.7. The A-module AL(L, H⊗−). Here, we study the A-module structure of
AL(L, H⊗−).
We write AL
d for the kgrop-module AL
d (L, H⊗−) and AL
d,≥k for
AL
d,≥k(L, H⊗−). Then we have a filtration of kgrop-modules
AL
d ⊃AL
d,≥1 ⊃AL
d,≥2 ⊃· · · ⊃AL
d,≥2d ⊃0,
and the kgrop-module structure of the graded quotient AL
d /AL
d,≥1 is
AL
d /AL
d,≥1 ∼= (S1 ⊗(Sd ◦S2)) ◦a# ∼= (S1 ⊗
M
λ⊢d
S2λ) ◦a#.
(21)
Let AL
d P be the kgrop-submodule of AL
d that is generated by the symmetric
element
P L
d = (
X
σ∈S2d+1
Pσ) ◦(i ⊗˜c⊗d) ∈AL
d (L, H⊗2d+1).
Let AL
d Q be the kgrop-submodule of AL
d that is generated by the two anti-symmetric
elements
Q′
d = i ⊗˜c⊗d −P(12) ◦(i ⊗˜c⊗d), Q′′
d = i ⊗˜c⊗d −P(34) ◦(i ⊗˜c⊗d) ∈AL
d (L, H⊗2d+1).
We have
AL
0 = AL
0 P ∼= S1 ◦a#.
For d ≥1, we obtain an analogue of a partial result of Theorem 6.5.
Proposition 6.10. Let d ≥1. We have a direct sum decomposition
AL
d = AL
d P ⊕AL
d Q
in kgrop-Mod and AL
d P is a simple kgrop-module isomorphic to S2d+1 ◦a#.
Proof. The proof is analogous to that of [11, Theorem 8.2]. Any element of AL
d (n)
is a linear combination of f ◦(i⊗˜c⊗d) for f ∈A0(2d+1, n) by Lemma 3.11. Define
a kgrop-module map
e : AL
d →AL
d
by en(f ◦(i⊗˜c⊗d)) =
1
(2d+1)!f ◦P L
d for f ∈A0(2d+1, n). This is well defined since
the 4T-relation is sent to 0. Since AL
d P is generated by P L
d , we have im e = AL
d P.
Moreover, we have e(AL
d P) = AL
d P, which implies that e is an idempotent (i.e.,
e2 = e). Therefore, we have
AL
d = im e ⊕ker e,
im e = AL
d P,
ker e = im(1 −e).
It follows from
(id2d+1 −P12)(
X
σ∈S2d+1
Pσ) = 0 = (id2d+1 −P34)(
X
σ∈S2d+1
Pσ)
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
37
that we have AL
d Q ⊂ker e. For f ∈A0(2d + 1, n), we have
(1 −e)(f ◦(i ⊗˜c⊗d)) =
1
(2d + 1)!
X
σ∈S2d+1
f ◦((i ⊗˜c⊗d) −Pσ(i ⊗˜c⊗d)).
In order to prove that im(1−e) ⊂AL
d Q, it suffices to prove that for any σ ∈S2d+1,
there exist x, y ∈kS2d+1 such that
(i ⊗˜c⊗d) −Pσ(i ⊗˜c⊗d) = xQ′
d + yQ′′
d.
Since we have
(i ⊗˜c⊗d) −Pσρ(i ⊗˜c⊗d) = (i ⊗˜c⊗d) −Pσ(i ⊗˜c⊗d) + Pσ((i ⊗˜c⊗d) −Pρ(i ⊗˜c⊗d)),
by induction on the length of the permutation σ, it suffices to prove the existence
of x, y if σ is an adjacent transposition. If σ = (12), then we set x = id2d+1, y = 0.
If σ = (2j, 2j + 1) for 1 ≤j ≤d, then we set x = y = 0. If σ = (2j −1, 2j) for
2 ≤j ≤d, then we set x = 0,
y =
1
2
3
4
5
· · ·
2d + 1
1
2j −2
2j −1
2j
2j + 1
· · ·
\
2j −2 · · · \
2j + 1
· · ·
2d + 1
.
Therefore, we have the direct sum decomposition of AL
d .
It follows from the decomposition (21) of AL
d /AL
d,≥1 that the kgrop-module AL
d P
is isomorphic to S2d+1 ◦a# since we have f ◦P L
d = 0 for any f ∈AL
0,≥1(2d + 1, −)
and for any f = (id2d+1 −Pτ), τ ∈S2d+1. This completes the proof.
□
By an argument similar to [11, Theorem 7.9], we obtain the following kgrop-
module structure of AL
1 Q.
Theorem 6.11. The kgrop-module AL
1 Q has a unique composition series
AL
1 Q ⊃AL
1,≥1 ⊃AL
1,≥2 ⊃0
with composition factors S21 ◦a#, S12 ◦a# and S1 ◦a#. In particular, AL
1 Q is
indecomposable.
Remark 6.12. The kgrop-module AL
d has a composition series of finite length whose
set of composition factors consists of Sλ ◦a# for each λ that is obtained by deleting
one box from the Young diagram of µ for each composition factor Sµ ◦a# of Ad+1.
Conjecture 6.13. For d ≥2, the kgrop-module AL
d Q is indecomposable.
We have A-submodules ALQ′, ALQ′′ and ALQ of AL(L, H⊗−) that are gener-
ated by the anti-symmetric elements
Q′ = i ⊗˜c −P(12) ◦(i ⊗˜c) ∈AL
1 (L, H⊗3),
Q′′ = i ⊗˜c⊗2 −P(34) ◦(i ⊗˜c⊗2) ∈AL
2 (L, H⊗5),
QL = {Q′, Q′′},
respectively.
Then we also have the following descending filtration of ALQ in
A-Mod:
ALQ = ALQ≥1 ⊃ALQ≥2 ⊃· · · ⊃ALQ≥d ⊃· · · ,
where
ALQ≥d = ALQ ∩AL
≥d(L, H⊗−).
38
MAI KATADA
Conjecture 6.14. The A-modules ALQ′, ALQ′′, ALQ and AL(L, H⊗−) are in-
decomposable.
7. Perspectives
Here, we consider modules over the handlebody groups induced by A-modules.
Let B denote the category of bottom tangles in handlebodies. Then B identifies with
the opposite Hop of the category H of isotopy classes of embeddings of handlebodies
relative to the bottom square. The automorphism group AutH(n) = Hn,1 is known
as the handlebody group. Let Bq denote the non-strictification of the category B.
Let ˆA = lim
←−d≥0 A/A≥d ∼= Q
d≥0 Ad denote the degree completion of the category
A. Habiro and Massuyeau [7] constructed a functor
Z : Bq →ˆA,
which is an extension of the Kontsevich integral for bottom tangles.
Let M be an A-module which factors through A/A≥d for some d ≥1. Then we
have a Bq-module
˜
M : Bq
Z−→ˆA
πd
−→A/A≥d
M
−→k-Mod,
where πd : ˆA = lim
←−d≥0 A/A≥d →A/A≥d is the projection. The functor ˜
M induces
an Hn,1-module structure on the k-vector space M(n) for each n ≥0. For example,
for each m ≥0, the A-module AL(L⊗m, H⊗−)/AL
≥d(L⊗m, H⊗−) induces an Hn,1-
module AL(L⊗m, H⊗n)/AL
≥d(L⊗m, H⊗n) for each n ≥0. Moreover, for d ≥2, the
Hn,1-module restricts to a non-trivial module over the twist group, which is the
kernel of the canonical map from Hn,1 to Aut(Fn).
Recently, the stable cohomology of Hn,1 with some twisted coefficients has been
studied [10, 21, 23].
It would be interesting to study the stability of the coho-
mology of the handlebody group and the twist group with coefficients induced by
A-modules.
References
[1] Gregory Arone. Polynomial functors from free groups to a stable infinity-category. arXiv
preprint arXiv:2504.04114, 2025.
[2] Dror Bar-Natan. On the Vassiliev knot invariants. Topology, 34(2):423–472, 1995.
[3] Aur´elien Djament and Christine Vespa. Sur l’homologie des groupes d’automorphismes des
groupes libres `a coefficients polynomiaux. Comment. Math. Helv., 90(1):33–58, 2015.
[4] Samuel Eilenberg and Saunders Mac Lane. On the groups H(Π, n). II. Methods of computa-
tion. Ann. of Math. (2), 60:49–139, 1954.
[5] William Fulton and Joe Harris. Representation theory, volume 129 of Graduate Texts in
Mathematics. Springer-Verlag, New York, 1991. A first course, Readings in Mathematics.
[6] Kazuo
Habiro.
On
the
category
of
finitely
generated
free
groups.
arXiv
preprint
arXiv:1609.06599, 2016.
[7] Kazuo Habiro and Gw´ena¨el Massuyeau. The Kontsevich integral for bottom tangles in han-
dlebodies. Quantum Topol., 12(4):593–703, 2021.
[8] Manfred Hartl, Teimuraz Pirashvili, and Christine Vespa. Polynomial functors from algebras
over a set-operad and nonlinear Mackey functors. Int. Math. Res. Not. IMRN, (6):1461–1554,
2015.
[9] Vladimir Hinich and Arkady Vaintrob. Cyclic operads and algebra of chord diagrams. Selecta
Math. (N.S.), 8(2):237–282, 2002.
[10] Tomohiko Ishida and Masatoshi Sato. A twisted first homology group of the handlebody
mapping class group. Osaka J. Math., 54(3):587–619, 2017.
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES
39
[11] Mai Katada. Actions of automorphism groups of free groups on spaces of Jacobi diagrams. I.
Ann. Inst. Fourier (Grenoble), 73(4):1489–1532, 2023.
[12] Mai Katada. Actions of automorphism groups of free groups on spaces of Jacobi diagrams.
II. J. Inst. Math. Jussieu, 23(1):1–69, 2024.
[13] Minkyu Kim. The Lie operad as a subquotient of Jacobi diagrams in handlebodies. Talk at
the “Algebraic approaches to mapping class groups of surfaces”, University of Tokyo, May
2025.
[14] Minkyu Kim and Christine Vespa. On analytic exponential functors on free groups. arXiv
preprint arXiv:2401.09151, 2024.
[15] Alexander Kupers and Oscar Randal-Williams. On the cohomology of Torelli groups. Forum
Math. Pi, 8:e7, 83, 2020.
[16] Teimuraz Pirashvili. On the PROP corresponding to bialgebras. Cah. Topol. G´eom. Diff´er.
Cat´eg., 43(3):221–239, 2002.
[17] Geoffrey Powell. On the Passi and the Mal’cev functors. arXiv preprint arXiv:2309.07605,
2023.
[18] Geoffrey Powell. On analytic contravariant functors on free groups. High. Struct., 8(2):416–
466, 2024.
[19] Geoffrey Powell. Outer functors and a general operadic framework. J. Algebra, 644:526–562,
2024.
[20] Geoffrey Powell and Christine Vespa. Higher Hochschild homology and exponential functors.
Bull. Soc. Math. France, 153(1):1–141, 2025.
[21] Oscar Randal-Williams and Nathalie Wahl. Homological stability for automorphism groups.
Adv. Math., 318:534–626, 2017.
[22] Steven V. Sam and Andrew Snowden. Stability patterns in representation theory. Forum
Math. Sigma, 3:Paper No. e11, 108, 2015.
[23] Arthur Souli´e. Some stable twisted cohomology computations for handlebody mapping class
groups. Talk at the “Algebraic approaches to mapping class groups of surfaces”, University
of Tokyo, May 2025.
Graduate School of Mathematical Sciences, University of Tokyo, Tokyo 153-8914,
Japan
Email address: mkatada@ms.u-tokyo.ac.jp
|
MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES MAI KATADA Abstract. The linear category A of Jacobi diagrams in handlebodies was introduced by Habiro and Massuyeau. We study the category of modules over the category A. We generalize the adjunction given by Powell to an adjunction between the category of A-modules and the category of modules over the linear PROP for Casimir Lie algebras by using the category AL of extended Jacobi diagrams in handlebodies. Then we study subquotient A-modules of A(0, -). Contents 1. Introduction 1 2. Functors on the opposite category of the category of free groups 5 3. The category AL of extended Jacobi diagrams in handlebodies 8 4. Functors on the category of Jacobi diagrams in handlebodies 17 5. Symmetric monoidal structures 23 6. The A-module AL(L⊗m, H⊗-) 28 7. Perspectives 38 References 38 1. Introduction Finitely generated free groups and the automorphism groups of them are fundamental and important objects in mathematics, especially in algebraic topology. Functors from the category gr of finitely generated free groups (or the opposite grop of it) to the category of abelian groups often arise, and there is a substantial body of literature on the functor category on gr or grop [3, 8, 20, 18, 19, 17, 14, 1]. We will focus on the functors to the category of vector spaces over a field k of characteristic 0. The notion of polynomial functors was introduced by Eilenberg and MacLane [4] and generalized by Hartl, Pirashvili and Vespa [8]. The notion of outer functors, introduced by Powell and Vespa [20], also plays an essential role in the functor category on gr and grop. The full subcategory of polynomial functors (or analytic functors, which are colimits of polynomial functors) has been studied. In particular, Powell [18] proved that the full subcategory of analytic functors on grop is equivalent to the category of modules over the k-linear PROP CatLie for Lie algebras. In some Date: September 14, 2025. 2020 Mathematics Subject Classification. 18A25, 18A40, 57K16, 18M85, 20F28. Key words and phrases. Functor categories, Polynomial functors, Adjoint functors, Free groups, Jacobi diagrams in handlebodies, Casimir Lie algebras, Casimir Hopf algebras. 1 14 Sep 2025 2 MAI KATADA sense, modules over CatLie are easier to handle or understand than analytic functors on grop. Habiro and Massuyeau [7] introduced the category A of Jacobi diagrams in handlebodies in order to extend the Kontsevich invariant to a functor from the category of bottom tangles in handlebodies. The category A is characterized as a k-linear PROP freely generated by a Casimir Hopf algebra, and has a grading defined by the number of copies of the Casimir 2-tensor. Since the category grop is characterized as a PROP freely generated by a cocommutative Hopf algebra, the k-linearization kgrop of the category grop is identified with the degree 0 part of A. The k-linear PROP CatLieC for Casimir Lie algebras was introduced by Hinich and Vaintrob [9] in their study of the relations between Casimir Lie algebras and the algebra of chord diagrams. One of the motivations behind this project was to clarify the relations between the category of A-modules and the category of CatLieC-modules. The aim of this paper is to study the category A-Mod of A-modules. We generalize the adjunction given by Powell to an adjunction between CatLieC-modules and A-modules. Here, we use an (A, CatLieC)-bimodule induced by the hom-spaces of the category AL of extended Jacobi diagrams in handlebodies, which was introduced in [12]. Then we study the subquotient A-modules of A(0, -). By using the indecomposable decomposition of the kgrop-module Ad(0, -) obtained in [11, 12], we prove that some subquotient A-modules of A(0, -) are indecomposable. 1.1. An adjunction between the categories CatLieC-Mod and A-Mod. Let k-Mod denote the category of k-vector spaces and k-linear maps for a field k of characteristic 0. For a k-linear category C, which is a category enriched over k-Mod, let C-Mod denote the k-linear category of k-linear functors from C to k-Mod. Powell [18, Theorem 9.19] constructed a (kgrop, CatLie)-bimodule ∆CatAssu, which yields the hom-tensor adjunction ∆CatAssu ⊗CatLie -: CatLie-Mod kgrop-Mod ⊤ : Homkgrop-Mod(∆CatAssu, -). Moreover, he proved that the adjunction restricts to an equivalence of categories between CatLie-Mod and a full subcategory kgrop-Modω of kgrop-Mod whose objects correspond to analytic functors on grop. We will give an alternative interpretation of the adjunction given by Powell by using the k-linear symmetric monoidal category AL of extended Jacobi diagrams. The set of objects of the category AL is the free monoid generated by H and L, and AL includes A (resp. CatLieC) as a full subcategory whose objects are generated by H (resp. L). By restricting to the degree 0 part, the category AL 0 includes kgrop ∼= A0 and CatLie ∼= (CatLieC)0 as full subcategories. (See Section 3 for details.) Proposition 1.1 (see Proposition 4.3). We have an isomorphism of (kgrop, CatLie)- bimodules ∆CatAssu ∼= AL 0 (L⊗-, H⊗-). Moreover, by generalizing the above interpretation, we obtain an adjunction between CatLieC-Mod and A-Mod. MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 3 Theorem 1.2 (see Theorem 4.4). We have an (A, CatLieC)-bimodule AL(L⊗-, H⊗-), which induces an adjunction AL(L⊗-, H⊗-) ⊗CatLieC -: CatLieC-Mod A-Mod ⊤ : HomA-Mod(AL(L⊗-, H⊗-), -). We define a full subcategory A-Modω of A-Mod in such a way that an object of A-Modω is an A-module M which satisfies U(M) ∈kgrop-Modω, where U : A-Mod →kgrop-Mod denotes the forgetful functor. Theorem 1.3 (see Theorem 4.9). We have AL(L⊗m, H⊗-) ∈Ob(A-Modω) for each m ≥0, and the adjunction in Theorem 1.2 restricts to an adjunction AL(L⊗-, H⊗-) ⊗CatLieC -: CatLieC-Mod A-Modω ⊤ : HomA-Modω(AL(L⊗-, H⊗-), -). Remark 1.4. Recently, Minkyu Kim has independently obtained an adjunction, equivalent to Theorem 1.2, by using a general method involving left ideals. Moreover, he has proved that the restriction of the adjunction, equivalent to that in Theorem 1.3, induces an equivalence of categories. (See Remark 4.10.) For a k-linear PROP P, the Day convolution of modules over P induces a k-linear symmetric monoidal structure on the category of P-modules. Proposition 1.5 (see Propositions 5.1 and 5.4). The left adjoint functors AL 0 (L⊗-, H⊗-) ⊗CatLie -: CatLie-Mod →kgrop-Mod and AL(L⊗-, H⊗-) ⊗CatLieC -: CatLieC-Mod →A-Mod are symmetric monoidal with respect to the symmetric monoidal structures induced by the Day convolution. We have a graded algebra A = U(A(0, -)) in the symmetric monoidal category kgrop-Mod with the Day convolution tensor product. In a similar way, we have a graded algebra C = U(CatLieC(0, -)) in the symmetric monoidal category CatLie-Mod with the Day convolution tensor product, where U : CatLieC-Mod → CatLie-Mod is the forgetful functor. The graded algebra A corresponds to the graded algebra C via the category equivalence between kgrop-Modω and CatLie-Mod. Theorem 1.6 (see Theorem 5.9 and Corollary 5.10). The graded algebra C in CatLie-Mod is quadratic, and the graded algebra A in kgrop-Mod is quadratic. 1.2. Subquotient A-modules of A(0, -). In [11, 12], the author studied the kgrop-module Ad = Ad(0, -). One of the main results of [11, 12] is an indecomposable decomposition of Ad. More precisely, for d ≥2, we have an indecomposable decomposition of kgrop-modules Ad = AdP ⊕AdQ, where AdP and AdQ are the kgrop-submodules of Ad that are generated by the symmetric element Pd and the anti-symmetric element Qd, respectively. Moreover, AdP is a simple kgrop-module which is isomorphic to S2d ◦a#. Here, Sλ denotes the Schur functor corresponding to a partition λ, and a# = Hom(-ab, k) denotes the dual of the abelianization functor. (See Section 6.3 for details.) For d ≥0, the degree ≥d part forms the A-submodule A≥d(0, -) of A(0, -). For d ≥0, d′ ≥d + 2, the quotient A-module A≥d(0, -)/A≥d′(0, -) does not 4 MAI KATADA factor through kgrop. By using the kgrop-module structure of Ad, we obtain the following. Theorem 1.7 (see Theorems 6.7 and 6.9). For d ≥0, d′ ≥d + 2, the A-module A≥d(0, -)/A≥d′(0, -) is indecomposable. Moreover, the A-module A(0, -) is indecomposable. Let AQ denote the A-submodule of A(0, -) that is generated by Q2, which contains all AdQ for d ≥2. For d ≥2, the degree ≥d part forms the A-submodule AQ≥d of AQ. For the A-module AQ, we also have the indecomposability. Theorem 1.8 (see Proposition 6.8 and Theorem 6.9). For d ≥2, d′ ≥d + 1, the A-module AQ≥d/AQ≥d′ is indecomposable. Moreover, the A-module AQ is indecomposable. For each m ≥0, we have the A-module AL(L⊗m, H⊗-). For m = 0, we have AL(L⊗0, H⊗-) ∼= A(0, -) as A-modules. For m = 1, we obtain a direct sum decomposition of AL d = AL d (L, H⊗-) as kgrop-modules, which is conjectured to be an indecomposable decomposition. Proposition 1.9 (see Proposition 6.10). Let d ≥1. We have a direct sum decomposition of kgrop-modules AL d = AL d P ⊕AL d Q, where AL d P and AL d Q are the kgrop-submodules of AL d that are generated by the symmetric element P L d and by the two anti-symmetric elements Q′ d and Q′′ d, respectively. Moreover, AL d P is a simple kgrop-module which is isomorphic to S2d+1 ◦a#. We make the following conjecture, which is an analogue of the case of A(0, -). Conjecture 1.10 (see Conjectures 6.14 and 6.13). For d ≥2, the kgrop-module AL d Q is indecomposable. Moreover, the A-module AL(L, H⊗-) is indecomposable. 1.3. Outline. We organize the rest of the paper as follows. From Section 2 to Section 5, we study the adjunction between the categories CatLieC-Mod and A-Mod. More precisely, in Section 2, we recall some known facts to state the result of Powell, and in Section 3, we recall the definition of the category AL and study the hom-space of it, and in Section 4, we prove Theorems 1.2 and 1.3, and then in Section 5, we study the adjunctions with respect to the symmetric monoidal structure induced by the Day convolution. In Section 6, we study the subquotient A-modules of A(0, -) and prove Theorems 1.7 and 1.8. Lastly, in Section 7, we provide a perspective on modules over the handlebody groups. Acknowledgements. The author would like to thank Dr. Minkyu Kim for discussions on the adjunction between A-Mod and CatLieC-Mod, and Professor Kazuo Habiro for suggestions regarding the algebra A in A-Mod, which inspired the writing of Section 5. She would like to thank Professor Geoffrey Powell for pointing out errors in the initial proof of the indecomposability of the A-module A(0, -), and for his valuable comments throughout drafts of the present paper. She would also like to thank Professor Christine Vespa for helpful comments. This work was supported by JSPS KAKENHI Grant Number 24K16916. MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 5 2. Functors on the opposite category of the category of free groups Here we recall some known facts about the opposite grop of the category gr of free groups, and describe the adjunction between the functor category on grop and the category of modules over the k-linear PROP CatLie for Lie algebras, which is given by Powell [18]. 2.1. The category grop. For n ≥0, let Fn be the group freely generated by {x1, · · · , xn}. Let gr denote the category of finitely generated free groups and group homomorphisms, and grop the opposite category of gr. It is well known that the category grop is a PROP (i.e., a symmetric monoidal category with non-negative integers as objects) freely generated by a cocommutative Hopf algebra (H = 1, μ, η, ∆, ε, S). See [16] for details, and [6] for a combinatorial description. Here, the morphism μ : 2 →1 is the group homomorphism F1 →F2, x 7→x1x2, the morphism η : 0 →1 is the group homomorphism F1 →F0, x 7→1, the morphism ∆: 1 →2 is the group homomorphism F2 →F1, x1 7→x, x2 7→x, the morphism ε : 1 →0 is the group homomorphism F0 →F1, 1 7→1 ∈F1, and the morphism S : 1 →1 is the group homomorphism F1 →F1, x 7→x-1. Define μ[i] : i →1 inductively by μ[0] = η, μ[1] = id1, μ[i] = μ(μ[i-1] ⊗id1) for i ≥2. Let μ[p1,··· ,pn] = μ[p1] ⊗· · · ⊗μ[pn] : p1 + · · · + pn →n. (1) Similarly, we define ∆[i] : 1 →i by ∆[0] = ε, ∆[1] = id1, ∆[i] = (∆[i-1] ⊗id1)∆for i ≥2, and let ∆[q1,··· ,qm] = ∆[q1] ⊗· · · ⊗∆[qm] : m →q1 + · · · + qm. Define a group homomorphism P : Sm →grop(m, m), σ 7→Pσ (2) by P(i,i+1) = idi-1 ⊗P1,1 ⊗idm-i-1 for 1 ≤i ≤m -1, where P1,1 ∈grop(2, 2) is the symmetry such that P1,1(x1) = x2, P1,1(x2) = x1. Then we have the following factorization of morphisms of the category grop. Lemma 2.1 ([6, Lemma 2, Theorem 8]). Any element of grop(m, n) can be decomposed into the following form μ[p1,··· ,pn]Pσ(Se1 ⊗· · · ⊗Ses)∆[q1,··· ,qm], where s, q1, · · · , qm, p1, · · · , pn ≥0, s = q1 + · · · + qm = p1 + · · · + pn, σ ∈Ss, e1, · · · , es ∈{0, 1}. 6 MAI KATADA Note that the above factorization is not unique since we have μ(id1 ⊗S)∆= ηε. 2.2. The k-linear PROPs for unital associative algebras and Lie algebras. Let CatAssu denote the k-linear PROP associated to the operad Assu encoding unital associative algebras. That is, the category CatAssu is the k-linear PROP freely generated by a unital associative algebra (A = 1, μA, ηA), where the homspace is CatAssu(m, n) = M f:{1,...,m}→{1,...,n} n O i=1 Assu(|f -1(i)|). Define μ[p1,··· ,pn] A and a group homomorphism P : Sm →CatAssu(m, m), σ 7→Pσ in a way similar to (1) and (2), respectively. Then any element of CatAssu(m, n) has the following factorization. Lemma 2.2. Any element of CatAssu(m, n) can be written uniquely as a linear combination of morphisms μ[p1,··· ,pn] A Pσ with p1 + · · · + pn = m, σ ∈Sm. That is, the set {μ[p1,··· ,pn] A Pσ | p1 + · · · + pn = m, σ ∈Sm} is a basis for CatAssu(m, n). Let CatLie denote the k-linear PROP associated to the Lie operad Lie. That is, CatLie is the k-linear PROP freely generated by a Lie algebra (L = 1, [, ] : L⊗2 →L), where the hom-space is CatLie(m, n) = M f:{1,...,m}↠{1,...,n} n O i=1 Lie(|f -1(i)|). Define a group homomorphism P : Sm →CatLie(m, m), σ 7→Pσ in a way similar to (2). By using the representation of an element of the operad Lie as a rooted trivalent tree, the hom-space CatLie(m, n) is explicitly described as follows. Here, a rooted trivalent tree means a trivalent tree with a root and labeled leaves, where each trivalent vertex has a specified cyclic order of adjacent edges. Lemma 2.3. The hom-space CatLie(m, n) is the k-vector space spanned by the set n (T1 ⊗· · · ⊗Tn)Pσ Ti∈Lie(mi):a rooted trivalent tree (1≤i≤n), m1+···+mn=m, σ∈Sm o modulo the equivariance relation (T1ρ1 ⊗· · · ⊗Tnρn)Pσ = (T1 ⊗· · · ⊗Tn)P(ρ1⊗···⊗ρn)σ (3) for ρi ∈Smi (1 ≤i ≤n), and the AS and IHX relations described below: = - = - , . MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 7 The canonical morphism Lie →Assu of operads, which maps [, ] ∈CatLie(2, 1) to μA -μAP(12) ∈CatAssu(2, 1), induces a morphism ι : CatLie →CatAssu of k-linear PROPs. Then for each m ≥0, we have a right CatLie-module CatAssu(ι(-), m) : Catop Lie →k-Mod. Here, by a right CatLie-module, we mean a k-linear functor from the opposite Catop Lie of the category CatLie to the category k-Mod. 2.3. Functors on the category grop. For a k-linear PROP C, by a left C-module, we mean a k-linear functor from C to k-Mod. In what follows, all modules are left modules unless otherwise mentioned. Let C-Mod denote the category of C-modules. Let kgrop denote the k-linearization of the category grop. Note that k-linearization induces an isomorphism Funct(grop, k-Mod) ∼ = -→kgrop-Mod, where Funct(grop, k-Mod) denotes the functor category on grop. Powell [18] defined a kgrop-module structure on the family of hom-spaces {CatAssu(l, m)}m of the k-linear category CatAssu for each l ≥0, which is denoted by ∆CatAssu(l, -) : kgrop →k-Mod. In what follows, we will describe the image of each of the generating morphisms of kgrop which appeared in Lemma 2.1. For p1, · · · , pn ≥0, s = p1 +· · ·+pn, σ ∈Ss, we have ∆CatAssu(l, μ[p1,··· ,pn]) = μ[p1,··· ,pn] A ◦-: CatAssu(l, s) →CatAssu(l, n), ∆CatAssu(l, Pσ) = Pσ ◦-: CatAssu(l, s) →CatAssu(l, s). For e1, · · · , es ∈{0, 1}, the morphism ∆CatAssu(l, Se1 ⊗· · · ⊗Ses) : ∆CatAssu(l, s) →∆CatAssu(l, s) is defined for μ[p1,··· ,ps] A with p1, · · · , ps ≥0, p1 + · · · + ps = l by ∆CatAssu(l, Se1 ⊗· · · ⊗Ses)(μ[p1,··· ,ps] A ) = (Se1 · μ[p1] A ) ⊗· · · ⊗(Ses · μ[ps] A ), where S · μ[p] A = (-1)pμ[p] A Preflection, where "reflection" means the permutation 1 2 · · · p p p -1 · · · 1 ∈Sp. In a similar way, for q1, · · · , qm ≥0, s = q1 + · · · + qm, the morphism ∆CatAssu(l, ∆[q1,··· ,qm]) = ∆CatAssu(l, m) →∆CatAssu(l, s) is defined for μ[p1,··· ,pm] A with p1, · · · , pm ≥0, p1 + · · · + pm = l by ∆CatAssu(l, ∆[q1,··· ,qm])(μ[p1,··· ,pm] A ) = (∆[q1] · μ[p1] A ) ⊗· · · ⊗(∆[qm] · μ[pm] A ), where ∆[0] · μ[p] A = ε · μ[p] A = ( 0 (p ≥1), id0 (p = 0) 8 MAI KATADA and for q ≥1, ∆[q] · μ[p] A = X i1+···+iq=p, i1,··· ,iq≥0 X σ∈Sh(i1,··· ,iq) μ[i1,··· ,iq] A Pσ-1, where, Sh(i1, · · · , iq) ⊂Sp denotes the set of (i1, · · · , iq)-shuffles. For example, we have ∆CatAssu(0, ∆)(ηA) = ηA ⊗ηA, ∆CatAssu(1, ∆)(id1) = id1 ⊗ηA + ηA ⊗id1, ∆CatAssu(2, ∆)(μA) = μA ⊗ηA + id2 +P(12) + ηA ⊗μA. Now we have a (kgrop, CatLie)-bimodule ∆CatAssu = ∆CatAssu(-, -). Here, by a (kgrop, CatLie)-bimodule, we mean a k-linear functor Catop Lie ⊗kgrop →k-Mod. Powell [18] proved that the usual hom-tensor adjunction yields an equivalence of categories. Proposition 2.4 ([18, Theorem 9.19]). We have the hom-tensor adjunction ∆CatAssu ⊗CatLie -: CatLie-Mod kgrop-Mod ⊤ : Homkgrop-Mod(∆CatAssu, -). Moreover, the adjunction restricts to an equivalence of categories CatLie-Mod ≃kgrop-Modω. Here, kgrop-Modω denotes the full subcategory of kgrop-Mod whose objects correspond to analytic functors on grop, where a functor is analytic if it is the colimit of its polynomial subfunctors. See [18] for details. 3. The category AL of extended Jacobi diagrams in handlebodies In this section, we recall the definition and some important properties of the k-linear category A of Jacobi diagrams in handlebodies which was introduced by Habiro and Massuyeau in [7]. We also recall the k-linear PROP CatLieC for Casimir Lie algebras introduced by Hinich and Vaintrob in [9]. Then we recall the k-linear category AL of extended Jacobi diagrams in handlebodies introduced in [12] and study its hom-spaces. 3.1. The category A of Jacobi diagrams in handlebodies. Habiro and Massuyeau introduced the category A of Jacobi diagrams in handlebodies in [7] in order to extend the Kontsevich invariant for bottom tangles to a functor. The objects of A are non-negative integers. The hom-space A(m, n) is the k-vector space spanned by "(m, n)-Jacobi diagrams in handlebodies", which are explained below. For n ≥0, let Xn = · · · be the oriented 1-manifold consisting of n component oriented arcs. A Jacobi diagram D on Xn is a uni-trivalent graph such that each trivalent vertex is oriented (i.e., each trivalent vertex has a fixed cyclic order of the three edges around it), univalent vertices are attached to Xn (i.e., the set of univalent vertices is embedded into the interior of Xn), and each connected component has at least one univalent vertex. Two Jacobi diagrams D and D′ on Xn are identified if there is a homeomorphism between them whose restriction to Xn is isotopic to the identity map and which respects the orientations MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 9 on trivalent vertices. The STU relation described below is imposed on the k-vector space spanned by Jacobi diagrams on Xn: = - The STU relation implies the AS and IHX relations (see [2, Theorem 6]). The degree of a Jacobi diagram is defined to be half the number of vertices. Note that the STU, AS and IHX relations respect the degree of Jacobi diagrams. For m ≥0, let Um be the handlebody of genus m that is obtained from the cube I3, where I = [-1, 1] ⊂R, by attaching m handles on the top square I2 × {1} of the cube along the upper line I × {0} × {1}. An (m, n)-Jacobi diagram is a Jacobi diagram on Xn mapped into the handlebody Um in such a way that the endpoints of Xn are arranged on the bottom line I × {0} × {-1} and that the i-th arc component of Xn goes from the 2i-th point to the (2i -1)-st point, where we count the endpoints from left to right. See [7] for details. The following is an example of a (2, 3)-Jacobi diagram: Two (m, n)-Jacobi diagrams are identified if they are homotopic in Um relative to the endpoints of Xn. Then the hom-space A(m, n) is defined by the k-vector space spanned by (m, n)-Jacobi diagrams modulo the STU relation. The composition of morphisms in A is somewhat complicated. To put it simply, for an (m, n)-Jacobi diagram D and (n, p)-Jacobi diagram D′, the composition D′ ◦D is obtained by stacking a suitable cabling of D on the top square of D′. Each hom-space A(m, n) is graded by the degree of Jacobi diagrams and we have A(m, n) ∼= M d≥0 Ad(m, n), where Ad(m, n) denotes the degree d part of A(m, n). Moreover, the category A has a structure of an N-graded k-linear symmetric strict monoidal category. The tensor product on objects is addition, the monoidal unit is 0, and the tensor product on morphisms is juxtaposition followed by horizontal rescaling and relabeling of 10 MAI KATADA indices. The category A has a Casimir Hopf algebra (H = 1, μ, η, ∆, ε, S, ̃c), where μ = , η = , ∆= , ε = , S = , ̃c = . Here, a Casimir Hopf algebra (H, ̃c) in a k-linear symmetric monoidal category is a cocommutative Hopf algebra H equipped with a Casimir 2-tensor ̃c, which is a morphism ̃c : I →H⊗2 satisfying (∆⊗idH) ̃c = (idH ⊗η ⊗idH) ̃c + η ⊗ ̃c, PH,H ̃c = ̃c, where PH,H denotes the symmetry, and (ad ⊗ad)(idH ⊗PH,H ⊗idH)(∆⊗ ̃c) = ̃cε, where ad = μ[3](idH⊗2 ⊗S)(idH ⊗PH,H)(∆⊗idH). Then the category A is characterized by the Casimir Hopf algebra in the following sense. Lemma 3.1 ([7, Theorem 5.11]). The k-linear PROP A is freely generated by the Casimir Hopf algebra (H = 1, μ, η, ∆, ε, S, ̃c). The degree 0 part A0 of the category A forms a subcategory of A. Since the PROP grop is freely generated by a cocommutative Hopf algebra, there exists a unique symmetric monoidal functor from grop to A such that the cocommutative Hopf algebra of grop is mapped to the cocommutative Hopf algebra of A. This functor induces an isomorphism kgrop ∼ = -→A0 of k-linear PROPs. We will identify the morphisms of cocommutative Hopf algebras in kgrop and those in A via this functor. Since the degree of the generating morphisms μ, η, ∆, ε, S in A is 0, and since the degree of ̃c is 1, the degree of a morphism of A is equal to the degree by the number of copies of the Casimir 2-tensor ̃c. A morphism of Ad(m, n) can be factorized as follows. Lemma 3.2 ([7, Lemma 5.16]). Any element of Ad(m, n) is a linear combination of morphisms of the form μ[p1,··· ,pn]Pσ(Se1 ⊗· · · ⊗Ses ⊗id2d)(∆[q1,··· ,qm] ⊗ ̃c⊗d), where s, q1, · · · , qm, p1, · · · , pn ≥0, s = p1 + · · · + pn -2d = q1 + · · · + qm, σ ∈ Ss+2d, e1, · · · , es ∈{0, 1}. MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 11 3.2. The k-linear PROP CatLieC for Casimir Lie algebras. Here, we recall the k-linear PROP for Casimir Lie algebras introduced by Hinich and Vaintrob in [9]. A Casimir Lie algebra (L, c) in a symmetric monoidal category is a Lie algebra L equipped with a symmetric invariant 2-tensor, which is called the Casimir element, which is a morphism c : I →L⊗2 satisfying PL,Lc = c, ([, ] ⊗idL)(idL ⊗c) = (idL ⊗[, ])(c ⊗idL). (4) Consider the operad Lie as a cyclic operad. Let τ = (012 · · · n) ∈S1+n. The symmetric group S1+n is generated by the cyclic permutation τ and the subgroup Sn ⊂S1+n that stabilizes 0. The right action of τ on an element of Lie is given by changing the first input into the output and the output into the last input. For example, we have · τ = , · τ = , · τ = . Let CatLieC denote the k-linear PROP for Casimir Lie algebras, which is a klinear PROP generated by the k-linear PROP CatLie and a symmetric element c ∈CatLieC(0, 2) satisfying (id1 ⊗f)(c ⊗idn-1) = (fτ ⊗id1)(idn-1 ⊗c), for f ∈Lie(n) (see [9, Section 3.1.6] for details). In other words, CatLieC is the k-linear symmetric monoidal category whose objects are generated by L and whose morphisms are generated by [, ] : L⊗2 →L and c : I →L⊗2 with relations generated by the AS, IHX-relations and the relations (4). The category CatLieC has a grading given by the number of copies of the Casimir element c, that is, we have CatLieC(m, n) = M d≥0 CatLieC(m, n)d, where CatLieC(m, n)d is spanned by morphisms with d copies of c. Then the degree 0 part CatLieC(m, n)0 forms a subcategory (CatLieC)0, which is isomorphic to CatLie. It is easy to see that any element of CatLieC(m, n)d can be decomposed into a linear combination of morphisms of the form f(idL⊗m ⊗c⊗d), where f ∈CatLie(m + 2d, n). This fact can be described by using the upward Brauer category uB [22, 15], which is a k-linear category whose objects are nonnegative integers and whose hom-space uB(m, n) is spanned by partitionings of {1+, · · · , m+, 1-, · · · , n-} into (m + n)/2 unordered pairs in such a way that the partitioning includes no pairs of two positive elements. The assignment of the Casimir element c to each unordered pair of two negative elements induces an inclusion uB ,→CatLieC, which induces a k-linear full functor CatLie ⊗S uB ↠CatLieC, (5) where S is the k-linear category whose objects are non-negative integers and whose hom-space is S(m, n) = kSm if m = n and S(m, n) = 0 otherwise, and where the structure of a k-linear category on CatLie ⊗S uB is the canonical one induced by the structures of CatLie and uB. 12 MAI KATADA Remark 3.3. The hom-space CatLieC(0, n) can be identified with the space of open Jacobi diagrams whose univalent vertices are colored by {1, · · · , n}. See [17, Section 13] for the construction of CatLieC by using open Jacobi diagrams. The surjective morphism (5) appears in [17, Example 13.15]. It is well known that open Jacobi diagrams with one univalent vertex vanish by the AS and IHX relations, which implies that we have CatLieC(0, 1) = 0. (6) By extending the description of the hom-space CatLie(m, n) in Lemma 2.3, we will explicitly describe the k-vector space CatLieC(0, n)d. Before stating the following lemma, we will introduce some notation. For σ ∈S2d and an n-tuple (T1, · · · , Tn) of rooted trivalent trees Ti ∈Lie(mi) with 1 ≤i ≤n, m1 +· · ·+mn = 2d, which satisfy σ(1) = m1 + · · · + mi-1 + 1, σ(3) = m1 + · · · + mi-1 + 2, σ(2) = m1 + · · · + mj-1 + 1, Ti = t([, ] ⊗idmi-2) (∗) for some 1 ≤i s, then we have G( ̃c⊗(d′-1-s) ⊗idn)(z) = G( ̃c⊗(d′-1-s) ⊗idn)(ys), which is a non-trivial element of T(Ad′-1Q)(n+2(d′ -1-s)) and thus a non-trivial element of F(n + 2(d′ -1 -s)). If t = s, then there exists a morphism f : n →n in kgrop ∼= A0 such that G(f)(z) = x′ t+1 + · · · + x′ d′-1 + y′ t + · · · + y′ d′-1, where x′ j ∈AjP(n) and y′ j ∈AjQ(n), which means that we have x′ t = 0, since AtP ∼= S2t ◦a# does not appear as a composition factor of AtQ by Lemma 6.6. Therefore, this case reduces to the case of t > s. If t < s, then we have G( ̃c⊗(d′-1-t) ⊗idn)(z) = G( ̃c⊗(d′-1-t) ⊗idn)(xt) = ̃c⊗(d′-1-t) ⊗xt = u + v, where u is a non-trivial element of Ad′-1P(n + 2(d′ -1 -t)) and v is a nontrivial element of Ad′-1Q(n + 2(d′ -1 -t)). By an argument similar to the case of t = s, there exists a morphism g ∈A0(n + 2(d′ -1 -t), n + 2(d′ -1 -t)) such that G(g)(u + v) is a non-trivial element of Ad′-1Q(n + 2(d′ -1 -t)) and thus a non-trivial element of F(n + 2(d′ -1 -t)). Therefore, in any cases, G includes a non-trivial element of F, which contradicts (∗). Hence, we have G = 0, which implies that A≥d(0, -)/A≥d′(0, -) is indecomposable. □ In what follows, we study a submodule of A(0, -) which contains all AdQ with d ≥2 as subquotient A-modules. Let AQ denote the A-submodule of A≥2(0, -) generated by the anti-symmetric element Q2 ∈A2(0, 4), which is a generator of A2Q as a kgrop-module defined in (20). Then we have the following descending filtration of AQ in A-Mod: AQ = AQ≥2 ⊃AQ≥3 ⊃· · · ⊃AQ≥d ⊃· · · , where AQ≥d = AQ ∩A≥d(0, -). The kgrop-modules AdP and AdQ satisfy T(AdP) ∼= A≥d(0, -)/(A≥d+1(0, -) + AQ≥d), T(AdQ) ∼= AQ≥d/AQ≥d+1. The cokernel A(0, -)/AQ is a graded A-module whose underlying kgrop-module is U(A(0, -)/AQ) ∼= M d≥0 AdP ∼= M d≥0 S2d ◦a# MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 33 and the action of the Casimir 2-tensor is given by ( ̃c ⊗id2d) · Pd = (2d)! (2d + 2)!Pd+1, where Pd ∈AdP(2d) is the generator of AdP defined in (20). We have the following descending filtration of the A-module AQ/AQ≥d′ AQ/AQ≥d′ ⊃AQ≥3/AQ≥d′ ⊃· · · ⊃AQ≥d′-1/AQ≥d′ = T(Ad′-1Q). The following proposition is a partial generalization of Theorem 6.5. Proposition 6.8. For d ≥2, d′ ≥d + 1, the A-module AQ≥d/AQ≥d′ is indecomposable. Proof. For d′ -d = 1, the A-module AQ≥d/AQ≥d′ ∼= T(AdQ) is indecomposable by Theorem 6.5. For d′ -d = k ≥2, suppose that there exist A-submodules F and G such that AQ≥d/AQ≥d′ = F ⊕G. (∗) Then we have AQ≥d′-1/AQ≥d′ = F≥d′-d-1,≥0 ⊕G≥d′-d-1,≥0, where F≥d′-d-1,≥0 (resp. G≥d′-d-1,≥0) is the A-submodule of F (resp. G) defined in Section 6.1. Since AQ≥d′-1/AQ≥d′ is indecomposable, we may assume that AQ≥d′-1/AQ≥d′ is an A-submodule of F. For any n ≥0, we have a k-linear isomorphism (AQ≥d/AQ≥d′)(n) ∼= AdQ(n) ⊕Ad+1Q(n) ⊕· · · ⊕Ad′-1Q(n). For a non-trivial element x = xd + · · · + xd′-1 of (AQ≥d/AQ≥d′)(n), where xj ∈ AjQ(n), let t = min{j | xj ̸= 0}. If G(n) includes a non-trivial element x, then we have G( ̃c⊗(d′-1-t) ⊗idn)(x) = G( ̃c⊗(d′-1-t) ⊗idn)(xt) = ̃c⊗(d′-1-t) ⊗xt, which is a non-trivial element of (AQ≥d′-1/AQ≥d′)(n + 2(d′ -1 -t)) and thus a non-trivial element of F(n+2(d′ -1-t)). This contradicts (∗). Therefore, we have G = 0, which implies that AQ≥d′-1/AQ≥d′ is indecomposable. □ 6.5. The A-module A(0, -). Here, we will study the A-module A(0, -). By Theorem 4.9, A(0, -) is analytic. By the Yoneda Lemma, A(0, -) is a projective A-module and we have EndA-Mod(A(0, -)) ∼= EndA(0) ∼= k. It follows that idempotents of the A-module A(0, -) are 0 and 1, which implies that the A-module A(0, -) is indecomposable. In what follows, we will give another proof of the indecomposability of A(0, -), which can also be applied to the Amodule AQ. Theorem 6.9. The A-modules A(0, -) and AQ are indecomposable. Proof. Suppose that we have non-trivial A-submodules F and G of A(0, -) such that A(0, -) = F ⊕G. Then for each d′ ≥2, the projection A(0, -) ↠A(0, -)/A≥d′(0, -) induces A(0, -)/A≥d′(0, -) ∼= (F/(F ∩A≥d′(0, -))) ⊕(G/(G ∩A≥d′(0, -))). 34 MAI KATADA Since A(0, -)/A≥d′(0, -) is indecomposable by Theorem 6.7, we may assume that G/(G ∩A≥d′(0, -)) = 0, which implies that G ⊂A≥d′(0, -) and F ̸⊂A≥d′(0, -). Therefore, we have G ⊂T d′≥2 A≥d′(0, -) = 0, which implies that A(0, -) is indecomposable. By Proposition 6.8, the above argument holds for AQ and the projection AQ ↠ AQ/AQ≥d′ for d′ ≥3. □ 6.6. The A-module A(0, -)/A≥4(0, -). In this subsection, we study the subquotient A-modules of A(0, -)/A≥4(0, -). First, we briefly recall from [11] the kgrop-module structures of Ad for d ≤3. The kgrop-module A0 is spanned by the empty Jacobi diagram, and we have A0 ∼= k = S0 ◦a#. The kgrop-module A1 is generated by ̃c, and we have A1 ∼= S2 ◦a#. The kgrop-module A2 is generated by ̃c⊗2 and we have the indecomposable decomposition A2 = A2P ⊕A2Q. Recall that A2P is generated by the symmetric element P2 and thus we have A2P ∼= S4 ◦a#. The kgrop-module A2Q is generated by the anti-symmetric element Q2 and has a unique composition series A2Q ⊃A2,1 ⊃A2,2 ⊃0 with composition factors S22 ◦a#, S13 ◦a# and S2 ◦a#. Here, A2,1 is generated by the element a21 = ∈A2(0, 3) and A2,2 is generated by the element a22 = ∈A2(0, 2). The set of submodules of A2 consists of (A2P)⊕ε ⊕M for M ⊂A2Q, ε ∈{0, 1}, and the number of it is 8 since composition factors of A2 are multiplicity free. See [11, Theorem 7.9] for details. The kgrop-module A3 is generated by ̃c⊗3 and we have the indecomposable decomposition A3 = A3P ⊕A3Q. Recall that A3P is generated by the symmetric element P3 and thus we have A3P ∼= S6 ◦a#. The kgrop-module A3Q is generated by the anti-symmetric element Q3 and has the filtration A3Q ⊃A3,1 ⊃A3,2 ⊃A3,3 ⊃A3,4 ⊃0, where A3,k is generated by a31 = a21 ⊗ ̃c ∈A3(0, 5) if k = 1, by a22 ⊗ ̃c ∈A3(0, 4) and a32 = ∈A3(0, 4) if k = 2, by a33 = ∈A3(0, 3) if k = 3 and by a34 = ∈A3(0, 2) if k = 4. As opposed to A2Q, the kgrop-module A3Q has S22 ◦a# with multiplicity 2 as composition factors, and thus has infinitely many kgrop-submodules. (See [12, Section 8.4] for details.) Now we study the A-module structure of A(0, -)/A≥d(0, -) for d = 2, 3, 4. For d = 2, it is easy to see that the A-module A(0, -)/A≥2(0, -) is a non-trivial MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 35 extension of T(S2 ◦a#) by T(S0 ◦a#), or in other words, A(0, -)/A≥2(0, -) has a unique composition series A(0, -)/A≥2(0, -) ⊃A≥1(0, -)/A≥2(0, -) ⊃0 with composition factors T(S0 ◦a#) and T(S2 ◦a#). For d = 3, since the A-submodule of A(0, -)/A≥3(0, -) that is generated by the empty Jacobi diagram is A(0, -)/A≥3(0, -) itself, the set of A-submodules of A(0, -)/A≥3(0, -) consists of A(0, -)/A≥3(0, -) and submodules of A≥1(0, -)/A≥3(0, -). The A-module A≥1(0, -)/A≥3(0, -) has the following composition series A≥1(0, -)/A≥3(0, -) ⊃A≥2(0, -)/A≥3(0, -) ⊃AQ/AQ≥3 ⊃A21 ⊃A22 ⊃0 with composition factors T(S2 ◦a#), T(S4 ◦a#), T(S22 ◦a#), T(S13 ◦a#) and T(S2 ◦a#), where A21 and A22 are the A-submodules of A≥1(0, -)/A≥3(0, -) generated by the classes represented by a21 and a22, respectively. Any non-trivial A-submodule of A≥1(0, -)/A≥3(0, -) is a submodule of A≥2(0, -)/A≥3(0, -) = T(A2), and thus the number of A-submodules of A≥1(0, -)/A≥3(0, -) is 9. For d = 4, any non-trivial A-submodule of A(0, -)/A≥4(0, -) is a submodule of A≥1(0, -)/A≥4(0, -), and any non-trivial A-submodule of A≥1(0, -)/A≥4(0, -) is a submodule of A≥2(0, -)/A≥4(0, -). In order to describe A-submodules of A≥2(0, -)/A≥4(0, -), we will study the A-submodules of AQ/AQ≥4. For a submodule M of AQ/AQ≥4, let M = U(M/(M ∩AQ≥3)). Then the kgrop-module M is one of the A2Q, A2,1, A2,2 and 0, and the A-module M is classified as follows: (i) if M = 0, then M is a submodule of T(A3Q) = AQ≥3/AQ≥4, (ii) if M = A2,2, then M is one of the following nine submodules • the A-submodule generated by a22, • the A-submodule generated by a22 and a32, which coincides with the A-module A≥2,≥2(0, -)/A≥4,≥2(0, -), • the A-submodule generated by a22 and c(312) · a31, • the A-submodule generated by a22 and c(213) · a31, • the A-submodule generated by a22 and a31, • the A-submodule generated by a22 and c(42) · Q3, • the A-submodule generated by a22, c(42) · Q3 and a31, • the A-submodule generated by a22 and c(23) · Q3, • the A-submodule generated by a22 and Q3, where cλ denotes the Young symmetrizer corresponding to a partition λ, (iii) if M = A2,1, then M is one of the following four submodules • the A-submodule generated by a21, which coincides with the A-module A≥2,≥1(0, -)/A≥4,≥1(0, -), • the A-submodule generated by a21 and c(42) · Q3, • the A-submodule generated by a21 and c(23) · Q3, • the A-submodule generated by a21 and Q3, (iv) if M = A2Q, then we have M = AQ/AQ≥4. The set of A-submodules of A≥2(0, -)/A≥4(0, -) consists of • (T(A3P))⊕ε ⊕M for ε ∈{0, 1} and an A-submodule M of AQ/AQ≥4, • the A-submodule generated by P2, • the A-submodule generated by P2 and a31, • the A-submodule generated by P2 and Q3, • the A-submodule generated by P2 and a22, 36 MAI KATADA • the A-submodule generated by P2 and a22 and a31, • the A-submodule generated by P2 and Q3, • the A-submodule generated by P2 and a21, • the A-submodule generated by P2 and a21 and Q3, • the A-submodule generated by P2 and Q2, that is, A≥2(0, -)/A≥4(0, -). 6.7. The A-module AL(L, H⊗-). Here, we study the A-module structure of AL(L, H⊗-). We write AL d for the kgrop-module AL d (L, H⊗-) and AL d,≥k for AL d,≥k(L, H⊗-). Then we have a filtration of kgrop-modules AL d ⊃AL d,≥1 ⊃AL d,≥2 ⊃· · · ⊃AL d,≥2d ⊃0, and the kgrop-module structure of the graded quotient AL d /AL d,≥1 is AL d /AL d,≥1 ∼= (S1 ⊗(Sd ◦S2)) ◦a# ∼= (S1 ⊗ M λ⊢d S2λ) ◦a#. (21) Let AL d P be the kgrop-submodule of AL d that is generated by the symmetric element P L d = ( X σ∈S2d+1 Pσ) ◦(i ⊗ ̃c⊗d) ∈AL d (L, H⊗2d+1). Let AL d Q be the kgrop-submodule of AL d that is generated by the two anti-symmetric elements Q′ d = i ⊗ ̃c⊗d -P(12) ◦(i ⊗ ̃c⊗d), Q′′ d = i ⊗ ̃c⊗d -P(34) ◦(i ⊗ ̃c⊗d) ∈AL d (L, H⊗2d+1). We have AL 0 = AL 0 P ∼= S1 ◦a#. For d ≥1, we obtain an analogue of a partial result of Theorem 6.5. Proposition 6.10. Let d ≥1. We have a direct sum decomposition AL d = AL d P ⊕AL d Q in kgrop-Mod and AL d P is a simple kgrop-module isomorphic to S2d+1 ◦a#. Proof. The proof is analogous to that of [11, Theorem 8.2]. Any element of AL d (n) is a linear combination of f ◦(i⊗ ̃c⊗d) for f ∈A0(2d+1, n) by Lemma 3.11. Define a kgrop-module map e : AL d →AL d by en(f ◦(i⊗ ̃c⊗d)) = 1 (2d+1)!f ◦P L d for f ∈A0(2d+1, n). This is well defined since the 4T-relation is sent to 0. Since AL d P is generated by P L d , we have im e = AL d P. Moreover, we have e(AL d P) = AL d P, which implies that e is an idempotent (i.e., e2 = e). Therefore, we have AL d = im e ⊕ker e, im e = AL d P, ker e = im(1 -e). It follows from (id2d+1 -P12)( X σ∈S2d+1 Pσ) = 0 = (id2d+1 -P34)( X σ∈S2d+1 Pσ) MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 37 that we have AL d Q ⊂ker e. For f ∈A0(2d + 1, n), we have (1 -e)(f ◦(i ⊗ ̃c⊗d)) = 1 (2d + 1)! X σ∈S2d+1 f ◦((i ⊗ ̃c⊗d) -Pσ(i ⊗ ̃c⊗d)). In order to prove that im(1-e) ⊂AL d Q, it suffices to prove that for any σ ∈S2d+1, there exist x, y ∈kS2d+1 such that (i ⊗ ̃c⊗d) -Pσ(i ⊗ ̃c⊗d) = xQ′ d + yQ′′ d. Since we have (i ⊗ ̃c⊗d) -Pσρ(i ⊗ ̃c⊗d) = (i ⊗ ̃c⊗d) -Pσ(i ⊗ ̃c⊗d) + Pσ((i ⊗ ̃c⊗d) -Pρ(i ⊗ ̃c⊗d)), by induction on the length of the permutation σ, it suffices to prove the existence of x, y if σ is an adjacent transposition. If σ = (12), then we set x = id2d+1, y = 0. If σ = (2j, 2j + 1) for 1 ≤j ≤d, then we set x = y = 0. If σ = (2j -1, 2j) for 2 ≤j ≤d, then we set x = 0, y = 1 2 3 4 5 · · · 2d + 1 1 2j -2 2j -1 2j 2j + 1 · · · \ 2j -2 · · · \ 2j + 1 · · · 2d + 1 . Therefore, we have the direct sum decomposition of AL d . It follows from the decomposition (21) of AL d /AL d,≥1 that the kgrop-module AL d P is isomorphic to S2d+1 ◦a# since we have f ◦P L d = 0 for any f ∈AL 0,≥1(2d + 1, -) and for any f = (id2d+1 -Pτ), τ ∈S2d+1. This completes the proof. □ By an argument similar to [11, Theorem 7.9], we obtain the following kgropmodule structure of AL 1 Q. Theorem 6.11. The kgrop-module AL 1 Q has a unique composition series AL 1 Q ⊃AL 1,≥1 ⊃AL 1,≥2 ⊃0 with composition factors S21 ◦a#, S12 ◦a# and S1 ◦a#. In particular, AL 1 Q is indecomposable. Remark 6.12. The kgrop-module AL d has a composition series of finite length whose set of composition factors consists of Sλ ◦a# for each λ that is obtained by deleting one box from the Young diagram of μ for each composition factor Sμ ◦a# of Ad+1. Conjecture 6.13. For d ≥2, the kgrop-module AL d Q is indecomposable. We have A-submodules ALQ′, ALQ′′ and ALQ of AL(L, H⊗-) that are generated by the anti-symmetric elements Q′ = i ⊗ ̃c -P(12) ◦(i ⊗ ̃c) ∈AL 1 (L, H⊗3), Q′′ = i ⊗ ̃c⊗2 -P(34) ◦(i ⊗ ̃c⊗2) ∈AL 2 (L, H⊗5), QL = {Q′, Q′′}, respectively. Then we also have the following descending filtration of ALQ in A-Mod: ALQ = ALQ≥1 ⊃ALQ≥2 ⊃· · · ⊃ALQ≥d ⊃· · · , where ALQ≥d = ALQ ∩AL ≥d(L, H⊗-). 38 MAI KATADA Conjecture 6.14. The A-modules ALQ′, ALQ′′, ALQ and AL(L, H⊗-) are indecomposable. 7. Perspectives Here, we consider modules over the handlebody groups induced by A-modules. Let B denote the category of bottom tangles in handlebodies. Then B identifies with the opposite Hop of the category H of isotopy classes of embeddings of handlebodies relative to the bottom square. The automorphism group AutH(n) = Hn,1 is known as the handlebody group. Let Bq denote the non-strictification of the category B. Let ˆA = lim ←-d≥0 A/A≥d ∼= Q d≥0 Ad denote the degree completion of the category A. Habiro and Massuyeau [7] constructed a functor Z : Bq →ˆA, which is an extension of the Kontsevich integral for bottom tangles. Let M be an A-module which factors through A/A≥d for some d ≥1. Then we have a Bq-module ̃ M : Bq Z-→ˆA πd -→A/A≥d M -→k-Mod, where πd : ˆA = lim ←-d≥0 A/A≥d →A/A≥d is the projection. The functor ̃ M induces an Hn,1-module structure on the k-vector space M(n) for each n ≥0. For example, for each m ≥0, the A-module AL(L⊗m, H⊗-)/AL ≥d(L⊗m, H⊗-) induces an Hn,1module AL(L⊗m, H⊗n)/AL ≥d(L⊗m, H⊗n) for each n ≥0. Moreover, for d ≥2, the Hn,1-module restricts to a non-trivial module over the twist group, which is the kernel of the canonical map from Hn,1 to Aut(Fn). Recently, the stable cohomology of Hn,1 with some twisted coefficients has been studied [10, 21, 23]. It would be interesting to study the stability of the cohomology of the handlebody group and the twist group with coefficients induced by A-modules. References [1] Gregory Arone. Polynomial functors from free groups to a stable infinity-category. arXiv preprint , 2025. [2] Dror Bar-Natan. On the Vassiliev knot invariants. Topology, 34(2):423-472, 1995. [3] Aur ́elien Djament and Christine Vespa. Sur l'homologie des groupes d'automorphismes des groupes libres `a coefficients polynomiaux. Comment. Math. Helv., 90(1):33-58, 2015. [4] Samuel Eilenberg and Saunders Mac Lane. On the groups H(Π, n). II. Methods of computation. Ann. of Math. (2), 60:49-139, 1954. [5] William Fulton and Joe Harris. Representation theory, volume 129 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1991. A first course, Readings in Mathematics. [6] Kazuo Habiro. On the category of finitely generated free groups. arXiv preprint , 2016. [7] Kazuo Habiro and Gw ́ena ̈el Massuyeau. The Kontsevich integral for bottom tangles in handlebodies. Quantum Topol., 12(4):593-703, 2021. [8] Manfred Hartl, Teimuraz Pirashvili, and Christine Vespa. Polynomial functors from algebras over a set-operad and nonlinear Mackey functors. Int. Math. Res. Not. IMRN, (6):1461-1554, 2015. [9] Vladimir Hinich and Arkady Vaintrob. Cyclic operads and algebra of chord diagrams. Selecta Math. (N.S.), 8(2):237-282, 2002. [10] Tomohiko Ishida and Masatoshi Sato. A twisted first homology group of the handlebody mapping class group. Osaka J. Math., 54(3):587-619, 2017. MODULES OVER THE CATEGORY OF JACOBI DIAGRAMS IN HANDLEBODIES 39 [11] Mai Katada. Actions of automorphism groups of free groups on spaces of Jacobi diagrams. I. Ann. Inst. Fourier (Grenoble), 73(4):1489-1532, 2023. [12] Mai Katada. Actions of automorphism groups of free groups on spaces of Jacobi diagrams. II. J. Inst. Math. Jussieu, 23(1):1-69, 2024. [13] Minkyu Kim. The Lie operad as a subquotient of Jacobi diagrams in handlebodies. Talk at the "Algebraic approaches to mapping class groups of surfaces", 2025. [14] Minkyu Kim and Christine Vespa. On analytic exponential functors on free groups. arXiv preprint , 2024. [15] Alexander Kupers and Oscar Randal-Williams. On the cohomology of Torelli groups. Forum Math. Pi, 8:e7, 83, 2020. [16] Teimuraz Pirashvili. On the PROP corresponding to bialgebras. Cah. Topol. G ́eom. Diff ́er. Cat ́eg., 43(3):221-239, 2002. [17] Geoffrey Powell. On the Passi and the Mal'cev functors. arXiv preprint , 2023. [18] Geoffrey Powell. On analytic contravariant functors on free groups. High. Struct., 8(2):416466, 2024. [19] Geoffrey Powell. Outer functors and a general operadic framework. J. Algebra, 644:526-562, 2024. [20] Geoffrey Powell and Christine Vespa. Higher Hochschild homology and exponential functors. Bull. Soc. Math. France, 153(1):1-141, 2025. [21] Oscar Randal-Williams and Nathalie Wahl. Homological stability for automorphism groups. Adv. Math., 318:534-626, 2017. [22] Steven V. Sam and Andrew Snowden. Stability patterns in representation theory. Forum Math. Sigma, 3:Paper No. e11, 108, 2015. [23] Arthur Souli ́e. Some stable twisted cohomology computations for handlebody mapping class groups. Talk at the "Algebraic approaches to mapping class groups of surfaces", 2025. Graduate 153-8914, Japan Email address:
|
2509.16227
|
J. Loiseau, R. R. Sebastian, Y. Poroshyna, and S. S.-M. Lau-Chapdelaine,
“Estimating Energy Release in Metallized Composite Explosives Using the Taylor Model,” in
“17th International Symposium on Detonation”, pp. 760–770,
Office of Naval Research, Kansas City, USA, 4 Aug. 2024.
Estimating Energy Release in Metallized Composite Explosives Using the Taylor
Model
Jason Loiseau†, Sebastian Rodriguez Rosero‡, Yaroslava Poroshyna∗, S. She-Ming Lau-Chapdelaine†
†Department of Chemistry and Chemical Engineering, Royal Military College of Canada, 11 General Crerar
Crescent, Kingston, ON, Canada, K7K 7B4
‡Department of Mechanical Engineering, McGill University, 817 Sherbrooke St. West, Montreal, QC,
Canada, H3A 0C3
∗Department of Mechanical and Materials Engineering, Queen’s University, 130 Stuart Street, Kingston,
ON, Canada, K7P 2M4
Abstract. The potential for reactive metal fuels to enhance the energetic output of high ex-
plosives has generated an enduring interest in the study of composite explosives. It has typi-
cally been demonstrated that added metal fuels can have little or even deleterious impact on
the accelerating ability of composite military explosives relative to baseline performance.
Often this has led to the assumption of limited reaction of the metal fuel over microsec-
ond timescales. The widespread availability of Photonic Doppler Velocimetry has enabled
time resolved measurement of accelerated confinement, ultimately demonstrating prompt
reaction of metal fuels. Motivated by this observation, hydrocode modelling studies, and
prior author’s modifications of Taylor’s tubular bomb model, we developed a differential
equation form of Taylor’s model in a manner where it is straightforward to add sources or
phases. An afterburning version of the JWL equation of state was used to add energy to
the gaseous products at a linear, time-dependent rate. The metal particles are assumed to
remain in velocity equilibrium with the gaseous products and do not transfer heat or in-
fluence chemical composition. We focus exclusively on added aluminum as it remains the
most ubiquitous choice of metal fuel. The model is initialized with a CJ state calculated
from Cheetah 2.0 assuming the Al particles are inert in the detonation. JWL coefficients for
the baseline explosive are also used. Qualitative agreement is observed between the model
and previously published experiments.
Introduction
Adding reactive metal fuel, typically atomised
aluminum, to high explosives (HE) to increase en-
ergetic output is well-established and extensively
studied. However, mesoscale mechanisms and re-
action kinetics for metal fuels remain unresolved.
Assumptions about these mechanisms influence es-
timates for total energy release and where in the
detonation product expansion it occurs. The funda-
mental observation that added aluminum can react
sufficiently quickly to influence acceleration ability
but not necessarily increase performance over pure
explosives was made by Finger et al.1.
1
arXiv:2509.16227v1 [physics.chem-ph] 12 Sep 2025
However, the influence of anaerobic metal fuel
reaction on the accelerating ability of a composite
HE remains controversial2. While metals oxidized
by H2O, CO2, and CO release more specific energy
than detonating HE, adding metal: 1) reduces prod-
uct gasses available upon detonation by dilution;
2) may reduce product gases through molar decre-
menting reactions; 3) deposits energy over a longer
timescale while pressure accelerating the confining
wall drops rapidly; 4) releases energy that may be
trapped as latent heat in a solid product; and 5) di-
verts momentum from the products to particle drag.
Thus, while added metal may react promptly, can-
celling effects can limit performance.
The widespread availability of photonic Doppler
velocimetry (PDV) has enabled time-resolved mea-
surements of confiner acceleration with simpli-
fied experimental setup.
Studies using PDV
have conclusively demonstrated metal fuel reac-
tions over microsecond timescales, albeit indirectly,
through comparisons to velocimetry traces obtained
with the simple explosive or an inertly diluted
composite3, 4, 5, 6. Rapid reaction is also supported
by measurement of detonation product electrical
conductivity7, 8. Some studies also suggest that a
portion of the fuel may react within the chemical
reaction zone of the detonation based on anomalous
changes in detonation velocity9, 10, 11. This may be
attributed to a reduction in gaseous product as solid
oxides form4.
Experimental studies have shown
a weak effect of particle size on metal fuel ener-
getic output, further confounding potential reaction
models5, 6, 12. Surface oxide passivation likely also
plays a role in fuel reactivity13.
Numerous modelling methodologies have at-
tempted to resolve these complexities.
Thermo-
chemical equilibrium calculations (e.g. Cheetah4,
EXPLO514) can reasonably estimate detonation
parameters and product expansion behaviour for
some composite HEs.
Semi-analytic techniques
can also estimate metal fuel involvement in the
detonation process11, 15. Detailed multiphase cal-
culations are likely necessary to resolve the non-
equilibrium effects of adding large solid fractions to
HE16, 17, 18. Multiphase hydrocodesimulations that
include transport between the particles and gaseous
products and employ program-burn energy release
by the particles have shown good agreement with
experiment. Successive refinements of these two-
phase models and different energy release assump-
tions have resulted in varying predictions for the in-
volvement of Al reaction depending on mass frac-
tion of metal particles and the simple HE stud-
ied, e.g. : nitromethane thickened with polyethylene
glycol and mixed with 10-µm-dia Al in the cylin-
der test19; HMX/wax/50-µm-dia Al in the cylin-
der test20; nitromethane gelled with poly(methyl
methacrylate) and mixed with 50-µm-dia Al in slab
tests21. The latter studies suggest 60–70% of the Al
at 15% mass loading and ∼25% of the Al at 30%
mass loading reacts on wall acceleration timescales.
In the present study, we have attempted to cap-
ture some of the qualitative behaviour of rapid metal
particle reaction in detonation products with a sim-
ple semi-analytic model.
A permutation of Tay-
lor’s tubular bomb model22 was combined with the
Zeldovich-von Neumann-Döring (ZND) equations
to treat the detonation product flow. This method al-
lows source terms to be added easily. An afterburn-
ing version of the Jones-Wilkins-Lee (JWL) equa-
tion of state was used to treat the detonation prod-
ucts with energy added using programmed burn.
Taylor’s method is of historical interest and sim-
ple models may be useful for quick initial fitting of
EOS coefficients before refinement using computa-
tionally expensive hydrocode iterations23.
Taylor Theory
Taylor developed a quasi-1D model to calcu-
late the motion of a cylindrical wall accelerated
by an axially detonating HE22; simultaneously
recognizing the diagnostic value of the geome-
try for measuring detonation product expansion
ahead of widespread acceptance of the cylinder
test (CYLEX)24, 25. Taylor’s model has seen rel-
atively little direct use: Allison et al.
compared
the model to explosively driven cylinders of vary-
ing wall thicknesses, treating the products as a poly-
tropic gas26, 27.
Baker et al.
developed permu-
tations of Taylor’s model for various axisymmet-
ric geometries, while incorporating more realistic
equations of state28, 29, 30.
Baker also extended
Taylor’s model to asymmetric geometries with a La-
grangian plane of zero gas velocity28. Taylor’s kine-
matic relationships for wall tilt angle, and lateral
and longitudinal velocity components are founda-
2
−150
−100
−50
0
50
100
150
y (mm)
Case Wall
Expanding Products
0
50
100
150
200
250
300
x (mm)
0
4
8
12
Pressure (GPa)
Fig. 1. Typical wall shape for a pair of 6.35-mm-thick aluminum flyer plates accelerated by a 22.5-mm-thick
slab of gelled nitromethane. Note the absence of wall thinning in the slab geometry. Also plotted is the spatial
variation in detonation product pressure throughout the expansion process.
tional for reconstructing wall trajectories in exper-
iments instrumented with PDV23, 31, and analytic
predictions of warhead behaviour32, 33.
Taylor postulated that in a detonation-fixed frame
of reference, the confining wall exits the detonation
plane at the detonation velocity, D, with a small an-
gle of tilt, θ, where the lateral velocity component of
the wall is small compared to D. Curvature of the
wall is caused by a centripetal force generated by
the pressure of the detonation products acting on an
internal differential wetted area. The pressure along
the length of the expanding confiner is governed by
the strong form of the Bernoulli equation, coupled
with the integral of the chosen pressure vs. specific
volume relationship, determined from the product
EOS. A typical casing shape from the present Tay-
lor model is shown in Figure 1.
Incorporating time-dependent reactions into this
method is challenging since an integro-differential
equation would result. Instead we have opted to
treat the detonation product flow with the ZND
equations for conservation of mass, momentum, and
energy. With quasi-one dimensional area change to
account for confiner expansion, these equations are:
∂
∂x(ρu) = −ρu 1
A
∂A
∂x
(1)
∂
∂x(ρuu + p) = −ρuu 1
A
∂A
∂x
(2)
∂
∂x(ρYju) = ρ ˙λj −ρuYj
1
A
∂A
∂x
(3)
∂
∂x
ρu
e + 1
2u2 + p
ρ
=
−ρu
e + 1
2u2 + p
ρ
1
A
∂A
∂x
(4)
with density ρ, velocity u, pressure p, cross-
sectional area A, specific internal energy e, mass
fraction Y of species j, and reaction rate ˙λj. Partial
derivatives are taken relative to the position behind
the detonation in the detonation-fixed frame.
Useful manipulations of the mass, momentum,
and energy equations are:
∂ρ
∂x = −ρ
u
∂u
∂x −ρ 1
A
∂A
∂x ,
(5)
∂p
∂x = −ρu∂u
∂x, and
(6)
u∂u
∂x = −∂e
∂x −∂
∂x
p
ρ
.
(7)
Equation of State
The JWL equation of state was chosen given its
simplicity and ubiquity. The authors found inde-
pendent reviews of the JWL EOS by Weseloh34,
Mennikoff35, Segletes36, and Farag37 et al. helpful
when manipulating the equation. In compact nota-
tion the p(e, v) form can be written as:
p =
2
X
i
Λi
1 −ω
Ri
ρ
ρ0
e−(Ri
ρ0
ρ )
+ωρ(e−e0)
3
where ρ0 is the undetonated explosive density, ω is
the constant Grüneisen parameter, Λ1, Λ2, R1, R2
are fitted constants, and e0 ≈−q, where q is the
heat of reaction. In the present study, metal particle
reaction is assumed to add energy through the e0
term such that its derivative is non-zero.
The partial derivative of the detonation product
energy with respect to location behind the detona-
tion can thus be calculated as:
∂e
∂x = ∂
∂x
e0 + 1
ω
p
ρ
+
2
X
i
Λi
1
Riρ0
−1
ωρ
e−(Ri
ρ0
ρ )
Symbolic differentiation yields the following
PDE for the product energy:
∂e
∂x = ∂e0
∂x + 1
ω
∂
∂x
p
ρ
+
2
X
i
Λi
ω + 1
ωρ2 −Ri
ρ0
ωρ3
e−(Ri
ρ0
ρ )
∂ρ
∂x
= ∂e0
∂x + 1
ω
∂
∂x
p
ρ
−1
ω
B(ρ)
ρ2
∂ρ
∂x
(8)
where the following placeholder function is used:
B(ρ) =
2
X
i
Λi
Ri
ρ0
ρ −(ω + 1)
e−(Ri
ρ0
ρ )
Substitution of Equation 8 into Equation 7 yields:
u∂u
∂x = −∂e0
∂x + 1
ω
B(ρ)
ρ2
∂ρ
∂x −ω + 1
ω
∂
∂x
p
ρ
Product-rule expansion of the derivative of the work
term yields:
∂
∂x
p
ρ
= 1
ρ
∂p
∂x + p ∂
∂x
1
ρ
= 1
ρ
∂p
∂x −p
ρ2
∂ρ
∂x
such that:
u∂u
∂x = −∂e0
∂x + 1
ω
B(ρ)
ρ2
∂ρ
∂x
−ω + 1
ω
1
ρ
∂p
∂x −p
ρ2
∂ρ
∂x
= −∂e0
∂x +B(ρ) + p(ω + 1)
ωρ2
∂ρ
∂x −(ω + 1)
ρω
∂p
∂x
The density (5), and pressure (6) differentials are
then substituted to yield:
∂u
∂x =
ω ∂e0
∂x + B(ρ) + p(ω + 1)
ρ
1
A
∂A
∂x
u −B(ρ) + p(ω + 1)
ρu
(9)
Particle reaction
The ZND equations are typically solved by in-
cluding a chemical kinetics mechanism for evolving
gaseous product species15. By using this method,
source terms such as mass, momentum, and energy
transfer between phases can easily be added follow-
ing the body of work using multiphase ZND equa-
tions. Presently, changes in the gaseous species are
ignored and energy from particle reaction is added
at a constant rate to the gasses with no change in
particle mass nor exchange of momentum or heat
between the particle and gas phases. This is simi-
lar to the Miller38, 39 extension for afterburning. A
simple linear burn model was assumed:
DYm
Dt
= ˙λm = 1
τb
(10)
Thus, after converting from a time to spatial deriva-
tive, the energy release is defined as:
∂e0
∂x = φqm
1
u
DYm
Dt
= φqm
1
u
1
τb
(11)
where φ is the initial mass fraction of metal, qm is
the specific energy release of Al (set to 10.6 KJ/g21
in this study), and τb is the burn time, ranging from
25–100 µs.
This analysis implies that the metal particles are
assumed to remain in velocity equilibrium with the
gaseous product and do not depend on heat trans-
fer to deliver energy to the product gases. Influence
of particle oxidation on the composition and total
moles of gaseous products is also neglected. For-
mation of new condensed products from particle re-
action is also not considered.
Cross-sectional area
Motion of the wall is treated using an analogy to
Taylor’s equation of motion, but solved in stream-
4
explosive
case
y
R
d
d
∼tc/2
n
s
x
y
s+ds
uc=D
explosive products
u =D
u =D-ucj
u
tc
ds
β
VY
VX
Vm
Fig. 2. Geometry and flow parameters for the Tay-
lor model. The lab-frame total metal velocity (Vm)
and the lateral (VY) and longitudinal (VX) velocity
components are included.
tube coordinates, and as a function of cross sec-
tional area rather than the casing deflection an-
gle, θ, since this is required for coupling with the
ZND equations.
Only the sandwich geometry is
presented in this analysis, where two walls (flyer
plates) are accelerated by a slab of HE. It is as-
sumed the walls diverge away from the centerline
with a growing rectangular cross-sectional area, A,
between them. In this geometry no wall-thinning
occurs. The present analysis could also be extended
to the more common cylindrical geometry by fol-
lowing the same derivation but where wall thinning
must be included.
Referring to Figure 2, consider a streamline
through the center of the wall thickness as it ex-
pands away from the charge centerline. We adopt
most of the assumptions from Taylor’s original
model: incompressible wall, small deflection an-
gle, and small lateral velocity relative to the detona-
tion velocity because of the quasi-1D assumption in
ZND. In stream tube coordinates the wall only flows
in the streamline direction (ˆs), greatly simplify the
momentum equation for the wall, such that:
1
2
∂u2
c
∂s = −1
ρc
∂p
∂s
and
u2
c
R = −1
ρc
∂p
∂n
(12)
where uc is the wall velocity, R is the instantaneous
radius of curvature of the wall, and ρc is the den-
sity of the wall meterial. The streamline component
is Bernoulli’s equation and the normal component
is the centrifugal force from the pressure gradient
across the case thickness. The latter is equivalent
to Taylor’s original equation of motion. Following
Dehn 40, 41, R can be written exactly through:
1
R =
ds
dθ
−1 =
y′′
(1 + y′2)
3
2
(13)
Since the coordinates are relative to the centre-
line of the wall, the expanding detonation product
thickness is approximately 2(y −tc
2 ) so long as θ
is reasonably small. Neglecting product expansion
off the edges of the HE slab, the instantaneous deto-
nation product thickness can be related to the cross
sectional area by:
A = wc
y −tc
2
(14)
where wc is the width of the slab. Assuming con-
stant wall thickness tc, the derivatives:
y′ = d
dx
A
wc
+ tc
2
= A′
wc
y′′ = d
dx
A′
wc
= A′′
wc
are substituted into Equation 13, yielding:
1
R =
A′′
wc
1 +
A′
wc
2 3
2 =
w2
cA′′
(w2c + A′2)
3
2
(15)
Equation 15 can be combined with Equation 12 to
relate the pressure on the wall to the cross-sectional
area and its derivatives by integrating through the
wall thickness, where curvature 1/R, wall velocity
uc, and wall density ρc, are uniform. Noting the
reversed signs on the integration bounds since ˆn is
positive away from the centre of curvature:
Z ps
p
∂p =
Z y−tc
2
y+ tc
2
−ρc
u2
c
R ∂n
(16)
p −p0 =ρctc
u2
c
R
(17)
where ρctc is the wall mass per unit area, and p0 is
the surrounding pressure, which we assumed to be
zero but retained in subsequent equations.
5
du
dx =
ω ∂e0
∂x + B(ρ) + p(ω + 1)
ρ
1
A
dA
dx
u −B(ρ) + p(ω + 1)
ρu
de0
dx = φqm
1
u
1
τb
dp
dx = −ρudu
dx
dρ
dx = −ρ
u
du
dx −ρA′
A
dA′
dx = A′′ =
p −p0
ρctc
(w2
c + A′2)
3
2
w2c
!
D2 + 2
ρc
(κpCJ −p)
dA
dx = A′
Recall:
B(ρ) =
2
X
i
Λi
Ri
ρ0
ρ −(ω + 1)
e−(Ri
ρ0
ρ )
With initial conditions:
u = (D −uCJ);
p = pCJ;
ρ = ρCJ;
A′ = 0;
A = Acharge
Fig. 3. Summary of the system of equations and initial conditions used in the present model.
The wall velocity can then be related to the pres-
sure and curvature by integrating Bernoulli’s equa-
tion along the stream-tube coordinate, again as-
suming the case is incompressible and the velocity
through the thickness is uniform:
Z v2
c
D2 ∂u2
c = −
Z p
κpCJ
2
ρc
∂p
(18)
u2
c =D2 + 2
ρc
(κpCJ −p)
(19)
while taking the state immediately behind the det-
onation as the lower integration bound, where
uc = D and p = pCJ. In reality, early confiner
motion is compressible because of the reverber-
ating shock transmitted by the detonation.
Pres-
sure at the product–wall interface is governed by
shock impedances and obliqueness of the transmit-
ted shock, but is typically some fraction of the det-
onation pressure42.
To account for these experi-
mental realities we introduced a fitting constant, κ.
In comparison to experiments κ = 1.9 gave best
agreement; a value much higher than would be ex-
pected from results from Neal 42. We have not re-
solved this inconsistency, nor examined values for
other experimental explosive-metal pairs. Note that
if the pressure term in Equation 19 is neglected,
Taylor’s original assumption, uc = D, is recovered.
Equations 15,17, and 19 can be combined:
p −p0 =ρctc
R
D2 + 2
ρc
(κpCJ −p)
and the second derivative of area isolated to yield
a final closure equation as a function of detonation
product pressure:
A′′ =
p −p0
ρctc
(w2
c + A′2)
3
2
w2c
!
D2 + 2
ρc
(κpCJ −p)
(20)
For convenience, the complete system of equa-
tions and initial conditions is summarized in Fig-
ure 3. In the current study the system of equations
6
Fig. 4. Principle isentrope for the baseline explo-
sive (dots), plotted against the p(v) history obtained
from the model (dashed line).
was solved numerically in Python using the Radau
method from the SciPy package ODE solver.
Comparison to Experiment
Confiner wall velocity is now typically measured
using PDV. For a probe observing at 90° from the
initial position of the wall, only the radial veloc-
ity component in the lab frame (VY in Fig.2) is
measured directly.
This velocity is equivalent to
the derivative of y in the present model’s detona-
tion fixed frame. The detonation-fixed frame spatial
derivative can be related to the lab-fixed time deriva-
tive via:
∂
∂t
lab
= D ∂
∂x
such that the lateral velocity from the model can be
compared to the PDV velocity history for the sand-
wich geometry using:
Vc,Y |lab = D d
dx
y + tc
2
= D A′
wc
(21)
We compare the present model with symmetric
sandwich experiments from Loiseau et al.5.
The
subset of experiments presently considered used
6.35-mm-thick 6061 aluminum flyer plates.
The
test explosive was nitromethane (NM) gelled with
4% poly(methyl methacrylate) by mass. The gelled
NM was sensitized with 0.5% 3M K1 glass mi-
croballoons (GMB) by mass and 15%, or 30%
Fig. 5. Experimental velocity histories (dots) plot-
ted against the predictions of the present model.
Valimet H-50 aluminum powder was added by mass
to the sensitized mixture. An inert control consist-
ing of 20.5% alumina powder by mass is also con-
sidered. The explosive cavity was initially 22.5 mm
thick and the sandwich had a width of 10.2 cm. The
PDV probes measured at 90°.
Cheetah 2.0 was used to determine JWL coeffi-
cients for the baseline explosive (0% Al), alumina
control, and 15% or 30% Al assuming either com-
plete reaction or entirely inert behaviour for the Al.
The GMB were treated as porosity. JWL coeffi-
cients are shown in Table 1.
An initial validation was performed to confirm
that the present model reproduces expansion along
the principle isentrope when no afterburning energy
is added. For the baseline explosive, the principle
isentrope was calculated via:
ps = Λ1e−R1
ρ0
ρ + Λ2e−R2
ρ0
ρ + C
ρ0
ρ
−(1+ω)
(22)
and is plotted against the p(v) history from the
model in Figure 4. Agreement is reasonable over
the accessible range of relative volumes.
Figure 5 shows model predictions for wall accel-
eration plotted as lines versus experimental velocity
histories plotted as dots. Model predictions using
Cheetah to treat Al reaction are denoted by “-cha”,
those using programmed burn are donated by “-P.
burn”. Note the smooth ballistic acceleration of the
wall in these experiments. This is because of the
7
Table 1. Summary of detonation properties and JWL coefficients.
Explosive
ρ0
D
Pcj
ucj
ρcj
Λ1
Λ2
C
R1
R2
ω
g/cc km/s
GPa
km/s g/cc
GPa
GPa GPa
-
-
-
Baseline
1.09 5.80 10.17 1.60 1.51 190.48 3.93 1.08 4.58 1.04 0.35
20.5% Alumina 1.29 5.20
8.92
1.33 1.73 248.20 3.56 0.91 4.94 1.08 0.28
15% Al inert
1.20 5.42
8.96
1.38 1.61 293.37 4.75 0.94 5.25 1.17 0.30
30% Al inert
1.33 5.13
7.91
1.16 1.72 293.86 1.59 0.45 4.84 0.64 0.12
15% Al active
1.20 5.91 12.03 1.70 1.68 159.19 2.18 1.33 3.96 0.72 0.27
Fig. 6. Comparison of p(v) histories.
relatively low brisance of the NM explosive and the
detonation being approximately sonic relative to the
sound speed in aluminum. The characteristic sur-
face oscillations from shock reverberation are thus
suppressed. The model accurately predicts the ac-
celeration history of the wall for the baseline case
and also the inert control diluted with 20.5% alu-
mina. The accuracy for the inert control is surpris-
ing given the presumed importance of momentum
and heat transfer during acceleration of the particles
in the detonation and during product expansion.
For 15% Al reacting in equilibrium, model re-
sults using Cheetah-derived JWL coefficients agree
reasonably well with the experimental results, but
under-predict wall velocity after ≈10 µs. For the af-
terburning model predictions for 15% Al, the base-
line JWL parameters were combined with a burn
time (τb) of 25 µs. The model again underpredicts
the experimental result at early times but matches
and then slightly exceeds the experimental values
at probe cut-out. This suggests that the linear burn
model adds too much energy, but also adds it too
late in the expansion process. The influence of par-
ticle energy release on effective product pressure
is shown in Figure 6, which plots the p(v) histo-
ries of the model predictions, versus experimental
isentropes extracted using the method outlined by
Jackson23.
Cheetah yielded poor JWL fits for 30% Al react-
ing in equilibrium, which resulted in non-physical
predictions of wall velocity. These results are thus
omitted from Figure 5. For the afterburning model
predictions for 30% Al, the baseline JWL parame-
ters were again used, but a burn time of 80 µs was
instead specified. Agreement was overall good, but
continued acceleration of the wall after 50 µs, be-
yond the probe cut-off, is likely non physical.
Concluding Remarks
A simple, semi-analytic model was developed
following the assumptions established by Taylor22.
When appropriate EOS coefficients were used, rea-
sonable agreement with flyer plate experiments was
observed. A simple linear burn model for Al parti-
cle reaction qualitatively suggests that a significant
fraction of aluminum can burn to influence metal
acceleration.
References
1. Finger, M., Hornig, H. C., Lee, E. L. and Kury,
J. W., “Metal Acceleration by Composite Ex-
plosives,” in “5th International Symposium on
Detonation,” pp. 137–152, Office of Naval Re-
search, Pasadena, CA, 18–21 August 1970.
2. Ermolaev, B., Khasainov, B., Baudin, G. and
Presles, H. N., “Behavior of Aluminum in
Detonation of High Explosives. Surprises and
8
Interpretations,” Russian Journal of Chemical
Physics, Vol. 18, pp. 1121–1140, 01 2000.
3. Manner, V. W., Pemberton, S. J., Gunderson,
J. A., Herrera, T. J., Lloyd, J. M., Salazar, P. J.,
Rae, P. and Tappan, B. C., “The Role of Alu-
minum in the Detonation and Post-Detonation
Expansion of Selected Cast HMX-Based Ex-
plosives,” Propellants, Explosives, Pyrotech-
nics, Vol. 37, pp. 198–206, 2012.
4. Tappan, B. C., Hill, L. G., Manner, V. W., Pem-
berton, S. J., Lieber, M. A., Johnson, C. and
Sanders, V. E., “Reactions of Powdered Alu-
minum with Explosives that Selectively Form
Carbon Dioxide or Water as Oxidizers,” Inter-
national Journal of Energetic Materials and
Chemical Propulsion, Vol. 15, pp. 339–350,
2016.
5. Loiseau, J., Goroshin, S., Frost, D. L., Hig-
gins, A. J. and Zhang, F., “Ability of Metal-
ized Gelled Nitromethane to Accelerate a Flyer
Plate,” in “16th International Symposium on
Detonation,” Office of Naval Research, Cam-
bridge, MD, 15–20 July 2018.
6. Zhang, D., Yi, Z., Gan, Y., Liu, Q., Liu, F. and
Li, X., “On Weak Influence of Aluminum Pow-
der Size on its Post-Detonation Reaction in Dif-
ferent Time Scales,” AIP Advances, Vol. 14, p.
075014, 07 2024.
7. Gilev, S. D. and Trubachev, A. M., “Detona-
tion Properties and Electrical Conductivity of
Explosive–Metal Additive Mixtures,” Combus-
tion, Explosion and Shock Waves, Vol. 38, pp.
219–234, 2002.
8. Gilev, S. D. and Anisichkin, V. F., “Interaction
of Aluminum with Detonation Products,” Com-
bustion, Explosion and Shock Waves, Vol. 42,
pp. 107–115, 2006.
9. Davydov, V. Y., Grishkin, A. M. and Feodori-
tov, I. I., “Experimental-Theoretical Investiga-
tion of the Oxidation of Aluminum in Detona-
tion Waves,” Combustion, Explosion and Shock
Waves, Vol. 28, pp. 564–568, 1992.
10. Cowperthwaite, M., “Nonideal Detonation in a
Composite CHNO Explosive Containing Alu-
minum,” in “10th International Symposium on
Detonation,” pp. 656—-664, Office of Naval
Research, Boston, MA, 12–16 July 1993.
11. Gonthier, K. and Rumchik, C., “Theory and
Analysis of Non-ideal Detonation for RDX-
metal Mixtures,” in “13th International Sympo-
sium on Detonation,” pp. 176–186, Office of
Naval Research, Norfolk, VA, 23–28 July 2006.
12. Li, X., Pei, H., Zhang, X. and Zheng, X., “Ef-
fect of Aluminum Particle Size on the Perfor-
mance of Aluminized Explosives,” Propellants,
Explosives, Pyrotechnics, Vol. 45, pp. 807–813,
2020.
13. Lewis, W. K., Rumchik, C. G., Smith, M. J.,
Fernando, K. A. S., Crouse, C. A., Spowart,
J. E., Guliants, E. A. and Bunker, C. E., “Com-
parison of Post-detonation Combustion in Ex-
plosives Incorporating Aluminum Nanoparti-
cles: Influence of the Passivation Layer,” Jour-
nal of Applied Physics, Vol. 113, p. 044907, 01
2013.
14. Suceska, M., Dobrilovic, M., Bohanek, V. and
Stimac, B., “Estimation of Explosive Energy
Output by EXPLO5 Thermochemical Code,”
Zeitschrift für Anorganische und Allgemeine
Chemie, Vol. 647, pp. 231–238, 2021.
15. Cowperthwaite, M., “Some Aspects of Non-
ideal Detonation in Composite Explosives,”
Journal of Energetic Materials, Vol. 1, pp. 141–
175, 1983.
16. Mader, C. L., Kershner, J. D. and Pimbley,
G. H., “Three-Dimensional Modeling of In-
ert Metal-Loaded Explosives,” Journal of En-
ergetic Materials, Vol. 1, pp. 293–324, 1983.
17. Mader, C. L., Numerical Modeling of Explo-
sives and Propellants, CRC press, 2007.
18. Ripley, R. C., Zhang, F. and Lien, F.-S., “Ac-
celeration and Heating of Metal Particles in
Condensed Matter Detonation,” Proceedings of
the Royal Society of London A: Mathematical,
Physical and Engineering Sciences, Vol. 468,
pp. 1564–1590, 2012.
9
19. Milne, A. M., Longbottom, A. W., Evans, D. J.,
Haskins, P. J., Cook, M. D. and Briggs, R. I.,
“The Burning Rate of Aluminium Particles in
Nitromethane in Cylinder Tests,” in “12th Inter-
national Symposium on Detonation,” pp. 895–
900, Office of Naval Research, San Diego, CA,
11–16 July 2002.
20. Milne, A. M., Bennett, K. and Longbottom,
A. W., “Modelling a Suite of Aluminized Ex-
plosives Experiments,” in “14th International
Symposium on Detonation,” pp. 51–60, Office
of Naval Research, Coeur d’Alene, ID, 11–16
April 2010.
21. Pontalier, Q., Loiseau, J., Longbottom, A. and
Frost, D. L., “Simulating the Propulsive Ca-
pability of Explosives Loaded with Inert and
Reactive Materials,” AIP Conference Proceed-
ings, Vol. 2272, p. 050023, 11 2020.
22. Taylor, G. I., “Analysis of the Explosion of
a Long Cylindrical Bomb Detonated at One
End,” in “Aerodynamics and the Mechanics
of Projectiles and Explosions,” Vol. 3 of The
Scientific Papers of Sir Geoffrey Ingram Tay-
lor, pp. 277–286, Cambridge University Press,
1963.
23. Jackson, S. I., “An Analytic Method for Two-
Dimensional Wall Motion and Product Isen-
trope from the Detonation Cylinder Test,” Pro-
ceedings of the Combustion Institute, Vol. 35,
pp. 1997–2004, 2015.
24. Kury, J. W., Hornig, H. C., Lee, E. L., McDon-
nel, J. L. and Ornellas, D. L., “Metal Accel-
eration by Chemical Explosives,” in “4th Inter-
national Symposium on Detonation,” Office of
Naval Research, Silver Spring, MD, 12–15 Oc-
tober 1965.
25. Lee, E. L., Hornig, H. C. and Kury, J. W.,
“Adiabatic Expansion of High Explosive Det-
onation Products,” Technical Report UCRL–
50422, Lawrence Radiation Laboratory, Liver-
more, CA, 1968.
26. Allison, F. E. and Watson, R. W., “Explosively
Loaded Metallic Cylinders - I,” Journal of Ap-
plied Physics, Vol. 31, pp. 842–845, 1960.
27. Allison, F. E. and Schriempf, J. T., “Explo-
sively Loaded Metallic Cylinders - II,” Journal
of Applied Physics, Vol. 31, pp. 846–851, 1960.
28. Baker, E. L., “Modeling and Optimization of
Shaped Charge Liner Collapse and Jet For-
mation,” Technical Report ARAED-TR-92019,
ARDEC, Picatinny Arsenal, NJ, 1993.
29. Baker, E. L., Murphy, D., Capellos, C., Ander-
son, P., Wrobel, E. and Stiel, L., “Recent Com-
bined Effects Explosives Technology,” Tech-
nical Report ARMET-TR-10004, ARDEC, Pi-
catinny Arsenal, NJ, 2010.
30. Baker, E., Murphy, D., Stiel, L. and Wro-
bel, E., “Theory and Calibration of JWL and
JWLB Thermodynamic Equations of State,”
WIT Transactions on the Built Environment,
Vol. 113, pp. 147–158, 2010.
31. Chaos, M., “Revisiting the Kinematics of the
Cylinder Test,” Propellants, Explosives, Py-
rotechnics, Vol. 47, p. e202100349, 2022.
32. Walters, W. P., “Explosive Loading of Metals
and Related Topics,” Tech. Rep. BRL-SP-56,
US Army Ballistic Research Laboratory, Ab-
erdeen Proving Ground, MD, May 1986.
33. Walters, W. P. and Zukas, J. A., Fundamentals
of Shaped Charges, Wiley-Interscience, New
York, NY, 1989.
34. Weseloh, W., “JWL in a Nutshell,” Tech. Rep.
LA-UR-14-24318, Los Alamos National Labo-
ratory, 06 2014.
35. Menikoff, R., “JWL Equation of State,” Tech.
Rep. LA-UR-15-29536, Los Alamos National
Laboratory, 07 2017.
36. Segletes, S. B., “An Examination of the JWL
Equation of State,” Tech. Rep. ARL-TR-8403,
US Army Research Laboratory, 07 2018.
37. Farag, G. and Chinnayya, A., “On the Jones-
Wilkins-Lee Equation of State for High Ex-
plosive Products,” Propellants, Explosives, Py-
rotechnics, Vol. 49, p. e202300223, 2024.
10
38. Miller, P. J. and Guirguis, R. H., “Experimental
Study and Model Calculations of Metal Com-
bustion in Al/Ap Underwater Explosives,” MRS
Online Proceedings Library, Vol. 296, pp. 299–
304, 1992.
39. Daniel J. Meaney, N. G. and Brown, R. E.,
“Thermochemical Release Mechanisms and
Circumferential Initiation Affecting Detonation
in an Aluminized Explosive,” Journal of Ener-
getic Materials, Vol. 42, pp. 146–167, 2024.
40. Dehn, J. T., “Models of Explosively Driven
Metal,” Tech. Rep. BRL-TR-2626, US Army
Ballistic Research Laboratory, Aberdeen Prov-
ing Ground, MD, December 1984.
41. Dehn, J. T., “Models of Explosively Driven
Metal,” in “8th International Symposium on
Detonation,” , edited by Short, J. M., pp. 602–
612, Office of Naval Research, Albuquerque,
NM, 15–19 July 1985.
42. Neal, T., “Perpendicular Explosive Drive and
Oblique Shocks,” in “6th International Sympo-
sium on Detonation,” pp. 602–611, Office of
Naval Research, Coronado, CA, 24–27 August
1976.
Question from Sorin Bastea, LLNL
Have you looked at the effect of Al particle size?
Reply by Jason Loiseau
We have not yet investigated particle size in the
model.
The current implementation can only
prescribe a burn time, and the correlation between
burn time and particle size (e.g. extrapolations from
Beckstead correlations to detonation products) is
still controversial. We did investigate Al particle
size effects experimentally and presented these
results at the previous IDS: we saw a weak effect
of particle size on the accelerating ability of the
composite HE.
Question from Tim Manship, Purdue University
Very fascinating approach!
For your model, do
you assume aluminum combustion is just adding
energy to gas products or are you also accounting
for addition of aluminum combustion products?
Reply by Jason Loiseau
We made the simplifying assumption that Al
combustion adds energy directly into the detonation
product gases.
We neglect any effects on the
chemical composition of the detonation products,
and neglect the formation and condensation of
solid Al products or additional carbon as oxygen
is scavenged from CO and CO2.
We thus also
neglected any heat transfer to/from particles or
solid product.
Question from Christopher Miller, LLNL
How well was the aluminum distributed throughout
the samples and how would non-homogeneity
influence your model?
Reply by Jason Loiseau
Free-flowing powders mix well into the gel. Based
on sample microscopy we have observed good uni-
formity and repeat trials have shown good repro-
ducibility. Since the model is quasi-1D it cannot ac-
count for inhomogeneity along the height or width
of the slab (or radially in cylinders). Longitudinal
inhomogenity could be addressed by varying parti-
cle mass fraction and interpolating JWL parameters
for different initial solid loadings.
11
|
J. Loiseau, R. R. Sebastian, Y. Poroshyna, and S. S.-M. Lau-Chapdelaine, "Estimating Energy Release in Metallized Composite Explosives Using the Taylor Model," in "17th International Symposium on Detonation", pp. 760-770, Office of Naval Research, Kansas City, USA, 4 Aug. 2024. Estimating Energy Release in Metallized Composite Explosives Using the Taylor Model Jason Loiseau†, Sebastian Rodriguez Rosero‡, Yaroslava Poroshyna∗, S. She-Ming Lau-Chapdelaine† † 11 General Crerar Crescent, Kingston, ON, Canada, K7K 7B4 ‡ 817 Sherbrooke St. West, Montreal, QC, Canada, H3A 0C3 ∗ 's University, 130 Stuart Street, Kingston, ON, Canada, K7P 2M4 Abstract. The potential for reactive metal fuels to enhance the energetic output of high explosives has generated an enduring interest in the study of composite explosives. It has typically been demonstrated that added metal fuels can have little or even deleterious impact on the accelerating ability of composite military explosives relative to baseline performance. Often this has led to the assumption of limited reaction of the metal fuel over microsecond timescales. The widespread availability of Photonic Doppler Velocimetry has enabled time resolved measurement of accelerated confinement, ultimately demonstrating prompt reaction of metal fuels. Motivated by this observation, hydrocode modelling studies, and prior author's modifications of Taylor's tubular bomb model, we developed a differential equation form of Taylor's model in a manner where it is straightforward to add sources or phases. An afterburning version of the JWL equation of state was used to add energy to the gaseous products at a linear, time-dependent rate. The metal particles are assumed to remain in velocity equilibrium with the gaseous products and do not transfer heat or influence chemical composition. We focus exclusively on added aluminum as it remains the most ubiquitous choice of metal fuel. The model is initialized with a CJ state calculated from Cheetah 2.0 assuming the Al particles are inert in the detonation. JWL coefficients for the baseline explosive are also used. Qualitative agreement is observed between the model and previously published experiments. Introduction Adding reactive metal fuel, typically atomised aluminum, to high explosives (HE) to increase energetic output is well-established and extensively studied. However, mesoscale mechanisms and reaction kinetics for metal fuels remain unresolved. Assumptions about these mechanisms influence estimates for total energy release and where in the detonation product expansion it occurs. The fundamental observation that added aluminum can react sufficiently quickly to influence acceleration ability but not necessarily increase performance over pure explosives was made by Finger et al.1. 1 12 Sep 2025 However, the influence of anaerobic metal fuel reaction on the accelerating ability of a composite HE remains controversial2. While metals oxidized by H2O, CO2, and CO release more specific energy than detonating HE, adding metal: 1) reduces product gasses available upon detonation by dilution; 2) may reduce product gases through molar decrementing reactions; 3) deposits energy over a longer timescale while pressure accelerating the confining wall drops rapidly; 4) releases energy that may be trapped as latent heat in a solid product; and 5) diverts momentum from the products to particle drag. Thus, while added metal may react promptly, cancelling effects can limit performance. The widespread availability of photonic Doppler velocimetry (PDV) has enabled time-resolved measurements of confiner acceleration with simplified experimental setup. Studies using PDV have conclusively demonstrated metal fuel reactions over microsecond timescales, albeit indirectly, through comparisons to velocimetry traces obtained with the simple explosive or an inertly diluted composite3, 4, 5, 6. Rapid reaction is also supported by measurement of detonation product electrical conductivity7, 8. Some studies also suggest that a portion of the fuel may react within the chemical reaction zone of the detonation based on anomalous changes in detonation velocity9, 10, 11. This may be attributed to a reduction in gaseous product as solid oxides form4. Experimental studies have shown a weak effect of particle size on metal fuel energetic output, further confounding potential reaction models5, 6, 12. Surface oxide passivation likely also plays a role in fuel reactivity13. Numerous modelling methodologies have attempted to resolve these complexities. Thermochemical equilibrium calculations (e.g. Cheetah4, EXPLO514) can reasonably estimate detonation parameters and product expansion behaviour for some composite HEs. Semi-analytic techniques can also estimate metal fuel involvement in the detonation process11, 15. Detailed multiphase calculations are likely necessary to resolve the nonequilibrium effects of adding large solid fractions to HE16, 17, 18. Multiphase hydrocodesimulations that include transport between the particles and gaseous products and employ program-burn energy release by the particles have shown good agreement with experiment. Successive refinements of these twophase models and different energy release assumptions have resulted in varying predictions for the involvement of Al reaction depending on mass fraction of metal particles and the simple HE studied, e.g. : nitromethane thickened with polyethylene glycol and mixed with 10-μm-dia Al in the cylinder test19; HMX/wax/50-μm-dia Al in the cylinder test20; nitromethane gelled with poly(methyl methacrylate) and mixed with 50-μm-dia Al in slab tests21. The latter studies suggest 60-70% of the Al at 15% mass loading and ∼25% of the Al at 30% mass loading reacts on wall acceleration timescales. In the present study, we have attempted to capture some of the qualitative behaviour of rapid metal particle reaction in detonation products with a simple semi-analytic model. A permutation of Taylor's tubular bomb model22 was combined with the Zeldovich-von Neumann-Döring (ZND) equations to treat the detonation product flow. This method allows source terms to be added easily. An afterburning version of the Jones-Wilkins-Lee (JWL) equation of state was used to treat the detonation products with energy added using programmed burn. Taylor's method is of historical interest and simple models may be useful for quick initial fitting of EOS coefficients before refinement using computationally expensive hydrocode iterations23. Taylor Theory Taylor developed a quasi-1D model to calculate the motion of a cylindrical wall accelerated by an axially detonating HE22; simultaneously recognizing the diagnostic value of the geometry for measuring detonation product expansion ahead of widespread acceptance of the cylinder test (CYLEX)24, 25. Taylor's model has seen relatively little direct use: Allison et al. compared the model to explosively driven cylinders of varying wall thicknesses, treating the products as a polytropic gas26, 27. Baker et al. developed permutations of Taylor's model for various axisymmetric geometries, while incorporating more realistic equations of state28, 29, 30. Baker also extended Taylor's model to asymmetric geometries with a Lagrangian plane of zero gas velocity28. Taylor's kinematic relationships for wall tilt angle, and lateral and longitudinal velocity components are founda2 -150 -100 -50 0 50 100 150 y (mm) Case Wall Expanding Products 0 50 100 150 200 250 300 x (mm) 0 4 8 12 Pressure (GPa) Fig. 1. Typical wall shape for a pair of 6.35-mm-thick aluminum flyer plates accelerated by a 22.5-mm-thick slab of gelled nitromethane. Note the absence of wall thinning in the slab geometry. Also plotted is the spatial variation in detonation product pressure throughout the expansion process. tional for reconstructing wall trajectories in experiments instrumented with PDV23, 31, and analytic predictions of warhead behaviour32, 33. Taylor postulated that in a detonation-fixed frame of reference, the confining wall exits the detonation plane at the detonation velocity, D, with a small angle of tilt, θ, where the lateral velocity component of the wall is small compared to D. Curvature of the wall is caused by a centripetal force generated by the pressure of the detonation products acting on an internal differential wetted area. The pressure along the length of the expanding confiner is governed by the strong form of the Bernoulli equation, coupled with the integral of the chosen pressure vs. specific volume relationship, determined from the product EOS. A typical casing shape from the present Taylor model is shown in Figure 1. Incorporating time-dependent reactions into this method is challenging since an integro-differential equation would result. Instead we have opted to treat the detonation product flow with the ZND equations for conservation of mass, momentum, and energy. With quasi-one dimensional area change to account for confiner expansion, these equations are: ∂ ∂x(ρu) = -ρu 1 A ∂A ∂x (1) ∂ ∂x(ρuu + p) = -ρuu 1 A ∂A ∂x (2) ∂ ∂x(ρYju) = ρ ̇λj -ρuYj 1 A ∂A ∂x (3) ∂ ∂x ρu e + 1 2u2 + p ρ = -ρu e + 1 2u2 + p ρ 1 A ∂A ∂x (4) with density ρ, velocity u, pressure p, crosssectional area A, specific internal energy e, mass fraction Y of species j, and reaction rate ̇λj. Partial derivatives are taken relative to the position behind the detonation in the detonation-fixed frame. Useful manipulations of the mass, momentum, and energy equations are: ∂ρ ∂x = -ρ u ∂u ∂x -ρ 1 A ∂A ∂x , (5) ∂p ∂x = -ρu∂u ∂x, and (6) u∂u ∂x = -∂e ∂x -∂ ∂x p ρ . (7) Equation of State The JWL equation of state was chosen given its simplicity and ubiquity. The authors found independent reviews of the JWL EOS by Weseloh34, Mennikoff35, Segletes36, and Farag37 et al. helpful when manipulating the equation. In compact notation the p(e, v) form can be written as: p = 2 X i Λi 1 -ω Ri ρ ρ0 e-(Ri ρ0 ρ ) +ωρ(e-e0) 3 where ρ0 is the undetonated explosive density, ω is the constant Grüneisen parameter, Λ1, Λ2, R1, R2 are fitted constants, and e0 ≈-q, where q is the heat of reaction. In the present study, metal particle reaction is assumed to add energy through the e0 term such that its derivative is non-zero. The partial derivative of the detonation product energy with respect to location behind the detonation can thus be calculated as: ∂e ∂x = ∂ ∂x e0 + 1 ω p ρ + 2 X i Λi 1 Riρ0 -1 ωρ e-(Ri ρ0 ρ ) Symbolic differentiation yields the following PDE for the product energy: ∂e ∂x = ∂e0 ∂x + 1 ω ∂ ∂x p ρ + 2 X i Λi ω + 1 ωρ2 -Ri ρ0 ωρ3 e-(Ri ρ0 ρ ) ∂ρ ∂x = ∂e0 ∂x + 1 ω ∂ ∂x p ρ -1 ω B(ρ) ρ2 ∂ρ ∂x (8) where the following placeholder function is used: B(ρ) = 2 X i Λi Ri ρ0 ρ -(ω + 1) e-(Ri ρ0 ρ ) Substitution of Equation 8 into Equation 7 yields: u∂u ∂x = -∂e0 ∂x + 1 ω B(ρ) ρ2 ∂ρ ∂x -ω + 1 ω ∂ ∂x p ρ Product-rule expansion of the derivative of the work term yields: ∂ ∂x p ρ = 1 ρ ∂p ∂x + p ∂ ∂x 1 ρ = 1 ρ ∂p ∂x -p ρ2 ∂ρ ∂x such that: u∂u ∂x = -∂e0 ∂x + 1 ω B(ρ) ρ2 ∂ρ ∂x -ω + 1 ω 1 ρ ∂p ∂x -p ρ2 ∂ρ ∂x = -∂e0 ∂x +B(ρ) + p(ω + 1) ωρ2 ∂ρ ∂x -(ω + 1) ρω ∂p ∂x The density (5), and pressure (6) differentials are then substituted to yield: ∂u ∂x = ω ∂e0 ∂x + B(ρ) + p(ω + 1) ρ 1 A ∂A ∂x u -B(ρ) + p(ω + 1) ρu (9) Particle reaction The ZND equations are typically solved by including a chemical kinetics mechanism for evolving gaseous product species15. By using this method, source terms such as mass, momentum, and energy transfer between phases can easily be added following the body of work using multiphase ZND equations. Presently, changes in the gaseous species are ignored and energy from particle reaction is added at a constant rate to the gasses with no change in particle mass nor exchange of momentum or heat between the particle and gas phases. This is similar to the Miller38, 39 extension for afterburning. A simple linear burn model was assumed: DYm Dt = ̇λm = 1 τb (10) Thus, after converting from a time to spatial derivative, the energy release is defined as: ∂e0 ∂x = φqm 1 u DYm Dt = φqm 1 u 1 τb (11) where φ is the initial mass fraction of metal, qm is the specific energy release of Al (set to 10.6 KJ/g21 in this study), and τb is the burn time, ranging from 25-100 μs. This analysis implies that the metal particles are assumed to remain in velocity equilibrium with the gaseous product and do not depend on heat transfer to deliver energy to the product gases. Influence of particle oxidation on the composition and total moles of gaseous products is also neglected. Formation of new condensed products from particle reaction is also not considered. Cross-sectional area Motion of the wall is treated using an analogy to Taylor's equation of motion, but solved in stream4 explosive case y R d d ∼tc/2 n s x y s+ds uc=D explosive products u =D u =D-ucj u tc ds β VY VX Vm Fig. 2. Geometry and flow parameters for the Taylor model. The lab-frame total metal velocity (Vm) and the lateral (VY) and longitudinal (VX) velocity components are included. tube coordinates, and as a function of cross sectional area rather than the casing deflection angle, θ, since this is required for coupling with the ZND equations. Only the sandwich geometry is presented in this analysis, where two walls (flyer plates) are accelerated by a slab of HE. It is assumed the walls diverge away from the centerline with a growing rectangular cross-sectional area, A, between them. In this geometry no wall-thinning occurs. The present analysis could also be extended to the more common cylindrical geometry by following the same derivation but where wall thinning must be included. Referring to Figure 2, consider a streamline through the center of the wall thickness as it expands away from the charge centerline. We adopt most of the assumptions from Taylor's original model: incompressible wall, small deflection angle, and small lateral velocity relative to the detonation velocity because of the quasi-1D assumption in ZND. In stream tube coordinates the wall only flows in the streamline direction (ˆs), greatly simplify the momentum equation for the wall, such that: 1 2 ∂u2 c ∂s = -1 ρc ∂p ∂s and u2 c R = -1 ρc ∂p ∂n (12) where uc is the wall velocity, R is the instantaneous radius of curvature of the wall, and ρc is the density of the wall meterial. The streamline component is Bernoulli's equation and the normal component is the centrifugal force from the pressure gradient across the case thickness. The latter is equivalent to Taylor's original equation of motion. Following Dehn 40, 41, R can be written exactly through: 1 R = ds dθ -1 = y′′ (1 + y′2) 3 2 (13) Since the coordinates are relative to the centreline of the wall, the expanding detonation product thickness is approximately 2(y -tc 2 ) so long as θ is reasonably small. Neglecting product expansion off the edges of the HE slab, the instantaneous detonation product thickness can be related to the cross sectional area by: A = wc y -tc 2 (14) where wc is the width of the slab. Assuming constant wall thickness tc, the derivatives: y′ = d dx A wc + tc 2 = A′ wc y′′ = d dx A′ wc = A′′ wc are substituted into Equation 13, yielding: 1 R = A′′ wc 1 + A′ wc 2 3 2 = w2 cA′′ (w2c + A′2) 3 2 (15) Equation 15 can be combined with Equation 12 to relate the pressure on the wall to the cross-sectional area and its derivatives by integrating through the wall thickness, where curvature 1/R, wall velocity uc, and wall density ρc, are uniform. Noting the reversed signs on the integration bounds since ˆn is positive away from the centre of curvature: Z ps p ∂p = Z y-tc 2 y+ tc 2 -ρc u2 c R ∂n (16) p -p0 =ρctc u2 c R (17) where ρctc is the wall mass per unit area, and p0 is the surrounding pressure, which we assumed to be zero but retained in subsequent equations. 5 du dx = ω ∂e0 ∂x + B(ρ) + p(ω + 1) ρ 1 A dA dx u -B(ρ) + p(ω + 1) ρu de0 dx = φqm 1 u 1 τb dp dx = -ρudu dx dρ dx = -ρ u du dx -ρA′ A dA′ dx = A′′ = p -p0 ρctc (w2 c + A′2) 3 2 w2c ! D2 + 2 ρc (κpCJ -p) dA dx = A′ Recall: B(ρ) = 2 X i Λi Ri ρ0 ρ -(ω + 1) e-(Ri ρ0 ρ ) With initial conditions: u = (D -uCJ); p = pCJ; ρ = ρCJ; A′ = 0; A = Acharge Fig. 3. Summary of the system of equations and initial conditions used in the present model. The wall velocity can then be related to the pressure and curvature by integrating Bernoulli's equation along the stream-tube coordinate, again assuming the case is incompressible and the velocity through the thickness is uniform: Z v2 c D2 ∂u2 c = - Z p κpCJ 2 ρc ∂p (18) u2 c =D2 + 2 ρc (κpCJ -p) (19) while taking the state immediately behind the detonation as the lower integration bound, where uc = D and p = pCJ. In reality, early confiner motion is compressible because of the reverberating shock transmitted by the detonation. Pressure at the product-wall interface is governed by shock impedances and obliqueness of the transmitted shock, but is typically some fraction of the detonation pressure42. To account for these experimental realities we introduced a fitting constant, κ. In comparison to experiments κ = 1.9 gave best agreement; a value much higher than would be expected from results from Neal 42. We have not resolved this inconsistency, nor examined values for other experimental explosive-metal pairs. Note that if the pressure term in Equation 19 is neglected, Taylor's original assumption, uc = D, is recovered. Equations 15,17, and 19 can be combined: p -p0 =ρctc R D2 + 2 ρc (κpCJ -p) and the second derivative of area isolated to yield a final closure equation as a function of detonation product pressure: A′′ = p -p0 ρctc (w2 c + A′2) 3 2 w2c ! D2 + 2 ρc (κpCJ -p) (20) For convenience, the complete system of equations and initial conditions is summarized in Figure 3. In the current study the system of equations 6 Fig. 4. Principle isentrope for the baseline explosive (dots), plotted against the p(v) history obtained from the model (dashed line). was solved numerically in Python using the Radau method from the SciPy package ODE solver. Comparison to Experiment Confiner wall velocity is now typically measured using PDV. For a probe observing at 90° from the initial position of the wall, only the radial velocity component in the lab frame (VY in Fig.2) is measured directly. This velocity is equivalent to the derivative of y in the present model's detonation fixed frame. The detonation-fixed frame spatial derivative can be related to the lab-fixed time derivative via: ∂ ∂t lab = D ∂ ∂x such that the lateral velocity from the model can be compared to the PDV velocity history for the sandwich geometry using: Vc,Y |lab = D d dx y + tc 2 = D A′ wc (21) We compare the present model with symmetric sandwich experiments from Loiseau et al.5. The subset of experiments presently considered used 6.35-mm-thick 6061 aluminum flyer plates. The test explosive was nitromethane (NM) gelled with 4% poly(methyl methacrylate) by mass. The gelled NM was sensitized with 0.5% 3M K1 glass microballoons (GMB) by mass and 15%, or 30% Fig. 5. Experimental velocity histories (dots) plotted against the predictions of the present model. Valimet H-50 aluminum powder was added by mass to the sensitized mixture. An inert control consisting of 20.5% alumina powder by mass is also considered. The explosive cavity was initially 22.5 mm thick and the sandwich had a width of 10.2 cm. The PDV probes measured at 90°. Cheetah 2.0 was used to determine JWL coefficients for the baseline explosive (0% Al), alumina control, and 15% or 30% Al assuming either complete reaction or entirely inert behaviour for the Al. The GMB were treated as porosity. JWL coefficients are shown in Table 1. An initial validation was performed to confirm that the present model reproduces expansion along the principle isentrope when no afterburning energy is added. For the baseline explosive, the principle isentrope was calculated via: ps = Λ1e-R1 ρ0 ρ + Λ2e-R2 ρ0 ρ + C ρ0 ρ -(1+ω) (22) and is plotted against the p(v) history from the model in Figure 4. Agreement is reasonable over the accessible range of relative volumes. Figure 5 shows model predictions for wall acceleration plotted as lines versus experimental velocity histories plotted as dots. Model predictions using Cheetah to treat Al reaction are denoted by "-cha", those using programmed burn are donated by "-P. burn". Note the smooth ballistic acceleration of the wall in these experiments. This is because of the 7 Table 1. Summary of detonation properties and JWL coefficients. Explosive ρ0 D Pcj ucj ρcj Λ1 Λ2 C R1 R2 ω g/cc km/s GPa km/s g/cc GPa GPa GPa - - - Baseline 1.09 5.80 10.17 1.60 1.51 190.48 3.93 1.08 4.58 1.04 0.35 20.5% Alumina 1.29 5.20 8.92 1.33 1.73 248.20 3.56 0.91 4.94 1.08 0.28 15% Al inert 1.20 5.42 8.96 1.38 1.61 293.37 4.75 0.94 5.25 1.17 0.30 30% Al inert 1.33 5.13 7.91 1.16 1.72 293.86 1.59 0.45 4.84 0.64 0.12 15% Al active 1.20 5.91 12.03 1.70 1.68 159.19 2.18 1.33 3.96 0.72 0.27 Fig. 6. Comparison of p(v) histories. relatively low brisance of the NM explosive and the detonation being approximately sonic relative to the sound speed in aluminum. The characteristic surface oscillations from shock reverberation are thus suppressed. The model accurately predicts the acceleration history of the wall for the baseline case and also the inert control diluted with 20.5% alumina. The accuracy for the inert control is surprising given the presumed importance of momentum and heat transfer during acceleration of the particles in the detonation and during product expansion. For 15% Al reacting in equilibrium, model results using Cheetah-derived JWL coefficients agree reasonably well with the experimental results, but under-predict wall velocity after ≈10 μs. For the afterburning model predictions for 15% Al, the baseline JWL parameters were combined with a burn time (τb) of 25 μs. The model again underpredicts the experimental result at early times but matches and then slightly exceeds the experimental values at probe cut-out. This suggests that the linear burn model adds too much energy, but also adds it too late in the expansion process. The influence of particle energy release on effective product pressure is shown in Figure 6, which plots the p(v) histories of the model predictions, versus experimental isentropes extracted using the method outlined by Jackson23. Cheetah yielded poor JWL fits for 30% Al reacting in equilibrium, which resulted in non-physical predictions of wall velocity. These results are thus omitted from Figure 5. For the afterburning model predictions for 30% Al, the baseline JWL parameters were again used, but a burn time of 80 μs was instead specified. Agreement was overall good, but continued acceleration of the wall after 50 μs, beyond the probe cut-off, is likely non physical. Concluding Remarks A simple, semi-analytic model was developed following the assumptions established by Taylor22. When appropriate EOS coefficients were used, reasonable agreement with flyer plate experiments was observed. A simple linear burn model for Al particle reaction qualitatively suggests that a significant fraction of aluminum can burn to influence metal acceleration. References 1. Finger, M., Hornig, H. C., Lee, E. L. and Kury, J. W., "Metal Acceleration by Composite Explosives," in "5th International Symposium on Detonation," pp. 137-152, Office of Naval Research, Pasadena, CA, 18-21 August 1970. 2. Ermolaev, B., Khasainov, B., Baudin, G. and Presles, H. N., "Behavior of Aluminum in Detonation of High Explosives. Surprises and 8 Interpretations," Russian Journal of Chemical Physics, Vol. 18, pp. 1121-1140, 01 2000. 3. Manner, V. W., Pemberton, S. J., Gunderson, J. A., Herrera, T. J., Lloyd, J. M., Salazar, P. J., Rae, P. and Tappan, B. C., "The Role of Aluminum in the Detonation and Post-Detonation Expansion of Selected Cast HMX-Based Explosives," Propellants, Explosives, Pyrotechnics, Vol. 37, pp. 198-206, 2012. 4. Tappan, B. C., Hill, L. G., Manner, V. W., Pemberton, S. J., Lieber, M. A., Johnson, C. and Sanders, V. E., "Reactions of Powdered Aluminum with Explosives that Selectively Form Carbon Dioxide or Water as Oxidizers," International Journal of Energetic Materials and Chemical Propulsion, Vol. 15, pp. 339-350, 2016. 5. Loiseau, J., Goroshin, S., Frost, D. L., Higgins, A. J. and Zhang, F., "Ability of Metalized Gelled Nitromethane to Accelerate a Flyer Plate," in "16th International Symposium on Detonation," Office of Naval Research, Cambridge, MD, 15-20 July 2018. 6. Zhang, D., Yi, Z., Gan, Y., Liu, Q., Liu, F. and Li, X., "On Weak Influence of Aluminum Powder Size on its Post-Detonation Reaction in Different Time Scales," AIP Advances, Vol. 14, p. 075014, 07 2024. 7. Gilev, S. D. and Trubachev, A. M., "Detonation Properties and Electrical Conductivity of Explosive-Metal Additive Mixtures," Combustion, Explosion and Shock Waves, Vol. 38, pp. 219-234, 2002. 8. Gilev, S. D. and Anisichkin, V. F., "Interaction of Aluminum with Detonation Products," Combustion, Explosion and Shock Waves, Vol. 42, pp. 107-115, 2006. 9. Davydov, V. Y., Grishkin, A. M. and Feodoritov, I. I., "Experimental-Theoretical Investigation of the Oxidation of Aluminum in Detonation Waves," Combustion, Explosion and Shock Waves, Vol. 28, pp. 564-568, 1992. 10. Cowperthwaite, M., "Nonideal Detonation in a Composite CHNO Explosive Containing Aluminum," in "10th International Symposium on Detonation," pp. 656--664, Office of Naval Research, Boston, MA, 12-16 July 1993. 11. Gonthier, K. and Rumchik, C., "Theory and Analysis of Non-ideal Detonation for RDXmetal Mixtures," in "13th International Symposium on Detonation," pp. 176-186, Office of Naval Research, Norfolk, VA, 23-28 July 2006. 12. Li, X., Pei, H., Zhang, X. and Zheng, X., "Effect of Aluminum Particle Size on the Performance of Aluminized Explosives," Propellants, Explosives, Pyrotechnics, Vol. 45, pp. 807-813, 2020. 13. Lewis, W. K., Rumchik, C. G., Smith, M. J., Fernando, K. A. S., Crouse, C. A., Spowart, J. E., Guliants, E. A. and Bunker, C. E., "Comparison of Post-detonation Combustion in Explosives Incorporating Aluminum Nanoparticles: Influence of the Passivation Layer," Journal of Applied Physics, Vol. 113, p. 044907, 01 2013. 14. Suceska, M., Dobrilovic, M., Bohanek, V. and Stimac, B., "Estimation of Explosive Energy Output by EXPLO5 Thermochemical Code," Zeitschrift für Anorganische und Allgemeine Chemie, Vol. 647, pp. 231-238, 2021. 15. Cowperthwaite, M., "Some Aspects of Nonideal Detonation in Composite Explosives," Journal of Energetic Materials, Vol. 1, pp. 141175, 1983. 16. Mader, C. L., Kershner, J. D. and Pimbley, G. H., "Three-Dimensional Modeling of Inert Metal-Loaded Explosives," Journal of Energetic Materials, Vol. 1, pp. 293-324, 1983. 17. Mader, C. L., Numerical Modeling of Explosives and Propellants, CRC press, 2007. 18. Ripley, R. C., Zhang, F. and Lien, F.-S., "Acceleration and Heating of Metal Particles in Condensed Matter Detonation," Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 468, pp. 1564-1590, 2012. 9 19. Milne, A. M., Longbottom, A. W., Evans, D. J., Haskins, P. J., Cook, M. D. and Briggs, R. I., "The Burning Rate of Aluminium Particles in Nitromethane in Cylinder Tests," in "12th International Symposium on Detonation," pp. 895900, Office of Naval Research, San Diego, CA, 11-16 July 2002. 20. Milne, A. M., Bennett, K. and Longbottom, A. W., "Modelling a Suite of Aluminized Explosives Experiments," in "14th International Symposium on Detonation," pp. 51-60, Office of Naval Research, Coeur d'Alene, ID, 11-16 April 2010. 21. Pontalier, Q., Loiseau, J., Longbottom, A. and Frost, D. L., "Simulating the Propulsive Capability of Explosives Loaded with Inert and Reactive Materials," AIP Conference Proceedings, Vol. 2272, p. 050023, 11 2020. 22. Taylor, G. I., "Analysis of the Explosion of a Long Cylindrical Bomb Detonated at One End," in "Aerodynamics and the Mechanics of Projectiles and Explosions," Vol. 3 of The Scientific Papers of Sir Geoffrey Ingram Taylor, pp. 277-286, Cambridge University Press, 1963. 23. Jackson, S. I., "An Analytic Method for TwoDimensional Wall Motion and Product Isentrope from the Detonation Cylinder Test," Proceedings of the Combustion Institute, Vol. 35, pp. 1997-2004, 2015. 24. Kury, J. W., Hornig, H. C., Lee, E. L., McDonnel, J. L. and Ornellas, D. L., "Metal Acceleration by Chemical Explosives," in "4th International Symposium on Detonation," Office of Naval Research, Silver Spring, MD, 12-15 October 1965. 25. Lee, E. L., Hornig, H. C. and Kury, J. W., "Adiabatic Expansion of High Explosive Detonation Products," Technical Report UCRL50422, Lawrence Radiation Laboratory, Livermore, CA, 1968. 26. Allison, F. E. and Watson, R. W., "Explosively Loaded Metallic Cylinders - I," Journal of Applied Physics, Vol. 31, pp. 842-845, 1960. 27. Allison, F. E. and Schriempf, J. T., "Explosively Loaded Metallic Cylinders - II," Journal of Applied Physics, Vol. 31, pp. 846-851, 1960. 28. Baker, E. L., "Modeling and Optimization of Shaped Charge Liner Collapse and Jet Formation," Technical Report ARAED-TR-92019, ARDEC, Picatinny Arsenal, NJ, 1993. 29. Baker, E. L., Murphy, D., Capellos, C., Anderson, P., Wrobel, E. and Stiel, L., "Recent Combined Effects Explosives Technology," Technical Report ARMET-TR-10004, ARDEC, Picatinny Arsenal, NJ, 2010. 30. Baker, E., Murphy, D., Stiel, L. and Wrobel, E., "Theory and Calibration of JWL and JWLB Thermodynamic Equations of State," WIT Transactions on the Built Environment, Vol. 113, pp. 147-158, 2010. 31. Chaos, M., "Revisiting the Kinematics of the Cylinder Test," Propellants, Explosives, Pyrotechnics, Vol. 47, p. e202100349, 2022. 32. Walters, W. P., "Explosive Loading of Metals and Related Topics," Tech. Rep. BRL-SP-56, US Army Ballistic Research Laboratory, Aberdeen Proving Ground, MD, May 1986. 33. Walters, W. P. and Zukas, J. A., Fundamentals of Shaped Charges, Wiley-Interscience, New York, NY, 1989. 34. Weseloh, W., "JWL in a Nutshell," Tech. Rep. LA-UR-14-24318, Los Alamos National Laboratory, 06 2014. 35. Menikoff, R., "JWL Equation of State," Tech. Rep. LA-UR-15-29536, Los Alamos National Laboratory, 07 2017. 36. Segletes, S. B., "An Examination of the JWL Equation of State," Tech. Rep. ARL-TR-8403, US Army Research Laboratory, 07 2018. 37. Farag, G. and Chinnayya, A., "On the JonesWilkins-Lee Equation of State for High Explosive Products," Propellants, Explosives, Pyrotechnics, Vol. 49, p. e202300223, 2024. 10 38. Miller, P. J. and Guirguis, R. H., "Experimental Study and Model Calculations of Metal Combustion in Al/Ap Underwater Explosives," MRS Online Proceedings Library, Vol. 296, pp. 299304, 1992. 39. Daniel J. Meaney, N. G. and Brown, R. E., "Thermochemical Release Mechanisms and Circumferential Initiation Affecting Detonation in an Aluminized Explosive," Journal of Energetic Materials, Vol. 42, pp. 146-167, 2024. 40. Dehn, J. T., "Models of Explosively Driven Metal," Tech. Rep. BRL-TR-2626, US Army Ballistic Research Laboratory, Aberdeen Proving Ground, MD, December 1984. 41. Dehn, J. T., "Models of Explosively Driven Metal," in "8th International Symposium on Detonation," , edited by Short, J. M., pp. 602612, Office of Naval Research, Albuquerque, NM, 15-19 July 1985. 42. Neal, T., "Perpendicular Explosive Drive and Oblique Shocks," in "6th International Symposium on Detonation," pp. 602-611, Office of Naval Research, Coronado, CA, 24-27 August 1976. Question from Sorin Bastea, LLNL Have you looked at the effect of Al particle size? Reply by Jason Loiseau We have not yet investigated particle size in the model. The current implementation can only prescribe a burn time, and the correlation between burn time and particle size (e.g. extrapolations from Beckstead correlations to detonation products) is still controversial. We did investigate Al particle size effects experimentally and presented these results at the previous IDS: we saw a weak effect of particle size on the accelerating ability of the composite HE. Question from Tim Manship, Purdue University Very fascinating approach! For your model, do you assume aluminum combustion is just adding energy to gas products or are you also accounting for addition of aluminum combustion products? Reply by Jason Loiseau We made the simplifying assumption that Al combustion adds energy directly into the detonation product gases. We neglect any effects on the chemical composition of the detonation products, and neglect the formation and condensation of solid Al products or additional carbon as oxygen is scavenged from CO and CO2. We thus also neglected any heat transfer to/from particles or solid product. Question from Christopher Miller, LLNL How well was the aluminum distributed throughout the samples and how would non-homogeneity influence your model? Reply by Jason Loiseau Free-flowing powders mix well into the gel. Based on sample microscopy we have observed good uniformity and repeat trials have shown good reproducibility. Since the model is quasi-1D it cannot account for inhomogeneity along the height or width of the slab (or radially in cylinders). Longitudinal inhomogenity could be addressed by varying particle mass fraction and interpolating JWL parameters for different initial solid loadings. 11
|
2509.16243
|
1
Binary Classification of Light and Dark Time
Traces of a Transition Edge Sensor Using
Convolutional Neural Networks
Elmeri Rivasto1,*, Katharina-Sophie Isleif2, Friederike Januschek3, Axel Lindner3, Manuel Meyer1, Gulden
Othman2, Jos´e Alejandro Rubiera Gimeno2, and Christina Schwemmbauer3
1CP3-origins, Department of Physics, Chemistry and Pharmacy, University of Southern Denmark, Campusvej 55,
5230 Odense, Denmark
2Helmut-Schmidt-Universit¨at (HSU), Holstenhofweg 85, 22043 Hamburg, Germany
3Deutsches Elektronen Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
*Contact: rivasto@cp3.sdu.dk
Abstract—The Any Light Particle Search II (ALPS II) is a
light shining through a wall experiment probing the existence
of axions and axion-like particles using a 1064 nm laser source.
While ALPS II is already taking data using a heterodyne based
detection scheme, cryogenic transition edge sensor (TES) based
single-photon detectors are planned to expand the detection
system for cross-checking the potential signals, for which a
sensitivity on the order of 10−24 W is required. In order to
reach this goal, we have investigated the use of convolutional
neural networks (CNN) as binary classifiers to distinguish the
experimentally measured 1064 nm photon triggered (light) pulses
from background (dark) pulses. Despite extensive hyperparam-
eter optimization, the CNN based binary classifier did not
outperform our previously optimized cut-based analysis in terms
of detection significance. This suggests that the used approach is
not generally suitable for background suppression and improving
the energy resolution of the TES. We partly attribute this
to the training confusion induced by near-1064 nm black-body
photon triggers in the background, which we identified as the
limiting background source as concluded in our previous works.
However, we argue that the problem ultimately lies in the binary
classification based approach and believe that regression models
would be better suitable for addressing the energy resolution.
Unsupervised machine learning models, in particular neural
network based autoencoders, should also be considered potential
candidates for the suppression of noise in time traces. While
the presented results and associated conclusions are obtained for
TES designed to be used in the ALPS II experiment, they should
hold equivalently well for any device whose output signal can be
considered as a univariate time trace.
I. INTRODUCTION
Transition edge sensors (TES) are superconducting devices
commonly used as microcalorimeters and bolometers when
voltage-biased within the region of the superconducting phase
transition [1]. In this narrow region, the resistance of the
circuit changes steeply with temperature and consequently the
absorption of a single photons at suitable wavelengths heat
the circuit sufficiently to cause a significant change in its
current. These perturbations can be efficiently detected by su-
perconducting quantum interference devices (SQUIDs) being
inductively coupled to the biased circuit. Unlike many other
single-photon detectors, such as superconducting nanowire
single-photon detectors (SNSPDs), TESs are capable of mea-
suring the energy of the absorbed photons over a wide range
of wavelengths. Their energy resolution together with high
quantum efficiency and microsecond-scale dead time [2–4]
make TESs important tools widely used in quantum computing
[5–9], space and astrophysics experiments [10–14] along with
particle physics and dark matter searches [15–17].
A TES is planned to be used in a future science run of the
Any Light Particle Search II (ALPS II) at DESY Hamburg
(Germany) [18]. ALPS II aims to probe the existence of axions
and axion-like particles (ALPs) [19, 20] and is essentially a
light shining through a wall experiment featuring a powerful
1064 nm laser beam that is shone into a 106 m long resonant
production cavity that is in 5.3 T magnetic field. While the
propagation of the light is blocked by an opaque wall, the the-
oretically proposed photon–axion oscillation enables 1064 nm
photons to emerge on the other side of the optical barrier in
a regeneration cavity (symmetrical to the production cavity)
[21, 22]. The detection of these reconverted photons requires
an extremely sensitive detection scheme achievable with TESs
[19, 20]. The target sensitivity for the conversion rate lies
in the range of 10−5 Hz (one photon per day) setting the
upper limit for the background rate required for the statistically
significant detection of axions and ALPs within the practically
feasible 20 days measurement time [19].
In the recent years machine learning (ML) methods have
been recognized as extremely useful tools in various fields of
physics [23]. ML approaches have been widely implemented
for various tasks associated with particle physics experiments,
including high level data analysis, jet tagging and data sim-
ulation [24–32]. Lately, ML methods have also been used
to detect hints from axion-like particles in LHC [33] and
pulsar dispersion measurements [34], and are planned to be
implemented for analyzing data in the SuperCDMS direct dark
matter experiment [35]. Most relevant for our interest, Manenti
et al. [36] have recently applied unsupervised ML models to
study the background of a TES system not connected to the
outside of a cryostat, by e.g. a fiber (intrinsic background).
They report that the majority of the observed background
arXiv:2509.16243v1 [physics.ins-det] 17 Sep 2025
2
pulses originate from high-energy events associated with
radioactive decays and secondary cosmic-ray particles. The
resulting dark counts are rather easy to distinguish from low-
energy photon induced pulses by simply comparing the pulse
shape, as already concluded in our previous work [37]. This
is because the energy released from the various high-energy
events is likely to be deposited within the underlying substrate
rather than the TES itself. Due to slow diffusion of heat from
the substrate to the TES, the dark counts have significantly
larger rise and decay times when compared with typical photon
induced pulses where the energy is deposited directly to the
TES [36, 38].
While the unsupervised ML models have been mainly used
for qualitative categorization of the recorded pulses, supervised
ML models are better suited for actual quantitative background
suppression. One can expect the state-of-the-art supervised
ML models to outperform the capabilities of traditional data
processing techniques. We have successfully implemented this
in the past for our recorded intrinsic background without
an optical fiber connected to the TES [37]. In this work,
we expand on this study by also considering the extrinsic
background measured while an optical fiber links the lab
environment to the TES inside the dilution refrigerator. This
mainly introduces an additional background resulting from
fiber coupled black-body radiation. The black-body photons
have been identified as the limiting background for our ex-
perimental setup [20, 39, 40], and have been previously ad-
dressed using a traditional cut-based analysis without relying
on machine learning [38]. We want to point out that the black-
body background rate is ultimately determined by the energy
resolution of the TES since lower energy resolution enables
more reliable distinction between the signal and the back-
ground photons. Thus, different analysis methods addressing
the suppression of noise in the time traces differently can have
significant effects on the background rates [40]. For example,
we have found that performing fits to the measured TES pulses
in frequency domain instead of time domain results in 2-fold
improvement in the energy resolution [38, 39].
Here, for the first time, we are trying to further improve the
rejection of near-1064 nm black-body photons using convolu-
tional neural networks (CNN) that are considered as the state-
of-the-art machine learning model for univariate time-series
classification. Ultimately, the goal is to reach a background
rate below 10−5 Hz while maintaining a tolerable rate for
correctly classified signal pulses (analysis efficiency). The
CNNs expand the architecture of the conventional multi-layer
perceptrons (feedforward neural networks) via the introduction
of convolutional layers that apply different filters (kernels)
to the input data enabling the efficient extraction of spatial
(or temporal) patterns [41]. The CNNs remain the underlying
technology behind the state-of-the-art image classification
ML-models [41–43], also covering the univariate time series
classifiers [44–47]. Consequently, CNNs are expected to show
the best performance in the suppression of the background
triggers. A major benefit of CNNs is that they enable model-
independent analysis of recorded pulses as one does not have
to rely on fitting any specific functions to the data. We will
utilize the CNNs as binary classifiers that are trained to distin-
guish between 1064 nm photon induced light pulses and any
other background source induced dark pulses. These classifiers
are then ensembled to quantitatively study the background
sources that are particularly difficult to distinguish from the
light pulses and to see whether the CNNs can be used for
further background suppression.
The manuscript is organized as follows: in Section II we
describe our experimental setup and how the experimental
data used in this work was obtained (see Ref. [38] for more
details). Next, in Section III we present a detailed description
of the overall architecture of the considered CNN and explain
how we use the experimentally measured data to train it and
evaluate its performance. Details of the CNN’s hyperparameter
optimization are also presented. The performance of the opti-
mized CNN is analyzed and further fine-tuned in Section IV.
We then proceed in evaluating the true performance of the
model in classifying the light and dark pulses in Section V
and study the observed background pulses in detail in Sec-
tion VI. Finally, we discuss the background source induced
confusion in the CNN training process in Section VII before
summarizing the final conclusions in Section VIII.
II. EXPERIMENTAL DATA
All of the experimental data was measured using a tungsten-
based TES chip fabricated at National Institute of Standards
and Technology (NIST). The TES has been optimized for the
detection of 1064 nm photons through its multilayer structure
[48]. The working point of the voltage-biased TES was set
to 30% of its normal state resistance and its current was
monitored by an inductively coupled SQUID (manufactured
by Physikalisch Technische Bundesanstalt; PTB) with 5 GHz
gain bandwidth product at a sampling rate of 50 MHz. The
photons absorbed by the TES were then detected as pulses in
the SQUID output voltage Vout(t). The TES+SQUID module
was operated within a Bluefors SD dilution refrigerator at
25 mK base temperature.
In order to recognize the shapes of the 1064 nm photon
Vout(t) pulses, we first gathered data by illuminating the TES
with a highly attenuated 1064 nm laser source for a total of
5 s. The laser light was coupled to the TES via a HI1060
single mode optical fiber. During this time interval, a total of
4722 pulses above the 10 mV trigger threshold were recorded,
where each time trace corresponds to a 200 µs time window
with 104 samples (50 MHz sampling frequency). The recorded
time traces were pre-filtered by discriminating double pulses.
This was done by detecting local maxima from the derivative
of a time trace based on specific height, prominence and
spacing criteria. This left us with 3928 single-photon triggered
pulses. We have performed fits in the time domain using a
phenomenological function [37, 38]
Vph(t) = −
2APh
e
(t0−t)
τrise + e
−(t0−t)
τdecay
+ V0,
(1)
describing a photonic event triggered TES pulse at around
t = t0 −τrise. The shape of the pulse is determined by the
rise and decay times τrise and τdecay, respectively, the obtained
distribution of which will be used to determine the cuts. The
3
0
5
10
15
20
t ( s)
20
15
10
5
0
Vout (mV)
(a)
Dark
Light
0
10
20
t ( s)
10
0
Vout (mV)
Example
light pulse
0
50
100
150
200
250
300
PC1
10
0
10
20
30
40
50
60
PC2
(b)
Dark
Light
Fig. 1: (a) The average measured light pulse and dark pulses, where the shaded regions represent the associated standard
deviations. The inset presents a randomly chosen light pulse as an example of the signal and noise shapes. (b) Principal
Component Analysis (PCA) scatter plot showing the projection of pulse feature vectors (τrise, τdecay, χ2
Ph, PhFFT, χ2
FFT)
onto the first two principal components (PC1 and PC2). The inset shows a close-up of the cluster associated with light pulses,
showing overlap with some of the dark pulses measured in extrinsic background.
χ2
Ph-error associated with the performed fit is also considered.
In addition, we have also considered a fitting function from
Small Signal Theory [1]
VSST(t) =
(
AFFT ·
e−(t−t0)/τ+ −e−(t−t0)/τ−
,
if t ≥t0.
0,
else,
(2)
While fitting Eq. (2) in time domain is unstable, we have
performed the fit in frequency domain, in particular due to the
simple Fourier transformation of Eq. (2). The obtained fitting
parameters were then used to calculate the associated peak
height (PhFFT) which will also be considered for the filtering
of the pulses. This specific parameter was chosen because its
associated distribution for light pulses has previously resulted
in the lowest achieved energy resolution for our TES [38, 39].
The χ2
FFT-error associated with the fits in frequency domain
is also considered for filtering.
In order to mitigate the effects of scattering processes and
nonlinear optical effects (that can alter the wavelength of
the photons emitted from the laser) on the training process,
we have subjected the 3928 single-photon triggers for minor
filtering. This was done by only including pulses whose
τrise, τdecay and PhFFT were simultaneously within 0.1%–
99.9% quantiles of their associated distributions while χ2
Ph
and χ2
FFT being below 99.9% quantiles. This resulted in the
discrimination of 0.76% (30) of the filtered triggers, leaving us
a total of 3898 pulses that are used for training and evaluating
the CNNs.
The filtered 3898 pulses were further post-processed by
removing a possible voltage offset by shifting the recorded
Vout(t) values by the average voltage value measured between
t = 0-24 µs. Finally, in order to reduce the computational
load for the machine learning tasks, the time windows of the
single pulses were narrowed down to 24 µs (1200 samples) by
locating the pulse minimum and including the 300 antecedent
and 900 subsequent samples. For the rest of the manuscript,
we will keep referring to these 1064 nm photon triggered time
traces as light pulses. The average measured light pulse is
presented in Fig. 1(a) together with an example of a single
pulse in the inset of the figure, further illustrating the baseline
noise present in the measurements.
Right after measuring the light pulses, we proceeded to
measure the extrinsic background over a period of two days
using the same system configuration except for disconnecting
the optical fiber from the laser source and sealing its open
end with a metallic cover. A total of 8872 background events
exceeding the 10 mV trigger threshold were observed. While
all of these pulses were included as training and evaluation
data for the CNNs without making any cuts, they were post-
processed in the same way as the light pulses by removing
the voltage offsets and clipping the time windows as described
above. It should be noted that for the pulses with large rise and
decay times, typically originating from intrinsic background
sources, the clipped time window can be too narrow to fully
represent the entire pulse. Regardless, these pulses would have
been in any case very easily distinguishable from the light
pulses. We refer to all of the recorded background pulses
as dark pulses for the rest of the manuscript. The average
measured dark pulse is presented in Fig. 1(a).
In summary, after data cleaning we are left with 3898 light
pulses and 8872 dark pulses, making the overall size of the
dataset 12,770 to be used for the training and evaluation the
CNNs. Before proceeding, we want to further characterize
the differences between light and dark pulses via Principal
Components Analysis (PCA) in order to detect any possible
overlap between the resulting light and dark clusters indicating
the presence of photonic background. We have done this by
associating each pulse with a feature vector assembled from
the above introduced fitting parameters as (τrise, τdecay, χ2
Ph,
PhFFT, χ2
FFT). The PCA scatter plot visualizing the projection
of these feature vectors for both light and dark pulses onto the
two main principal components is presented in Fig. 1(b). As
expected, the light pulses are tightly clustered in one spot while
the dark pulses are much more spread out. Regardless, one can
observe significant overlap between the light and dark pulses
as illustrated in the inset of Fig. 1(b), most likely originating
from fiber coupled black-body radiation [40]. We will analyze
4
Fig. 2: A schematic illustration of the basic architecture of the considered CNN and its hyperparameters whose optimization
is explicitly addressed (see Table I).
this later in the paper aided by the CNN ensembles.
III. CNN ARCHITECTURE
The basic architecture of the CNN considered in this
manuscript resembles the generally used structure for image
classifying tasks [42, 49]. As illustrated in Fig. 2, the CNN
consists of pairs of i) convolutional and average pooling layers
followed by ii) flatten and dropout layers connected to iii)
dense layers with gradually decreasing sizes ultimately ending
up to a single output neuron. As typical for binary classifiers,
we use binary cross-entropy (log-loss) as the loss function
to guide the training process. The weights of the model are
initialized using the Glorot uniform approach [50] and updated
during training following the Adaptive Moment Estimation
(Adam) algorithm [51].
In order to limit the size of the search space for hyperparam-
eter optimization, we fix the activation functions associated
with the convolutional layers to tanh while using ReLu for
the dense layers. This combination was observed to clearly
result in best performance in our initial tests of the model. We
further require that all of the convolutional layers share the
same number of filters, filter size, and pool size. We fix the
pool size to 2, limiting the maximum number of convolutional
layers in the considered architecture to 10. The structure of
the dense layers is limited by requiring that the number of
neurons must always drop by half in the following hidden
layer. This leaves only the maximum number of neurons within
the first layer and the number of hidden layers as the only
hyperparameters to be optimized for the dense part of the
CNN. On top of the architectural hyperparameters, we address
the optimization of the dropout rate, number of epochs and
batch size. A summary of the considered search space for the
hyperparameter optimization is presented in Table I. The CNN
is implemented using a high-level neural network API Keras
version 2.12.0 [52] with a TensorFlow version 2.12.1 backend
[53].
Experimental data
(12,770)
LIGHT
(3,898)
DARK
(8,872)
Training data
(2,000)
LIGHT
(1,000)
80%
Training
Validation
20%
DARK
(1,000)
Testing data
(10,770)
LIGHT
(2,898)
DARK
(7,872)
1,000 light
1,000 dark
Rest
Fig. 3: A schematic illustration of the division of the dataset
into training and testing data. The training set was further
divided 80%-20% into training and validation sets, where the
validation set was to evaluate the performance of the CNN
during the training process.
A. Training process
The model is trained using 1000 light and dark pulses,
respectively, resulting in an overall training set size of 2000
pulses. It should be noted that the training set is perfectly
balanced between light and dark pulses, making it optimal
for training. The training set is further split 80%–20% into
training and validation sets, where the training set is used for
the actual training of the model while the evaluation set is used
to guide the training process via minimization of the binary
cross-entropy used as the loss function. The division of the
dataset into training and testing set is schematically illustrated
in Fig. 3.
5
B. Performance evaluation
The performance of the model is evaluated against the rest of
the dataset that was not dedicated for training as illustrated in
Fig. 3, consisting of 2898 light pulses and 7872 dark pulses.
With our main interest towards the use of the CNN in the
ALPS II experiment, we follow the approach of our previous
work (Ref. [37]) and evaluate the performance of the model
during hyperparameter optimization based on the detection
significance given by [54, 55]
S = 2
p
Tobs ·
√ϵdϵans + nb −√nb
,
(3)
where Tobs = 518 h is the observation time of the experiment
(as used in Ref. [37]), ns = 2.8 · 10−5 Hz is the assumed
signal (1064 nm photon) rate and ϵd
= 0.5 is the pes-
simistically evaluated detection efficiency taking into account
all losses associated with the experimental setup [37]. The
only analysis method dependent parameters are the closely
related background rate (nb) and analysis efficiency (ϵa).
The ϵa is simply calculated as the percentage of correctly
classified light pulses (true positive). The nb on the other
hand is calculated from the number of misclassified dark
pulses (Nmdp, false positives). Since the total of 8872 extrinsic
background pulses were measured over a time period of 2 d,
the used testing set containing the subset of 7872 dark pulses
effectively corresponds to (7872/8872) · 2 d ≈1.77 d time
period. The effective background rate can thus be estimated
from the number of misclassified dark pulses (false positives)
as Nmdp/1.77 d. It should be pointed out that the S score
is a threshold (Th. ∈[0, 1] corresponding to dark–light,
respectively) dependent metric. Consequently, all the reported
values of S in this manuscript have been obtained after
optimizing the threshold to maximize its value.
While the S score will be used to determine the optimal
combination of hyperparameters in the following section,
we will later also evaluate the trained CNNs using the F1
score (F1 = 2 × Precision × Recall/(Precision + Recall)
which balances between precision (TP/(TP+FP)) and recall
(TP/(TP+FN)) thus making it a commonly used evaluation
metric for biased datasets (TP=True Pos., FP=False Pos.,
FN=False Neg.). The F1 score describes the true classification
performance of the CNN better than the S score by directly
measuring how well the CNN is able to correctly classify the
light pulses while avoiding the misclassification of dark pulses.
Thus, we will utilize the F1 score in particular in Section VI,
where we study the nature and origins of the background
pulses which the CNNs are struggling to classify correctly in
more detail. The F1 score is also a threshold dependent metric
and its reported values in later sections have been obtained
after the threshold optimization. It should be pointed out that
the threshold optimization has to be done separately for the S
and F1 scores.
C. Hyperparameter Optimization
The hyperparameters of the CNN architecture introduced in
Section III are optimized by 2000 iterations of random search,
i.e. by training a total of 2000 models using randomized
combinations of hyperparameters and choosing the one with
0
1
2
3
4
5
6
7
Trainable parameters (106)
0.0
0.5
1.0
1.5
2.0
2.5
S score
4.1%
95.9%
S > 1
S < 1
Fig. 4: The evaluated average S scores as a function of number
of trainable parameters for the associated CNN. The error bars
correspond to standard deviations associated with 5 evaluations
of the CNN with different training and testing sets. The dashed
vertical line points to the limit S = 1 above which the points
are colored by red (4.1%) and below as blue (95.9%).
highest evaluation metrics. The search space for the considered
hyperparameters is presented in Table I. In order to reduce
the susceptibility of the evaluated performance towards the
random division of the dataset into training and testing sets
(Fig. 3), each iterated combination of hyperparameters is
evaluated using 5 CNNs trained and tested with different,
randomly divided, datasets as described in Sections III-A and
III-B. The initial weights of the CNNs were fixed between the
iterations of the random search.
The evaluated average S scores (⟨S⟩) and their associated
standard deviations (σS) as a function of trainable parameters
in the CNN are illustrated in Fig. 4. The highest S scores are
Hyperparameter
Optimization range
Optimum
Nb. of conv. layers
3–10
6
Nb. of filters
20–150
45
Kernel size
3–20
12
Dropout rate
0–0.2
0.18
Nb. of dense layers
1–10
3
Max nb. of neurons
100–300
188
Learning rate
10−5–10−3
5.2 · 10−4
Epochs
5–20
10∗
Batch size
32–128
99
⟨S⟩= 1.26 ± 0.16
TABLE I: The considered search space for the hyperparameter
optimization of the CNN using 2000 iterations of random
search. The activation functions of the convolutional and dense
layers were fixed to tanh and ReLu, respectively. The presented
optimum corresponds to the maximum obtained average de-
tection significance ⟨S⟩(see details in Section III-B) for an
ensemble of 5 CNNs trained and evaluated with differently
(randomly) divided training, validation and testing sets, while
the weight initialization for the CNN was fixed. The values of
S were calculated using the optimal threshold that maximizes
its value. ∗The number of epochs was later increased to 20
(see Section IV)
6
CNN #0
model seed=0
data seed=0
CNN #1
model seed=1
data seed=0
CNN #2
model seed=2
data seed=0
CNN #3
model seed=3
data seed=0
CNN #4
model seed=4
data seed=0
CNN #5
model seed=5
data seed=0
CNN #6
model seed=6
data seed=0
CNN #7
model seed=7
data seed=0
CNN #8
model seed=8
data seed=0
CNN #9
model seed=9
data seed=0
(a) Ensemble #1
(b) Ensemble #2
CNN #0
model seed=8
data seed=0
CNN #1
model seed=8
data seed=1
CNN #2
model seed=8
data seed=2
CNN #3
model seed=8
data seed=3
CNN #4
model seed=8
data seed=4
CNN #5
model seed=8
data seed=5
CNN #6
model seed=8
data seed=6
CNN #7
model seed=8
data seed=7
CNN #8
model seed=8
data seed=8
CNN #9
model seed=8
data seed=9
Max
S
% of mis-
classified
dark pulses
closest to
ensemble
average
Set
model
seed
=8
model seed=8
data seed=6
Set
CNN #6
model seed=8
data seed=6
Further
analysis
using
(c)
Model weights: random
Data division: fixed
Model weights: fixed
Data division: random
Model weights: fixed
Data division: fixed
Fig. 5: A schematic illustration of the CNN ensembles used in this study. (a) The ensemble used in Section IV where the
CNNs are trained and evaluated with exactly the same datasets but the initialization of their weights differ between each other.
(b) The ensemble used in Section V and VI. The model weights were fixed by seeding them equivalently to that of the CNN
in the ensemble #1 that achieved the highest S score (Eq. (3)). Only the dataset division into training and testing sets was
randomized for this ensemble. (c) A single CNN trained with data divided equivalently to that of the CNN in the ensemble #2
whose percentage of misclassified dark pulses was closest to the average of the ensemble. This model was used in the latter
part of Section VI.
clearly associated with CNNs with smaller number of trainable
parameters, making the training process more efficient. We
determine the optimal combination of hyperparameters for the
CNN that maximizes ⟨S⟩−σS, under the constraint of limiting
the maximum number of trainable parameters to 0.5·106. The
chosen optimal model has a total of 297,057 parameters and
reached ⟨S⟩= 1.26 ± 0.16. The associated hyperparameters
are presented in Table I.
IV. FINE-TUNING OPTIMIZED CNN
Next, we will proceed investigating and fine-tuning the pre-
viously optimized CNN (Table I). In order to identify pos-
sible underfitting or overfitting, we increase the number of
epochs from the optimized value of 10 up to 20. We train
an ensemble of 10 CNNs with fixed datasets but randomly
initialized weights. This CNN ensemble will be referred to as
ensemble #1 (see Fig. 5(a)). We noticed that the performance
of CNNs is susceptible towards the random initialization of its
weights. This is manifested as a high standard deviation in the
⟨S⟩= 0.98 ± 0.22 (average threshold: ⟨Th.⟩= 0.98 ± 0.012)
for ensemble #1 (see Fig. 5(a)). While the weight initialization
is internally stochastic, we have initialized the ensemble #1
weights in a deterministic manner via specified random seeds.
This enables us to optimize the weight initialization of the
CNN associated with the chosen seed for reproducible re-
sults. The observed sensitivity towards weight initialization
is particularly common for CNNs associated with dropout
layers. Similar observations have been made in other CNN
based machine learning implementations [46]. The high values
of optimal thresholds with respect to the S score reflect the
importance of decreasing the background rate at the cost of
analysis efficiency (see Eq. (3)).
We also evaluate ensemble #1 using the F1 score (see III-B)
reflecting the sole classification performance of the ensemble.
We obtain ⟨F1⟩= 0.96 ± 0.006 (⟨Th.⟩= 0.63 ± 0.34),
indicating reduced conservativity in the classification of light
pulses when compared with the S score. This demonstrates
the importance of threshold optimization for different metrics.
While the high threshold values resulting in low background
rates at the expense of analysis efficiency is preferable for
the S score, the best performance for the CNN, in terms of
correctly classifying the light and the dark pulses, is obtained
at much lower thresholds where the F1 score is maximized.
The ensemble #1 average learning curves with respect to
number of epochs are presented in Fig. 6(a). Note, that the
number of epochs was increased from 10 (obtained from
initial hyperparameter optimization) to 20 in order to identify
possible under or overfitting. The learning curve in Fig. 6(a)
indeed indicates slight underfitting and increasing the number
of epochs to 20 flattens the learning curves without resulting
in overfitting. Thus, the number of epochs will be fixed to 20
for the rest of the manuscript. The learning curves with respect
to training set size, calculated for the best performing model
within the ensemble #1 in terms of the F1 score, are presented
in Fig. 6(b). No underfitting or overfitting is observed from
these curves, indicating that the used training set of size 2000
7
0
2
4
6
8
10
12
14
16
18
20
Epochs
0.0
0.1
0.2
0.3
0.4
0.5
0.6
Loss
(a)
Training loss
Validation loss
0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00
Training set size (103)
0.5
0.6
0.7
0.8
0.9
1.0
F1
(b)
Train
Test
Fig. 6: Different learning curves associated with the CNN ensemble #1; (a) The average loss (binary cross-entropy) as a
function of epochs for the whole ensemble of 10 CNNs. The dashed vertical line indicates the epochs=10 obtained from the
initial hyperparameter optimization (see Table I). This was then increased to 20 in order to identify underfitting or overfitting
from the associated figure. (b) The F1 score as a function of training set size for the CNN associated with highest F1 (also
highest S) within the ensemble. The error bars correspond to the standard deviation associated with 3 statistical repetitions in
the CNN training, between which the used training data was shuffled. The used testing data was kept constant throughout the
process.
Avg. Detection Significance
Avg. F1 Score
⟨S⟩= 0.95 ± 0.23
⟨F1⟩= 0.961 ± 0.005
Avg. Optimal Threshold
0.98 ± 0.01
0.68 ± 0.31
(⟨Th.⟩)
Avg. Analysis Efficiency
75.3% ± 14.3%
97.5% ± 0.4%
(⟨True Positives⟩= ⟨ϵa⟩)
Avg. misclassified dark pulses
0.6% ± 0.6%
2.00% ± 0.4%
(⟨False Positives⟩)
Background rate (nb)
0.33 mHz ± 0.33 mHz
1.0 mHz ± 0.2 mHz
TABLE II: A summary of the evaluated performance of the CNN ensemble #2 (Fig. 5(b)) in terms of detection significance and
F1 score (Section III-B). The detection significance has been calculated using Eq. (3) with fixed observation time T = 518 h,
detection efficiency ϵd = 0.5 and signal rate ϵs = 2.8 · 10−5 Hz based on the current realistic limitations for the ALPS II
experiment [37]. The reported background rate (nb) is calculated from the average percentage of mislcassified dark pulses (false
positives) by dividing its value by the effective observation time for the extrinsic background as explained in Section III-B.
The reported standard deviation for the avg. miscalssified dark pulses is slightly lower than the average value, but is rounded
up.
(1000 light, 1000 dark) is well suited for training the models.
The best performing CNN within the ensemble #1 achieved
the highest S = 1.4 using a threshold of Th. = 0.97. We will
proceed with initializing the weights of the CNNs used in the
following sections according to this specific model.
V. PERFORMANCE OF THE CNN
After the optimization of the CNN’s hyperparameters (Sec-
tion III-C) together with additional optimization of number
of epochs and weight initialization (Section IV), we now
proceed in studying how well the CNN actually performs for
its designated purpose as a binary classifier for the light and
dark pulses. Here, it is important to note that the S- and F1
scores evaluated for the trained CNN can be expected to be
susceptible to the random division of the dataset into training,
validation and testing sets. This is particularly true in the case
where a small subset of mislabeled pulses (e.g. target value
for a dark pulse = 1) exists. Consequently, we again train an
ensemble of 10 CNNs and study its average performance on
classifying the pulses. We will keep referring to this as ensem-
ble #2 that is schematically illustrated in Fig. 5(b). Unlike for
the ensemble #1 studied in the previous section, the weights of
all the models in the ensemble #2 are initialized similarly based
on the results of the previous Section IV. However, all the
CNNs in ensemble #2 are trained and evaluated with different
randomly divided datasets according to Fig. 3.
The average evaluation metrics for the CNN ensemble #2
is listed in Table II. As expected, the CNNs turned out to
be susceptible to the random division of the datasets with
ensemble average evaluation metrics ⟨S⟩= 0.95 ± 0.23
(⟨Th.⟩= 0.98 ± 0.01) and ⟨F1⟩= 0.961 ± 0.005 (⟨Th.⟩=
0.68 ± 0.31) having rather large standard deviations. The
highest S scores are obtained at much higher values of
thresholds when compared with the F1 scores. As seen in
Table II, the associated differences in analysis efficiency and
number of misclassified dark pulses (∼background rate) reflect
8
0.0
0.5
1.0
1.5
2.0
2.5
n2
0.0
0.5
1.0
1.5
2.0
2.5
n1
Optimum :
Max S score: 1.29±0.03
Optimal cut: [
0.3 ,
+ 1.7 ]
Analysis efficiency: 68%
Background rate: 0.11 mHz
Cut-based analysis
0.0
0.2
0.4
0.6
0.8
1.0
1.2
S score
Fig. 7: The detection significance as a function of the cut
range associated with the PhFFT calculated based on the
fitting parameters obtained in frequency domain analysis.
The cut range is defined as [µm −n1 · σ, µm + n1 · σ],
where n1, n2 ∈0, 1/3, . . . , 3 and the µm ≈−16.33 mV and
σ ≈1.25 mV are obtained by fitting a skewed Gaussian to the
distribution of the peak heights for the light pulses. All of the
detection significances were determined as the average of 5 S
scores calculated for randomly chosen testing sets.
the importance of background suppression at the expense of
analysis efficiency for maximizing S.
A. Comparison to Cut-Based Analysis
Next, we want to compare the above presented pulse analysis
using the CNN ensemble #2 with the traditional (non-ML) cut-
based analysis. In the cut-based analysis the pulse is classified
based on its associated fitting parameters τrise, τdecay, χ2
Ph,
PhFFT and χ2
FFT introduced in Section II. In order for a
pulse to be classified as light, all of the pulse parameters
must simultaneously lie within [µm −3 · σ, µm + 3 · σ] for
the parameter specific µm and σ. The only exception is with
the PhFFT, for which the cut range [µm −n1 · σ, µm + n2 · σ]
will be rigorously optimized for n1, n2 ∈0, 1/3, . . . , 3. These
cut ranges in terms of µm and σ are determined based on the
fits of skewed Gaussians to the parameter distributions.
However, in order to make the comparison between the
performance of the CNN ensemble #2 and cut-based analysis
fair, we determine the cut ranges for the associated pulse
parameters based on 1000 randomly selected light pulses.
In analogy to machine learning, determining the cut region
corresponds to the training of the model. The rest of the light
data together with the whole extrinsic dataset is then used as
testing data to evaluate the S score. It should be noted that the
triggers used for determining the cuts (training) and evaluating
the S score (testing) correspond exactly to the pulses used for
training and evaluating the CNNs. Thus, the used light pulses
have been filtered as described in Section II.
The optimization of the cut range for PhFFT is illustrated
in Fig. 7, where every calculated S score represents the
average of 5 S scores calculated for randomly selected testing
data. The cut-based analysis results in the maximum detection
significance of S = 1.29 ± 0.03 achieved with the optimal
cut [µ −0.3σ, µ + 1.7σ]. The obtained average score is
approximately 36% higher when compared with the average
⟨S⟩= 0.95 ± 0.23 achieved by the CNN ensemble #2.
While being outperformed by the cut-based analysis, we
argue that the CNN’s performance is limited by the measured
extrinsic dataset (containing presumed dark pulses) that has
been distorted by actual light pulses (outliers). That is, the
dark data contains a subset of pulses that actually correspond
to 1064 nm photon induced triggers. Such dataset distortion
would cause confusion in the training of the CNNs thus
limiting their performance. Of course, the presence of the
near-1064 nm black-body photons in the dark dataset also has
detrimental effect on the S score calculated using the cut-based
analysis, but since the dataset used to calculate the S scores
using the CNNs and cut-based analysis are the same, their
comparison with each other is fair. In the following sections,
we will focus on investigating the background pulses that
were previously presumed as dark and based on the results,
quantitatively address how the near-1064 nm photon black-
body photon induced distortion in the dark dataset results in
training confusion.
VI. BACKGROUND CLASSIFICATION
We will use the CNN ensemble #2 from Section III-B to
study the nature and origin of the remaining background set
by the misclassified dark pulses (false positives). To do this,
one needs to work with the F1 score that quantifies how well
the model classifies light pulses as light while avoiding mis-
classifying dark pulses and light. With the associated optimal
thresholds, the ensemble #2 correctly classifies 97.5% ± 0.4%
of the light pulses (analysis efficiency) while misclassifying
2.00% ± 0.4% of the dark pulses as light (Table II). The
latter corresponds to the average of 157 misclassified dark
pulses. It is likely that this subset of dark pulses is triggered
by photonic events, making them difficult to distinguish from
the light pulses. In fact, we have previously concluded that
the limiting background source for our TES is fiber coupled
near–1064 nm black-body photons which are indistinguishable
from the light pulses given the energy resolution of the
TES [20, 39, 40]. In order to confirm this, we begin by
comparing the observed effective rate of the assumed near-
1064 nm black-body background to theoretical predictions.
Using the 1.77 d observation time derived in Section III-B,
the observed background rate is nb = 157/1.77 d that equals
approximately 1.02 mHz ± 0.2 mHz
The expected rate of 1064 nm black-body photons can be
theoretically estimated from the Planck’s spectrum
˙N =
Z
dΩ
Z
dA
Z E2
E1
2
h3c2 ·
E2
eE/kT −1,
(4)
where the first two integrals represent the solid angle (Ω) and
the area (A) over which the black-body radiation can enter
the optical fiber, and h is Planck’s constant, c is the speed of
light, T is the temperature of the black-body radiation source
9
0
5
10
15
20
t ( s)
17.5
15.0
12.5
10.0
7.5
5.0
2.5
0.0
Vout (mV)
(a)
Light (3898)
Miscl. Dark (160)
0
5000
10000
15000
20000
PC1
100
0
100
200
300
PC2
(b)
Miscl. dark
Light
Fig. 8: (a) The average light pulse used for training and testing
the CNN together with the average misclassified dark pulse.
(b) PCA scatter plot showing the projection of feature vectors
(τrise, τdecay, χ2
Ph, PhFFT, χ2
FFT) onto the first two principal
components (PC1 and PC2) for all of the light (true positives)
and misclassified dark pulses (false positives).
and k is Boltzmann’s constant. The integrals over dΩand
dA are purely geometrical and can be estimated using the
supplier provided specs of the used HI-1060 single mode fiber;
numerical aperture NA = 0.14 and core radius R = 3.1 µm.
The solid angle is calculated as Ω= 2π · (1 −cos(θ)),
where the acceptance angle of the fiber is θ = sin−1(NA).
This results in Ω= 0.062. The corresponding area is simply
A = πR2 = 3 · 10−11 m2. The integration limits for the
energy integral are set to E ± 3σE where the E = 1.165 eV
corresponding to 1064 nm photons and σE = 0.088 eV based
on the skewed Gaussian fit to the distribution of PhFFT for
light pulses. With these parameters, the integral in Eq. (4)
at T
= 295 K results in
˙N = 5.1 mHz. The calculated
rate is fivefold higher than what was observed by the CNN
from the experimental data. However, the above presented
calculation represents the theoretical maximum black-body
rate and does not take into account various loss mechanisms
present in experimental setup. In reality, this rate is lowered
by the limited detection efficiency of the TES together with
wavelength dependent transmission losses in the used mating
sleeves and the fiber itself. In particular, fiber curling inside the
0.0
0.2
0.4
0.6
0.8
1.0
Threshold
0.60
0.65
0.70
0.75
0.80
0.85
0.90
0.95
F1
Fig. 9: (a) The F1 score calculated for the experimental test set
(2898 light pulses, 7872 dark pulses) using the best performing
model within the CNN ensemble #2 as a function of threshold.
The maximum of F1 = 0.96 was found for a threshold of 0.95,
illustrated by the dashed vertical line in the figure.
cryostat, that was present in our experimental setup, has been
observed to result in significant attenuation towards longer
wavelength photons. The simulation of losses due to optical
fiber, fiber curling and TES response in the same experimen-
tal setup used in this work has been recently addressed in
Refs. [38, 40]. Using the associated simulation pipeline, we
estimate a 0.57 mHz black-body background associated with
cut-region [µm −3σ, µm +3σ]. This corresponds better to the
herein estimated value of 1.02 mHz ± 0.2 mHz. Considering
the limitations of the simulation, the misclassified dark pulses
seem indeed likely to result from near-1064 nm black-body
photons coupled into the optical fiber based on the above
presented rate comparison.
In order to provide more concrete evidence on the origin
of the misclassified dark pulses, we investigate them using
CNN within the ensemble #2 for which the percentage of
misclassified dark pulses was closest to the ensemble average
2.00% (157) reported above (see Fig. 5(c)). The corresponding
CNN misclassified 2.03% (160) of the 7872 dark pulses under
analysis and can be thus considered to represent the average
of the ensemble with respect to misclassified dark pulses
up to a good extent. It achieves F1 = 0.96 at an optimal
threshold of 0.95. Finding the optimal threshold is illustrated
in Fig. 9. In this case, the associated curve peaks rather sharply
at the optimum Th. = 0.95, indicating susceptibility towards
the optimization of the threshold.
The average misclassified dark pulse is illustrated in
Fig. 8(a) together with the average light pulse. The shapes
of the average pulses closely resemble each other suggesting
a source of the same nature. In addition, Fig. 8(b) illustrates
the PCA scatter plot showing the projections of the associated
feature vectors (τrise, τdecay, χ2
Ph, PhFFT, χ2
FFT) of the light
and misclassified dark pulses onto the two main principal
components. The majority of the misclassified dark pulses are
clustered in the near vicinity of the light pulses. Only roughly
14 out of the 160 misclassified dark pulses are clearly outside
10
the (red) cluster defined by light pulses. Thus, the vast majority
of the misclassified dark pulses are most likely triggered
by near-1064 nm photons originating from the black-body
background. I.e., for the CNN they are indistinguishable from
the light pulses given the energy resolution of the TES. Having
these pulses distort the extrinsics (dark) dataset has detrimental
effects on the training of the CNNs as the model is repeatedly
being trained to classify an actual light pulse as dark. It is
evident that this causes confusion in the training process of the
CNN and limits the performance of the model. The presence of
near-1064 nm black-body photon triggers in the dark dataset
also explains why the CNNs are so susceptible towards the
division of the dataset into training and testing sets (Fig. 3).
How many of these, technically mislabeled dark pulses, end
up in training or testing sets has a significant impact on
both training and evaluation of the CNNs. This results in the
observed high standard deviations in the evaluation metrics of
the CNN ensemble #2 (Table II).
In the following section, we will investigate the black-
body radiation induced training confusion and show that this
ultimately limits the performance of the CNN.
VII. BLACK-BODY PHOTONS AND TRAINING CONFUSION
In the previous section we concluded that the vast majority
of the misclassified dark pulses are ultimately near-1064 nm
photons. While these are physically indistinguishable from the
light pulses given the energy resolution of the TES, training
the CNNs to learn to classify these as dark pulses evidently
causes confusion. The detrimental effects of this label noise
have been widely studied [56–58]. While a small fraction of
corrupted labels can result in improved performance by acting
as a form of regularization, their effect on learning is generally
detrimental [59]. This is particularly true for the herein studied
CNNs, which are already regulated by the dropout layer.
In consequence, the reported classifying performances of
the CNNs in the previous sections should be considered as
lower limits with room for further improvement when trained
with an ideal, undistorted dataset. In order to demonstrate
this, we relabel the target values of the 160 misclassified
dark pulses from the previous section as light (target value
0 →1) and use this data for retraining an ensemble of 10
CNNs exactly as in Section VI (ensemble #2, see Fig. 5(b)).
It should be noted that the 160 misclassified dark pulses
form a subset of the testing data used in Section VI and
consequently some near-1064 nm black-body photon triggers
can still be left unidentified in the training set. Moreover, the
160 misclassified dark pulses had roughly 14 pulses which
were clearly not clustered in the vicinity of the light pulses
and thus these may not correspond to near-1064 nm black-
body photons. Regardless, we relabel all of the 160 previously
misclassified dark pulses as light, after which the vast majority
of the near-1064 nm photon triggered dark counts should have
been addressed. While the dataset still remains somewhat
distorted, one expects to observe improvement in the CNN
performance when trained with the relabeled dataset if there
was training confusion present previously.
Thus, we proceed in retraining the CNN ensemble #2
(Fig. 5(b)) using the dataset with 160 relabeled dark pulses
based on Section VI. The overall dataset now contains 3898+
160 = 4058 light pulses and 8872 −160 = 8712 dark pulses
which is then further divided into training and testing data
according to Fig. (3). Upon doing this, the F1 score improves
from the previously estimated ⟨F1⟩= 0.961±0.005 (Table II)
to ⟨F1⟩= 0.974 ± 0.004. Evidently, the average S score also
improves due to additional background discrimination from
⟨S⟩= 0.95 ± 0.23 to ⟨S⟩= 1.61 ± 0.58 (ϵa = 85.5% and
nb = 0.17 mHz), but still does not outperform the cut-based
analysis after relabeling the presumed black-body triggers. The
observed improvement in the F1 score (see section III-B) is
direct evidence that the performance of the CNN is limited
by the training confusion caused by the presence of near-
1064 nm black-body photon triggers in the measured extrinsic
background.
VIII. CONCLUSIONS AND OUTLOOK
We have aimed to improve the detection significance of a
TES by analyzing the experimentally measured 1064 nm laser
photon (light) and extrinsics background event (dark) triggered
univariate time traces using a CNN based binary classifier. Af-
ter extensive hyperparameter optimization, the CNN ensemble
resulted in average detection significance of ⟨S⟩= 0.95±0.23,
still being outperformed by our previously used (non-ML) cut-
based analysis by 36%. While we have shown that this partly
results from the training confusion introduced by the near-
1064 nm black-body photons (indistinguishable from light)
in the extrinsics background, we conclude that the binary
classification based approach is not optimal analysis method
for classifying the time traces. A CNN trained as a binary
classifier does not seem to be able to learn how to suppress
the noise of the time traces and is thus unable to improve the
energy resolution of the TES, ultimately resulting in lower S
score when compared with the cut-based analysis.
In contrast to the binary classification scheme, we believe
that regression based CNN would be better suitable for im-
proving the detection significance and plan to investigate this
further in future projects. The performance of the regression
based models is presumably more sensitive to the underly-
ing dataset, and thus the significance of dataset control and
optimization should be stressed. Our recently published sim-
ulation framework for TES pulses provides particularly great
opportunities for the required systematic dataset optimization
[38, 40]. We also want to point out that the use of various ML
models in unsupervised fashion, such as neural network based
autoencoders, has shown great potential to address background
suppression related tasks and should be further investigated
[60].
While there exist several potential post-data analysis meth-
ods that can improve the detection significance of the TES
for the ALPS II experiment, we argue that reaching the
black-body radiation limited [40, 61] ultra-low background of
10−5 Hz ultimately requires the implementation of hardware-
based background suppression methods. As already suggested
in Ref. [61], the simplest way to do this is to apply a cryogenic
narrow bandpass optical filter in front of the TES which effec-
tively improves its energy resolution. We are currently building
11
a cryogenic optical U-bench inside our dilution refrigerator
enabling the implementation of such filter as a part of our
TES setup.
IX. ACKNOWLEDGEMENTS
F.J., A.L. and C.S. acknowledge support from the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation)
under Germany’s Excellence Strategy – EXC 2121 “Quantum
Universe” – 390833306. M.M. and E.R. acknowledge support
from the European Research Council (ERC) under the Euro-
pean Union’s Horizon 2020 research and innovation program,
Grant Agreement No. 948689 (AxionDM).
REFERENCES
[1] K. .D Irwin and G. C. Hilton. Transition-edge sensors.
Cryogenic particle detection, pages 63–150, 2005.
[2] Adriana E Lita, Aaron J Miller, and Sae Woo Nam.
Counting near-infrared single-photons with 95% effi-
ciency. Optics express, 16(5):3032–3040, 2008.
[3] Kaori Hattori, Toshio Konno, Yoshitaka Miura, Sachiko
Takasu, and Daiji Fukuda.
An optical transition-edge
sensor with high energy resolution.
Superconductor
Science and Technology, 35(9):095002, 2022.
[4] Mario De Lucia, Paolo Dal Bo, Eugenia Di Giorgi,
Tommaso Lari, Claudio Puglia, and Federico Paolucci.
Transition edge sensors: Physics and applications.
In-
struments, 8(4):47, 2024.
[5] Adriana
E
Lita,
Brice
Calkins,
LA
Pellouchoud,
Aaron Joseph Miller, and Sae Nam.
Superconduct-
ing transition-edge sensors optimized for high-efficiency
photon-number resolving detectors. In Advanced Photon
Counting Techniques IV, volume 7681, pages 71–80.
SPIE, 2010.
[6] Boris S Karasik, Andrei V Sergeev, and Daniel E Prober.
Nanobolometers for thz photon detection. IEEE Transac-
tions on Terahertz Science and Technology, 1(1):97–111,
2011.
[7] Francesco Mattioli, Zili Zhou, Alessandro Gaggero, Ros-
alinda Gaudio, Saeedeh Jahanmirinejad, D¨ond¨u Sahin,
Francesco Marsili, Roberto Leoni, and Andrea Fiore.
Photon-number-resolving superconducting nanowire de-
tectors.
Superconductor
Science
and
Technology,
28(10):104001, 2015.
[8] Ruslan Hummatov, Adriana E Lita, Tannaz Farrahi, Ne-
gar Otrooshi, Samuel Fayer, Matthew J Collins, Malcolm
Durkin, Douglas Bennett, Joel Ullom, Richard P Mirin,
et al.
Fast transition-edge sensors suitable for pho-
tonic quantum computing. Journal of Applied Physics,
133(23), 2023.
[9] Peizhan Li, Jiaqiang Zhong, Wen Zhang, Zheng Wang,
Kangmin Zhou, Wei Miao, Yuan Ren, Jing Li, Qijun Yao,
and Shengcai Shi. Multi-color photon detection with a
single superconducting transition-edge sensor. Nuclear
Instruments and Methods in Physics Research Section A:
Accelerators, Spectrometers, Detectors and Associated
Equipment, 1054:168408, 2023.
[10] RW Romani, Aaron J Miller, B Cabrera, Enectali
Figueroa-Feliciano, and Sae Woo Nam. First astronomi-
cal application of a cryogenic transition edge sensor spec-
trophotometer. The Astrophysical Journal, 521(2):L153,
1999.
[11] Marcel P Bruijn, Wouter M Bergmann Tiest, Henk FC
Hoevers, Eric Krouwer, Jan van der Kuur, Marcel L
Ridder, Zakaria Moktadir, Remco Wiegerink, Dick van
Gelder, and Miko Elwenspoek. Development of arrays
of transition edge sensors for application in x-ray as-
tronomy. Nuclear instruments and methods in physics
research section A: accelerators, spectrometers, detectors
and associated equipment, 513(1-2):143–146, 2003.
[12] DJ Goldie, AV Velichko, DM Glowacka, and S Withing-
ton. Ultra-low-noise MoCu transition edge sensors for
space applications. Journal of Applied Physics, 109(8),
2011.
[13] A Bergen, HJ Van Weers, C Bruineman, MMJ Dhall´e,
HJG
Krooshoop,
HJM
Ter
Brake,
K
Ravensberg,
BD Jackson, and CK Wafelbakker. Design and validation
of a large-format transition edge sensor array magnetic
shielding system for space application. Review of Scien-
tific Instruments, 87(10), 2016.
[14] John W Appel, Charles L Bennett, Michael K Brewer,
Ricardo Bustos, Manwei Chan, David T Chuss, Joseph
Cleary, Jullianna D Couto, Sumit Dahal, Rahul Datta,
et al. Calibration of transition-edge sensor (TES) bolome-
ter arrays with application to CLASS. The Astrophysical
Journal Supplement Series, 262(2):52, 2022.
[15] Akira Miyazaki and P Spagnolo.
Dark photon search
with a gyrotron and a transition edge sensor. In 2020
45th International Conference on Infrared, Millimeter,
and Terahertz Waves (IRMMW-THz), pages 1–2. IEEE,
2020.
[16] Godehard Angloher, S Banik, G Benato, A Bento,
A Bertolini, R Breier, C Bucci, J Burkhart, L Canonica,
A D’Addabbo, et al. DoubleTES detectors to investigate
the CRESST low energy background: results from above-
ground prototypes. The European Physical Journal C,
84(10):1001, 2024.
[17] Roger K Romani, Y-Y Chang, Rupak Mahapatra, Mark
Platt, Maggie Reed, Ivar Rydstrom, Bernard Sadoulet,
Bruno Serfass, and Matt Pyle. A transition edge sensor
operated in coincidence with a high sensitivity athermal
phonon sensor for photon coupled rare event searches.
Applied Physics Letters, 125(23), 2024.
[18] Robin B¨ahre, Babette D¨obrich, Jan Dreyling-Eschweiler,
Samvel
Ghazaryan,
Reza
Hodajerdi,
Dieter
Horns,
Friederike Januschek, E-A Knabbe, Axel Lindner, Dieter
Notz, et al. Any light particle search II—technical design
report. Journal of Instrumentation, 8(09):T09001, 2013.
[19] Rikhav
Shah,
Katharina-Sophie
Isleif,
Friederike
Januschek, Axel Lindner, and Matthias Schott.
TES
detector for ALPS II. arXiv preprint arXiv:2110.10654,
2021.
[20] Jos´e Alejandro Rubiera Gimeno, Katharina-Sophie Isleif,
Friederike Januschek, Axel Lindner, Manuel Meyer,
Gulden Othman, Matthias Schott, Rikhav Shah, and
12
Lukas Sohl. The TES detector of the ALPS II exper-
iment. Nuclear Instruments and Methods in Physics Re-
search Section A: Accelerators, Spectrometers, Detectors
and Associated Equipment, 1046:167588, 2023.
[21] Pierre Sikivie.
Experimental tests of the” invisible”
axion. Physical Review Letters, 51(16):1415, 1983.
[22] Katharina-Sophie Isleif and ALPS Collaboration.
The
any light particle search experiment at DESY. Moscow
University Physics Bulletin, 77(2):120–125, 2022.
[23] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Lau-
rent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-
Maranto, and Lenka Zdeborov´a. Machine learning and
the physical sciences.
Reviews of Modern Physics,
91(4):045002, 2019.
[24] Josh Cogan, Michael Kagan, Emanuel Strauss, and Ariel
Schwarztman. Jet-images: computer vision inspired tech-
niques for jet tagging. Journal of High Energy Physics,
2015(2):1–16, 2015.
[25] Lucio
Mwinmaarong
Dery,
Benjamin
Nachman,
Francesco Rubbo, and Ariel Schwartzman.
Weakly
supervised classification in high energy physics. Journal
of High Energy Physics, 2017(5):1–11, 2017.
[26] Jason Sang Hun Lee, Inkyu Park, Ian James Watson, and
Seungjin Yang.
Quark-gluon jet discrimination using
convolutional neural networks.
Journal of the Korean
Physical Society, 74:219–223, 2019.
[27] James Barnard, Edmund Noel Dawe, Matthew J Dolan,
and Nina Rajcic. Parton shower uncertainties in jet sub-
structure analyses with deep neural networks. Physical
Review D, 95(1):014018, 2017.
[28] Jing Li and Hao Sun. An attention based neural network
for jet tagging. arXiv preprint arXiv:2009.00170, 2020.
[29] Julian Collado, Kevin Bauer, Edmund Witkowski, Taylor
Faucett, Daniel Whiteson, and Pierre Baldi.
Learning
to isolate muons.
Journal of High Energy Physics,
2021(10):1–17, 2021.
[30] Yi-Lun Du, Daniel Pablos, and Konrad Tywoniuk. Deep
learning jet modifications in heavy-ion collisions. Jour-
nal of High Energy Physics, 2021(3):1–50, 2021.
[31] Łukasz
Kamil
Graczykowski,
Monika
Jakubowska,
Kamil Rafał Deja, Maja Kabus, Alice Collaboration,
et al. Using machine learning for particle identification in
alice. Journal of Instrumentation, 17(07):C07016, 2022.
[32] Thorsten Buss, Frank Gaede, Gregor Kasieczka, Ana-
tolii Korol, Katja Kr¨uger, Peter McKeown, and Mar-
tina Mozzanica.
Calohadronic: a diffusion model for
the generation of hadronic showers.
arXiv preprint
arXiv:2506.21720, 2025.
[33] Jie Ren, Daohan Wang, Lei Wu, Jin Min Yang, and
Mengchao Zhang. Detecting an axion-like particle with
machine learning at the LHC. Journal of High Energy
Physics, 2021(11):1–26, 2021.
[34] Haihao Shi, Zhenyang Huang, Qiyu Yan, Jun Li, Guo-
liang L¨u, and Xuefei Chen. A machine learning pipeline
for hunting hidden axion signals in pulsar dispersion
measurements. arXiv preprint arXiv:2505.16562, 2025.
[35] PB Cushman, MC Fritts, AD Chambers, A Roy, and
T Li. Strategies for machine learning applied to noisy hep
datasets: Modular solid state detectors from supercdms.
arXiv preprint arXiv:2404.10971, 2024.
[36] Laura Manenti, Carlo Pepe, Isaac Sarnoff, Tengiz
Ibrayev, Panagiotis Oikonomou, Artem Knyazev, Euge-
nio Monticone, Hobey Garrone, Fiona Alder, Osama
Fawwaz, et al. Dark counts in optical superconducting
transition-edge sensors for rare-event searches. Physical
Review Applied, 22(2):024051, 2024.
[37] Manuel Meyer, Katharina Isleif, Friederike Januschek,
Axel Lindner, Gulden Othman, Jos´e Alejandro Ru-
biera
Gimeno,
Christina
Schwemmbauer,
Matthias
Schott, Rikhav Shah, and ALPS Collaboration. A first
application of machine and deep learning for background
rejection in the ALPS II TES detector.
Annalen der
Physik, 536(1):2200545, 2024.
[38] Jos´e Alejandro Rubiera Gimeno. Optimizing a Transition
Edge Sensor detector system for low flux infrared photon
measurements at the ALPS II experiment. PhD thesis, U.
Hamburg (main), Hamburg U., Hamburg, 2024.
[39] Jos´e Alejandro Rubiera Gimeno, Friederike Januschek,
Katharina-Sophie Isleif, Axel Lindner, Manuel Meyer,
Gulden Othman, Christina Schwemmbauer, and Rikhav
Shah. A TES system for ALPS II - status and prospects.
Proceedings of Science, 449:567, 2024.
[40] Jos´e Alejandro Rubiera Gimeno, Friederike Januschek,
Axel Lindner, Christina Schwemmbauer, Katharina-
Sophie Isleif, Manuel Meyer, Elmeri Rivasto, Gulden
Othman, and Rikhav Shah. Simulation and measurement
of blackbody radiation background in a transition edge
sensor. Physical Review D, 112(3):032001, 2025.
[41] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and
Jun Zhou. A survey of convolutional neural networks:
analysis, applications, and prospects. IEEE transactions
on neural networks and learning systems, 33(12):6999–
7019, 2021.
[42] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep
learning. nature, 521(7553):436–444, 2015.
[43] Waseem Rawat and Zenghui Wang. Deep convolutional
neural networks for image classification: A compre-
hensive review. Neural computation, 29(9):2352–2449,
2017.
[44] Navid Mohammadi Foumani, Lynn Miller, Chang Wei
Tan, Geoffrey I Webb, Germain Forestier, and Mahsa
Salehi. Deep learning for time series classification and
extrinsic regression: A current survey. ACM Computing
Surveys, 2023.
[45] Zhiguang Wang, Weizhong Yan, and Tim Oates. Time
series classification from scratch with deep neural net-
works: A strong baseline.
In 2017 International joint
conference on neural networks (IJCNN), pages 1578–
1585. IEEE, 2017.
[46] Hassan
Ismail
Fawaz,
Benjamin
Lucas,
Germain
Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan
Weber, Geoffrey I Webb, Lhassane Idoumghar, Pierre-
Alain Muller, and Franc¸ois Petitjean.
Inceptiontime:
Finding alexnet for time series classification. Data Min-
ing and Knowledge Discovery, 34(6):1936–1962, 2020.
[47] Angus Dempster, Franc¸ois Petitjean, and Geoffrey I
13
Webb.
Rocket: exceptionally fast and accurate time
series classification using random convolutional kernels.
Data Mining and Knowledge Discovery, 34(5):1454–
1495, 2020.
[48] Rikhav
Shah,
Katharina-Sophie
Isleif,
Friederike
Januschek,
Axel
Lindner,
and
Matthias
Schott.
Characterising
a
single-photon
detector
for
ALPS
II.
Journal of Low Temperature Physics, 209(3):355–
362, 2022.
[49] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.
Imagenet classification with deep convolutional neural
networks.
Advances in neural information processing
systems, 25, 2012.
[50] Xavier Glorot and Yoshua Bengio.
Understanding the
difficulty of training deep feedforward neural networks.
In Proceedings of the thirteenth international conference
on artificial intelligence and statistics, pages 249–256.
JMLR Workshop and Conference Proceedings, 2010.
[51] Diederik P Kingma.
Adam: A method for stochastic
optimization. arXiv preprint arXiv:1412.6980, 2014.
[52] Franc¸ois Chollet et al. Keras. https://keras.io, 2015.
[53] Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene
Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghe-
mawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving,
Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz
Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion
Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris
Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner,
Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent
Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol
Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke,
Yuan Yu, and Xiaoqiang Zheng.
TensorFlow: Large-
scale machine learning on heterogeneous systems, 2015.
Software available from tensorflow.org.
[54] SI Bityukov and NV Krasnikov. New physics discovery
potential in future experiments. Modern Physics Letters
A, 13(40):3235–3249, 1998.
[55] SI Bityukov and NV Krasnikov.
On the observability
of a signal above background.
Nuclear Instruments
and Methods in Physics Research Section A: Acceler-
ators, Spectrometers, Detectors and Associated Equip-
ment, 452(3):518–524, 2000.
[56] Tongliang Liu and Dacheng Tao.
Classification with
noisy labels by importance reweighting.
IEEE Trans-
actions on pattern analysis and machine intelligence,
38(3):447–461, 2015.
[57] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju
Shin, and Jae-Gil Lee. Learning from noisy labels with
deep neural networks: A survey.
IEEE transactions
on neural networks and learning systems, 34(11):8135–
8153, 2022.
[58] Hyungki Im and Paul Grigas. Binary classification with
instance and label dependent label noise. arXiv preprint
arXiv:2306.03402, 2023.
[59] Yonghoon Lee and Rina Foygel Barber.
Binary clas-
sification with corrupted labels.
Electronic Journal of
Statistics, 16(1):1367–1392, 2022.
[60] Philipp Holl, L Hauertmann, B´ela Majorovits, Oliver
Schulz, Maria Schuster, and AJ Zsigmond. Deep learning
based pulse shape discrimination for germanium detec-
tors. The European Physical Journal C, 79:1–9, 2019.
[61] Aaron J Miller, Adriana Lita, Danna Rosenberg, Stephen
Gruber, and S Nam.
Superconducting photon number
resolving detectors: Performance and promise. In Proc.
8th Int. Conf. Quantum Communication, Measurement
and Computing (QCMC’06), pages 445–450, 2007.
|
1 Binary Classification of Light and Dark Time Traces of a Transition Edge Sensor Using Convolutional Neural Networks Elmeri Rivasto1,*, Katharina-Sophie Isleif2, Friederike Januschek3, Axel Lindner3, Manuel Meyer1, Gulden Othman2, Jos ́e Alejandro Rubiera Gimeno2, and Christina Schwemmbauer3 1CP3-origins, 55, 5230 Odense, Denmark 2Helmut-Schmidt-Universit ̈at (HSU), Holstenhofweg 85, 22043 Hamburg, Germany 3Deutsches Elektronen Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany *Contact: Abstract-The Any Light Particle Search II (ALPS II) is a light shining through a wall experiment probing the existence of axions and axion-like particles using a 1064 nm laser source. While ALPS II is already taking data using a heterodyne based detection scheme, cryogenic transition edge sensor (TES) based single-photon detectors are planned to expand the detection system for cross-checking the potential signals, for which a sensitivity on the order of 10-24 W is required. In order to reach this goal, we have investigated the use of convolutional neural networks (CNN) as binary classifiers to distinguish the experimentally measured 1064 nm photon triggered (light) pulses from background (dark) pulses. Despite extensive hyperparameter optimization, the CNN based binary classifier did not outperform our previously optimized cut-based analysis in terms of detection significance. This suggests that the used approach is not generally suitable for background suppression and improving the energy resolution of the TES. We partly attribute this to the training confusion induced by near-1064 nm black-body photon triggers in the background, which we identified as the limiting background source as concluded in our previous works. However, we argue that the problem ultimately lies in the binary classification based approach and believe that regression models would be better suitable for addressing the energy resolution. Unsupervised machine learning models, in particular neural network based autoencoders, should also be considered potential candidates for the suppression of noise in time traces. While the presented results and associated conclusions are obtained for TES designed to be used in the ALPS II experiment, they should hold equivalently well for any device whose output signal can be considered as a univariate time trace. I. INTRODUCTION Transition edge sensors (TES) are superconducting devices commonly used as microcalorimeters and bolometers when voltage-biased within the region of the superconducting phase transition [1]. In this narrow region, the resistance of the circuit changes steeply with temperature and consequently the absorption of a single photons at suitable wavelengths heat the circuit sufficiently to cause a significant change in its current. These perturbations can be efficiently detected by superconducting quantum interference devices (SQUIDs) being inductively coupled to the biased circuit. Unlike many other single-photon detectors, such as superconducting nanowire single-photon detectors (SNSPDs), TESs are capable of measuring the energy of the absorbed photons over a wide range of wavelengths. Their energy resolution together with high quantum efficiency and microsecond-scale dead time [2-4] make TESs important tools widely used in quantum computing [5-9], space and astrophysics experiments [10-14] along with particle physics and dark matter searches [15-17]. A TES is planned to be used in a future science run of the Any Light Particle Search II (ALPS II) at DESY Hamburg (Germany) [18]. ALPS II aims to probe the existence of axions and axion-like particles (ALPs) [19, 20] and is essentially a light shining through a wall experiment featuring a powerful 1064 nm laser beam that is shone into a 106 m long resonant production cavity that is in 5.3 T magnetic field. While the propagation of the light is blocked by an opaque wall, the theoretically proposed photon-axion oscillation enables 1064 nm photons to emerge on the other side of the optical barrier in a regeneration cavity (symmetrical to the production cavity) [21, 22]. The detection of these reconverted photons requires an extremely sensitive detection scheme achievable with TESs [19, 20]. The target sensitivity for the conversion rate lies in the range of 10-5 Hz (one photon per day) setting the upper limit for the background rate required for the statistically significant detection of axions and ALPs within the practically feasible 20 days measurement time [19]. In the recent years machine learning (ML) methods have been recognized as extremely useful tools in various fields of physics [23]. ML approaches have been widely implemented for various tasks associated with particle physics experiments, including high level data analysis, jet tagging and data simulation [24-32]. Lately, ML methods have also been used to detect hints from axion-like particles in LHC [33] and pulsar dispersion measurements [34], and are planned to be implemented for analyzing data in the SuperCDMS direct dark matter experiment [35]. Most relevant for our interest, Manenti et al. [36] have recently applied unsupervised ML models to study the background of a TES system not connected to the outside of a cryostat, by e.g. a fiber (intrinsic background). They report that the majority of the observed background 17 Sep 2025 2 pulses originate from high-energy events associated with radioactive decays and secondary cosmic-ray particles. The resulting dark counts are rather easy to distinguish from lowenergy photon induced pulses by simply comparing the pulse shape, as already concluded in our previous work [37]. This is because the energy released from the various high-energy events is likely to be deposited within the underlying substrate rather than the TES itself. Due to slow diffusion of heat from the substrate to the TES, the dark counts have significantly larger rise and decay times when compared with typical photon induced pulses where the energy is deposited directly to the TES [36, 38]. While the unsupervised ML models have been mainly used for qualitative categorization of the recorded pulses, supervised ML models are better suited for actual quantitative background suppression. One can expect the state-of-the-art supervised ML models to outperform the capabilities of traditional data processing techniques. We have successfully implemented this in the past for our recorded intrinsic background without an optical fiber connected to the TES [37]. In this work, we expand on this study by also considering the extrinsic background measured while an optical fiber links the lab environment to the TES inside the dilution refrigerator. This mainly introduces an additional background resulting from fiber coupled black-body radiation. The black-body photons have been identified as the limiting background for our experimental setup [20, 39, 40], and have been previously addressed using a traditional cut-based analysis without relying on machine learning [38]. We want to point out that the blackbody background rate is ultimately determined by the energy resolution of the TES since lower energy resolution enables more reliable distinction between the signal and the background photons. Thus, different analysis methods addressing the suppression of noise in the time traces differently can have significant effects on the background rates [40]. For example, we have found that performing fits to the measured TES pulses in frequency domain instead of time domain results in 2-fold improvement in the energy resolution [38, 39]. Here, for the first time, we are trying to further improve the rejection of near-1064 nm black-body photons using convolutional neural networks (CNN) that are considered as the stateof-the-art machine learning model for univariate time-series classification. Ultimately, the goal is to reach a background rate below 10-5 Hz while maintaining a tolerable rate for correctly classified signal pulses (analysis efficiency). The CNNs expand the architecture of the conventional multi-layer perceptrons (feedforward neural networks) via the introduction of convolutional layers that apply different filters (kernels) to the input data enabling the efficient extraction of spatial (or temporal) patterns [41]. The CNNs remain the underlying technology behind the state-of-the-art image classification ML-models [41-43], also covering the univariate time series classifiers [44-47]. Consequently, CNNs are expected to show the best performance in the suppression of the background triggers. A major benefit of CNNs is that they enable modelindependent analysis of recorded pulses as one does not have to rely on fitting any specific functions to the data. We will utilize the CNNs as binary classifiers that are trained to distinguish between 1064 nm photon induced light pulses and any other background source induced dark pulses. These classifiers are then ensembled to quantitatively study the background sources that are particularly difficult to distinguish from the light pulses and to see whether the CNNs can be used for further background suppression. The manuscript is organized as follows: in Section II we describe our experimental setup and how the experimental data used in this work was obtained (see Ref. [38] for more details). Next, in Section III we present a detailed description of the overall architecture of the considered CNN and explain how we use the experimentally measured data to train it and evaluate its performance. Details of the CNN's hyperparameter optimization are also presented. The performance of the optimized CNN is analyzed and further fine-tuned in Section IV. We then proceed in evaluating the true performance of the model in classifying the light and dark pulses in Section V and study the observed background pulses in detail in Section VI. Finally, we discuss the background source induced confusion in the CNN training process in Section VII before summarizing the final conclusions in Section VIII. II. EXPERIMENTAL DATA All of the experimental data was measured using a tungstenbased TES chip fabricated at National (NIST). The TES has been optimized for the detection of 1064 nm photons through its multilayer structure [48]. The working point of the voltage-biased TES was set to 30% of its normal state resistance and its current was monitored by an inductively coupled SQUID (manufactured by Physikalisch Technische Bundesanstalt; PTB) with 5 GHz gain bandwidth product at a sampling rate of 50 MHz. The photons absorbed by the TES were then detected as pulses in the SQUID output voltage Vout(t). The TES+SQUID module was operated within a Bluefors SD dilution refrigerator at 25 mK base temperature. In order to recognize the shapes of the 1064 nm photon Vout(t) pulses, we first gathered data by illuminating the TES with a highly attenuated 1064 nm laser source for a total of 5 s. The laser light was coupled to the TES via a HI1060 single mode optical fiber. During this time interval, a total of 4722 pulses above the 10 mV trigger threshold were recorded, where each time trace corresponds to a 200 μs time window with 104 samples (50 MHz sampling frequency). The recorded time traces were pre-filtered by discriminating double pulses. This was done by detecting local maxima from the derivative of a time trace based on specific height, prominence and spacing criteria. This left us with 3928 single-photon triggered pulses. We have performed fits in the time domain using a phenomenological function [37, 38] Vph(t) = - 2APh e (t0-t) τrise + e -(t0-t) τdecay + V0, (1) describing a photonic event triggered TES pulse at around t = t0 -τrise. The shape of the pulse is determined by the rise and decay times τrise and τdecay, respectively, the obtained distribution of which will be used to determine the cuts. The 3 0 5 10 15 20 t ( s) 20 15 10 5 0 Vout (mV) (a) Dark Light 0 10 20 t ( s) 10 0 Vout (mV) Example light pulse 0 50 100 150 200 250 300 PC1 10 0 10 20 30 40 50 60 PC2 (b) Dark Light Fig. 1: (a) The average measured light pulse and dark pulses, where the shaded regions represent the associated standard deviations. The inset presents a randomly chosen light pulse as an example of the signal and noise shapes. (b) Principal Component Analysis (PCA) scatter plot showing the projection of pulse feature vectors (τrise, τdecay, χ2 Ph, PhFFT, χ2 FFT) onto the first two principal components (PC1 and PC2). The inset shows a close-up of the cluster associated with light pulses, showing overlap with some of the dark pulses measured in extrinsic background. χ2 Ph-error associated with the performed fit is also considered. In addition, we have also considered a fitting function from Small Signal Theory [1] VSST(t) = ( AFFT · e-(t-t0)/τ+ -e-(t-t0)/τ- , if t ≥t0. 0, else, (2) While fitting Eq. (2) in time domain is unstable, we have performed the fit in frequency domain, in particular due to the simple Fourier transformation of Eq. (2). The obtained fitting parameters were then used to calculate the associated peak height (PhFFT) which will also be considered for the filtering of the pulses. This specific parameter was chosen because its associated distribution for light pulses has previously resulted in the lowest achieved energy resolution for our TES [38, 39]. The χ2 FFT-error associated with the fits in frequency domain is also considered for filtering. In order to mitigate the effects of scattering processes and nonlinear optical effects (that can alter the wavelength of the photons emitted from the laser) on the training process, we have subjected the 3928 single-photon triggers for minor filtering. This was done by only including pulses whose τrise, τdecay and PhFFT were simultaneously within 0.1%- 99.9% quantiles of their associated distributions while χ2 Ph and χ2 FFT being below 99.9% quantiles. This resulted in the discrimination of 0.76% (30) of the filtered triggers, leaving us a total of 3898 pulses that are used for training and evaluating the CNNs. The filtered 3898 pulses were further post-processed by removing a possible voltage offset by shifting the recorded Vout(t) values by the average voltage value measured between t = 0-24 μs. Finally, in order to reduce the computational load for the machine learning tasks, the time windows of the single pulses were narrowed down to 24 μs (1200 samples) by locating the pulse minimum and including the 300 antecedent and 900 subsequent samples. For the rest of the manuscript, we will keep referring to these 1064 nm photon triggered time traces as light pulses. The average measured light pulse is presented in Fig. 1(a) together with an example of a single pulse in the inset of the figure, further illustrating the baseline noise present in the measurements. Right after measuring the light pulses, we proceeded to measure the extrinsic background over a period of two days using the same system configuration except for disconnecting the optical fiber from the laser source and sealing its open end with a metallic cover. A total of 8872 background events exceeding the 10 mV trigger threshold were observed. While all of these pulses were included as training and evaluation data for the CNNs without making any cuts, they were postprocessed in the same way as the light pulses by removing the voltage offsets and clipping the time windows as described above. It should be noted that for the pulses with large rise and decay times, typically originating from intrinsic background sources, the clipped time window can be too narrow to fully represent the entire pulse. Regardless, these pulses would have been in any case very easily distinguishable from the light pulses. We refer to all of the recorded background pulses as dark pulses for the rest of the manuscript. The average measured dark pulse is presented in Fig. 1(a). In summary, after data cleaning we are left with 3898 light pulses and 8872 dark pulses, making the overall size of the dataset 12,770 to be used for the training and evaluation the CNNs. Before proceeding, we want to further characterize the differences between light and dark pulses via Principal Components Analysis (PCA) in order to detect any possible overlap between the resulting light and dark clusters indicating the presence of photonic background. We have done this by associating each pulse with a feature vector assembled from the above introduced fitting parameters as (τrise, τdecay, χ2 Ph, PhFFT, χ2 FFT). The PCA scatter plot visualizing the projection of these feature vectors for both light and dark pulses onto the two main principal components is presented in Fig. 1(b). As expected, the light pulses are tightly clustered in one spot while the dark pulses are much more spread out. Regardless, one can observe significant overlap between the light and dark pulses as illustrated in the inset of Fig. 1(b), most likely originating from fiber coupled black-body radiation [40]. We will analyze 4 Fig. 2: A schematic illustration of the basic architecture of the considered CNN and its hyperparameters whose optimization is explicitly addressed (see Table I). this later in the paper aided by the CNN ensembles. III. CNN ARCHITECTURE The basic architecture of the CNN considered in this manuscript resembles the generally used structure for image classifying tasks [42, 49]. As illustrated in Fig. 2, the CNN consists of pairs of i) convolutional and average pooling layers followed by ii) flatten and dropout layers connected to iii) dense layers with gradually decreasing sizes ultimately ending up to a single output neuron. As typical for binary classifiers, we use binary cross-entropy (log-loss) as the loss function to guide the training process. The weights of the model are initialized using the Glorot uniform approach [50] and updated during training following the Adaptive Moment Estimation (Adam) algorithm [51]. In order to limit the size of the search space for hyperparameter optimization, we fix the activation functions associated with the convolutional layers to tanh while using ReLu for the dense layers. This combination was observed to clearly result in best performance in our initial tests of the model. We further require that all of the convolutional layers share the same number of filters, filter size, and pool size. We fix the pool size to 2, limiting the maximum number of convolutional layers in the considered architecture to 10. The structure of the dense layers is limited by requiring that the number of neurons must always drop by half in the following hidden layer. This leaves only the maximum number of neurons within the first layer and the number of hidden layers as the only hyperparameters to be optimized for the dense part of the CNN. On top of the architectural hyperparameters, we address the optimization of the dropout rate, number of epochs and batch size. A summary of the considered search space for the hyperparameter optimization is presented in Table I. The CNN is implemented using a high-level neural network API Keras version 2.12.0 [52] with a TensorFlow version 2.12.1 backend [53]. Experimental data (12,770) LIGHT (3,898) DARK (8,872) Training data (2,000) LIGHT (1,000) 80% Training Validation 20% DARK (1,000) Testing data (10,770) LIGHT (2,898) DARK (7,872) 1,000 light 1,000 dark Rest Fig. 3: A schematic illustration of the division of the dataset into training and testing data. The training set was further divided 80%-20% into training and validation sets, where the validation set was to evaluate the performance of the CNN during the training process. A. Training process The model is trained using 1000 light and dark pulses, respectively, resulting in an overall training set size of 2000 pulses. It should be noted that the training set is perfectly balanced between light and dark pulses, making it optimal for training. The training set is further split 80%-20% into training and validation sets, where the training set is used for the actual training of the model while the evaluation set is used to guide the training process via minimization of the binary cross-entropy used as the loss function. The division of the dataset into training and testing set is schematically illustrated in Fig. 3. 5 B. Performance evaluation The performance of the model is evaluated against the rest of the dataset that was not dedicated for training as illustrated in Fig. 3, consisting of 2898 light pulses and 7872 dark pulses. With our main interest towards the use of the CNN in the ALPS II experiment, we follow the approach of our previous work (Ref. [37]) and evaluate the performance of the model during hyperparameter optimization based on the detection significance given by [54, 55] S = 2 p Tobs · √εdεans + nb -√nb , (3) where Tobs = 518 h is the observation time of the experiment (as used in Ref. [37]), ns = 2.8 · 10-5 Hz is the assumed signal (1064 nm photon) rate and εd = 0.5 is the pessimistically evaluated detection efficiency taking into account all losses associated with the experimental setup [37]. The only analysis method dependent parameters are the closely related background rate (nb) and analysis efficiency (εa). The εa is simply calculated as the percentage of correctly classified light pulses (true positive). The nb on the other hand is calculated from the number of misclassified dark pulses (Nmdp, false positives). Since the total of 8872 extrinsic background pulses were measured over a time period of 2 d, the used testing set containing the subset of 7872 dark pulses effectively corresponds to (7872/8872) · 2 d ≈1.77 d time period. The effective background rate can thus be estimated from the number of misclassified dark pulses (false positives) as Nmdp/1.77 d. It should be pointed out that the S score is a threshold (Th. ∈[0, 1] corresponding to dark-light, respectively) dependent metric. Consequently, all the reported values of S in this manuscript have been obtained after optimizing the threshold to maximize its value. While the S score will be used to determine the optimal combination of hyperparameters in the following section, we will later also evaluate the trained CNNs using the F1 score (F1 = 2 × Precision × Recall/(Precision + Recall) which balances between precision (TP/(TP+FP)) and recall (TP/(TP+FN)) thus making it a commonly used evaluation metric for biased datasets (TP=True Pos., FP=False Pos., FN=False Neg.). The F1 score describes the true classification performance of the CNN better than the S score by directly measuring how well the CNN is able to correctly classify the light pulses while avoiding the misclassification of dark pulses. Thus, we will utilize the F1 score in particular in Section VI, where we study the nature and origins of the background pulses which the CNNs are struggling to classify correctly in more detail. The F1 score is also a threshold dependent metric and its reported values in later sections have been obtained after the threshold optimization. It should be pointed out that the threshold optimization has to be done separately for the S and F1 scores. C. Hyperparameter Optimization The hyperparameters of the CNN architecture introduced in Section III are optimized by 2000 iterations of random search, i.e. by training a total of 2000 models using randomized combinations of hyperparameters and choosing the one with 0 1 2 3 4 5 6 7 Trainable parameters (106) 0.0 0.5 1.0 1.5 2.0 2.5 S score 4.1% 95.9% S > 1 S < 1 Fig. 4: The evaluated average S scores as a function of number of trainable parameters for the associated CNN. The error bars correspond to standard deviations associated with 5 evaluations of the CNN with different training and testing sets. The dashed vertical line points to the limit S = 1 above which the points are colored by red (4.1%) and below as blue (95.9%). highest evaluation metrics. The search space for the considered hyperparameters is presented in Table I. In order to reduce the susceptibility of the evaluated performance towards the random division of the dataset into training and testing sets (Fig. 3), each iterated combination of hyperparameters is evaluated using 5 CNNs trained and tested with different, randomly divided, datasets as described in Sections III-A and III-B. The initial weights of the CNNs were fixed between the iterations of the random search. The evaluated average S scores (⟨S⟩) and their associated standard deviations (σS) as a function of trainable parameters in the CNN are illustrated in Fig. 4. The highest S scores are Hyperparameter Optimization range Optimum Nb. of conv. layers 3-10 6 Nb. of filters 20-150 45 Kernel size 3-20 12 Dropout rate 0-0.2 0.18 Nb. of dense layers 1-10 3 Max nb. of neurons 100-300 188 Learning rate 10-5-10-3 5.2 · 10-4 Epochs 5-20 10∗ Batch size 32-128 99 ⟨S⟩= 1.26 ± 0.16 TABLE I: The considered search space for the hyperparameter optimization of the CNN using 2000 iterations of random search. The activation functions of the convolutional and dense layers were fixed to tanh and ReLu, respectively. The presented optimum corresponds to the maximum obtained average detection significance ⟨S⟩(see details in Section III-B) for an ensemble of 5 CNNs trained and evaluated with differently (randomly) divided training, validation and testing sets, while the weight initialization for the CNN was fixed. The values of S were calculated using the optimal threshold that maximizes its value. ∗The number of epochs was later increased to 20 (see Section IV) 6 CNN #0 model seed=0 data seed=0 CNN #1 model seed=1 data seed=0 CNN #2 model seed=2 data seed=0 CNN #3 model seed=3 data seed=0 CNN #4 model seed=4 data seed=0 CNN #5 model seed=5 data seed=0 CNN #6 model seed=6 data seed=0 CNN #7 model seed=7 data seed=0 CNN #8 model seed=8 data seed=0 CNN #9 model seed=9 data seed=0 (a) Ensemble #1 (b) Ensemble #2 CNN #0 model seed=8 data seed=0 CNN #1 model seed=8 data seed=1 CNN #2 model seed=8 data seed=2 CNN #3 model seed=8 data seed=3 CNN #4 model seed=8 data seed=4 CNN #5 model seed=8 data seed=5 CNN #6 model seed=8 data seed=6 CNN #7 model seed=8 data seed=7 CNN #8 model seed=8 data seed=8 CNN #9 model seed=8 data seed=9 Max S % of misclassified dark pulses closest to ensemble average Set model seed =8 model seed=8 data seed=6 Set CNN #6 model seed=8 data seed=6 Further analysis using (c) Model weights: random Data division: fixed Model weights: fixed Data division: random Model weights: fixed Data division: fixed Fig. 5: A schematic illustration of the CNN ensembles used in this study. (a) The ensemble used in Section IV where the CNNs are trained and evaluated with exactly the same datasets but the initialization of their weights differ between each other. (b) The ensemble used in Section V and VI. The model weights were fixed by seeding them equivalently to that of the CNN in the ensemble #1 that achieved the highest S score (Eq. (3)). Only the dataset division into training and testing sets was randomized for this ensemble. (c) A single CNN trained with data divided equivalently to that of the CNN in the ensemble #2 whose percentage of misclassified dark pulses was closest to the average of the ensemble. This model was used in the latter part of Section VI. clearly associated with CNNs with smaller number of trainable parameters, making the training process more efficient. We determine the optimal combination of hyperparameters for the CNN that maximizes ⟨S⟩-σS, under the constraint of limiting the maximum number of trainable parameters to 0.5·106. The chosen optimal model has a total of 297,057 parameters and reached ⟨S⟩= 1.26 ± 0.16. The associated hyperparameters are presented in Table I. IV. FINE-TUNING OPTIMIZED CNN Next, we will proceed investigating and fine-tuning the previously optimized CNN (Table I). In order to identify possible underfitting or overfitting, we increase the number of epochs from the optimized value of 10 up to 20. We train an ensemble of 10 CNNs with fixed datasets but randomly initialized weights. This CNN ensemble will be referred to as ensemble #1 (see Fig. 5(a)). We noticed that the performance of CNNs is susceptible towards the random initialization of its weights. This is manifested as a high standard deviation in the ⟨S⟩= 0.98 ± 0.22 (average threshold: ⟨Th.⟩= 0.98 ± 0.012) for ensemble #1 (see Fig. 5(a)). While the weight initialization is internally stochastic, we have initialized the ensemble #1 weights in a deterministic manner via specified random seeds. This enables us to optimize the weight initialization of the CNN associated with the chosen seed for reproducible results. The observed sensitivity towards weight initialization is particularly common for CNNs associated with dropout layers. Similar observations have been made in other CNN based machine learning implementations [46]. The high values of optimal thresholds with respect to the S score reflect the importance of decreasing the background rate at the cost of analysis efficiency (see Eq. (3)). We also evaluate ensemble #1 using the F1 score (see III-B) reflecting the sole classification performance of the ensemble. We obtain ⟨F1⟩= 0.96 ± 0.006 (⟨Th.⟩= 0.63 ± 0.34), indicating reduced conservativity in the classification of light pulses when compared with the S score. This demonstrates the importance of threshold optimization for different metrics. While the high threshold values resulting in low background rates at the expense of analysis efficiency is preferable for the S score, the best performance for the CNN, in terms of correctly classifying the light and the dark pulses, is obtained at much lower thresholds where the F1 score is maximized. The ensemble #1 average learning curves with respect to number of epochs are presented in Fig. 6(a). Note, that the number of epochs was increased from 10 (obtained from initial hyperparameter optimization) to 20 in order to identify possible under or overfitting. The learning curve in Fig. 6(a) indeed indicates slight underfitting and increasing the number of epochs to 20 flattens the learning curves without resulting in overfitting. Thus, the number of epochs will be fixed to 20 for the rest of the manuscript. The learning curves with respect to training set size, calculated for the best performing model within the ensemble #1 in terms of the F1 score, are presented in Fig. 6(b). No underfitting or overfitting is observed from these curves, indicating that the used training set of size 2000 7 0 2 4 6 8 10 12 14 16 18 20 Epochs 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Loss (a) Training loss Validation loss 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Training set size (103) 0.5 0.6 0.7 0.8 0.9 1.0 F1 (b) Train Test Fig. 6: Different learning curves associated with the CNN ensemble #1; (a) The average loss (binary cross-entropy) as a function of epochs for the whole ensemble of 10 CNNs. The dashed vertical line indicates the epochs=10 obtained from the initial hyperparameter optimization (see Table I). This was then increased to 20 in order to identify underfitting or overfitting from the associated figure. (b) The F1 score as a function of training set size for the CNN associated with highest F1 (also highest S) within the ensemble. The error bars correspond to the standard deviation associated with 3 statistical repetitions in the CNN training, between which the used training data was shuffled. The used testing data was kept constant throughout the process. Avg. Detection Significance Avg. F1 Score ⟨S⟩= 0.95 ± 0.23 ⟨F1⟩= 0.961 ± 0.005 Avg. Optimal Threshold 0.98 ± 0.01 0.68 ± 0.31 (⟨Th.⟩) Avg. Analysis Efficiency 75.3% ± 14.3% 97.5% ± 0.4% (⟨True Positives⟩= ⟨εa⟩) Avg. misclassified dark pulses 0.6% ± 0.6% 2.00% ± 0.4% (⟨False Positives⟩) Background rate (nb) 0.33 mHz ± 0.33 mHz 1.0 mHz ± 0.2 mHz TABLE II: A summary of the evaluated performance of the CNN ensemble #2 (Fig. 5(b)) in terms of detection significance and F1 score (Section III-B). The detection significance has been calculated using Eq. (3) with fixed observation time T = 518 h, detection efficiency εd = 0.5 and signal rate εs = 2.8 · 10-5 Hz based on the current realistic limitations for the ALPS II experiment [37]. The reported background rate (nb) is calculated from the average percentage of mislcassified dark pulses (false positives) by dividing its value by the effective observation time for the extrinsic background as explained in Section III-B. The reported standard deviation for the avg. miscalssified dark pulses is slightly lower than the average value, but is rounded up. (1000 light, 1000 dark) is well suited for training the models. The best performing CNN within the ensemble #1 achieved the highest S = 1.4 using a threshold of Th. = 0.97. We will proceed with initializing the weights of the CNNs used in the following sections according to this specific model. V. PERFORMANCE OF THE CNN After the optimization of the CNN's hyperparameters (Section III-C) together with additional optimization of number of epochs and weight initialization (Section IV), we now proceed in studying how well the CNN actually performs for its designated purpose as a binary classifier for the light and dark pulses. Here, it is important to note that the S- and F1 scores evaluated for the trained CNN can be expected to be susceptible to the random division of the dataset into training, validation and testing sets. This is particularly true in the case where a small subset of mislabeled pulses (e.g. target value for a dark pulse = 1) exists. Consequently, we again train an ensemble of 10 CNNs and study its average performance on classifying the pulses. We will keep referring to this as ensemble #2 that is schematically illustrated in Fig. 5(b). Unlike for the ensemble #1 studied in the previous section, the weights of all the models in the ensemble #2 are initialized similarly based on the results of the previous Section IV. However, all the CNNs in ensemble #2 are trained and evaluated with different randomly divided datasets according to Fig. 3. The average evaluation metrics for the CNN ensemble #2 is listed in Table II. As expected, the CNNs turned out to be susceptible to the random division of the datasets with ensemble average evaluation metrics ⟨S⟩= 0.95 ± 0.23 (⟨Th.⟩= 0.98 ± 0.01) and ⟨F1⟩= 0.961 ± 0.005 (⟨Th.⟩= 0.68 ± 0.31) having rather large standard deviations. The highest S scores are obtained at much higher values of thresholds when compared with the F1 scores. As seen in Table II, the associated differences in analysis efficiency and number of misclassified dark pulses (∼background rate) reflect 8 0.0 0.5 1.0 1.5 2.0 2.5 n2 0.0 0.5 1.0 1.5 2.0 2.5 n1 Optimum : Max S score: 1.29±0.03 Optimal cut: [ 0.3 , + 1.7 ] Analysis efficiency: 68% Background rate: 0.11 mHz Cut-based analysis 0.0 0.2 0.4 0.6 0.8 1.0 1.2 S score Fig. 7: The detection significance as a function of the cut range associated with the PhFFT calculated based on the fitting parameters obtained in frequency domain analysis. The cut range is defined as [μm -n1 · σ, μm + n1 · σ], where n1, n2 ∈0, 1/3, . . . , 3 and the μm ≈-16.33 mV and σ ≈1.25 mV are obtained by fitting a skewed Gaussian to the distribution of the peak heights for the light pulses. All of the detection significances were determined as the average of 5 S scores calculated for randomly chosen testing sets. the importance of background suppression at the expense of analysis efficiency for maximizing S. A. Comparison to Cut-Based Analysis Next, we want to compare the above presented pulse analysis using the CNN ensemble #2 with the traditional (non-ML) cutbased analysis. In the cut-based analysis the pulse is classified based on its associated fitting parameters τrise, τdecay, χ2 Ph, PhFFT and χ2 FFT introduced in Section II. In order for a pulse to be classified as light, all of the pulse parameters must simultaneously lie within [μm -3 · σ, μm + 3 · σ] for the parameter specific μm and σ. The only exception is with the PhFFT, for which the cut range [μm -n1 · σ, μm + n2 · σ] will be rigorously optimized for n1, n2 ∈0, 1/3, . . . , 3. These cut ranges in terms of μm and σ are determined based on the fits of skewed Gaussians to the parameter distributions. However, in order to make the comparison between the performance of the CNN ensemble #2 and cut-based analysis fair, we determine the cut ranges for the associated pulse parameters based on 1000 randomly selected light pulses. In analogy to machine learning, determining the cut region corresponds to the training of the model. The rest of the light data together with the whole extrinsic dataset is then used as testing data to evaluate the S score. It should be noted that the triggers used for determining the cuts (training) and evaluating the S score (testing) correspond exactly to the pulses used for training and evaluating the CNNs. Thus, the used light pulses have been filtered as described in Section II. The optimization of the cut range for PhFFT is illustrated in Fig. 7, where every calculated S score represents the average of 5 S scores calculated for randomly selected testing data. The cut-based analysis results in the maximum detection significance of S = 1.29 ± 0.03 achieved with the optimal cut [μ -0.3σ, μ + 1.7σ]. The obtained average score is approximately 36% higher when compared with the average ⟨S⟩= 0.95 ± 0.23 achieved by the CNN ensemble #2. While being outperformed by the cut-based analysis, we argue that the CNN's performance is limited by the measured extrinsic dataset (containing presumed dark pulses) that has been distorted by actual light pulses (outliers). That is, the dark data contains a subset of pulses that actually correspond to 1064 nm photon induced triggers. Such dataset distortion would cause confusion in the training of the CNNs thus limiting their performance. Of course, the presence of the near-1064 nm black-body photons in the dark dataset also has detrimental effect on the S score calculated using the cut-based analysis, but since the dataset used to calculate the S scores using the CNNs and cut-based analysis are the same, their comparison with each other is fair. In the following sections, we will focus on investigating the background pulses that were previously presumed as dark and based on the results, quantitatively address how the near-1064 nm photon blackbody photon induced distortion in the dark dataset results in training confusion. VI. BACKGROUND CLASSIFICATION We will use the CNN ensemble #2 from Section III-B to study the nature and origin of the remaining background set by the misclassified dark pulses (false positives). To do this, one needs to work with the F1 score that quantifies how well the model classifies light pulses as light while avoiding misclassifying dark pulses and light. With the associated optimal thresholds, the ensemble #2 correctly classifies 97.5% ± 0.4% of the light pulses (analysis efficiency) while misclassifying 2.00% ± 0.4% of the dark pulses as light (Table II). The latter corresponds to the average of 157 misclassified dark pulses. It is likely that this subset of dark pulses is triggered by photonic events, making them difficult to distinguish from the light pulses. In fact, we have previously concluded that the limiting background source for our TES is fiber coupled near-1064 nm black-body photons which are indistinguishable from the light pulses given the energy resolution of the TES [20, 39, 40]. In order to confirm this, we begin by comparing the observed effective rate of the assumed near1064 nm black-body background to theoretical predictions. Using the 1.77 d observation time derived in Section III-B, the observed background rate is nb = 157/1.77 d that equals approximately 1.02 mHz ± 0.2 mHz The expected rate of 1064 nm black-body photons can be theoretically estimated from the Planck's spectrum ̇N = Z dΩ Z dA Z E2 E1 2 h3c2 · E2 eE/kT -1, (4) where the first two integrals represent the solid angle (Ω) and the area (A) over which the black-body radiation can enter the optical fiber, and h is Planck's constant, c is the speed of light, T is the temperature of the black-body radiation source 9 0 5 10 15 20 t ( s) 17.5 15.0 12.5 10.0 7.5 5.0 2.5 0.0 Vout (mV) (a) Light (3898) Miscl. Dark (160) 0 5000 10000 15000 20000 PC1 100 0 100 200 300 PC2 (b) Miscl. dark Light Fig. 8: (a) The average light pulse used for training and testing the CNN together with the average misclassified dark pulse. (b) PCA scatter plot showing the projection of feature vectors (τrise, τdecay, χ2 Ph, PhFFT, χ2 FFT) onto the first two principal components (PC1 and PC2) for all of the light (true positives) and misclassified dark pulses (false positives). and k is Boltzmann's constant. The integrals over dΩand dA are purely geometrical and can be estimated using the supplier provided specs of the used HI-1060 single mode fiber; numerical aperture NA = 0.14 and core radius R = 3.1 μm. The solid angle is calculated as Ω= 2π · (1 -cos(θ)), where the acceptance angle of the fiber is θ = sin-1(NA). This results in Ω= 0.062. The corresponding area is simply A = πR2 = 3 · 10-11 m2. The integration limits for the energy integral are set to E ± 3σE where the E = 1.165 eV corresponding to 1064 nm photons and σE = 0.088 eV based on the skewed Gaussian fit to the distribution of PhFFT for light pulses. With these parameters, the integral in Eq. (4) at T = 295 K results in ̇N = 5.1 mHz. The calculated rate is fivefold higher than what was observed by the CNN from the experimental data. However, the above presented calculation represents the theoretical maximum black-body rate and does not take into account various loss mechanisms present in experimental setup. In reality, this rate is lowered by the limited detection efficiency of the TES together with wavelength dependent transmission losses in the used mating sleeves and the fiber itself. In particular, fiber curling inside the 0.0 0.2 0.4 0.6 0.8 1.0 Threshold 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 F1 Fig. 9: (a) The F1 score calculated for the experimental test set (2898 light pulses, 7872 dark pulses) using the best performing model within the CNN ensemble #2 as a function of threshold. The maximum of F1 = 0.96 was found for a threshold of 0.95, illustrated by the dashed vertical line in the figure. cryostat, that was present in our experimental setup, has been observed to result in significant attenuation towards longer wavelength photons. The simulation of losses due to optical fiber, fiber curling and TES response in the same experimental setup used in this work has been recently addressed in Refs. [38, 40]. Using the associated simulation pipeline, we estimate a 0.57 mHz black-body background associated with cut-region [μm -3σ, μm +3σ]. This corresponds better to the herein estimated value of 1.02 mHz ± 0.2 mHz. Considering the limitations of the simulation, the misclassified dark pulses seem indeed likely to result from near-1064 nm black-body photons coupled into the optical fiber based on the above presented rate comparison. In order to provide more concrete evidence on the origin of the misclassified dark pulses, we investigate them using CNN within the ensemble #2 for which the percentage of misclassified dark pulses was closest to the ensemble average 2.00% (157) reported above (see Fig. 5(c)). The corresponding CNN misclassified 2.03% (160) of the 7872 dark pulses under analysis and can be thus considered to represent the average of the ensemble with respect to misclassified dark pulses up to a good extent. It achieves F1 = 0.96 at an optimal threshold of 0.95. Finding the optimal threshold is illustrated in Fig. 9. In this case, the associated curve peaks rather sharply at the optimum Th. = 0.95, indicating susceptibility towards the optimization of the threshold. The average misclassified dark pulse is illustrated in Fig. 8(a) together with the average light pulse. The shapes of the average pulses closely resemble each other suggesting a source of the same nature. In addition, Fig. 8(b) illustrates the PCA scatter plot showing the projections of the associated feature vectors (τrise, τdecay, χ2 Ph, PhFFT, χ2 FFT) of the light and misclassified dark pulses onto the two main principal components. The majority of the misclassified dark pulses are clustered in the near vicinity of the light pulses. Only roughly 14 out of the 160 misclassified dark pulses are clearly outside 10 the (red) cluster defined by light pulses. Thus, the vast majority of the misclassified dark pulses are most likely triggered by near-1064 nm photons originating from the black-body background. I.e., for the CNN they are indistinguishable from the light pulses given the energy resolution of the TES. Having these pulses distort the extrinsics (dark) dataset has detrimental effects on the training of the CNNs as the model is repeatedly being trained to classify an actual light pulse as dark. It is evident that this causes confusion in the training process of the CNN and limits the performance of the model. The presence of near-1064 nm black-body photon triggers in the dark dataset also explains why the CNNs are so susceptible towards the division of the dataset into training and testing sets (Fig. 3). How many of these, technically mislabeled dark pulses, end up in training or testing sets has a significant impact on both training and evaluation of the CNNs. This results in the observed high standard deviations in the evaluation metrics of the CNN ensemble #2 (Table II). In the following section, we will investigate the blackbody radiation induced training confusion and show that this ultimately limits the performance of the CNN. VII. BLACK-BODY PHOTONS AND TRAINING CONFUSION In the previous section we concluded that the vast majority of the misclassified dark pulses are ultimately near-1064 nm photons. While these are physically indistinguishable from the light pulses given the energy resolution of the TES, training the CNNs to learn to classify these as dark pulses evidently causes confusion. The detrimental effects of this label noise have been widely studied [56-58]. While a small fraction of corrupted labels can result in improved performance by acting as a form of regularization, their effect on learning is generally detrimental [59]. This is particularly true for the herein studied CNNs, which are already regulated by the dropout layer. In consequence, the reported classifying performances of the CNNs in the previous sections should be considered as lower limits with room for further improvement when trained with an ideal, undistorted dataset. In order to demonstrate this, we relabel the target values of the 160 misclassified dark pulses from the previous section as light (target value 0 →1) and use this data for retraining an ensemble of 10 CNNs exactly as in Section VI (ensemble #2, see Fig. 5(b)). It should be noted that the 160 misclassified dark pulses form a subset of the testing data used in Section VI and consequently some near-1064 nm black-body photon triggers can still be left unidentified in the training set. Moreover, the 160 misclassified dark pulses had roughly 14 pulses which were clearly not clustered in the vicinity of the light pulses and thus these may not correspond to near-1064 nm blackbody photons. Regardless, we relabel all of the 160 previously misclassified dark pulses as light, after which the vast majority of the near-1064 nm photon triggered dark counts should have been addressed. While the dataset still remains somewhat distorted, one expects to observe improvement in the CNN performance when trained with the relabeled dataset if there was training confusion present previously. Thus, we proceed in retraining the CNN ensemble #2 (Fig. 5(b)) using the dataset with 160 relabeled dark pulses based on Section VI. The overall dataset now contains 3898+ 160 = 4058 light pulses and 8872 -160 = 8712 dark pulses which is then further divided into training and testing data according to Fig. (3). Upon doing this, the F1 score improves from the previously estimated ⟨F1⟩= 0.961±0.005 (Table II) to ⟨F1⟩= 0.974 ± 0.004. Evidently, the average S score also improves due to additional background discrimination from ⟨S⟩= 0.95 ± 0.23 to ⟨S⟩= 1.61 ± 0.58 (εa = 85.5% and nb = 0.17 mHz), but still does not outperform the cut-based analysis after relabeling the presumed black-body triggers. The observed improvement in the F1 score (see section III-B) is direct evidence that the performance of the CNN is limited by the training confusion caused by the presence of near1064 nm black-body photon triggers in the measured extrinsic background. VIII. CONCLUSIONS AND OUTLOOK We have aimed to improve the detection significance of a TES by analyzing the experimentally measured 1064 nm laser photon (light) and extrinsics background event (dark) triggered univariate time traces using a CNN based binary classifier. After extensive hyperparameter optimization, the CNN ensemble resulted in average detection significance of ⟨S⟩= 0.95±0.23, still being outperformed by our previously used (non-ML) cutbased analysis by 36%. While we have shown that this partly results from the training confusion introduced by the near1064 nm black-body photons (indistinguishable from light) in the extrinsics background, we conclude that the binary classification based approach is not optimal analysis method for classifying the time traces. A CNN trained as a binary classifier does not seem to be able to learn how to suppress the noise of the time traces and is thus unable to improve the energy resolution of the TES, ultimately resulting in lower S score when compared with the cut-based analysis. In contrast to the binary classification scheme, we believe that regression based CNN would be better suitable for improving the detection significance and plan to investigate this further in future projects. The performance of the regression based models is presumably more sensitive to the underlying dataset, and thus the significance of dataset control and optimization should be stressed. Our recently published simulation framework for TES pulses provides particularly great opportunities for the required systematic dataset optimization [38, 40]. We also want to point out that the use of various ML models in unsupervised fashion, such as neural network based autoencoders, has shown great potential to address background suppression related tasks and should be further investigated [60]. While there exist several potential post-data analysis methods that can improve the detection significance of the TES for the ALPS II experiment, we argue that reaching the black-body radiation limited [40, 61] ultra-low background of 10-5 Hz ultimately requires the implementation of hardwarebased background suppression methods. As already suggested in Ref. [61], the simplest way to do this is to apply a cryogenic narrow bandpass optical filter in front of the TES which effectively improves its energy resolution. We are currently building 11 a cryogenic optical U-bench inside our dilution refrigerator enabling the implementation of such filter as a part of our TES setup. IX. ACKNOWLEDGEMENTS F.J., A.L. and C.S. acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. M.M. and E.R. acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, Grant Agreement No. 948689 (AxionDM). REFERENCES [1] K. .D Irwin and G. C. Hilton. Transition-edge sensors. Cryogenic particle detection, pages 63-150, 2005. [2] Adriana E Lita, Aaron J Miller, and Sae Woo Nam. Counting near-infrared single-photons with 95% efficiency. Optics express, 16(5):3032-3040, 2008. [3] Kaori Hattori, Toshio Konno, Yoshitaka Miura, Sachiko Takasu, and Daiji Fukuda. An optical transition-edge sensor with high energy resolution. Superconductor Science and Technology, 35(9):095002, 2022. [4] Mario De Lucia, Paolo Dal Bo, Eugenia Di Giorgi, Tommaso Lari, Claudio Puglia, and Federico Paolucci. Transition edge sensors: Physics and applications. Instruments, 8(4):47, 2024. [5] Adriana E Lita, Brice Calkins, LA Pellouchoud, Aaron Joseph Miller, and Sae Nam. Superconducting transition-edge sensors optimized for high-efficiency photon-number resolving detectors. In Advanced Photon Counting Techniques IV, volume 7681, pages 71-80. SPIE, 2010. [6] Boris S Karasik, Andrei V Sergeev, and Daniel E Prober. Nanobolometers for thz photon detection. IEEE Transactions on Terahertz Science and Technology, 1(1):97-111, 2011. [7] Francesco Mattioli, Zili Zhou, Alessandro Gaggero, Rosalinda Gaudio, Saeedeh Jahanmirinejad, D ̈ond ̈u Sahin, Francesco Marsili, Roberto Leoni, and Andrea Fiore. Photon-number-resolving superconducting nanowire detectors. Superconductor Science and Technology, 28(10):104001, 2015. [8] Ruslan Hummatov, Adriana E Lita, Tannaz Farrahi, Negar Otrooshi, Samuel Fayer, Matthew J Collins, Malcolm Durkin, Douglas Bennett, Joel Ullom, Richard P Mirin, et al. Fast transition-edge sensors suitable for photonic quantum computing. Journal of Applied Physics, 133(23), 2023. [9] Peizhan Li, Jiaqiang Zhong, Wen Zhang, Zheng Wang, Kangmin Zhou, Wei Miao, Yuan Ren, Jing Li, Qijun Yao, and Shengcai Shi. Multi-color photon detection with a single superconducting transition-edge sensor. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 1054:168408, 2023. [10] RW Romani, Aaron J Miller, B Cabrera, Enectali Figueroa-Feliciano, and Sae Woo Nam. First astronomical application of a cryogenic transition edge sensor spectrophotometer. The Astrophysical Journal, 521(2):L153, 1999. [11] Marcel P Bruijn, Wouter M Bergmann Tiest, Henk FC Hoevers, Eric Krouwer, Jan van der Kuur, Marcel L Ridder, Zakaria Moktadir, Remco Wiegerink, Dick van Gelder, and Miko Elwenspoek. Development of arrays of transition edge sensors for application in x-ray astronomy. Nuclear instruments and methods in physics research section A: accelerators, spectrometers, detectors and associated equipment, 513(1-2):143-146, 2003. [12] DJ Goldie, AV Velichko, DM Glowacka, and S Withington. Ultra-low-noise MoCu transition edge sensors for space applications. Journal of Applied Physics, 109(8), 2011. [13] A Bergen, HJ Van Weers, C Bruineman, MMJ Dhall ́e, HJG Krooshoop, HJM Ter Brake, K Ravensberg, BD Jackson, and CK Wafelbakker. Design and validation of a large-format transition edge sensor array magnetic shielding system for space application. Review of Scientific Instruments, 87(10), 2016. [14] John W Appel, Charles L Bennett, Michael K Brewer, Ricardo Bustos, Manwei Chan, David T Chuss, Joseph Cleary, Jullianna D Couto, Sumit Dahal, Rahul Datta, et al. Calibration of transition-edge sensor (TES) bolometer arrays with application to CLASS. The Astrophysical Journal Supplement Series, 262(2):52, 2022. [15] Akira Miyazaki and P Spagnolo. Dark photon search with a gyrotron and a transition edge sensor. In 2020 45th International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz), pages 1-2. IEEE, 2020. [16] Godehard Angloher, S Banik, G Benato, A Bento, A Bertolini, R Breier, C Bucci, J Burkhart, L Canonica, A D'Addabbo, et al. DoubleTES detectors to investigate the CRESST low energy background: results from aboveground prototypes. The European Physical Journal C, 84(10):1001, 2024. [17] Roger K Romani, Y-Y Chang, Rupak Mahapatra, Mark Platt, Maggie Reed, Ivar Rydstrom, Bernard Sadoulet, Bruno Serfass, and Matt Pyle. A transition edge sensor operated in coincidence with a high sensitivity athermal phonon sensor for photon coupled rare event searches. Applied Physics Letters, 125(23), 2024. [18] Robin B ̈ahre, Babette D ̈obrich, Jan Dreyling-Eschweiler, Samvel Ghazaryan, Reza Hodajerdi, Dieter Horns, Friederike Januschek, E-A Knabbe, Axel Lindner, Dieter Notz, et al. Any light particle search II-technical design report. Journal of Instrumentation, 8(09):T09001, 2013. [19] Rikhav Shah, Katharina-Sophie Isleif, Friederike Januschek, Axel Lindner, and Matthias Schott. TES detector for ALPS II. arXiv preprint , 2021. [20] Jos ́e Alejandro Rubiera Gimeno, Katharina-Sophie Isleif, Friederike Januschek, Axel Lindner, Manuel Meyer, Gulden Othman, Matthias Schott, Rikhav Shah, and 12 Lukas Sohl. The TES detector of the ALPS II experiment. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 1046:167588, 2023. [21] Pierre Sikivie. Experimental tests of the" invisible" axion. Physical Review Letters, 51(16):1415, 1983. [22] Katharina-Sophie Isleif and ALPS Collaboration. The any light particle search experiment at DESY. Moscow University Physics Bulletin, 77(2):120-125, 2022. [23] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie VogtMaranto, and Lenka Zdeborov ́a. Machine learning and the physical sciences. Reviews of Modern Physics, 91(4):045002, 2019. [24] Josh Cogan, Michael Kagan, Emanuel Strauss, and Ariel Schwarztman. Jet-images: computer vision inspired techniques for jet tagging. Journal of High Energy Physics, 2015(2):1-16, 2015. [25] Lucio Mwinmaarong Dery, Benjamin Nachman, Francesco Rubbo, and Ariel Schwartzman. Weakly supervised classification in high energy physics. Journal of High Energy Physics, 2017(5):1-11, 2017. [26] Jason Sang Hun Lee, Inkyu Park, Ian James Watson, and Seungjin Yang. Quark-gluon jet discrimination using convolutional neural networks. Journal of the Korean Physical Society, 74:219-223, 2019. [27] James Barnard, Edmund Noel Dawe, Matthew J Dolan, and Nina Rajcic. Parton shower uncertainties in jet substructure analyses with deep neural networks. Physical Review D, 95(1):014018, 2017. [28] Jing Li and Hao Sun. An attention based neural network for jet tagging. arXiv preprint , 2020. [29] Julian Collado, Kevin Bauer, Edmund Witkowski, Taylor Faucett, Daniel Whiteson, and Pierre Baldi. Learning to isolate muons. Journal of High Energy Physics, 2021(10):1-17, 2021. [30] Yi-Lun Du, Daniel Pablos, and Konrad Tywoniuk. Deep learning jet modifications in heavy-ion collisions. Journal of High Energy Physics, 2021(3):1-50, 2021. [31] Łukasz Kamil Graczykowski, Monika Jakubowska, Kamil Rafał Deja, Maja Kabus, Alice Collaboration, et al. Using machine learning for particle identification in alice. Journal of Instrumentation, 17(07):C07016, 2022. [32] Thorsten Buss, Frank Gaede, Gregor Kasieczka, Anatolii Korol, Katja Kr ̈uger, Peter McKeown, and Martina Mozzanica. Calohadronic: a diffusion model for the generation of hadronic showers. arXiv preprint , 2025. [33] Jie Ren, Daohan Wang, Lei Wu, Jin Min Yang, and Mengchao Zhang. Detecting an axion-like particle with machine learning at the LHC. Journal of High Energy Physics, 2021(11):1-26, 2021. [34] Haihao Shi, Zhenyang Huang, Qiyu Yan, Jun Li, Guoliang L ̈u, and Xuefei Chen. A machine learning pipeline for hunting hidden axion signals in pulsar dispersion measurements. arXiv preprint , 2025. [35] PB Cushman, MC Fritts, AD Chambers, A Roy, and T Li. Strategies for machine learning applied to noisy hep datasets: Modular solid state detectors from supercdms. arXiv preprint , 2024. [36] Laura Manenti, Carlo Pepe, Isaac Sarnoff, Tengiz Ibrayev, Panagiotis Oikonomou, Artem Knyazev, Eugenio Monticone, Hobey Garrone, Fiona Alder, Osama Fawwaz, et al. Dark counts in optical superconducting transition-edge sensors for rare-event searches. Physical Review Applied, 22(2):024051, 2024. [37] Manuel Meyer, Katharina Isleif, Friederike Januschek, Axel Lindner, Gulden Othman, Jos ́e Alejandro Rubiera Gimeno, Christina Schwemmbauer, Matthias Schott, Rikhav Shah, and ALPS Collaboration. A first application of machine and deep learning for background rejection in the ALPS II TES detector. Annalen der Physik, 536(1):2200545, 2024. [38] Jos ́e Alejandro Rubiera Gimeno. Optimizing a Transition Edge Sensor detector system for low flux infrared photon measurements at the ALPS II experiment. PhD thesis, U. Hamburg (main), Hamburg U., Hamburg, 2024. [39] Jos ́e Alejandro Rubiera Gimeno, Friederike Januschek, Katharina-Sophie Isleif, Axel Lindner, Manuel Meyer, Gulden Othman, Christina Schwemmbauer, and Rikhav Shah. A TES system for ALPS II - status and prospects. Proceedings of Science, 449:567, 2024. [40] Jos ́e Alejandro Rubiera Gimeno, Friederike Januschek, Axel Lindner, Christina Schwemmbauer, KatharinaSophie Isleif, Manuel Meyer, Elmeri Rivasto, Gulden Othman, and Rikhav Shah. Simulation and measurement of blackbody radiation background in a transition edge sensor. Physical Review D, 112(3):032001, 2025. [41] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE transactions on neural networks and learning systems, 33(12):69997019, 2021. [42] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436-444, 2015. [43] Waseem Rawat and Zenghui Wang. Deep convolutional neural networks for image classification: A comprehensive review. Neural computation, 29(9):2352-2449, 2017. [44] Navid Mohammadi Foumani, Lynn Miller, Chang Wei Tan, Geoffrey I Webb, Germain Forestier, and Mahsa Salehi. Deep learning for time series classification and extrinsic regression: A current survey. ACM Computing Surveys, 2023. [45] Zhiguang Wang, Weizhong Yan, and Tim Oates. Time series classification from scratch with deep neural networks: A strong baseline. In 2017 International joint conference on neural networks (IJCNN), pages 15781585. IEEE, 2017. [46] Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan Weber, Geoffrey I Webb, Lhassane Idoumghar, PierreAlain Muller, and Franc ̧ois Petitjean. Inceptiontime: Finding alexnet for time series classification. Data Mining and Knowledge Discovery, 34(6):1936-1962, 2020. [47] Angus Dempster, Franc ̧ois Petitjean, and Geoffrey I 13 Webb. Rocket: exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery, 34(5):14541495, 2020. [48] Rikhav Shah, Katharina-Sophie Isleif, Friederike Januschek, Axel Lindner, and Matthias Schott. Characterising a single-photon detector for ALPS II. Journal of Low Temperature Physics, 209(3):355362, 2022. [49] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. [50] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256. JMLR Workshop and Conference Proceedings, 2010. [51] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint , 2014. [52] Franc ̧ois Chollet et al. Keras. https://keras.io, 2015. [53] Mart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Largescale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [54] SI Bityukov and NV Krasnikov. New physics discovery potential in future experiments. Modern Physics Letters A, 13(40):3235-3249, 1998. [55] SI Bityukov and NV Krasnikov. On the observability of a signal above background. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 452(3):518-524, 2000. [56] Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on pattern analysis and machine intelligence, 38(3):447-461, 2015. [57] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. IEEE transactions on neural networks and learning systems, 34(11):81358153, 2022. [58] Hyungki Im and Paul Grigas. Binary classification with instance and label dependent label noise. arXiv preprint , 2023. [59] Yonghoon Lee and Rina Foygel Barber. Binary classification with corrupted labels. Electronic Journal of Statistics, 16(1):1367-1392, 2022. [60] Philipp Holl, L Hauertmann, B ́ela Majorovits, Oliver Schulz, Maria Schuster, and AJ Zsigmond. Deep learning based pulse shape discrimination for germanium detectors. The European Physical Journal C, 79:1-9, 2019. [61] Aaron J Miller, Adriana Lita, Danna Rosenberg, Stephen Gruber, and S Nam. Superconducting photon number resolving detectors: Performance and promise. In Proc. 8th Int. Conf. Quantum Communication, Measurement and Computing (QCMC'06), pages 445-450, 2007.
|
2509.16226
|
On LLM-Based Scientific Inductive Reasoning Beyond Equations
Brian S. Lin1* Jiaxin Yuan2* Zihan Zhou3* Shouli Wang4* Shuo Wang1†
Cunliang Kong1 Qi Shi1 Yuxuan Li1 Liner Yang2† Zhiyuan Liu1† Maosong Sun1
1Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua University
Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University
2Beijing Language and Culture University
3Xiamen University
4Harbin Institute of Technology
caish25@mails.tsinghua.edu.cn
Abstract
As large language models (LLMs) increasingly
exhibit human-like capabilities, a fundamental
question emerges: How can we enable LLMs
to learn the underlying patterns from limited
examples in entirely novel environments and
apply them effectively? This question is central
to the ability of LLMs in inductive reasoning.
Existing research on LLM-based inductive rea-
soning can be broadly categorized based on
whether the underlying rules are expressible
via explicit mathematical equations. However,
many recent studies in the beyond-equations
category have emphasized rule design without
grounding them in specific scenarios. Inspired
by the parallels between inductive reasoning
and human scientific discovery, we propose the
task of LLM-Based Scientific Inductive Rea-
soning Beyond Equations and introduce a new
benchmark, SIRBench-V1, to evaluate the in-
ductive reasoning abilities of LLMs in scien-
tific settings. Our experimental results show
that current LLMs still struggle with this task,
underscoring its difficulty and the need for fur-
ther advancement in this area.1
1
Introduction
In recent years, many advanced reasoning models,
including OpenAI o1 (OpenAI et al., 2024) and
DeepSeek-R1 (DeepSeek-AI et al., 2025), have
demonstrated strong deductive reasoning capabili-
ties, especially as evidenced by their performance
in mathematics and programming tasks. These
tasks are typically characterized by concise prob-
lem descriptions, where the model is required to
generate a long chain of thought (Wei et al., 2022)
to solve complex problems.
In contrast, inductive reasoning (Hayes et al.,
2010) poses a different challenge, requiring mod-
*Equal contribution.
†Corresponding authors.
1The open-source code and data can be found at https:
//github.com/thunlp/SIR-Bench.
Equation-based Scientific
Inductive Reasoning
(from LLM-SRBench)
Scientific Inductive Reasoning
Beyond Equations
(our work)
expected output
<reactants> + <reagents> =
<products>
Task: New Chemistry
Equation Discovery
Task: Reactant Prediction
test input
expected output
ICL examples
- Problem Description:
Reaction rate with respect to
Time and Concentration
- Known Term:
- Data points generated from
unknown novel equations
ICL examples
<reactants> + <reagents>
<products>
Figure 1: Illustrative comparison of scientific inductive
reasoning: on the left, tasks focused on equation dis-
covery (Shojaee et al., 2025), and on the right, tasks
representing broader forms of scientific induction be-
yond equation generation.
els to infer general rules or structures from mul-
tiple specific observations (Chollet, 2019; Yang
et al., 2022). Inductive reasoning involves making
predictions about new scenarios based on existing
knowledge or observed data (Hayes et al., 2010).
Inductive reasoning has been progressively recog-
nized as a critical component for human-like cog-
nitive modeling and the development of general
artificial intelligence (Li et al., 2024). However,
current LLMs still exhibit notable shortcomings
in inductive reasoning tasks (Li et al., 2024; Hua
et al., 2025; Yan et al., 2025). Even state-of-the-art
models often fail to correctly infer abstract rules
from observations and typically rely on memoriz-
ing rather than truly understanding the underlying
concepts.
Currently, artificial intelligence is increasingly
regarded as a transformative paradigm in scientific
discovery, with growing applications across dis-
ciplines such as physics, materials science, and
arXiv:2509.16226v1 [cs.CL] 12 Sep 2025
Benchmark
Task Type
Related to
Scientific
Discovery
Beyond
Mathematical
Equations
Closed-Ended
Questions
#Instances
Sequence
Length
MATDESIGN
HI
✓
✓
×
50
250-1,000
TOMATO-Chem
HI
✓
✓
×
51
100-600
ResearchBench
HI
✓
✓
×
1,386
unknown
chaotic systems
SR
✓
×
✓
131
~100
SRSD
SR
✓
×
✓
240
100-300
LLM-SRBench
SR
✓
×
✓
239
~100
MIRAGE
IR
×
✓
✓
2,000
20-100
MIR-Bench
IR
×
✓
✓
6,930
50-250
IOLBench
IR
×
✓
✓
1,500
200-2,000
SIRBench-V1 (Ours)
IR
✓
✓
✓
710
500-3,000
Table 1: Analysis of existing related benchmarks. HI: Hypothetical Induction, SR: Symbolic Regression, IR:
Inductive Reasoning. Related to Scientific Discovery: targets scientific problem-solving. Beyond Mathematical
Equations: focuses on reasoning not reducible to equation fitting. Closed-Ended Questions: has deterministic
answers for automatic evaluation. #Instances: number of test examples. Sequence Length: input sequence
length—crucial as scientific inductive reasoning often requires extracting information from extensive resources.
chemistry (Xu et al., 2021). Against this backdrop,
increasing attention has been paid to the inductive
reasoning abilities of LLMs in scientific contexts
recently (Yang et al., 2024; Liu et al., 2025; Fang
et al., 2025). However, systematically leveraging
reasoning models to enhance inductive tasks for
scientific discovery remains largely underexplored.
While some scientific rules, such as the velocity
formula of free fall, can be expressed mathemati-
cally, others, such as molecular structure-function
relationships, are not readily amenable to such for-
mulation. Under this criterion, we observe that ex-
isting LLM-based inductive reasoning research can
be broadly categorized based on whether the under-
lying rules can be formulated mathematically. The
first category comprises tasks that are mathematical
equation-based, which are closely related to sym-
bolic regression (Matsubara et al., 2022; Gilpin,
2021). Recent work has shown that LLMs can
serve as equation generators or guide the equa-
tion discovery process (Wang et al., 2024; Du
et al., 2024; Shojaee et al., 2024, 2025; Fang et al.,
2025). However, these tasks typically only cover
cases where the underlying rules can be explic-
itly formulated as equations. A separate line of
work targets tasks beyond mathematical equations,
proposing new inductive tasks and datasets from
various perspectives (Hua et al., 2025; Tang et al.,
2024; Banatt et al., 2024; Goyal and Dan, 2025).
However, many of these studies emphasize the cre-
ation of novel synthetic or low-frequency symbolic
systems, which often have a limited connection to
discovering scientific patterns in real-world scenar-
ios. Recent efforts under the AI4Science agenda
are exploring more scientifically grounded settings
where models emulate researchers by deriving in-
sights or hypotheses from scientific materials (Yang
et al., 2023, 2024; Liu et al., 2025). However, the
reasoning processes of these studies often remain
coarse-grained or open-ended, making robust auto-
matic evaluation challenging.
To address these gaps, we propose to examine
the capabilities of LLMs in Scientific Inductive
Reasoning Tasks Beyond Mathematical Equations.
To the best of our knowledge, high-quality and
easy-to-evaluate datasets to directly investigate this
problem are currently lacking. We have therefore
created SIRBench-V1, a new benchmark consist-
ing of a series of subtasks in chemistry and biol-
ogy. In these subtasks, the underlying rules cannot
be expressed through mathematical equations, yet
they yield relatively deterministic answers. We
transform basic scientific resources from prior stud-
ies (Grešová et al., 2023; Liu et al., 2024; Guo
et al., 2023; Edwards et al., 2022a; Irwin et al.,
2021; Westerlund et al., 2024b,a; Kim et al., 2018)
into inductive reasoning tasks. Furthermore, to
eliminate LLM memorization, we design counter-
factual tasks that establish synthetic scientific rules
for the models to reason with, rather than recall.
We follow several commonly adopted reason-
ing strategies for LLMs on the SIRBench-V1,
including implicit and explicit reasoning, self-
consistency (Wang et al., 2022), and hypothesis
refinement (Qiu et al., 2023). By investigating the
performance of several LLMs augmented with dif-
ferent reasoning strategies, we find that equation-
free scientific inductive reasoning is highly chal-
lenging for modern LLMs. Gemini-2.5-Flash, the
best-performing model, achieves an average accu-
racy of 43.81% in our benchmark, while Claude-
3.5-Haiku and GPT-4.1 demonstrate a lower aver-
age accuracy of 31.53% and 32.41%, respectively.
We also observe that using sophisticated reasoning
strategies provides minimal performance improve-
ment and, in some cases, even leads to performance
decline. Using hypothesis refinement, Gemini-2.5-
Flash, Claude-3.5-Haiku, and GPT-4.1 attain an
average accuracy of 39.06%, 31.63%, and 33.25%,
respectively. We believe this work will pave the
way for a new and fruitful avenue of research in
scientific discovery.
Contributions
In summary, the main contribu-
tions of this work are as follows:
• We present SIRBench-V1, a new scientific
inductive reasoning benchmark featuring au-
thentic and counterfactual test examples from
tasks in both biology and chemistry.
• We conduct evaluations using several repre-
sentative LLMs in conjunction with diverse
advanced inference strategies, the results of
which demonstrate the capability boundaries
of the examined LLMs.
• We derive several constructive findings for
scientific inductive reasoning, such as a com-
parison between many-short-shot and long-
few-shot learning approaches and an analysis
of memorization, which we anticipate will be
helpful for subsequent studies.
2
Related Work
2.1
Inductive Reasoning
Benchmark
Various benchmarks have recently
been introduced to systematically evaluate these
capabilities from multiple perspectives. Hua et al.
(2025) evaluate the model’s ability to infer string
transformation rules from limited input-output ex-
amples. Bongard-OpenWorld (Wu et al., 2023)
examines conceptual induction and image classi-
fication in few-shot scenarios. Tang et al. (2024)
propose an embodied interactive environment re-
quiring models to induce task rules and objec-
tives. MIR-Bench (Yan et al., 2025) provides a
many-shot in-context benchmark covering vari-
ous function-based input-output pairs. WILT (Ba-
natt et al., 2024), inspired by the Wason 2-4-6
task, evaluates multi-turn inductive reasoning and
generalization capabilities. Additionally, bench-
marks such as LINGOLY (Bean et al., 2024), Lin-
guini (Sánchez et al., 2024) and IOLBench (Goyal
and Dan, 2025), derived from the International Lin-
guistics Olympiad, challenge model generalization
under low-resource language scenarios.
Methods
Beyond benchmark development, re-
cent efforts have also explored structured frame-
works to enhance inductive reasoning in LLMs,
addressing limitations observed with chain-of-
thought prompting and few-shot methods (Bowen
et al., 2024; Gendron et al., 2023). For instance,
Chain-of-Language-Models (Yang et al., 2022) em-
ploys a modular pipeline integrating rule generation
and verification. Qiu et al. (2023) combines LLMs
with symbolic executors in a propose-verify-refine
loop, significantly enhancing robustness. Similarly,
the De-In-Ductive (DID) (Cai et al., 2024) sim-
ulates a human-like inductive-then-deductive rea-
soning sequence within a single prompt, enabling
flexible strategy switching and improved cross-task
generalization.
2.2
Scientific Inductive Reasoning in LLMs
Symbolic Regression
Symbolic regression is a
core approach for scientific discovery (Matsubara
et al., 2022; Gilpin, 2021). It is valued for its abil-
ity to extract analytical expressions directly from
data (Angelis et al., 2023). Recent studies have ex-
tended this paradigm by incorporating LLMs into
the tasks. In materials science, Wang et al. (2024)
highlight its role in revealing underlying physical
and chemical principles. Du et al. (2024) propose
a prompt-based framework using LLMs to gener-
ate candidate equations, offering greater flexibility
than traditional methods. Shojaee et al. (2024) treat
equations as programs, guided by scientific priors.
To support systematic evaluation, they then intro-
duce LLM-SRBench, a multi-domain benchmark
designed to evaluate LLMs’ true discovery capabil-
ities.
Hypothetical Induction
Hypothetical Induction
has been recognized as a subtask of inductive rea-
soning (Norton, 2003), with growing interest in
using LLMs to generate novel, valuable scientific
hypotheses from background knowledge or obser-
vations. Kumbhar et al. (2025) introduced a goal-
driven dataset and evaluation framework in mate-
rials science, while Yang et al. (2023, 2024) con-
structed datasets for hypothesis generation in chem-
istry and social science. Researchbench (Liu et al.,
2025) further provides the first benchmark covering
inspiration retrieval, hypothesis formulation, and
ranking.
3
SIRBench-V1: Task and Construction
We curate 7 tasks, with 100 samples for each biol-
ogy task, including synthetic tasks, and 30 samples
for each chemistry task.
3.1
Task Overview
Task 1: DNA Translation (Synthetic)
This task
simulates the biological process of translating a
DNA sequence into its corresponding amino acid
sequence. The model is required to induce the
codon-to-amino-acid mappings solely based on in-
context learning (ICL) examples and apply the
inferred mappings to translate a target DNA se-
quence. However, LLMs may have internalized the
canonical genetic codon table as prior knowledge,
enabling them to generate the correct amino acid
sequence through memorization rather than gen-
uine rule induction. To better assess the inductive
reasoning capabilities of the model, we provide a
synthetic alternative to the standard task design,
by randomly assigning codon-to-amino-acid map-
pings.
Task 2: DNA Table Inference (Synthetic)
This
task focuses explicitly on evaluating the model’s
inductive ability by requiring it to recover the
underlying codon table based solely on a set of
DNA–amino acid sequence pairs. The model is
asked to infer the translation rules and provide
a fully structured codon table, including codon-
to-amino acid mappings, start codons, and stop
codons. We follow the same design as in Task 1,
providing both standard and synthetic configura-
tions.
Task 3: DNA Transformation
This task adopts
a fully synthetic setup, with the goal of evaluating
the model’s ability to infer transformation rules
from ICL examples and to apply them correctly to
unseen test sequences. Each ICL example consists
of an input–output DNA sequence pair generated
by applying one of several predefined transforma-
tions: sequence reversal, complementation, reverse
complementation, segmented transformation, and
fixed base mutation.
Task 4: Molecule Design
This task requires
LLMs to generate molecular structures that sat-
isfy a given textual description. The input is a
natural language sentence (in English), and the out-
put is the corresponding molecule represented in
SMILES format.
Task 5: Molecule Captioning
This task is the
inverse of Task 4, where the input is a molecular
structure and the model is expected to generate a
corresponding description or annotation in natural
language.
Task 6: Reaction Prediction
This task focuses
on chemical reaction prediction. Given one or more
reactants and reagents, the model is expected to pre-
dict the resulting product in the form of a SMILES
string.
Task 7: Name Prediction
This task focuses on
conversions between three common chemical repre-
sentations: SMILES (linear structural encodings),
IUPAC names (standardized nomenclature), and
molecular formulas (atomic composition).
We
include four relatively unambiguous conversions:
smiles2formula, smiles2iupac, iupac2smiles, and
iupac2formula.
3.2
Data Collection
Biology
We derive source DNA sequences and
their corresponding amino acid sequences from
GenomicLLM_GRCh38 (Grešová et al., 2023; Liu
et al., 2024) for the standard task. For the synthetic
task, we generate codon tables by randomizing
every mapping except the start and stop codons,
and translate inputs using these tables.
For DNA Transformation, we randomly sample
DNA fragments from the training set as ICL exam-
ples and truncate them to a maximum length, and
do the same for test sequences. The transforma-
tion type and base-pairing schemes are randomly
sampled from a predefined set. These base-pairing
schemes are designed manually to disrupt natural
complementarity, increasing the inductive reason-
ing challenge. For all the tasks, we ensure that the
ICL examples cover all the mappings used in the
test example.
Chemistry
ChemLLMBench (Guo et al., 2023)
is a chemistry-domian LLM benchmark compris-
ing eight tasks. We select four tasks, corresponding
to Task 4-7 in our work, which exhibit a relatively
stronger emphasis on inductive reasoning capabil-
ities. The Molecule Design and Captioning tasks
are based on the ChEBI-20 dataset (Edwards et al.,
2022a), pairing molecular SMILES with textual
description. The Reaction Prediction task draws
DNA (to-protein)
Translation
DNA Table
Inference
DNA
Transformation
DNA sequence:
ATGGAGGCG
DNA sequence:
ATGGAGGCG
Real examples:
ATGGAGGC → MEA
GGAAGTGGC → GTV
...
Synthetic examples:
ATGGAGGC → MEA
GGAAGTGGC → GTV
...
Input DNA sequence:
...
Output DNA sequence:
...
Amino acid
sequence:
MEA
Amino acid
sequence:
KLH
Based on
authentic
codon
table
Based on
synthetic
codon
table
Real codon table:
forward_table: {"ATG": "M", ...}
start_codons": [...]
stop_codons": [...]
Synthetic codon table:
forward_table: {“GGA": "K", ...}
start_codons": [...]
stop_codons": [...]
Transformation Rules:
- reverse
- complementation
- reverse complementation
- segmented transformation
- fixed base mutation
Molecule
Design
Description:
The molecule is a member of the
class of formamides that is
formamide substituted ...
Smile:
CCCCNC=O
Molecule
Captioning
Smile:
CC1=CCC(CC1=O)C(=C)C
Description:
The molecule is a p-menthane
monoterpenoid that consists of
cyclohex-2-enone ...
Reaction
Prediction
Name
Prediction
Reactants+Reagents:
Brc1cccnc1.C1CCO
C1.CCOCC.CO
c1cccc(CCN2C(=O)c3ccccc3C2=O)c1.
[Cl-].[Li]CCCC.[NH4+]
Products:
COc1cccc(CCN2C(=O)c3
ccccc3C2(O)c2cccnc2)c1
Smile:
CCCOC(=O)OCC(
=O)COC1=CC=CC=C1
IUPAC name:
(2-oxo-3-phen
oxypropyl) propyl
carbonate
Formula:
C13H16O5
Biology Tasks
Chemistry Tasks
Figure 2: Our benchmark includes 7 tasks spanning two scientific disciplines: biology and chemistry.
denotes
tasks that adopt a synthetic configuration;
refers to tasks that involve only rule induction from examples, while
others involve both induction and application to a new test input.
on the USPTO-MIT Mixed reaction dataset (Irwin
et al., 2021; Westerlund et al., 2024b,a), which con-
tains information on reactants, reagents, and prod-
ucts in SMILES reaction format. The Name Pre-
diction task is derived from PubChem (Kim et al.,
2018), which offers extensive mappings between
SMILES strings and their corresponding standard
chemical names, including both IUPAC names and
molecular formulas.
3.3
Metrics
Biology
All three tasks are evaluated using accu-
racy as the primary metric, computed as the propor-
tion of correctly predictions.
Chemistry
For molecule design, we adopt eight
metrics, including BLEU, Exact Match (Edwards
et al., 2022b), and Levenshtein distance (Miller
et al., 2009) for string-level consistency; validity
for structural correctness; MACCS (Ratcliff and
Metzener, 1988), RDK (Landrum, 2020), and Mor-
gan (Dash et al., 2023) for structural similarity; and
FCD (Preuer et al., 2018) for distributional sim-
ilarity. For molecule captioning, we use BLEU,
ROUGE, and METEOR to capture surface-level
overlaps, but also introduce an LLM-as-a-Judge
score (1–10 scale), with an emphasis on scientific
accuracy, while also considering completeness and
clarity. For reaction prediction, we follow the Top-1
Accuracy metric and improve robustness by canon-
icalizing both predicted and reference SMILES
using RDKit (Landrum, 2020) before compari-
son. Finally, for name prediction, we apply the
same canonicalization for the iupac2smiles task,
and adopt Exact Match Accuracy for the other
three tasks (smiles2formula, smiles2iupac, and iu-
pac2formula).
4
Evaluation
4.1
Models
In order to provide a comprehensive assessment
of the inductive reasoning capabilities of cost-
optimized, flagship, and reasoning LLMs, we
choose one representative model from each cat-
egory, namely Claude-3.5-Haiku, GPT-4.1, and
Gemini-2.5-Flash. Since our benchmark is inte-
grated into the OpenCompass framework, it can
be easily evaluated on any other LLM. To ensure
consistency and encourage output diversity during
repeated sampling, we set the temperature at 1.0 for
all experiments. For Gemini-2.5-Flash, we retain
its default “thinking” configuration.
4.2
Inference Strategies
We evaluate SIRBench-V1 on four commonly used
inference strategies for inductive reasoning as il-
lustrated in figure 3. Explicit inductive reasoning
serves as a baseline for advanced methods like self-
consistency and hypothesis refinement, where the
LLM needs to explicitly formulate and apply the
hypotheses.
Implicit Inductive Reasoning.
We provide the
LLM with ICL examples and ask the LLM to pro-
vide the final answer directly without explicitly
stating the induced rules. This approach is the most
straightforward way to perform inductive reason-
ing.
Explicit Inductive Reasoning.
We prompt the
LLM to formulate a hypothesis based on the ICL
examples. Then, we let the LLM apply the hypoth-
esis to the given target question to obtain the final
answer. This approach forces the LLM to perform
the inductive reasoning process explicitly.
Self-Consistency.
For self-consistency (Wang
et al., 2022), we sample multiple hypotheses (we
use n = 5) from the LLM and ask it to apply each
of them to the target question, obtaining a corre-
sponding answer from each hypothesis. A final
answer is selected using majority voting performed
by the LLM itself via prompting (see appendix C).
Hypothesis Refinement.
The hypothesis refine-
ment method (Qiu et al., 2023) follows a three-
stage iterative process: hypothesis generation, se-
lection, and refinement.
Initially, we sample multiple hypotheses (n = 5)
based on the ICL examples, then evaluate them
using one of the two approaches: (1) for code-
executable tasks, we translate them into Python
functions and execute them following Qiu et al.
(2023), or (2) otherwise, we have the LLM apply
each hypothesis directly. A task-specific evaluator
scores each hypothesis’s output.
Next, we generate a new set of hypotheses (n =
5) by prompting (see appendix C for prompt) the
LLM to refine the highest-scoring hypothesis based
on feedback.
We repeat this select-and-refine loop up to t = 3
iterations, stopping early if the hypothesis achieves
a perfect score on ICL examples or performance
degradation is detected. We added the early stop-
ping mechanism for performance degradation to
prevent weaker models from degrading rule quality.
Finally, we apply the best resulting hypothesis
to the target question to produce the answer.
5
Results and Analysis
5.1
Main Results
Table 2 reveals consistently low performance
across most tasks, highlighting the limitations of
current LLMs in scientific inductive reasoning
tasks beyond mathematical equations.
Among
the evaluated models, Gemini-2.5-Flash demon-
strates superior performance in computationally
intensive tasks while exhibiting comparable results
to other models in conceptually oriented tasks such
as Molecule Caption. Additionally, larger flagship
models perform better than cost-optimized models.
We observe that LLMs struggle with explicit
inductive reasoning (i.e., proposing effective rules
and applying them to novel inputs), as shown by the
performance drop from implicit to explicit induc-
tive reasoning. Self-consistency helps alleviate this
shortcoming by sampling multiple diverse reason-
ing paths and marginalizing across them, thereby
enhancing the robustness of the explicit inductive
reasoning process. The hypothesis refinement strat-
egy further improves the performance, as it selects
the best rule from multiple sampled hypothesis and
revises the rule at each iteration. However, we find
that the advantage of hypothesis refinement over
implicit inductive reasoning varies inconsistently
across tasks and models.
To validate our findings across more LLMs, we
evaluated additional open-source models under
implicit inductive reasoning, as shown in Table
3. Deepseek-V3-0324 performs comparably with
GPT-4.1 across most tasks, while Qwen3-8B with
thinking generates extremely long chain-of-thought
reasoning for biology tasks, often exceeding its rec-
ommended 32K max output length without com-
pleting the reasoning process, demonstrating that
long chain-of-thought is not effective on the biol-
ogy tasks. These results reinforce our findings on
the fundamental limitation of current LLMs in sci-
entific inductive reasoning. Additionally, current
inductive reasoning methods remain inadequate for
scientific inductive reasoning tasks beyond mathe-
matical equations.
5.2
Effect of Length
Being able to perform inductive reasoning on a long
context is fundamental. We evaluated the LLMs
on DNA transformation and DNA translation tasks
In-Context Examples
Input: ... Output: ...
Input: ... Output: ...
...
Input: ... Output: ...
Hypothesis
Rule 1: ...
Rule 2: ...
...
Rule n: ...
In-Context
Examples
input
output
output
apply to
input
Hypothesis
...
Hypothesis
output
...
final
iteration
apply to
in-context
examples
output
apply to
input
majority
voting
select
best
hypothesis
refine based on
feedback
yes
no
apply best
hypothesis
to input
1
2
3
4
Implicit Inductive Reasoning
Explicit Inductive Reasoning
Self-Consistency
Hypothesis Refinement
In-Context
Examples
In-Context
Examples
output
output
In-Context
Examples
Hypothesis
input
Hypothesis
output
Hypothesis
Hypothesis
output
Figure 3: Comparison of four inference strategies: (1) Implicit induction - directly providing output; (2) Explicit
induction - formulating clear hypotheses explicitly; (3) Self-consistency - using multiple reasoning paths to reach
consensus; and (4) Hypothesis refinement - iteratively improving hypothesis on feedback.
Models
Biology
Chemistry
Avg.
DNA
DNA Table
DNA
Molecule
Molecule
Reaction
Name
Translation
Inference
Transformation
Design
Caption
Prediction
Prediction
Implicit Inductive Reasoning
Claude-3.5-Haiku
5.47
10.23
27.28
62.00
67.70
44.44
3.57
31.53
GPT-4.1
5.71
12.73
31.37
75.00
66.30
22.22
13.51
32.41
Gemini-2.5-Flash
11.72
32.06
30.42
85.00
63.30
54.17
30.00
43.81
Explicit Inductive Reasoning
Claude-3.5-Haiku
5.85
9.72
26.05
64.00
54.00
19.23
2.81
25.95
GPT-4.1
5.31
12.13
28.73
69.00
59.00
17.86
6.09
28.30
Gemini-2.5-Flash
9.14
23.34
28.66
77.00
67.70
34.78
30.00
38.66
Self-Consistency (Wang et al., 2022)
Claude-3.5-Haiku
5.11
10.00
26.34
66.00
69.70
20.83
0.83
28.40
GPT-4.1
5.96
13.19
30.81
72.00
65.70
25.00
9.58
31.75
Gemini-2.5-Flash
9.15
24.84
30.4
80.00
70.00
39.29
40.13
41.97
Hypothesis Refinement (Qiu et al., 2023)
Claude-3.5-Haiku
5.79
10.02
30.05
73.00
72.70
28.00
1.88
31.63
GPT-4.1
5.62
14.57
35.56
67.00
66.30
32.14
11.59
33.25
Gemini-2.5-Flash
10.60
28.55
30.37
72.00
65.70
32.14
34.07
39.06
Table 2: Performance of Claude-3.5-Haiku, GPT-4.1, and Gemini-2.5-Flash on SIRBench-V1 using four inference
strategies. All scores report accuracy (%), except Molecule Design (Morgan similarity rescaled to 0-100). Molecule
Caption reports the accuracy from LLM-as-judge. Synthetic versions were used for DNA Translation and DNA
Table Inference tasks.
with varying sequence length configurations. The
DNA transformation task demands the comprehen-
sion of the entire sequence (e.g., identifying re-
versals), while the DNA translation task requires
observation of local patterns. As shown in figure 4,
for DNA transformation, we found that the LLMs
achieve relatively strong performance on shorter
sequences but exhibits a significant performance
decline as sequence length increases. For DNA
translation, GPT-4.1 and Claude-3.5-Haiku show
minimal decrease with longer sequences only be-
cause they struggle with this task at shorter lengths.
The results indicate that current LLMs are effec-
tive at inducing pattern only within limited input
Models
Biology
Chemistry
Avg.
DNA
DNA Table
DNA
Molecule
Molecule
Reaction
Name
Translation
Inference
Transformation
Design
Caption
Prediction
Prediction
Qwen3-8B (with thinking)
0.20
4.88
3.24
59.00
52.67
3.33
1.67
17.00
Qwen3-8B (without thinking)
6.30
7.06
27.19
50.00
49.67
0.00
0.00
20.03
Deepseek-V3-0324
7.21
12.24
28.81
75.00
64.00
30.00
14.17
33.06
Table 3: Performance of Qwen3-8B and Deepseek-V3-0324 on SIRBench-V1 under the Implicit Inductive
Reasoning setting. Scores are accuracy (%) except Molecule Design (Morgan similarity, 0-100 scale) and Molecule
Caption (LLM-as-judge accuracy). Synthetic versions used for DNA tasks.
16
32
64
128
256
512
Sequence Length
0
20
40
60
80
100
Accuracy (%)
DNA Transformation
GPT-4.1
Claude-3.5-Haiku
Gemini-2.5-Flash
200
300
400
500
600
700
Sequence Length
0.0
2.5
5.0
7.5
10.0
12.5
15.0
17.5
20.0
Accuracy (%)
DNA Translation
GPT-4.1
Claude-3.5-Haiku
Gemini-2.5-Flash
Figure 4: Effect of Sequence Length in Transformation
and DNA Translation tasks.
2
4
8
16
32
64
128
256
Number of Shots
0
20
40
60
80
100
Accuracy (%)
Reaction Prediction
GPT-4.1
Claude-3.5-Haiku
Gemini-2.5-Flash
2
4
8
16
32
64
128
256
Number of Shots
0
20
40
60
80
100
Accuracy (%)
DNA Transform
GPT-4.1
Claude-3.5-Haiku
Gemini-2.5-Flash
Figure 5: Effect of Number of Shots in Reaction Predic-
tion and DNA Transformation tasks.
lengths. This limitation reflects the broader chal-
lenge of developing robust inductive reasoning ca-
pabilities that can handle long context.
5.3
Effect of Number of Shots
We examine the effect of the number of shots on
accuracy in one representative task each from the
domains of biology and chemistry. Figure 5 shows
that increasing the number of shots has varying
effects on different models. In reaction prediction
task, GPT-4.1 exhibits an upward trend, showing
that it benefits from additional shots. In contrast,
Claude-3.5-Haiku shows performance degradation,
likely due to limitations in its context processing ca-
pability. Gemini-2.5-Flash does not show any clear
upward or downward trend with as shot increases.
For DNA transformation, all the models exhibit
consistent performance, implying that additional
examples provide limited benefit.
5.4
Many-Short-Shot vs. Long-Few-Shot
Unlike previous studies that only explore increas-
ing the number of relatively short examples (Yan
Model
Many-Short-Shot
Few-Long-Short
Claude-3.5-Haiku
31.19
15.63
GPT-4.1
36.94
25.64
Gemini-2.5-Flash
35.14
24.47
Table 4: Performance comparison in many-short-shot
versus long-few-shot settings on the DNA Translation
task. The many-short-shot setting uses 64 shots with
sequence length 100, while the few-long-shot setting
uses 4 shots with sequence length 1600.
et al., 2025), we also explore the inductive reason-
ing capabilities of LLMs on few long examples.
The latter paradigm adheres more to real-world
applications, where it is difficult to obtain numer-
ous examples for long input tasks. Our compara-
tive analysis in table 4 across both scenarios while
maintaining the total input length demonstrates that
LLMs perform worse with few long examples. This
finding highlights a critical area for the advance-
ment of LLM inductive reasoning ability.
5.5
Task Difficulty Analysis
Reasoning ability is not only reflected in overall
accuracy but also in performance across difficulty
levels. We analyzed two representative tasks, one
from biology and one from chemistry, under Im-
plicit Inductive Reasoning. Test instances were
categorized into Easy, Medium, Hard, with 100
samples each. The DNA Translation samples were
grouped by input sequence length, with ranges of
100-300 for Easy, 300-500 for Medium, and 500-
700 for Hard, while the Molecule Design samples
were classified by molecular complexity using RD-
Kit based on structural features. As shown in both
Table 5 and Table 6, model performance exhibits
a clear downward trend from easy to hard sam-
ples, suggesting that difficulty-based categorization
offers a straightforward way to assess robustness
while also enabling a more fine-grained evaluation
of reasoning abilities across domains.
Difficulty Level
GPT-4.1
Claude
Gemini
Easy
accuracy
6.16
5.77
12.6
Medium
accuracy
5.56
4.85
7.98
Hard
accuracy
5.27
4.91
5.9
Table 5: Performance of LLMs on the DNA Translation
task by difficulty level.
Difficulty Level
GPT-4.1
Claude
Gemini
Easy
validity
0.94
0.67
0.94
morgan_sims
0.67
0.39
0.89
fcd (↓)
2.66
9.82
1.15
Medium
validity
0.92
0.64
0.88
morgan_sims
0.55
0.29
0.78
fcd (↓)
7.77
21.08
4.73
Hard
validity
0.74
0.59
0.41
morgan_sims
0.46
0.21
0.6
fcd (↓)
19.85
29.86
22.24
Table 6: Performance of LLMs on the Molecule Design
task by difficulty level.
5.6
Counterfactual Evaluation
Model
DNA Translation
DNA Table Inf.
Aut.
Syn. (∆)
Aut.
Syn. (∆)
Claude-3.5-Haiku
21.95
5.47 (−16.48)
68.50
10.23 (−58.27)
GPT-4.1
21.24
5.71 (−15.53)
81.84
12.73 (−69.11)
Gemini-2.5-Flash
30.64
11.72 (−18.92)
87.09
32.06 (−55.03)
Table 7: Performance comparison between authentic
and synthetic versions of chosen tasks. ∆represents the
performance gap, calculated as the score on synthetic
tasks minus the score on authentic tasks.
To investigate whether LLMs perform true in-
ductive reasoning, we compare their performance
on original and synthetic settings of DNA Trans-
lation and Table Inference. As illustrated in Table
7, all three models suffer a dramatic performance
decline in synthetic tasks, suggesting that higher
performance in authentic versions stems from the
memorization of standard mappings rather than
genuine inductive reasoning capabilities.
Among the evaluated models, Gemini-2.5-Flash
maintains the highest performance on both origi-
nal and synthetic versions of the tasks. This sug-
gests that reasoning models have better capability
to identify rules beyond the constraints of memo-
rized knowledge than non-reasoning models. How-
ever, its absolute score in synthetic tasks remains
low. Overall, these results indicate that current
LLMs are fundamentally limited in their ability to
perform genuine inductive reasoning. In the con-
text of scientific discovery, LLMs need to recog-
nize novel patterns rather than just retrieve existing
knowledge. Therefore, our findings highlight the
need to distinguish inductive reasoning from re-
trieval to advance the ability of LLMs for scientific
discovery.
6
Conclusion
In this paper, we introduce SIRBench-V1, a bench-
mark that includes Chemistry and Biology subtasks,
to evaluate the scientific inductive reasoning of
LLMs on tasks beyond mathematical equation. We
evaluated different LLMs using commonly used
reasoning strategies on our proposed benchmark.
We found that current leading LLMs obtain low
performance on our benchmark and that using so-
phisticated strategies provide minimal benefits. Ad-
ditionally, we point out limitations of LLMs in
performing inductive reasoning on longer context
lengths, few-long-shot settings, and counterfactual
rules. The experimental results will provide valu-
able insights for future studies on LLM-driven sci-
entific discovery.
7
Limitations
In this work, we take the first step toward incor-
porating scientific scenarios into the design of the
LLM-Based Inductive Reasoning Beyond Equa-
tions and introduce a new dataset for evaluation.
However, the SIRBench-V1 is limited to chemistry
and biology domains. As a next step, we plan to
invite domain experts in these areas to review and
refine both our benchmark and evaluation protocol.
In the future, we aim to expand the benchmark to
cover a broader range of scientific disciplines.
Acknowledgement
This work is supported by the AI9Stars community,
the Fundamental Research Funds for the Central
Universities, and the Research Funds of Beijing
Language and Culture University (25YCX118).
We also thank the anonymous reviewers in ACL
Rolling Review May 2025. Their insightful feed-
back and suggestions are significant to refining and
improving our work.
References
Dimitrios Angelis, Filippos Sofos, and Theodoros E.
Karakasidis. 2023. Artificial intelligence in phys-
ical sciences: Symbolic regression trends and per-
spectives. Archives of Computational Methods in
Engineering, pages 1 – 21.
Eryk Banatt, Jonathan Cheng, Skanda Vaidyanath,
and Tiffany Hwu. 2024.
Wilt:
A multi-turn,
memorization-robust inductive logic benchmark for
llms. ArXiv, abs/2410.10998.
Andrew M. Bean, Simi Hellsten, Harry Mayne, Jabez
Magomere, Ethan A. Chi, Ryan Chi, Scott A. Hale,
and Hannah Rose Kirk. 2024. Lingoly: A bench-
mark of olympiad-level linguistic reasoning puz-
zles in low-resource and extinct languages. ArXiv,
abs/2406.06196.
Chen Bowen, Rune Sætre, and Yusuke Miyao. 2024.
A comprehensive evaluation of inductive reasoning
capabilities and problem solving in large language
models. In Findings.
Chengkun Cai, Xu Zhao, Haoliang Liu, Zhongyu Jiang,
Tianfang Zhang, Zongkai Wu, Jenq-Neng Hwang,
and Lei Li. 2024.
The role of deductive and in-
ductive reasoning in large language models. ArXiv,
abs/2410.02892.
Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan
Xiao, Pengcheng Yin, Sushant Prakash, Charles Sut-
ton, Xuezhi Wang, and Denny Zhou. 2023. Universal
self-consistency for large language model generation.
ArXiv, abs/2311.17311.
François Chollet. 2019. On the measure of intelligence.
Preprint, arXiv:1911.01547.
Debadutta Dash, Rahul Thapa, J. Banda, Akshay
Swaminathan, Morgan Cheatham, Mehr Kashyap,
Nikesh Kotecha, Jonathan H. Chen, Saurabh Gom-
bar, Lance Downing, Rachel A. Pedreira, Ethan
Goh, Angel Arnaout, Garret K. Morris, H Magon,
Matthew P. Lungren, Eric Horvitz, and Nigam H.
Shah. 2023. Evaluation of gpt-3.5 and gpt-4 for sup-
porting real-world information needs in healthcare
delivery. ArXiv, abs/2304.13714.
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang,
Jun-Mei Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu,
Shirong Ma, Peiyi Wang, Xiaoling Bi, Xiaokang
Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou,
Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 179 oth-
ers. 2025. Deepseek-r1: Incentivizing reasoning ca-
pability in llms via reinforcement learning. ArXiv,
abs/2501.12948.
Mengge Du,
Yuntian Chen,
Zhongzheng Wang,
Longfeng Nie, and Dong juan Zhang. 2024. Large
language models for automatic equation discovery of
nonlinear dynamics. Physics of Fluids.
Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke,
Kyunghyun Cho, and Heng Ji. 2022a. Translation
between molecules and natural language. In Proceed-
ings of the 2022 Conference on Empirical Methods in
Natural Language Processing, pages 375–413, Abu
Dhabi, United Arab Emirates. Association for Com-
putational Linguistics.
Carl N. Edwards, T. Lai, Kevin Ros, Garrett Honke, and
Heng Ji. 2022b. Translation between molecules and
natural language. ArXiv, abs/2204.11817.
You-Le Fang, Dong-Shan Jian, Xiang Li, and Yan-Qing
Ma. 2025. Ai-newton: A concept-driven physical law
discovery system without prior physical knowledge.
Preprint, arXiv:2504.01538.
Gaël Gendron, Qiming Bao, M. Witbrock, and Gillian
Dobbie. 2023. Large language models are not strong
abstract reasoners. In International Joint Conference
on Artificial Intelligence.
William Gilpin. 2021.
Chaos as an interpretable
benchmark for forecasting and data-driven modelling.
ArXiv, abs/2110.05266.
Satyam Goyal and Soham Dan. 2025.
Iolbench:
Benchmarking llms on linguistic reasoning. ArXiv,
abs/2501.04249.
Katarína Grešová, Vlastimil Martinek, David ˇCechák,
Petr Šimeˇcek, and Panagiotis Alexiou. 2023. Ge-
nomic benchmarks: a collection of datasets for ge-
nomic sequence classification. BMC Genomic Data,
24(1):25.
Taicheng Guo, Kehan Guo, Bozhao Nan, Zhengwen
Liang, Zhichun Guo, N. Chawla, O. Wiest, and Xian-
gliang Zhang. 2023. What can large language mod-
els do in chemistry? a comprehensive benchmark
on eight tasks. In Neural Information Processing
Systems.
Brett K. Hayes, Evan Heit, and Haruka Swendsen. 2010.
Inductive reasoning. Wiley Interdisciplinary Reviews:
Cognitive Science, 1(2):278–292.
Wenyue Hua, Tyler Wong, Sun Fei, Liangming Pan,
Adam Jardine, and William Yang Wang. 2025. In-
ductionbench: Llms fail in the simplest complexity
class. ArXiv, abs/2502.15823.
Ross Irwin, Spyridon Dimitriadis, Jiazhen He, and Es-
ben Jannik Bjerrum. 2021.
Chemformer: a pre-
trained transformer for computational chemistry. Ma-
chine Learning: Science and Technology, 3.
Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindu-
lyte, Jia He, Siqian He, Qingliang Li, Benjamin A.
Shoemaker, Paul A. Thiessen, Bo Yu, Leonid Y. Za-
slavsky, Jian Zhang, and Evan E. Bolton. 2018. Pub-
chem 2019 update: improved access to chemical data.
Nucleic Acids Research, 47:D1102 – D1109.
Shrinidhi Kumbhar, Venkatesh Mishra, Kevin Coutinho,
Divij Handa, Ashif Iquebal, and Chitta Baral. 2025.
Hypothesis generation for materials discovery and
design using goal-driven and constraint-guided llm
agents. ArXiv, abs/2501.13299.
Greg Landrum. 2020. Rdkit: Open-source cheminfor-
matics. http://www.rdkit.org. [Online; accessed
14-May-2025].
Jiachun Li, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang
Liu, and Jun Zhao. 2024. Mirage: Evaluating and
explaining inductive reasoning process in language
models. ArXiv, abs/2410.09542.
Huaqing Liu, Shuxian Zhou, Peiyi Chen, Jiahui Liu,
Ku-Geng Huo, and Lanqing Han. 2024. Exploring
genomic large language models: Bridging the gap be-
tween natural language and gene sequences. bioRxiv.
Yujie Liu, Zonglin Yang, Tong Xie, Jinjie Ni, Ben
Gao, Yuqiang Li, Shixiang Tang, Wanli Ouyang,
Erik Cambria, and Dongzhan Zhou. 2025. Research-
bench: Benchmarking llms in scientific discovery
via inspiration-based task decomposition.
ArXiv,
abs/2503.21248.
Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Tat-
sunori Taniai, and Y. Ushiku. 2022. Rethinking sym-
bolic regression datasets and benchmarks for scien-
tific discovery. ArXiv, abs/2206.10540.
Frederic P. Miller, Agnes F. Vandome, and John
McBrewster. 2009. Levenshtein distance: Informa-
tion theory, computer science, string (computer sci-
ence), string metric, damerau?levenshtein distance,
spell checker, hamming distance.
John D. Norton. 2003. A little survey of induction.
OpenAI, :, Aaron Jaech, Adam Kalai, and *et al.*. 2024.
Openai o1 system card. Preprint, arXiv:2412.16720.
Kristina Preuer, Philipp Renz, Thomas Unterthiner,
Sepp Hochreiter, and Günter Klambauer. 2018.
Fréchet chemnet distance: A metric for generative
models for molecules in drug discovery. Journal
of chemical information and modeling, 58 9:1736–
1741.
Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar,
Valentina Pyatkin, Chandra Bhagavatula, Bailin
Wang, Yoon Kim, Yejin Choi, Nouha Dziri, and Xi-
ang Ren. 2023. Phenomenal yet puzzling: Testing
inductive reasoning capabilities of language models
with hypothesis refinement. ArXiv, abs/2310.08559.
John W. Ratcliff and David E. Metzener. 1988. Pattern
matching: The gestalt approach. Dr. Dobb’s Journal,
13(7):46.
Eduardo Sánchez, Belen Alastruey, Christophe Ropers,
Pontus Stenetorp, Mikel Artetxe, and Marta Ruiz
Costa-jussà. 2024.
Linguini: A benchmark for
language-agnostic linguistic reasoning.
ArXiv,
abs/2409.12126.
Parshin Shojaee, Kazem Meidani, Shashank Gupta,
Amir Barati Farimani, and Chandan K. Reddy.
2024.
Llm-sr: Scientific equation discovery via
programming with large language models. ArXiv,
abs/2404.18400.
Parshin Shojaee, Ngoc-Hieu Nguyen, Kazem Meidani,
Amir Barati Farimani, Khoa D. Doan, and Chan-
dan K. Reddy. 2025. Llm-srbench: A new bench-
mark for scientific equation discovery with large lan-
guage models.
Xiaojuan Tang, Jiaqi Li, Yitao Liang, Song chun Zhu,
Muhan Zhang, and Zilong Zheng. 2024. Mars: Sit-
uated inductive reasoning in an open-world environ-
ment. ArXiv, abs/2410.08126.
Guanjie Wang, Erpeng Wang, Zefeng Li, Jian Zhou,
and Zhimei Sun. 2024. Exploring the mathematic
equations behind the materials science data using
interpretable symbolic regression. Interdisciplinary
Materials.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2022. Self-consistency improves chain
of thought reasoning in language models.
arXiv
preprint arXiv:2203.11171.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed H. Chi, F. Xia, Quoc Le, and Denny Zhou.
2022. Chain of thought prompting elicits reasoning
in large language models. ArXiv, abs/2201.11903.
Andreas M. Westerlund, Lakshman Saigiridharan, and
Samuel Genheden. 2024a. Constrained synthesis
planning with disconnection-aware transformer and
multi-objective search.
ChemRxiv.
Preprint, not
peer-reviewed.
Annie M. Westerlund, Siva Manohar Koki, Supriya Kan-
charla, Alessandro Tibo, Lakshidaa Saigiridharan,
Mikhail Kabeshov, Rocío Mercado, and Samuel Gen-
heden. 2024b. Do chemformers dream of organic
matter? evaluating a transformer model for multistep
retrosynthesis. Journal of chemical information and
modeling.
Rujie Wu, Xiaojian Ma, Qing Li, Wei Wang, Zhen-
liang Zhang, Song-Chun Zhu, and Yizhou Wang.
2023. Bongard-openworld: Few-shot reasoning for
free-form visual concepts in the real world. ArXiv,
abs/2310.10207.
Yongjun Xu, Qi Wang, Zhulin An, Fei Wang, Libo
Zhang, Yanjun Wu, Fengliang Dong, Cheng-Wei Qiu,
Xin Liu, Junjun Qiu, Keqin Hua, Wentao Su, Huiyu
Xu, Yong Han, Xinya Cao, En ju Liu, Chenguang Fu,
Zhigang Yin, Miao Liu, and 28 others. 2021. Artifi-
cial intelligence: A powerful paradigm for scientific
research. The Innovation, 2.
Kai Yan, Zhan Ling, Kang Liu, Yifan Yang, Ting-Han
Fan, Lingfeng Shen, Zhengyin Du, and Jiecao Chen.
2025. Mir-bench: Benchmarking llm’s long-context
intelligence via many-shot in-context inductive rea-
soning. ArXiv, abs/2502.09933.
Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, E. Cam-
bria, Xiaodong Liu, Jianfeng Gao, and Furu Wei.
2022.
Language models as inductive reasoners.
ArXiv, abs/2212.10923.
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Sou-
janya Poria, and E. Cambria. 2023. Large language
models for automated open-domain scientific hy-
potheses discovery. In Annual Meeting of the As-
sociation for Computational Linguistics.
Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie,
Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik
Cambria, and Dongzhan Zhou. 2024.
Moose-
chem: Large language models for rediscovering
unseen chemistry scientific hypotheses.
ArXiv,
abs/2410.07076.
A
Additional Details on SIRBench-V1
A.1
Dataset Configurations
We curate 7 tasks in total. Considering that multi-
ple metrics provide robust assessment, for chem-
istry tasks, we evaluate Molecule Design, Molecule
Captioning and Reaction Prediction with 30 exam-
ples each. For Name Prediction, we sample 30 ex-
amples for each type of transformation (including
smiles2formula, smiles2iupac, iupac2smiles, and
iupac2formula). Since biology tasks rely solely on
accuracy, we increase the number of examples to
100 for each biology task to ensure more stable eval-
uation, including DNA Translation, DNA Transla-
tion (Synthetic), DNA Table Inference, DNA Table
Inference (Synthetic) and DNA Transformation.
All experiments are conducted under 5-shot set-
ting, unless otherwise stated. However, since our
benchmark has various configurations and supports
synthetic data generation for some subtasks, the
actual number of items can be configurable.
In our main results, we use the following con-
figurations. For DNA Translation, we uniformly
sample across sequence length 200 to 450 since the
effective DNA sequences in the dataset starts from
length 200. While data are available for longer se-
quences, only sample until 450 because they are too
challenging for most models. For DNA Transfor-
mation, we set the sequence length to 300, which
is a reasonably challenging level.
A.2
Examples of Transformation Types in
DNA Transformation Task
The transformation types include: 1) Sequence re-
versal: reversing the order of the entire sequence
(e.g., AGCT →TCGA); 2) Complementation:
replacing each base according to a substitution
rule (e.g., AGCT →TCGA, using A↔T, C↔G
or a randomized complement map); 3) Reverse
complementation: performing complementation
followed by reversal (e.g., AGCT →AGCT); 4)
Segmented transformation: transforming fixed-
length segments after a fixed stride (e.g., AGCT-
TAGCGT →AGCTTGACGT, reversing 2 bases
every 3 bases); 5) Fixed base mutation: replac-
ing specific bases with new ones (e.g., AGCT →
GGTT, where A→G and C→T).
Task
Model
Initial
Final
Test
DNA Translation
Claude-3.5-Haiku
3.87
6.52
5.79
GPT-4.1
9.15
11.37
5.62
Gemini-2.5-Flash
24.37
30.57
10.60
Molecule Design
Claude-3.5-Haiku
0.67
0.71
0.73
GPT-4.1
0.77
0.82
0.67
Gemini-2.5-Flash
0.92
0.97
0.72
Table 8: Comparison of initial and final hypothesis
quality scores on in-context examples (ICE) alongside
corresponding test performance of final hypothesis for
various models across DNA Translation (Synth) and
Molecule Design tasks. Morgan similarity (scale of 0 to
1) is reported for the Molecule design task.
B
Explicit Inductive Reasoning Analysis
B.1
Hypothesis Quality and Refinement
In order to provide a more thorough analysis, we
show the computed evaluation score of the gener-
ated hypotheses on ICL examples during hypoth-
esis refinement in table 8. For the initial evalu-
ation scores, we report the average score of the
best hypothesis generated by the model prior to
any refinement. This also serves as an approximate
upper bound of the evaluation scores for hypothe-
ses generated by explicit inductive reasoning and
self-consistency. We notice that for DNA Transla-
tion task, these rules obtained low accuracy on ICL
examples. The scores increase at the end of the re-
finement process, but still remain low. This shows
the limitation in inductive reasoning capability of
current LLMs. Furthermore, although hypothesis
refinement enhances the consistency of rules with
ICL examples, we observe that in some cases there
remains a substantial gap between performance on
ICL examples and generalization to test examples.
Future work may focus on enhancing the hypothe-
sis proposal capabilities of LLMs to improve both
accuracy on ICL examples and generalization to
novel scenarios.
We also provided examples of hypothesis gener-
ated by GPT-4.1 which can be found table 9 and
10.
B.2
Misalignment of Advanced Reasoning
Strategies
As shown in Table 2, the performance of LLMs
does not consistently improve with the applica-
tion of more fine-grained reasoning strategies. In
some cases, advanced strategies even reduce per-
formance. To investigate this phenomenon, we
analyzed the recorded reasoning traces, focusing
Task
Hypothesis
DNA Translation
Rule 1: Read the DNA from 5’->3’ in consecutive codons (3 bp) starting at
the first base. Rule 2: Each codon uniquely maps to one amino acid per the
in-context examples. Rule 3: Inferred codon table: ATG:M, ATT:M, ATA:N,
ATC:Q, GTT:S, GTC:V, GTA:C, GTG:V, TTT:F, TTC:F, TTA:W, TTG:W, CTT:M,
CTC:D, CTA:R, CTG:G, TCT:S, TCC:S, TCA:H, TCG:S, CCT:P, CCC:N, CCA:N,
CCG:Y, ACT:P, ACC:N, ACA:N, ACG:W, GCT:A, GCC:N, GCA:N, GCG:F,
TAT:Y, TAC:P, TAA:*, TAG:*, CAT:Q, CAC:Y, CAA:D, CAG:V, AAT:F, AAC:G,
AAA:K, AAG:N, GAT:D, GAC:G, GAA:E, GAG:G, TGT:P, TGC:L, TGA:*,
TGG:W, CGT:R, CGC:C, CGA:M, CGG:H, AGT:S, AGC:F, AGA:G, AGG:S,
GGT:G, GGC:W, GGA:V, GGG:Q. Rule 4: Translate sequentially until a stop
(TAA, TAG, TGA →“*”). Rule 5: Output one-letter amino acid string. Rule 6:
Stop at and include first “*”. Rule 7: Unseen codons should not be assumed.
DNA Table Infer-
ence
Rule 1: DNA–protein pairs align codons (3 bp) to amino acids. Rule 2: Segment
DNA into triplets from 5’ and align to protein until “*” or end. Rule 3: Codons
aligned to “*” are stop codons. Rule 4: First-codon→‘M’ pairs are start codons.
Rule 5: Aggregate across examples; record all observed mappings. Rule 6: Include
only codons seen. Rule 7: Build forward_table from all mappings, excluding stops.
Rule 8: start_codons = all first codons mapped to ‘M’. Rule 9: stop_codons = all
codons aligned to ‘*’. Rule 10: Amino acids are single-letter codes including “*.”
DNA Transform
Rule 1: Split input into 7-nt segments from 5’; last segment may be shorter. Rule
2: Reverse each 7-nt segment. Rule 3: Concatenate reversed segments to form
output.
Table 9: Hypotheses Generated by GPT-4.1 for the DNA tasks
on chemistry-related tasks. In the molecule caption-
ing task, Self-Consistency occasionally produced
lower scores than the Implicit Inductive Reasoning
baseline. While this strategy generates multiple
hypotheses and applies them to derive answers, the
resulting outputs were often fragmented or overly
technical. For example, instead of producing full
descriptive captions, as required by the task, the
model frequently produced structural abbreviations
or linkage names such as beta-D-Galp (1→4) beta-
D-GlcpNAc (which are often part of the rule repre-
sentations extracted by the model), omitting infor-
mation about overall structure or functional roles.
This indicates a misalignment between rule-based
derivations and the task’s requirement for holistic
descriptions. In the reaction prediction task, Hy-
pothesis Refinement also failed to deliver consis-
tent improvements. Our analysis suggests that this
was due to refined rules were not always effectively
applied to the examples, and the selection of the
"best" hypothesis depended solely on an automatic
evaluator of prediction accuracy, which does not
necessarily capture scientific plausibility.
Overall, these results suggest that the limitations
of advanced reasoning strategies stem less from
insufficient domain knowledge in base models than
from structural mismatches between the strategies
and the nuanced demands of the tasks.
C
Experiment Details
C.1
Implementation Details
We run our experiments using API-based closed-
source models,
specifically claude-3-5-haiku-
20241022, gpt-4.1-2025-04-14, and gemini-2.5-
flash-preview-04-17. We implement our inference
strategies in the OpenCompass framework. This
allows us to perform inference in parallel at high
rates. The explicit inductive reasoning is imple-
mented via one-pass decoding, generating the hy-
pothesis and applying it to the test example in one
API call. Self-consistency is implemented by sam-
pling multiple times using the same process as ex-
plicit inductive reasoning. For hypothesis refine-
ment, we sample the hypothesis using the same
general prompt in all tasks, except for DNA Trans-
lation where we ask the model to provide the spe-
cific codon-to-amino acid so that the hypothesis
can be properly refined. For tasks in which the
Task
Hypothesis
Molecule Design
Rule 1: Identify required functional groups (e.g., diamine, aldehyde, etc.). Rule 2:
Map biological role to known scaffolds (e.g., antineoplastic →stilbene). Rule 3:
Choose core heterocycle per “derives from” (e.g., triazine). Rule 4: Decorate core
with substituents to satisfy function and activity. Rule 5: Respect stereochemistry
(e.g., [C@H] per natural enantiomer). Rule 6: For natural products, replicate
known SMILES closely. Rule 7: Attach alkyl/aryl groups at correct positions.
Rule 8: Output valid SMILES with rings, heteroatoms, charges.
Molecule Caption
Rule 1: Identify core ergot alkaloid and name (e.g., ergotaman). Rule 2: Describe
substituents and positions (e.g., 12’-hydroxy). Rule 3: Note stereochemistry if
differentiating isomers. Rule 4: Mention salts/derivatives (e.g., methanesulfonic
acid salt). Rule 5: State biological origin or role if recognizable. Rule 6: Use
“derives from” for parent relationships. Rule 7: Note naming conventions or
historical context if relevant. Rule 8: Separate distinct features into clear sentences.
Reaction
Predic-
tion
Rule 1: Target N-heterocycle fused to benzene undergoes nucleophilic attack.
Rule 2: Organometallics ([Li]CCCC, [H–]) add to carbonyl or halide. Rule 3:
Bases ([NH+
4 ], [OH–]) deprotonate or hydrolyze esters →amides/acids. Rule 4:
Leaving groups replaced by nucleophiles forming C–X or C–C. Rule 5: Ester +
nucleophile -> amide/ether. Rule 6: Most nucleophilic reagent reacts with most
electrophilic center. Rule 7: Ignore spectator ions in final product. Rule 8: Grignard
addition -> alcohol at addition site. Rule 9: Reductions ([H–]) convert carbonyls
→alcohols/amines. Rule 10: On heteroaryl halide, nucleophile replaces halide on
ring. Rule 11: Ethers/amides attach to aromatic systems via substitution/acylation.
Rule 12: With both esters and amines, amide formation is preferred.
Name Prediction
Rule 1: Count all C atoms (including branches/rings). Rule 2: Count H via implicit
valence rules. Rule 3: Count N, O, S, Si, halogens from SMILES. Rule 4: Include
implicit Hs in aromatic rings per standard. Rule 5: Integrate substituent atoms
without double-counting. Rule 6: Adjust H count for double/triple bonds. Rule 7:
Write formula as C, H, then others alphabetically. Rule 8: Expand grouped atoms
(e.g., O[Si](C)(C)C). Rule 9: Sum counts; check branching consistency. Rule 10:
Format as [Element][count]... (e.g., C6H6O).
Table 10: Hypotheses Generated by GPT-4.1 for the Chemistry tasks
hypothesis can be translated into Python code, we
prompt an LLM to generate the code. Otherwise,
we prompt the LLM to apply a hypothesis to all
in-context example inputs and do this to all the gen-
erated hypothesis. We used AI assistants to polish
some of the text in this paper.
C.2
Prompts
Molecule Captioning
As discussed in Sec-
tion3.3, molecule captioning is an open-ended gen-
eration task, for which existing evaluations rely
primarily on surface-level matching. To address
this limitation, we design a dedicated prompt with
fine-grained scoring criteria and employ an LLM
to serve as the evaluator.
One-pass Self-Consistency
To reduce the num-
ber of API calls and improve the efficiency of self-
consistency, we design the prompt so that the model
performs both rule induction and application to the
test input within a single invocation.
Universal Majority Voting with Self-Consistency
Given that the outputs of the chemistry and biol-
ogy tasks in SIRBench-V1 are typically long and
semantically complicated, basic majority voting
mechanism often fails to identify a representative
response, thereby diminishing the effectiveness of
self-consistency. To address this, we adopt the uni-
versal self-consistency strategy(Chen et al., 2023),
selecting the most semantically consistent response
to form the final answer.
Hypothesis Refinement
We provide the main
prompts used in the hypothesis refinement process,
including Hypothesis Induction, Hypothesis Appli-
cation, Hypothesis Refinement, and Final Hypothe-
sis Application.
D
Complete Results on Chemistry Tasks
We provide the full results on Chemistry Tasks that
reports all the metrics in table 11, table 13, and
table 12.
Task
Metric
Implicit
Inductive
Reasoning
Explicit
Inductive
Reasoning
Self-
Consistency
Hypothesis
Refinement
Molecule
Design
exact_match
0.17
0.23
0.23
0.27
bleu
0.41
0.36
0.19
0.71
levenshtein (↓)
70.87
84.70
173.47
26.30
validity
0.70
0.77
0.80
0.70
maccs_sims
0.81
0.75
0.84
0.89
rdk_sims
0.81
0.69
0.69
0.76
morgan_sims
0.62
0.64
0.66
0.73
fcd (↓)
12.82
13.87
12.46
13.22
Molecule
Caption
bleu2
0.20
0.22
0.39
0.24
bleu4
0.14
0.15
0.29
0.17
rouge_1
0.33
0.24
0.48
0.40
rouge_2
0.18
0.12
0.29
0.23
rouge_l
0.25
0.19
0.38
0.31
meteor_score
0.39
0.23
0.44
0.42
LLM as judge
67.70
54.00
69.70
72.70
Reaction Prediction
accuracy
44.44
19.23
20.83
28.00
smiles2formula
accuracy
0.00
0.00
0.00
0.00
smiles2iupac
accuracy
0.00
0.00
0.00
0.00
iupac2smiles
accuracy
14.29
4.55
0.00
4.17
iupac2formula
accuracy
0.00
6.67
3.33
3.33
Table 11: Performance of the Claude-3.5-Haiku on Chemistry Tasks
Task
Metric
Implicit
Inductive
Reasoning
Explicit
Inductive
Reasoning
Self-
Consistency
Hypothesis
Refinement
Molecule
Design
exact_match
0.30
0.20
0.20
0.23
bleu
0.75
0.71
0.70
0.75
levenshtein (↓)
25.37
27.93
26.37
24.03
validity
0.87
1.00
0.93
0.93
maccs_sims
0.92
0.87
0.91
0.87
rdk_sims
0.80
0.74
0.82
0.78
morgan_sims
0.75
0.69
0.72
0.67
fcd (↓)
8.16
7.08
7.97
7.43
Molecule
Caption
bleu2
0.42
0.49
0.49
0.20
bleu4
0.32
0.38
0.39
0.15
rouge_1
0.55
0.55
0.57
0.38
rouge_2
0.36
0.38
0.39
0.24
rouge_l
0.44
0.46
0.48
0.31
meteor_score
0.57
0.52
0.54
0.48
LLM as judge
66.30
59.00
65.70
66.30
Reaction Prediction
accuracy
22.22
17.86
25.00
32.14
smiles2formula
accuracy
13.33
6.67
10.00
10.00
smiles2iupac
accuracy
0.00
0.00
0.00
0.00
iupac2smiles
accuracy
17.39
4.35
5.00
13.04
iupac2formula
accuracy
23.33
13.33
23.33
23.33
Table 12: Performance of the GPT-4.1 on Chemistry Tasks
Task
Metric
Implicit
Inductive
Reasoning
Explicit
Inductive
Reasoning
Self-
Consistency
Hypothesis
Refinement
Molecule
Design
exact_match
0.33
0.27
0.27
0.20
bleu
0.73
0.79
0.79
0.76
levenshtein (↓)
27.90
25.27
22.50
26.67
validity
0.80
0.77
0.90
0.73
maccs_sims
0.95
0.94
0.94
0.81
rdk_sims
0.89
0.86
0.87
0.82
morgan_sims
0.85
0.77
0.80
0.72
fcd (↓)
8.19
8.89
6.26
10.56
Molecule
Caption
bleu2
0.49
0.54
0.51
0.42
bleu4
0.38
0.43
0.41
0.33
rouge_1
0.57
0.61
0.61
0.52
rouge_2
0.38
0.42
0.41
0.35
rouge_l
0.47
0.50
0.49
0.43
meteor_score
0.55
0.59
0.59
0.52
LLM as judge
63.30
67.70
70.00
65.70
Reaction Prediction
accuracy
54.17
34.78
39.29
32.14
smiles2formula
accuracy
30.00
20.00
30.00
16.67
smiles2iupac
accuracy
0.00
0.00
3.33
0.00
iupac2smiles
accuracy
20.00
40.00
53.85
52.94
iupac2formula
accuracy
70.00
60.00
73.33
66.67
Table 13: Performance of the Gemini-2.5-Flash on Chemistry Tasks
LLM-as-Judge Evaluation of Molecule Captioning:
You are an expert molecular biologist.
Below is a SMILES string representing a molecule: {smiles}
Here is a reference description of the molecule: {gt}
Here is a predicted description of the same molecule: {pred}
Your task is to evaluate the predicted description only based on its scientific quality compared to
the reference.
You must assign a score from 1 to 10 based on the following criteria:
• Score 10: Nearly perfect — scientifically precise, complete, and fluent. Matches all key
aspects of the reference (e.g., functional groups, chemical class, derivation, roles).
• Score 8–9: Very good — minor omissions or slight rewording, but the core structure-level
and functional meaning is intact.
• Score 6–7: Reasonable — generally correct but may lack specific details (e.g., derivation or
one functional role). Possibly vague phrasing.
• Score 4–5: Partial — captures the general category or one function but omits multiple
important details or shows misunderstanding in phrasing.
• Score 2–3: Poor — vague, generic, or scientifically weak. May refer to the wrong compound
type or confuse structural features.
• Score 1: Completely incorrect or irrelevant.
Only output a single line in the following format: Score: [1-10]
One-pass Self-Consistency:
Below is a full prompt about the reasoning task, which includes the ICL examples and a new test
case. Your task is:
1. Read the full prompt to understand the task and identify:
1) the example input-output pairs
2) the specific input question to answer.
2. Analyze these example pairs and generate a series of rules that explains how each input is
transformed to its corresponding output.
3. Then, apply those rules to the final test question and output the answer.
4. Return your answer in the following format:
<rules>
Rule 1: ...
Rule 2: ...
Rule 3: ...
...
</rules>
<answer>
{{your answer}}
</answer>
Full prompt: {full_prompt}
Universal Majority Voting with Self-Consistency:
You are given a reasoning task prompt and multiple candidate responses to the question in that
prompt. Your task is:
1. Read the full prompt carefully to understand the question being asked.
2. Examine all the candidate responses and determine whether any of them form a majority
consensus.
• A majority exists if any single response appears more than any other (either verbatim
or semantically equivalent).
• In case of a tie (e.g., all responses differ or two responses appear with equal frequency),
consider that no majority exists.
3. If a majority exists, return that response as the final answer.
4. If no majority exists, then select the most reasonable and task-appropriate response based
on the prompt.
Candidate responses: {responses}
Full prompt: {full_prompt}
Return your final answer using exactly the following format:
majority_found: [yes or no]
selected_response: {full response content}
Example:
majority_found: yes
selected_response: This is the most common (or semantically equivalent)
response and correctly answers the question.
Hypothesis Induction Prompt
Below is a full prompt about the reasoning task, which includes the ICL examples that you should
learn from. Your task is:
1. Read the full prompt to understand the task and identify the example input-output pairs.
2. Analyze these example pairs and generate a series of rules that explains how each input is
transformed to its corresponding output.
3. Provide as much detail as possible in the rules, such as elaborating on the specific map-
ping.{note}
4. Return your rules in the following format (each rule on its own line):
<hypothesis>
Rule 1: ...
Rule 2: ...
Rule 3: ...
...
</hypothesis>
Full prompt:
{full_prompt}
Hypothesis Application Prompt (General)
Task Description: task_description
Please apply the given hypothesis to the given list of inputs. Ensure that you provide the actual
output for each input. Do not give a program, partial output, or placeholder.
Hypothesis: hypothesis
Input: icl_in
Format your output as follows:
<output>
Output 1: ...
Output 2: ...
...
</output>
DNA Table Prompt
Below is a full prompt about the reasoning task, which includes the question that you should give
the corresponding answer. Your task is:
1. Read the full prompt to understand the task and identify the specific input question to answer.
2. Based on your understanding of the given rules, generate the corresponding output for the
question.
Rules: hypothesis
Full prompt: x
Enclose your answer with <answer></answer> tags.
DNA Translation/Transformation as Python Code Prompt
Convert the following hypothesis into a Python function called apply that takes a string input
and returns the transformed output. The function should implement the rules described in the
hypothesis. Make sure to handle all the transformations correctly.
Task Description: self.task_description
Hypothesis: hypothesis
Your function should follow this template:
def apply(input_str):
# Implementation based on the hypothesis rules
# ...
return result
Return ONLY the Python code without any explanation or markdown formatting.
Hypothesis Refinement Prompt
You are given a candidate hypothesis that attempts to explain how each input is transformed into
its output. A hypothesis consists of rules that explain how the inputs are mapped to the outputs.
Your goal is to revise this hypothesis so it fully accounts for any discrepancies. You may add new
rules, modify existing ones, or remove inaccurate ones. You can also propose a completely new
hypothesis.
Context: self.task_description
Current Hypothesis: hypothesis
Input: icl_in
Model Output: generated_output
Expected Output: expected_output
Steps:
1. List the exact differences between Model Output and Expected Output.
2. For each difference, identify which existing rule (if any) fails to cover it.
3. Revise existing rules or introduce new rules to fix these gaps.
4. Ensure the rules clearly state how the input is mapped into output in a detailed manner.{note}
Output only the refined hypothesis—do not solve the original task.
Format your output as follows:
<new_hypothesis>
Rule 1: ...
Rule 2: ...
Rule 3: ...
...
</new_hypothesis>
Final Hypothesis Application Prompt
Below is a full prompt about the reasoning task, which includes the question that you should give
the corresponding answer. Your task is:
1. Read the full prompt to understand the task and identify the specific input question to answer.
2. Based on your understanding of the given rules, generate the corresponding output for the
question.
Rules: hypothesis
Full prompt: x
Enclose your answer with <answer></answer> tags.
|
On LLM-Based Scientific Inductive Reasoning Beyond Equations Brian S. Lin1* Jiaxin Yuan2* Zihan Zhou3* Shouli Wang4* Shuo Wang1† Cunliang Kong1 Qi Shi1 Yuxuan Li1 Liner Yang2† Zhiyuan Liu1† Maosong Sun1 1Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua University Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University 2Beijing Language and Culture University 3Xiamen University 4Harbin (LLMs) increasingly exhibit human-like capabilities, a fundamental question emerges: How can we enable LLMs to learn the underlying patterns from limited examples in entirely novel environments and apply them effectively? This question is central to the ability of LLMs in inductive reasoning. Existing research on LLM-based inductive reasoning can be broadly categorized based on whether the underlying rules are expressible via explicit mathematical equations. However, many recent studies in the beyond-equations category have emphasized rule design without grounding them in specific scenarios. Inspired by the parallels between inductive reasoning and human scientific discovery, we propose the task of LLM-Based Scientific Inductive Reasoning Beyond Equations and introduce a new benchmark, SIRBench-V1, to evaluate the inductive reasoning abilities of LLMs in scientific settings. Our experimental results show that current LLMs still struggle with this task, underscoring its difficulty and the need for further advancement in this area.1 1 Introduction In recent years, many advanced reasoning models, including OpenAI o1 (OpenAI et al., 2024) and DeepSeek-R1 (DeepSeek-AI et al., 2025), have demonstrated strong deductive reasoning capabilities, especially as evidenced by their performance in mathematics and programming tasks. These tasks are typically characterized by concise problem descriptions, where the model is required to generate a long chain of thought (Wei et al., 2022) to solve complex problems. In contrast, inductive reasoning (Hayes et al., 2010) poses a different challenge, requiring mod- *Equal contribution. †Corresponding authors. 1The open-source code and data can be found at https: //github.com/thunlp/SIR-Bench. Equation-based Scientific Inductive Reasoning (from LLM-SRBench) Scientific Inductive Reasoning Beyond Equations (our work) expected output + = Task: New Chemistry Equation Discovery Task: Reactant Prediction test input expected output ICL examples - Problem Description: Reaction rate with respect to Time and Concentration - Known Term: - Data points generated from unknown novel equations ICL examples + Figure 1: Illustrative comparison of scientific inductive reasoning: on the left, tasks focused on equation discovery (Shojaee et al., 2025), and on the right, tasks representing broader forms of scientific induction beyond equation generation. els to infer general rules or structures from multiple specific observations (Chollet, 2019; Yang et al., 2022). Inductive reasoning involves making predictions about new scenarios based on existing knowledge or observed data (Hayes et al., 2010). Inductive reasoning has been progressively recognized as a critical component for human-like cognitive modeling and the development of general artificial intelligence (Li et al., 2024). However, current LLMs still exhibit notable shortcomings in inductive reasoning tasks (Li et al., 2024; Hua et al., 2025; Yan et al., 2025). Even state-of-the-art models often fail to correctly infer abstract rules from observations and typically rely on memorizing rather than truly understanding the underlying concepts. Currently, artificial intelligence is increasingly regarded as a transformative paradigm in scientific discovery, with growing applications across disciplines such as physics, materials science, and 12 Sep 2025 Benchmark Task Type Related to Scientific Discovery Beyond Mathematical Equations Closed-Ended Questions #Instances Sequence Length MATDESIGN HI ✓ ✓ × 50 250-1,000 TOMATO-Chem HI ✓ ✓ × 51 100-600 ResearchBench HI ✓ ✓ × 1,386 unknown chaotic systems SR ✓ × ✓ 131 ~100 SRSD SR ✓ × ✓ 240 100-300 LLM-SRBench SR ✓ × ✓ 239 ~100 MIRAGE IR × ✓ ✓ 2,000 20-100 MIR-Bench IR × ✓ ✓ 6,930 50-250 IOLBench IR × ✓ ✓ 1,500 200-2,000 SIRBench-V1 (Ours) IR ✓ ✓ ✓ 710 500-3,000 Table 1: Analysis of existing related benchmarks. HI: Hypothetical Induction, SR: Symbolic Regression, IR: Inductive Reasoning. Related to Scientific Discovery: targets scientific problem-solving. Beyond Mathematical Equations: focuses on reasoning not reducible to equation fitting. Closed-Ended Questions: has deterministic answers for automatic evaluation. #Instances: number of test examples. Sequence Length: input sequence length-crucial as scientific inductive reasoning often requires extracting information from extensive resources. chemistry (Xu et al., 2021). Against this backdrop, increasing attention has been paid to the inductive reasoning abilities of LLMs in scientific contexts recently (Yang et al., 2024; Liu et al., 2025; Fang et al., 2025). However, systematically leveraging reasoning models to enhance inductive tasks for scientific discovery remains largely underexplored. While some scientific rules, such as the velocity formula of free fall, can be expressed mathematically, others, such as molecular structure-function relationships, are not readily amenable to such formulation. Under this criterion, we observe that existing LLM-based inductive reasoning research can be broadly categorized based on whether the underlying rules can be formulated mathematically. The first category comprises tasks that are mathematical equation-based, which are closely related to symbolic regression (Matsubara et al., 2022; Gilpin, 2021). Recent work has shown that LLMs can serve as equation generators or guide the equation discovery process (Wang et al., 2024; Du et al., 2024; Shojaee et al., 2024, 2025; Fang et al., 2025). However, these tasks typically only cover cases where the underlying rules can be explicitly formulated as equations. A separate line of work targets tasks beyond mathematical equations, proposing new inductive tasks and datasets from various perspectives (Hua et al., 2025; Tang et al., 2024; Banatt et al., 2024; Goyal and Dan, 2025). However, many of these studies emphasize the creation of novel synthetic or low-frequency symbolic systems, which often have a limited connection to discovering scientific patterns in real-world scenarios. Recent efforts under the AI4Science agenda are exploring more scientifically grounded settings where models emulate researchers by deriving insights or hypotheses from scientific materials (Yang et al., 2023, 2024; Liu et al., 2025). However, the reasoning processes of these studies often remain coarse-grained or open-ended, making robust automatic evaluation challenging. To address these gaps, we propose to examine the capabilities of LLMs in Scientific Inductive Reasoning Tasks Beyond Mathematical Equations. To the best of our knowledge, high-quality and easy-to-evaluate datasets to directly investigate this problem are currently lacking. We have therefore created SIRBench-V1, a new benchmark consisting of a series of subtasks in chemistry and biology. In these subtasks, the underlying rules cannot be expressed through mathematical equations, yet they yield relatively deterministic answers. We transform basic scientific resources from prior studies (Grešová et al., 2023; Liu et al., 2024; Guo et al., 2023; Edwards et al., 2022a; Irwin et al., 2021; Westerlund et al., 2024b,a; Kim et al., 2018) into inductive reasoning tasks. Furthermore, to eliminate LLM memorization, we design counterfactual tasks that establish synthetic scientific rules for the models to reason with, rather than recall. We follow several commonly adopted reasoning strategies for LLMs on the SIRBench-V1, including implicit and explicit reasoning, selfconsistency (Wang et al., 2022), and hypothesis refinement (Qiu et al., 2023). By investigating the performance of several LLMs augmented with different reasoning strategies, we find that equationfree scientific inductive reasoning is highly challenging for modern LLMs. Gemini-2.5-Flash, the best-performing model, achieves an average accuracy of 43.81% in our benchmark, while Claude3.5-Haiku and GPT-4.1 demonstrate a lower average accuracy of 31.53% and 32.41%, respectively. We also observe that using sophisticated reasoning strategies provides minimal performance improvement and, in some cases, even leads to performance decline. Using hypothesis refinement, Gemini-2.5Flash, Claude-3.5-Haiku, and GPT-4.1 attain an average accuracy of 39.06%, 31.63%, and 33.25%, respectively. We believe this work will pave the way for a new and fruitful avenue of research in scientific discovery. Contributions In summary, the main contributions of this work are as follows: • We present SIRBench-V1, a new scientific inductive reasoning benchmark featuring authentic and counterfactual test examples from tasks in both biology and chemistry. • We conduct evaluations using several representative LLMs in conjunction with diverse advanced inference strategies, the results of which demonstrate the capability boundaries of the examined LLMs. • We derive several constructive findings for scientific inductive reasoning, such as a comparison between many-short-shot and longfew-shot learning approaches and an analysis of memorization, which we anticipate will be helpful for subsequent studies. 2 Related Work 2.1 Inductive Reasoning Benchmark Various benchmarks have recently been introduced to systematically evaluate these capabilities from multiple perspectives. Hua et al. (2025) evaluate the model's ability to infer string transformation rules from limited input-output examples. Bongard-OpenWorld (Wu et al., 2023) examines conceptual induction and image classification in few-shot scenarios. Tang et al. (2024) propose an embodied interactive environment requiring models to induce task rules and objectives. MIR-Bench (Yan et al., 2025) provides a many-shot in-context benchmark covering various function-based input-output pairs. WILT (Banatt et al., 2024), inspired by the Wason 2-4-6 task, evaluates multi-turn inductive reasoning and generalization capabilities. Additionally, benchmarks such as LINGOLY (Bean et al., 2024), Linguini (Sánchez et al., 2024) and IOLBench (Goyal and Dan, 2025), derived from the International Linguistics Olympiad, challenge model generalization under low-resource language scenarios. Methods Beyond benchmark development, recent efforts have also explored structured frameworks to enhance inductive reasoning in LLMs, addressing limitations observed with chain-ofthought prompting and few-shot methods (Bowen et al., 2024; Gendron et al., 2023). For instance, Chain-of-Language-Models (Yang et al., 2022) employs a modular pipeline integrating rule generation and verification. Qiu et al. (2023) combines LLMs with symbolic executors in a propose-verify-refine loop, significantly enhancing robustness. Similarly, the De-In-Ductive (DID) (Cai et al., 2024) simulates a human-like inductive-then-deductive reasoning sequence within a single prompt, enabling flexible strategy switching and improved cross-task generalization. 2.2 Scientific Inductive Reasoning in LLMs Symbolic Regression Symbolic regression is a core approach for scientific discovery (Matsubara et al., 2022; Gilpin, 2021). It is valued for its ability to extract analytical expressions directly from data (Angelis et al., 2023). Recent studies have extended this paradigm by incorporating LLMs into the tasks. In materials science, Wang et al. (2024) highlight its role in revealing underlying physical and chemical principles. Du et al. (2024) propose a prompt-based framework using LLMs to generate candidate equations, offering greater flexibility than traditional methods. Shojaee et al. (2024) treat equations as programs, guided by scientific priors. To support systematic evaluation, they then introduce LLM-SRBench, a multi-domain benchmark designed to evaluate LLMs' true discovery capabilities. Hypothetical Induction Hypothetical Induction has been recognized as a subtask of inductive reasoning (Norton, 2003), with growing interest in using LLMs to generate novel, valuable scientific hypotheses from background knowledge or observations. Kumbhar et al. (2025) introduced a goaldriven dataset and evaluation framework in materials science, while Yang et al. (2023, 2024) constructed datasets for hypothesis generation in chemistry and social science. Researchbench (Liu et al., 2025) further provides the first benchmark covering inspiration retrieval, hypothesis formulation, and ranking. 3 SIRBench-V1: Task and Construction We curate 7 tasks, with 100 samples for each biology task, including synthetic tasks, and 30 samples for each chemistry task. 3.1 Task Overview Task 1: DNA Translation (Synthetic) This task simulates the biological process of translating a DNA sequence into its corresponding amino acid sequence. The model is required to induce the codon-to-amino-acid mappings solely based on incontext learning (ICL) examples and apply the inferred mappings to translate a target DNA sequence. However, LLMs may have internalized the canonical genetic codon table as prior knowledge, enabling them to generate the correct amino acid sequence through memorization rather than genuine rule induction. To better assess the inductive reasoning capabilities of the model, we provide a synthetic alternative to the standard task design, by randomly assigning codon-to-amino-acid mappings. Task 2: DNA Table Inference (Synthetic) This task focuses explicitly on evaluating the model's inductive ability by requiring it to recover the underlying codon table based solely on a set of DNA-amino acid sequence pairs. The model is asked to infer the translation rules and provide a fully structured codon table, including codonto-amino acid mappings, start codons, and stop codons. We follow the same design as in Task 1, providing both standard and synthetic configurations. Task 3: DNA Transformation This task adopts a fully synthetic setup, with the goal of evaluating the model's ability to infer transformation rules from ICL examples and to apply them correctly to unseen test sequences. Each ICL example consists of an input-output DNA sequence pair generated by applying one of several predefined transformations: sequence reversal, complementation, reverse complementation, segmented transformation, and fixed base mutation. Task 4: Molecule Design This task requires LLMs to generate molecular structures that satisfy a given textual description. The input is a natural language sentence (in English), and the output is the corresponding molecule represented in SMILES format. Task 5: Molecule Captioning This task is the inverse of Task 4, where the input is a molecular structure and the model is expected to generate a corresponding description or annotation in natural language. Task 6: Reaction Prediction This task focuses on chemical reaction prediction. Given one or more reactants and reagents, the model is expected to predict the resulting product in the form of a SMILES string. Task 7: Name Prediction This task focuses on conversions between three common chemical representations: SMILES (linear structural encodings), IUPAC names (standardized nomenclature), and molecular formulas (atomic composition). We include four relatively unambiguous conversions: smiles2formula, smiles2iupac, iupac2smiles, and iupac2formula. 3.2 Data Collection Biology We derive source DNA sequences and their corresponding amino acid sequences from GenomicLLM_GRCh38 (Grešová et al., 2023; Liu et al., 2024) for the standard task. For the synthetic task, we generate codon tables by randomizing every mapping except the start and stop codons, and translate inputs using these tables. For DNA Transformation, we randomly sample DNA fragments from the training set as ICL examples and truncate them to a maximum length, and do the same for test sequences. The transformation type and base-pairing schemes are randomly sampled from a predefined set. These base-pairing schemes are designed manually to disrupt natural complementarity, increasing the inductive reasoning challenge. For all the tasks, we ensure that the ICL examples cover all the mappings used in the test example. Chemistry ChemLLMBench (Guo et al., 2023) is a chemistry-domian LLM benchmark comprising eight tasks. We select four tasks, corresponding to Task 4-7 in our work, which exhibit a relatively stronger emphasis on inductive reasoning capabilities. The Molecule Design and Captioning tasks are based on the ChEBI-20 dataset (Edwards et al., 2022a), pairing molecular SMILES with textual description. The Reaction Prediction task draws DNA (to-protein) Translation DNA Table Inference DNA Transformation DNA sequence: ATGGAGGCG DNA sequence: ATGGAGGCG Real examples: ATGGAGGC → MEA GGAAGTGGC → GTV ... Synthetic examples: ATGGAGGC → MEA GGAAGTGGC → GTV ... Input DNA sequence: ... Output DNA sequence: ... Amino acid sequence: MEA Amino acid sequence: KLH Based on authentic codon table Based on synthetic codon table Real codon table: forward_table: {"ATG": "M", ...} start_codons": [...] stop_codons": [...] Synthetic codon table: forward_table: {"GGA": "K", ...} start_codons": [...] stop_codons": [...] Transformation Rules: - reverse - complementation - reverse complementation - segmented transformation - fixed base mutation Molecule Design Description: The molecule is a member of the class of formamides that is formamide substituted ... Smile: CCCCNC=O Molecule Captioning Smile: CC1=CCC(CC1=O)C(=C)C Description: The molecule is a p-menthane monoterpenoid that consists of cyclohex-2-enone ... Reaction Prediction Name Prediction Reactants+Reagents: Brc1cccnc1.C1CCO C1.CCOCC.CO c1cccc(CCN2C(=O)c3ccccc3C2=O)c1. [Cl-].[Li]CCCC.[NH4+] Products: COc1cccc(CCN2C(=O)c3 ccccc3C2(O)c2cccnc2)c1 Smile: CCCOC(=O)OCC( =O)COC1=CC=CC=C1 IUPAC name: (2-oxo-3-phen oxypropyl) propyl carbonate Formula: C13H16O5 Biology Tasks Chemistry Tasks Figure 2: Our benchmark includes 7 tasks spanning two scientific disciplines: biology and chemistry. denotes tasks that adopt a synthetic configuration; refers to tasks that involve only rule induction from examples, while others involve both induction and application to a new test input. on the USPTO-MIT Mixed reaction dataset (Irwin et al., 2021; Westerlund et al., 2024b,a), which contains information on reactants, reagents, and products in SMILES reaction format. The Name Prediction task is derived from PubChem (Kim et al., 2018), which offers extensive mappings between SMILES strings and their corresponding standard chemical names, including both IUPAC names and molecular formulas. 3.3 Metrics Biology All three tasks are evaluated using accuracy as the primary metric, computed as the proportion of correctly predictions. Chemistry For molecule design, we adopt eight metrics, including BLEU, Exact Match (Edwards et al., 2022b), and Levenshtein distance (Miller et al., 2009) for string-level consistency; validity for structural correctness; MACCS (Ratcliff and Metzener, 1988), RDK (Landrum, 2020), and Morgan (Dash et al., 2023) for structural similarity; and FCD (Preuer et al., 2018) for distributional similarity. For molecule captioning, we use BLEU, ROUGE, and METEOR to capture surface-level overlaps, but also introduce an LLM-as-a-Judge score (1-10 scale), with an emphasis on scientific accuracy, while also considering completeness and clarity. For reaction prediction, we follow the Top-1 Accuracy metric and improve robustness by canonicalizing both predicted and reference SMILES using RDKit (Landrum, 2020) before comparison. Finally, for name prediction, we apply the same canonicalization for the iupac2smiles task, and adopt Exact Match Accuracy for the other three tasks (smiles2formula, smiles2iupac, and iupac2formula). 4 Evaluation 4.1 Models In order to provide a comprehensive assessment of the inductive reasoning capabilities of costoptimized, flagship, and reasoning LLMs, we choose one representative model from each category, namely Claude-3.5-Haiku, GPT-4.1, and Gemini-2.5-Flash. Since our benchmark is integrated into the OpenCompass framework, it can be easily evaluated on any other LLM. To ensure consistency and encourage output diversity during repeated sampling, we set the temperature at 1.0 for all experiments. For Gemini-2.5-Flash, we retain its default "thinking" configuration. 4.2 Inference Strategies We evaluate SIRBench-V1 on four commonly used inference strategies for inductive reasoning as illustrated in figure 3. Explicit inductive reasoning serves as a baseline for advanced methods like selfconsistency and hypothesis refinement, where the LLM needs to explicitly formulate and apply the hypotheses. Implicit Inductive Reasoning. We provide the LLM with ICL examples and ask the LLM to provide the final answer directly without explicitly stating the induced rules. This approach is the most straightforward way to perform inductive reasoning. Explicit Inductive Reasoning. We prompt the LLM to formulate a hypothesis based on the ICL examples. Then, we let the LLM apply the hypothesis to the given target question to obtain the final answer. This approach forces the LLM to perform the inductive reasoning process explicitly. Self-Consistency. For self-consistency (Wang et al., 2022), we sample multiple hypotheses (we use n = 5) from the LLM and ask it to apply each of them to the target question, obtaining a corresponding answer from each hypothesis. A final answer is selected using majority voting performed by the LLM itself via prompting (see appendix C). Hypothesis Refinement. The hypothesis refinement method (Qiu et al., 2023) follows a threestage iterative process: hypothesis generation, selection, and refinement. Initially, we sample multiple hypotheses (n = 5) based on the ICL examples, then evaluate them using one of the two approaches: (1) for codeexecutable tasks, we translate them into Python functions and execute them following Qiu et al. (2023), or (2) otherwise, we have the LLM apply each hypothesis directly. A task-specific evaluator scores each hypothesis's output. Next, we generate a new set of hypotheses (n = 5) by prompting (see appendix C for prompt) the LLM to refine the highest-scoring hypothesis based on feedback. We repeat this select-and-refine loop up to t = 3 iterations, stopping early if the hypothesis achieves a perfect score on ICL examples or performance degradation is detected. We added the early stopping mechanism for performance degradation to prevent weaker models from degrading rule quality. Finally, we apply the best resulting hypothesis to the target question to produce the answer. 5 Results and Analysis 5.1 Main Results Table 2 reveals consistently low performance across most tasks, highlighting the limitations of current LLMs in scientific inductive reasoning tasks beyond mathematical equations. Among the evaluated models, Gemini-2.5-Flash demonstrates superior performance in computationally intensive tasks while exhibiting comparable results to other models in conceptually oriented tasks such as Molecule Caption. Additionally, larger flagship models perform better than cost-optimized models. We observe that LLMs struggle with explicit inductive reasoning (i.e., proposing effective rules and applying them to novel inputs), as shown by the performance drop from implicit to explicit inductive reasoning. Self-consistency helps alleviate this shortcoming by sampling multiple diverse reasoning paths and marginalizing across them, thereby enhancing the robustness of the explicit inductive reasoning process. The hypothesis refinement strategy further improves the performance, as it selects the best rule from multiple sampled hypothesis and revises the rule at each iteration. However, we find that the advantage of hypothesis refinement over implicit inductive reasoning varies inconsistently across tasks and models. To validate our findings across more LLMs, we evaluated additional open-source models under implicit inductive reasoning, as shown in Table 3. Deepseek-V3-0324 performs comparably with GPT-4.1 across most tasks, while Qwen3-8B with thinking generates extremely long chain-of-thought reasoning for biology tasks, often exceeding its recommended 32K max output length without completing the reasoning process, demonstrating that long chain-of-thought is not effective on the biology tasks. These results reinforce our findings on the fundamental limitation of current LLMs in scientific inductive reasoning. Additionally, current inductive reasoning methods remain inadequate for scientific inductive reasoning tasks beyond mathematical equations. 5.2 Effect of Length Being able to perform inductive reasoning on a long context is fundamental. We evaluated the LLMs on DNA transformation and DNA translation tasks In-Context Examples Input: ... Output: ... Input: ... Output: ... ... Input: ... Output: ... Hypothesis Rule 1: ... Rule 2: ... ... Rule n: ... In-Context Examples input output output apply to input Hypothesis ... Hypothesis output ... final iteration apply to in-context examples output apply to input majority voting select best hypothesis refine based on feedback yes no apply best hypothesis to input 1 2 3 4 Implicit Inductive Reasoning Explicit Inductive Reasoning Self-Consistency Hypothesis Refinement In-Context Examples In-Context Examples output output In-Context Examples Hypothesis input Hypothesis output Hypothesis Hypothesis output Figure 3: Comparison of four inference strategies: (1) Implicit induction - directly providing output; (2) Explicit induction - formulating clear hypotheses explicitly; (3) Self-consistency - using multiple reasoning paths to reach consensus; and (4) Hypothesis refinement - iteratively improving hypothesis on feedback. Models Biology Chemistry Avg. DNA DNA Table DNA Molecule Molecule Reaction Name Translation Inference Transformation Design Caption Prediction Prediction Implicit Inductive Reasoning Claude-3.5-Haiku 5.47 10.23 27.28 62.00 67.70 44.44 3.57 31.53 GPT-4.1 5.71 12.73 31.37 75.00 66.30 22.22 13.51 32.41 Gemini-2.5-Flash 11.72 32.06 30.42 85.00 63.30 54.17 30.00 43.81 Explicit Inductive Reasoning Claude-3.5-Haiku 5.85 9.72 26.05 64.00 54.00 19.23 2.81 25.95 GPT-4.1 5.31 12.13 28.73 69.00 59.00 17.86 6.09 28.30 Gemini-2.5-Flash 9.14 23.34 28.66 77.00 67.70 34.78 30.00 38.66 Self-Consistency (Wang et al., 2022) Claude-3.5-Haiku 5.11 10.00 26.34 66.00 69.70 20.83 0.83 28.40 GPT-4.1 5.96 13.19 30.81 72.00 65.70 25.00 9.58 31.75 Gemini-2.5-Flash 9.15 24.84 30.4 80.00 70.00 39.29 40.13 41.97 Hypothesis Refinement (Qiu et al., 2023) Claude-3.5-Haiku 5.79 10.02 30.05 73.00 72.70 28.00 1.88 31.63 GPT-4.1 5.62 14.57 35.56 67.00 66.30 32.14 11.59 33.25 Gemini-2.5-Flash 10.60 28.55 30.37 72.00 65.70 32.14 34.07 39.06 Table 2: Performance of Claude-3.5-Haiku, GPT-4.1, and Gemini-2.5-Flash on SIRBench-V1 using four inference strategies. All scores report accuracy (%), except Molecule Design (Morgan similarity rescaled to 0-100). Molecule Caption reports the accuracy from LLM-as-judge. Synthetic versions were used for DNA Translation and DNA Table Inference tasks. with varying sequence length configurations. The DNA transformation task demands the comprehension of the entire sequence (e.g., identifying reversals), while the DNA translation task requires observation of local patterns. As shown in figure 4, for DNA transformation, we found that the LLMs achieve relatively strong performance on shorter sequences but exhibits a significant performance decline as sequence length increases. For DNA translation, GPT-4.1 and Claude-3.5-Haiku show minimal decrease with longer sequences only because they struggle with this task at shorter lengths. The results indicate that current LLMs are effective at inducing pattern only within limited input Models Biology Chemistry Avg. DNA DNA Table DNA Molecule Molecule Reaction Name Translation Inference Transformation Design Caption Prediction Prediction Qwen3-8B (with thinking) 0.20 4.88 3.24 59.00 52.67 3.33 1.67 17.00 Qwen3-8B (without thinking) 6.30 7.06 27.19 50.00 49.67 0.00 0.00 20.03 Deepseek-V3-0324 7.21 12.24 28.81 75.00 64.00 30.00 14.17 33.06 Table 3: Performance of Qwen3-8B and Deepseek-V3-0324 on SIRBench-V1 under the Implicit Inductive Reasoning setting. Scores are accuracy (%) except Molecule Design (Morgan similarity, 0-100 scale) and Molecule Caption (LLM-as-judge accuracy). Synthetic versions used for DNA tasks. 16 32 64 128 256 512 Sequence Length 0 20 40 60 80 100 Accuracy (%) DNA Transformation GPT-4.1 Claude-3.5-Haiku Gemini-2.5-Flash 200 300 400 500 600 700 Sequence Length 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 Accuracy (%) DNA Translation GPT-4.1 Claude-3.5-Haiku Gemini-2.5-Flash Figure 4: Effect of Sequence Length in Transformation and DNA Translation tasks. 2 4 8 16 32 64 128 256 Number of Shots 0 20 40 60 80 100 Accuracy (%) Reaction Prediction GPT-4.1 Claude-3.5-Haiku Gemini-2.5-Flash 2 4 8 16 32 64 128 256 Number of Shots 0 20 40 60 80 100 Accuracy (%) DNA Transform GPT-4.1 Claude-3.5-Haiku Gemini-2.5-Flash Figure 5: Effect of Number of Shots in Reaction Prediction and DNA Transformation tasks. lengths. This limitation reflects the broader challenge of developing robust inductive reasoning capabilities that can handle long context. 5.3 Effect of Number of Shots We examine the effect of the number of shots on accuracy in one representative task each from the domains of biology and chemistry. Figure 5 shows that increasing the number of shots has varying effects on different models. In reaction prediction task, GPT-4.1 exhibits an upward trend, showing that it benefits from additional shots. In contrast, Claude-3.5-Haiku shows performance degradation, likely due to limitations in its context processing capability. Gemini-2.5-Flash does not show any clear upward or downward trend with as shot increases. For DNA transformation, all the models exhibit consistent performance, implying that additional examples provide limited benefit. 5.4 Many-Short-Shot vs. Long-Few-Shot Unlike previous studies that only explore increasing the number of relatively short examples (Yan Model Many-Short-Shot Few-Long-Short Claude-3.5-Haiku 31.19 15.63 GPT-4.1 36.94 25.64 Gemini-2.5-Flash 35.14 24.47 Table 4: Performance comparison in many-short-shot versus long-few-shot settings on the DNA Translation task. The many-short-shot setting uses 64 shots with sequence length 100, while the few-long-shot setting uses 4 shots with sequence length 1600. et al., 2025), we also explore the inductive reasoning capabilities of LLMs on few long examples. The latter paradigm adheres more to real-world applications, where it is difficult to obtain numerous examples for long input tasks. Our comparative analysis in table 4 across both scenarios while maintaining the total input length demonstrates that LLMs perform worse with few long examples. This finding highlights a critical area for the advancement of LLM inductive reasoning ability. 5.5 Task Difficulty Analysis Reasoning ability is not only reflected in overall accuracy but also in performance across difficulty levels. We analyzed two representative tasks, one from biology and one from chemistry, under Implicit Inductive Reasoning. Test instances were categorized into Easy, Medium, Hard, with 100 samples each. The DNA Translation samples were grouped by input sequence length, with ranges of 100-300 for Easy, 300-500 for Medium, and 500700 for Hard, while the Molecule Design samples were classified by molecular complexity using RDKit based on structural features. As shown in both Table 5 and Table 6, model performance exhibits a clear downward trend from easy to hard samples, suggesting that difficulty-based categorization offers a straightforward way to assess robustness while also enabling a more fine-grained evaluation of reasoning abilities across domains. Difficulty Level GPT-4.1 Claude Gemini Easy accuracy 6.16 5.77 12.6 Medium accuracy 5.56 4.85 7.98 Hard accuracy 5.27 4.91 5.9 Table 5: Performance of LLMs on the DNA Translation task by difficulty level. Difficulty Level GPT-4.1 Claude Gemini Easy validity 0.94 0.67 0.94 morgan_sims 0.67 0.39 0.89 fcd (↓) 2.66 9.82 1.15 Medium validity 0.92 0.64 0.88 morgan_sims 0.55 0.29 0.78 fcd (↓) 7.77 21.08 4.73 Hard validity 0.74 0.59 0.41 morgan_sims 0.46 0.21 0.6 fcd (↓) 19.85 29.86 22.24 Table 6: Performance of LLMs on the Molecule Design task by difficulty level. 5.6 Counterfactual Evaluation Model DNA Translation DNA Table Inf. Aut. Syn. (∆) Aut. Syn. (∆) Claude-3.5-Haiku 21.95 5.47 (-16.48) 68.50 10.23 (-58.27) GPT-4.1 21.24 5.71 (-15.53) 81.84 12.73 (-69.11) Gemini-2.5-Flash 30.64 11.72 (-18.92) 87.09 32.06 (-55.03) Table 7: Performance comparison between authentic and synthetic versions of chosen tasks. ∆represents the performance gap, calculated as the score on synthetic tasks minus the score on authentic tasks. To investigate whether LLMs perform true inductive reasoning, we compare their performance on original and synthetic settings of DNA Translation and Table Inference. As illustrated in Table 7, all three models suffer a dramatic performance decline in synthetic tasks, suggesting that higher performance in authentic versions stems from the memorization of standard mappings rather than genuine inductive reasoning capabilities. Among the evaluated models, Gemini-2.5-Flash maintains the highest performance on both original and synthetic versions of the tasks. This suggests that reasoning models have better capability to identify rules beyond the constraints of memorized knowledge than non-reasoning models. However, its absolute score in synthetic tasks remains low. Overall, these results indicate that current LLMs are fundamentally limited in their ability to perform genuine inductive reasoning. In the context of scientific discovery, LLMs need to recognize novel patterns rather than just retrieve existing knowledge. Therefore, our findings highlight the need to distinguish inductive reasoning from retrieval to advance the ability of LLMs for scientific discovery. 6 Conclusion In this paper, we introduce SIRBench-V1, a benchmark that includes Chemistry and Biology subtasks, to evaluate the scientific inductive reasoning of LLMs on tasks beyond mathematical equation. We evaluated different LLMs using commonly used reasoning strategies on our proposed benchmark. We found that current leading LLMs obtain low performance on our benchmark and that using sophisticated strategies provide minimal benefits. Additionally, we point out limitations of LLMs in performing inductive reasoning on longer context lengths, few-long-shot settings, and counterfactual rules. The experimental results will provide valuable insights for future studies on LLM-driven scientific discovery. 7 Limitations In this work, we take the first step toward incorporating scientific scenarios into the design of the LLM-Based Inductive Reasoning Beyond Equations and introduce a new dataset for evaluation. However, the SIRBench-V1 is limited to chemistry and biology domains. As a next step, we plan to invite domain experts in these areas to review and refine both our benchmark and evaluation protocol. In the future, we aim to expand the benchmark to cover a broader range of scientific disciplines. Acknowledgement This work is supported by the AI9Stars community, the Fundamental Research Funds for the Central Universities, and the Research Funds of Beijing Language and Culture University (25YCX118). We also thank the anonymous reviewers in ACL Rolling Review May 2025. Their insightful feedback and suggestions are significant to refining and improving our work. References Dimitrios Angelis, Filippos Sofos, and Theodoros E. Karakasidis. 2023. Artificial intelligence in physical sciences: Symbolic regression trends and perspectives. Archives of Computational Methods in Engineering, pages 1 - 21. Eryk Banatt, Jonathan Cheng, Skanda Vaidyanath, and Tiffany Hwu. 2024. Wilt: A multi-turn, memorization-robust inductive logic benchmark for llms. ArXiv, abs/2410.10998. Andrew M. Bean, Simi Hellsten, Harry Mayne, Jabez Magomere, Ethan A. Chi, Ryan Chi, Scott A. Hale, and Hannah Rose Kirk. 2024. Lingoly: A benchmark of olympiad-level linguistic reasoning puzzles in low-resource and extinct languages. ArXiv, abs/2406.06196. Chen Bowen, Rune Sætre, and Yusuke Miyao. 2024. A comprehensive evaluation of inductive reasoning capabilities and problem solving in large language models. In Findings. Chengkun Cai, Xu Zhao, Haoliang Liu, Zhongyu Jiang, Tianfang Zhang, Zongkai Wu, Jenq-Neng Hwang, and Lei Li. 2024. The role of deductive and inductive reasoning in large language models. ArXiv, abs/2410.02892. Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, and Denny Zhou. 2023. Universal self-consistency for large language model generation. ArXiv, abs/2311.17311. François Chollet. 2019. On the measure of intelligence. Preprint, . Debadutta Dash, Rahul Thapa, J. Banda, Akshay Swaminathan, Morgan Cheatham, Mehr Kashyap, Nikesh Kotecha, Jonathan H. Chen, Saurabh Gombar, Lance Downing, Rachel A. Pedreira, Ethan Goh, Angel Arnaout, Garret K. Morris, H Magon, Matthew P. Lungren, Eric Horvitz, and Nigam H. Shah. 2023. Evaluation of gpt-3.5 and gpt-4 for supporting real-world information needs in healthcare delivery. ArXiv, abs/2304.13714. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Jun-Mei Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiaoling Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 179 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. ArXiv, abs/2501.12948. Mengge Du, Yuntian Chen, Zhongzheng Wang, Longfeng Nie, and Dong juan Zhang. 2024. Large language models for automatic equation discovery of nonlinear dynamics. Physics of Fluids. Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, and Heng Ji. 2022a. Translation between molecules and natural language. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 375-413, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Carl N. Edwards, T. Lai, Kevin Ros, Garrett Honke, and Heng Ji. 2022b. Translation between molecules and natural language. ArXiv, abs/2204.11817. You-Le Fang, Dong-Shan Jian, Xiang Li, and Yan-Qing Ma. 2025. Ai-newton: A concept-driven physical law discovery system without prior physical knowledge. Preprint, . Gaël Gendron, Qiming Bao, M. Witbrock, and Gillian Dobbie. 2023. Large language models are not strong abstract reasoners. In International Joint Conference on Artificial Intelligence. William Gilpin. 2021. Chaos as an interpretable benchmark for forecasting and data-driven modelling. ArXiv, abs/2110.05266. Satyam Goyal and Soham Dan. 2025. Iolbench: Benchmarking llms on linguistic reasoning. ArXiv, abs/2501.04249. Katarína Grešová, Vlastimil Martinek, David ˇCechák, Petr Šimeˇcek, and Panagiotis Alexiou. 2023. Genomic benchmarks: a collection of datasets for genomic sequence classification. BMC Genomic Data, 24(1):25. Taicheng Guo, Kehan Guo, Bozhao Nan, Zhengwen Liang, Zhichun Guo, N. Chawla, O. Wiest, and Xiangliang Zhang. 2023. What can large language models do in chemistry? a comprehensive benchmark on eight tasks. In Neural Information Processing Systems. Brett K. Hayes, Evan Heit, and Haruka Swendsen. 2010. Inductive reasoning. Wiley Interdisciplinary Reviews: Cognitive Science, 1(2):278-292. Wenyue Hua, Tyler Wong, Sun Fei, Liangming Pan, Adam Jardine, and William Yang Wang. 2025. Inductionbench: Llms fail in the simplest complexity class. ArXiv, abs/2502.15823. Ross Irwin, Spyridon Dimitriadis, Jiazhen He, and Esben Jannik Bjerrum. 2021. Chemformer: a pretrained transformer for computational chemistry. Machine Learning: Science and Technology, 3. Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A. Shoemaker, Paul A. Thiessen, Bo Yu, Leonid Y. Zaslavsky, Jian Zhang, and Evan E. Bolton. 2018. Pubchem 2019 update: improved access to chemical data. Nucleic Acids Research, 47:D1102 - D1109. Shrinidhi Kumbhar, Venkatesh Mishra, Kevin Coutinho, Divij Handa, Ashif Iquebal, and Chitta Baral. 2025. Hypothesis generation for materials discovery and design using goal-driven and constraint-guided llm agents. ArXiv, abs/2501.13299. Greg Landrum. 2020. Rdkit: Open-source cheminformatics. http://www.rdkit.org. [Online; accessed 14-May-2025]. Jiachun Li, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, and Jun Zhao. 2024. Mirage: Evaluating and explaining inductive reasoning process in language models. ArXiv, abs/2410.09542. Huaqing Liu, Shuxian Zhou, Peiyi Chen, Jiahui Liu, Ku-Geng Huo, and Lanqing Han. 2024. Exploring genomic large language models: Bridging the gap between natural language and gene sequences. bioRxiv. Yujie Liu, Zonglin Yang, Tong Xie, Jinjie Ni, Ben Gao, Yuqiang Li, Shixiang Tang, Wanli Ouyang, Erik Cambria, and Dongzhan Zhou. 2025. Researchbench: Benchmarking llms in scientific discovery via inspiration-based task decomposition. ArXiv, abs/2503.21248. Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Tatsunori Taniai, and Y. Ushiku. 2022. Rethinking symbolic regression datasets and benchmarks for scientific discovery. ArXiv, abs/2206.10540. Frederic P. Miller, Agnes F. Vandome, and John McBrewster. 2009. Levenshtein distance: Information theory, computer science, string (computer science), string metric, damerau?levenshtein distance, spell checker, hamming distance. John D. Norton. 2003. A little survey of induction. OpenAI, :, Aaron Jaech, Adam Kalai, and *et al.*. 2024. Openai o1 system card. Preprint, . Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Günter Klambauer. 2018. Fréchet chemnet distance: A metric for generative models for molecules in drug discovery. Journal of chemical information and modeling, 58 9:17361741. Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, and Xiang Ren. 2023. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. ArXiv, abs/2310.08559. John W. Ratcliff and David E. Metzener. 1988. Pattern matching: The gestalt approach. Dr. Dobb's Journal, 13(7):46. Eduardo Sánchez, Belen Alastruey, Christophe Ropers, Pontus Stenetorp, Mikel Artetxe, and Marta Ruiz Costa-jussà. 2024. Linguini: A benchmark for language-agnostic linguistic reasoning. ArXiv, abs/2409.12126. Parshin Shojaee, Kazem Meidani, Shashank Gupta, Amir Barati Farimani, and Chandan K. Reddy. 2024. Llm-sr: Scientific equation discovery via programming with large language models. ArXiv, abs/2404.18400. Parshin Shojaee, Ngoc-Hieu Nguyen, Kazem Meidani, Amir Barati Farimani, Khoa D. Doan, and Chandan K. Reddy. 2025. Llm-srbench: A new benchmark for scientific equation discovery with large language models. Xiaojuan Tang, Jiaqi Li, Yitao Liang, Song chun Zhu, Muhan Zhang, and Zilong Zheng. 2024. Mars: Situated inductive reasoning in an open-world environment. ArXiv, abs/2410.08126. Guanjie Wang, Erpeng Wang, Zefeng Li, Jian Zhou, and Zhimei Sun. 2024. Exploring the mathematic equations behind the materials science data using interpretable symbolic regression. Interdisciplinary Materials. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint . Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903. Andreas M. Westerlund, Lakshman Saigiridharan, and Samuel Genheden. 2024a. Constrained synthesis planning with disconnection-aware transformer and multi-objective search. ChemRxiv. Preprint, not peer-reviewed. Annie M. Westerlund, Siva Manohar Koki, Supriya Kancharla, Alessandro Tibo, Lakshidaa Saigiridharan, Mikhail Kabeshov, Rocío Mercado, and Samuel Genheden. 2024b. Do chemformers dream of organic matter? evaluating a transformer model for multistep retrosynthesis. Journal of chemical information and modeling. Rujie Wu, Xiaojian Ma, Qing Li, Wei Wang, Zhenliang Zhang, Song-Chun Zhu, and Yizhou Wang. 2023. Bongard-openworld: Few-shot reasoning for free-form visual concepts in the real world. ArXiv, abs/2310.10207. Yongjun Xu, Qi Wang, Zhulin An, Fei Wang, Libo Zhang, Yanjun Wu, Fengliang Dong, Cheng-Wei Qiu, Xin Liu, Junjun Qiu, Keqin Hua, Wentao Su, Huiyu Xu, Yong Han, Xinya Cao, En ju Liu, Chenguang Fu, Zhigang Yin, Miao Liu, and 28 others. 2021. Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2. Kai Yan, Zhan Ling, Kang Liu, Yifan Yang, Ting-Han Fan, Lingfeng Shen, Zhengyin Du, and Jiecao Chen. 2025. Mir-bench: Benchmarking llm's long-context intelligence via many-shot in-context inductive reasoning. ArXiv, abs/2502.09933. Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, E. Cambria, Xiaodong Liu, Jianfeng Gao, and Furu Wei. 2022. Language models as inductive reasoners. ArXiv, abs/2212.10923. Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and E. Cambria. 2023. Large language models for automated open-domain scientific hypotheses discovery. In Annual Meeting of the Association for Computational Linguistics. Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, and Dongzhan Zhou. 2024. Moosechem: Large language models for rediscovering unseen chemistry scientific hypotheses. ArXiv, abs/2410.07076. A Additional Details on SIRBench-V1 A.1 Dataset Configurations We curate 7 tasks in total. Considering that multiple metrics provide robust assessment, for chemistry tasks, we evaluate Molecule Design, Molecule Captioning and Reaction Prediction with 30 examples each. For Name Prediction, we sample 30 examples for each type of transformation (including smiles2formula, smiles2iupac, iupac2smiles, and iupac2formula). Since biology tasks rely solely on accuracy, we increase the number of examples to 100 for each biology task to ensure more stable evaluation, including DNA Translation, DNA Translation (Synthetic), DNA Table Inference, DNA Table Inference (Synthetic) and DNA Transformation. All experiments are conducted under 5-shot setting, unless otherwise stated. However, since our benchmark has various configurations and supports synthetic data generation for some subtasks, the actual number of items can be configurable. In our main results, we use the following configurations. For DNA Translation, we uniformly sample across sequence length 200 to 450 since the effective DNA sequences in the dataset starts from length 200. While data are available for longer sequences, only sample until 450 because they are too challenging for most models. For DNA Transformation, we set the sequence length to 300, which is a reasonably challenging level. A.2 Examples of Transformation Types in DNA Transformation Task The transformation types include: 1) Sequence reversal: reversing the order of the entire sequence (e.g., AGCT →TCGA); 2) Complementation: replacing each base according to a substitution rule (e.g., AGCT →TCGA, using A↔T, C↔G or a randomized complement map); 3) Reverse complementation: performing complementation followed by reversal (e.g., AGCT →AGCT); 4) Segmented transformation: transforming fixedlength segments after a fixed stride (e.g., AGCTTAGCGT →AGCTTGACGT, reversing 2 bases every 3 bases); 5) Fixed base mutation: replacing specific bases with new ones (e.g., AGCT → GGTT, where A→G and C→T). Task Model Initial Final Test DNA Translation Claude-3.5-Haiku 3.87 6.52 5.79 GPT-4.1 9.15 11.37 5.62 Gemini-2.5-Flash 24.37 30.57 10.60 Molecule Design Claude-3.5-Haiku 0.67 0.71 0.73 GPT-4.1 0.77 0.82 0.67 Gemini-2.5-Flash 0.92 0.97 0.72 Table 8: Comparison of initial and final hypothesis quality scores on in-context examples (ICE) alongside corresponding test performance of final hypothesis for various models across DNA Translation (Synth) and Molecule Design tasks. Morgan similarity (scale of 0 to 1) is reported for the Molecule design task. B Explicit Inductive Reasoning Analysis B.1 Hypothesis Quality and Refinement In order to provide a more thorough analysis, we show the computed evaluation score of the generated hypotheses on ICL examples during hypothesis refinement in table 8. For the initial evaluation scores, we report the average score of the best hypothesis generated by the model prior to any refinement. This also serves as an approximate upper bound of the evaluation scores for hypotheses generated by explicit inductive reasoning and self-consistency. We notice that for DNA Translation task, these rules obtained low accuracy on ICL examples. The scores increase at the end of the refinement process, but still remain low. This shows the limitation in inductive reasoning capability of current LLMs. Furthermore, although hypothesis refinement enhances the consistency of rules with ICL examples, we observe that in some cases there remains a substantial gap between performance on ICL examples and generalization to test examples. Future work may focus on enhancing the hypothesis proposal capabilities of LLMs to improve both accuracy on ICL examples and generalization to novel scenarios. We also provided examples of hypothesis generated by GPT-4.1 which can be found table 9 and 10. B.2 Misalignment of Advanced Reasoning Strategies As shown in Table 2, the performance of LLMs does not consistently improve with the application of more fine-grained reasoning strategies. In some cases, advanced strategies even reduce performance. To investigate this phenomenon, we analyzed the recorded reasoning traces, focusing Task Hypothesis DNA Translation Rule 1: Read the DNA from 5'->3' in consecutive codons (3 bp) starting at the first base. Rule 2: Each codon uniquely maps to one amino acid per the in-context examples. Rule 3: Inferred codon table: ATG:M, ATT:M, ATA:N, ATC:Q, GTT:S, GTC:V, GTA:C, GTG:V, TTT:F, TTC:F, TTA:W, TTG:W, CTT:M, CTC:D, CTA:R, CTG:G, TCT:S, TCC:S, TCA:H, TCG:S, CCT:P, CCC:N, CCA:N, CCG:Y, ACT:P, ACC:N, ACA:N, ACG:W, GCT:A, GCC:N, GCA:N, GCG:F, TAT:Y, TAC:P, TAA:*, TAG:*, CAT:Q, CAC:Y, CAA:D, CAG:V, AAT:F, AAC:G, AAA:K, AAG:N, GAT:D, GAC:G, GAA:E, GAG:G, TGT:P, TGC:L, TGA:*, TGG:W, CGT:R, CGC:C, CGA:M, CGG:H, AGT:S, AGC:F, AGA:G, AGG:S, GGT:G, GGC:W, GGA:V, GGG:Q. Rule 4: Translate sequentially until a stop (TAA, TAG, TGA →"*"). Rule 5: Output one-letter amino acid string. Rule 6: Stop at and include first "*". Rule 7: Unseen codons should not be assumed. DNA Table Inference Rule 1: DNA-protein pairs align codons (3 bp) to amino acids. Rule 2: Segment DNA into triplets from 5' and align to protein until "*" or end. Rule 3: Codons aligned to "*" are stop codons. Rule 4: First-codon→'M' pairs are start codons. Rule 5: Aggregate across examples; record all observed mappings. Rule 6: Include only codons seen. Rule 7: Build forward_table from all mappings, excluding stops. Rule 8: start_codons = all first codons mapped to 'M'. Rule 9: stop_codons = all codons aligned to '*'. Rule 10: Amino acids are single-letter codes including "*." DNA Transform Rule 1: Split input into 7-nt segments from 5'; last segment may be shorter. Rule 2: Reverse each 7-nt segment. Rule 3: Concatenate reversed segments to form output. Table 9: Hypotheses Generated by GPT-4.1 for the DNA tasks on chemistry-related tasks. In the molecule captioning task, Self-Consistency occasionally produced lower scores than the Implicit Inductive Reasoning baseline. While this strategy generates multiple hypotheses and applies them to derive answers, the resulting outputs were often fragmented or overly technical. For example, instead of producing full descriptive captions, as required by the task, the model frequently produced structural abbreviations or linkage names such as beta-D-Galp (1→4) betaD-GlcpNAc (which are often part of the rule representations extracted by the model), omitting information about overall structure or functional roles. This indicates a misalignment between rule-based derivations and the task's requirement for holistic descriptions. In the reaction prediction task, Hypothesis Refinement also failed to deliver consistent improvements. Our analysis suggests that this was due to refined rules were not always effectively applied to the examples, and the selection of the "best" hypothesis depended solely on an automatic evaluator of prediction accuracy, which does not necessarily capture scientific plausibility. Overall, these results suggest that the limitations of advanced reasoning strategies stem less from insufficient domain knowledge in base models than from structural mismatches between the strategies and the nuanced demands of the tasks. C Experiment Details C.1 Implementation Details We run our experiments using API-based closedsource models, specifically claude-3-5-haiku20241022, gpt-4.1-2025-04-14, and gemini-2.5flash-preview-04-17. We implement our inference strategies in the OpenCompass framework. This allows us to perform inference in parallel at high rates. The explicit inductive reasoning is implemented via one-pass decoding, generating the hypothesis and applying it to the test example in one API call. Self-consistency is implemented by sampling multiple times using the same process as explicit inductive reasoning. For hypothesis refinement, we sample the hypothesis using the same general prompt in all tasks, except for DNA Translation where we ask the model to provide the specific codon-to-amino acid so that the hypothesis can be properly refined. For tasks in which the Task Hypothesis Molecule Design Rule 1: Identify required functional groups (e.g., diamine, aldehyde, etc.). Rule 2: Map biological role to known scaffolds (e.g., antineoplastic →stilbene). Rule 3: Choose core heterocycle per "derives from" (e.g., triazine). Rule 4: Decorate core with substituents to satisfy function and activity. Rule 5: Respect stereochemistry (e.g., [C@H] per natural enantiomer). Rule 6: For natural products, replicate known SMILES closely. Rule 7: Attach alkyl/aryl groups at correct positions. Rule 8: Output valid SMILES with rings, heteroatoms, charges. Molecule Caption Rule 1: Identify core ergot alkaloid and name (e.g., ergotaman). Rule 2: Describe substituents and positions (e.g., 12'-hydroxy). Rule 3: Note stereochemistry if differentiating isomers. Rule 4: Mention salts/derivatives (e.g., methanesulfonic acid salt). Rule 5: State biological origin or role if recognizable. Rule 6: Use "derives from" for parent relationships. Rule 7: Note naming conventions or historical context if relevant. Rule 8: Separate distinct features into clear sentences. Reaction Prediction Rule 1: Target N-heterocycle fused to benzene undergoes nucleophilic attack. Rule 2: Organometallics ([Li]CCCC, [H-]) add to carbonyl or halide. Rule 3: Bases ([NH+ 4 ], [OH-]) deprotonate or hydrolyze esters →amides/acids. Rule 4: Leaving groups replaced by nucleophiles forming C-X or C-C. Rule 5: Ester + nucleophile -> amide/ether. Rule 6: Most nucleophilic reagent reacts with most electrophilic center. Rule 7: Ignore spectator ions in final product. Rule 8: Grignard addition -> alcohol at addition site. Rule 9: Reductions ([H-]) convert carbonyls →alcohols/amines. Rule 10: On heteroaryl halide, nucleophile replaces halide on ring. Rule 11: Ethers/amides attach to aromatic systems via substitution/acylation. Rule 12: With both esters and amines, amide formation is preferred. Name Prediction Rule 1: Count all C atoms (including branches/rings). Rule 2: Count H via implicit valence rules. Rule 3: Count N, O, S, Si, halogens from SMILES. Rule 4: Include implicit Hs in aromatic rings per standard. Rule 5: Integrate substituent atoms without double-counting. Rule 6: Adjust H count for double/triple bonds. Rule 7: Write formula as C, H, then others alphabetically. Rule 8: Expand grouped atoms (e.g., O[Si](C)(C)C). Rule 9: Sum counts; check branching consistency. Rule 10: Format as [Element][count]... (e.g., C6H6O). Table 10: Hypotheses Generated by GPT-4.1 for the Chemistry tasks hypothesis can be translated into Python code, we prompt an LLM to generate the code. Otherwise, we prompt the LLM to apply a hypothesis to all in-context example inputs and do this to all the generated hypothesis. We used AI assistants to polish some of the text in this paper. C.2 Prompts Molecule Captioning As discussed in Section3.3, molecule captioning is an open-ended generation task, for which existing evaluations rely primarily on surface-level matching. To address this limitation, we design a dedicated prompt with fine-grained scoring criteria and employ an LLM to serve as the evaluator. One-pass Self-Consistency To reduce the number of API calls and improve the efficiency of selfconsistency, we design the prompt so that the model performs both rule induction and application to the test input within a single invocation. Universal Majority Voting with Self-Consistency Given that the outputs of the chemistry and biology tasks in SIRBench-V1 are typically long and semantically complicated, basic majority voting mechanism often fails to identify a representative response, thereby diminishing the effectiveness of self-consistency. To address this, we adopt the universal self-consistency strategy(Chen et al., 2023), selecting the most semantically consistent response to form the final answer. Hypothesis Refinement We provide the main prompts used in the hypothesis refinement process, including Hypothesis Induction, Hypothesis Application, Hypothesis Refinement, and Final Hypothesis Application. D Complete Results on Chemistry Tasks We provide the full results on Chemistry Tasks that reports all the metrics in table 11, table 13, and table 12. Task Metric Implicit Inductive Reasoning Explicit Inductive Reasoning SelfConsistency Hypothesis Refinement Molecule Design exact_match 0.17 0.23 0.23 0.27 bleu 0.41 0.36 0.19 0.71 levenshtein (↓) 70.87 84.70 173.47 26.30 validity 0.70 0.77 0.80 0.70 maccs_sims 0.81 0.75 0.84 0.89 rdk_sims 0.81 0.69 0.69 0.76 morgan_sims 0.62 0.64 0.66 0.73 fcd (↓) 12.82 13.87 12.46 13.22 Molecule Caption bleu2 0.20 0.22 0.39 0.24 bleu4 0.14 0.15 0.29 0.17 rouge_1 0.33 0.24 0.48 0.40 rouge_2 0.18 0.12 0.29 0.23 rouge_l 0.25 0.19 0.38 0.31 meteor_score 0.39 0.23 0.44 0.42 LLM as judge 67.70 54.00 69.70 72.70 Reaction Prediction accuracy 44.44 19.23 20.83 28.00 smiles2formula accuracy 0.00 0.00 0.00 0.00 smiles2iupac accuracy 0.00 0.00 0.00 0.00 iupac2smiles accuracy 14.29 4.55 0.00 4.17 iupac2formula accuracy 0.00 6.67 3.33 3.33 Table 11: Performance of the Claude-3.5-Haiku on Chemistry Tasks Task Metric Implicit Inductive Reasoning Explicit Inductive Reasoning SelfConsistency Hypothesis Refinement Molecule Design exact_match 0.30 0.20 0.20 0.23 bleu 0.75 0.71 0.70 0.75 levenshtein (↓) 25.37 27.93 26.37 24.03 validity 0.87 1.00 0.93 0.93 maccs_sims 0.92 0.87 0.91 0.87 rdk_sims 0.80 0.74 0.82 0.78 morgan_sims 0.75 0.69 0.72 0.67 fcd (↓) 8.16 7.08 7.97 7.43 Molecule Caption bleu2 0.42 0.49 0.49 0.20 bleu4 0.32 0.38 0.39 0.15 rouge_1 0.55 0.55 0.57 0.38 rouge_2 0.36 0.38 0.39 0.24 rouge_l 0.44 0.46 0.48 0.31 meteor_score 0.57 0.52 0.54 0.48 LLM as judge 66.30 59.00 65.70 66.30 Reaction Prediction accuracy 22.22 17.86 25.00 32.14 smiles2formula accuracy 13.33 6.67 10.00 10.00 smiles2iupac accuracy 0.00 0.00 0.00 0.00 iupac2smiles accuracy 17.39 4.35 5.00 13.04 iupac2formula accuracy 23.33 13.33 23.33 23.33 Table 12: Performance of the GPT-4.1 on Chemistry Tasks Task Metric Implicit Inductive Reasoning Explicit Inductive Reasoning SelfConsistency Hypothesis Refinement Molecule Design exact_match 0.33 0.27 0.27 0.20 bleu 0.73 0.79 0.79 0.76 levenshtein (↓) 27.90 25.27 22.50 26.67 validity 0.80 0.77 0.90 0.73 maccs_sims 0.95 0.94 0.94 0.81 rdk_sims 0.89 0.86 0.87 0.82 morgan_sims 0.85 0.77 0.80 0.72 fcd (↓) 8.19 8.89 6.26 10.56 Molecule Caption bleu2 0.49 0.54 0.51 0.42 bleu4 0.38 0.43 0.41 0.33 rouge_1 0.57 0.61 0.61 0.52 rouge_2 0.38 0.42 0.41 0.35 rouge_l 0.47 0.50 0.49 0.43 meteor_score 0.55 0.59 0.59 0.52 LLM as judge 63.30 67.70 70.00 65.70 Reaction Prediction accuracy 54.17 34.78 39.29 32.14 smiles2formula accuracy 30.00 20.00 30.00 16.67 smiles2iupac accuracy 0.00 0.00 3.33 0.00 iupac2smiles accuracy 20.00 40.00 53.85 52.94 iupac2formula accuracy 70.00 60.00 73.33 66.67 Table 13: Performance of the Gemini-2.5-Flash on Chemistry Tasks LLM-as-Judge Evaluation of Molecule Captioning: You are an expert molecular biologist. Below is a SMILES string representing a molecule: {smiles} Here is a reference description of the molecule: {gt} Here is a predicted description of the same molecule: {pred} Your task is to evaluate the predicted description only based on its scientific quality compared to the reference. You must assign a score from 1 to 10 based on the following criteria: • Score 10: Nearly perfect - scientifically precise, complete, and fluent. Matches all key aspects of the reference (e.g., functional groups, chemical class, derivation, roles). • Score 8-9: Very good - minor omissions or slight rewording, but the core structure-level and functional meaning is intact. • Score 6-7: Reasonable - generally correct but may lack specific details (e.g., derivation or one functional role). Possibly vague phrasing. • Score 4-5: Partial - captures the general category or one function but omits multiple important details or shows misunderstanding in phrasing. • Score 2-3: Poor - vague, generic, or scientifically weak. May refer to the wrong compound type or confuse structural features. • Score 1: Completely incorrect or irrelevant. Only output a single line in the following format: Score: [1-10] One-pass Self-Consistency: Below is a full prompt about the reasoning task, which includes the ICL examples and a new test case. Your task is: 1. Read the full prompt to understand the task and identify: 1) the example input-output pairs 2) the specific input question to answer. 2. Analyze these example pairs and generate a series of rules that explains how each input is transformed to its corresponding output. 3. Then, apply those rules to the final test question and output the answer. 4. Return your answer in the following format: Rule 1: ... Rule 2: ... Rule 3: ... ... {{your answer}} Full prompt: {full_prompt} Universal Majority Voting with Self-Consistency: You are given a reasoning task prompt and multiple candidate responses to the question in that prompt. Your task is: 1. Read the full prompt carefully to understand the question being asked. 2. Examine all the candidate responses and determine whether any of them form a majority consensus. • A majority exists if any single response appears more than any other (either verbatim or semantically equivalent). • In case of a tie (e.g., all responses differ or two responses appear with equal frequency), consider that no majority exists. 3. If a majority exists, return that response as the final answer. 4. If no majority exists, then select the most reasonable and task-appropriate response based on the prompt. Candidate responses: {responses} Full prompt: {full_prompt} Return your final answer using exactly the following format: majority_found: [yes or no] selected_response: {full response content} Example: majority_found: yes selected_response: This is the most common (or semantically equivalent) response and correctly answers the question. Hypothesis Induction Prompt Below is a full prompt about the reasoning task, which includes the ICL examples that you should learn from. Your task is: 1. Read the full prompt to understand the task and identify the example input-output pairs. 2. Analyze these example pairs and generate a series of rules that explains how each input is transformed to its corresponding output. 3. Provide as much detail as possible in the rules, such as elaborating on the specific mapping.{note} 4. Return your rules in the following format (each rule on its own line): Rule 1: ... Rule 2: ... Rule 3: ... ... Full prompt: {full_prompt} Hypothesis Application Prompt (General) Task Description: task_description Please apply the given hypothesis to the given list of inputs. Ensure that you provide the actual output for each input. Do not give a program, partial output, or placeholder. Hypothesis: hypothesis Input: icl_in Format your output as follows: Output 1: ... Output 2: ... ... DNA Table Prompt Below is a full prompt about the reasoning task, which includes the question that you should give the corresponding answer. Your task is: 1. Read the full prompt to understand the task and identify the specific input question to answer. 2. Based on your understanding of the given rules, generate the corresponding output for the question. Rules: hypothesis Full prompt: x Enclose your answer with tags. DNA Translation/Transformation as Python Code Prompt Convert the following hypothesis into a Python function called apply that takes a string input and returns the transformed output. The function should implement the rules described in the hypothesis. Make sure to handle all the transformations correctly. Task Description: self.task_description Hypothesis: hypothesis Your function should follow this template: def apply(input_str): # Implementation based on the hypothesis rules # ... return result Return ONLY the Python code without any explanation or markdown formatting. Hypothesis Refinement Prompt You are given a candidate hypothesis that attempts to explain how each input is transformed into its output. A hypothesis consists of rules that explain how the inputs are mapped to the outputs. Your goal is to revise this hypothesis so it fully accounts for any discrepancies. You may add new rules, modify existing ones, or remove inaccurate ones. You can also propose a completely new hypothesis. Context: self.task_description Current Hypothesis: hypothesis Input: icl_in Model Output: generated_output Expected Output: expected_output Steps: 1. List the exact differences between Model Output and Expected Output. 2. For each difference, identify which existing rule (if any) fails to cover it. 3. Revise existing rules or introduce new rules to fix these gaps. 4. Ensure the rules clearly state how the input is mapped into output in a detailed manner.{note} Output only the refined hypothesis-do not solve the original task. Format your output as follows: Rule 1: ... Rule 2: ... Rule 3: ... ... Final Hypothesis Application Prompt Below is a full prompt about the reasoning task, which includes the question that you should give the corresponding answer. Your task is: 1. Read the full prompt to understand the task and identify the specific input question to answer. 2. Based on your understanding of the given rules, generate the corresponding output for the question. Rules: hypothesis Full prompt: x Enclose your answer with tags.
|
2509.16228
|
Technische Universität Berlin
Electrical Engineering and Computer Science
Models and Theory of Distributed Systems
Compositional Interface Refinement
Through Subtyping
in Probabilistic Session Types
Masterarbeit
von Paula Blechschmidt
zur Erlangung des Grades „Master of Science“ (M. Sc.)
im Studiengang Computer Science (Informatik)
Erstgutachter:
Prof. Dr. Uwe Nestmann
Zweitgutachterin:
Prof. Dr. Kirstin Peters
August 2025
arXiv:2509.16228v1 [cs.LO] 12 Sep 2025
Usage of Artificial Intelligence
The large language model ChatGPT, chatgpt.com, was sparingly used throughout this
thesis for general language help concerning tone, grammar, and vocabulary and as a
guide finding literature. All usage is contained in one publicly accessible chat, whose
link is found in the bibliography [OpenAI, 2025].
3
Abstract
Multiparty session types (MPST) are a robust typing framework that ensures safe and
deadlock-free communication within distributed protocols. As these protocols grow in
complexity, compositional modelling becomes increasingly important to scalably verify
their behaviour. Therefore, we propose using a refinement-based subtyping approach
to facilitate the modularity needed for compositional verification. Subtyping in classic
MPST systems inherently represents a notion of refinement: A larger type may be safely
substituted by a smaller, refined type. The aim of this thesis is to significantly extend
this concept and discover just how flexible and expressive subtyping relations can be.
We present a probabilistic extension for MPST, the probabilistic mixed choice multi-
party session π-calculus, with a novel, flexible subtyping system which allows one channel
(the interface) to be substituted by several channels (the refinement). Our subtyping
is remarkably expressive; any selection of well-typed channels as the refinement has a
corresponding interface in a single channel type. To facilitate this generality, we base our
system on a powerful variant of MPST, mixed choice multiparty session types (MCMP),
which offers greater flexibility in communication choices.
We establish soundness of the probabilistic mixed choice multiparty session system
through several key results. In particular, we prove subject reduction, error-freedom
and deadlock-freedom, ensuring that well-typed processes are well-behaved.
This work demonstrates subtyping to possess great previously untapped potential for
stepwise refinement and compositional verification. The presented framework enables
highly expressive, compositional, and verifiable modelling of probabilistic distributed
communication.
A promising avenue for further research is imperfect refinement, a
logical extension of the system which leverages the strengths of the probabilistic setting:
We can allow the refinement to be deadlock-free with bounded probability instead of in
100% of cases.
5
Contents
1
Introduction
11
1.1
Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13
2
Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
15
2.1
Process Syntax
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16
2.2
Operational Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19
2.3
Typing System
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
23
2.3.1
Type Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
23
2.3.2
Standard Subtyping
. . . . . . . . . . . . . . . . . . . . . . . . .
25
2.3.3
Typing Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
29
3
Subtyping with Refinement
33
3.1
Single-Channel Subtyping
. . . . . . . . . . . . . . . . . . . . . . . . . .
34
3.2
Adapted Typing System . . . . . . . . . . . . . . . . . . . . . . . . . . .
38
3.3
Multi-Channel Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . . .
39
4
Properties
47
4.1
Properties of Types and Subtyping
. . . . . . . . . . . . . . . . . . . . .
47
4.1.1
Subtyping is a Preorder
. . . . . . . . . . . . . . . . . . . . . . .
48
4.1.2
Safety and Deadlock-Freedom . . . . . . . . . . . . . . . . . . . .
50
4.1.3
Interface Existence . . . . . . . . . . . . . . . . . . . . . . . . . .
57
4.2
Properties of Typed Processes . . . . . . . . . . . . . . . . . . . . . . . .
58
4.2.1
Subject Reduction
. . . . . . . . . . . . . . . . . . . . . . . . . .
59
Preliminary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . .
59
Theorem Statement and Proof . . . . . . . . . . . . . . . . . . . .
66
4.2.2
Error- and Deadlock-Freedom . . . . . . . . . . . . . . . . . . . .
69
5
Conclusion and Future Work
73
7
List of Examples
1
Running Example
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12
2
Running Example—Syntax
. . . . . . . . . . . . . . . . . . . . . . . . . . .
18
3
Running Example—Semantics . . . . . . . . . . . . . . . . . . . . . . . . . .
20
4
Refinement into Multiple Reduction Sequences
. . . . . . . . . . . . . . . .
22
5
Prefixes: Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
26
6
Prefixes: Subtyping
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
28
7
Standard Subtyping Derivation . . . . . . . . . . . . . . . . . . . . . . . . .
29
8
Running Example—Types . . . . . . . . . . . . . . . . . . . . . . . . . . . .
31
9
Single-Channel Subtyping Derivation . . . . . . . . . . . . . . . . . . . . . .
35
10 Running Example—Subtyping
. . . . . . . . . . . . . . . . . . . . . . . . .
44
11 Running Example—Subtyping Derivation . . . . . . . . . . . . . . . . . . .
45
12 Subtyping is not Antisymmetric . . . . . . . . . . . . . . . . . . . . . . . . .
50
13 Deadlock Freedom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
69
9
1 Introduction
In general, the term interface refers to the observable—or externally accessible—aspects
of a component, module, or system; the exact representation of an interface inherently
depends on the formalism at hand. In many cases, interfaces appear at various levels
of abstraction and granularity, which may be developed through stepwise refinement
starting from some high-level description and adding details with each refinement step;
reasonable notions of refinement relations should obviously be transitive. Ideally, refine-
ment can be done in a compositional manner, where single components may be refined
independently, while yielding a refinement of the full system. There is a large body of
research on notions of refinement and accompanying methods, as it is generally quite
specific to the formalism at hand. Consequently, there is no real consensus on definitions
and interpretations, not even on terminology.
State-based representations of systems are likely the class of formalisms with the
most diverse contributions. There, refinement relations typically compare behaviours
(in its simplest form traces), often expressed as a mapping from the refined version
back to its initial specification [Abadi and Lamport, 1988]. The concepts of (forward
and/or backward) simulation [Lynch and Vaandrager, 1995], which roughly correspond
to completeness and soundness requirements of an implementation w.r.t. its specification,
are possibly even more wide-spread and come in many flavors.
In multiparty session types (MPST) [Yoshida and Gheri, 2020], where our work is sit-
uated, the interface of a process captures the type of the session it participates in, which
describes the sequence of input/output actions (messages), along with their types, direc-
tion, and order of communication. Here, often enough, the notion of channel comprises
a particular session of a protocol among several participants, including all of the roles
that act within the respective protocol to achieve a common goal [Honda et al., 1998,
2008; Scalas and Yoshida, 2019]. If an interface is represented as a type, then refinement
is naturally and best realized as a subtyping relation.
In this thesis, interfaces are (sets of) channels, which are themselves constructed
from sessions and the roles used within them. As usual these channels are given as
implementation in a session calculus and as specification in local contexts, where a local
context ∆is a collection of channel names and their respective channel types. The type
system provides rules to check the implementation against its specification and ensures
safety and deadlock-freedom (among other properties that could be defined).
As part of our type system, we introduce a novel subtyping relation ≤P that allows to
also verify probabilities P and specifies a refinement relation on local contexts. Starting
from ∆, if ∆′ ≤1 ∆then the refinment ∆′ refines the interface ∆by distributing some
behaviour (on single channels) on sets of interacting channels.
Starting from ∆′, if
∆′ ≤1 ∆then the interface ∆provides an abstract specification of the refinment ∆′,
11
1 Introduction
where abstract means from an external point of view abstracting from interactions within
∆′.
To gain some initial understanding, we highlight said dynamic of a refinement dis-
tributing the behaviour of a single channel, the interface, to a set of interacting channels
with an example. We will keep revisiting this setting throughout the thesis as we build
up the theory of our system. We use the symbol ⃝to indicate the end of an example.
Example 1 (Running Example). Consider a plaintiff bringing a lawsuit to court, which,
after possibly taking witness testimony, announces a verdict. To initiate the protocol,
interface
p
c
w
the plaintiff sends a message to the court, announcing
the lawsuit (lws). Upon receiving this, the court decides
with a total 70% chance that it has enough information to
announce the verdict to the plaintiff. Of those 70%, half
of the time, i.e., with a probability of 35%, the court will find the defendant guilty (glt)
and else not guilty. Otherwise, with 30%, the court requests (rqs) a witness to testify.
Upon receiving this request, the witness sends a message to the court representing the
statement (st) after which the court announces the verdict not guilty to the plaintiff.
In case no witness is called, the court will send them a message releasing (rls) them
and the protocol terminates.
This protocol has an issue: The defendant on whom the judgement is passed is not
represented as a participant. To obtain its verdict, the defendant could implicitly be as-
sumed part of the court. But then the court has a clear conflict of interest: they hold the
power to pass judgement on themselves. A solution is splitting the court, thereby divid-
ing the power. Hence, consider the court to be an interface that must be refined into two
separate participants: defendant and judge. The plaintiff is the same as before but now
refinement
p
j
d
w
interacts with a judge instead of the court. This judge,
after receiving the lawsuit, waits for a message from the
defendant.
The defendant will send a weak (wk) and
strong (str) defence with a 50% and 20% likelihood, re-
spectively. Otherwise, it requests to call upon a witness, in
30% of cases. If the defence is strong, the verdict is always
not guilty. With 70% a weak defence results in guilty and with 30% in not guilty. If no
statement is given and a witness is requested, the judge receives the witness testimony,
and decides as before.
⃝
The target audience for this thesis is familiar with transition systems and type sys-
tems, including π-calculus, but we do not assume a close familiarity with (multiparty)
session types. We begin by introducing our calculus in Chapter 2, i.e., its syntax, op-
erational semantics, and typing system. The latter includes basic subtyping which does
not include any of our novel refinement ideas, to ease the reader into the system as a
whole. Conceptually, this chapter can be considered to be our preliminaries. Afterwards,
in Chapter 3, we move on to the core of this work, the refinement-based multi-channel
subtyping. We will first build intuition for the novel subtyping system by providing an
intermediary set of subtyping rules which bridge the gap between the functionality of
12
1.1 Related Work
basic subtyping and the complex form of the advanced subtyping. Only then do we
finally present the novel subtyping rules. The chapter contains plenty of examples to
guide the reader through the, at times, convoluted syntax. Additionally, we show that
the intermediary subtyping subsumes the basic one and is in turn subsumed itself by
the novel, advanced one. In Chapter 4, we then prove several essential properties of
the presented probabilistic mixed choice multiparty session π-calculus, its type system,
and multi-channel subtyping. These include subject reduction (§ 4.2.1) and deadlock-
freedom (§ 4.2.2), both standard in MPST. Additionally, we prove the flexibility of our
subtyping: Any selection of well-typed, safe, and deadlock-free channels has a corre-
sponding interface in a single channel type (§ 4.1.3).
Large parts of the work presented in this thesis have also been submitted as a paper to
the 22nd International Colloquium on Theoretical Aspects of Computing (ICTAC 2025)
[Blechschmidt et al.].
1.1 Related Work
The history of multiparty session types begins with the inception of the process calculus
CCS (calculus of communication systems) in 1980 [Milner, 1980, 1989]. Extended with
name-passing, this framework later became the now well-known π-calculus [Milner, 1993,
1999].
Based on this, the original binary session type theory was conceived in the
1990s [Honda, 1993; Takeuchi et al., 1994; Honda et al., 1998]. Inspired also by linear
logic [Girard, 1987], session type theory can handle binary communication in which the
two communication partners essentially act as duals of each other. With the goal of
allowing more than two communicating partners, session types were further developed
into the classic MPST framework in [Honda et al., 2008] (asynchronous) and [Bejleri and
Yoshida, 2008] (synchronous). The slightly simplified MPST system of [Bettini et al.,
2008], introduced shortly afterwards, gained wide acceptance as the canonical version of
MPST. (Multiparty) session types have been extensively researched in many contexts
since then, see surveys of the field [Hüttel et al., 2016; Gay and Ravara, 2017]. The
duality-based approach of binary sessions carried over into multiparty sessions, which
was overhauled by [Scalas and Yoshida, 2019], becoming the new standard. Hence, we,
too, base our framework on theirs.
Our system is additionally heavily influenced by [Peters and Yoshida, 2024], who
presented a mixed choice multiparty (MCMP) calculus (based on [Casal et al., 2022]).
We integrate their approach into our system. As shown in [Peters and Yoshida, 2022,
2024], MCMP is strictly more expressive than both standard MPST and the system of
[Casal et al., 2022].
There is a significant history of integrating probabilities into transition systems, as
early as 1991 with work on probabilistic bisimulation [Larsen and Skou, 1991]. A number
of process calculi have been reimagined with probabilities in literature, including prob-
abilistic CCS [Hansson, 1991] and probabilistic π-calculus [Herescu and Palamidessi,
2000]. Binary sessions have seen probabilistic extension in [Inverso et al., 2020]. We
take inspiration mostly from [Aman and Ciobanu, 2019], who extended the MPST the-
13
1 Introduction
ory of [Honda et al., 2008, 2016].
Subtyping has been a fundamental concept in these systems since the introduction
of subtyping for π-calculus in [Pierce and Sangiorgi, 1996].
But even earlier, there
were precursors to subtyping for CCS in, for example, simulation [Milner, 1980; Lynch
and Vaandrager, 1995] and testing preorders capturing safe replacement [Nicola and
Hennessy, 1984]. Later, in [Gay and Hole, 2005], this concept has been adapted for
the subsequent binary session theory. Modern MPST subtyping is largely based on the
framework of [Dezani-Ciancaglini et al., 2015]. Our subtyping fundamentally builds on
[Peters and Yoshida, 2024] (itself based on [Chen et al., 2017]), in particular for their
handling of mixed choices, though the additional complexity of our multi-channel system
makes the connection less immediately apparent.
Another approach to integrate refinement in subtyping, though based on a very dif-
ferent motivation, can be found in [Horne, 2020]. In comparison, our subtyping is much
more flexible, as we will prove in Section 4.1.3. For instance, we do not exclude races,
i.e., two senders that want to communicate with the same receiver at the same time.
14
2 Preliminaries: The Probabilistic
Mixed Choice MPST π-Calculus
After giving a small introduction to multiparty session types (MPST), this chapter
introduces the process syntax (§ 2.1), the operational semantics (§ 2.2) and the typing
system of the probabilistic mixed choice multiparty session π-calculus.
Given the complexity and breadth of the field, we will refrain from explaining MPST in
full technicality, neither will we expect detailed background knowledge. For the curious
reader, we recommend [Yoshida and Gheri, 2020] as a starting point.
In general terms, MPST is a formal framework for specification and verification of
communication protocols involving more than two participants. Participants are mod-
elled as processes which are formed by a certain grammar, the process syntax. The core
functionality of these processes is communication: Sending and receiving messages to
and from other processes. These messages are sent and received via channels. A channel
comprises a common name for a session both participants are in, and a role, which is
an identifier within said session. In communication, by using the channel and the syn-
tax of the sending action, it is therefore specified which role is sending what message
to which other role, where both communication partners must share the same session.
Interactions happen in synchronous steps, both communication partners “use up” the
action at the same time: The process makes a reduction step (asynchronous MPST
exists too, see Honda et al. [2008], we use synchronous MPST). To verify or specify
interactions between processes, channels are assigned session types using typing rules.
Processes built from these types are then said to be justified by them. A key property
of these systems, called subject reduction, is that a process justified by some types will
be justified by some other types once it reduces. In other words, once we establish a
process to be well-behaved in some sense, we know that no matter which steps it per-
forms, well-behaved-ness remains. Furthermore by verifying that a collection of types
fulfils certain properties, we can infer that processes justified by that collection also fulfil
them. One of the most common properties of interest is deadlock-freedom, the guarantee
that whenever no more interaction can occur, all participants are finished. In line with
modern MPST theory, our system does not have projection (of global types onto local
types) and thus much of the understanding for general π-calculus is transferable to the
work of this thesis.
At its core, our system is a standard multiparty session calculus (as in [Scalas and
Yoshida, 2019, 2018]) without session initialisation and delegation, i.e., without transfer-
ring access to certain channels across participants. We extend this by the mixed choices
of [Peters and Yoshida, 2024] (based on [Milner, 1993; Milner et al., 1992]). Classically,
15
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
when a participant seeks to communicate, it will be able to do so with only exactly one
other participant and only in one mode (sending or receiving). Mixed choices allow the
participant to offer sending actions and receiving actions at the same time, to and from
arbitrarily different participants. Mixed choice calculi have increased expressiveness and
flexibility. Finally, our system includes probabilities in the outputs (as in [Aman and
Ciobanu, 2019; Inverso et al., 2020], based on [Herescu and Palamidessi, 2000; Varacca
and Yoshida, 2007]).
2.1 Process Syntax
We begin by defining which shapes our processes may assume. After explaining the def-
inition in detail, we follow by formalizing the courthouse example from the introduction
using this syntax.
Definition 2.1 (Process Syntax). The syntax of the probabilistic mixed choice multiparty
session π-calculus is inductively given by:
v ::=
x, y, z,.. | 1, 2,.. | ⊤, ⊥
(variables, numbers, booleans)
c ::=
s[r]
(session with set of roles)
P, Q, Pi ::=
0 | P | Q | (νs)P
(inaction, parallel composition, restriction)
| if v then P else Q
(conditional)
| def D in P | X⟨ev, ec⟩
(process definition, process call)
| c
X
i∈I
Mi
(mixed choice on c with finite I ̸= ∅)
Mi ::=
p←q ? l(x).P
(p receives message l(x) from q)
|
M
i∈I
Pi ▶Ni.Pi
(probabilistic choice with finite I ̸= ∅)
Ni ::=
p→q ! l⟨v⟩| τ
(p sends message l⟨v⟩to q, internal action)
D ::=
X(ex; ec)
def
= P
(declaration of process constant X)
We will first explain the syntax. Values v are variables, numbers, or booleans. Note
that the inclusion of more complex values, such as floating point numbers, would be
a straightforward, orthogonal extension of the system.
A channel c = s[r] specifies
a session s being used as a communication endpoint by the roles r to interact with
other roles within that session. The term “participant” is overloaded in MPST, referring
both to a participant in a communication protocol, and a participant in a session. In
our system, a single process may use several different channels s[r], possibly even from
different sessions. Thus, to avoid confusion, we will refer to r only as “roles”. In standard
session typing, instead of sets, single roles are used. The reason why we chose these role
sets will become apparent when the multi-channel subtyping is introduced later.
16
2.1 Process Syntax
Inaction 0 is the representation of a terminated process, we sometimes omit trailing
0. Composition P | Q allows the processes P and Q to run in parallel. We often use
the previously mentioned notion of “participant” to denote parallel processes. Parallelly
composed processes, i.e., different participants may interact. Restriction (νs)P encap-
sulates a session s within the scope of P. The conditional if v then P else Q behaves
as P if v is true and else as Q. Process definition def D in P models recursion in
our calculus through recursive process calls. Summands of mixed choices c P
i∈I Mi are
inputs p←q ? l(x).P and output sums Pi ▶Ni.Pi. They differ from [Peters and Yoshida,
2024] in the fact that our output sums are probabilistic. The channel c in front of the
sum specifies the session s within which the entire mixed choice is taking place. In
inputs p←q ? l(x).P, role p receives a message with label l from role q. The trans-
mitted payload will be stored in variable x. After receiving the input, the participant
will continue as the process P.
Classically, a sum of inputs is called branching, as
each different input a participant receives, allows them to continue differently. In other
words, their behaviour branches depending on the communication partner(’s choices).
Probabilistic output choices L
i∈I Pi ▶Ni.Pi, or probabilistic sums, function similarly to
typical non-probabilistic output choices. Where classically one output action is chosen
nondeterministically from a sum of outputs (called selection), in our system one prob-
abilistic output sum is chosen nondeterministically from several occurring in a mixed
choice. Clearly differentiating the nondeterministic choices from probabilistic ones is
key for probabilistic systems (see [Segala and Lynch, 1994]). Then the output action
is selected according to the probabilities Pi ∈R. These two selection steps, however,
functionally occur simultaneously, not successively. Output actions Ni may either be
p→q ! l⟨v⟩, where role p sends a message with label l and payload v to role q, or τ,
an internal action. Afterwards, the participant continues as process Pi. Declarations D
provide definitions of processes that may be invoked by a process call X⟨ev, ec⟩with some
values ev, where ec lists the channels used by the process that is declared by X.
Next, let us specify the binders. We adopt a form of Barendregt convention: We
assume that alpha-conversion is implicitly applied to ensure that all bound variables are
pairwise distinct and different from free ones. Restriction (νs)P binds session s in P.
Declaration X(ex; ec)
def
= P binds process constants X and variables ex in P. The vector ec
lists the channels used in P. Message receiving p←q ? l(x).P binds the variable x in P.
All other occurrences of process constants, variables, and sessions are free. Let fs(P)
and fs(D) denote the set of free sessions in P and D, respectively. Let dpv(D) be the
set of process variables declared in D and let fpv(P) and fpv(D) denote the set of free
process variables in P and D. Substitution P[x1, . . . , xn 7→v1, . . . , vn] simultaneously
replaces all free occurrences of xi by vi in P, possibly applying alpha-conversion to avoid
capture. Substitution P[ex 7→ev] is undefined if |ex| ̸= |ev|.
Finally, for legibility and convenience, we introduce some shorthands and abbrevia-
tions. We abbreviate singleton sums c P
i∈{1} M as c ▷M and L
i∈{1} P ▶N.P as P ▶N.P.
We sometimes omit the probability 1, i.e., abbreviate outputs 1 ▶N.P as N.P. When-
ever I ∩J = ∅and I ∪J ̸= ∅, we allow to split a mixed choice c P
i∈I∪J Mi into
c ▷P
i∈I Mi +P
j∈J Mj and we allow to split a probabilistic choice L
i∈I∪J Pi ▶Ni.Pi into
17
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
L
i∈I Pi ▶Ni.Pi ⊕L
j∈J Pj ▶Nj.Pj. In particular, we often split sums into a single sum-
mand and the rest of the sum, i.e., c P
i∈I Mi becomes c ▷Mj +M with M = c P
i∈I\{j} Mi
and L
i∈I Pi ▶Ni.Pi becomes Pj ▶Nj.Pj ⊕N with N = L
i∈I\{j} Pi ▶Ni.Pi. To simplify
the reduction rules in Definition 2.3, we allow M and N to be empty mixed/probabilistic
choices.
We allow to unify and split similar summands of probabilistic choices, i.e.,
Pi ▶N.P ⊕Pj ▶N.P = (Pi + Pj) ▶N.P.
Let us return to the running example to show the protocol as a process in our syntax.
Example 2 (Running Example—Syntax). The unrefined interface of the courthouse
system in Example 1 can be implemented as a process PI = (νs)(Pp | Pc | Pw), whose
components represent the participants in the described protocol. The plaintiff is Pp, the
court is Pc, and the witness is Pw. The implementations of these processes are given as
Pp = s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x)
Pc = s[c] ▷j←p ? lws().s[c] ▷
0.35 ▶j→p ! glt⟨⊤⟩.s[j] ▷j→w ! rls⟨⟩
⊕0.35 ▶j→p ! glt⟨⊥⟩.s[j] ▷j→w ! rls⟨⟩
⊕0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩
Pw = s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+
w←j ? rls()
where the role set of the channel s[c] used by the court process Pc is c = {j, d}, as it
embodies both the judge and defendant. Avid readers might notice that the defendant
is not found within the actions of Pc. As, however, this participant gets refined into two,
both roles are already found in the role set. Syntactically, this does not pose a problem;
unused roles are indeed allowed to occur in role sets, as we will see later on.
The refined system can be implemented as PR = (νs)(Pp | Pj | Pd | Pw). The pro-
cesses Pp and Pw stay the same and we have:
Prls = s[c] ▷j→w ! rls⟨⟩
Pd = s[d] ▷
0.5 ▶d→j ! wk⟨⟩
⊕
0.2 ▶d→j ! str⟨⟩
⊕
0.3 ▶d→j ! wit⟨⟩
Pj = s[j] ▷j←p ? lws().s[j] ▷
j←d ? wk().s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕
0.3 ▶j→p ! glt⟨⊥⟩.Prls
+ j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls
+ j←d ? wit().s[j] ▷j→w ! rqs⟨⟩.
s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩
for the defendant and judge, where Prls is used as a helper process for legibility.
⃝
With the process syntax defined, we will now move on to specifying how processes
behave and interact.
18
2.2 Operational Semantics
2.2 Operational Semantics
In this section, we introduce the operational semantics of the probabilistic mixed choice
multiparty session π-calculus. First, we define the structural congruence, which we need
for the reduction semantics to specify all possible reductions of processes.
Definition 2.2 (Structural Congruence ≡). Structural congruence ≡is the smallest
congruence on processes that includes alpha conversion ≡α and:
P | 0 ≡P
P | Q ≡Q | P
(P | Q) | R ≡P | (Q | R)
(νs)0 ≡0
(νs)(νs′)P ≡(νs′)(νs)P
P | (νs)Q ≡(νs) (P | Q)
if s /∈fs(P)
def D in (νs)P ≡(νs)def D in P
if s /∈fs(D)
(def D in P) | Q ≡def D in (P | Q)
if dpv(D) ∩fpv(Q) = ∅
def D in (def D′ in P) ≡def D ∪D′ in P
if dpv(D) ∩dpv(D′) = ∅and
fpv(D) ∩dpv(D′) = ∅
Definition 2.3 (Probabilistic Reduction Semantics). The reduction semantics is given
as the relation −→P inductively defined as follows.
if ⊤then P else Q −→1 P [R-Cond-⊤]
if ⊥then P else Q −→1 Q [R-Cond-⊥]
def D in 0 −→1 0 [R-Def-0]
c ▷(P ▶τ.P ⊕N) + M −→P P [R-τ]
s[r1] ▷(P ▶q→p ! l⟨v⟩.Q ⊕N) + MQ | s[r2] ▷p←q ? l(x).P + MP
−→P Q | P[x 7→v] [R-Com]
X(ex; ec)
def
= P ∈D
def D in (X⟨ev, ec⟩| Q) −→1 def D in (P[ex 7→ev] | Q) [R-Def]
P −→P P ′
P | Q −→P P ′ | Q [R-Par]
P ≡P ′
P ′ −→P Q′
Q′ ≡Q
P −→P Q
[R-Struct]
P −→P P ′
(νs)P −→P (νs)P ′ [R-Res]
P −→P P ′
def D in P −→P def D in P ′ [R-Def-In]
The statement P −→P P ′ is meant to be read as “process P reduces to the contin-
uation P ′ with probability P”. Let us now explain each reduction rule. [R-Cond-⊤]
and [R-Cond-⊥] allow conditional processes to take a reduction step to their intended
continuation depending on the boolean value in the clause. Rule [R-Def-0] allows to
garbage collect disused declarations. With [R-τ], an internal action is performed with
the probability P.
Rule [R-Com] allows communication between two parallel mixed
choices, one containing an input and the other an output, with matching roles p, q and
matching label l, where the probability P of this step is determined by the sender. By
[R-Def], a process call may be executed. Given the declaration of X as X(ex; ec)
def
= P
being contained in the declarations D, the process call is replaced by the substitution
P[ex 7→ev]. Often, the reduction semantics is defined “up-to” structural congruence, how-
ever, we instead choose to include an explicit rule, [R-Struct]. From the remaining rules
19
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
[R-Par], [R-Res], and [R-Def-In], we get that processes can still reduce in different con-
texts, namely in parallel composition, under session restriction, and within a process
definition.
We write P −→P if P −→P P ′ for some P ′, and P ↛if there is no P such that P −→P.
Let =⇒P be inductively defined as (a) P =⇒1 P and (b) if P −→P1 P ′ and P ′ =⇒P2 P ′′
then P =⇒P1P2 P ′′.
To aid understanding, let us revisit the running example.
Example 3 (Running Example—Semantics). Let us examine possible reductions of Ex-
ample 2. Recall that the interface process was PI = (νs)(Pp | Pc | Pw) with c = {j, d}.
For convenience, we reordered the parallel processes, i.e., PI = (νs)(Pp | Pc | Pw) ≡
PI = (νs)(Pp | Pw | Pc).
To highlight which components interact in each reduction
step, we will alternately highlight them and the corresponding arrow −→P in light green
and dark green.
The sequence of reductions of the interface process PI we chose begins with the
plaintiff sending a lawsuit to the court, specifically the judge. The court then sends
the verdict guilty to the plaintiff with a 35% probability and afterwards releases the
witness.
PI = (νs)(Pp | Pw | Pc) =
(νs)
s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x) | s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
| s[c] ▷j←p ? lws().s[c] ▷
0.35 ▶j→p ! glt⟨⊤⟩.s[j] ▷j→w ! rls⟨⟩
⊕0.35 ▶j→p ! glt⟨⊥⟩.s[j] ▷j→w ! rls⟨⟩
⊕0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩
!
−→1(νs)
s[p] ▷p←j ? glt(x) | s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
| s[c] ▷
0.35 ▶j→p ! glt⟨⊤⟩.s[c] ▷j→w ! rls⟨⟩
⊕
0.35 ▶j→p ! glt⟨⊥⟩.s[c] ▷j→w ! rls⟨⟩
⊕
0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩
!
−→0.35(νs)
0 | s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
| s[c] ▷j→w ! rls⟨⟩
!
−→1(νs)
0 | 0 | 0
Hence, PI has a sequence PI =⇒0.35 0, where the court finds the defendant guilty.
Let us find a comparable sequence of reductions in the refined system PR. The initial
lawsuit message is sent from plaintiff to judge. Afterwards, the defendant delivers a
20
2.2 Operational Semantics
weak defense to the judge with 50% probability. The judge then sends guilty to the
plaintiff with 70% probability and releases the witness.
PR = (νs)(Pp | Pj | Pd | Pw) = (νs)
s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x)
| s[j] ▷j←p ? lws().s[j] ▷
j←d ? wk().s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕
0.3 ▶j→p ! glt⟨⊥⟩.Prls
+ j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls
+ j←d ? wit().s[j] ▷j→w ! rqs⟨⟩.
s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩
| s[d] ▷
0.5 ▶d→j ! wk⟨⟩
⊕
0.2 ▶d→j ! str⟨⟩
⊕
0.3 ▶d→j ! wit⟨⟩
| s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
!
−→1(νs)
s[p] ▷p←j ? glt(x) | s[j] ▷
j←d ? wk().s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls
+ j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls
+ j←d ? wit().s[j] ▷j→w ! rqs⟨⟩.
s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩
| s[d] ▷
0.5 ▶d→j ! wk⟨⟩
⊕
0.2 ▶d→j ! str⟨⟩
⊕
0.3 ▶d→j ! wit⟨⟩
| s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
!
−→0.5(νs)
s[p] ▷p←j ? glt(x) | s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕
0.3 ▶j→p ! glt⟨⊥⟩.Prls
| 0 | s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
!
−→0.7(νs)
0 | Prls | 0 | s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
!
Prls=s[j] ▷j→w ! rls⟨⟩
≡
(νs)
s[j] ▷j→w ! rls⟨⟩| s[w] ▷
(
w←j ? mtg().s[w] ▷w→j ! st⟨⟩
+ w←j ? rls()
!
−→1(νs)
0 | 0
Thus, PR, too, has a sequence PR =⇒0.35 0, in which the judge finds the defendant
guilty.
⃝
21
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
Sometimes, one sequence of the interface system will correspond to several transition
sequences of the refined system. We observe this in both processes and types, whenever
probabilistic branches with the same continuations are summed up in the interface.
Example 4 (Running Example—Refinement into Separate Reduction Sequences). For
instance, the sequence of PI that leads to not guilty without calling the witness with
probability 0.35 is refined into two sequences with probabilities 0.15 and 0.2, respec-
tively. Let P ′
I denote the system obtained after one reduction step of PI in the previous
Example 3. Once again, to highlight which components interact in each reduction step,
we will alternately highlight them and the corresponding arrow −→P in light green and
dark green. Observe the following reduction step of P ′
I.
P ′
I = (νs)
s[p] ▷p←j ? glt(x) | Pw
| s[c] ▷
0.35 ▶j→p ! glt⟨⊤⟩.s[c] ▷j→w ! rls⟨⟩
⊕
0.35 ▶j→p ! glt⟨⊥⟩.s[c] ▷j→w ! rls⟨⟩
⊕
0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩
!
−→0.35(νs)
0 | Pw | s[c] ▷j→w ! rls⟨⟩
Compare this now to the following two reduction sequences obtainable from the system
obtained after one reduction step of PR:
P ′
R = (νs)
s[p] ▷p←j ? glt(x) | s[j] ▷
j←d ? wk().s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls
+ j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls
+ j←d ? wit().s[j] ▷j→w ! rqs⟨⟩.
s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩
| s[d] ▷
0.5 ▶d→j ! wk⟨⟩
⊕
0.2 ▶d→j ! str⟨⟩
⊕
0.3 ▶d→j ! wit⟨⟩
| Pw
!
−→0.5(νs)
s[p] ▷p←j ? glt(x) | s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕
0.3 ▶j→p ! glt⟨⊥⟩.Prls
| 0 | Pw
!
−→0.3(νs)
0 | Prls | 0 | Pw
22
2.3 Typing System
For the reduction sequence P ′
R ⇒0.15 (νs)
0 | Prls | 0 | Pw
, and
P ′
R = (νs)
s[p] ▷p←j ? glt(x) | s[j] ▷
j←d ? wk().s[j] ▷
(
0.7 ▶j→p ! glt⟨⊤⟩.Prls
⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls
+ j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls
+ j←d ? wit().s[j] ▷j→w ! rqs⟨⟩.
s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩
| s[d] ▷
0.5 ▶d→j ! wk⟨⟩
⊕
0.2 ▶d→j ! str⟨⟩
⊕
0.3 ▶d→j ! wit⟨⟩
| Pw
!
−→0.2(νs)
s[p] ▷p←j ? glt(x) | s[j] ▷j→p ! glt⟨⊥⟩.Prls | 0 | Pw
!
−→1(νs)
0 | Prls | 0 | Pw
,
a different reduction sequence to the same system, P ′
R ⇒0.2 (νs)
0 | Prls | 0 | Pw
.
These two steps together correspond to the reduction step of P ′
I with 35% probability
we have seen above.
⃝
Having introduced process syntax and operational semantics, we are finished with
introducing the probabilistic mixed choice multiparty session π-calculus itself.
2.3 Typing System
This section introduces the typing system of the probabilistic mixed choice multiparty
session π-calculus. Within our system, akin to other works in MPST (see [Scalas and
Yoshida, 2019]), to type processes, session types are assigned to the channels through
which the processes communicate. There are three components to our typing system.
First is the type syntax, a grammar defining the shape of types and typing contexts.
Then comes subtyping, a preorder relation on types which enhances the flexibility of
the system. Finally, typing rules will specify which processes are justifiable by which
collection of types.
2.3.1 Type Syntax
Not unusually in MPST systems, our type syntax looks quite similar to the process
syntax. At first glance there are nonetheless several differences. Instead of variables and
values, we find base types. Channels are omitted; each channel can have only exactly
one type and need therefore not be explicitly stated within the type. Conditionals and
23
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
process definitions/-calls do not have explicit types, as they are not needed. Instead
we find classic µ-recursion, as is typically the case for MPST types. For mixed choice
actions, however, any previously acquired intuition will carry over to the corresponding
types.
Definition 2.4 (Type Syntax). The syntax of the probabilistic mixed choice multiparty
session types are inductively given by:
U ::=
nat | bool
(base types for numbers and booleans)
T, Ti ::=
end | t | (µt)T
(inaction type, recursion variable, recursion)
|
X
i∈I
Li | L | H.T
(mixed choice type with a finite I ̸= ∅, output)
L, Li ::=
In | Out
(mixed choice modes)
In ::=
p←q ? l(U).T
(p receives message l with type U from q)
Out ::=
M
i∈I
Pi ▶Hi.Ti
(probabilistic choice with finite I ̸= ∅and
X
i∈I
Pi ≤1)
H, Hi ::=
p→q ! l⟨U⟩| τ
(p sends message l with type U to q, internal action)
∆::=
∅| ∆, c : T
(local context)
As usual, we begin by explaining the syntax in chronological order. The base types for
numbers and booleans are given as nat and bool. The addition of more base types would
be a straightforward extension of the system, similar to values in processes. Inaction
holds the same meaning as in the process syntax. The recursion variable t and recursion
(µt)T are the standard way to type recursive process calls (see [Scalas and Yoshida,
2019]).
We require recursion to be guarded: For (µt)T, the type T has to contain
meaningful action, i.e., T ̸= t′ for all recursion variables t′. The mixed choice type is
analogous to the process syntax. For convenience we use the mixed choice modes In and
Out in dense syntax blocks. Input type and probabilistic choice type, too, follow the
process syntax. P are probabilities, P ∈R with 0 < P ≤1. The sum of probabilities in
a probabilistic choice type is at most one, P
i∈I Pi ≤1. The output type is analogous to
the process syntax. Finally, ∆is a local context, i.e., the aforementioned “collection of
types” using which processes are justified. It may either be empty or contain assignments
c : T of a session type T to a channel c.
Let c ∈∆iff c : T ∈∆for some T. If s[r] ∈∆and p ∈r, we write p ∈∆. Types in
local contexts are treated equi-recursively, i.e., ∆, c : (µt)T = ∆, c : T[t 7→(µt)T]. Addi-
tionally, we consider a type assignment of a sum with empty index set to be inaction and
an inaction type assignment to be the empty set, i.e., ∆, s[r] : P
i∈∅Li = ∆, s[r] : end =
∆.
Composition of local contexts is required to be linear, i.e., ∆1, ∆2 is defined iff
r1 ∩r2 = ∅for all r1 ∈∆1 and r2 ∈∆2.
We inherit for choice types the abbreviations defined for choices in processes, except
that we do not allow to omit the probability 1, i.e., T = 1 ▶H.T ′ is not the same as
24
2.3 Typing System
H.T ′. This distinction is necessary for the subtyping rules. We overload the notations
M and N to also use them on choices in types. We again allow to unify and split similar
summands of probabilistic choices, i.e., Pi ▶H.T ⊕Pj ▶H.T = (Pi + Pj) ▶H.T.
The typing and subtyping rules decompose types in a stepwise manner. Hence, we
also have types such as probabilistic choices with P
i∈I Pi ≤1 or the output type H.T.
Definition 2.5 (Well-Formed). A type T is well-formed if
(a) all outputs H are part of a probabilistic choice L
i∈I Pi ▶Hi.Ti ⊕P ▶H.T and
(b) for all probabilistic choices P
i∈I Pi = 1 and
(c) for all p←q ? l(Ui).Ti + p←q ? l(Uj).Tj and Pi ▶p→q ! l⟨Ui⟩.Ti ⊕Pj ▶p→q ! l⟨Uj⟩.Tj
have Ui = Uj and Ti = Tj.
Hence, a type H.T is not well-formed.
Similar to [Peters and Yoshida, 2024], we allow for mixed and probabilistic choices
to combine the same label with different messages and/or continuations. Accordingly,
we do not require the labels of a mixed or probabilistic choice to be pairwise distinct.
Instead we specify that in well-formed types, summands which share sender, receiver,
label and input/output mode must have the same payload- and continuation type.
2.3.2 Standard Subtyping
To enhance the flexibility of the typing system, we use subtyping. The intuitive idea
behind classic subtyping rules is that for T ′ ≤T, the subtype T ′ is smaller than its super-
type T if it is less demanding, accepting more external choices (messages to be received)
and offering fewer internal choices (messages to be sent). Using the subtyping rules to
enrich the typing rules formalizes a notion of safe substitution [Dezani-Ciancaglini et al.,
2015; Gay, 2016]. When we consider types to be a protocol specification where the pro-
cesses are the concrete implementation, safe substitution states that if the specification
calls for a certain type T, the implementation may instead use any subtype T ′. A natural
intuition for this concept can be drawn from programming. Imagine a programmer call-
ing an external function which takes an input value and then gives back a return value.
For example, say that both the input value and output value may be either a natural
number or boolean. If the programmer writes code that sometimes calls the function by
inputting a natural number and sometimes a boolean, the code would run as expected.
Removing one of these modes (offering fewer internal choices) and only ever inputting
natural numbers would clearly not cause errors. If instead, however, the programmer
were to sometimes also input another value type, say a string, then we would expect
an exception. For handling the return value, they would initialize variables to store
whichever return value they get. If they additionally prepared to receive a floating point
number (accepting more external choices), the code would still run. Failing to prepare
for either boolean or natural number return values would instead cause errors. This
concept also extends to base types. When receiving an integer, for example, initializing
a variable as a float would be fine. Instead initializing the variable as a natural number
25
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
will cause errors if the value received is below zero. This subtyping for base types is
not uncommon in MPST systems (see [Scalas and Yoshida, 2019], based on [Dezani-
Ciancaglini et al., 2015]). Our system does not offer this, but an extension would be
rather straightforward.
To introduce the subtyping rules, we first define the set of unguarded prefixes of a
type, as also seen in [Peters and Yoshida, 2024].
Definition 2.6 (Prefix). The set of unguarded prefixes of a type T, pre(T), is given as
follows:
pre(end) = pre(t) = ∅
pre((µt)T) = pre(T)
pre(P ▶τ.T) = {P ▶τ.T}
pre(p←q ? l(U).T) = {p←q ?}
pre(P ▶p→q ! l⟨U⟩.T) = {p→q !}
pre(
X
i∈I
Li) = {pre(Li)}i∈I
pre(
M
i∈I
Pi ▶Hi.Ti) = {pre(Pi ▶Hi.Ti)}i∈I
Probabilities are summed up such that:
pre(N ⊕P1 ▶p→q ! l1⟨U1⟩.T1
⊕P2 ▶p→q ! l2⟨U2⟩.T2) = pre(N) ∪{P1 ▶p→q !} ∪{P2 ▶p→q !}
= pre(N) ∪{P1 + P2 ▶p→q !}
pre(N ⊕P1 ▶τ.T ⊕P2 ▶τ.T) = pre(N) ∪{P1 ▶τ.T} ∪{P2 ▶τ.T}
= pre(N) ∪{P1 + P2 ▶τ.T}
In our subtyping, we will compare prefix sets of types. Let us therefore examine some
types which have and do not have the same set of prefixes.
Example 5 (Prefixes—Introduction). Consider the following type and its set of un-
guarded prefixes.
T =
0.5 ▶a→b ! one⟨⟩
⊕0.2 ▶a→c ! two⟨⟩
⊕0.3 ▶τ.T ′
pre(T) = {0.5 ▶a→b !, 0.2 ▶a→c !, 0.3 ▶τ.T ′}
The following types have the same prefixes as T, i.e., pre(T) = pre(T1) = pre(T2) despite
the outputs having new continuations in T1 and different labels in T2.
T1 =
0.5 ▶a→b ! one⟨⟩.Tone
⊕0.2 ▶a→c ! two⟨⟩.Ttwo
⊕0.3 ▶τ.T ′
T2 =
0.5 ▶a→b ! three⟨⟩
⊕0.2 ▶a→c ! four⟨⟩
⊕0.1 ▶τ.T ′
⊕0.2 ▶τ.T ′
26
2.3 Typing System
These types, on the other hand, do not have the same prefixes as T, i.e., pre(T) ̸= pre(T3)
and pre(T) ̸= pre(T4), as the continuation after the internal action is different in T3 and
the sending to role c is absent in T4.
T3 =
0.5 ▶a→b ! one⟨⟩
⊕0.2 ▶a→c ! two⟨⟩
⊕0.3 ▶τ.T ′′
, where T ′′ ̸= T
T4 =
(
0.7 ▶a→b ! one⟨⟩
⊕0.3 ▶τ.T ′
pre(T3) = {0.5 ▶a→b !, 0.2 ▶a→c !, 0.3 ▶τ.T ′′}
pre(T4) = {0.7 ▶a→b !, 0.3 ▶τ.T ′}
⃝
Despite being based on [Peters and Yoshida, 2024], the subtyping rules in Definition 2.7
do not follow their exact structure. [Peters and Yoshida, 2024] have standard-form input-
and output rules; the additional logic required for properly subtyping mixed choice types
is entirely contained in their summation rule.
In our version, some of that logic is
transferred to the input- and output rules.
Definition 2.7 (Standard Subtyping). The relation ≤◦is coinductively defined:
end ≤◦end [S◦-end]
I ∪J ̸= ∅
P
i∈I′ In′
i ≤◦P
i∈I Ini
P
j∈J′ Out′
j ≤◦P
j∈J Outj
P
i∈I′ In′
i + P
j∈J′ Out′
j ≤◦P
i∈I Ini + P
j∈J Outj
[S◦-Σ]
I′ ̸= ∅
∀j ∈J′. ∃i ∈I′. pre(In′
j) = pre(In′
i)
∀i ∈I′. T ′
i ≤◦Ti
P
i∈I′∪J′ qi←pi ? li(Ui).T ′
i ≤◦P
i∈I′ qi←pi ? li(Ui).Ti
[S◦-Σ-In]
I ̸= ∅
∀i ∈I. ∀k ∈Ki. T ′
k ≤◦Tk
∀j ∈J. ∃i ∈I. pre(L
k∈KiPk ▶Hk.T ′
k) = pre(L
k∈Ki Pk ▶Hk.Tk)
P
i∈I
L
k∈KiPk ▶Hk.T ′
k ≤◦P
i∈I
L
k∈Ki Pk ▶Hk.Tk + P
j∈J Outj
[S◦-Σ-Out]
We have c1 : T ′
1, . . . , cn : T ′
n ≤◦c1 : T1, . . . , cn : Tn if for all i ∈[1..n] it holds that T ′
i ≤◦Ti.
Here and throughout the thesis, by [1..n] we denote the set {1, . . . , n}. The explana-
tions of each rule are as follows. By our summation-splitting rule [S◦-Σ], a mixed choice
type P
i∈I′ In′
i + P
j∈J′ Out′
j is smaller than another, P
i∈I Ini + P
j∈J Outj, if the sum of
all inputs and the sum of all outputs are smaller than the respective sum of the larger
type, i.e., P
i∈I′ In′
i ≤◦P
i∈I Ini and P
j∈J′ Out′
j ≤◦P
j∈J Outj. The input rule [S◦-Σ-In]
lets a smaller type have more external choices as usual, with the requirement that no
prefixes are found in the sum of the smaller type which are not also found in the larger
type, ∀j ∈J′. ∃i ∈I′. pre(In′
j) = pre(In′
i). The output rule [S◦-Σ-Out], while looking
more complex given the additional ⊕-layer, is essentially analogous to the input rule.
As is usual, smaller types have fewer internal choices (see the additional output sum
in the larger type P
j∈J Outj). Additionally, all prefixes found in the sum of the larger
type, i.e., in P
i∈I
L
k∈Ki Pk ▶Hk.Tk + P
j∈J Outj, must be present in the smaller type,
∀j ∈J. ∃i ∈I. pre(L
k∈KiPk ▶Hk.T ′
k) = pre(L
k∈Ki Pk ▶Hk.Tk).
27
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
A second example for prefixes is presented here, to demonstrate their influence on
subtyping.
Example 6 (Prefixes—Subtyping). To show why the subtyping rules include the prefix
conditions, let us consider the following example types.
Ta= 1 ▶a→b ! nat⟨nat⟩
Tb = b←a ? nat(nat).b←c ? bool(bool)
Tc = c→b ! bool⟨bool⟩
Additionally, without formal introduction of the typing rules, take the following process
which is typed by Ta, Tb, and Tc above, P =
s[a] ▷a→b ! nat⟨2⟩| s[b] ▷b←a ? nat(x).s[b] ▷b←c ? bool(y) | s[c] ▷c→b ! bool⟨⊤⟩
P has two simple reduction steps, does not deadlock and all transmitted values will be
stored in properly typed variables. Now, consider the following subtype of Tb, which we
could derive if the prefix condition in Rule [S◦-Σ-In] was not present.
T ′
b =
(
b←a ? nat(nat).b←c ? bool(bool)
+ b←c ? new(nat)
Finally, let us examine the process P ′, in which the component typed by Tb is substituted
by a component typed by T ′
b, P ′ =
s[a] ▷a→b ! nat⟨1⟩| s[b] ▷
(
b←a ? nat(x).s[b] ▷b←c ? bool(y)
+ b←c ? new(z)
| s[c] ▷c→b ! bool⟨⊤⟩
While the formal definition of an error (Definition 4.20) is yet to come, the well-versed
reader will recognize that the presence of b←c ? new() causes P ′ to be an error process.
In the explanation of why that is, let us call the participants in P ′ by the channels
that they use, for simplicity. We see that s[b] is prepared to receive a message from
s[c], who is also ready to send one. However, s[b] is expecting a natural number with
label new while s[c] is transmitting a boolean value with label bool. From programming
experience, we can intuitively understand why this would cause an error.
Similarly, consider an example which might occur if the prefix condition were removed
from Rule [S◦-Σ-Out].
Ta =
(
1 ▶a→b ! hi⟨⟩
+
1 ▶a→c ! oops⟨⟩
Tb = b←a ? hi()
P = s[a] ▷
(
1 ▶a→b ! hi⟨⟩
+
1 ▶a→c ! oops⟨⟩| s[b] ▷b←a ? hi()
As before, consider the following subtype of Ta obtainable without the prefix condition
and the process P ′ with its substitution.
T ′
a = 1 ▶a→c ! oops⟨⟩
P ′ = s[a] ▷1 ▶a→c ! oops⟨⟩| s[b] ▷b←a ? hi()
After the substitution, the new system P ′ is deadlocked.
⃝
28
2.3 Typing System
We present a small example showcasing a subtyping derivation using Definition 2.7.
Example 7 (Standard Subtyping Derivation). Consider the following session type T
and its subtype T ′, i.e., T ′ ≤◦T.
T ′ =
a←b ? in1()
a←b ? in2().T ′′
+ 1 ▶a→c ! out3⟨⟩
T =
a←b ? in1()
+
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
The subtyping derivation tree of T ′ ≤◦T is as follows. We have abbreviated the rule
names to their suffixes, [end] refers to [S◦-end], [Σ] to [S◦-Σ] and so on.
end ≤◦end [end]
(
a←b ? in1()
+ a←b ? in2().T ′′ ≤◦a←b ? in1()
[In]
end ≤◦end [end]
1 ▶a→c ! out3⟨⟩≤◦
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
[Out]
a←b ? in1()
a←b ? in2().T ′′
+ 1 ▶a→c ! out3⟨⟩
≤◦
a←b ? in1()
+
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
[Σ]
⃝
2.3.3 Typing Rules
To finish off the typing system, we will now finally present the probabilistic mixed choice
multiparty session typing rules, specifying exactly how and which processes are justified
by a given local context. We begin by defining a global environment, another context
which stores information about variables and process variables as type derivation trees
are traversed.
Definition 2.8 (Global Environment).
Γ ::= ∅| Γ, x : U | Γ, X : ⟨eU, eT⟩
(empty set, variable, process variable)
Global environments Γ contain assignments x : U of variables to base types and as-
signments X : ⟨eU, eT⟩of process constants to the base types and session types used in the
respective declaration.
Our type judgements are of the form Γ ⊢P ▷∆, meaning “given the variable- and pro-
cess types in global environment Γ, process P is well-typed according to local context ∆”.
We are mainly interested in “self-contained” processes without free (process) variables,
as their behaviour is predictable. For these processes, the global environment is empty
at the bottom of the type derivation tree. When Γ = ∅, we sometimes write ⊢P ▷∆.
We will additionally encounter the judgement Γ ⊢v : U, meaning “global environment Γ
justifies value v to have type U”.
29
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
Definition 2.9 (Typing Rules). For local context ∆containing only well-formed types,
the type judgement Γ ⊢P ▷∆is defined inductively as follows.
Γ ⊢1, 2,.. : nat [Nat]
Γ ⊢⊤, ⊥: bool [Bool]
Γ, x : U ⊢x : U [Base]
Γ ⊢0 ▷∅[T-0]
safe
{s[ri] : Ti}i∈I
Γ ⊢P ▷∆, {s[ri] : Ti}i∈I
Γ ⊢(νs)P ▷∆
[T-Res]
Γ ⊢v : bool
Γ ⊢P ▷∆
Γ ⊢Q ▷∆
Γ ⊢if v then P else Q ▷∆
[T-If]
Γ ⊢P ▷∆1
Γ ⊢Q ▷∆2
Γ ⊢P | Q ▷∆1, ∆2
[T-Par]
Γ, X :
D
eU, T1,.., Tn
E
, ex : eU ⊢P ▷c1 : T1,.., cn : Tn
Γ, X :
D
eU, T1,.., Tn
E
⊢Q ▷∆
Γ ⊢def X(ex; c1,.., cn)
def
= P in Q ▷∆
[T-Def]
Γ ⊢ev : eU
Γ, X :
D
eU, T1,.., Tn
E
⊢X⟨ev, c1,.., cn⟩▷c1 : T1,.., cn : Tn
[T-Var]
∀i ∈I. Γ ⊢c ▷Mi ▷∆, c : Li
Γ ⊢c P
i∈I Mi ▷∆, c : P
i∈I Li
[T-Sum]
Γ ⊢P ▷∆′
∆′ ≤1 ∆
Γ ⊢P ▷∆
[T-Sub]
Γ ⊢P ▷∆, c : T
Γ ⊢c ▷P ▶τ.P ▷∆, c : P ▶τ.T
[T-τ]
p ∈c
Γ, x : U ⊢P ▷∆, c : T
Γ ⊢c ▷p←q ? l(x).P ▷∆, c : p←q ? l(U).T
[T-Inp]
p ∈c
Γ ⊢v : U
Γ ⊢P ▷∆, c : T
Γ ⊢c ▷P ▶p→q ! l⟨v⟩.P ▷∆, c : P ▶p→q ! l⟨U⟩.T
[T-Out]
∀i ∈I. Γ ⊢c ▷Pi ▶Ni.Pi ▷∆, c : Pi ▶Hi.Ti
Γ ⊢c ▷L
i∈I Pi ▶Ni.Pi ▷∆, c : L
i∈I Pi ▶Hi.Ti
[T-Prob]
The typing rules are mostly standard. According to [Nat] and [Bool], the natural
numbers 1, 2,.. being of type nat and the truth values ⊤, ⊥being of type bool will be
justified by any global environment. [Base] justifies the variable x to be of type U if
Γ contains that type assignment. Rule [T-∅] states that given any Γ, the terminated
process 0 is justified by an empty local context. By [T-Res], a restricted process (νs)P is
justified by ∆if P can be justified by the composition of ∆and all channel types within
the restricted session {s[ri] : Ti}i∈I, given that these types fulfil the predicate safe(). Said
predicate is introduced with the properties of the type system in Chapter 4. Intuitively,
it verifies that during communication no label mismatch can occur. [T-If] states that
given Γ, a conditional if v then P else Q is justified according to ∆if the assignment
of the v to bool is justified by Γ, and the continuation processes P and Q each justified
by ∆(given Γ). [T-Par] handles parallel composition, in which two participant processes
P and Q may be composed if they are each justified by local contexts ∆1 and ∆2, whose
composition ∆1, ∆2 is defined (i.e., respects linearity). Rule [T-Def] handles process
definitions def X(ex; c1,.., cn)
def
= P in Q. Firstly, the process P must be justifiable by
the local context containing the channels used in the definition, c1 : T1,.., cn : Tn given the
process variable and variable assignments X :
D
eU, T1,.., Tn
E
, ex : eU. Secondly, given the
30
2.3 Typing System
same process variable and variable assignments, the process Q must be justified by ∆.
Rule [T-Var] is a derivation anchor for processes ending in a process call X⟨ev, c1,.., cn⟩.
The global environment has to contain the process variable assignment for X, namely
X :
D
eU, T1,.., Tn
E
, and the local context has to comprise only the channels c1,.., cn,
whose types are those from the process variable assignment in the global environment.
Additionally Γ must justify the value types, Γ ⊢ev : eU. With [T-Sum], a process with
a mixed choice sum c P
i∈I Mi is justified if the respective channel type is a mixed
choice sum type of the same cardinality c : P
i∈I Li such that each process summand is
justified by one type summand Γ ⊢c ▷Mi ▷∆, c : Li. Rule [T-Sub] allows a process to be
justified by a local context ∆if it can be justified by a subtype-context ∆′ ≤∆. This
represents the principle of safe substitution mentioned previously. Internal actions are
handled by [T-τ], the process can use a channel c to perform an internal action τ with
probability P if the local context has a session type P ▶τ.T assigned to the same channel
and the continuation P can be justified by a local context in which c is now assigned the
continuation type T. The rules for inputs and outputs, [T-Inp] and [T-Out], function
analogously. Probabilistic choices are typed by [T-Prob] similarly to how sums are typed
with [T-Sum].
We will now come back to the running example to showcase the session types of the
channels used in our protocol.
Example 8 (Running Example—Types). The processes of Example 2 can be typed
by the types Tp, Tc, Tw, Tj, and Td, such that we have ∅⊢PI ▷∆I with local context
∆I = s[p] : Tp, s[c] : Tc, s[w] : Tw for the interface, and ∅⊢PR ▷∆R with local context
∆R = s[p] : Tp, s[j] : Tj, s[d] : Td, s[w] : Tw for the refinement.
Tp = p→j ! lws⟨⟩.p←j ? glt(bool)
Tc = j←p ? lws().
(
0.7 ▶j→p ! glt⟨bool⟩.1 ▶j→w ! rls⟨⟩
⊕0.3 ▶j→w ! rqs⟨⟩.j←w ? st().1 ▶j→p ! glt⟨bool⟩
Tw =
(
w←j ? mtg().w→j ! st⟨⟩
+ w←j ? rls()
Td =
0.5 ▶d→j ! wk⟨⟩
⊕0.2 ▶d→j ! str⟨⟩
⊕0.3 ▶d→j ! wit⟨⟩
Tj = j←p ? lws().
j←d ? wk().1 ▶j→p ! glt⟨bool⟩.1 ▶j→w ! rls⟨⟩
+ j←d ? str().1 ▶j→p ! glt⟨bool⟩.1 ▶j→w ! rls⟨⟩
+ j←d ? wit().1 ▶j→w ! rqs⟨⟩.
j←w ? st().1 ▶j→p ! glt⟨bool⟩
⃝
Definition 2.10 presents rules for transitions of local contexts. These rules map reduc-
tions of processes to their respective types, i.e., they are similar to the reduction rules in
Definition 2.3. Labelled rules are used on types to highlight the main actors of a step.
31
2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus
Definition 2.10 (Labelled Transitions of Local Contexts). Labels α are of the form
s : p←q ? l(U) for inputs, s : p→q ! l⟨U⟩for outputs, s : τ, or s : pq : l⟨U⟩for communica-
tion (from p to q). The labelled transition
α
−→P is inductively defined by the following
rules.
s[r] : p←q ? l(U).T + M
s : p←q ? l(U)
−−−−−−−→1 s[r] : T [TR-Inp]
s[r] : (P ▶p→q ! l⟨U⟩.T ⊕N) + M
s : p→q ! l⟨U⟩
−−−−−−−→P s[r] : T [TR-Out]
∆, s[r] : (P ▶τ.T ⊕N) + M
s : τ
−−→P ∆, s[r] : T [TR-τ]
c1 : T1
s : p→q ! l⟨U⟩
−−−−−−−→1 c1 : T ′
1
c2 : T2
s : q←p ? l(U)
−−−−−−−→P c2 : T ′
2
∆, c1 : T1, c2 : T2
s : pq : l⟨U⟩
−−−−−−→P ∆, c1 : T ′
1, c2 : T ′
2
[TR-Com]
We write ∆
α
−→P if ∆
α
−→P ∆′ for some ∆′. Moreover, we write ∆7→P ∆′ if ∆
s : pq : l⟨U⟩
−−−−−−→P
∆′ or ∆
s : τ
−−→P ∆′ and lastly, we write ∆↛if there are no ∆′ and P such that ∆7→P ∆′.
Let 7→∗
P be inductively defined as (a) ∆7→∗
1 ∆and (b) if ∆7→P1 ∆′ and ∆′ 7→∗
P2 ∆′′ then
∆7→∗
P1P2 ∆′′.
We have now formally introduced all relevant concepts for the probabilistic mixed
choice multiparty session π-calculus and -types. Interspersed with examples, we have
presented the process syntax, operational semantics, type syntax, subtyping, and typing
rules. Although we consider the past chapter to merely be preliminaries for the main
contribution of this work, it should be stated that the calculus itself is novel in its own
right.
While first strides in probabilistic session types have been made ([Aman and
Ciobanu, 2019; Inverso et al., 2020]), to the best of our knowledge, our probabilistic
mixed choice framework is entirely new. Even leaving the system as we have defined it
so far, and moving on to show central properties of MPST systems for it, we would have
created a meaningful scientific contribution. We will, however, instead introduce another
layer: The main idea of this thesis, the multi-channel refinement-based subtyping. Only
after developing our framework further and expanding its capabilities greatly will we
prove the major theorems.
32
3 Subtyping with Refinement
There are two ways to view a typing context and depending on the goal, both are true
and useful. The intuition for subtyping and its intricacies might slightly differ, however.
One might see typing as chronologically “secondary” to the processes. The protocol is
given to us as interacting processes and we then assign it types to abstract and verify
its behaviour. Instead, though, we can also view it as chronologically “primary”. In this
approach, types are an easily verifiable system specification, an interface, according to
which an implementation is built—the process. Let us, for now, assume the latter view.
Using classic subtyping, then, one can justify processes that slightly differ from the given
specification. We can refine the interface, by using a “smaller” subtype to justify a part of
the protocol. Our novel system exploits that exact mechanism and completely overhauls
what a refinement subtype can be. Behaviour specified by a single typed channel can
be distributed among several interacting ones, such that in the abstract, the exact same
behaviour is observable from “the outside”. With this idea in mind, we have created a
subtyping system which allows several channels to be a subtype to a single channel while
preserving desirable properties and interaction behaviour.
In the conception of these ideas, we drew inspiration from [Watanabe et al., 2023] and
their work on compositional Markov decision processes. They remarked that given the
current trajectory in technological advancements, due to state-space explosion, modern
verification targets can be enormous, to the extent that some of these models require
more space than the memory size of verification machines.
One alleviation to this,
they propose, is compositionality, which offers not only a memory advantage, but, if
compositional components get repeated and reused, a performance advantage as well—
divide and conquer.
With this in mind, our system offers another great advantage.
Assume for this the other view on subtyping: Processes come first, which are then typed
to verify and abstract. By utilising the subtyping relation in the “other direction”, i.e.,
from refinement to interface, we can iteratively and compositionally verify a protocol in
parts. As we will show in Chapter 4, any collection of typed channels can be composed
into one combined interface supertype. The subtyping derivation naturally verifies the
“internal” behaviour (all those interactions which occur only between the typed channels
of the respective collection), while preserving the “external” behaviour (those interactions
whose communication partners are not within the collection). Applying this principle,
we can iteratively create much simpler types for which desired properties are easier to
verify.
Note, that usually in MPST, the channels to which the session types are assigned
are not considered for the subtyping relation (see [Scalas and Yoshida, 2019; Peters and
Yoshida, 2024]). However for us, as the interactions of the channels are highly relevant
to subtyping, we require the subtyping relation to not merely relate types, T ′ ≤T, but
33
3 Subtyping with Refinement
typed channels, c : T ′ ≤c : T or ∆≤c : T.
Given the syntactic density of the multi-channel subtyping system, we have not just
opted to dedicate this entire chapter to it, we have also decided to introduce an interme-
diary subtyping relation which aims to bridge the gap between the standard subtyping
(§ 2.3.2) and the multi-channel subtyping: Functionally, it is just as expressive as the
standard subtyping, but its syntax and structure is aligned with the advanced subtyping.
As it is intended as a didactic stepping stone, no major properties are shown. What we
do show, however, is that the more “difficult” subtyping relations subsume the “easier”
ones. We first introduce this intermediary subtyping, then adapt the typing system,
enhancing it with extra syntax to prepare for the complexities of the new subtyping
rules before tackling the multi-channel subtyping.
3.1 Single-Channel Subtyping
As previously stated, the following subtyping is functionally similar to the standard
subtyping (Definition 2.7). The form of the rules, however is more akin to those of the
upcoming multi-channel subtyping, in fact, these rules are their single-channel variant.
After introducing and explaining the single-channel subtyping, we present an exemplary
subtyping derivation, compare this subtyping to the previously encountered standard
subtyping, and finally prove that the new subtyping subsumes standard subtyping.
Definition 3.1 (Single-Channel Subtyping). The relation ≤
⊙is coinductively defined:
I ∪J ̸= ∅
c′ : P
i∈I′ Ini
≤
⊙
P
c : P
i∈I Li
c′ : P
j∈J′ Outj
≤
⊙
P
c : P
j∈J Lj
c′ : P
i∈I′ Ini + P
j∈J′ Outj ≤
⊙
P c : P
i∈I Li + P
j∈J Lj
[S
⊙-Σ]
S
i∈I′ Ii ̸= ∅
∀j ∈J′. ∃i ∈I′. pre(In′
j) = pre(In′
i)
∀i ∈I′.
c′ : In′
i
≤
⊙
Pc : P
k∈Ii Lk
c′ : P
i∈I′∪J′ In′
i ≤
⊙
P c : P
i∈I′
P
k∈Ii Lk
[S
⊙-Σ-In]
S
i∈I′ Ii ∪J ̸= ∅
∀j ∈J. ∃i ∈I′. pre(Outj) = pre(Out′
i)
∀i ∈I′.
c′ : Out′
i
≤
⊙
Pc : P
k∈Ii Lk
c′ : P
i∈I′ Out′
i ≤
⊙
P c : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj
[S
⊙-Σ-Out]
J = S
i∈I Ji ̸= ∅
P
j∈J Pj = PΣ
∀i ̸= j ∈I. Ji ∩Jj = ∅
∀i ∈I.
c′ : H′
i.T ′
i
≤
⊙
P′
iPΣc : L
j∈Ji Pj ▶Hj.Tj
c′ : L
i∈IP′
i ▶H′
i.T ′
i ≤
⊙
PΣ c : L
j∈J Pj ▶Hj.Tj
[S
⊙-⊕]
r1 ⊆r2
q ∈r2
s[r1] : T ′ ≤
⊙
1 s[r2] : T
s[r1] : q←p ? l(U).T ′ ≤
⊙
1 s[r2] : q←p ? l(U).T
[S
⊙-In]
r1 ⊆r2
p ∈r2
s[r1] : T ′ ≤
⊙
1 s[r2] : T
s[r1] : p→q ! l⟨U⟩.T ′ ≤
⊙
P s[r2] : P ▶p→q ! l⟨U⟩.T
[S
⊙-Out]
34
3.1 Single-Channel Subtyping
c′ : T ′ ≤
⊙
1 c : T
c′ : τ.T ′ ≤
⊙
P c : P ▶τ.T
[S
⊙-τ]
∅≤
⊙
1 ∅[S
⊙-∅]
∆′ ≤
⊙
1 ∆
c′ : T ′ ≤
⊙
1 c : T
∆′, c′ : T ′ ≤
⊙
1 ∆, c : T
[S
⊙-Split]
The first four rules may be understood as one large rule whose purpose is to split mixed
choice sums into singular actions. Not always need all (or any) of these rules be applied,
but whenever they do occur in a derivation, they will appear in order—[S
⊙-Σ] is applied
first (lowermost in the derivation tree) and [S
⊙-⊕] last (uppermost). [S
⊙-Σ] splits the
mixed choice of the subtype into its inputs and probabilistic sums, where each sub-sum
must then be subtype of a sub-sum of the mixed choice of the supertype. As previously,
choices on the left of the subtyping may have fewer outputs and more inputs. This is
again implemented by [S
⊙-Σ-In] and [S
⊙-Σ-Out]. The difference to earlier is, however, that
instead of comparing the respective input or output prefixes immediately, the sums are
split further. The rule [S
⊙-Σ-In] splits an input choice into single summands. Thereby,
it allows for additional inputs on left (via the index set J′) that are not matched on the
right. It is ensured that no additional choice In′
j = p←q ? l(U).T ′ introduces a prefix
pre(In′
j) = p←q ? that was not previously justified. Similarly, [S
⊙-Σ-Out] splits a choice
into probabilistic choices. Here, the right hand side may have additional probabilistic
choices. The rule ensures that no prefix of the supertype is lost. Finally, [S
⊙-⊕] splits
a probabilistic choice into its individual summands and removes their probabilities P′
i,
by multiplying P′
i with the current probability in the index of the subtype relation ≤
⊙
P.
Hence, ≤
⊙
P identifies a branch in the subtyping derivation tree in which a subterm of the
subtype is considered that occurs in the original, i.e., not split, subtype with probability
P. In other words, the subterm is subtype of the supertype with probability P.
The remaining rules, [S
⊙-In] to [S
⊙-Split], contain the core of the actual subtyping
behaviour. Due to [S
⊙-In], a single input action is the subtype of the same single action
if the respective continuations are sub- and supertype. Both the acting role q and the
subtype role set r1 must be contained in the supertype role set r2. Rule [S
⊙-Out] is
similar. Additionally it matches the probability on the relation ≤
⊙
P in the conclusion
to the probability of the sending action of the supertype, s[r2] : P ▶p→q ! l⟨U⟩.T. For
the premise then ≤1 is used. Similarly, [S
⊙-τ] handles internal actions, matching the
probability of the relation. [S
⊙-∅] ensures that ∅≤
⊙
1 ∅. As ∆, c : end = ∆, this rule
relates end-typed channels. Finally, [S
⊙-Split] splits contexts such that each channel in
the supertype context is related to exactly one channel in the subtype context.
For better understanding, we will perform a subtyping derivation using these rules.
Example 9 (Single-Channel Subtyping Derivation). Consider the session type T and
its subtype T ′ from Example 7, i.e., T ′ ≤◦T.
T ′ =
a←b ? in1()
a←b ? in2().T ′′
+ 1 ▶a→c ! out3⟨⟩
T =
a←b ? in1()
+
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
With the following subtyping derivation, we show that also c : T ′ ≤
⊙
1 c : T.
We have
abbreviated the rule names to their suffixes, [∅] refers to [S
⊙-∅], [Σ] to [S
⊙-Σ] and so on.
35
3 Subtyping with Refinement
As the derivation tree is rather wide, we present the two branches separately, beginning
on the left-hand side with the inputs.
c : end ≤
⊙
1 c : end [∅]
c : a←b ? in1() ≤
⊙
1 c : a←b ? in1() [In]
c :
(
a←b ? in1()
+ a←b ? in2().T ′′ ≤
⊙
1 c : a←b ? in1()
[Σ-In]
. . .
c :
a←b ? in1()
a←b ? in2().T ′′
+ 1 ▶a→c ! out3⟨⟩
≤
⊙
1 c :
a←b ? in1()
+
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
[Σ]
Next is the branch on the right, the outputs.
. . .
c : end ≤
⊙
1 c : end [∅]
c : a→c ! out3⟨⟩≤
⊙
1 c : 1 ▶a→c ! out3⟨⟩[Out]
c : 1 ▶a→c ! out3⟨⟩≤
⊙
1 c : 1 ▶a→c ! out3⟨⟩[⊕]
c : 1 ▶a→c ! out3⟨⟩≤
⊙
1 c :
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
[Σ-Out]
c :
a←b ? in1()
a←b ? in2().T ′′
+ 1 ▶a→c ! out3⟨⟩
≤
⊙
1 c :
a←b ? in1()
+
(
0.7 ▶a→c ! out1⟨⟩
⊕0.3 ▶a→c ! out2⟨⟩
+ 1 ▶a→c ! out3⟨⟩
[Σ]
We notice that despite there being more steps, the overall tree is similar to that of
Example 7.
⃝
As we have just seen, the functionality of the previous [S◦-Σ-In] and [S◦-Σ-Out] is
now distributed over several rules, namely [S
⊙-Σ-In] and [S
⊙-In] for [S◦-Σ-In] and [S
⊙-
Σ-Out], [S
⊙-⊕], [S
⊙-Out], and [S
⊙-τ] for [S◦-Σ-Out]. Additionally, the previous [S◦-end]
rule became [S
⊙-∅]. [S
⊙-Split] is a new rule, which in essence accomplishes what the
side condition in Definition 2.7, “We have c1 : T ′
1, . . . , cn : T ′
n ≤◦c1 : T1, . . . , cn : Tn if for all
i ∈[1..n] it holds that T ′
i ≤◦Ti”, did. Notice that with the single-channel rules, we may
subtype different channels, as long as the role set of the subtype channel is a subset
of the roles of the supertype channel, i.e., s[r1] : T ′ ≤
⊙
P s[r2] : T if r1 ⊆r2 (by [S
⊙-In],
[S
⊙-Out]).
We now show that, as demonstrated in the previous example, single-channel subtyping
indeed subsumes the standard subtyping of Definition 2.7.
36
3.1 Single-Channel Subtyping
Theorem 3.2. If ∆′ ≤◦∆, then ∆′ ≤
⊙
1 ∆.
Proof. Assume that ∆′ ≤◦∆. Note that according to Definition 2.7, for contexts ∆′ =
c1 : T ′
1, . . . , cn : T ′
n and ∆= c1 : T1, . . . , cn : Tn we have ∆′ ≤◦∆if for all k ∈[1..n] we have
T ′
k ≤◦Tk. Additionally, from Definition 3.1, specifically rule [S
⊙-Split], we have that if
all k ∈[1..n] we have ck : T ′
k ≤
⊙
1 ck : Tk, then also ∆′ ≤
⊙
1 ∆. It thus suffices to prove that if
T ′
k ≤◦Tk then ck : T ′
k ≤
⊙
1 ck : Tk which we show by structural induction on the derivation
of T ′
k ≤◦Tk. For legibility we drop the subscript and continue with ck = c, T ′
k = T ′ and
Tk = T. We proceed by case distinction on the last applied rule.
Case [S◦-end]: We have T ′ = end and T = end with end ≤◦end. By the definition
of local contexts c : end = end thus by [S
⊙-∅] of Definition 3.1 we have c : end ≤
⊙
1
c : end as required.
Case [S◦-Σ]: We have T ′ = P
i∈I′ In′
i + P
j∈J′ Out′
j and T = P
i∈I Ini + P
j∈J Outj
for which P
i∈I′ In′
i + P
j∈J′ Out′
j ≤◦P
i∈I Ini + P
j∈J Outj, P
i∈I′ In′
i ≤◦P
i∈I Ini,
P
j∈J′ Out′
j ≤◦P
j∈J Outj, and I ∪J ̸= ∅.
Therefore by induction hypothesis
have c : P
i∈I′ In′
i ≤
⊙
1 c : P
i∈I Ini and c : P
j∈J′ Out′
j ≤◦c : P
j∈J Outj. By [S
⊙-Σ] of
Definition 3.1, we thus have c : P
i∈I′ Ini + P
j∈J′ Outj ≤
⊙
1 c : P
i∈I Li + P
j∈J Lj as
required.
Case [S◦-Σ-In]: Have T ′ = P
i∈I′∪J′ qi←pi ? li(Ui).T ′
i and T = P
i∈I′ qi←pi ? li(Ui).Ti
with P
i∈I′∪J′ qi←pi ? li(Ui).T ′
i ≤◦P
i∈I′ qi←pi ? li(Ui).Ti, I′ ̸= ∅, ∀j ∈J′. ∃i ∈
I′. pre(In′
j) = pre(In′
i), and ∀i ∈I′. T ′
i ≤◦Ti. Apply the induction hypothesis on
∀i ∈I′. T ′
i ≤◦Ti to obtain ∀i ∈I′. c : T ′
i ≤◦c : Ti. Since both sub- and supertype
have the same channel, which thus have the same role vector, we may apply [S
⊙-In]
of Definition 3.1 to obtain c : qi←pi ? li(Ui).T ′
i ≤
⊙
1 c : qi←pi ? li(Ui).Ti for ∀i ∈I′.
Let then ∀i ∈I′. Ii = i, where as I′ ̸= ∅have S
i∈I′ Ii ̸= ∅. With this, ∀i ∈
I′. c : qi←pi ? li(Ui).T ′
i ≤
⊙
1 c : qi←pi ? li(Ui).Ti, and ∀j ∈J′. ∃i ∈I′. pre(In′
j) =
pre(In′
i), we can apply [S
⊙-Σ-In] of Definition 3.1 to obtain the required fact of
c : P
i∈I′∪J′ qi←pi ? li(Ui).T ′
i ≤
⊙
1 c : P
i∈I′ qi←pi ? li(Ui).Ti.
Case [S◦-Σ-Out]: In this case we have the two types T ′ = P
i∈I
L
k∈KiPk ▶Hk.T ′
k
and T = P
i∈I
L
k∈Ki Pk ▶Hk.Tk + P
j∈J Outj, where P
i∈I
L
k∈KiPk ▶Hk.T ′
k ≤◦
P
i∈I
L
k∈Ki Pk ▶Hk.Tk + P
j∈J Outj, ∀j ∈J. ∃i ∈I. pre(L
k∈KiPk ▶Hk.T ′
k) =
pre(L
k∈Ki Pk ▶Hk.Tk), and I ̸= ∅, ∀i ∈I. ∀k ∈Ki. T ′
k ≤◦Tk. From the induction
hypothesis we obtain ∀i ∈I. ∀k ∈Ki. c : T ′
k ≤
⊙
1 c : Tk. Now ∀i ∈I. ∀k ∈Ki depend-
ing on the shape of Hk we have one of two cases. If Hk = τ, we apply [S
⊙-τ-L] and
[S
⊙-τ-R] of Definition 3.1 to obtain c : τ.T ′
k≤
⊙
Pk c : τ.Tk. In the case in which Hk ̸= τ,
since both sub- and supertype have the same channel, which thus have the same
role vector, we may apply [S
⊙-Out] of Definition 3.1 to obtain c : Pk ▶Hk.T ′
k ≤
⊙
Pk
c : Pk ▶Hk.Tk. Thus we have ∀i ∈I. ∀k ∈Ki. c : Hk.T ′
k ≤
⊙
Pk c : Pk ▶Hk.T ′
k. By
the definition of local contexts, all types in ∆′ and ∆are well-formed. For T ′ =
P
i∈I
L
k∈KiPk ▶Hk.T ′
k specifically this implies that ∀i ∈I. P
k∈Ki Pk = 1. Let
then for each i in I Ji = i, hence J = I. From this, I ̸= ∅, ∀i ∈I. P
k∈Ki Pk = 1,
37
3 Subtyping with Refinement
and ∀i ∈I. ∀k ∈Ki. c : Hk.T ′
k ≤
⊙
Pk c : Pk ▶Hk.T ′
k we may apply [S
⊙-⊕] of Defini-
tion 3.1 to obtain ∀i ∈I. c : L
k∈Ki Pk ▶Hk.T ′
k≤
⊙
1c : L
k∈Ki Pk ▶Hk.T ′
k. Finally, from
this, I ̸= ∅, and ∀j ∈J. ∃i ∈I. pre(L
k∈KiPk ▶Hk.T ′
k) = pre(L
k∈Ki Pk ▶Hk.Tk),
we can apply [S
⊙-Σ-Out] of Definition 3.1 to obtain c : P
i∈I
L
k∈Ki Pk ▶Hk.T ′
k ≤
⊙
1
c : P
i∈I
L
k∈Ki Pk ▶Hk.T ′
k + P
j∈J Outj as required.
With this, we have finished presenting the single-channel subtyping. Before making
use of the acquired intuition to introduce the multi-channel subtyping, we will first have
a small intermission to slightly adapt the typing system.
3.2 Adapted Typing System
Our aim was to keep the subtyping rules as slim as possible. When the subtype, however,
is a collection of several typed channels, a lot of terms must be matched and thus there
inevitably exists a large number of branches within the subtyping derivation tree. To
keep track of those, we opted to introduce the concept of the active channel.
Only
found within subtyping derivation trees and not in type judgements, the active channel
essentially marks that channel to which a branch in the derivation is dedicated. For this,
we introduce a new typing context, the local context with an active channel, Λ.
Definition 3.3 (Local Context with an Active Channel).
Λ ::= ∆| c · (∆, c : T)
A Λ is either a local context ∆(Definition 2.4) or a local context with an active
channel. In accordance with this new context, we redefine labelled transitions of local
contexts, based on Definition 2.10. The labelled transition on local contexts with an
active channel
α
−→P is given inductively by the following rules.
Definition 3.4 (Labelled Transitions of Local Contexts with an Active Channel). Labels
α in these rules are of the form s : p←q ? l(U) for inputs, s : p→q ! l⟨U⟩for outputs, s : τ,
or s : pq : l⟨U⟩for communication (from p to q).
s[r] : p←q ? l(U).T + M
s : p←q ? l(U)
−−−−−−−→1 s[r] : T [TR-Inp]
s[r] : (P ▶p→q ! l⟨U⟩.T ⊕N) + M
s : p→q ! l⟨U⟩
−−−−−−−→P s[r] : T [TR-Out]
s[r] : p→q ! l⟨U⟩.T
s : p→q ! l⟨U⟩
−−−−−−−→1 s[r] : T [TRΛ-Out]
∆, s[r] : (P ▶τ.T ⊕N) + M
s : τ
−−→P ∆, s[r] : T [TR-τ]
s[r] · (∆, s[r] : (P ▶τ.T ⊕N) + M)
s : τ
−−→P ∆, s[r] : T [TRΛ-τ]
s[r] · (∆, s[r] : τ.T)
s : τ
−−→1 ∆, s[r] : T [TRΛ-τ-2]
c1 : T1
s : p→q ! l⟨U⟩
−−−−−−−→1 c1 : T ′
1
c2 : T2
s : q←p ? l(U)
−−−−−−−→P c2 : T ′
2
∆, c1 : T1, c2 : T2
s : pq : l⟨U⟩
−−−−−−→P ∆, c1 : T ′
1, c2 : T ′
2
[TR-Com]
38
3.3 Multi-Channel Subtyping
c1 : T1
s : p→q ! l⟨U⟩
−−−−−−−→1 c1 : T ′
1
c2 : T2
s : q←p ? l(U)
−−−−−−−→P c2 : T ′
2
c2 · (∆, c1 : T1, c2 : T2)
s : pq : l⟨U⟩
−−−−−−→P ∆, c1 : T ′
1, c2 : T ′
2
[TRΛ-Com]
We write Λ
α
−→P if Λ
α
−→P Λ′ for some Λ′. Moreover, we write Λ 7→P Λ′ if Λ
s : pq : l⟨U⟩
−−−−−−→P Λ′
or Λ
s : τ
−−→P Λ′ and Λ ↛if there are no Λ′ and P such that Λ 7→P Λ′. Let 7→∗
P be inductively
defined as (a) Λ 7→∗
1 Λ and (b) if Λ 7→P1 Λ′ and Λ′ 7→∗
P2 Λ′′ then Λ 7→∗
P1P2 Λ′′.
All rules in black, named [TR-*], are taken directly from the previous definition of
labelled transitions. Those highlighted in blue, named [TRΛ-*], are additions needed
to accommodate active channels. As before, these rules map reductions of processes
on their respective types, i.e., they are similar to the reduction rules in Definition 2.3.
Labelled rules are used on types to highlight the main actors of a step.
3.3 Multi-Channel Subtyping
With all preliminaries behind us, we now finally present the multi-channel subtyping
of the probabilistic mixed choice multiparty session π-calculus.
In this section, after
introducing and explaining the subtyping rules in detail, we will show that multi-channel
subtyping subsumes single-channel subtyping, and finish the chapter by revisiting the
courthouse protocol with two examples.
Structurally, the multi-channel subtyping rules are very similar to those of single-
channel subtyping (§ 3.1). Apart from the addition of certain rules, as explained in the
following, for most rules the difference lies mostly in the amount of typed channels on
the left-hand side of the relation: One for single-channel subtyping, several for multi-
channel, as the names imply.
Definition 3.5 (Subtyping). The relation ≤P is coinductively defined:
S
k∈[1..n] Jk ̸= ∅
∀k ∈[1..n] . s[rk] · ∆≤P s[r] : P
j∈Jk Lj
∆= s[r1] : T1, . . . , s[rn] : Tn ≤P s[r] : P
j∈J1 Lj + . . . + P
j∈Jn Lj
[S-Σ-1]
I ∪J ̸= ∅
s[r1] ·
∆, s[r1] : P
i∈I′ Ini
≤P
s[r2] : P
i∈I Li
s[r1] ·
∆, s[r1] : P
j∈J′ Outj
≤P
s[r2] : P
j∈J Lj
s[r1] ·
∆, s[r1] : P
i∈I′ Ini + P
j∈J′ Outj
≤P s[r2] : P
i∈I Li + P
j∈J Lj
[S-Σ-2]
S
i∈I′ Ii ̸= ∅
∀j ∈J′. ∃i ∈I′. pre(In′
j) = pre(In′
i)
∀i ∈I′.
s[r1] · (∆, s[r1] : In′
i)
≤P s[r2] : P
k∈Ii Lk
s[r1] ·
∆, s[r1] : P
i∈I′∪J′ In′
i
≤P s[r2] : P
i∈I′
P
k∈Ii Lk
[S-Σ-In]
S
i∈I′ Ii ∪J ̸= ∅
∀j ∈J. ∃i ∈I′. pre(Outj) = pre(Out′
i)
∀i ∈I′.
s[r1] · (∆, s[r1] : Out′
i)
≤P s[r2] : P
k∈Ii Lk
s[r1] ·
∆, s[r1] : P
i∈I′ Out′
i
≤P s[r2] : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj
[S-Σ-Out]
39
3 Subtyping with Refinement
J = S
i∈I Ji ̸= ∅
P
j∈J Pj = PΣ
∀i ̸= j ∈I. Ji ∩Jj = ∅
∀i ∈I.
s[r1] · (∆, s[r1] : H′
i.T ′
i)
≤P′
iPΣ s[r2] : L
j∈Ji Pj ▶Hj.Tj
s[r1] ·
∆, s[r1] : L
i∈IP′
i ▶H′
i.T ′
i
≤PΣ s[r2] : L
j∈J Pj ▶Hj.Tj
[S-⊕]
r1 ⊆r2
q ∈r2
p /∈∆
∆, s[r1] : T ′ ≤1 s[r2] : T
s[r1] · (∆, s[r1] : q←p ? l(U).T ′) ≤1 s[r2] : q←p ? l(U).T
[S-In]
r1 ⊆r2
p ∈r2
q /∈∆
∆, s[r1] : T ′ ≤1 s[r2] : T
s[r1] · (∆, s[r1] : p→q ! l⟨U⟩.T ′) ≤P s[r2] : P ▶p→q ! l⟨U⟩.T
[S-Out]
p ∈r1
q ∈r2
∆, s[r1] : T ′
1, s[r2] : T ′
2 ≤P s[r] : T
s[r1] ·
∆, s[r1] : p→q ! l⟨U⟩.T ′
1, s[r2] : q←p ? l(U).T ′
2 + P
i∈I Li
≤P s[r] : T
[S-Link]
∆, s[r1] : T ′ ≤P s[r2] : T
s[r1] · (∆, s[r1] : τ.T ′) ≤P s[r2] : T
[S-τ-L]
Λ ≤1 s[r] : T
Λ ≤P s[r] : P ▶τ.T
[S-τ-R]
∅≤1 ∅[S-∅-1]
pend(c · ∆)
c · ∆≤P ∅
[S-∅]
∆′
1 ≤1 ∆
∆′
2 ≤1 c : T
∆′
1, ∆′
2 ≤1 ∆, c : T
[S-Split]
Akin to single-channel subtyping, the first five rules may be understood as one large
rule, as the order of these rules within a derivation is fixed. Among these five rules,
[S-Σ-1] is applied first (lowermost in the derivation tree) and [S-⊕] last (uppermost). As
we are now handling several channels in the refinement, the new branching rule [S-Σ-1]
introduces the aforementioned active channel to produce branches for each channel in the
refinement. No other rule sets an active channel. We have ∆= s[r1] : T1, . . . , s[rn] : Tn ≤P
s[r] : P
j∈J1 Lj + . . . + P
j∈Jn Lj if there is a way to split the choice of the supertype along
the n channels of the refinement. Therefore, [S-Σ-1] creates n branches, where in each
branch exactly one of the n channels is set active s[r1]·∆. Intuitively, s[rk]·∆is used to
identify the next possible action of s[rk] if it exists. Else s[rk] is pending (see below) and
an empty choice s[r] : P
j∈Jk Lj with Jk is created. The side condition S
k∈[1..n] Jk ̸= ∅
ensures that not all Jk are empty. For each (non-pending) channel in ∆, the derivation
tree in the respective branch above [S-Σ-1] creates the corresponding component of the
mixed choice of s[r]. Using an active channel and branching in this way, allows us to
separate the splitting of a choice onto several rules.
After an active channel is set, the remaining four splitting rules function essentially like
those of the single-channel subtyping (Definition 3.1), while keeping the active channel
on the left in the precondition. Rule [S-Σ-2], corresponds to [S
⊙-Σ], [S-Σ-In] and [S-
Σ-Out] to [S
⊙-Σ-In] and [S
⊙-Σ-Out], respectively, and [S-⊕] to [S
⊙-⊕]. For the latter,
the probabilistic choice rules, the purpose of “storing” the probability P′
i in the relation
was not apparent thus far. Essentially, when several steps of purely internal interaction
between channels of the refinement occur, these several steps can be aggregated into one
step, whose probability is the product of the individual steps.
Again, the remaining rules, [S-In] to [S-Split], contain the core of the actual subtyping
behaviour; in most of these rules subtype contexts have an active channel whose type
is a singular input/output action. The input rule, [S-In], is largely the same as [S
⊙-In]:
40
3.3 Multi-Channel Subtyping
A single input action is the supertype of a context if its active channel offers the same
single action and the continuations are subtypes within the local context without an
active channel. As in [S
⊙-In], both the acting role q and the subtype role set r1 must be
contained in the supertype role set r2. Note that the communication partner p of this
input must not be found within ∆. Intuitively, this means that only “external” inputs
can occur in the interface type; input actions whose communication partner is within
the refinement are handled by [S-Link] or [S-∅]. Rule [S-Out] is similar; its parallel is
[S
⊙-Out]. Here again, it matches the probability on the relation ≤P in the conclusion to
the probability of the sending action of the supertype, s[r2] : P ▶p→q ! l⟨U⟩.T. For the
premise then ≤1 is used.
[S-Link] is an entirely new rule, which unfolds internal communication within the
context on the left, if the active channel offers the sending action.
The respective
communication is not visible in the supertype. Similar to [S-In] and [S-Out], there is
no active channel in the premise.
Similarly, [S-τ-L] allows internal actions of the active channel of the subtype context
to unfold. [S-τ-R] allows internal actions of the supertype to unfold. They expand on
the [S
⊙-τ] rule: Making two rules out of one allows the supertype to contain internal
actions independently from those of the subtype context. Indeed, sometimes a supertype
is required to contain an internal action that does not correspond to an internal action
in the subtype. Consider the case ∆≤1 ∆′, where ∆′ has a chain of internal commu-
nications ∆7→∗
P ∆′ leading to a context in which an external input ∆′
s:p←q ? l(U)
−−−−−−−→with
q /∈∆′ is available. Then the supertype ∆needs a transition ∆
s:τ
−→P. [S-τ-R] matches
the probabilities of the relation and the action similar to [S-Out]. [S-∅-1] is equivalent
to [S
⊙-∅].
By the new rule [S-∅], the end type is also a supertype of a pending context. In [S-
Σ-1], [S-Σ-2], [S-Σ-In], and [S-Σ-Out], some of the constructed parts of the choice in the
supertype may be empty. This happens if the subtype context with its active channel
is pending, pend(c · ∆). An action of c1 · ∆may be pending, because its communication
partner on c2 might need to first link with some c3. Note that internal actions, treated
by [S-Link] and [S-τ-L], are resolved only in branches, where the sender is the active
channel. Hence, all internal inputs are pending, because they are handled in the branch
of the sender. Sending actions may also be pending, if the corresponding receiving action
is not available yet.
Definition 3.6 (Pending). A local typing context with an active channel Λ = s[p] · ∆
is pending, pend(Λ), if s[p] : P
i∈I Ini + P
j∈J Outj ∈∆and
1. for all i ∈I have Ini = pi←qi ? li(Ui).Ti for which exists s[q] : T ∈∆with qi ∈q,
and
2. for all j ∈J have Outj = pk→qk ! lk⟨Uk⟩.Tk ⊕Nj for which exists s[q] : T ∈∆with
qk ∈q and T ̸= qi←pi ? li(Ui).T ′
i + M.
In other words, pend(c · ∆) if the type of c is a mixed choice, where all inputs seek to
communicate with each some partner from ∆which is not available, and all probabilistic
41
3 Subtyping with Refinement
choices contain an output seeking to communicate with a partner from ∆which is not
available.
Finally, [S-Split] splits contexts such that there is a single channel on the right-hand
side of the subtyping relation. In contrast to [S
⊙-Split], the left-hand side may now be
a collection of typed channels, a subtype context. This rule facilitates reflexivity and
transitivity of ≤1.
The subtyping rules ensure that all roles on which actions are performed in the re-
finement appear in the role set used in the interface—or role sets for more than one
channel in the interface. This may appear as a limitation of our approach, since the
interface needs to know about these roles even if it has no further knowledge about the
inner structure of the refinement. However, this design choice was made to simplify
the presentation of subtyping and is not crucial. Instead we may use a new role (for
each channel) in the interface, let [S-Split] create a mapping between refinement roles
and their corresponding interface role for the bookkeeping in the remaining subtyping
rules, and replace in rules such as [S-In] each refinement role in actions on the left by
the corresponding interface role in the actions on the right.
We will now show that single-channel subtyping subsumes multi-channel subtyping,
similar to Theorem 3.2.
Theorem 3.7. If ∆′ ≤
⊙
P ∆then ∆′ ≤P ∆.
For the proof, we will utilise the following lemma for dealing with active channels.
Lemma 3.8. c′ : T ′ ≤P c : T implies c′ · (c′ : T ′) ≤P c : T.
Proof. By induction on the derivation of c′ : T ′ ≤P c : T. Proceed by case distinction on
the last applied rule, we may consider only those cases in which the conclusion has no
active channel.
Case [S-Σ-1]: Clear.
Case [S-τ-R]: Have c′ : T ′ ≤P c : P ▶τ.T and c′ : T ′ ≤1 c : T. By induction hypothesis
c′ · (c′ : T ′) ≤1 c : T. By [S-τ-R] thus c′ · (c′ : T ′) ≤P c : P ▶τ.T as required.
Case [S-∅-1]: Have ∅≤P ∅with P = 1, thus c′ : end = ∅. Since pend(c′ · (c′ : end))
holds vacuously, from [S-∅] we obtain c′ · (∅) ≤P ∅as required.
Case [S-Split]: By induction hypothesis.
Proof of Theorem 3.7. Assume that ∆′ ≤
⊙
P ∆. We show ∆′ ≤P ∆by structural induction
on the derivation of ∆′ ≤
⊙
P ∆. We proceed by case distinction on the last applied rule.
Case [S
⊙-Σ]: Have ∆′ = s[r1] : P
i∈I′ Ini +P
j∈J′ Outj and ∆= s[r2] : P
i∈I Li +P
j∈J Lj
with s[r1] : P
i∈I′ Ini +P
j∈J′ Outj ≤
⊙
P s[r2] : P
i∈I Li +P
j∈J Lj, s[r1] : P
i∈I′ Ini ≤
⊙
P
s[r2] : P
i∈I Li, s[r1] : P
j∈J′ Outj ≤
⊙
P s[r2] : P
j∈J Lj, and I ∪J ̸= ∅. From the in-
duction hypothesis get s[r1] : P
i∈I′ Ini ≤P s[r2] : P
i∈I Li and s[r1] : P
j∈J′ Outj ≤P
42
3.3 Multi-Channel Subtyping
s[r2] : P
j∈J Lj. By Lemma 3.8, we obtain s[r1] ·
s[r1] : P
i∈I′ Ini
≤P s[r2] : P
i∈I Li
and s[r1] ·
s[r1] : P
j∈J′ Outj
≤P s[r2] : P
j∈J Lj. Since also I ∪J ̸= ∅we can thus
apply [S-Σ-2] of Definition 3.5 to obtain s[r1] ·
s[r1] : P
i∈I′ Ini + P
j∈J′ Outj
≤P
s[r2] : P
i∈I Li + P
j∈J Lj. With this and I ∪J ̸= ∅we can use [S-Σ-1] of Defi-
nition 3.5 to obtain s[r1] : P
i∈I′ Ini + P
j∈J′ Outj ≤P s[r2] : P
i∈I Li + P
j∈J Lj as
required.
Case [S
⊙-Σ-In]: Have ∆′ = s[r1] : P
i∈I′∪J′ In′
i and ∆= s[r2] : P
i∈I′
P
k∈Ii Lk
for which
s[r1] : P
i∈I′∪J′ In′
i ≤
⊙
P s[r2] : P
i∈I′
P
k∈Ii Lk
, ∀i ∈I′. s[r1] : In′
i ≤
⊙
P s[r2] : P
k∈Ii Lk,
with S
i∈I′ Ii ̸= ∅, and ∀j ∈J′. ∃i ∈I′. pre(In′
j) = pre(In′
i). Apply induction
hypothesis to obtain ∀i ∈I′. s[r1] : In′
i ≤P s[r2] : P
k∈Ii Lk. Thus ∀i ∈I′. s[r1] ·
(s[r1] : In′
i) ≤P s[r2] : P
k∈Ii Lk by Lemma 3.8. With this, S
i∈I′ Ii ̸= ∅, and ∀j ∈
J′. ∃i ∈I′. pre(In′
j) = pre(In′
i), we can apply [S-Σ-In] and [S-Σ-1] of Definition 3.5
to conclude similarly to Case [S
⊙-Σ].
Case [S
⊙-Σ-Out]: In this case we have the two local contexts ∆′ = s[r1] : P
i∈I′ Out′
i
and ∆= s[r2] : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj, for which also s[r1] : P
i∈I′ Out′
i ≤
⊙
P
s[r2] : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj, and ∀i ∈I′. s[r1] : Out′
i ≤
⊙
P s[r2] : P
k∈Ii Lk,
where S
i∈I′ Ii ∪J ̸= ∅, and ∀j ∈J. ∃i ∈I′. pre(Outj) = pre(Out′
i).
Sim-
ilar to previous cases, we apply the induction hypothesis and Lemma 3.8 to
∀i ∈I′. s[r1] : Out′
i ≤
⊙
P s[r2] : P
k∈Ii Lk to obtain ∀i ∈I′. s[r1] · (s[r1] : Out′
i) ≤P
s[r2] : P
k∈Ii Lk. Analogous to Case [S
⊙-Σ-In], we conclude by [S-Σ-Out] and [S-Σ-
1] of Definition 3.5.
Case [S
⊙-⊕]: Have ∆′ = s[r1] : L
i∈IP′
i ▶H′
i.T ′
i and ∆= s[r2] : L
j∈J Pj ▶Hj.Tj where
s[r1] : L
i∈IP′
i ▶H′
i.T ′
i ≤
⊙
PΣ s[r2] : L
j∈J Pj ▶Hj.Tj, and ∀i ∈I. s[r1] : H′
i.T ′
i ≤
⊙
P′
iPΣ
s[r2] : L
j∈Ji Pj ▶Hj.Tj, and J = S
i∈I Ji ̸= ∅, P
j∈J Pj = PΣ, and ∀i ̸= j ∈
I. Ji ∩Jj = ∅.
Similar to previous cases, we apply the induction hypothesis
and Lemma 3.8 to ∀i ∈I. s[r1] : H′
i.T ′
i ≤
⊙
P′
iPΣ s[r2] : L
j∈Ji Pj ▶Hj.Tj to obtain
∀i ∈I. s[r1] · (s[r1] : H′
i.T ′
i) ≤P′
iPΣ s[r2] : L
j∈Ji Pj ▶Hj.Tj. Analogous to Case [S
⊙-
Σ-In], we conclude by [S-⊕] and [S-Σ-1] of Definition 3.5.
Case [S
⊙-In]: We have ∆′ = s[r1] : q←p ? l(U).T ′ and ∆= s[r2] : q←p ? l(U).T, for
which s[r1] : q←p ? l(U).T ′ ≤
⊙
1 s[r2] : q←p ? l(U).T, and s[r1] : T ′ ≤
⊙
1 s[r2] : T, where
q ∈r2 and r1 ⊆r2. We can apply the induction hypothesis to obtain s[r1] : T ′ ≤1
s[r2] : T. With this and q ∈r2 and r1 ⊆r2 we can apply [S-In] of Definition 3.5
to obtain s[r1] · (s[r1] : q←p ? l(U).T ′) ≤1 s[r2] : q←p ? l(U).T. Then by [S-Σ-1] of
Definition 3.5 we have s[r1] : q←p ? l(U).T ′ ≤1 s[r2] : q←p ? l(U).T as required.
Case [S
⊙-Out]: We have ∆′ = s[r1] : p→q ! l⟨U⟩.T ′ and ∆= s[r2] : P ▶p→q ! l⟨U⟩.T
for which s[r1] : p→q ! l⟨U⟩.T ′ ≤
⊙
P s[r2] : P ▶p→q ! l⟨U⟩.T, r1 ⊆r2, p ∈r2, and
s[r1] : T ′ ≤
⊙
1 s[r2] : T. We can apply the induction hypothesis to obtain s[r1] : T ′ ≤1
s[r2] : T. With this, r1 ⊆r2, and p ∈r2 we can apply [S-Out] of Definition 3.5
43
3 Subtyping with Refinement
to obtain s[r1] · (s[r1] : p→q ! l⟨U⟩.T ′) ≤P s[r2] : P ▶p→q ! l⟨U⟩.T. Then apply [S-
Σ-1] of Definition 3.5 to obtain s[r1] : p→q ! l⟨U⟩.T ′ ≤P s[r2] : P ▶p→q ! l⟨U⟩.T as
required.
Case [S
⊙-τ]: We have ∆′ = c′ : τ.T ′ and ∆= c : P ▶τ.T with c′ : τ.T ′ ≤
⊙
P c : P ▶τ.T and
c′ : T ′ ≤
⊙
1 c : T. From the induction hypothesis we get c′ : T ′ ≤1 c : T. We can then
apply [S-τ-R] and [S-τ-L] of Definition 3.5 to obtain c′ ·(c′ : τ.T ′) ≤P c : P ▶τ.T. By
[S-Σ-1] of Definition 3.5 then c′ : τ.T ′ ≤P c : P ▶τ.T as required.
Case [S
⊙-∅]: Straightforward by [S-∅-1] of Definition 3.5.
Case [S
⊙-Split]: Straightforward by application of the induction hypothesis and [S-
Split] of Definition 3.5.
Let us revisit the courthouse protocol from our running example to help build more
intuition for refinement based multi-channel subtyping.
Example 10 (Running Example—Subtyping). Looking at the types from Example 8,
clearly the interactions between roles j and d are concealed to the others.
Indeed,
with the rules of Definition 3.5, we can assert that s[j] : Tj, s[d] : Td ≤1 s[c] : Tc, where
c = {j, d}.
To further illustrate subtyping, consider a more complex version of the courthouse
protocol. Here, the defendant, after notifying the judge about wanting a witness state-
ment, seeks out a meeting with the witness themselves. With Tp and Tw untouched, we
redefine only Tc, Td, and Tj as follows.
T ∗
c = j←p ? lws().
0.7 ▶j→p ! glt⟨bool⟩.1 ▶j→d ! rls⟨⟩
⊕0.3 ▶τ.
(
1 ▶d→w ! mtg⟨⟩.j←w ? st().1 ▶j→p ! glt⟨bool⟩
+ j←w ? st().1 ▶d→w ! mtg⟨⟩.1 ▶j→p ! glt⟨bool⟩
T ∗
d =
0.5 ▶d→j ! wk⟨⟩
⊕0.2 ▶d→j ! str⟨⟩
⊕0.3 ▶d→j ! wit⟨⟩.1 ▶d→w ! mtg⟨⟩
T ∗
j = j←p ? lws().
j←d ? wk().1 ▶j→p ! glt⟨bool⟩.1 ▶j→d ! rls⟨⟩
+ j←d ? str().1 ▶j→p ! glt⟨bool⟩.1 ▶j→d ! rls⟨⟩
+ j←d ? wit().j←w ? st().1 ▶j→p ! glt⟨bool⟩
Where, as before, for c = {j, d} it holds that s[j] : T ∗
j , s[d] : T ∗
d ≤1 s[c] : T ∗
c .
Notice how the interface now contains an internal action τ. The interface is created
without knowledge of the specifics of the witness and therefore needs to accommodate for
both the defendant-witness meeting coming before the witness-judge statement and vice
versa. This leads in T ∗
c to a probabilistic choice being followed in the case with probability
0.3 by a mixed choice containing receiving actions. A τ in between is necessary to connect
these choices. Also note, that only the interface type T ∗
c has mixed choice but not the
refinement.
⃝
44
3.3 Multi-Channel Subtyping
We will now present a subtyping derivation using these rules. Instead of the types
used in Example 7 and Example 9, we will here present the more complex setting from
above.
Example 11 (Running Example—Subtyping Derivation). We highlight the following
snippets from the subtyping derivation of s[j] : T ∗
j , s[d] : T ∗
d ≤1 s[c] : T ∗
c from Example 10,
to illustrate our subtyping rules. During the derivation, we will reuse the identifiers
Tj, Td, Tc for legibility; they do not refer to the types from our previous examples. Moving
bottom-up, we begin at the original statement to demonstrate how [S-Σ-1] handles cases
in which channels are pending.
s[j] : Tj, s[d] : T ∗
d ≤1 s[c] : Tc
s[j] · ∆≤1 s[c] : j←p ? lws().Tc
[S-In]
pend(s[d] · ∆)
s[d] · ∆≤1 ∅
[S-∅]
∆= s[j] : j←p ? lws().Tj, s[d] : T ∗
d ≤1 s[c] : j←p ? lws().Tc
[S-Σ-1]
Continuing upwards from the left-hand side of this tree, we have the following derivation.
Note that for the probabilistic sum in the type of s[c], the case “guilty” with 70%
probability is split into a case with 50% and a case with 20%. [S-⊕] splits the tree into
three branches. We give the branch that leads to [S-τ].
. . .
s[j] : j←w ? st().T ′
j, s[d] : T ′
d ≤1 s[c] : T ′
c
s[j] : j←w ? st().T ′
j, s[d] : T ′
d ≤0.3 s[c] : 0.3 ▶τ.T ′
c
[S-τ]
s[j] · (s[j] : Tj, s[d] : d→j ! wit⟨⟩.T ′
d) ≤0.3 s[c] : 0.3 ▶τ.T ′
c
[S-Link]
s[j] ·
s[j] : Tj, s[d] :
0.5 ▶d→j ! wk⟨⟩
⊕0.2 ▶d→j ! str⟨⟩
⊕0.3 ▶d→j ! wit⟨⟩.T ′
d
≤1 s[c] :
0.5 ▶Hglt.Trls
⊕0.2 ▶Hglt.Trls
⊕0.3 ▶τ.T ′
c
[S-⊕]
Finally, continuing on the previous branch, we see how [Sub-Σ-1] assembles a mixed
choice in the supertype of a subtype context without any mixed choices.
. . .
s[d] · ∆≤1 s[c] : 1 ▶d→w ! mtg⟨⟩.T1
[S-⊕]
. . .
s[j] · ∆≤1 s[c] : j←w ? st().T2
[S-In]
∆=
s[j] : j←w ? st().T ′
j,
s[d] : 1 ▶d→w ! mtg⟨⟩≤1 s[c] :
(
1 ▶d→w ! mtg⟨⟩.T1
+ j←w ? st().T2
[S-Σ-1]
⃝
In this chapter, we have introduced the novel refinement-based multi-channel subtyp-
ing of the probabilistic mixed choice multiparty session π-calculus. We first introduced
the idea and concept, then gave an intermediary single-channel subtyping (§ 3.1) to sim-
plify the transition from standard subtyping to our approach. Afterwards, we presented
the concept of active channels and introduced the corresponding new typing context Λ,
needed for keeping track of branches within subtyping derivations (§ 3.2). Finally, we
defined the main subtyping. To affirm the readers intuition, we proved two theorems on
45
3 Subtyping with Refinement
the size of the different subtypings in relation to each other. Firstly, Theorem 3.2, stating
that types which are related according to standard subtyping ≤◦are also related accord-
ing to single-channel subtyping (with probability one) ≤
⊙
1. Secondly, Theorem 3.7 stating
the same for single-channel subtyping ≤
⊙
P and multi-channel subtyping ≤P. Along the
way, we gave several examples to aid understanding and highlight important concepts.
Conceptually, the presented theory is already quite interesting; in the upcoming chap-
ter, however, we will prove just how powerful and functional the system actually is.
46
4 Properties
MPST systems can enforce many desirable properties on the run-time behaviour of
processes through the typing contexts which justify them. This chapter is dedicated
to proving such properties, fulfilled by the multi-channel subtyping system and the
processes of the probabilistic mixed choice multiparty session π-calculus. We show both
crucial standard properties, such as subject reduction (§ 4.2.1) and deadlock-freedom
(§ 4.2.2), as well as properties unique to our system. The latter includes a strong result
on flexibility which separates our work from the contributions of [Horne, 2020]: Any safe
and deadlock-free local context has a interface which is a single channel type (§ 4.1.3).
To show which good behaviour we can enforce on our processes through their typing,
we first need to focus on the types themselves.
4.1 Properties of Types and Subtyping
We begin with properties of probabilistic mixed choice multiparty session types and their
multi-channel subtyping.
Three main results are presented in this section: (1) The
subtyping relation (with probability one) ≤1 is a preorder. (2) Preservation of safety
and deadlock-freedom under subtyping and labelled transitions for local contexts with
an active channel Λ. (3) Any local context ∆has an interface which is a single channel.
First, we state three preliminary properties which are used not only for the larger
proofs of this section, but also subject reduction in Section 4.2. From here forward,
instead of regular commas “ , ”, we sometimes use “ ; ” for punctuation to avoid confusion
with composition of local contexts ∆1, ∆.
The following result states that an empty context can only be the supertype of an
empty context, where the probability of the relation is 1.
Lemma 4.1. Λ ≤P ∅implies Λ = ∅and P = 1.
Proof. Note that the side condition S
k∈[1..n] Jk ̸= ∅forbids for applications of [S-Σ-1]
on empty local contexts at the right of ≤P. Because of that, it is impossible to set an
active channel on the left of ≤P if the right is empty. By the subtyping rules, then the
only rule that can be applied to derive Λ ≤P ∅is [S-∅-1]. By [S-∅-1], then Λ = ∅and
P = 1.
Note that the opposite direction of the above lemma is not true, i.e., ∅≤P ∆does
not imply ∆= ∅(not even for P = 1). The subtyping rule [S-τ-R] allows for τ in ∆.
However, ∅≤1 ∆implies that all types in ∆are constructed from L, τ, and end only.
47
4 Properties
Lemma 4.2 and Lemma 4.3 state that the subtyping relation is preserved if the involved
local contexts are split or composed if the probability of the relation is one.
Lemma 4.2. ∆≤1 ∆1, ∆2 implies that there are ∆′
1 and ∆′
2 such that ∆= ∆′
1, ∆′
2;
∆′
1 ≤1 ∆1, and ∆′
2 ≤1 ∆2.
Proof. Assume ∆≤1 ∆1, ∆2. We proceed by induction on the number of assignments
in ∆2.
Base Case ∆2 = ∅: Then we can choose ∆′
1 = ∆and ∆′
2 = ∅.
Clearly, then ∆=
∆′
1, ∆′
2. Then ∆≤1 ∆1, ∆2 implies ∆′
1 ≤1 ∆1. Finally, by [S-∅-1], then ∆′
2 ≤1 ∆2.
Induction Step ∆2 = ∆3, c : T: By the subtyping rules, then the only rule that can be
applied in the bottom of the derivation tree for ∆≤1 ∆1, ∆2 is then [S-Split].
By [S-Split], then ∆= ∆1,1, ∆1,2 such that ∆1,1 ≤1 ∆1, ∆3 and ∆1,2 ≤1 c : T.
By the induction hypothesis, then ∆1,1 ≤1 ∆1, ∆3 implies that there are ∆′
1 and
∆′
3 such that ∆1,1 = ∆′
1, ∆′
3; ∆′
1 ≤1 ∆1, and ∆′
3 ≤1 ∆3.
By [S-Split], then
∆′
3 ≤1 ∆3 and ∆1,2 ≤1 c : T imply ∆′
3, ∆1,2 ≤1 ∆2. By choosing ∆′
2 = ∆′
3, ∆1,2,
then ∆= ∆1,1, ∆1,2 and ∆1,1 = ∆′
1, ∆′
3 imply ∆= ∆′
1, ∆′
2.
We already have
∆′
1 ≤1 ∆1. Finally, ∆′
2 = ∆′
3, ∆1,2 and ∆′
3, ∆1,2 ≤1 ∆2 imply ∆′
2 ≤1 ∆2.
Lemma 4.3. If ∆′
1 ≤1 ∆1 and ∆′
2 ≤1 ∆2 such that the compositions ∆′
1, ∆′
2 and ∆1, ∆2
are defined then ∆′
1, ∆′
2 ≤1 ∆1, ∆2.
Proof. Assume ∆′
1 ≤1 ∆1 and ∆′
2 ≤1 ∆2 and that the compositions ∆′
1, ∆′
2 and ∆1, ∆2
are defined. We show ∆′
1, ∆′
2 ≤1 ∆1, ∆2 by induction on the number of assignments in
∆2.
Base Case ∆2 = ∅: By Lemma 4.1, then ∆′
2 ≤1 ∆2 implies ∆′
2 = ∅. Then ∆′
1 ≤1 ∆1
implies ∆′
1, ∆′
2 ≤1 ∆1, ∆2.
Induction Step ∆2 = ∆3, c : T: By the subtyping rules, then the only rule that can be
applied in the bottom of the derivation tree for ∆′
2 ≤1 ∆2 is then [S-Split]. By
[S-Split], then ∆′
2 = ∆′
2,1, ∆′
2,2 such that ∆′
2,1 ≤1 ∆3 and ∆′
2,2 ≤1 c : T. Since
the compositions ∆′
1, ∆′
2 and ∆1, ∆2 are defined, so are the compositions ∆′
1, ∆′
2,1
and ∆1, ∆3. By the induction hypothesis, then ∆′
1 ≤1 ∆1 and ∆′
2,1 ≤1 ∆3 imply
∆′
1, ∆′
2,1 ≤1 ∆1, ∆3. By [S-Split], then ∆′
1, ∆′
2,1 ≤1 ∆1, ∆3 and ∆′
2,2 ≤1 c : T imply
∆′
1, ∆′
2 ≤1 ∆1, ∆2.
4.1.1 Subtyping is a Preorder
As is classic for subtyping relations (see [Pierce, 2002]), our ≤1-relation, too, is a pre-
order and towards this, we show reflexivity and transitivity separately. We restrict this
statement to the relation with probability one, not for general probabilities. This is not
to say that considerations of especially transitivity with non-one probabilities would not
be interesting. For ∆1 ≤P1 ∆2 ≤P2 ∆3 with P1 ̸= 1 and P2 ̸= 1 the probability P3 for
∆1 ≤P3 ∆3 is likely the product P3 = P1P2. However, the usefulness of such a statement
48
4.1 Properties of Types and Subtyping
is limited in the current setting; any “classic” usage of subtyping will be with ≤1, such
as in the typing rules. Clearly, we have to restrict the usage of the relation to local
contexts without an active channel, as supertype contexts never have an active channel.
We argue that this is hardly a restriction, as active channels are merely a syntactic tool
to help with the legibility of our subtyping rules.
The proof of reflexivity is rather simple thanks to the [S-Split] rule ensuring that the
right-hand side will always be a single channel for the remainder of the rules.
Proposition 4.4 (Reflexivity). The relation ≤1 restricted to local contexts (without an
active channel) is reflexive.
Proof. Due to [S-Split], it suffices to show reflexivity on typed channels, not local con-
texts.
∆≤1 ∆
c : T ≤1 c : T
∆, c : T ≤1 ∆, c : T
[S-Split]
We show the reflexivity c : T ≤1 c : T by induction on the type T. For T = end it follows
immediately from [S-end]. For T = P
i∈I Ini + P
j∈J Outj we have
∀i ∈I. c · (c : Ini) ≤1 c : Ini
c ·
c : P
i∈I Ini
≤1 c : P
i∈I Ini
[S-Σ-In]
∀j ∈J. c · (c : Outj) ≤1 c : Outj
c ·
c : P
j∈J Outj
≤1 c : P
j∈J Outj
[S-Σ-Out]
c ·
c : P
i∈I Ini + P
j∈J Outj
≤1 c : P
i∈I Ini + P
j∈J Outj
[S-Σ-2]
c : P
i∈I Ini + P
j∈J Outj ≤1 c : P
i∈I Ini + P
j∈J Outj
[S-Σ-1]
For all Ini = pi←qi ? li(Ui).Ti, the left-hand side immediately follows from [S-In]. For all
Outj = L
i∈Ij Pi ▶Hi.Ti, we have
∀i ∈Ij. c · (c : Hi.Ti) ≤Pi c : Pi ▶Hi.Ti
c ·
c : L
i∈Ij Pi ▶Hi.Ti
≤1 c : L
i∈Ij Pi ▶Hi.Ti
[S-⊕]
where all branches follow straightforwardly from [S-Out] or [S-τ].
For transitivity, too, the rule [S-Split] causes the proof to not be too complex.
Proposition 4.5 (Transitivity). The relation ≤1 restricted to local contexts (without an
active channel) is transitive.
Proof. Assume ∆1 ≤1 ∆2 and ∆2 ≤1 ∆3. We prove ∆1 ≤1 ∆3 by induction on the
number of assignments in ∆3.
Base Case ∆3 = ∅: By Lemma 4.1, then ∆2 ≤1 ∆3 implies that ∆2 = ∅. By Lemma 4.1,
then ∆1 ≤1 ∆2 implies that ∆1 = ∅. By [S-∅-1], then ∆1 ≤1 ∆3.
49
4 Properties
Induction Step ∆3 = ∆′
3, c : T: By the subtyping rules, then the only rule that can
be applied in the bottom of the derivation tree for ∆2 ≤1 ∆3 is then [S-Split].
By [S-Split], then ∆2 = ∆2,1, ∆2,2 such that ∆2,1 ≤1 ∆′
3 and ∆2,2 ≤1 c : T. By
Lemma 4.2, then ∆1 = ∆1,1, ∆1,2; ∆1,1 ≤1 ∆2,1, and ∆1,2 ≤1 ∆2,2. By the induction
hypothesis, then ∆1,1 ≤1 ∆2,1 and ∆2,1 ≤1 ∆′
3 imply ∆1,1 ≤1 ∆′
3 and, similarly,
∆1,2 ≤1 ∆2,2 and ∆2,2 ≤1 c : T imply ∆1,2 ≤1 c : T. By [S-Split], then ∆1,1 ≤1 ∆′
3
and ∆1,2 ≤1 c : T imply ∆1 ≤1 ∆3.
By Propositions 4.4 and 4.5, our subtyping relation fulfils all properties of a preorder.
Corollary 4.6. The relation ≤1 restricted to local contexts (without an active channel)
is a preorder.
The relation is not a partial order, however. A simple example for why ≤1 is not
antisymmetric involves stacked internal actions τ.
Example 12 (Subtyping is not Antisymmetric). Consider the following two types of
the channel s[r], s[r] : 1 ▶τ and s[r] : 1 ▶τ.1 ▶τ.
Then clearly they are not the same,
s[r] : 1 ▶τ ̸= s[r] : 1 ▶τ.1 ▶τ, but we have:
s[r] : 1 ▶τ ≤1 s[r] : 1 ▶τ Prop. 4.4
s[r] : 1 ▶τ ≤1 s[r] : 1 ▶τ.1 ▶τ [S-τ-R]
s[r] : 1 ▶τ ≤1 s[r] : 1 ▶τ Prop. 4.4
s[r] · (s[r] : τ.1 ▶τ) ≤1 s[r] : 1 ▶τ [S-τ-L]
s[r] · (s[r] : 1 ▶τ.1 ▶τ) ≤1 s[r] : 1 ▶τ [S-⊕]
s[r] : 1 ▶τ.1 ▶τ ≤1 s[r] : 1 ▶τ
[S-Σ-1]
Therefore they are both subtypes of each other.
⃝
4.1.2 Safety and Deadlock-Freedom
Being arguably the two most important and commonly shown properties of MPST sys-
tems, we, too, show that our system fulfils safety and deadlock-freedom. Recall that
our goal is to infer that typed processes satisfy them from the fact that their types do.
Thus we need to first define these properties for local contexts (with an active channel).
After doing so, we show a key result on their preservation under subtyping and labelled
transitions.
We inherit the predicates for safety and deadlock-freedom from [Scalas and Yoshida,
2019; Peters and Yoshida, 2024], extended to local contexts with an active channel. A
context is safe if it contains for every unguarded output for which there is an unguarded
input on the same prefix also an unguarded matching input.
Definition 4.7 (Safety Property). The co-inductive property φ is a safety property of
local contexts Λ if and only if for all φ(Λ)
1. if Λ = ∆(without active channel), then
a) transitions ∆
s : p→q ! l⟨U⟩
−−−−−−−→P and ∆
s : q←p ? l′(U′)
−−−−−−−−→1 imply ∆
s : pq : l⟨U⟩
−−−−−−→P ∆′ and
φ(∆′), and
50
4.1 Properties of Types and Subtyping
b) the transition ∆
s : τ
−−→P ∆′ implies φ(∆′).
2. if Λ = s[r] · ∆(with active channel), then
a) transitions ∆
s : p→q ! l⟨U⟩
−−−−−−−→P with p ∈r and ∆
s : q←p ? l′(U′)
−−−−−−−−→1 imply Λ
s : pq : l⟨U⟩
−−−−−−→P
∆′ and φ(∆′), and
b) the transition Λ
s : τ
−−→P ∆′ implies φ(∆′).
We say that Λ is safe, safe(Λ), if φ(Λ) for some safety property φ.
Following almost immediately from this definition, we show that a safe context ∆can
only transition into a safe context.
Lemma 4.8. If ∆7→∗
P ∆′ and safe(∆) then safe(∆′).
Proof. Assume ∆7→P ∆′ and safe(∆). By Definition 2.10, then either ∆
s : p→q ! l⟨U⟩
−−−−−−−→P,
and ∆
s : p←q ? l′(U′)
−−−−−−−−→1, and ∆
s : pq : l⟨U⟩
−−−−−−→P ∆′, or ∆
s : τ
−−→P ∆′. By Definition 4.7, then
safe(∆′).
The proof for ∆7→∗
P ∆′ and safe(∆) follows then by an induction on the
number of steps in ∆7→∗
P ∆′.
Next is deadlock-freedom. A context is deadlock-free if it can terminate only with ∅
and similarly a process is deadlock-free if it can terminate modulo ≡only in 0. Also,
we define pending contexts to be deadlock-free, as their actions are treated by another
branch of the subtyping derivation tree. This is merely a technicality with no major
impact, however, given that pending contexts appear only in subtyping derivation trees
and not type judgements.
Definition 4.9 (Deadlock-Freedom). The local context (with an active channel) Λ is
deadlock-free, denoted as dfree(Λ), if Λ 7→∗Λ′ ↛implies Λ′ = ∅for all Λ′ or if pend(Λ).
A process P is deadlock-free if and only if for all P ′ such that P =⇒P P ′ either (a) P ′ ↛
and P ≡0, or (b) there are P ′′ and P′ such that P ′ −→P′ P ′′.
Similar to [Scalas and Yoshida, 2019; Peters and Yoshida, 2024], we show that our
type system satisfies the standard properties of subtyping, subject reduction, and ensures
deadlock-freedom.
Theorem 4.10 (Subtyping and Properties).
1. If Λ1 ≤P Λ2 and safe(Λ2), then
a) safe(Λ1).
b) If Λ1 7→P1 Λ′
1 then there exist Λ′
2, P2, P3 such that Λ2 7→∗
P1P2 Λ′
2 and Λ′
1 ≤P3 Λ′
2
with P = P2P3 and safe(Λ′
2).
2. If Λ1 ≤P Λ2, safe(Λ2), and dfree(Λ2), then
a) dfree(Λ1).
b) If Λ1 7→P1 Λ′
1 then there exist Λ′
2, P2, P3 such that Λ2 7→∗
P1P2 Λ′
2 and Λ′
1 ≤P3 Λ′
2
with P = P2P3 and dfree(Λ′
2).
3. Checking safe(∆) and dfree(∆) is decidable.
51
4 Properties
Proof. 1.(a) Assume Λ1 ≤P Λ2 and safe(Λ2). We show safe(Λ1) by structural induction
on the derivation of Λ1 ≤P Λ2.
Case [S-Σ-1]: We have the two contexts Λ1 = ∆= s[r1] : T1, . . . , s[rn] : Tn, and
Λ2 = s[r] : P
j∈J1 Lj + . . . + P
j∈Jn Lj, where s[rk] · ∆≤P s[r] : P
j∈Jk Lj for
all k ∈[1..n]. Since safe(Λ2), we also have safe
s[r] : P
j∈Jk Lj
for all k ∈
[1..n]. By the induction hypothesis, then safe(s[rk] · ∆) for all k ∈[1..n]. By
Definition 3.4 for transitions for typing contexts with an active channel, for
each transition Λ1 7→∗
Pk Λ′
1 there exists a k ∈[1..n] such that s[rk]·∆7→∗
Pk Λ′
1.
Since safe(s[rk] · ∆) for all k ∈[1..n], then safe(Λ1).
Case [S-Σ-2]: We have Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′ Ini + P
j∈J′ Outj
and Λ2 =
s[r2] : P
i∈I Li + P
j∈J Lj, where s[r1] ·
∆, s[r1] : P
i∈I′ Ini
≤P s[r2] : P
i∈I Li,
and s[r1] ·
∆, s[r1] : P
j∈J′ Outj
≤P s[r2] : P
j∈J Lj.
Since safe(Λ2), then
safe
s[r2] : P
i∈I Li
and safe
s[r2] : P
j∈J Lj
.
By the induction hypothe-
sis, safe
s[r1] ·
∆, s[r1] : P
i∈I′ Ini
and safe
s[r1] ·
∆, s[r1] : P
j∈J′ Outj
.
Then safe(Λ1) as required.
Case [S-Σ-In]: In this case Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′∪J′ In′
i
. By Definition 3.4
of transitions for contexts with an active channel, Λ1 has no transitions and
is thus safe.
Case [S-Σ-Out]: In this case we have Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′ Out′
i
and Λ2 =
s[r2] : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj, where for all i ∈I′ it holds that s[r1] ·
(∆, s[r1] : Out′
i) ≤P s[r2] : P
k∈Ii Lk. Since safe(Λ2), also safe
s[r2] : P
k∈Ii Lk
for all i ∈I′. By the induction hypothesis, then safe(s[r1] · (∆, s[r1] : Out′
i))
for all i ∈I′. Thus safe(Λ1) as required.
Case [S-⊕]: This case is similar to the previous case (with L instead of P).
Case [S-In]: We have Λ1 = s[r1] · (∆, s[r1] : q←p ? l(U).T ′). Similar to [S-Σ-In],
Λ1 can not perform any transitions and is thus safe.
Case [S-Out]: In this case Λ1 = s[r1] · (∆, s[r1] : p→q ! l⟨U⟩.T ′) and q /∈∆. By
Definition 3.4 for transitions for contexts with an active channel, the only
transition that Λ1 might perform is a communication using the available type
s[r1] : p→q ! l⟨U⟩.T ′ of a sending action. Because of q /∈∆, the channel q
containing the matching receiving type is not in ∆. Then Λ1 has no transitions
and is thus safe.
Case [S-Link]: In this case we have contexts Λ1 = s[r1] ·
∆, s[r1] : p→q ! l⟨U⟩.T ′
1,
s[r2] : q←p ? l(U).T ′
2 +P
i∈I Li
and Λ2 =s[r] : T, with ∆, s[r1] : T ′
1, s[r2] : T ′
2 ≤P
Λ2. By safe(Λ2) and the induction hypothesis, then safe(∆, s[r1] : T ′
1, s[r2] : T ′
2).
By Def. 3.4, s[r1] ·
∆, s[r1] : p→q ! l⟨U⟩.T ′
1, s[r2] : q←p ? l(U).T ′
2 + P
i∈I Li
can only perform transition reducing the output on channel s[r1], resulting
52
4.1 Properties of Types and Subtyping
in a safe transition. Since we already have safe(∆, s[r1] : T ′
1, s[r2] : T ′
2), then
safe(Λ1) as required.
Case [S-τ-L]: We have Λ1 = s[r1] · (∆, s[r1] : τ.T ′) and Λ2 = s[r2] : T, where
∆, s[r1] : T ′ ≤P Λ2.
Since safe(Λ2) and by the induction hypothesis, then
safe(∆, s[r1] : T ′). Therefore safe(Λ1).
Case [S-τ-R]: We have Λ2 = s[r] : P ▶τ.T and Λ1 ≤1 s[r] : T.
Since safe(Λ2)
and by Definition 4.7 of safety, then also safe(s[r] : T).
By the induction
hypothesis, then safe(Λ1).
Case [S-∅-1]: In this case Λ1 = ∅and thus safe(Λ1).
Case [S-∅]: In this case Λ1 = c · ∆and pend(Λ1). Then Λ1 is safe as pending
contexts cannot perform any transitions.
Case [S-Split]: In this case Λ1 = ∆′
1, ∆′
2; Λ2 = ∆, c : T; ∆′
1 ≤1 ∆, and ∆′
2 ≤1 c : T.
Because of safe(Λ2), we have safe(c : T) and safe(∆). By the induction hypoth-
esis, then safe(∆′
1) and safe(∆′
2). Since both contexts are safe individually, for
∆′
1, ∆′
2 to not be safe, it must be that ∆′
1
s : p←q ? l′(U′)
−−−−−−−−→1 and ∆′
2
s : q→p ! l⟨U⟩
−−−−−−−→P
but not ∆′
1, ∆′
2
s : pq : l⟨U⟩
−−−−−−→P ∆′′ (or vice versa). Let us assume for contradiction
that this is indeed the case (the other case in which the output occurs in ∆′
1
and the input in ∆′
2 is symmetric). Then there exists a channel s[ri] : T ′ ∈∆′
2
such that s[ri] : T ′
s : q→p ! l⟨U⟩
−−−−−−−→P. By the subtyping rules, in particular [S-Σ-1],
[S-Σ-2], [S-Σ-Out], and [S-Out], the same output exists in c : T, too. Now,
either ∆
s : p←q ? l′′(U′′)
−−−−−−−−−→1 or not.
Case ∆
s : p←q ? l′′(U′′)
−−−−−−−−−→1: Then since c : T
s : q→p ! l⟨U⟩
−−−−−−−→P by safe(∆, c : T), it must
be that also ∆
s : p←q ? l(U)
−−−−−−−→1. By the subtyping rules, in particular [S-Σ-1],
[S-Σ-2], [S-Σ-In], and [S-In], this input is also in ∆′
1, thus ∆′
1, ∆′
2
s : pq : l⟨U⟩
−−−−−−→P
∆′′, a contradiction.
Case ¬∆
s : p←q ? l′′(U′′)
−−−−−−−−−→1: By the subtyping rules, in particular rules [S-Σ-1],
[S-Σ-2], [S-Σ-In], and [S-In], there can be no input prefix in ∆′
1 which was
not also in ∆. Thus ¬∆′
1
s : p←q ? l′(U′)
−−−−−−−−→1, a contradiction.
Thus, safe(∆′
1, ∆′
2).
1.(b) Assume Λ1 ≤P Λ2, safe(Λ2), and Λ1 7→P1 Λ′
1. By above, safe(Λ1). By the Def-
inition 4.7 of safety, then safe(Λ′
1) and safe(Λ′
2) for all Λ′
2 and all P∗such that
Λ2 7→∗
P∗Λ′
2. We have to show that there are Λ′
2, P2, P3 such that Λ2 7→∗
P1P2 Λ′
2 and
Λ′
1 ≤P3 Λ′
2 with P = P2P3 and safe(Λ′
2). We proceed by structural induction on
the derivation of Λ1 ≤P Λ2.
Case [S-Σ-1]: In this case Λ1 = ∆; Λ2 = s[r] : P
j∈J1 Lj + . . . + P
j∈Jn Lj, and
s[rk] · (∆) ≤P s[r] : P
j∈Jk Lj for all k ∈[1..n].
By Definition 3.4 for the
transitions of local contexts (with or without active channels), Λ1 7→P1 Λ′
1
53
4 Properties
implies that there is one channel s[ri] for which s[ri] · ∆7→P1 Λ′
1.
Since
safe(Λ2), then also safe
s[r] : P
j∈Ji Lj
. By the induction hypothesis, then
there are Λ′
2, P2, P3 such that s[r] : P
j∈Ji Lj 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and P =
P2P3. Then also Λ2 7→∗
P1P2 Λ′
2.
Case [S-Σ-2]: In this case Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′ Ini + P
j∈J′ Outj
; Λ2 =
s[r2] : P
i∈I Li + P
j∈J Lj, and s[r1]·
∆, s[r1] : P
j∈J′ Outj
≤P s[r2] : P
j∈J Lj.
By Definition 3.4 for transitions of local contexts, Λ1 7→P1 Λ′
1 implies that
s[r1] ·
∆, s[r1] : P
j∈J′ Outj
7→P1 Λ′
1. As safe(Λ2), also safe
s[r2] : P
j∈J Lj
.
Then, by the induction hypothesis, there exist Λ′
2, P2, and P3 such that
s[r2] : P
j∈J Lj 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and P = P2P3. Then also Λ2 7→∗
P1P2 Λ′
2.
Case [S-Σ-In]: In this case Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′∪J′ In′
i
. By Definition 3.4
for transitions of local contexts with an active channel, Λ1 ↛, i.e., Λ1 has
no transitions. Accordingly, this case contradicts the assumption Λ1 7→P1 Λ′
1
and, thus, holds trivially.
Case [S-Σ-Out]: This case is similar to the Case [S-Σ-2]. We have Λ1 = s[r1] ·
∆, s[r1] : P
i∈I′ Out′
i
; Λ2 = s[r2] : P
i∈I′
P
k∈Ii Lk
+ P
j∈J Outj, and s[r1] ·
(∆, s[r1] : Out′
i) ≤P s[r2] : P
k∈Ii Lk for all i ∈I′. By Definition 3.4 for transi-
tions of local contexts, Λ1 7→P1 Λ′
1 implies that there is one i ∈I′ such that
s[r1] · (∆, s[r1] : Out′
i) 7→P1 Λ′
1. Since safe(Λ2), then also safe
s[r2] : P
k∈Ii Lk
.
Then, by the induction hypothesis, there exist Λ′
2, P2, and P3 such that
s[r2] : P
k∈Ii Lk 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and P = P2P3. Then also Λ2 7→∗
P1P2 Λ′
2.
Case [S-⊕]: In this case we have Λ1 = s[r1] ·
∆, s[r1] : L
i∈IP′
i ▶H′
i.T ′
i
; Λ2 =
s[r2] : L
j∈J Pj ▶Hj.Tj, and
s[r1] · (∆, s[r1] : H′
i.T ′
i) ≤P′
iP s[r2] :
M
j∈Ji
Pj ▶Hj.Tj
(4.1)
for all i ∈I, where P = P
j∈J Pj and J = S
i∈I Ji. By Definition 3.4 for
transitions of local contexts with an active channel, then Λ1 7→P1 Λ′
1 implies
that there is some k ∈I such that P′
k = P1 and s[r1] : T ′
k = Λ′
1. Then also
s[r1] · (∆, s[r1] : H′
k.T ′
k) 7→1 Λ′
1.
As in the previous cases, safe(Λ2) implies
safe
s[r2] : L
j∈Jk Pj ▶Hj.Tj
. With P′
k = P1 the (4.1) above for the case k
becomes:
s[r1] · (∆, s[r1] : H′
k.T ′
k) ≤P1P s[r2] :
M
j∈Jk
Pj ▶Hj.Tj
Then, by the induction hypothesis, there exist Λ′
2, P′
2, and P3 such that
s[r2] : L
j∈Jk Pj ▶Hj.Tj 7→∗
P′
2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and P1P = P′
2P3.
Thus also
Λ2 7→∗
P′
2 Λ′
2. By choosing P2 =
P
P3 and with P1P = P′
2P3, then Λ2 7→∗
P1P2 Λ′
2 as
required. Finally, P =
P
P3P3 = P2P3.
54
4.1 Properties of Types and Subtyping
Case [S-In]: In this case Λ1 = s[r1] · (∆, s[r1] : q←p ? l(U).T ′) and P = 1. By
Definition 3.4 for transitions of local contexts with an active channel, Λ1 has
no transitions. Thus, this case holds trivially.
Case [S-Out]: In this case Λ1 = s[r1] · (∆, s[r1] : p→q ! l⟨U⟩.T ′) and q /∈∆. By
Definition 3.4 for transitions of local contexts with an active channel and
because of q /∈∆; Λ1 has no transitions. Thus, this case holds trivially.
Case [S-Link]: Here, we have the contexts Λ1 = s[r1] ·
∆, s[r1] : p→q ! l⟨U⟩.T ′
1,
s[r2] : q←p ? l(U).T ′
2 + P
i∈I Li
and Λ2 = s[r2] : T, for which (∆, s[r1] : T ′
1,
s[r2] : T ′
2) ≤P Λ2. By Definition 3.4 for transitions of local contexts with an
active channel, then Λ1 7→P1 Λ′
1 implies P1 = 1 and Λ′
1 = ∆, s[r1] : T ′
1, s[r2] : T ′
2.
We have to show that there are Λ′
2, P2, P3 such that Λ2 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3
Λ′
2, and P = P2P3. By reflexivity Λ2 7→∗
1 Λ2, i.e., we can choose Λ′
2 = Λ2
and P2 = 1 such that Λ2 7→∗
P1P2 Λ′
2. From Λ′
1 = ∆, s[r1] : T ′
1, s[r2] : T ′
2 and
∆, s[r1] : T ′
1, s[r2] : T ′
2 ≤P Λ2 and by choosing P3 = P, we obtain then Λ′
1 ≤P3
Λ′
2. Finally, P = P2P3.
Case [S-τ-L]: In this case Λ1 = s[r1] · (∆, s[r1] : τ.T ′) and Λ2 = s[r2] : T with
∆, s[r1] : T ′
1 ≤P Λ2. By Definition 3.4 for transitions of local contexts with an
active channel, then Λ1 7→P1 Λ′
1 implies that Λ′
1 = ∆, s[r1] : T ′ and P1 = 1.
By reflexivity and by choosing Λ′
2 = Λ2 and P2 = 1, then Λ2 7→∗
P1P2 Λ′
2.
By choosing P3 = P, then ∆, s[r1] : T ′
1 ≤P Λ2 becomes Λ′
1 ≤P3 Λ′
2. Finally,
P = P2P3.
Case [S-τ-R]: In this case Λ1 = Λ and Λ2 = s[r2] : P ▶τ.T, with Λ1 ≤1 s[r2] : T.
Since safe(Λ2), then also safe(s[r2] : T). By the induction hypothesis, then
there are Λ′
2, P′
2, P3 such that s[r2] : T 7→∗
P1P′
2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and 1 = P′
2P3. By
Definition 2.10, then Λ2 7→∗
PP1P′
2 Λ′
2. By choosing P2 = PP′
2, then Λ2 7→∗
P1P2 Λ′
2.
Finally, because of 1 = P′
2P3 and P2 = PP′
2, then P = PP′
2P3 = P2P3.
Case [S-∅-1]: In this case Λ1 = ∅has no transitions. Thus, this case holds triv-
ially.
Case [S-∅]: In this case Λ1 = c · ∆and pend(c · ∆). By Definition 3.6 of pending
contexts, then Λ1 has no transitions. Thus, this case holds trivially.
Case [S-Split]: In this case Λ1 = ∆′
1, ∆′
2; Λ2 = ∆, c : T, P = 1, ∆′
1 ≤1 ∆, and
∆′
2 ≤1 c : T. From safe(Λ2) and 1.(a) above, then safe(∆), safe(c : T), safe(∆′
1),
and safe(∆′
2). By Definition 2.10 for local contexts without active channel,
then Λ1 7→P1 Λ′
1 was performed by ∆′
1; ∆′
2, or by both contexts together.
Case of ∆′
1 7→P1 Λ′
1: By the induction hypothesis then, there are Λ′
2, P2, P3
such that ∆7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2 and P = P2P3. Then also Λ2 7→∗
P1P2 Λ′
2.
Case of ∆′
2 7→P1 Λ′
1: By the induction hypothesis then, there are Λ′
2, P2, P3
such that c : T 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2 and P = P2P3. Then also Λ2 7→∗
P1P2
Λ′
2.
55
4 Properties
Case of ∆′
1,∆′
2 7→P1 Λ′
1 by an interaction between ∆′
1 and ∆′
2: By Defini-
tion 2.10, then ∆′
1
s : p←q ? l(U)
−−−−−−−→1 ∆′′
1 and ∆′
2
s : q→p ! l⟨U⟩
−−−−−−−→P1 ∆′′
2 (or vice
versa). We only show this case, as the other in which the output occurs
in ∆′
1 and the input in ∆′
2 is symmetric. Then there exists a channel
s[ri] : T ′ ∈∆′
2 such that s[ri] : T ′
s : q→p ! l⟨U⟩
−−−−−−−→P1. By the subtyping rules,
in particular [S-Σ-1], [S-Σ-2], [S-Σ-Out], and [S-Out], the same output
exists in c : T, too. Then c : T
s : q→p ! l⟨U⟩
−−−−−−−→P1 c : To. Since the same out-
put is reduced in both steps, ∆′
2 ≤1 c : T; ∆′
2
s : q→p ! l⟨U⟩
−−−−−−−→P1 ∆′′
2, and
c : T
s : q→p ! l⟨U⟩
−−−−−−−→P1 c : To imply ∆′′
2 ≤1 c : To. Now, assume for contradic-
tion that ¬∆
s : p←q ? l′(U′)
−−−−−−−−→1. By the subtyping rules, in particular [S-Σ-1],
[S-Σ-2], [S-Σ-In], and [S-In], there can be no input prefix in ∆′
1 which was
not also in ∆. But ∆′
1
s : p←q ? l(U)
−−−−−−−→1, a contradiction. Thus, it must be that
∆
s : p←q ? l′(U′)
−−−−−−−−→1. Since safe(Λs), then ∆
s : p←q ? l(U)
−−−−−−−→1 ∆i and Λ2 7→P1 Λ′
2,
where Λ′
2 = ∆i, c : To. By choosing P2 = 1, then Λ2 7→P1P2 Λ′
2. Since
the same input is reduced in both steps, ∆′
1 ≤1 ∆; ∆′
1
s : p←q ? l(U)
−−−−−−−→1 ∆′′
1,
and ∆
s : p←q ? l(U)
−−−−−−−→1 ∆i imply ∆′′
1 ≤1 ∆i. By [S-Split], ∆′′
2 ≤1 c : To and
∆′′
1 ≤1 ∆i imply Λ′
1 ≤1 Λ′
2. By choosing P3 = 1, then Λ′
1 ≤P3 Λ′
2. Finally,
P = 1 = P2P3.
2.(a) Assume Λ1 ≤P Λ2, safe(Λ2), and dfree(Λ2). We show dfree(Λ1) by contradiction.
Assume the contrary, i.e., assume that Λ1 7→∗
P1 Λ′
1 such that Λ′
1 ̸= ∅and Λ′
1 ↛.
By Theorem 4.10.1(b) above, then Λ2 7→∗
P1P2 Λ′
2; Λ′
1 ≤P3 Λ′
2, and P = P2P3. By
Lemma 4.8, then safe(Λ′
2). Since dfree(Λ2), then dfree(Λ′
2). By Definition 2.10,
then Λ′
1 does not contain any unguarded τ and instead all guards are inputs or
outputs but there is no matching input and output. Moreover, we can neglect
channels typed as end. Hence, we can assume that all channels of Λ′
1 are typed by
an unguarded sum containing at least one summand and all unguarded summands
are outputs or inputs. Each input and output on the left hand side of subtyping is
checked in the proof tree of Λ′
1 ≤P3 Λ′
2 by one of the rules [S-In], [S-Out], [S-Link],
or [S-∅]. Since all these rules require an active channel on the left hand side of the
subtyping relation and since [S-Σ-1] is the only rule introducing active channels,
these inputs and outputs passed the rule [S-Σ-1] in the proof tree of Λ1 ≤1 Λ2
before moving further upwards to one of the rules [S-In], [S-Out], [S-Link], or [S-
∅]. Since [S-Σ-1] creates a branch for every channel occurring on the left, each
input and each output is in one of these branches checked with its channel marked
as active. For every branch in the proof tree of Λ′
1 ≤P3 Λ′
2 consider the lowest
application of [S-Σ-1]. For each unguarded sum in Λ′
1 there is one such application
of [S-Σ-1], but one application of [S-Σ-1] may cover several unguarded sums of Λ′
1.
Because of the side condition S
k∈[1..n] Jk ̸= ∅, for each application of [S-Σ-1], the
right hand side of the conclusion cannot be empty and at least one not empty sum
is produced in the precondition ∀k ∈[1..n] . s[rk]·(∆) ≤Pi s[r] : P
j∈Jk Lj. Then for
56
4.1 Properties of Types and Subtyping
each lowest application of [S-Σ-1] at least one of the Lj becomes an input or output
(possibly guarded by τ). All such inputs and outputs in Lj’s are introduced by
[S-In] or [S-Out]. Then the same inputs and outputs (with the same prefix, label,
and message type) are also contained in Λ′
1. Since Λ′
1 does not contain unguarded
τ, none of these outputs and inputs in Λ′
1 is guarded by τ. Since we considered the
lowest application of [S-Σ-1], none of these outputs and inputs is guarded by other
sums that results from inputs or outputs also present in Λ′
2. These outputs and
inputs can also not be guarded by other inputs and outputs that appear only in
Λ′
1. Such other inputs and outputs in Λ′
1 describe interactions introduced between
channels that are unified to a single channel in Λ′
2. They are checked by the rule
[S-Link] or [S-∅]. The rule [S-Link] was not used, because else a matching pair of
additional input and output would be unguarded in Λ′
1. Then Λ′
1 can do a step
reducing these two actions contradicting the assumption Λ′ ↛. Also rule [S-∅]
cannot be used, because without [S-Link] for such additional inputs or outputs it
will create empty right hand sides, i.e., empty Lj’s. Then the outputs and inputs in
Λ′
1 that lead to non-empty Lj’s in the lowest application of [S-Σ-1] are unguarded
in Λ′
1. The mentioned inputs and outputs in Lj’s are unguarded or guarded only
by τ in Λ′
2. By the subtyping rules and [S-Σ-In] and [S-Σ-Out] in particular, all
unguarded inputs, but not necessarily all unguarded outputs, from Λ′
2 appear in
Λ′
1. Since dfree(Λ′
2) and since all such inputs appear on both sides, at least one
of those inputs together with a matching output can be reduced in a step of Λ′
2
possibly after some τ-steps. By the side condition on pre() in [S-Out], not each
output of Λ′
2 appears in Λ′
1 but for each omitted output there is another output
with the same prefix (same roles) in both Λ′
1 and Λ′
2. By safe(Λ′
2), then at least one
pair of output and input in both Λ′
1 and Λ′
2 is matching and can together perform
a step. But then Λ′
1 7→P′ contradicting Λ′
1 ↛. We conclude that the assumption
Λ′
1 ↛was wrong, i.e., that dfree(Λ1).
2.(b) Follows from 2.(a) and 1.(b).
3. We inherit the definitions of safety and deadlock-freedom from [Scalas and Yoshida,
2019]. Accordingly, also the proof of the decidability of these predicates is similar
to [Scalas and Yoshida, 2018] (the technical report of [Scalas and Yoshida, 2019]).
The main argument (here and in [Scalas and Yoshida, 2018]) is, that the transitive
closure of the transition relation on local contexts defined in Definition 2.10 induces
a finite-state transition system. Hence, algorithms to check safety and deadlock-
freedom can be built for each local context on its respective finite-state transition
system. One way to do that, i.e., one strategy on how to build these algorithms,
is presented in [Scalas and Yoshida, 2018].
4.1.3 Interface Existence
Our subtyping is very flexible. Indeed, for every safe and deadlock-free collection of
channel types there exists their interface as a single channel type. Note that this is a
major difference to [Horne, 2020].
57
4 Properties
Theorem 4.11 (Interface Existence). For all ∆′ = {s[ri] : T ′
i}i∈I with safe(∆′) and
dfree(∆′, ∆) for some ∆there are r and T such that r ⊆S
i∈I ri and ∆′ ≤1 s[r] : T.
Proof. We fix ∆′ = {s[ri] : T ′
i}i∈I with safe(∆′) and dfree(∆′, ∆) for some ∆. If I = ∅or
all T ′
i = end, then we can choose r = S
i∈I ri and T = end such that ∆′ ≤1 s[r] : T by
[S-∅]. Else and without loss of generality, assume I = [1..n] with n > 1 and T ′
i ̸= end
for all i ∈I. By [S-Σ-1], if there are s[ri] · ∆′ ≤1 s[r] : P
j∈Ji Lj for all i ∈I then
we can choose T = P
j∈J1 Lj + . . . + P
j∈Jn and are done. Hence, we fix i and show
s[ri] · ∆′ ≤1 s[r] : P
j∈Ji Lj. If pend(s[ri] · ∆′) then, by [S-∅], we can choose Ji = ∅and
are done. Else the type T ′
i of s[ri] in ∆′ is guarded by a mixed choice that may contain
τ, inputs, or outputs. We perform an induction on the structure of T ′
i.
Case T ′
i = end: This case violates the assumption T ′
i ̸= end for all i ∈I.
Case T ′
i = P
j∈J L′
j: Each L′
j is of one of the following cases:
Case L′
j = p←q ? l(U).T ′
j: If q /∈∆′, then the input is removed with [S-Out].
Then we conclude with the induction hypothesis and the respective input in
front of T. Else if q ∈∆′, then pend(s[ri] · ∆′′), where ∆′′ is obtained from
∆′ by replacing T ′
i with L′
j. Then the respective part of some Lx is empty.
By [S-∅], then s[ri] · ∆′ ≤1 s[r] : P
j∈Ji Lj.
Case L′
j = L
k∈K Pk ▶Hk.T ′
k: Each Hk.T ′
k is of one of the following cases:
Case τ.T ′
k: The τ can be removed with [S-τ-L]. Then we conclude with the
induction hypothesis.
Case p→q ! l⟨U⟩.T ′
k: If q /∈∆′, then the output is removed with [S-Out].
Then we conclude with the induction hypothesis and the respective out-
put in front of T. Else we have q = ∆′. By dfree(∆′, ∆) and safe(∆′),
then ∆′, ∆can perform a step. If no step of ∆′, ∆involves this output
then pend(s[ri] · ∆′′), where ∆′′ is obtained from ∆′ by replacing T ′
i with
L′
j. By [S-∅], then s[ri] · ∆′ ≤1 s[r] : P
j∈Ji Lj. Otherwise, the match-
ing input is unguarded in ∆′. By [S-Link], we can then conclude by the
induction hypothesis.
We do not prove a related statement for the opposite direction, because it is much
simpler. For example by reflexivity of ≤1 (Proposition 4.4) the following holds trivially.
If safe(s[r] : T) then there are r1, . . . , rn with r ⊆S
i∈[1..n] ri and T ′
1, . . . , T ′
n such that
{s[ri] : T ′
i}i∈[1..n] ≤1 s[r] : T.
Finding a more meaningful statement for “a refinement
always exits” needs careful crafting, but it is quite conceivable that an interesting yet
general result can be shown.
4.2 Properties of Typed Processes
Having introduced safety and deadlock-freedom for types, in this section we will see how
these properties are transferable to processes justified by safe and deadlock-free types.
58
4.2 Properties of Typed Processes
The main results are Theorem 4.19, subject reduction, and Theorem 4.24, deadlock-
freedom.
4.2.1 Subject Reduction
This subsection is dedicated to the Subject Reduction Theorem, which states that typed
processes can only reduce to typed processes. By extension, any desirable feature en-
forced by the typing is preserved for all interaction behaviour and holds for all reductions.
It is clear to see why this is greatly beneficial and thus subject reduction is a key property
across type theory. For its proof, a significant number of smaller lemmas are shown first.
Most notably, these include Lemma 4.14, subject congruence, and 4.18, substitution.
The eponymous Theorem 4.19 is stated and proven afterwards.
Preliminary Lemmas
We will now show eight smaller lemmas needed for the proof of Theorem 4.19. We chose
to give some properties of types and subtyping here, instead of the previous section, as
they are only used in the proofs of this section.
The first two results are small statements used in the proof of subject congruence.
Lemma 4.12 states that the empty context is safe.
Lemma 4.12. safe(∅).
Proof. By Definition 4.7, since ∅has no transitions.
According to Lemma 4.13, adding additional variable and process variable assignments
to the global context Γ does not invalidate a type judgement.
Lemma 4.13 (Global Weakening). If Γ ⊢P ▷∆then also Γ, Γ′ ⊢P ▷∆.
Proof. Straightforward by induction on the derivation of Γ ⊢P ▷∆, since all assignments
added to Γ in the derivation tree are on bound names.
We next show subject congruence, an important property ensuring that a typing
judgement of a process P also holds for all processes congruent to P.
Lemma 4.14 (Subject Congruence). If Γ ⊢P ▷∆and P ≡Q then also Γ ⊢Q ▷∆.
Proof. Assume Γ ⊢P ▷∆and P ≡Q. We prove Γ ⊢Q ▷∆by structural induction on
the derivation of P ≡Q. Since each typing and subtyping rule by definition respects
alpha conversion and the rules that make ≡a congruence, we consider only the rules of
Definition 2.2 explicitly.
Case R | 0 ≡R: Applying this rule from left to right, then P = R | 0 and Q = R.
By inverting the typing rules (and in particular [T-0] and [T-Par] under arbitrary
applications of [T-Sub]), then Γ ⊢P ▷∆implies that Γ ⊢R ▷∆′
R with ∆′
R ≤1 ∆R,
Γ ⊢0 ▷∅with ∅≤1 ∆∅, and ∆R, ∆∅≤1 ∆. By Lemma 4.3, then ∆′
R ≤1 ∆R, ∆∅. By
Proposition 4.5, then ∆′
R ≤1 ∆. By [T-Sub], then Γ ⊢R ▷∆′
R implies Γ ⊢Q ▷∆.
59
4 Properties
Applying this rule from right to left, then P = R and Q = R | 0. By [T-0] and
[T-Par], then Γ ⊢P ▷∆implies Γ ⊢Q ▷∆.
Case R1 | R2 ≡R2 | R1: Applying this rule from left to right, then P = R1 | R2 and
Q = R2 | R1.
By inverting the typing rules (and in particular [T-Par] under
arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢R1 ▷∆′
R1 with
∆′
R1 ≤1 ∆R1, Γ ⊢R2 ▷∆′
R2 with ∆′
R2 ≤1 ∆R2, and ∆R1, ∆R2 ≤1 ∆. By [T-Par] and
[T-Sub], then Γ ⊢Q ▷∆.
The case of applying this rule from right to left is symmetric.
Case (R1 | R2) | R3 ≡R1 | (R2 | R3): Applying this rule from left to right, then P =
(R1 | R2) | R3 and Q = R1 | (R2 | R3). By inverting the typing rules (and in
particular [T-Par] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies
Γ ⊢R1 ▷∆′
R1 with ∆′
R1 ≤1 ∆R1, Γ ⊢R2 ▷∆′
R2 with ∆′
R2 ≤1 ∆R2, Γ ⊢R3 ▷∆′
R3 with
∆′
R3 ≤1 ∆R3, and ∆R1, ∆R2, ∆R3 ≤1 ∆. By [T-Par] and [T-Sub], then Γ ⊢Q ▷∆.
The case of applying this rule from right to left is similar.
Case (νs)0 ≡0: Applying this rule from left to right, then P = (νs)0 and Q = 0.
By inverting the typing rules (and in particular [T-Res] and [T-0] under arbitrary
applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢0 ▷∅with ∅≤1 ∆∅and
∆∅≤1 ∆. By Proposition 4.5, then ∅≤1 ∆. By [T-0] and [T-Sub], then Γ ⊢Q ▷∆.
Applying this rule from right to left, then P = 0 and Q = (νs)0. By inverting
the typing rules (and in particular [T-0] under arbitrary applications of [T-Sub]),
then Γ ⊢P ▷∆implies ∅≤1 ∆. By [T-Res] and Lemma 4.12, then Γ ⊢Q ▷∅. By
[T-Sub], then Γ ⊢Q ▷∆.
Case (νs1)(νs2)R ≡(νs2)(νs1)R: Applying this rule from left to right, have then P =
(νs1)(νs2)R and Q = (νs2)(νs1)R. By inverting the typing rules (and in particular
[T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies safe(∆s1)
and Γ ⊢(νs2)R ▷∆′, ∆s1 with ∆′ ≤1 ∆. Then safe(∆s2) and Γ ⊢R ▷∆R, ∆s2 with
∆R ≤1 ∆′, ∆s1. By Lemma 4.2, then ∆R ≤1 ∆′, ∆s1 implies ∆R = ∆′′, ∆′
s1; ∆′′ ≤1
∆′, and ∆′
s1 ≤1 ∆s1. By [T-Sub] and Corollary 4.6, then Γ ⊢R ▷∆R, ∆s2; ∆R =
∆′′, ∆′
s1, and ∆′
s1 ≤1 ∆s1 imply Γ ⊢R ▷∆′′, ∆s2, ∆s1. By [T-Res], then safe(∆s1)
and Γ ⊢R ▷∆′′, ∆s2, ∆s1 imply Γ ⊢(νs1)R ▷∆′′, ∆s2. By [T-Res], then safe(∆s2)
and Γ ⊢(νs1)R ▷∆′′, ∆s2 imply Γ ⊢Q ▷∆′′. By [T-Sub] and Proposition 4.5, then
Γ ⊢Q ▷∆′′; ∆′′ ≤1 ∆′, and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆.
The case of applying this rule from right to left is similar.
Case R1 | (νs)R2 ≡(νs) (R1 | R2) if s /∈fs(R1): Applying this rule from left to right,
then P = R1 | ressR2, Q = (νs) (R1 | R2), and s /∈fs(R1). By inverting the
typing rules (and in particular [T-Par] and [T-Res] under arbitrary applications
of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢R1 ▷∆′
1 with ∆′
1 ≤1 ∆1, safe(∆s), Γ ⊢
R2 ▷∆′
2, ∆s with ∆′
2 ≤1 ∆2, and ∆= ∆1, ∆2, where we ensure by applying alpha-
conversion that s is fresh in ∆1 and ∆′
1. By Lemma 4.3, then ∆′
1 ≤1 ∆1; ∆′
2 ≤1
60
4.2 Properties of Typed Processes
∆2, and ∆= ∆1, ∆2 imply ∆′
1, ∆′
2 ≤1 ∆.
By [T-Par], then Γ ⊢R1 ▷∆′
1 and
Γ ⊢R2 ▷∆′
2, ∆s imply Γ ⊢R1 | R2 ▷∆′
1, ∆′
2, ∆s. By [T-Res], then safe(∆s) and
Γ ⊢R1 | R2 ▷∆′
1, ∆′
2, ∆s imply Γ ⊢Q ▷∆′
1, ∆′
2. By [T-Sub], then Γ ⊢Q ▷∆′
1, ∆′
2
and ∆′
1, ∆′
2 ≤1 ∆imply Γ ⊢Q ▷∆.
Applying this rule from right to left is similar.
Case def D in (νs)R ≡(νs)def D in R if s /∈fs(D): Then D = {Xi(exi, ci,1,.., ci,ni) =
Ri}i∈I. Applying this rule from left to right, then P = def D in (νs)R, Q =
(νs)def D in R, and s /∈fs(D). By inverting the typing rules (and in particular
[T-Def] and [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I, Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢(νs)R ▷∆′ with ∆′ ≤1 ∆, safe(∆s),
and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R ▷∆′
R, ∆s with ∆′
R ≤1 ∆′, where we ensure
by applying alpha-conversion that s is fresh in D. By [T-Def], then for all i ∈I
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni,
and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R ▷∆′
R, ∆s imply Γ ⊢def D in R ▷∆′
R, ∆s.
By [T-Res], then safe(∆s) and Γ ⊢def D in R ▷∆′
R, ∆s imply Γ ⊢Q ▷∆′
R. By
[T-Sub] and Proposition 4.5, then Γ ⊢Q ▷∆′
R; ∆′
R ≤1 ∆′, and ∆′ ≤1 ∆imply
Γ ⊢Q ▷∆.
Applying this rule from right to left, then P = (νs)def D in R and Q =
def D in (νs)R, where s /∈fs(D). By inverting the typing rules (and in partic-
ular [T-Def] and [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆
implies safe(∆s), Γ ⊢def D in R ▷∆′, ∆s with ∆′ ≤1 ∆, and for all i ∈I
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni,
and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R ▷∆R with ∆R ≤1 ∆′, ∆s, where we ensure
by applying alpha-conversion that s is fresh in D. By [T-Sub], then ∆R ≤1 ∆′, ∆s
and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R ▷∆R imply Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢
R ▷∆′, ∆s. By [T-Res], then safe(∆s) and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I⊢R ▷∆′, ∆s
imply Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢(νs)R ▷∆′. By [T-Def], then
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢(νs)R ▷∆′ imply Γ ⊢Q ▷∆′. By
[T-Sub], then Γ ⊢Q ▷∆′ and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆.
Case (def D in R1) | R2 ≡def D in (R1 | R2) if dpv(D) ∩fpv(Q) = ∅: Then:
D = {Xi(exi, ci,1, . . . , ci,ni) = R′
i}i∈I
61
4 Properties
Applying this rule from left to right, we then get P = (def D in R1) | R2, Q =
def D in (R1 | R2), and dpv(D) ∩fpv(Q) = ∅. By inverting the typing rules (in
particular [T-Def] and [T-Par] under arbitrary appl. of [T-Sub]), Γ⊢P ▷∆implies
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢R′
i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I, Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R1 ▷∆′
1 with ∆′
1 ≤1 ∆1, Γ ⊢R2 ▷∆′
2
with ∆′
2 ≤1 ∆2, and ∆1, ∆2 ≤1 ∆.
By Lemma 4.3, then ∆′
1 ≤1 ∆1; ∆′
2 ≤1
∆2, and ∆= ∆1, ∆2 imply ∆′
1, ∆′
2 ≤1 ∆. By Lemma 4.13, then Γ ⊢R2 ▷∆′
2
implies Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R2 ▷∆′
2. By rule [T-Par], we then have
that Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R1 ▷∆′
1 and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢
R2 ▷∆′
2 imply Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R1 | R2 ▷∆′
1, ∆′
2. By [T-Def], then
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢R′
i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢R1 | R2 ▷∆′
1, ∆′
2 imply Γ ⊢
Q ▷∆′
1, ∆′
2. By [T-Sub], then Γ ⊢Q ▷∆′
1, ∆′
2 and ∆′
1, ∆′
2 ≤1 ∆imply Γ ⊢Q ▷∆.
Applying this rule from right to left is similar.
Case def D in (def D′ in R) ≡def D ∪D′ in R if dpv(D) ∩dpv(D′) = ∅: Then:
D = {Xi(exi, ci,1,.., ci,ni) = R′
i}i∈I
D′ =
Yj
eyj, dj,1,.., dj,nj
= R′′
j
j∈J
Applying this rule from left to right, then P = def D in (def D′ in R), Q =
def D ∪D′ in R, and dpv(D)∩dpv(D′) = ∅. By inverting the typing rules (and in
particular [T-Def] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies
Γ,X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢R′
i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
Γ,X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi,
Y1 :
D
f
U ′
1, T ′
1,1,..T ′
1,n1
E
,.., Yj :
D
f
U ′
j, T ′
j,1,..T ′
j,nj
E
, eyj : f
U ′
j ⊢R′′
j ▷c′
j,1 : T ′
j,1,.., c′
j,nj : T ′
j,nj
for all i ∈I and all j ∈J, Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢def D′ in R ▷∆′ with
∆′ ≤1 ∆, and
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
,
Y1 :
D
f
U ′
1, T ′
1,1,..T ′
1,n1
E
,.., Yj :
D
f
U ′
j, T ′
j,1,..T ′
j,nj
E
⊢R ▷∆′′
with ∆′′ ≤1 ∆′. By [T-Def], then Γ ⊢Q ▷∆′
2. By [T-Sub] and Proposition 4.5,
then Γ ⊢Q ▷∆′′; ∆′′ ≤1 ∆′, and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆.
Applying this rule from right to left is similar.
62
4.2 Properties of Typed Processes
Lemmas 4.15, and 4.17 give a certain notion of splitting and composing the transitions
of local contexts. Lemma 4.16 states a useful consequence of the subtyping properties
of Theorem 4.10 on supertypes being able to “follow” the transitions of their subtypes.
Lemma 4.15. If ∆1, ∆2 7→∗
P ∆and s(∆1) ∩s(∆2) = ∅then there are ∆′
1, ∆′
2 such that
∆1 7→∗
P1 ∆′
1; ∆2 7→∗
P2 ∆′
2, P = P1P2, and ∆= ∆′
1, ∆′
2.
Proof. By Definition 2.10, local contexts of different sessions cannot interact.
Then
every sequence of steps ∆1, ∆2 7→∗
P ∆can be split such that ∆1 7→∗
P1 ∆′
1; ∆2 7→∗
P2 ∆′
2,
P = P1P2, and ∆= ∆′
1, ∆′
2
Lemma 4.16. If ∆1 ≤1 ∆2, safe(∆2), and ∆1 7→∗
P ∆′
1 then there is some ∆′
2 such that
∆2 7→∗
P ∆′
2 and ∆′
1 ≤1 ∆′
2.
Proof. Assume ∆1 ≤1 ∆2, safe(∆2), and ∆1 7→∗
P ∆′
1. By Theorem 4.10.1(b), then there
are ∆′
2, P2, P3 such that ∆2 7→∗
PP2 ∆′
2; ∆′
1 ≤P3 ∆′
2, and 1 = P2P3. Because of 1 = P2P3,
then P2 = 1 and P3 = 1. Then ∆2 7→∗
P ∆′
2 and ∆′
1 ≤1 ∆′
2.
Lemma 4.17. If ∆1 7→∗
P1 ∆′
1; ∆2 7→∗
P2 ∆′
2, safe(∆), and ∆1, ∆2 ≤1 ∆then there is some
∆′ such that ∆7→∗
P1P2 ∆′ and ∆′
1, ∆′
2 ≤1 ∆′.
Proof. Assume ∆1 7→∗
P1 ∆′
1; ∆2 7→∗
P2 ∆′
2, safe(∆), and ∆1, ∆2 ≤1 ∆. Then ∆1, ∆2 7→∗
P1P2
∆′
1, ∆′
2. By Lemma 4.16, then there is some ∆′ such that ∆7→∗
P1P2 ∆′ and ∆′
1, ∆′
2 ≤1
∆′.
The final lemma needed for showing subject reduction establishes the standard sub-
stitution property.
Lemma 4.18 (Substitution). If Γ ⊢v : U and Γ, x : U ⊢P ▷∆, then Γ ⊢P[x 7→v] ▷∆.
Proof. Assume Γ ⊢v : U and Γ, x : U ⊢P ▷∆. We prove Γ ⊢P[x 7→v] ▷∆by structural
induction on the derivation of Γ, x : U ⊢P ▷∆.
Case of [T-0]: In this case P = 0 and ∆= ∅. Then P[x 7→v] = 0. By [T-0], then
Γ ⊢P[x 7→v] ▷∆.
Case of [T-Res]: In this case P = (νs)Q and Γ, x : U ⊢Q ▷∆, {s[ri] : Ti}i∈I with
safe
{s[ri] : Ti}i∈I
, where we use alpha-conversion to ensure that s ̸= x. Then
P[x 7→v] = (νs)(Q[x 7→v]).
By the induction hypothesis, then Γ ⊢v : U and
Γ, x : U ⊢Q ▷∆, {s[ri] : Ti}i∈I imply Γ ⊢Q[x 7→v] ▷∆, {s[ri] : Ti}i∈I. By [T-Res],
then safe
{s[ri] : Ti}i∈I
and Γ⊢Q[x 7→v] ▷∆, {s[ri] : Ti}i∈I imply Γ⊢P[x 7→v] ▷∆.
Case of [T-If]: In this case P = if vb then Q else R, where Γ, x : U ⊢vb : bool, and
Γ, x : U ⊢Q ▷∆and Γ, x : U ⊢R ▷∆. Then we have the substitution P[x 7→v] =
if (vb[x 7→v]) then (Q[x 7→v]) else (R[x 7→v]). By Definition 2.9, Γ, x : U ⊢
vb : bool implies one of the following three cases:
vb = y, x ̸= y, and y : bool ∈Γ, x : U: Then y : bool ∈Γ. By [T-Base], then Γ ⊢
y : bool. Moreover, vb[x 7→v] = vb. Hence, Γ ⊢vb[x 7→v] : bool.
63
4 Properties
vb = x: Then Γ, x : U ⊢vb : bool implies U = bool. Moreover, vb[x 7→v] = v. By
Γ ⊢v : U, then Γ ⊢vb[x 7→v] : bool.
vb ∈{⊤, ⊥}: Then vb[x 7→v] = vb. By [Bool], then Γ ⊢vb[x 7→v] : bool.
In all three cases we have Γ ⊢vb[x 7→v] : bool.
By the induction hypothesis,
Γ ⊢v : U and Γ, x : U ⊢Q ▷∆imply Γ ⊢Q[x 7→v] ▷∆, and Γ ⊢v : U and
Γ, x : U ⊢R ▷∆imply Γ ⊢R[x 7→v] ▷∆.
By [T-If], then Γ ⊢vb[x 7→v] : bool,
Γ ⊢Q[x 7→v] ▷∆, and Γ ⊢R[x 7→v] ▷∆imply Γ ⊢P[x 7→v] ▷∆.
Case of [T-Par]: In this case P = Q | R, ∆= ∆1, ∆2, Γ, x : U ⊢Q ▷∆1, and Γ, x : U ⊢
R ▷∆2. Then P[x 7→v] = (Q[x 7→v]) | (R[x 7→v]). By the induction hypothesis,
then Γ ⊢v : U and Γ, x : U ⊢Q ▷∆1 imply Γ ⊢Q[x 7→v] ▷∆1, and Γ ⊢v : U and
Γ, x : U ⊢R ▷∆2 imply Γ ⊢R[x 7→v] ▷∆2. By [T-Par], then Γ ⊢Q[x 7→v] ▷∆1
and Γ ⊢R[x 7→v] ▷∆2 imply Γ ⊢P[x 7→v] ▷∆.
Case of [T-Def]: In this case we have P = def X(ey; c1,.., cn)
def
= Q in R with
Γ, X :
D
f
U ′, T1,.., Tn
E
, ey : f
U ′, x : U ⊢Q ▷c1 : T1,.., cn : Tn, and
Γ, X :
D
f
U ′, T1,.., Tn
E
,
x : U
⊢R ▷∆, where we use alpha-conversion to ensure that x /∈ey.
Then
P[x 7→v] = def X(ey; c1,.., cn)
def
= (Q[x 7→v]) in (R[x 7→v]). By the induction hy-
pothesis, then Γ ⊢v : U and Γ, X :
D
f
U ′, T1,.., Tn
E
, ey : f
U ′, x : U ⊢Q ▷c1 : T1,.., cn : Tn
imply Γ, X :
D
f
U ′, T1,.., Tn
E
, ey : f
U ′ ⊢Q[x 7→v] ▷c1 : T1,.., cn : Tn. Moreover Γ ⊢v : U
and Γ, X :
D
f
U ′, T1,.., Tn
E
, x : U ⊢R ▷∆imply Γ, X :
D
f
U ′, T1,.., Tn
E
⊢R[x 7→v] ▷∆.
By [T-Def], then have Γ ⊢P[x 7→v] ▷∆.
Case of [T-Var]: In this case P = X
D
ev′, c1,.., cn
E
, X :
D
f
U ′, T1,.., Tn
E
∈Γ, x : U, ∆=
c1 : T1,.., cn : Tn, and Γ, x : U ⊢ev′ : f
U ′. Then P[x 7→v] = X
D
ev′[x 7→v]
, c1,.., cn
E
.
By Definition 2.9, Γ, x : U ⊢ev′ : f
U ′, i.e., Γ, x : U ⊢v′
i : U ′
i for all v′
i ∈ev′, implies for
every v′
i one of the following four cases:
Case v′
i = y, x ̸= y, and y : U ′
i ∈Γ, x : U: Then y : U ′
i ∈Γ. By [T-Base], then Γ ⊢
y : U ′
i. Then, v′
i[x 7→v] = v′
i. Hence, Γ ⊢v′
i[x 7→v] : U ′
i.
Case v′
i = x: Then Γ, x : U ⊢v′
i : bool implies U = U ′
i. Moreover, v′
i[x 7→v] = v.
By Γ ⊢v : U, then Γ ⊢v′
i[x 7→v] : U ′
i.
Case v′
i ∈N+: Then U ′
i = nat and v′
i[x 7→v] = v′
i. By [Nat], then Γ ⊢v′
i[x 7→v] : U ′
i.
Case v′
i ∈{⊤, ⊥}: Then U ′
i = bool and v′
i[x 7→v] = v′
i.
By [Bool], then Γ ⊢
v′
i[x 7→v] : U ′
i.
In all four cases we have Γ ⊢v′
i[x 7→v] : U ′
i and, thus, Γ ⊢
ev′[x 7→v]
: f
U ′. By
[T-Var], then Γ ⊢P[x 7→v] ▷∆.
64
4.2 Properties of Typed Processes
Case of [T-Sum]: In this case P = c P
i∈I Mi, ∆= ∆0, c : P
i∈I Li, and for all i ∈I
we have Γ, x : U ⊢c ▷Mi ▷∆0, c : Li. Then P[x 7→v] = c P
i∈I (Mi[x 7→v]). By
the induction hypothesis, Γ ⊢v : U and Γ, x : U ⊢c ▷Mi ▷∆0, c : Li imply Γ ⊢
c ▷(Mi[x 7→v]) ▷∆0, c : Li for all i ∈I. BY [T-Sum], then Γ ⊢P[x 7→v] ▷∆.
Case of [T-Sub]: In this case Γ, x : U ⊢P ▷∆′ and ∆′ ≤1 ∆. By the induction hypoth-
esis, then Γ ⊢v : U and Γ, x : U ⊢P ▷∆′ imply Γ ⊢P[x 7→v] ▷∆′. By [T-Sub],
then Γ ⊢P[x 7→v] ▷∆′ and ∆′ ≤1 ∆imply Γ ⊢P[x 7→v] ▷∆.
Case of [T-τ]: In this case P = c ▷P ▶τ.Q with ∆= ∆0, c : P ▶τ.T, where Γ, x : U ⊢
Q ▷∆0, c : T. Then we have P[x 7→v] = c ▷P ▶τ.(Q[x 7→v]). By the induction
hypothesis, then the judgements Γ ⊢v : U and Γ, x : U ⊢Q ▷∆0, c : T imply that
Γ ⊢Q[x 7→v] ▷∆0, c : T. By [T-τ], then Γ ⊢P[x 7→v] ▷∆.
Case of [T-Inp]: In this case P = c ▷p←q ? l(y).Q, ∆= ∆0, c : p←q ? l(Uy).T, p ∈c,
and Γ, y : Uy, x : U ⊢Q ▷∆0, c : T, where we use alpha-conversion to ensure that
x ̸= y. Then, since x ̸= y, P[x 7→v] = c ▷p←q ? l(y).(Q[x 7→v]). By the in-
duction hypothesis, Γ ⊢v : U and Γ, y : Uy, x : U ⊢Q ▷∆0, c : T imply Γ, y : Uy ⊢
Q[x 7→v] ▷∆0, c : T. By [T-Inp], then p ∈c and Γ, y : Uy ⊢Q[x 7→v] ▷∆0, c : T
imply Γ ⊢P[x 7→v] ▷∆.
Case of [T-Out]: In this case P = c ▷P ▶p→q ! l⟨v′⟩.Q, ∆= ∆0, c : P ▶p→q ! l⟨U ′⟩.T,
p ∈c, Γ, x : U ⊢v′ : U ′, and Γ, x : U ⊢Q ▷∆, c : T. Then have the substitution
P[x 7→v] = c ▷P ▶p→q ! l⟨(v′[x 7→v])⟩.(Q[x 7→v]). By Definition 2.9, Γ, x : U ⊢
v′ : U ′ implies one of the following four cases:
Case v′ = y, x ̸= y, and y : U ′ ∈Γ, x : U: Then y : U ′ ∈Γ. By [T-Base], then Γ ⊢
y : U ′. Then, v′[x 7→v] = v′. Hence, Γ ⊢v′[x 7→v] : U ′.
Case v′ = x: Then Γ, x : U ⊢v′ : bool implies U = U ′. Moreover, v′[x 7→v] = v.
By Γ ⊢v : U, then Γ ⊢v′[x 7→v] : U ′.
Case v′ ∈N+: Then U ′ = nat and v′[x 7→v] = v′. By [Nat], then Γ ⊢v′[x 7→v] : U ′.
Case v′ ∈{⊤, ⊥}: Then U ′ = bool and v′[x 7→v] = v′.
By [Bool], then Γ ⊢
v′[x 7→v] : U ′.
In all four cases we have Γ ⊢v′[x 7→v] : U ′. By the induction hypothesis, then
Γ ⊢v : U and Γ, x : U ⊢Q ▷∆0, c : T imply Γ ⊢Q[x 7→v] ▷∆0, c : T. By [T-Out],
then p ∈c, Γ ⊢v′[x 7→v] : U ′, and Γ ⊢Q[x 7→v] ▷∆0, c : T imply Γ ⊢P[x 7→v] ▷∆.
Case of [T-Prob]: In this case P = c ▷L
i∈I Pi ▶Ni.Pi, ∆= ∆0, c : L
i∈I Pi ▶Hi.Ti, and
for all i ∈I we have Γ, x : U ⊢c ▷Pi ▶Ni.Pi ▷∆0, c : Pi ▶Hi.Ti. Then P[x 7→v] =
c ▷L
i∈I Pi ▶((Ni.Pi)[x 7→v]). By the induction hypothesis, then judgements Γ ⊢
v : U and Γ, x : U ⊢c ▷Ni.Pi ▷∆0, c : Pi ▶Hi.Ti imply Γ ⊢c ▷Pi ▶((Ni.Pi)[x 7→v]) ▷
∆0, c : Pi ▶Hi.Ti for all i ∈I. By [T-Prob], then Γ ⊢P[x 7→v] ▷∆.
65
4 Properties
Theorem Statement and Proof
With the properties we have shown above, the proof of subject reduction is a straight-
forward induction on the derivation of P −→P P ′. In each case we construct the typing
for P ′ from the information gained by Γ ⊢P ▷∆.
Theorem 4.19 (Subject Reduction). If Γ ⊢P ▷∆, safe(∆), and P −→P P ′ then there
is some ∆′ such that ∆7→∗
P ∆′ and Γ ⊢P ′ ▷∆′.
Proof. Assume Γ ⊢P ▷∆, safe(∆), and P −→P P ′. We prove ∆7→∗∆′ and Γ ⊢P ′ ▷∆′
by structural induction on the derivation of P −→P P ′.
Case of [R-Cond-⊤]: In this case P = if ⊤then Q else R and P ′ = Q. By inverting
the typing rules (and in particular [T-If] under arbitrary applications of [T-Sub]),
then Γ ⊢P ▷∆implies Γ ⊢P ′ ▷∆Q with ∆Q ≤1 ∆. By [T-Sub], then Γ ⊢P ′ ▷∆Q
and ∆Q ≤1 ∆imply Γ ⊢P ′ ▷∆and ∆7→∗∆.
Case of [R-Cond-⊥]: This case is similar to the case above.
Case of [R-Def-0]: In this case we have P = def D in 0 and P ′ = 0.
Let D =
{Xi(exi, ci,1, . . . , ci,ni) = Ri}i∈I. By inverting the typing rules (in particular [T-Def]
and [T-0] under arbitrary appl. of [T-Sub]), then Γ,
n
Xi :
D
eUi, Ti,1, . . . , Ti,ni
Eo
i∈I⊢
0 ▷∅with ∅≤1 ∆. By [T-0] and [T-Sub], then Γ ⊢P ′ ▷∆and ∆7→∗∆.
Case of [R-τ]: In this case P = c ▷(P ▶τ.Q ⊕N) + M and P ′ = Q.
By inverting
the typing rules (and in particular [T-Sum], [T-Prob] and [T-τ] under arbitrary
applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢P ′ ▷∆′ with ∆′ = ∆Q, c : T
and ∆Q, c : (P ▶τ.T ⊕N) + M ≤1 ∆. By [TR-τ], then ∆7→∗∆′.
Case of [R-Com]: In this case we have processes P
= s[r1] ▷p←q ? l(x).Q + MQ
| s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) + MR and P ′ = Q[x 7→v] | R. By inverting the
typing rules (and in particular [T-Par], [T-Sum], [T-Inp], [T-Prob], and [T-Out]
under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies
Γ ⊢s[r1] ▷p←q ? l(x).Q + MQ ▷∆′
1
Γ ⊢s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) + MR ▷∆′
2
with ∆′
1 ≤1 ∆1; ∆′
2 ≤1 ∆2, and ∆1, ∆2 ≤1 ∆,
Γ ⊢s[r1] ▷p←q ? l(x).Q ▷∆T
1 , s[r1] : p←q ? l(U1).T1
Γ, x : U1 ⊢Q ▷∆T
1 , s[r1] : T1
with ∆T
1 , s[r1] : p←q ? l(U1).T1 + M′
Q ≤1 ∆′
1, p ∈er1,
Γ ⊢s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) ▷∆T
2 , s[r2] : P ▶q→p ! l⟨U2⟩.T2 ⊕N′
66
4.2 Properties of Typed Processes
with ∆T
2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′
R ≤1 ∆′
2,
Γ ⊢s[r2] ▷P ▶q→p ! l⟨v⟩.R ▷∆T
2 , s[r2] : P ▶q→p ! l⟨U2⟩.T2
Γ ⊢R ▷∆T
2 , s[r2] : T2
with q ∈er2 and Γ ⊢v : U2. By Definition 4.7 and Theorem 4.10.1, safe(∆) and
the above subtyping relations imply U1 = U2. By Lemma 4.18, then Γ ⊢v : U2 and
Γ, x : U1 ⊢Q ▷∆T
1 , s[r1] : T1 imply Γ ⊢Q[x 7→v] ▷∆T
1 , s[r1] : T1. By Proposition 4.5
and Lemma 4.3, then:
∆T
1 , s[r1] : p←q ? l(U1).T1 + M′
Q, ∆T
2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′
R ≤1 ∆
By Definition 2.10, then:
∆T
1 , s[r1] : p←q ? l(U1).T1 + M′
Q, ∆T
2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′
R 7→
∆T
1 , s[r1] : T1, ∆T
2 , s[r2] : T2
By Lemma 4.16, then there is some ∆′ such that ∆7→∗∆′ and
∆T
1 , s[r1] : T1,
∆T
2 , s[r2] : T2
≤1 ∆′. By [T-Par] and [T-Sub], then Γ ⊢Q[x 7→v] ▷∆T
1 , s[r1] : T1
and Γ ⊢R ▷∆T
2 , s[r2] : T2 imply Γ ⊢P ′ ▷∆′.
Case of [R-Def]: Have P = def D in (X⟨ev, ec⟩| Q) and P ′ = def D in (R[ex 7→ev] | Q),
where X(ex; ec)
def
= R ∈D with ec = c1,.., cn. By applying structural congruence and
by using the argumentation of case [R-Struct] in this proof, we can ensure that
X(ex; ec) is the last/innermost declaration in P. By inverting the typing rules (and in
particular [T-Def], [T-Var], and [T-Par] under arbitrary applications of [T-Sub]),
then Γ ⊢P ▷∆implies Γ, Γ′, X :
D
eU, T1,.., Tn
E
, ex : eU ⊢R ▷c1 : T1,.., cn : Tn, Γ, Γ′ ⊢
ev : eU, Γ, Γ′, X :
D
eU, T1,.., Tn
E
⊢X⟨ev, ec⟩▷c1 : T1,.., cn : Tn, and Γ, Γ′ ⊢Q ▷∆Q with
c1 : T1,.., cn : Tn, ∆Q ≤1 ∆, where Γ′ lists exactly one assignment for each pro-
cess variable contained in D besides X and nothing else and for each such pro-
cess variable we have the corresponding type judgement for the declared process
similar to Γ, Γ′, X :
D
eU, T1,.., Tn
E
, ex : eU ⊢R ▷c1 : T1,.., cn : Tn.
By Lemma 4.18,
then Γ, Γ′ ⊢ev : eU and Γ, Γ′, X :
D
eU, T1,.., Tn
E
, ex : eU ⊢R ▷c1 : T1,.., cn : Tn imply
Γ, Γ′, X :
D
eU, T1,.., Tn
E
⊢R[ex 7→ev] ▷c1 : T1,.., cn : Tn. Then, by [T-Par], the judge-
ments Γ, Γ′ ⊢Q ▷∆Q and Γ, Γ′, X :
D
eU, T1,.., Tn
E
⊢R[ex 7→ev] ▷c1 : T1,.., cn : Tn im-
ply Γ, Γ′, X :
D
eU, T1,.., Tn
E
⊢R[ex 7→ev] | Q ▷c1 : T1,.., cn : Tn, ∆Q. Then, by several
applications of [T-Def] using the type judgements for the process variables besides
X, obtain Γ ⊢P ′ ▷c1 : T1,.., cn : Tn, ∆Q. By [T-Sub], then c1 : T1,.., cn : Tn, ∆Q ≤1
∆and Γ ⊢P ′ ▷c1 : T1,.., cn : Tn, ∆Q imply Γ ⊢P ′ ▷∆and ∆7→∗∆.
Case of [R-Par]: In this case P = Q | R, P ′ = Q′ | R, and Q −→P Q′. By inverting
the typing rules (and in particular [T-Par] under arbitrary applications of [T-
Sub]), then Γ ⊢P ▷∆implies Γ ⊢Q ▷∆′′
Q with ∆′′
Q ≤1 ∆Q, Γ ⊢R ▷∆′
R with
67
4 Properties
∆′
R ≤1 ∆R, and ∆Q, ∆R ≤1 ∆. By [T-Sub], then Γ ⊢Q ▷∆′′
Q and ∆′′
Q ≤1 ∆Q
imply Γ ⊢Q ▷∆Q. By [T-Sub], then Γ ⊢R ▷∆′
R and ∆′
R ≤1 ∆R imply Γ ⊢R ▷∆R.
By the induction hypothesis, Γ ⊢Q ▷∆Q and Q −→P Q′ imply ∆Q 7→∗∆′
Q and
Γ ⊢Q′ ▷∆′
Q. By Lemma 4.17, then ∆Q 7→∗∆′
Q; ∆R 7→∗∆R, and ∆Q, ∆R ≤1 ∆
imply ∆7→∗∆′ and ∆′
Q, ∆R ≤1 ∆′. By [T-Par], then Γ ⊢Q′ ▷∆′
Q and Γ ⊢R ▷∆R
imply Γ ⊢P ′ ▷∆′
Q, ∆R. By [T-Sub], then Γ ⊢P ′ ▷∆′
Q, ∆R and ∆′
Q, ∆R ≤1 ∆′
imply Γ ⊢P ′ ▷∆′.
Case of [R-Struct]: In this case P ≡Q, Q −→P Q′, and Q′ ≡P ′. By Lemma 4.14,
Γ ⊢P ▷∆and P ≡Q imply Γ ⊢Q ▷∆.
By the induction hypothesis, then
Γ ⊢Q ▷∆and Q −→P Q′ imply ∆7→∗∆′ and Γ ⊢Q′ ▷∆′.
Case of [R-Res]: In this case P = (νs)Q, P ′ = (νs)Q′, and Q −→P Q′. By inverting
the typing rules (and in particular [T-Res] under arbitrary applications of [T-
Sub]), then Γ ⊢P ▷∆implies safe(∆s) and Γ ⊢Q ▷∆Q with ∆Q ≤1 ∆, ∆s and
s(∆) ∩s(∆s) = ∅. By the induction hypothesis, then Γ ⊢Q ▷∆Q and Q −→P
Q′ imply ∆Q 7→∗∆′
Q and Γ ⊢Q′ ▷∆′
Q.
By Lemma 4.16, then ∆Q ≤1 ∆, ∆s
and ∆Q 7→∗∆′
Q imply ∆, ∆s 7→∗∆′′ and ∆′
Q ≤1 ∆′′.
By Lemma 4.15, then
∆, ∆s 7→∗∆′′ and s(∆) ∩s(∆s) = ∅imply ∆7→∗∆′; ∆s 7→∗∆′
s, and ∆′′ = ∆′, ∆′
s.
By Lemma 4.8, then ∆s 7→∗∆′
s and safe(∆s) imply safe(∆′
s). By [T-Sub], then
Γ ⊢Q′ ▷∆′
Q; ∆′
Q ≤1 ∆′′, and ∆′′ = ∆′, ∆′
s imply Γ ⊢Q′ ▷∆′, ∆′
s. By [T-Res], then
safe(∆′
s) and Γ ⊢Q′ ▷∆′, ∆′
s imply Γ ⊢P ′ ▷∆′.
Case of [R-Def-In]: In this case P = def D in Q, P ′ = def D in Q′, and Q′ −→P Q′.
Let:
D = {Xi(exi, ci,1,.., ci,ni) = Ri}i∈I
By inverting the typing rules (and in particular [T-Def] under arbitrary applica-
tions of [T-Sub]), then Γ ⊢P ▷∆implies
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢R′
i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢Q ▷∆Q with ∆Q ≤1 ∆. By the
induction hypothesis, then Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢Q ▷∆Q and Q −→P Q′
imply Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢Q′ ▷∆′
Q and ∆Q 7→∗∆′
Q. By [T-Def], then
Γ, X1 :
D
f
U1, T1,1,..T1,n1
E
,.., Xi :
D
eUi, Ti,1,..Ti,ni
E
, exi : eUi ⊢R′
i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni
for all i ∈I and Γ,
n
Xi :
D
eUi, Ti,1,.., Ti,ni
Eo
i∈I ⊢Q′ ▷∆′
Q imply Γ ⊢P ′ ▷∆′
Q. By
Lemma 4.16, then ∆Q ≤1 ∆and ∆Q 7→∗∆′
Q imply ∆7→∗∆′ and ∆′
Q ≤1 ∆′. By
[T-Sub], then Γ ⊢P ′ ▷∆′
Q and ∆′
Q ≤1 ∆′ imply Γ ⊢P ′ ▷∆′.
68
4.2 Properties of Typed Processes
4.2.2 Error- and Deadlock-Freedom
From the Subject Reduction Theorem, two crucial results follow rather naturally, error-
freedom and deadlock-freedom. Errors are essentially the opposite of safety and therefore
by having a safe typing context, we seek to infer processes to be error-free when typed
by it. The definition of error processes is similar to the definition of session errors in
[Peters and Yoshida, 2024]. In systems with mixed choice, instead of considering all
pairs of parallel processes (for non-MC systems, see [Scalas and Yoshida, 2019]), the
error definition must consider all available input-output pairs. Thus, we define an error
process to be a process in which any available output with an equally available dual
input does not have an input for which the labels match.
Definition 4.20 (Error Process). A process P has a communication error if
P ≡s[r1] ▷(p→q ! l⟨U⟩.Q ⊕N) + M | s[r2] ▷q←p ? lq(Uq).Qq + Mq | P ′
Where l ̸= lq and for all M q
i = q←p ? li(Ui).Qi ∈Mq, the labels do not match, i.e.,
l ̸= li. A process P has a value error if
P ≡if v then P else Q | P ′
v /∈{⊤, ⊥}
P is an error process if it has either a communication error or value error.
From Theorem 4.19 and the fact that error processes are untypable by a safe context
it follows that:
Corollary 4.21 (Error-Freedom). If Γ ⊢P ▷∆with safe(∆) and P =⇒P P ′, then P ′ is
not an error process.
We now show the Deadlock-Freedom Theorem, stating that a process typed by a
deadlock-free and safe local context, is deadlock-free. Similar to [Scalas and Yoshida,
2019], we require two additional side conditions on the shape of these processes. The
first prohibits restriction within the process and the second enforces each participant in
a session to be typed by exactly one corresponding type. While our system does not
inherently impose these conditions on all processes, some other formalism, like [Peters
and Yoshida, 2024], do. Thus, we consider these conditions quite natural and reasonable
(for results on interleaved, interfering sessions, see [Coppo et al., 2016]). Indeed, without
these conditions, deadlock-freedom for a local type does not imply deadlock-freedom for
processes typed by it. Let us illustrate why this is with the following example.
Example 13 (Deadlock Freedom). Consider the following deadlock-free local context
∆, which we will use to type two different processes, each exemplifying why one of the
conditions is needed.
∆= s[a] : a←b ? l(), s[b] : b→a ! l⟨⟩
Firstly, take process P1 which can be justified by it, i.e., ⊢P1 ▷∆.
P1 = (νs′) (s[a] ▷a→b ! l⟨⟩| s[b] ▷b→a ! l⟨⟩| s′[x] ▷x→y ! l⟨⟩)
69
4 Properties
P1 will deadlock after one step, despite ∆being deadlock-free, as restricting a session
within a process “removes” the types used for that session from the context. For the
second condition, take the process P2 for which also holds ⊢P2 ▷∆.
P2 = s[a] ▷a←b ? l().s[b] ▷b→a ! l⟨⟩
Clearly, P2 is deadlocked, too.
⃝
We now formalize these conditions.
Definition 4.22 (Canonical). Assume ⊢P ▷∆, then P is canonical for ∆iff
1. there is no occurrence of restriction, (νs′)P ′, in P, and
2. P ≡
k∈[1..n]Pk and ∆= c1 : T1, . . . , cn : Tn where for all k ∈[1..n] have ⊢Pk ▷ck : Tk
and for all subterms of the form c P
i∈I Mi occurring in Pk, have c = ck.
To show deadlock-freedom, we first prove an intermediary lemma stating that if a
canonical process has no reductions, then its corresponding local context has no reduc-
tions either. We will use the shorthand
s : τ
−−→
∗
1 to mean a reflexive transitive closure of
the labelled transition
s : τ
−−→P, i.e., a reduction sequence of 1 ▶τ actions.
Lemma 4.23. If P ↛, ⊢P ▷∆, and P canonical for ∆then ∆
s : τ
−−→
∗
1 ∆′ with ⊢P ▷∆′,
and ∆′ ↛.
Proof. We will first show that given P ↛, ⊢P ▷∆, and P canonical for ∆, the transition
sequence ∆
s : τ
−−→
∗
1 ∆′ implies ⊢P ▷∆′. Assume that there is at least one step in said
transition sequence, as otherwise the statement holds trivially by reflexivity.
As P
canonical for ∆we have that P ≡
k∈[1..n]Pk and ∆= c1 : T1, . . . , cn : Tn where for all
k ∈[1..n] have ⊢Pk ▷ck : Tk and for all subterms of the form c P
i∈I Mi occurring in
Pk, have c = ck.
W.l.o.g.
let us now fix a typed channel ci : Ti from ∆which can
perform said sequence of at least one internal action, i.e., ci : Ti
s : τ
−−→
∗
1 and ci : Ti
s : τ
−−→1.
As P ↛, in particular also Pi ↛. In other words Pi has no unguarded internal action
1 ▶τ. Therefore, in the typing derivation of ⊢Pi ▷ci : Ti, there occurs an application of
[T-Sub] such that ⊢Pi ▷ci : T ′
i with ci : T ′
i ≤1 ci : Ti and T ′
i has no unguarded internal
action 1 ▶τ. By Corollary 4.6 and the subtyping rules in Definition 3.5, ci : T ′
i ≤1 ci : Ti
where ci : Ti
s : τ
−−→1 and ci : T ′
i ↛implies that [S-τ-R] is the only applicable rule. Thus,
ci : Ti
s : τ
−−→
∗
1 ci : T ′
i. Let ∆′ be obtained from ∆by replacing ci : Ti with ci : T ′
i. Then
indeed ∆
s : τ
−−→
∗
1 ∆′. As P canonical for ∆with ⊢Pi ▷ci : Ti and ⊢Pi ▷ci : T ′
i, we have
⊢P ▷∆′ as required.
Given this, we may restrict our attention to contexts which have no internal transitions
of probability one. To show the main statement given not ∆
s : τ
−−→1, assume the contrary,
i.e., assume P ↛, ⊢P ▷∆, and P is canonical for ∆but not ∆↛. Then ∆
s : pq : l⟨U⟩
−−−−−−→P1
or ∆
s : τ
−−→P2 with P2 ̸= 1. In the former case, by [TR-Com], [TR-Inp], and [TR-Out], then
∆
s : pq : l⟨U⟩
−−−−−−→P ∆′ and P canonical for ∆imply that P contains an unguarded output and
a matching unguarded input. Then P can perform a step using [R-Com] contradicting
P ↛. In the latter case, by [TR-τ], then ∆
s : τ
−−→P ∆′ with P canonical for ∆imply that
P contains an unguarded subterm of the form τ.Q. Then P can perform a step using
[R-τ] again contradicting P ↛. In both cases we have a contradiction. Hence, ∆↛.
70
4.2 Properties of Typed Processes
Theorem 4.24 (Deadlock-Freedom). If ⊢P ▷∆, safe(∆), dfree(∆), and P canonical
for ∆, then P is deadlock-free.
The proof of Deadlock-Freedom is similar to the corresponding proof in [Peters and
Yoshida, 2024].
Proof. Assume P =⇒P P ′ ↛,
⊢P ▷∆, safe(∆), dfree(∆), and P canonical for ∆.
By Theorem 4.19, then there is some ∆′ such that ∆7→∗
P ∆′ and
⊢P ′ ▷∆′.
By
Lemma 4.23, then ∆′
s : τ
−−→
∗
1 ∆′′ with ⊢P ′ ▷∆′′ and ∆′′ ↛.
By Definition 4.9, then
∆′′ = ∅. Since the global environment is empty and ⊢P ′ ▷∆′′, for all conditionals in
P ′ the boolean condition is already a value and, since P ′ ↛, then P ′ cannot contain
unguarded conditionals. Since P ′ ↛also cannot contain (modulo structural congruence),
unguarded subterms of the form def D in 0, by inverting the typing rules, then
⊢
P ′ ▷∆′′ and ∆′′ = ∅imply P ′ ≡0.
In this chapter we have presented numerous important results of the probabilistic mixed
choice multiparty session π-calculus, its types, and refinement-based multi-channel sub-
typing. With Corollaries 4.6 (≤1 is a preorder) and 4.21 (error-freedom), as well as Theo-
rems 4.19 (subject reduction) and 4.24 (deadlock-freedom), we have established that our
system fulfils all expected and necessary properties of MPST systems. Additionally, the
Interface Existence Theorem 4.11 guarantees great flexibility of the multi-channel sub-
typing by ensuring that a single-channel interface can always be assembled from almost
any local context, which distinguishes our work from [Horne, 2020]. Having finished all
technical contributions, we will now summarize the presented work, highlighting context
and significance, and discuss interesting avenues for future research.
71
5 Conclusion and Future Work
This thesis set out to create a previously unexplored approach to compositional re-
finement in multiparty session types using subtyping. Towards this, we first presented a
powerful calculus framework, the probabilistic mixed choice multiparty session π-calculus,
whose combination of the expressive mixed choice setting from [Peters and Yoshida,
2024] with probabilistic choices (see [Aman and Ciobanu, 2019; Inverso et al., 2020]) is,
to the best of our knowledge, entirely new. We then extended said calculus with the
refinement-based multi-channel subtyping (Definition 3.5). This unique subtyping allows
a typed channel (the interface) to be safely substituted by a collection of several typed
channels (the refinement), if their collective behaviour models that of the interface. This
framework enables robust stepwise refinement. A single channel within a protocol may
be taken as interface and safely replaced by several channels, that in turn can each be
interfaces for another refinement. Not only is stepwise refinement useful for systematic,
collaborative programming, but, as we have already hinted at with the “conflict of inter-
est” in our courthouse example, having the option to perform refinement in this way can
enable designers to easily patch security concerns. For example, a compromised system
in which an actor has too many access rights can be repaired by replacing this actor
with several and distributing their power (cf. Horne [2020]). Moreover, by considering
a refinement as the starting point, we can combine its channels into a single channel
for whom the interactions within the refinement are concealed. Cleverly doing so will
greatly reduce the complexity and size of the overall system. Crucially, our subtyping
relation is maximally flexible, allowing to unify any selection of well-typed, safe, and
deadlock-free channels (the refinement) into an interface consisting of a single channel
(Theorem 4.11). Hence, our framework facilitates the strong compositional verifiability
of protocols that we sought to create.
Currently the system only supports natural numbers and booleans as base types,
and subtyping does thus not include the types of payloads.
Adding more complex
types would be a straightforward addition. Similarly, we currently do not allow session
delegation, i.e., sending and receiving channels as payload. While session delegation is
not uncommon, it would add significant complexity, especially in our system.
So far, we have been requiring deadlock-freedom to be preserved in 100% of cases
when constructing a refinement. As the system is probabilistic, we believe it would be
very interesting to explore imperfect refinement, by allowing the refinement to deadlock
with a bounded probability. Much of the system is already designed to accommodate
such extensions: We imagine expanding on the semantics of the probability P at the
subtyping relation ≤P to be especially fruitful in this regard. Furthermore, bounded-
probability safety may also be worth exploring. For example one might allow protocols
73
5 Conclusion and Future Work
which are still under development to be deployed if their error probability is below a
small threshold.
We are also interested in analysing behavioural properties other than deadlock-freedom
and safety. [Scalas and Yoshida, 2019] uses the safety predicate φ to statically verify
typing contexts to be, for example, terminating and live (see [Kobayashi and Sangiorgi,
2010; Padovani et al., 2014]) in addition to safe. As before, typed processes would then
be enforced to fulfil these run-time properties. The notion of imperfect refinement could
then be taken even further by allowing bounded-probability termination and liveness,
too.
74
Bibliography
Martín Abadi and Leslie Lamport. The existence of refinement mappings. In Proceedings
of the Third Annual Symposium on Logic in Computer Science (LICS ’88), Edinburgh,
Scotland, UK, July 5-8, 1988, pages 165–175. IEEE Computer Society, 1988. URL
https://doi.org/10.1109/LICS.1988.5115.
Bogdan Aman and Gabriel Ciobanu. Probabilities in session types. In Mircea Marin and
Adrian Craciun, editors, Proceedings Third Symposium on Working Formal Methods,
FROM 2019, Timişoara, Romania, 3-5 September 2019, volume 303 of EPTCS, pages
92–106, 2019. URL https://doi.org/10.4204/EPTCS.303.7.
Andi Bejleri and Nobuko Yoshida. Synchronous multiparty session types. In Vasco T.
Vasconcelos and Nobuko Yoshida, editors, Proceedings of the First Workshop on Pro-
gramming Language Approaches to Concurrency and Communication-cEntric Soft-
ware, PLACES@DisCoTec 2008, Oslo, Norway, June 7, 2008, volume 241 of Elec-
tronic Notes in Theoretical Computer Science, pages 3–33. Elsevier, 2008.
URL
https://doi.org/10.1016/j.entcs.2009.06.002.
Lorenzo Bettini, Mario Coppo, Loris D’Antoni, Marco De Luca, Mariangiola Dezani-
Ciancaglini, and Nobuko Yoshida. Global progress in dynamically interleaved mul-
tiparty sessions.
In Franck van Breugel and Marsha Chechik, editors, CONCUR
2008 - Concurrency Theory, 19th International Conference, CONCUR 2008, Toronto,
Canada, August 19-22, 2008. Proceedings, volume 5201 of Lecture Notes in Computer
Science, pages 418–433. Springer, 2008.
URL https://doi.org/10.1007/978-3-
540-85361-9_33.
Paula Blechschmidt, Kirstin Peters, and Uwe Nestmann. Compositional Interface Re-
finement Through Subtyping in Probabilistic Session Types. Submitted to ICTAC’25.
Filipe Casal, Andreia Mordido, and Vasco T. Vasconcelos.
Mixed sessions.
Theor.
Comput. Sci., 897:23–48, 2022. URL https://doi.org/10.1016/j.tcs.2021.08.
005.
Tzu-Chun Chen, Mariangiola Dezani-Ciancaglini, Alceste Scalas, and Nobuko Yoshida.
On the preciseness of subtyping in session types. Log. Methods Comput. Sci., 13(2),
2017. URL https://doi.org/10.23638/LMCS-13(2:12)2017.
Mario Coppo, Mariangiola Dezani-Ciancaglini, Nobuko Yoshida, and Luca Padovani.
Global
progress
for
dynamically
interleaved
multiparty
sessions.
Math.
Struct. Comput. Sci., 26(2):238–302, 2016.
URL https://doi.org/10.1017/
S0960129514000188.
75
Bibliography
Mariangiola Dezani-Ciancaglini, Silvia Ghilezan, Svetlana Jaksic, Jovanka Pantovic, and
Nobuko Yoshida. Precise subtyping for synchronous multiparty sessions. In Simon
Gay and Jade Alglave, editors, Proceedings Eighth International Workshop on Pro-
gramming Language Approaches to Concurrency- and Communication-cEntric Soft-
ware, PLACES 2015, London, UK, 18th April 2015, volume 203 of EPTCS, pages
29–43, 2015. URL https://doi.org/10.4204/EPTCS.203.3.
Simon Gay and António Ravara.
Behavioural Types: from Theory to Tools.
River
Publishers, 2017. URL https://doi.org/10.1145/2873052.
Simon J. Gay. Subtyping supports safe session substitution. In Sam Lindley, Conor
McBride, Philip W. Trinder, and Donald Sannella, editors, A List of Successes That
Can Change the World - Essays Dedicated to Philip Wadler on the Occasion of His
60th Birthday, volume 9600 of Lecture Notes in Computer Science, pages 95–108.
Springer, 2016. URL https://doi.org/10.1007/978-3-319-30936-1_5.
Simon J. Gay and Malcolm Hole. Subtyping for session types in the pi calculus. Acta
Informatica, 42(2-3):191–225, 2005. URL https://doi.org/10.1007/s00236-005-
0177-z.
Jean-Yves Girard. Linear logic. Theor. Comput. Sci., 50:1–102, 1987. URL https:
//doi.org/10.1016/0304-3975(87)90045-4.
Hans A. Hansson. Time and probability in formal design of distributed systems. PhD
thesis, University Uppsala, Sweden, 1991.
Oltea Mihaela Herescu and Catuscia Palamidessi. Probabilistic asynchronous pi-calculus.
In Jerzy Tiuryn, editor, Foundations of Software Science and Computation Struc-
tures, Third International Conference, FOSSACS 2000, Held as Part of the Joint
European Conferences on Theory and Practice of Software,ETAPS 2000, Berlin, Ger-
many, March 25 - April 2, 2000, Proceedings, volume 1784 of Lecture Notes in Com-
puter Science, pages 146–160. Springer, 2000. URL https://doi.org/10.1007/3-
540-46432-8_10.
Kohei Honda. Types for dyadic interaction. In Eike Best, editor, CONCUR ’93, 4th
International Conference on Concurrency Theory, Hildesheim, Germany, August 23-
26, 1993, Proceedings, volume 715 of Lecture Notes in Computer Science, pages 509–
523. Springer, 1993. URL https://doi.org/10.1007/3-540-57208-2_35.
Kohei Honda, Vasco Thudichum Vasconcelos, and Makoto Kubo. Language primitives
and type discipline for structured communication-based programming. In Chris Han-
kin, editor, Programming Languages and Systems - ESOP’98, 7th European Sympo-
sium on Programming, Held as Part of the European Joint Conferences on the Theory
and Practice of Software, ETAPS’98, Lisbon, Portugal, March 28 - April 4, 1998, Pro-
ceedings, volume 1381 of Lecture Notes in Computer Science, pages 122–138. Springer,
1998. URL https://doi.org/10.1007/BFb0053567.
76
Bibliography
Kohei Honda, Nobuko Yoshida, and Marco Carbone.
Multiparty asynchronous ses-
sion types.
In George C. Necula and Philip Wadler, editors, Proceedings of the
35th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,
POPL 2008, San Francisco, California, USA, January 7-12, 2008, pages 273–284.
ACM, 2008. URL https://doi.org/10.1145/1328438.1328472.
Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session
types. J. ACM, 63(1):9:1–9:67, 2016. URL https://doi.org/10.1145/2827695.
Ross Horne. Session subtyping and multiparty compatibility using circular sequents. In
Igor Konnov and Laura Kovács, editors, 31st International Conference on Concur-
rency Theory, CONCUR 2020, September 1-4, 2020, Vienna, Austria (Virtual Con-
ference), volume 171 of LIPIcs, pages 12:1–12:22. Schloss Dagstuhl - Leibniz-Zentrum
für Informatik, 2020. URL https://doi.org/10.4230/LIPICS.CONCUR.2020.12.
Hans Hüttel, Ivan Lanese, Vasco T. Vasconcelos, Luís Caires, Marco Carbone, Pierre-
Malo Deniélou, Dimitris Mostrous, Luca Padovani, António Ravara, Emilio Tuosto,
Hugo Torres Vieira, and Gianluigi Zavattaro.
Foundations of session types and
behavioural contracts.
ACM Comput. Surv., 49(1):3:1–3:36, 2016.
URL https:
//doi.org/10.1145/2873052.
Omar Inverso, Hernán Melgratti, Luca Padovani, Catia Trubiani, and Emilio Tuosto.
Probabilistic Analysis of Binary Sessions. In Igor Konnov and Laura Kovács, edi-
tors, 31st International Conference on Concurrency Theory (CONCUR 2020), volume
171 of Leibniz International Proceedings in Informatics (LIPIcs), pages 14:1–14:21,
Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. ISBN
978-3-95977-160-3. URL https://doi.org/10.4230/LIPIcs.CONCUR.2020.14.
Naoki Kobayashi and Davide Sangiorgi. A hybrid type system for lock-freedom of mobile
processes. ACM Trans. Program. Lang. Syst., 32(5):16:1–16:49, 2010. URL https:
//doi.org/10.1145/1745312.1745313.
Kim Guldstrand Larsen and Arne Skou. Bisimulation through probabilistic testing. Inf.
Comput., 94(1):1–28, 1991. URL https://doi.org/10.1016/0890-5401(91)90030-
6.
Nancy A. Lynch and Frits W. Vaandrager.
Forward and Backward Simulations: I.
Untimed Systems. Information and Computation, 121(2):214–233, 1995. URL https:
//doi.org/10.1006/INCO.1995.1134.
Robin Milner. A Calculus of Communicating Systems, volume 92 of Lecture Notes in
Computer Science. Springer, 1980. ISBN 3-540-10235-3. URL https://doi.org/10.
1007/3-540-10235-3.
Robin Milner. Communication and concurrency. PHI Series in computer science. Pren-
tice Hall, 1989. ISBN 978-0-13-115007-2.
77
Bibliography
Robin Milner. The Polyadic π-Calculus: a Tutorial. In Logic and Algebra of Specification,
volume 49 of Series F: Computer & Systems Sciences, pages 203–246, 1993. URL
https://doi.org/10.1007/978-3-642-58041-3_6.
Robin Milner. Communicating and mobile systems - the Pi-calculus. Cambridge Uni-
versity Press, 1999. ISBN 978-0-521-65869-0.
Robin Milner, Joachim Parrow, and David Walker. A Calculus of Mobile Processes,
Part I and II.
Information and Computation, 100(1):1–77, 1992.
URL https://
doi.org/10.1016/0890-5401(92)90008-4 and https://doi.org/10.1016/0890-
5401(92)90009-5.
Rocco De Nicola and Matthew Hennessy. Testing equivalences for processes. Theor.
Comput. Sci., 34:83–133, 1984.
URL https://doi.org/10.1016/0304-3975(84)
90113-0.
OpenAI.
ChatGPT.
https://chatgpt.com/share/688b6028-44b0-8001-8616-
37d0ffcdcaea, 2025. Large language model, versions gpt-4o and gpt-5o, accessed
August 2025, Homepage-URL https://chatgpt.com/.
Luca Padovani, Vasco Thudichum Vasconcelos, and Hugo Torres Vieira. Typing liveness
in multiparty communicating systems. In Eva Kühn and Rosario Pugliese, editors,
Coordination Models and Languages - 16th IFIP WG 6.1 International Conference,
COORDINATION 2014, Held as Part of the 9th International Federated Conferences
on Distributed Computing Techniques, DisCoTec 2014, Berlin, Germany, June 3-5,
2014, Proceedings, volume 8459 of Lecture Notes in Computer Science, pages 147–162.
Springer, 2014. URL https://doi.org/10.1007/978-3-662-43376-8_10.
Kirstin Peters and Nobuko Yoshida. On the expressiveness of mixed choice sessions.
In Valentina Castiglioni and Claudio Antares Mezzina, editors, Proceedings Combined
29th International Workshop on Expressiveness in Concurrency and 19th Workshop
on Structural Operational Semantics, EXPRESS/SOS 2022, Warsaw, Poland, 12th
September 2022, volume 368 of EPTCS, pages 113–130, 2022. URL https://doi.
org/10.4204/EPTCS.368.7.
Kirstin Peters and Nobuko Yoshida. Separation and encodability in mixed choice mul-
tiparty sessions. In Pawel Sobocinski, Ugo Dal Lago, and Javier Esparza, editors,
Proceedings of the 39th Annual ACM/IEEE Symposium on Logic in Computer Sci-
ence, LICS 2024, Tallinn, Estonia, July 8-11, 2024, pages 62:1–62:15. ACM, 2024.
URL https://doi.org/10.1145/3661814.3662085.
Benjamin C. Pierce. Types and programming languages. MIT Press, 2002. ISBN 978-0-
262-16209-8.
Benjamin C. Pierce and Davide Sangiorgi. Typing and subtyping for mobile processes.
Math. Struct. Comput. Sci., 6(5):409–453, 1996. URL https://doi.org/10.1017/
s096012950007002x.
78
Bibliography
Alceste Scalas and Nobuko Yoshida. Less is More: Multiparty Session Types Revisited.
Doc technical report dtrs-18-6, Imperial College London, 2018.
Alceste Scalas and Nobuko Yoshida. Less is more: multiparty session types revisited.
Proc. ACM Program. Lang., 3(POPL):30:1–30:29, 2019. URL https://doi.org/10.
1145/3290343.
Roberto Segala and Nancy A. Lynch. Probabilistic simulations for probabilistic pro-
cesses. In Bengt Jonsson and Joachim Parrow, editors, CONCUR ’94, Concurrency
Theory, 5th International Conference, Uppsala, Sweden, August 22-25, 1994, Pro-
ceedings, volume 836 of Lecture Notes in Computer Science, pages 481–496. Springer,
1994. URL https://doi.org/10.1007/978-3-540-48654-1_35.
Kaku Takeuchi, Kohei Honda, and Makoto Kubo. An interaction-based language and its
typing system. In Constantine Halatsis, Dimitris G. Maritsas, George Philokyprou,
and Sergios Theodoridis, editors, PARLE ’94: Parallel Architectures and Languages
Europe, 6th International PARLE Conference, Athens, Greece, July 4-8, 1994, Pro-
ceedings, volume 817 of Lecture Notes in Computer Science, pages 398–413. Springer,
1994. URL https://doi.org/10.1007/3-540-58184-7_118.
Daniele Varacca and Nobuko Yoshida. Probabilistic pi-calculus and event structures. In
Alessandro Aldini and Franck van Breugel, editors, Proceedings of the Fifth Workshop
on Quantitative Aspects of Programming Languages, QAPL 2007, Braga, Portugal,
March 24-25, 2007, volume 190 of Electronic Notes in Theoretical Computer Science,
pages 147–166. Elsevier, 2007. URL https://doi.org/10.1016/j.entcs.2007.07.
009.
Kazuki Watanabe, Clovis Eberhart, Kazuyuki Asada, and Ichiro Hasuo. Compositional
probabilistic model checking with string diagrams of mdps. In Constantin Enea and
Akash Lal, editors, Computer Aided Verification - 35th International Conference, CAV
2023, Paris, France, July 17-22, 2023, Proceedings, Part III, volume 13966 of Lecture
Notes in Computer Science, pages 40–61. Springer, 2023. URL https://doi.org/
10.1007/978-3-031-37709-9_3.
Nobuko Yoshida and Lorenzo Gheri. A very gentle introduction to multiparty session
types. In Dang Van Hung and Meenakshi D’Souza, editors, Distributed Computing and
Internet Technology - 16th International Conference, ICDCIT 2020, Bhubaneswar,
India, January 9-12, 2020, Proceedings, volume 11969 of Lecture Notes in Computer
Science, pages 73–93. Springer, 2020. URL https://doi.org/10.1007/978-3-030-
36987-3_5.
79
|
Technische Universität Berlin Electrical Engineering and Computer Science Models and Theory of Distributed Systems Compositional Interface Refinement Through Subtyping in Probabilistic Session Types Masterarbeit von Paula Blechschmidt zur Erlangung des Grades „Master of Science" (M. Sc.) im Studiengang Computer Science (Informatik) Erstgutachter: Prof. Dr. Uwe Nestmann Zweitgutachterin: Prof. Dr. Kirstin Peters August 2025 12 Sep 2025 Usage of Artificial Intelligence The large language model ChatGPT, chatgpt.com, was sparingly used throughout this thesis for general language help concerning tone, grammar, and vocabulary and as a guide finding literature. All usage is contained in one publicly accessible chat, whose link is found in the bibliography [OpenAI, 2025]. 3 Abstract Multiparty session types (MPST) are a robust typing framework that ensures safe and deadlock-free communication within distributed protocols. As these protocols grow in complexity, compositional modelling becomes increasingly important to scalably verify their behaviour. Therefore, we propose using a refinement-based subtyping approach to facilitate the modularity needed for compositional verification. Subtyping in classic MPST systems inherently represents a notion of refinement: A larger type may be safely substituted by a smaller, refined type. The aim of this thesis is to significantly extend this concept and discover just how flexible and expressive subtyping relations can be. We present a probabilistic extension for MPST, the probabilistic mixed choice multiparty session π-calculus, with a novel, flexible subtyping system which allows one channel (the interface) to be substituted by several channels (the refinement). Our subtyping is remarkably expressive; any selection of well-typed channels as the refinement has a corresponding interface in a single channel type. To facilitate this generality, we base our system on a powerful variant of MPST, mixed choice multiparty session types (MCMP), which offers greater flexibility in communication choices. We establish soundness of the probabilistic mixed choice multiparty session system through several key results. In particular, we prove subject reduction, error-freedom and deadlock-freedom, ensuring that well-typed processes are well-behaved. This work demonstrates subtyping to possess great previously untapped potential for stepwise refinement and compositional verification. The presented framework enables highly expressive, compositional, and verifiable modelling of probabilistic distributed communication. A promising avenue for further research is imperfect refinement, a logical extension of the system which leverages the strengths of the probabilistic setting: We can allow the refinement to be deadlock-free with bounded probability instead of in 100% of cases. 5 Contents 1 Introduction 11 1.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus 15 2.1 Process Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2 Operational Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.3 Typing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.1 Type Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.2 Standard Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.3.3 Typing Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3 Subtyping with Refinement 33 3.1 Single-Channel Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.2 Adapted Typing System . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.3 Multi-Channel Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 4 Properties 47 4.1 Properties of Types and Subtyping . . . . . . . . . . . . . . . . . . . . . 47 4.1.1 Subtyping is a Preorder . . . . . . . . . . . . . . . . . . . . . . . 48 4.1.2 Safety and Deadlock-Freedom . . . . . . . . . . . . . . . . . . . . 50 4.1.3 Interface Existence . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.2 Properties of Typed Processes . . . . . . . . . . . . . . . . . . . . . . . . 58 4.2.1 Subject Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . 59 Preliminary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . 59 Theorem Statement and Proof . . . . . . . . . . . . . . . . . . . . 66 4.2.2 Error- and Deadlock-Freedom . . . . . . . . . . . . . . . . . . . . 69 5 Conclusion and Future Work 73 7 List of Examples 1 Running Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2 Running Example-Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 3 Running Example-Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . 20 4 Refinement into Multiple Reduction Sequences . . . . . . . . . . . . . . . . 22 5 Prefixes: Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 6 Prefixes: Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 7 Standard Subtyping Derivation . . . . . . . . . . . . . . . . . . . . . . . . . 29 8 Running Example-Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 9 Single-Channel Subtyping Derivation . . . . . . . . . . . . . . . . . . . . . . 35 10 Running Example-Subtyping . . . . . . . . . . . . . . . . . . . . . . . . . 44 11 Running Example-Subtyping Derivation . . . . . . . . . . . . . . . . . . . 45 12 Subtyping is not Antisymmetric . . . . . . . . . . . . . . . . . . . . . . . . . 50 13 Deadlock Freedom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 9 1 Introduction In general, the term interface refers to the observable-or externally accessible-aspects of a component, module, or system; the exact representation of an interface inherently depends on the formalism at hand. In many cases, interfaces appear at various levels of abstraction and granularity, which may be developed through stepwise refinement starting from some high-level description and adding details with each refinement step; reasonable notions of refinement relations should obviously be transitive. Ideally, refinement can be done in a compositional manner, where single components may be refined independently, while yielding a refinement of the full system. There is a large body of research on notions of refinement and accompanying methods, as it is generally quite specific to the formalism at hand. Consequently, there is no real consensus on definitions and interpretations, not even on terminology. State-based representations of systems are likely the class of formalisms with the most diverse contributions. There, refinement relations typically compare behaviours (in its simplest form traces), often expressed as a mapping from the refined version back to its initial specification [Abadi and Lamport, 1988]. The concepts of (forward and/or backward) simulation [Lynch and Vaandrager, 1995], which roughly correspond to completeness and soundness requirements of an implementation w.r.t. its specification, are possibly even more wide-spread and come in many flavors. In multiparty session types (MPST) [Yoshida and Gheri, 2020], where our work is situated, the interface of a process captures the type of the session it participates in, which describes the sequence of input/output actions (messages), along with their types, direction, and order of communication. Here, often enough, the notion of channel comprises a particular session of a protocol among several participants, including all of the roles that act within the respective protocol to achieve a common goal [Honda et al., 1998, 2008; Scalas and Yoshida, 2019]. If an interface is represented as a type, then refinement is naturally and best realized as a subtyping relation. In this thesis, interfaces are (sets of) channels, which are themselves constructed from sessions and the roles used within them. As usual these channels are given as implementation in a session calculus and as specification in local contexts, where a local context ∆is a collection of channel names and their respective channel types. The type system provides rules to check the implementation against its specification and ensures safety and deadlock-freedom (among other properties that could be defined). As part of our type system, we introduce a novel subtyping relation ≤P that allows to also verify probabilities P and specifies a refinement relation on local contexts. Starting from ∆, if ∆′ ≤1 ∆then the refinment ∆′ refines the interface ∆by distributing some behaviour (on single channels) on sets of interacting channels. Starting from ∆′, if ∆′ ≤1 ∆then the interface ∆provides an abstract specification of the refinment ∆′, 11 1 Introduction where abstract means from an external point of view abstracting from interactions within ∆′. To gain some initial understanding, we highlight said dynamic of a refinement distributing the behaviour of a single channel, the interface, to a set of interacting channels with an example. We will keep revisiting this setting throughout the thesis as we build up the theory of our system. We use the symbol ⃝to indicate the end of an example. Example 1 (Running Example). Consider a plaintiff bringing a lawsuit to court, which, after possibly taking witness testimony, announces a verdict. To initiate the protocol, interface p c w the plaintiff sends a message to the court, announcing the lawsuit (lws). Upon receiving this, the court decides with a total 70% chance that it has enough information to announce the verdict to the plaintiff. Of those 70%, half of the time, i.e., with a probability of 35%, the court will find the defendant guilty (glt) and else not guilty. Otherwise, with 30%, the court requests (rqs) a witness to testify. Upon receiving this request, the witness sends a message to the court representing the statement (st) after which the court announces the verdict not guilty to the plaintiff. In case no witness is called, the court will send them a message releasing (rls) them and the protocol terminates. This protocol has an issue: The defendant on whom the judgement is passed is not represented as a participant. To obtain its verdict, the defendant could implicitly be assumed part of the court. But then the court has a clear conflict of interest: they hold the power to pass judgement on themselves. A solution is splitting the court, thereby dividing the power. Hence, consider the court to be an interface that must be refined into two separate participants: defendant and judge. The plaintiff is the same as before but now refinement p j d w interacts with a judge instead of the court. This judge, after receiving the lawsuit, waits for a message from the defendant. The defendant will send a weak (wk) and strong (str) defence with a 50% and 20% likelihood, respectively. Otherwise, it requests to call upon a witness, in 30% of cases. If the defence is strong, the verdict is always not guilty. With 70% a weak defence results in guilty and with 30% in not guilty. If no statement is given and a witness is requested, the judge receives the witness testimony, and decides as before. ⃝ The target audience for this thesis is familiar with transition systems and type systems, including π-calculus, but we do not assume a close familiarity with (multiparty) session types. We begin by introducing our calculus in Chapter 2, i.e., its syntax, operational semantics, and typing system. The latter includes basic subtyping which does not include any of our novel refinement ideas, to ease the reader into the system as a whole. Conceptually, this chapter can be considered to be our preliminaries. Afterwards, in Chapter 3, we move on to the core of this work, the refinement-based multi-channel subtyping. We will first build intuition for the novel subtyping system by providing an intermediary set of subtyping rules which bridge the gap between the functionality of 12 1.1 Related Work basic subtyping and the complex form of the advanced subtyping. Only then do we finally present the novel subtyping rules. The chapter contains plenty of examples to guide the reader through the, at times, convoluted syntax. Additionally, we show that the intermediary subtyping subsumes the basic one and is in turn subsumed itself by the novel, advanced one. In Chapter 4, we then prove several essential properties of the presented probabilistic mixed choice multiparty session π-calculus, its type system, and multi-channel subtyping. These include subject reduction (§ 4.2.1) and deadlockfreedom (§ 4.2.2), both standard in MPST. Additionally, we prove the flexibility of our subtyping: Any selection of well-typed, safe, and deadlock-free channels has a corresponding interface in a single channel type (§ 4.1.3). Large parts of the work presented in this thesis have also been submitted as a paper to the 22nd International Colloquium on Theoretical Aspects of Computing (ICTAC 2025) [Blechschmidt et al.]. 1.1 Related Work The history of multiparty session types begins with the inception of the process calculus CCS (calculus of communication systems) in 1980 [Milner, 1980, 1989]. Extended with name-passing, this framework later became the now well-known π-calculus [Milner, 1993, 1999]. Based on this, the original binary session type theory was conceived in the 1990s [Honda, 1993; Takeuchi et al., 1994; Honda et al., 1998]. Inspired also by linear logic [Girard, 1987], session type theory can handle binary communication in which the two communication partners essentially act as duals of each other. With the goal of allowing more than two communicating partners, session types were further developed into the classic MPST framework in [Honda et al., 2008] (asynchronous) and [Bejleri and Yoshida, 2008] (synchronous). The slightly simplified MPST system of [Bettini et al., 2008], introduced shortly afterwards, gained wide acceptance as the canonical version of MPST. (Multiparty) session types have been extensively researched in many contexts since then, see surveys of the field [Hüttel et al., 2016; Gay and Ravara, 2017]. The duality-based approach of binary sessions carried over into multiparty sessions, which was overhauled by [Scalas and Yoshida, 2019], becoming the new standard. Hence, we, too, base our framework on theirs. Our system is additionally heavily influenced by [Peters and Yoshida, 2024], who presented a mixed choice multiparty (MCMP) calculus (based on [Casal et al., 2022]). We integrate their approach into our system. As shown in [Peters and Yoshida, 2022, 2024], MCMP is strictly more expressive than both standard MPST and the system of [Casal et al., 2022]. There is a significant history of integrating probabilities into transition systems, as early as 1991 with work on probabilistic bisimulation [Larsen and Skou, 1991]. A number of process calculi have been reimagined with probabilities in literature, including probabilistic CCS [Hansson, 1991] and probabilistic π-calculus [Herescu and Palamidessi, 2000]. Binary sessions have seen probabilistic extension in [Inverso et al., 2020]. We take inspiration mostly from [Aman and Ciobanu, 2019], who extended the MPST the13 1 Introduction ory of [Honda et al., 2008, 2016]. Subtyping has been a fundamental concept in these systems since the introduction of subtyping for π-calculus in [Pierce and Sangiorgi, 1996]. But even earlier, there were precursors to subtyping for CCS in, for example, simulation [Milner, 1980; Lynch and Vaandrager, 1995] and testing preorders capturing safe replacement [Nicola and Hennessy, 1984]. Later, in [Gay and Hole, 2005], this concept has been adapted for the subsequent binary session theory. Modern MPST subtyping is largely based on the framework of [Dezani-Ciancaglini et al., 2015]. Our subtyping fundamentally builds on [Peters and Yoshida, 2024] (itself based on [Chen et al., 2017]), in particular for their handling of mixed choices, though the additional complexity of our multi-channel system makes the connection less immediately apparent. Another approach to integrate refinement in subtyping, though based on a very different motivation, can be found in [Horne, 2020]. In comparison, our subtyping is much more flexible, as we will prove in Section 4.1.3. For instance, we do not exclude races, i.e., two senders that want to communicate with the same receiver at the same time. 14 2 Preliminaries: The Probabilistic Mixed Choice MPST π-Calculus After giving a small introduction to multiparty session types (MPST), this chapter introduces the process syntax (§ 2.1), the operational semantics (§ 2.2) and the typing system of the probabilistic mixed choice multiparty session π-calculus. Given the complexity and breadth of the field, we will refrain from explaining MPST in full technicality, neither will we expect detailed background knowledge. For the curious reader, we recommend [Yoshida and Gheri, 2020] as a starting point. In general terms, MPST is a formal framework for specification and verification of communication protocols involving more than two participants. Participants are modelled as processes which are formed by a certain grammar, the process syntax. The core functionality of these processes is communication: Sending and receiving messages to and from other processes. These messages are sent and received via channels. A channel comprises a common name for a session both participants are in, and a role, which is an identifier within said session. In communication, by using the channel and the syntax of the sending action, it is therefore specified which role is sending what message to which other role, where both communication partners must share the same session. Interactions happen in synchronous steps, both communication partners "use up" the action at the same time: The process makes a reduction step (asynchronous MPST exists too, see Honda et al. [2008], we use synchronous MPST). To verify or specify interactions between processes, channels are assigned session types using typing rules. Processes built from these types are then said to be justified by them. A key property of these systems, called subject reduction, is that a process justified by some types will be justified by some other types once it reduces. In other words, once we establish a process to be well-behaved in some sense, we know that no matter which steps it performs, well-behaved-ness remains. Furthermore by verifying that a collection of types fulfils certain properties, we can infer that processes justified by that collection also fulfil them. One of the most common properties of interest is deadlock-freedom, the guarantee that whenever no more interaction can occur, all participants are finished. In line with modern MPST theory, our system does not have projection (of global types onto local types) and thus much of the understanding for general π-calculus is transferable to the work of this thesis. At its core, our system is a standard multiparty session calculus (as in [Scalas and Yoshida, 2019, 2018]) without session initialisation and delegation, i.e., without transferring access to certain channels across participants. We extend this by the mixed choices of [Peters and Yoshida, 2024] (based on [Milner, 1993; Milner et al., 1992]). Classically, 15 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus when a participant seeks to communicate, it will be able to do so with only exactly one other participant and only in one mode (sending or receiving). Mixed choices allow the participant to offer sending actions and receiving actions at the same time, to and from arbitrarily different participants. Mixed choice calculi have increased expressiveness and flexibility. Finally, our system includes probabilities in the outputs (as in [Aman and Ciobanu, 2019; Inverso et al., 2020], based on [Herescu and Palamidessi, 2000; Varacca and Yoshida, 2007]). 2.1 Process Syntax We begin by defining which shapes our processes may assume. After explaining the definition in detail, we follow by formalizing the courthouse example from the introduction using this syntax. Definition 2.1 (Process Syntax). The syntax of the probabilistic mixed choice multiparty session π-calculus is inductively given by: v ::= x, y, z,.. | 1, 2,.. | ⊤, ⊥ (variables, numbers, booleans) c ::= s[r] (session with set of roles) P, Q, Pi ::= 0 | P | Q | (νs)P (inaction, parallel composition, restriction) | if v then P else Q (conditional) | def D in P | X⟨ev, ec⟩ (process definition, process call) | c X i∈I Mi (mixed choice on c with finite I ̸= ∅) Mi ::= p←q ? l(x).P (p receives message l(x) from q) | M i∈I Pi ▶Ni.Pi (probabilistic choice with finite I ̸= ∅) Ni ::= p→q ! l⟨v⟩| τ (p sends message l⟨v⟩to q, internal action) D ::= X(ex; ec) def = P (declaration of process constant X) We will first explain the syntax. Values v are variables, numbers, or booleans. Note that the inclusion of more complex values, such as floating point numbers, would be a straightforward, orthogonal extension of the system. A channel c = s[r] specifies a session s being used as a communication endpoint by the roles r to interact with other roles within that session. The term "participant" is overloaded in MPST, referring both to a participant in a communication protocol, and a participant in a session. In our system, a single process may use several different channels s[r], possibly even from different sessions. Thus, to avoid confusion, we will refer to r only as "roles". In standard session typing, instead of sets, single roles are used. The reason why we chose these role sets will become apparent when the multi-channel subtyping is introduced later. 16 2.1 Process Syntax Inaction 0 is the representation of a terminated process, we sometimes omit trailing 0. Composition P | Q allows the processes P and Q to run in parallel. We often use the previously mentioned notion of "participant" to denote parallel processes. Parallelly composed processes, i.e., different participants may interact. Restriction (νs)P encapsulates a session s within the scope of P. The conditional if v then P else Q behaves as P if v is true and else as Q. Process definition def D in P models recursion in our calculus through recursive process calls. Summands of mixed choices c P i∈I Mi are inputs p←q ? l(x).P and output sums Pi ▶Ni.Pi. They differ from [Peters and Yoshida, 2024] in the fact that our output sums are probabilistic. The channel c in front of the sum specifies the session s within which the entire mixed choice is taking place. In inputs p←q ? l(x).P, role p receives a message with label l from role q. The transmitted payload will be stored in variable x. After receiving the input, the participant will continue as the process P. Classically, a sum of inputs is called branching, as each different input a participant receives, allows them to continue differently. In other words, their behaviour branches depending on the communication partner('s choices). Probabilistic output choices L i∈I Pi ▶Ni.Pi, or probabilistic sums, function similarly to typical non-probabilistic output choices. Where classically one output action is chosen nondeterministically from a sum of outputs (called selection), in our system one probabilistic output sum is chosen nondeterministically from several occurring in a mixed choice. Clearly differentiating the nondeterministic choices from probabilistic ones is key for probabilistic systems (see [Segala and Lynch, 1994]). Then the output action is selected according to the probabilities Pi ∈R. These two selection steps, however, functionally occur simultaneously, not successively. Output actions Ni may either be p→q ! l⟨v⟩, where role p sends a message with label l and payload v to role q, or τ, an internal action. Afterwards, the participant continues as process Pi. Declarations D provide definitions of processes that may be invoked by a process call X⟨ev, ec⟩with some values ev, where ec lists the channels used by the process that is declared by X. Next, let us specify the binders. We adopt a form of Barendregt convention: We assume that alpha-conversion is implicitly applied to ensure that all bound variables are pairwise distinct and different from free ones. Restriction (νs)P binds session s in P. Declaration X(ex; ec) def = P binds process constants X and variables ex in P. The vector ec lists the channels used in P. Message receiving p←q ? l(x).P binds the variable x in P. All other occurrences of process constants, variables, and sessions are free. Let fs(P) and fs(D) denote the set of free sessions in P and D, respectively. Let dpv(D) be the set of process variables declared in D and let fpv(P) and fpv(D) denote the set of free process variables in P and D. Substitution P[x1, . . . , xn 7→v1, . . . , vn] simultaneously replaces all free occurrences of xi by vi in P, possibly applying alpha-conversion to avoid capture. Substitution P[ex 7→ev] is undefined if |ex| ̸= |ev|. Finally, for legibility and convenience, we introduce some shorthands and abbreviations. We abbreviate singleton sums c P i∈{1} M as c ▷M and L i∈{1} P ▶N.P as P ▶N.P. We sometimes omit the probability 1, i.e., abbreviate outputs 1 ▶N.P as N.P. Whenever I ∩J = ∅and I ∪J ̸= ∅, we allow to split a mixed choice c P i∈I∪J Mi into c ▷P i∈I Mi +P j∈J Mj and we allow to split a probabilistic choice L i∈I∪J Pi ▶Ni.Pi into 17 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus L i∈I Pi ▶Ni.Pi ⊕L j∈J Pj ▶Nj.Pj. In particular, we often split sums into a single summand and the rest of the sum, i.e., c P i∈I Mi becomes c ▷Mj +M with M = c P i∈I\{j} Mi and L i∈I Pi ▶Ni.Pi becomes Pj ▶Nj.Pj ⊕N with N = L i∈I\{j} Pi ▶Ni.Pi. To simplify the reduction rules in Definition 2.3, we allow M and N to be empty mixed/probabilistic choices. We allow to unify and split similar summands of probabilistic choices, i.e., Pi ▶N.P ⊕Pj ▶N.P = (Pi + Pj) ▶N.P. Let us return to the running example to show the protocol as a process in our syntax. Example 2 (Running Example-Syntax). The unrefined interface of the courthouse system in Example 1 can be implemented as a process PI = (νs)(Pp | Pc | Pw), whose components represent the participants in the described protocol. The plaintiff is Pp, the court is Pc, and the witness is Pw. The implementations of these processes are given as Pp = s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x) Pc = s[c] ▷j←p ? lws().s[c] ▷ 0.35 ▶j→p ! glt⟨⊤⟩.s[j] ▷j→w ! rls⟨⟩ ⊕0.35 ▶j→p ! glt⟨⊥⟩.s[j] ▷j→w ! rls⟨⟩ ⊕0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩ Pw = s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() where the role set of the channel s[c] used by the court process Pc is c = {j, d}, as it embodies both the judge and defendant. Avid readers might notice that the defendant is not found within the actions of Pc. As, however, this participant gets refined into two, both roles are already found in the role set. Syntactically, this does not pose a problem; unused roles are indeed allowed to occur in role sets, as we will see later on. The refined system can be implemented as PR = (νs)(Pp | Pj | Pd | Pw). The processes Pp and Pw stay the same and we have: Prls = s[c] ▷j→w ! rls⟨⟩ Pd = s[d] ▷ 0.5 ▶d→j ! wk⟨⟩ ⊕ 0.2 ▶d→j ! str⟨⟩ ⊕ 0.3 ▶d→j ! wit⟨⟩ Pj = s[j] ▷j←p ? lws().s[j] ▷ j←d ? wk().s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕ 0.3 ▶j→p ! glt⟨⊥⟩.Prls + j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls + j←d ? wit().s[j] ▷j→w ! rqs⟨⟩. s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩ for the defendant and judge, where Prls is used as a helper process for legibility. ⃝ With the process syntax defined, we will now move on to specifying how processes behave and interact. 18 2.2 Operational Semantics 2.2 Operational Semantics In this section, we introduce the operational semantics of the probabilistic mixed choice multiparty session π-calculus. First, we define the structural congruence, which we need for the reduction semantics to specify all possible reductions of processes. Definition 2.2 (Structural Congruence ≡). Structural congruence ≡is the smallest congruence on processes that includes alpha conversion ≡α and: P | 0 ≡P P | Q ≡Q | P (P | Q) | R ≡P | (Q | R) (νs)0 ≡0 (νs)(νs′)P ≡(νs′)(νs)P P | (νs)Q ≡(νs) (P | Q) if s /∈fs(P) def D in (νs)P ≡(νs)def D in P if s /∈fs(D) (def D in P) | Q ≡def D in (P | Q) if dpv(D) ∩fpv(Q) = ∅ def D in (def D′ in P) ≡def D ∪D′ in P if dpv(D) ∩dpv(D′) = ∅and fpv(D) ∩dpv(D′) = ∅ Definition 2.3 (Probabilistic Reduction Semantics). The reduction semantics is given as the relation -→P inductively defined as follows. if ⊤then P else Q -→1 P [R-Cond-⊤] if ⊥then P else Q -→1 Q [R-Cond-⊥] def D in 0 -→1 0 [R-Def-0] c ▷(P ▶τ.P ⊕N) + M -→P P [R-τ] s[r1] ▷(P ▶q→p ! l⟨v⟩.Q ⊕N) + MQ | s[r2] ▷p←q ? l(x).P + MP -→P Q | P[x 7→v] [R-Com] X(ex; ec) def = P ∈D def D in (X⟨ev, ec⟩| Q) -→1 def D in (P[ex 7→ev] | Q) [R-Def] P -→P P ′ P | Q -→P P ′ | Q [R-Par] P ≡P ′ P ′ -→P Q′ Q′ ≡Q P -→P Q [R-Struct] P -→P P ′ (νs)P -→P (νs)P ′ [R-Res] P -→P P ′ def D in P -→P def D in P ′ [R-Def-In] The statement P -→P P ′ is meant to be read as "process P reduces to the continuation P ′ with probability P". Let us now explain each reduction rule. [R-Cond-⊤] and [R-Cond-⊥] allow conditional processes to take a reduction step to their intended continuation depending on the boolean value in the clause. Rule [R-Def-0] allows to garbage collect disused declarations. With [R-τ], an internal action is performed with the probability P. Rule [R-Com] allows communication between two parallel mixed choices, one containing an input and the other an output, with matching roles p, q and matching label l, where the probability P of this step is determined by the sender. By [R-Def], a process call may be executed. Given the declaration of X as X(ex; ec) def = P being contained in the declarations D, the process call is replaced by the substitution P[ex 7→ev]. Often, the reduction semantics is defined "up-to" structural congruence, however, we instead choose to include an explicit rule, [R-Struct]. From the remaining rules 19 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus [R-Par], [R-Res], and [R-Def-In], we get that processes can still reduce in different contexts, namely in parallel composition, under session restriction, and within a process definition. We write P -→P if P -→P P ′ for some P ′, and P ↛if there is no P such that P -→P. Let =⇒P be inductively defined as (a) P =⇒1 P and (b) if P -→P1 P ′ and P ′ =⇒P2 P ′′ then P =⇒P1P2 P ′′. To aid understanding, let us revisit the running example. Example 3 (Running Example-Semantics). Let us examine possible reductions of Example 2. Recall that the interface process was PI = (νs)(Pp | Pc | Pw) with c = {j, d}. For convenience, we reordered the parallel processes, i.e., PI = (νs)(Pp | Pc | Pw) ≡ PI = (νs)(Pp | Pw | Pc). To highlight which components interact in each reduction step, we will alternately highlight them and the corresponding arrow -→P in light green and dark green. The sequence of reductions of the interface process PI we chose begins with the plaintiff sending a lawsuit to the court, specifically the judge. The court then sends the verdict guilty to the plaintiff with a 35% probability and afterwards releases the witness. PI = (νs)(Pp | Pw | Pc) = (νs) s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x) | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() | s[c] ▷j←p ? lws().s[c] ▷ 0.35 ▶j→p ! glt⟨⊤⟩.s[j] ▷j→w ! rls⟨⟩ ⊕0.35 ▶j→p ! glt⟨⊥⟩.s[j] ▷j→w ! rls⟨⟩ ⊕0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩ ! -→1(νs) s[p] ▷p←j ? glt(x) | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() | s[c] ▷ 0.35 ▶j→p ! glt⟨⊤⟩.s[c] ▷j→w ! rls⟨⟩ ⊕ 0.35 ▶j→p ! glt⟨⊥⟩.s[c] ▷j→w ! rls⟨⟩ ⊕ 0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩ ! -→0.35(νs) 0 | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() | s[c] ▷j→w ! rls⟨⟩ ! -→1(νs) 0 | 0 | 0 Hence, PI has a sequence PI =⇒0.35 0, where the court finds the defendant guilty. Let us find a comparable sequence of reductions in the refined system PR. The initial lawsuit message is sent from plaintiff to judge. Afterwards, the defendant delivers a 20 2.2 Operational Semantics weak defense to the judge with 50% probability. The judge then sends guilty to the plaintiff with 70% probability and releases the witness. PR = (νs)(Pp | Pj | Pd | Pw) = (νs) s[p] ▷p→j ! lws⟨⟩.s[p] ▷p←j ? glt(x) | s[j] ▷j←p ? lws().s[j] ▷ j←d ? wk().s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕ 0.3 ▶j→p ! glt⟨⊥⟩.Prls + j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls + j←d ? wit().s[j] ▷j→w ! rqs⟨⟩. s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩ | s[d] ▷ 0.5 ▶d→j ! wk⟨⟩ ⊕ 0.2 ▶d→j ! str⟨⟩ ⊕ 0.3 ▶d→j ! wit⟨⟩ | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() ! -→1(νs) s[p] ▷p←j ? glt(x) | s[j] ▷ j←d ? wk().s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls + j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls + j←d ? wit().s[j] ▷j→w ! rqs⟨⟩. s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩ | s[d] ▷ 0.5 ▶d→j ! wk⟨⟩ ⊕ 0.2 ▶d→j ! str⟨⟩ ⊕ 0.3 ▶d→j ! wit⟨⟩ | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() ! -→0.5(νs) s[p] ▷p←j ? glt(x) | s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕ 0.3 ▶j→p ! glt⟨⊥⟩.Prls | 0 | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() ! -→0.7(νs) 0 | Prls | 0 | s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() ! Prls=s[j] ▷j→w ! rls⟨⟩ ≡ (νs) s[j] ▷j→w ! rls⟨⟩| s[w] ▷ ( w←j ? mtg().s[w] ▷w→j ! st⟨⟩ + w←j ? rls() ! -→1(νs) 0 | 0 Thus, PR, too, has a sequence PR =⇒0.35 0, in which the judge finds the defendant guilty. ⃝ 21 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus Sometimes, one sequence of the interface system will correspond to several transition sequences of the refined system. We observe this in both processes and types, whenever probabilistic branches with the same continuations are summed up in the interface. Example 4 (Running Example-Refinement into Separate Reduction Sequences). For instance, the sequence of PI that leads to not guilty without calling the witness with probability 0.35 is refined into two sequences with probabilities 0.15 and 0.2, respectively. Let P ′ I denote the system obtained after one reduction step of PI in the previous Example 3. Once again, to highlight which components interact in each reduction step, we will alternately highlight them and the corresponding arrow -→P in light green and dark green. Observe the following reduction step of P ′ I. P ′ I = (νs) s[p] ▷p←j ? glt(x) | Pw | s[c] ▷ 0.35 ▶j→p ! glt⟨⊤⟩.s[c] ▷j→w ! rls⟨⟩ ⊕ 0.35 ▶j→p ! glt⟨⊥⟩.s[c] ▷j→w ! rls⟨⟩ ⊕ 0.3 ▶j→w ! rqs⟨⟩.s[c] ▷j←w ? st().s[c] ▷j→p ! glt⟨⊥⟩ ! -→0.35(νs) 0 | Pw | s[c] ▷j→w ! rls⟨⟩ Compare this now to the following two reduction sequences obtainable from the system obtained after one reduction step of PR: P ′ R = (νs) s[p] ▷p←j ? glt(x) | s[j] ▷ j←d ? wk().s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls + j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls + j←d ? wit().s[j] ▷j→w ! rqs⟨⟩. s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩ | s[d] ▷ 0.5 ▶d→j ! wk⟨⟩ ⊕ 0.2 ▶d→j ! str⟨⟩ ⊕ 0.3 ▶d→j ! wit⟨⟩ | Pw ! -→0.5(νs) s[p] ▷p←j ? glt(x) | s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕ 0.3 ▶j→p ! glt⟨⊥⟩.Prls | 0 | Pw ! -→0.3(νs) 0 | Prls | 0 | Pw 22 2.3 Typing System For the reduction sequence P ′ R ⇒0.15 (νs) 0 | Prls | 0 | Pw , and P ′ R = (νs) s[p] ▷p←j ? glt(x) | s[j] ▷ j←d ? wk().s[j] ▷ ( 0.7 ▶j→p ! glt⟨⊤⟩.Prls ⊕0.3 ▶j→p ! glt⟨⊥⟩.Prls + j←d ? str().s[j] ▷j→p ! glt⟨⊥⟩.Prls + j←d ? wit().s[j] ▷j→w ! rqs⟨⟩. s[j] ▷j←w ? st().s[j] ▷j→p ! glt⟨⊥⟩ | s[d] ▷ 0.5 ▶d→j ! wk⟨⟩ ⊕ 0.2 ▶d→j ! str⟨⟩ ⊕ 0.3 ▶d→j ! wit⟨⟩ | Pw ! -→0.2(νs) s[p] ▷p←j ? glt(x) | s[j] ▷j→p ! glt⟨⊥⟩.Prls | 0 | Pw ! -→1(νs) 0 | Prls | 0 | Pw , a different reduction sequence to the same system, P ′ R ⇒0.2 (νs) 0 | Prls | 0 | Pw . These two steps together correspond to the reduction step of P ′ I with 35% probability we have seen above. ⃝ Having introduced process syntax and operational semantics, we are finished with introducing the probabilistic mixed choice multiparty session π-calculus itself. 2.3 Typing System This section introduces the typing system of the probabilistic mixed choice multiparty session π-calculus. Within our system, akin to other works in MPST (see [Scalas and Yoshida, 2019]), to type processes, session types are assigned to the channels through which the processes communicate. There are three components to our typing system. First is the type syntax, a grammar defining the shape of types and typing contexts. Then comes subtyping, a preorder relation on types which enhances the flexibility of the system. Finally, typing rules will specify which processes are justifiable by which collection of types. 2.3.1 Type Syntax Not unusually in MPST systems, our type syntax looks quite similar to the process syntax. At first glance there are nonetheless several differences. Instead of variables and values, we find base types. Channels are omitted; each channel can have only exactly one type and need therefore not be explicitly stated within the type. Conditionals and 23 2 Preliminaries: The Probabilistic Mixed Choice MPST pi-Calculus process definitions/-calls do not have explicit types, as they are not needed. Instead we find classic μ-recursion, as is typically the case for MPST types. For mixed choice actions, however, any previously acquired intuition will carry over to the corresponding types. Definition 2.4 (Type Syntax). The syntax of the probabilistic mixed choice multiparty session types are inductively given by: U ::= nat | bool (base types for numbers and booleans) T, Ti ::= end | t | (μt)T (inaction type, recursion variable, recursion) | X i∈I Li | L | H.T (mixed choice type with a finite I ̸= ∅, output) L, Li ::= In | Out (mixed choice modes) In ::= p←q ? l(U).T (p receives message l with type U from q) Out ::= M i∈I Pi ▶Hi.Ti (probabilistic choice with finite I ̸= ∅and X i∈I Pi ≤1) H, Hi ::= p→q ! l⟨U⟩| τ (p sends message l with type U to q, internal action) ∆::= ∅| ∆, c : T (local context) As usual, we begin by explaining the syntax in chronological order. The base types for numbers and booleans are given as nat and bool. The addition of more base types would be a straightforward extension of the system, similar to values in processes. Inaction holds the same meaning as in the process syntax. The recursion variable t and recursion (μt)T are the standard way to type recursive process calls (see [Scalas and Yoshida, 2019]). We require recursion to be guarded: For (μt)T, the type T has to contain meaningful action, i.e., T ̸= t′ for all recursion variables t′. The mixed choice type is analogous to the process syntax. For convenience we use the mixed choice modes In and Out in dense syntax blocks. Input type and probabilistic choice type, too, follow the process syntax. P are probabilities, P ∈R with 0 1 and T ′ i ̸= end for all i ∈I. By [S-Σ-1], if there are s[ri] · ∆′ ≤1 s[r] : P j∈Ji Lj for all i ∈I then we can choose T = P j∈J1 Lj + . . . + P j∈Jn and are done. Hence, we fix i and show s[ri] · ∆′ ≤1 s[r] : P j∈Ji Lj. If pend(s[ri] · ∆′) then, by [S-∅], we can choose Ji = ∅and are done. Else the type T ′ i of s[ri] in ∆′ is guarded by a mixed choice that may contain τ, inputs, or outputs. We perform an induction on the structure of T ′ i. Case T ′ i = end: This case violates the assumption T ′ i ̸= end for all i ∈I. Case T ′ i = P j∈J L′ j: Each L′ j is of one of the following cases: Case L′ j = p←q ? l(U).T ′ j: If q /∈∆′, then the input is removed with [S-Out]. Then we conclude with the induction hypothesis and the respective input in front of T. Else if q ∈∆′, then pend(s[ri] · ∆′′), where ∆′′ is obtained from ∆′ by replacing T ′ i with L′ j. Then the respective part of some Lx is empty. By [S-∅], then s[ri] · ∆′ ≤1 s[r] : P j∈Ji Lj. Case L′ j = L k∈K Pk ▶Hk.T ′ k: Each Hk.T ′ k is of one of the following cases: Case τ.T ′ k: The τ can be removed with [S-τ-L]. Then we conclude with the induction hypothesis. Case p→q ! l⟨U⟩.T ′ k: If q /∈∆′, then the output is removed with [S-Out]. Then we conclude with the induction hypothesis and the respective output in front of T. Else we have q = ∆′. By dfree(∆′, ∆) and safe(∆′), then ∆′, ∆can perform a step. If no step of ∆′, ∆involves this output then pend(s[ri] · ∆′′), where ∆′′ is obtained from ∆′ by replacing T ′ i with L′ j. By [S-∅], then s[ri] · ∆′ ≤1 s[r] : P j∈Ji Lj. Otherwise, the matching input is unguarded in ∆′. By [S-Link], we can then conclude by the induction hypothesis. We do not prove a related statement for the opposite direction, because it is much simpler. For example by reflexivity of ≤1 (Proposition 4.4) the following holds trivially. If safe(s[r] : T) then there are r1, . . . , rn with r ⊆S i∈[1..n] ri and T ′ 1, . . . , T ′ n such that {s[ri] : T ′ i}i∈[1..n] ≤1 s[r] : T. Finding a more meaningful statement for "a refinement always exits" needs careful crafting, but it is quite conceivable that an interesting yet general result can be shown. 4.2 Properties of Typed Processes Having introduced safety and deadlock-freedom for types, in this section we will see how these properties are transferable to processes justified by safe and deadlock-free types. 58 4.2 Properties of Typed Processes The main results are Theorem 4.19, subject reduction, and Theorem 4.24, deadlockfreedom. 4.2.1 Subject Reduction This subsection is dedicated to the Subject Reduction Theorem, which states that typed processes can only reduce to typed processes. By extension, any desirable feature enforced by the typing is preserved for all interaction behaviour and holds for all reductions. It is clear to see why this is greatly beneficial and thus subject reduction is a key property across type theory. For its proof, a significant number of smaller lemmas are shown first. Most notably, these include Lemma 4.14, subject congruence, and 4.18, substitution. The eponymous Theorem 4.19 is stated and proven afterwards. Preliminary Lemmas We will now show eight smaller lemmas needed for the proof of Theorem 4.19. We chose to give some properties of types and subtyping here, instead of the previous section, as they are only used in the proofs of this section. The first two results are small statements used in the proof of subject congruence. Lemma 4.12 states that the empty context is safe. Lemma 4.12. safe(∅). Proof. By Definition 4.7, since ∅has no transitions. According to Lemma 4.13, adding additional variable and process variable assignments to the global context Γ does not invalidate a type judgement. Lemma 4.13 (Global Weakening). If Γ ⊢P ▷∆then also Γ, Γ′ ⊢P ▷∆. Proof. Straightforward by induction on the derivation of Γ ⊢P ▷∆, since all assignments added to Γ in the derivation tree are on bound names. We next show subject congruence, an important property ensuring that a typing judgement of a process P also holds for all processes congruent to P. Lemma 4.14 (Subject Congruence). If Γ ⊢P ▷∆and P ≡Q then also Γ ⊢Q ▷∆. Proof. Assume Γ ⊢P ▷∆and P ≡Q. We prove Γ ⊢Q ▷∆by structural induction on the derivation of P ≡Q. Since each typing and subtyping rule by definition respects alpha conversion and the rules that make ≡a congruence, we consider only the rules of Definition 2.2 explicitly. Case R | 0 ≡R: Applying this rule from left to right, then P = R | 0 and Q = R. By inverting the typing rules (and in particular [T-0] and [T-Par] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies that Γ ⊢R ▷∆′ R with ∆′ R ≤1 ∆R, Γ ⊢0 ▷∅with ∅≤1 ∆∅, and ∆R, ∆∅≤1 ∆. By Lemma 4.3, then ∆′ R ≤1 ∆R, ∆∅. By Proposition 4.5, then ∆′ R ≤1 ∆. By [T-Sub], then Γ ⊢R ▷∆′ R implies Γ ⊢Q ▷∆. 59 4 Properties Applying this rule from right to left, then P = R and Q = R | 0. By [T-0] and [T-Par], then Γ ⊢P ▷∆implies Γ ⊢Q ▷∆. Case R1 | R2 ≡R2 | R1: Applying this rule from left to right, then P = R1 | R2 and Q = R2 | R1. By inverting the typing rules (and in particular [T-Par] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢R1 ▷∆′ R1 with ∆′ R1 ≤1 ∆R1, Γ ⊢R2 ▷∆′ R2 with ∆′ R2 ≤1 ∆R2, and ∆R1, ∆R2 ≤1 ∆. By [T-Par] and [T-Sub], then Γ ⊢Q ▷∆. The case of applying this rule from right to left is symmetric. Case (R1 | R2) | R3 ≡R1 | (R2 | R3): Applying this rule from left to right, then P = (R1 | R2) | R3 and Q = R1 | (R2 | R3). By inverting the typing rules (and in particular [T-Par] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢R1 ▷∆′ R1 with ∆′ R1 ≤1 ∆R1, Γ ⊢R2 ▷∆′ R2 with ∆′ R2 ≤1 ∆R2, Γ ⊢R3 ▷∆′ R3 with ∆′ R3 ≤1 ∆R3, and ∆R1, ∆R2, ∆R3 ≤1 ∆. By [T-Par] and [T-Sub], then Γ ⊢Q ▷∆. The case of applying this rule from right to left is similar. Case (νs)0 ≡0: Applying this rule from left to right, then P = (νs)0 and Q = 0. By inverting the typing rules (and in particular [T-Res] and [T-0] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢0 ▷∅with ∅≤1 ∆∅and ∆∅≤1 ∆. By Proposition 4.5, then ∅≤1 ∆. By [T-0] and [T-Sub], then Γ ⊢Q ▷∆. Applying this rule from right to left, then P = 0 and Q = (νs)0. By inverting the typing rules (and in particular [T-0] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies ∅≤1 ∆. By [T-Res] and Lemma 4.12, then Γ ⊢Q ▷∅. By [T-Sub], then Γ ⊢Q ▷∆. Case (νs1)(νs2)R ≡(νs2)(νs1)R: Applying this rule from left to right, have then P = (νs1)(νs2)R and Q = (νs2)(νs1)R. By inverting the typing rules (and in particular [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies safe(∆s1) and Γ ⊢(νs2)R ▷∆′, ∆s1 with ∆′ ≤1 ∆. Then safe(∆s2) and Γ ⊢R ▷∆R, ∆s2 with ∆R ≤1 ∆′, ∆s1. By Lemma 4.2, then ∆R ≤1 ∆′, ∆s1 implies ∆R = ∆′′, ∆′ s1; ∆′′ ≤1 ∆′, and ∆′ s1 ≤1 ∆s1. By [T-Sub] and Corollary 4.6, then Γ ⊢R ▷∆R, ∆s2; ∆R = ∆′′, ∆′ s1, and ∆′ s1 ≤1 ∆s1 imply Γ ⊢R ▷∆′′, ∆s2, ∆s1. By [T-Res], then safe(∆s1) and Γ ⊢R ▷∆′′, ∆s2, ∆s1 imply Γ ⊢(νs1)R ▷∆′′, ∆s2. By [T-Res], then safe(∆s2) and Γ ⊢(νs1)R ▷∆′′, ∆s2 imply Γ ⊢Q ▷∆′′. By [T-Sub] and Proposition 4.5, then Γ ⊢Q ▷∆′′; ∆′′ ≤1 ∆′, and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆. The case of applying this rule from right to left is similar. Case R1 | (νs)R2 ≡(νs) (R1 | R2) if s /∈fs(R1): Applying this rule from left to right, then P = R1 | ressR2, Q = (νs) (R1 | R2), and s /∈fs(R1). By inverting the typing rules (and in particular [T-Par] and [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢R1 ▷∆′ 1 with ∆′ 1 ≤1 ∆1, safe(∆s), Γ ⊢ R2 ▷∆′ 2, ∆s with ∆′ 2 ≤1 ∆2, and ∆= ∆1, ∆2, where we ensure by applying alphaconversion that s is fresh in ∆1 and ∆′ 1. By Lemma 4.3, then ∆′ 1 ≤1 ∆1; ∆′ 2 ≤1 60 4.2 Properties of Typed Processes ∆2, and ∆= ∆1, ∆2 imply ∆′ 1, ∆′ 2 ≤1 ∆. By [T-Par], then Γ ⊢R1 ▷∆′ 1 and Γ ⊢R2 ▷∆′ 2, ∆s imply Γ ⊢R1 | R2 ▷∆′ 1, ∆′ 2, ∆s. By [T-Res], then safe(∆s) and Γ ⊢R1 | R2 ▷∆′ 1, ∆′ 2, ∆s imply Γ ⊢Q ▷∆′ 1, ∆′ 2. By [T-Sub], then Γ ⊢Q ▷∆′ 1, ∆′ 2 and ∆′ 1, ∆′ 2 ≤1 ∆imply Γ ⊢Q ▷∆. Applying this rule from right to left is similar. Case def D in (νs)R ≡(νs)def D in R if s /∈fs(D): Then D = {Xi(exi, ci,1,.., ci,ni) = Ri}i∈I. Applying this rule from left to right, then P = def D in (νs)R, Q = (νs)def D in R, and s /∈fs(D). By inverting the typing rules (and in particular [T-Def] and [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I, Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢(νs)R ▷∆′ with ∆′ ≤1 ∆, safe(∆s), and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R ▷∆′ R, ∆s with ∆′ R ≤1 ∆′, where we ensure by applying alpha-conversion that s is fresh in D. By [T-Def], then for all i ∈I Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni, and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R ▷∆′ R, ∆s imply Γ ⊢def D in R ▷∆′ R, ∆s. By [T-Res], then safe(∆s) and Γ ⊢def D in R ▷∆′ R, ∆s imply Γ ⊢Q ▷∆′ R. By [T-Sub] and Proposition 4.5, then Γ ⊢Q ▷∆′ R; ∆′ R ≤1 ∆′, and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆. Applying this rule from right to left, then P = (νs)def D in R and Q = def D in (νs)R, where s /∈fs(D). By inverting the typing rules (and in particular [T-Def] and [T-Res] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆ implies safe(∆s), Γ ⊢def D in R ▷∆′, ∆s with ∆′ ≤1 ∆, and for all i ∈I Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni, and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R ▷∆R with ∆R ≤1 ∆′, ∆s, where we ensure by applying alpha-conversion that s is fresh in D. By [T-Sub], then ∆R ≤1 ∆′, ∆s and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R ▷∆R imply Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢ R ▷∆′, ∆s. By [T-Res], then safe(∆s) and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I⊢R ▷∆′, ∆s imply Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢(νs)R ▷∆′. By [T-Def], then Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢Ri ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢(νs)R ▷∆′ imply Γ ⊢Q ▷∆′. By [T-Sub], then Γ ⊢Q ▷∆′ and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆. Case (def D in R1) | R2 ≡def D in (R1 | R2) if dpv(D) ∩fpv(Q) = ∅: Then: D = {Xi(exi, ci,1, . . . , ci,ni) = R′ i}i∈I 61 4 Properties Applying this rule from left to right, we then get P = (def D in R1) | R2, Q = def D in (R1 | R2), and dpv(D) ∩fpv(Q) = ∅. By inverting the typing rules (in particular [T-Def] and [T-Par] under arbitrary appl. of [T-Sub]), Γ⊢P ▷∆implies Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢R′ i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I, Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R1 ▷∆′ 1 with ∆′ 1 ≤1 ∆1, Γ ⊢R2 ▷∆′ 2 with ∆′ 2 ≤1 ∆2, and ∆1, ∆2 ≤1 ∆. By Lemma 4.3, then ∆′ 1 ≤1 ∆1; ∆′ 2 ≤1 ∆2, and ∆= ∆1, ∆2 imply ∆′ 1, ∆′ 2 ≤1 ∆. By Lemma 4.13, then Γ ⊢R2 ▷∆′ 2 implies Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R2 ▷∆′ 2. By rule [T-Par], we then have that Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R1 ▷∆′ 1 and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢ R2 ▷∆′ 2 imply Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R1 | R2 ▷∆′ 1, ∆′ 2. By [T-Def], then Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢R′ i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢R1 | R2 ▷∆′ 1, ∆′ 2 imply Γ ⊢ Q ▷∆′ 1, ∆′ 2. By [T-Sub], then Γ ⊢Q ▷∆′ 1, ∆′ 2 and ∆′ 1, ∆′ 2 ≤1 ∆imply Γ ⊢Q ▷∆. Applying this rule from right to left is similar. Case def D in (def D′ in R) ≡def D ∪D′ in R if dpv(D) ∩dpv(D′) = ∅: Then: D = {Xi(exi, ci,1,.., ci,ni) = R′ i}i∈I D′ = Yj eyj, dj,1,.., dj,nj = R′′ j j∈J Applying this rule from left to right, then P = def D in (def D′ in R), Q = def D ∪D′ in R, and dpv(D)∩dpv(D′) = ∅. By inverting the typing rules (and in particular [T-Def] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ,X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢R′ i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni Γ,X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi, Y1 : D f U ′ 1, T ′ 1,1,..T ′ 1,n1 E ,.., Yj : D f U ′ j, T ′ j,1,..T ′ j,nj E , eyj : f U ′ j ⊢R′′ j ▷c′ j,1 : T ′ j,1,.., c′ j,nj : T ′ j,nj for all i ∈I and all j ∈J, Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢def D′ in R ▷∆′ with ∆′ ≤1 ∆, and Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , Y1 : D f U ′ 1, T ′ 1,1,..T ′ 1,n1 E ,.., Yj : D f U ′ j, T ′ j,1,..T ′ j,nj E ⊢R ▷∆′′ with ∆′′ ≤1 ∆′. By [T-Def], then Γ ⊢Q ▷∆′ 2. By [T-Sub] and Proposition 4.5, then Γ ⊢Q ▷∆′′; ∆′′ ≤1 ∆′, and ∆′ ≤1 ∆imply Γ ⊢Q ▷∆. Applying this rule from right to left is similar. 62 4.2 Properties of Typed Processes Lemmas 4.15, and 4.17 give a certain notion of splitting and composing the transitions of local contexts. Lemma 4.16 states a useful consequence of the subtyping properties of Theorem 4.10 on supertypes being able to "follow" the transitions of their subtypes. Lemma 4.15. If ∆1, ∆2 7→∗ P ∆and s(∆1) ∩s(∆2) = ∅then there are ∆′ 1, ∆′ 2 such that ∆1 7→∗ P1 ∆′ 1; ∆2 7→∗ P2 ∆′ 2, P = P1P2, and ∆= ∆′ 1, ∆′ 2. Proof. By Definition 2.10, local contexts of different sessions cannot interact. Then every sequence of steps ∆1, ∆2 7→∗ P ∆can be split such that ∆1 7→∗ P1 ∆′ 1; ∆2 7→∗ P2 ∆′ 2, P = P1P2, and ∆= ∆′ 1, ∆′ 2 Lemma 4.16. If ∆1 ≤1 ∆2, safe(∆2), and ∆1 7→∗ P ∆′ 1 then there is some ∆′ 2 such that ∆2 7→∗ P ∆′ 2 and ∆′ 1 ≤1 ∆′ 2. Proof. Assume ∆1 ≤1 ∆2, safe(∆2), and ∆1 7→∗ P ∆′ 1. By Theorem 4.10.1(b), then there are ∆′ 2, P2, P3 such that ∆2 7→∗ PP2 ∆′ 2; ∆′ 1 ≤P3 ∆′ 2, and 1 = P2P3. Because of 1 = P2P3, then P2 = 1 and P3 = 1. Then ∆2 7→∗ P ∆′ 2 and ∆′ 1 ≤1 ∆′ 2. Lemma 4.17. If ∆1 7→∗ P1 ∆′ 1; ∆2 7→∗ P2 ∆′ 2, safe(∆), and ∆1, ∆2 ≤1 ∆then there is some ∆′ such that ∆7→∗ P1P2 ∆′ and ∆′ 1, ∆′ 2 ≤1 ∆′. Proof. Assume ∆1 7→∗ P1 ∆′ 1; ∆2 7→∗ P2 ∆′ 2, safe(∆), and ∆1, ∆2 ≤1 ∆. Then ∆1, ∆2 7→∗ P1P2 ∆′ 1, ∆′ 2. By Lemma 4.16, then there is some ∆′ such that ∆7→∗ P1P2 ∆′ and ∆′ 1, ∆′ 2 ≤1 ∆′. The final lemma needed for showing subject reduction establishes the standard substitution property. Lemma 4.18 (Substitution). If Γ ⊢v : U and Γ, x : U ⊢P ▷∆, then Γ ⊢P[x 7→v] ▷∆. Proof. Assume Γ ⊢v : U and Γ, x : U ⊢P ▷∆. We prove Γ ⊢P[x 7→v] ▷∆by structural induction on the derivation of Γ, x : U ⊢P ▷∆. Case of [T-0]: In this case P = 0 and ∆= ∅. Then P[x 7→v] = 0. By [T-0], then Γ ⊢P[x 7→v] ▷∆. Case of [T-Res]: In this case P = (νs)Q and Γ, x : U ⊢Q ▷∆, {s[ri] : Ti}i∈I with safe {s[ri] : Ti}i∈I , where we use alpha-conversion to ensure that s ̸= x. Then P[x 7→v] = (νs)(Q[x 7→v]). By the induction hypothesis, then Γ ⊢v : U and Γ, x : U ⊢Q ▷∆, {s[ri] : Ti}i∈I imply Γ ⊢Q[x 7→v] ▷∆, {s[ri] : Ti}i∈I. By [T-Res], then safe {s[ri] : Ti}i∈I and Γ⊢Q[x 7→v] ▷∆, {s[ri] : Ti}i∈I imply Γ⊢P[x 7→v] ▷∆. Case of [T-If]: In this case P = if vb then Q else R, where Γ, x : U ⊢vb : bool, and Γ, x : U ⊢Q ▷∆and Γ, x : U ⊢R ▷∆. Then we have the substitution P[x 7→v] = if (vb[x 7→v]) then (Q[x 7→v]) else (R[x 7→v]). By Definition 2.9, Γ, x : U ⊢ vb : bool implies one of the following three cases: vb = y, x ̸= y, and y : bool ∈Γ, x : U: Then y : bool ∈Γ. By [T-Base], then Γ ⊢ y : bool. Moreover, vb[x 7→v] = vb. Hence, Γ ⊢vb[x 7→v] : bool. 63 4 Properties vb = x: Then Γ, x : U ⊢vb : bool implies U = bool. Moreover, vb[x 7→v] = v. By Γ ⊢v : U, then Γ ⊢vb[x 7→v] : bool. vb ∈{⊤, ⊥}: Then vb[x 7→v] = vb. By [Bool], then Γ ⊢vb[x 7→v] : bool. In all three cases we have Γ ⊢vb[x 7→v] : bool. By the induction hypothesis, Γ ⊢v : U and Γ, x : U ⊢Q ▷∆imply Γ ⊢Q[x 7→v] ▷∆, and Γ ⊢v : U and Γ, x : U ⊢R ▷∆imply Γ ⊢R[x 7→v] ▷∆. By [T-If], then Γ ⊢vb[x 7→v] : bool, Γ ⊢Q[x 7→v] ▷∆, and Γ ⊢R[x 7→v] ▷∆imply Γ ⊢P[x 7→v] ▷∆. Case of [T-Par]: In this case P = Q | R, ∆= ∆1, ∆2, Γ, x : U ⊢Q ▷∆1, and Γ, x : U ⊢ R ▷∆2. Then P[x 7→v] = (Q[x 7→v]) | (R[x 7→v]). By the induction hypothesis, then Γ ⊢v : U and Γ, x : U ⊢Q ▷∆1 imply Γ ⊢Q[x 7→v] ▷∆1, and Γ ⊢v : U and Γ, x : U ⊢R ▷∆2 imply Γ ⊢R[x 7→v] ▷∆2. By [T-Par], then Γ ⊢Q[x 7→v] ▷∆1 and Γ ⊢R[x 7→v] ▷∆2 imply Γ ⊢P[x 7→v] ▷∆. Case of [T-Def]: In this case we have P = def X(ey; c1,.., cn) def = Q in R with Γ, X : D f U ′, T1,.., Tn E , ey : f U ′, x : U ⊢Q ▷c1 : T1,.., cn : Tn, and Γ, X : D f U ′, T1,.., Tn E , x : U ⊢R ▷∆, where we use alpha-conversion to ensure that x /∈ey. Then P[x 7→v] = def X(ey; c1,.., cn) def = (Q[x 7→v]) in (R[x 7→v]). By the induction hypothesis, then Γ ⊢v : U and Γ, X : D f U ′, T1,.., Tn E , ey : f U ′, x : U ⊢Q ▷c1 : T1,.., cn : Tn imply Γ, X : D f U ′, T1,.., Tn E , ey : f U ′ ⊢Q[x 7→v] ▷c1 : T1,.., cn : Tn. Moreover Γ ⊢v : U and Γ, X : D f U ′, T1,.., Tn E , x : U ⊢R ▷∆imply Γ, X : D f U ′, T1,.., Tn E ⊢R[x 7→v] ▷∆. By [T-Def], then have Γ ⊢P[x 7→v] ▷∆. Case of [T-Var]: In this case P = X D ev′, c1,.., cn E , X : D f U ′, T1,.., Tn E ∈Γ, x : U, ∆= c1 : T1,.., cn : Tn, and Γ, x : U ⊢ev′ : f U ′. Then P[x 7→v] = X D ev′[x 7→v] , c1,.., cn E . By Definition 2.9, Γ, x : U ⊢ev′ : f U ′, i.e., Γ, x : U ⊢v′ i : U ′ i for all v′ i ∈ev′, implies for every v′ i one of the following four cases: Case v′ i = y, x ̸= y, and y : U ′ i ∈Γ, x : U: Then y : U ′ i ∈Γ. By [T-Base], then Γ ⊢ y : U ′ i. Then, v′ i[x 7→v] = v′ i. Hence, Γ ⊢v′ i[x 7→v] : U ′ i. Case v′ i = x: Then Γ, x : U ⊢v′ i : bool implies U = U ′ i. Moreover, v′ i[x 7→v] = v. By Γ ⊢v : U, then Γ ⊢v′ i[x 7→v] : U ′ i. Case v′ i ∈N+: Then U ′ i = nat and v′ i[x 7→v] = v′ i. By [Nat], then Γ ⊢v′ i[x 7→v] : U ′ i. Case v′ i ∈{⊤, ⊥}: Then U ′ i = bool and v′ i[x 7→v] = v′ i. By [Bool], then Γ ⊢ v′ i[x 7→v] : U ′ i. In all four cases we have Γ ⊢v′ i[x 7→v] : U ′ i and, thus, Γ ⊢ ev′[x 7→v] : f U ′. By [T-Var], then Γ ⊢P[x 7→v] ▷∆. 64 4.2 Properties of Typed Processes Case of [T-Sum]: In this case P = c P i∈I Mi, ∆= ∆0, c : P i∈I Li, and for all i ∈I we have Γ, x : U ⊢c ▷Mi ▷∆0, c : Li. Then P[x 7→v] = c P i∈I (Mi[x 7→v]). By the induction hypothesis, Γ ⊢v : U and Γ, x : U ⊢c ▷Mi ▷∆0, c : Li imply Γ ⊢ c ▷(Mi[x 7→v]) ▷∆0, c : Li for all i ∈I. BY [T-Sum], then Γ ⊢P[x 7→v] ▷∆. Case of [T-Sub]: In this case Γ, x : U ⊢P ▷∆′ and ∆′ ≤1 ∆. By the induction hypothesis, then Γ ⊢v : U and Γ, x : U ⊢P ▷∆′ imply Γ ⊢P[x 7→v] ▷∆′. By [T-Sub], then Γ ⊢P[x 7→v] ▷∆′ and ∆′ ≤1 ∆imply Γ ⊢P[x 7→v] ▷∆. Case of [T-τ]: In this case P = c ▷P ▶τ.Q with ∆= ∆0, c : P ▶τ.T, where Γ, x : U ⊢ Q ▷∆0, c : T. Then we have P[x 7→v] = c ▷P ▶τ.(Q[x 7→v]). By the induction hypothesis, then the judgements Γ ⊢v : U and Γ, x : U ⊢Q ▷∆0, c : T imply that Γ ⊢Q[x 7→v] ▷∆0, c : T. By [T-τ], then Γ ⊢P[x 7→v] ▷∆. Case of [T-Inp]: In this case P = c ▷p←q ? l(y).Q, ∆= ∆0, c : p←q ? l(Uy).T, p ∈c, and Γ, y : Uy, x : U ⊢Q ▷∆0, c : T, where we use alpha-conversion to ensure that x ̸= y. Then, since x ̸= y, P[x 7→v] = c ▷p←q ? l(y).(Q[x 7→v]). By the induction hypothesis, Γ ⊢v : U and Γ, y : Uy, x : U ⊢Q ▷∆0, c : T imply Γ, y : Uy ⊢ Q[x 7→v] ▷∆0, c : T. By [T-Inp], then p ∈c and Γ, y : Uy ⊢Q[x 7→v] ▷∆0, c : T imply Γ ⊢P[x 7→v] ▷∆. Case of [T-Out]: In this case P = c ▷P ▶p→q ! l⟨v′⟩.Q, ∆= ∆0, c : P ▶p→q ! l⟨U ′⟩.T, p ∈c, Γ, x : U ⊢v′ : U ′, and Γ, x : U ⊢Q ▷∆, c : T. Then have the substitution P[x 7→v] = c ▷P ▶p→q ! l⟨(v′[x 7→v])⟩.(Q[x 7→v]). By Definition 2.9, Γ, x : U ⊢ v′ : U ′ implies one of the following four cases: Case v′ = y, x ̸= y, and y : U ′ ∈Γ, x : U: Then y : U ′ ∈Γ. By [T-Base], then Γ ⊢ y : U ′. Then, v′[x 7→v] = v′. Hence, Γ ⊢v′[x 7→v] : U ′. Case v′ = x: Then Γ, x : U ⊢v′ : bool implies U = U ′. Moreover, v′[x 7→v] = v. By Γ ⊢v : U, then Γ ⊢v′[x 7→v] : U ′. Case v′ ∈N+: Then U ′ = nat and v′[x 7→v] = v′. By [Nat], then Γ ⊢v′[x 7→v] : U ′. Case v′ ∈{⊤, ⊥}: Then U ′ = bool and v′[x 7→v] = v′. By [Bool], then Γ ⊢ v′[x 7→v] : U ′. In all four cases we have Γ ⊢v′[x 7→v] : U ′. By the induction hypothesis, then Γ ⊢v : U and Γ, x : U ⊢Q ▷∆0, c : T imply Γ ⊢Q[x 7→v] ▷∆0, c : T. By [T-Out], then p ∈c, Γ ⊢v′[x 7→v] : U ′, and Γ ⊢Q[x 7→v] ▷∆0, c : T imply Γ ⊢P[x 7→v] ▷∆. Case of [T-Prob]: In this case P = c ▷L i∈I Pi ▶Ni.Pi, ∆= ∆0, c : L i∈I Pi ▶Hi.Ti, and for all i ∈I we have Γ, x : U ⊢c ▷Pi ▶Ni.Pi ▷∆0, c : Pi ▶Hi.Ti. Then P[x 7→v] = c ▷L i∈I Pi ▶((Ni.Pi)[x 7→v]). By the induction hypothesis, then judgements Γ ⊢ v : U and Γ, x : U ⊢c ▷Ni.Pi ▷∆0, c : Pi ▶Hi.Ti imply Γ ⊢c ▷Pi ▶((Ni.Pi)[x 7→v]) ▷ ∆0, c : Pi ▶Hi.Ti for all i ∈I. By [T-Prob], then Γ ⊢P[x 7→v] ▷∆. 65 4 Properties Theorem Statement and Proof With the properties we have shown above, the proof of subject reduction is a straightforward induction on the derivation of P -→P P ′. In each case we construct the typing for P ′ from the information gained by Γ ⊢P ▷∆. Theorem 4.19 (Subject Reduction). If Γ ⊢P ▷∆, safe(∆), and P -→P P ′ then there is some ∆′ such that ∆7→∗ P ∆′ and Γ ⊢P ′ ▷∆′. Proof. Assume Γ ⊢P ▷∆, safe(∆), and P -→P P ′. We prove ∆7→∗∆′ and Γ ⊢P ′ ▷∆′ by structural induction on the derivation of P -→P P ′. Case of [R-Cond-⊤]: In this case P = if ⊤then Q else R and P ′ = Q. By inverting the typing rules (and in particular [T-If] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢P ′ ▷∆Q with ∆Q ≤1 ∆. By [T-Sub], then Γ ⊢P ′ ▷∆Q and ∆Q ≤1 ∆imply Γ ⊢P ′ ▷∆and ∆7→∗∆. Case of [R-Cond-⊥]: This case is similar to the case above. Case of [R-Def-0]: In this case we have P = def D in 0 and P ′ = 0. Let D = {Xi(exi, ci,1, . . . , ci,ni) = Ri}i∈I. By inverting the typing rules (in particular [T-Def] and [T-0] under arbitrary appl. of [T-Sub]), then Γ, n Xi : D eUi, Ti,1, . . . , Ti,ni Eo i∈I⊢ 0 ▷∅with ∅≤1 ∆. By [T-0] and [T-Sub], then Γ ⊢P ′ ▷∆and ∆7→∗∆. Case of [R-τ]: In this case P = c ▷(P ▶τ.Q ⊕N) + M and P ′ = Q. By inverting the typing rules (and in particular [T-Sum], [T-Prob] and [T-τ] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢P ′ ▷∆′ with ∆′ = ∆Q, c : T and ∆Q, c : (P ▶τ.T ⊕N) + M ≤1 ∆. By [TR-τ], then ∆7→∗∆′. Case of [R-Com]: In this case we have processes P = s[r1] ▷p←q ? l(x).Q + MQ | s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) + MR and P ′ = Q[x 7→v] | R. By inverting the typing rules (and in particular [T-Par], [T-Sum], [T-Inp], [T-Prob], and [T-Out] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ ⊢s[r1] ▷p←q ? l(x).Q + MQ ▷∆′ 1 Γ ⊢s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) + MR ▷∆′ 2 with ∆′ 1 ≤1 ∆1; ∆′ 2 ≤1 ∆2, and ∆1, ∆2 ≤1 ∆, Γ ⊢s[r1] ▷p←q ? l(x).Q ▷∆T 1 , s[r1] : p←q ? l(U1).T1 Γ, x : U1 ⊢Q ▷∆T 1 , s[r1] : T1 with ∆T 1 , s[r1] : p←q ? l(U1).T1 + M′ Q ≤1 ∆′ 1, p ∈er1, Γ ⊢s[r2] ▷(P ▶q→p ! l⟨v⟩.R ⊕N) ▷∆T 2 , s[r2] : P ▶q→p ! l⟨U2⟩.T2 ⊕N′ 66 4.2 Properties of Typed Processes with ∆T 2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′ R ≤1 ∆′ 2, Γ ⊢s[r2] ▷P ▶q→p ! l⟨v⟩.R ▷∆T 2 , s[r2] : P ▶q→p ! l⟨U2⟩.T2 Γ ⊢R ▷∆T 2 , s[r2] : T2 with q ∈er2 and Γ ⊢v : U2. By Definition 4.7 and Theorem 4.10.1, safe(∆) and the above subtyping relations imply U1 = U2. By Lemma 4.18, then Γ ⊢v : U2 and Γ, x : U1 ⊢Q ▷∆T 1 , s[r1] : T1 imply Γ ⊢Q[x 7→v] ▷∆T 1 , s[r1] : T1. By Proposition 4.5 and Lemma 4.3, then: ∆T 1 , s[r1] : p←q ? l(U1).T1 + M′ Q, ∆T 2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′ R ≤1 ∆ By Definition 2.10, then: ∆T 1 , s[r1] : p←q ? l(U1).T1 + M′ Q, ∆T 2 , s[r2] : (P ▶q→p ! l⟨U2⟩.T2 ⊕N′) + M′ R 7→ ∆T 1 , s[r1] : T1, ∆T 2 , s[r2] : T2 By Lemma 4.16, then there is some ∆′ such that ∆7→∗∆′ and ∆T 1 , s[r1] : T1, ∆T 2 , s[r2] : T2 ≤1 ∆′. By [T-Par] and [T-Sub], then Γ ⊢Q[x 7→v] ▷∆T 1 , s[r1] : T1 and Γ ⊢R ▷∆T 2 , s[r2] : T2 imply Γ ⊢P ′ ▷∆′. Case of [R-Def]: Have P = def D in (X⟨ev, ec⟩| Q) and P ′ = def D in (R[ex 7→ev] | Q), where X(ex; ec) def = R ∈D with ec = c1,.., cn. By applying structural congruence and by using the argumentation of case [R-Struct] in this proof, we can ensure that X(ex; ec) is the last/innermost declaration in P. By inverting the typing rules (and in particular [T-Def], [T-Var], and [T-Par] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ, Γ′, X : D eU, T1,.., Tn E , ex : eU ⊢R ▷c1 : T1,.., cn : Tn, Γ, Γ′ ⊢ ev : eU, Γ, Γ′, X : D eU, T1,.., Tn E ⊢X⟨ev, ec⟩▷c1 : T1,.., cn : Tn, and Γ, Γ′ ⊢Q ▷∆Q with c1 : T1,.., cn : Tn, ∆Q ≤1 ∆, where Γ′ lists exactly one assignment for each process variable contained in D besides X and nothing else and for each such process variable we have the corresponding type judgement for the declared process similar to Γ, Γ′, X : D eU, T1,.., Tn E , ex : eU ⊢R ▷c1 : T1,.., cn : Tn. By Lemma 4.18, then Γ, Γ′ ⊢ev : eU and Γ, Γ′, X : D eU, T1,.., Tn E , ex : eU ⊢R ▷c1 : T1,.., cn : Tn imply Γ, Γ′, X : D eU, T1,.., Tn E ⊢R[ex 7→ev] ▷c1 : T1,.., cn : Tn. Then, by [T-Par], the judgements Γ, Γ′ ⊢Q ▷∆Q and Γ, Γ′, X : D eU, T1,.., Tn E ⊢R[ex 7→ev] ▷c1 : T1,.., cn : Tn imply Γ, Γ′, X : D eU, T1,.., Tn E ⊢R[ex 7→ev] | Q ▷c1 : T1,.., cn : Tn, ∆Q. Then, by several applications of [T-Def] using the type judgements for the process variables besides X, obtain Γ ⊢P ′ ▷c1 : T1,.., cn : Tn, ∆Q. By [T-Sub], then c1 : T1,.., cn : Tn, ∆Q ≤1 ∆and Γ ⊢P ′ ▷c1 : T1,.., cn : Tn, ∆Q imply Γ ⊢P ′ ▷∆and ∆7→∗∆. Case of [R-Par]: In this case P = Q | R, P ′ = Q′ | R, and Q -→P Q′. By inverting the typing rules (and in particular [T-Par] under arbitrary applications of [TSub]), then Γ ⊢P ▷∆implies Γ ⊢Q ▷∆′′ Q with ∆′′ Q ≤1 ∆Q, Γ ⊢R ▷∆′ R with 67 4 Properties ∆′ R ≤1 ∆R, and ∆Q, ∆R ≤1 ∆. By [T-Sub], then Γ ⊢Q ▷∆′′ Q and ∆′′ Q ≤1 ∆Q imply Γ ⊢Q ▷∆Q. By [T-Sub], then Γ ⊢R ▷∆′ R and ∆′ R ≤1 ∆R imply Γ ⊢R ▷∆R. By the induction hypothesis, Γ ⊢Q ▷∆Q and Q -→P Q′ imply ∆Q 7→∗∆′ Q and Γ ⊢Q′ ▷∆′ Q. By Lemma 4.17, then ∆Q 7→∗∆′ Q; ∆R 7→∗∆R, and ∆Q, ∆R ≤1 ∆ imply ∆7→∗∆′ and ∆′ Q, ∆R ≤1 ∆′. By [T-Par], then Γ ⊢Q′ ▷∆′ Q and Γ ⊢R ▷∆R imply Γ ⊢P ′ ▷∆′ Q, ∆R. By [T-Sub], then Γ ⊢P ′ ▷∆′ Q, ∆R and ∆′ Q, ∆R ≤1 ∆′ imply Γ ⊢P ′ ▷∆′. Case of [R-Struct]: In this case P ≡Q, Q -→P Q′, and Q′ ≡P ′. By Lemma 4.14, Γ ⊢P ▷∆and P ≡Q imply Γ ⊢Q ▷∆. By the induction hypothesis, then Γ ⊢Q ▷∆and Q -→P Q′ imply ∆7→∗∆′ and Γ ⊢Q′ ▷∆′. Case of [R-Res]: In this case P = (νs)Q, P ′ = (νs)Q′, and Q -→P Q′. By inverting the typing rules (and in particular [T-Res] under arbitrary applications of [TSub]), then Γ ⊢P ▷∆implies safe(∆s) and Γ ⊢Q ▷∆Q with ∆Q ≤1 ∆, ∆s and s(∆) ∩s(∆s) = ∅. By the induction hypothesis, then Γ ⊢Q ▷∆Q and Q -→P Q′ imply ∆Q 7→∗∆′ Q and Γ ⊢Q′ ▷∆′ Q. By Lemma 4.16, then ∆Q ≤1 ∆, ∆s and ∆Q 7→∗∆′ Q imply ∆, ∆s 7→∗∆′′ and ∆′ Q ≤1 ∆′′. By Lemma 4.15, then ∆, ∆s 7→∗∆′′ and s(∆) ∩s(∆s) = ∅imply ∆7→∗∆′; ∆s 7→∗∆′ s, and ∆′′ = ∆′, ∆′ s. By Lemma 4.8, then ∆s 7→∗∆′ s and safe(∆s) imply safe(∆′ s). By [T-Sub], then Γ ⊢Q′ ▷∆′ Q; ∆′ Q ≤1 ∆′′, and ∆′′ = ∆′, ∆′ s imply Γ ⊢Q′ ▷∆′, ∆′ s. By [T-Res], then safe(∆′ s) and Γ ⊢Q′ ▷∆′, ∆′ s imply Γ ⊢P ′ ▷∆′. Case of [R-Def-In]: In this case P = def D in Q, P ′ = def D in Q′, and Q′ -→P Q′. Let: D = {Xi(exi, ci,1,.., ci,ni) = Ri}i∈I By inverting the typing rules (and in particular [T-Def] under arbitrary applications of [T-Sub]), then Γ ⊢P ▷∆implies Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢R′ i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢Q ▷∆Q with ∆Q ≤1 ∆. By the induction hypothesis, then Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢Q ▷∆Q and Q -→P Q′ imply Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢Q′ ▷∆′ Q and ∆Q 7→∗∆′ Q. By [T-Def], then Γ, X1 : D f U1, T1,1,..T1,n1 E ,.., Xi : D eUi, Ti,1,..Ti,ni E , exi : eUi ⊢R′ i ▷ci,1 : Ti,1,.., ci,ni : Ti,ni for all i ∈I and Γ, n Xi : D eUi, Ti,1,.., Ti,ni Eo i∈I ⊢Q′ ▷∆′ Q imply Γ ⊢P ′ ▷∆′ Q. By Lemma 4.16, then ∆Q ≤1 ∆and ∆Q 7→∗∆′ Q imply ∆7→∗∆′ and ∆′ Q ≤1 ∆′. By [T-Sub], then Γ ⊢P ′ ▷∆′ Q and ∆′ Q ≤1 ∆′ imply Γ ⊢P ′ ▷∆′. 68 4.2 Properties of Typed Processes 4.2.2 Error- and Deadlock-Freedom From the Subject Reduction Theorem, two crucial results follow rather naturally, errorfreedom and deadlock-freedom. Errors are essentially the opposite of safety and therefore by having a safe typing context, we seek to infer processes to be error-free when typed by it. The definition of error processes is similar to the definition of session errors in [Peters and Yoshida, 2024]. In systems with mixed choice, instead of considering all pairs of parallel processes (for non-MC systems, see [Scalas and Yoshida, 2019]), the error definition must consider all available input-output pairs. Thus, we define an error process to be a process in which any available output with an equally available dual input does not have an input for which the labels match. Definition 4.20 (Error Process). A process P has a communication error if P ≡s[r1] ▷(p→q ! l⟨U⟩.Q ⊕N) + M | s[r2] ▷q←p ? lq(Uq).Qq + Mq | P ′ Where l ̸= lq and for all M q i = q←p ? li(Ui).Qi ∈Mq, the labels do not match, i.e., l ̸= li. A process P has a value error if P ≡if v then P else Q | P ′ v /∈{⊤, ⊥} P is an error process if it has either a communication error or value error. From Theorem 4.19 and the fact that error processes are untypable by a safe context it follows that: Corollary 4.21 (Error-Freedom). If Γ ⊢P ▷∆with safe(∆) and P =⇒P P ′, then P ′ is not an error process. We now show the Deadlock-Freedom Theorem, stating that a process typed by a deadlock-free and safe local context, is deadlock-free. Similar to [Scalas and Yoshida, 2019], we require two additional side conditions on the shape of these processes. The first prohibits restriction within the process and the second enforces each participant in a session to be typed by exactly one corresponding type. While our system does not inherently impose these conditions on all processes, some other formalism, like [Peters and Yoshida, 2024], do. Thus, we consider these conditions quite natural and reasonable (for results on interleaved, interfering sessions, see [Coppo et al., 2016]). Indeed, without these conditions, deadlock-freedom for a local type does not imply deadlock-freedom for processes typed by it. Let us illustrate why this is with the following example. Example 13 (Deadlock Freedom). Consider the following deadlock-free local context ∆, which we will use to type two different processes, each exemplifying why one of the conditions is needed. ∆= s[a] : a←b ? l(), s[b] : b→a ! l⟨⟩ Firstly, take process P1 which can be justified by it, i.e., ⊢P1 ▷∆. P1 = (νs′) (s[a] ▷a→b ! l⟨⟩| s[b] ▷b→a ! l⟨⟩| s′[x] ▷x→y ! l⟨⟩) 69 4 Properties P1 will deadlock after one step, despite ∆being deadlock-free, as restricting a session within a process "removes" the types used for that session from the context. For the second condition, take the process P2 for which also holds ⊢P2 ▷∆. P2 = s[a] ▷a←b ? l().s[b] ▷b→a ! l⟨⟩ Clearly, P2 is deadlocked, too. ⃝ We now formalize these conditions. Definition 4.22 (Canonical). Assume ⊢P ▷∆, then P is canonical for ∆iff 1. there is no occurrence of restriction, (νs′)P ′, in P, and 2. P ≡ k∈[1..n]Pk and ∆= c1 : T1, . . . , cn : Tn where for all k ∈[1..n] have ⊢Pk ▷ck : Tk and for all subterms of the form c P i∈I Mi occurring in Pk, have c = ck. To show deadlock-freedom, we first prove an intermediary lemma stating that if a canonical process has no reductions, then its corresponding local context has no reductions either. We will use the shorthand s : τ --→ ∗ 1 to mean a reflexive transitive closure of the labelled transition s : τ --→P, i.e., a reduction sequence of 1 ▶τ actions. Lemma 4.23. If P ↛, ⊢P ▷∆, and P canonical for ∆then ∆ s : τ --→ ∗ 1 ∆′ with ⊢P ▷∆′, and ∆′ ↛. Proof. We will first show that given P ↛, ⊢P ▷∆, and P canonical for ∆, the transition sequence ∆ s : τ --→ ∗ 1 ∆′ implies ⊢P ▷∆′. Assume that there is at least one step in said transition sequence, as otherwise the statement holds trivially by reflexivity. As P canonical for ∆we have that P ≡ k∈[1..n]Pk and ∆= c1 : T1, . . . , cn : Tn where for all k ∈[1..n] have ⊢Pk ▷ck : Tk and for all subterms of the form c P i∈I Mi occurring in Pk, have c = ck. W.l.o.g. let us now fix a typed channel ci : Ti from ∆which can perform said sequence of at least one internal action, i.e., ci : Ti s : τ --→ ∗ 1 and ci : Ti s : τ --→1. As P ↛, in particular also Pi ↛. In other words Pi has no unguarded internal action 1 ▶τ. Therefore, in the typing derivation of ⊢Pi ▷ci : Ti, there occurs an application of [T-Sub] such that ⊢Pi ▷ci : T ′ i with ci : T ′ i ≤1 ci : Ti and T ′ i has no unguarded internal action 1 ▶τ. By Corollary 4.6 and the subtyping rules in Definition 3.5, ci : T ′ i ≤1 ci : Ti where ci : Ti s : τ --→1 and ci : T ′ i ↛implies that [S-τ-R] is the only applicable rule. Thus, ci : Ti s : τ --→ ∗ 1 ci : T ′ i. Let ∆′ be obtained from ∆by replacing ci : Ti with ci : T ′ i. Then indeed ∆ s : τ --→ ∗ 1 ∆′. As P canonical for ∆with ⊢Pi ▷ci : Ti and ⊢Pi ▷ci : T ′ i, we have ⊢P ▷∆′ as required. Given this, we may restrict our attention to contexts which have no internal transitions of probability one. To show the main statement given not ∆ s : τ --→1, assume the contrary, i.e., assume P ↛, ⊢P ▷∆, and P is canonical for ∆but not ∆↛. Then ∆ s : pq : l⟨U⟩ ------→P1 or ∆ s : τ --→P2 with P2 ̸= 1. In the former case, by [TR-Com], [TR-Inp], and [TR-Out], then ∆ s : pq : l⟨U⟩ ------→P ∆′ and P canonical for ∆imply that P contains an unguarded output and a matching unguarded input. Then P can perform a step using [R-Com] contradicting P ↛. In the latter case, by [TR-τ], then ∆ s : τ --→P ∆′ with P canonical for ∆imply that P contains an unguarded subterm of the form τ.Q. Then P can perform a step using [R-τ] again contradicting P ↛. In both cases we have a contradiction. Hence, ∆↛. 70 4.2 Properties of Typed Processes Theorem 4.24 (Deadlock-Freedom). If ⊢P ▷∆, safe(∆), dfree(∆), and P canonical for ∆, then P is deadlock-free. The proof of Deadlock-Freedom is similar to the corresponding proof in [Peters and Yoshida, 2024]. Proof. Assume P =⇒P P ′ ↛, ⊢P ▷∆, safe(∆), dfree(∆), and P canonical for ∆. By Theorem 4.19, then there is some ∆′ such that ∆7→∗ P ∆′ and ⊢P ′ ▷∆′. By Lemma 4.23, then ∆′ s : τ --→ ∗ 1 ∆′′ with ⊢P ′ ▷∆′′ and ∆′′ ↛. By Definition 4.9, then ∆′′ = ∅. Since the global environment is empty and ⊢P ′ ▷∆′′, for all conditionals in P ′ the boolean condition is already a value and, since P ′ ↛, then P ′ cannot contain unguarded conditionals. Since P ′ ↛also cannot contain (modulo structural congruence), unguarded subterms of the form def D in 0, by inverting the typing rules, then ⊢ P ′ ▷∆′′ and ∆′′ = ∅imply P ′ ≡0. In this chapter we have presented numerous important results of the probabilistic mixed choice multiparty session π-calculus, its types, and refinement-based multi-channel subtyping. With Corollaries 4.6 (≤1 is a preorder) and 4.21 (error-freedom), as well as Theorems 4.19 (subject reduction) and 4.24 (deadlock-freedom), we have established that our system fulfils all expected and necessary properties of MPST systems. Additionally, the Interface Existence Theorem 4.11 guarantees great flexibility of the multi-channel subtyping by ensuring that a single-channel interface can always be assembled from almost any local context, which distinguishes our work from [Horne, 2020]. Having finished all technical contributions, we will now summarize the presented work, highlighting context and significance, and discuss interesting avenues for future research. 71 5 Conclusion and Future Work This thesis set out to create a previously unexplored approach to compositional refinement in multiparty session types using subtyping. Towards this, we first presented a powerful calculus framework, the probabilistic mixed choice multiparty session π-calculus, whose combination of the expressive mixed choice setting from [Peters and Yoshida, 2024] with probabilistic choices (see [Aman and Ciobanu, 2019; Inverso et al., 2020]) is, to the best of our knowledge, entirely new. We then extended said calculus with the refinement-based multi-channel subtyping (Definition 3.5). This unique subtyping allows a typed channel (the interface) to be safely substituted by a collection of several typed channels (the refinement), if their collective behaviour models that of the interface. This framework enables robust stepwise refinement. A single channel within a protocol may be taken as interface and safely replaced by several channels, that in turn can each be interfaces for another refinement. Not only is stepwise refinement useful for systematic, collaborative programming, but, as we have already hinted at with the "conflict of interest" in our courthouse example, having the option to perform refinement in this way can enable designers to easily patch security concerns. For example, a compromised system in which an actor has too many access rights can be repaired by replacing this actor with several and distributing their power (cf. Horne [2020]). Moreover, by considering a refinement as the starting point, we can combine its channels into a single channel for whom the interactions within the refinement are concealed. Cleverly doing so will greatly reduce the complexity and size of the overall system. Crucially, our subtyping relation is maximally flexible, allowing to unify any selection of well-typed, safe, and deadlock-free channels (the refinement) into an interface consisting of a single channel (Theorem 4.11). Hence, our framework facilitates the strong compositional verifiability of protocols that we sought to create. Currently the system only supports natural numbers and booleans as base types, and subtyping does thus not include the types of payloads. Adding more complex types would be a straightforward addition. Similarly, we currently do not allow session delegation, i.e., sending and receiving channels as payload. While session delegation is not uncommon, it would add significant complexity, especially in our system. So far, we have been requiring deadlock-freedom to be preserved in 100% of cases when constructing a refinement. As the system is probabilistic, we believe it would be very interesting to explore imperfect refinement, by allowing the refinement to deadlock with a bounded probability. Much of the system is already designed to accommodate such extensions: We imagine expanding on the semantics of the probability P at the subtyping relation ≤P to be especially fruitful in this regard. Furthermore, boundedprobability safety may also be worth exploring. For example one might allow protocols 73 5 Conclusion and Future Work which are still under development to be deployed if their error probability is below a small threshold. We are also interested in analysing behavioural properties other than deadlock-freedom and safety. [Scalas and Yoshida, 2019] uses the safety predicate φ to statically verify typing contexts to be, for example, terminating and live (see [Kobayashi and Sangiorgi, 2010; Padovani et al., 2014]) in addition to safe. As before, typed processes would then be enforced to fulfil these run-time properties. The notion of imperfect refinement could then be taken even further by allowing bounded-probability termination and liveness, too. 74 Bibliography Martín Abadi and Leslie Lamport. The existence of refinement mappings. In Proceedings of the Third Annual Symposium on Logic in Computer Science (LICS '88), Edinburgh, Scotland, UK, July 5-8, 1988, pages 165-175. IEEE Computer Society, 1988. URL https://doi.org/10.1109/LICS.1988.5115. Bogdan Aman and Gabriel Ciobanu. Probabilities in session types. In Mircea Marin and Adrian Craciun, editors, Proceedings Third Symposium on Working Formal Methods, FROM 2019, Timişoara, Romania, 3-5 September 2019, volume 303 of EPTCS, pages 92-106, 2019. URL https://doi.org/10.4204/EPTCS.303.7. Andi Bejleri and Nobuko Yoshida. Synchronous multiparty session types. In Vasco T. Vasconcelos and Nobuko Yoshida, editors, Proceedings of the First Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software, PLACES@DisCoTec 2008, Oslo, Norway, June 7, 2008, volume 241 of Electronic Notes in Theoretical Computer Science, pages 3-33. Elsevier, 2008. URL https://doi.org/10.1016/j.entcs.2009.06.002. Lorenzo Bettini, Mario Coppo, Loris D'Antoni, Marco De Luca, Mariangiola DezaniCiancaglini, and Nobuko Yoshida. Global progress in dynamically interleaved multiparty sessions. In Franck van Breugel and Marsha Chechik, editors, CONCUR 2008 - Concurrency Theory, 19th International Conference, CONCUR 2008, Toronto, Canada, August 19-22, 2008. Proceedings, volume 5201 of Lecture Notes in Computer Science, pages 418-433. Springer, 2008. URL https://doi.org/10.1007/978-3540-85361-9_33. Paula Blechschmidt, Kirstin Peters, and Uwe Nestmann. Compositional Interface Refinement Through Subtyping in Probabilistic Session Types. Submitted to ICTAC'25. Filipe Casal, Andreia Mordido, and Vasco T. Vasconcelos. Mixed sessions. Theor. Comput. Sci., 897:23-48, 2022. URL https://doi.org/10.1016/j.tcs.2021.08. 005. Tzu-Chun Chen, Mariangiola Dezani-Ciancaglini, Alceste Scalas, and Nobuko Yoshida. On the preciseness of subtyping in session types. Log. Methods Comput. Sci., 13(2), 2017. URL https://doi.org/10.23638/LMCS-13(2:12)2017. Mario Coppo, Mariangiola Dezani-Ciancaglini, Nobuko Yoshida, and Luca Padovani. Global progress for dynamically interleaved multiparty sessions. Math. Struct. Comput. Sci., 26(2):238-302, 2016. URL https://doi.org/10.1017/ S0960129514000188. 75 Bibliography Mariangiola Dezani-Ciancaglini, Silvia Ghilezan, Svetlana Jaksic, Jovanka Pantovic, and Nobuko Yoshida. Precise subtyping for synchronous multiparty sessions. In Simon Gay and Jade Alglave, editors, Proceedings Eighth International Workshop on Programming Language Approaches to Concurrency- and Communication-cEntric Software, PLACES 2015, London, UK, 18th April 2015, volume 203 of EPTCS, pages 29-43, 2015. URL https://doi.org/10.4204/EPTCS.203.3. Simon Gay and António Ravara. Behavioural Types: from Theory to Tools. River Publishers, 2017. URL https://doi.org/10.1145/2873052. Simon J. Gay. Subtyping supports safe session substitution. In Sam Lindley, Conor McBride, Philip W. Trinder, and Donald Sannella, editors, A List of Successes That Can Change the World - Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday, volume 9600 of Lecture Notes in Computer Science, pages 95-108. Springer, 2016. URL https://doi.org/10.1007/978-3-319-30936-1_5. Simon J. Gay and Malcolm Hole. Subtyping for session types in the pi calculus. Acta Informatica, 42(2-3):191-225, 2005. URL https://doi.org/10.1007/s00236-0050177-z. Jean-Yves Girard. Linear logic. Theor. Comput. Sci., 50:1-102, 1987. URL https: //doi.org/10.1016/0304-3975(87)90045-4. Hans A. Hansson. Time and probability in formal design of distributed systems. PhD thesis, University Uppsala, Sweden, 1991. Oltea Mihaela Herescu and Catuscia Palamidessi. Probabilistic asynchronous pi-calculus. In Jerzy Tiuryn, editor, Foundations of Software Science and Computation Structures, Third International Conference, FOSSACS 2000, Held as Part of the Joint European Conferences on Theory and Practice of Software,ETAPS 2000, Berlin, Germany, March 25 - April 2, 2000, Proceedings, volume 1784 of Lecture Notes in Computer Science, pages 146-160. Springer, 2000. URL https://doi.org/10.1007/3540-46432-8_10. Kohei Honda. Types for dyadic interaction. In Eike Best, editor, CONCUR '93, 4th International Conference on Concurrency Theory, Hildesheim, Germany, August 2326, 1993, Proceedings, volume 715 of Lecture Notes in Computer Science, pages 509523. Springer, 1993. URL https://doi.org/10.1007/3-540-57208-2_35. Kohei Honda, Vasco Thudichum Vasconcelos, and Makoto Kubo. Language primitives and type discipline for structured communication-based programming. In Chris Hankin, editor, Programming Languages and Systems - ESOP'98, 7th European Symposium on Programming, Held as Part of the European Joint Conferences on the Theory and Practice of Software, ETAPS'98, Lisbon, Portugal, March 28 - April 4, 1998, Proceedings, volume 1381 of Lecture Notes in Computer Science, pages 122-138. Springer, 1998. URL https://doi.org/10.1007/BFb0053567. 76 Bibliography Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session types. In George C. Necula and Philip Wadler, editors, Proceedings of the 35th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2008, San Francisco, California, USA, January 7-12, 2008, pages 273-284. ACM, 2008. URL https://doi.org/10.1145/1328438.1328472. Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session types. J. ACM, 63(1):9:1-9:67, 2016. URL https://doi.org/10.1145/2827695. Ross Horne. Session subtyping and multiparty compatibility using circular sequents. In Igor Konnov and Laura Kovács, editors, 31st International Conference on Concurrency Theory, CONCUR 2020, September 1-4, 2020, Vienna, Austria (Virtual Conference), volume 171 of LIPIcs, pages 12:1-12:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. URL https://doi.org/10.4230/LIPICS.CONCUR.2020.12. Hans Hüttel, Ivan Lanese, Vasco T. Vasconcelos, Luís Caires, Marco Carbone, PierreMalo Deniélou, Dimitris Mostrous, Luca Padovani, António Ravara, Emilio Tuosto, Hugo Torres Vieira, and Gianluigi Zavattaro. Foundations of session types and behavioural contracts. ACM Comput. Surv., 49(1):3:1-3:36, 2016. URL https: //doi.org/10.1145/2873052. Omar Inverso, Hernán Melgratti, Luca Padovani, Catia Trubiani, and Emilio Tuosto. Probabilistic Analysis of Binary Sessions. In Igor Konnov and Laura Kovács, editors, 31st International Conference on Concurrency Theory (CONCUR 2020), volume 171 of Leibniz International Proceedings in Informatics (LIPIcs), pages 14:1-14:21, Dagstuhl, Germany, 2020. Schloss Dagstuhl - Leibniz-Zentrum für Informatik. ISBN 978-3-95977-160-3. URL https://doi.org/10.4230/LIPIcs.CONCUR.2020.14. Naoki Kobayashi and Davide Sangiorgi. A hybrid type system for lock-freedom of mobile processes. ACM Trans. Program. Lang. Syst., 32(5):16:1-16:49, 2010. URL https: //doi.org/10.1145/1745312.1745313. Kim Guldstrand Larsen and Arne Skou. Bisimulation through probabilistic testing. Inf. Comput., 94(1):1-28, 1991. URL https://doi.org/10.1016/0890-5401(91)900306. Nancy A. Lynch and Frits W. Vaandrager. Forward and Backward Simulations: I. Untimed Systems. Information and Computation, 121(2):214-233, 1995. URL https: //doi.org/10.1006/INCO.1995.1134. Robin Milner. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer, 1980. ISBN 3-540-10235-3. URL https://doi.org/10. 1007/3-540-10235-3. Robin Milner. Communication and concurrency. PHI Series in computer science. Prentice Hall, 1989. ISBN 978-0-13-115007-2. 77 Bibliography Robin Milner. The Polyadic π-Calculus: a Tutorial. In Logic and Algebra of Specification, volume 49 of Series F: Computer & Systems Sciences, pages 203-246, 1993. URL https://doi.org/10.1007/978-3-642-58041-3_6. Robin Milner. Communicating and mobile systems - the Pi-calculus. Cambridge University Press, 1999. ISBN 978-0-521-65869-0. Robin Milner, Joachim Parrow, and David Walker. A Calculus of Mobile Processes, Part I and II. Information and Computation, 100(1):1-77, 1992. URL https:// doi.org/10.1016/0890-5401(92)90008-4 and https://doi.org/10.1016/08905401(92)90009-5. Rocco De Nicola and Matthew Hennessy. Testing equivalences for processes. Theor. Comput. Sci., 34:83-133, 1984. URL https://doi.org/10.1016/0304-3975(84) 90113-0. OpenAI. ChatGPT. https://chatgpt.com/share/688b6028-44b0-8001-861637d0ffcdcaea, 2025. Large language model, versions gpt-4o and gpt-5o, accessed August 2025, Homepage-URL https://chatgpt.com/. Luca Padovani, Vasco Thudichum Vasconcelos, and Hugo Torres Vieira. Typing liveness in multiparty communicating systems. In Eva Kühn and Rosario Pugliese, editors, Coordination Models and Languages - 16th IFIP WG 6.1 International Conference, COORDINATION 2014, Held as Part of the 9th International Federated Conferences on Distributed Computing Techniques, DisCoTec 2014, Berlin, Germany, June 3-5, 2014, Proceedings, volume 8459 of Lecture Notes in Computer Science, pages 147-162. Springer, 2014. URL https://doi.org/10.1007/978-3-662-43376-8_10. Kirstin Peters and Nobuko Yoshida. On the expressiveness of mixed choice sessions. In Valentina Castiglioni and Claudio Antares Mezzina, editors, Proceedings Combined 29th International Workshop on Expressiveness in Concurrency and 19th Workshop on Structural Operational Semantics, EXPRESS/SOS 2022, Warsaw, Poland, 12th September 2022, volume 368 of EPTCS, pages 113-130, 2022. URL https://doi. org/10.4204/EPTCS.368.7. Kirstin Peters and Nobuko Yoshida. Separation and encodability in mixed choice multiparty sessions. In Pawel Sobocinski, Ugo Dal Lago, and Javier Esparza, editors, Proceedings of the 39th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2024, Tallinn, Estonia, July 8-11, 2024, pages 62:1-62:15. ACM, 2024. URL https://doi.org/10.1145/3661814.3662085. Benjamin C. Pierce. Types and programming languages. MIT Press, 2002. ISBN 978-0262-16209-8. Benjamin C. Pierce and Davide Sangiorgi. Typing and subtyping for mobile processes. Math. Struct. Comput. Sci., 6(5):409-453, 1996. URL https://doi.org/10.1017/ s096012950007002x. 78 Bibliography Alceste Scalas and Nobuko Yoshida. Less is More: Multiparty Session Types Revisited. Doc technical report dtrs-18-6, Imperial College London, 2018. Alceste Scalas and Nobuko Yoshida. Less is more: multiparty session types revisited. Proc. ACM Program. Lang., 3(POPL):30:1-30:29, 2019. URL https://doi.org/10. 1145/3290343. Roberto Segala and Nancy A. Lynch. Probabilistic simulations for probabilistic processes. In Bengt Jonsson and Joachim Parrow, editors, CONCUR '94, Concurrency Theory, 5th International Conference, Uppsala, Sweden, August 22-25, 1994, Proceedings, volume 836 of Lecture Notes in Computer Science, pages 481-496. Springer, 1994. URL https://doi.org/10.1007/978-3-540-48654-1_35. Kaku Takeuchi, Kohei Honda, and Makoto Kubo. An interaction-based language and its typing system. In Constantine Halatsis, Dimitris G. Maritsas, George Philokyprou, and Sergios Theodoridis, editors, PARLE '94: Parallel Architectures and Languages Europe, 6th International PARLE Conference, Athens, Greece, July 4-8, 1994, Proceedings, volume 817 of Lecture Notes in Computer Science, pages 398-413. Springer, 1994. URL https://doi.org/10.1007/3-540-58184-7_118. Daniele Varacca and Nobuko Yoshida. Probabilistic pi-calculus and event structures. In Alessandro Aldini and Franck van Breugel, editors, Proceedings of the Fifth Workshop on Quantitative Aspects of Programming Languages, QAPL 2007, Braga, Portugal, March 24-25, 2007, volume 190 of Electronic Notes in Theoretical Computer Science, pages 147-166. Elsevier, 2007. URL https://doi.org/10.1016/j.entcs.2007.07. 009. Kazuki Watanabe, Clovis Eberhart, Kazuyuki Asada, and Ichiro Hasuo. Compositional probabilistic model checking with string diagrams of mdps. In Constantin Enea and Akash Lal, editors, Computer Aided Verification - 35th International Conference, CAV 2023, Paris, France, July 17-22, 2023, Proceedings, Part III, volume 13966 of Lecture Notes in Computer Science, pages 40-61. Springer, 2023. URL https://doi.org/ 10.1007/978-3-031-37709-9_3. Nobuko Yoshida and Lorenzo Gheri. A very gentle introduction to multiparty session types. In Dang Van Hung and Meenakshi D'Souza, editors, Distributed Computing and Internet Technology - 16th International Conference, ICDCIT 2020, Bhubaneswar, India, January 9-12, 2020, Proceedings, volume 11969 of Lecture Notes in Computer Science, pages 73-93. Springer, 2020. URL https://doi.org/10.1007/978-3-03036987-3_5. 79
|
2509.16263
|
Beyond Stoquasticity: Structural Steering and Interference in Quantum
Optimization
Vicky Choi
Gladiolus Veritatis Consulting Co.*
September 23, 2025
Abstract
We present a theoretical analysis of the DIC-DAC-DOA algorithm, a non-stoquastic quantum algorithm for
solving the Maximum Independent Set (MIS) problem. The algorithm runs in polynomial time and achieves
exponential speedup over both transverse-field quantum annealing (TFQA) and classical algorithms on a struc-
tured family of NP-hard MIS instances, under assumptions supported by analytical and numerical evidence.
The core of this speedup lies in the ability of the evolving ground state to develop both positive and negative
amplitudes, enabled by the non-stoquastic XX-driver. This sign structure permits quantum interference that
produces negative amplitudes in the computational basis, allowing efficient evolution paths beyond the reach
of stoquastic algorithms, whose ground states remain strictly non-negative. In our analysis, the efficiency of
the algorithm is measured by the presence or absence of an anti-crossing, rather than by spectral gap estimation
as in traditional approaches. The key idea is to infer it from the crossing behavior of bare energy levels of rel-
evant subsystems associated with the degenerate local minima (LM) and the global minimum (GM). The cliques
of the critical LM, responsible for the anti-crossing in TFQA, can be efficiently identified to form the XX-driver
graph. Based on the clique structure of LM, we construct a decomposition of the Hilbert space into same-sign
and opposite-sign sectors, yielding a corresponding block-diagonal form of the Hamiltonian. We then show
that the non-stoquastic XX-driver induces a see-saw effect that shifts their bare energies, leading to analytically
derived bounds on Jxx which support a two-stage annealing schedule that prevents anti-crossings throughout
the evolution. The resulting speedup can be attributed to two mechanisms: in the first stage, energy-guided
localization within the same-sign block steers the ground state smoothly into the GM-supporting region, while
in the second stage, the opposite-sign blocks are invoked and sign-generating quantum interference drives the
evolution along an opposite-sign path. Finally, our analysis produces scalable small-scale models, derived
from our structural reduction, that capture the essential dynamics of the algorithm. These models provide a
concrete opportunity for verification of the quantum advantage mechanism on currently available universal
quantum computers.
*https://www.vc-gladius.com
1
arXiv:2509.16263v1 [quant-ph] 18 Sep 2025
Contents
1
Introduction
4
2
MIS Problem and Jzz Coupling
9
2.1
The Role of Jzz in the MIS-Ising Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . .
10
3
Classical Hard Instances of MIS and the GIC Instances
10
3.1
Necessary Classical Hardness Conditions
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11
3.2
dMIC: Structured Form of a deg-MLIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11
3.3
GIC and Its Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12
3.3.1
Bipartite Substructures: Gdis and Gshare . . . . . . . . . . . . . . . . . . . . . . . . . .
13
4
Revised DIC-DAC-DOA: System Hamiltonian and Annealing Schedule
15
4.1
Recall: System Hamiltonian of DIC-DAC-DOA . . . . . . . . . . . . . . . . . . . . . . . . . .
15
4.2
Revised Annealing Schedule: Stages 0, 1 and 2 . . . . . . . . . . . . . . . . . . . . . . . . . .
15
5
Anti-Crossing and the Two-Level Hamiltonian B(w, x)
16
5.1
Level Repulsion vs. Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17
5.1.1
Anti-crossing in a Multi-level System . . . . . . . . . . . . . . . . . . . . . . . . . . .
18
5.1.2
Types of Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19
5.2
The Basic Matrix B(w, x): Eigenvalues and Eigenstates
. . . . . . . . . . . . . . . . . . . . .
20
6
A Single Clique: Decomposition and Low-Energy Subspace Analysis
20
6.1
Decomposition of the Clique Space: Low and High Energy Subspaces . . . . . . . . . . . . . .
21
6.2
Angular-Momentum Decomposition of the Low-Energy Subspace . . . . . . . . . . . . . . . .
21
6.2.1
Low-Energy Subspace in Total Angular Momentum Basis Ba
. . . . . . . . . . . . . .
21
6.2.2
Spin Operators in the B′
a Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
23
6.2.3
Decomposition into Spin-1
2 and Spin-0 Components
. . . . . . . . . . . . . . . . . . .
24
6.2.4
Full Spectrum and the Jxx See-Saw Effect . . . . . . . . . . . . . . . . . . . . . . . . .
25
6.2.5
Transformation via Two Coupled Subcliques
. . . . . . . . . . . . . . . . . . . . . . .
26
7
Analytical Solution for the Bare Subsystem (dMIC)
28
7.1
Low-Energy Structure of the Bare Subsystem: Same-Sign and Opposite-Sign Sectors . . . . . .
29
7.2
Closed-Form Solution of the Same-Sign Block HCbare . . . . . . . . . . . . . . . . . . . . . . .
30
7.3
Uniform Clique Size: Symmetric-Subspace Reduction of the Same-Sign Block
. . . . . . . . .
31
7.4
Block Energy Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
34
8
Analysis of Stage 0
34
8.1
Decomposition of the Hilbert Space
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
35
8.2
Effective Hamiltonian at the End of Stage 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . .
36
8.3
Spectral Gap Behavior in Stage 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
38
9
Main Analysis of the Two-Stage Dynamics on Bipartite Structures
38
9.1
Block Decomposition of the Full Hamiltonian: Same-Sign vs. Opposite-Sign Blocks
. . . . . .
39
9.1.1
The Disjoint-Structure Graph: Block-Diagonal Structure via Clique Contraction
. . . .
41
9.1.2
The Shared-Structure Graph: Modification to the Disjoint Case
. . . . . . . . . . . . .
43
2
9.2
Inner Decompositions of the Same-Sign Block during Stage 1
. . . . . . . . . . . . . . . . . .
45
9.2.1
Two Block Decompositions of HC: L-Inner vs. R-Inner
. . . . . . . . . . . . . . . . .
45
9.2.2
Definition of Structural Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . .
48
9.3
Two-Stage Evolution and Feasibility Bounds on Jxx . . . . . . . . . . . . . . . . . . . . . . . .
49
9.3.1
Analytical Bounds on Jxx and the Feasibility Window
. . . . . . . . . . . . . . . . . .
49
9.3.2
Stage 1: Structural Steering
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
52
9.3.3
Stage 2: Smooth Evolution to GM via an Opposite-Sign Path
. . . . . . . . . . . . . . .
53
9.4
Three-Vertex Conceptual Model: Illustrating Quantum Interference
. . . . . . . . . . . . . . .
59
9.4.1
Model Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
59
9.4.2
Explaining Quantum Interference via Basis Rotation . . . . . . . . . . . . . . . . . . .
65
9.4.3
Numerical Illustration of Sign-Generating Interference . . . . . . . . . . . . . . . . . .
66
10 Full-System Extension via Iterative Application
67
11 Conclusion and Outlook
69
3
1
Introduction
A distinctive feature of quantum mechanics is that the wavefunction of a quantum state, expressed in a given
basis, can have both positive and negative amplitudes—a characteristic with no classical counterpart, as classical
probabilities must be non-negative. Motivated by the goal of harnessing this intrinsically quantum property, we
proposed the DIC-DAC-DOA algorithm in [1, 2] for the NP-hard Maximum Independent Set (MIS) problem. The
acronym originally stood for Driver graph from Independent Cliques, Double Anti-Crossing, Diabatic quantum
Optimization Annealing. The algorithm modifies standard transverse-field quantum annealing (TFQA) [3, 4, 5, 6]
by adding a specially designed XX-driver term, aiming to overcome small-gap anti-crossings. For completeness,
we include the full algorithmic details and necessary refinements in this paper.
We now recall the system Hamiltonian for DIC-DAC-DOA:
H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem,
where HX = −P
i σx
i , HXX = P
(i,j)∈E(Gdriver) σx
i σx
j , and Hproblem is the MIS-Ising Hamiltonian defined in
Equation (2.3). The time-dependent parameter schedule jxx(t) depends on the XX-coupling strength Jxx. In
particular, jxx(t) ≡0 when Jxx = 0, so the case Jxx = 0 corresponds to TFQA, without the XX-driver. The
system Hamiltonian is stoquastic (in the computational basis) if Jxx ≤0, and non-stoquastic if Jxx > 0.
The goal of our analysis is to demonstrate how and why an appropriately chosen non-stoquastic (i.e., pos-
itive) coupling strength Jxx enables exponential speedup with DIC-DAC-DOA. Specifically, we develop a the-
oretical framework for understanding the role of Jxx in shaping the Hamiltonian structure and guiding ground
state evolution. We show that the algorithm succeeds in polynomial time by successively dissolving small-gap
anti-crossings, one by one, for a structured class of input instances, called GIC graphs. Despite their structure,
we argue in Section 3 that GIC graphs exhibit classical hardness: solving MIS on such instances would require
exponential time unless P = NP. Our analysis methods combine analytical techniques and numerical verifica-
tion, with perturbative and effective Hamiltonian arguements that can in principle be made rigorous with further
work.
First, we introduce some necessary terminology. Throughout this paper, all Hamiltonians considered are real
and Hermitian. Accordingly, we restrict our attention to quantum states with real-valued amplitudes. That is, the
phase of each component is either 0 (corresponding to a positive sign) or π (corresponding to a negative sign).
We now formalize the sign structure of quantum states, which plays a central role in our analysis:
Definition 1.1. Let |ψ⟩= P
x∈B ψ(x)|x⟩be a quantum state with real amplitudes in a basis B.
• |ψ⟩is called a same-sign state if ψ(x) ≥0 for all x ∈B. That is, all components are in phase (with
relative phase 0).
• |ψ⟩is called an opposite-sign state if there exist x, x′ ∈B such that ψ(x) > 0 and ψ(x′) < 0. In this
case, some components are out of phase, differing by a relative phase of π.
Unless stated otherwise, we take B to be the computational basis when referring to same-sign or opposite-sign
states. More generally, the computational basis is assumed whenever the basis is unspecified.
Accordingly, we define the notions of same-sign and opposite-sign bases, sectors, and blocks:
Definition 1.2. An orthonormal basis consisting entirely of same-sign states is called a same-sign basis. A basis
that includes at least one opposite-sign state is called an opposite-sign basis. A subspace spanned by a same-sign
4
basis is called a same-sign sector; otherwise, it is called an opposite-sign sector. A submatrix (or block) of a
Hamiltonian is called a same-sign block if it is expressed in a same-sign basis. Otherwise, it is referred to as an
opposite-sign block.
Example.
The state |+⟩=
1
√
2(|0⟩+ |1⟩) is a same-sign state, while |−⟩=
1
√
2(|0⟩−|1⟩) is an opposite-sign
state. The computational basis {|0⟩, |1⟩} is a same-sign basis of C2, and the Hadamard basis {|+⟩, |−⟩} is an
opposite-sign basis.
These definitions are closely related to the concept of stoquasticity, which constrains the sign structure of the
ground state. It is well-known that, by the Perron–Frobenius theorem, the ground state of a stoquastic Hamil-
tonian (i.e., one whose off-diagonal elements are non-positive in a given basis) is a same-sign state in that ba-
sis [5, 7]. In particular, when expressed in the computational basis, the ground state of a stoquastic Hamiltonian
is necessarily a same-sign state. By contrast, the ground state of a non-stoquastic Hamiltonian may be either
a same-sign or an opposite-sign state. We refer to the former as Eventually Stoquastic and the latter as Proper
Non-stoquastic, as introduced in [1].
This distinction is crucial and can be understood in terms of the admissible subspace, defined as the subspace
of Hilbert space that the instantaneous ground state of the system Hamiltonian can dynamically access. The shape
and sign structure of this subspace differ fundamentally depending on whether the Hamiltonian is stoquastic
or properly non-stoquastic. In the (eventually) stoquastic case, the admissible subspace remains confined to
the same-sign sector of the computational basis, where all amplitudes are non-negative. In the properly non-
stoquastic case, this subspace expands to include the opposite-sign sector, allowing superpositions with both
positive and negative amplitudes.
It is therefore clear that by “de-signing” the non-stoquastic Hamiltonian [9], one reduces the admissible
subspace. The larger the admissible subspace, the more possible evolution paths become accessible. This raises
the central question: can some of these new paths—particularly those involving smooth transitions rather than
exponentially slow tunneling—be efficiently exploited?
We answer this question affirmatively in this work. In particular, we show that
• The XX-driver opens access to opposite-sign sectors, enlarging the admissible subspace to include opposite-
sign states;
• In this expanded subspace, sign-generating quantum interference—interference that produces negative
amplitudes in the computational basis—enables a smooth evolution path that bypasses tunneling.
This is the essence of going beyond stoquasticity: the ability to exploit a larger sign-structured subspace,
which enables new evolution paths that are dynamically inaccessible in the stoquastic regime.
To demonstrate this idea concretely, we now turn to the algorithmic framework and its guiding structure.
First, let LM and GM denote a set of degenerate local minima and the global minimum of the problem Hamiltonian.
We now introduce the notion of the associated bare energy, which plays a key role in our analysis. For a subset
A ⊆V(G), let HA(t) denote the restriction of the Hamiltonian to the subspace spanned by the subsets of A.
We define the ground state energy EA
0 (t) as the bare energy associated with A. In particular, ELM
0 (t) and EGM
0 (t)
denote the bare energies associated with LM (where A = S
M∈LM M) and GM, respectively. The full algorithm
consists of two phases; see Table 1. Phase I uses polynomial-time TFQA to identify an independent-clique
(IC) structure associated with the critical LM, which is responsible for the (right-most) anti-crossing with GM in
the TFQA ground state evolution. Informally, such an anti-crossing arises from the crossing of the bare energies
associated with LM and GM, see Figure 1. Phase II constructs an XX-driver—based on the identified IC—to lift this
anti-crossing. This process is repeated iteratively: at each iteration, the algorithm identifies the next critical LM,
5
constructs an additional driver term, added to the system Hamiltonian. Provided that certain structural conditions
are met, this iterative process can be applied effectively (see Figure 2 for a schematic depiction). In this way,
the algorithm progressively removes small-gap obstructions from the system. The main focus of our analysis
is therefore on Phase II applied to a single bipartite substructure—in particular, on how the XX-driver, through
an appropriate choice of the parameter Jxx, enables the key mechanisms that dissolve each anti-crossing without
introducing new ones.
E1 (t)
E0 (t)
(a)
ELM
0 (t)
EGM
0 (t)
(b)
ELM
0 (t)
EGM
0 (t)
E1 (t)
E0 (t)
(c)
Figure 1: Illustration of an (LM, GM)-anti-crossing. (a) An anti-crossing between the lowest two levels E0(t) and
E1(t). (b) Bare energies ELM
0 (t) and EGM
0 (t) cross. (c) Overlay showing that the anti-crossing originates from the
bare crossing of ELM
0 (t) and EGM
0 (t).
x=0
(t=1)
x=Γ
(t=0)
Energy
x(2)
x(1)
EGM
0 (t)
(
ELM(1)
0
(t)
ELM(2)
0
(t)
(a)
x=0
(t=1)
x=Γ
(t=0)
Energy
x(2)
EGM
0 (t)
(
ELM(1)
0
(t)
ELM(2)
0
(t)
(b)
x=0
(t=1)
x=Γ
(t=0)
Energy
EGM
0 (t)
(
ELM(1)
0
(t)
ELM(2)
0
(t)
(c)
Figure 2: Illustration of the iterative removal of anti-crossings by DIC-DAC-DOA. (a) Under TFQA, the energy
associated with the global minimum EGM
0 (t) (blue) has bare crossings with the bare energies of two local minima,
ELM(1)
0
(t) and ELM(2)
0
(t), at positions x(1) = x(t1) and x(2) = x(t2). Each such bare crossing corresponds to an
(LM(i), GM)-anti-crossing in the system spectrum. (b) After Phase I of DIC-DAC-DOA, the first crossing at x(1) is
removed by lifting ELM(1)
0
(t), while the second crossing at x(2) remains. (c) Phase II applies the same procedure
to lift ELM(2)
0
(t), thereby completing the removal of both anti-crossings.
Informally, the XX-driver graph, which consists of a set of independent cliques, induces a natural angular
momentum structure in the Hilbert space. This structure allows for a block decomposition of the Hamiltonian into
disjoint same-sign and opposite-sign blocks. Importantly, in the signed basis induced by this decomposition, the
off-diagonal terms involving non-stoquastic coupling strength Jxx become diagonal. Consequently, Jxx induces
a see-saw energy effect: it raises the energy associated with local minima in the same-sign block while lowering
that of the opposite-sign blocks. This effect introduces a fundamental trade-off: if Jxx is too small, the local
6
minima in the same-sign block are not lifted high enough (to remove the anti-crossing); if it is too large, the
opposite-sign blocks drop too low, inducing new anti-crossings. As a result, the success of the algorithmic
design depends critically on choosing appropriate Jxx. A central contribution of this work is the identification
of four analytically derived bounds on Jxx. In addition, we identify two upper bounds on Jzz that constrain
the problem Hamiltonian; these, together with the Jxx bounds, define the full feasibility window for successful
evolution. We then attribute the quantum speedup to two key mechanisms: (1) steering the evolving ground
state away from the local-minima-supporting region and directly into the global-minimum-supporting region (a
mechanism we later formalize as structural steering); and (2) enabling sign-generating quantum interference
between blocks of different signs, which produces negative amplitudes in the computational basis. The detailed
quantum interference mechanism will be illustrated in Section 9.4. 1
To better understand the role of these mechanisms, we now return to the original motivation for introducing
the XX-driver—the formation of a double anti-crossing—and reinterpret its role in light of our current structural
understanding.
Revisiting the Original Idea.
We begin by revisiting the original motivation behind the DIC-DAC-DOA al-
gorithm, which was to convert the narrow anti-crossing in TFQA into a double anti-crossing (double-AC) by
introducing the XX-driver. The idea was that this transformation would enable exponential speedup through di-
abatic annealing, via a ground-to-excited-to-ground state transition. Numerical evidence supporting this idea—
including the splitting of degenerate local minima and the formation of a double-AC—was presented in [1].2
While this observation guided the early design of the algorithm, our current structural understanding provides
a more precise explanation. The apparent double-AC actually arises from a pair of anti-crossings between the
ground-state energies of two blocks—one same-sign and one opposite-sign.3 When these blocks are decoupled or
only weakly coupled, the transition can be reinterpreted as effective adiabatic evolution confined to the same-sign
block, with the double-AC dynamically bypassed (see Section 5). In general, however, the coupling between the
same-sign and opposite-sign blocks may not be weak, and transitions between blocks become possible, and the
double-AC mechanism does not necessarily apply. 4
We conclude that the double-AC—though visually striking and initially suggestive of significance—is not
essential for achieving quantum advantage. What initially appeared as negative amplitude in the ground state can
now be more precisely understood as a manifestation of sign-generating quantum interference between same-sign
and opposite-sign blocks. Still, it was the numerical observation of this phenomenon in our original weighted
MIS example that led to the discovery of the Hamiltonian’s block structure and the mechanisms governing the
evolution of the quantum ground state. Even if the double-AC ultimately serves only as a by-product of a deeper
structural principle, its appearance was a gateway to the framework we now develop.
1By quantum interference, we mean the superposition of components in a quantum system that leads to amplitude cancellation or
enhancement of the same basis state. It is inherently quantum in nature, as only quantum systems admit decomposition into opposite-sign
bases with complex or signed amplitudes. However, the mere presence of such interference does not, by itself, imply quantum advantage.
In our system, the observed speedup is attributed to a restricted form of quantum interference—namely, interference that produces negative
amplitudes in the computational basis. This form of sign-generating interference is not accessible to classical simulation or stoquastic
dynamics, whose ground states remain confined to the non-negative cone.
2More concretely, we constructed an XX-driver graph Gdriver from the set of independent cliques corresponding to the critical de-
generate local minima L identified via polynomial-time TFQA. By applying a sufficiently large XX-coupling strength Jxx, we induced
a splitting of the states in L into two subsets with opposite amplitudes, denoted L+ and L−, ensuring that L remained in the instanta-
neous ground state when its energy was minimized. This resulted in two anti-crossings bridged by (L+, L−), forming what we called a
double-AC. This behavior was confirmed numerically through exact diagonalization.
3This was first observed by J. Kerman.
4For example, the first anti-crossing may remain a block-level transition, while the second—involving quantum interference—does
not correspond to a small gap.
7
Remark. Due to historical reasons, we retain the name DIC-DAC-DOA, even though the roles of DAC and DOA
have evolved in light of our current understanding. Alternatively, one might consider renaming it to DIC-DEE-
DAW, where DEE-DAW reflects the see-saw effect.
In summary, DIC-DAC-DOA is motivated by turning around the obstacle of traditional stoquastic quantum
annealing (TFQA)—specifically, the presence of an anti-crossing caused by the competition between the energies
associated with LM and GM. This anti-crossing obstacle has long been observed and explained using perturbative
arguments in the small transverse-field regime (e.g., [12, 13]). In our companion paper [8], we provide an
explicit structural explanation and an analytical proof for such an anti-crossing beyond the small transverse-field
perturbative regime, for the class of structured input graphs.
In our analysis, the efficiency of the algorithm is measured by the presence or absence of an anti-crossing,
rather than by spectral gap estimation as in traditional approaches (see e.g. [6] and references therein). An anti-
crossing can be rigorously defined through reduction to a 2 × 2 effective Hamiltonian, with the precise definition
given in Section 5. The key idea is that the unperturbed energies (the diagonal entries of the effective Hamil-
tonian) can be well-approximated by the bare energies of relevant subsystems: the local minima (LM), whose
cliques are efficiently identified in Phase I and used to construct the XX-driver graph, and the global minimum
(GM), which is assumed as a reference in the analysis. This makes it possible to infer the presence or absence of
an anti-crossing directly from the crossing behavior of bare energy levels, without explicitly constructing the ef-
fective two-level Hamiltonian. In particular, our structural analysis consists of three main components, supported
by both analytical and numerical evidence:
• A structural decomposition of the Hilbert space induced by the XX-driver graph, separating same-sign and
opposite-sign blocks via angular momentum techniques.
• Identification of a see-saw energy effect induced by the non-stoquastic parameter Jxx, which raises the
energy associated with local minima in the same-sign block while lowering that of opposite-sign blocks.
• Derivation of four analytical bounds on Jxx and design of a two-stage annealing schedule that guarantees
correct ground state evolution without encountering an anti-crossing, confirmed by numerical evidence.
A by-product contribution of this work is the design of effective small-scale models, derived from our struc-
tural reduction, that capture the essential dynamics of the algorithm. These models provide a concrete opportunity
for verification of the quantum advantage mechanism on currently available gate-model quantum devices through
simulation.
The paper is organized into four Parts.
Part I: Structural Setup and Annealing Framework. In Section 2, we review the MIS-Ising Hamiltonian
and the role of the Jzz coupling. Section 3 introduces the class of GIC graphs for which we argue classical
hardness, and describe the two fundamental bipartite substructures formed by the critical degenerate local minima
and the global minimum. Section 4 introduces the revised annealing schedule used in Phase II of the algorithm,
and explains how the components x(t), jxx(t), and p(t) vary across the evolution.
Part II: Analytical Tools and Block Decomposition. Section 5 defines and clarifies the notion of anti-
crossings, and presents the basic two-level matrix used throughout the analysis. Section 6 develops the core
framework for analyzing a single clique, including the block decomposition of its low-energy subspace based
on its angular momentum structure. Section 7 presents a closed-form analytical solution for the bare subsystem,
which consists of a collection of independent cliques.
Part III: Two-Stage Dynamics and Full-System Application. Section 8 analyzes the optional Stage 0 and
derives the effective low-energy Hamiltonian that initiates Stage 1. Section 9 contains the main technical results
8
of the paper, focusing on a single bipartite substructure. We analyze the decomposition of the Hamiltonian into
same-sign and opposite-sign blocks, the role of Jxx in steering and interference, and provide analytical bounds
along with numerical illustrations. In Section 10, we extend the analysis to the full system by iteratively applying
the two-phase procedure to successive substructures.
Part IV: Conclusion. We conclude in Section 11 with a discussion of implications, limitations, and future
directions.
Algorithm Overview.
The full algorithm consists of two phases:
• Phase I: Extract an independent-clique (IC) structure using a polynomial-time TFQA process:
1. Use TFQA and polynomial annealing time to return the excited states involved in the anti-
crossing.
2. Extract a set of seeds (local minima states) from the resulting output.
3. Apply a classical procedure to identify an independent-clique (IC) structure based on these seeds.
• Phase II: Define an XX-driver graph using the IC structure and perform an annealing process with
tunable Jxx:
1. Construct an XX-driver graph Gdriver using the IC structure identified in Phase I.
2. Define a new time-dependent Hamiltonian H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem.
3. For each feasible Jxx, evolve adiabatically according to a three-stage schedule:
– Stage 0 (optional initialization);
– Stage 1 (energy-guided localization);
– Stage 2 (interference-driven transition).
4. Measure the final ground state to extract the optimal solution.
Table 1: Two-phase structure of the full DIC-DAC-DOA algorithm. Phase I uses stoquastic TFQA to extract
seeds and identify an IC structure; Phase II applies a non-stoquastic driver with tunable Jxx to lift anti-crossings.
2
MIS Problem and Jzz Coupling
The NP-hard weighted Maximum Independent Set (MIS) problem is formulated as follows:
Input: An undirected graph G = (V(G), E(G)) with N = |V(G)|, where each vertex i ∈V(G) =
{1, . . . , N} is assigned a positive rational weight wi.
Output: A subset S ⊆V(G) such that S is independent (i.e., for each i, j ∈S, (i, j) ̸∈E(G)), and the total
weight of S, given by w(S) = P
i∈S wi, is maximized. We denote this optimal set as mis(G).
For simplicity, we focus on the unweighted MIS problem, where all wi = w = 1. However, we retain wi in
our formulation to allow for generalization to the weighted case and for later analysis purpose. In the unweighted
setting, mis(G) corresponds to the largest independent set in G.
9
2.1
The Role of Jzz in the MIS-Ising Hamiltonian
As shown in [14], the MIS problem can be encoded in the ground state of the MIS-Ising Hamiltonian:
HMIS−Ising(G) =
X
i∈V(G)
(−wi)˜σz
i +
X
(i,j)∈E(G)
Jij˜σz
i ˜σz
j ,
(2.1)
where Jij > max{wi, wj} for all (i, j) ∈E(G). This formulation slightly modifies that of [14, 1], replacing
σz with the shifted-σz operator ˜σz
i := I+σz
i
2
, whose eigenvalues are {0, 1}, making the correspondence with the
classical energy function direct:
E(G) =
X
i∈V(G)
(−wi)xi +
X
(i,j)∈E(G)
Jijxixj,
(2.2)
where xi ∈{0, 1}. The energy is minimized when no two adjacent vertices i, j satisfy xi = xj = 1, ensuring
that the ground state corresponds to mis(G). For convenience, we refer to all Jij as Jzz, although these couplings
need not be uniform, even in the unweighted case. The only requirement is that Jzz > max{wi, wj} for each
edge (i, j) ∈E(G).
In the unweighted case (wi = 1), an independent set of size m has energy Eind = −m, while a dependent
set has energy Edep = Jzz · (#edges) −(#vertices). Thus, large Jzz creates a large energy separation between
independent-set states and dependent-set states. In principle, Jzz can be chosen arbitrarily large; however, exces-
sively large values of Jzz can be detrimental. With the use of the XX-driver in DIC-DAC-DOA, the dependent-set
states can act as “bridges” between different groups of independent-set states to facilitate the smooth structural
steering (to be elaborated in Section 9.3.2). The appropriate choice of Jzz is crucial for the success of the algo-
rithm.
For DIC-DAC-DOA, we use two different couplings:
• Jclique
zz
: assigned to edges within cliques of the driver graph;
• Jzz: assigned to all other edges.
That is, the MIS–Ising problem Hamiltonian takes the form
Hproblem =
X
i∈V(G)
(−wi)˜σz
i + Jclique
zz
X
(i,j)∈E(Gdriver)
˜σz
i ˜σz
j + Jzz
X
(i,j)∈E(G)\E(Gdriver)
˜σz
i ˜σz
j .
(2.3)
The value of Jclique
zz
is set sufficiently large to restrict the system to the clique low-energy subspace, while Jzz
must satisfy two upper bounds, J inter
zz and J steer
zz , to ensure the success of the algorithm. The precise role of these
bounds will be analyzed in Section 9.3.
3
Classical Hard Instances of MIS and the GIC Instances
Recall that an independent set is maximal if no larger independent set contains it. Each maximal independent
set corresponds to a local minimum of the energy function in Eq. (2.2). A collection of maximal independent
sets (MLIS) all having the same size m corresponds to a set of degenerate local minima with equal energy −m.
When the meaning is clear (i.e., they all have the same size), we refer to such a collection simply as a deg-MLIS,
and its cardinality as the degeneracy. In this work, we use the terms degenerate local minima and deg-MLIS
interchangeably.
10
This section explores the relationship between classical hardness and the structure of deg-MLIS, with a focus
on how degeneracy influences computational complexity in both classical and quantum settings. In Section 3.1 we
show that classical hardness requires exponentially many MLIS, with at least one deg-MLIS exhibiting exponential
degeneracy. In Section 3.2 we introduce a structured form of degeneracy, denoted by dMIC. Finally, in Section 3.3
we define the class of Graphs with Independent Cliques (GIC), together with a reduction to fundamental bipartite
structures—Gdis and Gshare—that capture both classical hardness and the structural features exploited by our
algorithm.
3.1
Necessary Classical Hardness Conditions
The MIS problem was among the first shown NP-complete [16]. A simple branching algorithm [17, 18] without
pruning (i.e., branch-without-bound) enumerates all MLIS in time O(N · #MLIS), where #MLIS denotes the
number of sets in MLIS. Thus, MIS can be solved by identifying the largest among them. In the worst case,
#MLIS = O(3N/3), though often significantly smaller—yet still exponential.
Observation 1. A necessary condition for classical hardness is that the MIS instance contains exponentially
many MLIS.
Observation 2. A necessary consequence is that at least one deg-MLIS must exhibit exponential degeneracy.
Observation 1 follows directly from the enumeration algorithm above. Observation 2 follows from the pi-
geonhole principle: since an independent set can have at most N distinct sizes, at least one size class must contain
exponentially many MLIS. Otherwise, if all deg-MLIS had at most polynomial degeneracy, the total number of
MLIS would be polynomial—contradicting Observation 1.
As a side note, it is worth emphasizing that classical algorithms for MIS outperform Grover’s unstructured
quantum search [19, 20], which requires O(2N/2) time. As early as 1977, a simple and implementable algo-
rithm achieved a runtime of O(2N/3) via careful branching analysis [21], and this exponent has been further
improved over time using more sophisticated branch-and-bound techniques, see [22] for references. Since other
optimization problems such as EXACT-COVER can be reduced to MIS with the same problem size [24], it follows
that unstructured adiabatic quantum optimization [23]—while perhaps of theoretical interest—offers no practical
algorithmic advantage.
3.2
dMIC: Structured Form of a deg-MLIS
It is widely believed that a deg-MLIS with exponential degeneracy causes a tunneling-induced anti-crossing in
TFQA. We call such a deg-MLIS critical. The exponentially many MLIS in a critical deg-MLIS must exhibit
substantial vertex-sharing. This suggests a natural partitioning of the involved vertices into k cliques, where
each maximal independent set includes exactly one vertex from each clique. However, efficiently identifying
such a partition may not always be feasible, and multiple partitions may be needed to fully characterize a critical
deg-MLIS.
In its simplest structured form, a critical deg-MLIS can be represented as a dMIC, namely a collection of
mutually independent cliques whose vertices together generate exactly all the MLIS in that deg-MLIS.
Definition 3.1. A dMIC of size k consists of k independent cliques (i.e., no edges exist between them), denoted as
Clique(wi, ni), where wi is the vertex weight and ni is the clique size, for 1 ≤i ≤k. Each maximal independent
set in the corresponding deg-MLIS is formed by selecting exactly one vertex from each clique. In this case, the
degeneracy of the dMIC is given by Qk
i=1 ni.
11
Moreover, given any single maximal independent set as a seed, all k cliques in the dMIC can be identified in
linear time due to the independence condition. For each vertex in the seed, the corresponding clique is obtained by
collecting all vertices that are mutually adjacent to it and simultaneously non-adjacent to all other seed vertices.
This procedure depends only on vertex adjacencies in the graph.
Under the assumption of the dMIC structure, we analytically establish in the companion paper [8] that TFQA
encounters a tunneling-induced anti-crossing with an exponentially small gap. (This generalizes the so-called
perturbative crossing, which is defined only in the small transverse-field regime.) While such an (LM, GM)-
anti-crossing is typically a bottleneck for reaching the global minimum GM, here we turn it around and use it
constructively to reach the local minima LM (the obstacle) instead. More specifically, a polynomial-time TFQA
algorithm evolves adiabatically through this anti-crossing, tracking the energy associated with LM (blue) instead
of transitioning to the global minimum GM (red), as illustrated in Figure 5. This outputs a maximal indepen-
dent set configuration from LM, which can be used to seed the IC-based driver construction in Phase I of the
DIC-DAC-DOA algorithm; see Table 1.
Remark 3.2. When the cliques are dependent—that is, edges may exist between them—we refer to the structure
as an dMDC, a deg-MLIS formed by a set of dependent cliques. This represents a relaxed structure to which
our analysis may be generalized. The main difficulty in this relaxation is that the cliques can no longer be
unambiguously identified, unlike in the dMIC case. Further discussion is deferred to future work.
Terminology and Abbreviations.
For convenience, we summarize below the key abbreviations and their
meanings. These notions are closely related: a deg-MLIS corresponds to degenerate local minima, while dMIC
and dMDC describe structured forms of such degeneracy.
Abbreviation
Meaning
MLIS
Maximal independent sets
deg-MLIS
A collection of MLIS of equal size, corresponding to degenerate local minima
dMIC
A deg-MLIS formed by a set of independent cliques
dMDC
A deg-MLIS formed by a set if dependent cliques
3.3
GIC and Its Hardness
We now define the class of structured problem instances assumed in this paper. A Graph with Independent
Cliques (GIC) is a graph whose structure satisfies the following properties:
• Each critical deg-MLIS in the graph assumes the structure of a dMIC;
• It has a unique (global) maximum independent set, denoted as GM. which may be viewed as a dMIC
consisting entirely of singleton cliques;
• The GM shares at least one vertex with at least one clique in some critical dMIC.
A critical dMIC contains exponentially many MLIS, and its partial overlap with the GM obscures the global
structure, making it difficult for classical algorithms to distinguish the true optimum from nearby local maxima.
The classical hardness of GIC instances stems from these same two features: (1) the exponential number of
12
maximal independent sets in the dMIC, and (2) the partial vertex overlap between the global maximum and these
local maxima, which creates ambiguity and hinders global optimization.
While we do not prove the classical hardness of GIC in the formal complexity-theoretic sense (otherwise we
would have proven P = NP, we note that the definition can be further strengthened if needed. For example, one
may require that the graph contains at least two critical dMICs, and that most vertices of the GM belong to cliques
in these dMICs. In other words, GIC instances are designed to be structured enough for rigorous analysis, yet rich
enough to present classical difficulty.
Reduction to Fundamental Bipartite Structures.
Our key argument is that the analysis of any general GIC
instance can be reduced to a sequence of simpler bipartite substructures, each formed by a critical dMIC and the
global maximum (GM).
Recall that each dMIC corresponds to a set of degenerate local minima (LM) in the MIS–Ising energy land-
scape. The terms LM and dMIC are thus used interchangeably. We use GM to refer both to the (global) max-
imum independent set and to its corresponding global minimum in the energy landscape. In what follows,
we use LM and GM to denote the dMIC and GM in each such bipartite substructure, respectively.
3.3.1
Bipartite Substructures: Gdis and Gshare
We consider two bipartite substructures:
• Gdis: The local minima (LM) and the global minimum (GM) are vertex-disjoint.
• Gshare: The GM shares exactly one vertex with each clique in the LM.
We begin with the disjoint-structure graph Gdis = (V, E), in which the vertex set V is partitioned into left
and right (disjoint) vertex sets, with the following structural properties:
• The left component is defined by a set L = {C1, . . . , Cml} of ml disjoint cliques, each denoted Ci =
Clique(wi, ni). We let VL = Sml
i=1 Ci denote the full vertex set.
• The right component R consists of mr independent vertices, each with weight wr.
• Every vertex in VL is adjacent to every vertex in R.
In this paper we mainly focus on the unweighted MIS case, and assume uniform weights wi = wr = w.
Under the MIS–Ising mapping with these uniform weights, VL corresponds to the degenerate local minima (LM)
with degeneracy Qml
i=1 ni, while R defines the global minimum (GM) with mg = mr. Each local minimum (in
LM) corresponds to a maximal independent set of size ml, and thus has energy −ml. The global minimum (GM)
has energy −mg.
We now define the shared-structure graph Gshare, which differs from Gdis in that each vertex in R is adjacent
to all but one vertex in each clique of L. This modification allows the GM to include shared vertices from the
cliques in L, thereby introducing overlap with the LM. Structurally, L and R are defined exactly as in Gdis, but
with the adjacency rule modified as above. Specifically, the global maximum GM consists of one shared vertex
from each clique Ci ∈L together with all mr independent vertices in R, yielding a total size mg = ml + mr.
Figure 3 illustrates both cases.
For convenience, we write m := ml, dropping the subscript l when no confusion arises.
We assume
Pm
i=1
√ni > mg, so that an anti-crossing is induced by the competition between LM and GM.
13
1
2
4
3
a
5
6
8
7
c
b
(a) The LM and GM are vertex-disjoint.
1
2
4
3
a
5
6
8
7
c
b
(b) The GM shares exactly one vertex with
each clique in the LM.
Figure 3: Example graphs illustrating the Gdis and Gshare structures. Recall that each LM here is structurally a
dMIC. (a) Disjoint-structure graph Gdis: The set L consists of ml = 2 disjoint cliques, each of size n1 = n2 = 4,
with their vertices (pink) forming the local minima LM. The set R (blue) consists of mr = 3 independent vertices,
forming the global minimum GM.
(b) Shared-structure graph Gshare: The set L again consists of two cliques of size n1 = n2 = 4, with pink and
purple vertices. The purple vertices (one per clique) are shared between LM and GM. The set R (blue) contains
mr = 3 independent vertices. The global minimum consists of all vertices in R, together with the shared purple
vertices in L, giving mg = 5. In both cases, edges between the pink vertices in L and all vertices in R are
complete, though not all are shown for visual clarity.
These bipartite substructures represent two extreme regimes of connectivity and overlap between the critical
LM and the GM. One can view Gdis as a special case of Gshare in which no vertices are shared between LM and GM.
Note that Gdis is not classically hard on its own (for example, one can remove all LM vertices and directly identify
the GM). However, it may still arise as a substructure during a single iteration of the full algorithm, as discussed
in Section 10. We consider Gdis separately here for two reasons. First, in the Gdis case, the relevant blocks of
the Hamiltonian are decoupled, allowing the mechanisms of our algorithm to be simply illustrated. Second, it
highlights the core distinction introduced by shared vertices: namely, that a single shared vertex can enable block
coupling and necessitate an upper bound on Jxx to prevent an interference-involved block-level anti-crossing. We
note that for the TFQA case (Jxx = 0), we argue in the companion paper [8] that it is sufficient to analyze only
the disjoint-structure case (for the presence of the anti-crossing and the gap size).
Remark 3.3. Our reduction to bipartite substructures is not presented as a formal derivation, but rather as a
structural perspective informed by effective Hamiltonian techniques. Each individual anti-crossing is primarily
governed by the interaction between a set of critical local minima and the global minimum. This motivates a
stepwise reduction to low-energy effective Hamiltonians that couple the GM to one critical structure at a time—
yielding a bipartite substructure, with connectivity and shared vertices bounded by the Gshare worst case. While
14
a fully rigorous justification of this reduction is beyond the scope of this paper, our detailed analysis of the Gshare
instance, together with numerical examples involving multiple local minima in Section 10, provides supporting
evidence for the validity of this bipartite reduction perspective.
4
Revised DIC-DAC-DOA: System Hamiltonian and Annealing Schedule
In this section, we first recall the system Hamiltonian used in the original DIC-DAC-DOA algorithm, then in-
troduce the revised annealing schedule. The full evolution consists of an optional initialization phase (Stage 0),
followed by the two-stage algorithmic core (Stage 1 and Stage 2).
4.1
Recall: System Hamiltonian of DIC-DAC-DOA
The system acts on a Hilbert space of N spin-1
2 particles, one for each vertex of the problem graph G. The
Hamiltonian is expressed in terms of the spin operators Sx = 1
2σx and S˜z = ˜σz, where ˜σz denotes the shifted σz
operator. The system Hamiltonian is defined in terms of a time-dependent annealing schedule:
H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem,
where
HX = −
X
i
σx
i ,
HXX =
X
(i,j)∈E(Gdriver)
σx
i σx
j .
The problem Hamiltonian Hproblem, introduced in Section 2, is recalled here for completeness:
Hproblem =
X
i∈V(G)
(−wi) ˜σz
i + Jclique
zz
X
(i,j)∈E(Gdriver)
˜σz
i ˜σz
j + Jzz
X
(i,j)∈E(G)\E(Gdriver)
˜σz
i ˜σz
j .
The original annealing parameters evolve as: x(t) = (1 −t)Γ, jxx(t) = t(1 −t)Jxx, p(t) = t. We revise
DIC-DAC-DOA by modifying its annealing schedule, specifically the functions x(t), jxx(t), and p(t).
4.2
Revised Annealing Schedule: Stages 0, 1 and 2
We describe the system using two Hamiltonians: H0(t), defined for t ∈[0, 1], corresponding to the optional
Stage 0; and H1(t), also defined over t ∈[0, 1], governing the evolution during Stages 1 and 2.
There are three distinct values for the transverse field: Γ0 > Γ1 > Γ2. The three stages are distinguished
by the values of these Γ’s: the transverse field is reduced from Γ0 to Γ1 during Stage 0, from Γ1 to Γ2 during
Stage 1, and from Γ2 to 0 during Stage 2.
Stage 0 (optional).
This stage begins in the uniform superposition state and evolves adiabatically toward the
ground state of H0(1) = H1(0).
The Hamiltonian for Stage 0 is
H0(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem,
t ∈[0, 1],
with x(t) = (1 −t)(Γ0 −Γ1) + Γ1, jxx(t) = tJxx, p(t) = t. During this stage, the problem parameters wi,
Jclique
zz
, and Jzz are gradually ramped to their final values as p(t) increases from 0 to 1, while jxx(t) increases
linearly to its target value Jxx. If the ground state of H0(1) = H1(0) can be prepared directly—e.g., via quantum
hardware initialization— then Stage 0 may be omitted.
15
Main Stages 1 and 2 (algorithmic core).
The system Hamiltonian during the main two-stage evolution is
H1(t) = x(t)HX + jxx(t)HXX + Hproblem,
t ∈[0, 1].
The annealing schedule divides into two stages, determined by a structure-dependent parameter Γ2. We define
the transition time tsep := 1 −Γ2
Γ1 , so that Stage 1 corresponds to t ∈[0, tsep], and Stage 2 to t ∈[tsep, 1]. The
transverse field is given by x(t) = (1 −t)Γ1, which decreases linearly from Γ1 at t = 0 to Γ2 at t = tsep, and
then to 0 at t = 1. The XX-driver schedule is
jxx(t) =
(
Jxx
for t ∈[0, tsep] ⇔x ∈[Γ2, Γ1],
αx(t)
for t ∈[tsep, 1] ⇔x ∈[0, Γ2],
where Jxx = αΓ2. That is, jxx(t) remains constant during Stage 1, and both x(t) and jxx(t) ramp linearly to
zero during Stage 2. The parameter α controls the strength of the XX-coupling relative to the transverse field
during Stage 2. Its value is critical and will be determined analytically in Section 9.3.1.
Remark 4.1 (Time-Dependent Quantities and Font Convention). Note that jxx can be reparametrized as a
function of x5. Accordingly, the Hamiltonian H1 can be viewed as a function of the transverse field x, which
decreases monotonically with t. This reparametrization is particularly useful when describing multiple iterations
of the algorithm, in which each substructure evolves under a Hamiltonian governed by x. This dual viewpoint
allows us to parametrize the system either explicitly in t or implicitly in terms of x. When there is no ambiguity,
we often omit the parameter t and use text font (e.g., x, jxx) to indicate time dependence.
Parameter Values.
The values of Γ0, Γ1, Γ2, Jxx (α), and Jzz are critical to the success of the algorithm. In particular, suitable values
of Γs and Jzz are chosen in relation to the analytical bounds on Jxx, ensuring that the effective Hamiltonian sup-
ports the desired localization and interference behavior. These parameter choices will be specified and justified
in the subsequent analysis.
Remark 4.2. After Stage 0, the system evolves under the full Hamiltonian H1 defined above. However, our
analysis focuses on an effective Hamiltonian Heff
1 , Eq. (8.1), derived in Section 8. This effective Hamiltonian
captures the essential dynamics responsible for the algorithmic speedup and serves as the foundation for the
analytical and numerical investigations presented in the following sections.
5
Anti-Crossing and the Two-Level Hamiltonian B(w, x)
The term anti-crossing is sometimes used loosely, so we begin with a precise notion in the two-level case, then
extend it to multi-level systems, and finally classify different types of anti-crossing. We also introduce a canonical
two-level Hamiltonian whose eigensystem will be used throughout our analysis.
5Since x = (1 −t)Γ1 decreases monotonically from Γ1 to 0, each value t ∈[0, 1] corresponds uniquely to a transverse field value
x ∈[0, Γ1]. This one-to-one correspondence allows us to equivalently express the annealing schedule in terms of x rather than t, as
indicated by the domain annotations.
16
1
Γ0
x
w
Jxx
Γ2
0
Stage 0
Stage 2
Γ1
Jzz
Stage 1
Jclique
zz
Figure 4:
Annealing parameter schedule for the system Hamiltonian H(t) = x(t)HX + jxx(t)HXX +
p(t)Hproblem. Transverse field x(t) (blue) begins at Γ0, decreases to Γ1 in Stage 0, then to Γ2 in Stage 1, and
reaches 0 at the end of Stage 2. The XX-coupling strength jxx(t) (red) increases linearly to Jxx in Stage 0, re-
mains constant in Stage 1, and decreases to 0 in Stage 2. Vertex weights wi (orange) and ZZ-couplings Jzz, Jclique
zz
(green) ramp to final values in Stage 0 and remain fixed thereafter.
5.1
Level Repulsion vs. Anti-Crossing
We begin by distinguishing the concept of an anti-crossing (also called avoided-crossing) from level repulsion.
Consider a generic two-level Hamiltonian of the form
H(x) :=
e1(x)
v(x)
v(x)
e2(x)
,
where e1(x), e2(x), and v(x) are real-valued functions of a parameter x.
The eigenvalues of this Hamiltonian are λ±(x) =
e1(x)+e2(x)
2
± 1
2
p
(e1(x) −e2(x))2 + 4v(x)2, and the
energy gap between them is ∆(x) := λ+(x) −λ−(x) =
p
(e1(x) −e2(x))2 + 4v(x)2. The off-diagonal term
v(x) induces level repulsion: if v(x) ̸= 0, then the eigenvalues never cross, and the gap ∆(x) remains strictly
positive. Thus, assuming the off-diagonal coupling v(x) is nonzero, level repulsion is always present.
Definition 5.1. We say that an anti-crossing occurs when the two unperturbed energy levels e1(x) and e2(x)
cross, i.e., e1(x∗) = e2(x∗) for some x∗, and the off-diagonal coupling v(x∗) ̸= 0. In this case the eigenvalue
curves form an anti-crossing with gap ∆min = 2|v(x∗)|.
The size of the anti-crossing gap depends on |v(x∗)|: stronger coupling leads to a larger gap, while weaker
coupling results in a narrower one.
By contrast, if the two diagonal entries e1(x) and e2(x) remain well separated for all x, then the system
exhibits level repulsion but not an anti-crossing. Figure 6 illustrates an example of level repulsion without an
anti-crossing.
17
The eigenvectors of the two-level Hamiltonian are given by
|λ−(x)⟩= cos θ(x) |0⟩+ sin θ(x) |1⟩,
|λ+(x)⟩= −sin θ(x) |0⟩+ cos θ(x) |1⟩,
where the mixing angle θ(x) satisfies tan(2θ(x)) =
2v(x)
e1(x)−e2(x). Thus, near the anti-crossing point x = x∗, the
eigenstates interpolate between the unperturbed basis states.
Remark 5.2. The trigonometric expression for eigenvectors in terms of the mixing angle θ(x), is equivalent to
the rational-form representation
|λ−(x)⟩=
1
√
1+γ(x)2 (γ(x) |0⟩+ |1⟩) ,
|λ+(x)⟩=
1
√
1+γ(x)2 (|0⟩−γ(x) |1⟩) ,
where the two parametrizations are related by γ(x) =
1
tan θ(x), and
tan(2θ(x)) =
2v(x)
e1(x)−e2(x). This rational-
form expression is particularly useful for our analysis, as it aligns directly with the basic matrix form introduced
below.
Remark 5.3. While the explicit forms of the eigenvectors are not directly used in this paper, they are included
here for completion, and used in the companion paper [8] for bounding the anti-crossing gap.
Our earlier work [1, 15], including the development of the original DIC-DAC-DOA algorithm, was motivated
by investigating the structural characteristics of eigenstates around the anti-crossing.
5.1.1
Anti-crossing in a Multi-level System
In a multi-level system, the notion of an anti-crossing extends naturally by restricting the Hamiltonian to the two-
dimensional subspace spanned by the pair of eigenstates whose unperturbed energies intersect. This reduction
yields a 2 × 2 effective Hamiltonian that captures the essential structure of the anti-crossing, including both the
energy gap and the interpolating behavior of the eigenstates. Thus, the same framework as in the two-level case
applies.
With this perspective, we refine the definition of an (L, R)-anti-crossing given in [1]. Recall that EA
0 (t)
denotes the ground state energy of the Hamiltonian HA(t) projected to the subspace spanned by the subsets of
A. For simplicity, we will refer to this subpace as the subspace defined by A.
Definition 5.4. We say an anti-crossing is an (L, R)-anti-crossing at t∗if there exist bare energies EL
0 (t) and
ER
0 (t) such that:
1. EL
0 (t) and ER
0 (t) approximate the unperturbed energy levels of the effective 2 × 2 Hamiltonian describing
the anti-crossing for t ∈[t∗−δ, t∗+ δ] for some small δ > 0; and
2. EL
0 (t) and ER
0 (t) cross at t∗, i.e. EL
0 (t∗) = ER
0 (t∗).
See Figure 5 for an illustration.
In this sense, the anti-crossing is said to originate from the crossing of EL
0 (t) and ER
0 (t). The unperturbed
levels define the formal structure, while the bare energies provide concrete analytic proxies that allow us to
identify and study the anti-crossing without explicitly constructing the effective 2 × 2 Hamiltonian, which could
be challenging. In the companion paper [8], an effective 2 × 2 Hamiltonian is explicitly constructed to derive a
perturbative bound on the anti-crossing gap.
18
t*
ER
0 (t)
EL
0 (t)
Figure 5: Schematic of an (L, R)-anti-crossing. Dashed lines: bare energies EL
0 (t) and ER
0 (t) crossing at t∗.
Solid gray curves: the two lowest eigenvalues of the Hamiltonian, showing the avoided crossing that originates
from this bare crossing.
5.1.2
Types of Anti-Crossing
We classify anti-crossings according to whether the subspaces defined by L and R, when embedded in the full
Hilbert space, overlap or are disjoint.
Tunneling-Induced Anti-Crossing.
If the embedded subspaces of L and R overlap, then HL and HR are
submatrices of the same block in the Hilbert space decomposition. In this case, off-diagonal terms within the
block induce tunneling between the two configurations. We refer to this as a tunneling-induced anti-crossing.
Block-Level Anti-Crossing.
If the embedded subspaces of L and R are disjoint, then HL and HR belong to
two distinct blocks of the Hamiltonian. In this case, the ground states of the two blocks may become nearly
degenerate, leading to a true crossing if the blocks are decoupled, or to an avoided crossing if there is weak but
nonzero inter-block coupling. We refer to this as a block-level anti-crossing.
Figure 5 provides a generic schematic of an (L, R)-anti-crossing, whether tunneling-induced or block-level.
The interpretation depends on the relation between the subspaces defined by L and R. In the block-level case,
for example, the two levels correspond to the ground states of two distinct blocks—the same-sign block HL
(blue) and the opposite-sign block HR (red). A weak inter-block coupling lifts their degeneracy and produces an
avoided crossing. When the resulting gap is small, the system evolves almost as if the blocks were decoupled,
remaining adiabatically in the same-sign block and following the blue path.
The original double anti-crossing (DAC) observed in [1] is, in fact, a double block-level anti-crossing. Thus,
instead of a diabatic transition—from ground to excited to ground state—the evolution effectively remains con-
fined to the same-sign block, without any true inter-block transition. In this sense, the block-level anti-crossing
is dynamically bypassed.
There is also a more subtle variant, in which one of the two competing levels is not an eigenstate of a single
block, but a superposition of states from different blocks. In this case, the coupling is not merely perturbative; the
anti-crossing reflects quantum interference between blocks. We refer to this as an interference-involved block-
level anti-crossing, an example is shown in Figure 24.
19
5.2
The Basic Matrix B(w, x): Eigenvalues and Eigenstates
We define the following effective two-level Hamiltonian, which will serve as a basic building block for our
analysis throughout the paper:
B(w, x) :=
−w
−1
2x
−1
2x
0
,
(5.1)
where w = w(t) and x = x(t) are real-valued parameters, typically derived from problem Hamiltonians and
driver strengths. This is a special case of a spin-1
2 system, with analytic eigenstructure. The eigenvalues of
B(w, x) are
βk = −1
2
w + (−1)kp
w2 + x2
,
k = 0, 1,
(5.2)
with normalized eigenvectors
|β0⟩=
1
√
1+γ2
γ |0⟩+ |1⟩
,
|β1⟩=
1
√
1+γ2
|0⟩−γ |1⟩
,
(5.3)
where the mixing coefficient is γ =
x
w+
√
w2+x2 .
Remark 5.5. For w > 0, we use the standard basis |0⟩=
1, 0
T , |1⟩=
0, 1
T . For w < 0, the roles flip:
|0⟩(w < 0) = |1⟩(w > 0) and |1⟩(w < 0) = |0⟩(w > 0).
Figure 6 visualizes the eigenspectrum and ground state behavior under variation of x and w.
Figure 6: Eigenvalues of the two-level Hamiltonian B(w, x), where βk = −1
2
w + (−1)k√
w2 + x2
, k =
0, 1.
The ground state energy (β0) is shown in black, and the first excited state energy (β1) is shown in
blue.
The energy gap widens as x increases—there is no anti-crossing in this case.
The ground state is
|β0⟩=
1
√
1+γ2 (γ |0⟩+ |1⟩), with γ =
x
w+
√
w2+x2 . (a) w = 1;
(b) w = −1. For w > 0, we have |0⟩= |↑⟩and
|1⟩= |↓⟩as in (a). Notice that the contents of |0⟩and |1⟩swap for w < 0 as in (b).
6
A Single Clique: Decomposition and Low-Energy Subspace Analysis
In this section, we focus on analyzing the Hilbert space associated with a single clique. We begin, in Section 6.1,
by describing a natural decomposition of the Hilbert space into a low-energy subspace, consisting of independent-
20
set states, and a high-energy subspace, consisting of dependent-set states. We describe the angular momentum
structure and spectral properties of the low-energy subspace in Section 6.2.
To fix notation, let Gc = (V, E) be a clique, where V = {1, . . . , nc} is the set of vertices, and E = {(i, j) :
i < j, i, j ∈V } is the set of edges. We assume all vertices in the clique have the same weight wc, i.e., wi = wc
for all i ∈V . For clarity, we will omit the graph argument (Gc) in this section, with the understanding that all
quantities refer to this single clique.
6.1
Decomposition of the Clique Space: Low and High Energy Subspaces
The Hilbert space Vc for a clique of size nc consists of 2nc computational basis states, each corresponding to a
binary string of length nc. Among these, only nc + 1 basis states correspond to independent sets:
b0 = 00...0
b1 = 10...0
b2 = 01...0
...
bnc = 00...1
with Nind = {b0, b1, . . . , bnc} , where each bi is a binary string of length nc with a single 1 in position i (and
0s elsewhere), and b0 is the all-zero string.
The energy associated with each singleton state is −wc, while the empty set has energy 0. In contrast, the
energy of all other bit strings—which correspond to dependent sets—is at least (Jclique
zz
−2wc). By choosing
Jclique
zz
sufficiently large, these dependent-set states become energetically inaccessible. Hence, the Hilbert space
admits a natural decomposition:
Vc = Lind ⊕Hdep,
(6.1)
where Lind is the low-energy subspace spanned by Nind, and Hdep is the high-energy subspace spanned by the
dependent-set states.
The remainder of this section focuses on the analysis of the low-energy subspace Lind.
6.2
Angular-Momentum Decomposition of the Low-Energy Subspace
The low-energy subspace Lind of a clique can be naturally expressed in the total angular momentum basis. By
changing to this basis, Lind decomposes into a single effective spin- 1
2 (two-dimensional same-sign sector) to-
gether with (nc −1) spin-0 (one-dimensional opposite-sign) components. In this representation, the relevant
spin operators take a block-diagonal form, which induces a block-diagonal structure in the restricted Hamilto-
nian. This block-diagonal restricted Hamiltonian yields exact spectral formulas, explains the Jxx see-saw effect
between the same-sign and opposite-sign blocks, and extends naturally to the case of coupled subcliques, a key
step toward the global block decomposition of shared-structure graphs.
6.2.1
Low-Energy Subspace in Total Angular Momentum Basis Ba
21
Lemma 6.1. The total angular momentum basis for Lind consists of the states:
(Ba)
|s, −(s −1)⟩,
|1, (s −1), −(s −1)⟩,
. . .
|nc −1, (s −1), −(s −1)⟩
|s, −s⟩
(6.2)
where s = nc
2 is the total spin.
Explicitly:
• |s, −s⟩= |b0⟩, representing the empty set.
• |s, −(s −1)⟩=
1
√nc
Pnc
i=1 |bi⟩, a uniform superposition of all singletons with positive amplitudes.
• |k, s −1, −(s −1)⟩, for k = 1, . . . , nc −1, consists of a superposition of singleton states with both
positive and negative amplitudes.
Thus, |s, −s⟩and |s, −(s −1)⟩are same-sign states, while |k, s −1, −(s −1)⟩are opposite-sign states.
Proof. The total Hilbert space of nc spin-1
2 particles decomposes into irreducible representations of total spin
s = nc
2 , followed by smaller spin sectors (s −1, s −2, . . .). According to the Clebsch–Gordan decomposition of
nc spin- 1
2 particles, we have:
1
2 ⊗1
2 ⊗. . . ⊗1
2
|
{z
}
nc
= nc
2 ⊕nc
2 −1 ⊕. . . ⊕nc
2 −1
|
{z
}
nc−1
⊕. . .
This decomposition is a standard result in angular momentum theory and provides a natural organization
of basis states according to their total spin quantum numbers. The key observation is that the independent-set
states reside in the lowest-weight subspace, corresponding to the smallest ms values within these spin multiplets.
Specifically, the independent-set states are spanned by the lowest-weight states (ms = −s, −(s −1)) in the
spin-s multiplet and the last (ms = −(s −1)) states in spin-(s −1) multiplets. This yields the following basis:
(Ba)
|s, −(s −1)⟩,
|1, (s −1), −(s −1)⟩,
. . .
|nc −1, (s −1), −(s −1)⟩
|s, −s⟩
One can check that: |s, −(s −1)⟩=
1
√nc
Pnc
i=1 |bi⟩,
|s, −s⟩= |b0⟩.
For k = 1, . . . , nc −1, let |k, s −1, −(s −1)⟩denote the last (ms = −(s −1)) state of spin-(s −1).
Since these states share the same ms value, they must be orthogonal to |s, −(s −1)⟩: |k, s −1, −(s −1)⟩=
Pnc
i=1 aki |bi⟩, where aki are Clebsch–Gordan coefficients.
For each k ≥1, at least one coefficient ak,j < 0 for some j, indicating that the superposition contains both
positive and negative amplitudes. Hence, these states are opposite-sign.
Remark (Basis Reordering).
For convenience, we reorder the basis states in Ba as follows:
(B′
a)
|s, −(s −1)⟩, |s, −s⟩, |1, s −1, −(s −1)⟩, . . . , |nc −1, s −1, −(s −1)⟩.
That is, we swap the order of the two same-sign basis states. This ordering simplifies the representation of
operators in the next steps.
22
Basis Transformation.
The transformation between the computational basis {|bi⟩} and the angular momen-
tum basis (B′
a) can be derived either from the Clebsch–Gordan coefficients or directly from the relationships
established in the proof. Specifically:
• The state |s, −(s −1)⟩is a uniform superposition over all singleton states {|bi⟩}nc
i=1.
• The remaining states |k, s −1, −(s −1)⟩, for k = 1, . . . , nc −1, form an orthogonal complement to
|s, −(s −1)⟩within the subspace spanned by {|b1⟩, . . . , |bnc⟩}.
We denote the basis transformation matrix from the computational basis to the angular momentum basis by
Uclique.
6.2.2
Spin Operators in the B′
a Basis
Consider the spin operators on Vc:
SZ = 1
2
nc
X
i=1
σz
i ,
S˜Z =
nc
X
i=1
˜σz
i ,
SX = 1
2
nc
X
i=1
σx
i ,
SXX = 1
4
X
ij∈E(Gdriver)
σx
i σx
j ,
where Gdriver = Gc.
We project these operators onto the low-energy subspace Lind using the projection operator Πind, and then
transform them into the B′
a basis via the basis transformation Uclique. For any operator X, we use a bar to denote
the operator:
X = Uclique†ΠindXΠindUclique.
Theorem 6.2. The restricted operators in the B′
a basis are block-diagonal and given explicitly by:
S˜Z = ˜σz ⊕1 ⊕· · · ⊕1,
SX =
√nc
2 σx ⊕0 ⊕· · · ⊕0,
SXX =
nc−1
4
˜σz ⊕
−1
4
⊕· · · ⊕
−1
4
.
(6.3)
where ˜σz and σx act on the effective spin-1
2 (two-dimensional same-sign) subspace, while the scalars act on
spin-0 (one-dimensional opposite-sign) subspaces.
Proof. Recall the reordered basis B′
a is: {|s, −(s−1)⟩,
|s, −s⟩,
|1, s−1, −(s−1)⟩, . . . , |nc −1, s −1, −(s −1)⟩}.
In standard angular momentum theory, the eigenvalues of SZ for these states are:
• |s, −(s −1)⟩: eigenvalue ms = −(s −1)
• |s, −s⟩: eigenvalue −s
• |k, s −1, −(s −1)⟩: eigenvalue −(s −1) for all k
Thus:
SZ =
−(s −1)
0
0
· · ·
0
0
−s
0
· · ·
0
0
0
−(s −1)
· · ·
0
...
...
...
...
0
· · ·
0
· · ·
−(s −1)
23
and the shifted operator S˜Z becomes:
S˜Z =
1
0
0
· · ·
0
0
0
0
· · ·
0
0
0
1
· · ·
0
...
...
...
...
0
· · ·
0
· · ·
1
= ˜σz ⊕1 ⊕· · · ⊕1.
To obtain SX, recall: SX = 1
2(S+ + S−), where S+ and S−are the (raising and lowering) ladder operators.
From angular momentum algebra:
⟨s, −(s −1)|S+|s, −s⟩= ⟨s, −s|S−|s, −(s −1)⟩= √nc,
thus: SX =
√nc
2 σx ⊕0 ⊕· · · ⊕0.
Since S2
+ = S2
−= 0 on Lind, it follows: S2
X = 1
4(S+S−+ S−S+), and:
S2
X |s, −s⟩= nc
4 |s, −s⟩,
S2
X |s, −(s −1)⟩= 3nc−2
4
|s, −(s −1)⟩,
S2
X |s −1, −(s −1)⟩= nc−2
4
|s −1, −(s −1)⟩.
Using: SXX = 1
2(4S2
X −nc), we conclude: SXX =
nc−1
4
˜σz ⊕
−1
4
⊕· · · ⊕
−1
4
.
6.2.3
Decomposition into Spin-1
2 and Spin-0 Components
The transformed operators given by Theorem 6.2 offer a clear physical interpretation: the low-energy subspace
Lind decomposes into a direct sum consisting of a single effective spin-1
2 subsystem, together with nc −1
(opposite-sign) spin-0 subsystems:
Lind =
1
2
nc ⊕0 ⊕· · · ⊕0
|
{z
}
nc−1
.
(6.4)
The effective spin-1
2 subsystem
1
2
nc is spanned by the two same-sign basis states:
1
2, −1
2
= |s, −(s −
1)⟩,
1
2, 1
2
= |s, −s⟩. The spin-0 components correspond to the opposite-sign basis vectors: |k, s −1, −(s −
1)⟩, for k = 1, . . . , nc −1.
Correspondingly, the Hamiltonian decomposes into a same-sign two-dimensional effective spin-1
2 block and
nc −1 opposite-sign spin-0 blocks. The system Hamiltonian H1, which governs the evolution during Stages 1
and 2, when restricted to the low-energy subspace Lind of the clique Gc, takes the form
H1 = −x SX + jxx SXX −wc S˜Z.
Substituting the operator expressions from Eq. 6.3 in Theorem 6.2, we obtain
H1 =
−
√nc
2
x σx +
−wc + nc−1
4
jxx
˜σz
⊕
−
wc + 1
4jxx
⊕· · · ⊕
−
wc + 1
4jxx
|
{z
}
nc−1
(6.5)
= B(weff
c , √ncx) ⊕[−(wc + 1
4jxx)] ⊕· · · ⊕[−(wc + 1
4jxx)]
|
{z
}
nc−1
(6.6)
where the effective weight is defined as weff
c = wc −nc−1
4
jxx.
An illustration of this basis transformation is provided in Figure 7, which shows how the original product
basis is transformed into a direct-sum decomposition via total angular momentum.
24
basis change
an effective spin-1/2 block
(same-sign)
0-spin blocks
(opposite-sign)
b0
b0
bn
bn
b1
b1
c0 c1 o1
on-1
Figure 7: Basis transformation of Lind from the product space {b0, b1, . . . , bn} to the direct-sum space (angu-
lar momentum basis). The same-sign states {c0, c1} are the basis of the effective spin-1
2 subspace, while the
opposite-sign states {o1, . . . , on−1} are the bases of the spin-0 subspaces. The Hamiltonian matrix decomposes
accordingly into a same-sign block and n −1 opposite-sign blocks.
6.2.4
Full Spectrum and the Jxx See-Saw Effect
Since the eigensystem of B(w, x) is known analytically (Eqs. (5.2), (5.3)), the full spectrum and eigenstates of
the Hamiltonian H1 for the single clique Gc are also known analytically. In particular, the eigenvalues of the
same-sign block B(weff
c , √ncx) are:
βk = −1
2
weff
c + (−1)kp
[weff
c ]2 + nc[x]2
,
k = 0, 1.
(6.7)
Note that β0 and β1 are time-dependent through x(t).
Analytic Spectrum and the See-Saw Effect.
The spectrum of the reduced Hamiltonian H1 consists of three
distinct eigenvalues: {β0, β1, θ} , where {β0, β1} arise from the effective spin- 1
2 (same-sign) block and θ =
−
wc + 1
4jxx
is the eigenvalue shared by the (nc−1) degenerate spin-0 (opposite-sign) blocks; see Figure 8(a).
We now examine how this spectrum changes with the strength Jxx. Let Jxx = αΓ2 and jxx = αx, with
α > 0. Then the effective weight weff
c (x) = wc −nc−1
4
jxx crosses zero at the critical point x0 =
4wc
α(nc−1). We
thus have
weff
c (x) > 0
for x < x0,
weff
c (x) = 0
at x = x0,
weff
c (x) < 0
for x > x0,
At this point, the sign of weff
c changes, and the roles of |0⟩and |1⟩in the ground state eigenvector exchange. For
x > x0, the ground state energy of the same-sign block approximates β0(x) ≈−1
2α x, so its slope with respect to
x is approximately −1
2α, independent of nc. In contrast, when Jxx = 0 (i.e., α = 0), the same energy behaves as
β0(x) ≈−1
2
wc + √ncx
. As Jxx increases, the magnitude of this slope decreases—from approximately √nc
down to 1
α—flattening the spectral curve; see Figure 8(c).
Crossover point xc.
The crossing point xc at which β0(xc) = θ(xc) satisfies
xc =
4α
4−α2 .
(6.8)
To ensure that xc ≤Γ2, we must restrict α to satisfy
α < αmax(Γ2) := −2+2√
1+Γ2
2
Γ2
,
(6.9)
25
At x = xc, the ground state switches from the same-sign subspace to the opposite-sign subspace. This
transition is a key feature of the algorithm and motivates the design of the two-stage schedule. The behavior of
this crossing is shown in Figure 8(b), and its dependence on α across the full schedule is illustrated in Figure 10.
This interplay—the rising of the same-sign energies and simultaneous lowering of the opposite-sign energy as
Jxx increases—manifests as a see-saw effect, shown in Figure 8(d).
6.2.5
Transformation via Two Coupled Subcliques
In this section, we reinterpret a single clique as two coupled subcliques and derive the transformation needed to
express the partial clique ZZ-coupling.
Consider a clique c composed of two subcliques a and b, with nc = na + nb. We apply the single-clique
analysis from Eq. (6.4) independently to each subclique, yielding:
La =
1
2
na ⊕0 ⊕· · · ⊕0
|
{z
}
na−1
,
Lb =
1
2
nb ⊕0 ⊕· · · ⊕0
|
{z
}
nb−1
.
The low-energy subspace Lc of clique c is the restricted subspace within the tensor product of La and Lb.
Projecting onto the low-energy sector of two spin- 1
2 systems yields an effective spin-1
2 for the full clique, along
with an additional spin-0 component:
Πind 1
2
na ⊗
1
2
nb
Πind =
1
2
nc ⊕0.
We now describe the corresponding Hamiltonians. For simplicity, we drop the time parameter (t).
The Hamiltonians for the effective spin-1
2 block are explicitly:
Ba =
−
w −na−1
4
jxx
−
√na
2 x
−
√na
2 x
0
,
Bb =
−
w −nb−1
4
jxx
−
√nb
2 x
−
√nb
2 x
0
.
The combined low-energy Hamiltonian is:
Dc = Πind (Ba ⊗Ib + Ia ⊗Bb + HXX) Πind,
which takes the explicit 3 × 3 form:
Dc =
−
w −na−1
4
jxx
−
√na
2 x
√nanb
4
jxx
−
√na
2 x
0
−
√nb
2 x
√nanb
4
jxx
−
√nb
2 x
−
w −nb−1
4
jxx
.
To recover the effective spin-1
2 plus zero-spin structure, we apply the basis transformation:
Umerge =
q
na
nc
0
−
q
nb
nc
0
1
0
q
nb
nc
0
q
na
nc
,
26
(a)
(b)
(c)
(d)
Figure 8: Effect of Jxx on the eigenvalues of a single clique, where Jxx = αΓ2. We compare the case α = 0 with
α = 0.7.
Each clique has three eigenvalues: {β0(x), β1(x)} from the effective spin-1
2 subspace, and θ(x) from the spin-0
state. The x-axis is the transverse field x, which decreases from Γ2 = 1 to 0. Note: jxx = αx. [wc = 1, nc = 9]
(a) α = 0: β0(x), β1(x) in solid black; θ(x) in dashed black.
(b) α = 0.7: β0(x), β1(x) in solid blue; θ(x) in dashed red. The green dashed line marks the crossing point
xc =
4α
4−α2 where β0(xc) = θ(xc).
(c) Slope magnitude reduced: as Jxx increases from 0 to 0.7, the same-sign energies β0 and β1 rise (black
to blue). The magnitude of the negative slope decreases; that is, the curve becomes less steep, flattening from
approximately −√nc (black dotdashed) to −1
α (blue dotdashed).
(d) See-saw effect: increasing Jxx simultaneously raises the same-sign energies (black to blue) and lowers the
opposite-sign energy θ(x) (black dashed to red dashed), in a see-saw-like fashion.
27
where the columns correspond to the transformed basis states:
|1c⟩=
q
na
nc |1a0b⟩+
q
nb
nc |0a1b⟩,
|0c⟩= |0a0b⟩,
|⊙q⟩= −
q
nb
nc |1a0b⟩+
q
na
nc |0a1b⟩.
In this basis, the Hamiltonian simplifies to:
U†
mergeDcUmerge =
−
w −nc−1
4
jxx
−
√nc
2 x
0
−
√nc
2 x
0
0
0
0
−
w + 1
4jxx
,
which clearly isolates the effective spin- 1
2 subsystem and the spin-0 subsystem, as described at the outset of this
section.
Remark 6.3. If the vertex weights are not exactly the same, their difference will appear in the off-diagonal,
weakly coupling the two blocks. This is the case in our original example in [1], where the LM is almost degenerate.
The weight difference is so small (considered as almost-degenerate) that one can treat the blocks as if they were
disjoint.
Partial clique ZZ-coupling.
Suppose an external spin r ZZ-couples only to subclique a (and not to subclique
b), described initially by the operator: (Pna
i=1 ˜σz
i ) ˜σz
r. Applying the single-clique result to subclique a, this sim-
plifies to: ˜σz
a˜σz
r.
Now, under the transformation Umerge, we explicitly obtain:
U†
merge˜σz
aUmerge =
na
nc
0
−
√nanb
nc
0
0
0
−
√nanb
nc
0
nb
nc
,
demonstrating clearly the induced coupling between the same-sign two-dimensional subsystem and the zero-spin
subsystem. The coupling strength is explicitly given by: −
√nanb
nc
.
In particular, we denote the transformed operator for na = nc −1 and nb = 1 by
T =
nc −1
nc
0
−
√nc −1
nc
0
0
0
−
√nc −1
nc
0
1
nc
(6.10)
which will be used in our later analysis.
7
Analytical Solution for the Bare Subsystem (dMIC)
We define the bare subsystem as the system restricted to a given dMIC, that is, a subgraph consisting of indepen-
dent cliques {Clique(wi, ni)}k
i=1. A dMIC defines either a critical local minima (LM), or the global minimum (GM)
28
where each clique has size one. It is a subsystem because we consider only the subgraph induced by this dMIC,
and it is bare because it is fully decoupled from the rest of the graph.
We refer to the bare subsystem induced by the set of cliques in L (generating the local minima LM) as the
L-bare subsystem, denoted by Hbare
L
. Similarly, the bare subsystem induced by the subset R ⊆GM is called the
R-bare subsystem, denoted by Hbare
R
, and referred to as the GM-bare subsystem when R = GM. In our analysis in
later sections, we will express the total Hamiltonian in terms of the L- and R-bare subsystems together with their
coupling. Throughout, we distinguish bare subsystem operators from their full-system counterparts by using the
superscript bare; the same symbol without the superscript refers to the corresponding operator in the full system.
The bare subsystem admits a fully analytical treatment due to its tensor-product structure across independent
cliques. Our analysis proceeds in three steps, and the resulting closed-form solutions will later be used to derive
analytical bounds on Jxx.
• First, we decompose the low-energy subspace of the bare subsystem into three types of sectors: the same-
sign sector Cbare, the all-spin-zero (AS0) opposite-sign sector Qbare, and the intermediate opposite-sign
sectors. We define corresponding Hamiltonians for each: HCbare, HQbare, and HWbare, where Wbare denotes
a typical intermediate sector.
• Second, we derive a closed-form solution for the same-sign block HCbare, including explicit expressions
for its eigenvalues and eigenstates. In the case of uniform clique sizes, we perform a symmetric-subspace
reduction based on angular momentum structure, reducing the Hilbert space dimension from 2m to m + 1.
• Third, the ground-state energies of the blocks are totally ordered. Depending on the relative energy of the
spin-1
2 and spin-0 components, the ordering takes one of two forms: either the same-sign sector has the
lowest energy and the AS0 opposite-sign sector the highest, or the order is reversed.
7.1
Low-Energy Structure of the Bare Subsystem: Same-Sign and Opposite-Sign Sectors
Let Gmic consist of m independent cliques Clique(wi, ni), where wi ≡1 denotes the vertex weight and ni the
size of the ith clique. This graph admits Qm
i=1 ni degenerate maximal independent sets of size m.
From the single-clique result (Eq. (6.4)), the low-energy subspace of each clique decomposes as:
Li =
1
2
ni ⊕0 ⊕· · · ⊕0
|
{z
}
ni−1
.
Hence, the total low-energy subspace L of the bare subsystem is given by:
L =
m
O
i=1
Li =
m
O
i=1
1
2
ni ⊕0 ⊕· · · ⊕0
|
{z
}
ni−1
=
M m
O
i=1
1
2 or 0
!
,
where the direct sum ranges over all tensor products selecting one spin-1
2 and ni −1 spin-zeros from each clique.
This decomposition yields Qm
i=1 ni block subspaces. Among them, the block
Cbare :=
m
O
i=1
1
2
ni
is composed entirely of spin-1
2 factors and is referred to as the same-sign sector.
29
At the opposite extreme, there are Qm
i=1(ni −1) blocks composed entirely of spin-0 factors:
Qbare :=
m
O
i=1
0,
referred to as the all-spin-zero (AS0) opposite-sign sector.
The remaining opposite-sign blocks, each containing a mixture of spin- 1
2 and spin-0 components, are called
the intermediate sectors. Let Wbare denote one such intermediate sector.
We denote the restriction of the Hamiltonian H1 to each sector by:
HCbare := projection of H1 to Cbare,
HQbare := projection of H1 to Qbare,
HWbare := projection of H1 to Wbare.
The same-sign block Hamiltonian is given by:
HCbare =
m
X
i=1
Bi (weff
i , √ni x) ,
(7.1)
where weff
i =
wi −ni−1
4 jxx
, and each Bi is a two-level block operator defined as:
Bi = I2 ⊗· · · ⊗
B
|{z}
ith position
⊗· · · ⊗I2,
with B being the two-level Hamiltonian from Eq. 5.1, and I2 the 2 × 2 identity matrix.
The AS0 opposite-sign block Hamiltonian is:
HQbare =
m
X
i=1
−
wi + 1
4jxx
,
which evaluates to a single-value offset.
The intermediate block Hamiltonian HWbare can be expressed in terms of a same-sign block Hamiltonian with
fewer active Bi terms, accompanied by an appropriate energy shift.
In the following, we present a closed-form solution for the same-sign block; analogous expressions for the
intermediate blocks follow similarly.
7.2
Closed-Form Solution of the Same-Sign Block HCbare
Since each Bi in HCbare (in Eq. (7.1)) acts on a disjoint spin-1
2 subsystem, the full Hamiltonian HCbare is frustration-
free. Its eigenstates are tensor products of the eigenstates of the individual Bi, and its eigenvalues are additive:
HCbare |ψ1⟩⊗· · · ⊗|ψm⟩=
m
X
i=1
β(i)
ki
!
|ψ1⟩⊗· · · ⊗|ψm⟩,
where |ψi⟩∈{|β(i)
0 ⟩, |β(i)
1 ⟩} are the eigenstates of the ith two-level block Bi, and β(i)
k
are the corresponding
eigenvalues (Eqs. (5.2), (5.3)).
As a result, the full spectrum and eigensystem of the same-sign block HCbare are exactly solvable.
30
Theorem 7.1 (Exact Spectrum and Ground State of the Same-Sign Block). Let HCbare be given as in
Eq. (7.1). Then all eigenvalues and eigenstates of HCbare are exactly solvable. Each eigenstate is indexed by
a bit string z = z1z2 . . . zm ∈{0, 1}m, with:
(
Ez
= Pm
i=1 β(i)
zi ,
|Ez⟩
= Nm
i=1 |β(i)
zi ⟩,
(7.2)
where β(i)
k
and |β(i)
k ⟩, for k = 0, 1, are the eigenvalues and eigenvectors of the local two-level subsystem
Bi, given by:
β(i)
k
= −1
2
w
eff
i + (−1)k
q
w
eff
i
2 + ni x2
,
|β(i)
0 ⟩
=
1
√
1+γ2
i
(γi |0⟩i + |1⟩i) ,
|β(i)
1 ⟩=
1
√
1+γ2
i
(|0⟩i −γi |1⟩i) ,
with mixing ratio γi =
√ni x
weff
i +
q
[weff
i ]2+ni x2 .
In particular, the ground state corresponds to the all-zero bit string z = 0 . . . 0, and is given by:
|E0⟩
= Nm
i=1 |β(i)
0 ⟩,
E0
= Pm
i=1 β(i)
0
= −1
2
Pm
i=1
w
eff
i +
q
w
eff
i
2 + ni x2
.
7.3
Uniform Clique Size: Symmetric-Subspace Reduction of the Same-Sign Block
In the case when all cliques have the same size ni = nc, one can further reduce the same-sign block. The same-
sign block HCbare, defined on m effective spin-1
2 subsystems, acts on a Hilbert space of dimension 2m. Using the
same technique of transforming to the total angular momentum basis, this space decomposes into a direct sum of
angular momentum sectors. In particular,
1
2
⊗m
nc
= 1
2 ⊗· · · ⊗1
2
|
{z
}
m
= m
2 ⊕
m
2 −1
⊕· · · ⊕
m
2 −1
|
{z
}
m−1
⊕· · ·
Let s = m
2 . The highest-spin sector s corresponds to the fully symmetric subspace. Its basis vectors |s, ms⟩
are uniform superpositions over computational basis states with fixed Hamming weight, as illustrated in Table 7.3.
|s, s⟩= |11 . . . 1⟩
(Hamming weight m)
|s, s −1⟩=
1
√m (|01 . . . 1⟩+ · · · + |11 . . . 0⟩)
...
|s, −(s −1)⟩=
1
√m (|10 . . . 0⟩+ · · · + |00 . . . 1⟩)
|s, −s⟩= |00 . . . 0⟩
(Hamming weight 0)
(7.3)
Each |s, ms⟩is supported on all bitstrings of Hamming weight s + ms. We interpret 1 as spin-up (included
vertex), and 0 as spin-down (excluded vertex).
31
Operators in the Symmetric Subspace.
Within the spin-s = m
2 subspace, the operators SZ, S˜Z, and SX reduce
to:
CSZ(m) =
s
0
· · ·
0
0
s −1
· · ·
0
...
...
...
...
0
0
· · ·
−s
,
CS˜Z(m) = CSZ(m) + m
2 I =
m
0
· · ·
0
0
m −1
· · ·
0
...
...
...
...
0
0
· · ·
0
.
(7.4)
Recall that the transverse operator SX = 1
2(S++S−) where S± |s, ms⟩=
p
s(s + 1) −ms(ms ± 1) |s, ms ± 1⟩.
Letting ms = s −a, where a counts the number of spin flips downward, for a = 0, 1, . . . , 2s, this gives:
⟨s, ms|SX|s, ms −1⟩= 1
2
p
(m −a)(a + 1), where m = 2s is the total number of spins.
Hence,
CSX(m) =
0
√m
2
0
· · ·
0
√m
2
0
√
2(m−1)
2
· · ·
0
0
√
2(m−1)
2
0
· · ·
0
...
...
...
...
√m
2
0
0
0
√m
2
0
.
(7.5)
One can again use CG transformation to obtain the restricted same-sign block to the symmetric subspace,
which is also known as spanned by Dicke states and can be obtained directly through the explicit transformation.
Explicit Transformation to the Symmetric Subspace.
The symmetric subspace of m spin- 1
2 particles corre-
sponds to the totally symmetric sector of the full Hilbert space (C2)⊗m. This subspace has dimension m + 1,
and is spanned by the Dicke states, which are uniform superpositions over computational basis states with fixed
Hamming weight.
Dicke Basis.
For each k = 0, 1, . . . , m, define the Dicke state: |s, ms = k −m
2 ⟩=
1
q
(m
k)
P
⃗z∈{0,1}m
Hamming weight=k
|⃗z⟩.
These form an orthonormal basis for the symmetric subspace, where s = m
2 and ms = −s, . . . , s.
Transformation Matrix.
Let UDicke ∈C2m×(m+1) denote the matrix whose columns are the normalized Dicke
states (embedded in the full space). Then U†
DickeUDicke = Im+1, and the following operator reductions hold:
CS˜Z(m)
= U†
Dicke S˜Z(m) UDicke,
CSX(m)
= U†
Dicke SX(m) UDicke,
Hsym
Cbare
= U†
Dicke HCbare UDicke = −√nc x CSX(m) −weff
c CS˜Z(m).
Theorem 7.2 (Reduction to the Symmetric Subspace via Dicke Basis). Let HCbare ∈R2m×2m be the same-sign
block Hamiltonian acting on m effective spin-1
2 subsystems, each corresponding to a clique of size nc. Then its
restriction to the symmetric subspace is given by:
Hsym
Cbare = −√nc x CSX(m) −weff
c CS˜Z(m),
where weff
c =
wc −nc−1
4
jxx
.
32
As a direct consequence, the symmetric-subspace Hamiltonian Hsym
Cbare is a special tridiagonal matrix with
reflective off-diagonal entries. This structure admits a closed-form solution for the eigensystem.
Corollary 7.3. The eigensystem of the following tridiagonal matrix:
Mm(w, x) =
−mw
−
√m
2 x
0
· · ·
0
−
√m
2 x
−(m −1)w
−
√
2(m−1)
2
x
· · ·
0
0
−
√
2(m−1)
2
x
−(m −2)w
· · ·
0
...
...
...
...
−
√m
2 x
0
0
0
−
√m
2 x
0
(7.6)
is given in closed form by: λms = −sw + ms
√
w2 + x2,
where s = m
2 , ms = −s, −s + 1, . . . , s. In
particular, the lowest eigenvalue occurs at ms = −s, yielding: λmin = −s
w +
√
w2 + x2
.
To the best of our knowledge, this closed-form eigensystem for Mm(w, x) has not been known for general
m > 1.
Figure 9 illustrates the three eigenvalues of the same-sign block in the symmetric subspace Hsym
Cbare, along with
the AS0 energy level of HQbare, under different values of Jxx. The plots reveal a clear see-saw effect: as Jxx
increases, the same-sign energy levels are lifted upward, while the AS0 energy level is lowered.
Figure 9: Energy spectrum of a bare subsystem (dMIC) under different values of Jxx = αΓ2, with m = 2,
nc = 9. We set Γ2 = m and Γ1 = 2Γ2. The coupling schedule is piecewise-defined: jxx = Jxx for x ≥Γ2, and
jxx = αx for x < Γ2.
(a) α = 0: three eigenvalues of the symmetric-subspace same-sign block Hsym
Cbare are shown in black; the energy
of the AS0 block HQbare is shown as a black dashed line.
(b) α = 1: same-sign eigenvalues are shown in blue; the AS0 energy is shown as a red dashed line.
The see-saw effect is evident: the same-sign energies (black) rise with Jxx, while the AS0 energy (black dashed)
drops.
33
7.4
Block Energy Ordering
Since the ground state of each block is a tensor product of the eigenstates of the individual i-spins, and the
corresponding eigenvalues are additive, the ground-state energy of each block is simply the sum of its local
ground-state energies. Each block contains exactly m spins (either spin-1
2 or spin-0), so the blocks can be directly
compared based on their composition. As a result, the ground-state energies of the blocks are totally ordered.
If the ground-state energy of the spin- 1
2 component, denoted β(i)
0 , is lower than that of the spin-0 component,
denoted θ(i), then the same-sign block has the lowest energy and the AS0 opposite-sign block the highest. The
ordering is reversed if β(i)
0
> θ(i). The relative ordering of β(i)
0
and θ(i) switches at the crossing point xc =
4α
4−α2
(see Eq. (6.8) in Section 6), as illustrated in Figure 10.
(a)
(b)
Figure 10: Two-stage eigenvalue evolution for a single clique under different values of Jxx = αΓ2, with Γ2 = 1,
Γ1 = 2, wc = 1, and nc = 9. The coupling schedule is piecewise-defined: jxx = Jxx for x ≥Γ2, and jxx = αx
for x < Γ2. (a) α = 0.7 < αmax(Γ2): the crossing occurs during Stage 2. (b) α = 1.2 > αmax(Γ2): the crossing
occurs during Stage 1. Same-sign eigenvalues β0, β1 are shown as solid lines; the opposite-sign eigenvalue θ is
dashed. The dashed green vertical line indicates the crossing point xc =
4α
4−α2 .
Thus, to determine the overall block ordering, it suffices to compare the two extreme cases: the same-sign
block HCbare and the AS0 opposite-sign block HQbare. In the case Jxx = 0, i.e., α = 0, we have β(i)
0
< θ(i) for all
x ∈[0, Γ1], so the same-sign block remains lowest in energy throughout the schedule. In contrast, when α > 0,
and in particular when α < αmax(Γ2), the spectrum exhibits a transition at xc: for x > xc, β(i)
0 (x) < θ(i)(x);
at x = xc, β(i)
0
= θ(i); and for x < xc, β(i)
0
> θ(i). Thus, the energy ordering reverses after xc, and the AS0
opposite-sign block becomes the lowest in energy. This reversal is illustrated in Figure 11.
8
Analysis of Stage 0
Recall that the Hamiltonian for Stage 0 is
H0(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem,
t ∈[0, 1],
34
Figure 11: Block energy ordering for m = 2, nc = 9, with Γ2 = m. Same-sign block energy in blue; intermediate
opposite-sign block in brown; AS0 block in red dashed. (a) α = 1.0 < αmax(Γ2): the crossing occurs during
Stage 2, after which the AS0 block becomes lowest in energy. (b) α = 1.5 > αmax(Γ2): the crossing occurs
during Stage 1, and the AS0 block becomes lowest earlier in the evolution.
with x(t) = (1 −t)(Γ0 −Γ1) + Γ1, jxx(t) = tJxx, p(t) = t. During this phase, the problem parameters wi,
Jclique
zz
, and Jzz are gradually ramped to their final values as p(t) increases from 0 to 1, while jxx(t) increases
linearly to its target value Jxx.
Definition 8.1. Given a Hamiltonian H acting on a Hilbert space V, and a subspace L ⊂V with orthogonal
projector ΠL, we define the restricted Hamiltonian on L as H|L := ΠLHΠL.
The main goal of this section is to derive the effective Hamiltonian at the end of Stage 0. The foundational
idea follows [10], which presents a systematic approach to constrained Hamiltonians. The Projection Lemma in
[10] shows that when a large penalty is assigned to a subspace, the lowest eigenvalues of the full Hamiltonian
are close to those of the restricted Hamiltonian. To improve this approximation, [10] further derives an effec-
tive Hamiltonian by incorporating perturbative corrections from the high-energy subspace. We adopt a similar
approach: by assigning a large penalty Jclique
zz
to states involving edges within a clique, we energetically sup-
press their contribution and restrict the evolution to a reduced subspace. We then apply perturbation theory to
characterize the resulting effective Hamiltonian.
In the following, we first introduce a decomposition of the Hilbert space based on energy penalties from the
problem Hamiltonian. We then derive the effective Hamiltonian and argue that the spectral gap remains large
throughout Stage 0, justifying the adiabatic elimination of high-energy states.
8.1
Decomposition of the Hilbert Space
Recall the MIS-Ising problem Hamiltonian defined in Eq. 2.3:
Hproblem =
X
i∈V(G)
(−wi)˜σz
i + Jclique
zz
X
(i,j)∈E(Gdriver)
˜σz
i ˜σz
j + Jzz
X
(i,j)∈E(G)\E(Gdriver)
˜σz
i ˜σz
j .
The vertex set of the graph is V(G) = VL ∪R, where VL is the union of all cliques in L and R is the set of
vertices outside these cliques. The corresponding Hilbert space factors as V = VL ⊗VR, where VL is the Hilbert
space of all vertices in VL and VR is that of the vertices in R.
35
The parameter Jclique
zz
penalizes states that include occupied edges within a clique. By setting Jclique
zz
suffi-
ciently large, the Hilbert space VL separates into a low-energy subspace (spanned by all independent-set states
within L) and a high-energy subspace (spanned by all dependent-set states within L).
We define
L−:= (low-energy subspace of VL) ⊗VR,
L+ := (high-energy subspace of VL) ⊗VR.
Here L+ corresponds to high-energy states—each such state containing at least one intra-clique edge incurring
the Jclique
zz
penalty.
Let Π−and Π+ denote the orthogonal projectors onto L−and L+, respectively. With respect to this decom-
position, the system Hamiltonian at the end of Stage 0 can be written in block form:
H0(1) =
Hlow
V
V†
Hhigh
,
with Hlow = Π−H0(1)Π−, Hhigh = Π+H0(1)Π+. At the end of Stage 0, the full Hamiltonian takes the form
H0(1) =
Γ1HX + JxxHXX + Hlow
prob
+ Hhigh
prob,
where Hproblem = Hlow
prob + Hhigh
prob, with Hlow
prob = Π−HproblemΠ−, Hhigh
prob = Π+HproblemΠ+. By construction,
Hhigh
prob annihilates the low-energy subspace: Hhigh
probΠ−= Π−Hhigh
prob = 0. Thus Hlow = Π−H0(1)Π−= Π−(M +
Hhigh
prob)Π−= Π−MΠ−, where M = Γ1HX + JxxHXX + Hlow
prob.
In the following, we will show that by the end of Stage 0, if Jclique
zz
is chosen sufficiently large, the Stage 1
evolution is governed exactly by the restricted Hamiltonian Heff
1 = Hlow = Π−MΠ−.
8.2
Effective Hamiltonian at the End of Stage 0
Definition 8.2. Let λcutoff ∈R be a cutoff energy, and let ϵ > 0. We say that Heff is an ϵ-close effective low-
energy Hamiltonian of H if for every eigenstate H |ψ⟩= λ |ψ⟩with λ < λcutoff, there exists a state |ψ′⟩such
that Heff |ψ′⟩= λ′ |ψ′⟩, and:
1. |λ −λ′| = O(ϵ)
(energy closeness),
2. ∥|ψ′⟩−Π−|ψ⟩∥= O(ϵ)
(state closeness).
Intuitively, Heff provides an approximation of H that accurately captures the system’s behavior within the low-
energy subspace L−. This ensures that both the spectrum and eigenstates of H below λcutoff are preserved up to
an error of order O(ϵ).
The following Theorem 8.3 can be derived as a consequence of Theorem 6.2 in [10]. However, we provide a
direct derivation based on projection properties, and explicitly establish the correspondence between eigenstates.
Theorem 8.3. Let H = M + Hhigh
prob, where Π−Hhigh
prob = Hhigh
probΠ−= 0 and all eigenvalues of Hhigh
prob are at
least λhigh > λcutoff. Furthermore, to ensure the effective Hamiltonian approximation remains within O(ϵ), we
impose the stronger condition: λhigh ≥1
ϵ∥M∥2. Then the projected Hamiltonian Heff := Π−MΠ−is an ϵ-close
effective low-energy Hamiltonian of H, as defined in Definition 8.2.
36
Proof. Let H |ψ⟩= λ |ψ⟩be an eigenstate of H with λ < λcutoff. We express H in the block basis defined by
Π−and Π+:
H =
H++
H+−
H−+
H−−
,
where Hab := ΠaHΠb for a, b ∈{+, −}.
Similarly, we decompose M as
M =
M++
M+−
M−+
M−−
.
Let |ψ+⟩:= Π+ |ψ⟩and |ψ−⟩:= Π−|ψ⟩. The eigenvalue equation becomes:
H++
H+−
H−+
H−−
|ψ+⟩
|ψ−⟩
= λ
|ψ+⟩
|ψ−⟩
.
Solving for |ψ+⟩, we obtain |ψ+⟩= G(λ)H+−|ψ−⟩, where the Green’s function is defined as G(λ) :=
(λI −H++)−1, which corresponds to G+(z) in [10].
Substituting back, we obtain an effective energy-dependent equation for |ψ−⟩: Hexact
eff
(λ) |ψ−⟩= λ |ψ−⟩,
where the exact effective Hamiltonian is given by
Hexact
eff
(λ) := H−−+ H−+G(λ)H+−= M−−+ M−+G(λ)M+−.
To approximate this, we expand G(λ) using a resolvent expansion:
G(λ) =
λI −Hhigh
prob −M++
−1
= D−1 I −M++D−1−1 ,
where D := λI −Hhigh
prob, and hence ∥D−1∥≤1/(λhigh −λcutoff).
Expanding in powers of M++D−1, we obtain:
Hexact
eff
(λ) = M−−+ M−+D−1M+−+ M−+D−1M++D−1M+−+ . . .
= M−−+ M−+D−1M+−+ O
∥M∥3
(λhigh−λcutoff)2
.
Under the condition λhigh ≥1
ϵ∥M∥2, the leading correction is bounded by O(ϵ), and we may set Heff :=
M−−as an ϵ-close effective low-energy Hamiltonian of H.
Finally, letting |ψ′⟩:= |ψ−⟩, we see that both the energy and state closeness conditions in Definition 8.2 are
satisfied up to O(ϵ), completing the proof.
Remark 8.4. The structure of this proof closely resembles methods commonly associated with the Schrieffer–
Wolff transformation, a technique originally developed to perturbatively eliminate high-energy subspaces via a
unitary block-diagonalization. Although the present argument is based on resolvent expansion and projection—
rather than an explicit unitary transformation—the resulting effective Hamiltonian is often referred to in the
literature, particularly in physics, as a Schrieffer–Wolff-type reduction; see, e.g., [11].
Applying Theorem 8.3, we have the effective low-energy Hamiltonian
H0(1)eff = Π−MΠ−= Π−(Γ1HX) Π−+ JxxHXX + Hlow
prob.
(8.1)
by setting Jclique
zz
= Ω(∥M∥2/ϵ). The ground state of H0(1)eff is approximately the ground state of the transverse-
field term Π−(Γ1HX)Π−as
JxxHXX + Hlow
prob
≪∥Π−(Γ1HX)Π−∥when Γ1 = O
NJzz + 1 + (nc−1)
4
Jxx
.
37
8.3
Spectral Gap Behavior in Stage 0
The initial Hamiltonian Γ0HX (with Γ0 = 2Γ1) has a spectral gap of 2Γ0. We argue that this spectral gap
decreases smoothly from 2Γ0 to 2Γ1, without encountering any small-gap regime.
The initial ground state is the uniform superposition over all computational basis states, corresponding to the
ground state of HX. As the system evolves, the problem Hamiltonian Hproblem(t) and the XX-driver HXX are
gradually turned on, while the transverse field x(t) is smoothly reduced.
Throughout Stage 0, the transverse field remains large, and the suppression of high-energy states in L+ helps
ensure that the spectral gap decreases in a controlled manner, without abrupt transitions. This can be understood
as follows:
1. For small t, the transverse field term x(t)HX dominates, maintaining a large gap between the ground and
first excited states.
2. As HXX gradually turns on, it remains small relative to the transverse field.
3. At intermediate t, the increasing energy penalty t · Jclique
zz
progressively separates the L+ subspace from
the low-energy dynamics.
4. For large t, the system is effectively projected into L−, and the spectral gap approaches 2Γ1, determined
by the final transverse field strength.
Thus, the gap transitions from 2Γ0 to 2Γ1 smoothly, allowing the adiabatic evolution in Stage 0 to proceed
efficiently without risk of gap closure.
9
Main Analysis of the Two-Stage Dynamics on Bipartite Structures
We now analyze the main two-stage dynamics of the system Hamiltonian for the two bipartite substructures
introduced in Section 3.3: the disjoint-structure graph Gdis and the shared-structure graph Gshare, as shown in
Figure 3.
To simplify notation, we denote the effective Hamiltonian at the start of Stage 1 by Heff
1 (0) := H0(1)eff.
From this point onward, the system evolves under the time-dependent Hamiltonian Heff
1 (t), which incorporates
the Stage 1 and 2 annealing schedules. When there is no ambiguity, we refer to Heff
1 as the full Hamiltonian for
Stages 1 and 2, and all subsequent block decompositions—into same-sign and opposite-sign components—are
understood to apply to Heff
1 .
This section is organized as follows:
1. In Section 9.1, we derive the block decomposition of the full Hamiltonian into same-sign and opposite-sign
components through local angular momentum transformations applied to each clique.
2. In Section 9.2, we describe two ways of structurally decomposing the same-sign block Hamiltonian during
Stage 1, corresponding to the L-inner and R-inner block structures.
3. In Section 9.3, we analyze the two-stage evolution and derive feasibility bounds on Jxx. We show that,
within the feasible regime, the system evolves to the global minimum without encountering an anti-
crossing.
4. In Section 9.4, we illustrate the quantum interference effects using a minimal three-vertex conceptual
model (V3), which may also serve as a testbed for experimental validation.
38
9.1
Block Decomposition of the Full Hamiltonian: Same-Sign vs. Opposite-Sign Blocks
We begin by expressing the Hamiltonian Heff
1 in the angular momentum basis induced by the XX-driver graph.
This basis is constructed locally within each clique using projection and transformation operators, as shown in
Section 6. The resulting global basis, denoted Bang, yields a block-diagonal decomposition of the full Hamilto-
nian into one same-sign block and multiple opposite-sign blocks.
The angular momentum basis Bang is constructed by applying a local transformation within each clique Ci,
which maps the computational basis on Ci to an angular momentum basis B′
ai. Because the total Hilbert space is
a tensor product, these local transformations can be applied independently to each clique, with the identity acting
on the remainder of the system. These local transformations are then combined to define a global transformation
on the full system, preserving the tensor-product structure and yielding a basis constructed from the local angular
momentum decompositions of the cliques defined by the XX-driver graph.
The resulting block decomposition emerges hierarchically: each clique yields one same-sign sector (an effec-
tive spin-1
2) and several opposite-sign sectors (spin-0). At the next level, the set of cliques in L—those forming
the LM—induces a bare subsystem with the same-sign sector Cbare, the intermediate opposite-sign sectors Wbare,
and the all-spin-zero (AS0) opposite-sign sector Qbare. Finally, the full same-sign sector C of the system is
defined as
C = Cbare ⊗
C2⊗mr ,
where mr = |R|. Similarly, the full opposite-sign sectors W and Q are defined by
W = Wbare ⊗
C2⊗mr ,
Q = Qbare ⊗
C2⊗mr .
Thus, Q is also referred to as the (AS0) opposite-sign sector of the full system. The full Hamiltonian then
decomposes into the same-sign block HC and multiple opposite-sign blocks HQ, HW, which are either decoupled
(in the disjoint case) or coupled (in the shared case).
Notation for Operators in Bang.
A bar denotes an operator expressed in the angular momentum basis Bang,
e.g., HX for the transverse-field term, HP for the problem Hamiltonian, and HXX for the XX-driver term.
The effective Hamiltonian in Bang is given by:
Heff
1 := xHX + jxxHXX + HP.
First, we state the resulting block structure in Theorem 9.1, illustrated in Figure 12. We prove the theorem by
deriving Heff
1 for the disjoint-structure graph Gdis in Section 9.1.1. We then describe the modifications required
for the shared-structure graph Gshare in Section 9.1.2, highlighting the critical differences introduced by shared
connectivity.
Theorem 9.1 (Block Structure of the Transformed Hamiltonian). Under the angular momentum basis trans-
formation, the effective Hamiltonian Heff
1 becomes block-diagonal in the basis Bang:
Heff
1 = HC ⊕· · · ⊕HQ + Hinter−block
where Hinter−block ≡0 in the disjoint case, while in the shared case
Hinter−block = Jzz
X
(i,j)∈L×R
√ni−1
ni
Tcq
i ˜σz
j ,
39
where the matrix Tcq (as in Eq. (9.3)) is a 3 × 3 off-diagonal operator that mixes same-sign and opposite-
sign components. The coupling strength in Hinter−block depends on Jzz and clique size ni, but not on the
transverse field x.
In both the disjoint and shared cases, the Hamiltonians HC and HQ admit unified expressions:
HC =
X
i∈L
Bi
w
eff
i , √ni x
+
X
j∈R
Bj(wj, x) + Jzz
X
(i,j)∈L×R
fC
i ˜σz
i ˜σz
j ,
HQ = −
X
i∈L
wi + 1
4jxx
+
X
j∈R
Bj
−wj + Jzz
X
i∈L
fQ
i , x
!
,
with effective coefficients
w
eff
i = wi −ni−1
4 jxx,
fC
i =
(
1
for Gdis,
ni−1
ni
for Gshare,
fQ
i =
(
1
for Gdis,
1
ni
for Gshare.
The initial ground state resides in HC at the beginning of Stage 1.
opposite-sign
blocks
C
Q
Same-sign block
All-spin-0 (AS0)
Figure 12: Block-diagonal structure of the transformed Hamiltonian Heff
1 . The same-sign block HC (green) and
the AS0 block HQ (red) are the two dominant blocks in the low-energy analysis. Intermediate opposite-sign
blocks are shown in neutral color. In the disjoint case the blocks are decoupled, while in the shared case they are
coupled via the inter-block term Hinter−block.
The following corollary follows directly from Theorem 9.1:
Corollary 9.2 (Bare Subsystem Decomposition of HC). The same-sign block Hamiltonian HC decomposes
into two bare subsystems coupled via ZZ-couplings:
HC = Hbare
L
⊗IR + IL ⊗Hbare
R
+ HLR,
(9.1)
40
where
Hbare
L
=
X
i∈L
BL
i (weff, √nc x),
Hbare
R
=
X
j∈R
BR
j (w, x),
HLR = Jzz
X
(i,j)∈L×R
fC
i ˜σz
i ˜σz
j ,
with the effective weight weff = w −nc−1
4
jxx. The superscripts in BL
i and BR
j indicate the subspace of the
tensor product on which each two-level Hamiltonian acts, and are omitted when the context is clear.
9.1.1
The Disjoint-Structure Graph: Block-Diagonal Structure via Clique Contraction
We now turn to the disjoint-structure graph Gdis, illustrated in Figure 3 of Section 3.3, which serves as the
foundation for the block decomposition stated in Theorem 9.1. The key idea is that the local angular momentum
transformation within each clique induces an effective contraction of the graph structure, allowing us to derive
the block Hamiltonians explicitly.
Recall that within the low-energy subspace of a clique of size nc, the transformed spin operators S˜Z, SX, SXX,
given in Equation (6.3) of Theorem 6.2, are block-diagonal in the angular momentum basis.
This local transformation naturally induces a clique contraction: each clique, whose vertices interact identi-
cally with the rest of the graph, is collapsed into a single super-vertex, carrying the effective operators described
above. Under this contraction, the graph Gdis becomes a bipartite graph Gcontract, in which each clique is replaced
by a super-vertex, as illustrated in Figure 13. Thus, L consists of the set of super-vertices.
contraction
2
1
3
a
b
a
b
1’
Figure 13: Clique contraction: A clique ({1, 2, 3}) whose vertices share identical external connectivity is con-
tracted into a single super-vertex (1′).
The Transformed Transverse-Field Term.
For the transverse-field Hamiltonian HX, the transformed operator
in Bang takes the form:
HX =
X
i∈L
√ni
2 σx
i ⊕0 ⊕· · · ⊕0
+ 1
2
X
i∈R
σx
i
=
X
i∈L
√ni
2 σx
i + 1
2
X
i∈R
σx
i
!
⊕· · · ⊕
X
i∈L′
√ni
2 σx
i + 1
2
X
i∈R
σx
i
!
⊕· · · ⊕
X
i∈L
0 + 1
2
X
i∈R
σx
i
!
,
where the direct sum runs over the block decomposition induced by Bang, and the contributions are grouped as
follows:
• H
C
X := P
i∈L
√ni
2 σx
i + 1
2
P
i∈R σx
i corresponds to the same-sign block C.
41
• H
Q
X := 1
2
P
i∈R σx
i corresponds to the AS0 opposite-sign block Q, which receives no transverse contribu-
tion from L.
• H
W
X := P
i∈L′
√ni
2 σx
i + 1
2
P
i∈R σx
i corresponds to the intermediate opposite-sign blocks W, where some
cliques are in spin-0 sectors and others in spin-1
2, denoted by the subset L′ ⊂L.
The Transformed Problem Hamiltonian.
The Ising interaction term, P
(i,j)∈E(G) ˜σz
i ˜σz
j , is mapped under
clique contraction to: P
(i′,j′)∈E(Gcontract)
˜σz
i′ ⊕1 ⊕· · · ⊕1
˜σz
j′, where E(Gcontract) = L × R. This yields a
block structure:
X
(i′,j′)∈E(Gcontract)
˜σz
i′˜σz
j′ ⊕· · · ⊕ml
X
j∈R
˜σz
j ,
which highlights the distinct contributions of the same-sign and opposite-sign sectors.
The transformed problem Hamiltonian becomes:
HP =
X
i∈L
(−wi) (˜σz
i ⊕1 ⊕· · · ⊕1) +
X
i∈R
(−wi)˜σz
i + Jzz
X
(i,j)∈E(Gcontract)
˜σz
i ˜σz
j ⊕· · · ⊕ml
X
j∈R
˜σz
j
=
X
i∈L∪R
(−wi)˜σz
i + Jzz
X
(i,j)∈E(Gcontract)
˜σz
i ˜σz
j
⊕· · · ⊕
X
i∈L
(−wi) +
X
i∈R
(−wi + mlJzz)˜σz
i
!
.
Thus, the same-sign sector behaves as a standard Ising Hamiltonian on the contracted graph Gcontract, while each
opposite-sign block contributes an energy shift and modifies only the R-part of the spectrum.
The Transformed XX-Driver Term.
The XX-driver modifies the diagonal energies of super-vertices:
HXX =
X
i∈L
ni−1
4
˜σz
i ⊕
−1
4
⊕· · · ⊕
−1
4
.
(9.2)
This shifts effective vertex-weights as follows:
• In the same-sign block:
−wi 7−→−
wi −ni−1
4 jxx
(energy lifted);
• In the AS0 opposite-sign block:
−wi 7−→−
wi + 1
4jxx
(energy lowered).
Remark 9.3. In the disjoint-structure case, the opposite-sign blocks are, in principle, decoupled from the same-
sign block and can be ignored. In contrast, in the shared-structure case, weak to moderate couplings emerge
between the same-sign and opposite-sign sectors, necessitating their inclusion in the analysis.
Because of the block ordering in Section 7.4, the energy levels of the intermediate opposite-sign blocks are
sandwiched between those of the same-sign block and the AS0 block. Therefore, for the purpose of analyzing the
low-energy spectrum and the dynamical behavior near critical anti-crossings, it is sufficient to retain only the
same-sign block and the AS0 block. All other opposite-sign blocks can be safely ignored.
42
Same-Sign Block Hamiltonian.
We denote by Hdis
C the effective Hamiltonian restricted to the same-sign sector
C for the disjoint-structure graph case:
Hdis
C = −xH
C
X + H
C
Z(jxx) + Jzz
X
(i,j)∈E(Gcontract)
˜σz
i ˜σz
j ,
where
H
C
Z(jxx) :=
X
i∈L
−wi + nc−1
4
jxx
˜σz
i +
X
j∈R
(−wj)˜σz
j
collects all diagonal terms linear in ˜σz
i , including both the bare vertex weights and the XX-induced energy shifts.
AS0 Block Hamiltonian.
We denote by Hdis
Q the effective Hamiltonian restricted to the AS0 sector Q for the
disjoint-structure graph case:
Hdis
Q = −xH
Q
X + H
Q
Z(jxx),
where
H
Q
X = 1
2
X
j∈R
σx
j ,
H
Q
Z(jxx) :=
X
i∈L
−wi −1
4jxx
+
X
j∈R
(−wj + mlJzz) ˜σz
j .
The diagonal term H
Q
Z reflects a constant energy shift from the XX-driver applied to L, and a coupling-induced
energy shift on the R-vertices, scaled by the number of cliques ml. Note that Hdis
Q contains no coupling, as the
spin-0 components contribute only a scalar constant to the interaction term. Hdis
Q is analytically solvable.
Remark 9.4. The differences between the same-sign block Hdis
C and the AS0 block Hdis
Q are:
• The effective transverse-field contribution from L vanishes: √niσx 7→0.
• The operator ˜σz
i is replaced by the scalar 1 (i.e., ˜σz
i 7→1), contributing only constant energy shifts.
• The XX-driver term, which raises the energy of each vertex i ∈L by ni−1
4 Jxx in the same-sign block,
instead lowers the energy in the all-spin-zero block by 1
4Jxx per clique.
This completes the block-level analysis of the disjoint-structure case. We now turn to the shared-structure
case, where weak inter-block couplings emerge and modify this picture.
9.1.2
The Shared-Structure Graph: Modification to the Disjoint Case
The shared-structure graph Gshare, shown in Figure 3, differs from the disjoint-structure graph Gdis primarily in
its inter-subsystem ZZ-couplings.
In the transformed Hamiltonian, the only modification appears in the ZZ-interaction terms. Unlike the fully
connected case in Gdis, only nc −1 out of nc vertices in each clique of size nc are coupled to each vertex j ∈R.
As shown in Section 6.2.5, this partial adjacency induces a modified coupling structure in the angular momentum
basis.
The interaction between an external vertex and the internal basis of a clique of size nc is represented by the
matrix T in Eq. (6.10), which decomposes as T =
nc−1
nc ˜σz
⊕
1
nc
+
√nc−1
nc
Tcq, with
Tcq =
0
0
−1
0
0
0
−1
0
0
,
(9.3)
43
which mixes same-sign and opposite-sign components. The coupling magnitude is
√nc−1
nc
≈
1
√nc+1, decreasing
with clique size nc.
The resulting decomposition and sector-wise coupling structure are illustrated in Figure 14.
1
2
3
4
1
2
3
4
r
Spin-1/2
Spin-0
r
q
q
c
c
Figure 14: Clique-to-vertex coupling under partial adjacency. A clique ({1, 2, 3, 4}) is transformed into an
effective spin-1
2 component labeled c (orange) and a spin-0 component labeled q (gray square). The external
vertex r (blue) connects to all but one clique vertex (purple). The resulting ZZ-coupling is split across sectors:
nc−1
nc Jzz between c and r, and
1
nc Jzz between q and r. An additional off-diagonal coupling term of strength
−
√nc−1
nc
Jzz, not shown in the figure, induces mixing between the same-sign and opposite-sign sectors.
Modification of ZZ Terms.
The Ising interaction term
X
(i,j)∈E(G)
˜σz
i ˜σz
j =
X
j∈R
X
i∈L
ni−1
X
ik=1
˜σz
ik ˜σz
j
is modified under contraction due to the partial connectivity between R and cliques in L. For each i ∈L, j ∈R,
the interaction becomes:
ni−1
X
ik=1
˜σz
ik ˜σz
j =⇒(Ti ⊕1 ⊕· · · ⊕1) ˜σz
j = Ti˜σz
j ⊕˜σz
j ⊕· · · ⊕˜σz
j
|
{z
}
ni−2 times
,
Therefore, the full Ising interaction term is transformed into:
X
j∈R
X
i∈L
ni−1
X
ik=1
˜σz
ik ˜σz
j =
X
j∈R
X
i∈L
Ti˜σz
j ⊕· · · ⊕ml
X
j∈R
˜σz
j .
44
The transformed problem Hamiltonian thus becomes:
H
share
P
=
X
i∈L
(−wi) (˜σz
i ⊕1 ⊕· · · ⊕1) +
X
j∈R
(−wj)˜σz
j + Jzz
X
j∈R
X
i∈L
Ti˜σz
j
⊕· · · ⊕ml
X
j∈R
˜σz
j
=
X
i∈L∪R
(−wi)˜σz
i + Jzz
X
(i,j)∈L×R
Ti˜σz
j
⊕· · · ⊕
X
i∈L
(−wi) +
X
j∈R
(−wj + mlJzz)˜σz
j
.
In summary, the interaction term ˜σz
i ˜σz
j is replaced by the effective operator Ti˜σz
j .
The full Hamiltonian for the shared-state case is thus:
Heff
1 = −x
H
C
X ⊕. . . ⊕H
Q
X
+
H
C
Z(jxx) ⊕. . . ⊕H
Q
Z (jxx)
+ Jzz
X
(i,j)∈L×R
ni−1
ni ˜σz
i ⊕1
ni + Tcq
i
˜σz
j
=
Hshare
C
⊕. . . ⊕Hshare
Q
+ Hinter−block
with
Hshare
C
= −xH
C
X + H
C
Z(jxx) + Jzz
X
(i,j)∈E(Gcontract)
ni−1
ni ˜σz
i ˜σz
j ,
Hshare
Q
= −xH
Q
X +
X
i∈L
−wi −1
4jxx
+
X
j∈R
−wj + Jzz
X
i∈L
1
ni
!
˜σz
j ,
Hinter−block = Jzz
X
(i,j)∈L×R
√ni−1
ni
Tcq
i ˜σz
j .
The inter-block coupling magnitude depends on Jzz,
1
√ni+1, and ml (size of L), but notably is indepen-
dent of the time or the transverse field.
9.2
Inner Decompositions of the Same-Sign Block during Stage 1
This subsection develops two inner decompositions of the same-sign block HC (L-inner and R-inner)6, illustrates
them through a concrete example, and introduces the corresponding notions of L- and R-localization.
9.2.1
Two Block Decompositions of HC: L-Inner vs. R-Inner
Physically, the same-sign block Hamiltonian HC is symmetric under interchange of the L and R subsystems:
permuting the tensor factors leaves the spectrum unchanged. Mathematically, this corresponds to a permutation
similarity transformation: reordering the basis to place L before R, or vice versa, yields a matrix with identical
eigenvalues. Combinatorially, this symmetry allows the matrix to be organized into a two-layer block structure
with either subsystem—L or R—serving as the “inner” block. That is, HC can be expressed either as a collection
of L-blocks indexed by the states from R, or as R-blocks indexed by the states from L.
6The decompositions in Section 9.1 and Section 9.2 are both structural, but in fundamentally different senses: the former is physical,
reflecting a true partition of the Hilbert space into dynamically distinct sectors; the latter is combinatorial, reorganizing the matrix to
reveal subsystem indexing, without affecting the underlying dynamics.
45
For illustration, we assume uniform clique size ni = nc for all i. We apply Theorem 7.2 to restrict the
same-sign block Hamiltonian in Eq. (9.1) to the symmetric subspace:
Hsym
C
= Hbare
L
⊗IR + IL ⊗Hbare
R
+ HLR,
where
Hbare
L
= −√nc x CSX(m) −weff CS˜Z(m),
Hbare
R
= −x CSX(mr) −w CS˜Z(mr),
and
HLR = JC
zz CS˜Z(m) CS˜Z(mr),
with JC
zz = JzzfC
c =
(
Jzz
for Gdis,
nc−1
nc Jzz
for Gshare.
Here, CS˜Z(m) and CSX(m) denote the collective Pauli operators restricted to the symmetric subspace of m
spins, as defined in Eqs. (7.4) and (7.5), and all bare operators are understood in this context to be restricted to the
symmetric subspace.
We illustrate this dual block structure through a concrete example with m = 2 and mr = 3 in the symmetric
subspace, showing (i) the basis structure (Figure 15) and (ii) the explicit matrix form.
Visual and Combinatorial Structure.
Combinatorially, the symmetric subspace contains (m + 1)(mr + 1)
basis states. These states can be organized either as the L-inner decomposition (grouping by L, with blocks
indexed by spin-up count in R), or as the R-inner decomposition (grouping by R, with blocks indexed by spin-up
count in L).
See Figure 15 for an illustration.
Explicit Matrix Representation.
We now present the explicit matrix form of Hsym
C
in the symmetric subspace,
corresponding to the two decompositions shown in Figure 15. In both cases, the matrix has dimension 12 × 12,
but the block layout differs depending on the ordering of basis states.
Each diagonal entry reflects the total energy of a basis state. Off-diagonal entries arise from the transverse-
field term, which connects basis states differing by a single spin flip.
L-inner block decomposition:
There are mr + 1 = 4 outer-layer blocks, each a (m + 1) × (m + 1) matrix
acting on the L subsystem. The full matrix representation is denoted HL−inner
C
=
6Jzz −3w −2weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
0
0
−
√nc x
√
2
3Jzz −3w −weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
0
0
−
√nc x
√
2
−3w
0
0
−1
2
√
3x
0
0
0
0
0
0
−1
2
√
3x
0
0
4Jzz −2w −2weff
−
√nc x
√
2
0
−x
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
2Jzz −2w −weff
−
√nc x
√
2
0
−x
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
−2w
0
0
−x
0
0
0
0
0
0
−x
0
0
2Jzz −w −2weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
−x
0
−
√nc x
√
2
Jzz −w −weff
−
√nc x
√
2
0
−1
2
√
3x
0
0
0
0
0
0
−x
0
−
√nc x
√
2
−w
0
0
−1
2
√
3x
0
0
0
0
0
0
−1
2
√
3x
0
0
−2weff
−
√nc x
√
2
0
0
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
−weff
−
√nc x
√
2
0
0
0
0
0
0
0
0
−1
2
√
3x
0
−
√nc x
√
2
0
46
(a) L-inner
(b) R-inner
−2weff + 3(2Jzz −w)
−2weff + 2(2Jzz −w)
−2weff + (2Jzz −w)
−2weff
−weff + 3(Jzz −w)
−weff + 2(Jzz −w)
−weff + (Jzz −w)
−weff
−3w
−2w
−w
0
Figure 15: Two possible orderings of the basis states in the symmetric subspace of the same-sign block Hsym
C
,
illustrated for m = 2, mr = 3. (a) L-inner ordering: basis states are grouped by the number of spin-up qubits
in the R subsystem (blue), with each group forming a block over the L subsystem (pink). (b) R-inner ordering:
basis states are grouped by spin-up count in L (pink), with each group forming a block over R (blue). The dashed
circle marks the empty-set basis state (no spin-ups). Each blue circle contributes energy −w; each pink circle
contributes −weff; each blue-pink pair incurs a coupling penalty of +Jzz. The total energy of each basis state
is shown on the right. Energies within each block vary monotonically. To ensure that the block energies in (b)
decrease monotonically from top to bottom, we require −mweff > −weff + mr(Jzz −w). This condition forms
the basis for deriving the Steering lower bound J steer
xx .
47
R-inner block decomposition:
There are m + 1 = 3 outer-layer blocks, each a (mr + 1) × (mr + 1) matrix
acting on the R subsystem. The full matrix representation is denoted HR−inner
C
=
6Jzz −3w −2weff
−1
2
√
3x
0
0
−
√nc x
√
2
0
0
0
0
0
0
0
−1
2
√
3x
4Jzz −2w −2weff
−x
0
0
−
√nc x
√
2
0
0
0
0
0
0
0
−x
2Jzz −w −2weff
−1
2
√
3x
0
0
−
√nc x
√
2
0
0
0
0
0
0
0
−1
2
√
3x
−2weff
0
0
0
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
0
0
3Jzz −3w −weff
−1
2
√
3x
0
0
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
0
−1
2
√
3x
2Jzz −2w −weff
−x
0
0
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
0
−x
Jzz −w −weff
−1
2
√
3x
0
0
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
0
−1
2
√
3x
−weff
0
0
0
−
√nc x
√
2
0
0
0
0
−
√nc x
√
2
0
0
0
−3w
−1
2
√
3x
0
0
0
0
0
0
0
−
√nc x
√
2
0
0
−1
2
√
3x
−2w
−x
0
0
0
0
0
0
0
−
√nc x
√
2
0
0
−x
−w
−1
2
√
3x
0
0
0
0
0
0
0
−
√nc x
√
2
0
0
−1
2
√
3x
0
Notice that each inner-block Hamiltonian is a linear shift of a base Hamiltonian:
H(r)
L = H(0)
L + r · SL,
H(l)
R = H(0)
R + l · SR,
(9.4)
for r ∈{0, 1, . . . , mr} and l ∈{0, 1, . . . , m}, where the shift matrices are given by:
SL =
mJzz −w
· · ·
0
0
...
...
...
...
0
· · ·
Jzz −w
0
0
· · ·
0
−w
,
SR =
mrJzz −weff
· · ·
0
0
...
...
...
...
0
· · ·
Jzz −weff
0
0
· · ·
0
−weff
.
Remark 9.5. The zero-indexed blocks correspond directly to the bare subsystems:
H(0)
R = |0⟩L ⟨0| ⊗Hbare
R
,
H(0)
L = Hbare
L
⊗|0⟩R ⟨0| .
Thus there are two possible inner decompositions of HC: the L-inner blocks H(r)
L indexed by r, and the R-inner
blocks H(l)
R indexed by l. For brevity, we will often refer to these simply as the L-blocks and R-blocks.
9.2.2
Definition of Structural Localization
We define a quantitative notion of localization of the ground state of the same-sign block HC, based on the
inner decompositions. We begin with the R-localization defined via the R-inner decomposition; the definition of
L-localization is analogous.
Recall from Eq. (9.4) that
H(l)
R = H(0)
R + l · SR,
with the block energies ordered in decreasing order when −weff = −w + nc−1
4
Jxx > 0 or Jxx is sufficiently large.
Definition 9.6 (Structural Localization). Let {P (l)
R } denote the projectors onto the H(l)
R -inner block, for l =
0, . . . , m. For parameters k ≪m and tolerance ϵ > 0, we say the ground state |ψ(t)⟩of HC(t) is R-localized
up to depth k if the cumulative overlap
k−1
X
l=0
∥P (l)
R |ψ(t)⟩∥2 ≥1 −ϵ.
48
In words, the ground state amplitude is almost entirely concentrated within the lowest k R-blocks.
In Section 9.3.2, we will show that if Jxx satisfies the Steering lower bound (derived in Section 9.3.1), the
ground state is smoothly steered into the R-localized region by the end of Stage 1. In contrast, for Jxx = 0,
the ground state exhibits L-localization with dominant weight on H(0)
L . This case is analyzed in detail in the
companion paper [8].
9.3
Two-Stage Evolution and Feasibility Bounds on Jxx
Our goal is to show that, for an appropriate choice of Jxx and Jzz, the system evolves from the initial ground
state of Heff
1 to the global minimum (GM) without encountering an anti-crossing. The correctness of the algorithm
critically depends on four feasibility bounds on Jxx(Table 2), together with two upper bounds on Jzz: J inter
zz and
J steer
zz .
In Section 9.3.1, we derive these bounds analytically, and show that for Jzz ≤J steer
zz there exists a nonempty
feasible window for Jxx. To give intuition before the detailed derivations, we briefly summarize how each bound
contributes to maintaining a smooth two-stage evolution. The Stage-Separation bound together with J inter
zz ensure
that Stage 1 is effectively confined to the same-sign block, allowing Stage 1 and Stage 2 to be analyzed sep-
arately; the Lifting bound ensures the original anti-crossing is removed; the Steering bound directs the system
smoothly into the R-region (bypassing tunneling) during Stage 1, and Stage 2 is secured by the Sinking bound,
which prevents the emergence of a new anti-crossing when the lowest opposite-sign block participates. The
analysis, supported by numerical results, of Stage 1 and Stage 2 is presented in Section 9.3.2 and Section 9.3.3,
respectively.
9.3.1
Analytical Bounds on Jxx and the Feasibility Window
Recall that Jxx produces a see-saw effect: it raises the energy associated with LM in the same-sign block while
simultaneously lowering the energy in the opposite-sign blocks. This effect introduces a fundamental trade-off:
if Jxx is too small, the system fails to lift the local minima in the same-sign block high enough; if it is too large, it
lowers the opposite-sign state too far, introducing new anti-crossings. As a result, the success of the algorithmic
design depends critically on choosing an appropriate value of Jxx. These considerations motivate four analytical
feasibility bounds on Jxx summarized in Table 2.
Name
Notation
Purpose
(i) Stage-Separation Upper Bound
J sep
xx
Ensure the ground state of the same-sign block is the lowest during Stage 1
(ii) Lifting Lower Bound
J lift
xx
Lift LM energy in the same-sign block high enough during Stage 2
(iii) Steering Lower Bound
J steer
xx
Ensure smooth transition into the R-localized region during Stage 1
(iv) Sinking Upper Bound
J sink
xx
Prevent the AS0 opposite-sign block from dropping too low in Stage 2
Table 2: Summary of the four feasibility bounds on Jxx. (Note: the Sinking Upper Bound is required only in the
shared-structure case.)
For simplicity, we assume uniform clique size, ni = nc for all i, and remark on the changes needed for each
bound when clique sizes vary. We also set w = 1 in the following analysis.
We derive the following explicit bounds (below) in terms of Γ2, Jzz, m, and the unknown quantities mr and
49
mg:
J lift
xx = 2 m
mg Γ2,
J steer
xx =
4
nc−1[mr(Jzz −1) + 1],
J sep
xx = 2(Γ2 −1),
J sink
xx = 2(Γ2 −1) + 2mr
nc Jzz.
(Jxx-Bounds)
These analytical bounds are approximate but conservative, derived from the bare (decoupled) block energies
and perturbative shifts. They suffice to demonstrate a nonempty feasible window for Jxx. In practice, additional
values may succeed beyond the bounds and can be identified through numerical exploration or adaptive tuning.
Feasible Jxx Window
Notice that the lower bound J steer
xx
depends on Jzz. If Jzz is too large, the required Jxx may exceed the upper
bounds, resulting in an empty feasible window. To avoid this, we now show that the four bounds are compatible
under a reasonable condition on Jzz, thereby ensuring a nonempty feasible window.
Theorem 9.7. Assume uniform clique size ni = nc for all i, and suppose m ≥3 and mr ≥2. Use Γ2 = m.
If Jzz ≤J steer
zz
:= 1 + (nc−1)(m−1)−2
2mr
, then setting Jxx = 2(m −1), yields max{J lift
xx, J steer
xx } ≤Jxx ≤
min{J sep
xx , J sink
xx }, where explicit formulas for all bounds are derived in equations (Jxx-Bounds).
The choice Jxx = 2(m −1) is not unique; it simply serves as a constructive witness that the feasible window is
nonempty whenever the condition on Jzz holds.
Feasibility Condition on Jzz.
The upper bound J steer
zz
in the theorem depends on the unknown parameter mr.
To ensure feasibility without knowing mr precisely, we apply a conservative upper bound: mr < √ncm. Sub-
stituting into the definition of J steer
zz , we obtain: Jzz ≤1 + (nc−1)(m−1)−2
2√ncm
. In practice, it suffices to enforce the
relaxed condition: Jzz ≲1 +
√nc+1
2
.
We now present the detailed derivations of the four analytical bounds on Jxx. Each bound is introduced by
its guiding idea, followed by an explicit derivation of the corresponding formula.
(i) Stage-Separation Upper Bound and J inter
zz
The Stage-Separation requirement explains the rationale behind the two-stage design of the coupling schedule.
The objective is to ensure that the system’s ground state starts and remains confined to the same-sign block during
Stage 1, and involves opposite-sign blocks only during Stage 2. To achieve this, two conditions are needed: (a)
the lowest energy level of the full system must lie within the same-sign block throughout Stage 1, and (b) the
inter-block coupling between same-sign and opposite-sign blocks must remain weak enough to be neglected.
Condition (a): Crossover within Stage 2.
This motivates a two-stage schedule in which the XX-coupling jxx
is held constant during Stage 1 and scaled linearly with the transverse field during Stage 2. Using the block
energy ordering results of the L-bare subsystem from Section 7.4 (Figure 11), we find that the crossover between
the same-sign and opposite-sign blocks occurs at xc =
4α
4−α2 . This lies within Stage 2 (i.e., xc ≤Γ2) if and only
50
if α ≤αmax(Γ2), where αmax(Γ2) is defined in Eq. 6.9. For Γ2 ≥2, one has αmax(Γ2) ≳2(Γ2−1)
Γ2
. This yields
the Stage-Separation bound on the constant XX-coupling during Stage 1:
J sep
xx = 2(Γ2 −1).
Condition (b): Weak inter-block coupling.
In addition, recall from Theorem 9.1 that the effective Hamilto-
nian decomposes as
Heff
1 = HC ⊕· · · ⊕HQ + Hinter−block,
where Hinter−block = 0 in the disjoint case, while in the shared case
Hinter−block = Jzz
X
(i,j)∈L×R
√ni−1
ni
Tcq
i ˜σz
j .
Here Tcq (Eq. (9.3)) is a 3 × 3 off-diagonal operator mixing same-sign and opposite-sign components. The
strength of this coupling depends on Jzz and clique size ni, but not on the transverse field x.
Since we take Γ2 = m, and Stage 1 is defined by x ∈[Γ2, Γ1], the inter-block coupling is negligible whenever
Jzz ≪m. We summarize this condition by writing Jzz ≤J inter
zz . In particular, this requires that Jzz be chosen as a
constant (independent of m), so that the condition continues to hold as the problem size grows. In practice, since
J steer
zz ≤J inter
zz by construction, it suffices to impose Jzz ≤J steer
zz .
Conclusion.
Together, the Stage-Separation bound J sep
xx and the inter-block coupling bound J inter
zz ensure that the
low-energy dynamics of Stage 1 are effectively confined to the same-sign block. This joint condition justifies
treating Stage 1 and Stage 2 as separate regimes in the analysis.
(ii) Lifting Lower Bound J lift
xx
Idea: This bound ensures that the energy associated with LM in the same-sign block is raised high enough, during
Stage 2. The key idea is to use exact analytical expressions for the bare energy levels, ELM
0
and EGM
0
, derived
in Section 7, to approximate the true energies associated with LM and GM. We then determine the condition under
which ELM
0
remains strictly above EGM
0
throughout Stage 2.
This is a perturbative approximation. By second-order perturbation theory, the energy associated with LM is
shifted upward, while the energy associated with GM is shifted downward. Therefore, if the bare energy levels do
not cross, the true levels will not have an anti-cross either.
The resulting threshold defines the Lifting Lower Bound J lift
xx: the minimum value of Jxx required to ensure
that the energy level associated with LM in the same-sign block remains sufficiently lifted throughout Stage 2.
This approximation is further supported numerically.
Derivation: The (negative) slope magnitude of ELM
0
(x) decreases from
√nc
2
· m
(at Jxx = 0)
to
1
α · m
(at Jxx = αΓ2).
This is illustrated in Figure 8(c), and notably, the slope bound is independent of clique size nc. In contrast, the
(negative) slope magnitude of EGM
0
(x) remains approximately constant: mg
2 . To maintain ELM
0
(x) > EGM
0
(x)
throughout Stage 2, it suffices to require m
α < mg
2 , i.e. α > 2 m
mg . Thus, the bound becomes:
J lift
xx = 2 · m
mg · Γ2.
51
(iii) Steering Lower Bound J steer
xx
Idea: The lower bound J steer
xx is defined as the minimal value of Jxx such that the diagonal entries of the R-inner
block decomposition Hamiltonian are strictly decreasing from top to bottom. This condition ensures that the
system can smoothly localize into the lowest R-blocks.
The key idea is that the coupling strength Jxx must be large enough so that any configuration involving
vertices from L incurs a high energy penalty, thereby suppressing support on the L-blocks throughout Stage 1.
See Figure 15 for an illustration.
Derivation: Specifically, we require the inequality −2 weff > −(weff + mr(1 −Jzz)), where weff = 1 −nc−1
4
Jxx.
is the effective vertex weight. Solving the inequality yields the lower bound:
J steer
xx =
4
nc−1 [mr(Jzz −1) + 1] .
Remark 9.8. Since J steer
xx depends on nc, a conservative generalization for non-uniform clique sizes is to replace
nc with mini ni.
(iv) Sinking Upper Bound J sink
xx
Idea: This is analogous to the Lifting Lower Bound J lift
xx. We want to make sure that the AS0 energy level is
not dropped too low to form a new anti-crossing. We use the bare (decoupled) energy levels EGM
0
(the ground
state energy of Hbare
GM ) and EAS0
0
(the ground state energy of HQ) to approximate the true energy levels. We then
impose an upper bound on Jxx to ensure that EAS0
0
> EGM
0
throughout Stage 2.
By the same perturbative argument applied when the blocks are coupled, while using the decoupled bare
levels as reference points, the true energy level corresponding to EAS0
0
is shifted upward, while the true energy
level corresponding to EGM
0
is shifted downward. Therefore, as long as EAS0
0
> EGM
0
, no anti-crossing will be
formed. This approximation is further supported numerically.
Derivation: We require that EAS0
0
(x) > EGM
0
(x), during Stage 2, and in particular, we enforce this at the
beginning of Stage 2, with x = Γ2:
EAS0
0
(Γ2) ≈−m
1 + Jxx
4
−mr
2
1 −m
nc Jzz + Γ2
,
EGM
0
(Γ2) ≈−m+mr
2
(1 + Γ2).
Solving this inequality yields the upper bound:
J sink
xx = 2(Γ2 −1) + 2mr
nc Jzz.
Remark 9.9. Since J sink
xx depends on nc, a conservative generalization for non-uniform clique sizes is to replace
nc with maxi ni.
Having established the four feasibility bounds on Jxx and the condition for a nonempty window, we now turn
to the detailed analysis of Stage 1 and Stage 2.
9.3.2
Stage 1: Structural Steering
The purpose of Stage 1 is twofold: (1) to ensure that the evolution is confined to the same-sign block, and (2) to
demonstrate successful structural steering within this block. Confinement is guaranteed by the Stage-Separation
bound J sep
xx together with the requirement Jzz ≤J inter
zz (in the shared case).
52
We define structural steering as the mechanism by which the evolving ground state is directed smoothly into
the R-localized region (the global-minimum-supporting region). In our design, steering is achieved by ordering
the energies of the R-inner blocks of the same-sign Hamiltonian so that the ground state is guided directly into
the R-localized region, rather than first becoming trapped in the L-region and only later tunneling into the R-
region, as happens in the stoquastic annealing case. Formally, this mechanism is guaranteed by two feasibility
bounds: the Lifting bound J lift
xx, which forces the ground state toward the lowest R-blocks; and the Steering
bound J steer
xx , which ensures that this localization proceeds smoothly. This analytically established behavior is
further confirmed by numerical evidence.
Numerical Confirmation of Structural Steering
We confirm structural steering numerically by tracking structural localization (as defined in Section 9.2.2),
namely the cumulative projection weight of the ground state onto the lowest R-blocks.
Since this analysis involves only the same-sign block, and assuming uniform clique size, we may employ
the symmetric-subspace reduction, which enables large-scale calculations. These calculations determine the
appropriate value of k, corresponding to the minimum number of R-blocks that capture nearly all of the ground-
state weight. In practice, when mr > m, we typically find k ≈2, whereas when 2 ≤mr ≪m (possible in the
shared-structure case) 7 a slightly larger k (upto 0.2m) may be required.
For the disjoint case, Figure 16(a) shows smooth steering into the lowest k = 2 blocks (with mr = mg > m).
For contrast, Figure 16(b) shows the evolution when the steering condition is violated (with Jzz = 1000 ≫J steer
zz ,
resulting in Jxx < J steer
xx ). Here the ground state first localizes in the L-region and only later tunnels into the
R-region, highlighting the tunneling bottleneck that structural steering avoids.
Figure 17 shows successful steering for m = 30 with small mr = 5. By the end of Stage 1, more than 90%
of the ground state amplitude is concentrated within just six blocks.
Overall, throughout Stage 1 the ground state evolves smoothly from a uniform superposition into a com-
pressed state supported by the lowest few energy-ordered R-blocks, whenever mr ≥2. This continuous struc-
tural compression avoids abrupt transitions between disjoint regions of the Hilbert space, thereby preventing
tunneling-induced anti-crossings and ensuring that the spectral gap remains large. While a full proof of gap
preservation is left for future work, these numerical results provide strong supporting evidence for the effective-
ness and scalability of structural steering in Stage 1.
In summary, Stage 1 achieves smooth and scalable localization into the R-region without tunneling.
9.3.3
Stage 2: Smooth Evolution to GM via an Opposite-Sign Path
The purpose of Stage 2 is to show that the evolution proceeds from the R-localized region at the end of Stage 1
to GM without the emergence of any new anti-crossing. We analyze this stage under two structural cases: the
disjoint-structure graph Gdis and the shared-structure graph Gshare.
The disjoint-structure case serves mainly for illustration and comparison: the evolution remains entirely
within the same-sign block, and the dynamics proceed smoothly. In the shared-structure case, the lowest opposite-
sign block (AS0) participates in the evolution. By construction, the Sinking bound Jxx ≤J sink
xx ensures that no
new anti-crossing emerges, so the evolution remains smooth. Moreover, the dynamics can be explained by sign-
generating interference, which produces negative amplitudes in the computational basis of the ground state. Thus
the system follows an opposite-sign path, a route not accessible in the stoquastic case.
7In this case, only the condition Jzz ≤Jinter
zz
is required (not Jzz ≤Jsteer
zz ).
53
(a) Successful structural steering with Jxx = 2(m −1),
Jzz = 1 +
√nc+1
2
.
(b) Failure of structural steering when Jzz = 1000 ≫
J steer
zz .
Figure 16: Stage 1 evolution for a disjoint-structure graph with m = 10, mg = mr = 15, nc = 9. Here we
use Γ1 = 4Γ2 with Γ2 = m. Orange (dashed) indicates the L-region block H(0)
L , gray the lowest R-block H(0)
R ,
and blue the cumulative overlap with the lowest two R-blocks H(≤2)
R
. In (a), by the end of Stage 1 (t = 0.5),
more than 90% of the ground state amplitude is concentrated within H(≤2)
R
, demonstrating effective structural
steering. In (b), Jzz = 1000 ≫J steer
zz , and thus Jxx ≪J steer
xx . The ground state remains localized in the L-region
until t ≈0.38, then abruptly tunnels into the R-region, indicating an exponentially small-gap tunneling-induced
anti-crossing.
Figure 17: Stage 1 structural steering for a shared-structure graph with m = 30, mr = 5, nc = 9, and Jxx =
2(m−1) and Jzz = 1 +
√nc+1
2
. Here we use Γ1 = 4Γ2 with Γ2 = m. The cumulative projection weight of the
ground state onto the lowest energy-ordered blocks is shown as a function of time t. Orange (dashed) indicates
the L-region block H(0)
L , gray the lowest R-block H(0)
R , and blue the cumulative overlap with the lowest five
R-blocks H(≤5)
R
. By the end of Stage 1 (t = 0.5), more than 90% of the ground state amplitude is concentrated
within H(≤5)
R
, demonstrating effective structural steering even for small mr relative to m.
54
Disjoint-Structure Case
In this case, the evolution remains entirely within the same-sign block—more precisely, it proceeds from the
R-localized region into the H(0)
R block during Stage 2. The spectral gap within H(0)
R is given analytically by
√
1 + x2, and thus remains bounded below by 1 throughout Stage 2. Figure 18 compares the energy spectra of
the same-sign block for Jxx = 0 and Jxx = 2(m−1).
Because the same-sign block admits a symmetric-subspace reduction, its dimension scales linearly with m.
This allows efficient numerical calculations on large instances. Figure 19 presents a large-scale example with
m = 25 and mg = 35, confirming that the algorithm continues to perform correctly at scale.
In this disjoint-structure case, there is no requirement for the Sinking bound. Large values of Jxx are also
admissible, in which case double block-level true crossings occur but are dynamically bypassed. See Figure 20.
(a)
(b)
(c)
(d)
Figure 18: Energy spectra for the disjoint-structure graph Gdis, with m = ml = 3, mg = 5, and clique size
nc = 9.
(a) TFQA spectrum showing a small-gap anti-crossing; the corresponding gap profile is shown in (c).
(b) DIC-DAC-DOA spectrum with XX-driver at Jxx = 2(m−1) = 4; the corresponding gap profile is shown in
(d).
55
(a)
(b)
(c)
(d)
Figure 19: Spectral decomposition for the disjoint-structure graph Gdis, with m = ml = 25, mg = 35, and
clique size nc = 9.
(a) Energy spectrum of the same-sign block under TFQA (Jxx = 0), showing an anti-crossing near t ≈0.985.
(b) Zoom of panel (a) over t ∈[0.95, 1].
(c) Full spectrum under DIC-DAC-DOA with Jxx = 2(m−1) = 48: the opposite-sign energies (dashed red) are
lowered by the XX-driver but remain above the same-sign ground state (green), confirming successful evolution
with no anti-crossing.
(d) Zoom of panel (c) over t ∈[0.95, 1].
Shared-Structure Case
We now analyze the shared-structure case in detail. The goal is to show explicitly how the participation of
opposite-sign blocks leads to both positive and negative amplitudes in the ground state. Our approach is twofold:
first we establish the sign pattern mathematically, and then we interpret it physically in terms of sign-generating
quantum interference. We begin with the mathematical analysis.
56
(a)
(b)
(c)
(d)
Figure 20: Spectral decomposition for the disjoint-structure graph Gdis, with m = ml = 3, mg = 5, and clique
size nc = 9.
(a) Energy spectrum of the same-sign block under TFQA (Jxx = 0), showing an anti-crossing near t ≈0.85.
(b) Full spectrum under DIC-DAC-DOA with Jxx = 4: the opposite-sign energies (dashed red) remain above the
same-sign ground state (green), confirming successful evolution with no anti-crossing.
(c) Legend identifying curve types across panels.
(d) Large-Jxx case (Jxx = 12): two true block-level crossings occur as the lowest opposite-sign energy drops
below the same-sign ground state energy near t ≈0.4 and again near t ≈0.7.
57
Emergence of Negative Amplitudes
In the angular-momentum basis, Stage 2 involves the lowest R-supporting block together with the opposite-sign
blocks that couple to it. We consider the following restricted subspace in the computational basis (transforming
back), which consists of two parts:
• M: the set of GM-supporting states containing R, M = { M ⊂GM : R ⊂M }.
• D: the set of dependent-set states paired to M via XX-couplers. For each M ∈M and each b ∈M \ R,
there is a unique partner a such that (a, b) is an XX-coupled pair. The corresponding dependent state
DM,b ∈D is obtained by flipping (a, b). Thus each M has |M \ R| associated dependent-set states, while
each DM,b is uniquely coupled to its parent M.
Restricting the Hamiltonian to this subspace yields the block form
H(MD)
eff
=
HM
V
V⊤
HD
,
where HM and HD are both stoquastic (off-diagonals ≤0), with HD has strictly positive diagonal entries, and
V consists of only nonnegative entries jxx (as each D ∈D is coupled only to one M ∈M).
To identify the ground-state structure, we apply the Rayleigh variational principle to a trial vector (uM, uD),
with uM ≥0 on M. The minimum is attained when uD = −H−1
D V⊤uM. Since V⊤uM ≥0 and H−1
D
≥0
entrywise, the Stage 2 ground state necessarily takes the form (uM, uD), with uM ≥0 and uD ≤0. In other
words, the ground state carries positive amplitudes on the GM-supporting sector and negative amplitudes on the
dependent-set sector, establishing the structural origin of negative components in the Stage 2 ground state. This
analytical prediction is confirmed numerically: Figure 21 plots the total fraction of negative amplitudes as a
function of time.
Figure 21: Emergence of Negative Amplitudes during Stage 2. The plot shows the total fraction of negative
amplitudes in the ground state as a function of annealing time t. Throughout Stage 1, the amplitude distribution
remains strictly non-negative, consistent with confinement to the same-sign block. As Stage 2 begins (t = 0.5),
negative amplitudes emerge around t ≈0.6.
58
Interpretation as sign-generating interference.
The negative amplitudes identified above can be understood
in terms of constructive and destructive interference between same-sign and opposite-sign components. This
provides the physical mechanism we call sign-generating quantum interference.
We illustrate this mechanism in detail using the three-vertex conceptual model (V3) in Section 9.4, which
may also serves as a minimal testbed for probing sign-generating interference experimentally.
Numerical analysis.
Figure 22 compares the energy spectra for the shared-structure case under TFQA and
DIC-DAC-DOA. In TFQA (Jxx = 0), a small-gap anti-crossing is visible, while in DIC-DAC-DOA with Jxx =
2(m−1) this anti-crossing is dissolved. The mechanism is clarified in Figure 23, which decomposes the spectrum
into same-sign and opposite-sign blocks: the same-sign component is lifted while the opposite-sign component
is lowered, producing destructive interference that avoids any new anti-crossing. Finally, Figure 24 illustrates the
large-Jxx regime, where double anti-crossings appear and may cause failure.
In summary, Stage 2 succeeds through sign-generating interference along an opposite-sign path, whereby the
tunneling-induced anti-crossing is dissolved without introducing new ones.
9.4
Three-Vertex Conceptual Model: Illustrating Quantum Interference
We define quantum interference as the superposition of components in a quantum system that project onto the
same basis state and either interfere destructively—reducing amplitude through opposite-sign cancellation—or
constructively—increasing amplitude through same-sign contributions.
To illustrate this mechanism, we introduce the three-vertex model (V3), a minimal example in which in-
terference arises from basis rotation and occurs in both the stoquastic case (Jxx = 0) and the non-stoquastic
case (Jxx > 0). Within this framework, we emphasize the distinctive feature of sign-generating interference: it
produces negative amplitudes in the computational basis.
This section is organized as follows:
• Model Description: Formulation of the full Hamiltonian in both the computational and angular-momentum
bases (Section 9.4.1).
• Quantum Interference via Basis Rotation: How interference emerges from a basis change (Section 9.4.2).
• Numerical Illustration of Sign-Generating Interference: Emergence of negative amplitudes in the non-
stoquastic case (Section 9.4.3).
9.4.1
Model Description
The model consists of three vertices V = {a, b, r}, with associated weights wa, wb, and wr, respectively. The
graph includes two edges, {ab, ar}, each associated with a ZZ-coupling, denoted Jab
zz and Jar
zz . Additionally,
there is an XX-coupling on the edge ab, denoted Jxx.
We bipartition V into L = {a, b} and R = {r}. The set L corresponds to the local minima (LM), while GM =
{b, r}—the maximum independent set—corresponds to the global minimum. See Figure 25 for a visualization
of this setup.
59
(a)
(b)
(c)
(d)
Figure 22: Energy spectra for the shared-structure graph Gshare, with m = ml = 3, mr = 2, mg = 5, and clique
size nc = 9.
(a) TFQA spectrum showing a small-gap anti-crossing; the corresponding gap profile is shown in (c).
(b) DIC-DAC-DOA spectrum with XX-driver at Jxx = 2(m−1); the corresponding gap profile is shown in (d).
60
(a)
(b)
Figure 23: Spectral decomposition for the shared-structure graph Gshare, with m = ml = 3, mr = 2, mg = 5,
and clique size nc = 9. In TFQA (a), the ground state resides entirely in the same-sign block (green dashed),
producing a tunneling-induced anti-crossing. In DIC-DAC-DOA (b), this anti-crossing is dissolved through a see-
saw effect: the same-sign block (green dashed) is lifted while the opposite-sign block (red dashed) is lowered.
61
(a)
(b)
(c)
(d)
Figure 24: Spectral decomposition for the shared-structure graph Gshare, with m = ml = 5, mg = 7, and clique
size nc = 9.
(a) Energy spectrum of the same-sign block under TFQA (Jxx = 0), showing an anti-crossing near t ≈0.9.
(b) Full spectrum under DIC-DAC-DOA with Jxx = 8: the opposite-sign energies (dashed red) remain above the
same-sign ground state (green), confirming successful evolution with no anti-crossing.
(c) Legend identifying curve types across panels.
(d) Large-Jxx case (Jxx = 12): the system exhibits two anti-crossings. A block-level anti-crossing appears near
t ≈0.3 (Stage 1), followed by an interference-involved block-level anti-crossing near t ≈0.8 (Stage 2). Both
gaps may not be small enough to allow the evolution to bypass them. While the first gap can be small, the
evolution remains in the same-sign block and follows the first excited state instead of the true ground state, which
is overtaken by the opposite-sign ground state. The second gap, however, can be large, preventing a transition
back to the true ground state. As a result, the evolution may terminate in the first excited state.
62
a
b
r
c
r
q
Figure 25: (Left) A weighted graph with three vertices {a, b, r}. The graph has two ZZ-couplings (black edges)
on ab and ar, and one XX-coupling (red edge) on ab.
The vertex set is bipartitioned as L = {a, b} and R = {r}. The set L (pink dashed oval) corresponds to the
local minima (LM), while GM = {b, r} (blue dashed oval)—the maximum independent set—represents the global
minimum. A local basis transformation is applied to the L-subsystem.
(Right) The pair {a, b} is transformed into {c, q}, where c is a spin-1
2 subsystem, and q is a spin-0 subsystem
corresponding to an opposite-sign state |⊙q⟩.
Remark 9.10. The V3 model is not meant to demonstrate structural steering in Stage 1. Since the right com-
ponent contains only a single vertex (mr = 1), structrual steering fails, as it requires mr ≥2. The system
therefore undergoes a tunneling-induced anti-crossing in Stage 1. At the end of Stage 1, we assume that the
ground state has successfully tunneled and is now localized on the state {r}, which is a subset of the global
minimum configuration GM = {b, r}.
Hamiltonian and Basis Construction
Each vertex k ∈{a, b, r} is modeled as a spin-1
2 system with transverse field strength proportional to √nk. The
local Hamiltonian at vertex k, in the computational basis {|1k⟩, |0k⟩}, is given by
Bk =
−wk
−
√nk
2 x
−
√nk
2 x
0
,
wk = w −nk−1
4
jxx.
The L-subsystem consists of vertices a and b, coupled via both ZZ- and XX-terms:
HL = Ba ⊗I2 + I2 ⊗Bb + Jab
zz ˜σz
a˜σz
b +
√nanb
4
jxxσx
aσx
b .
In the computational basis {|1a1b⟩, |1a0b⟩, |0a1b⟩, |0a0b⟩}, this becomes:
HL =
|1a1b⟩
|1a0b⟩
|0a1b⟩
|0a0b⟩
|1a1b⟩
Jab
zz −wa −wb
−
√nb
2 x
−
√na
2 x
√nanb
4
jxx
|1a0b⟩
−
√nb
2 x
−wa
√nanb
4
jxx
−
√na
2 x
|0a1b⟩
−
√na
2 x
√nanb
4
jxx
−wb
−
√nb
2 x
|0a0b⟩
√nanb
4
jxx
−
√na
2 x
−
√nb
2 x
0
63
Assuming the coupling Jab
zz is large, we restrict to the low-energy subspace spanned by |1a0b⟩, |0a1b⟩, |0a0b⟩,
yielding:
HL = ΠindHLΠind =
|1a0b⟩
|0a1b⟩
|0a0b⟩
|1a0b⟩
−wa
√nanb
4
jxx
−
√na
2 x
|0a1b⟩
√nanb
4
jxx
−wb
−
√nb
2 x
|0a0b⟩
−
√na
2 x
−
√nb
2 x
0
This restriction removes the high-energy state |1a1b⟩, which is energetically penalized by large Jab
zz .
To uncover the effective angular momentum structure, we apply the basis transformation:
Umerge =
|1c⟩
|0c⟩
|⊙q⟩
|1a0b⟩
q
na
nc
0
−
q
nb
nc
|0a1b⟩
q
nb
nc
0
q
na
nc
|0a0b⟩
0
1
0
where nc = na + nb.
The corresponding transformed basis states are:
|0c⟩
= |0a0b⟩,
|1c⟩
=
q
na
nc |1a0b⟩+
q
nb
nc |0a1b⟩,
|⊙q⟩
= −
q
nb
nc |1a0b⟩+
q
na
nc |0a1b⟩.
In this basis, the restricted Hamiltonian becomes:
HL =
|1c⟩
|0c⟩
|⊙q⟩
|1c⟩
−
w −nc−1
4
jxx
−
√nc
2 x
0
|0c⟩
−
√nc
2 x
0
0
|⊙q⟩
0
0
−
w + 1
4jxx
We specialize to the case nb = 1, na = nc −1, we have wb = wr = w, while wa = w −nc−2
4
jxx. The
corresponding full transformed basis states are:
|0c0r⟩
=
|0a0b0r⟩,
|0c1r⟩
=
|0a0b1r⟩,
|1c0r⟩
=
q
nc−1
nc
|1a0b0r⟩+
q
1
nc |0a1b0r⟩,
|1c1r⟩
=
q
nc−1
nc
|1a0b1r⟩+
q
1
nc |0a1b1r⟩,
|⊙q0r⟩
=
−
q
1
nc |1a0b0r⟩+
q
nc−1
nc
|0a1b0r⟩,
|⊙q1r⟩
=
−
q
1
nc |1a0b1r⟩+
q
nc−1
nc
|0a1b1r⟩.
We now explicitly write down the full Hamiltonian in the two bases:
Full Hamiltonian in the Computational Basis.
64
Hcomp
full
=
|1a0b1r⟩
|1a0b0r⟩
|0a1b1r⟩
|0a1b0r⟩
|0a0b1r⟩
|0a0b0r⟩
|1a0b1r⟩
−2w + nc−2
4
jxx + Jzz
−1
2x
√nc−1
4
jxx
0
−
√nc−1
2
x
0
|1a0b0r⟩
−1
2x
−w + nc−2
4
jxx
0
√nc−1
4
jxx
0
−
√nc−1
2
x
|0a1b1r⟩
√nc−1
4
jxx
0
−2w
−1
2x
−1
2x
0
|0a1b0r⟩
0
√nc−1
4
jxx
−1
2x
−w
0
−1
2x
|0a0b1r⟩
−
√nc−1
2
x
0
−1
2x
0
−w
−1
2x
|0a0b0r⟩
0
−
√nc−1
2
x
0
−1
2x
−1
2x
0
Full Hamiltonian in the Angular Momentum Basis.
Hang
full =
|1c1r⟩
|1c0r⟩
|0c1r⟩
|0c0r⟩
|⊙q1r⟩
|⊙q0r⟩
|1c1r⟩
−2w + nc−1
4
jxx + nc−1
nc Jzz
−1
2x
−
√nc
2 x
0
−
q
nc−1
n2c Jzz
0
|1c0r⟩
−1
2x
−w + nc−1
4
jxx
0
−
√nc
2 x
0
0
|0c1r⟩
−
√nc
2 x
0
−w
−1
2x
0
0
|0c0r⟩
0
−
√nc
2 x
−1
2x
0
0
0
|⊙q1r⟩
−
q
nc−1
n2c Jzz
0
0
0
−2w −1
4jxx + 1
nc Jzz
−1
2x
|⊙q0r⟩
0
0
0
0
−1
2x
−w −1
4jxx
The first four basis states span the same-sign block; the last two belong to the opposite-sign block. Coupling
between blocks arises via the off-diagonal term proportional to Jzz = Jar
zz.
9.4.2
Explaining Quantum Interference via Basis Rotation
At the start of Stage 2, the evolving ground state has been steered into the R-localized region (i.e., |0c1r⟩). As
the evolution progresses, the relevant subspace is projected to I −|0r⟩⟨0r| = |1r⟩⟨1r|, which is spanned by
|1c1r⟩, |0c1r⟩, |⊙q1r⟩
,
with effective Hamiltonian
Hang
eff =
|1c1r⟩
|0c1r⟩
|⊙q1r⟩
|1c1r⟩
−2w + nc−1
4
jxx + nc−1
nc Jzz
−
√nc
2 x
−
q
nc−1
n2c Jzz
|0c1r⟩
−
√nc
2 x
−w
0
|⊙q1r⟩
−
q
nc−1
n2c Jzz
0
−2w −1
4jxx + 1
nc Jzz
(9.5)
The ground state evolves into a superposition of the form
|ψ(t)⟩= ψcr(t) |1c1r⟩+ ψqr(t) |⊙q1r⟩+ ψcr(t) |0c1r⟩,
where all coefficients are nonnegative throughout the evolution as the Hamiltonian is stoquastic (in the angular-
momentum basis). The interference mechanism arises from the angular-momentum basis rotation:
|1c1r⟩=
q
nc−1
nc
|1a0b1r⟩+
q
1
nc |0a1b1r⟩,
|⊙q1r⟩= −
q
1
nc |1a0b1r⟩+
q
nc−1
nc
|0a1b1r⟩.
65
Restricting to the subspace spanned by these orthonormal components, we write
|ψ(t)⟩= ψcr(t) |1c1r⟩+ ψqr(t) |⊙q1r⟩.
Expressed in the computational basis, this becomes
|ψ(t)⟩= α(t) |0a1b1r⟩+ β(t) |1a0b1r⟩,
(9.6)
where
α(t) = ψcr(t)
q
1
nc + ψqr(t)
q
nc−1
nc ,
β(t) = ψcr(t)
q
nc−1
nc
−ψqr(t)
q
1
nc .
(9.7)
Here ψcr(t) and ψqr(t) are the amplitudes on the same-sign basis state |1c1r⟩and the opposite-sign basis state
|⊙q1r⟩, respectively.
Interference picture. The orthogonal components |1c1r⟩and |⊙q1r⟩interfere constructively on |0a1b1r⟩, in-
creasing α(t) (supporting the global minimum), and destructively on |1a0b1r⟩, decreasing β(t) (dependent-set
cancellation).
In the stoquastic case (Jxx = 0), the interference produces only nonnegative amplitudes, and both α(t) and
β(t) remain nonnegative. In the non-stoquastic case (Jxx > 0), however, destructive interference can drive β(t)
negative, providing evidence of sign-generating quantum interference.
To show that β(t) < 0, we express Hang
eff back in the computational basis on the reduced subspace:
Hcomp
eff
=
|0a1b1r⟩
|1a0b1r⟩
|0a1b1r⟩
−2w
√nc−1
4
jxx
|1a0b1r⟩
√nc−1
4
jxx
−2w + nc−2
4
jxx + Jzz
(9.8)
Consistent with the Rayleigh variational argument for H(MD)
eff
above, the ground state of Hcomp
eff
has positive
amplitude on the GM state |0a1b1r⟩, namely α(t) > 0, and negative amplitude on the dependent-set state |1a0b1r⟩,
namely β(t) < 0, throughout Stage 2 (when the effective Hamiltonian is valid).
9.4.3
Numerical Illustration of Sign-Generating Interference
To visualize the mechanism described above, we numerically compute the instantaneous ground state of the V3
Hamiltonian over the course of the annealing evolution during Stage 2. We plot its amplitude components in both
the angular momentum basis and the computational basis, showing how quantum interference arises in both the
stoquastic case (Jxx = 0) and the non-stoquastic case (Jxx = 0.6), and how sign generation emerges only in the
latter.
The comparison highlights the essential distinction between tunneling-based and interference-based evolu-
tion. In the stoquastic case, the system undergoes a tunneling-induced anti-crossing as it transitions from support
on local minima to support on the global minimum. Although quantum interference is present, it remains sign-
preserving, and the system evolves entirely within the non-negative cone. In contrast, when Jxx = 0.6, negative
amplitudes appear—evidence of sign-generating quantum interference. Figures 26 and 27 show the correspond-
ing energy spectra and signed probability evolutions.
66
(a) Jxx = 0 (stoquastic case)
(b) Jxx = 0.6 (non-stoquastic case)
Figure 26: Energy spectrum during annealing for the V3 model at two values of Jxx. (a) For Jxx = 0,
the spectrum exhibits a tunneling-induced anti-crossing near t ≈0.75, where the ground state transitions from
support on local minima to support on the global minimum.
(b) For Jxx = 0.6, no anti-crossing arises in Stage 2. The ground state energy of the opposite-sign block (EAS0
0
,
red dotdashed) remains above the true ground state Etrue
0
.
10
Full-System Extension via Iterative Application
In this section, we extend the analysis to the full system by iteratively applying the two-phase procedure to
successive bipartite substructures. In each iteration, a driver subgraph G(k)
driver—and thus the parameters mk and
n(k)
c —is identified.
The correctness of applying the algorithm iteratively—by incrementally building up the driver graph, i.e.,
∪it
k=1G(k)
driver—follows from the following three facts:
1. The transverse field is global: x = (1 −t)Γ1.
2. Each iteration has its own stage-separation parameter Γ(k)
2 .
3. We use the latest J(k)
zz .
Remark 10.1. The upper bound on Jzz, J steer
zz , which ensures that the ground state steers into the GM-supporting
region during Stage 1, needs to be enforced only in the last iteration. In earlier iterations, the algorithm localizes
into another LM-supporting region (associated with the current critical local minima), so the J steer
zz (and thus J steer
xx )
bound may be violated without affecting the algorithm’s performance. Each iteration applies its own XX-driver
and chooses a Jzz value appropriate to that iteration’s structure. It is only in the final iteration—when the goal is
to steer into the GM-supporting region—that the upper bound on J steer
zz must be respected. This allows for adaptive,
iteration-dependent tuning of Jzz, even when the clique sizes vary or structural conditions differ across iterations.
We set Γ(k)
2
= mk, and define the corresponding transition point tk := 1 −Γ(k)
2
Γ1 ≤1
2. For each iteration k,
we define αk := 2 · Γ(k)
2
−1
Γ(k)
2
, and J(k)
xx := αk Γ(k)
2 . To simplify implementation across multiple iterations, we fix
Γ1 := K · maxk Γ(k)
2 , for some constant K. This choice guarantees that the transverse field schedule for the
67
(a) Jxx = 0
(b) Jxx = 0
(c) Jxx = 0.6
(d) Jxx = 0.6
Figure 27: Signed probability evolution of the instantaneous ground state in the V3 model, shown in two
bases (computational and angular-momentum). Each entry shows sign(ψi) · |ψi|2, where ψi is the amplitude
on basis state i. Color shading encodes signed magnitude: positive (orange), negative (blue), and near-zero
(white).
(a,b) Stoquastic case (Jxx = 0). In (a), all amplitudes remain non-negative, with no blue region. Around
t ≈0.75, a sharp transition occurs, indicative of a tunneling-induced anti-crossing. In (b), interference between
|1c1r⟩and the opposite-sign component |⊙q1r⟩is present but remains sign-preserving. This shows that quantum
interference is present even in the stoquastic case, but it remains sign-preserving.
(c,d) Non-stoquastic case (Jxx = 0.6). In (c), the transition from orange to blue on the dependent-set state
|1a0b1r⟩reflects destructive interference and the onset of a negative amplitude, while constructive interference
amplifies the global-minimum state |0a1b1r⟩. In (d), the same evolution is shown in the angular-momentum
basis: the first four states belong to the same-sign block, and the last two to the opposite-sign block.
68
two-stage evolution, x(t) = (1 −t)Γ1, can be reused identically across all iterations. While each iteration may
use a different structure-dependent value Γ(k)
2 , they all share the same global annealing profile governed by Γ1.
In particular, the system Hamiltonian builds up incrementally. The Hamiltonian at iteration it for Stage 0 is
given by
H0(t) = x(t)HX +
it
X
k=1
jxx(t)(k)
X
(i,j)∈E(G(k)
driver)
σx
i σx
j
+ p(t)H(k)
problem,
t ∈[0, 1],
with x(t) = (1 −t)(Γ0 −Γ1) + Γ1, jxx(t)(k) = tJ(k)
xx , p(t) = t. The transverse-field term is HX = −P
i σx
i ,
and the problem Hamiltonian is
H(k)
problem =
X
i∈V(G)
(−wi) ˜σz
i +
p
X
k=1
Jclique
zz
X
(i,j)∈E(∪it
k=1G(k)
driver)
˜σz
i ˜σz
j
+ J(k)
zz
X
(i,j)∈E(G)\E(∪it
k=1G(k)
driver)
˜σz
i ˜σz
j .
Here, we take J(k)
zz :=
1 +
√
n(k)
c
+1
2
.
The system Hamiltonian during the main two-stage evolution is
H1(t) = x(t)HX +
it
X
k=1
jxx(t)(k)
X
(i,j)∈E(G(k)
driver)
σx
i σx
j
+ H(k)
problem,
t ∈[0, 1].
The transverse field is given by x(t) = (1 −t)Γ1.
The time-dependent coupling jxx(t)(k) for iteration k is given by
jxx(t)(k) =
(
Jxx(k),
for t ∈[0, tk]
⇔
x(t) ∈[Γ(k)
2 , Γ1],
αk x(t),
for t ∈[tk, 1]
⇔
x(t) ∈[0, Γ(k)
2 ].
Figure 28 illustrates how the algorithm progressively removes small-gap obstructions by lifting one critical local
minimum per iteration.
11
Conclusion and Outlook
We have developed a systematic framework for analyzing a non-stoquastic quantum annealing algorithm, DIC-
DAC-DOA, which achieves exponential speedup on a structured family of Maximum Independent Set (MIS)
instances. Our approach departs from the traditional spectral-gap perspective and instead measures the efficiency
of the algorithm by the presence or absence of an anti-crossing. The key idea is to infer it directly from the
crossing behavior of bare energy levels of relavent subsystems, without explicitly constructing the effective two-
level Hamiltonian.
The analytical foundation of our approach rests on three structural components, each supported by analytical
derivations and numerical evidence. First, a block decomposition of the Hamiltonian into same-sign and opposite-
sign components, derived from the angular momentum structure of the dMIC underlying the XX-driver graph.
Second, the identification of a see-saw energy effect induced by the non-stoquastic parameter Jxx, which raises
69
Figure 28: Lifting Anti-Crossings in Two Iterations. Top: Jxx = 0. The energy level associated with the global
minimum (GM) exhibits two anti-crossings with those of two local minima, highlighted by the red dotted oval.
The first LM has ml = 2, nc = 30, and shares both vertices with GM; the second has m1 = 3, nc = 10, and
shares one vertex with GM. The global minimum has mg = 5. Middle: Jxx(1) = 2; the first anti-crossing is lifted,
but the second remains (dotted oval). Bottom: Jxx(2) = 4; both anti-crossings are lifted, and the system evolves
smoothly to the global minimum.
70
the energy associated with local minima in the same-sign block while lowering that of opposite-sign blocks.
Third, the derivation of analytical bounds on Jxx (together with upper bounds on Jzz) that support a two-stage
annealing schedule ensuring smooth ground state evolution without anti-crossings.
Together, these structural components reveal two key mechanisms responsible for the speedup through a
smooth evolution path:
• Structural steering: energy-guided localization within the same-sign block that steers the ground state
smoothly into the GM-supporting region, bypassing tunneling-induced anti-crossings.
• Sign-generating quantum interference: production of negative amplitudes that enables an opposite-sign
path through destructive interference in the computational basis.
The analysis of these mechanisms is supported by both analytical derivations and numerical validation, and
can, in principle, be made rigorous with further work. We now summarize the key structural assumptions and
aspects of the analysis that may be further formalized.
Foundational Assumptions and Future Refinement
• Iterative correctness. We justify that the full-system evolution can be accurately approximated by ana-
lyzing each bipartite substructure in isolation.
• Worst-case structure. We argue that the shared-structure graph represents the worst-case configuration
within each bipartite subproblem. This allows us to conservatively bound the relevant parameters.
• Bare energy approximation. Our key idea is to infer the presence or absence of anti-crossings directly
from the crossing behavior of bare energy levels of relevant subsystems (LM and GM), without explicitly
constructing the effective two-level Hamiltonian. This reduction makes the problem tractable and captures
the essential structure. It can be partially justified by effective Hamiltonian arguments, but a fully rigorous
treatment would require bounding the error of this approximation.
• Inter-block coupling. We argue that Stage 1 is confined to the same-sign block by assuming the inter-
block coupling is weak. Our numerical results confirm that taking Jzz = J steer
zz
suffices in practice, but
deriving a sharp analytical bound on J inter
zz remains an open direction for future refinement.
• Structural steering. We argue that Stage 1 achieves structural steering by ordering the energies of the R-
inner blocks so that the ground state is guided into the R-localized region. This is supported by large-scale
numerical evidence, while a fully rigorous justification remains future work.
• Emergence of negative amplitudes. We demonstrate the emergence of negative amplitudes using the
effective Hamiltonian H(MD)
eff
, which is sufficient for our analysis. A fully rigorous treatment would require
mathematically justifying this construction as the correct effective reduction of the full system.
Quantumness and Absence of Classical Analogues
The emergence of negative amplitudes—produced by sign-generating interference due to the Proper Non-Stoquastic
Hamiltonian—serves as a witness to the quantum nature of the speedup. Classical simulation algorithms, such
as quantum Monte Carlo methods (see [25] and references therein), rely on the non-negativity of wavefunctions
in the computational basis and break down in regimes where interference induces sign structure. This places the
algorithm beyond the reach of eventually stoquastic annealing and efficient classical simulation.
71
From the perspective of classical solvers, the effect of sign-generating interference may be hard to repro-
duce classically. Algorithmically speaking, the XX-driver with appropriately tuned coupling strength Jxx en-
ables a form of collective suppression of local minima, induced by edge couplings that effectively reduce vertex
weights—while selectively preserving the global minimum, even in the presence of partial overlap. To the best of
our understanding, no classical algorithm is able to replicate this effect, which appears to be a genuinely quantum
capability.
Together, these points position DIC-DAC-DOA as a candidate for demonstrating quantum advantage.
Experimental Validation of the Quantum Advantage Mechanism
A by-product of our analysis is that a bipartite substructure graph Gshare with mlnc + mr vertices can be ef-
fectively reduced to 2ml + mr vertices, with nc appearing only as a parameter, see Figure 29. This reduction
enables the construction of small-scale models amenable to experimental verification of quantum advantage. In
particular, a minimal 3-vertex model (with ml = mr = 1) demonstrates the emergence of negative amplitudes
through sign-generating interference, thereby providing a direct coherence test for the negative-amplitude phase
structure in the computational basis, while an 8-vertex model (with ml = 3, mr = 2, mg = ml + mr = 5)
as shown in Figure 29(b), not only exhibits negative amplitudes (see Figures 21) but also shows a measurable
speedup compared to the stoquastic case with Jxx = 0 (see Figures 22 and 23). Both models are within reach of
current gate-model quantum computers via simulation [26, 27].
Contract (nc-1)
(a)
(b)
Figure 29: (a) Schematic contraction. A clique consisting of nc −1 pink vertices and one purple vertex. The
subclique of nc −1 pink vertices is contracted into a single pink vertex. Each pink vertex is ZZ-coupled to both
the purple and blue vertices, while the purple and blue vertices have no direct edge. XX-couplings occur between
pink–pink and pink–purple pairs. (b) Example of a bipartite substructure graph Gshare with mlnc + mr vertices
reduced to 2ml + mr vertices. Shown is the case ml = 3, mr = 2, yielding an 8-vertex effective graph where
each blue vertex is adjacent to each pink vertex. The (global) maximum independent set has size mg = ml +mr,
consisting of all purple and blue vertices.
72
On the Relaxation of the Structured Input Assumption
Our analysis is developed under a structured assumption on the problem graph of MIS, which we refer to as the
GIC assumption: each critical degenerate local minima—corresponding to a set of maximal independent sets of
fixed size—is formed by a set of independent cliques (dMIC). This assumption underlies both the design of the
XX-driver and the block decomposition of the Hamiltonian. The independence of cliques is assumed primarily
to allow efficient identification during Phase I. In principle, one can allow the cliques to be dependent (dMDC),
meaning that some edges are permitted between cliques. In this case, the cliques may need to be identified
heuristically rather than exactly. Under suitable conditions, the algorithm may remain robust even when applied
to dMDC structures. More generally, each critical degenerate local minima might consist of a set of disjoint dMDCs.
A clique-based driver graph may still be constructed in such cases, but the generalization becomes more intricate
and may require further structural insights. Other future directions include the study of weighted MIS instances
and adaptive strategies for selecting or optimizing Jxx during the anneal.
In conclusion, the results presented here establish both a theoretical and numerical foundation for exponen-
tial quantum advantage via the DIC-DAC-DOA algorithm. Beyond the algorithm itself, a key by-product of our
analysis is the identification of effective small-scale models that distill the essential dynamics into experimentally
accessible settings. These models provide a concrete opportunity to verify the quantum advantage mechanism on
currently available gate-model quantum devices through simulation. We hope this work not only offers a founda-
tional framework for quantum optimization algorithm design and analysis, but also inspires the development of
experimental devices equipped with the required XX non-stoquastic couplers, capable of directly implementing
the presented algorithm.
Acknowledgment
This work was written by the author with the help of ChatGPT (OpenAI), which assisted in refining the presen-
tation and in expressing the intended ideas with greater clarity and precision. The author thanks Jamie Kerman
for introducing her to the angular-momentum basis and for early discussions that contributed to the companion
work [8]. We also thank Siyuan Han for helpful comments. We recognize that this manuscript may not cite all
relevant literature. If you believe your work should be included in a future version, please contact the author with
the appropriate references.
References
[1] V. Choi. Essentiality of the Non-stoquastic Hamiltonians and Driver Graph Design in Quantum Optimiza-
tion Annealing. arXiv:2105.02110v2 [quant-ph], 2021.
[2] V. Choi. Constructing and Programming Driver Graphs in Quantum Hardware for Non-Stoquastic Quan-
tum Optimization Annealing Processes. U.S. Patent No. 12,001,924 B2, issued June 4, 2024.
[3] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser.
Quantum computation by adiabatic evolution.
arXiv:quant-ph/0001106, 2000.
[4] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution
algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472–475, 2001.
73
[5] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation
is equivalent to standard quantum computation.
SIAM Journal on Computing, 37(1):166–194, 2007.
Conference version in Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer
Science (FOCS), pages 42–51, 2004.
[6] T. Albash and D. A. Lidar. Adiabatic quantum computation. Rev. Mod. Phys., 90:015002, 2018.
[7] A. Elhashash and D. B. Szyld. On general matrices having the Perron–Frobenius property. Electronic
Journal of Linear Algebra, 17:389–402, 2008.
[8] V. Choi. Limitation of Stoquastic Quantum Annealing: A Structural Perspective, 2025.
[9] E. Crosson, T. Albash, I. Hen, and A. P. Young. De-signing Hamiltonians for quantum adiabatic optimiza-
tion. Quantum, 4:334, 2020.
[10] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM Journal on
Computing, 35(5):1070–1097, 2006.
[11] S. Bravyi, D. DiVincenzo, and D. Loss. Schrieffer–Wolff transformation for quantum many-body systems.
Annals of Physics, 326(10):2793–2826, 2011.
[12] M. H. S. Amin and V. Choi. First-order phase transition in adiabatic quantum computation. Phys. Rev. A,
80(6):062326, 2009. arXiv:0904.1387 [quant-ph].
[13] B. Altshuler, H. Krovi, and J. Roland. Anderson localization makes adiabatic quantum optimization fail.
Proceedings of the National Academy of Sciences, 107:12446–12450, 2010.
[14] V. Choi. Minor-embedding in adiabatic quantum computation: I. The parameter setting problem. Quan-
tum Information Processing, 7:193–209, 2008.
[15] V. Choi. The Effects of the Problem Hamiltonian Parameters on the Minimum Spectral Gap in Adiabatic
Quantum Optimization. Quantum Inf. Processing., 19:90, 2020. arXiv:quant-ph/1910.02985.
[16] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness,
W. H. Freeman, San Francisco, 1979.
[17] S. Tsukiyama, M. Ide, H. Ariyoshi, and I. Shirakawa. A new algorithm for generating all maximal inde-
pendent sets. SIAM J. Comput., 6 (1977), pp. 505–517.
[18] D. S. Johnson, C. H. Papadimitriou, and M. Yannakakis. On generating all maximal independent sets.
Information Processing Letters, 27:119–123, 1988.
[19] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th
Annual ACM Symposium on the Theory of Computing (STOC), pages 212–219, 1996.
[20] J. Roland and N. J. Cerf. Quantum search by local adiabatic evolution. Phys. Rev. A, 65:042308, 2002.
[21] R. E. Tarjan and A. E. Trojanowski. Finding a maximum independent set. SIAM Journal on Computing,
6(3):537–546, 1977.
74
[22] S. Lamm, C. Schulz, D. Strash, R. Williger, and H. Zhang. Exactly solving the maximum weight in-
dependent set problem on large real-world graphs. In Proceedings of the 21st Workshop on Algorithm
Engineering and Experiments (ALENEX), pages 71–85, 2019.
[23] A. Braida, S. Chakraborty, A. Chaudhuri, J. Cunningham, R. Menavlikar, L. Novo, and J. Roland.
Unstructured Adiabatic Quantum Optimization: Optimality with Limitations. Quantum, 9:1790, 2025.
arXiv:2411.05736v3 [quant-ph].
[24] V. Choi. Different adiabatic quantum optimization algorithms for the NP-complete exact cover and 3SAT
problems. Quantum Info. Comput., 11(7–8):638–648, July 2011.
[25] I. Hen. Determining quantum Monte Carlo simulability with geometric phases. Phys. Rev. Research,
3(2):023080, 2021.
[26] S. Lloyd. Universal quantum simulators. Science, 273(5278):1073–1078, 1996.
[27] D. W. Berry, G. Ahokas, R. Cleve, and B. C. Sanders. Efficient quantum algorithms for simulating sparse
Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, 2007.
Appendix A: Font Conventions for Notation
We adopt the following conventions throughout:
• Hilbert space / Subspace / Basis: calligraphic, e.g. V, B.
• Hamiltonian / Matrix: blackboard bold, e.g. H, B.
• Time-dependent quantity: typewriter, e.g. x := x(t), jxx := jxx(t).
• Named object / Abbreviation: capital typewriter, e.g. LM, GM, MLIS.
75
|
Beyond Stoquasticity: Structural Steering and Interference in Quantum Optimization Vicky Choi Gladiolus Veritatis Consulting Co.* September 23, 2025 Abstract We present a theoretical analysis of the DIC-DAC-DOA algorithm, a non-stoquastic quantum algorithm for solving the Maximum Independent Set (MIS) problem. The algorithm runs in polynomial time and achieves exponential speedup over both transverse-field quantum annealing (TFQA) and classical algorithms on a structured family of NP-hard MIS instances, under assumptions supported by analytical and numerical evidence. The core of this speedup lies in the ability of the evolving ground state to develop both positive and negative amplitudes, enabled by the non-stoquastic XX-driver. This sign structure permits quantum interference that produces negative amplitudes in the computational basis, allowing efficient evolution paths beyond the reach of stoquastic algorithms, whose ground states remain strictly non-negative. In our analysis, the efficiency of the algorithm is measured by the presence or absence of an anti-crossing, rather than by spectral gap estimation as in traditional approaches. The key idea is to infer it from the crossing behavior of bare energy levels of relevant subsystems associated with the degenerate local minima (LM) and the global minimum (GM). The cliques of the critical LM, responsible for the anti-crossing in TFQA, can be efficiently identified to form the XX-driver graph. Based on the clique structure of LM, we construct a decomposition of the Hilbert space into same-sign and opposite-sign sectors, yielding a corresponding block-diagonal form of the Hamiltonian. We then show that the non-stoquastic XX-driver induces a see-saw effect that shifts their bare energies, leading to analytically derived bounds on Jxx which support a two-stage annealing schedule that prevents anti-crossings throughout the evolution. The resulting speedup can be attributed to two mechanisms: in the first stage, energy-guided localization within the same-sign block steers the ground state smoothly into the GM-supporting region, while in the second stage, the opposite-sign blocks are invoked and sign-generating quantum interference drives the evolution along an opposite-sign path. Finally, our analysis produces scalable small-scale models, derived from our structural reduction, that capture the essential dynamics of the algorithm. These models provide a concrete opportunity for verification of the quantum advantage mechanism on currently available universal quantum computers. *https://www.vc-gladius.com 1 18 Sep 2025 Contents 1 Introduction 4 2 MIS Problem and Jzz Coupling 9 2.1 The Role of Jzz in the MIS-Ising Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 3 Classical Hard Instances of MIS and the GIC Instances 10 3.1 Necessary Classical Hardness Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 3.2 dMIC: Structured Form of a deg-MLIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 3.3 GIC and Its Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.3.1 Bipartite Substructures: Gdis and Gshare . . . . . . . . . . . . . . . . . . . . . . . . . . 13 4 Revised DIC-DAC-DOA: System Hamiltonian and Annealing Schedule 15 4.1 Recall: System Hamiltonian of DIC-DAC-DOA . . . . . . . . . . . . . . . . . . . . . . . . . . 15 4.2 Revised Annealing Schedule: Stages 0, 1 and 2 . . . . . . . . . . . . . . . . . . . . . . . . . . 15 5 Anti-Crossing and the Two-Level Hamiltonian B(w, x) 16 5.1 Level Repulsion vs. Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 5.1.1 Anti-crossing in a Multi-level System . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 5.1.2 Types of Anti-Crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 5.2 The Basic Matrix B(w, x): Eigenvalues and Eigenstates . . . . . . . . . . . . . . . . . . . . . 20 6 A Single Clique: Decomposition and Low-Energy Subspace Analysis 20 6.1 Decomposition of the Clique Space: Low and High Energy Subspaces . . . . . . . . . . . . . . 21 6.2 Angular-Momentum Decomposition of the Low-Energy Subspace . . . . . . . . . . . . . . . . 21 6.2.1 Low-Energy Subspace in Total Angular Momentum Basis Ba . . . . . . . . . . . . . . 21 6.2.2 Spin Operators in the B′ a Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 6.2.3 Decomposition into Spin-1 2 and Spin-0 Components . . . . . . . . . . . . . . . . . . . 24 6.2.4 Full Spectrum and the Jxx See-Saw Effect . . . . . . . . . . . . . . . . . . . . . . . . . 25 6.2.5 Transformation via Two Coupled Subcliques . . . . . . . . . . . . . . . . . . . . . . . 26 7 Analytical Solution for the Bare Subsystem (dMIC) 28 7.1 Low-Energy Structure of the Bare Subsystem: Same-Sign and Opposite-Sign Sectors . . . . . . 29 7.2 Closed-Form Solution of the Same-Sign Block HCbare . . . . . . . . . . . . . . . . . . . . . . . 30 7.3 Uniform Clique Size: Symmetric-Subspace Reduction of the Same-Sign Block . . . . . . . . . 31 7.4 Block Energy Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 8 Analysis of Stage 0 34 8.1 Decomposition of the Hilbert Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 8.2 Effective Hamiltonian at the End of Stage 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 8.3 Spectral Gap Behavior in Stage 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 9 Main Analysis of the Two-Stage Dynamics on Bipartite Structures 38 9.1 Block Decomposition of the Full Hamiltonian: Same-Sign vs. Opposite-Sign Blocks . . . . . . 39 9.1.1 The Disjoint-Structure Graph: Block-Diagonal Structure via Clique Contraction . . . . 41 9.1.2 The Shared-Structure Graph: Modification to the Disjoint Case . . . . . . . . . . . . . 43 2 9.2 Inner Decompositions of the Same-Sign Block during Stage 1 . . . . . . . . . . . . . . . . . . 45 9.2.1 Two Block Decompositions of HC: L-Inner vs. R-Inner . . . . . . . . . . . . . . . . . 45 9.2.2 Definition of Structural Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 9.3 Two-Stage Evolution and Feasibility Bounds on Jxx . . . . . . . . . . . . . . . . . . . . . . . . 49 9.3.1 Analytical Bounds on Jxx and the Feasibility Window . . . . . . . . . . . . . . . . . . 49 9.3.2 Stage 1: Structural Steering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 9.3.3 Stage 2: Smooth Evolution to GM via an Opposite-Sign Path . . . . . . . . . . . . . . . 53 9.4 Three-Vertex Conceptual Model: Illustrating Quantum Interference . . . . . . . . . . . . . . . 59 9.4.1 Model Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 9.4.2 Explaining Quantum Interference via Basis Rotation . . . . . . . . . . . . . . . . . . . 65 9.4.3 Numerical Illustration of Sign-Generating Interference . . . . . . . . . . . . . . . . . . 66 10 Full-System Extension via Iterative Application 67 11 Conclusion and Outlook 69 3 1 Introduction A distinctive feature of quantum mechanics is that the wavefunction of a quantum state, expressed in a given basis, can have both positive and negative amplitudes-a characteristic with no classical counterpart, as classical probabilities must be non-negative. Motivated by the goal of harnessing this intrinsically quantum property, we proposed the DIC-DAC-DOA algorithm in [1, 2] for the NP-hard Maximum Independent Set (MIS) problem. The acronym originally stood for Driver graph from Independent Cliques, Double Anti-Crossing, Diabatic quantum Optimization Annealing. The algorithm modifies standard transverse-field quantum annealing (TFQA) [3, 4, 5, 6] by adding a specially designed XX-driver term, aiming to overcome small-gap anti-crossings. For completeness, we include the full algorithmic details and necessary refinements in this paper. We now recall the system Hamiltonian for DIC-DAC-DOA: H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem, where HX = -P i σx i , HXX = P (i,j)∈E(Gdriver) σx i σx j , and Hproblem is the MIS-Ising Hamiltonian defined in Equation (2.3). The time-dependent parameter schedule jxx(t) depends on the XX-coupling strength Jxx. In particular, jxx(t) ≡0 when Jxx = 0, so the case Jxx = 0 corresponds to TFQA, without the XX-driver. The system Hamiltonian is stoquastic (in the computational basis) if Jxx ≤0, and non-stoquastic if Jxx > 0. The goal of our analysis is to demonstrate how and why an appropriately chosen non-stoquastic (i.e., positive) coupling strength Jxx enables exponential speedup with DIC-DAC-DOA. Specifically, we develop a theoretical framework for understanding the role of Jxx in shaping the Hamiltonian structure and guiding ground state evolution. We show that the algorithm succeeds in polynomial time by successively dissolving small-gap anti-crossings, one by one, for a structured class of input instances, called GIC graphs. Despite their structure, we argue in Section 3 that GIC graphs exhibit classical hardness: solving MIS on such instances would require exponential time unless P = NP. Our analysis methods combine analytical techniques and numerical verification, with perturbative and effective Hamiltonian arguements that can in principle be made rigorous with further work. First, we introduce some necessary terminology. Throughout this paper, all Hamiltonians considered are real and Hermitian. Accordingly, we restrict our attention to quantum states with real-valued amplitudes. That is, the phase of each component is either 0 (corresponding to a positive sign) or π (corresponding to a negative sign). We now formalize the sign structure of quantum states, which plays a central role in our analysis: Definition 1.1. Let |ψ⟩= P x∈B ψ(x)|x⟩be a quantum state with real amplitudes in a basis B. • |ψ⟩is called a same-sign state if ψ(x) ≥0 for all x ∈B. That is, all components are in phase (with relative phase 0). • |ψ⟩is called an opposite-sign state if there exist x, x′ ∈B such that ψ(x) > 0 and ψ(x′) max{wi, wj} for all (i, j) ∈E(G). This formulation slightly modifies that of [14, 1], replacing σz with the shifted-σz operator ̃σz i := I+σz i 2 , whose eigenvalues are {0, 1}, making the correspondence with the classical energy function direct: E(G) = X i∈V(G) (-wi)xi + X (i,j)∈E(G) Jijxixj, (2.2) where xi ∈{0, 1}. The energy is minimized when no two adjacent vertices i, j satisfy xi = xj = 1, ensuring that the ground state corresponds to mis(G). For convenience, we refer to all Jij as Jzz, although these couplings need not be uniform, even in the unweighted case. The only requirement is that Jzz > max{wi, wj} for each edge (i, j) ∈E(G). In the unweighted case (wi = 1), an independent set of size m has energy Eind = -m, while a dependent set has energy Edep = Jzz · (#edges) -(#vertices). Thus, large Jzz creates a large energy separation between independent-set states and dependent-set states. In principle, Jzz can be chosen arbitrarily large; however, excessively large values of Jzz can be detrimental. With the use of the XX-driver in DIC-DAC-DOA, the dependent-set states can act as "bridges" between different groups of independent-set states to facilitate the smooth structural steering (to be elaborated in Section 9.3.2). The appropriate choice of Jzz is crucial for the success of the algorithm. For DIC-DAC-DOA, we use two different couplings: • Jclique zz : assigned to edges within cliques of the driver graph; • Jzz: assigned to all other edges. That is, the MIS-Ising problem Hamiltonian takes the form Hproblem = X i∈V(G) (-wi) ̃σz i + Jclique zz X (i,j)∈E(Gdriver) ̃σz i ̃σz j + Jzz X (i,j)∈E(G) (Gdriver) ̃σz i ̃σz j . (2.3) The value of Jclique zz is set sufficiently large to restrict the system to the clique low-energy subspace, while Jzz must satisfy two upper bounds, J inter zz and J steer zz , to ensure the success of the algorithm. The precise role of these bounds will be analyzed in Section 9.3. 3 Classical Hard Instances of MIS and the GIC Instances Recall that an independent set is maximal if no larger independent set contains it. Each maximal independent set corresponds to a local minimum of the energy function in Eq. (2.2). A collection of maximal independent sets (MLIS) all having the same size m corresponds to a set of degenerate local minima with equal energy -m. When the meaning is clear (i.e., they all have the same size), we refer to such a collection simply as a deg-MLIS, and its cardinality as the degeneracy. In this work, we use the terms degenerate local minima and deg-MLIS interchangeably. 10 This section explores the relationship between classical hardness and the structure of deg-MLIS, with a focus on how degeneracy influences computational complexity in both classical and quantum settings. In Section 3.1 we show that classical hardness requires exponentially many MLIS, with at least one deg-MLIS exhibiting exponential degeneracy. In Section 3.2 we introduce a structured form of degeneracy, denoted by dMIC. Finally, in Section 3.3 we define the class of Graphs with Independent Cliques (GIC), together with a reduction to fundamental bipartite structures-Gdis and Gshare-that capture both classical hardness and the structural features exploited by our algorithm. 3.1 Necessary Classical Hardness Conditions The MIS problem was among the first shown NP-complete [16]. A simple branching algorithm [17, 18] without pruning (i.e., branch-without-bound) enumerates all MLIS in time O(N · #MLIS), where #MLIS denotes the number of sets in MLIS. Thus, MIS can be solved by identifying the largest among them. In the worst case, #MLIS = O(3N/3), though often significantly smaller-yet still exponential. Observation 1. A necessary condition for classical hardness is that the MIS instance contains exponentially many MLIS. Observation 2. A necessary consequence is that at least one deg-MLIS must exhibit exponential degeneracy. Observation 1 follows directly from the enumeration algorithm above. Observation 2 follows from the pigeonhole principle: since an independent set can have at most N distinct sizes, at least one size class must contain exponentially many MLIS. Otherwise, if all deg-MLIS had at most polynomial degeneracy, the total number of MLIS would be polynomial-contradicting Observation 1. As a side note, it is worth emphasizing that classical algorithms for MIS outperform Grover's unstructured quantum search [19, 20], which requires O(2N/2) time. As early as 1977, a simple and implementable algorithm achieved a runtime of O(2N/3) via careful branching analysis [21], and this exponent has been further improved over time using more sophisticated branch-and-bound techniques, see [22] for references. Since other optimization problems such as EXACT-COVER can be reduced to MIS with the same problem size [24], it follows that unstructured adiabatic quantum optimization [23]-while perhaps of theoretical interest-offers no practical algorithmic advantage. 3.2 dMIC: Structured Form of a deg-MLIS It is widely believed that a deg-MLIS with exponential degeneracy causes a tunneling-induced anti-crossing in TFQA. We call such a deg-MLIS critical. The exponentially many MLIS in a critical deg-MLIS must exhibit substantial vertex-sharing. This suggests a natural partitioning of the involved vertices into k cliques, where each maximal independent set includes exactly one vertex from each clique. However, efficiently identifying such a partition may not always be feasible, and multiple partitions may be needed to fully characterize a critical deg-MLIS. In its simplest structured form, a critical deg-MLIS can be represented as a dMIC, namely a collection of mutually independent cliques whose vertices together generate exactly all the MLIS in that deg-MLIS. Definition 3.1. A dMIC of size k consists of k independent cliques (i.e., no edges exist between them), denoted as Clique(wi, ni), where wi is the vertex weight and ni is the clique size, for 1 ≤i ≤k. Each maximal independent set in the corresponding deg-MLIS is formed by selecting exactly one vertex from each clique. In this case, the degeneracy of the dMIC is given by Qk i=1 ni. 11 Moreover, given any single maximal independent set as a seed, all k cliques in the dMIC can be identified in linear time due to the independence condition. For each vertex in the seed, the corresponding clique is obtained by collecting all vertices that are mutually adjacent to it and simultaneously non-adjacent to all other seed vertices. This procedure depends only on vertex adjacencies in the graph. Under the assumption of the dMIC structure, we analytically establish in the companion paper [8] that TFQA encounters a tunneling-induced anti-crossing with an exponentially small gap. (This generalizes the so-called perturbative crossing, which is defined only in the small transverse-field regime.) While such an (LM, GM)- anti-crossing is typically a bottleneck for reaching the global minimum GM, here we turn it around and use it constructively to reach the local minima LM (the obstacle) instead. More specifically, a polynomial-time TFQA algorithm evolves adiabatically through this anti-crossing, tracking the energy associated with LM (blue) instead of transitioning to the global minimum GM (red), as illustrated in Figure 5. This outputs a maximal independent set configuration from LM, which can be used to seed the IC-based driver construction in Phase I of the DIC-DAC-DOA algorithm; see Table 1. Remark 3.2. When the cliques are dependent-that is, edges may exist between them-we refer to the structure as an dMDC, a deg-MLIS formed by a set of dependent cliques. This represents a relaxed structure to which our analysis may be generalized. The main difficulty in this relaxation is that the cliques can no longer be unambiguously identified, unlike in the dMIC case. Further discussion is deferred to future work. Terminology and Abbreviations. For convenience, we summarize below the key abbreviations and their meanings. These notions are closely related: a deg-MLIS corresponds to degenerate local minima, while dMIC and dMDC describe structured forms of such degeneracy. Abbreviation Meaning MLIS Maximal independent sets deg-MLIS A collection of MLIS of equal size, corresponding to degenerate local minima dMIC A deg-MLIS formed by a set of independent cliques dMDC A deg-MLIS formed by a set if dependent cliques 3.3 GIC and Its Hardness We now define the class of structured problem instances assumed in this paper. A Graph with Independent Cliques (GIC) is a graph whose structure satisfies the following properties: • Each critical deg-MLIS in the graph assumes the structure of a dMIC; • It has a unique (global) maximum independent set, denoted as GM. which may be viewed as a dMIC consisting entirely of singleton cliques; • The GM shares at least one vertex with at least one clique in some critical dMIC. A critical dMIC contains exponentially many MLIS, and its partial overlap with the GM obscures the global structure, making it difficult for classical algorithms to distinguish the true optimum from nearby local maxima. The classical hardness of GIC instances stems from these same two features: (1) the exponential number of 12 maximal independent sets in the dMIC, and (2) the partial vertex overlap between the global maximum and these local maxima, which creates ambiguity and hinders global optimization. While we do not prove the classical hardness of GIC in the formal complexity-theoretic sense (otherwise we would have proven P = NP, we note that the definition can be further strengthened if needed. For example, one may require that the graph contains at least two critical dMICs, and that most vertices of the GM belong to cliques in these dMICs. In other words, GIC instances are designed to be structured enough for rigorous analysis, yet rich enough to present classical difficulty. Reduction to Fundamental Bipartite Structures. Our key argument is that the analysis of any general GIC instance can be reduced to a sequence of simpler bipartite substructures, each formed by a critical dMIC and the global maximum (GM). Recall that each dMIC corresponds to a set of degenerate local minima (LM) in the MIS-Ising energy landscape. The terms LM and dMIC are thus used interchangeably. We use GM to refer both to the (global) maximum independent set and to its corresponding global minimum in the energy landscape. In what follows, we use LM and GM to denote the dMIC and GM in each such bipartite substructure, respectively. 3.3.1 Bipartite Substructures: Gdis and Gshare We consider two bipartite substructures: • Gdis: The local minima (LM) and the global minimum (GM) are vertex-disjoint. • Gshare: The GM shares exactly one vertex with each clique in the LM. We begin with the disjoint-structure graph Gdis = (V, E), in which the vertex set V is partitioned into left and right (disjoint) vertex sets, with the following structural properties: • The left component is defined by a set L = {C1, . . . , Cml} of ml disjoint cliques, each denoted Ci = Clique(wi, ni). We let VL = Sml i=1 Ci denote the full vertex set. • The right component R consists of mr independent vertices, each with weight wr. • Every vertex in VL is adjacent to every vertex in R. In this paper we mainly focus on the unweighted MIS case, and assume uniform weights wi = wr = w. Under the MIS-Ising mapping with these uniform weights, VL corresponds to the degenerate local minima (LM) with degeneracy Qml i=1 ni, while R defines the global minimum (GM) with mg = mr. Each local minimum (in LM) corresponds to a maximal independent set of size ml, and thus has energy -ml. The global minimum (GM) has energy -mg. We now define the shared-structure graph Gshare, which differs from Gdis in that each vertex in R is adjacent to all but one vertex in each clique of L. This modification allows the GM to include shared vertices from the cliques in L, thereby introducing overlap with the LM. Structurally, L and R are defined exactly as in Gdis, but with the adjacency rule modified as above. Specifically, the global maximum GM consists of one shared vertex from each clique Ci ∈L together with all mr independent vertices in R, yielding a total size mg = ml + mr. Figure 3 illustrates both cases. For convenience, we write m := ml, dropping the subscript l when no confusion arises. We assume Pm i=1 √ni > mg, so that an anti-crossing is induced by the competition between LM and GM. 13 1 2 4 3 a 5 6 8 7 c b (a) The LM and GM are vertex-disjoint. 1 2 4 3 a 5 6 8 7 c b (b) The GM shares exactly one vertex with each clique in the LM. Figure 3: Example graphs illustrating the Gdis and Gshare structures. Recall that each LM here is structurally a dMIC. (a) Disjoint-structure graph Gdis: The set L consists of ml = 2 disjoint cliques, each of size n1 = n2 = 4, with their vertices (pink) forming the local minima LM. The set R (blue) consists of mr = 3 independent vertices, forming the global minimum GM. (b) Shared-structure graph Gshare: The set L again consists of two cliques of size n1 = n2 = 4, with pink and purple vertices. The purple vertices (one per clique) are shared between LM and GM. The set R (blue) contains mr = 3 independent vertices. The global minimum consists of all vertices in R, together with the shared purple vertices in L, giving mg = 5. In both cases, edges between the pink vertices in L and all vertices in R are complete, though not all are shown for visual clarity. These bipartite substructures represent two extreme regimes of connectivity and overlap between the critical LM and the GM. One can view Gdis as a special case of Gshare in which no vertices are shared between LM and GM. Note that Gdis is not classically hard on its own (for example, one can remove all LM vertices and directly identify the GM). However, it may still arise as a substructure during a single iteration of the full algorithm, as discussed in Section 10. We consider Gdis separately here for two reasons. First, in the Gdis case, the relevant blocks of the Hamiltonian are decoupled, allowing the mechanisms of our algorithm to be simply illustrated. Second, it highlights the core distinction introduced by shared vertices: namely, that a single shared vertex can enable block coupling and necessitate an upper bound on Jxx to prevent an interference-involved block-level anti-crossing. We note that for the TFQA case (Jxx = 0), we argue in the companion paper [8] that it is sufficient to analyze only the disjoint-structure case (for the presence of the anti-crossing and the gap size). Remark 3.3. Our reduction to bipartite substructures is not presented as a formal derivation, but rather as a structural perspective informed by effective Hamiltonian techniques. Each individual anti-crossing is primarily governed by the interaction between a set of critical local minima and the global minimum. This motivates a stepwise reduction to low-energy effective Hamiltonians that couple the GM to one critical structure at a timeyielding a bipartite substructure, with connectivity and shared vertices bounded by the Gshare worst case. While 14 a fully rigorous justification of this reduction is beyond the scope of this paper, our detailed analysis of the Gshare instance, together with numerical examples involving multiple local minima in Section 10, provides supporting evidence for the validity of this bipartite reduction perspective. 4 Revised DIC-DAC-DOA: System Hamiltonian and Annealing Schedule In this section, we first recall the system Hamiltonian used in the original DIC-DAC-DOA algorithm, then introduce the revised annealing schedule. The full evolution consists of an optional initialization phase (Stage 0), followed by the two-stage algorithmic core (Stage 1 and Stage 2). 4.1 Recall: System Hamiltonian of DIC-DAC-DOA The system acts on a Hilbert space of N spin-1 2 particles, one for each vertex of the problem graph G. The Hamiltonian is expressed in terms of the spin operators Sx = 1 2σx and S ̃z = ̃σz, where ̃σz denotes the shifted σz operator. The system Hamiltonian is defined in terms of a time-dependent annealing schedule: H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem, where HX = - X i σx i , HXX = X (i,j)∈E(Gdriver) σx i σx j . The problem Hamiltonian Hproblem, introduced in Section 2, is recalled here for completeness: Hproblem = X i∈V(G) (-wi) ̃σz i + Jclique zz X (i,j)∈E(Gdriver) ̃σz i ̃σz j + Jzz X (i,j)∈E(G) (Gdriver) ̃σz i ̃σz j . The original annealing parameters evolve as: x(t) = (1 -t)Γ, jxx(t) = t(1 -t)Jxx, p(t) = t. We revise DIC-DAC-DOA by modifying its annealing schedule, specifically the functions x(t), jxx(t), and p(t). 4.2 Revised Annealing Schedule: Stages 0, 1 and 2 We describe the system using two Hamiltonians: H0(t), defined for t ∈[0, 1], corresponding to the optional Stage 0; and H1(t), also defined over t ∈[0, 1], governing the evolution during Stages 1 and 2. There are three distinct values for the transverse field: Γ0 > Γ1 > Γ2. The three stages are distinguished by the values of these Γ's: the transverse field is reduced from Γ0 to Γ1 during Stage 0, from Γ1 to Γ2 during Stage 1, and from Γ2 to 0 during Stage 2. Stage 0 (optional). This stage begins in the uniform superposition state and evolves adiabatically toward the ground state of H0(1) = H1(0). The Hamiltonian for Stage 0 is H0(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem, t ∈[0, 1], with x(t) = (1 -t)(Γ0 -Γ1) + Γ1, jxx(t) = tJxx, p(t) = t. During this stage, the problem parameters wi, Jclique zz , and Jzz are gradually ramped to their final values as p(t) increases from 0 to 1, while jxx(t) increases linearly to its target value Jxx. If the ground state of H0(1) = H1(0) can be prepared directly-e.g., via quantum hardware initialization- then Stage 0 may be omitted. 15 Main Stages 1 and 2 (algorithmic core). The system Hamiltonian during the main two-stage evolution is H1(t) = x(t)HX + jxx(t)HXX + Hproblem, t ∈[0, 1]. The annealing schedule divides into two stages, determined by a structure-dependent parameter Γ2. We define the transition time tsep := 1 -Γ2 Γ1 , so that Stage 1 corresponds to t ∈[0, tsep], and Stage 2 to t ∈[tsep, 1]. The transverse field is given by x(t) = (1 -t)Γ1, which decreases linearly from Γ1 at t = 0 to Γ2 at t = tsep, and then to 0 at t = 1. The XX-driver schedule is jxx(t) = ( Jxx for t ∈[0, tsep] ⇔x ∈[Γ2, Γ1], αx(t) for t ∈[tsep, 1] ⇔x ∈[0, Γ2], where Jxx = αΓ2. That is, jxx(t) remains constant during Stage 1, and both x(t) and jxx(t) ramp linearly to zero during Stage 2. The parameter α controls the strength of the XX-coupling relative to the transverse field during Stage 2. Its value is critical and will be determined analytically in Section 9.3.1. Remark 4.1 (Time-Dependent Quantities and Font Convention). Note that jxx can be reparametrized as a function of x5. Accordingly, the Hamiltonian H1 can be viewed as a function of the transverse field x, which decreases monotonically with t. This reparametrization is particularly useful when describing multiple iterations of the algorithm, in which each substructure evolves under a Hamiltonian governed by x. This dual viewpoint allows us to parametrize the system either explicitly in t or implicitly in terms of x. When there is no ambiguity, we often omit the parameter t and use text font (e.g., x, jxx) to indicate time dependence. Parameter Values. The values of Γ0, Γ1, Γ2, Jxx (α), and Jzz are critical to the success of the algorithm. In particular, suitable values of Γs and Jzz are chosen in relation to the analytical bounds on Jxx, ensuring that the effective Hamiltonian supports the desired localization and interference behavior. These parameter choices will be specified and justified in the subsequent analysis. Remark 4.2. After Stage 0, the system evolves under the full Hamiltonian H1 defined above. However, our analysis focuses on an effective Hamiltonian Heff 1 , Eq. (8.1), derived in Section 8. This effective Hamiltonian captures the essential dynamics responsible for the algorithmic speedup and serves as the foundation for the analytical and numerical investigations presented in the following sections. 5 Anti-Crossing and the Two-Level Hamiltonian B(w, x) The term anti-crossing is sometimes used loosely, so we begin with a precise notion in the two-level case, then extend it to multi-level systems, and finally classify different types of anti-crossing. We also introduce a canonical two-level Hamiltonian whose eigensystem will be used throughout our analysis. 5Since x = (1 -t)Γ1 decreases monotonically from Γ1 to 0, each value t ∈[0, 1] corresponds uniquely to a transverse field value x ∈[0, Γ1]. This one-to-one correspondence allows us to equivalently express the annealing schedule in terms of x rather than t, as indicated by the domain annotations. 16 1 Γ0 x w Jxx Γ2 0 Stage 0 Stage 2 Γ1 Jzz Stage 1 Jclique zz Figure 4: Annealing parameter schedule for the system Hamiltonian H(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem. Transverse field x(t) (blue) begins at Γ0, decreases to Γ1 in Stage 0, then to Γ2 in Stage 1, and reaches 0 at the end of Stage 2. The XX-coupling strength jxx(t) (red) increases linearly to Jxx in Stage 0, remains constant in Stage 1, and decreases to 0 in Stage 2. Vertex weights wi (orange) and ZZ-couplings Jzz, Jclique zz (green) ramp to final values in Stage 0 and remain fixed thereafter. 5.1 Level Repulsion vs. Anti-Crossing We begin by distinguishing the concept of an anti-crossing (also called avoided-crossing) from level repulsion. Consider a generic two-level Hamiltonian of the form H(x) := e1(x) v(x) v(x) e2(x) , where e1(x), e2(x), and v(x) are real-valued functions of a parameter x. The eigenvalues of this Hamiltonian are λ±(x) = e1(x)+e2(x) 2 ± 1 2 p (e1(x) -e2(x))2 + 4v(x)2, and the energy gap between them is ∆(x) := λ+(x) -λ-(x) = p (e1(x) -e2(x))2 + 4v(x)2. The off-diagonal term v(x) induces level repulsion: if v(x) ̸= 0, then the eigenvalues never cross, and the gap ∆(x) remains strictly positive. Thus, assuming the off-diagonal coupling v(x) is nonzero, level repulsion is always present. Definition 5.1. We say that an anti-crossing occurs when the two unperturbed energy levels e1(x) and e2(x) cross, i.e., e1(x∗) = e2(x∗) for some x∗, and the off-diagonal coupling v(x∗) ̸= 0. In this case the eigenvalue curves form an anti-crossing with gap ∆min = 2|v(x∗)|. The size of the anti-crossing gap depends on |v(x∗)|: stronger coupling leads to a larger gap, while weaker coupling results in a narrower one. By contrast, if the two diagonal entries e1(x) and e2(x) remain well separated for all x, then the system exhibits level repulsion but not an anti-crossing. Figure 6 illustrates an example of level repulsion without an anti-crossing. 17 The eigenvectors of the two-level Hamiltonian are given by |λ-(x)⟩= cos θ(x) |0⟩+ sin θ(x) |1⟩, |λ+(x)⟩= -sin θ(x) |0⟩+ cos θ(x) |1⟩, where the mixing angle θ(x) satisfies tan(2θ(x)) = 2v(x) e1(x)-e2(x). Thus, near the anti-crossing point x = x∗, the eigenstates interpolate between the unperturbed basis states. Remark 5.2. The trigonometric expression for eigenvectors in terms of the mixing angle θ(x), is equivalent to the rational-form representation |λ-(x)⟩= 1 √ 1+γ(x)2 (γ(x) |0⟩+ |1⟩) , |λ+(x)⟩= 1 √ 1+γ(x)2 (|0⟩-γ(x) |1⟩) , where the two parametrizations are related by γ(x) = 1 tan θ(x), and tan(2θ(x)) = 2v(x) e1(x)-e2(x). This rationalform expression is particularly useful for our analysis, as it aligns directly with the basic matrix form introduced below. Remark 5.3. While the explicit forms of the eigenvectors are not directly used in this paper, they are included here for completion, and used in the companion paper [8] for bounding the anti-crossing gap. Our earlier work [1, 15], including the development of the original DIC-DAC-DOA algorithm, was motivated by investigating the structural characteristics of eigenstates around the anti-crossing. 5.1.1 Anti-crossing in a Multi-level System In a multi-level system, the notion of an anti-crossing extends naturally by restricting the Hamiltonian to the twodimensional subspace spanned by the pair of eigenstates whose unperturbed energies intersect. This reduction yields a 2 × 2 effective Hamiltonian that captures the essential structure of the anti-crossing, including both the energy gap and the interpolating behavior of the eigenstates. Thus, the same framework as in the two-level case applies. With this perspective, we refine the definition of an (L, R)-anti-crossing given in [1]. Recall that EA 0 (t) denotes the ground state energy of the Hamiltonian HA(t) projected to the subspace spanned by the subsets of A. For simplicity, we will refer to this subpace as the subspace defined by A. Definition 5.4. We say an anti-crossing is an (L, R)-anti-crossing at t∗if there exist bare energies EL 0 (t) and ER 0 (t) such that: 1. EL 0 (t) and ER 0 (t) approximate the unperturbed energy levels of the effective 2 × 2 Hamiltonian describing the anti-crossing for t ∈[t∗-δ, t∗+ δ] for some small δ > 0; and 2. EL 0 (t) and ER 0 (t) cross at t∗, i.e. EL 0 (t∗) = ER 0 (t∗). See Figure 5 for an illustration. In this sense, the anti-crossing is said to originate from the crossing of EL 0 (t) and ER 0 (t). The unperturbed levels define the formal structure, while the bare energies provide concrete analytic proxies that allow us to identify and study the anti-crossing without explicitly constructing the effective 2 × 2 Hamiltonian, which could be challenging. In the companion paper [8], an effective 2 × 2 Hamiltonian is explicitly constructed to derive a perturbative bound on the anti-crossing gap. 18 t* ER 0 (t) EL 0 (t) Figure 5: Schematic of an (L, R)-anti-crossing. Dashed lines: bare energies EL 0 (t) and ER 0 (t) crossing at t∗. Solid gray curves: the two lowest eigenvalues of the Hamiltonian, showing the avoided crossing that originates from this bare crossing. 5.1.2 Types of Anti-Crossing We classify anti-crossings according to whether the subspaces defined by L and R, when embedded in the full Hilbert space, overlap or are disjoint. Tunneling-Induced Anti-Crossing. If the embedded subspaces of L and R overlap, then HL and HR are submatrices of the same block in the Hilbert space decomposition. In this case, off-diagonal terms within the block induce tunneling between the two configurations. We refer to this as a tunneling-induced anti-crossing. Block-Level Anti-Crossing. If the embedded subspaces of L and R are disjoint, then HL and HR belong to two distinct blocks of the Hamiltonian. In this case, the ground states of the two blocks may become nearly degenerate, leading to a true crossing if the blocks are decoupled, or to an avoided crossing if there is weak but nonzero inter-block coupling. We refer to this as a block-level anti-crossing. Figure 5 provides a generic schematic of an (L, R)-anti-crossing, whether tunneling-induced or block-level. The interpretation depends on the relation between the subspaces defined by L and R. In the block-level case, for example, the two levels correspond to the ground states of two distinct blocks-the same-sign block HL (blue) and the opposite-sign block HR (red). A weak inter-block coupling lifts their degeneracy and produces an avoided crossing. When the resulting gap is small, the system evolves almost as if the blocks were decoupled, remaining adiabatically in the same-sign block and following the blue path. The original double anti-crossing (DAC) observed in [1] is, in fact, a double block-level anti-crossing. Thus, instead of a diabatic transition-from ground to excited to ground state-the evolution effectively remains confined to the same-sign block, without any true inter-block transition. In this sense, the block-level anti-crossing is dynamically bypassed. There is also a more subtle variant, in which one of the two competing levels is not an eigenstate of a single block, but a superposition of states from different blocks. In this case, the coupling is not merely perturbative; the anti-crossing reflects quantum interference between blocks. We refer to this as an interference-involved blocklevel anti-crossing, an example is shown in Figure 24. 19 5.2 The Basic Matrix B(w, x): Eigenvalues and Eigenstates We define the following effective two-level Hamiltonian, which will serve as a basic building block for our analysis throughout the paper: B(w, x) := -w -1 2x -1 2x 0 , (5.1) where w = w(t) and x = x(t) are real-valued parameters, typically derived from problem Hamiltonians and driver strengths. This is a special case of a spin-1 2 system, with analytic eigenstructure. The eigenvalues of B(w, x) are βk = -1 2 w + (-1)kp w2 + x2 , k = 0, 1, (5.2) with normalized eigenvectors |β0⟩= 1 √ 1+γ2 γ |0⟩+ |1⟩ , |β1⟩= 1 √ 1+γ2 |0⟩-γ |1⟩ , (5.3) where the mixing coefficient is γ = x w+ √ w2+x2 . Remark 5.5. For w > 0, we use the standard basis |0⟩= 1, 0 T , |1⟩= 0, 1 T . For w 0) and |1⟩(w 0). Figure 6 visualizes the eigenspectrum and ground state behavior under variation of x and w. Figure 6: Eigenvalues of the two-level Hamiltonian B(w, x), where βk = -1 2 w + (-1)k√ w2 + x2 , k = 0, 1. The ground state energy (β0) is shown in black, and the first excited state energy (β1) is shown in blue. The energy gap widens as x increases-there is no anti-crossing in this case. The ground state is |β0⟩= 1 √ 1+γ2 (γ |0⟩+ |1⟩), with γ = x w+ √ w2+x2 . (a) w = 1; (b) w = -1. For w > 0, we have |0⟩= |↑⟩and |1⟩= |↓⟩as in (a). Notice that the contents of |0⟩and |1⟩swap for w 0. Then the effective weight weff c (x) = wc -nc-1 4 jxx crosses zero at the critical point x0 = 4wc α(nc-1). We thus have weff c (x) > 0 for x x0, At this point, the sign of weff c changes, and the roles of |0⟩and |1⟩in the ground state eigenvector exchange. For x > x0, the ground state energy of the same-sign block approximates β0(x) ≈-1 2α x, so its slope with respect to x is approximately -1 2α, independent of nc. In contrast, when Jxx = 0 (i.e., α = 0), the same energy behaves as β0(x) ≈-1 2 wc + √ncx . As Jxx increases, the magnitude of this slope decreases-from approximately √nc down to 1 α-flattening the spectral curve; see Figure 8(c). Crossover point xc. The crossing point xc at which β0(xc) = θ(xc) satisfies xc = 4α 4-α2 . (6.8) To ensure that xc ≤Γ2, we must restrict α to satisfy α 1. Figure 9 illustrates the three eigenvalues of the same-sign block in the symmetric subspace Hsym Cbare, along with the AS0 energy level of HQbare, under different values of Jxx. The plots reveal a clear see-saw effect: as Jxx increases, the same-sign energy levels are lifted upward, while the AS0 energy level is lowered. Figure 9: Energy spectrum of a bare subsystem (dMIC) under different values of Jxx = αΓ2, with m = 2, nc = 9. We set Γ2 = m and Γ1 = 2Γ2. The coupling schedule is piecewise-defined: jxx = Jxx for x ≥Γ2, and jxx = αx for x θ(i). The relative ordering of β(i) 0 and θ(i) switches at the crossing point xc = 4α 4-α2 (see Eq. (6.8) in Section 6), as illustrated in Figure 10. (a) (b) Figure 10: Two-stage eigenvalue evolution for a single clique under different values of Jxx = αΓ2, with Γ2 = 1, Γ1 = 2, wc = 1, and nc = 9. The coupling schedule is piecewise-defined: jxx = Jxx for x ≥Γ2, and jxx = αx for x αmax(Γ2): the crossing occurs during Stage 1. Same-sign eigenvalues β0, β1 are shown as solid lines; the opposite-sign eigenvalue θ is dashed. The dashed green vertical line indicates the crossing point xc = 4α 4-α2 . Thus, to determine the overall block ordering, it suffices to compare the two extreme cases: the same-sign block HCbare and the AS0 opposite-sign block HQbare. In the case Jxx = 0, i.e., α = 0, we have β(i) 0 0, and in particular when α xc, β(i) 0 (x) θ(i). Thus, the energy ordering reverses after xc, and the AS0 opposite-sign block becomes the lowest in energy. This reversal is illustrated in Figure 11. 8 Analysis of Stage 0 Recall that the Hamiltonian for Stage 0 is H0(t) = x(t)HX + jxx(t)HXX + p(t)Hproblem, t ∈[0, 1], 34 Figure 11: Block energy ordering for m = 2, nc = 9, with Γ2 = m. Same-sign block energy in blue; intermediate opposite-sign block in brown; AS0 block in red dashed. (a) α = 1.0 αmax(Γ2): the crossing occurs during Stage 1, and the AS0 block becomes lowest earlier in the evolution. with x(t) = (1 -t)(Γ0 -Γ1) + Γ1, jxx(t) = tJxx, p(t) = t. During this phase, the problem parameters wi, Jclique zz , and Jzz are gradually ramped to their final values as p(t) increases from 0 to 1, while jxx(t) increases linearly to its target value Jxx. Definition 8.1. Given a Hamiltonian H acting on a Hilbert space V, and a subspace L ⊂V with orthogonal projector ΠL, we define the restricted Hamiltonian on L as H|L := ΠLHΠL. The main goal of this section is to derive the effective Hamiltonian at the end of Stage 0. The foundational idea follows [10], which presents a systematic approach to constrained Hamiltonians. The Projection Lemma in [10] shows that when a large penalty is assigned to a subspace, the lowest eigenvalues of the full Hamiltonian are close to those of the restricted Hamiltonian. To improve this approximation, [10] further derives an effective Hamiltonian by incorporating perturbative corrections from the high-energy subspace. We adopt a similar approach: by assigning a large penalty Jclique zz to states involving edges within a clique, we energetically suppress their contribution and restrict the evolution to a reduced subspace. We then apply perturbation theory to characterize the resulting effective Hamiltonian. In the following, we first introduce a decomposition of the Hilbert space based on energy penalties from the problem Hamiltonian. We then derive the effective Hamiltonian and argue that the spectral gap remains large throughout Stage 0, justifying the adiabatic elimination of high-energy states. 8.1 Decomposition of the Hilbert Space Recall the MIS-Ising problem Hamiltonian defined in Eq. 2.3: Hproblem = X i∈V(G) (-wi) ̃σz i + Jclique zz X (i,j)∈E(Gdriver) ̃σz i ̃σz j + Jzz X (i,j)∈E(G) (Gdriver) ̃σz i ̃σz j . The vertex set of the graph is V(G) = VL ∪R, where VL is the union of all cliques in L and R is the set of vertices outside these cliques. The corresponding Hilbert space factors as V = VL ⊗VR, where VL is the Hilbert space of all vertices in VL and VR is that of the vertices in R. 35 The parameter Jclique zz penalizes states that include occupied edges within a clique. By setting Jclique zz sufficiently large, the Hilbert space VL separates into a low-energy subspace (spanned by all independent-set states within L) and a high-energy subspace (spanned by all dependent-set states within L). We define L-:= (low-energy subspace of VL) ⊗VR, L+ := (high-energy subspace of VL) ⊗VR. Here L+ corresponds to high-energy states-each such state containing at least one intra-clique edge incurring the Jclique zz penalty. Let Π-and Π+ denote the orthogonal projectors onto L-and L+, respectively. With respect to this decomposition, the system Hamiltonian at the end of Stage 0 can be written in block form: H0(1) = Hlow V V† Hhigh , with Hlow = Π-H0(1)Π-, Hhigh = Π+H0(1)Π+. At the end of Stage 0, the full Hamiltonian takes the form H0(1) = Γ1HX + JxxHXX + Hlow prob + Hhigh prob, where Hproblem = Hlow prob + Hhigh prob, with Hlow prob = Π-HproblemΠ-, Hhigh prob = Π+HproblemΠ+. By construction, Hhigh prob annihilates the low-energy subspace: Hhigh probΠ-= Π-Hhigh prob = 0. Thus Hlow = Π-H0(1)Π-= Π-(M + Hhigh prob)Π-= Π-MΠ-, where M = Γ1HX + JxxHXX + Hlow prob. In the following, we will show that by the end of Stage 0, if Jclique zz is chosen sufficiently large, the Stage 1 evolution is governed exactly by the restricted Hamiltonian Heff 1 = Hlow = Π-MΠ-. 8.2 Effective Hamiltonian at the End of Stage 0 Definition 8.2. Let λcutoff ∈R be a cutoff energy, and let ε > 0. We say that Heff is an ε-close effective lowenergy Hamiltonian of H if for every eigenstate H |ψ⟩= λ |ψ⟩with λ λcutoff. Furthermore, to ensure the effective Hamiltonian approximation remains within O(ε), we impose the stronger condition: λhigh ≥1 ε∥M∥2. Then the projected Hamiltonian Heff := Π-MΠ-is an ε-close effective low-energy Hamiltonian of H, as defined in Definition 8.2. 36 Proof. Let H |ψ⟩= λ |ψ⟩be an eigenstate of H with λ -weff + mr(Jzz -w). This condition forms the basis for deriving the Steering lower bound J steer xx . 47 R-inner block decomposition: There are m + 1 = 3 outer-layer blocks, each a (mr + 1) × (mr + 1) matrix acting on the R subsystem. The full matrix representation is denoted HR-inner C = 6Jzz -3w -2weff -1 2 √ 3x 0 0 - √nc x √ 2 0 0 0 0 0 0 0 -1 2 √ 3x 4Jzz -2w -2weff -x 0 0 - √nc x √ 2 0 0 0 0 0 0 0 -x 2Jzz -w -2weff -1 2 √ 3x 0 0 - √nc x √ 2 0 0 0 0 0 0 0 -1 2 √ 3x -2weff 0 0 0 - √nc x √ 2 0 0 0 0 - √nc x √ 2 0 0 0 3Jzz -3w -weff -1 2 √ 3x 0 0 - √nc x √ 2 0 0 0 0 - √nc x √ 2 0 0 -1 2 √ 3x 2Jzz -2w -weff -x 0 0 - √nc x √ 2 0 0 0 0 - √nc x √ 2 0 0 -x Jzz -w -weff -1 2 √ 3x 0 0 - √nc x √ 2 0 0 0 0 - √nc x √ 2 0 0 -1 2 √ 3x -weff 0 0 0 - √nc x √ 2 0 0 0 0 - √nc x √ 2 0 0 0 -3w -1 2 √ 3x 0 0 0 0 0 0 0 - √nc x √ 2 0 0 -1 2 √ 3x -2w -x 0 0 0 0 0 0 0 - √nc x √ 2 0 0 -x -w -1 2 √ 3x 0 0 0 0 0 0 0 - √nc x √ 2 0 0 -1 2 √ 3x 0 Notice that each inner-block Hamiltonian is a linear shift of a base Hamiltonian: H(r) L = H(0) L + r · SL, H(l) R = H(0) R + l · SR, (9.4) for r ∈{0, 1, . . . , mr} and l ∈{0, 1, . . . , m}, where the shift matrices are given by: SL = mJzz -w · · · 0 0 ... ... ... ... 0 · · · Jzz -w 0 0 · · · 0 -w , SR = mrJzz -weff · · · 0 0 ... ... ... ... 0 · · · Jzz -weff 0 0 · · · 0 -weff . Remark 9.5. The zero-indexed blocks correspond directly to the bare subsystems: H(0) R = |0⟩L ⟨0| ⊗Hbare R , H(0) L = Hbare L ⊗|0⟩R ⟨0| . Thus there are two possible inner decompositions of HC: the L-inner blocks H(r) L indexed by r, and the R-inner blocks H(l) R indexed by l. For brevity, we will often refer to these simply as the L-blocks and R-blocks. 9.2.2 Definition of Structural Localization We define a quantitative notion of localization of the ground state of the same-sign block HC, based on the inner decompositions. We begin with the R-localization defined via the R-inner decomposition; the definition of L-localization is analogous. Recall from Eq. (9.4) that H(l) R = H(0) R + l · SR, with the block energies ordered in decreasing order when -weff = -w + nc-1 4 Jxx > 0 or Jxx is sufficiently large. Definition 9.6 (Structural Localization). Let {P (l) R } denote the projectors onto the H(l) R -inner block, for l = 0, . . . , m. For parameters k ≪m and tolerance ε > 0, we say the ground state |ψ(t)⟩of HC(t) is R-localized up to depth k if the cumulative overlap k-1 X l=0 ∥P (l) R |ψ(t)⟩∥2 ≥1 -ε. 48 In words, the ground state amplitude is almost entirely concentrated within the lowest k R-blocks. In Section 9.3.2, we will show that if Jxx satisfies the Steering lower bound (derived in Section 9.3.1), the ground state is smoothly steered into the R-localized region by the end of Stage 1. In contrast, for Jxx = 0, the ground state exhibits L-localization with dominant weight on H(0) L . This case is analyzed in detail in the companion paper [8]. 9.3 Two-Stage Evolution and Feasibility Bounds on Jxx Our goal is to show that, for an appropriate choice of Jxx and Jzz, the system evolves from the initial ground state of Heff 1 to the global minimum (GM) without encountering an anti-crossing. The correctness of the algorithm critically depends on four feasibility bounds on Jxx(Table 2), together with two upper bounds on Jzz: J inter zz and J steer zz . In Section 9.3.1, we derive these bounds analytically, and show that for Jzz ≤J steer zz there exists a nonempty feasible window for Jxx. To give intuition before the detailed derivations, we briefly summarize how each bound contributes to maintaining a smooth two-stage evolution. The Stage-Separation bound together with J inter zz ensure that Stage 1 is effectively confined to the same-sign block, allowing Stage 1 and Stage 2 to be analyzed separately; the Lifting bound ensures the original anti-crossing is removed; the Steering bound directs the system smoothly into the R-region (bypassing tunneling) during Stage 1, and Stage 2 is secured by the Sinking bound, which prevents the emergence of a new anti-crossing when the lowest opposite-sign block participates. The analysis, supported by numerical results, of Stage 1 and Stage 2 is presented in Section 9.3.2 and Section 9.3.3, respectively. 9.3.1 Analytical Bounds on Jxx and the Feasibility Window Recall that Jxx produces a see-saw effect: it raises the energy associated with LM in the same-sign block while simultaneously lowering the energy in the opposite-sign blocks. This effect introduces a fundamental trade-off: if Jxx is too small, the system fails to lift the local minima in the same-sign block high enough; if it is too large, it lowers the opposite-sign state too far, introducing new anti-crossings. As a result, the success of the algorithmic design depends critically on choosing an appropriate value of Jxx. These considerations motivate four analytical feasibility bounds on Jxx summarized in Table 2. Name Notation Purpose (i) Stage-Separation Upper Bound J sep xx Ensure the ground state of the same-sign block is the lowest during Stage 1 (ii) Lifting Lower Bound J lift xx Lift LM energy in the same-sign block high enough during Stage 2 (iii) Steering Lower Bound J steer xx Ensure smooth transition into the R-localized region during Stage 1 (iv) Sinking Upper Bound J sink xx Prevent the AS0 opposite-sign block from dropping too low in Stage 2 Table 2: Summary of the four feasibility bounds on Jxx. (Note: the Sinking Upper Bound is required only in the shared-structure case.) For simplicity, we assume uniform clique size, ni = nc for all i, and remark on the changes needed for each bound when clique sizes vary. We also set w = 1 in the following analysis. We derive the following explicit bounds (below) in terms of Γ2, Jzz, m, and the unknown quantities mr and 49 mg: J lift xx = 2 m mg Γ2, J steer xx = 4 nc-1[mr(Jzz -1) + 1], J sep xx = 2(Γ2 -1), J sink xx = 2(Γ2 -1) + 2mr nc Jzz. (Jxx-Bounds) These analytical bounds are approximate but conservative, derived from the bare (decoupled) block energies and perturbative shifts. They suffice to demonstrate a nonempty feasible window for Jxx. In practice, additional values may succeed beyond the bounds and can be identified through numerical exploration or adaptive tuning. Feasible Jxx Window Notice that the lower bound J steer xx depends on Jzz. If Jzz is too large, the required Jxx may exceed the upper bounds, resulting in an empty feasible window. To avoid this, we now show that the four bounds are compatible under a reasonable condition on Jzz, thereby ensuring a nonempty feasible window. Theorem 9.7. Assume uniform clique size ni = nc for all i, and suppose m ≥3 and mr ≥2. Use Γ2 = m. If Jzz ≤J steer zz := 1 + (nc-1)(m-1)-2 2mr , then setting Jxx = 2(m -1), yields max{J lift xx, J steer xx } ≤Jxx ≤ min{J sep xx , J sink xx }, where explicit formulas for all bounds are derived in equations (Jxx-Bounds). The choice Jxx = 2(m -1) is not unique; it simply serves as a constructive witness that the feasible window is nonempty whenever the condition on Jzz holds. Feasibility Condition on Jzz. The upper bound J steer zz in the theorem depends on the unknown parameter mr. To ensure feasibility without knowing mr precisely, we apply a conservative upper bound: mr EGM 0 (x) throughout Stage 2, it suffices to require m α 2 m mg . Thus, the bound becomes: J lift xx = 2 · m mg · Γ2. 51 (iii) Steering Lower Bound J steer xx Idea: The lower bound J steer xx is defined as the minimal value of Jxx such that the diagonal entries of the R-inner block decomposition Hamiltonian are strictly decreasing from top to bottom. This condition ensures that the system can smoothly localize into the lowest R-blocks. The key idea is that the coupling strength Jxx must be large enough so that any configuration involving vertices from L incurs a high energy penalty, thereby suppressing support on the L-blocks throughout Stage 1. See Figure 15 for an illustration. Derivation: Specifically, we require the inequality -2 weff > -(weff + mr(1 -Jzz)), where weff = 1 -nc-1 4 Jxx. is the effective vertex weight. Solving the inequality yields the lower bound: J steer xx = 4 nc-1 [mr(Jzz -1) + 1] . Remark 9.8. Since J steer xx depends on nc, a conservative generalization for non-uniform clique sizes is to replace nc with mini ni. (iv) Sinking Upper Bound J sink xx Idea: This is analogous to the Lifting Lower Bound J lift xx. We want to make sure that the AS0 energy level is not dropped too low to form a new anti-crossing. We use the bare (decoupled) energy levels EGM 0 (the ground state energy of Hbare GM ) and EAS0 0 (the ground state energy of HQ) to approximate the true energy levels. We then impose an upper bound on Jxx to ensure that EAS0 0 > EGM 0 throughout Stage 2. By the same perturbative argument applied when the blocks are coupled, while using the decoupled bare levels as reference points, the true energy level corresponding to EAS0 0 is shifted upward, while the true energy level corresponding to EGM 0 is shifted downward. Therefore, as long as EAS0 0 > EGM 0 , no anti-crossing will be formed. This approximation is further supported numerically. Derivation: We require that EAS0 0 (x) > EGM 0 (x), during Stage 2, and in particular, we enforce this at the beginning of Stage 2, with x = Γ2: EAS0 0 (Γ2) ≈-m 1 + Jxx 4 -mr 2 1 -m nc Jzz + Γ2 , EGM 0 (Γ2) ≈-m+mr 2 (1 + Γ2). Solving this inequality yields the upper bound: J sink xx = 2(Γ2 -1) + 2mr nc Jzz. Remark 9.9. Since J sink xx depends on nc, a conservative generalization for non-uniform clique sizes is to replace nc with maxi ni. Having established the four feasibility bounds on Jxx and the condition for a nonempty window, we now turn to the detailed analysis of Stage 1 and Stage 2. 9.3.2 Stage 1: Structural Steering The purpose of Stage 1 is twofold: (1) to ensure that the evolution is confined to the same-sign block, and (2) to demonstrate successful structural steering within this block. Confinement is guaranteed by the Stage-Separation bound J sep xx together with the requirement Jzz ≤J inter zz (in the shared case). 52 We define structural steering as the mechanism by which the evolving ground state is directed smoothly into the R-localized region (the global-minimum-supporting region). In our design, steering is achieved by ordering the energies of the R-inner blocks of the same-sign Hamiltonian so that the ground state is guided directly into the R-localized region, rather than first becoming trapped in the L-region and only later tunneling into the Rregion, as happens in the stoquastic annealing case. Formally, this mechanism is guaranteed by two feasibility bounds: the Lifting bound J lift xx, which forces the ground state toward the lowest R-blocks; and the Steering bound J steer xx , which ensures that this localization proceeds smoothly. This analytically established behavior is further confirmed by numerical evidence. Numerical Confirmation of Structural Steering We confirm structural steering numerically by tracking structural localization (as defined in Section 9.2.2), namely the cumulative projection weight of the ground state onto the lowest R-blocks. Since this analysis involves only the same-sign block, and assuming uniform clique size, we may employ the symmetric-subspace reduction, which enables large-scale calculations. These calculations determine the appropriate value of k, corresponding to the minimum number of R-blocks that capture nearly all of the groundstate weight. In practice, when mr > m, we typically find k ≈2, whereas when 2 ≤mr ≪m (possible in the shared-structure case) 7 a slightly larger k (upto 0.2m) may be required. For the disjoint case, Figure 16(a) shows smooth steering into the lowest k = 2 blocks (with mr = mg > m). For contrast, Figure 16(b) shows the evolution when the steering condition is violated (with Jzz = 1000 ≫J steer zz , resulting in Jxx 0). Within this framework, we emphasize the distinctive feature of sign-generating interference: it produces negative amplitudes in the computational basis. This section is organized as follows: • Model Description: Formulation of the full Hamiltonian in both the computational and angular-momentum bases (Section 9.4.1). • Quantum Interference via Basis Rotation: How interference emerges from a basis change (Section 9.4.2). • Numerical Illustration of Sign-Generating Interference: Emergence of negative amplitudes in the nonstoquastic case (Section 9.4.3). 9.4.1 Model Description The model consists of three vertices V = {a, b, r}, with associated weights wa, wb, and wr, respectively. The graph includes two edges, {ab, ar}, each associated with a ZZ-coupling, denoted Jab zz and Jar zz . Additionally, there is an XX-coupling on the edge ab, denoted Jxx. We bipartition V into L = {a, b} and R = {r}. The set L corresponds to the local minima (LM), while GM = {b, r}-the maximum independent set-corresponds to the global minimum. See Figure 25 for a visualization of this setup. 59 (a) (b) (c) (d) Figure 22: Energy spectra for the shared-structure graph Gshare, with m = ml = 3, mr = 2, mg = 5, and clique size nc = 9. (a) TFQA spectrum showing a small-gap anti-crossing; the corresponding gap profile is shown in (c). (b) DIC-DAC-DOA spectrum with XX-driver at Jxx = 2(m-1); the corresponding gap profile is shown in (d). 60 (a) (b) Figure 23: Spectral decomposition for the shared-structure graph Gshare, with m = ml = 3, mr = 2, mg = 5, and clique size nc = 9. In TFQA (a), the ground state resides entirely in the same-sign block (green dashed), producing a tunneling-induced anti-crossing. In DIC-DAC-DOA (b), this anti-crossing is dissolved through a seesaw effect: the same-sign block (green dashed) is lifted while the opposite-sign block (red dashed) is lowered. 61 (a) (b) (c) (d) Figure 24: Spectral decomposition for the shared-structure graph Gshare, with m = ml = 5, mg = 7, and clique size nc = 9. (a) Energy spectrum of the same-sign block under TFQA (Jxx = 0), showing an anti-crossing near t ≈0.9. (b) Full spectrum under DIC-DAC-DOA with Jxx = 8: the opposite-sign energies (dashed red) remain above the same-sign ground state (green), confirming successful evolution with no anti-crossing. (c) Legend identifying curve types across panels. (d) Large-Jxx case (Jxx = 12): the system exhibits two anti-crossings. A block-level anti-crossing appears near t ≈0.3 (Stage 1), followed by an interference-involved block-level anti-crossing near t ≈0.8 (Stage 2). Both gaps may not be small enough to allow the evolution to bypass them. While the first gap can be small, the evolution remains in the same-sign block and follows the first excited state instead of the true ground state, which is overtaken by the opposite-sign ground state. The second gap, however, can be large, preventing a transition back to the true ground state. As a result, the evolution may terminate in the first excited state. 62 a b r c r q Figure 25: (Left) A weighted graph with three vertices {a, b, r}. The graph has two ZZ-couplings (black edges) on ab and ar, and one XX-coupling (red edge) on ab. The vertex set is bipartitioned as L = {a, b} and R = {r}. The set L (pink dashed oval) corresponds to the local minima (LM), while GM = {b, r} (blue dashed oval)-the maximum independent set-represents the global minimum. A local basis transformation is applied to the L-subsystem. (Right) The pair {a, b} is transformed into {c, q}, where c is a spin-1 2 subsystem, and q is a spin-0 subsystem corresponding to an opposite-sign state |⊙q⟩. Remark 9.10. The V3 model is not meant to demonstrate structural steering in Stage 1. Since the right component contains only a single vertex (mr = 1), structrual steering fails, as it requires mr ≥2. The system therefore undergoes a tunneling-induced anti-crossing in Stage 1. At the end of Stage 1, we assume that the ground state has successfully tunneled and is now localized on the state {r}, which is a subset of the global minimum configuration GM = {b, r}. Hamiltonian and Basis Construction Each vertex k ∈{a, b, r} is modeled as a spin-1 2 system with transverse field strength proportional to √nk. The local Hamiltonian at vertex k, in the computational basis {|1k⟩, |0k⟩}, is given by Bk = -wk - √nk 2 x - √nk 2 x 0 , wk = w -nk-1 4 jxx. The L-subsystem consists of vertices a and b, coupled via both ZZ- and XX-terms: HL = Ba ⊗I2 + I2 ⊗Bb + Jab zz ̃σz a ̃σz b + √nanb 4 jxxσx aσx b . In the computational basis {|1a1b⟩, |1a0b⟩, |0a1b⟩, |0a0b⟩}, this becomes: HL = |1a1b⟩ |1a0b⟩ |0a1b⟩ |0a0b⟩ |1a1b⟩ Jab zz -wa -wb - √nb 2 x - √na 2 x √nanb 4 jxx |1a0b⟩ - √nb 2 x -wa √nanb 4 jxx - √na 2 x |0a1b⟩ - √na 2 x √nanb 4 jxx -wb - √nb 2 x |0a0b⟩ √nanb 4 jxx - √na 2 x - √nb 2 x 0 63 Assuming the coupling Jab zz is large, we restrict to the low-energy subspace spanned by |1a0b⟩, |0a1b⟩, |0a0b⟩, yielding: HL = ΠindHLΠind = |1a0b⟩ |0a1b⟩ |0a0b⟩ |1a0b⟩ -wa √nanb 4 jxx - √na 2 x |0a1b⟩ √nanb 4 jxx -wb - √nb 2 x |0a0b⟩ - √na 2 x - √nb 2 x 0 This restriction removes the high-energy state |1a1b⟩, which is energetically penalized by large Jab zz . To uncover the effective angular momentum structure, we apply the basis transformation: Umerge = |1c⟩ |0c⟩ |⊙q⟩ |1a0b⟩ q na nc 0 - q nb nc |0a1b⟩ q nb nc 0 q na nc |0a0b⟩ 0 1 0 where nc = na + nb. The corresponding transformed basis states are: |0c⟩ = |0a0b⟩, |1c⟩ = q na nc |1a0b⟩+ q nb nc |0a1b⟩, |⊙q⟩ = - q nb nc |1a0b⟩+ q na nc |0a1b⟩. In this basis, the restricted Hamiltonian becomes: HL = |1c⟩ |0c⟩ |⊙q⟩ |1c⟩ - w -nc-1 4 jxx - √nc 2 x 0 |0c⟩ - √nc 2 x 0 0 |⊙q⟩ 0 0 - w + 1 4jxx We specialize to the case nb = 1, na = nc -1, we have wb = wr = w, while wa = w -nc-2 4 jxx. The corresponding full transformed basis states are: |0c0r⟩ = |0a0b0r⟩, |0c1r⟩ = |0a0b1r⟩, |1c0r⟩ = q nc-1 nc |1a0b0r⟩+ q 1 nc |0a1b0r⟩, |1c1r⟩ = q nc-1 nc |1a0b1r⟩+ q 1 nc |0a1b1r⟩, |⊙q0r⟩ = - q 1 nc |1a0b0r⟩+ q nc-1 nc |0a1b0r⟩, |⊙q1r⟩ = - q 1 nc |1a0b1r⟩+ q nc-1 nc |0a1b1r⟩. We now explicitly write down the full Hamiltonian in the two bases: Full Hamiltonian in the Computational Basis. 64 Hcomp full = |1a0b1r⟩ |1a0b0r⟩ |0a1b1r⟩ |0a1b0r⟩ |0a0b1r⟩ |0a0b0r⟩ |1a0b1r⟩ -2w + nc-2 4 jxx + Jzz -1 2x √nc-1 4 jxx 0 - √nc-1 2 x 0 |1a0b0r⟩ -1 2x -w + nc-2 4 jxx 0 √nc-1 4 jxx 0 - √nc-1 2 x |0a1b1r⟩ √nc-1 4 jxx 0 -2w -1 2x -1 2x 0 |0a1b0r⟩ 0 √nc-1 4 jxx -1 2x -w 0 -1 2x |0a0b1r⟩ - √nc-1 2 x 0 -1 2x 0 -w -1 2x |0a0b0r⟩ 0 - √nc-1 2 x 0 -1 2x -1 2x 0 Full Hamiltonian in the Angular Momentum Basis. Hang full = |1c1r⟩ |1c0r⟩ |0c1r⟩ |0c0r⟩ |⊙q1r⟩ |⊙q0r⟩ |1c1r⟩ -2w + nc-1 4 jxx + nc-1 nc Jzz -1 2x - √nc 2 x 0 - q nc-1 n2c Jzz 0 |1c0r⟩ -1 2x -w + nc-1 4 jxx 0 - √nc 2 x 0 0 |0c1r⟩ - √nc 2 x 0 -w -1 2x 0 0 |0c0r⟩ 0 - √nc 2 x -1 2x 0 0 0 |⊙q1r⟩ - q nc-1 n2c Jzz 0 0 0 -2w -1 4jxx + 1 nc Jzz -1 2x |⊙q0r⟩ 0 0 0 0 -1 2x -w -1 4jxx The first four basis states span the same-sign block; the last two belong to the opposite-sign block. Coupling between blocks arises via the off-diagonal term proportional to Jzz = Jar zz. 9.4.2 Explaining Quantum Interference via Basis Rotation At the start of Stage 2, the evolving ground state has been steered into the R-localized region (i.e., |0c1r⟩). As the evolution progresses, the relevant subspace is projected to I -|0r⟩⟨0r| = |1r⟩⟨1r|, which is spanned by |1c1r⟩, |0c1r⟩, |⊙q1r⟩ , with effective Hamiltonian Hang eff = |1c1r⟩ |0c1r⟩ |⊙q1r⟩ |1c1r⟩ -2w + nc-1 4 jxx + nc-1 nc Jzz - √nc 2 x - q nc-1 n2c Jzz |0c1r⟩ - √nc 2 x -w 0 |⊙q1r⟩ - q nc-1 n2c Jzz 0 -2w -1 4jxx + 1 nc Jzz (9.5) The ground state evolves into a superposition of the form |ψ(t)⟩= ψcr(t) |1c1r⟩+ ψqr(t) |⊙q1r⟩+ ψcr(t) |0c1r⟩, where all coefficients are nonnegative throughout the evolution as the Hamiltonian is stoquastic (in the angularmomentum basis). The interference mechanism arises from the angular-momentum basis rotation: |1c1r⟩= q nc-1 nc |1a0b1r⟩+ q 1 nc |0a1b1r⟩, |⊙q1r⟩= - q 1 nc |1a0b1r⟩+ q nc-1 nc |0a1b1r⟩. 65 Restricting to the subspace spanned by these orthonormal components, we write |ψ(t)⟩= ψcr(t) |1c1r⟩+ ψqr(t) |⊙q1r⟩. Expressed in the computational basis, this becomes |ψ(t)⟩= α(t) |0a1b1r⟩+ β(t) |1a0b1r⟩, (9.6) where α(t) = ψcr(t) q 1 nc + ψqr(t) q nc-1 nc , β(t) = ψcr(t) q nc-1 nc -ψqr(t) q 1 nc . (9.7) Here ψcr(t) and ψqr(t) are the amplitudes on the same-sign basis state |1c1r⟩and the opposite-sign basis state |⊙q1r⟩, respectively. Interference picture. The orthogonal components |1c1r⟩and |⊙q1r⟩interfere constructively on |0a1b1r⟩, increasing α(t) (supporting the global minimum), and destructively on |1a0b1r⟩, decreasing β(t) (dependent-set cancellation). In the stoquastic case (Jxx = 0), the interference produces only nonnegative amplitudes, and both α(t) and β(t) remain nonnegative. In the non-stoquastic case (Jxx > 0), however, destructive interference can drive β(t) negative, providing evidence of sign-generating quantum interference. To show that β(t) 0, and negative amplitude on the dependent-set state |1a0b1r⟩, namely β(t) < 0, throughout Stage 2 (when the effective Hamiltonian is valid). 9.4.3 Numerical Illustration of Sign-Generating Interference To visualize the mechanism described above, we numerically compute the instantaneous ground state of the V3 Hamiltonian over the course of the annealing evolution during Stage 2. We plot its amplitude components in both the angular momentum basis and the computational basis, showing how quantum interference arises in both the stoquastic case (Jxx = 0) and the non-stoquastic case (Jxx = 0.6), and how sign generation emerges only in the latter. The comparison highlights the essential distinction between tunneling-based and interference-based evolution. In the stoquastic case, the system undergoes a tunneling-induced anti-crossing as it transitions from support on local minima to support on the global minimum. Although quantum interference is present, it remains signpreserving, and the system evolves entirely within the non-negative cone. In contrast, when Jxx = 0.6, negative amplitudes appear-evidence of sign-generating quantum interference. Figures 26 and 27 show the corresponding energy spectra and signed probability evolutions. 66 (a) Jxx = 0 (stoquastic case) (b) Jxx = 0.6 (non-stoquastic case) Figure 26: Energy spectrum during annealing for the V3 model at two values of Jxx. (a) For Jxx = 0, the spectrum exhibits a tunneling-induced anti-crossing near t ≈0.75, where the ground state transitions from support on local minima to support on the global minimum. (b) For Jxx = 0.6, no anti-crossing arises in Stage 2. The ground state energy of the opposite-sign block (EAS0 0 , red dotdashed) remains above the true ground state Etrue 0 . 10 Full-System Extension via Iterative Application In this section, we extend the analysis to the full system by iteratively applying the two-phase procedure to successive bipartite substructures. In each iteration, a driver subgraph G(k) driver-and thus the parameters mk and n(k) c -is identified. The correctness of applying the algorithm iteratively-by incrementally building up the driver graph, i.e., ∪it k=1G(k) driver-follows from the following three facts: 1. The transverse field is global: x = (1 -t)Γ1. 2. Each iteration has its own stage-separation parameter Γ(k) 2 . 3. We use the latest J(k) zz . Remark 10.1. The upper bound on Jzz, J steer zz , which ensures that the ground state steers into the GM-supporting region during Stage 1, needs to be enforced only in the last iteration. In earlier iterations, the algorithm localizes into another LM-supporting region (associated with the current critical local minima), so the J steer zz (and thus J steer xx ) bound may be violated without affecting the algorithm's performance. Each iteration applies its own XX-driver and chooses a Jzz value appropriate to that iteration's structure. It is only in the final iteration-when the goal is to steer into the GM-supporting region-that the upper bound on J steer zz must be respected. This allows for adaptive, iteration-dependent tuning of Jzz, even when the clique sizes vary or structural conditions differ across iterations. We set Γ(k) 2 = mk, and define the corresponding transition point tk := 1 -Γ(k) 2 Γ1 ≤1 2. For each iteration k, we define αk := 2 · Γ(k) 2 -1 Γ(k) 2 , and J(k) xx := αk Γ(k) 2 . To simplify implementation across multiple iterations, we fix Γ1 := K · maxk Γ(k) 2 , for some constant K. This choice guarantees that the transverse field schedule for the 67 (a) Jxx = 0 (b) Jxx = 0 (c) Jxx = 0.6 (d) Jxx = 0.6 Figure 27: Signed probability evolution of the instantaneous ground state in the V3 model, shown in two bases (computational and angular-momentum). Each entry shows sign(ψi) · |ψi|2, where ψi is the amplitude on basis state i. Color shading encodes signed magnitude: positive (orange), negative (blue), and near-zero (white). (a,b) Stoquastic case (Jxx = 0). In (a), all amplitudes remain non-negative, with no blue region. Around t ≈0.75, a sharp transition occurs, indicative of a tunneling-induced anti-crossing. In (b), interference between |1c1r⟩and the opposite-sign component |⊙q1r⟩is present but remains sign-preserving. This shows that quantum interference is present even in the stoquastic case, but it remains sign-preserving. (c,d) Non-stoquastic case (Jxx = 0.6). In (c), the transition from orange to blue on the dependent-set state |1a0b1r⟩reflects destructive interference and the onset of a negative amplitude, while constructive interference amplifies the global-minimum state |0a1b1r⟩. In (d), the same evolution is shown in the angular-momentum basis: the first four states belong to the same-sign block, and the last two to the opposite-sign block. 68 two-stage evolution, x(t) = (1 -t)Γ1, can be reused identically across all iterations. While each iteration may use a different structure-dependent value Γ(k) 2 , they all share the same global annealing profile governed by Γ1. In particular, the system Hamiltonian builds up incrementally. The Hamiltonian at iteration it for Stage 0 is given by H0(t) = x(t)HX + it X k=1 jxx(t)(k) X (i,j)∈E(G(k) driver) σx i σx j + p(t)H(k) problem, t ∈[0, 1], with x(t) = (1 -t)(Γ0 -Γ1) + Γ1, jxx(t)(k) = tJ(k) xx , p(t) = t. The transverse-field term is HX = -P i σx i , and the problem Hamiltonian is H(k) problem = X i∈V(G) (-wi) ̃σz i + p X k=1 Jclique zz X (i,j)∈E(∪it k=1G(k) driver) ̃σz i ̃σz j + J(k) zz X (i,j)∈E(G) (∪it k=1G(k) driver) ̃σz i ̃σz j . Here, we take J(k) zz := 1 + √ n(k) c +1 2 . The system Hamiltonian during the main two-stage evolution is H1(t) = x(t)HX + it X k=1 jxx(t)(k) X (i,j)∈E(G(k) driver) σx i σx j + H(k) problem, t ∈[0, 1]. The transverse field is given by x(t) = (1 -t)Γ1. The time-dependent coupling jxx(t)(k) for iteration k is given by jxx(t)(k) = ( Jxx(k), for t ∈[0, tk] ⇔ x(t) ∈[Γ(k) 2 , Γ1], αk x(t), for t ∈[tk, 1] ⇔ x(t) ∈[0, Γ(k) 2 ]. Figure 28 illustrates how the algorithm progressively removes small-gap obstructions by lifting one critical local minimum per iteration. 11 Conclusion and Outlook We have developed a systematic framework for analyzing a non-stoquastic quantum annealing algorithm, DICDAC-DOA, which achieves exponential speedup on a structured family of Maximum Independent Set (MIS) instances. Our approach departs from the traditional spectral-gap perspective and instead measures the efficiency of the algorithm by the presence or absence of an anti-crossing. The key idea is to infer it directly from the crossing behavior of bare energy levels of relavent subsystems, without explicitly constructing the effective twolevel Hamiltonian. The analytical foundation of our approach rests on three structural components, each supported by analytical derivations and numerical evidence. First, a block decomposition of the Hamiltonian into same-sign and oppositesign components, derived from the angular momentum structure of the dMIC underlying the XX-driver graph. Second, the identification of a see-saw energy effect induced by the non-stoquastic parameter Jxx, which raises 69 Figure 28: Lifting Anti-Crossings in Two Iterations. Top: Jxx = 0. The energy level associated with the global minimum (GM) exhibits two anti-crossings with those of two local minima, highlighted by the red dotted oval. The first LM has ml = 2, nc = 30, and shares both vertices with GM; the second has m1 = 3, nc = 10, and shares one vertex with GM. The global minimum has mg = 5. Middle: Jxx(1) = 2; the first anti-crossing is lifted, but the second remains (dotted oval). Bottom: Jxx(2) = 4; both anti-crossings are lifted, and the system evolves smoothly to the global minimum. 70 the energy associated with local minima in the same-sign block while lowering that of opposite-sign blocks. Third, the derivation of analytical bounds on Jxx (together with upper bounds on Jzz) that support a two-stage annealing schedule ensuring smooth ground state evolution without anti-crossings. Together, these structural components reveal two key mechanisms responsible for the speedup through a smooth evolution path: • Structural steering: energy-guided localization within the same-sign block that steers the ground state smoothly into the GM-supporting region, bypassing tunneling-induced anti-crossings. • Sign-generating quantum interference: production of negative amplitudes that enables an opposite-sign path through destructive interference in the computational basis. The analysis of these mechanisms is supported by both analytical derivations and numerical validation, and can, in principle, be made rigorous with further work. We now summarize the key structural assumptions and aspects of the analysis that may be further formalized. Foundational Assumptions and Future Refinement • Iterative correctness. We justify that the full-system evolution can be accurately approximated by analyzing each bipartite substructure in isolation. • Worst-case structure. We argue that the shared-structure graph represents the worst-case configuration within each bipartite subproblem. This allows us to conservatively bound the relevant parameters. • Bare energy approximation. Our key idea is to infer the presence or absence of anti-crossings directly from the crossing behavior of bare energy levels of relevant subsystems (LM and GM), without explicitly constructing the effective two-level Hamiltonian. This reduction makes the problem tractable and captures the essential structure. It can be partially justified by effective Hamiltonian arguments, but a fully rigorous treatment would require bounding the error of this approximation. • Inter-block coupling. We argue that Stage 1 is confined to the same-sign block by assuming the interblock coupling is weak. Our numerical results confirm that taking Jzz = J steer zz suffices in practice, but deriving a sharp analytical bound on J inter zz remains an open direction for future refinement. • Structural steering. We argue that Stage 1 achieves structural steering by ordering the energies of the Rinner blocks so that the ground state is guided into the R-localized region. This is supported by large-scale numerical evidence, while a fully rigorous justification remains future work. • Emergence of negative amplitudes. We demonstrate the emergence of negative amplitudes using the effective Hamiltonian H(MD) eff , which is sufficient for our analysis. A fully rigorous treatment would require mathematically justifying this construction as the correct effective reduction of the full system. Quantumness and Absence of Classical Analogues The emergence of negative amplitudes-produced by sign-generating interference due to the Proper Non-Stoquastic Hamiltonian-serves as a witness to the quantum nature of the speedup. Classical simulation algorithms, such as quantum Monte Carlo methods (see [25] and references therein), rely on the non-negativity of wavefunctions in the computational basis and break down in regimes where interference induces sign structure. This places the algorithm beyond the reach of eventually stoquastic annealing and efficient classical simulation. 71 From the perspective of classical solvers, the effect of sign-generating interference may be hard to reproduce classically. Algorithmically speaking, the XX-driver with appropriately tuned coupling strength Jxx enables a form of collective suppression of local minima, induced by edge couplings that effectively reduce vertex weights-while selectively preserving the global minimum, even in the presence of partial overlap. To the best of our understanding, no classical algorithm is able to replicate this effect, which appears to be a genuinely quantum capability. Together, these points position DIC-DAC-DOA as a candidate for demonstrating quantum advantage. Experimental Validation of the Quantum Advantage Mechanism A by-product of our analysis is that a bipartite substructure graph Gshare with mlnc + mr vertices can be effectively reduced to 2ml + mr vertices, with nc appearing only as a parameter, see Figure 29. This reduction enables the construction of small-scale models amenable to experimental verification of quantum advantage. In particular, a minimal 3-vertex model (with ml = mr = 1) demonstrates the emergence of negative amplitudes through sign-generating interference, thereby providing a direct coherence test for the negative-amplitude phase structure in the computational basis, while an 8-vertex model (with ml = 3, mr = 2, mg = ml + mr = 5) as shown in Figure 29(b), not only exhibits negative amplitudes (see Figures 21) but also shows a measurable speedup compared to the stoquastic case with Jxx = 0 (see Figures 22 and 23). Both models are within reach of current gate-model quantum computers via simulation [26, 27]. Contract (nc-1) (a) (b) Figure 29: (a) Schematic contraction. A clique consisting of nc -1 pink vertices and one purple vertex. The subclique of nc -1 pink vertices is contracted into a single pink vertex. Each pink vertex is ZZ-coupled to both the purple and blue vertices, while the purple and blue vertices have no direct edge. XX-couplings occur between pink-pink and pink-purple pairs. (b) Example of a bipartite substructure graph Gshare with mlnc + mr vertices reduced to 2ml + mr vertices. Shown is the case ml = 3, mr = 2, yielding an 8-vertex effective graph where each blue vertex is adjacent to each pink vertex. The (global) maximum independent set has size mg = ml +mr, consisting of all purple and blue vertices. 72 On the Relaxation of the Structured Input Assumption Our analysis is developed under a structured assumption on the problem graph of MIS, which we refer to as the GIC assumption: each critical degenerate local minima-corresponding to a set of maximal independent sets of fixed size-is formed by a set of independent cliques (dMIC). This assumption underlies both the design of the XX-driver and the block decomposition of the Hamiltonian. The independence of cliques is assumed primarily to allow efficient identification during Phase I. In principle, one can allow the cliques to be dependent (dMDC), meaning that some edges are permitted between cliques. In this case, the cliques may need to be identified heuristically rather than exactly. Under suitable conditions, the algorithm may remain robust even when applied to dMDC structures. More generally, each critical degenerate local minima might consist of a set of disjoint dMDCs. A clique-based driver graph may still be constructed in such cases, but the generalization becomes more intricate and may require further structural insights. Other future directions include the study of weighted MIS instances and adaptive strategies for selecting or optimizing Jxx during the anneal. In conclusion, the results presented here establish both a theoretical and numerical foundation for exponential quantum advantage via the DIC-DAC-DOA algorithm. Beyond the algorithm itself, a key by-product of our analysis is the identification of effective small-scale models that distill the essential dynamics into experimentally accessible settings. These models provide a concrete opportunity to verify the quantum advantage mechanism on currently available gate-model quantum devices through simulation. We hope this work not only offers a foundational framework for quantum optimization algorithm design and analysis, but also inspires the development of experimental devices equipped with the required XX non-stoquastic couplers, capable of directly implementing the presented algorithm. Acknowledgment This work was written by the author with the help of ChatGPT (OpenAI), which assisted in refining the presentation and in expressing the intended ideas with greater clarity and precision. The author thanks Jamie Kerman for introducing her to the angular-momentum basis and for early discussions that contributed to the companion work [8]. We also thank Siyuan Han for helpful comments. We recognize that this manuscript may not cite all relevant literature. If you believe your work should be included in a future version, please contact the author with the appropriate references. References [1] V. Choi. Essentiality of the Non-stoquastic Hamiltonians and Driver Graph Design in Quantum Optimization Annealing. , 2021. [2] V. Choi. Constructing and Programming Driver Graphs in Quantum Hardware for Non-Stoquastic Quantum Optimization Annealing Processes. U.S. Patent No. 12,001,924 B2, issued June 4, 2024. [3] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106, 2000. [4] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472-475, 2001. 73 [5] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM Journal on Computing, 37(1):166-194, 2007. Conference version in Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 42-51, 2004. [6] T. Albash and D. A. Lidar. Adiabatic quantum computation. Rev. Mod. Phys., 90:015002, 2018. [7] A. Elhashash and D. B. Szyld. On general matrices having the Perron-Frobenius property. Electronic Journal of Linear Algebra, 17:389-402, 2008. [8] V. Choi. Limitation of Stoquastic Quantum Annealing: A Structural Perspective, 2025. [9] E. Crosson, T. Albash, I. Hen, and A. P. Young. De-signing Hamiltonians for quantum adiabatic optimization. Quantum, 4:334, 2020. [10] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM Journal on Computing, 35(5):1070-1097, 2006. [11] S. Bravyi, D. DiVincenzo, and D. Loss. Schrieffer-Wolff transformation for quantum many-body systems. Annals of Physics, 326(10):2793-2826, 2011. [12] M. H. S. Amin and V. Choi. First-order phase transition in adiabatic quantum computation. Phys. Rev. A, 80(6):062326, 2009. . [13] B. Altshuler, H. Krovi, and J. Roland. Anderson localization makes adiabatic quantum optimization fail. Proceedings of the National Academy of Sciences, 107:12446-12450, 2010. [14] V. Choi. Minor-embedding in adiabatic quantum computation: I. The parameter setting problem. Quantum Information Processing, 7:193-209, 2008. [15] V. Choi. The Effects of the Problem Hamiltonian Parameters on the Minimum Spectral Gap in Adiabatic Quantum Optimization. Quantum Inf. Processing., 19:90, 2020. arXiv:quant-ph/1910.02985. [16] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, 1979. [17] S. Tsukiyama, M. Ide, H. Ariyoshi, and I. Shirakawa. A new algorithm for generating all maximal independent sets. SIAM J. Comput., 6 (1977), pp. 505-517. [18] D. S. Johnson, C. H. Papadimitriou, and M. Yannakakis. On generating all maximal independent sets. Information Processing Letters, 27:119-123, 1988. [19] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC), pages 212-219, 1996. [20] J. Roland and N. J. Cerf. Quantum search by local adiabatic evolution. Phys. Rev. A, 65:042308, 2002. [21] R. E. Tarjan and A. E. Trojanowski. Finding a maximum independent set. SIAM Journal on Computing, 6(3):537-546, 1977. 74 [22] S. Lamm, C. Schulz, D. Strash, R. Williger, and H. Zhang. Exactly solving the maximum weight independent set problem on large real-world graphs. In Proceedings of the 21st Workshop on Algorithm Engineering and Experiments (ALENEX), pages 71-85, 2019. [23] A. Braida, S. Chakraborty, A. Chaudhuri, J. Cunningham, R. Menavlikar, L. Novo, and J. Roland. Unstructured Adiabatic Quantum Optimization: Optimality with Limitations. Quantum, 9:1790, 2025. . [24] V. Choi. Different adiabatic quantum optimization algorithms for the NP-complete exact cover and 3SAT problems. Quantum Info. Comput., 11(7-8):638-648, July 2011. [25] I. Hen. Determining quantum Monte Carlo simulability with geometric phases. Phys. Rev. Research, 3(2):023080, 2021. [26] S. Lloyd. Universal quantum simulators. Science, 273(5278):1073-1078, 1996. [27] D. W. Berry, G. Ahokas, R. Cleve, and B. C. Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359-371, 2007. Appendix A: Font Conventions for Notation We adopt the following conventions throughout: • Hilbert space / Subspace / Basis: calligraphic, e.g. V, B. • Hamiltonian / Matrix: blackboard bold, e.g. H, B. • Time-dependent quantity: typewriter, e.g. x := x(t), jxx := jxx(t). • Named object / Abbreviation: capital typewriter, e.g. LM, GM, MLIS. 75
|
2509.16224
|
1
Predicting First-Year Dropout from Pre-Enrolment Motivation
Statements Using Text Mining
Karlijn F. B. Soppe*#, Ayoub Bagheri*, Shiva Nadi+, Irene G. Klugkist*, Theo Wubbels¥, & Leoniek
D.N.V. Wijngaards-de Meij*
*Department of Methods and Statistics, Faculty of Social and Behavioral Sciences, Utrecht University
+Direction of Information and Technology Services, Utrecht University
¥Department of Education, Faculty of Social and Behavioral Sciences, Utrecht University
#corresponding author email: k.f.b.soppe@uu.nl
Abstract
Preventing student dropout is a major challenge in higher education and it is difficult to predict prior
to enrollment which students are likely to drop out and which students are likely to succeed. High
School GPA (HSGPA) is a strong predictor of dropout, but much variance in dropout remains to be
explained. This study focused on predicting university dropout by using text mining techniques with
the aim of exhuming information contained in students’ written motivation. By combining text data
with classic predictors of dropout (student characteristics), we attempt to enhance the available set
of predictive student characteristics. Our dataset consisted of 7,060 motivation statements of
students enrolling in a non-selective bachelor at a Dutch university in 2014 and 2015. Support Vector
Machines (SVMs) were trained on 75% of the data and several models were estimated on the test
data (25%). We used various combinations of student characteristics (e.g., high school grades, age)
and text (i.e., TFiDF, topic modelling, LIWC dictionary). Results showed that, although the combination
of text and student characteristics did not improve the prediction of dropout, text analysis alone
predicted dropout similarly well as a set of student characteristics. Suggestions for future research are
provided.
Keywords motivation, transition into HE, dropout prediction, text mining, natural language
processing
2
1. Introduction
Improving student retention is one of the biggest challenges in higher education. Retaining students
results in higher revenue for universities (Zhang et al., 2010) since their funding is often partially based
on graduation rates (Jongbloed et al., 2018). For students, finalizing their degree is also important, as
dropping out of higher education is associated with negative consequences, such as untapped human
potential, a low return on their financial investment (Psacharopoulos, 1994), and reduced social
welfare (Hällsten, 2017). Moreover, low retention rates also impact society since income levels rise
with a higher education degree (Jayaraman, 2020). Thus, it is paramount for society to keep dropout
in higher education to a minimum.
Ideally, students at risk of dropout should be identified prior to enrollment, to minimize negative
consequences for both students and universities. In selective admission, it is common practice to try
to identify students at risk of dropout based on their application. Staff members of the admissions
committee are generally looking for both cognitive (e.g., prior performance) and non-cognitive (e.g.,
personality and motivation) factors when selecting suitable candidates (Kurysheva et al., 2019). The
use of some of these non-cognitive criteria, especially by means of motivation and recommendation
letters, for selecting students has been subjected to criticism (Kira Talent, 2018; Posselt, 2016). Self-
report measures such as motivation letters are susceptible to faking by the applicant, when being used
in a high-stakes context (Niessen et al., 2017). Moreover, filtering out true motivation can be
challenging for program staff. They may need to “read between the lines” to form an idea about the
factors driving a student to apply for their study program. Furthermore, it might be hard to identify
students’ motivation solely based on a written statement. Characteristics of the reader (e.g.,
experience), their psychology, and environment can introduce bias into the evaluation of the
motivation letters (Bridgeman, 2013). These aspects make humans inconsistent and unreliable
evaluators (Zupanc, 2018). Lastly, reading these statements is very time consuming and it is not easy
to compare motivation across students. This cumbersomeness may make program staff less likely to
engage in evaluating motivation texts as part of the enrollment procedure (Moskal et al., 2016). All
things considered, the vast amount of text data in the application process may exceed the human
capacity to process it thoroughly.
This study, therefore, focuses on predicting university dropout by using text mining techniques to
exhume information contained in students’ written motivation. The aim of this study is to investigate
whether this novel approach can disclose information present in text, and thereby contribute to
detecting students who are potentially at risk of dropout as early as possible. If so, traditional
prediction models could be updated, using these techniques, to obtain higher predictive power.
3
Using machine learning techniques in education is not a new phenomenon (see Foster and Francis,
2020 for a systematic review), but almost all Educational Data Mining (EDM) research on student
dropout prediction used structured data (i.e., quantitative, alpha-numeric data that can directly be
used as input in statistical models). There are, however, some studies using Natural Language
Processing (NLP) techniques and unstructured data (i.e., qualitative data in no particular format, such
as text, audio, or video files) in predicting student completion of Massive Open Online Courses
(MOOCs). Most of these studies use sentiment analysis to detect positive or negative phrases,
motivation, engagement, etc. in discussion forums or assignments (Jayaraman, 2020). For example, in
a study on students’ opinions towards a course, Wen and colleagues (2014) found, using sentiment
analysis, that students who used words related to motivation were more likely to complete the course.
Moreover, Crossley and colleagues (2016) used NLP techniques on MOOC forum posts and found that
a range of NLP indicators, such as lexical sophistication and writing fluency, were predictive of student
completion of the MOOC.
Outside of MOOCs, we know of only two studies that used text mining and NLP techniques to add to
the early identification of students at risk of dropout by improving the predictive validity of student
dropout models using unstructured data. One of these studies used sentiment analysis to predict
dropout by analyzing notes written by student advisors (Jayaraman, 2020). The most frequently used
words were “drop” (negative sentiment) and “progress” (positive sentiment). By comparing several
models, the most accurate model was sought to predict dropout. The best performing model, a
random forest classifier, predicted dropout with 73% accuracy. In the second study, Stone and
colleagues (2019) used both human coding and a variety of NLP techniques to detect non-cognitive
traits, such as psychological connection (which they used as a proxy for intrinsic motivation), by
analyzing students’ 150-word open-ended descriptions of their own extracurricular activities or work
experiences included in their college applications. Correlations between human coding and model-
based coding ranged from medium-small to medium-strong on the respective non-cognitive traits.
The correlation between human- and model-based coding for psychological connection was .655,
indicating a medium-strong relation. The non-cognitive traits were then used in separate regression
models to predict 6-year graduation outcomes. Psychological connection was insignificant in both the
human- and the model-coded prediction of 6-year graduation. However, results showed that some
other traits had predictive power net of other known predictors. For example, results from both the
human- and model-based coding models showed that students portraying a growth mindset were
more likely to graduate within six years, when controlling for sociodemographics, secondary school
GPA and intelligence.
4
In this study, having the aim to contribute to early detection of at-risk students, we use NLP-based
techniques to analyze short motivation statements of applicants to non-selective bachelor programs
(i.e., programs that admit all students who obtained a pre-university diploma) in the Netherlands. In
doing so, we try to answer the question whether students at risk of dropout can be identified through
text mining, based on their motivation for the study program as written in their intake questionnaire
prior to enrollment; and whether information extracted from these motivation statements adds
predictive power net of student characteristics.
2. Methods
2.1. Dataset
All students applying to a higher education study program in the Netherlands have to file an
application through a central website. Upon filing such an initial request, students applying to a non-
selective bachelor will receive a request to participate in a matching procedure. This procedure
consists of an online questionnaire regarding motivation and student background characteristics and
may be followed by an activity online or on campus (e.g., interview, online course, or matching day).
Upon completion of the matching procedure, students will receive a request to finalize their
enrollment.
For this study, we obtained 7,060 student records from a Dutch university, composing all students
who enrolled in a non-selective bachelor at this university during the academic years 2014 and 2015.
The obtained academic records consisted of student background information, their answers to the
matching questionnaire, and student progress data such as first-year dropout. The dataset is analyzed
using Python and source code can be found on GitHub1.
2.2. Variables
In this study both structured data (i.e., a set of student characteristics ranging from prior education to
the number of study programs a student applied for) and unstructured data (i.e., motivation
statements) were used to predict first-year student dropout. Below, we discuss the operationalization
of the structured data. In the next sections we will elaborate on the unstructured data and how
features were extracted from the raw text data.
Dropout2. Student dropout is measured using information on whether a student re-enrolled
in the second year of the study program. Students who paid the tuition fees for the second year are
1 https://github.com/ShNadi/study_motivation
2 Whether students paid for their re-enrollment is one way of operationalizing dropout. Another way is to look
at whether students have obtained sufficient credits to continue. In the Netherlands, students must obtain
45/60 ECTS to continue in their sophomore year. For this study we ran all models with this classifier (i.e., did or
did not obtain 45 ECTS) as well and results differ only marginally.
5
considered to have re-enrolled for their sophomore year. They are treated as the retention class (0).
Students who did not pay tuition for their sophomore year are classified as dropouts (1).
Prior education. Students with a variety of educational backgrounds enroll in Dutch
universities. Prior education was measured using a categorical variable with the following levels:
preparatory university diploma (VWO), to be obtained (1); preparatory university diploma (VWO),
already obtained (2); university of applied sciences propaedeutic diploma (3); and other (4).
High school performance. In the Netherlands, high school grades are given on a scale from 1-
10, with 10 representing a perfect score. In the questionnaire prospective students were requested to
self-report their grades for the three core subjects in high school (Dutch, English, and Mathematics).
To measure high school performance, a mean grade of these three core subjects in the pre-final year
of high school was calculated. If one of the grades was missing, a mean of the remaining two subjects
was taken, otherwise the variable was coded as missing.
Ability beliefs. Students’ belief in their own abilities was measured using a single item from
the questionnaire, stating: “the program matches my capacities and skills”. The item could be
answered with (1) yes or (0) no.
Interests. Students’ interest in the study program was measured using the statement “the
program is in line with my interests”. The item could be answered with (1) yes or (0) no.
Gender. Information about students’ gender was taken from the university registration
systems. Male students were coded (1) and female students as (0).
Age. Students’ date of birth was taken from the university registration systems and then
recoded into age at the start of the study program by subtracting date of birth from the first day of
the academic year for each cohort.
Cohort. The dataset consists of students who enrolled in a non-selective bachelor’s program
during the academic years of 2014, coded as (1) and 2015, coded as (2).
Study program. In the Netherlands, students choose a specific study program to enroll in for
their university education, e.g., Psychology, Chemistry, Law. To control for differences in dropout
across study programs, all study programs were added as dichotomous variables, i.e., coded as (1) if
a student applied for that program and as (0) otherwise.
Discipline. All study programs were allocated into three different disciplines: Science,
Technology, Engineering & Mathematics (1), Social Sciences (2) and Humanities (3).
6
Previously enrolled. Students who have previously been enrolled in another study program
at the same university were coded (1) and all others as (0).
Multiple requests. Students who filed admission requests for more than one study program
were coded (1) and all others as (0).
Table 1 provides an overview of the descriptive statistics of the structured data in the sample, split by
cohort.
Table 1. Descriptive statistics of the structured data per cohort.
Characteristics
2014
2015
Re-enrolled
Dropout
Re-enrolled
Dropout
n M(SD)
n M(SD)
n M(SD)
n M(SD)
Prior education
pre-university; to be obtained
1785
458
1619
480
pre-university; obtained
793
293
636
276
propaedeutic diploma
309
96
284
95
other
63
13
54
16
High school performance
2843
6.84
(.61)
817
6.64
(.56)
2316
6.89
(.66)
775
6.62
(.59)
Ability beliefs
positive
2092
616
1801
562
negative
858
244
792
305
Interests
positive
2895
830
2401
783
negative
55
30
192
84
Gender
male
1267
455
1184
506
female
1683
405
1409
361
Age
2837
19.00
(2.44)
816
19.35
(2.43)
2593
18.95
(2.39)
867
19.50
(3.03)
Discipline
STEM
677
248
612
233
Social sciences
1739
419
1413
416
Humanities
534
193
568
218
Previously enrolled
yes
284
93
251
114
no
2666
767
2342
753
Multiple requests
yes
107
40
107
39
no
2843
820
2486
828
2.2.1. Preprocessing the motivation statements
To analyze unstructured text data, several steps need to be taken to reduce its high dimensionality.
Our raw text data consist of students’ answer on the intake questionnaire to the following question:
“Why do you want to study [program name] in [city]? (10-25 lines)”. This question was followed by
the instruction: “When answering the question, for example think about your motivation for the
7
content of the study program, your choice for an academic program, and your motivation for a
profession or position that this program prepares you for”. The first step that needs to be taken is pre-
processing the data. In our analysis, pre-processing the motivation statements consists of stop word
removal, removing whitespaces and numbers, and converting text into lowercases. This is a step that
enhances the performance of the algorithm in later stages. After the pre-processing step, the high
dimensionality of text data still prevents it from being used directly in a statistical model and therefore
a feature engineering step is required to inspect the text data and acquire multiple feature sets.
2.2.2. Feature engineering
Feature engineering is a process in machine learning that is used to extract analyzable properties from
raw data. In machine learning these analyzable properties are known as features; they can be
considered independent variables. In this study, three different types of feature engineering were
applied to the motivation statements. Initially, a bag-of-words representation was used as a simplified
representation of the text data. Thereafter, two additional, and more advanced, feature engineering
methods were applied; latent Dirichlet allocation topic modelling, and linguistic inquiry and word
count (LIWC) dictionary words. Each of these methods is explained below.
Bag-of-Words. This is a process that converts text data into numbers in a (e.g., document-
term) matrix. To create this matrix, we used Term Frequency inverse Document Frequency (TFiDF)
which is a bag-of-words method intended to reflect the relative frequency of a term (word) in each
document (motivation statement). TFiDF can be calculated by multiplying the number of times a word
appears in a document, and the inverse document frequency of the word in the dataset. With TFiDF,
the words that are common in every document rank low even though they appear many times. This is
because TFiDF is offset by the number of documents that contain the word. The bag-of-words
representation is the simplest way to make text analyzable in a statistical model. In the remainder of
this paper, we will refer to it as just “text” or the method we used: “TFiDF”.
Topic modeling. One way of reducing the high dimensionality of text data, is to represent it
as a set of topics across documents. So, instead of looking at the word frequency of single words like
TFiDF does, words are clustered into groups that represent underlying concepts. To identify topics in
the motivation statements we employed a topic modeling method, Latent Dirichlet allocation (LDA),
using a collapsed Gibbs sampling approach (Blei et al., 2003). LDA topic modeling considers each topic
as a probability distribution over terms, and each document as a combination of topics. LDA is an
unsupervised method, which means that it can extract hidden topics from text without human
assistance. This entails that the extracted topics do not always construe a clear meaning. Therefore, it
is up to the researchers to identify which number of topics is the best conceptual representation of
the text data. Therefore, we ran a set of models with different numbers of topics (i.e., 5, 10, 15, 20,
8
50). Based on inspection of the representative terms in each topic and to what extent these terms
could form a meaningful topic together, two of the authors independently selected the model with 15
topics as the best representation of the data. Therefore, the 15 topics were used as “features
extracted from text”.
Linguistic Inquiry and Word Count (LIWC). LIWC is a text analysis tool that reduces the
dimensionality of the data by mapping words onto predetermined categories, using psychometrically
validated dictionaries. LIWC can expose certain psychological characteristics of the writer (Tausczik &
Pennebaker, 2012). The main categories provided by LIWC are general information (e.g., word count),
linguistic dimensions (categorized in verbs and function words, such as pronouns), psychological
processes (containing the main categories social processes, affective processes, cognitive processes,
perceptual processes, biological processes, and relativity), personal concerns, and spoken language
(Pennebaker, et al., 2007). Each of these categories has some features. For example, the category
“social words” contains the features “family”, “friends”, and “humans”, and the category “relativity”
is divided into the features “motion”, “space” and “time”. We use LIWC2007 to extract these features
from the motivation statements and to use as input for the models. In this version of LIWC, its
dictionaries have been translated into several languages, including Dutch (Boot et al., 2017). Together
with the 15 topics, we refer to the LIWC sub-categories as “features extracted from text” in the
remainder of this paper.
2.2.3. Training the algorithm
Support Vector Machine (SVM) was chosen to analyze the data, since it can handle (the combination
of) structured and unstructured data, and is known to generally perform well in text classification
problems. First, the data were randomly split into training (75%; N = 5295)) and test sets (25%; N =
1765). The training set was used to train the algorithm by providing it with both the input (i.e., student
characteristics, text, and text features) and the output (i.e., whether a student dropped out or not).
K-fold cross validation with k = 5 was used to evaluate the performance of the training model
(Refaeilzadeh et al., 2009). Cross validation is a resampling process in which the training data is split
in k-different portions to test and train a model on different iterations. If the different iterations return
different levels of model accuracy, this can indicate potential problems regarding overfitting or
selection bias. The full set of training data can then be inspected further to ensure better performance
of the algorithm when eventually applied to the test data. For none of the estimated models in our
study cross validation indicated a need for further inspection of the training data.
2.2.4. Analysis
We analyzed our data using six separate SVM models, exploring the most accurate combination of
features to predict dropout. First, we started with a model using only the structured data, our set of
9
student characteristics, as input for the model. This provided us with a baseline of how well we would
be able to predict dropout if we would not have any text data. Second, we estimated a model with
only the text using TFiDF as input to compare the algorithms’ performance to that of the first model.
Third, we added the features that we extracted from the text through LDA topic modeling and the
LIWC dictionary to the text-only model to assess the added value of more advanced text mining
techniques on top of the simple bag-of-words representation. Lastly, to answer the question whether
information extracted from text can add to the prediction of dropout net of structured data, we
examined the performance of the SVM with different combined feature sets. Table 2 provides an
overview of the input per model.
Table 2. Overview of the feature sets across the different models.
Features
Model 1
Model 2
Model 3
Model 4
Model 5
Model 6
Structured data
Prior education
X
X
X
X
High school performance
X
X
X
X
Ability beliefs
X
X
X
X
Interests
X
X
X
X
Gender
X
X
X
X
Age
X
X
X
X
Cohort
X
X
X
X
Discipline
X
X
X
X
Study program
X
X
X
X
Previously enrolled
X
X
X
X
Multiple requests
X
X
X
X
Unstructured data
Text (TFiDF)
X
X
X
X
Latent Dirichlet Allocation
X
X
X
LIWC dictionary
X
X
X
Student retention data are generally imbalanced, since the number of dropouts is much smaller than
the number of students that continue the study program. This imbalance can be problematic, as the
standard classification algorithms have a bias towards the majority class, giving misleadingly promising
results (Dalipi et al., 2018). To solve the issue of imbalanced data, we applied a balanced class weight
with all our classification algorithms (see GitHub for code). This balanced weight can be considered an
10
oversampling technique which essentially replicates the smaller class until it reaches the sample size
of the larger one.
Performance of Support Vector Machines is generally assessed by accuracy, precision, recall, and f1-
scores. Accuracy is an unreliable measure for imbalanced data and therefore we do not use it.
Moreover, because of the imbalance, weighted output for precision, recall, and f1-score are reported
to assess the performance of the algorithm. Precision denotes the true positives divided by the true
positives + false positives. Recall is defined as the true positives divided by the true positives + false
negatives. The f1-score is the weighted average of precision and recall. To get a better sense of these
performance measures for our specific context, Figure 1 shows precision and recall for the dropout
class.
Figure 1. Visualization of the model performance criteria precision and recall, applied to the dropout
class.
11
3. Results
The total dataset was split into a training set (75%) and a test set (25%). Results presented in this
section are based on the 1765 motivation statements forming the test data. Of these 1765 statements,
1312 belonged to the retention class and 453 to the dropout class. Table 3 provides a summary of the
results. Given our dual aim of exploring whether students at risk of dropout can be identified through
motivation statements and whether information extracted from these motivation statements adds
predictive power net of student characteristics, we are interested in model performances in absolute
terms, but not in whether certain models statistically outperform others.
Table 3. Performance measures of the estimated models, for the total test set (T) and split by the
retention (R) and dropout (D) class.
Precision
Recall
f1-score
Model
T
R
D
T
R
D
T
R
D
1
.71
.84
.31
.57
.55 .65
.60
.66
.42
2
.67
.79
.28
.63
.71 .37
.65
.75
.32
3
.67
.79
.29
.64
.72 .37
.65
.75
.32
4
.69
.81
.32
.65
.71 .44
.67
.76
.37
5
.70
.82
.31
.60
.61 .56
.63
.70
.40
6
.68
.80
.31
.64
.71 .42
.66
.75
.35
3.1. Model 1: Student characteristics
First, a SVM model with only student characteristics was run as a baseline model. This model contains
variables that are known to be predictive of student dropout, including high school performance. The
weighted precision score of Model 1 was .71, the weighted recall score .57. This resulted in a weighted
f1-score of .60. The prediction of precision for the majority class (retention) was much better than the
performance for the minority class (dropout). With a score of .65, recall was higher for the dropout
class than for the retention class (.55). This is notable, given the fact that algorithms generally perform
better for the majority class, even after correcting for imbalanced data.
12
3.2. Model 2: Only text
To identify what the analysis of text data through NLP-based techniques can add to the prediction of
dropout, first a model with only text was analyzed, using TFiDF to identify the importance of words in
the corpus (Model 2). The model had a weighted precision score of .67, a weighted recall score of .63,
and a weighted f1-score of .65. When comparing the performance of the algorithm for this model
several things stand out. First, precision of this model is worse than in Model 1, meaning that in the
model with the student characteristics, of all the selected students there were proportionally more
true positives. In Model 2 recall is better than in Model 1, meaning that of all the relevant students
there were proportionally more true positives in the model with only text. Second, recall scores for
this model are less balanced across the classes than in Model 1. Table 3 shows a much higher recall
score for the retention class (.71) than for the dropout class (.37).
3.3. Model 3: text and text features
Model 3 investigates the predictive power of all information we extracted from text. To that end, we
added features extracted from text to the text data (LIWC dictionary words and topics extracted
through LDA topic modelling), to investigate whether this combination could outperform Model 1.
The LIWC features already consist of predetermined categories, using psychometrically validated
dictionaries. For the LDA topic modelling the features first must be identified before they can be used
in the analyses. For the 15 topics that were identified in the motivation statements, the top 10 terms
of each of these topics are listed in Table 4. Upon consensus among the authors, the 15 topics were
given a theoretical label if possible. Some of the topics did not construe one clear underlying concept
and were therefore left unlabeled.
13
Table 4. Top ten most used terms per topic, resulting from the LDA topic modelling.
Topic
Label
Top words
1
program interest
study, Utrecht, program, very, finds, fun, seems, good, very, rather
2
general interest
university, highly, knowledge, Utrecht, study, interest, offers, like, choice,
developing
3
previously enrolled
year, study, go, wanted, came, found, rather, choice, studying, knew
4
applied sciences
program, university of applied sciences, Utrecht, scientific education,
university, less, aspects, academic, difference, choose
5
culture minded
subjects, different, language, interests, culture, year, broad, cultures,
choosing, liberal arts
6
sense of belonging
day, open, trial studying days, visited, found, open days, during, ambiance,
spoke, immediately
7
societal minded
social, spatial planning, study, sciences, general, geography, studies,
different, expect, hope
8
pedagogical minded
children, study, pedagogical sciences, chosen, helping, doubts, good,
characteristics, later, finished
9
computer minded
programming, games, computers, artificial intelligence, game technology,
suitable, technology, game, computer, logic
10
artistic students
media, art, theater, culture, film, television, films, chose, theater film,
Amsterdam
11
-
music, nature, hopefully, astronomy, most important, therein, teaching
school, ever, madam, dear sir
12
-
person, maximum, character, getting acquainted, function, fascinated, legal
system, honest, nature (kind), wonderful
13
location
location, widening, exist, per, passed, stadium, analysis, classes,
acquaintances, about
14
politics minded
political, strike, stone (figure of speech), technological, horizon, sustainable,
advance, curriculum
15
-
help, automatically, job opportunity, sociological, public, mono disciplinary,
suits
Figure 2 shows the different topics ordered by strength for predicting dropout (positive values) and
retention (negative values). The topics 13 (location), 7 (societal minded), and 11 (unlabeled) were the
strongest positive predictors of dropout. Students using a lot of words related to these aspects, are
more likely to drop out. The strongest negative predictors of dropout were the topics 2 (program
14
interest), 12 (unlabeled), and 1 (general interest). Students using many of such words, are less likely
to drop out.
Figure 2. Topics extracted through LDA topic modelling, ordered by strength (x-axis coefficient) for
predicting dropout (positive values) and retention (negative values).
Subsequently, all 15 topics were fed to the SVM, together with text data of Model 2 and features from
the LIWC package for text analysis. The results in Table 3 show that Model 3 is almost identical to
Model 2. In other words, it appears that, in this study, the features extracted through LDA topic
modelling and the LIWC package do not add to the prediction of dropout in comparison to text alone.
3.4. Model 4: Student characteristics and text
To identify whether written motivation adds to the prediction of first-year dropout net of student
characteristics, Model 1 was combined with different elements of text. In Model 4 the input of Model
1 (student characteristics) and Model 2 (text only) was combined. The weighted recall (.65) and
weighted f1-score (.67) in this model are the highest of all the estimated models. The weighted
precision score (.69) of this Model holds a middle position regarding algorithm performance between
Model 2 and 3 on the one hand, and Model 1 on the other hand.
15
3.5. Model 5: Student characteristics and text features
In this fifth model, student characteristics were combined with features extracted from the text,
rather than the text itself. Even though the features extracted from text did not add to the predictive
power of dropout net of text alone (Model 3), the combination of student characteristics and features
extracted from text might improve the prediction of dropout. With a weighted precision score of .70,
a weighted recall score of .60, and a weighted f1-score of .63, this algorithm performed worse than in
Model 4.
Model 5 was also used to inspect the importance of all features together that were used as input for
the SVM (i.e., student characteristics, LIWC words, and LDA topics). We performed this step on this
model, rather than Model 6 (see below), because the vast number of features of the TFiDF method
(i.e., all the individual words in the texts) does not allow it to be captured in a figure. The strength of
the 25 most important features for predicting dropout (positive values) and retention (negative
values) is shown in Figure 3. Some of the most important features are discussed. The strongest positive
effect is word count (WC), indicating that the more words students used the higher their probability
of dropout. Second, the use of personal pronouns (ppron) is a similarly strong predictor of dropout in
this model. The more personal pronouns a student uses in their text, the higher the probability of
dropout. Frequent use of the first person singular (i), however, is negatively related to dropout.
Looking at Figure 3, it indeed seems to matter which pronouns are being used. For example, the more
the second person (you) is used, the lower the probability of dropout, whereas the relatively frequent
use of impersonal pronouns (ipron) is associated with a higher probability of dropout. It is noteworthy
that, in this model, high school performance was again the strongest negative predictor of dropout.
Lastly, relatively important features in this list are age and article. Age was a relatively weak predictor
in the model with only student characteristics. In this model, however, it is the third most important
predictor of dropout, with older students having a higher probability to drop out. The relatively
frequent use of articles (article), on the other hand, is associated with a lower probability to drop out.
Among the top 25 most important features for the prediction of dropout there are several other
features that were obtained through the LIWC dictionary. Interestingly though, none of the topics
from the LDA topic modelling is amongst the 25 most important predictors of dropout.
16
Figure 3. 25 most predictive features3 of Model 5, ordered by strength (x-axis coefficient) for
predicting dropout (positive values) and retention (negative values).
3.6. Model 6: Student characteristics, text, and text features
Lastly, a model was estimated that included the student characteristics, as well as the text itself and
the features extracted from the text. The algorithm performance of this model was almost the same
as the performance for Model 4. The weighted precision was .68, weighted recall .64, and the
weighted f1-score .66. When comparing this model to Model 5, it strengthens the conclusion that
features extracted from text are, in our dataset, of limited additional value in the prediction of
dropout.
3 Meaning of the features from top to down: word count (WC); personal pronouns (ppron); age; dictionary
words (Dic); impersonal pronous (ipron); conjunctions (conj); prepositions (preps); present tense (present);
common verbs (verb); total function words (funct); future tense (future); auxiliary verbs (auxverb); words with
more than 6 letters (sixltr); first person plural (we); past tense (past); third person singular (shehe); words per
sentence (WPS); adverbs (adverb); total pronouns (pronoun); negations (negate); third person plural (they);
articles (article); second person (you); first person singular (i); HSGPA (high school performance).
17
3.7. Comparing most frequently used terms of correctly and incorrectly classified
dropouts
To get a better understanding of the algorithm performance, top terms used by students who were
correctly identified as dropouts, were compared to top terms used by students who were incorrectly
classified as dropouts. A high overlap in the commonly used terms, would indicate that there is not
enough discrepancy in the written motivation between the two groups for the SVM to detect any
differences.
When we inspected the 100 most used terms for both groups, overlap was indeed identified. Roughly
a quarter of the top terms was used by students in both categories. Most of these words were names
of study programs (e.g., Biology, Law, Sociology), or derivatives thereof (e.g., game(s) for Game
Technology or art for Art History). The other overlapping words are generic, such as program or name,
or apply to a specific field (i.e., people and children for behavioral sciences and music and film for arts
programs). Given that most of the overlapping words refer to names of study programs or derivatives
thereof, the prediction of dropout may improve if these words can be excluded from the text input.
Because of the too small sample size per study program in our data we were not able to do this.
4. Discussion
In this study we attempted to answer the question whether students at risk of dropout can be
identified based on their motivation for the study program of their initial choice as written in their
intake questionnaire prior to enrollment by using NLP-based techniques. Moreover, we asked the
question whether information extracted from these motivation statements adds predictive power net
of student characteristics. The results showed that the answer to this question is twofold. When text
was used in addition to student characteristics, it hardly added to the prediction of dropout. However,
when only text data were used, the algorithm performed very similar to the one in the model with
only the student characteristics.
Predicting dropout accurately is not easy, especially not based on student information that is available
prior to enrolment in higher education. Since the model with only text showed very similar results to
the model with only student characteristics, it appears that student dropout can be predicted with a
short motivation statement analyzed with data mining techniques at least as good as with a set of
known predictors like high school performance. Once it becomes known exactly what aspects of
motivation texts are predictive of dropout, these features might be easy to manipulate. Therefore,
future research should focus on the reliability of motivation texts in comparison to student
characteristics.
18
Moreover, structured and unstructured data seem to complement each other, as precision was higher
in the model with student characteristics (Model 1) and recall was higher in the model with only text
(Model 2). Therefore, analyzing text data with text mining techniques seems promising. Our aim was
to exhume hidden information from text data and investigate whether this information could be used
to predict students at risk of dropout. Unstructured data, like text, are very time consuming and
complex to analyze for humans. However, if highly predictive text mining algorithms can be developed
to analyze these data, that could potentially be useful to identify students at risk of dropout before
the start of the study program without needing an extensive set of student characteristics. Such
students could then immediately be offered support to mitigate the risk.
The fact that combining the text and student characteristics, like high school performance, does not
(substantially) improve the model prediction in this study, might indicate that they measure the same
underlying concepts. It is possible that the way the question about motivation for the study program
was asked or explained, probes students to put into words the information they already filled out
earlier in the questionnaire by answering the other survey questions. Future research could try to
verify this hypothesis by studying motivation statements using less directive questions. Another
possible way might be to ask more open-ended questions about specific components of program
choice (e.g., why this study program; why this university; what in this study program makes it
attractive; etc.) to obtain more unstructured (i.e., text) data covering a variety of underlying concepts
to analyze and relate to academic outcomes, using machine learning techniques. It was beyond the
scope of this paper to compare the performance of the SVM to other machine learning techniques,
like K-nearest neighbors or naïve Bayes (e.g., Aggarwal et al., 2012, for comparing the performance of
different techniques), but such an approach could be considered in the future.
A limitation of this study lies in the properties of our dataset. First, the dataset is imbalanced because
there are proportionally few dropouts. This is generally the case with student retention data, and
therefore, cannot be solved. Oversampling and undersampling techniques were used and weighted
scores were reported to deal with this limitation. Second, the motivation statements are generally
short (i.e., students were requested to write 10-25 lines, resulting in texts that are roughly 250 words
long) and the dataset consists of applicants to all non-selective bachelor programs. Both the length of
the texts and the heterogeneous student population may have an influence on the ability of the
algorithm to construct an accurate prediction model. Algorithms learn by providing them with more
and more data. Despite our relatively large number of motivation statements, the relatively short and
pluriform texts that were used could have affected the performance of the algorithm for the text
models. Future research may investigate whether a more uniform group of students (e.g., of one
faculty or one study program) would result in a better performance of the text mining approach.
19
Another direction for future research is to apply deep learning-based NLP methods with the use of
transfer learning (i.e., improving learning in a new task through the transfer of knowledge acquired in
a previous task) on a bigger dataset. This could improve representation of text data using the
distributional hypothesis, which poses that the more semantically similar two words are, the more
distributionally similar they will be, and thus the more they tend to occur in similar linguistic contexts
(Sahlgren, 2008). For example, the algorithm can use the fact that the words city and location are
distributionally more like one another than they are to the word scientific in a multidimensional
context, to predict that city and location are also semantically more like one another than to the word
scientific. However, these techniques require more data. Nevertheless, this direction is worth
researching as it could help in capturing the distinctive writing style of a student. This information
could, in turn, contribute to the early identification of dropouts.
When developing prediction models with the aim to use them for the early identification of dropouts,
one should especially focus on improving the precision scores for the dropout class. If text mining
methods were to become reliable enough to be used in advising students on their program choice, we
think it would be better to incorrectly classify a student as successful, than to incorrectly classify a
student as a future dropout. Some students who would then be incorrectly advised to start the study
program might actually be successful if the positive advice has an influence on their feelings of self-
efficacy and/or motivation. Regardless of whether an advice can have such an effect, we think it is
better to have a predictive instrument that returns very few false positives (students unjustly classified
as dropout) than an instrument that returns very few false negatives (students who were unjustly
classified as successful). Therefore, if choices must be made in developing these models, prioritizing a
high precision score for the dropout category is most favorable when one wants to give every student
a chance.
There is a famous quote by the economist Ronald Coase stating: “if you torture the data long enough,
it will confess to anything”. Coase meant this as a warning to fellow researchers, not to engage in
scientific misconduct (e.g., p-hacking). Although not for the purposes of making it confess to anything
specific, torturing the data is exactly what we did in this study. We approached the motivation data
from different angles, to predict first-year dropout of students applying for a certain undergraduate
program. By comparing and combining different methods, we found that applying machine learning
techniques on student motivation is a potentially promising new way of approaching student dropout.
We believe it is worthwhile to explore this line of research further to find better ways to extract
information from motivation statements. When doing so, focus could also be placed on comparing
human-coded and algorithm-coded motivation statements to get a better sense of how accurate these
methods are in predicting dropout and which of them is better.
20
References
Aggarwal, C. C., & Zhai, C. (2012). A survey of text classification algorithms. In C. C., Aggarwal & C.
Zhai (Eds.), Mining text data (pp. 163-222). Springer. https://doi.org/ 10.1007/978-1-4614
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning
Research, 3, 993-1022.
Boot, P., Zijlstra, H., Geenen, R. (2017). The Dutch translation of the Linguistic Inquiry and Word
Count (LIWC) 2007 dictionary. Dutch Journal of Applied Linguistics, 6 (1), p. 65-76.
Bridgeman, B. (2013). Human Ratings and Automated Essay Evaluation. In M. D. Shermis and J. C.
Burstein (Eds.), Handbook of Automated Essay Evaluation: Current Applications and New
Directions, (pp. 221-232). Routledge.
Crossley, S., Paquette, L., Dascalu, M., McNamara, D. S., & Baker, R. S. (2016, April). Combining click-
stream data with NLP tools to better understand MOOC completion. In Proceedings of the sixth
international conference on learning analytics & knowledge (pp. 6-14).
Dalipi, F., Imran, A. S., & Kastrati, Z. (2018, April). MOOC dropout prediction using machine learning
techniques: Review and research challenges. In 2018 IEEE Global Engineering Education
Conference (EDUCON) (pp. 1007-1014). IEEE.
Foster, C. & Francis, P. (2020). A systematic review on the deployment and effectiveness of data
analytics in higher education to improve student outcomes, Assessment & Evaluation in Higher
Education, 45(6), 822-841. https://doi.org/10.1080/02602938.2019.1696945
Hällsten, M. (2017). Is Education a Risky Investment? The Scarring Effect of University Dropout in
Sweden. European Sociological Review, 33(2), 169-181. https://doi.org/1 0.1093/esr/jcw053
Jayaraman, J. (2020). Predicting Student Dropout by Mining Advisor Notes. In Proceedings of The
13th International Conference on Educational Data Mining (EDM 2020) (pp. 629-632).
Jongbloed, B. W. A., de Boer, H. F., Kaiser, F., & Vossensteyn, H. J. (2018). Bekostiging van het
Nederlandse hoger onderwijs: kostendeterminanten en varianten.
Kira Talent (2018). Breaking down bias in admissions: The how-to guide to preventing admissions
bias at your school. http://start.kiratalent.com/breaking-down-admissions-bias/ Accessed 21 July
2021.
21
Kurysheva, A., van Rijen, H. V., & Dilaver, G. (2019). How do admission committees select? Do
applicants know how they select? Selection criteria and transparency at a Dutch University. Tertiary
Education and Management, 25(4), 367-388.
Moskal, A. C. M., S. J. Stein, and C. Golding. 2016. “Can You Increase Teacher Engagement with
Evaluation Simply by Improving the Evaluation System?” Assessment & Evaluation in Higher
Education 41(2): 286–300. doi:10.1080/02602938.2015.1007838.
Niessen, A. S. M., Meijer, R. R., & Tendeiro, J. N. (2017). Measuring non-cognitive predictors in high-
stakes contexts: The effect of self-presentation on self-report instruments used in admission to
higher education. Personality and Individual Differences, 106, 183–189.
https://doi.org/10.1016/j.paid.2016.11.014.
Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., & Booth, R. (2007). The Development and
Psychometric Properties of LIWC2007 (pp. 1–22). Pennebaker Conglomerates.
Posselt, J. R. (2016). Inside graduate admissions: Merit, diversity, and faculty gatekeeping.
Cambridge and London: Harvard University Press.
Psacharopoulos, G. (1994). Returns to Investment in Education: A Global Update. World
Development, 22(9), 1325-1343.
Refaeilzadeh, P., Tang, L., & Liu, H. (2009). Cross-validation. Encyclopedia of Database Systems, 5,
532-538.
Sahlgren, M. (2008). The distributional hypothesis. Italian Journal of Disability Studies, 20, 33-53.
Stone, C., Quirk, A., Gardener, M., Hutt, S., Duckworth, A. L., & D'Mello, S. K. (2019, March).
Language as thought: Using natural language processing to model noncognitive traits that predict
college success. In Proceedings of the 9th International Conference on Learning Analytics &
Knowledge (pp. 320-329).
Wen, M., Yang, D., & Rose, C. (2014, July). Sentiment Analysis in MOOC Discussion Forums: What
does it tell us?. In Educational data mining 2014.
Zhang, Y., Oussena, S., Clark, T., & Kim, H. (2010, June). Use Data Mining to Improve Student Retention
in Higher Education-A Case Study. In ICEIS (1) (pp. 190-197).
Zupanc, K. (2018). Semantics-based automated essay evaluation. Doctoral thesis, University of
Ljubljana.
|
1 Predicting First-Year Dropout from Pre-Enrolment Motivation Statements Using Text Mining Karlijn F. B. Soppe*#, Ayoub Bagheri*, Shiva Nadi+, Irene G. Klugkist*, Theo Wubbels¥, & Leoniek D.N.V. Wijngaards-de Meij* * +Direction of Information and Technology Services, Utrecht University ¥ #corresponding author email: Abstract Preventing student dropout is a major challenge in higher education and it is difficult to predict prior to enrollment which students are likely to drop out and which students are likely to succeed. High School GPA (HSGPA) is a strong predictor of dropout, but much variance in dropout remains to be explained. This study focused on predicting university dropout by using text mining techniques with the aim of exhuming information contained in students' written motivation. By combining text data with classic predictors of dropout (student characteristics), we attempt to enhance the available set of predictive student characteristics. Our dataset consisted of 7,060 motivation statements of students enrolling in a non-selective bachelor at a Dutch university in 2014 and 2015. Support Vector Machines (SVMs) were trained on 75% of the data and several models were estimated on the test data (25%). We used various combinations of student characteristics (e.g., high school grades, age) and text (i.e., TFiDF, topic modelling, LIWC dictionary). Results showed that, although the combination of text and student characteristics did not improve the prediction of dropout, text analysis alone predicted dropout similarly well as a set of student characteristics. Suggestions for future research are provided. Keywords motivation, transition into HE, dropout prediction, text mining, natural language processing 2 1. Introduction Improving student retention is one of the biggest challenges in higher education. Retaining students results in higher revenue for universities (Zhang et al., 2010) since their funding is often partially based on graduation rates (Jongbloed et al., 2018). For students, finalizing their degree is also important, as dropping out of higher education is associated with negative consequences, such as untapped human potential, a low return on their financial investment (Psacharopoulos, 1994), and reduced social welfare (Hällsten, 2017). Moreover, low retention rates also impact society since income levels rise with a higher education degree (Jayaraman, 2020). Thus, it is paramount for society to keep dropout in higher education to a minimum. Ideally, students at risk of dropout should be identified prior to enrollment, to minimize negative consequences for both students and universities. In selective admission, it is common practice to try to identify students at risk of dropout based on their application. Staff members of the admissions committee are generally looking for both cognitive (e.g., prior performance) and non-cognitive (e.g., personality and motivation) factors when selecting suitable candidates (Kurysheva et al., 2019). The use of some of these non-cognitive criteria, especially by means of motivation and recommendation letters, for selecting students has been subjected to criticism (Kira Talent, 2018; Posselt, 2016). Selfreport measures such as motivation letters are susceptible to faking by the applicant, when being used in a high-stakes context (Niessen et al., 2017). Moreover, filtering out true motivation can be challenging for program staff. They may need to "read between the lines" to form an idea about the factors driving a student to apply for their study program. Furthermore, it might be hard to identify students' motivation solely based on a written statement. Characteristics of the reader (e.g., experience), their psychology, and environment can introduce bias into the evaluation of the motivation letters (Bridgeman, 2013). These aspects make humans inconsistent and unreliable evaluators (Zupanc, 2018). Lastly, reading these statements is very time consuming and it is not easy to compare motivation across students. This cumbersomeness may make program staff less likely to engage in evaluating motivation texts as part of the enrollment procedure (Moskal et al., 2016). All things considered, the vast amount of text data in the application process may exceed the human capacity to process it thoroughly. This study, therefore, focuses on predicting university dropout by using text mining techniques to exhume information contained in students' written motivation. The aim of this study is to investigate whether this novel approach can disclose information present in text, and thereby contribute to detecting students who are potentially at risk of dropout as early as possible. If so, traditional prediction models could be updated, using these techniques, to obtain higher predictive power. 3 Using machine learning techniques in education is not a new phenomenon (see Foster and Francis, 2020 for a systematic review), but almost all Educational Data Mining (EDM) research on student dropout prediction used structured data (i.e., quantitative, alpha-numeric data that can directly be used as input in statistical models). There are, however, some studies using Natural Language Processing (NLP) techniques and unstructured data (i.e., qualitative data in no particular format, such as text, audio, or video files) in predicting student completion of Massive Open Online Courses (MOOCs). Most of these studies use sentiment analysis to detect positive or negative phrases, motivation, engagement, etc. in discussion forums or assignments (Jayaraman, 2020). For example, in a study on students' opinions towards a course, Wen and colleagues (2014) found, using sentiment analysis, that students who used words related to motivation were more likely to complete the course. Moreover, Crossley and colleagues (2016) used NLP techniques on MOOC forum posts and found that a range of NLP indicators, such as lexical sophistication and writing fluency, were predictive of student completion of the MOOC. Outside of MOOCs, we know of only two studies that used text mining and NLP techniques to add to the early identification of students at risk of dropout by improving the predictive validity of student dropout models using unstructured data. One of these studies used sentiment analysis to predict dropout by analyzing notes written by student advisors (Jayaraman, 2020). The most frequently used words were "drop" (negative sentiment) and "progress" (positive sentiment). By comparing several models, the most accurate model was sought to predict dropout. The best performing model, a random forest classifier, predicted dropout with 73% accuracy. In the second study, Stone and colleagues (2019) used both human coding and a variety of NLP techniques to detect non-cognitive traits, such as psychological connection (which they used as a proxy for intrinsic motivation), by analyzing students' 150-word open-ended descriptions of their own extracurricular activities or work experiences included in their college applications. Correlations between human coding and modelbased coding ranged from medium-small to medium-strong on the respective non-cognitive traits. The correlation between human- and model-based coding for psychological connection was .655, indicating a medium-strong relation. The non-cognitive traits were then used in separate regression models to predict 6-year graduation outcomes. Psychological connection was insignificant in both the human- and the model-coded prediction of 6-year graduation. However, results showed that some other traits had predictive power net of other known predictors. For example, results from both the human- and model-based coding models showed that students portraying a growth mindset were more likely to graduate within six years, when controlling for sociodemographics, secondary school GPA and intelligence. 4 In this study, having the aim to contribute to early detection of at-risk students, we use NLP-based techniques to analyze short motivation statements of applicants to non-selective bachelor programs (i.e., programs that admit all students who obtained a pre-university diploma) in the Netherlands. In doing so, we try to answer the question whether students at risk of dropout can be identified through text mining, based on their motivation for the study program as written in their intake questionnaire prior to enrollment; and whether information extracted from these motivation statements adds predictive power net of student characteristics. 2. Methods 2.1. Dataset All students applying to a higher education study program in the Netherlands have to file an application through a central website. Upon filing such an initial request, students applying to a nonselective bachelor will receive a request to participate in a matching procedure. This procedure consists of an online questionnaire regarding motivation and student background characteristics and may be followed by an activity online or on campus (e.g., interview, online course, or matching day). Upon completion of the matching procedure, students will receive a request to finalize their enrollment. For this study, we obtained 7,060 student records from a Dutch university, composing all students who enrolled in a non-selective bachelor at this university during the academic years 2014 and 2015. The obtained academic records consisted of student background information, their answers to the matching questionnaire, and student progress data such as first-year dropout. The dataset is analyzed using Python and source code can be found on GitHub1. 2.2. Variables In this study both structured data (i.e., a set of student characteristics ranging from prior education to the number of study programs a student applied for) and unstructured data (i.e., motivation statements) were used to predict first-year student dropout. Below, we discuss the operationalization of the structured data. In the next sections we will elaborate on the unstructured data and how features were extracted from the raw text data. Dropout2. Student dropout is measured using information on whether a student re-enrolled in the second year of the study program. Students who paid the tuition fees for the second year are 1 https://github.com/ShNadi/study_motivation 2 Whether students paid for their re-enrollment is one way of operationalizing dropout. Another way is to look at whether students have obtained sufficient credits to continue. In the Netherlands, students must obtain 45/60 ECTS to continue in their sophomore year. For this study we ran all models with this classifier (i.e., did or did not obtain 45 ECTS) as well and results differ only marginally. 5 considered to have re-enrolled for their sophomore year. They are treated as the retention class (0). Students who did not pay tuition for their sophomore year are classified as dropouts (1). Prior education. Students with a variety of educational backgrounds enroll in Dutch universities. Prior education was measured using a categorical variable with the following levels: preparatory university diploma (VWO), to be obtained (1); preparatory university diploma (VWO), already obtained (2); university of applied sciences propaedeutic diploma (3); and other (4). High school performance. In the Netherlands, high school grades are given on a scale from 110, with 10 representing a perfect score. In the questionnaire prospective students were requested to self-report their grades for the three core subjects in high school (Dutch, English, and Mathematics). To measure high school performance, a mean grade of these three core subjects in the pre-final year of high school was calculated. If one of the grades was missing, a mean of the remaining two subjects was taken, otherwise the variable was coded as missing. Ability beliefs. Students' belief in their own abilities was measured using a single item from the questionnaire, stating: "the program matches my capacities and skills". The item could be answered with (1) yes or (0) no. Interests. Students' interest in the study program was measured using the statement "the program is in line with my interests". The item could be answered with (1) yes or (0) no. Gender. Information about students' gender was taken from the university registration systems. Male students were coded (1) and female students as (0). Age. Students' date of birth was taken from the university registration systems and then recoded into age at the start of the study program by subtracting date of birth from the first day of the academic year for each cohort. Cohort. The dataset consists of students who enrolled in a non-selective bachelor's program during the academic years of 2014, coded as (1) and 2015, coded as (2). Study program. In the Netherlands, students choose a specific study program to enroll in for their university education, e.g., Psychology, Chemistry, Law. To control for differences in dropout across study programs, all study programs were added as dichotomous variables, i.e., coded as (1) if a student applied for that program and as (0) otherwise. Discipline. All study programs were allocated into three different disciplines: Science, Technology, Engineering & Mathematics (1), Social Sciences (2) and Humanities (3). 6 Previously enrolled. Students who have previously been enrolled in another study program at the same university were coded (1) and all others as (0). Multiple requests. Students who filed admission requests for more than one study program were coded (1) and all others as (0). Table 1 provides an overview of the descriptive statistics of the structured data in the sample, split by cohort. Table 1. Descriptive statistics of the structured data per cohort. Characteristics 2014 2015 Re-enrolled Dropout Re-enrolled Dropout n M(SD) n M(SD) n M(SD) n M(SD) Prior education pre-university; to be obtained 1785 458 1619 480 pre-university; obtained 793 293 636 276 propaedeutic diploma 309 96 284 95 other 63 13 54 16 High school performance 2843 6.84 (.61) 817 6.64 (.56) 2316 6.89 (.66) 775 6.62 (.59) Ability beliefs positive 2092 616 1801 562 negative 858 244 792 305 Interests positive 2895 830 2401 783 negative 55 30 192 84 Gender male 1267 455 1184 506 female 1683 405 1409 361 Age 2837 19.00 (2.44) 816 19.35 (2.43) 2593 18.95 (2.39) 867 19.50 (3.03) Discipline STEM 677 248 612 233 Social sciences 1739 419 1413 416 Humanities 534 193 568 218 Previously enrolled yes 284 93 251 114 no 2666 767 2342 753 Multiple requests yes 107 40 107 39 no 2843 820 2486 828 2.2.1. Preprocessing the motivation statements To analyze unstructured text data, several steps need to be taken to reduce its high dimensionality. Our raw text data consist of students' answer on the intake questionnaire to the following question: "Why do you want to study [program name] in [city]? (10-25 lines)". This question was followed by the instruction: "When answering the question, for example think about your motivation for the 7 content of the study program, your choice for an academic program, and your motivation for a profession or position that this program prepares you for". The first step that needs to be taken is preprocessing the data. In our analysis, pre-processing the motivation statements consists of stop word removal, removing whitespaces and numbers, and converting text into lowercases. This is a step that enhances the performance of the algorithm in later stages. After the pre-processing step, the high dimensionality of text data still prevents it from being used directly in a statistical model and therefore a feature engineering step is required to inspect the text data and acquire multiple feature sets. 2.2.2. Feature engineering Feature engineering is a process in machine learning that is used to extract analyzable properties from raw data. In machine learning these analyzable properties are known as features; they can be considered independent variables. In this study, three different types of feature engineering were applied to the motivation statements. Initially, a bag-of-words representation was used as a simplified representation of the text data. Thereafter, two additional, and more advanced, feature engineering methods were applied; latent Dirichlet allocation topic modelling, and linguistic inquiry and word count (LIWC) dictionary words. Each of these methods is explained below. Bag-of-Words. This is a process that converts text data into numbers in a (e.g., documentterm) matrix. To create this matrix, we used Term Frequency inverse Document Frequency (TFiDF) which is a bag-of-words method intended to reflect the relative frequency of a term (word) in each document (motivation statement). TFiDF can be calculated by multiplying the number of times a word appears in a document, and the inverse document frequency of the word in the dataset. With TFiDF, the words that are common in every document rank low even though they appear many times. This is because TFiDF is offset by the number of documents that contain the word. The bag-of-words representation is the simplest way to make text analyzable in a statistical model. In the remainder of this paper, we will refer to it as just "text" or the method we used: "TFiDF". Topic modeling. One way of reducing the high dimensionality of text data, is to represent it as a set of topics across documents. So, instead of looking at the word frequency of single words like TFiDF does, words are clustered into groups that represent underlying concepts. To identify topics in the motivation statements we employed a topic modeling method, Latent Dirichlet allocation (LDA), using a collapsed Gibbs sampling approach (Blei et al., 2003). LDA topic modeling considers each topic as a probability distribution over terms, and each document as a combination of topics. LDA is an unsupervised method, which means that it can extract hidden topics from text without human assistance. This entails that the extracted topics do not always construe a clear meaning. Therefore, it is up to the researchers to identify which number of topics is the best conceptual representation of the text data. Therefore, we ran a set of models with different numbers of topics (i.e., 5, 10, 15, 20, 8 50). Based on inspection of the representative terms in each topic and to what extent these terms could form a meaningful topic together, two of the authors independently selected the model with 15 topics as the best representation of the data. Therefore, the 15 topics were used as "features extracted from text". Linguistic Inquiry and Word Count (LIWC). LIWC is a text analysis tool that reduces the dimensionality of the data by mapping words onto predetermined categories, using psychometrically validated dictionaries. LIWC can expose certain psychological characteristics of the writer (Tausczik & Pennebaker, 2012). The main categories provided by LIWC are general information (e.g., word count), linguistic dimensions (categorized in verbs and function words, such as pronouns), psychological processes (containing the main categories social processes, affective processes, cognitive processes, perceptual processes, biological processes, and relativity), personal concerns, and spoken language (Pennebaker, et al., 2007). Each of these categories has some features. For example, the category "social words" contains the features "family", "friends", and "humans", and the category "relativity" is divided into the features "motion", "space" and "time". We use LIWC2007 to extract these features from the motivation statements and to use as input for the models. In this version of LIWC, its dictionaries have been translated into several languages, including Dutch (Boot et al., 2017). Together with the 15 topics, we refer to the LIWC sub-categories as "features extracted from text" in the remainder of this paper. 2.2.3. Training the algorithm Support Vector Machine (SVM) was chosen to analyze the data, since it can handle (the combination of) structured and unstructured data, and is known to generally perform well in text classification problems. First, the data were randomly split into training (75%; N = 5295)) and test sets (25%; N = 1765). The training set was used to train the algorithm by providing it with both the input (i.e., student characteristics, text, and text features) and the output (i.e., whether a student dropped out or not). K-fold cross validation with k = 5 was used to evaluate the performance of the training model (Refaeilzadeh et al., 2009). Cross validation is a resampling process in which the training data is split in k-different portions to test and train a model on different iterations. If the different iterations return different levels of model accuracy, this can indicate potential problems regarding overfitting or selection bias. The full set of training data can then be inspected further to ensure better performance of the algorithm when eventually applied to the test data. For none of the estimated models in our study cross validation indicated a need for further inspection of the training data. 2.2.4. Analysis We analyzed our data using six separate SVM models, exploring the most accurate combination of features to predict dropout. First, we started with a model using only the structured data, our set of 9 student characteristics, as input for the model. This provided us with a baseline of how well we would be able to predict dropout if we would not have any text data. Second, we estimated a model with only the text using TFiDF as input to compare the algorithms' performance to that of the first model. Third, we added the features that we extracted from the text through LDA topic modeling and the LIWC dictionary to the text-only model to assess the added value of more advanced text mining techniques on top of the simple bag-of-words representation. Lastly, to answer the question whether information extracted from text can add to the prediction of dropout net of structured data, we examined the performance of the SVM with different combined feature sets. Table 2 provides an overview of the input per model. Table 2. Overview of the feature sets across the different models. Features Model 1 Model 2 Model 3 Model 4 Model 5 Model 6 Structured data Prior education X X X X High school performance X X X X Ability beliefs X X X X Interests X X X X Gender X X X X Age X X X X Cohort X X X X Discipline X X X X Study program X X X X Previously enrolled X X X X Multiple requests X X X X Unstructured data Text (TFiDF) X X X X Latent Dirichlet Allocation X X X LIWC dictionary X X X Student retention data are generally imbalanced, since the number of dropouts is much smaller than the number of students that continue the study program. This imbalance can be problematic, as the standard classification algorithms have a bias towards the majority class, giving misleadingly promising results (Dalipi et al., 2018). To solve the issue of imbalanced data, we applied a balanced class weight with all our classification algorithms (see GitHub for code). This balanced weight can be considered an 10 oversampling technique which essentially replicates the smaller class until it reaches the sample size of the larger one. Performance of Support Vector Machines is generally assessed by accuracy, precision, recall, and f1scores. Accuracy is an unreliable measure for imbalanced data and therefore we do not use it. Moreover, because of the imbalance, weighted output for precision, recall, and f1-score are reported to assess the performance of the algorithm. Precision denotes the true positives divided by the true positives + false positives. Recall is defined as the true positives divided by the true positives + false negatives. The f1-score is the weighted average of precision and recall. To get a better sense of these performance measures for our specific context, Figure 1 shows precision and recall for the dropout class. Figure 1. Visualization of the model performance criteria precision and recall, applied to the dropout class. 11 3. Results The total dataset was split into a training set (75%) and a test set (25%). Results presented in this section are based on the 1765 motivation statements forming the test data. Of these 1765 statements, 1312 belonged to the retention class and 453 to the dropout class. Table 3 provides a summary of the results. Given our dual aim of exploring whether students at risk of dropout can be identified through motivation statements and whether information extracted from these motivation statements adds predictive power net of student characteristics, we are interested in model performances in absolute terms, but not in whether certain models statistically outperform others. Table 3. Performance measures of the estimated models, for the total test set (T) and split by the retention (R) and dropout (D) class. Precision Recall f1-score Model T R D T R D T R D 1 .71 .84 .31 .57 .55 .65 .60 .66 .42 2 .67 .79 .28 .63 .71 .37 .65 .75 .32 3 .67 .79 .29 .64 .72 .37 .65 .75 .32 4 .69 .81 .32 .65 .71 .44 .67 .76 .37 5 .70 .82 .31 .60 .61 .56 .63 .70 .40 6 .68 .80 .31 .64 .71 .42 .66 .75 .35 3.1. Model 1: Student characteristics First, a SVM model with only student characteristics was run as a baseline model. This model contains variables that are known to be predictive of student dropout, including high school performance. The weighted precision score of Model 1 was .71, the weighted recall score .57. This resulted in a weighted f1-score of .60. The prediction of precision for the majority class (retention) was much better than the performance for the minority class (dropout). With a score of .65, recall was higher for the dropout class than for the retention class (.55). This is notable, given the fact that algorithms generally perform better for the majority class, even after correcting for imbalanced data. 12 3.2. Model 2: Only text To identify what the analysis of text data through NLP-based techniques can add to the prediction of dropout, first a model with only text was analyzed, using TFiDF to identify the importance of words in the corpus (Model 2). The model had a weighted precision score of .67, a weighted recall score of .63, and a weighted f1-score of .65. When comparing the performance of the algorithm for this model several things stand out. First, precision of this model is worse than in Model 1, meaning that in the model with the student characteristics, of all the selected students there were proportionally more true positives. In Model 2 recall is better than in Model 1, meaning that of all the relevant students there were proportionally more true positives in the model with only text. Second, recall scores for this model are less balanced across the classes than in Model 1. Table 3 shows a much higher recall score for the retention class (.71) than for the dropout class (.37). 3.3. Model 3: text and text features Model 3 investigates the predictive power of all information we extracted from text. To that end, we added features extracted from text to the text data (LIWC dictionary words and topics extracted through LDA topic modelling), to investigate whether this combination could outperform Model 1. The LIWC features already consist of predetermined categories, using psychometrically validated dictionaries. For the LDA topic modelling the features first must be identified before they can be used in the analyses. For the 15 topics that were identified in the motivation statements, the top 10 terms of each of these topics are listed in Table 4. Upon consensus among the authors, the 15 topics were given a theoretical label if possible. Some of the topics did not construe one clear underlying concept and were therefore left unlabeled. 13 Table 4. Top ten most used terms per topic, resulting from the LDA topic modelling. Topic Label Top words 1 program interest study, Utrecht, program, very, finds, fun, seems, good, very, rather 2 general interest university, highly, knowledge, Utrecht, study, interest, offers, like, choice, developing 3 previously enrolled year, study, go, wanted, came, found, rather, choice, studying, knew 4 applied sciences program, university of applied sciences, Utrecht, scientific education, university, less, aspects, academic, difference, choose 5 culture minded subjects, different, language, interests, culture, year, broad, cultures, choosing, liberal arts 6 sense of belonging day, open, trial studying days, visited, found, open days, during, ambiance, spoke, immediately 7 societal minded social, spatial planning, study, sciences, general, geography, studies, different, expect, hope 8 pedagogical minded children, study, pedagogical sciences, chosen, helping, doubts, good, characteristics, later, finished 9 computer minded programming, games, computers, artificial intelligence, game technology, suitable, technology, game, computer, logic 10 artistic students media, art, theater, culture, film, television, films, chose, theater film, Amsterdam 11 - music, nature, hopefully, astronomy, most important, therein, teaching school, ever, madam, dear sir 12 - person, maximum, character, getting acquainted, function, fascinated, legal system, honest, nature (kind), wonderful 13 location location, widening, exist, per, passed, stadium, analysis, classes, acquaintances, about 14 politics minded political, strike, stone (figure of speech), technological, horizon, sustainable, advance, curriculum 15 - help, automatically, job opportunity, sociological, public, mono disciplinary, suits Figure 2 shows the different topics ordered by strength for predicting dropout (positive values) and retention (negative values). The topics 13 (location), 7 (societal minded), and 11 (unlabeled) were the strongest positive predictors of dropout. Students using a lot of words related to these aspects, are more likely to drop out. The strongest negative predictors of dropout were the topics 2 (program 14 interest), 12 (unlabeled), and 1 (general interest). Students using many of such words, are less likely to drop out. Figure 2. Topics extracted through LDA topic modelling, ordered by strength (x-axis coefficient) for predicting dropout (positive values) and retention (negative values). Subsequently, all 15 topics were fed to the SVM, together with text data of Model 2 and features from the LIWC package for text analysis. The results in Table 3 show that Model 3 is almost identical to Model 2. In other words, it appears that, in this study, the features extracted through LDA topic modelling and the LIWC package do not add to the prediction of dropout in comparison to text alone. 3.4. Model 4: Student characteristics and text To identify whether written motivation adds to the prediction of first-year dropout net of student characteristics, Model 1 was combined with different elements of text. In Model 4 the input of Model 1 (student characteristics) and Model 2 (text only) was combined. The weighted recall (.65) and weighted f1-score (.67) in this model are the highest of all the estimated models. The weighted precision score (.69) of this Model holds a middle position regarding algorithm performance between Model 2 and 3 on the one hand, and Model 1 on the other hand. 15 3.5. Model 5: Student characteristics and text features In this fifth model, student characteristics were combined with features extracted from the text, rather than the text itself. Even though the features extracted from text did not add to the predictive power of dropout net of text alone (Model 3), the combination of student characteristics and features extracted from text might improve the prediction of dropout. With a weighted precision score of .70, a weighted recall score of .60, and a weighted f1-score of .63, this algorithm performed worse than in Model 4. Model 5 was also used to inspect the importance of all features together that were used as input for the SVM (i.e., student characteristics, LIWC words, and LDA topics). We performed this step on this model, rather than Model 6 (see below), because the vast number of features of the TFiDF method (i.e., all the individual words in the texts) does not allow it to be captured in a figure. The strength of the 25 most important features for predicting dropout (positive values) and retention (negative values) is shown in Figure 3. Some of the most important features are discussed. The strongest positive effect is word count (WC), indicating that the more words students used the higher their probability of dropout. Second, the use of personal pronouns (ppron) is a similarly strong predictor of dropout in this model. The more personal pronouns a student uses in their text, the higher the probability of dropout. Frequent use of the first person singular (i), however, is negatively related to dropout. Looking at Figure 3, it indeed seems to matter which pronouns are being used. For example, the more the second person (you) is used, the lower the probability of dropout, whereas the relatively frequent use of impersonal pronouns (ipron) is associated with a higher probability of dropout. It is noteworthy that, in this model, high school performance was again the strongest negative predictor of dropout. Lastly, relatively important features in this list are age and article. Age was a relatively weak predictor in the model with only student characteristics. In this model, however, it is the third most important predictor of dropout, with older students having a higher probability to drop out. The relatively frequent use of articles (article), on the other hand, is associated with a lower probability to drop out. Among the top 25 most important features for the prediction of dropout there are several other features that were obtained through the LIWC dictionary. Interestingly though, none of the topics from the LDA topic modelling is amongst the 25 most important predictors of dropout. 16 Figure 3. 25 most predictive features3 of Model 5, ordered by strength (x-axis coefficient) for predicting dropout (positive values) and retention (negative values). 3.6. Model 6: Student characteristics, text, and text features Lastly, a model was estimated that included the student characteristics, as well as the text itself and the features extracted from the text. The algorithm performance of this model was almost the same as the performance for Model 4. The weighted precision was .68, weighted recall .64, and the weighted f1-score .66. When comparing this model to Model 5, it strengthens the conclusion that features extracted from text are, in our dataset, of limited additional value in the prediction of dropout. 3 Meaning of the features from top to down: word count (WC); personal pronouns (ppron); age; dictionary words (Dic); impersonal pronous (ipron); conjunctions (conj); prepositions (preps); present tense (present); common verbs (verb); total function words (funct); future tense (future); auxiliary verbs (auxverb); words with more than 6 letters (sixltr); first person plural (we); past tense (past); third person singular (shehe); words per sentence (WPS); adverbs (adverb); total pronouns (pronoun); negations (negate); third person plural (they); articles (article); second person (you); first person singular (i); HSGPA (high school performance). 17 3.7. Comparing most frequently used terms of correctly and incorrectly classified dropouts To get a better understanding of the algorithm performance, top terms used by students who were correctly identified as dropouts, were compared to top terms used by students who were incorrectly classified as dropouts. A high overlap in the commonly used terms, would indicate that there is not enough discrepancy in the written motivation between the two groups for the SVM to detect any differences. When we inspected the 100 most used terms for both groups, overlap was indeed identified. Roughly a quarter of the top terms was used by students in both categories. Most of these words were names of study programs (e.g., Biology, Law, Sociology), or derivatives thereof (e.g., game(s) for Game Technology or art for Art History). The other overlapping words are generic, such as program or name, or apply to a specific field (i.e., people and children for behavioral sciences and music and film for arts programs). Given that most of the overlapping words refer to names of study programs or derivatives thereof, the prediction of dropout may improve if these words can be excluded from the text input. Because of the too small sample size per study program in our data we were not able to do this. 4. Discussion In this study we attempted to answer the question whether students at risk of dropout can be identified based on their motivation for the study program of their initial choice as written in their intake questionnaire prior to enrollment by using NLP-based techniques. Moreover, we asked the question whether information extracted from these motivation statements adds predictive power net of student characteristics. The results showed that the answer to this question is twofold. When text was used in addition to student characteristics, it hardly added to the prediction of dropout. However, when only text data were used, the algorithm performed very similar to the one in the model with only the student characteristics. Predicting dropout accurately is not easy, especially not based on student information that is available prior to enrolment in higher education. Since the model with only text showed very similar results to the model with only student characteristics, it appears that student dropout can be predicted with a short motivation statement analyzed with data mining techniques at least as good as with a set of known predictors like high school performance. Once it becomes known exactly what aspects of motivation texts are predictive of dropout, these features might be easy to manipulate. Therefore, future research should focus on the reliability of motivation texts in comparison to student characteristics. 18 Moreover, structured and unstructured data seem to complement each other, as precision was higher in the model with student characteristics (Model 1) and recall was higher in the model with only text (Model 2). Therefore, analyzing text data with text mining techniques seems promising. Our aim was to exhume hidden information from text data and investigate whether this information could be used to predict students at risk of dropout. Unstructured data, like text, are very time consuming and complex to analyze for humans. However, if highly predictive text mining algorithms can be developed to analyze these data, that could potentially be useful to identify students at risk of dropout before the start of the study program without needing an extensive set of student characteristics. Such students could then immediately be offered support to mitigate the risk. The fact that combining the text and student characteristics, like high school performance, does not (substantially) improve the model prediction in this study, might indicate that they measure the same underlying concepts. It is possible that the way the question about motivation for the study program was asked or explained, probes students to put into words the information they already filled out earlier in the questionnaire by answering the other survey questions. Future research could try to verify this hypothesis by studying motivation statements using less directive questions. Another possible way might be to ask more open-ended questions about specific components of program choice (e.g., why this study program; why this university; what in this study program makes it attractive; etc.) to obtain more unstructured (i.e., text) data covering a variety of underlying concepts to analyze and relate to academic outcomes, using machine learning techniques. It was beyond the scope of this paper to compare the performance of the SVM to other machine learning techniques, like K-nearest neighbors or naïve Bayes (e.g., Aggarwal et al., 2012, for comparing the performance of different techniques), but such an approach could be considered in the future. A limitation of this study lies in the properties of our dataset. First, the dataset is imbalanced because there are proportionally few dropouts. This is generally the case with student retention data, and therefore, cannot be solved. Oversampling and undersampling techniques were used and weighted scores were reported to deal with this limitation. Second, the motivation statements are generally short (i.e., students were requested to write 10-25 lines, resulting in texts that are roughly 250 words long) and the dataset consists of applicants to all non-selective bachelor programs. Both the length of the texts and the heterogeneous student population may have an influence on the ability of the algorithm to construct an accurate prediction model. Algorithms learn by providing them with more and more data. Despite our relatively large number of motivation statements, the relatively short and pluriform texts that were used could have affected the performance of the algorithm for the text models. Future research may investigate whether a more uniform group of students (e.g., of one faculty or one study program) would result in a better performance of the text mining approach. 19 Another direction for future research is to apply deep learning-based NLP methods with the use of transfer learning (i.e., improving learning in a new task through the transfer of knowledge acquired in a previous task) on a bigger dataset. This could improve representation of text data using the distributional hypothesis, which poses that the more semantically similar two words are, the more distributionally similar they will be, and thus the more they tend to occur in similar linguistic contexts (Sahlgren, 2008). For example, the algorithm can use the fact that the words city and location are distributionally more like one another than they are to the word scientific in a multidimensional context, to predict that city and location are also semantically more like one another than to the word scientific. However, these techniques require more data. Nevertheless, this direction is worth researching as it could help in capturing the distinctive writing style of a student. This information could, in turn, contribute to the early identification of dropouts. When developing prediction models with the aim to use them for the early identification of dropouts, one should especially focus on improving the precision scores for the dropout class. If text mining methods were to become reliable enough to be used in advising students on their program choice, we think it would be better to incorrectly classify a student as successful, than to incorrectly classify a student as a future dropout. Some students who would then be incorrectly advised to start the study program might actually be successful if the positive advice has an influence on their feelings of selfefficacy and/or motivation. Regardless of whether an advice can have such an effect, we think it is better to have a predictive instrument that returns very few false positives (students unjustly classified as dropout) than an instrument that returns very few false negatives (students who were unjustly classified as successful). Therefore, if choices must be made in developing these models, prioritizing a high precision score for the dropout category is most favorable when one wants to give every student a chance. There is a famous quote by the economist Ronald Coase stating: "if you torture the data long enough, it will confess to anything". Coase meant this as a warning to fellow researchers, not to engage in scientific misconduct (e.g., p-hacking). Although not for the purposes of making it confess to anything specific, torturing the data is exactly what we did in this study. We approached the motivation data from different angles, to predict first-year dropout of students applying for a certain undergraduate program. By comparing and combining different methods, we found that applying machine learning techniques on student motivation is a potentially promising new way of approaching student dropout. We believe it is worthwhile to explore this line of research further to find better ways to extract information from motivation statements. When doing so, focus could also be placed on comparing human-coded and algorithm-coded motivation statements to get a better sense of how accurate these methods are in predicting dropout and which of them is better. 20 References Aggarwal, C. C., & Zhai, C. (2012). A survey of text classification algorithms. In C. C., Aggarwal & C. Zhai (Eds.), Mining text data (pp. 163-222). Springer. https://doi.org/ 10.1007/978-1-4614 Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022. Boot, P., Zijlstra, H., Geenen, R. (2017). The Dutch translation of the Linguistic Inquiry and Word Count (LIWC) 2007 dictionary. Dutch Journal of Applied Linguistics, 6 (1), p. 65-76. Bridgeman, B. (2013). Human Ratings and Automated Essay Evaluation. In M. D. Shermis and J. C. Burstein (Eds.), Handbook of Automated Essay Evaluation: Current Applications and New Directions, (pp. 221-232). Routledge. Crossley, S., Paquette, L., Dascalu, M., McNamara, D. S., & Baker, R. S. (2016, April). Combining clickstream data with NLP tools to better understand MOOC completion. In Proceedings of the sixth international conference on learning analytics & knowledge (pp. 6-14). Dalipi, F., Imran, A. S., & Kastrati, Z. (2018, April). MOOC dropout prediction using machine learning techniques: Review and research challenges. In 2018 IEEE Global Engineering Education Conference (EDUCON) (pp. 1007-1014). IEEE. Foster, C. & Francis, P. (2020). A systematic review on the deployment and effectiveness of data analytics in higher education to improve student outcomes, Assessment & Evaluation in Higher Education, 45(6), 822-841. https://doi.org/10.1080/02602938.2019.1696945 Hällsten, M. (2017). Is Education a Risky Investment? The Scarring Effect of University Dropout in Sweden. European Sociological Review, 33(2), 169-181. https://doi.org/1 0.1093/esr/jcw053 Jayaraman, J. (2020). Predicting Student Dropout by Mining Advisor Notes. In Proceedings of The 13th International Conference on Educational Data Mining (EDM 2020) (pp. 629-632). Jongbloed, B. W. A., de Boer, H. F., Kaiser, F., & Vossensteyn, H. J. (2018). Bekostiging van het Nederlandse hoger onderwijs: kostendeterminanten en varianten. Kira Talent (2018). Breaking down bias in admissions: The how-to guide to preventing admissions bias at your school. http://start.kiratalent.com/breaking-down-admissions-bias/ Accessed 21 July 2021. 21 Kurysheva, A., van Rijen, H. V., & Dilaver, G. (2019). How do admission committees select? Do applicants know how they select? Selection criteria and transparency at a Dutch University. Tertiary Education and Management, 25(4), 367-388. Moskal, A. C. M., S. J. Stein, and C. Golding. 2016. "Can You Increase Teacher Engagement with Evaluation Simply by Improving the Evaluation System?" Assessment & Evaluation in Higher Education 41(2): 286-300. Niessen, A. S. M., Meijer, R. R., & Tendeiro, J. N. (2017). Measuring non-cognitive predictors in highstakes contexts: The effect of self-presentation on self-report instruments used in admission to higher education. Personality and Individual Differences, 106, 183-189. https://doi.org/10.1016/j.paid.2016.11.014. Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., & Booth, R. (2007). The Development and Psychometric Properties of LIWC2007 (pp. 1-22). Pennebaker Conglomerates. Posselt, J. R. (2016). Inside graduate admissions: Merit, diversity, and faculty gatekeeping. Cambridge and London: Harvard University Press. Psacharopoulos, G. (1994). Returns to Investment in Education: A Global Update. World Development, 22(9), 1325-1343. Refaeilzadeh, P., Tang, L., & Liu, H. (2009). Cross-validation. Encyclopedia of Database Systems, 5, 532-538. Sahlgren, M. (2008). The distributional hypothesis. Italian Journal of Disability Studies, 20, 33-53. Stone, C., Quirk, A., Gardener, M., Hutt, S., Duckworth, A. L., & D'Mello, S. K. (2019, March). Language as thought: Using natural language processing to model noncognitive traits that predict college success. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 320-329). Wen, M., Yang, D., & Rose, C. (2014, July). Sentiment Analysis in MOOC Discussion Forums: What does it tell us?. In Educational data mining 2014. Zhang, Y., Oussena, S., Clark, T., & Kim, H. (2010, June). Use Data Mining to Improve Student Retention in Higher Education-A Case Study. In ICEIS (1) (pp. 190-197). Zupanc, K. (2018). Semantics-based automated essay evaluation. Doctoral thesis, .
|
2509.16225
|
INCREASING INTER-FIBER CONTACT IN THE
ALTENDORF-JEULIN MODEL
Alex Keilmann∗
Department of Mathematics
RPTU Kaiserslautern-Landau
ORCID: 0009-0004-9793-3065
Claudia Redenbach
Department of Mathematics
RPTU Kaiserslautern-Landau
ORCID: 0000-0002-8030-069X
François Willot
Center of Mathematical Morphology (CMM)
Mines Paris Tech
ORCID: 0000-0003-1544-6550
ABSTRACT
In fields such as material design or biomedicine, fiber materials play an important role. Fiber
simulations, also called digital twins, provide a basis for testing and optimizing the material’s physical
behavior digitally. Inter-fiber contacts can influence the thermal and mechanical behavior of a fiber
system; to our knowledge, however, there exist no parametric fiber models allowing for explicit
modeling of the number of inter-fiber contacts. Therefore, this paper proposes an extension of the
iterative force-biased fiber packing by Altendorf & Jeulin. In this extension, we model the inter-fiber
contacts explicitly and add another force to the force-biased packing to increase the number of
contacts. We successfully validate the packing with respect to its parameter accuracy. Moreover, we
show that the extension indeed increases the number of contacts, even exceeding theoretical values.
Hence, this packing scheme has the potential to achieve higher accuracy in physical simulations.
1
Introduction
Fiber materials are of great interest in areas such as material design or biomedicine due to their advantageous properties,
which depend on their microstructure. Therefore, fiber simulations are used to test and optimize material behavior, also
known as creating digital twins. For example, Andrä et al. [1] modeled fiber insulation mats based on CT images and
systematically varied parameters to study their impact on thermal conductivity. Such an approach is more sustainable
than producing a variety of materials and testing their properties, as is traditionally done.
However, for digital twins to be successful, they need to model the relevant characteristics of the fiber system to ensure
that they are representative of the physical behavior. On the level of a single fiber, this includes the fiber size, shape,
curvature, and direction distribution. On the scale of the fiber system, the interaction between fibers can be of great
interest: Whereas unrealistic fiber intersections are negligible in some applications [2], they are prohibitive in others.
Fiber models allowing for such intersections are called softcore models, whereas models prohibiting them are called
hardcore models.
Algorithms for simulating hardcore fiber systems can generally be categorized into two distinct approaches, the
sequential approach and the collective rearrangement approach. In sequential approaches, fibers are added sequentially
to the fiber system such that they do not intersect. In collective rearrangement, the fibers are generated simultaneously
and may intersect in the initial configuration. In a subsequent step, their intersections are removed.
One of the first models to achieve a hardcore system is the random sequential adsorption (RSA) model. Here, elements
such as straight cylinders are generated independently following a given distribution on length, radius, and/or orientation.
These elements are placed iteratively into the existing system if they do not cause any intersections. Otherwise, they
∗Corresponding author: keilmann@rptu.de
arXiv:2509.16225v1 [cs.CE] 12 Sep 2025
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
are resampled. However, these models allow for rather low volume fractions [3, 4]. There are several variations and
extensions of the original RSA approach, such as allowing fibers to "grow" until they intersect other fibers [5]. This
allows for higher volume fractions up to 22.5 % at the cost of less control over the length distribution.
Sedimentation models [6–9] usually describe fiber systems that are obtained by fibers subsequently falling on top of
each other. They can achieve higher volume fractions up to 80 %. However, their fiber direction distributions are
roughly limited to the plane.
When collective rearrangement approaches remove intersections, they calculate the direction and amount of object
displacement based on forces, which in turn are modeled on the basis of physics laws. In an early collective rearrange-
ment approach, a fiber system is generated and, afterwards, fiber intersections are removed depending on the solution
of a differential equation. It allows simulation of packings of spherocylinders [10] or ball chains [11]. Schneider and
colleagues[12–17] transformed this approach into an optimization problem, including curved fibers. In addition, they
found that the algorithm performs better when fibers are added to the system sequentially. Venkateshan et al. [8] and
Klar et al. [18, 19] propose sedimentation models with collective rearrangement using differential equations.
As another subcategory of collective rearrangement approaches, force-biased approaches are inspired by molecular
dynamics. Salnikov et al. [20] propose such a model for cylinders and spheres; Bezrukov & Stoyan [21] introduce a
force-biased model for ellipsoids. Altendorf & Jeulin [22] extend this approach to more complex fiber shapes: They
consider fibers as ball chains that are generated by a random walk. To remove intersections, repulsion forces act on
overlapping balls. To maintain the fibrous structure, recovery forces act between neighboring balls of a chain. Moreover,
the recovery forces preserve a prescribed fiber curvature. This approach was extended by Easwaran [23] for infinite
fibers and fiber bundles; Chapelle et al. [24] improve the model’s runtime by using spherocylinders instead of balls.
Note that the molecular dynamics approach is also not limited to collective rearrangement approaches - Kumar et al. [25]
develop a fiber model based on the movement paths of charged particles, which are introduced sequentially.
In various applications, not only the (lack of) fiber intersections but also the contact between fibers is of interest. For
example, inter-fiber contact areas influence the thermal conductivity of fiber systems [26, 1]. Friction, adhesion, and
wear along contact points [27] are an essential feature of entangled mechanical networks [28] such as textiles [29]. They
must be taken into account to model the significant strain rates and hysteresis phenomena observed in many industrial
compaction processes [30]. Micromechanical models, pioneered by Wyk [31], have established non-trivial scaling laws
for the effective mechanical response as a function of the fiber density and for the number of contact points with respect
to the fiber volume fraction [32].
A common approach [33, 34, 26] to model contact between fibers is the explicit transformation of fiber intersections
into contact areas, thus dropping the hardcore condition. Implicitly, Altendorf & Jeulin’s force-biased packing [22]
achieves the same. However, it has not yet been thoroughly researched how to model inter-fiber contact parametrically.
For cellular fiber networks, i.e., fibers that are connected at their endpoints, inter-fiber contact is modeled more explicitly.
Deogekar et al. [35] model them as stochastic Voronoi networks with a connectivity parameter, which is constant for
each network. Huisman et al. [36] randomly change the topology using a Monte Carlo minimization scheme. In their
earlier work [37], they employ a force-biased approach in which fiber endpoints within a certain distance attract each
other. The network topology, especially the number of contact points, is controlled via the force strength.
In the present paper, we propose a parametric hardcore fiber model where the number of inter-fiber contacts can be
increased explicitly. We extend the force-biased approach by Altendorf & Jeulin [22] by another force to increase
the number of contacts analogous to Huisman et al. [37]. The paper is structured as follows. Section 2.1 reviews an
analytical quantification of contact in softcore models; Section 2.2 recapitulates the Altendorf-Jeulin model [22]. In
Section 3, we introduce our model extension, starting with algorithmic notions of contact and closeness. Runtime,
feature accuracy, and the maximum contact densities achievable are evaluated in Section 4. In Section 5, we replicate
models for a real-world data set, namely a wood fiber insulation mat considered by Andrä et. al. [1], and we close with
a discussion and conclusion in Section 6.
2
State of the Art
2.1
Analytical Description for Fiber Contact
Consider a stationary fiber system in a compact window W ⊂R3 with intensity (mean number of fibers per unit
volume) λF . We assume that the fibre orientations follow a distribution with density ψ. We assume that the fibers have
constant radius r and variable length ℓand denote by ¯ℓthe expected value of the fiber length distribution. We denote by
λc the mean number of inter-fiber contact points per unit volume, i.e., the intensity of inter-fiber contacts. The expected
number of contacts per fiber is λcF = λc/λF .
2
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
Toll [38] provides the following estimate for the intersection intensity:
λcF
≈
4λF r¯ℓ
¯ℓfψ + πr(gψ + 1)
,
λc ≈4λ2
F r¯ℓ
¯ℓfψ + πr(gψ + 1)
,
(1a)
fψ
=
Z
S2
Z
S2 |p × p′|ψ(p′)ψ(p) dp′ dp,
gψ =
Z
S2
Z
S2 |p · p′|ψ(p′)ψ(p) dp′ dp,
(1b)
where × and · denote the vector and scalar products in R3. We refer to [39] for formulas obtained in special cases
corresponding to an isotropic or transversely-isotropic fiber-distribution and to [40] for strongly elongated fibers. In
effect, the approximations in (1a) for λcF and λc can be rigorously reinterpreted as the number of intersections per
unit volume and fiber, respectively, in a Poisson-Boolean random set of cylinders with mean length ¯ℓand orientation
distribution ψ (see Appendix D). In the following, the estimates (1a) will be compared with numerical simulations.
2.2
The Altendorf-Jeulin Model
The fiber model by Altendorf & Jeulin [22] specifies a force-biased packing of fibers that are modeled as ball chains.
Its starting configuration of fibers is obtained by simulating the fibers independently by a random walk in a closed
cubic window W = [0, w1] × [0, w2] × [0, w3] with periodic boundary conditions. To preserve the periodic boundary
conditions, we use the periodic equivalent dP of the Euclidean distance. For x, x′ ∈R3 and i ∈{1, 2, 3}, let
∆i(x, x′) =
x′
i −xi −wi,
for x′
i −xi ≥wi
2
x′
i −xi + wi,
for x′
i −xi ≤−wi
2
x′
i −xi,
otherwise
(2)
Then, dP (x, x′) = ∥∆(x, x′)∥2 is the periodic distance function and
vP (x, x′) =
∆(x, x′)
∥∆(x, x′)∥2
(3)
the corresponding direction of the vector between two points.
After the random walk, the fibers may intersect but follow given distributions regarding model features such as fiber
size and direction. The force-biased packing removes the intersections, while keeping the model features approximately
intact. Note that this packing scheme can also be applied to fiber systems that were generated differently, e.g., with a
different random walk [41]. In the following, we will recap the main features of the algorithm.
2.2.1
Formalizing the Fiber System
In the initial fiber system, a single fiber F = {b1, ..., bl} is modeled as a chain of l overlapping balls bi = (xi, µi, r), i =
1, ..., l, of radius r and center xi ∈R3. In addition, we record the current direction µi ∈S2 of the random walk. More
precisely, µi = xi −xi−1 is the direction between the last ball and the current ball for i > 1; for the start of the random
walk in i = 1, µi indicates the main direction of the whole fiber. In this paper, the radius r of the balls and the chain
length l are constant for the sake of simplicity, but they can be randomized (see also [22]). Note that the fiber length ℓ
will typically differ from the chain length l.
The point x1 ∈R3 denotes the starting point of a fiber and can be sampled from a uniform distribution on W. The
fiber’s main direction µ1 ∈S2 is sampled from a Schladitz distribution [42] with anisotropy parameter β > 0. Its
density w.r.t. polar coordinates (θ, ϕ) is given by
pS(θ, ϕ|β) = 1
4π
β sin θ
(1 + (β2 −1) cos2 θ)
3
2 ,
θ ∈[0, π), ϕ ∈[0, 2π).
(4)
For β close to 0, this distribution generates directions that are mostly aligned with the z-direction; β = 1 corresponds to
the uniform distribution on the unit sphere; β ≫1 results in girdle distributions with directions that are concentrated
around the equator.
The fiber bending is modeled based on a multivariate von Mises-Fisher (mvMF) distribution with parameters κ1, κ2 > 0.
A univariate von Mises-Fisher distribution [43] has the density
pM(x|µ, κ) =
κ
4π sinh κ exp
κµT x
,
x ∈S2
(5)
with anisotropy parameter κ ≥0 and direction µ ∈S2. For higher κ, the fibers are more concentrated around the
direction µ. The multivariate version mvMF(µ′, κ1, µ′′, κ2) of the von Mises-Fisher distribution uses κ = |κ1µ′+κ2µ′′|
and µ = κ1µ′+κ2µ′′
κ
.
3
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
Having defined the mvMF distribution, we can return to the definition of the random walk: The direction µi+1 is
generated following the mvMF(µ1, κ1, µi, κ2)-distribution. Hence, κ1 should indicate the reliability of the random
walk to the main fiber direction µ1 and κ2 the reliability to the last direction µi. This defines the new position
xi+1 = xi + r
2µi+1. At the end of the random walk, the fiber is rotated such that its main fiber direction ¯µ :=
xl−x1
∥xl−x1∥2
equals µ1 [22]. In the implementation, the fibers are prepacked using a modified version of RSA [3, 44]: Iteratively,
the starting point of each fiber is randomly assigned for a given number of trials. The fiber is eventually placed at the
position of minimal overlap compared to the other trials. Other than RSA, this version does not resample fibers; instead,
it uses the fibers generated previously. In addition, overlap is not avoided completely, just reduced. This later improves
runtime, as fewer intersections must be removed during force-biased packing.
The total fiber system is specified by a graph S = (BS, FS), where the node set
BS = {b1,1, b2,1, ..., bl,1, b1,2, ..., bl,n}
(6)
contains the balls of the n fibers, and the undirected edge set
FS = {{bi,j, bi+1,j}|i ∈{1, ..., l −1}, j ∈1, ..., n}
(7)
contains the connections between the balls as specified by the random walk. As a consequence, the connected
components of S are equivalent to the fibers. We indicate that balls b, c ∈BS belong to the same fiber by writing
b ∼F c.
2.2.2
Force-biased Packing
To remove the initial fiber intersections, the force-biased packing by Altendorf & Jeulin [22] applies repulsion forces to
overlapping balls, see Fig. 1a. Afterwards, recovery forces restore each fiber’s connectedness and features, see Fig. 1b.
This process is repeated iteratively until the forces are small enough.
(a) Sketch of repulsion forces acting on intersecting fibers.
(b) Sketch of recovery forces acting on corrupted fibers.
Figure 1: Sketches of forces in the Altendorf-Jeulin model.
The repulsion force acts on the intersecting balls and moves them apart. An exception pose balls that are "close" within
the same fiber as they are designed to overlap: Due to the distance of r
2 between subsequent positions in the ball
chain, balls inevitably intersect if they are connected by a path consisting of ≤4 edges. To allow some room for fiber
curvature, Altendorf & Jeulin suggested excluding ball pairs from the repulsion force that are connected by a path of
length ≤5. For brevity, b ∼5 c implies that there exists a path of length ≤5 between b and c, whereas b ≁5 c denotes
that such a path does not exist.
In the hardcore case, the strength of the repulsion force Fr depends on the amount of intersection of the balls via
I(b, c) = max (0, 2r −dP (xb, xc)) .
(8)
Let the softcore ratio τ ∈[0, 1] describe the amount of intersection allowed, so two ball centers must have at least
the distance 2(1 −τ)r. Thus, τ = 0 models the hardcore case, while tau = 1 allows for arbitrary overlap. Then, I
generalizes to
Iτ(b, c) = max (0, 2(1 −τ)r −dP (xb, xc)) .
(9)
Its direction is v(b, c) = vP (xb, xc), yielding
Fr(b, c) = 1b≁5c
Iτ(b, c)
2
v(b, c).
(10)
4
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
There are two recovery forces, which only act between neighboring balls within a single fiber: The spring force Fs
keeps the ball spacing within the chains intact, whereas the angle force Fa preserves the fibers’ bending. The spring
force’s strength between balls b, c ∈BS with {b, c} ∈FS depends on their distance, or, more precisely, on the deviation
from their initial distance r
2:
αd(b, c) = r
2 −|xb −xc|.
(11)
Due to the fibers’ chainlike structure, the spring force easily causes ripple effects. In order to reduce them, it further
incorporates a smoothing factor that depends on the function αd, namely
fαs,αe(α) :=
0,
for α < αs
1
2 −1
2 cos
|α|−αs
αe−αs π
,
for αs ≤α ≤αe
1,
for αe < α
(12)
for αsαe, see Fig. 2. For the special case of αs = αe = 0, we use f0,0(α) := 1R+(α).
0.00
0.25
0.50
0.75
1.00
0
αs
αe
Relative Distance
Smoothing Factor
Figure 2: Plot of the smoothing factor for 0 < αs < αe.
The spring force is then
Fs(b, c) = 1FS(b, c)fαs,αe
2|αd(b, c)|
r
αd(b, c)v(b, c).
(13)
Choose 0 < αs < αe < 1 as the argument of fαs,αe is a relative distance here Due to the smoothing factor, the force is
set to 0 if αd(b, c) < rαs
2 , which keeps the ripple effects of the force on the system small.
The angle force Fa(b) is dependent on the required displacement of balls to preserve the fibers’ local curvature.
Consequently, it depends on the balls’ direct neighbors, just like the spring force. As its definition is rather technical,
we refer the reader to the original article [22].
Taken all together, the total force acting on a single ball b ∈BS is
Ftotal(b) =
X
c ∈BS
c̸=b
Fr(b, c) + ρ
Fa(b) +
X
{b,c}∈FS
Fs(b, c)
,
(14)
where ρ ∈[0, 1] modulates the composition of repulsion and recovery forces. All parameter choices in this paper are
described and justified in Section 3.4.
3
Extension to a Contact Packing Scheme
To incorporate contact in the fiber packing, we extend Altendorf & Jeulin’s [22] fiber packing by an additional force,
the contact force Fc. This force will be applied to balls that are close enough to establish contact. Before properly
defining Fc, we need to specify what it means that two fibers are in contact.
5
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
3.1
Algorithmic Notions of Contact and Closeness
Let the balls b, c ∈BS belong to different fibers, i.e., b ≁F c, then
• b and c intersect if |xb −xc| < 2r.
• b and c are in (perfect) contact if |xb −xc| = 2r.
• b and c are d-close for d ≥0 if |xb −xc| ≤2r + d, denoted as b
d∼c.
These relations are depicted in Fig. 3.
(a) Intersecting balls.
(b) Balls in perfect contact.
d
(c) d-close balls.
Figure 3: Sketch for the degrees of closeness on the level of balls.
Based on this contact notion, we can define a graph Cd = (BS, Ed) that describes the d-closeness between the fibers
with the undirected edge set
Ed =
n
{b, c}|b, c ∈BS, b ≁F c, and b
d∼c
o
.
(15)
In Fig. 4, the blue lines depict the set E0, that is, ball pairs in perfect contact.
Apart from the contact on the level of balls, we are also interested in the contact on the level of fibers. As shown in
Fig. 4, one contact area between fibers may consist of multiple ball pairs that are in contact. Using only the contact
notion on the level of balls would therefore be counterintuitive as it confounds the number of ball pairs with the number
of fibers. Moreover, it is dependent on the balls’ size, and one ball may be in contact with more than one other ball. As
a solution, we understand the contact area between two fibers as the ball chain sections where their balls are in contact.
Since the ball chains are described by the edges in FS, it seems natural to extend Ed by the balls’ incident edges w.r.t.
FS. In Fig. 4, they are depicted by red lines. To formalize this concept, we extend the edge set Ed as follows:
¯Ed = Ed ∪{{b, c} ∈FS|∃a ∈BS s.t. {b, a} ∈Ed ∨{a, c} ∈Ed} .
(16)
A connected component of the graph ¯Ed may contain more than two fibers; these we call clots to discriminate them from
(pairwise) contact areas between two fibers. By counting the connected components, we will later report the number
of contacts per unit volume as estimator for the contact intensity, as well as the number of contacts and the number
of clots per fiber as estimators of the contact and clot density, respectively. This way, we close the loop to analytical
quantification for contact, see Section 2.1.
Figure 4: Sketch for the contact areas between fibers. Blue indicates the contact between balls as described by E0. Red
indicates the ball chain sections creating the contact as described by ¯E0\E0. The gray area indicates the intuitive contact
area.
Now that we have introduced a notion to quantify contact and closeness in the Altendorf-Jeulin model, we can turn
to creating more contact. To do so, we identify suitable balls, to which we later apply the contact force. In the style
of Huisman et al. [37], we identify such balls depending on their closeness: The balls b, c ∈BS, b ≁F c, are contact
candidates w.r.t. the interaction distance ε > 0 if b
ε∼c. The set of contact candidates w.r.t. the interaction distance ε is
6
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
then Eε. We deliberately include balls that are already in contact as contact candidates since these are supposed to stay
in contact.
However, applying the contact force to all contact candidates in Eε may be computationally expensive for a large ε. The
forces might even be conflicting (see Fig. 5). Therefore, we reduce the contact candidates to a "shortlist" Sε, such that
every ball belongs to at most one candidate pair per fiber pair. To achieve faster convergence, we keep the edge in the
shortlist that has the minimal distance compared to incident edges. To formalize this, we define the set of incident edges
Iε({b, c}) = {{d, e} ∈Eε|e ∼F c} ∪{{c, e} ∈Eε|e ∼F b}.
(17)
This set contains all edges in Eε that belong to the node b or the node c. If there exists an edge {b, c} ∈Eε, then
Iε({b, c}) consists of all its incident edges. We define the shortlist as
Sε =
(
{b, c} ∈Eε|{b, c} =
argmin
{d,e}∈Iε({b,c})
∥d −e∥
)
.
(18)
Figure 5: Sketch for the contact candidates (indicated by blue lines) and shortlist (indicated by red lines) for the
interaction distance ε = 3
4r.
3.2
Contact Force
We extend Altendorf & Jeulin’s [22] fiber packing by one more force, namely the contact force Fc. It exists between
two shortlisted contact candidates {b, c} ∈Sε. We aim to move the balls such that they are in perfect contact, i.e.
|xb −xc| = 2r,
(19)
so the force depends on the difference between the ideal and actual distance. Further, there is no force necessary when
the balls are already intersecting. With this, we define the effective distance
dE(b, c) := max(0, dP (xb, xc) −2r).
(20)
Note that dE differs from I, which we introduced for the repulsion force in Section 2.2, by exchanging subtrahent and
minuend, which makes them complementary: Whereas the repulsion force pushes intersecting balls apart until they are
at most in contact, the contact force draws balls together until they are at least in contact.
In analogy to the repulsion force, the contact force has the direction v(c, b). Hence, we propose the contact force
Fc(b, c) = 1
2dE(b, c)v(c, b)
(21)
for {b, c} ∈Sε.
In some applications, such as nanotube composites, fibers start to interact before being in physical contact [45]. In this
case, it makes sense to consider dc-close balls as being in contact for a given contact distance dc > 0. Consequently,
the balls b, c ∈BS, b ≁F c are contact candidates w.r.t. the interaction distance ε > 0 if |xb −xc| ≤2r + dc + ε. One
can reason similarly when discretizing the fiber system, e.g. for voxelized images. There, depicting distances below the
voxel size is usually impossible.
For a contact distance dc > 0, we have additional leeway for balls to be considered in contact without intersection. It
seems natural to use max(0, dP (xb, xc)−(2r +dc)) as the effective distance, thus only aiming for dc-closeness instead
of perfect contact. However, this also lowers the force strength and may consequentially lead to slower convergence.
7
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
Therefore, we incorporate the smoothing factor f0,dc as introduced for the recovery force in Section 2.2. In total, we
generalize the contact force to
Fc(b, c) = f0,dc (dE(b, c))
2
dE(b, c)v(c, b)
(22)
for (b, c) ∈Sε.
3.3
Synopsis of the Contact Packing Scheme
In our contact packing scheme, we first run the regular force-biased packing by Altendorf & Jeulin [22]. This way, the
fiber system is already (nearly) free of intersections and thus possesses lower inherent forces. If this system does not
have the desired amount of inter-fiber contacts yet, we increase the contact by applying the packing incorporating the
contact force. We explain this modified packing scheme in the following. A pseudocode is provided in Algorithm 1.
In the first step of our contact-modified fiber packing, we identify contact candidates Eε for a given interaction distance
ε > 0, which we then reduce to the shortlist Sε, such that every ball belongs to only one candidate pair per fiber pair,
see Section 3.1. In this configuration, we iteratively apply the force
Ftotal, c(b) =
X
c ∈BS
c̸=b
Fr(b, c) + ρR
Fa(b) +
X
{b,c}∈FS
Fs(b, c)
+ ρC
X
{b,c}∈Sε
Fc(b, c),
(23)
to every ball b ∈FS until convergence. The stop criterion is specified in Section 3.4. Note that ρR, ρC ∈[0, 1] are
factors modulating the composition of forces analogously to ρ in Section 2.2.
Some shortlisted candidates (b, c) ∈Sε can cause conflicting forces in the fiber system even after fiber packing. This
can, for example, happen in denser fiber systems when the contact forces in different contact components "pull" fibers
in directions that conflict unresolvably with their repulsion and recovery forces. Note that a similar phenomenon has
already been observed by Altendorf & Jeulin [22], where they compare it with the jammed state [11]. We remove such
shortlisted candidates and apply the force again iteratively until termination. If this does not achieve the desired amount
of contact, we can repeat the procedure with an increased interaction distance ε. Shortlisted candidates are classified as
causing conflicting forces if for a = b or a = c the force length ∥Ftotal, c(b)∥exceeds a specified threshold t > 0.
Algorithm 1 Contact Packing Scheme
Require: dc ≥0, interaction distance ε
generate a starting configuration of fibers
run the Altendorf-Jeulin fiber packing with Ftotal
calculate the contact candidates Eε
calculate the shortlist Sε
while !(stop criterion) do
run fiber packing with Ftotal, mod
end while
if some candidates in Sε cause unresolvable forces then
remove candidates from Sε
while !(stop criterion) do
run fiber packing with Ftotal, c
end while
end if
3.4
Implementational Details
The original fiber packing by Altendorf & Jeulin [22] stops when the sum of total forces
P
b∈BS Ftotal(b)
falls below
a certain threshold. For the contact-modified fiber packing, however, we decided to stop the packing when the sum
of total forces decreases by less than a specified relative amount. This has the advantage of being independent of
parameters like the number of fibers or window size, as is the case in the implementation by Altendorf & Jeulin [22].
We chose a limit of 0.001% in the present paper. To ensure termination, we stop the packing when the number of
iterations exceeds 1 000.
As a reminder, one iteration of fiber packing classicly encompasses the calculation of forces on the fiber system and,
subsequently, the corresponding repositioning of balls. In this paper, we choose t = 0.1 to remove shortlisted candidates
8
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
that cause unresolvable forces in the fiber system, as it provides a trade-off between accuracy of the fibers’ curvature
by removing unresolvable forces on the one hand and achieving high contact on the other hand. Alternatively, the
interaction distance could be chosen higher to achieve the same amount of contact, but this would cause a higher
runtime in return.
For the realizations in this paper, we use ρ = ρR = ρC = 0.5, indicating equal strength for both recovery and contact
forces. For the spring force, we use αs = 0.05 and αe = 0.1. Unless indicated otherwise, the softcore ratio is τ = 0
and the contact distance dc = 0. This conforms to the parameters used by Altendorf & Jeulin [22], thus allowing for
direct comparison.
To detect contact candidates and intersecting balls, we follow Altendorf & Jeulin [44] and use the particularly fast
implementation by Mos´ci´nski et al. [46]. They divide the window into subwindows with a side length (sx, sy, sz). To
find contact or intersection candidates faster, every subwindow is given a list of balls. Notably, a subwindow’s list
contains both the balls, whose center is in the subwindow, and the balls, whose center is in the subwindow’s neighboring
subwindows. This way, only balls in a subwindow’s list need to be checked for contact candidacy or overlap. For a
maximal interaction distance of εmax, we use a subwindow size of at least max(2.5r, (2 + εmax)r). We calculate the
connected components of (BS, ¯Eε) using UnionFind [47].
4
Experimental Validation
In this section, we examine the performance of our packing scheme regarding the features of the fiber system when
increasing contact. We implemented the scheme in C++ using and extending the library MAVIlib [48], which
Fraunhofer ITWM develops and maintains. For further speed-up, we parallelized the calculation of forces using
OpenMP. Experiments were run on the ITWM Beehive Cluster using an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz,
125 GiB RAM, and the GNU compiler GCC 11.5.
4.1
Experimental Setup
As discussed in Section 2.1, the intensity of inter-fiber contacts depends on the fibers’ length, radius, intensity, and
direction distribution. Therefore, we vary these parameters in our experimental setup as well. For image sizes of 64 to
640 voxels per edge, we study
• short, medium, and long fibers, namely fibers with an aspect ratio a :=
ℓ
2r of a = 10, 30, 50; the radius stays
constant at r = 2.
• low, medium, and high volume fractions, namely VV = 10%, 20%, 30%. Note that fiber systems with even
higher intensities will be packed quite densely already; in this case, increasing the contact density is less
meaningful.
• the direction distribution from aligned over the uniform distribution to girdle distributions, namely we use the
Schladitz distribution with β = 0.1, 0.5, 1.0, 2.0, 3.0.
As a packing parameter, we choose the interaction distances ε = 0.1, 0.2, ..., 0.5, 1.0.
In this setup, we use a medium curvature of κ1 = 10, κ2 = 100, the softshell ratio τ = 0, and the contact distance
dc = 0 as default parameters. To study the scheme’s accuracy when varying these parameters, we further study the
cases of low curvature, namely κ1, κ2 = 100, and high curvature, namely κ1, κ2 = 10, for medium aspect ratio and
medium volume fraction. Note that varying these parameters for all parameter combinations discussed above would
result in three times the runtime and computing resources, which is not merely tedious but also unsustainable given the
necessary energy consumption. All used parameter combinations can be found in Table 2 in the Appendix.
4.2
Representative Volume Element and Runtime
The representative volume element (RVE) of a microstructure is the volume V that is considered large enough to be
statistically representative. Note that the RVE depends not only on the microstructure, but also on the investigated
property Z. This means, for example, that the RVE for the volume fraction and the thermal conductivity can differ
for the same microstructure. Moreover, it is not always computationally feasible to generate a microstructure that
is large enough. Therefore, the concept is generalized to a combination of the size and number of realizations to
be representative of the underlying microstructure. This way, it can be calculated, for example, how many small
realizations are necessary when high realizations are computationally prohibitive. The following computations are
based on the assumption of ergodicity [49].
9
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
In the present work, we are interested in the properties that describe the phenomenon of forming contact and clotting.
Therefore, we focus on the contact intensity λc, the expected number of contacts per fiber λcF , and the expected
number of clots per fiber λclF = λcl
λF , where λcl is the expected number of clots per unit volume. For algorithms
whose runtime depends on the volume, the decision on the chosen size and the corresponding number, or vice versa, to
achieve an RVE is a tradeoff depending on the runtime. In the following, we will refer to this as (RVE) size-number
combination. To make the decision based on runtime considerations, we will not only carry out an RVE estimation
following Kanit et al. [49], but we will also report the observed runtime to justify our chosen size-number combination.
More precisely, we carry it out on the ’most average’ case of our experimental setup (see Section 4.1), namely
medium fiber aspect ratio a = 30, volume fraction VV = 20%, isotropic orientation (β = 1.0), and a curvature of
κ1 = 10, κ2 = 100 under the interaction distance ε = 0.3. We generate 50 realizations each for the window edge
lengths s = 64, 128, 256, 384, 512, 640. For each parameter combination, we estimate the expected number of contacts
per fiber.2 λcF , and the expected number of clots per fiber λclF . For each property Z = λcF , λclF , we fit a model of
the variance D2
Z of the volume V as
D2
Z(V ) = KV −α,
K, α > 0,
(24)
based on the mean value ¯Z of characteristic Z on the volume V ; see also Dirrenberger et al. [50]. The factor K
1
3 is a
multiple of the generalized integral range, which can be interpreted as the scale of the phenomenon [49], which in our
case is forming contact and clotting. For a relative precision φ for the estimation of Z, we can then estimate the required
number of simulations when using volume V to get overall representative information, short RVE number as [50]:
nRVE(Z, V ) =
4
φ ¯Z
K
V α .
(25)
The runtime of the contact packing is proportional to the volume, see Fig. 6. It has a runtime of roughly 10 min for size
384, which is acceptable when considering ’many’ parameters in a simulation study, as we do in this section 4. Note
that this runtime increases with the volume fraction, or rather, the number of fibers in a window of constant size.
0
10
20
30
40
2563
5123
6403
Volume
Runtime [min]
Figure 6: Plot for the runtime depending on the window size for medium fiber length and volume fraction, β = 1.0 and
the curvature κ1 = 10, κ2 = 100. The scale of the window size is transformed cubically (corresponding to the volume).
Bars indicate the standard deviation, the line in blue indicates a fitted model.
For each characteristic Z and window size s, we fit a model of the variance D2
Z to the simulation results and report the
RVE number in Table 1 using a relative precision of 1%. For the expected number of contacts per fiber λcF , we found
K
1
3
cF ≃51.65 and αcF ≃−0.9954. For the the expected number of clots per fiber λclF , we found K
1
3
clF ≃14.91 and
αclF ≃−1.0020. Notably, both K
1
3
cF and K
1
3
cl are smaller than the fiber length of 120. Plots of D2
Z w.r.t. the volume
and the fitted model can be found in Fig. 15 in the Appendix.
In the following, we will generate 4 realizations on a window size of 384 for each parameter combination: As Table 1
shows, only 4 realizations of window size 384 are necessary to achieve a relative precision of 1%. This yields the lowest
2The contact intensity λc is, in this case, only a scaled version of the expected number of contacts per fiber λcF ; therefore, we
omit it here.
10
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
Table 1: The number of realizations necessary for a given size and characteristic for a relative precision of 1%.
Size
λcF
λcl
64
420
797
128
46
87
256
6
11
384
2
4
512
1
2
640
1
1
runtime, see Fig. 6, in comparison to the other RVE size-number combinations, while having a larger window size than
the longest fiber length ℓ= 200.
4.3
Contact Intensity Achieved Depending on the Interaction Distance
The contact packing does not contain the contact intensity as a direct input parameter; instead, it contains the interaction
distance, which does not directly translate to the contact intensity. Therefore, we investigate the relationship between
the interaction distance and contact intensity in this subsection. Results are given in Fig. 7.
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.000
0.005
0.010
0.03
0.06
0.09
0.12
0.2
0.3
β
λc
ε
0.1
0.2
0.3
0.4
0.5
1
Figure 7: Plot of the contact intensity λc w.r.t. the parameter of the orientation distribution for varying interaction
distance ε (in colour) for the experimental setup described in Section 4.1. Note that κ1 = 10, κ2 = 100.
The contact intensity is not quite linear w.r.t. the interaction distance, but Fig. 7 provides an indication of what to expect
for similar setups. For each parameter combination, the contact intensity is relatively constant for β ≥1.0 but generally
smaller for β < 1.0. A likely reason for this is that for isotropic and girdle distributions, i.e., β ≥1.0, there are more
balls of different fibers in the neighborhood of each ball. For more aligned distributions, i.e., β < 1.0, the neighboring
balls are more likely to belong to the same fiber, which yields larger contact areas instead. The same trend is observed
for the Toll estimate, which increase with β as −β log β when β is small (see Eq. 38, Appendix D). Validating this
conjecture will be part of future work. Notably, this effect reverses for low volume fractions, short fibers, and high
11
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
interaction distances. Since the fibers are smaller, the likelihood that balls belong to different fibers increases again.
This translates to more fibers clustering, as is shown in Fig. 8.
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.0
0.1
0.2
0.0
0.5
1.0
1.5
0.6
0.9
1.2
1.5
β
λclF
ε
0.1
0.2
0.3
0.4
0.5
1
Figure 8: Plot of the clot density λcl w.r.t. the parameter of the orientation distribution for varying interaction distance ε
(in colour) for the experimental setup described in Section 4.1. Note that κ1 = 10, κ2 = 100.
For long fibers and low volume fraction, the chosen interaction distance barely achieves an increase in contact intensity.
This is most likely due to the distance between single fibers. This also conforms with the high parameter accuracy in
this case, see Section 4.5. Similarly, curvy fibers result in slightly lower contact and clot densities than their straighter
counterparts, see Fig. 16 and 17.
4.4
Achieved Contact Intensity Compared to Toll’s Formula
In order to compare the contact intensity between both packing algorithms and Toll’s formula [39], see Section 2.1,
we study the realizations for each parameter combination proposed in the section above with the interaction distance
ε = 1.0. We estimate their contact intensity ˆλc, see Section 2.1, both after the Altendorf-Jeulin packing and the contact
packing.
Fig. 9 shows the results for the intensities. The already low contact intensities for the low volume fraction mostly stay
below Toll’s intersection intensity. This may be improved by omitting the prepacking in the Altendorf-Jeulin model.
Yet for most other cases, the contact intensity can be raised above Toll’s intensity. For high volume fraction and at least
medium aspect ratio, the contact intensity even exceeds Toll’s intensity after the Altendorf-Jeulin packing. This effect is
not surprising considering that hardcore packing excludes a considerable volume for fibers, whereas the Boolean model
does not have this constraint.
The probability distribution for the number of contacts per fiber is represented in Fig. 10 for the medium aspect ratio
a = 30, a volume fraction VV = 20%, an isotropic orientation distribution, and an interaction distance of ε = 1.0.
Results obtained for the Altendorf-Jeulin and our contact models are compared with the distribution obtained for the
Toll model (1a). This distribution is obtained in a similar way as the mean number of contact points and follows a
Poisson law in the case of an isotropic distribution of orientation (see Appendix D for details). Furthermore, we fit the
numerical results obtained for the Altendorf-Jeulin and contact models with Poisson laws.
12
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
The Toll model gives a Poisson distribution with a mean of 6.6 contacts per fiber. In the AJ model, it has a mean of
roughly 5 contact points and is very close to a Poisson distribution. In the contact model, the mean is about 8.2, which
is much higher. Furthermore, the contact distribution is more narrow than in a Poisson distribution. Note, however, that
the Toll model gives relevant estimates for intersecting fibers. The absence of interpenetations in the contact model is a
considerable constraint that induce in particular repulsion effects and anti-correlations not taken into account in the Toll
model.
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.000
0.005
0.010
0.015
0.03
0.06
0.09
0.12
0.1
0.2
0.3
β
λc
AJ
Contact
Toll
Figure 9: Plot of the contact intensity of the Altendorf-Jeulin model, the contact model, and the analytically determined
formula by Toll. Note that κ1 = 10, κ2 = 100.
13
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
0.00
0.05
0.10
0.15
0.20
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16
#Contacts
Frequency
AJ
Contact
Toll
Figure 10: Plot of the distribution of the number of contacts of individual fibers for the Altendorf-Jeulin model and the
contact model (colored bars). Solid lines represent Poisson distributions with the same mean as that of the two models
(left and right) as well as that predicted by the Toll model (in-between). Note that κ1 = 10, κ2 = 100.
4.5
Accuracy Results
In order to evaluate the accuracy of our packing scheme, we estimate the direction parameter ˆβ and the curvature
parameters ˆκ1 and ˆκ2 as proposed and implemented by Altendorf [22]. We do this both after fiber generation but
before packing, after packing with the Altendorf-Jeulin method, and after contact packing with interaction distance
ε = 0.1. We report their relative deviation from the original input parameters as an indicator of accuracy. Note that 0
indicates high accuracy due to no deviation, and 1 indicates low accuracy due to high deviation. Being particularly
interested in any possible inaccuracy introduced by the contact packing steps, we also compare the deviation between
the Altendorf-Jeulin packing and the contact packing.
All methods (fiber generation, Altendorf-Jeulin packing, contact packing) show acceptable accuracy for the fiber
direction parameter β for values of β ≥0.5, see Fig. 11: The relative deviation from the input parameter is below 0.1.
For smaller values, however, the relative deviation rises up to 0.6, even more so for shorter fibers. The estimated value
of the curvature parameters κ1 and κ2, however, shows rather high inaccuracy: Whereas κ2 has a relative deviation
from the input parameter below just 0.2, the relative deviation can even rise up to even 0.75 for the parameter κ1, see
Fig. 12. This is especially pronounced for shorter fibers. Note that it is already alluded to in previous works that the
curvature parameters are qualitatively meaningful, but their quantitative estimation is "not highly accurate" [44].
Fig. 13 shows the relative deviation of the contact packing to the Altendorf-Jeulin packing. Whereas the direction
parameter β shows only very small deviation, i.e., it stays highly consistent, one can observe higher deviation for the
local curvature κ2. This is an intuitive result considering that the contact packing creates contact by attracting balls of
fibers that are close to each other, which likely increases local curvature (corresponding to lower values). Nevertheless,
the deviation of the curvature should be taken with a grain of salt, given the inaccuracy of their estimation observed
above.
14
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.0
0.2
0.4
0.6
0.0
0.2
0.4
0.6
0.0
0.2
0.4
0.6
β
Relative deviation of β^
AJ
Contact
Generation
Figure 11: Plots of the deviation from the direction parameter β relative to the input value. The estimated values of
β for the contact packing and the Altendorf-Jeulin packing correspond so closely to the values upon fiber generation
that the plots overlay each other. Bars indicate standard deviations, but are often not visible because they are so
low. Note that low deviation around 0 indicates high accuracy, whereas high values indicate low accuracy. Note that
κ1 = 10, κ2 = 100.
15
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.00
0.25
0.50
0.75
0.00
0.25
0.50
0.75
0.00
0.25
0.50
0.75
β
Relative deviation of κ^1
AJ
Contact
Generation
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
0.0
0.1
0.2
0.0
0.1
0.2
0.0
0.1
0.2
β
Relative deviation of κ^2
AJ
Contact
Generation
Figure 12: Plots of the deviation of the curvature parameters κ1 = 10 and κ2 = 100 relative to the input values after
fiber generation, Altendorf-Jeulin packing, and contact packing each. Bars indicate standard deviations, but are often
not visible because they are so low. Note that low values for the absolute deviation around 0 indicate high accuracy,
whereas high values indicate low accuracy.
16
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
short
medium
long
VV = 10%
VV = 20%
VV = 30%
0
1
2
3
0
1
2
3
0
1
2
3
−0.04
−0.02
0.00
0.02
−0.04
−0.02
0.00
0.02
−0.04
−0.02
0.00
0.02
β
Relative Deviation
β^
κ^1
κ^2
Figure 13: Deviation of the estimated parameter values of the contact model relative to the Altendorf-Jeulin model.
Bars indicate standard deviations, but are often not visible because they are so low. Note that low values for the absolute
deviation around 0 indicate high accuracy, whereas high values indicate low accuracy. In this setup, the curvature
parameters are κ1 = 10, κ2 = 100.
17
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
5
Application
In the present paper, we study the contact intensity of fiber models due to their role in materials’ physical properties. So
far, we have solely studied the models’ behavior depending on input parameters. In this section, we study its behavior
when applied to a real-world problem, namely, modeling thermal conductivity in wood fiber insulation mats. We will
replicate the models generated by Andrä et al. [1]. However, we will use the Altendorf-Jeulin and the contact model
instead of a Boolean model. Note that simulating the thermal conductivity is a part of future work.
We replicate Andrä et al.’s base model M1 on a voxel spacing of 25 µm with the following parameter combinations:
The mean fiber radius is r = 50 µm, the fiber length ℓ= 1 mm, the orientation parameter β = 3.0, indicating a fiber
distribution closely aligned with the x-y-plane, and the curvature parameters κ1, κ2 = 100. We generate 6 480 fibers to
generate a volume fraction of roughly 6 %. This accounts for the lumen, the tunnel in wood fibers, modeled by Andrä
et al. [1]. Note that we omit the fiber chunks here, thus yielding slightly changed volume fractions.
As proposed by Andrä et al. [1], we vary the radius as r = 35 µm, 45 µm, 55 µm, 65 µm, the length as ℓ= 0.5 mm,
2 mm, 3 mm, the orientation parameter as β = 3.5, 4.0, each such that we keep all other parameters constant, and the
number of fibers as 2 160, 4 320, 8 640, 10 800 to yield volume fractions of roughly VV = 2%, 4%, 8%, 10%. For each
combination, we plot the contact intensity for the Altendorf-Jeulin model, the contact model for interaction distance
ε = 1.0, and Toll’s intersection intensity [38, 39] for the Boolean model.
We observe from Fig. 14 that increasing the fiber radius reduces the contact density, whereas increasing the fiber length
increases the density. This is not surprising as Toll’s formula is directly dependent on the aspect ratio of fibers and the
volume fraction. Interestingly, though, the contact model’s density seems to be a fraction of Toll’s density, except for
the case of high fiber length. Here, the density of the contact model even rises higher than Toll’s density.
0.00
0.25
0.50
0.75
1.00
35 µm
45 µm
50 µm
55 µm
65 µm
Fiber Radius
λcF
Packing
AJ Model
Contact Model
Toll
(a) Contact density for varying radius.
0
1
2
3
0.5 mm
1 mm
2 mm
4 mm
Fiber Length
λcF
Packing
AJ Model
Contact Model
Toll
(b) Contact density for varying length.
0.0
0.2
0.4
0.6
3.0
3.5
4.0
β
λcF
Packing
AJ Model
Contact Model
Toll
(c) Contact density for varying orientation parameter β.
0.00
0.25
0.50
0.75
1.00
1.25
2%
4%
6%
8%
10%
VV
λcF
Packing
AJ Model
Contact Model
Toll
(d) Contact density for varying volume fraction.
Figure 14: Contact densities when replicating the models proposed by Andrä et al. [1]. Note that κ1, κ2 = 100.
18
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
The results indicate that modelling the fiber system with a Boolean model or a hardcore model may result in strongly
differing conductivities due to the difference in contact densities. Moreover, the impact of the number of contacts in
comparison to other parameters can be studied by systematic variation, which is planned for future work.
6
Discussion & Conclusion
Despite the relevance of inter-fiber contacts when simulating physical properties of fiber systems, there exists little
research on parametrically modelling fiber systems including inter-fiber contact. Therefore, we extend the force-biased
model by Altendorf & Jeulin [22] to a model where the contact between fibers can be explicitly increased. For this,
we have developed algorithmic notions of contact and closeness. To increase the number of contacts, we have added
another force to the model by Altendorf & Jeulin that creates contact between fibers that are close enough.
We have shown that our model indeed increases the number of contacts in comparison to the Altendorf-Jeulin model
while maintaining similar accuracy to other parameters of the fiber system. Moreover, we compared the number of
contacts in the algorithmic hardcore models with the intersection intensity of the Boolean model, finding that the contact
intensity of the contact model exceeds the intersection intensity of the Boolean model for volume fractions above 20 %.
When simulating physical properties on fiber systems realized from the contact model, this allows for higher accuracy
regarding the intensity of inter-fiber contacts. Moreover, the contact intensity can be systematically varied, thus allowing
for a deeper understanding of the influence of contact on the physical property in comparison to other parameters. Such
a study for thermal conductivity in wood fiber mats is currently underway. So far, the contact intensity can be reliably
influenced by the interaction distance. To allow for an explicit parameter in the fiber packing algorithm for the contact
intensity, further research is needed.
Studying the accuracy of the curvature was severely limited since, to our knowledge, no accurate estimators for the
parameters κ1 and κ2 exist [22]. In some cases, it may suffice to use only one curvature parameter, thus allowing for
higher accuracy.
In the next stage, we will extend the fiber model to contain fiber bundles and potentially different shapes such as square
cross sections. Moreover, studying and varying the contact surface in addition to the number (intensity) of contacts is
worthwhile to gain a deeper understanding of the influence of contact on physical properties.
Acknowledgements
We thank Markus Kronenberger and Michael Godehardt, Fraunhofer Institute for Industrial Mathematics (ITWM),
for their support with the MAVIlib library and access to the ITWM Beehive Cluster. This work was supported by the
German Federal Ministry of Education and Research under Grant Agreement No: 05M22UKA and the French-German
doctoral college "Mathematische Bildverarbeitung".
Code and Data Availability
The code for the contact model can be provided upon reasonable request. The result data and corresponding scripts for
evaluation are hosted on doi:10.5281/zenodo.17105506.
Author Contributions
Alex Keilmann: Data curation, Formal analysis, Methodology, Software, Validation, Visualization, Writing - original
draft, Writing - review & editing. Claudia Redenbach: Conceptualization, Writing - review & editing . François
Willot: Conceptualization, Writing - review & editing.
References
[1] H. Andrä, D. Dobrovolskij, M. Engelhardt, M. Godehardt, M. Makas, C. Mercier, S. Rief, K. Schladitz, S. Staub,
K. Trawka, and S. Treml. Image-based microstructural simulation of thermal conductivity for highly porous
wood fiber insulation boards: 3D imaging, microstructure modeling, and numerical simulations for insight into
structure–property relation. Wood Science and Technology, 57(1):5–31, 2023. doi: 10.1007/s00226-022-01434-6.
[2] L. Berhan and A.M. Sastry. Modeling percolation in high-aspect-ratio fiber systems. i. soft-core versus hard-core
models. Phys. Rev. E, 75:041120, 2007. doi: 10.1103/PhysRevE.75.041120.
19
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
[3] Y. Pan, L. Iorga, and A.A. Pelegri. Analysis of 3D random chopped fiber reinforced composites using FEM and
random sequential adsorption. Computational Materials Science, 43(3):450–461, 2008. doi: 10.1016/j.commatsci.
2007.12.016.
[4] C. Redenbach and I. Vecchio. Statistical analysis and stochastic modelling of fibre composites. Composites
Science and Technology, 71(2):107–112, 2011. doi: 10.1016/j.compscitech.2010.10.014.
[5] W. Tian, L. Qi, J. Zhou, J. Liang, and Y. Ma. Representative volume element for composites reinforced by spatially
randomly distributed discontinuous fibers and its applications. Composite Structures, 131:366–373, 2015. doi:
10.1016/j.compstruct.2015.05.014.
[6] N. Provatas, M. Haataja, J. Asikainen, S. Majaniemi, M. Alava, and T. Ala-Nissila. Fiber deposition models in
two and three spatial dimensions. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 165(1-3):
209–229, 2000. doi: 10.1016/S0927-7757(99)00417-3.
[7] K.J. Niskanen and M.J. Alava. Planar Random Networks with Flexible Fibers. Physical Review Letters, 73(25):
3475–3478, 1994. doi: 10.1103/PhysRevLett.73.3475.
[8] D. Venkateshan, M. Tahir, H. Vahedi Tafreshi, and B. Pourdeyhimi. Modeling effects of fiber rigidity on thickness
and porosity of virtual electrospun mats. Materials & Design, 96:27–35, 2016. doi: 10.1016/j.matdes.2016.01.105.
[9] A. Moghadam, S. Yousefi, H.V. Tafreshi, and B. Pourdeyhimi. Characterizing nonwoven materials via realistic
microstructural modeling. Separation and Purification Technology, 211:602–609, 2019. doi: 10.1016/j.seppur.
2018.10.018.
[10] S.R. Williams and A.P. Philipse. Random packings of spheres and spherocylinders simulated by mechanical
contraction. Physical Review E, 67(5):051301, 2003. doi: 10.1103/PhysRevE.67.051301.
[11] N.C. Karayiannis and M. Laso. Dense and Nearly Jammed Random Packings of Freely Jointed Chains of Tangent
Hard Spheres. Physical Review Letters, 100(5):050602, 2008. doi: 10.1103/PhysRevLett.100.050602.
[12] M. Schneider. The sequential addition and migration method to generate representative volume elements for
the homogenization of short fiber reinforced plastics. Computational Mechanics, 59(2):247–263, 2017. doi:
10.1007/s00466-016-1350-7.
[13] M. Schneider. An algorithm for generating microstructures of fiber-reinforced composites with long fibers.
International Journal for Numerical Methods in Engineering, 123(24):6197–6219, 2022. doi: 10.1002/nme.7110.
[14] A. Mehta and M. Schneider. A sequential addition and migration method for generating microstructures of
short fibers with prescribed length distribution. Computational Mechanics, 70(4):829–851, 2022. doi: 10.1007/
s00466-022-02201-x.
[15] C. Lauff, M. Schneider, J. Montesano, and T. Böhlke. Generating microstructures of long fiber reinforced
composites by the fused sequential addition and migration method. International Journal for Numerical Methods
in Engineering, page e7573, 2024. doi: 10.1002/nme.7573.
[16] C. Lauff, M. Schneider, and T. Böhlke. Microstructure generation of long fiber reinforced hybrid composites
using the fused sequential addition and migration method. Journal of Thermoplastic Composite Materials, page
08927057251314425, 2025. doi: 10.1177/08927057251314425.
[17] C. Lauff, M. Krause, M. Schneider, and T. Böhlke. On the Influence of the Fiber Curvature on the Stiffness
of Long Fiber Reinforced Composites. International Journal for Numerical Methods in Engineering, 126(15):
e70094, 2025. doi: 10.1002/nme.70094.
[18] A. Klar. Interacting fiber structures: mathematical aspects and applications. Rivista di Matematica della Università
di Parma, 10(2):199–268, 2019.
[19] R. Borsche, A. Klar, C. Nessler, A. Roth, and O. Tse. A Retarded Mean-Field Approach for Interacting Fiber
Structures. Multiscale Modeling & Simulation, 15(3):1130–1154, 2017. doi: 10.1137/151005592.
[20] V. Salnikov, D. Choï, and P. Karamian-Surville. On efficient and reliable stochastic generation of RVEs for
analysis of composites within the framework of homogenization. Computational Mechanics, 55(1):127–144,
2015. doi: 10.1007/s00466-014-1086-1.
[21] A. Bezrukov and D. Stoyan. Simulation and Statistical Analysis of Random Packings of Ellipsoids. Particle &
Particle Systems Characterization, 23(5):388–398, 2006. doi: 10.1002/ppsc.200600974.
[22] H. Altendorf and D. Jeulin. Random-walk-based stochastic modeling of three-dimensional fiber systems. Physical
Review E, 83(4):041804, 2011. doi: 10.1103/PhysRevE.83.041804.
[23] P. Easwaran. Stochastic Geometry Models for Interacting Fibers. PhD thesis, TU Kaiserslautern, 2017.
20
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
[24] L. Chapelle, M. Lévesque, P. Brøndsted, M.R. Foldschack, and Y. Kusano.
GENERATION OF NON-
OVERLAPPING FIBER ARCHITECTURE. Proceedings of the 20th International Conference on Composite
Materials, 2015.
[25] A. Kumar, A. DasGupta, and A. Jain. Microstructure generation algorithm and micromechanics of curved fiber
composites with random waviness. International Journal of Solids and Structures, 289:112625, 2024. doi:
10.1016/j.ijsolstr.2023.112625.
[26] C. Gaunand, Y. De Wilde, A. François, V. Grigorova-Moutiers, and K. Joulain. Modeling conductive thermal
transport in three-dimensional fibrous media with fiber-to-fiber contacts. Phys. Rev. Appl., 23:034084, 2025. doi:
10.1103/PhysRevApplied.23.034084.
[27] V.A. Yastrebov. Numerical methods in contact mechanics. John Wiley & Sons, 2013.
[28] R. Picu. Mechanics of random fiber networks—a review. Soft Matter, 7(15):6768–6785, 2011.
[29] J. Xie, Z. Guo, M. Shao, W. Zhu, W. Jiao, Z. Yang, and L. Chen. Mechanics of textiles used as composite preforms:
A review. Composite structures, 304:116401, 2023.
[30] D. Poquillon, B. Viguier, and E. Andrieu. Experimental data about mechanical behaviour during compression
tests for various matted fibres. Journal of materials science, 40:5963–5970, 2005.
[31] C. Van Wyk. 20—note on the compressibility of wool. Journal of the Textile Institute Transactions, 37(12):
T285–T292, 1946.
[32] D. Durville. Numerical simulation of entangled materials mechanical properties. Journal of materials science, 40:
5941–5948, 2005.
[33] M. Faessel, C. Delisée, F. Bos, and P. Castéra. 3D Modelling of random cellulosic fibrous networks based on
X-ray tomography and image analysis. Composites Science and Technology, 65(13):1931–1940, 2005. doi:
10.1016/j.compscitech.2004.12.038.
[34] A. Karakoç, E. Hiltunen, and J. Paltakari. Geometrical and spatial effects on fiber network connectivity. Composite
Structures, 168:335–344, 2017. doi: 10.1016/j.compstruct.2017.02.062.
[35] S. Deogekar, Z. Yan, and R.C. Picu. Random Fiber Networks With Superior Properties Through Network Topology
Control. Journal of Applied Mechanics, 86(8):081010, 2019. doi: 10.1115/1.4043828.
[36] E.M. Huisman, C. Storm, and G.T. Barkema. Monte Carlo study of multiply crosslinked semiflexible polymer
networks. Physical Review E, 78(5):051801, 2008. doi: 10.1103/PhysRevE.78.051801.
[37] E.M. Huisman, T. Van Dillen, P.R. Onck, and E. Van Der Giessen. Three-Dimensional Cross-Linked F-Actin
Networks: Relation between Network Architecture and Mechanical Behavior. Physical Review Letters, 99(20):
208103, 2007. doi: 10.1103/PhysRevLett.99.208103.
[38] S. Toll. Note: On the tube model for fiber suspensions. Journal of Rheology, 37(1):123–125, 1993. doi:
10.1122/1.550460.
[39] S. Toll. Packing mechanics of fiber reinforcements. Polymer Engineering & Science, 38(8):1337–1350, 1998. doi:
10.1002/pen.10304.
[40] T. Komori and K. Makishima. Numbers of Fiber-to-Fiber Contacts in General Fiber Assemblies. Textile Research
Journal, 47(1):13–17, 1977. doi: 10.1177/004051757704700104.
[41] G. Gaiselmann, D. Froning, C. Tötzke, C. Quick, I. Manke, W. Lehnert, and V. Schmidt. Stochastic 3D modeling
of non-woven materials with wet-proofing agent. International Journal of Hydrogen Energy, 38(20):8448–8460,
2013. doi: 10.1016/j.ijhydene.2013.04.144.
[42] K. Schladitz, S. Peters, D. Reinel-Bitzer, A. Wiegmann, and J. Ohser. Design of acoustic trim based on geometric
modeling and flow simulation for non-woven. Computational Materials Science, 38(1):56–66, 2006. doi:
10.1016/j.commatsci.2006.01.018.
[43] N.I. Fisher, T. Lewis, and B.J.J. Embleton. Statistical Analysis of Spherical Data. Cambridge University Press,
Cambridge [Cambridgeshire] ; New York, 1987. ISBN 978-0-521-24273-8.
[44] H. Altendorf. 3D Morphological Analysis and Modeling of Random Fiber Networks. PhD thesis, TU Kaiserslautern,
Kaiserslautern, Germany, 2011.
[45] J.M. Benoit, B. Corraze, and O. Chauvet. Localization, coulomb interactions, and electrical heating in single-wall
carbon nanotubes/polymer composites. Physical Review B, 65(24), 2002. doi: 10.1103/physrevb.65.241405.
[46] J. Mo´sci´nski, M. Bargieł, Z.A. Rycerz, and P.W.M. Jacobs. The Force-Biased Algorithm for the Irregular Close
Packing of Equal Hard Spheres. Molecular Simulation, 3(4):201–212, 1989. doi: 10.1080/08927028908031373.
21
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
[47] R. Sedgewick and K. Wayne. Algorithms. Addison-Wesley Professional, 2011.
[48] Fraunhofer ITWM, Department of Image Processing. MAVI – modular algorithms for volume images, v. 2024.
http://www.mavi-3d.de, 2024.
[49] T. Kanit, S. Forest, I. Galliet, V. Mounoury, and D. Jeulin. Determination of the size of the representative volume
element for random composites: Statistical and numerical approach. International Journal of Solids and Structures,
40(13-14):3647–3679, 2003. doi: 10.1016/S0020-7683(03)00143-4.
[50] J. Dirrenberger, S. Forest, and D. Jeulin. Towards gigantic RVE sizes for 3D stochastic fibrous networks.
International Journal of Solids and Structures, 51(2):359–376, 2014. doi: 10.1016/j.ijsolstr.2013.10.011.
[51] J. Serra. The boolean model and random sets. In Image modeling, pages 343–370. Elsevier, 1981.
22
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
A
Parameter combinations in Section 4
In Section 4.1, we explained the experimental setup for validating the contact model rather illustratively, omitting exact
parameters. Here, in Table 2, we provide detailed input parameters for the Altendorf-Jeulin and the contact model.
Table 2: Overview of parameter combinations used in the experiments in Section 4. All parameters were combined
with β = 0.1, 0.5, 1.0, 2.0, 3.0 and ε = 0.1, 0.2, 0.3, 0.4, 0.5, 1.0. Note that the number of fibers depends on the image
size s.
Aspect ratio
Radius
Length
VV
#fibers
κ1
κ2
τ
dc
10
2
40
∼10%
400s3
10
100
0.0
0
10
2
40
∼20%
800s3
10
100
0.0
0
10
2
40
∼30%
1 200s3
10
100
0.0
0
30
2
120
∼10%
133s3
10
100
0.0
0
30
2
120
∼20%
266s3
10
100
0.0
0
30
2
120
∼30%
400s3
10
100
0.0
0
50
2
200
∼10%
80s3
10
100
0.0
0
50
2
200
∼20%
160s3
10
100
0.0
0
50
2
200
∼30%
240s3
10
100
0.0
0
30
2
120
∼20%
266s3
10
10
0.0
0
30
2
120
∼20%
266s3
100
100
0.0
0
B
Fitted Model for the Representative Volume Element
Fig. 15 shows the variance plots w.r.t. the volume window, which is used in Section 4.2, together with their fitted
models. These are used for the RVE size-number estimation. Note that we only present the plot for Z = λcF , λclF as
λc is a multiple of λcF in this case due to a constant λF . Note that the fitted models correspond closely to the observed
variances.
0.001
0.010
0.100
1.000
1e+05
1e+06
1e+07
1e+08
Window Volume
Var(λcF)
1e−05
1e−04
1e−03
1e−02
1e+05
1e+06
1e+07
1e+08
Window Volume
Var(λclF)
Figure 15: Plots of the variance of properties w.r.t. to the window volume. The line in blue indicates the fitted model.
C
Validation Plots of Section 4
In Section 4, the simulation results were shown for the main setup of moderate fiber curvature κ1 = 10, κ2 = 100.
Fig. 16 shows the impact of contact distance ε on both the contact intensity λc and the expected number of clots
per fiber λclF for straight fibers, so curvature parameters κ1, κ2 = 100. Fig. 17 again, shows these results for
curvy fibers, meaning curvature parameters κ1, κ2 = 10. Their behavior is similar to fibers with moderate curvature
κ1 = 10, κ2 = 100, yet differ in the specific values and observed variance of λc and λclF .
Fig. 18 presents relative deviation of estimated parameters for rather straight fibers (κ1, κ2 = 100) and more curvy
fibers (κ1, κ2 = 10) relative to the corresponding Altendorf-Jeulin model. Note that the accuracy is higher for curvy
fibers.
23
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
0.050
0.075
0.100
0
1
2
3
β
λc
ε
0.1
0.2
0.3
0.4
0.5
1
0.6
0.8
1.0
0
1
2
3
β
λclF
ε
0.1
0.2
0.3
0.4
0.5
1
Figure 16: Plot of the contact intensity λc and he expected number of clots per fiber λclF w.r.t. the parameter of
the orientation distribution for varying interaction distance ε (in colour) for straight fibers with κ1, κ2 = 100, see
Section 4.3.
0.04
0.06
0.08
0.10
0
1
2
3
β
λc
ε
0.1
0.2
0.3
0.4
0.5
1
0.4
0.6
0.8
1.0
0
1
2
3
β
λclF
ε
0.1
0.2
0.3
0.4
0.5
1
Figure 17: Plot of the contact intensity λc and he expected number of clots per fiber λclF w.r.t. the parameter of the
orientation distribution for varying interaction distance ε (in colour) for curvy fibers with κ1, κ2 = 10, see Section 4.3.
−0.03
−0.02
−0.01
0.00
0.01
0
1
2
3
β
Relative Deviation
β^
κ^1
κ^2
(a) Relative deviations of straight fibers with κ1, κ2 = 100, see
Section 4.1.
−0.012
−0.008
−0.004
0.000
0
1
2
3
β
Relative Deviation
β^
κ^1
κ^2
(b) Relative deviations of curvy fibers, see Section 4.1.
Figure 18: Deviation of the estimated parameter values of the contact model relative to the Altendorf-Jeulin model for
the special cases of straight (κ1, κ2 = 100) and curvy (κ1, κ2 = 10) fibers. Bars indicate the standard deviation. Note
that small values for the absolute deviation around 0 indicate high accuracy, whereas high values indicate low accuracy.
24
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT)
D
Distribution of the number of fiber-fiber intersections
Hereafter, we derive the distribution for the number of intersections of a test fiber with other fibers in a Boolean random
set [51]. The Boolean random set is made of cylinders of radius r, length ℓ, volume V = πr2ℓand density λf. The
fibers are randomly oriented and follow the orientation distribution ψ(p) where p is a unit vector and:
Z
|p|=1
ψ(p)dp = 1.
(26)
Let us consider a cylinder oriented along direction p and the centerline of an arbitrary cylinder oriented along direction
p′. We follow [38] and note that the centerline will cut the test cylinder if and only if its center is inside a domain of
volume:
Aℓ+ V,
(27)
with A the projected area of the test cylinder along a plane perpendicular to p′:
A = 2rℓ|p × p′| + πr2|p · p′|.
(28)
The mean number of intersections between the test cylinder and the centerline of a cylinder oriented along direction p
therefore reads:
δ(p′) (Aℓ+ V ) ,
(29)
where δ(p′) = λfψ(p′)dp′ is the density of cylinders oriented along direction p′. Denote λ′
i the number of intersects of
a test cylinder oriented along direction p with a cylinder oriented in direction p′. Its mean value reads (see [39]):
λ′
i = 4δ(p′) (A′ℓ+ V ) ,
A′ = rℓ|p × p′| + πr2|p · p′|,
(30)
and the variable λ′
i follows the Poisson distribution:
P{λ′
i = k} = e−λ′
i λ′
i
k
k! ,
k ∈N.
(31)
The total number of intersections λi between the test fiber and a fiber in any direction is the sum of the independent
Poisson variables λ′
i and is therefore also Poisson-distributed. Accordingly:
P{λi = k} = e−λi λi
k
k! ,
λi =
Z
|p′|=1
4ψ(p′)λf(A′ℓ+ V )dp′.
(32)
In the above, λi and P{λi = k} depend, in general, on the direction p of the test cylinder. Finally, the distribution of the
number of intersections eλi of an arbitrarily-oriented cylinder with another cylinder is given by averaging (32) over all p:
P{ eλi = k} =
Z
p
dp ψ(p)e−λi λi
k
k! ,
k ∈N.
(33)
For a set of cylinders with the same length, Eq. (1a) can be deduced from the above, by taking the expectation of the
probability law. When fibers are isotropically-distributed (β = 1) or aligned (β = 0), Eq. (33) reduces to the same
expression as in (33) and so eλi is also Poisson-distributed.
Let us now focus on the particular case where ψ(p) takes the form (4) and consider the limit r →∞of infinitely-
elongated cylinders. The integral for fψ in (1a) reads, in polar coordinates:
fψ =
Z
[0;π]2×[0;2π]2 dθ dθ′ d ϕ dϕ′ β2 sin θ sin θ′p
1 −[sin θ sin θ′ cos(ϕ −ϕ′) + cos θ cos θ′]2
4π2 [(β2 −1) cos2 θ + 1]3/2 [(β2 −1) cos2(θ′) + 1]3/2
.
(34)
We denote u = ϕ −ϕ′ and expand the integrand as θ, θ′ →0:
fψ ≈
Z 2π
u=0
Z π/2
θ=0
Z π/2
θ′=0
du dθ dθ′ uβ2θθ′√
θ2 + θ′2 −2θθ′ cos u
2π2(β2 + θ2)3/2(β2 + θ′2)3/2 ,
(35)
which provides, after a change of variable w = θ/β, w′ = θ′/β
∂(fψ/β)
∂wmax
≈
Z 2π
u=0
du
Z wmax
w=1
dwu
p
w2 −2wwmax cos u + w2max
π2w2w2max
≈
Z 2π
u=0
du
Z wmax
w=1
dw
u
π2w2wmax
(36)
with wmax = π/(2β). Finally, we obtain:
fψ = −2β log β + O(β),
β →0,
(37)
whereas gψ →1 (see [39]). Therefore, in the limit of quasi-aligned fiebrs of infinite length, we obtain:
λcF = 8λF rℓ(πr −ℓβ log β).
(38)
25
|
INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL Alex Keilmann∗ -Landau ORCID: 0009-0004-9793-3065 Claudia Redenbach -Landau ORCID: 0000-0002-8030-069X François Willot Center of Mathematical Morphology (CMM) Mines Paris Tech ORCID: 0000-0003-1544-6550 ABSTRACT In fields such as material design or biomedicine, fiber materials play an important role. Fiber simulations, also called digital twins, provide a basis for testing and optimizing the material's physical behavior digitally. Inter-fiber contacts can influence the thermal and mechanical behavior of a fiber system; to our knowledge, however, there exist no parametric fiber models allowing for explicit modeling of the number of inter-fiber contacts. Therefore, this paper proposes an extension of the iterative force-biased fiber packing by Altendorf & Jeulin. In this extension, we model the inter-fiber contacts explicitly and add another force to the force-biased packing to increase the number of contacts. We successfully validate the packing with respect to its parameter accuracy. Moreover, we show that the extension indeed increases the number of contacts, even exceeding theoretical values. Hence, this packing scheme has the potential to achieve higher accuracy in physical simulations. 1 Introduction Fiber materials are of great interest in areas such as material design or biomedicine due to their advantageous properties, which depend on their microstructure. Therefore, fiber simulations are used to test and optimize material behavior, also known as creating digital twins. For example, Andrä et al. [1] modeled fiber insulation mats based on CT images and systematically varied parameters to study their impact on thermal conductivity. Such an approach is more sustainable than producing a variety of materials and testing their properties, as is traditionally done. However, for digital twins to be successful, they need to model the relevant characteristics of the fiber system to ensure that they are representative of the physical behavior. On the level of a single fiber, this includes the fiber size, shape, curvature, and direction distribution. On the scale of the fiber system, the interaction between fibers can be of great interest: Whereas unrealistic fiber intersections are negligible in some applications [2], they are prohibitive in others. Fiber models allowing for such intersections are called softcore models, whereas models prohibiting them are called hardcore models. Algorithms for simulating hardcore fiber systems can generally be categorized into two distinct approaches, the sequential approach and the collective rearrangement approach. In sequential approaches, fibers are added sequentially to the fiber system such that they do not intersect. In collective rearrangement, the fibers are generated simultaneously and may intersect in the initial configuration. In a subsequent step, their intersections are removed. One of the first models to achieve a hardcore system is the random sequential adsorption (RSA) model. Here, elements such as straight cylinders are generated independently following a given distribution on length, radius, and/or orientation. These elements are placed iteratively into the existing system if they do not cause any intersections. Otherwise, they ∗Corresponding author: 12 Sep 2025 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) are resampled. However, these models allow for rather low volume fractions [3, 4]. There are several variations and extensions of the original RSA approach, such as allowing fibers to "grow" until they intersect other fibers [5]. This allows for higher volume fractions up to 22.5 % at the cost of less control over the length distribution. Sedimentation models [6-9] usually describe fiber systems that are obtained by fibers subsequently falling on top of each other. They can achieve higher volume fractions up to 80 %. However, their fiber direction distributions are roughly limited to the plane. When collective rearrangement approaches remove intersections, they calculate the direction and amount of object displacement based on forces, which in turn are modeled on the basis of physics laws. In an early collective rearrangement approach, a fiber system is generated and, afterwards, fiber intersections are removed depending on the solution of a differential equation. It allows simulation of packings of spherocylinders [10] or ball chains [11]. Schneider and colleagues[12-17] transformed this approach into an optimization problem, including curved fibers. In addition, they found that the algorithm performs better when fibers are added to the system sequentially. Venkateshan et al. [8] and Klar et al. [18, 19] propose sedimentation models with collective rearrangement using differential equations. As another subcategory of collective rearrangement approaches, force-biased approaches are inspired by molecular dynamics. Salnikov et al. [20] propose such a model for cylinders and spheres; Bezrukov & Stoyan [21] introduce a force-biased model for ellipsoids. Altendorf & Jeulin [22] extend this approach to more complex fiber shapes: They consider fibers as ball chains that are generated by a random walk. To remove intersections, repulsion forces act on overlapping balls. To maintain the fibrous structure, recovery forces act between neighboring balls of a chain. Moreover, the recovery forces preserve a prescribed fiber curvature. This approach was extended by Easwaran [23] for infinite fibers and fiber bundles; Chapelle et al. [24] improve the model's runtime by using spherocylinders instead of balls. Note that the molecular dynamics approach is also not limited to collective rearrangement approaches - Kumar et al. [25] develop a fiber model based on the movement paths of charged particles, which are introduced sequentially. In various applications, not only the (lack of) fiber intersections but also the contact between fibers is of interest. For example, inter-fiber contact areas influence the thermal conductivity of fiber systems [26, 1]. Friction, adhesion, and wear along contact points [27] are an essential feature of entangled mechanical networks [28] such as textiles [29]. They must be taken into account to model the significant strain rates and hysteresis phenomena observed in many industrial compaction processes [30]. Micromechanical models, pioneered by Wyk [31], have established non-trivial scaling laws for the effective mechanical response as a function of the fiber density and for the number of contact points with respect to the fiber volume fraction [32]. A common approach [33, 34, 26] to model contact between fibers is the explicit transformation of fiber intersections into contact areas, thus dropping the hardcore condition. Implicitly, Altendorf & Jeulin's force-biased packing [22] achieves the same. However, it has not yet been thoroughly researched how to model inter-fiber contact parametrically. For cellular fiber networks, i.e., fibers that are connected at their endpoints, inter-fiber contact is modeled more explicitly. Deogekar et al. [35] model them as stochastic Voronoi networks with a connectivity parameter, which is constant for each network. Huisman et al. [36] randomly change the topology using a Monte Carlo minimization scheme. In their earlier work [37], they employ a force-biased approach in which fiber endpoints within a certain distance attract each other. The network topology, especially the number of contact points, is controlled via the force strength. In the present paper, we propose a parametric hardcore fiber model where the number of inter-fiber contacts can be increased explicitly. We extend the force-biased approach by Altendorf & Jeulin [22] by another force to increase the number of contacts analogous to Huisman et al. [37]. The paper is structured as follows. Section 2.1 reviews an analytical quantification of contact in softcore models; Section 2.2 recapitulates the Altendorf-Jeulin model [22]. In Section 3, we introduce our model extension, starting with algorithmic notions of contact and closeness. Runtime, feature accuracy, and the maximum contact densities achievable are evaluated in Section 4. In Section 5, we replicate models for a real-world data set, namely a wood fiber insulation mat considered by Andrä et. al. [1], and we close with a discussion and conclusion in Section 6. 2 State of the Art 2.1 Analytical Description for Fiber Contact Consider a stationary fiber system in a compact window W ⊂R3 with intensity (mean number of fibers per unit volume) λF . We assume that the fibre orientations follow a distribution with density ψ. We assume that the fibers have constant radius r and variable length land denote by ̄lthe expected value of the fiber length distribution. We denote by λc the mean number of inter-fiber contact points per unit volume, i.e., the intensity of inter-fiber contacts. The expected number of contacts per fiber is λcF = λc/λF . 2 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) Toll [38] provides the following estimate for the intersection intensity: λcF ≈ 4λF r ̄l ̄lfψ + πr(gψ + 1) , λc ≈4λ2 F r ̄l ̄lfψ + πr(gψ + 1) , (1a) fψ = Z S2 Z S2 |p × p′|ψ(p′)ψ(p) dp′ dp, gψ = Z S2 Z S2 |p · p′|ψ(p′)ψ(p) dp′ dp, (1b) where × and · denote the vector and scalar products in R3. We refer to [39] for formulas obtained in special cases corresponding to an isotropic or transversely-isotropic fiber-distribution and to [40] for strongly elongated fibers. In effect, the approximations in (1a) for λcF and λc can be rigorously reinterpreted as the number of intersections per unit volume and fiber, respectively, in a Poisson-Boolean random set of cylinders with mean length ̄land orientation distribution ψ (see Appendix D). In the following, the estimates (1a) will be compared with numerical simulations. 2.2 The Altendorf-Jeulin Model The fiber model by Altendorf & Jeulin [22] specifies a force-biased packing of fibers that are modeled as ball chains. Its starting configuration of fibers is obtained by simulating the fibers independently by a random walk in a closed cubic window W = [0, w1] × [0, w2] × [0, w3] with periodic boundary conditions. To preserve the periodic boundary conditions, we use the periodic equivalent dP of the Euclidean distance. For x, x′ ∈R3 and i ∈{1, 2, 3}, let ∆i(x, x′) = x′ i -xi -wi, for x′ i -xi ≥wi 2 x′ i -xi + wi, for x′ i -xi ≤-wi 2 x′ i -xi, otherwise (2) Then, dP (x, x′) = ∥∆(x, x′)∥2 is the periodic distance function and vP (x, x′) = ∆(x, x′) ∥∆(x, x′)∥2 (3) the corresponding direction of the vector between two points. After the random walk, the fibers may intersect but follow given distributions regarding model features such as fiber size and direction. The force-biased packing removes the intersections, while keeping the model features approximately intact. Note that this packing scheme can also be applied to fiber systems that were generated differently, e.g., with a different random walk [41]. In the following, we will recap the main features of the algorithm. 2.2.1 Formalizing the Fiber System In the initial fiber system, a single fiber F = {b1, ..., bl} is modeled as a chain of l overlapping balls bi = (xi, μi, r), i = 1, ..., l, of radius r and center xi ∈R3. In addition, we record the current direction μi ∈S2 of the random walk. More precisely, μi = xi -xi-1 is the direction between the last ball and the current ball for i > 1; for the start of the random walk in i = 1, μi indicates the main direction of the whole fiber. In this paper, the radius r of the balls and the chain length l are constant for the sake of simplicity, but they can be randomized (see also [22]). Note that the fiber length l will typically differ from the chain length l. The point x1 ∈R3 denotes the starting point of a fiber and can be sampled from a uniform distribution on W. The fiber's main direction μ1 ∈S2 is sampled from a Schladitz distribution [42] with anisotropy parameter β > 0. Its density w.r.t. polar coordinates (θ, φ) is given by pS(θ, φ|β) = 1 4π β sin θ (1 + (β2 -1) cos2 θ) 3 2 , θ ∈[0, π), φ ∈[0, 2π). (4) For β close to 0, this distribution generates directions that are mostly aligned with the z-direction; β = 1 corresponds to the uniform distribution on the unit sphere; β ≫1 results in girdle distributions with directions that are concentrated around the equator. The fiber bending is modeled based on a multivariate von Mises-Fisher (mvMF) distribution with parameters κ1, κ2 > 0. A univariate von Mises-Fisher distribution [43] has the density pM(x|μ, κ) = κ 4π sinh κ exp κμT x , x ∈S2 (5) with anisotropy parameter κ ≥0 and direction μ ∈S2. For higher κ, the fibers are more concentrated around the direction μ. The multivariate version mvMF(μ′, κ1, μ′′, κ2) of the von Mises-Fisher distribution uses κ = |κ1μ′+κ2μ′′| and μ = κ1μ′+κ2μ′′ κ . 3 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) Having defined the mvMF distribution, we can return to the definition of the random walk: The direction μi+1 is generated following the mvMF(μ1, κ1, μi, κ2)-distribution. Hence, κ1 should indicate the reliability of the random walk to the main fiber direction μ1 and κ2 the reliability to the last direction μi. This defines the new position xi+1 = xi + r 2μi+1. At the end of the random walk, the fiber is rotated such that its main fiber direction ̄μ := xl-x1 ∥xl-x1∥2 equals μ1 [22]. In the implementation, the fibers are prepacked using a modified version of RSA [3, 44]: Iteratively, the starting point of each fiber is randomly assigned for a given number of trials. The fiber is eventually placed at the position of minimal overlap compared to the other trials. Other than RSA, this version does not resample fibers; instead, it uses the fibers generated previously. In addition, overlap is not avoided completely, just reduced. This later improves runtime, as fewer intersections must be removed during force-biased packing. The total fiber system is specified by a graph S = (BS, FS), where the node set BS = {b1,1, b2,1, ..., bl,1, b1,2, ..., bl,n} (6) contains the balls of the n fibers, and the undirected edge set FS = {{bi,j, bi+1,j}|i ∈{1, ..., l -1}, j ∈1, ..., n} (7) contains the connections between the balls as specified by the random walk. As a consequence, the connected components of S are equivalent to the fibers. We indicate that balls b, c ∈BS belong to the same fiber by writing b ∼F c. 2.2.2 Force-biased Packing To remove the initial fiber intersections, the force-biased packing by Altendorf & Jeulin [22] applies repulsion forces to overlapping balls, see Fig. 1a. Afterwards, recovery forces restore each fiber's connectedness and features, see Fig. 1b. This process is repeated iteratively until the forces are small enough. (a) Sketch of repulsion forces acting on intersecting fibers. (b) Sketch of recovery forces acting on corrupted fibers. Figure 1: Sketches of forces in the Altendorf-Jeulin model. The repulsion force acts on the intersecting balls and moves them apart. An exception pose balls that are "close" within the same fiber as they are designed to overlap: Due to the distance of r 2 between subsequent positions in the ball chain, balls inevitably intersect if they are connected by a path consisting of ≤4 edges. To allow some room for fiber curvature, Altendorf & Jeulin suggested excluding ball pairs from the repulsion force that are connected by a path of length ≤5. For brevity, b ∼5 c implies that there exists a path of length ≤5 between b and c, whereas b ≁5 c denotes that such a path does not exist. In the hardcore case, the strength of the repulsion force Fr depends on the amount of intersection of the balls via I(b, c) = max (0, 2r -dP (xb, xc)) . (8) Let the softcore ratio τ ∈[0, 1] describe the amount of intersection allowed, so two ball centers must have at least the distance 2(1 -τ)r. Thus, τ = 0 models the hardcore case, while tau = 1 allows for arbitrary overlap. Then, I generalizes to Iτ(b, c) = max (0, 2(1 -τ)r -dP (xb, xc)) . (9) Its direction is v(b, c) = vP (xb, xc), yielding Fr(b, c) = 1b≁5c Iτ(b, c) 2 v(b, c). (10) 4 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) There are two recovery forces, which only act between neighboring balls within a single fiber: The spring force Fs keeps the ball spacing within the chains intact, whereas the angle force Fa preserves the fibers' bending. The spring force's strength between balls b, c ∈BS with {b, c} ∈FS depends on their distance, or, more precisely, on the deviation from their initial distance r 2: αd(b, c) = r 2 -|xb -xc|. (11) Due to the fibers' chainlike structure, the spring force easily causes ripple effects. In order to reduce them, it further incorporates a smoothing factor that depends on the function αd, namely fαs,αe(α) := 0, for α 0 if b ε∼c. The set of contact candidates w.r.t. the interaction distance ε is 6 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) then Eε. We deliberately include balls that are already in contact as contact candidates since these are supposed to stay in contact. However, applying the contact force to all contact candidates in Eε may be computationally expensive for a large ε. The forces might even be conflicting (see Fig. 5). Therefore, we reduce the contact candidates to a "shortlist" Sε, such that every ball belongs to at most one candidate pair per fiber pair. To achieve faster convergence, we keep the edge in the shortlist that has the minimal distance compared to incident edges. To formalize this, we define the set of incident edges Iε({b, c}) = {{d, e} ∈Eε|e ∼F c} ∪{{c, e} ∈Eε|e ∼F b}. (17) This set contains all edges in Eε that belong to the node b or the node c. If there exists an edge {b, c} ∈Eε, then Iε({b, c}) consists of all its incident edges. We define the shortlist as Sε = ( {b, c} ∈Eε|{b, c} = argmin {d,e}∈Iε({b,c}) ∥d -e∥ ) . (18) Figure 5: Sketch for the contact candidates (indicated by blue lines) and shortlist (indicated by red lines) for the interaction distance ε = 3 4r. 3.2 Contact Force We extend Altendorf & Jeulin's [22] fiber packing by one more force, namely the contact force Fc. It exists between two shortlisted contact candidates {b, c} ∈Sε. We aim to move the balls such that they are in perfect contact, i.e. |xb -xc| = 2r, (19) so the force depends on the difference between the ideal and actual distance. Further, there is no force necessary when the balls are already intersecting. With this, we define the effective distance dE(b, c) := max(0, dP (xb, xc) -2r). (20) Note that dE differs from I, which we introduced for the repulsion force in Section 2.2, by exchanging subtrahent and minuend, which makes them complementary: Whereas the repulsion force pushes intersecting balls apart until they are at most in contact, the contact force draws balls together until they are at least in contact. In analogy to the repulsion force, the contact force has the direction v(c, b). Hence, we propose the contact force Fc(b, c) = 1 2dE(b, c)v(c, b) (21) for {b, c} ∈Sε. In some applications, such as nanotube composites, fibers start to interact before being in physical contact [45]. In this case, it makes sense to consider dc-close balls as being in contact for a given contact distance dc > 0. Consequently, the balls b, c ∈BS, b ≁F c are contact candidates w.r.t. the interaction distance ε > 0 if |xb -xc| ≤2r + dc + ε. One can reason similarly when discretizing the fiber system, e.g. for voxelized images. There, depicting distances below the voxel size is usually impossible. For a contact distance dc > 0, we have additional leeway for balls to be considered in contact without intersection. It seems natural to use max(0, dP (xb, xc)-(2r +dc)) as the effective distance, thus only aiming for dc-closeness instead of perfect contact. However, this also lowers the force strength and may consequentially lead to slower convergence. 7 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) Therefore, we incorporate the smoothing factor f0,dc as introduced for the recovery force in Section 2.2. In total, we generalize the contact force to Fc(b, c) = f0,dc (dE(b, c)) 2 dE(b, c)v(c, b) (22) for (b, c) ∈Sε. 3.3 Synopsis of the Contact Packing Scheme In our contact packing scheme, we first run the regular force-biased packing by Altendorf & Jeulin [22]. This way, the fiber system is already (nearly) free of intersections and thus possesses lower inherent forces. If this system does not have the desired amount of inter-fiber contacts yet, we increase the contact by applying the packing incorporating the contact force. We explain this modified packing scheme in the following. A pseudocode is provided in Algorithm 1. In the first step of our contact-modified fiber packing, we identify contact candidates Eε for a given interaction distance ε > 0, which we then reduce to the shortlist Sε, such that every ball belongs to only one candidate pair per fiber pair, see Section 3.1. In this configuration, we iteratively apply the force Ftotal, c(b) = X c ∈BS c̸=b Fr(b, c) + ρR Fa(b) + X {b,c}∈FS Fs(b, c) + ρC X {b,c}∈Sε Fc(b, c), (23) to every ball b ∈FS until convergence. The stop criterion is specified in Section 3.4. Note that ρR, ρC ∈[0, 1] are factors modulating the composition of forces analogously to ρ in Section 2.2. Some shortlisted candidates (b, c) ∈Sε can cause conflicting forces in the fiber system even after fiber packing. This can, for example, happen in denser fiber systems when the contact forces in different contact components "pull" fibers in directions that conflict unresolvably with their repulsion and recovery forces. Note that a similar phenomenon has already been observed by Altendorf & Jeulin [22], where they compare it with the jammed state [11]. We remove such shortlisted candidates and apply the force again iteratively until termination. If this does not achieve the desired amount of contact, we can repeat the procedure with an increased interaction distance ε. Shortlisted candidates are classified as causing conflicting forces if for a = b or a = c the force length ∥Ftotal, c(b)∥exceeds a specified threshold t > 0. Algorithm 1 Contact Packing Scheme Require: dc ≥0, interaction distance ε generate a starting configuration of fibers run the Altendorf-Jeulin fiber packing with Ftotal calculate the contact candidates Eε calculate the shortlist Sε while !(stop criterion) do run fiber packing with Ftotal, mod end while if some candidates in Sε cause unresolvable forces then remove candidates from Sε while !(stop criterion) do run fiber packing with Ftotal, c end while end if 3.4 Implementational Details The original fiber packing by Altendorf & Jeulin [22] stops when the sum of total forces P b∈BS Ftotal(b) falls below a certain threshold. For the contact-modified fiber packing, however, we decided to stop the packing when the sum of total forces decreases by less than a specified relative amount. This has the advantage of being independent of parameters like the number of fibers or window size, as is the case in the implementation by Altendorf & Jeulin [22]. We chose a limit of 0.001% in the present paper. To ensure termination, we stop the packing when the number of iterations exceeds 1 000. As a reminder, one iteration of fiber packing classicly encompasses the calculation of forces on the fiber system and, subsequently, the corresponding repositioning of balls. In this paper, we choose t = 0.1 to remove shortlisted candidates 8 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) that cause unresolvable forces in the fiber system, as it provides a trade-off between accuracy of the fibers' curvature by removing unresolvable forces on the one hand and achieving high contact on the other hand. Alternatively, the interaction distance could be chosen higher to achieve the same amount of contact, but this would cause a higher runtime in return. For the realizations in this paper, we use ρ = ρR = ρC = 0.5, indicating equal strength for both recovery and contact forces. For the spring force, we use αs = 0.05 and αe = 0.1. Unless indicated otherwise, the softcore ratio is τ = 0 and the contact distance dc = 0. This conforms to the parameters used by Altendorf & Jeulin [22], thus allowing for direct comparison. To detect contact candidates and intersecting balls, we follow Altendorf & Jeulin [44] and use the particularly fast implementation by Mos ́ci ́nski et al. [46]. They divide the window into subwindows with a side length (sx, sy, sz). To find contact or intersection candidates faster, every subwindow is given a list of balls. Notably, a subwindow's list contains both the balls, whose center is in the subwindow, and the balls, whose center is in the subwindow's neighboring subwindows. This way, only balls in a subwindow's list need to be checked for contact candidacy or overlap. For a maximal interaction distance of εmax, we use a subwindow size of at least max(2.5r, (2 + εmax)r). We calculate the connected components of (BS, ̄Eε) using UnionFind [47]. 4 Experimental Validation In this section, we examine the performance of our packing scheme regarding the features of the fiber system when increasing contact. We implemented the scheme in C++ using and extending the library MAVIlib [48], which Fraunhofer ITWM develops and maintains. For further speed-up, we parallelized the calculation of forces using OpenMP. Experiments were run on the ITWM Beehive Cluster using an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz, 125 GiB RAM, and the GNU compiler GCC 11.5. 4.1 Experimental Setup As discussed in Section 2.1, the intensity of inter-fiber contacts depends on the fibers' length, radius, intensity, and direction distribution. Therefore, we vary these parameters in our experimental setup as well. For image sizes of 64 to 640 voxels per edge, we study • short, medium, and long fibers, namely fibers with an aspect ratio a := l 2r of a = 10, 30, 50; the radius stays constant at r = 2. • low, medium, and high volume fractions, namely VV = 10%, 20%, 30%. Note that fiber systems with even higher intensities will be packed quite densely already; in this case, increasing the contact density is less meaningful. • the direction distribution from aligned over the uniform distribution to girdle distributions, namely we use the Schladitz distribution with β = 0.1, 0.5, 1.0, 2.0, 3.0. As a packing parameter, we choose the interaction distances ε = 0.1, 0.2, ..., 0.5, 1.0. In this setup, we use a medium curvature of κ1 = 10, κ2 = 100, the softshell ratio τ = 0, and the contact distance dc = 0 as default parameters. To study the scheme's accuracy when varying these parameters, we further study the cases of low curvature, namely κ1, κ2 = 100, and high curvature, namely κ1, κ2 = 10, for medium aspect ratio and medium volume fraction. Note that varying these parameters for all parameter combinations discussed above would result in three times the runtime and computing resources, which is not merely tedious but also unsustainable given the necessary energy consumption. All used parameter combinations can be found in Table 2 in the Appendix. 4.2 Representative Volume Element and Runtime The representative volume element (RVE) of a microstructure is the volume V that is considered large enough to be statistically representative. Note that the RVE depends not only on the microstructure, but also on the investigated property Z. This means, for example, that the RVE for the volume fraction and the thermal conductivity can differ for the same microstructure. Moreover, it is not always computationally feasible to generate a microstructure that is large enough. Therefore, the concept is generalized to a combination of the size and number of realizations to be representative of the underlying microstructure. This way, it can be calculated, for example, how many small realizations are necessary when high realizations are computationally prohibitive. The following computations are based on the assumption of ergodicity [49]. 9 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) In the present work, we are interested in the properties that describe the phenomenon of forming contact and clotting. Therefore, we focus on the contact intensity λc, the expected number of contacts per fiber λcF , and the expected number of clots per fiber λclF = λcl λF , where λcl is the expected number of clots per unit volume. For algorithms whose runtime depends on the volume, the decision on the chosen size and the corresponding number, or vice versa, to achieve an RVE is a tradeoff depending on the runtime. In the following, we will refer to this as (RVE) size-number combination. To make the decision based on runtime considerations, we will not only carry out an RVE estimation following Kanit et al. [49], but we will also report the observed runtime to justify our chosen size-number combination. More precisely, we carry it out on the 'most average' case of our experimental setup (see Section 4.1), namely medium fiber aspect ratio a = 30, volume fraction VV = 20%, isotropic orientation (β = 1.0), and a curvature of κ1 = 10, κ2 = 100 under the interaction distance ε = 0.3. We generate 50 realizations each for the window edge lengths s = 64, 128, 256, 384, 512, 640. For each parameter combination, we estimate the expected number of contacts per fiber.2 λcF , and the expected number of clots per fiber λclF . For each property Z = λcF , λclF , we fit a model of the variance D2 Z of the volume V as D2 Z(V ) = KV -α, K, α > 0, (24) based on the mean value ̄Z of characteristic Z on the volume V ; see also Dirrenberger et al. [50]. The factor K 1 3 is a multiple of the generalized integral range, which can be interpreted as the scale of the phenomenon [49], which in our case is forming contact and clotting. For a relative precision φ for the estimation of Z, we can then estimate the required number of simulations when using volume V to get overall representative information, short RVE number as [50]: nRVE(Z, V ) = 4 φ ̄Z K V α . (25) The runtime of the contact packing is proportional to the volume, see Fig. 6. It has a runtime of roughly 10 min for size 384, which is acceptable when considering 'many' parameters in a simulation study, as we do in this section 4. Note that this runtime increases with the volume fraction, or rather, the number of fibers in a window of constant size. 0 10 20 30 40 2563 5123 6403 Volume Runtime [min] Figure 6: Plot for the runtime depending on the window size for medium fiber length and volume fraction, β = 1.0 and the curvature κ1 = 10, κ2 = 100. The scale of the window size is transformed cubically (corresponding to the volume). Bars indicate the standard deviation, the line in blue indicates a fitted model. For each characteristic Z and window size s, we fit a model of the variance D2 Z to the simulation results and report the RVE number in Table 1 using a relative precision of 1%. For the expected number of contacts per fiber λcF , we found K 1 3 cF ≃51.65 and αcF ≃-0.9954. For the the expected number of clots per fiber λclF , we found K 1 3 clF ≃14.91 and αclF ≃-1.0020. Notably, both K 1 3 cF and K 1 3 cl are smaller than the fiber length of 120. Plots of D2 Z w.r.t. the volume and the fitted model can be found in Fig. 15 in the Appendix. In the following, we will generate 4 realizations on a window size of 384 for each parameter combination: As Table 1 shows, only 4 realizations of window size 384 are necessary to achieve a relative precision of 1%. This yields the lowest 2The contact intensity λc is, in this case, only a scaled version of the expected number of contacts per fiber λcF ; therefore, we omit it here. 10 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) Table 1: The number of realizations necessary for a given size and characteristic for a relative precision of 1%. Size λcF λcl 64 420 797 128 46 87 256 6 11 384 2 4 512 1 2 640 1 1 runtime, see Fig. 6, in comparison to the other RVE size-number combinations, while having a larger window size than the longest fiber length l= 200. 4.3 Contact Intensity Achieved Depending on the Interaction Distance The contact packing does not contain the contact intensity as a direct input parameter; instead, it contains the interaction distance, which does not directly translate to the contact intensity. Therefore, we investigate the relationship between the interaction distance and contact intensity in this subsection. Results are given in Fig. 7. short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.000 0.005 0.010 0.03 0.06 0.09 0.12 0.2 0.3 β λc ε 0.1 0.2 0.3 0.4 0.5 1 Figure 7: Plot of the contact intensity λc w.r.t. the parameter of the orientation distribution for varying interaction distance ε (in colour) for the experimental setup described in Section 4.1. Note that κ1 = 10, κ2 = 100. The contact intensity is not quite linear w.r.t. the interaction distance, but Fig. 7 provides an indication of what to expect for similar setups. For each parameter combination, the contact intensity is relatively constant for β ≥1.0 but generally smaller for β < 1.0. A likely reason for this is that for isotropic and girdle distributions, i.e., β ≥1.0, there are more balls of different fibers in the neighborhood of each ball. For more aligned distributions, i.e., β < 1.0, the neighboring balls are more likely to belong to the same fiber, which yields larger contact areas instead. The same trend is observed for the Toll estimate, which increase with β as -β log β when β is small (see Eq. 38, Appendix D). Validating this conjecture will be part of future work. Notably, this effect reverses for low volume fractions, short fibers, and high 11 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) interaction distances. Since the fibers are smaller, the likelihood that balls belong to different fibers increases again. This translates to more fibers clustering, as is shown in Fig. 8. short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.0 0.1 0.2 0.0 0.5 1.0 1.5 0.6 0.9 1.2 1.5 β λclF ε 0.1 0.2 0.3 0.4 0.5 1 Figure 8: Plot of the clot density λcl w.r.t. the parameter of the orientation distribution for varying interaction distance ε (in colour) for the experimental setup described in Section 4.1. Note that κ1 = 10, κ2 = 100. For long fibers and low volume fraction, the chosen interaction distance barely achieves an increase in contact intensity. This is most likely due to the distance between single fibers. This also conforms with the high parameter accuracy in this case, see Section 4.5. Similarly, curvy fibers result in slightly lower contact and clot densities than their straighter counterparts, see Fig. 16 and 17. 4.4 Achieved Contact Intensity Compared to Toll's Formula In order to compare the contact intensity between both packing algorithms and Toll's formula [39], see Section 2.1, we study the realizations for each parameter combination proposed in the section above with the interaction distance ε = 1.0. We estimate their contact intensity ˆλc, see Section 2.1, both after the Altendorf-Jeulin packing and the contact packing. Fig. 9 shows the results for the intensities. The already low contact intensities for the low volume fraction mostly stay below Toll's intersection intensity. This may be improved by omitting the prepacking in the Altendorf-Jeulin model. Yet for most other cases, the contact intensity can be raised above Toll's intensity. For high volume fraction and at least medium aspect ratio, the contact intensity even exceeds Toll's intensity after the Altendorf-Jeulin packing. This effect is not surprising considering that hardcore packing excludes a considerable volume for fibers, whereas the Boolean model does not have this constraint. The probability distribution for the number of contacts per fiber is represented in Fig. 10 for the medium aspect ratio a = 30, a volume fraction VV = 20%, an isotropic orientation distribution, and an interaction distance of ε = 1.0. Results obtained for the Altendorf-Jeulin and our contact models are compared with the distribution obtained for the Toll model (1a). This distribution is obtained in a similar way as the mean number of contact points and follows a Poisson law in the case of an isotropic distribution of orientation (see Appendix D for details). Furthermore, we fit the numerical results obtained for the Altendorf-Jeulin and contact models with Poisson laws. 12 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) The Toll model gives a Poisson distribution with a mean of 6.6 contacts per fiber. In the AJ model, it has a mean of roughly 5 contact points and is very close to a Poisson distribution. In the contact model, the mean is about 8.2, which is much higher. Furthermore, the contact distribution is more narrow than in a Poisson distribution. Note, however, that the Toll model gives relevant estimates for intersecting fibers. The absence of interpenetations in the contact model is a considerable constraint that induce in particular repulsion effects and anti-correlations not taken into account in the Toll model. short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.000 0.005 0.010 0.015 0.03 0.06 0.09 0.12 0.1 0.2 0.3 β λc AJ Contact Toll Figure 9: Plot of the contact intensity of the Altendorf-Jeulin model, the contact model, and the analytically determined formula by Toll. Note that κ1 = 10, κ2 = 100. 13 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) 0.00 0.05 0.10 0.15 0.20 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 #Contacts Frequency AJ Contact Toll Figure 10: Plot of the distribution of the number of contacts of individual fibers for the Altendorf-Jeulin model and the contact model (colored bars). Solid lines represent Poisson distributions with the same mean as that of the two models (left and right) as well as that predicted by the Toll model (in-between). Note that κ1 = 10, κ2 = 100. 4.5 Accuracy Results In order to evaluate the accuracy of our packing scheme, we estimate the direction parameter ˆβ and the curvature parameters ˆκ1 and ˆκ2 as proposed and implemented by Altendorf [22]. We do this both after fiber generation but before packing, after packing with the Altendorf-Jeulin method, and after contact packing with interaction distance ε = 0.1. We report their relative deviation from the original input parameters as an indicator of accuracy. Note that 0 indicates high accuracy due to no deviation, and 1 indicates low accuracy due to high deviation. Being particularly interested in any possible inaccuracy introduced by the contact packing steps, we also compare the deviation between the Altendorf-Jeulin packing and the contact packing. All methods (fiber generation, Altendorf-Jeulin packing, contact packing) show acceptable accuracy for the fiber direction parameter β for values of β ≥0.5, see Fig. 11: The relative deviation from the input parameter is below 0.1. For smaller values, however, the relative deviation rises up to 0.6, even more so for shorter fibers. The estimated value of the curvature parameters κ1 and κ2, however, shows rather high inaccuracy: Whereas κ2 has a relative deviation from the input parameter below just 0.2, the relative deviation can even rise up to even 0.75 for the parameter κ1, see Fig. 12. This is especially pronounced for shorter fibers. Note that it is already alluded to in previous works that the curvature parameters are qualitatively meaningful, but their quantitative estimation is "not highly accurate" [44]. Fig. 13 shows the relative deviation of the contact packing to the Altendorf-Jeulin packing. Whereas the direction parameter β shows only very small deviation, i.e., it stays highly consistent, one can observe higher deviation for the local curvature κ2. This is an intuitive result considering that the contact packing creates contact by attracting balls of fibers that are close to each other, which likely increases local curvature (corresponding to lower values). Nevertheless, the deviation of the curvature should be taken with a grain of salt, given the inaccuracy of their estimation observed above. 14 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.0 0.2 0.4 0.6 0.0 0.2 0.4 0.6 0.0 0.2 0.4 0.6 β Relative deviation of β^ AJ Contact Generation Figure 11: Plots of the deviation from the direction parameter β relative to the input value. The estimated values of β for the contact packing and the Altendorf-Jeulin packing correspond so closely to the values upon fiber generation that the plots overlay each other. Bars indicate standard deviations, but are often not visible because they are so low. Note that low deviation around 0 indicates high accuracy, whereas high values indicate low accuracy. Note that κ1 = 10, κ2 = 100. 15 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.00 0.25 0.50 0.75 0.00 0.25 0.50 0.75 0.00 0.25 0.50 0.75 β Relative deviation of κ^1 AJ Contact Generation short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 0.0 0.1 0.2 0.0 0.1 0.2 0.0 0.1 0.2 β Relative deviation of κ^2 AJ Contact Generation Figure 12: Plots of the deviation of the curvature parameters κ1 = 10 and κ2 = 100 relative to the input values after fiber generation, Altendorf-Jeulin packing, and contact packing each. Bars indicate standard deviations, but are often not visible because they are so low. Note that low values for the absolute deviation around 0 indicate high accuracy, whereas high values indicate low accuracy. 16 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) short medium long VV = 10% VV = 20% VV = 30% 0 1 2 3 0 1 2 3 0 1 2 3 -0.04 -0.02 0.00 0.02 -0.04 -0.02 0.00 0.02 -0.04 -0.02 0.00 0.02 β Relative Deviation β^ κ^1 κ^2 Figure 13: Deviation of the estimated parameter values of the contact model relative to the Altendorf-Jeulin model. Bars indicate standard deviations, but are often not visible because they are so low. Note that low values for the absolute deviation around 0 indicate high accuracy, whereas high values indicate low accuracy. In this setup, the curvature parameters are κ1 = 10, κ2 = 100. 17 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) 5 Application In the present paper, we study the contact intensity of fiber models due to their role in materials' physical properties. So far, we have solely studied the models' behavior depending on input parameters. In this section, we study its behavior when applied to a real-world problem, namely, modeling thermal conductivity in wood fiber insulation mats. We will replicate the models generated by Andrä et al. [1]. However, we will use the Altendorf-Jeulin and the contact model instead of a Boolean model. Note that simulating the thermal conductivity is a part of future work. We replicate Andrä et al.'s base model M1 on a voxel spacing of 25 μm with the following parameter combinations: The mean fiber radius is r = 50 μm, the fiber length l= 1 mm, the orientation parameter β = 3.0, indicating a fiber distribution closely aligned with the x-y-plane, and the curvature parameters κ1, κ2 = 100. We generate 6 480 fibers to generate a volume fraction of roughly 6 %. This accounts for the lumen, the tunnel in wood fibers, modeled by Andrä et al. [1]. Note that we omit the fiber chunks here, thus yielding slightly changed volume fractions. As proposed by Andrä et al. [1], we vary the radius as r = 35 μm, 45 μm, 55 μm, 65 μm, the length as l= 0.5 mm, 2 mm, 3 mm, the orientation parameter as β = 3.5, 4.0, each such that we keep all other parameters constant, and the number of fibers as 2 160, 4 320, 8 640, 10 800 to yield volume fractions of roughly VV = 2%, 4%, 8%, 10%. For each combination, we plot the contact intensity for the Altendorf-Jeulin model, the contact model for interaction distance ε = 1.0, and Toll's intersection intensity [38, 39] for the Boolean model. We observe from Fig. 14 that increasing the fiber radius reduces the contact density, whereas increasing the fiber length increases the density. This is not surprising as Toll's formula is directly dependent on the aspect ratio of fibers and the volume fraction. Interestingly, though, the contact model's density seems to be a fraction of Toll's density, except for the case of high fiber length. Here, the density of the contact model even rises higher than Toll's density. 0.00 0.25 0.50 0.75 1.00 35 μm 45 μm 50 μm 55 μm 65 μm Fiber Radius λcF Packing AJ Model Contact Model Toll (a) Contact density for varying radius. 0 1 2 3 0.5 mm 1 mm 2 mm 4 mm Fiber Length λcF Packing AJ Model Contact Model Toll (b) Contact density for varying length. 0.0 0.2 0.4 0.6 3.0 3.5 4.0 β λcF Packing AJ Model Contact Model Toll (c) Contact density for varying orientation parameter β. 0.00 0.25 0.50 0.75 1.00 1.25 2% 4% 6% 8% 10% VV λcF Packing AJ Model Contact Model Toll (d) Contact density for varying volume fraction. Figure 14: Contact densities when replicating the models proposed by Andrä et al. [1]. Note that κ1, κ2 = 100. 18 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) The results indicate that modelling the fiber system with a Boolean model or a hardcore model may result in strongly differing conductivities due to the difference in contact densities. Moreover, the impact of the number of contacts in comparison to other parameters can be studied by systematic variation, which is planned for future work. 6 Discussion & Conclusion Despite the relevance of inter-fiber contacts when simulating physical properties of fiber systems, there exists little research on parametrically modelling fiber systems including inter-fiber contact. Therefore, we extend the force-biased model by Altendorf & Jeulin [22] to a model where the contact between fibers can be explicitly increased. For this, we have developed algorithmic notions of contact and closeness. To increase the number of contacts, we have added another force to the model by Altendorf & Jeulin that creates contact between fibers that are close enough. We have shown that our model indeed increases the number of contacts in comparison to the Altendorf-Jeulin model while maintaining similar accuracy to other parameters of the fiber system. Moreover, we compared the number of contacts in the algorithmic hardcore models with the intersection intensity of the Boolean model, finding that the contact intensity of the contact model exceeds the intersection intensity of the Boolean model for volume fractions above 20 %. When simulating physical properties on fiber systems realized from the contact model, this allows for higher accuracy regarding the intensity of inter-fiber contacts. Moreover, the contact intensity can be systematically varied, thus allowing for a deeper understanding of the influence of contact on the physical property in comparison to other parameters. Such a study for thermal conductivity in wood fiber mats is currently underway. So far, the contact intensity can be reliably influenced by the interaction distance. To allow for an explicit parameter in the fiber packing algorithm for the contact intensity, further research is needed. Studying the accuracy of the curvature was severely limited since, to our knowledge, no accurate estimators for the parameters κ1 and κ2 exist [22]. In some cases, it may suffice to use only one curvature parameter, thus allowing for higher accuracy. In the next stage, we will extend the fiber model to contain fiber bundles and potentially different shapes such as square cross sections. Moreover, studying and varying the contact surface in addition to the number (intensity) of contacts is worthwhile to gain a deeper understanding of the influence of contact on physical properties. Acknowledgements We thank Markus Kronenberger and Michael Godehardt, Fraunhofer Institute for Industrial Mathematics (ITWM), for their support with the MAVIlib library and access to the ITWM Beehive Cluster. This work was supported by the German Federal Ministry of Education and Research under Grant Agreement No: 05M22UKA and the French-German doctoral college "Mathematische Bildverarbeitung". Code and Data Availability The code for the contact model can be provided upon reasonable request. The result data and corresponding scripts for evaluation are hosted on Author Contributions Alex Keilmann: Data curation, Formal analysis, Methodology, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. Claudia Redenbach: Conceptualization, Writing - review & editing . François Willot: Conceptualization, Writing - review & editing. References [1] H. Andrä, D. Dobrovolskij, M. Engelhardt, M. Godehardt, M. Makas, C. Mercier, S. Rief, K. Schladitz, S. Staub, K. Trawka, and S. Treml. Image-based microstructural simulation of thermal conductivity for highly porous wood fiber insulation boards: 3D imaging, microstructure modeling, and numerical simulations for insight into structure-property relation. Wood Science and Technology, 57(1):5-31, 2023. [2] L. Berhan and A.M. Sastry. Modeling percolation in high-aspect-ratio fiber systems. i. soft-core versus hard-core models. Phys. Rev. E, 75:041120, 2007. 19 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) [3] Y. Pan, L. Iorga, and A.A. Pelegri. Analysis of 3D random chopped fiber reinforced composites using FEM and random sequential adsorption. Computational Materials Science, 43(3):450-461, 2008. 2007.12.016. [4] C. Redenbach and I. Vecchio. Statistical analysis and stochastic modelling of fibre composites. Composites Science and Technology, 71(2):107-112, 2011. [5] W. Tian, L. Qi, J. Zhou, J. Liang, and Y. Ma. Representative volume element for composites reinforced by spatially randomly distributed discontinuous fibers and its applications. Composite Structures, 131:366-373, 2015. [6] N. Provatas, M. Haataja, J. Asikainen, S. Majaniemi, M. Alava, and T. Ala-Nissila. Fiber deposition models in two and three spatial dimensions. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 165(1-3): 209-229, 2000. [7] K.J. Niskanen and M.J. Alava. Planar Random Networks with Flexible Fibers. Physical Review Letters, 73(25): 3475-3478, 1994. [8] D. Venkateshan, M. Tahir, H. Vahedi Tafreshi, and B. Pourdeyhimi. Modeling effects of fiber rigidity on thickness and porosity of virtual electrospun mats. Materials & Design, 96:27-35, 2016. [9] A. Moghadam, S. Yousefi, H.V. Tafreshi, and B. Pourdeyhimi. Characterizing nonwoven materials via realistic microstructural modeling. Separation and Purification Technology, 211:602-609, 2019. 2018.10.018. [10] S.R. Williams and A.P. Philipse. Random packings of spheres and spherocylinders simulated by mechanical contraction. Physical Review E, 67(5):051301, 2003. [11] N.C. Karayiannis and M. Laso. Dense and Nearly Jammed Random Packings of Freely Jointed Chains of Tangent Hard Spheres. Physical Review Letters, 100(5):050602, 2008. [12] M. Schneider. The sequential addition and migration method to generate representative volume elements for the homogenization of short fiber reinforced plastics. Computational Mechanics, 59(2):247-263, 2017. [13] M. Schneider. An algorithm for generating microstructures of fiber-reinforced composites with long fibers. International Journal for Numerical Methods in Engineering, 123(24):6197-6219, 2022. [14] A. Mehta and M. Schneider. A sequential addition and migration method for generating microstructures of short fibers with prescribed length distribution. Computational Mechanics, 70(4):829-851, 2022. s00466-022-02201-x. [15] C. Lauff, M. Schneider, J. Montesano, and T. Böhlke. Generating microstructures of long fiber reinforced composites by the fused sequential addition and migration method. International Journal for Numerical Methods in Engineering, page e7573, 2024. [16] C. Lauff, M. Schneider, and T. Böhlke. Microstructure generation of long fiber reinforced hybrid composites using the fused sequential addition and migration method. Journal of Thermoplastic Composite Materials, page 08927057251314425, 2025. [17] C. Lauff, M. Krause, M. Schneider, and T. Böhlke. On the Influence of the Fiber Curvature on the Stiffness of Long Fiber Reinforced Composites. International Journal for Numerical Methods in Engineering, 126(15): e70094, 2025. [18] A. Klar. Interacting fiber structures: mathematical aspects and applications. Rivista di Matematica della Università di Parma, 10(2):199-268, 2019. [19] R. Borsche, A. Klar, C. Nessler, A. Roth, and O. Tse. A Retarded Mean-Field Approach for Interacting Fiber Structures. Multiscale Modeling & Simulation, 15(3):1130-1154, 2017. [20] V. Salnikov, D. Choï, and P. Karamian-Surville. On efficient and reliable stochastic generation of RVEs for analysis of composites within the framework of homogenization. Computational Mechanics, 55(1):127-144, 2015. [21] A. Bezrukov and D. Stoyan. Simulation and Statistical Analysis of Random Packings of Ellipsoids. Particle & Particle Systems Characterization, 23(5):388-398, 2006. [22] H. Altendorf and D. Jeulin. Random-walk-based stochastic modeling of three-dimensional fiber systems. Physical Review E, 83(4):041804, 2011. [23] P. Easwaran. Stochastic Geometry Models for Interacting Fibers. PhD thesis, TU Kaiserslautern, 2017. 20 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) [24] L. Chapelle, M. Lévesque, P. Brøndsted, M.R. Foldschack, and Y. Kusano. GENERATION OF NONOVERLAPPING FIBER ARCHITECTURE. Proceedings of the 20th International Conference on Composite Materials, 2015. [25] A. Kumar, A. DasGupta, and A. Jain. Microstructure generation algorithm and micromechanics of curved fiber composites with random waviness. International Journal of Solids and Structures, 289:112625, 2024. [26] C. Gaunand, Y. De Wilde, A. François, V. Grigorova-Moutiers, and K. Joulain. Modeling conductive thermal transport in three-dimensional fibrous media with fiber-to-fiber contacts. Phys. Rev. Appl., 23:034084, 2025. [27] V.A. Yastrebov. Numerical methods in contact mechanics. John Wiley & Sons, 2013. [28] R. Picu. Mechanics of random fiber networks-a review. Soft Matter, 7(15):6768-6785, 2011. [29] J. Xie, Z. Guo, M. Shao, W. Zhu, W. Jiao, Z. Yang, and L. Chen. Mechanics of textiles used as composite preforms: A review. Composite structures, 304:116401, 2023. [30] D. Poquillon, B. Viguier, and E. Andrieu. Experimental data about mechanical behaviour during compression tests for various matted fibres. Journal of materials science, 40:5963-5970, 2005. [31] C. Van Wyk. 20-note on the compressibility of wool. Journal of the Textile Institute Transactions, 37(12): T285-T292, 1946. [32] D. Durville. Numerical simulation of entangled materials mechanical properties. Journal of materials science, 40: 5941-5948, 2005. [33] M. Faessel, C. Delisée, F. Bos, and P. Castéra. 3D Modelling of random cellulosic fibrous networks based on X-ray tomography and image analysis. Composites Science and Technology, 65(13):1931-1940, 2005. [34] A. Karakoç, E. Hiltunen, and J. Paltakari. Geometrical and spatial effects on fiber network connectivity. Composite Structures, 168:335-344, 2017. [35] S. Deogekar, Z. Yan, and R.C. Picu. Random Fiber Networks With Superior Properties Through Network Topology Control. Journal of Applied Mechanics, 86(8):081010, 2019. [36] E.M. Huisman, C. Storm, and G.T. Barkema. Monte Carlo study of multiply crosslinked semiflexible polymer networks. Physical Review E, 78(5):051801, 2008. [37] E.M. Huisman, T. Van Dillen, P.R. Onck, and E. Van Der Giessen. Three-Dimensional Cross-Linked F-Actin Networks: Relation between Network Architecture and Mechanical Behavior. Physical Review Letters, 99(20): 208103, 2007. [38] S. Toll. Note: On the tube model for fiber suspensions. Journal of Rheology, 37(1):123-125, 1993. [39] S. Toll. Packing mechanics of fiber reinforcements. Polymer Engineering & Science, 38(8):1337-1350, 1998. [40] T. Komori and K. Makishima. Numbers of Fiber-to-Fiber Contacts in General Fiber Assemblies. Textile Research Journal, 47(1):13-17, 1977. [41] G. Gaiselmann, D. Froning, C. Tötzke, C. Quick, I. Manke, W. Lehnert, and V. Schmidt. Stochastic 3D modeling of non-woven materials with wet-proofing agent. International Journal of Hydrogen Energy, 38(20):8448-8460, 2013. [42] K. Schladitz, S. Peters, D. Reinel-Bitzer, A. Wiegmann, and J. Ohser. Design of acoustic trim based on geometric modeling and flow simulation for non-woven. Computational Materials Science, 38(1):56-66, 2006. [43] N.I. Fisher, T. Lewis, and B.J.J. Embleton. Statistical Analysis of Spherical Data. Cambridge University Press, Cambridge [Cambridgeshire] ; New York, 1987. ISBN 978-0-521-24273-8. [44] H. Altendorf. 3D Morphological Analysis and Modeling of Random Fiber Networks. PhD thesis, TU Kaiserslautern, Kaiserslautern, Germany, 2011. [45] J.M. Benoit, B. Corraze, and O. Chauvet. Localization, coulomb interactions, and electrical heating in single-wall carbon nanotubes/polymer composites. Physical Review B, 65(24), 2002. [46] J. Mo ́sci ́nski, M. Bargieł, Z.A. Rycerz, and P.W.M. Jacobs. The Force-Biased Algorithm for the Irregular Close Packing of Equal Hard Spheres. Molecular Simulation, 3(4):201-212, 1989. 21 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) [47] R. Sedgewick and K. Wayne. Algorithms. Addison-Wesley Professional, 2011. [48] Fraunhofer ITWM, . MAVI - modular algorithms for volume images, v. 2024. http://www.mavi-3d.de, 2024. [49] T. Kanit, S. Forest, I. Galliet, V. Mounoury, and D. Jeulin. Determination of the size of the representative volume element for random composites: Statistical and numerical approach. International Journal of Solids and Structures, 40(13-14):3647-3679, 2003. [50] J. Dirrenberger, S. Forest, and D. Jeulin. Towards gigantic RVE sizes for 3D stochastic fibrous networks. International Journal of Solids and Structures, 51(2):359-376, 2014. [51] J. Serra. The boolean model and random sets. In Image modeling, pages 343-370. Elsevier, 1981. 22 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) A Parameter combinations in Section 4 In Section 4.1, we explained the experimental setup for validating the contact model rather illustratively, omitting exact parameters. Here, in Table 2, we provide detailed input parameters for the Altendorf-Jeulin and the contact model. Table 2: Overview of parameter combinations used in the experiments in Section 4. All parameters were combined with β = 0.1, 0.5, 1.0, 2.0, 3.0 and ε = 0.1, 0.2, 0.3, 0.4, 0.5, 1.0. Note that the number of fibers depends on the image size s. Aspect ratio Radius Length VV #fibers κ1 κ2 τ dc 10 2 40 ∼10% 400s3 10 100 0.0 0 10 2 40 ∼20% 800s3 10 100 0.0 0 10 2 40 ∼30% 1 200s3 10 100 0.0 0 30 2 120 ∼10% 133s3 10 100 0.0 0 30 2 120 ∼20% 266s3 10 100 0.0 0 30 2 120 ∼30% 400s3 10 100 0.0 0 50 2 200 ∼10% 80s3 10 100 0.0 0 50 2 200 ∼20% 160s3 10 100 0.0 0 50 2 200 ∼30% 240s3 10 100 0.0 0 30 2 120 ∼20% 266s3 10 10 0.0 0 30 2 120 ∼20% 266s3 100 100 0.0 0 B Fitted Model for the Representative Volume Element Fig. 15 shows the variance plots w.r.t. the volume window, which is used in Section 4.2, together with their fitted models. These are used for the RVE size-number estimation. Note that we only present the plot for Z = λcF , λclF as λc is a multiple of λcF in this case due to a constant λF . Note that the fitted models correspond closely to the observed variances. 0.001 0.010 0.100 1.000 1e+05 1e+06 1e+07 1e+08 Window Volume Var(λcF) 1e-05 1e-04 1e-03 1e-02 1e+05 1e+06 1e+07 1e+08 Window Volume Var(λclF) Figure 15: Plots of the variance of properties w.r.t. to the window volume. The line in blue indicates the fitted model. C Validation Plots of Section 4 In Section 4, the simulation results were shown for the main setup of moderate fiber curvature κ1 = 10, κ2 = 100. Fig. 16 shows the impact of contact distance ε on both the contact intensity λc and the expected number of clots per fiber λclF for straight fibers, so curvature parameters κ1, κ2 = 100. Fig. 17 again, shows these results for curvy fibers, meaning curvature parameters κ1, κ2 = 10. Their behavior is similar to fibers with moderate curvature κ1 = 10, κ2 = 100, yet differ in the specific values and observed variance of λc and λclF . Fig. 18 presents relative deviation of estimated parameters for rather straight fibers (κ1, κ2 = 100) and more curvy fibers (κ1, κ2 = 10) relative to the corresponding Altendorf-Jeulin model. Note that the accuracy is higher for curvy fibers. 23 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) 0.050 0.075 0.100 0 1 2 3 β λc ε 0.1 0.2 0.3 0.4 0.5 1 0.6 0.8 1.0 0 1 2 3 β λclF ε 0.1 0.2 0.3 0.4 0.5 1 Figure 16: Plot of the contact intensity λc and he expected number of clots per fiber λclF w.r.t. the parameter of the orientation distribution for varying interaction distance ε (in colour) for straight fibers with κ1, κ2 = 100, see Section 4.3. 0.04 0.06 0.08 0.10 0 1 2 3 β λc ε 0.1 0.2 0.3 0.4 0.5 1 0.4 0.6 0.8 1.0 0 1 2 3 β λclF ε 0.1 0.2 0.3 0.4 0.5 1 Figure 17: Plot of the contact intensity λc and he expected number of clots per fiber λclF w.r.t. the parameter of the orientation distribution for varying interaction distance ε (in colour) for curvy fibers with κ1, κ2 = 10, see Section 4.3. -0.03 -0.02 -0.01 0.00 0.01 0 1 2 3 β Relative Deviation β^ κ^1 κ^2 (a) Relative deviations of straight fibers with κ1, κ2 = 100, see Section 4.1. -0.012 -0.008 -0.004 0.000 0 1 2 3 β Relative Deviation β^ κ^1 κ^2 (b) Relative deviations of curvy fibers, see Section 4.1. Figure 18: Deviation of the estimated parameter values of the contact model relative to the Altendorf-Jeulin model for the special cases of straight (κ1, κ2 = 100) and curvy (κ1, κ2 = 10) fibers. Bars indicate the standard deviation. Note that small values for the absolute deviation around 0 indicate high accuracy, whereas high values indicate low accuracy. 24 INCREASING INTER-FIBER CONTACT IN THE ALTENDORF-JEULIN MODEL (PREPRINT) D Distribution of the number of fiber-fiber intersections Hereafter, we derive the distribution for the number of intersections of a test fiber with other fibers in a Boolean random set [51]. The Boolean random set is made of cylinders of radius r, length l, volume V = πr2land density λf. The fibers are randomly oriented and follow the orientation distribution ψ(p) where p is a unit vector and: Z |p|=1 ψ(p)dp = 1. (26) Let us consider a cylinder oriented along direction p and the centerline of an arbitrary cylinder oriented along direction p′. We follow [38] and note that the centerline will cut the test cylinder if and only if its center is inside a domain of volume: Al+ V, (27) with A the projected area of the test cylinder along a plane perpendicular to p′: A = 2rl|p × p′| + πr2|p · p′|. (28) The mean number of intersections between the test cylinder and the centerline of a cylinder oriented along direction p therefore reads: δ(p′) (Al+ V ) , (29) where δ(p′) = λfψ(p′)dp′ is the density of cylinders oriented along direction p′. Denote λ′ i the number of intersects of a test cylinder oriented along direction p with a cylinder oriented in direction p′. Its mean value reads (see [39]): λ′ i = 4δ(p′) (A′l+ V ) , A′ = rl|p × p′| + πr2|p · p′|, (30) and the variable λ′ i follows the Poisson distribution: P{λ′ i = k} = e-λ′ i λ′ i k k! , k ∈N. (31) The total number of intersections λi between the test fiber and a fiber in any direction is the sum of the independent Poisson variables λ′ i and is therefore also Poisson-distributed. Accordingly: P{λi = k} = e-λi λi k k! , λi = Z |p′|=1 4ψ(p′)λf(A′l+ V )dp′. (32) In the above, λi and P{λi = k} depend, in general, on the direction p of the test cylinder. Finally, the distribution of the number of intersections eλi of an arbitrarily-oriented cylinder with another cylinder is given by averaging (32) over all p: P{ eλi = k} = Z p dp ψ(p)e-λi λi k k! , k ∈N. (33) For a set of cylinders with the same length, Eq. (1a) can be deduced from the above, by taking the expectation of the probability law. When fibers are isotropically-distributed (β = 1) or aligned (β = 0), Eq. (33) reduces to the same expression as in (33) and so eλi is also Poisson-distributed. Let us now focus on the particular case where ψ(p) takes the form (4) and consider the limit r →∞of infinitelyelongated cylinders. The integral for fψ in (1a) reads, in polar coordinates: fψ = Z [0;π]2×[0;2π]2 dθ dθ′ d φ dφ′ β2 sin θ sin θ′p 1 -[sin θ sin θ′ cos(φ -φ′) + cos θ cos θ′]2 4π2 [(β2 -1) cos2 θ + 1]3/2 [(β2 -1) cos2(θ′) + 1]3/2 . (34) We denote u = φ -φ′ and expand the integrand as θ, θ′ →0: fψ ≈ Z 2π u=0 Z π/2 θ=0 Z π/2 θ′=0 du dθ dθ′ uβ2θθ′√ θ2 + θ′2 -2θθ′ cos u 2π2(β2 + θ2)3/2(β2 + θ′2)3/2 , (35) which provides, after a change of variable w = θ/β, w′ = θ′/β ∂(fψ/β) ∂wmax ≈ Z 2π u=0 du Z wmax w=1 dwu p w2 -2wwmax cos u + w2max π2w2w2max ≈ Z 2π u=0 du Z wmax w=1 dw u π2w2wmax (36) with wmax = π/(2β). Finally, we obtain: fψ = -2β log β + O(β), β →0, (37) whereas gψ →1 (see [39]). Therefore, in the limit of quasi-aligned fiebrs of infinite length, we obtain: λcF = 8λF rl(πr -lβ log β). (38) 25
|
2509.16222
|
An Open Dataset for Temperature Modelling in Machine
Tools
C. Coelho1,+,*, D. Fern´andez2,+, M. Hohmann1,+, L. Penter2, S. Ihlenfeldt2, and O.
Niggemann1
1Institute for Artificial Intelligence, Helmut Schmidt University, 22043 Hamburg,
Germany
2Chair of Machine Tools Development and Adaptive Controls, Dresden University
of Technology TUD, 01069 Dresden, Germany
*Corresponding author: cecilia.coelho@hsu-hh.de
+Shared co-first authorship.
Abstract
This data set descriptor introduces a structured, high-resolution dataset of transient ther-
mal simulations for a vertical axis of a machine tool test rig. The data set includes temperature
and heat flux values recorded at 29 probe locations at 1800 time steps, sampled every second
over a 30-minute range, across 17 simulation runs derived from a fractional factorial design.
First, a computer-aided design model was de-featured, segmented, and optimized, followed by
finite element (FE) modelling. Detailed information on material, mesh, and boundary condi-
tions is included. To support research and model development, the dataset provides summary
statistics, thermal evolution plots, correlation matrix analyses, and a reproducible Jupyter
notebook. The data set is designed to support machine learning and deep learning applica-
tions in thermal modelling for prediction, correction, and compensation of thermally induced
deviations in mechanical systems, and aims to support researchers without FE expertise by
providing ready-to-use simulation data.
1
Background & Summary
Thermal effects are widely recognized as the predominant source of machining inaccuracies, with
thermally induced deformations contributing up to 70% of the total geometric errors in machine
tools [14]. Thus, accurately predicting temperature fields over time is crucial to achieving preci-
sion, reliability, and reduced production costs. Traditional approaches relied heavily on manual
compensation and simplistic regulation of internal heat sources. Despite significant advancements
in thermal compensation and temperature control strategies [10, 3, 5], achieving consistent, high-
precision machining remains challenging. The complex thermo-mechanical behaviour of machine
structures continues to compromise positional accuracy and process stability [14, 10]. Thermally
induced displacements at the tool centre point (TCP) stem from a combination of internal heat
sources, such as spindles, drive motors, and guideways, and external thermal disturbances, includ-
ing ambient temperature fluctuations, acting along the thermo-elastic functional chain [3, 5].
Current existing thermal modelling and thermal error correction models can be classified into
four categories: statistical correlation models [11, 9]; transfer function-based models [12, 13];
analytical and numerical structural models [19, 18]; and, less explored, Artificial Intelligence-
driven approaches [15, 21],
High-quality, publicly and ready available datasets for manufacturing processes, involving ther-
mal behaviour in machine tools, are scarce. The main reason for this scarcity is that generating
1
arXiv:2509.16222v1 [cs.CE] 11 Sep 2025
realistic, high-fidelity thermal data requires expertise in Finite Element (FE) modelling, including
knowledge of geometry preparation, meshing, material properties, and boundary conditions. As
a result, such data is typically out of reach for researchers in Machine Learning (ML) and Deep
Learning (DL) who may lack expertise in these tools or processes. To address these limitations, we
present a structured, high-resolution dataset of transient thermal simulations designed to support
the development of ML and DL surrogates [2]. Training surrogate models on this data significantly
reduces the computational cost of predicting temperature and heat flux evolution under varying
input conditions [17, 7]. These models can then be used to optimise and perform error correction
on demand, thus reducing production downtime and improving process efficiency.
The presented dataset captures temperature and heat flux at 29 probe node locations over time
of a vertical axis of a machine tool test rig. Seventeen distinct simulation runs with different initial
conditions were derived from a 2(5−1) fractional factorial design. These simulations represent a
wide range of thermal conditions, making the dataset suitable for studying and modelling both
specific (a single initial condition) or global behaviour. A total of 35 files are provided, comprising
fixed geometry and material property data, and transient temperature and heat flux values. Each
simulation run contains 1,800 time steps, resulting in values per second over a period of 30 minutes.
Statistical analysis, time evolution plots, and correlation matrices are also included to offer insights
into the data structure and variability. The statistical analysis is reproducible using the Jupyter
notebook included in the repository, allowing researchers to explore, extend, or adapt the analysis
pipeline with minimal setup. The dataset is designed for the application of ML and DL models,
supporting regression tasks, such as predicting thermal behaviour, and classification tasks, such as
identifying component type or location. This work aims to accelerate the development of thermal
surrogate models to enable wider adoption and development of ML and DL techniques in thermal
modelling for machine tools.
2
Methods
Model Construction
The FE model was generated by defeaturing, segmenting, and optimizing the initial Computer-
Aided Design (CAD) assembly using Solidworks [22]. Special emphasis was placed on the most
important heat source components, such as the motor, bearings, and guideways. The node num-
bers were set according to a criteria that maximizes the quality of the thermal behaviour repre-
sentation in the machine tool, achieved through finer meshing in regions with steep temperature
gradients, such as heat sources and boundaries, while avoiding prohibitive computational costs.
This approach incorporates a convergence study to ensure result stability, maintains optimal el-
ement aspect ratios to enhance accuracy, and balances precision with computational efficiency
[6]. Their locations are as follows, Fig. 2: 1-2 (carrier), 3-5 (guideway), 6-8 (motor base), 9-10
(bearing), 11-14 (front structure), 15-19 (lateral structure), 20-23 (top structure), and 24-29 (back
structure). Non-critical details, such as small holes and screws, were removed to simplify the
meshing process and due to its non-relevant effect on the output of the current approach. The FE
model was built using ANSYS. A mesh with 26,434 hexahedral elements and 157,327 nodes at a
0.01 m resolution was created. Fig. 1 shows the FE CAD model preparation and the heat fluxes
(heat loads) applied as part of the set of boundary conditions.
Boundary Conditions
The thermal boundary conditions were applied as follows: All exposed exterior surfaces, such as the
motor and the guideways, are subject to convective heat transfer with the ambient air. The variable
associated for convective effects was the air film coefficient, which represents the behaviour of a
laminar or turbulent flow on the involved surfaces. Radiation was not considered in this study. The
set of boundary conditions also considered ground temperature, air temperature, initial machine
temperature, and heat fluxes as thermal loads. Regarding heat fluxes, only three main loads were
2
Figure 1: Modelling workflow: from CAD preparation to set thermal loads (with 1 the motor, 2
the bearings and 3 the guideways).
considered and thus also simplified due to their complexity in real life. To simplify the approach,
effects such as velocity-dependent friction or non-linearities due to changes in material properties
were avoided. From the previous, the thermal loads were considered constant on the surfaces
implied related to the main heat sources: the motor, bearings, and guideways. Each considering
different magnitudes based on their relevance in the heat contribution (see Table 1). Motor and
bearings received uniform heat fluxes, while the guideways were split into three thermal load
locations. The top and bottom segments were assumed to have the same heat flux magnitude,
while the middle segment had a higher magnitude due to its longer exposure to the carrier’s
movement along its path. Fig.1 on the far right shows the locations of the heat fluxes, numbered
1 for the motor, 2 for the bearings and 3 for the guideways.
Table 1: Heat flux magnitudes.
Source
Heat flux
Unit
Motor
1000
W/m²
Bearings (top and bottom)
300
W/m²
Guideways: top and bottom
200
W/m²
Guideways: middle
600
W/m²
Then, the Design of Experiment (DoE) was carried out with five factors and a two-level fac-
torial design with one centre point, resulting in 17 non-repetitive runs due to the nature of the
data generation. Table 2 summarises the thermal boundary conditions used across the simulation
runs: the heat fluxes used for each run and component was obtained by multiplying the heat
flux magnitudes presented in Table 1 with the Heat Flux Level factor. The air film coefficient
represents the convective heat transfer and models the effectiveness of passive cooling. The am-
bient temperature defines the temperature of the direct machine environment, which influences
heat dissipation. The system initial temperature defines the uniform starting temperature of the
machine. The ground temperature represents the thermal boundary condition at the base of the
machine tool and simulates heat exchange with the floor or mounting surface.
Each transient simulation run was solved for a period of 30 minutes (t = 1200 s), with outputs
3
Figure 2: Sensor locations.
recorded every second.
3
Data Records
The dataset consists of 35 data files in standard Comma-Separated Values (CSV):
• Mesh and properties: A single file (TransientThermalSimulationFE MaterialProperties *1.txt),
common to the 17 simulation runs, listing the FE model node identity number and coordi-
nates, and material properties (density and thermal conductivity), Table 3;
Table 3: Format of the mesh and properties file.
Node Number
X Position
Y Position
Z Position
Density
Thermal Conductivity
1
0.064100
-0.133366
0.035898
7850
60.5
. . .
. . .
. . .
. . .
. . .
. . .
29
0.064100
-0.153729
0.073535
7850
60.5
• Temperature fields: 17 files (TransientThermalSimulationFE RunN Temperature *.txt,
with N ∈1, . . . , 17), each corresponding to one simulation run, Table 4. Columns represent:
steps; time in seconds; temperature at each of the 29 probe nodes in ◦C;
• Heat flux fields: 17 files (TransientThermalSimulationFE RunN HeatFlux *.txt, with N ∈
1, . . . , 17), each corresponding to one simulation run, Table 5. Columns represent: steps;
time in seconds; heat flux at each of the 29 probe nodes (in W/m2).
Table 4: Format of a temperature field file.
Steps
Time
Probe1
. . .
Probe29
1
1
20
. . .
19.997
. . .
. . .
. . .
. . .
. . .
1800
1
20.269
. . .
18.378
Table 5: Format of a heat flux field file.
Steps
Time
Probe1
. . .
Probe29
1
1
4.7853e-5
. . .
2.2382e-8
. . .
. . .
. . .
. . .
. . .
1800
1
6.0656
. . .
346.11
1* is used as a wildcard (regex syntax) indicating there exists more characters after the base name but are not
relevant for main file type identification (run date, mesh size).
4
Table 2: Design of experiments thermal boundary conditions for the 17 runs.
Run
Heat Flux
Level, W/m2
Air Film
Coefficient,
W/m2 ·◦C
Ambient
Tempera-
ture, ◦C
System
Initial Tem-
perature, ◦C
Ground
Tempera-
ture, ◦C
1
0.5
10
20
20
18
2
1.5
10
20
20
38
3
0.5
50
20
20
38
4
1.5
50
20
20
18
5
0.5
10
40
20
38
6
1.5
10
40
20
18
7
0.5
50
40
20
18
8
1.5
50
40
20
38
9
0.5
10
20
40
38
10
1.5
10
20
40
18
11
0.5
50
20
40
18
12
1.5
50
20
40
38
13
0.5
10
40
40
18
14
1.5
10
40
40
38
15
0.5
50
40
40
38
16
1.5
50
40
40
18
17
1.0
30
30
30
28
All data files are openly available at github.com/temperatureModellingDataset.
The units
and data organisation are consistent across files and the column headers and file naming are
straightforward to identify the simulation run and variable.
Statistical Analysis and Data Visualisation
To complement the dataset, a statistical analysis and visualisation is provided.
The fixed material and mesh data include 29 probe nodes sampled from the FE model. All
nodes are located in components that share the same material, structural steel with a density
of 7850 Kg/m3 and thermal conductivity of 60.5 W/m2. The node coordinates basic statistics,
minimum (min) and maximum (max) value, mean and standard deviation (std) are organised in
Table 6.
Table 6: Node coordinate statistics.
Coordinate
Min
Max
Mean
Std
Unit
x
0.0641
0.0641
0.0641
0
m
y
-0.2058
-0.1242
-0.1664
0.0307
m
z
0.0248
0.0952
0.0547
0.0260
m
Each of the 17 temperature field files contains 1800 data points recorded at 29 probe nodes.
Across all simulation runs, the basic statistics are organised in Table 7.
5
Table 7: Temperature field statistics.
Node
Min
Max
Mean
Std
Unit
1
20.00
40.84
30.15
7.96
◦C
2
20.00
41.93
30.54
7.89
◦C
3
20.00
41.70
30.47
7.89
◦C
4
20.04
44.34
32.07
8.00
◦C
5
19.35
40.21
29.83
6.49
◦C
6
20.07
60.99
39.19
9.92
◦C
7
20.07
54.82
36.13
8.85
◦C
8
20.07
57.14
37.03
9.13
◦C
9
20.00
41.31
30.28
8.04
◦C
10
20.00
40.79
30.13
8.01
◦C
11
19.81
40.51
30.08
7.27
◦C
12
18.41
40.00
28.62
7.63
◦C
13
18.25
40.00
28.54
7.73
◦C
14
19.85
40.00
29.98
7.81
◦C
15
20.00
40.38
30.06
7.94
◦C
16
20.00
40.18
30.02
7.94
◦C
17
20.00
40.94
30.17
7.90
◦C
18
20.00
41.06
30.20
7.93
◦C
19
20.00
43.50
30.87
7.91
◦C
20
20.00
46.45
32.16
8.11
◦C
21
20.00
41.02
30.20
8.09
◦C
22
20.00
40.07
30.01
7.88
◦C
23
19.39
40.00
29.72
7.07
◦C
24
18.64
40.00
28.99
6.62
◦C
25
20.00
40.08
30.01
7.89
◦C
26
18.38
40.00
28.64
7.28
◦C
27
20.00
42.09
30.48
7.85
◦C
28
19.29
40.13
29.78
6.82
◦C
29
20.00
40.49
30.09
7.95
◦C
To detect and analyse possible linear relationships between thermal responses at different
probe node locations, the correlation matrix, using the Pearson coefficient, across all 17 runs was
computed, Fig. 3.
Figure 3: Correlation matrix for the temperature field data.
6
Most of the nodes show high correlation between each other, indicating a uniform temperature
field in the system. Only a few nodes in the front structure (12, 13) and back structure (26) have a
lower correlation with other locations. In the context of ML or DL applications, highly correlated
structures can be excluded to minimise redundancy and reduce computational costs during models
training and testing [1].
Furthermore, plots of the temperature increment in transient state at selected nodes are pro-
vided in Fig. 4.
Figure 4: Temperature curve for selected node 1, 4, 13, and 28.
Similarly, each 17 heat flux field files contain 1800 data points recorded at 29 probe nodes.
The main statistics, across all simulation runs, are organised in Table 8.
7
Table 8: Heat flux field statistics.
Node
Min
Max
Mean
Std
Unit
1
0.00
961.19
139.79
207.30
W/m2
2
0.00
1574.90
193.82
270.92
W/m2
3
0.02
1675.20
217.74
306.07
W/m2
4
261.48
2068.10
851.71
403.88
W/m2
5
0.35
1678.10
274.16
315.89
W/m2
6
352.41
1499.60
999.28
484.84
W/m2
7
347.81
2192.80
1223.93
609.48
W/m2
8
339.66
1537.20
978.86
474.87
W/m2
9
4.43
7669.20
2022.62
2048.50
W/m2
10
14.48
1741.00
559.93
438.46
W/m2
11
0.00
896.78
199.57
221.40
W/m2
12
0.00
307.34
59.41
78.86
W/m2
13
0.18
1013.80
474.39
319.01
W/m2
14
15.45
10656.00
2539.30
2130.18
W/m2
15
1.33
7990.10
1725.73
1641.09
W/m2
16
0.00
971.77
162.68
187.43
W/m2
17
0.00
972.21
140.09
173.57
W/m2
18
0.00
993.88
152.76
194.35
W/m2
19
0.00
993.95
107.68
169.81
W/m2
20
0.00
1335.30
159.34
238.95
W/m2
21
0.00
940.24
185.38
181.78
W/m2
22
0.00
1319.50
188.18
246.72
W/m2
23
0.02
3667.30
1728.13
998.05
W/m2
24
0.00
732.85
206.60
204.02
W/m2
25
0.00
978.31
236.59
282.97
W/m2
26
0.00
2694.70
917.34
757.36
W/m2
27
0.26
9130.60
3145.15
2538.94
W/m2
28
0.00
1692.00
320.15
405.99
W/m2
29
32.00
11828.00
2814.57
2425.87
W/m2
Additionally, the correlation matrix, using the Pearson coefficient, across all 17 runs was com-
puted, Figure 5.
Figure 5: Correlation matrix for the heat flux data.
8
The nodes on the carrier (1, 2, and 3) show a strong correlation, indicating similar heat
flux behaviour. In contrast, the nodes in the motor base (6, 7, and 8) exhibit a strong internal
correlation with the middle guideway (4) and are no correlation with other locations. In an ML
or DL application, highly correlated structures can be removed to reduce training and testing
computational costs [1].
4
Technical Validation
The simulations were validated for internal consistency and physical plausibility following the
methodology of Thiem et al.
[20].
This process included de-featuring, segmenting functional
components, and defining all surfaces that interact with thermal boundary conditions, followed
by meshing.
The meshing process was streamlined based on prior improvements, such as op-
timizing element size distribution to reduce computational overhead, implementing automated
mesh refinement in high-gradient thermal zones, adopting a pre-defined convergence threshold to
minimize manual iterations, and utilizing symmetry to halve the model domain where applica-
ble. Notably, the FE model was simplified by excluding nonlinear effects, such as friction and
temperature-dependent material properties, as these would significantly increase computational
complexity First, repeated centre-point runs were found to produce identical temperature curves,
confirming numerical stability. Additionally, the thermal responses behaved as expected across all
runs: higher input fluxes or warmer ambient temperatures resulted in higher node temperatures.
No negative, physically implausible, or outliers values were detected.
Although experimental measurements are not provided, comparable FE models have been
validated in prior studies [20, 4, 8]. Furthermore, the modelling workflow, Fig. 1, is a common
approach in the literature [16].
5
Usage Notes
The dataset and statistical analysis Jupyter notebook are available on the Github platform at
github.com/temperatureModellingDataset.
Application examples
This dataset is intended to be used for training and testing ML or DL thermal surrogate models,
aiming at reducing the computational cost of predicting the thermal behaviour of machine tool
components. Potential users should note:
• Data format: All data files are in CSV, being compatible with any programming or/and
data analysis environment (Python, R, MATLAB, etc.). Column headers are provided as
the first row of each file and variable units are given in the International System of Units;
• Probe nodes: Information on 29 probe nodes, at key locations, is provided, comprising of
3-dimensional coordinates and material properties.
• Application examples: The dataset can be used for both regression and classification
tasks, in machine learning and deep learning [1]. For regression, the data can be seen as a
time-series and used to model and predict the transient thermal behaviour (temperature and
heat flux) at specific locations, given known input conditions. This allows the development of
surrogate models (for digital twins, real-time monitoring, or rapid thermal design evaluation)
capable of predicting the behaviour with lower computational cost than FE simulations.
For classification, the data can be used to classify the different components (bearings, motor,
guideways) and their location (front, top, middle, centre, lateral, back). Based on their ther-
mal behaviour over time, models can learn to detect structural roles or positional identifiers
which is useful for automated model annotation, component tracking, or anomaly detection.
9
License
The license for the dataset and all external material is Creative Commons Attribution 4.0 Interna-
tional. Researchers are free to share and adapt the presented data set, but they must give credit
to the authors with a reference, provide a link to the license, and indicate changes to the original
data set and other additional material.
6
Data availability
All data supporting this Data Descriptor are available at github.com/CeciliaCoelho/temperatureModellingDataset
composing of: 17 heat flux and 17 temperature field files with values for 29 probe nodes and a
sampling of one data point per second across 30 minutes; a properties file with node information
and material properties for the 17 runs; a Jupyter notebook for data analysis. All resources are
under license Creative Commons Attribution 4.0 International.
7
Code availability
No custom code is needed to access the data. The Python code for basic statistical analysis and
data visualisation can be found in a Jupyter notebook on Github.
8
Acknowledgements
C. Coelho would like to thank the KIBIDZ project funded by dtec.bw—Digitalization and Technol-
ogy Research Center of the Bundeswehr; dtec.bw is funded by the European Union—NextGenerationEU.
M. Hohmann is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foun-
dation) – 520460697. D. Fern´andez is funded by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) - F-009765-551-521-1130701.
Contributions
Conceptualization and Methodology: C.C., and D.F.; Software: D.F., C.C, and M.H.; Hardware:
D.F., C.C, and M.H.; Validation: C.C., and D.F.; Formal analysis: D.F., C.C, and M.H.; Re-
sources, O.N.; Data curation: C.C and M.H.; Writing—original draft preparation: D.F., C.C,
and M.H.; Writing—review and editing, D.F., C.C, M.H., L.P.; Visualization: D.F., and M.H.;
Supervision and Project Administration: C.C., O.N., L.P, S.I.; Funding acquisition: O.N., S.I. All
authors reviewed the manuscript.
9
Competing interests
The authors declare no competing interests.
References
[1] Christopher M. Bishop and Nasser M. Nasrabadi.
Pattern Recognition and Machine Learning.
Volume 4, Number 4, Springer, 2006.
[2] Steven L. Brunton and J. Nathan Kutz.
Data-driven Science and Engineering: Machine Learning, Dynamical Systems, and Control.
Cambridge University Press, 2022.
10
[3] Wenlong Feng, Zihan Li, Qunying Gu, and Jianguo Yang.
Thermally induced positioning error modelling and compensation based on thermal charac-
teristic analysis.
International Journal of Machine Tools and Manufacture, 93:26–36, 2015.
https://doi.org/10.1016/j.ijmachtools.2015.03.006
[4] Christian Friedrich, Alexander Geist, Muhammad Faisal Yaqoob, Arvid Hellmich, and Steffen
Ihlenfeldt.
Correction of Thermal Errors in Machine Tools by a Hybrid Model Approach.
Applied Sciences, 14(2):671, 2024.
[5] Makoto Fujishima, Koichiro Narimatsu, Naruhiro Irino, Masahiko Mori, and Soichi Ibaraki.
Adaptive thermal displacement compensation method based on deep learning.
CIRP Journal of Manufacturing Science and Technology, 25:22–25, 2019.
https://doi.org/10.1016/j.cirpj.2019.04.002
[6] Be˜nat I˜nigo, Natalia Colinas-Armijo, Luis Norberto L´opez de Lacalle, and Gorka Aguirre.
Digital Twin for Volumetric Thermal Error Compensation of Large Machine Tools.
Sensors, 24(19), 2024.
https://doi.org/10.3390/s24196196
[7] Jaydeep Karandikar and Thomas Kurfess.
Cost optimization and experimental design in milling using surrogate models and value of
information.
Journal of Manufacturing Systems, 37:479–486, 2015.
Published by Elsevier.
[8] Tharun Suresh Kumar, Alexander Geist, Christian Naumann, Janine Gl¨anzel, and Steffen
Ihlenfeldt.
Split CFD-simulation approach for effective quantification of mixed convective heat transfer
coefficients on complex machine tool models.
Procedia CIRP, 118:199–204, 2023.
[9] Jin-Hyeon Lee and Seung-Han Yang.
Statistical optimization and assessment of a thermal error model for CNC machine tools.
International Journal of Machine Tools and Manufacture, 42(1):147–155, 2002.
[10] Zihan Li, Jianguo Yang, Kaiguo Fan, and Yi Zhang.
Integrated geometric and thermal error modeling and compensation for vertical machining
centers.
International Journal of Advanced Manufacturing Technology, 76(5-8):1139–1150, 2015.
https://doi.org/10.1007/s00170-014-6336-z
[11] Hui Liu, Enming Miao, Liyin Zhang, Long Li, Yinlong Hou, and Dafeng Tang.
Thermal error modeling for machine tools: Mechanistic analysis and solution for the pseudo-
correlation of temperature-sensitive points.
IEEE Access, 8:63497–63513, 2020.
[12] Martin Mares, Otakar Horejs, Jan Hornych, and Jan Smolik.
Robustness and portability of machine tool thermal error compensation model based on con-
trol of participating thermal sources.
Journal of Machine Engineering, 13(1):24–36, 2013.
Published by Wroc lawska Rada Federacji Stowarzysze´n Naukowo-Technicznych.
[13] Martin Mareˇs, Otakar Horejˇs, ˇStˇep´an Fiala, Ch Lee, S. M. Jeong, and K. H. Kim.
Strategy of Milling Center Thermal Error Compensation Using a Transfer Function Model
and Its Validation Outside of Calibration Range.
11
MM Science Journal, 2019.
Published by MM Publishing.
[14] Josef Mayr, Jerzy Jedrzejewski, Eckart Uhlmann, M. Alkan Donmez, Wolfgang Knapp, Frank
H¨artig, Klaus Wendt, Toshimichi Moriwaki, Paul Shore, Robert Schmitt, Christian Brecher,
Timo W¨urz, and Konrad Wegener.
Thermal issues in machine tools.
CIRP Annals - Manufacturing Technology, 61(2):771–791, 2012.
https://doi.org/10.1016/j.cirp.2012.05.008
[15] Sen Mu, Chunping Yu, Kunlong Lin, Caijiang Lu, Xi Wang, Tao Wang, and Guoqiang Fu.
A Review of Machine Learning-Based Thermal Error Modeling Methods for CNC Machine
Tools.
Machines, 13(2):153, 2025.
[16] David W. Nicholson.
Finite Element Analysis: Thermomechanics of Solids.
CRC Press, 2008.
[17] Karel Pavl´ıˇcek, V´aclav Kotlan, and Ivo Doleˇzel.
Applicability and comparison of surrogate techniques for modeling of selected heating prob-
lems.
Computers & Mathematics with Applications, 78(9):2897–2910, 2019.
Published by Elsevier.
[18] Xaver Thiem, Knut Großmann, and Andreas M¨uhl.
Structural model-based correction of thermo-elastic machine tool errors.
In Thermo-energetic Design of Machine Tools: A Systemic Approach to Solve the Conflict
Between Power Efficiency, Accuracy and Productivity Demonstrated at the Example of Ma-
chining Production, pages 185–197. Springer, 2014.
[19] Xaver Thiem, Bernd Kauschinger, and Steffen Ihlenfeldt.
Online correction of thermal errors based on a structure model.
International Journal of Mechatronics and Manufacturing Systems, 12(1):49–62, 2019.
Published by Inderscience Publishers (IEL).
[20] Xaver Thiem, Holger Rudolph, Robert Krahn, Steffen Ihlenfeldt, Christof Fetzer, and Jens
M¨uller.
Adaptive Thermal Model for Structure Model Based Correction.
In Lecture Notes in Production Engineering, pages 67–82. Springer International Publishing,
2023.
https://doi.org/10.1007/978-3-031-34486-2 6
[21] Yi Zhang, Jianguo Yang, and Hui Jiang.
Machine tool thermal error modeling and prediction by grey neural network.
The International Journal of Advanced Manufacturing Technology, 59(9):1065–1072, 2012.
[22] Patrick P¨ohlmann, Jens M¨uller, Steffen Ihlenfeldt.
Strategy for compensation of thermally induced displacements in machine structure using
distributed temperature field control.
Journal of Machine Engineering, 2024, Vol. 24, No. 3,5-16
12
|
An Open Dataset for Temperature Modelling in Machine Tools C. Coelho1,+,*, D. Fern ́andez2,+, M. Hohmann1,+, L. Penter2, S. Ihlenfeldt2, and O. Niggemann1 1Institute for Artificial Intelligence, Helmut Schmidt University, 22043 Hamburg, Germany 2Chair of Machine Tools Development and Adaptive Controls, Dresden 01069 Dresden, Germany *Corresponding author: +Shared co-first authorship. Abstract This data set descriptor introduces a structured, high-resolution dataset of transient thermal simulations for a vertical axis of a machine tool test rig. The data set includes temperature and heat flux values recorded at 29 probe locations at 1800 time steps, sampled every second over a 30-minute range, across 17 simulation runs derived from a fractional factorial design. First, a computer-aided design model was de-featured, segmented, and optimized, followed by finite element (FE) modelling. Detailed information on material, mesh, and boundary conditions is included. To support research and model development, the dataset provides summary statistics, thermal evolution plots, correlation matrix analyses, and a reproducible Jupyter notebook. The data set is designed to support machine learning and deep learning applications in thermal modelling for prediction, correction, and compensation of thermally induced deviations in mechanical systems, and aims to support researchers without FE expertise by providing ready-to-use simulation data. 1 Background & Summary Thermal effects are widely recognized as the predominant source of machining inaccuracies, with thermally induced deformations contributing up to 70% of the total geometric errors in machine tools [14]. Thus, accurately predicting temperature fields over time is crucial to achieving precision, reliability, and reduced production costs. Traditional approaches relied heavily on manual compensation and simplistic regulation of internal heat sources. Despite significant advancements in thermal compensation and temperature control strategies [10, 3, 5], achieving consistent, highprecision machining remains challenging. The complex thermo-mechanical behaviour of machine structures continues to compromise positional accuracy and process stability [14, 10]. Thermally induced displacements at the tool centre point (TCP) stem from a combination of internal heat sources, such as spindles, drive motors, and guideways, and external thermal disturbances, including ambient temperature fluctuations, acting along the thermo-elastic functional chain [3, 5]. Current existing thermal modelling and thermal error correction models can be classified into four categories: statistical correlation models [11, 9]; transfer function-based models [12, 13]; analytical and numerical structural models [19, 18]; and, less explored, Artificial Intelligencedriven approaches [15, 21], High-quality, publicly and ready available datasets for manufacturing processes, involving thermal behaviour in machine tools, are scarce. The main reason for this scarcity is that generating 1 11 Sep 2025 realistic, high-fidelity thermal data requires expertise in Finite Element (FE) modelling, including knowledge of geometry preparation, meshing, material properties, and boundary conditions. As a result, such data is typically out of reach for researchers in Machine Learning (ML) and Deep Learning (DL) who may lack expertise in these tools or processes. To address these limitations, we present a structured, high-resolution dataset of transient thermal simulations designed to support the development of ML and DL surrogates [2]. Training surrogate models on this data significantly reduces the computational cost of predicting temperature and heat flux evolution under varying input conditions [17, 7]. These models can then be used to optimise and perform error correction on demand, thus reducing production downtime and improving process efficiency. The presented dataset captures temperature and heat flux at 29 probe node locations over time of a vertical axis of a machine tool test rig. Seventeen distinct simulation runs with different initial conditions were derived from a 2(5-1) fractional factorial design. These simulations represent a wide range of thermal conditions, making the dataset suitable for studying and modelling both specific (a single initial condition) or global behaviour. A total of 35 files are provided, comprising fixed geometry and material property data, and transient temperature and heat flux values. Each simulation run contains 1,800 time steps, resulting in values per second over a period of 30 minutes. Statistical analysis, time evolution plots, and correlation matrices are also included to offer insights into the data structure and variability. The statistical analysis is reproducible using the Jupyter notebook included in the repository, allowing researchers to explore, extend, or adapt the analysis pipeline with minimal setup. The dataset is designed for the application of ML and DL models, supporting regression tasks, such as predicting thermal behaviour, and classification tasks, such as identifying component type or location. This work aims to accelerate the development of thermal surrogate models to enable wider adoption and development of ML and DL techniques in thermal modelling for machine tools. 2 Methods Model Construction The FE model was generated by defeaturing, segmenting, and optimizing the initial ComputerAided Design (CAD) assembly using Solidworks [22]. Special emphasis was placed on the most important heat source components, such as the motor, bearings, and guideways. The node numbers were set according to a criteria that maximizes the quality of the thermal behaviour representation in the machine tool, achieved through finer meshing in regions with steep temperature gradients, such as heat sources and boundaries, while avoiding prohibitive computational costs. This approach incorporates a convergence study to ensure result stability, maintains optimal element aspect ratios to enhance accuracy, and balances precision with computational efficiency [6]. Their locations are as follows, Fig. 2: 1-2 (carrier), 3-5 (guideway), 6-8 (motor base), 9-10 (bearing), 11-14 (front structure), 15-19 (lateral structure), 20-23 (top structure), and 24-29 (back structure). Non-critical details, such as small holes and screws, were removed to simplify the meshing process and due to its non-relevant effect on the output of the current approach. The FE model was built using ANSYS. A mesh with 26,434 hexahedral elements and 157,327 nodes at a 0.01 m resolution was created. Fig. 1 shows the FE CAD model preparation and the heat fluxes (heat loads) applied as part of the set of boundary conditions. Boundary Conditions The thermal boundary conditions were applied as follows: All exposed exterior surfaces, such as the motor and the guideways, are subject to convective heat transfer with the ambient air. The variable associated for convective effects was the air film coefficient, which represents the behaviour of a laminar or turbulent flow on the involved surfaces. Radiation was not considered in this study. The set of boundary conditions also considered ground temperature, air temperature, initial machine temperature, and heat fluxes as thermal loads. Regarding heat fluxes, only three main loads were 2 Figure 1: Modelling workflow: from CAD preparation to set thermal loads (with 1 the motor, 2 the bearings and 3 the guideways). considered and thus also simplified due to their complexity in real life. To simplify the approach, effects such as velocity-dependent friction or non-linearities due to changes in material properties were avoided. From the previous, the thermal loads were considered constant on the surfaces implied related to the main heat sources: the motor, bearings, and guideways. Each considering different magnitudes based on their relevance in the heat contribution (see Table 1). Motor and bearings received uniform heat fluxes, while the guideways were split into three thermal load locations. The top and bottom segments were assumed to have the same heat flux magnitude, while the middle segment had a higher magnitude due to its longer exposure to the carrier's movement along its path. Fig.1 on the far right shows the locations of the heat fluxes, numbered 1 for the motor, 2 for the bearings and 3 for the guideways. Table 1: Heat flux magnitudes. Source Heat flux Unit Motor 1000 W/m2 Bearings (top and bottom) 300 W/m2 Guideways: top and bottom 200 W/m2 Guideways: middle 600 W/m2 Then, the Design of Experiment (DoE) was carried out with five factors and a two-level factorial design with one centre point, resulting in 17 non-repetitive runs due to the nature of the data generation. Table 2 summarises the thermal boundary conditions used across the simulation runs: the heat fluxes used for each run and component was obtained by multiplying the heat flux magnitudes presented in Table 1 with the Heat Flux Level factor. The air film coefficient represents the convective heat transfer and models the effectiveness of passive cooling. The ambient temperature defines the temperature of the direct machine environment, which influences heat dissipation. The system initial temperature defines the uniform starting temperature of the machine. The ground temperature represents the thermal boundary condition at the base of the machine tool and simulates heat exchange with the floor or mounting surface. Each transient simulation run was solved for a period of 30 minutes (t = 1200 s), with outputs 3 Figure 2: Sensor locations. recorded every second. 3 Data Records The dataset consists of 35 data files in standard Comma-Separated Values (CSV): • Mesh and properties: A single file (TransientThermalSimulationFE MaterialProperties *1.txt), common to the 17 simulation runs, listing the FE model node identity number and coordinates, and material properties (density and thermal conductivity), Table 3; Table 3: Format of the mesh and properties file. Node Number X Position Y Position Z Position Density Thermal Conductivity 1 0.064100 -0.133366 0.035898 7850 60.5 . . . . . . . . . . . . . . . . . . 29 0.064100 -0.153729 0.073535 7850 60.5 • Temperature fields: 17 files (TransientThermalSimulationFE RunN Temperature *.txt, with N ∈1, . . . , 17), each corresponding to one simulation run, Table 4. Columns represent: steps; time in seconds; temperature at each of the 29 probe nodes in ◦C; • Heat flux fields: 17 files (TransientThermalSimulationFE RunN HeatFlux *.txt, with N ∈ 1, . . . , 17), each corresponding to one simulation run, Table 5. Columns represent: steps; time in seconds; heat flux at each of the 29 probe nodes (in W/m2). Table 4: Format of a temperature field file. Steps Time Probe1 . . . Probe29 1 1 20 . . . 19.997 . . . . . . . . . . . . . . . 1800 1 20.269 . . . 18.378 Table 5: Format of a heat flux field file. Steps Time Probe1 . . . Probe29 1 1 4.7853e-5 . . . 2.2382e-8 . . . . . . . . . . . . . . . 1800 1 6.0656 . . . 346.11 1* is used as a wildcard (regex syntax) indicating there exists more characters after the base name but are not relevant for main file type identification (run date, mesh size). 4 Table 2: Design of experiments thermal boundary conditions for the 17 runs. Run Heat Flux Level, W/m2 Air Film Coefficient, W/m2 ·◦C Ambient Temperature, ◦C System Initial Temperature, ◦C Ground Temperature, ◦C 1 0.5 10 20 20 18 2 1.5 10 20 20 38 3 0.5 50 20 20 38 4 1.5 50 20 20 18 5 0.5 10 40 20 38 6 1.5 10 40 20 18 7 0.5 50 40 20 18 8 1.5 50 40 20 38 9 0.5 10 20 40 38 10 1.5 10 20 40 18 11 0.5 50 20 40 18 12 1.5 50 20 40 38 13 0.5 10 40 40 18 14 1.5 10 40 40 38 15 0.5 50 40 40 38 16 1.5 50 40 40 18 17 1.0 30 30 30 28 All data files are openly available at github.com/temperatureModellingDataset. The units and data organisation are consistent across files and the column headers and file naming are straightforward to identify the simulation run and variable. Statistical Analysis and Data Visualisation To complement the dataset, a statistical analysis and visualisation is provided. The fixed material and mesh data include 29 probe nodes sampled from the FE model. All nodes are located in components that share the same material, structural steel with a density of 7850 Kg/m3 and thermal conductivity of 60.5 W/m2. The node coordinates basic statistics, minimum (min) and maximum (max) value, mean and standard deviation (std) are organised in Table 6. Table 6: Node coordinate statistics. Coordinate Min Max Mean Std Unit x 0.0641 0.0641 0.0641 0 m y -0.2058 -0.1242 -0.1664 0.0307 m z 0.0248 0.0952 0.0547 0.0260 m Each of the 17 temperature field files contains 1800 data points recorded at 29 probe nodes. Across all simulation runs, the basic statistics are organised in Table 7. 5 Table 7: Temperature field statistics. Node Min Max Mean Std Unit 1 20.00 40.84 30.15 7.96 ◦C 2 20.00 41.93 30.54 7.89 ◦C 3 20.00 41.70 30.47 7.89 ◦C 4 20.04 44.34 32.07 8.00 ◦C 5 19.35 40.21 29.83 6.49 ◦C 6 20.07 60.99 39.19 9.92 ◦C 7 20.07 54.82 36.13 8.85 ◦C 8 20.07 57.14 37.03 9.13 ◦C 9 20.00 41.31 30.28 8.04 ◦C 10 20.00 40.79 30.13 8.01 ◦C 11 19.81 40.51 30.08 7.27 ◦C 12 18.41 40.00 28.62 7.63 ◦C 13 18.25 40.00 28.54 7.73 ◦C 14 19.85 40.00 29.98 7.81 ◦C 15 20.00 40.38 30.06 7.94 ◦C 16 20.00 40.18 30.02 7.94 ◦C 17 20.00 40.94 30.17 7.90 ◦C 18 20.00 41.06 30.20 7.93 ◦C 19 20.00 43.50 30.87 7.91 ◦C 20 20.00 46.45 32.16 8.11 ◦C 21 20.00 41.02 30.20 8.09 ◦C 22 20.00 40.07 30.01 7.88 ◦C 23 19.39 40.00 29.72 7.07 ◦C 24 18.64 40.00 28.99 6.62 ◦C 25 20.00 40.08 30.01 7.89 ◦C 26 18.38 40.00 28.64 7.28 ◦C 27 20.00 42.09 30.48 7.85 ◦C 28 19.29 40.13 29.78 6.82 ◦C 29 20.00 40.49 30.09 7.95 ◦C To detect and analyse possible linear relationships between thermal responses at different probe node locations, the correlation matrix, using the Pearson coefficient, across all 17 runs was computed, Fig. 3. Figure 3: Correlation matrix for the temperature field data. 6 Most of the nodes show high correlation between each other, indicating a uniform temperature field in the system. Only a few nodes in the front structure (12, 13) and back structure (26) have a lower correlation with other locations. In the context of ML or DL applications, highly correlated structures can be excluded to minimise redundancy and reduce computational costs during models training and testing [1]. Furthermore, plots of the temperature increment in transient state at selected nodes are provided in Fig. 4. Figure 4: Temperature curve for selected node 1, 4, 13, and 28. Similarly, each 17 heat flux field files contain 1800 data points recorded at 29 probe nodes. The main statistics, across all simulation runs, are organised in Table 8. 7 Table 8: Heat flux field statistics. Node Min Max Mean Std Unit 1 0.00 961.19 139.79 207.30 W/m2 2 0.00 1574.90 193.82 270.92 W/m2 3 0.02 1675.20 217.74 306.07 W/m2 4 261.48 2068.10 851.71 403.88 W/m2 5 0.35 1678.10 274.16 315.89 W/m2 6 352.41 1499.60 999.28 484.84 W/m2 7 347.81 2192.80 1223.93 609.48 W/m2 8 339.66 1537.20 978.86 474.87 W/m2 9 4.43 7669.20 2022.62 2048.50 W/m2 10 14.48 1741.00 559.93 438.46 W/m2 11 0.00 896.78 199.57 221.40 W/m2 12 0.00 307.34 59.41 78.86 W/m2 13 0.18 1013.80 474.39 319.01 W/m2 14 15.45 10656.00 2539.30 2130.18 W/m2 15 1.33 7990.10 1725.73 1641.09 W/m2 16 0.00 971.77 162.68 187.43 W/m2 17 0.00 972.21 140.09 173.57 W/m2 18 0.00 993.88 152.76 194.35 W/m2 19 0.00 993.95 107.68 169.81 W/m2 20 0.00 1335.30 159.34 238.95 W/m2 21 0.00 940.24 185.38 181.78 W/m2 22 0.00 1319.50 188.18 246.72 W/m2 23 0.02 3667.30 1728.13 998.05 W/m2 24 0.00 732.85 206.60 204.02 W/m2 25 0.00 978.31 236.59 282.97 W/m2 26 0.00 2694.70 917.34 757.36 W/m2 27 0.26 9130.60 3145.15 2538.94 W/m2 28 0.00 1692.00 320.15 405.99 W/m2 29 32.00 11828.00 2814.57 2425.87 W/m2 Additionally, the correlation matrix, using the Pearson coefficient, across all 17 runs was computed, Figure 5. Figure 5: Correlation matrix for the heat flux data. 8 The nodes on the carrier (1, 2, and 3) show a strong correlation, indicating similar heat flux behaviour. In contrast, the nodes in the motor base (6, 7, and 8) exhibit a strong internal correlation with the middle guideway (4) and are no correlation with other locations. In an ML or DL application, highly correlated structures can be removed to reduce training and testing computational costs [1]. 4 Technical Validation The simulations were validated for internal consistency and physical plausibility following the methodology of Thiem et al. [20]. This process included de-featuring, segmenting functional components, and defining all surfaces that interact with thermal boundary conditions, followed by meshing. The meshing process was streamlined based on prior improvements, such as optimizing element size distribution to reduce computational overhead, implementing automated mesh refinement in high-gradient thermal zones, adopting a pre-defined convergence threshold to minimize manual iterations, and utilizing symmetry to halve the model domain where applicable. Notably, the FE model was simplified by excluding nonlinear effects, such as friction and temperature-dependent material properties, as these would significantly increase computational complexity First, repeated centre-point runs were found to produce identical temperature curves, confirming numerical stability. Additionally, the thermal responses behaved as expected across all runs: higher input fluxes or warmer ambient temperatures resulted in higher node temperatures. No negative, physically implausible, or outliers values were detected. Although experimental measurements are not provided, comparable FE models have been validated in prior studies [20, 4, 8]. Furthermore, the modelling workflow, Fig. 1, is a common approach in the literature [16]. 5 Usage Notes The dataset and statistical analysis Jupyter notebook are available on the Github platform at github.com/temperatureModellingDataset. Application examples This dataset is intended to be used for training and testing ML or DL thermal surrogate models, aiming at reducing the computational cost of predicting the thermal behaviour of machine tool components. Potential users should note: • Data format: All data files are in CSV, being compatible with any programming or/and data analysis environment (Python, R, MATLAB, etc.). Column headers are provided as the first row of each file and variable units are given in the International System of Units; • Probe nodes: Information on 29 probe nodes, at key locations, is provided, comprising of 3-dimensional coordinates and material properties. • Application examples: The dataset can be used for both regression and classification tasks, in machine learning and deep learning [1]. For regression, the data can be seen as a time-series and used to model and predict the transient thermal behaviour (temperature and heat flux) at specific locations, given known input conditions. This allows the development of surrogate models (for digital twins, real-time monitoring, or rapid thermal design evaluation) capable of predicting the behaviour with lower computational cost than FE simulations. For classification, the data can be used to classify the different components (bearings, motor, guideways) and their location (front, top, middle, centre, lateral, back). Based on their thermal behaviour over time, models can learn to detect structural roles or positional identifiers which is useful for automated model annotation, component tracking, or anomaly detection. 9 License The license for the dataset and all external material is Creative Commons Attribution 4.0 International. Researchers are free to share and adapt the presented data set, but they must give credit to the authors with a reference, provide a link to the license, and indicate changes to the original data set and other additional material. 6 Data availability All data supporting this Data Descriptor are available at github.com/CeciliaCoelho/temperatureModellingDataset composing of: 17 heat flux and 17 temperature field files with values for 29 probe nodes and a sampling of one data point per second across 30 minutes; a properties file with node information and material properties for the 17 runs; a Jupyter notebook for data analysis. All resources are under license Creative Commons Attribution 4.0 International. 7 Code availability No custom code is needed to access the data. The Python code for basic statistical analysis and data visualisation can be found in a Jupyter notebook on Github. 8 Acknowledgements C. Coelho would like to thank the KIBIDZ project funded by dtec.bw-Digitalization and Technology Research Center of the Bundeswehr; dtec.bw is funded by the European Union-NextGenerationEU. M. Hohmann is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 520460697. D. Fern ́andez is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - F-009765-551-521-1130701. Contributions Conceptualization and Methodology: C.C., and D.F.; Software: D.F., C.C, and M.H.; Hardware: D.F., C.C, and M.H.; Validation: C.C., and D.F.; Formal analysis: D.F., C.C, and M.H.; Resources, O.N.; Data curation: C.C and M.H.; Writing-original draft preparation: D.F., C.C, and M.H.; Writing-review and editing, D.F., C.C, M.H., L.P.; Visualization: D.F., and M.H.; Supervision and Project Administration: C.C., O.N., L.P, S.I.; Funding acquisition: O.N., S.I. All authors reviewed the manuscript. 9 Competing interests The authors declare no competing interests. References [1] Christopher M. Bishop and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning. Volume 4, Number 4, Springer, 2006. [2] Steven L. Brunton and J. Nathan Kutz. Data-driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, 2022. 10 [3] Wenlong Feng, Zihan Li, Qunying Gu, and Jianguo Yang. Thermally induced positioning error modelling and compensation based on thermal characteristic analysis. International Journal of Machine Tools and Manufacture, 93:26-36, 2015. https://doi.org/10.1016/j.ijmachtools.2015.03.006 [4] Christian Friedrich, Alexander Geist, Muhammad Faisal Yaqoob, Arvid Hellmich, and Steffen Ihlenfeldt. Correction of Thermal Errors in Machine Tools by a Hybrid Model Approach. Applied Sciences, 14(2):671, 2024. [5] Makoto Fujishima, Koichiro Narimatsu, Naruhiro Irino, Masahiko Mori, and Soichi Ibaraki. Adaptive thermal displacement compensation method based on deep learning. CIRP Journal of Manufacturing Science and Technology, 25:22-25, 2019. https://doi.org/10.1016/j.cirpj.2019.04.002 [6] Be ̃nat I ̃nigo, Natalia Colinas-Armijo, Luis Norberto L ́opez de Lacalle, and Gorka Aguirre. Digital Twin for Volumetric Thermal Error Compensation of Large Machine Tools. Sensors, 24(19), 2024. https://doi.org/10.3390/s24196196 [7] Jaydeep Karandikar and Thomas Kurfess. Cost optimization and experimental design in milling using surrogate models and value of information. Journal of Manufacturing Systems, 37:479-486, 2015. Published by Elsevier. [8] Tharun Suresh Kumar, Alexander Geist, Christian Naumann, Janine Gl ̈anzel, and Steffen Ihlenfeldt. Split CFD-simulation approach for effective quantification of mixed convective heat transfer coefficients on complex machine tool models. Procedia CIRP, 118:199-204, 2023. [9] Jin-Hyeon Lee and Seung-Han Yang. Statistical optimization and assessment of a thermal error model for CNC machine tools. International Journal of Machine Tools and Manufacture, 42(1):147-155, 2002. [10] Zihan Li, Jianguo Yang, Kaiguo Fan, and Yi Zhang. Integrated geometric and thermal error modeling and compensation for vertical machining centers. International Journal of Advanced Manufacturing Technology, 76(5-8):1139-1150, 2015. https://doi.org/10.1007/s00170-014-6336-z [11] Hui Liu, Enming Miao, Liyin Zhang, Long Li, Yinlong Hou, and Dafeng Tang. Thermal error modeling for machine tools: Mechanistic analysis and solution for the pseudocorrelation of temperature-sensitive points. IEEE Access, 8:63497-63513, 2020. [12] Martin Mares, Otakar Horejs, Jan Hornych, and Jan Smolik. Robustness and portability of machine tool thermal error compensation model based on control of participating thermal sources. Journal of Machine Engineering, 13(1):24-36, 2013. Published by Wroc lawska Rada Federacji Stowarzysze ́n Naukowo-Technicznych. [13] Martin Mareˇs, Otakar Horejˇs, ˇStˇep ́an Fiala, Ch Lee, S. M. Jeong, and K. H. Kim. Strategy of Milling Center Thermal Error Compensation Using a Transfer Function Model and Its Validation Outside of Calibration Range. 11 MM Science Journal, 2019. Published by MM Publishing. [14] Josef Mayr, Jerzy Jedrzejewski, Eckart Uhlmann, M. Alkan Donmez, Wolfgang Knapp, Frank H ̈artig, Klaus Wendt, Toshimichi Moriwaki, Paul Shore, Robert Schmitt, Christian Brecher, Timo W ̈urz, and Konrad Wegener. Thermal issues in machine tools. CIRP Annals - Manufacturing Technology, 61(2):771-791, 2012. https://doi.org/10.1016/j.cirp.2012.05.008 [15] Sen Mu, Chunping Yu, Kunlong Lin, Caijiang Lu, Xi Wang, Tao Wang, and Guoqiang Fu. A Review of Machine Learning-Based Thermal Error Modeling Methods for CNC Machine Tools. Machines, 13(2):153, 2025. [16] David W. Nicholson. Finite Element Analysis: Thermomechanics of Solids. CRC Press, 2008. [17] Karel Pavl ́ıˇcek, V ́aclav Kotlan, and Ivo Doleˇzel. Applicability and comparison of surrogate techniques for modeling of selected heating problems. Computers & Mathematics with Applications, 78(9):2897-2910, 2019. Published by Elsevier. [18] Xaver Thiem, Knut Großmann, and Andreas M ̈uhl. Structural model-based correction of thermo-elastic machine tool errors. In Thermo-energetic Design of Machine Tools: A Systemic Approach to Solve the Conflict Between Power Efficiency, Accuracy and Productivity Demonstrated at the Example of Machining Production, pages 185-197. Springer, 2014. [19] Xaver Thiem, Bernd Kauschinger, and Steffen Ihlenfeldt. Online correction of thermal errors based on a structure model. International Journal of Mechatronics and Manufacturing Systems, 12(1):49-62, 2019. Published by Inderscience Publishers (IEL). [20] Xaver Thiem, Holger Rudolph, Robert Krahn, Steffen Ihlenfeldt, Christof Fetzer, and Jens M ̈uller. Adaptive Thermal Model for Structure Model Based Correction. In Lecture Notes in Production Engineering, pages 67-82. Springer International Publishing, 2023. https://doi.org/10.1007/978-3-031-34486-2 6 [21] Yi Zhang, Jianguo Yang, and Hui Jiang. Machine tool thermal error modeling and prediction by grey neural network. The International Journal of Advanced Manufacturing Technology, 59(9):1065-1072, 2012. [22] Patrick P ̈ohlmann, Jens M ̈uller, Steffen Ihlenfeldt. Strategy for compensation of thermally induced displacements in machine structure using distributed temperature field control. Journal of Machine Engineering, 2024, Vol. 24, No. 3,5-16 12
|
2509.16221
|
Evaluation of Ensemble Learning
Techniques for handwritten OCR
Improvement
Evaluierung von Ensemble Lern Techniken zur
Verbesserung von hangeschriebener OCR
Martin Preiß
Universitätsbachelorarbeit
zur Erlangung des akademischen Grades
Bachelor of Science
(B. Sc.)
im Studiengang
IT Systems Engineering
eingereicht am 29. July 2022 am
Fachgebiet Digital Health & Machine Learning der
Digital-Engineering-Fakultät
der Universität Potsdam
Gutachter
Prof. Dr. Christoph Lippert
Betreuer
Benjamin Bergner
0
Abstract
For the bachelor project 2021 of Professor Lippert’s research group, handwritten
entries of historical patient records needed to be digitized using Optical Character
Recognition (OCR) methods. Since the data will be used in the future, a high degree
of accuracy is naturally required. Especially in the medical field this has even more
importance. Ensemble Learning is a method that combines several machine learning
methods. This procedure is claimed to be able to achieve an increased accuracy for
existing methods. For this reason, Ensemble Learning in combination with OCR is
investigated in this work in order to create added value for the digitization of the
patient records. It was possible to discover that ensemble learning can lead to an
increased accuracy for OCR, which methods were able to achieve this and that the
size of the training data set did not play a role here.
iii
0
Zusammenfassung
Für das Bachelorprojekt 2021 des Lehrstuhls von Professor Lippert sollten handge-
schriebene Einträge von historischen Patientenakten mithilfe von Optical Character
Recognition (OCR) Methoden digitalisiert werden. Da die Daten weiterhin verwen-
det werden sollen, ist natürlich eine hohe Genauigkeit erforderlich. Gerade im
medizinischen Bereich gewinnt dies noch mehr an Bedeutung. Ensemble Learning
ist ein Verfahren welches mehrere Machine Learning Methoden kombiniert. Diesem
Verfahren wird die Fähigkeit nachgesagt, für bestehenden Mehtoden eine erhöh-
te Genauigkeiten dieser erreichen zu können. Aus diesem Grund wird Ensemble
Learning in Kombination mit OCR in dieser Arbeit untersucht, um für die Digiatli-
sierung der Patientenakten einen Mehrwert zu schaffen. Dabei konnte beantwortet
werden, dass Ensemble Learning für OCR zu einer erhöhten Accuracy führen kann,
welche Methoden dies umsetzten und dass die größte des Trainingsdatensatzes hier
keine Rolle spielte.
v
0
Acknowledgments
Since all the people I want to thank are fluent in german, I have written the
acknowledgment in german. Nevertheless an english translation can be found
below.
German original
Ich habe sehr stark dafür kämpfen müssen um am HPI studieren
zu dürfen. Deshalb möchte ich als erstes den Menschen in meinem Leben danken
die es mir überhaupt erst ermöglicht haben diese Zeilen zu schreiben. Mein Dank
geht an meine Familie die mich jetzt nun 23 Jahre lang auf diesem Planeten un-
terstützt. Danke Papa, dass du stets so darauf erpicht warst, dass aus mir was
wird. Danke Mama, dass du stets drauf geachtet hast, dass ich auch ein gesunder
Bub werde. Danke Schwesterherz, dass ich immer auf dich zählen kann. Ich bin
sehr stolz auf dich. Natürlich will ich auch nicht meine Omas, Tanten, Patentante,
Cousinen, Onkel und Freunde vergessen. Ohne euch wäre ich nicht wo ich heute
stehe. Vielen Danke für eure Unterstützung und Zuspruch all die Jahre.
Auch möchte ich den Betreuern des Bachelorprojekts Benjamin Bergner und
Prof. Dr. Christoph Lippert für die Unterstützung das letze Jahr danken. Ich hab
das Jahr als sehr spannend empfunden und freue mich, dass wir ein wenig Wissen
von Ihnen abschöpfen durften.
Zum Schluss möchte ich noch meinem Team danken, welches mich das letzte
Jahr begleiten durfte. Wir waren klasse Leute und jeder von euch hat seinen eigenen
Satz verdient . Was ihr mit diesem macht ist eure Sache (zwinkersmiley). Danke
Cedric, für deinen starken unterstützenden Rücken. Danke Smilla, für ihre Güte uns
in den letzten Monaten zu behüten. Danke Elena, die mir half TrOCR zu bezwingen.
Danke Paul, dem zweit besten Datenputzer dieses Teams :P. Danke Romeos Gehirn.
Danke Tom , dass er nicht wahnsinnig durch mich geworden ist . Danke Abdu, der
mir immer half neues zu lernen. Natürlich danke ich auch dem Leser dieser Arbeit,
dass er sich die Zeit nimmt in meinen Zeilen zu verweilen. Ich wünsche viel Spaß
beim Lesen.
English translation
I had to fight very hard to be able to study at HPI. Therefore,
I would first like to thank the people in my life who made it possible for me to write
these lines. My thanks goes to my family who has supported me on this planet
for 23 years now. Thank you dad for always being so eager with me to become
vii
successful. Thank you mom for always making sure that I would be a healthy lad.
Thank you sister, that I can always count on you. I am very proud of you. Of course
I don’t want to forget my grandmas, aunts, godmother, cousins, uncles and friends.
Without you I would not be where I am today. Thank you for your support and
encouragement all these years.
I would also like to thank the supervisors of the bachelor project Benjamin
Bergner and Prof. Dr. Christoph Lippert for their support during the last year.
I found the year very inspiring and I am glad that we were able to gain some
knowledge from them. Finally, I would like to thank my team, which was allowed
to accompany me during the last year. We have been great folks and each of you
deserves your own sentence. What you do with it is up to you (wink wink). Thank
you Cedric, for your strong supportive back. Thank you Smilla, for your kindness
to shepherd us in the last months. Thank you Elena, for helping me defeat TrOCR.
Thank you Paul, the second best data cleaner of this team :P. Thank you Romeo’s
brain. Thanks Tom for not going insane because of me . Thanks Abdu who always
helped me to learn new things. Of course I also thank the reader of this work for
taking the time to dwell in my lines. I wish a lot of fun while reading.
viii
0
Contents
Abstract
iii
Zusammenfassung
v
Acknowledgments
vii
Contents
ix
1
Introduction
1
1.1
Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
1.2
Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
2
Background
3
2.1
OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3
2.1.1
Definition . . . . . . . . . . . . . . . . . . . . . . . . . . .
3
2.1.2
OCR in Deep Learning . . . . . . . . . . . . . . . . . . . .
3
2.2
Ensemble Learning
. . . . . . . . . . . . . . . . . . . . . . . . . .
4
2.2.1
Definition . . . . . . . . . . . . . . . . . . . . . . . . . . .
4
2.2.2
Assets and Drawbacks . . . . . . . . . . . . . . . . . . . .
4
2.2.3
Design Levels . . . . . . . . . . . . . . . . . . . . . . . . .
5
3
Related Work
7
4
Methodology
9
4.1
Experimental Process . . . . . . . . . . . . . . . . . . . . . . . . .
9
4.2
OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9
4.2.1
TrOCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9
4.2.2
AttentionHTR . . . . . . . . . . . . . . . . . . . . . . . . .
10
4.2.3
SimpleHTR . . . . . . . . . . . . . . . . . . . . . . . . . .
11
4.3
Ensemble Learning Methods . . . . . . . . . . . . . . . . . . . . .
12
4.3.1
Dataset Level . . . . . . . . . . . . . . . . . . . . . . . . .
13
4.3.2
Base Learner Level . . . . . . . . . . . . . . . . . . . . . .
14
4.3.3
Output Level
. . . . . . . . . . . . . . . . . . . . . . . . .
15
4.3.4
Combining the Methods . . . . . . . . . . . . . . . . . . .
16
ix
4.3.5
Ineligible Methods . . . . . . . . . . . . . . . . . . . . . .
16
5
Experiments
19
5.1
Datasets
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19
5.2
Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20
5.3
Experimental Setup and Execution . . . . . . . . . . . . . . . . . .
21
5.4
Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
23
6
Discussion
27
6.1
Research Question 1 . . . . . . . . . . . . . . . . . . . . . . . . . .
27
6.2
Research Question 2 . . . . . . . . . . . . . . . . . . . . . . . . . .
29
6.2.1
Dataset Level . . . . . . . . . . . . . . . . . . . . . . . . .
29
6.2.2
Base Learner Level . . . . . . . . . . . . . . . . . . . . . .
31
6.2.3
Output Level
. . . . . . . . . . . . . . . . . . . . . . . . .
34
6.2.4
Conclusion RQ2 . . . . . . . . . . . . . . . . . . . . . . . .
36
6.3
Research Question 3 . . . . . . . . . . . . . . . . . . . . . . . . . .
37
7
Conclusions and Outlook
39
Appendix
41
Bibliography
45
Declaration of Authorship
51
x
0
List of Figures
2.1
General supervised learning procedure [6] . . . . . . . . . . . . .
4
2.2
General structure of an Ensemble [10] . . . . . . . . . . . . . . . .
6
4.1
Model architecture of TrOCR [38] . . . . . . . . . . . . . . . . . .
10
4.2
Model architecture of AttentionHTR [39] . . . . . . . . . . . . . .
11
4.3
Model architecture of SimpleHTR [40]
. . . . . . . . . . . . . . .
12
5.1
Example pictures from the 2 Datasets with Duke on the left and
IAM on the right side . . . . . . . . . . . . . . . . . . . . . . . . .
20
5.2
Word Accuracy
. . . . . . . . . . . . . . . . . . . . . . . . . . . .
21
xi
0
List of Tables
5.1
Experimentation Results of Single OCR Modells . . . . . . . . . .
23
5.2
Experimentation Results of homogenous TrOCR Ensembles on
Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
24
5.3
Experimentation Results of homogenous TrOCR Ensembles on IAM
Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
24
5.4
Experimentation Results of homogenous AttentionHTR Ensembles
on Duke Data
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
24
5.5
Experimentation Results of homogenous AttentionHTR Ensembles
on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
24
5.6
Experimentation Results of homogenous SimpleHTR Ensembles on
Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
25
5.7
Experimentation Results of homogenous SimpleHTR Ensembles on
IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
25
5.8
Experimentation Results of heterogenous Ensembles on Duke Data
25
5.9
Experimentation Results of heterogenous Ensembles on IAM Data
25
6.1
Comparison of Single WA’s with maximal achieved Ensemble WA’s
28
6.2
Comparison of best achieved base learner or single WA’s with
maximal achieved Ensemble WA’s . . . . . . . . . . . . . . . . . .
29
6.3
counted number of best performing dataset level methods per OCR
model/dataset combination . . . . . . . . . . . . . . . . . . . . . .
30
6.4
counted number of best performing dataset level methods per OCR
model/dataset combination per output level method . . . . . . . .
31
6.5
Averages of the dataset level methods . . . . . . . . . . . . . . . .
31
6.6
Maximum values of homogenous and heterogenous Ensembles . .
32
6.7
Differences of the highest achieved dataset level methods of the
heterogenous and the homogenous approach
. . . . . . . . . . .
33
6.8
Differences of the highest achieved output level methods of the
heterogenous and the homogenous approach
. . . . . . . . . . .
33
6.9
Averaged WA’s of base learners without Partitioning
. . . . . . .
34
6.10 Maximum reached WA’s of base learners . . . . . . . . . . . . . .
34
6.11 Counted number of best performing output level methods per OCR
model/dataset combination . . . . . . . . . . . . . . . . . . . . . .
35
xiii
6.12 Counted number of best performing output level methods per OCR
model/dataset combination per dataset level method . . . . . . . .
36
6.13 Averages of the output level methods . . . . . . . . . . . . . . . .
36
6.14 Differences of the highest base learners to the most accurate output
level method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
37
7.1
Character Error Rates of Single OCR Modells . . . . . . . . . . . .
41
7.2
Character Error Rates of homogenous TrOCR Ensembles on Duke
Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
41
7.3
Character Error Rates of homogenous TrOCR Ensembles on IAM
Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
41
7.4
Character Error Rates of homogenous AttentionHTR Ensembles on
Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
42
7.5
Character Error Rates of homogenous AttentionHTR Ensembles on
IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
42
7.6
Character Error Rates of homogenous SimpleHTR Ensembles on
Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
42
7.7
Character Error Rates of homogenous SimpleHTR Ensembles on
IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
42
7.8
Character Error Rates of heterogenous Ensembles on Duke Data .
43
7.9
Character Error Rates of heterogenous Ensembles on IAM Data
.
43
xiv
1
Introduction
1.1 Motivation
As part of the 2021 bachelor’s project "Human in the Loop: Deep Learning on hand-
written Patient Records" of Professor Lippert’s research group, historical patient
records of the Walter Kempner Diet had to be digitized for the Duke University.
For this purpose, various modern Optical Character Recognition (OCR) methods
were used to extract the handwriting contained in the documents. This provided
acceptable results. But since the digitized data will be used for medical studies, it
is important that they are captured as accurately as possible. After all, important
decisions and findings will be made later on the basis of this data. During the
practical implementation of the OCR methods for the bachelor project, the idea of
using several of these methods together came up independently. The methods had
different accuracies and the idea of combining them was quickly considered. On
the one hand, there would be no need to commit to only one method and discard
the others. On the other hand, it was also questioned whether a combination could
lead to an increased accuracy. After a short research, it was discovered that the
idea is not new and is summarized under the term Ensemble Learning. Since many
blog articles on this topic also promised increased accuracy [1, 2], it was decided
to get to the bottom of this topic in combination with OCR in order to hopefully
create added value for the digitization of patient records.
1.2 Contribution
For this purpose, the following research questions will be answered in this paper.
First, the main question is whether Ensemble Learning can really improve OCR.
After all, further research questions on this topic would make less sense if Ensemble
Learning is useless for OCR anyway. This is formulated in RQ1 as follows: "Can
Ensemble Learning improve the Accuracy of modern OCR methods?" If the answer
to RQ1 is yes, it is of course also of particular interest which methods have created
this added benefit. The reason for this is that ensemble learning is a large topic
and several approaches can be pursued here. Therefore, the goal of RQ2 is to
determine "Which Ensemble Learning methods are the most valuable?" One of the
1
Chapter 1
Introduction
main difficulties of the bachelor project, not mentioned so far, was the relatively
small data set, which was needed for the implementation of OCR. Therefore, the
last question to be asked in RQ3 is: "Can Ensemble Learning add significantly
better value on smaller datasets?". Since it would be very interesting to know, if
Ensemble Learning can help more on small datasets compared to large datasets.
To answer these 3 research questions, this paper is structured as follows. First
of all, in the background section, important terms around OCR and Ensemble
Learning will be clarified for readers with less previous knowledge. Then, the
methodology chapter will explain how the research questions will be approached.
In the same chapter, the involved OCR methods and ensemble learning methods
are named and explained. In the experiment chapter it is then described how
these methods were put into practice and what results they had. These results
are then evaluated in the discussion chapter to answer the research questions.
The statements presented there are then summarized at the end in the conclusion
chapter and further interesting research possibilities for OCR in combination with
ensemble learning are mentioned.
2
2
Background
Initially, the purpose of this section is to formally clarify important terms related
to OCR and Ensemble Learning. Readers with the necessary knowledge can skip
this section.
2.1 OCR
2.1.1 Definition
Optical Character Recognition (OCR) in general refers to the translation of optically
available handwritten or printed characters/words/numbers (any kind of symbols)
into digital form[3]. The term Handwritten Text Recognition (HTR) is often used in
this context. HTR is a specialization of OCR that converts sequences of handwritten
symbols (words/phrases)[4]. Although HTR would often be formally the more
accurate term, in the majority of scientific papers the term OCR is used rather than
HTR. Nevertheless, it is necessary to mention HTR as a synonym at this point,
since the term may help in researching further work. In the further course of this
work, OCR will also be used as a term for HTR.
2.1.2 OCR in Deep Learning
There are many possibilities to implement OCR in practice. In recent years, however,
Deep Learning approaches mainly have been used[5]. Deep learning is a type of
machine learning that is characterized by using multiple processing layers, each
with a large number of parameters, to make a prediction [6]. Deep refers here to
the large number of parameters and layers that are used by deep learning models.
In the field of deep learning, OCR can be classified as a sequence labeling task for
text or a classification task for individual symbols.[6, 7] Therefore, a model that
implements OCR receives a sequence as input (the input image) and outputs a string
(sequence labeling) or only a single character (classification). Usually, sequence
labeling models are supervised learning algorithms. Since all models of this work
can be classified in this category, this concept and their general procedure shall be
clarified. Supervised learning refers to methods that make a label prediction on a
given set of input features by prior parameter adaptation of the prediction model.
3
Chapter 2
Background
The parameter adaptation is achieved by a learning algorithm. This algorithm
receives already labeled input features as input and outputs the finished model
with adjusted parameters in the end. The process of "learning" is generally referred
to as training. In 2.1 you can see an overview of supervised learning.
Figure 2.1: General supervised learning procedure [6]
2.2 Ensemble Learning
2.2.1 Definition
Ensemble Learning is a method of machine learning[8]. Usually, exactly one model
is involved in making predictions, and its predictions are used directly. The differ-
ence in Ensemble Learning is that several different models are applied to generate
predictions. These models are called base learners, base classifiers, or inducers [8,
9, 10]. For a given problem, all base learners perform their own prediction. The
predictions of all base learners are then passed to an output component. This output
component then uses all predictions to compute the final prediction. This computa-
tion can be done in a variety of ways and the most important and well-known are
part of this work
2.2.2 Assets and Drawbacks
The main idea of Ensemble Learning is that multiple opinions combined can be
more accurate than a single opinion. This basic idea was already proven in 1785 by
Marquis de Condorcet in his Jury Theorem [11]. To achieve more precise estimation
in ensemble learning, the following 2 principles must be satisfied [10]. First, the
base learners must have as much diversity as possible in their estimates. This
4
Ensemble Learning
Section 2.2
ensures a high variety of opinions, which can be taken into account in the final
combination. On the other hand, the base learners should estimate as precisely
as possible despite the diversity principle. This is important because a high di-
versity of opinions does not add value if all opinions are wrong. In combination,
these principles allow increased accuracy in estimation as well as other benefits
at statistical, computational, and representational levels [9, 10, 12]. Statistically,
combining multiple models reduces the likelihood of using a single model that
overfits on the training data. Of course, an ensemble can also overfit, but then the
principle of diversity of base learners would be violated. In addition, individual
models can remain in a local optimum in the training process. The combination of
those models would allow a better approximation of the best possible optimum in
the function space. The ensemble of several models also increases the probability
of better generalization. In other words, to react better to new data not available in
the data set, since the combination of the individual models expands the function
space. Naturally, Ensemble Learning also has disadvantages. These are mainly the
high costs of resources, for example time, power or memory, which are caused by
the training of the base learner or the prediction of the ensemble. However, if these
disadvantages can be ignored or if better accuracy is simply more important, as it
is the case in this work, then ensemble learning offers itself in theory as a possible
tool.
2.2.3 Design Levels
The construction of an ensemble and the ensemble methods that are involved with
it can be practically classified on 3 levels[13]. First, at the dataset level, it has to be
chosen which base learner gets which training dataset with which features. Then,
at the base learner level, it must be decided what methods will be used for the base
learners and of course how many base learners there are in total. Finally, at the
output level, an approach for merging the individual base learner estimates must
be selected. Only if the right decisions are made at these 3 levels the benefits of
Ensemble Learning can be practically implemented. The Ensemble methods used
for the implementation can, however, be classified on several levels.
5
Chapter 2
Background
Figure 2.2: General structure of an Ensemble [10]
6
3
Related Work
The idea of combining ensemble learning with OCR is not new. First papers were
already published in the 1990s. In 1992 for example neural networks (NN) were
combined in an ensemble with the goal of recognizing handwritten digits. This
ensemble achieved 20-25% better results than the best individual NN.[14] In the
following years, research was further advanced and the first ensemble methods
such as bagging and boosting in combination with NNs were evaluated.[15, 16, 17,
18]. Again, better accuracy results could be achieved.
Over the years, ensemble learning methods have been repeatedly evaluated in
combination with other machine learning architectures such as Hidden Markov
Models [12, 19, 20] or Decision Trees[21] as well as with multi-architecture ensem-
bles [22]. Here as well, better accuracy was achieved by using ensemble methods.
However, with the beginning of the Deep Learning era, these evaluations related
to OCR declined. One of the last papers known to the author is the one from
2014 [23], which deals among other topics with ensemble methods for improved
recognition of historical prints. The paper evaluates some ensemble methods but
this is not the main focus of the paper.
The reasons for the decline of evaluations are on the one hand the time and
computational costs of training deep learning networks [24], and on the other hand
the very good generic results of deep learning architectures in OCR [25]. Therefore,
in recent years, there has been more research on developing single Deep Learning
architectures, which already achieved excellent results even without Ensemble
Learning. A good example is the paper by Ahlawat et al. [26], which aims to
outperform Ensemble Learning with a single OCR model to avoid the drawbacks of
ensembles.
This is certainly no reason to believe that the combination of ensemble learning
with OCR is no longer relevant for research. In recent years, papers still have
been published in which individual ensemble learning methods have been com-
bined with specific OCR use cases in order to improve the results. For example,
ensembles have been used to recognize handwritten persian numerals [27], farsi
numerals [28], bangla letters [29], arabic words [30], musical notes [31], medieval
prints/manuscripts [32, 33, 34], or bio-collections [35].
Another work worth mentioning here is the one of Matos Silva [36]. In his
research, table cells from medical records were digitized using a custom-built
7
Chapter 3
Related Work
ensemble of neural networks, which reaffirms the potential of this work in the
medical context as well.
These examples show that ensemble learning is still relevant for OCR. Neverthe-
less, the most recent papers only refer to one or a few ensemble methods. Thus,
there is a lack of an up-to-date assessment of the most important ensemble methods
with respect to OCR. Outside the subject-specific OCR domain, these already exist
Surveys are still published that summarize the current state of research in general
[10, 37], also with respect to Deep Learning [9, 24]. In these surveys, however, OCR
is mentioned at most as a possible application example.
To the best of the author’s knowledge, there is a lack of studies evaluating modern
OCR deep learning models in combination with ensemble learning. This work tries
to fill this gap by reevaluating the most popular and important ensemble learning
methods with regard to OCR.
8
4
Methodology
4.1 Experimental Process
To answer the research questions, the following procedure is applied. First, the
state of the art OCR methods and ensemble methods are identified. Then, in the
experiments, each OCR method is first evaluated on its own. Then, the ensemble
methods are further analyzed. The values recorded there are then compared with
the previously recorded values of the stand-alone OCR methods. This will make it
possible to answer RQ1, whether ensemble methods can contribute to an improved
accuracy with the OCR methods used. To clarify RQ2, the evaluation results of
the ensemble methods will then be compared in order to be able to name the
most effective methods. To answer RQ3, it is necessary to consider two data
sets separately in the evaluations. With the comparison of measured values of a
large and a small dataset, it can be clarified whether Ensemble Learning can add
significantly better value on smaller datasets.
4.2 OCR
In the following, the OCR models used in this work will be named and briefly ex-
plained. For a detailed description of the methods and the exact training procedures,
please visit the corresponding papers.
4.2.1 TrOCR
The first model to be looked at is TrOCR. TrOCR was released in 2021 by a Microsoft
research team and is an end-to-end transformer-based OCR model [38]. Basically,
TrOCR corresponds to a vanilla transformer encoder-decoder structure and the
architecture (see figure) is structured as follows. First, TrOCR resizes the input
image to a size of 384 × 384 pixels and then splits it into 16*16 sized disjoint patches.
Patch embedding as well as positional embedding is then applied to the patches
to retain important contextual information about the other patches. The now
processed sequence of patches is then given as input to the encoder, which consists
of multi-head attention modules and feed forward networks. The encoder extracts
9
Chapter 4
Methodology
features from each of the patches, which then serves as input to the decoder. The
decoder also consists of multi-head attention modules, feed forward networks, and
additionally masked multi-head attention modules. With the output of the encoder,
the decoder creates a probability matrix that assigns token probabilities to specific
subsections of the input image. In the context domain of TrOCR, tokens refer
to particular character sequences, of which TrOCR has 50265 by default. At the
end the output is generated by GreedyMaxDecoding. That means for each section
the most probable token is used for output. By multiplying these probabilities a
confidence score for the prediction is obtained.
Figure 4.1: Model architecture of TrOCR [38]
4.2.2 AttentionHTR
AttentionHTR is an attention-based sequence-to-sequence model for handwritten
word recognition published by the Uppsala University of Sweden in 2022[39]. The
architecture of AttentionHTR consists of four stages: a transformation stage, a
feature extraction stage, a sequence modeling stage, and a prediction stage. In the
transformation stage, the input word image is normalized via a thin-plate-spline
(TPS) transformation, scaling it to a size of 100×32 pixels. The normalized, resized
image is then fed into the encoder. The encoder includes the feature extraction stage
and the sequence modeling stage. The feature extraction stage consists of a ResNet
that encodes the input into a visual feature map. In the sequence modeling stage,
this visual feature map is then transformed into a sequence of features and is used by
a BLSTM to capture contextual information in the sequence. The output sequence
is then used in the prediction phase by an attention-based decoder, which consists
10
OCR
Section 4.2
of an unidirectional LSTM and content-based attention mechanisms. The decoder
outputs a probability matrix with a likelihood for each entry of the character set
per sequence step. Then, for each sequence step, the character with the highest
probability is used as the prediction (GreedyMaxDecoding). Again, a confidence
score can be calculated by multiplying the individual output probabilities.
Figure 4.2: Model architecture of AttentionHTR [39]
4.2.3 SimpleHTR
The last model to be looked at is SimpleHTR, which was published by Harald
Scheidl in 2018[40]. The structure of SimpleHTR corresponds to a typical CRNN
architecture. This means that the architecture consists of 3 components. The first
component is a convolutional neural network (CNN), which receives as input the
original image previously reduced to 128×32 pixels and creates a feature sequence
matrix from it. The feature sequence matrix is then passed to a recurrent neural
11
Chapter 4
Methodology
network (RNN) component, here given as a LSTM. This component creates a proba-
bility matrix of all available characters per image section. Finally, the Connectionist
Temporal Classification (CTC) component is used for decoding, again using Greedy-
MaxDecoding. A confidence score is also calculated from the probability matrix of
the RNN output.
Figure 4.3: Model architecture of SimpleHTR [40]
4.3 Ensemble Learning Methods
Now the ensemble learning methods considered for OCR shall be mentioned and
explained. As described in the background, the construction of an ensemble takes
place on 3 levels. At the same time, it should be clarified on which design level
which ensemble method can be implemented. Afterwards, other popular ensemble
learning methods will be discussed and it will be explained why they are not covered
in this work.
12
Ensemble Learning Methods
Section 4.3
4.3.1 Dataset Level
At the dataset level, decisions are made about which data is given to which base
learner for the training. OCR of course strongly limits the possible ensemble meth-
ods. Therefore, only methods that OCR allows as a sequence labeling/classification
task are possible. This means that all methods can only be applied on the dataset
level if they do not damage the image structure too much, which prevents any
feature sampling for example. Nevertheless, there are infinite other ways to split
the training data among the base learners. But the most popular and at the same
time generic ones are, according to the author’s opinion, CompleteData, Bagging,
KFOLD and Partitioning.
CompleteData
The first and simplest option is to simply give each base learner
the complete training data set unchanged. The principle of divesity is then imple-
mented at the base learner level. CompleteData is not a special ensembling method,
instead it is the default way if none is implemented at the dataset level.
Bagging
One of the best known ensemble learning methods is the bootstrap
aggregation method (bagging)[9, 10, 13, 19]. It corresponds to the combination
of bootstrapping and a later chosen form of aggregation. In bootstrapping, the
original dataset is randomly resampled with replacement for each base learner,
usually keeping the length of the original dataset. It is expected that this will
diversify the distribution of the training input and counteract a possible poor
selection of the training dataset. The possible forms of aggregation are discussed
in the output level chapter.
KFOLD
In machine learning, k-fold-cross-validation is very popular mainly for
the evaluation of machine learning algorithms[41]. In this method, the first step is
to merge the train and validation parts within a train/val/test split of the dataset.
This now newly combined dataset is then split into k new training and validation
splits, where always 1/kth of the dataset becomes the new validation part, which
is disjoint to the other k-1 . The remainder then becomes the training dataset.
For evaluation, the machine learning algorithm is then retrained on each of the k
train/val divisions and tested on it using the unmodified test dataset. The k test
results of the individually trained models are then used to evaluate the algorithm.
This procedure of the dataset split and training can also be taken advantage of in
Ensemble Learning [12, 13]. That way, each baser learner receives one of the k
train/val datasets for training. Then, instead of just testing the baser learners, they
are applied for prediction and merged in the output layer. Therefore, the number
13
Chapter 4
Methodology
of baser learners goes hand in hand with the number of divisions. For the rest of
the work, this approach is referred to with KFOLD.
Partitioning
In partitioning, the training dataset is divided among k base learners
[9, 10]. This means that each base learner receives a kth disjoint part of the training
dataset for training. Especially with large datasets, it is hoped that this will result
in a faster training process with consistent results.
4.3.2 Base Learner Level
At the base learner level, all decisions are made for the base learners underlying the
ensemble. Here, the options available are limited by the base learner architectures
chosen. According to the author, the following 3 decisions have to be made: which
base learners will be selected, how many base learners will be selected and how
to initialize the base learners. These are partly not explicit ensemble learning
methods found in the literature, they rather describe the general procedure for the
construction of the ensemble on base learner level.
Base Learner Selection
As described above, it must first be selected which type
of base learner is used. That means, which machine learning algorithm underlies
which base learner, here either TrOCR, AttentionHTR or SimpleHTR. There are
two approaches for this: the heterogeneous or the homogeneous approach [9]
In the homogeneous case, all base learners are based on the same architecture,
for example k times TrOCR for k base learners. In the heterogeneous approach,
different architectures are used for the base learners, such as k/3 times TrOCR,
k/3 times AttentionHTR and k/3 times SimpleHTR for k base learners or in other
variations.
Number of Base Learners
The next step is one of the most important. It must
be chosen how many k base learner will be used at all. This decision is fundamental
for the ensemble. If k is chosen too small, the ensemble may have too little diversity
of opinions. If k is chosen too large, the computational effort may be too high to be
useful in practice. Also, k should be an odd number so that a total majority is more
likely in voting procedures.
Base Learner Initialisation
Finally, the choice has to be made, which base
learner will be initialized with which parameters and which initial weights it will
receive. As it is usual also without ensemble learning, the initialization of the initial
14
Ensemble Learning Methods
Section 4.3
weights can be done randomly or by transfer learning. Even without different
parameter choice/random initialization of the base learner, it is possible that the
base learners become diverse enough, because their training does not necessarily
have to be deterministic.
4.3.3 Output Level
At the output level, the predictions of the base learner are combined for the final
result. Since labels/classifaction are combined here, only voting, probability and last
layer combination methods are possible. Therefore, all combination methods that
use pure numerical predictions as an input, such as averaging, which can be used
for regression problems, are not possible. In this work, majority voting, weighted
voting and max probability methods are discussed.
WordVote
The most popular and at the same time simplest method for merging
the predictions is majority voting [9, 13] here called WordVoting. In WordVot-
ing, each of the base learner’s predictions is counted by the number of times the
prediction occurred. The prediction that occurred most often is then used as the
final result. If there is no absolute majority, for example, if all base learners vote
differently, the first found maximum is then chosen as the output. This is also the
case for the other methods.
CharVote
CharVoting follows a similar approach as WordVoting. The main
difference is that instead of voting for the whole prediction, each character is voted
individually. The idea for this came from the author to see if voting on character
level can be more effective. In the implementation, all predictions are first brought
to the same length by appending spaces. Now for each position i it is checked
which letter was predicted the most. This letter is then at position i of the output.
Therefore a voting takes place for each position of the character level. Finally, the
excess blanks are truncated.
WeightedWordVote
In WeightedWordVoting, as the name implies, the predic-
tions of the k base learners are weighted for voting [19, 20, 24]. The weighting
process works as follows. Prior to the predictions, a ranking of the base learners is
determined based on their performance on the validation dataset. The predictions
of the best performing base learner are then weighted with k points, those of the
second best performing base learner with k-1 and so on until the prediction of the
15
Chapter 4
Methodology
worst base learner is weighted with only 1. The weights of all equal predicitons are
then added and the prediction with the highest weight is returned as the output.
WeightedCharVote
The process of WeightedCharVoting is very similar to that
of WeightedWordVoting. But here again the final output is decided on character
level. Also in this case it is interesting to see if the voting on total or character level
makes a difference. First the predicitons of the base learner are brought to the same
length like in the Charvoting with the appending of blanks. Then, for each position
of the character level, the weighted voting just described takes place. At the end,
any exceeding blanks are shortened.
MaxProb
The MaximumProbability method (MaxProb) works with the confi-
dence scores of the base learners for the predictions [19]. Here, the base learner
prediction with the highest confidence score is simply returned at the end.
AvgProb
In the MaxProb method, identical predictions are only considered in-
dividually. The AveragedMaximumProbability method (AvgProb) enhances the
MaxProb method. Here the goal was to see if the combination of the confidence
scores of the same predictions makes a difference. Again, the idea came from the
author of this paper. From all identical base learner predictions the average of the
confidence scores is calculated. Just like in the MaxProb procedure, the prediction
with the highest confidence score is then returned.
4.3.4 Combining the Methods
Finally, the question obviously arises: How exactly are the above-mentioned en-
semble methods of the design levels combined? The answer is quite simple. Every
method mentioned above can be combined with all methods of the other design
levels. The implementation of this will be considered in more detail in the experi-
ments.
4.3.5 Ineligible Methods
There are, of course, other methods that are theoretically possible in combination
with OCR. However, these have not been dealt with in this work because they
are too application-specific or cannot be combined with the selected OCR models.
However, for further research purposes, other very popular methods or possibilities
that are interesting from the author’s point of view will be mentioned here.
16
Ensemble Learning Methods
Section 4.3
On the data set level, Input Variation and Category Sort methods are worth
mentioning. Input Variation varies the input for the base learner to achieve more
diversity [9]. For example, the base learners receive differently blurred image
data. In category sort methods the training data is sorted by certain categories
and each base learner receives certain categories for the training, for example only
numbers. The idea for this came up while creating this work, since the medical data
consists of different categories. However, both methods are very dataset specific
and therefore difficult to evaluate in general. In addition, Input Variation is very
similar to DataAugmentation, which is a big topic for its own.
On base learner level the ensemble methods are limited by the selected OCR
models. So any possibilities are excluded, that can not be combined with the 3
previously mentioned OCR models. Worth mentioning here are the very popu-
lar Boosting method or Fast Ensemble methods like Snapshoting. Boosting uses
weighted learning algorithms to more strongly influence the incorrectly predicted
training data of the previous base learner when training the next base learner [9].
Fast Ensemble methods aim to compensate for the training disadvantages of ensem-
bles with specially designed learning algorithms[24]. The most famous example
Snapshoting creates an ensemble from all minima it finds in the training process,
for which cyclic learning rates are needed[24]. However, the 3 models considered
in this paper do not possess the necessary properties for the implementation of
the above mentioned examples. It is also very common to let base learners of the
same architecture vary in their parameters e.g. by different number of layers [13].
However, here the decision was made not to do this, since on the one hand this is
again very application-specific and on the other hand the developers of the models
already concerned themselves enough with the selection for the correct parameters.
On the output level one of the most important ensemble methods is stacking.
Stacking involves the training of an extra machine learning model that combines
the predictions or last layers of the base learner [9, 10, 13]. Unfortunately, stacking
is not very generic, as it has to be adapted very precisely to the model structures.
Moreover, it is such an extensive topic that an extra paper would be necessary. In
theory, the Bayes Combination Method for the output layer appears repeatedly
[12]. It is considered to be the most accurate method in theory, but it is not
practically feasible here, since knowledge about the probabilities of occurrence
of the labels would be needed. Other very popular and interesting methods are
Classifier Selection Methods. Here only certain "experts" of the base learners are
used for the ensemble prediction, e.g. experts only for numbers[24]. This can be
done using the Category Sort methods or by algorithms that evaluate the base
learners for different problems. The Classifier Selection Methods were also not
covered, because they are again a very large topic and very application specific.
17
5
Experiments
Next, the practical procedure will be explained. First of all, it will be clarified on
which datasets the experiments take place and which measured values are recorded.
Then the experimental setup/execution will be described and at the end the final
results will be presented.
5.1 Datasets
For the experiments we will use 2 datasets with handwritten images.
Duke Dataset
The first is the Duke Dataset already mentioned in the introduc-
tion, which consists of cell entries from medical tables. The data were collected on
special patient records in the Rice Diet study by Walter Kempner for the treatment
of hypertension [42]. The image data now available were thereby extracted from 5
columns of these records. These were the blood pressure column, weight column,
and sodium, chlorine, potassium columns of the patients urine. The total size of the
dataset is 6287. There are a total of 2367 cell images of the blood pressure column,
2083 of the weight column, 151 of the sodium urine column, 764 of the chloride
urine column, and 79 of the potassium urine column. In addition, the dataset also
consists of 843 empty cell images. The following characters occur in the labels of
the images: ",./0123456789 1
4
1
2
3
4
1
8”
IAM Words Dataset
The second dataset is the very popular IAM Words Dataset
from the IAM Handwriting Database[43]. In the OCR field, the IAM Words Dataset
is regularly used to evaluate new technologies. It was published by the University
of Bern and contains 115320 isolated and labeled english word images. These
word images were extracted with automatic mechansims from scans of english
scripts, were manually verified and are now available in gray tones. The following
characters appear in the labels of the images:" 0123456789abcdefghijklmnopqrstu-
vwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!" &.’()*+,-./:;?". In the following, the
IAM Word Dataset will also just be called the IAM Dataset.
19
Chapter 5
Experiments
Split Ratio
Both datasets were divided into a train, validation and test part for
the experiments. The reason for this is that the OCR methods require a disjoint
set of training and validation data for the learning process. On the other hand, at
the end, the methods need to be evaluated with the test data that has never been
observed before. Since the Duke Dataset is relatively small, a large test dataset is
necessary to evaluate the ensemble learning techniques well. For a good result of
the ensembles the training and validation part must not be too small. Therefore,
the decision was made for a 70/15/15 split ratio, because it best combines the above
requirements. For the Duke Dataset, the number of training images is therefore
4400, for the validation part 943 and for the test part 944. For the IAM Dataset, there
are 80722 images for training, 17298 for validation, and 17298 for testing. Hence,
the IAM Dataset is about 18 times larger than the Duke Dataset.
(a) Dataset:Duke Label:116¾
(b) Dataset:IAM Label:MOVE
(c) Dataset:Duke Label:146/120
(d) Dataset:IAM Label:Kaunda’s
(e) Dataset:Duke Label:Empty
(f) Dataset:IAM Label:1800-2200
Figure 5.1: Example pictures from the 2 Datasets with Duke on the left and IAM on the
right side
5.2 Metrics
To measure the accuracy of the predictions on the datasets, the Word Accuracy
(WA) and the Char Error Rate (CER) are used. These metrics are commonly used to
measure OCR methods, including the OCR models discussed here [38, 39, 40]. Word
Accuracy is the percentage of correctly estimated words out of the total number of
input words. It is calculated by dividing the number of correctly estimated words
(len(CorrectWords)) by the number of all input words (len(Inputs)) multiplied by
100. The Char Error Rate (CER) corresponds to the normalized levenstein distance
of a word multiplied by 100[38]. The levenstein distance is the number of insertions,
20
Experimental Setup and Execution
Section 5.3
substitutions, and deletions of characters to obtain the true word. This is normalized
by dividing throug number of characters of the ground truth. For measurement the
average of all CER’s of the predictions will be used. However, for the evaluation of
the experiments only the Word Accuracy is used, since this was sufficient for the
evaluation of the research questions. For those interested, the CER’s will be part of
the appendix.
At this point it should be mentioned that the diversity principle explained in
the background chapter is not really measurable. Although some diversity metrics
have been designed, there is no agreement on a standard. In fact, it is generally
agreed that there is no correlation between diversity metrics and the prediction
performance of ensembles [44]. This does not mean that the principle of diversity is
invalid. The base learners should still be designed to be diverse. However, diversity
is difficult to measure since methods can also be diversely designed and lead to the
same result.
Figure 5.2: Word Accuracy
5.3 Experimental Setup and Execution
It shall now be clarified how to proceed exactly in order to evaluate the ensemble
methods mentioned in the methodology.
Initialisation of OCR Modells
Before each experiment, the OCR models must
be initialized first. This is done as follows.
TrOCR
For TrOCR the pre-trained "trocrbasestage1" model is used. For hand-
writing recognition the "trocrbasehandwritten1" model would be more recom-
mended. However, this one has already been pre-trained on the IAM Dataset. So
the decision was made to use the TrOCR default model from the TrOCR paper.
Nevertheless "trocrbasehandwritten1" is mentioned here, because it will give bet-
ter results for other OCR projects. Otherwise, the configuration of the model is
largely untouched. The only parameters that are still set individually are number
of workers with value 8, batch size with value 20 and train epochs with value 8,
which are important for the training process. The number of train epochs is not
very high, because TrOCR does not need many epochs for a good finetuning. The
21
Chapter 5
Experiments
choice for batch size and number of workers was made because this worked best
with the available resources. However, the setting of the parameter values for the
models has no strong relevance for this work. They only have to be constant for all
experiments to provide a comparable basis for the ensemble methods. Additionally
it should be mentioned that TrOCR does not need a character list, because the
standard token dictionary is sufficient here.
AttentionHTR
AttentionHTR is initialized with the pre-trained "Attention-
HTR Imgur5K sensitive" model. It is used because on the one hand case sensitive
labels are in the data and on the other hand only this model provided by the devel-
opers is not already pre-trained on IAM. Additionally, the values for batch size and
character list will be set. Otherwise the parameters set by AttentionHTR will be
left as they are. Again, the relevance of these parameters is relatively insignificant,
since the main focus lies on the ensemble methods. For the batch size the value
32 was chosen and for the character list the above mentioned character list of the
respective dataset is used. The choice for the batch size was also made here due to
the best usage of the available resources.
SimpleHTR
SimpleHTR is initialized without a pre-trained model, since there
is no model with the chararcter list required for the Duke Dataset. The character
list influences the parameters of the architecture so strongly that the use of the
model is no longer possible. In order to be able to draw better comparisons later,
no pre-trained model is also used during the experiments with the IAM Dataset. In
addition, the values for batch size and earlystopping are set to 100. Independent
experiments showed that these provided the best values.
Experimental Execution
Now the execution of the experiments can be con-
sidered. First of all, the 3 OCR models will be evaluated separately on the Duke
and IAM Datasets. For this purpose, the training of the initialized models is per-
formed with the help of the train/val dataset. Once the training is complete, the
Word Accuracy/Char Error Rate of the models is calculated on the test dataset.
This is already the experiment for the evaluation of the single models. The ex-
perimental procedure for the ensemble methods is as follows. Each experiment,
which was performed on the Duke Dataset, should also be performed on the IAM
Dataset. For this purpose, the datasets are first prepared using the 4 dataset level
methods described in the methodology. The training of the base learners is now
to be performed with the datasets that have been processed. For this, the last
fundamental decisions have to be made on the base learner level, i.e. which base
22
Results
Section 5.4
learner corresponds to which OCR model and how many base learners make up
the ensemble. For the base learner selection, both the homogeneous approach
with all OCR models and the heterogeneous approach with an equally distributed
number of OCR models will be considered. For the evaluation of the homogeneous
ensembles, k=5 is chosen, because k=3 would be too few predictions and k=7 would
require too much computational effort. Also an odd number is important. For the
evaluation of the heterogeneous ensemble k=9 is chosen with 3 models each of
TrOCR, AttentionHTR and SimpleHTR. In each case the first 3 trained base learners
of the homogeneous ensembles will be selected here. This means that the dataset
level methods KFOLD and Partitioning are of course not implemented correctly
at this point. But this is not important here, because it is more about the question
whether the heterogeneous approach brings more added value compared to the
homogeneous approach. The choice for k=9 was made because the ratio of the
different OCR models should be the same. In addition, k=3 is too small, k=6 even
and k=12 too large. Now the training of the base learners can be triggered using
the previously processed train/val datasets. Once the training is completed, the
Word Accuracy/Char Error Rate rate of the base learners is calculated on the test
dataset. Finally, the base learners are combined using the output level methods
and their Word Accuracy/Char Error Rate is also calculated. In total, there are 32
different ensembles for the evaluation of the methods.
5.4 Results
From the above described experiments, the following Word Accuracy values were
obtained. The values for the average Char Error Rate can be found in the appendix
for those who are interested. The number of results is overwhelming at first. But in
the conclusions chapter these values are broken down to more simple levels. In the
tables, the best measured value is always printed in bold, as well as for ensembles
best base learner.
TrOCR
AttentionHTR
SimpleHTR
Duke
96,82
93,43
84
IAM
72,83
86,84
75,74
Table 5.1: Experimentation Results of Single OCR Modells
23
Chapter 5
Experiments
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
90,67
96,29
88,98
86,02
96,08
95,86
95,44
97,03
93,33
91,38
83,41
Bagging
93,54
92,69
95,34
95,66
95,02
96,61
95,44
96,93
95,02
93,31
92,19
K-FoldCross
96,34
96,4
96,73
88,87
96,82
98,09
96,82
97,88
96,50
93,03
88,22
Partitioning
0,1
0,2
0,0
0,0
0,32
0,10
0,0,
0,11
0,0
0,32
0,32
Table 5.2: Experimentation Results of homogenous TrOCR Ensembles on Duke Data
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
87,43
77,62
75,24
76,24
80,45
83,51
76,08
83,51
74,57
85,5
82,56
Bagging
78,98
79,08
68,48
70,69
72,16
80,29
67,18
80,29
65,99
78,80
74,69
K-FoldCross
78,89
76,28
81,11
70,74
72,52
80,57
70,63
80,5
69,8
79,32
76,64
Partitioning
74,13
79,25
49,87
77,98
76,43
81,17
64,73
81,17
63,13
79,44
75,54
Table 5.3: Experimentation Results of homogenous TrOCR Ensembles on IAM Data
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
94,07
92,27
93,96
93,75
92,26
95,76
94,28
95,76
94,28
96,08
94,60
Bagging
89,62
90,78
90,89
91,53
89,94
93,96
91,42
93,75
91,74
93,11
91,84
K-FoldCross
92,58
93,11
91,00
92,48
91,84
96,40
94,28
95,87
93,75
95,97
93,54
Partitioning
60,38
65,78
69,39
63,35
69,49
75,42
71,08
76,48
67,69
76,38
69,60
Table 5.4: Experimentation Results of homogenous AttentionHTR Ensembles on Duke
Data
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
87,03
87,43
87,63
87,07
86,47
89,45
87,37
89,32
87,18
89,80
89,09
Bagging
85,33
85,79
86,69
86,39
85,63
88,68
86,23
88,46
85,83
88,98
88,29
K-FoldCross
86,96
86,68
86,96
87,06
86,83
89,40
87,42
88,99
87,13
89,84
89,03
Partitioning
84,02
83,63
83,19
83,77
83,90
86,83
84,14
86,57
83,72
87,19
86,23
Table 5.5: Experimentation Results of homogenous AttentionHTR Ensembles on IAM Data
24
Results
Section 5.4
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
84,85
82,63
83,69
87,29
82,42
91,10
84,96
90,36
89,41
90,36
86,44
Bagging
85,06
78,18
86,97
83,90
84,53
89,41
84,85
89,51
87,92
90,68
88,14
K-FoldCross
91,21
89,09
90,57
87,71
91,00
94,49
90,78
93,22
90,15
94,92
91,84
Partitioning
55,40
43,40
58,26
50,42
45,33
64,60
59,53
63,98
57,94
61,44
55,93
Table 5.6: Experimentation Results of homogenous SimpleHTR Ensembles on Duke Data
Dataset Methods
Base Learners
Output Level Method
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
74,66
74,25
74,01
73,46
73,65
80,44
72,45
79,25
71,76
80,73
77,28
Bagging
74,38
75,04
73,24
73,72
73,18
80,47
73,01
79,58
72,35
80,61
77,53
K-FoldCross
76,22
76,73
76,43
76,22
76,52
82,28
75,76
81,27
75,56
82,40
79,69
Partitioning
63,91
62,60
63,79
64,74
64,57
71,98
62,87
71,44
62,15
73,04
68,48
Table 5.7: Experimentation Results of homogenous SimpleHTR Ensembles on IAM Data
Dataset Methods
Output Level Method
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
97,35
92,48
98,09
91,95
92,01
81,35
Bagging
96,50
88,87
97,03
87,61
93,06
87,98
K-FoldCross
97,99
93,75
96,61
93,75
97,35
91,95
Partitioning
75,10
36,33
76,48
7,20
33,26
23,09
Table 5.8: Experimentation Results of heterogenous Ensembles on Duke Data
Dataset Methods
Output Level Method
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
90,70
76,42
89,70
74,74
88,67
80,52
Bagging
89,59
72,39
88,91
69,54
85,92
80,84
K-FoldCross
90,12
77,51
89,95
74,47
87,84
81,16
Partitioning
87,61
61,62
86,72
59,67
83,19
71,90
Table 5.9: Experimentation Results of heterogenous Ensembles on IAM Data
25
6
Discussion
With the measured values collected above, the research questions will now be
answered.
6.1 Research Question 1
First of all, concerning the answer of RQ1: "Can Ensemble Learning improve
the Accuracy of modern OCR methods?". The idea to clarify this question was
to compare the models trained alone with the ensemble results. In order to not
confront 24 measured values with only one single one, only the Word Accuracies
of the highest Word Accuracies of the respective ensemble are considered in the
following table 6.1. Thereby it will be looked whether the Word Accuracies of the
highest ensemble method is also larger than that of the single model. The values
of the heterogeneous ensembles are not considered here, since these consist of all
OCR models and therefore can hardly be compared with the single model values.
It can be seen immediately that the Word Accuracy of the ensembles is always
better than the ones of the single models. Of course, the relation is quite unbalanced
here. The reason is that each single WA is evaluated alone with the maximum of 24
output level combinations. In order to balance this a bit, the base learners should
also be included, because the base learners also obviously correspond to a single
model. Therefore the highest Word Accuracy value of the respective base learner
or the single model should be used for the comparison. Hence, in Table 6.2 per
OCR model/dataset combination, 21 single models are now compared with the 24
output level combinations.
Even with the inclusion of the base learners, the individual models were only
once better than one of the ensembles. For the ensemble that is worse than the
base learner (TrocrIamKFOLD1), it is noticeable that the principle of the highest
possible accuracy is violated here. Even with the inclusion of the base learners,
the individual models were only once better than one of the ensembles. For the
ensemble that is worse than the base learner (TrocrIamKFOLD1), it is noticeable
that the principle of the highest possible accuracy is violated here. Since the other
base learners of the TrocrIamKFOLD ensemble are substantially more inefficient on
the test dataset than the TrocrIamKFOLD1 base learner. If this would be remedied
27
Chapter 6
Discussion
by retraining the other base learners and keeping only the trained base learners
that also have higher accuracy, it is very possible that the ensemble would predict
better. Furthermore, just because a base learner performs better on the test dataset
it does not mean that the ensemble as a whole is worse. It is not known to what
extent the base learner generalizes well. It is quite possible that this base learner
performs well only here on the test dataset and is much worse on other unknown
data. As mentioned in the background, ensembles have the advantage to generalize
better and to have a lower risk of overfitting. For this reason, the ensemble would
still be preferable here.
With this knowledge, RQ1 can be answered as follows. Yes, Ensemble Learning
can help with high probability to improve the accuracy of OCR methods. However,
as seen above, this does not mean that this is always the case. With a good choice
of base learners, however, Ensemble Learning is definitely suitable as a possible
tool to get a few last percentage points of Word Accuracies. Especially with the
background thought of the possible better generalization of the ensembles.
OCR
Modell
Dataset
Single WA
Best Ensemble WA
Single <
Best Ensemble ?
TrOCR
Duke
96,82
98,09
True
AttentionHTR
Duke
93,43
96,40
True
SimpleHTR
Duke
84,00
94,92
True
TrOCR
IAM
72,83
85,53
True
AttentionHTR
IAM
86,84
89,84
True
SimpleHTR
IAM
75,74
82,40
True
Table 6.1: Comparison of Single WA’s with maximal achieved Ensemble WA’s
28
Research Question 2
Section 6.2
OCR
Modell
Dataset
Best Single or
Base Learner WA
Best Ensemble WA
Single <
Best Ensemble ?
TrOCR
Duke
96,82
98,09
True
AttentionHTR
Duke
94,07
96,40
True
SimpleHTR
Duke
91,21
94,92
True
TrOCR
IAM
87,43
85,53
False
AttentionHTR
IAM
87,63
89,84
True
SimpleHTR
IAM
76,73
82,40
True
Table 6.2: Comparison of best achieved base learner or single WA’s with maximal achieved
Ensemble WA’s
6.2 Research Question 2
In RQ2, the question was asked, "Which Ensemble Learning methods are the most
valuable?" To get an answer to this question, the ensemble methods of the different
design levels will be compared to find the most performant methods.
6.2.1 Dataset Level
For the dataset methods, the decisions of the other design levels and the choice of
the dataset obviously have a large impact on them. Therefore, pure Word Accuracies
can not be considered here. Instead, it will be counted which method on which
model/dataset combination achieved the best possible Accuracy of any output
method. This can be seen in table 6.3 In this way, it should be shown which method
has performed best and how often.
According to the table it can be assumed that only CompleteDataset and KFOLD
provide an additional value. However, only one of the 6 output level methods was
counted here. The values of these lie partly very close together, with which this
consideration could easily falsify itself. Therefore, the above counting shall take
place again for all output methods, which is to be examined in table 6.3
With this data it can now be clearly said that, considering the previously set values,
the CompleteData and KFOLD approaches add the most value. This seems to be
the case more often for KFOLD than for the CompleteData method. This confirms
itself again with view on the total averages of the individual ensemble methods,
which can be found in table 6.5
29
Chapter 6
Discussion
Here, KFOLD also has the highest average accuracy, followed by the CompleteData
method. This shows that KFOLD performed with the best results here. For the other
2 DatasetLevel methods, it is immediately noticeable that Partitioning achieved
rather unusable values. On TrOCR even so badly that both base learners and
output methods reach an Accuracy of nearly and/or equal 0. It is interesting that
Partitioning on IAM TrOCR delivers again acceptable results. Therefore, it can be
speculated that the number of training data or the number of training epochs is too
small for Duke TrOCR. Hence, Partitioning may be a good solution for even much
larger datasets or for a different number of base learners. Under the circumstances
considered above, however, Partitioning is rather to be discarded. Looking at the
average word accuracy of the Bagging method, it is noticeable that it performs
only 2 to 1 percentage points worse than the KFOLD or CompleteData methods.
Also in the table 6.4 Bagging were able to achieve 2 times the best result of an
output level method within a OCR model/dataset combination. This shows that
Bagging does not work badly here perse. Nevertheless, KFOLD and CompleData
performed much more often and overall better. This could be due to the fact that
the bagging intensity is possibly too low, too high or bagging simply works better
with more base learners.Still, the potential of the bagging method is proven here.
Finally, it can be said for the methods of the dataset level that all of them except for
Partitioning delivered good results, whereby the KFOLD and the CompleteData
method were the most useful.
OCR Modell
Dataset
CompleteData
KFOLD
Bagging
Partitioning
TrOCR
Duke
1
AttentionHTR
Duke
1
SimpleHTR
Duke
1
heterogenous
Duke
1
TrOCR
IAM
1
AttentionHTR
IAM
1
SimpleHTR
IAM
1
heterogenous
IAM
1
Complete Count
3
5
Table 6.3: counted number of best performing dataset level methods per OCR model/dataset
combination
30
Research Question 2
Section 6.2
OCR Modell
Dataset
CompleteData
KFOLD
Bagging
Partitioning
TrOCR
Duke
4
2
AttentionHTR
Duke
4
3
SimpleHTR
Duke
6
heterogenous
Duke
1
5
TrOCR
IAM
6
AttentionHTR
IAM
4
2
SimpleHTR
IAM
6
heterogenous
IAM
3
3
Complete Count
18
29
2
Table 6.4: counted number of best performing dataset level methods per OCR model/dataset
combination per output level method
CompleteData
KFOLD
Bagging
Partitioning
87,37
88,22
86,07
61,12
Table 6.5: Averages of the dataset level methods
6.2.2 Base Learner Level
On the base learner level, only the heterogenous and homogenous approach as
well as the OCR models underlying the base learners can be evaluated, meaning
which OCR model is best suited as a base learner. The remaining decisions were
made for the experiments as a whole and are not comparable here, because there
is too little measurement data or the underlying methodologies are too different,
like for example the number of base learners. It is meaningless to compare the
number of base learners of 5 with 9 if the methodology is different (homogenous
vs heterogenous) and only 2 different numbers are available.
Comparison of Heterogenous and Homogenous
For the homogenous vs het-
erogenous comparison, the maximum Word Accuracies of all homogenous ensem-
bles should first be compared to the maximum Word Accuracies of all heterogenous
ensembles on both the Duke and IAM datasets. This can be seen in the following
table 6.6.
31
Chapter 6
Discussion
It is noticeable that there is hardly a difference in the maximum of both approaches.
In order to look at a few more values, the highest Word Accuracies of the respective
dataset methods and output methods should now also be used, see Table 6.7 and 6.8.
The difference between the heterogeneous and the homogenous shall be looked at
here. The reason why in the table only the methods WordVote, WeightedWordVote
and MaxProb are shown, follows in the Output Level section.
When looking at the differences in the two tables, it is immediately evident that
none of the methods stands out. The differences between the two are only a few
percentage points apart. The average difference is also just 0.38%. This means that
the heterogeneous approach is slightly better here. But this number is so small that
this should be given little importance. Thus it can be said that both the homogenous
and the heterogenous approach can deliver good results. Nevertheless, it cannot be
said here that one of the methods is better. Beside that, it is interesting to mention
here that the heterogenous Duke Partitioning Ensemble managed to compensate the
weak TrOCR Partitioning base learners. This confirms the advantage of ensembles
that bad base learners can be outvoted.
Dataset
homogenous
heterogenous
Duke
98.09
98.09
IAM
89.843
90.698
Table 6.6: Maximum values of homogenous and heterogenous Ensembles
32
Research Question 2
Section 6.2
Dataset
Dataset Level Methods
Maximum WA’s
homogenous
Maximum WA’s
heterogenous
Difference
Duke
CompleteData
97,03
98,09
1,06
Duke
Bagging
96,93
97,03
0,1
Duke
KFOLD
98,09
97,99
-0,1
Duke
Partitoning
76,48
76,48
0
IAM
CompleteData
89,8
90,7
0,9
IAM
Bagging
88,98
89,59
0,61
IAM
KFOLD
89,84
90,12
0,28
IAM
Partitoning
87,19
87,61
0,42
Table 6.7: Differences of the highest achieved dataset level methods of the heterogenous
and the homogenous approach
Dataset
Dataset Level Methods
Maximum WA’s
homogenous
Maximum WA’s
heterogenous
Difference
Duke
WordVote
98,09
97,99
-0,10
Duke
WeightedWordVote
97,88
98,09
0,21
Duke
MaxProb
96,08
97,35
1,27
IAM
WordVote
89,45
90,7
1,25
IAM
WeightedWordVote
89,32
89,95
0,63
IAM
MaxProb
89,84
88,67
-1,17
Table 6.8: Differences of the highest achieved output level methods of the heterogenous
and the homogenous approach
Base Learner Selection
Now it will be evaluated which OCR model brings the
most additional value as a base learner. In table 6.9 the averaged Word Accura-
cies without the by outlayers affected Partitioning can be seen. In Table 6.10 the
maximum achieved Word Accuracies are visible. The values of the heterogenous
approach are again not considered here, as well as in RQ1.
The first thing that stands out is that SimpleHTR is much worse than TrOCR and
AttentionHTR on average and at maximum. This makes sense because SimpleHTR
33
Chapter 6
Discussion
is the oldest architecture and does not use a pre-trained model for the initialization.
On the Duke Dataset, with the above initializations TrOCR and AttentionHTR
predicted about the same on average. Nevertheless, here TrOCR has the largest
maximum, which is 1.69% higher than that of AttentionHTR, This is different on
the IAM Dataset. Here AttentionHTR is much more accurate in the average as well
as in the maximum. Thus, in conclusion it can be said that TrOCR is best suited for
the Duke Dataset and AttentionHTR is best suited for the IAM Dataset. Of course,
this is only in consideration of the decisions made before.
Dataset
TrOCR
AttentionHTR
SimpleHTR
Duke
94,25
94,24
89,91
IAM
77,25
88,36
77,91
Combined
85,75
91,3
83,92
Table 6.9: Averaged WA’s of base learners without Partitioning
Dataset
TrOCR
AttentionHTR
SimpleHTR
Duke
98.09
96.4
94.92
IAM
85.53
89.84
82.4
Table 6.10: Maximum reached WA’s of base learners
6.2.3 Output Level
Last but not least, a statement should be made about the output methods. The first
thing that stands out is that voting at the character level or the AvgProb method
do not add any value here. The Word Accuracy of these methods are always worse
than their counterpart, the voting on word level or the standard MaxProb method.
For this reason, they are not being looked at in the further evaluation. In the case
of voting at character level, however, it can be assumed that these methods may be
better at dealing with longer words or sentences, because the datasets only have
quite short labels.
Like when looking at the Data Set Level methods, it will now be counted which
method has achieved the best possible accuracy of any dataset level method on
which model/dataset combination. This can be seen in table 6.11
34
Research Question 2
Section 6.2
Looking at these results, it can be assumed that mainly WordVoting and MaxProb
work better. In order to have more values for the comparison, the highest output
level method will be counted again for all dataset level methods, which is to be
examined in table 6.12 (excluded the values of Partition Duke TrOCR)
Once again, WordVote and MaxProb were much better than the WeightedWordVot-
ing method. But some peculiarities can be seen here. For SimpleHTR, the MaxProb
procedure performed significantly better. In general, for the homogenous ensemble
on the IAM Dataset, the MaxProb method always gave the best result see Table 6.11 .
For the predictions on the Duke Dataset, as well as for TrOCR and the heterogenous
ensembles, the voting methods achieved the best Word Accuracies. Therefore, it
can be said that the methods predict better depending on the circumstances. To the
WeightedWordVoting procedure it can be noted that this can lead also to the best
result. When looking at the average accuracy of the methods, it can be seen that
all 3 methods are not very far apart, see Table 6.13 Finally, on the output level, it
can be said that all 3 methods can lead to good results. However, WordVoting and
MaxProb achieved the best results.
OCR Modell
Dataset
WordVote
WeightedWordVote
MaxProb
TrOCR
Duke
1
AttentionHTR
Duke
1
SimpleHTR
Duke
1
heterogenous
Duke
1
TrOCR
IAM
1
AttentionHTR
IAM
1
SimpleHTR
IAM
1
heterogenous
IAM
1
Complete Count
3
1
4
Table 6.11: Counted number of best performing output level methods per OCR
model/dataset combination
35
Chapter 6
Discussion
OCR Modell
Dataset
WordVote
WeightedWordVote
MaxProb
TrOCR
Duke
1
2
AttentionHTR
Duke
2
1
1
SimpleHTR
Duke
2
2
heterogenous
Duke
1
2
TrOCR
IAM
3
1
1
AttentionHTR
IAM
4
SimpleHTR
IAM
4
heterogenous
IAM
4
Complete Count
14
6
11
Table 6.12: Counted number of best performing output level methods per OCR
model/dataset combination per dataset level method
WordVoter
WeightedWordVote
MaxProb
84,74
84,52
82,31
Table 6.13: Averages of the output level methods
6.2.4 Conclusion RQ2
With the accumulated knowledge on the design levels, it is now possible to answer
RQ2. It can be seen that there is no such thing as the one true ensemble method.
Many methods only bring added value under certain conditions. However, it became
clear that Partitioning, Charvote, WeightedCharVote and AvgProb do not bring any
recommendable added value. On the one hand, it became clear that on dataset level
CompleteData as well as KFOLD, and on outpute level WordVote as well as MaxProb
delivered the best results. On the other hand, Baggigng and WeightedWordVoting
were also able to achieve acceptable Word Accuracies, but they rarely reached the
maximum. At base learner level, it was found out that neither the heterogenous
approach nor the homogenous approach were better than the other and that both
can be useful. It also became clear that SimpleHTR is not much useful as a base
learner. In contrast, both TrOCR and AttentionHTR scored well, with TrOCR being
more suitable for the Duke Dataset and AttentionHTR more suitable for the IAM
36
Research Question 3
Section 6.3
Dataset. All these statements are of course only to be understood by looking at the
decisions made before.
6.3 Research Question 3
Finally, to RQ3: "Can Ensemble Learning add significantly better value on smaller
datasets? " The idea to answer this question was to compare the values of the small
Duke Dataset with the large IAM Dataset. First of all, it can be said that the accuracy
values on the Duke Dataset are significantly better than on the IAM Dataset. This is
proven for example by the averages of the Word Accuracy of the ensemble outputs.
The average of all Duke values of the ensemble outputs is 81.47% compared to the
IAM average of 80.13%. Without the WordAccuracies of the Partitioning method
with its strong outlayers it is even 92,88% against 81,56.%. But the Duke Dataset is
smaller with the amount of test data and the character list of the IAM Dataset is
much larger. Therefore, the absolute values cannot be used for the comparison. For
this reason, the values of the difference of the respective highest base learners to
the most accurate output level method will be used, which can be seen in Table
6.14. In the table it can be noticed that 2 times an ensemble of the Duke Dataset
reached a larger distance to the base learner and 2 times an ensemble of the IAM
Dataset. Also looking at the numbers, it is noticeable that they do not differ much.
The largest gap between two differences is just about 3 percentage points. With so
few numbers and so few differences, definitely no side can be favored here. It is also
already indirectly known from RQ1 that ensemble learning added value for both
datasets. Therefore, it can be concluded for RQ3: No, Ensemble Learning could not
prove a higher added value here on a smaller dataset than compared to a larger one.
Nevertheless, Ensemble Learning was able to achieve a higher accuracy on both
datasets. Consequently, Ensemble Learning can be used with small as well as with
large datasets in order to achieve an increased word accuracy.
OCR Modell
Best Duke Single
or Base Leaerner
Best Duke
Ensemble WA
Duke Difference
Best IAM Single
or Base Leaerner
Best IAM
Ensemble WA
IAM Difference
TrOCR
96,82
98,09
1,27
87,43
85,53
-1,9
Attention
94,07
96,4
2,33
87,63
89,84
2,21
simple
91,21
94,92
3,71
76,73
82,4
5,67
All 3 Models
96,82
98,09
1,27
87,63
90,7
3,07
Table 6.14: Differences of the highest base learners to the most accurate output level
method
37
7
Conclusions and Outlook
For the bachelor project 2021 "Human in the Loop: Deep Learning of Handwritten
Patient Records" of Professor Lippert’s research group, Ensemble Learning was
looked at in order to achieve improved accuracy for the digitization of patient
records.
The main question of this work was to find out with RQ1:"Can Ensemble Learning
improve the Accuracy of modern OCR methods?". The answer to this question is
clearly YES. Ensemble Learning offers itself, if one can overlook the disadvantages,
as a very good tool to bring out some last percentage points in the accuracy of OCR.
Here, however, the subliminal non-measurable, but also theoretical, advantages of
Ensemble Learning such as better generalization and protection against overfitting
are particularly worth mentioning. Therefore Ensemble Learning can help for the
digitization of the above mentioned records.
In this context with RQ2, it was then clarified which ensemble learning methods
add the most value. The main finding here was that there is not the one true
ensemble method. Multiple ensemble methods can add more or less value given
different circumstances. However, the most promising results here were achieved at
the dataset level by CompleteData and KFOLD , at the BaseLearner level by TrOCR
for Duke and by AttentionHTR for IAM, and at the output level by WordVoting and
MaxProb. If these methods will be used in a new OCR project, it is very well possible
that they could provide an improved accuracy. Also some here not usable methods
were identified. These were Partitioning, CharVoting, WeightedCharVoting and
AvgProb. The remaining methods were still shown to be relevant, although they
did not produce the best results.
Since the dataset used for the Bachelor project mentioned in the introduction
was very small, the last question to be answered with RQ3 was whether Ensemble
Learning can help more on small datasets. Here it was found out that it makes no
difference whether the dataset is large or small. Ensemble Learning added value
for both datasets.
Overall, the most important finding of this work is that once again the potential
of Ensemble Learning has been revealed. Consequently, ensemble learning as a
research question is still relevant for OCR and can offer decisive added value. In
future work, the methods looked at above could be looked at again with a different
experimental setup (initializations), different datasets, or different OCR models.
39
Chapter 7
Conclusions and Outlook
The question of the optimal number of base learners could even be evaluated in a
separate paper. Also the approaches mentioned in the chapter ineligible methods
can be used for further research. Stacking, Boosting or SnapshotEnsembling could
be particularly interesting here. The topic EnsembleLearning in combination with
OCR remains promising.
40
7
Appendix
TrOCR
AttentionHTR
SimpleHTR
Duke
1.39
2.40
5.72
IAM
18.04
7.05
10.67
Table 7.1: Character Error Rates of Single OCR Modells
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
7,94
1,54
10,18
9,24
1,14
2,90
2,44
1,73
5,13
3,63
6,57
Bagging
3,68
3,92
1,94
2,03
1,54
1,27
3,41
1,50
3,44
2,16
3,09
K-FoldCross
1,61
1,88
1,11
10,50
1,11
0,60
1,62
0,65
1,74
2,45
3,82
Partitioning
131,31
114,91
93,73
110,16
101,81
114,91
134,89
114,91
134,42
122,64
130,42
Table 7.2: Character Error Rates of homogenous TrOCR Ensembles on Duke Data
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
7,18
14,32
16,13
14,86
12,11
10,16
15,95
10,16
17,90
9,51
11,44
Bagging
12,78
13,19
23,26
20,03
19,20
12,52
24,40
12,52
26,25
14,02
17,16
K-FoldCross
13,25
15,23
11,73
20,68
19,75
12,59
22,30
12,59
23,13
13,77
15,54
Partitioning
14,89
11,16
43,03
12,36
12,88
11,36
26,81
11,36
28,28
12,75
15,11
Table 7.3: Character Error Rates of homogenous TrOCR Ensembles on IAM Data
41
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
1,99
2,88
2,55
2,37
3,02
1,92
2,69
1,97
2,69
1,24
1,57
Bagging
3,87
3,15
3,19
2,59
3,87
1,98
3,68
2,58
3,78
2,23
2,68
K-FoldCross
2,61
2,65
3,49
2,49
3,16
1,67
2,82
1,79
3,00
1,20
1,86
Partitioning
13,92
12,10
11,17
13,11
11,03
8,51
2,28
8,73
12,93
7,70
9,46
Table 7.4: Character Error Rates of homogenous AttentionHTR Ensembles on Duke Data
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
7,03
7,19
6,75
7,03
7,25
6,01
7,43
6,03
7,52
5,82
6,01
Bagging
7,78
7,86
7,11
7,38
7,79
6,47
8,09
6,51
8,31
6,24
6,33
K-FoldCross
7,28
7,43
7,11
7,39
7,15
6,21
7,57
6,43
7,75
6,00
6,14
Partitioning
8,69
8,51
9,07
8,62
8,74
7,34
9,28
7,47
9,45
6,95
7,32
Table 7.5: Character Error Rates of homogenous AttentionHTR Ensembles on IAM Data
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
5,65
5,91
5,42
4,39
6,29
3,13
6,16
3,42
3,92
3,44
4,66
Bagging
5,27
7,60
4,44
5,56
5,19
3,81
5,87
3,79
4,54
3,53
4,28
K-FoldCross
3,19
3,51
3,17
3,80
3,17
2,16
3,53
2,22
3,65
1,92
2,90
Partitioning
16,04
21,16
14,78
19,17
20,71
12,53
16,46
12,49
15,43
13,39
15,43
Table 7.6: Character Error Rates of homogenous SimpleHTR Ensembles on Duke Data
Dataset Methods
Base Learners
OutputCombination
1
2
3
4
5
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
11,04
11,47
11,26
11,38
11,35
9,02
12,85
9,47
13,12
8,70
9,92
Bagging
11,16
10,87
11,49
11,21
11,52
8,83
12,62
9,26
13,03
8,72
9,66
K-FoldCross
10,27
10,13
10,23
10,35
10,07
8,21
11,51
8,62
11,64
7,95
8,75
Partitioning
15,97
16,26
16,01
15,55
15,62
12,55
18,30
12,87
18,83
11,78
13,44
Table 7.7: Character Error Rates of homogenous SimpleHTR Ensembles on IAM Data
42
Dataset Methods
OutputCombination
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
1,02
3,55
0,63
3,95
3,45
7,13
Bagging
1,35
5,61
1,19
6,23
2,34
4,53
K-FoldCross
1,20
2,95
1,68
2,89
1,26
2,80
Partitioning
8,89
51,16
8,21
59,55
79,41
90,06
Table 7.8: Character Error Rates of heterogenous Ensembles on Duke Data
Dataset Methods
OutputCombination
WordVote
CharVote
Weighted-
WordVote
Weighted-
CharVote
MaxProb
AvgProb
Complete Data
5,63
11,98
5,86
13,38
6,97
10,30
Bagging
5,96
16,33
6,13
18,74
8,48
12,89
K-FoldCross
5,98
11,96
5,83
14,49
7,33
10,32
Partitioning
6,95
25,72
7,34
27,43
9,87
15,26
Table 7.9: Character Error Rates of heterogenous Ensembles on IAM Data
43
7
Bibliography
[1]
Benjamin Aunkofer. Ensemble Learning. url: https://data-science-blog.com/blog/
2017/12/03/ensemble-learning/ (see page 1).
[2]
Jason Brownlee. A Gentle Introduction to Ensemble Learning Algorithms. url: https:
//machinelearningmastery.com/tour-of-ensemble-learning-algorithms/ (see page 1).
[3]
Ajay Shekhawat Sargur N. Srihari and Stephen W. Lam. Optical character recog-
nition (OCR). In: Encyclopedia of Computer Science. John Wiley and Sons Ltd., 2003,
1326ś1333 (see page 3).
[4]
Veronica Romero, Nicolas Serrano, Alejandro H. Toselli, Joan Andreu Sanchez, and
Enrique Vidal. Handwritten Text Recognition for Historical Documents. In:
Proceedings of Language Technologies for Digital Humanities and Cultural Heritage
Workshop. Association for Computational Linguistics, 2011, 90ś96. url: https://
aclanthology.org/W11-4114.pdf (see page 3).
[5]
JAMSHED MEMON, MAIRA SAMI, RIZWAN AHMED KHAN, and MUEEN UD-
DIN. Handwritten Optical Character Recognition(OCR): A Comprehensive
systematic Literature Review (SLR). IEEE Access (2020). url: https://ieeexplore.
ieee.org/stamp/stamp.jsp?tp=&arnumber=9151144 (see page 3).
[6]
Aston Zhang, Zack C. Lipton, Mu Li, Alex J. Smola, Brent Werness andRachel Hu,
and Shuai Zhang andYi Tay. Dive into Deep Learning. 2021. url: http://d2l.ai/
(see pages 3, 4).
[7]
Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks.
PhD thesis. Technische Universit¨at Munchen, 2008 (see page 3).
[8]
Zhi-Hua Zhou, 181ś210. In: Machine Learning. Singapore: Springer Singapore, 2021.
isbn: 978-981-15-1967-3. doi: 10.1007/978-981-15-1967-3_8. url: https://doi.org/10.
1007/978-981-15-1967-3_8 (see page 4).
[9]
M. A. Ganaie, Minghui Hu, A. K. Malik, M. Tanveer, and P. N. Suganthan. Ensemble
deep learning: A review (2021). eprint: arXiv:2104.02395. url: https://arxiv.org/
pdf/2104.02395.pdf (see pages 4, 5, 8, 13ś15, 17).
[10]
Omer Sagi and Lior Rokach. Ensemble learning: A survey. WIREs Data Mining
and Knowledge Discovery 8:4 (2018), e1249. doi: https://doi.org/10.1002/widm.1249.
url: https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/widm.1249 (see
pages 4ś6, 8, 13, 14, 17).
45
[11]
M.J.A.N.C. de Condorcet. Essai sur l’application de l’analyse à la probabilité
des décisions rendues à la pluralité des voix. L’imprimerie royale, 1785. url:
https://books.google.de/books?id=RzAVAAAAQAAJ (see page 4).
[12]
Roman Bertolami. Ensemble Methods for Offline Handwritten Text Line Recog-
nition. Inauguraldissertation. Universität Bern, 2008. url: https://biblio.unibe.ch/
download/eldiss/08bertolami_r.pdf (see pages 5, 7, 13, 17).
[13]
Alok Kumar and Mayank Jain. Ensemble Learning for AI Developers. Apress
Berkeley CA, 2020. isbn: 978-1-4842-5939-9. doi: https://doi.org/10.1007/978-1-4842-
5940-5 (see pages 5, 13, 15, 17).
[14]
Hansen L.K., Liisberg C., and Salamon P. Ensemble methods for handwritten
digit recognition. In: Neural Networks for Signal Processing II Proceedings of the
1992 IEEE Workshop. 1992, 333ś342. doi: 10.1109/NNSP.1992.253679 (see page 7).
[15]
Michael Perrone and Leon Cooper. When Networks Disagree: Ensemble Meth-
ods for Hybrid Neural Networks. Neural networks for speech and image processing
(1993). doi: 10.1142/9789812795885_0025 (see page 7).
[16]
Harris Drucker, Corinna Cortes, L. D. Jackel, Yann LeCun, and Vladimir Vapnik.
Boosting and Other Ensemble Methods. Neural Computation 6:6 (1994), 1289ś
1301. doi: 10.1162/neco.1994.6.6.1289 (see page 7).
[17]
Jianchang Mao and K.M. Mohiuddin. Improving OCR performance using char-
acter degradation models and boosting algorithm. Pattern Recognition Letters
18:11 (1997), 1415ś1419. issn: 0167-8655. doi: https://doi.org/10.1016/S0167-8655(97)
00137-2. url: https://www.sciencedirect.com/science/article/pii/S0167865597001372
(see page 7).
[18]
Jianchang Mao. A case study on bagging, boosting and basic ensembles of
neural networks for OCR. In: 1998 IEEE International Joint Conference on Neu-
ral Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat.
No.98CH36227). Vol. 3. 1998, 1828ś1833 vol.3. doi: 10.1109/IJCNN.1998.687135 (see
page 7).
[19]
Simon Günter and Horst Bunke. Ensembles of classifiers for handwritten word
recognition. Document Analysis and Recognition 5:4 (2003), 224ś232. url: https:
//doi.org/10.1007/s10032-002-0088-2 (see pages 7, 13, 15, 16).
[20]
Simon Günter and Horst Bunke. Evaluation of Classical and Novel Ensemble
Methods for Handwritten Word Recognition. In: Structural, Syntactic, and Sta-
tistical Pattern Recognition. Ed. by Ana Fred, Terry M. Caelli, Robert P. W. Duin,
Aurélio C. Campilho, and Dick de Ridder. Berlin, Heidelberg: Springer Berlin Hei-
delberg, 2004, 583ś591. isbn: 978-3-540-27868-9 (see pages 7, 15).
46
[21]
S. Bernard, S. Adam, and L. Heutte. Using Random Forests for Handwritten
Digit Recognition. In: Ninth International Conference on Document Analysis and
Recognition (ICDAR 2007). Vol. 2. 2007, 1043ś1047. doi: 10.1109/ICDAR.2007.4377074
(see page 7).
[22]
Wolfgang Wilczok Elkeand Lellmann, 123ś136. In: Reading and Learning: Adaptive
Content Recognition. Ed. by Andreas Dengel, Markus Junker, and Anette Weisbecker.
Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. doi: 10.1007/978-3-540-24642-
8_8. url: https://doi.org/10.1007/978-3-540-24642-8_8 (see page 7).
[23]
William B Lund. Ensemble Methods for Historical Machine-Printed Docu-
ment Recognition. PhD thesis. Brigham Young University, 2014 (see page 7).
[24]
Yongquan Yang and Haijun Lv. A Survey on Ensemble Learning under the Era
of Deep Learning. CoRR abs/2101.08387 (2021). url: https://arxiv.org/abs/2101.
08387 (see pages 7, 8, 15, 17).
[25]
Haifeng Wang, Changzai Pan, Xiao Guo, Chunlin Ji, and Ke Deng. From object
detection to text detection and recognition: A brief evolution history of op-
tical character recognition. WIREs Computational Statistics 13:5 (2021), e1547. doi:
https://doi.org/10.1002/wics.1547. url: https://wires.onlinelibrary.wiley.com/doi/
abs/10.1002/wics.1547 (see page 7).
[26]
Savita Ahlawat, Amit Choudhary, Anand Nayyar, Saurabh Singh, and Byungun
Yoon. Improved Handwritten Digit Recognition Using Convolutional Neu-
ral Networks (CNN). Sensors 20:12 (2020). doi: 10.3390/s20123344. url: https:
//www.mdpi.com/1424-8220/20/12/3344 (see page 7).
[27]
Hossein Karimi, Azadeh Esfahanimehr, Mohammad Mosleh, Faraz Mohammadian
jadval ghadam, Simintaj Salehpour, and Omid Medhati. Persian Handwritten
Digit Recognition Using Ensemble Classifiers. Procedia Computer Science 73
(2015). International Conference on Advanced Wireless Information and Communica-
tion Technologies (AWICT 2015), 416ś425. doi: https://doi.org/10.1016/j.procs.2015.
12.018. url: https://www.sciencedirect.com/science/article/pii/S1877050915034791
(see page 7).
[28]
Yaser Ahangari Nanehkaran, Junde Chen, Soheil Salimi, and Defu Zhang. A prag-
matic convolutional bagging ensemble learning for recognition of Farsi
handwritten digits. The Journal of Supercomputing 77:11 (2021), 13474ś13493 (see
page 7).
[29]
Mir Moynuddin Ahmed Shibly, Tahmina Tisha, and Shamim Ripon. Deep Learning
and Ensemble Methods to Recognize Bangla Handwritten Character. PhD
thesis. East West University Dhaka-1212, Bangladesh, 2020 (see page 7).
47
[30]
Mohamed Awni, Mahmoud I. Khalil, and Hazem M. Abbas. Deep-Learning En-
semble for Offline Arabic Handwritten Words Recognition. In: 2019 14th In-
ternational Conference on Computer Engineering and Systems (ICCES). 2019, 40ś45.
doi: 10.1109/ICCES48960.2019.9068184 (see page 7).
[31]
Ashis Paul, Rishav Pramanik, Samir Malakar, and Ram Sarkar. An ensemble of
deep transfer learning models for handwritten music symbol recognition.
Neural Computing and Applications 34:13 (2022), 10409ś10427 (see page 7).
[32]
Christian Reul, Dennis Christ, Alexander Hartelt, Nico Balbach, Maximilian Wehner,
Uwe Springmann, Christoph Wick, Christine Grundig, Andreas Büttner, and Frank
Puppe. OCR4allÐAn Open-Source Tool Providing a (Semi-)Automatic OCR
Workflow for Historical Printings. Applied Sciences 9:22 (2019). doi: 10.3390/
app9224853. url: https://www.mdpi.com/2076-3417/9/22/4853 (see page 7).
[33]
Christoph Wick and Christian Reul. One-Model Ensemble-Learning for Text
Recognition of Historical Printings. In: Document Analysis and Recognition –
ICDAR 2021. Ed. by Josep Lladós, Daniel Lopresti, and Seiichi Uchida. Cham: Springer
International Publishing, 2021, 385ś399. isbn: 978-3-030-86549-8 (see page 7).
[34]
Christian Reul, Stefan Tomasek, Florian Langhanki, and Uwe Springmann. Open
Source Handwritten Text Recognition on00a0Medieval Manuscripts Using
Mixed Models and00a0Document-Specific Finetuning. In: Document Analysis
Systems. Ed. by Seiichi Uchida, Elisa Barney, and Véronique Eglin. Springer Interna-
tional Publishing, 2022, 414ś428. isbn: 978-3-031-06555-2 (see page 7).
[35]
Icaro Alzuru, Rhiannon Stephens, Andréa Matsunaga, Maurício Tsugawa, Paul Fle-
mons, and José A.B. Fortes. Quality-Aware Human-Machine Text Extraction
for Biocollections using Ensembles of OCRs. In: 2019 15th International Confer-
ence on eScience (eScience). 2019, 116ś125. doi: 10.1109/eScience.2019.00020 (see
page 7).
[36]
João Adriano Portela de Matos Silva. Ensembles de OCRs para aplicações médi-
cas. PhD thesis. University of Porto, 2021 (see page 7).
[37]
Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on
ensemble learning. Frontiers of Computer Science 14:2 (2020), 241ś258 (see page 8).
[38]
Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun
Li, and Furu Wei. TrOCR: Transformer-based Optical Character Recognition with
Pre-trained Models. 2021. doi: 10.48550/ARXIV.2109.10282. url: https://arxiv.org/
abs/2109.10282 (see pages 9, 10, 20).
[39]
Dmitrijs Kass and Ekta Vats. AttentionHTR: Handwritten Text Recognition
Based on00a0Attention Encoder-Decoder Networks. In: Document Analysis
Systems. Springer International Publishing, 2022, 507ś522. isbn: 978-3-031-06555-2
(see pages 10, 11, 20).
48
[40]
Harald Scheidl. Build a Handwritten Text Recognition System using TensorFlow. 2018.
url: https://towardsdatascience.com/build-a-handwritten-text-recognition-system-
using-tensorflow-2326a3487cd5 (see pages 11, 12, 20).
[41]
Jason Brownlee. A Gentle Introduction to k-fold Cross-Validation. url: https : / /
machinelearningmastery.com/k-fold-cross-validation (see page 13).
[42]
Walter Kempner. Treatment of hypertensive vascular disease with rice diet.
The American journal of medicine 4:4 (1948), 545ś577 (see page 19).
[43]
U-V Marti and Horst Bunke. The IAM-database: an English sentence database
for offline handwriting recognition. International Journal on Document Analysis
and Recognition 5:1 (2002), 39ś46 (see page 19).
[44]
Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classi-
fier ensembles and their relationship with the ensemble accuracy. Machine
learning 51:2 (2003), 181ś207 (see page 21).
49
|
Evaluation of Ensemble Learning Techniques for handwritten OCR Improvement Evaluierung von Ensemble Lern Techniken zur Verbesserung von hangeschriebener OCR Martin Preiß Universitätsbachelorarbeit zur Erlangung des akademischen Grades Bachelor of Science (B. Sc.) im Studiengang IT Systems Engineering eingereicht am 29. July 2022 am Fachgebiet Digital Health & Machine Learning der Digital-Engineering-Fakultät der Universität Potsdam Gutachter Prof. Dr. Christoph Lippert Betreuer Benjamin Bergner 0 Abstract For the bachelor project 2021 of Professor Lippert's research group, handwritten entries of historical patient records needed to be digitized using Optical Character Recognition (OCR) methods. Since the data will be used in the future, a high degree of accuracy is naturally required. Especially in the medical field this has even more importance. Ensemble Learning is a method that combines several machine learning methods. This procedure is claimed to be able to achieve an increased accuracy for existing methods. For this reason, Ensemble Learning in combination with OCR is investigated in this work in order to create added value for the digitization of the patient records. It was possible to discover that ensemble learning can lead to an increased accuracy for OCR, which methods were able to achieve this and that the size of the training data set did not play a role here. iii 0 Zusammenfassung Für das Bachelorprojekt 2021 des Lehrstuhls von Professor Lippert sollten handgeschriebene Einträge von historischen Patientenakten mithilfe von Optical Character Recognition (OCR) Methoden digitalisiert werden. Da die Daten weiterhin verwendet werden sollen, ist natürlich eine hohe Genauigkeit erforderlich. Gerade im medizinischen Bereich gewinnt dies noch mehr an Bedeutung. Ensemble Learning ist ein Verfahren welches mehrere Machine Learning Methoden kombiniert. Diesem Verfahren wird die Fähigkeit nachgesagt, für bestehenden Mehtoden eine erhöhte Genauigkeiten dieser erreichen zu können. Aus diesem Grund wird Ensemble Learning in Kombination mit OCR in dieser Arbeit untersucht, um für die Digiatlisierung der Patientenakten einen Mehrwert zu schaffen. Dabei konnte beantwortet werden, dass Ensemble Learning für OCR zu einer erhöhten Accuracy führen kann, welche Methoden dies umsetzten und dass die größte des Trainingsdatensatzes hier keine Rolle spielte. v 0 Acknowledgments Since all the people I want to thank are fluent in german, I have written the acknowledgment in german. Nevertheless an english translation can be found below. German original Ich habe sehr stark dafür kämpfen müssen um am HPI studieren zu dürfen. Deshalb möchte ich als erstes den Menschen in meinem Leben danken die es mir überhaupt erst ermöglicht haben diese Zeilen zu schreiben. Mein Dank geht an meine Familie die mich jetzt nun 23 Jahre lang auf diesem Planeten unterstützt. Danke Papa, dass du stets so darauf erpicht warst, dass aus mir was wird. Danke Mama, dass du stets drauf geachtet hast, dass ich auch ein gesunder Bub werde. Danke Schwesterherz, dass ich immer auf dich zählen kann. Ich bin sehr stolz auf dich. Natürlich will ich auch nicht meine Omas, Tanten, Patentante, Cousinen, Onkel und Freunde vergessen. Ohne euch wäre ich nicht wo ich heute stehe. Vielen Danke für eure Unterstützung und Zuspruch all die Jahre. Auch möchte ich den Betreuern des Bachelorprojekts Benjamin Bergner und Prof. Dr. Christoph Lippert für die Unterstützung das letze Jahr danken. Ich hab das Jahr als sehr spannend empfunden und freue mich, dass wir ein wenig Wissen von Ihnen abschöpfen durften. Zum Schluss möchte ich noch meinem Team danken, welches mich das letzte Jahr begleiten durfte. Wir waren klasse Leute und jeder von euch hat seinen eigenen Satz verdient . Was ihr mit diesem macht ist eure Sache (zwinkersmiley). Danke Cedric, für deinen starken unterstützenden Rücken. Danke Smilla, für ihre Güte uns in den letzten Monaten zu behüten. Danke Elena, die mir half TrOCR zu bezwingen. Danke Paul, dem zweit besten Datenputzer dieses Teams :P. Danke Romeos Gehirn. Danke Tom , dass er nicht wahnsinnig durch mich geworden ist . Danke Abdu, der mir immer half neues zu lernen. Natürlich danke ich auch dem Leser dieser Arbeit, dass er sich die Zeit nimmt in meinen Zeilen zu verweilen. Ich wünsche viel Spaß beim Lesen. English translation I had to fight very hard to be able to study at HPI. Therefore, I would first like to thank the people in my life who made it possible for me to write these lines. My thanks goes to my family who has supported me on this planet for 23 years now. Thank you dad for always being so eager with me to become vii successful. Thank you mom for always making sure that I would be a healthy lad. Thank you sister, that I can always count on you. I am very proud of you. Of course I don't want to forget my grandmas, aunts, godmother, cousins, uncles and friends. Without you I would not be where I am today. Thank you for your support and encouragement all these years. I would also like to thank the supervisors of the bachelor project Benjamin Bergner and Prof. Dr. Christoph Lippert for their support during the last year. I found the year very inspiring and I am glad that we were able to gain some knowledge from them. Finally, I would like to thank my team, which was allowed to accompany me during the last year. We have been great folks and each of you deserves your own sentence. What you do with it is up to you (wink wink). Thank you Cedric, for your strong supportive back. Thank you Smilla, for your kindness to shepherd us in the last months. Thank you Elena, for helping me defeat TrOCR. Thank you Paul, the second best data cleaner of this team :P. Thank you Romeo's brain. Thanks Tom for not going insane because of me . Thanks Abdu who always helped me to learn new things. Of course I also thank the reader of this work for taking the time to dwell in my lines. I wish a lot of fun while reading. viii 0 Contents Abstract iii Zusammenfassung v Acknowledgments vii Contents ix 1 Introduction 1 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2 Background 3 2.1 OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.1.2 OCR in Deep Learning . . . . . . . . . . . . . . . . . . . . 3 2.2 Ensemble Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2.2 Assets and Drawbacks . . . . . . . . . . . . . . . . . . . . 4 2.2.3 Design Levels . . . . . . . . . . . . . . . . . . . . . . . . . 5 3 Related Work 7 4 Methodology 9 4.1 Experimental Process . . . . . . . . . . . . . . . . . . . . . . . . . 9 4.2 OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 4.2.1 TrOCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 4.2.2 AttentionHTR . . . . . . . . . . . . . . . . . . . . . . . . . 10 4.2.3 SimpleHTR . . . . . . . . . . . . . . . . . . . . . . . . . . 11 4.3 Ensemble Learning Methods . . . . . . . . . . . . . . . . . . . . . 12 4.3.1 Dataset Level . . . . . . . . . . . . . . . . . . . . . . . . . 13 4.3.2 Base Learner Level . . . . . . . . . . . . . . . . . . . . . . 14 4.3.3 Output Level . . . . . . . . . . . . . . . . . . . . . . . . . 15 4.3.4 Combining the Methods . . . . . . . . . . . . . . . . . . . 16 ix 4.3.5 Ineligible Methods . . . . . . . . . . . . . . . . . . . . . . 16 5 Experiments 19 5.1 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 5.2 Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 5.3 Experimental Setup and Execution . . . . . . . . . . . . . . . . . . 21 5.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 6 Discussion 27 6.1 Research Question 1 . . . . . . . . . . . . . . . . . . . . . . . . . . 27 6.2 Research Question 2 . . . . . . . . . . . . . . . . . . . . . . . . . . 29 6.2.1 Dataset Level . . . . . . . . . . . . . . . . . . . . . . . . . 29 6.2.2 Base Learner Level . . . . . . . . . . . . . . . . . . . . . . 31 6.2.3 Output Level . . . . . . . . . . . . . . . . . . . . . . . . . 34 6.2.4 Conclusion RQ2 . . . . . . . . . . . . . . . . . . . . . . . . 36 6.3 Research Question 3 . . . . . . . . . . . . . . . . . . . . . . . . . . 37 7 Conclusions and Outlook 39 Appendix 41 Bibliography 45 Declaration of Authorship 51 x 0 List of Figures 2.1 General supervised learning procedure [6] . . . . . . . . . . . . . 4 2.2 General structure of an Ensemble [10] . . . . . . . . . . . . . . . . 6 4.1 Model architecture of TrOCR [38] . . . . . . . . . . . . . . . . . . 10 4.2 Model architecture of AttentionHTR [39] . . . . . . . . . . . . . . 11 4.3 Model architecture of SimpleHTR [40] . . . . . . . . . . . . . . . 12 5.1 Example pictures from the 2 Datasets with Duke on the left and IAM on the right side . . . . . . . . . . . . . . . . . . . . . . . . . 20 5.2 Word Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 xi 0 List of Tables 5.1 Experimentation Results of Single OCR Modells . . . . . . . . . . 23 5.2 Experimentation Results of homogenous TrOCR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.3 Experimentation Results of homogenous TrOCR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.4 Experimentation Results of homogenous AttentionHTR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.5 Experimentation Results of homogenous AttentionHTR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.6 Experimentation Results of homogenous SimpleHTR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 5.7 Experimentation Results of homogenous SimpleHTR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 5.8 Experimentation Results of heterogenous Ensembles on Duke Data 25 5.9 Experimentation Results of heterogenous Ensembles on IAM Data 25 6.1 Comparison of Single WA's with maximal achieved Ensemble WA's 28 6.2 Comparison of best achieved base learner or single WA's with maximal achieved Ensemble WA's . . . . . . . . . . . . . . . . . . 29 6.3 counted number of best performing dataset level methods per OCR model/dataset combination . . . . . . . . . . . . . . . . . . . . . . 30 6.4 counted number of best performing dataset level methods per OCR model/dataset combination per output level method . . . . . . . . 31 6.5 Averages of the dataset level methods . . . . . . . . . . . . . . . . 31 6.6 Maximum values of homogenous and heterogenous Ensembles . . 32 6.7 Differences of the highest achieved dataset level methods of the heterogenous and the homogenous approach . . . . . . . . . . . 33 6.8 Differences of the highest achieved output level methods of the heterogenous and the homogenous approach . . . . . . . . . . . 33 6.9 Averaged WA's of base learners without Partitioning . . . . . . . 34 6.10 Maximum reached WA's of base learners . . . . . . . . . . . . . . 34 6.11 Counted number of best performing output level methods per OCR model/dataset combination . . . . . . . . . . . . . . . . . . . . . . 35 xiii 6.12 Counted number of best performing output level methods per OCR model/dataset combination per dataset level method . . . . . . . . 36 6.13 Averages of the output level methods . . . . . . . . . . . . . . . . 36 6.14 Differences of the highest base learners to the most accurate output level method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 7.1 Character Error Rates of Single OCR Modells . . . . . . . . . . . . 41 7.2 Character Error Rates of homogenous TrOCR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 7.3 Character Error Rates of homogenous TrOCR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 7.4 Character Error Rates of homogenous AttentionHTR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 7.5 Character Error Rates of homogenous AttentionHTR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 7.6 Character Error Rates of homogenous SimpleHTR Ensembles on Duke Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 7.7 Character Error Rates of homogenous SimpleHTR Ensembles on IAM Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 7.8 Character Error Rates of heterogenous Ensembles on Duke Data . 43 7.9 Character Error Rates of heterogenous Ensembles on IAM Data . 43 xiv 1 Introduction 1.1 Motivation As part of the 2021 bachelor's project "Human in the Loop: Deep Learning on handwritten Patient Records" of Professor Lippert's research group, historical patient records of the Walter Kempner Diet had to be digitized for the Duke University. For this purpose, various modern Optical Character Recognition (OCR) methods were used to extract the handwriting contained in the documents. This provided acceptable results. But since the digitized data will be used for medical studies, it is important that they are captured as accurately as possible. After all, important decisions and findings will be made later on the basis of this data. During the practical implementation of the OCR methods for the bachelor project, the idea of using several of these methods together came up independently. The methods had different accuracies and the idea of combining them was quickly considered. On the one hand, there would be no need to commit to only one method and discard the others. On the other hand, it was also questioned whether a combination could lead to an increased accuracy. After a short research, it was discovered that the idea is not new and is summarized under the term Ensemble Learning. Since many blog articles on this topic also promised increased accuracy [1, 2], it was decided to get to the bottom of this topic in combination with OCR in order to hopefully create added value for the digitization of patient records. 1.2 Contribution For this purpose, the following research questions will be answered in this paper. First, the main question is whether Ensemble Learning can really improve OCR. After all, further research questions on this topic would make less sense if Ensemble Learning is useless for OCR anyway. This is formulated in RQ1 as follows: "Can Ensemble Learning improve the Accuracy of modern OCR methods?" If the answer to RQ1 is yes, it is of course also of particular interest which methods have created this added benefit. The reason for this is that ensemble learning is a large topic and several approaches can be pursued here. Therefore, the goal of RQ2 is to determine "Which Ensemble Learning methods are the most valuable?" One of the 1 Chapter 1 Introduction main difficulties of the bachelor project, not mentioned so far, was the relatively small data set, which was needed for the implementation of OCR. Therefore, the last question to be asked in RQ3 is: "Can Ensemble Learning add significantly better value on smaller datasets?". Since it would be very interesting to know, if Ensemble Learning can help more on small datasets compared to large datasets. To answer these 3 research questions, this paper is structured as follows. First of all, in the background section, important terms around OCR and Ensemble Learning will be clarified for readers with less previous knowledge. Then, the methodology chapter will explain how the research questions will be approached. In the same chapter, the involved OCR methods and ensemble learning methods are named and explained. In the experiment chapter it is then described how these methods were put into practice and what results they had. These results are then evaluated in the discussion chapter to answer the research questions. The statements presented there are then summarized at the end in the conclusion chapter and further interesting research possibilities for OCR in combination with ensemble learning are mentioned. 2 2 Background Initially, the purpose of this section is to formally clarify important terms related to OCR and Ensemble Learning. Readers with the necessary knowledge can skip this section. 2.1 OCR 2.1.1 Definition Optical Character Recognition (OCR) in general refers to the translation of optically available handwritten or printed characters/words/numbers (any kind of symbols) into digital form[3]. The term Handwritten Text Recognition (HTR) is often used in this context. HTR is a specialization of OCR that converts sequences of handwritten symbols (words/phrases)[4]. Although HTR would often be formally the more accurate term, in the majority of scientific papers the term OCR is used rather than HTR. Nevertheless, it is necessary to mention HTR as a synonym at this point, since the term may help in researching further work. In the further course of this work, OCR will also be used as a term for HTR. 2.1.2 OCR in Deep Learning There are many possibilities to implement OCR in practice. In recent years, however, Deep Learning approaches mainly have been used[5]. Deep learning is a type of machine learning that is characterized by using multiple processing layers, each with a large number of parameters, to make a prediction [6]. Deep refers here to the large number of parameters and layers that are used by deep learning models. In the field of deep learning, OCR can be classified as a sequence labeling task for text or a classification task for individual symbols.[6, 7] Therefore, a model that implements OCR receives a sequence as input (the input image) and outputs a string (sequence labeling) or only a single character (classification). Usually, sequence labeling models are supervised learning algorithms. Since all models of this work can be classified in this category, this concept and their general procedure shall be clarified. Supervised learning refers to methods that make a label prediction on a given set of input features by prior parameter adaptation of the prediction model. 3 Chapter 2 Background The parameter adaptation is achieved by a learning algorithm. This algorithm receives already labeled input features as input and outputs the finished model with adjusted parameters in the end. The process of "learning" is generally referred to as training. In 2.1 you can see an overview of supervised learning. Figure 2.1: General supervised learning procedure [6] 2.2 Ensemble Learning 2.2.1 Definition Ensemble Learning is a method of machine learning[8]. Usually, exactly one model is involved in making predictions, and its predictions are used directly. The difference in Ensemble Learning is that several different models are applied to generate predictions. These models are called base learners, base classifiers, or inducers [8, 9, 10]. For a given problem, all base learners perform their own prediction. The predictions of all base learners are then passed to an output component. This output component then uses all predictions to compute the final prediction. This computation can be done in a variety of ways and the most important and well-known are part of this work 2.2.2 Assets and Drawbacks The main idea of Ensemble Learning is that multiple opinions combined can be more accurate than a single opinion. This basic idea was already proven in 1785 by Marquis de Condorcet in his Jury Theorem [11]. To achieve more precise estimation in ensemble learning, the following 2 principles must be satisfied [10]. First, the base learners must have as much diversity as possible in their estimates. This 4 Ensemble Learning Section 2.2 ensures a high variety of opinions, which can be taken into account in the final combination. On the other hand, the base learners should estimate as precisely as possible despite the diversity principle. This is important because a high diversity of opinions does not add value if all opinions are wrong. In combination, these principles allow increased accuracy in estimation as well as other benefits at statistical, computational, and representational levels [9, 10, 12]. Statistically, combining multiple models reduces the likelihood of using a single model that overfits on the training data. Of course, an ensemble can also overfit, but then the principle of diversity of base learners would be violated. In addition, individual models can remain in a local optimum in the training process. The combination of those models would allow a better approximation of the best possible optimum in the function space. The ensemble of several models also increases the probability of better generalization. In other words, to react better to new data not available in the data set, since the combination of the individual models expands the function space. Naturally, Ensemble Learning also has disadvantages. These are mainly the high costs of resources, for example time, power or memory, which are caused by the training of the base learner or the prediction of the ensemble. However, if these disadvantages can be ignored or if better accuracy is simply more important, as it is the case in this work, then ensemble learning offers itself in theory as a possible tool. 2.2.3 Design Levels The construction of an ensemble and the ensemble methods that are involved with it can be practically classified on 3 levels[13]. First, at the dataset level, it has to be chosen which base learner gets which training dataset with which features. Then, at the base learner level, it must be decided what methods will be used for the base learners and of course how many base learners there are in total. Finally, at the output level, an approach for merging the individual base learner estimates must be selected. Only if the right decisions are made at these 3 levels the benefits of Ensemble Learning can be practically implemented. The Ensemble methods used for the implementation can, however, be classified on several levels. 5 Chapter 2 Background Figure 2.2: General structure of an Ensemble [10] 6 3 Related Work The idea of combining ensemble learning with OCR is not new. First papers were already published in the 1990s. In 1992 for example neural networks (NN) were combined in an ensemble with the goal of recognizing handwritten digits. This ensemble achieved 20-25% better results than the best individual NN.[14] In the following years, research was further advanced and the first ensemble methods such as bagging and boosting in combination with NNs were evaluated.[15, 16, 17, 18]. Again, better accuracy results could be achieved. Over the years, ensemble learning methods have been repeatedly evaluated in combination with other machine learning architectures such as Hidden Markov Models [12, 19, 20] or Decision Trees[21] as well as with multi-architecture ensembles [22]. Here as well, better accuracy was achieved by using ensemble methods. However, with the beginning of the Deep Learning era, these evaluations related to OCR declined. One of the last papers known to the author is the one from 2014 [23], which deals among other topics with ensemble methods for improved recognition of historical prints. The paper evaluates some ensemble methods but this is not the main focus of the paper. The reasons for the decline of evaluations are on the one hand the time and computational costs of training deep learning networks [24], and on the other hand the very good generic results of deep learning architectures in OCR [25]. Therefore, in recent years, there has been more research on developing single Deep Learning architectures, which already achieved excellent results even without Ensemble Learning. A good example is the paper by Ahlawat et al. [26], which aims to outperform Ensemble Learning with a single OCR model to avoid the drawbacks of ensembles. This is certainly no reason to believe that the combination of ensemble learning with OCR is no longer relevant for research. In recent years, papers still have been published in which individual ensemble learning methods have been combined with specific OCR use cases in order to improve the results. For example, ensembles have been used to recognize handwritten persian numerals [27], farsi numerals [28], bangla letters [29], arabic words [30], musical notes [31], medieval prints/manuscripts [32, 33, 34], or bio-collections [35]. Another work worth mentioning here is the one of Matos Silva [36]. In his research, table cells from medical records were digitized using a custom-built 7 Chapter 3 Related Work ensemble of neural networks, which reaffirms the potential of this work in the medical context as well. These examples show that ensemble learning is still relevant for OCR. Nevertheless, the most recent papers only refer to one or a few ensemble methods. Thus, there is a lack of an up-to-date assessment of the most important ensemble methods with respect to OCR. Outside the subject-specific OCR domain, these already exist Surveys are still published that summarize the current state of research in general [10, 37], also with respect to Deep Learning [9, 24]. In these surveys, however, OCR is mentioned at most as a possible application example. To the best of the author's knowledge, there is a lack of studies evaluating modern OCR deep learning models in combination with ensemble learning. This work tries to fill this gap by reevaluating the most popular and important ensemble learning methods with regard to OCR. 8 4 Methodology 4.1 Experimental Process To answer the research questions, the following procedure is applied. First, the state of the art OCR methods and ensemble methods are identified. Then, in the experiments, each OCR method is first evaluated on its own. Then, the ensemble methods are further analyzed. The values recorded there are then compared with the previously recorded values of the stand-alone OCR methods. This will make it possible to answer RQ1, whether ensemble methods can contribute to an improved accuracy with the OCR methods used. To clarify RQ2, the evaluation results of the ensemble methods will then be compared in order to be able to name the most effective methods. To answer RQ3, it is necessary to consider two data sets separately in the evaluations. With the comparison of measured values of a large and a small dataset, it can be clarified whether Ensemble Learning can add significantly better value on smaller datasets. 4.2 OCR In the following, the OCR models used in this work will be named and briefly explained. For a detailed description of the methods and the exact training procedures, please visit the corresponding papers. 4.2.1 TrOCR The first model to be looked at is TrOCR. TrOCR was released in 2021 by a Microsoft research team and is an end-to-end transformer-based OCR model [38]. Basically, TrOCR corresponds to a vanilla transformer encoder-decoder structure and the architecture (see figure) is structured as follows. First, TrOCR resizes the input image to a size of 384 × 384 pixels and then splits it into 16*16 sized disjoint patches. Patch embedding as well as positional embedding is then applied to the patches to retain important contextual information about the other patches. The now processed sequence of patches is then given as input to the encoder, which consists of multi-head attention modules and feed forward networks. The encoder extracts 9 Chapter 4 Methodology features from each of the patches, which then serves as input to the decoder. The decoder also consists of multi-head attention modules, feed forward networks, and additionally masked multi-head attention modules. With the output of the encoder, the decoder creates a probability matrix that assigns token probabilities to specific subsections of the input image. In the context domain of TrOCR, tokens refer to particular character sequences, of which TrOCR has 50265 by default. At the end the output is generated by GreedyMaxDecoding. That means for each section the most probable token is used for output. By multiplying these probabilities a confidence score for the prediction is obtained. Figure 4.1: Model architecture of TrOCR [38] 4.2.2 AttentionHTR AttentionHTR is an attention-based sequence-to-sequence model for handwritten word recognition published by the Uppsala 2022[39]. The architecture of AttentionHTR consists of four stages: a transformation stage, a feature extraction stage, a sequence modeling stage, and a prediction stage. In the transformation stage, the input word image is normalized via a thin-plate-spline (TPS) transformation, scaling it to a size of 100×32 pixels. The normalized, resized image is then fed into the encoder. The encoder includes the feature extraction stage and the sequence modeling stage. The feature extraction stage consists of a ResNet that encodes the input into a visual feature map. In the sequence modeling stage, this visual feature map is then transformed into a sequence of features and is used by a BLSTM to capture contextual information in the sequence. The output sequence is then used in the prediction phase by an attention-based decoder, which consists 10 OCR Section 4.2 of an unidirectional LSTM and content-based attention mechanisms. The decoder outputs a probability matrix with a likelihood for each entry of the character set per sequence step. Then, for each sequence step, the character with the highest probability is used as the prediction (GreedyMaxDecoding). Again, a confidence score can be calculated by multiplying the individual output probabilities. Figure 4.2: Model architecture of AttentionHTR [39] 4.2.3 SimpleHTR The last model to be looked at is SimpleHTR, which was published by Harald Scheidl in 2018[40]. The structure of SimpleHTR corresponds to a typical CRNN architecture. This means that the architecture consists of 3 components. The first component is a convolutional neural network (CNN), which receives as input the original image previously reduced to 128×32 pixels and creates a feature sequence matrix from it. The feature sequence matrix is then passed to a recurrent neural 11 Chapter 4 Methodology network (RNN) component, here given as a LSTM. This component creates a probability matrix of all available characters per image section. Finally, the Connectionist Temporal Classification (CTC) component is used for decoding, again using GreedyMaxDecoding. A confidence score is also calculated from the probability matrix of the RNN output. Figure 4.3: Model architecture of SimpleHTR [40] 4.3 Ensemble Learning Methods Now the ensemble learning methods considered for OCR shall be mentioned and explained. As described in the background, the construction of an ensemble takes place on 3 levels. At the same time, it should be clarified on which design level which ensemble method can be implemented. Afterwards, other popular ensemble learning methods will be discussed and it will be explained why they are not covered in this work. 12 Ensemble Learning Methods Section 4.3 4.3.1 Dataset Level At the dataset level, decisions are made about which data is given to which base learner for the training. OCR of course strongly limits the possible ensemble methods. Therefore, only methods that OCR allows as a sequence labeling/classification task are possible. This means that all methods can only be applied on the dataset level if they do not damage the image structure too much, which prevents any feature sampling for example. Nevertheless, there are infinite other ways to split the training data among the base learners. But the most popular and at the same time generic ones are, according to the author's opinion, CompleteData, Bagging, KFOLD and Partitioning. CompleteData The first and simplest option is to simply give each base learner the complete training data set unchanged. The principle of divesity is then implemented at the base learner level. CompleteData is not a special ensembling method, instead it is the default way if none is implemented at the dataset level. Bagging One of the best known ensemble learning methods is the bootstrap aggregation method (bagging)[9, 10, 13, 19]. It corresponds to the combination of bootstrapping and a later chosen form of aggregation. In bootstrapping, the original dataset is randomly resampled with replacement for each base learner, usually keeping the length of the original dataset. It is expected that this will diversify the distribution of the training input and counteract a possible poor selection of the training dataset. The possible forms of aggregation are discussed in the output level chapter. KFOLD In machine learning, k-fold-cross-validation is very popular mainly for the evaluation of machine learning algorithms[41]. In this method, the first step is to merge the train and validation parts within a train/val/test split of the dataset. This now newly combined dataset is then split into k new training and validation splits, where always 1/kth of the dataset becomes the new validation part, which is disjoint to the other k-1 . The remainder then becomes the training dataset. For evaluation, the machine learning algorithm is then retrained on each of the k train/val divisions and tested on it using the unmodified test dataset. The k test results of the individually trained models are then used to evaluate the algorithm. This procedure of the dataset split and training can also be taken advantage of in Ensemble Learning [12, 13]. That way, each baser learner receives one of the k train/val datasets for training. Then, instead of just testing the baser learners, they are applied for prediction and merged in the output layer. Therefore, the number 13 Chapter 4 Methodology of baser learners goes hand in hand with the number of divisions. For the rest of the work, this approach is referred to with KFOLD. Partitioning In partitioning, the training dataset is divided among k base learners [9, 10]. This means that each base learner receives a kth disjoint part of the training dataset for training. Especially with large datasets, it is hoped that this will result in a faster training process with consistent results. 4.3.2 Base Learner Level At the base learner level, all decisions are made for the base learners underlying the ensemble. Here, the options available are limited by the base learner architectures chosen. According to the author, the following 3 decisions have to be made: which base learners will be selected, how many base learners will be selected and how to initialize the base learners. These are partly not explicit ensemble learning methods found in the literature, they rather describe the general procedure for the construction of the ensemble on base learner level. Base Learner Selection As described above, it must first be selected which type of base learner is used. That means, which machine learning algorithm underlies which base learner, here either TrOCR, AttentionHTR or SimpleHTR. There are two approaches for this: the heterogeneous or the homogeneous approach [9] In the homogeneous case, all base learners are based on the same architecture, for example k times TrOCR for k base learners. In the heterogeneous approach, different architectures are used for the base learners, such as k/3 times TrOCR, k/3 times AttentionHTR and k/3 times SimpleHTR for k base learners or in other variations. Number of Base Learners The next step is one of the most important. It must be chosen how many k base learner will be used at all. This decision is fundamental for the ensemble. If k is chosen too small, the ensemble may have too little diversity of opinions. If k is chosen too large, the computational effort may be too high to be useful in practice. Also, k should be an odd number so that a total majority is more likely in voting procedures. Base Learner Initialisation Finally, the choice has to be made, which base learner will be initialized with which parameters and which initial weights it will receive. As it is usual also without ensemble learning, the initialization of the initial 14 Ensemble Learning Methods Section 4.3 weights can be done randomly or by transfer learning. Even without different parameter choice/random initialization of the base learner, it is possible that the base learners become diverse enough, because their training does not necessarily have to be deterministic. 4.3.3 Output Level At the output level, the predictions of the base learner are combined for the final result. Since labels/classifaction are combined here, only voting, probability and last layer combination methods are possible. Therefore, all combination methods that use pure numerical predictions as an input, such as averaging, which can be used for regression problems, are not possible. In this work, majority voting, weighted voting and max probability methods are discussed. WordVote The most popular and at the same time simplest method for merging the predictions is majority voting [9, 13] here called WordVoting. In WordVoting, each of the base learner's predictions is counted by the number of times the prediction occurred. The prediction that occurred most often is then used as the final result. If there is no absolute majority, for example, if all base learners vote differently, the first found maximum is then chosen as the output. This is also the case for the other methods. CharVote CharVoting follows a similar approach as WordVoting. The main difference is that instead of voting for the whole prediction, each character is voted individually. The idea for this came from the author to see if voting on character level can be more effective. In the implementation, all predictions are first brought to the same length by appending spaces. Now for each position i it is checked which letter was predicted the most. This letter is then at position i of the output. Therefore a voting takes place for each position of the character level. Finally, the excess blanks are truncated. WeightedWordVote In WeightedWordVoting, as the name implies, the predictions of the k base learners are weighted for voting [19, 20, 24]. The weighting process works as follows. Prior to the predictions, a ranking of the base learners is determined based on their performance on the validation dataset. The predictions of the best performing base learner are then weighted with k points, those of the second best performing base learner with k-1 and so on until the prediction of the 15 Chapter 4 Methodology worst base learner is weighted with only 1. The weights of all equal predicitons are then added and the prediction with the highest weight is returned as the output. WeightedCharVote The process of WeightedCharVoting is very similar to that of WeightedWordVoting. But here again the final output is decided on character level. Also in this case it is interesting to see if the voting on total or character level makes a difference. First the predicitons of the base learner are brought to the same length like in the Charvoting with the appending of blanks. Then, for each position of the character level, the weighted voting just described takes place. At the end, any exceeding blanks are shortened. MaxProb The MaximumProbability method (MaxProb) works with the confidence scores of the base learners for the predictions [19]. Here, the base learner prediction with the highest confidence score is simply returned at the end. AvgProb In the MaxProb method, identical predictions are only considered individually. The AveragedMaximumProbability method (AvgProb) enhances the MaxProb method. Here the goal was to see if the combination of the confidence scores of the same predictions makes a difference. Again, the idea came from the author of this paper. From all identical base learner predictions the average of the confidence scores is calculated. Just like in the MaxProb procedure, the prediction with the highest confidence score is then returned. 4.3.4 Combining the Methods Finally, the question obviously arises: How exactly are the above-mentioned ensemble methods of the design levels combined? The answer is quite simple. Every method mentioned above can be combined with all methods of the other design levels. The implementation of this will be considered in more detail in the experiments. 4.3.5 Ineligible Methods There are, of course, other methods that are theoretically possible in combination with OCR. However, these have not been dealt with in this work because they are too application-specific or cannot be combined with the selected OCR models. However, for further research purposes, other very popular methods or possibilities that are interesting from the author's point of view will be mentioned here. 16 Ensemble Learning Methods Section 4.3 On the data set level, Input Variation and Category Sort methods are worth mentioning. Input Variation varies the input for the base learner to achieve more diversity [9]. For example, the base learners receive differently blurred image data. In category sort methods the training data is sorted by certain categories and each base learner receives certain categories for the training, for example only numbers. The idea for this came up while creating this work, since the medical data consists of different categories. However, both methods are very dataset specific and therefore difficult to evaluate in general. In addition, Input Variation is very similar to DataAugmentation, which is a big topic for its own. On base learner level the ensemble methods are limited by the selected OCR models. So any possibilities are excluded, that can not be combined with the 3 previously mentioned OCR models. Worth mentioning here are the very popular Boosting method or Fast Ensemble methods like Snapshoting. Boosting uses weighted learning algorithms to more strongly influence the incorrectly predicted training data of the previous base learner when training the next base learner [9]. Fast Ensemble methods aim to compensate for the training disadvantages of ensembles with specially designed learning algorithms[24]. The most famous example Snapshoting creates an ensemble from all minima it finds in the training process, for which cyclic learning rates are needed[24]. However, the 3 models considered in this paper do not possess the necessary properties for the implementation of the above mentioned examples. It is also very common to let base learners of the same architecture vary in their parameters e.g. by different number of layers [13]. However, here the decision was made not to do this, since on the one hand this is again very application-specific and on the other hand the developers of the models already concerned themselves enough with the selection for the correct parameters. On the output level one of the most important ensemble methods is stacking. Stacking involves the training of an extra machine learning model that combines the predictions or last layers of the base learner [9, 10, 13]. Unfortunately, stacking is not very generic, as it has to be adapted very precisely to the model structures. Moreover, it is such an extensive topic that an extra paper would be necessary. In theory, the Bayes Combination Method for the output layer appears repeatedly [12]. It is considered to be the most accurate method in theory, but it is not practically feasible here, since knowledge about the probabilities of occurrence of the labels would be needed. Other very popular and interesting methods are Classifier Selection Methods. Here only certain "experts" of the base learners are used for the ensemble prediction, e.g. experts only for numbers[24]. This can be done using the Category Sort methods or by algorithms that evaluate the base learners for different problems. The Classifier Selection Methods were also not covered, because they are again a very large topic and very application specific. 17 5 Experiments Next, the practical procedure will be explained. First of all, it will be clarified on which datasets the experiments take place and which measured values are recorded. Then the experimental setup/execution will be described and at the end the final results will be presented. 5.1 Datasets For the experiments we will use 2 datasets with handwritten images. Duke Dataset The first is the Duke Dataset already mentioned in the introduction, which consists of cell entries from medical tables. The data were collected on special patient records in the Rice Diet study by Walter Kempner for the treatment of hypertension [42]. The image data now available were thereby extracted from 5 columns of these records. These were the blood pressure column, weight column, and sodium, chlorine, potassium columns of the patients urine. The total size of the dataset is 6287. There are a total of 2367 cell images of the blood pressure column, 2083 of the weight column, 151 of the sodium urine column, 764 of the chloride urine column, and 79 of the potassium urine column. In addition, the dataset also consists of 843 empty cell images. The following characters occur in the labels of the images: ",./0123456789 1 4 1 2 3 4 1 8" IAM Words Dataset The second dataset is the very popular IAM Words Dataset from the IAM Handwriting Database[43]. In the OCR field, the IAM Words Dataset is regularly used to evaluate new technologies. It was published by the 115320 isolated and labeled english word images. These word images were extracted with automatic mechansims from scans of english scripts, were manually verified and are now available in gray tones. The following characters appear in the labels of the images:" 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!" &.'()*+,-./:;?". In the following, the IAM Word Dataset will also just be called the IAM Dataset. 19 Chapter 5 Experiments Split Ratio Both datasets were divided into a train, validation and test part for the experiments. The reason for this is that the OCR methods require a disjoint set of training and validation data for the learning process. On the other hand, at the end, the methods need to be evaluated with the test data that has never been observed before. Since the Duke Dataset is relatively small, a large test dataset is necessary to evaluate the ensemble learning techniques well. For a good result of the ensembles the training and validation part must not be too small. Therefore, the decision was made for a 70/15/15 split ratio, because it best combines the above requirements. For the Duke Dataset, the number of training images is therefore 4400, for the validation part 943 and for the test part 944. For the IAM Dataset, there are 80722 images for training, 17298 for validation, and 17298 for testing. Hence, the IAM Dataset is about 18 times larger than the Duke Dataset. (a) Dataset:Duke Label:1163⁄4 (b) Dataset:IAM Label:MOVE (c) Dataset:Duke Label:146/120 (d) Dataset:IAM Label:Kaunda's (e) Dataset:Duke Label:Empty (f) Dataset:IAM Label:1800-2200 Figure 5.1: Example pictures from the 2 Datasets with Duke on the left and IAM on the right side 5.2 Metrics To measure the accuracy of the predictions on the datasets, the Word Accuracy (WA) and the Char Error Rate (CER) are used. These metrics are commonly used to measure OCR methods, including the OCR models discussed here [38, 39, 40]. Word Accuracy is the percentage of correctly estimated words out of the total number of input words. It is calculated by dividing the number of correctly estimated words (len(CorrectWords)) by the number of all input words (len(Inputs)) multiplied by 100. The Char Error Rate (CER) corresponds to the normalized levenstein distance of a word multiplied by 100[38]. The levenstein distance is the number of insertions, 20 Experimental Setup and Execution Section 5.3 substitutions, and deletions of characters to obtain the true word. This is normalized by dividing throug number of characters of the ground truth. For measurement the average of all CER's of the predictions will be used. However, for the evaluation of the experiments only the Word Accuracy is used, since this was sufficient for the evaluation of the research questions. For those interested, the CER's will be part of the appendix. At this point it should be mentioned that the diversity principle explained in the background chapter is not really measurable. Although some diversity metrics have been designed, there is no agreement on a standard. In fact, it is generally agreed that there is no correlation between diversity metrics and the prediction performance of ensembles [44]. This does not mean that the principle of diversity is invalid. The base learners should still be designed to be diverse. However, diversity is difficult to measure since methods can also be diversely designed and lead to the same result. Figure 5.2: Word Accuracy 5.3 Experimental Setup and Execution It shall now be clarified how to proceed exactly in order to evaluate the ensemble methods mentioned in the methodology. Initialisation of OCR Modells Before each experiment, the OCR models must be initialized first. This is done as follows. TrOCR For TrOCR the pre-trained "trocrbasestage1" model is used. For handwriting recognition the "trocrbasehandwritten1" model would be more recommended. However, this one has already been pre-trained on the IAM Dataset. So the decision was made to use the TrOCR default model from the TrOCR paper. Nevertheless "trocrbasehandwritten1" is mentioned here, because it will give better results for other OCR projects. Otherwise, the configuration of the model is largely untouched. The only parameters that are still set individually are number of workers with value 8, batch size with value 20 and train epochs with value 8, which are important for the training process. The number of train epochs is not very high, because TrOCR does not need many epochs for a good finetuning. The 21 Chapter 5 Experiments choice for batch size and number of workers was made because this worked best with the available resources. However, the setting of the parameter values for the models has no strong relevance for this work. They only have to be constant for all experiments to provide a comparable basis for the ensemble methods. Additionally it should be mentioned that TrOCR does not need a character list, because the standard token dictionary is sufficient here. AttentionHTR AttentionHTR is initialized with the pre-trained "AttentionHTR Imgur5K sensitive" model. It is used because on the one hand case sensitive labels are in the data and on the other hand only this model provided by the developers is not already pre-trained on IAM. Additionally, the values for batch size and character list will be set. Otherwise the parameters set by AttentionHTR will be left as they are. Again, the relevance of these parameters is relatively insignificant, since the main focus lies on the ensemble methods. For the batch size the value 32 was chosen and for the character list the above mentioned character list of the respective dataset is used. The choice for the batch size was also made here due to the best usage of the available resources. SimpleHTR SimpleHTR is initialized without a pre-trained model, since there is no model with the chararcter list required for the Duke Dataset. The character list influences the parameters of the architecture so strongly that the use of the model is no longer possible. In order to be able to draw better comparisons later, no pre-trained model is also used during the experiments with the IAM Dataset. In addition, the values for batch size and earlystopping are set to 100. Independent experiments showed that these provided the best values. Experimental Execution Now the execution of the experiments can be considered. First of all, the 3 OCR models will be evaluated separately on the Duke and IAM Datasets. For this purpose, the training of the initialized models is performed with the help of the train/val dataset. Once the training is complete, the Word Accuracy/Char Error Rate of the models is calculated on the test dataset. This is already the experiment for the evaluation of the single models. The experimental procedure for the ensemble methods is as follows. Each experiment, which was performed on the Duke Dataset, should also be performed on the IAM Dataset. For this purpose, the datasets are first prepared using the 4 dataset level methods described in the methodology. The training of the base learners is now to be performed with the datasets that have been processed. For this, the last fundamental decisions have to be made on the base learner level, i.e. which base 22 Results Section 5.4 learner corresponds to which OCR model and how many base learners make up the ensemble. For the base learner selection, both the homogeneous approach with all OCR models and the heterogeneous approach with an equally distributed number of OCR models will be considered. For the evaluation of the homogeneous ensembles, k=5 is chosen, because k=3 would be too few predictions and k=7 would require too much computational effort. Also an odd number is important. For the evaluation of the heterogeneous ensemble k=9 is chosen with 3 models each of TrOCR, AttentionHTR and SimpleHTR. In each case the first 3 trained base learners of the homogeneous ensembles will be selected here. This means that the dataset level methods KFOLD and Partitioning are of course not implemented correctly at this point. But this is not important here, because it is more about the question whether the heterogeneous approach brings more added value compared to the homogeneous approach. The choice for k=9 was made because the ratio of the different OCR models should be the same. In addition, k=3 is too small, k=6 even and k=12 too large. Now the training of the base learners can be triggered using the previously processed train/val datasets. Once the training is completed, the Word Accuracy/Char Error Rate rate of the base learners is calculated on the test dataset. Finally, the base learners are combined using the output level methods and their Word Accuracy/Char Error Rate is also calculated. In total, there are 32 different ensembles for the evaluation of the methods. 5.4 Results From the above described experiments, the following Word Accuracy values were obtained. The values for the average Char Error Rate can be found in the appendix for those who are interested. The number of results is overwhelming at first. But in the conclusions chapter these values are broken down to more simple levels. In the tables, the best measured value is always printed in bold, as well as for ensembles best base learner. TrOCR AttentionHTR SimpleHTR Duke 96,82 93,43 84 IAM 72,83 86,84 75,74 Table 5.1: Experimentation Results of Single OCR Modells 23 Chapter 5 Experiments Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 90,67 96,29 88,98 86,02 96,08 95,86 95,44 97,03 93,33 91,38 83,41 Bagging 93,54 92,69 95,34 95,66 95,02 96,61 95,44 96,93 95,02 93,31 92,19 K-FoldCross 96,34 96,4 96,73 88,87 96,82 98,09 96,82 97,88 96,50 93,03 88,22 Partitioning 0,1 0,2 0,0 0,0 0,32 0,10 0,0, 0,11 0,0 0,32 0,32 Table 5.2: Experimentation Results of homogenous TrOCR Ensembles on Duke Data Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 87,43 77,62 75,24 76,24 80,45 83,51 76,08 83,51 74,57 85,5 82,56 Bagging 78,98 79,08 68,48 70,69 72,16 80,29 67,18 80,29 65,99 78,80 74,69 K-FoldCross 78,89 76,28 81,11 70,74 72,52 80,57 70,63 80,5 69,8 79,32 76,64 Partitioning 74,13 79,25 49,87 77,98 76,43 81,17 64,73 81,17 63,13 79,44 75,54 Table 5.3: Experimentation Results of homogenous TrOCR Ensembles on IAM Data Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 94,07 92,27 93,96 93,75 92,26 95,76 94,28 95,76 94,28 96,08 94,60 Bagging 89,62 90,78 90,89 91,53 89,94 93,96 91,42 93,75 91,74 93,11 91,84 K-FoldCross 92,58 93,11 91,00 92,48 91,84 96,40 94,28 95,87 93,75 95,97 93,54 Partitioning 60,38 65,78 69,39 63,35 69,49 75,42 71,08 76,48 67,69 76,38 69,60 Table 5.4: Experimentation Results of homogenous AttentionHTR Ensembles on Duke Data Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 87,03 87,43 87,63 87,07 86,47 89,45 87,37 89,32 87,18 89,80 89,09 Bagging 85,33 85,79 86,69 86,39 85,63 88,68 86,23 88,46 85,83 88,98 88,29 K-FoldCross 86,96 86,68 86,96 87,06 86,83 89,40 87,42 88,99 87,13 89,84 89,03 Partitioning 84,02 83,63 83,19 83,77 83,90 86,83 84,14 86,57 83,72 87,19 86,23 Table 5.5: Experimentation Results of homogenous AttentionHTR Ensembles on IAM Data 24 Results Section 5.4 Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 84,85 82,63 83,69 87,29 82,42 91,10 84,96 90,36 89,41 90,36 86,44 Bagging 85,06 78,18 86,97 83,90 84,53 89,41 84,85 89,51 87,92 90,68 88,14 K-FoldCross 91,21 89,09 90,57 87,71 91,00 94,49 90,78 93,22 90,15 94,92 91,84 Partitioning 55,40 43,40 58,26 50,42 45,33 64,60 59,53 63,98 57,94 61,44 55,93 Table 5.6: Experimentation Results of homogenous SimpleHTR Ensembles on Duke Data Dataset Methods Base Learners Output Level Method 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 74,66 74,25 74,01 73,46 73,65 80,44 72,45 79,25 71,76 80,73 77,28 Bagging 74,38 75,04 73,24 73,72 73,18 80,47 73,01 79,58 72,35 80,61 77,53 K-FoldCross 76,22 76,73 76,43 76,22 76,52 82,28 75,76 81,27 75,56 82,40 79,69 Partitioning 63,91 62,60 63,79 64,74 64,57 71,98 62,87 71,44 62,15 73,04 68,48 Table 5.7: Experimentation Results of homogenous SimpleHTR Ensembles on IAM Data Dataset Methods Output Level Method WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 97,35 92,48 98,09 91,95 92,01 81,35 Bagging 96,50 88,87 97,03 87,61 93,06 87,98 K-FoldCross 97,99 93,75 96,61 93,75 97,35 91,95 Partitioning 75,10 36,33 76,48 7,20 33,26 23,09 Table 5.8: Experimentation Results of heterogenous Ensembles on Duke Data Dataset Methods Output Level Method WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 90,70 76,42 89,70 74,74 88,67 80,52 Bagging 89,59 72,39 88,91 69,54 85,92 80,84 K-FoldCross 90,12 77,51 89,95 74,47 87,84 81,16 Partitioning 87,61 61,62 86,72 59,67 83,19 71,90 Table 5.9: Experimentation Results of heterogenous Ensembles on IAM Data 25 6 Discussion With the measured values collected above, the research questions will now be answered. 6.1 Research Question 1 First of all, concerning the answer of RQ1: "Can Ensemble Learning improve the Accuracy of modern OCR methods?". The idea to clarify this question was to compare the models trained alone with the ensemble results. In order to not confront 24 measured values with only one single one, only the Word Accuracies of the highest Word Accuracies of the respective ensemble are considered in the following table 6.1. Thereby it will be looked whether the Word Accuracies of the highest ensemble method is also larger than that of the single model. The values of the heterogeneous ensembles are not considered here, since these consist of all OCR models and therefore can hardly be compared with the single model values. It can be seen immediately that the Word Accuracy of the ensembles is always better than the ones of the single models. Of course, the relation is quite unbalanced here. The reason is that each single WA is evaluated alone with the maximum of 24 output level combinations. In order to balance this a bit, the base learners should also be included, because the base learners also obviously correspond to a single model. Therefore the highest Word Accuracy value of the respective base learner or the single model should be used for the comparison. Hence, in Table 6.2 per OCR model/dataset combination, 21 single models are now compared with the 24 output level combinations. Even with the inclusion of the base learners, the individual models were only once better than one of the ensembles. For the ensemble that is worse than the base learner (TrocrIamKFOLD1), it is noticeable that the principle of the highest possible accuracy is violated here. Even with the inclusion of the base learners, the individual models were only once better than one of the ensembles. For the ensemble that is worse than the base learner (TrocrIamKFOLD1), it is noticeable that the principle of the highest possible accuracy is violated here. Since the other base learners of the TrocrIamKFOLD ensemble are substantially more inefficient on the test dataset than the TrocrIamKFOLD1 base learner. If this would be remedied 27 Chapter 6 Discussion by retraining the other base learners and keeping only the trained base learners that also have higher accuracy, it is very possible that the ensemble would predict better. Furthermore, just because a base learner performs better on the test dataset it does not mean that the ensemble as a whole is worse. It is not known to what extent the base learner generalizes well. It is quite possible that this base learner performs well only here on the test dataset and is much worse on other unknown data. As mentioned in the background, ensembles have the advantage to generalize better and to have a lower risk of overfitting. For this reason, the ensemble would still be preferable here. With this knowledge, RQ1 can be answered as follows. Yes, Ensemble Learning can help with high probability to improve the accuracy of OCR methods. However, as seen above, this does not mean that this is always the case. With a good choice of base learners, however, Ensemble Learning is definitely suitable as a possible tool to get a few last percentage points of Word Accuracies. Especially with the background thought of the possible better generalization of the ensembles. OCR Modell Dataset Single WA Best Ensemble WA Single < Best Ensemble ? TrOCR Duke 96,82 98,09 True AttentionHTR Duke 93,43 96,40 True SimpleHTR Duke 84,00 94,92 True TrOCR IAM 72,83 85,53 True AttentionHTR IAM 86,84 89,84 True SimpleHTR IAM 75,74 82,40 True Table 6.1: Comparison of Single WA's with maximal achieved Ensemble WA's 28 Research Question 2 Section 6.2 OCR Modell Dataset Best Single or Base Learner WA Best Ensemble WA Single < Best Ensemble ? TrOCR Duke 96,82 98,09 True AttentionHTR Duke 94,07 96,40 True SimpleHTR Duke 91,21 94,92 True TrOCR IAM 87,43 85,53 False AttentionHTR IAM 87,63 89,84 True SimpleHTR IAM 76,73 82,40 True Table 6.2: Comparison of best achieved base learner or single WA's with maximal achieved Ensemble WA's 6.2 Research Question 2 In RQ2, the question was asked, "Which Ensemble Learning methods are the most valuable?" To get an answer to this question, the ensemble methods of the different design levels will be compared to find the most performant methods. 6.2.1 Dataset Level For the dataset methods, the decisions of the other design levels and the choice of the dataset obviously have a large impact on them. Therefore, pure Word Accuracies can not be considered here. Instead, it will be counted which method on which model/dataset combination achieved the best possible Accuracy of any output method. This can be seen in table 6.3 In this way, it should be shown which method has performed best and how often. According to the table it can be assumed that only CompleteDataset and KFOLD provide an additional value. However, only one of the 6 output level methods was counted here. The values of these lie partly very close together, with which this consideration could easily falsify itself. Therefore, the above counting shall take place again for all output methods, which is to be examined in table 6.3 With this data it can now be clearly said that, considering the previously set values, the CompleteData and KFOLD approaches add the most value. This seems to be the case more often for KFOLD than for the CompleteData method. This confirms itself again with view on the total averages of the individual ensemble methods, which can be found in table 6.5 29 Chapter 6 Discussion Here, KFOLD also has the highest average accuracy, followed by the CompleteData method. This shows that KFOLD performed with the best results here. For the other 2 DatasetLevel methods, it is immediately noticeable that Partitioning achieved rather unusable values. On TrOCR even so badly that both base learners and output methods reach an Accuracy of nearly and/or equal 0. It is interesting that Partitioning on IAM TrOCR delivers again acceptable results. Therefore, it can be speculated that the number of training data or the number of training epochs is too small for Duke TrOCR. Hence, Partitioning may be a good solution for even much larger datasets or for a different number of base learners. Under the circumstances considered above, however, Partitioning is rather to be discarded. Looking at the average word accuracy of the Bagging method, it is noticeable that it performs only 2 to 1 percentage points worse than the KFOLD or CompleteData methods. Also in the table 6.4 Bagging were able to achieve 2 times the best result of an output level method within a OCR model/dataset combination. This shows that Bagging does not work badly here perse. Nevertheless, KFOLD and CompleData performed much more often and overall better. This could be due to the fact that the bagging intensity is possibly too low, too high or bagging simply works better with more base learners.Still, the potential of the bagging method is proven here. Finally, it can be said for the methods of the dataset level that all of them except for Partitioning delivered good results, whereby the KFOLD and the CompleteData method were the most useful. OCR Modell Dataset CompleteData KFOLD Bagging Partitioning TrOCR Duke 1 AttentionHTR Duke 1 SimpleHTR Duke 1 heterogenous Duke 1 TrOCR IAM 1 AttentionHTR IAM 1 SimpleHTR IAM 1 heterogenous IAM 1 Complete Count 3 5 Table 6.3: counted number of best performing dataset level methods per OCR model/dataset combination 30 Research Question 2 Section 6.2 OCR Modell Dataset CompleteData KFOLD Bagging Partitioning TrOCR Duke 4 2 AttentionHTR Duke 4 3 SimpleHTR Duke 6 heterogenous Duke 1 5 TrOCR IAM 6 AttentionHTR IAM 4 2 SimpleHTR IAM 6 heterogenous IAM 3 3 Complete Count 18 29 2 Table 6.4: counted number of best performing dataset level methods per OCR model/dataset combination per output level method CompleteData KFOLD Bagging Partitioning 87,37 88,22 86,07 61,12 Table 6.5: Averages of the dataset level methods 6.2.2 Base Learner Level On the base learner level, only the heterogenous and homogenous approach as well as the OCR models underlying the base learners can be evaluated, meaning which OCR model is best suited as a base learner. The remaining decisions were made for the experiments as a whole and are not comparable here, because there is too little measurement data or the underlying methodologies are too different, like for example the number of base learners. It is meaningless to compare the number of base learners of 5 with 9 if the methodology is different (homogenous vs heterogenous) and only 2 different numbers are available. Comparison of Heterogenous and Homogenous For the homogenous vs heterogenous comparison, the maximum Word Accuracies of all homogenous ensembles should first be compared to the maximum Word Accuracies of all heterogenous ensembles on both the Duke and IAM datasets. This can be seen in the following table 6.6. 31 Chapter 6 Discussion It is noticeable that there is hardly a difference in the maximum of both approaches. In order to look at a few more values, the highest Word Accuracies of the respective dataset methods and output methods should now also be used, see Table 6.7 and 6.8. The difference between the heterogeneous and the homogenous shall be looked at here. The reason why in the table only the methods WordVote, WeightedWordVote and MaxProb are shown, follows in the Output Level section. When looking at the differences in the two tables, it is immediately evident that none of the methods stands out. The differences between the two are only a few percentage points apart. The average difference is also just 0.38%. This means that the heterogeneous approach is slightly better here. But this number is so small that this should be given little importance. Thus it can be said that both the homogenous and the heterogenous approach can deliver good results. Nevertheless, it cannot be said here that one of the methods is better. Beside that, it is interesting to mention here that the heterogenous Duke Partitioning Ensemble managed to compensate the weak TrOCR Partitioning base learners. This confirms the advantage of ensembles that bad base learners can be outvoted. Dataset homogenous heterogenous Duke 98.09 98.09 IAM 89.843 90.698 Table 6.6: Maximum values of homogenous and heterogenous Ensembles 32 Research Question 2 Section 6.2 Dataset Dataset Level Methods Maximum WA's homogenous Maximum WA's heterogenous Difference Duke CompleteData 97,03 98,09 1,06 Duke Bagging 96,93 97,03 0,1 Duke KFOLD 98,09 97,99 -0,1 Duke Partitoning 76,48 76,48 0 IAM CompleteData 89,8 90,7 0,9 IAM Bagging 88,98 89,59 0,61 IAM KFOLD 89,84 90,12 0,28 IAM Partitoning 87,19 87,61 0,42 Table 6.7: Differences of the highest achieved dataset level methods of the heterogenous and the homogenous approach Dataset Dataset Level Methods Maximum WA's homogenous Maximum WA's heterogenous Difference Duke WordVote 98,09 97,99 -0,10 Duke WeightedWordVote 97,88 98,09 0,21 Duke MaxProb 96,08 97,35 1,27 IAM WordVote 89,45 90,7 1,25 IAM WeightedWordVote 89,32 89,95 0,63 IAM MaxProb 89,84 88,67 -1,17 Table 6.8: Differences of the highest achieved output level methods of the heterogenous and the homogenous approach Base Learner Selection Now it will be evaluated which OCR model brings the most additional value as a base learner. In table 6.9 the averaged Word Accuracies without the by outlayers affected Partitioning can be seen. In Table 6.10 the maximum achieved Word Accuracies are visible. The values of the heterogenous approach are again not considered here, as well as in RQ1. The first thing that stands out is that SimpleHTR is much worse than TrOCR and AttentionHTR on average and at maximum. This makes sense because SimpleHTR 33 Chapter 6 Discussion is the oldest architecture and does not use a pre-trained model for the initialization. On the Duke Dataset, with the above initializations TrOCR and AttentionHTR predicted about the same on average. Nevertheless, here TrOCR has the largest maximum, which is 1.69% higher than that of AttentionHTR, This is different on the IAM Dataset. Here AttentionHTR is much more accurate in the average as well as in the maximum. Thus, in conclusion it can be said that TrOCR is best suited for the Duke Dataset and AttentionHTR is best suited for the IAM Dataset. Of course, this is only in consideration of the decisions made before. Dataset TrOCR AttentionHTR SimpleHTR Duke 94,25 94,24 89,91 IAM 77,25 88,36 77,91 Combined 85,75 91,3 83,92 Table 6.9: Averaged WA's of base learners without Partitioning Dataset TrOCR AttentionHTR SimpleHTR Duke 98.09 96.4 94.92 IAM 85.53 89.84 82.4 Table 6.10: Maximum reached WA's of base learners 6.2.3 Output Level Last but not least, a statement should be made about the output methods. The first thing that stands out is that voting at the character level or the AvgProb method do not add any value here. The Word Accuracy of these methods are always worse than their counterpart, the voting on word level or the standard MaxProb method. For this reason, they are not being looked at in the further evaluation. In the case of voting at character level, however, it can be assumed that these methods may be better at dealing with longer words or sentences, because the datasets only have quite short labels. Like when looking at the Data Set Level methods, it will now be counted which method has achieved the best possible accuracy of any dataset level method on which model/dataset combination. This can be seen in table 6.11 34 Research Question 2 Section 6.2 Looking at these results, it can be assumed that mainly WordVoting and MaxProb work better. In order to have more values for the comparison, the highest output level method will be counted again for all dataset level methods, which is to be examined in table 6.12 (excluded the values of Partition Duke TrOCR) Once again, WordVote and MaxProb were much better than the WeightedWordVoting method. But some peculiarities can be seen here. For SimpleHTR, the MaxProb procedure performed significantly better. In general, for the homogenous ensemble on the IAM Dataset, the MaxProb method always gave the best result see Table 6.11 . For the predictions on the Duke Dataset, as well as for TrOCR and the heterogenous ensembles, the voting methods achieved the best Word Accuracies. Therefore, it can be said that the methods predict better depending on the circumstances. To the WeightedWordVoting procedure it can be noted that this can lead also to the best result. When looking at the average accuracy of the methods, it can be seen that all 3 methods are not very far apart, see Table 6.13 Finally, on the output level, it can be said that all 3 methods can lead to good results. However, WordVoting and MaxProb achieved the best results. OCR Modell Dataset WordVote WeightedWordVote MaxProb TrOCR Duke 1 AttentionHTR Duke 1 SimpleHTR Duke 1 heterogenous Duke 1 TrOCR IAM 1 AttentionHTR IAM 1 SimpleHTR IAM 1 heterogenous IAM 1 Complete Count 3 1 4 Table 6.11: Counted number of best performing output level methods per OCR model/dataset combination 35 Chapter 6 Discussion OCR Modell Dataset WordVote WeightedWordVote MaxProb TrOCR Duke 1 2 AttentionHTR Duke 2 1 1 SimpleHTR Duke 2 2 heterogenous Duke 1 2 TrOCR IAM 3 1 1 AttentionHTR IAM 4 SimpleHTR IAM 4 heterogenous IAM 4 Complete Count 14 6 11 Table 6.12: Counted number of best performing output level methods per OCR model/dataset combination per dataset level method WordVoter WeightedWordVote MaxProb 84,74 84,52 82,31 Table 6.13: Averages of the output level methods 6.2.4 Conclusion RQ2 With the accumulated knowledge on the design levels, it is now possible to answer RQ2. It can be seen that there is no such thing as the one true ensemble method. Many methods only bring added value under certain conditions. However, it became clear that Partitioning, Charvote, WeightedCharVote and AvgProb do not bring any recommendable added value. On the one hand, it became clear that on dataset level CompleteData as well as KFOLD, and on outpute level WordVote as well as MaxProb delivered the best results. On the other hand, Baggigng and WeightedWordVoting were also able to achieve acceptable Word Accuracies, but they rarely reached the maximum. At base learner level, it was found out that neither the heterogenous approach nor the homogenous approach were better than the other and that both can be useful. It also became clear that SimpleHTR is not much useful as a base learner. In contrast, both TrOCR and AttentionHTR scored well, with TrOCR being more suitable for the Duke Dataset and AttentionHTR more suitable for the IAM 36 Research Question 3 Section 6.3 Dataset. All these statements are of course only to be understood by looking at the decisions made before. 6.3 Research Question 3 Finally, to RQ3: "Can Ensemble Learning add significantly better value on smaller datasets? " The idea to answer this question was to compare the values of the small Duke Dataset with the large IAM Dataset. First of all, it can be said that the accuracy values on the Duke Dataset are significantly better than on the IAM Dataset. This is proven for example by the averages of the Word Accuracy of the ensemble outputs. The average of all Duke values of the ensemble outputs is 81.47% compared to the IAM average of 80.13%. Without the WordAccuracies of the Partitioning method with its strong outlayers it is even 92,88% against 81,56.%. But the Duke Dataset is smaller with the amount of test data and the character list of the IAM Dataset is much larger. Therefore, the absolute values cannot be used for the comparison. For this reason, the values of the difference of the respective highest base learners to the most accurate output level method will be used, which can be seen in Table 6.14. In the table it can be noticed that 2 times an ensemble of the Duke Dataset reached a larger distance to the base learner and 2 times an ensemble of the IAM Dataset. Also looking at the numbers, it is noticeable that they do not differ much. The largest gap between two differences is just about 3 percentage points. With so few numbers and so few differences, definitely no side can be favored here. It is also already indirectly known from RQ1 that ensemble learning added value for both datasets. Therefore, it can be concluded for RQ3: No, Ensemble Learning could not prove a higher added value here on a smaller dataset than compared to a larger one. Nevertheless, Ensemble Learning was able to achieve a higher accuracy on both datasets. Consequently, Ensemble Learning can be used with small as well as with large datasets in order to achieve an increased word accuracy. OCR Modell Best Duke Single or Base Leaerner Best Duke Ensemble WA Duke Difference Best IAM Single or Base Leaerner Best IAM Ensemble WA IAM Difference TrOCR 96,82 98,09 1,27 87,43 85,53 -1,9 Attention 94,07 96,4 2,33 87,63 89,84 2,21 simple 91,21 94,92 3,71 76,73 82,4 5,67 All 3 Models 96,82 98,09 1,27 87,63 90,7 3,07 Table 6.14: Differences of the highest base learners to the most accurate output level method 37 7 Conclusions and Outlook For the bachelor project 2021 "Human in the Loop: Deep Learning of Handwritten Patient Records" of Professor Lippert's research group, Ensemble Learning was looked at in order to achieve improved accuracy for the digitization of patient records. The main question of this work was to find out with RQ1:"Can Ensemble Learning improve the Accuracy of modern OCR methods?". The answer to this question is clearly YES. Ensemble Learning offers itself, if one can overlook the disadvantages, as a very good tool to bring out some last percentage points in the accuracy of OCR. Here, however, the subliminal non-measurable, but also theoretical, advantages of Ensemble Learning such as better generalization and protection against overfitting are particularly worth mentioning. Therefore Ensemble Learning can help for the digitization of the above mentioned records. In this context with RQ2, it was then clarified which ensemble learning methods add the most value. The main finding here was that there is not the one true ensemble method. Multiple ensemble methods can add more or less value given different circumstances. However, the most promising results here were achieved at the dataset level by CompleteData and KFOLD , at the BaseLearner level by TrOCR for Duke and by AttentionHTR for IAM, and at the output level by WordVoting and MaxProb. If these methods will be used in a new OCR project, it is very well possible that they could provide an improved accuracy. Also some here not usable methods were identified. These were Partitioning, CharVoting, WeightedCharVoting and AvgProb. The remaining methods were still shown to be relevant, although they did not produce the best results. Since the dataset used for the Bachelor project mentioned in the introduction was very small, the last question to be answered with RQ3 was whether Ensemble Learning can help more on small datasets. Here it was found out that it makes no difference whether the dataset is large or small. Ensemble Learning added value for both datasets. Overall, the most important finding of this work is that once again the potential of Ensemble Learning has been revealed. Consequently, ensemble learning as a research question is still relevant for OCR and can offer decisive added value. In future work, the methods looked at above could be looked at again with a different experimental setup (initializations), different datasets, or different OCR models. 39 Chapter 7 Conclusions and Outlook The question of the optimal number of base learners could even be evaluated in a separate paper. Also the approaches mentioned in the chapter ineligible methods can be used for further research. Stacking, Boosting or SnapshotEnsembling could be particularly interesting here. The topic EnsembleLearning in combination with OCR remains promising. 40 7 Appendix TrOCR AttentionHTR SimpleHTR Duke 1.39 2.40 5.72 IAM 18.04 7.05 10.67 Table 7.1: Character Error Rates of Single OCR Modells Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 7,94 1,54 10,18 9,24 1,14 2,90 2,44 1,73 5,13 3,63 6,57 Bagging 3,68 3,92 1,94 2,03 1,54 1,27 3,41 1,50 3,44 2,16 3,09 K-FoldCross 1,61 1,88 1,11 10,50 1,11 0,60 1,62 0,65 1,74 2,45 3,82 Partitioning 131,31 114,91 93,73 110,16 101,81 114,91 134,89 114,91 134,42 122,64 130,42 Table 7.2: Character Error Rates of homogenous TrOCR Ensembles on Duke Data Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 7,18 14,32 16,13 14,86 12,11 10,16 15,95 10,16 17,90 9,51 11,44 Bagging 12,78 13,19 23,26 20,03 19,20 12,52 24,40 12,52 26,25 14,02 17,16 K-FoldCross 13,25 15,23 11,73 20,68 19,75 12,59 22,30 12,59 23,13 13,77 15,54 Partitioning 14,89 11,16 43,03 12,36 12,88 11,36 26,81 11,36 28,28 12,75 15,11 Table 7.3: Character Error Rates of homogenous TrOCR Ensembles on IAM Data 41 Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 1,99 2,88 2,55 2,37 3,02 1,92 2,69 1,97 2,69 1,24 1,57 Bagging 3,87 3,15 3,19 2,59 3,87 1,98 3,68 2,58 3,78 2,23 2,68 K-FoldCross 2,61 2,65 3,49 2,49 3,16 1,67 2,82 1,79 3,00 1,20 1,86 Partitioning 13,92 12,10 11,17 13,11 11,03 8,51 2,28 8,73 12,93 7,70 9,46 Table 7.4: Character Error Rates of homogenous AttentionHTR Ensembles on Duke Data Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 7,03 7,19 6,75 7,03 7,25 6,01 7,43 6,03 7,52 5,82 6,01 Bagging 7,78 7,86 7,11 7,38 7,79 6,47 8,09 6,51 8,31 6,24 6,33 K-FoldCross 7,28 7,43 7,11 7,39 7,15 6,21 7,57 6,43 7,75 6,00 6,14 Partitioning 8,69 8,51 9,07 8,62 8,74 7,34 9,28 7,47 9,45 6,95 7,32 Table 7.5: Character Error Rates of homogenous AttentionHTR Ensembles on IAM Data Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 5,65 5,91 5,42 4,39 6,29 3,13 6,16 3,42 3,92 3,44 4,66 Bagging 5,27 7,60 4,44 5,56 5,19 3,81 5,87 3,79 4,54 3,53 4,28 K-FoldCross 3,19 3,51 3,17 3,80 3,17 2,16 3,53 2,22 3,65 1,92 2,90 Partitioning 16,04 21,16 14,78 19,17 20,71 12,53 16,46 12,49 15,43 13,39 15,43 Table 7.6: Character Error Rates of homogenous SimpleHTR Ensembles on Duke Data Dataset Methods Base Learners OutputCombination 1 2 3 4 5 WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 11,04 11,47 11,26 11,38 11,35 9,02 12,85 9,47 13,12 8,70 9,92 Bagging 11,16 10,87 11,49 11,21 11,52 8,83 12,62 9,26 13,03 8,72 9,66 K-FoldCross 10,27 10,13 10,23 10,35 10,07 8,21 11,51 8,62 11,64 7,95 8,75 Partitioning 15,97 16,26 16,01 15,55 15,62 12,55 18,30 12,87 18,83 11,78 13,44 Table 7.7: Character Error Rates of homogenous SimpleHTR Ensembles on IAM Data 42 Dataset Methods OutputCombination WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 1,02 3,55 0,63 3,95 3,45 7,13 Bagging 1,35 5,61 1,19 6,23 2,34 4,53 K-FoldCross 1,20 2,95 1,68 2,89 1,26 2,80 Partitioning 8,89 51,16 8,21 59,55 79,41 90,06 Table 7.8: Character Error Rates of heterogenous Ensembles on Duke Data Dataset Methods OutputCombination WordVote CharVote WeightedWordVote WeightedCharVote MaxProb AvgProb Complete Data 5,63 11,98 5,86 13,38 6,97 10,30 Bagging 5,96 16,33 6,13 18,74 8,48 12,89 K-FoldCross 5,98 11,96 5,83 14,49 7,33 10,32 Partitioning 6,95 25,72 7,34 27,43 9,87 15,26 Table 7.9: Character Error Rates of heterogenous Ensembles on IAM Data 43 7 Bibliography [1] Benjamin Aunkofer. Ensemble Learning. url: https://data-science-blog.com/blog/ 2017/12/03/ensemble-learning/ (see page 1). [2] Jason Brownlee. A Gentle Introduction to Ensemble Learning Algorithms. url: https: //machinelearningmastery.com/tour-of-ensemble-learning-algorithms/ (see page 1). [3] Ajay Shekhawat Sargur N. Srihari and Stephen W. Lam. Optical character recognition (OCR). In: Encyclopedia of Computer Science. John Wiley and Sons Ltd., 2003, 1326ś1333 (see page 3). [4] Veronica Romero, Nicolas Serrano, Alejandro H. Toselli, Joan Andreu Sanchez, and Enrique Vidal. Handwritten Text Recognition for Historical Documents. In: Proceedings of Language Technologies for Digital Humanities and Cultural Heritage Workshop. Association for Computational Linguistics, 2011, 90ś96. url: https:// aclanthology.org/W11-4114.pdf (see page 3). [5] JAMSHED MEMON, MAIRA SAMI, RIZWAN AHMED KHAN, and MUEEN UDDIN. Handwritten Optical Character Recognition(OCR): A Comprehensive systematic Literature Review (SLR). IEEE Access (2020). url: https://ieeexplore. ieee.org/stamp/stamp.jsp?tp=&arnumber=9151144 (see page 3). [6] Aston Zhang, Zack C. Lipton, Mu Li, Alex J. Smola, Brent Werness andRachel Hu, and Shuai Zhang andYi Tay. Dive into Deep Learning. 2021. url: http://d2l.ai/ (see pages 3, 4). [7] Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. PhD thesis. Technische Universit ̈at Munchen, 2008 (see page 3). [8] Zhi-Hua Zhou, 181ś210. In: Machine Learning. Singapore: Springer Singapore, 2021. isbn: 978-981-15-1967-3. url: https://doi.org/10. 1007/978-981-15-1967-3_8 (see page 4). [9] M. A. Ganaie, Minghui Hu, A. K. Malik, M. Tanveer, and P. N. Suganthan. Ensemble deep learning: A review (2021). eprint: . url: https://arxiv.org/ pdf/2104.02395.pdf (see pages 4, 5, 8, 13ś15, 17). [10] Omer Sagi and Lior Rokach. Ensemble learning: A survey. WIREs Data Mining and Knowledge Discovery 8:4 (2018), e1249. url: https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/widm.1249 (see pages 4ś6, 8, 13, 14, 17). 45 [11] M.J.A.N.C. de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. L'imprimerie royale, 1785. url: https://books.google.de/books?id=RzAVAAAAQAAJ (see page 4). [12] Roman Bertolami. Ensemble Methods for Offline Handwritten Text Line Recognition. Inauguraldissertation. Universität Bern, 2008. url: https://biblio.unibe.ch/ download/eldiss/08bertolami_r.pdf (see pages 5, 7, 13, 17). [13] Alok Kumar and Mayank Jain. Ensemble Learning for AI Developers. Apress Berkeley CA, 2020. isbn: 978-1-4842-5939-9. 5940-5 (see pages 5, 13, 15, 17). [14] Hansen L.K., Liisberg C., and Salamon P. Ensemble methods for handwritten digit recognition. In: Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop. 1992, 333ś342. (see page 7). [15] Michael Perrone and Leon Cooper. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks. Neural networks for speech and image processing (1993). (see page 7). [16] Harris Drucker, Corinna Cortes, L. D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and Other Ensemble Methods. Neural Computation 6:6 (1994), 1289ś 1301. (see page 7). [17] Jianchang Mao and K.M. Mohiuddin. Improving OCR performance using character degradation models and boosting algorithm. Pattern Recognition Letters 18:11 (1997), 1415ś1419. issn: 0167-8655. 00137-2. url: https://www.sciencedirect.com/science/article/pii/S0167865597001372 (see page 7). [18] Jianchang Mao. A case study on bagging, boosting and basic ensembles of neural networks for OCR. In: 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98CH36227). Vol. 3. 1998, 1828ś1833 vol.3. (see page 7). [19] Simon Günter and Horst Bunke. Ensembles of classifiers for handwritten word recognition. Document Analysis and Recognition 5:4 (2003), 224ś232. url: https: //doi.org/10.1007/s10032-002-0088-2 (see pages 7, 13, 15, 16). [20] Simon Günter and Horst Bunke. Evaluation of Classical and Novel Ensemble Methods for Handwritten Word Recognition. In: Structural, Syntactic, and Statistical Pattern Recognition. Ed. by Ana Fred, Terry M. Caelli, Robert P. W. Duin, Aurélio C. Campilho, and Dick de Ridder. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, 583ś591. isbn: 978-3-540-27868-9 (see pages 7, 15). 46 [21] S. Bernard, S. Adam, and L. Heutte. Using Random Forests for Handwritten Digit Recognition. In: Ninth International Conference on Document Analysis and Recognition (ICDAR 2007). Vol. 2. 2007, 1043ś1047. (see page 7). [22] Wolfgang Wilczok Elkeand Lellmann, 123ś136. In: Reading and Learning: Adaptive Content Recognition. Ed. by Andreas Dengel, Markus Junker, and Anette Weisbecker. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. 8_8. url: https://doi.org/10.1007/978-3-540-24642-8_8 (see page 7). [23] William B Lund. Ensemble Methods for Historical Machine-Printed Document Recognition. PhD thesis. Brigham Young University, 2014 (see page 7). [24] Yongquan Yang and Haijun Lv. A Survey on Ensemble Learning under the Era of Deep Learning. CoRR abs/2101.08387 (2021). url: https://arxiv.org/abs/2101. 08387 (see pages 7, 8, 15, 17). [25] Haifeng Wang, Changzai Pan, Xiao Guo, Chunlin Ji, and Ke Deng. From object detection to text detection and recognition: A brief evolution history of optical character recognition. WIREs Computational Statistics 13:5 (2021), e1547. url: https://wires.onlinelibrary.wiley.com/doi/ abs/10.1002/wics.1547 (see page 7). [26] Savita Ahlawat, Amit Choudhary, Anand Nayyar, Saurabh Singh, and Byungun Yoon. Improved Handwritten Digit Recognition Using Convolutional Neural Networks (CNN). Sensors 20:12 (2020). url: https: //www.mdpi.com/1424-8220/20/12/3344 (see page 7). [27] Hossein Karimi, Azadeh Esfahanimehr, Mohammad Mosleh, Faraz Mohammadian jadval ghadam, Simintaj Salehpour, and Omid Medhati. Persian Handwritten Digit Recognition Using Ensemble Classifiers. Procedia Computer Science 73 (2015). International Conference on Advanced Wireless Information and Communication Technologies (AWICT 2015), 416ś425. 12.018. url: https://www.sciencedirect.com/science/article/pii/S1877050915034791 (see page 7). [28] Yaser Ahangari Nanehkaran, Junde Chen, Soheil Salimi, and Defu Zhang. A pragmatic convolutional bagging ensemble learning for recognition of Farsi handwritten digits. The Journal of Supercomputing 77:11 (2021), 13474ś13493 (see page 7). [29] Mir Moynuddin Ahmed Shibly, Tahmina Tisha, and Shamim Ripon. Deep Learning and Ensemble Methods to Recognize Bangla Handwritten Character. PhD thesis. East West University Dhaka-1212, Bangladesh, 2020 (see page 7). 47 [30] Mohamed Awni, Mahmoud I. Khalil, and Hazem M. Abbas. Deep-Learning Ensemble for Offline Arabic Handwritten Words Recognition. In: 2019 14th International Conference on Computer Engineering and Systems (ICCES). 2019, 40ś45. (see page 7). [31] Ashis Paul, Rishav Pramanik, Samir Malakar, and Ram Sarkar. An ensemble of deep transfer learning models for handwritten music symbol recognition. Neural Computing and Applications 34:13 (2022), 10409ś10427 (see page 7). [32] Christian Reul, Dennis Christ, Alexander Hartelt, Nico Balbach, Maximilian Wehner, Uwe Springmann, Christoph Wick, Christine Grundig, Andreas Büttner, and Frank Puppe. OCR4allÐAn Open-Source Tool Providing a (Semi-)Automatic OCR Workflow for Historical Printings. Applied Sciences 9:22 (2019). app9224853. url: https://www.mdpi.com/2076-3417/9/22/4853 (see page 7). [33] Christoph Wick and Christian Reul. One-Model Ensemble-Learning for Text Recognition of Historical Printings. In: Document Analysis and Recognition - ICDAR 2021. Ed. by Josep Lladós, Daniel Lopresti, and Seiichi Uchida. Cham: Springer International Publishing, 2021, 385ś399. isbn: 978-3-030-86549-8 (see page 7). [34] Christian Reul, Stefan Tomasek, Florian Langhanki, and Uwe Springmann. Open Source Handwritten Text Recognition on00a0Medieval Manuscripts Using Mixed Models and00a0Document-Specific Finetuning. In: Document Analysis Systems. Ed. by Seiichi Uchida, Elisa Barney, and Véronique Eglin. Springer International Publishing, 2022, 414ś428. isbn: 978-3-031-06555-2 (see page 7). [35] Icaro Alzuru, Rhiannon Stephens, Andréa Matsunaga, Maurício Tsugawa, Paul Flemons, and José A.B. Fortes. Quality-Aware Human-Machine Text Extraction for Biocollections using Ensembles of OCRs. In: 2019 15th International Conference on eScience (eScience). 2019, 116ś125. (see page 7). [36] João Adriano Portela de Matos Silva. Ensembles de OCRs para aplicações médicas. PhD thesis. 2021 (see page 7). [37] Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on ensemble learning. Frontiers of Computer Science 14:2 (2020), 241ś258 (see page 8). [38] Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, and Furu Wei. TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models. 2021. url: https://arxiv.org/ abs/2109.10282 (see pages 9, 10, 20). [39] Dmitrijs Kass and Ekta Vats. AttentionHTR: Handwritten Text Recognition Based on00a0Attention Encoder-Decoder Networks. In: Document Analysis Systems. Springer International Publishing, 2022, 507ś522. isbn: 978-3-031-06555-2 (see pages 10, 11, 20). 48 [40] Harald Scheidl. Build a Handwritten Text Recognition System using TensorFlow. 2018. url: https://towardsdatascience.com/build-a-handwritten-text-recognition-systemusing-tensorflow-2326a3487cd5 (see pages 11, 12, 20). [41] Jason Brownlee. A Gentle Introduction to k-fold Cross-Validation. url: https : / / machinelearningmastery.com/k-fold-cross-validation (see page 13). [42] Walter Kempner. Treatment of hypertensive vascular disease with rice diet. The American journal of medicine 4:4 (1948), 545ś577 (see page 19). [43] U-V Marti and Horst Bunke. The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition 5:1 (2002), 39ś46 (see page 19). [44] Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine learning 51:2 (2003), 181ś207 (see page 21). 49
|
2509.16220
|
ON TIME-LIKE CLASS A SURFACES IN A STATIC SPACE TIME
FURKAN KAYA AND NURETTIN CENK TURGAY
Abstract. In this paper, we consider time-like surfaces in the static space-time given by
the warped product L3
1(c) f ×(I, dz2), where L3
1(c) denotes the Lorentzian space form with
the constant sectional curvature c ∈{−1, 0, 1}. In particular, we study the surfaces with
light-like
∂
∂z
T . First, we construct a globally defined pseudo-orthonormal frame field on
a surface satisfying this condition and deal with the invariants associated with this frame
field. Then, we obtain a complete classification theorem for class A surfaces. Finally, we
consider some applications of this theorem.
1. Introduction
Let M be a (semi-) Riemannian submanifold of ¯
M × I, where I is an open interval and
M m is a (semi-) Riemannian manifold and Rn(c) denote the n-dimensional Riemannian
space form of constant sectional curvature c with the metric tensor gc. By considering the
decomposition
(1.1)
∂
∂w = T + η,
one can define a vector field T tangent to and a vector field η normal to M, where
∂
∂w
denotes the unit vector field tangent to the interval I. Based on the decomposition (1.1),
the natural question of “how the geometric constraints imposed on T and η determine the
submanifold itself ” arises. The first systematic studies in this direction were carried out in
[12, 15], where, in particular, the notion of class A immersions was introduced as follows:
Definition 1.1. [15] Let ϕ : M m →Rn(c)×I be an isometric immersion from a Riemann-
ian manifold M. The immersion ϕ is said to be of class A if the tangent component T is
an eigenvector of the shape operator at each point of M.
It turns out that class A submanifolds have some connections with several well-studied
submanifolds of product spaces. For example, a biconservative submanifold is necessarily
class A under some conditions as obtained by Manfio et al. in [10], where the authors
study biconservative submanifolds of R4(c) × R. Furthermore, the link between class A
submanifolds and those with constant principal curvatures is explored in a recent study in
[9], which focuses on hypersurfaces in R3(c) × R.
On the other hand, a classical and fundamental example of a warped product space-
time is the Robertson–Walker model I1
1 ×f R3(c), which is defined on the product manifold
I × R3(c) with the metric tensor
−dt2 + f(t)2gc,
where f : I →(0, ∞) is a smooth, non-vanishing, warping function depending only on
the time-like coordinate t, [14, 17]. In this construction, the vector field tangent to the
base I is time-like, and thus the Robertson–Walker space-time describes a time-oriented
2010 Mathematics Subject Classification. 53C42 (Primary), 53A10.
Key words and phrases. Warped product spaces, static space-times, class A surfaces, time-like
submanifolds.
1
arXiv:2509.16220v1 [math.DG] 11 Sep 2025
2
F. KAYA AND N. CENK TURGAY
warped product in which the warping function evolves in the temporal direction. In con-
trast, Dobarro and ¨Unal studied the geometric properties of standard static space-times,
focusing on the characterization of their warped product structure, geodesic completeness,
and curvature properties in [4]. In this paper, based on their work, we study submanifolds
of a static space-time defined on the product manifold L3
1(c)×I, equipped with the warped
product metric
(1.2)
eg = f(z)2gc + dz2,
where Ln
1(c) denotes the n-dimensional Lorentzian space form with sectional curvature c
equipped with the metric tensor gc and the warping function f : I →(0, ∞) depends
on a spatial coordinate z. Unlike the Robertson–Walker case, where the base direction is
time-like, here the vector field tangent to the base I is space-like. As a result, with the
Killing vector
∂
∂t, [5], the geometry represents a static warped product space-time in which
the Lorentzian structure is entirely determined by the fiber L3
1(c), while the base direction
is purely spatial.
Several recent works investigate submanifolds of warped product spaces, which play a
significant role in modeling various physical space-times, [1, 2, 3, 16]. For example, Dekimpe
and Van der Veken [3] provide an overview of the geometric properties of marginally trapped
surfaces in Robertson–Walker space-times. Recently, Alias et al. investigate codimension-
two space-like submanifolds within the null hypersurfaces of generalized Robertson–Walker
space-times in [1] and some geometrical properties of class A surfaces of Robertson–Walker
space-times are obtained in [2, 16].
The structural difference between the two models has important geometric implications.
In the Robertson–Walker case I1
1 ×f R3(c), the temporal warping emphasizes cosmological
evolution and is naturally suited for studying expanding or contracting universes. On the
other hand, the static nature of the model L3
1(c) f×I studied in this paper leads to a different
variety of submanifold configurations. In particular, from the viewpoint of the theory of
non-degenerated submanifolds, the space-like base in the static space-time L3
1(c) f ×I allows
for different geometric behaviors compared to the time-like base of Robertson–Walker space-
times. One of the main differences is the causality of the vector fields T and η defined by
the decomposition (1.1). Unlike the Robertson–Walker space-times, where T and η have
opposite causal character—that is, one of them is space-like while the other is time-like—in
the space-time L3
1(c) f × I, either T or η can be light-like.
In this direction, we investigate surfaces in the static space-time L3
1(c) f × I with light-
like
∂
∂z
T. Section 2 recalls the fundamental notions of submanifold theory and fixes the
notation that will be used throughout the paper. In Section 3, after constructing explicit
examples of surfaces with light-like
∂
∂z
T, we derive a characterization of surfaces with this
property. Section 4 contains the main result, Theorem 4.3, which provides a complete local
classification of class A surfaces in L3
1(c) f ×I. Finally, in Section 5, we present applications
of Theorem 4.3, yielding all pseudo-umbilical and totally umbilical surfaces with light-like
∂
∂z.
2. Basic Concepts and Notation
This section presents the basic notation used throughout the paper, along with a short
summary of the theory of submanifolds of semi-Riemannian manifolds.
Let En
r denote the n-dimensional semi-Euclidean space with the index r given by the
metric tensor
g0,r = −
r
X
i=1
dx2
i +
n
X
i=r+1
dx2
i ,
TIME-LIKE SURFACES
3
where (x1, x2, . . . , xn) is a rectangular coordinate system in Rn and we put (x1, x2, x3, x4) =
(t, x, y, z) if n = 4.
When n > 2, Ln
1(c) stands for the n dimensional Lorentzian space-form with the constant
sectional curvature c, i.e.,
Ln
1(c) =
Sn
1
if c = 1,
En
1
if c = 0,
Hn
1
if c = −1
,
and gc stands for its metric tensor, where Sn
1(K0) and Hn
1(K0) will stand for n-dimensional
de Sitter space and anti de-Sitter spaces, respectively, with the sectional curvature K0 and
we put Sn
1(1) = Sn
1 and Hn
1(−1) = Hn
1.
2.1. The Static Space-Time L3
1(c) f ×I. In this subsection, we are going to consider the
static space-time given by the warped product L3
1(c) f × I with the Levi-Civita connection
e∇and metric tensor eg = ⟨·, ·⟩defined by (1.2) for a non-vanishing smooth function f.
Throughout this paper, F will denote the function defined by
F(z) :=
Z z
z0
1
f(ξ)dξ
for a z0 ∈I. Moreover, for a given vector field X tangent to L3
1(c) f × I, we define a vector
field ¯X tangent to L3
1(c) and a function X4 by
(2.1)
¯X := Π∗(X),
X4 := ⟨X, ∂
∂z⟩,
where Π : L3
1(c) × I →L3
1(c) is the canonical projection. Then, we write
X = ( ¯X, X4).
Occasionally, by a slight abuse of notation, we adopt the convention
( ¯X, 0) = ¯X.
On the other hand, from [13] it is obtained that Levi-Civita connection e∇of L3
1(c) f × I
has the form
(2.2)
e∇XY = ∇0
XY + f ′
f
X4 ¯Y + Y4 ¯X, −⟨¯X, ¯Y ⟩
,
where ∇0 denotes the Levi-Civita connection of the Cartesian product L3
1(c) × I.
Now, let M be a time-like surface in L3
1(c) f ×I. Then, by letting w = z in (1.1), a vector
field T tangent to M and a vector field η normal to M are defined by
(2.3)
∂
∂z
M
= T + η.
Remark 2.1. Assumptions. We are going to exclude the trivial cases when M contains
an open part of a horizontal slice ˆ
M f(z0) × {z0} or a vertical cylinder M = α f × I and also
the case when it is contained in a totally geodesic hypersurface of L3
1(c) f × I, where ˆ
M is
a surface and α is a curve in the space form L3
1(c). Therefore, throughout the paper, we
shall assume that
• Tp ̸= 0 and ηp ̸= 0 at every point p ∈M,
• M does not contain any open subset lying in a totally geodesic hypersurface of
L3
1(c) f × I.
4
F. KAYA AND N. CENK TURGAY
2.2. Basic Definitions and Facts in the Theory of Submanifolds. Let Mn be a
semi-Riemannian submanifold of (N, ˆg) with the Levi-Civita connection ∇, the second
fundamental form h, shape operator A and normal connection ∇⊥. Then, for all vector
fields X and Y tangent to M and ξ normal to M, the Gauss and Weingarten formulæ
∇N
XY = ∇XY + h(X, Y ),
∇N
Xζ = −AζX + ∇⊥
Xξ
are satisfied, where ∇N denotes the Levi-Civita connection of N. Note that h and A are
related by
(2.4)
ˆg(h(X, Y ), ξ) = ˆg(AξX, Y ).
The mean curvature vector field H of M is defined by
H = 1
ntr h.
M is said to be umbilical along ξ if there exists a function aξ ∈C∞(M) such that
ˆg(AξX, Y ) = aξˆg(X, Y ).
If M is umbilical along H, then it is called a pseudo-umbilical submanifold of N. Fur-
thermore, if M is umbilical along ξ for every ξ normal to M, then M is said to be totally
umbilical.
Let ˜R and R stand for the curvature tensors of M and N, respectively and R⊥denote
the normal curvature tensor of M. Then the Codazzi, Gauss and Ricci equations take the
form
( ˜R(X, Y )Z)⊥
=
(∇Xh)(Y, Z) −(∇Y h)(X, Z),
(2.5)
R(X, Y )Z
=
Ah(Y,Z)X −Ah(X,Z)Y
(2.6)
and
(2.7)
( ˜R(X, Y )ξ)⊥= R⊥(X, Y )ξ + h(AξX, Y ) −h(X, AξY ),
respectively, where the covariant derivative (∇Xh)(Y, Z) of the second fundamental form h
is defined by
(∇Xh)(Y, Z) = ∇⊥
Xh(Y, Z) −h(∇XY, Z) −h(Y, ∇XZ).
M is said to have flat normal bundle if R⊥= 0.
Now, let ˆ
M be a time-like surface in the Lorentzian space form L3
1(c), with metric tensor ˆg
and shape operator ˆS. Further, assume that ˆ
M is not totally geodesic. Then, the Gaussian
curvature K of ˆ
M is defined by
K =
⟨R(X, Y )Y, X⟩
⟨X, X⟩⟨Y, Y ⟩−⟨X, Y ⟩2,
where {X, Y } is a frame field for the tangent bundle of
ˆ
M.
ˆ
M is said to be flat if K
vanishes identically on ˆ
M. In this case, for all p ∈M there exists a local coordinate system
(Np, (U, V )) such that Np ∋p and
(2.8)
ˆg = −(dUdV + dV dU).
We are going to use the following remark.
TIME-LIKE SURFACES
5
Remark 2.2. Let ˆ
M be a time-like surface. Then, a direct computation yields that local
coordinates u, v satisfy
(2.9)
ˆg = −
1
f(u)2du2 +
1
f(u)(dudv + dvdu)
and ∂v is proportional to ∂U if and only if (u, v) and (U, V ) given in (2.8) are related by
(2.10)
U(u, v) = 1
2c1
F(u) −v
c1
+ c2,
V (u, v) = c1F(u)
for some constants c1 ̸= 0, c2.
It is well known that the matrix representation of ˆS, with respect to a suitable frame
field, takes one of the forms
(2.11)
I. ˆS =
λ1
0
0
λ2
,
II. ˆS =
λ
µ
−µ
λ
,
III. ˆS =
λ
1
0
λ
for some smooth functions λ, λi, and µ ̸= 0.
Note that in case III, the frame field is
pseudo-orthonormal, whereas in the remaining cases, it is orthonormal.
We are going to use the following well-known result:
Remark 2.3. If ˆ
M 2
1 is a (totally) umbilical surface, then it is locally an open part of one
of the following surfaces (see, for example, [11]):
(1) c = 1 and ˆ
M ⊂S2
1( 1
r2) ⊂S3
1 is parametrized by
(2.12)
˜ϕ(s1, s2) = (r sinh s1, r cosh s1 cos s2, r cosh s1 sin s2,
√
1 −r2),
0 < r2 < 1,
(2) c = −1 and ˆ
M ⊂H2
1(−1
r2) ⊂H3
1 is parametrized by
(2.13)
˜ϕ(s1, s2) = (r cosh s1 cos s2, r cosh s1 sin s2, r sinh s1,
√
r2 −1),
r2 > 1,
(3) c = −1 and ˆ
M ⊂H3
1 is the flat surface parametrized by
˜ϕ(U, V ) =
U + V
√
2
, a(2UV −1) −1
4a, a(2UV −1) + 1
4a, U −V
√
2
,
a ̸= 0,
(2.14)
(4) c = 0 and ˆ
M ⊂S2
1( 1
r2) ⊂E3
1 is parametrized by
(2.15)
˜ϕ(s1, s2) = (r sinh s1, r cosh s1 cos s2, r cosh s1 sin s2).
2.3. Null Scrolls. Let ˆ
M be a Lorentzian surface in L3
1(c). The shape operator ˆS of ˆ
M
has the canonical form of Case III in (2.11) if and only if M is a null scroll, given in the
following example (see, e.g., the proof of [6, Theorem on p. 55]).
Example 2.4. [6, 7] Let α be a light-like curve in L3
1(c) with a Cartan frame {A, B; C}
such that
(2.16)
⟨A, B⟩= −1, ⟨C, C⟩= 1,
⟨A, A⟩= ⟨B, B⟩= ⟨C, A⟩= ⟨C, B⟩= 0
along α satisfying
(2.17)
α′ = A,
A′ = aC,
B′ = bC + cα,
C′ = bA + aB
for some smooth functions a and b. Then, the surface parametrized by
(2.18)
˜ϕ(U, V ) = α(U) + V B(U)
is called a null scroll. Note that if the function b is constant, then the surface parametrized
by (2.18) is called B-scroll.
6
F. KAYA AND N. CENK TURGAY
Remark 2.5. [6] Let ˆ
M be a null scroll in L3
1(c) given by the parametrization (2.18). Then,
the shape operator ˆS of ˆ
M along the unit normal vector field
˜N = −tbB −C,
is, [6],
(2.19)
b
a + tb′
0
b
with respect to {∂V , ∂U}.
Therefore, a null scroll in L3
1(0) = E3
1 is flat (and equivalently minimal) if and only if it is
a B-scroll with b = 0. In this case, if a ̸= 0, then ˆ
M is congruent to the surface given by
(2.20)
˜ϕ(U, V ) =
1
6
√
2
U 3 + 6U + 6V, 3
√
2U 2, U 3 −6U + 6V
.
We are going to use the following lemma.
Lemma 2.6. A flat null scroll ˆ
M in the anti de Sitter space H3
1 generated by a = −1
k2 and
b = 1 is congruent to the B-scroll parametrized by
˜ϕ(U, V ) =
(U −2k2V ) cos
U
k
−2k sin
U
k
2k
, −(2k3 + k) cos
U
k
+ (U −2k2V ) sin
U
k
2
√
2k2
,
k (2k2 −1) cos
U
k
−(U −2k2V ) sin
U
k
2
√
2k2
, (U −2k2V ) cos
U
k
2k
!
.
(2.21)
Conversely, the surface given by (2.21) in H3
1 is flat.
Proof.
By solving (2.17) for c = −1, a = −1
k2 and b = 1, we get
α(U) =
sin U
k −
U cos U
k
2k
k
v1 +
−
U sin U
k
2k
−1
2 cos U
k
k
v2 + k sin U
k v3 −k cos U
k v4,
A(U) =
U sin U
k
2k3
+ sin U
k sin 2U
k
4k2
+ cos3 U
k
2k2
!
v1 + cos U
k v3 + sin U
k v4
+
−U cos U
k
2k3
−sin U
k cos2 U
k
2k2
+ sin 2U
k cos U
k
4k2
!
v2,
B(U) = cos U
k v1 + sin U
k v2,
C(U) = −U cos U
k
2k2
v1 +
k cos U
k −U sin U
k
2k2
v2 + k sin U
k v3 −k cos U
k v4
(2.22)
for some constant vectors vi ∈E4
1. By a direct computation using (2.16) and (2.22), we
obtain
⟨v1, v3⟩= −1,
⟨v3, v3⟩= 1
k2,
⟨v2, v4⟩= 1,
and ⟨vi, vj⟩= 0 for all other pairs. Therefore, up to a suitable isometry of H3
1, one can
choose
v1 = (−k, 0, 0, −k), v2 =
0, 1
√
2, 1
√
2, 0
, v3 =
0, 1
√
2, −1
√
2, 0
, v4 =
0, 0, 0, 1
k
from which, together with (2.22), we obtain (2.21).
Converse of the lemma follows from a direct computation.
□
TIME-LIKE SURFACES
7
3. Surfaces with Light-Like T
In this section, we obtain local classifications of time-like surfaces in L3
1(c) f × I with
light-like
∂
∂z
T.
Let M be an oriented time-like surface in L3
1(c) f × I and assume that the normal vector
field T defined by (2.3) is light-like. In this case, the decomposition (2.3) turns into
(3.1)
∂
∂z
M
= T + e3,
where e3 is a unit normal vector field. We are going to consider the pseudo-orthonormal
frame field {T, U} of the tangent bundle of M such that
⟨T, T⟩= ⟨U, U⟩= 0,
⟨T, U⟩= −1
and an orthonormal frame field {e3, e4} of the normal bundle of M.
3.1. Examples of Surfaces. In this subsection, we construct some examples of surfaces
which satisfies certain geometrical properties.
First, we obtain the next proposition to present a surface which satisfies the property of
having light-like
∂
∂z
T.
Proposition 3.1. Let ˜ϕ(u, v) be a parametrization of a time-like surface ˆ
M in L3
1(c) with
the induced metric
(3.2)
ˆg = −
1
f(u)2du2 + E(u, v)
f(u)2 (dudv + dvdu)
for a non-vanishing function E and consider the surface M of L3
1(c) f × I parametrized by
(3.3)
ϕ(u, v) =
˜ϕ(u, v), u
.
Then, T =
∂
∂z
T is light-like on M.
Proof.
By considering (3.2), we see that T =
1
E∂v is light-like and the vector field
e3 = 1
E(−˜ϕv, E) is normal to M. Furthermore, T and e3 satisfy (3.1). Therefore,
∂
∂z
T = T
is light-like on M.
□
Definition 3.2. Throughout this paper, a surface M of L3
1(c) f × I parametrized by (3.3)
for a surface ˆ
M in L3
1(c) is going to be called as the surface generated by ˆ
M.
By making certain specific choices for the surface
ˆ
M in Theorem 3.1, we obtain the
following explicit examples:
Example 3.3. Let c = 1 and consider the surface M generated by ˆ
M ⊂S2
1( 1
r2), r < 1.
Then, by a direct computation considering (2.12) and (3.2), we obtain the parametrization
ϕ(u, v) =
r
A(u, v), r
cos(a(u))
A(u, v) −sin(a(u))
, r
sin(a(u))
A(u, v) + cos(a(u))
,
√
1 −r2, u
(3.4)
of M in S3
1 f × I, where A and a are smooth, non-vanishing functions satisfying
(3.5)
Au = −
A2
2r2f 2a′ −1
2a′ A2 + 1
.
8
F. KAYA AND N. CENK TURGAY
Note that we have T = 1
E∂v and the vector fields
e3 =
1
rf 2a′
1, cos a, sin a, 0, rf 2a′
,
e4 =
√
1 −r2
fA
−1, A sin a −cos a, −A cos a −sin a,
rA
√
1 −r2, 0
form an orthonormal base of the normal M such that η = e3, where the function E appearing
in Theorem 3.1 is
(3.6)
E = r2f 2a′Av
A2
.
By a direct computation we obtain the shape operators of M as
(3.7)
Ae3 =
−f′
f
a′
A + f′
f + a′′
a′
0
−f′
f
!
,
Ae4 =
−h4
12
−h4
22
0
−h4
12
,
which yields that the surface parametrized by (3.4) is a class A surface with light-like
∂
∂z
T,
where we have
(3.8)
h4
12 = h4
22 = −
√
1 −r2
rf
.
Example 3.4. Let c = −1 and consider the surface M generated by ˆ
M ⊂H2
1(−1
r2), r > 1.
Then, by a direct computation considering (2.13) and (3.2), we obtain the parametrization
ϕ(u, v) =
r
cos a(u)
A(u, v) −sin a(u)
, r
sin a(u)
A(u, v) + cos a(u)
,
r
A(u, v),
√
r2 −1, u
(3.9)
of M in H3
1 f × I, where A and a are smooth, non-vanishing functions satisfying
Au =
A2
2r2f 2a′ −1
2a′ A2 + 1
.
Similar to Theorem 3.6, we observe that the surface M is a class A surface with light-like
∂
∂z
T by obtaining (3.7) for
e3 =
1
rf 2a′
−cos a, −sin a, −1, 0, rf 2a′
,
e4 =
√
r2 −1
fA
cos a −A sin a, A cos a + sin a, 1,
rA
√
r2 −1, 0
E = −r2f 2a′Av
A2
,
h4
12 = h4
22 =
√
r2 −1
rf
Example 3.5. Let c = −1 and consider the surface M generated by flat totally umbilical
surface ˆ
M parametrized by (2.14). Then, by considering (2.14) and (3.2), we obtain the
parametrization
ϕ(u, v) =
2c2
1F(u) + 2c2c1 + F(u) −2v
2
√
2c1
, −a (c1F(u) + c2) (2v −F(u))
c1
−a −1
4a,
−a (c1F(u) + c2) (2v −F(u))
c1
−a + 1
4a, (2c2
1 −1) F(u) + 2 (c1c2 + v)
2
√
2c1
, u
TIME-LIKE SURFACES
9
of M in H3
1 f × I, where a, c1, and c2 are some constants with a, b ̸= 0.
By a direct
computation, we obtain the shape operators of M as
Ae3 = f ′
f I,
Ae4 =
1
f
1
f
0
1
f
,
(3.10)
where we have
e3 =
1
√
2c1f
1, 2
√
2a (c1F + c2) , 2
√
2a (c1F + c2) , −1,
√
2c1f
,
e4 =
1
2
√
2c1f
−2c2
1F −2c2c1 −F + 2v, 2
√
2c1
a (c1F + c2) (2v −F)
c1
−a + 1
4a
,
−2
√
2c1
−a (c1F + c2) (2v −F)
c1
+ a + 1
4a
,
1 −2c2
1
F −2 (c1c2 + v) , 0
E =f.
Hence, M is a class A surface with light-like
∂
∂z
T.
Example 3.6. Let c = 0 and consider the surface M generated by ˆ
M ⊂S2
1( 1
r2). Then, by
a direct computation considering (2.15) and (3.2), we obtain the parametrization
ϕ(u, v) =
r
A(u, v), r
cos a(u)
A(u, v) −sin a(u)
, r
sin a(u)
A(u, v) + cos a(u)
, u
of M in E3
1 f × I, where A and a are smooth, non-vanishing functions satisfying (3.5).
Similar to Theorem 3.3, we observe that the surface M is a class A surface with light-like
∂
∂z
T by obtaining (3.7) for
e3 =
1
rf 2a′
1, cos a, sin a, rf 2a′
,
e4 = −1
Af (1, cos a −A sin a, A cos a + sin a, 0)
E =r2f 2a′Av
A2
,
h4
12 = h4
22 = −1
rf
Example 3.7. Consider the surface M in L3
1(c) f × I generated by a null scroll ˆ
M in L3
1(c)
described in Theorem 2.4 for some smooth functions a and b. Then, by a direct computation
considering (2.18) and (3.2), we obtain the parametrization
ϕ(u, v) = (α(U(u)) + V (u, v)B(U(u)), u)
(3.11)
of M in L3
1(c) f × I, where V and U are smooth, non-vanishing functions satisfying
(3.12)
U ′V 2 b(U)2 + c
−2Vu = −
1
f 2U ′ .
Similar to Theorem 3.3, we observe that the shape operators of M has the form
(3.13)
Ae3 =
−f′
f
f′U′+f((b(U)2+c)V U′2+U′′)
fU′
0
−f′
f
!
,
Ae4 =
b(U)
f
f2(a+V b′)U′2+b(U)
f
0
b(U)
f
!
,
10
F. KAYA AND N. CENK TURGAY
which yields that M is a class A surface with light-like
∂
∂z
T, where we have
e3
=
1
U ′f 2
B(U), U ′f 2
,
e4
=
1
f (V b(U)B(U) + C(U), 0)
E
=
−U ′f 2Vv.
(3.14)
3.2. Surfaces with Light-Like T. In this subsection, we consider time-like surfaces under
the condition that the vector field T =
∂
∂z
T is light-like.
Let M be an oriented time-like surface in the space-time L3
1(c) f ×I such that the tangent
vector field T, defined by (2.3), is light-like at every point of M. In this case, using equation
(3.1), one can construct a pseudo-orthonormal frame field {T, U} for the tangent bundle of
M satisfying
⟨T, T⟩= ⟨U, U⟩= 0,
⟨T, U⟩= −1
as well as an orthonormal frame field {e3, e4} for the normal bundle of M. Note that (3.1)
implies
T = ¯T,
U = ( ¯U, −1),
e3 = ( ¯e3, 1),
e4 = ¯e4.
(3.15)
We are going to use the following lemma throughout this article:
Lemma 3.8. Let M be a time-like surface in L3
1(c) f × I such that the vector field
∂
∂z
T is
light-like on M. Then, the vector field U defined by (3.1) satisfies
(3.16)
f ′|M = −U (f|M) .
Proof.
Let p = (˜p, z(p)) ∈M and consider an integral curve α = (¯α, α4) of e1 starting
from p. Then, we have
U(f|M)p = d
du
u=0
(f ◦α)(u) = d
du
u=0
f(α4(u))
which yields
(3.17)
U(f|M)p = α′
4(0)f ′(α4(0)).
Since α(0) = p and α′(0) = Up, (3.15) implies α′
4(0) = −1. Hence, (3.17) turns into
U(f|M)p = f ′(z(p)).
which yields (3.16).
□
On the other hand, the Levi-Civita connection ∇of M satisfies
e∇TT = ω1T,
e∇TU = −ω1U,
e∇UT = ω2T,
e∇UU = −ω2U,
(3.18)
while the normal connection ∇⊥of M induces
e∇⊥
T e3 = ω3e4,
e∇⊥
T e4 = −ω3e3,
e∇⊥
Ue3 = ω4e4,
e∇⊥
Ue4 = −ω4e3,
(3.19)
TIME-LIKE SURFACES
11
for some smooth functions ωi. Moreover, the second fundamental form h of M takes the
form
h(T, T) = h3
11e3 + h4
11e4
h(T, U) = h3
12e3 + h4
12e4
h(U, U) = h3
22e3 + h4
22e4,
(3.20)
where ha
jk are some smooth functions, with a = 3, 4 and j, k = 1, 2.
Now, we are ready to prove the following lemma:
Lemma 3.9. Let M be a time-like surface in L3
1(c) f × I such that the vector field
∂
∂z
T is
light-like on M and consider the positively oriented, global frame frame field {T, U; e3, e4}
defined by (3.1). Then, the following conditions hold:
(1) The Levi-Civita connection ∇of M satisfies
∇TT = ∇TU = 0,
∇UT =
f ′
f −h3
22
T,
∇UU = −
f ′
f −h3
22
U.
(3.21)
(2) The matrix representation of the shape operators of M corresponding to the pseudo-
orthonormal frame {T, U} has the form
(3.22)
Ae3 =
−f′
f
−h3
22
0
−f′
f
!
,
Ae4
=
−h4
12
−h4
22
−h4
11
−h4
12
(3) The second fundamental form h of M satisfies
h(T, T) = h4
11e4
h(T, U) = −f ′
f e3 + h4
12e4,
h(U, U) = h3
22e3 + h4
22e4.
(3.23)
(4) The normal connection ∇⊥of M induces the following relations
∇⊥
T e3 = −h4
11e4,
∇⊥
Ue3 = −h4
12e4.
(3.24)
Proof.
We are going to use the formulæ given by (3.15) - (3.20). Note that by combining
(3.19) with (2.4), we have
Ae3T = −h3
12T −h3
11U,
Ae3U = −h3
22T −h3
12U,
Ae4T = −h4
12T −h4
11U,
Ae4U = −h4
22T −h4
12U.
(3.25)
On the other hand, by getting the covariant derivative of (3.1) along a vector field X
tangent to M, we obtain
(3.26)
e∇X
∂
∂z = ∇XT + h(X, T) −Ae3X + ∇⊥
Xe3.
Moreover, (2.2) implies e∇X
∂
∂z = f′
f ¯X which is equivalent to
(3.27)
e∇X
∂
∂z = f ′
f
X −⟨X, T⟩∂
∂z
because of (2.1). By considering (3.1) and (3.27), we observe that the tangential and normal
parts of (3.26) imply
(3.28)
f ′
f (X −⟨X, T⟩T) = ∇XT −Ae3X
and
(3.29)
−f ′
f ⟨X, T⟩e3 = h(X, T) + ∇⊥
Xe3,
12
F. KAYA AND N. CENK TURGAY
respectively.
By combining (3.15), (3.18) and (3.25) with (3.28) for X = T and X = U, we get
f ′
f T = h3
11U + (ω1 + h3
12)T,
f ′
f (T + U) = (ω2 + h3
22)T + h3
12U
(3.30)
and, because of (3.19) and (3.20), (3.29) for X = T and X = U imply
0 = h3
11e3 + (ω3 + h4
11)e4,
f ′
f (e3) = h3
12e3 + (h4
12 + ω4)e4.
(3.31)
By a direct computation using (3.30), we obtain
h3
11 = 0,
h3
12 = f ′
f ,
ω1 + h3
12 = f ′
f ,
ω2 + h3
22 = f ′
f ,
(3.32)
and the equations appearing in (3.31) give
(3.33)
ω3 = −h4
11,
ω4 = −h4
12.
By using (3.32), we observe that (3.18), (3.25) and (3.20) turn into (3.21), (3.22) and (3.23),
respectively, and (3.33), together with (3.19), yields (3.24).
□
Next, by using Theorem 3.9, we construct a local coordinate system compatible with the
global frame field {T, U; e3, e4}.
Lemma 3.10. Let M be a time-like surface in L3
1(c) f × I such that the vector field
∂
∂z
T
is light-like on M and p ∈M. Then, there exists a local coordinate system (Np, (u, v)) such
that Np ∋p and the following conditions hold:
(a) The vector field T and U defined by (3.1) satisfy
T|Np
=
1
E ∂v,
(3.34)
U|Np
=
−∂u
(3.35)
for a non-vanishing function E defined on Np,
(b) The fuction h3
22 satisfies
(3.36)
h3
22 = −Eu
E + f ′
f .
Proof.
Because of (3.21), we have [ET, −U] =
E
f ′
f −h3
22
+ U(E)
T whenever
E ∈C∞(M). So, if E is a non-vanishing function satisfying
(3.37)
E
f ′
f −h3
22
+ U(E) = 0,
then we have [ET, −U] = 0. Therefore there exists a local coordinate system (Np, (u, v))
near to p such that ET = ∂v and −U = ∂u from which we obtain (3.34) and (3.35).
Consequently, (3.35) and (3.37) implies (3.36).
□
Next, we have the following local classification theorem for the surfaces satisfying the
condition that
∂
∂z
T is light-like.
TIME-LIKE SURFACES
13
Theorem 3.11. Let M be a time-like surface in L3
1(c) f × I. Then,
∂
∂z
T is light-like on
M if and only if every point p ∈M has a neighborhood which can be parametrized as given
in Theorem 3.1.
Proof.
In order to prove the necessary condition, consider a local coordinate sys-
tem (Np, (u, v)) given in Theorem 3.10 near to an arbitrary p ∈M.
Let ϕ(u, v) =
(˜ϕ(u, v), Z(u, v)) be the parametrization of Np, where we define
˜ϕ := Π ◦ϕ,
Z := ϕ4
and Π : L3
1(c) × I →L3
1(c) is the canonical projection. Then, by combining (3.34) and
(3.35) with (3.15), we get
1
E
˜ϕv, Zv
= T =
¯T,
−
˜ϕu, Zu
= U =
( ¯U, −1)
on Np from which we have
Zv = 1,
Zu = 0,
(3.38)
˜ϕv = ¯T,
−˜ϕu = ¯U.
(3.39)
Because of (3.38), by a translation on the coordinate u, we have Z(u, v) = u. Therefore, Np
can be parametrized as given in (3.3) for an L3
1(c)-valued smooth function ˜ϕ. Furthermore,
(3.15) and (3.39) imply
gc
˜ϕu, ˜ϕu
= −
1
f(u)2,
gc
˜ϕu, ˜ϕv
=
E
f(u)2,
gc
˜ϕv, ˜ϕv
= 0.
Consequently, ˜ϕ is a parametrization of a Lorentzian surface ˆ
M in L3
1(c) with the induced
metric given by (3.2). Thus, the proof of the necessary condition is completed. The proof
of the sufficient condition is obtained in Theorem 3.1.
□
4. Class A Surfaces
In this subsection, we are going to consider class A surfaces in L3
1(c) f ×I such that
∂
∂z
T
is light-like.
Let ˆ
M be a Lorentzian surface in L3
1(c) with the local parametrization ˜ϕ(u, v) such that
(3.2) and (3.3). Let ˜N and ˆS denote the unit normal vector field and shape operator of ˆ
M,
respectively. We are going to use the following lemma obtained by a direct computation.
Lemma 4.1. The Levi-Civita connection ∇L3
1(c) satisfies
∇
L3
1(c)
∂u
∂u =
Eu
E −2f ′
f
˜ϕu +
Eu
E2 −f ′
fE
˜ϕv + h1 ˜N
∇
L3
1(c)
∂u
∂v = h2 ˜N
∇
L3
1(c)
∂v
∂v = Ev
E
˜ϕv + h3 ˜N
(4.1)
for some smooth functions hi.
14
F. KAYA AND N. CENK TURGAY
Now, consider the surface parametrized by (3.3). Then, by Theorem 3.11,
∂
∂z
T is light-
like on M. On the other hand, the vector fields
T = 1
E ∂v = 1
E (˜ϕv, 0),
U = −∂u = −(˜ϕu, 1),
e3 = 1
E (˜ϕv, 1),
e4 = 1
f ( ˜N, 0)
form the frame field {T, U, e3, e4} defined by (3.1). By a direct computation, we obtain
(4.2)
Ae3 =
−f′
f
Eu
E −f′
f
0
−f′
f
!
,
Ae4 =
fh2
E
−fh1
−fh3
E2
fh2
E
.
Next, we get the following corollary by considering the shape operators of M given by
(4.2).
Corollary 4.2. Let M be a surface in L3
1(c) f × I given in Theorem 3.1 with light-like
∂
∂z
T. Then, the followings are equivalent to each other:
(i) M is a class A surface,
(ii) h3 = 0,
(iii) ∂v is a principal direction of ˆ
M.
Proof.
(i) ⇔(ii) Because of the first equation of (4.2), we have Ae3T = −f′
f , that is T
is a principal direction of Ae3. Therefore, (4.1) and the second equation of (4.2) imply that
M is a A surface if and only if h3 = 0.
(ii) ⇔(iii) By the definition of h3, we have h3 = 0 is satisfied if and only if
gc
∇
L3
1(c)
∂v
∂v, ˜N
= 0.
which is equivalent to
ˆS(∂v) = Eh2
f 2 ∂v.
□
Now, we are ready to prove the main result of this paper.
Theorem 4.3. Let M be a surface in the space-time L3
1(c) f ×I such that
∂
∂z
T is light-like.
Then, M is a class A surface if and only if for all p ∈M there exists a neighborhood Np
given by one of following five cases:
(i) Np is congruent to the surface given in Theorem 3.7,
(ii) c = 1 and Np is congruent to the surface given in Theorem 3.3,
(iii) c = −1 and Np is congruent to the surface given in Theorem 3.4,
(iv) c = −1 and Np is congruent to the surface given in Theorem 3.5,
(v) c = 0 and Np is congruent to the surface given in Theorem 3.6.
Proof.
Let p ∈M. Then, since M has light-like
∂
∂z
T, Theorem 3.11 implies the
existence of a local coordinate system (Np, (u, v)) near to p such that Np is parametrized
by (3.3) for a Lorentzian surface ˆ
M with the metric tensor (3.2). Now, in order to prove
the necessary condition, assume that Np is class A. Then Theorem 4.2 implies that the
shape operator ˆS of ˆ
M has a light-like eigenvector ∂v. Therefore, ˆS has the canonical forms
given in case I or III of (2.11).
Case 1. ˆS is diagonalizable. In this case, there exists an orthonormal frame field of the
tangent bundle of ˆ
M such that ˆS has the form given in case I of (2.11) for some λ1, λ2.
TIME-LIKE SURFACES
15
Since ∂v is a light-like eigenvector, we have λ1 = λ2, i.e,
ˆ
M is (totally) umbilical. By
Theorem 2.3, we have four subcases.
Case 1.(a). c = 1 and ˆ
M ⊂S2
1( 1
r2) ⊂S3
1 for a constant 0 < r < 1. In this case, Np can be
parametrized by
ϕ(u, v) =
r
s1(u, v), r
cos s2(u, v)
s1(u, v)
−sin s2(u, v)
, r
sin s2(u, v)
s1(u, v)
+ cos s2(u, v)
,
√
1 −r2, u
(4.3)
for some smooth functions s1, s2. Since ∂u and ∂v are light-like vectors, (4.3) implies
∂s2
∂v
2∂s1
∂v +
s1
2 + 1
∂s2
∂v
=
0,
(4.4)
s1
2 + r2f 2 s1
2 + 1
∂s2
∂u
2
+ 2r2f 2∂s1
∂u
∂s2
∂u
=
0.
(4.5)
Because of (4.4), we have either
(4.6)
s2(u, v) = a(u)
or
(4.7)
s2(u, v) = a(u) −2 tan−1 (s1(u, v))
for a smooth function a.
Let s2 satisfy (4.6). In this case, (4.5) implies
(4.8)
s1(u, v) = A(u, v)
for a function A satisfying (3.5). By combining (4.6) and (4.8) with (4.3), we obtain (3.4).
Therefore, we have the case (ii) of the theorem.
Note that the other case (4.7) results in a surface congruent to the one given in Theo-
rem 3.3.
Case 1.(b). c = −1 and ˆ
M ⊂H2
1(−1
r2) ⊂H3
1 for a constant r > 1. In this case, similar to
case 1.(b), we start with the parametrization of Np given by
ϕ(u, v) =
r
cos s2(u, v)
s1(u, v)
−sin s2(u, v)
, r
sin s2(u, v)
s1(u, v)
+ cos s2(u, v)
,
r
s1(u, v),
√
r2 −1, u
for some smooth functions s1, s2 and obtain (4.4) together with
s1
2 −r2f 2 s1
2 + 1
∂s2
∂u
2
−2r2f 2∂s1
∂u
∂s2
∂u
=
0.
Then, we get the surface parametrized by (3.9). Thus, we have the case (iii) of the theorem.
Case 1.(c). c = −1 and ˆ
M ⊂H3
1 is the flat surface parametrized by (2.14). Note that the
parameters U, V in (2.14) satisfy (2.8). Therefore, because of Theorem 2.2, the coordinates
u, v satisfy (2.10). By combining (2.14), (2.10) with (3.3), we obtain the surface constructed
in Theorem 3.5. Therefore, we have the case (iv) of the theorem.
Case 1.(d). c = 0 and ˆ
M ⊂S2
1( 1
r2) ⊂E3
1. In this case, by a similar way to case 1.(b), we
get the case (v) of the theorem.
Case 2.
ˆS is non-diagonalizable. In this case, there exists a pseudo-orthonormal frame
field of the tangent bundle of ˆ
M such that ˆS has the form given in case III of (2.11). In
16
F. KAYA AND N. CENK TURGAY
this case, ˆ
M is a null scroll described in Theorem 2.4 for some functions a and b and it can
be parametrized as given in (2.18).
By considering (2.18) and (3.3), we obtain (3.11) for some smooth functions U(u, v) and
V (u, v). Since ∂u and ∂v are light-like vectors, (4.3) implies
∂U
∂v
V 2 b(U)2 + c
∂U
∂v −2∂V
∂v
=
0,
(4.9)
f 2∂U
∂u
V 2 b(U)2 + c
∂U
∂u −2∂V
∂u
+ 1
=
0.
(4.10)
Because ∂V is the only eigenvector of the shape operator of ˆ
M (See (2.19)), the vector field
∂v must be proportional to ∂V . Therefore, (4.9) implies
U = U(u).
(4.11)
Consequently, (4.10) turns into (3.12). By combining (3.12) and (4.11) with (3.11), we
get the surface constructed in Theorem 3.7. Therefore, we have case (i) of the theorem.
Hence the proof of necessary condition is completed. The proof of the sufficient condition
is obtained in the previous subsection.
□
5. Applications of Class A Surfaces
In this section, as applications of Theorem 4.3 we obtain local clasification theorems on
some important classes of surfaces in the space-time L3
1(c) f × I with the property of of
having light-like
∂
∂z
T.
Let M be the surface in L3
1(c) f × I given by Theorem 3.1. First, we obtain the following
corollaries directly follows from (4.2).
Corollary 5.1. M is a pseudo-umbilical surface of L3
1(c) f × I if and only if the equations
(5.1)
h2h3 = 0,
h1h2f 4 −Euf ′f + Ef ′2 = 0
are satisfied.
Proof.
By a direct computation using (4.2), we obtain
AH =
f2h2
2
E2 + f′2
f2
−h1h2f4−Euf′f+Ef′2
Ef2
−f2h2h3
E3
f2h2
2
E2 + f′2
f2
!
which yields that M is pseudo-umbilical if and only if (5.1) is satisfied.
□
Corollary 5.2. M is a totally umbilical surface of L3
1(c) f × I if and only the functions E,
h1 and h3 satisfy
(5.2)
f ′
f −Eu
E = h1 = h3 = 0.
Proof.
The proof directly follows from (4.2).
□
Corollary 5.3. M has flat normal bundle in L3
1(c) f × I if and only the functions E and
h3 satisfy
(5.3)
h3
f ′
f −Eu
E
= 0
Proof.
The proof directly follows from (4.2) and the Ricci equation (2.7).
□
TIME-LIKE SURFACES
17
5.1. Surfaces with Flat Normal Bundle. In this subsection, as a result of Theorem 5.3,
we obtain the following corollary.
Proposition 5.4. Let M be a Lorentzian surface in the static space-time L3
1(c) f × I with
light-like
∂
∂z
T. Then M has flat normal bundle if and only if it is locally congruent to one
of the following class of surfaces:
(i) A class A surface given in Theorem 4.3,
(ii) A surface which can be parametrized by (3.3) given in Theorem 3.1, where ˜ϕ(u, v)
is a local parametrization of a flat Lorentzian surface ˆ
M in L3
1(c) with the induced
metric given by (2.9).
Proof.
Let p ∈M. Since M has light-like
∂
∂z
T, Theorem 3.11 implies that p has a
neighborhood Np parametrized by (3.3) given in Theorem 3.1.
Now, in order to prove the necessary condition, we assume that M has flat normal bundle.
Then, because of Theorem 5.3, we have two cases: If h3 = 0 on Np, then Theorem 4.2 implies
that Np is class A and we have the case (i). On the other hand, if h3(q) ̸= 0 at q ∈Np,
then, by shrinking Np, if necessary, we observe that (5.3) implies
f ′
f −Eu
E = 0
on Np. So, there exists a non-vanishing function c1 such that E(u, v) = c1(v)f(u). By
re-defining v properly we can assume c1 = 1. Thus, we have (2.9). Consequently, Np is flat
and we have case (ii) of the theorem.
Converse follows from a direct computation.
□
Note that Theorem 2.2 ensures the existence of a local coordinate system for which (2.9)
is satisfied.
5.2. Pseudo-Umbilical Surfaces. In this subsection, we consider the pseudo-umbilical
surfaces with light-like
∂
∂z
T in L3
1(c) f × I.
We are going to use the following lemma.
Lemma 5.5. Let M be a class A surface in L3
1(c) f × I with light-like
∂
∂z
T. Then, M is
pseudo-umbilical if and only if it is congruent to the surface given in Theorem 3.7 for some
a, b, U, V satisfying (3.12) and
U ′b′(U)b(U) −f ′ (b(U)2 + c)
f
=
0,
(5.4a)
a(U)b(U)U ′2 + b(U)2
f 2
−f ′ (f ′U ′ + fU ′′)
f 2U ′
=
0.
(5.4b)
Proof.
If M is pseudo-umbilical. Then it is locally congruent to one of five surfaces
given in Theorem 4.3. First, we are going to prove that the surface given in case (ii)-(v)
can not be pseudo-umbilical.
Towards contradiction, assume that M is the surface given in Theorem 3.3 and it is
pseudo-umbilical. Then, (5.1) is satisfied because of Theorem 5.1. By a direct computation
using (3.7), (3.8) and (5.1), we obtain
f(u)a′(u)f ′(u)
U(u, v)
+ f(u)a′′(u)f ′(u)
a′(u)
+ f ′(u)2 −1
r2 + 1 = 0
which can be satisfied only if a′(u)f ′(u)Uv(u, v) = 0. However, this is not possible because of
(3.6). By a similar method, we observe that none of the surfaces constructed in Theorem 3.4,
Theorem 3.5, and Theorem 3.6 is pseudo-umbilical because of (3.7) and (3.10).
18
F. KAYA AND N. CENK TURGAY
Consequently, M is locally congruent to a surface given in Theorem 3.7 for some smooth
functions a, b, U, V, E satisfying (3.12) and (3.14). Moreover, from (3.13) and (5.1) we see
that M is pseudo-umbilical if and only if the equation
(5.5) V U ′
b(U)U ′b′(U) −f ′ (b(U)2 + c)
f
+a(U)b(U) (U ′)2+b(U)2
f 2
−f ′ (f ′U ′ + fU ′′)
f 2U ′
= 0
is satisfied. Note that (3.14) implies that Vv ̸= 0. Therefore, (5.5) implies (5.4).
□
Next, we obtain the local classification of pseudo-umbilical surfaces in E3
1(c) f × I.
Proposition 5.6. Let M be a surface in the static space-time E3
1(c) f × I with light-like
∂
∂z
T.
Then M is pseudo-umbilical if and only if it is locally congruent to one of the
following surfaces:
(i) M is a class A surface parametrized by
ϕ(u, v) =
(c1F(u) + c2)3 + 6(c1F(u) + c2) + 3F(u)−6v
c1
6
√
2
, 1
2(c1F(u) + c2)2,
(c1F(u) + c2)3 −6(c1F(u) + c2) + 3F(u)−6v
c1
6
√
2
, u
!
,
(5.6)
(ii) M is a class A surface given in Theorem 3.7 for some a, b, U, V satisfying
b(U)
=
c3f,
Vu = c2
3
2 f 2UV 2 +
1
2f 2U
(5.7)
a(U)
=
−c2
3f 2U ′ + ff ′U ′′ + f ′2U ′
c3f 3U ′3
(5.8)
for a non-zero constant c3.
(iii) A flat surface generated by a suitable cone in E3
1 which can be parametrized by
ϕ(u, v)
=
(F(u) −v, b1(v), b2(v), u)
(5.9)
for some smooth functions b1, b2 such that b′
1
2 + b′
2
2 = 1.
Proof.
Let p ∈M. Since the vector field
∂
∂z
T is light-like on M, Theorem 3.11 ensures
that a neighborhood Np of p which can be parametrized by (3.3) presented in Theorem 3.1
for a Lorentzian surface ˆ
M. In order to prove the necessary condition, we assume that Np
is pseudo-umbilical. Then, Theorem 5.1 implies that we have two cases.
Case I. Np is class A. In this case, Theorem 5.5 implies that is congruent to the surface
given in Theorem 3.7, for some a, b, U, V satisfying (3.12) and (5.4) and ˆ
M is a null scroll.
Since c = 0, (5.4a) implies the first equation in (5.7) for a constant c3.
We have two
sub-cases:
Case I.a. c3 = 0. In this case, we have b = 0. Thus, ˆ
M is flat which yields that (U, V )
satisfies for (2.10) for some constants c1 ̸= 0, c2.
When a ̸= 0, by Theorem 2.5, ˆ
M is congruent to the B-scroll parametrized by (2.20). By
combining (2.20) with (2.10) we get (5.6). So, we have case (i) of the theorem.
On the other hand, if a = 0, then (3.13) implies that Ae4 = 0 and (4) of Theorem 3.9
implies ∇⊥e4 = 0.
Therefore, Np lies on the totally geodesic hypersurface M × I of
E3
1(c) f ×I, where M is the plane in E3
1 with the normal ¯e4. However, this is a contradiction.
Case I.b. c3 ̸= 0. In this case, (3.12) and (5.4b) gives the second equation in (5.7) and
(5.8), respectively. So, we have case (ii) of the theorem.
TIME-LIKE SURFACES
19
Case II. Np is not class A, i.e., h3 ̸= 0. In this case, (5.1) implies
h2 = f ′
f −Eu
E
=
0.
(5.10)
Therefore, ˆ
M is flat, we have E = f and (2.9). Moreover, from Theorem 4.1 we have
˜ϕuu
=
−f ′
f
˜ϕu + h1 ˜N,
(5.11)
˜ϕuv
=
0
(5.12)
and (2.9) implies
(5.13)
Nu = −h1f ˜ϕv,
Nv = −h3(f ˜ϕu + ˜ϕv).
Therefore,
ˆ
M is flat and because of (5.13), the Gauss equation (2.6) implies h1h3 = 0
which implies h1 = 0. Moreover, the Codazzi equation (2.5) for X = Z = ∂u, Y = ∂v gives
h1 = h1(u). Thus, by solving (5.11) and (5.12), we obtain
˜ϕ(u, v)
=
F(u)v1 + ϕ2(v)
(5.14)
for a constant vector v1 and a E3
1-valued smooth function ϕ2. (2.9) and (5.14) imply
g0(v1, v1) = −1, g0(v1, ϕ′
2) = 1, g0(ϕ′
2, ϕ′
2) = 0.
Therefore up to a suitable isometry of E3
1, one can choose
v1 = (1, 0, 0),
ϕ2(v) = (−v, b1(v), b2(v)), b′
1
2 + b′
2
2 = 1.
(5.15)
By combining (5.14) and (5.15) we obtain (5.9) which gives case (iii) of the proposition.
Hence, the proof of necessary condition completed.
Conversely, a direct computation yields that if M is a surface given by case (i) or case
(iii) of the theorem, then the shape operator AH of M along H takes the form
AH =
f ′
f
2
I
and for the surface M we obtain
AH =
c2
3 + f ′(u)2
f(u)2
I.
Hence, all of the surfaces given in the proposition are pseudo-umbilical.
□
In the following proposition, we consider pseudo-umbilical surfaces in S3
1(c) f × I.
Proposition 5.7. Let M be a surface in the static space-time S3
1(c) f × I with light-like
∂
∂z
T.
Then M is pseudo-umbilical if and only if it is locally congruent to one of the
following surfaces:
(i) M is a class A surface given in Theorem 3.7 for some a, b, U, V satisfying
b(U)
=
p
c3f 2 −1,
Vu = c3f 2
2 V 2U ′ +
1
2f 2U ′
(5.16)
a(U)
=
U ′ (−c3f 2 + f ′2 + 1) + ff ′U ′′
f 2p
c3f 2 −1 (U ′)3
(5.17)
for a non-zero constant c3.
20
F. KAYA AND N. CENK TURGAY
(ii) A flat surface generated by the flat isoparametric surface S1
1
1
cos θ
× S1 1
sin θ
⊂S3
1
which can be parametrized by
ϕ(u, v) =
−cos θ sinh
−c2 sec θ
√
2
+ F(u)(−cos θ)
p
sec(2θ) +
v sec θ
p
sec(2θ)
!
,
cos θ cosh
−c2 sec θ
√
2
+ F(u)(−cos θ)
p
sec(2θ) +
v sec θ
p
sec(2θ)
!
,
sin θ cos
−c2 csc θ
√
2
+ F(u) sin θ
p
sec(2θ) +
v csc θ
p
sec(2θ)
!
,
−sin θ sin
−c2 csc θ
√
2
+ F(u) sin θ
p
sec(2θ) +
v csc θ
p
sec(2θ)
!
, u
!
(5.18)
for some θ ∈(0, π/4) and c2 ∈R.
Proof.
Let p ∈M.
Similar to the proof of Theorem 5.6, we consider the local
parametrization of M on a neighborhood Np of p given in Theorem 3.1 for a Lorentzian
surface ˆ
M and assume that Np is pseudo-umbilical. We study two cases obtained from
Theorem 5.1:
Case I. Np is class A. Because of Theorem 5.5, ˆ
M is a null scroll parametrized by (2.18)
and Np is the surface constructed in Theorem 3.7, for some a, b, U, V satisfying (3.12). By
solving (5.4a) for c = 1, we see that b satisfies the first equation in (5.16) for a constant c3
and ε = ±1. By re-defining V properly on (2.18), one can assume ε = 1. So, we have the
first equation in (5.16). By a further computation using (3.12) and (5.4b), we also get the
second equation in (5.16) and (5.17), respectively. So, we have case (i) of the theorem.
Case II. Np is not class A, i.e., h3 ̸= 0. In this case, similar to the proof of Theorem 5.6,
we have (5.10) from which we observe that ˆ
M is flat and the equations (2.9), (5.13) are
satisfied for some functions h1, h3.
By taking into account (4.1) in Theorem 4.1 and (5.13), we use the Codazzi and Gauss
equations (2.6) and (2.5) to obtain h1 =
1
k1f2, h3 = k1. Thus, because of (5.13), the shape
operator ˆS of ˆ
M takes the form
(5.19)
ˆS =
0
k1f
−
1
2fk1
k1
with respect to {∂u, ∂v}.
for a constant k1 ̸= 0. Therefore, ˆ
M is a isoparametric surface with distinct real principal
curvatures which yields that ˆ
M is an open part of S1
1
1
cos θ
× S1 1
sin θ
, [8]. So, up to a
suitable isometry of S3
1, we can assume that ˆ
M can be parametrized by
ˆψ(U, V ) =
cos θ sinh U + V
√
2 cos θ, cos θ cosh U + V
√
2 cos θ, sin θ cos U −V
√
2 sin θ,
sin θ sin U −V
√
2 sin θ
,
(5.20)
where θ ∈(0, π/2) is defined by k1 = cot θ −tan θ.
Note that the shape operator ˆS of ˆ
M is
(5.21)
ˆS =
1
2(cot(θ) −tan(θ))
−csc(2θ)
−csc(2θ)
1
2(cot(θ) −tan(θ))
with respect to {∂U, ∂V }.
TIME-LIKE SURFACES
21
Moreover, by Theorem 2.2, the coordinate systems (u, v) and (U, V ) are related by (2.10)
for some c1 ̸= 0, c2. By a direct computation considering (2.10), (5.19) and (5.21), we
obtain θ ∈(0, π/4) and
(5.22)
c1 =
r
sec(2θ)
2
.
By combining (5.20) with (2.10) and (5.22), we obtain (5.18) which give the case (ii) of
the proposition. Hence, the proof of the necessary condition is completed. The proof of the
sufficient condition follows from a direct computation.
□
5.3. Totally-Umbilical Surfaces. In this subsection, we consider the totally-umbilical
surfaces.
Let M be a totally-umbilical surface in L3
1(c) f × I with light-like
∂
∂z
T. Then, M is also
pseudo-umbilical. Furthermore, Theorem 4.2 and Theorem 5.2 implies that it is class A.
Therefore, M is locally congruent to a surface given in Theorem 3.7 for some smooth
functions a, b, U, V, E satisfying (3.12) and (3.14). Then, from (3.13) and (5.2) imply that
have
V U ′ b(U)2 + c
+ f ′
f + U ′′
U ′ =0,
f 2U ′2 (a(U) + V b′(U)) + b(U) =0.
(5.23)
As described in proof of Theorem 5.5, we have Vv ̸= 0 and U ′ ̸= 0. Therefore, (5.23) gives
b(U)2 + c
=
0, ,
(5.24a)
f ′
f + U ′′
U ′ =
0,
(5.24b)
f 2U ′2a(U) + b(U) =
0.
(5.24c)
and (3.12), together with (5.24a), implies
(5.24d)
2f 2U ′Vu = 1.
We are ready to prove the following proposition:
Proposition 5.8. Let M be a surface in the static space-time L3
1(c) f × I with light-like
∂
∂z
T. Then, we have the followings:
(i) If c = 0 or c = 1, then M can not be totally umbilical.
(ii) M is a totally umbilical surface of H3
1(c) f × I if and only if it is congruent to the
flat surface given by
ϕ(u, v) =
(2kv + k2) cos
F(u) + k2
k
−2k sin
F(u) + k2
k
2k
,
−k (2k2 + 1) cos
F(u) + k2
k
−(2kv + k2) sin
F(u) + k2
k
2
√
2k2
,
k (2k2 −1) cos
F(u) + k2
k
−(2kv + k2) sin
F(u) + k2
k
2
√
2k2
,
(2kv + k2) cos
F(u) + k2
k
2k
, u
!
(5.25)
22
F. KAYA AND N. CENK TURGAY
Proof.
(i) follows from (5.24a) and (5.24c). First, we are going to prove the necessary
condition of (ii).
Assume that c = −1 and M is a totally umblical surface of H3
1(c) f × I, i.e., (5.24) is
satisfied. In this case, as described above, M is locally congruent to a surface given in
Theorem 3.7 for some smooth functions a, b, U, V, E satisfying (3.12) and (3.14). Note that
the equation (5.24a) implies b = 1 which yields that ˆ
M is flat.
Next, by combining (2.10) given in Theorem 2.2 with (5.24b)-(5.24d) we get
a = −1
k
2
,
U = kF(u) + k2,
V = F(u) −2v
2k
(5.26)
for some constants k and k2. Furthermore, since b = 1, the first equation in (5.26) and
Theorem 2.6 imply that ˆ
M is congruent to (2.21). By combining (2.21) and (5.26), we get
(5.25). Hence the proof is completed.
Conversely, let M be a surface given by (5.25). By a direct computation, we obtain
Ae3 = −f ′
f I,
Ae4 = 1
f I
which yields that M is totally umbilical.
□
Acknowledgements
This work forms a part of the first-named author’s PhD thesis and was carried out within
the scope of a project supported by T¨UB˙ITAK, the Scientific and Technological Research
Council of T¨urkiye (Project number 121F352).
Data Availability. Data sharing not applicable to this article because no datasets were
generated or analysed during the current study.
Code availability. N/A.
Conflicts of interest. The authors have not disclosed any competing interests.
References
[1] Alias, L.J., Melendez, J., Navarro, M. and Solis, A.D., Codimension two spacelike submanifolds into
null hypersurfaces of generalized Robertson-Walker spacetimes, Preprint: arXiv:2508.13852 (2025).
[2] Bekta¸s Demirci, B., Turgay, N.C. and Ye˘gin S¸en, R. On space-like class A surfaces in Robertson–Walker
spacetimes, Math. Nachr. 298 (2025), 718–729.
[3] Dekimpe, K. and Van der Veken, J., Geometric Study of Marginally Trapped Surfaces in Space Forms
and Robertson-Walker Space times—An Overview, Axioms 9 (2020), 60.
[4] Dobarro, F., ¨Unal, B., Special standard static space–times, J. Math. Phys. 45 (2004), 2675–2684.
[5] Dobarro, F., ¨Unal, B., Characterizing killing vector fields of standard static space-times, J. Geom. Phys.
62 (2012), 1070–1087.
[6] Ji, F. and Hou, Z.H., On Lorentzian surfaces with H2 = K in Minkowski 3-space, J. Math. Anal. 334
(2007), 54–58.
[7] Kim, D.-S., Kim, Y.-H., B-scrolls with non-diagonalizable shape operators,
Rocky Mt. J. Math. 33
(2003), 175–190.
[8] Li, C. and Wang, J. The classification of isoparametric surfaces in S3
1, Kobe J. Math. 22 (2005) 1–12.
[9] Manfio, F., dos Santos, J.B.M., dos Santos, J.P., Van der Veken, J. Hypersurfaces of S3 ×R and H3 ×R
with constant principal curvatures, J. Geom. Phys. 213 (2025), 105495
[10] Manfio, F., Turgay, N.C., Upadhyay, A. Biconservative Submanifolds in Sn × R and Hn × R J. Geom.
Anal. 29 (2019), 283–298
[11] Garc´ıa-Mart´ınez, S. C., Lucas, P. and Ram´ırez-Ospina, H. F., L1-2-Type Surfaces in 3-Dimensional
De Sitter and Anti De Sitter Spaces, Bull. Malays. Math. Sci. Soc. 46 (2023), 46:139.
[12] Mendon¸ca, B. and Tojeiro, R., Umbilical Submanifolds of Sn ×R, Canad. J. Math. 66 (2014), 400–428.
[13] B. O’Neill, Semi–Riemannian Geometry with Applications to Relativity, Academic Press, New York,
1982.
[14] Robertson, H. P., Kinematics and world-structure, Astrophys. J. 82 (1935), 284–301.
TIME-LIKE SURFACES
23
[15] Tojeiro, R., On a class of hypersurfaces in Sn × R and Hn × R, Bull. Braz.Math. Soc. 41 (2010),
199–209.
[16] Turgay, N.C., Ye˘gin S¸en, R. Biconservative Surfaces in Robertson–Walker Spaces, Results Math. 80
(2025), 77.
[17] Walker, A. G., On Milne’s theory of world-structure, P. Lond. Math. Soc. 42 (1937), 90–127.
Department of Mathematics, Faculty of Science and Letters, Istanbul Technical Uni-
versity, Istanbul, T¨urkiye
Email address: kayaf18@itu.edu.tr
Department of Mathematics, Faculty of Science and Letters, Istanbul Technical Uni-
versity, Istanbul, T¨urkiye
Email address: turgayn@itu.edu.tr
|
ON TIME-LIKE CLASS A SURFACES IN A STATIC SPACE TIME FURKAN KAYA AND NURETTIN CENK TURGAY Abstract. In this paper, we consider time-like surfaces in the static space-time given by the warped product L3 1(c) f ×(I, dz2), where L3 1(c) denotes the Lorentzian space form with the constant sectional curvature c ∈{-1, 0, 1}. In particular, we study the surfaces with light-like ∂ ∂z T . First, we construct a globally defined pseudo-orthonormal frame field on a surface satisfying this condition and deal with the invariants associated with this frame field. Then, we obtain a complete classification theorem for class A surfaces. Finally, we consider some applications of this theorem. 1. Introduction Let M be a (semi-) Riemannian submanifold of ̄ M × I, where I is an open interval and M m is a (semi-) Riemannian manifold and Rn(c) denote the n-dimensional Riemannian space form of constant sectional curvature c with the metric tensor gc. By considering the decomposition (1.1) ∂ ∂w = T + η, one can define a vector field T tangent to and a vector field η normal to M, where ∂ ∂w denotes the unit vector field tangent to the interval I. Based on the decomposition (1.1), the natural question of "how the geometric constraints imposed on T and η determine the submanifold itself " arises. The first systematic studies in this direction were carried out in [12, 15], where, in particular, the notion of class A immersions was introduced as follows: Definition 1.1. [15] Let φ : M m →Rn(c)×I be an isometric immersion from a Riemannian manifold M. The immersion φ is said to be of class A if the tangent component T is an eigenvector of the shape operator at each point of M. It turns out that class A submanifolds have some connections with several well-studied submanifolds of product spaces. For example, a biconservative submanifold is necessarily class A under some conditions as obtained by Manfio et al. in [10], where the authors study biconservative submanifolds of R4(c) × R. Furthermore, the link between class A submanifolds and those with constant principal curvatures is explored in a recent study in [9], which focuses on hypersurfaces in R3(c) × R. On the other hand, a classical and fundamental example of a warped product spacetime is the Robertson-Walker model I1 1 ×f R3(c), which is defined on the product manifold I × R3(c) with the metric tensor -dt2 + f(t)2gc, where f : I →(0, ∞) is a smooth, non-vanishing, warping function depending only on the time-like coordinate t, [14, 17]. In this construction, the vector field tangent to the base I is time-like, and thus the Robertson-Walker space-time describes a time-oriented 2010 Mathematics Subject Classification. 53C42 (Primary), 53A10. Key words and phrases. Warped product spaces, static space-times, class A surfaces, time-like submanifolds. 1 11 Sep 2025 2 F. KAYA AND N. CENK TURGAY warped product in which the warping function evolves in the temporal direction. In contrast, Dobarro and ̈Unal studied the geometric properties of standard static space-times, focusing on the characterization of their warped product structure, geodesic completeness, and curvature properties in [4]. In this paper, based on their work, we study submanifolds of a static space-time defined on the product manifold L3 1(c)×I, equipped with the warped product metric (1.2) eg = f(z)2gc + dz2, where Ln 1(c) denotes the n-dimensional Lorentzian space form with sectional curvature c equipped with the metric tensor gc and the warping function f : I →(0, ∞) depends on a spatial coordinate z. Unlike the Robertson-Walker case, where the base direction is time-like, here the vector field tangent to the base I is space-like. As a result, with the Killing vector ∂ ∂t, [5], the geometry represents a static warped product space-time in which the Lorentzian structure is entirely determined by the fiber L3 1(c), while the base direction is purely spatial. Several recent works investigate submanifolds of warped product spaces, which play a significant role in modeling various physical space-times, [1, 2, 3, 16]. For example, Dekimpe and Van der Veken [3] provide an overview of the geometric properties of marginally trapped surfaces in Robertson-Walker space-times. Recently, Alias et al. investigate codimensiontwo space-like submanifolds within the null hypersurfaces of generalized Robertson-Walker space-times in [1] and some geometrical properties of class A surfaces of Robertson-Walker space-times are obtained in [2, 16]. The structural difference between the two models has important geometric implications. In the Robertson-Walker case I1 1 ×f R3(c), the temporal warping emphasizes cosmological evolution and is naturally suited for studying expanding or contracting universes. On the other hand, the static nature of the model L3 1(c) f×I studied in this paper leads to a different variety of submanifold configurations. In particular, from the viewpoint of the theory of non-degenerated submanifolds, the space-like base in the static space-time L3 1(c) f ×I allows for different geometric behaviors compared to the time-like base of Robertson-Walker spacetimes. One of the main differences is the causality of the vector fields T and η defined by the decomposition (1.1). Unlike the Robertson-Walker space-times, where T and η have opposite causal character-that is, one of them is space-like while the other is time-like-in the space-time L3 1(c) f × I, either T or η can be light-like. In this direction, we investigate surfaces in the static space-time L3 1(c) f × I with lightlike ∂ ∂z T. Section 2 recalls the fundamental notions of submanifold theory and fixes the notation that will be used throughout the paper. In Section 3, after constructing explicit examples of surfaces with light-like ∂ ∂z T, we derive a characterization of surfaces with this property. Section 4 contains the main result, Theorem 4.3, which provides a complete local classification of class A surfaces in L3 1(c) f ×I. Finally, in Section 5, we present applications of Theorem 4.3, yielding all pseudo-umbilical and totally umbilical surfaces with light-like ∂ ∂z. 2. Basic Concepts and Notation This section presents the basic notation used throughout the paper, along with a short summary of the theory of submanifolds of semi-Riemannian manifolds. Let En r denote the n-dimensional semi-Euclidean space with the index r given by the metric tensor g0,r = - r X i=1 dx2 i + n X i=r+1 dx2 i , TIME-LIKE SURFACES 3 where (x1, x2, . . . , xn) is a rectangular coordinate system in Rn and we put (x1, x2, x3, x4) = (t, x, y, z) if n = 4. When n > 2, Ln 1(c) stands for the n dimensional Lorentzian space-form with the constant sectional curvature c, i.e., Ln 1(c) = Sn 1 if c = 1, En 1 if c = 0, Hn 1 if c = -1 , and gc stands for its metric tensor, where Sn 1(K0) and Hn 1(K0) will stand for n-dimensional de Sitter space and anti de-Sitter spaces, respectively, with the sectional curvature K0 and we put Sn 1(1) = Sn 1 and Hn 1(-1) = Hn 1. 2.1. The Static Space-Time L3 1(c) f ×I. In this subsection, we are going to consider the static space-time given by the warped product L3 1(c) f × I with the Levi-Civita connection e∇and metric tensor eg = ⟨·, ·⟩defined by (1.2) for a non-vanishing smooth function f. Throughout this paper, F will denote the function defined by F(z) := Z z z0 1 f(ξ)dξ for a z0 ∈I. Moreover, for a given vector field X tangent to L3 1(c) f × I, we define a vector field ̄X tangent to L3 1(c) and a function X4 by (2.1) ̄X := Π∗(X), X4 := ⟨X, ∂ ∂z⟩, where Π : L3 1(c) × I →L3 1(c) is the canonical projection. Then, we write X = ( ̄X, X4). Occasionally, by a slight abuse of notation, we adopt the convention ( ̄X, 0) = ̄X. On the other hand, from [13] it is obtained that Levi-Civita connection e∇of L3 1(c) f × I has the form (2.2) e∇XY = ∇0 XY + f ′ f X4 ̄Y + Y4 ̄X, -⟨ ̄X, ̄Y ⟩ , where ∇0 denotes the Levi-Civita connection of the Cartesian product L3 1(c) × I. Now, let M be a time-like surface in L3 1(c) f ×I. Then, by letting w = z in (1.1), a vector field T tangent to M and a vector field η normal to M are defined by (2.3) ∂ ∂z M = T + η. Remark 2.1. Assumptions. We are going to exclude the trivial cases when M contains an open part of a horizontal slice ˆ M f(z0) × {z0} or a vertical cylinder M = α f × I and also the case when it is contained in a totally geodesic hypersurface of L3 1(c) f × I, where ˆ M is a surface and α is a curve in the space form L3 1(c). Therefore, throughout the paper, we shall assume that • Tp ̸= 0 and ηp ̸= 0 at every point p ∈M, • M does not contain any open subset lying in a totally geodesic hypersurface of L3 1(c) f × I. 4 F. KAYA AND N. CENK TURGAY 2.2. Basic Definitions and Facts in the Theory of Submanifolds. Let Mn be a semi-Riemannian submanifold of (N, ˆg) with the Levi-Civita connection ∇, the second fundamental form h, shape operator A and normal connection ∇⊥. Then, for all vector fields X and Y tangent to M and ξ normal to M, the Gauss and Weingarten formulæ ∇N XY = ∇XY + h(X, Y ), ∇N Xζ = -AζX + ∇⊥ Xξ are satisfied, where ∇N denotes the Levi-Civita connection of N. Note that h and A are related by (2.4) ˆg(h(X, Y ), ξ) = ˆg(AξX, Y ). The mean curvature vector field H of M is defined by H = 1 ntr h. M is said to be umbilical along ξ if there exists a function aξ ∈C∞(M) such that ˆg(AξX, Y ) = aξˆg(X, Y ). If M is umbilical along H, then it is called a pseudo-umbilical submanifold of N. Furthermore, if M is umbilical along ξ for every ξ normal to M, then M is said to be totally umbilical. Let ̃R and R stand for the curvature tensors of M and N, respectively and R⊥denote the normal curvature tensor of M. Then the Codazzi, Gauss and Ricci equations take the form ( ̃R(X, Y )Z)⊥ = (∇Xh)(Y, Z) -(∇Y h)(X, Z), (2.5) R(X, Y )Z = Ah(Y,Z)X -Ah(X,Z)Y (2.6) and (2.7) ( ̃R(X, Y )ξ)⊥= R⊥(X, Y )ξ + h(AξX, Y ) -h(X, AξY ), respectively, where the covariant derivative (∇Xh)(Y, Z) of the second fundamental form h is defined by (∇Xh)(Y, Z) = ∇⊥ Xh(Y, Z) -h(∇XY, Z) -h(Y, ∇XZ). M is said to have flat normal bundle if R⊥= 0. Now, let ˆ M be a time-like surface in the Lorentzian space form L3 1(c), with metric tensor ˆg and shape operator ˆS. Further, assume that ˆ M is not totally geodesic. Then, the Gaussian curvature K of ˆ M is defined by K = ⟨R(X, Y )Y, X⟩ ⟨X, X⟩⟨Y, Y ⟩-⟨X, Y ⟩2, where {X, Y } is a frame field for the tangent bundle of ˆ M. ˆ M is said to be flat if K vanishes identically on ˆ M. In this case, for all p ∈M there exists a local coordinate system (Np, (U, V )) such that Np ∋p and (2.8) ˆg = -(dUdV + dV dU). We are going to use the following remark. TIME-LIKE SURFACES 5 Remark 2.2. Let ˆ M be a time-like surface. Then, a direct computation yields that local coordinates u, v satisfy (2.9) ˆg = - 1 f(u)2du2 + 1 f(u)(dudv + dvdu) and ∂v is proportional to ∂U if and only if (u, v) and (U, V ) given in (2.8) are related by (2.10) U(u, v) = 1 2c1 F(u) -v c1 + c2, V (u, v) = c1F(u) for some constants c1 ̸= 0, c2. It is well known that the matrix representation of ˆS, with respect to a suitable frame field, takes one of the forms (2.11) I. ˆS = λ1 0 0 λ2 , II. ˆS = λ μ -μ λ , III. ˆS = λ 1 0 λ for some smooth functions λ, λi, and μ ̸= 0. Note that in case III, the frame field is pseudo-orthonormal, whereas in the remaining cases, it is orthonormal. We are going to use the following well-known result: Remark 2.3. If ˆ M 2 1 is a (totally) umbilical surface, then it is locally an open part of one of the following surfaces (see, for example, [11]): (1) c = 1 and ˆ M ⊂S2 1( 1 r2) ⊂S3 1 is parametrized by (2.12) ̃φ(s1, s2) = (r sinh s1, r cosh s1 cos s2, r cosh s1 sin s2, √ 1 -r2), 0 1, (3) c = -1 and ˆ M ⊂H3 1 is the flat surface parametrized by ̃φ(U, V ) = U + V √ 2 , a(2UV -1) -1 4a, a(2UV -1) + 1 4a, U -V √ 2 , a ̸= 0, (2.14) (4) c = 0 and ˆ M ⊂S2 1( 1 r2) ⊂E3 1 is parametrized by (2.15) ̃φ(s1, s2) = (r sinh s1, r cosh s1 cos s2, r cosh s1 sin s2). 2.3. Null Scrolls. Let ˆ M be a Lorentzian surface in L3 1(c). The shape operator ˆS of ˆ M has the canonical form of Case III in (2.11) if and only if M is a null scroll, given in the following example (see, e.g., the proof of [6, Theorem on p. 55]). Example 2.4. [6, 7] Let α be a light-like curve in L3 1(c) with a Cartan frame {A, B; C} such that (2.16) ⟨A, B⟩= -1, ⟨C, C⟩= 1, ⟨A, A⟩= ⟨B, B⟩= ⟨C, A⟩= ⟨C, B⟩= 0 along α satisfying (2.17) α′ = A, A′ = aC, B′ = bC + cα, C′ = bA + aB for some smooth functions a and b. Then, the surface parametrized by (2.18) ̃φ(U, V ) = α(U) + V B(U) is called a null scroll. Note that if the function b is constant, then the surface parametrized by (2.18) is called B-scroll. 6 F. KAYA AND N. CENK TURGAY Remark 2.5. [6] Let ˆ M be a null scroll in L3 1(c) given by the parametrization (2.18). Then, the shape operator ˆS of ˆ M along the unit normal vector field ̃N = -tbB -C, is, [6], (2.19) b a + tb′ 0 b with respect to {∂V , ∂U}. Therefore, a null scroll in L3 1(0) = E3 1 is flat (and equivalently minimal) if and only if it is a B-scroll with b = 0. In this case, if a ̸= 0, then ˆ M is congruent to the surface given by (2.20) ̃φ(U, V ) = 1 6 √ 2 U 3 + 6U + 6V, 3 √ 2U 2, U 3 -6U + 6V . We are going to use the following lemma. Lemma 2.6. A flat null scroll ˆ M in the anti de Sitter space H3 1 generated by a = -1 k2 and b = 1 is congruent to the B-scroll parametrized by ̃φ(U, V ) = (U -2k2V ) cos U k -2k sin U k 2k , -(2k3 + k) cos U k + (U -2k2V ) sin U k 2 √ 2k2 , k (2k2 -1) cos U k -(U -2k2V ) sin U k 2 √ 2k2 , (U -2k2V ) cos U k 2k ! . (2.21) Conversely, the surface given by (2.21) in H3 1 is flat. Proof. By solving (2.17) for c = -1, a = -1 k2 and b = 1, we get α(U) = sin U k - U cos U k 2k k v1 + - U sin U k 2k -1 2 cos U k k v2 + k sin U k v3 -k cos U k v4, A(U) = U sin U k 2k3 + sin U k sin 2U k 4k2 + cos3 U k 2k2 ! v1 + cos U k v3 + sin U k v4 + -U cos U k 2k3 -sin U k cos2 U k 2k2 + sin 2U k cos U k 4k2 ! v2, B(U) = cos U k v1 + sin U k v2, C(U) = -U cos U k 2k2 v1 + k cos U k -U sin U k 2k2 v2 + k sin U k v3 -k cos U k v4 (2.22) for some constant vectors vi ∈E4 1. By a direct computation using (2.16) and (2.22), we obtain ⟨v1, v3⟩= -1, ⟨v3, v3⟩= 1 k2, ⟨v2, v4⟩= 1, and ⟨vi, vj⟩= 0 for all other pairs. Therefore, up to a suitable isometry of H3 1, one can choose v1 = (-k, 0, 0, -k), v2 = 0, 1 √ 2, 1 √ 2, 0 , v3 = 0, 1 √ 2, -1 √ 2, 0 , v4 = 0, 0, 0, 1 k from which, together with (2.22), we obtain (2.21). Converse of the lemma follows from a direct computation. □ TIME-LIKE SURFACES 7 3. Surfaces with Light-Like T In this section, we obtain local classifications of time-like surfaces in L3 1(c) f × I with light-like ∂ ∂z T. Let M be an oriented time-like surface in L3 1(c) f × I and assume that the normal vector field T defined by (2.3) is light-like. In this case, the decomposition (2.3) turns into (3.1) ∂ ∂z M = T + e3, where e3 is a unit normal vector field. We are going to consider the pseudo-orthonormal frame field {T, U} of the tangent bundle of M such that ⟨T, T⟩= ⟨U, U⟩= 0, ⟨T, U⟩= -1 and an orthonormal frame field {e3, e4} of the normal bundle of M. 3.1. Examples of Surfaces. In this subsection, we construct some examples of surfaces which satisfies certain geometrical properties. First, we obtain the next proposition to present a surface which satisfies the property of having light-like ∂ ∂z T. Proposition 3.1. Let ̃φ(u, v) be a parametrization of a time-like surface ˆ M in L3 1(c) with the induced metric (3.2) ˆg = - 1 f(u)2du2 + E(u, v) f(u)2 (dudv + dvdu) for a non-vanishing function E and consider the surface M of L3 1(c) f × I parametrized by (3.3) φ(u, v) = ̃φ(u, v), u . Then, T = ∂ ∂z T is light-like on M. Proof. By considering (3.2), we see that T = 1 E∂v is light-like and the vector field e3 = 1 E(- ̃φv, E) is normal to M. Furthermore, T and e3 satisfy (3.1). Therefore, ∂ ∂z T = T is light-like on M. □ Definition 3.2. Throughout this paper, a surface M of L3 1(c) f × I parametrized by (3.3) for a surface ˆ M in L3 1(c) is going to be called as the surface generated by ˆ M. By making certain specific choices for the surface ˆ M in Theorem 3.1, we obtain the following explicit examples: Example 3.3. Let c = 1 and consider the surface M generated by ˆ M ⊂S2 1( 1 r2), r 1. Then, by a direct computation considering (2.13) and (3.2), we obtain the parametrization φ(u, v) = r cos a(u) A(u, v) -sin a(u) , r sin a(u) A(u, v) + cos a(u) , r A(u, v), √ r2 -1, u (3.9) of M in H3 1 f × I, where A and a are smooth, non-vanishing functions satisfying Au = A2 2r2f 2a′ -1 2a′ A2 + 1 . Similar to Theorem 3.6, we observe that the surface M is a class A surface with light-like ∂ ∂z T by obtaining (3.7) for e3 = 1 rf 2a′ -cos a, -sin a, -1, 0, rf 2a′ , e4 = √ r2 -1 fA cos a -A sin a, A cos a + sin a, 1, rA √ r2 -1, 0 E = -r2f 2a′Av A2 , h4 12 = h4 22 = √ r2 -1 rf Example 3.5. Let c = -1 and consider the surface M generated by flat totally umbilical surface ˆ M parametrized by (2.14). Then, by considering (2.14) and (3.2), we obtain the parametrization φ(u, v) = 2c2 1F(u) + 2c2c1 + F(u) -2v 2 √ 2c1 , -a (c1F(u) + c2) (2v -F(u)) c1 -a -1 4a, -a (c1F(u) + c2) (2v -F(u)) c1 -a + 1 4a, (2c2 1 -1) F(u) + 2 (c1c2 + v) 2 √ 2c1 , u TIME-LIKE SURFACES 9 of M in H3 1 f × I, where a, c1, and c2 are some constants with a, b ̸= 0. By a direct computation, we obtain the shape operators of M as Ae3 = f ′ f I, Ae4 = 1 f 1 f 0 1 f , (3.10) where we have e3 = 1 √ 2c1f 1, 2 √ 2a (c1F + c2) , 2 √ 2a (c1F + c2) , -1, √ 2c1f , e4 = 1 2 √ 2c1f -2c2 1F -2c2c1 -F + 2v, 2 √ 2c1 a (c1F + c2) (2v -F) c1 -a + 1 4a , -2 √ 2c1 -a (c1F + c2) (2v -F) c1 + a + 1 4a , 1 -2c2 1 F -2 (c1c2 + v) , 0 E =f. Hence, M is a class A surface with light-like ∂ ∂z T. Example 3.6. Let c = 0 and consider the surface M generated by ˆ M ⊂S2 1( 1 r2). Then, by a direct computation considering (2.15) and (3.2), we obtain the parametrization φ(u, v) = r A(u, v), r cos a(u) A(u, v) -sin a(u) , r sin a(u) A(u, v) + cos a(u) , u of M in E3 1 f × I, where A and a are smooth, non-vanishing functions satisfying (3.5). Similar to Theorem 3.3, we observe that the surface M is a class A surface with light-like ∂ ∂z T by obtaining (3.7) for e3 = 1 rf 2a′ 1, cos a, sin a, rf 2a′ , e4 = -1 Af (1, cos a -A sin a, A cos a + sin a, 0) E =r2f 2a′Av A2 , h4 12 = h4 22 = -1 rf Example 3.7. Consider the surface M in L3 1(c) f × I generated by a null scroll ˆ M in L3 1(c) described in Theorem 2.4 for some smooth functions a and b. Then, by a direct computation considering (2.18) and (3.2), we obtain the parametrization φ(u, v) = (α(U(u)) + V (u, v)B(U(u)), u) (3.11) of M in L3 1(c) f × I, where V and U are smooth, non-vanishing functions satisfying (3.12) U ′V 2 b(U)2 + c -2Vu = - 1 f 2U ′ . Similar to Theorem 3.3, we observe that the shape operators of M has the form (3.13) Ae3 = -f′ f f′U′+f((b(U)2+c)V U′2+U′′) fU′ 0 -f′ f ! , Ae4 = b(U) f f2(a+V b′)U′2+b(U) f 0 b(U) f ! , 10 F. KAYA AND N. CENK TURGAY which yields that M is a class A surface with light-like ∂ ∂z T, where we have e3 = 1 U ′f 2 B(U), U ′f 2 , e4 = 1 f (V b(U)B(U) + C(U), 0) E = -U ′f 2Vv. (3.14) 3.2. Surfaces with Light-Like T. In this subsection, we consider time-like surfaces under the condition that the vector field T = ∂ ∂z T is light-like. Let M be an oriented time-like surface in the space-time L3 1(c) f ×I such that the tangent vector field T, defined by (2.3), is light-like at every point of M. In this case, using equation (3.1), one can construct a pseudo-orthonormal frame field {T, U} for the tangent bundle of M satisfying ⟨T, T⟩= ⟨U, U⟩= 0, ⟨T, U⟩= -1 as well as an orthonormal frame field {e3, e4} for the normal bundle of M. Note that (3.1) implies T = ̄T, U = ( ̄U, -1), e3 = ( ̄e3, 1), e4 = ̄e4. (3.15) We are going to use the following lemma throughout this article: Lemma 3.8. Let M be a time-like surface in L3 1(c) f × I such that the vector field ∂ ∂z T is light-like on M. Then, the vector field U defined by (3.1) satisfies (3.16) f ′|M = -U (f|M) . Proof. Let p = ( ̃p, z(p)) ∈M and consider an integral curve α = ( ̄α, α4) of e1 starting from p. Then, we have U(f|M)p = d du u=0 (f ◦α)(u) = d du u=0 f(α4(u)) which yields (3.17) U(f|M)p = α′ 4(0)f ′(α4(0)). Since α(0) = p and α′(0) = Up, (3.15) implies α′ 4(0) = -1. Hence, (3.17) turns into U(f|M)p = f ′(z(p)). which yields (3.16). □ On the other hand, the Levi-Civita connection ∇of M satisfies e∇TT = ω1T, e∇TU = -ω1U, e∇UT = ω2T, e∇UU = -ω2U, (3.18) while the normal connection ∇⊥of M induces e∇⊥ T e3 = ω3e4, e∇⊥ T e4 = -ω3e3, e∇⊥ Ue3 = ω4e4, e∇⊥ Ue4 = -ω4e3, (3.19) TIME-LIKE SURFACES 11 for some smooth functions ωi. Moreover, the second fundamental form h of M takes the form h(T, T) = h3 11e3 + h4 11e4 h(T, U) = h3 12e3 + h4 12e4 h(U, U) = h3 22e3 + h4 22e4, (3.20) where ha jk are some smooth functions, with a = 3, 4 and j, k = 1, 2. Now, we are ready to prove the following lemma: Lemma 3.9. Let M be a time-like surface in L3 1(c) f × I such that the vector field ∂ ∂z T is light-like on M and consider the positively oriented, global frame frame field {T, U; e3, e4} defined by (3.1). Then, the following conditions hold: (1) The Levi-Civita connection ∇of M satisfies ∇TT = ∇TU = 0, ∇UT = f ′ f -h3 22 T, ∇UU = - f ′ f -h3 22 U. (3.21) (2) The matrix representation of the shape operators of M corresponding to the pseudoorthonormal frame {T, U} has the form (3.22) Ae3 = -f′ f -h3 22 0 -f′ f ! , Ae4 = -h4 12 -h4 22 -h4 11 -h4 12 (3) The second fundamental form h of M satisfies h(T, T) = h4 11e4 h(T, U) = -f ′ f e3 + h4 12e4, h(U, U) = h3 22e3 + h4 22e4. (3.23) (4) The normal connection ∇⊥of M induces the following relations ∇⊥ T e3 = -h4 11e4, ∇⊥ Ue3 = -h4 12e4. (3.24) Proof. We are going to use the formulæ given by (3.15) - (3.20). Note that by combining (3.19) with (2.4), we have Ae3T = -h3 12T -h3 11U, Ae3U = -h3 22T -h3 12U, Ae4T = -h4 12T -h4 11U, Ae4U = -h4 22T -h4 12U. (3.25) On the other hand, by getting the covariant derivative of (3.1) along a vector field X tangent to M, we obtain (3.26) e∇X ∂ ∂z = ∇XT + h(X, T) -Ae3X + ∇⊥ Xe3. Moreover, (2.2) implies e∇X ∂ ∂z = f′ f ̄X which is equivalent to (3.27) e∇X ∂ ∂z = f ′ f X -⟨X, T⟩∂ ∂z because of (2.1). By considering (3.1) and (3.27), we observe that the tangential and normal parts of (3.26) imply (3.28) f ′ f (X -⟨X, T⟩T) = ∇XT -Ae3X and (3.29) -f ′ f ⟨X, T⟩e3 = h(X, T) + ∇⊥ Xe3, 12 F. KAYA AND N. CENK TURGAY respectively. By combining (3.15), (3.18) and (3.25) with (3.28) for X = T and X = U, we get f ′ f T = h3 11U + (ω1 + h3 12)T, f ′ f (T + U) = (ω2 + h3 22)T + h3 12U (3.30) and, because of (3.19) and (3.20), (3.29) for X = T and X = U imply 0 = h3 11e3 + (ω3 + h4 11)e4, f ′ f (e3) = h3 12e3 + (h4 12 + ω4)e4. (3.31) By a direct computation using (3.30), we obtain h3 11 = 0, h3 12 = f ′ f , ω1 + h3 12 = f ′ f , ω2 + h3 22 = f ′ f , (3.32) and the equations appearing in (3.31) give (3.33) ω3 = -h4 11, ω4 = -h4 12. By using (3.32), we observe that (3.18), (3.25) and (3.20) turn into (3.21), (3.22) and (3.23), respectively, and (3.33), together with (3.19), yields (3.24). □ Next, by using Theorem 3.9, we construct a local coordinate system compatible with the global frame field {T, U; e3, e4}. Lemma 3.10. Let M be a time-like surface in L3 1(c) f × I such that the vector field ∂ ∂z T is light-like on M and p ∈M. Then, there exists a local coordinate system (Np, (u, v)) such that Np ∋p and the following conditions hold: (a) The vector field T and U defined by (3.1) satisfy T|Np = 1 E ∂v, (3.34) U|Np = -∂u (3.35) for a non-vanishing function E defined on Np, (b) The fuction h3 22 satisfies (3.36) h3 22 = -Eu E + f ′ f . Proof. Because of (3.21), we have [ET, -U] = E f ′ f -h3 22 + U(E) T whenever E ∈C∞(M). So, if E is a non-vanishing function satisfying (3.37) E f ′ f -h3 22 + U(E) = 0, then we have [ET, -U] = 0. Therefore there exists a local coordinate system (Np, (u, v)) near to p such that ET = ∂v and -U = ∂u from which we obtain (3.34) and (3.35). Consequently, (3.35) and (3.37) implies (3.36). □ Next, we have the following local classification theorem for the surfaces satisfying the condition that ∂ ∂z T is light-like. TIME-LIKE SURFACES 13 Theorem 3.11. Let M be a time-like surface in L3 1(c) f × I. Then, ∂ ∂z T is light-like on M if and only if every point p ∈M has a neighborhood which can be parametrized as given in Theorem 3.1. Proof. In order to prove the necessary condition, consider a local coordinate system (Np, (u, v)) given in Theorem 3.10 near to an arbitrary p ∈M. Let φ(u, v) = ( ̃φ(u, v), Z(u, v)) be the parametrization of Np, where we define ̃φ := Π ◦φ, Z := φ4 and Π : L3 1(c) × I →L3 1(c) is the canonical projection. Then, by combining (3.34) and (3.35) with (3.15), we get 1 E ̃φv, Zv = T = ̄T, - ̃φu, Zu = U = ( ̄U, -1) on Np from which we have Zv = 1, Zu = 0, (3.38) ̃φv = ̄T, - ̃φu = ̄U. (3.39) Because of (3.38), by a translation on the coordinate u, we have Z(u, v) = u. Therefore, Np can be parametrized as given in (3.3) for an L3 1(c)-valued smooth function ̃φ. Furthermore, (3.15) and (3.39) imply gc ̃φu, ̃φu = - 1 f(u)2, gc ̃φu, ̃φv = E f(u)2, gc ̃φv, ̃φv = 0. Consequently, ̃φ is a parametrization of a Lorentzian surface ˆ M in L3 1(c) with the induced metric given by (3.2). Thus, the proof of the necessary condition is completed. The proof of the sufficient condition is obtained in Theorem 3.1. □ 4. Class A Surfaces In this subsection, we are going to consider class A surfaces in L3 1(c) f ×I such that ∂ ∂z T is light-like. Let ˆ M be a Lorentzian surface in L3 1(c) with the local parametrization ̃φ(u, v) such that (3.2) and (3.3). Let ̃N and ˆS denote the unit normal vector field and shape operator of ˆ M, respectively. We are going to use the following lemma obtained by a direct computation. Lemma 4.1. The Levi-Civita connection ∇L3 1(c) satisfies ∇ L3 1(c) ∂u ∂u = Eu E -2f ′ f ̃φu + Eu E2 -f ′ fE ̃φv + h1 ̃N ∇ L3 1(c) ∂u ∂v = h2 ̃N ∇ L3 1(c) ∂v ∂v = Ev E ̃φv + h3 ̃N (4.1) for some smooth functions hi. 14 F. KAYA AND N. CENK TURGAY Now, consider the surface parametrized by (3.3). Then, by Theorem 3.11, ∂ ∂z T is lightlike on M. On the other hand, the vector fields T = 1 E ∂v = 1 E ( ̃φv, 0), U = -∂u = -( ̃φu, 1), e3 = 1 E ( ̃φv, 1), e4 = 1 f ( ̃N, 0) form the frame field {T, U, e3, e4} defined by (3.1). By a direct computation, we obtain (4.2) Ae3 = -f′ f Eu E -f′ f 0 -f′ f ! , Ae4 = fh2 E -fh1 -fh3 E2 fh2 E . Next, we get the following corollary by considering the shape operators of M given by (4.2). Corollary 4.2. Let M be a surface in L3 1(c) f × I given in Theorem 3.1 with light-like ∂ ∂z T. Then, the followings are equivalent to each other: (i) M is a class A surface, (ii) h3 = 0, (iii) ∂v is a principal direction of ˆ M. Proof. (i) ⇔(ii) Because of the first equation of (4.2), we have Ae3T = -f′ f , that is T is a principal direction of Ae3. Therefore, (4.1) and the second equation of (4.2) imply that M is a A surface if and only if h3 = 0. (ii) ⇔(iii) By the definition of h3, we have h3 = 0 is satisfied if and only if gc ∇ L3 1(c) ∂v ∂v, ̃N = 0. which is equivalent to ˆS(∂v) = Eh2 f 2 ∂v. □ Now, we are ready to prove the main result of this paper. Theorem 4.3. Let M be a surface in the space-time L3 1(c) f ×I such that ∂ ∂z T is light-like. Then, M is a class A surface if and only if for all p ∈M there exists a neighborhood Np given by one of following five cases: (i) Np is congruent to the surface given in Theorem 3.7, (ii) c = 1 and Np is congruent to the surface given in Theorem 3.3, (iii) c = -1 and Np is congruent to the surface given in Theorem 3.4, (iv) c = -1 and Np is congruent to the surface given in Theorem 3.5, (v) c = 0 and Np is congruent to the surface given in Theorem 3.6. Proof. Let p ∈M. Then, since M has light-like ∂ ∂z T, Theorem 3.11 implies the existence of a local coordinate system (Np, (u, v)) near to p such that Np is parametrized by (3.3) for a Lorentzian surface ˆ M with the metric tensor (3.2). Now, in order to prove the necessary condition, assume that Np is class A. Then Theorem 4.2 implies that the shape operator ˆS of ˆ M has a light-like eigenvector ∂v. Therefore, ˆS has the canonical forms given in case I or III of (2.11). Case 1. ˆS is diagonalizable. In this case, there exists an orthonormal frame field of the tangent bundle of ˆ M such that ˆS has the form given in case I of (2.11) for some λ1, λ2. TIME-LIKE SURFACES 15 Since ∂v is a light-like eigenvector, we have λ1 = λ2, i.e, ˆ M is (totally) umbilical. By Theorem 2.3, we have four subcases. Case 1.(a). c = 1 and ˆ M ⊂S2 1( 1 r2) ⊂S3 1 for a constant 0 1. In this case, similar to case 1.(b), we start with the parametrization of Np given by φ(u, v) = r cos s2(u, v) s1(u, v) -sin s2(u, v) , r sin s2(u, v) s1(u, v) + cos s2(u, v) , r s1(u, v), √ r2 -1, u for some smooth functions s1, s2 and obtain (4.4) together with s1 2 -r2f 2 s1 2 + 1 ∂s2 ∂u 2 -2r2f 2∂s1 ∂u ∂s2 ∂u = 0. Then, we get the surface parametrized by (3.9). Thus, we have the case (iii) of the theorem. Case 1.(c). c = -1 and ˆ M ⊂H3 1 is the flat surface parametrized by (2.14). Note that the parameters U, V in (2.14) satisfy (2.8). Therefore, because of Theorem 2.2, the coordinates u, v satisfy (2.10). By combining (2.14), (2.10) with (3.3), we obtain the surface constructed in Theorem 3.5. Therefore, we have the case (iv) of the theorem. Case 1.(d). c = 0 and ˆ M ⊂S2 1( 1 r2) ⊂E3 1. In this case, by a similar way to case 1.(b), we get the case (v) of the theorem. Case 2. ˆS is non-diagonalizable. In this case, there exists a pseudo-orthonormal frame field of the tangent bundle of ˆ M such that ˆS has the form given in case III of (2.11). In 16 F. KAYA AND N. CENK TURGAY this case, ˆ M is a null scroll described in Theorem 2.4 for some functions a and b and it can be parametrized as given in (2.18). By considering (2.18) and (3.3), we obtain (3.11) for some smooth functions U(u, v) and V (u, v). Since ∂u and ∂v are light-like vectors, (4.3) implies ∂U ∂v V 2 b(U)2 + c ∂U ∂v -2∂V ∂v = 0, (4.9) f 2∂U ∂u V 2 b(U)2 + c ∂U ∂u -2∂V ∂u + 1 = 0. (4.10) Because ∂V is the only eigenvector of the shape operator of ˆ M (See (2.19)), the vector field ∂v must be proportional to ∂V . Therefore, (4.9) implies U = U(u). (4.11) Consequently, (4.10) turns into (3.12). By combining (3.12) and (4.11) with (3.11), we get the surface constructed in Theorem 3.7. Therefore, we have case (i) of the theorem. Hence the proof of necessary condition is completed. The proof of the sufficient condition is obtained in the previous subsection. □ 5. Applications of Class A Surfaces In this section, as applications of Theorem 4.3 we obtain local clasification theorems on some important classes of surfaces in the space-time L3 1(c) f × I with the property of of having light-like ∂ ∂z T. Let M be the surface in L3 1(c) f × I given by Theorem 3.1. First, we obtain the following corollaries directly follows from (4.2). Corollary 5.1. M is a pseudo-umbilical surface of L3 1(c) f × I if and only if the equations (5.1) h2h3 = 0, h1h2f 4 -Euf ′f + Ef ′2 = 0 are satisfied. Proof. By a direct computation using (4.2), we obtain AH = f2h2 2 E2 + f′2 f2 -h1h2f4-Euf′f+Ef′2 Ef2 -f2h2h3 E3 f2h2 2 E2 + f′2 f2 ! which yields that M is pseudo-umbilical if and only if (5.1) is satisfied. □ Corollary 5.2. M is a totally umbilical surface of L3 1(c) f × I if and only the functions E, h1 and h3 satisfy (5.2) f ′ f -Eu E = h1 = h3 = 0. Proof. The proof directly follows from (4.2). □ Corollary 5.3. M has flat normal bundle in L3 1(c) f × I if and only the functions E and h3 satisfy (5.3) h3 f ′ f -Eu E = 0 Proof. The proof directly follows from (4.2) and the Ricci equation (2.7). □ TIME-LIKE SURFACES 17 5.1. Surfaces with Flat Normal Bundle. In this subsection, as a result of Theorem 5.3, we obtain the following corollary. Proposition 5.4. Let M be a Lorentzian surface in the static space-time L3 1(c) f × I with light-like ∂ ∂z T. Then M has flat normal bundle if and only if it is locally congruent to one of the following class of surfaces: (i) A class A surface given in Theorem 4.3, (ii) A surface which can be parametrized by (3.3) given in Theorem 3.1, where ̃φ(u, v) is a local parametrization of a flat Lorentzian surface ˆ M in L3 1(c) with the induced metric given by (2.9). Proof. Let p ∈M. Since M has light-like ∂ ∂z T, Theorem 3.11 implies that p has a neighborhood Np parametrized by (3.3) given in Theorem 3.1. Now, in order to prove the necessary condition, we assume that M has flat normal bundle. Then, because of Theorem 5.3, we have two cases: If h3 = 0 on Np, then Theorem 4.2 implies that Np is class A and we have the case (i). On the other hand, if h3(q) ̸= 0 at q ∈Np, then, by shrinking Np, if necessary, we observe that (5.3) implies f ′ f -Eu E = 0 on Np. So, there exists a non-vanishing function c1 such that E(u, v) = c1(v)f(u). By re-defining v properly we can assume c1 = 1. Thus, we have (2.9). Consequently, Np is flat and we have case (ii) of the theorem. Converse follows from a direct computation. □ Note that Theorem 2.2 ensures the existence of a local coordinate system for which (2.9) is satisfied. 5.2. Pseudo-Umbilical Surfaces. In this subsection, we consider the pseudo-umbilical surfaces with light-like ∂ ∂z T in L3 1(c) f × I. We are going to use the following lemma. Lemma 5.5. Let M be a class A surface in L3 1(c) f × I with light-like ∂ ∂z T. Then, M is pseudo-umbilical if and only if it is congruent to the surface given in Theorem 3.7 for some a, b, U, V satisfying (3.12) and U ′b′(U)b(U) -f ′ (b(U)2 + c) f = 0, (5.4a) a(U)b(U)U ′2 + b(U)2 f 2 -f ′ (f ′U ′ + fU ′′) f 2U ′ = 0. (5.4b) Proof. If M is pseudo-umbilical. Then it is locally congruent to one of five surfaces given in Theorem 4.3. First, we are going to prove that the surface given in case (ii)-(v) can not be pseudo-umbilical. Towards contradiction, assume that M is the surface given in Theorem 3.3 and it is pseudo-umbilical. Then, (5.1) is satisfied because of Theorem 5.1. By a direct computation using (3.7), (3.8) and (5.1), we obtain f(u)a′(u)f ′(u) U(u, v) + f(u)a′′(u)f ′(u) a′(u) + f ′(u)2 -1 r2 + 1 = 0 which can be satisfied only if a′(u)f ′(u)Uv(u, v) = 0. However, this is not possible because of (3.6). By a similar method, we observe that none of the surfaces constructed in Theorem 3.4, Theorem 3.5, and Theorem 3.6 is pseudo-umbilical because of (3.7) and (3.10). 18 F. KAYA AND N. CENK TURGAY Consequently, M is locally congruent to a surface given in Theorem 3.7 for some smooth functions a, b, U, V, E satisfying (3.12) and (3.14). Moreover, from (3.13) and (5.1) we see that M is pseudo-umbilical if and only if the equation (5.5) V U ′ b(U)U ′b′(U) -f ′ (b(U)2 + c) f +a(U)b(U) (U ′)2+b(U)2 f 2 -f ′ (f ′U ′ + fU ′′) f 2U ′ = 0 is satisfied. Note that (3.14) implies that Vv ̸= 0. Therefore, (5.5) implies (5.4). □ Next, we obtain the local classification of pseudo-umbilical surfaces in E3 1(c) f × I. Proposition 5.6. Let M be a surface in the static space-time E3 1(c) f × I with light-like ∂ ∂z T. Then M is pseudo-umbilical if and only if it is locally congruent to one of the following surfaces: (i) M is a class A surface parametrized by φ(u, v) = (c1F(u) + c2)3 + 6(c1F(u) + c2) + 3F(u)-6v c1 6 √ 2 , 1 2(c1F(u) + c2)2, (c1F(u) + c2)3 -6(c1F(u) + c2) + 3F(u)-6v c1 6 √ 2 , u ! , (5.6) (ii) M is a class A surface given in Theorem 3.7 for some a, b, U, V satisfying b(U) = c3f, Vu = c2 3 2 f 2UV 2 + 1 2f 2U (5.7) a(U) = -c2 3f 2U ′ + ff ′U ′′ + f ′2U ′ c3f 3U ′3 (5.8) for a non-zero constant c3. (iii) A flat surface generated by a suitable cone in E3 1 which can be parametrized by φ(u, v) = (F(u) -v, b1(v), b2(v), u) (5.9) for some smooth functions b1, b2 such that b′ 1 2 + b′ 2 2 = 1. Proof. Let p ∈M. Since the vector field ∂ ∂z T is light-like on M, Theorem 3.11 ensures that a neighborhood Np of p which can be parametrized by (3.3) presented in Theorem 3.1 for a Lorentzian surface ˆ M. In order to prove the necessary condition, we assume that Np is pseudo-umbilical. Then, Theorem 5.1 implies that we have two cases. Case I. Np is class A. In this case, Theorem 5.5 implies that is congruent to the surface given in Theorem 3.7, for some a, b, U, V satisfying (3.12) and (5.4) and ˆ M is a null scroll. Since c = 0, (5.4a) implies the first equation in (5.7) for a constant c3. We have two sub-cases: Case I.a. c3 = 0. In this case, we have b = 0. Thus, ˆ M is flat which yields that (U, V ) satisfies for (2.10) for some constants c1 ̸= 0, c2. When a ̸= 0, by Theorem 2.5, ˆ M is congruent to the B-scroll parametrized by (2.20). By combining (2.20) with (2.10) we get (5.6). So, we have case (i) of the theorem. On the other hand, if a = 0, then (3.13) implies that Ae4 = 0 and (4) of Theorem 3.9 implies ∇⊥e4 = 0. Therefore, Np lies on the totally geodesic hypersurface M × I of E3 1(c) f ×I, where M is the plane in E3 1 with the normal ̄e4. However, this is a contradiction. Case I.b. c3 ̸= 0. In this case, (3.12) and (5.4b) gives the second equation in (5.7) and (5.8), respectively. So, we have case (ii) of the theorem. TIME-LIKE SURFACES 19 Case II. Np is not class A, i.e., h3 ̸= 0. In this case, (5.1) implies h2 = f ′ f -Eu E = 0. (5.10) Therefore, ˆ M is flat, we have E = f and (2.9). Moreover, from Theorem 4.1 we have ̃φuu = -f ′ f ̃φu + h1 ̃N, (5.11) ̃φuv = 0 (5.12) and (2.9) implies (5.13) Nu = -h1f ̃φv, Nv = -h3(f ̃φu + ̃φv). Therefore, ˆ M is flat and because of (5.13), the Gauss equation (2.6) implies h1h3 = 0 which implies h1 = 0. Moreover, the Codazzi equation (2.5) for X = Z = ∂u, Y = ∂v gives h1 = h1(u). Thus, by solving (5.11) and (5.12), we obtain ̃φ(u, v) = F(u)v1 + φ2(v) (5.14) for a constant vector v1 and a E3 1-valued smooth function φ2. (2.9) and (5.14) imply g0(v1, v1) = -1, g0(v1, φ′ 2) = 1, g0(φ′ 2, φ′ 2) = 0. Therefore up to a suitable isometry of E3 1, one can choose v1 = (1, 0, 0), φ2(v) = (-v, b1(v), b2(v)), b′ 1 2 + b′ 2 2 = 1. (5.15) By combining (5.14) and (5.15) we obtain (5.9) which gives case (iii) of the proposition. Hence, the proof of necessary condition completed. Conversely, a direct computation yields that if M is a surface given by case (i) or case (iii) of the theorem, then the shape operator AH of M along H takes the form AH = f ′ f 2 I and for the surface M we obtain AH = c2 3 + f ′(u)2 f(u)2 I. Hence, all of the surfaces given in the proposition are pseudo-umbilical. □ In the following proposition, we consider pseudo-umbilical surfaces in S3 1(c) f × I. Proposition 5.7. Let M be a surface in the static space-time S3 1(c) f × I with light-like ∂ ∂z T. Then M is pseudo-umbilical if and only if it is locally congruent to one of the following surfaces: (i) M is a class A surface given in Theorem 3.7 for some a, b, U, V satisfying b(U) = p c3f 2 -1, Vu = c3f 2 2 V 2U ′ + 1 2f 2U ′ (5.16) a(U) = U ′ (-c3f 2 + f ′2 + 1) + ff ′U ′′ f 2p c3f 2 -1 (U ′)3 (5.17) for a non-zero constant c3. 20 F. KAYA AND N. CENK TURGAY (ii) A flat surface generated by the flat isoparametric surface S1 1 1 cos θ × S1 1 sin θ ⊂S3 1 which can be parametrized by φ(u, v) = -cos θ sinh -c2 sec θ √ 2 + F(u)(-cos θ) p sec(2θ) + v sec θ p sec(2θ) ! , cos θ cosh -c2 sec θ √ 2 + F(u)(-cos θ) p sec(2θ) + v sec θ p sec(2θ) ! , sin θ cos -c2 csc θ √ 2 + F(u) sin θ p sec(2θ) + v csc θ p sec(2θ) ! , -sin θ sin -c2 csc θ √ 2 + F(u) sin θ p sec(2θ) + v csc θ p sec(2θ) ! , u ! (5.18) for some θ ∈(0, π/4) and c2 ∈R. Proof. Let p ∈M. Similar to the proof of Theorem 5.6, we consider the local parametrization of M on a neighborhood Np of p given in Theorem 3.1 for a Lorentzian surface ˆ M and assume that Np is pseudo-umbilical. We study two cases obtained from Theorem 5.1: Case I. Np is class A. Because of Theorem 5.5, ˆ M is a null scroll parametrized by (2.18) and Np is the surface constructed in Theorem 3.7, for some a, b, U, V satisfying (3.12). By solving (5.4a) for c = 1, we see that b satisfies the first equation in (5.16) for a constant c3 and ε = ±1. By re-defining V properly on (2.18), one can assume ε = 1. So, we have the first equation in (5.16). By a further computation using (3.12) and (5.4b), we also get the second equation in (5.16) and (5.17), respectively. So, we have case (i) of the theorem. Case II. Np is not class A, i.e., h3 ̸= 0. In this case, similar to the proof of Theorem 5.6, we have (5.10) from which we observe that ˆ M is flat and the equations (2.9), (5.13) are satisfied for some functions h1, h3. By taking into account (4.1) in Theorem 4.1 and (5.13), we use the Codazzi and Gauss equations (2.6) and (2.5) to obtain h1 = 1 k1f2, h3 = k1. Thus, because of (5.13), the shape operator ˆS of ˆ M takes the form (5.19) ˆS = 0 k1f - 1 2fk1 k1 with respect to {∂u, ∂v}. for a constant k1 ̸= 0. Therefore, ˆ M is a isoparametric surface with distinct real principal curvatures which yields that ˆ M is an open part of S1 1 1 cos θ × S1 1 sin θ , [8]. So, up to a suitable isometry of S3 1, we can assume that ˆ M can be parametrized by ˆψ(U, V ) = cos θ sinh U + V √ 2 cos θ, cos θ cosh U + V √ 2 cos θ, sin θ cos U -V √ 2 sin θ, sin θ sin U -V √ 2 sin θ , (5.20) where θ ∈(0, π/2) is defined by k1 = cot θ -tan θ. Note that the shape operator ˆS of ˆ M is (5.21) ˆS = 1 2(cot(θ) -tan(θ)) -csc(2θ) -csc(2θ) 1 2(cot(θ) -tan(θ)) with respect to {∂U, ∂V }. TIME-LIKE SURFACES 21 Moreover, by Theorem 2.2, the coordinate systems (u, v) and (U, V ) are related by (2.10) for some c1 ̸= 0, c2. By a direct computation considering (2.10), (5.19) and (5.21), we obtain θ ∈(0, π/4) and (5.22) c1 = r sec(2θ) 2 . By combining (5.20) with (2.10) and (5.22), we obtain (5.18) which give the case (ii) of the proposition. Hence, the proof of the necessary condition is completed. The proof of the sufficient condition follows from a direct computation. □ 5.3. Totally-Umbilical Surfaces. In this subsection, we consider the totally-umbilical surfaces. Let M be a totally-umbilical surface in L3 1(c) f × I with light-like ∂ ∂z T. Then, M is also pseudo-umbilical. Furthermore, Theorem 4.2 and Theorem 5.2 implies that it is class A. Therefore, M is locally congruent to a surface given in Theorem 3.7 for some smooth functions a, b, U, V, E satisfying (3.12) and (3.14). Then, from (3.13) and (5.2) imply that have V U ′ b(U)2 + c + f ′ f + U ′′ U ′ =0, f 2U ′2 (a(U) + V b′(U)) + b(U) =0. (5.23) As described in proof of Theorem 5.5, we have Vv ̸= 0 and U ′ ̸= 0. Therefore, (5.23) gives b(U)2 + c = 0, , (5.24a) f ′ f + U ′′ U ′ = 0, (5.24b) f 2U ′2a(U) + b(U) = 0. (5.24c) and (3.12), together with (5.24a), implies (5.24d) 2f 2U ′Vu = 1. We are ready to prove the following proposition: Proposition 5.8. Let M be a surface in the static space-time L3 1(c) f × I with light-like ∂ ∂z T. Then, we have the followings: (i) If c = 0 or c = 1, then M can not be totally umbilical. (ii) M is a totally umbilical surface of H3 1(c) f × I if and only if it is congruent to the flat surface given by φ(u, v) = (2kv + k2) cos F(u) + k2 k -2k sin F(u) + k2 k 2k , -k (2k2 + 1) cos F(u) + k2 k -(2kv + k2) sin F(u) + k2 k 2 √ 2k2 , k (2k2 -1) cos F(u) + k2 k -(2kv + k2) sin F(u) + k2 k 2 √ 2k2 , (2kv + k2) cos F(u) + k2 k 2k , u ! (5.25) 22 F. KAYA AND N. CENK TURGAY Proof. (i) follows from (5.24a) and (5.24c). First, we are going to prove the necessary condition of (ii). Assume that c = -1 and M is a totally umblical surface of H3 1(c) f × I, i.e., (5.24) is satisfied. In this case, as described above, M is locally congruent to a surface given in Theorem 3.7 for some smooth functions a, b, U, V, E satisfying (3.12) and (3.14). Note that the equation (5.24a) implies b = 1 which yields that ˆ M is flat. Next, by combining (2.10) given in Theorem 2.2 with (5.24b)-(5.24d) we get a = -1 k 2 , U = kF(u) + k2, V = F(u) -2v 2k (5.26) for some constants k and k2. Furthermore, since b = 1, the first equation in (5.26) and Theorem 2.6 imply that ˆ M is congruent to (2.21). By combining (2.21) and (5.26), we get (5.25). Hence the proof is completed. Conversely, let M be a surface given by (5.25). By a direct computation, we obtain Ae3 = -f ′ f I, Ae4 = 1 f I which yields that M is totally umbilical. □ Acknowledgements This work forms a part of the first-named author's PhD thesis and was carried out within the scope of a project supported by T ̈UB ̇ITAK, the Scientific and Technological Research Council of T ̈urkiye (Project number 121F352). Data Availability. Data sharing not applicable to this article because no datasets were generated or analysed during the current study. Code availability. N/A. Conflicts of interest. The authors have not disclosed any competing interests. References [1] Alias, L.J., Melendez, J., Navarro, M. and Solis, A.D., Codimension two spacelike submanifolds into null hypersurfaces of generalized Robertson-Walker spacetimes, Preprint: (2025). [2] Bekta ̧s Demirci, B., Turgay, N.C. and Ye ̆gin S ̧en, R. On space-like class A surfaces in Robertson-Walker spacetimes, Math. Nachr. 298 (2025), 718-729. [3] Dekimpe, K. and Van der Veken, J., Geometric Study of Marginally Trapped Surfaces in Space Forms and Robertson-Walker Space times-An Overview, Axioms 9 (2020), 60. [4] Dobarro, F., ̈Unal, B., Special standard static space-times, J. Math. Phys. 45 (2004), 2675-2684. [5] Dobarro, F., ̈Unal, B., Characterizing killing vector fields of standard static space-times, J. Geom. Phys. 62 (2012), 1070-1087. [6] Ji, F. and Hou, Z.H., On Lorentzian surfaces with H2 = K in Minkowski 3-space, J. Math. Anal. 334 (2007), 54-58. [7] Kim, D.-S., Kim, Y.-H., B-scrolls with non-diagonalizable shape operators, Rocky Mt. J. Math. 33 (2003), 175-190. [8] Li, C. and Wang, J. The classification of isoparametric surfaces in S3 1, Kobe J. Math. 22 (2005) 1-12. [9] Manfio, F., dos Santos, J.B.M., dos Santos, J.P., Van der Veken, J. Hypersurfaces of S3 ×R and H3 ×R with constant principal curvatures, J. Geom. Phys. 213 (2025), 105495 [10] Manfio, F., Turgay, N.C., Upadhyay, A. Biconservative Submanifolds in Sn × R and Hn × R J. Geom. Anal. 29 (2019), 283-298 [11] Garc ́ıa-Mart ́ınez, S. C., Lucas, P. and Ram ́ırez-Ospina, H. F., L1-2-Type Surfaces in 3-Dimensional De Sitter and Anti De Sitter Spaces, Bull. Malays. Math. Sci. Soc. 46 (2023), 46:139. [12] Mendon ̧ca, B. and Tojeiro, R., Umbilical Submanifolds of Sn ×R, Canad. J. Math. 66 (2014), 400-428. [13] B. O'Neill, Semi-Riemannian Geometry with Applications to Relativity, Academic Press, New York, 1982. [14] Robertson, H. P., Kinematics and world-structure, Astrophys. J. 82 (1935), 284-301. TIME-LIKE SURFACES 23 [15] Tojeiro, R., On a class of hypersurfaces in Sn × R and Hn × R, Bull. Braz.Math. Soc. 41 (2010), 199-209. [16] Turgay, N.C., Ye ̆gin S ̧en, R. Biconservative Surfaces in Robertson-Walker Spaces, Results Math. 80 (2025), 77. [17] Walker, A. G., On Milne's theory of world-structure, P. Lond. Math. Soc. 42 (1937), 90-127. - versity, Istanbul, T ̈urkiye Email address: - versity, Istanbul, T ̈urkiye Email address:
|
2509.16218
|
An Automated Framework for Assessing Electric Vehicle
Charging Impacts on a Campus Distribution Grid
Mohammadreza Iranpour, Sammy Hamed, Mohammad Rasoul Narimani, Silvia Carpitella, Kourosh
Sedghisigarchi, Xudong Jia
Abstract—This paper introduces a unified and automated
framework designed to dynamically assess the impact of elec-
tric vehicle (EV) charging on distribution feeders and trans-
formers at California State University, Northridge (CSUN).
As EV adoption accelerates, the resulting increase in charging
demand imposes additional stress on local power distribution
systems. Moreover, the evolving nature of EV load profiles
throughout the day necessitates detailed temporal analysis
to identify peak loading conditions, anticipate worst-case
scenarios, and plan timely infrastructure upgrades. Our main
contribution is the development of a flexible testbed that
integrates Julia, a high-performance programming language
for technical computing, with PowerWorld Simulator via the
EasySimauto.jl package. This integration enables seamless
modeling, simulation, and analysis of EV charging load
profiles and their implications for campus grid infrastructure.
The framework leverages a real-world dataset collected from
CSUNs EV charging stations, consisting of 15-minute interval
measurements over the course of one year. By coupling high-
resolution data with dynamic simulations, the proposed system
offers a valuable tool for evaluating transformer loading,
feeder utilization, and overall system stress. The results
support data-driven decision-making for EV infrastructure
deployment, load forecasting, and energy management strate-
gies. In addition, the framework allows for scenario-based
studies to explore the impact of future increases in EV
penetration or changes in charging behavior. Its modular
architecture also makes it adaptable to other campus or urban
distribution systems facing similar electrification challenges.
Index Terms—Electric Vehicle (EV), Power Distribution
Systems, Power System Operation, EV Charging Stations.
I. INTRODUCTION
The growing use of electric vehicles (EVs) is changing
both transportation systems and how electric power is con-
sumed and managed. This growth is supported by several
factors, such as the decreasing cost of batteries, increasing
awareness of environmental issues, and strong government
incentives [1]–[3]. According to the International Energy
Agency (IEA), the number of EVs on the road surpassed
26 million in 2022, which is over ten times the number in
2013 [1]. As more EVs are deployed, the demand for elec-
tricity increases significantly, placing additional pressure on
existing power grid infrastructure [?], [4]–[9]. This growing
demand requires new approaches for grid planning and load
management to ensure reliable and efficient operation [10]–
[17].
Despite the positive impact of EVs on reducing green-
house gas emissions and improving energy efficiency, they
also have the potential to create new challenges for electric
Department of Electrical and Computer Engineering, California State
University Northridge (CSUN). Rasoul.narimani@csun.edu. Support from
NSF contract #2308498 and the Climate Action - Community-driven
eLectric vEhicle chArging solutioN (CA-CLEAN) project
power systems. The extra demand caused by EV charging
stations can lead to several operational problems, including
overloaded feeders, voltage instability, and higher stress on
transformers [18], [19]. Most existing power distribution
systems were not originally built to handle these types of
fast-changing and locally concentrated loads, which makes
it important to evaluate their impact carefully [20], [21]. As
the number of EVs continues to increase, there is a growing
need to strengthen transmission and distribution networks
to support the additional demand [22], [23]. Without proper
planning and system upgrades, these stresses may lead to
equipment degradation and lower overall grid reliability
[24], [25].
Static analyses, which look at only a single snapshot
of the power system, are no longer enough to understand
the growing complexity and time-varying nature of EV
charging behavior. Traditional power flow studies often use
fixed or average demand values, which overlook the fact
that EV charging patterns can change quickly and are often
unpredictable. These simplified assumptions may lead to
inaccurate evaluations of grid performance, especially as
the number of EVs continues to grow. As EVs become
a more common mode of transportation, the shortcomings
of static methods become increasingly evident. Dynamic
analysis methods are therefore necessary to model changes
in load over time and to better assess potential opera-
tional issues, such as voltage drops, line congestion, and
transformer overloading [26], [27]. Incorporating time-
dependent simulations into planning and operations helps
utilities anticipate critical stress points and develop more
effective mitigation strategies.
The changing nature of EV load profiles is influenced by
several connected factors, such as time-of-use electricity
pricing, individual charging habits, differences in vehicle
models, the availability of charging stations, and battery
charging strategies. For instance, many users may plug
in their vehicles at the same time, such as after work or
after classes, leading to short-term spikes in demand that
go well beyond typical average levels. These unpredictable
patterns make it difficult to depend on simple forecasting
tools or fixed models. As a result, real-time or near-real-
time simulations that use detailed, high-frequency data are
essential for accurately analyzing how EVs affect the power
grid under realistic conditions [28], [29]. This level of
analysis is particularly important for system planners who
must ensure grid reliability while managing increasingly
flexible and variable energy demands.
University campuses, such as California State University
Northridge (CSUN), are well-suited for studying the real-
world effects of EV charging on power distribution systems.
arXiv:2509.16218v1 [cs.CY] 9 Sep 2025
Power Network
Electric Load
Profile with EVs
Power
Distribution
Network
Editing
Load profile
Files In
Julia
EasySimauto.jl
PowerWorld
Visualizing
Changes of Power
flows
In PowerWorld
Data Layer
Compution Layer
Visualizing
Layer
Figure 1. Schematic of the proposed framework for Dynamically Inves-
tigating the Impact of EV Chargers on Power Distribution Systems using
Various tools and in three layers.
Campuses typically have a high concentration of buildings
and a mix of different types of electricity use, including
residential, academic, and research loads. They also serve
a wide range of EV users, faculty, staff, and students’ vehi-
cles. As more EV charging stations are installed, managing
when and where charging occurs becomes more complex,
both over time and across different campus locations. These
conditions make campuses ideal environments for testing
how clustered charging patterns, driven by daily schedules
and user habits, can lead to overloading of transformers and
congestion on local feeders [30], [31].
To accurately study complex and data-rich scenarios like
EV charging, it is important to use computational tools that
are both powerful and flexible. Interface-based simulation
platforms have become effective solutions for this purpose.
These tools make it possible to connect high-performance
programming languages, such as Julia, with professional
power system simulation software like PowerWorld. This
setup allows users to automate simulation tasks, adjust
inputs such as EV charging patterns or building loads, and
run real-time power flow analysis across many different sce-
narios [32], [33]. These interfaces improve the scalability,
consistency, and accuracy of power system studies, helping
researchers explore how the grid responds under a wide
range of operating conditions [34], [35]. When working
with large datasets, such as the one-year set of 15-minute
interval measurements from CSUNs EV charging stations,
these frameworks are especially important. They offer a
reliable and adaptable environment for understanding how
EV charging affects the grid over time and for planning
future infrastructure upgrades with greater confidence.
In this paper, we present an automated and unified
framework to study how EV charging affects the electric
power grid at CSUN. We created a detailed model of
CSUNs distribution network in PowerWorld, using real data
about the campus infrastructure and layout. To support the
analysis, we collected one year of electricity usage data
from meters installed at campus buildings and EV charging
stations, with measurements taken every 15 minutes. From
this dataset, we identified the worst-case daily load profiles
for analysis. Using the EasySimauto.jl package, we built
a dynamic connection between Julia and PowerWorld to
automate data input and simulation processes. We de-
veloped scripts that automatically adjusted the electrical
load at different buses and ran power flow simulations
for each load scenario. This approach allowed us to ex-
amine important system parameters such as voltage levels,
transformer loading, and feeder congestion under different
operating conditions. The main contribution of this work is
the introduction of a flexible and automated framework that
captures the time-varying nature of EV-related load profiles
and allows system parameters to be modified at any selected
time interval. This capability enables the simulation of a
wide range of operating scenarios and provides a powerful
tool for both power system operation and planning. The
framework supports detailed, time-resolved analysis of grid
impacts, making it highly valuable for infrastructure design,
reliability assessment, and long-term energy management.
The rest of this paper is organized as follows: In Section
II, we describe how CSUNs campus electric grid is modeled
and coresponding data sets are injected to the proposed
framework. In Section III we introduce the proposed high-
fidelity model for analyzing the impact of EVs on low-
voltage distribution systems. Section IV presents the sim-
ulation results, and Section III provides the conclusion.
II. METHODOLOGICAL FRAMEWORK
A. Modeling CSUNs Electric Power Network
To accurately assess how EV charging affects grid oper-
ations, it is important to have a realistic model of CSUNs
electric power distribution system. An overview of the
modeled campus distribution network in PowerWorld is
shown in Figure 2. For this purpose, we built a detailed
representation of the campus grid using data provided
by the Physical Plant Management (PPM) department.
The model includes key system components such as un-
derground cables, transformers, and distributed generation
units.
To calculate the resistance and impedance of the under-
ground cables, we used a one-line diagram of the campus
that showed the types of cables and their geographic
layout. The coordinates of each cable were used to estimate
their physical lengths. We then applied standard resistance
values (in ohms per mile), provided by PPM for each
cable type, and multiplied them by the cable lengths to
determine total resistance. This process allowed us to
assign realistic impedance values to each section of the
network, preserving both spatial and physical accuracy in
the simulation. Transformers were also modeled in detail
using nameplate data from PPM, which included their
capacities, impedances, and configurations. These details
were incorporated into our PowerWorld simulation to better
understand how transformers respond to changes in load,
particularly under EV charging conditions. In addition, the
campus includes several photovoltaic (PV) systems that
contribute to local energy generation. These include a 467
kW system at Parking Lot B2, a 225 kW system at Parking
Lot E6, and a 1.2 MW system at the Student Recreation
Center. These PV systems were modeled as distributed
generators using actual generation profiles provided by
PPM, which allowed us to capture their variability over
different times of the day and seasons.
Figure 2.
PowerWorld model of CSUNs campus distribution network
under regular (non-EV) load conditions. All feeders operate below thermal
limits, indicating no line congestion in the baseline scenario.
B. Load Data Collection and Modeling
To build a realistic and time-varying load model for the
CSUN campus, we carried out a detailed data collection
process that included measurements from both campus
buildings and EV charging stations. Meters were installed
on each buildings power cables to record real and reactive
power demand every 15 minutes over the span of a full
year. Since the campus electrical system operates close to
a balanced condition, measuring just one phase per building
was sufficient to represent the overall load. This approach
simplified the data collection while still providing reliable
results. We also analyzed the data to capture how electricity
use changes over time, including daily cycles and seasonal
trends. These time-based patterns are important for accu-
rately reflecting real operating conditions. By incorporating
this time-series data into our simulations, we were able to
model how campus loads vary in response to factors like
academic calendars, building occupancy, and routine op-
erations. This time-resolved load modeling is essential for
evaluating the true impact of EV integration and identifying
periods of potential stress on the distribution system.
C. Incorporating EV Charging Loads
Including EV charging loads in the campus power system
model was a key part of our analysis. We used real data
collected from CSUNs operational EV charging stations.
This dataset contained one year of charging records at 15-
minute intervals for each station, providing detailed infor-
mation on how much energy was used, how often vehicles
were charged, and when charging typically occurred.
To study the effects of this additional load, we identified
the times with the highest charging activity and created
daily worst-case charging profiles. These were combined
with the building-level load data to produce a complete
load profile that represents the campus system under heavy
demand. Our model included both real power (kW) and
reactive power (kVar) components of EV charging to accu-
rately reflect their effect on the power factor. By directly
embedding these EV charging patterns into our time-based
load profiles, we developed a strong and realistic framework
for evaluating how EVs impact system performance, specif-
ically in terms of voltage fluctuations, transformer stress,
and feeder congestion, within the current campus grid setup
under various operating scenarios.
D. Power Flow Analysis via PowerWorld Simulator
To study how the campus grid responds to changing load
conditions and EV charging activity, we used PowerWorld
Simulator, an industry-standard tool for performing time-
series power flow analysis. We connected PowerWorld with
Julia using the EasySimauto.jl interface, which allowed us
to automate simulations and manage data efficiently. This
setup made it possible to update the load at each bus in
the system dynamically and run power flow calculations
across thousands of time steps, each representing a different
loading condition. Our Julia scripts processed the time-
series data from building and EV loads and sent the updated
values directly to PowerWorld using SimAuto commands.
This automated approach eliminated the need for manual
data entry or interface interaction, which greatly reduced
the overall simulation time and made the process more
consistent and repeatable. In addition, the flexible scripting
environment allowed us to easily test different load growth
scenarios and EV penetration levels. This capability makes
the framework suitable for long-term planning studies and
stress testing of grid infrastructure under future conditions.
III. HIGH-FIDELITY MODEL
In this study, we developed a detailed and structured
framework to evaluate how residential EV chargers affect
low-voltage power distribution systems. The framework
is organized into three main layers: the data layer, the
computational layer, and the visualization layer. These
layers work together to process input data, run simulations,
and present the results clearly. An overview of the complete
framework is shown in Fig. 1.
00:15
02:15
04:15
06:15
08:15
10:15
12:15
14:15
16:15
18:15
20:15
22:15
Time of Day
0
0.02
0.04
0.06
0.08
0.1
0.12
0.14
Load at Parking Lot G6 (MW)
April
August
December
February
January
July
June
March
May
November
October
September
Figure 3. Load profiles of EV charging station at Parkin Lot G6 during
one day in 12 months.
In the Computational Layer, each of these load profiles
was applied to the simulated distribution network using
the EasySimAuto.jl API. This allowed for programmatic
updates to PowerWorlds load values for each interval,
automating the process of time-series simulation and en-
suring consistent system modeling across all months. Each
scenario was simulated in PowerWorld, and the resulting
voltage magnitudes, line power flow’s changes, and line
losses levels were recorded for analysis.
A. Data Layer
The data layer is responsible for gathering and organizing
all necessary input data for the simulation process. It
consists of two main components: First, we collected de-
tailed information about the physical characteristics of the
campus distribution network, including specifications for
underground cables, transformers, and the overall system
topology. This information was used to build a realistic
and accurate network model, which was implemented in
PowerWorld Simulator. Second, the data layer incorporates
time-series measurements for both the baseline building
loads and the EV charging profiles. As described in the
previous section, these data were recorded at 15-minute
intervals over a full calendar year. For each month, we
analyzed the data to identify the day with the highest peak
load, representing the most critical or worst-case scenario.
This process resulted in twelve representative daily load
profiles, one for each month, each consisting of 96 time
steps to reflect the 15-minute resolution.
B. Computational Layer
The computational layer handles the automated execution
of simulations using PowerWorld Simulator. We used the
EasySimAuto.jl package, a Julia-based interface, to control
and interact with PowerWorld programmatically. This setup
allowed us to load different load profiles into the simulation
model automatically, without manual input.
EasySimAuto.jl works by connecting to PowerWorlds
SimAuto server, which provides access to a range of com-
mands and data through its COM-based interface. Using
this API, we updated load values for each 15-minute inter-
val in the selected daily profiles and then ran simulations to
observe the system’s behavior under various EV charging
scenarios. This layer made it possible to run flexible and
repeatable experiments by automating the setup, execution,
and data retrieval processes for each simulation case.
C. Visualization Layer
The visualization layer is responsible for analyzing and
presenting the results from the simulations based on the
different load profiles and time intervals. From each Power-
World simulation, we extracted key performance indicators
such as voltage levels, current flow, and loading on net-
work components. These results were post-processed and
visualized using custom scripts written in Julia. The goal
was to understand how different EV charging behaviors,
captured for the most critical day of each month, affect the
performance and stability of the campus power system.
For each of the twelve monthly profiles (with 96 time
intervals per day), we tracked system responses such as
voltage drops, overloaded lines, and transformer stress
0
10
20
30
40
50
60
70
80
90
100
Time Step (96 step- Per 15 min)
0.892
0.894
0.896
0.898
0.9
0.902
0.904
0.906
0.908
Voltage (pu)
24 hour Voltage Profile for Bus 144 Across Months
April
August
February
January
July
June
March
May
November
October
September
Figure 4. 24-hour per-unit voltage profiles at Bus 144 (Parking Lot G6)
for each month under the EV load scenario.
levels. These results were plotted over time to identify when
and where the grid experiences the most stress. In addition,
we created comparative heatmaps and time-series plots to
show both the spatial and temporal effects of residential EV
charging on the distribution network. This layer provides
a detailed, data-driven view of grid performance, helping
utility planners and researchers identify critical time win-
dows and system locations that may require upgrades or
mitigation strategies.
IV. RESULTS AND DISCUSSION
To evaluate the impact of residential EV charging on
low-voltage distribution systems, we ran simulations using
twelve representative daily load profiles, one per month,
based on year-long, 15-minute interval data. Each profile
reflects the peak-load day of its month, capturing worst-
case conditions in 96 time steps. These profiles represent
realistic daily and seasonal variations across the campus.
An example for Parking Lot G6 on a peak day is shown in
Fig. 3.
In the Visualization Layer, post-processed data were used
to examine voltage trends, line flows, and power losses.
Fig. 4 shows the 24-hour per-unit voltage profile at Bus 144
(Parking Lot G6), which experiences the largest daily volt-
age swing under a heavy EV charging scenario (4Œ base
EV load). Each trace represents one months 96 time steps
at 15-minute intervals. A midday voltage plateau («0.905
pu) drops sharply in the afternoon (steps 3060), coinciding
with peak charging demand. Spring months (e.g., February,
April) show the lowest voltages («0.893 pu), while summer
months (June, July) remain slightly higher («0.900-0.902
pu) due to lower connected EV during the summer. Voltage
recovers after step 50 as charging activity declines. These
results indicate that even with currently low EV penetration
at CSUN, significant voltage drops already occur during
peak hours. As EV adoption increases, these effects will
intensify. Proactive mitigation, such as on-load tap changers
or local energy storage, will be essential to maintain voltage
stability, particularly at critical nodes like Parking Lot G6.
Fig. 5 shows the 24-hour voltage profiles at three critical
CSUN buses, Bus 144 (Lot G6), Bus 75 (Lot B5), and
Bus 99 (Lot G3), under both base load and a 4Œ EV
Figure 5. Twenty-four-hour voltage trajectories at three critical CSUN grid under both the base load and a 4Œ EV-load scenario. Each subplot overlays
the base-case voltage (gray bars) with the EV-augmented voltage (red bars) over 96 fifteen-minute steps. For each bus, 144, 75 and 99, the single
worst-swing month (February, April, August or November) was automatically selected based on the maximum difference in voltage range when EV
load is multiplied by four.
load scenario. Each panel compares the voltage without EV
charging (gray bars) to the voltage with EV load (red bars),
across 96 fifteen-minute intervals. At Bus 144, the largest
voltage drop occurs during February and April, reaching up
to 0.015 pu below the base case. This indicates limited volt-
age support and moderate background load. Bus 75 shows
a smaller but still noticeable drop of about 0.010 pu. Bus
99 is least affected, with voltage deviations generally below
0.005 pu, suggesting stronger upstream support. Seasonal
variation is evident: voltage troughs are deeper and occur
earlier in the winter and spring months, while summer and
fall months (e.g., August and November) maintain higher
minimum voltages due to stronger base conditions. These
results demonstrate that concentrated EV charging can
significantly degrade voltage quality at specific nodes. Even
with current low EV adoption, the impact is measurable.
As EV penetration increases, proactive measures, such as
voltage regulation devices or infrastructure upgrades, will
be necessary to maintain power quality and grid stability.
Figure 6 compares active power flows between Buses 144
(Lot G6) and 145 over 96 time steps for four representative
months: April, July, October, and January. Under base load
conditions (blue bars), the power flow remains steady at
around 0.15 MW, reflecting a relatively constant campus
load. When EV charging is included (red bars), line loading
increases significantly during the day. Peak flows occur
between steps 35-65, reaching up to 0.48 MW in April
and October, 0.42 MW in January, and 0.27 MW in July.
Figure 6.
Comparison of 96step active power flows (LineMW) on the
line between buses 144 (parking lot G6) and 145 under the base loading
and EV charging scenarios for April, July, October, and January.
The higher peaks in April and October align with increased
EV usage in moderate weather, while the lower peak in
July reflects reduced background load. These results show
that uncoordinated EV charging can more than triple the
load on specific feeders, potentially violating thermal or
voltage limits. The step-by-step analysis highlights the
need for demand management or infrastructure upgrades
to accommodate future EV growth on the CSUN campus.
V. CONCLUSION
This paper presents an automated, high-resolution frame-
work for assessing the impact of EV charging on the CSUN
distribution network. By integrating real 15-minute EV
charging data with Julias EasySimauto.jl interface and Pow-
erWorld Simulator, we developed a fully scriptable pipeline
that performs time-series load flow simulations and extracts
voltage and power flow metrics across the network. Our
analysis highlights significant daily and seasonal voltage
drops and feeder loading increases at key campus locations,
even under current low EV penetration. These effects are
most pronounced during afternoon peak charging periods,
especially in spring and winter months. As EV adoption
grows, such stresses are expected to intensify, potentially
compromising power quality and system reliability. The
proposed framework enables detailed, data-driven evalua-
tion of feeder overload risks and voltage deviations under
various load scenarios. This supports informed decision-
making for coordinated charging strategies, infrastructure
upgrades, and grid reinforcement. Built on a reproducible
and scalable platform, this tool can assist utilities, planners,
and researchers in developing and testing EV integration
strategies under realistic conditions.
REFERENCES
[1] International Energy Agency, “Global ev outlook 2023,” 2023,
available: https://www.iea.org/reports/global-ev-outlook-2023.
[2] N. Lutsey, “The role of policy in reducing electric vehicle battery
costs,” International Council on Clean Transportation (ICCT), 2018.
[3] Bloomberg New Energy Finance, “Electric vehicle outlook 2019,”
2019, available: https://about.bnef.com/electric-vehicle-outlook/.
[4] M. R. Narimani, J.-Y. Joo, and M. L. Crow, “Dynamic economic
dispatch with demand side management of individual residential
loads,” in 2015 North American Power Symposium (NAPS).
IEEE,
2015, pp. 1–6.
[5] T. Niknam, R. Azizipanah-Abarghooee, and M. R. Narimani, “An ef-
ficient scenario-based stochastic programming framework for multi-
objective optimal micro-grid operation,” Applied Energy, vol. 99, pp.
455–470, 2012.
[6] O. Sundstrom and C. Binding, “Flexible charging optimization for
electric vehicles considering distribution grid constraints,” Electric
Power Systems Research, vol. 81, no. 1, pp. 115–124, 2012.
[7] Y. Liu, J. Wang et al., “The impact of ev charging on power grid:
A review,” Renewable and Sustainable Energy Reviews, vol. 53, pp.
256–264, 2015.
[8] M. R. Narimani, J.-Y. Joo, and M. L. Crow, “The effect of demand
response on distribution system operation,” in 2015 IEEE Power and
Energy Conference at Illinois (PECI).
IEEE, 2015, pp. 1–6.
[9] M. R. Narimani, Maigha, J.-Y. Joo, and M. Crow, “Multi-objective
dynamic economic dispatch with demand side management of resi-
dential loads and electric vehicles,” Energies, vol. 10, no. 5, p. 624,
2017.
[10] M. R. Narimani, A. Azizivahed, and E. Naderi, “An efficient
scenario-based stochastic energy management of distribution net-
works with distributed generation, pv module, and energy storage,”
arXiv preprint arXiv:1910.07109, 2019.
[11] M. R. Narimani, P. J. Nauert, J.-Y. Joo, and M. L. Crow, “Reliability
assesment of power system at the presence of demand side man-
agement,” in 2016 IEEE Power and Energy Conference at Illinois
(PECI).
IEEE, 2016, pp. 1–5.
[12] A. Azizivahed, E. Naderi, H. Narimani, M. Fathi, and M. R.
Narimani, “A new bi-objective approach to energy management in
distribution networks with energy storage systems,” IEEE Transac-
tions on Sustainable Energy, vol. 9, no. 1, pp. 56–64, 2017.
[13] M. R. Narimani, B. Asghari, and R. Sharma, “Optimal sizing and
operation of energy storage for demand charge management and
pv utilization,” in 2018 IEEE/PES Transmission and Distribution
Conference and Exposition (T&D).
IEEE, 2018, pp. 1–5.
[14] B. Asghari, M. R. Narimani, and R. Sharma, “Method for op-
eration of energy storage systems to reduce demand charges and
increase photovoltaic (pv) utilization,” Jan. 31 2019, uS Patent App.
16/006,239.
[15] M. R. Narimani, B. Asghari, and R. Sharma, “Energy storage
control methods for demand charge reduction and pv utilization
improvement,” in 2017 IEEE PES Asia-Pacific Power and Energy
Engineering Conference (APPEEC).
IEEE, 2017, pp. 1–5.
[16] M. R. Narimani, “Demand side management for homes in smart
grids,” in 2019 North American Power Symposium (NAPS).
IEEE,
2019, pp. 1–6.
[17] M. R. Narimani, Maigha, J.-Y. Joo, and M. Crow, “Multi-objective
dynamic economic dispatch with demand side management of resi-
dential loads and electric vehicles,” Energies, vol. 10, no. 5, p. 624,
2017.
[18] K. Clement-Nyns, E. Haesen, and J. Driesen, “The impact of
charging plug-in hybrid electric vehicles on a residential distribution
grid,” IEEE Transactions on Power Systems, vol. 25, no. 1, pp. 371–
380, 2010.
[19] S. Shao, M. Pipattanasomporn, and S. Rahman, “Impact of plug-in
electric vehicles on distribution network: A review,” International
Journal of Electrical Power and Energy Systems, vol. 57, pp. 273–
281, 2014.
[20] R. J. Bessa and M. A. Matos, “Optimization models to integrate
electric vehicles in the electric power system,” Electric Power
Systems Research, vol. 97, pp. 50–58, 2013.
[21] P. Richardson, D. Flynn, and A. Keane, “Impact assessment of
varying penetrations of electric vehicles on low voltage distribution
systems,” IEEE PES General Meeting, 2010.
[22] H. Cai, X. Jia, and A. S. Chiu, “A probabilistic model for ev charging
behavior based on vehicle usage data,” Transportation Research Part
D: Transport and Environment, vol. 33, pp. 114–125, 2014.
[23] S. Deb, K. Tammi, K. Kivikko, and R. Lahdelma, “Impact of electric
vehicles and solar photovoltaic penetration on distribution network
losses and voltage profile,” Applied Energy, vol. 227, pp. 297–311,
2018.
[24] O. Boyaci, M. R. Narimani, K. Davis, and E. Serpedin, “Spatio-
temporal failure propagation in cyber-physical power systems,” in
2022 3rd International Conference on Smart Grid and Renewable
Energy (SGRE).
IEEE, 2022, pp. 1–6.
[25] M. Iranpour, M. R. Narimani, and X. Jia, “Assessing ev charging
impacts on power distribution systems: A unified co-simulation
framework,” arXiv preprint arXiv:2505.21773, 2025.
[26] E. Yao and J. Zhang, “Dynamic analysis of electric vehicle charging
load based on user behavior,” IEEE Conference on Transportation
Electrification Asia-Pacific, 2014.
[27] D. Wang, J. Zhang, and Y. Lu, “A real-time assessment method
of electric vehicle charging impact based on dynamic simulation,”
Energies, vol. 13, no. 9, p. 2283, 2020.
[28] A. Mohamed and M. Koivunen, “Electric vehicle load modeling and
simulation,” IEEE PES General Meeting, 2015.
[29] K. M. Tan, V. K. Ramachandaramurthy, and J. Y. Yong, “Integration
of electric vehicles in smart grid: A review on vehicle to grid tech-
nologies and optimization techniques,” Renewable and Sustainable
Energy Reviews, vol. 53, pp. 720–732, 2016.
[30] D. Gough, J. Lefevre et al., “Designing microgrids for university
campuses,” IEEE Electrification Magazine, vol. 7, no. 3, pp. 30–39,
2019.
[31] H. Wu, Z. Liu et al., “Modeling and analysis of ev charging
load on campus microgrids,” IEEE Green Technologies Conference
(GreenTech), 2018.
[32] T. Lee and C. Chen, “Interfacing power system analysis software
with external programming tools,” IEEE Transactions on Smart
Grid, 2016.
[33] J. Fletcher and T. C. Green, “Interfacing simulation tools for dy-
namic analysis of power systems,” IEEE Transactions on Industrial
Electronics, vol. 66, no. 9, pp. 7131–7140, 2019.
[34] X. Huang, L. Zhang, and Y. Chen, “A flexible simulation frame-
work for power systems using python and powerworld,” Simulation
Modelling Practice and Theory, vol. 101, p. 102038, 2020.
[35] A. G. Martinez, D. Perez, and L. Vega, “Development of a dynamic
interface between julia and commercial power simulators,” IEEE
Access, vol. 9, pp. 84 320–84 331, 2021.
|
An Automated Framework for Assessing Electric Vehicle Charging Impacts on a Campus Distribution Grid Mohammadreza Iranpour, Sammy Hamed, Mohammad Rasoul Narimani, Silvia Carpitella, Kourosh Sedghisigarchi, Xudong Jia Abstract-This paper introduces a unified and automated framework designed to dynamically assess the impact of electric vehicle (EV) charging on distribution feeders and transformers at California State University, Northridge (CSUN). As EV adoption accelerates, the resulting increase in charging demand imposes additional stress on local power distribution systems. Moreover, the evolving nature of EV load profiles throughout the day necessitates detailed temporal analysis to identify peak loading conditions, anticipate worst-case scenarios, and plan timely infrastructure upgrades. Our main contribution is the development of a flexible testbed that integrates Julia, a high-performance programming language for technical computing, with PowerWorld Simulator via the EasySimauto.jl package. This integration enables seamless modeling, simulation, and analysis of EV charging load profiles and their implications for campus grid infrastructure. The framework leverages a real-world dataset collected from CSUNs EV charging stations, consisting of 15-minute interval measurements over the course of one year. By coupling highresolution data with dynamic simulations, the proposed system offers a valuable tool for evaluating transformer loading, feeder utilization, and overall system stress. The results support data-driven decision-making for EV infrastructure deployment, load forecasting, and energy management strategies. In addition, the framework allows for scenario-based studies to explore the impact of future increases in EV penetration or changes in charging behavior. Its modular architecture also makes it adaptable to other campus or urban distribution systems facing similar electrification challenges. Index Terms-Electric Vehicle (EV), Power Distribution Systems, Power System Operation, EV Charging Stations. I. INTRODUCTION The growing use of electric vehicles (EVs) is changing both transportation systems and how electric power is consumed and managed. This growth is supported by several factors, such as the decreasing cost of batteries, increasing awareness of environmental issues, and strong government incentives [1]-[3]. According to the International Energy Agency (IEA), the number of EVs on the road surpassed 26 million in 2022, which is over ten times the number in 2013 [1]. As more EVs are deployed, the demand for electricity increases significantly, placing additional pressure on existing power grid infrastructure [?], [4]-[9]. This growing demand requires new approaches for grid planning and load management to ensure reliable and efficient operation [10]- [17]. Despite the positive impact of EVs on reducing greenhouse gas emissions and improving energy efficiency, they also have the potential to create new challenges for electric (CSUN). . Support from NSF contract #2308498 and the Climate Action - Community-driven eLectric vEhicle chArging solutioN (CA-CLEAN) project power systems. The extra demand caused by EV charging stations can lead to several operational problems, including overloaded feeders, voltage instability, and higher stress on transformers [18], [19]. Most existing power distribution systems were not originally built to handle these types of fast-changing and locally concentrated loads, which makes it important to evaluate their impact carefully [20], [21]. As the number of EVs continues to increase, there is a growing need to strengthen transmission and distribution networks to support the additional demand [22], [23]. Without proper planning and system upgrades, these stresses may lead to equipment degradation and lower overall grid reliability [24], [25]. Static analyses, which look at only a single snapshot of the power system, are no longer enough to understand the growing complexity and time-varying nature of EV charging behavior. Traditional power flow studies often use fixed or average demand values, which overlook the fact that EV charging patterns can change quickly and are often unpredictable. These simplified assumptions may lead to inaccurate evaluations of grid performance, especially as the number of EVs continues to grow. As EVs become a more common mode of transportation, the shortcomings of static methods become increasingly evident. Dynamic analysis methods are therefore necessary to model changes in load over time and to better assess potential operational issues, such as voltage drops, line congestion, and transformer overloading [26], [27]. Incorporating timedependent simulations into planning and operations helps utilities anticipate critical stress points and develop more effective mitigation strategies. The changing nature of EV load profiles is influenced by several connected factors, such as time-of-use electricity pricing, individual charging habits, differences in vehicle models, the availability of charging stations, and battery charging strategies. For instance, many users may plug in their vehicles at the same time, such as after work or after classes, leading to short-term spikes in demand that go well beyond typical average levels. These unpredictable patterns make it difficult to depend on simple forecasting tools or fixed models. As a result, real-time or near-realtime simulations that use detailed, high-frequency data are essential for accurately analyzing how EVs affect the power grid under realistic conditions [28], [29]. This level of analysis is particularly important for system planners who must ensure grid reliability while managing increasingly flexible and variable energy demands. University campuses, such as California State University Northridge (CSUN), are well-suited for studying the realworld effects of EV charging on power distribution systems. 9 Sep 2025 Power Network Electric Load Profile with EVs Power Distribution Network Editing Load profile Files In Julia EasySimauto.jl PowerWorld Visualizing Changes of Power flows In PowerWorld Data Layer Compution Layer Visualizing Layer Figure 1. Schematic of the proposed framework for Dynamically Investigating the Impact of EV Chargers on Power Distribution Systems using Various tools and in three layers. Campuses typically have a high concentration of buildings and a mix of different types of electricity use, including residential, academic, and research loads. They also serve a wide range of EV users, faculty, staff, and students' vehicles. As more EV charging stations are installed, managing when and where charging occurs becomes more complex, both over time and across different campus locations. These conditions make campuses ideal environments for testing how clustered charging patterns, driven by daily schedules and user habits, can lead to overloading of transformers and congestion on local feeders [30], [31]. To accurately study complex and data-rich scenarios like EV charging, it is important to use computational tools that are both powerful and flexible. Interface-based simulation platforms have become effective solutions for this purpose. These tools make it possible to connect high-performance programming languages, such as Julia, with professional power system simulation software like PowerWorld. This setup allows users to automate simulation tasks, adjust inputs such as EV charging patterns or building loads, and run real-time power flow analysis across many different scenarios [32], [33]. These interfaces improve the scalability, consistency, and accuracy of power system studies, helping researchers explore how the grid responds under a wide range of operating conditions [34], [35]. When working with large datasets, such as the one-year set of 15-minute interval measurements from CSUNs EV charging stations, these frameworks are especially important. They offer a reliable and adaptable environment for understanding how EV charging affects the grid over time and for planning future infrastructure upgrades with greater confidence. In this paper, we present an automated and unified framework to study how EV charging affects the electric power grid at CSUN. We created a detailed model of CSUNs distribution network in PowerWorld, using real data about the campus infrastructure and layout. To support the analysis, we collected one year of electricity usage data from meters installed at campus buildings and EV charging stations, with measurements taken every 15 minutes. From this dataset, we identified the worst-case daily load profiles for analysis. Using the EasySimauto.jl package, we built a dynamic connection between Julia and PowerWorld to automate data input and simulation processes. We developed scripts that automatically adjusted the electrical load at different buses and ran power flow simulations for each load scenario. This approach allowed us to examine important system parameters such as voltage levels, transformer loading, and feeder congestion under different operating conditions. The main contribution of this work is the introduction of a flexible and automated framework that captures the time-varying nature of EV-related load profiles and allows system parameters to be modified at any selected time interval. This capability enables the simulation of a wide range of operating scenarios and provides a powerful tool for both power system operation and planning. The framework supports detailed, time-resolved analysis of grid impacts, making it highly valuable for infrastructure design, reliability assessment, and long-term energy management. The rest of this paper is organized as follows: In Section II, we describe how CSUNs campus electric grid is modeled and coresponding data sets are injected to the proposed framework. In Section III we introduce the proposed highfidelity model for analyzing the impact of EVs on lowvoltage distribution systems. Section IV presents the simulation results, and Section III provides the conclusion. II. METHODOLOGICAL FRAMEWORK A. Modeling CSUNs Electric Power Network To accurately assess how EV charging affects grid operations, it is important to have a realistic model of CSUNs electric power distribution system. An overview of the modeled campus distribution network in PowerWorld is shown in Figure 2. For this purpose, we built a detailed representation of the campus grid using data provided by the Physical Plant Management (PPM) department. The model includes key system components such as underground cables, transformers, and distributed generation units. To calculate the resistance and impedance of the underground cables, we used a one-line diagram of the campus that showed the types of cables and their geographic layout. The coordinates of each cable were used to estimate their physical lengths. We then applied standard resistance values (in ohms per mile), provided by PPM for each cable type, and multiplied them by the cable lengths to determine total resistance. This process allowed us to assign realistic impedance values to each section of the network, preserving both spatial and physical accuracy in the simulation. Transformers were also modeled in detail using nameplate data from PPM, which included their capacities, impedances, and configurations. These details were incorporated into our PowerWorld simulation to better understand how transformers respond to changes in load, particularly under EV charging conditions. In addition, the campus includes several photovoltaic (PV) systems that contribute to local energy generation. These include a 467 kW system at Parking Lot B2, a 225 kW system at Parking Lot E6, and a 1.2 MW system at the Student Recreation Center. These PV systems were modeled as distributed generators using actual generation profiles provided by PPM, which allowed us to capture their variability over different times of the day and seasons. Figure 2. PowerWorld model of CSUNs campus distribution network under regular (non-EV) load conditions. All feeders operate below thermal limits, indicating no line congestion in the baseline scenario. B. Load Data Collection and Modeling To build a realistic and time-varying load model for the CSUN campus, we carried out a detailed data collection process that included measurements from both campus buildings and EV charging stations. Meters were installed on each buildings power cables to record real and reactive power demand every 15 minutes over the span of a full year. Since the campus electrical system operates close to a balanced condition, measuring just one phase per building was sufficient to represent the overall load. This approach simplified the data collection while still providing reliable results. We also analyzed the data to capture how electricity use changes over time, including daily cycles and seasonal trends. These time-based patterns are important for accurately reflecting real operating conditions. By incorporating this time-series data into our simulations, we were able to model how campus loads vary in response to factors like academic calendars, building occupancy, and routine operations. This time-resolved load modeling is essential for evaluating the true impact of EV integration and identifying periods of potential stress on the distribution system. C. Incorporating EV Charging Loads Including EV charging loads in the campus power system model was a key part of our analysis. We used real data collected from CSUNs operational EV charging stations. This dataset contained one year of charging records at 15minute intervals for each station, providing detailed information on how much energy was used, how often vehicles were charged, and when charging typically occurred. To study the effects of this additional load, we identified the times with the highest charging activity and created daily worst-case charging profiles. These were combined with the building-level load data to produce a complete load profile that represents the campus system under heavy demand. Our model included both real power (kW) and reactive power (kVar) components of EV charging to accurately reflect their effect on the power factor. By directly embedding these EV charging patterns into our time-based load profiles, we developed a strong and realistic framework for evaluating how EVs impact system performance, specifically in terms of voltage fluctuations, transformer stress, and feeder congestion, within the current campus grid setup under various operating scenarios. D. Power Flow Analysis via PowerWorld Simulator To study how the campus grid responds to changing load conditions and EV charging activity, we used PowerWorld Simulator, an industry-standard tool for performing timeseries power flow analysis. We connected PowerWorld with Julia using the EasySimauto.jl interface, which allowed us to automate simulations and manage data efficiently. This setup made it possible to update the load at each bus in the system dynamically and run power flow calculations across thousands of time steps, each representing a different loading condition. Our Julia scripts processed the timeseries data from building and EV loads and sent the updated values directly to PowerWorld using SimAuto commands. This automated approach eliminated the need for manual data entry or interface interaction, which greatly reduced the overall simulation time and made the process more consistent and repeatable. In addition, the flexible scripting environment allowed us to easily test different load growth scenarios and EV penetration levels. This capability makes the framework suitable for long-term planning studies and stress testing of grid infrastructure under future conditions. III. HIGH-FIDELITY MODEL In this study, we developed a detailed and structured framework to evaluate how residential EV chargers affect low-voltage power distribution systems. The framework is organized into three main layers: the data layer, the computational layer, and the visualization layer. These layers work together to process input data, run simulations, and present the results clearly. An overview of the complete framework is shown in Fig. 1. 00:15 02:15 04:15 06:15 08:15 10:15 12:15 14:15 16:15 18:15 20:15 22:15 Time of Day 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 Load at Parking Lot G6 (MW) April August December February January July June March May November October September Figure 3. Load profiles of EV charging station at Parkin Lot G6 during one day in 12 months. In the Computational Layer, each of these load profiles was applied to the simulated distribution network using the EasySimAuto.jl API. This allowed for programmatic updates to PowerWorlds load values for each interval, automating the process of time-series simulation and ensuring consistent system modeling across all months. Each scenario was simulated in PowerWorld, and the resulting voltage magnitudes, line power flow's changes, and line losses levels were recorded for analysis. A. Data Layer The data layer is responsible for gathering and organizing all necessary input data for the simulation process. It consists of two main components: First, we collected detailed information about the physical characteristics of the campus distribution network, including specifications for underground cables, transformers, and the overall system topology. This information was used to build a realistic and accurate network model, which was implemented in PowerWorld Simulator. Second, the data layer incorporates time-series measurements for both the baseline building loads and the EV charging profiles. As described in the previous section, these data were recorded at 15-minute intervals over a full calendar year. For each month, we analyzed the data to identify the day with the highest peak load, representing the most critical or worst-case scenario. This process resulted in twelve representative daily load profiles, one for each month, each consisting of 96 time steps to reflect the 15-minute resolution. B. Computational Layer The computational layer handles the automated execution of simulations using PowerWorld Simulator. We used the EasySimAuto.jl package, a Julia-based interface, to control and interact with PowerWorld programmatically. This setup allowed us to load different load profiles into the simulation model automatically, without manual input. EasySimAuto.jl works by connecting to PowerWorlds SimAuto server, which provides access to a range of commands and data through its COM-based interface. Using this API, we updated load values for each 15-minute interval in the selected daily profiles and then ran simulations to observe the system's behavior under various EV charging scenarios. This layer made it possible to run flexible and repeatable experiments by automating the setup, execution, and data retrieval processes for each simulation case. C. Visualization Layer The visualization layer is responsible for analyzing and presenting the results from the simulations based on the different load profiles and time intervals. From each PowerWorld simulation, we extracted key performance indicators such as voltage levels, current flow, and loading on network components. These results were post-processed and visualized using custom scripts written in Julia. The goal was to understand how different EV charging behaviors, captured for the most critical day of each month, affect the performance and stability of the campus power system. For each of the twelve monthly profiles (with 96 time intervals per day), we tracked system responses such as voltage drops, overloaded lines, and transformer stress 0 10 20 30 40 50 60 70 80 90 100 Time Step (96 step- Per 15 min) 0.892 0.894 0.896 0.898 0.9 0.902 0.904 0.906 0.908 Voltage (pu) 24 hour Voltage Profile for Bus 144 Across Months April August February January July June March May November October September Figure 4. 24-hour per-unit voltage profiles at Bus 144 (Parking Lot G6) for each month under the EV load scenario. levels. These results were plotted over time to identify when and where the grid experiences the most stress. In addition, we created comparative heatmaps and time-series plots to show both the spatial and temporal effects of residential EV charging on the distribution network. This layer provides a detailed, data-driven view of grid performance, helping utility planners and researchers identify critical time windows and system locations that may require upgrades or mitigation strategies. IV. RESULTS AND DISCUSSION To evaluate the impact of residential EV charging on low-voltage distribution systems, we ran simulations using twelve representative daily load profiles, one per month, based on year-long, 15-minute interval data. Each profile reflects the peak-load day of its month, capturing worstcase conditions in 96 time steps. These profiles represent realistic daily and seasonal variations across the campus. An example for Parking Lot G6 on a peak day is shown in Fig. 3. In the Visualization Layer, post-processed data were used to examine voltage trends, line flows, and power losses. Fig. 4 shows the 24-hour per-unit voltage profile at Bus 144 (Parking Lot G6), which experiences the largest daily voltage swing under a heavy EV charging scenario (4Œ base EV load). Each trace represents one months 96 time steps at 15-minute intervals. A midday voltage plateau («0.905 pu) drops sharply in the afternoon (steps 3060), coinciding with peak charging demand. Spring months (e.g., February, April) show the lowest voltages («0.893 pu), while summer months (June, July) remain slightly higher («0.900-0.902 pu) due to lower connected EV during the summer. Voltage recovers after step 50 as charging activity declines. These results indicate that even with currently low EV penetration at CSUN, significant voltage drops already occur during peak hours. As EV adoption increases, these effects will intensify. Proactive mitigation, such as on-load tap changers or local energy storage, will be essential to maintain voltage stability, particularly at critical nodes like Parking Lot G6. Fig. 5 shows the 24-hour voltage profiles at three critical CSUN buses, Bus 144 (Lot G6), Bus 75 (Lot B5), and Bus 99 (Lot G3), under both base load and a 4Œ EV Figure 5. Twenty-four-hour voltage trajectories at three critical CSUN grid under both the base load and a 4Œ EV-load scenario. Each subplot overlays the base-case voltage (gray bars) with the EV-augmented voltage (red bars) over 96 fifteen-minute steps. For each bus, 144, 75 and 99, the single worst-swing month (February, April, August or November) was automatically selected based on the maximum difference in voltage range when EV load is multiplied by four. load scenario. Each panel compares the voltage without EV charging (gray bars) to the voltage with EV load (red bars), across 96 fifteen-minute intervals. At Bus 144, the largest voltage drop occurs during February and April, reaching up to 0.015 pu below the base case. This indicates limited voltage support and moderate background load. Bus 75 shows a smaller but still noticeable drop of about 0.010 pu. Bus 99 is least affected, with voltage deviations generally below 0.005 pu, suggesting stronger upstream support. Seasonal variation is evident: voltage troughs are deeper and occur earlier in the winter and spring months, while summer and fall months (e.g., August and November) maintain higher minimum voltages due to stronger base conditions. These results demonstrate that concentrated EV charging can significantly degrade voltage quality at specific nodes. Even with current low EV adoption, the impact is measurable. As EV penetration increases, proactive measures, such as voltage regulation devices or infrastructure upgrades, will be necessary to maintain power quality and grid stability. Figure 6 compares active power flows between Buses 144 (Lot G6) and 145 over 96 time steps for four representative months: April, July, October, and January. Under base load conditions (blue bars), the power flow remains steady at around 0.15 MW, reflecting a relatively constant campus load. When EV charging is included (red bars), line loading increases significantly during the day. Peak flows occur between steps 35-65, reaching up to 0.48 MW in April and October, 0.42 MW in January, and 0.27 MW in July. Figure 6. Comparison of 96step active power flows (LineMW) on the line between buses 144 (parking lot G6) and 145 under the base loading and EV charging scenarios for April, July, October, and January. The higher peaks in April and October align with increased EV usage in moderate weather, while the lower peak in July reflects reduced background load. These results show that uncoordinated EV charging can more than triple the load on specific feeders, potentially violating thermal or voltage limits. The step-by-step analysis highlights the need for demand management or infrastructure upgrades to accommodate future EV growth on the CSUN campus. V. CONCLUSION This paper presents an automated, high-resolution framework for assessing the impact of EV charging on the CSUN distribution network. By integrating real 15-minute EV charging data with Julias EasySimauto.jl interface and PowerWorld Simulator, we developed a fully scriptable pipeline that performs time-series load flow simulations and extracts voltage and power flow metrics across the network. Our analysis highlights significant daily and seasonal voltage drops and feeder loading increases at key campus locations, even under current low EV penetration. These effects are most pronounced during afternoon peak charging periods, especially in spring and winter months. As EV adoption grows, such stresses are expected to intensify, potentially compromising power quality and system reliability. The proposed framework enables detailed, data-driven evaluation of feeder overload risks and voltage deviations under various load scenarios. This supports informed decisionmaking for coordinated charging strategies, infrastructure upgrades, and grid reinforcement. Built on a reproducible and scalable platform, this tool can assist utilities, planners, and researchers in developing and testing EV integration strategies under realistic conditions. REFERENCES [1] International Energy Agency, "Global ev outlook 2023," 2023, available: https://www.iea.org/reports/global-ev-outlook-2023. [2] N. Lutsey, "The role of policy in reducing electric vehicle battery costs," International Council on Clean Transportation (ICCT), 2018. [3] Bloomberg New Energy Finance, "Electric vehicle outlook 2019," 2019, available: https://about.bnef.com/electric-vehicle-outlook/. [4] M. R. Narimani, J.-Y. Joo, and M. L. Crow, "Dynamic economic dispatch with demand side management of individual residential loads," in 2015 North American Power Symposium (NAPS). IEEE, 2015, pp. 1-6. [5] T. Niknam, R. Azizipanah-Abarghooee, and M. R. Narimani, "An efficient scenario-based stochastic programming framework for multiobjective optimal micro-grid operation," Applied Energy, vol. 99, pp. 455-470, 2012. [6] O. Sundstrom and C. Binding, "Flexible charging optimization for electric vehicles considering distribution grid constraints," Electric Power Systems Research, vol. 81, no. 1, pp. 115-124, 2012. [7] Y. Liu, J. Wang et al., "The impact of ev charging on power grid: A review," Renewable and Sustainable Energy Reviews, vol. 53, pp. 256-264, 2015. [8] M. R. Narimani, J.-Y. Joo, and M. L. Crow, "The effect of demand response on distribution system operation," in 2015 IEEE Power and Energy Conference at Illinois (PECI). IEEE, 2015, pp. 1-6. [9] M. R. Narimani, Maigha, J.-Y. Joo, and M. Crow, "Multi-objective dynamic economic dispatch with demand side management of residential loads and electric vehicles," Energies, vol. 10, no. 5, p. 624, 2017. [10] M. R. Narimani, A. Azizivahed, and E. Naderi, "An efficient scenario-based stochastic energy management of distribution networks with distributed generation, pv module, and energy storage," arXiv preprint , 2019. [11] M. R. Narimani, P. J. Nauert, J.-Y. Joo, and M. L. Crow, "Reliability assesment of power system at the presence of demand side management," in 2016 IEEE Power and Energy Conference at Illinois (PECI). IEEE, 2016, pp. 1-5. [12] A. Azizivahed, E. Naderi, H. Narimani, M. Fathi, and M. R. Narimani, "A new bi-objective approach to energy management in distribution networks with energy storage systems," IEEE Transactions on Sustainable Energy, vol. 9, no. 1, pp. 56-64, 2017. [13] M. R. Narimani, B. Asghari, and R. Sharma, "Optimal sizing and operation of energy storage for demand charge management and pv utilization," in 2018 IEEE/PES Transmission and Distribution Conference and Exposition (T&D). IEEE, 2018, pp. 1-5. [14] B. Asghari, M. R. Narimani, and R. Sharma, "Method for operation of energy storage systems to reduce demand charges and increase photovoltaic (pv) utilization," Jan. 31 2019, uS Patent App. 16/006,239. [15] M. R. Narimani, B. Asghari, and R. Sharma, "Energy storage control methods for demand charge reduction and pv utilization improvement," in 2017 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC). IEEE, 2017, pp. 1-5. [16] M. R. Narimani, "Demand side management for homes in smart grids," in 2019 North American Power Symposium (NAPS). IEEE, 2019, pp. 1-6. [17] M. R. Narimani, Maigha, J.-Y. Joo, and M. Crow, "Multi-objective dynamic economic dispatch with demand side management of residential loads and electric vehicles," Energies, vol. 10, no. 5, p. 624, 2017. [18] K. Clement-Nyns, E. Haesen, and J. Driesen, "The impact of charging plug-in hybrid electric vehicles on a residential distribution grid," IEEE Transactions on Power Systems, vol. 25, no. 1, pp. 371380, 2010. [19] S. Shao, M. Pipattanasomporn, and S. Rahman, "Impact of plug-in electric vehicles on distribution network: A review," International Journal of Electrical Power and Energy Systems, vol. 57, pp. 273281, 2014. [20] R. J. Bessa and M. A. Matos, "Optimization models to integrate electric vehicles in the electric power system," Electric Power Systems Research, vol. 97, pp. 50-58, 2013. [21] P. Richardson, D. Flynn, and A. Keane, "Impact assessment of varying penetrations of electric vehicles on low voltage distribution systems," IEEE PES General Meeting, 2010. [22] H. Cai, X. Jia, and A. S. Chiu, "A probabilistic model for ev charging behavior based on vehicle usage data," Transportation Research Part D: Transport and Environment, vol. 33, pp. 114-125, 2014. [23] S. Deb, K. Tammi, K. Kivikko, and R. Lahdelma, "Impact of electric vehicles and solar photovoltaic penetration on distribution network losses and voltage profile," Applied Energy, vol. 227, pp. 297-311, 2018. [24] O. Boyaci, M. R. Narimani, K. Davis, and E. Serpedin, "Spatiotemporal failure propagation in cyber-physical power systems," in 2022 3rd International Conference on Smart Grid and Renewable Energy (SGRE). IEEE, 2022, pp. 1-6. [25] M. Iranpour, M. R. Narimani, and X. Jia, "Assessing ev charging impacts on power distribution systems: A unified co-simulation framework," arXiv preprint , 2025. [26] E. Yao and J. Zhang, "Dynamic analysis of electric vehicle charging load based on user behavior," IEEE Conference on Transportation Electrification Asia-Pacific, 2014. [27] D. Wang, J. Zhang, and Y. Lu, "A real-time assessment method of electric vehicle charging impact based on dynamic simulation," Energies, vol. 13, no. 9, p. 2283, 2020. [28] A. Mohamed and M. Koivunen, "Electric vehicle load modeling and simulation," IEEE PES General Meeting, 2015. [29] K. M. Tan, V. K. Ramachandaramurthy, and J. Y. Yong, "Integration of electric vehicles in smart grid: A review on vehicle to grid technologies and optimization techniques," Renewable and Sustainable Energy Reviews, vol. 53, pp. 720-732, 2016. [30] D. Gough, J. Lefevre et al., "Designing microgrids for university campuses," IEEE Electrification Magazine, vol. 7, no. 3, pp. 30-39, 2019. [31] H. Wu, Z. Liu et al., "Modeling and analysis of ev charging load on campus microgrids," IEEE Green Technologies Conference (GreenTech), 2018. [32] T. Lee and C. Chen, "Interfacing power system analysis software with external programming tools," IEEE Transactions on Smart Grid, 2016. [33] J. Fletcher and T. C. Green, "Interfacing simulation tools for dynamic analysis of power systems," IEEE Transactions on Industrial Electronics, vol. 66, no. 9, pp. 7131-7140, 2019. [34] X. Huang, L. Zhang, and Y. Chen, "A flexible simulation framework for power systems using python and powerworld," Simulation Modelling Practice and Theory, vol. 101, p. 102038, 2020. [35] A. G. Martinez, D. Perez, and L. Vega, "Development of a dynamic interface between julia and commercial power simulators," IEEE Access, vol. 9, pp. 84 320-84 331, 2021.
|
2509.16216
|
On the Detection of Internal Defects in Structured
Media
Bryl Nico M. Ong 1, Aarush Borker 2, Neil Jerome A. Egarguin 1, Daniel Onofrei 3
1 Institute of Mathematical Sciences, University of the Philippines Los Ba˜nos, Laguna, Philippines
2 Mahwah High School, Mahwah, NJ, USA
3 Department of Mathematics, University of Houston, Houston, TX, USA
Abstract
A critical issue that affects engineers trying to assess the structural integrity of var-
ious infrastructures, such as metal rods or acoustic ducts, is the challenge of detecting
internal fractures (defects). Traditionally, engineers depend on audible and visual aids
to identify these fractures, as they do not physically dissect the object in question into
multiple pieces to check for inconsistencies. This research introduces ideas towards
the development of a robust strategy to image such defects using only a small set of
minimal, non-invasive measurements.
Assuming a one dimensional model (e.g.
longitudinal waves in long and thin
rods/acoustic ducts or transverse vibrations of strings), we make use of the continuous
one-dimensional wave equation to model these physical phenomena and then employ
specialized mathematical analysis tools (the Laplace transform and optimization) to
introduce our defect imaging ideas. In particular, we will focus on the case of a long
bar which is homogeneous throughout except in a small area where a defect in its
Young’s modulus is present. We will first demonstrate how the problem is equivalent
to a spring-mass vibrational system, and then show how our imaging strategy makes
use of the Laplace domain analytic map between the characteristics of the respective
defect and the measurement data.
More explicitly, we will utilize MATLAB (a platform for numerical computations)
to collect synthetic data (computational alternative to real world measurements) for
several scenarios with one defect of arbitrary location and stiffness. Subsequently, we
will use this data along with our analytically developed map (between defect charac-
teristics and measurements) to construct a residual function which, once optimized,
will reveal the location and magnitude of the stiffness defect.
1
Introduction
Maintaining the integrity of infrastructure, such as bridges, drainage, and aerospace compo-
nents, relies on the ability to identify hidden defects using non-destructive methods. Current
1
arXiv:2509.16216v1 [cs.CE] 6 Sep 2025
non-destructive evaluation (NDE) techniques employ various forms of vibrational analysis
due to their cost-effectiveness and reliability (see for instance [2, 17, 12, 18]). Specifically,
many methods employ one-dimensional spring-mass analogues and wave equation models.
These solutions are notable for their intuitive physical representation, analytical versatility
(resulting from the Laplace transform), and especially their straightforward implementation
in various numerical software, such as MATLAB [15, 2].
In biomechanics and computer graphics, spring-mass networks can simulate soft-tissue
deformation and cloth dynamics in real-time, sacrificing system continuity for computational
speed and robustness [4]. Moreover, acoustic metamaterials use one-dimensional (1D) spring-
mass chains to block specific sound frequencies, thus creating an acoustically manipulable
system [11]. Even vehicle and vibrational suspension systems employ a discrete set of springs
and masses to identify and isolate harmful fluctuations arising from dynamic loads [3].
An emerging area of NDE research focuses on treating internal cracks, or defects, as
an “extra spring.” When measuring certain values of such systems, the spring associated
with the crack perturbs natural frequencies, thus shifting the poles of a system’s Laplace-
domain output [14]. Certain studies have already demonstrated that employing just two
low-frequency measurements can be used to detect a single defect through a singular formula
[2]. Recently, works use guided-wave Bayesian methods [16] and particle-swarm optimizers
[7] to detect multiple and/or nonlinear defects.
On the other hand, many methods either rely on continuous and precise Laplace-domain
data, depend on closed-form inversion (valid for one or two defects), or struggle to generate
an inverse map with boundary measurements and defect parameters. In reality, sensors may
only monitor discrete time-series data, which is plagued by noise, and cracks can occur at
arbitrary depths with varying magnitudes. As a result, the challenge of creating a data-
driven imaging strategy that reliably recovers the location and size of a single defect from
minimal, easily measurable data, while considering noise, remains unsolved.
In this work, we study the inverse problem of locating and quantifying a localized stiffness
defect in a one–dimensional elastic bar using only a single endpoint displacement trace.
The forward model is the standard 1D longitudinal wave equation discretized by a lumped
spring–mass chain (e.g., in our particular numerical setup, length L = 1 m, N = 100 nodes,
∆x = L/N). The inverse task is to recover the index j and the local spring constant k∗
of a single anomalous element from noisy measurements of the left-end displacement u0(t)
produced by an impulsive initial velocity.
Our numerical results show the inversion recovers the defect location exactly (to the
discretization cell) and recovers k∗with relative errors ≲0.1% under realistic Gaussian
measurement noise up to σ = 10−5 m. The discrete contrast k∗maps directly to a continuum
Young’s modulus in the defective element via Edef = k∗∆x/A; consequently results for
k∗∈[0.1, 5] correspond to Edef/E0 ∈[0.1, 5] in the continuum model.
Key features of our approach are:
• A hybrid Laplace-domain forward solver that yields cheap, high-fidelity forward re-
sponses used to build a synthetic measurement map.
• A robust inversion pipeline that combines a coarse per-index search with a local non-
2
linear refine (Gauss–Newton / constrained optimization) and simple smoothing regu-
larization of the forward data.
• An extensive validation campaign (Monte Carlo noise sweeps, contrast sweeps, and
sensitivity to parameter mismatch) that quantifies the practical detection limits.
This work builds on our previous works, [6], [5], and attempts to make a step towards
addressing these issues in the context of the problem of determining the location and size of
a defect in the Young’s modulus of a long rod which is otherwise homogeneous. We will start
by showing the quantitative equivalence between a metal homogeneous rod with a localized
defect in its Young’s modulus and a 1-dimensional homogeneous spring-mass system with a
defective spring constant. Thus, an impulsive force at the left end of the rod will results in
a longitudinal wave propagating along the rod and subsequent vibrations being measured at
the left end point. Equivalently, the discrete system activated by an initial impulse applied
at its first mass will generate vibrations through the entire system. Then, in this discrete
setup, measurements will consist of the resulting vibrations of the first mass. As shown in
[5], through a z−transform approach one can use the set of discrete time measurments of the
first mass and obtain an approximation of the Laplace transform of the first mass vibrations
(when perceived as a function of time).
We then proceed towards building the analytic map relating the defective spring constant
(defect size) and its location to the vibrations of the first mass.
Then, in the Laplace
domain, a residual functional is proposed measuring the discrepancy between this analytic
map the synthetic measurements data set (i.e., the Laplace transform of the first mass
vibrations). Finally, minimization of this residual functional determines both the location
and magnitude of the stiffness defect. All of this is achieved with minimal, non-invasive data,
as measurements are only taken from one end of the system. In the context of a metal rod,
our results show how vibrational data resulting from an impulsive force at one end of the
rod will indicate the position and level of abnormal Young’s modulus at one point in the rod
which in turn offers a prediction for the position and likelihood of a future crack occurring
at that point.
Figure 1: Discrete spring–mass chain model of a 1D bar with a single stiffness defect (high-
lighted spring). Each mass represents a segment of the bar and each spring its local stiffness
ki. Adapted from [5].
Our proposed method shares similarities with previous works that evaluate vibrational
NDE by modeling defects as local compliance using a 1D spring-mass system combined
with Laplace-domain analysis [14, 2]. Similar to [16], our method inverts boundary wave
measurements to determine the defect parameters—specifically, the defect’s position along
the system and its severity, indicated by the deviation of the spring constant from that of the
3
homogeneous system. Unlike a Bayesian posterior that updates with each new observation,
our residual minimization function can be optimized to identify both the size and location of
the defect using only minimal, discrete endpoint measurements. An analytic inverse-spectral
method waqs employed in [15] and closed-form techniques were used in [14] to identify one or
two defects from frequency data. In [5] the authors used the poles of the transfer function to
detect one or two defects with left hand measurements in a system activated by an impulsive
force. The method we propose can handle arbitrary defect positions in a single-defect scenario
with discrete data and our numerics suggest it is extensible to the case of arbitrary number of
defects. Additionally, unlike forward-only spectral-element methods (see [11, 9, 17, 10]) and
frequency-shift reconstructions [3], our approach develops an explicit inverse mapping from
endpoint vibrational data to infer the defective spring constant and its position. In [6] the
authors build a similar analytic map relating the defect characteristics to the measurement
data and identify one defect (size and location) or two defects if appriori information about
their location or sizes is offered. In [5] the authors use the defect signature on the poles of
the transfer function, although the method proposed there requires continuous spectral data.
Both of these approaches extend the concept of using a spring-mass system within a Laplace-
domain framework. Notably, while newer methods employ guided-wave Bayesian inference
or particle-swarm analysis for defect identification [16, 7], our method is computationally
cheap and maintains accuracy even with noisy and discrete data.
2
Motivation
Our effort to develop a robust defect-imaging strategy is motivated by the practical problem
of determining the size and location of defects in otherwise homogeneous hyperbolic linear
systems. More explicitly, we consider two one dimensional wave propagation models, one
supporting transverse waves (string with fixed ends) and the other longitudinal waves (long
bar with clamped ends), each containing one defect of unknown size and location (defective
localized string tension and respectively defective localized Young’s modulus). First we show
how each of these two models can be equivalently represented by a discrete spring–mass
system where one defective spring constant. Then we will develop a new strategy to detect
the position and size of the defective spring constant resulting, through the above equivalence,
in a strategy to detect the defective location and size in the string tension or the elastic bar’s
Young’s modulus, respectively.
We will proceed first to describe the main paradigm in the context of a discrete spring
and mass system.
2.1
Discrete Spring–Mass–Damper Chain
We model our structure as a chain of N point masses m1, . . . , mN connected in series by linear
springs and dashpots. Mass mj sits between springs of stiffness kj (to the left) and kj+1 (to
the right), and dashpots of damping dj and dj+1 (see Figure 1). Denote its displacement by
xj(t). Newton’s second law at each interior mass j = 1, . . . , N gives (see [6, 5]:
4
m1 x′′
1(t) = −k1x1 −d1x′
1 + k2
x2 −x1
+ γδ(t)
mj x′′
j(t) = kj
xj−1 −xj
+ djx′
j + kj+1
xj+1 −xj
for j = 2, N −1
mn x′′
N(t) = −kN+1xN −dNx′
N −kN
xN −xN−1
where δ(t) denotes the Dirac distribution centered at zero and where we assumed that the
system is driven by an impulsive force acting on the first mass with prescribed amplitude γ.
Rearranging, this can be written in the compact tridiagonal form:
x′′
j = kj
mj
xj−1 −kj + kj+1
mj
xj + kj+1
mj
xj+1 −dj
mj
x′
j + fj, for j = 1, N.
(1)
where f1 =
γ
m1δ(t), f2 = ... = fN = 0 represents the impulsive forcing and where in (1) we
assume the convention that (x0(t) = xN+1(t) = 0).
This discrete spring–mass–damper model serves as our reference.
In Section 2.2 and
Section 2.3, we will demonstrate that applying centered finite differences to the continuous
1D wave PDEs (string and bar) yields exactly these same equations once we set
mj = ρ(xj) ∆x,
kj = local stiffness at xj
∆x
,
dj = local damping at xj
∆x
, for j = 1, N.
Next, although the discussion in Section 2.2 is similar to the discussion in Section 2.3 we
chose to present each topic separately for the sake of clarity of the two types of situations
we address.
2.2
Transverse String and Its Spring–Mass–Damper Analogue
We begin with the most general 1D transverse-wave equation for a clamped string modeled as
a one dimensional segment [0, L], and whose linear density ρ0(x), tension T(x), and damping
µ(x) vary with position:
∂2u
∂t2 =
1
ρ0(x)
∂
∂x
T(x) ∂u
∂x
−µ(x)
ρ0(x)
∂u
∂t .
(2)
with boundary data given by u(0, t) = u(L, t) = 0 and activated by impulsive initial data.
i.e. u(x, 0) = 0; u′(x, 0) = γδ(x) (where δ(x) is the Dirac distribution focused at the origin).
Sampling at equally spaced points xi = i ∆x, for i = 0, ..., N + 1, (i.e., (N + 1)∆x = L),
and considering ρi = ρ0(xi), Ti = T(xi), µi = µ(xi), and ui(t) ≈u(xi, t), after making use of
a centered-difference in space we obtain
∂
∂x
T ux
xi
≈Ti+1 ui+1 −(Ti + Ti+1) ui + Ti ui−1
(∆x)2
,
i = 1, . . . , N.
Denoting u′′
i = d2ui/dt2 and u′
i = dui/dt, the discrete update reads
u′′
i = 1
ρi
Ti+1ui+1 −(Ti + Ti+1)ui + Ti ui−1
(∆x)2
−µi
ρi
u′
i,
i = 1, . . . , N.
(3)
5
with the observation that the fixed end boundary conditions imply u0 = un+1 = 0.
On the other hand, from (1) we have that the equation of motion for the ith mass xi(t)
in a discrete chain of N masses m1, . . . , mN linked by springs k1, . . . , kN+1 and dashpots
d1, . . . , dN, assuming that (x0(t) = xN+1(t) = 0), is
x′′
i = ki
mi
xi−1 −ki + ki+1
mi
xi + ki+1
mi
xi+1 −di
mi
x′
i,
i = 1, . . . , N.
(4)
Equations (3) and (4) coincide exactly under the identifications
mi = ρi ∆x,
ki = Ti
∆x,
di = µi∆x
Therefore, each string segment of length ∆x and density ρi becomes a discrete mass mi,
each local tension Ti becomes a spring stiffness ki, and the continuous damping µi becomes
the damping coefficient di. This one-to-one mapping showcases our defect-imaging strategy,
which treats local changes in T(x) as defective springs in the spring and mass chain. In
particular, a localized modification in the string tension Tj∗corresponds to a defective spring
kj∗in the chain, enabling us to detect defect location and severity via spring–mass inversion.
2.3
Longitudinal Vibration in a Heterogeneous Bar and Its Spring–Mass
Analogue
Consider axial vibrations w(x, t) in a rod of length L whose density ρ(x), Young’s modulus
E(x), and damping µ(x) vary with position:
ρ(x) ∂2w
∂t2 = ∂
∂x
E(x) ∂w
∂x
−µ(x) ∂w
∂t .
(5)
where we assumed homogenuous Dirichlet boundary conditions w(0, t) = w(L, t) = 0 and
the vibrations generated by an impulsive initial data, i.e.
w(x, 0) = 0; w′(x, 0) = γδ(x)
(where δ(x) is the Dirac distribution focused at the origin). We recall that as classically
defined, the Young’s modulus E(x) is the proportionality constant between stress and strain
in linear elastic media (with stress defined as the internal force per unit area and strain as
the measure of elongation (gradient of the displacement)). We mention that the Young’s
modulus usually encodes the level of stress accumulation in the media.
We discretize (5) with xi = i ∆x, i = 0, . . . , N + 1, with ∆x = L/(N + 1), set ρi = ρ(xi),
Ei = E(xi), µi = µ(xi), and write wi(t) ≈w(xi, t). A centered-difference approximation in
x gives
∂
∂x
E wx
xi
≈Ei+1 wi+1 −(Ei + Ei+1) wi + Ei wi−1
(∆x)2
.
Hence, denoting w′′
i = d2wi/dt2, w′
i = dwi/dt, the finite-difference update is
w′′
i = 1
ρi
Ei+1 wi+1 −(Ei + Ei+1) wi + Ei wi−1
(∆x)2
−µi
ρi
w′
i,
i = 1, . . . , N.
(6)
6
Figure 2: Left: a continuous bar with a localized Young’s-modulus defect at xj∗. Right: the
equivalent spring–mass chain where the j∗–th spring has altered stiffness kj∗= Ej∗/∆x.
with the observation that the fixed end boundary conditions imply E0 = En+1 = 0.
On the other hand, from (1) we have that the equation of motion for the ith mass xi(t)
in a discrete chain of N masses m1, . . . , mN linked by springs k1, . . . , kN+1 and dashpots
d1, . . . , dN, assuming that (x0(t) = xN+1(t) = 0), is given by (4). Equations (4) and (6)
coincide exactly under the identifications
mi = ρi ∆x,
ki = Ei
∆x,
di = µi ∆x.
Therefore, each string segment of length ∆x and density ρi becomes a discrete mass mi,
each local Young’s modulus Ei becomes a spring stiffness ki, and the continuous damping µi
becomes the damping coefficient di. This one-to-one mapping showcases our defect-imaging
strategy, which treats local changes in E(x) as defective springs in the spring and mass chain.
In particular, a localized drop (or rise) in the bar’s Young’s modulus Ej∗corresponds to a
defective spring kj∗in the chain (highlighted in Fig. 2), enabling us to detect defect location
and severity via spring–mass inversion.
3
Mathematical Framework
In this section, we consider the system in (1) first under the homogeneous assumption and
then a system with one defective spring constant. Thus, for the homogeneous system, i.e.
when mj = 1, dj = d, kj = k, driven by the impulsive force at the first mass, we have the
following mathematical model [6, 5]:
x′′
1 + dx′
1 + 2kx1 −kx2
= γδ(t)
x′′
2 + dx′
2 + 2kx2 −kx1 −kx3
= 0
...
x′′
j + dx′
j + 2kxj −kxj−1 −kxj+1
= 0
...
x′′
N−1 + dx′
N−1 + 2kxN−1 −kxN−2 −kxN
= 0
x′′
N + dx′
N + 2kxN −kxN−1
= 0
.
(7)
7
Now, suppose that the all constants are equal to 1, except that of the spring at position j
with a spring constant k∗̸= 1. Then the system becomes
x′′
1 + dx′
1 + 2x1 −x2
= γδ(t)
x′′
2 + dx′
2 + 2x2 −x1 −x3
= 0
...
x′′
j−1 + dx′
j−1 + (1 + k∗)xj−1 −kxj−2 −xj
= 0
x′′
j + dx′
j + (1 + k∗)xj −k∗xj−1 −xj+1
= 0
...
x′′
N−1 + dx′
N−1 + 2xN−1 −xN−2 −xN
= 0
x′′
N + dx′
N + 2xN −xN−1
= 0
.
(8)
Taking the Laplace transform of (8) plus some algebraic manipulation yields
(s2 + ds + 2)˜x1 −˜x2
= γ
(s2 + ds + 2)˜x2 −˜x1 −˜x3
= 0
...
(s2 + ds + (1 + k∗))˜xj−1 −˜xj−2 −˜xj
= 0
(s2 + ds + (1 + k∗))˜xj −k∗˜xj−1 −˜xj+1
= 0
...
(s2 + ds + 2)˜xN−1 −˜xN−2 −˜xN
= 0
(s2 + ds + 2)˜xN −˜xN−1
= 0
.
(9)
Letting h = −(s2 + ds + 2) and performing some more algebraic manipulations allow us
to write (9) in the matrix form
AX = b
(10)
where the entry Am,p of the coefficient matrix A in the mth row and pth column is given by
Am,p =
h,
if m = p but m ̸= j −1, j
h + 1 −k∗,
if m = p = j −1 or m = p = j
1,
if |m −p| = 1 but m ̸= j
1,
if m = j, p = j + 1
k∗,
if m = j, p = j −1
0,
elsewhere
.
(11)
Meanwhile the right-hand side vector b is given by b = [−γ 0 ... 0]T and the unknown
vector X consists of the responses ˜xi, i = 1, 2, .., N of each mass to the excitation force in
the Laplace domain.
8
The coefficient matrix A can further be manipulated and written in the form A = Ah +
P where Ah is the tridiagonal matrix obtained by taking the Laplace transform of the
homogeneous (nondefective) system (7). More explicitly, the diagonal entries of Ah are all
equal to h while it’s off-diagonal entries are all 1. The matrix P then is a very sparse matrix
whose inverse can easily be calculated. Using the result from [8] giving an explicit form for
the inverse of Ah, we get the expression ˜x1(s) for the response of the system in the Laplace
domain given by
˜x1(s) = −γR1,1 −
R1,j−1(1 −k∗) + R1,i(k∗−1)
˜xj−1(s) −R1,j(1 −k∗)˜xj(s),
(12)
where
Rm,p = cosh[(N + 1 −|p −m|)λ] −cosh[(N + 1 −m −p)λ]
2 sinh λ sinh(N + 1)λ
(13)
and λ satisfies h = −2 cosh λ. The response of the masses immediately adjacent to the
defective spring satisfy the system
(
(1 −k∗)(Rj,j−1 −Rj,j)˜xj−1 +
1 + Rj,j(1 −k∗)
˜xj
= −γRj,1
1 + Rj−1,j−1(1 −k∗) + Rj−1,j(k∗−1)
˜xj−1 + Rj−1,j(1 −k∗)˜xj
= −γRj−1,1
.
(14)
Solving the preceding system of linear equations in the unknown ˜xj−1 and ˜xj gives
˜xj−1
= −−γRj,1V + γRj−1,1G
GU −FV
˜xj
= −γRj,1U −γRj−1,1F
GU −FV
,
(15)
where F = (1−k∗)(Rj,j−1−Rj,j), G = 1+Rj,j(1−k∗), U = 1+Rj−1,j−1(1−k∗)+Rj−1,j(k∗−1)
and V = Rj−1,j(1−k∗). Using (15) into (12) yields an expression for the response of the first
system as a function of the parameter j and k∗representing the defect location and defect
size, respectively.
4
Results and Discussions
In this section, we present the defect characterization scheme that aims to find the location
and size of the single defective spring in a system of arbitrary length.
We start with a
description of the basic optimization algorithm. We illustrate the effect of measurement
noise to this approach necessitating the need for the introduction of what we call the σ-
smooth approach. This modification in the objective function, coupled with a Monte Carlo
run, mitigates the effect of the noise to the system. The section ends with an illustration of
a more analytic approach that can be an alternative for scenarios when the defect is located
near the end of the system.
9
4.1
Optimization Algorithm Description
The proposed method identifies the defective spring in the system by minimizing the dis-
crepancy between the analytically computed response of the system to the excitation force δ
applied to the first mass and noisy synthetic data that mimics measurements from physical
setting. Let ˜x1,analytic be the analytically computed response, given in (12) ,of the first mass.
Since we shall assume that our scheme has no access to the location and size of the defect,
we shall assume that ˜x1,analytic is a function of the location j of the defect, the size k of
the defect, and the Laplace variable s. Meanwhile, we let ˜x1,synthetic denote the synthetic
data that mimic perfect real-life measurements in the Laplace domain. To simulate mea-
surement uncertainty, Gaussian noise ϵ is added to the synthetic data. The measurement
noise is quantified by its relative size with the synthetic data. The objective function to be
minimized is
f(j, k) = log
Z 100
0
˜x1,analytic(j, k, s) −
˜x1,synthetic(s) + ϵ(s)
2 ds
,
which is the logarithm of the squared L2-norm of the residual. The logarithm ensures that the
optimizer (fmincon) avoids premature termination due to very small jumps in the objective
function values between iterations.
The introduction of the noise function ϵ makes this
approach better reflect the conditions of practical defect detection, where the measurements
are inevitably corrupted by noise.
In practice we run the local optimizer (Matlab fmincon) independently for each candidate
defect index j ∈{2, . . . , N} and pick the (j, k) pair with minimal residual; this exhaustive
per-index refine reduces sensitivity to local minima in k and keeps the inversion computation-
ally cheap (one forward solve per j). The optimizer uses gradient-based methods to search
for a local minimum of a user-defined objective function, subject to linear and nonlinear
constraints, as well as bound restrictions. Because it is a local solver, its success depends
strongly on the smoothness of the objective landscape and the choice of initial guess. For
each candidate j, fmincon is executed to estimate the optimal k that minimizes the resid-
ual. This process is repeated for all j ∈{2, . . . , N}, and the pair (j, k) that produces the
minimum value of the objective function is selected as the location and estimated size of the
defect.
4.2
Noise- free Simulation
In this section, we show that the good performance of the proposed optimization proce-
dure for noise- free data. Figure 3 shows the synthetic data generated with the following
parameters: number of masses N = 100, damping coefficient d = 0.1, impulse intensity
γ = 1, uniform spring constant k = 1, defect location jtrue = 40, and defect spring constant
k∗= 1.3. In other words, the system contains a single defective spring in position jtrue = 40
with stiffness k∗= 1.3, while all other springs have k = 1.
To identify the defect, the optimization routine fmincon was executed 99 times, once
for each possible location of the defect j ∈{2, . . . , 100}. For each j, the optimizer solved
10
Figure 3: Graph of Synthetic Data for the system with parameters N = 100, d = 0.1, γ = 1,
j = 40, and k∗= 1.3.
for the optimal defect size k that minimizes the objective function f(j, k). Figure 4 shows
the corresponding minimal residual values with respect to k, calculated as 10 f(j,k) for each
possible value of j. Here, the x-axis corresponds to the possible defect locations j, while the
y-axis represents the residual magnitude 10 f(j,k∗
j , where k∗
j is the minimizer of the objective
function for each fixed value of j.
The results indicate that the smallest residual occurs at j = 40, matching the true defect
location. The corresponding computed defect stiffness is k∗
comp ≈1.2999999, which is in
excellent agreement with the true value k∗= 1.3. The relative error is
k∗
comp −k∗
k∗
≈1.14 × 10−6,
demonstrating high accuracy. This suggests that given a perfect set of measurement values of
the system response, the proposed method yields highly reliable results. The next subsection
shows the effect of the introduction of various noise levels to the measurements.
4.3
Effect of Gaussian Noise
Modern laboratory Laser Doppler Vibrometers (LDV) achieve sub-nanometer to picometer-
class displacement noise floors depending on bandwidth and surface reflectivity; assuming
a conservative axial displacement noise of order 10−6 m for our experimental bandwidth is
realistic (see [1], [13]).
11
Figure 4: Residual Function of Possible Defect Location.
Figure 5 and Figure 6 show the effect of the Gaussian noise ϵ on the accuracy of defect
detection. We again consider the defective system with the same parameters as the ones
used in Subsection 4.2. Figure 3 plots the relative error in the predicted defect location as
a function of the relative size of ϵ. For ϵ of magnitude 10−8, 10−7, and 10−6 relative to the
synthetic data, the predicted defect location matches the true location exactly. However,
when the noise level is at 10−5, the RelativeErrorinLocation increases to approximately 5%,
and it continues to grow as the noise level increases. This suggests a noise level threshold
near of 10−6 beyond which location detection degrades significantly.
Figure 4 shows the RelativeErrorinDefectSize as a function of noise level. At noise level
10−5, the relative error in the estimated defect size is about 9.30%, whereas for ϵ = 10−6,
the error is on the order of 10−6. This confirms that 10−6 serves as a practical noise level
threshold for accurate detection. Notably, this noise level is still well within the capabilities
of modern defect detection systems, which can achieve precision up to 10−12.
In the next subsection, we present a modification of the basic optimization algorithm
that mitigates the effect of the measurement noise. We shall see that this approach improves
the noise level threshold by some orders.
4.4
σ-Smooth Approach
To further improve robustness against noise, we propose a variant of the optimization pro-
cedure, which we refer to as the σ-smooth approach. The framework remains the same: we
begin with the synthetic data ˜x1,synthetic, add Gaussian noise ϵ of prescribed size, and then
minimize the residual between the analytic and measured responses. The key modification
12
Figure 5: Relative error in the estimate for the defect location as a function of noise level
for the system with parameters N = 100, d = 0.1, γ = 1, j = 40, and k∗= 1.3.
Figure 6: Relative error in the estimate for the defect size as a function of noise level for the
system with parameters N = 100, d = 0.1, γ = 1, j = 40, and k∗= 1.3.
13
is the introduction of a random perturbation to the defect size parameter k in the objective
function.
In the original formulation, the optimizer solves directly for k. In the σ-smooth approach,
however, the unknown k is replaced by
k + δ,
where δ is a random perturbation drawn from a normal distribution with mean zero and
variance σ2
smooth, i.e., δ ∼N(0, σ2
smooth). Thus, instead of estimating a single deterministic
value of k, the method effectively searches for an interval [k −∆, k +∆] of admissible values.
This allows the solution to remain stable under noisy conditions, since small shifts in k within
this interval still produce consistent results.
To account for randomness in δ, we generate Nδ independent samples {δi}Nδ
i=1 from the
distribution above. For each δj, we evaluate the modified objective function
f(j, k; δi) = log
Z
˜x1,analytic(j, k + δi, s) −
˜x1,measured(s) + ϵ(s)
2 ds
1
2!
.
The σ-smooth objective is then defined as the average:
F(j, k) = 1
Nδ
Nδ
X
i=1
f(j, k; δi).
Minimization is performed on F with respect to k, while j is treated as a discrete variable
as before. When σsmooth = 0, the method reduces to the deterministic formulation. For
σsmooth > 0, the perturbation introduces robustness by mitigating the effect of Gaussian
noise in the measured data. In practice, we found that setting σsmooth = 10−4 and averaging
over Nδ ≈50 samples is sufficient to stabilize the results.
Note that Monte Carlo runs were also employed in the σ-smooth approach to account for
the presence of noise and the stochastic nature of the defect detection problem. By repeat-
ing the optimization procedure over multiple randomized noise realizations, we were able to
obtain statistically reliable estimates of the residual functional and the optimizer’s perfor-
mance. This averaging process reduces sensitivity to a single noise instance and highlights
the overall trend of defect detectability across different locations. The final estimate for the
defect location and defect size are taken to be the median of the respective estimates from
each Monte Carlo run.
4.5
A Simulation Employing the σ-Smooth Approach
To assess the performance of the σ-smooth approach, we compared its results against the
deterministic method under noisy conditions. In particular, we tested the method by con-
sidering the defective system with the same parameters as in Subsection 4.2 but with noise
of size 5 × 10−5.
14
Without regularization (i.e., using the deterministic approach), the relative error in defect
location was approximately 2.5%, while the RelativeErrorinDefectSize was about 5.75%.
These results indicate noticeable degradation in accuracy under noise.
In contrast, when the σ-smooth approach with Nδ = 50 was applied to 100 Monte Carlo
runs with each run having a different noise function, the performance improved significantly.
Recall that the final estimate for the defect location and defect size were taken to be the
median of the respective estimates from all the Monte Carlo runs. The relative error in defect
location decreased from 2.5% to 0%, meaning the defect was identified exactly. Similarly, the
RelativeErrorinDefectSize dropped from 5.75% to 1.8 × 10−4, demonstrating several orders
of magnitude improvement. These results suggest that the proposed σ-smooth approach is
effective in mitigating the influence of noise. Figure 7 illustrates the defect location identified
in each simulation run, while Figure 8 shows the corresponding results for the defect size.
Figure 7: EstimatesfortheDefectLocation (Monte Carlo Runs).
4.6
Multiple Simulation Runs
In this subsection, we investigate the limits of the robustness of σ-smooth approach. We
ruse the σ-smooth approach with Nδ = 50 draws across 100 Monte Carlo runs for defective
systems with varying defect location and defect sizes. The common parameters among these
systems are the uniform spring constant k = 1, damping coefficient d = 0.1 and number of
masses N = 100. In all the experiments, the noise level was set to 5 × 10−4.
15
Figure 8: EstimatesfortheDefectSize (Monte Carlo Runs).
4.6.1
Defect Detection for Fixed Defect Size and Varying Location
First, we employ the σ-smooth approach to characterize the defect in systems with a defective
spring of stiffness k∗= 1.30 but varying locations j across the chain. Figure 9 shows the
estimated defect size as a function of the true defect location. For most defect locations,
the optimizer is able to recover the defect size accurately, yielding values close to k∗= 1.30.
However, beginning around j = 79, the optimizer has increasing difficulty estimating the
defect size, resulting in unstable or significantly overestimated values. This observation is
verified in Figure 10 where the relative error in the defect size estimates is shown as a function
of the true defect location. We observe that up to j = 78, the relative error remains below
the 5% threshold. Beyond this point, however, the relative error grows rapidly, indicating a
degradation in the scheme’s accuracy when the defect is located near the end of the system.
Figure 11 shows the estimated defect location versus the true defect location. The opti-
mizer performs very well in this task, producing estimates that closely follow the diagonal
line (perfect agreement). The corresponding relative error, shown in Figure 12, confirms
this: the location is always predicted within at most 2.33% error, which corresponds to a
maximum of two positions away from the true defect location.
In summary, for a fixed defect size k∗= 1.30, the optimizer reliably identifies the defect
location across the entire domain, but struggles to estimate the defect size accurately once
the defect is positioned near the end of the system. This makes sense from the physical
point of view as the exciting force dissipates fast as it travels across the chain. Hence, the
effect of the defect to the vibrations in the first mass becomes less and less as the defect’s
location becomes close to the other end. One way to handle such cases is to incorporate
16
Figure 9: Estimate for the Defect Size vs
Actual Defect Location
Figure 10: Relative Error in the Defect Size
Estimate vs Actual Defect Location
Figure 11: Estimate for the Defect Location
vs Actual Defect Location
Figure 12: Relative Error in the Defect Lo-
cation Estimate vs Actual Defect Location
measurements from the last mass in the optimization. This will be explored in upcoming
studies.
4.6.2
Defect Detection for Fixed Location and Varying Defect Size
Now, we fix the defect location at j = 40 and vary the defect size k∗. Figure 13 shows
the estimated defect size as a function of the true defect size. The estimates align almost
perfectly with the diagonal, indicating that the scheme is highly successful in recovering the
true defect size across the tested range. This is confirmed in Figure 14 which shows the
corresponding relative error in the defect size estimate. It shows that the maximum error is
just around 2.77%.
Figure 15 shows the estimated defect location as a function of the true defect size. Here,
17
Figure 13: Estimate for the Defect Size vs
Actual Defect Location
Figure 14: Relative Error in the Defect Size
Estimate vs Actual Defect Location
the estimates remain constant at the true defect location j = 40. This is further confirmed
in Figure 16, where the RelativeErrorinLocation estimates is exactly zero for all defect sizes.
Figure 15: Estimate for the Defect Location
vs Actual Defect Location
Figure 16: Relative Error in the Defect Lo-
cation Estimate vs Actual Defect Location
These multi-case simulations show that the defect characterization scheme utilizing sev-
eral Monte Carlo runs of the σ-smooth approach works well with most systems of various
defect locations and defect sizes even when the measurement data are tainted with noise
of size 5 × 10−4. The exemptions are the cases when the defect is located near the end
of the system. To address these cases, we mentioned a possible extension of the current
approach which incorporates the measurements from the other end of the system. Another
approach, albeit more mathematically involved and computationally expensive is to simply
plot the residual as symbolic functions of j and k. This approach is illustrated in the next
subsection.
18
4.7
An Analytic Approach
To address the cases when the defect is located near the end of the system, we employ a
purely analytic approach. This time, we treated all quantities as symbolic functions and
directly evaluated the residual
f(j, k) = log
Z 100
0
ex1,analytic(j, k, s) −
ex1,synthetic(s) + ϵ(s)
2 ds
,
as a function of the possible defect locations j and defect sizes k.
For these experiment, we introduced a noise of magnitude 5×10−4. By evaluating f(j, k)
across a range of defect locations and defect magnitudes and plotting the results as a three-
dimensional surface, we can visually identify the location and size of the defect.
First, we consider the case of a system with the following parameters: N = 100, d =
0.1, k = 1, k∗= 1.3, j∗= 85. The residual 10f(j,k) is plotted in Figure 17. Here, we see
spikes in the surface, indicating the extreme values of the residuals. Two dimensional slices
of this surface are shown in Figure 18 and Figure 19, indicating that the global minima
indeed occur at j = 85 and k = 1.3.
Figure 17: 3D graph of Residual as a function of location and defect size in the system with
N = 100, d = 0.1, k = 1, k∗= 1.3, j = 85.
These plots also indicate why the optimization routine employing fmincon is having some
difficulties in characterizing the defect. The frequent oscillations of the residual function
creates multiple local minima. The MATLAB fmincon, being a local solver might had been
trapped in one of these local minima and hence, converged to an inaccurate estimate for the
defect location and/ or defect size.
19
Figure 18:
2D slice of residual function
shown in Figure 17 along the possible de-
fect locations.
Figure 19:
2D slice of residual function
shown in Figure 17 along the possible de-
fect sizes.
The next simulation shows a very similar scenario, with N = 100, d = 0.1, k = 1, k∗=
1.1, j∗= 90. Here we have a smaller defect located further in the system. Again, we see
in Figure 20 the 3D rendering of the residual as a function of the possible defect locations
and defect sizes. Multiple spikes are again observed, showing the peaks and valleys of the
residual.
Figure 20: 3D graph of Residual as a function of location and defect size for the system with
N = 100, d = 0.1, k = 1, k∗= 1.1, j∗= 90.
20
The 2D slices of the surface in Figure 20 are shown in Figure 21 and Figure 22. These
slices indicate that this analytic approach predicts the location and size of the defect quite
accurately.
Figure 21:
2D slice of residual function
shown in Figure 20 along the possible de-
fect locations.
Figure 22:
2D slice of residual function
shown in Figure 20 along the possible de-
fect sizes.
These cases show an alternative way of characterizing the defect.
This is extremely
useful especially for cases when the defect is located further down the system. However,
this approach is mathematically tedious and computationally expensive as all variables are
treated to be symbolic.
5
Conclusions
In this paper we studied the problem of imaging the location and size of a one defect in the
Young’s modulus for a long metal bar of length L and cross-sectional area A = 1. The model
was idealized as a 1D bar and was shown equivalent to a discrete spring-mass system.
All computations are performed on a nondimensional 1D chain of length L = 1 with
N = 100 cells and ∆x = 1/N. For convenience we set the nondimensional cross-section
A = 1 and the nominal mass and stiffness per cell to mi = 1, k = 1. To map results to
physical units, choose a physical cross-section Aphys, density ρ and baseline Young’s modulus
E0. The physical cell length is ∆xphys = Lphys/N, and
mphys
i
= ρAphys∆xphys,
kphys
i
= EiAphys
∆xphys
.
Hence the conversion factors are mref = ρAphys∆xphys and kref = E0Aphys/∆xphys, and phys-
ical time/frequency follow
tphys = t
rmref
kref
,
ωphys = ω
r
kref
mref
,
21
where t, ω denote the time and frequency, respectively and
p
mref/kref = ∆xphys/c with
c =
p
E0/ρ, so a nondimensional time unit equals the travel time across one cell.
We proposed a robust algorithm for characterizing one defect in a spring mass system
under the action of an impulsive force. In our particular numerical example the setup we
used was a L = 1 discretized with N = 100 finite difference points. This will resolve only
modes up to the model dependent cut-off frequency. In fact, with spatial spacing h = ∆x
and wave speed c =
p
E0/ρ the model reliably resolves frequencies up to
fmax ≈c
4h = c N
4L .
(In practice one must choose N so that fmax exceeds the highest physical frequency of interest;
the factor 4 is a conservative margin to limit numerical dispersion.)
This type of forcing is a good approximation of a regular band limited forcing since
the high order modes are carrying very little energy and contribute insignificantly to the
measurement map and thus can be neglected. In fact in our numerical setup for the L = 1,
N = 100 we can resolve approximately 50 modes. The unresolved modes have amplitude less
than O(1/N) × δ, i.e., 1% of the main vibrational amplitude. We also tested our systems for
robustness against Gaussian noise of size ϵ ∈(10−8, 10−2). In practice, one should assume
a noise level ϵ ≈10−2 · δ where δ denotes the typical displacement amplitude in the system
considered (e.g., for example in our numerical setup of L = 1 bar with N = 100 masses,
δ ≈10−4). For this level of noise we showed that our method remains robust.
The proposed approach minimizes the discrepancy between the analytically computed
response map, i.e., vibrations of the first mass as a function of the defect location and
size and the synthetic data map that mimics measurements in a physical setting, i.e., the
vibrations of the first mass when the system with one defect is activated by the impulsive
force at the first mass. The approach entails a minimization procedure that seems to be
sensitive to measurement noise. To mitigate the effect of noise, a smoothening technique,
referred to here as the σsmooth-approach is employed to modify the objective functional. This,
coupled with multiple Monte Carlo runs proved to make this approach a couple of orders
less sensitive to measurement noise.
The proposed scheme works well against Gaussian noise and perfectly characterizes de-
fects sie and location for defects located not to close near the right end of the bar. The
proposed optimization strategy appears to have some difficulties in characterizing defects
that occur near the other end point of the system. This may be due to two factors, namely
the quick dissipation of the energy in the system dues to the assumed damping and the
highly oscillating behavior of the residual functional. We proposed an analytic approach for
such cases. Numerical results indicate that this approach works in detecting the exact size
and location of defects located near the right end of the bar but it tends to be computation-
ally expensive. An alternative approach, one that incorporates in the objective functional
measurements from the last mass in the system seems much more elegant and it will be
considered in forthcoming studies.
22
References
[1] Ofv-5000 modular vibrometer, product brochure.
https://www.polytecstore.fr/
polytec_images/documents/oms/pb_ofv-5000.pdf. Accessed: 2025-09-01.
[2] Peter Cawley and R. Adams. The location of defects in structures from measurements
of natural frequencies. Journal of Strain Analysis for Engineering Design - J Strain
Anal Eng Design, 14:49–57, 04 1979.
[3] Michele Dilena and Antonino Morassi. Structural health monitoring of rods based on
natural frequency and antiresonant frequency measurements. Structural Health Moni-
toring, 8(2):149–173, 2009.
[4] Andrew D. Dimarogonas. Vibration of cracked structures: A state of the art review.
Engineering Fracture Mechanics, 55(5):831–857, 1996.
[5] Neil Jerome Egarguin, Larry Guan, and Daniel Onofrei. Defect characterization in a 1d
spring mass system using the laplace and z-transforms. Journal of Vibration Engineering
& Technologies, 10:1121– 1134, 02 2022.
[6] Neil Jerome Egarguin, Taoufik Meklachi, Daniel Onofrei, and Noam Harari-Arnold.
Vibration suppression and defect detection schemes in 1d linear spring-mass systems.
Journal of Vibration Engineering & Technologies, 8:489–503, 2020.
[7] Horea Grebla, Vasile Rusu, Gilbert-Rainer Gillich, and Thu Bui. Assessment of cracks
in beams using changes in the measured frequencies and particle swarm optimization.
Vibroengineering Procedia, 51:29–34, 10 2023.
[8] Guo-Yong Hu and Robert F. O’Connell. Analytical inversion of symmetric tridiagonal
matrices. Journal of Physics A: Mathematical and General, 29(7):1511, apr 1996.
[9] Marek Krawczuk, Joanna Grabowska, and Magdalena Palacz. Longitudinal wave propa-
gation. part ii—analysis of crack influence. Journal of Sound and Vibration, 295(3):479–
490, 2006.
[10] Fushou Liu, Aobo Liu, Libin Wang and Yang Wei. Accurate modeling and wave propa-
gation analysis of cracked slender structural members by the spectral element method.
Structural Control and Health Monitoring, 2023(1):5569434, 2023.
[11] Magdalena Palacz and Marek Krawczuk. Analysis of longitudinal wave propagation in
a cracked rod by the spectral element method. Computers & Structures, 80(24):1809–
1816, 2002.
[12] Kunhong Peng, Yi Zhang, Xian Xu, Jinsong Han and Yaozhi Luo. Crack detection of
threaded steel rods based on ultrasonic guided waves Sensors, 22(18), 2022.
23
[13] Steve J. Rothberg, Matthew S. Allen, Paolo Castellini, Dario Di Maio, Joris J.J. Dirckx,
David John Ewins, Ben J. Halkon, Pieter Muyshondt, Nicola Paone, Teresa Ryan, Hein-
rich Steger, E.P. Tomasini, Steve J.A. Vanlanduit, and Joseph F. Vignola. An interna-
tional review of laser doppler vibrometry: Making light work of vibration measurement.
Optics and Lasers in Engineering, 99:11–22, 2017. Laser Doppler vibrometry.
[14] Lourdes Rubio, Jos´e Fern´andez-S´aez, and Antonino Morassi. Identification of two cracks
in a rod by minimal resonant and antiresonant frequency data. Mechanical Systems and
Signal Processing, 60-61:1–13, 2015.
[15] Efim I. Shifrin. Inverse spectral problem for a non-uniform rod with multiple cracks.
Mechanical Systems and Signal Processing, 96:348–365, 2017.
[16] Zijie Zeng, Min Gao, Ching Tai Ng, and Abdul Hamid Sheikh. Guided wave-based
characterisation of cracks in pipes utilising approximate bayesian computation. Thin-
Walled Structures, 192:111138, 2023.
[17] Shuaifang Zhang, Wei Shen, Dongsheng Li, Xiwen Zhang and Baiyu Chen. Nondestruc-
tive ultrasonic testing in rod structure with a novel numerical Laplace based wavelet
finite element method. Latin American Journal of Solids and Structures, 15, 2018.
[18] Y. Zou, L. Tong and Grant Steven. Vibration-based model-dependent damage (delami-
nation) identification and health monitoring for composite structures - a review. Journal
of Sound and Vibration, 230: 357-378, 2000.
24
|
On the Detection of Internal Defects in Structured Media Bryl Nico M. Ong 1, Aarush Borker 2, Neil Jerome A. Egarguin 1, Daniel Onofrei 3 1 ̃nos, Laguna, Philippines 2 Mahwah High School, Mahwah, NJ, USA 3 - ious infrastructures, such as metal rods or acoustic ducts, is the challenge of detecting internal fractures (defects). Traditionally, engineers depend on audible and visual aids to identify these fractures, as they do not physically dissect the object in question into multiple pieces to check for inconsistencies. This research introduces ideas towards the development of a robust strategy to image such defects using only a small set of minimal, non-invasive measurements. Assuming a one dimensional model (e.g. longitudinal waves in long and thin rods/acoustic ducts or transverse vibrations of strings), we make use of the continuous one-dimensional wave equation to model these physical phenomena and then employ specialized mathematical analysis tools (the Laplace transform and optimization) to introduce our defect imaging ideas. In particular, we will focus on the case of a long bar which is homogeneous throughout except in a small area where a defect in its Young's modulus is present. We will first demonstrate how the problem is equivalent to a spring-mass vibrational system, and then show how our imaging strategy makes use of the Laplace domain analytic map between the characteristics of the respective defect and the measurement data. More explicitly, we will utilize MATLAB (a platform for numerical computations) to collect synthetic data (computational alternative to real world measurements) for several scenarios with one defect of arbitrary location and stiffness. Subsequently, we will use this data along with our analytically developed map (between defect characteristics and measurements) to construct a residual function which, once optimized, will reveal the location and magnitude of the stiffness defect. 1 Introduction Maintaining the integrity of infrastructure, such as bridges, drainage, and aerospace components, relies on the ability to identify hidden defects using non-destructive methods. Current 1 6 Sep 2025 non-destructive evaluation (NDE) techniques employ various forms of vibrational analysis due to their cost-effectiveness and reliability (see for instance [2, 17, 12, 18]). Specifically, many methods employ one-dimensional spring-mass analogues and wave equation models. These solutions are notable for their intuitive physical representation, analytical versatility (resulting from the Laplace transform), and especially their straightforward implementation in various numerical software, such as MATLAB [15, 2]. In biomechanics and computer graphics, spring-mass networks can simulate soft-tissue deformation and cloth dynamics in real-time, sacrificing system continuity for computational speed and robustness [4]. Moreover, acoustic metamaterials use one-dimensional (1D) springmass chains to block specific sound frequencies, thus creating an acoustically manipulable system [11]. Even vehicle and vibrational suspension systems employ a discrete set of springs and masses to identify and isolate harmful fluctuations arising from dynamic loads [3]. An emerging area of NDE research focuses on treating internal cracks, or defects, as an "extra spring." When measuring certain values of such systems, the spring associated with the crack perturbs natural frequencies, thus shifting the poles of a system's Laplacedomain output [14]. Certain studies have already demonstrated that employing just two low-frequency measurements can be used to detect a single defect through a singular formula [2]. Recently, works use guided-wave Bayesian methods [16] and particle-swarm optimizers [7] to detect multiple and/or nonlinear defects. On the other hand, many methods either rely on continuous and precise Laplace-domain data, depend on closed-form inversion (valid for one or two defects), or struggle to generate an inverse map with boundary measurements and defect parameters. In reality, sensors may only monitor discrete time-series data, which is plagued by noise, and cracks can occur at arbitrary depths with varying magnitudes. As a result, the challenge of creating a datadriven imaging strategy that reliably recovers the location and size of a single defect from minimal, easily measurable data, while considering noise, remains unsolved. In this work, we study the inverse problem of locating and quantifying a localized stiffness defect in a one-dimensional elastic bar using only a single endpoint displacement trace. The forward model is the standard 1D longitudinal wave equation discretized by a lumped spring-mass chain (e.g., in our particular numerical setup, length L = 1 m, N = 100 nodes, ∆x = L/N). The inverse task is to recover the index j and the local spring constant k∗ of a single anomalous element from noisy measurements of the left-end displacement u0(t) produced by an impulsive initial velocity. Our numerical results show the inversion recovers the defect location exactly (to the discretization cell) and recovers k∗with relative errors ≲0.1% under realistic Gaussian measurement noise up to σ = 10-5 m. The discrete contrast k∗maps directly to a continuum Young's modulus in the defective element via Edef = k∗∆x/A; consequently results for k∗∈[0.1, 5] correspond to Edef/E0 ∈[0.1, 5] in the continuum model. Key features of our approach are: • A hybrid Laplace-domain forward solver that yields cheap, high-fidelity forward responses used to build a synthetic measurement map. • A robust inversion pipeline that combines a coarse per-index search with a local non2 linear refine (Gauss-Newton / constrained optimization) and simple smoothing regularization of the forward data. • An extensive validation campaign (Monte Carlo noise sweeps, contrast sweeps, and sensitivity to parameter mismatch) that quantifies the practical detection limits. This work builds on our previous works, [6], [5], and attempts to make a step towards addressing these issues in the context of the problem of determining the location and size of a defect in the Young's modulus of a long rod which is otherwise homogeneous. We will start by showing the quantitative equivalence between a metal homogeneous rod with a localized defect in its Young's modulus and a 1-dimensional homogeneous spring-mass system with a defective spring constant. Thus, an impulsive force at the left end of the rod will results in a longitudinal wave propagating along the rod and subsequent vibrations being measured at the left end point. Equivalently, the discrete system activated by an initial impulse applied at its first mass will generate vibrations through the entire system. Then, in this discrete setup, measurements will consist of the resulting vibrations of the first mass. As shown in [5], through a z-transform approach one can use the set of discrete time measurments of the first mass and obtain an approximation of the Laplace transform of the first mass vibrations (when perceived as a function of time). We then proceed towards building the analytic map relating the defective spring constant (defect size) and its location to the vibrations of the first mass. Then, in the Laplace domain, a residual functional is proposed measuring the discrepancy between this analytic map the synthetic measurements data set (i.e., the Laplace transform of the first mass vibrations). Finally, minimization of this residual functional determines both the location and magnitude of the stiffness defect. All of this is achieved with minimal, non-invasive data, as measurements are only taken from one end of the system. In the context of a metal rod, our results show how vibrational data resulting from an impulsive force at one end of the rod will indicate the position and level of abnormal Young's modulus at one point in the rod which in turn offers a prediction for the position and likelihood of a future crack occurring at that point. Figure 1: Discrete spring-mass chain model of a 1D bar with a single stiffness defect (highlighted spring). Each mass represents a segment of the bar and each spring its local stiffness ki. Adapted from [5]. Our proposed method shares similarities with previous works that evaluate vibrational NDE by modeling defects as local compliance using a 1D spring-mass system combined with Laplace-domain analysis [14, 2]. Similar to [16], our method inverts boundary wave measurements to determine the defect parameters-specifically, the defect's position along the system and its severity, indicated by the deviation of the spring constant from that of the 3 homogeneous system. Unlike a Bayesian posterior that updates with each new observation, our residual minimization function can be optimized to identify both the size and location of the defect using only minimal, discrete endpoint measurements. An analytic inverse-spectral method waqs employed in [15] and closed-form techniques were used in [14] to identify one or two defects from frequency data. In [5] the authors used the poles of the transfer function to detect one or two defects with left hand measurements in a system activated by an impulsive force. The method we propose can handle arbitrary defect positions in a single-defect scenario with discrete data and our numerics suggest it is extensible to the case of arbitrary number of defects. Additionally, unlike forward-only spectral-element methods (see [11, 9, 17, 10]) and frequency-shift reconstructions [3], our approach develops an explicit inverse mapping from endpoint vibrational data to infer the defective spring constant and its position. In [6] the authors build a similar analytic map relating the defect characteristics to the measurement data and identify one defect (size and location) or two defects if appriori information about their location or sizes is offered. In [5] the authors use the defect signature on the poles of the transfer function, although the method proposed there requires continuous spectral data. Both of these approaches extend the concept of using a spring-mass system within a Laplacedomain framework. Notably, while newer methods employ guided-wave Bayesian inference or particle-swarm analysis for defect identification [16, 7], our method is computationally cheap and maintains accuracy even with noisy and discrete data. 2 Motivation Our effort to develop a robust defect-imaging strategy is motivated by the practical problem of determining the size and location of defects in otherwise homogeneous hyperbolic linear systems. More explicitly, we consider two one dimensional wave propagation models, one supporting transverse waves (string with fixed ends) and the other longitudinal waves (long bar with clamped ends), each containing one defect of unknown size and location (defective localized string tension and respectively defective localized Young's modulus). First we show how each of these two models can be equivalently represented by a discrete spring-mass system where one defective spring constant. Then we will develop a new strategy to detect the position and size of the defective spring constant resulting, through the above equivalence, in a strategy to detect the defective location and size in the string tension or the elastic bar's Young's modulus, respectively. We will proceed first to describe the main paradigm in the context of a discrete spring and mass system. 2.1 Discrete Spring-Mass-Damper Chain We model our structure as a chain of N point masses m1, . . . , mN connected in series by linear springs and dashpots. Mass mj sits between springs of stiffness kj (to the left) and kj+1 (to the right), and dashpots of damping dj and dj+1 (see Figure 1). Denote its displacement by xj(t). Newton's second law at each interior mass j = 1, . . . , N gives (see [6, 5]: 4 m1 x′′ 1(t) = -k1x1 -d1x′ 1 + k2 x2 -x1 + γδ(t) mj x′′ j(t) = kj xj-1 -xj + djx′ j + kj+1 xj+1 -xj for j = 2, N -1 mn x′′ N(t) = -kN+1xN -dNx′ N -kN xN -xN-1 where δ(t) denotes the Dirac distribution centered at zero and where we assumed that the system is driven by an impulsive force acting on the first mass with prescribed amplitude γ. Rearranging, this can be written in the compact tridiagonal form: x′′ j = kj mj xj-1 -kj + kj+1 mj xj + kj+1 mj xj+1 -dj mj x′ j + fj, for j = 1, N. (1) where f1 = γ m1δ(t), f2 = ... = fN = 0 represents the impulsive forcing and where in (1) we assume the convention that (x0(t) = xN+1(t) = 0). This discrete spring-mass-damper model serves as our reference. In Section 2.2 and Section 2.3, we will demonstrate that applying centered finite differences to the continuous 1D wave PDEs (string and bar) yields exactly these same equations once we set mj = ρ(xj) ∆x, kj = local stiffness at xj ∆x , dj = local damping at xj ∆x , for j = 1, N. Next, although the discussion in Section 2.2 is similar to the discussion in Section 2.3 we chose to present each topic separately for the sake of clarity of the two types of situations we address. 2.2 Transverse String and Its Spring-Mass-Damper Analogue We begin with the most general 1D transverse-wave equation for a clamped string modeled as a one dimensional segment [0, L], and whose linear density ρ0(x), tension T(x), and damping μ(x) vary with position: ∂2u ∂t2 = 1 ρ0(x) ∂ ∂x T(x) ∂u ∂x -μ(x) ρ0(x) ∂u ∂t . (2) with boundary data given by u(0, t) = u(L, t) = 0 and activated by impulsive initial data. i.e. u(x, 0) = 0; u′(x, 0) = γδ(x) (where δ(x) is the Dirac distribution focused at the origin). Sampling at equally spaced points xi = i ∆x, for i = 0, ..., N + 1, (i.e., (N + 1)∆x = L), and considering ρi = ρ0(xi), Ti = T(xi), μi = μ(xi), and ui(t) ≈u(xi, t), after making use of a centered-difference in space we obtain ∂ ∂x T ux xi ≈Ti+1 ui+1 -(Ti + Ti+1) ui + Ti ui-1 (∆x)2 , i = 1, . . . , N. Denoting u′′ i = d2ui/dt2 and u′ i = dui/dt, the discrete update reads u′′ i = 1 ρi Ti+1ui+1 -(Ti + Ti+1)ui + Ti ui-1 (∆x)2 -μi ρi u′ i, i = 1, . . . , N. (3) 5 with the observation that the fixed end boundary conditions imply u0 = un+1 = 0. On the other hand, from (1) we have that the equation of motion for the ith mass xi(t) in a discrete chain of N masses m1, . . . , mN linked by springs k1, . . . , kN+1 and dashpots d1, . . . , dN, assuming that (x0(t) = xN+1(t) = 0), is x′′ i = ki mi xi-1 -ki + ki+1 mi xi + ki+1 mi xi+1 -di mi x′ i, i = 1, . . . , N. (4) Equations (3) and (4) coincide exactly under the identifications mi = ρi ∆x, ki = Ti ∆x, di = μi∆x Therefore, each string segment of length ∆x and density ρi becomes a discrete mass mi, each local tension Ti becomes a spring stiffness ki, and the continuous damping μi becomes the damping coefficient di. This one-to-one mapping showcases our defect-imaging strategy, which treats local changes in T(x) as defective springs in the spring and mass chain. In particular, a localized modification in the string tension Tj∗corresponds to a defective spring kj∗in the chain, enabling us to detect defect location and severity via spring-mass inversion. 2.3 Longitudinal Vibration in a Heterogeneous Bar and Its Spring-Mass Analogue Consider axial vibrations w(x, t) in a rod of length L whose density ρ(x), Young's modulus E(x), and damping μ(x) vary with position: ρ(x) ∂2w ∂t2 = ∂ ∂x E(x) ∂w ∂x -μ(x) ∂w ∂t . (5) where we assumed homogenuous Dirichlet boundary conditions w(0, t) = w(L, t) = 0 and the vibrations generated by an impulsive initial data, i.e. w(x, 0) = 0; w′(x, 0) = γδ(x) (where δ(x) is the Dirac distribution focused at the origin). We recall that as classically defined, the Young's modulus E(x) is the proportionality constant between stress and strain in linear elastic media (with stress defined as the internal force per unit area and strain as the measure of elongation (gradient of the displacement)). We mention that the Young's modulus usually encodes the level of stress accumulation in the media. We discretize (5) with xi = i ∆x, i = 0, . . . , N + 1, with ∆x = L/(N + 1), set ρi = ρ(xi), Ei = E(xi), μi = μ(xi), and write wi(t) ≈w(xi, t). A centered-difference approximation in x gives ∂ ∂x E wx xi ≈Ei+1 wi+1 -(Ei + Ei+1) wi + Ei wi-1 (∆x)2 . Hence, denoting w′′ i = d2wi/dt2, w′ i = dwi/dt, the finite-difference update is w′′ i = 1 ρi Ei+1 wi+1 -(Ei + Ei+1) wi + Ei wi-1 (∆x)2 -μi ρi w′ i, i = 1, . . . , N. (6) 6 Figure 2: Left: a continuous bar with a localized Young's-modulus defect at xj∗. Right: the equivalent spring-mass chain where the j∗-th spring has altered stiffness kj∗= Ej∗/∆x. with the observation that the fixed end boundary conditions imply E0 = En+1 = 0. On the other hand, from (1) we have that the equation of motion for the ith mass xi(t) in a discrete chain of N masses m1, . . . , mN linked by springs k1, . . . , kN+1 and dashpots d1, . . . , dN, assuming that (x0(t) = xN+1(t) = 0), is given by (4). Equations (4) and (6) coincide exactly under the identifications mi = ρi ∆x, ki = Ei ∆x, di = μi ∆x. Therefore, each string segment of length ∆x and density ρi becomes a discrete mass mi, each local Young's modulus Ei becomes a spring stiffness ki, and the continuous damping μi becomes the damping coefficient di. This one-to-one mapping showcases our defect-imaging strategy, which treats local changes in E(x) as defective springs in the spring and mass chain. In particular, a localized drop (or rise) in the bar's Young's modulus Ej∗corresponds to a defective spring kj∗in the chain (highlighted in Fig. 2), enabling us to detect defect location and severity via spring-mass inversion. 3 Mathematical Framework In this section, we consider the system in (1) first under the homogeneous assumption and then a system with one defective spring constant. Thus, for the homogeneous system, i.e. when mj = 1, dj = d, kj = k, driven by the impulsive force at the first mass, we have the following mathematical model [6, 5]: x′′ 1 + dx′ 1 + 2kx1 -kx2 = γδ(t) x′′ 2 + dx′ 2 + 2kx2 -kx1 -kx3 = 0 ... x′′ j + dx′ j + 2kxj -kxj-1 -kxj+1 = 0 ... x′′ N-1 + dx′ N-1 + 2kxN-1 -kxN-2 -kxN = 0 x′′ N + dx′ N + 2kxN -kxN-1 = 0 . (7) 7 Now, suppose that the all constants are equal to 1, except that of the spring at position j with a spring constant k∗̸= 1. Then the system becomes x′′ 1 + dx′ 1 + 2x1 -x2 = γδ(t) x′′ 2 + dx′ 2 + 2x2 -x1 -x3 = 0 ... x′′ j-1 + dx′ j-1 + (1 + k∗)xj-1 -kxj-2 -xj = 0 x′′ j + dx′ j + (1 + k∗)xj -k∗xj-1 -xj+1 = 0 ... x′′ N-1 + dx′ N-1 + 2xN-1 -xN-2 -xN = 0 x′′ N + dx′ N + 2xN -xN-1 = 0 . (8) Taking the Laplace transform of (8) plus some algebraic manipulation yields (s2 + ds + 2) ̃x1 - ̃x2 = γ (s2 + ds + 2) ̃x2 - ̃x1 - ̃x3 = 0 ... (s2 + ds + (1 + k∗)) ̃xj-1 - ̃xj-2 - ̃xj = 0 (s2 + ds + (1 + k∗)) ̃xj -k∗ ̃xj-1 - ̃xj+1 = 0 ... (s2 + ds + 2) ̃xN-1 - ̃xN-2 - ̃xN = 0 (s2 + ds + 2) ̃xN - ̃xN-1 = 0 . (9) Letting h = -(s2 + ds + 2) and performing some more algebraic manipulations allow us to write (9) in the matrix form AX = b (10) where the entry Am,p of the coefficient matrix A in the mth row and pth column is given by Am,p = h, if m = p but m ̸= j -1, j h + 1 -k∗, if m = p = j -1 or m = p = j 1, if |m -p| = 1 but m ̸= j 1, if m = j, p = j + 1 k∗, if m = j, p = j -1 0, elsewhere . (11) Meanwhile the right-hand side vector b is given by b = [-γ 0 ... 0]T and the unknown vector X consists of the responses ̃xi, i = 1, 2, .., N of each mass to the excitation force in the Laplace domain. 8 The coefficient matrix A can further be manipulated and written in the form A = Ah + P where Ah is the tridiagonal matrix obtained by taking the Laplace transform of the homogeneous (nondefective) system (7). More explicitly, the diagonal entries of Ah are all equal to h while it's off-diagonal entries are all 1. The matrix P then is a very sparse matrix whose inverse can easily be calculated. Using the result from [8] giving an explicit form for the inverse of Ah, we get the expression ̃x1(s) for the response of the system in the Laplace domain given by ̃x1(s) = -γR1,1 - R1,j-1(1 -k∗) + R1,i(k∗-1) ̃xj-1(s) -R1,j(1 -k∗) ̃xj(s), (12) where Rm,p = cosh[(N + 1 -|p -m|)λ] -cosh[(N + 1 -m -p)λ] 2 sinh λ sinh(N + 1)λ (13) and λ satisfies h = -2 cosh λ. The response of the masses immediately adjacent to the defective spring satisfy the system ( (1 -k∗)(Rj,j-1 -Rj,j) ̃xj-1 + 1 + Rj,j(1 -k∗) ̃xj = -γRj,1 1 + Rj-1,j-1(1 -k∗) + Rj-1,j(k∗-1) ̃xj-1 + Rj-1,j(1 -k∗) ̃xj = -γRj-1,1 . (14) Solving the preceding system of linear equations in the unknown ̃xj-1 and ̃xj gives ̃xj-1 = --γRj,1V + γRj-1,1G GU -FV ̃xj = -γRj,1U -γRj-1,1F GU -FV , (15) where F = (1-k∗)(Rj,j-1-Rj,j), G = 1+Rj,j(1-k∗), U = 1+Rj-1,j-1(1-k∗)+Rj-1,j(k∗-1) and V = Rj-1,j(1-k∗). Using (15) into (12) yields an expression for the response of the first system as a function of the parameter j and k∗representing the defect location and defect size, respectively. 4 Results and Discussions In this section, we present the defect characterization scheme that aims to find the location and size of the single defective spring in a system of arbitrary length. We start with a description of the basic optimization algorithm. We illustrate the effect of measurement noise to this approach necessitating the need for the introduction of what we call the σsmooth approach. This modification in the objective function, coupled with a Monte Carlo run, mitigates the effect of the noise to the system. The section ends with an illustration of a more analytic approach that can be an alternative for scenarios when the defect is located near the end of the system. 9 4.1 Optimization Algorithm Description The proposed method identifies the defective spring in the system by minimizing the discrepancy between the analytically computed response of the system to the excitation force δ applied to the first mass and noisy synthetic data that mimics measurements from physical setting. Let ̃x1,analytic be the analytically computed response, given in (12) ,of the first mass. Since we shall assume that our scheme has no access to the location and size of the defect, we shall assume that ̃x1,analytic is a function of the location j of the defect, the size k of the defect, and the Laplace variable s. Meanwhile, we let ̃x1,synthetic denote the synthetic data that mimic perfect real-life measurements in the Laplace domain. To simulate measurement uncertainty, Gaussian noise ε is added to the synthetic data. The measurement noise is quantified by its relative size with the synthetic data. The objective function to be minimized is f(j, k) = log Z 100 0 ̃x1,analytic(j, k, s) - ̃x1,synthetic(s) + ε(s) 2 ds , which is the logarithm of the squared L2-norm of the residual. The logarithm ensures that the optimizer (fmincon) avoids premature termination due to very small jumps in the objective function values between iterations. The introduction of the noise function ε makes this approach better reflect the conditions of practical defect detection, where the measurements are inevitably corrupted by noise. In practice we run the local optimizer (Matlab fmincon) independently for each candidate defect index j ∈{2, . . . , N} and pick the (j, k) pair with minimal residual; this exhaustive per-index refine reduces sensitivity to local minima in k and keeps the inversion computationally cheap (one forward solve per j). The optimizer uses gradient-based methods to search for a local minimum of a user-defined objective function, subject to linear and nonlinear constraints, as well as bound restrictions. Because it is a local solver, its success depends strongly on the smoothness of the objective landscape and the choice of initial guess. For each candidate j, fmincon is executed to estimate the optimal k that minimizes the residual. This process is repeated for all j ∈{2, . . . , N}, and the pair (j, k) that produces the minimum value of the objective function is selected as the location and estimated size of the defect. 4.2 Noise- free Simulation In this section, we show that the good performance of the proposed optimization procedure for noise- free data. Figure 3 shows the synthetic data generated with the following parameters: number of masses N = 100, damping coefficient d = 0.1, impulse intensity γ = 1, uniform spring constant k = 1, defect location jtrue = 40, and defect spring constant k∗= 1.3. In other words, the system contains a single defective spring in position jtrue = 40 with stiffness k∗= 1.3, while all other springs have k = 1. To identify the defect, the optimization routine fmincon was executed 99 times, once for each possible location of the defect j ∈{2, . . . , 100}. For each j, the optimizer solved 10 Figure 3: Graph of Synthetic Data for the system with parameters N = 100, d = 0.1, γ = 1, j = 40, and k∗= 1.3. for the optimal defect size k that minimizes the objective function f(j, k). Figure 4 shows the corresponding minimal residual values with respect to k, calculated as 10 f(j,k) for each possible value of j. Here, the x-axis corresponds to the possible defect locations j, while the y-axis represents the residual magnitude 10 f(j,k∗ j , where k∗ j is the minimizer of the objective function for each fixed value of j. The results indicate that the smallest residual occurs at j = 40, matching the true defect location. The corresponding computed defect stiffness is k∗ comp ≈1.2999999, which is in excellent agreement with the true value k∗= 1.3. The relative error is k∗ comp -k∗ k∗ ≈1.14 × 10-6, demonstrating high accuracy. This suggests that given a perfect set of measurement values of the system response, the proposed method yields highly reliable results. The next subsection shows the effect of the introduction of various noise levels to the measurements. 4.3 Effect of Gaussian Noise Modern laboratory Laser Doppler Vibrometers (LDV) achieve sub-nanometer to picometerclass displacement noise floors depending on bandwidth and surface reflectivity; assuming a conservative axial displacement noise of order 10-6 m for our experimental bandwidth is realistic (see [1], [13]). 11 Figure 4: Residual Function of Possible Defect Location. Figure 5 and Figure 6 show the effect of the Gaussian noise ε on the accuracy of defect detection. We again consider the defective system with the same parameters as the ones used in Subsection 4.2. Figure 3 plots the relative error in the predicted defect location as a function of the relative size of ε. For ε of magnitude 10-8, 10-7, and 10-6 relative to the synthetic data, the predicted defect location matches the true location exactly. However, when the noise level is at 10-5, the RelativeErrorinLocation increases to approximately 5%, and it continues to grow as the noise level increases. This suggests a noise level threshold near of 10-6 beyond which location detection degrades significantly. Figure 4 shows the RelativeErrorinDefectSize as a function of noise level. At noise level 10-5, the relative error in the estimated defect size is about 9.30%, whereas for ε = 10-6, the error is on the order of 10-6. This confirms that 10-6 serves as a practical noise level threshold for accurate detection. Notably, this noise level is still well within the capabilities of modern defect detection systems, which can achieve precision up to 10-12. In the next subsection, we present a modification of the basic optimization algorithm that mitigates the effect of the measurement noise. We shall see that this approach improves the noise level threshold by some orders. 4.4 σ-Smooth Approach To further improve robustness against noise, we propose a variant of the optimization procedure, which we refer to as the σ-smooth approach. The framework remains the same: we begin with the synthetic data ̃x1,synthetic, add Gaussian noise ε of prescribed size, and then minimize the residual between the analytic and measured responses. The key modification 12 Figure 5: Relative error in the estimate for the defect location as a function of noise level for the system with parameters N = 100, d = 0.1, γ = 1, j = 40, and k∗= 1.3. Figure 6: Relative error in the estimate for the defect size as a function of noise level for the system with parameters N = 100, d = 0.1, γ = 1, j = 40, and k∗= 1.3. 13 is the introduction of a random perturbation to the defect size parameter k in the objective function. In the original formulation, the optimizer solves directly for k. In the σ-smooth approach, however, the unknown k is replaced by k + δ, where δ is a random perturbation drawn from a normal distribution with mean zero and variance σ2 smooth, i.e., δ ∼N(0, σ2 smooth). Thus, instead of estimating a single deterministic value of k, the method effectively searches for an interval [k -∆, k +∆] of admissible values. This allows the solution to remain stable under noisy conditions, since small shifts in k within this interval still produce consistent results. To account for randomness in δ, we generate Nδ independent samples {δi}Nδ i=1 from the distribution above. For each δj, we evaluate the modified objective function f(j, k; δi) = log Z ̃x1,analytic(j, k + δi, s) - ̃x1,measured(s) + ε(s) 2 ds 1 2! . The σ-smooth objective is then defined as the average: F(j, k) = 1 Nδ Nδ X i=1 f(j, k; δi). Minimization is performed on F with respect to k, while j is treated as a discrete variable as before. When σsmooth = 0, the method reduces to the deterministic formulation. For σsmooth > 0, the perturbation introduces robustness by mitigating the effect of Gaussian noise in the measured data. In practice, we found that setting σsmooth = 10-4 and averaging over Nδ ≈50 samples is sufficient to stabilize the results. Note that Monte Carlo runs were also employed in the σ-smooth approach to account for the presence of noise and the stochastic nature of the defect detection problem. By repeating the optimization procedure over multiple randomized noise realizations, we were able to obtain statistically reliable estimates of the residual functional and the optimizer's performance. This averaging process reduces sensitivity to a single noise instance and highlights the overall trend of defect detectability across different locations. The final estimate for the defect location and defect size are taken to be the median of the respective estimates from each Monte Carlo run. 4.5 A Simulation Employing the σ-Smooth Approach To assess the performance of the σ-smooth approach, we compared its results against the deterministic method under noisy conditions. In particular, we tested the method by considering the defective system with the same parameters as in Subsection 4.2 but with noise of size 5 × 10-5. 14 Without regularization (i.e., using the deterministic approach), the relative error in defect location was approximately 2.5%, while the RelativeErrorinDefectSize was about 5.75%. These results indicate noticeable degradation in accuracy under noise. In contrast, when the σ-smooth approach with Nδ = 50 was applied to 100 Monte Carlo runs with each run having a different noise function, the performance improved significantly. Recall that the final estimate for the defect location and defect size were taken to be the median of the respective estimates from all the Monte Carlo runs. The relative error in defect location decreased from 2.5% to 0%, meaning the defect was identified exactly. Similarly, the RelativeErrorinDefectSize dropped from 5.75% to 1.8 × 10-4, demonstrating several orders of magnitude improvement. These results suggest that the proposed σ-smooth approach is effective in mitigating the influence of noise. Figure 7 illustrates the defect location identified in each simulation run, while Figure 8 shows the corresponding results for the defect size. Figure 7: EstimatesfortheDefectLocation (Monte Carlo Runs). 4.6 Multiple Simulation Runs In this subsection, we investigate the limits of the robustness of σ-smooth approach. We ruse the σ-smooth approach with Nδ = 50 draws across 100 Monte Carlo runs for defective systems with varying defect location and defect sizes. The common parameters among these systems are the uniform spring constant k = 1, damping coefficient d = 0.1 and number of masses N = 100. In all the experiments, the noise level was set to 5 × 10-4. 15 Figure 8: EstimatesfortheDefectSize (Monte Carlo Runs). 4.6.1 Defect Detection for Fixed Defect Size and Varying Location First, we employ the σ-smooth approach to characterize the defect in systems with a defective spring of stiffness k∗= 1.30 but varying locations j across the chain. Figure 9 shows the estimated defect size as a function of the true defect location. For most defect locations, the optimizer is able to recover the defect size accurately, yielding values close to k∗= 1.30. However, beginning around j = 79, the optimizer has increasing difficulty estimating the defect size, resulting in unstable or significantly overestimated values. This observation is verified in Figure 10 where the relative error in the defect size estimates is shown as a function of the true defect location. We observe that up to j = 78, the relative error remains below the 5% threshold. Beyond this point, however, the relative error grows rapidly, indicating a degradation in the scheme's accuracy when the defect is located near the end of the system. Figure 11 shows the estimated defect location versus the true defect location. The optimizer performs very well in this task, producing estimates that closely follow the diagonal line (perfect agreement). The corresponding relative error, shown in Figure 12, confirms this: the location is always predicted within at most 2.33% error, which corresponds to a maximum of two positions away from the true defect location. In summary, for a fixed defect size k∗= 1.30, the optimizer reliably identifies the defect location across the entire domain, but struggles to estimate the defect size accurately once the defect is positioned near the end of the system. This makes sense from the physical point of view as the exciting force dissipates fast as it travels across the chain. Hence, the effect of the defect to the vibrations in the first mass becomes less and less as the defect's location becomes close to the other end. One way to handle such cases is to incorporate 16 Figure 9: Estimate for the Defect Size vs Actual Defect Location Figure 10: Relative Error in the Defect Size Estimate vs Actual Defect Location Figure 11: Estimate for the Defect Location vs Actual Defect Location Figure 12: Relative Error in the Defect Location Estimate vs Actual Defect Location measurements from the last mass in the optimization. This will be explored in upcoming studies. 4.6.2 Defect Detection for Fixed Location and Varying Defect Size Now, we fix the defect location at j = 40 and vary the defect size k∗. Figure 13 shows the estimated defect size as a function of the true defect size. The estimates align almost perfectly with the diagonal, indicating that the scheme is highly successful in recovering the true defect size across the tested range. This is confirmed in Figure 14 which shows the corresponding relative error in the defect size estimate. It shows that the maximum error is just around 2.77%. Figure 15 shows the estimated defect location as a function of the true defect size. Here, 17 Figure 13: Estimate for the Defect Size vs Actual Defect Location Figure 14: Relative Error in the Defect Size Estimate vs Actual Defect Location the estimates remain constant at the true defect location j = 40. This is further confirmed in Figure 16, where the RelativeErrorinLocation estimates is exactly zero for all defect sizes. Figure 15: Estimate for the Defect Location vs Actual Defect Location Figure 16: Relative Error in the Defect Location Estimate vs Actual Defect Location These multi-case simulations show that the defect characterization scheme utilizing several Monte Carlo runs of the σ-smooth approach works well with most systems of various defect locations and defect sizes even when the measurement data are tainted with noise of size 5 × 10-4. The exemptions are the cases when the defect is located near the end of the system. To address these cases, we mentioned a possible extension of the current approach which incorporates the measurements from the other end of the system. Another approach, albeit more mathematically involved and computationally expensive is to simply plot the residual as symbolic functions of j and k. This approach is illustrated in the next subsection. 18 4.7 An Analytic Approach To address the cases when the defect is located near the end of the system, we employ a purely analytic approach. This time, we treated all quantities as symbolic functions and directly evaluated the residual f(j, k) = log Z 100 0 ex1,analytic(j, k, s) - ex1,synthetic(s) + ε(s) 2 ds , as a function of the possible defect locations j and defect sizes k. For these experiment, we introduced a noise of magnitude 5×10-4. By evaluating f(j, k) across a range of defect locations and defect magnitudes and plotting the results as a threedimensional surface, we can visually identify the location and size of the defect. First, we consider the case of a system with the following parameters: N = 100, d = 0.1, k = 1, k∗= 1.3, j∗= 85. The residual 10f(j,k) is plotted in Figure 17. Here, we see spikes in the surface, indicating the extreme values of the residuals. Two dimensional slices of this surface are shown in Figure 18 and Figure 19, indicating that the global minima indeed occur at j = 85 and k = 1.3. Figure 17: 3D graph of Residual as a function of location and defect size in the system with N = 100, d = 0.1, k = 1, k∗= 1.3, j = 85. These plots also indicate why the optimization routine employing fmincon is having some difficulties in characterizing the defect. The frequent oscillations of the residual function creates multiple local minima. The MATLAB fmincon, being a local solver might had been trapped in one of these local minima and hence, converged to an inaccurate estimate for the defect location and/ or defect size. 19 Figure 18: 2D slice of residual function shown in Figure 17 along the possible defect locations. Figure 19: 2D slice of residual function shown in Figure 17 along the possible defect sizes. The next simulation shows a very similar scenario, with N = 100, d = 0.1, k = 1, k∗= 1.1, j∗= 90. Here we have a smaller defect located further in the system. Again, we see in Figure 20 the 3D rendering of the residual as a function of the possible defect locations and defect sizes. Multiple spikes are again observed, showing the peaks and valleys of the residual. Figure 20: 3D graph of Residual as a function of location and defect size for the system with N = 100, d = 0.1, k = 1, k∗= 1.1, j∗= 90. 20 The 2D slices of the surface in Figure 20 are shown in Figure 21 and Figure 22. These slices indicate that this analytic approach predicts the location and size of the defect quite accurately. Figure 21: 2D slice of residual function shown in Figure 20 along the possible defect locations. Figure 22: 2D slice of residual function shown in Figure 20 along the possible defect sizes. These cases show an alternative way of characterizing the defect. This is extremely useful especially for cases when the defect is located further down the system. However, this approach is mathematically tedious and computationally expensive as all variables are treated to be symbolic. 5 Conclusions In this paper we studied the problem of imaging the location and size of a one defect in the Young's modulus for a long metal bar of length L and cross-sectional area A = 1. The model was idealized as a 1D bar and was shown equivalent to a discrete spring-mass system. All computations are performed on a nondimensional 1D chain of length L = 1 with N = 100 cells and ∆x = 1/N. For convenience we set the nondimensional cross-section A = 1 and the nominal mass and stiffness per cell to mi = 1, k = 1. To map results to physical units, choose a physical cross-section Aphys, density ρ and baseline Young's modulus E0. The physical cell length is ∆xphys = Lphys/N, and mphys i = ρAphys∆xphys, kphys i = EiAphys ∆xphys . Hence the conversion factors are mref = ρAphys∆xphys and kref = E0Aphys/∆xphys, and physical time/frequency follow tphys = t rmref kref , ωphys = ω r kref mref , 21 where t, ω denote the time and frequency, respectively and p mref/kref = ∆xphys/c with c = p E0/ρ, so a nondimensional time unit equals the travel time across one cell. We proposed a robust algorithm for characterizing one defect in a spring mass system under the action of an impulsive force. In our particular numerical example the setup we used was a L = 1 discretized with N = 100 finite difference points. This will resolve only modes up to the model dependent cut-off frequency. In fact, with spatial spacing h = ∆x and wave speed c = p E0/ρ the model reliably resolves frequencies up to fmax ≈c 4h = c N 4L . (In practice one must choose N so that fmax exceeds the highest physical frequency of interest; the factor 4 is a conservative margin to limit numerical dispersion.) This type of forcing is a good approximation of a regular band limited forcing since the high order modes are carrying very little energy and contribute insignificantly to the measurement map and thus can be neglected. In fact in our numerical setup for the L = 1, N = 100 we can resolve approximately 50 modes. The unresolved modes have amplitude less than O(1/N) × δ, i.e., 1% of the main vibrational amplitude. We also tested our systems for robustness against Gaussian noise of size ε ∈(10-8, 10-2). In practice, one should assume a noise level ε ≈10-2 · δ where δ denotes the typical displacement amplitude in the system considered (e.g., for example in our numerical setup of L = 1 bar with N = 100 masses, δ ≈10-4). For this level of noise we showed that our method remains robust. The proposed approach minimizes the discrepancy between the analytically computed response map, i.e., vibrations of the first mass as a function of the defect location and size and the synthetic data map that mimics measurements in a physical setting, i.e., the vibrations of the first mass when the system with one defect is activated by the impulsive force at the first mass. The approach entails a minimization procedure that seems to be sensitive to measurement noise. To mitigate the effect of noise, a smoothening technique, referred to here as the σsmooth-approach is employed to modify the objective functional. This, coupled with multiple Monte Carlo runs proved to make this approach a couple of orders less sensitive to measurement noise. The proposed scheme works well against Gaussian noise and perfectly characterizes defects sie and location for defects located not to close near the right end of the bar. The proposed optimization strategy appears to have some difficulties in characterizing defects that occur near the other end point of the system. This may be due to two factors, namely the quick dissipation of the energy in the system dues to the assumed damping and the highly oscillating behavior of the residual functional. We proposed an analytic approach for such cases. Numerical results indicate that this approach works in detecting the exact size and location of defects located near the right end of the bar but it tends to be computationally expensive. An alternative approach, one that incorporates in the objective functional measurements from the last mass in the system seems much more elegant and it will be considered in forthcoming studies. 22 References [1] Ofv-5000 modular vibrometer, product brochure. https://www.polytecstore.fr/ polytec_images/documents/oms/pb_ofv-5000.pdf. Accessed: 2025-09-01. [2] Peter Cawley and R. Adams. The location of defects in structures from measurements of natural frequencies. Journal of Strain Analysis for Engineering Design - J Strain Anal Eng Design, 14:49-57, 04 1979. [3] Michele Dilena and Antonino Morassi. Structural health monitoring of rods based on natural frequency and antiresonant frequency measurements. Structural Health Monitoring, 8(2):149-173, 2009. [4] Andrew D. Dimarogonas. Vibration of cracked structures: A state of the art review. Engineering Fracture Mechanics, 55(5):831-857, 1996. [5] Neil Jerome Egarguin, Larry Guan, and Daniel Onofrei. Defect characterization in a 1d spring mass system using the laplace and z-transforms. Journal of Vibration Engineering & Technologies, 10:1121- 1134, 02 2022. [6] Neil Jerome Egarguin, Taoufik Meklachi, Daniel Onofrei, and Noam Harari-Arnold. Vibration suppression and defect detection schemes in 1d linear spring-mass systems. Journal of Vibration Engineering & Technologies, 8:489-503, 2020. [7] Horea Grebla, Vasile Rusu, Gilbert-Rainer Gillich, and Thu Bui. Assessment of cracks in beams using changes in the measured frequencies and particle swarm optimization. Vibroengineering Procedia, 51:29-34, 10 2023. [8] Guo-Yong Hu and Robert F. O'Connell. Analytical inversion of symmetric tridiagonal matrices. Journal of Physics A: Mathematical and General, 29(7):1511, apr 1996. [9] Marek Krawczuk, Joanna Grabowska, and Magdalena Palacz. Longitudinal wave propagation. part ii-analysis of crack influence. Journal of Sound and Vibration, 295(3):479490, 2006. [10] Fushou Liu, Aobo Liu, Libin Wang and Yang Wei. Accurate modeling and wave propagation analysis of cracked slender structural members by the spectral element method. Structural Control and Health Monitoring, 2023(1):5569434, 2023. [11] Magdalena Palacz and Marek Krawczuk. Analysis of longitudinal wave propagation in a cracked rod by the spectral element method. Computers & Structures, 80(24):18091816, 2002. [12] Kunhong Peng, Yi Zhang, Xian Xu, Jinsong Han and Yaozhi Luo. Crack detection of threaded steel rods based on ultrasonic guided waves Sensors, 22(18), 2022. 23 [13] Steve J. Rothberg, Matthew S. Allen, Paolo Castellini, Dario Di Maio, Joris J.J. Dirckx, David John Ewins, Ben J. Halkon, Pieter Muyshondt, Nicola Paone, Teresa Ryan, Heinrich Steger, E.P. Tomasini, Steve J.A. Vanlanduit, and Joseph F. Vignola. An international review of laser doppler vibrometry: Making light work of vibration measurement. Optics and Lasers in Engineering, 99:11-22, 2017. Laser Doppler vibrometry. [14] Lourdes Rubio, Jos ́e Fern ́andez-S ́aez, and Antonino Morassi. Identification of two cracks in a rod by minimal resonant and antiresonant frequency data. Mechanical Systems and Signal Processing, 60-61:1-13, 2015. [15] Efim I. Shifrin. Inverse spectral problem for a non-uniform rod with multiple cracks. Mechanical Systems and Signal Processing, 96:348-365, 2017. [16] Zijie Zeng, Min Gao, Ching Tai Ng, and Abdul Hamid Sheikh. Guided wave-based characterisation of cracks in pipes utilising approximate bayesian computation. ThinWalled Structures, 192:111138, 2023. [17] Shuaifang Zhang, Wei Shen, Dongsheng Li, Xiwen Zhang and Baiyu Chen. Nondestructive ultrasonic testing in rod structure with a novel numerical Laplace based wavelet finite element method. Latin American Journal of Solids and Structures, 15, 2018. [18] Y. Zou, L. Tong and Grant Steven. Vibration-based model-dependent damage (delamination) identification and health monitoring for composite structures - a review. Journal of Sound and Vibration, 230: 357-378, 2000. 24
|
2509.16217
|
1
An objective criterion for evaluating new physical theories
Yefim Bakman, bakmanyef@gmail.com
August 14, 2025
Abstract: Currently, the value of a new physical theory is determined by
whether it has been published in a prestigious journal. Yet, to publish an
article in a prestigious journal, one must undergo a rigorous process of
independent subjective peer review, with the success of the theory
depending on the subjective opinion of reviewers who have proven
themselves in developing old theories of physics. As a result, a closed
system (gatekeeping) has been created that does not allow the
penetration of new theories, especially those that call into question the
established paradigm.
A subjective approach is not uniquely possible. Here, we propose an
objective criterion for evaluating new physical theories and illustrate this
criterion through an example.
1. Introduction
To construct a criterion for evaluating new physical theories, we use four
indicators, as presented in Table 1.
Table 1.
Number Indicator
Description
1
Entities
The number of postulates, laws, principles,
and unique concepts (such as the fifth
dimension, dark photons, strings, etc.) used
in the work.
2
Solved
problems
The number of problems that the theory
explains by concrete physical mechanisms,
serving as an indicator of explanatory power.
2
3
New
problems
The number of new contradictions or
questions created by the theory under
discussion.
4
Integrity
The number of clear cause–effect relations
between elements of the theory, representing
the opposite of fragmentation.
There is a broad consensus that Occam's razor should be included as a
criterion for evaluating new theories in physics. Hence, the first indicator
is the number of entities used in a theory. This term refers to postulates,
laws, principles, and new concepts used in the work, such as the fifth
dimension, dark photons, strings, etc. A reduction in the number of
entities involved signals that a given theory is approaching the truth.
The above four indicators provide a simple report of a theory. A graphic
diagram of the logical connections between the elements of the theory
will be more meaningful and visual, as shown as the example in the next
section.
2. An example of applying the proposed criterion
As an example, we use a new theory denoted as “A New Physical
Paradigm,” published in 2020 [1] [2]. This paradigm touches on various
topics, but we will limit ourselves here to the basic questions of gravity.
Fig. 1 shows a logical diagram of the new paradigm, which is based on a
fundamental medium (in Tesla’s words, a “primary substance” [3]). The
substance with a non-uniform density is a gravitational field.
In this paradigm, elementary particles are vortices of the same primary
substance. Thus, gravity and quantum are closely related in the new
paradigm.
From the four entities follow solutions to four problems: gravitational
acceleration, dark matter, dark energy, and action at a distance.
3
Fig. 1. Logical scheme of the new physical paradigm in relation
to the topic of gravity. Entities (four) are shown in black
frames, solved problems (four) are indicated in green frames,
and cause–effect relations (eight) are presented by green
arrows. New problems (zero in this case) would be indicated in
red frames.
In addition to the universal medium and vortex particles, the new
paradigm is based on the following insight:
The speed of the vortex wave depends on the density
of the universal medium.
As a consequence, in a heterogeneous medium, vortex particles
accelerate toward a higher density of the medium without the
participation of contact forces, driven only by the difference in the
velocities of the vortex wave [4].
The Earth's gravitational field is a consequence of the planet's atoms
and molecules losing some of their mass or completely disintegrating
into primary substance. Thus, a configuration is constantly being
gravitational
acceleration
action at the
distance
dark energy
dark matter
gravitational field
is a non-uniform
primary substance
elementary particles
are vortices of the
primary substance
speed of the vortex
wave depends on the
medium density
primary
substance
4
recreated in which the density of primary substance near the Earth's
surface is higher than at a distance.
Table 2 shows scores for the four indicators of the new paradigm in
relation to the topic of gravitation.
Table 2. Scores of objective indicators for the new paradigm with
respect to gravitation.
Indicator
Score
entities
4
resolved problems
4
new problems
0
integrity
8
For comparison, the dominant paradigm incorporates numerous entities
to explain the structure of hadrons: six types of quarks and eight types
of gluons to hold quarks together. Yet, to explain the phenomena of
gravity and dark matter/energy, it is not possible to select entities,
despite the freedom of their choice.
3. Conclusion
A graphical representation depicting the structure of a theory can show
relationships between entities, problems, and mechanisms. We propose
that the scientific community adopt this format as a standard
supplement to theoretical articles in order to provide editors and readers
with a visual representation of the theory structure.
References
5
[1] Y. Bakman, "A New Physical Paradigm," 23 July 2020. [Online].
Available: https://vixra.org/pdf/2008.0038v1.pdf. [Accessed 23
October 2024].
[2] Y. Bakman, "Comparative Analysis of the New Physical Paradigm,"
Journal of Space Exploration, vol. 13, no. 9, 2024.
[3] N. Tesla, "Mr. Tesla's vision.; How the electrician lamp of Aladdin
may construct new worlds.," New York Times, 21 April 1908.
[4] Y. Bakman, "How a gravitational field accelerates particles and
atoms.," vixra, 9 March 2021. [Online]. Available:
https://vixra.org/abs/2103.0050. [Accessed 20 May 2021].
|
1 An objective criterion for evaluating new physical theories Yefim Bakman, August 14, 2025 Abstract: Currently, the value of a new physical theory is determined by whether it has been published in a prestigious journal. Yet, to publish an article in a prestigious journal, one must undergo a rigorous process of independent subjective peer review, with the success of the theory depending on the subjective opinion of reviewers who have proven themselves in developing old theories of physics. As a result, a closed system (gatekeeping) has been created that does not allow the penetration of new theories, especially those that call into question the established paradigm. A subjective approach is not uniquely possible. Here, we propose an objective criterion for evaluating new physical theories and illustrate this criterion through an example. 1. Introduction To construct a criterion for evaluating new physical theories, we use four indicators, as presented in Table 1. Table 1. Number Indicator Description 1 Entities The number of postulates, laws, principles, and unique concepts (such as the fifth dimension, dark photons, strings, etc.) used in the work. 2 Solved problems The number of problems that the theory explains by concrete physical mechanisms, serving as an indicator of explanatory power. 2 3 New problems The number of new contradictions or questions created by the theory under discussion. 4 Integrity The number of clear cause-effect relations between elements of the theory, representing the opposite of fragmentation. There is a broad consensus that Occam's razor should be included as a criterion for evaluating new theories in physics. Hence, the first indicator is the number of entities used in a theory. This term refers to postulates, laws, principles, and new concepts used in the work, such as the fifth dimension, dark photons, strings, etc. A reduction in the number of entities involved signals that a given theory is approaching the truth. The above four indicators provide a simple report of a theory. A graphic diagram of the logical connections between the elements of the theory will be more meaningful and visual, as shown as the example in the next section. 2. An example of applying the proposed criterion As an example, we use a new theory denoted as "A New Physical Paradigm," published in 2020 [1] [2]. This paradigm touches on various topics, but we will limit ourselves here to the basic questions of gravity. Fig. 1 shows a logical diagram of the new paradigm, which is based on a fundamental medium (in Tesla's words, a "primary substance" [3]). The substance with a non-uniform density is a gravitational field. In this paradigm, elementary particles are vortices of the same primary substance. Thus, gravity and quantum are closely related in the new paradigm. From the four entities follow solutions to four problems: gravitational acceleration, dark matter, dark energy, and action at a distance. 3 Fig. 1. Logical scheme of the new physical paradigm in relation to the topic of gravity. Entities (four) are shown in black frames, solved problems (four) are indicated in green frames, and cause-effect relations (eight) are presented by green arrows. New problems (zero in this case) would be indicated in red frames. In addition to the universal medium and vortex particles, the new paradigm is based on the following insight: The speed of the vortex wave depends on the density of the universal medium. As a consequence, in a heterogeneous medium, vortex particles accelerate toward a higher density of the medium without the participation of contact forces, driven only by the difference in the velocities of the vortex wave [4]. The Earth's gravitational field is a consequence of the planet's atoms and molecules losing some of their mass or completely disintegrating into primary substance. Thus, a configuration is constantly being gravitational acceleration action at the distance dark energy dark matter gravitational field is a non-uniform primary substance elementary particles are vortices of the primary substance speed of the vortex wave depends on the medium density primary substance 4 recreated in which the density of primary substance near the Earth's surface is higher than at a distance. Table 2 shows scores for the four indicators of the new paradigm in relation to the topic of gravitation. Table 2. Scores of objective indicators for the new paradigm with respect to gravitation. Indicator Score entities 4 resolved problems 4 new problems 0 integrity 8 For comparison, the dominant paradigm incorporates numerous entities to explain the structure of hadrons: six types of quarks and eight types of gluons to hold quarks together. Yet, to explain the phenomena of gravity and dark matter/energy, it is not possible to select entities, despite the freedom of their choice. 3. Conclusion A graphical representation depicting the structure of a theory can show relationships between entities, problems, and mechanisms. We propose that the scientific community adopt this format as a standard supplement to theoretical articles in order to provide editors and readers with a visual representation of the theory structure. References 5 [1] Y. Bakman, "A New Physical Paradigm," 23 July 2020. [Online]. Available: https://vixra.org/pdf/2008.0038v1.pdf. [Accessed 23 October 2024]. [2] Y. Bakman, "Comparative Analysis of the New Physical Paradigm," Journal of Space Exploration, vol. 13, no. 9, 2024. [3] N. Tesla, "Mr. Tesla's vision.; How the electrician lamp of Aladdin may construct new worlds.," New York Times, 21 April 1908. [4] Y. Bakman, "How a gravitational field accelerates particles and atoms.," vixra, 9 March 2021. [Online]. Available: https://vixra.org/abs/2103.0050. [Accessed 20 May 2021].
|
2509.16269
|
Preface
1
myPhysmatics
Connected Mathematical Models in Physics
Sergey Pankratov
2
Preface
Preface
“If you don’t know where you are going, any road will get you there.” Lewis Carroll
For many years, I have conducted a sort of scientific diary in which I put facts
and results that I liked at the moment or found interesting and useful. My
principal motivation was to learn how clever people construct beautiful
mathematical models out of physical garbage - incongruent sets of data,
apparently disjoined experimental facts and other multiple raw material. Having
looked in hindsight through these notes, I could ascertain that nearly everything
in them revolved around a bunch of more or less standard ideas, facts, techniques,
and approaches which may be regarded as indispensable elements of the
standard education of a physicist. Such elements can be networked to provide an
apparently cohesive picture of a fair physical education and a mathematical
culture, sufficient for a physicist. At least satisfactory to many of us.
The present book stems from that old diary of mine and to a certain extent
retains its disparity. However, the surrounding world seems to be united, which
manifests itself in the fact that new horizons are inevitably unfolded and
unexpected links between apparently disjoined, isolated models, most of them
being physics-based mathematical models, are continuously opened when one
attempts to consider various aspects of stand-alone and ad hoc constructs. As a
result, the book was not limited to physics alone, but also contains some
rudimentary information on the situation in neighboring disciplines. The
existence of hidden relationships between seemingly different domains has
always been to many people a wonderful and astounding quality of physical and
mathematical sciences. One can observe that the art of producing a good work in
physics is essentially the art of revealing connections between seemingly
disparate manifestations. The same applies to mathematics, sometimes in even
larger extent. So, in my diary, I tried to pay attention to the links, analogies and
similarities inside the mosaic of fragmentary mathematical models of physics. I
rather register and describe those links than offer a rational explanation for the
very phenomenon of linking. To illustrate such linking, I can bring a rather
obvious example. Although general relativity is regarded by many people as an
autonomous subscience, a discipline which is separate from the whole body of
physics, I think that is erroneous. The study of general relativity helps to
understand classical mechanics much better than while studying mechanics alone
although general relativity and classical mechanics traditionally belong to
different physical courses and are practically never taught together.
As one more example, one can recall Bohr’s atom, which was basically an ad
hoc model, but later this naive model has profusely stimulated the emergence of
more sophisticated mathematical (quantum) approaches and theories. One can
remember that during the conception of quantum mechanics, the model of Bohr’s
Preface
3
atom, quite unexpectedly, was connected with such distant subjects as the black-
body radiation - also originally an ad hoc mathematical model, with the theory of
adiabatic invariants, which is very close to the modern theory of dynamical
systems, and with the spectral theory of linear operators. This and some other
cross-disciplinary links will be described in the book where appropriate. For
myself, long time ago, I called such unification of seemingly disjoint results a
“physmatical effect” - pointing to the fact that multitudes of fine-grained physics-
based mathematical models (PBMM) become inextricably linked and networked,
and I called the corresponding network a physmatical one. Mathematics serves as
a key code gluing together isolated physical models viewed as nodes of this
network. Widely disparate and independently developed, subjects from physics
and mathematics converge unexpectedly to become unified ingredients of a single
theme. This is similar to a polyphonic construction of music, when a diversity of
tunes combine to produce an interesting melody.
The corresponding discipline that is focused on ties and lateral associations
between mathematical models in physics may, in this terminology, be called
“physmatics” - in distinction to “mathphysics”, or mathematical physics, which
has traditionally been a well-structured discipline centered around partial
differential equations. A considerable portion of mathematical physics is devoted
to attempts of rigorous scrutiny of a selected bunch of mathematical models that
involve mathematical aspects of the proof of these models’ freedom from
inconsistencies. In particular, proofs of existence and uniqueness of solutions,
analysis of the properties of the chosen system of equations, construction of exact
or self-similar solutions plays the major part in mathematical physics; nowadays
a trend towards axiomatic constructs is more and more obvious. Although
extremely important and comprising an indispensable element of any physicist’s
background, mathematical physics appears to be a realm of mathematicians,
rather than physicists.
In contrast, “physmatics” treats mathematics as a service subject, a set of
protocols carrying universal research techniques between physical models, the
latter may be regarded as networked “nodes”. Scientific journals (and, lately,
servers) play the role of hubs, routers and, sometimes, switches in this network
redirecting the research efforts. Here “switch” is a component of a cognitive
network that connects two or more lines of thinking to complete a task or a model.
Physmatics is a more or less a closed, “private” network, in the sense that it does
not so far include social sciences or medicine - these possess their own cognitive
networks. Focusing on common features provides exciting observations and
deepens understanding of such models, like accidental recognition of common
acquaintances stimulates mutual interest and fastens friendship ties. I think that
people are better inclined to recognition than to hard-core ab ovo learning. The
more unexpected the liaison, the deeper is the appreciation.
I concede of course that writing just about everything is a firm symptom of
unprofessionalism: physicists are commonly encouraged to generate very
4
Preface
focused articles. Today, however, in contrast with the 19th and the beginning of
the 20th century, probably most fruitful period in physics, physicists too often
explore minute problems. But, honestly speaking, papers such as - this is my
fantasy, of course – “On the third correction to the second off-diagonal matrix
element in the quasibound X-like to Γ-like states transition within the eight-band
approximation in III-V heterojunctions” leave me unmoved, although such partial
results form the very texture of physics and their authors should be utterly
respected. Furthermore, the papers of the “Influence of Singing on Seeing”-type1
serve as a publication multiplier, and the number of publications is, in practice,
considered as the main indicator of success in science (although quite the
opposite may be true). One can rapidly produce dissertations, which actually
happens. In relation to this, I often recall J. W. Gibbs who may be considered an
antipode to the current breed of prolific scientists. Since in 1970s I translated
thermodynamical works by Gibbs into Russian, I had to study his scientific legacy
and found out that he, being really a great figure, was quite reluctant to publishing
his papers, at least in a rush. The American “publish or perish” approach seems to
me a noisy impasse, more suitable to journalists than to scientists. It would be a
mere truism to observe that rapid multiplication of scientific (and near-scientific)
journals and other publications makes the entire physmatical network
increasingly complex and noisy. This multiplication process is analogous to the
unfavorable development of the Internet which can eventually result in its
catastrophic transformation (e.g., split).
Furthermore, physmatics is constructed in such a way that one can, in
principle, starting from each particular result (a physmatical node, in this
terminology), reach any other point, even conceptually quite distant, say, one can
travel from nanotechnology to topology. However, the time needed to drift from
one physmatical concept (node) to another may well exceed the duration of
human life. And what to do if so many things in physics and mathematics are
equally interesting that it is one’s desire to grasp them all? Then the networking
agenda, with many routes between the concepts of interest would be the only
solution. After all, intelligence is primarily the skill to interrelate seemingly
different, heterogeneous things.
The great Russian theoretical physicist L. D. Landau considered theoretical
physics a “small science”, he used to say that a physicist can understand the whole
of it. On the contrary, experimental physics, according to Landau, is a “big
science”, a single person is unable to know all its parts [51]. The famous “Course
of Theoretical Physics” by L. D. Landau and E. M. Lifshitz was based on this idea -
a physicist can understand the whole physics, no matter how far from each other
its specific models might appear to be. But that was long ago. Today,
unfortunately, physics and mathematics more and more remind us of the Tower
of Babel, and the only efficient method to cope with such an overwhelming flood
1 In Russian, “Vliyanie peniya na zrenie”.
Preface
5
of information is to employ the networking approach (similar to hypertext or
hyper-reference used in modern online encyclopedia). To a physicist trained in
the late 20th century, theoretical physics constructed around the least action
principle serves as a backbone for the entire physmatical network. To me
personally, “physmatics” reminds me of a hybrid language2 capable of activating
the nodes of networked physical and mathematical knowledge. Network-oriented
languages aimed at the analysis and depiction of data flows and network
topologies have been known in networking technology for some time. Such
languages enable us to see structures and links that may remain hidden at first
glance. Creation of network-oriented languages is usually an interdisciplinary
undertaking that merges network analysis, complexity theory, graph theory,
communication theory and other disciplines elucidating the laws according to
which new ideas, results, technologies, etc. are propagated. Networks are
ubiquitous: just think of everyday networks like public transportation,
communication, utilities, electrical engineering, etc. Physmatics represents a class
of other, invisible networks that become observable when their objects are seen
in relationship to each other. This all sounds unnecessarily abstract, but can be
made transparent by simple examples. In the oscillator example, photons,
phonons and many other quasiparticles are mostly described in the universal
language of creation and annihilation operators, which is just an interpretation
stemming from the fact that the energy spectrum of an oscillator is equidistant
and can be obtained algebraically by representing the Hamiltonian through the
product of ubiquitous raising and lowering operators. One might say that the
oscillator model “radiates” its messages throughout the physmatical network and,
in principle, receives feedback. Analogous exchange of messages can be observed,
for instance, in nonlinear differential equations and dynamical systems, in
geometrical properties of gauge models, in physics-based models beyond physics.
Probably, in the near future, an algorithm may be built to produce a series of
hierarchical - coarser - networks by searching highly dense subnets (or
subgraphs) in each level of the entire network, and then a clustering multilevel
algorithm can be applied. This networking approach would enable us to carry out
a structured analysis of nowadays unweighted physical and mathematical
networks. The networking paradigm fuels my conviction that physics and
mathematics are a non-stop issue, with new nodes being constantly incorporated.
Such an outlook is, of course, closer to expressing a dream than setting up a
concrete problem3. My primary goal in this manuscript is much more modest - I
2 Hybrid languages exist not only as artificial programming or document-oriented formal
languages (with some distributed data model that allows addressing, searching and linking of content
from documents), but also among human dialects. For instance, Yiddish, with its ex-territoriality and
massive inclusion of Slavic elements, even sovietisms, is a typical hybrid language destined to
consolidate a network of disparate enclaves.
3 General features of representing science as a network or a map are discussed in the well-
known book by John Ziman “Reliable Knowledge” [109], ch.4.
6
Preface
was trying to make the book a good read as well as a good reference to some
interesting facts thoroughly covered in the available literature. I also tried to
combine attention to detail with personal recollections and the occasional
anecdote (often in order to make my points), rather than the purely scholarly
approach. The diary form implies by default a compendium of many topics, with
not necessarily everything really derived. Thus, the book may also appeal to
people who wish to “comprehend” numerous fields of physics without doing
much work.
There is an obvious gap between the expert knowledge space and the
educational one. The number of scientific papers grows so rapidly that the
traditional methods of information dissemination (via journal papers,
monographs, textbooks) become inefficient: even the specialists in narrow fields
find it difficult to focus on the principal issues since there may be a lot of “noise”
totally obscuring useful information. The Internet search, though very useful in
skillful hands, often only aggravates the problem. Everyone is familiar with the
difficulty of finding a precise piece of information, and the novices are completely
lost in it. Strict monographs and detailed review papers provide little help: the
monographs are usually too rigorous so that the reader must invest a lot of time
to get to the essence, and rare overview articles are unbiased. The latter fact is
natural: most of the authors extensively refer to their own work. Besides, people
who are most active in the research seem to be reluctant to write long and
detailed reviews about their field in general. Under these circumstances, it is
likely that some free-style manuscripts might be required. Such free-style books
would occupy an intermediate position between science4 popularization and real
monographs. The free-style genre provides an opportunity for the author to
convey her/his point of view whereas for the reader it may serve as a means to
fill the gap between the common university courses and the highbrow scientific
monographs or research papers based on special mathematical techniques. The
present text is an attempt to quilt such a free-style book. And I repeat, I am aware
that loose reflections are perceived as a grave professional disadvantage as well
as extensively using “I” instead of impersonal “we”. This “I-ing” i.e., an obvious
retreat from the academic style is neither puerilism nor exhibitionism - it is rather
an expression of personal opinion. So, the book may appear heavily opinionated,
which is quite natural for a diary.
Putting together seemingly diverse subjects takes time of course, so this book
has been written in steps reflecting different levels of understanding. Some parts
of the book are intended also for the people more attracted by arts, literature, and
social studies than by science saturated with hard math. The obvious problem is
that such people usually experience difficulties with mathematics. The book
partially aims to reinforce basic mathematical knowledge, in particular, by
favoring and interpreting mathematical models as well as by trying to criticize
4 For definiteness, I shall only speak about physics.
Preface
7
some of them. Mathematics viewed as an intellectual reality can be made
instrumental not only in physics, but also in social life. As far as the scientific
component of the manuscript goes (there are also unscientific ones), only the
most basic models are discussed in this book. The style of the text - slow,
ruminating, recurrent, sometimes light - reflects the tendency to study things
while writing. Some readers might designate a number of fragments in the text as
“philology” or as “pouring from void into empty”. I, however, think this
impression is deceptive. There exist at least three good reasons to ruminate about
foundations of physics: firstly, by scrutinizing the fundamental ideas once again,
one can learn a lot from ingenious individuals who originated them; secondly, one
can reformat individual thought templates making them less susceptible to
collective mantras; and thirdly, pondering over possible mathematical
formulations of physical basics can catalyze the study of mathematics. We shall
see later in this book that many valuable mathematical ideas have come through
contemplating over various possibilities of exploring basic laws of physics. As an
illustration one might recall that any successful physical theory has historically
served as a great stimulus and rich source of ideas primarily for mathematicians.
This was an exchange with instruments of understanding. Furthermore, we shall
parse some customary notions; re-examining conceptual systems seems to be
useful because otherwise attitudes pass ahead of knowledge.
Beside “philology”, one may encounter some inconsistencies - often
deliberate, as, e.g., in coordinate transformations - in the notation of different
sections, largely because different parts of the book were written at different
times. One may also notice that in special and general relativity different systems
of notations such as ((𝑥1, 𝑥2, 𝑥3, 𝑥4 = 𝑖𝑐𝑡, ) (pseudo-Euclidean) and Minkowski
space with signatures (+, −, −, −) or sometimes (−, +, +, +), respectively, (see
Chapters 3, 9) are in fact more convenient than a single one, although they are just
parts of the same geometric theory (however, I use only one system of relativistic
notations in this book with a small exception of writing coordinate indices below
to simplify the notations in a couple of the most primitive cases). Usually,
understanding the author’s notations takes considerable time while reading
physical and especially mathematical papers. To spare such superfluous efforts, I
tried to keep notations as simple as possible. Yet, I would not consider this text as
fully suitable for students since it would take a lot of precious time to read all my
reminiscences which may be unnecessary for focused studies. I concede that this
book is a kind of an occasional supplementary reading, a dubious introspective
report rather than a textbook or a monograph, although I was much more
interested in physical and corresponding mathematical problems than in
discussions and gossips about these problems. Some parts of this book may be
read by a general audience without difficulties. Because of numerous distractions
this text is not exactly what one needs to get prepared for exams. Nevertheless, I
disagree with those who regard all such distractions totally irrelevant - I hope
they might produce helpful associations. There is, incidentally, a general principle
8
Preface
known to many people studying martial arts, e.g., karate, but applicable for each
kind of learning: absorb what is useful, reject what is useless, and add what is
specifically your own.
As stated, the manuscript is intentionally defocused and lacks unity. In a
number of fragments, I decided to sacrifice stringency to make the reading more
comprehensible. In some respects, the book is closer to a collection of popular
essays than to a scientific treatise. I can understand those readers who would be
irritated by such style and organization of the book and would rate it as
unsatisfactory. To such readers I might remark that I tried to assume the role of a
generalist in order to see the forest behind the trees. It is my deep conviction that
physics and mathematics are the playground for free-thinking, open-minded and
versatile personalities.
As far as the interaction between physics and mathematics is concerned, one
can, of course, better spend one’s time reading refined and focused reflections
about mathematics and physics by V. Arnold, G. H. Hardy, M. Klein, Yu. Manin, J.
von Neumann, G. Polya, N. Wiener, a number of prominent physicists, even
Bourbaki who seem to be a collective pseudonym for a group of mathematical
extremists united in their disdain of physicists. The present book may only fuel
such disdain since I never intended to meet the present-day mathematical
standards. The book consists of twelve chapters which are in fact not
independent:
1. Introduction
2. Principles of Mathematical Modeling
3. Mathematical Potpourri
4. Classical Deterministic Systems
5. Classical Fields and Waves
6. The Quantum World
7. Stochastic Reality
8. Radiation in Matter
9. What Remains to Be Solved
10. Climate as a Physical System
11. Made in Physics
12. Conclusion and Outlook.
I could not refrain from revealing cross-chapter associations and exploiting
common models. There exist popular and pretty universal mathematical methods
such as Fourier series, Green’s functions, perturbation techniques, asymptotic
expansions, etc. that can be applied in each field to handle models with totally
different physical content. Thus, it seems indispensable to master these universal
methods, if one is striving to work professionally with physical models.
Being essentially a diary and based on personal reflections, the book enjoys
relaxed standards of modesty and customary humility. I did not try to make the
Preface
9
text look maximally impersonal as well as to conceal subjective likings and
dislikings. In this book I used to intersperse technical descriptions with some
reminiscences of my personal interactions with physicists and mathematicians.
The scientific content in some fragments and sections is nearly absent. Yet the
book contains, apart from personal reminiscences and opinions, a pronounced
objective i.e., a scientific drive whose aim is to demonstrate strength and
versatility of modeling methods in physics. This scientific component may bring
certain difficulties to an occasional general reader, so the book prerequisites are
some standard university courses on algebra, analysis, and differential equations
as well as familiarity with basic facts from physics. Honestly speaking, learning
the models of physics presupposes a certain degree of maturity, since they usually
involve tying together diverse concepts from many areas of physics and
mathematics. One can observe that a good article on physics is in fact a collection
of mathematical problems with solutions. Bearing in mind these mathematical
necessities, the preliminary Chapter 3 (“Mathematical Potpourri”) presents a
recapitulation of known mathematical facts, with the notation adopted
throughout the book being introduced. Tinkering with mathematical expressions
seems to be a growing trend in modern physics, and the rows of such expressions
are often overloaded with fancy notations and can be totally intransparent for a
person who is not a narrow specialist in the field. Occasionally, when the
calculations are too lengthy and tedious to be reproduced exhaustively, I have
simply quoted the results with the respective references.
Many examples in the book can be traced back to the works of giants. Great
physicists may be considered to come in two more or less pure varieties: deep
thinkers (like e.g., N. Bohr, P. A. M. Dirac, F. Dyson, A. Einstein, J. W. Gibbs, or W.
Heisenberg) and efficient problem solvers (like H. Bethe, R. Feynman, E. Fermi, L.
D. Landau, A. Sommerfeld, or Ya. B. Zeldovich). There may be also intermediate
types interpolating between the two pure varieties, e.g., S. Chandrasekhar, L. I.
Mandelstam, W. Pauli. Besides, there exist great minds whose mathematical
power equals physical considerations or even dominates over them, those are
exemplified by V. I. Arnold, N. N. Bogoliubov, L. D. Faddeev, V. A. Fock, H. A.
Kramers, J. von Neumann, E. Wigner, E. Witten. All these people have produced
highly influential results to be consumed by a sea of less creative individuals. The
present book reflects the position of such a consumer: my main motivation was
rather to learn, understand and absorb than to create and excite. One should not,
however, try to absorb new science or techniques in a gulp, it must be done
gradually, bit by bit. Recall that a real connoisseur would never hastily gulp the
precious old wine, he would rather enjoy each sip, feel the gradations of taste,
nuances of scent, hues of color.
I realize that the title of this book may be misleading, since this is not a book
on mathematical methods in physics. Many of such methods have been described
in comprehensive textbooks and monographs. For myself, it was important to
trace how the outstanding scientists worked, how they employed their intuition,
10
Preface
set up concrete problems, guessed the answer and tried to corroborate it by
developing the mathematical metaphors - pretty universal to be transferred to
other models and to connect them. It was also utterly instructive to observe how
the great inventive minds tried to adjust and sometimes distort mathematics in
order to obtain the required, intuitively anticipated answer without, of course,
crucially violating the strict rules of mathematical operations. Therefore, one may
note that the book is mainly focused on examples of physics-based models and
not on hard-core rigorous descriptions favored, in all their generality, mostly by
professional mathematicians. It is in this sense that the present book may serve
only as a supplementary material to existing textbooks, e.g., on theoretical and
mathematical physics.
My interest lies not in mathematics as such but only in its use. I treat
mathematics as a support stuff for physics or sometimes even as a part of physics.
Therefore, despite some explicit calculations contained in this book, I wrote it
mostly in a “do-it-yourself” manner so that in many places I give only drafts of
well-known models or theories. This means that starting definitions and
statements (theorems) are often only formulated, with basic ideas and formulas
being provided. The importance of correct definitions of physical and especially
mathematical concepts is irreducible. Nevertheless, physicists often believe that
it is the image of the concept and not its definition which forms the most essential
component of understanding, and mathematicians are inclined to unjustified
exaggeration of partial results. To some extent, in this book I succumb to this
popular physical stereotype, and some mathematically important details such as
establishing consistence of definitions or proving existence/uniqueness
theorems may be left to the careful reader.
To be really creative, one must solve problems and not just describe how
other people did it - this is a second-hand science. Likewise, it is inefficient to
study any physmatical subject of interest beforehand: much more useful would
be to take some problem relevant to this subject. At first, I wanted to supply the
book with a list of problems, some of them rather complicated (with solutions),
but then I dropped this idea because many problems are inevitably scattered over
the main text in the form of models and adding some artificial problems would
make the manuscript look like a textbook, whereas it is basically a diary. In other
words, most exercises that I conjured initially for this book would look very
difficult not because they really are such, but because the theory included in the
book is not systematized enough to solve them all. This is a book for reading, not
for studying.
In many cases, I return to the problems and models I have already discussed
at a different level of understanding and try to observe them from a different
angle. Such a recursive method of viewing a problem helps to elucidate its
sensitive points. I tried to write as freely as possible, often preferring to stop and
once again scrutinize the well-known models, sometimes with slightly different
Preface
11
notations, attempting to find new features or connections to other models in the
formerly discussed problems.
Thus, despite a cacophony of subjects, I hope the book is not totally useless.
In general, the book may be perceived as reflecting a kind of intuitive protest
against the progressive compartmentalization of science, its artificial breakdown
into narrow disciplines. There are no natural frontiers between disciplines - it is
people who have established them, often for the purposes having nothing in
common with science.
A few words about sources. Many results in physics are hard to ascribe to a
single author, they may be perceived as an outcome of certain scientific evolution.
This fact manifests itself in the cited literature. In view of the rather broad scope
of the book, no attempt has been made to supply it with an exhaustive list of
references. I tried to cite moderately but exactly. The general criterion was to
point at sources that could supplement the information, contained in the present
book, which I consider insufficient or “divergent” (too many chained references
with branching). Although in general I tried to make all the calculations
transparent and understandable, in many instances in this book explorations of
ideas contained in many standard sources such as textbooks is not fully complete
and thorough; to provide a detailed account in all cases would make the
manuscript unobservable. Some references, especially related to the material to
be readily found on the Internet, are given, for the reader’s convenience, in the
main text. I did not shy away from using “unscientific” sources and from citing
popular science books and articles, even those written by journalists and
appearing in newspapers and on the Internet i.e., the “gray literature” - the
material not published in peer-reviewed scientific journals. I also repeatedly cite
Wikipedia, although this source can hardly be called an authority (for instance,
Wikipedia can be biased). The names of many great people are present in the text.
I was lucky to listen and to talk to some of them: V. M. Galitskii, V. L. Ginzburg, L.
V. Keldysh, D. A. Kirznitz, A. B. Migdal, Ya. B. Zeldovich, D. N. Zubarev. I would not
dare to say that I know or knew them all: it is only narrow circles of close friends
have the right to say so. I was also lucky to have very kind and understanding
teachers: Professors N. P. Kalashnikov and M. I. Ryazanov.
Almost all the material in this book is not based on original results.
Nevertheless, the presentation of the material is predominantly my own and
consists of original or considerably modified explanations of known results,
discussion of their relevance to the contemporary scientific or engineering
practice. In this sense, although there are no fundamentally new results, the book
is not a primitive compilation. I tried to acknowledge all borrowings of
presentations, which I was aware of, in the notes scattered over the manuscript.
Since I attempted to present the modeling techniques by developing them from
the first principles and in a self-contained way, I did not provide a hundred
percent comprehensive list of references. Yet, I have supplied the book with the
12
Preface
minimal bibliography, which would help the reader to duly appreciate the work
performed by many ingenious people.
After all that was said one can justifiably ask: why am I writing all this stuff?
A variant of the answer lies in the following image I used to have in my mind:
people who study physics and mathematics today often find themselves in a
situation of a late passenger trying to catch the train that has already started. My
vision is that it would be possible to spare the inconvenience of jumping into the
last car of the train, and instead of hectic turns to be comfortably installed in a
cozy lounge. To this end, one should not strive to be at the forefront of modern
science, which is frequently dictated by fashion, but to understand thoroughly not
so many crucial scientific patterns. And I would like to add that I wanted to write
not to please the experts or the referees, but what people could remember and
use.
Acknowledgements
I would like to express my deep gratitude to all those colleagues and friends of
mine who have helped me in discussions, in text and pictures preparation and in
many other ways. I am grateful to Professor Christoph Zenger and Professor
Hans-Joachim Bungartz of the Technische Universität München whose shrewd
critical remarks made this text less vague. Special thanks to Dr. Dmytro Chibisov
who was very patient to explain to me a lot of tricks in practical TEX/ LATEX usage
(I call such tricks TEX-nicalities). Dr. Chibisov who is a well-known specialist in
computer algebra techniques has also given me many good advice especially
concerning computer usage in mathematics. I would also like to thank Professor
Vladimir Preobrazhenski from l’Institut d’Electronique, de Microélectronique et
de Nanotechnologie de Lille for many constructive comments and suggestions.
Finally, but not the least I would like to express my deep gratitude to Dr.
Thomas S. Ligon, who overtook the burden of the scientific editing of this book
and my wife, Tatjana Znamenski, for preparing this manuscript for publication.
Basic Principles of Mathematical Modeling
13
Contents
14
Contents
2
Contents
Acknowledgements ........................................................................................................ 12
Contents .............................................................................................................................. 13
1
Introduction.............................................................................................................. 20
2
Principles of Mathematical Modeling ............................................................ 30
2.1
Basic Principles of Mathematical Modeling .............................. 32
2.2
Mathematical Models in Physics .................................................... 36
2.3
Ten Worlds of Physics ........................................................................ 45
2.3.1
The Classical World ............................................................. 46
2.3.2
The Thermal World ............................................................. 46
2.3.3
The Nonequilibrium World ............................................. 47
2.3.4
The Continuum World ....................................................... 47
2.3.5
The Electromagnetic World ............................................. 48
2.3.6
The Plasma World ................................................................ 48
2.3.7
The quantum world............................................................. 49
2.3.8
The high energy world ....................................................... 50
2.3.9
The relativistic world ......................................................... 50
2.3.10 The Cosmological World ................................................... 51
2.4
Physics-Based Mathematical Models (PBMM) ........................ 53
2.5
Theory, Experiment, and Models ................................................... 55
2.6
On the Relationship Between Physics and Mathematics .... 58
2.7
Mathematical Physics and Physmatics ....................................... 62
2.8
The Role of Human Communication ............................................ 63
2.9
Antimodels .............................................................................................. 72
2.10
Topological Models .............................................................................. 79
2.11
Engineering Models ............................................................................. 80
2.12
Mathematical Models in Biology .................................................. 82
2.13
Cognitive Models .................................................................................. 87
2.13.1 Religious Models ................................................................... 92
2.14
Science and Arts .................................................................................... 96
2.15
Physics and Philosophy ..................................................................... 97
2.16
Prognosis .................................................................................................. 98
2.17
Some Tricks of the Trade ............................................................... 104
3
Mathematical Potpourri ................................................................................... 108
Basic Principles of Mathematical Modeling
15
3.1
Sets ........................................................................................................... 113
3.2
Maps and Operators ......................................................................... 114
3.3
Groups .................................................................................................... 114
3.3.1. Semigroups ............................................................................. 115
3.4
The Rotation Group ........................................................................... 116
3.5
Lorentz and Poincaré Groups ......................................................... 116
3.6
Rings and Fields .................................................................................. 122
3.7
Morphisms ............................................................................................ 123
3.8
Algebras ................................................................................................. 124
3.9
Lie Groups and Lie Algebras ........................................................ 124
3.10
Vector Spaces ...................................................................................... 125
3.11
Basis Vectors........................................................................................ 135
3.12
Dual Spaces........................................................................................... 139
3.13
Some Remarks on Indices .............................................................. 143
3.14
Operators in Quantum Mechanics ............................................. 143
3.15
Dualities in Physics ........................................................................... 144
3.16
Manifolds ............................................................................................... 151
3.17
Notes on Derivatives ........................................................................ 154
3.18
Notes on Calculus .............................................................................. 156
3.19
Basic Geometry for Physics ........................................................... 157
3.20
Vector Fields ........................................................................................ 162
3.21
Geometry and Physics ..................................................................... 163
3.22
Geometry of Classical Mechanics ................................................ 163
3.23
Transformation of Affine Coordinates ..................................... 169
3.24
General Coordinate Transformations ....................................... 170
3.25
Variational Methods ......................................................................... 171
3.26
Differential Equations ..................................................................... 172
4
Classical Deterministic Systems ................................................................... 174
4.1
Main models of classical mechanics .......................................... 175
4.2
Newtonian Mechanics ..................................................................... 177
4.3
Lagrangian Mechanics ..................................................................... 178
4.4
Hamiltonian Mechanics .................................................................. 183
4.5
Oscillations ........................................................................................... 192
4.6
Harmonic Oscillator ......................................................................... 194
16
Contents
4.7
Symmetries and Conservation Laws ......................................... 196
4.8
Relativistic Mechanics ..................................................................... 197
4.9
Dynamical Systems ........................................................................... 198
4.10
Dynamical Systems and the Cauchy Problem ....................... 198
4.11
Autonomous Dynamical Systems ............................................... 200
4.12
Non-autonomous Systems ............................................................ 202
4.13
Dynamical Systems in Mathematical Modeling .................... 205
4.14
Nonlinear Science .............................................................................. 207
4.15
The logistic model: the bugs are coming ................................. 209
4.15.1. Extensions of the logistic model .................................. 218
4.15.2. Applications of the logistic model ............................... 224
4.16
Instabilities and chaos ..................................................................... 228
4.16.1 Chaos in dissipative systems ........................................ 232
5
Classical Fields and Waves .............................................................................. 235
5.1
The Maxwell Equations ................................................................... 235
5.2
Gauge Invariance in Classical Electrodynamics ................... 238
5.3
Four-Dimensional Formulation of Electrodynamics ......... 242
5.4
Classical Electromagnetic Field without Sources ................ 245
5.5
Equations of Motion for the Electromagnetic Field ........... 246
5.6.
Hamiltonian Formalism in Electromagnetic Theory ......... 247
5.7
Limitations of Classical Electromagnetic Theory ................ 250
5.8
Integral Equations in Field Theory ............................................ 252
5.9
Phenomenological Electrodynamics ........................................ 253
5.9.1
The Traditional Averaging Procedure .................... 258
5.9.2
Ensemble Averaging of Fields and Currents ......... 260
6
The Quantum World .......................................................................................... 266
6.1
In the Middle of Revolution ........................................................... 268
6.2
Some Notes on the Historical Development of Quantum
Mechanics ............................................................................................................... 274
6.3
Mathematical Models of Quantum Mechanics ...................... 275
6.4
The Schrödinger Equation ............................................................. 278
6.5
Quantum Tunneling .......................................................................... 284
6.6
Quantum Evolution ........................................................................... 284
6.7
The Stone Theorem .......................................................................... 285
Basic Principles of Mathematical Modeling
17
6.8
Geometrical Formulation of Quantum Mechanics .............. 286
6.9
Quantum-Classical Correspondence ......................................... 287
6.10
The Ehrenfest Theorem and Its Meaning ............................... 298
6.11
Wave Packets in Quantum Mechanics ...................................... 304
6.12
Semiclassical Expansions and Asymptotic Methods .......... 305
6.13
The Density Matrix and Its Relatives ........................................ 305
6.14
Do You Need an Interpretation? ................................................. 305
6.14.1 More on Copenhagen Interpretation ........................ 308
6.14.2 Bohm’s version ................................................................... 313
6.14.3 Statistical interpretation ................................................ 314
6.14.4 The Many-Worlds Interpretation .............................. 315
6.15
Causality ................................................................................................ 317
6.16
Quantum Chaos .................................................................................. 318
6.17
Path Integrals in Physics ................................................................ 318
6.18
Quantum Field Theory .................................................................... 321
6.18.1 More on Relativistic Invariance .................................. 323
6.18.2 Feynman Diagrams .......................................................... 324
6.18.3 S-Matrix ................................................................................ 324
6.18.4 Particles and Symmetries in Quantum Theory ... 325
6.18.5 Quantum Field Theory and Mathematics............... 326
6.18.6 Canonical quantization in QED .................................. 329
6.18.7 Gauge Fields in Quantum Theory .............................. 329
7
Stochastic Reality ................................................................................................ 331
7.1
Thermodynamics: the Study of Paradoxes ............................ 332
7.2
Statistical Way of Thinking ........................................................... 333
7.3
Statistical Equilibrium .................................................................... 335
7.4
Statistical Ensembles ....................................................................... 336
7.5
The Bogoliubov Chain ...................................................................... 337
7.6
Chaotic Behavior ................................................................................ 338
8
Radiation and Matter ......................................................................................... 341
8.1
Interaction of Electromagnetic Radiation with Matter.
General Concepts ................................................................................................. 341
8.2
Field Energy Dissipation in Matter ........................................... 344
8.3
More on Charge in Electromagnetic Fields............................ 347
18
Contents
8.3.1
Interaction of a Particle with a Standing Wave . 348
8.3.2
Interaction of a Particle with a Traveling Wave .... 358
8.4
On
Hamiltonian
Formalism
for Particle Motion
in
Electromagnetic Fields ..................................................................................... 358
8.5
Interaction between Atoms and Radiation Field ................. 359
8.6
Laser-Matter Interaction ................................................................ 360
8.6.1. Ultrashort Laser Pulses ...................................................... 360
9
What remains to be solved? ........................................................................... 362
9.1
The Standard Model ......................................................................... 362
9.2
The Arrow of Time ............................................................................ 363
9.2.1
Perennial Problems ......................................................... 365
9.2.2
Observations of Possible TRS Breakdown ............ 367
9.2.3
Model-Based Claims ........................................................ 371
9.2.4
Closed Systems .................................................................. 373
9.2.5
Irreversibility and Time Reversal Noninvariance:
Remarks about Terminology ......................................................... 373
9.2.6
The Time Operator ........................................................... 376
9.2.7
Elementary
Properties
of
the
Time-Reversal
Operator ................................................................................................ 377
9.2.8
Time Operator in Classical Mechanics ..................... 378
9.2.9
The Pauli Theorem ........................................................... 380
9.2.10 Time Reversal Puzzles .................................................... 381
9.3
Irreversibility ...................................................................................... 387
9.4
Origins of Unpredictability ............................................................ 390
9.5
Understanding superconductivity ............................................. 392
9.5.1
Early History of Superconductivity ........................... 393
9.5.2
Some Physical Models of Superconductivity ........ 396
9.6
Superfluids and Supersolids ......................................................... 401
9.7
Relativity ............................................................................................... 402
9.7.1
Special Relativity ............................................................... 402
9.7.2
General relativity .............................................................. 402
9.8
Gravitation ............................................................................................ 403
9.8.1
The Equivalence Principle ............................................ 407
9.8.2
The Einstein Equations .................................................. 408
9.9
Cosmology ............................................................................................ 409
Basic Principles of Mathematical Modeling
19
9.10
Black Holes ........................................................................................... 413
9.11
Quantum Gravity................................................................................ 415
9.12
String Theory and String Theorists ........................................... 416
9.13
Is Relativity Firmly Established? ................................................ 421
10
Climate as a Physical System ........................................................ 426
10.1
Some purely Climatological Questions. ................................... 429
10.2
Some Purely Physical Questions ................................................ 431
10.3
The Earth as a Black Body Emitter ............................................ 434
10.4
Climate and Weather ....................................................................... 436
10.5
Dynamical Systems in Climate Modeling ................................ 440
10.6
Combining Models with Observations .................................... 447
10.7
Climate Variability ............................................................................ 449
10.8
The AGW Evidence ............................................................................ 450
10.9
The Evil Role of Carbon Dioxide ................................................. 460
10.10
The Role of the Sun ........................................................................... 470
10.11
Limitations of Current Climate Modeling ............................... 474
11
Made in physics .................................................................................. 478
11.1
Exported Models ................................................................................ 479
11.2
The Limits of Sociology ................................................................. 480
11.2.1 Self-Reproducing Social Patterns .............................. 481
11.2.2 Archetypical Questions of Social Sciences ............ 486
11.2.3 Limits and Errors in Social Sciences ......................... 488
11.3
Hierarchical Multilevel Systems .................................................. 490
11.3.1 The Politics of Bureaucracy ......................................... 490
11.4
Physical Economy ............................................................................. 492
11.5
Naive Taxation Models .................................................................... 495
12
Conclusion and outlook .................................................................. 501
13
Bibliography ........................................................................................ 505
20
Introduction
1 Introduction
By some experience I have noticed that many people do not really read the
whole of a science book, actually they don’t read anything beyond the
foreword, introduction and conclusive chapter. Bearing this in mind, I decided
to slightly exceed the commonly accepted scope of these ancillary parts,
although the present manuscript is strictly speaking not a science book. It
pursues two main goals: firstly, to refresh the standard repertoire of working
physicists and, secondly, to emphasize the role of interdisciplinary links
enjoying physical and mathematical inclination (I call the col- lection of all
such links a physmatical network). The first goal implies only a superficial
coverage of subjects, with the main intention here being to arouse interest to
foundations, whereas the second goal was to demonstrate occasional
fruitfulness of cross-disciplinary jumps and unexpected analogies, often
regarded with disfavor as rather philosophical and speculative.
The very notion of “interdisciplinary approach” as well as the words
“interdisciplinary”,
“cross-disciplinary”,
“transdisciplinary”,
or
“multidisciplinary” seem to be strongly discredited although it is often
asserted
that
highly
institutionalized
knowledge
tends
to
limit
understanding. Scientists habitually consider all such terms as a distinctive
mark of unprofessionalism, when everything is swept into a single heap and
it would be difficult to set up a clear-cut problem under blurred
circumstances. As a consequence, no specialized knowledge appears to be
required, and active charlatans with their childish babble or at least holistic
half-professionals with their vague claims may profit. Although such a danger
really exists, I still think that total refutation of the interdisciplinary approach
(or even its occasional equalizing to pseudoscience) is erroneous.
Interdisciplinarity is basically a good thing, allowing one to transcend
artificial boundaries and to overcome excessively narrow specialization.
More and more areas are becoming inherently interdisciplinary: biophysics,
biomechanics, medical physics, other life sciences, robotics, nanoscience and
nanotechnology, quantum computing, ecology, climate studies, complex
systems and so on. Such renowned scientific journals as the Physical Review
E and Physica D claim to be “interdisciplinary in scope” (PRE) or devoted to
“nonlinear phenomena in general” (Physica D). Yet people are uncertain
whether an interdisciplinary career is in principle feasible, with
interdisciplinary studies only denoting a rubric.
People who are comfortable in multiple areas, e.g., in physics and social
sciences, in mathematics and climate science, human perception and nuclear
engineering, chemistry and geoscience, mathematics and genetics, or political
analysis and scientific ecology are exceedingly rare, and I think one should
create a favorite environment - institutional, intellectual and moral - that
would encourage the emergence of a new breed of scientists feeling at home
Basic Principles of Mathematical Modeling
21
with unusual alliances of specialties. These are Renaissance persons5 yet I
don’t think that today value of such scientists is properly appreciated: in spite
of many words pronounced in favor of interdisciplinary research, the current
reward system seems to impede flourishing of cross-field experts and
projects. In this book, I group together things that appear to be utterly diverse
partly in order to emphasize the value of interdisciplinarity.
Another curious observation of mine was that people - even very
qualified persons - mostly do not think thoroughly about the basics. This is
quite natural: people study the rudimentary things at a very young age when
wonderful distracting factors surrounding them are abundant and
afterwards, in adult life, there is a drastic shortage of time to return to
foundations. One should not blame people for a few blank spots in their basic
thesaurus. Many of us have probably met some angry, disturbed and
humorless professors who were pinching students and younger colleagues,
without any compelling reason, solely on the ground of having forgotten
allegedly elementary facts. Bearing this situation in mind, I have devoted a
considerable space to the subjects commonly considered too primitive to pay
attention to. To me, they proved to be not at all primitive, and I often found,
with utter amazement, that thinking about the basics rapidly develops into
facing very intricate issues protruding into various fields of physics and
mathematics. Although this remark about the importance of reviewing
elementary concepts may sound trivial, I wanted to share my amazement
with other people. This explains why I ruminate about pesky basics for so
long in the manuscript.
This book is devoted to an informal discussion of patterns constructed for
treating physical problems. Such patterns, when sufficiently formalized, are
usually referred to as “models”, and they tend to be applied today not only in
physics, but conquer the fields traditionally occupied by other disciplines
generally considered to be totally different from physics. Accordingly, in this
book the word “physics” is understood in a broad sense as the general study
of natural phenomena. A tiny part of the models related to natural phenomena
may be set up as mathematical problems and solved using contemporary
mathematical means, exactly or approximately (e.g., numerically), whereas a
much larger part of the models can only be qualitatively described. These
latter verbal patterns are typically regarded as imprecise low-level
statements which, hopefully, will be formalized in future. A mathematical
formalism, however, does not necessarily imply exact knowledge; rather it
demarcates the frontiers of ignorance that are fuzzy in qualitative statements.
Nevertheless, a large part of this book is devoted to qualitative statements
and verbal patterns considered as a kind of raw material for building up
satisfactory mathematical models.
Inevitably, the presentation of many topics may contain short excerpts
from the courses of classical and quantum mechanics as well as from classical
5 A renaissance person is a model of personality possessing a broad range of
knowledge. One of such person was, for example, Alexander von Humboldt who had the
ability to embrace nearly all scientific disciplines known at his time and traveled through
all the continents (maybe except the Antarctic).
22
Introduction
electrodynamics and quantum field theory. Much of this material may appear
trivial and even unnecessary. I think this impression is false. The aim of
including the textbook excerpts is to trigger the imagination. Well-known
results start the intrigue, and afterwards the new and unusual things come
along. The reader may also find it annoying when I use innumerable
“possibly”, “probably”, “might be”, “perhaps”, “in principle” and so on. The
reason for this approximativeness is not an obsessive carefulness but merely
an inexact knowledge. Unfortunately, most of the knowledge in the world is
inexact and cannot be reliably quantified, and one should not commit the
adolescent sin of not admitting it.
In the Middle Ages, there existed the so-called ”Exempla”, i.e., collections
of illustrative examples to be used by priests to save the parish people from
falling asleep during the sermon. The wakening effect was reached by the
presence of numerous vivid details in the “exempla”, for instance, chilling
minutes about hell, the devil, demons, etc. Imagining easy to understand, life-
like pictures, the public heard prayers. In this book, I have tried to produce
something of the kind of such “Exempla”, although I am not sure I can always
keep the reader awake.
Being overloaded with personal details and opinions, this book
nevertheless contains examples of the application of selected mathematical
methods to mathematical models frequently encountered in various branches
of science and engineering. Since physics seems to be the best-known
collection of models, it is physics based mathematical models (PBMMs) that
are predominantly discussed. Even when other disciplines come into play,
most of the models under review have been imported from physics and
adapted for scholarly or engineering problems arising in these disciplines. To
promote cross-fertilization of ideas between a variety of disciplines, it is often
advantageous to consider the so-called complex systems requiring a real
exchange of concepts and techniques. In this transdisciplinary approach there
is of course a danger of debating and arguing about too many things.
After a brief methodological prelude about general modeling principles
(Chapter 2), some standard mathematical techniques are informally
discussed in Chapter 3. This chapter is neither a micro-handbook nor a review
of mathematical methods in physics. The main purpose of this “mathematical
potpourri” is to introduce some mathematical concepts, mostly of geometrical
nature, which have long become routine in mathematical texts but still are not
quite easily acquired by physicists and engineers engaged in the business of
mathematical modeling 6 . Chapter 4 is devoted to classical mechanics
culminating in the theory of dynamical systems. This is perhaps the main part
of the book; the emphasis in it is placed on dynamical systems - this is due to
the fact that change is the most interesting aspect of models. Moreover,
classical dynamics is probably the most developed part of science, it studies
the evolution of systems of material points, bodies that are so small that their
inner structure is disregarded and the only surviving characteristic is their
6 I have written “business”, but it is probably a wrong and compromised word; one should
rather think of an art of mathematical modeling, even in its numerical stage.
Basic Principles of Mathematical Modeling
23
position in space, 𝐫= 𝐫𝑖(𝑡) . The dominant modeling concept exploited
throughout Chapter 4 is the notion of local equilibrium. Mathematical
modeling of complex systems far from equilibrium is mostly reduced to
irreversible nonlinear equations. Being trained in such an approach allows
one to model a great lot of situations in science and engineering. In Chapter 5,
classical field theory is briefly outlined, with some accent being placed on
wave motion and wavelike models. Since there is hardly any means to define
what can be called a wave process in general, a very broad variety of problems
is mentioned in this chapter. Physical systems can be roughly separated into
two classes: particles and fields, correspondingly there are two basic classes
of models. I used the word “roughly” because in fact there is an overlap, for
example, particles serve as field sources. Moreover, the separation of matter
into particles and fields is, strictly speaking, outdated and incorrect. It is used
here only for convenience: the main difference between dynamical systems
in classical mechanics, where particles are studied, and in field theory is in the
number of degrees of freedom. Any classical mechanical system consisting of
a finite number 𝑁 of particles has only a finite number of degrees of freedom,
𝑛 = 3𝑁, whereas fields possess an infinite number of them.
Chapter 6 is devoted to quantum (and mostly relativistic) fields. No
former knowledge of quantum field theory (QFT) by the reader is implied,
although the corresponding mathematical problems are quite intricate.
Quantum field theory has a reputation of being hard to study, but I think that
such a reputation is mostly due to “user-unfriendly” expositions, heavily
technical and overloaded with details. I hope that after scrolling this chapter
one will be able to read the literature on QFT with the least amount of pain.
Quantum field theory can be crudely interpreted as quantum mechanics of
the systems with infinite number of degrees of freedom. The development of
quantum field theory began with quantum electrodynamics whose main
equations appeared in the 1920s, almost simultaneously with nonrelativistic
quantum mechanics in the works of Dirac, Heisenberg and Pauli.
The primary bunch of quantum field models being just an extension of
quantum electrodynamics may be called the “old” quantum field theory. As it
is customary, we shall discuss quantum fields in the relativistic four-
dimensional context, although the requirement of relativistic invariance is,
strictly speaking, not necessary for quantum fields. After some recapitulation
of the four-dimensional formalism, we shall subsequently observe quantum
fields for spin 0, spin 1, and spin 1/2. Then we shall discuss a “new” physics
of quantum fields, where geometric ideas are fully exploited. Thus, theories
based on gauge invariance comprise a class of “new” quantum field theories.
One can illustrate the bridge between “old” and “new” quantum field theories
by an observation that all physical theories in four-dimensional spacetime are
characterized by a number of common features. In particular, long-range
forces should exist that require conservation of the corresponding charges.
This fact provides a passage from “old” to “new” QFT based on gauge theories.
It is widely known that methods of quantum field theory are applicable in
other areas of physics; the most popular application area of such methods is
statistical physics.
24
Introduction
Mathematical models of quantum theory reviewed in Chapter 6 are partly
connected with the wave problems discussed in the preceding chapter, so
there are a number of natural cross-chapter references. This fact may be
viewed as a manifestation of modularity and reusability of many
mathematical models in physics. Symmetry issues that should necessarily be
taken into account both in the orthodox quantum theory and in field theory
are tackled in many sections of Chapters 5 and 6. Apart from a description of
mathematical models of quantum mechanics, questions of its interpretation
are discussed in this chapter. Although bearing a heavy philosophical load,
these questions survived a renaissance in the 1990s and continue to be
popular also among physicists dealing with quantum computing and
quantum cryptography. For me, the so-called Copenhagen interpretation is
good enough, although it leads to paradoxes and inconsistencies. It is
presumed that N. Bohr understood causality as the Laplace determinism only,
i.e., in a narrow sense, and juxtaposed to it “complementarity”. Similarly to
Laplace who extrapolated the successful solution by Newton of the Kepler
problem on the entire universe, thus postulating a mechanistic model of the
world, Bohr created an anti-Laplace model by a philosophical generalization
of the Heisenberg “indeterminacy relations” which are in fact just trivial
consequence of the Fourier integral. This juxtaposition led Bohr to the idea of
incompatibility of quantum mechanics with determinism, at least of the
Laplace type. Less conservative ideas lead to a negation of causality in some
situations with participation of quantum particles, i.e., to acausal or even anti-
causal quantum models. Causality as a principle is discussed more or less
thoroughly in this chapter.
By the way, when studying quantum mechanics, it is only natural to
acquire some interest in its historical developments. Physics professionals
usually do not have a passion for the history of physics, mostly considering
the human element behind the scene an unnecessary distraction. Yet people
are different. As far as I am concerned, my interest in specific physical
phenomena was quite often fueled by the desire to comprehend the
corresponding historical occurrences. Besides, some evolutions of science
contain a pronounced dramatic element and bringing it to light allows one to
deepen the purely scientific understanding. For instance, I have found out that
scrutinizing the historical facts which accompanied the development of
superconductivity allowed me to understand its theoretical models easier
and better.
Statistical models treated, maybe a little superficially, in Chapter 7 lead
to problems that cannot be fully treated within the framework of classical
theory only, despite a lot of classical concepts discussed previously in Chapter
4 being invoked. It would of course be difficult to treat the whole scope of
statistical and stochastic issues permeating both physics and other disciplines
using methods developed for physical systems (such as economics and
sociology), thus many problems are only briefly mentioned in this chapter,
with respective references being provided of course.
Chapter 8 is devoted to the loose description of theoretical approaches to
the interaction of radiation, both electromagnetic and corpuscular, with
Basic Principles of Mathematical Modeling
25
matter. The chapter is divided into two parts, one related to the
electromagnetic field (mainly radiation) in material media, the other to the
passage of particles (mostly charged ones) through matter. The concept of
material medium and its interaction with external agents, in its general
setting, has a great many particular facets far beyond physics. For example,
many social problems may be reduced to the depiction of an appropriate
medium and its interaction with external influence such as immigration. One
can also portray a human society in a steady communicational state affected
by outside opinions. Mathematical models of immunology are conceptually
close to these considerations. In physics, external agents provoking a
response in material media are usually taken to be radiation fields or streams.
The problem of material media response is very complex and is dissected -
sometimes artificially - into many subproblems and models. Typical models
are related to the interaction of pulsed electromagnetic radiation with
material media. Such models are very relevant these days, specifically with
regard to new laser technologies allowing one to produce extremely powerful
ultrashort pulses.
The next Chapter 9 outlines “dark spots” - subjects that not only the
author does not fully understand, but also the great minds of the current
physmatical milieu seem to be unable to give exhaustive answers about. In
this chapter and the next one, two large subsections can be found - on time
reversal symmetry and about Earth’s climate (the latter discussed in Chapter
10) - like “novels in a novel” or, say, dreams seen in a dream i.e., so to say,
dreams of a second order. And like in a dream, the real problems are mixed
with fantasies, speculations and pseudoproblems. Both subsections contain
elements of a journalistic essay, and I do not shy away from discussing the
fictive problems bordering on metaphysics or excessive philosophical
generalities. In fact, a lot of current physics easily accommodates things that
are little more than philosophy and cannot be falsified in any way. For instance,
the problem of time-invariance violation discussed in this chapter is of semi-
philosophical, worldview character. For centuries it was believed that all
events are, in principle, predictable, time-revertive, and can be described by
differential equations similar to the equations of motion for an isolated body.
Seemingly unpredictable and irreversible phenomena, such as weather or
human behavior, were believed unpredictable and irreversible only due to a
very large number of variables. As to predictions, it is still hoped by some
scholars that with the advent of the powerful computers all long-range
predictions would be possible (which is probably wrong).
When thinking about mathematical models in physics, one cannot get rid
of a thought that the whole army of researchers has won a great many battles
but is losing the war. Indeed, there exist several diverse paradigms in physics
which serve as a base for model construction, but taken together they produce
an impression of great confusion. Which model of the particle is ultimately
correct - pointlike, as prescribed by special relativity, or extended and spread
over some finite domain, in accordance with quantum mechanics? Or a tiny
string? Or containing an internal world? Is the reductionist approach, when
one attempts to explain all phenomena as based on a fundamental set of
26
Introduction
elementary interactions, in general valid? One can notice that almost all cases
of physical interest, except probably some exotic ultra-high-energy
accelerator experiments, involve systems of a great number of particles. It
may be impossible to directly calculate the behavior of complex systems from
single-particle models. For example, reductionist arguments trying to derive
properties of biological tissue starting with some fundamental model such as
Newton’s second law or the Schrödinger equation can hardly be called
adequate. In Chapter 9 we shall discuss a typical reductionist problem: time
reversal in complex physical systems. It is generally believed that each
phenomenon in our reality should be, in the final analysis, invariant under
time reversal on the ground that the most popular mathematical models of
fundamental interactions are time-invariant. This, however, can be a delusion
or at least a stereotype.
Chapter 10 “Climate as a Physical System” could be considered as a
continuation of “dark spots”. It has already been emphasized that the climate
system is a very complex physical object and prognoses of its evolution must
necessarily be an extremely refined issue. There are many discussions among
professional climatologists, meteorologists, physicists and representatives of
other sciences about how to approach - not even to solve - this problem. There
seems to be more of humanities and politics than of natural sciences in the
climate disputes. The matter is that humans - even the most enlightened
climatologists do not know enough either about the Earth’s climatic system
or about the chaotic dynamic systems to produce accurate mathematical
models containing thousands of entangled variables. Hence the prevailing
role of the anthropogenic factor compared to the natural influencers on
Climate such as solar activity, oceanic circulation or lunar motion is highly
questionable.
Chapter 11 is devoted to mathematical models beyond physics. Physics is
distinguished from other disciplines also employing mathematical modeling
by the fact that models in physics are linked. This is an important property
ensuring the success of the collection of mathematical models called a science,
and that is a feature that makes physics “the splendid architecture of reason”
[18]. This linked architecture appeared mostly due to firmly established laws
of physics that can be expressed in the form of differential equations. As soon
as a model is disconnected from the main architecture, its heuristic value
becomes substantially reduced. One could see this diminishing value of
standalone models - despite the generally flourishing physics - on the
examples of numerous group (multiplet) models of elementary particles,
being of fashion in the 1960s. Nowadays, after the Standard Model epoch,
nobody seems to remember those quickly baked concepts which did not use
the wealth of physical results. A similar situation is typical, for example, of
economics where mainly ad hoc mathematical models are in use,
characterized by a lot of arbitrary parameters and suggestions.
Now, let me make some additional comments about the subjects
contained in the present book. This book is not about well-known
mathematical methods in physics - there exist a great lot of sources on this
subject, and I had no intention to compete. The reason for discussing
Basic Principles of Mathematical Modeling
27
foundational issues was to stir up the reader’s interest, inducing one to
address oneself to more precisely focused and professional sources.
Nonetheless, in this manuscript I tried to evade repeating well-known results,
numerously described in the vast literature, but whenever I still had to
address such results - usually while starting to describe a particular model or
a theoretical approach - I made an attempt to concentrate on those features
which seemed to me fresh and unexpectedly connected to apparently far
away subjects. Indeed, in physics and mathematics combined we primarily
find out about the existence of wonderful connections between things which
at first sight seem to be completely different. Some of these connections are
really intriguing. Today, well-known links exist between the theory of
dynamical systems and statistical mechanics, between the models of black
holes and thermodynamics, although these subject pairs are apparently
related to different fields of physics. There is now also a well-known interface
between superconductivity, cosmology and high energy physics, based on the
unifying idea of gauge invariance. Less known connections may appear quite
striking. For example, can one apply the notion of wave function in classical
statistical mechanics? What does classical signal analysis have to do with the
foundations of quantum mechanics? Another example of networked
physmatical domains is the presence of thick links between heat and mass
transfer (with countless engineering, geophysical, ecological, socio-
demographic, etc. applications), quantum mechanics, and geometric
topology, in particular the now fashionable7 Ricci flows. The latter describe
the spread of a tensor quantity, the Ricci curvature, which in distinction to
scalar quantities governed by the heat, diffusion and Schrödinger equations
manifests the behavior of a purely geometrical property. But the idea behind
all these models consisting in the dynamical smoothing out of high
concentrations is basically the same. Or, I was astonished to find out that the
Bogoliubov-Mitropolski expansion, well- known in the classical vibration
theory [242], can be efficiently applied to nanostructure analysis. The
burgeoning areas of biotechnology, nanotechnology, quantum engineering,
and quantum information processing are today closely converging. This
cross-border linkage of apparently diverse disciplines may serve as an
example of a physmatical network in action. One might notice that physics is
full of shared approaches and synthetic subdisciplines such as the interaction
of radiation with matter, astrophysics, geophysics, cosmology, etc. Even the
elementary study of physics and its relevant mathematics increasingly shows
that apparently disparate topics are in fact closely related. For example, let us
start from mechanical problems. Then we very soon come to a well-known
fact that Hamiltonian systems determine a vector field and solving the
Hamiltonian equations means finding the integral curves of this vector field.
This is known as finding the dynamics of a physical system. Only a slight
generalization of this simple physical situation leads to familiar linear algebra
notions and geometrical concepts: a first-order differential equation on some
manifold 𝑀 is a vector field on this manifold, and solving a differential
7 Mostly due to the famous recent results by G. Perelman.
28
Introduction
equation for given initial conditions is translated into finding a curve or,
rather, a family of curves. All this is of course trivial, yet there is an
unexpected link here, namely that we naturally come to the notion of a flow
for a vector field leading in its turn to quite sophisticated concepts of the
theory of dynamical systems with a multitude of applications. In other parts
of physics and associated to it mathematics, we have, for instance, the Hahn-
Banach theorem and then the concept of separation of convex sets, which
eventually leads to such ultra-practical things as the optimal control of
missiles. Still in other nodes of this physical-mathematical network we locate
the integral equations and Fredholm operators naturally connected with the
scattering of waves and particles and, although it may seem strange, with the
Navier-Stokes equations. Hence some people may find this text suitable for
acquiring some knowledge on the interaction between different branches of
mathematics and physics, starting from some seemingly familiar examples
contained in the book. In physmatics, due to its networking properties, it is
easier to think inductively or, roughly speaking, to generalize simple
examples - in contrast to the famous Sherlock Holmes’ method.
Although this book is about mathematical models and, predominantly,
mathematical models in physics, it contains many notions that appear too
inconcrete from a scientific point of view: nature, society, values, knowledge,
science, politics, policymakers, bureaucracy, democracy, scientific tribes,
history, intuition, conscience, God, faith, beauty, truth, freedom, etc. These
descriptive terms usually serve to label some phenomena with the purpose to
invoke associations and exploit human attitudes. Such objects form fuzzy sets
or are high-level entities existing in dedicated spaces, multidimensional and
of course not necessarily linear so that it would be difficult to define them
exhaustively with words. These and a great deal of other badly defined
concepts are appealing to unwritten laws. Moreover, such objects cannot be
in general reduced to “black-white” dichotomies 0-1, but involve complex
though essential details. More than that, the subsets of these fuzzy notions
which describe certain groups of people are not only very unspecific but, on
the one hand, impersonal while on the other hand too poorly defined to be
studied by really scientific methods. By the way, fuzzy notions are often used
not only in the humanities, but also in physics and even mathematics. For
instance, such words as local, very small, negligible, immediate vicinity and
their inverse i.e., global, very large, essential, far from, etc. are ambiguous
words, but their use seems to be unavoidable in science. So it would be wrong
to assume that “exact” sciences are based solely on numbers understood as
ordered sets; by the way, one may recall that numbers themselves originally
modeled physical (or rather biomechanical) objects such as human fingers.
Since this book is mainly devoted to building mathematical models in
physics, allow me to expand a little bit on the subject of modeling in this
introduction. Mathematical modeling in physics is a nontrivial interplay
between mathematics, physics and engineering in the study of complex
systems (see in the following chapter a slightly more extensive discussion of
the properties of models in physics). Any model in general is a simplification
of reality, with irrelevant details being ignored. It follows from here that when
Basic Principles of Mathematical Modeling
29
a model is used, it may lead to incorrect predictions when the neglected
details cease to be irrelevant. Thus, it is important to estimate the limits of
applicability of the model - the art largely mastered by theoretical physicists
and often overlooked in more applied disciplines and, strange as it might
seem, by computer scientists whose primary task is to appreciate errors. And
appreciating errors is indispensable for validating models.
When we are approaching a problem, we tend to do it in a biased, limited
way (recall the chrestomathic tale “The Blind Men and the Elephant”), with
some feature seeming the most substantial according to previous experience,
prejudices, stereotypes and other purely human factors. Understanding of
where a particular model fits within the overall theoretical description
enables one to estimate the model limitations. Moreover, it does not mean
that we shall always succeed in giving a useful answer by employing
mathematical tools. When one tries to solve a problem that one does not know
thoroughly, there is always the risk of building castles in the air. For instance,
it would be risky to model biomedical processes without elementary
knowledge of the subject area. I think that if one is interested in applying
mathematics, one should start first by studying carefully the problem one is
interested in and then learning the necessary mathematical theory. The use
of mathematics in physics is usually unavoidable, as well as in other areas
which claim to be “exact” (chemistry, engineering, some parts of biology).
Examples of disciplines that are definitely not “exact” are the major part of
medicine, psychology and philosophy. The bad thing about such disciplines is
that they are relying more on so-called “expert opinions” and too generally
understood “human experience” than on a solid reproducible base so that it
is difficult to controvert or falsify the vague statements of inexact disciplines.
One often forgets the truism that even in “exact” sciences the theoretical
solution to an actual problem must be confirmed by experiment.
Experimental proof is the strongest of arguments whereas reference to an
authoritative opinion is the weakest one.
30
Principles of Mathematical Modeling
2 Principles of
Mathematical Modeling
The world available to us may be defined as the sum of manifestations of the
systems having different levels of complexity. The systems whose complexity
(defined in any manner) exceeds our abilities by orders of magnitude cannot
be studied by exact methods, in particular, by mathematical means. In this
case, two possibilities come out: either to dissect the observed system into
subsystems of lower complexity, so that such subsystems could already be
studied by available mathematical techniques, or resort to fuzzy cognitive
representations. The first option is that of mathematical modeling, whereas
the second is closer to philosophy. In this book, predominantly the first
alternative is used.
Nonetheless, models of reality can be of vital importance also without an
exact mathematical form. When the Black Death - the plague - swept
relentlessly over Europe (it was several times, probably the most ferocious
pestilence attack was in the 14th century), there were three main models for
the cause of the plague: it was regarded as a medical event, as astrological
misfortune, or as a God’s punishment. Thus, there were roughly speaking
three respective classes of models explaining the plague cause: terrestrial,
celestial, and divine (religious). These three classes of models were not
independent. Terrestrial models, for example, were based on the ancient
Greek science represented, e.g., on the Aristotle’s “Meteorology” stressing
beside atmospheric phenomena the view that certain conjunctions of planets
(e.g., Mars, Saturn, Jupiter) would bring disaster, and on the Hippocratic
“Epidemics”, where the importance of astrology for medical practice was
discussed. We shall deal with astrological and religious models a little later;
now it may be noticed that terrestrial models were closer to contemporary
scientific concepts than the two other classes of models. Terrestrial models
discussed atmospheric events, weather, climatic variations, comets, meteors,
earthquakes as possible sources of poisonous gases spoiling the air and thus
causing illness. Rain, thunder, storm, wet winds could disperse ominous
vapors (for instance, produced by cadavers rotting in swamps). So the
essence of the model for the cause of the plague consisted in the notion of the
poisoned air entering the body, contaminating it and causing its organs to
disintegrate - quite a viable model even from the contemporary viewpoint. It
is clear that understanding the cause of the plague could substantially
increase chances to prevent its spread. We shall discuss below some models
of epidemics based on dynamical systems, here I wanted to emphasize the
importance of non-mathematical models for the human race.
Many people think that real science and technology begins only where
sophisticated mathematical methods enter the picture. I think it is just a
stereotype. Refined mathematical models of the world are not necessarily
Basic Principles of Mathematical Modeling
31
destined to play an instrumental part in life practice. For example, there is no
practical necessity to use geometric theorems in order to measure the plot
area: the value of a particular estate is only very approximately determined
by its area, “human factors” such as prestige or location are much more
weighty. For practical purposes, the ancient Egyptian formula, 𝑆=
𝑎+𝑐
2 +
𝑏+𝑑
2 ,
crudely expressing the area of a rectangle as the product of half-sums of the
opposite sides is quite sufficient. Moreover, using geometric theorems would
be counterproductive for a land surveyor, since they require more precise
measurement which does not produce a better estimate of the plot real value.
Actually, the dichotomy “mathematical models - cognitive models” is
linked to the models of language and semantic scattering in it. Each word in
any human language has a spectrum of different values (meanings). Such a
spectrum can be represented in the dictionary8, but if one reckons with a
multitude of nuances and gradations (which may be represented as a fine
structure of each semantic value) then one is inclined to think that discrete
models of the language may be too deficient. Therefore, to build up a model
of the language which would be adequate to its complexity one has to consider
continuous spectra of values. Cognitive models of the world exploit such
continuum spectra of verbal values, whereas mathematical models are
centered around discrete notions which can be more precisely defined and
thus expressed by equations.
We shall return below to cognitive models, some of them, as I have
mentioned, may be quite valuable. Nevertheless, one can often encounter an
utterly disdainful attitude to models which are not based on mathematical
equations, especially from the young people studying mathematical, physical,
or engineering disciplines. These mathematical extremists tend to decry
anything not containing mathematical formulas as a “bla-bla-bla”, not
understanding that mathematical notation and models are only related to a
comparatively low complexity level.
Therefore, the question arises, which seems to be common for all young
people who study “exact” sciences but are in fact attracted by the humanities:
is there in this latter part of the culture a concrete knowledge that must be
considered necessary for understanding the world? This question may be
rephrased in more operational terms: does the inductive method of scientific
research bring the results whose value may be as high as obtained by formal
deductive method favored by mathematicians? Traditional physics combines
both methods, but which one is more important? The question may probably
be formalized, but this is a prerogative of the philosophy or history of science.
I intentionally put it in vague terms, because I do not know the answer. The
only thing that seems to be certain: modeling requires a certain kind of people
possessing crossover skills and an intensive dialog between specialists
representing a variety of disciplines.
8 In fact only a discrete semantic spectrum can be represented in the dictionary. One can
imagine here a massive thesaurus such as Webster, Oxford, Roget’s, Larousse, Duden, Meyers,
etc.
32
Principles of Mathematical Modeling
Here, I would like to make a personal remark. For myself, ruminating
about mathematical modeling in general is simply an occasion to babble on
unspecified subjects, making idle recommendations. There are, however,
people who possess a natural aversion to diffuse quasi-philosophical and
other verbose texts. They can easily omit this chapter.
2.1 Basic Principles of Mathematical Modeling
This introductory section reiterates some traditional concepts which are
basic to the overview of connected mathematical models of physics described
in the subsequent chapters. The reader can find that I did not shy away from
truisms and general statements. I hope I shall be forgiven for these elements
of didactics. Moreover, this section contains standard methodological
recommendations about rudimentary modeling principles. I tried to make all
these observations a bit less boring by supplying them with some personal
remarks. The reader not much interested in contemplative hints on
unexpected interrelations between different domains of physics and
mathematics may easily skip this section.
Since a large part of science is based on models, it would be useful to learn
some general principles of their construction9, no matter how declarative the
enumeration of such principles may seem. To make their discussion less
boring I rely not on terse quasi-scientific “proofs”, but on simple examples -
for myself, I call this anti-didactic procedure “descholastization”.
In a scientific research or an engineering inquiry, in order to understand,
describe or predict some complex phenomenon, we employ mathematical
modeling, already reflected upon. In the mathematical modeling technique,
we describe the state of a physical, biological, economical, social or any other
system by time-depending functions of relevant variables. Then we attempt
to formulate, in mathematical terms, a basic law governing the phenomenon.
When we are interested in the system’s behavior, the law to be provided is
expressed by one or several differential equations relating the rate of
temporal change of the system variables to the components of some vector
field, analogous to a field of forces. Such a formulation of a modeling task leads
to a dynamical system. Then the dynamical system models some piece of the
world - of course, it may not model it very well, but the model’s success or
failure largely depends on the law that has been proposed to govern the
phenomenon. Usually, mathematical modeling can be naturally connected
with dynamical systems theory. It means that mostly the dynamical
processes, that is those evolving in time, are considered. This restriction
leaves out quasi-static models displaying the relationships between the
system attributes close to equilibrium (in physics, for instance, the Gibbs
thermodynamical equilibrium, in economics - the national economy models,
etc.). In general, such equilibrium states when all distribution functions do
not depend on time (see Chapter 4 of the book) are disregarded in dynamical
modeling. In this book I am focused mainly on physical models and reduction
of the latter only to dynamical systems is not necessarily relevant. One may
9 This was in fact the unique motivation to write the entire modeling chapter.
Basic Principles of Mathematical Modeling
33
note that, for example, the major part of quantum mechanics is devoted to
describing the spectral states of a quantum system i.e. is static by nature.
Present-time dynamic modeling of real systems such as weather forecast,
climate viewed as a physical system, transportation networks, energy grids,
genomics, etc. has become increasingly complex. To obtain trustworthy
solutions to these problems, especially real time results, intensive use of
computational resources (processors, memory, storage) is required. In
general, modeling can be performed in a variety of ways, with different
aspects of the situation being emphasized and different levels of difficulty
standing out. This is a typical case in mathematical modeling, which implies
that the process to be modeled is not necessarily described uniquely. It means
that modeling is not reduced to a set of rigid rules but rather provides a vast
field for transdisciplinary creativity. The reader I vaguely have in mind is a
person who strives to transfer the methods developed within a certain
scientific or engineering area to other branches of knowledge. Modern
mathematical modeling is a synthetic discipline embracing mathematics,
physics, computer science, engineering, biology, economics, sociology - you
name it. Mathematical modeling enjoys enrichment due to interdisciplinarity.
Although, as I have already mentioned, this book is mostly devoted to physical
models, the term “physics” is to be understood throughout the book in the
universal sense: most of the models in other fields of human knowledge are
increasingly using methods developed in physics (see Chapter 11).
When computers were not yet abusing the most part of your time, the
main tools of a mathematician were pencil, paper and a waste-paper basket. I
don’t remember who said that it was the waste-paper basket that
distinguished a mathematician from a philosopher, but I still think this is very
true even today, although mathematics has become much more scholastic
than previously, when it was closely connected with physics. Unfortunately,
as far as contemporary mathematical modeling goes, the waste basket should
be used much more intensely as a working tool.
One might note that nearly all books on mathematical modeling are
generally very eclectic and incorporate a variety of random problems as well
as disjoint hat trick mathematical approaches. The very notion of
mathematical modeling is hard to define - everything e.g., in physics is
mathematical modeling. So, this term is infected with vagueness.
Nevertheless, there exist certain common principles of modeling such as the
just mentioned “divide and conquer” principle. To evade some
methodological prolixity, which seems to be customary when speaking about
modeling, I may refer the reader to many good books on principles of
mathematical modeling ([1, 2, 13]) 10. Here, I mention just the following
counterpoints:
•
Qualitative vs. quantitative
10 The reader can also find some collection of modeling examples illustrating general
principles in https://www5.in.tum.de/lehre/praktika/comp_mod/SS02/modeling.pdf, where
modeling of dynamical processes is presented.
34
Principles of Mathematical Modeling
•
Discreet vs. continuous
•
Analytical vs. numerical
•
Deterministic vs. random
•
Microscopic vs. macroscopic
•
First principles vs. phenomenology
In practice, these pure types are interpolated and, like in musical
counterpoints, all parts of the model must be blended together as one, even if
each is a feature of its own. The ultimate difficulty in mathematical modeling
still persists: there cannot be a single recipe how to build a model. One always
has to make a choice: this implies rather the art than the science of modeling.
As in any art, some vague principles come into play, here e.g. the spirit of
Occam’s razor, which may be roughly formulated as “simple explanations are
preferable to complex ones”, should not be violated. This leads to a somewhat
deplorable situation: many models may be rejected not because they are
basically wrong but because they are too sophisticated. One can, in principle,
construct any kind of mathematical model irrespective of their relationship
to reality; such models may be non-contradictory and even beautiful (like in
the modern string theory), but the requirement of their experimental
validation may be substantially weakened. In this connection, I can recall an
aphorism ascribed to J. W. Gibbs and quoted in “The Scientific Monthly”,
December, 1944: “A mathematician may say anything he pleases, but a
physicists must be at least partially sane.” Mathematical trickery alone rarely
brings fruit in physics.
A mathematical model is not uniquely determined by the investigated
object or considered situation. There are usually a plethora of conditions, and
only one of them is selected. Moreover, selection of the model is dictated by
accuracy requirements. For example, in motion and surface planning models
that are very important in robotics a table may be represented as having a
rectangular or oval shape; in road traffic models a car is typically represented
either as a point or a rectangle; should one take into account atmospheric
influence on a falling body or not? The answer depends on the required
precision. When some difficulties arise due to an over-idealized mathematical
model of a given physical situation, the model can be changed or improved.
This is the approach most favored by physicists. The other approach, logically
opposite and adopted by many mathematicians, would be to address the
physical situation in the most general and possibly rigorous formulation right
from the start imposing restrictions as one proceeds. Such methodics may be
quite powerful (historical examples are the works by N. N. Bogoliubov, V. A.
Fock, H. A. Kramers, L. A. Vainstein), but often makes the treatment
unpalatable for physicists.
Core mathematics contained in the real physical models is not too
abundant. By “real”, I mean those models and theories that can be
corroborated or disproved by some experiment or observation. In this sense,
I do not think that a vast number of string theories are real, although some of
them might be quite “beautiful”. Incidentally, aesthetic criteria alone may well
Basic Principles of Mathematical Modeling
35
have nothing to do with reality: it may happen that physical phenomena are
often described by simple and symmetrical equations which can be
considered “beautiful”, but the reverse is not necessarily true: not every
“beautiful” (i.e. simple and symmetrical) expression pertains to physical
phenomenon.
It is trivial but not always held that mathematical models should not
contradict the fundamental laws of nature. Sometimes, however, this
requirement is weakened. Thus, some numerical models, for example, based
on the Runge-Kutta method may lead to energy non-conservation. The
number of particles or mass should be conserved, which is usually formalized
as the continuity equation; again, there exist numerical simulations, with the
continuity equation being violated or, at least, not exactly fulfilled. It is not
infrequent that the laws of physics are rejected in computer technology: one
can recall “magical” fluid simulations and animations in computer graphics.
In general, however, mathematical models must be tested against basic
laws of physics. Although an ever increasing number of people think that
mathematics is totally different from physics and therefore must be free from
constraints imposed by physical laws, I think nonetheless that mathematics
is a service subject, like the art of hieroglyphic painting in the Oriental
cultures, the content being still provided by natural sciences, in this case by
physics. From here it follows that many principles successfully employed in
physics, such as symmetry and scaling, should be taken into account in the
domain of mathematical modeling and widely exploited in order to reduce the
complexity of real-life models. In the spirit of our physmatics, one can also
extensively use analogies, e.g., chemical reactions are analogous to
competition models in biology and economics.
The latter fact points at the universality of certain techniques: different
objects are described by the same mathematical model. The trivial example is
given by the ubiquitous oscillator model which can be equally well applied in
mechanical engineering, e.g., to explore the car body vibrations, in electrical
engineering, e.g., to study the passage of signals through filters, and in physics
- to build up models based on the concept of elementary excitations. On the
other hand, the oscillator model participates in the intriguing duality between
the Hooke’s law and the Newton’s or Coulomb 1/r forces. In general, the
oscillator model seems to be the most favored by physicists, and I shall dwell
on it many times in various contexts.
Universality of some models is a manifestation of numerous physmatical
polymorphisms, somewhat mysterious analogies hinting at the fundamental
unity of all parts of physmatics. This may imply that increasing specialization
and fragmentation of physical and mathematical sciences into tiny domains
can become the principal obstacle to their development, like bureaucratic
subdivision of the world into nationalistic states has become the main
impediment
to
global
progress.
Being
universally
interconnected,
physmatical models almost always produce sub models, i.e., even apparently
simple models are organized on hierarchical principle and can be step-like
refined. It is here that the vague boundary between a model and a theory lies:
a theory incorporates a large number of interconnected models, just as a
36
Principles of Mathematical Modeling
manifold incorporates a number of coordinate patches being glued together.
The presence of sub models naturally leads to modularity and reusability -
concepts that are well-known for software developers.
When treated more or less seriously, mathematical modeling should be
endowed with some unifying paradigm which would be operational
throughout the whole domain (ideally) or, at least, constitute the backbone of
it. For instance, the finite-dimensional linear algebra serves as a backbone for
numerics and so-called scientific computing. In the domain of mathematical
modeling, the dominant paradigm is a greatly simplified dynamical systems
theory. We shall discuss dynamical systems in some detail in Chapter 4 largely
devoted to classical deterministic mechanics; here I merely state some salient
features of this, now more or less traditional, approach to mathematical
modeling.
When speaking about mathematical modeling techniques, V. I. Arnold
uses the terms “hard” and “soft” models giving an example of multiplication
table as a hard model. One can find numerous examples of soft models in life,
for instance, the statements of “the more the less” type as well as given by
proverbs, aphorisms, and other common sayings. Thus the statement “A man
with a watch knows what time it is, a man with two watches is never sure”
may be considered an example of a soft mathematical model.11 Nevertheless,
one should remember the famous aphorism by Voltaire “A witty saying
proves nothing”, so the value of common sayings as soft models is very
limited.
2.2 Mathematical Models in Physics
Almost a century ago, Lord Ernest Rutherford somewhat arrogantly
subdivided science into physics and collecting stamps. This contemptuous
remark can be interpreted in the following way: “physics” designates a group
of experimentally-oriented sciences accompanied by a well-developed
theoretical overlay. “Collecting stamps” denotes a group of disciplines that
accentuate the descriptions and accumulation of data, for example, old-time
biology, orthodox medicine, geography, and many others. Models accepted in
such disciplines are mainly reduced to classifications, which seems to be the
primary stage of a good theory, and one can observe that traditional
disciplines that could be recently attributed to the “stamp collection” class
continuously evolve towards “physics” (although with different speed). The
theory wrapped over the “raw” reality is quantitative and endowed with
predictive capabilities. Thus although the principal results are obtained
through observations and dedicated reproducible experiments (as performed
by Galileo, but in modern times relying on powerful scientific
instrumentation), one should not think that theoretical models mostly fulfil a
decorative function in physics as it was frequently claimed by administrators.
In fact, experimental achievements and modeling results are difficult to
separate, and it is this blend - not the experimental results alone - that can be
11 I like especially the saying “Power tends to corrupt, absolute power corrupts absolutely”
which may be regarded as a soft model with a very large area of applicability.
Mathematical Models in Physics
37
converted into ubiquitous technologies. Unfortunately, the role of
mathematical knowledge in technologies is often hidden.
Modeling is an art of simplifying a complex system, and mainly due to this,
models play a very important part in physics and technology. A good model
enables one to understand the most fundamental physical parameters of a
complicated process. Mathematical modeling may be loosely defined as an
idealized description of reality constrained by mathematical concepts. All
approaches to study the reality, both in the form of natural or behavioral
sciences, which are using mathematical tools may be declared as
mathematical modeling. Many people even assert that there is no
mathematical modeling, but just instances of applied physics and
mathematics under a fancy - and sometimes fashionable - name. People
claiming that they pursue some scientific purposes are just replacing a natural
(or social) object by its model and studying the latter. There is an element of
deceit in this process. No model fully describes the studied phenomenon, and
models of physics are no exception. Therefore, by the way, the question of a
model’s applicability is usually highly nontrivial. The already mentioned
successful solution by Newton of the mysterious Kepler problem inspired
thinkers and philosophers, primarily Laplace, to make bold generalizations
and to develop the mechanistic model of the universe. There was no room for
randomness in this clockwork vision of the world. The biggest challenge of
biology, medicine, society, and economics is that randomness leads to fine-
tuned processes (in time) and structures (in space). It means that the notion
of the world as a machine seems to be inadequate. In fact, fully mechanistic
nature would be incompatible with life, where evolution gains order through
fluctuations. Classical science mostly studied systems and their states close to
equilibrium, and that allowed one to construct a beautiful collection of
comparatively simple physical models for the world. Such models depicted
the systems that reacted on perturbations more or less predictably: these
systems tend to return to equilibrium (in the parlance of statistical physics,
they evolve to a state that minimizes the free energy). However, systems close
to equilibrium can describe only a small fraction of phenomena in the
surrounding world; in fact, it is a linear model. In contrast, nonequilibrium
systems are ubiquitous in nature. Any system subject to a flow of energy and
matter can be driven in the nonlinear mode, far from equilibrium. For
example, open systems such as the Earth, living cell, public economy or a
social group exhibit highly complex behavior which is hard to model
mathematically using the methods adapted mostly to mechanical patterns.
Most of the processes in the open systems far from equilibrium are
interrelated, nonlinear, and irreversible. Often a tiny influence can produce a
sizable effect. One more typical feature of systems far from equilibrium is that
they can lose their stability and evolve to one of many states. This behavior
appeared so “unphysical” from the habitual viewpoint that many orthodox
physicists were inclined to despise those colleagues who were trying to
consider systems far from equilibrium, especially those beyond physics. Yet,
to model the processes in the real world, one must learn how to describe the
38
Principles of Mathematical Modeling
systems far from equilibrium. We shall deal with systems far from
equilibrium many times in this book.
Physicists generally dislike the term “mathematical modeling” because of
its overwhelmingly general applicability. Everything can be declared
mathematical modeling. This is, of course, true. The most enthusiastic
proponents of mathematical modeling were mathematical physicists,
specifically those belonging to the well-known school of A. N. Tikhonov and
A. A. Samarski. Unfortunately, I could not find any good cliometric work on
mathematical modeling in physics where the popularity rises and falls of this
concept would be monitored. Neither could I find a quasi-definition of what a
mathematical model really is. Such a definition will be probably useless, yet I
would prefer to have a unified consideration. What I understand by
mathematical modeling in this book is building compartmental fragments of
our representation of reality based on clear-cut assumptions and discussed in
mathematical terms, predominantly with differential equations. One must
remember that models are not reality and should not be perceived as such. In
some extreme cases, models may have nothing to do with reality. There are
two favorite mathematical models which play an outstanding role in physics:
the harmonic oscillator in its linear part and the logistical model in its
nonlinear part. These two privileged models are at the same time simple and
rich, giving rise to many theories and connecting diverse domains. I shall
discuss both of them thoroughly on a number of occasions. In the single-
electron energy band theory of solid state, the favorite model is that of Kronig
and Penney, which beautifully illustrates the spread of individual energy
levels and zone formation. There also exist other favorite models which are
applied in many areas of science and we shall play with them as well,
however, not very thoroughly. Here I typically start from a qualitative
discussion of a physical phenomenon to be modeled, then produce physical
examples and arguments and only after that I attempt to provide the
mathematical results.
By the way, one can easily find a situation when the qualitative
discussion, though seemingly plausible, gives a wrong answer. For instance,
there is a folklore question: you have a hot cup of coffee and want to cool it to
the drinkable temperature, will the coffee be cooled faster if you leave it as it
is and pour cold milk into it right before drinking than if you pour the same
amount of cold milk right away? The usual qualitative answer is ”yes”,
presumably because the difference of temperatures is higher if you leave hot
coffee intact, but let us see what mathematics says. Let hot coffee have the
initial temperature Θ0 which is higher than the environment temperature 𝑇0
so that Θ0 −𝑇0 > 0. To produce a mathematical model corresponding to the
coffee-with-milk situation we may assume that cooling subordinates to a
linear law (usually ascribed to Newton):
−𝑑𝑇(𝑡)
𝑑𝑡
= 𝑘(𝑇(𝑡) −𝑇0),
Mathematical Models in Physics
39
where 𝑇(𝑡) is the coffee temperature at moment 𝑡. This is a simple first-
order inhomogeneous differential equation with a parameter 𝑘. One can
also consider for simplicity the milk to be in equilibrium with the
environment and has therefore the ambient temperature 𝑇0 . This
assumption is not essential and serves only to simplify the formulas. The
general solution to the respective homogeneous equation is 𝑇(𝑘, 𝑡) =
𝐶𝑒−𝑘𝑡, and if the ambient temperature 𝑇0 is constant, we have a physically
natural particular solution to the inhomogeneous equation, 𝑇(𝑡) = 𝑇0 .
Then the general solution of the inhomogeneous equation corresponding
to the initial temperature 𝑇(0) ≡Θ0 is
𝑇(𝑡) = 𝑇0 −(Θ0 −𝑇0)𝑒−𝑘𝑡
To further simplify our model we may assume that coffee and milk have
the same density as well as specific heat values. Corrections in terms of the
respective density and specific heat differences would be an unwarranted
excess of accuracy. Thus, we have
𝑇𝑉𝑐+ 𝑇0𝑉𝑚−(𝑉𝑐+ 𝑉𝑚)Θ −VΘ,
where 𝑉𝑐, 𝑉𝑚 are the partial volumes of coffee and milk, respectively, 𝑉= 𝑉𝑐+
𝑉𝑚 is the total volume, and Θ is the temperature of coffee with milk. Let us
now, to answer our first question, compare two cases.
1. We leave our coffee cooling by itself (according to the above equation
for 𝑇(𝑡)) and pour milk into it just before drinking. The coffee-with-milk
has the temperature
Θ1(𝑡) = 𝑉𝑐(𝑇0 + (Θ0 −𝑇0)𝑒−𝑘𝑡) + 𝑉𝑚𝑇0
𝑉𝑐+ 𝑉𝑚
2. We pour milk into the hot coffee right away, at 𝑇= 0, then the starting
temperature becomes ΘΘ̃0 (a physicist might say is ”renormalized”) where
Θ̃0 = 𝑉𝑐Θ0 + 𝑉𝑚𝑇0
𝑉𝑐+ 𝑉𝑚
,
and in the process of cooling the coffee-milk mixture reaches by moment 𝑡
temperature value Θ2
Θ2(𝑡) = 𝑇0 + (Θ̃0 −𝑇0)𝑒−𝑘𝑡.
Now, the most astonishing thing happens. By simple algebraic
manipulations one can show that Θ1(𝑡) = Θ2(𝑡) for all moments 𝑡.
Indeed,
40
Principles of Mathematical Modeling
Θ1(𝑡) = 𝑇0 + 𝑉𝑐
𝑉(Θ0 −𝑇0)𝑒−𝑘𝑡= Θ2.
In other words, it does not matter whether you pour milk in your coffee
first or last thing before drinking.
The model described above represents an example of semi-useless
models. It is just a simple mathematical record of an everyday observation.
There exist, however, totally useless models such as simulation of smoke rings
puffed by a cigarette smoker. Constructing an exhaustive mathematical model
for such a process would involve considerable difficulties, yet probably its
analysis would produce a superfluous knowledge. There exist probably many
instances of such extraneous knowledge, but the difficulty is to a priori
diagnose it as such.
The reason why I am writing about mathematical modeling in physics
at all is that, contrary to the general view that physics is centered around the
laws of nature, I observe that it deals mostly with their mathematical models.
Thus, Newton’s law is a successful mathematical model that can be applied to
a body moving under the influence of other bodies in the low-energy limit,
rather than the universal law of nature such as, e.g., certain symmetry
properties of the universe, more exactly, of its part available to observations.
Likewise, the Schrödinger equation is a typical mathematical model, also
quite successful when applied for description of atomic and subatomic scale
particles in the low-energy limit. The wave function standing in the
Schrödinger equation is not directly observable and is a typical attribute of a
mathematical model. An extreme case of such a model - the wave function of
the universe, for example, in the form suggested by J. Hartle and S. Hawking
[67], see Chapter 9, is just an interesting speculation and hardly a verifiable
model. In this sense, the model of Hartle and Hawking is more like a
philosophical concept expressed in the mathematical language (untypical of
philosophers) rather than a physical object, that is the one to be
experimentally validated. This kind of model building reflects a current trend
in physics, where the quest for experimental proof is largely subdued. A huge
generation of armchair physicists, focusing almost exclusively on formalism
and symbols, produce numerous pieces of writing without much attention to
experimental techniques. For them there is no question of participating in and
observing the mundane work of experimentalists, with all its tinkering and
small cunnings. Armchair physicists are mainly motivated to express
themselves so that the subjects are treated for their own sake and glory, with
little attention paid to firm experimental knowledge or applications. The
produced symbolic forms can also create powerful myths12 and are closer to
art than to physics. Moreover, it is still a philosophical extrapolation, a widely
spread belief that the models developed by humans and rather arrogantly
12 Examples of such powerful theoretical myths are, e.g., anthropic principle, broadly
interpreted many-worlds interpretation of quantum mechanics, and possibly superstring/M-
theories, to say nothing of such pseudoscience concepts as time travel, teleportation of
macroscopic bodies, or military use of torsion fields.
Mathematical Models in Physics
41
called the “Laws of Nature” should be necessarily applied to the entire
universe. This statement may be considered a belief because it has never been
proved, only examples are generally used to corroborate it, and isolated facts
by themselves mean very little. Nevertheless, to say something against the
belief in universality of our “Laws of Nature” is merely a mauvais ton, for
example, in the community of astrophysicists, although nobody can deny that
astrophysics in general achieved really great successes13. Yet, when one stops
questioning the truth, it easily transforms into dogma.
Once we have mentioned the Schrödinger equation in the modeling
context, let me make the following remark. The situation with the Schrödinger
equation is very indicative: this equation works very well in a great number of
situations so that the “shut up and calculate” approach to quantum mechanics
has become quite popular among physicists. Yet we know that the
Schrödinger equation is unsatisfactory in many respects. For example, it
describes interaction between the particles purely phenomenologically, with
a coefficient 𝑈(𝐫), it does not account for the quantum vacuum, it is not at all
trivial to get the classical limit, there are still many questions in regard to the
so-called Bohmian version of quantum mechanics, etc. We shall discuss the
limitations of the Schrödinger equation in Chapter 6.
The greatest question of all is, probably: what are the ultimate laws of
physics? And I don’t think anybody has ever provided the final answer,
although there have been many distinguished candidates. This question may
be answered in the language of mathematics, like Newton’s law was
formulated as a mathematical model, but one cannot exclude that the answer
will be provided in some other form. It is possible, as Professor John Wheeler,
an outstanding physicist famous for his unexpected thoughts, once predicted
(see section “Prognosis” below), that if and when people eventually happen
to learn the ultimate laws of physics they will be astonished that these laws
had not been known from the beginning - so obvious they will look.
I have already noted that nobody in fact can be certain that the ultimate
underlying laws of physics from which everything we know can be derived do
really exist. Even if such a set of final physical statements does exist, it is not
obvious that it will be useful for concrete applications. The obvious example
is thermodynamics as an engineering discipline. For engineering applications,
thermodynamics can be based on the 17th century phlogiston model.
Phlogiston, a hypothetical fluid conjured up only to explain the spread of heat,
was a typical mathematical - not physical - model. Indeed, the heat propagates
in matter in the same way as a fluid diffuses through other media: equations
describing both processes are the same. Therefore, an ad hoc model of
phlogiston could satisfactorily describe plenty of idealized thermodynamic
processes. However, the fact that this model was only mathematical and not
physical (or ”physmatical” - in our fancy language) eventually backlashed. The
13 Actually, it is in astrophysics that processes are abound when our conventional models for
physical laws cease to hold. For instance, it is unlikely that one can apply Newton’s law of motion
to describe the phenomena accompanying the collision of black holes located in the centers of
galaxies.
42
Principles of Mathematical Modeling
phlogiston
model
encountered
insurmountable
difficulties.
Nobody
succeeded to observe the fluid called phlogiston or to measure its properties
such as density, even in indirect experiments. A lot of heat-generating events,
e.g., friction, dissipation in general, electric discharge, etc. could not be
explained by the phlogiston model. So, the latter had to be abandoned in favor
of the molecular-kinetic theory.
I think the phlogiston history is rather instructive, because a relatively
successful mathematical model had to gracefully fade away, succumbing to a
more physically compelling theory. Incidentally, the same process, but
perhaps more painful, was observed in connection with the model of ether.
The phlogiston model, a clever gadget which was nonetheless totally wrong
from the physical point of view, would have been more adequate than
molecular dynamics models of the future, when all molecular trajectories
might have been computed. However, now we know that even if one can, in
principle, predict the behavior of each molecule, the value of such a prediction
for engineering applications will be very low - in engineering models based
on thermodynamics people are interested in such quantities as temperature
or enthalpy and not in molecular trajectories or quantum particles. For every
phenomenon, one may have many possible levels of description, which makes
the physmatic modeler’s life more interesting but not easier. Physics looks as
not simply a good reason - it is much less than that. It is reason constrained
by experiment, so one must give pragmatic answers and withdraw from being
carried away by mathematical possibilities. The goal of physically-based
mathematical modeling (PBMM) may be formulated as to abstract from the
tremendous wealth of variables leaving only those relevant to the research
objectives. We can very well see the implementation of this principle in the
works of masters of physics, a good example for me personally was the
collection of papers by J. W. Gibbs ([71]), see more minutes in Chapter 7. This
is in fact an intuitive construction of a multilevel hierarchical system of
models. It is interesting to notice that modern - exponential - representation
of numbers is successful in computational techniques just because it is
hierarchical, in distinction to ancient Roman representation that makes
arithmetical operations cumbersome. Nowadays the multilevel methods with
different degree of detail and hierarchical ladder of abstraction are
increasingly popular in computer simulations (integrated circuits, traffic,
fluids, fusion plasma, weather, complex boundaries, etc.). One must, however,
be very careful about which details are discarded. A single erroneous
assumption about a detail that was arbitrarily assumed to be irrelevant can
totally invalidate a model. One can see this effect in computer simulations
where multitudes of beautiful images can be obtained - quite easily nowadays,
but the physical or engineering value of such visualized artifacts might be
quite doubtful, to the chagrin of many computer scientists. It is not a
coincidence that computer science (informatics) departments at the
universities
are
usually
not
in
high
demand
among
physicists,
mathematicians or mechanical engineering people - the specialists in the
respective fields prefer to develop their own computer simulations, often less
artificial and based on solid knowledge of the subject matter.
Mathematical Models in Physics
43
One of the fundamental problems of classical physics was the stability of
atoms. Why do the electrons not fall onto the nucleus? This notorious problem
plagued physics in the beginning of the 20th century. We shall illustrate the
instability of the “classical” atom as a good example of a local model,
apparently free of contradictions but nonetheless producing wrong results.
This wrong result was an indication that a more global model or the whole
underlying theory should be modernized or abandoned. The point electron
had already been discovered (by J. J. Thomson) and the model of point nucleus
had been experimentally corroborated by E. Rutherford. To explain the
stability of atoms, an entirely different approach should be summoned, and
indeed, this fact was explained by quantum mechanics which is based on
totally different mathematical models (see Chapter 6). Speaking very roughly,
the stability of the atom is the consequence of noncommutativity of quantum
mechanical operators, in this case of nonrelativistic kinetic energy of
electrons and Coulomb potential energy between the charged particles. The
stability of the atom may be formulated as the existence of a finite lower
bound for the energy of electrons in an atom. Due to noncommutativity of
kinetic and potential energy operators, any attempt to make the electrostatic
energy very large negative would require that an electron be localized near
the nucleus, but this would result in even larger increase of the positive
kinetic energy. In short, one of the first successes of quantum mechanics was
that it explained the stability of the atom. Now, let us see why classical models
are inadequate even at the naive atomic level.
As mentioned before, physical systems can be roughly separated into two
classes: particles and fields, these are two basic models. This separation is
really very crude, primarily because there is an overlap, for example, when
particles are treated as field sources or excitations of some gauge field. We
shall discuss the difference between particles and fields more thoroughly
when talking about fermions and bosons and their possible mixing within the
supersymmetry framework. Although, to frequent complaints on the part of
mathematicians, there seems to be no good definition of the notion “field” in
physics, each physicist intuitively understands the main difference between
particles and fields which lies in the number of degrees of freedom. Any
physical system consisting of a finite number 𝑁 of particles has only finite
number of degrees of freedom, 𝑛 ≤ 3𝑁. Recall that the number of degrees of
freedom is defined as dimensionality of the configuration space of a physical
system, e.g., a system with 2 degrees of freedom is 𝐱̈ = 𝐅(𝐱, 𝑡) where 𝐅 is a
plane vector field, 𝐱∈ℝ𝟐. The field, on the contrary, is characterized by an
infinite number of degrees of freedom. We shall discuss the implications of
this fact in Chapters 5 and 9.
Although in the foundation of mathematical model building lies the
reduction of the complex initial real-life problem to some idealized scheme,
typically with clearly defined input and output, which can be treated by
comparatively simple mathematical means, many models in physics and
technology are by necessity bundled - they incorporate concepts from
different fields of physics. For instance, cosmological models as a rule cannot
be compartmentalized to just general relativity; one has to include quantum
44
Principles of Mathematical Modeling
mechanics, statistical physics, and sometimes high-energy physics
considerations. In particular, different models of phase transitions involving
cosmological phase transitions, which have given rise to inflationary models,
may be reiterated. Superconductivity (which is also a model of phase
transition) is also based on bundled models, mostly combining those of
quantum mechanics, solid state, electrodynamics and statistical physics.
These inter-field bundles make mathematical models in physics rich and
flexible. We shall illustrate the richness of the model bundles, in particular,
while discussing the interaction of radiation (both electromagnetic and
corpuscular) with matter (Chapter 8).
As I have already mentioned, all physical laws are, in today’s terminology,
just mathematical models, although underlying ideas are not necessarily
formulated in mathematical terms. The final questions are often put not by
physicists, but by philosophers though maybe in a vague form. There is the
reverse trend to canonize mathematical models, make a fetish of them. A
model, even an accurate simulation, is only a rehearsal of some reality, there
may be many conceivable realities and, in particular, those that people might
not have devised without modeling. In practical terms, models aim to help
technologists (in the broad sense) predicting the operations and evade
possible pitfalls. Therefore, one should be reminded of some risks. Quite
often, people construct would-be good mathematical models that have only a
very distant relationship to reality. I - and probably many other persons - have
seen it many times. In such cases, mathematical modeling gives a wrong idea
of what it means to solve an actual problem. A mathematical model is not
uniquely determined by the investigated object or situation, and selection of
the model is dictated by accuracy requirements. Examples of this relativism
of models are abundant. In our everyday life, if we look at a table, its
representation as rectangular or not depends on our practical needs; in
ballistics, we may take into consideration or totally disregard the influence of
the atmosphere; in atomic physics, it is relatively seldom that we have to
consider finite dimensions of the nucleus; in military planning, one may
regard a variety of models with different degree of “hardness”, for instance,
harmonic oscillator without attenuation is “harder” than oscillator with
damping and nonlinearity. More exotic examples may cite regular armies
(linear models), and the presence of partisans or terrorists (nonlinear
models). Some results can be easily obtained using one model, but very
difficult within another approach. The fact that it would be possible to find a
nice solution to a highly simplified model we have built does not at all mean
that we were able to obtain a practical solution to the problem we actually
face in the real world.
So, one must be careful in taking the current physical laws as something
invariable and firmly established. Since the laws of physics are formulated as
mathematical models for the current level of knowledge, they can be
eventually corrected. Physics is an experimental science, and any of its theory,
presumably rigorous and absolutely exact, must be from time to time
experimentally validated. However, no experiment can have an absolute
precision, an experiment’s accuracy is inevitably limited (e.g., due to
Ten Worlds of Physics
45
technological constraints, noise, etc.). One may always expect a violation of
the established views, an indication at novel interactions 14, necessity of
radically new mathematical (algebraic or geometric) representations. Even
the important “correspondence principle” which is a statement that each new
theory must incorporate the old one as the limiting case can be broken -
nobody has ever proved its absolute inevitability, and nobody knows its area
of application. This principle may well be just a belief “adopted by repetition”
or an obscurantist stereotype, even though contemporary physics essentially
uses it to test new theories.
2.3 Ten Worlds of Physics
In principle, the following topics comprise the main part of today’s physics:
quantum mechanics, with its applications in atomic, nuclear, particle, and
condensed-matter physics; relativistic quantum theory including the concept
of a photon, the Dirac equation, electron-photon interaction (QED), and
Feynman diagrams; quantum fields; and general relativity. Elementary
working knowledge of these topics would be sufficient for a solid foothold in
the physical community. However, this standard repertoire is very limited,
and I shall try to list the major concepts which, in my view, constitute the real
bulk of physics. I call these major concepts “worlds of physics” to be organized
as
1. The classical world
2. The thermal world
3. The nonequilibrium world
4. The continuum world
5. The electromagnetic world
6. The plasma world
7. The quantum world
8. The high energy world
9. The relativistic world
10. The cosmological world
These ten worlds constitute a backbone of physics. Some of them will be
discussed in this book more or less thoroughly, others (e.g., the continuum
world or the plasma world) will be touched upon only superficially. Of course,
the above list is highly subjective and not exhaustive. Below I ventured to
bring the main topics inside each item. All these internal sub lists are also
14 I am not hinting at profound studies of notorious torsion fields of course, because they are
small and their interaction with matter is negligible, but looking for new forces is a legitimate
physical task. The flip side of the coin is always pseudoscientific fantasies.
46
Principles of Mathematical Modeling
open and far from being complete, anyone can complement them in
accordance with their tree-like structure. In the following lists, I give only a
telegraph-style account of the main items, in the respective chapters we shall
discuss some of the concepts only mentioned here in more detail.
2.3.1
The Classical World
The classical world of physics is based on the following key notions:
•
The Galileo group (inertial systems)
•
Newton’s law of motion (classical limit of special relativity and
quantum mechanics)
•
Newtonian gravity (classical limit of general relativity)
•
The Kepler problem (rotation of planets about the Sun)
•
Potential fields, classical scattering
•
The Euler-Lagrange equations
•
Variational schemes
•
Noether’s theorems and conservation laws, conservative systems
•
The Hamiltonian equations, Hamiltonian flows on symplectic
manifolds
•
The Hamilton-Jacobi equation
•
Motion on manifolds, constraints
•
The Liouville theorem
•
Key figures: Galileo, J. Kepler, I. Newton, L. Euler, J. L. Lagrange, W.
R. Hamilton.
2.3.2
The Thermal World
•
Classical thermodynamics (equilibrium)
•
The nature of heat, temperature, heat transfer
•
Mechanical work and heat, interconversion, engines and cycles
•
Heat capacity (𝐶=
𝑑𝑄
𝑑𝑇)
•
Laws of thermodynamics, thermodynamic potentials
•
The concept of entropy, reversible and irreversible processes
•
Entropy production
•
Thermochemistry, chemical reactions
•
Equations of state
•
Phase transitions, Ginzburg-Landau model
•
Low temperatures, superfluidity and superconductivity
Ten Worlds of Physics
47
•
Heat as the particles motion, Maxwell distribution, statistical
mechanics
Key figures: L. Boltzmann, S. Carnot, R. Clausius J.-B-J. Fourier, J. W. Gibbs,
V. L. Ginzburg, J. P. Joule, L. D. Landau, A.-L. Lavoisier, J. C. Maxwell.
2.3.3
The Nonequilibrium World
•
The Liouville equation, Gibbs distribution
•
Open systems
•
Kinetic equations, Boltzmann equation, Bogoliubov’s hierarchy
•
Diffusion, Langevin equation, Fokker-Planck equation
•
Fluctuation-dissipation theorem (FDT)
•
Linear response theory, Kubo formula, Onsager’s reciprocal
relations
•
Multiple scattering theory
•
Nonequilibrium phase transitions, time-dependent Ginzburg-
Landau model
•
Classical stochastic models, nonlinear regime, branching and
bifurcations, stability of nonequilibrium stationary states, attractors
•
The Poincaré map, logistic model
•
Dynamical chaos, indeterminism (impossibility of predictions)
•
Dissipative structures, order through fluctuations, Turing structures
•
Chiral symmetry breaking and life
Key figures: N. N. Bogoliubov, L. Boltzmann, J. W. Gibbs, Yu. L.
Klimontovich, N. S. Krylov, R. Kubo, L. Onsager, A. Poincaré, I. Prigozhin, D.
Ruelle, Ya. G. Sinai, R. L. Stratonovich
2.3.4
The Continuum World
•
The Euler equation
•
The Navier-Stokes equation
•
Hyperbolic flow equations, shock and rarefaction waves
•
Compressible gas dynamics and supersonic flows
•
Self-similar models and explosions
•
Turbulent flows and the models of turbulence
•
Elastic solid models
•
Viscoelasticity, plasticity, composites
•
Seismic ray propagation and seismic ray theory
•
Acoustics, sound wave/pulse excitation, propagation and scattering
48
Principles of Mathematical Modeling
•
Detonation and flames, propagation of fires
•
Superfluidity
Key figures: Archimedes, D. Bernoulli, L. Euler, A. N. Kolmogorov, L. D.
Landau, B. Pascal, Rayleigh (J. W. Strutt), G. Stokes, Ya. B. Zel’dovich.
2.3.5
The Electromagnetic World
•
Maxwell’s equations
•
Laplace and Poisson equations
•
Interaction of electromagnetic (EM) fields with matter
•
Electromagnetic response of material media
•
Linear and nonlinear susceptibilities
•
Linear and nonlinear optics
•
Atoms and molecules in the electromagnetic field
•
Electromagnetic wave and pulse propagation
•
Diffraction and scattering of electromagnetic waves
•
Electromagnetic radiation
•
Rays of light, asymptotic theories, coherence of light
•
Photometry and colorimetry
Key figures: A.-M. Ampère, M. Faraday, C. F. Gauss, O. Heaviside, J. C.
Maxwell, Rayleigh (J. W. Strutt).
2.3.6
The Plasma World
•
The plasma dielectric function, linear waves in plasma
•
Screening in plasma, correlations of charged particles
•
Hydrodynamic models of plasma
•
Distribution functions
•
Kinetic models of plasma, collision integrals of Boltzmann, Landau,
Klimontovich, Lenard-Balescu, etc.
•
Collisionless plasma, a self-consistent field model (the Vlasov
equation)
•
Plasma in external fields, the magnetized plasma
•
Landau damping
•
Theory of plasma instabilities
•
Quasilinear and nonlinear models of plasma
Key figures: P. Debye, Yu. L. Klimontovich, L. D. Landau, I. Langmuir, A. A.
Vlasov.
Ten Worlds of Physics
49
2.3.7
The quantum world
•
The particle nature of electromagnetic radiation, the Planck
hypothesis
•
The duality of light, photoelectric effect (A. Einstein, 1905)
•
The Bohr atom
•
The de Broglie hypothesis, hidden parameters discussion, quantum
trajectories
•
The Schrödinger equation, wave functions
•
Observables, measurements, probability amplitudes
•
Wave packets, Heisenberg uncertainty relations
•
The Heisenberg-Weyl algebra, representations of compact Lie
groups (H. Weyl)
•
The theory of unbounded self-adjoint operators (J. von Neumann),
Hilbert space
•
Rotation group representations, spin
•
Sturm-Liouville problem, discrete spectrum, eigenfunction
expansions, Green’s functions
•
The density matrix, the Wigner function (E. Wigner, 1932), Husimi
and tomographic representation
•
Unitary evolution, semigroups
•
Eigenvalue perturbation theory, iterative procedures
•
Quantum-classical correspondence, decoherence, canonical
quantization
•
Asymptotic expansions, semiclassical limit
•
Scattering theory, S-matrix, continuous spectrum
•
Integral equations, inverse problems
•
Decaying states, resonances
•
Periodic potentials (F. Bloch, L. Brillouin, H. A. Kramers), Floquet
and Hill equations
•
Solid state physics, semiconductors, transistors, engineering
applications of quantum mechanics
•
Many-body problems, second quantization, elementary excitations,
condensed matter physics, Coulomb energy, thermofield dynamics
•
Ergodic potentials, Anderson localization
•
New developments, EPR and hidden parameters debates, Bell’s
inequalities, Bohm version
50
Principles of Mathematical Modeling
•
Quantum computing
Key figures: H. Bethe, N. Bohr, L. de Broglie, P. A. M. Dirac, A. Einstein, V.
A. Fock, G. A. Gamov, W. Heisenberg, L. D. Landau, J. von Neumann, W. Pauli,
M. Planck, E. Schrödinger, E. Wigner
2.3.8
The high energy world
•
Strong (nuclear) interaction, π-mesons, exchange forces, Yukawa’s
model
•
Resonances, super multiplets
•
Baryons, mesons, hyperons
•
CPT-theorem, group concepts in particle physics
•
K-mesons, particle mixing, C and P non-conservation, CP and T
violation
•
Isospin, strange particles, “elementary particle zoo”, SU(2), SU(3) -
first attempts of Lie group classification
•
Early quark models (1960s), color, flavor, charm, etc.
•
Hypercharge, Gell-Mann - Nishijima relation
•
Cross-sections, formfactors, S-matrix, current algebra
•
J/ψ-meson, confirmation of charm, quark-antiquark system
•
Quark-quark interaction through gluons, quark-gluon plasma
•
Quantum chromodynamics (QCD) confinement, deconfinement,
asymptotic freedom
•
Electroweak interactions, the Standard Model, W- and Z- bosons,
spontaneous symmetry breaking, Higgs particle
•
Non-abelian gauge theories
•
Grand Unification and new proposed theories and models: SO(10),
left-right model, technicolor, SUSY, etc.
•
String and M theories
Key figures: L. D. Faddeev, R. Feynman, M. Gell-Mann, S. Glashow, J.
Goldstone, P. W. Higgs, G. t’Hooft, R. L. Mills, A. M. Polyakov, A. Salam, S.
Weinberg, E. Witten, C. N. Yang
2.3.9
The relativistic world
•
The Michelson-Morley experiment
•
Lorentz transformations, relativistic kinematics
•
Special relativity, Einstein’s paper “Zur Elektrodynamik Bewegter
Körper (On the Electrodynamics of Moving Bodies)”, Annalen der
Physik 17, pp. 891921
•
Minkowski space, Poincaré group
Ten Worlds of Physics
51
•
General relativity, Einstein’s paper “Die Feldgleichungen der
Gravitation (Field Equations of Gravitation)”, Königlich Preussische
Akademie der Wissenschaften, pp. 844847
•
Redshift of spectral lines, deflection of light, time delay by
gravitation
•
Relativistic mechanics, 𝐸 = 𝑚𝑐2, relativistic mass
•
Accelerators, nuclear physics
•
The Dirac equation, quantum vacuum, positron, antiparticles
•
Relativistic quantized fields, particles as field excitations, bosons
and fermions, spin-statistic theorem
•
Quantum electrodynamics, particle-field interactions, Feynman
diagrams, renormalization
•
Path integrals, Feynman-Kac formula
•
Gauge models, Yang-Mills theory
•
Higgs particle, the Standard Model
•
New quantum field theories, scalar, vector, tensor fields
•
Gravitational radiation, gravitational wave detectors
•
Strings and superstrings
•
Controversies over relativity, tests of special and general relativity
Key figures: P. A. M. Dirac, F. Dyson, A. Einstein, R. Feynman, D. Hilbert, H.
A. Lorentz, H. Minkowski, J. Schwinger, S. Weinberg
2.3.10
The Cosmological World
•
Spacetime curvature, the spacetime of general relativity
•
Solutions to Einstein’s equations
•
Early cosmological models, non-stationary metric, redshift, Hubble
constant, FLRW (Friedman-Lemaître-Robertson-Walker) isotropic
model
•
The Big Bang, expanding universe, time asymmetry, the universe
evolution
•
Relic radiation, cosmic microwave background
•
Black holes, escape velocity, Schwarzschild solution, Chandrasekhar
limit, event horizon, spacetime singularities
•
Astrophysics, radio, optical, infrared images, gravitational lenses
•
Early universe symmetry breaking, cosmological phase transitions,
topological defects, cosmic strings and structures
•
Anthropic principle and other speculations (e.g., large numbers)
52
Principles of Mathematical Modeling
•
Cosmological constant, vacuum energy, inflationary models
•
Hartle-Hawking wave function
•
Quantum gravity
•
Strings and extra dimensions
•
Universe: finite (Aristotle) or infinite (Giordano Bruno)
•
Dark energy, dark matter (hidden mass), WMAP
Key figures: S. Chandrasekhar, A. A. Friedman, S. Hawking, E. Hubble, P.
Laplace, G. Lemaître, R. Penrose, W. de Sitter.
Although these “worlds of physics” have links to each other, it is still
difficult to combine current physical theories into a coherent picture. For
instance, classical mechanics, quantum mechanics, classical field theory,
quantum field theory, high energy physics (the Standard Model), general
relativity, string/M theory are all essentially different. These are a basic set of
theories that can be constructed independently of one another. Each of these
theories may be regarded as a cluster of intrinsic models. One can, if desired,
even find notorious contradictions between models belonging to different
clusters (i.e., built up on the base of different theories). It has already been
mentioned that particles in physics should be treated as point-like in classical
relativistic models and as extended in quantum models. And in general, is the
world finally classical or quantum? Should one take quantum mechanics
seriously or is it a well guessed collection of calculational prescriptions? It is
clear that, for example, the Schrödinger equation is a successful guesswork.
Another contradiction is associated with fixed background metric of special
relativity and “old” quantum field theory, which is badly compatible with the
dynamic spacetime of general relativity. Some physicists, mostly trained in
high-energy physics, are to these days reluctant to accept the geometric
nature of general relativity [133] preferring to treat it as a usual quantum field
theory (i.e., a Lagrangian theory on a fixed Minkowski background). This
point of view obviously defies the Einstein guiding idea that spacetime has
dynamical properties of its own and cannot be regarded as a passive
background. We shall discuss this problem in Chapter 6 in some mathematical
detail.
There is also a notorious irreversibility paradox, a contradiction between
time-reversal invariant microscopic models of physics (the so-called laws of
motion) and phenomenological time non-invariance observed in our
everyday experience. This paradox stirs a lot of controversy up till now and
does not seem to be ultimately resolved, although I think it belongs to an
extensive class of pseudo problems. We shall devote some space to the
discussion of the irreversibility paradox in Chapter 7, too.
The lack of a coherent picture can be tolerated in cookbook disciplines
such as scientific computing, numerical mathematics, networking protocols
or computer graphics, but is hard to be meekly accepted in physics which
strives to provide a unified image of the world. Local experimental successes,
albeit quite impressive as, e.g., in quantum field theory (QFT) do not soften
Physics-Based Mathematical Models (PBMM)
53
the frustration, and typical compensatory reactions in the form of escape from
empirical reality are more and more often observed. People easily accept
fanciful anti-empirical speculations such as multiple universes, a great
number of dimensions or going back in time. Nevertheless, I don’t think it
reflects a “trouble with physics” or some sort of a crisis. Simply a unique
hierarchical principle or a dreamlike super-formula from which all physical
theories could be derived as specific cases may not exist. The networking
construction connecting a variety of mathematical models of physics seems
to be more plausible, at least nowadays.
2.4 Physics-Based Mathematical Models
(PBMM)
The world is, in principle, inseparable, and the ultimate purpose of physics is
to find its meaning. It was presumably Einstein’s dream - to grasp how God
had designed the world. Contrary to that synthetic view, modern physicists
(with very rare exceptions) never attempt to understand everything at once.
One cannot be sure that there are in general universal, model-free structures
in physics. Being by tradition strictly limited to the phenomena that occur in
non-living nature, physics has always striven to produce models describing
such phenomena in relatively simple terms. Indeed, while trying to model
some entity, it is wise to think about the simplest at first. Nevertheless, one
can apply the same approach to studying more complex phenomena than only
those encountered in inorganic nature15. In this book, I treat physics not as a
closed discipline, but rather as an intelligent approach to the entire human
environment, that is to say this book may be considered as a form of a protest
against the physical isolationism. And naturally, one can touch upon things
that traditionally have nothing to do with physics in the narrow sense such as
biology, economics, or even sociology. This extension often infuriates those
physics professionals who are inclined to regard physics in the traditional
“dead” sense as an ample arena for building even the wildest models.
Representatives of other professions, primarily humanities and medicine, are
also not always happy when physicists intrude into their “comfort zones”. Of
course, physics is so vast that one can easily spend one’s professional life
within any of its small and seemingly isolated subfields. However, our
“physmatics” approach presupposes the study of links between different
disciplines. Accordingly, in chapter 11, we shall discuss a few interesting
mathematical models applied to the phenomena lying outside physics, if the
word “physics” is still intuitively understood in the narrow sense, implying
exclusively the subjects which I have classified for myself as “ten worlds of
physics” (see above).
Worlds of physics are just clusters of suitable models. There are, as we
have discussed, links between worlds invoking substructures with repeatable,
15This has nothing to do with the usual physicists’ arrogance, when people, especially young
and not quite experienced but trained in some physics, claim that they can excel in many other
areas such as chemistry, Earth sciences, economics, history, politology, sociology, business, etc.
54
Principles of Mathematical Modeling
reusable patterns. These patterns have lately been spread outside physics.
The tree of mathematical modeling in physics, with branches, leaves and buds
as individual models, provides the primary networking structure (called
“physmatics” in our somewhat fancy terminology). This situation is
reflected in new notions, for example “econophysics”, “physical economics”,
“chaos in finance”, “nonlinear dynamics in the financial markets”, “fractals
and scaling in finance”, “percolation model of financial market”, “dynamical
model”, “self-organized fragmentation and coagulation of financial markets”,
“stochastic PDE for option pricing”, etc. One can observe the rising popularity
of traffic flow models which are mainly constructed on the principles
generally adopted in physics (e.g., conservation laws). All these model
clusters are quite important and undoubtedly their potential future impact
may be difficult to overstate, so they deserve a special treatment being out of
scope of this book.
The very idea of easy linking different subjects seems to be close to the
purely mathematical idea of compatibility lying, e.g., at the core of quantum
mechanics. The concept of compatibility reflects the consistency of
approaches or, as in the special case of quantum mechanics, of measurements
(see Chapter 6). The idea of compatibility of different things is also manifest
in mechanics where the notion of complete integrability of Hamiltonian flows
has recently become of great value (partly due to some overcompensation for
the oblivion of classical mechanics and nonlinear science during the most part
of the 20th century). More specifically, the flow must be included into a class
of compatible (commuting) Hamiltonian flows. Analogously, it is a
characteristic feature of integrable PDEs (such as those encountered in the
theory of solitons) that they should be organized in families of compatible i.e.,
commuting type, mostly having an hierarchical nature. So, the general idea of
consistency of different subjects manifests itself in mathematics as
integrability that is to say the possibility to find a closed solution to a
complicate problem based, e.g., on nonlinear partial differential equations.
Even in the realm of linear PDEs, one can trace many links at the level of
mathematical models related to absolutely different fields of physics. It would
be, for example, a truism to observe the connection between the one-
dimensional Schrödinger equation
−𝜕𝑥
2𝜓(𝑥, 𝐸) + 𝑉(𝑥)𝜓(𝑥, 𝐸) = 𝐸𝜓(𝑥, 𝐸)
and the one-dimensional scalar wave (acoustic) problem
−𝜕𝑡
2𝑢(𝑡, 𝜆) = 𝑉(𝑡)
𝜆
𝑢(𝑡, 𝜆).
It is a sad fact that a great deal of phenomena can be explained only at the
level of hypotheses. For example, current science does not exactly know what
is inside of the Earth. Geologists have dug the whole planet, but this is only a
thin surface layer. One can only assume what happened on the Earth when life
subsisted in the forms different from today’s, e.g., in the form of viruses and
2.5
Theory, Experiment, and Models
55
bacteria. The human organism is an entire universe of its own, with the
interplay of physical, mechanical, chemical, and biological processes in
concerted action. Despite the successful pace of modern medicine (largely
due to physical instrumentation), understanding of these processes is very
approximate and primitive. A great number of serious illnesses remain
unexplained, and biomedical models can only describe the symptoms
intuitively correlating them to previous cases. This is an unsatisfactory
procedure leading to frequently occurring medical errors dangerous for the
patients. Moreover, inexact knowledge invokes pseudoscience such as healers
and “biofield correctors” in paramedicine, parapsychologists and clairvoyants
in other fields. In Chapter 10, we shall discuss climate viewed as a physical
system. Unfortunately, the climatic system is so complex that no one seems to
understand its functioning, although prognoses, pledges and political
rigmarole are abound. In space science, despite a fast accumulation of
observational data, almost everything is still at the level of hypothesis, rather
than can be perceived as a reliably established result. To illustrate this
pessimistic statement, it would be enough to recall the “dark side” of the
universe problems (dark matter, dark energy, black holes). The cosmological
constant problem (and the related problem of vacuum energy density), the
idea of quintessence, inflationary models, the anthropic principle and so on
are discussed, despite sophisticated mathematical tools, still on the
hypothetical level. Everybody can prolong this list with topics of inexact
knowledge. However, inexact knowledge is not necessarily bad: it has a
provocative role, fascinates curious persons, and stimulates them to play with
intelligent models.
2.5
Theory, Experiment, and Models
”I take the positivist viewpoint that a physical theory is just a mathematical
model and that it is meaningless to ask whether it corresponds to reality. All
that one can ask is that its predictions should be in agreement with
observation.” Stephen Hawking
What is the relationship between these three components of human
endeavor attaining physical knowledge? The interplay between theory and
experiment induces the creation of models that are used to simulate the
observations and predict new features that can be observed in new
experiments. A theory in general may be defined as a cohesive system of
concepts that was experimentally validated to the extent that there is no
unclearness or contradictions within the limits of applicability for this system
of concepts. A good theory contains a heavy load of knowledge. For example,
the mechanical theory (discussed at some length in Chapter 4), together with
the whole set of initial values, allows one to calculate the positions of the
planets for many thousands of years into the future and into the past with
sufficient accuracy (limited by the Newtonian approximation). The real
mathematical modeling probably began with the nuclear bomb construction
in 1940s in the USA and USSR (below, I shall reiterate some relevant facts that
I was able to come across). Now, the trend of increasing complexity and
resorting to expensive (sometimes prohibitively) experiments, initiated
56
Principles of Mathematical Modeling
during nuclear weapon projects, calls for simulation of both the theory and
experiment. Modeling in this field usually gives the following results:
•
The theory is insufficient and must be modified, revised, improved
•
A new theory is needed
•
The accuracy of experiments is insufficient
•
New and better experiments are needed
One may note in passing that each nuclear test explosion is nothing else
but a physical experiment, rather than a military event. A physical experiment
is intended to measure certain physical quantities. In general, the main
physical quantities to be measured in nuclear test experiments are the
medium (gas) variable density and velocity - as in more conventional fluid
dynamics measurements. For example, the so-called Richtmyer-Meshkov
instability, which develops when the interface between two fluids having
different densities is struck by the propagating shock wave, has been one of
the prime objectives in nuclear explosion experiments. (This instability was
predicted around 1960 by R. D. Richtmyer, a well-known mathematical
physicist, and experimentally confirmed in the USSR by E. E. Meshkov.)
Nowadays, due to the nuclear test ban, nuclear explosions are mostly
simulated on computers, with validation provided by small-scale local
laboratory experiments.
Now, how is a theory related to a model? This is again a philosophical
question - one of those that can induce lengthy discussions with no outcome.
Nonetheless, I shall try to answer it as I understand this relationship. Models
are usually constructed within the framework of a certain theory, for instance,
the Friedman cosmological model is built up within the general relativity
theory or the BCS model is a limited fragment of the non-relativistic quantum
theory.
The keyword for a model is the result, the keyword for a theory is a
framework. The merit of a model is that it can be explored exhaustively. A
theory cannot cover everything in the universe to the smallest detail and does
not necessarily bring specific results, it only provides tools to obtain them.
For example, the whole solid state theory may be considered as a particular
case of quantum mechanics (also a theory but higher in the hierarchy),
however, concrete results are produced when specific models within solid
state theory are considered. There is a hierarchy both of theories and models,
with inheritance of basic features down the ladder. Of course, the
classification of models and theories is not absolute and may be subjective.
Thus, the single-electron model is partly a model, partly a theory, whereas the
Kronig-Penney model is just a model. One of the best examples of a model
related to a theory is the E. Ising model of ferromagnetism 16 where the theory
provides a simple framework and one can obtain ultimate results assuming a
simple model of spin interactions. Later we shall see that the Ising model
16 The Ising model is probably the simplest possible model of a ferromagnet in two directions.
2.5
Theory, Experiment, and Models
57
serves as a pattern for a number of derived models in the areas having nothing
in common with a ferromagnetic.
In contrast to the broad mathematical theories, mathematical models are
much more specific, illustrative and closed. The ultimate – and the most
difficult – skill of modeling is to convert a seemingly complex problem into a
much simpler one. Or to dissect a big problem into a multitude of small and
sharply cut ones. This modelization of reality does not go without penalty:
some reality must be sacrificed in order to gain more understanding of the
problem being modeled. Even worse: the scope of reality left outside of a
mathematical model is in many cases unknown, so it is rather nontrivial to
quantitatively establish the model’s applicability unless the model is
formulated in terms of analytical functions or asymptotic expansions when
one can evaluate and quantify the corrections. More than that: a model may
be understood as a set of constraints which is bound to be replaced by a new
set in the further study, and one can often foresee how the old model would
be discarded.
When a mathematical model is set up and processed, it results in the
outcome of mathematical statements. Here, one must not forget that the
validity of such statements is very limited - largely conditioned by the model.
Nowadays, one can often observe the tendency to treat the model’s outcome
as absolute, which leads to a confusion and even grave mistakes. As an
example, one might recall the deeply rooted stereotype that all physics should
be time-reversal invariant, which is not true. It is just an arbitrary
extrapolation of time-invariant models onto all observed phenomena.
However, this simple belief is sometimes used as an instrument, for instance,
to discard solutions containing terms with odd powers of frequency.
Mathematical theories, to the extent they are not part of physics, are
considered true when they are accepted by a certain social group - a
community of mathematicians, with only a negligible part of this community
really understanding a theory in question. In distinction to such public
acceptance, physical theories model reality and are on the one hand a product
of observations and on the other hand a source of predictions in new series of
observations. Ideally, a theory is based on measured results, which are not
precisely known. Then the predictions made from such a theory are also not
exact.
Scientific measurements - not necessarily in physics - have one
profoundly lying common problem: one is limited in them by the type of
experiments one is capable of performing. Even in the case of comparatively
pure physical systems, the applied experimental techniques severely
constrain our ability to investigate things. Take as an example one of the most
precise experimental techniques, a scanning probe microscopy/spectroscopy
(SPM/SPS) applied to explore solid surfaces. Even in such highly sensitive
experiments, one has to admit the drastic information loss: to explore
quantum localized states on the surface one has to apply voltage and produce
the tunneling current which would then strongly perturb and modify such
states. Thus, a genuine “non-demolition” experiment intended to study
quantum (in this example) states in their pure, native form becomes highly
58
Principles of Mathematical Modeling
problematic. This situation is typical not only of the quantum world. In the
case of mental processes, for example, to produce a non-demolition
measurement may be more difficult than in physics: there are a lot of
components and interactions between them.
One can see that such disciplines as physics and mathematics, though
incorporating considerable applied parts, are centered around basic research.
So, the question naturally arises: is basic research in physics and mathematics
a luxury for certain countries or a necessity for all the countries?
2.6
On the Relationship Between Physics and
Mathematics
“I am so clever that sometimes I don’t understand a single word of what I am
saying.” (Oscar Wilde)//
The relationship between physics and mathematics is, of course, a subject
for philosophical discussions, with scientific content in this section being
close to zero. I would only state the following observation: complicated
models are seldom useful - maybe solely to produce the texts exhibiting
scholarly merits of an applicant, e.g., in theses. Strangely enough, mathematics
continues to stir powerful emotions not only in mathematical circles, but also
among non-mathematicians including physicists. For the old guard, using
more modern mathematical techniques such as differential forms or
geometric methods in general seems to be similar to undergoing an
unpleasant medical procedure. An appropriate use of mathematics appears to
be unclear even to theoretical physicists to whom mastering math is an
absolute must. This element of obscurity and ill-digested trials is more and
more evident in contemporary papers on theoretical physics, being strongly
aggravated by an obsessive drive towards abstraction and maximal generality
fashionable among “pure” mathematicians (and being servilely acquiesced by
some physicists). Mathematics, among the so-called exact science, is the only
discipline that enjoys being abstract. Philosophy is also abstract, but it is in no
way an exact science. The feature of abstraction results in the unique power
of mathematics - generality that works. In distinction to physics and other
exact sciences, the underlying content is immaterial in mathematics. For
example, if one considers a rotation group and properties of its operations, it
is irrelevant to ask whether one implies an electron, an atom, a nucleus or a
molecule of some chemical substance, although all of them are totally diverse
physical objects. This working generality of mathematics permits it to be
applied throughout all other sciences. It permits thus to perform cross-
disciplinary mathematical modeling.
The property of working generality leads to a special role of mathematical
models in other sciences: these models combine the insight accumulated in
specialized disciplines, primarily in physics, with the powerful abstraction of
available mathematical structures - at least available to the developers of
mathematical models. So far, such a combination worked amazingly well,
especially in physics. Analogies stemming from generalization attempts are
very important, they sharpen mathematical instincts and encourage one to
2.6
On the Relationship Between Physics and Mathematics
59
look for counterexamples. Many intricate things in physics appear simpler
due to analogies. For example, non-commutative algebra based on space-time
variables (𝑥, 𝑝) ↔(𝑡, 𝜔) have the same mathematical framework and may be
treated similarly, although the physical content is totally different.
The main ideas of using mathematics for model construction in physics
are better grasped by discussing analytical models. Once we understand
them, we can proceed to computational models and to specific cases,
collectively named physical engineering. One can say that all this is the so-
called applied mathematics. Of course, it does not matter how you denote
your field of activities, but personally I don’t think there is such a field as
applied mathematics, there are applied mathematicians. For instance,
quantum mechanics may be considered as applied functional analysis,
cryptography as applied number theory, signal theory as applied theory of
functions, and so on. Even a very abstract mathematics may turn its
application side to physicists. Does a physicist really need topology?
Traditionally, the work of a physicist consisted in describing models in terms
of differential, rarely integral or integro-differential equations and then trying
to solve them. However, to really produce the solutions to physical-based
mathematical equations, applied to specific cases, is typically quite tiresome
and may even prove impossible, even despite the fact that the starting-point
equations of physics are as a rule well known and all their nice mathematical
properties - smoothness of solutions, their localization, compactness, etc. -
have been explored. It does not help much. Therefore, a number of ingenious
methods have been developed, with attempt to analyze the solutions of
complicated equations without solving them. There has appeared lately a
number of human activities aimed at doing something without really doing it.
Management, for example - it’s the art of doing something with others’ hands.
Or software engineering - a popular movement under the slogan “how to
produce a software system without knowing how to write a code”. A very
cunning strategy. I would call such an approach “creative alienation”. One of
the best examples of productively using this approach in science is group
theory, which consequently exploits symmetry. Topology in physics serves
more or less the same purpose of creative alienation from the honest process
of directly solving physical equations (see below).
It seems only natural that a progressively widening chasm between
mathematically minded physicists and physically minded physicists is to be
observed. Mathematically minded people are the real insiders in math, they
must know what is going on in mathematics and are attentive to all fine
mathematical stitches, all those smooth and nice properties that fasten
mathematical constructions that initially appear totally abstract but later
foster the progress of physics. We shall see examples of such fostering in this
book. In contrast to this refined approach, physically minded physicists prefer
broad strokes, largely intuitive, not meticulously founded or even completely
unfounded (we shall see some salient examples of mathematical inaccuracies
in physics). This approach is typical of engineers for whom it is important to
obtain a tangible result than to prove that it can (or cannot) be in principle
obtained. Thus, the physically minded physicists, even theoretical physicists,
60
Principles of Mathematical Modeling
are just mathematical engineers. And these two breeds of physicists are
submerged in different environments: while mathematically minded people
deal mostly with books and papers, those physically minded still have to
understand experiments - physics never ceases to be an experimental science,
despite all theoretical extremism. For a physically minded person, it is harder
to deal only with books than for a mathematically minded one. There are
almost no drawings in most mathematical books, which is uncommon for an
engineer, even mathematical engineer. And the basic questions for these two
breeds of people are different: “how would I prove it?” - for a mathematician
and “how would I do it?” - for a physicist.
There has been written a lot about the interaction between mathematics
and physics. One of the best statements about this relationship I (as well as
many friends and colleagues of mine) have read was the inscription left by
some anonymous thinker on one of the student tables at the Central Physical
Audience in the Physical Department of the Moscow State University: “Physics
without mathematics is the same as a naked person in the metro: possible but
indecent”. Here, one can sense a hint at a somewhat troubled relationship.
Personally, I still think that mathematics stems from physics, at least even
today physics serves as the greatest source of mathematical ideas.
Professional mathematicians are usually infuriated by this statement
claiming that mathematics is totally different from physics. Nowadays there
are new sources of inspiration for the mathematicians originating from other
disciplines such as economics, telecommunications, networking, computer
science, social studies, defense and military planning. We shall review some
mathematical models related to these disciplines and we shall see that all such
models to a large extent bear the imprint of approaches developed in physics.
It is surprising to me that vocal objections to contributions from theoretical
physics as the main supplier of mathematical problems seems to be a bon ton
among mathematicians. “Theoretical physics is a science locally isomorphic
to mathematics” is the most favorable saying I have heard recently from
mathematicians. Mathematicians and physicists are not always friendly
tribes. Is a new kind of religious cold war pending?
This is my hyperbolization, of course. Sometimes, however, human
incompatibility of physicists and mathematicians reaches a rather high
degree. To illustrate this strange feud one can recall the following episode
from the history of the Soviet nuclear weapons project. Needless to say, what
importance was assigned to this project by the Soviet authorities. In the
1950s, the theoretical laboratory headed by I. M. Khalatnikov, a well-known
physicist and one of the closest Landau disciples 17 was performing
calculations of the design and physics of the Soviet hydrogen bomb. The lab
employees worked at the Institute of Physical Problems but, due to political
intrigues, the whole laboratory was transferred to the Institute of Applied
Mathematics formed in 1953 specifically for nuclear weapons and missile
17 I. M. Khalatnikov is mostly known for his classical works on superfluidity, superconductivity
and other issues of low-temperature physics. He has been a long-time director of the famous L.
D. Landau Institute for Theoretical Physics.
2.6
On the Relationship Between Physics and Mathematics
61
computations (now the Keldysh Institute of Applied Mathematics). Rather
acute conflicts between newcomer physicists and aboriginal mathematicians
followed, and the physical laboratory could survive in the milieu of
mathematicians for only half a year. Owing to the personal order of I. V.
Kurchatov, the head of the Soviet nuclear weapon project, the entire lab of
Khalatnikov was transferred again - without mathematicians, this time to the
institute headed by I. V. Kurchatov (now the Russian Scientific Centre
“Kurchatov Institute”).
Let us conjecture a little bit about what physicists (and other natural
scientists) might think regarding mathematical methods in their respective
sciences. Mathematics is a wonderful language, but does it really produce
exact results? Mathematics can correctly - without singularities, infinities, and
unsurmountable difficulties - describe only very simple and artificial physical
situations. Our everyday phenomena like the transition to turbulence in the
flow of water streaming down from the tap or the friction force are beyond
the reach of mathematical description. One can produce more examples
where mathematics fails than where it admits a successful theoretical
description. Just try to construct a mathematical theory of a flag fluttering in
the wind. Or of a living cell irradiated by a laser pulse. Or even of an isolated
living cell. Complexity of biomedical, social, economical or cultural systems is
merely of a different level than that attained in contemporary mathematics,
and it is not incidentally that the so-called serious scientists usually express
astonishment mixed with contempt when hearing that some of their
colleagues had ventured to attack a “weird” social problem by mathematical
means18. In the kingdom of Soviet physics, such people were outlaws.
It is because of this complexity level unreachable for contemporary
mathematics that we have to focus primarily on particular mathematical
models of reality than on new theories19. Yet, frankly speaking I think we
already have enough theories. Mathematics in its actual form appears to be
perfectly adapted to describing general principles such as motion equations
and conservation laws. The study of symmetries on a certain complexity level
can also be achieved by mathematical means. Nevertheless, I doubt that such
general objects as spacetime, universe, quantum background, mass or - even
worse - complex sociocultural systems can in principle be investigated by
theoretical mathematics. It may simply not work, that is to say, standard
mathematical methods may be compromised by infinities and singularities.
Then probably only ad hoc “cookbook” methods and local mathematical
models (e.g., not based on any regular theory) would be possible. This would
be a very unfortunate development, of course.
“A mathematician may say anything he pleases, but a physicist must be at
least partially sane”, this statement is ascribed to J. W. Gibbs. In that sense,
18 The disdainful attitude towards these persons exceeded that semi-openly manifested in
regard to those who became interested in the history or (even worse) philosophy of science. Such
persons were regarded as just impotents and weaklings, whereas the misguided thinkers over
fanciful social mathematics were simply considered inadequate.
19 It does not necessarily mean that the models should be phenomenological - they may be
based on first-principle equations.
62
Principles of Mathematical Modeling
string theory, for example, is more mathematics than physics. Probably today
it would be better to say “string/M theory”, which sounds even more
mysterious, is even farther alienated from experimental roots of traditional
conservative physics and based almost uniquely on purely mathematical
considerations. Where is the demarcation line between physics and
mathematics in the new theoretical physics, especially after the so-called
second string revolution, I don’t know. Although I devoted a considerable
amount of time trying to understand new concepts in physics, of course, I am
familiar with the string/M theory only extremely superficially. Like many
other amateurs, I failed to find any experimental confirmation or
astronomical observations in favor of string theory (except some attempts to
predict the value of the cosmological term, see Chapter 9). Besides, does
anyone know exactly what the M-theory is? The rapidly increasing number of
string theorists is, to my mind, a very weak indicator of physical soundness of
the string theory (see a well-known book by L. Smolin [74] on this subject).
I am not going to even try to formulate the axiomatics of the current
string/M theory. As near as I understand, this theory may be regarded as a
speculative area of theoretical physics beyond the traditional theoretical or
even mathematical physics developed in the course of the 20th century and
closely related to experimental real-life problems. In general, theoretical
physicists try to describe the world inventing models of reality which are used
to explain, rationalize and, in the best case, predict physical phenomena
within a certain “physical theory”. As for physical theories, one usually
distinguishes between three sorts of them: basic or mainstream theories -
such as classical or quantum mechanics; proposed but not validated theories
- such as loop quantum gravity; and marginal or fringe theories - such as
torsion fields. On the one hand, some of the physical theories simply cannot
be confirmed by an experiment or even by observation (although it was
always required of classical theories and generally would be highly desirable)
and, on the other hand, they cannot be deduced from a non-contradictory
system of axioms like a mathematical theorem. This specific position of
modern physical theories leads to regarding theoretical physics as a chimeric
discipline between physics and mathematics, a cross of physics without
experiment and mathematics without rigor20.
Both physics and mathematics can be vital, not optional for survival.
Indeed, it could take, for example, just one comet to demolish the Earth. To
ensure the survival of the biosphere, humans need to learn how to avoid this
danger which is basically a physical and mathematical task.
2.7
Mathematical Physics and Physmatics
What is the traditional mathematical physics? Some prominent physicists and
mathematicians used to say that it is also, like theoretical physics, neither
mathematics nor physics. When I was studying physics at the university, our
course of mathematical physics was largely reduced to a linear theory of
20 « La physique théorique est l’alliance de la physique sans l’expérience, et des
mathématiques sans la rigueur ». Jean-Marie Souriau, a French mathematician.
2.8
The Role of Human Communication
63
partial differential equations, covering the respective theorems of existence
and uniqueness, some concepts of functional analysis including the spectral
theory of linear operators in Hilbert space, and the usual tricks of solving
boundary
value
problems
(variable
separation,
Green’s
functions,
eigenfunction expansions, etc.) Such university courses were typically named
“The Equations of Mathematical Physics” or “Methods of Mathematical
Physics”, with the traditional textbooks such as those by A. N. Tikhonov and
A. A. Samarski ([3]), R. Courant and D. Hilbert ([7]) and sometimes P. Morse
and H. Feshbach ([4]) being recommended as standard. (All these books are
even now a right place to look, anyway.)
In traditional mathematical physics, rather simplistic models are
generally constructed, with actual physical details being reduced to an
absolute minimum. In this way, the conventional PDEs of mathematical
physics have been obtained and investigated. This is a typical model-building
approach, and although it enabled one to study the standard set of equations
and the properties of their solutions - so-called special functions - physicists
usually consider the canonical mathematical physics inappropriate or at least
insufficient for their opulent circumstances.
Physmatics stands to mathematical physics approximately in the same
relationship as, e.g., physical biology to biophysics. During its evolutionary
development and due to the accumulation of vast experimental material,
biophysics has become a more or less crystallized discipline, and people
usually understand - although intuitively - what one is talking about when the
term “biophysics” is used. Contrariwise, physical biology 21 is a synthetic
discipline devoted to the application of general physical principles and
models to biological systems, and understood as such it may even include
biophysics as a particular subdiscipline.
2.8
The Role of Human Communication
“Unthinking respect for authority is the greatest enemy of truth.” Albert
Einstein
Science as the refined extension of the intrinsic human curiosity instinct
necessary for survival evolved as the increasingly specialized attempts to
answer the “old human questions” (OHQ) such as why is the sky blue, why is
the heart on the left side, why do birds fly, why is the water wet and liquid,
etc. Over the last several hundred years, more and more specialized science
has succeeded to answer many old human questions in terms of light and
atoms. As a by-product, new weaponry has been invented and deployed,
which justified the separation of scientists into an isolated (in extreme cases,
physically) urban group, not all of professional scientists and not
continuously privileged but always under direct control of the top authorities.
During and after the World War II, a distinct and influential scientific
component of the human society emerged, instead of separate dispersed
individuals driven by curiosity and exchanging letters. Inside this coherent
21 The originator of physical biology was probably the mathematician Alfred Lotka
whose book “Elements of Physical Biology” (1925) seems to be unfairly forgotten.
64
Principles of Mathematical Modeling
scientific milieu, light and atoms have been described in progressively
microscopic terms of quarks, leptons, gauge bosons, etc. That was a chain of
downsized multilevel models about the human environment that appeared
synchronously with the stabilization of the social institute of science, with all
attributes of human groups: dominant bosses who are forced to surround
themselves by intellectual specialists, self-assertive leaders who are afraid
most of all to lose face, small number of compulsive fanatics fostering
conflicting evidence, and the large mass of the weaker members of the
community supporting off-the-shelf enthusiasms. All like in other hierarchical
groups.
One of the salient phenomena of the 20th century science was the
establishment of semi-closed scientific communities, so-called scientific
schools. This is neither bad nor good, it has just happened. Scientific schools
provide people with great strengths as well as grave weaknesses. It is a
common observation that science nowadays is done by big collaborations
which may be called scientific tribes (scitribes). There are countries where
more accent is put on such tribes, as, e.g., the former Soviet Union22, as well as
the countries where individualistic traditions were more pronounced. There
are also paradoxical countries with a combination of collectivist trends and
individual science. In Germany, for example, the culture of scientific schools
is not highly developed, although communities - also scientific ones - are
highly praised.
In this section, I shall offer some personal observations about scientific
schools [167], predominantly in the former USSR where they were very
powerful. By their very nature, these remarks of mine are highly subjective
and can be met with skepticism. They reflect that tiny chunk of experience
that I had while observing scientific as well as bureaucratic mechanisms from
inside. However, I cannot be counted as a genuine insider. Honestly speaking,
I know very little about the whole complicated texture of the relations
between scientific schools and between people inside each of them which, in
the former Soviet Union, was developed against a background of communist
party and KGB control, state supported antisemitism and ideological
doublethink. I know the situation with the two major Soviet schools of physics
(these of Landau and Bogoliubov, see below) only by attending their
respective seminars and from my sporadic informal contacts with some of the
established participants of these two schools. My observations are related to
1970s when I was able to attend seminars especially often, first as a student
of the theoretical department of the Moscow Engineering Physics Institute
and then as a young research scientist. Later, in 1985-1989, while working in
the popular science magazine “Science and Life”, where I was responsible for
physics and mathematics, I have had a second round of observations, but by
that time the “Perestroika” began already distorting the pure Soviet patterns.
So, my knowledge of the situation with these schools is very superficial or,
rather, tangential. Nevertheless, while filtering my memories, I tried to refrain
22 There were social reasons for it since the ruling authorities preferred to communicate with
an appointed leader.
2.8
The Role of Human Communication
65
from inadvertent confabulations or vague, verbose fantasies that could have
only a hazy relationship to reality. And I don’t want to name concrete persons.
Indeed, can one remember all the details after 30-40 years have elapsed?
One might then justifiably ask: if you don’t know the situation exactly
from inside, why the hell are you writing about it? My answer is: that was my
subjective vision of the situation with the so-called scientific schools, and
there is nothing wrong in presenting it. The only grave sin would be a
deliberate lie, which is not the case here. Furthermore, this “sociological”
context is inseparable from a “scientific” one, at least for me - and I have
strong suspicions that is the case for any person immersed in the scientific
milieu.
The prevalent effect of scientific schools is sociological, not scientific,
even despite wonderful achievements obtained by the school members. It
means that social forces can act against the purely scientific process. In such
a case science can be easily betrayed and even sacrificed. The cold war of
thinking or not thinking was dominating my beginner’s studies of theoretical
physics when I was under twenty. This “war” was waged between the adepts
of the well-known “Landau school” and those who considered themselves to
belong to the “Bogoliubov school”23. Both school leaders were great scientists
and there were probably equally outstanding scientists among the both
school members, but it was a rare occasion that they openly interacted, and I
don’t remember any joint cross-school publications related to that period
(end 1960s - beginning 1970s). When recollecting my experience as a physics
student, I remember an odd sense of irrationality when I heard at the famous
theoretical seminar in the Institute for Physical Problems that one should not
read papers on low-temperature physics unless they stem from this institute,
or when some prominent people in Dubna asserted that all what is and will
be (!) said about elementary particles and high energy physics in
“Physproblemy”24 is a priori crap and should not be even listened to. We,
physics students, were of course too inexperienced to make our own
judgment, and the situation of uncritical choice between two mainstreams
(either theoretical physics in the Landau school or mathematical physics
formalism in the Bogoliubov school) produced sort of a schizophrenic effect
on a number of young persons. I suspect that some of them were unable to
recover before today. As far as I am concerned, I experience the same strange
sense of irrationality when I hear, for instance, contemptuous remarks about
the scientists belonging to the Princeton Institute for Advance Studies. I
simply cannot rationalize this attitude nowadays, in the same way as I could
not at my student time comprehend derogatory remarks exchanged by the
representatives of the two great Soviet schools of physics. This is a pure
sociology whose role in physics and mathematics is, regrettably, enormously
important but is mostly left unnoticed by the historians of science. At least, I
23 Every school of thought is like a man who has talked to himself for a hundred years and is
delighted with his own mind, however stupid it may be. (J.W. Goethe, 1817, Principles of Natural
Science)
24 The colloquial name of the Institute for Physical Problems
66
Principles of Mathematical Modeling
failed to find any good accounts of the impact of group prejudices and near-
scientific stereotypes on functioning of physics or mathematics communities.
Unfortunately, I was too young to see and hear L. D. Landau himself, as
well as N. N. Bogoliubov, so my impression of their schools is by necessity
secondary from sporadic contacts with their participants, therefore my
remarks about the Bogoliubov and Landau schools should be taken only as
personal impressions. There is a lot of literature depicting these two great
persons, but I don’t think the information in this literature is sufficiently
accurate. As far as L. D. Landau is concerned, some of the publications bear a
scandalous character. The figure of N. N. Bogoliubov is not so controversial. I
hope that maybe Professor Igor Landau25, son of L. D. Landau, will write his
own authentic recollections, which could probably remove a lot of
controversies. I grew up in the USSR, the country characterized by what may
be called a monocentric evolution: all important scientific and bureaucratic
institutions were concentrated in Moscow. As a reaction to this top-heaviness,
people in many regional scientific centers disliked the Moscow scientists
intensely. (Dwellers of Moscow are in general nearly hated in the rest of
Russia.) The scientists in provincial cities formed their own “schools” where
Muscovites were rarely welcome unless, of course, they were influential
scientific or bureaucratic bosses. My personal observations were that
scientific workers from the city of Gor’ki (now Nizhni Novgorod) were
marked by specific arrogance towards rank-and-file Moscow colleagues. Even
the Leningrad (now St. Petersburg) and Novosibirsk scientists, which had
created really world-class scientific schools in many directions, were more
tolerable to the Muscovites. This parochial miscommunication rooted in
regional prejudices considerably influenced the scientific scene. At least,
group attitudes in scientific communities have always interfered with
answering the simple question of research: “Is it true or false?”
Nevertheless, even in countries characterized by polycentric evolution
such as Germany, group stereotypes in local scientific communities often
prevail over rational responses. There are attempts to solve this problem by
rotating research staff between various universities, but such rotation may be
painful for individuals and destroys scientific schools that need a prolonged
period of stability to be firmly established.
Presumably, the “Landau school” was originally based on the so-called
theoretical minimum or, shortly, “theorminimum” that was in effect a test to
be passed by any physicist who wanted to work with L. D. Landau. On the
other hand, the program of “theorminimum” later provided the contents for
the famous “Course of Theoretical Physics” by L. D. Landau and E. M. Lifshitz.
Already from here one can see that a sheer preparation to the obligatory
“theorminimum” test ensured that an applicant would acquire rather
extensive qualifications in physics and accompanying it mathematics. The
Landau school produced physical universalists.
25 By a sheer coincidence, I studied with Igor Landau in the same class in the Moscow high
school No. 5
2.8
The Role of Human Communication
67
Now the times of universal physicists like L. D. Landau are gone forever26.
Today, universality became feasible only within large collectives or
institutions.
But in such cases the problem of the common language appears. Such a
common language is hard to develop in a loose assembly of physicists unless
they are carefully picked up and uniformly trained.
It is my firm belief that extremely sharp specialization, a typical present-
day phenomenon, has a pronounced detrimental effect on the development of
physics. Strictly speaking, it is not obvious that the laws of nature do exist.
These laws are invented by humans - in modern times in the form of
mathematical models - to approximately describe the processes in nature. In
fact, these laws and their numerous consequences are present only on paper
and mostly serve as tools to increase the status of the “professional scientists”.
It is sometimes astounding that these mathematical laws and especially their
corollaries really do work not on the paper of status seekers, but to
manufacture real products. There is a kind of complicity among the
professional scientists bordering on tribalism, with socially significant slangs,
jargons and dialects purposed to insulate professional scientists from
common folk, usually under disguise of scientific precision. Members of
scientific tribes calling themselves academics are relying on internal
communication more than on anything else, with somewhat ambivalent
attitude towards colleagues: rivalry mixed with support against outsiders. It
is curious that if the specialized jargon becomes widely spread and partly
adopted by non-scientific groups as, for example, in cosmology, computer
science and chaos theory, then the jargon’s isolating function is lost, and new
slang expressions are created restoring the anti-communication barriers.
These barriers are erected between scientists and “ordinary folk” as well
as between scientific groups. Life, however, by putting forward real problems,
in most cases forces us to transcend the frontiers between different sciences,
which have been established - purely artificially - by people. Nature does not
care at all what mental schemes people are trying to impose on its
manifestations. Indeed, why should we look for an artificial way to structure
data and procedures, when the real world has already organized them for us?
Nevertheless, “professionals” tend to safeguard artificial frontiers invented
by them with jealousy bordering on aggression27. People tend to form closed
groups declaring them “elite”. I have just given above examples of zealotic
behavior in Bogoliubov and Landau schools and I shall try to describe some
salient features of these elitist establishments a bit later. The Landau school
was especially indicative: theoretical physicists belonging to the “Landau
26 The last physicist of this universal type is probably V. L. Ginzburg, the 2003 Nobel Prize
winner. I would also mention R. Feynman, V. A. Fock and W. Pauli who are unfortunately long
gone.
27 This is especially clearly seen in the medical community, which, basically belonging to a
breed of natural sciences, vehemently opposes penetration of physicists into their field. This
becomes increasingly difficult since the progress of medicine is largely due to modern
engineering physics. Safeguarding “medical traditions” in new technological environment is
nothing but a characteristic ambivalent attitude of defensive hierarchical groups.
68
Principles of Mathematical Modeling
school” created in the 1960s rather high barriers, mostly of psychological
character, around themselves. Young proselytes were specifically aggressive:
some of them seemed to be always ready to apply clichés to the work of
everyone not too eager to join their elitist circle, denouncing physical papers
containing much math as “a disordered set of formulas” (that label was often
applied to the works of competitors from “Bogoliubov school”) whereas
papers based mainly on physical reasoning were labeled as “philology” or
“philosophy”. The latter label was considered especially humiliating: it was
considered a shame not to be able to saturate paper with equations (mostly
differential), inequalities delineating limiting cases, integrals and, in the final
part, formalized statement of results. Frankly speaking, papers based on
“physical considerations” usually produce weaker results and contain vague
fragments. Yet, the folklore saying that each discipline contains the amount of
science exactly equal to its mathematical content is, to my mind, a
mathematical extremism.
This is all history now, and perhaps of no interest to the new generations
of scientists but, to my understanding, a very instructive history. There have
been no similar mechanisms of scientific development in the West after the
World War as scientific schools in the USSR. Great scientists in Western
countries did not form schools around themselves. It has been noticed that
neither Einstein nor Feynman, nor Wigner - in one word nobody has created
scientific schools that could rival in impact with those existing in the Soviet
science. University professors may have a couple of students, graduates, or
postgraduates, or in comparatively rare cases a chair. That is all.
What personally I did not like about elitist scientific establishments - not
only “Physproblemy”, there were some other institutions and not only in
Moscow, usually calling themselves “firma” (a firm) where the research
employees saw themselves as local prim donnas. There was even an element
of ritualized aggression mixed with inferiority in the behavior of these
researchers - against intruders and under the slogan “I am great”. By the way,
really bright people such as V. L. Ginzburg, L. V. Keldysh, D. A. Kirzhnitz, A. B.
Migdal or A. D. Sakharov never emphasized their top position. Both the
Landau and the Bogoliubov schools were really very advanced and powerful,
yet there were certain things which I found appalling. What one could
observe, sometimes painfully, about both schools was their “national
orientation”28. As for the Landau school, this effect was of course a natural
defensive reaction to the disgusting antisemitic politics of the Communist
Party of the USSR in approximately 1965-1985, but sometimes one could not
get rid of an impression that the “national orientation” extended far beyond
the compensatory focus. This may probably be called an overreaction.
However, such an overreaction had some inherent dangers distorting the
delightful construction of scientific schools - an informal community of highly
qualified people. In particular, the “national orientation” tended to attract
28 This expression belongs to A. A. Rukhadze, a well-known plasma physicist[148]. Quite
naturally, I risk invoking indignation and derogatory arguments from the remnants of the both
schools, if they condescend of course.
2.8
The Role of Human Communication
69
stray persons (I met some of those). The second thing that I could not
unconditionally accept was the principle of problem selection according to
the leadership taste. Take the Landau school, for example. L. D. Landau was
indisputably a genius, and he raised a bunch of super professionals who
inherited, to some extent, his arbitrariness and prima donna habits. One of
the principles of problem selection was according to their potential
solvability. That amounted to the situation when there existed a class of
physical problems which deserved some consideration and an antagonist
class of problems not worthy to spend time on. Technically speaking, this
meant that the problems where there was no distinct small parameter should
be neglected. Unfortunately, some physically interesting problems landed in
the “unworthy” class and had to be rejected. For example, the theory of
liquids, as near as I remember, was totally uninteresting - at least for young
adepts of the Landau school of that time (beginning 1970s). Incidentally, a
somewhat haughty attitude towards the theory of liquids was reflected in the
textbook “Statistical Physics” by L. D. Landau and E. M. Lifshitz [24], beginning
of §76. I remember that I tried to discuss the situation with liquids with
Professor D. N. Zubarev whom I respected very much and who was a very kind
and knowledgeable person. D. N. Zubarev was generous and absolutely
neutral as far as group prejudices were concerned, and there were, as I
remember, no limitations in topics discussed at his seminar in the Steklov
Mathematical Institute devoted mostly to statistical mechanics. I also
remember approximately what he said to me about the attitude of the
“Physproblemy” people to liquids: this problem is outside the customary
mathematical repertoire of young Landau scholars, therefore liquids are not
interesting to them; it is rather our problem.
I had an impression that collective norms, likings and dislikings, were
obvious and nobody tried to conceal them. The sense of solidarity was more
important to school members than personal aspirations and even individual
achievements. This could be easily explained: belonging to a school meant
security, a conviction that the personal and at least professional career is
mapped out in advance. The pressure for working commitments and rapid
fulfillment, if one had already established oneself within the school, was
lessened.
Despite the pressures exerted on ordinary members by the elite scientific
community, the benefits are great. The community and its influential leaders
protect the community members from the critic of rival schools, even in the
case of scientific mistakes. Thus, the basic problems of survival in a scientific
milieu, very acute for an individual researcher, become inessential for a
member of an established community. Formally, this quasi-paternal
protection is due to severe filtering of scientific mistakes inside the
community, but in fact it is the very stamp of the renowned scientific school
that matters, especially in highly complex subjects.
There is nothing unnatural or unexpected about such protective
mechanisms and cooperative behavior in science, all this is a basic part of
human social and biological inheritance. Bygone are the unhurried days of
individual, alienated scientists. Today, collective norms dominate. Even
70
Principles of Mathematical Modeling
Einstein was known to be largely neglected and almost ridiculed by the herd
of physicists engaged in so-called elementary particle physics which in 1950s
- 1960s was in fact hardly physics, rather multiple attempts to cope with the
bewildering number of particles by classification. Now that period is often
referred to as the “zoological” one, but at that time it was of fashion, with
crowds of people being attracted. In today’s parlance, that was the period of
an extensive build-up of mathematical models based on the Lie group theory
- a decent pastime in its own right, but having little in common with physics.
One
can
see
the
remnants
of
this
approach
in
http://pdg.lbl.gov/2007/download/rpp-2006-book.pdf.
Scientific schools, despite all their drawbacks and emphasis on human
communication, have become, as stated, a powerful development mechanism
of basic science. They ensured a very encouraging, even protective
atmosphere which is vital for the young members. Such a coaching and
mutual assistance, a fundamental urge to cooperate within the “scientific
tribe” - this was basically provided by the school - existed only in the Soviet
Union of those years. Now this scientifically productive period has gone
forever.
To make things clear, I don’t have any nostalgia for the Soviet times, as
many people do. Nostalgia for one’s childhood does not necessarily mean that
the childhood was a happy one. The communist regime commited a lot of
atrocities, and any society under communist rule is completely deformed.
There is no point to be attracted to this regime. But science was excellent,
maybe due to some compensatory merit selection - the best people went to
science and not in business. Besides, radically different criteria were applied
in Soviet scientific schools as compared with the current situation. For
example, nobody paid much attention to the number of publications. The
quality of the work, the degree of understanding of scientific ideas by a given
person (for concreteness, I speak of physics here) were most valued - even
more than his or her nearness to the leader. Nothing of the kind exists today.
The “publish or perish” principle of career development resulted in the
multiplication of scientific and quasi-scientific journals, their number
nowadays exceeding the human capacity to observe. Worse than that,
scientific literature has become cluttered up by a heap of immature, wrong or
“not even wrong” papers. Unfortunately, these statements are not empty
lamentations.
I am not saying that a disintegration of science into “schools” or scientific
tribes as major units - a 20th century phenomenon - is necessarily a
drawback. This is probably an answer on the increased complexity of
problems, especially in nuclear weapon creation. What I am trying to say is
the increasing price people have to pay for the breakdown of science into
groups, usually in the form of community norms, attitudes, strict
subordination, prejudices and stereotypes. I have a few personal
reminiscences how powerful these community norms can be. In 1970s,
together with one good experimental physicist, we produced a theory and
made a device aimed at exploring radiation properties of modulated electron
and electron-ion beams. This work was supported by some leading Soviet
2.8
The Role of Human Communication
71
physicists including Prof. E. P. Velikhov and Prof. M. A. Leontovich. Now I think
that it was not a bad work elucidating some interesting microscopic
properties of coherent mechanisms of electromagnetic radiation. We were
recommended to submit the work to the competition for the so-called
Komsomol Prize. In the final part of the competition, however, we were
rejected with the explanation that the Prize that year should go to the city of
Gorki (now Nizhni Novgorod) because the powerful leaders of the local radio
physical school complained that Moscow had taken all the prizes in the
previous years. My colleague-experimentalist was so disappointed that in a
short time ceased to work on his experimental installation. Later, he
completely changed his field of activities and became the managing director
of one of the most famous restaurants in Moscow. I also remember an extreme
case of a brutal verbal attack on a person from the Moscow Engineering
Physics Institute (MEPhI) during his presentation of the doctor of sciences’
thesis. The attack carried out by a member of a rival scientific organization
was scientific only in the form. The genuine reason was, as I understood, to
demonstrate a supremacy of the scientific institution the attacker
represented. The unhappy applicant died of a heart attack the same evening.
I have also had an experience of being attacked by some narcissistic member
of a well-known scientific school during a presentation. His argumentation
was ridiculously incompetent and not in the form of questions, but even if he
was understanding this, it did not matter at all. It was the social act of
community defense.
How important it was to belong to a certain “school” was manifested in
the group attitudes of even the most prominent physicists. I was constantly
reminded of this. When in autumn 1985 I wanted to publish a very interesting
and fresh article by D. N. Klyshko on quantum optics in the journal “Science
and Life” (David Klyshko was a modest university professor and little known
to physics grands at that time; besides he was a very modest person and an
individualist). The editor-in-chief and his servile deputy were harshly against
this publication on the ground that Klyshko did not belong to any known
scientific community. It was only much later that D. N. Klyshko’s works
became almost classic (unfortunately, he died prematurely in 2000). In the
same 1985, I was at a very interesting MePHI summer school on condensed
media physics (later I wrote a report on this school in “Science in Life”, No.1,
1986). One of the leading presenters there was Professor D. A. Kirzhnitz, a
super qualified physicist and a very good person, whom many of us, school
participants, liked very much. He seemed to possess a special charm of
firmness - a feature of independent and unfairly traumatized person. I think
Professor Kirzhnitz was not fully estimated as he deserved it. Once we -
Professor Kirzhnitz and me - accidentally met near the news stand, and his
first words to me were: “And you, Sergey, to which school do you adhere?” I
did not know what to reply. What astonished me then was that even such a
prominent physicist as D. A. Kirzhnitz tended to automatically classify people
72
Principles of Mathematical Modeling
according to their adherence to a certain school. At least I interpreted his
question in such a way29.
So there are many tribal signs in scientific communities. No one apart
from your scitribe dares using the community resources such as library,
computers, Internet access, discussion notes, etc. From time to time some
members of the community are set off assailing other communities. Younger
members produce as much noise as possible around the community, imitating
the aggressive manners of their senior leaders. In case the community is
successful, it attracts new members and grows in size. Then splinter groups
typically appear establishing themselves inside new institutions 30 . This
process reminds me of colonization of new territories, in this case in socio-
bureaucratic space.
2.9
Antimodels
The problem with science is that it is an extreme manifestation of the human
curiosity and exploratory activity, thus it tends to be alienated from everyday
needs on which “common wisdom” is based. Therefore, scientific truth must
be safeguarded, developed, improved and continuously explained to people
who, mostly due to personal circumstances, do not necessarily have an
appropriate background - the frame of reference required to appreciate
scientific statements. Science is developed to be approved by colleagues, non-
science is mainly approved by consumers - bureaucrats, businessmen,
journalists, and other lay public. Both science and non-science can be good or
bad, interesting or dull, useful or useless, as well as approvers can be fair or
foul, competent or incompetent, disinterested or biased. Many prominent
philosophers were engaged in analyzing the phenomenon of pseudoscience
(see, e.g., [141]). In my terminology, non-science incorporates scholarship,
fiction and pseudoscience. Scholarship is a collection of disciplines based on
study of documents and in this respect it is close to science. Fair and unbiased
investigative journalism, in my understanding, is akin to scholarship. Fiction
does not pretend to be based on accurate documentary investigation, it
constructs models of life by depicting a series of specific situations, in which
presumably free-will personages develop some action to embody the author’s
presumed ideas. Fiction borders on philosophy - in fact philosophy is a kind
of fiction where the presumed ideas become so valued to the author that
he/she does not resort to specific situations to convey these ideas, using
abstractions and generalizations instead31. Philosophers tend to argue and
declare instead of calculate, in this sense philosophical works are closer to
theoretical systems than to specific models. Pseudoscience is based neither
on documents nor even on philosophical generalizations of observed factoids;
it uses beliefs and vague resemblances - as in, e.g., telepathy - instead of
29 Perhaps incorrectly, but now it is impossible to find out the truth: unfortunately, D. A.
Kirzhnitz died in 1998.
30 Remarkably, not necessarily scientific, which means that status drive is stronger than
curiosity that drives science.
31 One can see an interesting example of the transition from fiction to philosophy in the novel
“War and Peace” by L. N. Tolstoy.
2.9
Antimodels
73
analyzing facts. In this sense, pseudoscience is a falsified science which
substitutes the knowledge about the real world by speculative prescriptions
and arbitrary rules.
Scientific models, in particular, mathematical models, in distinction to
others, operate with quantities that may be measured with quantified
precision. To measure a quantity means to compare it with a homological
quantity taken as a unit and express the obtained result of this comparison by
a real number. One must remember that there exist many “quantities” that
can be ordered, i.e., to them the notions “more” (>) or “less” (<) may be
applied, but still not measurable, for example, beauty and ugliness, courage
and cowardice, cleverness and stupidity, wittiness and dullness. I am not sure
that there are universal units to measure, e.g., creativity and orthodox
ineptitude or wit and dumbness, etc. Despite all the efforts of psychologists
and other social scholars, such quantities cannot be so far expressed by
numbers. Therefore, it is difficult to build mathematical or really scientific
models for the major dichotomies expressed by vague antonyms in our
everyday language.
One should not, however, conclude that modeling should by necessity be
scientific in character. In our intuitive modeling of reality, the scientific
component is customarily negligible. There exist also para-scientific and even
anti-scientific models, and they may be quite popular, e.g., astrology,
esoterics, medieval magic, or creationism (now being taught in schools). In
such models of reality, the scientific component, even if it is present, is sinking
in the spicy bouillon brewed of disinformation, mysticism and deliberate
charlatanry. If scientific facts contradict such esoteric beliefs, too bad for the
facts. A good example of such an approach is delivered by the “castrated”
history model advertised by a group around the well-known Russian
mathematician A. Fomenko. Even the Chinese medicine with its
representation of the human body as a bundle of channels, although
frequently quite successful, is based on a bunch of para-scientific models not
properly corroborated by experiments and easily adopted by charlatans. An
interesting example of para-mathematical modeling are torsion fields as an
explanation of telepathy/telekinesis (see below). Thus, some sectarian views
might even be put in a mathematical form. Although many such models,
frequently appealing to awe and prejudice, may be intriguing and it is
sometimes exciting to disclose their inconsistencies - I shall not discuss them.
Life is very short, and normal, non-esoteric mathematical models provide
ample material to deal with, see however “The PEAR Proposition”, [278].
There is usually a great demand for pseudoscience in society, and one of
the reasons for explosive growth of pseudoscientific beliefs lies in indifferent
and condescending attitude, although perhaps mixed with contempt, of
absolute majority of scientists to all that “esoterics”. It is curious that esoteric
beliefs often go side by side with science making them hardly distinguishable
from the latter in the layperson’s eyes. One can easily notice that a vast
number of newspapers and magazines regularly publish astrological
assertions and report on various “miracles” alongside observing scientific
facts and discoveries. It is clear that astrology is pragmatically used as a tool
74
Principles of Mathematical Modeling
to increase circulation, which is a sad evidence of mercantile populism.
Despite the fact that many millions of people believe in astrology (there exists
even a TV channel “Astro TV”), I have no respect for it. Devoted people usually
say that astrology is not a discipline being capable of fulfilling the demands
for a scientific testing at the present time, just like, e.g., string theory. Yet the
obvious difference between astrology and string theory is the fact that the
latter is mathematically sound, at least some versions of it. Astrology does not
seem to be mathematically sound. I cannot understand the main statement of
astrology that natal horoscopes ensure definite prediction about what a
person’s life will hold for each individual. This implies that there must be a
bijective mapping between a set of time points, say h hours m minutes s
seconds on dd mm yy and a set of individuals, each of them born at this time
point, with their totally different life trajectories described to the minutest
details. Of course, there is no such mapping. In other words, how many people
are born each second? Do you really think they will have the same fate? Each
day about 300000 people are born to the world, which gives approximately
four humans pro second. Do all of them have the same fate? Or the temporal
resolution of 1 second is insufficient for astrological predictions? Then one
can remark that in the absolute majority of horoscopes accuracy is limited by
a day. Even if the time of birth is known with hourly precision, it means that
approximately 12500 people are born simultaneously within this accuracy.
Dividing by two (accounting for gender) does not improve the situation. This
is an obvious objection to astrology, and I could not find any response to it in
astrological texts picked up at random in media and Internet. There were
numerous statistical tests of the validity of astrological prognoses ([11]).
Researchers failed to find in the available literature any statistical match in
astrological interpretations. It is remarkable that astrologers dislike such
tests. They typically assert that scientific methods are too crude and intrusive
whereas the alleged astrological effects are so subtle and hard to detect that
astrologers can only arrive at the correct prediction (interpretation of an
astrological chart) by using paranormal forces or by tuning to the “cosmic
order”. This tuning can solely occur during authentic (one-to-one)
consultations. As soon as inquisitive scientists interfere by posing their
primitive questions, the occult tuning disappears. Presumably, scientists are
not a part of the cosmic order.
An astonishingly high potential for pseudoscience is not an
uncomplicated socio-psychological phenomenon. General public strives to
miracles, magic manifestations, and all kind of mystical rubbish whereas
scientific standards require considerable intellectual efforts. Scientific truth
is not for couch-potatoes. It has been noticed that striving to mysticism is
amplified during periods of social instability and confusion - it’s a kind of
escapism. One can also notice another socio-psychological phenomenon
related to “esoteric” models of reality consists in a sharp division on “pro” and
“contra”. The proponents of “all that” (i.e., pseudoscientific beliefs) make
absurd statements and their opponents - mostly scientifically literate persons
- tend to produce emotional response: see, what crap! Such polemics is based
on stereotypes - unfortunately for both sides. Only rarely the “scientific” side
2.9
Antimodels
75
suggests constructive approaches to objectively estimate the claims
promoted by the “pseudoscience” side. The Godik-Gulyaev group attempting
to study a full set of physical fields that could in principle account for
paranormal phenomena (this research was conducted in the Soviet Union in
1980s [41]) was a notable exception.
Instead of precise scientific evaluation of facts, propaganda and
counterpropaganda are flourishing. In my opinion, it’s a shame primarily for
real science, not pseudoscience, since by now a rather extensive observational
material related to rare natural events (such as light emitting objects, UFOs,
rare biomedical phenomena, etc.) has been accumulated. Of course, not all of
these observations are trustworthy but authenticity of some of them may be
confirmed. The question is how can one correctly interpret such
phenomena32.
It is exactly here that orthodox science is very passive. Scientists consider
it “indecent” to scrutinize incomprehensible observations, construct
intelligent models and propose validating experiments. Naturally, the niche is
filled up with charlatans disproving current science, offering “new physical
principles” and “scientific pluralism”, while ignoring the entire scientific
experience and the current state of knowledge. The increasing activity of
charlatans is promptly catalyzed by the falling level of education in natural
sciences33. One can see a distorted logic in the “scientific pluralism”, which
can be demonstrated each time on examples but it is a tedious procedure to
point out mistakes of inventors of just another perpetuum mobile. Let us
illustrate this distorted logic by a physically neutral example of creationists.
The typical statement of liberal anti-Darwinists (there exist also militant
ones) is as follows: “Evolution is only a model. It is not a fact. Creationism is
also a model.” Therefore, we can present these two models in schools together
as just two different models.
However, there is an implicit logical mistake in this statement: a
replacement of the ∩ quantor by the ∪ one. evolution and creationism are
incompatible so that they cannot be presented side by side - it’s like 0 and 1
in logical schemes, true and false. Speaking truth to the public is not always
appreciated, but if one claims to be a scientist one should not allow ignorance
to prevail, even if comes under the guise of common sense or is supported by
the majority. One should leave succumbing to the majority’s urging to
politicians.
32 I like scientific skepticism, but I don’t understand stubborn rejection of facts, even in case
they lack proper bureaucratic registration. I don’t think all the people (including a lot of military
personnel and even ex-President Carter) who claim to have seen UFOs are lying or confabulating.
On the contrary, it is very interesting why the other party, namely high military commanders,
special service officials and governmental investigators in many countries are covering up the
facts.)
33 Nowadays, we are observing the Great Bureaucratic Revolution in the whole world, which
requires more soft-skill persons than scientists and engineers. So, there is a redistribution of
people over specialties, favoring administrators, managers, lawyers, marketing and PR agents,
etc.
76
Principles of Mathematical Modeling
Forewarned is forearmed. “New physical principles” are attracted by not
very qualified but enthusiastic people to explain the phenomena of telepathy,
clairvoyance and other “biofield”-driven type. Some philosophers stipulate,
for example, that mysterious particles with imaginary mass are responsible
for information transfer between humans, and the respective fields may be
called the “conscience fields”. But what is the imaginary mass? To answer this
question we must recall, firstly, what is mass and, secondly, what is
“imaginary”. We shall talk a lot about mass in the subsequent chapters, in
particular in Chapter 8, let us now recollect basic facts about imaginary
numbers. First of all, there are no imaginary numbers, only complex numbers
that play a great role in physics. There exists only one imaginary number: 𝑖
defined as 𝑖2 = −1. Of course, there are decent complex mass models in
serious physics, for example to describe unstable (decaying) particles and
resonances (see, e.g., http://en.wikipedia.org/wiki/Particle_decay). The
complex mass concept, by the way, may work well for Higgs bosons (and in
general for all gauge bosons, see below) in the upcoming investigations on
LHC (Large Hadron Collider).34 The LHC investigations, specifically tests of
the Standard Model, see Chapter 9, are based on scattering and decay
processes with the participation of many particles, and one of the suitable
mathematical tools to describe such processes is exactly the complex mass
scheme. But this real physics has nothing to do with philosophical
speculations about “imaginary mass” as the foundation for telepathy.
Incidentally, the imaginary mass can be ascribed to tachyons,
hypothetical particles whose energy grows with decreasing velocity - quite an
unconventional behavior. Tachyons most probably do not exist, since
otherwise a lot of physical principles that have been experimentally
confirmed would be violated, for example, causality in which people strongly
believe (see, e.g., http://en.wikipedia.org/wiki/Grandfather_paradox; see
also Chapter 9). In this sense, tachyons are “parasitic” quantum field
excitations rather indicating at some instabilities in the theory than providing
real means for telepathic information transfer.
Some other people believe that telepathic signals are carried out by
gravitational waves that presumably affect us as the people denoted as
“lunatics”. So, there may be links between various fanciful clusters (nodes) of
models such as ESP (extra-sensory perception) and, e.g., astrology. There is
probably a subnetwork of speculative pseudoscience with a gateway to real
scientific research. Astrology for example, as well as some other occult
models, is based on exaggerated ancient fascination with the relationship
between the motion of heavenly bodies and earthly phenomena such as
regular variations of illumination and temperature, as well as of plant growth
and animal behavior. Extrapolating this on human life, one gets a would-be
system of regularity patterns. According to astrology, distant heavenly bodies
exert a profound influence on you at and after the moment of birth. This is an
example of focusing on resemblances rather than cause-effect relations. From
the physical viewpoint, it has been clear for over two centuries that astrology
34 This text was written in 2007, before LHC was put in operation.
2.9
Antimodels
77
has nothing to do with reality. Astrology (together with other occult
disciplines) always reminded me of a smoke-and-mirror trick to distract
people’s attention. Physics has left no place for astrology35. Indeed, physical
understanding allows one to separate the true influence of heavenly bodies
on the objects on the Earth from fanciful speculations. Of the four known
forces in nature - electromagnetism, strong interaction, weak interaction, and
gravity - only gravity and electromagnetism could theoretically have such
influence over long distances. Could the force of gravity carry astrological
influences? Obviously not, and already the secondary school course in physics
would be sufficient to understand it. There were many objects close by when
you were born that exerted much larger gravitational force on your infant
body (it is remarkable that astrology is silent about them). For example, one
can easily calculate that one’s mother would produce the gravitational (tidal
force) effect approximately 106 stronger than the Moon:
𝐅≈−𝐺𝑚𝑀
𝑟3 𝐫+ 𝐺2𝑚𝑀
𝑟3
(𝑎
𝑟) 𝐫
(2.1)
where G is the gravitational constant, m is the mass of the newborn, a is its
dimensions; for M one should take concurrently the mass of the Moon
(7.36×1022 kg) and the typical mass of a human body (70 kg), so the
comparison gives
𝐹𝑀𝑜𝑜𝑛
𝐹𝑚𝑜𝑡ℎ𝑒𝑟
= 𝑀𝑀𝑜𝑜𝑛
𝑀𝑚𝑜𝑡ℎ𝑒𝑟
(𝑎
𝑅)
3
~1021 (
0.4𝑚
4 × 108𝑚)
3
~10−6
(2.2)
Likewise, the other planets of the Solar System cannot exert any
considerable influence on a newborn child against the background of
neighboring gravitating objects. The effect on the biosphere of cosmic bodies
located outside the Solar System is so negligible that no microscopic scales
typical of living species are adequate to describe it.36 It is curious that such
simple estimates are not (or seldom) produced at schools. So, astrology is a
pseudoscience because there is no known physical mechanism accounting for
the influences it postulates. Astrology claims that the very weak forces
exerted on us by the planets can strongly affect our lives and characters, but
science allows for no such physical possibility.
Specifically, it can be neither the gravitational nor the electromagnetic
force that carries astrological influences, which means that none of the known
forces of nature is responsible. Of course, there may be new long-range forces
in Nature, and there are some experiments aimed at finding new interactions,
but physics imposes very stringent limitations on their magnitude (coupling
constant) and range. For instance, considering some new hypothetical
interaction, it is indispensable to verify its compatibility with conservation
35 Nevertheless, I personally know some physicists, even of rather high rank, who are fond of
astrology and even hire “professional” astrologers.
36 By the way, tidal forces are the manifestations of spacetime curvature, see Chapter 8.
78
Principles of Mathematical Modeling
laws, with celestial mechanics (which is very thoroughly studied),
astronomical and astrophysical data, with the data from high-energy physics
about the lifetime of elementary particles, with other empirical information
such as chemical element distributions, with the equivalence principle that
has been tested with record accuracy ([12]), etc. All these constraints
accumulated by modern physics make the room for such fundamental
discoveries as new forces extremely narrow and the requirements to the
skills of the researchers who venture to look for fundamental discoveries
inordinately high. I strongly doubt that such requirements could be fulfilled
by anyone in the astrological milieu.
I would dare to give the people who try to discover new and unorthodox
phenomena a good advice: first to learn and then to make discoveries. It
applies also to a large number of people who childishly believe in various
fanciful or occult models of reality: microlepton, torsion and unmeasurable
bio-energetic or psi fields, telekinesis, cosmic influence, astrology, chaos
magic, biorhythms, and “all this”. These are in fact not models nor theories,
but a number of verbal patterns. Occult (lat. secret, hidden, clandestine) or
mystical models are unscientific or, rather, anti-scientific. Indeed, one can
define mystics as something that violates the laws of physics. The rational
component in these models is that they serve as a consolation for
unsuccessful scientists or social losers. Honestly speaking, I do not think that
it makes much sense to intellectually fight or scientifically disprove astrology.
This strange sin is as inevitable as drunkenness, depravity, wickedness and
other human vices. Astrology is supported by fatalism and belief in superior
leadership, both had been used for long centuries by feudals and authorities
to exercise control. I do not discuss here the so-called business astrologers -
the majority of them are probably mere swindlers, so these activities have
nothing to do with any modeling. Today, astrological business is flourishing
owing to the ubiquity of personal computers allowing one to run simple
astrological software. The common feature of “not quite scientific models” is
their vagueness: such models make rather unspecific predictions, just as
astrology makes weak, general and highly inconcrete predictions about an
individual’s character and life. A standard set of vague statements can hardly
be called a model of reality. However, a number of scientists somewhat
uneasily confess that they are studying astrology as a possible base to explore
the “cosmic influence on the biosphere”. This position of scientists testifies to
the fact that there is no strong immunity to pseudoscience in different social
groups, even among professional scientists and natural science students.
Ironically, one of the first commercial astrologers openly practicing for money
was a great scientist Johann Kepler. Nevertheless, Kepler apparently treated
astrology with disdain as an ordinary superstition ([11]).
Usually, the astrologers respond to the scientific criticism that “it is
obvious that astrology works in practice, but you do not know why. Likewise,
the gravity existed and worked long before people formulated the laws of
gravity. You cannot find now scientific evidence for astrology; many scientific
claims are made in the absence of any known underlying physical mechanism.
For example, few doubt that there is a strong causal connection between
Topological Models
79
smoking and lung cancer, but no one knows the precise causal mechanism
involved - because no one knows the details of carcinogenesis. The
fundamental error in this criticism of astrology is to look at it only in terms of
a part of science, the part that explains by means of laws and theories. But
there’s another part: discovery of new phenomena that have yet to be
explained”.
That is a deliberate confusion. Statistical tests show that there is no
empirical evidence that astrology works in practice, in contrast to gravity. For
a rational person, only statistical data are enough to disprove astrology. The
kernel of astrology is characterological prognosis and forecasts of an
individual’s fate. It is exactly when medieval astrology was engaged in these
activities that it diverged from astronomy regarded as a science and became
a kind of a social drug. Many people promised to donate considerable sums of
money, in one extreme case the entire fortune and property to a person who
in a correct, scientific test could prove that the astrology functioned. Large
prizes were destined also for adepts of other occult or pseudosciences. To my
knowledge, nobody has lost a cent.
2.10 Topological Models
We characterize topology as a general means to analyze differential equations
without solving them. This is only a local task for topology. In general, this
discipline deals with those properties of manifolds (in particular surfaces)
that are preserved under continuous transformations. For example, a triangle
and a disk are equivalent topological objects since any two points located
within these figures may be connected by a continuous curve. However, a ring
(circle) and a disk are not equivalent in the topological sense: in contrast to
the disk, there are closed paths (loops) on a ring that cannot be contracted
into a point by any continuous transformation. This fact is a manifestation of
topological non-equivalence of these two figures: the disk is simply connected
whereas the ring (or 𝑆1-circle) is not - it may be separated into two totally
disjoint open sets.
One can readily recall some advantages arising from the application of
topological techniques in physics. The first thing that comes to mind is the
analysis of critical points of a function (see Chapter 4). Recall that the critical
point 𝑥𝑐 is the one in which 𝜕𝑖𝑓≡𝜕𝑓𝜕𝑥𝑖= 0, 𝑖= 1, … 𝑛, n is the number of
variables (dimensionality of the domain space or manifold). If 𝑑𝑒𝑡(𝜕𝑖𝜕𝑗) ≠0,
then we can represent the final difference ∆𝑓 near the critical point as the
competing sums of positive and negative squares:
∆𝑓= −∑(∆𝑦𝑖)2
𝑚
𝑖=1
+ ∑(∆𝑦𝑖)2
𝑛
𝑗=𝑚+1
,
where37
37 Here we retain the summation symbol as it is customary in topological texts
80
Principles of Mathematical Modeling
𝑦𝑖= ∑𝑏𝑖𝑘𝑥𝑘
𝑛
𝑘=1
,
𝑏𝑖𝑘 is the matrix which diagonalizes 𝜕𝑖𝜕𝑗𝑓. The number m of negative squares
determines the type of a critical point 𝑥𝑐: if 𝑚 = 0 then 𝑓(𝑥), 𝑥= (𝑥1, … , 𝑥𝑛)
has a minimum in 𝑥𝑐, 𝑓(𝑥𝑐) = 𝑚𝑖𝑛; if 𝑚= 𝑛, 𝑓(𝑥𝑐) = 𝑚𝑎𝑥, and in
intermediary cases 𝑥𝑐 is a saddle point. If we define as 𝑝𝑚 the number of
critical points of the m-type, then we can observe that the minimal possible
values of 𝑝𝑚 depend on the topological properties of the manifold on which 𝑓
is defined. Thus, the qualitative properties of solutions to the physical
equations, which can be nonlinear and quite complicated, may be studied by
topological methods. Such methods are especially valuable when it is
necessary to establish the relationship between the structure of critical points
of the considered differential equation and of its solutions (see Chapter 4 for
more details).
In Chapter 9, we shall briefly discuss superconductivity - an astonishing
phenomenon that has not been, in my opinion, fully understood yet in spite of
many loud claims and a lot of hype. If one produces a ring-like superconductor
(experimentally, a frequent case), then the magnetic flux trough the ring
associated with the superconducting current becomes quantized. It is a
nontrivial fact that the flux quantization in a superconductor is determined
by the latter’s topology: for a simply connected superconductor the magnetic
flux is zero whereas for a multiply connected one the flux Φ = 𝑛ℎ/2𝑒, with
integer n.
2.11 Engineering Models
Engineering is not an exact science. During studies of engineering, one grows
little by little accustomed to finding empirical patches adjusted by the hand
to make things work, and indeed, they work. For a theoretical physicist or,
even worse, for a mathematician, it may be rather unpleasant to make
engineering calculations based on coefficients which may seem arbitrary, or
on relations presumably stemming from experiments. “Pure” science seems
to be underemployed - maybe down to several per cent in some profane
engineering applications. It is a truism to say that life is very complicated, so
even best science cannot always provide us with powerful engineering tools
and off-the-shelf solutions. That is why scientific models always have a
comparatively narrow applicability area and always require experimentally-
based correction.
It is natural that both in a scientific research and in an engineering
inquiry, in order to understand, describe or predict some complex
phenomenon, we employ mathematical modeling, already reflected upon.
Mathematical models range from a simple formula (such as, e.g., the Aristotle
model of motion) to an extremely sophisticated system of mathematical
concepts such as superstring/M-theory or climate research. For an engineer,
designing a concrete technological system in contrast to a general researcher,
it is desirable that a model should be as simple as possible still revealing all
2.11
Engineering Models
81
essential properties of an engineering system. But what does the word
“simple” mean in the operational engineering context? In my view,
engineering simplicity is not an antonym of mathematical complexity which
is, by the way, still a poorly defined concept (see, e.g., [279]) closely tied with
the interconnection of elements within the system and not necessarily with
the system’s functioning. The latter is the primary focus of an engineering
endeavor. One can formally construct, e.g., on paper, a mathematical model
based on highly nonlinear partial differential equations for which
conventional engineering control structures, such as inputs and outputs, will
be hard to define and still more difficult to design. In this way, we would have
a kind of a mathematical model giving very little to an engineer from the
practical viewpoint (such models are in reality produced on paper in large
numbers). It is thus a matter of considerable practical interest to know how
to build simple models - from the engineering viewpoint. Here, by “simple” I
understand models involving the least possible number of variables and
phenomenological adjustment parameters that ought to be considered.
Quite naturally, this minimal number is a function both of the system type
to be modeled and of modeling purposes38.
Although personally I admire good - and sometimes beautiful -
engineering solutions, I still think that engineering education lacks a certain
mathematical hygiene. For specific problems, engineers have to rely on their
experience in order to choose the most essential parameters destined to serve
as the dynamic state variables. Then one may pose the important question:
would it be possible to develop a systematic procedure for the model
reduction, which would be intuitively prompted and, at the same time, would
preserve the most relevant features of the system to be designed? One must
remember that an engineering system in question may be mathematically
represented by complex non-linear dynamical equations. The answer is
probably “yes, it would be possible”, although such a statement, of course,
cannot have the rank of a mathematical theorem - at least so far. Some
systematic procedures to obtain viable engineering schemes from
scientifically-oriented mathematical models do exist, but they have a local
character. One can recall the examples of the Kotelnikov-Nyquist-Shannon-
Whittaker theorem or of the Karhunen-Lo`eve expansion. However, the lack
of universal systematic procedures for transforming highly sophisticated
mathematical models into engineering schemes is the main reason for the
chasm between the world of science and that of technology.
Speaking honestly, engineering models are not quite mathematical; they
are rather ad hoc solutions cooked for a particular real-life problem but
cooked also by great people. We are using a lot of wonderful things which are
engineered and manufactured but are not based on solid mathematical
models. The only purpose of engineering models is to help technologists with
38 In this spot, I could have produced a number of simple examples of engineering
systems differing by the set of variables and parameters to be taken into account
depending on the required accuracy and design goals, but I rather refrain from stating
these trivial observations. Everyone can readily do it.
82
Principles of Mathematical Modeling
arranging the operations, predicting their outcome, and evading inherent
pitfalls. This purpose does not need mathematical stylishness.
2.12 Mathematical Models in Biology
By biology in this section, I understand the study of living organisms i.e., of
structures possessing the property of life, namely capable of growth and
reproduction. For instance, modern humans arrogantly calling themselves
Homo sapiens are a particular species in the branched hierarchy of living
organisms. Biology of humans is bordering with medicine, and one often
unifies these two disciplines under a compound biomedical area. Medicine,
however, is so far a descriptive discipline not largely based on reliable data.
Therefore, accompanying myth creation is still spread among medical
doctors, often in the form of allegedly vital advice. (One should of course be
skeptical in applying medical advice, especially if they contradict one
another.) It is interesting that people usually attribute medicine to sciences
although it is more concentrated on practical services and less on research in
background fields such as biology, chemistry, and physics. Medical clinics bear
more similarity to a repair shop staffed with engineers than to research
institutions with scientific personnel.
In contrast with inorganic matter, dynamics of biological systems is
determined not only by the cause, but also by the goal. In general, goal setting
and selecting among future alternatives, completely nonexistent in the
inorganic part of the universe, serve as efficient survival tools for biological
systems. One can say that not only retarded (causal) solutions but also
advanced (non-causal) ones can be realized in the biological world. Modeling
equations can have not only time-delayed coefficients, but also forward-
shifted. In other words, biological dynamics, from cells to complex organisms,
requires some goal-oriented mathematics.
In biophysics, one has long attempted to study cells as systems
interacting with the environment. The living cell contains a series of naturally
evolved complex nano systems which can be studied by quantum-mechanical
methods and probably reproduced with the help of quantum engineering.
However, given the extreme complexity of even the simplest cellular
functions, comprehensive mathematical models of living cells can likely be
developed primarily for experimentally well-studied biological systems such
as Drosophila, yeast, E. coli, etc. Such models can then be used to investigate
and analyze the genotype-phenotype relationship in general. This study of
phenotypes, provided the genotypes are already known, will thus have a
pronounced theoretical component because of the mathematical modeling
and computer simulation techniques. The main functions of the living cell
seem to be replicating and processing information on the quantum level,
therefore special techniques of quantum biology and quantum information
processing probably should be developed. Besides, the available (at least to
me) techniques of mathematical results are not yet sufficiently matured to
describe the living objects.
The biggest challenge of biology is that the great diversity of data and
their randomness can nonetheless lead to fine-tuned processes (in time) and
Mathematical Models in Biology
83
structures (in space). It means that the ideal physical notion of the world as a
machine seems to be inadequate so that a paradigm shift is required. In fact,
fully mechanistic nature would be incompatible with life, where evolution
gains order through fluctuations. Biological evolution is typically understood
as a descent accompanied by slight modifications. Diversity of biological
components increases viability and resilience of a biological system. From the
point of view of biological esthetics, diversity is the foundation of beauty since
it produces outstanding samples against a uniform background. Variability
and the ensuing diversity arise as replication errors: in the now fashionable
language of code, such errors are insertion, deletion and replacement of code
fragments. Anyway, biological evolution basically occurs due to errors, and
this picture is poorly consistent with deterministic classical mechanics.
Living systems are, in general, complex open systems that constantly
develop and strongly interact with the environment. Each biological entity,
however small, is a complex system in itself. The interplay of its various parts
e.g., cells, proteins reveal new emerging properties which initially may not be
a part of single subsystem. They are typically found in non-equilibrium states
compatible with the environment so that living systems must retain their
inner stability (often known as homeostasis). Preservation of inner stability
and compatibility with the environment is the principal strategy for staying
alive. Nonlinear equations modeling biological systems reflect their self-
organizing properties that are in most cases manifested without any external
organizing agent or aligning force.
The human organism can be regarded as a complex open system in a state
of delicate equilibrium exchanging chemical substances, energy, and entropy
(information) with the outer world. Since the whole organism consists of a
number of strongly coupled and cohesively working subsystems, the failure
of any one of them tends to provoke an avalanche of other failures. Therefore,
treatment of a single organ or a separate disease may very well lead to
complications involving other organs – an effect well-known to any practicing
doctor. Mathematically speaking, the health state is a manifold in the space of
all possible physiological parameters. From the perspective of new
technologies, in particular, mobile technologies such as wearable body
networks, the human organism can be conveniently treated as a nonlinear
unstable dynamical system incorporating a number of coupled subsystems.
Nonlinearities play a crucial role in biology and, in particular, in the
physiology of the human organism. In the organism consisting of interacting
subsystems (organs), changes tend to affect the entire system, therefore in a
functioning organism considered as a system of highly integrated parts,
changes are in general deleterious. Yet if the homeostasis is stable and the
stability is robust, then alterations do not jeopardize the whole organism,
with the eventual outcome that changes will be lost. If the stability limits of
the organism have been transgressed, unstable states emerge, interpreted as
diseases. For instance, the transition to unstable states (diseases) from the
health state can be based on the hierarchy of instabilities developed in a
nonlinear system, which is a far analogy with the transition from regular fluid
84
Principles of Mathematical Modeling
motion to turbulence. Stability can be monitored, in particular, by the
mHealth technologies [182].
A rather primitive illustration of health (stability) margins is the
ubiquitous reaction of the human organism to alcohol (ethanol). Small doses
of this substance i.e. blood alcohol concentration (BAC) less than 30
milligrams of ethanol in 100 milliliters (0.1 l) of blood or 0.3 per mil (0.3 g/l)
do not appear to seriously impair cognitive abilities or locomotion of an
“average” person. This value might be roughly taken as the boundary of the
stability (health) domain. Returning to this stability domain occurs with the
estimated rate of 0.15 per mil/hour, although of course this relaxation
process may vary from person to person. Both the health state and the
restoring rate depend on other variables such as age, gender, BMI, etc., not
only on BAC, which illustrates the fact that the health (stability) domain and
the restoring path cannot be uniquely defined for all persons and conditions.
In general, most chemical substances in the body have their specific threshold
levels with respect to their concentration in blood.
Classical science mostly studied systems and their states close to
equilibrium, and that allowed one to construct a beautiful collection of
comparatively simple physical models for the world. Such models depicted
the systems that reacted on perturbations more or less predictably: these
systems tend to return to equilibrium (in the parlance of statistical physics,
they evolve to a state that minimizes the free energy). Remarkably, however,
systems close to equilibrium can describe only a small fraction of phenomena
in the surrounding world – it is in fact a linear model. Any realistic system
subject to a flow of energy and matter will be driven to the nonlinear mode
i.e., far from equilibrium. For example, open systems such as Earth, climate,
living cell, public economy or a social group exhibit highly complex behavior
that is, firstly, hard to be replicated in the laboratory and, secondly, almost
impossible to model mathematically using the methods adapted mainly to
mechanical patterns. In contrast with the closed mechanical models which are
a drastic simplification of reality, open and nonequilibrium systems are
ubiquitous in nature. Most of the processes in the open systems far from
equilibrium are interrelated, nonlinear, and irreversible. Often a tiny
influence can produce a sizable effect, which is a universal property of
nonlinear regimes, and in the real world almost any system is nonlinear.
We see that from perspective of physics living organisms demonstrate
classical properties of open, complex and highly dynamic systems that would
primarily be described by nonlinear equations that capture the dynamics
such as differential or time dependent ordinary of partial differential
equations. However, while designing mathematical modelling of the living
structures one should always consider its biological nature. One can also
observe less time constrained events or quasistatic process that allow the
system to slowly preserve its inner equilibrium such as e.g., metabolic
procedures. Mathematical Modeling of Complex Biological Systems [181].
If living creatures are a certain form of physical objects, they can be
treated by physical means and the latter by the adequate mathematical
instruments.
Mathematical Models in Biology
85
Take for example the effect of scaling. Scaling properties of Newton’s law
show that if we increase the mass of the body four times, it will pass the same
trajectory twice as slow. Here we implicitly assumed that the potential 𝑈 does
not depend on the inertial mass 𝑚. This fact contradicts our intuition, e.g., if
we recall falling bodies. However, in the gravitational field 𝑈 is proportional
to gravitational mass (in Newtonian approximation), so the period does not
depend on 𝑚. The scaling function 𝑓(𝜃) can be determined either
numerically, or by ODE integration, or from experiment (measuring the
period or frequency, all other parameters being fixed).
Let us apply scaling to biological modeling. Many people still remember
the “Jurassic Park” movie – this efficient manifestation of mathematical
zoology. The group of scientists had to escape from the hungry
Tyrannosaurus Rex. Would it be easier to run across a flat plane or up the hill?
The speed of a living creature while crossing hills is inversely proportional to
its size, since in such conditions it is not the air resistance that the animal
should overcome but the force of gravity, so 𝑃~𝑀𝑣~𝐿3𝑣~𝐿2, which means
that 𝑣~𝐿−1. In order to not be compacted by a promenading dinosaur, it is
better to escape uphill: big size makes its move difficult for a creature to catch
you.
The year 2020 showed us that the world is becoming increasingly
vulnerable to outbreaks of sudden epidemics. Infectious diseases are caused
by a variety of pathogens (viruses, bacteria, fungi, protozoa) that can pass
from host to host. Mathematical and computer modeling of this process is
usually based on subdivision of the total population into three classes
(stocks).
1. Individuals susceptible to the disease but not yet infected (S)
2. Persons who are infected and capable of transmitting the disease (I)
3. Removed persons: those who are no longer capable of transmitting,
due to recovery, quarantine or death (R)
This 1,2,3 model is often called the SIR model. The disease is transmitted
horizontally by interaction between the S and I classes, the transfer of I to R
is assumed to occur at constant rate.
To model the spread of infectious disease one can apply a spatially
homogeneous (ODE) model:
𝑑𝑆
𝑑𝑡= −𝛼𝐼(𝑡)𝑆(𝑡)
𝑑𝐼
𝑑𝑡= 𝛼𝐼(𝑡)𝑆(𝑡) −𝛽𝐼(𝑡)
𝑑𝑅
𝑑𝑡= 𝛽𝐼(𝑡)
The spatial spread of infectious disease can be modeled by reaction-
diffusion systems. These are PDE equations that include the reactions (the
time derivatives as above) and also space derivatives for diffusion.
86
Principles of Mathematical Modeling
A simplified model of the infection spread can be produced by using the
logistic equation. Let 𝑝 be the infected fraction of the population, then (1 −𝑝)
is the susceptible fraction, and the time variation of 𝑝 can be modeled as
𝑑𝑝
𝑑𝑡= 𝛼𝑝(1 −𝑝) −𝛾𝑝
where 𝛼 is the transmission rate of the disease and 𝛾 reflects the recovery
(removal) of the infected population. These parameters are specific for the
disease and, in general, for different populations.
Investigation of this logistic equation may be performed by separation of
variables, integration and inversion. It will be discussed in more detail later.
Rates α and γ are control parameters.
Discrete analogs and critical parameters can be obtained, describing the
transition to catastrophic amplification of a disease, in the same way as for
the general logistic model. The transmission rate 𝛼 can be reduced by
quarantining; 𝛾 can be raised by immunization.
The logistic model, despite its simplicity, was able to adequately
represent population dynamics in various countries, e.g., in England, Scotland
and USA. In biology and ecology this model is used to describe various
evolutionary scenarios when the future population of species depends
linearly on the present population:
𝑥𝑖(𝑡+ 1) = ∑𝑀𝑖𝑗𝑥𝑗(𝑡)
𝑗
, 𝑥𝑗(𝑡) ≥0
The diagonal terms of the matrix 𝑀 represent individual growth rates, while
the off-diagonal terms represent the interaction between species. Fixed
amount of available resources corresponds to conservation laws. Evolution
gets really competitive when the total population reaches its maximum value,
∑𝑥𝑗(𝑡)
𝑗
= 𝑋
Let us not forget that any mathematical model is just an abstraction and
not an exact replica of its living prototype. It does not capture all the
diversities and dynamic changes of nature. Often the initial assumption of a
researcher may erroneously ignore some of the essential parameters critical
to reconstruct the behavior of biological species in question. The good thing
is that unlike laboratory experiments mathematics modeling can almost
endlessly verify initial assumptions, finetuning the model and narrowing its
approximation to reality.
Recently, the basic research of living organisms has noticeably expanded.
The front of such a research is advancing along many directions that seem to
be converging. Indeed, physics and chemistry are studying the functions of
organisms on the molecular level; mathematicians, physiologists, electronics
2.13
Cognitive Models
87
engineers and medical doctors are working together trying to exhaust various
options. Although there is a rising perception that contemporary medicine is
approaching a state of crisis characterized by bureaucratic inertia, rapidly
growing number of errors, averaging out creativity and stifling originality39,
innovative life science segments spread far from biology as the basic life
science discipline spanning a wide range of subjects: from purely biomedical
topics to the theory of dynamical systems, evolutionary models, physics,
mathematics, engineering, even history and philosophy. [181]
2.13 Cognitive Models
It may be considered an unfortunate fact that our world mostly consists of non-
quantifiable things. One says: “he is a good man” or “this book is interesting”,
how can you naturally make a numerical - ordered - set of such statements?
Human communication in general consists of only a tiny fraction of
mathematically sound messages, the absolute majority of them being purely
intuitive (you can prove this thesis by listening attentively to your own
everyday communication). The human language is largely adapted to such
intuitive messaging and has very little in common with formal mathematical
systems - it is here that the main difficulties of man-machine communication
using natural languages seem to be hidden. Take, for example, rhetoric - an
important art of influencing other people by messages: how can you judge
who is the better orator? Are there formal, quantifiable means to measure
eloquence? Hardly. So, in most cases, a person must develop highly subjective
algorithms to make a judgment and intuitive patterns to appreciate the truth
in human statements. It means that each of us constantly makes up cognitive
models making it possible for a person to separate true from false, thus
recognizing reality. In many cases, cognitive models are purely based on
intuition, insight, and former experience and can be “silent” i.e., not explicitly
formulated in words; in other situations, such models are verbalized as
opinions. I do not know the statistical distribution between verbalized and
purely intuitive - pre-verbal - judgments in typical human behavior (what is
typical?), and I doubt that any one does know. Here, I shall try to focus only
on cognitive models based on verbal messaging since silent insight is too
complicated for me to discuss.
In such a subclass of cognitive models, people use verbal constructions
rather than formal mathematical statements. Verbal constructions are
probably more appropriate for the human brain than thinking in numbers or
even in formulas. Our conscience does not in general function as a digital
computer, although there are such hypotheses as well. It would be interesting
to understand, why?
The incalculable complexity of human society results in the
circumstances when it is practically impossible to make sharp, clear-cut and
rational decisions required by mathematical thinking. The data continuously
obtained by a person are typically so diverse and quite often conflicting that if
one tries to apply mathematical reasoning to the received evidence one would
39 See, e.g., Bartens, W. Das Ärztehasserbuch [243].
88
Principles of Mathematical Modeling
be confused and compelled to long hesitation. This would be an obvious
disadvantage for survival in the condition of delicate balance between harsh
competition and unstable cooperation in the human tribe. So, it is more
beneficial to base the decisions on incomplete information, assumptions, and
not necessarily correct prognoses - mathematical pondering would be a great
luxury or even dangerous not only for an ordinary person, but even for a
leader. No wonder, many people have developed an aversion to mathematical
formulas.
One of the reasons for replacing mathematical statements by verbal
constructions, apart from a lack of qualifications, is possibly the fact that, as
Stephen Hawking put it in his famous “Brief history of time” [78], each
equation contained in the book would halve the sales. I could never
understand it. Take one of the best popular science magazines “Scientific
American”. Mathematical formulas have always been forbidden in this
journal. As a result, some articles in it are more difficult to read than in
“Physical Review”. I am not speaking about “Scientific American” articles on
biology, which typically require some professional competence in this field
and only rarely win lay readership. Why should one forbid a limited use of
formulas in popular articles? It is true that many people are afraid of
mathematical expressions, mostly because of lousy teaching methods in high
school, but such people would hardly buy “Scientific American” anyway.
Banning mathematical expressions out of everyday life is counterproductive
and, to my mind, leads to preservation of illiteracy, not only mathematical.
School-time ambivalence and fear of math suppress mathematical instincts
badly needed for survival40, and instead of encouraging their development,
powerful social forces still further inhibit these instincts.
Models play an important part not only in physics, but generally in life.
Those who find mathematical tools in modeling the human environment
unnecessary tend to take refuge in the alleged human experience or “expert
knowledge”. The authors of cognitive models concentrate on verbally
extracting the ideas for modeling out of the history of thinking and culture.
The statistical and mathematical problems are out of the scope in this
approach. It is quite natural since, even if there are some laws governing
human behavior and social processes, the mathematical structure of these
laws is unknown and the information contained in them cannot as yet be
condensed in formulas. In cognitive models, there is no means to controvert a
vague statement, therefore verbose discussions are habitual. Nevertheless,
cognitive models, sometimes taking the form of common maxims or
aphorisms, may be perceived as rather cute and precise statements, a kind of
“soft” model. Or take poetry as an example. Poetry may be hard to read, and
some people feel alien to it, which fact likens it to math. Poetry is similar to
mathematics in various aspects: for instance, in careful selection of teasing
words to compose the text, in anti-verbosity, in the tendency to reach the
distinct result by some sort of textual proof.
40 For instance, frequently putting the question “What would be if...?”
2.13
Cognitive Models
89
My interest in mathematical structures lies not in mathematics per se, but
rather in its possible use. In this sense, it is a pragmatic interest. I think that
mathematics, due to its abstract nature and the resulting generality, allows
one to avoid boring lists, enumerations, tedious classifications and verbose
descriptions which is, to my mind, an obvious advantage of the so-called exact
sciences. Cognitive models studied, e.g., in philosophy, psychology, theoretical
sociology, etc. are not even expected to be “exact”, in the sense that the issue
of the accuracy is never being discussed in these models, nor in the respective
disciplines. Contrariwise, mathematics and physics as well as a number of
areas in engineering, chemistry and biology are supposed to be exact since
they are preoccupied with quantitative precision. It is the question of
accuracy that mainly distinguishes exact sciences from the disciplines based
on cognitive models, where proper approximations and reasonable answers
are not quantified.
One of such disciplines is literature, although it is conventionally not
related to anything scientific. Nonetheless, literature is also exploring reality
by model-building. One of the best cognitive models, in my opinion, was
produced by George Orwell (Eric Blair) in “Animal Farm” [11] which may be
regarded as an essential model of socialism. Another famous book by George
Orwell, the novel “1984”, represented a rather exact model of a totalitarian
future: the world is divided between three despotic states with absolutely
identical behavior - they lie, kill, and try to dominate. There is no place for
democracy in this dystopian extrapolation of current social patterns. Luckily,
Orwell’s prognosis was erroneous, mostly due to the necessity of sustainable
economic development. Economies are striving to be open systems;
economies subjected to the overwhelming bureaucratic control are
foredoomed to failure. The model of evolutionary history was given in the
famous book by William Golding “The Lord of the Flies”, where a bunch of
modern English schoolboys, having suffered a shipwreck, reverted to
savagery and primitive dominance, reinventing the entire package of basic
ancient forms: hunting-gathering, ritual dances and sacrifices. The bestseller
novelist Arthur Hailey has conducted a number of case studies modeling
certain complex systems (Airport, Hotel, The Evening News, The Final
Diagnosis, The Moneychangers, Strong Medicine, Wheels, etc.). Chekhov has
produced a gallery of almost mathematically exact patterns, especially
manifested by the personages of his plays. Maybe because of this nearly
mathematical modeling, Chekhov’s plays are often considered by many
people boring and schematized. For instance, the main female characters in
Chekhov’s plays are in fact the same personage standing out under different
names.
Literary writers design situations i.e., they are trying to lie as truthfully
as possible about what might have happened but in reality, never occurred.
Many successful literary works are in fact constructed as the models of
physics. For instance, a satirical dyad, “The Twelve Chairs” and “The Golden
Calf”, immensely popular in the former Soviet Union, was designed by its
90
Principles of Mathematical Modeling
authors who wrote under the pseudonyms E. Ilf and E. Petrov41 around a hero
without internal states, a dimensionless figure of a swindler Ostap Bender
which serves as a probe submerged into the Soviet reality. The function of this
hero-probe was only to illuminate the stupidities of the Soviet bureaucratic
society. It moves through the medium as a structureless elementary particle,
with the medium being excited and its hidden absurdities becoming manifest.
The same or close models have been exploited in other well-known works of
the world literature. To begin with, Gogol’s classics, the play “The Inspector”
and the novel “Dead Souls”42 were also built around featureless swindlers
functioning as a probe. To a great extent, the same framework of a
dimensionless hero (in this case not a con man) applies to Nabokov’s famous
“Lolita”, Solzhenitsyn’s “The Cancer Ward” and “The First Circle”43, etc. One
can provide one’s own multiple examples of the same cognitive model: a
structureless hero-probe producing excitation of the medium in order to
exhibit its properties, as well as other archetypal literary schemes. The model
of biological survival by Daniel Defoe (“Robinson Crusoe”) has a clear message
of creative individualism against the forces of nature. Witty fantasies by
Jonathan Swift and Lewis Carrol were pointed narrative models mirroring the
contemporary society and patterns of human behavior. More obvious as
magnifying tools of some pieces of social and physical reality are the science-
fiction works: they are just physical (e.g., in stories of Jorge Luis Borges) and
in some cases even mathematical models (such as the famous Bradbury
butterfly, see Chapter 4) expressed through specially constructed human
situations. Physical models of A. Clarke or R. Heinlein (in particular,
“Skyhook”) have been sources for serious scientific investigations.
One of the obvious deficiencies of cognitive models is the tendency to
operate with poorly defined notions. Take, for instance, models in psychology
(see, e.g., http://www.psychology.org/), even those where mathematical
symbolics and techniques are beginning to enter the scene. Such terms as
conscience, subconsciousness, emotion, attitude, perception, self-identity,
personality, mental health, and mental disturbance are constantly used, but
hardly any psychologist - applied or even academic - can provide a fair
definition for all these terms with which she/he so readily operates. And even
when such a definition is offered, there is no guarantee that it would be
accepted by fellow psychologists. Take any of the key notions, for instance,
conscience. Using this term, we do not understand what it is and are referred
to vague intuitive images. Such branch of psychology as psychoanalysis
operating with specially constructed models of “personality” seems to be
especially unsatisfactory from the scientific point of view, primarily because
of relying on essentially unobservable notions (such as superego, “id”, etc.).
41 Even a small planet (registered under the number 3668) that was discovered by
a Soviet astronomer L. S. Karachkina was named “Ilfpetrov”.
42 Gogol himself named this novel “a poem”.
43 The 1970 Nobel Prize winner A. I. Solzhenitsyn, as a former mathematician,
seemingly used quasi-scientific methods to create his works. He used to dissect the
source material into tiny pieces, reassembling them afterwards under the guidance of a
leading idea.
2.13
Cognitive Models
91
Current interpersonal psychoanalysis is thus inconsistent with physical or
mathematical techniques. Nowadays, there exists “global psychology”,
attempting to explore large-scale phenomena in psychological terms, with the
entire human population being taken as the object of investigation. Such
psychological texts may be amusing to read, but their positive value is highly
questionable. Can one predict or calculate anything with a prescribed
accuracy using psychological techniques?
There is nothing wrong in using poorly defined notions in everyday talk,
but this talk does not claim to be a science. It is just a means to state a person’s
position or to pass simple messages. Ambiguity and lack of precision are
partly compensated for by the redundancy of human speech. It is these three
qualities: ambiguity, lack of precision, and redundancy that make human
languages unsuitable for computer programming, and one has to invent
artificial machine languages dedicated for coding.
With further development of neurological science and machine learning,
one can notice an attempt to apply nonlinear computational methods to
cognitive processing in order to understand psychological functions of
different areas of the brain. In this sense, cognitive models are frequently
distinguished from conceptual frameworks; the latter largely adhere to
natural language (verbal) descriptions of theoretical assumptions. [173]
Neuroscience was always struggling to understand the unique ability of
the brain to perform instructions controlling human physical and mental
behavior. However, experiments on humans are rather costly and difficult to
replicate. Artificial neural networks (ANN) aimed to resemble the structure
of their biological prototype. Just like biological neural networks, ANN consist
of several layers of neurons with processing nodes, the so called neuronodes,
arranged in layers. After receiving the external data, nodes on the input layer
pass signals to the output layer responsible for problem solving. The ANN
training algorithm analyzes collected data sets asserting weight to neurons
depending on the error rate between target and actual output, thus
generating communication patterns. [174]
Benefits of cognitive process computation are manifold, especially in
those areas involving large amounts of data, such as natural science e.g.,
biology or medicine. Health data are exploding, expected to grow by 48
percent annually, already reaching over 150 Exabytes in 2013. If the trend
continues, (and there is no evidence that it will stop), no human brain alone
will be able to cope with such an avalanche. “The ability of Artificial Neural
Networks for “self -training” repeatedly comparing input and output data
using “deep learning” algorithms proved to be effective in various branches
of medicine e.g., cardiology, neurology, and oncology, where physicians have
to deal with complex images containing multiple physiological data. In order
to achieve a degree of complexity necessary for these applications, deep
learning systems always need to process a large number of nodes and
nonlinear components in certain cases surpassing the analytical capabilities
of a human brain. However, the technology is mostly applied while dealing
with imaging and cognitive models clearly defined (e.g., stimulus-reaction).
For example, in image analysis, purely linear models are only capable of
92
Principles of Mathematical Modeling
recognizing straight lines and planes, but not curves, as explained in a recent
review of deep learning. [175].
Unfortunately for scientists, but perhaps still fortunate for humanity, not
all cognitive models could be computationally simulated having distinct
measurable variables. There will always remain “soft models” e.g.,
perceptions, biases and concepts based not only on theories and verifiable
assumptions, but on emotions, beliefs, religious dogmas and ideologies (the
latter are almost synonyms).
Therefore, cognitive models alone do not allow one to answer the
important questions which continuously appear like the heads of the
Lernaean Hydra. Is global warming real or is it a political hoax? Should one
build nuclear power plants? Does psychoanalysis deserve to be considered a
helpful medicine? Should one invest heavily in the research of the physical
basis of cognitive (mental) processes?
2.13.1
Religious Models
Since the emergence of the notion “exact sciences”, the role of religion in
science and technology has drastically decreased. Scientists, the best of them
being obsessed with scientific truth, have been inclined to assert that it is very
unlikely that God does exist - at least as a biblical character, and religious
claims that the universe contains God are not indispensable to explain natural
disasters such as earthquakes, hurricanes, tsunamis and the like. Introduction
of extensive mathematical modeling in every branch of science, hopefully
capable of predicting and analyzing such cataclysms, is going to make one
more serious blow to religious beliefs. The head of the Russian Orthodox
Church likes to claim that the earthquakes (in particular, the 2010 Haiti
earthquake) are the punishment for human sins. All traditional religions
typically associate such disasters with human misdemeanors. One might
recall in this context the death of Pompeii and Herculaneum in 79 A. D.
destroyed by volcano eruption allegedly as a punishment for massive sins.
Scientists tend to be less religious than the public in general. This is
natural, since scientists could observe (starting probably from Laplace) that
the God hypothesis does not help much to calculate, find experimental values
with an appreciated precision or produce engineering applications of science.
This hypothesis has nothing to do with mathematical or physical models, and
its “scientific” meaning is reduced to an attempt to verbally - i.e., without
including mathematical formalism - explain certain foundational issues
pertaining
to
the
world
around
us.
This
apparent
uselessness
notwithstanding, there are still a considerable number of scientists who
believe in God, and some are probably devout believers. Why are they not
atheists, if scientific knowledge seems to exclude religion - at least to the
extent that science declares the existence of God very unlikely? Is not it a
paradox?
One can explain this “schizophrenic” attitude of believing scientists by the
human ability to place contradictory concepts into non-intersecting domains
of conscience in order to evade a cognitive dissonance. So, a person may be a
deeply religious thief (examples abound) or a highly superstitious
2.13
Cognitive Models
93
intellectual. In the same way, a scientist may believe in God implying that it
does not interfere with her/his scientific activities. I, however, still think that
such split of conscience is a painful syndrome leading to substantial mental
disorders. But one can try to reconcile believers and atheists within the
scientific milieu. If the term “God” designates the presence of poorly
understood - “mysterious” and low-probability - connections between events,
then its usage is justified. Yet it is hard for an honest person to imagine God as
a biblical character, more or less anthropomorphic looking down at each
person and busy recording all minor sins. The idea that God takes a special
interest in our petty affairs seems to be the manifestation of an extreme
egocentricity. Such a model of God is most probably a product of the human
brain, the latter being a very complicated object constantly performing
simulations of the surrounding world. The models of God and other religious
characters form a class of simulations which may be studied, in particular by
scientific methods. In the process of such studies, one can find some results,
expected or not, for example, that religious rituals affect human health
indicators
(see,
e.g.,
https://www.oecd.org/els/health-
systems/49105858.pdf) for a given region or the distribution of wealth in the
population. Then the whole ladder of further questions arises such as what
exactly is the effect of religious rituals on health, do they influence the human
immune system, improve the transport of oxygen, or else? In case the wealth
distribution varies as a function of, say, frequency and intensity of prayers, one
can establish the optimal timing and tone. All this can be precisely measured,
and quantified, medical or economic prescriptions would be elaborated
without any mysticism. And this is not a blasphemy but a typical science with
practical output (see more on that in [178], chapter 2; see also [179]).
The rationalization of religion is in the well-known thesis: absence of
evidence is no evidence of absence. It is in general always very difficult to
prove an absence of some entity. That is to say, absence of evidence of presence
of God is no evidence of God’s absence. There is, however, an obvious logical
error here: if you assert that some entity does exist, it is your business to
provide the proofs of its existence. Otherwise, one can postulate anything, for
instance, that some invisible blue pigs are floating in the stratosphere and
produce global warming. So logically there is nothing wrong in being an
atheist, although in some cultures you have to be very careful with it. Crowd
effects often bring about forces directed against scientific truth, or simply
against truth. One can observe how the wake of religious fundamentalism, of
an uncritical faith, is instigating human aggression, setting people against
each other. Of course, this crowd effect 44 is used by politicians for their
purposes having nothing in common with scientific truth. But if scientists who
are often passionate about scientific truth still yield to the religious pressures
of the crowd, then it tends to result in a serious damage for a personality and
a society.
44 To have a feeling of the local crowd effects, one might ask an arbitrary number of
individuals, e.g., in Russia: “Are you a racist?” or in Germany: “Are you an antisemite?”
and the answer would be in most cases an indignant “No”, despite the multitude of racist
manifestations in Russia and antisemitic ones in Germany.
94
Principles of Mathematical Modeling
Thus, religion is not a rational concept, or a point of view based on
empirical information meticulously collected and calibrated in an objective
manner. It is not even an idea. Although being a product of brain simulation,
religion is primarily a passion, often a deep passion. One of the popular
justifications of religion is that it makes people happy, contrariwise to science.
“Will a person come to an academician when she/he is suffering? No, he will
come to a priest.” I have heard this sentence several times. This argument
makes religions similar to drugs: using them may also offer a temporary
consolation, which is of course deceptive. Likewise, in religious faith, one is
comfortable not with truth, but with dreams - the fact exploited by the
terrorists setting up suicide bombers. In this respect, “war on terror” is
effectively the war waged against the abuse of an uncritical religious faith,
although the politicians are reluctant to admit it.
Religious models are difficult to make compatible with science already
because they operate with undefined notions. Can one define, for instance, the
term “soul”? The concept “God” itself is hard to define, which has long been a
subject of theological discussions. The concept of God can symbolize human
weakness - this statement might sound banal, but it does not cease to be true
in spite of banality45. Specifically, in ancient times, the notion of God had
manifested a collective intellectual deficiency and attempts to resort to the
ultimate superstition instead of analysis. There exist in human history several
written formalizations of such superstitions, mostly taking the form of loose
collections of legends. One of the most influential such collections - the Bible
- is a set of beautiful tales, most of them directly contradicting experimental
facts of science. Even the Christians (who are today not the most intolerant
believers) still do not always accept the theory of evolution, the origins of life,
cosmological models and even some astrophysical facts. It is well known that
contradictions with observational data produced acute conflicts in the Middle
Ages and many outstanding people suffered, were condemned or murdered
by the political bureaucracy called the church. This is a typical exploitation of
cognitive - and occasionally even of mathematical - models by ideological
institutions relying on the conformist mainstream of mass conscience.
Nevertheless, the set of fancy narratives, partly childish, contained in the
Bible and other codified belief formats had been compiled by extremely
creative persons. One may notice that religion is not necessarily focused on
the belief in God, especially in its quasi-anthropomorphic form; it may be an
abstract set of hypotheses intended to describe the world and to express one’s
feelings about it. The real meaning of the Bible, for example, lies in social
models (which are abundant in this book) such as the collision of a small,
enlightened elite and a vast uncultured mass of oppressed population - a
conflict typical of the societies based on ideological coercion and “oriental
despotism”. There may be tolerant religions, but they are rather an exception,
and the representation of reality fixed in a dominating religion in the given
45 If something is banal, it does not necessarily imply that it is wrong. It may
be just simple. For instance, it may seem banal that one should exit through the
door and not through the window. Nonetheless, it is true and, in most cases, more
convenient.
2.13
Cognitive Models
95
society tends to be obligatory for its members. Even relatively modern
societies such as Russia can be clericalized to the extent of religious
intolerance “us” against “them”.
As already mentioned, the world we live in is, almost as in ancient times,
largely incomprehensible to the most of us, and the absence of any coherent
picture produces a profound frustrating effect requiring some psychological
defense. In such a seemingly inconsistent world where no easily
understandable order can be observed by an ordinary person, untrained in
physically sophisticated modeling, nothing is incredible. So, one could
without much cognitive difficulties accept the statement that, e.g., exactly 2
104 angels were sitting on the tip of a pin. Why not? According to the religious
concepts, the universe is populated by angels, after all. More and more people,
disoriented, backward, and self-opinionated are taught by the popes to vaunt
their ignorance as a virtue, like Roman matrons.
However, it is not at all obvious that religion helped humans to survive as
a biological species, spreading their DNA (if we assume that the propagation
of DNA is the ultimate purpose of the species’ existence). At an individual’s
level, religion may help dying but it interferes with living. “I do not want to
hear what is true, but only good news”, this is the motto of religious doctrines.
From the cultural side, if science can be figuratively called a kitchen of the
future, religion may be likened to a cellar of the past. As almost all ideological
doctrines, religion tends to preserve the yesteryear. This is natural because
religion and its self-appointed carrier - the church - claims to possess the
ultimate knowledge so it must use a selective filtering of scientific data to
ensure the invariability of this old knowledge. If a person believes in six days
of creation or that the Earth is flat and rests on something (three whales,
elephants, or tortoises), one cannot resist these beliefs, in spite of the fact that
they clearly contradict the evidence. There have recently been numerous
debates about teaching religious models at schools, and this would amount to
overviewing ancient beliefs about flat Earth, tortoises and so on. It would be
a very entertaining read, of course.
The fact that religious models of reality are close to superstitions may be
seen from the fact that almost all religious beliefs are labeled “superstitious”
by representatives of the rival religion. Omens, supernatural events, miracles,
the concept of afterlife, etc. - in other words, miracles that contradict the laws
of physics are present in all religions but addressing them in various religions
is different and cross-religious accusations of superstitions are commonplace.
If we take the Orthodox Church as an example, we shall see that it is tacitly
(and even openly) declared that all other Christian denominations, except
Orthodox, are not genuine churches. Even rival Orthodox flavors can be
anathemized. This is a manifestation of the fact that churches like other
ideological bureaucracies cannot get along with one another. Intolerance of
extraneous and new ideas is a key signature of the church who always claims
to possess the ultimate truth. Whereas bundled antimodels such as astrology,
mysticism or postulating immaterial parts of a person such as soul, spirit,
ghosts, etc., which are distinguished from physical objects and biological life,
96
Principles of Mathematical Modeling
are propagated by hostile ignoramuses, religious models are mostly imposed
by obscurants.
2.14 Science and Arts
Most people can observe the crucial difference between the practitioners of
the arts and the sciences, pointed out by a prominent British writer C. P. Snow
in his famous book “The Two Cultures” [171]. Honestly speaking, I don’t quite
understand why this book has stirred such a controversy: it contains quite
obvious observations and some mild criticism of the education system. The
main idea is that cultural differences between “artists” and “scientists”
obstruct the solution of the world problems. “Artists” take it for granted that
there is no absolute truth, for example in literature, painting, music, etc. The
public and other artists, the peers or the reference group, make value
judgments which can be modified with time, so the truth in “arts” is only
relative.
“Scientists”, in contrast, are inclined to loathe the concept of relative truth
and it is considered suitable in the scientific milieu to hope that there is much
more in science than public opinion or even peer approval, no matter how
influential, predominant or well-intentioned this opinion may be.
It is not so easy to apply the category of correctness to the conflict of these
two groups. Both may be consistent within their own cognitive frame of
reference. There are of course extremists on both sides, similar to religious
extremists, who can be infuriated by the opponents’ views. In this extreme
case there is no communication between the two camps. Each of us can
attribute herself/himself to each of these large groups: either to the camp of
arts and humanities or to the one of science and engineering, irrespective of
profession or place in the society. However, the thesaurus accumulated
during the lifetime is important, and it is this load that determines as a rule
the respective frame of reference. When I describe a particular mathematical
model or an experimental fact, I presume by a prospective reader an a priori
understanding of terminology, basic ideas, and assumptions. It saves time
because I don’t have to provide verbose preliminary explanations. But the flip
side of the coin is that the same thesaurus hinders the understanding of how
the world works. One cannot readily apply this knowledge to other important
subjects, and the ability to accumulate new knowledge is limited, e.g., by the
memory capacity and available time. So, the knowledge gaps are natural.
It would be still completely respectable to say, even among the people
who are considered educated: “I know nothing about physics, and I never
liked it” or “I can’t do math”, but it is only seldom that one would dare saying:
“I have never read any novel” - that would single out such a person as dumb.
In the popular science journals, mathematical formulas are typically
forbidden in order not to frighten lay public, which would reduce the
circulation46.
As I have mentioned, it would be difficult to prove in general that nature
is rigidly governed by some impersonal laws, and at least these laws, if those
46 Biological schemes, in my view, are much more complicated, but they are allowed.
2.15
Physics and Philosophy
97
exist, are not necessarily condensed in a simple mathematical model like, for
example, Newton’s law or the Schrödinger equation. The “objective reality” is,
unfortunately, a philosophical belief and as such it can hardly be taken as a
guiding principle for science. At least this belief should not be a sacred cow of
science. From here, it would be just one step to “hidden parameters”. One
could see in many experiments, and not only in physics, that the observer and
the observed can be interdependent. This very fact leads to the idea that
knowledge is a construction created by the observer, e.g., in the form of a
mathematical model, rather than an unveiling of a pre-existing hidden reality.
It is, by the way a perennial controversy among mathematicians: is
mathematics a purely mind construction or does it discover some hidden
facts. I don’t think all this is important. Good science, to my understanding,
should not be opinionated and a priori dismissing, but always prepared to
amend the views of the world by continuously creating new models induced
by new observations. Imposing the hidden reality principle as opposite to the
“artistic” view of cultural relativism seems to be a poor defense for “scientific
truth”. It is totally unnecessary because perpetually accommodated facts and
constructed models do not depend on philosophical beliefs - is there an
objective absolute truth or not. It is known that P. A. M. Dirac criticized his
colleague J. Robert Oppenheimer for his keen interest in poetry: “The aim of
science is to make difficult things understandable in a simpler way; the aim of
poetry is to state simple things in an incomprehensible way. The two are
incompatible.” [14]
Although physics and mathematics are very important sciences, there
exist also other disciplines, in particular those that are located in the area
between science and arts. These disciplines, such as medicine, architecture,
sociology, computer science47, to some extent biology, have also accumulated
rules, principles and models which are often called scientific laws within
respective communities. These laws, however, distinctly reflect cultural
influences and there is no certainty whatsoever that these disciplines will
eventually become free from the cultural load. It does not mean of course that
one should incorporate anti-science such as voodoo, astrology or creationism,
support “proletarian” rejection of genetics and quantum mechanics or study
Lysenko’s biology because of the pronounced element of cultural relativism.
Science may be neutral with respect to philosophical beliefs, but it can hardly
be “naked” with respect to social influences.
2.15 Physics and Philosophy
The most appealing cognitive models take the form of verbose philosophical
discussions. Honestly speaking, I dislike philosophy and fully agree with the
aphorism often ascribed to L. D. Landau that physics ends exactly where
philosophy begins. Nevertheless, I must admit that some philosophical
systems or generalizations can be quite clever. For example, I enjoyed reading
Karl Popper [141], partly H. Reichenbach [58], and when philosophical
47 Software engineering, for example, in contrast to “hard” sciences, incorporates and
accentuates a considerable human component.
98
Principles of Mathematical Modeling
concepts are essentially based on hard facts and records rather than being
merely speculative spiritual products, they may become a source of
inspiration for more precise research. An example is the philosophy of history
by A. J. Toynbee [142], sometimes harshly criticized by his fellow historians,
but still a forerunner of a modeling approach to history, sort of a theoretical
history. Apart from some brilliant think-stars - mostly due to their eloquent
style - the rank-and-file philosophers do not produce anything interesting. At
least, however hard I tried, I could find nothing attractive, on the Internet or
otherwise. Having grown up with obligatory Marxist-Leninist philosophy,
which is just a dubious form of ideological aggression, I thought at first that
free-thinking Western philosophers, unavailable or even forbidden in the
Soviet Union, would be less prejudiced and offensive. I was bitterly
disappointed, especially in the philosophical discussions of scientific
concepts. I found that the typical style of such discussions was a mixture of
arbitrary assertions and pompous terminology. I also discovered pretty soon
that it is probably the custom to make philosophical polemics as intellectually
aggressive as possible, which reminded me of good student times in the USSR.
I am not going to give references, although it is not difficult despite the fact
that I am not a professional philosopher, but each time I encounter
philosophical discourse of scientific issues I have a feeling of being submerged
in some viscous medium. Scrolling philosophical papers, one can see that
philosophers typically dislike mathematical models, favoring texts “not
contaminated” with mathematical expressions. This makes polemics
confused and ambiguous. To me, a right mathematical model, simple and
clear, speaks for itself, without the need of verbose philosophical defenses.
“Some people speak from experience, others, from experience, don’t speak”.
A witty person - not a very frequent quality amidst professional
philosophers, Bertrand Russel once said that philosophy lies between science
and religion, being attacked from both sides. One of the typical reprimands is
that philosophy and cognitive models in general may be only good for writing
papers, they do not result in any products - it contrasts to, for example,
numerical simulations. This is simply not true. Recall, for instance, such a
bestselling product as “Civilization”, a very successful computer game entirely
based on the historical philosophy model by Arnold J. Toynbee. It would be
difficult to assert that strictly technical products such as those stemming from
discretization of equations more stimulate the advancement of science and
technology than, e.g., cognitive representations of reality48.
2.16 Prognosis
Many people are seduced by an apparent simplicity of making forecasts.
Every wild and speculative projection is a model of the future. Unfortunately,
most forecasts of the future usually take the form of unconditional qualitative
statements without indicating their accuracy. In such a form, they are just
irresponsible cognitive models. Even outstanding scientists have produced
48 I dare to recommend in this respect my own article in the Russian journal “Priroda”
(Nature), No. 6, 1997, p. 43-48 (in Russian).
2.16
Prognosis
99
odd and erroneous forecasts. Thus, Lord Kelvin (Sir William Thomson) was
known to predict in relation to the Marconi experiments in 1897 that “radio
has no future” or, while being the president of the Royal Society, that “heavier-
than-air flying machines are impossible” or that “X-rays will prove to be a
hoax” (see, e.g., http://zapatopi.net/kelvin/quotes/). These curious facts
indicate that making prognoses is a difficult business. A witty sentence
attributed to Mark Twain emphasizes the hardships of dealing with the
future: “The art of prophecy is very difficult, especially with respect to the
future” (see, however, http://bancroft.berkeley.edu/MTP/.)
Prognoses border on science or, rather, antiscience fiction. They are
almost always extremely inexact and almost never try to establish their error
margins, which is much worse than inexactitude. Unless one deals with more
or less well-understood phenomena such as, for example, that heat will flow
from a hotter to a colder body until the two bodies get the same temperature
there seems to be no general scientific principle for prognostic activity. One
might justifiably ask about prognoses: on what principle can they be found? I
would answer that prognoses are always based on the same thing: clear
understanding of the process which evolution one is trying to forecast. In
other words, you must have a firm model of at least a non-contradictory
hypothesis about the causal links in this process i.e., what consequences may
be expected if certain strings are pulled. Besides, one must accumulate hard
facts, not just opinions nor even expert considerations, and be able to
quantitatively assess those facts, in particular using mathematical techniques
on statistical data. The latter are, in most cases, available, e.g., in economics or
social studies, but one usually lacks a good model.
It is in this lack of reliable models that the problem with trustful
prognoses may be located. Regretfully, since time machines and crystal balls
are not produced in ample quantities so far, there is currently no other way
to gain insight into what is lying ahead of us than mathematical modeling,
albeit with highly imperfect models.
One might notice two major things about prognoses: firstly, sophisticated
statistical methods and software codes seem to give no advantages over
intuition; secondly, there is no responsibility for making false forecasts. This
is especially obvious in ecological catastrophism. I recall that around thirty
years ago doomsday forecasts were abound predicting acid rains over the
entire Europe, perishing forests, and steppe-forming everywhere. Who cares
now about acid rains? Carbon dioxide projections are now on the agenda. The
dire prognoses centered around the anthropogenic global warming (AGW)
concept predict that many countries will suffer from either terrible heat
waves or terrible cold, that Europe is likely to experience very low winter
temperatures, and humans will be on the verge of total extermination. There
are many attempts to substantiate such prognoses with computer models, but
unfortunately the value of these models is indeterminate since there is no
clear understanding of the physical processes controlling the evolution of the
climate system.
Demographic prognoses: can you say with - not certainty but at least
some convincing probability - what can we anticipate, demographic collapse
100
Principles of Mathematical Modeling
in many countries or inadmissible population explosion? As far as
sophisticated scientific techniques vs. human intuition in forecasts go, it
seems to be an interesting feature of the European culture that only numbers
- no matter how inaccurately evaluated - are accepted, which automatically
discards intuition as a “serious” forecasting method.
Overcoming the shortcomings of commonplace models is a long process.
Typically, during half a life we are building stereotypes, clichés, and
dogmas - almost automatically, without paying much attention to it - then we
find out that the accumulated stereotypes begin constricting us, and finally,
when they collapse, we feel relief and are happy to be free. Stereotypes
prevent us from building bold models on which prognoses can be founded.
Besides, one tends to confuse adherence to stereotypes with common sense.
Moreover, science has become so dispersed that it is difficult to imagine
successful forecasting results in the real - transdisciplinary - world.
Prognostic activities seem to be horizontal (a philosopher might say “Locke-
like”) rather than vertical (“Hobbes-like”), within a specific discipline. Making
intelligent prognoses requires a lot of knowledge and may appear
unacceptably high-brow for a great number of people who believe in
astrology and divine, e.g., Bible-based, creationism. Politicians, in their
continual pursuit of gaining popularity, use to attract these people’s voices to
promote the solutions having unfavorable impact on future development.
Should one construct nuclear power plants? Can astrology be taught at
schools? Should one use genetically engineered food? How can governments
regulate nativity (birth rate), if at all? There are many important issues having
a direct impact on the future, about which people cannot form intelligent
judgments if these people are scientifically illiterate. It means that prognostic
activities, unless they are restricted to a narrow circle of bodies which may be
biased in favor of the technologies they have been supporting or developing,
would need to increase the general level of scientific education.
The most interesting thing for the people who long for prognoses is their
questionable usefulness. There exist customers for whom long-term
prognoses, extending over one thousand years, are of vital practical
importance. For instance, there is an urgent problem of radioactive waste
management and disposal, especially of high-level waste such as spent
nuclear fuel. Some of the radioactive elements contained in spent fuel have
long half-lives, for example isotope Pu-240 has the half-life of 6800 years and
the half- life of Pu-239 is 24000 years. Besides, plutonium is highly toxic
chemically. Therefore, radioactive waste containing spent nuclear fuel must
be isolated from the environment and controlled for many thousands of years.
Ideally, an impenetrable barrier of radiation protection must be placed
between high-level radioactive waste and the biosphere, which would be
intact for such a long period. Nuclear waste is supposed to be buried in special
mines (disposal in crystallite rock) or salt mines as in Gorleben, Germany, and
one must be sure that nothing will happen to these sites in several thousand
years, e.g., their walls will not be destroyed due to intensive heat and
radiation released by decaying high-level waste. If, in other example, the
nuclear waste repositories are constructed in permafrost zones, one must be
2.16
Prognosis
101
sure that permafrost does not melt in 103 - 104 years. In general, determining
whether the site would be suitable for permanent disposal of the high-level
waste is a very challenging scientific and engineering problem.
Many now fashionable dogmas may become obsolete in the near future.
The same will probably apply to today’s specialized training such as in
computer science and information technologies (IT). The currently precious
degrees and certificates which required years of hard work may be declared
worthless and give no advantages for available jobs49.
Furthermore, attitude to prognoses drastically changes with time. In the
13th-14th centuries, people who tried to make prognoses were declared
witches or warlocks and burned alive. Now such people exhort from nearly
all TV channels. People who make plausible forecasts, e.g., of oil or stock
option prices, earn a lot of money irrespective of the accuracy of their
prognoses. Prognoses over 1-10 years are extremely important for economies
and highly politically charged so that the scientific component in such
prognoses may easily succumb to political pressure. Moreover, politicians
typically use prognoses for their purposes: an obvious example is climate
forecasts. Climate is a complex physical system, and its space-time variations
must be studied by scientific means. However, in this issue political interests
take over. For example, the well-publicized Kyoto Protocol is a political action
based on presumption that global warming of the near-surface atmosphere
is, firstly, a proven fact and, secondly, is only due to human industrial
activities. The fact that the climate has undergone many noticeable variations,
comparable or exceeding current changes, during the existence of the Earth
is ignored or rebutted as willful accusations - largely on political reasons.
In reality, there exist powerful natural factors affecting the climate, and
human-induced changes in the gaseous composition of the atmosphere are
not necessarily the dominating ones. Unless one makes serious modeling of
physical processes, people will never get ahead of climate changes - one can
only be reactive. However, current projections of the future climate are
largely a matter of faith. One must admit that today’s science knows too little
about the Earth’s climatic system to make the models which are built today a
reliable
foundation
for
meaningful
predictions.
However,
making
catastrophic prognoses is a sheer pleasure, a sort of a sweet candy. Doom,
nemesis, downfall have their trade cycle, and fears can be better sold (recall
Y2K or the recent - unsuccessful - panic about the black hole formation during
the LHC experiments). Yet the catastrophes, to our great luck, seldom
materialize. As a rule, non-cataclysmic, routine development is statistically
more probable. So, prognoses are closely connected with risk assessment, and
the word “assessment” implies a quantitative approach. Here, I would only
like to remark that there are no zero risk endeavors. One can only compare
the risks.
In the world of physics and mathematics, prognoses are obtained as a
statement having the form IF ¡conditions¿ THEN ¡prediction¿ i.e.,
49 Who remembers nowadays the Novell or Digital certificates highly praised at the
beginning of 1990s?
102
Principles of Mathematical Modeling
as an answer to a problem corresponding to some mathematical model.
Therefore, prognoses can be only as accurate as the model employed and
input data used, and one should always interpret the predicted values in the
context of conditions. The situation is quite opposite in the social disciplines
and political space. Here, the hidden dangers of exceeding the validity of a
model or of unduly extrapolated results are typically ignored, and prognoses
usually have an unconditional character. For instance, the language of
environmental catastrophism is used to proclaim a universal prophecy,
physical conditions notwithstanding.
The ability to predict future behavior and even events depends on our
understanding of the laws determining such behavior. If we restrict ourselves
to physical phenomena and natural events, then one can speak of physical
laws. Nevertheless, even for some physical systems which are subordinated
to well-understood physical laws the long-term prediction can be impossible,
as in the case of chaotic systems. Here, one usually brings the example of
weather forecasts illustrating the impossibility of a reliable long-term
prediction because of intrinsic instability, but there are other important
examples whose value is often underestimated. As mentioned above, it has
been lately of fashion to make alarmistic predictions of the imminent climate
catastrophe stating that the Earth surface temperature will rise by 2-10
degrees centigrade in 50 years exclusively due to human activity50. Based on
such predictions essential political decisions affecting the lives of many
million people have been made. Many European politicians are probably sure
that industrial development directly leads to occasional summer heat waves
and because of the excessive number of cars in private hands there is not
enough snow at ski resorts. The truth, however, is that the climatic system,
being influenced by a lot of factors outside human control, is at least as
unstable as the weather. It means that the quality of long-term climate
forecasts is by necessity very low and should be carefully scrutinized by non-
engaged experts before being put as a starting point for political moves.
Certain forecasts51 are bordering on dreams, for instance, intergalactic
space travel or time loops. Such concepts belong to the realm of science fiction
rather than science - at least at today’s level of knowledge. The prognoses in
the real world are limited by what one can expect for the future, based on
scientific extrapolations. Nevertheless, most of the prognoses take the form
of an informal study - not in the form of mathematical models - of a likely
future development. Interestingly enough, this lack of mathematical
description relates both to “soft” issues (e.g., social unfolding) and
technological development.
There may also be “chaotic” trajectories or at least parts of trajectories,
when the future or some of its elements such as technological development
50 I failed to find explicit mathematical relationships giving these figures as an
outcome, so I do not know the accuracy of such prognoses. Is the uncertainty 10 percent,
100 percent or not quantified at all?
51 The term “prognosis” sounds more scientific than “forecast”, the latter invoking
associations of clairvoyants and other charlatans; however, I do not see a substantial
difference between the two terms and use them interchangeably.
2.16
Prognosis
103
cannot be envisaged in principle. Such parts of world trajectory start after
passing a singularity in the space of parameters determining the world as a
dynamical system. However, I am not sure that such a space of parameters
and, in particular, a phase space can be correctly stated for the biological
subsystem of the world i.e., the biosphere. Of course, one can formally define
the phase space of all the positions and momenta for all N particles
constituting the biosphere, but this physical procedure would not have much
sense. Firstly, it would be an uncontrollable idealization since in such a
construction a closed classical system would be assumed (considering an
open quantum system operating with myriads of particles involved in
biological processes would probably be a hopeless task). Secondly, one can
hardly proceed further from formally defining a phase space since there seem
to be no criteria to select the subsets of biologically relevant collective
variables (similar to elementary excitations in inorganic physics) which
mainly determine the evolution. So far there is no algorithm for biological
dynamical systems in terms of microscopic variables. Already this limitation
of knowledge, which may have nothing to do with instabilities or chaos, leads
to essential unpredictability in foreseeing the path of the biosphere (a subset
of the total world trajectory). Thus, unknown species may emerge. A very
similar situation can be observed in the technological evolution: the standard
example is the Internet. Could professional futurologists predict its explosive
development fifty years ago?
Nonetheless, making forecasts can be a lucrative business if these
forecasts do not contradict current political attitudes. Of course, the question
whether the politically motivated prognoses have anything to do with reality
is totally irrelevant. For example, in already mentioned climate forecasts
there are billions of dollars endorsed by decision-makers and invested in
“independent” research as a reward for the scientific backup of political
moves. It may be very tempting to make forecasts, but when one dares to ask
about their accuracy the predictor is usually abashed and does not know how
to reply.
One may count five mainstream directions of science and technology
development in the 21st century: (1) information and communication
technology (ICT); (2) biotechnology (BT); (3) nanotechnology (NT); (4) space
research (SR); (5) nuclear power and fusion (NPF). Since all of these sectors
are technologically oriented, one must confess that there is not much room in
the 21st century for great individual scientists, their time seems to be gone.
Besides, the general public displays less and less esteem to individual minds,
this phenomenon, together with rising infantilism and what might be called
Africanization52, can be observed everywhere in the world. Technological
52 By Africanization I understand not only the intensive immigration from Africa to
Europe, but the entire emerging system of values, to some extent opposite to the so-
called protestant ethics. Once in hot summer, I saw a barefooted university teacher in
the canteen area, and nobody seemed to be astonished. The trend in dress is exactly
opposite to that which dominated, say, in 1960s or 1970s when a person felt
uncomfortable without coat and tie at a conference (see old conference photographs).
Today, it is perceived almost abnormal when a person attends a conference wearing
104
Principles of Mathematical Modeling
evolution fosters not Einsteins, but efficiently managed scientific-
technological collaborations, and indeed such groups are taking over.
Macrosocially, creeping socialism in the 21st century, infantile striving of the
population for governmental control, and increasing bureaucratization,
alongside with higher taxation and lessened personal freedom are favorable
to the development of group science versus individual achievements.
2.17 Some Tricks of the Trade
What mathematical techniques should we use for modeling? The common
modeling triad is: constructing, solving, and interpreting the results. For
constructing a model, usually elements of the systems science are employed,
e.g., basic concepts of dynamical systems, control theory, linear systems and
the like. For solving, simple mathematical methods such as calculus, linear
algebra, and theory of differential equations are predominantly used. More
sophisticated mathematics, honestly speaking, is rarely needed; more useful
techniques than, say, working on complex manifolds seem to be model
reduction, e.g., by dimensional analysis (which is, however, a peripheral part
of the group theory). With the advent of more computer power, the principal
art of model building has become to determine what is needed to synthesize
an insightful computer simulation. This involves a careful selection of
relevant numerical methods, designing or utilizing an algorithm, doing the
programming and using Web resources.
One may notice that all the so-called laws of nature have the form of
equations. It is, in principle, not at all obvious that equations must play such
a principal part - laws of nature might be expressed as inequalities, long codes
or concise verbal statements. But they are, to our current understanding,
expressed by equations which may be considered a hint at a primary pattern
for efficient mathematical modeling. So, assume that we must select some
equations to describe processes or phenomena. As a rule, real - not too much
idealized - systems and processes are described by nonlinear equations, e.g.,
nonlinear partial differential equations (PDE) with respect to time and spatial
coordinates. Such systems are distributed in space and correspond to an
infinite number of degrees of freedom. Nonlinear PDE are usually very
complicated and only very specific types of such equations admit observable
analytical solutions. However, models - by their very definition - can be
drastically simplified, for instance,
by considering first spatially-
homogeneous situations. In this case, the equations modeling the system do
jacket and tie. Although the tie is in general a redundant attribute, I prefer the jacket-
and-tie style to the T-shirted one in a conference room. Besides, this old style appears to
be more compatible with scientific discipline, e.g., in performing calculations, than the
“creative” T-shirted eccentricity. Africanization promptly intrudes into people’s lifestyle;
it aggressively manifests itself in numerous love and gay parades, overwhelming
popularity of rhythmical recitatives, expressive collective dances, and whining tremolos.
I have noticed that young people with tattoos and piercing are met more often than
without such symbols of infantile exhibitionism. Key notions of this Africanized
colonization of Europe are “fun” and “sexy”, both symbolizing hedonistic protest against
the parochial protestant ethic, boring but productive.
2.17
Some Tricks of the Trade
105
not contain spatial derivatives and become ordinary differential equations
(ODE-based models). Such a system is called point-like or having null-
dimension (0d). In other words, it is the transition to homogeneous (point)
models or, more exactly, ignoring spatial distribution of quantities, 𝜕𝜕𝐫
⁄
= 0,
that leads to ODE instead of PDE. A good example of such reduction to ODE-
based (0d) modeling is the treatment of atmospheric processes in terms of
radiation balance and global near-surface temperature dynamics (see
Chapter 10). It is always a favorable situation when one can reduce the
equations to the most primitive form, retaining the essence of the model.
Many equations of motion for classical mechanical systems have the form
of ODE. Such systems are point-like or lumped. In modeling lumped
mechanical system with ODEs, each degree of freedom is described by a
second-order ODE. Differential equations of the first order correspond to 1/2
degrees of freedom. Here one must be careful, since in the theory of dynamical
systems the notion of a degree of freedom may be defined differently: each
degree of freedom corresponds to a phase space coordinate (see Chapter 4 for
more details). For example, instead of denoting a practically important case in
chaos theory as having 1.5 degrees of freedom one can say “a system having
3 degrees of freedom”.
The equation 𝑑𝑥𝑑𝑡
⁄
= 𝑓(𝑥, 𝑡) gives an example of a dynamical
system, i.e., the one whose behavior is uniquely determined by its initial
state (deterministic behavior). Strictly speaking, one may also consider such
dynamical systems whose future states are not uniquely determined by the
evolution of initial ones, but we shall not discuss them in this book.
What are the typical ways to simplify mathematical models provided they
have been written in the equation form? The usual tricks of the trade are:
•
disregarding small terms (in fact expansion in power series)
•
using small or large parameters (as a rule, it is an asymptotic
expansion)
•
replacing geometrical forms by more symmetrical ones
•
substituting constants for functions (in fact applying the average
value theorem)
•
linearization
•
scaling
•
transition to dimensionless units (a special case of scaling)
•
discretization and introduction of lattices (lattice models)
In this book, we shall employ all of these tricks to simplify the problems.
Now, for a brief demonstration, let us see how, e.g., scaling works. If the
modeling situation is rather complicated and one has no a priori idea what
system of equations can describe it, one may choose from intuitive physical
considerations some parameters that could in principle determine the
quantity we are primarily interested in. One can assume that such an
106
Principles of Mathematical Modeling
interesting quantity has some physical dimensionality (this is an overloaded
term, please don’t confuse it with the dimensionality of mathematical space)
and can be represented as the product of undefined so far powers of
parameters determining the interesting quantity. By equaling the exponents
in the dimensions of the quantity we are interested in and in its expression as
a product of determining parameters, we obtain some simple system of
algebraic equations specifying the dimensions. This description of a simple
trick sounds a little abstract, but later I shall show on examples how it is done.
The most salient example is probably the so-called point explosion problem
solved by J. von Neumann [299] in the USA and L. A. Sedov [300] in Russia53
using just this method. The principal idealization in the point explosion
models is the assumption that the energy release is instantaneous and in
dimensionless point (explosive material has zero mass and occupies zero
volume). One can find the solution details to the point explosion problem in
the textbook by L. D. Landau and E. M. Lifshitz [85], §106.
The dimensionless form of mathematical models plays a somewhat
special role among other tricks: the matter is that numerical values are
independent of measurement units. Thus, one can consider specific cases of
the model just by choosing numerical limits and not by comparing physical
quantities. This fact is especially important in numerical modeling: in
numerical techniques being applied to dimensionless models, one can readily
neglect terms that are small in numbers as compared to errors in other terms
(but one in general cannot disregard terms that are small compared to other
terms). And we know that estimating errors is essential for validating the
model.
Historically, scaling and dimensionless combinations appeared first in
the problems of heat and mass transfer, these problems represent complex
interactions of thermodynamical, fluid dynamical and electrodynamical
processes. In the so-called multiphysics processes in general, such as the
study of turbulence, multiphase system dynamics, scaling and dimension
analysis is a powerful heuristic tool. For example, in the problem of high
practical importance such as Chernobyl heavy accident modeling (motion of
the fluid with internal heat sources) scaling considerations and dimension
analysis alone enabled one to produce tangible results (see [54]). The well-
known computer code “Rasplav” designed to model heavy nuclear accidents
was based to a large extent on scaling and dimensional analysis.
This chapter could have carried the title “Models of the World and the
World of Models” because the set of mathematical or other cognitive
representations reflecting the “reality” are forming the parallel world which
sometimes appears totally different from the world we live in. Had we limited
ourselves to just the evident features as ancient people had, we would be
satisfied with the picture of the flat Earth (everyone knows how difficult it is
to walk over a ball’s surface!), small Sun rotating around the Earth (it is but
53 As well as by J. Taylor in USA and K. P. Staniukovich in Russia.
2.17
Some Tricks of the Trade
107
totally obvious!), little shiny stars, and the God who made all the things that
we see around including ourselves.
Luckily for the species who call themselves “human sapiens” due to
developed capabilities for abstractions, the perception of the Universe is not
so direct: it is mediated by mental images being eventually transformed into
models of the world, the most advanced of which are mathematical ones.
Models reveal hidden mechanisms of the world allowing one to understand
and exploit them. Some of them are so deeply hidden that they initially induce
massive protests (quantum theory, relativity, and genetics are standard
examples). Nevertheless, the achievements of technology are mostly due to
such “crazy” models which were perceived at first as fantasies (Maxwell’s
electrodynamics, quantum mechanics, genetic engineering, etc.).
Mathematical modeling teaches us how we can rely more on
mathematical techniques rather than on gut instincts. At the same time some
mathematical modeling is intended to quantify things which typically are not
quantified.
Science in general works like brain - by model-fitting. The brain
constantly fits models which are, as we have discussed, crude and partial
representations of reality to the data obtained from the outside world.
Paradoxes and discrepancies 54 provoke people to invent new models,
many of them are still awaiting their mathematical polishing (e.g., climate,
social and economic systems).
54 One can recall such well-known problems as those of the cosmological constant (see
Chapter 9) or biological evolution which stimulate scientists to constantly generate new
theories.
108
Mathematical Potpourri
3 Mathematical Potpourri
In this book, only that portion of mathematics which is directly - in some
rudimentary sense - connected to physics is discussed. The preceding
sentence is an example of a statement deprived of any exact meaning. I try not
to be afraid of making such statements - intuitive and logically irresponsible.
Sometimes vague formulations invoke vivid associations and provide more
cognitive freedom than rigid pedantry. It is a wonderful delusion that
mathematics is based on strict axiomatic foundation. Try, for instance, to
construct a fair definition to the notion “trapezium”, and you will find out that
there is no consensus among professional mathematicians even about such a
simple object. It is well known that the fundamental notion of set is not
defined properly. The same applies to the not less fundamental notion of field
(in the physical sense), of space and time, spacetime, and a great lot of other,
intuitively embraced notions. There are a lot of verbal constructions in
modern physics which are not defined at all and can be understood only at
the hand-waving level. In classical physics, the term “pseudovector” - a very
useful notion - hardly has a precise definition. The list of poorly defined but
widely used concepts is rather long, and we shall have to discuss a number of
them, so I am prepared to take the criticism of “verbosity” and “philosophy”.
And although, frankly speaking, I like hard technical definitions, I try to evade
algebraic gymnastics in order to put an accent on physical systems and not on
mathematical structures. Thus, only the facts and statements relevant to
physics will be presented in this introductory mathematical chapter.
One can easily find an extensive treatment of the subjects touched upon
in this introductory chapter in a large number of sources including those
Internet-based. As a matter of fact, only basic definitions and notations used
throughout the book are presented here. Some well-known mathematical
facts to be used further in the text are only briefly mentioned, with the
references for a more detailed acquaintance being provided. In principle, this
chapter is not indispensable for further reading.
Another goal of this chapter is to overcome rather acute cultural
differences between the mathematical and physical communities and to share
an informal understanding of the mathematical subjects I happened to deal
with. Although the subjects I propose to discuss in this book may be only
tangential to the main matters dealt with by professional mathematicians,
these subjects would enable us to handle physical (and related to physics)
models contained in subsequent chapters in a free and flexible manner.
Classical mathematics has always been a language and an intrinsic part of
physics, and many eminent scientists, even mathematicians, for example V. I.
Arnold, think that it is a pity that mathematics revolted and diverged from
physics in the second part of the XX century. Nowadays more and more
mathematicians consider mathematics to be totally different from physics. In
this chapter, I shall attempt to revert this trend and discuss mainly those
mathematical techniques that are relevant for physics. I tried to evade
2.17
Some Tricks of the Trade
109
formally correct but long, boring and not fully physically comprehended
mathematical derivations, with formulas used mainly as illustrations serving
to clarify the whole picture of the modeled phenomena. Such an approach
reflects the general philosophy of physmatics where math is considered as a
service subject. Bearing some risk of being reprimanded for triviality, I tried
to explain mathematical formulations in detail, unless they were really very
simple.
Since I wrote this book mostly for mathematically oriented physicists and
engineers, I was motivated by the applications - to find an appropriate
mathematics for them. I selected physically based mathematical models not
only by their value for specific engineering or scientific problems but also by
their intrinsic methodical interest. Nevertheless, revering some scientific or
engineering usefulness, I largely omitted works of pure mathematicians -
although I admit their extreme importance. References to these authoritative
works are mostly relegated to the notes and are also contained in the
literature I cite. There are strong elements of translation work in this chapter.
Mathematical insiders may know a lot, but few from the physical or
engineering community would understand them. At least the time required to
dig into notations and terminology may be too excessive. I am fully aware that
I would probably be executed by mathematicians for some vulgarization of
mathematical discussions. My physmatical level implies a certain
primitivizing.
Mathematics is a process rather than a fixed scope of knowledge, and the
process of math is a very living thing: people visit each other, attend the
conferences, talk and communicate. However, I shall try to give only a short
account of standard mathematical subjects, and this account is far from being
rigorous or complete. Indeed, each matter I touch upon here can be
adequately treated in a fair lecture course given by an expert mathematician.
However, honestly speaking I would not recommend supplementing books
on physics that appear not sufficiently rigorous by some authoritative
Bourbaki-style treatises. The point is that the latter are, as a rule, too abstract
to be useful. The Bourbaki group endeavored the mission of unification and
standardization of mathematics, at least of its major part, and many people
think (even those who did not read the volumes) that the Bourbaki authors
have
succeeded.
Nevertheless,
even
though
this
unification
and
standardization has been performed, the work of the Bourbaki group now
turns out to be largely useless, at least for the majority of physicists. The
matter is that when we handle a problem, especially in physics or applied
mathematics, we rarely need that high level of abstraction which was used by
the Bourbaki authors. Moreover, what we really need - at least what I know
from my own experience - are specific techniques or results to deal with the
details of our models and solutions. Many mathematicians copying the
Bourbaki patterns pay very little attention to practical needs; they prefer
keeping to the most general possible setting. Such “ivory tower” (and partly
schizoid, i.e., extreme alienated) treatment devoids abstract mathematics of
any perspective of use by a physicist or an engineer because she/he misses
completely the purpose of abstraction. The power of abstraction should
110
Mathematical Potpourri
manifest itself when dealing with concrete situations. Are the Bourbaki
volumes widely read by physicists or engineers?
In modern physics, various mathematical concepts and methods are used,
e.g., differential equations, phase spaces and flows, manifolds, maps, tensor
analysis and differential geometry, variational techniques, groups in general
and, in particular, Lie groups, ergodic theory, stochastics, etc. In fact, all of
them have grown from the practical needs - mostly those of mechanical
engineering - and only recently acquired the high-brow and somewhat
pathological axiomatic form that makes their study and direct application so
difficult. Of course, the level of mathematical sophistication should always be
adequate, otherwise one may get absorbed in mathematical prerequisites
instead of solving physical or engineering problems. In some respects,
modern mathematics diverges from physics and tends to philosophy, since,
like philosophy, mathematics increasingly juggles with concepts invented just
for this purpose. Mathematics cares very little about the relations of these
concepts to physical observations. For example, although geometry lies at the
core of mathematics, it has many practical applications, but mathematicians
focus only on symbolic forms and typically could not care less about any
practical use. Some of them are flaunting their superiority stating that
“mathematicians do not like applied stuff” - a sentence I heard many times
since my school years. Physics also deals with mathematical concepts, and
ideally the latter are meaningful only upon demonstrating their relations to
observations. At least, the physical traditionalists would cheer these relations.
Nowadays, when string theories and some highly mathematized models are
in fashion, despite the fact that they are unsupported by experiment, the role
of mathematical sophistication increases. Pragmatically speaking, a person
possessing extra mathematical skills can more easily generate new versions
of currently popular theories in order to gain reputation and grants. Rather
few physicists think about foundational problems; most scientists make their
careers by mathematically treating specific problems. For example, quantum
theory, which is maybe the most fundamental science, allows one to produce
both a vast number of applied mathematical models and speculative
interpretations. Mathematical performance can be sufficient for career
achievements in modern physics, irrespective of the agreement with reality.
Partly because of that I decided to devote a considerable place to
mathematical preliminaries and the discussion of mathematical techniques.
I hope that owing to that preliminary chapter all parts of this book will be
understood by physics or engineering students, the precise direction of
studies or working experience being immaterial. The book does not contain
really advanced mathematics, so nobody will sweat blood over fanciful
notations and lengthy definitions. Proofs are more intuitive rather than
technical, to a possible disgust of mathematicians. Nevertheless, I don’t think
that the readership would consist of mathematicians alone; for many
uninitiated, e.g., for engineers and physicists, numerous technical proof
details are hardly essential. I would be happy if I could share with
mathematically not necessarily well-informed readers some working
knowledge of certain subjects of mathematics that can be applied to construct
2.17
Some Tricks of the Trade
111
physics-based models. I am inclined to think that an obsessive drive to
extremely great generality and precision accompanied by a scholastic
pedantry, which are becoming customary in contemporary mathematics,
build up a considerable barrier between mathematics and modern science.
Making their language available to other scientists is not an easy task, many
professional mathematicians usually neglect it. My observation is that to
preserve a certain separation between themselves and the rest of the
scientific/engineering community is considered a certain chic among the
mathematical community, especially among young mathematicians. This
group norm does not actually serve as an eye-opener for physicists and
engineers, and I call it for myself the tedium barricade, and to avoid it I tried
to mix up mathematical rigor in homeopathic doses.
An example of the detrimental effect of fashion mixed with the trend to
obsessive rigor is an attempt to formulate everything in the language of
differential forms - a wonderful instrument per se. They have been introduced
nearly a century ago and I think they are indispensable for many applications
in physics (and maybe in some derived disciplines). This is an example when
“classic” - classic vector analysis - does not necessarily mean the best. Now,
just try to ask physicists who of them is using differential forms in her/his
regular work. I did it, although my sampling was neither very extensive, nor
representative. Nonetheless, my primitive sociological experiment showed
that only a very tiny percentage of working physicists are familiar with this
tool, which may be considered a convenient alternative to vector calculus, the
latter being ubiquitous in physics and engineering. Some respondents have
even become aggressive: “Differential forms? What are they good for? To
formulate the Stokes’ theorem? All this new stuff - differential forms, bundles,
connections, multilinear algebra - is just ostentatious propaganda tricks
devised by some clique of mathematicians striving to become known.”
Needless to say, I surveyed only theoreticians; it would be impertinent to ask
experimentalists or engineers about forms - I would have been heartily
laughed at.
This is not accidental. I could not find in the abundant literature a good
exposition of differential forms for physicists. Most of the sources I came
across were written by mathematicians with utter neglect to the physicist’s
needs. Maybe I was unlucky55. But the fact remains: mostly because of modern
mathematics, “physics is becoming so unbelievably complex that it is taking
longer and longer to train a physicist. It is taking so long, in fact, to train a
physicist to the place where he understands the nature of physical problems
that he is already too old to solve them”. These words belonging to E. P.
Wigner, a prominent mathematician and at the same time a great physicist -
a rare combination, reflect an unfortunate tendency that, according to D.
Hilbert, “physics is becoming too difficult for physicists”. The sheer variety of
55 In this book, I am using mostly the more ubiquitous formalism of traditional tensor analysis
(one might call it the Riemann-Ricci calculus) [154], rather than the less widespread techniques
of differential forms (Cartan calculus) [103, 155, 104]. This does not reflect my personal
preferences, but I do not think they are relevant. Sometimes I give both versions, which amounts
to some kind of translation.
112
Mathematical Potpourri
physical models combined with newly obtained experimental facts,
multiplied by the abundance of mathematical methods and channeled to
clumsy computers to obtain numerical results make the whole physmatical
construction unobservable and lacking any consistent interpretation. Any
person who studied physics must somehow cope with his/her ignorance, and
persons claiming to have mastered the most part of physics, including the
relevant mathematics, are just arrogant hypocrites.
On the other, mathematical, side, it may be a common opinion among the
majority of modern mathematicians that the mathematical methods used by
physicists would not stand up to the current level of rigor. I have heard
several times that the methods the physicists use appear mysterious, ad hoc
or merely incorrect to many mathematicians. The latter claim to have a
natural aversion to inexact statements. Honestly speaking, I don’t care.
Moreover, I think that the majority of these mathematicians would also
derogate, for example, many of Euler’s methods on the same ground. But
somehow there appeared no new Eulers amidst such rigorists. Certainly,
mathematical rigor is extremely important, but it cannot replace the intuitive
and imaginative line of thought that Euler shared e.g., with Einstein, Dirac,
Feynman. There were not many people who could combine extended
mathematical skills and deep physical insight. Interestingly enough, the latter
may not necessarily help in practical work, e.g., when treating quantum
models. To keep a balance between physical model-building and
mathematical abstraction is probably the most intricate art in the work of a
physicist. So instead of threading one abstraction after another, I was trying
to comment on some contemporary developments that might be useful for
physicists.
I have already mentioned that the questions of rigor are only of secondary
importance for an average working physicist. Even nowadays, when physical
problems are frequently formulated in the language more palatable for
mathematicians than for physicists of older generations. One might recall that
during the most fruitful period of physics in Russia (1950s-1970s) dominated
by the “Landau school” (see Chapter 2) it would be highly improbable to
encounter the word “theorem” in the leading Russian physical journal JETP.56
I suspect that this word was informally prohibited in JETP. Published papers
practically never began with formal definitions, and terminology was quite
different from that of the mathematical literature. This difference of cultures
was emphasized by physicists and mathematicians alike. I remember very
well that when I was studying analytical mechanics our professor strongly
advised us against using the textbook “Mechanics” by L. D. Landau and E. M.
Lifshitz [23] as a source, allegedly because of the lack of rigor in this book.
One more cultural difference between the physical and mathematical
communities is related to notations. In distinction to mathematical texts
where notations are often specially designed, notations in physics are not
56 One could meet the word “theorem” in the text of an article as a reference, e.g., “according
to the Wigner’s theorem”, but the formal structure usual for mathematics “definitions theorem -
proof” was probably inadmissible in JETP.
3.1
Sets
113
used to stress the logical structure of the notions involved, but rather to make
the heuristic physical calculations as transparent as possible.
3.1
Sets
The notion of set is known to survive without a formal definition, at least I
have never encountered any. A set can be loosely understood as a collection
of objects usually called elements. Other terms for a set may be a variety, a
class, a family, an assemblage or an assortment. Synonyms for the things
called elements may be, e.g., members, constituents or even points. Here, I
shall denote a set of elements 𝑥1, 𝑥2, 𝑥3, … by curly brackets {𝑥1, 𝑥2, 𝑥3, … }. A
set containing one element is called a singleton. In case each element 𝑥𝑖
belonging to a set 𝐴, 𝑥1 ∈𝐴, is also an element of a set 𝐵, then 𝐴 ⊆ 𝐵, i.e., 𝐴 is
a subset of 𝐵. Obviously, when both relationships 𝐴 ⊆ 𝐵 and 𝐵 ⊆ 𝐴 are
valid, then 𝐴 = 𝐵. Some sets have standard notations, e.g., ∅ is an empty set
(containing no elements), the set of all real numbers is denoted ℝ, the set of
all complex numbers is ℂ, the set of all integers is ℤ, the set of all natural
numbers is ℕ.
By a domain, I shall mean a connected open subset in ℝ𝑛 or ℂ𝑛 for some
positive integer 𝑛. Throughout this book, I shall use 𝑥= (𝑥1, 𝑥2, … , 𝑥𝑛) or 𝑧=
(𝑧1, 𝑧2, … , 𝑧𝑛) for the coordinates of a point in ℝ𝑛 (ℂ𝑛). Sometimes it is
notationally more convenient to consider domains in ℝ𝑛+1 (ℂ𝑛+1) rather than
in ℝ𝑛 (ℂ𝑛) starting coordinate sets from 𝑥0 (𝑧0).
There are simple rules to operate with sets. For example, one can easily
prove the following identities for sets 𝐴, 𝐵, 𝐶 (see any textbook on the set
theory):
(𝐴∖𝐵)|𝐶= 𝐴∖(𝐵∪𝐶),
(3.1)
𝐴∖(𝐵∖𝐶) = (𝐴∖𝐵) ∪(𝐴∩𝐶),
(3.2)
and
(𝐴∪𝐵) ∖(𝐶∖𝐵) = (𝐴∖𝐶) ∪𝐵.
(3.3)
This theory seems very abstract, yet it may help solving practical
problems, specifically in computer science (informatics). One can easily
construct an example of a real-life situation when the set theory reasoning
can help. Suppose you have to analyze the test results for a group of students
being examined on three subjects: mathematics, physics, and French. There
were 𝑁 persons to undergo the tests, with 𝑛𝑚 successful in math, 𝑛𝑝 in
physics and 𝑛𝐹 in French. We know that 𝑎𝑚𝑝 failed both in mathematics and
physics, 𝑎𝑚𝐹 in mathematics and French, 𝑎𝑝𝐹 could pass neither physics nor
French; 𝑎𝑚𝑝𝐹 failed in all three subjects. The question is, how many students,
𝑛𝑚𝑝𝐹, successfully passed all three tests? In this example, just to draw the
usual pictures (set diagrams) would help to solve this problem.
Now, to make the discussion of sets more concrete, let us review basic
algebraic structures. A vague notion of a “mathematical object” does not
necessarily denote a set with a certain algebraic structure, but if it does then
a mathematical object can be understood as some algebraic structure, for
example, group (a set with a single operation), ring (a set with two
operations), or a vector space. A little below we shall briefly discuss mappings
114
Mathematical Potpourri
(or maps) between sets; now I just mention that in algebra the sets endowed
with operations are typically studied which makes the following question
relevant: what happens with the algebraic structure i.e., with the set elements
and operations on them when one set is mapped to another?
3.2
Maps and Operators
In the study of sets, the most important concept seems to be mapping.
Assume there are two sets, 𝑋 and 𝑌, then a rule 𝐹 assigning 𝑦∈𝑌 to each
𝑥∈𝑋, 𝐹: 𝑋→𝑌 i.e., 𝑥↦𝑦 is called a mapping or a map. In the general
algebraic context one can define such mappings as isomorphisms which are
typically understood as bijective homomorphisms. For the above examples of
the most popular algebraic structures such as groups, rings, or vector spaces,
homomorphisms are defined as some specific, context-dependent mappings,
e.g., a linear operator for the vector space, group and ring homomorphisms
respectively for groups and rings. In any case, a homomorphism is one of
the basic morphisms, it is a mapping that preserves the algebraic structure
of two sets or algebraic objects to be mapped from one to another, in
particular, from the domain into the target image of a function.
The term “eigenfunction” refers to a function that preserves its original
form (perhaps up to a multiplicative complex constant) after having passed
through some system, e.g., an engineering or measuring device.
3.3
Groups
One can observe that the concept of a group is the simplest and the most basic
among all algebraic structures used in physics. A group is an algebraic
structure consisting of a set of objects together with a binary operation on
this set. That is an operation which produces a third object of the same kind
from any two. Historically, the notion of a group has originated from studying
the symmetry transformations in geometry on a plane (what we used to call
planimetry in high school). Algebraic generalizations of such transformations
have led to classifying symmetry properties through the operations regarded
as elements of certain sets. After the invention of matrices57 and especially
following their systematic study by A. Cayley in the 1850s, matrices began
playing the key role in linear algebra. Accordingly, matrix representations of
groups emerged, a tool especially widely used in quantum mechanics. The
discovery of Lie groups, named after Sophus Lie, an outstanding 19th century
Norwegian mathematician, has led him to the idea of applying continuous
groups to the solution of differential equations [190]. Recall (and we shall see
it below) that a Lie group is simultaneously a differential manifold obeying
the group properties so that one can study Lie groups by using ordinary
calculus of infinitesimals. Incidentally, S. Lie himself was known to call such
groups infinitesimal ones.
Mathematical models of physics are always constructed in some spaces
which are sets of elements, in general of any kind, endowed with an
57 It is difficult to trace who really has introduced matrices in mathematical
calculations, see [191].
3.3
Groups
115
appropriate mathematical structure. Standard mathematical structures
typically have an algebraic or a topological nature. An example of an algebraic
structure is the group; this structure may be considered one of the simplest.
(Some elementary topological structures will be briefly discussed below.) A
set 𝐺 of elements of arbitrary origin is known as a group if a group operation
is defined and the following rules are fulfilled.
1. For any two elements 𝑎, 𝑏∈𝐺, there exists an element 𝑐= 𝑎⊗𝑏, 𝑐∈
𝐺.
2. This operation is associative: for any three elements 𝑎, 𝑏, 𝑐∈𝐺,
(𝑎⊗𝑏) ⊗𝑐= 𝑎⊗(𝑏⊗𝑐).
3. There exists a neutral element 𝑒 such that for any 𝑎∈𝐺, 𝑎⊗𝑒=
𝑒⊗𝑎= 𝑎.
4. For each element 𝑎∈𝐺 there exists an inverse (symmetric) element
𝑎−1 such that 𝑎⊗𝑎−1 = 𝑎−1 ⊗𝑎= 𝑒.
Note: if the group operation ⊗ is called multiplication (usually denoted
as × or ∙ ), the element 𝑐 is called a product of 𝑎 and 𝑏, the neutral element is
called unity, and the group itself is known as multiplicative. If the group
operation is understood as addition, the element 𝑐 is usually written as 𝑐=
𝑎+ 𝑏, the neutral element is called zero, and the inverse element to 𝑎 is also
called the opposite one, being written as −𝑎. The group itself is known as
additive. If for any two elements 𝑎, 𝑏∈𝐺, 𝑎⊗𝑏= 𝑏⊗𝑎, the group is called
Abelian or commutative.
The simplest transformation group is probably the one of rotations of a
circle, we shall look at it below in some detail. Although one can probably find
even simpler examples, rotation of a circle is interesting to us since it is a
trivial analog of the SO(𝑛) group which corresponds to rotations of n-
dimensional Euclidean space 𝐸𝑛.
3.3.1. Semigroups
One can weaken some of the imposed rules for the group, and then other
algebraic structures arise. For example, a set of elements without a
compulsory neutral (identity) element 𝑒 and without an inverse element
𝑎−1 is known as a semigroup. Semigroups became important, in particular,
in the study of dynamical systems described by ordinary and partial
differential equations. The matter is that any evolutionary problem modeled
by a dynamical system may be formalized as an initial value problem for some
ordinary differential equation (ODE), in general for a vector function 𝑓
defined on some space 𝑉:
𝑑𝑓(𝑡)
𝑑𝑡
= 𝐴𝑓(𝑡), 𝑓(𝑡0) = 𝑓0,
116
Mathematical Potpourri
where 𝐴 may be interpreted as an evolution generator. The formal
solution to this problem is 𝑓(𝑡) = exp {∫𝐴𝑑𝑡
𝑡
𝑡0
} 𝑓0 or, in the simpler case of
an autonomous problem, 𝑓(𝑡) = exp(𝑡−𝑡0)𝐴𝑓0. This expression can be
heuristically interpreted as an action of a semigroup of operators (neither
the neutral nor the inverse elements are in general required), exp(𝑡−𝑡0)𝐴,
taking the initial state 𝑓0 = 𝑓(𝑡0) to 𝑓(𝑡). However, one should give a
precise meaning to the exponential exp(𝑡−𝑡0)𝐴. We shall deal with this
issue a number of times in different contexts.
3.4
The Rotation Group
Studying the rotation group is necessary for many quantum-mechanical
applications such as the theory of angular momentum, atomic spectra and
nuclear physics. There is one stronger motivation to become familiar with the
rotation group: it is a wonderful example of general Lie groups, and the latter
serve as a natural tool for treating symmetry in physics (for instance, the Lie
group SU(3) × SU(2) × U(1) plays the most essential part in the Standard
Model of elementary particle physics).
Let us start, as promised above, from a very simple example of the group
of rotations of a circle. Let 𝑔𝛼 denote the rotation by angle 𝛼 so that the
representation 𝑅(𝑔𝛼) of this operation i.e., operator 𝑅(𝑔𝛼) acting on a
space of functions 𝐹(𝜑) of the form (𝜑) ≔𝑓(cos𝜑, sin𝜑) is defined by the
expression
𝑅(𝑔𝛼)𝐹(𝜑) = 𝐹(𝜑+ 𝛼).
This operation symbolizes a rotation by angle 𝛼. One can slightly
generalize this expression as 𝑅(𝑔)𝑓(𝑥) = 𝑓(𝑔−1𝑥).
To be more specific, imagine for a moment that there exists a single point
which may be regarded as a center of universe (it is not necessary that God is
sitting exactly at this point). Then we may consider this point as a universal
coordinate origin so that the entire space can be rotated about it. Let us
denote such rotations by 𝑔𝑖. It is clear that the 𝑔𝑖 form a group, i.e., all the
axioms stated above are fulfilled, with the group unity corresponding to the
rotation by zero angle. Let 𝐫 be the vector connecting our origin with a point
(𝑥1, 𝑥2, 𝑥3).
The rotation group and its affine extensions such as the Galileo group
with elements 𝑔(𝑡, 𝐫) = 𝑡+ 𝑏, 𝑅𝐫+ 𝐯𝑡+ 𝐚, together with Lorentz and
Poincaré groups, play a fundamental role in physics determining the invariant
structures in geometric transformations.
3.5
Lorentz and Poincaré Groups
One can find a number of common features in the Galileo and Lorentz groups.
Primarily, they both are intended to define admissible reference frames for an
inertial observer with respect to any event in spacetime. As to the Poincar´e
group 𝑃(ℝ, 3), it is just an affine extension of the Lorentz group, a semidirect
product of the Lorentz group Λ(ℝ, 3) and the abelian translation group 𝑇(4)
3.5
Lorentz and Poincaré Groups
117
acting on a Minkowski spacetime, Λ(ℝ, 3) ⋊𝑇(4). Here, I shall remind us that
a semidirect product 𝐶= 𝐴⋊𝐵 of two groups defines a way to form a new
group 𝐶 from two groups 𝐴 and 𝐵 which then become subgroups of the new
group. In particular, an affine group of all non-degenerate motions
(isometries)58 on a plane ℝ2 contains transformations defined by the pair
(𝐴, 𝜉) where 𝐴∈O(2) i.e., the matrix 𝐴 belongs to the group of orthogonal
2 × 2 matrices describing rotations and reflections while keeping the origin
fixed, and 𝜉∈ℝ2 is a plane vector i.e., belonging to the abelian translation
group ℝ2 so that 𝐱↦𝐴𝐱+ 𝜉: 𝐱′ = 𝐴𝐱+ 𝜉 with det𝐴≠0. The composition
rule in this simple affine group has the form:
(𝐴, 𝜉) ∗(𝐵, 𝜂) = (𝐴𝐵, 𝜉+ 𝐴𝜂) ≔𝐴⋊𝐵.
We shall observe some features of the Poincaré group as the affine extension of
the Lorentz group i.e., the semidirect product of the translations and the
Lorentz transformations a little later. Let us now discuss some formalism
pertaining to the most basic relativistic groups, primarily the Galileo (mainly
homogeneous) and Lorentz group. These groups give rise to the two types of
theories that are known as Newtonian and Einsteinian physics so that one
might justifiably ask: do all the differences between these two types of
physics, including interacting particles and not only inertial motion, stem
from the differences in the structures of the Galileo and Lorentz (Poincaré)
groups? I think this is a rather complicated question, and we shall discuss it
- rather superficially - also a little later.
Recall the Galilean transformations (see more in Chapter 3 under
Geometry of Classical Mechanics): 𝑡→𝑡′, 𝐫→𝐫′ which we can
symbolically write as
(𝑡′
𝐫′) = (1
0
𝐯
𝐈3) (𝑡
𝐫)
(3.4)
Here 𝐈3 is an orthogonal 3 × 3 matrix which intuitively symbolizes rotations
in the Euclidean 3d space i.e., 𝐈3 ∈SO(3). Let us choose, in particular, Galileo
transformations along the x-axis, 𝐯= 𝑣𝑥𝐞𝑥= 𝑣𝐞1, then 𝐈3 →1 and we have
(𝑡′
𝑥′) = (1
0
𝑣
1) (𝑡
𝑥)
i.e. the simple Newtonian kinematics with an absolute time, 𝑡= 𝑡′. This is,
of course, a very particular case even in the class of linear spacetime
transformations, and we can find some generalizations of the Galileo
transformations. For example, special relativity gives
one
possible
58 Recall that an isometry is usually defined as a map that preserves the
distance; for example, the plane isometry 𝑓: ℝ2 →ℝ2 ensures that 𝑑(𝑥,𝑦) =
𝑑(𝑓(𝑥), 𝑓(𝑦)) where 𝑑(𝑥,𝑦) = ((𝑥1 −𝑦1)2 + (𝑥2 −𝑦2)2)1/2 is the Euclidean
distance corresponding to the norm ‖𝑥‖ = (𝑥, 𝑥)1/2 where (𝑥, 𝑦) is an inner
product in the Euclidean space 𝐸𝑛 (in this example 𝑛= 2).
118
Mathematical Potpourri
generalization of Newtonian kinematics, when time and space become
interconnected so that 𝑡= 𝑡(𝑥) and 𝑡≠𝑡′. Restricting ourselves to the
linear relationships, we get in the general one-dimensional case
(𝑡′
𝑥′) = (𝑎
𝑏
𝑐
𝑑) (𝑡
𝑥)
where matrix elements 𝑎, 𝑏, 𝑐, 𝑑 of the transformation (boost) matrix are so
far unknown functions of the relative velocity 𝐯−𝑣𝐞𝑥−𝑣𝐞1. The standard
way to obtain the Lorentz transformations i.e., to specify the above matrix
elements 𝑎, 𝑏, 𝑐, 𝑑 is to consider the simple kinematic situation when two
inertial systems move with respect to each other with constant velocity 𝐯=
𝑣𝐞𝑥. This is a textbook derivation, and I shall briefly reproduce it so that we
can later discuss some less trivial facts and ideas. So, the hyperbolic
(light) form 𝑠2 = 𝑥02 −𝑥12 = (𝑥0 + 𝑥1)(𝑥0 −𝑥1) must be invariant 59
which naturally gives
𝑥0
′ + 𝑥1
′ = 𝑓(𝑣)(𝑥0 + 𝑥1),
𝑥0
′ −𝑥1
′ =
1
𝑓(𝑣) (𝑥0 −𝑥1),
𝑓(𝑣) ≠0.
One can assume the unknown scalar function of relative velocity 𝑓(𝑣) to be
strictly positive so that, e.g., the difference (𝑥0 −𝑥1) should have the same
sign in each coordinate system. Obviously, 𝑓(𝑣) →1 with 𝑣→0. Solving
this elementary linear system of equations, we get
𝑥0
′ = 1
2 (𝑓+ 1
𝑓) 𝑥0 + 1
2 (𝑓−1
𝑓) 𝑥1,
𝑥1
′ = 1
2 (𝑓−1
𝑓) 𝑥0 + 1
2 (𝑓+ 1
𝑓) 𝑥1.
If we now regard, for instance, the coordinate origin of the “moving”
system i.e., the point 𝑥1
′ = 0 with respect to the “laboratory” system, we
get for (𝑥0
′, 0), using the relationship
𝑥
1 =
𝑑𝑥
𝑑𝑡 in the “laboratory” system
𝑥1 = −𝑥0
𝑓−1
𝑓
𝑓+ 1
𝑓
= 𝑣
𝑐𝑐𝑡.
From this equation we can find the unknown function 𝑓(𝑣):
𝑓2 −1
𝑓2 + 1 = −𝑣
𝑐≡−𝛽,
𝑓2 = 1 −𝛽
1 + 𝛽
so that
59 Here, to make the formulas ultimately simple I temporarily write coordinate indices
below (𝑥𝑖) and not above as it is conventional in geometry. I hope it will not result in a
confusion.
3.5
Lorentz and Poincaré Groups
119
𝑥0
′ = 1
2
1 + 𝑓2
𝑓
𝑥0 −1
2
1 −𝑓2
𝑓
𝑥1 = 𝛾𝑥0 −𝛽𝑥1,
𝑥1
′ = 1
2
1 −𝑓2
𝑓
𝑥0 −1
2
1 + 𝑓2
𝑓
𝑥1 = −𝛽𝑥0 + 𝛾𝑥1
since
1 + 𝑓2
𝑓
= 2
1
(1 −𝛽2)1 2
⁄ ≡2𝛾,
1 −𝑓2
𝑓
= 2
𝛽
(1 −𝛽2)1 2
⁄ ≡2𝛽𝛾.
Notice that 0 ≤𝛽≡𝑣/𝑐≤1 and the Lorentz factor 𝛾≡(1 −𝛽2)−1 2
⁄ ≥1.
So, the elementary Lorentz transformation (“Lorentz rotation”) in the
(𝑥0, 𝑥1) plane takes the form
(𝑥0
′
𝑥1
′) = 1
2 (𝑓+ 1 𝑓
⁄
𝑓−1 𝑓
⁄
𝑓−1 𝑓
⁄
𝑓+ 1 𝑓
⁄ ) (𝑥0
𝑥1) = ( 𝛾
−𝛽𝛾
−𝛽𝛾
𝛾) (𝑥0
𝑥1)
or in (𝑡, 𝑥)-variables
(𝑡′
𝑥′) = ( 𝛾
−𝛽𝛾
−𝛽𝛾
𝛾) (𝑡
𝑥).
The word “elementary” implies in this context the simplest possible case of
relativistic kinematics. If we now assume that Lorentz rotation corresponding
to inertial motion occurs not in (𝑥0, 𝑥1) plane, but in the pseudoeuclidean
(Minkowski) ℝ3,1 space, the velocity 𝐯 still being directed along 𝑥1-axis, 𝐯=
𝑣𝐞1 , then we get the following matrix representation of the Lorentz
transformations
Λ1(𝐯) = (
𝛾
−𝛽𝛾
0
0
−𝛽𝛾
𝛾
0
0
0
0
1
0
0
0
0
1
)
Notice that matrix Λ1 is symmetric, and the upper index 1 signifies that the
motion occurs along 𝑥1-axis. Matrices Λ2 and Λ3 i.e., corresponding to the
case when spatial coordinates normal to the direction of motion are left intact,
have obviously a similar form. One can easily verify that detΛ𝑖(𝐯) = 1, 𝑖=
1,2,3. If we do not assume the velocity to be directed along any of axes 𝑥, 𝑦, 𝑧,
we shall get a rather clumsy but still symmetric matrix (see, e.g., [192], B.1,
Ch. 4) representing the arbitrary Lorentz transformation of special relativity:
120
Mathematical Potpourri
Λ(𝐯)
=
(
𝛾
−𝛽𝑥𝛾
−𝛽𝑦𝛾
−𝛽𝑧𝛾
−𝛽𝑥𝛾
1 + (𝛾−1) (𝛽𝑥
𝛽)
2
(𝛾−1) 𝛽𝑥𝛽𝑦
𝛽2
(𝛾−1) 𝛽𝑥𝛽𝑧
𝛽2
−𝛽𝑦𝛾
(𝛾−1) 𝛽𝑦𝛽𝑥
𝛽2
1 + (𝛾−1) (𝛽𝑦
𝛽)
2
(𝛾−1) 𝛽𝑦𝛽𝑧
𝛽2
−𝛽𝑧𝛾
(𝛾−1) 𝛽𝑧𝛽𝑥
𝛽2
(𝛾−1) 𝛽𝑧𝛽𝑦
𝛽2
1 + (𝛾−1) (𝛽𝑧
𝛽)
2
)
.
This matrix still reflects only the Lorentz boost i.e., a transition between
two reference frames 𝐾 and 𝐾′ whose axes 𝑥, 𝑦, 𝑧, and 𝑥′, 𝑦′, 𝑧′,
respectively, remain parallel in the transformation. It is only in this case
that the Lorentz matrices are symmetric. In the most general case of
Lorentz transformations, the 𝑥, 𝑦, 𝑧 axes are also rotated (by an arbitrary real
angle), and then the corresponding matrices are not restricted to the class of
symmetric ones. The matrix of the general Lorentz transformation may be
obtained as the product 𝐿(1,3) = 𝑂−1(1,3)Λ(𝐯)𝑂(1,3) where 𝑂(1,3) is the
matrix of Galilean rotation
𝑂(1,3) ≔(𝑥0
′
𝐫′) = (1
0
𝐯
𝐈3)
leaving time 𝑡 invariant (see equation (3.4)).
The above elementary derivation of the Lorentz transformations,
directly exploiting the invariance of hyperbolic bilinear form 𝑠2 = 𝑥𝑖𝑥𝑖=
(𝑥0)2 −(𝑥1)2 (recall the appearance of the scaling function 𝑓(𝑣) ), is
contained in numerous textbooks on special relativity (see, e.g., [192], B.1,
Ch. 4)) so that I do not need to discuss it any further. One can, however,
mention an elegant way to obtain the Lorentz transformation formulas in
the textbook by Landau and Lifshitz ([193], 4), which is in fact a special
case of the above classical derivation corresponding to the exponential
mapping 𝑓(𝑣) = 𝑒𝑥𝑝(−𝜃(𝑣)) . Parameter 𝜃 corresponds in this
representation to hyperbolic rotation in pseudoeuclidean space
(Minkowski spacetime), in the simplest case in a complex (𝑥0, 𝑥1) plane60.
Here tanh𝜃= 𝛽. Thus, using such 𝜃-parametrization, one can emphasize
the notion of Lorentz rotation straight away.
Let us now discuss some features of the Lorentz transformations, a few of
them not being very conspicuous. Apparently the Lorentz group, as presented
above, is of a purely kinematic (i.e. geometric) nature - indeed it is the group
of invertible linear transformations of ℝ4 that preserves the quadratic form
60 We may call hyperbolic rotations about the coordinate origin in pseudoeuclidean
space all linear transformations that do not change the distance to the coordinate origin
and taking the cone domains, e.g., 𝑠2 = 𝑥𝑖𝑥𝑖> 0, into themselves. One can easily see
that hyperbolic rotations form a group just like the usual rotations in ℝ𝑛. Indeed, a
composition of two hyperbolic rotations is again a hyperbolic rotation as well as an
inverse hyperbolic rotation, see more on it in [188].
3.5
Lorentz and Poincaré Groups
121
(𝑐𝑡)2 −𝐫2 = 𝑐2𝑡2 −𝑥2 −𝑦2 −𝑧2 where 𝑐 is a constant. The form (𝑥, 𝑦) =
𝑥0𝑦0 −𝑥𝑖𝑦𝑖, 𝑖= 1,2,3 is usually known as the Minkowski metric so that the
Lorentz group is the one of linear isometries of this metric. Nevertheless, to
be absolutely honest one has to admit that the Lorentz transformations for a
single particle in a given spacetime point - and it is usually only a single
particle that is considered in kinematic theories - may also depend on the
interaction between particles, which makes Lorentz boosts sensitive to the
presence of other particles or fields. One might pick up the hint about the
necessity of possible dynamic corrections to purely kinematic Lorentz
transformations already in the fact that time evolution of a system interacting
with the environment is different from the time evolution of an isolated
system (see more on time evolution in Chapters 4 and 7).
However, the phenomena described by special relativity are not
reduced to just kinematics. Recall that Lorentz transformations were used
by Einstein in his famous first paper of 1905 on special relativity to express
the transition between two frames of reference: one is considered to be fixed
i.e., at rest with respect to an observer (“laboratory system”) whereas the
other is instantly co-moving with the observed electron, and the latter can
be in general i.e., accelerating motion. It means that Lorentz
transformations may contain the instantaneous velocity 𝐯(𝑡) as a
parameter, which fact manifests itself in relativistic mechanics
considering not only kinematic but also dynamical situation. For example,
the particle 4-momentum 𝑝𝑖=
𝜕𝐿
𝜕𝑢𝑖, 𝑢𝑖≔
𝑑𝑥𝑖
𝑑𝜏, is usually expressed
through the derivatives of coordinates over the proper time, 𝑝𝑖=
𝛾𝑖𝑗𝑝𝑗, 𝑝𝑗= 𝑚𝑐𝑢𝑗 (in this particular case we restrict the dynamics to the
Galilean plane metric), and the dynamical 3-momentum is differentiated
with respect to laboratory time, as in nonrelativistic Newton’s law,
𝑑𝐩
𝑑𝑡= 𝐅+ 𝐯
𝑐× (
𝛾
𝛾+ 1 (𝐯× 𝐅
𝑐
)),
where 𝐅 is the force field in the classical (Newtonian) sense, for instance, the
Lorentz force. However, the question of applying special relativity to
dynamical situations that imply an accelerated motion is not at all trivial (see
more on that below in the “Relativistic Mechanics” section).
In special relativity, spacetime is considered plane (flat) so that one can
describe it globally by the pseudo-Euclidean coordinates with the Galilean
diagonal metric diag(1, −1, −1, −1) i.e., in the general metric tensor 𝑔𝑖𝑘 only
the terms with 𝑖= 𝑘 are left and the length element (usually known in
relativistic physics as interval) is simply
𝑑𝑠2 = (𝑑𝑥0)2 −(𝑑𝑥1)2 −(𝑑𝑥2)2 −(𝑑𝑥2)2.
All inertial (Lorentz) frames leave this metric invariant. This is of course a
very special choice of the metric which, by the way, justifies the term “special
relativity”. One cannot introduce such a plane metric globally on a general
122
Mathematical Potpourri
manifold 𝑀4 because the latter may be curved. In such cases the metric tensor
𝑔𝑖𝑘(𝑥) has in general all off-diagonal components, 𝑖≠𝑘 and, besides,
depends on spacetime point 𝑥 on the manifold 𝑀4. Yet one can always61,
by an appropriate coordinate transformation, bring tensor 𝑔𝑖𝑘 locally i.e., in
the vicinity of 𝑥 to a diagonal form, in other words, to introduce the locally
Galilean metric diag(1, −1, −1, −1). Physically it means that the spacetime
can be regarded as locally Euclidean. By the way, if such a coordinate
transformation brings the metric tensor to a diagonal (Galilean) form in
each point of 𝑀4 then 𝑔≔det𝑔𝑖𝑘< 0 everywhere in 𝑀4 - the fact that has
a significant importance in general relativity.
Let us briefly discuss the meaning of the relativistic (Minkowski)
interval 𝑑𝑠= (𝑑𝑥𝑖𝑑𝑥𝑖)1/2 = 𝑐𝑑𝜏, where 𝜏 is the “proper time” (see [193],
§§2,3). From the geometric perspective, this interval is just the line
element whereas physically it manifests the constancy of the light speed
and its independence on the frame of reference - an experimental fact
verified many times with very high accuracy (see, e.g., [198], see also a
comparatively recent account of Michelson-Morley type experiments in
[199]. One can interpret the Minkowski interval for a physical body, say a
particle, as the distance counted in units of proper time. For example, if a
particle had a clock the latter would register the passage of time experienced
by the particle. If the time is measured in seconds, this interval will be
measured in light-seconds; if the unit of time is years, interval will be counted
in light-years.
We can see that the Lorentz group is six-dimensional i.e., the Poincaré
group 𝑃(ℝ, 3) is the semidirect product of the Lorentz rotations 𝑂(1,3) =
Λ(ℝ, 3) and pseudoeuclidean spacetime translations 𝑇(4) = ℝ1,3.
One can formulate a more general question: do all the differences
between classical and relativistic physics, not restricted to inertial motion and
special relativity, follow from the differences in the structures of the Galileo
and Lorentz (or Poincaré) groups?
3.6
Rings and Fields
Other important algebraic structures apart from groups are rings and fields,
although they are rarely mentioned in physical literature. Both structures are
the sets of elements of any kind on which not one, as for the groups, but two
operations - e.g., addition and multiplication - are defined. One understands a
ring as a set which is a commutative group with respect to addition and a
semigroup with respect to multiplication, with the distribution property, 𝑎∙
(𝑏+ 𝑐) = 𝑎∙𝑏+ 𝑎∙𝑐, (𝑎+ 𝑏) ∙𝑐= 𝑎∙𝑐+ 𝑏∙𝑐. This means that the presence
of unity and an inverse element with respect to multiplication is not required
in rings. A field is a ring in which the set of all elements without 0 forms an
Abelian group with respect to multiplication. For example, the real numbers
ℝ represent such algebraic structures as a field and a group whereas the set
ℤ of integers forms a ring. If, however, the requirement of commutativity of
multiplication is omitted i.e. the set of all elements without 0 forms a non-
61 Provided some natural conditions are fulfilled, see [188] §87.
3.7
Morphisms
123
commutative group with respect to multiplication, such a ring is called a
skew field; it is sometimes also called a division ring (an often cited
example is the ring of quaternions). The ring of integers ℤ is a ring with
unity 1, the ring of even integers 2ℤ is a ring without 1. One might notice that
generally in mathematical axiomatics it is important to pay main attention not
to what is present, but to what is absent. Thus, in the ring axioms,
commutativity of multiplication is not required as well as the presence of an
inverse element, although it is not explicitly stated in the sequence of axioms.
Examples of fields: the rational numbers, the integers modulo a prime, e.g.,
integers 3ℤ of the form 3𝑛. For physics, the most important structures are the
ring ℤ of all integers, ℝ of real numbers, the field ℚ of rational numbers, and
the field ℂ of complex numbers. One can find plenty of other examples of rings
and fields in any course of modern algebra.
One can naturally define “subthings”: subgroups, subrings and subfields
as subsets of, respectively, groups, rings and fields retaining all their
properties.
3.7
Morphisms
In abstract mathematics, morphism is a notion generalizing structure-
preserving maps between two mathematical structures. This is, of course, not
a definition but a free interpretation. To correctly define morphism, one has
to make an excursus into category theory, which would take us too far away
from physically motivated mathematics. My task in discussing morphisms is
rather modest: primarily, to refresh the concept of mapping and, secondly, to
eliminate the confusion that is frequently encountered as to what is an
isomorphism, a homomorphism, an automorphism, and an endomorphism.
Moreover, it is worthwhile to elucidate the relationship of all these
morphisms to such fundamental notions as bijection, injection and surjection.
Assume, for example, that there exists a bijection between two sets endowed
with some algebraic structures, and if this bijection preserves all the
operations defined for such algebraic structures, then it is known as an
isomorphism whereas the sets between which an isomorphism is established
are called isomorphic. If we now do not require a bijection between two sets
endowed with algebraic structures, but an injection or a surjection, the
operations on such algebraic structures still being preserved, we shall have a
homomorphism, and the two sets between which there exists a
homomorphism are called homomorphic. For instance, if two groups, 𝐹 and 𝐺
are homomorphic, the elements of 𝐹 corresponding to the identity (neutral)
element 𝑒𝐺 of 𝐺 form a subgroup 𝐻 of 𝐹. Such a subgroup is known as a
normal (or invariant) subgroup, whereas the group isomorphic to 𝐺 is
denoted as 𝐹/𝐻 and is called the quotient (or factor) group. It is also
known as 𝐹𝑚𝑜𝑑𝐻, 𝑚𝑜𝑑 being a shorthand for modulo. To gain an
understanding of a similar factoring and the respective terminology for
rings and fields, consider two homomorphic rings, 𝒫 and ℛ. The elements of
𝒫 corresponding to zero in ℛ form a subring ℐ of 𝒫 called an ideal. Then the
ring isomorphic to ℛ is called the quotient ring (rarely the factor ring) 𝒫/𝒥.
124
Mathematical Potpourri
3.8
Algebras
One of the algebras important for physics, although studied mostly not by the
physicists but by pure mathematicians, is the so-called Jordan algebra
invented by the Göttingen physicists Pascual Jordan in 1933 [208]. P. Jordan62,
together with his teacher M. Born [194] (“Dreimännerarbeit”) as well as with
other creators of quantum mechanics, W. Heisenberg, J. von Neumann, W.
Pauli, and E. Wigner [195], was trying to develop algebraic tools making the
manipulations with self-adjoint operators more convenient. The core idea of
P. Jordan was quite simple: if we have two self-adjoint operators 𝐴 and 𝐵 on a
Hilbert space ℍ, representing, for example, quantum “observables”, then the
product 𝐴𝐵 is not necessarily self-adjoint63, but the symmetrized (later called
Jordanian) product 𝐴∗𝐵≔(𝐴𝐵+ 𝐵𝐴)/2 is already self-adjoint.
3.9
Lie Groups and Lie Algebras
The rotation group is a typical example of a Lie Group, where the set is not a
small, discrete set, but a continuum. The mathematical description of Lie
Groups is often done by the Lie algebra related to the group, so it is worth
taking a look at that.
In 1960s, Lie group representations became a dominant paradigm for the
construction of the theory of elementary particles. In fact, it was not the
theory, but attempts to classify the particles, a conventional framework rather
than a physical theory. Some physicists did not address this classification
other as “zoology”.
Lie algebras are rather important for physics, although they are rarely
covered in textbooks, even on theoretical physics (an exception is a useful
course by F. Scheck [280]). The motivation to study Lie algebras in connection
with physics stems from the necessity to systematically study binary, more
specifically bilinear skew-symmetric operations which are abundant in
physics. For instance, the Poisson brackets in classical mechanics,
commutators in quantum mechanics, and the usual cross (vector) product in
linear algebra - physical examples are torque and angular momentum - are
specific cases of such binary operations. Recall that a binary operation defined
on a set 𝐺 is understood as a map 𝑓 taking the elements of the Cartesian
product 𝐺× 𝐺 to 𝐺 i.e., 𝑓: 𝐺× 𝐺→𝐺. A Lie algebra is a vector space 𝑉 (see
the next section) over some field 𝐾 together with a skew symmetric bilinear
operation 𝑉× 𝑉→𝑉, often denoted by square brackets [, ] or simply [ ],
provided the Jacobi identity
[𝑎[𝑏𝑐]] + [𝑏[𝑐𝑎]] + [𝑐[𝑎𝑏]] = 0
62 Despite his undoubtful talent and mathematical prowess, P. Jordan was
reproached by many colleagues for his lack of political abstinence during the Nazi
regime.
63 Indeed, (𝐴𝐵)+ = 𝐵+𝐴+ ≠𝐴𝐵.
3.10
Vector Spaces
125
holds. Thus, a vector product which is bilinear, skew-symmetric, and
satisfying the Jacobi identity64 turns our Euclidean vector space into a Lie
algebra. Another example: it is not difficult to see that a set (space) 𝑀(𝑛, ℝ)
of all 𝑛× 𝑛 matrices with real entries becomes a Lie algebra if the bilinear
skew-symmetric operation is defined as commutator [𝐴, 𝐵] = 𝐴𝐵−𝐵𝐴.
Bilinearity in the definition of the Lie algebra is understood as the
property
[𝛼𝑎+ 𝛽𝑏, 𝑐] = 𝛼[𝑎, 𝑐] + 𝛽[𝑏, 𝑐],
[𝑐, 𝛼𝑎+ 𝛽𝑏] = 𝛼[𝑐, 𝑎] + 𝛽[𝑐, 𝑏],
𝛼, 𝛽∈𝐾, 𝑎, 𝑏∈𝑉,
whereas skew-symmetry is merely understood as anticommutativity [𝑎, 𝑏] =
, [𝑏, 𝑎], 𝑎, 𝑏∈𝑉.
3.10 Vector Spaces
This is the basic stuff for great many mathematical models, and although it
seems to be quite simple, its importance does not diminish because of
simplicity. It is a great delusion that right things in mathematics or physics
must be complicated - quite the opposite. For instance, superstring/M theory
may be far too complicated, despite the generally inspired awe, to be the
ultimate truth (see Chapter 9). So, in spite of being comparatively simple, the
notion of vector spaces plays a fundamental role in physics - and not only in
physics of course - and this may be a justification for a somewhat extensive,
though not very rigorous, treatment of this subject. In fact, in the previous
chapter, we have already dealt with vector spaces without giving an explicit
definition.
There are numerous motivations to consider vector spaces in
mathematics, classical and quantum mechanics, computer physics, computer
graphics - you name it, and also in “soft sciences” such as economics, ecology
or sociology. One of the obvious motivations is related to systems of linear
equations
𝑎𝑖𝑘𝑥𝑘= 𝑏𝑖,
𝑖= 1, … , 𝑚,
𝑘= 1, … , 𝑛.
64 V. I. Arnold [196] has recently proved that the Jacobi identity has in this case the
following physical meaning: it ensures that the three altitudes of a triangle have a
common point (called the ortho-center). For a plane triangle, this fact was known
already to Euclid and is, accordingly, discussed in geometry lessons in high schools
(provided such lessons still exist), of course without mentioning the Jacobi identity. But
in the case of non-Euclidean geometry, e.g., for hyperbolic triangles, the situation is less
trivial. In his paper, V. I. Arnold used the Jacobi identity for the Poisson brackets of
quadratic forms over ℝ2 endowed with a canonical symplectic structure (see [197], see
also section “Geometry of Classical Mechanics” in this book.)
126
Mathematical Potpourri
Such systems naturally arise in a great number of applications, e.g., in
numerical modeling, electrical engineering, industrial planning, even in
agriculture.65
In general, a system of linear algebraic equations may, for instance, be
represented as a linear combination of matrix columns
𝑥1 (
𝑎11
⋮
𝑎𝑚1
) + 𝑥2 (
𝑎12
⋮
𝑎𝑚2
) + ⋯+ 𝑥𝑛(
𝑎1𝑛
⋮
𝑎𝑚𝑛
) = (
𝑏1
⋮
𝑏𝑚
),
where the parameters 𝑎𝑖𝑗 (coefficients of the system) form the rectangular
matrix
𝐴≔(
𝑎11
⋯
𝑎1𝑛
⋮
⋯
⋮
𝑎𝑚1
⋯
𝑎𝑚𝑛
)
called matrix of the system. If we denote the columns of matrix 𝐴 as vectors
𝐚1, … , 𝐚𝑛 where the lower index 𝑖= 1, … , 𝑛 corresponds to variables 𝑥𝑖, then
the question whether the system of linear equations can be solved would be
equivalent to the question whether vector 𝐛= (𝐛1, … , 𝐛𝑚)𝑇∈ b = (b1,...,bm)T
∈ℝ𝑚 can be represented as a linear combination of vectors 𝐚𝑖 or, in more
current terminology, 𝐛∈〈𝐚1, … , 𝐚𝑛〉, where angular brackets denote the so-
called span of ℝ𝑛 (see below).
If we consider first a homogeneous system of equation (vector 𝐛= {𝑏𝑖} =
0), then we may notice that any two solution vectors, 𝐱= {𝑥𝑖} and 𝐲= {𝑦𝑖}
produce also solutions 𝐳= 𝐱+ 𝐲 with 𝑧𝑖= 𝑥𝑖+ 𝑦𝑖 and 𝐰= 𝜆𝐱, where each
𝑤𝑖= 𝜆𝑥𝑖 for any real 𝜆. Thus, the set of solutions to a homogeneous system of
linear equations has a vector space structure - vectors can be added (e.g.,
using the rule of parallelogram) and multiplied by a number. One can see that
65 A typical problem known from ancient times: one has two fields with the total area 1 (in
arbitrary units). One field supports crop output 𝑎 per unit area, the other 𝑏. The crop collected
from the first field exceeds that produced by the second field by 𝑚. What are the areas of field 1
and 2? This problem is obviously formalized as a linear system
𝑥+ 𝑦= 1
𝑎𝑥−𝑏𝑦= 𝑚
𝑎> 0, 𝑏> 0, 𝑥> 0 𝑦> 0
,
with the solution
𝑥= 𝑏+ 𝑚
𝑎+ 𝑏, 𝑦= 𝑎−𝑚
𝑎+ 𝑏.
It would not be easy to obtain the field partial areas in a purely thoughtful logical way, without
solving the system of equations.
3.10
Vector Spaces
127
a real vector space 𝑉 is just an additive group 66 whose elements can be
multiplied by real numbers so that
𝜆(𝑥+ 𝑦) = 𝜆𝑥+ 𝜆𝑦,
𝜆∈ℝ,
𝑥, 𝑦∈𝑉,
(𝜆+ 𝜇)𝑥= 𝜆𝑥+ 𝜇𝑥,
𝜆, 𝜇∈ℝ,
𝜆(𝜇𝑥) = (𝜆𝜇)𝑥,
and 1𝑥= 𝑥.
So, vectors are elements of a linear (vector) space. If a linear space 𝑉 is
exemplified by a coordinate space ℝ𝑛, then vectors are typically identified
with n-tuples of numbers and written as 𝑎≔(𝑎1, … , 𝑎𝑛)𝑇= 𝑎1𝐞1 + ⋯+
𝑎𝑛𝐞𝑛≡𝑎𝑖𝐞𝑖. The vectors 𝐞𝑖 form a coordinate basis in ℝ𝑛, the most
convenient choice of the basis vectors is when they are orthonormalized i.e.,
their scalar (inner) product gives a Kronecker symbol, 𝐞𝑖𝐞𝑗= 𝛿𝑖𝑗. We shall
see soon that the generalization of this scalar product naturally leads to the
concept of metrics.
In mathematics, one usually defines vector spaces over some “field”. This
amounts to choosing 𝜆 not necessarily from ℝ but from another set with
similar properties, e.g., ℂ of complex numbers, or in general any set in which
the operations of addition, subtraction, multiplication, and division (except
division by zero) are allowed and the same rules that are familiar from the
school-time arithmetic of ordinary numbers can be applied. In such more
general cases the considered additive group ceases to be a real vector space.
One may notice by association with numbers that natural numbers were
obviously the first used by the humans, these numbers could be easily added
and multiplied, but the inverse operations - subtraction and division -
required some head-scratching. Such inverse operations became possible only
with the invention of negative and fractional numbers.
Quite often the definition of a vector space consists of a long list of
axiomatic statements which may produce the impression that the vector
space is a complicated object. In fact, just the opposite is true: objects that
comply only with some parts of these rules are more complicated. For
instance, human beings that obey none of such rules are not described by any
satisfactory theory. But an important thing to be stressed is that vectors can
be multiplied by numbers from some field called scalars and added according
to the “rule of parallelogram” and these operations retain vectors within a
clearly defined set which is the vector space. The possibility to use such
operations means that vector spaces are supplied with a linear structure. The
preservation of a linear structure is not a trivial requirement: for instance the
maps 𝑓: ℝ𝑚→ℝ𝑛 implemented by differential functions do not in general
preserve the linear structure, although such maps are ubiquitous in geometry
66 A commutative (Abel) group with respect to addition.
128
Mathematical Potpourri
and in the theory of dynamical systems. 67 In vector spaces with a linear
structure, linear maps are naturally the most important ones, just because
they preserve the linear structure. A map 𝐹: 𝑈→𝑉 (map, mapping, operator,
transformation are all synonyms. In quantum mechanics, the term “operator”
is mostly used in connection with maps 𝐹: 𝑈→𝑈.) A map is linear if for all
𝑥, 𝑦∈𝑈 and 𝑎∈ℝ (here for simplicity we restrict ourselves to scalars from
ℝ), 𝐹(𝑥+ 𝑦) = 𝐹(𝑥) + 𝐹(𝑦), 𝐹(𝑎𝑥) = 𝑎𝐹(𝑥) , i.e., 𝐹(𝑎𝑥+ 𝑏𝑦) = 𝑎𝐹(𝑥) +
𝑏𝐹(𝑦) for all 𝑥, 𝑦∈𝑈, and 𝑎, 𝑏∈ℝ. One may note that the set of all linear
maps 𝐹: 𝑈→𝑉 forms itself a vector space with respect to addition and
multiplication by a scalar. Symbolically, one may write 𝐹: 𝑈→𝑉+ 𝐺: 𝑈→
𝑉= (𝐹+ 𝐺): 𝑈→𝑉 or (𝐹+ 𝐺)(𝑥+ 𝑦) = (𝐹+ 𝐺)𝑥+ (𝐹+ 𝐺)𝑦 and the
composition of linear maps 𝐹 and 𝐺 is also linear, 𝐺𝐹(𝑎𝑥) = 𝐺𝑎𝐹(𝑥) =
𝑎𝐺𝐹(𝑥). Such a transitivity of linear maps is well reflected in the matrix
theory: once a basis has been fixed, every linear map 𝐹 is represented by a
matrix 𝐹𝑚𝑛 and every matrix corresponds to a map (isomorphism). This fact
is extensively used in quantum mechanics (see Chapter 6); actually, it is the
mathematical reason for the equivalence of the Schrödinger and Heisenberg
formulations, although both great physicists presumably did not wish to
accept this equivalence in the initial period of quantum mechanics (1924 -
1928).
There is a long list of familiar linear maps in mathematics and physics.
For example, integration may be interpreted as a linear map or a linear
functional. Let us take as 𝑈 the vector space of all real-valued continuous
functions (with compact support) defined at [0,1] and 𝑉= ℝ. Then
𝐹(𝑓) = ∫𝑓(𝑥)𝑑𝑥
1
0
A linear functional may be given for the same choice of 𝑈, 𝑉 also by, e.g.,
𝐹(𝑓) = 𝑓(0). And of course, differentiation also produces a linear map: let
𝑈= 𝑉 be the vector space of all differentiable functions on ℝ, then 𝐹(𝑓) = 𝑓′
is a linear map. In simple geometry in ℝ2, rotation or shift
(𝑥′
𝑦′) = 𝑓(𝑥
𝑦)
with
𝑥′ = 𝑥cos𝜑−𝑦sin𝜑
𝑦′ = 𝑥sin𝜑+ 𝑦cos𝜑
or
67 Later we shall see that there exists a corresponding linear map df called differential.
3.10
Vector Spaces
129
𝑥′ = 𝑥+ 𝑎𝑦
𝑦′ = 𝑦
Here, it would be worthwhile to recall what the isomorphism is (in simple
terms). A linear map 𝐹: 𝑈→𝑉 is an isomorphism if there exists a map 𝐺: 𝑉→
𝑈 such that both 𝐹𝐺= 𝐸 and 𝐺𝐹= 𝐸 where 𝐸 is the identity map, 𝐸𝑥= 𝑥 for
all 𝑥∈𝑈. Strictly speaking, we must write 𝐸𝑈 and 𝐸𝑉 for spaces 𝑈 and 𝑉,
respectively, but we shall neglect this difference. One usually says: 𝐹: 𝑈~𝑉 is
an isomorphism from 𝑈 to 𝑉 (or between 𝑈 and 𝑉). Since 𝐺 is the inverse of
𝐹, 𝐹 is invertible and 𝐺= 𝐹−1. An invertible map is nonsingular, i.e., from
𝐹(𝑥) = 0 follows 𝑥= 0 and if 𝐹(𝑥) = 0 for 𝑥≠0 then 𝑥 is a singular element
(a singular vector, if one considers a vector space) of map 𝐹. A set of such
singular elements {𝑥∈𝑈: 𝐹(𝑥) = 0} is called the kernel of map 𝐹, denoted
𝐾𝑒𝑟𝐹. One can easily prove that 𝐾𝑒𝑟𝐹 forms a linear subspace in 𝑈 and if 𝑈 is
finite-dimensional then 𝐾𝑒𝑟𝐹 is also finite-dimensional. It is exactly a
subspace and not just a subset. In other words, the kernel, 𝐾𝑒𝑟𝐹 of 𝐹: 𝑈→𝑉
denotes the subspace of all singular vectors of 𝐹. It is easy to prove (see
below) that 𝐹 is injective if and only if 𝐾𝑒𝑟𝐹= 0, i.e., map 𝐹 is non-singular.
The finite-dimensional ( 𝑑𝑖𝑚𝑉= 𝑛) vector space allows a simple
geometrical or rather graphical (for 𝑛≤3) interpretation, with elements of
the space being directed arrows connecting the origin and the point 𝑥1, … , 𝑥𝑛.
In some different cases, e.g., for dual spaces (see below), other pictorial
representations are more adequate. Usually, only the real (as in classical
mechanics) and the complex (as in quantum mechanics) vector spaces, ℝ𝑛
and ℂ𝑛 respectively, are considered but nothing prevents us from using other
number fields as the source of scalars over which the vector space is defined.
Different fields are important in various scientific contexts, for example, finite
fields consisting of 2,4,8,16 or in general 2𝑛 elements are important in digital
technology and theory of communication.
Allow me some remarks on the terminology and sources. Algebra seems
to be a discipline which provides a certain freedom for each author to
demonstrate his/her “mathematical culture”. Therefore, there exists a variety
of expositions of essentially the same basic facts. When I was first studying
linear algebra, the contemporary exposition of maps in terms of injection,
surjection and bijection was not yet accustomed. Honestly speaking, I had
some difficulties in translating old notions related to mappings into this new
(and in many situations very convenient) terminology. For instance, one can
see that a map 𝐹: 𝑈→𝑉 is an isomorphism when and only when it is a linear
bijection so that in the linear case the “old” notion of isomorphism and a
comparatively “new” notion of bijection are equivalent. Indeed, a linear map
𝐹: 𝑈→𝑉 is by definition an isomorphism from 𝑈 to 𝑉 if there is an inverse
map 𝐺: 𝑉→𝑈 such that 𝐹𝐺= 𝐸 and 𝐺𝐹= 𝐸. Thus, an isomorphism is a
linear map by definition and a bijection since its inverse does exist.
Conversely, if 𝐹 is a bijection, then there exists 𝐺: 𝑉→𝑈 (which need not be
necessarily linear) so that 𝐹𝐺= 𝐸 and 𝐺𝐹= 𝐸. If, additionally, map 𝐹 is
linear, then it is easy to prove that its inverse 𝐺 is also linear. Indeed, let us
take two elements 𝑣1, 𝑣2 ∈𝑉. Then since 𝐹 is a bijection and, consequently, a
130
Mathematical Potpourri
surjection, there exist such 𝑢1 and 𝑢2 that 𝑣1 = 𝐹(𝑢1) and 𝑣2 = 𝐹(𝑢2). Since
𝐹 is linear, we have 𝑣1 + 𝑣2 = 𝐹(𝑢1 + 𝑢2) and 𝐺(𝑣1 + 𝑣2) = 𝐺𝐹(𝑢1 + 𝑢2) =
𝑢1 + 𝑢2 = 𝐺𝐹(𝑢1) + 𝐺𝐹(𝑢2) = 𝐺(𝑣1) + 𝐺(𝑣2) . Therefore, an isomorphism
implies that the linear map 𝐹 is surjective and nonsingular. In fact, non-
singularity means that 𝐹 is an injection, i.e., its inverse contains not more than
one element. Here, I am using the standard convention: the simple right
arrow 𝑈→𝑉 denotes mapping from set 𝑈 to set 𝑉 whereas mapping of
point 𝑢 into point 𝑣 is usually denoted by the “↦” arrow.
Why is all this stuff important for physics? Primarily because we can in
practical calculations - physical modeling - identify any finite-dimensional
space 𝑈 with ℝ𝑛. More precisely, if 𝑈 is an n-dimensional vector space then
there exists an isomorphism 𝐹: 𝑈→ℝ𝑛. This fact enables us to easily change
spaces by specifying a map 𝐹 from one n-dimensional space 𝑈 to another n-
dimensional space 𝑉. Technically in practice this is done by fixing a basis and
writing vectors as n-tuples of numbers. A map to another space is then
defined by its action on the basis vectors. Let 𝑈 and 𝑉 be two finite-
dimensional vector spaces (𝑑𝑖𝑚𝑈= 𝑑𝑖𝑚𝑉= 𝑛) and let 𝛼= (𝛼1, … , 𝛼𝑛) be a
set of basis vectors that we have fixed in vector space 𝑈 and 𝛽= (𝛽1, … , 𝛽𝑛)
the basis in 𝑉. Then we have 𝐮= 𝑢𝑖𝛼𝑖 and 𝐯= 𝑣𝑖𝛽𝑖, with
𝑣𝑗= [𝐹(𝑢)]𝑗= [𝐹(𝑢𝑖𝛼𝑖)]𝑗= 𝑢𝑖[𝐹(𝛼𝑖)]𝑗
or 𝑣𝑗= 𝛼𝑖
𝑗𝑢𝑖 where 𝛼𝑖
𝑗= [𝐹(𝛼𝑖)]𝑗. It is important and highly convenient that
linear maps can be represented by matrices but for this, as we saw, one needs
to fix a basis (in the above example 𝛼= (𝛼1, … , 𝛼𝑛) in 𝑈= 𝑈𝑛). According to
the definition of a linear map 𝐹, the latter is determined by images 𝐹(𝛼) =
(𝐹(𝛼1), … , 𝐹(𝛼𝑛)). An inverse map is represented by the inverse matrix. See
in this connection, e.g., the classical book by P. K. Rashevski [154], where one
can find a lot of clearly explained details.68
We have just asked ourselves, what do we need all this algebraic stuff for?
Apart from playing with bases, which is one of the favorite games for
physicists and not particularly appreciated by mathematicians, elementary
algebraic notions are really needed to explore the quantum world (Chapter
6). The concept of a linear operator that plays a key role in quantum
mechanics, is conventionally defined as a linear map from a vector space 𝑉 to
itself. In case this map is an isomorphism, it is called (as well as an operator
on vector space 𝑉) an automorphism. A frequently used concept of the
general linear group of 𝑉, GL(𝑉) which will be from time to time encountered
in this text69, may be interpreted as merely the set of all automorphisms on 𝑉.
68 In spite of being very old and out of fashion, this is one of my favorite books. When
reading it, you get an impression of being in a good company, talking with a very clever
interlocutor - a feeling which never emerges when dealing with a computer program,
although one can also learn a lot in this latter case.
69 We shall deal with the general linear group especially in the context of Lie groups
which are of utmost importance to physics.
3.10
Vector Spaces
131
One can find an exhaustive treatment of the formal linear algebra in many
modern courses and books. Personally, I studied this subject using the book
by G. E. Shilov [153] which I found at that time excellent. There are of course
more modern textbooks, from which I especially like the one by A. I. Kostrikin
and Yu. I. Manin [155].
For completeness, let us now recall the notion of an additive group using
the form commonly given in relation to vector spaces. In principle, groups are
easier to understand not algebraically but geometrically, e.g., through
physical transformations such as motion. We shall exploit this geometrical
representation later in this book; now we shall stick to the more traditional
algebraic definitions. A group may be simply defined as a pair of a non-empty
set 𝑋 and a map ∗ (composition law, 𝑋∗𝑋→𝑋) such that this map is
associative, invertible, and possesses an identity element. An additive group
𝐺 is a set together with a map such that to each pair of elements 𝑥, 𝑦∈𝐺
corresponds a new element from 𝐺, denoted as 𝑥+ 𝑦, with the following rules
being valid:
1.
(𝑥+ 𝑦) + 𝑧= 𝑥+ (𝑦+ 𝑧) (which means that operation “+” is
associative)
2.
𝑥+ 𝑦= 𝑦+ 𝑥 (commutativity of the group, which is a very strong
requirement)
3.
There exists an element 0 ∈𝐺 such that 𝑥+ 0 = 𝑥 for all 𝑥∈𝐺
4.
For any 𝑥 there is 𝑦 such that 𝑥+ 𝑦= 0 (existence of the inverse
element)
This short list of axioms is usually supplemented by some simple
consequences (see any textbook on algebra). These consequences look very
much like simple exercises, but algebra teachers are fond of them considering
them good logical exercises (at least such was my university experience). The
primary consequence is that there may be only one 0 element. Indeed,
assume that there are two zeros, 01 and 02, then we would have 𝑥+ 01 = 𝑥
and 𝑥+ 02 = 𝑥 for all 𝑥∈𝐺. By putting, e.g., 𝑥= 01, we get 01 = 01 + 02 =
02 + 01 = 02. The second consequence of axioms is usually formulated as
“uniqueness”: for any pair 𝑥, 𝑦∈𝐺 there exists only one 𝑧∈𝐺 such that 𝑥+
𝑧= 𝑦. This property also can be easily proved: according to axiom 4, to each
𝑥 we can find 𝑥′ so that 𝑥+ 𝑥′ = 0. Then for 𝑧≔𝑥′ + 𝑦 we have
𝑥+ 𝑧= 𝑥+ (𝑥′ + 𝑦) = (𝑥+ 𝑥′) + 𝑦= 0 + 𝑦= 𝑦+ 0 = 𝑦
A variation of this “uniqueness” statement: let 𝑥+ 𝑧= 𝑦 and 𝑥+ 𝑤= 𝑦.
Then 𝑧= 𝑤. Indeed,
𝑧= (𝑥+ 𝑥′) + 𝑧= 𝑥′ + (𝑥+ 𝑧) = 𝑥′ + 𝑦= 𝑥′ + (𝑥+ 𝑤) = (𝑥′ + 𝑥) + 𝑤
= 0 + 𝑤= 𝑤
132
Mathematical Potpourri
The inverse element 𝑦 in Axiom 4 is determined by 𝑥, 𝑦(𝑥). It is denoted
as −𝑥, which is a kind of a mathematical standard. Like any good standard
(and standards are not necessarily good, an example of a bad standard is the
SI system of units), this one makes life easier. So instead of 𝑥+ (−𝑦) we can
simply write 𝑥−𝑦, which is interpreted as the difference between elements
𝑥 and 𝑦.
A remark on zero. Strictly speaking, there are two zeros in vector spaces
- one (denoted as 0) is inherited from the number system (field), the other
(denoted as 𝟎) from the vector space 𝑉 itself. However, it is easy to see that
0𝑥= 𝟎 for all elements 𝑥∈𝑉. Indeed, 0𝑥+ 𝑥= 0𝑥+ 1𝑥= (0 + 1)𝑥= 𝑥,
which exactly means that 0𝑥= 𝟎. As far as the inverse element in 𝑉 goes, one
can see that (−1)𝑥= −𝑥 for any element 𝑥∈𝑉. Indeed, −𝑥 is the only
element from 𝑉 with the property 𝑥+ (−𝑥) = 0. But 𝑥+ (−)𝑥= (1 −1)𝑥=
0𝑥= 𝟎, thus (−)𝑥= −𝑥. Thus, we can disregard the logical disparity of zeros
from different algebraic structures and always write only 0.
Let us now give some illustrations to this very simple though slightly
abstract algebraic theory. Sets (fields) ℝ and ℂ are examples of an additive
group 𝐺 with respect to ordinary addition. Yet the group operation “+” is, of
course, not always the ordinary addition. Let us now look at a simple but less
common example, the even-odd algebra 𝐴 with a binary operation {, }, 𝐴=
{𝑒, 𝑜}: 𝑒+ 𝑒= 𝑒, 𝑒+ 𝑜= 𝑜, 𝑜+ 𝑜= 𝑒(𝑒−even, 𝑜−odd) . Here the role of
zero is played by 𝑒 and −𝑥= 𝑥 for all elements 𝑥. We can generalize this
algebra by introducing a cycle. Take any integer 𝑛> 1 and the set 𝐴=
{0,1,2, … , 𝑛−1} with the binary operation
𝑥⊕𝑦= { 𝑥+ 𝑦,
if 𝑥+ 𝑦< 𝑚,
𝑥+ 𝑦−𝑛,
if 𝑥+ 𝑦≥𝑚.
where “+” denotes the usual addition of numbers. Here zero is the usual
0 and the inverse element ⊝𝑥 equals 𝑛−𝑥 if 0 < 𝑥< 𝑛 and 0 if 𝑥= 0. This
construct leads to the concept of point groups which are very useful in the
classification of molecular structures (see [84], §93).
There are several other standard examples of real vector spaces (see, e.g.,
[153]) such as the vector space of all n-tuples (𝑥1, … , 𝑥𝑛) of real numbers with
component-like addition. If one writes such n-tuples as vector columns, then
the vector space property becomes quite obvious since
(
𝑥1
⋮
𝑥𝑛
) + (
𝑦1
⋮
𝑦𝑛
) = (
𝑥1 + 𝑦1
⋮
𝑥𝑛+ 𝑦𝑛
)
and
𝜆(
𝑥1
⋮
𝑥𝑛
) = (
𝜆𝑥1
⋮
𝜆𝑥𝑛
)
3.10
Vector Spaces
133
It is this vector space that is denoted as ℝ𝑛. By the way, the rule of
parallelogram and multiplication (scaling) by a scalar familiar to us from the
schooltime physics are just a geometric equivalent of these two rules. In
principle, any field can be used for coordinates in geometry, with properties
of the field being reflected in the geometry features 70 . Other standard
examples are the vector spaces of all single-variable polynomials and of all
real-valued continuous functions with compact support, e.g., defined on [0,1].
One might notice that it is possible to consider other structures in 𝑉=
ℝ𝑛, not only the vector space. For example, in classical mechanics such
geometric structures as the Euclidean space 𝔼𝑛 and affine space 𝔸𝑛 and of
course symplectic manifolds are of essential importance. The Euclidean 3d
space of classical physics 𝔼3 admits a global Cartesian system i.e., defined on
the entire space. Writing physical quantities in the Cartesian coordinates is
very convenient unless the studied phenomenon has some complex
symmetry: in such cases the equations written in Cartesian coordinates may
become rather involved and the transition to curvilinear coordinates
consistent with the symmetry type of the problem is advantageous (the
hydrogen atom is an archetypal example, see [84], 36,37).
The simplest example of a vector space is probably the straight line,
which represents a one-dimensional vector space uniquely determined by
any non-zero vector. Another simple example of a vector space is the familiar
set of complex numbers. Here the basis consists of two elements (1, i), with
the generating operations being addition and multiplication by real numbers.
Thus, we can easily produce the representation of complex numbers by 2 × 2
matrices with real entries, i.e. of the form
𝑍= (𝑎
−𝑏
𝑏
𝑎)
One can show that matrices of such type form a field, which is
isomorphous to the complex field. In other words, the set of these Z-matrices
is closed with respect to addition, subtraction, multiplication and division
(except by zero). Moreover, all the rules familiar from the school-years
arithmetic such as associativity, commutativity and distributivity are valid.
Indeed, we can represent every Z-matrix as the sum
𝑍= 𝑎(1
0
0
1) + 𝑏(0
−1
1
0 ) = 𝑎𝐸+ 𝑏𝐼
where 𝐸 is the unit (identity) matrix and the symplectic matrix 𝐼 can be
identified with the imaginary number 𝑖, 𝑖2 = −1 (one can easily verify that
𝐼2 = −1). One can see that 2 × 2 Z-matrices are fully defined by ordered pairs
(𝑎, 𝑏) of real numbers, with the pair (𝑎, 0) being itself a real number 𝑎. The
matrix 𝐼 transforms any plane vector (𝑥, 𝑦)𝑇 into (−𝑦, 𝑥)𝑇, i.e., rotates the
70 This material is usually discussed in modern geometry courses, see e.g., the Pappus’s
theorem and the Desargues’s theorem.
134
Mathematical Potpourri
vector (𝑥, 𝑦)𝑇 counterclockwise just like the complex operator 𝑖= exp(𝑖𝜋/2).
If one inverts a 𝑍-matrix, one gets again a matrix of the same class. Indeed, for
any invertible 2 × 2 matrix
𝐴= (𝛼
𝛽
𝛾
𝛿) , det𝐴≠0
the inverse is
𝐴−1 =
1
det𝐴( 𝛿
−𝛽
−𝛾
𝛼)
Our matrix 𝑍 is obtained from the generic 𝐴-matrix by putting 𝛼= 𝛿=
𝑎, −𝛽= 𝛾= 𝑏. Then
𝐴−1 =
1
𝑎2 + 𝑏2 ( 𝑎
𝑏
−𝑏
𝑎)
is of the same type as 𝐴, i.e.
𝐴−1 =
𝑎
𝑎2 + 𝑏2 𝐸+
𝑏
𝑎2 + 𝑏2 (−𝐼)
and corresponds to 𝑍−1 =
1
⌊𝑧⌋2 (𝑎−𝑖𝑏) if 𝑍= 𝑎+ 𝑖𝑏. Here we denote ⌊𝑧⌋2 =
𝑎2 + 𝑏2. The operator 𝑍−1 implements the clockwise rotation of a plane
vector (𝑥, 𝑦)𝑇.
The algebra of Z-matrices is closed under ordinary operations
characterizing the mathematical field, addition and multiplication. It is easy
to verify that sums and products of Z-matrices are also of the same type.
Complex conjugation corresponds to transposition of Z-matrices, which is not
a linear operation. It is actually the antilinear operation, and in quantum
mechanics, for example, it manifests the time inversion (see Chapters 5, 9).
The conjugate 𝑍∗= (𝑎, 𝑏)∗ of 𝑍 is given by
𝑍∗= (𝑎
−𝑏
𝑏
𝑎) = (𝑎, −𝑏)
with 𝑍∗𝑍= ⌊𝑧⌋2(𝐸+ 0𝐼) = (𝑎2 + 𝑏2, 0).
One can define a norm with the help of the operation of conjugation, since
the quantity 𝑎2 + 𝑏2 is always a non-negative real number being zero if and
only if 𝑎= 𝑏= 0. Therefore, matrices representing complex numbers form a
two-dimensional real normed vector space, which is reflected in the above
matrix operations. Due to isomorphism, the field of complex numbers ℂ may
be also regarded as a vector space. It is, however, easier to operate with
complex numbers than with their matrix representation. Nevertheless, the
latter may be interesting from the theoretical viewpoint. We have seen that
matrices representing complex numbers are determined by two independent
3.11
Basis Vectors
135
real parameters 𝑎, 𝑏. Thus, one can generalize the matrix construction for
complex numbers by taking matrix elements not necessarily from the real
field. For example, in case these matrix elements are complex numbers, one
arrives at quaternion algebra. Or the Z-matrix entries may be themselves
matrices, then we obtain a sequence of algebras.
One might ask, why are the mathematicians so fond of abstract vector
spaces, would not it be enough to work only with simple ℝ𝑛? The confusing
point here is that one can find in many algebra textbooks every n-dimensional
vector space 𝑉𝑛 is isomorphic to ℝ𝑛. However, this isomorphism is, in the
mathematical terminology, not canonical - simply speaking it depends on the
choice of the basis. Therefore, modern mathematicians and mathematical
physicists prefer using abstract vector spaces instead of ℝ𝑛 because they
consider it desirable to operate quantities in a coordinate-free manner,
without choosing some arbitrary basis. As a matter of fact, this is a well-
grounded reasoning since many vector spaces naturally arising in
contemporary physics do not have a hint on a preferred choice of a basis.
Nevertheless, the bases in vector spaces are of utmost importance not only in
physics but also to building mathematical models in different applied areas,
in computer science, etc., therefore we shall devote some attention to their
selection.
3.11 Basis Vectors
In the preceding section, we had an example of the homogeneous system of
linear equations. The set of solutions of such a system (expressed as n-tuple,
(𝑥1, … , 𝑥𝑛)𝑇) gives an example of a subspace 𝑈 of ℝ𝑛. In general, a subspace
𝑈⊂𝑉, where 𝑉 is the vector space, is called the vector subspace, if for 𝑥, 𝑦∈
𝑈 also their sum 𝑥+ 𝑦∈𝑈 and also for 𝑥∈𝑈 and any 𝜆∈ℝ, 𝜆𝑥∈𝑈. For
instance, all linear combinations, viz. vectors of the form 𝐮𝑘= 𝛼𝑘
𝑗𝐯𝑗, 𝛼𝑘
𝑗∈
ℝ, 𝐯𝑗∈𝑉, 𝑗= 1, … , 𝑛; 𝑘= 1, … , 𝑚, comprise a subspace of 𝑉, sometimes
denoted as 〈𝐯1, … , 𝐯𝑛〉 and called span (denoted also by Sp(𝐯1, … , 𝐯𝑛)). Span
is in some sense the smallest subspace of 𝑉 containing 𝐯1, … , 𝐯𝑛. To see
whether a set of vectors spans a vector space, one must check that there are
at least as many linearly independent vectors as the dimension of the space.
It is clear that if the 𝐮𝑘 are linearly independent, then 𝑚≤𝑛. For example, in
the n-dimensional space ℝ𝑛, 𝑛+ 1 vectors are never linearly independent,
and 𝑛−1 vectors never span. It follows from here that each vector space 𝑉
contains a finite subspace 𝑉𝑛 of linearly independent vectors, namely of
dimension n. In other words, there exists a number 𝑛∈ℕ so that any 𝑛+ 1
vectors are linearly dependent. If we neglect some mathematical pedantry,
then any linearly independent system of 𝑛 vectors in 𝑉𝑛 may be used to
generate all other elements of 𝑉𝑛 and then it is called “basis”. Thus, a basis of
a vector space is a linearly independent spanning set. In particular for 𝑉𝑛=
ℝ𝑛, any linearly independent system of 𝑛 vectors in 𝑉𝑛 may be used to
generate all other elements of 𝑉𝑛71. The term “generate” means in this context
71 In general, in abstract algebra where the concept of span is defined as a module M over a
ring R, a linearly independent subset which generates the whole module may not necessarily
136
Mathematical Potpourri
that each vector 𝐱∈𝑉𝑛 may be represented as a linear combination 𝐱=
𝑎𝑖𝐮𝑖, 𝑖= 1, … , 𝑛. It is clear that all bases in 𝑉𝑛 have the same number of
vectors, namely n72. Indeed, if we have two bases, 𝐞1, … , 𝐞𝑚 and 𝐟1, … , 𝐟𝑛, then
we may take 〈𝐟1, … , 𝐟𝑛〉 to be a span and 𝐞1, … , 𝐞𝑚 a system of linearly
independent vectors. Then it follows from the above considerations that 𝑚≤
𝑛. But of course, we may exchange these two systems of vectors with the
consequence that 𝑛≤𝑚 - the argumentation is symmetric with respect to 𝑚
and 𝑛. Putting it all together, we get 𝑚= 𝑛. It is the common length 𝑛 of all
bases in 𝑉 that is called the dimension of 𝑉, n=dim𝑉𝑛.
One can define a basis 𝐵 in the vector space 𝑉 defined as a subset 𝐵⊂𝑉
of linearly independent vectors73. Then the set of all linear combinations of 𝐵
(the linear hull) coincides with 𝑉. We have just seen that if 𝐵 consists of a
finite number 𝑛 of vectors, then 𝑉 is finite-dimensional (dim𝑉= 𝑛). The
standard basis for 𝑉= ℝ𝑛 is conventionally the set of n vectors 𝐞1, … , 𝐞𝑛, 𝐞𝑖=
{0, … 𝑖= 1, … , 𝑛} (i.e., 1 stands in the i-th place). It seems to be clear that if 𝐵
is a basis in 𝑉, then no subset of 𝐵 (with the trivial exception of 𝐵 itself) may
also be a basis in 𝑉.
It has already been mentioned that there exist vector spaces that do not
possess a finite system of vectors with the help of which one can generate all
other vectors belonging to the same vector space. In other words, one cannot
find a natural number 𝑛∈ℕ so that any 𝑛+ 1 vectors from 𝑉 are linearly
dependent. In this more complicated case, vector space 𝑉 is called infinite
dimensional. In other words, a vector space is infinite-dimensional if it does
not have a finite basis 74. For instance, the vector space of all polynomials is
infinite-dimensional since vectors (functions) 1, 𝑡, 𝑡2, … , 𝑡𝑛 are linearly
independent for each n. The same applies to the vector space of all continuous
real-valued functions on [0,1] - this vector space is infinite-dimensional.
On the other hand, ℝ𝑛 is finite-dimensional, dimℝ𝑛 since the unit vectors
𝐞1 = (
1
0
⋮
0
) , 𝐞2 = (
0
2
⋮
0
) , … 𝐞𝑛= (
0
0
⋮
1
)
are linearly independent and form the basis in ℝ𝑛. As a simple but important
example, one may consider the vector space 𝑉 of all differentiable functions
𝑓= 𝑓(𝑥) on ℝ with the property 𝑓′ = 𝑓. This property may serve as one of
the definitions for the exponent, 𝑓(𝑥) = exp(𝑥), and being its own derivative
exist. In other words, even if module M is generated by some n elements, it is not necessarily true
that any other set of n linearly independent elements of M spans the entire module M. In effect,
this means that a basis may not always exist. The example usually produced in higher algebra in
relation to this statement is that 𝕫 is generated by 1 as a 𝕫-module but not by 2.
72 One must note that it is only in finite-dimensional vector spaces, where any vector may be
represented as the finite linear combination, that the number of basis elements does not depend
on basis. If the vector space is infinite-dimensional, dim𝑉= ∞, then, although it possesses a basis,
the latter is always infinite and, consequently, one cannot compare the number of basis elements.
73 One colloquially says that B is linearly independent.
74 And is not zero-dimensional i.e., does not consist only of a zero, which is a trivial zero-space.
3.11
Basis Vectors
137
selects the exponent as the only function having this property. Another way
to express it is to state that we have here a fixed point of the differential
operator (regarded as a functional). The exponential function is an element of
𝑉 and is non-zero, which means that dim𝑉≤1. Let us now consider some
other element of 𝑉, 𝑢∈𝑉. From the differentiation formula and the property
(. )′ = (. ), we get
(𝑢
𝑓)
′
= 𝑓𝑢′ −𝑢𝑓′
𝑓2
,
therefore 𝑢= 𝑐𝑓, where 𝑐= 𝑐𝑜𝑛𝑠𝑡 and dim𝑉= 1.
Physicists and mathematicians are forced to systematically visualize
objects in four, five or even infinite-dimensional spaces. Thus, thinking over
other dimensions led B. Riemann in 1854 to getting out of the Euclidean
geometry and showing that space can be curved - nowadays a trivial notion,
but not at all in Riemann’s time75. General relativity operates on curved spaces
and perhaps our universe is a kind of a 3D spherical surface - a closed three-
dimensional manifold carrying a metric of positive Ricci curvature. These
concepts are closely connected with modern mathematical topics such as
Einstein manifolds, Ricci flow, and G. Perelman’s famous proof of the so-called
Poincaré conjecture [159]. The new physics implies that our 3+1 dimensional
space is just what we see in our primitive everyday dullness, boring low-
energy province; in fact our world has 10, 11, 26 or some other number of
dimensions (so far undetermined). Anyway, one has to learn to deal with any
number of dimensions, even infinite as in quantum mechanics, and the first
step in this direction is working with vector spaces.
Allow me to make some general observations here that might look as
distractions. One usually extends and applies well-known ideas from finite
dimensional linear algebra to infinite dimensions. This motivates us to carry
out the difficult calculations whose purpose is to connect various models and
cross-link different levels of abstraction in the studied problem. Even though
the basic ideas and mathematical structure have been developed in linear
algebra, the motivation for using them stems from physics and engineering.
In particular, one can recall multidimensional mathematical tools for
electromagnetics and vibration theory for engineers and quantum mechanics
for physicists. This may be nontrivial mathematics from a cross-disciplinary
perspective. Furthermore, the main ideas of linear mathematics in infinite
dimensions are already present in waves, signals, and fields, even in those
whose domains are one-dimensional. The transition to higher-dimensional
domains comes rather smoothly once the one-dimensional models have been
75 Bernhardt Riemann lived a rather short life, 1826 - 1866, but his impact is enormous. Some
people consider that Riemann geometry is hopelessly outdated, I think they are hopelessly
wrong. I do not know which name is more suitable for the famous equations in the theory of
analytical functions - the Cauchy-Riemann equations or D’Alembert-Euler equations, but it does
not diminish Riemann’s contribution to this beautiful theory, extremely powerful in physical
applications. And without Riemann surfaces, one would be totally confused when dealing with
multivalued functions.
138
Mathematical Potpourri
explored. Interestingly enough, the transition to higher dimensions leads to
surprising new features such as the appearance of special functions - envoys
of the world of symmetries. The very existence and properties of special
functions are a consequence of the group properties [70]. For instance, the
group of motions of the Euclidean plane leads to the families of Bessel
functions, the rotation group of the Euclidean three-dimensional space
produces the Legendre and Jacobi polynomials. In physical terms, the
symmetry properties giving birth to these classes of special functions are
mostly interpreted as the required invariance under translations and
rotations of measured distances and of the shapes of propagating waves.
More complicated groups such as those of motion of an n-dimensional
Euclidean space, n-dimensional sphere or n-dimensional Lobachevsky space
lead to some new classes of polynomials, not necessarily reduced to special
functions generated by the motions in lower-dimensional spaces. This reflects
the general observation: increased dimensionality has its own specifics.
We shall return to a discussion of special functions several times in this
book; I think this is a fascinating subject although many people consider it
mundane and boring. In fact, special functions may always be considered as
realizations of some basis in an appropriate functional space - exactly in the
sense of linear algebra - and thus they connect three disciplines: algebra,
geometry and physics.
As for bases in general, there exists a mathematical recommendation
(largely ignored by physicists): use coordinate systems only when needed. Of
course, one has to introduce a specific coordinate system to perform
computations to the end, but what is meant by mathematicians in the above
maxim is that one should resist the temptation to fix an arbitrary basis until
it is absolutely necessary. What is a basis? The word “basis” is often used both
in physics and mathematics, but I don’t think there exists a fully satisfactory
general definition (we have seen that such a situation is not infrequent in
mathematics). We can use the following description as a substitution for the
lacking definition. One usually understands a basis for a set 𝑋 as some subset
𝐴⊂𝑋 that can generate 𝑋. What does “generate” mean? Here this term means
that it is possible to get any element 𝑥∈𝑋 by applying operations from some
class 𝑈 to elements 𝑎∈𝐴, i.e., 𝑥= 𝑈𝑎 for any 𝑥∈𝑋. The basis is typically
thought of as a minimal subset 𝐴, with which it is possible to generate the
whole set 𝑋. The term “minimal” in this context means that no subset of 𝐴 can
generate 𝑋, i.e., ∃𝐴̃ ⊂𝐴 such that 𝑥= 𝑈𝑎,̃ 𝑎,̃ ∈𝐴̃, 𝑥∈𝑋. One also implies that
all elements 𝑎 from the subset 𝐴 of 𝑋 are independent, i.e., none of them can
be obtained by means of operations 𝑈 applied to other elements from this
subset. Contrariwise, other elements from 𝑋/𝑌 become dependent on
elements 𝑎∈𝐴 via operations 𝑈.
It is curious that there seems to be sort of a fight (reminding us of
Jonathan Swift) among mathematicians, at least at the folklore (not textbook)
level. Some of them argue that metric spaces should be given a priority,
whereas others state that the concept of metric spaces is an old-fashioned
framework, and what people need is normed spaces or even more - the
normed algebras. Besides, one may notice that both mathematicians and
3.12
Dual Spaces
139
physicists are sometimes quite loose with terminology - we shall find some
examples of this looseness below.
3.12 Dual Spaces
Dual spaces is an extremely important concept for physics, although many
physicists do not use it in their everyday language. I shall try to demonstrate
the importance of dual spaces on some examples. Please be lenient if some
constructions will lack rigor and completeness.
Usually, dual vector spaces are defined through linear functions76 on a
vector space 𝑉. Recall that a map 𝐿, for instance, between two vector spaces
𝑉1 and 𝑉2 is called linear if it satisfies the relationship 𝐿(𝑎1𝑥1 + 𝑎2𝑥2) =
𝑎1𝐿(𝑥1) + 𝑎2𝐿(𝑥2) for any 𝑎1, 𝑎2 ∈𝐾 where 𝐾 is the field, usually understood
as the field of scalars (please do not confuse with the scalar field in physics
which we shall study in Chapters 4 and 9). Any linear map 𝐿: 𝑉1 →𝑉2 (we shall
deal exclusively with the fields 𝐾 of real or complex numbers that is 𝐾= ℝ, ℂ)
may be written as a linear combination 𝐋= 𝑙𝑖𝐞𝑖 where coefficients 𝑙𝑖, 𝑖=
1,2 … 𝑛 are real or complex numbers. It is clear that a linear combination of
linear functions makes a linear function, therefore the set of linear functions
also forms a vector space and this linear vector space is called the dual space
𝑉∗ to 𝑉 . In modern mathematics one usually calls the duality operation (∗) a
“functor”, more exactly it is an example of a functor.
To write out the dot product of two vectors 𝐚, 𝐛 in terms of abstract linear
functions or linear functionals is probably not the most convenient way. To
make the expressions less abstract one usually introduces two sets of basis
vectors, sometimes called the “direct” and the “dual” basis vectors that satisfy
the relations 𝐞𝑖𝐞∗𝑗= 𝛿𝑖
𝑗. Since the “dual basis” vectors also form a basis, one
can write any vector as a linear combination of them. For example, we can
rewrite the vector 𝐛∈𝑉∗ as 𝐛= 𝑏𝑖𝐞𝑖 and vector 𝐚∈𝑉 as 𝐚= 𝑎𝑖𝐞𝑗. This
allows us to write the dot product of 𝒂 and 𝒃 in a much simpler way:
(𝐚, 𝐛) = 𝑎𝑖𝑏𝑗(𝐞𝑖, 𝐞𝑗) = 𝑎𝑖𝑏𝑖= 𝑏𝑖𝑎𝑖= (𝑏1
⋯
𝑏𝑛) (
𝑎1
⋮
𝑎𝑛
) = (𝐛, 𝐚) = 𝐛𝐚= 𝐚𝐛
= 𝑎𝑖𝑏𝑖= (𝑎1
⋯
𝑎𝑛) (
𝑏1
⋮
𝑏𝑛
)
One can notice, by the way, that the scalar product of basis vectors (𝐞𝑖, 𝐞𝑗)
serves as a tool for raising and lowering indices. We shall see later that the
scalar product of basis vectors can be generalized to the so-called metric
tensor. In the case of dual spaces, the formula for a scalar product is fully
analogous to that for the Euclidean space, the difference is that such a formula
connects vectors living in different spaces.
One should remember that it is only in the simplest case that a dual vector
space is defined by the available basis. If we managed to fix a basis in a vector
76 These constructions are also called linear maps or linear functionals.
140
Mathematical Potpourri
space 𝑉, with a set of basis vectors 𝐞𝑖, we may define the dual space 𝑉∗ as the
vector space spanned by the basis vectors 𝐞∗𝑗 with the requirement 𝐞𝑖𝐞∗𝑗=
𝛿𝑖
𝑗 where the right-hand side is the Kronecker symbol, i.e., the dot product of
the basis vectors is zero for i not equal to j77.
This construction means that if 𝑉 is finite-dimensional, then 𝑉∗ has the
same dimension as 𝑉. The mathematicians often say that each 𝐞∗𝑗 represents
a scalar linear map on the set of basis vectors 𝐞𝑖. In the language of tensor
analysis and tensor geometry, elements of vector space 𝑉 are called
contravariant vectors and elements of the dual space 𝑉∗ are called covariant
vectors or covectors. In the coordinate-free language of differential forms,
these dual space elements are called one-forms (see below a discussion of
differential forms). One-form (or 1-form) is just another name for a covector.
The set of 𝐞∗𝑗 provides a basis for linear functions (maps) 𝐿, and we can
represent any linear map on 𝑉 in the form 𝐋= 𝑙𝑗𝐞∗𝑗 L where 𝑙𝑗 are real (or
complex) coefficients 78. A linear map in this context may be called a dual
vector.
So, we see that dual vectors are important spin-offs of conventional
vectors i.e., dual spaces are spin-offs of vector spaces. As dual vectors when
multiplied by their corresponding vector produce a number, one may notice
a peculiar feature of duals: if the vector space - the space a vector lives in - is
diminished, a conventional (contravariant) vector shrinks whereas a dual
(covariant) vector, roughly speaking, becomes larger.
Let us, as an example of usage of dual spaces, consider the special case
interesting for classical mechanics and dynamic systems theory, 𝑉=
𝑇𝑞𝑀, 𝑉∗= 𝑇𝑞
∗𝑀. These reciprocally dual spaces are called tangent and
cotangent spaces adjoining to each point 𝑞 of the configuration manifold 𝑀.
We shall more or less thoroughly discuss these constructions in the next
chapter, in connection with Lagrangian mechanics. It will be clear that 𝑇𝑞𝑀 is
the space of directional derivatives, which are actually operators acting on
functions defined on the manifold 𝑀. We may choose a basis for the tangent
space 𝑇𝑞𝑀, for instance, as the set of partial derivatives, 𝑒𝑖=
𝜕
𝜕𝑥𝑖. Note that
here basis vectors are differential operators. Let us now introduce the dual
basis using the above definition, 𝐞𝑖𝐞∗𝑗= 𝛿𝑖
𝑗, 𝑖, 𝑗= 1,2 … 𝑛 (it is not necessary
to mark up vectors with boldface; later I shall omit the asterisk denoting the
dual basis vectors, leaving only contravariant index does not result in any
ambiguity). One can easily guess that the dual basis is 𝑒𝑗= 𝑑𝑥𝑗 so that
𝜕𝑖𝑑𝑥𝑗= 𝛿𝑖
𝑗, (𝜕≡
𝜕
𝜕𝑥𝑖). This choice of basis allows us to write any vector
77 It is customary today to denote the dual basis vectors with an asterisk. Previously, the dual
basis was designated only by upper indices. I shall use both notations depending on the context,
unless it produces a hopeless clarity violation.
78 In physics, one is mainly interested in 𝐾= ℝ, 𝐾= ℂ. There exist, of course, both algebraic
and geometric extensions of the real and complex number systems (e.g., with many roots of −1).
Yet, despite the existence of quaternionic quantum mechanics [86] and some geometric algebra
applications [115], I am unaware of unbeatably successful attempts in physics to identify the
scalars with anything different from the real or complex numbers.
3.12
Dual Spaces
141
𝑢(𝑞) ∈𝑇𝑞𝑀 as 𝑢= 𝑢𝑖𝜕𝑖 and any dual vector 𝑣(𝑞) as 𝑣= 𝑣𝑗𝑑𝑥𝑗. Then we get a
general expression for the linear map (functional) 𝐿 acting, e.g., on 𝑢:
𝐿𝑢≡𝐿[𝑢] = 𝑙𝑗𝑑𝑥𝑗[𝑢𝑖𝜕𝑖] = 𝑙𝑗𝑢𝑖𝛿𝑖
𝑗= 𝑙𝑗𝑢𝑗
(3.5)
It is this linear functional representing a dual vector in 𝑇𝑞
∗𝑀 that is called
a 1-form (see above). We see that there is a well-defined field of 1-forms on
every cotangent space 𝑇𝑞
∗𝑀 (and on every cotangent bundle 𝑇∗𝑀). It is
important to note that this linear functional acting on each tangent vector
takes it to the 2n-dimensional manifold 𝑇∗𝑀 and not to 𝑀. We shall discuss
this point in some detail in connection with Lagrangian mechanics.
If the basis has been fixed, we can associate mutually dual vectors by their
components, 𝑢𝑖↔𝑣𝑗. This is an important trick in solid state physics (see
below). However, this association is not, as mathematicians use to say, a
natural mapping between two sets of components and, therefore, between
two spaces. This mild mathematical objection is based on the fact that
components are mutable, they change when we transform the coordinate
system - and we always have the right to do it for our convenience. This
volatility of components of a vector was one of the main motivations for
mathematicians to use the differential form calculus instead of “old-
fashioned” coordinate formulation usually preferred by physicists. Today,
differential forms are gradually penetrating into many areas of physics, at
least one can observe that they have become the dominating tool in
Hamiltonian mechanics, in the physical part of differential geometry and even
in field theory.
The concept of dual spaces seems to be a fair platform to start studying
differential forms, since, roughly speaking, differential forms are dual to
vectors. However, before we proceed a bit later to discussing differential
forms, their formal definitions and how they are used in physics, I shall review
some preparatory points addressing once again the ubiquitous examples of
tangent and cotangent spaces of Lagrangian mechanics. So, we have selected
the basis 𝜕𝑖 in the tangent space 𝑇𝑞𝑀 and the basis 𝑑𝑥𝑗 in its dual, cotangent
space 𝑇𝑞
∗𝑀. Actually, one can define the cotangent space through its basis 𝑑𝑥𝑗.
One might be astonished by this latter notation, since the quantity 𝑑𝑥𝑗 is
extensively used in classical calculus and integration. Is such a coincidence a
symptom of negligence or is it due to a lack of symbols, frequently
encountered in physical and mathematical texts? Neither the first nor the
second. There exists a deep connection between classical calculus and more
modern79 techniques of differential forms. To feel this connection, we may do
the following. Let us take a differentiable function 𝑓: 𝑀→ℝ 80 defined on a
manifold 𝑀. Now let us define some special element 𝑣(𝑓) ∈𝑇𝑞
∗𝑀: 𝑣(𝑓) =
𝜕𝑗𝑓𝑑𝑥𝑗 or, in terms of ordinary vector analysis, 𝑣(𝑓) = ∇𝑓𝑑𝐫. Now if we apply
𝑣(𝑓), regarded as an operator, to 𝑢∈𝑇𝑀
79 By “modern” one can understand methods developed over last hundred years.
80 An exact meaning of the word “differentiable” is not essential here.
142
Mathematical Potpourri
𝑣(𝑓)[𝑢𝑖𝜕𝑖] = 𝜕𝑗𝑓𝑑𝑥𝑗[𝑢𝑖𝜕𝑖] = 𝑢𝑖𝛿𝑖
𝑗𝜕𝑗𝑓= 𝑢𝑗𝜕𝑗𝑓
(3.6)
That is by acting with 𝑣(𝑓) from the cotangent space on element u from
the tangent space we get, due to
𝐿(𝑓) ≡𝑣(𝑓)[𝑢𝑖𝜕𝑖] = 𝑢𝑗𝜕𝑗𝑓= 𝑢(𝑓)
the directional derivative of 𝑓 in the direction of 𝑢. If we denote, in the spirit
of differential forms, 𝑣(𝑓) ≔𝑑𝑓, we shall have81 𝑑𝑓= 𝜕𝑗𝑓𝑑𝑥𝑗 and 𝑑𝑓[𝑢] =
𝑢(𝑓). In general, any d-operator in 𝑇𝑞
∗𝑀 can be written as 𝑑= 𝜕𝑗𝑑𝑥𝑗, and we
find a general formula for 𝜕 acting on 𝑢:
𝜕[𝑢] = 𝜕𝑗𝑑𝑥𝑗[𝑢𝑖𝜕𝑖] = 𝜕𝑗𝑢𝑖𝛿𝑖
𝑗= 𝜕𝑖𝑢𝑖
These exercises with linear maps in dual spaces may appear trivial and
not deserving the space devoted to them, yet these forms have some hidden
profound meaning, which will be gradually unfolded in our studies of
mechanics and geometrical aspects of the field theory.
So far, to exemplify the relation between dual spaces 𝑇𝑞𝑀 and 𝑇𝑞
∗𝑀 we
have used some unspecified differentiable function 𝑓. Let us now take the
simple specific case of 𝑓= 𝑥𝑗. Then
𝑑𝑓= 𝑑(𝑥𝑗) = 𝜕𝑖𝑥𝑗𝑑𝑥𝑖= 𝑣(𝑥𝑗)𝑑𝑥𝑖= 𝛿𝑖
𝑗𝑑𝑥𝑖= 𝑑𝑥𝑗
So 𝑑(𝑥𝑗) = 𝑑𝑥𝑗, which expression connects “new” notations with “old”.
The meaning of “new” notation 𝑑(𝑥) invokes the concept of exterior
derivative, whereas the “old” notation 𝑑𝑥 traditionally understood as a small
increment of the quantity 𝑥 is in fact an element of the dual space 𝑇𝑞
∗𝑀 - a
linear map from vectors to numbers (linear functional). The old derivative
studied in classical calculus and vector analysis as well as vector operations
such as gradient, rotor (curl) and divergence are specific cases of the exterior
divergence. The latter is an operator that maps an n-form 𝜃 defined on a
manifold M to an (n+1)-form 𝑑𝜃, which is also defined on the same manifold.
The notion of exterior derivative may be regarded as a generalization of the
traditional derivative introduced first by Newton and Leibniz and then
substantiated in classical calculus. The ordinary classical derivative maps a
scalar function (scalar field), which is in this parlance a 0-form, to 1-form 𝑑𝑓.
See more on derivatives in section “Notes on Derivatives” below.
Let us now return to our discussion of dual spaces. We could have felt that
it might be very convenient to know whether a quantity belongs to the main
or base space such as, for example, a velocity-like and displacement-like
81 in the mathematical texts on differential forms such linear forms are customarily denoted
by ω or θ.
3.13
Some Remarks on Indices
143
vector in mechanics or to its dual such as momentum or wave vector, the first
being vectors whereas the second are covectors.
Thus, if we have fixed a set of basis vectors, its corresponding dual set can
be determined by a simple procedure. Just write a matrix whose columns are
the Cartesian coordinates of the chosen basis vectors. Now, the Cartesian
coordinates of the dual set of vectors are represented by the rows of the
inverse matrix. It follows from here that if the chosen basis vectors happen to
be orthonormal, as in the case of the Cartesian coordinate systems, then the
given set of basis vectors coincides with the set of their dual basis vectors. In
other words, if we simply interpret the space of all sequences of real numbers
ℝ𝑛 as the space of columns of n real numbers, its dual space may be written
as the space of rows of n real numbers. Again, in the terminology of
mathematicians, one may say that such a row represents a linear functional
on ℝ with respect to ordinary matrix multiplication. Here we may see a hint
on some deficiency of this conventional approach: it restricts the transition to
dual space to finite-dimensional spaces since it is only then that duality is well
defined. Another mathematical saying in relation to dual spaces is that there
is no canonical (i.e., independent of coordinate choice) isomorphism between
a vector space and its dual: isomorphism here depends on a particular choice
of a basis. Nevertheless, there exists a canonical isomorphism between a
finite-dimensional vector space and its double dual.
One may note in conclusion of this section that duality plays a nice part
in a number of geometrical applications. Take projective geometry (see
below) as an example. In this discipline, projective planes are self-dual: if one
switches the words “point” and “line”, one doubles results, so to say, for free.
3.13 Some Remarks on Indices
The upper and lower indices are used to distinguish two sets of components:
those of a vector in a given basis (more or less arbitrarily fixed by us) and in
the corresponding dual basis. In many textbooks on physics where exclusively
the orthonormal bases are used, the authors typically make no distinction
between upper and lower indices. The reason for such a simplification is that
in orthonormal bases the sets of components of a vector in a given coordinate
system and in that with the respective dual basis are the same. In more
general situations, raising and lowering indices expresses a natural duality
between tangent vectors and covectors or 1-forms. Specifically for vector
bundles of mechanics (see Chapter 1), lowering indices corresponds to
mapping of the tangent to cotangent bundle 𝑇𝑀→𝑇∗𝑀, whereas raising
indices reflects a reciprocal map 𝑇∗𝑀→𝑇𝑀. Both operations are examples of
a duality isomorphism.
3.14 Operators in Quantum Mechanics
While studying
quantum mechanics, one
might notice that the
mathematicians seem to talk mostly about self-adjoint operators whereas the
physicists prefer to talk about Hermitian operators, the use of operators for
practical quantum-mechanical calculations being basically the same. Is there
144
Mathematical Potpourri
any real difference between self-adjoint operators and Hermitian operators?
It is interesting that in some modern linear algebra courses, where I wanted
to check how exactly the term “Hermitian” is defined, I could not find the word
even in the index. (I just checked two Linear Algebra texts to see exactly how
they defined it and the word is not even in the index!)
Of course, one can a priori assume that mathematicians would tend to
talk more abstractly than physicists, but let us try to explore the substance of
these two classes of operators leaving aside “sociological” considerations. Let
𝑈 and 𝑉 be any inner product spaces and 𝐴 a linear map from 𝑈 to 𝑉, then the
adjoint of 𝐴 denoted as 𝐴∗, is a linear map from 𝑉 to 𝑈 such that, for any 𝑢 in
𝑈 and 𝑣 in 𝑉, (𝐴𝑢, 𝑣) = (𝑢, 𝐴∗𝑣) . The brackets here denote the inner
product82, and the two inner products are taken in 𝑉 and 𝑈, respectively. In
the particular case when 𝑈= 𝑉 and 𝐴∗= 𝐴, i.e., if (𝐴𝑢, 𝑣) = (𝑢, 𝐴𝑣), then
operator (map) 𝐴 is self-adjoint. In other words, an operator is Hermitian if it
possesses the above symmetry property, (𝐴𝑢, 𝑣) = (𝑢, 𝐴𝑣) for all 𝑢, 𝑣 in the
domain of 𝐴, and an operator is self-adjoint if 𝐴∗= 𝐴 everywhere. The subtle
difference here is that the domains of 𝐴 and 𝐴∗ in general may not coincide.
When dealing with arbitrary normed spaces, the adjoint map, by the
construction of scalar product, acts between the duals, 𝐴∗: 𝑉∗→𝑈∗ (see
above). Therefore, there may be difficulties in identifying self-adjoint
operators with Hermitian ones at least for unbounded operators when it is
not easy to define a dual space.
3.15 Dualities in Physics
This section is a digression from a traditional mathematical exposition of an
introductory material. I am making this running-ahead digression
intentionally in order to demonstrate how rather abstract mathematical
concepts can work in physics. Although the term ‘duality’ might appear a bit
overloaded, contemplating about various dualities leads us to beautiful
extensions. Duality is not only a high-brow theoretical concept, it also enables
one to solve concrete problems. Duality enters physics under various guises,
for example, electromagnetic duality may be regarded as a four-dimensional
representation of duality.
Take, for instance, the Maxwell equations (Chapter 5). In vacuum and
without external sources they have the form
∇× 𝐇= 1
𝑐
𝜕𝐄
𝜕𝑡, ∇𝐄= 0,
(3.7)
∇× 𝐄= −1
𝑐
𝜕𝐇
𝜕𝑡, ∇𝐇= 0
(3.8)
where 𝐄= 𝐄(𝐫, 𝑡), 𝐇= 𝐇(𝐫, 𝑡) denote electrical and magnetic components of
the electromagnetic field. Obviously, these equations possess a symmetry
𝐄→𝐇, 𝐇→−𝐄 which is now termed as duality.
82 For simplicity, I denote the inner product by round brackets and not by angular brackets as
customary in mathematical literature.
3.15
Dualities in Physics
145
This symmetry exchanging electric and magnetic fields has been probably
known since the 19th century83, but only recently has it been exploited as a
source of inspiration for constructing new theories and models. The first
question to be discussed while trying to generalize the duality of the Maxwell
equations is: does the symmetry observed while interchanging electric and
magnetic fields still hold for inhomogeneous Maxwell equations, i.e., in the
presence of external charges and currents? This is not a simple question, since
the symmetry of the inhomogeneous Maxwell equation seems to be broken
by the observational fact that while the electric charge does exist, nobody has
yet succeeded to detect the magnetic charge (commonly called magnetic
monopole). The 𝐄−𝐇 symmetry of microscopical electrodynamics appears
also to be broken in material media whose response to an incident field is
actually due to induced charges and currents (Chapter 5). Even in vacuum,
when one treats the electromagnetic response naively, without a consistent
relativistic consideration, electrical charges respond to 𝐄 and 𝐇 differently:
electric fields provide a force accelerating the charge parallel to the field lines,
whereas magnetic fields result in the Lorentz force ensuring only a normal
acceleration. Furthermore, magnetic materials exhibit their highly useful
properties due to macroscopic ordering of magnetic dipoles, but isolated
magnetic charges, even if they could be observed, seem to play no part in
magnetic interactions.
From the mathematical viewpoint we know that one can represent the
magnetic field, which is a solenoidal vector field, with the help of the vector
potential, 𝐀(𝐫, 𝑡) . This is a great convenience. On the other hand, one
represents the electric field, which is a potential field in a static frame of
reference, as the gradient of a scalar potential, 𝜑(𝐫, 𝑡). This difference of
representations of the two would-be symmetric components of the unified
electromagnetic field also seems to break the duality.
In the four-dimensional formalism (see Chapter 5), the dual Maxwell
equations
𝜕𝜇𝐹𝜇𝜈= 0,
𝜕𝜇𝐹∗𝜇𝜈= 0
(3.9)
where the dual field is defined as
𝐹∗𝜇𝜈= 1
2 𝜀𝜇𝜈𝜚𝜎𝐹𝜚𝜎
Recall that the electromagnetic field tensor 𝐹𝜇𝜈 is expressed through the
potentials 𝐴𝜇 as 𝐹𝜇𝜈= 𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇. We postpone the discussion of the
electromagnetic field properties till Chapter 5 (see also Chapter 9) and make
now some more remarks about dualities.
A simple way to preserve the symmetry between 𝐄 and 𝐇 in the presence
of external electrical charges and currents would be to add magnetic charges
83 The first who explicitly used this symmetry was probably O. Heaviside [37].
146
Mathematical Potpourri
and currents to the Maxwell equations. This would also entail a symmetry
between electrical and magnetic charges and currents in the quantum field
theory (QFT). Although it appears difficult to ensure such a duality in the
traditional version of QFT, there exist a number of quite consistent field
theories containing both types of charges (Polyakov-t’Hooft monopoles) [97],
[208]. Magnetic monopoles in this class of theories appear as collective
excitations of elementary particles. Recall in this connection that collective
excitations in condensed media physics are characterized by the wave vector
𝐤 that is dual to the position vector 𝐫 or velocity 𝐯. In other words, electrical
and magnetic charges emerge in the respective quantum field theory in a
totally disparate manner, at least in the so-called weak coupling mode. The
term “weak coupling” denotes in this context the domain where the coupling
constant is small. Applied to electrodynamics, this simply means that the fine
structure constant 𝑒2/ℏ𝑐≈1/137 is small.
A simple form of duality may be modeled by the conformal
transformation of a complex variable, e.g., 𝑧→−1/𝑧. Roughly speaking, by a
duality we may understand some transformation of variables or parameters
in a given domain of a theory or model into a physically related (not
necessarily completely equivalent) theory or model with a different set of
variables or parameters that define it. For instance, a duality may exchange a
string theory with a field theory. Or a duality may exchange a strong coupling
mode of a given theory with the perturbative regime of the dual theory - the
most advantageous case from the physical viewpoint.
Duality in physics is usually defined in rather simple terms: if in some
theory 𝐴 function 𝑓𝐴 depends on variable 𝑥, i.e. 𝑓𝐴= 𝑓(𝑥), and in another
theory 𝐵 there appears a function 𝑓𝐵≔𝑓𝐵(1/𝑥), then in case . 𝑓𝐴(𝑥) =
𝑓𝐵(1/𝑥) such theories are known as dual. For 𝑓𝐴(𝜉) = 𝑓𝐵(𝜉) ≔𝑓(𝜉), a
theory is called self-dual. As I have just mentioned, an often given example
of duality is the monopole theory, in particular the Dirac monopole
introduced by P. A. M. Dirac in 1931 [183], with the monopole “charge” 𝑚
having the property 𝑚∙𝑒= 𝑐𝑜𝑛𝑠𝑡∙𝑛, 𝑛= 1,2, … i.e. the product of electric
and magnetic charges is quantized. In other words, the conjectural magnetic
monopoles, should they exist, are quantized in units inversely proportional to
the elementary electric charge. From here, 𝑚~1/𝑒 so that the monopole
coupling constant 𝛽↔1/𝛼, where 𝛼 is the electromagnetic coupling
constant (the fine structure constant). This means that if 𝛼= 𝑒2/ℏ𝑐≈
1/137 is small, then 𝛽~1/𝛼 is large and no perturbation theory of the
quantum electrodynamics type seems to be possible. Therefore, one can
deduce that point-like monopoles probably do not exist, in contrast with
electrons, and one has to consider extended objects from the very
beginning. One of the popular versions for such extended monopoles is a
soliton.
Typical dualities in modern physics are related just to the coupling
constants. More specifically, suppose there are two theories 84, each being
84 It is customary nowadays to call pieces of theoretical endeavors “theories”, I would prefer
to name them “models” because of their limited scope.
3.15
Dualities in Physics
147
characterized by a coupling constant 𝑔𝑖, 𝑖= 1,2 . A coupling constant is
typically an arbitrary parameter. Assume that we change coupling 𝑔1 of the
first theory so that eventually it becomes inverse to coupling 𝑔2 of the second
theory. If in this case both theories provide similar regimes, e.g., nearly
identical relationships of state probabilities, then such theories are called
dual. In string theory this situation is known as S-duality, S standing for
“strong-weak” (𝑔 and 1/𝑔). More accurately one can say that the S-duality
maps states and vacua arising in one theory (with coupling constant 𝑔1) to
those in the dual theory (with coupling 𝑔2) when 𝑔1 = 1/𝑔2. Duality here is a
cunning trick allowing one to apply the perturbation theory - the main tool of
quantum mechanics which normally can be used only for weak coupling, i.e.,
for 𝑔< 1 - to strongly coupled theories (𝑔> 1), by mapping strongly coupled
to weakly coupled states. In the context of usual four-dimensional quantum
field theory (QFT), which is much simpler than string theory, S-duality
exchanges the electrical and magnetic field components and, respectively,
charged particles with already mentioned magnetic monopoles (see, e.g.,
[134], [135]).
Discussion of magnetic monopoles, as well as of T-dualities (topological,
see, e.g., https://en.wikipedia.org/wiki/Superstring_theory), would lead us
far astray, e.g., to scrutinizing the solutions of nonlinear PDEs and topological
properties of space. Topology is a highly advanced discipline requiring a
profound treatment, even in the physical context. More or less detailed theory
of nonlinear PDEs is a special and serious topic being outside the scope of this
book - otherwise the text would be swollen beyond the perceptive ability of a
normal person. Nevertheless, we shall discuss later some symmetry
properties of the main equations of physics and see how the duality appears.
Now I shall only note that a symmetry of the Maxwell equations permitting us
to exchange electrical and magnetic fields appears as a duality between
elementary charges and collective excitations, since in a weak coupling case
electrical charges look like quanta whereas magnetic charges emerge as
collective excitations. Moreover, it seems that in general, quantum dualities
exchange particles with collective excitations, in QFT with quasiparticles
represented as solitonic solutions.
Furthermore, one can often hear about the “wave-particle duality” (see
Chapter 6). However, the concept of wave-particle duality as it is widely
taught to students is intuitive, unspecific, somewhat ambiguous and
imprecise. This is essentially the same kind of duality as between dual spaces
in algebra, because to each displacement or velocity-like vector in classical
mechanics we have a momentum or wavenumber-like vector in quantum
mechanics. Due to this duality, quantum mechanics becomes very similar to
classical field theory by semiclassical correspondence between particles
(mechanics) and waves (fields) - the same wave equations are used since the
mechanical Hamiltonian becomes the kinetic energy operator of the
corresponding wave theory. The fact that differentials and derivatives are dual
to each other can be a good introduction not only into differential geometry
but into quantum mechanics as well. So, examples of dualities are abundant
both in physics and in mathematics. One of the popular recent examples is
148
Mathematical Potpourri
the Veneziano dual model [184], see also [185]. The Veneziano model was
later reformulated and included into a set of modern string theories. For the
students of numerical mathematics and so-called scientific computing,
nowadays of fashion, it would be interesting to know that, for instance, matrix
columns, which may be treated in Euclidean space as forming a vector
subspace, possess the dual space which is the subspace of rows. In solid state
physics, one uses dual (reciprocal) basis to describe the crystallographic
lattice and specify the electron states. If we take the Dirac notation in quantum
mechanics (Chapter 6), the ket vectors form a subspace of a Hilbert space and
have the dual space of bra vectors. For any vector field we have a dual field,
however sometimes not so easily represented in the form of simple
geometrical images. Likewise, the dual space for the tangent vector space,
𝑇𝑥(𝑀), is the cotangent space 𝑇𝑥
∗(𝑀) which, unfortunately, also does not have a
simple geometrical representation like the tangent space - we can
mathematically define but cannot easily visualize the cotangent space even
for the low-dimensional cases 𝑑= 1,2,3 … So it seems that in this case we
must leave the visual patterns of physics aside and stick to mathematical
definitions.
We have already seen that for each vector space 𝑉 there exists a dual
space 𝑉∗ - here I may refer, e.g., to the section on coordinate transformations
where the difference between contravariant and covariant vectors and
coordinates was discussed. For convenience, I shall recall some simple
geometrical interpretations. For example, the velocity vector for each smooth
curve passing through point 𝑥∈𝑀 is a contravariant tangent vector lying in
tangent space 𝑇𝑥(𝑀) at a point 𝑥 of a manifold 𝑀. It is an element of the vector
space tangent to the manifold in this point and one can view it as being located
in a hyperplane analogous to the tangent plane for dimension 𝑑= 2. Another
example is a displacement vector 𝐫. We have seen that a covariant vector in a
point 𝑥 of a manifold is an element of another - dual - vector space called
cotangent space in the point 𝑥 of the manifold. A covariant vector is a
geometrical object that transforms like the basis vectors, whereas a
contravariant vector is an object that transforms “against” the basis vectors.
By the way, one should not confuse covariant vectors and Lorentz covariance,
this is just an abuse of the language. For instance, a contravariant Lorentz
vector is different from a covariant Lorentz vector, although the former is a
vector that is Lorentz covariant (see Chapter 5). Now, however, we confine
ourselves to simple examples needed only to illustrate the concept of duality.
Duality, in human terms, is just two different ways of looking at the same
phenomenon.85 In physics, duality in fact appeared long before many other
concepts, when in the 17th century Huygens and Newton proposed
85 This principle has been long exploited by authors in classical literature. For example, in in
polyphonic novels by F. M. Dostoyevsky such as “The Karamazov Brothers” there exists no
‘objective’ description of the world, but only different manifestations of reality depending on the
perception of acting personages. A brilliant illustration of duality has been produced in the film
“Rashomon” by the famous Japanese film director, Akira Kurosava. This film was based on the
short story “In a Grove” by Akutagawa Ryunoske (although it was a far adaptation); both the story
by Akutagawa and the movie depict several characters offering different “true” accounts of rape
and murder. Thus, it is an artistic exploration of the nature of “truth”.
3.15
Dualities in Physics
149
competing wave and corpuscular theories of light’s behavior. For over a
century Newton’s corpuscular theory was dominant, mainly, it seems, due the
author’s authority - one more example of sociological effects in science. It is
only in the early 19th century when diffraction has been reliably observed,
e.g., in Thomas Young’s double slit experiments, that people apprehended
grave complications for the Newton’s corpuscular theory of light. Many
observations clearly demonstrated an obvious wave behavior, the light waves
showed interference patterns - here the principle of superposition for linear
waves is manifested (see many interesting details in [106]). All this seemed
to prove that light traveled in waves so that the Newton’s theory had to be
replaced by Huygens’ theory. But if light existed as waves, this fact would
imply, according to the standard wave theory accepted at that time, a medium
of some kind through which the wave must propagate. This invisible medium
was suggested by Huygens and called by him “luminoforous aether”, in a more
customary terminology, ether. Nobody could observe this medium despite a
number of experimental attempts throughout the 19th century. Finally, the
search for ether culminated in the famous and direct Michelson-Morley
experiment, which led to creation of the relativity theory. Then the
photoeffect was studied (in particular, in the Einstein’s work) and again a
particle theory of light took dominance, eventually leading to mathematical
models of quantum mechanics (see Chapter 6). Thus, the old concept of
duality was plainly related to today’s physics, the latter possessing more
modern dualities. Now, it would be a truism to say that light manifests itself
as both a particle and a wave, depending on what observations are made and
how the experiment is set up. The same duality applies to matter, e.g., to
elementary particles. To manifest dual properties of matter, different
conditions are necessary. Incompatible external conditions reveal either
wavelike or corpuscular properties of microparticles, e.g., electrons or
neutrons. In other words, depending on imposed external conditions (such as
observation means - experimental setup) quantum particles display wave or
particle features, just like light. The meaning of duality is that there is a
potential possibility for every object to display totally diverse - even opposite
and complementary - features depending on premeditated settings. It would
be probably wrong to understand duality too literally, for instance, in the
quantum-mechanical case, to model the particle as a singular point of a
certain field or represent the particle motion as being carried by the wave.
Such mathematical models, of course, can be constructed but their heuristic
value seems to be too low (see more details in Chapter 6).
Here, I would like to make a few comments. One must notice that such
circumstances may exist when dual features can be displayed simultaneously.
For example, the bound state of an electron in atoms is described by a
standing wave, usually with an amplitude that is rapidly diminishing with the
distance to the center (nucleus). This fact means that the electron is
approximately localized (a corpuscular feature), sometimes with a good
accuracy, but it is nonetheless a wave. A typical feature of such combined
manifestations of the dual (complementary) properties is that they are less
distinctly exhibited.
150
Mathematical Potpourri
competing wave and corpuscular theories of light’s behavior. For over a
century Newton’s corpuscular theory was dominant, mainly, it seems, due the
author’s authority - one more example of sociological effects in science. It is
only in the early 19th century when diffraction has been reliably observed,
e.g., in Thomas Young’s double slit experiments, that people apprehended
grave complications for the Newton’s corpuscular theory of light. Many
observations clearly demonstrated an obvious wave behavior, the light waves
showed interference patterns - here the principle of superposition for linear
waves is manifested (see many interesting details in [106]). All this seemed
to prove that light traveled in waves so that the Newton’s theory had to be
replaced by Huygens’ theory. But if light existed as waves, this fact would
imply, according to the standard wave theory accepted at that time, a medium
of some kind through which the wave must propagate. This invisible medium
was suggested by Huygens and called by him “luminoforous aether”, in a more
customary terminology, ether. Nobody could observe this medium despite a
number of experimental attempts throughout the 19th century. Finally, the
search for ether culminated in the famous and direct Michelson-Morley
experiment, which led to creation of the relativity theory. Then the
photoeffect was studied (in particular, in the Einstein’s work) and again a
particle theory of light took dominance, eventually leading to mathematical
models of quantum mechanics (see Chapter 6). Thus, the old concept of
duality was plainly related to today’s physics, the latter possessing more
modern dualities. Now, it would be a truism to say that light manifests itself
as both a particle and a wave, depending on what observations are made and
how the experiment is set up. The same duality applies to matter, e.g., to
elementary particles. To manifest dual properties of matter, different
conditions are necessary. Incompatible external conditions reveal either
wavelike or corpuscular properties of microparticles, e.g., electrons or
neutrons. In other words, depending on imposed external conditions (such as
observation means - experimental setup) quantum particles display wave or
particle features, just like light. The meaning of duality is that there is a
potential possibility for every object to display totally diverse - even opposite
and complementary - features depending on premeditated settings. It would
be probably wrong to understand duality too literally, for instance, in the
quantum-mechanical case, to model the particle as a singular point of a
certain field or represent the particle motion as being carried by the wave.
Such mathematical models, of course, can be constructed but their heuristic
value seems to be too low (see more details in Chapter 6).
Here, I would like to make a few comments. One must notice that such
circumstances may exist when dual features can be displayed simultaneously.
For example, the bound state of an electron in atoms is described by a
standing wave, usually with an amplitude that is rapidly diminishing with the
distance to the center (nucleus). This fact means that the electron is
approximately localized (a corpuscular feature), sometimes with a good
accuracy, but it is nonetheless a wave. A typical feature of such combined
manifestations of the dual (complementary) properties is that they are less
distinctly exhibited.
3.16
Manifolds
151
Another remark is rather trivial: diverse experimental settings often may
properties. The resulting wave function (amplitude) is constructed as a linear
superposition of all interfering probability amplitudes and, consequently,
reveals the same wave-like features. One can interpret this construction as
follows: the probability of finding a particle in some location is determined by
a wave, but the actual physical observation detects a corpuscle.
The so-called physical meaning of the wave-particle duality has long been
a focus point of heated debates in the early years of quantum physics.
Nowadays, it seems this issue remains a hot topic only for amateurs.
Nevertheless, attempts to explain what the wave-particle duality “really
means” has resulted in a number of interesting interpretations of quantum
mechanics. To my knowledge, all these interpretations produced no new
results and no fresh experimental predictions as compared with the set of
wave equations. The mathematical models associated with the latter may
appear to some people rather complicated, but they mostly provide accurate
experimental predictions.
One can also note that duality as a principle exists not only in
mathematics and physics. In the preceding chapter, I have briefly mentioned
cognitive models, in particular those developed in so-called cultural
anthropology. One such model, comprehensively discussed by the famous
philosopher Karl Popper in his highly interesting two-volume manuscript
“The Open Society and Its Enemies” [186], was designed to explain the
stability of human societies86. The main statement here is that any stable
society must have a dual structure, in particular support two parties, e.g., “left”
and “right”. This bipartite system was found to be typical even of the ancient
and primeval societies, and up till now the majority of societies stick to the
primeval pattern. The society was divided into two parts, and each member of
society was exactly aware to what part she/he belonged. Even today, one can
tell whether a person sympathizes with the liberal attitudes or identifies
oneself with a strong state domineering over its citizens. Ancient people even
formulated social duality in the form of a slogan: “Always take your wife from
the other half of the tribe.” This principle made intra-tribal aggression highly
ritualized and, consequently, non-destructive. The dual notions are especially
evident in oriental - mainly Chinese - traditions (Yin-Yang etc., see, e.g.,
http://en.wikipedia.org/wiki/Yin_and_yang). Social duality was also popular
and extensively depicted in literature (R. L. Stevenson, J. Swift).
3.16 Manifolds
One may ask: why did we devote so much attention to a seemingly trivial
subject of vector spaces? The main motivation was the following: n-
dimensional linear (vector) spaces, e.g., ℝ𝑛, and their open subsets 𝑈 are
the simplest sets on which one can define functions as well as vector and
tensor fields. Recall that ℝ𝑛 may be thought as consisting of vectors
(𝑥1, … , 𝑥𝑛), with dimensionality 𝑛 being arbitrary large, but still finite.
86 Periods of instability, e.g., revolutions, are assumed significantly less durable than
periods of stability.
152
Mathematical Potpourri
The most useful feature of such simple sets is the possibility to establish on
them a unique coordinate system. For instance, one can represent a map
𝑓(𝑥) on 𝑥∈𝑈⊆ℝ𝑛 as a function of n coordinates 𝑥𝑖, 𝑖= 1, … , 𝑛.
However, already very simple examples demonstrate that such a
compelling-looking scheme is a somewhat provisional way of looking at
physical spaces, and it may easily become inadequate. If we take, for instance,
the unit sphere, 𝑆𝑛−1 ⊂ℝ𝑛, which may be described by the equation 𝑥𝑖𝑥𝑖=
1, 𝑖= 1, … , 𝑛, then we quickly get into trouble. To be more specific let us
take 𝑛= 3 i.e., 𝑆2 ⊂ℝ3 . Then we have the familiar equation from the
school-time geometry:
(𝑥1)2 + (𝑥2)2 + (𝑥3)2 = 1
(hopefully, there will be no confusion between coordinate and power
indices). Writing this equation in spherical coordinates
𝑥1 = cos𝜑sin𝜃,
𝑥2 = sin𝜑sin𝜃,
𝑥3 = cos𝜃,
0 < 𝜑≤2𝜋, 0 ≤𝜃< 𝜋.
we may immediately observe that there is no coordinate system that can
describe the whole 𝑆2 surface.
Cartesian (rectangular) or affine coordinates naturally reflect geometric
properties of Euclidean and affine spaces. However, in many situations it is
much more convenient to work in curvilinear coordinates than in Cartesian
ones, e.g., by dealing with curvilinear objects. It is enormously easier, for
example, to integrate differential equations of mathematical physics for a
spherically symmetric problem in spherical coordinates than in Cartesian
those. (It is remarkable that this adaptation of coordinate frame to symmetry
and respective geometrical techniques are often disregarded in numerical
treatment of respective problems.) A systematic study of the curvilinear
coordinate systems naturally leads to the merge of geometry and physics,
which became obvious after the construction of general relativity by Einstein
and Hilbert (actually D. Hilbert submitted an article containing the correct
field equations for general relativity several days before Einstein, yet Hilbert
never claimed priority for this theory.
One may notice that the idea of a manifold is so simple that it is
astonishing that the physicists rarely used this concept when physics was
especially fruitful, say, in the period of 1900-1970. Working with manifolds
simply means that one should be able to view the surrounding space as locally
Euclidean (see above the definition and discussion of a Euclidean space). In
other words, a manifold looks like 𝔼𝑛 (or ℝ𝑛, ℂ𝑛) locally, but not necessarily
globally. This is basically a geographic idea: the Earth looks plane near your
home, and when mathematically generalizing this illusion, we can
approximate a rather small domain around a point on practically any surface
by the tangent plane at this point - an idea close to linearization.
I apologize for using such vague notions as locally, near, small domain,
practically and the like. Later, little-by-little a concrete, but context-
3.16
Manifolds
153
dependent meaning will be assigned to these words. One might note in
passing that in contrast with geometry which mostly deals with the local
properties of manifolds (for instance, differential geometry), there exists a
discipline known as topology where the manifolds are studied as whole
entities i.e., at the global level.
What is the fundamental difficulty in doing classical differential calculus
on manifolds? One may remember that differential calculus is the basic tool of
physics, starting from Newton and Leibniz. The main difficulty is to extend the
results obtained in one coordinate system to other coordinate systems on a
manifold making sure that these results remain valid in all of them. The
traditional way, extensively used in physics, to circumvent this problem is to
apply the language of tensor analysis. In this manner, the laws of physics to be
used in physical models on manifolds are formulated through tensor
derivatives. Then the requirement of coordinate invariance in handling
physical models on manifolds naturally leads to similarity or isomorphism
between the vector spaces attached to neighboring but distinct points, in the
limiting case between the points which are infinitely close to each other (in
the spirit of classical differential calculus). Such isomorphisms form a
mathematical structure commonly known as the Levi-Civita connection.
But before we study the standard concepts of differential geometry, we
have to discuss the possibility of confidently using the tools of classical
calculus on manifolds. I have already mentioned that the concept of a
manifold is a simple extension of a surface in ℝ3 . Then the notion of a
differentiable manifold is a generalization of a regular surface in ℝ3. The
surface 𝑆 is considered regular if for each point 𝑥∈𝑆 there exists a
neighborhood 𝑈𝑥 of 𝑥 in ℝ3 and a map of an open set 𝑉 of ℝ2 onto 𝑈 i.e.
𝑓: 𝑉∈ℝ2 →𝑈∩𝑆, this map being a differentiable homeomorhism (simply a
diffeomorphism, although I know that there exist pedants who distinguish
these two notions). Intuitively, a regular surface is perceived as a union of two-
dimensional (ℝ2) open sets making up an “atlas” of a number of “charts” so
that such open sets overlap, the transition from one to another can be made
in a differentiable (smooth) fashion. This perception signifies that one can
painlessly go from one parametrization of a surface to another, and this
change of parametrizations or, in simple words, coordinate systems is a
diffeomorphism. Use of differentiable maps for the change of coordinates on
the one hand requires and on the other hand ensures applying the powerful
techniques of classical calculus.
One can, however, immediately notice that the presence of ℝ3 in the
concept of a regular surface - in fact of any surface - is totally irrelevant. This
ambient space in no way affects the surface; its role is only reduced to a
physical container or a vessel.
Viewing from the physical positions, differentiable manifolds have a
number of extremely useful applications. First of all, a differentiable manifold
may be considered a rudimentary model of our spacetime. Why rudimentary
and why only a model? Primarily because a differentiable manifold in general
does not yet possess the customary and convenient, sometimes even vital for
154
Mathematical Potpourri
us geometric features such as, e.g., distance, length, angle, area 87. Thus, a
differentiable manifold may in fact serve only as a prototype for our
spacetime. To make this prototype closer to the version of spacetime we think
we know, one must introduce some additional properties such as metric and
affine connection on the tangent space at each point.
A vector bundle is a manifold created by assigning a vector space to each
point of the manifold. In particular, the tangent bundle consists of a manifold
plus the tangent space at each point. The cotangent bundle is formed
analogously. We will see that tangent bundles are used for the Lagrangian
formulation of classical mechanics, and cotangent bundles are used for the
Hamiltonian formulation.
3.17 Notes on Derivatives
Let us recall some basic facts from classical analysis. The main object of
classical calculus is the function 𝑓 taking 𝑥∈𝑋 to 𝑓(𝑥) ∈𝑌 where 𝑋 and 𝑌
are some vector (affine) spaces. Now, what is a derivative of a function? One
can say that it is a very simple object, yet thinking about it in association with
other concepts gives rise to interesting observations. Let us take the simplest
example of a single-variable function 𝑓: ℝ→ℝ. Differentiating it produces
another function, 𝑑𝑥𝑓≡𝑑𝑓𝑑𝑥
⁄
: ℝ→ℝ, so the differentiation is a map on a
set of functions defined on ℝ with the range also in ℝ. Actually, it is one of the
simplest examples of a linear operator that acts on a set of functions forming
a vector space with respect to ordinary addition and multiplication. If the
function 𝑓 has a derivative at 𝑥, it is uniquely defined and referred to as the
derivative of 𝑓. In nearly all physical models, differentiability is assumed
beforehand, and non-smooth functions rarely exist.88
Imagine now a function of more than one variable. Geometrically, we
have in this case more than a single number to characterize the derivative, i.e.,
the tangent to the graph of a map (𝑥, 𝑓(𝑥)): 𝑋→𝑌. For instance, in the 2D
case, ℝ2 →ℝ which still can be easily visualized, two numbers are required
to specify the tangent plane to a surface in each point ( 𝑥1, 𝑥2 ∈ℝ).
Differentiation does not produce the same kind of map, ℝ2 →ℝ, and if we fix
a basis and write the map of differentiation in coordinates, we get partial
derivatives. Now, remembering that basis and coordinate representation is a
convenient but not a universal one - there is often no natural way to fix a basis
- we may say that taking the derivative gives a map from the tangent space at
𝑥∈𝑋 to the tangent space in 𝑓(𝑥) ∈𝑌. There are a number of such maps and
we shall discuss them later one by one.
In a slightly different view, the derivative of 𝑓 at x is a linear functional
taking functions to ℝ. In case f is defined on some vector (affine) space
87 These are the examples of local structures: a distance between points specifies a
metric structure, an angle between curves manifests a conformal structure (angles are
left unchanged by a conformal mapping), area on a surface corresponds to a symplectic
structure.
88 Recall the first-year calculus terminology: if 𝑓 has a derivative at 𝑥, 𝑓 is called differentiable
at 𝑥, and if 𝑓 is differentiable wherever it is defined, it is simply called differentiable.
3.17
Notes on Derivatives
155
𝑋, 𝑓: 𝑋→ℝ, the derivative is a linear functional defined on the tangent space,
𝑇𝑥𝑋→ℝ, to the vector (affine) space 𝑋. Here we may recall that ℝ, while
being algebraically a field, is understood as a source of real scalars 89 .
Generalizing this construction, we obtain the derivative of 𝑓 at 𝑥∈𝑋 as a
linear map 𝑑𝑥𝑓: 𝑇𝑥𝑋→𝑇𝑓(𝑥)𝑌 where 𝑓: 𝑋→𝑌 is a map between vector
(affine) spaces 𝑋 and 𝑌. From here on, we can easily understand the concepts
of partial derivatives and of a directional derivative, the latter being very
important in Hamiltonian mechanics, the theory of dynamical systems and
relativity. It is remarkable that tensor quantities naturally emerge from
differentiation, even when one deals with scalar functions. This fact is, in
particular, manifested by the matrix representation 𝑑𝑥𝑗𝑓𝑖 for the linear map
𝑑𝑥𝑓, with partial derivatives 𝜕𝑓𝑖/𝜕𝑥𝑗≡𝜕𝑗𝑓𝑖(𝑥) . Here 𝑥𝑗, 𝑗= 1, … , 𝑛 are
coordinates on 𝑋, 𝑓𝑖, 𝑖= 1, … , 𝑚 are components of the vector function 𝑓∈𝑌
in basis 𝛽 on 𝑌. This manner of writing the derivatives as a map of a vector-
function is common in dynamical systems theory and we shall employ it later.
Thus, the apparently familiar concept of a derivative leads us to rather
delicate subjects - covariant derivatives, metric tensors, geodesics, etc. For
instance, the simple notion of a directional derivative leads to rather
nontrivial entities in variational calculus. A “directional derivative” in ℝ3
i.e., the derivative of function 𝑓(𝐫), 𝐫= (𝑥1, 𝑥2, 𝑥3) along the direction
determined by vector 𝐥= (𝑙1, 𝑙2, 𝑙3) is defined in calculus as
𝜕1𝑓= lim
𝜖→0
1
𝜖[𝑓(𝑥1 + 𝜖𝑙1, 𝑥2 + 𝜖𝑙2, 𝑥3 + 𝜖𝑙3) −𝑓(𝑥1, 𝑥2, 𝑥3)]
(see any course of elementary calculus). It can be easily verified that 𝜕1𝑓=
(1, ∇𝑓). Let us now consider a functional 𝑆[𝑓], in particular defined as an
integral of Lagrangian 𝐿= 𝐿(𝑥, 𝑓, 𝑓𝑥) , where 𝑥= (𝑥1, 𝑥2, 𝑥3) ∈𝐷, 𝑓∈
𝑉, 𝑉= 𝑉𝑚 is a linear space of smooth functions 𝑓(𝑥) ≔𝑓𝑖(𝑥𝜇), 𝑖= 1, … , 𝑚.
The Lagrangian 𝐿 is also considered a smooth function of 𝑥𝜇, 𝑓𝑖 and 𝑓𝑥𝜇
𝑖.
One might note that the requirement of smoothness is sufficient for nearly
all variational problems having practical importance in physics, but can
become too restrictive in certain singular cases. We, however, shall not
consider such cases. So let us write the functional 𝑆[𝑓] in the usual form
𝑆[𝑓] = ∫𝑑𝑛𝑥𝐿(𝑥𝜇, 𝑓𝑖, 𝑓𝑥𝜇
𝑖)
𝐷
, 𝑖= 1, … , 𝑚.
Thus, the functional 𝑆[𝑓] is defined on 𝑉. The domain 𝐷 of variables 𝑥𝜇, 𝜇=
1, … , 𝑛 which may be regarded as parameters of a variational problem is, for
simplicity, supposed to be finite in ℝ𝑛 i.e., delimited by a border 𝜕𝐷,
assumed smooth or at least piecewise smooth. Notice that the shorthand
89 Scalars used in physics are not necessarily real, other number systems can also underlie a
vector space. For example, in quantum mechanics vector spaces are built over complex scalars
(which, by the way, may be treated as vectors in the context of complex analysis)
156
Mathematical Potpourri
𝑑𝑛𝑥 actually denotes n-dimensional volume in ℝ𝑛, 𝑑𝑛𝑥≡𝑑Ω𝑛≔𝑑𝑥1 ∧
𝑑𝑥2 … ∧𝑑𝑥𝑛.
Let us now perturb 𝑆[𝑓] i.e., displace, as we have already done,
𝑓𝑖(𝑥1, 𝑥2, 𝑥3) ∈𝑉 to 𝑓𝑖(𝑥1, 𝑥2, 𝑥3) + 𝜖ℎ𝑖(𝑥1, 𝑥2, 𝑥3), where ℎ≔ℎ𝑖(𝑥𝜇), 𝜇=
1, … , 𝑛 also belongs to 𝑉 and is chosen in such a way as ℎ|𝜕𝐷= 0; 𝜖 is a
small parameter. Now we may construct an expression very similar to a
directional derivative well-known from classical calculus:
Let us briefly review what we have done. First of all, we introduced the
basis, i.e., the possibility to label each point in the vector (affine) space by
coordinates 𝑥𝑗∈ℝ𝑛, and then identified the point 𝑥∈𝑋 with this set of
coordinates. One must, however, remember that the set of 𝑥𝑗, 𝑗= 1, … , 𝑛 are
just labels for a point in the given coordinate system. If you choose another
basis, these labels are changed, they do not have absolute meaning. Besides,
there is often no natural way to choose the basis, the sphere being the favorite
example. Nevertheless, once the concrete identification of the points has been
accepted, we may write the usual limit that is considered, in the traditional
courses of calculus, as the definition of partial derivative:
𝜕𝑓𝑖
𝜕𝑥𝑗≡𝜕𝑗𝑓𝑖(𝑥) = lim
𝛿𝑥𝑗→0
𝑓𝑖(𝑥1, … , 𝑥𝑗+ 𝛿𝑥𝑗, … , 𝑥𝑛) −𝑓𝑖(𝑥1, … , 𝑥𝑛)
𝛿𝑥𝑗
An important thing about this limit is that it may exist, although the map
𝑑𝑥𝑓 does not exist.
The matrix 𝜕𝑗𝑓𝑖(𝑥) representing the map 𝑑𝑥𝑓 in some specified basis is
the Jacobian matrix of the map 𝑓(𝑥): 𝑋→𝑌. We shall encounter this matrix
many times in various contexts.
There are some particular cases studied in analysis courses. If 𝑋= ℝ, we
go back to the simplest case when the variable 𝑥 has only one direction (and
its reverse), so the derivatives become ‘ordinary’, 𝑑𝑓𝑖𝑑𝑥
⁄
= 𝑑𝑓𝑖𝑑𝑥(𝑥)
⁄
, 𝑖=
1, … , 𝑚 and the Jacobian matrix is reduced to a column vector. If, on the other
hand, 𝑌= ℝ, then vector 𝑓𝑖, 𝑖= 1, … , 𝑚 has a single component that may be
denoted 𝑓(𝑥) and the Jacobian matrix is degraded to the row vector of partial
derivatives, 𝜕𝑗𝑓.
3.18 Notes on Calculus
While discussing derivatives, one can naturally recollect the basic concepts of
calculus. I totally disagree with the point of view that calculus, or classical
analysis, is an old-fashioned and “completed” discipline, so there is little hope
to invent something new in it. This snobbish attitude towards classical
disciplines is a symptom of exaggerated attention to fashionable parts of
science. It is a sociological effect encountered not only in mathematics. In
physics, some vociferous proponents of current outfit treat classical
electrodynamics and even classical mechanics with a poorly concealed
disdain. As I have mentioned in the preceding chapter, when I was very young,
I often came to the so-called “Landau seminar” at the Institute of Physical
Problems in Moscow. There, it was probably considered trendy to derogate
3.19
Basic Geometry for Physics
157
all the directions that were outside the scope of local theoreticians (the
“Landau School”, L. D. Landau himself had unfortunately died by this time). I
remember how it stunned me when I occasionally heard from the persons for
whom I held a great respect such fanciful statements as “classical
electrodynamics is a pastime for assiduous idiots”, “what they do in
mechanics has nothing to do with physics”, “irreversible thermodynamics is
irreversible stupidity” (the last statement was ascribed to Landau). Probably,
such fads and fancies exist also in the biological community - at least I have
heard some rumors from biologists about the subjects that do not even
deserve of being studied.
I have already touched upon sociological effects in physics (see Chapter
2), and in Chapter 9 (“What Remains to Be Solved”) we shall have to discuss
them once more; now, reverting to calculus I would like to note that there is
a lot of room for improvement in both understanding its conceptual roots and
in effective exposition of the subject. When working with students, I could
observe that, in spite of its beauty, official calculus is too hard for the majority
of them, especially for those who are more inclined to use computers than to
learn numerous theorems and proofs.
3.19 Basic Geometry for Physics
When considering this subject, one can get a feeling that there are different
cultures in physics: intuitive, algebraic, geometric, computational, visionary,
and so on. In particular, one can understand the term “geometry” as the study
of sets 𝑋 consisting of elements 𝑥 called points (irrespective of their physical
nature, e.g., they may actually be straight lines) together with some
fundamental relations between them. Points 𝑥 are all peers i.e., they are
usually considered homogeneous with respect to such fundamental relations,
and a group of automorphisms that is of one-to-one mappings of set 𝑋
onto itself, 𝑥↦𝑥′ acts on 𝑋 leaving all fundamental relations intact. In
many instances, we shall have to use geometrical language to understand
physical models. I have read long ago that one of the great
mathematicians, David Hilbert, considered physics and geometry to be
actually the same subject, since both deal with the objects of the real
world. At least, modern physics extensively uses geometrical concepts. For
me personally, it has always been really fascinating how deep is the role of
differential geometry in physics. Certain things, even very simple facts
from classical mechanics, cannot be properly understood without
elementary notions from differential geometry. For instance, the whole of
kinematics is indistinguishable from elementary differential geometry.
Yet, some physicists consider reducing physics to geometry a grave crime,
since spatial and temporal scales are essentially physical quantities.
Fundamentals of differential geometry include such important and in
general nontrivial concepts as flows, manifolds, bundles, connections,
differential forms, Lie groups, jets and so on. One can see that discussing them
even in moderate scope would require a lot of time and pages, so nolens
volens I have to be superficial while addressing the subject of differential
158
Mathematical Potpourri
geometry. Trying to cover this subject in a very lapidary mode, I will
persistently refer to a couple of my favorite books [187] and [188].
Physical geometry starts from discussing spaces in which physical
processes can evolve and the coordinate systems that can be introduced in
such spaces. Nowadays, coordinate-free methods (e.g., those based on
differential forms) are becoming increasingly popular. The coordinate-based
approach is, of course, less general but provides a clear and intuitive
introduction to many delicate topics such as covariant derivatives, metric
tensors, geodesics, connections, etc. Therefore, it is still very valuable even
now, when the differential form approach seems to have won the battle. I shall
try to use both approaches, in many cases giving translations from one
language into the other.
There exist many kinds of geometries. The most well-known and natural
from the viewpoint of everyday life is the Euclidean geometry on which
classical physics was based. One usually models point spaces considered in
Euclidean geometry as vector spaces (see the respective section above), but
the entire Euclidean ideology is not always an adequate choice. A slight
generalization of school-time Euclidean geometry produces affine geometry,
also very important for physics, studying geometric properties that are
invariant under the action of the group of affine transformations. When the
Renaissance painters started using pictures as a synthetic language to ex-
press or model reality, projective geometry became essential, although it was
not mathematically formulated in the Renaissance epoch. Projective
geometry studies projective transformations distorting angles and distances
but preserving lines. From the painter’s viewpoint, projective geometry is that
of a perspective. Further development of human civilization quickly brought
new mathematical ideas serving to depict reality, however not in the
conventional naive form. These ideas were quite often formulated as new
geometries such as Riemann geometry (now serving as the mathematical base
for general relativity), Bolyai-Lobachevsky hyperbolic geometry (needed for
spacetime concepts based on special relativity), old (with coordinates and
indices) and modern (coordinateless and based on forms) differential
geometry widely used in many fields of physics, Hermitian geometry (serving
quantum mechanics with its Hilbert space), symplectic geometry (needed for
modern formulations of classical mechanics), convex geometry (primarily
studying the Radon transformations that are very important for modern
technology), computational geometry (the main objects of study being convex
sets, Delaunay triangulations and Voronoi diagrams), non-commutative
geometry (modern microscopic physics), and some other geometry kinds. In
the traditional geometry that makes up unnecessary long courses in high
schools, such structure quantities as distance, angle, segment, triangle,
parallel lines, etc. are the principal notions. Actually, it is these notions that
form the whole school-time geometry. In contrast to this traditional approach,
geometry may be defined by transformation properties, i.e., the
transformation (symmetry) groups can serve as fundamental principles
determining the geometry. This idea was put into the foundation of the so-
called “Erlangen Program” formulated by Felix Klein in 1872. According to this
3.19
Basic Geometry for Physics
159
concept, ordinary geometrical structures, like those studied at schools, are
only of secondary nature and can be derived from the action of the respective
transformation group. In this manner, one can specify a number of standard
geometries by their fundamental groups. For example, the principal
geometries of physics based on transformation groups are as follows:
Euclidean geometry, affine geometry, Riemann geometry, differential
geometry, symplectic geometry, noncommutative geometry. This is just a list -
a short one, and there is no hierarchy between these geometries. They are,
however, connected with physically interesting links - thus momentum
signifies transition from ordinary Euclidean geometry to Lee groups and
symplectic geometry. We are comfortable with the phase space of classical
mechanics as well as with the four-dimensional pseudo-Euclidean geometry
describing the flat spacetime in special relativity and, perhaps to a somewhat
lesser extent, with the conformal geometry of quantum field theory. Later we
shall observe the possible geometrical interpretation of quantum mechanics,
which provides a number of curious connections between different variants
of geometry.
Why is affine geometry important for physics? This issue was quite
clearly explained in [14]. Simply speaking, our world is a 4D affine space 𝐴4
whose elements are events (world points). Any affine space “unites” points
and vectors, in this case 𝐴4 includes the four-dimensional vector space ℝ4
which works as a group of parallel translations: for any two points 𝑎, 𝑏∈𝐴𝑛
one can find 𝐯∈ℝ𝑛 so that 𝑎= 𝐯+ 𝑏. In other words, the action of the vector
space 𝑉𝑛 in 𝐴𝑛 is a map 𝑉× 𝐴→𝐴. One can find the full definition of affine
space in any textbook on geometry, that is why I don’t bring it here.90 For
classical mechanics we can take a particular model of an affine space 𝔸4 with
ℝ4 as a point space and ℝ4 as a vector space of all translation vectors (in this
case, of course, 𝑛= 4). The time is interpreted as a linear map 𝑇: 𝑉→ℝ of the
vector space of parallel translations to the real axis. This map implements a
projection onto one of the coordinates, a temporal one, 𝑇(𝑎, 𝑡) = 𝑡 for 𝑎, 𝑡∈
𝑉, i.e., a split of 𝑉 into ℝ3 × ℝ - a specific feature of classical mechanics
distinguishing it from relativistic theories. The subspace of simultaneous
events is obviously 𝐴3, and the kernel of 𝑇: 𝑉0 = Ker𝑇= {𝑣∈𝑉: 𝑇(𝑣) = 0}
includes all parallel translations linking all simultaneous events. In classical
mechanics, 𝑉0 = Ker𝑇 is simply the spatial part ℝ3. Simply speaking, a set of
all simultaneous events constitutes a 3D affine space. The time difference for
events 𝑎, 𝑏∈𝐴4 is 𝑇(𝑎𝑏) and is zero for vectors 𝐯= {𝑎−𝑏} ∈𝑉0 . The
mathematician would say that fibers 𝑉𝑡= 𝑇−1(𝑡) describe in 𝑉, and
consequently in 𝐴4 the layers of simultaneous events.91
An important feature of the affine space distinguishing it from the
ordinary vector (linear) space is that no coordinate origin is specified in the
90 See,
for
example
affine
coordinates
in
the
Encyclopedia
of
Mathematics
(https://encyclopediaofmath.org/wiki/Affine_coordinate_system).
91 One may recall here that in mathematics notations and terms are often specially designed;
in traditional (pre-string) physics very seldom.
160
Mathematical Potpourri
affine space (and no preferred directions). For creationists’ models of the
world not the affine, but the linear structure would be more appropriate.
An affine map of a plane onto a plane is of the form
𝑥′ = 𝑎11𝑥+ 𝑎12𝑦+ 𝑐1,
𝑦′ = 𝑎21𝑥+ 𝑎22𝑦+ 𝑐2
Let us denote by 𝔸𝑛 an affine space of finite dimension 𝑛 based on the
vector space 𝑉𝑛.
A very important example of a set is ℝ which is a set of all reals. Then
ℝ𝑛 which was previously known as an arithmetic space and is a set of all
real n-tuples i.e., ordered sets of 𝑛 real numbers (arithmetic points). So ℝ𝑛
is an n-dimensional space of points ℝ𝑛= (𝑥1, … , 𝑥𝑛), where each 𝑥𝑖, 𝑖=
1, … , 𝑛 is a real number. It is an n-dimensional real vector (linear) space
so that its elements are called vectors. It means that two operations,
addition and multiplication by a scalar from some field are defined in ℝ𝑛,
with all standard axioms delineating a vector space being fulfilled (see the
section “Vector Spaces” below). In fact, however, points and vectors are
different objects so that one needs to introduce another set called an affine
space, 𝔸𝑛, in addition to ℝ𝑛. From the physical perspective, the affine space
𝔸𝑛 differs from vector space ℝ𝑛 by an absence of a fixed coordinate origin
(see also the respective section on the affine space below). If, additionally,
the vector space ℝ𝑛 is endowed with a Euclidean structure i.e., a positive-
definite bilinear symmetric form (𝑥, 𝑦) called an inner product (also
known as scalar product) is defined on ℝ𝑛 (or on 𝔸𝑛), then such a space is
called a Euclidean space 𝔼𝑛. Thus, a Euclidean space is an inner product
space. Moreover, it is a normed space since inner product (𝑥, 𝑥) ≥0 for
any 𝑥∈𝔼𝑛 so that this product allows us to define the length of a vector 𝑥
as a real nonnegative function of 𝑥, ‖𝑥‖ = (𝑥, 𝑥)1/2 = (𝑥𝑖𝑥𝑖)1/2 satisfying
all the properties required of a norm. These properties are (see any
textbook on linear algebra or functional analysis): (1) ‖𝑥‖ ≥0 , with
‖𝑥‖ = 0 if and only if 𝑥= 0 which means that the norm is the positive-
definite function of its argument 𝑥; (2) ‖𝛼𝑥‖ = 𝛼‖𝑥‖, 𝛼∈ℝ, 𝑥∈𝔼𝑛; (3)
‖𝑥+ 𝑦‖ ≤‖𝑥‖ + ‖𝑦‖, 𝑥, 𝑦∈𝔼𝑛
;
(4)
‖𝑥−𝑦‖ ≤‖𝑥−𝑧‖ + ‖𝑧−
𝑦‖, 𝑥, 𝑦, 𝑧∈𝔼𝑛 (properties (3) and (4) are usually known as the triangle
inequalities).
One can symbolically write an n-dimensional Euclidean space as 𝔼𝑛=
{(𝑥1, … , 𝑥𝑛)|𝑥𝑖∈ℝ, 𝑖= 1, … , 𝑛}. Thus, 𝔼1 represents the real line, 𝔼2 is the
ordinary plane that we study at school (recall the Pythagorean theorem),
and 𝔼3 is our three-dimensional space where all classical nonrelativistic
events take place. When studying classical mechanics, we shall mainly deal
with the Euclidean space. Note that the inner product allows one to define
a distance between two points 𝑥 and 𝑦 of ℝ𝑛 (or 𝔸𝑛), 𝑠(𝑥, 𝑦) = ‖𝑥−𝑦‖ =
(𝑥−𝑦, 𝑥−𝑦)1/2 = (𝑥𝑖−𝑦𝑖, 𝑥𝑖−𝑦𝑖) which is nothing more than the
Pythagorean theorem. This distance is a particular case of a metric
(usually called the Euclidean metric), 𝑠(𝑥, 𝑦), so that Euclidean space is
also a metric space.
3.19
Basic Geometry for Physics
161
It is curious that there seems to be sort of a fight (reminding us of
Jonathan Swift) among mathematicians, at least at the folklore level i.e., not
necessarily in the textbook expositions. Some of the mathematicians argue
that metric spaces should be given a priority, whereas others state that the
concept of metric spaces is an old-fashioned framework, and what people
need is normed spaces or even more - normed algebras. For instance, the
Euclidean space 𝔼𝑛 is more conveniently treated as a normed vector space
such that its norm, namely a positive definite function ‖𝑥‖ →ℝ is the
square root of inner product (𝑥, 𝑥) →ℝ. Accordingly, Euclidean space is
known as an example of an inner product space. Besides, one may notice that
both mathematicians and especially physicists are sometimes quite loose
with terminology - we shall find some examples of this looseness below.
By a domain, I shall mean a connected open subset in ℝ𝑛 or ℂ𝑛 for some
positive integer n. Throughout this book, I shall use 𝑥= (𝑥1, 𝑥2, … , 𝑥𝑛) or
𝑧= (𝑧1, 𝑧2, … , 𝑧𝑛) for the coordinates of a point in ℝ𝑛 or ℂ𝑛, respectively.
Sometimes it is notationally more convenient (for example in relativistic
theories) to consider domains in ℝ𝑛−1 ( ℂ𝑛−1) rather than in ℝ𝑛 ( ℂ𝑛)
starting coordinate sets from 𝑥0 (𝑧0).
For discussing tensors, let us start from recapitulating very simple
facts some of which have already been partly touched upon. Consider the
transition between two coordinate systems, 𝑋 and 𝑌: 𝑥1, … , 𝑥𝑛↦
𝑦1, … , 𝑦𝑛. Here, assuming bijective mapping and in order to avoid some
distracting complications, I implicitly assumed that both coordinate
systems have the same dimensionality n. It would of course be more
accurate to talk about coordinate patches comprising an n-manifold, where
each patch having n-coordinates is an open domain in ℝ𝑛.
If we consider now a simple generalization of the gradient i.e., a
similar operation that will be performed not over a scalar function
𝑓(𝑥): ℝ→ℝ, but over a tensor field 𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟(𝑥1, … , 𝑥𝑛) we shall at first
assume for simplicity the point 𝑥∈ℝ𝑛 to be labeled by Cartesian
coordinates 𝑥= (𝑥1, … , 𝑥𝑛). The derivative
𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟(𝑥1, … , 𝑥𝑛; 𝑘) ≔
𝜕𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟(𝑥1, … , 𝑥𝑛)
𝜕𝑥𝑘
≡𝜕𝑘𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟(𝑥1, … , 𝑥𝑛)
is in general not a tensor with respect to arbitrary coordinate
transformations 𝑦𝑖= 𝑓𝑖(𝑥𝑗) unless this is a linear (affine) transformation
𝐲= 𝐀𝐱+ 𝐃 or, in components, 𝑦𝑖= 𝑎𝑗
𝑖𝑥𝑗+ 𝑑𝑖 where det𝐴≠0 that is the
transformation is considered reversible. To make the formulas less clumsy
I shall restrict myself to linear transformations 𝑦= 𝐴𝑥, 𝑥= 𝐵𝑦 i.e.,
𝐵= 𝐴−1 or 𝑦𝑖= 𝑎𝑗
𝑖𝑥𝑗, 𝑥𝑗= 𝑏𝑘
𝑗𝑦𝑘, 𝑎𝑗
𝑖𝑏𝑘
𝑗= 𝛿𝑘
𝑖, where all matrix
elements 𝑎𝑗
𝑖 and 𝑏𝑗
𝑖 are constants. Indeed, for linear transformations,
second derivatives
𝜕2𝑦𝑖
𝜕𝑥𝑗𝜕𝑥𝑘≡𝜕𝑗𝜕𝑘𝑦𝑖= 0 and tensor 𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟 transformed
to new coordinates 𝑦1, … , 𝑦𝑛) (we may denote this transformed tensor as 𝑇̃)
162
Mathematical Potpourri
𝑇̃𝑙1,…,𝑙𝑠
𝑘1,…,𝑘𝑟(𝑦1(𝑥1, … , 𝑥𝑛), … , 𝑦𝑛(𝑥1, … , 𝑥𝑛))
= 𝜕𝑦𝑘1
𝜕𝑥𝑖1 … 𝜕𝑦𝑘𝑟
𝜕𝑥𝑖𝑟
𝜕𝑥𝑗1
𝜕𝑦𝑙1 … 𝜕𝑥𝑗𝑠
𝜕𝑦𝑙𝑠𝑇𝑗1,…,𝑗𝑠
𝑖1,…,𝑖𝑟(𝑥1, … , 𝑥𝑛)
= 𝑎𝑖1,…,𝑖𝑟
𝑘1,…,𝑘𝑟𝑏𝑙1,…,𝑙𝑠
𝑗1,…,𝑗𝑠
for 𝑦𝑖= 𝑎𝑗
𝑖𝑥𝑗, 𝑥𝑗= 𝑏𝑘
𝑗𝑦𝑘. Note that it is a trivial generalization of the
notion of contra- and covariant transformation of a vector, only the notations
are very clumsy. Since all the coefficients 𝑎𝑖1,…,𝑖𝑟
𝑘1,…,𝑘𝑟 and 𝑏𝑙1,…,𝑙𝑠
𝑗1,…,𝑗𝑠 are constant,
we have after differentiation
𝑇̃𝑙1,…,𝑙𝑠
𝑘1,…,𝑘𝑟; 𝑝≔𝜕𝑝𝑇̃𝑙1,…,𝑙𝑠
𝑘1,…,𝑘𝑟
3.20 Vector Fields
Speaking very crudely, in the case of vector fields, the coefficients 𝑎𝑖
′, 𝑎𝑗 are no
longer numbers but functions of 𝑥. The idea of a field in ℝ3 being applied to
vectors amounts to attaching a vector to each point of ℝ3, this vector changing
smoothly from point to point. The same may be said about the vector ℝ4 and
affine 𝔸4 spaces of classical mechanics. If one considers some general
manifold instead of ℝ3 or ℝ4, the concept of vectors attached to each point
may be based on tangent vectors. Now the question arises: how can we ensure
smoothness while constructing a vector field on a manifold? Will it be the
smoothness of a map 𝑥→𝑉(𝑥) where 𝑉(𝑥) represents some tangent vector
at point 𝑥? We know that smoothness always implies some differential
structure, therefore we shall try to introduce and analyze such a structure.
But before we proceed with it, let us recall some elementary geometric
notions.
The school definition of a vector introduces the quantity having both
absolute value and direction. In a differential context, to some extent
discussed above, this traditional description, where the term “quantity” is
evasively indistinct, one usually defines a vector in ℝ𝑛 as a set of 𝑛 numbers
𝑎𝑖, 𝑖= 1, … , 𝑛 that transform according to the rule
𝑎′𝑖= 𝜕𝑥′𝑖
𝜕𝑥𝑗𝑎𝑗
(3.10)
(see section “Vector Spaces” for details). Two sets of 𝑛 numbers, 𝑎′𝑖 and 𝑎𝑗,
actually represent the same vector, provided they are related by this formula.
In more formal terms, a vector field 𝐹(𝑥) on a manifold 𝑀 is a smooth section
of the tangent bundle 𝑇𝑀, i.e., for each point 𝑝∈𝑀 a choice of a tangent
vector 𝐹(𝑥) ∈𝑇𝑝𝑀 is possible so that a map between manifolds 𝐹: 𝑀→𝑇𝑀 is
smooth.
3.21
Geometry and Physics
163
3.21 Geometry and Physics
In this section, one can find a non-technical overview of main ideas and
approaches when physicists are trying to adapt contemporary geometry for
their purposes.
I have already remarked that there exist many interrelationships,
sometimes striking, between many domains of physics and mathematics that
are traditionally regarded as completely autonomous. Probably the most
obvious example is given by countless links between geometry and physics:
the two disciplines are in many fields so interpenetrating that it would be hard
to tell the difference between geometry and physics, for instance, in classical
mechanics, theory of dynamical systems, relativity theory, gauge models of
fundamental interactions and nearly all other concepts or modern quantum
field theory. In this section, one can find a brief non-technical overview of
main ideas and approaches used by the physicists who are trying to adapt
contemporary geometry for their purposes. The standard evidence of
intimate ties between physics and geometry is provided by relativity theory.
Special relativity, for example, is a purely geometric theory in which three
habitual dimensions of space 𝑟= (𝑥, 𝑦, 𝑧) ∈ℝ3 are combined with a
single-dimensional time 𝑡∈ℝ forming a four-dimensional manifold
usually called a spacetime. Adding one more dimension to Euclidean space
changes its basic symmetry: the symmetry group of our 3𝑑 Euclidean space is
the Euclidean group 𝐸(3) whereas the symmetry group of the Minkowski
spacetime is the Poincaré group.
We shall define metric in the Minkowski space by the diagonal tensor
𝑔𝑖𝑘≡𝛾𝑖𝑘 with the following signature: 𝑔00 = −𝑔11 = −𝑔22 = −𝑔33 =
1, i.e., the scalar product is (𝑎, 𝑏) ≔𝑎𝑏= 𝑎0𝑏0 −𝐚𝐛, 𝑎𝛼= −𝑎𝛼, 𝛼=
1,2,3.
Based on this metric, we can define the D’Alembert operator
☐≔∆−𝜕0𝜕0 = −𝜕𝑖𝜕𝑖, which is wave operator used in special
relativity. It is used for formulating wave equations of
electromagnetic fields.
3.22 Geometry of Classical Mechanics
A correct formulation of classical mechanics is based on the notions of
geometry and symmetry, being in fact inseparable from them - we shall see it
many times shortly. In particular, classical mechanics and differential
geometry are close relatives, being affectively devoted to each other and even
inseparable. The reader could probably get a feeling from the previous
material that even nonrelativistic classical mechanics may be viewed as a
specific field theory over a one-dimensional base (time). We shall return to the
discussion of the differential-geometric and field aspects of classical
mechanics in a number of places of this book. I have already mentioned that
mathematics has traditionally served as a glue binding together a variety of
physical subjects; in the specific case of classical mechanics, it is Lie groups
and algebras that play a fundamental role of the main gluing substance.
164
Mathematical Potpourri
Typically, geometric formulation of mechanical problems faces some
conceptual difficulties, at least during the first study. These difficulties are
mostly due to the counter-intuitive transformation of velocities and
accelerations between generic curvilinear and non-inertial systems. More
specifically, the source of embarrassment is hidden in the fact that in the
Euclidean (or Cartesian, or Galilean) coordinates differentials of some vector
𝑎𝑖 also form a vector, together with the time derivatives 𝑑𝑎𝑖/𝑑𝑡, and the
partial derivatives over coordinates 𝜕𝑘𝑎𝑖≡𝜕𝑎𝑖/𝜕𝑥𝑘 form a tensor. However,
one cannot uncritically spread this intuitive perception of differentiating a
vector over the arbitrary curvilinear or non-inertial coordinate systems, when
the metric tensor 𝑔𝑖𝑘 is some (in general non-linear) function of coordinates:
in this case 𝑑𝑎𝑖 is not a vector and 𝜕𝑘𝑎𝑖 is not a tensor, because in different
spatial points vectors transform differently. It means that velocity, for
example, is characterized by a distinct vector in each coordinate system;
moreover, all such velocity vectors will have diverse sets of components in
every system.
Recall in this connection that a vector, from a geometric viewpoint, is a
linear operator acting on functions (see the above discussion of vector,
affine, and dual spaces). The output of the operator is the derivative of the
input function in the direction defined by the vector. For instance, the
tangent vector 𝑎𝑞≔(𝐚, 𝐪), where 𝐪∈𝐔 is a point in an open set 𝑈∈ℝ𝑛,
operates on any differentiable function 𝑓(𝐪) according to the formula
𝑎𝑞(𝑓) = 𝑎𝑖𝜕𝑖𝑓(𝐪) i.e., the tangent vector 𝑎𝑞 is a differential operator. In
elementary textbooks, vectors are conventionally identified with their
components, which may bring some confusion. In classical mechanics,
vectors such as position, velocity, or acceleration live in the tangent
bundles whereas forces belong to the cotangent ones. So, there must be a
natural duality between vectors and forces, and in the traditional
mechanics (e.g., of constrained systems) such a duality is ensured by the
principle of virtual work.
The fundamental concepts of space and time are essentially defined
through affine spaces and parallel transport. The Galilean spacetime structure
defining the class of inertial systems serves as a fundamental symmetry (the
Galileo group) for the classical world92. One may, however, notice that the
affine structure alone is not sufficient to fully describe the physical space. One
must also provide means to measure lengths and angles. This necessity leads
to introducing the Euclidean structure, i.e., the Euclidean scalar product on
the vector space 𝑉, which is a symmetrical bilinear form, (, ): 𝑉× 𝑉→ℝ with
92 When studying quantum models (Chapter 6) we shall observe that spacetime
transformations embraced by the Galileo group play in quantum mechanics a slightly different
role - at least in some aspects - as in classical mechanics. Translations, rotations, spatial
reflections induce unitary transformations of each element of Hilbert space, for such elements
are defined with respect to ℝ𝑛× ℝ≈. Temporal reflections induce anti-unitary transformations
(see also Chapter 9). If the Hamiltonian operator 𝐻 defining a quantum system is invariant under
Galileo transformations, then one can deduce some nontrivial statements about eigenvalues and
eigenvectors (wave functions).
3.22
Geometry of Classical Mechanics
165
(𝑣, 𝑣) > 0 for all 𝑣≠0, 𝑣∈𝑉. This scalar product results in the distance
between two simultaneous points 𝑎, 𝑏 (on fibers 𝑉𝑡)
𝜌(𝑎, 𝑏) = (𝑎−𝑏, 𝑎−𝑏)
1
2 = ‖𝑎−𝑏‖
(3.11)
and makes each fiber 𝑉𝑡⊂ℝ4 a 3D Euclidean space (fig.3.1) or, more
generally, a manifold 𝑀 which is not necessarily Euclidean (Galilean).
Roughly speaking, the n-dimensional Euclidean space 𝔼𝑛 is the vector
space ℝ𝑛 endowed with the Euclidean metric.
Figure 3.1.: Galilean and curved spacetime fibrations. Here 𝛾(𝑡) is a path of a
material point (world line) represented as a curve in spacetime ℝ3 × ℝ.
Slightly generalizing this picture, we might say that a global time mapping
should exist on the spacetime, i.e., a function 𝑡: 𝑀→ℝ whose gradient is
everywhere timelike. The existence of this mapping signifies that the
spacetime can be foliated into time-ordered spacelike hypersurfaces, each
corresponding to 𝑡𝑖= 𝑐𝑜𝑛𝑠𝑡, 𝑖= 1,2, …. The mapping 𝑡: 𝑀→ℝ guarantees
simultaneity of all events contained in t-hypersurfaces (see, e.g., [189]). Each
spacelike hypersurface 𝑉𝑡 corresponding to some value 𝑡 of the global time
splits the entire spacetime into two halves, being in the Galilean-Newtonian
picture the mirror copies of one another. Initial conditions to evolutionary
equations are set up on any hypersurface 𝑉𝑡. In a curved spacetime this time-
166
Mathematical Potpourri
reflection symmetry can be broken: there may be no spacelike hypersurfaces
𝑉𝑡 from which the spacetime observed in two opposite directions of time
looks identically. This temporal asymmetry is expressed by the spacetime
metric 𝑔𝑖𝑘.
The Galileo group expressing the invariance properties of classical
mechanics, together with the Euclidean, Lorentz and Poincaré groups, may be
all considered the relativity groups. Mathematically, all these groups are
particular cases of semidirect group construction, which is often used in
group theory and its applications. Take as a basic example the Euclidean group
𝐸(3) or 𝑆𝑂(3) acting on ℝ3 - it is a semidirect product of rotations and
translations. Generalizing this construction a little bit, we get some group 𝐺
acting on a vector space 𝑉 (and on its dual space 𝑉∗)93. In a nonrelativistic
framework (Chapter 4), the Galileo group 𝐺 is the natural kinematic
model. Adding independent space and time dilations, we get the affine Galileo
group. It is interesting that by imposing certain constraints on these dilations
we may get other invariance groups important for physics, for example, the
Schrödinger group (the invariance group of the Schrödinger and heat equation)
and the Poincaré group essential for relativity theory. To better understand all
this terminology let us write down the generic element of the affine relativity
group, 𝐴𝑅. The latter is the Galileo group G94 combined with independent
spatial and temporal dilations. Recall that dilations (also called dilatations)
are, in the physical language, just scaling transformations whereas from the
mathematical standpoint these transformations represent a specific case of
the conformal group. Dilatations are usually understood as a collection of
maps from a metric space into itself that scale the distances between each two
points. Thus, dilatations give the result of uniform stretching or shrinking,
perhaps accompanied by rotations. In school geometry we study similarity
transformations, which are dilatations in Euclidean space. Dilatations
comprise a subgroup of the affine group (see above), with matrix 𝐴 of a linear
transformation is an orthogonal matrix multiplied by a scalar (dilatation
ratio). So roughly speaking, a dilatation is a transformation that expands or
contracts all points with respect to a given central point by some ratio, which
may be greater or smaller than unity. Therefore, in order to specify a
dilatation, one has to fix the central point and the ratio. A well-known physical
example of dilatations is the γ-factor, which appears in relativity. It is always
greater than unity but very close to it for nonrelativistic velocities. If the
velocity (of a particle, for example) equals the velocity of light c, γ-factor is
infinite and for velocities greater than c it is, strictly speaking, undefined.
Let us take a look for a moment at the affine relativity group 𝐴𝑅. A
generic element of 𝐴𝑅 may be denoted as 𝑔= (𝜑, 𝑡0, 𝑥0, 𝑣, 𝑅, 𝛼, 𝛽) where
𝑡0 ∈ℝ, 𝑥0 ∈ℝ𝑛 are time and space translations, 𝑣∈ℝ𝑛 in the boost
93 G may be, e.g., a Lie group or a group of automorphisms.
94 Some authors consider the group of Galilean transformations not being the affine
group since in general one cannot write 𝑥↦𝑥′ = 𝐀𝑥+ 𝑏 where matrix 𝐀∈𝑂(𝑛), but it
depends on the definition of the affine group. Here, we simply consider any Euclidean
group 𝐸(𝑛) to be a subgroup of an affine group 𝐴(𝑛) in n dimensions.
3.22
Geometry of Classical Mechanics
167
parameter, 𝑅∈𝑆𝑂(𝑛) is a rotation, 𝛼 and 𝛽 are time and space dilatations,
respectively.
It is interesting, by the way, that dilatations provide a link to quantum
theory where there the quantity 𝐫∇𝑉= 𝑥𝑖𝜕𝑖𝑉 (𝑉 is the potential) known as
virial is considered. In fact, operator 𝑥𝑖𝜕𝑖 is the generator of the dilatation
group. The virial theorem in quantum mechanics states that when wave
function 𝜓 is the eigenfunction of the Schrödinger operator, then the
expectation value of the virial is twice the kinetic energy i.e., (𝜓, 𝐫∇𝑉𝜓) =
2(𝜓, −∆𝜓).
Now, let us return to affine geometry of classical mechanics. The
fundamental group for this geometry is the group of all affine
transformations, i.e., the composition 𝑢= 𝑃(𝐚) ∘𝑣 where 𝑃(𝐚); = 𝑥+ 𝐚 is
the translation by vector 𝐚, 𝑉 is the vector space over ℝ, 𝑣∈𝐺𝐿(𝑉) is the
linear and bijective mapping, 𝐺𝐿(𝑉) is, as usual, the general linear group.
Simply speaking, the group of affine transformations or affine group is just a
group of linear maps supplemented by translations. The transition from the
Euclidean group 𝐸(𝑛) to the affine group 𝐴(𝑛) exemplifies a mathematical
structure which is called a semidirect product (see some details in [187], 4).
So, the Euclidean group 𝐸(𝑛) is a subgroup of affine transformations.
The affine group 𝐴(𝑛) in n-dimensional space works over the general
linear group and is isomorphic to the group of matrices of order 𝑛+ 1 of the
type
𝐿= (𝐀
𝐚
0
1)
- a fact extensively exploited also in computer graphics (where 𝑛= 3 of
course) [95]. To put it another way, we may specify each element of the group
of affine transformations in an n-dimensional space in two ways: either by a
pair (𝐀, 𝐚), with 𝐀 being an 𝑛× 𝑛 orthogonal matrix and 𝐚 being a real n-
dimensional vector, or by a single (𝑛+ 1) × (𝑛+ 1) square matrix.
One may notice that affine groups are simple specific cases of Lie groups.
An affine group on ℝ𝑛 is a combination of orthogonal (in general
pseudoorthogonal) transformations and translations in the vector space ℝ𝑛.
Now we might try to discern at which point geometry enters classical
mechanics. To begin with, we are often reminded of geometric optics in
textbooks on mechanics. This allusion actually means that the ray of light
selects the path between each two points, 𝑥1 and 𝑥2, requiring the shortest
time to travel. In our everyday life we usually say that light travels in straight
lines. In simple mathematical terms, this ray theory of geometric optics as well
as of classical mechanics is formulated by the expression
𝛿∫𝑛(𝑥)𝑑𝑠
𝑥2
𝑥1
= 0,
(3.12)
168
Mathematical Potpourri
where function 𝑛(𝑥) is in optics the refraction index, 𝑛(𝑥) = 𝑐𝑣(𝑥)
⁄
, 𝑑𝑠=
𝑣𝑑𝑡, 𝑑𝑡= 𝑑𝑠𝑛(𝑥) 𝑐
⁄ , 𝑐 is the velocity of light in vacuum, 𝑣(𝑥) is its
velocity in the medium. An analogous expression holds also for mechanical
particles, and now we shall find this analog. The above expression is an
elementary path integral construction, where time does not appear explicitly
- usually a very convenient presentation, a predecessor of the Feynman path
integrals [44], see below “Path Integrals in Physics”. It is interesting that the
ideas of replacing time when considering the mechanical motion by other
variables circulated already in the 18th century (Euler, Fermat, Jacobi).
Excluding the time as a parameter leads directly to geometrization of
mechanics. Indeed, let us consider, for simplicity, a single free particle whose
kinetic energy is
𝑇= 1
2 𝑚𝑖𝑘
𝑑𝑥𝑖
𝑑𝑡
𝑑𝑥𝑘
𝑑𝑡.
Here, the quantities (𝑥) are the components of the mass tensor. (A little later
we shall see that the notion of mass is highly nontrivial and stirs a lot of
controversy up to now.) In the simplest (e.g., isotropic) case 𝑚𝑖𝑘= 𝑚𝛿𝑖𝑘.
Introducing
the
length
element
by the
usual
expression 𝑑𝑠2 =
𝑚𝑖𝑘(𝑞𝑗)𝑑𝑞𝑖𝑑𝑞𝑗, where 𝑞𝑖 are the generalized coordinates in the three-
dimensional coordinate space95, we get for the kinetic energy
𝑇= 1
2
𝑑𝑠2
𝑑𝑡2.
We may observe that the mass tensor 𝑚𝑖𝑘 begins to play the role of metric
tensor 𝑔𝑖𝑘. This simple analogy provides a direct link from mechanics to
geometry. We see that the attained geometry is non-Euclidean since the
components of mass (metric) tensor are position dependent. For instance, in
cylindrical coordinates we have
𝑚𝑖𝑘= 𝑑𝑖𝑎𝑔(𝑚, 𝐼, 𝑚) = (
𝑚
0
0
0
𝐼
0
0
0
𝑚
),
where 𝐼≔𝑚𝑟2 is the moment of inertia of a point mass, and
𝑇= 1
2 𝑚𝑑𝑟2 + 𝑟2𝑑𝜑2 + 𝑑𝑧2
𝑑𝑡2
.
Mass tensor models are frequent attributes of condensed matter physics
where the anisotropic response of quasiparticles (electrons, excitons,
polarons, etc.) to an applied force should be described. They are an important
95 In this simple example, the coordinate manifold is three-dimensional, in a more
general case of N particles it obviously has n = 3N dimensions.
3.23
Transformation of Affine Coordinates
169
concept in condensed matter, particularly in energy band theory. The mass
tensor is also encountered in other fields of physics such as relativistic
theories, nuclear physics, studies of particle motion in a magnetic field (e.g.,
in accelerators), theory of elasticity, soft tissue and polymer physics,
mechanics of the rigid body, robotics, study of mobility and frictional effects,
etc. In general, in the non-relativistic picture mass is a covariant tensor to be
contracted with two contravariant velocities producing a scalar kinetic
energy, 𝑇= (1 2
⁄ )𝑚𝑖𝑘𝑞̇ 𝑖𝑞̇ 𝑘. The corresponding inverse mass tensor is given by
(𝑚−1)𝑖𝑘. In general, two tensors 𝑎𝑖𝑘 and 𝑏𝑖𝑘 are called inverse96 if 𝑎𝑖𝑗𝑏𝑗𝑘=
𝛿𝑖
𝑘. We can define the contravariant mass tensor 𝑚𝑖𝑘 to be inverse to 𝑚𝑖𝑘
which means that (𝑚−1)𝑖𝑘= 𝑚𝑖𝑘, analogously to the metric tensor 𝑔𝑖𝑘. One
might observe that the mass tensor, at least in many examples, is an
involution, i.e., its own inverse.
3.23 Transformation of Affine Coordinates
Let us start from the most primitive notions. Affine coordinates are defined
as rectilinear coordinates in an affine space. If we have an n-dimensional
affine space, in which we have fixed a basis, with each point 𝑥 in it has
coordinate labels 𝑥 which are transformed as
𝑥′𝑗= 𝑎𝑖
𝑗𝑥𝑖+ 𝑏𝑗
(3.13)
when we move from one affine coordinate system (𝐾) to another (𝐾′). The
coefficients 𝑎𝑖
𝑗 forming elements of a matrix 𝐴 are, in principle, arbitrary; the
only condition on this matrix, det𝑎𝑖
𝑗= det𝐴≠0, ensures that one can express
the old coordinates 𝑥𝑖 through the new ones 𝑥′𝑗. This reversibility of the
coordinate transformation (𝐾↔𝐾′) in fact means that the systems 𝐾 and 𝐾′
(which are arbitrary affine coordinate systems) are equivalent.
Affine coordinate unit vectors (formerly often called repère) in 𝐾′ can be
expanded over unit vectors in 𝐾
𝐞′
𝑗= 𝛼𝑗
𝑖𝐞𝑘
(3.14)
96 Inverting a tensor or a matrix requires in general some tedious calculations
usually performed on a computer. A more or less simple formula is practical only for
small dimensionality such as, for example 3 × 3. Thus, for matrix (tensor)
𝐴= (
𝑎
𝑏
𝑐
𝑑
𝑒
𝑓
𝑔
ℎ
𝑖
)
𝐴−1 =
1
𝑑𝑒𝑡𝐴(
−𝑓ℎ+ 𝑒𝑖
𝑐ℎ−𝑏𝑖
−𝑐𝑒+ 𝑏𝑓
𝑓𝑔−𝑑𝑖
−𝑐𝑔+ 𝑎𝑖
−𝑑𝑏+ 𝑎𝑒
−𝑒𝑔+ 𝑑ℎ
𝑏𝑔−𝑎ℎ
−𝑏𝑑+ 𝑎𝑒
)
170
Mathematical Potpourri
where coefficients 𝛼𝑗
𝑖 represent a matrix that must be related to the
previously encountered matrix 𝑎𝑖
𝑗. This relationship is well-known and can
be found in any textbook on linear algebra. For the sake of completeness, I
shall bring here some main formulas. Vectors 𝐞′
𝑗 and 𝐞𝑘 may be chosen
arbitrary, the sole requirement being that they are linearly independent,
which results in det𝛼𝑗
𝑖≠0. Due to this condition, the inverse transform
𝐞𝑖= 𝛽𝑖
𝑘𝐞′
𝑘
(3.15)
does exist, with matrix 𝛽𝑖
𝑘 being inverse to 𝛼𝑗
𝑘, i.e., 𝛼𝑖
𝑘𝛽𝑗
𝑖= 𝛿𝑗
𝑘 and 𝛼𝑖
𝑘𝛽𝑘
𝑗= 𝛿𝑖
𝑗.
These formulas express a simple identical transformation 𝐞𝑖= 𝛽𝑖
𝑘𝛼𝑘
𝑗𝐞𝑗 (one
may recall that the expansion on repère vectors is unique). Now let us write
the components of the same vector 𝐱 in two different coordinate systems, 𝐾
and 𝐾′:
𝐱= 𝑥𝑖𝐞𝑖= 𝑥′𝑗𝐞′
𝑗= 𝑥𝑖𝛽𝑖
𝑘𝐞′
𝑘= 𝑥𝑖𝛽𝑖
𝑘𝛿𝑘
𝑗𝐞′
𝑗
(3.16)
Comparing these two expressions, we have
{𝑥′𝑗= 𝛽𝑖
𝑘𝑥𝑖𝛿𝑘
𝑗= 𝛽𝑖
𝑗𝑥𝑖
𝐞′
𝑗= 𝛼𝑗
𝑖𝐞𝑖
(3.17)
One may see that when one moves from 𝐾 to 𝐾′, the vector coordinates
are transformed with the matrix 𝛽𝑖
𝑗 which is the inverse transposed of 𝛼𝑖
𝑗.
This is the traditional linear algebra techniques. Later we shall see that
the basic facts from linear algebra are related to the spectral properties of
Hermitian matrices in finite-dimensional spaces. Here the connectivity with
other physmatical domains, e.g., quantum theory, is reflected in the faith that
unbounded self-adjoined operators in Hilbert spaces are characterized by
similar properties.
3.24 General Coordinate Transformations
The above example shows some simple properties of coordinate
transformations, namely the connection between matrices relating the
change of basis vectors for two affine coordinate systems with the matrices
governing the transformation of vector components in the transition between
these different coordinate systems. The problem of a change of coordinate
systems is ubiquitous in physmatics and may be considered one of the most
important. We shall encounter it practically in all fields of physics: relativity
theory is essentially the discipline studying the invariance properties of
physical quantities under coordinate transformations; quantum mechanics is
based, to a large extent, on a mathematical model of a unitary state space
understood as an infinite-dimensional Hilbert space, with state vectors
represented as linear combinations of basis states being transformed from
3.25
Variational Methods
171
one representation to another (see Chapter 6); in classical mechanics,
canonical transformations (the group) preserving the measure in the phase
space are a powerful tool to simplify problems by making a canonical change
of variables (see Chapter 4); the Fourier transform, which is one of the main
mathematical instruments of physics (see Chapters 5, 6), is nothing more than
a transformation between different bases (transition to a dual basis); change
of coordinates (substitution) is a favorite trick in integration, and so on.
Therefore, it would be worthwhile to study some general principles of general
coordinate transformations.
It is curious that this subject, even in its not mathematically refined and
abstract version, seems to be traditionally difficult for the students. So, I shall
try to leave no ambiguities in telling the story of coordinate transformations
and especially as far as their relationship with various physical models is
concerned.
3.25 Variational Methods
The most demonstrative example of the usefulness of variational calculus is
traditionally given by the Lagrangian formulation of classical mechanics (see
the next chapter). In the variational formulation of classical mechanics, the
system (e.g., particle) trajectories 𝑞(𝑡) are extremizers (minimizers), or at
least critical points, of the action integral with fixed endpoints
𝑆= ∫𝐿(𝑡, 𝑞, 𝑞̇)𝑑𝑡
𝑡1
𝑡1
(3.18)
where 𝐿(𝑡, 𝑞, 𝑞̇): ℝ2𝑛+ 1 →ℝ is the Lagrangian - the difference between
kinetic and potential energy. One usually assumes the Lagrangian to be
smooth and strictly convex in 𝑣≡𝑞̇ , i.e., 𝜕𝑣𝑣
2 𝐿> 0 (physically this last
condition may be wrong for the systems with negative mass). The minimizing
trajectories are then the solutions to the Euler-Lagrange equations
𝛿𝑆
𝛿𝑞(𝑡) = 𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇𝑖
−𝜕𝐿
𝜕𝑞𝑖
=
(3.19)
Here we have used the symbol of functional derivative (see below).
In the classical monograph by R. Courant and D. Hilbert [7], chapter 7, one
can find a very interesting and, in my opinion, quite actual discussion of the
relation between the calculus of variations and the boundary value
(eigenvalue) problems. This relationship has been thoroughly worked out
ever since and served as a foundation for the so-called direct methods of
variational calculus.
In physics, especially in quantum field theory (QFT), it is customary to
introduce the notion of a so-called functional derivative. To elucidate the idea
of such a derivative, let us consider the system with a single degree of
freedom. By definition, the functional derivative of a quantity 𝑆 with respect
to 𝑞(𝑡) is written as
172
Mathematical Potpourri
𝛿𝑆[𝑞, 𝑞̇]
𝛿𝑞(𝑠) = lim
𝜖→0
𝑆[𝑞(𝑡) + 𝜖𝛿(𝑡−𝑠), 𝑞̇(𝑡) + 𝜖𝑑
𝑑𝑡𝛿(𝑡−𝑠)] −𝑆[𝑞(𝑡), 𝑞̇(𝑡)]
𝜖
(3.20)
Let us expand the quantity containing delta-functions:
𝑆[𝑞(𝑡) + 𝜖𝛿(𝑡−𝑠), 𝑞̇(𝑡) + 𝜖𝑑
𝑑𝑡𝛿(𝑡−𝑠)] =
∫
𝑑𝑡𝐿(𝑞(𝑡) + 𝜖𝛿(𝑡−𝑠), 𝑞̇(𝑡) + 𝜖𝑑
𝑑𝑡𝛿(𝑡−𝑠))
𝑡1
𝑡1
=
∫𝑑𝑡𝐿(𝑞, 𝑞̇)
𝑡1
𝑡1
+ 𝜖∫
𝑑𝑡(𝜕𝐿
𝜕𝑞𝛿(𝑡−𝑠) + 𝜕𝐿
𝜕𝑞̇ 𝛿(𝑡−𝑠))
𝑡1
𝑡1
+ 𝑜(𝜖) =
𝑆[𝑞, 𝑞̇] + 𝜖(𝜕𝐿
𝜕𝑞+ 𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇)
𝑡=𝑠
+ 𝑜(𝜖)
(3.21)
So the Euler-Lagrange equations may be written as
𝛿𝑆
𝛿𝑞(𝑡) = 0
(3.22)
(where we have put 𝑡= 𝑠).
3.26 Differential Equations
In fact, we have already encountered differential equations many times, but
this subject is so important for physics that I shall try to briefly describe a few
methods of obtaining solutions to differential equations, both in the two-
variable domain (ordinary differential equations, ODE) and many-variable
domain (partial differential equation, PDE). Differential equations may be
without exaggeration called the principal tool of physics.
The term œquatio differentiale was probably first coined by Leibnitz to
designate the relationship between variables 𝑥, 𝑦 and their infinitesimal
increments (differentials) 𝑑𝑥, 𝑑𝑦.
In the linear case, 𝐱̇ = 𝐴(𝑡)𝐱, 𝐱∈ℝ𝑛, matrix function 𝐴(𝑡): ℝ𝑛→ℝ𝑛 may
be considered smooth, but in a number of physically interesting
applications one can only require this matrix to be continuous in 𝑡∈𝐼⊆
ℝ. Physicists usually don’t pay attention to such sophistry, but would it be
mathematically correct? Curiously enough, yes, because the proof of the
famous Cauchy-Peano (or Picard-Lindel¨of) theorem stating existence and
uniqueness of a solution as well as its continuity with respect to initial data
of a differential equation, 𝐱(𝑡0) = 𝐱0 uses only the Lipschitz condition or,
at maximum, differentiability over 𝑥 for a fixed 𝑡 (see above more on this
theorem and also below the section on dynamical systems) so that
continuity in 𝑡 would be sufficient.
3.26
Differential Equations
173
A second-order linear differential equation
𝑦′′ + 𝑝𝑦′ + 𝑞= 0,
(3.23)
where 𝑝(𝑥) and 𝑞(𝑥) is extensively used in the wave-mechanical (Schrödinger)
version of quantum mechanics.
In this chapter, well-known mathematical facts and results are rendered.
I have provided only a cursory coverage of geometric methods in physics
since, firstly, I am in no way a specialist and, secondly, there are excellent texts
on modern geometry with physical applications such as [187]. The main
purpose of mathematical potpourri is to ensure a more or less firm ground to
painlessly read the major part of the literature on current physics, in
particular where geometric methods are used. I was hoping that this section
will at least help to unite physical and mathematical communities overcoming
corporate arrogances and isolationism.
174
Classical Deterministic Systems
4 Classical Deterministic
Systems
Deterministic systems are those for which the full dynamic description can be
achieved without introducing the concept of probability. Ideally, in
deterministic systems the state of the system at an initial moment 𝑡0 uniquely
determines the future behavior of the system, i.e., for any 𝑡> 0. The study of
continuous deterministic systems may be reduced to mathematical models
based on differential equations for the functions giving the state of a system.
The first - and the most famous - model of this type was the one of Newton
and is usually called Newtonian mechanics. It studies the motion of a system
of material points in ℝ3 and may be regarded as a special case of classical
dynamics.
Classical dynamics is probably the most developed part of science, it
studies the evolution of systems made of material points - bodies that are so
small that their inner structure is disregarded and the only characteristic is
their position in space, 𝐫𝑖= 𝐫𝑖(𝑡), 𝑖= 1,2, … , 𝑁 where 𝑁 is the number of
points incorporated into the system. Typically, in classical mechanics the so-
called generalized coordinates 𝑞𝑖(𝑡) are used, which correspond to the
degrees of freedom of a mechanical system. Generalized coordinates have
been introduced by Lagrange and are especially useful in Lagrangian
mechanics (see below). In modern terminology using generalized coordinates
means that one should consider possible configurations of a mechanical
system as points on a differentiable manifold. Naturally, one can also use any
system of local coordinates which are convenient from a practical point of
view. We have already observed in Chapter 3 the general change of
coordinates; below we shall see in some detail how to pass from one system
of curvilinear coordinates to another. In modern terminology this means that
one should consider possible configurations of a mechanical system as points
in a differentiable manifold, and, of course, then use any system of local
coordinates which are continuously differentiable.
Here, one may notice that the number of degrees of freedom is defined in
classical mechanics as dimensionality of the configuration space of a physical
system. For example, a system with two degrees of freedom is 𝐫̈ = 𝐅(𝐫, 𝑡)
where 𝐅 is a plane vector field, 𝑟∈ℝ2. If we convert this equation into the
dynamical system form, we shall get four first-order equations and,
respectively, four degrees of freedom as the dimensionality of the vector
system of differential equations (defining a vector field) corresponding to a
dynamical system. In other words, there may be a certain confusion in
counting the number of degrees of freedom when one passes from
conventional classical mechanics to the dynamical systems theory. The set of
possible paths 𝑞𝑖(𝑡) in the configuration space of generalized coordinates
completely describes the system. Of course, these paths should be single-
4.1
Main models of classical mechanics
175
valued functions, usually with a compact support, i.e., defined on a compact
time interval[𝑡1, 𝑡2]. Time 𝑡 is clearly a very important variable; in the static
models the role of dynamics - not only evolution – it has been excluded. One
can of course take the dynamics into account employing a parameter which
changes with time (such as the growing size of some object). This approach is
used, for example, in self-similar models.
4.1
Main models of classical mechanics
The primary feature of models based on classical deterministic (dynamical)
systems is causality, i.e., in such models the effect cannot precede the cause
and the response cannot appear before the input signal is applied. One may
note that causality does not follow from any deeply underlying equation or a
theory, it is simply a postulate, a result of human experience. Non-causal
systems would allow us to get the signals from the future or to influence the
past.
Causality is closely connected with time-reversal non-invariance (the
arrow of time). The time-invariance requires that direct and time-reversed
processes should be identical and have equal probabilities. Most
mathematical models corresponding to real-life processes are time non-
invariant (in distinction to mechanical models). There is a wide-spread belief
that all real processes in nature, in the final analysis, should not violate time-
reversal invariance, but this presumption seems to be wrong (see Chapter 9).
Now, let me say a few words about bibliography. There exist tons of books
on classical mechanics, but most of them must be updated by adding a number
of modern subjects such as nonlinear phenomena, dynamical systems and
differential geometric methods. The first book on classical dynamics that I
studied was a comparatively thin book by F. R. Gantmacher [21], the Russian
(Soviet) mathematician well-known for his monumental volume “The Matrix
Theory” [22]. Gantmacher’s book on analytical dynamics was simple and
rigorous, so one could easily trace all the conventional transformations and
derivations comprising the major part of traditional analytical dynamics (e.g.,
transition to generalized coordinates, canonical transformations, etc.).
Difficult topics were treated by the author in a comprehensive manner. After
the nice Gantmacher’s “Lectures”, the obligatory course of L. D. Landau and E.
M. Lifshitz [23] at first seemed too lapidary and full of declarative
prescriptions. Only much later did I realize that the whole classical multi-
volume course by Landau and Lifshitz was intended not for students, but
rather for professionals working specifically in the field of theoretical (not
mathematical!) physics. Classic books are invariably valuable, but they are not
always the best ones for a concrete person. Despite some strange but luckily
rare mistakes, it is an undeniable fact that every physicist (at least in Russia)
has learned a lot from these books and specifically from “Mechanics”. Now I
think that as far as classical dynamics goes, the book by V. I. Arnold [14] is
fully sufficient to provide the basic knowledge. Despite being over thirty years
old, this is the kind of book belonging to the narrow class I appreciate most of
all: one that contains the information you need. The meticulous text of Arnold
covers almost every subject of classical mechanics (and some extra), starting
176
Classical Deterministic Systems
from elementary Newtonian mechanics to semi-classics (short-wave
asymptotics) and perturbation theory. In my opinion, this book, like some
volumes of the course by Landau and Lifshitz, may be considered an example
of connected sciences approach.
Physicists often treat classical mechanics with an element of disdain or at
least as something alien to “real” physics, say condensed matter physics.
Personally, I think that classical mechanics is extremely useful for any part of
physics, for many great ideas are rooted in classical mechanics. Examples of
Gibbs and Dirac who persistently looked for opportunities to transfer
methods of classical mechanics to other fields are very persuasive. One may
notice that the ideas from dynamical systems theory have been present in
mechanics from the time of Lyapunov and Poincaré, but could not overcome
the barrier between classical mechanics and physics until the 1970s when
nonlinear dynamics all of a sudden became of fashion.
Why is classical mechanics so important? The matter is that classical
mechanics has been the primary collection of mathematical models about the
world, mostly about the motion of its objects. The principal aim of classical
mechanics is to describe and explain this motion, especially under the
influence of external forces. Ancient thinkers were enchanted by the celestial
clockwork and devised a number of interesting models, the most well-known
among them was that of Aristoteles (Aristotle), which can be formulated as
the statement that the motion of bodies is possible only in the presence of
external forces produced by other bodies. This verbal construct of Aristoteles
can be translated into the mathematical language as the first-order
differential equation, with the state of a moving body (or particle) being
described by three coordinates (𝑥, 𝑦, 𝑧) that change under the influence of an
external force
𝑑𝐫
𝑑𝑡= 𝐟(𝐫), 𝐫= (𝑥, 𝑦, 𝑧), 𝐟= (𝑓𝑥, 𝑓𝑦, 𝑓𝑧)
(4.1)
This is really the simplest model of the motion. One can immediately see
that the crucial difference of Aristotle’s model based on the first-order vector
equation with the second-order system corresponding to Newton’s model is,
primarily, in the irreversible character of the motion.
Aristotle’s considerations were rooted in everyday experience: the
trolley should be towed to be in motion; if the muscle force stops acting, the
trolley comes to rest. In such a model the state of a system would be given by
positions alone - velocities could not be assigned freely. One might observe in
this connection that even the most fundamental models reflecting everyday
reality are not unique and often controversial.
The contrast of Aristotle’s model to that of Newton is readily seen when
one starts thinking about acceleration (as probably Einstein did). In Newton’s
model of single-particle dynamics, the general problem is to solve the
equation 𝐅= 𝑚𝐚, where
4.2
Newtonian Mechanics
177
𝐚≔𝑑2𝐫
𝑑𝑡2 , 𝐫≔(𝑥, 𝑦, 𝑧), 𝐫∈ℝ3
when the force 𝐅 is given.
One might, however, ask: was Aristotle always wrong? The trolley stops
due to friction: 𝐅𝑹= −𝛼𝐯. The Newton’s equations for this case may be
written in the form
𝑑𝐫
𝑑𝑡= 𝐯, 𝑚𝑑𝐯
𝑑𝑡= 𝐅−𝛼𝐯,
(4.2)
where 𝐅 is the towing force and 𝑚 is the mass of the trolley. When 𝐅 is nearly
constant, i.e. the towing force, as it is frequently the case, slowly varies with
time, the mechanical system is close to equilibrium, so the inertial term 𝑚
𝑑𝐯
𝑑𝑡
is small compared to other terms. In this case, we get the equilibrium
(stationary) solution 𝐯=
𝑑𝐫
𝑑𝑡=
𝐅
𝛼, which has Aristotle’s form. This solution
form is valid only when the motion is almost uniform, i.e., the acceleration is
negligeable, and the friction is sufficiently large, |
𝑑𝐯
𝑑𝑡| ≪|𝛼𝐯| . One can, in
principle, imagine the world constructed on Aristotle’s principles: the
Aristotle model would correspond to the Universe immersed in an infinite
fluid with a low Reynolds number.
It is curious that Aristotle’s model is in fact extensively used in
contemporary physics, engineering, and even in everyday life. An example is
Ohm’s law, 𝐣= 𝜎𝐄, 𝐯= 𝐄/𝑛𝑒𝜌, where 𝑒 is the electron charge, 𝐄 is the electric
field (acting force), 𝑛 is the charge density, and 𝐯 is the average velocity of the
charge carriers. Ohm’s law is a typical macroscopic stationary model, when
the driving force is compensated by resistance. Stationary models are typical
of classical physics: in fact, classical physics dealt only with slow and smooth
motions, e.g., planetary movement. Models describing rapid and irreversible
changes, resulting in multiple new states, appeared only in the 20th century.
Stationary and quasi-stationary models serve as a foundation of
thermodynamics (Chapter 7) whose principal notion - temperature - may be
correctly defined only for equilibrium. Another example of such models is
steady-state traffic flow simulation.
4.2
Newtonian Mechanics
Let us now get back to the Newton’s model. Although the stuff below looks
trivial, there are several reasons to discuss it once more in order to better
understand why, despite the fact that Newton’s law of motion may be
considered a complete statement of classical mechanics, one would desire
other mathematical formulations for this discipline. As is well known, the
general problem in Newtonian mechanics is to solve the system of equations
𝑚𝑎
𝑑2𝐫𝑎
𝑑𝑡2 = ∑𝐅𝑖(𝐫𝑎, 𝐫̇𝑎, 𝑡)
𝑖
,
178
Classical Deterministic Systems
where index 𝑎= 1, … , 𝑁 enumerates the particles and the sum goes over all
the forces acting on the a-th particle. Forces 𝐅𝑖 are regarded as given functions
of 𝐫, 𝐫̇, 𝑡 in every point 𝐫𝑎, 𝐫̇𝑎, 𝑡. A mathematical solution of this system of
second-order differential equations is a 3N vector-function 𝐫𝑎(𝑡), 𝑎= 1, … , 𝑁.
If one considers the simplest case of the motion of a point mass (material
particle) in 3D space, the position vector 𝐫 may be expressed as 𝐫= 𝑥𝐢+ 𝑦𝐣+
𝑧𝐤, where {𝑥, 𝑦, 𝑧} are projections that are varying when the particle moves,
three quantities {𝐢, 𝐣, 𝐤} (also denoted {𝐞1, 𝐞2, 𝐞3}) are unit vectors which are
assumed constant in the laboratory system.
Consider a particle in a potential force field, namely the one where force
𝐅 is the (negative) gradient of some potential energy function 𝑉:
𝑚𝐫̈ = −∇𝑉(𝐫)
(4.3)
where 𝑉: ℝ𝑛→ℝ is some function on ℝ𝑛. The total - kinetic plus potential -
energy is given by the expression 𝐸=
𝑚
2 |𝐫̇|2 + 𝑉(𝐫) and is easily verified to
be constant on any solution. To show it, one can just differentiate 𝐸(𝑡) and
use Newton’s equation:
𝑑𝐸
𝑑𝑡= 𝑚𝑑𝐫
𝑑𝑡
𝑑2𝐫
𝑑𝑡2 + 𝑑𝐫
𝑑𝑡∇𝑉(𝐫) = 0
(4.4)
In principle, dimensionality 𝑛, 𝐫= (𝑥1, … , 𝑥𝑛) may be arbitrary. For the
sake of simplicity and to make the treatment more intuitive, let us confine
ourselves to 𝑛= 3. Assume that the potential 𝑉(𝐫) is spherically symmetric,
i.e., that 𝑉 depends only on the distance from the origin - this is usually called
the central field case. We may put e.g., 𝑉(𝑟) = (1/2)𝑣(|𝑟|2), then the equation
of motion becomes
𝑚𝑥̈ 𝑖= −𝑣′(|𝑟|2)𝑥𝑖
(4.5)
The energy 𝐸 is still a conserved quantity. The term “conserved” here
means that the function 𝐸(𝑥𝑖, 𝑥̇ 𝑖, 𝑡), 𝑡∈ℝ remains constant on any solution of
the equation of motion.
4.3
Lagrangian Mechanics
Although Newton’s formulation of classical mechanics has been very
successful, it still has some drawbacks. First of all, it requires that all forces
acting on a mechanical system should be explicitly known. In practice,
however, forces are often given implicitly, for example, by constraining the
trajectory to belong to some manifold such as to lie on a surface. Newton’s law
is a vector equation, and it may be cumbersome to transform its components
into curvilinear coordinate systems. Furthermore, the analysis of symmetries
is not easy to perform as long as we stay within the framework of Newtonian
mechanics. Constraints are also difficult to take into account. The Newtonian
4.3
Lagrangian Mechanics
179
model is essentially a local one: it was designed to describe the motion in
terms of the local velocity change caused by the local force value. Thus, it is
difficult to gain an insight into the global features of motion, its mathematical
structure. Mathematically speaking, it is reflected in the fact that Newton’s
equation is a second-order one, so the global properties of the system cannot
be figured out as easily as for the first-order equations, for which an extensive
dynamical systems theory has been developed. Furthermore, Newton’s model
is difficult to apply to fields - it is basically suitable for classical systems which
consist of particles. Because of this, Newtonian mechanics cannot serve as a
starting point for quantization of distributed systems. Thus, Newton’s
equations, despite their ubiquity, are seldom used in modern condensed
matter physics. And in general, the quantization of a classical system can
hardly be based on Newton’s formulation of mechanics.
In contrast to Newtonian mechanics, the Lagrangian mathematical model
provides a global and, in principle, coordinate-free formulation of classical
(i.e., trajectory-based) motion. Let us first give some elementary
considerations. Imagine a system of many particles having coordinates 𝑞𝑖(𝑡)
and velocities 𝑞̇𝑖(𝑡) (here, for simplicity, we make no distinction between co-
and contravariant coordinates), although the usual summation convention
omitting the sum symbol will be used. The quantities 𝑞𝑖(𝑡), 𝑞̇𝑖(𝑡) labeling the
particles’ positions and velocities are not necessarily Cartesian (rectangular)
coordinates. For 𝑁 particles, 𝑖 runs from 1 to 3𝑁, so the motion of these
particles in ℝ3 may be represented by a trajectory 𝛾: [𝑡1, 𝑡2] in ℝ3𝑁 usually
called the configuration space. This trajectory starts from the point in the
configuration space corresponding to the initial time 𝑡1 and terminates at the
point corresponding to the final time 𝑡2 . The main idea of Lagrangian
mechanics is to characterize the actual motion of a mechanical system by a
single trajectory, against all others joining the initial and final points in the
configuration space, which extremizes some functional 𝑆 called action. The
latter is usually written as
𝑆= ∫𝐿(𝑞, 𝑞̇, 𝑡)𝑑𝑡
𝑡2
𝑡1
(4.6)
We already know that the function 𝐿 is called the Lagrangian.
Mathematically speaking, a Lagrangian is defined as a smooth function
𝐿: ℝ× 𝑇𝑀→ℝ on a manifold 𝑀. Nowadays people say that 𝐿 is a function on
the tangent bundle 𝑇𝑀 of 𝑀. The action 𝑆= 𝑆𝐿 associated with 𝐿 is defined
for any smooth curve 𝛾: [𝑡1, 𝑡2] →𝑀. The set of all such smooth curves may be
thought of as an infinite-dimensional manifold, with the action to be a
function on it. Simply speaking, the classical action is the integral of the
Lagrangian along the trajectory. We allow the Lagrangian 𝐿 to depend on time
𝑡, i.e., 𝐿 is a function on 𝐿: ℝ× 𝑇𝑀→ℝ, although often 𝐿 is only required to
be defined on some open set in 𝑇𝑀. Also, for some models it might be
important to weaken the smoothness requirement for 𝛾 and 𝐿.
180
Classical Deterministic Systems
In other words, one is usually interested in determining the curves
𝛾: [𝑡1, 𝑡2] →𝑀 with given endpoint conditions, for which the action functional
reaches a minimum among nearly lying curves. Such a set of curves are
typically called variations. Given a curve γ with fixed endpoints 𝑡1, 𝑡2 and
some 𝜀> 0, one can define a smooth variation of it as a smooth map
𝛾𝜀(𝑡, 𝜎) ≔𝛾𝜀[𝑡1, 𝑡2] × (−𝜀, 𝜀) →𝑀,
(4.7)
𝜎∈(−𝜀, 𝜀) with the properties 𝛾𝜀(𝑡, 𝜎) = 𝛾(𝑡) for all 𝑡∈[𝑡1, 𝑡2] and
𝛾𝜀(𝑡1, 𝜎) = 𝛾(𝑡1), 𝛾𝜀(𝑡2, 𝜎) = 𝛾(𝑡2) for all 𝜎∈(−𝜀, 𝜀) . In standard texts on
Lagrangian mechanics, this variation is typically denoted as 𝛿𝑞(𝑡).
Now, it is easy to produce once again the Euler-Lagrange equations, but
before that I would like to note that the Lagrangian 𝐿 depends on two
independent sets of 3𝑁 variables denoted 𝑞𝑖(𝑡) and 𝑞̇𝑖(𝑡), but the latter are
not necessarily the 𝑡-derivatives of the former. One should not be confused by
the fact that in these coordinates an arbitrary curve in the tangent bundle has
to be denoted as (𝑞𝑖(𝑡), 𝑞̇𝑖(𝑡)). Geometrically, the second set of variables
coincides with the time derivative of the first only when the curve in 𝑇𝑈 is the
so-called tangential lift of a curve in 𝑈 where 𝑈⊂𝑀 is e.g., an open set, on
which we may establish a coordinate chart 𝑥: 𝑈→ℝ𝑛 (here 𝑛 = 3𝑁).
From the physicist’s perspective, only the elementary methods of the
calculus of variations are needed to derive the Euler-Lagrange equations. The
latter are produced by the elementary integration by parts. Assume 𝑞𝑖(𝑡) to
be a path for which the action 𝑆 is stationary, i.e., under the path variation
𝑞𝑖(𝑡) →𝑞𝑖(𝑡) + 𝛿𝑞𝑖(𝑡) where 𝛿𝑞𝑖(𝑡) is an arbitrary smooth function of 𝑡∈
[𝑡1, 𝑡2] vanishing at the endpoints of the curve (for simplicity, we are using
here the customary physical notations), the variation 𝛿𝑆 of 𝑆 must be zero in
the first order in 𝛿𝑞𝑖(𝑡). This means
𝑆= ∫𝑑𝑡( 𝜕𝐿
𝜕𝑞𝑖
𝛿𝑞𝑖+ 𝜕𝐿
𝜕𝑞̇𝑖
𝛿𝑞̇𝑖)
𝑡2
𝑡1
= ∫𝑑𝑡( 𝜕𝐿
𝜕𝑞𝑖
−𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇𝑖
) 𝛿𝑞𝑖
𝑡2
𝑡1
+ [ 𝜕𝐿
𝜕𝑞̇𝑖
𝛿𝑞𝑖]
𝑡1
𝑡2
= 0
(4.8)
where integration by parts has been carried out. At the endpoints of the path
𝛿𝑞𝑖= 0, and we get the Euler-Lagrange equation in its elementary form for
the classical extremal trajectory
𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇𝑖
−𝜕𝐿
𝜕𝑞𝑖
= 0
(4.9)
It is in general the system of second-order ODEs and according to the
Cauchy-Peano theorem from the theory of differential equations, this system
should have a unique solution, provided the initial values of 𝑞𝑖, 𝑞̇𝑖 are given.
How can we establish connection with the Newtonian model? Let us take,
for simplicity, a single particle:
4.3
Lagrangian Mechanics
181
𝐿(𝑞𝑖, 𝑞̇𝑖, 𝑡) ≡𝐿(𝑥𝑖, 𝑥̇ 𝑖) = 𝑚
2 𝑥̇ 𝑖𝑥̇ 𝑖−𝑉(𝑥𝑖)
(4.10)
Then the Euler-Lagrange equation takes the form of Newton’s law
𝑑
𝑑𝑡(𝑚𝑥̇ 𝑖) = −𝜕𝑉
𝜕𝑥𝑖
(4.11)
This simple example in fact demonstrates the generality of the
Lagrangian approach. Taking other Lagrangians, one can produce a set of
mathematical case studies. For instance, one may consider a Riemannian
metric 𝐿= 𝑔𝑖𝑗(𝑥𝑘)𝑥̇ 𝑖𝑥̇ 𝑗 or 𝐿 may be a linear form on each tangent space, 𝐿=
𝑎𝑖(𝑥𝑘)𝑥̇ 𝑖. The system under consideration is nearly always specified in
physics by choosing an appropriate model Lagrangian. As we shall see in the
chapter devoted to quantum field theory, the same methods can be applied to
physical systems with an infinite number of degrees of freedom which are not
constituted of moving particles, such as fields, by simply stipulating a suitable
Lagrangian function.
We have mentioned that it is not so easy to take into account symmetries
in the context of the Newtonian mechanics. Let us try to consider symmetries
in the Lagrangian formalism (one can consult the book “Mechanics” by L. D.
Landau and E. M. Lifshitz for an elegant treatment of this subject, [23]).
Assume that the Lagrangian 𝐿 does not depend on a certain coordinate 𝑞𝑗
(such a coordinate is historically called cyclic). Naturally, 𝐿 may depend on 𝑞̇𝑗,
otherwise the variable 𝑞𝑗 is of no interest at all. Then one can immediately see
from the Euler-Lagrange equations that the generalized momentum 𝑝𝑗
conjugate to a cyclic coordinate,
𝜕𝐿
𝜕𝑞𝑗, is conserved:
𝑑𝑝𝑗
𝑑𝑡= 𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇𝑗
= 𝜕𝐿
𝜕𝑞𝑗
= 0
(4.12)
It is from here, by some simple generalization in the spirit of Lie group
theory, that one of the two famous Noether’s theorems can be obtained [14].
Assume that the Lagrangian has a symmetry that may be continuously
parameterized, i.e., the action 𝑆 is invariant under an infinitesimal symmetry
operation applied to the path 𝑞𝑗(𝑡), 𝑞𝑗(𝑡) →𝑞𝑗(𝑡) + 𝛿𝑞𝑗(𝑡). It is important
that the symmetry is continuous: in this case it is always possible to define an
infinitesimal symmetry operation (infinitesimal displacement leaving the
action invariant). Another way to put it is to notice that discrete groups do not
in general result in conservation laws. The reason to this fact is that discrete
symmetries cannot in general imply infinitesimal variations. Since the action
𝑆 is invariant under the displacement 𝛿𝑞𝑗(𝑡) , we have for an arbitrary
number of degrees of freedom
182
Classical Deterministic Systems
𝑆= ∫∑𝛿𝑞𝑗( 𝜕𝐿
𝜕𝑞𝑗
−𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇𝑗
)
𝑗
𝑡2
𝑡1
+ ∑[𝛿𝑞𝑗
𝜕𝐿
𝜕𝑞̇𝑗
]
𝑗
𝑡1
𝑡2
= 0
(4.13)
The first term here vanishes since all 𝑞𝑗(𝑡) are the solutions of the Euler-
Lagrange equations. Thus, we get
∑𝛿𝑞𝑗(𝑡1)𝑝𝑗(𝑡1)
𝑗
= ∑𝛿𝑞𝑗(𝑡2)𝑝𝑗(𝑡2)
𝑗
(4.14)
where 𝑡1 and 𝑡2 are arbitrary initial and final time-points on the trajectory
𝑞𝑗(𝑡), 𝑡∈[𝑡1, 𝑡2]. It is clear that 𝛿𝑞𝑗(𝑡1) and 𝛿𝑞𝑗(𝑡2) do not in general vanish.
Since 𝑡1 and 𝑡2 are arbitrary, the above equation may be interpreted as the
conservation of the quantity
𝛿𝑆= ∑𝛿𝑞𝑗(𝑡1)𝑝𝑗(𝑡1)
𝑗
,
(4.15)
i.e., independence of this quantity on 𝑡 (see the textbook “Mechanics” by L. D.
Landau and E. M. Lifshitz, [23], §43 “Action as a function of coordinates”, for
a discussion and examples). A connection of this invariant with the customary
conservation laws may be illustrated by a classical particle moving in a central
potential 𝑉(𝑟). The Lagrangian written in spherical coordinates (𝑟, 𝜃, 𝜑) is
𝐿= 𝑚
2 [𝑟̇2 + 𝑟2(𝜃̇ 2 + 𝜑̇ 2sin2𝜃)] −𝑉(𝑟)
(4.16)
Since 𝜑 is here cyclic, the invariant (𝜕𝐿𝜕𝜑̇
⁄
)𝛿𝜑= 𝑐𝑜𝑛𝑠𝑡 results in the
conservation law:
𝑚(𝑟sin𝜃)2𝜑̇ = 𝑐𝑜𝑛𝑠𝑡,
(4.17)
which is the conserved component of the angular momentum about 𝑧-axis.
Let me now, as usual, make a few comments on the Lagrangian
framework and its general validity in physics. The famous “Course of
Theoretical Physics” by L. D. Landau and E. M. Lifshitz is based on the
assumption that the whole physics may be derived from the least action
principle97. Action and action density have long been the well-established
concepts in classical mechanics and field theories. However, in fluid dynamics
the corresponding quantities have been introduced only comparatively
recently. One of the reasons for this delay is probably owing to the fact that
fluid dynamics had traditionally been developed in the Eulerian coordinates
whereas the action principle is more conveniently written and treated in the
Lagrangian frame.
97 An exception is “Fluid Mechanics” volume [85], see below.
4.4
Hamiltonian Mechanics
183
The Lagrangian framework has many other advantages and spreads well
beyond classical mechanics. The generalized coordinates 𝑞𝑖(𝑡) do not
necessarily describe the motion of a material point or a set of material points.
For instance, in scalar field theory (we shall study some models based on it
later) each 𝑞𝑖(𝑡) may correspond to the field value regarded as a function of
time at various field points. Certain versions of quantum mechanics (e.g., a
novel sum-over-histories formulations) also use the Lagrangian framework
and generalized coordinates. Moreover, such a description is not restricted to
nonrelativistic physics only. Thus, physical systems in quantum field theory
are customarily specified by their Lagrangians. The Lagrangian approach
provides a global description of the trajectory in a configuration space as well
as a fair understanding of the conservation laws. One trivial example of a
conservation law is obvious already from the form of the Euler-Lagrange
equations. If the function L does not depend on a particular variable 𝑞𝑖, then
the quantity
𝜕𝐿
𝜕𝑞̇𝑖≡𝑝𝑖 is a constant of the motion - a conserved quantity. In this
case, it is the conserved momentum conjugate to the cyclic coordinate 𝑞𝑖. In
general, there exists a conserved quantity corresponding to each generator of
the Lie algebra of a Lie group of continuous symmetries. Moreover, there
exists a close relationship between Lagrangian mechanics, which is typically
based on positive-definite symmetrical bilinear forms, and Riemannian
geometry. One can even say that Lagrangian mechanics is a subdomain of
Riemannian geometry.
4.4
Hamiltonian Mechanics
There are some practical inconveniences in Lagrangian mechanics. First of all,
the Euler-Lagrange equations, like those of Newton, are second-order ODEs.
This fact makes it more difficult to provide a simple geometric interpretation
for their solutions than for a system of the first-order differential equations,
which readily admit geometric interpretation as flows along vector fields.
Possibly the main motivation underlying the Hamiltonian formulation of
mechanics was the intention to interpret the time evolution of a physical
system as such flows. This nice interpretation is especially convenient when
studying the properties of dynamical systems (see below). Moreover, in the
Hamiltonian formulation there is practically no difference between
coordinates and momenta - they are just variables in a system of an even
number (2n) of equations, with the possibility to treat all of them equally,
make changes and transformations between them and construct some kind of
geometry in this 2n-dimensional phase space. Such a geometry is usually
formulated in terms of symplectic manifolds, with a group of
diffeomorphisms acting on them. However, before we proceed to using this
geometrical language, I shall make some general observations and try to
illustrate the main ideas on simple examples.
One more important motivation for some other description, different
from the Lagrangian formulation, consists in the desire to get explicit
expressions for the right-hand side of the dynamical system vector equation,
𝑑𝑥𝑑𝑡
⁄
= 𝐹(𝑥, 𝑡) where in the Lagrangian formulation variables are 𝑥=
184
Classical Deterministic Systems
(𝑞𝑖, 𝑞̇ 𝑖) , so the first half of equations in the dynamical system is simply
𝑑𝑞𝑖𝑑𝑡
⁄
= 𝑞̇ 𝑖, but there is no regular method to extract the right-hand side for
𝑑𝑞̇ 𝑖𝑑𝑡
⁄
from the Euler-Lagrange equations. This is related to the geometric
properties of the tangent bundle 𝑇𝑀, which is too simple to support
dynamics, but luckily not the only one. The tangent bundle 𝑇𝑀 is formed by
tangent spaces (fibers) attached to each point of the configuration manifold
𝑀, which contain the velocity vector fields of the same contravariant type as
vectors 𝑞𝑖∈𝑀. One may note in passing that the Hamiltonian equations are
covariant whereas the Lagrangian ones are contravariant. The tangent bundle
𝑇𝑀 is insufficient to form the phase space of a physical system, one needs also
a dual space (see below). We can interpret the Hamiltonian equations as a
dynamical system on a cotangent space to configuration manifold 𝑀.
There exist of course many important books on classical mechanics
where the geometry of phase space is thoroughly discussed. I have already
remarked that for me, personally, the best one is still the book by Arnold [14],
in spite of the fact that it has been written over thirty years ago. Yet, I have
heard several times from the students an astounding declaration that the
book by V. I. Arnold was highly unorthodox and was viewed a bit skeptically
by their professors. (A similar statement was related also to another book by
V. I. Arnold [15] which personally I found quite fresh when I first read it in the
beginning of the 1970s.)
Before we proceed, one may notice that the Hamiltonian systems are only
a very restricted class of dynamical systems, since they are specified by a
single scalar function - the Hamiltonian. The description of an autonomous
(time-independent) Hamiltonian system possessing N degrees of freedom,
i.e., evolving in a 2N-dimensional phase space98, is reduced, due to the energy
integral of motion, to (2N-1) variables - one can say that the Hamiltonian
dynamical system is restricted to (2N-1)-dimensional energy shell
(manifold).
Physicists traditionally prefer the coordinate-dependent notation,
whereas in contemporary mathematics the coordinate-free geometric
language is nowadays of fashion. For me, it was useful to compile sort of a
dictionary translating the notions of Hamiltonian (and, to some extent,
Lagrangian) mechanics into the language of modern geometry. In fact, this
language is not that modern - it has been used in mathematics for about a
century. As usual, I avoid hard technical definitions.
The particle coordinates 𝑥𝑖, 𝑖= 1, … , 𝑛 on the configuration space ℝ3𝑛, 𝑛
is the number of particles in the system, are usually replaced by “generalized”
coordinates 𝑞𝑖 which are understood as global variables, but more generally
they may be interpreted as local coordinates on a configuration space 𝑀,
which can be any manifold whose points coincide with all possible
configurations of the system. The variables 𝑞̇ 𝑖 represent all tangent vectors to
all possible curves (paths) lying in 𝑀. These variables, viewed as
98 Recall that variables in Hamiltonian systems appear as canonically conjugated pairs, which
results in an even dimension of the phase space. This is not true for the case of generic dynamical
systems.
4.4
Hamiltonian Mechanics
185
“generalized” velocities, are usually called by mathematicians “fiber
coordinates on the tangent bundle 𝑇(𝑀) ”, the fiber itself is sometimes
denoted as 𝑇𝑥(𝑀). Tangent bundles and fibers are often denoted with no
brackets, i.e., as 𝑇𝑀 and 𝑇𝑥𝑀, respectively (see more details in Chapter 3 on
mathematics).
The paradigmatic examples of a vector bundle are the tangent and
cotangent bundles 𝑇𝑀 and 𝑇∗𝑀 of a smooth manifold 𝑀. These are the
bundles whose fibers at 𝑥 in 𝑀 are the tangent space 𝑇𝑥𝑀 and cotangent
space 𝑇𝑥
∗𝑀 respectively. In this case the bundle charts and transition maps are
induced by the derivatives of the coordinate charts on 𝑀 (or their adjoints,
for the cotangent bundle).
The momenta 𝑝𝑖 in this language are called fiber coordinates for the
cotangent bundle 𝑇∗(𝑀), which is in fact the phase space. In Lagrangian
mechanics, the momenta are defined as 𝑝𝑖= 𝜕𝐿𝜕𝑞̇ 𝑖
⁄
which implies that 𝑝𝑖
and 𝑞𝑖 would be transformed with mutually inverse matrices under a
coordinate change, similarly to co- and contravariant vectors in linear
algebra. These inverse matrices correspond to dual bundles. The concept of
the cotangent bundle, or simpler the cotangent space, is constructed by using
the base notion of a manifold and its associated tangent space (see Chapter 3
on vector spaces and other geometrical definitions). Let us now recall some
basic facts.
A manifold 𝑀 may be regarded as a direct generalization of the concept
of a surface. To put it simply, a manifold can be treated as an n-dimensional
“curved” space where one desires to define vectors as in usual Euclidean
space. In principle, there are several ways to introduce vectors, one of them
being with the help of a tangent space. At each point 𝑥 of a manifold 𝑀 (we
assume it to be finite-dimensional, in the infinite-dimensional case one may
encounter some complications, see below) we can define the tangent space of
vectors, 𝑇𝑥(𝑀), for example, as the set of all vectors tangent at point 𝑥 to
smooth curves passing through 𝑥. Such a construction produces the space of
all vectors defined at 𝑥 with the same dimension 𝑛 as 𝑀, which is also a
manifold. By taking the union over all points 𝑥∈𝑀, we may define a tangent
bundle of 𝑀
𝑇(𝑀) = ⋃{𝑥} × 𝑇𝑥(𝑀)
𝑥∈𝑀
Later we shall often use these (and a little extended) geometric concepts;
now, before we proceed with simple analysis and examples more commonly
used in traditional physics textbooks, e.g., in [23], allow me to make a small
digression.
In general, given any 𝑛-dimensional vector space 𝑉, we might look at the
space of all real-valued linear maps on 𝑉. This space of linear maps forms itself
a vector space, which we can call, under certain conditions of orthogonality,
the dual space 𝑉∗. This may appear, at the first sight, an extremely abstract
space, but in fact it is not drastically different from the initial space 𝑉. If, for
example, the space 𝑉 is formed by the contravariant vectors like radius-vector
186
Classical Deterministic Systems
or velocity, then 𝑉∗ contains covariant vectors such as gradient, wave vector
or quantum-mechanical momentum. One can, by the way, rather abstractly
define a covariant vector as a linear functional or, rather, a polylinear operator
mapping contravariant vectors into scalars, yet I don’t think this level of
mathematical abstraction would bring new understanding.
One may ask: what was the main idea behind the introduction of tangent
and cotangent spaces? One idea probably was to bypass the traditional
coordinate representation of classical differential geometry. In fact, choosing
a basis is an arbitrary operation: once you choose a basis in a vector space, it
becomes isomorphic to ℝ𝑛. Then, unless you take some precautions, you risk
establishing an isomorphism between tangent and cotangent spaces, because
ℝ𝑛 is isomorphic to its dual. Moreover, you may be tempted to introduce a
usual inner (scalar) or canonical dot product via coordinate entries as in ℝ𝑛,
but this object does not exist in general in arbitrary vector spaces.
When we discussed differential manifolds in Chapter 3, we observed that
for them a similar problem exists: although one may introduce local
coordinates, there is no regular way to select them. On a sphere, for example,
which is one of the simplest realizations of a manifold, one cannot find a
natural place for the coordinate origin. Thus, intrinsic properties of manifolds
must be studied without any special choice of local coordinates. The latter
cannot be found by physics - they do not occur in nature and we introduce
them only for convenience. One of the reasons some people identify tangent
and cotangent (dual) spaces e.g., in the form of contravariant and covariant
objects may be that they implicitly assume the existence of local coordinates
and bases isomorphic to those in ℝ𝑛.
Let us repeat the main points once again. The concept of a cotangent
space is built upon the already introduced structures of a manifold and an
associated tangent space. In a manifold 𝑀, which actually is an n-dimensional
space, we would like to define “vectors” as we usually do in the ordinary
Euclidean space. There are different manners to do it, and one of the simplest
ways is to introduce the tangent space of vectors at a point 𝑥 in 𝑀, denoted
𝑇𝑥(𝑀), which is essentially the space of all vectors defined at 𝑥. By the way,
this construction is reasonable, because the tangent space is a manifold in
itself.
Now, the linear algebra - the most powerful discipline in handling linear
vector spaces - teaches us that for each vector space 𝑉 there is a naturally
associated dual space 𝑉∗. Thus, we may assume that in this special case there
exists a dual space to the tangent vector space 𝑇𝑥(𝑀), which is what we may
call the cotangent space 𝑇𝑥
∗(𝑀). Dual vectors in 𝑇𝑥
∗(𝑀) are often referred to as
one-forms. We may note that, even though 𝑇𝑥(𝑀) and 𝑇𝑥
∗(𝑀) may have the
same size and more or less the same structure, there is in general no unique
map between these two spaces. In other words, given an arbitrary vector 𝑥,
there is no natural way to associate it with a unique one-form 𝜔1. One can try
to put these vectors in mutual correspondence using components in some
fixed basis (see Chapter 3), but, as I have several times mentioned, the
components of a vector are not fundamental quantities: they change under
coordinate transformations. Nevertheless, we shall see later that it is possible
4.4
Hamiltonian Mechanics
187
to choose some particular structure helping to identify vectors and one-
forms, and this convenient additional structure is referred to as metric.
Given the local cotangent space 𝑇𝑥
∗(𝑀), one can also define the cotangent
bundle in the same way as with the tangent bundle:
𝑇∗(𝑀) = ⋃{𝑥} × 𝑇𝑥
∗(𝑀)
𝑥∈𝑀
.
One may observe (and the reciprocal lattice in solid state physics is a good
physical illustration to this observation) that if one has an n-dimensional
vector space, one can build up a space of all linear maps on it (e.g., comprised
of nontrivial linear combinations with real coefficients). This space of linear
maps is also a linear space, which can be defined through basis vectors 𝐞𝑗
∗,
each of the latter being a linear combination of the full set of basis vectors 𝐞𝑖
for the vector space 𝑉. If we require, as when we considered above the
transformations between affine coordinates, that 𝐞𝑖
∗𝐞𝑗= 𝛿𝑖𝑗, then we get a
basis for a dual space formed by linear maps in the initial vector space 𝑉. Any
linear map in 𝑉∈ℝ𝑛 may be written as a linear combination of the basis
vectors 𝐞𝑗
∗ with real coefficients (see Chapter 3). In other words, given a basis
𝐞𝑖 of a vector space 𝑉, we may define the dual basis of 𝑉∗ using the rule 𝐞𝑖
∗𝐞𝑗=
𝛿𝑖𝑗, i.e., any basis in 𝑉 may produce a dual basis in 𝑉∗. The Kronecker symbol
(as well as delta-function in the case of functional spaces) ensures a one-to-
one correspondence between the two sets of basis vectors, but does not
guarantee orthonormality, if one deals with vector spaces and not necessarily
with inner product spaces (see about the hierarchy of spaces in Chapter 3).
A brief note about infinite-dimensional spaces: this case may be quite
nontrivial, since vectors or functions dual to those forming a basis in 𝑉 do not
in general provide a basis in 𝑉∗. In particular, the usual consequence of
isomorphism between vector spaces 𝑉 and 𝑉∗ and the Euclidean space ℝ𝑛,
dim 𝑉= dim 𝑉∗ becomes meaningless in the infinite-dimensional case.
Moreover, if you try to find the double dual, i.e., dual of dual vector space 𝑉∗∗,
it will not necessarily be isomorphic to the starting vector space 𝑉. See e.g.,
http://planetmath.org/encyclopedia/DualSpace.html.
After all these digressions let us get back to Hamiltonian mechanics. We
know that it is essentially based on the notion of a Hamiltonian function or
Hamiltonian. One usually defines the Hamiltonian as a function 𝐻(𝑝𝑖, 𝑞𝑖, 𝑡) of
6N+1 variables 𝑝𝑖, 𝑞𝑖, 𝑖, 𝑗= 1, … ,3𝑁 plus parameter 𝑡 which is interpreted as
time. The role of time is not always simple, and we shall specially discuss it
several times in various contexts (see e.g., Chapter 4). The Hamiltonian is
related to the Lagrangian as
𝐻(𝑝𝑖, 𝑞𝑖, 𝑡) = 𝑝𝑖𝑞𝑗𝛿𝑗
𝑖−𝐿(𝑞𝑗, 𝑞̇ 𝑗, 𝑡).
(4.18)
In curved spaces one should in general write, instead of 𝛿𝑗
𝑖, some non-
diagonal tensor 𝑔𝑗
𝑖, e.g., the metric tensor. One usually assumes that the
188
Classical Deterministic Systems
defining expressions 𝑝𝑖= 𝜕𝐿(𝑞𝑖, 𝑞̇ 𝑖, 𝑡) 𝜕𝑞̇ 𝑖
⁄
are invertible and can be solved as
equations with respect to 𝑞̇ 𝑖 giving the generalized velocities in terms of 𝑝𝑖, 𝑞𝑖.
A sufficient condition for such invertibility is the nonsingularity of the Hessian
matrix
𝐷𝑖𝑗=
𝜕2𝐿
𝜕𝑞̇ 𝑖𝜕𝑞̇ 𝑗= 𝜕𝑝𝑖
𝜕𝑞̇ 𝑗, det 𝐷𝑖𝑗≠0
This condition is almost always fulfilled in simple mechanical problems
but is not necessarily valid in more complicated physical problems. This fact
may be a source of difficulties in quantum field theory (QFT), and we shall
discuss this issue later (Chapter 6).
The procedure of inverting the momentum equations, 𝑝𝑖= 𝜕𝐿𝜕𝑞̇ 𝑖
⁄
, and
arriving at the Hamiltonian from the Lagrangian is usually called in mechanics
the Legendre transformation, although it is only a particular case of the
Legendre transformation in general. Now we may notice that the Hamiltonian
𝐻 is a function of its variables on the cotangent bundle 𝑇∗𝑀 of a 𝑑= 3𝑁-
dimensional manifold with the local coordinates 𝑞𝑖, 𝑖= 1, … ,3𝑁, whereas the
Lagrangian of the system is a function on the tangent bundle 𝑇𝑀 of 𝑀.
Considering, for simplicity the case when the Hamiltonian does not explicitly
depend on time (this case corresponds to a closed system, which is not driven
by any external agent), we have
𝑑𝐻= 𝑝𝑖𝑑𝑞̇ 𝑖+ 𝑞̇ 𝑖𝑑𝑝𝑖−𝜕𝐿
𝜕𝑞𝑖𝑑𝑞𝑖−𝜕𝐿
𝜕𝑞̇ 𝑖𝑑𝑞̇ 𝑖= 𝑞̇ 𝑖𝑑𝑝𝑖−𝜕𝐿
𝜕𝑞𝑖𝑑𝑞𝑖,
(4.19)
which means that the differential 𝑑𝐻 is conveniently expressed only through
the differentials of its own (cotangent bundle) variables with
𝜕𝐻
𝜕𝑝𝑖
= 𝑞̇ 𝑖,
𝜕𝐻
𝜕𝑞𝑖= −𝜕𝐿
𝜕𝑞𝑖
(4.20)
We know that trajectories of mechanical particles are described by the
curves 𝑞𝑖(𝑡) on 𝑀, with the dynamics being specified by the Lagrangian
𝐿(𝑞̇ 𝑖(𝑡), 𝑞𝑖(𝑡), 𝑡) . These trajectories obey the Euler-Lagrange equations
resulting from extremizing the classical action defined as an integral of the
Lagrangian along the trajectory (see above)
𝜕𝐿
𝜕𝑞𝑖= 𝑑
𝑑𝑡
𝜕𝐿
𝜕𝑞̇ 𝑖
𝑖
= 𝑝̇𝑖
Thus, using these simple transformations, we arrive at an alternative
description of the extremal path - this time in terms of other equations of
motion:
4.4
Hamiltonian Mechanics
189
𝑞̇ 𝑖= 𝜕𝐻
𝜕𝑝𝑖
,
𝑝̇𝑖= −𝜕𝐻
𝜕𝑞𝑖.
(4.21)
This system of the first-order ordinary differential equations (ODEs) is
called the Hamiltonian’s system of equations. We may write the least action
principle in the Hamiltonian formulation as
𝛿𝑆[𝑝𝑖, 𝑞𝑖] = 𝛿∫𝑑𝑡(𝑝𝑖𝑞̇ 𝑖−𝐻(𝑝𝑖, 𝑞𝑖))
𝑡2
𝑡1
= 0,
(4.22)
which gives the above written Hamiltonian equations of motion. Denoting
𝜂≔𝑝𝑖𝑑𝑞𝑖 (often called the Poincaré form) so that the first term in the action
may be written as an integral over path 𝛾
∫𝑝𝑖𝑞̇ 𝑖𝑑𝑡
𝑡2
𝑡1
= ∫𝜂
𝛾
One can associate with this integral a so-called symplectic form, 𝜔=
𝑑𝑝𝑖∧𝑑𝑞𝑖 (see e.g., [14], Ch.8). It is sometimes said that the action written in
the Hamiltonian form characterizes the symplectic structure, with the
cotangent bundle 𝑇∗𝑀 on which the Hamiltonian 𝐻 is defined being the
symplectic manifold associated with the considered mechanical system, i.e.,
the particle motion on manifold 𝑀. Since this geometrical language seems to
be universally accepted nowadays, I shall use it systematically even in simple
examples in order to get accustomed to it. Imagine, for instance, a particle
moving over the real line ℝ so that 𝜂= 𝑝𝑑𝑞 99. The associated symplectic
manifold 𝑇∗ℝ has local coordinates (𝑝, 𝑞). Then the corresponding symplectic
form is simply 𝜔= 𝑑𝑝∧𝑑𝑞. Let us now try to generalize this construction a
little bit. The local coordinates (𝑝, 𝑞) may be denoted, for instance, as (𝑧1, 𝑧2),
and if a symplectic manifold has an arbitrary finite dimensionality, then local
coordinates can be written as 𝑧≔(𝑧1, … , 𝑧𝑛) so that the Poincaré form is 𝜂=
𝐵𝛼(𝑧)𝑑𝑧𝛼 and the symplectic form may be written as
𝜔= 1
2 𝜔𝛼,𝛽(𝑧)𝑑𝑧𝛼∧𝑑𝑧𝛽
where 𝜔𝛼,𝛽(𝑧) ≔𝜕𝛼𝐵𝛽(𝑧) −𝜕𝛽𝐵𝛼(𝑧).
This last equation provides the coordinate expression of an exterior
derivative of the form 𝜔. One can define here the general Poisson brackets
(some authors prefer a singular - the Poisson bracket)
99 It is common now to omit the symbol d in differential forms, in particular in the form 𝑑𝜂;
frankly speaking, I do not like this “modern” style (although I use it), since I belong to the old
school in which the number of integrals was always equal to the number of differentials. Besides,
there is a lot of confusion about this modern notation.
190
Classical Deterministic Systems
{𝐹, 𝐺} = 𝜔𝑖,𝑗(𝑧) 𝜕𝐹
𝜕𝑧𝑖
𝜕𝐺
𝜕𝑧𝑗
where 𝜔𝑖,𝑗(𝑧) is the inverse matrix to 𝜔𝑖,𝑗(𝑧), 𝜔𝑖,𝑗(𝑧)𝜔𝑗,𝑘(𝑧) = 𝛿𝑘
𝑖; here to
simplify notation, I changed indices from Greek to Latin - I hope it will not
produce any confusion. One can encounter the Poisson brackets in various
contexts, and we shall discuss their properties ad hoc. Now I would like only
to emphasize their connection with quantum mechanics. Indeed, quantization
of classical mechanics on a manifold 𝑀 was originally performed as replacing
the Poisson bracket algebra by the commutator algebra, i.e., by the transition
from the classical phase space (cotangent bundle 𝑇∗𝑀) to the quantum
mechanical Hilbert space ℍ. In the conventional textbook example when the
manifold 𝑀 is simply ℝ3, the Hamiltonian coordinates 𝑞𝑘 and 𝑝𝑖 are treated
as operators 𝑞̂𝑘 and 𝑝̂𝑖 on ℍ= 𝐿2(ℝ3) , with [𝑝̂𝑖, 𝑞̂𝑘] = 𝑖ℏ𝛿𝑖
𝑘 whereas the
classical Poisson brackets are {𝑝𝑖, 𝑞𝑘} = 𝛿𝑖
𝑘. Operator 𝑞̂𝑘 is simply the
multiplication by 𝑞𝑘, whereas 𝑝𝑘 is a covector differential operation, 𝑝𝑘≔
−𝑖𝜕𝜕𝑞𝑘
⁄
. Then one needs to replace “classical observables” represented by
real functions 𝐹(𝑝, 𝑞) defined on 𝑇∗𝑀 100 by quantum self-adjoint operators 𝐹̂
on ℍ. Then, by definition, the expectation value 𝐹̅ of a “quantum observable”
𝐹̂ at some state Ψ ∈ℍ is the scalar product 𝐹̅ = (Ψ, 𝐹̂Ψ) (we assume the unit
norm here). See Chapter 6 for details of quantization, in particular canonical
quantization. Now we may just state that orthodox quantum mechanics based
on the Schrödinger equation finds its direct analog in the Hamiltonian picture
of classical mechanics. In Chapter 6 we shall see that the Schrödinger equation
itself is the quantum analog of the Hamiltonian equations. Thus, one may
boldly say that Hamiltonian systems play a central part in mathematics and
physics. They have brought forth symplectic geometry methods, canonical
quantization and the concept of integrable systems. They also allowed us to
better understand time evolution and conserved quantities as Noether
symmetries. We have also seen that if the Lagrangian of a physical system is
known, then one can obtain the Hamiltonian by the Legendre transformation
(see about the general Legendre transformation in Chapter 4). However, for
an arbitrary system of equations, it may be not an easy procedure.
From the viewpoint of a theory of differential equations, the Hamiltonian
form of a system of differential equations has a specific structure, namely the
symplectic structure.
To use all the advantages of the Hamiltonian structure it would be
desirable to know whether an arbitrary system of differential equations has a
Hamiltonian structure, and if it has to be capable of writing the system of
equations in the Hamiltonian form, i.e., finding such a structure explicitly.
Thus, when we use the Hamiltonian formulation in classical mechanics,
we have to study symplectic geometry in the phase space and,
correspondingly, the algebra of differential forms. When turning to quantum
100 We have already noticed that in classical mechanics functions are assumed to be smooth,
e.g., 𝐹∈𝐶∞(𝑇∗𝑀).
4.4
Hamiltonian Mechanics
191
mechanics, the classical Hamiltonian formulation naturally mutates into the
Hilbert space paradigm using the language of operators and the techniques of
boundary value problems for the Schrödinger equation. In distinction with
the Hamiltonian formulation, the Lagrangian mechanics requires, as we have
seen, studying variational calculus, and when we pass from the Lagrangian
description to quantum theory, we have to learn the language of path
integrals (developed by P. A. M. Dirac and R. Feynman).
Let us now try to illustrate how Hamiltonian mechanics works on several
simple examples. We have already briefly described the most trivial example
possible - that of a particle moving along a line. If we try to formulate this one-
dimensional problem in the jargon of symplectic geometry, we must first
write the necessary forms. The corresponding symplectic manifold (the
cotangent bundle 𝑇∗ℝ) has local real coordinates (𝑧1, 𝑧2) = (𝑞, 𝑝), with the
following forms:
Poincaré: 𝜂= 𝑝𝑑𝑞 symplectic: 𝜔= 𝑑𝜂= 𝑑𝑝∧𝑑𝑞 exterior derivatives:
𝜔1,2 = 𝜔2,1 = −1; 𝜔1,2 = 𝜔2,1 = 1 Poisson brackets: {𝐹, 𝐺} = 𝜔1,2 𝜕𝐹
𝜕𝑞
𝜕𝐺
𝜕𝑝+
𝜔2,1 𝜕𝐹
𝜕𝑝
𝜕𝐺
𝜕𝑞=
𝜕𝐹
𝜕𝑞
𝜕𝐺
𝜕𝑝−
𝜕𝐹
𝜕𝑝
𝜕𝐺
𝜕𝑞= 1; {𝑞, 𝑝} = 1. Here, as we have noted a couple of
times, 𝐹 and 𝐺 are typically considered smooth.
One may also notice that the operation of taking the Poisson bracket is
skew-symmetric and satisfies the Jacobi identity. This means that the Poisson
brackets are simultaneously Lie brackets (see Chapter 3) on 𝐶∞(𝑀). We can
write the Hamiltonian equations of motion for any “classical observable”
𝐹(𝑞(𝑡), 𝑝(𝑡), 𝑡), which is again considered smooth, 𝐹(𝑞, 𝑝) ∈𝐶∞(𝑇∗𝑀) (for
simplicity, I have omitted the explicit dependence on parameter 𝑡):
𝑑𝐹(𝑞(𝑡), 𝑝(𝑡))
𝑑𝑡
= {𝐹(𝑞(𝑡), 𝑝(𝑡)), 𝐻(𝑞(𝑡), 𝑝(𝑡))}
This equation is quite important, since it is equivalent to the Hamilton
equations and states that the evolution of an “observable” 𝐹 - i.e., the rate of
change of the observed value of the function 𝐹(𝑞, 𝑝) - is equal to the observed
value of the Poisson brackets {𝐹, 𝐻}.
For many applications of classical mechanics, it is sufficient to assume the
Hamiltonian function 𝐻: ℝ2 →ℝ to be a 𝐶2 function of 2n variables
𝑞𝑖, 𝑝𝑗, 𝑖, 𝑗= 1, … , 𝑛. In the 1D case (𝑛= 1) the integral 𝐻(𝑞(𝑡), 𝑝(𝑡)) = 𝑐𝑜𝑛𝑠𝑡
fully describes the trajectories in the phase space (phase plane). One can
easily illustrate the meaning of the “energy surface” 𝐻(𝑞𝑖, 𝑝𝑗) = 𝐸 on this
simple example. Since we have excluded an explicit dependence on time (such
a case is usually called autonomous), we obviously have a time-translation
invariance. This property can be easily verified by replacing 𝑡 with 𝜏= 𝑡−𝑡0
where 𝑡0 = 𝑐𝑜𝑛𝑠𝑡 is a translation along the time axis. Consider, e.g., the
Hamiltonian equations or, in general, a vector equation of the form 𝑑𝑧𝑑𝑡
⁄
=
𝐹(𝑧) where, as before, 𝑧= (𝑧1, … , 𝑧𝑛)𝑇 are local coordinates. The motion
equations written in this form are usually called a dynamical system (see
below). Consider the initial value problem (IVP) for the above vector
192
Classical Deterministic Systems
equation, 𝑑𝑧𝑑𝑡
⁄
= 𝐹(𝑧), 𝑧(0) = 𝑧0, and assume that it has the solution 𝑧=
𝜑(𝑡). Then making a substitution 𝜏= 𝑡−𝑡0 we see that the IVP 𝑑𝑧𝑑𝑡
⁄
=
𝐹(𝑧), 𝑧(0) = 𝑧0 has the solution 𝜓(𝑡) = 𝜑(𝑡−𝑡0). This second solution is
obtained by merely translating the first one along the time axis. It is important
to understand that these two solutions to our dynamical (IVP) problem, 𝜑(𝑡)
and 𝜓(𝑡), are different, but belong to the same energy. In our 1D case, these
solutions lie on the same phase curve (orbit).
This situation is commonly illustrated by the oscillator equation, 𝑧̈ +
𝜔0
2 = 0, which has solutions 𝑧1 = sin 𝜔0𝑡 and 𝑧2 = cos 𝜔0𝑡, the latter being
obtained from the first by the transformation 𝑡→𝑡−𝜋/2. The oscillator
equation can be cast in the form of the equivalent vector equation with
𝑥= 𝑥1, 𝑥̇ = 𝑥2.
So, in a fairly unorthodox manner, the notation has led us to a map
from functions into forms; 𝑑 is known as the exterior derivative. You
might ask, “What about our notion of 𝑑𝑥 as a small change in 𝑥?” Well,
we have to throw that picture out of the window, because the notation
“𝑑𝑥” does not actually mean a small change in 𝑥. “𝑑𝑥” isn’t even really a
number. It’s a linear map from vectors to numbers. It can act on small
vectors to produce small numbers, but it isn’t a small number in itself;
it’s not even an element of ℝ. It’s an element of 𝑇𝑝∗𝑀. So what does it
really mean when we see “𝑑𝑥” in an integral?
To finalize this section, we can emphasize the link between the general
theory of dynamical systems and classical mechanics describing the motion
of material bodies. Namely, one can show (see, e.g., [14]) that any dynamical
system 𝑑𝑥𝑗/𝑑𝑡= 𝑣𝑖(𝑥𝑗) admits, at least not in the vicinity of its critical
points, a symplectic structure (i.e., Hamiltonian vector field) 𝜔𝑖𝑗(𝑥), 𝑥=
(𝑥1, … , 𝑥𝑛) so that one can write the dynamical system as 𝑣𝑖𝜔𝑖𝑗(𝑥) = 𝜕𝑗𝐻(𝑥),
where 𝐻(𝑥) is the respective Hamiltonian function.
4.5 Oscillations
In many cases, the forces appearing in a mechanical system, when it is driven
from the equilibrium, strive to return the system to the equilibrium position,
when there are no forces producing motions in the system. Return to
equilibrium is the general tendency of almost all natural systems (see Chapter
7). For small deviations from equilibrium, the returning forces are
proportional to such deviations (Hooke’s law), which is simply a
manifestation of the fact that each smooth (or analytical) function is locally
linear. This linearity leads to the equation of small oscillations in a 1d
mechanical system, 𝑚𝑥̈ + 𝑘𝑥= 0, 𝑘> 0, which is the simplest possible model
of a finite analytical motion. This motion is obviously periodic.
One may notice that although the motion equations are purposed to
describe evolution, systems in purely periodic motion are difficult to call
evolving since they repeat the same states a countless number of times. In an
evolving system, each state is, in general, unlike any other. A little later we
4.5
Oscillations
193
shall specially discuss time evolution in classical mechanics in connection
with dynamical systems theory.
The case of small oscillations with a single degree of freedom is such a
common mathematical model that it is rather boring to discuss it once again.
onetheless, I shall do it in order to have a possibility of returning to this
ubiquitous model in future chapters and, perhaps, to find some not so
common features in it. One can of course find careful exposition of a 1d
oscillation theory in any textbook on mechanics so that I won’t care much
about details. Let 𝑥= 𝑥0 be a stable equilibrium i.e., 𝜕𝑈(𝑥0) 𝜕𝑥
⁄
=
0, 𝜕2𝑈(𝑥0) 𝜕𝑥2
⁄
> 0.101 Then the solution to the mechanical system with
kinetic energy 𝑇(𝑝, 𝑥) = 𝑇(𝑥̇, 𝑥) = (1/2)𝑥(𝑎)𝑎̇ 2, 𝑈= 𝑈(𝑥) is periodic for
(𝑝, 𝑥) near (𝑝= 0, 𝑥0). The first question that arises here is: what is the
period of the motion? The answer is: the period 𝑇0 near equilibrium
position with the decrease of the amplitude 𝑥(𝑡) tends to the limit 𝑇0 =
2𝜋/𝜔0 where
𝜔0
2 = 𝑏
𝑎,
𝑏= 1
2
𝜕2𝑈(𝑥0)
𝜕𝑥2
,
𝑎= 𝑎(𝑥0),
since for a linearized system 𝑈(𝑥) = (1/2)𝑏(𝑥−𝑥0)2 and the solution is
𝑥(𝑡, 𝜔0) = 𝐴cos𝜔0𝑡+ 𝐵sin𝜔0𝑡 . The qualitative picture of 1d small
oscillations (e.g., in the (𝑥̇, 𝑥)-space) can be plotted even without solving
differential equations of motion: the relationship for the “invariant
manifold” (see below the section on dynamical systems)
𝑚𝑥̇ 2
2
+ 𝑏𝑥2
2
= 𝑐𝑜𝑛𝑠𝑡
represents the ellipse.
The most common example of a periodic, i.e., oscillating, 1d mechanical
system is a pendulum. The simple mathematical pendulum consisting of a
point mass 𝑚 and of a massless thread (constraint) of length 𝑙 is described
by the Hamiltonian 𝐻= 𝑝𝜑
2 2𝑚𝑙2
⁄
−𝑚𝑔𝑙cos𝜑 where 𝑝𝜑= 𝑚𝑙2𝜑̇ is the
angular momentum and 𝜑 is the declination angle. In this section, we shall
mostly deal with small oscillations. For small oscillations 𝜑≫1 around the
equilibrium position 𝜑= 0 , the pendulum motion is described by the
linearized (harmonic) Hamiltonian
𝐻=
𝑝𝜑
2
2𝑚𝑙2 + 𝑚𝑔𝑙
2
𝜑2 = 𝑚𝑙2𝜑̇ 2
2𝑚
+ 𝑚𝑔𝑙
2
𝜑2 = 𝑚𝑥̇ 2
2
+ 𝑚𝜔2𝑥2
2
,
where 𝜔= (𝑔/𝑙)1/2.
One can of course describe oscillations in all possible versions of
mechanics: Newtonian, Lagrangian, Hamiltonian, with the help of
101 I write these expressions for brevity not in a fully correct form; one should write, e.g.,
𝜕𝑈(𝑥) 𝜕𝑥
⁄
|𝑥=𝑥0 and analogously for second derivatives.
194
Classical Deterministic Systems
Hamilton-Jacobi equations and all other possible methods. I prefer to
attach oscillations to the most intuitive version of mechanics, the
Newtonian one, because geometric language, more adequate in other
formulations than in Newtonian, may obscure the basic notions of the
oscillation theory. Oscillations are usually understood as finite motions
that occur in the vicinity of equilibrium points. Recall an equilibrium
point.102 If, for instance, point 𝑎 is a local minimum of the potential
𝑈(𝑥, 𝜆), where 𝜆 is some parameter, then 𝑥= 𝑎(𝜆) brings the Lyapunov
stability (see the section on dynamical systems), i.e., for initial conditions
{𝑝(0), 𝑥(0)} sufficiently close to {0, 𝑎} the whole phase trajectory
{𝑝(𝑡), 𝑥(𝑡)} is close to {0, 𝑎} . Mathematical models of almost all
multidimensional vibrating systems are often just generalizations of the 1d
case: 𝑥𝑛+1 = 𝑄(𝑥1, … , 𝑥𝑛), and if 𝑄 is positive definite, then any small motion
can be represented as a superposition of oscillations along the main axes.
4.6 Harmonic Oscillator
The model of harmonic oscillator is not only the favorite toy model for
physicists, both in classical and quantum domain, but it possesses certain
unique features that manifest themselves especially conspicuously in
quantum mechanics and quantum field theory (QFT). We shall see in Chapters
6 that the harmonic oscillator in quantum mechanics has very distinguished
spectral properties. The energy spectrum of a harmonic oscillator is
noteworthy due to at least three reasons. Firstly, its lowest energy is non-zero
which means that the quantum oscillator performs ground-state motions. The
average kinetic energy of the quantum oscillator in a ground state is positive.
This fact leads in quantum field theory to formally infinite vacuum energy. In
QFT, non-zero energy in the lowest state implies that the electromagnetic field
exists even when there are no photons. All these ramifications of harmonic
oscillator treatment irritated theoretical physicists to such an extent that
eventually a bunch of new theories emerged out of this seemingly primitive
model, and we shall later discuss some of them. Secondly, the energy levels of
the harmonic oscillator are quantized i.e., they may only take the discrete
values of elementary energy ℏ𝜔 times (2𝑛+ 1)/2, where 𝜔 is the classical
oscillation frequency, ℏ is the Planck constant, 𝑛= 0,1,2, …. Taking only
discrete values is a typical feature of many (but not all!) quantum systems
and, in general, of boundary value problems, but the energy levels of a
harmonic oscillator are equally spaced unlike for other quantized systems.
The equidistant character of the harmonic oscillator spectrum is the third and
maybe the most important feature. Besides, scrutinizing the solutions to the
quantum oscillator problem one can observe the wavelike tunneling effect
non-existent in classical mechanics.
Within the classical Newtonian framework, it will be for the beginning
sufficient to know that oscillations are just finite motions occurring in the
102 One more time trying to infuriate the mathematicians, I shall make in the present
context no distinction between equilibrium, fixed, stationary and critical points. In
general, however, these notions may be different.
4.6
Harmonic Oscillator
195
vicinity of equilibrium points. In principle, Newton’s equations may be highly
nonlinear which makes their solution a complicated problem, but harmonic
oscillations are the result of linearization of any smoothly changing force near
the state of equilibrium. This fact makes the model of t h e harmonic
oscillator very general. In the Newtonian version of classical mechanics,
one usually considers coordinates 𝑥𝑖, corresponding velocities 𝑣𝑖= 𝑑𝑥𝑖/𝑑𝑡≡
𝑥̇ 𝑖 and forces 𝐹𝑗(𝑥𝑖, 𝑣𝑖, 𝑡), not necessarily potential ones. The kinetic
energy is defined simply as a quadratic form 𝑇𝑘𝑖𝑛= ∑𝑚𝑖𝑣𝑖
2
𝑖
. We have
already seen in connection with a brief discussion of geometric properties
of classical mechanics that the kinetic energy is in general a bilinear form
defining a metric on the tangent space 𝑇𝑄 where 𝑄 is the configuration
manifold, 𝑇𝑘𝑖𝑛=
1
2 𝑔𝑖𝑘𝑞̇ 𝑖𝑞̇ 𝑘, but these geometric concepts are not
indispensable for discussing simple models of Newtonian mechanics. By the
way, since geometric transformations are usually of minor importance in the
Newtonian formulation of classical mechanics, we shall temporarily not
distinguish between contra- and covariant coordinates, writing vector
indices below. Let us start, for simplicity, from a one-dimensional motion of a
point mass: some practical experience shows that in Newtonian mechanics
mathematical models for a great many systems are just multidimensional
generalizations of 1d situations. In a single dimension we have the second-
order motion equation 𝑚𝑥̈ = 𝐹(𝑥, 𝑥̇, 𝑡) or, writing it in the dynamical system
form,
𝑥̇ = 1
𝑚𝑝,
𝑝̇ = 𝐹(𝑥, 𝑝, 𝑡),
we may define vectors 𝑥≔{𝑥1 = 𝑥, 𝑥2 = 𝑝} and 𝑓≔{𝑓1 = 𝑝𝑚
⁄
, 𝑓2 = 𝐹}
and then write Newton’s equation in the vector form 𝑥̇ = 𝑓(𝑥, 𝑡). Here 𝑥
represents points in the phase space (phase manifold) 𝑀 of dimension two
( dim𝑀= 2 ), with both variables 𝑥, 𝑝 being regarded as independent
coordinates. Solutions to the motion equation or, equivalently, to the
corresponding dynamical system define a family of phase curves 𝛾(𝑡) =
(𝛾1, 𝛾2) or, in “physical” variables, 𝑥1(𝑡) = 𝛾(𝑡), 𝑥2(𝑡) = 𝑚𝛾̇(𝑡) . If the
function 𝐹 does not depend on time 𝑡 (one usually assumes this function to
be also independent of 𝑝 and continuous in 𝑥), then one may introduce the
potential 103 𝑈(𝑥) = −∫𝐹(𝑥′)𝑑𝑥′
𝑥
𝑥0
so that 𝐹(𝑥) = −
𝑑
𝑑𝑥𝑈(𝑥). Then the
total energy 𝐸= 𝑇+ 𝑈 is conserved i.e.,
𝑑𝐸
𝑑𝑡=
𝑑
𝑑𝑡(𝑇+ 𝑈) = 0 in the
process of motion, in other words, energy 𝐸(𝑝, 𝑥) = 𝐸(𝛾̇(𝑡), 𝛾(𝑡)) = 𝑐𝑜𝑛𝑠𝑡
along the phase curves 𝛾. Note that along such solution curves 𝑝 is a
function of 𝑥 (and vice versa). The solution “flows” through the phase
space, being restricted to constant energy curves (or surfaces if dim𝑀>
2) if energy is conserved.
103 The independence of 𝐹 on time and velocity is a condition at any rate sufficient,
but in certain cases may be too restrictive. One can sometimes introduce a potential for
time and velocity dependent forces.
196
Classical Deterministic Systems
All these trivial reminders serve as a preparatory material for the
models of concrete physical systems such as oscillator. We have seen that
the harmonic oscillator is characterized by the simplest possible force law,
𝐹(𝑥, 𝑝, 𝑡) ≔𝐹(𝑥) = −𝑘𝑥, 𝑘> 0, which means that the force pulls the point
mass back to the equilibrium position (origin of coordinates). Here the
quantity 𝑘 is the model parameter which physically describes the
oscillator rigidity: the more the numerical value of 𝑘, the more is the force
driving the point mass to the origin of coordinates. One, however, typically
introduces another parameter: the frequency 𝜔= (𝑘/𝑚)1/2 and writes all
the formulas pertaining to the oscillator in terms of this parameter. For
example, using the definition of potential, we get 𝑈(𝑥) =
1
2 𝑚𝜔2(𝑥2 −𝑥0
2),
where 𝑥0 stems from the integration and we can put 𝑥0 = 0 without any
loss of generality104. Then using the above notations, we have 𝑥̇ = 𝑓(𝑥)
with 𝑥1 = 𝑥, 𝑥1 = 𝑝 and 𝑓1 = 𝑝𝑚
⁄
= 𝑥2 𝑚
⁄
, 𝑓2 = 𝐹(𝑥) = −𝑚𝜔2𝑥1 so
that the motion equations in the dynamical system form look as
𝑥̇1 = 1
𝑚𝑥2,
𝑥̇2 = −𝑚𝜔2𝑥1.
4.7
Symmetries and Conservation Laws
Symmetries play a fundamental role in understanding the physical process
considered. Usually, the importance of symmetries for physics is related to
invariance properties of differential equations modeling the physical
situation. Symmetries associated with conservation laws are specific cases of
such invariance properties leading to integral combinations for the respective
differential equations. One is tempted to think in this connection that each
symmetry of differential equations leads to a conservation law and,
conversely, each conservation law is the consequence of some symmetry.
Both statements are wrong: there exist symmetries of differential equations
of physics that do not provide conservation laws 105 as well as there are
conservation laws which are not related to any symmetries of such equations
(e.g., conservation of mass).
A natural question would be: how many integrals of motion are there in
a mechanical system? A classical system is described by 2n first-order
differential equations 106 whose solution is determined by 2n independent
constants. It may be natural to choose the initial coordinates and momenta of
the classical system, i.e., coordinates of the initial point of the system’s path in
its phase space, as such independent constants. One can also arrange these
initial coordinates and momenta in various combinations, not all of them
independent. Corresponding to the number of initial coordinates and
momenta as independently chosen constants, a classical system admits
104 One performs this transformation (𝑥−𝑥0 →𝑥) every time in physics often
forgetting that we are justified to do it owing to affine properties of our space.
105 Point or discrete symmetries.
106 This is the typical dynamical systems form, see the following section.
4.8
Relativistic Mechanics
197
exactly 2n independent integrals of motion. By the way, one often forgets that
initial coordinates and momenta are also integrals of motion.
Here, it would be probably worthwhile to make the following remark.
There are two interpretations of symmetry transformations: active and
passive. Under the active interpretation one understands an actual change of
a physical state such as rotation or reflection whereas the passive
interpretation consists in changing the viewpoint of an observer such as
transition to a different coordinate system. In other words, the passive
interpretation does not imply any real physical transformation, and maybe
because of that it is mostly favored by mathematicians. In contrast, physicists
generally prefer the active interpretation, and this diversity of preferences is
of course curious. As a standard example, one usually takes the rotation of a
vector: under the active interpretation vector is actually rotated and a new
vector is obtained, while under the passive interpretation vector remains
intact and only the basis vectors are rotated.
4.8
Relativistic Mechanics
In almost all the textbooks, it is customary to treat relativistic mechanics
together with classical field theory, probably because of the unifying concept
of Lorentz invariance. This is a matter of taste of course, but personally I think
that mechanics is just mechanics and can be treated as a quasi-isolated cluster
of models, without the concept of fields. Relativistic mechanics of a material
point, on a primitive level of a 3D geometry may be viewed as a classical
mechanics with a velocity-dependent mass (see [39], §9). As R. Feynman put
it in his “Feynman’s Lectures on Physics”, “The theory of relativity just
changes Newtons laws by introducing a correction factor to the mass.” [138],
p.15-1. Simply speaking, one can substitute 𝛾𝑚 instead of 𝑚, where 𝛾=
(1 −𝛽2)−1/2, 𝛽= 𝑣/𝑐 is the relativistic factor, 𝑣 is velocity with respect to a
laboratory frame (an observer being at rest). For many practical calculations
this is sufficient. So, one sees that the “relativistic mass” increases with
velocity. In popular science interpretations of special relativity, the statement
that the mass of a body rises with its velocity is ubiquitous. This statement is
supported, although more accurately, in many textbooks on relativity (see
[158]). On a more profound, geometric level the simple heuristic concept of
“relativistic mass” may lead to some difficulties. Let us take a closer look at
relativistic mechanics, using, in the spirit of connected models in physics, this
opportunity to discuss the properties of mass in general.
It is interesting that the concept of variable mass, e.g., depending on the
body velocity, emerged before the creation of special relativity, in particular,
in the paper by O. Heaviside [38]. Moreover, there were experiments set up
in the beginning of the 19th century which demonstrated the dependence of
the quantity that the experimenters identified with the mass of a moving body
on its velocity.
198
Classical Deterministic Systems
4.9
Dynamical Systems
Historically, the term “dynamical system” was applied to a mechanical system
with a finite number of degrees of freedom. The state of such a system was
usually characterized by its position, for example, by the location of a center
mass point or by the configuration of a number of points, whereas the rate of
change of this position (more generally, of the system’s state) was given by
some law of motion. That was all in the original meaning of the term. In
general, the state of a dynamical system may be characterized by some
quantities, not necessarily of a mechanical origin, which may assume
arbitrary real values, for instance in chemistry, biology or ecology.
Mathematically, of course, complex values are also admitted. If these
quantities are treated as coordinates 𝑥𝑖 of a point in an n-dimensional space,
𝑖= 1,2, … , 𝑛, then such a space is usually called the phase space of the
considered dynamical system and the point 𝑥𝑖 representing the state of a
system is usually called its “phase”. This terminology is probably due to J. W.
Gibbs who called the state of the system its phase. The phase space 𝑈⊂ℝ𝑛 is
usually considered an open domain; we shall limit ourselves to Euclidean
domains, although it is of course possible - and in many cases necessary - to
consider more general differential manifolds as the phase space, e.g., circle,
cylinder, or torus. We have already seen that the phase space of a particle is a
six-dimensional Euclidean space, the six components of the phase velocity
vector being the components of the ordinary velocity and of the force,
whereas the projection of the phase trajectory on the space 𝑇𝑝𝑋 (parallel to
the momentum space) is the trajectory of the particle in the ordinary sense of
the word. The evolution of the system with time is represented as a motion of
the phase point in the phase space over some curve - the phase trajectory. As
we have seen on the example of vector spaces, to each point 𝑥𝑖 a vector with
components 𝑥𝑖 may be associated.
The usual type of a dynamical system is given by a map 𝐹: 𝑈→𝑈. As
particular cases of maps 𝐹, we have homeomorphisms, which are continuous
maps admitting a continuous inverse, and diffeomorphisms, which are
continuously differentiable maps admitting a continuously differentiable
inverse. In other words, by a dynamical system one can mean a
diffeomorphism of a compact differentiable manifold without boundary or a
one parameter group such that 𝜑𝑡+𝑠= 𝜑𝑡∘𝜑𝑠. This is a one-parameter
transformation of a phase space - a phase flow, which may be both causal,
𝑑𝑥𝑖𝑑𝑡
⁄
= 𝑓𝑖(𝑥1, … , 𝑥𝑛) and non-causal, 𝑑𝑥𝑖𝑑𝑡
⁄
= 𝑓𝑖(𝑥1, … , 𝑥𝑘(𝑡+ 𝜏), … , 𝑥𝑛).
4.10 Dynamical Systems and the Cauchy
Problem
Let us consider the elementary dynamical systems theory in simple terms of
the theory of ordinary differential equations. One often writes the system of
differential equations corresponding to a dynamical system in the abridged
(vector) form, 𝑑𝑥𝑑𝑡
⁄
= 𝑓(𝑡, 𝑥), where 𝑓 is a vector function 𝑓: 𝑆→ℝ𝑛, with 𝑆
being an open subset of ℝ𝑛+1. The parameter 𝑡∈ℝ is as a rule identified with
4.10
Dynamical Systems and the Cauchy Problem
199
time, 𝑥∈ℝ𝑛. More specifically, the vector function 𝑥: 𝑇→ℝ𝑛 is a solution to
the equation 𝑑𝑥𝑑𝑡
⁄
= 𝑓(𝑡, 𝑥) on an interval 𝑇→ℝ𝑛, and we shall assume it
to be at least continuously differentiable at this interval. The vector function
𝑓(𝑡, 𝑥) then is supposed to be at least continuous in both 𝑡 and 𝑥. In
elementary dynamical systems theory, all the variables are considered real.
It is clear that the general scalar equation of the n-th order
𝑑𝑛𝑥
𝑑𝑡𝑛= 𝐹(𝑡, 𝑥, 𝑑𝑥
𝑑𝑡, … , 𝑑𝑛−1𝑥
𝑑𝑡𝑛−1),
𝐹: ℝ𝑛+1 →ℝ, can be represented in the vector form.
One usually requires the vector function 𝑓(𝑡, 𝑥) defined in 𝑥∈𝐷⊂
ℝ𝑛, 𝑡∈𝑇⊂ℝ, to satisfy the Lipschitz condition
‖𝑓(𝑡, 𝑥1) −𝑓(𝑡, 𝑥2)‖ ≤𝐿‖𝑥1 −𝑥2‖
with respect to 𝑥 for all (𝑡, 𝑥) ∈𝑇× 𝐷. Here ‖. ‖ denotes the norm, ‖𝑓‖ =
∑
|𝑓𝑖|
𝑛
𝑖=1
. One often calls such functions to be “Lipschitz-continuous in variable
𝑥”. It is easily seen that Lipschitz-continuity in 𝑥 leads to ordinary continuity
but the reverse is not true. However, continuous differentiability - absence of
breakpoints - is sufficient for Lipschitz-continuity.
The notion of Lipschitz-continuity is an important condition for
establishing uniqueness of the solution to the initial value problem (the
Cauchy problem or IVP):
𝑑𝑥
𝑑𝑡= 𝑓(𝑡, 𝑥), 𝑥∈𝐷⊂ℝ𝑛, 𝑡∈𝑇⊂ℝ,
(4.23)
𝑥(𝑡0) = 𝑥0.
(4.24)
One can see the proof and commentaries to this essential theorem in any
good textbook on ordinary differential equations (ODE), e.g. [15], [19]. It may
be important to note that in the context of this uniqueness theorem the
domain 𝑆= 𝑇× 𝐷 is usually specified as 𝑇≔|𝑡−𝑡0| ≤𝑎, 𝐷≔{𝑥: ‖𝑥−
𝑥0‖ ≤𝑑} , where 𝑎> 0 and 𝑑> 0 are constants. The statement of the
uniqueness theorem is typically formulated as follows: if 𝑓(𝑡, 𝑥) is continuous
in 𝑆= 𝑇× 𝐷 and is also Lipschitz-continuous in 𝑥, then the Cauchy problem
has one and only one solution in |𝑡−𝑡0| ≤𝑖𝑛𝑓(𝑎, 𝑑/𝐺) where 𝐺≔sup𝑆‖𝑓‖.
It is interesting that the existence and uniqueness of a solution 𝑥(𝑡) ≔
𝑥(𝑡, 𝑡0, 𝑥0) is ensured not very far from initial “time” 𝑡0, the respective domain
effectively decreasing with the supnorm 𝐺= sup(𝑡,𝑥)∈𝑆‖𝑓‖. However, the
time domain of existence and uniqueness is determined by a sufficient
condition that may become too restrictive if the actual solution can exist
beyond the guaranteed domain. Therefore, it may be more practical in a
specific case to analyze a concrete problem finding the time domain of
existence ad hoc from the available 𝑎, 𝑑, 𝐺, rather than to apply
straightforward the general theorem.
200
Classical Deterministic Systems
4.11 Autonomous Dynamical Systems
Let us now consider a particular but very important case when the variable 𝑡
is not explicitly present in the vector equation of a dynamical system. Such
systems are called autonomous107. We have already seen, in connection with
energy conservation issue, that autonomous systems are invariant with
respect to time translations, viz. if 𝑥(𝑡) is a solution in 𝐷⊂ℝ𝑛 then 𝑥(𝑡−𝑡0)
is also a solution, in general a different one (as, e.g., sin 𝑡 and cos 𝑡). Here 𝑡0 is
assumed to be a constant time shift.
In the theory of dynamical systems, the domain 𝐷⊂ℝ𝑛 is regarded as the
phase space for the vector equation 𝑥̇ = 𝑓(𝑥), 𝑥∈𝐷. This concept may be
considered as a certain generalization of the traditional physical terminology
where phase space is understood as a direct product of coordinate and
momentum spaces. In modern classical mechanics, phase space is typically
defined as a cotangent bundle 𝑇∗𝑀 where 𝑀 is a configuration manifold (see
section on Hamiltonian mechanics for some comments). However, when
dealing with dynamical systems there are some other features and accents
which are important as compared to the traditional exposition of classical
mechanics. For instance, in mechanics it is often convenient to consider the
phase space as a Euclidean space whereas in the theory of dynamical systems
the phase space is, in general, not a Euclidean space but a differential
manifold, on which a vector field corresponding to the vector equation 𝑥̇ =
𝑓(𝑥) is defined. This equation means that, in the process of evolution
described by the dynamical system, to each point 𝑥 a vector 𝑓(𝑥) is ascribed
determining the velocity of the phase point (the vector 𝑓(𝑥) belongs to the
tangent space of the manifold at point 𝑥). This is a kinematic, in fact a
geometric interpretation of the above vector equation.
Nevertheless, all this are just nuances: one usually knows exactly in what
space one finds oneself and operates. In all cases, phase space is a geometric
concept embracing the total number of all states of a dynamical system and
convenient to describe the evolution of state points 𝑥= (𝑥1, … , 𝑥𝑛)𝑇
parameterized by variable 𝑡 (usually time). The state points 𝑥(𝑡) =
(𝑥1(𝑡), … , 𝑥𝑛(𝑡))
𝑇 for a fixed 𝑡 are, as we know, called phase points108, and
they move through the phase space with changing 𝑡, each of them traversing
the phase manifold along its own phase trajectory. The term “phase” was
probably coined by J. W. Gibbs who referred to the state of a system as its
phase. In physical literature, one often talks about ensembles of dynamical
systems - the set of non-interacting systems of the same type differing from
one another only by their state at any given moment, that is by initial
conditions. There is an illustrative analogy that is customarily exploited in
physics namely the picture of a stationary flow of some fluid, in which every
fluid particle moves from point in phase space to another during time 𝑡−𝑡0
according to equation 𝑥̇ = 𝑓(𝑥), 𝑥(𝑡0) = 𝑥0 . Later we shall see that this
107 Scalar equations of the n-th order corresponding to the autonomous case take the form
𝑑𝑛𝑥
𝑑𝑑𝑛= 𝐹(𝑥,
𝑑𝑥
𝑑𝑡, … ,
𝑑𝑛−1𝑥
𝑑𝑡𝑛−1)
108 Rarely, representing points.
4.11
Autonomous Dynamical Systems
201
equation may be interpreted as a mapping of the phase space into itself so the
“flow” of the “phase fluid” implements a transformation (in fact a family of
transformations) of the phase manifold into itself, in the considered case of a
smooth vector field described by differential equations - a diffeomorphism
i.e., continuously differentiable invertible map. One may notice that the
analogy between the “phase fluid” and some physically observable
continuous media - liquid or gas - is not complete: there is no interaction
between particles of the phase fluid.
Excluding 𝑡 from parametric equations 𝑥𝑖(𝑡), 𝑖= 1, … , 𝑛 (if we can do it),
we get a projection onto phase space 𝐷. The domain 𝑆= 𝑇× 𝐷 is
conventionally called (mostly in old sources) an extended phase space.
Although it is not always easy or at all feasible to get rid of parameter 𝑡109, it
may be possible to obtain differential equations directly describing the
trajectories in the phase space. Indeed, from the equation 𝑥̇ 𝑖= 𝑓𝑖(𝑥), 𝑖=
1, … , 𝑛 we get, for example, the following system of 𝑛−1 equations:
𝑑𝑥2
𝑑𝑥1 = 𝑓2(𝑥)
𝑓1(𝑥) , … , 𝑑𝑥𝑛
𝑑𝑥1 = 𝑓𝑛(𝑥)
𝑓1(𝑥)
whose solution gives the trajectories parameterized by 𝑥1. Conversely, we
can turn any non-autonomous system into an autonomous one by introducing
a new variable 𝑥𝑛+1, thus producing a system of 𝑛+ 1 equations instead of 𝑛
which corresponds to increasing the dimensionality of the phase space. In this
sense, autonomous systems may be considered general enough to focus
mainly on their study.
Applying the uniqueness theorem to the above system of 𝑛−1
equations, we may conclude that the phase trajectories do not intersect. Of
course, here it is assumed that 𝑓1(𝑥) ≠0. If 𝑓1(𝑥𝑎) = 0 in some points 𝑥𝑎, 𝑎=
1,2 … , we can take any other 𝑓𝑗(𝑥), 𝑗= 1, … , 𝑛 instead of 𝑓1(𝑥) provided
𝑓𝑗(𝑥) ≠0, which means that we are taking 𝑥𝑗 as a parameter. There may be,
however, difficulties in the so-called critical points - zero points 𝑥̅ =
(𝑥̅1, … , 𝑥̅𝑛) where 𝑓(𝑥̅) = 0 i.e., 𝑓𝑖(𝑥̅) = 0 for all 𝑖= 1, … , 𝑛. We shall discuss
this problem as soon as we learn a little more about the dynamical systems in
general.
The above-mentioned uniqueness theorem 110 states that under rather
weak assumptions about the properties of vector field 𝑓(𝑥) (usually it is
assumed differentiable or just Lipschitz-continuous) there exists for each
point 𝑥∈𝐷 exactly one solution 𝑥(𝑡) of the law of motion 𝑥̇ = 𝑓(𝑥) with
initial value 𝑥(𝑡0) = 𝑥0. In other words, the evolution of a dynamical system -
its future states at 𝑡> 𝑡0 - is completely determined by its initial state.111 It is
109 One might recall in this connection that the parametric form is the most general one in
representing curves and surfaces.
110 This theorem is usually known as the Cauchy-Peano theorem, although probably it is more
correct to name it the Picard-Lindelöf theorem. The matter is that the first theorem states only
existence whereas the second one requires less and guarantees uniqueness.
111 It may also be the final state that can be taken to determine the backward evolution - a
retrodiction setting.
202
Classical Deterministic Systems
a fundamental model of determinism: what we can say about tomorrow
(more correct - about the properties the system in question will have
tomorrow) is uniquely determined by what we can say about the system
today. This is not true for quantum or statistical mechanics, although some
philosophers contend that quantum mechanics is a fully deterministic theory
(because of time evolution features). In my opinion, this is just a cunning word
usage typical of philosophers since it is hard to imagine a deterministic
scheme where observations affect the system.
When speaking about a dynamical system, one can totally forget about its
mechanical origin. It is completely irrelevant whether the vector equation for
a dynamical system describes a mechanical or any other evolution.
Mechanical systems are commonly attracted in textbooks as convenient
examples of dynamical systems. More important, however, is the fact that
some mechanical systems possess specific properties narrowing the entire
class of dynamical systems to a clearly defined distinguished subclass, e.g.,
that of Hamiltonian systems. Nevertheless, it would be a mistake to think that
only the Hamiltonian systems are considered in classical mechanics. For
instance, non-holonomic systems of mechanics are also dynamical systems of
course. The notion of a dynamical system is a generalization of classical
mechanics.
Thus, the term “dynamical system” can be applied to any vector field
described by a first-order vector differential equation of the form 𝑥̇ =
𝑓(𝑥), 𝑥(𝑡0) = 𝑥0 (or any equivalent form, see below), irrespective of its
natural or behavioral content. This abstraction serves as a background for
mathematical modeling based on dynamical systems.
Similarly, for all 𝑟> 1. See also Topological dynamical system; Bendixson
criterion (absence of closed trajectories); Poincaré-Bendixson theory that has
the unfortunate tendency to explode. A rough system is sometimes called a
structurally stable system or a robust system. A well-documented survey on
(mainly differentiable) dynamical systems is in [146]. Many recent
developments are discussed in the various volumes of [147].
In general, an attractor of a dynamical system is a non-empty subset of
the phase space such that all trajectories from a neighborhood of it tend to
this subset when time increases. An attractor is also called a domain of
attraction or basin of attraction. A repelling set, or repellor in a dynamical
system is a subset of the phase space of the system that is an attractor for the
reverse system. If an attractor, respectively repellor, consists of one point,
then one speaks of an attracting, respectively repelling, point. For details (e.g.,
on stability of attractors) see [146]. It should be noted that in other literature
the definition of an attractor is what is called a stable attractor in [146]. For
discussions on the “correct” definition of an attractor see [205], Sect. 5.4, and
[147].
4.12 Non-autonomous Systems
We have already partly discussed the non-autonomous case in the theory of
differential equations, now we shall mainly focus on some aspects of non-
autonomous dynamical systems i.e., with more focus on system evolution
4.12
Non-autonomous Systems
203
properties. One must confess that even today, in the beginning of the 21st
century, one does not have fully rigorous foundations of non-autonomous (as
well as relativistic) mechanics. Indeed, even such basic notions as force,
energy, power, frame of reference, etc. often need to be reassessed for non-
autonomous systems. We have probably felt from the brief overview of
differential equations that it seems to be appropriate to treat autonomous
and non-autonomous cases differently - so different they are despite the fact
that each non-autonomous dynamical system can be reduced to an
autonomous one merely by introducing a new variable thus increasing the
dimensionality of the vector field constituting the dynamic system. This
additional dimension of the phase space is the price paid for the transition
from a non-autonomous to an autonomous system.
The situation with non-autonomous systems is much more complicated
from both the physical and mathematical perspective as with autonomous
systems. Physically, a non-autonomous dynamical system corresponds to an
open system placed in a time-dependent external field. This fact is reflected
by the explicit dependence of vectors 𝑓𝑖, 𝑖= 1, … , 𝑛 in the corresponding
vector differential equation 𝑥̇ 𝑖= 𝑓𝑖 on the independent variable 𝑡 (usually
interpreted as time), 𝑥̇ = 𝑓(𝑡, 𝑥) which makes the solution time-
noninvariant in distinction to those for autonomous systems (see above). Of
course, in the non-autonomous case the energy integral in general does not
exist (see [23], §6) which makes the system of equations significantly more
difficult to integrate than in the autonomous case. In classical mechanics,
explicit dependence of the coefficients in motion equations on time are
usually interpreted as the presence of an external field ([23], §5). On an
intuitive physical level one can illustrate additional difficulties by allowing
the coefficients of a differential equation which could be explicitly solved to
depend on time. Even in the most primitive linear models, this dependence
will immediately produce new types of solutions (e.g., parametric
oscillations in elementary vibration theory, see [23], §27).
One can sometimes encounter the assertion that the equation 𝑥̇ =
𝑓(𝑡, 𝑥) can be “solved” implying that the solution for 𝑥(𝑡) can be
represented as an explicit expression consisting of, e.g., power functions,
exponentials, trigonometric functions, etc. This is not true: even in the
scalar case, 𝑛= 1 i.e., 𝑥∈ℝ and 𝑓∈ℝ, equations of the type 𝑥̇ = 𝑓(𝑡, 𝑥)
can be explicitly (analytically) integrated only in a relatively small
number of very exceptional cases. To illustrate this point for themselves,
just try to integrate a rather innocent-looking first-order equation 𝑥̇ =
exp 𝑡𝑥. One should not, however, think that substantial difficulties and -
quite often - a sheer impossibility to integrate non-autonomous equations in
terms of elementary or even special functions mean that such equations do
not have solutions.
Let us try to “feel the difference” between autonomous and non-
autonomous dynamical systems. One may notice that explicit dependence on
time in a dynamical system (non- autonomous case), although not essential
204
Classical Deterministic Systems
from purely theoretical viewpoint 112 , can lead to substantial technical
difficulties. Indeed, the usual concept of attractor (see below) may become
inadequate since the natural notion of time asymptotics is hard to apply: any
object in the phase space is “drifting” with time. To understand the situation
better let us recall some basic facts from the theory of ordinary differential
equations (ODE). The vector ODE 𝑥̇ = 𝑓(𝑡, 𝑥), 𝑥∈𝑉𝑛, 𝑉𝑛 is an n-
dimensional vector space (manifold), defines a smooth field of directions
in the domain 𝑇× 𝑉𝑛, 𝑇⊂ℝ of the extended phase space of arbitrary finite
dimension 1 + 𝑛.
Any equation of motion has this form, with 𝑓=
{𝑓1, … , 𝑓𝑛} being composed of the vector field components for a dynamical
system. If one deals with distributed systems described by partial differential
equations (PDE), as in the case of fluid dynamics or climate modeling, then
the vector space 𝑉 can be infinite-dimensional. Physically speaking, the
system becomes non-autonomous when a time-dependent driving force is
acting on it, or when time-dependent constraints are applied, or if the system
parameters are varying with time. Equilibrium (stationary) states of non-
autonomous systems are, in general, no longer fixed points in the phase space,
but rather extended orbits. One might recall from the courses of classical
mechanics that the concept of a phase portrait is not as helpful for non-
autonomous, e.g., driven, systems as for autonomous ones. One can also recall
that attractors play a vital role in assessing the long-term behavior of a
dynamical system. 113 In the non-autonomous case attractors are sets
extending in the phase space so that attractors become global objects which
are typically difficult to study with standard mathematical tools (e.g., by
classical analysis). Strictly speaking, one cannot associate a non-
autonomous ODE with a dynamical system interpreted as a vector field
acting on 𝑉𝑛. Nevertheless, if it is known that the initial value (Cauchy)
problem has a unique solution (see above), one can introduce, in a standard
fashion, a two-parameter family of evolution operators, 𝑇(𝑡, 𝑠), 𝑡≥𝑠,
acting on 𝑉𝑛 in such a way that 𝑇(𝑡, 𝑠)𝑥(𝑠) = 𝑥(𝑡) , where 𝑥(𝑡) is the
solution of the Cauchy problem with initial conditions 𝑥(𝑠). This family of
operators
satisfies
obvious
relationships,
𝑇(𝑠, 𝑠) = 𝐼(𝑉𝑛)
and
𝑇(𝑡, 𝜏)𝑇(𝜏, 𝑠) = 𝑇(𝑡, 𝑠) for all 𝜏∈[𝑠, 𝑡], where 𝐼(𝑉𝑛) is the unity operator
on the vector space 𝑉𝑛.
112 As I have mentioned above, one can easily make an autonomous system from
non-autonomous by increasing the dimensionality of the corresponding vector field
by introducing the variable 𝑥𝑛+1 = 𝑡 and adding one more equation, 𝑑𝑥𝑛+1 𝑑𝑡
⁄
= 1.
For example, in the case of the undriven 1𝑑 pendulum, the phase space has two
dimensions whereas in the case of a driven pendulum, 𝑥̈ + 𝜔2𝑥= 𝐹(𝑥, 𝑡) , one may
consider that it has three i.e., 𝑥̇ 1 = 𝑥2, 𝑥̇ 2 = −𝜔2𝑥1 −𝐹(𝑥1, 𝑥3), 𝑥̇ 3 = 1.
113 Everyday examples of attractors are the planetary climate or the national
character.
4.13
Dynamical Systems in Mathematical Modeling
205
4.13 Dynamical Systems in Mathematical
Modeling
Today, one is inclined to think that modeling can be only computer-based i.e.,
in the form of computer simulations. Unfortunately, computer simulations
almost never provide the modeler with an insight why the physical
mechanism or, say, a biological system operates in a particular way.
Computer mathematicians usually retort that the great Carl Friedrich Gauss
and other prominent mathematicians were busy calculating the motion of
celestial bodies in the 18th-19th century during many years whereas
nowadays it takes seconds. Indeed, mathematicians of that time solved
algebraic and differential equations, calculated integrals without reaching the
efficiency of modern computers. But computers have not created much really
new in physically motivated mathematics, rivaling e.g., calculus, complex
numbers and functions, Riemann geometry and Lie groups.
So, computer simulations although often accompanied by a great hype
and even serving as a base for political decisions (as, e.g., in climate modeling
or universal flight bans following volcano eruptions) only have a limited
validity. It would be interesting, firstly, to understand this validity and,
secondly, to gain an insight when without the computer one would definitely
fall short of modeling targets. One of the main questions in modeling an
evolving physical situation is: “What happens as time flows from now to
infinity?” And in reality, all situations are evolving, steady-state ones are rare
and very approximate.
In fact, under modern conditions war is the most economically inefficient,
politically destabilizing, and militarily counterproductive mechanism to gain
control over any kind of resources. As an example, one can mention the US
war in Iraq. If the objective was to gain control over oil resources, then it is at
least a poor calculation in military planning since at least USD 100
million/day was spent by the USA to wage this war. One can easily calculate
the amount of oil that could have been bought for this money. Given a total
unproductiveness of military solutions, one can assume that starting a war is
typically the outcome of a crisis in the country’s governing model.
The solutions in such models show the temporal evolution of the military
conflict and may, in principle, predict which party can be defeated. In the
simplest model of military planning, the Lancaster model, the state of a
dynamical system describing two interacting (fighting) armies is given by a
2D-point (𝑥, 𝑦) located in the upper right (positive) quadrant on the plane,
(𝑥> 0, 𝑦> 0). Models in economic and military planning are usually unified
by this common feature: an essentially non-negative character of variables,
which may be regarded as a supplementary constraint. Now imagine two
adversary armies, 𝑋 and 𝑌, counting respectively 𝑥 and 𝑦 soldiers. Soldiers
are by default professional killers, so we can assume in the simplest model
that each soldier of the 𝑋-army kills per unit time 𝑎 soldiers of the 𝑌-army
and, conversely, each soldier of the 𝑌-army destroys per unit time 𝑏 soldiers
of the 𝑋-army. Then we can obtain a linear system of equations (parameters
𝑎 and 𝑏 are assumed constant so far)
206
Classical Deterministic Systems
𝑑𝑥
𝑑𝑡= −𝑏𝑦
𝑑𝑦
𝑑𝑡= −𝑎𝑥
One can interpret parameters 𝑎> 0 and 𝑏> 0 as characterizing the
weapon power of opposing armies 𝑋 and 𝑌. We can of course explore this
linear system using standard linear algebra techniques (see Chapter 3 about
linear differential equations), but in our simple analysis it is convenient to
integrate this system directly. Dividing the second equation by the first, we
get
𝑑𝑦
𝑑𝑥= 𝑎𝑥
𝑏𝑦
or 𝑎𝑥2 −𝑏𝑦2 = 𝑐 where 𝑐= const . Thus, integration gives a family of
hyperbolas depending on the integration constant 𝑐 that should be
determined by the initial state of military confrontation. One can see that the
phase point describing the temporal evolution of the conflict is moving along
the hyperbolic phase trajectories separated by a straight line √𝑎𝑥= √𝑏𝑦
(one can use, e.g., Maple to draw a simple picture of phase trajectories). Phase
trajectories cannot cross this straight line (a symptom of a “hard” model)
distinctly dividing two regimes: army 𝑋 is defeated or army 𝑌 is defeated. If
the initial state lies above the neutral line separating two outcomes, army 𝑋
is doomed: quantity 𝑥 (the number of soldiers in 𝑋) is reduced to zero in a
finite time, whereas quantity 𝑦 remains positive - a complete victory of 𝑌. Of
course, the 𝑥 and 𝑦 curves have certain symmetry properties whose meaning
is that army 𝑋 can be interchanged with army 𝑌 with simultaneous
replacement 𝑎1/2 ↔𝑏1/2 . For 𝑎= 𝑏 hyperbolas lie symmetrically with
respect to the line 𝑦= 𝑥. The meaning of the straight line dividing the phase
plane into two distinct areas may be clarified in the following way: this is a
neutral line on which both armies will equally run out of manpower and
eventually be destroyed (asymptotically 𝑥, 𝑦→0 for 𝑡→∞). The military
conflict persists even when both armies are almost totally demolished. The
equation of the separating straight line √𝑎𝑥= √𝑏𝑦 implies that to
counterbalance the 𝑛-fold manpower advantage of one adversary the other
has to possess the 𝑛2 -fold more powerful weaponry. Thus, to resist the
Mongol invasion in the 13th century, the Mongols’ combined forces
presumably surpassing those of Kievan Rus at least three times, the Russians
should have been about an order of magnitude more efficient warriors.
However, the above mathematical model is too simplistic, and its
applicability is utterly questioned. One can treat it only as a toy model which,
nevertheless, may serve as a backbone for more realistic approaches. So, in
accordance with general methodological recommendations exposed in
Chapter 2, we can try to improve this model.
4.14
Nonlinear Science
207
For economic planning, analogous models allow one to divide the
produced output into consumed and accumulated parts one of the crucial
problems of economic growth. Similar models appear in economics and in
military planning, specifically in the case when non-regular armies (e.g.,
partisans) are participating in military activities.
4.14 Nonlinear Science
We have seen that the theory of dynamical systems is just another name for
nonlinear dynamics, at least these two fields are inseparable. Nonlinear
dynamics is, in fact, as old as classical mechanics. In general, the dynamics of
planetary motion turns out to be nonlinear, but, fortunately, two-body
systems are integrable and can be treated exactly. Three-body problems are
of course also nonlinear, but in general non-integrable and very complicated.
For example, the question originally posed in the XIX century: “is the Solar
System stable?” naturally leads to a nonlinear description of the paths of three
bodies in mutual gravitational attraction. It is this ancient problem which was
simple to formulate but extremely difficult to solve that resulted in many
modern ideas of nonlinear science, e.g., chaos. It is curious that with the
advent of quantum mechanics and displacement of accent on atomic and
nuclear physics nonlinear dynamics almost disappeared from physical books
up to the 1970s. There was very little or no discussion of nonlinear science in
the popular courses on methods of mathematical physics during the prewar
and postwar (WW-2) period, when quantum mechanics and linear
electrodynamics dominated science and engineering. It is also curious that in
the 1980s nonlinear science gained a compensatory extreme popularity, to
the extent that many people in the physics community began considering
linear models as too primitive and unrealistic.
The great difference that exists between linear and non-linear problems
is one of the most important and, perhaps, subtle features of mathematics.
One always tries to linearize whenever possible, because linear problems are
enormously easier to solve. Unfortunately, the world is not linear, at least to
a large extent, so we have to learn how to deal with non-linear problems.
Nonlinearity in general can be well understood as the opposite of
linearity - actually its negation. The main features of linearity are additivity
and homogeneity, which result in linear superpositions. In linear theories
such as quantum mechanics or classical (pre-laser) electrodynamics
superposition is the key feature: an infinity of solutions may be constructed
provided a finite set of solutions is known. It is true that pure linearity rarely
occurs in the mathematical description of real-life phenomena, including
physical systems. The example of quantum mechanics, which is an essential
linear theory, although extremely important, may be considered an exception
- yet to a certain extent, since the superposition principle for the wave
function implies the infinite dimensionality of the respective vector space
(see Chapter 6). We shall see that many finite-dimensional nonlinear systems
can be mapped, with the help of some transformations, to linear systems with
infinite dimensionality. In other words, nonlinear systems, i.e., those which
should be modeled by nonlinear differential (integro-differential) equations
208
Classical Deterministic Systems
or nonlinear discrete maps are ubiquitous, while linear ones really seem to be
exceptions or approximations.
One obvious and strong motivation to study nonlinear dynamical systems
is the rich variety of applications of such studies covering such apparently
different areas in mathematics, physics, chemistry, biology, medical sciences,
engineering,
economics,
political
and
military
planning,
financial
management, etc. Yet, the ubiquity of nonlinear systems is counterbalanced
by their complexity, so another motivation to study nonlinear dynamics is
their intricate behavior. Examples are bifurcations, solitons, strange
attractors, and fractal structures. There are no counterparts of these
manifestations in the linear world; one can say that the extreme complexity
of nonlinear structures marks an essential difference between linear and
nonlinear phenomena.
The main source of difficulties in the nonlinear world is the fact that it is
in general not feasible to describe a nonlinear system by dividing it into parts
which are treated independently or blockwise - a favorite trick in linear
system theory. As near as I know, no general techniques have been invented
so far to foresee even the qualitative properties of a nonlinear system. For
instance, it is rarely possible to predict a priori whether a dynamical system
would exhibit regular or chaotic behavior. We shall illustrate this difficulty
even on the simplest example of the logistic equation.
There exist, of course, a multitude of textbooks, journal articles and other
sources where nonlinear dynamics in general and chaotic behavior in
particular are beautifully described. We shall try to discuss only some cases
of nonlinear dynamics which I consider quite important. There are, of course,
other cases, e.g., of nonlinear PDEs (elliptic, parabolic and hyperbolic), which
are very important in modern mathematical physics and, besides, present an
intrinsic theoretical interest. However, the theory of these equations as well
as related functional spaces and operator theory for nonlinear analysis are
not included in this book. We shall discuss Euler and Navier-Stokes equations
for incompressible fluids, but this topic seems to be inexhaustible, so the
discussion may be considered superficial and much of it is relegated to
Chapter 7. I shall also consider in Chapter 9, specifically in association with
some unsolved problems, Einstein’s equations, some aspects of which tend
more to nonlinear science than to customary field theory.
It is widely believed that the 20th century was the century of physics and
the 21st is the century of biology. The latter deals mostly with nonlinear
phenomena, and the respective models should by necessity be nonlinear. On
a macroscopic scale, it has been illustrated above by Lotka-Volterra model of
the struggle for existence between two competing species. This dynamic
situation (a predator-prey model) is described by nonlinear differential
equations giving the time rate of evolution. This is a very simple model, of
course, as compared with the level of complexity typically encountered in
biology, but it provides a good foundation for more sophisticated
mathematical modeling in this field.
We are immersed in the natural world of nonlinear events. For instance,
our emotional reactions and our likes and dislikes are probably highly
4.15
The logistic model: the bugs are coming
209
nonlinear. Our behavioral reactions to heat and cold, to colors and sounds, to
local pressure and other stimuli are mostly subordinated to the Weber-
Fechner law: the magnitude 𝑅 of psychological response is proportional to
the logarithm of magnitude 𝐽 of physical stimulus, 𝑅= 𝐴ln(𝐽/𝐽0), 𝐽= 𝐽0, 𝑅=
0 for 𝐽< 𝐽0 (threshold).
At the end of this section, I would like to give a few references to the books
I found interesting and useful in the field of nonlinear science. Some of these
sources deal predominantly with mathematical concepts of nonlinear
dynamics, such as [16, 73], whereas others accentuate the physical aspects
[17, 56, 57, 55]. Considering the abundant literature on dynamical systems
and nonlinear dynamics, I would recommend the curious reader to start from
simple, physically motivated examples which can be found in any textbook or
in the Internet (see, e.g., http://www.faqs.org/faqs/sci/nonlinear-faq/). My
own discourse is by necessity compilative and remains far from covering even
the small portion of this vast field.
4.15 The logistic model: the bugs are coming
Imagine a bunch of insects reproducing generation after generation so that
initially there were 𝐵 bugs, and after 𝑖 generations there will be 𝑁𝑖 of them (to
bug us: the total mass of insects on the Earth grows faster than that of
humans). When the population in a given region is sufficiently large, it can be
represented by a real continuous variable 𝑁(𝑡) > 0. If we assume that there
is a maximum sustainable population in the region (an equilibrium state) and
that that the population dynamics can be described by a single autonomous
equation, 𝑁̇ (𝑡) = 𝜑(𝑁), then we can produce a simple model assuming that
the iteration function has a quadratic polynomial form, 𝜑(𝑁, 𝑎) = 𝑎𝑁(1 −
𝑘𝑁). This is a real-valued function of two variables, 𝑎> 0 and 0 ≤𝑁≤1/𝑘.
Notice that the logistic model 𝜑(𝑁, 𝑎) is not invertible with respect to 𝑁=
𝑁(𝜑) since all the states in (0, 𝑁) have two sources in 𝜑-domain (preimages)
so that each value of 𝜑 corresponds to a pair of different values of population.
Therefore, information is lost in the course of inversion. In the Bourbaki
nomenclature, the logistic model is not injective; topologically-oriented
people prefer calling such maps non-continuous because they do not take an
open set into an open set.
Equation 𝑁̇ (𝑡) = 𝑎𝑁(1 −𝑘𝑁) is called the logistic model. Here
parameter 𝑎> 0 is called a control or growth parameter (in theoretical
ecology, this parameter, interpreted as the growth rate per individual, is
usually denoted as 𝑟) and parameter 𝑘> 0, determining the competition for
some critical resources, is often called in ecology and population biology an
inverse “carrying capacity” of the environment: it defines the equilibrium
population when the competition decreases the growth rate to such an extent
that the population ceases to rise. In electrical engineering, the same equation
describes the energy of 𝐸 of a nonlinear oscillator in the self-excitation regime
(in the first Bogoliubov-Mitropolsky approximation). A discrete alternative to
the continuous (differential) logistic model is the so-called logistic map:
𝑁𝑖+1 = 𝑎𝑁𝑖(1 − 𝑘𝑁𝑖), 0 ≤𝑁𝑖≤1/𝑘, 𝑖= 0,1,2, … , where the discrete
variable 𝑖 plays the role of time in the continuous model. This variable can be
210
Classical Deterministic Systems
interpreted as years or other characteristic temporal cycles, e.g., naturally
related to the reproductory behavior of respective species. Recall that the
term a “map” or “mapping” 𝜑 usually refers to the deterministic evolution
rule with discrete time and continuous state space 𝑋, 𝜑: 𝑋→𝑋. Then
evolution is synonymous with iteration: 𝑥𝑛+1 = 𝜑(𝑥𝑛). The logistic map, in
contrast with the logistic equation, is a model with discrete time i.e.
corresponds to snapshots taken at time points 𝑡= 𝑛𝜏, 𝑛= 0,1, … (the
elementary time step 𝜏 may be put to unity). It is interesting that discrete-
time dynamical systems can be produced from flows described by
continuous-time differential equations, an example is a stroboscopic model
provided by the Poincaré sections.
A discrete version of the logistic model can also have a direct physical or
biological meaning, for example, in the cases when generations are separated
(non-overlapped) in time. Thus, some insects just lay their eggs and die; they
do not interact with the next generations. One more famous (appeared
around 1200) discrete-time dynamical system, which was initially also a
mathematical model of biological reproductory behavior, is the Fibonacci
sequence, 𝑎𝑘+1 = 𝑎𝑘−1 + 𝑎𝑘, 𝑘= 1,2, … , 𝑎0 = 0, 𝑎1 = 1 . To describe this
process in terms of evolution, we can introduce matrices
𝑥𝑘= (𝑎𝑘−1
𝑎𝑘) , 𝐴= (0
1
1
1)
so that the Fibonacci sequence will be represented by a discrete-time cascade
𝑥𝑘+1 = 𝐴𝑥𝑘= 𝐴𝑘𝑥1. Here the phase space is 𝑋= ℝ2. The Fibonacci map, in
distinction to the logistic map, is a linear transformation, and we can directly
apply the standard prescriptions of linear algebra. The characteristic
equation of the Fibonacci map is det(𝐴−𝜆𝐼) = det (−𝜆
1
1
1 −𝜆) = 𝜆2 −𝜆−
1 so that eigenvalues 𝜆1,2 =
1±√5
2
give eigenvectors 𝑣1,2 = (1, 𝜆1,2)
𝑇. Then
𝑥1 = 𝐶1𝑣1 + 𝐶2𝑣2 and 𝑥𝑘+1 = 𝜆1
𝑘𝐶1𝑣1 + 𝜆2
𝑘𝐶2𝑣2. Using initial conditions 𝑥1 =
(0
1) = 𝐶1 ( 1
𝜆1) + 𝐶2 ( 1
𝜆2) we get 𝐶1 = −𝐶2, 𝐶1𝜆1 + 𝐶2𝜆2 = 1 so that 𝐶1 =
1
𝜆1−𝜆2 =
1
√5 = −𝐶2 and 𝑥𝑘+1 = ( 𝑎𝑘
𝑎𝑘+1) =
1
√5 [( 1
𝜆1) 𝜆1
𝑘−( 1
𝜆2) 𝜆2
𝑘] which gives for
Fibonacci numbers 𝑎𝑘=
1
√5 (𝜆1
𝑘−𝜆2
𝑘).
One can notice that the logistic map is a simple case of polynomial maps.
Of course, the logistic map can be represented in dimensionless form 𝑥𝑖+1 =
𝑎𝑥𝑖(1 −𝑥𝑖), where all 𝑥𝑖 are numbers interpreted as the population density
(or expectation value), and it is required that 0 ≤𝑥𝑖≤1, although certain
values of intrinsic growth parameter 𝑎 and initial data 𝑥0 may lead to
negative population densities, which fact indicates that the logistic map
should not be taken literally as a demographic model. In other words, the
logistic map takes the interval [0,1] into itself. Despite its apparent simplicity,
the logistic map is very general since any function having a nondegenerate
extremum behaves in the latter’s neighborhood like this map near 𝑥= 1/2.
An obvious extension of the logistic map depending on parameter 𝑎 is the
4.15
The logistic model: the bugs are coming
211
relationship 𝑥𝑖+1 = 𝑓(𝑥𝑖, 𝑎) usually called the Poincaré map; here
generations corresponding to 𝑖= 0,1,2, … can be interpreted as periods of
motion. Notice that in the case of maps with discrete time, phase trajectories
are given by a discontinuous sequence of points. The fixed point of the map
(𝑥= 𝑓(𝑥)) does not change with generations, 𝑥𝑖+1 = 𝑥𝑖 so that it would be
natural to define 𝜇𝑖≔𝑑𝑥𝑖+1/𝑑𝑥𝑖 as a multiplier. It is clear that the maximum
value in the sequence 𝑥𝑖 is reached when 𝜇𝑖= 𝑎(1 −2𝑥𝑖) = 0 and is 𝑥𝑖=
1/2. Therefore, the maximum iterated value 𝑥𝑖+1(𝑎) = 𝑎/4, which means that
to hold the logistic map in the required domain 0 ≤𝑥𝑖≤1 one must restrict
the growth parameter to segment 0 ≤𝑎≤4.
In general, for one-dimensional Poincaré recurrences, the iteration
function 𝑓 and control parameter 𝑎 can be selected and normalized in such a
way as when the initial data 𝑥0 (known as the seed) are taken from a finite
interval 𝑃= (𝛼, 𝛽), the iterates 𝑥1, … , 𝑥𝑛 also belong to 𝑃 (in the logistic map
𝑃= (0,1) for small values of the control parameter, see below). It means that
function 𝑓 maps interval 𝑃 into itself i.e. this map is an endomorphism (the
term “into” means that the iterations may not fill the whole interval). Single-
dimensional endomorphisms are often not invertible, which physically means
that the past cannot be uniquely reconstructed from the current data, so that
one might say that in such cases we are dealing with the systems having an
unpredictable past (isn’t it typical of human history?). When, however, an
endomorphism has a smooth inverse, then the map is a diffeomorphism.
The discrete logistic model may exhibit a somewhat unusual behavior of
the population. Thus for small values of the growth (multiplication)
parameter 𝑎, the initial value of the population produces a rather little effect
on the population dynamics, the latter being mostly controlled by parameter
𝑎. Nevertheless, when this parameter is increased, the population (e.g., of
bugs, mosquitoes or other insects and, apart from the latter, of rats, mice,
reptiles, leeches, etc.) start to change chaotically, and in this chaotic regime
one can observe an extreme sensitivity to the concrete number of initially
present members 𝑁(𝑡0) = 𝐵 (or to the “seed” 𝑥0). For large enough values of
the growth parameter, e.g., for 𝑎> 3 in the simple logistic map 𝑥𝑖+1 =
𝑎𝑥𝑖(1 −𝑥𝑖), the population bifurcates into two, so that the colony of insects
acts as if it were “attracted” by two different stable populations (such
asymptotic solutions are called attractors). This is a primary example of the
so-called period doubling. As this process has become very popular in
nonlinear dynamics (starting from approximately 1976) and has generated a
great lot of papers most of which are easily available (see the list of literature)
and because of the lack of space, we shall not reproduce here the respective
computations, restricting the current exposition to a catalogue of the basic
facts. We only mention here that period doubling occurs in many models and
in a variety of disciplines: in the Navier-Stokes equation (turbulence), in
meteorology (the Lorenz system), in chemical reactions, in nerve pulse
propagation, etc. In all such cases, we see that for different values of the
control parameter 𝑎 the system’s behavior alters drastically: from settling
down to a point (the population dies out or tends to a non-zero fixed value),
through quasi-regular oscillations, then irregular oscillations, and finally to
212
Classical Deterministic Systems
chaos i.e. totally decorrelated process. Recall that the dynamical systems,
described by ODEs, in general may have four types of solutions: equilibrium
states, regular (periodic) motion, quasi-periodic motion and chaos. These
solution types are associated with four attractor varieties: stable equilibrium,
limit cycle, d-dimensional torus and chaotic attractor.
One usually studies the properties of the logistic map with the help of a
computer, giving the seed 𝑥0 and the growth parameter 𝑎 as inputs. The task
is to find the asymptotic value of 𝑥𝑖 for 𝑖→+∞; one should of course bear in
mind that computer modeling gives no genuine asymptotics, but only the
values corresponding to large finite 𝑖-numbers. Yet one can obtain the graph
(𝑥, 𝑎) which is known as a bifurcation diagram for the logistic map. This
diagram plots a sequence of generations (population densities 𝑥𝑖) vs. growth
parameter 𝑎, actually representing the limit solutions i.e. the same and
cyclically repeated (after some number 𝑚 of steps) values of 𝑥. Such cycles
appear following the attempts to find the already mentioned fixed (also
known as stationary) points of the map 𝑥𝑖+1 = 𝜑(𝑥𝑖, 𝑎) i.e. limit solutions of
equation 𝑥= 𝜑(𝑥, 𝑎) mapping point 𝑥 onto itself. Fixed points can be both
stable and unstable, in particular, depending on the values of the control
parameter 𝑎. Specifically, for the logistic map, when 0 < 𝑎< 1 the only limit
value is 𝑥= 𝑥(1) = 0, and at 𝑎= 1 the first bifurcation occurs: now there are
two solutions i.e. besides 𝑥(1) = 0 a new solution 𝑥(2) = 1 −1/𝑎 appears.
Indeed, the search for fixed points gives 𝑥= 𝑎𝑥(1 −𝑥) which produces 𝑥(2)
for 𝑥≠0. In the range 1 < 𝑎< 3, the computed limit value corresponds to
this solution i.e. fixed points coincide with 𝑥(2) = 1 −1/𝑎 since it is stable
whereas the first solution 𝑥(1) = 0 is unstable. We can test the stability of
both solutions using the standard linearization procedure. Putting 𝑥= 𝑥∗+
𝛿𝑥, where 𝑥∗ is either 𝑥(1) or 𝑥(2), we have after linearization 𝛿𝑥𝑖+1 = 𝑎(1 −
2𝑥∗)𝛿𝑥𝑖, and we see that when 𝑥∗= 0, this solution is stable for 𝑎< 1 and
unstable for 𝑎> 1. If 𝑥= 𝑥(2), then 𝛿𝑥𝑖+1/𝛿𝑥𝑖= 2 −𝑎 i.e. 𝛿𝑥𝑖= (2 −𝑎)𝑖𝛿𝑥0
and 𝛿𝑥𝑖 converges to zero for |2 −𝑎| < 1 and diverges for |2 −𝑎| > 1. Thus,
solution 𝑥(1) becomes unstable for 𝑎> 1 and solution 𝑥(2) for 𝑎> 3; within
interval 1 < 𝑎< 3, 𝑥(2) remains stable. At point 𝑎= 3, the second bifurcation
occurs: apart from solutions 𝑥(1) = 0 and 𝑥(2) = 1 −1/𝑎, both of which
become unstable, a stable two-cycle (𝑚= 2) fixed point emerges. One can
predict the population for this two-cycle attractor e.g. by requiring that
generation (𝑖+ 2) has the same number (density) of individuals as
generation 𝑖: 𝑥= 𝑥𝑖= 𝑥𝑖+2 = 𝑎𝑥𝑖+1(1 −𝑥𝑖). Combining this relationship with
the logistic map, we get 𝑥= 𝑎2𝑥(1 −𝑥)(1 −𝑎𝑥+ 𝑎𝑥2). One may notice that
solutions 𝑥(1) = 0 and 𝑥(2) = 1 −1/𝑎 satisfy this algebraic equation. To find
two other solutions, one can divide the equation by 𝑥−𝑥(1) = 𝑥≠0 and 𝑥−
𝑥(2) = 𝑥−(1 −1/𝑎) to obtain the quadratic equation 𝑥2 −(1 + 1/𝑎)𝑥+
(1/𝑎)(1 + 1/𝑎) = 0 whose solutions are
𝑥= 1 + 𝑎± √𝑎2 −2𝑎−3
2𝑎
.
4.15
The logistic model: the bugs are coming
213
Putting here the value 𝑎= 3 that delimits the new bifurcation interval in
the diagram, we get the exact solution 𝑥= 2/3 (which can also be verified
directly from the logistic map). It is remarkable that a simple computer
iteration procedure demonstrates rather slow convergence to this exact
solution. It is also interesting that one can already observe the onset of
oscillations: when 2 < 𝑎< 3, the population density 𝑥𝑖 begins to fluctuate
near the solution 𝑥(2) = 1 −1/𝑎. With the growth parameter exceeding 𝑎=
3, oscillations become more and more pronounced, at first between two
values (depending on 𝑎) and then, with a further increase of the growth
parameter, between 4,8,16,…, 2𝑚 values. The size ratio of subsequent
bifurcation intervals on the diagram converges to the so-called Feigenbaum
constant 𝛿= 4.6692 … that has been computed up to more then 103decimal
places. So with the growth parameter being increased, period doubling
bifurcations occur more and more often, and after a certain value 𝑎= 𝑎∗
(sometimes called an accumulation point) has been reached, period
doublings transit to chaos. One can also notice that different parts of the
bifurcation diagram contain similar plots i.e. they are self-similar.
The physically limiting case 𝑎= 4, i.e. 𝑥𝑖+1 = 4𝑥𝑖(1 −𝑥𝑖) is especially
often encountered in the literature on logistic maps because one can find an
exact solution in this case. One usually makes the substitution 𝑥𝑖= (1/2)(1 −
cos 2𝜋𝜃𝑖) ≡sin2 𝜋𝜃𝑖 to obtain the following form of the logistic map:
cos 2𝜋𝜃𝑖+1 = cos 4𝜋𝜃𝑖. One can then define the principal value of 𝜃𝑖 as
belonging to interval [0,1/2] so that 𝜃𝑖+1 = 2𝜃𝑖 and 𝜃𝑖= 2𝑖𝜃0 . Thus
depending on the initial value 𝜃0, the map admits a countable set of cyclic
(periodic) solutions and a continuum of aperiodic solutions. Periodic
solutions can be found by putting successively 𝜃0 = 1 (𝑥= 0); 1/2 (𝑥=
0); 1/3 (𝜃𝑖= 1/3,2/3,4/3,8/3, … i.e. stationary solution 𝑥= 3/4 ); 1/5
(double cycle 𝜃𝑖= 1/5,2/5,4/5,8/5 … →1/5,2/5 i.e. 𝑥1 ≈0.345, 𝑥2 ≈0.904);
1/7 (triple cycle 1/7,2/7,4/7 i.e. 𝑥1 ≈0.188, 𝑥2 ≈0.611, 𝑥3 ≈0.950 ), etc.
Using the standard trick of linearization, one can see that all such solutions
are exponentially unstable, 𝛿𝜃𝑖= 2𝑖𝛿𝜃0 so that the actual solutions fluctuate
between such unstable ones: the situation known as chaos (or at least quasi-
chaos, when some solutions still remain stable).
This example is quite instructive since one can observe on it the
emergence of simple statistical concepts. Indeed, although chaos is a
dynamical notion arising in deterministic systems (there are no random
variables in the starting equations), the continuum chaotic states can only be
adequately described by introducing a probability measure i.e. probability to
find the population between 𝑥 and 𝑥+ 𝑑𝑥. It is this indispensable emergence
of probabilistic description that is usually known as stochasticity. To find the
respective distribution function, let us, in accordance with the archetypal
model of statistical mechanics, consider the simplified logistic map 𝜃𝑖= 2𝑖𝜃0
as an ensemble of solutions labeled by different initial conditions. A dynamic
map 𝑥= 𝑔𝑡𝑥0 , where 𝑔𝑡 is a one-parameter group (or semigroup) of
translations along vector field trajectories, takes these initial conditions into
the actual state of the system. For example, in the case of continuous time and
smooth vector field 𝑣(𝑥) on an 𝑛-dimensional manifold 𝑀, 𝑔𝑡 is a measure-
214
Classical Deterministic Systems
preserving diffeomorphism (see the above section on dynamical systems) i.e.
if 𝜇 is a measure that in any local coordinate system is represented through
the probability density, 𝑑𝜇= 𝜌(𝑥)𝑑𝑥, 𝑥= {𝑥1, … , 𝑥𝑛} , then measure 𝜇 is
invariant under the action of 𝑔𝑡 if density 𝜌(𝑥) satisfies the Liouville equation,
𝜕𝑖(𝜌𝑣𝑖) ≡div(𝜌𝐯) = 0 (the Liouville theorem, see sections 4.1, 4.5.3.). Such a
measure is an integral invariant of flow 𝑔𝑡 (i.e. of dynamical system). A similar
probability distribution for the logistic map with growth parameter 𝑎= 4 for
fixed 𝑖 is 𝜌(𝜃𝑖) = 𝑑𝜃𝑖/𝑑𝜃0 = 2𝑖= const . But from substitution 𝑥𝑖= (1/
2)(1 −cos 2𝜋𝜃𝑖) we have
𝑑𝜃𝑖
𝑑𝑥𝑖
=
1
𝜋sin 2𝜋𝜃𝑖
=
1
𝜋√1 −cos2 2𝜋𝜃𝑖)
=
1
𝜋√1 −(1 −2𝑥𝑖)2 =
1
2𝜋√𝑥𝑖(1 −𝑥𝑖)
and, in particular,
𝑑𝜃0
𝑑𝑥=
1
2𝜋√𝑥(1−𝑥). Defining 𝑑𝜇= 𝜌(𝑥)𝑑𝑥≔𝜌(𝜃)𝑑𝜃, we get
𝜌(𝑥) =
const
√𝑥(1−𝑥), where the constant can be determined from the normalization
condition, ∫𝜌(𝑥)𝑑𝑥= 1
1
0
which gives const = 1/𝜋. Finally, we have the
distribution function for the probability to find the logistic system (e.g., the
population) between 𝑥 and 𝑥+ 𝑑𝑥
𝜌(𝑥) =
1
𝜋√𝑥(1 −𝑥)
(4.16.1. )
Notice that this distribution function does not depend on the starting
value 0 ≤𝑥0 ≤1 i.e. is universal. This is a specific feature of the physically
limiting case of the logistic map with 𝑎= 4.
One can show that the logistic map for 𝑎= 4, which is in this case chaotic
for almost all initial conditions, may be related to the Lorenz attractor
appearing in the three-dimensional meteorological model constructed in
1963 by E. Lorenz. This mathematical model is represented by a dynamical
system with 3d phase space and is fully deterministic since it is represented
by three ODEs 𝐱= 𝐟(𝐱, 𝑎), 𝐱= (𝑥1, 𝑥2, 𝑥3) (with quasilinear vector field 𝐟).
Nonetheless, the model demonstrates chaotic behavior i.e, abrupt and
apparently random changes of state for some set of control parameters 𝑎.
We have already noted that there are many interesting things about
chaos and one of them is that it had not been discovered much earlier. The
logistic map, for instance, could well have been explored by the brilliant
Enlightenment mathematicians, but probably, after the creation of Galileo-
Newton’s mechanics in the late 17th century, scientists were preoccupied with
continuous-time mathematical analysis and differential equations describing
the objects that change smoothly with time. Even today, in the epoch of digital
technologies, many “old-guard” scientists are much better familiar with
differential equations than with their discrete counterparts.
Another chrestomathic example of a discrete-time system 𝑥𝑛+1 =
𝑓(𝑥𝑛), 𝑛∈ℤ besides the logistic map is given by piecewise linear (!) maps
known as the Bernoulli shift 𝐵(𝑥)
4.15
The logistic model: the bugs are coming
215
𝑓: [0,1), 𝑓(𝑥) ≡𝐵(𝑥) = {
2𝑥, 0 ≤𝑥< 1/2
2𝑥−1, 1/2 ≤𝑥< 1 = 2𝑥 mod1.
This simple map produces a rather complex dynamics which is
characterized by an extreme sensitivity to initial conditions which can be
demonstrated, as usual, by computing the Lyapunov exponents. Indeed, for
two paths beginning in two nearby points 𝑥0 and 𝑥0
′ = 𝑥0 + 𝜀0 with
displacement 𝜀0 ≪1 we shall have ∆𝑥𝑛≔|𝑥𝑛
′ −𝑥𝑛| = 2𝜀𝑛−1 = 22𝜀𝑛−2 =
⋯= 2𝑛𝜀0 ≡𝑒𝑛ln 2𝜀0. We see that two closely located points diverge with the
rate 𝜆= ln 2 > 0, which is the Lyapunov exponent for map 𝐵(𝑥). Since 𝜆> 0
the map exhibits an exponential instability and can be viewed as chaotic.
Discrete time maps 𝑥𝑛+1 = 𝑓(𝑥𝑛), 𝑛∈ℤ are also known as fixed point
iterations. In general, single-dimensional discrete time maps 𝑓(𝑥): 𝑥∈𝑋⊆
ℝ, 𝑋→𝑋 that may be represented as 𝑥𝑛+1 = 𝑓(𝑥𝑛) (an obvious
generalization of the logistic map), even very primitive ones, can also exhibit
an exponential dynamical instability and thus may be called chaotic (in the
Lyapunov sense). Expression 𝑥𝑛+1 = 𝑓(𝑥𝑛) supplied with initial condition
𝑥(0) = 𝑥0 (as the initial population in the logistic map) can be viewed as an
equation of motion for our 1d deterministic dynamical system with discrete
time.
Let us now return to the continuous-time case (such systems are the most
interesting for physics and technology). In accordance with the idea of
linearity, the model of exponential growth, discussed in the preceding section,
can be applied when the number 𝑁 of individuals is relatively small so that
the correlation effects between them can be disregarded. Correlations
between individuals can be observed on both the pairwise and collective
level114. Pair correlations give rise, e.g., to the hyperbolic or explosion model
leading to the sharpening regime with vertical asymptotes meaning that the
population will tend to infinity in finite time (this model was briefly discussed
in 4.15. and 4.16.), whereas collective correlations manifest themselves, for
example, in the competition for resources (such as food, water, living space,
energy, information, position in the hierarchy, political power, etc.) within the
population. With the rising number of individuals, competing for resources
tends to impede the growth rate i.e. growth factor 𝑏= 𝑏(𝑁), 𝑑𝑏/𝑑𝑁< 0. The
simplest model of an impeded growth would be putting rate 𝑏 to fall linearly
with growing population, 𝑏= 𝑎−𝑘𝑁, 𝑎> 0, 𝑘> 0. This mathematical model
is quite natural since one can approximate any smooth function by a linear
one for sufficiently small values of its argument (this is, by the way, the main
idea of calculus and differentiation), in the present case, for small enough 𝑁.
Then we have the equation expressing, in the continuous case, the logistic
model:
114 The patterns of pairwise vs. collective correlations is quite often encountered in many-
particle physical models, e.g., in statistical and plasma physics.
216
Classical Deterministic Systems
𝑑𝑁
𝑑𝑡= (𝑎−𝑘𝑁)𝑁= 𝑎𝑁(1 −𝑁
𝑁0
) ,
(4.16.2. )
where 𝑁0 ≡𝑎/𝑘 is the equilibrium population (in ecological literature this
parameter is usually denoted as “carrying capacity” 𝐾). One-dimensional
equation (4.16.2.) is known as the logistic equation; this is an apparently
primitive, but very rich and important mathematical model which may be
interpreted as a combination of the exponential and hyperbolic models. The
meaning of the logistic model is that the reproductive rate is assumed to be
proportional to the number of individuals, whereas the mortality rate is
proportional to the frequency (probability) of pair encounters (collisions). Of
course, by properly scaling time 𝑡 and number of individuals 𝑁, e. g. , 𝑡→𝜏: =
𝑎𝑡, 𝑁→𝑘𝑁/𝑎= 𝑁/𝑁0: = 𝑥, we can reduce (4.16.2.) to a dimensionless form
𝑥̇ = 𝑥(1 −𝑥) which is convenient to study the main dynamical properties of
the model in the (𝑥, 𝜏) plane, but we shall rather stick to the form (4.16.3.) in
order to better understand the role of parameters 𝑎 and 𝑁0.
The logistic equation, though being a nonlinear ODE, is simple in the
sense that it can be easily integrated (since it is autonomous and variables in
it can be separated). There are many ways to integrate the logistic equation;
for demonstration purposes, we can choose probably the simplest one,
representing the logistic equation as
𝑑𝑁
𝑑𝑡= (𝑎−𝑘𝑁)𝑁= 𝑎𝑁−𝑘𝑁2, 𝑁(𝑡0) = 𝐵, 𝑎> 0, 𝑘> 0,
(4.16.3. )
𝐵, 𝑎, 𝑘 are constants. For 𝑁≠𝑎/𝑘, 𝑡−𝑡0 = ∫𝑑𝑁(𝑎𝑁−𝑘𝑁2)−1
𝑁
𝐵
. This
integral exists only when both 𝑁 and 𝐵= 𝑁(𝑡0) i.e the current and the initial
values of the population lie in intervals (0, 𝑎/𝑘) or (𝑎/𝑘, +∞). In other words,
there exist no solutions that cross the straight line 𝑎−𝑘𝑁= 0. Integration
gives
𝑡−𝑡0 = 1
𝑎∫𝑑𝑁(1
𝑁+
𝑘
𝑎−𝑘𝑁) =
𝑁
𝐵
1
𝑎log 𝑁(𝑎−𝑘𝐵)
𝐵(𝑎−𝑘𝑁)
Solving this equation with respect to 𝑁, we have
𝑁(𝑡) =
𝑎𝐵exp[𝑎(𝑡−𝑡0)]
𝑎−𝑘𝐵+ 𝑘𝐵exp[𝑎(𝑡−𝑡0)] =
𝑎𝐵
𝑘𝐵+ (𝑎−𝑘𝐵)exp[−𝑎(𝑡−𝑡0)]
One can see that for an initial population lower than the equilibrium
value, 𝑁(𝑡0) < 𝑁0 i.e. 0 < 𝐵< 𝑎/𝑘, 𝑁(𝑡) is defined for all 𝑡, 0 < 𝑡< +∞,
while for 𝑁(𝑡0) > 𝑁0 i.e. 𝐵> 𝑎/𝑘, 𝑁(𝑡) is only defined for
𝑡> 𝑡0 −1
𝑎log
𝑘𝐵
𝑘𝐵−𝑎
4.15
The logistic model: the bugs are coming
217
In the case 𝑁(𝑡0) ≡𝐵= 𝑎/𝑘, the solution is a constant, 𝑁(𝑡) = 𝑎/𝑘, since in
this case 𝑑𝑁/𝑑𝑡= 0.
One can also see that the solution converges to the constant value 𝑎/𝑘.
For 𝐵< 𝑎/𝑘, 𝑁(𝑡) < 𝑁0 ≡𝑎/𝑘 for all 𝑡 so that 𝑎−𝑘𝑁(𝑡) > 0 and 𝑑𝑁/𝑑𝑡> 0
which means that if the initial population is lower than the equilibrium one,
the number of individuals monotonicly increases. In the opposite case, when
the starting population exceeds the equilibrium one, 𝐵> 𝑎/𝑘, we have
𝑁(𝑡) > 𝑁0 for all 𝑡 and 𝑑𝑁/𝑑𝑡< 0 which means that the population is
monotonicly shrinking.
The linear regime of the logistic model corresponds to the case when one
can neglect the second term (proportional to 𝑁2) in the logistic equation. This
is correct for small probabilities of pair correlations and for short observation
times. More accurately, (𝑘𝐵/𝑎)(exp[𝑎(𝑡−𝑡0)] −1) ≪1 which gives for the
short period of observation, 𝑎(𝑡−𝑡0) ≪1, the following restriction on the
death rate in the population, 𝑘𝐵(𝑡−𝑡0) ≪1. An obvious drawback of the
logistic model is that it does not contain spatial variables, which means that it
cannot be applied to spatially inhomogeneous situations.
The most frequently used form (4.16.2.) of the logistic equation
introduces explicitly the asymptotic value 𝑁0 = 𝑘/𝑎. Then the solution with
the initial condition 𝑁(𝑡= 𝑡0) ≡𝐵 can be written as
𝑁(𝑡) =
𝐵𝑁0𝑒𝑎(𝑡−𝑡0)
𝑁0 + 𝐵(𝑒𝑎(𝑡−𝑡0) −1) = 𝑁0
𝑁(𝑡0)
𝑁(𝑡0) + (𝑁0 −𝑁(𝑡0)𝑒−𝑎(𝑡−𝑡0))
(4.16.4. )
So, by directly integrating the logistic equation (which is a rare occasion
in the world of nonlinear equations), we get the explicit expression for
integral curves that are usually called the logistic curves whose family
depends (in the deterministic case) on three parameters, 𝑎, 𝑁0 and 𝐵=
𝑁(𝑡0). The graph of this solution produces an S-shaped curve which can be
represented in dimensionless units by the function 𝑁(𝑡) = 1/(1 +
exp (−𝑎𝑡) ), in which one can easily recognize the Fermi distribution of
fermions over single-particle energy states in quantum statistical physics. The
process described by the logistic model (the logistic process) has two
equilibrium points, 𝑁= 0 and 𝑁= 𝑁0, with the first being unstable, since
small population 𝛿𝑁 grows near 𝑁= 0 (𝑁̇ > 0), and the second stable (𝑁̇ <
0 for 𝑁> 𝑁0 and 𝑁̇ > 0 for 𝑁< 𝑁0 near 𝑁= 𝑁0 (which can be seen already
from equation (4.16.2.) i.e. without even integrating it). In other words, the
population dynamics evolves to the equilibrium value 𝑁= 𝑁0. More exactly,
the process tends asymptotically for 𝑡→+∞ to the stable equilibrium 𝑁=
𝑁0 at any starting value 𝑁(0) = 𝐵> 0 . For 𝑡→−∞, the process
asymptotically converges to the state 𝑁= 0 . Thus the logistic model
describes the transition from the unstable state 𝑁= 0 to the stable state 𝑁=
𝑁0, occurring in infinite time. We shall see the examples of similar transitions
when addressing quantum-mechanical models below. The integral curves
have vertical asymptotes 𝑡= const for any 𝑡> 0. The logistic process is very
close to the exponential (Malthusian) growth for small 𝑁, (𝑁≪𝑁0) , but
218
Classical Deterministic Systems
begins to fall behind approximately at 𝑁≈𝑁0/2. This saturation effect is a
manifestation of correlations within the population.
One usually considers a single control parameter in the logistic model, the
growth parameter 𝑎. In fact, however, there are at least two control
parameters which can be both important, especially if the logistic model is not
necessarily applied to describe the population growth, when all the
parameters and variables are positive by default. Renaming the constants,
e.g., 𝑎= 𝑘𝑁0
2, 𝑏= 𝑎/𝑘𝑁0 (see (4.16.3.)-(4.16.4.)), we arrive at equation
𝑑𝑥
𝑑𝑡=
𝑓(𝑥, 𝑎, 𝑏) = 𝑎𝑥(𝑏−𝑥) whose solutions depend on two parameters 𝑎 and 𝑏,
not necessarily strictly positive. In other words, the dynamical evolution
occurs in 3d space (𝑥, 𝑎, 𝑏) without restricting the motion to domain 𝑥>
0, 𝑎> 0, 𝑏> 0 as in population models. For 𝑏> 0, point 𝑥= 0 is unstable
whereas for 𝑏< 0 it is stable; at 𝑏= 0 stability is changed to instability both
at the stable point 𝑥= 𝑏 and the unstable point 𝑥= 0 of the traditional
logistic model – a simple example of the bifurcation.
One can emphasize that there is a signifcant contrast between the
solutions to differential equations and the ones to the respective difference
equations (the logistic map). If, e.g., we take differential equation 𝑥̇ = 𝑎𝑥(𝑏−
𝑥), its solution (an S-curve) can be easily obtained:
𝑥(𝑡) =
𝑥0𝑒𝑎𝑡
1 −𝑥0(1 −𝑒𝑎𝑡) , 𝑥0 = 𝑥(0)
(see above, here 𝑡0 = 0), whereas the solution to the logistic map 𝑥𝑖+1 =
𝑎𝑥𝑖(1 −𝑥𝑖) obtained by iterating this difference equation has a totally
different character reflecting much more complicated behavior. This behavior
has been studied by many researchers and is still hardly totally understood.
From a more general viewpoint, the logistic model is a very particular
case of the evolution of an autonomous system described by vector equation
𝐱̇ = 𝐟(𝐱, 𝑎), where vector field 𝐟 depends on parameter 𝑎 (it can also be a
vector). As we have seen, in certain cases, variations of the control parameter
𝑎 can radically change the system’s motion, for instance, result in chaotic
behavior. Note that the logistic model is not necessarily identical with the
population model. When the logistic equation is not interpreted as describing
the population growth, one can explore the behavior of solutions as
parameter 𝑘 (see (4.16.2.)-(4.16.3.)) is varied.
4.15.1. Extensions of the logistic model
Although the logistic model was initially devised to describe mathematically
the population dynamics, in particular, the survival conditions, this model has
a more general meaning. We have already mentioned the similarity between
the continuous-time logistic model and the self-sustained oscillations in laser
generation. Indeed, one often writes the logistic equation in the extended
form d 𝑁/𝑑𝑡= 𝑎𝑁(1 −𝑘𝑁) = (𝛼−𝛾−𝛽𝑁)𝑁, where parameters 𝛼 and 𝛾
define the birth and mortality rates, respectively, whereas the nonlinear term
𝛽𝑁 corresponds to population shrinking due to intraspecific competition for
4.15
The logistic model: the bugs are coming
219
resources. This form of the logistic equation is analogous to one of the forms
of the Van der Pol equation written for the oscillation energy 𝐸=
(
𝑚
2) (𝑣2 + 𝜔0
2𝑥2) (in fact the Lyapunov function)
𝑑𝐸
𝑑𝑡= (𝛼−𝛽𝐸)𝐸,
𝛼= 𝛼0 −𝛾,
where 𝛼0 is the feedback parameter, 𝛾 and 𝛽 are friction coefficients (linear
and nonlinear). Thus, the birth rate in the logistic equation characterizes the
positive feedback, while the mortality rate accounts for friction.
This analogy enables one to link the logistic model with the nonlinear
theory of Brownian motion. This latter theory is quite universal and has a
number of important physical, chemical and biological applications, in
particular, in electrical and chemical engineering (e.g. catalysis), radiophysics,
laser technology, biological self-organization, etc. Mathematically, the
analogy between the logistic model and the nonlinear Brownian motion may
be expressed on the level of the Fokker-Planck equation with the nonlinear
diffusion coefficient, 𝐷(𝑁) = 𝛾+ 𝛽𝑁:
𝜕𝑓(𝑁, 𝑡)
𝜕𝑡
= 𝜕
𝜕𝑁(𝐷(𝑁)𝑁𝜕𝑓(𝑁, 𝑡)
𝜕𝑁
) + 𝜕
𝜕𝑁[(−𝛼+ 𝛾+ 𝛽𝑁)𝑁𝑓(𝑁, 𝑡)],
where 𝑓(𝑁, 𝑡) is the distribution function characterizing the probability of the
population to be in a state with 𝑁 individuals at time 𝑡. Notice that the
description of population dynamics both in terms of the continuous-time
logistic equation and the Fokker-Planck equation is valid for a large number
of individuals, 𝑁≫1. One can read about the relationship of the logistic
equation to stochastic differential equations and closely connected theory of
nonequilibrium phase transitions in the often-cited book [301].
Both the logistic map and the logistic model can be extended and
improved. Thus, the growth (control) parameter 𝑎 in the logistic map may
depend on discrete time i.e. iteration step 𝑖 so that we shall have 𝑥𝑖+1 =
𝑎𝑖𝑥𝑖(1 −𝑥𝑖). This is a discrete-time version of the optimal control problem,
and the corresponding extremal properties similar to Pontryagin’s maximum
principle in continuous-time dynamical systems can be established. In a more
general situation than the one-dimensional logistic map, the discrete-time
iterated process may have a vector character, 𝐱𝑖+1 = 𝛗(𝐱𝑖, 𝐚𝑖), where 𝐱𝑖=
{𝑥𝑖
1, … , 𝑥𝑖
𝑝}, 𝐚𝑖= {𝑎𝑖
1, … , 𝑎𝑖
𝑞} (𝐚 is known as a decision vector), and index 𝑖
enumerates iteration steps. The simplest logistic model with scalar control is
of course linear, e.g. 𝑎𝑖= 𝑎0(1 + 𝜎𝑖), 𝑖= 1,2, … One can rewrite the logistic
map for the variable growth parameter as 𝑥𝑖+1 −𝑥𝑖= 𝑎𝑖𝑥𝑖(𝑋𝑖−𝑥𝑖), where
𝑋𝑖≡1 −1/𝑎𝑖 (recall that this quantity coincides, for 𝑎𝑖= 𝑎= const, with one
of the main fixed points of the logistic map, 𝑋= 𝑥(2)). If we assume that ∆𝑎𝑖=
𝑎𝑖+1 −𝑎𝑖≪𝑎𝑖, then we can approximate the logistic map by the continuous-
time
equation, 𝑥𝑖+1 −𝑥𝑖≡
𝑥𝑖+1−𝑥𝑖
1
≈
𝑑𝑥
𝑑𝑡 and 𝑎𝑖≈𝑎(𝑡) so
that
𝑑𝑥
𝑑𝑡=
220
Classical Deterministic Systems
𝑎(𝑡)𝑥(𝑋(𝑡) −𝑥) or, in the form (4.16.2.)
𝑑𝑥
𝑑𝑡= 𝑎∗(𝑡)𝑥(1 −
𝑥
𝑋(𝑡)) , where
𝑎∗(𝑡) ≔𝑎(𝑡)𝑋(𝑡). We see that the logistic map under the assumption of small
variations of the growth parameter between two successive generations is
reduced to the continuous-time logistic model with variable growth
parameter and drifting stable equilibrium point. The drift on a slow time scale
of equilibrium population 𝑋 is quite natural since the ecosystem carrying
capacity may change with time due to human technological impact, biological
evolution or geophysical variations (such as climate change).
One can consider two subcases here: one corresponding to adiabatic
variations of parameters 𝑎∗(𝑡) and 𝑋(𝑡): i.e. they both change little during the
characteristic time 𝑡~1/𝑎∗of the process i.e. |𝑑𝑎∗(𝑡)/𝑑𝑡| ≪(𝑎∗(𝑡))
2, |𝑑𝑋(𝑡)/
𝑑𝑡| ≪𝑎∗(𝑡)𝑋(𝑡) on some set of temporal values 𝑡; the other reflects the
opposite situation of abrupt changes. In the linear model of growth parameter
variation, 𝑎∗(𝑡) = 𝑎0
∗(1 + 𝜎𝑡) (one can omit the asterisks for simplicity), the
adiabatic regime can be described by a series over small parameter 𝜎 (the
characteristic time of slow control parameter variation is 1/𝜎). Writing the
logistic equation in the form 𝑥= 𝑋−
𝑥̇
𝑎𝑥 and noticing that 𝑥̇~(𝑎0𝜎𝑡)𝑥, we may
obtain the expansion 𝑥= 𝑋+ 𝛿𝑥(1) + 𝛿𝑥(2) + ⋯ , where 𝛿𝑥(1) = −
𝑋̇
𝑎0𝑋≲𝜎𝑡,
𝛿𝑥(2) = −
1
𝑎0𝑋(𝑋̇ 𝛿𝑥(1)
𝑋
+
𝑑𝛿𝑥(1)
𝑑𝑡) =
𝑋̈
(𝑎0𝑋)2 , etc. The meaning of the adiabatic
regime is that the stable fixed point 𝑋 (e.g., interpreted as the eventually
reached population) is slowly drifting and the solution 𝑥(𝑡) (current
population) adjusts to this smooth evolution of parameters. In this adaptation
process a band of values is formed instead of pointlike solution 𝑥. The
positions of bifurcation points also change a little so that the initial bifurcation
diagram is slightly distorted.
In the other limiting subcase of abrupt changes i.e. occurring at times 𝜏≪
1/𝑎, one can consider all the model parameters (𝑎, 𝑋, … ) to remain constant
during the change so that the solution will jump between two levels (or two
bands). This situation is close to the ones treated within the framework of
perturbation theory in quantum mechanics (recall that the quantum-
mechanical perturbation theory had its origins in the methods of finding
solutions to ordinary differential equations).
One can of course consider the system of two or more interacting
populations, each being described by the logistic model. For two populations
we might have the following system of coupled logistic equations
𝑑𝑥1
𝑑𝑡= 𝑎1𝑥1 (1 −𝑥1 + 𝑥2
𝑁
) ,
𝑑𝑥2
𝑑𝑡= 𝑎2𝑥21 (1 −𝑥1 + 𝑥2
𝑁
) .
(4.16.1.1. )
We can explore the stability of solutions to this model (with a two-
dimensional phase space in the same way as for a single population) and, e.g.,
draw the phase portrait. The model of two coupled populations belongs to the
class of competition models which we shall discuss shortly.
4.15
The logistic model: the bugs are coming
221
We have already noted that the logistic model does not account for
spatially inhomogeneous processes. In general, mathematical and computer
models that are constructed by averaging out the spatial115 information such
as territorial population distribution and migration and keeping only the time
variable as the vector field parameter hide many essential characteristics. To
overcome this obvious drawback of the logistic model, one can use a simple
trick of introducing one more parameter. Namely, we can write the initial
condition 𝑁(𝑡0) ≡𝐵 (see equation (4.16.3.)) in the form 𝑁(𝑥, 𝑡0) ≡𝐵(𝑥) =
1/(1 + exp(−𝑝𝑥)), where 𝑥 is interpreted as a spatial variable and treated as
a supplementary parameter. One can regard variable 𝑥 as completely hidden
due to some averaging, with only the time variable remaining directly
accessible. The spatially dependent initial condition is assumed to restore the
hidden variable and to evolve with time into a space-dependent solution
𝑁(𝑥, 𝑡) =
1
1 + 𝑒−𝑝𝜉,
𝜉= 𝑥+ 𝑤𝑡
(here, we put for simplicity 𝑡0 = 0). In this extension of the logistic model,
one can interpret 𝑁(𝑥, 𝑡0) = 𝑁(𝑥, 0) as a spatially inhomogeneous
distribution that spreads with time as a kinematic wave: its propagation is
due to nonlinear terms and is not caused by dynamical interactions. If we put
𝜉= 0, we get 𝑥= 𝑤𝑡 i.e. a steady profile. In other words, condition 𝜉= 0
corresponds to choosing the coordinate frame moving with transfer velocity
𝑤= 𝑥/𝑡.
The mentioned shortcoming of the model - absence of spatial information
– can also be surmounted by using the logistic model combined with partial
differential equations, 𝐿𝑢= 𝜑(𝑢), where 𝐿 is some operator (not necessarily
linear) acting on the space on which function 𝑢 is defined, 𝜑(𝑢) = 𝑎𝑢(1 −𝑢).
Here for brevity the dimensionless form is used. Operator 𝐿 is assumed to
contain a spatial part, for instance, 𝐿= 𝜕𝑡−𝜕𝑥𝑥 (diffusion) or 𝐿= 𝜕𝑡𝑡−𝜕𝑥𝑥
(linear waves), 𝐿= 𝜕𝑡−(𝜂(𝑢))𝑥𝑥 (nonlinear diffusion), etc. In such models,
each zero of 𝜑(𝑢) corresponds to a stationary solution of the associated PDE,
𝐿𝑢= 0. When 𝐿 is the diffusion operator, one can interpret the problem
simply as supplementing the logistic model with a spatially dependent
spreading part (which reminds one of the Navier-Stokes equation)
𝑢𝑡= 𝑎Λ2𝑢𝑥𝑥+ 𝜑(𝑢)
(4.16.1.2. )
so that solutions now depend on both space and time variables, 𝑢= 𝑢(𝑥, 𝑡).
Here quantity Λ is the so-called diffusion length, by an appropriate choice of
scaling one can set Λ2 to unity. Models similar to (4.16.1.2.) are rather popular
in biological sciences and biomathematics, for example, Fisher’s model of
115 Also, information provided from other underlying spaces, not necessarily of geometrical
nature, such as biological, ecological, epidemiological, immunological, physiological, social, etc.
character.
222
Classical Deterministic Systems
biological adaptation116. One can interpret this diffusion model as a strategy
to compute varying distributions of gene frequencies within the population –
an approach known today as population genetics. Equation (4.16.1.2.) is also
said to describe nonlinear diffusion and is known as the reaction-diffusion
equation. One often ascribes the probabilistic meaning to the right-hand side
𝜑(𝑢) (it is interpreted as being proportional to the product of probabilities 𝑝
that an event occurred and 𝑞= 1 −𝑝 that it did not), but mathematically this
is still the function forming the logistic equation. The corresponding PDE has
two stationary solutions 𝑢= 0 and 𝑢= 1 and a traveling wave solution,
𝑢(𝑥, 𝑡) = 𝐹(𝑥−𝑤𝑡) . Note that equation (4.16.1.2.) is invariant under
reflection 𝑥→−𝑥 so that velocity 𝑤 can be both positive and negative. It is
also clear that (4.16.1.2.) is invariant under affine translations of 𝑥 and 𝑡,
therefore one can add any arbitrary constant to 𝑧≔𝑥−𝑤𝑡. Inserting the
ansatz 𝑢= 𝐹(𝑧) into PDE (4.16.1.2.), we get the second-order ODE
𝐹′′ + 𝑤
𝑎Λ2 𝐹′ + 1
𝑎Λ2 𝜑(𝐹) = 0,
(4.16.1.3. )
where the prime denotes differentiation over 𝑧. This ODE is of the nonlinear
damped oscillator type (sometimes equations of this type are called the
Liénard equations), and after solving it, we can find, if possible, asymptotic
solutions for 𝑥→±∞ and the transitions between two stationary points 𝑢1 =
𝑢(−∞) = 𝐹(−∞) and 𝑢2 = 𝑢(+∞) = 𝐹(+∞) (in the case of the underlying
logistic model, 𝑢1 = 0, 𝑢2 = 1). Such asymptotic requirements play the role of
boundary conditions for (4.16.1.3.); now the problem is to determine the
values of 𝑤 (one may consider them eigenvalues) for which solution 𝐹≥0
satisfying these asymptotic conditions exists, and if it does, then for what
initial states 0 ≤𝑢(𝑥, 0) ≤1. Will the solution 𝐹 take the form of a traveling
wave for any initial distribution 𝑢(𝑥, 0)? This question proved to be rather
nontrivial, and stimulated an extensive research. One of the natural ways to
treat this problem is by representing (4.16.1.3.) in the form of a dynamical
system
𝐹′ = 𝑦, 𝑦′ = −𝑤
𝑎Λ2 𝑦−1
𝑎Λ2 𝜑(𝐹),
then we can investigate the solution in the phase plane (𝑦, 𝑦′) or, more
conveniently, (𝑢, 𝑦) ≡(𝑦1, 𝑦2). The phase trajectories are obtained, as usual,
by excluding the vector field parameter (here 𝑧), and we have117
𝑑𝑦
𝑑𝑢= −
1
𝑎Λ2𝑦(𝑤𝑦−𝜑(𝑢)) ≡−
1
𝑎Λ2𝑦2
(𝑤𝑦2 −𝜑(𝑦1)).
(4.16.1.4. )
116 R. Fisher was a prominent British biologist and statistician. Fisher’s model was explored in
detail by A. N. Kolmogorov, I. G. Petrovsky and N. S. Piskunov and is sometimes called the Fisher-
KPP model.
117 Here, to avoid confusion we are placing vector indices below. The difference between
vectors and covectors is usually immaterial in the context of two-dimensional dynamical systems.
4.15
The logistic model: the bugs are coming
223
This equation has two equilibrium (also known as fixed, singular or
critical) points in the (𝑢, 𝑦) ≡(𝑦1, 𝑦2)-plane: (0,0) and (1,0). In the vicinity of
point (0,0), one can linearize equation (4.16.1.4.), which corresponds to the
transition from logistic to exponential growth model, obtaining
𝑑𝑦
𝑑𝑢≈−
1
𝑎Λ2𝑦(𝑤𝑦−𝑎𝑢) =
𝑢
Λ2𝑦−𝑤
𝑎Λ2
(4.16.1.5. )
or, identically,
𝑑𝑦2
𝑑𝑦1
=
𝑦1
Λ2𝑦2
−𝑤
𝑎Λ2
When exploring the behavior of solutions, one can observe here the
competition between two terms in the right-hand side, therefore the phase
portrait changes its character for velocity 𝑤 greater or smaller some critical
value ( 𝑤= 𝑤𝑐). By solving (4.16.1.5.), one can show that 𝑤𝑐~𝑎Λ . To
determine the type of fixed points and thus to characterize the phase flow, we
may proceed according to the standard prescriptions of two-dimensional
dynamical systems. Writing 𝑦1
′ = 𝑦2, 𝑦2
′ = −
1
Λ2 𝑦1 −
𝑤
𝑎Λ2 𝑦2 and looking for a
solution of the form exp (𝜇𝑧), we obtain the system matrix
𝐴= (
0
1
−1
Λ2
−𝑤
𝑎Λ2
)
(notice that quantity 𝑤/𝑎Λ2 is formally analogous to the dissipation
coefficient). The trace of matrix 𝐴 is Tr 𝐴= −𝑤/𝑎Λ2 and det 𝐴= 1/Λ2 so that
the characteristic equation is 𝜇2 −𝜇Tr 𝐴+ det 𝐴= 𝜇2 + 𝜇𝑤/𝑎Λ2 + 1/Λ2 = 0,
and the roots are real and different when discriminant 𝐷= (𝑤2 −4𝑎2Λ2)/
4𝑎2Λ4 > 0 i.e. |𝑤| > 2𝑎Λ. This last value can be identified with 𝑤𝑐. Assume at
first that the roots of the characteristic equation have the same sign, then the
real solutions are 𝑦1 = 𝑦10𝑒𝜇1𝑧, 𝑦2 = 𝑦20𝑒𝜇2𝑧, where 𝑦10, 𝑦20 are arbitrary
constants. One can, as before, get rid of parameter 𝑧 and obtain either the
family of parabola-like118 orbits |𝑦1| = 𝑐|𝑦2|𝜇1/𝜇2, where 𝑐 is some constant
(independent of 𝑧), or 𝑦1 = 0. Recall that critical point of this kind is called a
node; if 𝜇1, 𝜇2 < 0 i.e. 𝑤> 2𝑎Λ, then point (0,0) is a positive attractor i.e.
there exists a neighborhood of (0,0) such that all the paths starting at this
neighborhood at some 𝑧= 𝑧0 finish at (0,0) with 𝑧→+∞ (this property is
almost obvious due to the exponential character of solutions). Likewise for
𝜇1/𝜇2 > 0, the critical point (0,0) is a negative attractor. From the physical
viewpoint, condition 𝑤> 2𝑎Λ signifies that the disturbance for 𝑡> 0 moves
in the direction 𝑥> 0 (recall that 𝑢= 𝐹(𝑥−𝑤𝑡) ). If 0 < 𝑤< 2𝑎Λ , the
118 The ratio 𝜇1/𝜇2 not necessarily equals 2 or ½.
224
Classical Deterministic Systems
singular point (0,0) becomes an unstable focus, and the degenerate case 𝑤=
0 (i.e. a stationary disturbance; from the positions of dynamical systems, this
fixed point is a center) leads to “unphysical” solutions 𝑢< 0 (recall that we
need to have solutions in the band 0 ≤𝑢≤1). One may note that in general
equilibrium near the coordinate origin, when one can disregard nonlinearity,
is an unstable focus.
In the same way one can explore the other singular point (1,0). In the
vicinity of this point 𝑢≈1 so that the linearized equation, instead of
(4.16.1.5.), is
𝑑𝑦
𝑑𝑢≈−
1
𝑎Λ2𝑦(𝑤𝑦−𝑎(𝑢−1)) = 𝑢−1
Λ2𝑦−𝑤
𝑎Λ2.
Proceeding just as before, we see that point (1,0) is a saddle for all 𝑤≥0
as long as we consider growth factor 𝑎> 0 and constant and diffusion length
Λ real.
A more complicated mathematical setting of the Fisher-KPP model is the
Cauchy problem for a nonlinear parabolic equation (parabolic equations
express the most popular models arising from biological studies)
𝑢𝑡(𝑥, 𝑡) = (𝜂(𝑢))𝑥𝑥+ 𝜑(𝑢)
in domain 𝐷≔{𝑥∈ℝ, 0 ≤𝑡< +∞} with initial condition lim
𝑡→+0 𝑢(𝑥, 𝑡) =
𝑢0(𝑥) and, in some versions, asymptotic boundary values lim
𝑥→−∞𝑢(𝑥, 𝑡) =
0, lim
𝑥→+∞𝑢(𝑥, 𝑡) = 1, with a non-negative and continuously differentiable on
[0,1] source function 𝜑(𝑢) such as 𝜑(0) = 𝜑(1) = 0, 𝜑′(0) > 0. The initial
function 0 ≤𝑢0(𝑥) ≤1 can be piecewise continuous in ℝ. When 𝜂(𝑢) = 𝑢,
we have a linear diffusive process with a nonlinear source.
4.15.2. Applications of the logistic model
Both the logistic model and the logistic map have many applications in
science, society and engineering. The general idea leading to the logistic
model – simple growth limited by self-interaction – may be applied to many
real-life processes. For instance, epidemics and spread of rumors can be
modeled by the logistic equation. We can take a typical microeconomic
situation as another example: what will be the output of certain goods
produced by a company (e.g. car manufacturer) over several years? The
simplest model describing the annual growth of the output, with the imposed
constraints of limited resources and market saturation would be a logistic
model. What would be the total revenue (and respectively the profit) of a
company over several years, if the annual revenue growth is 𝑎? The revenue
for the (𝑖+ 1)-th year will be 𝑥𝑖+1 = 𝑎𝑥𝑖. However, for rather high revenues
the latter are restrained, e.g., by the market share, and we arrive again to the
logistic model. These examples suggest a natural generalization: any human
activity subordinated to the imposed constraints of external factors and/or
limited resources can be described by a logistic model. The nonlinear term
4.15
The logistic model: the bugs are coming
225
that limits growth is sometimes metaphorically interpreted as “influence of
the future”.
If we make an affine transformation 𝑥𝑖= 𝑝𝑧𝑖+ 𝑞, where 𝑝, 𝑞 are as yet
undetermined parameters, we get from the logistic map a quadratic
recurrence equation
𝑧𝑖+1 = −𝑎[𝑝𝑧𝑖
2 −(1 −2𝑞)𝑦𝑖−𝑞
𝑝(1 −𝑞)],
and putting 𝑝= −1/𝑎, 𝑞= 1/2, we obtain the quadratic map 𝑧𝑖+1 = 𝑧𝑖
2 +
𝑐, 𝑐≡𝑎/2 −𝑎2/4 which produces Julia sets. Quadratic Julia sets are probably
the best known examples of fractals that are generated by this quadratic map
for almost any value of 𝑐 (although 𝑐= 0 and 𝑐= −2 are exceptional: the
produced sets are not fractals). It is interesting that the above quadratic map
was commercially used in the graphical industry to obtain rather beautiful
ornaments which are fractals: this is an example of direct market value of
mathematics.
It is remarkable that the logistic map can serve as a rough model of the
transition to turbulence, when the regular (laminar) or almost periodic
character of fluid motion is destroyed after some critical value of the flow
parameter (the Reynolds number 𝑅𝑒) has been reached. Like the eventual
population in most logistic models, turbulence practically does not depend on
the initial state of the fluid. The Reynolds number is a control parameter in
the models of turbulent flow and plays the role of intrinsic growth parameter
𝑎. Multiplier 𝜇 passes in the transition to turbulence the value +1.
The logistic map manifests such common features of discrete-time
algorithms as stability and chaotic behavior. From this viewpoint, it is
interesting for numerical techniques and generally for computational science
and engineering. As far as engineering applications of the continuous-time
logistic model go, we have already mentioned that the equation describing the
energy evolution of a nonlinear oscillator in the self-excitation mode has the
form 𝐸̇ = 𝑎𝐸(1 −𝐸) (in dimensionless units). Here the growth parameter 𝑎
is close to 1.
One can consider an example of estimating the population growth with
the help of the logistic model. We may assume the total current (2010) human
population to be 6.9*109, the growth factor to be a = 0.029, the annual
population growth to be 0.011 year-1 (more or less standard demographic
data). Then we have
𝑑𝑁(2010)/𝑑𝑡
𝑁(2010)
= 𝑑
𝑑𝑡log(𝑁(2010)/𝑁(𝑡0 = 2010) = 0.011 = 𝑎−𝑘𝑁(2010)
= 0.029 −𝑘∙6.9 ∙109
which can be considered an equation to find the attenuation factor 𝑘≈2.32 ∙
10−12. Then the projection for the equilibrium population will be 𝑁0 = 𝑎/𝑘≈
11.1 ∙109 i.e., the world population tends to converge to approximately 11
226
Classical Deterministic Systems
billion people. This result is not very sensitive to the slight changes of the
constant a and the annual population growth rate.
We can comment here on the general concept of the control parameter
whose variation results in altering the evolution regime of a dynamical
system. Choosing the control parameter may present a special problem when
modeling a complex (multiparametric) process, in particular, when the
transition to chaos in an open system should be explored. For instance, in
medico-biological studies the concentration of the prescribed medicine can
play the role of control parameter. In surgery, the control parameter may be
identified with the pre-planned invasion path. In fluid motion problems, one
can define the control parameter as the pressure gradient between the flow
boundaries, e.g., pipe ends. In laser technology, the control parameter can
correspond to the pumping level i.e., energy input producing the population
inversion. Using the engineering language, one can define an analogous
control parameter as the feedback level in classical generators.
One often uses the logistic model in practical ecology to control the
population. For example, a slight modification of this model (harvesting) is
applied in fisheries. Harvesting means mathematically a transition from the
population freely evolving in accordance with internal processes (balance of
the birth, death, and migration) to introducing an external pressure 𝑞(𝑁, 𝑡)
i.e. the model equation will be 𝑁̇ (𝑡) = 𝜑(𝑁) −𝑞(𝑁, 𝑡), where term 𝑞(𝑁, 𝑡)
signifies the removal of 𝑞 individuals per unit time (say, each year). In
fisheries, for example, this external influence corresponds to fishing quotas
that can be constant, 𝑞(𝑁, 𝑡) ≡𝑞 or differential, 𝑞(𝑁, 𝑡) = 𝑞(𝑁). The latter
case may be interpreted as the simplest manifestation of a feedback, which is
a milder model than the one corresponding to 𝑞= const. Indeed, 𝑞= 𝑞(𝑁)
depends on the actual state of the system. It is usually important to select
quotas 𝑞 in such a way as to ensure sustainable fishing. Sometimes, as e.g.
during genocides, external influence 𝑞(𝑁, 𝑡) cannot even be exactly known
and should be treated as a perturbation, not necessarily small. One can in such
cases figuratively call function 𝑞 the murder rate.
More complicated population models arise when pair bonding is
considered. The effectiveness of survival mechanisms tends to decrease as the
population density falls since it becomes increasingly difficult for sexually
reproducing species to find appropriate mates. The respective mathematical
models are in general spatially dependent, at least at the level of any given
individual, since extended mobility and higher mate detection capability
(greater identification distance) can to a certain degree compensate low
population density. However, there are stringent physiological limitations for
excessive mobility since it requires higher metabolism rates that are only
possible under the conditions of ample resource availability (just as increased
consumption in the rich population groups of human society enhances
mobility and selectiveness). Considering pair stability issues further
complicates the model.
To focus on the role of harvesting we can use, for simplicity, the
dimensionless form of the logistic model that can be, in particular, achieved
by scaling time 𝑡 as 𝜏= 𝑎𝑁0𝑡 (see equation (4.16.3.) and below). Formally, we
4.15
The logistic model: the bugs are coming
227
can put the growth parameter 𝑎 and the “equilibrium population” 𝑁0 equal to
unity (it is trivial to notice that for 𝑎≠1, 𝑞→𝑞/𝑎). Then we have 𝑥̇ =
𝑥(1 −𝑥) −𝑞≡𝑓(𝑥, 𝑞), and when quote 𝑞 is constant, its critical value is 𝑞=
1/4. In fisheries, this critical point is usually called maximum sustainable
yield (MSY), and its evaluation is vital for sustainable fishing. For 0 < 𝑞<
1/4, there are two equilibrium points (𝑥̇ = 0) corresponding to two roots,
𝑥1, 𝑥2 of the resulting quadratic equation. The lower equilibrium point 𝑥1 is
unstable which means that if, for some reason, the population falls under 𝑥1,
then the entire population dies out in finite time. When 𝑞> 1/4 , both
equilibriums disappear, but what is more important, 𝑓(𝑥, 𝑞) < 0 for all values
of population 𝑥 i.e. the population necessarily becomes extinct. On the contrary,
for 𝑞< 1/4 the population never dies out, and at the bifurcation point 𝑞=
1/4, the vector field 𝑓(𝑥, 𝑞) = 1/2 i.e. equilibriums 𝑥1 and 𝑥2 merge, which,
for sufficiently big initial population, can ensure that it will asymptotically end
up near the 𝑥= 1/2 value. However, just a small drop of the population below
this value would result in its extinction in finite time. In this sense, bifurcation
point 𝑞= 1/4 is unstable. When the growth parameter 𝑎≠1, bifurcation
occurs at critical value 𝑞= 𝑎/4.
The population evolution accompanied by forceful elimination of some
part of the individuals represents simple feedback mechanisms. For example,
one can exercise the ecological control over the quantities of certain species
(e.g. mosquitoes), in such cases 𝑞= 𝑞(𝑥) > 0 and 𝑑𝑞/𝑑𝑥> 0 for all 𝑥. One
can assume 𝑞(𝑥) to be a polynomial with positive coefficients. Let us consider
the simplest case of a feedback scenario, 𝑞(𝑥) = 𝑏𝑥. In this case there are two
stationary values, 𝑥1 = 0 (unstable) and 𝑥2 = 1 −𝑏, 0 < 𝑏< 1 (stable). We
see that equilibrium point 𝑥2 corresponds to elimination, e.g. harvesting or
hunting, quota 𝑏𝑥2 = 𝑏(1 −𝑏) with sustainable maximum (optimum) 𝑏=
𝑥2 = 1/2. When 𝑏→1, 𝑥2 →0 i.e. both equilibrium points merge, and the
population becomes extinct in finite time (𝑥̇ = −𝑎𝑥2 →𝑥(𝑡) = 1/𝑎(𝑡−𝑡0)).
Physically, this signifies excessive harvesting, hunting or killing and
geometrically to the disappearing crossing point of parabola 𝑦= 𝑥(1 −𝑥)
and straight line 𝑦= 𝑏𝑥. For 𝑏> 1 equilibrium point 𝑥2 becomes negative. It
is easy to see that when the growth parameter 𝑎≠1, we must scale 𝑏→
𝑏/𝑎≡𝑏̃ so that a stationary state corresponding to an optimum is reached
with 𝑏= 𝑎/2.
The case of quadratic elimination quota is analyzed similarly to the case
of the linear one i.e. 𝑥̇ = 𝑎𝑥(1 −𝑥) −𝑏𝑥−𝑐𝑥2 so that the stationary points
are 𝑥1 = 0 and 𝑥2 = (1 −
𝑏
𝑎) / (1 −
𝑐
𝑎) , 0 < 𝑏< 𝑎, 0 < 𝑐< 𝑎. The optimum
point is 𝑏= 𝑎/2.
The same model describes bankruptcy of companies and, with slight
modifications, the downfall of political entities such as groups, parties, unions,
states, etc. In certain models, e.g., taxation models in some economies, where
𝑥 denotes a tax and 𝜑(𝑥) the taxation base, quotas 𝑞(𝑥) can be negative and
piecewise continuous. The meaning of the model is that one should not
harvest more than a certain threshold, otherwise the system will devour itself
in finite time, regardless what it is: fisheries, small businesses, emigration of
228
Classical Deterministic Systems
scientists and specialists. One can also interpret this model as the
manufactirer-consumer equilibrium, where the logistic part 𝑎𝑥(1 −𝑥)
corresponds to production and the harvesting part −𝑞(𝑥) = 𝑏𝑥+ 𝑐𝑥2 + ⋯ to
the consumption of goods.
The lesson learned from studying the logistic model and logistic map is
that apparently simple and completely deterministic dynamical systems can
exhibit very complex motion that can be perceived as chaotic and stochastic.
It may be instructive to make the computer implementation of the logistic
model, in particular, trying to produce some computer codes corresponding
to it. In Supplement 2, the Java code is given, where for simplicity the form
(4.16.3.) is used i.e., with 𝑁0 = 𝑎/𝑘. One can also readily apply Euler’s method
to the logistic equation (see section on scientific computing below).
4.16 Instabilities and chaos
The most outstanding fact in the theory of dynamical systems is that fully
deterministic systems depending on only a few variables can exhibit chaotic
behavior which is similar to that formerly encountered only in many-body
systems. Many nonlinear deterministic systems although looking quite simple
are observed to behave in an unpredictable, seemingly chaotic way. The term
“chaotic” is commonly attributed to evolutions exhibiting an extreme
sensitivity to initial data, but this is not the whole story. Chaos is also
understood as an aperiodic – close to random – behavior emerging from a
totally deterministic environment i.e., described by a dynamical system
involving no random parameters or noise. Such a behavior appears only in
nonlinear systems and manifests an extreme sensitivity to initial conditions
(in general, also to external parameters). Therefore, nonlinear systems can be
viewed as one of the most interesting subjects in mathematics since even in
the fully deterministic case they envisage a certain unpredictability.
Thus, classical chaotic systems, although they may have only a few
degrees of freedom and be mathematically represented through
deterministic equations, have to be described by probabilistic methods.
Nonetheless, a distinction from traditional probability theory is one of the
main issues of chaos theory. Chaos is also sometimes called “deterministic
noise”.
The notion “aperiodic” mathematically means that the flow paths do not
converge for 𝑡→+∞ to fixed points or to periodic (quasiperiodic) orbits such
as limit cycles. We have just mentioned that extremely sensitive dependence
of the system trajectories on initial conditions is the distinguishing mark of
deterministic chaos. But this is just a verbal expression that has to be
quantified. One of the possible ways to estimate this sensitivity is to find a
variational derivative 𝛿𝑥(𝑡, 𝑥0) 𝛿𝑥0
⁄
, where 𝑥(𝑡) = 𝑥(𝑡, 𝑥0) = 𝑔𝑡,𝑡0𝑥0, 𝑥0 ≡
𝑥(𝑡= 𝑡0) is the motion generated by flow (the dynamical system) 𝑔𝑡. In
simple cases, we can replace the variational derivative by the usual one to
have
𝑑𝑥(𝑡, 𝑥0)
𝑑𝑥0
= 𝐴exp ( 𝑡
𝜏0
),
4.16
Instabilities and chaos
229
where 𝜏0 ≈Λ0
−1 is the predictability horizon, Λ0 > 0 is the greatest positive
Lyapunov exponent which expresses instability in the form of “stretching”
(and, perhaps, also “folding” leading to chaos). Thus, the necessary (not
sufficient!) condition for deterministic chaos is that dynamics should be
unstable. In particular, instability means that small perturbations in the initial
conditions bring large uncertainties in dynamics after the predictability
horizon.
By remarking that extreme sensitivity to initial data is not the whole
story, we wanted to hint that such sensitive dependence can be found in very
simple, e.g., linear systems. Take, for example, the map 𝑥𝑛+1 = 2𝑥𝑛, 𝑥𝑛∈
ℝ, 𝑛∈ℤ, which has an unfortunate property to explode (see also below, “The
logistic model: the bugs are coming”). However, the explosive divergence of
nearby trajectories in this case alongside the blow-up of an initial data
discrepancy to infinity has nothing to do with deterministic chaos. The latter,
combining the sensitivity to initial data with unpredictable behavior, appears
only if the trajectories are bounded which is possible only in nonlinear
dynamical systems: in linear ones there can be either bounded trajectories or
sensitive dependence on initial conditions but not both, so that nonlinearities
are necessary to have both effects.
The word “chaos” is intuitively associated with a disorganized state,
completely without order. This is deceptive: chaos in dynamical systems is
not a total disorder but corresponds to irregular variations of the system’s
variables controlled by rather simple rules. Probably, no mathematical
definition of chaos has been universally accepted so far, but the following
descriptive explanation what chaos is can be used to work with this
phenomenon. Chaos is a durable irregular (i.e., aperiodic) behavior emerging
in deterministic systems which become extremely sensitive to slight
variations of parameters (in particular, initial conditions). Notice that this
verbal description of chaos contains three components:
1. Durable irregular (aperiodic) behavior implies that the phase paths
do not asymptotically (𝑡→∞) stabilize either to a point or to
periodic orbits.
2. The term “deterministic” implies that the system is not described by
stochastic differential (or other) equations i.e., there are no random
parameters or noise present.
3. Sensitivity to slight variations of parameters (in particular, initial
conditions) implies that the integral trajectories, at first very close
to one another, diverge exponentially fast with time, the divergence
rate being governed by the Lyapunov exponents (with at least one
of them positive).
Thus, if one abstracts oneself from certain fine points such as irregular
trajectories, uncorrelated behavior in close time points and so on, chaos can
be basically viewed as an absence of Lyapunov stability. Although “chaos” has
become the code word for nonlinear science, there is nothing particularly
230
Classical Deterministic Systems
exotic or intricate about chaos. The fact that chaotic phenomena practically
had not been studied until 1970s, when chaos suddenly came into fashion
together with its interdisciplinary applications, can only be attributed to an
absolute dominance, both in science and technology, of successful linear
theories such as classical electrodynamics, quantum mechanics, linear
oscillations, plasma instabilities, etc. Ubiquitous linear input-output models
in electrical engineering that have resulted in many practically important
technologies also left little room for studying somewhat exotic nonlinear
models. The main feature of chaos in finite-dimensional dynamical systems
(known as deterministic chaos to distinguish it from molecular chaos in
many-particle systems) is the exponential divergence of trajectories. The
quantitative measure of this divergence is the so-called K-entropy (the
Kolmogorov-Krylov-Sinai entropy [281]). The value of K-entropy is positive
for chaotic states, which corresponds to mixing and exponential decay of
correlations.
Many people are still inclined to think that one should not treat chaos in
deterministic systems as a special subject, and the word itself is just a poetic
metaphor for the long familiar instability. This is wrong: chaos is not
completely synonymous with instability; it incorporates also other concepts
(such as irregularity of behavior and decorrelation in time series). In
distinction to instabilities, chaos points at unpredictability of behavior in the
systems described by completely deterministic and even primitive looking
equations. Instability in dynamical systems can be viewed as a symptom of
the possible transition to chaotic motion. One can observe on some examples,
in particular on deterministic models of growth that may exhibit instability
(i.e., have diverging integral trajectories), but cannot be viewed as chaotic; the
simplest example of such a system is the model of exponential growth 𝑥̇ = 𝑎𝑥.
Conversely, one should not think that the chaotic state is always visibly
unstable: for example, turbulence is a stable chaotic regime. Thus, the often
repeated view that chaos is just an extreme case of instability i.e. in chaotic
regimes paths in the phase space that start arbitrarily close to each other
diverge exponentially in time whereas in regular (nonchaotic) regimes two
nearby trajectories diverge not faster than polynomial, typically linear in
time, is not quite accurate. If we compute the distance 𝑑(𝑡) between two
phase paths whose initial separation is 𝑑0 ≡𝑑(0) , then exponential
divergence of trajectories, 𝑑(𝑡) = 𝑑0𝑒Λ𝑡, where Λ is the Lyapunov
characteristic exponent, is a necessary but not sufficient condition for chaos.
Notice that the concept of dynamical systems as evolving quasi-closed
parts of the world does not exclude their chaotic behavior, when a system’s
evolution (i.e., dependence of its observable quantities 𝑥𝑖 in time) looks like a
random process; at least it is totally unpredictable beyond a certain “horizon
of predictability” (recall weather forecasts). One should not, however, think
that unpredictability in chaotic systems is identical to the true randomness,
but the difference is rather academic since chaos looks quite like a random
process and can also be described by a probability measure (distribution
functions). One can crudely say that there are at least two types of
randomness: one is due to an unobservable quantity of interacting
4.16
Instabilities and chaos
231
subsystems (as, e.g., in gas), the other - usually known as deterministic chaos
– is due to our limited ability to formulate the rules of behavior under the
conditions of drastic instability. Such poorly known rules must govern the
processes that basically arise from irreversibility in dynamical systems.
The main feature of chaos in dynamical systems is an extreme sensitivity
to small perturbations, e.g., to tiny inaccuracies in input data such as initial
conditions. It is this property that makes it impossible to forecast the state of
a dynamical system for the time exceeding some characteristic predictability
horizon amenable to the state-of-the-art numerical computation. The time
scale for exponential divergence of nearby trajectories, Λ−1, may serve as an
estimate for this predictability horizon. It is important that this time scale
usually does not depend on the exact value of initial conditions. Anyway, large
Lyapunov exponents are symptomatic of the onset of chaos. Recall that the
Lyapunov characteristic exponent manifests the expansion rate of linearized
dynamical system along its trajectory.
Nevertheless, one can distinguish between dynamical chaos in
deterministic systems and physical chaos in many-particle models. For
example, evolution to thermodynamic (statistical) equilibrium is directed to
the most chaotic and disordered state characterized by maximal entropy. The
difference between “dynamical” and “physical” chaos has been reflected in
long-standing debates about the origin of stochasticity in physical systems.
There were historically two distinct trends of thought: (1) stochasticity
arising due to dynamic instability of motion in nonlinear deterministic
systems and (2) necessity of statistical description due to the enormous
number of degrees of freedom i.e., huge dimensionality of the phase space in
realistic
(many-particle)
physical
systems,
resulting
in
practical
irreproducibility of solutions (integral trajectories). These two trends were
poorly compatible because they required different approaches to physical
statistics. In the section devoted to statistical physics and thermodynamics,
we shall comment on the possibility to reconcile the two manners of
description.
One can produce many examples illustrating the importance of chaotic
behavior in real life. Thus, when an asteroid or a comet approaches a planet
(e.g., Earth), the planet’s gravity perturbs the comet’s trajectory so that small
changes of the latter can be amplified into large and poorly predictable
deflections. Since a comet or an asteroid trajectory is affected by numerous
close encounters with planets, small variations in the initial parameters of the
trajectory may result in practically complete unpredictability of the body’s
eventual motion (for large enough time – outside the predictability horizon).
One can call this few-body process a trivial chaotization of the comet or
asteroid path. Because of such sensitivity to small variations of parameters,
trajectories of small celestial bodies can diverge and cross the planetary
orbits, possibly with devastating consequences. The information about the
fate of a comet or an asteroid is practically lost after several Lyapunov time
scales, and the resulting unpredictability may be the source of meteorite
hazard for the Earth.
232
Classical Deterministic Systems
One can show that the logistic map for 𝑎= 4 which is in this case chaotic
for almost all initial conditions may be related to the Lorenz attractor
appearing in the three-dimensional meteorological model constructed in
1963 by E. Lorenz. This mathematical model is represented by a dynamical
system with 3d phase space and is fully deterministic since it is represented
by three ODEs 𝐱̇ = 𝐟(𝐱, 𝑎), 𝐱= (𝑥1, 𝑥2, 𝑥3) (with quasilinear vector field 𝐟).
Nonetheless, the model demonstrates chaotic behavior i.e., abrupt and
apparently random changes of state for some set of control parameters 𝑎.
From a more general viewpoint, the logistic model is a very particular
case of the evolution of an autonomous system described by vector equation
𝐱̇ = 𝐟(𝐱, 𝑎), where vector field 𝐟 depends on parameter 𝑎 (it can also be a
vector). As we have seen, in certain cases variations of the control parameter
𝑎 can radically change the system’s motion, for instance, result in chaotic
behavior. Note that the logistic model is not necessarily identical with the
population model. When the logistic equation is not interpreted as describing
the population growth, one can explore the behavior of solutions as
parameter 𝑘 is varied.
The logistic map manifests such common features of discrete-time
algorithms as stability and chaotic behavior. From this viewpoint, it is
interesting for numerical techniques and generally for computational science
and engineering. As far as engineering applications of continuous-time
logistic model go, we have already mentioned that the equation describing the
energy evolution of a nonlinear oscillator in the self-excitation mode has the
form 𝐸̇ = 𝑎𝐸(1 −𝐸) (in dimensionless units). Here the growth parameter 𝑎
is close to 1.
One can bring an example of estimating the population growth with the
help of the logistic model. We may assume the total current (2010) human
population to be 6.9*109, the growth factor to be a = 0.029, the annual
population growth to be 0.011 year-1 (more or less standard demographic
data). Then we have
𝑑𝑁(2010)/𝑑𝑡
𝑁(2010)
= 𝑑
𝑑𝑡log(𝑁(2010)/𝑁(𝑡0 = 2010) = 0.011 = 𝑎−𝑘𝑁(2010)
= 0.029 −𝑘∙6.9 ∙109
which can be considered an equation to find the attenuation factor 𝑘≈2.32 ∙
10−12. Then the projection for the equilibrium population will be 𝑁0 = 𝑎/𝑘≈
11.1 ∙109 i.e., the world population tends to converge to approximately 11
billion people. This result is not very sensitive to the slight changes of the
constant a and the annual population growth rate.
4.16.1
Chaos in dissipative systems
Chaotic behavior is quantified by the presence and the value of Lyapunov
exponents that must be positive in order for the chaotic regime – and the
complexity associated with it - to emerge. Recall that the notion “chaotic” is
related to the bounded motions displaying an extreme sensitivity to initial
data. If there is no dissipation i.e., the dynamical system is conservative, the
4.16
Instabilities and chaos
233
sum of all Lyapunov exponents Λ𝑖 must be zero – to ensure that a volume
element of the phase space remains intact along the phase trajectory (see
8.3.1. “Phase space and phase volume”). If the system is dissipative, the sum
of all Lyapunov exponents should be negative, and if ∑Λ𝑖< 0
𝑖
, at least one
Lyapunov exponent must exist in a dissipative system119.
We observed from expression (1) that a free pendulum eventually stops:
it is damped down to a stable state corresponding to the minimum of potential
energy in the gravitation field. Such a stable state is referred to today as an
attractor. Although there seems to be no universally accepted definition of
attractor, we may intuitively understand it as a limit set for which any nearby
orbit regardless of its initial conditions ends up (has its limit points) in the
set. The simple example of a pendulum has, however, a deep meaning. Firstly,
it demonstrates that a dynamical system can radically change its behavior
when energy is withdrawn from the system. Secondly, for large times (𝑡→
∞), trajectories of a dissipative system tend to a low-dimensional subset of
the phase space which is an attractor. Thirdly, the evolution of a dissipative
system, after a certain time, is restricted to this attractor forever.
There may be more than one attractor in a dynamical system, and the
latter can wander between its attractors as the value of some control
parameter varies (we shall observe such transitions shortly on a simple
example of the logistic map). Many physical systems exhibit such a behavior.
For example, addressing again the meteorite hazard, we may note that small
celestial bodies such as asteroids or comets (sometimes called
“planetesimals”) move around the Sun on almost Keplerian orbits
experiencing a slight drag due to the presence of interplanetary gas and solar
radiation. This small resistance forces the small body trajectories to be
damped down to the Sun, and the ensuing radial component of the motion
results in crossing the planetary orbits. Then small bodies can be trapped by
the gravity field of a major planet such as Earth or Jupiter, usually in some
resonance motion – a far analogy to Bohr’s model of the atom. The drag force
plays the role of control parameter: with the increased drag, equilibrium
paths of asteroids or comets can bifurcate (as in the logistic map, see below).
For large enough drag, the system may become chaotic so that an asteroid or
a comet no longer stays on a fixed trajectory relative to the planet, but
wanders between seemingly random distances, down to a number of close
approaches to the planet within a short time interval on an astronomical
scale. So, the dissipative chaotic motion may bring celestial bodies
dangerously close to the Earth’s surface. However, exact mathematical
modeling of the dissipative motion of small celestial bodies is rather involved
and requires numerical integration on high-performance computers rather
than analytical calculations: the cascade of chaotic returns to close encounters
produces a complex pattern that can hardly be explored analytically. The
uncertainty cloud of the asteroid or comet initial conditions, taken from
119 Contrariwise, a positive Lyapunov exponent reflects stretching along the corresponding
axis.
234
Classical Deterministic Systems
observational data, must be propagated throughout decades to detect
possible intersections with the Earth’s orbit (see, e.g., [302]).
In conclusion to this section, one can make the following nearly obvious
remark about instabilities and chaos. We have seen that the theory of
continuous-time differentiable dynamical systems is largely based on the
linearization of the respective differential equations near equilibrium points.
Therefore, this theory may fail while attempting to explain the global
behavior of highly nonlinear systems, in particular, the complexity of chaotic
motions. Notice that, in the phase portrait, chaotic motion is typically imaged
as “clouds” of the phase points.
5.1
The Maxwell Equations
235
5 Classical Fields and Waves
One may easily notice that in the Newtonian picture of classical mechanics
particles interact through instantaneous forces, 𝐅= 𝑚𝐫̈. Forces 𝐅 are always
produced by other bodies or particles. But writing down the differential
equation with forces is clearly not the whole story. So already in the first half
of the 19th century, mostly due to experiments of M. Faraday, it became
obvious that there was something else in the physical world besides bodies
and particles. It is presumed that Faraday was the first to call this something
a “field”, although I failed to find this term in sources available to me on
Faraday’s works, see however http://en.wikipedia.org/wiki/Field (physics).
Then J. C. Maxwell and O. Heaviside constructed the classical theory of the
electromagnetic field.
The classical theory of fields is a remarkable subject because it unifies
extremely abstract and mathematically advanced areas of modern theoretical
physics and down-to-earth engineering. This intermediary, bridging position
of classical electrodynamics is even more pronounced than in the case of
classical mechanics. Although there exist a number of very good books on
electromagnetic theory [5,6], I still prefer the textbook by Landau and Lifshitz
[39] and will often cite it. Almost all of the material contained in this chapter
can be found in [39] except for some comments of mine, still I write out the
main expressions - for the reader’s convenience and because some of them
may be needed further for more complex subjects. One may think that the
classical field theory (CFT) in empty space is not a difficult subject - indeed, a
great many of its results and occasional derivations appear boring and trivial,
but this impression is deceptive. There exist a lot of refined implications of
classical field theory and classical electrodynamics is full of understatements.
But the most important thing is that CFT provides a solid foundation, a perfect
model for all other field theories, and while touching upon them we shall
recall CFT with gratitude.
5.1
The Maxwell Equations
The classical theory of fields 120 is based on Maxwell’s equations which we
have already discussed in other contexts. Since these equations are very
fundamental, I write them once more. The inhomogeneous pair:
∇𝐄= 4𝜋𝜌,
∇× 𝐇−1
𝑐
𝜕𝐄
𝜕𝑡= 4𝜋
𝑐𝐣;
and the homogeneous pair:
∇𝐇= 0,
∇× 𝐄+ 1
𝑐𝐇= 0
120 Here, we are limiting our discussion to an electromagnetic field.
236
Classical Fields and Waves
When discussing dualities in Chapter 3, we have mentioned the magnetic
monopole. Modern physics does not exclude the possibility that the magnetic
monopole might exist, and eventually it may be discovered. If the magnetic
monopole really exists, then the right-hand side of the first equation from the
homogeneous pair namely that expressing the solenoidal character of the
magnetic field, ∇𝐇= 0, should be not zero, but proportional to the density of
magnetic monopoles. At present, however, this possibility is highly
hypothetical, and we shall not take monopoles into account.
The electric and magnetic components of the electromagnetic field can be
more conveniently written using the potentials121. By introducing the vector
and scalar potentials, 𝐴 and 𝜑, with
𝐄= −∇𝜑−1
𝑐
𝜕𝐀
𝜕𝑡,
𝐇= ∇× 𝐀
(5.1)
we get the wave-type equations for the potentials
∆𝐀−∇(∇𝐀+ 1
𝑐
𝜕𝜑
𝜕𝑡) −1
𝑐2
𝜕2𝐀
𝜕𝑡2 = −4𝜋
𝑐𝐣
∇𝜑+ 1
𝑐∇∂𝐀
∂t = −4𝜋𝜌
(5.2)
Below,
we
shall
often
use
the
symbol ☐≔∆−
1
𝑐2
𝜕2
𝜕𝑡2 (the
D’Alembertian).
These are four coupled linear partial differential equations. Nevertheless,
this system of equations with respect to four scalar functions is obviously
simpler than the initial system of Maxwell equations for six field components.
In fact, the matter is even more complicated, since one has to account for the
motion of charged particles in the electromagnetic field. This is a self-
consistency problem because these particles, on the one hand, are
experiencing forces from the field and, on the other hand, themselves
contribute to the field. The charge and current densities on the right-hand side
are defined as
𝜌(𝐫, 𝑡) = ∑𝑒𝑎𝛿(𝐫−𝐫𝑎(𝑡))
𝑎
and
121 It is worth noting that the four expressions that we know as the Maxwell equations are
probably the creation of O. Heaviside who reduced the original system of J. C. Maxwell consisting
of twenty equations to four vector equations, see, e.g., http://en.wikipedia.org/wiki/Maxwell’s
equations. I would also highly recommend the beautiful book about O. Heaviside by a prominent
Russian physicist and a very gifted writer, Prof. B. M. Bolotovskii, [282]. Unfortunately, this book
is in Russian and I don’t know whether its translation into other European languages exists.
5.1
The Maxwell Equations
237
𝑗(𝐫, 𝑡) = ∑𝑒𝑎𝐫̇𝑎(𝑡)𝛿(𝐫−𝐫𝑎(𝑡))
𝑎
,
where 𝐫𝑎(𝑡), 𝐫̇𝑎(𝑡) are unknown quantities (here summation goes over all
charged particles in the considered system). Thus, in order to close the self-
consistent system of equations, we must provide equations for these
quantities. In the classical case, for example, we may supplement the Maxwell
equations with the Newton equations containing the Lorentz force:
𝑚𝐫̈𝑎(𝑡) = 𝑒𝑎[𝐄(𝐫𝑎(𝑡)) + 1
𝑐(𝐫̇𝑎(𝑡) × 𝐇(𝐫𝑎(𝑡)))]
(we neglect in these equations of motion close interparticle interactions,
possibly of non-electromagnetic character). The solution of such self-
consistent electromagnetic problems is usually a difficult task, it is usually
treated in plasma physics and while considering the interaction of beams of
charged particles with matter (see Chapter 8). Moreover, self-consistency
leads in its limit to self-interaction of charges which has always been a source
of grave difficulties in early attempts of electromagnetic field quantization.
However, in classical field theory particles, in particular, electric charges, due
to relativistic requirements, must be considered point-like (see [39], §15),
which results in an infinite self-energy of the particle. Therefore, classical field
theory becomes self-contradictory (at least at small distances) and should be
replaced by a more advanced theory [39], §37, see also below.
The charge and current densities automatically satisfy the continuity
equation
𝜕𝜌
𝜕𝑡+ 𝑑𝑖𝑣𝐣= 0,
(5.3)
which can be actually obtained from the Maxwell equations by taking the
divergence of the ∇𝐇 and using the Coulomb law equation ∇𝐄= 4𝜋𝜌. The fact
that the continuity equation, which expresses the electric charge
conservation, is not independent of Maxwell’s equations means that the latter
are constructed in such a way as to be in automatic agreement with the charge
conservation law. More than that, one can even say that charge conservation
is a more fundamental concept than the Maxwell equations, since it results in
their generalizations (see below).
Before we produce solutions to the equations for electromagnetic
potentials, let us discuss some properties of the Maxwell equations. When
speaking about differential equations, the first thing to pay attention to is
their invariance (symmetry) properties. In many cases the requirement of an
invariance of physically relevant equations, e.g., the motion equations with
respect to some group of transformations allows one to single out the
mathematical model from a wide class of available equations. For instance,
one can prove [145] that among all systems of the first-order partial
differential equations for two vector-functions 𝐄(𝐫, 𝑡), 𝐇(𝐫, 𝑡) there exists a
238
Classical Fields and Waves
unique system of equations invariant under the Poincaré group. This system
is the Maxwell equations.
When discussing dualities in physics, we have already noticed that the
Maxwell equations are invariant with respect to a dual change of functions
performed by Heaviside [37]
𝐄→𝐇,
𝐇→−𝐄.
(5.4)
Later, this symmetry was generalized by Larmor to a single parameter
family of plane rotations, R(θ):
(𝐄′
𝐇′) = 𝑅(𝜃) (𝐄
𝐇),
where
𝑅(𝜃) = ( cos 𝜃
sin 𝜃
−sin 𝜃
cos 𝜃)
or
𝐄′ = 𝐄cos 𝜃+ 𝐇sin 𝜃
𝐇′ = −𝐄sin 𝜃+ 𝐇cos 𝜃.
(5.5)
The Lie-group analysis performed
by a
well-known Russian
mathematician N. Ibrahimov shows that the ultimate local symmetry group
for the system of Maxwell’s equations in empty space is a 16-parameter group
𝐶(1,3) ⊗𝑆𝑂(2, 𝑅) where 𝑆𝑂(2, 𝑅) is represented by 𝑅(𝜃) above.
5.2
Gauge Invariance in Classical
Electrodynamics
Probably, the most important invariance property of the Maxwell equations
is connected with the transformation of the potentials ([39], §18)
𝐀→𝐀′ = 𝐀+ ∇𝜒,
𝜑→𝜑′ = 𝜑−1
𝑐
𝜕𝜒
𝜕𝑡,
(5.6)
which is called the gauge transformation. It is clear that the fields 𝐄, 𝐇 do not
change under this transformation. In other words, different potentials (𝜑, 𝐀)
and (𝜑′, 𝐀′) correspond to the same physical situation. This fact is known as
gauge invariance. The term gauge refers to some specific choice of potentials,
or what is synonymous, to a fixed attribution of the function 𝜒 which is
usually called the gauge function. When one considers quantum-mechanical
particles described by the wave function 𝜓, coupled to the electromagnetic
field (Chapters 5, 8), one must extend the transformations of the
electromagnetic potentials with the aid of gauge function 𝜒 by including the
change of the phase of wave function
5.2
Gauge Invariance in Classical Electrodynamics
239
𝜓→𝜓′ = 𝜓exp (𝑖𝑒𝜒
ℏ𝑐)
(5.7)
In recent years, it has been established that the gauge invariance is not
only a fundamental property of classical field theory, but it defines other
fundamental interactions in nature, perhaps even all possible interactions. In
particular, the principle of gauge invariance plays a decisive role in the so-
called “Standard Model” that has been designed to describe electroweak and
strong interactions between elementary particles. In the Standard Model of
particle physics, all particles (other than the Higgs boson, see Chapter 9)
transform either as vectors or as spinors. The vector particles are also called
“gauge bosons”, and they serve to carry the forces in the Standard Model. The
spinor particles are also called fermions, and they correspond to the two basic
constituent forms of matter: quarks and leptons. In this section, however, we
shall only deal with electromagnetic forces – the low-energy classical limit of
electroweak interactions. In the classical description of electromagnetic
fields, gauge invariance brings about supplementary and seemingly
unnecessary degrees of freedom which may be called the gauge degrees of
freedom. In classical electromagnetic theory, we may interpret the
electromagnetic potentials, (𝜑, 𝐀) , as a tool introduced to simplify the
Maxwell equations which, however, does not have any direct experimental
relevance. Immediately observable quantities of classical electromagnetic
theory are the electric and magnetic fields, 𝐄, 𝐇, corresponding to gauge
equivalence classes of the potentials (𝜑′, 𝐀′)~(𝜑, 𝐀) giving the same electric
and magnetic fields. If one takes in the canonical formalism the
electromagnetic potentials as phase space variables in order to obtain the
Euler-Lagrange equations from the least action principle (see below
“Equations of Motion for the Electromagnetic Field”), then one sees that the
dimensionality of the classical phase space is reduced122.
As is well known (see [39], §46), in those situations when relativistic
(Lorentz) invariance should be explicitly ensured one can use the Lorentz
gauge, i.e., the condition
∇𝐀+ 1
𝑐
𝜕𝜒
𝜕𝑡= 0
(5.8)
imposed on the potentials. In this case, the above equations for the potentials
take the form of usual (linear) wave equations
𝐿𝐀= −4𝜋
𝑐𝐣,
𝐿𝜑= −4𝜋𝜌,
𝐿≔∆−1
𝑐2
𝜕2
𝜕𝑡2 ,
(5.9)
122 This phase space reduction may be accompanied by a change in geometry, the latter
becoming more complicated, curved, and containing, e.g., holes. No canonically conjugated
coordinates such as electromagnetic potentials and conjugated to them “momenta” may be
globally possible, which makes a direct canonical quantization of the electromagnetic field quite
difficult.
240
Classical Fields and Waves
𝐿 being the wave operator which coincides here with the D’Alembertian (this
is not always the case since there may be many wave operators).
It is important to note that in many physical situations the manifest
Lorentz invariance is irrelevant since one typically uses a privileged reference
frame, the one being fixed with respect to the observer or the media. Thus, in
considering the interaction of an electromagnetic field with atoms (Chapter
8) other gauges are more convenient. The most popular gauge in physical
situations with a fixed frame of reference is the so-called Coulomb gauge
defined by the transversality condition ∇𝐀= 𝟎. The terms “Coulomb gauge”
and “transversality” can be easily explained. Suppose we managed to impose
the condition ∇𝐀= 𝟎 - a little further I shall prove that it is always possible.
Then we obtain the following equations for the potentials
𝑙∆𝐀−1
𝑐2
𝜕2𝐀
𝜕𝑡2 −1
𝑐
𝜕
𝜕𝑡∇𝜑−4𝜋
𝑐𝐣
(5.10)
∆𝜑= −4𝜋𝜌.
(5.11)
The last equation is exactly the same as the one from which the
electrostatic potential can be determined (see Chapter 5). For the model of
point particles, 𝜌(𝐫, 𝑡) = ∑𝑒𝑎𝛿(𝐫−𝐫𝑎(𝑡))
𝑎
, the scalar potential becomes
𝜑(𝐫, 𝑡) = ∑
𝑒𝑎
|𝐫−𝐫𝑎(𝑡)|
𝑎
,
which again coincides with the form of elementary electrostatic equation. The
solution for potential 𝜑 may be written through the Coulomb Green’s function
𝐺0(𝐫, 𝐫′) = −1
4𝜋
1
|𝐫−𝐫′|
This purely electrostatic form of the scalar potential which is represented
as the superposition of Coulomb potentials created by individual charges,
explains the name of the Coulomb gauge. Let us now try to explain why this
gauge is also called transversal. To do this we can, for example, expand the
potentials into plane waves, as in [39], §§51,52:
𝐀(𝐫, 𝑡) =
∑𝐀𝐤,𝑗(𝑡) exp(𝑖𝐤𝑗𝐫)
𝐤,𝑗=1,2
,
where index 𝑗 denotes two possible polarizations of the vector plane waves
normal to the wave vector 𝐤. We know from elementary functional analysis
that plane waves form a full system of functions so that they may be taken as
a basis and a series over them is legitimate. Below, when discussing the field
quantization, we shall consider some details of expansions over the system of
plane waves. Now, all we need is just a general form of such an expansion.
5.2
Gauge Invariance in Classical Electrodynamics
241
Inserting the expression for 𝐀(𝐫, 𝑡) into ∇𝐀= 𝟎, we see that
∇𝐀(𝐫, 𝑡) = 𝑖∑(𝐤𝐀𝐤,𝑗(𝑡)) exp(𝑖𝐤𝑗𝐫) = 0
𝐤,𝑗=1,2
may hold for arbitrary 𝐀 only if 𝐤𝐀𝐤,𝑗= 0, which means that all the modes
𝐀𝐤,𝑗 should be transversal to the wave vector. If we similarly expand the
scalar potential
𝜑(𝐫, 𝑡) =
∑𝜑𝐤,𝑗(𝑡) exp(𝑖𝐤𝑗𝐫)
𝐤,𝑗=1,2
,
then we shall be able to represent the electric field as the sum of the
longitudinal and the transversal components corresponding to the terms
𝐄∥= −∇𝜑= −𝑖∑𝐤𝜑𝐤,𝑗(𝑡) exp(𝑖𝐤𝑗𝐫)
𝐤,𝑗=1,2
,
and
𝐄⊥= −1
𝑐
𝜕𝐀
𝜕𝑡= −1
𝑐𝜕𝑡𝐀=
∑𝐀̇ 𝐤,𝑗(𝑡) exp(𝑖𝐤𝑗𝐫)
𝐤,𝑗=1,2
The magnetic field comprised of components (𝐤𝑗× 𝐀𝐤,𝑗) is obviously
transversal (∇𝐇= 0). Thus, the Coulomb gauge allows one to separate the
entire electromagnetic field into the transversal component corresponding to
the vector potential 𝐀 and the longitudinal component described by the scalar
potential 𝜑. This separation is usually quite convenient, at least for
nonrelativistic problems - for instance, when treating the interaction of
radiation with matter (Chapter 8). In this class of problems, one may describe
forces between the particles of the medium by the scalar potential 𝜑 whereas
the radiation field is described by the vector potential 𝐀 alone.
This is all more or less banal, nevertheless, there are some nontrivial
questions around such a decomposition of the electromagnetic field into
transversal and longitudinal components. To begin with, we see that the
electrostatic form of the scalar potential in the Coulomb gauge implies an
instantaneous nature of the longitudinal field. How then can it be that the
electromagnetic signal is causal and propagates with the speed 𝑐, as
prescribed by relativity theory? This is obviously an apparent paradox, but its
discussion leads to interesting representations of the gauge function.
242
Classical Fields and Waves
5.3
Four-Dimensional Formulation of
Electrodynamics
Many elementary results formulated above in the conventional language of
vector analysis can be cast into a more elegant form using the invariance
properties of the Maxwell equations, so this section in fact does not contain
unfamiliar expressions. In principle, the Maxwell equations admit a number
of various formulations (see below), but the most popular one - the four-
dimensional formulation - is based on the use of four-component function
𝐴𝜇= (𝜑, 𝐀) often called the four-dimensional potential [39], §16, or simply
the 4-potential. This function is connected with the fields 𝐄, 𝐇 by the familiar
relations which we write here, for future usage, through the momentum
operator 𝐩= −𝑖∇, 𝑝𝜇= −𝑖𝜕𝜇:
𝐄= 𝜕𝐀
𝜕𝑥0
−𝑖𝐩𝐴0,
𝐇= 𝑖𝐩× 𝐀
Then the homogeneous Maxwell equations (see the next section) are
satisfied identically, if one introduces the so-called electromagnetic field
tensor as a four-dimensional rotation:
𝐹𝜇𝜈= −𝐹𝜈𝜇= 𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇.
Indeed,
𝜕𝜆𝐹𝜇𝜈+ 𝜕𝜇𝐹𝜈𝜆+ 𝜕𝜈𝐹𝜆𝜇= 0
or, in terms of the dual tensor [39], §26, 𝐹∗𝜇𝜈=
1
2 𝜖𝜇𝜈𝛼𝛽𝐹𝛼𝛽, 𝜕𝜇𝐹∗𝜇𝜈. The
inhomogeneous Maxwell equations are
𝜕𝜇𝐹𝜇𝜈= −4𝜋
𝑐𝑗𝜈,
which follows from the variational principle with the Lagrangian density, see
[39], §28,
ℒ= −1
𝑐𝐴𝜇𝑗𝜇−
1
16𝜋𝐹𝜇𝜈𝐹𝜇𝜈.
We have seen that for a given electromagnetic field Aµ is not unique, since
the gauge transformation 𝐴𝜇→𝐴𝜇
′ = 𝐴𝜇+ 𝜕𝜇𝜒(𝑥) leaves 𝐹𝜇𝜈 unchanged:
𝐹𝜇𝜈→𝐹𝜇𝜈
′ = 𝜕𝜇𝐴𝜈
′ −𝜕𝜈𝐴𝜇
′ = 𝜕𝜇(𝐴𝜈+ 𝜕𝜈𝜒) −𝜕𝜈(𝐴𝜇+ 𝜕𝜇𝜒)
= 𝐹𝜇𝜈+ (𝜕𝜇𝜕𝜈−𝜕𝜈𝜕𝜇)𝜒= 𝐹𝜇𝜈.
5.3
Four-Dimensional Formulation of Electrodynamics
243
If we raise index 𝜇 in 𝐴𝜇, we can write
𝜕𝜇𝐴′𝜇= 𝜕𝜇(𝐴𝜇+ 𝜕𝜇𝜒) = 𝜕𝜇𝐴𝜇+ 𝜕𝜇𝜕𝜇𝜒
or, if we choose the gauge function 𝜒 to satisfy the equation
𝜕𝜇𝜕𝜇𝜒= −𝜕𝜇𝐴𝜇
(we have seen above that we can always do it due to gauge invariance) or, in
three-dimensional representation,
☐𝜒= −𝜕𝐴0
𝜕𝑥0
−∇𝐴= 1
𝑐
𝜕𝜑
𝜕𝑡−∇𝐴,
then we get the familiar Lorentz gauge 𝜕𝜇𝐴𝜇= 0 . This supplementary
condition reduces the number of independent components of 𝐴𝜇 to three, yet
it does not ensure that 𝐴𝜇 is uniquely defined. From the above formulas for
transition from 𝐴𝜇
′ to 𝐴𝜇 it becomes clear that if 𝐴𝜇 satisfies the Lorentz
gauge, so will 𝐴𝜇
′ provided ☐𝜒≡𝜕𝜇𝜕𝜇= 0. Thus, the ambiguity due to gauge
invariance persists, and one needs a more restrictive constraint to eliminate
it. We have seen that we can, for example, impose the condition
𝜕0𝜒= 𝜕𝑥0 = 1
𝑐
𝜕𝜒
𝜕𝑡= −𝜑,
then 𝐴0
′ = 𝐴0 + 𝜕0𝜒= 𝐴0 −𝜑= 𝜑−𝜑= 0, i.e., we may put 𝜑= 0 and ∇𝐀=
0 - the Coulomb or radiation gauge discussed above. This gauge reduces the
number of independent components of 𝐴𝜇 to only two, that is one more
reason why working in the Coulomb gauge is usually always more convenient
than in the Lorentz gauge unless it is indispensable to retain the explicit
Coulomb invariance.
Let me now make a remark of a purely technical character. Some authors
introduce the vector potential as a contravariant quantity, 𝐴𝜇= (𝜑, 𝐀) which
gives the corresponding skew-symmetric electromagnetic field tensor, 𝐹𝜇𝜈=
−𝐹𝜈𝜇= 𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇. It is of course just a technicality and a matter of taste,
since one can raise or lower indices with the help of the metric tensor which
in
the
Minkowski
(flat)
background
is
simply
𝑔𝜇𝜈= 𝛾𝜇𝜈=
𝑑𝑖𝑎𝑔(1, −1, −1, −1). So both definitions differ by the sign of 𝑨 in 𝐴𝜇= (𝜑, 𝐀).
However, I think that such a definition is less convenient than 𝐴𝜇, specifically
when introducing the covariant derivative 𝜕𝜇→𝜕𝜇+ (𝑖𝑒/𝑐)𝐴𝜇. Moreover, the
1-form 𝐴𝜇𝑑𝑥𝜇, which is the connection form in the coordinate basis, may be
used to write the particle-field coupling term in the action 𝑆 (see below)
𝑆𝑝𝑓= −1
𝑐∑∫𝑒𝑎𝐴𝜇(𝑥𝑎)𝑑𝑥𝑎
𝜇
𝑎
,
244
Classical Fields and Waves
where the integral is taken over the world lines of particles, or in the “current”
form
𝑆𝑝𝑓= −1
𝑐2 ∫𝑗𝜇𝐴𝜇𝑑4𝑥.
Recall that the total action for the electromagnetic field, 𝑆= 𝑆𝑝+ 𝑆𝑓+
𝑆𝑝𝑓 is the additive combination of terms corresponding to particles (charges)
without field, field with no particles, and particle-field coupling, respectively.
Using the four-component vector potential 𝐴𝜇 enables us to be well-
prepared for field quantization in quantum field theory (see Chapter 6).
Inserting 𝐴𝜇 and the corresponding expression for the fields into the Maxwell
equations, we get the unified system of equations for the potentials (𝜑, 𝐀)
obtained previously in three-dimensional form:
𝑝𝜇𝑝𝜇𝐴𝜈−𝑝𝜈𝑝𝜇𝐴𝜇= 4𝜋
𝑐𝑗𝜈.
This is, as we have seen, the system of four linear equations for 𝐴𝜇 instead
of eight Maxwell’s equations. Here, I have intentionally written this wave-like
system through momenta, 𝑝𝜇= −𝑖𝜕𝜇. One can solve such a system - a linear
system always can be solved for given source terms 𝑗𝜇 - and then obtain the
experimentally observable fields 𝐄 and 𝐇. Thus, using the potentials results
in a considerable simplification.
Now we may exploit gauge invariance, which permits, as we have seen, a
certain arbitrariness in the choice of 𝐴𝜇. Indeed, we have already discussed
that the Maxwell equations expressed through the potentials are invariant
with respect to the substitution
𝐴𝜇→𝐴𝜇
′ = 𝐴𝜇+ 𝜕𝜇𝜒= 𝐴𝜇+ 𝑖𝑝𝜇𝜒,
where 𝜒 is some arbitrary function. In the four-dimensional formulation it is
natural to use the Lorentz constraint, 𝑝𝜇𝐴𝜇= 0, which allows one to uncouple
the above system of equations for 𝐴𝜇.
Here, I guess, a remark about the use of potentials may be pertinent.
Highschool and even some university students typically don’t completely
understand the meaning of potentials as well as the idea of introducing them.
Only by trying to solve vector problems directly, with a lot of tedious
computations, people begin to value a clever trick with a single scalar
function, the electrostatic potential, from which the fields can be obtained by
a simple differentiation. The fact that the electrostatic potential is defined up
to a constant then becomes self-evident. Less evident, however, is that this
fact is a manifestation of some hidden invariance which we now call the gauge
invariance. Decoupling the system of equations for 𝐴𝜇 is basically the same
5.4
Classical Electromagnetic Field without Sources
245
thing, more generality is reflected in replacing the electrostatic relationship
𝜕𝑐𝜕𝐫
⁄
= 0, where 𝑐= 𝑐𝑜𝑛𝑠𝑡, by, e.g., 𝑝𝜇𝐴𝜇= 0.
5.4
Classical Electromagnetic Field without
Sources
This section is entirely dedicated to an elementary description of the classical
electromagnetic field in the absence of electric charges and currents
(𝜌= 0, 𝐣= 0) . In this case, the Maxwell equations in vacuum take the
homogeneous form
∇𝐄= 0,
∇× 𝐇−1
𝑐
𝜕𝐄
𝜕𝑡= 0,
∇𝐇= 0,
∇× 𝐄+ 1
𝑐𝐇= 0
(As usual, we are writing this system as an array of two parts.) In terms of the
field tensor (see [39], §23), 𝐹𝜇𝜈= 𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇, take in four-dimensional
notations a particularly compact form
𝜀𝜇𝜈𝜌𝜎𝜕𝜈𝐹𝜌𝜎= 0
or
𝜕𝜎𝐹𝜇𝜈+ 𝜕𝜈𝐹𝜎𝜇= 0
- the first pair of Maxwell’s equations and 𝜕𝜇𝐹𝜇𝜈 - the second pair. Recall that
𝜀 denotes, as we have seen several times, the absolute antisymmetric tensor
of the fourth rank whose components change sign with the interchange of any
pair of indices and 𝜀0123 = 1 by convention, see [39], §6 123. When discussing
dualities in physics in Chapter 3, we have already dealt with four-dimensional
notations in electromagnetic theory, and we shall discuss this formulation
later in connection with some geometric concepts of field theory. For many
practical problems, however, such as the interaction of radiation with matter
(Chapter 8) the four-dimensional formulation seems to be inconvenient,
requiring superfluous operations. One may notice in this context that as long
as we limit ourselves to the domain of special relativity (flat or Minkowski
space), no distinction between contra- and covariant components is, strictly
speaking, necessary and no metric tensor 𝑔𝜇𝜈 can be introduced. We,
however, shall be using the Minkowski space metric tensor 𝛾𝜇𝜈 from time to
time, its only function will be to raise or to lower indices.
Introducing explicitly the vector and scalar potentials, ∇× 𝐀= 𝐇, −∇𝜑−
1
𝑐
𝜕𝐀
𝜕𝑡= 𝐄, we get the wave equations for the potentials
123 Not always, sometimes 𝜀0123 = 1 is taken. This discrepancy of conventions, though
unimportant, leads to confusion.
246
Classical Fields and Waves
∆𝐀−∇(∇𝐀+ 1
𝑐
𝜕𝜑
𝜕𝑡) −1
𝑐2
𝜕2𝐀
𝜕𝑡2 = 0
(5.12)
and
∇𝜑+ 1
𝑐∇𝜕𝐀
𝜕𝑡= 0
(5.13)
or, in the four-dimensional notation
𝜕2𝐴𝜇
𝜕𝑥𝜈𝜕𝑥𝜇−𝜕
𝜕𝑥𝜇(𝜕𝐴𝜈
𝜕𝑥𝜈) ≡𝜕𝜈𝜕𝜇𝐴𝜇−𝜕𝜇𝜕𝜈𝐴𝜈= 0,
(5.14)
where, as usual, 𝐴𝜈= 𝛾𝜈𝜇𝐴𝜇. Being a direct consequence of the Maxwell
equations, these wave equations are of extreme importance in their own
right, since they lead to the representation of the Maxwell field as a set of
oscillator equations, which is the only equation that has an equidistant
spectrum, and it is this equidistant spectrum, as we have already noticed, that
allows one to interpret the field as consisting of independent particles -
electromagnetic quanta or photons. Therefore, two accidentally occurring
reasons Maxwell’s equations leading to harmonic oscillator and the unique
equidistant character of its spectrum, being combined render an essential
part of the conceptual frame for entire contemporary physics. Physically
speaking, since the charges interact with each other, and if they did not
interact it would be impossible to observe them, the electromagnetic field
becomes a necessity, and hence photons appear as an inevitable consequence
of electromagnetic field quantization (based on the oscillator model). Thus,
photons invoke the idea of interaction carriers. This idea has later evolved
into the concept of gauge bosons.
5.5
Equations of Motion for the
Electromagnetic Field
We have already discussed that the equations of motion for any field may - in
fact must - be derived from an appropriate Lagrangian density using the least
action principle. This fact is, in our terminology, the backbone of modern
physics. In this approach, however, one has to correctly provide the
Lagrangian density. It is not difficult to demonstrate (see [39], §27) that the
Lagrangian density of the form
ℒ0 = 𝐸2 −𝐻2
8𝜋
= −1
16𝜋𝐹𝜇𝜈𝐹𝜇𝜈≡−1
16𝜋(𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇)(𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇) (5.15)
can suit our purposes. Indeed, the Euler-Lagrange equations for a Lagrangian
density ℒ
Hamiltonian Formalism in Electromagnetic Theory
247
𝜕
𝜕𝑥𝜇
𝜕ℒ
𝜕(𝜕𝐴𝜈
𝜕𝑥𝜈)
−𝜕ℒ
𝜕𝐴𝜇≡𝜕𝜇
𝜕ℒ
𝜕(𝜕𝜈𝐴𝜈) −𝜕ℒ
𝜕𝐴𝜇= 0
in the case of ℒ= ℒ0 are reduced to the first term only, 𝜕𝜇
𝜕ℒ
𝜕(𝜕𝜈𝐴𝜈), that is
𝜕ℒ0
𝜕(𝜕𝜇𝐴𝜈) −
1
16𝜋{
𝜕
𝜕(𝜕𝜇𝐴𝜈) [(𝜕𝜌𝐴𝜎−𝜕𝜎𝐴𝜌)(𝜕𝜌𝐴𝜎−𝜕𝜎𝐴𝜌)]}
= −1
8𝜋[
𝜕
𝜕(𝜕𝜇𝐴𝜈) (𝜕𝜌𝐴𝜎𝜕𝜌𝐴𝜎−𝜕𝜌𝐴𝜎𝜕𝜎𝐴𝜌)]
= 1
4𝜋(𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇)
(5.16)
Inserting this expression into the Euler-Lagrange equation, we get the
already familiar wave equation derived from the Maxwell equations
𝜕𝜈𝜕𝜈𝐴𝜇−𝜕𝜇𝜕𝜈𝐴𝜈= 0.
Thus, the wave equation may be regarded as the motion equation for an
electromagnetic field.
One must note that ℒ0 is not the unique Lagrangian density that produces
the Maxwell equations. Indeed, if we add to ℒ0 , for example, the four-
divergence of some vector 𝜒𝜇 (which may be a function of 𝐴𝜇, 𝑥𝜇), the
equations of motion obtained from the Euler-Lagrange equations for the new
Lagrangian density ℒ= ℒ0 + 𝜕𝜇𝑥𝜇 will coincide with those for ℒ= ℒ0 (see
[39], §27). More generally, any two Lagrangian densities that would differ by
terms vanishing while integrated over spacetime, produce the same equations
of motion.
This subject of field invariants is beautifully exposed in §25 of [39], and
the only justification of repeating it here would be to provide some fresh
views or interpretations.
5.6. Hamiltonian Formalism in Electromagnetic
Theory
In this section, we shall deal with the Hamiltonian approach applied to
classical electrodynamics. The intention is to represent electrodynamics in a
form close to mechanics. In my opinion, using the Hamiltonian formalism in
electromagnetic theory is little more than a heuristic catch; this formalism
emphasizing evolution in time does not seem to be perfectly tuned to
relativistic problems. However, to gain more insight into field theory, we shall
briefly discuss its Hamiltonian description.
When trying to apply the standard Hamiltonian formalism to a free
electromagnetic field one can start, for example, from the gauge invariant
Lagrangian density
248
Classical Fields and Waves
ℒ0 = −
1
16𝜋𝑐𝐹𝜇𝜈𝐹𝜇𝜈= −
1
16𝜋𝑐𝐹𝜇𝜈
2 = −1
8𝜋𝑐(𝐇2 −𝐄2).
Recall that the total action for the electromagnetic field is[39]
𝑆= −∑∫𝑚𝑎𝑐𝑑𝑙
𝑎
−∑∫𝑒𝑎
𝑐𝐴𝜇𝑑𝑥𝜇
𝑎
−
1
16𝜋𝑐∫𝐹𝜇𝜈
2 𝑑4𝑥.
Taking, as usual, the field potentials 𝐴𝜇 as the generalized coordinates,
we obtain the canonically conjugate momenta
Π𝜇=
1
𝑐
𝜕ℒ
𝜕(𝜕𝐴𝜇
𝜕𝑥0)
so that the Hamiltonian density is
ℋ0 = 𝑐Π𝜇𝜕0𝐴𝜇−ℒ0.
One may immediately notice that by virtue of the relationship
𝜕ℒ
𝜕(𝜕𝜇𝐴𝜈) = −1
4𝜋𝑐𝐹𝜇𝜈= −1
4𝜋(𝜕𝜇𝐴𝜈−𝜕𝜈𝐴𝜇)
the momentum Π0 conjugate to the scalar potential 𝜑= 𝐴0 is identically zero.
This is the old difficulty, well known to relativistic field theorists, see, e.g.
[206]. There exist a number of ways to circumvent this difficulty, but the
ultimate cause for it is the limited adequacy of the Hamiltonian formalism
for relativistic field problems. Then we may sacrifice the Lorentz invariance
anyway and choose the Coulomb gauge 𝑑𝑖𝑣𝐀= 0, 𝜑= 0. In this gauge we
can write
ℒ0 = 1
8𝜋[ 1
𝑐2 (𝜕𝑡𝐀)2 −(𝜕𝑖𝐴𝑗−𝜕𝑗𝐴𝑖)
2],
where 𝑖, 𝑗= 1,2,3 . From this Lagrangian density we obtain the field
momentum components
Π𝑖=
𝜕ℒ0
𝜕(𝜕𝑡𝐴𝑖) =
1
4𝜋𝑐2 𝜕𝑡𝐴𝑖= −
1
4𝜋𝑐𝐸𝑖
and the Hamiltonian density is obviously
ℋ0 = Π𝑖𝜕𝑡𝐴𝑖−ℒ0 = 1
8𝜋[ 1
𝑐2 (𝜕𝑡𝐀)2 + (𝜕𝑖𝐴𝑗−𝜕𝑗𝐴𝑖)
2] = 1
8𝜋(𝐄2 + 𝐇2).
Hamiltonian Formalism in Electromagnetic Theory
249
Using the Hamiltonian approach in electrodynamics mainly appeals to
human intuition which tends to gauge everything by the archetypes inherited
from classical mechanics. We have seen that the Hamiltonian formalism in
classical mechanics was developed as a geometric theory on a finite-
dimensional symplectic manifold having an even (2n) dimensionality. While
trying to apply this formalism to electromagnetic theory we are less
interested in geometrical aspects of symplectic manifolds, mostly focusing on
the form of the Hamiltonian operator convenient for calculations124. This was,
e.g., the strategy adopted in the classical textbook by W. Heitler [207] as well
as many other sources related to the early development of field theory (see
also
section
“On
Hamiltonian
Formalism for
Particle
Motion
in
Electromagnetic Fields” in Chapter 8). Today, more sophisticated methods are
commonly in use, and the Hamiltonian formalism is not considered well-
adapted to treat the electromagnetic field regarded as a Hamiltonian system.
Simply speaking, a Hamiltonian system is described by pairs (𝑝𝑖, 𝑞𝑗) of
canonically conjugated local variables living in some vector space 𝑍≔(𝑝, 𝑞).
Then a Hamiltonian function ℋ(𝑝𝑖, 𝑞𝑗) is supposed to exist, with the property
that the equations of motion are produced from it as from some potential in
the form of a symplectic gradient
𝑝̇𝑖= −𝜕ℋ
𝜕𝑞𝑖,
𝑞̇ 𝑖= 𝜕ℋ
𝜕𝑝𝑖
,
𝑖= 1, … , 𝑛
One can observe that this formalism was initially developed for finite-
dimensional systems; however, it can be generalized to their infinite-
dimensional analogs. One of the possibilities (probably not unique) for such a
generalization is to replace coordinates 𝑞𝑖, at least some of them, by fields
𝜑𝑖(𝑥𝛼). Thus for an ensemble of particles and fields the Hamiltonian function
is just a sum of kinetic energies of the particles and the energy contained in
the electromagnetic field. Recall that for a conservative system the
Hamiltonian is represented as the energy of the system expressed through
canonical variables, e.g., 𝑝, 𝑞. The energy of a system “particles + EM field”
can be written as
ℰ= 1
2 ∑𝑚𝑎𝑣𝑎
2
𝑎
+ ∫𝑑3𝑟𝐄2 + 𝐇2
8𝜋
,
(5.17)
where index 𝑎 enumerates particles. Here, an interesting and not quite trivial
question arises: where is the interaction between particles and fields? The
answer depends on how we define an electromagnetic field. If we understand
by 𝐄 and 𝐇 the total electromagnetic field and not only its free part, then the
energy of electromagnetic interaction is hidden in the total field energy. Recall
124 Due to this reason and to attain greater simplicity of notations, we shall
temporarily disregard the difference between contra- and covariant coordinates,
although this may be considered a crime by geometrically-minded people.
250
Classical Fields and Waves
that here we forget about all other fields acting on particles except the
electromagnetic one.
Notice that although expression (5.17) gives the energy of the system
“particles + EM field”, it is still not the Hamiltonian function because this
energy is expressed through the Lagrangian (TM) variables (𝑞𝑖, 𝑞̇ 𝑖) i.e.,
through pairs (𝐫𝑎, 𝐯𝑎).
This was a somewhat naive and straightforward construction of the
Hamiltonian function for electromagnetism. We have seen above that the
Hamiltonian formalism makes the passage to quantum theory intuitive and
convenient.
5.7
Limitations of Classical Electromagnetic
Theory
There are, of course, many limitations in any classical theory, which require a
consistent quantum treatment. Traditionally, such limitations of classical
electrodynamics are discussed in the initial chapters of textbooks on quantum
field theory, and we shall also observe some deficiencies of the classical
electromagnetic theory when considering the quantum world of
electromagnetic phenomena in Chapter 6. Here I venture to say a few words
about completeness of classical electrodynamics based on the Maxwell
equations in the classical domain.
The classical electromagnetic theory based on the Maxwell equations is a
linear model - probably the simplest one possible in the Minkowski
background - in which the scalar and vector potentials are to a large extent
arbitrary and must be specified by fixing the gauge and the boundary
conditions. The potentials in the electromagnetic theory are usually regarded
as having mostly mathematical and not physical meaning. This is, however,
not true. It would be a triviality to say that in quantum theory, especially in
quantum field theory (QFT), the electromagnetic potentials have a clear
physical significance. Yet even in the classical domain, the electromagnetic
potentials, i.e., vector fields 𝐴𝜇(𝑥𝜈), 𝜇, 𝜈= 1, … ,4 are gauge fields that really
act as local to global operators mapping the global space-time conditions to
local electromagnetic fields 125 . These potentials produce physically
observable effects, especially salient when one departs from the conventional
interpretation of the electromagnetic theory as a linear model having a simple
𝑈(1) gauge symmetry. (We shall later dwell on gauge symmetry in some
detail.) This old interpretation of the Maxwell theory goes back to such
prominent physicists and mathematicians as H. Bateman, O. Heaviside, H.
Hertz, G. F. Fitzgerald, L. V. Lorenz, H. A. Lorentz, J. C. Maxwell, and W.
Thomson. Mathematically, as I have just mentioned, this interpretation treats
electrodynamics as a linear theory of 𝑈(1) symmetry with Abelian
commutation relations. Now we know that the electromagnetic theory can be
extended to 𝑆𝑈(2) and even higher symmetry formulations. One might note
in passing that elementary vector algebra corresponds to the 𝑈(1) symmetry,
125 Here we are using notations as in [84].
5.7
Limitations of Classical Electromagnetic Theory
251
whereas the 𝑆𝑈(2) group of transformations may be associated with more
complicated algebraic constructions such as quaternions (see in this
connection simple examples from sections on vector fields in Chapter 3).
When describing electromagnetic waves propagating through material
media, such mathematical dynamic objects as solitons quite naturally appear.
Solitons are in fact pseudoparticles, they can be classical or quantum
mechanical, and, in principle, they can be both nonlinear and linear (at least
piecewise). Other families of pseudoparticles include magnetic monopoles,
magnetic charges (if defined separately from monopoles as e.g., dyons) and
instantons. The standard electromagnetic 𝑈(1) theory cannot describe
solitons, so in order to be able to do that one must go beyond the linear
electrodynamics based on the Maxwell equations, in other words, this theory
must be extended (at least to 𝑆𝑈(2)). Although, as I have already remarked,
we shall not deal methodically with nonlinear PDEs - this is a separate topic
requiring a serious treatment, I shall try to explain later what a symmetry
extension means. Now I just want to notice that some physical effects
unambiguously indicate that the conventional Maxwellian field theory cannot
be considered complete.
For example, leaving apart the well-known difficulties with the classical
electron radius, such phenomena as the Aharonov-Bohm (AB) effect [101] or
the Altshuler-Aronov-Spivak (AAS) effect [105] are based upon direct
physical influence of the vector potential on a charged particle, free or
conducting electrons respectively. It is interesting that both effects may be
interpreted as breakdown of time-reversal symmetry in a closed-loop
trajectory by a magnetic field (see about the time-reversal symmetry in
Chapter 9). Recall that the vector potential 𝐴𝜇 has been commonly regarded
as a supplementary mathematical entity introduced exclusively for
computational convenience. It is mostly in view of this interpretation that the
conventional electromagnetic theory may be considered incomplete - it fails
to define 𝐴𝜇 as operators. Besides the AB and AAS effects, other important
physical phenomena also depend on potentials 𝐴𝜇, for instance, the Josephson
effect (both at the quantum and macroscopical level), the quantum Hall effect,
de Haas-van Alphen effect and even the Sagnac effect (known since the
beginning of the 20th century). Recently, the effects associated with the
topological phase properties - and all the above phenomena belong to this
class - have become of fashion, and we shall devote several pages to elucidate
the essence of such phenomena.
One origin of the difficulties with classical electromagnetism lies in the
obvious fact that the electromagnetic field (like any other massless field)
possesses only two independent components, but is covariantly described by
the four-vector 𝐴𝜇. However, in many models, for example, in radiation-
matter interaction (Chapter 8) we usually break this explicit covariance.
Likewise in nonrelativistic quantum theory, by choosing any two of the four
components of 𝐴𝜇 (e.g., for quantization), we also lose the explicit covariance.
Contrariwise, if one desires to preserve the Lorentz covariance, two
redundant components must be retained. This difficulty is connected with the
252
Classical Fields and Waves
important fact that for a given electromagnetic field potentials 𝐴𝜇 are not
unique: they are defined up to the gauge transformation 𝐴𝜇→𝐴𝜇
′ = 𝐴𝜇+
𝜕𝜇𝜒(𝑥) (see above).
One may note that random processes (see Chapter 7, “Stochastic Reality”)
are a direct generalization of deterministic processes considered in classical
mechanics. Historically, the first random processes to have been studied were
probably the Markov processes: a random process is called a Markov process,
if for any two 𝑡0 and 𝑡> 0 the process probability distribution 𝑤(𝑡) (in
general, under the condition that all 𝑤(𝑡) for 𝑡≤𝑡0 are known) depends only
on 𝑤(𝑡0). While for deterministic processes the state of the system at some
initial moment uniquely determines the system’s future evolution, in Markov
processes the state (probability distribution) of the system at 𝑡= 𝑡0 uniquely
defines probability distributions at any 𝑡> 0, with no new information on the
system’s behavior prior to 𝑡= 𝑡0 being capable of modifying these
distributions. Here one may notice a hint at a preferred direction of time or
time-reversal non-invariance, which is typical of real-life processes and does
not exist in unprobabilistic mechanical theories (we do not consider the
Aristotelian model here). Time-reversal invariance requires that direct and
time-reversed processes should be identical and have equal probabilities.
Thus, purely mechanical models based on Newton’s (or Lagrangian)
equations are time-reversible, whereas statistical or stochastic models,
though based ultimately on classical (reversible) mechanics, are time-
noninvariant. Likewise, mathematical models designed to describe real-life
processes are mostly irreversible. See below (“The Arrow of Time”) for more
details.
5.8
Integral Equations in Field Theory
I don’t quite understand why, but integral equations have become a subject
partly alien to physicists. I have met people - otherwise highly qualified - who
said that differential equations are more than enough to cover all the principal
areas of physics and to construct the models in other disciplines, so why
should one attract new and rather complicated concepts? It is unnecessary
and contradicts the principle of Occam’s razor.
However, the statement that knowing only the differential equations is
sufficient for physics is wrong: there are areas which cannot be studied (or at
least become extremely difficult to study) without integral equations, and
such areas are not rarely encountered in science and engineering. Take, for
example, antenna theory and design. To obtain the required value of the
electromagnetic field one has to compute or optimize the electric current,
which is an unknown quantity being integrated over the antenna volume or
surface. In scattering problems, both for waves and particles, integral
equations are a naturally arising concept. In the problem of electromagnetic
field scattering by a bounded inhomogeneity, the scattered field appears due
to the induced currents which start flowing in the inhomogeneity under the
influence of the incident field. More exactly, these currents may be viewed as
a response to the total field present in the volume occupied by the
5.9
Phenomenological Electrodynamics
253
inhomogeneity, this total field being the sum of the incident field and the one
generated by induced currents. This is a typical self-consistence problem that
leads to the volume integral equation where the unknown field stands under
the integral. In general, self-consistence problems like scattering, quite often
result in integral equations; now one more class of problems requires an
extensive knowledge of integral equations, namely the inverse problems. We
shall dwell on the subject of integral equations in association with the models
which can be compartmentalized to each of these classes of problems. So
integral equations may connect several seemingly unrelated topics, therefore
every person interested in physmatics - a connected framework of physical
and mathematical knowledge - should be familiar with integral equations.
Now, let me make a brief overview.
In mathematics, integral equations are considered as a natural part of
analysis linking together differential equations, complex analysis, harmonic
analysis, operator theory, potential theory, iterative solutions, regularization
methods and many other areas of mathematics. When I was studying integral
equations at the university, I was surprised to discover their multiple
connections with other areas of mathematics such as various vector spaces,
Hilbert spaces, Fourier analysis, Sturm-Liouville theory, Green’s functions,
special functions you name it. Maybe such course openness was due to our
teacher, professor V. I. Kondrashov, a well-known Russian mathematician
who had a considerable erudition in a variety of areas.
5.9 Phenomenological Electrodynamics
The ideal of theoretical physics is a microscopic theory of everything. This is
probably an unreachable star, yet some microscopic models built in physics,
for
example
in
non-relativistic
quantum
mechanics
or
quantum
electrodynamics, have been quite successful. This success is, however, a
precious rarity. More common are the phenomenological models. The
collection of such models related to electromagnetic phenomena is
aggregately called macroscopic electrodynamics. To better understand what
it is let us consider a simple school-time example. Assume that we need to
compute the electric field between a capacitor’s plates. Then we would have
to write the equations for electromagnetic fields produced by all the particles
constituting the plates and the dielectric between them. We would also have
to add to these field equations the ones describing the motion of the particles
in the fields. The self-consistence problem thus obtained is, in principle,
quantum mechanical, and any attempt to solve it would be a hopeless task.
Such an approach is usually called microscopic - because it considers
phenomena on an atomic scale - it is too complicated and in most cases
superfluous. Solving problems microscopically usually produces a great lot
of immaterial data. It is much more reasonable to formulate general rules for
the systems containing many particles i.e., for macroscopic bodies. Such rules
have, by necessity, the average character, totally disregarding atomistic
structure of the matter. An electromagnetic theory dealing with macroscopic
bodies and fields between them is called phenomenological electrodynamics.
Thematically, phenomenological electrodynamics belongs more to matter
254
Classical Fields and Waves
rather than to fields. However, it is more convenient to discuss this part of
electromagnetic theory in connection with Maxwell’s equations which should
be properly averaged than in the context of, e.g., laser-matter interaction.
There may be many phenomenological theories related to the same
subject, they are in fact comprehensive models placed between fundamental
microscopic laws and ad hoc results describing a given phenomenon. When
dealing with a phenomenological theory one cannot be sure that its equations
are unique and correctly describe the entire class of phenomena considered
unless a well-established procedure of obtaining these equations from
microscopic theory is provided. In this respect, macroscopic electrodynamics
and, e.g., thermodynamics are “lucky” phenomenological theories: they both
can be derived - although with considerable difficulties - from underlying
microscopic theories. On the other hand, empirical phenomenologies
flourishing in medical, social, economic and even engineering research
cannot - at least so far - be derived from first-principle theories and thus
the limits of applicability for the respective phenomenological concepts
are undetermined. In principle, the question of applicability of any
phenomenological theory is very intricate, and later I shall try to illustrate
this fact even on the examples of two privileged phenomenological theories:
macroscopic electrodynamics and hydrodynamics.
Besides, most of what we call microscopic theories are in fact
phenomenological. Take, for instance, Newton’s laws of motion. They are
considered exact, however Newton’s laws can be with certainty applied only
to macroscopic bodies i.e., to those composed of a very large number of
atomic particles in slow, smooth motion. The Newtonian model will
eventually lose validity if we continuously dissect such macroscopic bodies. It
is, however, not always easy to distinguish between microscopic and
phenomenological theories. Thus, the Schrödinger equation, which is also
considered exact microscopic, is nothing more than a phenomenological
model devised to describe the nonrelativistic motion of an atomic particle.
The Schrödinger equation becomes invalid when one starts scrutinizing the
interaction between particles through fields. The Newtonian theory of
gravity is also phenomenological, this is the mathematical model stating that
the attracting force between any two bodies does not depend on their
composition, matter structure (crystalline, amorphous, liquid, plasma, etc.)
and other constitutional details important from the physical point of view.
This force depends only on some aggregate (and to some extent
mysterious) coefficient called mass. Newtonian gravity is a very nontrivial
model, gravitation could have been totally different, for example, inertial and
gravitational masses could have been nonproportional to each other so that
“light” objects would fall slower than “heavy” ones in the gravitation field.
Phenomenological theory of gravitation, due to independence of physical
details, allows astronomers to predict the motion of celestial bodies
ignoring physical processes in them. In general, phenomenological theories
usually neglect the effects of microscopic quantities or represent them by a
set of numbers.
5.9
Phenomenological Electrodynamics
255
So
philosophically
speaking,
most
equations
of
physics
are
phenomenological. How can one try to establish validity of phenomenological
theories? We have seen that the two basic concepts of microscopic - atomistic
- models, in contrast to phenomenological theories (an extreme case is
thermodynamics), are particles and fields. Phenomenological theories
typically (but not always!) do not treat separate particles, they tend to regard
objects as continuous without rapidly fluctuating local quantities such as true
densities. This approach appears to be quite reasonable from the practical
point of view: microscopic values vary in spacetime in a very complicated
manner, and it would be merely meaningless to follow their instantaneous
local values. In other words, any compelling theory should only operate with
smoothed values, the fluctuations being averaged out. This way of thought
naturally leads to the possibility of obtaining the phenomenological
description by averaging the corresponding microscopic theories. This
sounds simple - especially for linear theories - yet the question arises: what
is actually “averaging”?
There does not seem to be a universal answer to this question, nor a
universal recipe for the transition to the phenomenological version of a given
microscopic
theory.
Each
specific
case
is
different,
and
“phenomenologization” can be quite intricate, not reducing to formal
averaging. To illustrate the emerging difficulties let us get back to the
relationship between microscopic and macroscopic electrodynamics. The
system of Maxwell equations constituting the backbone of microscopic
electrodynamics is linear, and thus it presumably can be averaged in a
straightforward fashion i.e., one can simply substitute into the Maxwellian
system average values for the field 𝐄̅ and 𝐇̅ in place of their genuine values 𝐄, 𝐇
containing fluctuations. However, here two problems arise. Firstly, the charge
and current densities, 𝜌 and 𝐣, representing inhomogeneities in the Maxwell
equations should also be averaged, but nobody knows a priori what the
relationship between the averaged fields and the averaged currents would be,
and one badly needs this relationship to close the system of averaged
equations of macroscopic electrodynamics. This is an important problem, and
we shall discuss it later.
The second problem is the very meaning of the averaging operation and
the corresponding mathematical procedure. Physicists traditionally relied in
this issue on the concept of the so-called physically infinitesimal volume and
declared averaging over this volume. The whole procedure of averaging over
the infinitesimal volume is described in detail in the classical textbook [208].
Below I shall briefly reproduce this standard procedure accentuating the
points that were for L. D. Landau and E. M. Lifshitz too obvious. Nevertheless,
I consider everything based on the “physically infinitesimal” misleading and
poorly suitable for many practically important cases (such as X-ray optics,
molecular optics, plasma physics, etc.). Besides, the very notion of the
physically infinitesimal volume is badly defined – “on the hand-waving level”
- so that it would be difficult to indicate the accuracy of the averaging
procedure. One must be satisfied with the formally written average fields
(overlined 𝐄 and 𝐇), the question of their deviations from microscopic fields
256
Classical Fields and Waves
being irrelevant. One can only assume that fluctuations of the fields averaged
over a “physically infinitesimal volume” are such that one considers these
macroscopic fields as real statistical averages.
One might also remark that the difference between the fields averaged
over a physically infinitesimal volume and statistically averaged is inessential.
Right, it may be inessential for the description of quasistatic processes in
simple model systems such as homogeneous dielectric. Even if we assume
that averaging over physically infinitesimal volume is equivalent to averaging
over all possible positions of scattering centers for the field (which is not at
all obvious), in such averaging we neglect the motion of these centers.
Moreover, it would be hardly possible to define a universal physically
infinitesimal scale for all systems. For example, there exist many
characteristic spatial and temporal parameters for plasmas, rarefied gases,
turbulent fluids, superfluids, crystalline and amorphous solids, etc. The
question of averaging over the physically infinitesimal volume is closely
connected with the possibility of a universal definition of a continuous
medium and leads to such nontrivial questions as dynamic irreversibility and
time-noninvariance (see Chapter 9).
No matter what averaging method is applied to the Maxwell equations,
the representation of the electromagnetic field as having exact (rapidly
fluctuating) values at each spacetime point should be abolished. Bearing this
in mind, one usually regards phenomenological electrodynamics as dealing
only with the slow-varying fields, more specifically only with those
having the wavelength 𝜆≫𝑛−1/3, where 𝑛 is the particle density in the
medium. One may note that it is this condition that should exclude the X-
ray range from macroscopic treatment (nonetheless, many problems of X-
ray optics are actually handled by the tools of phenomenological
electrodynamics). In general, along this way of thought it would be difficult to
consider the short-wavelength phenomena in matter, e.g., those for large
absolute values of dielectric function or refractive index. Besides, the matter
response to an electromagnetic field may essentially depend on the motion of
particles, the latter in reality “feeling” local and instantaneous fields in each
spacetime point and not the mean fields averaged over the volume containing
many particles. It is similar to the fact that the car driver reacts on the “here
and now” traffic conditions more acutely than on smooth road curves, speed
limiting signs, information tableaux and other factors of “macroscopic”
character.
An alternative - and more correct - averaging method is not the one over
the “physically infinitesimal volume”, but a standard method of statistical
physics: averaging over a statistical ensemble, e.g., in the case of equilibrium
over the Gibbs distribution. Taking averages over the quantum state, with the
wave function for the pure state or with the density matrix for the mixed state,
will be discussed separately below. One may notice that in the statistical
5.9
Phenomenological Electrodynamics
257
method there is no spatial averaging per se, and the fields can still remain
quasilocal and quasi-instantaneous126.
In the system of Maxwell equations
𝑐𝑢𝑟𝑙𝐇−1
𝑐
𝜕𝐄
𝜕𝑡= 4𝜋
𝑐(𝐣+ 𝐣0)
(5.18)
𝑑𝑖𝑣𝐄= 4𝜋(𝜌+ 𝜌0)
(5.19)
𝑐𝑢𝑟𝑙𝐄+ 1
𝑐
𝜕𝐇
𝜕𝑡= 0
(5.20)
𝑑𝑖𝑣𝐇= 0,
(5.21)
where 𝜌0 and 𝐣0 are external charge and current densities, the induced
currents 𝐣 and charges 𝜌 are the functions of fields in the matter, 𝐣= 𝐣(𝐄). In
phenomenological electrodynamics, the quantities 𝐄, 𝐇, 𝜌, 𝐣 are assumed to be
averaged over either a “physically infinitesimal volume” or a statistical
ensemble (see below). The relationship between the field and the current
induced by it represents the matter response to an electromagnetic excitation
and determines such essential quantities as the medium susceptibilities, both
linear and nonlinear.
The phenomenological approach allows one to conveniently formulate
mathematical problems for the “Maxwell operator”. By introducing the tensor
functions 𝜖𝑖𝑗(𝑥) and 𝜇𝑖𝑗(𝑥), 𝑥∈Ω ⊂ℝ3 i.e., dielectric permittivity and
magnetic permeability of the medium (see below a detailed discussion of
these quantities), we can write the homogeneous Maxwell equations in the
operator form through the stationary operator ℳ(𝜖, 𝜇) acting on the pair
(𝐄, 𝐇)𝑇 where 𝐄= 𝐄(𝜔, 𝐫), 𝐇= 𝐇(𝜔, 𝐫) are respectively electric and
magnetic field vectors in the considered domain Ω. More specifically, for
a stationary electromagnetic field its eigenfrequencies correspond to the
spectrum of ℳ(𝜖, 𝜇) acting on (𝐄, 𝐇)𝑇 according to the rule
ℳ(𝜖, 𝜇) (𝐄
𝐇) = ( 𝑖𝜖−1∇× 𝐇
−𝑖𝜇−1∇× 𝐄).
One usually assumes the solenoidal constraints 𝑑𝑖𝑣(𝜖𝐄) = 0, 𝑑𝑖𝑣(𝜇𝐇) and
some boundary conditions, e.g.,
𝐄𝜏|𝜕Ω = 0, (𝜇𝐇)𝜈|𝜕Ω = 0.
Here index 𝜏 denotes the tangential and index 𝜈 the normal component of the
respective vector on boundary 𝜕Ω. In isotropic medium, permittivity and
permeability may be reduced to positive scalar functions 𝜖(𝐫), 𝜇(𝐫). When
these functions as well as boundary 𝜕Ω are sufficiently smooth, the spectral
problem for the Maxwell operator can be solved. The Maxwell operator
126 The microscopic electromagnetic fields are still bound from below by the nuclear scale
( ~10−13 cm). As for the macroscopic fields in the matter, it does not make sense to
phenomenologically consider the distances smaller than the atomic ones (~10−8 cm).
258
Classical Fields and Waves
formulation is especially useful for calculating modes in electromagnetic
resonators (see, e.g., [6, 5]).
In optical problems related to isotropic media, permittivity and
permeability are also typically treated as smooth and positive scalar function
determining the refraction index 𝑛= (𝜖𝜇)1/2 and the velocity of light in the
medium, 𝑣𝑝ℎ= 𝑐/𝑛. Then the “optical” metric
𝑑𝑠2 = 𝑐2𝑑𝑡2 −𝑑𝑥𝑖𝑑𝑥𝑖
turns, as already discussed, the considered domain Ω ⊂ℝ3 into a Riemannian
manifold.
5.9.1
The Traditional Averaging Procedure
The meaning of the averaging over a “physically infinitesimal volume” needs
to be made clear for the sake of sound understanding of phenomenological
electrodynamics. Let us now carry out explicitly the standard procedure of
such an averaging. Apart from being a useful exercise, it is also a useful trick
the details of which are not always given in the textbooks. In the standard
averaging scheme of transition to macroscopic electrodynamics, one usually
introduces the electric induction vector 𝐃= 𝐄+ 4𝜋𝐏 and the magnetic field
strength 𝐇= 𝐁−4𝜋𝐌 where 𝐏 is the polarization vector of the medium and
𝐌 is the magnetization vector (by definition, it is the mean magnetic dipole
moment for unit volume, 𝐌(𝑟) = ∑𝑁𝑖𝐦𝑖(𝐫)
̅̅̅̅̅̅̅̅
𝑖
, 𝐦𝑖(𝐫)
̅̅̅̅̅̅̅̅ is the average magnetic
dipole of the elementary domain or cell in the vicinity of point 𝐫, e.g., of a
single molecule located at 𝐫 and 𝑁𝑖 is the average number of such cells). One
might complain that such phenomenological construction is far from being
elegant. The average value of the magnetic field is usually denoted as 𝐵 and
called the magnetic induction.
In the classical averaging scheme over the “physically infinitesimal
volume”, the polarization vector 𝐏 is usually defined as the vector whose
divergence equals (up to a sign) the average charge density in the medium,
∇𝐏= −𝜌 (see [208], §6). This definition is consistent with the general
requirement of the matter electric neutrality, the latter being necessary for the
stability of the matter. One can easily understand the physical meaning of the
polarization vector 𝐏 as the average dipole moment of the unit volume.
Indeed, if for the zero-moment of the averaged charge density 𝜌 we have,
due to neutrality,
𝜌̅ = ∫𝜌̅(𝐫, 𝑡)𝑑3𝑟
Ω
= 0
for all 𝑡, for the first moment we may write
∫𝐫𝜌̅(𝐫, 𝑡)𝑑3𝑟
Ω
= ∫𝐏(𝐫, 𝑡)𝑑3𝑟
Ω
5.9
Phenomenological Electrodynamics
259
where integration runs over the whole matter (e.g., a macroscopic dielectric
body). One may notice that these relations are supposed to be valid not only
in the static case. To prove the second relation, we can use the vector integral
identity
∮𝐫(𝐏𝑑𝜎)
Σ
= ∫(𝐏∇)𝐫𝑑3𝑟
Ω
+ ∫𝐫∇𝐏𝑑3𝑟
Ω
,
where Σ is any surface enclosing the matter volume Ω. If this surface passes
outside this volume (a material body), the integral over it vanishes, and since
(𝐏∇)𝐫= 𝐏 we get the required relationship for the first moment of 𝜌̅. The
above vector identity can be proved by a number of ways, e.g., by using the
tensor (index) notations. The simplest, but possibly not very elegant, proof
would consist in using the identity
∇(𝜑(𝐫)𝐏) = 𝜑∇𝐏+ (𝐏∇)𝜑
which can be applied to any component of 𝐫= (𝑥, 𝑦, 𝑧) and then
integrating it over Ω, with the left-hand side giving the surface integral.
Thus, we have the four vectors - two vector field pairs - of macroscopic
electrodynamics, 𝐄, 𝐃, 𝐇, 𝐁, which must be supplemented by some
phenomenological relations between these vectors. Besides, if the above four
quantities must satisfy the equations of the type of Maxwell’s equation for the
microscopic vacuum values, the question of sources for the phenomenological
quantities arises. The textbook relationships between the vectors within each
pair, 𝐃= 𝜖𝐄 and 𝐁= 𝜇𝐇, complete the traditional construction of
macroscopic electrodynamics.
This simple construction can be justified for static or slowly varying
(quasistationary) fields, but it usually becomes totally inadequate for rapidly
changing vector functions such as for short wavelengths or ultrashort pulses
(USP), the latter being an increasingly popular tool in contemporary laser
physics. One may note that such commonly used terms as “short wavelengths”
or “hard electromagnetic radiation” are relative: for certain media the
radiation typically perceived as “soft”, e.g., infrared, may exhibit short-
wavelength features whereas in some other type of matter the same radiation
may be treated as quasistationary. Thus in plasma, where the average
distance between the particles, 𝑛−1/3, 𝑛 is the particle density, may easily
exceed the characteristic wavelength 𝜆 of the electromagnetic field, using
the phenomenological field equations obtained by averaging over the
physically infinitesimal volume can easily become meaningless.
I have already mentioned that the conventional approach to macroscopic
electrodynamics, corresponding to the averaging of microscopic fields over
“physically infinitesimal volume”, consists in the additive decomposition of
the total induced current and charge densities, 𝐣 and 𝜌 (also averaged over the
physically infinitesimal volume), into “physically different” parts, e.g.,
𝐣= 𝐣𝑐+ 𝐣𝑝+ 𝐣𝑚,
260
Classical Fields and Waves
where 𝐣𝑐 represents the current of conductivity electrons, 𝐣𝑝 is the
polarization current, 𝐣𝑝= 𝜕𝐏𝜕𝑡
⁄
, where 𝐣𝑚 is the magnetization current,
𝐣𝑚= 𝑐𝑐𝑢𝑟𝑙𝐌. (I would recommend reading carefully the respective
material in the textbook by L. D. Landau and E. M. Lifshitz [208].) One does
not need to perform the same decomposition procedure for the charge
density because of the constraint given by the continuity equation 5.3
which remains valid after any kind of averaging (due to its linearity).
5.9.2
Ensemble Averaging of Fields and Currents
Paying attention to the difference between averaging over a “physically
infinitesimal volume” and ensemble averaging may be essential to
understanding of the fundamentals involved. The Maxwell equations in the
medium obtained by the traditional averaging procedure (i.e., over the
“physically infinitesimal volume”) and having the form ([208], §75)
𝑐𝑢𝑟𝑙𝐇−1
𝑐
𝜕𝐃
𝜕𝑡= 4𝜋
𝑐𝐣0
(5.22)
𝑑𝑖𝑣𝐃= 4𝜋𝜌0
(5.23)
𝑐𝑢𝑟𝑙𝐄+ 1
𝑐
𝜕𝐁
𝜕𝑡= 0
(5.24)
𝑑𝑖𝑣𝐁= 0,
(5.25)
being supplemented by the “material equations” relating the quantities 𝐃, 𝐁
and 𝐄, 𝐇, 𝐃= 𝜖𝐄 and 𝐁= 𝜇𝐇 can be used without reservations only for
relatively slow-varying fields (static and quasistationary). For fast changing
electromagnetic fields and pulses as well as for the media where spatial field
variations can be shorter than the average distance between the constituent
particles (e.g., in plasmas), these equations become inconvenient or merely
inadequate. Indeed, it seems to be nearly obvious that, even leaving aside the
dubious and in general mathematically incorrect procedure of averaging over
the physically infinitesimal volume, breaking down the total current into
presumably non-overlapping components cannot be unambiguous for high
frequencies. It may be easily seen that the currents excited in the medium due
to free and to bound electrons cannot be separated already for optical
frequencies or for rapid variations of an external field (ultrashort pulses). One
may illustrate this fact by a simple example of an atomic electron in an
external field 𝐄. For simplicity, we may consider here classical (non-quantum
and non-relativistic) motion, 𝑚𝐫̈ = 𝑒𝐄(𝑡), 𝑒, 𝑚 are the electron charge and
mass respectively, then the characteristic displacement or oscillation
amplitude of an electron in the field 𝐄(𝑡) of an electromagnetic pulse or wave
is 𝑟0~𝑒𝐸𝜏2/𝑚 or 𝑟0~𝑒𝐸/𝑚𝜔2 where 𝜏 is the ultrashort pulse duration. Such
a displacement can be readily scaled down to atomic distances (Bohr’s
radius, 𝑟𝐵~10−8cm) even for rather strong fields, say only an order of
magnitude lower than atomic fields, 𝐸𝑎𝑡~𝑒/𝑎𝐵
2~109V/cm2.
This simple example (and many others, too) demonstrates that the
difference between the conductivity, polarization and magnetization
5.9
Phenomenological Electrodynamics
261
currents, the latter being due to the charge motion along closed trajectories,
rapidly becomes very vague with the decreased wavelength of the radiation
field. Starting from ultraviolet light, unambiguous decomposition of current
into these three components becomes virtually impossible. At least, such a
decomposition at optical frequencies and for ultrashort electromagnetic
pulses is pretty arbitrary. Thus, the optimal possibility we have in the case of
fast varying fields is to consider the total current 𝐣(𝐫, 𝑡) incorporating all kinds
of charge motions caused by an electromagnetic field. This current, in the
situation of thermal equilibrium, should be averaged over a statistical
ensemble i.e., with the Gibbs distribution (or equilibrium density matrix in
the quantum case). Technically, it is often convenient to introduce the total
polarization 𝒫(𝐫, 𝑡) (see [209]) instead of the total current:
𝒫(𝐫, 𝑡) = ∫𝐣(𝐫, 𝑡′)𝑑𝑡′
𝑡
−∞
,
𝐣(𝐫, 𝑡) = 𝜕𝑡𝒫(𝐫, 𝑡)
It is important to remember that total polarization 𝒫(𝐫, 𝑡) includes all
currents, due both to free and to bound charges, and not only the
displacement current, as in the intuitive scheme of averaging over the
physically infinitesimal volume. One can also see that the total polarization
accumulates current contributions starting from the remote past (𝑡→−∞),
but not from the domain 𝑡′ > 𝑡. This is a manifestation of the causality
principle which is one of the crucial assumptions in physics (see also
Chapters 6, 9). In the distant past, 𝑡→−∞, polarization is assumed to be
absent.
Introducing the total polarization enables us to write down the total
induction, 𝐃(𝐫, 𝑡) = 𝐄(𝐫, 𝑡) + 4𝜋𝐏(𝐫, 𝑡), and not only induction only owing to
displaced bound charges, as in traditional theory of dielectrics. Using the total
induction, we may write the averaged Maxwell equations in the medium as
𝑐𝑢𝑟𝑙𝐄+ 1
𝑐
𝜕𝐇
𝜕𝑡= 0
(5.26)
𝑑𝑖𝑣𝐇= 0
(5.27)
𝑐𝑢𝑟𝑙𝐇−1
𝑐
𝜕𝐃
𝜕𝑡= 4𝜋
𝑐𝐣0
(5.28)
𝑑𝑖𝑣𝐃= 4𝜋𝜌0,
(5.29)
where the last equation is in fact a consequence of the continuity equation 5.3.
Indeed, from
𝜕𝜌
𝜕𝑡+ ∇𝐣= 𝜕𝜌
𝜕𝑡+ ∇𝜕𝒫
𝜕𝑡= 0
we have
𝜌= −∇𝐏+ 𝜌−∞= 1
4𝜋∇(𝐃−𝐄),
262
Classical Fields and Waves
where 𝜌−∞ is the integration constant that can be put to zero (we assume that
there were no polarization in the distant past). Then we have 4𝜋𝜌=
−∇(𝐃−𝐄) , but ∇𝐄= 4𝜋(𝜌+ 𝜌0) where 𝜌0 is, as before, the density of
external charges introduced into the medium. Thus, we get ∇𝐃= 4𝜋𝜌0.
One may notice that in this “optical” approach one does not need to
introduce, in addition to a magnetic field, such a quantity as magnetic
induction and, consequently, magnetic susceptibility [209]. So, there is no
distinction between 𝐁 and 𝐇. Indeed, the total polarization already contains a
contribution from the magnetization (circular) currents, e.g., arising from
nearly closed trajectories of electrons moving in the electromagnetic field of
elliptically polarized light.
One often asks: why are currents and polarization in the medium
considered the functions of the electric field 𝐄 alone, with magnetic field 𝐇
being disregarded both in the dielectric relationship 𝒫= 𝜒𝐄 and in the
conductivity relationship 𝐣= 𝜎𝐄? Or, to put it slightly differently, why is it
always implied that the induction 𝐃 is only proportional to 𝐄 and not to 𝐇?
One might encounter several versions of answering this question. One of the
versions is that 𝐇 should be excluded because (1) it is an axial vector (𝐇=
𝛁× 𝐀), whereas 𝐄 is a “true” vector so that they both cannot be simply
superposed, and (2) 𝐇 is not a time-invariant quantity. The second argument
implies, however, an a priori requirement imposed by our intuitive perception
of the world: nobody can be sure that all electromagnetic phenomena must
be strictly invariant under time reversal (even in the static case). In fact, it
seems to be wrong (more about time-reversal invariance in Chapter 9). As to
the first argument, it may be “neutralized” by introducing a pseudoscalar
proportionality coefficient between 𝐏 and 𝐇 (or j and 𝐇 and 𝐃 and 𝐇). In
reality, we may disregard the magnetic field in the “material relations” because
it can be expressed through the electric field using Maxwell’s equations.
Besides, even if we wished to explicitly involve the magnetic field, it would
make little sense unless we considered ultrarelativistic motion of charges,
which is an exotic case in the medium. The point is that factor 𝑣/𝑐 always
accompanying a direct effect of a magnetic field on moving charges would
make its influence hardly noticeable for the particles in matter whose
characteristic velocities are of the atomic scale (𝑣𝑎𝑡~𝑒2/ℏ) i.e., at least two
orders of magnitude lower than the speed of light.
Now the main problem is: how to link the current induced in the
medium to an external field exciting this current? It is clear that such a
problem is in general extremely difficult and can hardly be solved for an
arbitrary matter containing a macroscopic number (𝑁~1023) of particles
placed in a fortuitous field. Nonetheless, there are several approaches to this
universal problem. One of such approaches has been developed within the
framework of nonlinear optics (NLO). This is a comparatively new
discipline which emerged in the 1960s, following the advent of lasers. Before
that time optics was essentially linear, and probably the only attempt to
consider nonlinear electromagnetic effects were made in quantum
electrodynamics while treating the scattering of light by light [210] (see also
5.9
Phenomenological Electrodynamics
263
[263]), which is in fact vacuum breakdown process 127 . Linearity of
electrodynamics required that the polarization of the medium and
induced current should be written respectively as 𝒫𝑖−𝜒𝑖𝑗𝐸𝑗, where 𝜒𝑖𝑗 is
the susceptibility tensor, and 𝑗𝑖𝜎𝑖𝑗𝐸𝑗, where 𝜎𝑖𝑗 is the conductivity
tensor128. A more general, nonlocal, linear expression linking the induced
current to an electric field may be written as
𝑗𝑖(𝐫, 𝑡) ≡𝑗𝑖
(1)(𝐫, 𝑡) = ∫𝜎𝑖𝑗(𝐫, 𝐫1; 𝑡, 𝑡1)𝐸𝑗(𝐫1, 𝑡1)𝑑3𝑟1𝑑𝑑1
(5.30)
Here I intentionally did not indicate integration limits implying that they are
ranging from −∞ to +∞; however, I think it necessary to comment on this
point. I have already mentioned that if one assumes that the causality
principle holds for all types of medium, then one must consider polarization
and currents, observed at time 𝑡, depending only on the field values related to
the preceding moments of time, 𝑡1 < 𝑡. Therefore, integration over 𝑡1 goes
only from −∞ to 𝑡. Furthermore, relativistic causality requires that spatial
integration may spread only over spacelike domains |𝐫1 −𝐫| < 𝑐|𝑡1 −𝑡|
because an electromagnetic field existing at points outside this domain cannot
produce an excitation of the medium at a spacetime point (𝐫, 𝑡). In fact,
however, integral expressions for currents and polarization determining the
response of the medium essentially depend on the properties of the response
functions 𝜎𝑖𝑗(𝐫, 𝐫1; 𝑡, 𝑡1), 𝜒𝑖𝑗(𝐫, 𝐫1; 𝑡, 𝑡1) serving as kernels in the integral
transformations realizing nonlocal maps. Experience as well as physical
considerations show that the response functions are essentially different
from zero at much shorter distances than those required by relativistic
causality. Thus, one may consider integration limits really infinite, which is
true not only in the linear case, but also for nonlinear response functions.
It may be rewarding to clarify the physical reason for spatial and
temporal nonlocality in (5.30). For integration over time, the origin of
nonlocality is pretty obvious: it is the retardation of response. For
example, a particle at point 𝐫 of its trajectory 𝐫(𝒕) feels the influence of the
force 𝐅(𝐫, 𝑡) not only taken at time-point 𝑡, but also related to preceding
moments, 𝑡′ < 𝑡. Indeed, directly from Newton’s equation we have
𝐫(𝑡) = ∫𝑑𝑡1
𝑡
𝑡0
∫𝑑𝑡2
1
𝑚𝐅(𝐫(𝑡2), 𝑡2)
𝑡
𝑡0
+ 𝐯0(𝑡−𝑡0) + 𝐫0
where 𝐫0 ≔𝐫(𝑡0), 𝐯0 ≔𝐫̇(𝑡0) ≡𝐫̇0 are integration constants having the
meaning of the starting point and initial velocity of the trajectory; 𝑡0 is the
127 The usual electrodynamics was perceived as essentially linear so that even in the classical
textbook by L. D. Landau and E. M. Lifshitz [208], §77, small intensities of fast changing fields are
always assumed, see the text before formula (77.3). The development of lasers has radically
changed this perception.
128 Here we disregard the difference between contra- and covariant components; such a
difference is inessential for our - rather superficial - intuitive discussion.
264
Classical Fields and Waves
−
{
initial time point for the motion. In many physical situations one is not much
interested in parameter 𝑡0 so that, e.g., for many particles one can average
over all possible 𝑡0 (as well as over initial conditions, which amounts to a
simple statistical approach) or, in mechanical problems, one may take the
limit 𝑡0 →−∞ assuming 𝐫0 = 𝐫−∞= 0 and 𝐯0 = 𝐯−∞= 0 so that
𝐫(𝑡) = ∫𝑑𝑡1
𝑡
𝑡0
∫𝑑𝑡2
1
𝑚𝐅(𝐫(𝑡2), 𝑡2)
𝑡
𝑡0
.
We shall also discuss the role of the initial moment of time 𝑡0 in many-
particle systems in connection with the Liouville equation and the choice
of supplementary (e.g., boundary) conditions to it (Chapter 7). In many
cases it is appropriate to shift the non-physical parameter 𝑡0 to −∞.
If the particle considered is, e.g., a free electron, then the force 𝐅 may be
exerted by an electromagnetic wave or a pulse, 𝐅(𝐫, 𝑡) ≈𝑒𝐄(𝐫, 𝑡), whereas for
a bound electron the force acting on it is a superposition of the incident and
atomic fields
𝐅(𝐫, 𝑡) ≈𝑒𝐄(𝐫, 𝑡) −∇𝑉({𝐑}, 𝐫, 𝑡).
Here 𝑉({𝐑}, 𝐫, 𝑡) is the atomic potential depending, besides spacetime
variables 𝐫, 𝑡, on a set of atomic parameters denoted by symbol {𝐑}. When
external fields are much lower than the atomic Coulomb fields, 𝐸𝑎𝑡~𝑒/𝑟𝐵
2,
the force acting on electron is mainly determined by the atomic potential
and the time interval 𝜏 of integration in (5.30) is essentially determined
by atomic (or interatomic) parameters i.e., is of the order of atomic time
𝜏𝑎𝑡~ℏ/𝐼~10−16cm where 𝐼 is the typical atomic energy of the order of
ionization potential, 𝐼~𝑒2/𝑟𝐵. Physically, this is the time of the atomic
velocity change, 𝑣𝑎𝑡~𝑟𝐵/𝜏𝑎𝑡~𝑟𝐵𝐼/ℏ. For a free electron, the integration
time interval is of the order of the velocity relaxation time (see some details
in Chapter 9).
There may be situations when one has to take into account initial
correlations between the particles entering the medium or the domain
occupied by a field at times 𝑡0𝑎→−∞ (𝑎 enumerates particles). Then simple
averaging or taking the limit 𝑡0𝑎→−∞ does not hold and should be
replaced by an integration (convolution) with the correlation functions. One
can see a simple physical example in [151].
In strong electromagnetic fields such as produced by lasers, polarization
and currents in the medium may become nonlinear functions of the field. This
nonlinearity can be easily understood by using one of our favorite models:
that of an oscillator, but a nonlinear one (see below). For example, in the case
of quadratic nonlinearity we shall have
𝑗𝑖
(2)(𝐫, 𝑡) = ∫𝜎𝑖𝑗𝑘(𝐫, 𝐫1, 𝐫2; 𝑡, 𝑡1, 𝑡2)𝐸𝑗(𝐫1, 𝑡1)𝐸𝑘(𝐫2, 𝑡2)𝑑3𝑟1𝑑3𝑟2𝑑𝑡1𝑑𝑡2
and, in general,
5.9
Phenomenological Electrodynamics
265
𝑗𝑖
(𝑛)(𝐫, 𝑡)
= ∫𝜎𝑗1…𝑗𝑛(𝐫, 𝐫1, … , 𝐫𝑛; 𝑡, 𝑡1, … , 𝑡𝑛)𝐸𝑗1(𝐫1, 𝑡1) … 𝐸𝑗𝑛(𝐫𝑛, 𝑡𝑛)𝑑3𝑟1 … 𝑑3𝑟𝑛𝑑𝑡1 … 𝑑𝑡𝑛
Continuing this approximation process, we obtain an expansion of the
current129 (or polarization) in powers of electric field:
𝐣(𝐫, 𝑡) = ∑𝑗(𝑛)(𝐫, 𝑡)
∞
𝑛=1
.
Now the question naturally arises: what is the actual (dimensionless)
parameter of such an expansion? Plausible physical discourse gives a cause
for a belief that in wide classes of condensed matter such as dielectric solids
and non-conducting liquids the expansion parameter should be 𝜂𝐸=
𝐸/𝐸𝑎𝑡, where 𝐸𝑎𝑡~𝑒/𝑟𝐵
2~109𝑉/cm is a characteristic atomic field. Indeed,
for 𝜂𝐸≪1 an external electromagnetic field cannot strongly perturb the
matter, and an expansion on 𝜂𝐸, with retaining only a few initial terms,
must be fully adequate. However, one has to bear in mind that although
parameter 𝜂𝐸 can be used for expansion over external field in many types of
condensed media and even in gases, there exist other types of media, in
particular containing free charges such as plasmas, metals, and even
semiconductors, where parameters of nonlinearity can be different. At least,
one needs to carry out a special analysis for each of these media. This is an
important point, and we shall return to it later.
129 Recall that here we are talking about microscopic currents, also called microcurrent,
induced by an external macroscopic field in matter.
266
The Quantum World
6 The Quantum World
The quantum world is generally considered to be weird, indeterministic,
counterintuitive, hard to visualize and even irreducibly requiring a human
observer. I think this is a philosophical exaggeration, a projection of human
anxieties on physical problems. All these fears are not required by
experimental results in the quantum domain. The quantum world is nothing
more than a collection of very handsome mathematical models which are, by
the way, simpler than mathematical models of classical mechanics. In this
chapter, we shall discuss some of these quantum models, their origins and
outcomes, and we shall see that there is nothing weird about them. For
example, an electron is, in quantum mechanics, neither a particle nor a wave
in the conventional (classical) meaning. It is a mathematical model, a formula,
in the sense that its properties are described by the quantum mechanical
equations. One can build up other models for electron, for example,
corpuscular as in Newtonian mechanics and in Bohm’s version of quantum
mechanics or wavelike as in de Broglie version. Nevertheless, standard
quantum mechanics based on the Schrödinger equation is sufficient for
practical computations so there is usually no need to resort to other models.
Quantum mechanics, even in its simple form of the Schrödinger equation,
is so far typically alien to computer scientists or to mechanical engineers, but
these two categories of researchers are in general fully satisfied when one
says that the classical mechanics they think they understand can be derived
from the quantum theory as a particular case for “heavy” bodies. Quantum-
mechanical modeling stands out in many scientific areas because it is based
on ab initio calculations i.e., attempts to tackle the problem from first
principles. Ideally, quantum-mechanical modeling does not rely on
phenomenological parameters, with the exception of fundamental constants
and, possibly, the atomic number of constituent atoms. This independence
from external, hand-introduced adjustment, is intended to ensure that
preconceptions and bias should play a minimal role in the final results
produced by quantum modeling. Yet ab initio calculations have also the dark
side which may be designated by a single word: complexity. Firstly, the high
complexity of ab initio quantum-mechanical calculations enables one to
obtain exact analytical solutions only in rare specific cases and, secondly,
when one is forced to resort to numerical techniques, complexity leads to
inhibitive scaling of computational costs and required resources. And if the
computational costs grow exponentially with, e.g., the size of the system
under study, such modeling efforts may become too demanding to be of
practical use - even despite the rapid progress of computer technology and
low prices on resources. This fact justifies the retreat from head-on
numerical solutions by using well-controlled approximations, also of
analytical nature. Historically the first such approximations were
perturbation and semiclassical methods. Both methods are examples of
5.9
Phenomenological Electrodynamics
267
asymptotic expansions, although of different, sometimes even opposite kind,
and we shall discuss these methods in some detail a little later.
Quantum mechanics is quite similar to the classical field theory discussed
in the previous chapter because these two theories use the same wave
equations. Moreover, both theories admit a duality i.e., a kind of
correspondence between particles and fields which may be interpreted as a
semiclassical back-and-forth migration between mechanics (the study of
particle motion) and field theories (the study of wavelike excitations).
Although quantum mechanics is basically of the mathematical modeling
character, it is more than any other discipline a mix of concepts and
calculations. Sometimes the concept component prevails, then quantum
mechanics becomes overburdened with philosophy; in other cases
mathematical techniques may dominate, and then mathematical structures
and occasionally some fancy mathematical models protrude. From a narrow
pragmatic viewpoint, quantum mechanics is reduced to the calculation of
states and their eigenvalues, evolution of states with time, and transitions
between them (including scattering). Thus, quantum mechanics is basically
rather simple as compared, e.g., with classical mechanics, which does not
exclude the fact that specific calculations may be very difficult, as each person
who performed quantum-mechanical computations might have noticed.
Nevertheless, it would hardly be possible to expose the whole of it in a single
chapter. Therefore, I shall restrict myself to the discussion of the main
quantum-mechanical structures. I do not use the term “mathematical
methods of quantum mechanics” as I think it is a tautology (like, e.g., new
innovation) since the whole quantum mechanics is a mathematical model of
reality. The reader can find all the answers to arising - quite naturally -
questions in any textbook on quantum mechanics, for instance, in those that
I liked most and referred to in this text.
Contemporary quantum physics tends to be constructed as an axiomatic
theory i.e., based on a set of mathematical axioms. This methodology has both
advantages and drawbacks. The good side is that the axiomatic approach is
convenient in order to derive powerful mathematical tools out of the set of
axioms. The bad side is that the connection of these tools with reality is rather
weak or may be totally absent. Besides, there exists a hidden danger to take
as axioms such statements that cannot be directly tested in physical
experiment as, e.g., in string theories. In such cases, there may be, at best, the
hope to test some remote consequences. If the base quantum postulates130
cannot be directly physically tested, there exists a hazard of putting too many
of them or making the system of postulates inadequate in some other way
which can later result in contradictions. Nevertheless, the most fanciful
starting assumptions are allowed in mathematical modeling - it’s a free craft.
Despite
a
comparatively
primitive
mathematical
scheme
of
nonrelativistic quantum mechanics, in numerous discussions with my
130 I make no difference between axioms and postulates in this context - paying
attention to such a distinction would be too philosophical.
268
The Quantum World
colleagues131 I have found that there is a significant confusion about many
aspects of quantum mechanics, which mainly arises in connection with the
physical interpretation of the formalism. The majority of theoretical
physicists probably think nowadays that the so-called physical intuition is
entirely based on classical concepts and is useless in quantum theory.
Therefore, quantum theories can be constructed on a more or less arbitrary
set of axioms, the sole essential requirement being that such a system of
axioms should be non-contradictory. In this way, one can construct a number
of theories for different sets of axioms being put in the base and chosen more
or less arbitrarily. This approach is close to mathematical model building. The
ultimate goal is very pragmatic: one should not care about the connection of
the mathematical scheme with reality, the only thing that matters is that the
constructed models or the consequences of the axiomatically derived theories
could quantitatively describe a wide set of experimental results. It is, by the
way, this approach that has led to changes in the notion of a physical
explanation. In modern physics, such an explanation is synonymous with
providing a mathematical description which is not necessarily the same thing.
Although I do not want to join the philosophical ritual dance around
quantum mechanics, I still decided to include the discussion of its various
interpretations which is by necessity an interpolation between mathematical
modeling and philosophy. To better illustrate the ideas involved in the
development of quantum mechanics I allowed myself to make a brief
historical overview. There exist many exhaustive sources on the conceptual
development of quantum mechanics (see, e.g., [137]), but I, nonetheless, have
found some unexpected items which I would like to share with the reader.
6.1
In the Middle of Revolution
Quantum mechanics exists less than a hundred years, i.e., this discipline is
much younger than, say, Newtonian mechanics or hydrodynamics. One
cannot exclude the possibility that the current formulation of quantum
mechanics is far from its final form; it is still in the process of development
and may be modified in such a way as to easily construct such models that
appear ambiguous or even incorrect to a number of today’s researchers. For
example, can quantum mechanics treated as a probabilistic theory describe
unique objects such as the entire universe? How in general can one correctly
handle isolated events and stand-alone objects in standard quantum
mechanics? However, the example of more ancient and stable disciplines such
as fluid dynamics or classical mechanics shows that there are chances that a
discipline does not change considerably. Probably, the necessary condition
for the stability of a discipline is wide acceptance of its language, including
interpretation in lay terms, which lacks in quantum theory. If one tries to
interpret quantum mechanics in common terms, many paradoxes arise,
which produces a feeling of dissatisfaction by many physicists. It is well
known that even founders of quantum mechanics including Einstein, de
131 That was rather long ago: discussions on physical Weltanschauung are popular
mostly among young people. Physics is in general a “game for the young”.
6.1
In the Middle of Revolution
269
Broglie and Schrödinger did not consider quantum mechanics a complete
theory, because the reality it described seemed to them strange and devoid of
simple physical interpretation. Indeed, standard quantum theory is
formulated in such a way as to produce only probabilities, which very fact
Einstein did not want to accept. Besides, there appeared to be mathematical
divergences between scientists on how to interpret these probabilities. The
probability theory was devised to formalize experimental results when there
is a lack of information. Is information by default missing in quantum
mechanics? Even today, some researchers are convinced that quantum
mechanics does not provide a full description (see below section on Bohmian
mechanics). There are other things - actually, all of them interrelated - that
can frustrate a philosophically minded physicist, even in case she/he is
brilliant in applying mathematical tools of quantum theory. What is the
uncertainty principle, can one explain it without using central moments of
operators defined on unitary space? Mathematically, it is not difficult to
demonstrate that the average spread of values produced by non-commuting
operators (in quantum language - observables) cannot be made arbitrarily
small simultaneously, but why does it restrict the experimental accuracy?
Some people who are accustomed to quantum language think that the answer
to this question is trivial, yet personally I think this issue is quite intricate; it
is connected with the notions of coherent and squeezed states and touches
such profound question as the nature of the photon. Furthermore, conceptual
trickery of quantum measurement theory seems to me especially bizarre.
How can it be that physical reality should depend on the manner of
observation? Is there anything, according to quantum mechanics, if one does
not observe at all, in the limiting case when there are no humans in the world?
What is then the so-called reality?
All these and similar questions are well known and usually discussed by
philosophers, whereas physicists tend to laughingly disregard them. This
attitude probably stems from the substantial effectiveness in explaining
experimental results in microscopic physics by means of quantum theory. I
remember that professor V. M. Galitskii, an outstanding theoretical physicist
who could at once deeply understand almost any problem, recommended to
his students that they should for a while forget all the questions of
interpretation that could inevitably arise at the very start of studying
quantum mechanics. A person should not, said Galitskii, pay attention to the
philosophical issues before she/he have mastered the mathematical tools of
quantum mechanics, and then there are the chances all possible perplexity
would somehow fade away. I fully agree with him.
Quantum mechanics has currently become a very successful engineering
discipline whose foundation - condensed-matter physics and in particular
solid state theory - may be regarded as specific applications of nonrelativistic
quantum mechanics. All these (and other) loud successes tend to shade
inconsistences in current formulations and, especially, interpretations of
quantum theory. Physicists are reluctant to invent a new theory that would
mostly satisfy philosophers who did not, as physicists used to think,
270
The Quantum World
contributed much in knowledge, when the current theory seems to work
perfectly well. Indeed, why should they?
However, it is curious that quantum theory is more than one hundred
years old, but we still need to ruminate about quantization. Unfortunately,
there are up till now many unclear issues which must be thought over. What
are the peculiarities of quantum mechanics which prevent it from being
interpreted in the classical sense, without incomprehensible measurement
paradigm, i.e., separation of the world into two subworlds: active observers
and passive systems to be observed? Even if we take the relatively simple
Schrödinger formulation, which is a mathematical model based on the scheme
of boundary value problems (see below), can we regard the wave function as
a field distributed in space like any classical field, for instance,
electromagnetic field?
We may put aside some profound gnoseological considerations and take
up several formal grounds precluding the possibility of classical
interpretation of quantum models. I counted five underlying reasons
hampering interpretation of quantum mechanics in classical manner.
1.
Standard quantum mechanics uses the tools of linear operators in
Hilbertspace,
with
canonical
transformations
of
wave
functions
corresponding to change of bases, fully analogous to the Fourier transform. I
have already mentioned that choice of a specific basis is not the natural way
for mathematical description of a physical state. Manifestation of this
independence on basis in quantum theory consists in the fact that
transformed (in many simple cases, just Fourier transformed) wave functions
are equivalent to the coordinate-dependent wave function and all of them
describe the same physical state. Not only the squared module of the initial
wave function 𝜓(𝐫) (in the coordinate representation) has a physical
meaning, but also all the transformed wave functions corresponding to other
representations. We shall later illustrate this fact by examples of working in
momentum representation.
2.
In the coordinate representation the wave function is a complex
valued function defined on a space-time domain. This implies that one should
invent a certain procedure to obtain observable quantities, which are real, by
using the wave function. Already this fact leads to ambiguities (see below on
the Bohm’s version of quantum mechanics). Furthermore, even in the
simplest case of a single particle, the wave function does not necessarily exist
and it does not always change according to the Schrödinger equation (or even
according to other evolution equation of similar type). In the quantum
measurement situation the initial wave function is simply crossed out and
replaced by some other. This ugly procedure is traditionally called “reduction
of the wave packet”, although neither the initial nor the secondary wave
function may have the wave packet form. Such instant change of the wave
function is hard to reconcile with the concept of the classical field.
3.
Many-body problems in quantum mechanics substantially differ
from the same kind of problems in classical mechanics. Quantum mechanical
6.1
In the Middle of Revolution
271
many-body problems132 are characterized by peculiarities not allowing one to
reduce such problems to the motion of individual particles (quasiparticles
and elementary excitations are not individual particles). Moreover, these
peculiarities prevent formulating the many-body problem as that for a field
in the Euclidean 3D space. Thus, if a complex system consisting of many
particles is described by a wave function taking into account all the particles,
one cannot ascribe separate wave functions to the individual particles.
One more specific feature of quantum mechanics cardinally distinguishing
it from classical consists in the notion of spin and one of its nontrivial
implications - the connection of spin with statistics for identical particles. For
fermions, i.e., particles with half-integer spin, which obey the Pauli exclusion
principle and, in the many-body state, are described by anti-symmetric wave
functions leading to the Fermi statistical distribution, there exists a distinctive
quantum interaction that cannot be reduced to the usual force interaction in
the Euclidean space. Another kind of interaction that also cannot be reduced
to the classical forces exists between bosons - the particles described by
symmetric wave functions and subordinated to the Bose statistics. Recall that
the matter surrounding us consists of fermions and its stability is in fact due
to the specific quantum interaction. Forces between particles, i.e., field
interactions including classical fields such as electromagnetism, are
transferred by bosons, which behave totally different from fermions.
4.
In the case of a complex multi-particle system, its wave function, even
in the coordinate representation, depends not on three Euclidean coordinates
but on a set of parameters corresponding to quantum degrees of freedom.
Therefore, the wave function describing the system is a complex-valued
function defined on a certain multidimensional domain which does not
necessarily lie in the real physical world.
5.
Classical physics is implicitly based on the assumption that one can
obtain information about the system infinitesimally influencing it. This is in
general not true: information always has a price, and a device133 measuring an
observable destroys the initial state and produces a new one. In classical
mechanics one can neglect the device-object interaction and consider that
observation plays no role, thus talking about the object state of motion
irrespective of observation means. Therefore, by the classical description one
132 The two-body and Kepler problem are treated separably. There is also a quantum
specificity in the two-body problem.
133 The device is traditionally understood as a physical system that can, on the one hand,
interact with the quantum object producing some observable reaction and, on the other hand,
admits, with some sufficient accuracy, the classical description. Therefore, the information
filtered by the device does not require further conversion and can be directly read out by humans.
It is immaterial whether the device is engineered by humans or is a lucky combination of
classically described natural circumstances convenient for observations, where the quantum
object is placed. I don’t consider this definition fully correct since in it the device is, by default,
classical and macroscopic, whereas it is not necessarily the case. The device is a physical system,
and nobody cares whether it is quantum or classical. “Why must I treat the measuring device
classically? What will happen to me if I don’t?” - these words are ascribed to E. P. Wigner, the
great Hungarian/US physicist and mathematician, although I, honestly speaking, failed to find
them in Wigner’s works. Moreover, “quantum object” is not identical to “microobject”.
272
The Quantum World
may understand the one with the disturbance introduced by observation
being disregarded 134 . Accuracy of such a description is limited by the
uncertainty relations. The measurement procedure is characterized by a
certain probability, which in standard quantum mechanics is mathematically
described by the squared module of the scalar product of the initial and final
wave functions. However, human intuition being trained in the world of
classical objects typically fails to accept the fact that an observed object is
necessarily disturbed by the measuring device. This intuitive perception leads
to inadequate conclusions.
Due to these and other specific features of quantum mechanics, all never
ending attempts to interpret quantum theory in the classical sense remain
deficient. Interpretation of Bohr’s ideas as reflecting an extreme positivism
produced a reaction disclaiming, for the sake of “realism”, mathematical
models based on the wave function. One might notice an artificial character
of many such attempts (starting from de Broglie) and some lack of heuristic
value: in contrast to the adherents of Copenhagen interpretation, authors of
allegedly “classical” variants of quantum mechanics seem to be reluctant to
solve radically new physical problems. Rather, the treatment of quantum
problems by means of classical considerations is commonly tuned to the a
priori known results from standard quantum mechanics. One cannot be
surprised, for the interpretation in the spirit of classical “realism” is probably
quite a narrow view. To impose on reality, contrary to all existing evidence,
uniquely the rigid deterministic type of evolution, while refusing to admit a
more general form of probabilistic rules, means to follow the dogmas and not
the laws of nature. One might recall in relation to this that probability theory
may be considered to generalize classical analysis, with statistical
distributions being reduced to deterministic functions when variance (and
other higher moments) are striving to zero.
The tendency to reject probabilistic interpretation based on the wave
function is illustrated today by the Bohmian version of quantum mechanics,
which provides the classical formulation based on trajectories derived
entirely from the Schrödinger equation. The price to pay for introducing
classical notions such as individual trajectories into quantum theory is rather
high: one must also introduce some unknown and unmeasurable quantity
known as the quantum potential, which has some weird properties (non-
locality, non-diminishing with distance, etc.). Many physicists do not like
“classical” constructions in quantum mechanics - of course, I do not mean here
semiclassical approximation (or quasiclassics, as we used to call it in Russia),
which is a powerful asymptotic theory to be discussed later in some detail
(see 6.9. Quantum-Classical Correspondence). Speaking of individual
trajectories, I did not imply here the Feynman path integral which is probably
the best quantization means whose value is especially significant in quantum
field theory.
134 An element of observational influence still remains: one must account for a given system
of reference.
6.1
In the Middle of Revolution
273
We have already seen that interaction with measuring devices in
quantum mechanics required new methods of description of quantum objects
introducing a new element of relativism, in this case with respect to means of
observation. One can remark that this loss of absoluteness does not destroy
objectivity: one must only specify the accuracy. Likewise, a classical
trajectory, while being considered quite objective, acquires a definite
meaning only when the coordinate system is fixed.
One can recall in this connection that in the early electromagnetic theory
created by M. Faraday in the first half of the 19th century the concept of field
lines connecting positive and negative charges or magnet poles was
predominant. Faraday considered these lines to be real and carrying electrical
and magnetic forces. It is only after J. C. Maxwell constructed his classical
electromagnetic theory (improved by O. Heaviside) based on partial
differential equations that the field lines became unnecessary or, at least, of
secondary nature to the electromagnetic field. In superconductivity models,
magnetic field lines may become discrete - each line carrying a quantized
amount of magnetic flux (see Chapter 9), this image was a prototype for string
theory. Since the magnetic field is just one component of the general
electromagnetic field, one can imagine also quantized lines of electrical field
or other gauge fields (see, e.g., [130] on Wilson loops). Nowadays, in string
theories one can observe a revival of the field lines; actually the strings
themselves may be - to a certain extent - interpreted as field lines. Thus, the
picture of lines hints at a possible duality between fields and trajectories, and
possibly future versions of quantum mechanics will fully exploit this duality.
In my understanding, the relation of quantum and classical mechanics is
not in formulating quantum concepts in classical language, an example of
which is the alleged possibility of imposing trajectories on wave-like
solutions, but in the correspondence principles and respective limits. The
“particle-wave” duality is just the philosophical expression for the
superposition principle, familiar already from elementary linear algebra.
Indeed, the superposition principle requires some linear (vector) space which
necessarily has a dual. Besides, what one really needs, in my opinion, is to find
a rapid and uncomplicated connection between the abstract linear operator
theory in Hilbert space and application-oriented prescriptions of quantum
mechanics. Operator theory satisfies mathematicians, but appears to be too
detached from real-life tasks in the engineering world. On the contrary,
intuitive engineering models appeal to people engaged in “practical”
problems, but mathematicians and even theoretical physicists tend to
uncompromisingly reject such intuitive models. Quantum mechanics has
acquired many aspects of an engineering discipline; in such a discipline it is
especially important to see how apparently different branches can be joined
together to produce new technologies and products.
The ingenious people who formulated the initial versions of quantum
mechanics such as Bohr, Heisenberg and Schrödinger probably admitted that
quantum mechanics gives an incomplete description of reality. My impression
is that these people, while being also deep thinkers, supposed that if quantum
mechanics was to be extended was unlikely that it would be done in isolation
274
The Quantum World
from other fields of science. It was not accidental that these creators of
quantum mechanics tried to transcend the boundaries of compartmentalized
disciplines. Bohr was much of a philosopher and it is well known that he was
strongly influenced by S. Kierkegaard (see, e.g., the Wikipedia material
http://en.wikipedia.org/wiki/Niels Bohr). In fact, this philosophy is, very
crudely of course, the eloquent developments of the obvious thesis that
nature manifests itself to us only through our senses. Heisenberg devoted
much of his life to attempts to construct the unified field theory and new
concepts of the whole physics [119, 124, 125]. Schrödinger who appeared to
dislike the approach to quantum theory formulated by N. Bohr as “Nothing
exists until it is measured” wrote many papers on various subjects including,
as Heisenberg, investigations on unified field theory. He also was famous for
his book “What is life?” [126] which is largely devoted to the extension of
physical models on living systems. Schrödinger seemed to consider the
Copenhagen interpretation with wave-particle duality inconsistent [126] and
that attitude probably caused some frustration and much controversies. It is
not a revolution if nobody loses.
6.2
Some Notes on the Historical Development
of Quantum Mechanics
In order to illustrate the main ideas of quantum mechanics I shall present a
very simplified survey of its development into the current form. I am not going
to follow in this very brief survey the real chain of historical events - there
exist many classical sources on the evolution of quantum theory, (see, e.g.,
[137]), and I have neither inclination nor qualifications to add something
nontrivial from a historian’s point of view. My objective is more modest: to
grope for some logical threads. The two expositions can be quite different.
Before the 20th century, most people believed that classical physics based on
the Newtonian mathematical model for mechanical motion and on the
Maxwell equations for electromagnetic phenomena was tantamount to a
fundamental theory that can describe any natural phenomenon. However, in
the beginning of the 20th century it became clear that classical physics failed,
at least for many microscopic systems, which led to the development of an
alternative mechanical theory - quantum mechanics. There exist many
concepts in the latter that cannot be derived from or even contradict classical
physics. It is curious that P. A. M. Dirac called his famous treatise on quantum
mechanics published in 1930 “Principles” [20] - it seems that he wanted to
imitate Newton.
In my opinion, the genuine understanding of many physical theories is
hardly possible without historical method. This method allows one to trace
the development of ideas and the succession of models leading to the
formation of a working theory. I shall try to bring here a small part of these
creative evolutions, emphasizing some unexpected and even astonishing
attributes that I came across while studying the original papers related to the
initial period of quantum considerations. It is only paying attention to
nontrivial features that may serve as a justification for this section.
6.3
Mathematical Models of Quantum Mechanics
275
To begin with, I have noticed that the number of principal, fundamental
ideas in many disciplines is not as big as it might seem, when a person,
especially a young one, is bombarded with enormous torrents of facts,
concepts, notions, representations contained in a variety of bulky textbooks.
Due to this information overload, people tend to lose orientation and develop
defenses such as reducing the studied material to a bare minimum (arbitrarily
chosen), in particular neglecting historical developments altogether. Yet, the
fundamental ideas of a discipline, even if they are quite complicated (which is
a rare occasion), may be at least apprehensible, and the historical
presentation of ideas can help greatly. This is exactly the case of quantum
mechanics.
It is well known from the works of a number of historians of science (see
e.g., [137]) that what we now call “twentieth-century physics” started from
the Planck hypothesis that emission or absorption of electromagnetic
radiation is not a continuous process, in accordance with classical field theory
(Chapter 5), but occurs in discrete chunks called “quanta”, each having energy
ℎ𝜈, where 𝜈 is the radiation frequency and ℎ is some phenomenological
constant, now known as the Planck constant. The current value of this
constant is ℎ= 6.626 ∙10−27𝑒𝑟𝑔∙𝑠. It is quite interesting how Planck came
to this postulate applying thermodynamics to oscillators [143] - the simple
oscillator model again served a good service, this time in providing an early
basis for the emergent quantum theory.
6.3
Mathematical Models of Quantum
Mechanics
Let us summarize the mathematical content of quantum mechanics in simple
terms. In standard quantum mechanics we describe a system using a complex
Hilbert space ℍ. States 135 of this system are described by vectors 𝜓 in ℍ. The
transition amplitude from a state 𝜓 to a state 𝜑 is 〈𝜑, 𝜓〉136 . It is thus
convenient to normalize states so that 〈𝜓, 𝜓〉= 1 . Quantum mechanical
processes are described by linear operators 𝐴: ℍ→ℍ̃ (initial and final
Hilbert spaces may differ as, e.g., in decay, scattering or light-matter
interaction). Thus, one can see that the gist of elementary quantum mechanics
is really very simple.
Taking again our favorite model of the harmonic oscillator, we can
illustrate these quantum mechanical concepts. The Hilbert space of the
harmonic oscillator may be constructed by starting from the complex vector
space 𝐶(𝑓) and writing a typical vector 𝑓 therein as
𝑓(𝑥) = ∑𝑓𝑛𝑥𝑛
𝑛
.
135 Now the term “rays” is more in fashion.
136 In some texts, especially of mathematical character, the scalar (inner) product is defined in
reverse order so that the transition amplitude is written as 〈𝜑, 𝜓〉. It is not very important, of
course.
276
The Quantum World
The inner product of two vectors, 𝑓, 𝑔 can be defined by the expression
∑𝑛! 𝑓𝑛
∗𝑔𝑛
𝑛
where one assumes that the sum converges. Then the Hilbert space of the
harmonic oscillator may be declared as the subspace of complex space 𝐶
including such 𝑓 that 〈𝑓, 𝑓〉< ∞. Now we can fix a basis in this space, which
may be given by the states
𝜓𝑛≔|𝑛⟩= 𝑥𝑛
𝑛!
These states are orthogonal but not normalized, since
⟨𝑥𝑛
𝑛! , 𝑥𝑛
𝑛!⟩= 1
𝑛!
One may ask, why do we need the factor
1
𝑛! in the inner product, 〈𝑛, 𝑛〉=
1
𝑛!?
As we have already discussed, one can interpret 𝜓𝑛=
|𝑛⟩
𝑛! as the 𝑛-particle
state, i.e., the state where 𝑛 identical particles coexist. The possibility to think
of 𝜓𝑛 as of the 𝑛-particle state is due to the unique equidistant character of
spectrum of the harmonic oscillator, although it is not easily seen in this
formalism (see below special section on the harmonic oscillator).
This question is in fact a matter of convenience and habit, being closely
connected with the choice of space to work with. In many cases, it is more
convenient to remain in the complex vector space 𝐶(𝑓) than to work with the
Hilbert space. On 𝐶(𝑓) we can define two linear operators, creation and
annihilation, usually denoted as 𝑎+ and 𝑎, respectively. Both operators take
elements of 𝐶 to 𝐶, they are respectively raising and lowering operators,
𝑎+|𝑛⟩= |𝑛+ 1⟩ and 𝑎|𝑛⟩= |𝑛−1⟩, adjoint, 〈𝑎𝑓, 𝑔〉= 〈𝑓, 𝑎+𝑔〉, and non-
commutative, 𝑎𝑎+ = 𝑎+𝑎+ 1.
Let us see how this purely algebraic approach is connected with standard
quantum mechanics based on the Schrödinger equation. For the time-
independent one-dimensional harmonic oscillator, this equation reads
(−ℏ2
2𝑚
𝑑2
𝑑𝑥2 + 𝑚𝜔2𝑥2
2
) 𝜓(𝑥) = 𝐸𝜓(𝑥)
It is of course convenient to use the dimensionless form by introducing
the variable 𝑞= (𝑚𝜔/ℏ)1/2𝑥, which gives
6.3
Mathematical Models of Quantum Mechanics
277
(−𝑑2
𝑑𝑞2 + 𝑞2) 𝜓(𝑞) = (
𝐸
ℏ𝜔/2) 𝜓(𝑞)
Now one can write the operator corresponding to linear oscillator as
−𝑑2
𝑑𝑞2 + 𝑞2 = −𝑑2
𝑑𝑞2 + 𝑞𝑑
𝑑𝑞−𝑑
𝑑𝑞𝑞+ 𝑞2 −𝑞𝑑
𝑑𝑞+ 𝑑
𝑑𝑞𝑞
= (−𝑑
𝑑𝑞+ 𝑞) ( 𝑑
𝑑𝑞+ 𝑞) + [ 𝑑
𝑑𝑞, 𝑞]
The commutator [
𝑑
𝑑𝑞, 𝑞] = 1, therefore we get a supplementary term. If the
Hamiltonian consisted only of numbers or functions as in classical mechanics
(physicists used to say “c-numbers” emphasizing their commutative algebraic
properties), then this commutator would be zero and we would have had a
school-time commutative algebraic identity, 𝑎2 −𝑏2 = (𝑎+ 𝑏)(𝑎−𝑏) .
However, this identity is in general not true when we deal with objects other
than c-numbers (scalars from a field), and in particular in quantum mechanics
it is almost never true, so we get additional terms proportional to
commutators. In the case of the linear oscillator, this additional term is equal
to ℏ𝜔/2 since the Hamiltonian for the oscillator may be written as
𝐻= 1
2 ℏ𝜔(−𝑑2
𝑑𝑞2 + 𝑞2) = ℏ𝜔
−𝑑
𝑑𝑞+ 𝑞
√2
𝑑
𝑑𝑞+ 𝑞
√2
+ 1
2 ℏ𝜔= ℏ𝜔(𝑎+𝑎+ 1
2)
where the familiar operators 𝑎+ and 𝑎 may be written in terms of coordinate
and momentum operators in dimensionless form (in “natural units”)
𝑎+ = 1
√2
(𝑞−𝑖𝑝),
𝑎= 1
√2
(𝑞+ 𝑖𝑝)
or, in ordinary dimensional form (𝑝= −𝑖ℏ
𝑑
𝑑𝑥),
𝑎+ = √𝑚𝜔
2ℏ(𝑥−
𝑖
𝑚𝜔𝑝) ,
𝑎= √𝑚𝜔
2ℏ(𝑥+
𝑖
𝑚𝜔𝑝)
The ground state 𝜓0(𝑞) of the harmonic oscillator is defined by the
condition 𝑎𝜓0(𝑞) = 0 which is in fact a differential equation
𝑑𝜓0(𝑞)
𝑑𝑞
+ 𝑞𝜓0(𝑞) = 0,
with the solution 𝜓0(𝑞) = 𝐶𝑒−𝑞2/2 where constant 𝐶 may be found from the
normalization condition, (𝜓0, 𝜓0) = 1, which gives 𝐶=
1
√𝜋. Now, one can
278
The Quantum World
construct the algebra of raising and lowering operators. Define 𝜀=
𝐻
ℏ𝜔, then
we have
𝜀= 𝑎+𝑎+ 1
2
and
[𝑎, 𝑎+] = 1,
[𝜀, 𝑎+] = 𝑎+,
[𝜀, 𝑎] = −𝑎
This is the oscillator Lie algebra - a four-dimensional complex space 𝑉
with the basis {1, 𝑎+, 𝑎, 𝜀} and an antisymmetric operation (commutator)
[𝑓, 𝑔]: 𝑉× 𝑉→𝑉 which satisfies the Jacobi identity137. One can also construct
different Lie algebras by using other bases, e.g., {1, 𝑞, 𝑝, 𝜀} with [𝑞, 𝑝] =
𝑖, [𝜀, 𝑝] = 𝑞, [𝜀, 𝑞] = −𝑖. These are Lie algebras over real or complex numbers
which are not necessarily all isomorphic.138. It may be noticed that in this
formalism, the Hamiltonian is just an element of the algebra; it loses in some
sense its special role as generator of the temporal evolution. One may
consider the described algebra containing four basis elements as a four-
dimensional extension of the simple oscillator algebra (the Lie algebra usually
denoted as SU(1,1)). It is also interpreted as a superalgebra containing two
odd elements, 𝑎, 𝑎+ and two even elements, 𝐼, 𝜀, where 𝜀+ = 𝜀. Oscillator
representations of the Lie algebra SU(1,1) (and its quantum extensions often
called q-deformations) can in fact be constructed in terms of operators
𝑎, 𝑎+, 𝐼, 𝜀 in an infinite number of ways. It is usually convenient to build
simultaneously the linear algebra for bosons and fermions or even a
superalgebra unifying them. From the algebraic viewpoint, bosons are
operators that satisfy the commutation relation 𝑎𝑎+ −𝑎+𝑎= 𝐼 and fermions
are operators satisfying the anticommutation relation 𝑎𝑎+ + 𝑎+𝑎= 𝐼.
One can see that it is due to noncommutativity of coordinate and
momentum that the zero-point energy ℏ𝜔/2 appears in quantum mechanics
and quantum electrodynamics(where it is usually called the vacuum energy).
6.4 The Schrödinger Equation
In the nonrelativistic quantum theory, the fundamental concept is that of a
state which is supposed to allow one to determine for it (to “measure”) the
values of certain physical quantities such as energy, momentum, coordinate,
angular momentum, spin, etc. Quantum states are perceived, mostly due to
abundant mathematical texts, as totally different from classical linear waves.
This leads to some conceptual difficulties which brought about a plethora of
137 Here, actually the commutator [𝑎, 𝑎+] = 𝐼 where 𝐼 is the unit operator, but for our
purposes it is sufficient to consider it just unity from the number field.
138 Some authors prefer to consider three elements 𝑎, 𝑎+, 𝐼 generating the oscillator Lie
algebra, see, e.g., the classical book by A. M. Perelomov [156]. There is also a great diversity of
notations for the Heisenberg-Weyl oscillator algebra.
6.4
The Schrödinger Equation
279
attempts to make the superposition principle of quantum mechanics more
palatable from the intuitive, “physical” viewpoint. It is important that if the
measurement units and the initial (zero) point for each of these physical
quantities have been fixed, the produced results should be real numbers. The
concept of a state appears to be quite natural, a far analogy may be the state
of a person which would allow one to obtain her/his behavioral characteristic
at a given time. The state of a quantum system is fully described by a wave
function Ψ. Similarly to dynamical systems theory, knowledge of this function
at initial moment t0 determines the systems behavior in all future moments.
This evolution is governed by the Schrödinger equation:
𝑖ℏ𝜕𝑡Ψ(𝐫, 𝑡) = 𝐻Ψ(𝐫, 𝑡).
(6.1)
Here operator 𝐻 is the systems Hamiltonian, a fundamental
“observable” acting in the space of states ℍ, 𝐻= 𝑇+ 𝑉, where 𝑇 and 𝑉 are
kinetic and potential energy, respectively. One can notice that for writing
down the Schrödinger equation, a simultaneous physical-mathematical
assumption is implicitly taken for granted, namely that for each physical
system an operator 𝐻 does exist, this operator determining the time
evolution of function Ψ through the above written equation (provided
such evolution is not disturbed by an external influence, e.g.,
measurement). Operator 𝐻 is assumed self-adjoint - this is an important
requirement139. In principle, one can imagine equations of the Schrödinger
type having higher derivatives over time or, to put it differently, Hamiltonians
containing the terms with higher derivatives of the wave function over time.
Then the quantum-mechanical problem may become non-local in time and,
besides, would require the knowledge of higher derivatives of the wave
function at some initial time-point 𝑡0 i.e., quantum evolution would not be
determined by the value of Ψ(𝑡0) . This would, however, contradict the
established “causal” model of quantum mechanics.
In general, solving the time-dependent Schrödinger problem for a
Hamiltonian with an arbitrary time dependence is technically quite difficult.
However, if the Hamiltonian is time-independent, the problem can be
considerably simplified. For time-independent (autonomous, in the parlance
of classical mechanics) problems, we may write a solution to the Schrödinger
equation in the “separated” form, Ψ(𝐫, 𝑡) = 𝑇(𝑡)𝜓(𝐫), according to typical
recipes of classical mathematical physics. Substituting this Ansatz into the
above time-dependent equation and dividing by Ψ(𝐫, 𝑡), we have
𝑖ℏ𝑇′(𝑡)
𝑇(𝑡)
= −ℏ2∆𝜓(𝐫) 2𝑚
⁄
+ 𝑉(𝐫)𝜓(𝐫)
𝜓(𝐫)
Since the left-hand side depends only on 𝑡 whereas the right-hand side only
on spatial variables, they both must be equal to a constant - the standard
139 This requirement can be weakened in certain cases, some of which we shall discuss
in connection with the concept of time in quantum mechanics, see Chapter 14.
280
The Quantum World
reasoning for equations of mathematical physics. Let us label this constant 𝐸.
Then we have two equations that should be solved simultaneously
𝑖ℏ𝑇′(𝑡) = 𝐸𝑇(𝑡)
and
−ℏ2
2𝑚∆𝜓(𝐫) + 𝑉(𝐫)𝜓(𝐫) = 𝐸𝜓(𝐫).
(6.2)
The first equation gives 𝑇(𝑡) = exp(−𝑖𝐸𝑡/ℏ) while the second one
leads to the eigenvalue problem for the Hamiltonian (Schrödinger) operator.
The solution of such a problem leads to the bound state picture and, in
particular, in energy quantization as well as to spectral expansions over
eigenfunctions regarded as vectors in the Hilbert space (i.e., a complete
complex inner-product space). The evolution of Ψ(𝐫, 𝑡) in such a space is
given, as it follows from the first equation, by the expression
Ψ(𝐫, 𝑡) = 𝑒−𝑖𝐸𝑡/ℏΨ(𝐫, 0),
which reflects, here in simple terms, the one-parameter unitary group
property of the evolution operator (below we shall discuss quantum
evolution in more detail). As usual, the meaning of the exponent where an
operator expression is present should be specified. One possible way to define
an operator exponent is through a power series
exp(−𝑖𝑡𝐻) = ∑
(−𝑖𝑡)𝑛
𝑛!
𝐻𝑛
∞
𝑛=0
.
It is clear that in such an approach the Hamiltonian should be assumed
bounded, otherwise the series may be divergent. One cannot, however,
require the boundedness of 𝐻 for an arbitrary physical system. For instance,
it is easy to see that operator of multiplication by variable 𝑥 is unbounded as
an operator from ℍ(ℝ⊮) into ℍ(ℝ⊮) and also the differentiation operator
𝐴𝑓(𝑥) = 𝑖𝑑𝑑𝑥
⁄
is unbounded. The Hamiltonian of a system placed in an
external electrical field may become unbounded140.
So, the wave function Ψ changes with time under the action of the
evolution operator 𝑈(𝑡) = exp(−𝑖𝑡𝐻/ℏ) . However, the bilinear scalar
product of the time-dependent Ψ-function remains preserved in the process of
140 Recall what is understood by the boundedness of a linear operator 𝐴: the latter
is bounded if its norm ‖𝐴‖ ≔sup‖𝐴𝑓‖𝑓∈𝐷(𝐴) < ∞ where 𝐷 is the operator domain. We
shall further assume the following simplified version of the boundedness definition,
usually accepted for Hermitian (symmetric) and self-adjoint operators: an operator 𝐴
will be considered bounded from below if the scalar product (𝑓, 𝐴𝑓) (𝑓, 𝑓)
⁄
≥𝑚 where
𝑚 is finite (𝑚> −∞) and is bounded from above if (𝑓, 𝐴𝑓) (𝑓, 𝑓)
⁄
≤𝑀, 𝑀< ∞ for
every 𝑓. See more on linear operators in Chapter 2.
6.4
The Schrödinger Equation
281
evolution dictated by the Schrödinger equation. Indeed, one can easily prove
the following conservation property
𝜕
𝜕𝑡(Ψ, Ψ) = 0,
where we take the following convention 141 for the scalar product: (𝑓, 𝑔) =
∫𝑓∗𝑔𝑑𝜇. We have
𝜕
𝜕𝑡(Ψ, Ψ) = (𝜕Ψ
𝜕𝑡, Ψ) + (Ψ, 𝜕Ψ
𝜕𝑡) = (−𝑖
ℏ𝐻Ψ, Ψ) + (Ψ, −𝑖
ℏ𝐻Ψ)
= 𝑖
ℏ(𝐻Ψ, Ψ) + (Ψ, −𝑖
ℏ) = 𝑖
ℏ(Ψ, 𝐻Ψ) −𝑖
ℏ(Ψ, 𝐻Ψ) = 0,
(6.3)
if the operator 𝐻 is self-adjoint.
Here, one can make the following observation. Typically, the Schrödinger
equation in the form is directly applied to the simplest systems such as the
hydrogen-like atoms, where 𝐫 in Ψ(𝐫, 𝑡) denotes the position of a single
electron such as the only atomic electron in the hydrogen atom or the outer-
shell electron interacting with the rest of the atom (the atomic nucleus) in the
case of hydrogen-like, e.g., Rydberg, atoms. For such atomic systems, the
Hamiltonian 𝐻 is represented by the sum of kinetic and potential energies for
a single electron, 𝐻= 𝐩2 2𝑚
⁄
+ 𝑉(𝐫, 𝑡) , where 𝐩= −𝑖ℏ∇ is the
momentum operator and 𝑉(𝐫, 𝑡) is the potential energy of electron in the
Coulomb field of atomic nucleus as well as, possibly, in external fields. If the
considered atomic system is not affected by any external fields, then the
potential energy of the electron does not depend on time 𝑡, and such atomic
systems have only stationary solutions Ψ(𝐫, 𝑡) = 𝜓(𝐫) exp(−𝑖𝐸𝑡/ℏ) ,
described above. Here, the “shortened” wave function ψ(r) may be found
from the time-independent Schrödinger equation 6.2, 𝐻𝜓(𝐫) = 𝐸𝜓(𝐫) . As
already mentioned, this equation leads to the Sturm-Liouville problem giving
the discrete values of energy for the finite motion. One may note that the
description of bound states with discrete spectrum is necessarily quantum-
mechanical whereas the free-particle motion in continuous spectrum can, in
principle, be described classically or semi-classically 142. Of course, such
infinite motion can also be portrayed fully quantum-mechanically - the
quantum theory is sufficiently general to adequately describe any kind of
motion as well as the transitions between continuous and discrete portions
of the spectrum. Nevertheless, the classical description, though not
141 This is the “physical” convention; in mathematics, the complex conjugated
product (𝑓, 𝑔) = ∫𝑓𝑔∗𝑑𝜇 is more often encountered. The choice (𝑓, 𝑔) or (𝑓, 𝑔)∗ is of
course inessential. Here, as everywhere in this text I use an asterisk to denote
complex conjugation; in the mathematical literature an overline is more customary.
142 By the way, this is one of the reasons for incessant attempts to formulate the whole
quantum mechanics in the language of a single classical trajectory, although this
language (Bohm’s version of quantum mechanics) is technically maladapted to bound
states problem. See below section on Bohmian mechanics.
282
The Quantum World
necessarily simpler, appeals more to human intuition and therefore seems
easier to interpret and comprehend.
The motion of a free particle does not necessarily imply that this particle
ought to be totally exempt from any interactions For instance, the free
electron may interact with an external electromagnetic field, with the atomic
nucleus from which it has been detached, with other electrons, ions, or atoms
of the medium, etc. The important thing is only that the energy spectrum of a
free particle is continuous (usually taken to be positive, 𝐸∈(0, ∞)) whereas
the energy spectrum of a particle to be found in finite motion is discrete i.e.,
energies of bound states are allowed to admit only certain particular values
separated by finite intervals. Such values are usually taken to be negative.
The Schrödinger equation, determining the behavior of a system in a
stationary field, may be written as
ℏ2
2𝑚∆𝜓+ [𝐸−𝑉(𝐫)]𝜓= 0,
Ψ = 𝜓𝑒−𝑖𝐸𝑡
ℏ
This equation has been explored in countless textbooks and papers, and
I shall not repeat the well-known results, e.g., those for the Coulomb problem
(see [84], ch. 5).
Let us now get engaged in a brief verbal discussion of the Schrödinger theory.
The Schrödinger equation is a linear one which reflects the fact that quantum
mechanics is a linear theory, and each solution for the state of a system can be
represented as a superposition of other states. This superposition principle
allows one to invoke the mathematical techniques of linear algebra, matrix
theory and functional analysis. All that naturally leads to the spectral theory
for linear operators. One can, however, observe that the spectral theory of the
Schrödinger equation seems to be rather non-uniformly developed: for example,
the Schrödinger equation with a monotonicly varying potential has been
studied more thoroughly than the same equation with a bounded potential.
Indeed, one can easily find a lot of results, e.g., for the oscillator or Coulomb
case as well as many related models, whereas for bounded potentials most of
the results have been restricted to the periodic case.
The Schrödinger equation is, at first sight, just another linear PDE.
However, this equation corresponds to a mathematical model conceptually
different from the archetypal classical systems based on field partial
differential equations (see Chapter 5). First of all, the Schrödinger equation
implies a totally different physical interpretation: It basically describes the
motion of a single particle. Of course, the Schrödinger equation can be
generalized to describe many particles involving also interactions between
them, but then we would have more complicated (bilocal, trilocal, etc.)
Schrödinger fields, Ψ(𝐫1, 𝐫2, … , 𝑡), than for a single-particle motion. The
initial idea of Schrödinger was to construct a mathematical model of quantum
mechanics which would give physically, e.g., spectroscopically observed
discrete spectra, based on the Sturm-Liouville problem [211]. It is known that
Schrödinger, while a student at the Vienna University (1906-1910) had
mastered eigenvalue problems in connection with the physics of continuous
6.4
The Schrödinger Equation
283
media. About two decades later, being motivated by some dissatisfaction with
Bohr’s “planetary” model of the atom, he applied his knowledge proposing a
theory of atomic spectra giving discrete - quantized - energy levels as a result
of a Sturm-Liouville procedure. It is for this work (consisting of four papers)
that E. Schrödinger received in 1933 the Nobel Prize, sharing it with P. A. M.
Dirac. Although the linear PDEs leading to the Sturm-Liouville problems had
been extensively studied throughout the 19th century and a lot of knowledge
had been accumulated in the mathematical literature by the time E.
Schrödinger divined his equation (1926), an associated eigenfunction
expansion involved a lot of technical difficulties making practical calculations
very cumbersome in practically important cases such as in chemistry. Thus,
computation of the energy levels for many-electron atoms and molecules is
notoriously hard even today, after all the years of development of quantum
theory and computational techniques. Complexity of the Schrödinger
equation rapidly increases as more particles are introduced in the considered
system. It is important that in distinction to classical systems the number of
degrees of freedom cannot be reduced by imposing constraints. Nonetheless,
from the mathematical point of view, the Schrödinger equation for a fixed
number of particles is described by a sound theory, provided some physically
natural conditions are fulfilled, e.g., energy the Hamiltonian - should be
bounded from below. Recall what is understood by a spectrum. The discrete
spectrum, 𝜎𝑑(𝐻) , of the operator 𝐻 is, roughly speaking, the set of all
eigenfunctions which are discrete points. In other words, any infinitesimal
transition between eigenvalues of an operator is impossible. If one can
continuously go from one eigenvalue to another, i.e., the spectrum of an
operator is not discrete, such a spectrum is called continuous. In addition, the
corresponding invariant subspace (eigenspace) in the case of discrete
spectrum is assumed finite dimensional. In mathematics, there exist more
refined characterizations and classifications of spectra (see below), but here,
for the initial qualitative discussion, these notions seem to be sufficient. A
little further we shall be concerned with some spectral theory for self-adjoint
operators and mathematical requirements to the Schrödinger operator in
slightly more precise terms.
One may ask, how do we know when the spectrum of the Schrödinger
operator is discrete and when it is continuous? This question is not as naive
as it appears. There is an extensive theory of spectral splitting in functional
analysis. But here some primitive considerations will be temporarily sufficient.
Take, for simplicity, the 1d Schrödinger equation
𝑑2𝜓
𝑑𝑥2 + 2𝑚
ℏ2 (𝐸−𝑉(𝐫))𝜓= 0
(see, e.g., [84], §21). In fact, we can efficiently integrate only one-dimensional
differential equations; all multidimensional equations are typically reduced
first to the ones depending on a single variable.
It is curious that the bound state picture inherited from non-relativistic
quantum mechanics has penetrated such modern theories as, e.g., quantum
284
The Quantum World
chromodynamics and string theories. When we say that, for instance, a proton
is composed of three quarks, we implicitly involve the good old bound state
concept. What do the words “composed of” mean? Is it really a quantum-
mechanical binding? The magical words “quark confinement” seem to be
closer to a protective mantra than to a physical model143
6.5 Quantum Tunneling
Quantum tunneling is often considered - mostly in popular science texts - as
one of the most striking manifestations of the “weird” world of quantum
mechanics. In reality, there is nothing weird about the tunneling: it is a natural
consequence of the wavelike nature of quantum motion. The tunneling effect
was suggested in 1928 by G. A. Gamow144 as an attempt to explain the nuclear
decay which was hardly possible to understand from the classical positions
[212]. The probability to find a particle under the barrier is exponentially
small, but still non-zero, which fact was expressed in the Gamow-Condon-
Gurney treatment by a decaying exponential. Due to its inherently wave
nature quantum tunneling may be manifested in a rather broad class of
phenomena including those where elementary particles are not as directly
observed as in nuclear decay (for example, in so-called macroscopic quantum
phenomena [213]). One should also mention scanning tunneling microscopy
(STM) and spectroscopy (STS), which is a considerable achievement in
applied, experimental and engineering physics. Today, both STM and STS
have become important technological tools in nanoelectronics and
biotechnology.
6.6
Quantum Evolution
Assume that a small enough object like a single electron can be well enough
isolated from the environment so that external perturbations may be
143 Importunate demons of ignorance haunt physicists all life long, so there exist a
number of incantations in physics which are lexical forms simulating knowledge, quark
confinement seems to be one such form. Other frequently used examples are
“wavefunction collapse”, “quantum measurement of observables”, “time-reversal
symmetry”, various versions of “extra dimensions”, etc., without mentioning hidden
parameters in quantum mechanics. Currently, these words mainly serve to protect
physicists from the imminent punishment which is due to lack of understanding. With
the accumulation of knowledge, the magical words may acquire an exact meaning, like
the X- rays in the past. The use of mantras or other quasi-magic spells was traditionally
accepted even in comparatively recent biomedical disciplines, where not properly
understood phenomena were often handled solely by introducing new designations. It
is only the ubiquitous spread of new, mostly digital, technology that shattered the
dominance of intuitive, speculative statements. There are of course other candidates for
nonsensical, half-magical credos in other disciplines as well such as “user friendliness”
in computer technology.
144 Probably less known is the paper by E. U. Condon and R. W. Gurney [244] that
appears to be independent from the work of Gamow. All three authors calculated the
energy dependence of the alpha decay half-life, mathematically modeling the decay
phenomenon by a leak of the particle wave function through a classically forbidden area
- potential (Coulomb) barrier. Yet George Gamow was probably the first.
6.7
The Stone Theorem
285
considered negligible. In experimental reality this is very, very hard - to the
point of being almost unrealistic (see also in Chapter 9 on time reversal), but
such assumptions are often made in physical theory. Then such an isolated
object will evolve quantum-mechanically - or with some dose of classicality
i.e., semi classically and in the classical limit even fully classically - for a long
period of time until the perturbations from the outside world destroy the
independent evolution of the object. The outside world is always interfering
with the freedom of the object, and this interference disturbs the individual
character of the object’s evolution. In the quantum case, because of the fact
that the outside world constantly tends, so to say, to poke at the object,
evolution of the latter may even lose its quantum mechanical character; this
irreversible emergence of classical properties because of interaction with the
environment is called decoherence (see below).
6.7
The Stone Theorem
The mathematical background of quantum dynamics, i.e., of quantum-
mechanical time evolution is given by the important Stone theorem [144].
This theorem states that for an isolated quantum-mechanical system
described by a Hamiltonian and possessing the Hilbert space of states ℍ, the
time evolution is represented by a strongly continuous one-parameter
unitary group acting on this Hilbert space. The generator of such group is the
Hamiltonian of the quantum system. Thus, the Stone theorem maps
(bijectively) self-adjoint operators on a Hilbert space to a one-parameter
family of unitary operators, 𝑈(𝑡), 𝑡∈ℝ. The notion of strong continuity
implies that lim
𝑡→𝑡0 𝑈(𝑡)Ψ = 𝑈(𝑡0)Ψ, Ψ ∈ℍ, 𝑡, 𝑡0 ∈ℝ. The idea of the Stone
theorem and of its proof appears rather obvious to any person who had ever
studied quantum mechanics. Assume that for each 𝑡∈ℝ, 𝑈(𝑡) is a unitary
operator on ℍ with 𝑈(0) = 1. Assume also that there exist homomorphisms
𝑈(𝑡+ 𝑠) = 𝑈(𝑡)𝑈(𝑠) for all 𝑡, 𝑠∈ℝ and that for any Ψ ∈ℍ we have
lim
𝑡→𝑡0 𝑈(𝑡)Ψ = Ψ145. Let us define a linear operator 𝐴 on ℍ as follows:
AΨ = −𝑖lim
𝑡→0
𝑈(𝑡)Ψ −Ψ
𝑡
(6.4)
with the domain 𝐷(𝐴) consisting of all Ψ in ℍ for which this limit exists and
belongs to ℍ. Then 𝐴 is a densely defined self-adjoint operator called the
infinitesimal generator of 𝑈(𝑡). Conversely, if 𝐴 is a densely defined self-
adjoint operator on ℍ, there exists a unique continuous symmetry 𝑈(𝑡) for
which 𝐴 is the infinitesimal generator. Such 𝑈(𝑡) is quite naturally denoted as
exp(𝑖𝑡𝐴); thus if 𝜓(𝑡) = 𝑒𝑖𝑡𝐴𝜓0 with 𝜓0 ∈𝐷(𝐴), then 𝜓(𝑡) ∈𝐷(𝐴) for all 𝑡∈
ℝ and is the unique solution to the equation 𝜓̇(𝑡) = 𝑖𝐴𝜓(𝑡) with the initial
condition 𝜓(0) = 𝜓0.
145 This is a specific case of a strongly continuous one-parameter group of unitary maps, in
short a continuous symmetry.
286
The Quantum World
The meaning of this theorem is a one-to-one correspondence between
continuous symmetries 𝑈(𝑡) and self-adjoint operators 𝐴. In the physical
language, this means that the temporal evolution of an isolated quantum
system is determined by its energy observable, i.e., its Hamiltonian. This is the
most fundamental observable determining the space of states of a quantum
system. In some formulations of quantum mechanics, the Stone theorem is
taken as a postulate defining the quantum-mechanical evolution: if in the time
moment 0 the system was found in the state Ψ0 and during time interval 𝑡 it
evolved as an isolated system (e.g., no measurement was performed on the
system), then at the time moment 𝑡 it will be found in the state Ψ(𝑡) =
exp(−𝑖𝐻𝑡) Ψ0 where, according to the usual rules of taking the functions of
linear operators (see Chapter 3),
exp(−𝑖𝐻𝑡) = ∑
(−𝑖𝐻)𝑛𝑡𝑛
𝑛!
∞
𝑛=0
,
(6.5)
with ℍ→ℍ. For standard physical situations (e.g., no decay), operator
𝑈(𝑡) ≔exp(−𝑖𝐻𝑡) is unitary, and the evolution of an isolated quantum
system is entirely determined by the one-parameter group of unitary
operators 𝑈(𝑡), 𝑡∈ℝ. Moreover, since in conventional quantum mechanics
the evolution operator 𝑈(𝑡) is linear, it transfers states (rays in ℍ) into states,
i.e., acts between states and not between arbitrary vectors Ψ. Thus, one
typically assumes in quantum mechanics that the time evolution of the
observables is defined by a unitary automorphism
𝐴
𝑡→𝐴(𝑡) = 𝑈(𝑡)𝐴𝑈+(𝑡) = exp (−𝑖𝐻𝑡
ℏ) 𝐴exp (𝑖𝐻𝑡
ℏ)
(6.6)
6.8
Geometrical Formulation of Quantum
Mechanics
We have seen that the conventional formulation of nonrelativistic quantum
mechanics uses state vectors (rays) and operators on a Hilbert space. More
modern trends, however, require utilizing the concept of fiber bundles and, in
particular, vector bundles. I am not sure there is much more than just fashion
in these formulations, at least as far as standard quantum mechanics goes, and
that they help much in solving quantum-mechanical problems, but
nevertheless I shall try to translate some part of the usual content of quantum
mechanics into the language of these geometric structures.
The idea of representing quantum mechanics as a geometric theory is not
a new one (see, e.g., [102]), originating probably in the works of P. A. M. Dirac.
Roughly speaking, nonrelativistic quantum mechanics can be subdivided into
two parts: time-independent - quantum description of states which may be
called quantum statics and time-dependent - quantum evolution or quantum
dynamics. Both parts may be formulated in geometric terms, however,
specifically for the static part such formulation hardly gives anything new and
6.9
Quantum-Classical Correspondence
287
nontrivial it would be reduced to a description of geometric structures146. On
the contrary, the dynamic part, when reformulated as a geometrical theory,
can be further developed using such concepts as transport along paths and
other notions typical of differential geometry. This can, in principle, provide
a possibility for new results. The general framework of the geometric
formulation is, as usual, to define an appropriate geometric structure by
specifying parallel transport in a suitable fiber bundle, introducing forms and
other convenient coordinate-free tools. In simplest cases, one can consider
quantum dynamics as a linear parallel transport in a vector bundle.
We already know how dynamics is treated in orthodox quantum
mechanics: the evolution between two pure states Ψ1 = Ψ(𝑡1) and Ψ2 =
Ψ(𝑡2) is formulated in terms of an evolution operator connecting these two
states namely Ψ1 = 𝑈(𝑡1, 𝑡2)Ψ2. Here state vectors Ψ1,2 belong to a Hilbert
space which must be identified for each specific system. We may recall (see
Chapter 3) that the Hilbert space is endowed with the a scalar product
(𝜑, 𝜓): ℋ× ℋ→ℂ2.
6.9
Quantum-Classical Correspondence
It is generally believed that the universe is governed at a fundamental level
by quantum-mechanical laws, which are probabilistic and lead to
indeterminism (an electron can be reflected simultaneously from the floor
and the ceiling). On the other hand, in our everyday life we observe
predominantly deterministic laws (Chapter 4). Ordinary objects we deal with
have definite locations and move quite predictably so that we can calculate
their trajectories and future (or past) location with arbitrary accuracy using
classical mechanics. This is clearly not the case in the quantum world. One can
then ask: are there any traces of quantum mechanical indeterminacy in our
classical reality? Or, to put it another way, what are the limitations of the
classical laws that can be traced back to the underlying quantum mechanical
foundation? And why should there be two very different mechanics
depending on the size of the objects being studied, even although we
intuitively understand when we are supposed to apply each theory?
Although both theories may be considered very successful in explaining
phenomena at their respective scales, the relation between them is still a
substantial problem and not well understood. One of the main manifestations
of this problem is the superposition principle - in fact the linearity of quantum
mechanics, which leads to counterintuitive results when applied to
macroscopic objects. The most common example is the poor Schrödinger cat,
a virtual victim of the apparent paradox of linearity.
One of the main tokens of quantum mechanics is the Heisenberg
“uncertainty principle”, which limits the accuracy of simultaneous
measurements of any two conjugated (dual) quantities. Usually, the position
and momentum are considered in this context, but in fact the Heisenberg
uncertainty relation holds for any dual quantities (as it is readily illustrated
by the Fourier transform). The transition to classical measurement implies a
146 For instance, replacement of the Hilbert space by Hilbert bundle.
288
The Quantum World
certain crudity of description such as averaging similar to the Ehrenfest
theorem [84], see below. For example, a coarse description of the position and
momentum of a particle can be modeled by a roughly narrow wave packet
approximating the particle’s classical behavior. We shall discuss some
properties of the wave packets in different contexts in the following sections.
A substantial part of classical mechanics is devoted to the motion of a
rigid body. One may ask, is there a corresponding part in quantum theory?
And in general, is it possible to apply quantum mechanics as a probabilistic
theory to a single object such as, e.g., a planet or to the whole universe (see
Chapter 9). If we establish that yes, it is legal to use quantum mechanics to
unique macro- and even mega-objects, then, provided we believe that the
whole world is governed by the quantum mechanical laws, we must be able
to obtain the classical motion of macroscopic bodies as a limiting case of
quantum theory, including, for example, the planetary motions, Kepler’s laws
and the like. An exhaustive derivation of the classical equations of motion
would then require the assessment of probabilities for classical trajectories
or, in the modern talk, for time histories. Such assessment of probabilities is
more demanding than the standard study of evolution of the expectation
values. Consider, for instance, planetary motion. The center of mass of a rigid
body, say of the Earth, may be regarded as moving in accordance with
Newton’s law for a rigid body (see, e.g., [23]) when one can observe that
successive measurements for the positions of the center of mass of the body
produce results that are correlated according to this law with the probability
close to unity. This is, of course, scholastic, but to compute such probabilities,
without brushing off quantum mechanics because the wavelength is
presumably exorbitantly small and without appealing to intuitive
considerations, would probably require some ingenious ad hoc methods and
considerable effort.
There have been, of course, a great body of discussions on how to obtain
classical deterministic behavior from quantum mechanics. Before we proceed
to calculations, it would be perhaps appropriate to outline the general
philosophy behind such discussions, so I would dare to offer a few comments
on main directions in these discussions. Let us recall only the principal
approaches aimed at connecting the classical and quantum realities. To
observe all such approaches would be an enormous task, and I will not even
try to do it. Especially hard would be to establish a relationship between
individual methods (see, for instance, [116]). Somewhat superficial
observation of the relevant literature shows that the main lines of thought in
the problem of quantum-classical correspondence are centered around 1) the
WKB (WKBJ) approximation; 2) the Ehrenfest theorem; 3) the Wigner-Weyl
distribution; 4) the Feynman path integral (mostly in its short-wavelength
form, e.g., in steepest descent approximation); 5) Bohm’s mechanics. There
are also approaches discussing explicitly quantum noise, but despite a great
number of papers treating quantum noise in laser physics, quantum optics
and, lately, mesoscopic physics, studies of quantum noise in the context of the
quantum-classical transition are comparatively rare, probably because of
their complexity. Nevertheless, mixed classical-statistical equations uniting
6.9
Quantum-Classical Correspondence
289
classical predictability and stochastic noise have been numerously derived in
the linear systems case; in fact, those are the evolution equations for the
probability distributions in phase space such as the famous Fokker-Planck
and Wigner equations. We shall deal with the respective models in Chapter 7.
In this section, I shall briefly comment only on those equations belonging
to the class of stochastic differential equations, which are relevant to the
quantum-classical correspondence. These are just preliminary comments,
and a little further I shall try to give a simplified version of the theoretical
framework for the transition to the quasi-classical domain.
Evolution equations for probability distribution in the phase space were
derived by A. O. Caldeira and A. J. Leggett [113] from the evolution equation
for the Wigner function (see below), in fact using the techniques proposed by
R. Feynman. The model treated by Caldeira and Leggett is perhaps the easiest
possible model for a system embedded in a reservoir, where the observed
system is coupled linearly to this reservoir. Their approach is largely
phenomenological in the sense that the properties of the reservoir are not
derived from the Hamiltonian. In fact, Caldeira and Leggett have discussed the
Langevin equation for linear systems, which reminds us of the treatment of
quantum theory from the point of view of statistical mechanics.
It would, of course, be interesting to provide the consistent dynamical
treatment of the quantum-classical transition for a system interacting with
the environment, but I failed to find it in the literature available to me, see
however [114]. Typically, to consider the decoherence of histories, by which
classical behavior emerges, explicitly treating the histories’ probabilities
seems to be technically quite difficult. Even specific mathematical models
illustrating the transition to the classical limit are rather complicated (see
below).
In the whole variety of approaches to establish the quantum-classical
correspondence, two sharp signals have been produced: the familiar quasi-
classical147 asymptotic theory of the WKB (WKBJ) type (see [84], Chapter 7)
and the new decoherence concept that can hopefully diminish the discrepancy
between quantum and classical descriptions. I shall first briefly comment on
these approaches and slightly touch upon the so-called Gutzwiller formula
derived by a Swiss physicist M. Gutzwiller, the student of W. Pauli, which is
also relevant in the context of quantum-classical correspondence. Afterwards,
we shall discuss the Ehrenfest theorem, the Wigner-Weyl (and analogous)
distributions and the Bohmian version of quantum mechanics. The Feynman
path integral will be reviewed mainly in association with the quantum field
theory where the path integral presents the most popular techniques.
Remarkably, the classical phenomenology dominating our everyday
experience seems to be much more versatile than all the derivations of the
elementary transition from the quantum mechanical to the semiclassical
domain of the motion equation (e.g., of the WKB type). This richness of
147 In the English-language literature the term “semiclassical” is commonly used to denote
short-wavelength asymptotics of the WKB type, whereas in the Russian-language literature the
word “quasi-classics” is almost exclusively in use.
290
The Quantum World
classical phenomena is manifested already in the fact that classical features
and the respective language (Chapter 4) are essentially different from those
of quantum mechanics. Indeed, while studying classical systems we pay no
attention to “observers”. The poorly defined word “measurement”, used now
and then in quantum mechanics, is meaningless in classical mechanics; in the
classical context measurement is associated mainly with metrology. Discrete
symmetries play almost no role in classical mechanics and are of extreme
importance in the quantum theory. Indeed, all transformations that may
prove important for classical mechanics are infinitesimal. Moreover, when
discussing classical behavior, we usually talk about orbits and trajectories,
and not about energy levels (or other quantum numbers) as in quantum
mechanics. One may also note that classical equations of motion are
essentially phenomenological - they are direct mathematical models of
reality, and simply identifying them may be a serious problem (e.g., as in the
case of dissipative systems).
The semiclassical approximation is commonly identified with the regime
of large quantum numbers, although to give an exact meaning to this
statement is not that simple. Quantum numbers are not inherently present in
any classical theory; they are, as we have seen, an attribute of quantum
models. We have to introduce a space of functions with the required
properties, to explore spectral qualities of operators and to declare
appropriate prescriptions to be announced as quantization. Nothing of the
kind exists in classical theories based on a homogeneous manifold called the
phase space.
Nowadays, the key word for the transition to the semiclassical regime
seems to be “decoherence”. The concept of decoherence has been introduced
in the 1980s [107] and advanced in the 1990s [108] as a possible solution to
numerous paradoxes of quantum theory, in particular those stemming from
the superposition principle. The pioneers of decoherence, H. D. Zeh and E.
Joos asserted that macroscopic bodies are open systems continuously
interacting with the environment, and it is this interaction that destroys the
interference terms which obscure the transition to classical theory (see also
[110]). Then the obvious strategy would be to study the concrete decoherence
mechanism on some simple model. The most favored model in physics, as we
have seen, is the oscillator; can the decoherence considerations be tested on
this model? And indeed, this modeling strategy has been pursued by W. Zurek
and collaborators [111] to test the decoherence in the case of a quantum
harmonic oscillator coupled to a heat bath of other harmonic oscillators that
constitute the environment. This is in fact the quantum Brownian motion
model. As I have already mentioned, this model seemed to be one of the
favorite problems for R. Feynman; he used it in his dissertation to illustrate
the Lagrangian method in quantum mechanics.
The standard techniques of quantum mechanics based on the
Schrödinger equation are typically applied to isolated systems, despite the
fact that in certain interpretations one constantly talks about ensembles (see
below section “Do you need an interpretation?”). In practice, however, true
isolation is extremely difficult to achieve for a macroscopic physical system,
6.9
Quantum-Classical Correspondence
291
the latter being in constant interaction with their environment. So, the adepts
of the decoherence theory take the view that physical systems are always in
interaction with their immediate surroundings, are thus never isolated and
hence not Hamiltonian. Such systems are in fact stochastic, and a method of
quantization should be offered without the need of a Hamiltonian.
Respectively, the dequantization process, i.e., obtaining classical limits such
as retrieving classical dynamics from the quantum mechanical equations of
motion should not be relied on Hamiltonian methods. Using the quantum-
mechanical language, one may say that the environment continuously
produces “measurement” over a macroscopic physical system, and it is this
measurement that destroys quantum wavelike interference and only the
classical features are exhibited by the system. This is what is intuitively
understood as decoherence and is considered nowadays as absolutely
essential for our comprehension of a classical limiting
process
(dequantization).
Honestly speaking, personally I do not quite understand this modern
cognitive scheme. Why does one need to consider only open systems to
understand the transition to classical physics (“classicality”, in the modern
talk)? The fundamental dynamics of the system both in the quantum and in
the classical domain is governed by the action 𝑆 or, equivalently, by the
Hamiltonian 𝐻. We have grown up with the usual scheme of quantum
mechanical probabilistic predictions based on the paradigm of Hilbert space,
states as rays, linear operators and their spectra, etc., this scheme is working
beautifully, its interpretation is good enough for me (see the section “Do you
need an interpretation?”) and I don’t understand why we should abandon it
in favor of some kind of random dynamics, though promoted by a number of
very smart craftspeople.
Let us describe the decoherence considerations in some more detail. The
central notion here is a sequence of events at successive time points (again a
similarity with the Feynman’s approach). This notion is usually named as a
quantum mechanical history. What we traditionally denote as a set of classical
paths for the system, in this language is identified as histories having only
negligible interference between each other. One can assign not only
amplitudes, but probabilities to such histories. One can formalize the notion
of histories by introducing the decoherence matrix or decoherence functional
so that when this functional is strictly diagonal, probabilities are exactly
defined and all probabilistic algebra can be applied, e.g., sum rules are
fulfilled. However, there may be obviously approximate decoherence when
the probability sum rules are satisfied only to some finite accuracy
determined by inequalities bounding the absolute value of the off-diagonal
terms of the decoherence functional. To produce concrete results from these
general considerations, one can take some model, e.g., a simple one-
dimensional system with a variety of initial states. For such systems, e.g., for
the one-dimensional oscillator (see [112] and literature cited there), it is
possible to calculate the decoherence functional explicitly and explore the
degree of decoherence by analyzing the diagonal elements of the decoherence
functional representing the probabilities of the classical histories of the
292
The Quantum World
system, and off-diagonal elements showing the limit to which the histories
decohere. Decoherence is telling us that the reduced dynamics of a system
interacting with a typical environment must be essentially classical. In other
words, there may be a certain class of model quantum mechanical systems in
which one is concerned with classical probabilities.
One can then ask: is decoherence equivalent to the crude description
(coarse graining) one uses to get rid of the “uncertainty principle”? Then what
amount of coarse graining is quantitatively sufficient to suppress quantum
fluctuations and obtain the classical equations of motion? Is it the same
amount as needed for decoherence? And what are the quantum corrections
to classical motion equations in terms of quantum correlations that must die
out to ensure the now fashionable decoherence? Do such corrections quantify
deviations from classical determinism? It is interesting that this bunch of
questions seems to be connected with the possibility to disregard the so-
called quantum potential in Bohm’s version of quantum mechanics (see the
respective section below). Intuitively, it must be clear that to the same degree
as the histories decohere, the probability distributions should be centered
around classical paths or, more generally, histories of the system. Then the
distributions of positions and momenta must be probably given by some
version of the Wigner function. Can one evade any conflict between the
general requirements of decoherence and peaking on the classical paths with
exactly defined smearing about them? I don’t know whether there exists a
general mathematically correct answer to the above bunch of questions, yet
one can try to illustrate the connection between coarse graining, not
necessarily uniquely defined, and the degree of decoherence using the
familiar model of the one-dimensional oscillator. For this simple model, one
can imagine at least two kinds of coarse graining, each contributing to the
extent to which the histories of the oscillator decohere. The first kind of
coarse graining is due to imprecise knowledge of the oscillator initial position
and momentum, i.e., smearing in the phase space. Coarse graining of the
second kind is due to coupling of the oscillator with the environment, e.g.,
submerging our oscillator in a thermal bath of other harmonic oscillators (this
model of oscillator interacting with the environment is typically called the
Caldeira-Leggett model [113]) although a similar approach is contained in the
works of Feynman (see, e.g., [44] and [45]). The original motivation for the
Caldeira-Leggett model was the study of dissipation in superconductors,
which may be interpreted as macroscopic quantum systems (see Chapter 9).
This oscillator-based model, though being comparatively simple, clearly
depicts quantum dissipation and thus irreversible suppression of
interference (off-diagonal) terms due to interaction with the environment. I
shall recall the Caldeira-Leggett setting, which in its simplest form looks as
𝐻= 𝑃2
2𝑀+ 𝑉(𝑋) + 𝑋∑𝐶𝑖𝑞𝑖
𝑖
+ ∑( 𝑝𝑖
2
2𝑚𝑖
+ 1
2 𝑘𝑖𝑞𝑖
2)
𝑖
(6.7)
This expression represents the Caldeira-Leggett Hamiltonian where 𝑃, 𝑋
and 𝑀 are respectively the momentum, position and mass of the observed
6.9
Quantum-Classical Correspondence
293
system, 𝑝𝑖, 𝑞𝑖, 𝑚𝑖 and 𝑘𝑖= 𝑚𝑖𝜔𝑖
2 are the momenta, positions, masses and the
elastic constants of the harmonic oscillators making up the environment. I do
not intend to produce all the calculations for the Caldeira-Leggett model now
because our current subject is the quantum-classical transition and not the
properties of nonunitary evolution models. I shall only remark that the
Caldeira-Leggett model is a notable exception, since it allows us to calculate
the evolution of the state of a Brownian particle (in this model an oscillator)
submerged in a thermostat comprised of multiple oscillators, i.e., interacting
with the bosonic reservoir being in thermal equilibrium at temperature 𝑇.
A few words about the term “decoherence”. Remarkably, like many novel
terms
in
physics
(entanglement,
consistent
histories,
classicality)
decoherence is devoid of any exact meaning, being only loosely defined. At
first, the term “decoherence” was used to denote the effect transition to
classicality (should not be at this stage confused with semi-classics) with
vanishing of off-diagonal matrix elements. A little bit later, decoherence has
been applied to emerging “alternative histories” of a quantum system
referring to weakened interference between these histories. Intuitively, these
two meanings appear to be very close, however, they are not in general
equivalent. Mathematically speaking, decay (e.g., with time) of the off-
diagonal matrix elements of some model-based decoherence matrix is not the
same as the diminishing interference between individual “histories”.
Decoherence theory tells us what happens to a quantum system in a
dissipative environment and how such an environment selects preferred
states148.
Nonetheless, for both above-mentioned definitions of decoherence, the
role of intrinsic quantum phase correlations seems to be essential. It seems
intuitively clear that in a coarse-grained picture phase correlations between
alternative histories fade away together with off-diagonal matrix elements in
the semiclassical domain. There are no phase correlations in classical
mechanics.
Comparing the quantum fluctuations ideology implicit in the uncertainty
principle with the classical theory, one might notice that even in the classical
deterministic limit we can encounter an extraordinary sensitivity of some
physical systems to slight variation of parameters, e.g., of initial conditions
(see Chapter 4). This sensitivity observed predominantly in nonlinear
systems is exponential in time and leads not only to instability, but also to
chaotic behavior. In the presence of chaos small fluctuations may be amplified
to such an extent as to result in large uncertainties in the outcome. Chaos
produces indeterminacy of the final state. The small initial fluctuations that
tend to be exponentially amplified with the time may be, in particular,
quantum fluctuations.
Now, let us return to the coarse graining. By a coarse-grained description
one typically understands such a picture when some variables are left
unspecified or averaged (traced) out. Moreover, even those variables that still
148 These may be totally different states, depending on the interaction. For example, they may
be position or momentum eigenstates or coherent states, etc.
294
The Quantum World
are specified may be not defined at each time point or it would not be possible
to determine them with arbitrary precision. For instance, they may be
replaced by some averages as in statistical mechanics. So, one can separate all
the variables relevant to a microscopical (quantum) problem into two types:
variables of one type may be called ignored, variables of the other type are
sometimes called “privileged” or “distinguished”. It is clear that in most of the
physical situations coarse graining based on privileged variables would be
sufficient for physicists as observers of the universe. Indeed, even the utmost
careful observations can produce only a tiny portion of all the variables
characterizing the universe (see on this subject in [88]). Furthermore, any
meticulous observation provides the estimates of these variables with some
limited accuracy that defines some corridor of values over which averaging
should be carried out.
Thus, one of the simplest procedures of coarse graining consists in
averaging out some (ignored) variables and paying attention only to the
remaining ones. This procedure reminds us of the transition from the quasi-
microscopic Langevin equation describing, e.g., stochastic dynamics of
Brownian motion to the damped motion equation containing a
phenomenological dissipation coefficient. Here one can intuitively feel a
connection between noise, decoherence and dissipation, all of them being
linked by the procedure of course graining.
Another example of coarse graining is provided by the transition from the
mechanical description to hydrodynamics and thermodynamics (Chapter 7).
Coarse graining in this case is due to deliberate loss of information, and our
phenomenological - hydrodynamical or thermodynamical - variables are just
the weighted averages (e.g., with the distribution function or over some
suitable “physically infinitesimal” volume) of “privileged” variables. When
one can disregard the noise generated by the stochastic Langevin-type force,
the deviations from the classical equations of motion become small, and one
can
observe
the
complete
classical
predictability.
Derivation
of
hydrodynamical equations from physical kinetics 149 necessarily includes
dissipation. The obtained equations of motion contain as unknown variables
not the actual values (realizations) of mechanical quantities, but the
expectation values of hydrodynamical variables. There exists the graining
hierarchy: by subsequent averaging one can achieve further coarse graining.
One can justifiably ask: is the course graining operation invertible or, in other
words, is the refined graining treated as the inverse operation correctly
defined? In the hydrodynamic example, this would amount to obtaining the
Boltzmann equation or the Liouville equation from hydrodynamic equations.
This procedure would require an infinite sequence of these hydrodynamic
equations (e.g., Euler or Navier-Stokes) regarded as statistical moments or
weighted averages of the Liouville-type equations of Hamiltonian mechanics,
classical or quantum. There are, of course, difficulties by restoring the original
Liouville-type equation from the corresponding BBGKY hierarchy (see
Chapter 7) in any finite closure scheme due to the information loss.
149 One of the best sources on this subject is, to my mind, the book by V. P. Silin [72].
6.9
Quantum-Classical Correspondence
295
Hydrodynamic variables such as the densities of mass, energy, momentum,
charges, currents as well as local temperature and entropy have the structure
𝑞𝑖= (𝑥𝛼(𝑡), 𝑄𝑎(𝑡)) where 𝑥𝛼(𝑡) denote “privileged” coordinates such as
center of mass position or orientation of a macroscopic fluid particle, which
is actually a large group of constituent molecules, and 𝑄𝑎(𝑡) are “ignored”
variables such as internal coordinates of the fluid molecules. Tracking the
positions of individual molecules of the fluid by solving the coupled equations
of motion for all of them, though feasible (only approximately) for a small
number (𝑁 1) of molecules - this is the subject of molecular dynamics, would
be clearly impossible and even absurd for any macroscopic system (𝑁 1023).
Instead, an abridged description in terms of a small number of “privileged”
macroscopic variables is provided, with most of the microscopic information
being thrown away. Exactly how this transition to a reduced description is
achieved is the subject of kinetic theory, which is in general very subtle and
here I don’t have much to say about this theory (see Chapter 7 for some details
and references). I would like to make just two remarks. Firstly, the drastic
simplification achieved by the reduced description comes with the penalty:
one has to introduce purely phenomenological quantities such as friction or
viscosity whose values must be taken from experiment. Similarly, we also
have to pay the price for decoherence throwing away delicate quantum
information on interfering paths. Transition to a set of histories that
constitute the quasi-classical domain of everyday experience lead, for
example, to complications with time-reversive behavior (see Chapter 9).
Secondly, any transition to a reduced description is based on some
multiscale scheme and can break down when the respective conditions cease
to hold. So, one should be careful with applying theories or models based only
on “privileged” variables. Example: macroscopic theory of continuum media
may be inapplicable for rarefied gases.
Both the Caldeira-Leggett and Feynman-Vernon models are based on the
favorite model of physics – the oscillator. In both models, which basically
belong to the same class of quantum dissipative dynamical models, a
“distinguished” oscillator is linearly coupled to a large number of other
oscillators constituting an immediate environment, specifically forming an
equilibrium thermal bath characterized by temperature 𝑇. As we know, such
a situation is typically modeled in physics by the density matrix 𝜌(𝑥′, 𝑥; 𝑄′, 𝑄),
which in this model is assumed to be factorized: 𝜌(𝑥′, 𝑥; 𝑄′, 𝑄) =
𝜌(𝑥′, 𝑥)𝜌𝑇(𝑄′, 𝑄) where the coordinates are separated on “privileged” and
“ignored” as above. More specifically, let 𝑥(𝑡) be the privileged coordinate of
the “distinguished” oscillator and 𝑄 the ignored coordinates characterizing
the environment. One can introduce the reduced density matrix by taking the
trace over the environment variables 𝑄, 𝜌(𝑥′, 𝑥) = 𝑇𝑟𝜌(𝑥′, 𝑄′; 𝑥, 𝑄) =
∫𝑑𝑄𝜌(𝑥′, 𝑄′; 𝑥, 𝑄), we shall do it explicitly later when we attempt to find the
parameters (e.g., renormalized frequency) of our distinguished test oscillator
due to its interaction with all others (bath).
Classical mechanics is commonly considered to be the limiting case of
quantum mechanics when the Planck constant ℏ tends to zero. One can hardly
be satisfied with this automatic reply to the question about the quantum-
296
The Quantum World
classical correspondence. The quantity ℏ is dimensional, and the limit to zero
(or infinity) of a dimensional quantity is, honestly speaking, meaningless. It is
therefore hardly possible to consider quantum mechanics as a higher
perturbative approximation of classical mechanics. The ℏ→0 stereotype
mostly serves psychological purposes: it satisfies the need to make quantum
mechanics more comprehensible in customary everyday terms thus making
it cognitively acceptable. However, the relationship between a family of
trajectories, e.g., obtained by integrating the Hamiltonian equations of motion
and starting from some prescribed initial conditions, and the wave function
corresponding to the spectral problem for the Schrödinger operator
(Hamiltonian) is rather intricate and may be intuitively well understood only
in the case of closed periodic orbits - like those of an electron moving in the
Coulomb field. It is this situation that is mainly discussed in standard courses
of quantum mechanics. In this case, the energy eigenvalues are represented
by sharp peaks in the density of states, i.e., the energy level distribution
consists only of a sum of delta-functions.
In textbook quantum mechanics, the discrete spectrum is “physically”
interpreted as a set of energy levels for which an integer number of de Broglie
waves would fit in the considered spatial region, e.g., around the atomic
orbital. In other words, only standing electron waves are allowed in the atom.
Such orbits are considered stable, whereas all other orbits, when the wave
ends up, after a complete revolution, at a different phase point on the wave
are unstable and the respective spectral numbers (in this case energy) are not
allowed. This competition of stable (phase-locked constructive interference)
and unstable (out of phase destructive interference) orbits produces the
discrete spectrum in naive quantum mechanics.
So, the discrete spectrum in orthodox quantum mechanics is interpreted
as a set of energy values for which only an integer number of waves is strictly
allowed in the considered spatial domain. The picture of sharp spikes in the
spectrum of the wave operator is typical of any resonance phenomenon.
However, it is this simple picture that laid the foundation for the development
of quantum mechanics, first by N. Bohr and A. Sommerfeld, later by L. de
Broglie and E. Schrödinger. These “founding fathers” of quantum theory quite
naturally focused their attention on the hydrogen-like atoms, that is on an
electron moving in the Coulomb field. However, this case is quite exceptional,
since only closed periodic orbits are present in the Coulomb problem.
To address this issue, we may start from some basic recollections. When
Bohr was formulating the a priori rules of the old quantum mechanics, he was
handling a very special “planetary” model of the hydrogen atom and,
consequently, considered the periodic classical orbits of an electron pursuing
the finite motion in the 1/𝑟 field of a Coulomb center. As I have just
mentioned, these classical orbits are closed (ellipses), which is rather an
exception than the rule. Indeed, it is a well-known fact that there are only two
central fields in which finite motion occurs along the closed orbits, these are
the attractive Coulomb field, 𝑈(𝑟) = −𝛼/𝑟, 𝛼> 0, and the isotropic oscillator
field, 𝑈(𝑟) = 𝑘𝑟2, 𝑘> 0. In principle, for any central field 𝑈(𝑟) there exist a
collection of conserved aphelion 𝑟= 𝑎(1 + 𝑒) and perihelion, 𝑟= 𝑎(1 −𝑒),
6.9
Quantum-Classical Correspondence
297
vectors. This corresponds to motion in a central field in celestial mechanics
and the Runge-Lenz (Laplace-Runge-Lenz, LRL) vector, which is the
conserved quantity. The reason for its existence as a conserved quantity is
that central force problems are characterized by a more profound symmetry
than SO3, the LRL vector being the manifestation of this fact (see, e.g., [149]).
Nevertheless, I don’t know any good physical interpretation of this conserved
quantity and would be thankful if someone could provide it. It is clear that the
LRL vector is connected with the orbit’s eccentricity and as such it can be used
to calculate the eccentricity. For any central potential there must exist a
number of eccentricity (aphelion and perihelion) vectors, but what happens
when the potential continuously differs from the pure Coulomb? For example,
how does the SO4 algebra change with the variation of the screening constant
in the Yukawa potential? Due to its conservation, the LRL vector commutes
with the Hamiltonian, which gives an additional relationship to the Lie
algebra whose Lie group (SO4) is of a larger symmetry than the rotational
group (SO3). The consequence of this fact is that the spherical motion
problem in quantum mechanics admits, besides SO3, other non-trivial
subgroups of SO4 realized as solutions to the usual nonrelativistic hydrogen
problem, i.e., as modes derived from the spinless Schrödinger equation.
The question of how many vector constants of motion do exist for a
particle moving in a central potential will be discussed later, in connection
with general features of particle motion in a central field. Now we are mostly
interested in bridging over the cognitive chasm between classical and
quantum mechanics. Unfortunately, since these two theories are too different
there does not seem to be the uniquely defined and mathematically
impeccable way to make the transition between them, irrespective of the
manner of understanding the word “transition”. For example, semiclassical
theory is just an asymptotic expansion of the solutions to the partial
differential equations and is thus not equivalent to classical mechanics in the
mathematical sense. This asymptotic expansion corresponds to the limit ℏ→
0, but ℏ is a dimensional quantity so that its limit to zero or infinity is not quite
meaningful. In general, the connection between quantum and classical
mechanics is intricate and not quite clear, there are a lot of beliefs and folklore
about it. P. A. M. Dirac, probably deeply understanding this fact, in his famous
book “Principles of Quantum Mechanics” [20] and especially in subsequent
lectures [139] has replaced proofs of quantum-classical correspondence by a
kind of axiomatic formalization of terminology. This axiomatic or rather
quasi-engineering approach proved to be very successful, but it could not
compensate for the loss of logical connections between classical and quantum
mechanics. It is remarkable that Dirac has intuitively guessed how to
construct quantum mechanics on the base of classical one. One might recall in
this connection that some great scientists, e.g., Einstein disagreed with such
intuitive replacement of logical links between the two models of the
mechanical world. In my view, it is the logical transition between classical and
quantum mechanics that should be improved in future, not the philosophical
issue of interpretation of quantum mechanics which seems to be a pseudo-
problem. To have two great theories for the description of mechanical motion,
298
The Quantum World
with some indeterminacy when to use one and when the other 150 is an
unsatisfactory state of affairs.
6.10 The Ehrenfest Theorem and Its Meaning
The first, simple but tricky, question which is posed by any student beginning
to study quantum mechanics is: in classical mechanics we have all the
quantities depending on the particle coordinate, 𝐫(𝑡) or, for generalized
coordinates, 𝑞𝑖(𝑡), 𝑖= 1, … , 𝑛, 𝑛= 3 for a single particle . What should we use
instead of 𝐫(𝑡) or 𝑞𝑖(𝑡) in quantum mechanics? An automatic answer “the
position operator” that might be given by a person who has already had some
experience with quantum theory is not quite satisfactory for a beginner or at
least incomplete because it implicitly involves a lot of new concepts. Thinking
about this seemingly primitive basic problem leads to a number of surprising
issues that should be elucidated lest one thoughtlessly use quantum
prescriptions. Below we shall tackle some of them, but to prepare ourselves
for surprises we have to discuss some elementary concepts of orthodox
quantum mechanics first.
Assume now that we find ourselves within the fully quantum world. The
most natural way to establish the quantum-classical correspondence would
be probably restoring the classical behavior of the position (or coordinate)
operator, 𝐫, or of generalized coordinates 𝑞𝑖(𝑡), 𝑖= 1, … , 𝑛, (𝑛= 3) starting
from the quantum picture. To this end, one can employ the Heisenberg picture
or Copenhagen interpretation (see the respective section in this Chapter),
𝐴(𝑡) = 𝑈+𝐴𝑈, where 𝑈= exp(−𝑖𝐻𝑡ℏ
⁄ ) is the evolution operator for a “pure”
quantum mechanical system characterized by the Hamiltonian 𝐻= 𝐩2 2𝑚
⁄
+
𝑉(𝑞𝑖). Differentiating this equation, we get the standard expression for the
time derivative of the operator corresponding to the classical function 𝐴(𝑡):
𝜕
𝜕𝑡𝐴(𝑡) = 𝑖
ℏ[𝐻, 𝐴]
Putting 𝐴(𝑡) = 𝑞𝑖(𝑡) (for simplicity, one can consider one-dimensional
motion here, of course), we must arrive at some analog of the classical
150 When one says that quantum mechanics must be used in all cases when atomic (or
subatomic) particles are considered, I would say it is wrong: particles in accelerators are treated
classically with extremely good precision. Moreover, in highly precise scientific instruments such
as mass spectrometers, beta-ray spectrometers and the like, which are based on subatomic
particle motion in external fields, deflection of these microscopic particles are computed with
high precision with the means of classical mechanics and no quantum corrections are included.
In a rather important problem of the passage of atomic particles through the matter, it is very
hard to tell a priori whether a concrete task should be treated classically or quantum
mechanically, this is a purely modeling approach, a matter of arbitrary choice. Furthermore, there
are many attempts to treat the entire universe as a quantum object (see below the discussion of
the wave function of the universe). Should one then treat such subsystems of the universe as
galaxies, stars, planets and other astrophysical objects also with quantum mechanical means.
Does it always make sense?
6.10
The Ehrenfest Theorem and Its Meaning
299
equation of motion 151. One usually considers in this context the so-called
Ehrenfest theorem which is related to the average (expectation) values of
coordinates and momenta. It is useful, however, to trace the correspondence
of the quantum and classical formulas already at the level of operator
equations, before the transition to average values. This question is described
in detail in the textbook by A. S. Davydov [140], chapter 2.
Let us find the quantum-mechanical time derivative of an arbitrary
function 𝑓(𝐫), where 𝐫 is what we call in classical mechanics the radius-vector
of a particle (or the time derivative of 𝑓(𝑞𝑖(𝑡)), where 𝑞𝑖(𝑡) are generalized -
in fact curvilinear - coordinates). In quantum mechanics, we must interpret
the function 𝑓(𝐫) as an operator, so according to the general formula for the
time derivative of an operator we have
𝑑
𝑑𝑡𝑓(𝐫) = 𝑖
ℏ[𝐻, 𝑓(𝐫)],
where 𝑓 can also be a vector quantity of course.
Since [𝑉, 𝑓(𝐫)] = 0 (or, in generalized coordinates, [𝑉(𝑞𝑖), 𝑓(𝑞𝑖)] = 0) we get
𝑑
𝑑𝑡𝑓(𝐫) =
𝑖
2𝑚ℏ(𝐩2𝑓(𝐫) −𝑓(𝐫)𝐩2) =
𝑖
2𝑚ℏ(𝐩𝐩𝑓(𝐫) −𝑓(𝐫)𝐩𝐩)
=
𝑖
2𝑚ℏ(𝐩𝑓(𝐫)𝐩−𝑖ℏ𝐩𝛁𝑓(𝐫) −𝐩𝑓(𝐫)𝐩−𝑖ℏ𝛁𝑓(𝐫)𝐩)
= 1
2𝑚(𝐩𝛁𝑓(𝐫) + 𝛁𝑓(𝐫)𝐩),
(6.8)
since 𝐩𝑓(𝐫) −𝑓(𝐫)𝐩= 𝑖ℏ𝛁𝑓(𝐫). Indeed, (𝐩𝑓−𝑓𝐩)𝜓= 𝑖ℏ(𝛁(𝑓𝜓) −𝑓𝛁𝜓) =
−𝑖ℏ𝜓𝛁𝑓 for an arbitrary 𝜓.
If we take, for instance, the time-dependent operator 𝑓(𝐫) to be the
particle velocity interpreted as a function of position 𝐫, then we may find the
time derivative of this velocity operator
𝑑
𝑑𝑡𝐯(𝐫) = 𝑖
ℏ[𝐻, 𝐯(𝐫)] =
𝑖
ℏ𝑚(𝑉(𝐫)𝐩−𝐩𝑉(𝐫)) = −1
𝑚∇𝑉(𝐫)
and we arrive at the operator equation whose form is exactly the same as
Newton’s equation in classical mechanics, 𝑑𝐩𝑑𝑡
⁄
= −∇𝑉(𝐫). Thus, using only
the commutation relations between coordinate and momenta we obtained
the operator analogs of the classical equations of motion. It is, however,
difficult or at least unconventional for the people trained in classical
mechanics to use directly operator differential equations, so one would rather
invent a scheme that results in the classical equations of motion for the
151 We assume here that the Hamiltonian does not depend explicitly on time. An explicit
dependence of time in the Hamiltonian results in some difficulties which are natural, while in this
case we have a different physical situation - the system is under variable external conditions, e.g.,
in an external field. We shall discuss this situation below.
300
The Quantum World
expected values since one can handle the latter as classical quantities. Such a
scheme leads to the so-called Ehrenfest equations which were obtained by P.
Ehrenfest in 1927 [157]. The Ehrenfest theorem states that the motion of a
quantum particle will be, on average, identical to the motion of a classical one.
The words “on average” in this interpretation are usually understood in the
following way: the quantum particle is represented by a wave packet
concentrated near its classically moving expectation value, and the potential
in which the particle moves does not change significantly over the dimensions
of the wave packet. In this sense, the particle may be considered pointlike
with respect to potential motion despite being spread over a finite packet size.
Such a condition allows one to replace the particle position and momentum
by their expectation values; in fact, the condition of packet “smallness” is
neither sufficient nor necessary (see [164]), for example, one can see that the
Statement of the Ehrenfest theorem is valid not only for the “concentrated”
wave packets.
In order to elucidate the meaning of the Ehrenfest theorem, let us begin
from some simple averaging procedures. To get rid of supplementary but not
essential difficulties we assume again that the Hamiltonian does not depend
explicitly on time. The simplest method to obtain the equations for
expectation values which would correspond to the above operator equations
is to directly average these operator equations, but this procedure is not
mathematically impeccable unless we exactly define the averaging procedure.
If we close our eyes on this pedantry and just take the expectation values of
both sides of the operator equation (with respect to a Heisenberg ket-state
that does not change with time), we obtain
𝑑𝑝
̅̅̅̅
𝑑𝑡= 𝑚𝑑2𝐫
̅̅̅̅̅
𝑑𝑡2 = 𝑚𝑑2𝐫̅
𝑑𝑡2 = −∇𝑉(𝐫)
̅̅̅̅̅̅̅̅
or, for generalized coordinates,
𝑚𝑖𝑘𝑞𝑘
̅̅̅̈ + 𝜕𝑉
̅̅̅̅
𝜕𝑞𝑖= 0,
where 𝑚𝑖𝑘 plays the role of the metric tensor. Here symbol 𝐴̅ means the
quantum averaging over a pure state, 𝐴̅ = (Ψ, AΨ) = ∫𝑑3𝑟Ψ ∗(𝐫)𝐴Ψ(𝐫) or,
in Dirac’s notations, 𝐴̅ = 〈Ψ|A|Ψ〉. In general, averaging and differentiation
over time are non-commutative operations so that we are unable to write
𝑑𝑝𝑑𝑡
⁄
̅̅̅̅̅̅̅̅ = 𝑑𝑝̅ 𝑑𝑡
⁄
(see below). This is just an attempt to replace the quantum
calculations by the classical ones.
It is clear that in the case of many particles one has to replace integration
over 𝑑3𝑟 by integration over hypervolume element 𝑑𝜏 and calculate a
multidimensional
integral.
We
may
perform
here
straightforward
calculations to illustrate the simple bridge between classical and quantum
mechanics based on the Ehrenfest theorem. Incidentally, the coherent states
popular today were first discussed by E. Schrödinger [129] as a by-product of
obtaining a solution for the wave packet whose center moves according to the
6.10
The Ehrenfest Theorem and Its Meaning
301
laws of classical mechanics in the quadratic (oscillator) potential, 𝑉(𝑞) =
1/2𝑚𝜔2𝑞2. We have already touched upon the oscillator model many times
and shall deal with the model of coherent states later in some detail. To find
the “motion equations” for expectation values we may, for simplicity, use also
the Schrödinger picture (recall that the Schrödinger and the Heisenberg
representations are equivalent, see above). Differentiating straightforwardly
the defining expression for 𝐴̅, we get
𝑑𝐴̅
𝑑𝑡= (𝜕Ψ
𝜕𝑡, 𝐴Ψ) + (Ψ, 𝐴𝜕Ψ
𝜕𝑡).
Now, from the Schrödinger equation we have
𝜕Ψ
𝜕𝑡= −𝑖
ℏ𝐻Ψ,
𝜕Ψ ∗
𝜕𝑡
= −𝑖
ℏ𝐻∗Ψ ∗,
where 𝐻= 𝐻+ (we assume the Hamiltonian to be self-adjoint). Inserting
these expressions into that for the derivative 𝑑𝐴̅ 𝑑𝑡
⁄
, we get
−𝑖ℏ𝑑𝐴̅
𝑑𝑡= (𝐻Ψ, AΨ) −(Ψ, 𝐴𝐻Ψ) = (Ψ, 𝐻+𝐴Ψ) −(Ψ, 𝐴𝐻Ψ) = [𝐻. 𝐴]
̅̅̅̅̅̅̅̅.
Putting here 𝐴= 𝑞𝑖 and 𝐴= 𝑝𝑘, we get
−𝑖ℏ𝑑𝑞𝑖̅
𝑑𝑡= [𝐻, 𝑞𝑖]
̅̅̅̅̅̅̅̅,
(6.9)
−𝑖ℏ𝑑𝑝𝑘
̅̅̅
𝑑𝑡= [𝐻, 𝑝𝑘]
̅̅̅̅̅̅̅̅̅.
(6.10)
It is not difficult to demonstrate that commutators [𝐻, 𝑞𝑖] and [𝐻, 𝑝𝑘] give
the derivatives over 𝑝𝑖 and 𝑞𝑘, respectively:
𝜕𝐻
𝜕𝑝𝑖
= 𝑖
ℏ[𝐻, 𝑞𝑖],
−𝜕𝐻
𝜕𝑞𝑘= 𝑖
ℏ[𝐻, 𝑝𝑘].
(6.11)
These relations follow directly from the canonical commutation relations
(CCR), [𝑝𝑖, 𝑞𝑘] = −𝑖ℏ𝛿𝑖
𝑘, and are valid not only for the Hamiltonian 𝐻(𝑝𝑘, 𝑞𝑖),
but also for any polynomial (entire) function of 𝑝, 𝑞.
The above formulas for the dynamics of expectation values were related
to the autonomous case when the operator 𝐴 does not depend explicitly on
time. In case the operator 𝐴 depends explicitly on time, we have only a slight
modification of the formulas namely
𝑑𝐴̅
𝑑𝑡= (Ψ, ∂𝐴
∂𝑡Ψ) + (∂Ψ
∂𝑡, 𝐴Ψ) + (Ψ, 𝐴∂Ψ
∂𝑡) = (Ψ, ∂𝐴
∂𝑡Ψ) −𝑖
ℏ(Ψ, [𝐻, 𝐴]Ψ).
One can introduce here a time derivative operator, 𝑑𝐴𝑑𝑡
⁄
, defined by
302
The Quantum World
𝑑𝐴̅
𝑑𝑡= (Ψ, 𝑑𝐴
𝑑𝑡Ψ).
Then we get the operator identity
𝑑𝐴
𝑑𝑡= 𝜕𝐴
𝜕𝑡+ 𝑖
ℏ[𝐻, 𝐴],
which is an expression of a well-known fact that if some operator 𝐴 does not
explicitly depend on time and commutes with the Hamiltonian, the
expectation value of the respective physical quantity does not change with
time in any state. In such a case, 𝐴 is the quantum integral of motion.
Replacing again 𝐴 with 𝑞𝑖 and 𝑝𝑘, we have the operator equations
𝑑𝑞𝑖
𝑑𝑡= 𝑖
ℏ[𝐻, 𝑞𝑖],
𝑑𝑝𝑘
𝑑𝑡= 𝑖
ℏ[𝐻, 𝑝𝑘]
(6.12)
One can compare ?? with 6.12. Assuming the standard form of the
Hamiltonian,
𝐻= −ℏ2
2𝑚𝜕𝑘𝜕𝑘+ 𝑉(𝑞𝑖),
we obtain the following operator relations
𝑑𝑝𝑘
𝑑𝑡= −𝜕𝑘𝑉,
𝑑𝑞𝑖
𝑑𝑡= 𝑝𝑖
𝑚
(6.13)
which, despite their operator character, have the form of classical
Hamiltonian equations. This fact is often interpreted as a testimony for a close
connection of quantum and classical mechanics.
However, it is difficult to ascribe an exact meaning to the term “close
connection”. The usual beliefs that quantum mechanics incorporates classical
mechanics are, honestly speaking, not well founded. What do we expect when
we think that a new theory includes an old one? That there exists a universal
procedure of retrieving the old theory in its precisely defined application
area, by taking an appropriate mathematical limit. There is no such universal
procedure for the transition from quantum to classical mechanics. As it has
been already mentioned, it does not seem to be possible to recover classical
mechanics in all areas of its validity. The classical limit problem remains an
open question. For example, one of the main problems related to the
relationship between classical and quantum mechanics remains open: can
quantum mechanics recover the classical motion over a single orbit, and in
what limit? There are contradictory statements about this limit, some people
say ℏ→0 is sufficient, others consider the limit of “large quantum numbers”.
There exist also a number of other points of view as to at what level classical
6.10
The Ehrenfest Theorem and Its Meaning
303
behavior follows from quantum theory (e.g., based on stochastic schemes i.d.
the Brownian motion and Fokker-Planck equation). Today, the decoherence
concepts (see above) are of fashion, they did not exist twenty years ago
although the correspondence problem is as old as the history of quantum
mechanics. In different models, a variety of classical limits is discussed. Thus,
in the computation of radiation processes classical behavior is identified
(sometimes erroneously) with the emission of “small quanta with large
probability” whereas rare events of “large” quanta emission (with non-
negligible recoil) testifies to the necessity of quantum description. In the
coherent states theory, there has long been a belief that classical mechanics is
essentially an approximation of quantum mechanics, if one restricts all the
states to coherent states only. In other words, it is the coherent states that are
the cause of classicality. More sophisticated approaches to quantum-classical
correspondence are based on the Wigner and Husimi functions as well as
Weyl-Moyal algebra. The Feynman path integral (see below) in the original
form introduced by R. Feynman [44] may also be considered a bridge between
the two mechanics. Other groups actively promote the Bohm-de Broglie
version of mechanics as a unifying theory. In short, any person doing quantum
computations may have her/his own opinion about the development of
classical behavior from quantum theory. This diversity of opinions indicates
a certain logical inconsistency of the classical limit schemes that do not
necessarily fit together.
The obvious difficulty for the classical limit is rooted in the fact that
quantum physics is largely based on probabilities - only eigenvalues and
expectation values have definite (exact) values. On the other hand, in classical
mechanics a particle is presumed to be at a definite position and to possess a
precise velocity (or momentum) under the influence of a definite force at each
instant of classical time. We have seen above that discussions of the classical
limit of quantum mechanics are frequently based on the Ehrenfest theorem
which deals with the averaged quantum quantities. According to this theorem,
as we have seen, quantum evolution in the mean resembles classical
dynamical behavior. For instance, the Ehrenfest theorem for a system of
electrons in a time-dependent external force gives the complete analog to
Newton’s law of motion for the interacting classical particles, in terms of the
averaged positions of electrons and the averaged force acting on them.
Averaging in this context can be interpreted in the following way. If one were
able to produce a large number of observations over the positions of a particle
described in quantum mechanics by a wave function Ψ, the average of all
observed values of position vector 𝐫 is obtained with the help of the quantum
probability density, 𝑑𝑤= |Ψ|2𝑑3𝑟,
𝐫̅ = ∫𝐫𝑑𝑤= ∫Ψ∗𝐫Ψ 𝑑3𝑟,
that is the weight function 𝑑𝑤= |Ψ|2𝑑3𝑟 defines the frequency of occurrence
of the position 𝐫. Likewise the average momentum is defined by the
expression
304
The Quantum World
𝐩̅ = ∫Ψ∗(−𝑖ℏ∇)Ψ 𝑑3𝑟,
Generalizing these expressions, we may conclude that the average value
of any dynamical variable 𝐴(𝐫, 𝐩, 𝑡) can be written as
𝐴̅ = ∫Ψ∗𝐴(𝐫, −𝑖ℏ∇, 𝑡)Ψ 𝑑3𝑟,
where 𝐴(𝐫, −𝑖ℏ∇, 𝑡) is an operator representing the quantity 𝐴(𝐫, 𝐩, 𝑡) in
quantum mechanics. We have seen that the evolution of the average
momentum, 𝑑𝐩̅ 𝑑𝑡
⁄
= −∇𝑉(𝐫, 𝑡)
̅̅̅̅̅̅̅̅̅̅, looks very similar to the Newton motion
equation, however it does not produce Newtonian mechanics from the
quantum theory. We have seen, for instance, that it is in general incorrect to
define velocity 𝐯 as dr/dt (see, e.g., [84], §19). Due to the fact that quantum
mechanics operates with objects which are spread in space and time, it has
been customary to introduce the notion of a “wave packet” as a mathematical
model for a quantum particle. Below, after we discuss a little more the
significance of the Ehrenfest theorem and its possible generalizations, we
consider wave packets from the viewpoint of the quantum-classical
relationship.
6.11 Wave Packets in Quantum Mechanics
We have seen in the preceding section that the Ehrenfest theorem can be
interpreted in terms of expectation values, namely that the rate of change of
the particle momentum expectation value is equal to the expectation value for
the force acting on the particle. The term “expectation value” has a meaning
only when some averaging procedure is defined. In quantum mechanics, it is
assumed that the state 152 of a particle is described by the wave function
Ψ(𝐫, 𝑡) that may be interpreted as representing a “wave packet”, a kinematic
image associated with the quantum particle motion. In the process of motion,
a wave packet tends to spread with time so that it is not a steady entity. The
Ehrenfest theorem deals with the “center of gravity” of the wave packet,
which has a fair meaning when the packet is unimodal, well concentrated and
in a certain sense small. It would be impractical to define the “center of
gravity” for a plane wave or for a polynomial. If the dimensions of the wave
packet associated with the particle are small, the particle motion, according
to the Ehrenfest theorem, may be approximated by classical mechanics. The
“center of gravity” of the wave packet, 𝐫̅𝑡= {𝑥̅𝑖(𝑡)}, 𝑖= 1,2,3, moves along a
trajectory which is a set of points 𝑥̅𝑖(𝑡) for all values of time 𝑡. One must
remember, however, that the notion of a classical force standing in the right-
hand side of the Ehrenfest equation has only a limited validity since in general
the average value of a function does equal the value of a function at the point
152 We consider only “pure” states in this context.
6.12
Semiclassical Expansions and Asymptotic Methods
305
where its argument takes the average value, i.e.,∇V(𝐫)
̅̅̅̅̅̅ ≠∇V(𝐫̅). For instance,
𝑥2
̅̅̅ = 𝑥̅2 and, more generally, 𝑥𝑛
̅̅̅ = 𝑥̅𝑛 for 𝑛≫ 2.
But what is exactly a wave packet in quantum mechanics? Can it be
interpreted as an appropriate ensemble of classical orbits?
6.12 Semiclassical Expansions and Asymptotic
Methods
One usually asserts that quantum mechanics is logically more fundamental
than classical mechanics, in the sense that classical mechanics can be derived
from quantum mechanics but not vice versa. However, this statement can be
neither proved nor disproved, at least up till now. The classical end of
quantum mechanics is represented, as a rule by the quasi-classical WKBJ
approximation (see above) which is, from the mathematical view-point, just
an asymptotic expansion of wave equations. Quasi-classical approximation is
so obvious that it was used for wave equations long before WKBJ, e.g., by
Liouville, Stokes, Green, and Rayleigh. Recently, the WKBJ theory originally
constructed for linear equations has been extended to the nonlinear
framework, in particular, employed for the nonlinear Schrödinger equation
[214].
In general, for an arbitrary quantum mechanical setting (i.e., for an
arbitrary set of quantum dynamical variables), the ℎ→0 limit is not obliged
to exist which means that the quantum theory does not necessarily imply
classical mechanics in the sense of this limit.
6.13 The Density Matrix and Its Relatives
In the conventional quantum mechanics the state is usually defined either by
a vector (ray) in a Hilbert space or, in a more general case, by a density matrix.
The first kind of a state is called “pure” whereas the second kind “mixed”.
Mathematically, a state Ψ may be called pure if the interpolating relationship
Ψ = αΨ1 + (1 −𝛼)Ψ2,
where Ψ1,2 are two states, 0 < 𝛼< 1, is possible only if Ψ1 = Ψ2.
The dynamics of a finite and closed quantum mechanical system such as
atoms or atomic particles in a given steady external field is represented by a
one-parameter group of unitary transformations in Hilbert space. However,
this simple formalism of unitary time evolution is poorly suited for the
description of systems interacting with the environment, for example, totally
inadequate for the study of irreversible processes.
6.14 Do You Need an Interpretation?
Frankly speaking, personally I don’t. Discussing interpretational issues of
quantum mechanics brings by itself no physical results and usually serves as
a tool to produce publications and dissertations. Nevertheless, some thoughts
about interpretations of nonrelativistic quantum mechanics have driven a
306
The Quantum World
number of experimental groups to perform precise measurements of the
effects typical only of quantum systems [216] and thus clearly discriminating
classical (local) and quantum (nonlocal) behavior. Stimulating such highly
accurate experimental work may be considered, as I see it, of a substantial
value, as the discussions of interpretational issues have indirectly contributed
to physics. This is already not so little.
In other respects, all endless discussions of such issues have only limited
value: a lot of talk about the interpretation of quantum mechanics is just a
tribute to aesthetic preferences, comprehension problems or dominating
group attitudes. This is not physical but philosophical business. The fact that
scientific value of the enormous bulk of words said about how to interpret
quantum mechanics is only very limited is understandable since there seems
to be nothing particularly intricate about quantum mechanics, especially in
its nonrelativistic part, that should be somehow interpreted in a complicated
philosophical way. One can notice that philosophers are fond of “interpreting”,
this process with no tangible results gives them a feeling of self-importance,
since common people willingly listen to them. If you announce a dispute on
such topics as “What is happiness?” or “What is beauty?” which are purely
interpretational questions, crowds of people will be attracted. Such
philosophical subjects stimulate the suppressed mental activity of any person
engaged in the routines of everyday life. If one also has a chance to express
one’s sentiment on an abstract subject where one’s opinion will be no worse
than that of a “renowned” expert, then one’s self-esteem drastically rises. But
the scientific value measured as the attainment of new and quantifiable
answers to precisely put questions has nothing to do with promoting opinions
or building self-esteem. One can incessantly discuss various interpretations of
quantum mechanics with the same productivity as the archetypal question:
“How many angels can be placed on the tip of a needle?”
I have mentioned that there is nothing particularly intricate in quantum
mechanics that would justify a whole philosophical business of
interpretation. Just recall simple experimental facts. There were interference
effects observed in the diffraction of beams both of light and of electrons.153
Such interference effects made people believe that it is the amplitudes
rather than the intensities that are additive, with the squares of the
amplitudes describing the intensity distributions in particle beams. This is a
typical model of any wave process, and it should not apparently involve any
ambiguity or confusion. The latter, however, arises in such situations when
the conditions for using either interfering amplitudes or non-interfering
intensities to estimate the outcome of a particular experiment (most often a
thought experiment) are vaguely formulated. For the conventional wave
processes, say in optics or acoustics, the question of carefully formulated
conditions for the a priori presence (or absence) of interference arises very
seldom: we are used to wave phenomena in everyday life and take many
things for granted. However, in quantum mechanics, especially in the so-called
153 As well as beams of some other particles such as, e.g., slow neutrons.
6.14
Do You Need an Interpretation?
307
quantum theory of measurements, intuitive conclusions about interference
effects may prove wrong (or be declared wrong).
Still, despite the relative simplicity of non-relativistic quantum mechanics
many people consider it to contain grave conceptual paradoxes that remain
unresolved even after almost a century of using quantum concepts. Even
professional physicists have some uneasy feelings about quantum mechanics
because it yields only probabilistic predictions. The standard interpretation
of quantum prescriptions tells us a bizarre story that an electron can be
anywhere until we “measure” it, which implies that observing the system
determines its state. It was always assumed in classical physics that reality
existed independently of the presence or absence of an observer. Humans are
just an accidental result of biological evolution, and the world can successfully
exist without any intelligent observers. One can construct, for example, a
bunch of mathematical models united by the slogan “The world without us”.
Furthermore, can one apply quantum mechanics to the whole universe
treated as a quantum object? Who is going to be an observer in this case?
These and some other related questions comprise a partly philosophical issue
that is called “Foundational problems of quantum mechanics”. Some experts
consider this issue to be very serious and invest a lot of time and effort in
finding a new interpretation. For these researchers, interpretational
problems have become a kind of philosophical obsession. Others think that
the issue of interpretation of quantum mechanics is not important so long as
it gives answers to correctly set physical problems. It is curious that the
discussions on conceptual problems of quantum mechanics (and relativity)
tend to be extremely harsh, easily touching personality issues, with stinging
accusations and even direct insults. The fancy 1920s language of quantum
mechanics contributed much to the confusion154.
Many ongoing debates about the interpretation of quantum mechanics
may be largely reduced to the question of the meaning of the quantum state
vector (or a ray, in the modern parlance). There exist two main views: 1) the
state vector gives the statistical description of an ensemble of identically
prepared systems, and 2) the state vector completely describes an individual
system, determining a full set of its dynamical variables (e.g., 𝐴|𝜓⟩= 𝑎|𝜓⟩).
Although the frontier between physics and philosophy is blurred in this area,
the issue of the meaning of state vector has a pragmatic importance and is
utterly instrumental. The matter is that quantum phenomena do not occur
in philosophical texts, nor even in Hilbert space - they occur in laboratories.
However, the question remains: is there a preferred interpretation of
quantum mechanics?
Schrödinger initially viewed the wave function Ψ as a certain field
distributed in space, like the electromagnetic field and others; stationary
states, e.g., in atoms, correspond to eigenmodes of this field, similarly to
proper oscillations in a resonator where only certain patterns and oscillation
154 An example of a peculiar manner of quantum speech is “preparing the system
to be in the state 𝜓”. This statement is ambiguous and can be understood only
intuitively, if at all.
308
The Quantum World
frequencies will be sustained, with the others being suppressed by
destructive interference.
The Schrödinger equation for the state vector (wave function) 𝜓 is not
stochastic, so when speaking about ensembles, distributions, fluctuations,
expectation values, etc. one has to gather such probabilistic notions for the
state vector interpretation from some external considerations lying outside
the Schrödinger mathematical model per se.
The hypothesis that historically preceded the Schrödinger point of view
was one of Herzog Louis de Broglie. In 1923 [59] he postulated that the field
distributed in space actually carries the particles and thus determines their
motion in the classical sense (the pilot wave). It is interesting that de Broglie
was not a physicist by training, initially he was trained in humanities
(https://nobelprize.org/prizes/physics/1929/broglie/biographical). It is
also interesting that de Broglie had left his pilot wave viewpoint for a quarter
of a century and then returned to it. The main idea behind this hypothesis -
in fact really brilliant - is that the motion of the particle depends on the
accompanying wave.
In 1926, Max Born (who, incidentally, was a mathematician by training,
and his tutor was D. Hilbert) offered the probabilistic interpretation of the
wave function, by introducing statistical averaging [217]
𝐴̅ = ∫𝜓∗(𝑞)𝐴(𝑞)𝜓(𝑞)𝑑𝑞
for the self-adjoint dynamical variable 𝐴 in a physical state described by a
complex function 𝜓(𝑞) (normalized to unity). So, the quantum states were
identified with one- dimensional subspaces of a Hilbert space ℍ (rays) which
corresponded to the normalized state vectors 𝜓 defined up to a phase factor
exp(𝑖𝜑). This interpretation was developed in Copenhagen and therefore was
nicknamed later “the Copenhagen interpretation”. Some people tend to
associate the Copenhagen interpretation not with Max Born but with Nils
Bohr and, to some extent, with Werner Heisenberg. Bohr was more deeply
engaged with philosophical aspects than his colleagues. Let the historians of
science specify the role of each of creator of quantum mechanics and the
origin of the term “Copenhagen interpretation”, we are more interested in
general quantum models as well as in questions of quantum evolution in this
chapter. The standard Born interpretation is little more than a strongly
idealized connection between physics and mathematical probability. The
excellent (to my understanding, of course, there are people who do not share
this praise) textbook [84] may be considered as the manual on the
Copenhagen interpretation of quantum mechanics, although this fact has not
been explicitly acknowledged by the authors.
6.14.1
More on Copenhagen Interpretation
The colloquial term “Copenhagen interpretation” is extensively used but
remains poorly defined. It usually implies some connection to the experiment
while discussing only the measurable quantities 𝐴 represented by the
6.14
Do You Need an Interpretation?
309
operator 𝐴. Then for any function 𝑓(𝑥) the expectation value obtained by the
measurement of 𝑓(𝑥) in the quantum state Ψ is defined by the scalar product
(Ψ|𝑓(𝐴)|Ψ). The Schrödinger equation and the Copenhagen formulation
naturally place great emphasis on time development, since the Hamiltonian
operator 𝐻 corresponding to the mechanical energy treated as the quantum
“observable”155 plays a special role determining the quantum evolution: we
have seen that any operator 𝐴= 𝐴(𝑡) varies in time according to the rule
𝑑𝐴
𝑑𝑡= 𝜕𝐴
𝜕𝑡+ 𝑖
ℏ[𝐻, 𝐴],
where [, ] denotes a commutator. The time evolution operator is based
on the Hamiltonian and defined as 𝑈(𝑡) = exp (−
𝑖
ℏ𝐻𝑡). We have seen that
the one-parameter group of unitary operators 𝑈(𝑡), 𝑡∈ℝ, completely
determines the time development of an isolated system (see [84], §9,13,
see also “time evolution” and “Quantum Evolution” above in this chapter).
Honestly speaking, it is just an assumption that the laws of time development
may always be represented in the one-parameter group (more general,
semigroup156) form i.e., that the rule 𝑈(𝑡, 𝑡0)𝑥0 = 𝑥(𝑡) with 𝑥0 = 𝑥(𝑡0) and
𝑈(𝑡, 𝑡0) = 𝑈(𝑡, 𝑠)𝑈(𝑠, 𝑡0) is valid under all circumstances. In classical
mechanics, such time development is a canonical transformation due to a
special (symplectic) form of Hamiltonian geometry. It is this beautiful
geometric form that makes classical mechanics so impressive - like an
eternally young film star who emotionally appeals to everyone. We do not
discuss here the shift of the focus in the temporal evolution from states to
operators representing measurable quantities, which corresponds to the
transition from Schrödinger to Heisenberg representation and has already
been discussed. The most important thing here is the connection of these
mathematical schemes with reality, i.e., experiment. However here there are
difficulties, not only because a meaningful theoretical interpretation for
quantum tests is usually formulated in the language of classical physics, but
also because time evolution determined by the Hamiltonian and represented
by a one-parameter group of unitary transformations in a Hilbert space ℍ is
poorly compatible with quantum measurements. Indeed, if the system
undergoes some measurements, its time evolution is no more determined by
Hamiltonian 𝐻 and after time 𝑡 it will not in general be found in the state
𝑈(𝑡)Ψ(0), ℍ→ℍ or, more explicitly,
Ψ(𝑡) = 𝑈(𝑡)Ψ(0) = exp (−𝑖
ℏ𝐻𝑡) = ∑
(−𝑖𝐻)𝑛𝑡𝑛
𝑛!
∞
𝑛=0
Ψ(0)
155 Recall that to each physically observable quantity 𝐴 there exists an operator
which is usually designated by the same letter.
156 Recall that in a semigroup an inverse is not guaranteed for all its elements, in
contrast with a group.
310
The Quantum World
Thus, unitary evolution is inconsistent with the quantum state reduction
postulate, and in the measurement situations one usually tends to avoid the
unitary time evolution. So, an irreversible behavior is semi-implicitly
introduced. One of the standard ways of introducing irreversibility is to
postulate an interaction of the considered system with an external heat bath
which, in the measurement case, should correspond to a measuring
instrument. In other words, the measurement process in quantum theory,
especially when the Copenhagen interpretation is used to connect the theory
to an experiment, provides an example of an irreversible process even within
the framework and axioms of a mechanical model. One must admit that for
measurement situations in quantum mechanics the standard formulas for
quantum-mechanical evolution is not valid, and measurement processes must
be treated separately. This fact makes quantum mechanics a strange theory:
on the one hand the Copenhagen interpretation places a strong focus on
measurement and connection with the experiment, on the other hand the
standard evolution expression is incorrect when measurement enters the
picture.
Many physicists and the majority of philosophers have always been
dissatisfied with the Copenhagen interpretation particularly because the
unitary state vector evolution cannot be directly applied to measurement
situations. At the same time, within the Copenhagen framework,
measurement is crucial for understanding quantum behavior: without
measurement one is unable to observe phenomena. This inconsistency is
indeed annoying, and one has to add to the mathematically impeccable
unitary evolution postulate another one, more of an ad hoc nature, namely that
of reduction of the state vector. In operational terms, this additional postulate
states that one observes after the measurement a normalized (unitary) state
corresponding to the measurement outcome, the probability of this new state
being given by the Born rule (squared amplitude).
Greatly simplifying, one can say that the Copenhagen interpretation
states that there is no quantum world (so that the subject of this chapter is
null and void - it may also be null and void due to other reasons - or at least
the chapter should be renamed). There exists only an abstract quantum-
mechanical description i.e., nothing more than a mental, mathematical model.
A quantum particle, say an electron, is just a mathematical formula, and a
mathematical expression can be neither weird nor indeterministic. Moreover,
one cannot require a mathematical formula to be dependent on human
observation as it is sometimes implied in the sophisticated conscience-based
measurement concepts (see, e.g., [215]). Honestly speaking, I still do not
understand why the interaction of a system with the environment must
involve human conscience. Besides, I do not understand what the term
“conscience” means: it is one of those vaguely defined notions which are
difficult to use in “exact” sciences (see Chapter 2).
Greatly simplifying, one can separate physicists into two groups (this is of
course more sociology than physics). The physicists belonging to the first
group submit that being treated as a mathematical model with a collection of
derived submodels, nonrelativistic quantum mechanics works impeccably. So
6.14
Do You Need an Interpretation?
311
one should not talk too much about the possible interpretations of excellent
mathematical tools, many people describe this situation as “shut up and
calculate”. One should leave all “blah-blah-blah” to philosophers.
Nevertheless, the other group of physicists, also large in numbers and
incorporating rather prominent scientists, find this situation very
unsatisfactory. For them, it appears insufficient when physics is reduced to a
description of rules and prescriptions so that “understanding” is lacking. For
instance, it is hard to believe that one does not know exactly where the system
(e.g., an electron) is between the measurements; at maximum only the wave
function can be calculated to establish the probability of a system to be found
in any particular state. Thus a particle can be found with a certain probability
in any particular spatial domain.
But what causes the particle to land in certain domains of a detecting
surface (screen)? Is it some kind of a force? The Copenhagen - the historically
dominant - interpretation brushes off the questions of such type as
meaningless, which means that the study of quantum physics must stop short
at some phenomenological results, for instance, the distribution of dots on a
detecting surface. This agnostic attitude postulated as an intrinsic property
of quantum systems continues to irritate both the members of the second –
“anti-Copenhagen” - group and a large part of philosophically-minded
physicists. They still perceive a disturbing lack of knowledge in the
presumably most fundamental physical theory as an unacceptable flaw.
I can understand the people, the majority of them being very intelligent
and competent, who are utterly dissatisfied with the current interpretation of
quantum mechanics. These people tend to think that quantum mechanics, at
least in its orthodox Copenhagen version, does not give a complete picture of
reality. It must not be, they assert, that reality should depend on presence or
absence of a human observer, so the current interpretation of quantum
mechanics is not compatible with realism. By the way, the term “Copenhagen
interpretation” seems to be rather intuitive: it is ubiquitously used, but poorly
defined. I am not sure anybody really knows what exactly the Copenhagen
interpretation means and what mathematical statement corresponds to it,
see, e.g. [117]. One can diffusely understand the Copenhagen interpretation
of quantum mechanics as the generalization of the trivial statement that any
phenomenon is disturbed by its observation157. In this sense, the Copenhagen
interpretation is quite reasonable and, for me personally, is good enough not
to suffer from torturing doubts over methodical abstractions. Unending
discussions on the status of the entire quantum mechanics or of a variety of
its interpretations are more suitable for philosophy (metaphysics) than for
physics. Despite numerous claims, I still think that the issue of interpretation
of quantum mechanics is devoid of any practical interest and therefore does
not matter much. I may be mistaken, of course.
157 To sharpen the polemics about the Copenhagen interpretation, some philosophers
bring the example of the Moon that need not necessarily be in its place in the sky when
no one is looking at it. One may produce many examples of this kind, of course.
312
The Quantum World
One should not confuse interpretations and formulations. Like classical
mechanics where there are three main formulations - Newtonian, Lagrangian
and Hamiltonian - quantum mechanics admits several more or less equivalent
formulations. I have written “more or less” because I am not aware of any
theorem proving the equivalence of all possible quantum-mechanical
formulations. When talking about interpretations, one usually means a
combination of high principles and low calculations. Interpretations bear the
risk of transgressing the applicability of mathematical models, as can be seen
on the example of the many-worlds interpretation of quantum mechanics. To
clarify this point, I shall give an example of a formulation in order to see the
difference between formulations and interpretations. Thus one of the most
popular current formulations of quantum mechanics is based on the
transition amplitude paradigm. The transition amplitude (𝑥1, 𝑡1|𝑥2, 𝑡2) having
a form of the inner product is destined to answer the following question: if
the quantum system is prepared “here and now” i.e., in the state |𝑥1, 𝑡1⟩, what
will be the probability of finding it “there and then” (state |𝑥2, 𝑡2⟩)? If this
amplitude is known, then to calculate the probability one simply has to take
the amplitude squared. One may notice that in most practically important
situations, only a small domain near the classical path contributes
significantly into the total probability amplitude. This formulation, in contrast
with interpretations, is a practical computational prescription; one can build
an algorithm or a mathematical model around it. A philosophical component
in this and other quantum-mechanical formulations is typically negligible.
Physics, from ancient times, always attracted philosophers attempting to
interpret its results and incorporate them into a philosophical framework. I
would call it a cognitive stigmatization. As a rule, this resulted in a kind of stiff
censorship and, in any case, produced nothing new, but could be very
harmful. One can remember utter mistrust to physicists studying relativity
and quantum mechanics in Hitler’s Germany and especially in Stalin’s USSR.
This mistrust permeated the entire society and was fueled by vindictive
claims on the part of top placed philosophers. A series of belligerent attacks
led
to
repressions
of
physicists
[50],
see
also
http://www.ihst.ru/projects/sohist/document/vs1949pr.htm. It was only
the atomic bomb project that was assigned top priority in the post-war USSR
that saved many Soviet physicists from physical extermination on purely
philosophical ground - like in Middle Ages. Stalin presumably said to the
dreaded chief of secret police Beria of the physicists: “Leave them in peace.
We can always shoot them later.”
Although many physicists have a substantial philosophical component in
their minds, nobody would experience a need to enter a fundamental
philosophical discussion every time one computes a matrix element. The
toolbox of nonrelativistic quantum mechanics has never revealed any internal
contradictions. Nevertheless, since the “physical meaning” - in fact
philosophical interpretation of quantum mechanics - was to many people
unclear, there were numerous attempts to introduce elements of classical
interpretation into quantum theory. I have tried to make a short catalog of
these endeavors which, by the way, were produced by very clever people; so
6.14
Do You Need an Interpretation?
313
one should not think that it would be a sheer waste of time to get oneself
familiar with these ideas.
6.14.2
Bohm’s version
David Bohm made an attempt to preserve the concept of a trajectory and to
reconcile it with the formulas of standard quantum mechanics, e.g., the
Schrödinger equation, by picking up some special function - the “quantum
potential”.
Personally, I think that attempts to interpret quantum mechanics in
terms of classical trajectories are just about as meaningful as explaining
cosmological phenomena in terms of the geocentric system158. We have seen
that quantum mechanics is based on a totally different mathematical
foundation as compared to classical mechanics, but nonetheless any viable
mathematical model of quantum mechanics is fully deterministic in the sense
that for every event there exists a cause and there is an effect for any cause.
So, there is no need to impose classical trajectories on the quantum
description to save it from the alleged indeterminism. However, since the
Bohmian version is becoming increasingly popular today, particularly due to
its apparent attractiveness to numerical modelers, we shall discuss it here at
some length. Bohm was a great physicist, and already because of this his
thoughts about quantum trajectories deserve to be respectfully overviewed.
The Bohmian interpretation of quantum mechanics is in a certain sense
opposite to the many-worlds interpretation of Everett who retained all
technical details of standard quantum mechanics. One may regard Bohm’s
version of quantum mechanics as a further and more sophisticated
development of de Broglie ideas: some disciples of de Broglie’s school did not
accept the conventional probabilistic interpretation of quantum theory at all,
considering it as contradicting to “realism” and to objectivity of physical laws.
According to this point of view, only the determinism of classical type based
on trajectories may be compatible with realism. D. Bohm’s motivation was, as
he formulated it in [218], to suggest an interpretation of quantum mechanics,
alternative to the conventional one, which would be free of the usual
conceptual difficulties and, besides, would “determine the precise behavior of
an individual system”.
Individuals and groups who promote Bohm’s “causal interpretation” of
quantum mechanics tend to present themselves as intrepid thought
transformers, almost revolutionaries daringly fighting against inert
reactionaries who try to preserve their “comfort zones” of calloused concepts.
The adherents to Bohm’s version (the “Bohmians”) probably think that they
are opening eyes to the physical community whose dominant members
uncritically maintain the “idealistic” orthodox interpretation of quantum
158 Here, I cannot refrain from a remark that some opinion polls, in particular, carried
out in Russia in 2007 demonstrate an increasing number of people, to my surprise
mostly young ones, who believe that it is the Sun that rotates around the Earth and that
it is only those scientific jerks who “powder the brains” of simple people living in the
real, not queer and scientific, world. As near as I remember, there were about 27 percent
of sampled persons who possessed this level of astronomical knowledge.
314
The Quantum World
mechanics. In fact, the kind of interpretation now known as the Bohmian
picture had appeared even before the Copenhagen interpretation (or at least
simultaneously with the latter, in the 1920s) in the works by L. de Broglie
[59]. This “materialistic” interpretation was originally named the “pilot wave
theory” (see above), being later supported and developed by J.-P. Vigier and
then extended by D. Bohm and his followers (see a detailed account in [137]
and [138]). One should, however, notice that the pilot wave theory did not
survive in contrast to the “orthodox” quantum mechanics repudiated by the
“Bohmians”. We must also mention E. Madelung who was probably the first to
use the polar representation of the wave function taken as any complex
quantity,
Ψ(𝐫, 𝑡) = 𝑅(𝐫, 𝑡) exp(𝑖𝑆(𝐫, 𝑡)),
in a systematic way. The meaning of the Madelung representation is in
replacing the wave function by the density field 𝜌(𝐫,𝑡) = 𝑅2(𝐫,𝑡) and the
phase field 𝑆(𝐫, 𝑡). We shall see below that it is this Ansatz that is the
mathematical essence of Bohmian mechanics. Separating the real and
imaginary parts, one gets the Madelung equations (also known as quantum
hydrodynamic equations). This form of representing the wave function
inspired numerous attempts to interpret nonrelativistic quantum mechanics
in the classical spirit.
I still think that the Bohmian version based on the trivial polar
representation of the 𝜓-function is a sheep in wolf’s clothing. This version and
associated “causal” interpretation brings nothing radically new, although it
may in certain cases be convenient for numerical computations.
6.14.3
Statistical interpretation
The real meaning of the wave function has gradually become more elucidated
starting from the M. Born works on statistical interpretation of quantum
mechanics 159 . This is another class of possible interpretations. In the
statistical interpretation, the notion of probability is indispensable, although,
being applied to quantum mechanics, it was at first not at all clear what is
meant by the probability, at least in the mathematical sense. Probability of
what? But leaving mathematical rigorism aside, physicists constantly talked
of probabilities.
An essential part in elucidating this issue - as in general in the problem of
interpretation of quantum mechanics - belonged to intuitive considerations
of N. Bohr that the quantum mechanical description of the microscopic object
properties must be harmonized with the classical treatment of observation
means [136]. Object properties are always manifested in the interaction with
observation means. Bohr emphasized the necessity to consider the entire
experimental scheme, with the human observer being included. Sometimes
an impression arises that Bohr was forgetting what was actually measured:
159 I would recommend carefully reading the Nobel Lecture by Max Born “The Statistical
Interpretation of Quantum Mechanics” [217].
6.14
Do You Need an Interpretation?
315
micro-object properties or an observation device. One may often encounter
reprimands addressed to Bohr’s views bordering on philosophy that he
presumably underestimated the fact that such micro-object properties as
mass, charge, spin or the form of a Hamiltonian, including the character of
interaction with external fields, are absolutely objective and independent of
the observation means. There is probably a certain misunderstanding of a
terminological character: Bohr uses the expression “uncontrollable
interaction”, which was probably necessary to cover the gap originating from
applying classical concepts outside their validity area. In fact, the interaction
of the observed system with the measuring device may be considered as a
physical process and is, in this respect, quite controllable. Nevertheless, each
person familiar with measurement techniques knows that statistical methods
are an indispensable element of experimental data interpretation.
6.14.4
The Many-Worlds Interpretation
Curiously enough, when one starts studying quantum mechanics, one cannot
get rid of an impression that the quantum theory is built not from the very
beginning. Later on, as one learns how to calculate, this impression
disappears or, rather, is displaced. However, some irritating symptoms of
logical incompleteness of quantum mechanics do remain, and it is these
symptoms that drive people to invent new formulations, interpretations and
logical schemes to satisfy the quest for cognitive comfort. Indeed, to perform
quantum computations one has to familiarize oneself with the concepts that
are far from being rudimentary, though they have been put at the base of
quantum mechanics, such as expectation values, probabilities, operators,
unitary spaces, etc. In order to make this stunning machinery work smoothly,
one had to lubricate and support it with a number of supplementary
“principles”: complementarity, indeterminacy, compatibility, superposition,
measurement,
state
reduction,
projection,
absence
of trajectories,
correspondence, indistinguishability, etc. Not all of these principles are
obvious or logically independent from one another and some even look
slightly artificial. Therefore, one is tempted to think that quantum mechanics
has not yet reached its final stage and will undergo a further development like
some technological system such as an automobile. On the other hand,
quantum mechanics forces one in certain situations to weaken the principles
that seemed to have been firmly established before the invention of quantum
theory such as causality, classical (Kolmogorov) probability, mathematical
logic160.
One of the main supplementary supports of quantum mechanics is the
superposition principle. From the classical viewpoint, this principle amounts
to sitting between all the chairs. It leads to such classically unimaginable
phenomena as entangled states, possibility of an electron to be reflected
simultaneously from the floor and the ceiling, interference of amplitudes, etc.
160 Of course, these principles also remain mathematically unproved and can be just
readily adopted human stereotypes.
316
The Quantum World
There is a lot of literature devoted to the many-world interpretation of
quantum mechanics so that the reader can easily educate herself or himself
in this area. Despite my deep respect for Hugh Everett’s free and independent
thinking, I cannot agree that the many-worlds interpretation has contributed
a lot in quantum mechanics. One can even call “everetics” the whole flood of
scientific and pseudoscientific papers as well as science fiction around the
many-world interpretation. The heated debate around this concept, in my
opinion, is useful mainly for Hollywood and science fiction writers. The
reason is that the many-worlds concept did not provide any real insight, nor
new computational procedures. One still has to solve the same equations,
supplemented by the same boundary conditions and, quite naturally, obtain
the same probabilities. The only thing one gets partly new with the many-
worlds interpretation is an attempt to eliminate the uncomfortable
“measurement problem”, i.e., the question how a definite classical reality
emerges from a set of quantum alternatives. The many-worlds interpretation
treats this problem by assuming a hypothetical selection of only one of the
infinite number of unobservable eigenstates, one per universe. We see only
one since we are accidentally present in the universe with just this
measurement. By the way, we ourselves, all individuals are constantly
“splitting” into non-communicating copies of themselves together with the
immediate environment161. A fanciful construction!
Some people think that this construction preserves realism (I find this
ridiculous) or, at least, determinism. Determinism or “free will” - leave it to
philosophers, these concepts have nothing to do with physics or mathematics.
Physicists are mostly interested in obtaining answers to correctly set
problems in operationally defined terms, and the question “Is there a free will
anywhere in Nature?” seems to be out of scope of physical research.
The scientific content of the many-worlds interpretation as opposed to
the metaphysical one may be formulated in the following way. The many-
worlds interpretation was an attempt to escape the dubious state vector
reduction by replacing it with the Schrödinger evolution of the whole
universe. However, to describe 𝑁 successive quantum measurements one
needs to consider the tensor product of 𝑁 branched wave functions. The
mathematical framework of the Everett model is not quite clear - at least it
remains unclear to me since I failed to find any pertinent discussion at the
level of modern mathematical physics. One can, for example, consider the
space of all such possible tensor products. Then one may have to define a
suitable measure on that space. Afterwards, the Born probability rule may be
interpreted as a procedure of searching the probability to find oneself in a
particular branch of the wave function describing the entire universe. This
probability measure should be identified in the many-worlds model with the
probability of a particular measurement outcome.
One should not underestimate the heuristic value of the many-worlds
interpretation. It paved the way for the wave function of the closed universe
161 I wonder, what is the time scale of such branching. Are unobservable universes
created every attosecond or faster?
6.15
Causality
317
[67] so that modern cosmologists should owe a debt to Hugh Everett for his
insight half a century ago that the entire universe can be described by a single
wave function. But the scientific status of the many-world interpretation is
just that: interpretation - not a theory or a mathematical model. By the same
token, despite all persistent claims, Bohmian or Copenhagen views are also
just metaphysical interpretations of a physical theory. Nevertheless, I have
already remarked that the Copenhagen interpretation seems to be perfectly
adequate for physics without fancy hypotheses such as that of an
indetermined number of non-interacting universes and other metaphysics.
At least, it is good enough for me.
The mathematical models of quantum mechanics are not in the least
bizarre, as many non-physicists used to think. Mathematics of quantum
mechanics can be easily understood and in many cases is more transparent
than mathematics of classical mechanics. Later, we shall see examples when
the quantum-mechanical way of reasoning proves to be simpler than the
classical picture. What seems to be really bizarre is interpretation. Sometimes
it comes down to accepting that human consciousness is somehow involved
in determining the outcome of an experiment.
6.15 Causality
Causality in general may be understood as a statement related to general
properties of space and time. Causality manifests itself in many areas of
knowledge, although I am not aware of any theorem warranting that causality
should be omnipresent. Thus, causality does not follow from any underlying
equation or theory - it is simply a postulate. I have already mentioned that we
may regard causality as a result of human experience. Non-causal systems
would allow us to get signals from the future or to influence the past - a
marvelous possibility extensively exploited by phantasy writers. Figuratively
speaking, in the noncausal world you could have killed your grandmother
before she had been born. Models based on a dynamical systems approach are
always causal, i.e., effect cannot precede the cause and the response cannot
appear before the input signal has been applied. In the special theory of
relativity, causality is interpreted as a consequence of the finite speed of
signal or interaction, with acausal solutions being rejected - it is impossible to
influence the past, it is presumed constant.
We have already noticed that causality is closely related to non-
invariance with respect to time reversal, frequently called TIV - time-
invariance violation. Indeed, causality establishes the temporal priority of
cause and effect or a temporally ordered sequence of causes. In causal models,
one cannot affect the past, it is constant. We shall later focus on this issue in
connection with the so-called arrow of time. Although the principle of
causality does not appear explicitly in physics162, it underlies its foundations.
In purely human terms, this principle simply means that there are no
miracles. I agree that it is sad, and many people still want to believe in
miracles. Of course, nobody can forbid other persons to believe in faster-than-
162 See, however, below a discussion of dispersion relations.
318
The Quantum World
light travel, time journeys, precognition, teleportation, witchcraft, evil eye,
magic spell and other captivating stuff, especially in our time when the virus
of infantilism rapidly spreads all over the world. If you think that fancy creeds
make your life better, then do believe in miracles. To hell with science, it is
not the main thing in life, anyway.
6.16 Quantum Chaos
From the theory of dynamical systems (Chapter 4) we know that classical
trajectories, even those related to the simplest systems, exhibit two types of
behavior: (1) regular, orderly flow and (2) irregular, chaotic zigzags. On the
other hand, we know that there exists the correspondence principle linking
classical and quantum mechanics. Thus, the question naturally arises: are
there any manifestations of the two different types of classical motion in
quantum mechanics? On the simplest level, quantum mechanics may be
formulated in terms of the wave function; so, the correspondence principle
must specify some relationship between wave functions and the families of
classical trajectories, orderly or chaotic. But what happens with the wave
function when the corresponding classical trajectories leave the regularity
domain in the parameter space and transit to the chaotic regime? To correctly
discuss this issue, one has to re-examine the relationship between the
description based on classical trajectories (classical rays) and on wave
functions (quantum rays). In other words, the subject of manifestations of
classical chaos in quantum mechanics invokes the fundamental aspects of the
quantum-classical correspondence.
6.17 Path Integrals in Physics
In this chapter, we have already discussed at length and on various occasions
the connection between the quantum and classical mechanics. One may
observe that this issue, while still crucial for the whole physics, is far from
being ultimately resolved. The same might be said about the interpretation of
quantum mechanics, although this latter issue has some metaphysical flavor
(see below). The fact that the interrelation of quantum and classical
mechanics is hard to elucidate is understandable from the primary
mathematical viewpoint: quantum and classical theories are based on totally
different concepts - the first, in its primitive form, is tantamount to the theory
of linear self-adjoint operators in Hilbert space whereas the second is
basically nonlinear, with its foundation lying in more sophisticated
geometrical considerations. Thus, to link these two diverse disciplines is far
from trivial.
There exists, however, an elegant method invented by R. Feynman [44]
allowing us to comprehend the connection between quantum and classical
mechanics. This method is called the path integral formalism. Feynman
showed that time evolution, which is the main process considered in quantum
mechanics, can be expressed through the classical action for various
trajectories the quantum system might follow. As usual, the term “quantum
system” denotes something simple, like a particle moving in 1D, but we shall
6.17
Path Integrals in Physics
319
see later that the path integral formalism may be applied to rather complex
physical (and not only physical) objects. From the geometrical viewpoint,
path or, more generally, functional integrals provide a means to deal with
piecewise linear curves which manifest fluctuating 1D structures. These
fluctuations may be of different nature: quantum-mechanical, statistical,
thermodynamical, etc. Respectively, path integrals may be an adequate
formalism in all situations where such fluctuating trajectories are
encountered, e.g., in areas seemingly distant from physics such as economics
and financial analysis. This is a typical physmatical effect of interconnected
disciplines.
The word “action” in the original Feynman’s version of path integration
implies the Lagrangian formulation of the theory (see Chapter 4). Indeed, it is
this formulation of quantum mechanics that, in contrast to the Hamiltonian
approach used by Schrödinger, that leads to path integrals. We may recall that
Lagrangian mechanics provided us with an understanding of the conservation
laws - their origin lies in the connection between conserved quantities and
the symmetries of a physical system (namely, there is a conserved quantity
corresponding to each generator of the Lie algebra for a Lie symmetry group).
Moreover, Lagrangian mechanics, in principle, gives a coordinate-free and
global description for the paths on the configuration manifold (see Chapter
4). It is this feature of the Lagrangian approach that led P. A. M. Dirac and R.
P. Feynman to invent their new quantum theories, in particular the path
integral formulation.
The path integral may be formally represented by the formula for the
transition amplitude
〈𝑥, 𝑡|𝑥0, 𝑡0〉= 𝐴∫[𝐷𝑥] exp {(𝑖
ℏ) 𝑆[𝑥(𝜏); 𝑡, 𝑡0]}
(6.14)
where the exponent 𝑆[𝑥(𝜏); 𝑡, 𝑡0] is the classical action for a path joining the
space-time (Minkowski) points (𝑥0, 𝑡0) and (𝑥, 𝑡), 𝑥0, 𝑥∈(ℝ3)𝑛, 𝑛< ∞ is the
number of particles, [𝐷𝑥] is some formal measure and 𝐴 is the normalization
constant. Before we discuss this expression, clarify the meaning of the term
“path integration” and learn how to use it, we may notice that for |𝑆| ≫ℏ
(asymptotically, in the limit ℏ→0 ), the integral containing the weight
exp{𝑖𝑆ℏ
⁄ } can be calculated with stationary phase techniques (see Chapter
3), which implies that the value of the integral over paths should be
determined by the neighborhood of the path that brings an extremum to the
classical action 𝑆. This special path exactly corresponds to the classical
trajectory provided by the Lagrangian formulation of classical mechanics (see
Chapter 4). The asymptotic expansion of the path integral for ℏ→0 coincides
with the WKB approximation, which is typically derived from the Schrödinger
equation. This link with WKB (semi-classics) is already a hint at the generality
of the path integral.
Later we shall see that the path integral also has many applications in
statistical physics and may be interpreted in accordance with its (Gibbs)
principles. This fact provides an important link between statistical methods
320
The Quantum World
and quantum field theory. The underlying idea is simple and clear: the
equilibrium statistical physics is a probabilistic theory, the main statement of
which is that a physical system under temperature 𝑇 has a probability
proportional to exp{−𝐸/𝑘𝐵𝑇} to be found in a state characterized by energy
𝐸, 𝑘𝐵 is the Boltzmann constant (usually, we shall put it equal to unity thus
measuring temperature in energy units). One can then interpret the path
integral in its original form for a single particle as moments of some
probability measure, with the analytic continuation to the imaginary time 𝑡→
−𝑖ℏ/𝑘𝐵𝑇≡−𝑖ℏ𝛽. In other words, the “free measure” corresponding to a
system of non-interacting particles becomes perturbed by the Boltzmann
factor exp{−𝛽𝑉(𝐫1, … , 𝐫𝑛)} where 𝑉(𝐫1, … , 𝐫𝑛) is the interaction energy
between the particles. One can then, with the help of the path integral,
comparatively easily calculate the partition function
𝑍= ∑𝑒
−𝐸𝑖
𝑘𝐵𝑇= 𝑇𝑟(𝑒
−𝐻
𝑘𝐵𝑇)
𝑖
(6.15)
- the principal quantity in equilibrium statistical mechanics, from which
all macroscopic (thermodynamic) quantities can be derived. (The term
“partition function” reflects the possibility of partitioning the system’s total
energy between its subsystems; sometimes this term is replaced by a more
old-fashioned one “statistical sum”).
Let us now get back to our simple mechanical model of a single particle
moving in 1D in the potential field 𝑉(𝑥) . One can recall that while the
Hamiltonian version of classical mechanics had provided a foundation for the
Hilbert space formalism in quantum mechanics, it is the Lagrangian
formulation that led P. A. M. Dirac and then R. Feynman to path (functional)
integration. The essence of this new build of quantum mechanics, based on
the action for various paths the quantum system may follow classically, is an
expression for the evolution operator 𝑈𝑡= 𝑒𝑖𝑡𝐻 where 𝐻 is the system’s
Hamiltonian. For our simple model of a single-particle system, in order to gain
some experience of handling the path integrals, we may try to derive a
straightforward expression for the quantum-mechanical time evolution
starting from the Lagrangian formulation. The evolution operator 𝑈𝑡 in this
case corresponds to the transfer of a particle from (𝑥0, 𝑡0) to (𝑥, 𝑡) and may
be written as a functional integral over the space of classical paths
𝑈𝑡(𝑥0, 𝑥) =
∫
exp (𝑖
ℏ𝑆[𝑥(𝜏)]) ∏𝑑𝑥(𝜏)
𝜏∈[0,𝑡]
𝑥(𝜏):𝑥0=𝑥(0),𝑥=𝑥(𝑡)
(6.16)
This expression is actually no more than a symbol that still needs to be
interpreted. For example, it is not even clear what the symbolic product
∏
𝑑𝑥(𝜏)
𝜏∈[0,𝑡]
means. R. Feynman, in his original paper [44] interpreted the
above expression as the limit of finite-dimensional integrals over piecewise
linear curves approximating the smooth particle path, these curves having
vertices at discrete points 𝑡𝑖∈[𝑡0, 𝑡], 𝑖= 1,2, … for ∆𝑡𝑖= 𝑡𝑖+1 −𝑡𝑖→0. This
6.18
Quantum Field Theory
321
construct is very close to the finite-difference techniques ubiquitously
employed in numerical mathematics and scientific computing. Since
functional integrals are increasingly often encountered in modern physics
(and nowadays not only in physics, see the comprehensive book by H. Kleinert
[49]), I shall try to illustrate this approach on the simplest possible model.
Although this illustration is by default simplistic, for me personally it was a
good reference to study more sophisticated subjects such as path integral
formulation of supersymmetric systems.
6.18 Quantum Field Theory
In special relativity the velocity of light is the same in all inertial frames but
space-time remains flat and is understood as a fixed arena given ab initio. In
this sense, special relativity distinguishes from general relativity and is a
predecessor of the latter. Quantum field theory is a much more difficult
theory than ordinary non-relativistic quantum mechanics. It is also much
more general, which is manifested by the fact that it embraces most of the
laws of physics, except gravity. QFT’s development has never stopped: it was
originated in the late 1920s and has been constantly modernized up to the
present. In its advancement, QFT progressively conquered new areas going
through such important issues as “antimatter”, which emerged around 1930,
the theory of photons, electron-photon interaction, electromagnetic radiation
processes and vacuum (1950s), culminating then in the “Standard Model” of
particle physics that was a successful attempt to describe strong,
electromagnetic and weak interactions within a unified theoretical
framework. The Standard Model which appeared in the 1970s was a
considerable achievement, a milestone that led to new predictions and
stimulated the construction of new gigantic accelerators - the main of which
(LHC) in CERN - where one hopes to test the principal implications of QFT.
One, however, needs a lot of artistry in order to describe real particles by the
QFT means. For instance, attempts to construct models of elementary
particles and their interactions by means of QFT are essentially based on the
concept of renormalizability, while the notion of a renormalizable theory is a
bit vague and applied ad hoc to QED, QCD, 𝜙4-theory and so on (see, e.g., [206],
chapter 9). We shall return to the problem of renormalizability when
discussing the standard computational techniques of QFT.
One can find in textbooks the widespread opinion that there is a grave
conflict between Einstein’s relativity theory, both special and general, and
quantum theory (see, e.g., http://en.wikipedia.org/wiki/Quantum mechanics
and references to this article). But what is the essence of this conflict is
revealed - and probably understood - in a dissimilar way by different authors.
The prevailing opinion, shared mostly by the “pragmatic” wing of physicists,
is that it would be difficult to merge these two classes of theories for all energy
and spacetime scales because they are based on totally different assertions,
and the best one can achieve is to construct a non-contradictory quantum field
theory, free from divergences. Honestly speaking, the problem of ultraviolet
divergences still remains unsolved in contemporary QFT. Again, the dominant
opinion, shared probably by the majority of physicists, is that since QFT as we
322
The Quantum World
know it now is a low-energy approximation to some future fundamental
theory on how the universe works, e.g., “theory of everything” (TOE), then the
divergences will disappear as soon as people better understand physics at
small distances, in particular, with the aid of this new theory, so far unknown.
There exist today a number of crucial ideas to be put into the core of this
hypothetical final theory. The most popular among these ideas is that of
replacing point particles by extended objects such as strings or tiny
membranes (see Chapter 9). Another popular idea is that the space structure
at small distances, e.g., of the order of the Planck length, 𝑙𝑝~10−33 , is
geometrically highly nontrivial, for example, foamy, discrete, or non-
commutative. Yet prior to the emergence of the future fundamental theory, we
have to be satisfied with heuristic methods such as renormalization, while
closing our eyes on ultraviolet divergences.
Moreover, quantum field theory has been constructed based on the flat
(pseudo-Euclidean) spacetime of special relativity. Contrariwise, general
relativity understands gravity as a spacetime curvature. There were
numerous attempts to build gravitation theory over flat spacetime, but they
did not seem to succeed (which, as near as I know, was pointed out already by
Poincar´e). Never-ending attempts to make gravity renormalizable `a la
quantum electrodynamics were also unsuccessful so far.
It seems to be of extreme importance that one can construct rather
general field theories by using only two guiding principles: (1) symmetries
and (2) the least action principle (albeit slightly generalized as compared to
Lagrangian mechanics).
One should not think, however, that the simplest but the best
computationally developed version of quantum field theories, QED is just a
relativistic generalization of standard quantum mechanics (QM). Not at all,
QED and QM are totally different animals.
When talking about the standard non-relativistic quantum mechanics,
one may use the field theory concepts and ask, for example: how can we get
from the quantized field Φ to the “single-particle sector” with the associated
wave function 𝜓(𝐫) satisfying the Schrödinger equation for a free non-
relativistic particle? The answer in the same language would be: if |0⟩ and |𝐩⟩
denote the vacuum and the single-particle state with momentum 𝐩,
respectively, then
𝜓𝐩(𝐫, 𝑡= 0) = 〈|Φ(0)|〉= 𝑒𝑖𝐩𝐫
is the plane wave solution to the free non-relativistic single-particle
Schrödinger equation. This language is sometimes considered more
“modern”, although it brings nothing new and may be confusing.
The main feature that distinguishes quantum field theory from non-
relativistic quantum mechanics is the phenomenon of vacuum polarization.
The latter also leads to some specific mathematical properties such as
algebraic structures of QFT unfamiliar in quantum mechanics. Vacuum
polarization and vacuum fluctuations are omnipresent in QFT, and this fact
makes any bound state picture of quantum mechanics naive and obsolete. In
6.18
Quantum Field Theory
323
view of vacuum polarization, QFT is intrinsically a many-body theory.
Moreover, in case one desires to retain the concept of vacuum, there seems to
be no viable alternative to the conventional QFT.
There are some standard computational techniques in QFT. In quantum
theory in general and especially in quantum field theory, one must primarily
learn how to perform approximate calculations (though with controlled
accuracy) because the equations of the theory are so complicated that they
cannot be exactly solved. The standard approximation strategy, perturbation
theory, is always based on the presence of some small parameter.
6.18.1
More on Relativistic Invariance
Quantum field theories were initially formulated as some extensions of
nonrelativistic quantum-mechanical models such as represented by the
Schrödinger equation, e.g., by formulating relativistic wave-like equations
intended to describe the motion of single particles. Those were, in particular,
the Klein-Gordon, Dirac and certain more complex equations portraying
relativistic quantum motion. A characteristic feature of all such equations was
that time and position variables had to be formally equivalent - that was one
of the symptoms of relativistic invariance distinguishing relativistic quantum
theories from nonrelativistic quantum mechanics. So relativistic invariance is
an indispensable attribute of relativistic wave equations. Now, in connection
with relativistic quantum theory, we shall discuss this kind of invariance in
some more detail as compared to the discussion of classical fields and waves.
The archetypical examples of relativistic wave equations are Maxwell’s
equations163 for classical electromagnetic fields, the Klein-Gordon and Dirac
equations. From the position of a general theory of fields, it is not so important
that Maxwell’s equations are classical (indeed, they do not contain the
parameter ℏ and therefore may, by definition, be considered classical), more
essential is that they describe the dynamics of a vector field represented by
the vector potential 𝐴𝜇(𝑥) = (𝐴0, 𝐀). Here 𝑥 is as usual understood as a
contravariant four-vector, 𝑥≔(𝑥0, 𝑥1, 𝑥2, 𝑥3) = (𝑐𝑡, 𝐫) . In distinction to
Maxwell’s equations, the Klein-Gordon equation governs the dynamics of a
scalar field 𝜙(𝑥) whereas the Dirac equation describes excitations of the four-
component spinor field, 𝜓𝜇(𝑥), 𝜇= 0,1,2,3. Note that these three fields give
the most frequent examples of relativistically invariant models, but in no way
exhaust the whole relativistic field theory, as it might seem when reading
standard textbooks.
It is also important to note that each of the relativistic fields i.e., scalar,
vector, tensor, etc. functions defined on the Minkowski space ℳ (or on some
manifold ℳ) which are transformed in a specific way under the action of the
Lorentz or Poincaré group.
163 It is curious that in the English-language literature it is customary to write
“Maxwell’s equations” and not “the Maxwell equations”, this latter form is encountered
much more seldom than the former. On the contrary, “the Dirac equation” is
encountered more often than “Dirac’s equation”.
324
The Quantum World
6.18.2
Feynman Diagrams
The only existing version of QFT has long been quantum electrodynamics
(QED), which is mostly devoted to studying the interaction of electrons with
photons. This interaction, however, is of a very general character and
efficiently models any kind of interaction between matter and forces [99].
Studying QED seems to be extremely useful because one can test a lot of new
ideas on rather simple and intuitively comprehensible models. For instance,
such concepts as already mentioned vacuum polarization, vacuum
fluctuations 164 as well as such essential techniques as those of Green’s
functions and Feynman diagrams can be fully explored on the QED level. The
interaction between the matter and the forces is described in QED by a
trilinear map involving two spinors and one vector. This map is often drawn
as a Feynman diagram:
Figure 7.1.: An example of the Feynman diagram (Bremsstrahlung)
where the straight lines denote spinors and the wiggly one denotes a vector.
The most familiar example is the process whereby an electron emits or
absorbs a photon.
In conclusion to this section, I can mention that the QFT, specifically the
Feynman diagram representation techniques in QED, has been taken many
times as a pattern for other theories, even for some versions of such a
complicated and nonlinear theory as Einstein’s gravitation, although this
issue is highly controversial.
6.18.3 S-Matrix
The scattering matrix or the S-matrix, as it was called later, was introduced by
J. Wheeler [303] in 1937 and W. Heisenberg [296] around 1943. Heisenberg
was looking for the means to represent quantum mechanics and, in particular,
scattering theory only in terms of directly observable quantities such as
energy and energy levels, momentum, mass, spin, lifetimes, cross-sections. S-
matrix became increasingly popular in the 1960s, together with particle
physics [304] fashionable at that time. At that time plenty of new particles and
resonances were discovered so the question, what consisted of what
(hierarchical or “aristocratic” approach), was very urgent. The S-matrix
164 Nevertheless, to my best knowledge, the question: “What is vacuum?” has not been
fully answered in QED.
6.18
Quantum Field Theory
325
rapidly became the main observable, and most experimental results in
particle and high-energy physics were analyzed in terms of the S-matrix, the
latter being interpreted as a main invariant measure of particle interactions.
One can regard the S-matrix as a peculiar formfactor sandwiched between the
incoming ket state and the outgoing bra state165 (it can be defined vice versa
of course). Anyway, the S-matrix is a convenient tool because it is a global
object thus eluding the annoying ultraviolet i.e., short-distance problems due
to omnipresent vacuum fluctuations. Besides, there are not so many
mathematical requirements imposed on the S-matrix: the only non-trivial
property apart from Poincaré invariance 166 is the asymptotic factorization
claim, when the wave packets corresponding to the scattered particles
become spatially separated. These features of S-matrix (or S-operator as it is
increasingly fashionable to call it now) enable one to construct an efficient
computational scheme producing finite models of interacting relativistic
particles, without explicitly attracting the underlying intricate QFT concepts.
6.18.4
Particles and Symmetries in Quantum Theory
We have seen that particles, in the standard quantum theory, are described by
the wave functions which are defined on the set of variables chosen to
describe the particle’s behavior. It is assumed in quantum mechanics that
wave functions form a complex linear space (the superposition principle),
with linear combinations of the wave functions representing a superposition
of states. This complex linear space is the unitary space (i.e., endowed with a
Hermitian positive-definite inner product usually called “amplitude” in
quantum mechanics) of states for some quantum system, in the simplest case
a single particle. Linear operators acting in this space transform wave
functions167, and therefore it would be worthwhile to study their behavior
under these transformations. Moreover, we know that wave functions can be
distinguished and classified according to their symmetry or, more generally,
transformation properties. For example, physicists studying particle models
in relativistic quantum theory are mostly interested in symmetries described
by the Lorentz and Poincaré groups.168 Thus, the symmetry properties may be
considered as the principal feature of quantum particles. Quite roughly,
particle symmetries can be classified into two types: spacetime symmetries
and internal ones.
An instinctive requirement of high symmetry is mostly dictated by the
quest for “beauty” in physics. Logically speaking, beauty has nothing to do
165
We
are
using
the
Bra-
ket
notation,
or
Dirac
notation,
see
https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation.
166 Recall that the Poincaré group includes Lorentz transformations plus space-time
translations. Initial and final particles considered in the S-matrix approach are required
to transform according to irreducible representations of Poincaré group.
167 If ℍ is a linear space with the inner (scalar) product (𝜑, 𝜓), then a set of all
invertible linear operators 𝐴: ℍ→ℍ such that (𝐴𝜑, 𝐴𝜓) = (𝜑, 𝜓) (i.e., isometries) forms
a group. This is one of the first symmetries one encounters in quantum mechanics.
168 The rotation group which is also relevant for classification of wave functions may
be regarded as a special case of the Lorentz group.
326
The Quantum World
with physics, these two are completely disparate entities. Imposing beauty on
physics reminds me of the great hype about “Millennium” which was nothing
more than the magic of round numbers. In reality, “Year 2000” did not
correspond to the beginning of the third millennium, but people tend to react
more often on perception than on facts.
6.18.5
Quantum Field Theory and Mathematics
If one wants to be completely honest, one has to admit that the problem of
combining quantum mechanics and relativity is still open. Even in the greatly
simplified case of special relativity (Galilean spacetime) no essential
advancement beyond the good old perturbation theory has been observed in
QFT. Despite many man-years of research, an operational non-perturbative
control over nontrivial QFTs in the 4d spacetime still remains something of a
dream for the physicists engaged in practical computations.
Quantum field theory, apart from being a very rich subject for physicists,
has produced a profound impact on mathematics, but, curiously, not vice
versa. One can easily list a number of great mathematicians that had tried to
attack QFT from purely mathematical positions during the last 70+ years (e.g.,
constructive quantum field theory, axiomatic quantum field theory, algebraic
quantum field theory, topological quantum field theory, K-theory, results on
complex manifolds, affine Lie algebras, knots and some more piecemeal
mathematical studies related to QFT). Still the main achievements in QFT have
been provided by physicists, and QFT can still hardly be represented as a
rigorous mathematical theory. I think that the problem with mathematical
attacks on QFT lies in the difficulty to connect isolated major results (such as
e.g., handling the 1/r singularity) in pre-string physics with the application of
now prevailing new geometrical ideas, e.g., those leading to string theories
and supersymmetry.
On a somewhat primitive level, the traditional (pre-string) quantum field
theory may be reduced to the quantization of the harmonic oscillator, no
matter how many high-brow mathematical sentences are attracted to wrap
this fact. At least, the harmonic oscillator has always been the dominating
mathematical model in quantum field theory. This is fully justified, if we recall
how the correspondence between fields and particles has been traditionally
established in QFT. Each normal mode of the field oscillations is interpreted
as a particle, with the quantum number 𝑛 of a normal mode being thought of
as the number of particles. A normal mode itself is understood as a harmonic
oscillator, with the energy associated with the field excitation, 𝜀𝑛= (𝑛+
1/2)ℏ𝜔, corresponding to the 1D oscillator model which immediately appears
as soon as the primary field equations - the Maxwell equations - are Fourier
transformed: each Fourier component of the vector potential must satisfy the
oscillator equation. Although this is quite obvious, let us see how it happens.
The Maxwell equations (ME) read (see Chapter 5):
𝜀𝜇𝜈𝜎𝜏𝜕𝜈𝐹𝜎𝜏(𝑥) = 0
(6.17)
the homogeneous Maxwell equations (HME) and
6.18
Quantum Field Theory
327
𝜕𝜇𝐹𝜇𝜈(𝑥) = 4𝜋
𝑐𝑗𝜈(𝑥)
(6.18)
the nonhomogeneous Maxwell equations (NME), 𝑥= (𝑥0, 𝑥𝑖) =
(𝑐𝑡, 𝐫), 𝑖01,2,3.
If we choose the Lorenz gauge 𝜕𝜇𝐴𝜇(𝑥) = 0, then we get from NME the
inhomogeneous wave equation
☐𝐴𝜇(𝑥) = 4𝜋
𝑐𝑗𝜇(𝑥),
(6.19)
where the D’Alembert operator ☐≡𝜕𝜇𝜕𝜇= 𝜕02 −∆=
1
𝑐2
𝜕2
𝜕𝑡2 −∆. One
may note that the ☐ operator is often defined with the inverse sign, so one
should be attentive in the calculation of fields, see Chapter 5 for more details.
This relation may be interpreted as the motion equation for the
electromagnetic field; mathematically it is the Klein-Gordon equation with
zero mass and an external source term proportional to the charge current. If
we make here the Fourier transform over spatial coordinates, we obtain the
oscillator equations for each spatial Fourier component 𝐴𝜇(𝑡, 𝐤):
𝐴̈𝜇+ 𝑐2𝑘2𝐴𝜇= 4𝜋
𝑐𝑗𝜇(𝑡, 𝐤)
(6.20)
It means that the Fourier decomposition leads to the representation of
fields as infinite sums of harmonic oscillators - one for each wave vector 𝑘=
|𝐤| (here, to simplify notations I denote by 𝑘 the absolute value of a 3-vector).
The magnitude of 𝐤 determines the oscillator frequency. See the classical
treatment of this subject (expansion of electromagnetic fields on oscillators)
in the textbook of Landau and Lifshitz [39], page 52. Each normal mode itself
is thought of as a harmonic oscillator, and the energy of a field excitation is
𝜀𝑛= (𝑛+ 1/2)ℏ𝜔, i.e., corresponds to a 1D harmonic oscillator. An
important thing here is that the Maxwell equations already dictate the
correspondence between fields and particles invoking the necessity of the
harmonic oscillator model, which is a rather happy coincidence, because the
oscillator model is really unique.
Why is it unique? We may remember that the spectrum of a harmonic
oscillator has an especially simple structure: the ground state (vacuum) energy
is ℏ𝜔/2 and all the above levels are equidistantly spaced, with the energy
difference ℏ𝜔 between them. States having other energies are not allowed.
Such spectrum structure can be viewed as consisting of 𝑛 particles (or
quasiparticles), each having the energy ℏ𝜔. This is a natural interpretation
since each normal mode of field oscillations behaves as a free particle - it
has energy, momentum, can propagate, collide, be identified as a separate
entity. Thus, the quantum number 𝑛 of each normal mode is interpreted as
the number of particles. Indeed, when excited to the 𝑛-th energy state, the
oscillator contains 𝑛 identical quanta (photons, phonons, etc.) of the same
328
The Quantum World
energy - an interpretation which is possible only due to the equidistant
spacing of the oscillator spectral levels. In such cases, it is natural to introduce
raising and decreasing operators. One may note that a similar situation is
encountered in the theory of angular momentum where the eigenvalue 𝑀 of
the operator 𝐿𝑧 may take only equidistant values, 𝑀= −𝐿, … , 𝐿
corresponding to the eigenfunctions of the SO(2) type, exp(𝑖𝑚𝜑), where
𝐿(𝐿+ 1) determines the eigenvalues of the angular momentum. One may also
note that the raising and decreasing operators in the angular momentum
theory are acting on functions used in the Fourier transform. In the oscillator
case, the raising and decreasing operators are also connected with the Fourier
transform, e.g., the raising operator happens to coincide with the negative-
frequency part of the Fourier transformed vector potential and the decreasing
operator turns out to be its positive-frequency part. These operators,
producing the next or the previous eigenstates in the equidistant energy
ladder, may be readily interpreted as “creating” or “annihilating” the particle
with energy ℏ𝜔. We shall return to the creation and annihilation operators in
the following section, now I shall remind us of some basic mathematical facts
about the oscillator model.
What is the linear (harmonic) oscillator from the mathematical
viewpoint? In the traditional spectral theory - Sturm-Liouville - language, the
linear oscillator model is connected with the differential operator
𝐿= −𝑑2
𝑑𝑥2 + 𝑥2, 𝑥∈(−∞, ∞)
(6.21)
(for some transparency, we use the dimensionless variables here). This is a
simple form of a more general Schrödinger operator:
𝐿= −𝑑2
𝑑𝑥2 + 𝑞(𝑥), 𝑞(𝑥) ≥0
(6.22)
Obviously, we can replace the requirement of non-negativity of the
function 𝑞(𝑥) by the condition of semi-boundedness from below.
We have already discussed the harmonic oscillator in various contexts
and probably return to this model in future - the oscillator model is one of the
main physmatical nodes. I think it is important to notice once again that the
quantum description of the free electromagnetic field automatically leads to
photons - due to the combination of the wavelike structure of the Maxwell
equations with the equidistancy of the oscillator spectrum. This fact appears
to be undeserved luck. If the harmonic oscillator had been substituted by
some physical system whose mathematical model had not produced equal
spacings between energy eigenvalues or, due to some curious properties of
our spacetime (e.g., endowed by a Euclidean structure instead of affine so that
the world would have had a center), the free Maxwell equations could not be
reduced to the system of independent wave equations, 𝑝𝜇𝑝𝜇𝐴𝜈= 0, 𝑝𝜇=
−𝑖∇𝜇= −𝑖𝜕/𝜕𝑥𝜇, one still might have defined the creation and annihilation
operators, but each new particle would have had the properties, e.g., mass or
6.18
Quantum Field Theory
329
energy, depending on all other particles within the considered system. Thus,
it would be difficult to add a particle identical to those already present - e.g.,
all the photons would be different, which would have made the laser
impossible. Analogous difficulties would have accompanied the annihilation
process one could not have destroyed a particle leaving all others intact. In
principle, such effects could exist, at least on the Planck scale or, on the
contrary, in large-scale physics, where deviations from the classical Maxwell’s
field produce equations noticeably different from the classical wave equation
and, consequently, would not lead to the oscillator model. As far as the Planck
scale goes, this is a pure speculation, since physics should be totally different
on Planck length scales (𝑙𝑝≈1.6 ∙10−33 cm) at Planck energies, 𝐸𝑝=
ℏ𝑡𝑝
⁄
= (ℏ𝑐5/𝐺) ≈1.22 ∙1019GeV ≈2 ∙109J , where 𝑡𝑝= ℏ𝑚𝑝𝑐2,
⁄
𝑚𝑝=
ℏ𝑐/𝐺 and the gravity constant 𝐺= 6.674 ∙10−8𝑐𝑚3𝑔−1𝑠−2 , but an
extension of the Maxwell field quantization on cosmological scales, when the
background spacetime is no more flat, thus producing photon-like particles
may remain a task for future research.
6.18.6
Canonical quantization in QED
It is worth reflecting on the term “canonical quantization”. This term was
probably first coined by Pascual Jordan, one of the principal creators of
mathematical - mostly algebraic - tools of quantum mechanics (see Chapter
3). Recall that the symplectic structure of classical mechanics is also called the
canonical structure. In this context, the term “canonical” simply means that
one can introduce a certain skew-symmetric binary operation, called Poisson
brackets (see Chapter 3), between the dynamical variables specifying the state
in classical mechanics [84]. Each transformation between dynamical
variables such as coordinates 𝑥𝑖 and momenta 𝑝𝑗 leaving their Poisson
brackets intact is known as a canonical transformation in classical mechanics.
One might notice that using in parallel canonical quantization in QED and
path-integral quantization in more recent quantum field theories (such as
quantum chromodynamics - QCD) seemingly contradicts not only didactic
principles, but also scientific norms which would require applying the same
universal techniques in all theories. One can remark, however, that canonical
quantization has obvious shortcomings which probably motivated R.
Feynman to invent other approaches to field quantization in QED such as
diagram techniques and path integrals. For instance, canonical quantization
conceals an explicit Lorentz invariance. No wonder, since canonical
quantization was originally invented as an essentially nonrelativistic
technique.
6.18.7
Gauge Fields in Quantum Theory
When discussing the classical field theory in Chapter 5, we have seen that
the Maxwell equations are invariant under the gauge transformations,
𝐴𝜇→𝐴𝜇
′ = 𝐴𝜇+ 𝜕𝜇𝜒 where 𝜒 is the scalar gauge function. Physicists
usually say that classical electromagnetic theory is the 𝑈(1) gauge theory
meaning that it is described by the 𝑈(1) gauge group which is one-
dimensional and Abelian. We have also seen that gauge invariance in classical
330
The Quantum World
field theory is closely connected with the concept of charge conservation. In
the quantum world, gauge invariance plays an even more important role. I
have already mentioned that quantum electrodynamics (QED) is a particular
case of a field theory, and as such it also represents a whole class of theories
called the gauge theories. One can observe certain hints at gauge ideas already
in classical electrodynamics where one of the Maxwell equations (𝑑𝑖𝑣𝐄=
4𝜋𝜌) does not contain time derivatives and is therefore an “instant” relation.
It follows from this equation that the electric field would change at once in the
whole space, which would break causality, unless the charge is not conserved.
Here, one can already see a bridge to gauge theories.
The story of gauge fields usually clusters around the Yang-Mills
equations, although it seems to be more confused and fascinating. It is
interesting, by the way, that the Yang-Mills theory can be in principle
discussed before the introduction of any quantum field concepts.
Let us try to understand the main ideas of gauge fields in simple language.
All the particles of a given type may be treated as manifestations of some
quantum field which is characterized by the magnitude and the phase at each
point of spacetime. This picture evokes the image of an ocean, with individual
particles being similar to waves: the real thing is the field, and waves are just
observable excitations over some average level. Then each excitation viewed
as a particle has an attached phase i.e., a cyclic angle variable which may have
(and actually has) a local character: the phase should be updated as one
moves from one observation point to another.
6.18
Quantum Field Theory
331
7 Stochastic Reality
In this chapter, we shall deal with the systems whose state cannot be
characterized exactly, as one used to describe classical deterministic systems
of mechanics (Chapter 4). Systems we are going to observe here can be closed
or open, they may consist of many elements or of a small number of particles,
but the unifying characteristic of them is as follows: they admit only the
probabilistic description. We shall mainly discuss macroscopic bodies, i.e.,
those consisting of an enormous number of particles (in physics typically of
the order of 1024). Properties of such macroscopic bodies are to a large extent
determined by the collective behavior of constituent particles and may be
almost insensitive to details of individual interparticle coupling, like the
crowd behavior practically does not depend on individual human features or
attitudes. This means that laws of a different type arise, as compared to
dynamical laws of, e.g., classical mechanics. These laws are usually called
statistical (or, in some specific cases, stochastic) emphasizing the fact that
they are based on the notion of probability.
The concept of probability has a variety of interpretations, and it may
have a totally different meaning in a different context. For example, when
reality occurs to be intrinsically random as in simple dice throwing, classical
probabilities might be predicted from symmetry considerations. On the other
hand, quantum paths (see Chapter 6) and in general predictions of future
behavior is based on more complicated structural arguments. One may notice
that different theories and modeling structures require diverse notions of
probability. Since atoms and various atomic microparticles are subordinated
to the rules of quantum mechanics, the concept of probability lies in the very
nature of things.
The text of this chapter is divided into six sections. The first section
reiterates basic facts on the dynamics of finite particle systems: classical
Hamiltonian mechanics, the Liouville equation, and the evolution operator. In
the second section the Bogoliubov equations are introduced and the Cauchy
problem for them is formally solved. Next, two sections follow on equilibrium
states of systems of infinitely many particles in the frame of the canonical and
the grand canonical ensembles. The main subjects are here the existence and
the uniqueness of limit distribution functions as well as spectral and
topological properties of evolution operators. The fifth section treats the
thermodynamic limit for non-equilibrium systems. This is possible, however,
only for spatially homogeneous systems, the whole field of transport
processes as a paradigm for non-equilibrium systems being in general outside
the scope of the book. Some concepts of statistical theory of open systems are
reviewed here. Finally, the notion of deterministic chaos is discussed, and its
implications are briefly overviewed.
332
Stochastic Reality
7.1 Thermodynamics: the Study of Paradoxes
The study of statistical systems, i.e., consisting of a large number of particles,
starts with the question: what are the right variables to describe the behavior
of statistical systems? Thermodynamics provides the simplest answer to this
question.
I have always perceived the second law of thermodynamics as a
frustrating paradox: on the one hand, all the systems should tend to the
morgue state of absolute equilibrium while on the other hand there are plenty
of examples in nature demonstrating in no way the continuous
disorganization or progressive decay, but just the opposite, evolution and self-
organization. All physicists seem to be aware of this paradox, sometimes
formulated as the “heat death of the universe”, but probably each physicist
has her/his own version of treating it (if one thinks at all about such highly
theoretical problems - or pseudo-problems). The matter is that if the universe
(or multiverse) has existed for a very long time, in some models infinitely long,
then it must have reached the equilibrium state. Nevertheless, the world as a
whole as well as its separate subsystems are apparently in a state rather far
from thermal equilibrium, and one cannot observe the processes that would
bring the universe into the equilibrium state. One possible solution to this
paradox is that statistical physics and thermodynamics cannot be applied to
the world as a whole. (Why then can quantum mechanics be applied?)
Probably, this inapplicability is related to the fundamental role played by
gravity forces, for which even the Gibbs distribution can be introduced only
with certain difficulties due to emerging divergences169. However, one cannot
be sure that the declaration of non-validity of the usual statistical physics and
thermodynamics for gravitating systems is an ample explanation of the
absence of equilibrium state in the universe. Another popular explanation of
the heat death paradox consists in the consideration that the metric of general
relativity depends on time and therefore the universe is submerged into a
non-stationary field [24].
The real implication of the “heat death” paradox is that, besides asking a
generally interesting though a typically scholastic question, one can easily
transgress the limits of applicability of classical thermodynamics, without
even noticing it. The problem with thermodynamics is that it tends to be
extrapolated over the areas where it cannot be applied. Strictly speaking,
classical thermodynamics is valid only for equilibrium states of the explored
systems. However, the equilibrium conditions are only an idealization, a
169 Free energy F/N counted per one particle diverges in the presence of gravitational
interaction due to the long-range character of the gravitational forces (see, e.g., [220]). Moreover,
this quantity diverges also on small distances, but one can get rid of these divergences by
restricting the possibility for the particles to condense, e.g., by introducing the exclusion principle
at zero separation. Long-range forces imply a strong coupling of distant domains of the system,
even widely separated. It is interesting to notice that similar divergences could be observed in
the presence of Coulomb interaction, but they are removed due to the neutrality of Coulomb
systems containing charges of opposite sign. This is impossible for gravity because mass is always
positive.
7.2
Statistical Way of Thinking
333
mathematical model and can be seldom encountered in nature.
Thermodynamics is in essence a branch of physics studying the collective
properties of complex
steady-state systems.
The
main
part
of
thermodynamics is purely phenomenological and may be treated completely
independently from mainstream - microscopic - physics. Phenomenological
thermodynamics historically served the industrial revolution, mostly driven
by the steam engines. These alluring machines produced a euphoria in the
then existing society, similar to that created by today’s digital gadgets. Steam
machines fascinated not only the engineers, even such outstanding scientists,
mainly physicists, as N. S. Carnot, E. Clapeyron, R. Clausius, J. B. J. Fourier,
H. Helmholtz, J. Joule, Lord Kelvin (W. Thomson), J. C. Maxwell who were
trying to formulate the laws governing the work of heat engines in a
mathematical form. The most challenging task was to correctly describe
the conversion of heat into mechanical work. Later L. Boltzmann, J. W.
Gibbs, and M. Planck provided links from thermodynamics to the major
body of physics.
Although time is not included in the set of thermodynamical variables,
classical thermodynamics states that we live in a universe that becomes more
and more disordered with time. The second law of thermodynamics, despite
its apparently soft - statistical - character, is one of the hardest constraints
ever imposed by the science on life and technology. By the word “soft” I denote
the tendency to verbal formulations of the second law of thermodynamics.
Historically starting from heat engines, classical thermodynamics still
investigates the processes when energy, heat, and mass can be exchanged.
There are two main laws governing such processes. While the first law of
thermodynamics postulates the strict energy balance, the second law states
that for all thermodynamic processes some portion of energy is necessarily
converted into heat and dissipates in the environment. Such dissipation is
irreversible: to undo it, one must spend some energy. Thus, losses are
unavoidable. This is the informal gist of the second law of thermodynamics
(without introducing a rather counterintuitive concept of entropy). Then
there are some grave consequences of the second law such as the limited
efficiency of heat engines or irreversibility of erasing the data in computer
memory: this latter process generates heat [221].
7.2
Statistical Way of Thinking
The thermodynamical - in fact thermostatistical - approach to many-particle
systems is simple but unsatisfactory because the crucial question: “how can
the values of thermodynamical variables be derived from the equations of
mechanics, classical or quantum?” is irrelevant. Thermodynamics was
constructed as a totally independent discipline, an engineering isle whose
study does not imply any knowledge of physics. Quite naturally, physicists
were frustrated by such isolated standing of thermodynamics, and the first
expansion of physicists into the field of thermodynamics was in the form of
attempts to interpret such quantities as heat, work, free energy, entropy on a
level of individual moving molecules.
334
Stochastic Reality
Derivations of thermodynamical quantities from molecular models of
physics are not at all easy, even for such a comparatively simple system made
up of many particles as a rarefied gas. Three great physicists, all of them very
proficient in mathematics, were the first to apply probabilistic concepts and,
accordingly, to use the term “statistical” in the context of thermodynamics.
Those were (in chronological order) J. C. Maxwell, L. Boltzmann, J. W. Gibbs.
Due to a greatly reduced number of variables, the behavior of statistical
systems may look simple, but this is deceptive. Statistical irreversibility is a
clear symptom of serious difficulties.
The apparent time reversibility of Newtonian dynamics formally allows
us to observe many dramatic events such as the gas leaving the whole volume
empty (say 1 𝑚3) it had just occupied. Time reversibility simply requires the
gas molecules to follow trajectories back in time in the phase space (see
Chapter 9 on the “Time Arrow”). However, nobody has ever observed such
events, at least we believe so. The matter is that these time reversed paths as
compared to “direct”, those that result in filling up a prepared vacuum, though
in principle possible, are characterized by such a tiny probability as to render
any observable macroscopic effect unfeasible. Similarly, we would not expect
a spontaneous heating of some macroscopic body being initially at room
temperature (even though there are not so few people who continue
designing engines working on the principle of heat transfer from cold to warm
bodies). We could perhaps expect this to occur in a nanoscopic system
consisting of a small number (𝑁≤10) of molecules - for a short period of
time, but the experience shows that a spontaneous heating or cooling of a
considerable piece of matter (𝑁≫1), which would persist for a long period,
e.g., sufficient to produce mechanical work, is considered an abnormal state
and has never been observed. Such phenomena are of a fluctuative nature and
can be modeled mathematically using an assortment of fluctuation theorems
[161-163] (Jarzynski, Wojcik, Evans, Searles).
One might ask, why should we be concerned with rare events that can
happen only on molecular time and distance scales and become almost totally
improbable when we try to observe macroscopic phenomena? The answer is
that these rare microscopic events are important for understanding the whole
statistical thermodynamics. Besides, new experimental techniques have
made microscopic and nanoscopic scales accessible to observation so that
fluctuations and their probabilities which had been irrelevant before are
becoming important for understanding experiments.
The foundations of statistical mechanics are a topic which has been in the
focus of scientific interest since the times of Boltzmann. One distinct problem,
namely the foundation of classical statistical mechanics of continuous
systems, i.e., of systems of many particles moving in a continuous phase space
as opposed to lattice systems, is touched upon in this book. The treatment
method consists in the solution of the Bogoliubov (BBGKY - Bogoliubov, Born,
Green, Kirkwood, Yvon) equations, an infinite system of integro-differential
equations for the many-particle distribution functions.
The field of statistical continuum mechanics, viz. the statistical theory of
continuous media, is not touched here.
7.3
Statistical Equilibrium
335
Unfortunately, recent developments of chaos theory and ergodic theory
so important for the foundations of statistical mechanics are taken into
account only superficially. Likewise, other competing methodologies, e.g., of
the Prigogine school, modern projection operator techniques or the
information theoretical approach are mentioned only briefly.
7.3
Statistical Equilibrium
In the foundation of equilibrium statistical mechanics lies some formal
expression called Hamiltonian (see Chapter 4), which must help us to find the
probability distribution for any random physical process - and most physical
processes are in fact random. For example, gas in some closed volume gives
an elementary example of a random physical process; there may be less trivial
systems, for instance, with interaction between the constituents. Since no
physical system, probably except the entire universe, is isolated from the
environment, one must account for its influence, e.g., by introducing some
“physically natural” supplementary conditions. For example, initial or
boundary values must be fixed, subjugation to which turns the physical
system into a model.
We see that there exists a natural hierarchy of time scales (already
mentioned in association with the hierarchical multilevel principle of
mathematical modeling), this fact playing a crucial part in statistical
mechanics as it determines the behavior of distribution and correlation
functions. In particular, after time 𝜏0, correlations between the particles are
drastically weakened and many-particle distribution functions turn into the
product of single-particle distributions 𝑓(𝐫, 𝐩, 𝑡) (the principle of correlation
weakening was introduced by N. N. Bogoliubov) whereas for 𝑡> 𝜏𝑟≫𝜏0, a
single-particle distribution function tends to the equilibrium Maxwell-
Boltzmann distribution170. In other words, the whole many-body system for
𝑡≫𝜏𝑟 reaches the state of statistical equilibrium so that the respective many-
particle distribution functions tend to the canonical Gibbs distribution,
𝑓𝑁(𝑥1, … , 𝑥𝑁, 𝑡) →exp[𝛽(𝐹−𝐻(𝑥1, … , 𝑥𝑁))] , where 𝛽= 1/𝑇 is the inverse
temperature, 𝐹 is the free energy and 𝐻(𝑥1, … , 𝑥𝑁) is the system’s
Hamiltonian. In other words, the single-particle distribution function
𝑓(𝐫, 𝐩, 𝑡) substantially changes over time scales 𝑡 𝜏𝑟≫𝜏0 whereas at the
initial stage of evolution this function remains practically intact. Yet many-
particle distribution functions can change very rapidly at short times
comparable with the chaotization period 𝜏0. Physically, one can understood
this fact by considering spatially uniform systems with pair interaction
between the particles, when many-body distribution functions depend on the
coordinate differences of rapidly moving constituents. It is intuitively
plausible that many-particle distribution functions would adjust to instant
values of a single-particle distribution. To translate this intuitive
consideration into the mathematical language one can say that for 𝜏𝑟> 𝑡≫
170 One can imagine even tinier time scales in a many-body system, namely 𝜏0~ 𝜏𝑟/𝑁, where
𝑁 is the number of particles in the considered part of the system.
336
Stochastic Reality
𝜏0 (intermediate asymptotics) many-particle distribution functions become
the functionals of a single-particle distribution function
𝑓𝑁( 𝑥1, … , 𝑥𝑁, 𝑡)
𝑡≫𝜏0
→ 𝑓𝑁[ 𝑥1, … , 𝑥𝑁; 𝑓(𝐫, 𝐩, 𝑡)]
so that the temporal dependence of many-particle distributions is now
determined by the single-particle function.
This idea (expressed by N. N. Bogoliubov) is rather important since it
leads to a drastic simplification of the models describing many-body systems.
In particular, although the respective distribution functions formally depend
on initial data for all the particles, after a rather short time (𝜏0~10−12 −10−13
s) this dependence becomes much simpler since its relics are only retained in
the relatively smooth single-particle function 𝑓(𝐫, 𝐩, 𝑡). One usually applies to
this situation the notion of “erased memory” designating asymptotic
independence of many-particle distribution functions on precise values of
initial data – a huge simplification since initial values of all coordinates and
momenta are never known exactly and, even if known, would be completely
useless. In the modern world, the idea of fast forgotten details of microscopic
instances, in fact of insensitivity to microscopics, has become especially useful
for mathematical modeling of complex systems.
7.4
Statistical Ensembles
Let us start by recapitulating the basic notions of the idea of statistical
ensembles. In conventional statistical mechanics, there has been a strong
expectation that an ensemble average can correctly describe the behavior of
a single particular system. Despite numerous attempts, there seems to be no
rigorous mathematical proof for applying statistical ensembles to an
individual observed system. The ensemble methodology lying at the very
foundation of statistical physics still has the status of a logical presumption
rather than of a compelling mathematical fact. A variety of good samples for
concrete ensembles can be satisfactory from the physical point of view but
require a more ample treatment for a cultivated mathematical taste. In
physical terms, the default of ensemble methodology would mean that some
quantities for an observed system may substantially deviate from the
ensemble averages. Nonetheless, it would not be a tragedy; on the contrary,
such deviations can provide important physical information about the
observed system.
In the model of a microcanonical ensemble, one basically considers an
isolated system, which is in fact not very interesting from a practical point of
view. Explaining this requires a reminder of basic thermodynamics, which
probabilists call large deviation theory. In the microcanonical ensemble one
considers the number of microstates of an isolated system at some fixed value
of internal energy 𝑈, volume 𝑉 and other extensive conserved quantities. The
logarithm of this number of microstates is the entropy 𝑆 (it is convenient to
set the Boltzmann constant 𝑘𝐵= 1) and by inverting this function one obtains
the internal energy 𝑈(𝑆, 𝑉, … ). From this so-called thermodynamic potential
7.5
The Bogoliubov Chain
337
all other thermodynamic quantities (temperature, pressure, heat capacity
and so on) can be computed as a function of the extensive quantities.
A kinetic equation is a mathematical model allowing one to find
(approximately!) the distribution function for the statistical ensemble.
7.5
The Bogoliubov Chain
In this section, we are going to study mainly the stationary solutions of the
Bogoliubov hierarchy equations. In literature this chain of equations - an
infinite system of integro-differential equations for the many-particle
distribution functions - is generally known as Bogoliubov-Born-Green-
Kirkwood-Yvon (BBGKY) hierarchy. For systems in a finite volume these
equations are equivalent to the dynamical Liouville equation and characterize
the time evolution of the probability measure on the phase space of a finite
number of particles. Performing the thermodynamic limit, one obtains the
infinite chain of the Bogoliubov hierarchy equations which are related to a
system of particles in the whole space. As near as I know, the problem of
existence and uniqueness for this chain of equations has not been solved so
far. It is natural to connect the stationary solutions of the Bogoliubov
hierarchy equations with the states of an infinite system of particles (i.e.,
probability measures defined on the phase space) which are invariant with
respect to time evolution. In the cases where the dynamics on the phase space
has been constructed it is possible to demonstrate that any invariant measure
satisfying further conditions of a general type generates a stationary solution
of the Bogoliubov hierarchy equations. On the other hand, an immediate
analysis of stationary solutions of the Bogoliubov hierarchy equations (unlike
the invariant measures) does not require, in general, the use of such delicate
dynamical properties as clustering. Apparently, the point is that only
functions of a finite (although not bounded) number of variables enter the
Bogoliubov hierarchy equations. One can consider these functions (the
correlation functions) as integral characteristics of a measure and their
behavior must not necessarily show the influence of singularities arising from
the motion of individual configurations of an infinitely large number of
particles. Thus, the approach based on the Bogoliubov hierarchy equations
seems not only to be more general but also more natural from the physical
point of view.
We shall also discuss the derivation of the kinetic equation for a classical
system of hard spheres based on an infinite sequence of equations for
distribution functions in the Bogoliubov (BBGKY) hierarchy case. It is known
that the assumption of full synchronization of all distributions leads to certain
problems in describing the tails of the autocorrelation functions and some
other correlation effects with medium or high density. We shall discuss how
to avoid these difficulties by maintaining the explicit form of time-dependent
dynamic correlations in the BBGKY closure scheme.
The question usually is how to obtain hydrodynamic equations (Euler,
Navier-Stokes) from the Liouville-type equations of Hamiltonian mechanics,
classical or quantum. The original idea was due to Ch. Morrey (1956) who
introduced a concept of a hydrodynamic limit and was able to formally derive
338
Stochastic Reality
an Euler equation from the classical Liouville equations (more precisely, from
the corresponding BBGKY hierarchy). However, Morrey had to make some
assumptions about the long-term behavior of the motion, and this included a
statement on ergodicity, in the sense that all ‘reasonable’ first integrals are
functions of the energy, linear momentum and the number of particles. Since
then, the idea of a hydrodynamic limit became very popular in the literature
and has been successfully applied to a variety of models of (mostly stochastic)
dynamics relating them to non-linear equations. However, in the original
problem there was no substantial progress until the work by S. Olla, S. R. S.
Varadhan and T. Yau (1992) where Morrey’s assumptions were replaced by
introducing a small noise into the Hamiltonian (which effectively kills other
integrals of motion), and a classical Euler equation was correctly derived. In
some quantum models (e.g., of the Bohm-Madelung type) the hydrodynamic
limit can be rigorously demonstrated. The resulting Euler-type equation is
similar to the one that arises for the classical counterpart of these models.
This suggests that perhaps classical and quantum hydrodynamic equations
must look similar if they are written for local densities of ‘canonical’
conserved quantities (the density of mass, linear momentum and energy).
7.6
Chaotic Behavior
We have already discussed chaotic systems in Chapter 4, in connection with
nonlinear dynamics and elementary stability theory, where “chaos” became
the code word for nonlinear science. In that case, we can speak of
deterministic chaos, a transition to randomness due to sensitive dependence
on initial conditions – experimentally or computationally indistinguishable
initial states eventually evolve to states that are far apart (in the phase space).
Chaotic dynamics bridges regular evolution of complex systems with the
random one. However, here, in the spirit of stochastic reality, we shall put
more accent on probabilistic features of chaotic behavior.
In Chapter 2 we have seen that the notion of time is crucial for dynamical
systems, which evolve into the future given the data related to the actual state
specified at some particular moment. One might have noticed the drastic
difference between the steady-state situations, for instance Kepler’s laws, and
evolutionary processes: one is unable to determine the state or geometry of a
dynamical system unless one stipulates some input data at a specified
moment of time. To obtain the steady-state picture, e.g., Kepler’s laws, one has
to exclude time so, strictly speaking, no dynamics is left or it is hidden deeply
enough to be unnoticed (for example, by ancient astronomers). The notion of
time becomes relevant again only when one starts considering the stability of
Kepler’s static celestial geometry. We shall see below that this consideration
leads to very powerful results such as the famous KAM (Kolmogorov-Arnold-
Moser) theorem. One can, in principle, test all steady-state solutions on
stability and even chaoticity; the elliptic trajectories of the planets in the Solar
System may prove unstable or even chaotic after such an analysis. However,
the perennial question is: what would be the timescale during which the
instability or chaos might evolve? By the way, if and when such chaos evolves,
7.6
Chaotic Behavior
339
the time-reversal invariance of the regular dynamical behavior of Newtonian
gravitational motion and consequently Kepler’s laws would be lost.
Let us, for simplicity, consider first classical statistical mechanics. In fact,
a consistent exposition of statistical mechanics requires a quantum treatment
anyway, but to elucidate the main points we shall for the time being stay with
classical model. We have seen that classical statistical mechanics typically
considers systems of 𝑁 structureless particles (material points) in a 𝑑-
dimensional space moving along classical (Newtonian) trajectories in
continuous time 𝑡. The number of particles is usually considered very large,
but still finite so that the concept of the phase flow widely used in classical
mechanics can be applied without excessive mathematical precautions. Recall
that the term “flow” is usually understood in classical mechanics as a
continuous one-parametric group of transformations of the phase space (on
the phase manifold). In most classical problems of physics, parameter𝑑=
1,2,3. Here, I would like to recall that in classical statistical mechanics the
entire phase space of the many-particle system is called Γ -space, which
contains 2𝑁𝑑 dimensions whereas the phase space of a single particle is
commonly known as a 𝜇-space. So, we see that in classical mechanics and
hence in classical statistical mechanics the dimensionality of the phase space
and the number of degrees of freedom differ by the factor of 2 which is, of
course, a trifle but still not very convenient from the positions of dynamical
systems theory. Recall that in this latter case one usually does not discriminate
coordinates and momenta, they are equal components of the vector function
𝑢= (𝑢1, … , 𝑢𝑛) whose evolution governed by the vector equation 𝑑𝑢𝑑𝑡
⁄
=
𝑓(𝑢, 𝑡) is considered (see Chapter 4). In statistical mechanics, 𝑛= 2𝑁𝑑. This
is of course trivial, but one must be careful.
Notice that even for the simplest possible statistical system of 𝑁
structureless material points some nontrivial questions arise, for instance,
how can one reconcile the time-reversible equations of motion for a single
particle with the obviously irreversible macroscopic properties of a many-
particle system? This paradox, included in the collection of Perennial
Problems, has been reflected in the metaphor of the “arrow of time” (see
Chapter 9).
In quantum theory there exist some additional reasons for the breach of
time-reversal symmetry, one of them being the issue of quantum
measurement that is not fully clarified – at least within the unitary scheme of
orthodox quantum mechanics. However, this issue is a special problem, and
its discussion would require much space so that we shall not deal with the
arrow of time in the present manuscript. Notice only that in dynamical
systems theory, which can be considered as an intermediate between
mechanics and macroscopic physics, the arrow of time i.e., the breakdown of
symmetry between past and future appears quite naturally in view of
instabilities and their limit case –chaotic behavior.
In the quantum world, the reason for the breach of time reversal
symmetry is often ascribed to the process of quantum measurement, which
we shall not consider here in detail. When one is solely focused on unitary
quantum dynamics, one typically treats quantum evolution as completely
340
Stochastic Reality
time invertible. This is, however, a somewhat naive simplifying assumption
since the Hilbert space where quantum dynamics occurs is not necessarily
filled with complex conjugated states (of thermo-isolated systems).
In fluids, manifestations of stochasticity are generally called turbulence,
in this case chaotic behavior corresponds to the so-called Lagrangian
turbulence (see, e.g., [305]). It is, however, important that both the advection
and, more generally, the fluid flow dynamics within the Lagrangian
framework can, due to the possibility of representing them in the dynamical
systems form, be treated as Hamiltonian systems. The role of the Hamiltonian
in fluid dynamics is in the two-dimensional case played by the deterministic
stream function Ψ(𝐱∥, 𝑡) ([306,307]), where 𝐱∥= (𝑥1, 𝑥2) ∈Ω2 . In a
particular case when the 2d domain Ω2 occupied by the flow lies in the
Euclidean plane with Cartesian coordinates 𝑥1 = 𝑥, 𝑥2 = 𝑦 on it, we have the
Hamiltonian (symplectic) form of the motion equations 𝑥̇ = 𝑣𝑥(𝑥, 𝑦, 𝑡) =
∂Ψ/ ∂𝑦, 𝑦̇ = 𝑣𝑦(𝑥, 𝑦, 𝑡) = −𝜕Ψ/ ∂𝑥. Here the domain Ω2 ≡Ω2+1 of variables
(𝑥, 𝑦, 𝑡) corresponds to the extended phase space. In other words, the
nonstationary planar flow is described by the Hamiltonian dynamical system
with 1.5 degrees of freedom. Notice, however, that the stream function
Ψ(𝑥, 𝑦, 𝑡) is a somewhat strange Hamiltonian: a pair of “canonical” variables
(𝑥, 𝑦) are just local coordinates, and they are not canonically conjugate as, e.g.,
coordinate and momentum in mechanics.
8.1
Interaction of Electromagnetic Radiation with Matter. General
Concepts
341
8 Radiation and Matter
This chapter is devoted to a loose description of theoretical approaches to the
interaction of radiation, both electromagnetic and corpuscular, with matter.
Other possible types of radiation are not considered in this chapter (and in
the book in general). The term “matter” implies in the context of the present
chapter a macroscopic (averaged) ensemble of point electrons and nuclei
combined, which constitute the medium - it may be, for example, gas, plasma
or solid state. Averaging, to be discussed below, occurs at distances
substantially larger than interparticle (𝑛−1/3) or atomic ones (≅10−8 so that
nuclei, electrons and other atomic particles can really be regarded as point-
like. Thus, the described theory is inapplicable at smaller distances. I shall
generally regard the matter as being non-relativistic, only some restrained
comments will be made about relativistic effects in connection to the passage
of fast charged particles through the matter. Interaction of hard gamma
radiation (ℏ𝜔≥2𝑚𝑒𝑐2, 𝑚𝑒= 9.1 ∙10−28g is the electron mass) with matter
is not treated in this book, therefore creation of particles (pair production) is
ignored. To limit the scope of the book as well as for simplicity sake, I also do
not discuss creation of pairs by superstrong electromagnetic (laser) fields,
although such effects may be important considering the power densities
achieved
today
(
𝑃> 1021𝑊/𝑐𝑚2
,
see,
e.g.,
http://space.newscientist.com/article/dn13634-powerful-laser-is-
brightestlight-in-the-universe.html).
In
general,
relativistic
quantum
phenomena typically studied in quantum field theory are largely ignored in
this chapter. Here, the matter is described by the means of standard non-
relativistic quantum mechanics, while the electromagnetic field is treated
classically using the Maxwell equations (without second quantization, see
Chapters 5, 6). Such an approach is usually called semiclassical.
The chapter is divided into two parts, one related to the electromagnetic
field (mainly radiation) in material media, the other to the passage of particles
(mostly charged ones) through the matter. Since the treatment of the
interaction of corpuscular radiation with matter to a large extent uses the
concepts developed in electromagnetic (EM) theory, I decided to put first an
overview of the interaction of the electromagnetic field with material media.
8.1
Interaction of Electromagnetic Radiation
with Matter. General Concepts
A general description of mechanisms underlying the interaction of
electromagnetic radiation with matter is based on the concept of
electromagnetic response and is performed, respectively, in terms of
response functions. The notion of response functions generalizes the
description of electromagnetic response in terms of susceptibilities (both
linear and nonlinear), permittivity, permeability, conductivity, Coulomb
screening, mean and local field, etc., all listed notions being specific cases of
342
Radiation and Matter
electromagnetic response functions. The quantum mechanical description of
matter allows one to construct a linear theory which, assuming that spectral
properties of the matter are known, enables us to determine its response to
an external field (e.g., in terms of susceptibilities, even nonlinear ones). On the
base of such a theory, simple models using specific examples of the media can
be constructed and some primitive representations of otherwise complex
systems (gas, plasma, solid, liquid, etc.) can be discussed and tested.
Nevertheless, one cannot imagine that the problem of field-matter
interaction can be easily decomposed into two subproblems, one related to
the matter and the other to the field. The response of the matter to an
electromagnetic field is usually a self-consistence problem, which implies
both the effect of the field on the matter and a modification of the external
field by the matter. The self-consistence approach is widely used in
macroscopical electrodynamics and optics (both linear and nonlinear), see
Chapter 5. In these disciplines the so-called material constants are
phenomenologically introduced. Ideally, these phenomenological material
constants - dielectric permittivity, magnetic permeability µ171, conductivity σ,
refractive index n, susceptibilities 𝜒(1) (linear) and 𝜒(𝑘), 𝑘= 2,3, … - should be
determined as functions of the fundamental constants alone: the mass 𝑚𝑒 and
charge 𝑒 of the electron, the velocity of light 𝑐, Planck constant ℏ, atomic
numbers 𝑍, 𝑁, and atomic mass 𝑀. In the statistical description of matter, the
temperature 𝑇 is also an essential parameter. In the energy scale, however, 𝑇
is a small parameter172 so that the factor exp(−ℏ𝜔/𝑇), where ℏ𝜔= 𝐸𝑛−
𝐸𝑚, 𝐸𝑛, 𝐸𝑚 - atomic energy levels, arising from the averaging over the Gibbs
ensemble (see below; see also Chapter 7) is often quite small.
We have already discussed in Chapter 5 the description of the
electromagnetic (EM) field within the framework of classical theory. Exact
values of electric 𝐄(𝐫, 𝑡) and magnetic 𝐇(𝐫, 𝑡) components of the EM field at
some space-time point (𝑡, 𝑟) = (𝑥0, 𝑥1, 𝑥2, 𝑥3) satisfy the Maxwell equations
(see Chapter 5), where inhomogeneous terms proportional to charge and
current densities,
𝑙𝜌(𝐫, 𝑡) = ∑𝑒𝑎𝛿(𝐫−𝐫𝑎(𝑡))
𝑎
(8.1)
𝐣(𝐫, 𝑡) = ∑𝑒𝑎𝐯𝑎𝛿(𝐫−𝐫𝑎(𝑡))
𝑎
,
(8.2)
171 Magnetic permeability is not necessary when a spatial dispersion is taken into account, see
below. It is interesting that the term “magnetic permeability” was probably introduced by O.
Heaviside.
172 In this book, I shall measure the temperature in energy units so that, e.g., room
temperature, 𝑇~102𝐾,
would correspond
to 𝑇~10−14erg~10−2eV ≪𝑚𝑒𝑒4/ℏ2 ≈4.3 ∙
10−11erg ≈27.2eV, where 𝑚𝑒𝑒4/ℏ2 is the characteristic atomic energy value constructed from
fundamental constants. The physical meaning of this inequality is that a typical quantum system
at room temperature remains in the ground state since thermal excitation is insufficient to
overcome the spacing between atomic energy levels. This weakness of thermal excitation is,
however, not true for quantum systems with a dense energy spectrum such as complex
molecules. Furthermore, in a hot plasma 𝑇 is often higher than 𝑚𝑒𝑒4/ℏ2.
8.1
Interaction of Electromagnetic Radiation with Matter. General
Concepts
343
where 𝐯𝑎= 𝐫̇𝑎. These densities automatically obey the continuity equation
𝜕𝜌
𝜕𝑡+ 𝑑𝑖𝑣𝐣= 0.
(8.3)
We have already discussed the continuity equation and its meaning as the
conservation law. Recall that in a compact domain the continuity equation for
some quantity173 states that the amount of this quantity in the considered
domain varies to the extent of the quantity gain or loss through the domain
boundaries. In the case of charge conservation, however, the meaning of the
continuity equation is much deeper: it is connected with gauge invariance
(see Chapter 5). This invariance manifests itself, in particular, by the fact that
the continuity equation for electrical charges and currents is not independent
of the Maxwell equations (see [39], §29). One can even say that the continuity
equation for electrical charges and currents is a trivial mathematical
consequence of the Maxwell equations; a more accurate statement would be
that the Maxwell equations are adjusted to or agreed with the continuity
equation, that is to say they are constructed in such a way as to automatically
provide charge conservation.
A model of some experimentally produced setting in radiation-matter
interaction, accounting for quantum effects, would consist in the treatment of
a quantized electromagnetic field created by atomic sources and propagating
in the presence of macroscopic bodies or surfaces. For example, such a scheme
can be applied for modeling typical quantum-optical experiments when
radiation passes through various optical instruments. The latter may be active
(i.e., nonlinear) or passive, anyway in the standard approach they may be
regarded as dielectric bodies or surfaces having a certain type of geometric or
physical complexity. This complexity can be described in mathematical terms
and is typically accounted for by introducing the dielectric function that
depends both on radiation features, e.g., frequency and on spatial coordinates.
In spite of many attempts to construct a fully quantum theory of the medium
response to an electromagnetic field (see, e.g., one of the latest papers [260],
see also below), the approach based on the dielectric function is by necessity
semiclassical, being heavily based on the classical concept of the field in a
macroscopic medium. One may remark in passing that, e.g., nonlinear optics
in general seems to be an essentially classical (or semiclassical) discipline
because the coupled evolution equations for the light and matter fields are
extremely difficult - and perhaps unnecessary - to solve. Classical
electrodynamics of polarizable media seems to be quite adequate for
contemporary nonlinear optics, at least for its applied part.
173 This quantity can be of arbitrary nature, not only the charge but, e.g., the number of cars
on some part of the road.
344
Radiation and Matter
8.2
Field Energy Dissipation in Matter
𝑑𝑄
𝑑𝑡= 𝜔
8𝜋𝜖𝑖𝑗
′′𝐸∗𝑖𝐸𝑗
(8.4)
We can illustrate this general formula using a simple model of the field
energy absorption by an electron driven by the field of an electromagnetic
wave and from time to time colliding with other particles of the medium174.
We shall assume that such collisions are only elastic i.e., the total kinetic
energy (treated as a quadratic form) of colliding particles is conserved in each
individual collision and no excitation occurs. In other words, it is assumed
that energy is not dissipated into internal degrees of freedom. Only the
momentum of the colliding particles changes, with such changes taking
place abruptly i.e., the collision time 𝜏≪2𝜋/𝜔, where 𝜔 is the characteristic
frequency of the electromagnetic wave (actually, we may assume 𝜔𝜏≪1). The
assumption of negligibly short collision time is standard, but not always
correct. When the electromagnetic field cannot be conveniently represented
by a quasimonochromatic wave centered around a characteristic frequency
(for example, the particle is driven by the field of an ultrashort laser pulse),
then the assumption of abrupt collisions looks as 𝜏≪𝜏𝑝, where 𝜏𝑝 is the
characteristic time of pulse growth. Imagine for simplicity that electrons
collide only with the heavy centers of the medium namely atoms, ions, or
molecules whose masses 𝑀𝑎 are much larger than the electron mass 𝑚.
Usually, the parameter 𝑚/𝑀𝑎 is very small, 10−3 −10−4 which allows one to
ignore the motion or recoil of heavy scatterers with high accuracy. Then, since
collisions are assumed elastic, the energy of the electron in an individual
collision is conserved. Energy of the field is dissipated as an averaged
stochastic process due to multiple elastic collisions, this dissipation leading to
a gradual increase of the mean energy of electrons i.e., to the heating of the
electronic subsystem.
For an electron moving in a high-frequency harmonic field 𝐄=
𝐄0 cos 𝜔𝑡, we have 𝐩̇ = 𝐄0 cos 𝜔𝑡 so that
𝑝(𝑡) = 𝑝0 + 𝑒𝐸0
𝜔sin 𝜔𝑡
(8.5)
and the energy instant value in the field is
ℰ(𝑡) = 𝐩2(𝑡)
2𝑚= 1
2𝑚(𝐩0
2 + 2𝑒
𝜔(𝐩0𝐄0 sin 𝜔𝑡+ 𝑒2
𝜔2 𝐄0
2 sin2 𝜔𝑡)).
This instant energy oscillates near its mean value, which reflects the fact that
the particle exchanges electromagnetic energy with the field. In the quantum
language, this corresponds to constantly exchanging photons, some of them
being real other virtual. In a high-frequency harmonic field, we are interested
174 This example is discussed in more details in [222].
8.2
Field Energy Dissipation in Matter
345
of course not in the instant value of the particle energy, but in its energy
averaged over many field periods. This average energy is given by
ℰ−
̅̅̅̅ = 𝐩0
2(𝑡)
2𝑚+ 𝑒2𝐄0
2
4𝑚𝜔2 = ℰ0 + 𝑇𝑝,
where ℰ0, 𝐩0 are the initial energy and momentum of the electron, 𝑇𝑝=
𝑒2𝐸2 4𝑚𝜔2
⁄
is its average kinetic energy in the oscillatory motion (see, e.g.,
[23], 30); here lower index 𝑝 stands for “ponderomotive”, see the next section.
The minus sign at the energy symbol signifies “energy before collision”. To
simplify the model, we shall neglect small-angle scattering, assuming all
individual collisions to be “strong” i.e., resulting in drastic, large-angle
deflections. In the limiting case, the scattering angle 𝜃 in such collisions
reaches 𝜋 which corresponds to a complete reflection of the electron
momentum, 𝐩(𝑡0 + 𝜏) = −𝐩(𝑡0 −𝜏) where 𝑡0 denotes the moment of time
when the collision occurs. Using (8.5), we have
𝐩0 + 𝑒𝐄0
𝜔sin 𝜔(𝑡0 + 𝜏) = −𝐩0 −𝑒𝐄0
𝜔sin 𝜔(𝑡0 −𝜏)
or, exploiting the assumption 𝜔𝑡≪1,
𝐩0 + 𝑒𝐄0
𝜔sin 𝜔𝑡0 = −𝐩0 −𝑒𝐄0
𝜔sin 𝜔𝑡0
or, inserting 𝐩(𝑡) −𝑒𝐄0 sin 𝜔𝑡/𝜔 into the left-hand side instead of 𝐩0, we
get for the particle momentum in the wave after scattering by a heavy center
𝐩(𝑡) = −𝐩0 −2𝑒𝐄0
𝜔
sin 𝜔𝑡0 + 𝑒𝐄0
𝜔sin 𝜔𝑡.
Using this expression, we can calculate the electron energy in the wave
after collision
ℰ(⊔)+ = 𝐩2(𝑡)
2𝑚= 1
2𝑚((𝐩0 + 2𝑒𝐄0
𝜔) sin 𝜔𝑡0 −𝑒𝐄0
𝜔sin 𝜔𝑡)
2
.
This equation also shows that the electron is exchanging energy with the
wave. Making elementary transformations and averaging over many periods
of the wave, we obtain
ℰ+
̅̅̅̅ = 1
2𝑚((𝐩0 + 2𝑒𝐄0
𝜔) sin 𝜔𝑡0)
2
+ 𝑒2𝐄0
2
4𝑚𝜔2
= ℰ−
̅̅̅̅ + 2𝑒
𝑚𝜔𝐩0𝐄0 sin 𝜔𝑡0 + 2𝑒2𝐄0
2
𝑚𝜔2 sin2 𝜔𝑡0
346
Radiation and Matter
where ℰ−
̅̅̅̅ = ℰ0 + 𝑇𝑝 (see above). The energy transfer between the wave and
the particle is expressed as the difference
∆ℰ= ℰ+
̅̅̅̅ −ℰ−
̅̅̅̅ = 2𝑒
𝑚𝜔𝐩0𝐄0 sin 𝜔𝑡0 + 2𝑒2𝐄0
2
𝑚𝜔2 sin2 𝜔𝑡0 .
(8.6)
Notice that this energy transfer depends on the collision time 𝑡0, more
exactly, on the phase of the wave in this moment. This is the manifestation of
the fact that waves and particles exchange energy just because the collisions
randomly perturb the phase synchronism between them. In particular, the
particle may both take energy from the wave and give it to the field, depending
on phase 𝜔𝑡0 as well as the mutual orientation of vectors 𝐩0 and 𝐄0. One can
also notice that when writing the field acting on a particle in the form
𝐄= 𝐄0 cos 𝜔𝑡, we disregarded the field phase in the initial moment 𝑡= 0; it
would be more correct to write 𝐄= 𝐄0 cos(𝜔𝑡+ 𝜑). For some problems,
taking this initial phase into account may be essential [226]. We shall briefly
discuss some effects, in particular systematic drift associated with initial
phase of the wave, in the next section. If we, however, ignore the effects
connected with the phase of the wave, we can average (8.6) over all phases to
obtain
∆ℰ
̅̅̅̅ = 𝑒2𝐄0
2
𝑚𝜔2 = 4𝑇𝑝.
(8.7)
This average energy transfer from the field to the particle is strictly positive,
which means that the field is losing, and the particle is gaining energy in a
typical large-angle elastic collision. If the frequency of such collisions is 𝜈,
then the attenuation rate of an electromagnetic wave in the medium is
−𝑑ℰ
𝑑𝑡= 𝑒2𝐄0
2𝜈
𝑚𝜔2 = 4𝜈𝑇𝑝
where the collision frequency depends in general on the particle momentum
𝜈= 𝜈(𝐩) and may be calculated within the kinetic theory. Here, however,
we treat the collision frequency as a purely phenomenological quantity: using
a microscopic value for it within our crude estimates would be unacceptable
due to excessive accuracy. In most cases, 𝜈= 𝜈(|𝐩|) = 𝜈(𝑝). We may use the
phenomenological collision frequency to find the dielectric permittivity and to
estimate response functions for simple models.
The above semi-qualitative treatment demonstrates a heuristic
usefulness of the models based on the classical motion of individual particles.
Some experience shows that many intricate problems of radiation-matter
interaction can be successfully understood using the simple language of
classical single-particle models. Below we shall see more on this.
8.3
More on Charge in Electromagnetic Fields
347
8.3
More on Charge in Electromagnetic Fields
We may begin the study of radiation-matter interaction with the simplest
problem when matter is reduced to a single charged particle. The model of
individual particles interacting with an external electromagnetic field reflects
the situation when the matter (usually it is plasma or plasma-like medium)
has a rather low density so that the average energy of interaction of the
particles with the medium as well as between the particles is much lower than
the energy of their individual motion in the electromagnetic field. The
problem of a single charge in the field is usually studied already in the first
chapters of textbooks on classical electromagnetic theory (see, e.g., [84], ch.
3); it may be in fact rather intricate even at the classical level, and we, while
discussing this problem, shall emphasize certain details that require slightly
more knowledge than needed for reading initial-stage textbooks on
electromagnetism. Thus, the treatment of the discussed problem can be both
classical and quantum, and it is interesting that the classical treatment is in
most cases sufficient for the description of nearly all practically interesting
applications. A little further we shall discuss the applicability criteria of the
classical approach to describing the behavior of free charges in an
electromagnetic field, now let us start with the naive Lorentz model:
𝑑𝐩
𝑑𝑡= 𝑒[𝐄(𝐫, 𝑡) + 1
𝑐(𝐯× 𝐇(𝐫, 𝑡))] ,
(8.8)
where 𝐩 is the momentum of charge 𝑒 moving in the electromagnetic field
(𝐄,H). Notice that the Lorentz force contains the particle velocity which is a
kinematic variable and not the momentum which is a dynamic variable. This
physical fact is nontrivial and may be regarded as a consequence of countless
experiments. To make the vector equation (8.8) (system of equations) closed,
one ought to add the relationship between 𝐩 and 𝐯. Recall from classical
mechanics (see also Chapter 4) that the relationship between momentum
𝑝𝑗 considered as a variable in phase space and the rate of change 𝑣𝑖= 𝑥̇ 𝑖 is
not necessarily as trivial as it is assumed in the Newtonian model and, e.g.,
in more advanced Lagrangian version of mechanics is given by 𝑝𝑖= 𝜕𝐿𝜕𝑥̇ 𝑖
⁄
where 𝐿≔𝐿(𝑥𝑖, 𝑥̇𝑖, 𝑡) is the Lagrangian function which may be, in principle,
an arbitrary twice continuously differentiable function. So the differential
relationship between momenta and velocities does not necessarily give the
simple linear connection 𝑝𝑖= 𝑚𝑖𝑗𝑥̇ 𝑗, customary for Newtonian mechanics
(here 𝑚𝑖𝑗 is the mass tensor which, as we have seen, can assume the role of
metric tensor). One may observe the symptoms of more intricate than linear
relationship between momenta 𝑝𝑖 and velocities 𝑥̇ 𝑗 already in relativistic
mechanics where such a relationship is nonlinear
𝐩= 𝛾(𝐯)𝑚𝐯=
𝑚𝐯
(1 −𝑣2
𝑐2)
1/2
348
Radiation and Matter
and can be made linear only when the relativistic effects corresponding to an
expansion over small parameter 𝛽2 =
𝑣2
𝑐2 are disregarded. In the
nonrelativistic limit (𝛽→0), 𝐩= 𝑚𝐯 and the Newton-Lorentz equation
(8.8) takes a simple form convenient for calculations:
𝑚𝐯̇ = 𝑒[𝐄(𝐫, 𝑡) + 1
𝑐(𝐯× 𝐇(𝐫, 𝑡))] .
One can see that the Lorentz force is of the first order in 𝛽= 𝑣𝑐
⁄ , 𝑣=
|𝐯|, which is an important fact to be used further.
8.3.1
Interaction of a Particle with a Standing Wave
Let us, as a primary example, consider the motion of a free charged particle in
the field of a standing wave
{𝐄(𝐫, 𝑡)
𝐇(𝐫, 𝑡)} = {𝐄0(𝐫) cos 𝜔𝑡
𝐇0(𝐫) sin 𝜔𝑡}
A standing wave may be, e.g., formed by two almost monochromatic
( ∆𝜔𝜔
⁄
≪1 where ∆𝜔 is the effective spectrum width) waves with
frequencies 𝜔 and 𝜔′ ≈𝜔 traveling in opposite directions so that their wave
vectors 𝐤 and 𝐤′ satisfy the relation 𝐤+ 𝐤′ = 𝟎. Then the surfaces of equal
phase are planes which are fixed in space. The particle, in general, may cross
the standing wave structure at an arbitrary angle 𝜗 to such planes so that
both longitudinal and transversal momentum components, 𝐩∥ and 𝐩⊥, are
not necessarily small compared to the particle momentum 𝐩. Actually, the
amplitudes 𝐄0 and 𝐇0 are smooth envelopes 𝐄0(𝐫, 𝑡) and 𝐇0(𝐫, 𝑡)
depending in general on both time 𝑡 and position 𝐫. We shall, however,
assume that 𝐄0(𝐫, 𝑡) and 𝐇0(𝐫, 𝑡) change very little when we transit from 𝑡
to 𝑡+ 2𝜋𝑛/𝜔 (assuming also 𝑛 finite and not very large). Then we may
write 𝐄0(𝐫, 𝑡) ≈𝐄0(𝐫) and 𝐇0(𝐫, 𝑡) ≈𝐇0(𝐫).
The change of the time-harmonic dependence from cos 𝜔𝑡 in the electric
field to sin 𝜔𝑡 in the magnetic field (the 𝜋/2 phase shift) is consistent with the
Maxwell equation
∇× 𝐄+ 1
𝑐
𝜕𝐇
𝜕𝑡= 0.
In these relationships the initial phase has been put to zero as a seemingly
inessential parameter. This is not always the case. A little below we shall
discuss the role of the phase in the expressions 𝐄(𝐫, 𝑡) = 𝐄0(𝐫)cos(𝜔𝑡+ φ)
and 𝐇(𝐫, 𝑡) = 𝐇0(𝐫)sin(𝜔𝑡+ φ) and find out that the nonzero phase may
signify an additional drift.
In the above expressions the spatial amplitudes 𝐄0(𝐫) and 𝐇0(𝐫) are not
independent since due to Maxwell’s equations ∇× 𝐄0(𝐫) = −𝑘𝐇0(𝐫), 𝑘=
𝜔/𝑐. When the electromagnetic fields accelerating the charge are not very
strong (see Chapter 5), the motion remains nonrelativistic so that the
8.3
More on Charge in Electromagnetic Fields
349
magnetic field term in (8.8) can be disregarded and the equation of motion
takes the form
𝑚𝐫̈ = 𝑒𝐄0(𝐫) cos 𝜔𝑡.
Technically, this is a nonlinear ordinary differential equation in which
variables 𝑟 and 𝑡 can be separated. There are a number of ways how to solve
this equation; we shall start with the simplest approximation method when
the external field is initially regarded as homogeneous (coordinate-
independent), at least on a scale of the particle displacement due to the
Lorentz force i.e., 𝐄0(𝐫) = 𝐄0(𝐫0 + 𝛿𝐫(𝐄)) ≈𝐄0(𝐫0). Here 𝐫0 is the particle’s
initial position. In other words, we may think that during an oscillation period
the particle passes the distance on which field amplitudes 𝐄0(𝐫) and 𝐇0(𝐫)
do not vary substantially, 𝛿≔|𝛿𝐫| ℎ
⁄
≪1 , where ℎ is the characteristic
length of the spatial field variation. Then, in the zeroth approximation in 𝛿
we have
𝐫̇ = 𝐯= 𝑒𝐄0
𝑚𝜔sin 𝜔𝑡+ 𝐯0,
where 𝐯0 is the particle initial velocity and 𝐄0 = 𝐄(𝐫0) is the constant vector.
Further,
𝐫−𝑒𝐄0
𝑚𝜔cos 𝜔𝑡+ 𝐯0𝑡+ 𝐫0 ≡+𝛿𝐫.
Since we are considering the nonrelativistic situation175, we may assume that
the field-induced displacement 𝛿𝐫= 𝛿𝐫(𝐄) ≈𝛿𝐫(𝐄0) is small compared with
the field’s characteristic scale i.e., with the wavelength in the case of a
harmonic wave. Indeed, the displacement |𝛿𝐫| = 𝛿𝑟 reached over a half-
period of the wave is 𝛿𝑟~𝑒𝐸/𝑚𝜔2~𝑣/𝜔 so that 𝛿𝑟𝜆
⁄ ~ 𝑣𝑐
⁄ ≪1 i.e., in
nonrelativistic problems the field-induced displacement is small compared to
the wavelength. Of course, for the fields rapidly varying in space this
assumption may become invalid (see also below).
The particle of mass 𝑚 (for definiteness, we shall talk about electrons)
oscillates in the harmonic field with frequency 𝜔, gaining average kinetic
energy
ℰ= 1
2 𝑚𝑣2
̅̅̅̅̅̅ = 𝑒2𝐸2
4𝑚𝜔2
175 The criterion of validity for a nonrelativistic approximation in classical (non-quantum)
theory is 𝑒𝐄0 𝑚𝜔𝑐
⁄
≪1 which shows that, e.g., electrons may attain relativistic velocities for
electric fields accelerating the particles to their rest energy over the distance of the order of the
wavelength, 𝑒𝐸0𝜆~𝑚𝑐2 ≈0.5MeV. If quantum theory is considered, the classical theory for a free
electron interacting with the electromagnetic field is valid when the energy of a transferred or
radiated electromagnetic quantum is small compared to the rest energy, ℏ𝜔≪𝑚𝑐2.
350
Radiation and Matter
This is a standard forced motion of a particle under the influence of an
external harmonic field. Motion in oscillating fields, despite its apparent
simplicity, contains a number of intricacies, and we shall try to discuss some
of them in detail. For example, one usually tacitly assumes that it is only the
magnetic field that can confine charged particles. Wrong! One may be
surprised to find out that charged particles may be confined also by a
harmonically oscillating electromagnetic field, provided it is spatially
inhomogeneous, such a confinement occurring regardless of the charge sign.
So an inhomogeneous electromagnetic wave can be used to trap, accelerate,
and control charged particles.
To better understand some interesting effects accompanying the
interaction of electromagnetic fields with free charged particles, we shall as
usual start from simplified models gradually incorporating new features in
the equations. We have just obtained the simplest possible solution for the
particle moving in the field of a monochromatic (harmonic) standing wave in
the zero-order in relativistic parameter 𝛽= 𝑣/𝑐. In this approximation no
influence of the magnetic field is taken into account although the magnetic
field is present in any electromagnetic wave. Nevertheless, if a particle moves
nonrelativistically, neglect of the magnetic field in the wave which drives the
particle is fully justified.
Let us now see what happens in the first approximation in β, in case
the scale of spatial inhomogeneity of the field is determined by the
wavelength 𝜆= 2𝜋/𝜔 (in general, however, the field amplitudes may have
other spatial scales). Then we have to retain the term with the magnetic field
in the Lorentz force. Expanding slowly varying amplitudes 𝐄0(𝐫) and 𝐇0(𝐫)
near point 𝐫0 which may be interpreted as the initial position of the electrons,
we have in the first approximation in inhomogeneity parameter 𝛿=
𝑒𝐸0(𝐫0)𝜅/𝑚𝜔2, where 𝜅 characterizes the spatial inhomogeneity of the
field:
𝐄0(𝐫) ≈𝐄0(𝐫0) + (𝛿𝐫∇)𝐄0(𝐫0)
or, in component notation,
𝐸0
𝑗(𝑥𝑖) ≈𝐸0
𝑗(𝑥0
𝑖) + 𝛿𝑥𝑖𝜕𝑖𝐸0
𝑗(𝑥0
𝑖).
The length 𝑠= 𝜅−1 denotes the distance on which amplitudes 𝐄0 and 𝐇0
change significantly. Such a distance is often thought to coincide with the
wavelength of the field; this is, however, a very particular case. Only in this
special case expansion on the inhomogeneity parameter coincides with that
on relativistic parameter 𝛽= 𝑣/𝑐≈𝑒𝐸0 𝑚𝜔𝑐
⁄
.
Now insert these expansions into the motion equation, using the obtained
expression for 𝛿𝐫:
𝛿𝐫(𝐄𝟎) = 𝐯0𝑡−𝑒𝐄0(𝐫0)
𝑚𝜔2
cos 𝜔𝑡.
8.3
More on Charge in Electromagnetic Fields
351
Notice that the quantity 𝐀0 ≔𝑒𝐄0 𝑚𝜔2
⁄
has the meaning of the particle
oscillation amplitude in a monochromatic field, when all other factors
affecting the particle motion, e.g., collisions, are neglected. Later we shall take
collisions into account and see that in this case the amplitude for a harmonic
field
𝐄(𝐫, 𝑡) = 𝐄0𝑒−𝑖𝜔𝑡+ 𝐄0
∗𝑒𝑖𝜔𝑡
will take the form 𝐀0 ≔𝑒𝐄0 𝑚𝜔(𝜔+ 𝑖𝜈)
⁄
corresponding to the solution of
the motion equation
𝒓(𝑡) = −𝐀0𝑒−𝑖𝜔𝑡−𝐀0
∗𝑒𝑖𝜔𝑡
In other words, the particle amplitude in an electromagnetic field will acquire
additional terms (in real representation of the field) or becomes complex (in
complex representation), which physically corresponds to the losses of
electromagnetic energy and to phase shift i.e., retardation of the
electromagnetic response of an individual particle to an external
electromagnetic field. All these issues are closely connected with the
elementary theory of dielectric permittivity (see Chapter 5).
Now we get
𝑚𝐫̈ = 𝑒𝐄0(𝐫0) cos 𝜔𝑡+ 𝑒(𝐯0∇)𝐄0(𝐫0)𝑡−𝑒2
𝑚𝜔2 (𝐄0(𝐫0)∇)𝐄0(𝐫0) cos2 𝜔𝑡
+ 𝑒2
𝑚𝜔2
𝜔
𝑐(𝐄0(𝐫0) × 𝐇0(𝐫0)) sin2 𝜔𝑡
+ 𝑒
𝑐(𝐯0 × 𝐇0(𝐫0)) sin 𝜔𝑡
where for the Lorentz force we used:
𝑒
𝑐(𝐯× 𝐇(𝐫, 𝑡)) = 𝑒
𝑐[(𝑒𝐄0(𝐫0)
𝑚𝜔
sin 𝜔𝑡+ 𝐯0) × 𝐇0(𝐫0) sin 𝜔𝑡].
By using the Maxwell equation 𝐇0(𝐫0) = −(𝑐/𝜔)curl𝐄0(𝐫0) we get
after averaging over the field period:
𝑚𝐫̅̈ = 𝑒𝑡(𝐯0∇)𝐄0(𝐫0) −
𝑒2
2𝑚𝜔2 (𝐄0(𝐫0)∇)𝐄0(𝐫0)
+
𝑒2
2𝑚𝜔2
𝜔
𝑐(−𝑐
𝜔) 𝐄0(𝐫0) × (∇× 𝐄0(𝐫0))
or
𝑚𝐫̅̈ = 𝑒𝑡(𝐯0∇)𝐄0(𝐫0) −
𝑒2
4𝑚𝜔2 ∇𝐸0
2(𝐫0).
(8.9)
352
Radiation and Matter
Here the vector identity
𝐄0 × (∇× 𝐄0) = 1
2 ∇𝐸0
2 −𝐄0∇𝐄0
was used.
One can make several remarks concerning this standard procedure176 of
obtaining the average force (8.9) acting on particles moving in the field of a
monochromatic wave with spatially inhomogeneous amplitude. This force is
usually called ponderomotive; in the Russian literature it is mostly known as
the “Miller force”, by the name of a prominent physicist belonging to the well-
known Nizhni Novgorod (formerly Gor’ki) radiophysical scientific school.
This force has a field gradient character, attracting a charged particle
irrespective of the charge sign to low-intensity field regions and repelling it
from the regions of strong field. The term proportional to time 𝑡 corresponds
to drift motion of the particle177. The initial time-point 𝑡0 determines the
phase of a particle when it starts the motion in the electromagnetic field. This
phase may play an important role for the drift component of the motion (see
below). One may notice that the dynamics of a charged particle is described
in terms of the average value of the particle position 𝐫̅ whereas the
ponderomotive force is calculated at its staring point 𝐫0. In most cases, the
difference between 𝐸0(𝐫0) and 𝐸0(𝐫̅) is inessential due to slow variation of
the amplitudes, and taking such a difference into consideration would mean
an excessive accuracy level. However, two pairs of values, 𝐫̅, 𝐫0 and,
correspondingly, 𝐸0(𝐫0), 𝐸0(𝐫̅) may differ significantly when systematic drift
component is taken into account. This fact may lead to somewhat unexpected
effects related to anisotropy of drift motion. Usually, when ponderomotive
effects such as repelling particles from strong field regions are considered,
field is represented by axial-symmetric intensity distributions such as in
idealized laser beam and drift components breaking axial symmetry are
disregarded. In this case, the ponderomotive force only depends on the
particle’s distance from the laser beam axis and, by the way, does not depend
on the field polarization. Taking drift into account would mean that, e.g., the
energy of the particles gained owing to ponderomotively pushing them out of
the strong field region of the laser beam would depend on the scalar product
between the particle velocity and the field polarization vector. The
corresponding calculations are simple but lengthy178, therefore I do not
bring them here (see also below).
We can now generalize the formulas for the ponderomotive force acting
on a particle (electron), taking into account collisions with other particles.
This situation takes place, for example, in plasma. The simplest -
176 This is actually an iteration method that can be applied also in more general
situations, for example, when collisions should be taken into account, see below.
177 It would be more appropriate to write 𝑡−𝑡0 instead of 𝑡 which may be important
when many particles, e.g., a beam, are considered.
178 One may write 𝐫0 as 𝐫̅ + (𝐫0 −𝐫̅) ≡𝐫̅ + 𝛿0 and expand (8.9) over 𝛿0.
8.3
More on Charge in Electromagnetic Fields
353
phenomenological - way to take collisions into account consists in adding the
friction term to the equation of motion:
𝑚𝐫̈ + 𝑚𝜈𝐫̇ = 𝑒(𝐄(𝐫, 𝑡) + 1
𝑐(𝐫̇ × 𝐇(𝐫, 𝑡))) .
(8.10)
Here 𝜈 is the collision frequency i.e., the quantity determining the rate of
momentum interchange in multiple collisions. It is clear that in general 𝜈 is a
function of the particle velocity 𝑣= |𝐫̇|, 𝜈= 𝑁𝜎𝑡(𝑣)𝑣= 𝜈(𝑣) , where 𝑁
denotes the density of scatterers and 𝜎𝑡 is the transport cross-section
𝜎𝑡(𝑣) = 2𝜋∫𝑑𝜎
𝑑Ω (𝑣, 𝜃)(1 −cos 𝜃) sin 𝜃𝑑𝜃
𝜋
0
,
𝑑𝜎
𝑑Ω (𝑣, 𝜃) = |𝑓(𝐪)|2 is the differential cross-section, 𝑓(𝐪) is the scattering
amplitude, 𝐪∈𝑆2, 𝐪∙𝐪−1, i.e., the vector 𝐪 belongs to the unit sphere 𝑆2
(see any textbook on scattering theory or Chapter 5 of this book). Recall that
it is the transport cross-section and not the total cross-section
𝜎(𝑣) = 2𝜋∫𝑑𝜎
𝑑Ω (𝑣, 𝜃) sin 𝜃𝑑𝜃
𝜋
0
that determines the collision frequency (and also allows one to estimate the
mean free path of a particle) because it accounts for the fact that the
momentum transfer in elastic collisions reaches a maximum at large
scattering angles (head-on collisions) and tends to zero for small-angle
scattering (large impact parameters).179 Although the collision frequency
depends on velocity, we may understand by 𝜈 in our phenomenological
treatment dealing with effective quantities some average collision rate,
𝜈(𝑣)
̅̅̅̅̅̅ = 𝜈, where averaging is carried out over all relative velocities of our
particle with respect to scattering centers of the medium. In reality, this
averaging can be accurately performed only in the kinetic theory; here we
have to be satisfied with the qualitative considerations.
Now, let us, as before, start with a zero-order approximation in 𝑣/𝑐
omitting the Lorentz force:
𝑚𝐫̈ + 𝑚𝜈𝐫̇ = 𝑒(𝐄0(𝐫) cos(𝜔𝑡+ 𝜑)),
(8.11)
where we have included the phase of the field 𝜑. The reason why this phase
may deserve to be included in the motion equations will be clear from the
further discussion. The zero-order equation is a linear ODE and can be easily
integrated. Modern humanity integrates differential equations with the help
of “computer algebra” products such as Maple or Mathematica which are
179 One can readily see that the velocity change in every act of scattering accompanied
with a deflection by the angle 𝜃 is ∇𝑣= 𝑣(1 −cos 𝜃).
354
Radiation and Matter
great products indeed, but it is sometimes even faster to integrate simple
equations in the good old manner - with bare hands, pen and paper. (Here, I
must perhaps apologize for bringing the calculation details that might seem
excessive to an experienced reader.) One may look for a special solution to
the above equation in the form
𝐫𝑠= 𝑒𝐄0
𝑚[𝑎cos(𝜔𝑡+ 𝜑) + 𝑏sin(𝜔𝑡+ 𝜑)]
then, inserting this expression into the equation (8.11), we get the following
system of linear equations for coefficients 𝑎 and 𝑏:
{−𝑎𝜔2 + 𝑏𝜈𝜔= 1
𝑎𝜈𝜔+ 𝑏𝜔2 = 0
which gives
𝑎= −
1
𝜔2 + 𝜈2 ,
𝑏= −𝑎𝜈
𝜔=
𝜈
𝜔(𝜔2 + 𝜈2)
so that
𝐫𝑠=
𝑒𝐄0
𝑚(𝜔2 + 𝜈2) (−cos(𝜔𝑡+ 𝜑) + 𝜈
𝜔sin(𝜔𝑡+ 𝜑))
=
𝑒𝐄0
𝑚(𝜔2 + 𝜈2)
cos(𝜔𝑡+ 𝜑+ 𝜒)
cos 𝜒
,
where 𝜒≡arctan(𝜈𝜔
⁄ ) and cos 𝜒=
𝜔
(𝜔2+𝜈2)1/2. Finally,
𝐫𝑠(𝑡) = 𝑒𝐄0 cos(𝜔𝑡+ 𝜑+ 𝜒)
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄
.
(8.12)
The general solution to (8.11) is
𝐫(𝑡) = 𝐀+ 𝐁𝑒−𝜈𝑡+ 𝐫𝑠(𝑡) = 𝐀+ 𝐁𝑒−𝜈𝑡−𝑒𝐄0 cos(𝜔𝑡+ 𝜑+ 𝜒)
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄
(8.13)
where the constants 𝐀 and 𝐁 should be determined from the initial
conditions, e.g., 𝐫(𝑡0) = 𝐫0 and 𝐫̇(𝑡0) = 𝐯0. We may, for simplicity, put 𝑡0 =
0 (see the footnote on the preceding page) and obtain
𝐁= −𝐯0
𝜈+ 𝑒𝐄0 sin(𝜑+ 𝜒)
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄
and
8.3
More on Charge in Electromagnetic Fields
355
𝐀= 𝐫0 −𝐁+ 𝑒𝐄0 cos(𝜑+ 𝜒)
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄
= 𝐫0 + 𝐯0
𝜈+
𝑒𝐄0
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄ [1
𝜔cos(𝜑+ 𝜒) −1
𝜈sin(𝜑+ 𝜒)]
Using the definition 𝜒≡arctan(𝜈𝜔
⁄ ), we have
𝐀= 𝐫0 + 𝐯0
𝜈−𝑒𝐄0 sin 𝜑
𝑚𝜈𝜔
and
𝐁= −𝐯0
𝜈+ 𝑒𝐄0 sin(𝜑+ 𝜒)
𝑚𝜈(𝜔2 + 𝜈2)1 2
⁄
so that the general solution to the zero-order equation describing the motion
of a particle in the standing wave may be written as
𝐫(𝑡) = 𝐫0 + 𝐯0
𝜈−𝑒𝐄0 sin 𝜑
𝑚𝜈𝜔
−𝐯0
𝜈𝑒−𝜈𝑡+ 𝑒𝐄0𝑒−𝜈𝑡sin(𝜑+ 𝜒)
𝑚𝜈(𝜔2 + 𝜈2)
1
2
+ 𝐫𝑠(𝑡)
(8.14)
where
𝐫𝑠(𝑡) = 𝑒𝐄0 cos(𝜔𝑡+ 𝜑+ 𝜒)
𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄
Now we can write the first-order equation like we did before, when
collisions were not taken into account, as
𝐫̈ + 𝜈𝐫̇ = 𝑒
𝑚(𝐄0(𝐫0) cos(𝜔𝑡+ 𝜑) + (𝐫∇)𝐄0(𝐫0) cos(𝜔𝑡+ 𝜑)).
(8.15)
Here, as in the higher-order terms of the right-hand side expansion on the
inhomogeneity parameter (see above), one should insert the argument 𝐫0
after having differentiated the field amplitude 𝐄0(𝐫0). To obtain the right-
hand side in (8.15) by the iteration method that we employ, one has to put the
zero-order solution (8.14) into it. Inserting (8.14) into (8.15) and averaging
over oscillations, we get for the first-order equation
𝑚𝐫̈ + 𝑚𝜈𝐫̇ = −𝛼(𝜑)
𝜈
(𝐯0∇)𝐄0 −𝑒2(𝐄0∇)𝐄0𝛼(𝜑) sin(𝜑+ 𝜒)
𝑚𝜈(𝜔2 + 𝜈2)1 2
⁄
−𝑒2(𝐄0∇)𝐄0 cos 𝜒
2𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄ ,
(8.16)
where
356
Radiation and Matter
𝛼(𝜑) ≔1
𝑇∫𝑑𝑡𝑒−𝜈𝑡cos(𝜔𝑡+ 𝜑)
𝑇
0
= 𝑒𝑖(𝜔+𝑇
2
+𝜑) sin 𝜔+𝑇
2
𝜔+𝑇
2 + 𝑒−𝑖(𝜔−𝑇
2
+𝜑) sin 𝜔−𝑇
2
𝜔−𝑇
2,
with 𝜔+ = 𝜔+ 𝑖𝜈, 𝜔−= 𝜔+
∗= 𝜔−𝑖𝜈 and 𝑇 is the averaging period180. One
can also represent 𝛼(𝜑) in the form:
𝛼(𝜑) =
𝜔
𝜔2 + 𝜈2 𝑒−𝜈𝑡sin(𝜔𝑡+ 𝜑) −𝜈
𝜔cos(𝜔𝑡+ 𝜑)|
0
𝑇
or
𝛼(𝜑) = 𝜔(𝑒−𝜈𝑡sin(𝜔𝑇+ 𝜑−𝜒) −sin(𝜑−𝜒))
𝑇(𝜔2 + 𝜈2) cos 𝜒
.
One can notice that the right-hand side in the motion equation (8.15) or (8.16)
depends on the field initial phase 𝜑 which, in general, does not vanish after
averaging over oscillations. This is a curious fact, which deserves a special
discussion (see below).
Now, to proceed with the time-averaging, we may consider two cases. For
long-wave harmonics such as, e.g., in the microwave domain, we may perform
this averaging over a single period of the wave, putting 𝑇= 2𝜋/𝜔. Then we
get for the quantity 𝛼(𝜑)
𝛼(𝜑) =
𝜔2
2𝜋(𝜔2 + 𝜈2)
sin(𝜑−𝜒)
cos 𝜒
(1 −exp(−2𝜋𝜈/𝜔)).
In the optical case, when the number of periods 𝑛 in 𝑇= 2𝜋𝑛/𝜔 is very
large, 𝑛→∞, i.e., 𝜔𝑇≫1, 𝛼(𝜑) →0 as 1/𝑛. In this case, only the special
solution of the zero-order motion equation (8.14) contributes to the force
acting on a particle in the standing wave:
𝑚𝐫̈ + 𝑚𝜈𝐫̇ = 𝑒2(𝐄0∇)𝐄0 cos 𝜒
2𝑚𝜔(𝜔2 + 𝜈2)1 2
⁄ = 𝑒2(𝐄0∇)𝐄0
2𝑚(𝜔2 + 𝜈2) ,
and in the collisionless limit 𝜈𝜔
⁄
≪1 this expression takes the form of the
usual ponderomotive (Miller) force. It is interesting that in this case the
dependence on the initial phase of the wave drops out.
In the quantum language, the elementary process described here may be
interpreted as a stimulated Compton scattering i.e., consisting in the
absorption of a photon with frequency 𝜔 and 3-momentum 𝐤 and emission of
a photon with (𝜔′, 𝐤′) subjected to conditions 𝜔′ ≈𝜔 and 𝐤′ ≈−𝐤. The
180 In this simple model, we do not take into account possible temporal non-
uniformities or transition effects connected with turning the field on and off.
8.3
More on Charge in Electromagnetic Fields
357
particle energy in this process is almost conserved, up to the terms
determining the difference between 𝜔′ and 𝜔 i.e., 𝐸′ −𝐸= 𝜔−𝜔′ .
However, the particle momentum may change significantly which
corresponds, in the classical language, to the action of the force standing in
the right-hand side of the above motion equations, ∆𝐩= 𝐩′ −𝐩= 𝐤−𝐤′ ≈
±2𝐤. Actually, scattering of an electromagnetic wave by a free electron may
be interpreted as the Compton effect only if we take into account the
frequency shift of the scattered wave. Nevertheless, in the classical
approach the frequency change can be ignored, the electron is classically
accelerated in accordance with the motion equations in the electromagnetic
field of the wave, then emitting radiation as prescribed by the rules of classical
electrodynamics [193], §66. In the classical description, we obtain the
standard formula for the Thomson scattering cross-section [193], §78
𝜎𝜔= 1
𝑆̅
𝑑𝐽(𝜔)
̅̅̅̅̅̅̅̅
𝑑𝜔
= 2|𝐝̈𝜔|/3𝑐2
𝑐|𝐄𝜔|/8𝜋= 1
3𝑐2 |𝑒2𝐄
𝑚|
2
𝑐|𝐄|2
8𝜋
⁄
= 8𝜋
3 ( 𝑒2
𝑚𝑐2)
2
= 8𝜋
3 𝑟0
2
≈6.65 ∙10−25cm2
where 𝐝𝜔 is the Fourier component of the electronic dipole moment induced
by the wave, 𝑆 is the Poynting vector representing the flow of electromagnetic
energy (overline bar denotes averaging over time), and 𝑟0 = 𝑒2 𝑚𝑐2
⁄
is the
classical electron radius.
One may also recall the connection of the motion in a standing wave with
the so-called Kapitza-Dirac [227] effect and Kapitza pendulum [23] §30
problem. The Kapitza-Dirac effect is just the elastic scattering of electrons in
the field of a standing electromagnetic wave which may be written as
𝐄(𝐫, 𝑡) = 𝐄0(𝐫)𝑒−𝑖𝜔𝑡+ 𝐄𝑜
∗(𝐫)𝑒−𝑖𝜔𝑡,
𝐄0(𝐫) = 𝐄0 cos 𝐤𝐫= 𝐄0 cos 𝑘𝑥.
An electron in such a field experiences the ponderomotive force (we use for
simplicity scalar notations)
𝐹𝑥= −𝜕𝑈𝑝
𝜕𝑥,
𝑈𝑝= 𝑒2𝐸2(𝑥)
4𝑚𝜔2 .
Using an optical analogy, one can say that the electronic wave is
scattered on a diffraction grating having a spatial period 𝑑= 𝜋𝑘
⁄
=
𝜆/2. P. L. Kapitza and P. A. M. Dirac used one more analogy namely with
the Bragg diffraction in ideal crystals, when diffraction angles 𝜃𝑛 are
determined by the condition 𝑛𝜆= 2𝑑sin 𝜃𝑛. In Kapitza-Dirac scattering,
this condition takes the form sin 𝜃𝑛= 𝑛ℏ𝑘/𝑝 where 𝑝 is the momentum of
incident particles. Thus, electrons crossing a standing electromagnetic
wave would be reflected from the planes of peak intensity. This is, of course,
a very intuitive, qualitative consideration; in order to treat this problem
accurately one should account for the motion of an electron in the field of
an electromagnetic wave, e.g., within the framework of the nonrelativistic
358
Radiation and Matter
Schrödinger equation, see, e.g., the paper by M. V. Fedorov [228] who was
probably the first to consider the Kapitza-Dirac scattering on the quantum-
mechanical level (see also [229]).
8.3.2
Interaction of a Particle with a Traveling Wave
An important class of problems is related to the situation when a charged
particle encounters a traveling electromagnetic wave. These problems have
been typically considered in
microwave electronics (see, e.g., a
comprehensive book [230]) and in accelerator physics. Recently, much
interest has been aroused to the possibility of cost-efficient acceleration of
particles in the field of high-power ultrashort laser pulses, for instance so-
called laser wake-field acceleration (LWFA). Perhaps the most prospective
application of the theory related to the interaction of charged particles with a
traveling electromagnetic wave, both in the single-particle and many-particle
dynamics regimes, is the free-electron laser (FEL). We shall not discuss here
a lot of technicalities and specially adapted methods of integration of the
motion equations, arising in connection with all these applications. In the
spirit of this book, we shall only observe the general patterns associated with
the particle-wave interaction, with the hope that such a high-level discussion
would enable one to understand more professional minutes should they
appear.
The primary setting of the problem is the following: let a linearly
polarized traveling wave propagate along the 𝑧 axis in vacuum. We may write
the field in the wave as
𝐄(𝐫, 𝑡) = 𝐄0 cos(𝜔𝑡−𝑘𝑧𝑧) = 𝐹𝐞𝑥cos(𝜔𝑡−𝑘𝑧)
and
𝐇(𝐫, 𝑡) = 𝐇0 cos(𝜔𝑡−𝑘𝑧𝑧) = 𝐹𝐞𝑦cos(𝜔𝑡−𝑘𝑧)
where 𝐞𝑥 and 𝐞𝑦 are unit vectors defining the polarization of the electric and
magnetic field, respectively, 𝑘𝑧= 𝑘; the quantity 𝐹 is the amplitude of both 𝐄
and 𝐇 because for the wave propagating in vacuum their amplitudes are
equal. The vector motion equation for an electron in such a wave may be
written in components as
𝑚𝑥̈ = 𝑒𝐹(1 −𝑣𝑧
𝑐) cos(𝜔𝑡−𝑘𝑧) ,
𝑚𝑦̈ = 0,
𝑚𝑧̈ = 𝑒𝑣𝑥
𝑐𝐹cos(𝜔𝑡−𝑘𝑧)
8.4 On Hamiltonian Formalism for Particle
Motion in Electromagnetic Fields
The Hamiltonian method seems to be poorly adapted to relativistic problems
such as field theories. For example, classical electrodynamics, though being a
relativistic theory, is represented in this method in the form similar to the
8.5
Interaction between Atoms and Radiation Field
359
nonrelativistic classical mechanics. Nonetheless, application of the
Hamiltonian method to electrodynamics was thoroughly discussed in the
classical book by W. Heitler [207]. V. L. Ginzburg, the 2003 Nobel Prize
winner, RAS (the Russian Academy of Sciences) member and one of the last
encyclopedists in physics, also preferred the Hamilton approach to
electrodynamics due to its simplicity vs. more sophisticated techniques used
in quantum field theory. Of course, the Hamiltonian method usually fails in
relativistic problems, and the electromagnetic field is an essentially
relativistic object.
The first thing to do in order to describe the particle motion in an
electromagnetic field using the Hamiltonian formalism is to construct an
effective Hamiltonian function for the field and charges in it. I write
“effective” because it would be hardly possible to include in the Hamiltonian
all possible factors influencing the particle motion. In fact, all Hamiltonians
in physics are effective ones since they represent a system in terms of
operators defined over some arbitrarily built and very restrictive Hilbert
space. In reality, one ignores the dependence on temperature i.e., interaction
with the environment (thermostat), energy cutoffs, boundary conditions
(such as the electromagnetic field vanishing at infinity in our case), etc. I have
already mentioned that it seems to be the ultimate aim of physics to write
down the Hamiltonian for the entire universe, and the rest should be
presumably done by mathematicians and computer modelers. This is,
however, more a metaphysical utopia rather than a scientific program.
Striving to employ the Hamiltonian method everywhere appears to be mainly
psychologically driven: one desires to represent any theory in the
conventional form of classical mechanics. It is, however, clear that, e.g., for
electrodynamics the Hamiltonian formalism is poorly adapted, in particular,
because it picks out one specific instant of time. This is, by the way, one of the
reasons why predominantly the Lagrangian formalism is used while treating
electromagnetic field problems.
8.5
Interaction between Atoms and Radiation
Field
In this section, I shall try to dispel the popular view that the problem of
field-matter interaction is so complicated that one can obtain reliable results
only by using numerical techniques and very powerful computers. Here,
some simple microscopic mechanisms determining the response of the
matter to electromagnetic fields are briefly discussed. Of course, in a
general formulation this task is enormous and has been treated in
numerous classical works which I shall refer to in the subsequent text. In
most cases, when considering the interaction of electromagnetic radiation
with atomic systems one can use the electric dipole approximation (see
[39], 67). In general, one can observe that the absolute majority of
electrodynamic and optical phenomena can be well described in this
approximation. Indeed, the electric dipole approximation is valid when
the parameter 𝑎𝜆
⁄
≪1, where 𝑎 is the characteristic dimension of an
360
Radiation and Matter
atomic or molecular system interacting with an electromagnetic field (one
usually speaks in terms of scattering or radiation of waves). Typically,
𝑎~10−7 −10−8cm and 𝜆~10−4 −10−3cm which corresponds to visible or
infrared light. One can, however, pay attention to the following paradox. If
we consider the interaction of electromagnetic radiation with a material
medium regarding all its atoms and molecules together as a single
scattering system of size 𝐿, then, e.g., in optics, we have as a rule 𝐿𝜆
⁄
≫1
whereas for the applicability of the dipole approximation we should
ensure the opposite condition 𝐿𝜆
⁄
≪1 or at least 𝑘𝐿≲1 . This is,
however, not a real paradox since in most cases different atoms and
molecules of the medium radiate and scatter electromagnetic fields
statistically independently of each other. Correlations between atoms and
molecules interacting with the radiation field determine coherence
properties of light.
8.6
Laser-Matter Interaction
Popular ideas about the laser generation - analogy with phase transition -
must have a reliable physical and mathematical ground.
Many issues still remain open in the study of laser-matter interaction,
especially at high intensities, and nowadays lasers produce radiation
intensities up to 1022W/cm2 that corresponds to electric fields reaching
1011CGSE i.e., by four orders of magnitude higher than atomic fields
𝐸𝑎𝑡~ 𝑒𝑎𝐵
2
⁄
= 𝑚2𝑒5/ℏ4 ≈5 ∙109V/cm . For estimates in laser-matter
interaction it is convenient to introduce the “atomic intensity”, 𝐼𝑎𝑡=
𝑐𝐸𝑎𝑡
2
8𝜋
⁄
= cm4𝑒10/8𝜋ℏ8 ≈3.5 ∙1016W/cm2 . One can expect that the
effect of high-intensity electromagnetic radiation on material media would
consist at least in very efficient heating of the matter electrons. Moreover,
electrons can be accelerated in the fields of laser pulses which opens the
possibility to construct cost-effective machines both for high energy research
and practical applications.
8.6.1. Ultrashort Laser Pulses
Let us consider, as an example, a typical situation when an ultrashort laser
pulse (ULP) is directed at the surface of metal. The pulsed electromagnetic
field accelerates electrons in the metal skin layer, thus heating the electronic
subsystem of the material. One must note that the notion of a skin layer is not
quite trivial and is not always correctly defined, but we shall assume that we
do know exactly this layer’s properties (see [208], 60). Owing to the heat
conduction processes, the electromagnetic energy absorbed in the skin layer
migrates into the metal volume and eventually dissipates there increasing the
equilibrium temperature of the entire sample. One can imagine that if the
predominant mechanism of the heat transfer from the skin layer into the
volume is electronic conductivity, then after having reached a certain power
(fluence) threshold the temperature gradients may become so high that the
heat flux caused by the electrons would spread faster than phonons in metal.
In other words, the speed of the heat flux would surpass the phase velocity of
8.6
Laser-Matter Interaction
361
phonons. In such a situation, the drift electrons can produce phonons due to
the Cherenkov (or Čerenkov) radiation - a complete analogy to Cherenkov
radiation of photons in dielectric media. One can observe the characteristic
soft blue glow in nuclear reactors, for example, during the excursion to a
nuclear power plant, a very useful tour, by the way. This glow is due to
Cherenkov radiation of photons in the visible range. But let us get back to
phonons. One can expect that due to massive generation of phonons in the
Cherenkov mechanism their distribution can deviate from the equilibrium
Planck shape, which in its own right would produce an impact on the heat
transfer - a sort of a self-consistent feedback.
362
What remains to be solved?
9 What remains to be
solved?
There are a lot of unresolved issues in physics and in natural sciences in
general - much more than resolved. I don’t think that physics and
mathematics have an agenda: nobody knows what comes next. The great
discoveries have always been unexpected.
9.1
The Standard Model
The idea of spontaneous symmetry breaking enabled physicists to build the
famous Standard Model of the unified weak and electromagnetic interactions.
It is interesting that the same idea allowed one to demonstrate that the
important phenomenon of superconductivity 181 can be regarded as
spontaneously broken electromagnetism. Ultimately, the Standard Model
strives to describe all the processes occurring in nature within the framework
of the four known interactions: electromagnetic, weak, strong and
gravitational. To understand most astronomic concepts and even
cosmological models, to learn chemistry or electrical engineering one only
needs gravitation and electromagnetism. Quark-gluon models, the Higgs
particle or the spontaneous breach of symmetry are often superfluous at this
level of knowledge. Yet without strong and weak interactions underlying the
respective mathematical models, one would not be able to understand what
makes the stars shine nor how the Sun burns chemical elements producing
the radiation power that supports life on the Earth. In general, without this
knowledge one cannot grasp the principles of nuclear physics.
Current understanding of fundamental high-energy physics (formerly
elementary particle physics) is based on the just mentioned Standard Model
which stands on two main pillars: gauge invariance and spontaneous
symmetry breaking. The original “material” foundation of the Standard Model
was made up of 6 quarks 6 leptons whereas the main goal of the Standard
Model was to describe the four known fundamental forces of nature – strong,
weak, electromagnetic and gravitational – on the same footing i.e., in terms of
gauge concepts. This endeavor was accomplished for three out of four
interactions: for the strong interaction the corresponding gauge group is
SU(3), for the weak and electromagnetic interactions, respectively SU(2) and
U(1) groups; within the framework of the Standard Model, the weak and
electromagnetic forces are unified and known as the electroweak interaction.
The first attempt at the unification of gravity and electromagnetism (i.e., the
unification of gauge fields with gravitation) goes back to the Kaluza (1921)
and Klein (1926) five-dimensional models. The unification of three gauge
181 It would hardly be possible to build the Large Hadron Collider (LHC) without using the
superconducting magnets.
9.2
The Arrow of Time
363
forces (strong, weak, EM) with gravity is an extremely difficult endeavor,
requiring special skills, and we shall not discuss this issue here. One can only
note that the Standard Model does not treat gravity on an equal footing with
the three microscopic gauge forces.
What is today’s status of the Standard Model? It is mostly regarded as a
minimal one i.e., incomplete and designed as a temporary step towards a
more refined unified theory. One calls the Standard Model minimal since
there are no more elementary particles in it besides six quarks, six leptons
and four gauge bosons needed to transfer interactions (the Higgs boson was
discovered with high probability in the LHC experiments and is one more
elementary – not compound - particle). The incompleteness of the Standard
Model is, in particular, manifested by cosmological data. Thus, the Standard
Model requires modifications to accommodate new phenomena. Such
modifications are also expected to be based on gauge principles, with their
geometric, Lie groups and bundles approach.
Although ideas and results usually don’t come in a prescribed order, soon
there may be an exception. The Large Hadron Collider in CERN is destined to
come closer to the microscopic spacetime structure than any previous device.
Putting together all seemingly diverse topics in a manuscript takes time,
and I am writing all this before the LHC machine in CERN has been put in
operation, but you may read it after the new results have been obtained. For
the energies provided by the LHC, new results will inevitably appear, despite
the fact that the whole project is rather a process than a business that can be
eventually completed. I have already mentioned that the LHC is the world’s
most powerful accelerator (27 km ring diameter) designed to achieve 7 TeV
energy for each of the two counter-rotating and colliding proton beams (see
https://edms.cern.ch/file/445830/5/Vol_1_Chapter_2.pdf for the beam
parameters, see also http://lhc.web.cern.ch/lhc). It is expected that the Higgs
particle (see above) can be produced in the collision so that the mechanism
ensuring the origin of particle masses in the Standard Model will be validated.
Still the questions remain, at least I don’t quite see the answers. For
instance, I don’t completely understand the lack of gravity in the Standard
Model, maybe some experts know better.
9.2
The Arrow of Time
“I can’t go back to yesterday - because I was a different person then” Lewis
Carrol.
In this section we shall discuss the subject of the unidirectional flow of
time in general and of its various manifestations in the form of so-called time
arrows. More specifically, we shall observe the properties of the time reversal
operator together with some physical and mathematical aspects of time
reversal symmetry. The whole subject is discussed here both on the classical
and the quantum level, but mostly for the case without spin. Effects which are
due to spin (such as spin-orbit interaction) are only briefly mentioned: a full-
scope inclusion of such effects into our discussion would make the section
hardly observable.
364
What remains to be solved?
This section is constructed in the following way: primarily some standard
approaches to the problem of time reversal are briefly reviewed, with my
personal comments being added. In the interim, it appeared necessary to
dwell on the very concept of time, which unfortunately borders to
metaphysics. Afterwards, more physical (and even some mathematical) stuff
was discussed, mostly related to a correct definition and properties of time
reversal operator. Some time-reversal noninvariant models, usually declared
purely phenomenologically (as if the Newtonian model were not
phenomenological), are observed under the angle of possible presence of the
hidden time reversal symmetry when the coefficients of a phenomenological
model are explained in “more fundamental” terms i.e., by using other models.
The assumption of a closed physical system, needed for time reversal
invariance, is discussed.
At first, I did not want to include a section on time-reversal puzzles in this
book. The reason for this reluctance was that the so-called problem of time is
so strongly contaminated with fancy speculations and philosophical
metaphors (see e.g., [53] and also [54]) that while discussing this subject one
can easily be trapped by embarrassing pitfalls of vagueness. After a certain
age one tends to think why she/he is doing this or that. I would rather prefer
to solve equations than get engaged in some sort of foundational scholastics.
By the same token, the issue of interpretation of quantum mechanics which is
often considered extremely important (see e.g., [74]) is, to my mind, of a
similar soft and scholastic nature. Issues of this kind, to my understanding,
are not quite physical problems because they are unlikely to bring new
results. The only justification, in my opinion, to handle such highly speculative
issues is what I call the “physmatical effect” - emergence of unexpected links
to other areas of physics and mathematics (see in this connection the works
by I. Prigogine, e.g., [55]). Moreover, the problem of time-reversal asymmetry
is a very ancient issue and a great lot has been already written about it e.g.
Davies, Hoover [59,60], so I did not think I could add anything new and fresh.
But then I suddenly caught a serious illness and, by necessity, got more
time for relaxed contemplations. Once professor Christoph Zenger, a well-
known mathematician, whom I consider one of my teachers, came to visit me
and asked what, in my opinion, the physicists think in general about the
unidirectional flow of time. Professor Zenger’s idea was that it is probably the
dominance of matter over antimatter that produces the time asymmetry. I
argued that this view seems a bit naive as well as considering that time-
reversal symmetry should be a priori guaranteed; moreover, even in the
microscopic world this statement would be wrong since only C and T
symmetries would be handled, with parity P remaining untouched, provided
of course we believe that the CPT theorem is never violated (see below). And
who can guarantee that CPT is true also in the macroscopic world? So, I
immediately thought this issue is not worth serious treatment. Yet afterwards
I decided to systematize a little what I knew on this subject. Thus, the present
section appeared.
The issue of a preferred direction of time may be classified as one of
Perennial Problems. Such problems have a philosophical flavor and although
9.2
The Arrow of Time
365
they can admit simple formulations, they present a perennial challenge for
great scientists.
One of distinguishing features of Perennial Problems is the illusion that
they have been solved a long time ago, with this solution being known to any
more or less literate person. However, if one addresses original papers and
textbooks, one will find a wild variety of opinions and would-be solutions to
each Perennial Problem. So, there is no universally accepted solution to any
of the Perennial Problems despite the fact that a lot of great minds have tried
to attack them. As examples of Perennial Problems one can name such issues
as: interpretation of quantum mechanics, unification of fields and forces,
unification of general relativity and quantum mechanics (the quantum gravity
problem), the problem of the origin of the universe and its subproblem - that
of initial conditions, the problem of fundamental constants - of their actual
values and possible variation, of elementary particle masses, of causality, and
a number of other fundamental problems essential for our representation of
the world. Nowadays, one more issue is of fashion and seems to become a
Perennial Problem: that of dark energy and dark matter.
9.2.1
Perennial Problems
Children sometimes ask: what has become of the previous days, where are
they stored? Or they are ruthlessly destroyed by some specific creatures, as in
the well-known fantasy by Stephen King (“The Langoliers”)? It would of
course be great being able to turn back time. The issue of the preferred
direction of time may be classified as one of Perennial Problems. Such
problems have a philosophical flavor and although they can admit simple
formulations, they present a perennial challenge for great scientists. One of
distinguishing features of Perennial Problems is the illusion that they have
been solved long time ago, with this solution being known to any more or less
literate person. However, if one addresses original papers and textbooks, one
will find a wild variety of opinions and would-be solutions to each Perennial
Problem. So, there is no universally accepted solution to any of the Perennial
Problems despite the fact that a lot of great minds have tried to attack them.
As examples of Perennial Problems one can name such issues as:
interpretation of quantum mechanics, reductionism (i.e., possibility to reduce
biological phenomena to physics), unification of fields and forces, unification
of general relativity and quantum mechanics (the quantum gravity problem),
the problem of the origin of the universe and its subproblem - that of initial
conditions, problem of fundamental constants - of their actual values and
possible variation, of elementary particle masses, of causality, and a number
of other fundamental problems essential for our representation of the world.
Nowadays, one more issue is of fashion and seems to become a Perennial
Problem: that of dark energy and dark matter.
Some people tend to call the above examples of Perennial Problems also
the “Princeton Problems”: they are not really important from the utilitarian
viewpoint, at least for the time being, and they are neither physical nor
mathematical with regard to their setup. That is to say that such problems are
not necessarily reduced to a completely and correctly set task. One can be
366
What remains to be solved?
preoccupied with such problems for infinite time - they are just Perennial. I
don’t think this label is correct: one can judge by the publications that the
Princeton Institute for Advanced Study (IAS) has lately become considerably
less detached from practical problems.
It is curious that Perennial Problems though being totally irrelevant to
daily life182 tend to stir much more acute interest than mundane everyday
ones. Take, for instance, the Fermi paradox concerning the existence of
extraterrestrial civilizations. One usually gets orders of magnitude more
responses when starting a discussion on this abstract subject than, say, on
such vital issues as outrageous medical errors or, say, police brutality. The
nearby subject of the so-called Drake equation, an attempt to estimate the
potential number of extraterrestrial civilizations in our galaxy - the Milky Way
(see http://www.seti.org) still attracts a great number of enthusiasts. The
SETI community grows and is supported by decent funding. One might
however wonder whether the search for extraterrestrial creatures is a
promising scientific avenue to pursue. To be honest, I am inclined to think that
anyone who is seriously engaged in the search of extraterrestrial civilizations
manifests some escapism and possibly has difficulties in dealing with the
boring, often unpleasant but necessary hardships of daily life.
Although Perennial Problems are mostly of quixotic character, not all of
them are totally useless from a pragmatist’s viewpoint: many are even not too
remote from real scientific quests. Specifically, many arguments related to
time inversion are not always of purely scholastic character. Thus, in high
energy physics such arguments are essential for the development of a theory
describing the fundamental interactions. For instance, the famous CPT
theorem183 provides guidance for the construction of any viable field theory
involving particles and antiparticles. More specifically, by combining CPT
with internal symmetries one can introduce the correct transformations, e.g.,
expressed through matrices. One may also recall more practical things such
as the phase conjugation techniques in optics and acoustics. Nevertheless, the
main point of mine is that discrete symmetries, in particular time reversal
symmetry, do not follow from any fundamental principles, and there is no
reason to declare such symmetries to be fundamental principles by
themselves.
Thus, Perennial Problems border on really fundamental questions (some
of them we shall discuss below), more specific than Perennial, namely - How
did the universe originate? - What is the origin of mass and what stands
behind the differences of masses of particles we call fundamental, in
particular leptons and quarks? - What is the reason for the matter-antimatter
asymmetry observed in our universe - What is “dark matter” and “dark
182 The well-known medieval Perennial Problem: “How many angels can dance on the
head of a pin (or on the point of a needle)?” seems to have many today’s analogs. In
Russia, for example, such eternal topics as the “special way of Russia” were a signature
of the intelligentsia at all times.
183 The CPT theorem deals with charge conjugation (C), parity (P) i.e., spatial
inversion, and time inversion (T).
9.2
The Arrow of Time
367
→
energy” which apparently manifest themselves without being directly
observed?
Perhaps today’s Perennial Problems will be sharpened in future physical
experiments, and then their speculative (“many-words”) component will be
drastically diminished. In this connection one might recall that theories and
models are always abound whereas there is only a single reality.
9.2.2
Observations of Possible TRS Breakdown
The time reversal symmetry (TRS) is obvious in many basic equations of
physics but is exceptionally seldomly manifested in reality. One should
probably always remember that mathematical equations are just pieces of
human-produced text and can be linked to reality through careful
observations and purposeful experiments. It has already been noted that
without paying attention to external measurements, physics would look like
a loose collection of philosophical speculations or, at best, a vast array of
mathematical texts. Time reversal symmetry is a so deeply engraved
stereotype in physics that to question it is not at all easy: one should overcome
a cognitive barrier of the prevailing opinion. Thus, TRS may be viewed rather
as an important philosophical principle i.e., without the necessity to have an
experimental link to reality. Below we shall estimate what limitations are
imposed by such links.
Let us briefly return to basics: in 1686, Sir Isaac Newton presented to the
Royal Society the first volume of his “Principia” (Philosophiae Naturalis
Principia Mathematica)184. In this first volume three laws were postulated that
were destined to describe the world. Firstly, any physical body maintains its
state of rest or motion185 unless acted upon by an external force; secondly, this
force equals the rate of change of the body momentum; and thirdly, each
action entails an equal and oppositely directed reaction (counteraction).
Those three laws named later after Newton were presumably sufficient to
determine the fate of each particle of the universe since to predict the future
state ad infinitum.
Time reversal invariance is a particular case of a more general anti-
unitary invariance so from symmetry positions there is nothing specific about
time reversal. Simultaneously, some quantities are conserved that are
invariant in time such as energy. The presence of even very small dissipation
or diffusion breaks time reversal invariance and tends to drive the physical
system to homogeneity. Since time reversal is a simple transformation, the
problem of time-asymmetry is simple to formulate: if one believes physics,
which is considered to lie in the foundation of the human knowledge of the
world, to be based on classical mechanics of Galileo-Newton which is time-
symmetric, then how would it be possible that real-life processes are
184 Actually, this endeavor comprised three volumes, see, e.g., Kartsev, V. P. Newton
[245].;
see
also
http://books.google.com/books?id=6EqxPav3vIsC&pg=PA1#v=onepage&q&f=false.
185 We have seen that according to Galileo the rest is a particular case of the
motion.
368
What remains to be solved?
obviously time-asymmetric. The subjective experience of an essential
difference between past and future in real life i.e., between two possible
directions of time has been metaphorically called “the arrow of time”
(probably it was A. S. Eddington who was the first to coin this term). The
motion equations of classical, non-relativistic quantum mechanics
(without the magnetic field) and, to a certain extent, of quantum field theory
(QFT) seem to admit time reversal 𝑡→−𝑡, although in QFT
simultaneously with CP-transformation. Macroscopic equations are,
however, irreversible. Reconciliations of fundamental and phenomenological
models of the world has traditionally been one of the most burning issues
in physics. This paradox, known since the end of 19th century, is
sometimes called the time arrow problem or “global irreversibility
paradox” and I shall try to make a short overview of different approaches to
its resolution.
From a very general observation, one may notice that asymmetry is much
more generic than symmetry, the latter being a very special feature. So, one
might suspect that it would be an extraordinary and very intelligent fine-
tuning if all the laws of nature were time-reversal symmetric. The probability
of more generic asymmetric manifestations must be significantly higher.
Looking for symmetries in the equations is an important area of mathematical
physics which provides useful tools [187], but imposing symmetries on every
natural process in sight seems to be a totally arbitrary act. Nevertheless, it is
generally believed that all dynamical equations of fundamental physics are
invariant under time reversal whereas the phenomenological and hence “less
fundamental” laws of physics have an obvious temporal asymmetry. The
microscopic186 dynamical equations govern the evolution (or devolution) of a
physical system under the assumption that it is perfectly closed. If the
system is not closed, then it is a priori phenomenological, e.g., dissipative
(the simplest example is the damped oscillator), and, strictly speaking, its
complete description lies outside of microscopical mechanics - in statistical
physics or physical kinetics. If the system is not perfectly closed, it does not
seem possible to screen it from fluctuations, e.g., of thermal character.
However, the assumption of perfect closure seems to be utterly unrealistic
and can therefore be substantially weakened, as I shall try to demonstrate a
little below.
Although one cannot exclude the low-probability option that we are
living in a completely time-symmetric universe, the usual statement that all
physics should be time- reversal symmetric seems to be a belief, a quasi-
religious credo. One might observe in passing that there are surprisingly
many credos (sometimes called principles) appealing to intuitive
extrapolations in physics. Maximum what can be accurately said about time-
invariance is the famous CPT theorem [99], 19, see also below. Its content is
as follows. Assume that we have a quantum field theory which is
186 They are only considered microscopic by some consensus as constituting the
backbone of contemporary physics: Newton’s equations of classical mechanics are by no
means microscopic as well as the Schrödinger equation applied to the whole Universe.
9.2
The Arrow of Time
369
characterized by a positive energy density, Lorentz invariance and local
causality187. Then such a field theory is invariant under CPT.
Furthermore, it is typically assumed that any C or P non-invariance
are inessential, especially in practically important physical manifestations,
therefore all physical laws should be invariant under T transformation
(colloquially, T-invariant). This chain of arguments is in a sense remarkable:
nearly everything is false. First of all, the conditions for the CPT theorem
are very stringent - to the degree of being applicable to a very limited set
of quantum field theories. It raises little doubt that the universe we live in
is not Lorentz-invariant: in Einstein’s general relativity spacetime is not
flat, nor even asymptotically flat. There exist some rival models with
asymptotically flat spacetime, but they still do not ensure CPT. This
problem is closely connected with the energy density in the universe. This
energy density is not necessarily strictly positive, in particular, because
the potential energy related to gravitational attraction is negative 188.
Local causality cannot be in general a well-defined notion, especially when
the space- time metric fluctuates or is quantized. Moreover, it is in general
not true that one can correctly define a spacetime separation for an
arbitrary quantum state. Thus, there may be difficulties already with the
statement of the CPT theorem.
Let us pay some more attention to the assertion that C and P invariance
play a negligible role in practically important physical processes. The curious
thing about this assertion is that it contradicts the general logical scheme of
microscopic reasoning. On the one hand, it is generally stated that all physics
is presumably time-invariant on the level of microscopic laws, on the other
hand such physical manifestations as kaon decays and in general weak
interactions are considered too microscopic to be viewed as physically
significant. Personally, I do not understand this logic.
One standard explanation of the time reversal violation in macroscopic
physics is based on time-asymmetry introduced by some supplementary -
initial or boundary - conditions (see more on that below). This explanation
seems to me at least insufficient since the disparity of two different directions
of time in the entire picture of the world remains. As far as the hypothesis of
electron-positron and, more general, particle-antiparticle asymmetry
resulting in time reversal asymmetry goes, it appears also limited and even a
bit naive since it a priori believes that time reversal invariance is deeply inside
an exact symmetry. Of course, at a somewhat primitive level of description an
anti-particle may be viewed as a particle moving backwards in time.
Indeed, a particle having a positive energy 𝐸 contains in its wave function
factor exp(−𝑖𝐸𝑡), i.e., Ψ+ = 𝜓exp(−𝑖𝐸𝑡). An anti-particle having a negative
energy −E is characterized by wave function Ψ−= 𝜓exp(−𝑖(−𝐸)𝑡) =
187 The term “local causality” refers to a situation when the fields of QFT, 𝜑(𝑥),𝜑(𝑦)
commute (or anticommute) if spacetime points 𝑥 and 𝑦 are separated by a spacelike
interval.
188 The question of the energy density in the universe is considered a “hot subject”,
see, e.g., Sean M. Carroll https://link.springer.com/article/10.12942/lrr-2001-1.
370
What remains to be solved?
→
𝜓exp(−𝑖𝐸(−𝑡)). In the framework of QFT189, time reversal invariance seems
to be violated, which has been experimentally confirmed in CPLEAR
experiments conducted in 1998 in CERN [63]190
It would be of course a truism to state that in classical (Newtonian)
physics there is no preferred direction of time. In physics, microscopic time
reversibility is actually a consequence of idealized modeling of physical
experience. We have seen in Chapter 4 that classical dynamics, being very
approximate and of limited applicability, had to be deterministic in order to
predict the flow of events with elapsing time. To a large extent, the same is
true for quantum mechanics (see Chapter 6). Otherwise, theories or specific
models derived from them were of a very limited use. Nevertheless, there are
strong controversies among physicists - as well as among philosophers - how
to explain the obvious discrepancy between two facts: 1. we all know that time
is flowing only in one direction, at least at our macroscopic level (trivial
example - thermal or diffusive processes); 2. all basic (microscopical) laws of
physics are invariant under time reversal, i.e., change 𝑡→−𝑡. One may find the
most comprehensive account of the time asymmetry problem in [89]. In
classical deterministic physics, the time, although not satisfactorily defined,
was considered a universal continuous 191 variable. Due to this approach,
classical and even quantum theories were mostly formulated as models based
on differential equations of evolutionary type (or on systems of such
equations) giving explicit time derivatives as functions of current values
describing the actual state. It is this mathematical construction that allowed
one, by integrating these differential equations, to “predict” the variable
future or retrodict the constant past, depending on setting up initial or final
conditions. Moreover, ordinary everyday life entices us into thinking that time
is absolute and the same everywhere, which is connected with the erroneous
impression that speed has no limit. Yet, this is true only in a low-energy
approximation.
Although time-reversal invariance is mostly taken for granted, there is
no compelling reason why this should always be the case. And indeed, more
and more evidence has been accumulated lately that time-reversal symmetry
(TRS) is broken on a more sophisticated physical level than simple classical
or quantum models. Please note that here I am not talking about statistical
problems or open systems, where it would be ridiculous to require time-
reversal invariance. There is currently much empirical evidence that TRS
does not hold in optics, for examples, in experiments with nonmagnetic
metamaterials (see, e.g., an account of spectacular experiments carried out
by the N. I. Zheludev group in: [251], [252]). One of the relevant hypotheses
189 With Minkowski flat background and Lorentz invariance, motion in different
Minkowski frames, as already discussed, can be represented by different 4d graphs, with
Lorentz transformations being mappings between them.
190 In a series of CPLEAR experiments, weak decay of K-mesons has been
observed, 𝐾𝐿→𝑒+𝜈𝑒𝜋−.
191 Today there are of course also discrete-time and lattice models, but their
detailed discussion is outside the scope of this book.
9.2
The Arrow of Time
371
is that TRS breakdown may accompany a nonlocality of the optical
response, when surface plasmons are excited depending on the polarization
of light incident on intricately surface-structured metamaterials. Probably
these experiments as well as metamaterials in general require quite an
extensive analysis, see the cited papers for details.
Another field of interest where time-reversal invariance is apparently
broken is high- temperature superconductivity (see, e.g., a discussion in the
papers [253], [254] and [255]).
Time-reversal symmetry has also been shown to break down in biological
molecules [256].
To my regret, I can never undo things - maybe some people can. And I am
unable to see the reversed chain of events: my personal time clearly flows in
a single direction. Most people, especially elderly persons, are sadly aware of
the passage of time; it is usually in a very young age one wishes the time run
faster. It would be a truism to say that in general young and old (like rich and
poor) fathom the world by different scales. Moreover, the perception of time
intervals also seems to change with the age of an individual, these intervals
appearing progressively shorter192 One can easily construct a mathematical
model of this diminishing of time intervals with the life-span. An intuitive
awareness of the unidirectional and accelerating time flow manifests the
psychological time arrow. This is, of course, no physics - so far. However one
may justifiably ask: what are the physical reasons for the perception of time
as having a sole direction? Although experimental proofs, both direct and
indirect, of time reversal non-invariance are quite convincing [63],
recognition of such non-invariance within the physical community
unexpectedly turns out to be reluctant, sometimes the attitude towards these
experimental facts is averse and disapproving. The reason for such an attitude
is exactly the widespread stereotype that the basic laws of physics are written
in time-reversal invariant form, so all the phenomena should be - at the basic,
microscopical level - also time-reversal invariant. Thus, the tension between
the generally perceived microscopic science and common sense is reflected
in the question of why the past is different from the future.
9.2.3
Model-Based Claims
One sometimes forgets that the so-called basic laws of nature are formulated
as mathematical models, quite efficient but having a limited domain of
applicability. For example, Newton’s equations are formulated, in distinction
to, for example, the Aristotelian model, as a time-invariant mathematical
model, so it is quite natural that Newtonian mechanics, for instance, does not
automatically involve time-asymmetric phenomena, e.g., omnipresent in
optics where the difference between time-forward and time-reversal
processes has been observed for many years. It would be stupid to require of
a model to be applicable in the field it has no relation to - this general
consideration is often forgotten when time-invariance issues are discussed.
Reversibility of the main microscopic laws of physics is nothing more than a
192 “The more we live, the shorter are the years”, Bulat Okudjava.
372
What remains to be solved?
very productive assumption. The present-day physical laws are just
mathematical models, nothing more, and they tend to be replaced by other
models when ample experimental facts are accumulated to invalidate the
actual beliefs. For instance, such a simple system as a damped pendulum
already gives an example of time-invariance violation, irrespective of
imposing initial or final conditions which are often taken as a main source of
time-invariance violation. The model itself is non-invariant in time. The
verbal claims of reducing this model to truly time-invariant Newton equations
for some point particles are just beliefs of a rather philosophical nature193.
One may notice that in order to define time derivatives one already assumes
that the direction of time does exist. Thus, the arrow of time is implicitly
suggested by the very formulation of the theory, which is mathematically
equivalent to a Cauchy problem.
Time in the Newtonian model as well as in dynamical systems theory is
an absolute mathematical notion (Galilean fibration) flowing uniformly with
no regard to physical processes. This representation is false already for
simple relativistic models. Furthermore, time is not an observable in the
physical sense; the fact that struck me first when I started to work in the
Russian (then Soviet) Committee for Standards. Meteorologists deceive
people by saying that they are measuring time - in fact time is defined as some
number of periods of some oscillating physical quantity that is measured by
“the clock”. Figuratively speaking, we see only the hands of the clock, but not
the time itself, and we cannot place the time sample in the International
Bureau for Weights and Measures headquarters at Sêvres, France (Bureau
International des Poids et Mesures), which serves as a depository for the
primary international standards. We would need a whole laboratory staffed
with highly qualified physicists for producing some kind of clock, e.g., atomic
clock, its certification and comparison, etc. This impossibility to see time
directly is the consequence of the fact that it is difficult (maybe even not
possible) to correctly construct the time operator as an entity corresponding
to an observable (see below). Had we been able to introduce the time
operator, we could have established its evolution properties (e.g., in the
Heisenberg picture) and could have compared forward and backward
temporal behavior. We could have found also the expectation values of the
time operator and seen how they would change with time 𝑡∈ℝ for 𝑡> 0 and
𝑡< 0.
Usually, considering a classical dynamical system, we assume it evolving
from the past into the future, i.e., setting up the initial conditions on some
manifold. However, from the theory of differential equations we know that
one can specify the final conditions just as well as the initial conditions. Final
conditions fixed at some distant time point in the “future” would produce the
retroactive evolution - back in time. Yet, it will still be a dynamical system.
Such time-reversed development is valid both for the customary dynamical
193 The already mentioned Aristotle and Newton point mechanics are examples of intrinsically
different schemes with respect to time reversal T, the first being truly time non-invariant, the
second time-invariant.
9.2
The Arrow of Time
373
systems described by vector ODEs and for field evolution described by partial
differential equations (PDEs). 194 In the case of fields described by PDEs,
initial or final conditions are specified at some spacelike 3-manifold and this
initial or final state evolves respectively into the future or into the past
according to the dynamics provided by time-dependent PDEs which govern
the evolution. Here, however, the time-reversal symmetry may be broken
since the state to which the evolution strives may itself be determined by the
sought-for solution, i.e., be a functional of it195.
9.2.4
Closed Systems
The notion of a closed system is a strong idealization, convenient in
conventional mechanics but becoming inadequate in more refined physical
situations. The time-reversible mechanical models are usually considered to
always lie at the very foundation of entire physics, however this claim is not
more than a belief. In most cases the statement of ultimate reduction to time-
invariant mechanical models appears to be right, but nobody can guarantee
that all physical states and systems can be reduced, in the final analysis, to the
relatively simple - and time-reversible - mechanical motion. Even less so as
regards real-life physical situations (they do not necessarily coincide with the
considered physical systems). Thus the frequent statement “All physics is
basically time-reversible” may be only referred to the systems that can be
reduced to a simple mechanical motion of particles. In particular, this motion
should be confined within the closed system and any fluctuations should be
absent.
9.2.5
Irreversibility and Time Reversal Noninvariance: Remarks about
Terminology
Strictly speaking, time asymmetry, irreversibility and time-reversal non-
invariance are not identical concepts, although they are often used
interchangeably. Of course, it is a matter of defining the terms. I understand,
for example, time asymmetry as the property of time-asymmetric solutions to
time-symmetric equations. Usually, this property stems from a special choice
of boundary conditions; the radiation condition in classical electrodynamics
or the selection of an outgoing wave in quantum scattering theory are typical
examples. These retarded solutions lead to the so-called radiation arrow of
time (see below). In classical theories one typically has time-symmetric
dynamical equations, however not always - the damped harmonic oscillator
is a counterexample. In contrast to this, time-reversal invariance violation
may be understood as a non-invariance of the underlying dynamical
equations, of the Hamiltonian, the Lagrangian or the action with respect to the
time-reversal operator. The latter was first introduced by E. Wigner in 1932
and, simply speaking, is the antiunitary operator 𝑇− defined by the complex
conjugation in the coordinate representation of the wave function: Ψ(𝑥, 𝑡) →
194 Examples of such dynamically evolving fields are fields in electrodynamics and general
relativity or wave function in quantum mechanics.
195 This situation reminds us of the self-consistent field approach in condensed-matter
physics.
374
What remains to be solved?
Ψ∗(𝑥, −𝑡). One must also assume ℑ𝑉(𝑥) = 0, i.e., there are, for example, no
unstable or chaotic subsystems, excited and decaying states or
creation/annihilation of particles, which is a seemingly innocent but in fact a
very strong assumption. Mathematically speaking, the question is whether
the Hamiltonian is defined over the Hilbert space or outside the Hilbert space.
The standard quantum theory is basically the theory of linear self-adjoint
operators in Hilbert space (see Chapter 6) leading automatically to a unitary
(the Stone-von Neumann theorem) and therefore time-reversible evolution.
Thus, the orthodox quantum theory in Hilbert space is time symmetric. This
is sufficient for solving steady-state structural problems like e.g., calculation
of optical spectra. The Hilbert space technique, however, becomes
mathematically inconsistent when one needs to include decay processes,
short-lived resonances and other transient phenomena. One may notice, for
example, that in the quantum scattering theory the discrepancy arises
between the operator Hilbert space domains and the time asymmetry of
outgoing or resonance solutions. There exist even paradoxes like causality
breakdowns [61,62] or deviations from the exponential decay law. One
usually resolves this conflict purely heuristically - e.g., by selecting the sole
outgoing wave, which automatically leads to the time asymmetry. Another
way to circumvent this problem is to use only one type of Green’s functions,
retarded or (rarely) advanced. However, by using Green’s functions one is
forced to introduce distributions and, strictly speaking, to move outside the
Hilbert space. Basically, the same difficulties and similar procedures are
related to the time-reversal non-invariance in the classical theory of
electromagnetic radiation.
The problem of selecting the correct solution formulated in physics as
finding the scattering (or outgoing radiation) states is closely connected with
the theory of differential equations. For example, if a differential equation has
some localized solutions, and the solutions may be parameterized that is
represented as a function of some parameters, in case it is a single parameter
𝑡 we can ascribe to it the conceptual meaning of time. The scattering states
emerge if e.g., two such solutions are initially (in the “remote past”, 𝑡→−∞)
set up in spacelike domains “far away” from each other, then, with increasing
𝑡 these solutions move towards each other, collide, i.e., interact according to
the governing differential equation and then drift apart in the “distant future”,
𝑡→∞. One can then introduce the S-matrix (scattering matrix), first
introduced by W. Heisenberg, in order to couple the solutions related to the
“remote past” to those for the “distant future” [296]. Then the analytical
properties of the S-matrix determine the time-reversal conditions for the
scattering solutions of the governing differential equation. Usually, the
scattering states require us to go outside of the set of Hilbert space solutions.
Finally, irreversibility is commonly associated with the increase in
entropy, no matter how you define this quantity. Thus, irreversibility implies
the thermodynamic arrow of time, both in classical and quantum physics. This
subject seems to be especially prone to misinterpretation (see below). It is
interesting to find out why the thermodynamical arrow of time being
determined by entropy growth points in the same direction as all other time
9.2
The Arrow of Time
375
arrows. If the thermodynamical arrow of time is coupled with the
cosmological expansion, then what would happen if and when the actual
expansion phase were to be replaced by the contraction period, as it is
envisaged in many cosmological models. According to such models which are
often called cyclic, the universe expands for a while after the Big Bang, before
the gravitational attraction of matter causes it to collapse back into the Big
Crunch. Would the irreversibility change its sign and point in the direction of
increased order when the universe undergoes the bounce? Would the
“contramotion” - motion back in time be as ubiquitous as the normal motion
now? What would happen to our clothes when they reach the time point when
they have not been manufactured? How would living organisms process food?
S. W. Hawking in his famous book [78] gives an example of a broken cup that
would reassemble itself. Nevertheless, Hawking asserts that intelligent life
could not exist during the contracting phase of the universe.
If I were a carpenter, and you were a lady, would you marry me anyway,
would you have my baby.
Our perception of time is determined by entropy growth. The subjective
feeling that we are flowing through time is determined by the fact that we, as
living creatures, exchange information (entropy) with the outer world, its
entropy in this process being increased. Time exists independent of entropy,
but entropy gives us, macroscopic many- body creatures, the sense of time
direction. Time flow against the entropy increase would be perceived as the
wrong direction. Thus, although irreversibility is connected with the
unidirectional flow of time, these two phenomena are not identical. Below, we
shall discuss irreversibility in more detail.
One should remember that time-symmetrical models is only a special
class of mathematical models developed, in particular, to describe the greatly
idealized motion of individual particles, with the states being points rather
than measures, which excludes the possibility that we may have only a
probabilistic knowledge of the “exact” state. There is no dispersion in
measurement of any observable in such “pure” states, since they are only
point measures and, mathematically, only represent extreme points of some
convex sets. Such models can be too idealized to judge about the time-reversal
features of reality in general. Matter rarely consists of individual particles,
and the macroscopic parameters in terms of which we describe matter such
as temperature, density, pressure, elasticity, viscosity, etc. are not necessarily
uniquely determined by pure and idealized mechanical models and pointlike
mechanical variables. For instance, such mechanical models are totally
inadequate for condensed matter physics where, due to many-particle effects
and quasiparticle excitation, only time-noninvariant solutions are physically
meaningful.
We usually hear some objections to this type of reasoning claiming that
mechanical
models
are
“more
fundamental”
than,
for
instance,
thermodynamical or statistical, since the latter involve many particles and
such loosely defined notions as temperature or entropy. I would respond that
this statement is hardly true. First of all, as I have just mentioned and we have
seen more in detail in Chapter 2, the state of a physical system is represented
376
What remains to be solved?
by a Liouvillean measure, with the pure state being only a particular case
(extremal points of a convex set). Furthermore, in the accelerated system or in
the vicinity of a gravitating body the temperature measured by an observer
would be increased, e.g., from a “mechanical” zero to “thermodynamical” non-
zero values. This is the so-called Unruh effect [88], which is a simple and at
the same time profound model worth of reviewing (see below).
Before we proceed to the discussion of time asymmetry and related
problems, allow me to make one more general remark about the models of
physics (see Chapter 2). Physics may be interpreted as the game consisting in
attaching mathematical expressions to pieces of reality, with a subsequent
interpretation of obtained results. There exist many ways of reality
processing: an artist would paint a picture, a composer would create a tune, a
poet would write a poem, a scientist would produce a mathematical model,
i.e., a simple account of a piece of reality encoded in mathematical jargon.
There are at least two things common for all these activities. Firstly, by
producing new information they appear to decrease disorder, i.e., they act in
the direction opposite to the thermodynamical time arrow. However, a more
detailed analysis [249] shows that there is an energy price for the information
so that the energy dissipated (in entropy terms) in the process of creating it
largely exceeds the gain in information, which means that the net result of
human intellectual (e.g., modeling) activities still increases entropy of the
universe and does not reverse the thermodynamical arrow of time. Secondly,
all forms of human reality processing bear an imprint of transient time, and it
would be difficult to imagine the time-reversed reality for e.g., poem writing.
This would imply the causality breakdown, since the cause (e.g., a poet’s
intention) would follow the result. Of course, it might be “logically” possible
to imagine that the causality does not exist or reverses chaotically, so that
causes occur sometimes before and sometimes after the consequences, but
nobody, as near as I know it, have ever observed this effect. Thus we have to
assume that the existence of causal and psychological time arrows is a well-
established experimental fact, which should be also reflected in mathematical
models of physics. I have repeatedly mentioned - and we shall see it further a
number of times - that mathematical models of physics are local but linked,
just like the nodes of a network. Treating some local model is like sending a
signal throughout the whole network; if you pull a seemingly innocent thread
(in this case, the time asymmetry issue) the large part of the entire physical
web responds. Thus, the discussion of the time reversal problem would lead
us to such presumably detached subjects as the spectral properties of linear
operators, radiation conditions, Liouville measure, entropy, and even such an
uncommon device for traditional physics as Hardy spaces.
9.2.6
The Time Operator
There exists a sizable amount of literature on the subject of “time operator”,
which testifies to the fact that a special and somewhat strange status of time
in science and life has been a serious challenge not only to philosophers, but
also to physicists and even to mathematicians. There have been many
attempts to treat time as a physical observable, i.e., to construct the respective
9.2
The Arrow of Time
377
- and meaningful - time operator, both in quantum and in classical mechanics.
Probably the first attempt to define the time operator was endeavored by W.
Pauli [46] who proved that there cannot exist a self-adjoint time operator 𝑇
that could canonically commute with the Hamiltonian 𝐻 (Pauli did not
consider mixed states described by a density operator) whose spectrum is
bounded from below. The latter requirement is quite natural if one wishes to
ensure the stability of matter. Pauli found that this requirement rapidly comes
in contradiction with canonical relations for the would-be time operator.
Before we proceed to discussing Pauli’s proof, let me explain why it is
important for establishing the time-reversal properties. The matter is that if
we have the time operator we can explore its properties, e.g., find its domain,
range, eigenvectors |𝑡⟩ and eigenvalues 𝑡 (which should give the physical
time at any moment). We would see what happens with the change 𝑡→−𝑡,
provided of course the operator could be defined for 𝑡< 0. If, for example, the
expectation value of 𝑇 in some physical state Ψ(𝑥, 𝑡) is 〈Ψ(𝑥, 𝑡)|𝑇(𝑡)|Ψ(𝑥, 𝑡)〉,
then what would be the expectation value in the time-reversed state,
Ψ∗(𝑥, −𝑡)? In other words, is it true that
〈Ψ(𝑥, 𝑡)|𝑇(𝑡)|Ψ(𝑥, 𝑡)〉= 〈Ψ∗(𝑥, −𝑡)|𝑇(−𝑡)|Ψ∗(𝑥, −𝑡)〉
(9.1)
for Ψ(𝑥, 𝑡) satisfying the time-dependent Schrödinger equation with any
possible Hamiltonian? This relation obviously does not hold for non-
Hermitian Hamiltonians, there may be also time-reversal violations for the
motion in an electromagnetic field. It is well-known that for such motion the
time-reversal symmetry only holds under the condition of simultaneous
change of the magnetic field sign (i.e., vector potential 𝐴) [39], §17. In other
words, if some motion is observed in an electromagnetic field, then the time-
reversed motion is also possible, provided the magnetic field vector changes
its direction. This requirement seems to disturb physical intuition: how can
one invert, for example, the magnetic field of the Earth in order to produce
the reverse motion of charged particles? Spin components in the non-
relativistic Hamiltonian introduce supplementary difficulties that can be
correctly discussed only within the framework of quantum field theory (QFT).
It is important to emphasize - although this is of course a trivial remark
that one should not confuse the time-reversal operator 𝑇− (discrete
symmetry),
time-evolution
operator
𝑈(𝑡) = exp(−𝑖𝐻𝑡)
where
the
Hamiltonian serves as the generator of temporal translations and the
hypothetical time operator whose very existence is quite doubtful. From the
Schrödinger equation one can readily see that provided ℑ𝑉(𝑥) = 0, Ψ(𝑥, 𝑡)
and Ψ∗(𝑥, −𝑡) satisfy the same equation, moreover Ψ∗(𝑥, 𝑡)Ψ(𝑥, 𝑡) =
Ψ(𝑥, −𝑡)Ψ∗(𝑥, −𝑡).
9.2.7
Elementary Properties of the Time-Reversal Operator
Quantum mechanics is different from classical mechanics, in particular, by the
fact that in the classical world all the transformations may be done
infinitesimally, whereas in the quantum world discrete symmetries play a
378
What remains to be solved?
crucial role. Time-reversal invariance appeared in the quantum context due
to E. P. Wigner in 1932 [152].196 The Wigner 𝑇−-operator changed 𝑡 to −𝑡 and
had the following elementary properties. The time reversal operation applied
to a state Ψ may be generally expressed as 𝑇−Ψ = 𝑀Ψ∗, where 𝑀 is some
constant unitary matrix. Then the time reversal operation applied to a matrix
or operator 𝐴 may be expressed as 𝐴(−𝑡) = 𝑀𝐴(𝑡)𝑀−1. If 𝐴(−𝑡) = 𝐴(𝑡), the
operator 𝐴 is often called self-dual. A physical system is invariant under time
reversal if its Hamiltonian is self-dual i.e., 𝐻(𝑡) = 𝐻(−𝑡). A system without
time-reversal invariance would have a Hamiltonian which is represented by
an arbitrary self-adjoint matrix free from constraints of being symmetric or
self-dual (𝐻(𝑡) = 𝐻(−𝑡)). If the Hamiltonian 𝐻 is time-reversal invariant
(self-dual), any unitary matrix 𝐴 which is a function of 𝐻 will also be time-
reversal invariant (self-dual).
9.2.8
Time Operator in Classical Mechanics
I hope we have acquired some fragmentary mathematical slang to express
simple things in highbrow form. In the awesome mathematical language, the
laws of classical mechanics are time-reversible if there exists an involution 𝑇
giving a bijective mapping between the time-reversed motion of each state
and the time-forward dynamics of the respective state expressed by the
symbolic operator equation:
𝑈−𝑡= 𝑇𝑈𝑡𝑇
(9.2)
Physicists hardly would express it this way. Recall, nonetheless, that
involution is a map 𝑓(𝑥) that is its own inverse so that 𝑓(𝑓(𝑥)) = 𝑥 for all 𝑥∈
𝐷 where 𝐷 is the domain of 𝑓 or, in the language of group theory, an
involution is an element 𝐴 such that 𝐴2 = 𝐸 where 𝐸 is the identity element.
To put it simply, an involution is a function that, being applied twice, takes
one back to the initial position. All the transformations in the CPT-theorem
(see below) are involutions as well as well-known complex conjugations and
matrix transpose. Not all models of classical mechanics admit such an
involution, for example, dissipative models in general don’t since they
produce certain time-independent structures such as fixed points and
attractors that are not necessarily self-symmetrical under the action of the 𝑇-
operator. In Hamiltonian dynamics, the mapping 𝑇 reverses the momenta of
all the particles, so that the form of the Hamilton’s equations of motion does
not change. The Hamiltonian systems possess time-reversal symmetry (𝑇-
symmetry).
However, we have already seen that Hamiltonian systems comprise a
very restricted class of generic dynamical systems (see Chapter 4, Section
“Hamiltonian Mechanics”). Thus, Newtonian mechanics with friction is
definitely not time-reversal invariant at the macroscopic level where it is
196 To this invariance was added the new quantum particle-antiparticle symmetry or charge
conjugation (C) introduced by Dirac in the famous 1931 paper Quantized singularities in the
electromagnetic field.
9.2
The Arrow of Time
379
usually applied, and it is a big question whether it is 𝑇-invariant at the
microscopic level when one includes all the atomic motions into which the
dissipated energy is translated. The matter is that entropy in this dissipative
process is still increased which selects one direction of time.
We have already discussed that one particular feature of time is that it
cannot be observed directly: all time measurements are performed in an
indirect fashion via reading out the state of a physical system we call the clock.
If the clock is classical, i.e., we find ourselves within the field admitting an
approximate classical description, we can consider any evolving “observable”
𝐴(𝑡) = 𝐴(𝑡, 𝑥𝑖(𝑡), 𝑝𝑖(𝑡)), i.e., any dynamical quantity defined as a function of
coordinates, momenta and, possibly, explicitly on the parameter 𝑡 we usually
identify with time. 197 The observable 𝐴(𝑡) evolves according to the
Hamiltonian equation
𝑑𝐴
𝑑𝑡= 𝜕𝐴
𝜕𝑡+ {𝐴, 𝐻}
(9.3)
where the symbol {, } denotes the Poisson bracket (Chapter 3).
One may note in passing that this expression, which represents the flow
along a vector field associated with the Hamiltonian 𝐻, the latter being
treated simply as a function on the cotangent bundle 𝑇∗(𝑀) known as the
phase space, provides a compact and symmetrical form of the whole
Hamiltonian mechanics. In short, Hamiltonian dynamics is merely the flow
along the symplectic vector field (symplectic gradient of 𝐻). The Hamiltonian
itself serves here, like in quantum mechanics, as the generator of translations
in time (see Chapters 4 and 6 for more details). Let us assume for the moment
that the time operator 𝑇 really exists as a dynamical quantity and also that the
system, for which we are going to consider the time operator (there may be
different variants of such an operator for different physical systems), is
“closed” in the classical sense198, i.e., the Hamiltonian as well as the dynamical
quantity in question (𝐴≔𝑇) do not depend explicitly on parameter 𝑡, 𝐻=
𝐻(𝑥𝑖, 𝑝𝑖), 𝐴= 𝐴(𝑥𝑖, 𝑝𝑖). Then we get, identifying the observable 𝐴 with time
operator 𝑇:
𝑑𝑇
𝑑𝑡= {𝑇, 𝐻}
(9.4)
with 𝑇(𝑡) = 𝑡. The last expression contains an assumption that the classical
operator 𝑇 and the parameter of the phase curves 𝑡, firstly, have the same
dimensionality and, secondly, may be synchronized to the same initial point.
The parameter 𝑡 is treated simply as a label for a one-parameter family of
197 In some theoretical models “explicit time” is denoted by a different symbol, say 𝜏, to
distinguish it from “implicit time” 𝑡 parameterizing geometrical structures in the phase space.
There may be of course many “explicit times”, 𝜏𝑗, 𝑗= 1,2, …..
198 In the quantum-mechanical picture it would correspond to a pure state when the system
may be fully described by a wave function.
380
What remains to be solved?
diffeomorphism-invariant observables. If we also suppose that the
measurement taken on 𝑇 at any moment would produce the value of 𝑡, that is
〈𝑇(𝑡)〉= 𝑇(𝑡) where the angular brackets denote averaging over a classical
state, then we have 𝑑𝑇𝑑𝑡
⁄
= 1 and the Poisson bracket {𝑇, 𝐻} = 1. It means
that under all the assumptions taken, the classical time operator
corresponding to the observed elapsed time must be canonically conjugate to
the system’s Hamiltonian. Being translated into the usual quantum language,
it would mean that the energy-time uncertainty relation, ∆𝑡∆𝐸≥ℏ2
⁄ , should
hold (see Chapter 6 for a detailed discussion of uncertainty relations).
Here it is important to note that 𝑡 is just a parameter and not an
observable in the usual sense of classical or quantum mechanics, since it
cannot be interpreted as a function on 𝑇∗(𝑀) (the phase space), whereas the
quantity 𝑇(𝑥𝑖, 𝑝𝑖) is already an observable - a dynamical quantity.
9.2.9
The Pauli Theorem
The general expression for the evolution of an expectation value of the
operator 𝑇 is given by
𝑖ℏ𝑑〈𝑇〉
𝑑𝑡= ⟨𝜕𝑇
𝜕𝑡⟩+ 〈[𝑇, 𝐻]〉
(9.5)
One can easily corroborate this expression by writing down the scalar
product 〈𝑇〉= (Ψ, 𝑇Ψ), performing the time differentiation and using the
Schrödinger equation. Here, for simplicity we assume that the system is
characterized by a time-independent Hamiltonian and also remains in the
pure state. It is important to note that the expectation value of any operator
𝐴, 〈𝐴〉, is just a number depending on 𝑡, when 𝐴= 𝑇 this number should be
equal to 𝑡.
We have discussed time-dependent quantum mechanics in Chapter 6 in
some detail, here I can only report that the usual Hilbert space formalism may
prove insufficient to handle time-dependent problems, so one might be forced
to introduce other spaces (e.g., so-called Hardy spaces). One may ask: why is
it so difficult to define the operator of time? I have already mentioned that the
simple consideration belonging to W. Pauli [46] shows where the difficulty
lies. Assume that there exists some time operator 𝑇 which must satisfy the
commutation relation [𝑇, 𝐻] = 𝑖ℏ𝐼 in order to ensure the fulfillment of the
Heisenberg uncertainty relations (here we denote the unity operator as 𝐼). If
𝑇 is self-adjoint - it must be lest time eigenvalues were complex - then we may
construct the unitary operator 𝑊(𝑧) = exp (−
𝑖
ℏ𝑧𝑇) where 𝑧∈ℝ is an
arbitrary real number. Allowing 𝑧 to be complex would result in complex
energy values. It easy to see, e.g., by using the exponent expansion and the
identity [𝐴2, 𝐵] = 𝐴[𝐴, 𝐵] + [𝐴, 𝐵]𝐴, that
[𝑊(𝑧), 𝐻] = ∑
(−𝑖𝑧)𝑘
𝑘!
[𝑇𝑘, 𝐻]
∞
𝑘=0
= −𝑧𝑊(𝑧)
(9.6)
9.2
The Arrow of Time
381
Now let |Ψ⟩ be an eigenstate of 𝐻 with eigenvalue (energy) 𝐸, then the
above commutator gives H 𝑊(𝑧)|Ψ⟩= (𝐸+ 𝑧)|Ψ⟩, i.e., 𝑊(𝑧)|Ψ⟩ is an
eigenvector of the Hamiltonian 𝐻 with energy 𝐸+ 𝑧. This means that any
spectrum of 𝐻, e.g., point-like or bounded, can be mapped onto the entire real
axis, since 𝑧 is an arbitrary real number. In other words, the time operator
canonically conjugated to semi-bounded Hamiltonians of nonrelativistic
quantum mechanics should not exist or, at least, it can exist only when the
energy spectrum of the system is continuous and unbounded from below.
Thus, Pauli concluded “that the introduction of an operator 𝑇 must
fundamentally be abandoned and that the time 𝑡 in quantum mechanics has
to be regarded as an ordinary number.” In quantum theory, time is totally
classical - it must be read out from a classical clock.
There have been numerous attempts to circumvent this theorem of Pauli.
Some of the authors who undertook such attempts pay attention to the fact
that the domains and ranges of operators involved should be precisely
analyzed [79]. In the physical language it would mean, for example, that a
version of a quantum theory with an energy spectrum unbounded from
below, e.g., similar to that of the electron-positron field in quantum field
theory, is admissible. However, the “vacuum sea” of holes in the Dirac theory
contains only occupied states for 𝐸< 0. We discuss the structure of the QED
(quantum electrodynamics) vacuum and its local deformations, e.g., in a laser
field, at some length in Chapters 6; now we need only to mention that there
may be a connection between the energy spectrum of the system and its time-
reversal properties. Another possibility would be to repudiate the canonical
commutation relations allowing [𝑇, 𝐻] = 𝑖ℏ𝐶 where 𝐶 is some self-adjoint
linear operator, not necessarily a multiple of the identity operator. In this case
the spectrum of 𝐻 may be unbounded from below [80].
9.2.10
Time Reversal Puzzles
Is it true that if we were living in the anti-world the time would flow in the
opposite direction? Or is it true that if the universe had contained positrons
instead of electrons we could predict the past and make records of the future?
Although these questions are rather metaphysical than physical, my answer
to the both of them is “no”. Before I try to explain this negative answer, I shall
have to expand on what is usually meant by the direction of time, at least how
I understand it. Some musing about the perception of the unidirectional flow
of time invokes the notion of a number of the so-called time arrows, more
physical than the above mentioned psychological.
Professor Zenger was right by indicating that it is the initial conditions at
the early stage of cosmological development that result in general time-
asymmetry. First of all, it is an observational fact that the universe expands,
at least at the present era, so that there must be time-reversal asymmetry at
the cosmological level - the cosmological arrow of time. This arrow is directed
from the time domain when the universe’s spatial dimensions were smaller
to the time when they become larger. One may notice that the cosmological
arrow of time is not based on any mathematical model or “law of physics”, it
is an observational fact. One may also notice that the corresponding
382
What remains to be solved?
mathematical model, general relativity, is based (in its standard form) on
time-invariant differential equations (see, e.g., [39], see also Chapter 9 of the
present book), but these equations admit the solutions that reinforce
expansion versus contraction, again at least for the current era mass
distribution. Cosmological models, in general, tend to be time-reversal non-
invariant, see e.g., [75]. Furthermore, the dominant model of the naissance of
the universe is that of a Big Bang started at some point. There exist standard
objections that the expansion of the universe from this point will eventually
cease and reverse, but these objections, mostly based on the well-known
Friedman-Lemaître-Robertson-Walker (FLRW) mathematical model [39],
see also [66], are rather induced by the desire to reconcile the observed
cosmological expansion with the presumed time-reversal invariance than by
the need to balance observation and opinion. The FLRW-class models are
comparatively simple because they are completely spatially homogeneous
and isotropic, i.e., admit a six-dimensional symmetry group. They are just
quite convenient mathematical models for the average matter distribution on
a very large scale fully disregarding the deviations from spatial uniformity
and isotropy. We discuss this type of models in Chapter 8; now I am interested
only in the time-evolutionary aspects of the universe starting from the Big
Bang. The latter is usually understood as a singularity where spacetime
curvature becomes infinite, and the universe rapidly expands outwards from
this state. The expansion rate essentially depends on the spatial curvature of
the universe: if this curvature is positive, the expansion is finally reversed and
the universe collapses again into a singularity which is usually called the Big
Crunch. In the simplest version of FLRW models, the Big Crunch may be
considered an exact time-reversal copy of the Big Bang. However, this
reversible model of cosmological time evolution seems to be rather crude. For
example, one should take into account black holes that must be formed on a
mass scale during the collapsing phase of the universe. Such black holes
would produce some kind of “noise” which would drive the Big Crunch to a
state different from that of a Big Bang. Generally speaking, it would be
arrogant to impose our simple 𝑇-symmetry (i.e., time-reversal, 𝑡→−𝑡)
postulated for elementary single-particle dynamical models on the whole
universe with its curved space-time and rich amount of physical processes.
The model of a cyclic universe when the Big Bang/Big Crunch is just one
act in an eternal sequence of fireballs starting the expansion and finalizing the
contraction is thoroughly discussed in the recent book by the renowned
cosmologists, P. Steinhardt and N. Turok [81]. The authors themselves
consider the cyclic universe model “a radical alternative to the standard Big
Bang/inflationary picture”. We shall briefly discuss competing cosmological
models in Chapter 9, now we need only to understand that there is one
common feature in many of them, namely that the universe swings between
sequences of Big Bangs and Big Crunches, i.e., between acts of extermination
and reemergence. The cyclic models regard our contemporary expansion as
just a transitory phase. However, retrogressing - extrapolating back in time -
is still a difficult issue when observing the entire cosmological landscape. For
example, I could not find a satisfactory answer to the naive question: how did
9.2
The Arrow of Time
383
the universe begin and why do you come to a cosmic singularity in a finite (i.e.,
to be comprised of fundamental constants) time?
It is generally considered that the world should be subordinated to
quantum laws. In quantum mechanics, we mainly describe the state of a
physical system by a wave function 199 - a complex-valued function on the
classical configuration space 𝑀. If quantum mechanics can be applied to the
whole universe, this naturally leads to the question: what is its wave function?
A popular model of J. Hartle and S. Hawking [67] has proposed an answer. In
the above paper, Hartle and Hawking, also somewhat naively, if one speaks of
the quantum state |Ψ⟩ of the whole universe, then |Ψ𝑖⟩ corresponds to the Big
Bang and ⟨Ψ𝑓| describes the Big Crunch. This concept leads to a vast collection
of fascinating mathematical models unified by the name “quantum
cosmology”, almost all of them being highly speculative and deprived of any
direct astrophysical verification. A detailed discussion of these models now
would take us far away from the subject of time-reversal invariance, although
the union of quantum mechanics and cosmology poses a number of questions
pertinent to the time asymmetry. One such question is, for example, the
applicability of the CPT theorem, one of the basic results of quantum field
theory. The standard proof of the CPT theorem implies the flat Minkowski
spacetime as the background manifold; nevertheless, CPT-invariance may
still hold and lead to observed time asymmetry when the quantum state of the
universe is defined as the Hawking path integral [82], see also below. Another
question, very general and in fact related to the first one, is about the meaning
of the wave function of the universe. The model of Hartle and Hawking treated
the wave function of the universe as the probability amplitude for the
universe to appear (actually, from nothing). Then Hartle and Hawking
proceed to estimate this wave function using the path integral method (see
Chapter 6). Conceptually, this is tantamount to the assumption that the
universe may possess all possible life trajectorie histories. An ingenious trick
employed by Hartle and Hawking was to compute the path integral in
imaginary time instead of the usual time parameter 𝑡. This is the usual trick
in path integrals theory (see Chapter 6, section on path integrals in QFT) - to
replace the time variable 𝑡 by an imaginary number, then perform the
calculations, and then analytically continue the answer back to real times. But
in the quantum field theory a flat Minkowski background is taken for granted,
and it is not so easy when there is a curved spacetime, for instance, a black
hole around. Nonetheless, Hartle and Hawking managed to adapt the usual
path-integral techniques to the case when spacetime is essentially curved as
it should be in the presence of matter.
This is a difficulty with time in quantum mechanics. One prefers to
consider the background spacetime as fixed and, in particular, to assume that
there exists a well-defined time 𝑡∈ℝ. This means that quantum mechanics
implies the approximation when large fluctuations in the spacetime metric
can be disregarded.
199 For simplicity, we regard only pure states in the cosmological context.
384
What remains to be solved?
The idea for the trick with passing to imaginary time was probably
twofold: first, to get rid of singularities (points where spatial curvature tends
to infinity), which may haunt calculations with ordinary time, and second, to
eliminate the difference between the forward and backward directions in the
imaginary time so that it would be possible to go backward, turn around,
make loops, closed contours, etc. Besides, there is no problem with end points,
since the imaginary (complex) time does not have a physically defined
beginning or end. In the Hartle-Hawking approach to quantum cosmology, the
initial wave function of the universe is described by a path integral over a
compact manifold 𝑀 with a single spatial boundary 𝛴. The wave function
itself, according to Hartle and Hawking, is interpreted as describing all
possible universes, being large near our universe, which is thus regarded as
the most probable, and small in the vicinity of other universes (there may be
an infinite number of them) where the laws of physics or set of constants are
different. Since all possible universes in this model are described by the same
wave function, there may be transitions between universes, although
characterized by an exceedingly small (almost infinitesimal) probability
because the wave function of the universe is largely concentrated in the
domain of the universe we to live in.
It might seem a bit too arrogant to claim that the wave function of the
entire universe could be known. We have not even been able to observe the
whole universe. One might believe that by knowing the wave function of the
universe one would know everything that has occurred in the past or will
occur in the future. There are rumors that M. Gell-Mann, 1969 Nobel Prize
Winner, asked J. Hartle once: “If you know the wavefunction of the universe,
why aren’t you rich yet?” probably meaning that J. Hartle would know
beforehand all the stock-exchange results, casino roulette outcomes or
lucrative investment possibilities.
However, this detailed knowledge is in fact an illusion, since the rules of
quantum mechanics, due to the generalized uncertainty principle, preclude it.
So, if the universe is governed by the quantum-mechanical laws, knowledge
about the future as well as about the past would be possible only to the
accuracy of quantum fluctuations which may be large. The same applies to the
time reversal. In other words, given the basic law and some initial (final)
conditions, the future (history) of the universe is by no means determined
since the law is quantum mechanical thus giving only the probabilities for
alternative futures or histories.
We have seen in Chapter 6 that the interpretation of quantum mechanics
amounts to translation of quantum alternatives into classical language. In
other words, it is through an understanding of the quasi-classical domain that
quantum mechanical notions become useful for our everyday experience. We
have also seen that the concept of decoherence as the transition to classicality
consists in continual “measurement” actions on the quantum system from the
side of the environment, with the result that quantum variables pass to the
quasi-classical domain. This transition like any other measurement process
selects only previous alternatives, and thus we have to embody the notion of
the arrow of time (consistent with causality) in the transition to classicality
9.2
The Arrow of Time
385
through decoherence. It may thus seem that the arrow of time and the
associated constancy of the past (what has happened in the past cannot be
changed and does not depend on any new information obtained in the future)
are the features only of the quasi-classical or classical domains, but this is not
true, at least as long as we have to use the density matrix tools in quantum
mechanics.
There has been a great lot of arguing in the physical literature about the
Hartle and Hawking wave function. Roughly speaking, the wave function of
the universe is some complex-valued expression defined over semi-classical
configuration space. In general relativity, this is represented by all possible
metrics on the manifold 𝛴 with no boundaries, analogous to a 3-sphere. The
boundary condition for the universe, according to Hawking, is that it has no
boundaries. The primitive form of the wave function of the universe may be
written as
Ψ(𝑥) =
∫[𝑑𝑔] exp (−𝑆(𝑔)
ℏ)
𝜕𝑀=Σ
𝑔[Σ]=ℎ
(9.7)
A little later we shall deal with the ground state wave function proposed
by Hartle and Hawking more explicitly and in mathematical terms. Now we
are interested only in the time-reversal aspects of this model. The model of
Hartle and Hawking is, of course, a highly speculative one but it has produced
a major impact on the physical community and stimulated the explosive
development of quantum cosmology. The Hartle and Hawking model also
envisages the existence of “wormholes” connecting different universes.
According to Hartle and Hawking, the multitude of universes implied by their
wave function should be connected by wormholes, some of these universes
being connected with many others, while others tend to be isolated. Migrating
between the universes may be equivalent to traveling in time. The multiple
universe models typically suggest that all the constituent universes are
structurally identical - quantum copies from the same ensemble - but they
may exist in different states related to different values of the time parameter.
Therefore, the universes constituting an ensemble do not communicate, in the
sense that no information passes between them. Nevertheless, in a number of
“multiverse” models they may have the same values of the fundamental
constants (their “set of genes”) and be governed by the same physical laws.
The state of the whole multiverse is obtained from the states of the
constituent universes according to the quantum superposition principle and
is described by a single universal wave function. In contrast to this multiverse,
the many-worlds interpretation of quantum mechanics proposed by H.
Everett (see Chapter 6) has a shared time parameter. The quantum
mechanical controversy in general and with regard to the quantum time
arrow in particular is ahead; now we want to emphasize that the quantum
evolution of the universe imposes constraints on determinism and time-
reversal properties of our world.
386
What remains to be solved?
As to the time-reversed variant of cyclic models in which the universe
undergoes an infinite series of oscillations, each beginning with a Big Bang
and ending with a Big Crunch, the puzzles are still more annoying. Is the series
of bounces between Big Bang and Big Crunch really infinite? What
fundamental quantities determine the period of oscillations? What happens
with the time arrows at various stages of a bounce? Shall we observe the
resurrection of the dead at the time-point corresponding to the maximum of
the scale factor, 𝑎̇(𝑡) = 0 (see [39], §112)? Is the final state of the Big Crunch
the ideal time-reverse of the initial state of Big Bang? General relativity leads
to a space-time picture where singularities are a universal feature of various
cosmologies with a Big Bang; it appears that no feature of general relativity
can prevent them (see [39], §§102-104, see also [88], Ch.27), although it is
known that Einstein considered singularities a drawback of his own theory
and spent a lot of time and effort to build cosmological models which would
be free of them [91]. Thus, even despite the fact that the equations of classical
general relativity are time-reversal invariant and, in the cyclic models, no
memory of previous cycles would be preserved which means that the entropy
rise would be eliminated, the final state of the Big Crunch still does not seem
to be the pure time-reverse of the time-symmetrical (e.g., FLRW) Big Bang.
The latter may be regarded rather as a melting pot of black hole singularities
characterized by a substantial entropy (Bekenstein-Hawking entropy [87]
HawkingEntropy), see [89].
Thus, time-reversal invariance is not necessarily an immanent property
of cosmological models, in contrast to an apparent temporal symmetry of
general relativity. The cosmological time-non-invariance is in principle
sufficient to break the presumed time-reversal symmetry in general, since the
universe is the largest system to determine the arrow of time. Nonetheless,
we shall proceed to discuss other time arrows as well. R. Penrose counted
seven different arrows of time [89] and analyzed their relationships to each
other. His conclusion was more or less obvious: there is only one explanation
to the observed time-reversal non invariance - in contrast to the commonly
shared opinion, not all exact laws of physics are time symmetric 200 .
Subjectively, we experience the time arrow as having memory of the past but
not of the future. This asymmetry is universal for all people and for all times,
so it may be considered a well-established experimental fact. As such, it
should be explained by some physical theory. I am not aware of such a theory,
probably only piecewise approaches illuminating diverse aspects of the
unidirectional time flow exist so far, see however a discussion in [92].
Such manifestations of discrete symmetry breakdown as the non-
equivalence of direct and reverse time directions - 𝑇-symmetry violation or
what we poetically call “the arrow of time” as well as non-equivalence of left
and right ( 𝑃-symmetry violation, often denoted as 𝑃𝑁𝐶 - parity non-
conservation) and of particles and antiparticles (𝐶-symmetry violation) have
200 I would supplement the list of time arrows provided by R. Penrose with the arrow of time
observed in modern optics where probabilities of the direct and reverse processes are most often
different.
9.3
Irreversibility
387
been well-known in our everyday life for a long time: we grow old and die,
our heart is located in the left half of the body, and we are built of nucleons
and electrons. In distinction to the macroscopic world, 𝑇-, 𝑃-, and 𝐶-
symmetry breakdown in the microscopic world was only recently discovered.
The physical question corresponding to this observational time lag may be
formulated as: what is the relationship between 𝑇, 𝑃, and 𝐶 violation in the
macroscopic and the microscopic worlds? Or, to put it another way, is there
any correspondence between discrete symmetries in the macroscopic and
microscopic worlds and, in particular, between 𝐶𝑃𝑇 symmetries? Here we
implicitly assume 𝐶𝑃𝑇 symmetry to be exact, at least no experimental
evidence for 𝐶𝑃𝑇 violation has been produced so far. Thus, 𝐶𝑃𝑇 appears to be
always conserved in microscopic physics, but not necessarily at the
cosmological level, see [99]. 𝐶𝑃𝑇 symmetry is the only exact symmetry that
includes time reversal. (See Chapter 5, sections on the quantum field theory
for more details on 𝐶𝑃𝑇).
These two questions naturally return us to cosmological problems such
as knowing the wave function (a pure state candidate) or the density matrix,
e.g., in the multiverse models in which our universe emerged as a kind of
bubble, one among a very large number of them, within a statistical ensemble,
and afterwards became relatively isolated from it [93]. The probabilities in
the density matrix correspond to statistics of such bubbles in the multiverse.
As soon as we introduce probabilities in our model, we find ourselves in the
situation of irreversibility, both in classical and quantum mechanics. This
general relationship of probability with irreversibility is thoroughly
discussed in [65], Ch.6. We tackle probabilistic irreversibility in Chapter 7 of
the present book, now I only mention the so-called Zermelo paradox in
statistical mechanics (named after the German mathematician Ernst
Zermelo), which is closely connected with the Poincaré recurrence cycle. In
1893, H. Poincaré in a short note pointed out that kinetic theory contradicts
his so-called recurrence theorem which he proved in 1890 (but published his
proof for a system with a 3D phase space - this case is not valid for kinetic
theory). The note of Poincaré, although, in my opinion very simple and at the
same time fundamental, did not receive much attention until E. Zermelo
raised an interesting objection in 1896.
9.3
Irreversibility
The local time arrow is perceived as striving to equilibrium, and the two
concepts are regarded as synonymous. The trouble, however, is that striving
to equilibrium admits many physical forms: from the Boltzmannian molecular
chaos hypothesis (Stosszahlansatz) in classical kinetics to random phase
approximation (RPA) in quantum many-body theory 201 . It is generally
201 Recall that in many-body theory the random phase approximation is one of the
most popular assumptions allowing one to significantly simplify computations. For
example, the RPA ground state is typically described by a collection of dressed particles
interacting separately through short-range forces and quantized collective (coherent)
modes such as plasmons. Accordingly, one can factorize the many- particle wave
function which leads to enormous simplifications.
388
What remains to be solved?
considered that one cannot obtain the real-life irreversible behavior using
only the fundamental physical laws such as the laws of mechanics.
It is quite common - and physicists exploit this fact over and over again -
that a complex phenomenon can be best understood on simple models. An
example of such models is the one treated in statistical mechanics - the model
of the ideal gas. The time-reversal non-invariance is exhibited even in this
comparatively simple system, and not only irreversibility associated with the
entropy increase, as it is often stated in the textbooks. The fact that the
elementary dynamics of gas molecules is time-reversal invariant whereas the
gas as a whole is a time-oriented system was puzzling L. Boltzmann to the
point of a serious depression. Even at present this paradox continues to stir
controversies among physicists and mathematicians. The above mentioned
Zermelo paradox is of the same type. Consider a closed box containing 𝑁
molecules which move and collide according to interparticle interaction
potentials, elastically reflecting from the walls of the box. We may assume that
the equations of motion for the gas define a Hamiltonian system, therefore
one-parameter group of translations along all the paths preserves the
Liouville measure (see Chapter 4, section about Hamiltonian systems).
Manifolds corresponding to fixed energy values are compact, hence the
Liouville measure produces finite invariant measures at the fixed energy
manifolds. Thus, we can justifiably apply the Poincaré recurrence theorem
which in a somewhat simplified form may be stated as (see for many
interesting details [14], §16, see also the famous book by M. Kac [68], Ch.3):
Let (𝑀, 𝛴, 𝜇) be some space with measure, 𝛴 being a 𝜎-algebra of its
subsets, 𝜇 an invariant measure (see Chapter 3). Let 𝑇: 𝑀→𝑀 be an
endomorphism of this space (phase space). Then for any 𝐴∈𝛴, 𝜇(𝐴) > 0
almost each point 𝑥∈𝐴 is returned infinitely many times in 𝐴, i.e., there exists
an infinite sequence {𝑛𝑖}, 𝑛𝑖→∞ so that 𝑇𝑖
𝑛(𝑥) ∈𝐴. In other words, each set
𝐶∈𝛴 such that 𝑇𝑛𝐶∩𝐶= ∅ is of zero measure.
The standard way of reasoning proceeds as follows: let us imagine now
that the set 𝐴 consists of such phase points that all the molecules of gas are
gathered in one (say, the left) half of the box - we can consider it the initial
state. Then, according to the Poincaré recurrence theorem, one would find
such moments of time that all the molecules will return to the same (left) half
of the box. However, nobody has ever observed the case when the gas does
not occupy the whole accessible volume.
The usual explanation of this paradox is as follows. The probability
measure for the set 𝐴 is of the order of exp(−𝑐𝑁) where the constant 𝑐
depends on some macroscopic physical parameters such as temperature and
density (see Chapter 7). For a gas under normal conditions, when 𝑁≈1023
the measure 𝜇(𝐴) is numerically extremely small so that recurrence cycles
(𝜇(𝐴))
−1 are greater than any extrinsic time scale, e.g., the characteristic
cosmological times. In order to keep the box intact and observe the return to
an initial state, one has to isolate the box from external influences during
cosmological times, which is physically unrealistic. On the other hand, if the
number of molecules is small, say 𝑁≈10, one may in principle observe a
gigantic fluctuation when all the molecules gather in a single half of the box.
9.3
Irreversibility
389
This situation can be rather modeled on a computer than produced in a direct
physical experiment.
One might note that there is no restriction on the time the recurrence
could take. This recurrence (or return) time may drastically differ on the
paths starting from various subregions 𝛴. The first return of each path defines
a measure-preserving map of the region into itself, if one can wait for a long
enough time, of course. One may also notice that there is no intrinsic time-
scale with which the Poincaré recurrence cycle can be compared, so that the
recurrence time is considered large or even infinite not mathematically but
practically. The intrinsic time scale in this system appears only when an
external influence or field is allowed to drive it. External agents can easily
pump the gas into a nonequilibrium state. However, without external driving
forces the gas expands and fills up the whole available volume which
demonstrates the irreversible and time-noninvariant behavior. How can this
behavior be explained for an isolated system?
The Poincaré recurrence theorem is a typical result of dynamical systems
theory (see Chapter 4). Some contemporary studies in dynamical systems are
aimed at a possible “explanation” of the time arrow. There are several ways
to describe the time evolution of a dynamical system. In the classical
framework (see e.g., a very good book [73]), one considers systems of
differential equations with respect to explicit time (there may be other
temporal parameters such as “fast” or “slow” time). In the autonomous case,
the solutions to such systems are usually time reversible. However, in many
cases the dynamical system exhibits either ergodic or mixing properties, and
it is usually assumed that mixing and ergodicity are closely related with the
arrow of time. Indeed, from the viewpoint of statistical mechanics mixing
means irreversibility: every initial measure converges to an invariant
measure (see Chapter 4) under the action of dynamics.
It may be difficult to obtain exact solutions for mixing and ergodic
systems, so to prove irreversibility in a mathematical sense is also difficult.
One can explore discrete-time models or difference equations using fast
computers, instead of solving differential equations. Discretized models, e.g.,
produced with iterated functions [90], may easily exhibit irreversibility due
to a number of different past histories for a given time-point in the present.
It is only recently that the study of dynamical systems became connected
with statistical mechanics and kinetic theory (Chapter 7). It is sometimes said
that in dynamical systems one deals with absolute irreversibility whereas in
statistical physics and thermodynamics only with statistical irreversibility
(owing to the lack of knowledge). Statistical irreversibility appears in large
ensembles of particles (or other entities) whose exact behavior is
subordinated to more specific laws such as the Newtonian or Hamiltonian
laws of motion. The statistical or probabilistic way of explaining time-reversal
paradoxes differs from explanations based on the theory of dynamical
systems, but in fact one considers the same process of passing to probabilistic
(based on finite measure) models starting from the deterministic phase flow
- only the languages are different: more modern “geometrical” in the theory
of dynamical systems and more traditional “physical” discussing inter-
390
What remains to be solved?
particle correlations and many-particle distribution functions in statistical
physics and physical kinetics. This “physical” way of reasoning was originated
in the works of L. Boltzmann [69] and J.W. Gibbs [297].
9.4
Origins of Unpredictability
The way to explain the Zermelo paradox pointed out by L. Boltzmann is based
on the so-called 𝐻-theorem which is usually proved starting from the
Boltzmann transport equation (see e.g. [72, 25], see also Chapter 7).
Traditionally, to prove this theorem the assumption of “molecular chaos” is
premised. Honestly speaking, I have not seen a correct (to my understanding)
formulation of this assumption, the famous Boltzmann’s molecular chaos
hypothesis (Stosszahlansatz)202 Intuitively, it may be understood using the
language of correlations. Before any collision, the momenta of all colliding
particles were distributed uniformly in the momentum subspace
independently of their positions, but after the collision these momenta
become correlated. However, Boltzmann’s conjecture of molecular chaos
appears to remain unproven, therefore it may or may not be true.
The key assumption of “molecular chaos” breaks down time-reversal
symmetry as it leads to a special form of collision integral in the Boltzmann
kinetic equation. This is what was exactly necessary to make ground for the
second law of thermodynamics (the entropy rise). The concept of molecular
chaos (or factorization) can be more or less understood from the dynamical
derivation of the Boltzmann equation [72, 25]. We discuss this derivation and
related issues in Chapter 7; now I shall try to illustrate the molecular chaos
on a simple model.
Assume that we have a homogeneous gas of particles and the single-
particle distribution function (the density of particles having momentum 𝐩)
is 𝑓(𝐩). If we consider only pair interactions and let 𝑤(𝐩1, 𝐩2; 𝐩′
1, 𝐩′
2) be the
probability per unit time (transition rate) for particles with momenta 𝐩1, 𝐩2
to collide and become pairs with momenta 𝐩′
1, 𝐩′
2 . To simplify the two-
particle collision description, let us assume the time and space reversal
symmetry of the elementary process in gas, which gives the detailed balance
equation:
𝑤(𝐩1, 𝐩2; 𝐩′
1, 𝐩′
2) = 𝑤(𝐩′
1, 𝐩′
2; 𝐩1, 𝐩2)
(9.8)
Then, due to Stosszahlansatz, we get the Boltzmann equation (in a
somewhat simplified form)
𝑑𝑓(𝐩)
𝑑𝑡
=
∫
𝑤(𝐩, 𝐩2; 𝐩′, 𝐩′
2)[𝑓(𝐩′)𝑓(𝐩′
2) −𝑓(𝐩)𝑓(𝐩2)] ×
𝛿( 𝐩2
2𝑚+ 𝐩2
2
2𝑚2
−𝐩′2
2𝑚−𝐩′2
2
2𝑚2
) ×
𝛿(𝐩+ 𝐩2 −𝐩′ −𝐩′
2)𝑑𝐩𝑑𝐩2𝑑𝐩′𝑑𝐩′
2
(9.9)
202 It is interesting that Boltzmann was only 27 years old when he introduced Stosszahlansatz
in statistical mechanics.
9.4
Origins of Unpredictability
391
where we have omitted index 1. It is from this equation that Boltzmann
proved the 𝐻-theorem:
𝑑𝐻(𝑡)
𝑑𝑡
=
𝑑
𝑑𝑡∫𝑓(𝐩) ln 𝑓(𝐩)𝑑3𝑝≤0 , which gives the
entropy with negative sign. The meaning of this derivation is that entropy
increases provided the model of Stosszahlansatz (molecular chaos) is valid.
Thus, only the hypothesis of molecular chaos brings with itself time
asymmetry, though not very explicitly.
One can see that the 𝐻-theorem is so simply proved only for the rarefied
Boltzmann gas. Actually, it is usually formulated for such a gas which is not far
from an equilibrium ideal system. I failed to find a satisfactory proof of the
Boltzmann 𝐻-theorem for the non-ideal gas with arbitrary interaction
between the particles and I could not find a proof of this theorem for arbitrary
open systems either, despite the fact that the entropy can in principle be
defined for such systems.
Let us make a brief distraction recalling some well-known facts about
statistical mechanics in general (one can find specific details in Chapter 7).
The foundations of statistical mechanics are still a hot topic, despite the fact
that this subject has persistently been in the focus of interest since the times
of Boltzmann and Gibbs. The traditional issue here is the foundation of
classical statistical mechanics of continuous systems as opposed to now
fashionable lattice systems being closely associated with numerical modeling.
By a continuous classical system one can understand a collection of classical
particles moving in a continuous phase space. The theoretical approach to
deriving the equations of classical statistical mechanics is based on the
solution of an infinite hierarchical system of integro-differential equations for
the many-particle distribution functions (the Bogoliubov chain) [72]. For
many-particle systems in a finite volume the Bogoliubov hierarchy of
equations is equivalent to the Liouville equation.
So the outcome of all this stuff is that is hard to use the reversible models
for the laws of motion to explain why we observe the world having a
comparatively low entropy at any moment of observation as compared to the
equilibrium entropy (e.g. of universal heat death). Moreover, the world as a
whole must have had even lower entropy in the past (see [88], Ch.27). One
may notice here that the definition of entropy in cosmology is still a
controversial issue: there does not seem to be a consensus about the notion
of a global entropy in the universe, specifically for the part of entropy
associated with the gravitational field.
Remark. Large entropy fluctuations in the equilibrium steady state of
classical mechanics can be studied in extensive numerical experiments in a
simple strongly chaotic Hamiltonian model with two degrees of freedom (e.g.,
described by the modified Arnold cat map). The rise and fall of a large,
separated fluctuation is shown to be described by the (regular and stable)
macroscopic kinetics, both fast (ballistic) and slow (diffusive). One can then
abandon a vague problem of the appropriate initial conditions by observing
(in a long run) a spontaneous birth and death of arbitrarily big fluctuations
for any initial state of our dynamical model. Statistics of the infinite chain of
fluctuations similar to the Poincaré recurrences is shown to be Poissonian. A
392
What remains to be solved?
simple empirical relationship for the mean period between the fluctuations
(the Poincaré cycle) can be found and, presumably, confirmed in numerical
experiments. One can propose a new representation of the entropy via the
variance of only a few trajectories (particles) that greatly facilitates the
computation and at the same time is sufficiently accurate for big fluctuations.
The relation of these results to long-standing debates over the statistical
irreversibility and the time arrow is briefly discussed.
One can then show that the Poincaré recurrence theorem incorporates
Loschmidt’s requirement for velocity reversion in thermodynamic gas
systems. It differs essentially from Hamiltonian dynamics from which
Boltzmann’s 𝐻-theorem follows. The inverse automorphism, 𝑇−1, on which
the demonstration of the recurrence theorem is based, does not exist for
atomic and molecular systems. Thermodynamic systems need not
spontaneously return to states they occupied in the past and a Zermelo
paradox has never existed for them. The same conclusion follows a fortiori for
quantum systems in chrono-topology. Poincaré’s recurrence theorem does
not conflict with Boltzmann’s 𝐻-theorem because they apply to systems
described by quite different mathematical structures.
The way I presented the story was to strictly impose molecular chaos (no
momentum correlations) at one moment in time. That is really breaking time
translation
invariance,
not
time
reversal.
From
that
you
could
straightforwardly derive that entropy should increase to the past and the
future, given the real Hamiltonian dynamics. What the real Boltzmann
equation does is effectively to assume molecular chaos, chug forward one
timestep, and then re-assume molecular chaos. Its equivalent to a dynamical
coarse-graining, because the distribution function on the single-particle
phase space can’t carry along all the fine-grained information.
9.5
Understanding superconductivity
When I was a student long ago, I heard the rumors that L. D. Landau had
named three problems in physics as being of an outstanding importance: the
problem of cosmological singularity, the problem of phase transition, and that
of superconductivity. This latter subject is a good example of a synthetic
discipline that has links into many other fields of physics and mathematics.
Superconductivity has lately acquired the status of a whole discipline
combining profound theoretical ideas with engineering applications.
Naturally, when speaking about superconductivity, I only scratch the surface.
Superconductivity is a whole world centered around designing and producing
new materials with highly required but unusual properties. I write “designing
and producing” because the typical approach in superconductivity symbolizes
a new phase in science and technology: previously people have used available
materials, now they are trying to construct them. Note that in fact all of
physics or chemistry, with their vast baggage of analytical tools and research
patterns, have their eventual goal in creating appropriate materials. This is
basically what people understand by technology.
In 1957, the Physical Review published the first fundamental theory
explaining how, at low temperatures, some materials can conduct electricity
9.5
Understanding superconductivity
393
entirely without resistance. Building on experimental clues and earlier
theoretical hints, John Bardeen, Leon Cooper, and Robert Schrieffer, all at the
University of Illinois in Urbana, explained not just the absence of electrical
resistance but also a variety of magnetic and thermal properties of
superconductors. The “BCS” theory also had an important influence on
theories of particle physics and provided the starting point for many attempts
to explain the new high-temperature superconductors. The “BCS” theory has
played a prominent role not only as an ad hoc model for superconducting
electrons, but also in many other areas of physics. The BCS model also had an
important influence on theories of particle physics and provided the starting
point for many attempts to explain the “new” high-temperature
superconductors. Recently, it has been applied to the analysis of dilute gases
of cold fermions in the case of weak interactions between the atoms.
9.5.1
Early History of Superconductivity
Superconductivity was discovered in 1911 and always remained a riddle. Due
to some mysterious reason, metals at very low temperature came into a state
when the resistance to electric current practically disappeared. Physicists
have been trying to solve this puzzle for many years. Only by the 1930s, it was
concluded that electrons in a superconductor must occupy a quantum-
mechanical state distinct from that of normal conduction electrons. In 1950,
researchers found that the temperature at which mercury becomes a
superconductor is slightly higher for mercury isotopes of lower atomic
weight, suggesting that superconductivity somehow involves motion of the
atoms in a material as well as the electrons.
Following up on this “isotope effect,” Bardeen and Illinois colleague David
Pines showed theoretically that within an atomic lattice, electrons could
attract one another, despite their strong electrostatic repulsion. Essentially,
an electron can create vibrations among the lattice atoms, which can in turn
affect other electrons, so the attraction is indirect.
By the mid 1950s, Bardeen was collaborating with Cooper, a post-
doctoral fellow, and Schrieffer, a graduate student. Cooper published a short
paper showing how the Bardeen-Pines attraction could cause electrons with
opposite momentum to form stable pairs [283]. This pairing mechanism,
Cooper suggested, might be responsible for superconductivity, but Bardeen
was initially skeptical. The paired electrons were not physically close together
but moved in a coordinated way, always having equal but opposite
momentum. It was not clear that these tenuous, extended pairs could be
crammed together to create a superconducting medium without getting
disrupted.
A few months later, however, Schrieffer hit on a mathematical way of
defining a quantum mechanical state containing lots of paired electrons, with
the pairs oblivious to other electrons and the lattice, allowing them to move
without hindrance. He later compared the concept to the Frug, a popular
dance at the time, where dance partners could be far apart on the dance floor,
separated by many other dancers, yet remain a pair [284].
394
What remains to be solved?
After publishing a short note early in 1957 [48], the team published what
became known as the Bardeen-Cooper-Schrieffer, or BCS, theory of
superconductivity in December. They won the Nobel prize in 1972. The
theory explained the isotope effect and the fact that magnetic fields below a
certain strength cannot penetrate superconductors. It also explained why
superconductivity could only occur near absolute zero–the tenuous Cooper
pairs break up in the presence of too much thermal jiggling. It’s a testament
to Bardeen’s insight that he chose the right collaborators and kept his eye on
experiment
in
seeing
the
way
forward,
says
superconductivity
experimentalist Laura Greene of the University of Illinois: “It’s how science
should be done.”
One oddity of the BCS wave function is that it lacks some of the
mathematical symmetry expected at the time for any quantum or classical
solution of electromagnetic equations. Further analysis of this point spurred
the development of so-called symmetry breaking theories in particle physics.
Although the superconductors discovered in 1986 rely on electron
pairing, they remain superconducting at temperatures above what the pairing
mechanism in BCS can easily explain. But Marvin Cohen of the University of
California at Berkeley says that given the poor understanding of the new
materials, the original BCS pairing mechanism shouldn’t be ruled out. And,
adds Greene, it took “some very smart people” almost 50 years to get from the
discovery of superconductivity to BCS, so she’s not worried that the high
temperature superconductors remain unsolved after a mere 20 years.
Although some engineers state that the potential impact of high-
temperature superconductivity would be negligible due to high costs
involved, from the physical viewpoint any major breakthrough in
superconductivity research is difficult to overestimate. Historically,
physicists had to cool conventional conductors almost down to absolute zero
to observe the phenomenon. Any person who did some experimental work in
physics would understand what it means - to cool samples to Helium
temperatures: this is usually a multistage process, and even provided the
institution or the laboratory chief has enough money at his or her disposal to
buy the modern equipment, reaching such low temperatures becomes
progressively difficult, since each temperature drop requires finding some
kind of energy within the substance and then devising a means of removing
this energy.
In several years, superconductivity will be one hundred years old, which
is quite a long time for an area of physics. Usually, after such a prolonged
period one could expect the area to be a closed text-book chapter. Yet,
understanding of superconductivity is still rather unsatisfactory, especially as
far as “new” superconductors are concerned. Superconductivity has
traditionally been thought of as a phenomenon that occurs only at
temperatures near absolute zero, but in 1987 several materials that exhibit
superconductivity at temperatures exceeding 100K had been found. I
remember the hype of March 1987 around this discovery - all media sources
treated it almost as the most important event of the XX century. At that time,
I worked as the editor of the physics and mathematics department of the
9.5
Understanding superconductivity
395
leading Soviet popular science journal called “Science and Life” (Nauka i
Zhisn’), and our Editor-in-Chief who usually did not care much about physics
quite unexpectedly summoned me and ordered me to make an urgent
material on the discovery of high-temperature superconductivity, which I
hastily did (“Nauka i Zhisn’” No. 6-7, 1987). For those who can read Russian I
would also recommend a popular-level but comprehensive article on
superconductivity by Academician V. L. Ginzburg - a great physicist of the
dying-out generation of universalists who received his long-deserved Nobel
Prize just for the theory of high-𝑇𝐶 superconductivity (“Nauka i Zhisn’” No.7,
1987). So, a new era of active exploration in the superconductivity field had
begun. People - including the newspaper-peeking lay public - thought the
scientists would very soon provide new superconducting wires to transmit
power as well as levitating trains Nevertheless, the phenomenon of high-
temperature superconductivity is still poorly understood, which makes its
practical applications evasive. No materials currently exist that can become
superconductive at room - biologically acceptable - temperatures. Biological
environment and superconductivity do not seem to coincide in their
equilibrium states, at least so far. The new superconductors - ceramic
materials based on copper oxides (cuprates) combined with various other,
usually rare-earth, elements - support the superconducting current at
temperatures as high as 140K. That was a noticeable jump toward room-
temperature superconductors, since, although requiring rather low
temperatures, these new materials can be cooled with liquid hydrogen, which
is enormously easier and much less expensive than the liquid helium cooling
required by the old materials.
These are all well-known facts, and I repeat them here only to emphasize
the importance of the problem. Now that the superconductivity hype is long
over, with other fields being at the forefront and enjoying mass-media
attention, e.g., nanotechnology, the potential social and engineering impact of
superconductivity is viewed with increasing skepticism. I still think that
room-temperature superconductivity, if it is by any chance reached, would
produce a major breakthrough in energy policy, transportation and possibly
information technologies. Since the world is becoming increasingly
infrastructured, it would result in major technologically-induced social
changes. Take power transmission as an example. Currently, electricity-
generating utilities must transmit power at a voltage of tens or hundreds
kilovolts because otherwise a large portion of transmitted energy is
dissipated during transmission. This is in fact ridiculous since the typical
usage of electric power requires volts or, at most, hundreds of volts.
Enormous voltages required solely for electricity transmission lead to the
whole industry with their monopolies, inefficient management, elevated
prices and frequent dangerous blackouts, without even mentioning ecological
harm and wasted terrains. If one were able to build large-scale power grids
using high-temperature superconducting materials, it would be possible to
generate and transmit power at much lower voltages, rendering a lot of
clumsy technology superfluous. The use of alternative energy sources could
be boosted and the cost of energy worldwide, now exasperatingly rising,
396
What remains to be solved?
could be drastically reduced. In electronics and IT, new and efficient devices
may be designed, since thin films of normal metals and superconductors that
are brought into contact can form superconductive electronic components,
which could replace transistors in some applications. These fancy pictures
are, of course, far from today’s reality - in the same sense as fusion is a
controversial ideal for power generation. One can recall in this connection the
onset of the automobile, aviation, and space research eras.
For example, we can consider the BCS equation for a Fermi gas
characterized by the chemical potential 𝜇∈ℝ and 𝑇> 0. We may assume
the local pair interaction between the gas particles to be 𝜆𝑉(𝐫) where 𝜆>
0 denotes the coupling constant.203 The primary assumption about the
interaction potential 𝑉(𝐫) might be that it is the real function and, e.g.,
that 𝑉∈𝐿1(ℝ3). The BCS gap equation can be written as
∆(𝑝) = −
𝜆
2(2𝜋)
3
2
∫𝑑3𝑞𝑉(𝑝−𝑞) ∆(𝑞)
𝐸(𝑞) tanh 𝐸(𝑞)
2𝑇
ℝ3
,
with 𝐸(𝑝) = √(𝑝2 −𝜇)2 + |∆(𝑝)|2 . One can readily see that the BCS
equation is non- linear.
9.5.2
Some Physical Models of Superconductivity
Now, let us try to understand this amazing phenomenon of super
conductivity. Here I do not attempt to overview the various mechanisms that
have been suggested to explain superconductivity, especially in high- 𝑇𝐶
materials. My intention is very modest and consists in sharing my
impressions of superconductivity research conducted by real experts in the
field. So, it is rather a position of an elderly scientific observer than of an
energetic researcher. Superconductivity is the vanishing of all electrical
resistance in certain substances when they reach a transition temperature 𝑇𝐶
that varies from one substance to another. An interesting manifestation of this
phenomenon is the continued flow of current in a superconducting circuit,
even after the source of current has been shut off. For example, if a lead ring
is immersed in liquid helium, an electric current that is induced magnetically
will continue to flow after the removal of the magnetic field. This observation
immediately invokes an engineering application: powerful superconducting
electromagnets, which, once energized, retain magnetism virtually
indefinitely, have been developed to be used, e.g., in fusion experiments.
Contrariwise, in normal materials the current attenuates nearly exponentially
with time after switching off the source because electrical resistance causes
energy loss due to electron-phonon interaction and multiple scattering even
in very clean conductors. Superconductivity is commonly interpreted as a
transition to a new phase inside a material - a macroscopic quantum
phenomenon when a macroscopic number of electrons (of the order of 1023)
203 Sometimes, the factor 2 is introduced for convenience i.e., the interaction is written
as 2𝜆𝑉(𝐫).
9.5
Understanding superconductivity
397
condense into a coherent quantum state. In case this state is the one of a
current flowing through a material, such current would flow virtually
indefinitely (from the theoretical point of view), and the material can serve to
transmit electric power with no energy loss, unless the superconducting
phase of the sample is destroyed by some external agent, e.g., by a strong
magnetic field.
Although many theoretical explanations related to the superconductors
discovered in 1986 also rely on the BCS pairing mechanism, these “new”
materials remain superconducting at temperatures higher than those which
the BCS-type electron pairing can readily explain. But given the poor
understanding of new and structurally complex materials, a number of
competing mechanisms should not be ruled out in spite of quasi-religious
wars waged by the proponents of different models for superconductivity.
Besides, it is not especially astonishing that the riddle of high temperature
superconductivity remains unsolved after twenty+ years: indeed, it took
about fifty years before “some very smart people” managed to arrive at the
BCS theory from the discovery of superconductivity.
The 1972 Nobel Prize in Physics was awarded to J. Bardeen, L. Cooper,
and S. Schrieffer for their model (known as the BCS theory) of the “old”
superconductors, i.e., those that exhibit superconductivity at temperatures
near absolute zero, including such metals as zinc, magnesium, lead, gray tin,
aluminum, mercury, and cadmium. Some other metals, e.g., molybdenum,
may become superconductive after high purification, and a number of alloys
(e.g., two parts of gold to one part of bismuth) as well as such compounds as
tungsten carbide and lead sulfide can also display superconductivity, so they
can also be classified as “old” superconductors. The BCS theory stipulates that
at very low temperatures electrons in an electric current move in pairs. Such
pairing enables them to move through a crystal lattice without having their
motion disrupted by collisions with the lattice. It is important that the BCS
formalism invented to explain superconductivity is in fact much more
general. For example, the BCS model can be applied to describe phase
transitions in solids, more specifically the metal-dielectric transition. In the
famous Kopaev-Keldysh model [222] the BCS-type phase transition was
interpreted as the Bose condensation of electron-positron pairs (excitons),
which is a direct analogy to BCS superconducting transition. I shall discuss the
BCS theory later in some detail, since it is a beautiful model having many
implications and, besides, because I have a suspicion that many authors
writing on various superconductivity issues did not read the classical BCS
paper [48].
While this microscopic theory explaining conventional superconductivity
has already existed for half a century, as well as the phenomenological
Ginzburg-Landau (GL) theory [47], which can be derived from the
microscopic BCS theory [298] some doubts exist that the BCS model is fully
applicable to the “new” superconductors. The latter appear to be rather
“muddy” in order to serve as model physical systems, so the clear-cut BCS
model based on the Cooper pair formation and condensation in an ideal lattice
does not seem to describe the whole lot of competing phenomena in very
398
What remains to be solved?
special chemical systems. Superconductivity in the “new” and “dirty”
materials, although ultimately producing the same effect, may be a very
different phenomenon, so that the BCS model, which has served as a standard
for describing the “old” superconductivity, may simply be inadequate in the
case of “new” superconductors. There exist competing models, e.g., the 2D
Hubbard model, bipolaron model, bosonic SO(5), U(1), SU(2) and many other
models, which have been proposed to describe high-temperature
conductivity in new ceramic materials (see e.g. the overview by Yu. V. Kopaev
[40]. To my knowledge, all of the models have not been proven and remained
controversial for many years. At least two schools have been confronting each
other while putting forward their versions. These may be classified according
to the type of main quasiparticles involved. One of the schools holds that the
electron-phonon interaction, the same as in the BCS model, still plays the
dominant role in the cuprates. The other insists that electron-electron
Coulomb interaction must be strong in the new superconductors (there are
some experimental indications), so that electron-phonon interaction is
immaterial for pair formation. Sometimes, when listening to excited
arguments of adepts of these schools of thought or even “independent”
adherents to a specific model, I could not get rid of an impression that I was
witnessing a sort of a religious battle in which the adversary must be totally
devastated. The subject of superconductivity seems to be full of prejudices
and can only be compared in this respect with the heated ideological war
waged by the open source community against Microsoft. However, while the
market needs and user convenience can eventually settle the debate in
personal and enterprise computing, the choice of an adequate model to
understand superconductivity is probably a much harder task. This
understanding, by the way, also depends on the current state of the computer
technology.
I shall describe the Hubbard model a bit later, now I can only remark that
this model is simple and universal and can be applied far beyond
superconductivity. Moreover, thanks to the new developments in parallel
computation, one can verify the 2D Hubbard model directly. The Hubbard
model purports to describe superconductivity using a few microscopically
defined parameters such as (1) the probability that carriers - electrons or
holes - hop from one site to another in the crystal lattice; (2) the energy
penalty when two carriers occupy simultaneously the same site; (3) the
concentration of carriers. As I have already mentioned, several theories of
high-temperature superconductors have been proposed, but none has been
experimentally confirmed, so there was a strong disagreement within the
physics community about what model was appropriate at all. One could have
a single validation criterion: if some model does not foresee the
superconducting state in the typical range of temperatures as well as in the
domain of specific compositional and structural parameters, specifically for
the case of cuprate superconductors, then it should be abolished. At first, the
Hubbard model was unsolvable even on high-performance computers due the
amount of computation required. Indeed, superconductivity is a macroscopic
quantum effect, therefore any simulation must involve a lattice scale of 1023
9.5
Understanding superconductivity
399
sites. Besides, the computer model must also account for individual carriers
hopping from site to site on the lattice, which means that the scale of several
lattice spacing with carrier interactions on this scale should be included. All
that is tantamount to a computationally complex multi-scale problem. We
have already encountered multiscale problems in connection with the
transition to classical description in Chapter 4 as well as in statistical
mechanics. Here, I would only like to stress that because of prevailing
numerical difficulties, the issue of applicability of the Hubbard model
remained unsolved, as well as the question of survival of other models for
high-temperature superconductivity.
It has always been a guesswork for me why the subject of
superconductivity is treated in four different volumes of the Landau-Lifshitz
course: in “Electrodynamics of Continuous Media”, in each of the two volumes
of “Statistical Physics”, and in “Physical Kinetics”. This is a peculiar fact which
may be explained, of course, by personal likings of the renowned authors. But
there must be also an objective reason for such a dissemination of a single
issue: superconductivity on its current level of understanding is a multi-facet
subject, no unifying theory for this phenomenon is known, so it should be
tackled from diverse viewing angles and using different techniques. Even the
phenomenological treatment of superconductivity has admitted a variety of
models, many of which had been proposed in the 1930s (see, e.g., the paper
by R. Kronig [264] and the famous paper by H. and F. Londons). One might
mention in passing that quite a lot of the grands of theoretical physics who
were active in that period competed in offering a plausible explanation for
superconductivity: W. Heisenberg, E. Schrödinger, M. von Laue, J. Slater, H.
Casimir and many others.
One can see that superconductivity is really a multilateral phenomenon
by a very simple observation. The current flowing in a superconductor is, as
any current, a manifestation of the nonequilibrium process consisting in a
stream of charged particles. Therefore, the effect of disappearing resistance
to this current cannot, strictly speaking, be treated within equilibrium
thermodynamics, it is a subject for physical kinetics. On the other hand, the
normal and the superconducting states may be represented as two different
phases (in the Gibbs’ sense), because they can be characterized, besides
conductivities, by different values of purely thermodynamical quantities, and
the transition between these two phases can be treated as a normal phase
transition. Thus, the transition of a material from normal to superconducting
phase may be studied by applying usual equilibrium thermodynamics, which
makes the treatment of this phenomenon in statistical physics courses quite
natural. By studying the literature, one may notice that during about the first
quarter century after the discovery of superconductivity, physicists were
focused mostly on the electrical problem, namely on how to explain the
abrupt change and nearly immeasurably small value of resistance. I could not
trace the idea of phase transitions in the papers available to me from that
period, at least it was not predominant until the paper of Gorter [257]; there
are also references to early-stage applications of thermodynamics in the
works of W. H. Keesom around 1925, but I could not find them without
400
What remains to be solved?
investing too much time. Superconductors were customarily described as
crystals in which scattering of carriers is absent, the limiting case of a perfect
conductor [258] R. Becker, G. Heller, F. Sauter. “Über die Stromverteilung in
einer supraleitenden Kugel”, Zeitschrift für Physik 85, 772-787 (1933). This
is a simple model of the Drude type that is traditionally treated in courses on
macroscopic electrodynamics. The electrodynamical approach immediately
invokes the idea of electromagnetic response of a superconducting material
to an external electromagnetic field - in terms of dielectric function or
otherwise. The response theory was discussed in Chapter 7 and I will not
repeat it here.
In the electromagnetic theory of superconductors, the key notion is the
current. Let us try to calculate the current starting at first from the naive
Drude-Lorentz model for electrons in an external electrical field:
𝑚𝐯̇ + 𝑣𝑚𝐯= 𝑒𝐄(𝑡)
(9.10)
where 𝑣 is the collision frequency. If we take for 𝐄(𝑡) the harmonic
components 𝐄(𝑡) = 𝐄0𝑒−𝑖𝜔𝑡, we shall have
𝐯(𝑡) = 𝐯0𝑒−𝑖𝜔𝑡
(9.11)
and the solution
𝐯= 𝑒𝐄
𝑚𝑣(1 −𝑖𝜔
𝑣)
−1
(9.12)
The macroscopical current is
𝐣(𝐫, 𝑡) = ∑𝑒𝑖𝐯𝑖(𝑡)𝛿(𝐫−𝐫𝑖(𝑡))
𝑛
𝑖=1
= 𝑒𝑛𝐯= 𝑒2𝑛
𝑚𝑣
𝐸
1 −𝑖𝜔
𝑣
= 𝜎𝐄
(9.13)
and the conductivity is defined as
𝜎= (4𝜋)−1
𝜔0
2
𝑣−𝑖𝜔≡
𝜎0
1 −𝑖𝜔𝜏, 𝜏= 1
𝑣, 𝜔0
2 = 4𝜋𝑛𝑒2
𝑚
, 𝜎0 = 𝜔0
2
4𝜋𝑣
(9.14)
Here 𝜔0 is the usual plasma frequency and 𝜏 has the meaning of the
collision-free time.
In transient models, when the field cannot be adequately represented by
a single Fourier component, the current value at time 𝑡 is determined by the
entire history of 𝐄(𝑡):
𝑗𝛼(𝑡) = ∫𝑑𝑡′𝐺𝛼𝛽(𝑡, 𝑡′)𝐸𝛽(𝑡′)
𝑡
−∞
(9.15)
9.6
Superfluids and Supersolids
401
See a discussion of the electromagnetic response of material media in Chapter
8.
If we consider the superconductor as a material with disappearing
scattering of electrons, then 𝑣→0 and the condition 𝜔≫𝑣 will be fulfilled
even for very low frequencies. In other words, 𝜎→𝜔 with 𝜔→0 since the
currents circulate without Joule losses (𝑐′′ = 2𝑛𝜒= 0). Recall that 𝑐′ = 1 −
𝜔0
2 𝜔2
⁄
= 𝑛2 −𝜒2 (Ch.5) so that for 𝜔< 𝜔0 the field decays exponentially in
the superconductor with the penetration length 𝑙𝑝= 𝑐𝜔𝜒
⁄
= 𝑐/𝜔√|𝜀′|. For
low frequencies (𝜔≪𝜔0), we obtain the so-called London penetration depth
that does not depend on frequency, 𝑙𝐿= 𝑐/𝜔0.
The wave damping length in this limit is of the order of
(
𝑚𝑐2
4𝜋𝑛𝑒2)
1/2
~
106
√𝑛cm, i.e., depends on the concentration of superconducting
carriers. If we assume that all the conductivity electrons contribute to the
superconducting current (𝑛~1023), we get for 𝑙𝐿 the value about 200Å that is
of the order of the ultraviolet radiation wavelength. Thus, the superconductor
should fully screen the electromagnetic field having the frequency up to
1017𝑠−1. However, this is a very naive model. In fact, the question: “what are
the superconducting carriers?” has hardly been answered, at least one can
find in the literature numerous model representations related to
quasiparticles forming the superconducting current, and, honestly speaking,
I could not find the ultimate value for the superconducting carrier
concentration. A simple representation of the total electronic density as the
sum of normal and superconducting electrons, 𝑛= 𝑛0 + 𝑛𝑠, seems to be an
intuitive illustration, rather than pertinent to a mathematical model. This is a
regrettable situation, as the above question about the nature and quantity of
superconducting carriers is, in my opinion, crucial to the understanding of
superconductivity. Indeed, for instance, the Ginzburg-Landau wave function
is normalized to the number of superconducting electrons 𝑛𝑠 and, besides, as
we have seen, 𝑛𝑠 enters all the equations of superconductor electrodynamics.
The Ginzburg-Landau (GL) model is usually labeled as a phenomenological
one, implying that it is allegedly of lower rank than “true” microscopic models.
In the London electrodynamical theory (see also below), the issue of
superconducting carrier density is not discussed; the quantity 𝑛𝑠 is
considered as a phenomenological parameter whose value for 𝑇= 0 tends to
a natural limit - the total electronic density. In the BCS theory, the full number
of pairs is still undefined, and so on.
9.6
Superfluids and Supersolids
By straining one’s fantasy, one can imagine other peculiar “super”-objects, for
example, supersolids. We have just seen that a superfluid can flow through
the narrowest channels without any resistance. In the geometric language204,
superfluid breaks a global U(1) phase rotational symmetry and acquires the
off-diagonal long-range order. In some other - more physical - language, there
exist low-energy quasiparticle (phonon) excitations in the superfluid. On the
204 One can coin a term for this language, for example, “geometrese”.
402
What remains to be solved?
other hand, we know that there are solids and low-energy phonon excitations
in them. But a solid cannot flow. Using the same “geometrese”, one might say
that a solid breaks a continuous translational symmetry reducing it to a
discrete crystal lattice symmetry. We have discussed the accompanying
mathematical models and effects in Chapter 6. Exploring the arising logical
possibilities, we can make a conjecture that a state may be constructed that
would combine both the crystalline long-range order and the off-diagonal
long-range order inherent to superfluids. This is typical model-building in
physics. Such a model might be naturally called a supersolid one. Historically,
the first model conjuring on the possibility of a supersolid state was probably
contained in the classical paper by A. F. Andreev and I. M. Lifshitz [166]. From
today’s viewpoint, that was a pure speculation on the subject of Bose-Einstein
condensation (BEC) of vacancies. An important idea, however, was that there
can be a state supersolid - possessing both the superfluid and solid-state
order. This idea was rather controversial, of course.
9.7
Relativity
9.7.1
Special Relativity
According to Einstein’s principle of relativity, all laws of physics should be the
same in all uniformly moving (i.e., without acceleration) reference frames.
This is a very strong, even bold assumption. Who can guarantee that all the
interactions possible in nature, including those not yet discovered should be
invariant under the transition from one Lorentz frame to another? Although
the Lorentz transformations had been known besides Lorentz to a number of
other great scientists before Einstein, e.g., to Larmor and Poincaré, nobody
had dared to postulate their universality, probably understanding that it
would be a serious restriction imposed on all possible forces of nature. Notice
that there are incessant attempts, currently intensified, to find the violations
of Lorentz symmetry, in particular by using light and the known fundamental
particles of the standard model, see e.g., a review by R. Bluhm [265], see also
[ 2 6 6 ] . So far, no convincing evidence for Lorentz violation has been found,
but should any kind of Lorentz symmetry violation be discovered, it would
produce a very profound impact on the whole physics. We shall dwell more
on this issue a little below (see section “Is Relativity Firmly Established?”)
However, Einsteinian special relativity like Galilean relativity that had
appeared three centuries before was concerned predominantly with the
uniform, non-accelerated motion. One would, however, desire to have a
theory describing the dynamics of classical and quantum particles in
general i.e., non-inertial frames. A close approximation to such a theory is
given by general relativity.
9.7.2
General relativity
General relativity is an example of a theory that did not originate in any
dedicated experimental endeavors. That was rather unconventional since no
massive observational data (with the exception of equivalence of inertial and
gravitational masses, which was taken for granted by many physicists and
9.8
Gravitation
403
perceived as a trivial observation, see below) were accumulated to activate
one person’s imagination and logic. I mean of course Einstein. It is curious that
in contrast with, e.g., special relativity or quantum mechanics there were no
practical needs which would have dictated the invention of a new theory. It is
remarkable how Einstein himself characterized general relativity. He has
written in the 1950s: “An essential achievement of general relativity theory
consists in that it has saved physics from the necessity of introducing an
‘inertial reference system’ (or inertial systems).” (Albert Einstein. The
Meaning of Relativity. Appendix II, 1950/1955.)
We shall see below that Einstein’s field equations of general relativity link
the space-time curvature to the mass distribution scaled by fundamental
constants 𝐺 and 𝑐. A common slogan-like interpretation of these equations
states that geometry is dictated by the distribution of matter. In fact, the
situation described by Einstein’s equations is deeper and more self-
consistent. Nonetheless, Einstein’s main idea that in the presence of the
gravity field its parameters simultaneously determine the spacetime geometry
was later developed in physics and, one could observe, changed its culture.
9.8
Gravitation
Gravitation is one of the greatest mysteries of physics. Indeed, what causes
gravitation? Recall that Aristotle asserted that heavy objects fall faster
whereas Galileo observed experimentally that all objects fall with the same
acceleration independent of their masses once we remove air resistance (see
Chapter 4). Thinking more or less deeply about this fact leads to the so-called
equivalence principle, profound geometric considerations and a paradigm
change (see [88]). As already noted, there were no practical needs to replace
Newton’s theory of gravitation by anything else. Indeed, the simple
Newtonian theory of gravity worked perfectly well, and its accuracy for
navigation, elementary geophysics of the early twentieth century, or
astronomical observations was totally sufficient - measurement errors were
greater than possible theoretical inaccuracies.
Let us recollect what we know about spacetime. From our experience with
classical mechanics and electrodynamics, we may be inclined to think that
spacetime provides only a rigid framework, a scene upon which a physical
play consisting of multiple events and sets is staged205. Could it be that the cast
would adjust and modify the scene during the performance? Actors or players
only seem to be invited ad hoc to amuse the observers - today one cast or
team, tomorrow another - to the scene or sports arena constructed forever
and for countless past and future events; likewise, spacetime cannot be
changed due to the presence of a specific cast or team members playing their
physical roles206. As a matter of fact, it is a general conviction nowadays that
205 A theologically-scented question is invoked in this place: are there also play
directors, invisible managers and the like?
206 One can recall in this connection a beautiful fantasy by a Soviet writer L. Lagin (L.
Ginzburg) “The old Hottabych”, where a Moscow boy Volka fishes out an ancient vessel
from a river. As Volka opens up the vessel, a genie (jinn) emerges from it and a suite of
404
What remains to be solved?
the scene itself that is spacetime acquires an active role in physical
phenomena, at least in some of them, the most prominent example being
gravity. A little below, however, we shall discuss a class of theories of
gravitation founded on a traditional concept of a passive spacetime like in
classical mechanics or electrodynamics. Yet such theories of gravitation are
only marginal and do not seem to play any significant role in the development
of modern theoretical or observational techniques. And since the properties
of spacetime are, as we have seen in Chapter 4, 5, fully determined by its
geometry one has to study and interpret geometric concepts first in order to
try to understand gravitation.
From purely geometric positions, a plain metric of Euclidean (Galilean)
and pseudo-Euclidean (Minkowski) spacetime corresponds to a rather
special and highly restrictive choice of metric and affine properties (see, e.g.,
[39], §82). It means that one can drastically relax the restrictive assumptions
of special relativity, thus noticing that gravitation is a property of the physical
world which should be naturally expected and not regarded as a mysterious
effect needing to be explained. In general, the metric function 𝑔𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈, 1 ≤
𝜇, 𝜈≤𝑁 or, in relativistic theories 0 ≤𝜇, 𝜈≤𝑁−1, if it exists, encodes the
geometry of the space or manifold. Recall that in physics the metric function
or simply metric can be interpreted as the function determining the line
element between two distinct nearby points so that this function generalizes
the notion of a distance along this line. More specifically, let 𝑀4 be some
manifold (interpreted in physics as a spacetime) and let 𝑥0, 𝑥1, 𝑥2, 𝑥3 be local
(curvilinear) coordinates on 𝑀4 , then the length (line) element 𝑑𝑠2 =
𝑔𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈, 𝜇, 𝜈= 0,1,2,3 on 𝑀4 is identified in physics with the gravitational
field.
So, one can see what gravity is about: it changes the geometric properties
of spacetime. In particular, gravity can change the flow of time so that an
observer in the vicinity of a heavy mass sees time running slower than
another observer far from such a mass (see, e.g., [39], §102). One can
actually find how the time slows down: the respective observable notion is
the so-called red shift which can be expressed as (1 −𝑟𝐺/𝑟)1/2 where 𝑟𝐺=
2𝐺𝑀/𝑟2 is the so-called gravitation radius. One can pose a question: is there
a frame of reference in which all the clocks tick at the same rate? If one
considers the clock in the presence of gravitation the answer in general is
“no”.
Einstein’s astonishing hypothesis that gravitation is just a manifestation
of the spacetime curvature had a profound effect first on mathematics and,
later, on physics. It seemed to be an extremely productive combination of two
seemingly disparate lines of thinking: Einstein’s relativity theory and the
works of Italian geometers, especially T. Levi-Civita and G. Ricci, on tensor
merry adventures begins. Once the supernatural creature was invited by his young
rescuer (the two became inseparable friends) to a football match, and the old spirit
suddenly became a fan of one of the teams. Using his miraculous abilities (notice that the
words “genie” and “genius” are etymologically very close), the old jinn, in order to help his
favorite team, began changing the geometry of the playground, e.g., twisting the goal
configuration, generously created balls for each player (instead of a single one) and so on.
9.8
Gravitation
405
calculus, analysis and algebra on Riemannian (or pseudo-Riemannian) 207
manifolds.
By the way, representing gravitation as the consequence of the spatial
curvature only is a popular misconception. In reality, gravitation emerges due
to the curvature of spacetime (in 4d); the most obvious manifestation of this
fact is the time slow-down in the vicinity of a massive gravitating object such
as a black hole. The field tensor 𝑅𝜇𝜈 of general relativity (which is in fact the
Ricci tensor of classical differential geometry) identifies the spacetime
curvature, and Einstein’s field equations enable one to compute this tensor
field for a given mass distribution. When there are no masses, one would
intuitively think that this field tensor ought to be reduced to the identity
transformation of a Lorentz frame corresponding to flat geometry with the
Minkowski metric of special relativity. This is, however, not always the case:
one can find solutions to Einstein’s equations with no matter i.e., 𝑇𝜇𝜈= 0
everywhere, and yet the Riemann tensor 𝑅𝜈𝛼𝛽
𝜇
≠0 in 4d (see [39], §95 and
[159], §37). The solutions to Einstein’s equations with no matter are referred
to as vacuum solutions. A flat Minkowski spacetime manifold is just an
example of a vacuum solution, probably the simplest one. Other well-known
examples are the Schwarzschild solution ([39], §100) and the Kerr solution
([39], §104) which are rather nontrivial. In classical differential geometry,
manifolds for which 𝑅𝜇𝜈= 0 are often called Ricci-flat solutions.
What is the basic difficulty with gravitation? Figuratively speaking,
according to Einstein’s idea, gravity is weaved into the texture of spacetime
so that it is no longer the force in the Newtonian sense. From the
experimental point of view, it is hard to get the accurate results about
gravity since one cannot measure it in the laboratory as efficiently as one
can detect gravitational effects in cosmology i.e., on the megascale level. It
is not in the least accidental that nearly all laboratory-based gravitational
experiments (and they are comparatively rare) produce the results
characterized by a record accuracy [14]. Indeed, the electrostatic repulsion
force between two protons is ~1036 times stronger than the gravitational
attraction between them, so it is really a challenge to measure gravitational
effects against a predominant background of everyday influences. From the
theoretical viewpoint, it is very hard (many experts even say impossible) to
include gravitational forces in special relativity in the same way as the
Lagrangian for particles and fields was constructed in electrodynamics (both
classical and quantum). Now, I shall recall some basic formulas. Suppose we
have a finite number 𝑁 of classical particles with charges 𝑒𝑖 moving in an
electromagnetic field 𝐴𝜇(𝑥) over their word lines 𝛾𝑖, 𝑖= 1, … , 𝑁 in ℝ4𝑁, then
one can construct both the Lagrangian and the action of this system of
particles in an external field. Taking for simplicity just one particle 𝑁= 1, we
have (see [39])
207 The pseudo-Riemannian manifold, in distinction to the Riemannian case, has at each point
on the tangent space a smooth bilinear form with e.g., (+, −, −, −) signature. Recall that a
Riemannian metric 𝑔𝑖𝑘 on a differentiable manifold 𝑀 is a tensor field of the (0,2) type which is
a symmetric positive-definite bilinear form.
406
What remains to be solved?
𝐿= 𝐿𝑐+ 𝑒
𝑐𝐴𝜇
𝑑𝑥𝜇
𝑑𝜏
(9.16)
where 𝜏 is a natural parameter (proper time) on the curve 𝛾(𝜏), 𝑑𝜏= 𝑑𝑠/𝑐 (𝑠
is the interval), 𝐿𝑐 is the free particle relativistic Lagrangian which is usually
taken to be
−𝑚𝑐(𝑢𝜈𝑢𝜈)1 2
⁄ ,
𝑢𝜈= 𝑐𝑑𝑥𝜈
𝑑𝑥0 =
𝑑𝑥𝜈
𝑑(𝑥0 𝑐
⁄ )
(9.17)
or 𝑢= 𝑑𝛾/𝑑𝑡.208 When there are no particles, the free electromagnetic field
must satisfy the homogeneous Maxwell equations, 𝜕𝐹𝜇𝜈𝜕𝑥𝑛𝑢
⁄
= 0. For an 𝑁-
particle system, the total action may be written as
𝑆= −∑∫{𝑚𝑖𝑐𝑑𝑠𝑖+ 𝑒𝑖
𝑐𝐴𝜇𝑑𝑥𝑖
𝜇}
𝛾𝑖
𝑁
𝑖=1
+
1
16𝜋𝑐∫𝐹𝜇𝜈𝐹𝜇𝜈𝑑4𝑥
= −∑𝑚𝑖𝑐2 ∫(1 −𝑣𝑖
2
𝑐2)
1
2
𝑑𝑡
𝑁
𝑖=1
+ ∑∫{𝑒𝑖
𝑐𝐀𝐯𝑖+ 𝑒𝑖𝐴0} 𝑑𝑡
𝛾𝑖
𝑁
𝑖=1
+
1
16𝜋𝑐∫𝐹𝜇𝜈𝐹𝜇𝜈𝑑4𝑥
(9.18)
So, in Einstein’s general relativity, which is basically the theory of
gravitation, gravity is described by the metric tensor 𝑔𝜇𝜈 defined on pseudo-
Riemannian spacetime. An interval of spacetime has the generic form 𝑑𝑠2 =
𝑔𝜇𝜈𝑑𝑥𝜇𝑑𝑥𝜈. The motion of test bodies (particles) occurs on timelike
geodesics. The source of gravity is considered to be the energy-momentum
tensor of the whole matter (including in the self-consistent way the
gravitation field). Thus, the gravitation field is not regarded as a conventional
physical field of the Faraday-Maxwell type, present over the flat background
Minkowski space. As a consequence, the Minkowski metric 𝛾𝜇𝜈 of special
relativity is not contained in general relativity. The gravitational field in
Einstein’s theory is manifested as a spacetime curvature, not as an
independent actor over the background geometry. This fact has long been a
point of criticism directed at Einstein’s general relativity.
One of the most vocal critics of the Einstein’s general relativity appears
to be A. A. Logunov, a prominent Russian physicist, originally active in high-
energy and elementary particle physics.
It is, in my understanding, largely a conceptual problem. It would of
course be tempting to describe any field as an independent actor against an
208 There may be another choice of the free-particle Lagrangian, e.g., 𝐿=
−(𝑚𝑐/2)𝑢𝜈𝑢𝜈.
9.8
Gravitation
407
unperturbed background as the electromagnetic field in electrodynamics. The
background in such a scheme possesses its own geometric features, it is a
geometry by itself, without an overlaid field.
9.8.1
The Equivalence Principle
I have already mentioned that the emergence of general relativity was not
required by any practical needs. The observational discrepancies
substantially troubled neither physicists nor astronomers. Yet there was one
experimental fact observed already by Galileo in the late 16th century that
was related only to gravitation and did not hold for any other force field.
Namely that all bodies are moving in gravitational fields in the same way
regardless of their composition (provided the initial conditions i.e., positions
and velocities are the same). Or, in slightly different words, the trajectory of
a body in a gravitation field depends only on its initial position and velocity
and is independent of its composition. As customary in physics, saying the
“body” one actually implies a point mass, but only for the analysis of
mechanical motion. Notice that, for instance, for the electric field this
statement is obviously false: the motion in an electric field is determined by
one essential parameter, the charge to mass ratio (see Chapters 4, 5), with not
only its absolute value but also the sign being important for the motion. It
means that acceleration of the body in an electric field crucially depends on
the body’s composition.
The equivalence principle implies that in the reference frame of a free-
falling body another body, so long as it is also falling freely to the same gravity
source, does not feel any gravity. Indeed, the two bodies of any composition do
not experience any acceleration with respect to each other. Due to this
interpretation first adopted by Einstein 209 , the equivalence principle is
usually regarded as the cornerstone of general relativity. We have already seen
that the Einstein’s general relativity is grounded on the idea of general
covariance, which can be traced back to Mach’s thoughts of unifying
accelerated and inertial motion. More specifically, the idea of general
covariance may be expressed as the statement that all physical laws (and
possibly other laws) must take the same form in all systems of coordinates,
i.e., for all observers. When discussing some beautiful ultimate-precision
experiments, we have seen that it may be considered a hard fact that all local
centers of mass fall along identical (parallel displaced) trajectories210 toward
the gravity source. There seems to be no exception to this experimental fact
in modern physics.
As I have just mentioned, the equivalence principle in its prototype form
was known already to Galileo and also to Newton (they both established
experimentally that the acceleration of bodies due to gravitation depends
neither on their internal structure nor on composition expressed through the
distribution of masses in a body, see, e.g., the classical paper by L. Eötvös “On
209 Einstein formulated it as follows: “If a person falls freely he will not feel his own
weight”. See, e.g., http://en.wikiquote.org/wiki/Albert Einstein
210 From a theoretical viewpoint, these are the minimum action paths or geodesics.
408
What remains to be solved?
the gravitation produced by the Earth on different substances.” Available in
http://zelmanov.ptep-online.com/papers/zj-2008-02.pdf
).
Newton
considered this principle as so essential for mechanics that he discussed it
already in the opening chapter of his “Principia”.
If gravity is a manifestation of spacetime curvature i.e., a purely
geometrical effect, then the question naturally arises: is gravity the same as
acceleration, e.g., the centrifugal force which is successfully used to
compensate gravitation, say, for astronauts’ training? Or, to put it more
accurately, are gravity and acceleration manifestations of the same underlying
force? In the Einstein’s general relativity gravity and acceleration are
mathematically indistinguishable, is it just a mathematical model or a
profound law of nature? In many situations, gravity and acceleration are not
only mathematically, but also practically indistinguishable. When a fighter
pilot performs acrobatic figures she/he experiences high acceleration, much
greater than 𝑔 - for today’s supersonic jets acceleration values may reach
9 −10 𝑔, higher than the human body may endure. (The forces experienced
by a jet pilot are the masses of her/his body parts multiplied by this enhanced
acceleration.) Strictly speaking this is not a single force, the quantity 𝑚𝑔 being
only the gravity component, but the pilot cannot distinguish between gravity
and acceleration, and the entire force is colloquially referred altogether as
the “g-force”.
Yet the equivalence principle does not say that all accelerations are
equivalent to gravitational forces - this would be a very strong assertion. It
only says that motion in a noninertial reference frame may be considered
equivalent to motion in some gravity field. This latter is not completely
identical with the physically real gravitation fields existing in nature (see a
very clear exposition of this question in [39], chapter 10).
1. The Principle of Equivalence (or Equivalence Principle) does not say that
all accelerations are equivalent to gravitational forces. It only says that
motion in noninertial reference frames is equivalent to the one in some
gravity fields, these latter not being completely identical with the real
gravitation fields existing in nature. See a very clear exposition of this and
other questions being discussed here in the standard textbook by L. D. Landau
and E. M. Lifshitz “The Classical Theory of Fields”, chapter 10. 2. As far as
Maxwell’s EM theory combined with General Relativity goes, I would advise
you to read 90 of the same book. This is, by the way, about “more standard
established theories”.
The trajectory of a point mass in a gravitational field depends only on its
initial position and velocity, and is independent of its composition.
9.8.2
The Einstein Equations
So general relativity is in fact a theory of gravitation where the latter appears
as the spacetime property. The geometry of spacetime is determined by the
metric tensor 𝑔𝜇𝜈 which in turn defines one more tensorial object, the
Riemann curvature tensor 𝑅𝛼𝛽𝛿
𝛾
. This object is so important for
9.9
Cosmology
409
understanding gravity that it deserves a special description and comments
(see [39], §§91,92). I shall discuss it here only very briefly.
The Riemann curvature tensor is usually defined through the Ricci
identity for an arbitrary covector field 𝐴𝜇:
𝐴𝛼;𝛽𝛾−𝐴𝛼;𝛿𝛽= 𝑅𝛼𝛽𝛿
𝛾
𝐴𝛾,
(9.19)
where
𝑅𝛼𝛽𝛿
𝛾
= 𝜕𝛽Γ𝛼𝛿
𝛾−𝜕𝛿Γ𝛼𝛽
𝛾+ Γ𝛼𝛿
𝜎Γ𝜎𝛽
𝛾−Γ𝛼𝛽
𝜎Γ𝜎𝛿
𝛾.
(9.20)
One can decompose 15.20 into the following sum
𝑅𝛼𝛽𝛾𝛿≔𝐶𝛼𝛽𝛾𝛿+ 𝐸𝛼𝛽𝛾𝛿+ 𝑆𝛼𝛽𝛾𝛿,
where 𝐶𝛼𝛽𝛾𝛿 is the so-called Weyl tensor, 𝑆𝛼𝛽𝛾𝛿= (𝑅/12)(𝑔𝛼𝛿𝑔𝛽𝛾−
𝑔𝛼𝛾𝑔𝛽𝛿) 𝐸𝛼𝛽𝛾𝛿 is the Einstein curvature tensor:
𝐸𝛼𝛽𝛾𝛿= (1/2)(𝑔𝛼𝛿𝐺̃𝛽𝛾+ 𝑔𝛽𝛾𝐺̃𝛼𝛿−𝑔𝛼𝛾𝐺̃𝛽𝛿) −𝑔𝛽𝛿𝐺̃𝛼𝛾
and
𝐺̃𝛼𝛽≔𝑅𝛼𝛽−1
4 𝑔𝛼𝛽𝑅.
Here 𝑅𝛼𝛽 is the Ricci tensor which is defined is defined as the contracted
Riemann tensor, 𝑅𝛼𝛽≔𝑅𝛼𝛽𝛾
𝛾
and 𝑅≔𝑔𝛼𝛽𝑅𝛼𝛽 is the Ricci scalar. One can
see that by using all these definitions it is possible to represent the Riemann
tensor by another decomposition:
𝑅𝛼𝛽𝛾𝛿= 𝐶𝛼𝛽𝛾𝛿+ 1
2 (𝑔𝛼𝛿𝑅𝛽𝛾+ 𝑔𝛽𝛾𝑅𝛼𝛿−𝑔𝛼𝛾𝑅𝛽𝛿−𝑔𝛽𝛿𝑅𝛼𝛾)
−𝑅
6 (𝑔𝛼𝛿𝑔𝛽𝛾−𝑔𝛼𝛾𝑔𝛽𝛿).
9.9
Cosmology
There is probably no literate person in the so-called Western civilization who
has not read stories by Sir Arthur Conan Doyle. If there is one, here is the link:
http://www.sherlockholmesonline.org. To me, Mr. Sherlock Holmes, one of
several famous personages created by Conan Doyle, was always remarkable
in his inclination to aphorisms. One of the statements repeatedly pronounced
by Holmes (see, e.g., https://www.goodreads.com/quotes/92128-it-is-a-
capital-mistake-to-theorize-before-one-has) is often cited in physical
papers: “It is a capital mistake to theorize before one has data. Insensibly,
410
What remains to be solved?
one begins to twist facts to suit theories, instead of theories to suit facts.”
Sometimes this dilemma “theory first, data later or vice versa” is even called
the Sherlock Holmes problem. Strictly speaking, statistical inferences would
be impossible if one dogmatically sticks to Holmes’ maxim. In observational
sciences, it is hardly possible to accumulate the whole evidence, moreover, it
is hardly possible to establish if and when the collected evidence is ample for
constructing a valuable theory. Therefore, one constructs models first and
then tries to reconcile them with available data. In astronomy and cosmology,
for example, there is simply no other way to do research.
Cosmology is an odd mix of observational facts, beautiful mathematical
models (predominantly based on Einstein’s general relativity) and rather
wild speculations. Speculative fantasies about the early universe mostly come
from particle and high-energy physics. Formally, cosmology is the study of the
actual state and evolution of the universe as a whole or at least of its maximal
observable region. About a hundred years ago, the prevailing opinion was that
the universe must be static, and this view universally formed the perception
of the world, to the point that even Einstein shared it. As far as observations
are concerned, the existence of galaxies apart from ours was unknown. By the
way, cosmology is not the only discipline where models strongly influence the
perception of reality. In new physics centered around superstrings, the
relationship between models and reality is a bit twisted (see the
corresponding section below). Nonetheless, the subject of cosmology has
impressively matured lately as a physical discipline [285, 286]. The models
that have been traditionally treated only in “hard” physics are now
dominating cosmology [94].
Yet many models of modern cosmology seem to be little more than
philosophy, although camouflaged by some mathematics. One of the most
provocative philosophical ideas in modern cosmology is that the universe, a
tiny (and undetermined) part of which we observe is only one of many -
possibly of an infinite number - of individual universes. One of the prominent
hypotheses in this series of speculations claims that the laws of physics and
physical constants must be very different in all member universes in the
whole multiverse ensemble so that our universe has special conditions that
are fine-tuned to sustain our carbon-based life. This is a variant of the
anthropic principle. The latter is a philosophical principle which appears to
be impossible to falsify. In spite of this speculative context, many highly
qualified physicists are fully seriously discussing anthropic arguments as well
as the conjectural Drake equation associated with it, an attempt to estimate
an attempt to evaluate the potential number of extraterrestrial civilizations,
initially in our galaxy - the Milky Way, now in the multiverse where the fraction
of the universes with the favorable for the human existence (in the spirit of
anthropic principle) physical laws and constants should be estimated, see one
of the latest discussions in [267] and the literature cited therein. I don’t think
that such probabilities can be correctly derived unless a solid underpinning
theory is formulated which possibility is far from being obvious. Yet to lapse
into wild speculations, almost on the verge of daydreams is a very fascinating
pastime (recall, e.g., the character of Manilov in “Dead Souls” by N. Gogol).
9.9
Cosmology
411
In contrast with other physical models, cosmological models represent
objects on the largest possible scales, e.g., the universe is treated as a whole.
For such scales the geometry of spacetime is assumed to be described by
general relativity i.e., determined by the metric tensor 𝑔𝑖𝑘(𝑥𝑗). One should,
however, remember that this method of description is just a convention and
there may be other framework assumptions or variants of general relativity
compatible with current astronomical observations. For example, the basic
self-consistent approach to the question how matter determines the
spacetime geometry (which in its turn determines the motion of matter)
essentially depends on the so-called cosmological constant Λ = −8𝜋𝐺𝜌𝑣𝑎𝑐/
𝑐2, where 𝐺 is the gravitational constant and 𝜌𝑣𝑎𝑐/𝑐2 is the vacuum energy
density (see below), that is typically assumed not to vary in space and time
(∇𝑖Λ = 0). The difficulty with the Λ-term is that one can find no compelling
evidence for setting up any specific value of cosmological constant so that
there is no shortage in cosmological models corresponding to different values
and domains of Λ and resulting in significantly different possible behaviors of
the universe. One can, in particular, set the value of Λ by hand to zero, as it was
suggested, e.g., by Einstein ([308]), but such an ad hoc fix can be difficult to
view as physically generic.
From the theoretical viewpoint, the cosmological constant is caused by
quantum vacuum fluctuations of the fundamental fields present in the
universe: scalar, vector, tensor, and spinor fields. Here, however, there is a
notorious problem: most standard quantum field theories using the concept
of the quantum vacuum (zero-point) energy suggest large values of the
cosmological constant, typically of the order of 𝑚𝑃𝐼
4 where 𝑚𝑃𝐼= (ℏ𝑐/
𝐺)1/2 ≈1.22 ∙1019𝐺𝑒𝑉/𝑐2 which exceeds by approximately 120 orders of
magnitude the observed value of the cosmological constant. This discrepancy
between theory and observations is known as the “cosmological constant
problem”: why is the cosmological constant so small or, to put it another way,
why do quantum fluctuations not manifest themselves in a large Λ-term?
The conventional model of cosmological evolution states that the history
of the universe can be traced back to a highly compressed state, when the now
known physical laws were mostly inadequate. This protostate was followed
by the inflation phase (see below) when tiny quantum fluctuations became
the germs of the galaxies observed today. Afterwards, as the inflationary
universe had cooled, radiation was decoupled from the matter and gradually
became what we now know as the cosmic background radiation, whereas the
density fluctuations contracted due to gravitational forces. Eventually the
observed cosmic structure has organized. A fluctuation that might have
occurred a dozen of billions years ago originated our galaxy, and this process
was followed by a further chain of accidents leading to the formation of the
solar system including the planet called Earth. Later, other low-chance
accidents happened, which contributed to the appearance of life and rapid -
as compared with the cosmological periods - multiplication of its forms (along
with natural selection). Driven by the negative pressure produced by the
mysterious “dark energy”, the universe is expanding with an accelerated
speed. Based on their increasingly sophisticated techniques, the astronomers
412
What remains to be solved?
have come to believe that the universe should be much larger than the
remotest horizon that can be observed even through the most powerful
telescopes.
There seem to be two more or less distinct brands of cosmology (and,
accordingly, cosmologists): one is the “global cosmology”, focused on the
physical principles behind the evolution of the entire universe, and the other
is more local, closer to astrophysics, scrutinizing the structure and behavior
of separate objects such as stars, galaxies, black holes, quasars, etc. These two
brands of cosmology deal with different space (and often time) scales, and
there are few people who can professionally synthesize both perspectives. In
this section, I shall deal mainly with cosmological models for the whole
universe.
According to modern cosmology, our universe appeared from a
primordial state approximately 13.7 billion years ago 211 . The term
“primordial” in this context implies that the initial state was almost empty,
with no matter and very little energy. Then the question naturally arises:
where do all the stars, planets and galaxies come from?
One of the other basic questions of cosmology traditionally was: is the
universe finite or infinite? In the Marxist philosophy 212 unconditionally
accepted in the Soviet Union, the universe was viewed as infinite, I don’t
understand why. Here, one might recall the words of Einstein: “Two things
are infinite: the universe and human stupidity; and I am not sure about the
universe”. Cosmology belongs to a class of synthetic scientific disciplines like
ecology, biology, geophysics, embracing concepts from many other fields of
science. Such disciplines are full of interconnections and complex interactions
with seemingly alien conceptual systems. Due to its interdisciplinary
character, synthetic disciplines are favorite targets for the philosophers.
Thus, in cosmology, philosophically-minded people are usually very active.
One of the main concepts of cosmology is the discovery that the universe
is currently expanding and that it originated with the Big Bang. Indeed, the
expansion points at an extremely hot origin of the universe known as the Big
Bang which is a starting point of all possible universe histories. It would still
be strange if a curious person did not ask: what was before the Big Bang?
However, in the milieu of professionals this question is considered
impertinent. Saying differently, it is asserted that no event prior to the Big
Bang was physically possible, which is a very strong assertion. One typically
explains the statement of impossibility of events before the Big Bang by some
semi-philosophical analogies, e.g., of the following type. The geometric
properties of the Earth are such as not allowing one point to have the distance
from another greater than approximately 20000 km. This is the maximal
geographic length for the Earth. In a similar way, there must be (I would say
might be) the largest time interval in the universe determined by its geometric
211 This date varies from source to source, but it is not substantial since the exact,
e.g., up to a million years, age of the universe is an abstraction anyway.
212 It is just an ideological cliché and has nothing to do with Karl Marx. Marxism
and Marx are almost unrelated.
9.10
Black Holes
413
properties. Therefore, it does not make sense to speak of the time as about
the ordered sequence of measurable events outside this maximal interval. Yet
I think that putting aside some typical quasi-scientific dogmatism, such naive
questions of cosmology as: “Did anything happen before the Big Bang? What
will be the final fate of the universe? How credible is the Big Bang model?” are
in fact very deep. We ought to remember that the Big Bang is only a model,
though a dominating and, as many cosmologists think, a necessary one.
Scientists usually do not like when someone dares to express doubt or
skepticism about a currently dominating concept; one can observe the
righteous indignation when, e.g., string theory or the Big Bang model are
critically assessed. It is considered a “mauvais goût”, almost as if one
criticized special relativity. This is not a scientific feature, but utterly
psychological: people are trying to evade cognitive dissonance. There are
surely many facts compatible with the model of Big Bang and corroborating
it, but all these facts are just interpreted observations of the Big Bang effects.
In cosmology, there is no shortage of models bordering on wild
speculations. Inflation is probably the most plausible one among these wild
speculations. Inflationary scenario is actually a bunch of models, so
fascinating that more and more cosmologists are contaminated with the
ideology of inflation. As far as modern astrophysics can tell, the universe is
about 13.7 109 years old and has the size of 93 109 light years. The
combination of these two numbers produces a little mysterious impression
since in 13.7 billion years light can only travel 13.7 109 light years. Then how
is the universe that big? And how did it become so big that fast? A short
answer: inflation. This model implies that the universe, soon after its birth,
exponentially increased its size by many orders of magnitude during a very
short time. But this is again a little mysterious. What triggered inflation? What
stopped it? Why cannot the universe continue to inflate at an exponential
rate? There are claims that the reliable answers to these questions do exist,
but I, having read some literature on inflation, was only reinforced in thinking
that there exist little more than just speculative models, although I am in no
way a specialist in the field. By the way, one of the possible answers to the
question why inflation stopped is that it did not stop. The universe, according
to the model of never-ending inflation is still exponentially expanding its size,
but we cannot observe it so far because we live in a small region of stability -
in a cosmic bubble. I don’t know how one can prove or falsify this conjecture,
provided the bubble boundaries are beyond the edge of the visible part of the
universe. There is also an open question: are there other bubbles? If one of
them exists, why cannot there be many of them? Is the number of bubbles
finite? Besides, if there are many bubbles, can they interact, e.g., collide with
each other?
9.10 Black Holes
We have seen that there exist a great lot of crossovers between diverse fields
of physics, but the issue of black holes (BH) probably contains the majority of
them. Models associated with the concept of a black hole attract scientists
414
What remains to be solved?
with different background - and not only scientists. It is curious that the
thermophysics of black holes has been used recently for practical purposes,
namely for navigation engineering. Now ubiquitous GPS devices are tied to
satellites whose orbital position (ephemeris) must be determined very
precisely. However, referring the satellite position to the planet may have
poorly controllable errors, e.g., owing to fluctuations of the Earth’s rotation
axis orientation. Simply speaking, the planet is constantly wobbling. Such
fluctuations may have a number of underlying physical causes such as tidal
forces induced by the gravitational pull from the Moon or coupling to the Sun,
ocean currents, or fluid motion in the Earth’s molten core. In general,
fluctuations are omnipresent regardless of whether one can detect their
immediate cause or not. So for highly precise measurements, tying up
satellites to the Earth’s position in space may have an insufficient accuracy. In
such cases, stars seemed to be the obvious candidates to designate the
landmarks in space, and indeed they have been used for navigation - in fact
stars had been used for navigation by the seamen long before such modern
notions as signals, frequency bands, trilateration, satellites, GPS and the like
became known. However, for exact position determination needed in today’s
navigation and, e.g., in geodesy even stars are poorly suitable because they
are also moving. So, the objects that are sufficiently bright to be observable
through astronomical distances and whose position may be considered
stationary in fair approximation should be chosen for navigation. Such objects
do exist in nature: those are the quasars. The typical brightness (luminosity
of 1040) of a quasar is 1012 that of the Sun (quasars are believed to be the
brightest astrophysical objects), and most quasars are remote enough - over
109 light years away 213 - to appear stationary within the accuracy of the
current detectors. More than 2 105 quasars have been registered by
astrophysicists (see, e.g., http://www.sdss.org), and about 3000 of them are
said to be possibly used for navigation purposes. A grid of distant quasars
whose positions in the sky is known with a sufficient accuracy can form a map
to be used for specifying the Earth’s axis orientation and, consequently, for
satellite-based positioning systems. Such a map, e.g., ICRF and ICRF2
(International Celestial Reference Frame) has been recognized by the IAU
(International Astronomical Union) as the primary reference system for
astronomy. I am not going to discuss this issue, quite interesting and having
many physical aspects, see more on that, for example, in connection with VLBI
(very long baseline interferometer) in, e.g., http://ivscc.gsfc.nasa.gov/.
Now, what is a quasar? A standard astrophysical model today for such an
object is that it consists of a massive black hole feeding itself on the
surrounding interstellar matter (gas and dust). The material being trapped
inside a black hole is compressed by gravity forces and thus heated to
estimated 106 K emitting intense blackbody radiation in the X-ray, UV, visible,
infrared, microwave, and a part of them also in radiofrequency ranges,
213 A light year, 𝑙𝑦~1013km is the main astrophysical unit of length; our galaxy (the
Milky Way), containing of the order of 1011 stars, is estimated to be ~105ly across, see,
for example, http://www.atlasoftheuniverse.com/galaxy.html.
9.11
Quantum Gravity
415
sometimes almost equally over these ranges. This devouring mechanism
(called accretion) of energy production is not a single model for quasars.
There were also speculations about possible involvement of antimatter so that
high luminosity of quasars could be explained by the annihilation; there also
existed concepts based on “white holes” (a white hole is a time-reversed
black hole).
Life-time of black holes, 𝜏, should depend only on mass 𝑀.
If you, by any chance, approach a black hole, the tidal force between your
head and toes will tear you apart. Where and when this sad event occurs,
depends on the mass of a black hole: if it has a typical stellar mass, say, 30
times that of the Sun, you will be torn apart well outside the horizon of the
black hole - there are estimates showing that for such an object the tidal force
acceleration between your head and toes would be 106g as you pass the
horizon. For supermassive BHs you could survive inside the horizon before
being torn apart.
Conservation of energy, in our standard understanding, is possible only
in flat space. When the horizon is present, pairs can be produced.
Information conservation: it is generally believed that information should be
preserved under quantum-mechanical operations although you would not
find any correctly defined continuous symmetry corresponding to this
presumed conservation law.
9.11 Quantum Gravity
This is perhaps the most intricate problem of physics. Quantum mechanics
and theory of gravitation appear to be different in every aspect, and it seems
to be one of the greatest puzzles of gravitation why it does not naturally unify
with other fundamental forces. General relativity should be regarded in this
context as the classical limit of quantum gravity. Yet today the theory of
gravitation is based on general relativity which is a geometrical theory whose
geometry seems to be totally different from that of quantum mechanics, even
in the most far-fetched geometrical formulation. It has been difficult so far to
find links between the geometry of quantum mechanics and Riemannian
geometry of general relativity, and many prominent physicists have attacked
the problem of quantum gravity but still unsuccessfully. Nevertheless, much
knowledge has been accumulated in this field. Of course, it would be too
ambitious to examine the whole scope of the distributed knowledge that
guided physicists in addressing the problem of quantum gravity. Being not a
specialist in this field, I shall delineate this problem, its scope and origin, very
superficially. I think that specifically in this field containing a lot of “raw
material” - poorly established results - one must inevitably discuss the
reliability of the distributed knowledge, which leads to some reflections on
methods of obtaining this knowledge, i.e., on methodological questions.
One such methodological question immediately comes to mind. One of
the obvious consequences of the Schrödinger equation is that the wave packet
corresponding to a free particle should spread to infinity in empty space, this
fact follows directly from the mathematical structure of the Schrödinger
equation expressed, e.g., in terms of its Green’s function. It would not be easy
416
What remains to be solved?
to reconcile the diffusive spread of wavelike solution in the Schrödinger
phenomenology with the picture of particles collapsing due to the gravity field.
One can hypothesize that it is probably difficult to reconcile relativity
with quantum mechanics since the two theories have totally different focuses:
relativity deals with the nature of spacetime whereas quantum mechanics is
concerned with the microscopic properties of matter. On a rather primitive,
intuitive level quantum gravity may be illustrated by the necessity to deal with
such unconventional objects as smeared points and fuzzy light cone (recall
that the light cone is a causality arrangement). Also recall that the quantum
field at small distances where the quantum gravity effects should be essential
has a cell structure and evolves dynamically.
9.12 String Theory and String Theorists
“If facts contradict the theory, too bad for the facts”. (A modern proverb)
String theory originally appeared as an attempt to describe strong
interactions (see, e.g., the well-known papers by Y. Nambu [287], B. Nielsen
[288], and L. Susskind [289], see also the historical overview by J. Schwarz
[290]). Later, however, after some period of initial enthusiasm, high-energy
physicists began entertaining doubts that the string theory of that time could
lead to a genuine understanding of strong interactions. I think that such a
sentiment was only reinforced after the emergence and explosive
development of the quantum chromodynamics (QCD). Then a certain shift of
focus occurred: quantum gravity became the dominant interest of string
theory.
In many ways string theory has turned into an eclectic set of closely
interrelated mathematical theories or models. There is nothing wrong in such
a development: for example, ecology is also an eclectic collection of models,
nonetheless people are paying more and more respect to this discipline
(regardless even of its political engagement). It is not necessary that a
discipline be well-composed, harmonious, and deductive as, say the number
theory, calculus, or classical mechanics to be popular. But it is important for
a physical theory that it would be able to predict observable physical effects,
and as near as I can judge one should not speak about string theory in terms
of its predictions unless there are confirmed ones. Unfortunately, some
theories are consistent with pairs of mutually excluding predictions i.e., are
unfalsifiable. For example, string theory “predicts” the existence of massless
fields to the same extent as their absence just like climate theories predict
simultaneously warm and cold weather periods in a given location and
economical theories can simultaneously predict inflation and deflation.
There are many examples of success story of bubbles. As I have
mentioned before, nanotechnology (especially in the beginning of 2000s), IT
(in 1990s), high-𝑇𝑐 (in the 1980s 1990s), chaos, and superstrings have been
fashionable and attracted crowds of people interested not in ideas but in
lavish financing and rapid success. This fact is so often mentioned as pure
sociology instead of physics that it has become a banality. Still, I shall allow
myself to comment on the current situation with new and highly fashionable
subjects, because despite obvious sociological effects there is a really
9.12
String Theory and String Theorists
417
interesting physical and mathematical content in them. Perhaps these
comments induce someone to think more with her/his own head rather than
follow the fashion. It is obvious that the one who puts one’s feet into the
others’ steps leaves no individual traces.
String theory appeared as a mathematical construct unprovoked by
experimental challenges. This was rather a product of internal theoretical
development than a response to any discrepancies (like in the neutrino case),
but if accepted - even without experimental evidence - it will radically change
physicists’ concepts of matter and spacetime. Fundamental quantities of
matter, according to string theory, are not point particles, as it was
traditionally assumed, but some extended small objects called strings. Such
objects, in contrast to particles, may have their own degrees of freedom. As far
as spacetime is concerned, the number of basic dimensions is not 4 as we used
to think but many more, e.g., 10, 11 or 26 - depending on the theory variant.
Although many string theorists do not even hope on experimental
confirmation of string theory, there are claims that it has the status of the long
anticipated
“theory
of
everything”
(TOE),
see,
e.g.,
https://en.wikipedia.org/wiki/Superstring_theory.
On the mathematical side of string theories, there are possibly difficulties
in choosing the right model for them. One can find certain parallels between
modern string theorists and ancient philosophers. Like ancient Greek
philosophers, string theorists are trying to explain the construction of the
world by pure thinking, relying more on group attitudes, internal
communication and speculative models than on observational data or
directed experiments. Both groups used to say that their theories are not
bound to provide earthly, profane results. A genuine high-level theory such as
M-theory is not obliged to give a clue how to descend from fundamental
equations to the real-world problems. Maybe I am inclined to a conservative,
old-fashioned way of thinking, but I don’t see any value in such high-brow
theories: they contain a mythological element and might even produce an
unpleasant association with astrology in the sense that it does not matter
whether they exist or not. One more feature of string theory invoking
involuntary recurrences to ancient philosophers is the fact that there exists
not a single one but numerous string theories - like there were many schools
of thought in ancient times214. Today, one can observe an attempt to unify all
string theories within the framework of some mysterious M-theory, but so far
no one seems to know what it is. Being interested in “M-theory”, I nonetheless
failed to find some fundamental microscopic parameters and corresponding
degrees of freedom that could, in my dogmatic opinion, constitute the essence
of any physical theory. I also could not see how the fundamental states in M-
theory would be defined, but it is of course due to my extremely low
qualifications in this new kind of physics. Moreover, this theory is under
construction. Yet, to be honest, I don’t feel comfortable with the theory that
214 There exists nowadays a supposed connection between at least some of string theories in
the form of duality relations (see Chapter 3 Dualities in Physics) when, e.g., one theory is viewed
as a limiting case of some other, the limit being taken over some phenomenological parameter of
the theory.
418
What remains to be solved?
relies on hopes, group attitudes, conjectures, metaphors and analogies, albeit
garmented in a sophisticated mathematical form (e.g., noncommutative
geometry). I also don’t feel quite comfortable with 10, 11, 26 or undefined
number of dimensions - depending on the model, are you? Then why is it
stipulated that the world is made of strings and not, say, of tiny spherical
clusters? The latter can also produce vibrating modes identified with some
constants, masses, etc. One can also construct Lagrangian density, action,
impose
boundary
conditions,
develop
somewhat
speculative
but
sophisticated mathematical models and the like. Why has the string model
survived so long even though it has so many problems?
I may only repeat that such fault-finding remarks of mine are obvious, not
new, and thus trivial. One can find a critical analysis of string theory on the
popular level, but performed by an expert in the well-known book by L.
Smolin [74]. But a nontrivial thing is: why the whole world and specifically
physicists who traditionally cultivated skepticism
as
their
group
characteristic together sing rhapsodies and stand frozen in awe at the sight of
far-fetched models based on artificially sophisticated mathematics? Besides,
declaring string theory a new TOE people seem to forget that the original
motivation for modern string concepts was the construction of a consistent
quantum gravity, which is a more modest quest. Sorry, but I am inclined to
think that in the issue of string/M theory people are getting a bit off track. And
I still neither understand nor share the general exaltation, while the gap
between the claims of string theory protagonists and physical reality is still
insurmountably wide and reminds us of wild expectations of the space travel
enthusiasts.
A single fact can spoil a good argument. This maxim fully applies to the
string theories. Thus, it would be extremely interesting to collect the pieces
of relevant information coming out from the planned experiments on the
Large Hadron Collider machine (LHC) under construction and permanent
repair in CERN (originally Centre Européenne pour la Recherche Nucléaire,
now mostly known as Organisation Européenne pour la Recherche Nucléaire).
In particular, project ATLAS (a toroidal LHC apparatus) is one of six particle
detector experiments currently being assembled in CERN aimed to provide
new data both on the Standard Model and string theories. To understand the
main ideas underlying the series of grandiose experiments in CERN, let us
briefly recall some concepts of the generally accepted string theory.
One of the startling predictions of string theories is the existence of extra
spatial dimensions. If they happen to be not too tiny, they can, in principle, be
traced in high-energy accelerator experiments. Even in case such extra
dimensions are too small for current accelerators and if, what is very
probable, there will be lack of motivation and/or resources to carry out
further high-cost accelerator experiments, these hypothetical dimensions of
the space we live in can be observed as remnants of very early cosmology.
However, to my best knowledge, nothing indicates possible experimental
proof of extra dimensions.
Therefore, in the low-energy domain, string theories can bring about a
number of massless scalar fields that, in principle, can influence
9.12
String Theory and String Theorists
419
experimentally verifiable low-energy physics, e.g., by violating the
equivalence principle or leading to deviations from Newton’s law. Such fields
might also induce spatial and temporal variations of fundamental constants.
However, no imprints of such massless particles have been revealed as yet.
Thus, string theory as TOE including gravity does not represent an
extension of the existing high-energy physics. Rather it may be viewed as a
collection of artificially invented mathematical models, quite handsome but
without any connection to experimentally confirmed physical principles, at
least by this time. Standard analogy with positron as well as other theoretical
predictions does not pass here, since energies to secure the vague predictions
of string theories experimentally seem to be hardly reachable.
Centuries ago, philosophers modeled the world by mixing various
combinations of the four primary elements: Earth, Air, Fire, and Water
(EAFW-model). About the same time, healers suggested that bodily
discomfort and diseases emerged due to bad perspirations that were inhaled
in the presence of an ill or morally spoiled person. In the 17th century, some
“heretics” suggested the existence of tiny particles called germs that could
appear in many more varieties than the four basic elements and cause
sickness. All those were purely speculative models, initially of the same type
as angels upon the tip of a needle or little devils in the toilet. The general idea,
however, was to offer an explanation based on invisible objects. Yet
“invisible” does not necessarily mean “unobservable”, and when the
technology of physical experiment progressed over the later centuries, one
could see the formerly invisible minuscule particles and germs. These
visualizations brought about new understanding in the natural sciences,
primarily in physics, chemistry, biology and medicine. However, not all
phenomena could successfully pass from the category of invisible to
observable; there are still perennial problems with wild speculations, and the
main question is: how to separate invisible from unobservable, figuratively
speaking fantasies about angels and demons from phenomena caused by
mismatch between “large” and “small”.
Many pseudo-scientific models and theories discussed in Chapter 2
appear to be of the unobservable character or are heavily relying on
unobservable entities such as biofield, orgone energy, etc. There are also a lot
of poorly defined and unobservable notions in generally accepted disciplines
such as psychology, psychoanalysis and psychiatry, which often allows one to
misuse these disciplines for commercial and political purposes (even in
democratic countries). All this may be viewed as remnants of medieval
atmosphere when opinions were valued much more than facts. Unfortunately,
string theory is so far controversial exactly in this “medieval” sense215. This
theory exploits the need to overcome a mismatch between very large (mainly
cosmological, when gravitation is substantial) and very small (Planck) scales
claiming that it can bridge general relativity and quantum mechanics. Strings
215 We may jokingly classify all the models and theories as the germs-type and the angels type.
Using this funny terminology, one might ask whether string theories are of the first or the second
type.
420
What remains to be solved?
are obviously beyond our current ability of direct detection. The adherents of
the string theories, mostly but not exclusively young persons belonging to the
new breed of physicists, rather aggressively contend that the symmetry and
presumed mathematical elegance of string theories is already a sufficient
ground to consider them as variants of “theory of everything”, a grand
unifying concept that bridges general relativity and quantum mechanics as
well as includes all known interactions such as gravitation together with
those described by the Standard Model216.
One often says “string theories”, not theory, because in fact there are
many
of
them
(see,
for
example,
http://www.stringwiki.org/wiki/String_Theory_Wiki). Recall that the basic
string theory suggests that fundamental particles are not zero-dimensional
points as prescribed, e.g., by special relativity, but perpetually vibrating lines
or loops - thus the name “strings”. These submicroscopic strings can be open-
ended or closed and possess natural frequencies as any macroscopic string.
At its core, the theory of such type can produce various mass values for
observable elementary particles just by combining different vibration
frequencies. In fact, one of the dominating mathematical models of quantum
mechanics is also based on obtaining numbers with the aid of vibration
frequencies. Of course, string theories are much more sophisticated,
especially with regard to the mathematics used, to the extent that ordinary
quantum mechanics looks like a primitive school-time algebra as compared
to the rich geometrical techniques of string theories. The latter, for instance,
assert that not only point-like objects should be rejected, but also
conventional spaces (or manifolds) where mechanics - classical or quantum -
evolves. Even the spaces suggested by general relativity are inadequate to
sustain strings. These energy lines or loops presumably vibrate on a six-
dimensional manifold that should be added to the background four space-
time dimensions prescribed by relativity theory. Although, as we have
discussed, physicists and mathematicians nowadays are trying to visualize
objects in multidimensional spaces, it is by no means intuitive and easy. Just
try to visualize a ten-dimensional cube. Besides, as if all these difficulties were
not enough, according to the late developments in string theory one needs
one more dimension, the eleventh. Otherwise, anomalies inherent in the
theory spoil the whole beautiful picture. Moreover, the eleventh dimension
was originally included in the string theory to account for the effect we
perceive as gravity. The initial idea was to visualize the eleventh dimension
as a membrane having two orthogonal directions - one along the membrane
and the other directed outside of it. Then there comes one more conjecture
namely that the gauge bosons of the Standard Model are confined to the
membrane space so that they can only travel along the membrane, whereas
the messengers for gravitational interaction - gravitons - are allowed to
escape the membrane moving outwards. The loss of gravitons can be
216 Recall that the Standard Model essentially consists in the suggestion that the forces
between elementary particles are due to the exchange of gauge bosons playing the role of
messengers.
9.13
Is Relativity Firmly Established?
421
described by the term which is missing both in the Standard Model and in
general relativity and which would couple both theories. It is the absence of
such a coupling term that results, according to some developers of the string
theory, to a complete separation of gravitation and elementary particle
physics together with quantum mechanics.
By the way, the hopes of experimental validation of string theory are
partly related to this concept of escaping gravitons. One experiment in LHC is
designed in such a way as to create the conditions, in the high-energy collision
between hadrons, when the energy loss due to escaping gravitons could be
detected and measured.
9.13 Is Relativity Firmly Established?
When young Albert Einstein proposed in 1905 the theory that was later called
special relativity, he assumed that it was utterly impossible to find an absolute
velocity. Any velocity, according to Einstein, is considered as being related to
the coordinate frame at rest. The astonishing, especially for non-physicists,
feature of special relativity is that any observer measuring the speed of light
(in vacuum) will get the same answer. It is this invariance of the velocity of
light that provokes a lot of people (mostly non-professionals) to try to
disprove the relativity principle 217. Sometimes the controversy stirred by
such disproval attempts becomes quite heated. The flux of papers derogating
Einstein and the theory of relativity does not diminish and seems to be
synchronized with social tensions and hardship periods. I have already
mentioned that during the Perestroika period in Russia (then still the Soviet
Union), I was working as an editor in the leading Soviet popular science
journal. Every day I got dozens of anti-relativity letters and articles whose
authors usually rather aggressively required from me to immediately publish
their papers. At first, I tried to answer these letters, but this consumed nearly
all my time and, paradoxically, provoked still more aggression, so eventually
I gave up and simply disregarded them. Besides, a vast majority of anti-
relativity papers were not only very, very poor from the mathematical
viewpoint, but also carried distinct antisemitic overtones. Indeed, one can
observe that anti-relativism and antisemitism are correlated.
Some anti-relativists argue that Einstein has produced nothing cardinally
new since Lorentz transformations had been already known and used, in
particular, by W. Voigt (1887), J. Larmor (1897), H. A. Lorentz (1899), A.
Poincar´e
(1900,
1905)
(see,
e.g.,
https://en.wikipedia.org/wiki/History_of_Lorentz_transformations). This is
true, but none of these great scientists had formulated relativity as the
217 Roughly speaking, the relativity principle may be formulated as follows: if there are two
inertial systems, 𝐾, 𝐾′, 𝐾′ uniformly moving with respect to 𝐾, then all natural phenomena occur
in 𝐾′ according to exactly the same general laws as in 𝐾. In other words, this principle states that
physical laws should be the same in all inertial reference frames, but that they may differ in non-
inertial (e.g., rotating) ones. This is a powerful unification principle: it is because of this principle
that one cannot determine an absolute speed (understood as a vector). The relativity principle
imposes a very general symmetry on the laws of nature which are not allowed to arbitrarily run
their course in different inertial systems. A bold assumption indeed!
422
What remains to be solved?
fundamental principle to which all physical processes must be subordinated.
We have already discussed in connection with relativistic mechanics (Chapter
4) that relativistic symmetry groups impose stringent constraints on all
interactions in nature (see a detailed discussion in [193], §19).
I remember an episode in autumn 1987 when two rather physically fit
and a priori hostile gentlemen came to my office with a categorical request to
derogate the theory of relativity and the “damned Jewish science” altogether.
One of the two who was older and more talkative represented his younger
protégé as a new genius who could easily refute the whole concept of relativism.
I asked the older one: “And what about you?” He answered: “I am his
manager” which was the term imported from the Western capitalist
environment and utterly alien to the Soviet people at that time. “What else
are you doing in life?” was my second question. The older muttered somewhat
evasively: “We are from the Institute of Physical Culture and Sports.” The
younger guy was scrutinizing me sullenly like a boxer before the inevitable
fight. Then he caught a barely visible nod of the older man and produced a
rather thick handwritten manuscript with a few colored formulas - I
remember they were written in purple red, green, blue, and violet. I could see
right away that the whole stuff had nothing to do with physics. “Why don’t
you publish your theory in a serious physical journals? This is just a popular
science magazine,” I said. “We publish only well-established and only after
they have been published in scientific press. This is an absolute principle
imposed by the Central Committee, and I can’t break this rule. You see, we
belong to the “Pravda” publishing house.” That was not totally a lie because
our journal did belong to “Pravda” and accordingly was controlled by the
communist party Central Committee, and the principle of the “second night”
had been strictly adhered to before Gorbachev’s “perestroika” began in 1985.
By 1987 one could already deviate from any stringent rules imposed by the
party. Nonetheless when I mentioned the Central Committee the sulky “boxer”
became less belligerent, probably a deep-rooted fear of “party organs”
resident in most Soviet people had not evaporated yet. So, I politely escorted
the two gentlemen to the entrance door. I think it might have come to a fight
had I decided to actually discuss relativity theory.
One more episode about the same time was more scientifically flavored.
A man who called himself a professional mathematician rather aggressively
insisted on publishing an article about logical inconsistencies in special and
general relativity. As far as I remember, his objections concerning general
relativity were centered around acceleration: since the equivalence principle
presumably states that since, on the one hand, any acceleration is equivalent
to gravity and, on the other hand, according to the second law of Newton
any acceleration is equivalent to force then any force in nature should be
reduced to gravity which is not true and therefore general relativity is
logically inconsistent. Both statements articulated by “the mathematician”
are wrong, but I could not explain it to him because he just did not want to
listen. By the way, later I have encountered similar dogmas about relativity
several times more, mostly formulated by the persons who could not even
write the equation for geodesics. And such people want to disprove relativity.
9.13
Is Relativity Firmly Established?
423
However, strong anti-relativistic sentiments persisted not only in the
unfortunate Russia. I was present once at an international conference held in
April 1988 in Munich devoted to disproval of the theory of relativity. To my
astonishment, there were numerous participants present from all over the
world, some of them being quite qualified persons. But nearly all were
focused on rejecting the theory of relativity. They were genuinely sure, for
example, that special relativity - which is nothing more than a geometric
theory - is fundamentally wrong. That was a purely psychological fixation
having nothing to do with scientific objectivity. These people’s attitudes and
behavior are still a riddle for me. Is it a fear of some unconscious kind, induced
by counterintuitive suggestions such as interval invariance? Later I found out
that anti-relativity conferences are repetitive and gather a vast audience (see,
e.g., http://www.relativitychallenge.com). One can also recall a historical
irony: A. A. Michelson whose brilliant experiment with E. Morley directly led
to the relativity principle never accepted relativity theory. On the contrary,
modern experimenters who study the theory of relativity at the universities
and commonly fully accept it still strive to measure the “absolute velocity”
[131].
Classical physics with its simple Galilean (and to some extent also
Minkowski) space-time in many cases provide a very good approximation
to the real world, but one should always bear in mind that it is just an
approximation. I concede that the anti-relativistic movement may have a
purely human element. The point is that the human brain is tuned for the
Euclidean geometry and absolute Galilean reference frames. No wonder
that many people refuse to abandon their “comfort zones” i.e., the local
Euclidean-Galilean illusion. Thus, it is difficult for many people to
distinguish between the 3d distance and the 4-distance in the Lorentz
frame (which is zero for events on the light cone). It had always seemed
quite obvious before Einstein’s relativity appeared and then became a
dominating paradigm that the geometry of space was fully Euclidean, all
logical possibilities for non-Euclidean geometries explored by J. Bolyai, C.
F. Gauss, N. I. Lobachevsky, B. Riemann notwithstanding. So, the Euclidean
vision of the world is probably still deeply imprinted in the common
conscience. Recall (see Chapter 3) that the Galilean space-time, which is a
structure incorporating the Euclidean geometry, is strictly speaking not a
metric space where a single metric tensor defines the distance between any
two points or, more exactly, the arc length of any curve. In the Galilean
spacetime, in distinction to spacetimes of general relativity, there is a big
difference between spatial and temporal intervals (and in general between
space and time), see a clear and comprehensive discussion of the Galilean
spacetime features in [103], ch.17.
By the way, absence of a unified metric space is a rather typical situation
in life. Take music, for example, it is, in elementary representation, a 2d
process: tone versus driving rhythm. The tone scales (pitch levels) can be
measured in hertz (Hz) and have nothing to do with rhythm basically
measured in time units (s). The corresponding 2d space might be called
424
What remains to be solved?
“pitchbeat”, and its two coordinates cannot be unified in a realistic musical
theory.
The Galilean structure is in general not a metric space, but a fibre bundle
i.e., a direct product of two manifolds: time 𝑇⊆ℝ and space 𝑆⊆ℝ3, see 3.22,
“Geometry of Classical Mechanics” in Chapter 3 (for simplicity we consider
here a single particle). Time serves in this bundle as a base manifold, to each
point of which is attached (projected on) a fibre. Each fibre can be interpreted
as a clone of the 3d Euclidean space. Such a structure is sometimes called
“chronogeometry”. Intuitively, one can understand the pre-relativistic
Galilean-Newtonian spacetime as the 4d geometric object that can be
factorized into a set of Euclidean hyperplanes on which time is constant and
geometry is represented by the covariant rank 3 metric field. More formally,
one can say that all covariant derivatives of the time manifold with respect to
fibers are zero. Mechanical motion in such a structure is understood as a
mapping 𝐹: 𝐼→𝑆 of some interval 𝐼⊂𝑇 into 𝑆, the graph 𝐼× 𝐹(𝐼) of such a
mapping is called the world line which is a curve passing through the
spacetime. The motion mapping is usually considered smooth and invertible,
at least all the encountered functions are assumed to possess all needed
derivatives and analytical properties. The projection of the world line
fragments on the base manifold 𝑇 defines time intervals 𝑡𝑖𝑗= 𝑡𝑖−𝑡𝑗, and its
projection on any of the fibres has a length 𝑙, but the world line as a curve in
the Galilean spacetime does not provide a a well-defined arc length since each
manifold present in the direct product characterizing a bundle, base 𝑇 and
fibers 𝑆, have their own metric. In contrast with the Galilean structure, the
spacetimes of relativity theory are genuine metric spaces218 where the arc
length is composed of time and space intervals that cannot be correctly (e.g.,
uniquely) separated into two different equivalence classes.
Relativity theory has accounted for a vast range of experimental facts,
both known at the beginning of the 20th century and accumulated ever since.
It is impossible to construct a particle accelerator without taking into account
relativistic principles [118]. In this respect (and in many others, too) the
theory of relativity has become an engineering discipline. So far, despite
numerous attempts, no experiment has contradicted special relativity - at
least I am not aware of such experiments. This implies that any modern
theory or model should be consistent with special relativity, and future
theories dealing with space-time should include it as a special case. The
psychologically charged discussions and the general controversy over special
relativity stem, in my opinion, not from misrepresentations of physical
experience such as, e.g., the Michelson-Morley experiment or more modern
tests
(see
the
comprehensive
overview
by
John
Baez,
http://math.ucr.edu/home/baez/physics/Relativity/SR/experiments.html)
but from reluctance to accept the space-time geometry which is different from
the space geometries of classical mechanics (see Chapter 4). Some training in
contemporary geometry of physics would have removed a great many
objections of “relativity foes”.
218 One sometimes talks about metric manifolds or manifolds with metric.
9.13
Is Relativity Firmly Established?
425
There are, however, some issues in relativity that are not so easy to digest
from the physical point of view. One may notice, for instance, that light in the
theory of relativity in any case in special relativity and even in general
relativity - is hardly treated as a physical entity possessing material properties
such as a flux of photons or a wave. Light is a highly abstract thing in both
relativity theories appearing only as a means of signaling between different
spacetime points219. One should not call this signaling feature “light” because
it has nothing in common with the physical ray of light. It is sufficient to
assume only the constant velocity of signaling between different spacetime
points to build up the geometric carcass of special relativity. Likewise, one can
only consider the curvature produced in the space by masses to define
geodesics along which light signals are supposed to propagate. In general
relativity these geodesics are no longer straight lines as in special relativity.
No physical questions such as concerning possible light interaction with
matter (in the form of masses) or about individual properties of light and
matter (mass) are involved in the mathematical models of relativity. Since it
would be irrelevant to pose questions about the physical nature of light and
matter based solely on relativistic considerations, in order to draw some
conclusion about physical properties of light (field), matter (mass) and light-
matter interaction one has to combine relativity with other mathematical
models. Not all such combinations are easy to produce, mathematical models
may prove incompatible. This is a notorious difficulty in combining relativity
with quantum theory.
219 I do not touch upon Maxwell’s equations in the context of standard special
relativity which has a kinematic character.
426
Climate as a Physical System
10 Climate as a Physical
System
Sapere aude. Find the courage to know.
In this section some physical and social aspects of modeling the Earth’s
climate variations are discussed. Eventually I thought of placing this passage
into “What Remains to Solve? chapter but considering the amount of attention
the subject is attracting by the broad public audience, I decided it deserves a
separate section. I must confess that I am not a meteorologist, nor a climate
modeler, and I have not summarized surface temperature indices for years to
make a living. I just read publicly available sources and try to make my own
picture. It may be distorted or totally wrong, but I have honestly tried to
understand what is going on. In general, one of the most important qualities
of a researcher is to understand what is currently important in the new field
and what is not. I was doing my homework trying very hard to stay
uninfluenced by numerous journalistic comments about climate as well as
by rather wild projections one can find in today’s polemics. My purpose in
this issue is to learn and not to take an a priori position. And I remember
quite well the saying by Benjamin Franklin that any fool can criticize,
condemn and complain and most fools do. One often says that there are two
things in which everyone feels herself/himself an expert: football and
medicine. Now it seems climate and energy problems take over as fields of
ubiquitous expert knowledge. The attitude that anyone’s opinion on any
subject is equally valuable brings the very idea of expertise to a collapse. A
world in which there is presumably no difference between those who know
what they are talking about and those who don’t is going to be ruled by
ignorance and stupidity. I have read that about 70 per cent of the population
in Europe consider climate issues more important than nuclear
confrontation, international terrorism, poverty and unemployment. I find it
astonishing. This section is to some extent a reflection of such astonishment
and may be perceived not as a scientific endeavor but rather as a skeptical
layman’s bewilderment. I see my task here in signaling some awareness and
a sense of growing apprehension.
How can one make and support crucial political decisions when there is
still so much to learn about the Earth’s climate? The reality is that people -
even self-assured climatologists - still know no more about the long-term
human influence on climate than about what had happened before the Big
Bang. Uncertainty can make some people worry, but political decisions
affecting billions of people who want to improve their living conditions should
not be based on uncertainty. However, it is hardly the uncertainty that might
worry, on the contrary - it is certainty. A good way to hide something is to
conceal it behind an illusion of knowledge or even science.
9.13
Is Relativity Firmly Established?
427
My own motivation to join the debates on climate warming is not in an
attempt to say something new. The point is that climate variability is an
objective reality since climate is a dynamical system and it should change.
Fighting the global warming appears to be a fictive environmental activity
worsening our lives. It seems to be a poorly camouflaged attempt to restore
the longed-for political attitude of the 19th and the first half of the 20th
centuries, when political leaders alone were allowed to run (and ruin)
people’s lives according to those leaders’ interests. Contrariwise, explaining
the roots of the misplaced environmental activity, while studying the real
causes for climate change seems to be a decent task. Besides, if you constantly
hear about the imminent climate catastrophe threatening your existence (see,
e.g., WWF reports), and if you are raped with strange new taxes, it would only
be natural to do one’s homework trying to find out the physical facts
underlying the terrible global warming. As for me, I don’t have much faith in
the predictive power of climate models - as little as of social or economic ones.
One can notice, for example, that highly discussed the so called AGW factor
(anthropogenic global warming caused predominantly by human activity) is
not the fact, it’s a hypothesis. Even assuming that the Earth’s surface is found
in the warming phase does not prove it is due to anthropogenic carbon
dioxide.
This does not mean that humans are incapable to do much harm to the
environment - unfortunately, they can affect nature substantially. We know
that there is much pollution and in general we are observing a rapidly
degrading habitat caused by humans. However, AGW seems to be a misplaced
threat, dictated by political and not ecological purposes. The escalating role of
biosphere, in particular of humans, must be assessed in numerous other
areas, where international bureaucracy and certified environmentalists does
not want to poke, and not be reduced to looking for scapegoats such as the
ubiquitous CO2.
A number of assumptions might be invoked by looking at the title to this
section. The first is apparently unarguable: that physicists begin to enter the
field traditionally occupied by meteorologists and climatologists. I would add:
this fine field has been lately occupied by the politicians who are more and
more aggressively intruding into it. Therefore, we shall have to discuss in this
section not only purely physical problems of tinkering with the computer
models of the atmosphere, but to some extent political implications as well.
The second is that the tradition of carefully assembling data and using the
time series for data analysis may be threatened since there are totally
different traditions in physical analysis which is mostly based on modeling
and construction of theories. The third assumption is a consequence of the
second and may be that immature but pretentious models of physics would
dominate over the refined intuitive models, requiring almost esoteric
knowledge for climatic prognoses. Models, one may assert, are a fake, sterile
knowledge; they have little to do with real life (see the discussion in Chapter
2).
There exist a number of highly contested issues in climatic studies. First
of all, one should give a precise meaning to the assertion that human-induced
428
Climate as a Physical System
emission of CO2 has the main - and the most dangerous - impact on the
planetary climate. When discussing climate as a physical system, it is
important to understand that it is difficult to define the notion of “climate” in
physical terms because it is not a directly observable quantity. One can
directly observe only local - in space and time - meteorological quantities such
as atmospheric temperature, pressure, humidity, etc., and climate can only be
derived - not necessarily uniquely - from a set of such local observations. To
illustrate locality we may take, for example, such forcing as aerosols which
serve, together with the release of gaseous species, as one of the two main
factors of human impact on the atmosphere. Aerosols tend to be very
nonuniformly distributed, both horizontally and vertically, being mainly
concentrated over land regions. Tropospheric aerosols, including soot and
dust, typically produce a significant absorption in the visible spectral domain
and have a very small extinction in the infrared. In other words, the shortwave
absorption and multiple scattering effects of aerosols lead to a considerable
cooling, especially in the daytime, since they screen the Earth’s surface from
the incident short-wavelength radiation and let the outgoing thermal infrared
escape almost freely. The long wave (IR) effect of aerosols results only in
negligible warming, mostly during the day, so that the net effect of increased
aerosols should be a rather strong cooling during the daytime. But this effect
is local in accordance with the distribution of aerosol concentrations.
Analogously, there is no physical - directly measurable - quantity
corresponding to the concept of climate as a whole, but only to its local
manifestations. This difficulty seems to contribute much into the heated
debates about the properties of climate and its dynamics.
It is curious that a rather marginal domain of physics - the physics of the
atmosphere - has caught so much attention of the general public. Besides, the
media, constantly striving to dramatize and sensationalize the routine daily
regularity, tends to amplify alarmist statements. So, the thesis of
anthropogenic global warming (AGW) is more a social, rather than a natural
phenomenon. The population in the developed countries has been polarized
into two groups: Pro-AGW and anti-AGW220, nearly throwing various heavy
objects at each other. It would be interesting to trace who are “pro” and who
are “anti”. For instance, people who are younger, left-wing, or low-income
more enthusiastically support the thesis of catastrophic climate changes (C3)
than those who are older, more politically conservative, and have higher
earnings. Attitudinally, in climate projection debates hardly anyone will
change one’s mind, everybody is looking for biased arguments confirming
her/his
position
(see,
e.g.,
http://vocalminority.typepad.com/blog/2010/03/failed-scientist-paul-
ehrlich-among-agw-alarmists-lashing-at-skeptics.html). So, there is no love
lost between “skeptics” and “warmers” (i.e., AGW-proponents): both say the
other party is wrong and definitely pursues non-scientific interests.
220 In the developing countries, other problems seem to be more acute so that the
population does not care much about global warming.
Some purely Climatological Questions.
429
Some people say that the climate change science has already been settled,
and it is now time to act i.e., to impose on people some restrictive and legally
binding political decisions. Thus, unfortunately, physical and mathematical
models of the climatic system have lately become almost inseparable from
political decisions so that apart from physics we shall have to discuss political
issues as well. The vehement reaction of political circles who enthusiastically
support climate scare - actually more enthusiastically than a large part of
scientists - raise suspicions that the thesis of an alleged climate catastrophe
is, in reality, rather a political tool than a scientific result. It is difficult to
understand how the study of such a complicated subject as climate dynamics
can ever be settled. This is the field where scientific conclusions do not
precede political decisions.
10.1 Some purely Climatological Questions.
Climatology, in contrast with meteorology, was traditionally not perceived as
having any practical use. The word “practical” in the ultimate political
meaning is understood as serving military or defense purposes. For instance,
everything was clear with meteorology: the military needed accurate weather
forecasts. Thus, meteorology gained much of its impetus during the First
World War and its current importance during the Second World War because
air forces required a more and more detailed description of local and regional
weather: information about winds, rains, thunderstorms, etc.221 Climatology,
on the other hand, was mainly considered as an academic discipline, with only
a few scientists being active in it222. It began attracting attention only after
politicians realized that it can be a political tool and serve the parties with
vested interests.
Climatology is intended to study the global and - more practical - regional
weather patterns over some extended periods, much longer than a year. long-
term trends in the physics of atmosphere, with certain futurological elements.
The central message of the whole current climate controversy is: global
warming is happening, and it is caused by humans. This is a combined
statement (a product of two), and it is a very strong assertion. Can it be
proved?
The honest answer will be “no”. Nobody can compellingly prove the truth
of anthropogenic global warming, otherwise there would be a real consensus
- no dissent among scientists as in firmly established parts of physics.
Although the current models of climate may be quite relevant and interesting,
at least to persons who professionally study the evolution of the climatic
system, they can hardly be considered as the whole answer. God only knows
what real atmospheric perturbations, say, in the next hundred years, may
221 To some extent, this need explains the fact that most weather stations were
traditionally located near airports and, when now in use as climatological devices, tend
to provide distorted data in temperature measurements, in particular, due to heat
generation at the airports and to the influence of spreading urban centers.
222 One can name such scientists as M. I. Budyko in the USSR, R. Bryson in the USA, H.
Hare (Canada), K. Ya. Kondratiev (USSR-Russia), H. H. Lamb (UK).
430
Climate as a Physical System
∼
occur, with many counterbalanced factors contributing, e.g., in the global
average ∆T being summarized. Besides, there exist in the past numerous
examples of sudden climate transitions which occurred on a variety of time
scales (see, e.g., Abrupt Climate Change: Inevitable Surprises. NRC, 2002). One
should honestly admit that the level of knowledge of the Earth’s climatic
system is very inhomogeneous across the physical processes involved and
mostly very low which is reflected in countless debates. Does one hear so
many debates on classical mechanics or electrodynamics? It means that
climate science is grossly immature, at least for the level to base essential
decisions affecting the lives of many citizens on it.
Despite multiple assertions that the AGW concept has been accepted by
all climatologists, there is no real consensus in climate science which is
natural since there are no persuasive quantitative data of the anthropogenic
contribution to climate variability. To my mind, all standard arguments in favor
of anthropogenic global warming (AGW) may only serve as circumstantial
evidence, with the understated assumption that, firstly, there are no other
factors of the recent warming besides anthropogenic CO2 and, secondly,
current warming is allegedly unique in the climatic history, therefore it should
be man-made. The logic is hard to understand. Even if we assume that there
was not enough sunlight to ensure the recent warming (assuming of course
that it really takes place), does it necessarily imply that it is only the human
produced CO2 that is responsible for the increased average surface
temperature? Recall that the human CO2 contribution accounts for
~10Gt/year which is well under 5 percent of the total output from natural
sources (fauna and flora each comprising ~2 102Gt/year, ocean ~3
102Gt/year,
volcanoes ~0.5 102Gt/year). So the anthropogenic CO2
contribution seems to be numerically too small to be a significant source of
the global warming, particularly as compared with water vapor - the far more
potent
greenhouse
gas,
see,
e.g.,
http://www.co2science.org/articles/V12/N31/EDIT.php
and
references
cited therein.
The AGW evangelists usually respond that natural CO2 is balanced
between sources and sinks whereas anthropogenic carbon dioxide breaks
this equilibrium, is accumulated in the atmosphere and remains there for
very long, almost infinite or at least indefinite time. This is however not
even a scientific hypothesis, it’s a belief. Nevertheless, the well-known
paper[232] which has become the flagship of alarmists, just like the
discredited “hockey stick” in previous years 223 , is based on this
assumption although it is not articulated explicitly. The two main sinks for
CO2, both of the natural and anthropogenic origin, are ocean and land
biota, and the annual exchange between the ocean and the atmosphere
(~3 102Gt) as well as between land biota and the atmosphere (~4 102Gt)
is much greater than the human produced CO2, see [262].
223 One can observe a curiosity, by the way: a hockey stick is sometimes used as a
pointer in the prestigious Perimeter Institute in Canada, evidently symbolizing the great
love for hockey in this country.
Some Purely Physical Questions
431
10.2 Some Purely Physical Questions
Climate science is a really unique discipline: it has no general theoretical
background - there is no physical theory underlying climatology, in contrast,
for example, with astronomy or astrophysics which are also disciplines based
mostly on observations. In these latter disciplines, the background physical
theory, general relativity, is omnipresent serving as a frame of reference for
data, models, and speculations. Here, general relativity is guiding the
research. Very little of the kind exists in climatology, which makes this
discipline more akin to, say, economics than to a physical science - show me
any falsifiable theory of global warming. Like economics, climatology contains
a loose collection of models trying to relate them to disjoint observational
data. Wild speculations, unsubstantiated hypotheses and poorly conditioned
projections abound, which is only natural because of the complexity of the
subjects under study. The conclusions are inferred from computer models
that could not demonstrate their ability to predict (show me the model that
predicted the mean global surface temperature over one month forward or
backward). On top of all this, climate science is supervised by political bodies
such as the UN, UNFCCC, IPCC, EC, DOE, EPA, etc. and strongly influenced by
populistic environmentalism. Even synergetic that was at the beginning more
like a slogan than a science and has been treated piecewise for a few decades
seems to have acquired now a guiding theory, that of dynamical systems (see
Chapter 4). By the way, this theory is also useful in climate modeling (see
below). Yet only basic physical facts are reliably known about the climate
system. What are these facts and to what extent are they plausible?
Primarily, climate is an example of a hierarchical multilevel system
(HMS), containing multiple variables which interact in a complex way on very
different temporal scales from hours for rapid atmospheric variations, days
characteristic of weather patterns, years through centuries for essential
variations of ocean currents, millennia for ice period-warm period
transitions, and millions of years for continental drift and tectonic changes
that can also affect the Earth’s climate. Secondly, the climate system has long
been known to undergo sharp transitions - from the “frozen” states with very
low average global surface temperatures (GST) T to warm periods
characterized by much higher temperatures, explosive development of the
biosphere - in particular, by the complex multicellular life forms.
One might single out several physical aspects of the climate dynamics
problem, I can name four of them right away: 1) purely spectroscopic, 2)
radiation transfer, 3) fluid motion and 4) stochastic dynamics, in particular,
transitions between equilibrium climatic states. Of course, these (and all
additional physical aspects) are coupled, but different researchers tend to
concentrate on each one of them separately of others. I have just mentioned
that the climate system is an example of a hierarchical multilevel system with
a ladder of physical processes determining the energy balance between the
incoming solar and the outgoing terrestrial radiation as well as the evolution
of climate, whatever meaning is given to this latter term.
432
Climate as a Physical System
Some people assert that physical problems of the climate, in contrast with
those for the weather, lead to boundary-value problems and thus cannot be
chaotic or even unstable (in the Lyapunov sense). Then it is not clear what is
meant by the climate evolution. There are even claims that climate
simulations are fundamentally different from the usual computational
modeling of physical situations due to emergent properties - the latter notion
is nowadays of fashion but does not have any hard scientific content. See a
brief discussion in [260].
In the final analysis, the climate system is governed by the well-known
laws of classical physics so the scientific background may be considered
relatively mature. The greenhouse effect itself was discovered as early as in
1824 by Fourier, the heat trapping properties of CO2 and other gases were first
measured by Tyndall in 1859, the climate sensitivity to CO2 was first
computed in 1896 by Arrhenius, and by the 1950s the scientific foundations
were pretty much understood. There is plenty of hard data and peer-
reviewed studies,
Note that the greenhouse effect theory can remind us of an old
paradox typical of the phenomenological viewpoint, since viewed
phenomenologically such a theory would mean that the heat spontaneously
flows from the colder body (atmosphere) to the warmer one (Earth’s
surface). Indeed, absorption of radiation and its re-emission by the
atmosphere is a passive process and as such it should not increase the average
energy of radiating body - the Earth (up to local fluctuations, see, e.g., [259]).
Besides, tiny quantities of carbon dioxide seem to be badly suited for
making up the black box walls or resonator where the entire infrared
radiation, irrespective of its spectral domain, is trapped. The primary
function of any greenhouse is to protect plants from the outer cold by
impeding convection.
The heat transfer from the Earth’s surface through the atmosphere
mostly occurs in the wavelength domain greater than 1 µm, the maximum
absorption lying around 10-11 µm. Experimentally, the main absorption
bands a re in the near IR (around 0.94 µm, 1.10 µm, µm, 1.95 µm, and 2.50
µm) and carbon dioxide absorption bands (around 1.4 µm and 2.0 µm)
overlap.
Thus, it is obvious that water vapor can warm the Earth more than CO2;
it must produce much more pronounced greenhouse effect in wet space-time
domains than carbon dioxide224. This fact is well-known to all meteorologists
- although I heard some AGW evangelists vehemently deny it. Indeed, water
vapor is estimated to account for over 60 percent of the greenhouse effect
(see, e.g., https://water.lsbu.ac.uk/water/water_vibrational_spectrum.html)
whereas carbon dioxide is responsible for not more than 25 percent even if
the most alarmistic CO2-centric estimates are used. This means that the
climate affecting - bad - role of CO2 must be non-uniformly distributed: its
greenhouse features are more salient in very cold locations, say, in the poles
224 See an interesting discussion of this fact by F. Dyson, an outstanding universal
physicist, http://noconsensus.org/scientists/freeman-dyson.php.
Some Purely Physical Questions
433
(or in deserts) where the air is dry, being almost negligible in warm places
(see, e.g., [ 2 6 8 ] ). Indeed, there are approximately 30 water vapor
molecules in the air per one CO2 molecule, and due to large values of the
dipole moment (1.84 D vs. nearly zero for CO2 molecules, recall that 1𝐷≈
0.3934𝑒𝑎𝐵, 𝑎𝐵 is the atomic length.) water vapor absorbs the electromagnetic
radiation at least several times more effectively than carbon dioxide. Both
CO2 and H2O are triatomic molecules, but CO2 is a linear one with two polar
𝐶= 𝑂(𝑑≈2.3𝐷) bonds being oriented in opposite directions so that their
dipole moments cancel each other. Thus, the whole CO2 molecule is non-
polar225. The vibrational and rotational modes of the H2O and CO2 molecules
are very different, with the water vapor absorption spectrum being much
more complex than that of carbon dioxide. The richness of the H2O IR
absorption spectrum as compared to CO2 is mainly due to the possibility of
asymmetric rotations of the H2O molecule about all three spatial axes, each
rotation having its own moment of inertia, plus the water molecule has three
vibrational modes. In contrast, the CO2 molecule, due to its linear symmetry
(O-C-O) has four vibrational modes which correspond to bending along two
perpendicular directions and one rotational mode that corresponds to two
identical rotations in opposite directions about the center of mass, with the
same moment of inertia. These are the basic rotational and vibrational states,
and transitions between them account for absorption spectra. There are of
course also combinations of transitions from one rotational or vibrational
state to another, which makes real absorption spectra quite complicated so
that a number of large spectroscopic databases have been created that contain
the measured molecular absorption spectra. One might notice that there are
still plenty of conflicting data in such databases, one of the most authoritative
among them being HITRAN (http://www.hitran.com).
Thus, the wide-spread assertion that CO2 is responsible for 20-25
percent of the heating is difficult to understand. Perhaps under the conditions
of zero humidity CO2 can account for up to 0.20-0.25 of the total near-infrared
absorption whereas under the “normal” humidity conditions of the
atmosphere relative absorption of IR photons by the CO2 molecules should
be considerably suppressed (down to about 0.05). At the temperature of 20
Celsius, relative humidity would be approximately 50 percent which
corresponds to approximately 3 percent concentration at sea level, see e.g.,
http://www.physicalgeography.net/fundamentals/8c.html. The water vapor
concentration rises very rapidly with temperature reaching 100 percent at
225 It is, by the way, significant that the most widely spread atmospheric gases, N2 and
O2, do not produce any greenhouse effect. These substances have symmetric linear
diatomic molecules and like CO2 they do not have a constant component of a dipole
moment. Moreover, even the variable component of dipole moment cannot be induced
in such molecules by vibrational or rotational excitations since the latter do not change
the molecule symmetry (in contrast with the triatomic CO2, where excitation of the
vibrational degrees of freedom can break the symmetry of the molecule and enable it to
absorb atmospheric radiation in the infrared region.) Therefore, absorption of radiation
by atmospheric nitrogen and oxygen is only possible due to electronic excitation i.e., in
the shortwave domain (visible and UV).
434
Climate as a Physical System
·
100 C (steam). One can, by the way, conclude from the fact that the infrared
absorption by CO2 becomes significant when the temperature and,
correspondingly, water vapor concentration are low that carbon dioxide
emissions will tend to smooth the local climate.
10.3 The Earth as a Black Body Emitter
The crucial physical question related to the Earth’s climate is: what would be
the average terrestrial temperature when optical properties of the
atmosphere are modified, e.g., due to anthropogenic influence? This question
is, unfortunately, highly politically charged (see below). Let us, however,
temporarily disregard any political implications and make an attempt to
roughly estimate the equilibrium temperature that would ensure the
radiation balance at the Earth’s surface. The total radiation flux received by
the Earth’s surface from the Sun cannot exceed the so-called solar constant
𝑆0 ≈1366𝑊/𝑚2 which is the satellite measured yearly average amount of
radiation received by a unit area whose normal coincides with the solar rays
direction at the top of the atmosphere (see more details below, in subsection
“The Role of the Sun”). One should notice that the value of the solar constant
is experimentally obtained for the outer surface of the Earth’s atmosphere
and not on the Earth’s surface, and it is difficult to calculate the actual surface
irradiance (insolation) because its value can vary in wide limits following the
atmospheric conditions. Clear sky insolation is ~1kW/m2, and the average
estimate for this quantity is typically taken to be 𝑆≈240W/m2. This value can
be obtained by equating the incident and outgoing radiation, 𝑊+ ≈Π𝑅2(1 −
𝛼)𝑆0 and 𝑊−≈4Π𝑅2𝑆≈4Π𝑅2𝜂𝜎𝑇4, where 𝛼 is the surface albedo, 𝜂< 1
is the emissivity coefficient, 𝑅 the Earth’s radius, 𝑇 is the average global
surface temperature (GST) and 𝜎= 5.67 ∙10−8W m2K4
⁄
= 5.67 ∙10−5erg/
cm2sK4 is the Stefan-Boltzmann constant226. Then we get 𝑆≈(1 −𝛼)𝑆0/
4 ≈0.7 ∙1366 4W
⁄
/m2 ≈240W/m2.
Let us now estimate the equilibrium temperature for perfect
absorption (albedo 𝛼 is zero) and perfect emissivity 𝜂= 1 . The total
radiative power received by the Earth is 𝑊+ = Π𝑅2𝑆0. The total thermal
emission from the Earth’s surface227 is 𝑊−= 4Π𝑅2𝜎𝑇4. In the equilibrium
we have 𝑊+ = 𝑊− so that the average temperature is
𝑇≈(𝑆0
4𝜎)
1 4
⁄
~ (𝑆
𝜎)
1 4
⁄
.
Recall that quantity 𝑆= 𝑆0 4
⁄ may be interpreted in the equilibrium as
the average insolation (i.e., average radiative energy incident on unit
226 This constant is not as fundamental as truly basic physical constants such as
𝑐, ℏ, 𝑒, 𝐺. The Stefan-Boltzmann constant may be obtained in statistical mechanics as
combination of such “more fundamental” constants, see Chapter 7.
227 An analogous quantity in astrophysics is usually called luminosity. I have never
encountered a physicist who could faultlessly apply photometric terminology. See some
brief account of photometry in Chapter 10.
The Earth as a Black Body Emitter
435
surface area per unit time - average flux received by a unit surface). Now we
can calculate the derived quantity called climate sensitivity Λ which is
defined as the average temperature change responding to the variation of
insolation 𝑆, Λ = 𝑑𝑇/𝑑𝑆. More exactly, 𝑆 is the equilibrium irradiance of
the Earth’s surface which is equal to the additional energy absorbed by this
surface. For the perfect black body Earth (𝛼= 0, 𝜂= 1), we may take
approximately 𝑇≈288K, 𝑊≈340W/m2 and get Λ = 𝑑𝑇𝑑𝑆
⁄
= 𝑇/4𝑆≈
0.2Km2/W. This is, of course, a time- independent model and no-feedback
sensitivity. For the gray-body Earth (𝛼> 0, 𝜂< 1), we have
𝑇≈(
(1 −𝛼)𝑆0
4𝜂𝜎
)
1 4
⁄
~ ( 𝑆
𝜂𝜎)
1 4
⁄
.
and climate sensitivity may rise a little:
Λ = 𝑑𝑇
𝑑𝑆= 𝑇
4𝑆= ((1 −𝛼)𝑆0
4𝜂𝜎
)
1 4
⁄
1
4𝑆≈0.3 Km2
W .
However, within the accepted accuracy such corrections do not matter much,
and in many cases taking them into account would be inadequate.
When considering the gray Earth’s climate sensitivity, we have tacitly
assumed that both the albedo coefficient 𝛼 and the emissivity 𝜂 are
constant. In fact they can be functions of the global average temperature 𝑇.
Indeed, with changing temperature, the equilibrium is shifted, and radiative
balance should adapt to the new equilibrium states. This common fact
from statistical thermodynamics can be better understood on some
specific examples. In particular, higher terrestrial temperatures cause
stronger evaporation and in general additional transfer of gases (e.g.,
dissolved in the ocean water) into the atmosphere. However, more gases
in the atmosphere - and more primarily water vapor - result in a more
pronounced greenhouse effect so that there will be a tendency to the further
growth of global temperature (positive feedback). More infrared radiation
will be trapped in the atmosphere and returned to the surface, which may
change albedo, e.g., diminishing reflectivity due to accelerated snow
melting. On the other hand, more water vapor in the atmosphere would
inevitably lead to an enhanced cloud formation i.e., through this process
albedo is increased. Moreover, as the average humidity grows, the average
cloudiness should also grow, and this relationship can be for simple
modeling assumed monotonic (one can even take humidity as a
parameter). Thus, albedo depends on global temperature in an intricate
way, it is determined by the competition of opposite processes (positive
and negative feedbacks) and function 𝛼(𝑇) can hardly be intuitively
defined. The emissivity 𝜂 also depends on the global temperature 𝑇, e.g.,
through an additional cloud formation since clouds can screen radiation
escaping from the Earth. Besides, extra water vapor in the atmosphere traps
thermal infrared emitted by the surface, e.g., due to saturation, which also
436
Climate as a Physical System
decreases emissivity. So, one might assume that 𝑑𝜂𝑑𝑇
⁄
< 0, at least the
main term of this derivative. As to the possible behavior of 𝛼(𝑇), this issue
essentially depends either on empirical data or on efficient cloud modeling
which is a very difficult task. In general, by scrolling through the literature,
one gets an impression that climate sensitivity is more reliably estimated
from observations than from the corresponding models.
Let us formally write phenomenological relations for the climate
sensitivity assuming that the albedo 𝛼 and emissivity 𝜂 are some unknown
functions of 𝑇
Λ−1 = 𝑑𝑆
𝑑𝑇= (𝜕𝑆
𝜕𝑇)
𝛼,𝜂
+ 𝜕𝑆
𝜕𝛼
𝑑𝛼
𝑑𝑇+ 𝜕𝑆
𝜕𝜂
𝑑𝜂
𝑑𝑇.
So, if one knows the derivatives 𝑑𝛼𝑑𝑇
⁄
and 𝑑𝜂𝑑𝑇
⁄
(from observations or
otherwise), one can estimate the climate sensitivity. Physically, this is,
however, not a trivial issue, in particular due to the overlap of infrared
absorption bands of various greenhouse gases.
Recall in this connection that the question “What is the net flux of
longwave (infrared) radiation to the surface?” is one of the hottest issues in
the whole global warming problematics. The question is still open. Climate is
one of the most complex dynamic systems with multiple interacting
components. Against the background of powerful terrestrial and atmospheric
processes the anthropogenic impact is likely to be just one of many factors
influencing the average terrestrial temperature distribution. Determining the
specific weight of one or another variable impacting climate perturbations
and change requires laborious scientific efforts liberated from political biases.
10.4 Climate and Weather
What is actually the climate? One can crudely define it as the statistical
ensemble of states for the system “atmosphere-ocean-ground”, averaged over
several dozen years. It is an observational fact that the Earth’s ground in the
triad “atmosphere-ocean-ground” or three geospheres - atmosphere,
hydrosphere, and lithosphere - produces less impact on climate than ocean:
one can say that ocean is more strongly coupled to the atmosphere than the
firm ground. Therefore, in the following discussion we shall mostly consider
the reduced climate-forming system consisting of the Earth’s atmosphere
coupled with the ocean. By the way, modeling of the climate in terms of
atmospheric circulation influenced by the ocean may be valid not only for the
Earth but also, say, for Venus. When speaking about the Earth’s climate, one
should of course include two other components (geospheres) such as
biosphere and cryosphere which can also produce a substantial impact on the
planetary climate.
One can roughly say that climate is weather averaged a large number of
years, not fewer than ten, typically twenty or thirty. Because of its averaged
character climate varies on the time scale of several decades. On the contrary,
the concept of weather is “instantaneous”, it is limited to a period of
approximately ten days (it is nearly impossible to predict weather evolutions
Climate and Weather
437
for the time exceeding this period). Weather forecasts clearly demonstrate to
everyone the “law of unpredictability” well understood (relatively recently)
in nonlinear dynamics and the theory of dynamical systems (see Chapters 4
and 7). Namely, each phenomenon has its limits of predictability. The point is
that there are deterministic and stochastic components in any complex
system. One can reliably predict only deterministic evolution of the system
whereas its variations due to stochastic factors are in general unpredictable.
Using the terminology of the theory of dynamical systems (see Chapter 4)
one may say that weather is analogous to the instant state or the “phase” on
individual trajectories whereas the climate is an attractor (e.g., of a limit cycle
type) for such trajectories. The phase can move in the vicinity of the attractor
surface228in a very intricate and seemingly unpredictable manner, and a small
perturbation of initial data can radically modify the character of instant
motion (the weather) whereas attractor (the climate) remains unchanged. In
other words, the details at any fixed moment of time can be radically different,
but all observable quantities, averaged over large time intervals, will be
basically the same, with the details of perturbations almost completely
ignored. In meteorological terms it means that, say, a subtropical desert would
not turn all of a sudden into a maritime polar zone and vice versa.
Computer-based forecasting the local and regional weather and climate
change is a nascent and so far dubious activity. The matter is that current
models may have too many flaws to give accurate predictions of climatic
phenomena on a 102km scale (in particular, projections of local and regional
temperature dynamics fifty years from now). It is doubtful that such models
should be relied upon in political decisions.
Both climate and weather give examples of an unstable physical system:
even small deviations in input data lead to large errors in the output results
(see Chapter 4). Mathematically, the climatic system is described by a large
system of nonlinear PDEs. We can, in principle, write down this system of
equations, but such a procedure would not be very useful since one cannot
correctly fix supplementary (boundary and initial) conditions to such a
system. Indeed, these supplementary conditions would change within a very
short period, for example, during one of the relaxation times for the
atmospheric system. This fact alone makes correct prognoses of climate
dynamics very difficult. It is, however, interesting that in spite of extreme
complexity and dynamical instability of the climatic system the latter is
periodically reproducing itself. Thus, paleoclimatic research indicates that
climate has several more or less stable cycles, e.g., about 105 years, 2.5 103
years, 200 years and some other. The presence of stable cycles in the climatic
system of the Earth may be considered as an example of chaos-order
transitions. One can, in principle, use the observation of climatic cycles as an
input for prognoses unless too stringent requirements imposed on them are
envisaged. Stable climatic cycles reflect the coexistence of evolution and
robustness, the feature typically characterizing the living organisms.
228 Here, for simplicity, I pay no attention to the difference between the notions of
attractor and attracting set.
438
Climate as a Physical System
It is clear that reliable climatic prognoses are extremely important for
human activities. Many economic models, for example, essentially depend on
climate variables, e.g., in agriculture, housing construction, power production
and transportation. In particular, there exist alarmistic prognoses of
economic disasters due to rapid human-induced heating (in average by 2-3
degrees centigrade) of the near-surface atmosphere within the next several
decades. If such a warming rate really occurred, it would be a substantial
challenge because humans have no experience of living in fast changing
climatic conditions. However, who can bet his head or even hand that the
contemporary computer models leading to such prognoses fully account for
the impact of all possible natural factors determining climate dynamics,
particularly those that have a negative forcing such as clouds?
The global climate229 viewed as a physical system may be regarded as a
composition of several mutually interacting subsystems: atmosphere,
hydrosphere, biosphere, lithosphere, and cryosphere. Every subsystem
evolves according to its own internal laws and, therefore, possesses its own
set of characteristic time scales determining the subsystem’s variability. In
the spirit of dynamical systems theory (Chapter 4), it means that there exist
complex flow patterns with the range of time scales spreading, in the case of
the Earth’s climatic system, from hours and days through seasons and years
to many centuries and even millennia. Slow (or low-frequency) variations are
usually identified with “climate” whereas the fast (high-frequency) ones with
“weather”. This natural time-scale separation hints at a possible modeling
approach which would consist in decomposing all the relevant quantities - for
example, those forming the vector field in the corresponding dynamical
system - into “climate” and “weather” components. Then the “climate” (slow)
vector component is allowed to evolve deterministically and the “weather”
(fast) one varies stochastically.
Most of the terrestrial atmospheric circulations, together with the
atmospheric-oceanic mixed layers, are formed through instabilities and can
thus easily become turbulent. The same applies to intense oceanic currents,
even considered inside the isolated hydrosphere, i.e., without any active
mixing with the atmospheric fluxes. These are, of course, just words.
There exist a great lot of models for climate and weather variations, and I
have neither qualifications nor space in this section to review them. My
purpose is much more modest: to expose the physical ideas underlying the
climate and weather modeling and to establish - as usual - the connections
with other portions of physics and mathematics. This exposition is by
necessity rather superficial. The reader who really wants to study the
subject may start with a comprehensive monograph by M. Ghil and S.
Childress [233].
The dire climate predictions often involve extreme weather events such
as typhoons, tornadoes, hurricanes in one package with climatic effects such
229 This terminology is not quite correct: being derived from weather by the
procedure of statistical averaging, climate is hard to define globally, it is basically a local
concept.
Climate and Weather
439
as drought, desertification, repeated coastal floods. The “improved” AGW
concept assumes a significant feedback with much water vapor added to
the atmosphere. Does not it produce more precipitation and associated
instant weather effects opposite to drought and desertification? Moreover,
does not ice melting contribute into the water exchange loop, increasing
precipitation and evaporation? Predictions of the imminent collapse of the
global climatic system are founded on computer simulations and intuitive
beliefs. As to the computer simulations, they are always full of
uncertainties, as weather prediction clearly demonstrates. One typically
argues that weather forecasting and climate simulations are two different
tasks so that one should in no way juxtapose them. This is generally true,
yet all computer simulations are sensitive to input data and introduce
specific uncertainties. Computer simulations are also prone to both
commission and omission errors. In the particular case of climate modeling
such phenomena as cloud formation, hydrosphere-atmosphere exchange,
volcanism, the role of biosphere are physically intransparent and lack data
so that the overall computer simulations for the climate, involving the
fluid
motion
equations
supplemented
with
radiative
transfer
phenomenology and solved by the time-forward finite-difference procedure
on a grid with some a priori resolution, necessarily have huge error bars.
Besides, such models may be so sensitive to parameter values that they
easily become unstable (see below “Dynamical Systems in Climate
Modeling”). In practical terms it means that the reliability of current
climate models is at minimum insufficient for political decisions affecting
the interests of billions of people.
The crucial point in assessing long-term climate variations as well as local
short-term weather fluctuations is the study of state of the oceans and their
interaction with the atmosphere (see, e.g., an important paper by Kravtsov, S.,
Swanson, K., Tsonis, A. A. A [269]). Of all geophysical subsystems, atmosphere
and ocean are the most important ones for the climate: atmosphere and ocean
may be regarded as the Earth’s fluid envelope, where a great variety of
motions can be comparatively easily excited. The incidence of major weather
events such as hurricanes, tornadoes, cyclones, rainsqualls, heatwaves, etc.
can be predicted by meteorological models, but one can anticipate that with
the spacetime resolution of the climate models being increased, these latter
models can also be used to forecast these weather events. In other words,
meteorological and climatological models are expected to converge into the
unified meteoclimatic modeling system (see, e.g., http://www.ecmwf.int; see
also in this connection an interesting paper by Lockwood, M., Harrison, R. G.,
Woolings, T., Solanki, S. K. [270]).
Meteorology can be viewed, unlike traditional physics, an “almost exact
science”. People are usually angry with meteorologists and take weather
forecasts with customary skepticism (“they are always wrong”). The similar
discrepancy is often observed at some cellular phone network forecasts which
manifests a strange adherence to clumsy computerized procedures for
elementary everyday things. I don’t understand why it is necessary to connect
440
Climate as a Physical System
to the server instead of looking at the good old thermometer. Coolness factor
results in multitudes of errors.230
Roughly speaking, the mathematical models for the state of the
atmosphere can be constructed using the following physical principles. The
major air masses constantly moving around the Earth collide with each other
under the effect of the two main forces: the pressure gradient and the Coriolis
force. The Coriolis force, in particular, induces the so-called Ekman transport
that deflects water masses perpendicularly to the near-surface wind direction
(to the right). In the North Atlantic, the Ekman transport leads to the
formation, due to divergence and convergence of near-surface water masses,
to a pair of oceanic gyres: a cyclonic gyre in latitudes near the North Pole and
an anti-cyclonic one in the subtropics, the second having a larger spatial scale
than the former. This effect is usually termed as the double-gyre circulation.
The combination of the two forces - the pressure gradient and the Coriolis
force - produces the crude common pattern for large air masses movement.
For example, in the northern hemisphere these masses turn predominantly
clockwise about high-pressure zones and anti-clockwise about low-pressure
zones. The word “predominantly” here reflects the possibility of a paradoxical
behavior of the atmospheric gas masses, which may be viewed as gigantic
fluctuations whose probability is small 231 . Large air masses can be
approximately defined as masses of atmospheric gases characterized by a
homogeneous temperature and humidity level. These gas masses may,
however, have different densities and hence pressure values. For example, two
different air masses at the same altitude necessarily have different pressure
values so that when these masses come into contact inevitable pressure
gradients appear which results in so-called weather fronts.
10.5 Dynamical Systems in Climate Modeling
The climate modeling problem relevant to AGW could be set up in the
following way: climate as a complex physical system receives a continuous
extra input (e.g., human-produced carbon dioxide), and one should try to find
the domain boundaries (thresholds) for this input when the domain climatic
system does not flip into a new state. In fact, as already mentioned, one
should consider a number of inputs corresponding to multiple natural and
anthropogenic forcings, with some of them being competing. Recall that
there are many resilient, non-rigid systems in nature, for instance, biological
organisms, ecological systems, population of species, planetary systems, etc.
There are also resilient systems in human society i.e., social systems that
230 In the middle of the 1990s, SGI (then Silicon Graphics) supplied a powerful Cray
machine to the Russian (Moscow) weather forecasting service, the Hydrometcenter. The
expensive supercomputer was standing in the specially designated place without any
use or any keen interest until it became outdated. The SGI engineers assigned to train
the Russian meteorologists were astonished by such lack of motivation. I have heard
this story from the former SGI experts responsible for this contract.
231 One can mention in this connection that there are in general two main classes of
climatic models as well as weather forecasting tools: deterministic and probabilistic
models.
Dynamical Systems in Climate Modeling
441
|
| « |
possess the ability to deal with disturbances while retaining their essential
functions, for example, capitalist economies, autocratic states, demographic
developments. The ubiquitous resilient system is the Internet: it can
withstand significant perturbations without losing its network unifying
functions. The climatic system also has some - unknown a priori - amount
of resilience, when it can tolerate disturbances without collapsing into a
qualitatively different state.
The view of the climate as a static and predictable system is hardly
adequate and must be replaced by a dynamic view emphasizing a continuous
change and uncertainty. There exist numerous historic examples of sudden
climate transitions on a variety of spatial and temporal time scales. For just
one example, one can recall the unforeseen and devastating drought in the
West of North America which occurred the 16th century. This catastrophic
phenomenon, which was probably the most severe of anything ever present
in climatic records, manifested a sharp climate transition lasting for several
decades (see, e.g., [271]. See also a discussion of possible chaotic transitions
in the subsection “The AGW Evidence” below.
A simple and straightforward description of the climate equilibrium
states can be based on the scalar ODE for the mean global surface temperature
𝑇̅:
𝑑𝑇
𝑑𝑡= 𝑟𝑇+ 𝐹,
(10.1)
where 𝑟= ∑𝑟𝑖
𝑖
represents the sum of feedbacks 𝑟𝑖, 𝐹= ∑𝐹𝑖
𝑖
denotes the
algebraic sum of partial forcings 𝐹𝑖 which can be positive i.e., acting as
sources or negative (sinks). For instance, certain 𝐹𝑖 may correspond to the IR
absorption of greenhouse gases and act as the positive forcings whereas
others may represent reflection of the incoming solar radiation and thus
describe negative forcings. The overline bar denoting the averaging of 𝑇 over
the terrestrial surface is omitted to simplify the notations: I shall do it in
almost all formulas so please don’t be confused.
If this model were truly linear as it is implicitly assumed by the majority
of AGW proponents and local in time (without delay effects), then the gradual
increase of CO2 concentration would produce a synchronized monotonic
growth of global surface temperature and a corresponding shift of
equilibrium states. Indeed, in any linear model factoring (e.g., doubling) the
forces factors the response. However, the linear model is too primitive to be
true so that the models expressed through nonlinear equations, when all
feedbacks and forcings in general depend on temperature 𝑇, must be
employed for the climate evolution. Recall that the superposition principle
does not hold for nonlinear equations so that one cannot assert that an
increment 𝛿𝐹𝑖 of a partial forcing (e.g., the increasing CO2 concentration)
would result in the proportional global temperature rise, 𝛿𝑇= 𝑎𝑖𝛿𝐹𝑖. Besides,
some forcings may have a random character expressing themselves very
powerfully - much more potently than the relatively weak deterministic
action of small CO2 concentrations with their gradual increments
442
Climate as a Physical System
(presumably manmade). Examples of such powerful forcings having the
dominating random component are unforeseen solar-irradiance variations
and volcano eruptions. If we stubbornly stick to the linear model of the (10.1)
kind, then we must use the Langevin-type equation, maybe with inertia and
damping, and consider small forcings, |𝐹𝑖| ≪|𝐹𝑗|, 𝑖≠𝑗, only as corrections.
Even if we temporarily restrict ourselves to the class of deterministic
models, we shall have to replace the linear response (10.1) by a more
complicated though still one-component dynamical system, 𝑑𝑇𝑑𝑡
⁄
=
𝑓(𝑇, 𝑡, 𝜇) , where 𝜇 is some control parameter (see more on dynamical
systems in Chapter 4). The latter may have many components232. Actually, the
above scalar model is still too primitive for the description of the climate
evolution, its straightforward extension being the following vector equation
𝑑𝑋
𝑑𝑡= 𝑓(𝑡, 𝑋, 𝜇),
𝑡∈𝐼⊆ℝ,
𝑋∈ℝ𝑛,
𝜇∈ℝ𝑚,
(10.2)
where 𝑋= (𝑋1, … , 𝑋𝑛) denotes the climate state vector which
incorporates all relevant variables i.e., those describing the atmosphere,
hydrosphere, geosphere, and biosphere. As already noted, the parameter
𝜇 is also of vector nature containing the set of components, 𝜇= 𝜇1, … , 𝜇𝑛
which specify, e.g., numerous feedbacks, strength of the incident
shortwave solar radiation, albedo 𝛼, and general parameters of the
radiation transfer in the atmosphere. The vector equation (10.2) is
basically essentially nonlinear so that the rate of change of climate vector
𝑋 depends on the actual climatic state 233 . When the vector 𝜇 varies
continuously, the climate state 𝑋 can pass through a number of equilibria
𝑋𝑎, 𝑎= 1,2, … , and the system’s behavior can change abruptly, e.g., from
stationary states to periodic, multi-periodic or even to completely chaotic
development. In a more mathematically-oriented language, one might say
that the topological structure of a vector field corresponding to the family of
solutions to equation (10.2) can change qualitatively following the gradual
variation of parameter 𝜇. So, the predictability of the climate behavior
essentially depends on the knowledge of its possible bifurcations.
However, one should not think that the climate which we observe is
completely determined by its equilibrium positions (critical points).
Relaxation times to climatic equilibria can substantially exceed our time of
observation so that transient regimes, complex oscillations, bifurcations,
and even chaotic processes can be perceived on the human scale as almost
stable or slowly evolving states although such phenomena only correspond
to transitions in the natural climatic time scales (e.g., when the climate
232 The codimension of a vector subspace where 𝜇 lives inside the vector space of the
entire dynamical system is more than one.
233 Here climate is considered approximately local in space and time, which is far from
being obvious. For simplicity, we do not discuss here the time-delayed (retarded) effects
as well as nonlocal correlations (a particular example of nonlocality relevant to climate
studies is methane hotspots that can influence the climate state over large distances).
Such phenomena require a special treatment.
Dynamical Systems in Climate Modeling
443
system undergoes a smooth transition to chaos) and objectively should be
interpreted as irregular episodes. Here, needless to say, a fundamental
difficulty is hidden since the trouble with the transient states is that it is hard
to predict their variations. Just like our quite limited understanding of the
universe prevents us from reliable evaluations of, e.g., probability of
extraterrestrial life forms, poor knowledge of climate physics does not allow us
to infer the reasonable parameters of particular climate transitions
including those with GST intervals within a given time range, 𝛿(𝑡1) ≔
∆𝑇1 ≤∆𝑇(𝑡)
̅̅̅̅̅̅ ≤∆𝑇2 ≔∆𝑇(𝑡2), 𝑡1 ≤𝑡≤𝑡2. I think one should always admit
the possibility of a wrong interpretation of evolutionary behavior when
dealing with complex multilevel systems characterized by multiple time
scales.
Apparently, the problems in climate dynamics ought to be addressed at
some fundamental level, which is difficult. The above-mentioned qualitative
principles can reveal, in particular, the basic underlying mechanisms of
atmospheric
mass
movement,
but
to
understand
this
movement
quantitatively one has to resort to more comprehensive mathematical
models. Nevertheless, the qualitative considerations have been successfully
put in the base of so-called general circulation models (GCMs) that can be very
detailed. CGMs may be used in climate projections that can be claimed to have
not only methodical, but predictive and practical character. However, there
exist a lot of uncertainties in many climate projections. For one thing, the
natural climate variability is coupled with that of the world’s ocean
circulation which has, besides a comparatively regular low-frequency
component, also noise-like stochastic currents. For another thing, it has
recently become obvious - both from observational data and model
investigations - that the climate of the Earth has never been in equilibrium (in
the meaning that we are accustomed to while studying dynamical systems,
see Chapter 4). Moreover, it is highly unlikely that the climate will ever be in
equilibrium. This fact implies, besides the necessity to employ more
complicated methods of non-autonomous dynamical systems for modeling,
an increased sensitivity of atmospheric circulations to, e.g., small-scale
variations of model parameters or to regional perturbations. One can notice
that small-scale atmospheric physics is even more complex than global, in
particular because it should correctly account for regional effects such as
clouds and winds. From the physical viewpoint, the respective small-scale
models rely on nonlinear fluid dynamics which, in its own right, is based on
deterministic theory of dynamical systems. Due to the complexity of the
physical processes involved, climate modeling can be successful not by using
a single all-embracing model, but a “staircase” of models - a whole hierarchy
of them starting from the simplest toy model of the double-gyre circulation to
the most detailed general circulatory description.
To illustrate the complexity of the dynamical processes involved, one can
render the following example. Apart from large-scale circulations, intense
wind-driven oceanic flows, jets and vortices may be formed, all of them being
the manifestations of nonlinear effects. Such flows may have totally different
spatial and temporal scales: from global oceanic streams down to mesoscopic
444
Climate as a Physical System
(of the order of several km) eddies. Small-scale oceanic phenomena, being
coupled to the atmosphere, tend to induce turbulent flows in it. Turbulence,
as we have seen in Chapter 7, is an irregular fluid motion which has strong
vorticity and causes rapid mixing. But the main thing is that turbulence is a
multiscale phenomenon: all eddies in the fluid - in this case in the atmosphere
- are of a different size. One can state that the very essence of turbulence is
its multiscale behavior.
One might recall ubiquitous examples of near-surface atmospheric
turbulence. Thus, turbulent airflow produces aircraft noise which leads to
the “noise pollution”, e.g., in the form of considerable noise peaks around
airports. Other examples are clouds and smoke plumes emitted by the
pipes. In the atmosphere, the largest turbulent length scales 𝑙 (for instance,
in cumulous clouds) are typically of the order of several km, and the
fluctuating turbulent velocities 𝑢 are of the order of several m/s (the
characteristic time scale is of the order of an hour i.e., 103 s)
One can observe in passing that there is one salient feature in
atmospheric science. Contrary to most other sciences, where problems can be
attacked by dissecting them into small pieces, in atmospheric science such an
analytical approach is hardly adequate. It would be unlikely to obtain realistic
results in the study of atmospheric dynamics by looking at isolated fragments
in detail one by one while keeping everything else constant and assuming the
division on fragments well defined - the usual modeling strategy. Indeed, in
the atmosphere one has to consider a lot of interacting processes at the same
time: radiation transfer, turbulent transport, irreversible thermodynamical
processes, and chemical reactions.
So, we see that the climate is an extremely complicated physical system
which can be described only under highly idealized assumptions. At first, the
hierarchy of climatic models was developed for the atmosphere. Atmospheric
models were originally designed for weather forecasts - on the hourly or daily
time scales. Recall, in this context, that one still has hard times even in
retrospective weather modeling, say, ten days back. Later, atmospheric
models have been extended to consider climate variability for all temporal
scales. They have also been coupled to other subsystems, primarily the ocean
(such as wind-driven ocean circulation). Practically all current climatic
models are based on dynamical systems theory (see Chapter 4). Recall that
dynamical systems in general are very sensitive to the input data, for
instance, initial conditions. In this connection one should bear in mind that
geophysical data in general are practically always highly uncertain. There are
still scientific discussions in progress, and making firm conclusions about the
climate future trajectory is hardly possible.
In more realistic models, the number of such tipping points (which
actually represent separatrices) will probably be much greater than three,
and some of them may turn out to be favorable for mankind whereas others
can be disastrous. However, climatologists do not seem to have reliable data
to construct such more realistic models - recall that most data in geophysics
have a large uncertainty corridor. Anyway, the attempts to build impeccable
Dynamical Systems in Climate Modeling
445
phenomenological models of the climate can hardly be called satisfactory so
far.
A crude but very useful method of modeling the dynamics of the climate
system is through considering the energy balance. The incoming shortwave
solar radiation (corresponding to the Sun’s surface temperature of
approximately 6000 K) is partly reflected back into outer space (~ 3K) and
partly absorbed by the planetary system including the atmosphere and the
surface of the planet (~ 300K). The surface is heated and releases the heat
into the atmosphere by thermal radiation, by convective fluxes, and by
evaporation of water, with water vapor later condensing in clouds. Both
convection and phase transitions such as evaporation/condensation
distribute the heat vertically whereas macroscopic horizontal mass transfer
processes i.e., winds redistribute the heat along the planetary surface through
the dissipation of mechanical energy generated by inhomogeneous heating of
the surface. This is, of course, a very crude verbal picture of energy flows in
the atmosphere. The two corresponding main types of mathematical models
for the state of the atmosphere which meteorologists regard as comparatively
simple (although, to my mind, they are still quite difficult to solve) are the
model of vertical variations (horizontally averaged) and that of horizontal
variation (vertically averaged or integrated). The first mathematical model
(usually termed as RCM - regional climate model) considers convection
caused by solar radiation as the dominating process whereas the second
(called EBM - energy balance model) is based on the phenomenological
representation of energy balance.
The RCM approach is based on studying the radiative transfer in a fluid
(gas) column with surface albedo i.e., the reflected portion of the solar
radiation, 𝛼 - the reflection coefficient - is not a constant factor and should be
treated phenomenologically as a function of temperature. More
complicated though, albedo depends on the state of the planetary surface
(e.g., for simple ground albedo 𝛼= 0.30 −0.50 , for fresh snow 𝛼=
0.80 −0.85, old snow 𝛼= 0.50 −0.60, for ice covered ground 𝛼= 0.90,
grass 𝛼= 0.20 −0.25 , forest 0.05 −0.10 , etc., see, e.g., [291]), on the
amount of clouds and thus on the state of climate itself234. Moreover, albedo
can be a rapidly varying or random function since it is quite sensitive to the
optical transmission properties of the atmosphere. The RCM description of
atmospheric reflection, coupling processes through radiative dynamics, is in
principle nonlinear and typically produces multiple equilibria in the
horizontally averaged temperature profile (see below more on a single-
component Budyko-Sellers climate model). In fact, RCM is a model of direct
solar influence on climate and weather.
234 Currently, the mean global surface temperature is estimated to be ≈ 15C. If the
Earth were covered with ice (the “Snowball Earth”), the GST would be ≈ 52C, with
forests, GST would be ≈ 24C. If the Earth consisted of the ocean alone, GST would be ≈
32C since water absorbs more in the visible and UV range; were the planet covered with
deserts, GST would be ≈ 13C.
446
Climate as a Physical System
The EBM type of models is based on accounting for the balance between
the incoming solar radiation and the outgoing terrestrial radiation. The
corresponding balance equations (see below) can produce multiple equilibria
in terms of the model parameters, primarily the surface temperatures. From
the physical point of view, the presence of different equilibria in the space of
parameters is a typical symptom of an interplay of competing processes. We
have seen in Chapter 4 that exploring the states of equilibrium is the most
important part in the study of the models based on dynamical systems. Some
of the equilibrium states in the dynamical models of climate variation may
well be unstable so that small fluctuations and random influences can push
the dynamical system corresponding to a climatic or, in the narrow sense,
atmospheric model to another equilibrium. Transitions between equilibria
may be viewed as a typical manifestation of instability in dynamical systems:
tiny causes produce sizable long-term effects.
Before we proceed to more concrete examples, I would like to point at the
following classification of dynamical models - not only for the climate or
atmosphere, but for a number of other phenomena, e.g., population dynamics
or structure formation. One can distinguish 0d, 1d, 2d, and 3d models. The
number of dimensions here corresponds to that of independent spatial
coordinates in which the modeling problem is stated. For instance, in 0d
models, spatial variability is totally ignored so that such models describe only
the evolution of homogeneous parameters, in the case of atmosphere
variations of the “global” near-surface temperature. Zero-dimensional (0d)
models may be considered as representing the lowest level in the hierarchy
of atmospheric models. More sophisticated models take into account
heterogeneous distribution of crucial parameters. Such distributed
parameter models are typically formulated in terms of partial differential
equations which makes mathematical handling of the problem much more
difficult.
In a simplified mathematical setting, the 0d energy-balance model may
be formulated as the system of equations expressing the evolution of global
surface air temperature 𝑇 influenced by the global radiative balance and its
variations:
𝑐𝑑𝑇
𝑑𝑡= 𝐴(𝑇) −𝐵(𝑇),
𝐴(𝑇) = 𝜇𝑄(1 −𝛼(𝑇)), 𝐵(𝑇) = 𝜎𝑏(𝑇)𝑇4. (10.3)
Here functions 𝐴(𝑇) and 𝐵(𝑇) denote the incident solar radiation and
outgoing terrestrial radiation, respectively, 𝑐 is the heat capacity of the
system “atmosphere plus ocean”. It is a nontrivial problem to elucidate the
exact meaning of the coefficient 𝑐. The quantity 𝑄 denotes the amount of solar
radiation falling per unit time on the upper atmosphere, and 𝜎 is the usual
Stefan-Boltzmann constant. The parameter 𝜇 is a correction coefficient for
the amount of solar variation received by the Earth’s surface, it is defined
in such a way as 𝜇= 1 for a standard daytime. The parameter 𝜇 (insolation)
may diminish due to worsening of atmosphere transparency, say after
Combining Models with Observations
447
catastrophic volcano eruption or during a “nuclear winter”. The temperature
dependence of the albedo coefficient 𝛼(𝑇) and “grayness” 𝑏(𝑇) is a priori
unknown, it is only clear that 𝑏(𝑇) = 1 for a thermodynamic black body and
𝑏(𝑇) varies in the interval (0,1) for a gray body such as the Earth. Hence one
has to test different variants by modeling.
So, we obtain a spatially homogeneous (0d) dynamical model for the
surface air temperature with unknown nonlinearity. One can naturally start
with the simplest hypothesis namely that nonlinearity has a polynomial
character. One of the most popular models in nonlinear dynamics, as we have
seen, is the logistic model when the nonlinearity polynomial is quadratic (see
Chapter 4). This scheme is so crude that it is almost insulting to apply it to a
very sophisticated, mysteriously fine-tuned and self-reproducing climatic
system of the Earth.
We may note that climate as much as the weather is a chaotic system: its
variations cannot be predicted with an arbitrary certainty235. Besides, as we
have seen, there are too many parameters to be accounted for so that when
considering concrete problems some of them are by necessity ignored. But in
chaotic systems ignoring parameters does not go without penalty: both small
variations and neglect can produce great changes.
10.6 Combining Models with Observations
If you ask ordinary people supporting the AGW concept where do they get the
information from, they would typically answer - with a certain
embarrassment masked with aggression - that “the climate warming science
has been settled”, “everyone knows it” or “this is the experts’ opinion and who
are you?” and only a small number would confess that they get the
information about climate warming from the media, mostly from TV. Few
people really assess scientific raw material.
The focus of climatic models differs drastically. Some of them put the main
weight on CO2 and the respective infrared radiation transfer aspects, others
to concentrate on the Sun’s activity, the third kind of models are devoted to
the ocean’s circulations and so on. Highly abstract mathematical models are
unfortunately not sufficient to make reliable climate (long-term) and weather
(short-term) projections. Even using modern supercomputers and computing
grids does not help much without fair observations, in particular because
numerical climate and weather models are typically based on solving an
initial value problem (IVP) which should be fed with the correct initial data.
To obtain observational data on the actual state of the atmosphere over 108
info items are gathered daily from the ground stations, weather satellite
lidars, drifting buoys, balloons, etc. The entire surface of the Earth has been
divided into a grid, and observations are obtained at the crossover points.
These data may serve as entries to computer-simulated evolution of the state
of the atmosphere. Of course, even the most powerful computers cannot
process such amounts of data, even by applying the averaged laws of fluid
235 There exists a famous saying by Benjamin Franklin about certainty: in this
world nothing is certain but death and taxes.
448
Climate as a Physical System
mechanics and thermodynamics. One needs some hard physics here, not just
computer
models.
See,
however,
http://www.skepticalscience.com/empirical-evidence-for-global-
warming.htm on the empirical evidence for the potential threat of AGW.
In the real study of the atmosphere, field studies are a must, so a
number of global observational and weather forecasting networks have been
established. However, about 25 per cent of the Earth’s surface (about 1.4
108km2 out of approximately 5.1 108km2) is so far uncovered by this network
of observation devices. These “blank areas” are mostly located in the southern
parts of the ocean. Moreover, there are rumors (sorry, I found neither pieces
of evidence nor negative statements) that the number of temperature-
measuring stations diminished, especially in rural regions, in the 1990s.
Satellite observations do not seem to fully compensate for this lack of data due
to relatively low accuracy, e.g., in temperature measurements. Besides, the
cellular structure of climate monitoring networks is very non-uniform due to
the different level of scientific development. One can notice that observational
networks are orders of magnitude less dense in Africa than, say, in Western
Europe. The value of the Earth’s surface temperature measurements over the
ocean is indetermined - recall that the ocean covers about 70 percent of the
terrestrial surface. It is not at all obvious that the data recorded by a rather
sparse grid of ground- and ocean-based stations reliably manifest the surface
temperature trends.
The speculative way on which many climate variability models are built
are hardly capable of predicting the transitions between states despite the
bold claims of computer modelers. Probably the main reason is the gap
between the models and observational data. Mathematical and computer
models based on plausible considerations may be very useful to develop an
understanding of the climate system dynamics (see “Dynamical systems in
climate modeling” above), but such models are hardly suitable for planning
decisions. The latter would require hard experimental facts, convincing raw
empirical data, not models or computer codes. Even the islands of
disconnected data sets about the present state of the atmosphere, the
hydrosphere, albedo, etc. as well as the geological records are insufficient for
planning decisions affecting the world economies. The typical response of
pro-AGW modelers is that one must write codes using whatever data one can
presently have. I however, think that although such an approach may be
justified in modeling simple systems, it is inadequate for highly complex,
unstable, and chaotic ones if good accuracy is required. One can, of course,
obtain qualitative results using simple models, but performing sophisticated
computer simulations using inaccurate or missing data does not make much
sense: it only produces excessive expectations. To write a code head-on for
unstable nonlinear systems is highly questionable. And in any case, planning
decisions should not be based on the codes which are using insufficient
observational data. And there is one thing that is common for all computer
models: they should rely on data. Reliability of the model is no better than that
of the data. Computer simulations of the climate do not provide an accurate
forecast, as it is usually claimed by the AGW evangelists, not only because the
Climate Variability
449
underlying physical equations are enormously complex and should be
radically simplified, but already because good data are scarce and unreliable
(as, e.g., surface temperature measurement). For example, highly publicized
temperature estimates derived from tree rings are local and indirect by its
very nature i.e., such proxies are mediated by a number of local biological
processes. Are such estimates unambiguous and reliable, do they reflect the
global temperature trends? Besides, one might recall that statistics in data
processing is such a tool that one can get a desirable result if one inputs the
biased data, one only needs a desire.
By the way, computer codes are of secondary nature with regard to
possible climate transitions. One should discuss not the codes, but the type of
equations that govern the climate dynamics and the essential terms in such
system of equations, these terms reflecting the relevant physical processes.
Without due attention paid to such equations and their experimental
validation, opinions can influence one’s view of the issue of climate variability
more than reliable physics and mathematics, and there is now a tendency to
value codes more than equations. I don’t think that computer models can
supplant falsifiable physical laws. Incidentally, the AGW hypothesis cannot be
currently falsified in any way so that it should be believed or not and thus may
be considered as a contemporary version of religion.
10.7 Climate Variability
The crucial question one must answer in connection with the possible human-
induced heating is the following: is it true that recent variations of the
climate
236 are significantly outside the natural variability borders
experienced in former centuries? If it is the case then one must pay attention
to the possibility of a dangerous anthropogenic influence, but if recent climate
changes are within the range of natural variability, then one cannot
substantiate a significant human impact. One can notice in passing that some
climatologists make a distinction between the terms “climate change”,
“climate variations” and “climate variability” [234]237
The climate transitions are still believed by many scientists (the AGW
concept notwithstanding) to follow variations in the energy flux to the Earth’s
surface due to insolation changes when the Earth experiences almost-
periodic perturbations from other planets (mainly Jupiter). Besides, the
incoming short-wave solar radiation varies in accordance with the Sun’s own
cycles. Yet the response of the Earth’s climatic system to the variations of the
driving “naked” solar is not well understood, mainly because of
236 Say, since the beginning of the 20th century or since the year 1750 which is rather
arbitrarily taken to indicate the beginning of active human impact on the Earth climate
system. The accuracy of paleoclimatic measurements, even in the industrial era, is so
low that it does not matter much which initial time point in the past should be taken.
237 Another fine distinction in climate studies is made between paleoclimatology and
historical climatology, with one being used by “alarmists” whereas the other by
“skeptics”. All such nuances imposed on friend-foe dichotomy and mixed with
overdramatization closely remind us of “ideological struggle” for the purity of party
ideals in the USSR.
450
Climate as a Physical System
indeterminacies related to the radiation transfer properties of the
atmosphere. Multi-periodic variations of the climate mostly taking the form
of sharp transitions between two crudely defined states - cold and hot - give
rise to many wild speculations about the random character of the climate
system and limited predictability of its behavior. Consequently, the climate is
a system that is driven by the Sun, but due to the presence of atmosphere-
hydrosphere-geosphere-biosphere climate can be characterized by additional
interlocked nonlinear processes running at a hierarchy of speeds i.e., some of
the climatic periods do not necessarily coincide with the solar cycles.
The climate models simulating the near-surface temperature variability,
the state and composition of low atmospheric layers, are highly politically
charged issues. Unfortunately, there were cases when the field data (like
those discussed above) were adapted by the conflicting groups to corroborate
their presumptions. Climate modeling is an indicative example of science
affected by ideology; this is a piece of engaged science.
10.8 The AGW Evidence
It would be probably easy to dismiss many questions and statements about
AGW as sheer platitudes, yet I dare to reiterate them here. None of the data
seem to categorically prove the case of human-induced climate change. It is
only a matter of the public’s perception and the leaders’ intentions. The main
questions here are: can one foresee the climate changes and what factors may
be considered as the most important for their prediction? What are the
factors that may affect the accuracy of such a prediction? These questions
seem to be of tremendous importance also beyond academic studies because
it is very risky to make political decisions which may be founded on rather
inaccurate scientific results. The matter is that humans - even the most
enlightened climatologists - do not know enough either about the Earth’s
climatic system or about the chaotic dynamic systems to produce accurate
mathematical models containing thousands of entangled variables. Averaging
does not help much, mostly due to possible instabilities: if you change any
parameter a little bit, the output results may vary dramatically (this
phenomenon is sometimes called the “butterfly effect”). Reversing this
situation, one can tune the model parameters just a little bit and obtain any
desirable result. This fact, common for highly complex and possibly unstable
systems (in particular, exhibiting deterministic chaos) enables one to obtain
the biased results, for instance, the ones needed for lavish funding. I am far
from thinking that many climate modelers are dishonest opportunists or that
there is some kind of a “conspiracy” - such paranoid statements are a sheer
stupidity, but I guess there may be a noticeable component of banal
conformism in today’s politically motivated climate studies, people in general
tend to swim with the stream.
Many “catastrophic modelers” maintain that the Earth’s climate system is
not a chaotic one and that due to its averaged character it is rather stable
(then I don’t quite get it why so much fuss about AGW). Moreover, some of the
catastrophists even state that there is nothing particularly complex about the
climatic system: the Sun is heating the Earth providing thermal flux Q0, some
The AGW Evidence
451
part of the heat Q1 is re-radiated into the space, and some part (Q2) is trapped
by the atmospheric “blanket” whose permeability for thermal radiation is
worsened, primarily due to the increased anthropogenic CO2 emission. In this
naive heat balance model one can, of course, obtain a steady increase of the
Earth’s surface temperature, with no instabilities or chaotic phenomena
implied. Indeed, strictly speaking, nobody has proved the presence of chaotic
regimes for climate. But this is a flimsy argument. Abrupt shifts in the past
climate evolution, which are well documented (see, e.g., the references in
“Abrupt Climate Change: Inevitable Surprises”, NRC, The National Academies
Press, Washington D.C., 2002), give serious reasons to think about the
presence of chaotic regimes in climatic system. Mathematically speaking, this
is nevertheless only a conjecture, but the admission of climatic chaos is
compatible with available data [235]. Indeed, as just mentioned, paleoclimatic
data at our disposal testify that the planetary climate has experienced notable
changes in the past, characterized by an intermittent glacial-interglacial
period. Experimentally, one can measure, for example, the ratio of O18 to O16
isotopes in marine sediments (see [236] and references therein). Although
this ratio fluctuates rather wildly on a variety of time scales, such a method
allows one to infer variations of the continental ice volume, in particular over
the last million years [236]. One can observe the notably aperiodic - in fact
chaotic - character of glacial evolution, with a crudely extracted intermittency
component on a time scale of 105 years sometimes referred to as Quaternary
Glaciation Cycle. This intermittent character of the glacial evolution may be
interpreted as unpredictable - chaotic - switches between different climatic
states occurring on a hierarchy of time scales.
Thus, chaos upsets the predictability of climate, and quasideterministic
predictions made by the AGW adherents depend on model errors which is the
consequence of the obvious fact that any model is only a crude representation
of nature. Climatic models confront an obvious difficulty consisting in the
necessity to account for a large number of interconnected physical
mechanisms and hence even greater number of variables entering the
corresponding nonlinear equations. Therefore, only drastically simplified
models of climate can be constructed, with a heavily abridged number of
physical mechanisms, variables, and equations to be taken into account.
But while radically simplifying the model to make it treatable, modelers
encounter the problem of identifying the relevant mechanisms, variables,
equations and reliably (i.e., based on a well-determined smallness parameter)
selecting them from the “impertinent” ones. Yet even if one succeeds in this
selection procedure, the model complexity still remains very high so that one
typically has to handle the model numerically. And we know that numerical
techniques introduce their specific and unavoidable errors, thus the
possibility of obtaining reliable predictions for the climate’s future evolution
would be thoroughly compromised. At least it would not be possible to
substantiate the drastic economic and political measures that have a
substantial negative impact on the people’s well-being, on which the
environmentalists and some politicians insist. This is the well-known
consequence of a sensitive dependence on small variations of parameters, in
452
Climate as a Physical System
particular, of initial conditions in an unstable - in a specific case chaotic -
system which makes it impossible to predict its future evolution beyond a
certain time horizon, even if its present state is known with a good precision
(the latter is never attained for the climate system.)
Furthermore, by tweaking the model parameters, which are only known
very roughly anyway, it would be possible to get a required temperature
increase, e.g., the global-mean surface temperature change ∆𝑇̅ ≥4
degrees centigrade in fifty years. Incidentally, I do not quite understand
how the quantity ∆𝑇, being a function (more exactly, a functional) of a great
many variables, can be not only strictly positive-definite, but also fatally
confined within certain dimensional limits (say, 2 and 6 degrees Celsius),
irrespective of all possible physical processes in the atmosphere,
hydrosphere, and solar system. I would prefer to scrutinize the behavior of
some dimensionless entity, e.g., 𝛿𝑇̅/𝑇0 where 𝑇0 is an essential quantity
characterizing the physical properties of a considered system. It is not
necessary that 𝑇0 should coincide with a mean surface temperature: the
latter may be a physically meaningless parameter since many states, even
an infinite number of states with the same mean surface temperature
(GST) are possible.
Notice that all AGW debates revolve around the projected value of the
average terrestrial temperature (global surface temperature - GST). However,
the global average of the temperature may be too crude to serve as a valid
scientific parameter to base political decisions upon. In the folklore, one
typically refers to such averages as the mean temperature over the hospital.
Using this global average, AGW becomes the aggregate notion, a political
slogan rather than a scientific concept. Physical effects are local and
structured: there exist variations of atmospheric fluxes, ocean currents,
convection and other fluid instabilities, humidity, precipitation, sediments,
cloudiness, aerosol distributions, etc., all depending on the point in space-
time - without mentioning biological factors such as vegetation, ocean
plankton, and agriculture. Very little of such physics and biology is present in
the now popular computer models (and even less in codes). Such popular
models can, in principle, describe certain regimes of fluid motion, in
particular, fluid dynamics in the atmosphere and hydrosphere, but are far
from accurately incorporating such crucial factors influencing the
atmosphere’s local transparency as volcanic eruption, dust, cloud formation
and drift, biochemical processes (e.g., vegetation), etc. The much-publicized
IPCC computer models are based on drastically curtailed fluid dynamics
systems of equations coupled with the not less reduced radiation transfer
models, mostly in the integral (averaged) form. Such models may
inadvertently omit not only physically important terms in the equations -
and in unstable systems all terms may be important - but the entire regimes.
In particular, the Earth’s climatic system may go - at least locally - through
vigorous self-sustaining oscillations of such quantities as atmospheric
temperature and pressure, e.g., driven by feedbacks between the ocean
temperatures and global wind patterns. An example is the “Wet Sahara” effect
observed approximately 6000-7000 years ago (see, e.g., [272]). This is just
The AGW Evidence
453
one illustration of the complexity and poor predictability of the climate
dynamics, even on the level of average trends.
One may notice that there can be different climatic states characterized
by the same average temperature of the Earth, but with different pole-equator
temperature gradients. By the way, in most references only the average
terrestrial temperature is discussed and attempts to find its determining
factors, in particular, anthropogenic ones are made whereas the question of
what determines the meridional (pole-equator) temperature distribution
gains much less attention. It is, however, quite interesting that paleoclimatic
records testify that equatorial temperatures remained nearly constant238 in
the course of Earth’s evolution so that the issue of the causes for the
meridional temperature distribution over the planetary surface becomes
rather nontrivial and important.
It is intuitively clear that the meridional temperature (and pressure)
gradients should influence the frequency and severity of hurricanes, cyclones,
tsunamis, and other extreme weather events. One of the favorite theses of
climate alarmists is that such events have lately become more ferocious and
recurring than in previous years due to human-induced CO2 emissions. The
statement is confusing: additional emission of greenhouse gases should
smooth temperature and pressure gradients, thus softening the climate and
making extreme weather events rarer. At least one may ask: can one prove that
the severity and frequency of hurricanes, cyclones, typhoons, tsunamis, etc. is
on the rise synchronously with the CO2 concentration in the atmosphere?
Likewise, can one prove what threshold conditions are necessary for the total
and irreversible collapse of thermohaline circulation - one of the favorite
catastrophic scenarios of AGW evangelists?
Another catastrophic scenario predicted by AGW believers is that the
ocean level is dangerously rising at an accelerated rate determined by the
anthropogenic CO2 emission. The problem of the ocean level increase also
attracts, to my mind, an exaggerated attention. This is an example of a highly
speculative issue. Who has proved that the ocean level is rising at a rate that
has accelerated in accordance with increasing anthropogenic CO2 emissions?
Projections of how high ocean level might increase thereby endangering
coastal communities are, to put it mildly, highly uncertain. What is the
accuracy of the ocean rise measurements on the global scale? Another
question: how can one separate the climatic component in the change of ocean
level from the tectonic factor? It is well known that there existed ancient
townships that are now lying many meters under the water because the
ground sank. These latter processes have nothing to do with human activity
and global warming. Recall that one of the favorite catastrophic scenarios of
the AGW belief system is the rapidly growing frequency of floods and coastal
area destructions (such areas will be presumably engulfed in the flood
following the rise of the ocean level) due to the pathological sea-ice
circulation distorted by AGW. Recall in this connection that the volume of
238 Temperatures on the equator could even be a few degrees Celsius lower than at
present, see, e.g., Wang N., Yao T., Shi Y [246]. and the references therein.
454
Climate as a Physical System
water in the world’s ocean is estimated as V~1.3 109km3~1018m3, 3.3
107km3 in the polar ice caps whereas the total volume of glaciers does not
exceed 2 105km3 (these estimates are taken from the article [261])239. So
even if all the glaciers melted it would hardly result in the catastrophic ocean
level rise.
Politicians, especially of the social-populistic brand, have a general
tendency to talk about the distant future - this is simpler than cleaning up
mundane and unpopular everyday problems. Climate variability provides a
convenient option. Now all natural catastrophes are presented by the
propaganda as direct consequences of climate change induced by human
activity. This is false. The Kyoto treaty presumes that global warming of the
near-surface atmosphere is, firstly, a proven fact - it may be during the current
climatic cycle, but this cycle is finite - and, secondly, is exclusively produced
by human industrial activities. In reality, however, there exist powerful natural
factors affecting the climate, and human-induced changes in the gaseous
composition of the atmosphere is not necessarily the dominating one. One
can imagine the following factors influencing the Earth’s climate
Main anthropogenic factors:
1. fossil fuel burning (power production, utilities, transport, etc.)
2. industry
3. agriculture, land reclamation
4. forestry
5. hydrosystem construction and melioration
Main natural factors:
1. solar activity
2. collective motions and oscillations in the atmosphere-ocean
system
3. oceanic circulations
4. volcanic activity
5. perturbations of the Earth’s orbit parameters
6. motion of heavy planets (primarily Jupiter and Saturn)
influencing solar activity
7. lunar motion
239 Previously often cited values were 𝑉∼1.338 109km3, surface of the world’s ocean 𝑆∼
361.3 106km2 , average depth ℎ̅~3700m. Volume 𝑉 of ocean waters amounts to about
1/800 of the planetary volume. Mass of ocean waters comprises approximately 96.5
percent of mass of the entire hydrosphere which, in its turn, makes up about 1/400 of the
Earth’s mass. Volume of water vapor in the atmosphere is estimated to be approximately
13000km3, with the renewal period of about 10 days. It is interesting to compare this
period with the estimated renewal time for the whole ocean: about two million years.
The AGW Evidence
455
8. other astrophysical factors (such as fluctuations of the angular
velocity of the Earth’s rotation, cosmic rays, meteorites, etc.)
9. tectonic activity
10. geomagnetic activity
To understand the scientific foundations of long term climate dynamics
one has to account for all these factors. In the linear modeling, they may be
treated quasi-independently i.e., one can subsequently turn them “on” and
“off”. However, we have seen that climate models are predominately nonlinear
which means that all the above factors can strongly influence one another.
The fact that due to natural factors the climate has already undergone many
noticeable variations, comparable or exceeding current changes, during the
existence of the Earth is fully ignored - largely on political or ideological
reasons. In reality, in correct models of the climate variability one should take
into account at least the superposition of natural and human factors (linear
approach), in more sophisticated models their mutual influence (nonlinear
approach). It is quite understandable why the problem of climate variability
all of a sudden became an indispensable element of current world politics.
Interestingly enough, the placement of climate issues in the center of
political debates coincided with the downfall of the Iron Curtain and
dissolution of the bipolar model of the world. Climate is a transcontinental
and transborder system, it cannot be confined within a single country and
meaningful talks on climatic issues can hardly be kept within a certain
political block. Thus, since climatic changes have a global character, their
discussion is an example global politics, of effective intergovernmental
interaction on the global scale. One can therefore forecast that the role of
climate and its variability in global politics will only increase. If and when this
issue ceases to be a hot topic on a global scale, another one will be invented to
supersede it as a would-be matter of acute transnational interest.
The presence of a large number of powerful natural factors testify that
only a portion - so far unknown - of global warming is due to anthropogenic
factors. AGW models will always cast doubt until natural causes of GW are
fully taken into account. Moreover, global warming and the “greenhouse
effect” are not synonymous as it is implied in ecological propaganda
campaigns. Drastic temperature contrasts may exist between different years,
reaching in certain locations 4-5 degrees Celsius. This is a manifestation of
natural climate variability. By the way, powerful year-to-year jumps, say, in
winter temperatures, that is within a single year, cannot be explained by
anthropogenic factors. Only a part of climate variations is deterministic, the
other - probably not less considerable - part has a stochastic nature. Besides,
there are space-time climate non- homogeneities which are typically ignored
by ecological alarmists. One must understand that climate variations are not
distributed uniformly over the Earth’s surface. For example, warming in
Russia is observed to be more intensive than averaged over the globe. The rise
of global temperature (this is an averaged quantity, provided such average
exists and is meaningful) is predicted to reach in various estimates from 1.0◦C
to 5.0◦C during the 21st century, which is a big difference. Of course, diverse
456
Climate as a Physical System
doses of political engagement, ideological bias or publicity-seeking alarmism
are hidden in these estimates. IPCC, a collaboration of several hundred
climatologists,
has
offered
a
whole
spectrum
of scenarios
(see
http://www.ipcc.ch/). If climate forecasts are in general possible, then they
depend on many factors, and in case the latter can be correctly identified one
can construct mathematical models of climate as a physical system as well as
make prognoses based on such models.
Nevertheless, the reverse influence of the biosphere on the atmosphere
seems to be a proven fact: for example, the contemporary composition of the
atmosphere has been made by living organisms. Furthermore, there exists
much evidence that the Earth’s atmosphere at the initial stage of the planet
development 240 contained negligible quantities of oxygen: it was bound
within carbon dioxide. Approximately 1.8 109 years ago, free oxygen (O2)
appeared in the atmosphere due to the action of microorganisms.
Consequently, the ozone (O3) layer in the higher atmosphere (30-50 km)
emerged which screened the Earth’s surface from the deadly ultraviolet
radiation emitted by the Sun. Without such screening, living organisms could
only subsist in places hidden from the solar rays, and with the ozone
screening they could spread over the entire Earth’s surface. Eventually, the
living organisms conquered the world and were able to affect the gaseous
composition of the atmosphere even more. That was a salient example of a
positive feedback in the atmosphere.
Likewise, at the contemporary stage of development living organisms
including humans can seriously influence the gaseous composition of the
atmosphere, thus modifying its physical (primarily optical) properties. As
already mentioned, such changes may distort the transmission properties of
the atmosphere. In particular, human activities can, through the changes in
atmospheric transparency, modulate the radiation balance between the
Earth’s surface and outer space which may well twist thermal equilibrium
that we view as the Earth’s climate. Moreover, one can imagine a number
(more than one) of equilibrium states corresponding to a variety of
combinations of CO2 concentrations and average terrestrial temperatures,
other parameters being solar activity, orbital variations, albedo, state of
hydrosphere, etc. This is a physical problem: one should estimate possible
effects of the changes in the gaseous composition of the atmosphere on
thermal equilibrium 241 near the Earth’s surface. The CO2 concentration
increase over some quasi-equilibrium level, arbitrarily taken to be “normal”,
may be accounted for as a perturbation, ∆𝑐. If the regime is reached when the
global surface temperature average 𝑇̅ is perturbed stronger as linear in ∆𝑐,
say, ∆𝑇= 𝒪((𝛿𝑐)2)
or
even
first-degree
superpolynomial
growth
∆𝑇~𝐴∆𝑐log∆𝑐, 𝑇= 𝑇0 + 𝑇 ( 𝐴 is some constant or a function of other
240 The age of the planet Earth is estimated to be 4.6 10
9 years.
241 Of course, one can only talk about a thermal equilibrium in the atmosphere in a
very approximate sense. In reality, the atmosphere is a highly nonequilibrium system,
with many time-dependent currents flowing along the gradients of pressure,
temperature, density, partial concentrations, etc.
The AGW Evidence
457
variables), then one has an accelerated warming with rising concentration of
CO2. Such an amplification might lead to a considerable heating of the
atmosphere which, however, is not substantiated by the observations:
average temperatures grow much slower with the increased CO2 content in
the atmosphere and are usually approximated by a logarithmic behavior (see,
e.g., http://en.wikipedia.org/wiki/Radiative_forcing).
The next problem, more of a biophysical nature, is: to what extent can the
changes of the physical (mostly optical) and chemical properties of the
atmosphere affect living organisms, in particular humans? If and when it has
been proved that the anthropogenic factors change the gaseous composition
of the atmosphere to the extent that its modified optical properties lead to a
substantial adverse influence on the living conditions of human beings, only
then should political and economic measures directed at reducing such
anthropogenic factors ensue, and not in the reverse order since political and
economic actions may be painful for large groups of the population. We,
however, are observing just the reverse picture: political and economic
decisions, based on presumptions and models, but strongly affecting living
conditions of the population, precede the ultimate scientific results on the
possible effect of anthropogenic changes in the gaseous composition of the
atmosphere.
One can notice that despite hard physical foundation, the study of climate
- climatology - may be related more to the humanities than to exact sciences.
Climatology seems to be on the same level as the society and the environment:
we know climate to the extent we know the environment in general.
Expectations of the accuracy in climatological predictions are unwarrantably
elevated in the society. Recall that any scientific result has a definite
accuracy; if a prognosis is claimed to be an unconditional truth without
prerequisites and error margins, there is no science in it. The catastrophic
climate forecasts are in this sense almost non-scientific since error margins,
e.g., for anthropogenic influence are indicated rather arbitrarily (? < ∆𝑇<?,
where ∆𝑇 is the average surface temperature growth due to man-made CO2
release)242. Besides, error margins in the physical sense i.e., corresponding to
a chosen approximation may eliminate the assumed positive-definiteness of
∆𝑇. The current trend may be with ∆𝑇≥𝑐> 0, but ∆𝑇 is a function of time 𝑡,
and in the next temporal domain, e.g., following such unaccounted natural
factors as orbital change, sunspot periods, or volcano eruptions (see, e.g.,
[273]) ∆𝑇 may well be negative. Is there a corroborative evidence that recent
242 Some experts refer to probabilistic error margins present in the IPCC reports, see
http://www.ipcc.ch/. However, those are not the precision boundaries usually
presented in physics and corresponding to the applicability limits, but the error margins
typical of computer codes. I could not find in IPCC reports a satisfactory substantiation
of the approximations made in IPCC models. Physical applicability limits correspond to
expansions in series, often asymptotic ones, when some small (or large) parameter can
be explicitly indicated. When the terms in the system of equations describing the
mathematical model considered are omitted, one usually talks about a zero order in such
terms (in dimensionless form). One can always find the corrections however difficult it
might be technically.
458
Climate as a Physical System
climate variations considerably surpass the changes observed in the past and
caused by the natural factors such as fluctuations of the Earth’s orbital
parameters, solar cycles, ocean currents, volcano eruptions, etc.? Incidentally,
one might notice that the IPCC texts and models include very little material
about volcanic activity which has always been one of the most powerful
climate forcings on the planet. Indeed, ash and aerosols scatter the sunlight,
on the other hand volcanoes produce IR-absorbing gases (i.e., GHG),
submarine eruptions warm oceanic waters and so on.
The fact that ∆𝑇 can be negative owing to anthropogenic factors as
well is reflected also in mathematical models of the so-called nuclear winter,
when an explosive energy release occurring locally produces large
quantities of aerosols changing the optical properties of the atmosphere.
The effect of aerosols can in most cases exceed the effect of any minor243
greenhouse gas (Kahn, R. A., Yu, H., Schwartz, S. E., Chin, M., Feingold, G.,
Remer, L. A., Rind, D., Halthore, R., DeCola, P. Atmospheric Aerosol
Properties and Climate Impacts. A Report by the U.S. Climate Change
Science Program and the Subcommittee on Global Change Research, M.
Chin, R.A. Kahn, and S.E. Schwartz (eds.), National Aeronautics and Space
Administration, Washington, D.C., 2009). So, the limits of applicability for
the models of climate transitions is a serious issue.
Moreover, in contrast, say, with nuclear physics, purposed experiments
cannot be carried out in climatology so that the prognostic quality of the
models cannot be reliably verified. To some extent, paleoclimatic studies can
serve as a substitute for the physical experiment to verify the models. For
instance, a certain alternative to pointed experimental validation of climatic
models would be the quantitative explanation of paleoclimatic cooling and
warming periods, e.g., comparatively recent cold and warm climatic intervals
such as the Little Ice Age in the 17th century or the Medieval Warm Period.
Where are such quantitative explanations?
Some climate variations will inevitably occur, notwithstanding any
political action taken to reduce CO2 emissions so that the research aimed at
adapting to climate changes makes more sense than the “IPCC science” whose
main task is to substantiate the a priori set up catastrophic AGW views.
Climate science, as I have already mentioned, in general can hardly be called
impartial and satisfactory. The point is that climate science (of which the
“IPCC science” is the branch) is a somewhat special discipline: in distinction
to most scientific disciplines, climate science is not supported by any
background theory, it just contains a lot of observations, often conflicting, a
great number of wild speculations, plenty of disagreeing hypotheses, a
growing number of dubious computer models which are nonetheless
unquestionably believed to provide the ultimate, complete and holy truth, and,
moreover, contains a strong ideological component and uses ad hominem
arguments.
243 Recall that all greenhouse gases including CO2 are considered “minor” in contrast
with the water vapor which is “major”.
The AGW Evidence
459
At least the IPCC can hardly be called neutral in the assessment of climate
dynamics. An a priori givenness is especially obvious from the “Summary for
policymakers” where such clichés as “dangerous anthropogenic interference
with the climate system”, “climate change can affect human health directly”,
“populations in developing countries are generally exposed to relatively high
risks of adverse impacts from climate change” as well as mysterious words
“models project that” and intuitive statements of the type “it is very likely”. Yet
climate and its instant manifestations - weather - are determined by physical
processes, which are sufficiently complex to be intuitively assessed and
explained at the hand-waving level. For example, transfer of the solar
radiation, its reflection from the Earth’s surface and selective absorption in
the atmosphere, in particular by small quantities of the notorious carbon
dioxide, requires a considerable physical knowledge usually not available to
the most vocal climate alarmists or to “climate economists” and other
proponents of carbon control and trading schemes.
However, the crucial difficulty here is that real processes in nature are
irreversible (see section “The Arrow of Time”) so that validating parameters
of a model by paleoclimatic observations, even from the recent past, does not
guarantee reliable forecasts concerning the future state of the climatic system.
In particular, one cannot say that the state with a given concentration 𝑐(𝑡) of
CO2 and characterized by a certain average global temperature 𝑇(𝑡) in the
past ( 𝑡< 𝑡0 ) will be repeated in the future ( 𝑡> 𝑡0 ), where 𝑡0 is some
reference instant of time, e.g., 1990, 2005, 1750, etc. Such extrapolations are
not valid.
The true AGW believers and environmentalists are trying to explain the
global climatic transitions occurring since the beginning of the industrial era
as the result of a single human-induced process i.e., simple one-dimensional
causal relationship: warming is related to human-produced CO2, cooling (if
any) is related to human-produced soot and aerosols. The implication in any
case is that greedy and consumption-crazy humanity totally controls the
Earth’s climate, all the natural mechanisms playing almost no role. Hence, say
the true AGW believers and environmentalists, humanity must radically
change the way of life in order to avoid an awful catastrophe - the climate
Armageddon. Also, some climatologists assert that the emission of carbon
dioxide will double within a few decades (instead of a few centuries which
would be obtained by extrapolating the current trends, 23 percent since
1900). Here, there is an implicit assumption that fossil fuel consumption will
explosively grow resulting in accelerating CO2 emissions, which is highly
unlikely. This is an unjustified socio-economical hypothesis, not climatology -
whatever this latter term means. Speculating about future consumption
trends does not help much in predicting possible climatic states, even their
primitive characteristics such as ups and downs of the average global surface
temperature (GST). It would be much more useful, by the way, to analyze the
sensitivity of social structures (in particular infrastructure) to climate
changes in different regions rather than making highly indefinite prognoses
of GST increase: the point is that climate warming is not necessarily
460
Climate as a Physical System
detrimental; some countries and societies may profit from the regional
temperature increase.
10.9 The Evil Role of Carbon Dioxide
True AGW believers have selected a single factor - CO2 concentration - from a
set of variables relevant for the climate system and resulting in the variations
of the Earth’s surface temperature. Although nobody can deny that CO2 is a
greenhouse gas (a comparatively weak one), it can hardly ensure the entire
contribution into such variations. We shall also see below that this gas can
hardly be considered as the principal cause of global warming. At least the
usual scientific-looking statement that human-produced CO2 represents the
dominant radiative forcing, so radically shifting the Earth’s energy balance as
to induce the projected severe climatic consequences, cannot be proved. In
this subsection we shall discuss from the physical position whether it is true
that anthropogenic carbon dioxide is the primary factor driving the global
warming.
Curiously enough, the rather innocuous carbon dioxide (CO2) gas has
recently begun playing an important part in politics and declared as the
most harmful by the propaganda. This chemical substance (see about CO2
properties, e.g., in http://www.uigi.com/carbondioxide has moved to the
center of attention as the so-called climate gas or greenhouse gas. The
meaning of these metaphoric terms is that the surface of the Earth is allegedly
heating up, with the concentration of CO2 being increased due to the human
activities. There are four main “greenhouse gases” (GHG) in the atmosphere:
water vapor H2O, carbon dioxide CO2, methane CH4, and nitrous oxide N2O,
of which water vapor is by far the most efficient: its characteristic spectrum
is more than three times wider than that of CO2 and is responsible for roughly
95 percent of the greenhouse effect (see, e.g., [274]). The Earth’s atmosphere
mainly consists of nitrogen N2 (about 78.0 percent), oxygen O2 (about 21.0
percent) and argon Ar (about 0.9 percent) which are not greenhouse gases
because of negligible absorption in the infrared. Apart from these principal
components, there are some small quantities of the rest gases such as water
vapor H2O, carbon dioxide CO2 (about 0.035 percent), methane CH4, sulfur
dioxide SO2, ammonia NH3, ozone O3, nitrous oxide 244 N2O, nitrogen
trifluoride245 NF3, etc. Concentration of all these “impurities” is a function of
coordinates and varies with time. For example, wind can easily displace and
244 Nitrous oxide (the “laughing gas”) is contained in the atmosphere in very small
concentrations (about 320 ∙10−9), but it is approximately 300 times more efficient as IR
absorber compared to CO2 and its concentration is rapidly growing due to ubiquitous
use of fertilizers. However, little attention is paid to this fact, probably due to political
importance of modern agricultural technologies.
245 Nitrogen trifluoride (NF3) is mostly used in the production of electronic
components. It has the greenhouse potential approximately 17 ∙104 that of CO2, with an
estimated lifetime in the atmosphere about 700 years. The estimated production of NF3
amounts to 5000 m.t.; how much of this amount is released into the atmosphere seems
to be an open question, see the details in: Hoag [247].
The Evil Role of Carbon Dioxide
461
flatten the local bunches of CO2 concentration as well as fluctuations of water
vapor density. Does the global average of CO2 bring a more pronouncing
effect than local heating due to enhanced heat generation near human
dwelling places and industry sites? The latter phenomenon can be easily
observed, e.g., by noticing that temperature in big cities is higher than
between them.
One can also note that paleometeorological studies show that there were
sharp irregularities and oscillations of the carbon dioxide concentration in the
atmosphere, and such CO2 variations did not necessarily coincide with warm
periods in the past. For example, the Sahara Ice Age occurred when the CO2
level was an order of magnitude higher than today. What is the difference
between CO2 at that time and today’s anthropogenic one? Atmospheric levels
of CO2 (and methane) naturally fluctuate, partly due to changes of the Earth’s
orbit resulting in variations of the incident sunlight. Hence, there is no
compelling evidence that the observed human-induced increase in CO2
concentration has really resulted in the frightful greenhouse effect (Kauffman,
J. M. Climate change reexamined. Journal of Scientific Exploration, v. 21, No.4,
723749, (2007)). I have already mentioned that the main “greenhouse gas” is
generally known to be water vapor, causing about 60-70 percent of the
greenhouse effect on Earth246, since water vapor absorbs much more infrared
than CO2 (see the above brief discussion of physical absorption mechanisms)
so that it is strange to ascribe the whole absorption and thus heating of the
atmosphere to CO2 when there is about two orders of magnitude more potent
IR absorber present nearby, which should thus dominate and swamp the effect
of CO2. It is, however, curious that although this is a common knowledge
among meteorologists and climatologists, but in the media or within
governmental groups the overwhelming effect of water vapor tends to be
altogether ignored or at least under-emphasized. Moreover, this distortion of
the truth seems to be adopted by repetition: some people may even concede
that it might be “a little misleading” to ignore the water vapor as the
greenhouse gas, they tend nevertheless to call this neglect an accepted
practice and defend it by claiming that it is customary to leave water vapor
out.
Furthermore, water vapor strongly affects weather and, consequently,
climate through cloud formation changing radiation transfer conditions. This
mechanism can be more pronounced than selective absorption and re-
emission of IR radiation by tiny quantities of CO2. Unfortunately, I could not
find comparative studies in the scientific literature available to me. Besides,
the water vapor concentration depends on the surface temperature, primarily
on that of the ocean surface (which comprises 70 percent of the entire planet
area). Transition of water masses into the gaseous phase is accompanied by
cooling, so the whole thermal system of the Earth involving the hydrosphere
becomes very complex, with many feedback mechanisms. Heat and mass
246 It is, however, not quite correct to assign a definite percentage of the greenhouse
effect to a certain gas because the effects of different gases are not additive. So the given
percentage must be understood as a crude estimate.
462
Climate as a Physical System
transfer in the atmosphere on the global scale make climate modeling and
parameter computations so difficult that it becomes hardly possible to
indicate model accuracies.
The current greenhouse effect is due to about 0.12 percent of the
atmospheric CO2 generated by human activity (see the calculations in
http://www.geocraft.com/WVFossils/greenhouse_data.html). So the usual
statement that human activity since 1750 has warmed the climate seems
to be wrong, and anthropogenically produced CO2 has no discernible effect
on the global temperature. Recall also that the increase of the average surface
temperature depends on the growth of the CO2 concentration only
logarithmically, and a doubling of CO2 would roughly correspond to
approximately 1C temperature change (this problem has already been
explored by S. Arrhenius, the chemistry classic, [292]. Therefore, the panic
about anthropogenic CO2 production is hard to comprehend. It seems utterly
ridiculous that the increase of a tiny fraction of CO2 level would produce a
global temperature increase of, say, 6 C i.e., over 20 percent of the whole
atmospheric contribution into the Earth’s mean terrestrial temperature
believed to be about 33C. And there are approximately 30 times as many H2O
molecules
in
the
atmosphere
as
CO2
molecules
(see,
e.g.,
http://www.geocraft.com/WVFossils/greenhouse_data.html), with much
more efficient absorption properties (fingerprint spectrum). Following the
logic of AGW proponents, one should primarily ban water vapor production,
for example, teapots and hydrogen-driven vehicles.
It is easy to observe that carbon dioxide, water vapor and oxygen, all of
them necessary for sustaining life, are produced at different locations at the
Earth’s surface: for example, carbon dioxide in large urban and industrial
centers whereas water vapor over the oceans and oxygen mainly in rain
forests. Transport of these gases is ensured by the atmospheric turbulence,
which is a highly complicated process, not fully understood up till now.
Besides, it would be important to note that from a strictly environmental
point of view, CO2 does not present a severe pollution hazard as compared,
e.g., to such agents as NOx, SO2, SO3 (H2SO4), CO, Hg, Cd, Pb, other metals,
especially heavy ones. In fact, CO2 is not a pollutant at all. Declaring carbon
dioxide as the most harmful substance exposes a certain ecological
irrelevance of the whole “save the climate” campaign. Moreover, the absolute
quantity of CO2 in the atmosphere may only be correlated with the climate
temperature or be interpreted as its indicator because the gaseous
composition of the atmosphere depends on its temperature, even locally. If
this is the case, then stating that slight increase of CO2 produces catastrophic
temperature variations is wagging the dog. In principle, variations of the
average surface temperature ∆𝑇 and of the carbon dioxide average
concentration 𝛿𝐶𝐶𝑂2 can not necessarily be correlated. As an illustration of
some limited relevance of carbon dioxide as a unique determinant of
atmospheric temperatures one might recall that about half a million years
ago the concentration of CO2 reached the level of about an order of
magnitude higher than today, but everywhere on the Earth there was an
Ice Age. On the contrary, before that period the dinosaurs had lived in a
The Evil Role of Carbon Dioxide
463
much hotter climate for more than a hundred million years. There exist
estimates showing that during a long time of the Mesozoic the ocean level
reached the mark at least 100 meters higher than today (some estimates
even give up to 250 meters), there were no polar ice caps, and the Earth’s
terrestrial temperatures were much more homogeneous than today: the
difference between polar temperatures and those at the equator was on
average only about 25C, with the poles being about 50C warmer than
today. Therefore, the mean global surface temperature (GST) was also much
higher (see, e.g., [275]). Were the dinosaurs also incessantly polluting the
atmosphere with nature-hostile CO2? One should only attempt to answer the
“paleoclimatic” question: why was the climate so warm at those periods
(from Triassic to Cretaceous), and for such a long time?
There is an assumption put forward by the scientific wing of
environmentalists and climate alarmists that CO2 molecules re-radiate back
to the ground the substantial part of thermal radiation emitted by the Earth
thus essentially heating it. More specifically, in the process of relaxation the
infrared photons are re-emitted by CO2 molecules in the direction of the
Earth’s surface thus creating a supplementary energy flux, and the total
thermal power absorbed by the surface substantially exceeds the power sent
to the Earth by the Sun. It is this thermal power amplification that is known
as the greenhouse effect. As a result, the Earth’s surface temperature becomes
noticeably higher than in the absence of additional quantities of carbon
dioxide. Superfluous molecules of CO2 so radically improve the blanket
feature of the atmosphere that overheating may occur. We have already
discussed that the molecule of CO2 absorbs electromagnetic (infrared)
radiation due to rotational and vibrational-rotational transitions in three
narrow bands around 2.7, 4.3, and 15.0 µm i.e., rather selectively. The total
thermal spectrum of the IR radiation corresponding to the Earth’s surface
temperatures comprises about 100 µm. This means that the major part of the
infrared radiation, lying outside of the absorption spectral domains of the CO2
molecule, passes through the atmosphere into outer space practically without
loss. The CO2 concentration is, as we have seen, rather small (the current
alarm-producing concentration is 𝐶𝐶𝑂2 ≈0.038 percent), so the question
persistently arises: how can a linear growth of a comparatively tiny impurity
concentration be responsible for catastrophic climate and weather changes?
In any case, one should have the data of the total atmospheric absorption in
the infrared versus CO2 selective absorption, and it seems that actual
measurements and numbers with well-defined error margins do not exist yet.
Carbon dioxide is often designated by environmentalists as an “ecological
poison” so that one must keep its global concentration at some arguably
manageable level. What is exactly this level? Maybe I was not diligent enough,
but I could not find empirical evidence of the sensitivity of the variation ∆𝑇 of
the average surface temperature 𝑇 to anthropogenic CO2 emission (from 280
ppm to the current level of 380 ppm). Furthermore, the last decade showed
an approximately 0.1 degree cooling of the atmosphere (see, e.g., the graph
http://farm3.static.flickr.com/2600/3670965001_4249d9a68e_b.jpg).
In
fact, the CO2 gas is indispensable for life on the Earth, and geological periods
464
Climate as a Physical System
characterized by its increased quantities were also characterized by the most
rapid development of the biosphere. It would of course be nice to prove that
nowadays CO2 is really a great problem, but that has not been done, contrary
to what one may have read or heard in the media. There exist only sparse
observations and measurements characterized by not always clearly defined
accuracy as well as some largely qualitative models of the “greenhouse” effect.
The latter may very well be a fictitious theory, at least for the current
quantities of carbon dioxide ( ≤0.038 percent). Is it really a scary
concentration resulting in a significant rise of the Earth’s temperature and
requiring immediate drastic measures to restructure economy and
consumption habits? However, many billions of dollars and euros are
evaporated due to excited emotions over the so-called catastrophic climate
changes (C3). One must be a true believer in global warming and possibly a
true believer in general, i.e., a person inclined to believe in anything
supported by the majority of other people - God, fashion, vampires, life after
death, supremacy of one’s nation, intelligent design, etc., not to notice a
banal brainwashing.
The climate controversy is not so much about whether the Earth’s surface
is getting warmer, it may well happen due to a number of physical reasons
(see above), but about whether it is human activity that makes it warmer, and
to what extent. But this is basically a scientific question, not a political issue -
in fact a physical problem requiring a correct physical solution. The usual
argument of climate catastrophists is that the concentration of CO2 grows
rapidly due to human activities (from 0.028 to 0.038 percent since the
beginning of industrial revolution), and this growth of concentration must be
accompanied by a rapid heating of the atmosphere. However, this is not more
than a hypothesis based on correlation or, at best, a model; the exact physical
mechanism of the process of atmospheric (and hydrospheric) heating
remains unclear. The major portion of CO2 is dissolved in the ocean’s water,
and with the increasing temperature equilibrium solubility of most gases in
water, including CO2, is diminished. Hence, with the general heating caused
by any external factor, large quantities of carbon dioxide are transferred from
the hydrosphere to the atmosphere. The main factor for this external heating
is most probably the rising solar activity, which happened many times in the
Earth’s history when ice ages alternated with warm periods. There exist two
(at least!) informative experimental sources allowing one to obtain
paleoclimatic data: boring holes in Greenland and Antarctica, with the hole
depth reaching several kilometers. Samples of kern are taken, containing air
bubbles from those distant epochs when the material obtained, e.g., snow or
ice, was formed. Spectroscopic analysis allows one to determine the gaseous
composition (percentage of O2, N2, CO2, etc.) with very good accuracy;
besides, the temperature related to the time when snow was falling, and ice
was formed as well as some other physical characteristics can also be
obtained. All classical ice ages, warm periods, and corresponding quantities
of CO2 in the atmosphere have been established by this method. The reported
result was that rising concentrations of carbon dioxide did not precede
warming, but followed it. This fact can be readily explained: 90 percent of CO2
The Evil Role of Carbon Dioxide
465
is dissolved in the world’s ocean, and when the latter is heated large
quantities of carbon dioxide transit to the atmosphere. This process is of
course reversible: during the cool periods, the ocean absorbs carbon dioxide.
This quasi-oscillatory process, with both positive and negative feedback
components, is eternal.
There exist, as just mentioned, various feedback mechanisms affecting
climatic factors. The Earth’s climate seems to be balanced within certain (so
far not exactly known) margins. If it is pushed too far, a series of positive
feedbacks can be triggered that would cause substantial changes. For
instance, rapid heating of soil in the permafrost regions is hypothesized to
release more methane which would amplify warming. This is a domino effect.
However, it is hardly possible to say with a scientifically accepted accuracy,
firstly, how likely these scary scenarios are and, secondly, what exactly is the
human component in such runaway heating. The environmentalist
doomsayers247 usually speculate on unquantifiable risks. As far as carbon
dioxide is concerned, the salient example of the feedback is the growth of
photosynthesizing biomass with the enhanced CO2 concentration and
increased surface temperature which diminishes the amount of CO2 in the
atmosphere due to intensified photosynthetic activity. So, the question: “By
how much has the CO2 concentration increased since the industrial revolution
owing to human activity?” may be totally irrelevant to climate variability.
Instead one can pose another question namely “What fraction of the carbon
dioxide being exchanged per unit time in the whole climatic system (i.e., in the
atmosphere, hydrosphere, biosphere, lithosphere, and cryosphere combined,
see above) is due to human activities?” I suspect that the human-conditioned
percentage of CO2 in the real-time ecosystem kinetics will be negligible, and
thus human involvement into the climate dynamics is strongly exaggerated.
This is, however, a conjecture; one must estimate human involvement by
considering a correct kinetic model for the entire ecosystem. Such a model,
provided it included complete radiation transfer equations, would give local
temperatures as a by-product.
From the physical viewpoint, the equilibrium temperature setup is a
kinetic problem, in the simplest case an energy balance problem. This holds
also for the Earth’s surface. In this connection I do not understand why certain
terms in the kinetic relationships, such as infrared absorption and re-
emission towards the Earth’s surface, are taken into account whereas the
counterbalance terms, such as variations of atmospheric transparency due to
anthropogenic and natural impurities, are neglected or at least not fully
considered. What about correctly accounting for fluctuations of the Earth’s
orbit and, in general, of direct and diffuse insolation of the Earth’s surface?
And even if one takes the vicious role of CO2 seriously, what about the carbon
sinks? In short, the terms ensuring positive contribution into the energy
balance are retained whereas those resulting in diminished temperatures are
mainly disregarded. Indeed, the most essential human influence on the
atmosphere has always been the release of aerosols and various gases, some
247 And AGW skeptics are called “naysayers” by them.
466
Climate as a Physical System
of them known as greenhouse gases. Nowadays, due to “political forcing” the
role of the latter is emphasized whereas the role of tropospheric aerosols is
somehow blurred over. Thus, the global dimming caused by aerosols and
clouds may cause a drastic cooling effect, as was demonstrated by the
consequences of volcano eruptions. As to the absorbing aerosols of
anthropogenic origin, they can diminish the averaged solar irradiation by at
least several W/m2 (see, e.g., [237]). Clouds can also reduce the total solar
irradiance, thus contributing to negative radiative forcing. In all cases, spatial
and temporal distribution of clouds must have a significant effect on the
diurnal asymmetry (one may notice here that CO2 produces more warming
during the night than in the daytime).
Let us recall that in this book we are basically discussing scientific
modeling. However, most of the “science” backing up global warming has
been produced by computer modeling, the latter being only a part of scientific
modeling. I trust in science, but I do not always trust in widely publicized
computer models (WPCM), due to a great lot of people, money, and politics
involved. Such models are somewhat similar to Hollywood blockbusters,
where numbers are pulled out of the hat. The trouble with computer modeling
is that it is typically a bad science since computer models can be tinkered in
the desired direction to get any arbitrary result, even physically meaningless.
The climate (defined as the averaged weather) may change, but attribute its
variations to a single factor, small and not quite correctly accounted for in the
radiation transfer, namely increased CO2 concentration is hard for me to
understand. The assertion 𝑑𝑇̅ 𝑑𝑡
⁄
= 𝑐𝑜𝑛𝑠𝑡∙𝑑𝑐̅/𝑑𝑡, where 𝑇̅ , 𝑐̅ are land-
averaged temperature and CO2 concentration does not fully reflect the
physical reality, I am afraid. And I dare to predict that if eventually the AGW
propaganda campaign fails, which is quite probable due to the interplay of a
number of physical factors influencing the climate, carbon dioxide will still be
pictured as an evil, with the shifted propaganda focus: for example, increased
acidity of the world’s ocean due to anthropogenic CO2, killing the sea
organisms on a global scale can be declared an actual catastrophe.
In short, personally I do not believe in the catastrophic human-induced
global warming which is claimed to be the doom of mankind. “It is global
warming that will surely cause the fall of civilization and perhaps the
extinction of Homo Sapient,” I took this sentence from a book on nuclear
energy [238], in many respects quite interesting and informative248. I do not
quite understand how the microscale effects (on the scale of 10−1 −105 cm)
such as plumes, car exhausts and the like can affect climate on the planetary
scale much stronger than global or even astronomical (space) factors. Or why
occasional hurricanes (which had always occurred with high but unmeasured
strength and frequency long before the CO2 panic was spread over
enlightened circles), wildfires and cyclical melt-accretion of polar ice caps are
unequivocal evidence of the assertion that human activity causes global
248 Probably the author, a well-known writer and journalist, formerly an anti-nuclear
activist, still shares popular environmentalist views, which are reflected in the sentence
cited.
The Evil Role of Carbon Dioxide
467
warming through CO2 release. Nobody has proved any connection between
CO2 release and hurricanes or floods.
The climate changes may happen, of course, because climate is a
dynamical system influenced by numerous agents, and if such changes
happen they naturally have a certain sign of time-derivative for the local
temperature on a certain time interval. Now in some measurement sites (e.g.,
near “urban heat islands”) this derivative may be positive. I wonder what
climate alarmists will say if in some time the sign of 𝑑𝑇/𝑑𝑡, where 𝑇 is the
local temperature, will change, at least for many locations? That humans are
releasing too much sulfur acid? Or that CO2-provoked global warming leads
to global cooling, e.g., due to the meridional air mass transfer? By the way, I
dare to think that global warming is better than global cooling, in particular,
because much less energy is to be spent to sustain life. And I hope that people
will eventually be sick and tired of the outcries that the sky is falling and
therefore everyone (except selected few, of course) must get back into the
caves right away.
As far as the economic effect of the enforced CO2 reduction goes, it can
become detrimental, but this is not quite obvious. For instance, one can easily
calculate the costs of replacing all coal-fired power plants (they may really be
dirty, but not because of carbon dioxide) by wind and solar farms, the
generated power being kept constant. One can also calculate the reduction of
CO2 emission in the process of such replacement and, by using, e.g., the IPCC
anti-CO2 method, translate this hypothetic removal of CO2 from the
atmosphere into its greenhouse heating (say, for the USA which is considered
the worst thermal pollutant). I made some crude estimates and obtained 0.1◦
Celsius. And the costs are of the order of a trillion US dollars, these costs are
borne by the public, of course. A trillion for hypothetical 0.1◦ Celsius? Not too
expensive? Maybe I was wrong, yet everyone can reproduce such estimates
using some officially published data. Of course, one needs intense,
emotionally loaded propaganda campaigns to substantiate such spending, for
instance, horror stories like inevitable droughts, floods, hurricanes,
tornadoes, tsunamis, etc. Science, however, strives to not confuse talk with
empirical evidence.
Controlling human-produced CO2 is like chasing a ghost; in fact, there are
more urgent things to do than to please politicians. The attacks on CO2 are so
ferocious as if there were no other harmful ecological factors. The latter is
obviously not true, but to solve real environmental problems is much more
difficult than to create panic and exploit it for political purposes. There are,
for instance, places where people get sick due to the detrimental state of the
local atmosphere, and this has nothing to do with CO2, but environmental
organizations react very meekly on such information. Moreover, global steel
consumption is rising by about four percent a year (see, e.g.,
http://www.worldsteel.org), this growth is especially pronounced in rapidly
developing economies such as China and India. Steel production is
accompanied by massive emissions of really harmful substances, primarily
sulfur dioxide (SO2), nitrogen oxides (NO, N2O, and others), dust, etc. Large
energy inputs as well as large amounts of carbon (e.g., in the form of coke)
468
Climate as a Physical System
needed to produce steel inevitably release CO2, but it would be unrealistic to
curb the production of steel, drastically needed by emerging economies, on
the bogus pretext of climate saving. The effect of climate-motivated political
impact on the industry must be carefully estimated because it implies the
redistribution of resources and productive capacities. If the entire coal mining
industry is obstructed, it will mean a lot of unemployment and possible crisis
in many regions of the world. Besides, coal-fired energy supply accounts for
about 40 percent of heat and electricity generation so that the energy vacuum
will be left, which hardly can be compensated for by the “renewable” energy
sources - the favorite thesis of environmentalists which seems to be
disconnected from reality. By the way, what will be the concentration of CO2
in the atmosphere in the limiting case when all fossil fuels on Earth will be
burned? The projections differ so wildly that I don’t know which to select.
Yet all said does not mean of course that one should abandon developing
new technologies and specifically new energy sources which do not use fossil
fuel (see also below). In principle, instead of fossils the fuel can be made of
CO2 and sunlight, for instance, using bioreactors.
The blame put on human industrial activity in global warming is
somewhat exaggerated. For example, I do not quite understand the hysterical
over-reaction with car 249 production and use in connection with CO2
emission, although many people (as well as some TV reporters) confuse CO,
which is highly toxic, and CO2, which is not 250. Of all human produced
greenhouse gases, automobiles are estimated to be responsible for 10-15
percent whereas the breweries are accountable for several percent. At least
the CO2 emission from beer (and wine) making is not negligible. Is it sufficient
to curb the auto industry or one should also stop drinking beer and wine?
Because of environmental constraints cars tend to become less reliable, safe
and robust - in compliance with purely arbitrary, established by bureaucrats,
emission norms. “Green” environmentalists urge to reduce the output of cars.
By the way, it has already become a truism that a cow produces the same
amount of greenhouse gases as an automobile, however mainly not CO2 but
methane CH4. Does it follow from here that we should kill the cows or
drastically reduce the livestock? There were the proposals to introduce the
“fart tax” imposed on dairy farmers. Or a human being produces over 1 kg CO2
per 24 hours251 which makes about 6 million tons human-exhaled carbon
dioxide a day i.e., the annual amount of more than 2000 megatons CO2. For
comparison, the second (after China) largest national producer of industrial
CO2 emissions, the US, has an estimated annual production of about 5800
megatons
(see,
e.g.,
249 Interestingly enough, trucks and other heavy vehicles that produce more harmful
substances and more CO2, which is strictly speaking not a harmful substance, are
somehow placed outside alarmistic attacks. Numerous fossil-fuel machines used by
the military who could not care less about environmental effects are totally immune
from any “save the climate” charges, and I doubt that heavy tanks, supersonic fighter
jets and strategic bombers will ever be powered by batteries.
250 Carbon dioxide is toxic only in high concentrations, about 10 percent or more.
251 This is a very conservative estimate, actually more.
The Evil Role of Carbon Dioxide
469
http://en.wikipedia.org/wiki/List_of_countries_by_carbon_dioxide_emission
s ). Other sources estimate the whole world’s annual emission of CO2 on the
level of 10000 megatons. Still others give the overall industrial activity of
humans resulting in annual CO2 output an order of magnitude less. At any
rate, human breathing accounts for at least 10-15 percent of the annual
production of CO2. Besides, humans are not the only species exhaling carbon
dioxide. One should not forget also such powerful sources of greenhouse
gases as volcanoes, and on top of all this, wildfires can add about 1500
megatons of carbon dioxide. Anyway, the figures for industrially produced
CO2 are comparable with biologically produced carbon dioxide.
It would also be interesting to compare humans with vehicles as far as
CO2 production goes. We can take that a human exhales about 10 ml CO2 per
second which would roughly correspond with 1 kg CO2 per 24 hours (about
1 mole/hour). For the entire human population, it amounts to 108 l/s. An
average vehicle exhaust can be estimated to produce carbon dioxide at a rate
about 1 l/s. If we take that there are 108 automobiles each moment on the
roads of the world (actually less), we get the total CO2 production by the
vehicles amounting to 108 l/s i.e., of the same order of magnitude as by the
humans. And there are a great lot of other species. Following the ambivalent
anti-anthropocentric ethic of environmentalists, one should do something
urgent with climate-hostile human breathing. Perhaps people should exercise
less? Forbid jogging and refrain from lovemaking? Severely restrict the birth
rate? Or the ultimate logic of environmentalists could be to kill all humans:
this would remove letter “A” in AGW.
Let us now briefly summarize the CO2 arguments of the “green” circles
and their sympathizers supporting catastrophic scenarios of climatic change.
One must, however, admit that the level of energy consumption per capita
does not grow during the last 30 years. It seems that people have learned to
some extent how to save energy. It is only the total energy consumption in the
world that continues to increase - owing to the global population growth.
Therefore, energy consumption (which accounts for approximately 70
percent of CO2 emission) grows slower than assumed by the AGW
catastrophists. In fact, the probability of catastrophic prognoses seems to be
rather small if not close to zero, since there is simply not enough carbon (and
consequently hydrocarbons) capable to ensure the corresponding increase of
CO2 in the atmosphere (by 4-5 degrees Celsius) i.e., 3-4 times the current
level (0.038 per cent, about 36 percent over the level taken as reference,
that of year 1750, before the European “industrial revolution”.)252. In other
words, the “catastrophic” CO2 level should exceed 0.12 percent. Here, one
252 Previously often cited values were 𝑉∼1.338 · 109km3, surface of the world ocean
𝑆∼361.3 · 106km2, average depth h̅ ∼3700m. Volume 𝑉 of ocean waters amounts to
about 1/800 of the planetary volume. Mass of ocean waters comprises approximately
96.5 percent of mass of the entire hydrosphere which, in its turn, makes up about 1/400
of the Earth’s mass. Volume of water vapor in the atmosphere is estimated to be
approximately 13000km3, with the renewal period of about 10 days. It is interesting to
compare this period with the estimated renewal time for the whole ocean: about two
million years.
470
Climate as a Physical System
can make two remarks. First of all, to reach such a level of CO2 concentration
in the atmosphere one has to burn a great amount of hydrocarbons, e.g., oil,
natural gas, coal. The question is: can one obtain the corresponding amount
of fossil fuel from the available sources within the observed period, say, within
the 21st century? The second remark: is it only the absolute concentration of
greenhouse gases in the atmosphere that matters, not the rate of its increase
- time derivative of the concentration? This rate seems to be increasing
(different sources give the estimates 0.5-3 percent per year, I do not know
what to believe), at least the second time-derivative of CO2 atmospheric
concentration seems to be non-negative. What is the real role of this time
derivative?
Catastrophic switches in natural systems may occur, with the transition
into radically new states, but the assertion that it is humans who are causing
such abrupt changes is, to put it mildly, highly controversial. Are humans
responsible for the drift and possible switch of the Earth’s magnetic poles?
According to paleoclimatic studies the CO2 atmospheric concentration in the
epoch of dinosaurs was higher than today. It means that the climate of the
Earth has survived different states and continued to support the biosphere.
One can of course imagine that if the CO2 concentration reaches a certain
critical level in terms of radiative forcing (RF) then near-surface
temperatures can increase in such a rate that human civilizations will not be
able to adapt to these changes. This maladaptation to fast changes can be
dangerous, not the slow (adiabatic) temperature increase per se, by the way in
various locations and during different seasons. Besides, the current
comparatively warm climatic period may be changed by a colder one,
potentially bringing more disastrous effects than warming. Such misfortunes
occurred, for example, in the third millennium BC or in the end of 17th
century (Little Ice Age), when hunger and diseases eradicated a large portion
of the population. Humans have probably no experience how to effectively
survive under rapidly varying climatic conditions which may arise in the case
of catastrophic developments within the next several decades. But to ascribe
all today’s rainfalls, storms, hurricanes and even cold weather only to
anthropogenic climate warming i.e., induced exclusively by human activity is
either a delusion or a deliberate lie.
10.10
The Role of the Sun
Probably the most serious argument in favor of anthropogenic (more exactly
- green- house) global warming is the observational data suggesting that it is
difficult to ascribe the increase of global average temperature that occurred
since 1975 entirely to the Sun-Earth coupling (see, e.g., [239]). Although the
Sun is by large the dominant energy source on Earth, some others - such as
manifestation of internal planetary heat through the volcanic activity, burning
fossil fuels, exploding nuclear devices - may have locally a comparable impact
on the environment largely determining its temperature. However, such
effects are local in space and time and, moreover, should be treated by the
methods of statistical dynamics applied to the climate system.
The Role of the Sun
471
Historical records demonstrate a relationship between solar activity and,
say, winter temperatures, at least at the local and regional level. Already this
fact makes it more difficult to assess the reality of AGW without relying on
computer models. One can notice that each model focuses on its own specific
details, which fact makes climate modeling heavily dependent on expert
opinions. Therefore, the weight of determining factors as well as robustness
and reliability of climate prognoses depends more on subjective judgments
than on fundamental physical principles. It would also be interesting to
observe that solar activity ceased to influence the Earth’s climate almost
synchronously with the creation of the IPCC (1990s): since that time sharp
warming has become accompanied by the fall of solar activity.
Although it would be stupid to deny the presence of anthropogenic
factors (see above) especially when they are growing, but their escalating role
as compared with natural and very powerful ones such as intensity variations
of solar radiation, influence of massive planets and deviations of the Earth’s
orbit has not been quantitatively elucidated. It may well happen that the main
cause of climate change is still solar irradiance253, with the delay effects being
taken into account. For instance, the shortwave radiation from the Sun
propagates through the ocean waters and heats their deeper layers, and one
should integrate over time the radiation absorption by the ocean (mainly in
the shortwave range). Physically, one can understand this absorption in
deeper strata of the ocean in the following way: the absorption coefficient µ
in the phenomenological Lambert’s law, 𝑑𝐼(𝜆, 𝑧) 𝑑𝑧
⁄
= 𝜇𝐼(𝜆, 𝑧), where 𝐼(𝜆, 𝑧)
is the solar radiation intensity, 𝑧 is the vertical (ocean depth) coordinate,
which depends on the wavelength 𝜆 (and also on water impurities, in
particular, on salinity). Since the thermal conductivity as well as diffusion and
convection in water are relatively slow, processes and the water strata
remain stable, the heat may be stored for years in the ocean254 before it is
eventually transferred to the atmosphere (mainly through the horizontal
flows to the polar regions). This physical mechanism of time-delayed climate
response to solar activity (mainly characterized by the sunspot numbers)
should exist, but regretfully I was unable to find calculations of the
corresponding time lag in the literature available to me. Yet in general the
impulse response of the climatic system to the Sun’s activity simplistically
manifested by the surface temperature is not a delta-function of time but has
253 It has been noted in most encyclopedias that the very word “climate” originates
from the Greek word “klima” which means inclination and was referred in ancient
times
to
the
inclination
angle
of
the
Sun,
see
e.g.
http://en.wikipedia.org/wiki/Climate.
254 Although the main absorption of the solar energy occurs in the water layers near
the ocean surface, the heat stored in deeper strata is still significant which is manifested
by ocean flows (see, e.g., Primeau [248],) as well as by the fact that deep water
temperature is well over the freezing point everywhere. The most part of the shortwave
fraction of solar energy is absorbed in tropical latitudes where this fraction is more
pronounced, which enhances the effect. Recall that approximately 70 percent of the total
terrestrial surface are covered by ocean waters, with 90 percent of the total volume of
the ocean being below the thermocline.
472
Climate as a Physical System
a finite width 𝜏 that is determined by a number of physical processes and may
well reach dozens of years.
In short, it seems to be clear that variations of solar activity should
seriously influence the climate - the thesis that is emphatically refuted by
environmentalists. Discussing the role of the Sun in possible global warming
has been lately regarded as primitive, non-scientific, and almost indecent,
although the modulation of solar radiation transfer by the atmosphere is a
passive effect and cannot lead to substantial global temperature shifts. If
modulations in solar radiation transfer by the atmosphere cannot lead to
global temperature shifts, does it not mean that the role of the Sun is not big?
Yet in a well-known book by S. R. Weart “The Discovery of Global Warming”
(Harvard University Press, Harvard, 2003) the following statement is
referenced: “For a young climate researcher to entertain any statement of sun-
weather relationships was to brand oneself a crank”. The forcing of TSI (total
solar irradiance) is declared by the engaged climatologists as a negligible
factor compared to human influence 255 . Advocates of man-made global
warming mockingly beat the drums: “Humans are small, Sun is big”. However,
the point is not in this trivial observation, but in the hard physical fact that all
phenomena on the Earth are strongly coupled with the processes in the Sun.
To completely negate the solar activity, the latter being measured by sunspot
numbers, as a climatic factor is at least shortsighted since modulations of
the energy output from the Sun are significant and their measurements have
not been finalized yet. One can assume that foreseeing future climate
variations heavily depends on the ability to envisage dynamics of the Sun-
Earth physical coupling, which is an interdisciplinary task. Specifically for the
Earth’s surface temperature, its global variations of at least 0.1 ◦ Celsius
associated with the 11 year solar cycle have been extracted (see, e.g.,
http://data.giss.nasa.gov/gistemp/2007). This magnitude is comparable
with the estimates provided by the computer climate simulations. However,
the signal directly originating from the Earth’s irradiance by the Sun is
difficult to separate from other causes of the terrestrial temperature
variations including fluctuations.
We have already discussed that the Earth’s climate is determined by a
delicate kinetic balance between the incoming solar radiation (in all spectral
domains plus corpuscular flux) and outgoing thermal radiation, this balance
being mediated by the composition of the Earth’s atmosphere. Both types of
the forcing agents - natural ones such as solar variations and volcanic
emissions as well as anthropogenic ones such as greenhouse gases and sulfate
aerosols significantly affect the corresponding kinetic equations. Changes in
the kinetic equations mostly occur in the coefficients, with such changes
acting in opposite directions (e.g., cooling effects of aerosols can be partly
neutralized by heating due to the emission of greenhouse gases). Solar
irradiance variations contain both the direct part entering the source term
255 Recall that the Earth’s surface average temperature was estimated by the IPCC
to have increased by approximately 0.6C over the past century whereas the TSI forcing
have contributed only about 0.2C over the same period.
The Role of the Sun
473
(the incoming radiation) and the coefficient term such as, e.g., due to changes
in the ultraviolet component leading to the modulations of ozone production
rate. Ozone is a rather potent greenhouse gas since it absorbs long-wave
infrared radiation (LWIR) emitted from the Earth’s surface and thus
contributes to the heating of the atmosphere. Moreover, ozone in the lower
stratosphere where temperatures of 70C to 80C are encountered is thought
to have a much larger effect on the radiative balance as compared to ozone at
surface level: it can absorb infrared radiation and re-emit the latter with the
wavelength
corresponding
to
about
18
Celsius
(http://www.ozonelayer.noaa.gov/science/basics.htm. This means that the
impact of ozone leads to effective warming of the gas in the troposphere. This
is an example of complex interactions when the indirect partial effect may be
of a similar magnitude, or even larger, than the direct effect. Delicate interplay
of the factors present, both explicitly and implicitly, in the kinetic equation
determining the radiative balance can make the multiparameter climatic
system highly sensitive to small variations of different factors, easily bringing
it to unstable or chaotic domains (see, e.g., [240]). In this situation, there is
more of a consensus than real science about the relative role of natural,
primarily TSI and anthropogenic (such as CO2) forcing agents, i.e., mainly a
social effect. It is interesting to notice that the generally perceived role of solar
variations in the observed climate dynamics changes - in fact oscillates - with
time. We have already discussed on various occasions that it is usually quite
difficult to disentangle the sociological from the scientific.
The solar activity256 minima have been observed to be correlated with
colder temperatures of the Earth’s surface, an example was the notorious
Little Ice Age in Europe, North America and possibly in other parts of the
world in the 17th century ascribed to the “Maunder Minimum” (see, e.g.,
http://en.wikipedia.org/wiki/Maunder_Minimum).
In
Europe,
many
dwellings perished because of starvation during the Little Ice Age. However,
some scholars who insist on anthropogenic climate changes deny the causal
link between the lull in solar activity depicted by the Maunder Minimum and
bitter cold temperatures during the Little Ice Age, considering the overlap of
these two periods as a statistical coincidence257. The AGW concept is based on
256 The Sun’s activity is in general an integral notion being understood not only in
terms of sunspots, but also accounting for changes in total irradiance, ultraviolet
irradiance, magnetic flux variations, solar wind, energetic solar particles, variations of
the size and intensity of heliosphere, etc.
257 Curiously enough, the same scholars do not regard the analogy in two time series -
temperature data and CO2 emissions - expressed by the controversial “hockey stick”
(see e.g., http://en.wikipedia.org/wiki/Hockey_stick_controversy and Holland [249]) as a
statistical co-incidence. In general, one can add a zero-centered random number to the previous
value and get a variety of statistical series similar to the random walk series. When plotted, such
series can resemble the “hockey stick” (for amusement, I have done it with MS Excel, but probably
such products as SPSS or “Statistica” are suited better.). The usual “hockey stick” argument
means no more than the fact that one data set (temperature reconstructions) matches another
(CO2 concentration) over some arbitrarily selected averaging or calibration period. In this
process one can obtain as many “hockey sticks” as one desires, by putting a variety of data
474
Climate as a Physical System
one-to-one correspondence between CO2 concentration and temperature
rise whereas looking at time series, one cannot exclude the possibility that the
overall temperature behavior is flatter than the CO2 concentration increase.
If this is the case, then the orthodox AGW theory does not seem to hold. There
exist many other notable relationships - pure coincidences or not - between
the Sun’s activity and terrestrial climate. One more example is the so-called
medieval climatic optimum (MCO) that was observed in the 11th through
13th centuries and coincided with the Grand Maximum of the solar activity
(see, e.g.,[241]) 258. Maybe it sounds non-scientific or even “cranky” but a
striking thing is that there existed a nearly one-to-one agreement between the
periods of average temperature changes (as best they could be known) and
solar activity modulation, although the climate alarmists deny this
coincidence. There are data (though also controversial) that the Sun was in a
state of unusually high activity for about the last 60 years of the 20th century.
A similar period of enhanced solar activity (however, characterized by a
significantly fewer number of sunspots than during this last period of activity)
occurred in the Middle Ages approximately from 1100 to 1250. It is
approximately at that period that the above-mentioned medieval climatic
optimum happened on the Earth when, for instance, the Vikings settled down
in Greenland and Iceland. The Vikings’ colonies were recorded to have
flourished for several centuries and deteriorated due to starvation and cold
during the Little Ice Age. Today, we can survive an analogous warm period
when, in addition, the enhanced solar activity works in phase with the
possible anthropogenic factors. One should explore this controversial issue
further, of course, as well as the current and historical variations of solar
activity. In particular it is unlikely that a reliable answer to the crucial
question “On what time scales can the Sun affect terrestrial climate?” does
exist.
10.11
Limitations of Current Climate
Modeling
In the current political and media discussions the issue of accuracy in AGW
forecasts has been somehow covered up by slogans and fears. Yet the question
of accuracy is always of paramount importance in physics, and the reliability
of scientific prognoses essentially depends on it. It is clear that the accuracy
of computer model output (the error margins) cannot be better than that of
the input data. Recall that most data in geophysics have a large uncertainty
corridor. Besides, the complexity of climate science compels one to make
sets (e.g., population growth, number of newspapers, scientific publications, bicycles, etc.).
There may even be lucky “hockey sticks”, for example, in personal or corporate budgets.
258 One can object that the book issued in 1975 is totally outdated and does not reflect
modern opinions. I don’t understand such objections: the book contains hard scientific
evidence which cannot be outdated by the very definition of science. This is not a
political journalism, after all. In the same fashion, one can merely declare outdated the
works by Newton, Boltzmann, Gibbs, Einstein, Poincaré, Einstein, Rutherford, and many
other scientists.
Limitations of Current Climate Modeling
475
many simplifying assumptions which only can lower the accuracy. Therefore,
there exist a lot of indeterminacies in climate dynamics, and the error margins
of climate models must be specially appraised by independent scientists before
political decisions about allocation of resources are made.
Current mathematical models of climate are basically those of fluid
dynamics, at least they are centered about the description of fluid motion. The
kernel of these models is devoted to describing the fluid motion in the ocean
and the atmosphere. But this is far from describing the real world we are
living in, with plenty of intricate physics as well as chemical, biological,
geological, anthropological, social, etc. factors influencing the climate
variability. It is hard to understand from the present mathematical models
(and, of course, totally impossible from political discussions) how to separate
the CO2 produced by fossil-fuel incineration from its conversion in biomass.
Seasonal accumulation and release of carbon dioxide is a spatial-dependent
kinetic process that should be coupled to fluid dynamics and radiation
transfer equations of climate modelling. It is clear that since CO2 hidden in
the biomass and soil is an important reservoir of carbon comparable with that
produced by human activities, biological processes must be an important link
in CO2-based climate projections. However, such processes are usually not
included in climate modeling. I also could not find a satisfactory solution of
the energy balance problem based on the radiation transfer in the
atmosphere (a detailed kinetic approach and not only energy balance). It is
this problem that should theoretically corroborate or disprove the
greenhouse effect concept. Models that I have come across are mostly of a
particular character or provide crude estimates, many essential factors being
neglected. What climatologists really know is based on very limited
paleoclimatic observations and on a bunch of computer models.
From the analysis of biological effects on climate changes, would
probably follow that one can bind excessive CO2 with the help of appropriate
farming technology, land management and genetic engineering. 259 These
advanced agricultural techniques have nothing to do with current
mathematical models of climate and still less with ideological denunciations
of environmentalists. There are, however, certain hallucinatory ideas of so-
called geoengineering, for instance to spray many megatons of sulfuric acid in
the atmosphere, with the possible grave consequences such as acid rains.260
Another fancy idea is the CO2 underground sequestration. Such projects may
also involve a considerable risk since in the case of a leak all living organisms
within the layer of 1.5-2 meters from the ground, i.e., humans and domestic
animals, in the vicinity of the reservoir of CO2 will be killed. Besides, people
do not understand the climate system well enough to take radical decisions
about influencing it on a global scale (geoengineering). But I think one should
259 In fact, carbon dioxide is precious for the biosphere and comparatively rare - the
actuality environmentalists and climate alarmists always seem to forget.
260 By the way, where are the precautionary environmentalists who always fear the
unknown side effects? The fact that the environmentalists do not object signifies that
there is more ideology than ecology in their position.
476
Climate as a Physical System
not worry too much: whether or not one thinks that radical actions should be
urgently taken, they are unlikely to follow.
In order to evade ambiguity, I can state my position right away: I am
neither for nor against global warming (GW). It may happen. Moreover, it
probably happens. More precisely, the warming probably occurs in the
current period - there exist climatic cycles. And it must even have an
anthropogenic component (AGW), at least locally, which can be seen already
from the fact that average temperatures are higher in the cities than in rural
dwellings. However, it seems to be only a belief that the warming is
completely man-made. Relative roles played by the natural (such as Sun-
Earth coupling or volcanic activity) and anthropogenic (such as greenhouse
gases or sulfide aerosols emission) factors as well as their interplay are far
from being elucidated. Quantitative results of comparing natural and
anthropogenic forcings do not have a sufficient accuracy to guarantee
unequivocal statements 261. Furthermore, due to multitudes of parameters
important to determine the dynamics and equilibrium states of the Earth’s
climatic system I don’t think reliable prognoses for its evolution could be
feasible, at least for the current state of physical and mathematical
knowledge. Computer codes in climate modeling, at least those that I could
come across, are based on strikingly shallow physical theories. Furthermore,
there is even an underlying philosophy justifying this apparent simplicity of
climate physics: climate is allegedly no more complex as a heat engine. I don’t
think that simplistic nearly-linear models readily implemented in computer
codes are relevant for obtaining the values of average terrestrial temperatures
for 50-100 years ahead. Therefore, I suspect that the importance of the human
component (letter A added to GW) is, using again Mark Twain’s famous
expression, greatly exaggerated. And please recall that vanity, hubris is a
serious sin.262
261 The role of the Sun-Earth coupling is often a priori declared negligible compared
with anthropogenic factors. Those who still dare to insist on an accurate assessment of
varying insolation as a climatic factor are often labeled as anti-scientific retrogrades.
This question seems to the adherents of AGW trivial, long time passé, and only
producing yawns (see below more on the possible role of the Sun).
262 This book was written before the Nobel Price 2021 for physics was granted to Syukuro
Manabe and Klaus Hasselmann for “leading the foundation of our knowledge of the Earth’s
climate and how humanity influences it”. Back in the 1960s Syukuro Manabe, Japanese American
meteorologist and climatologist, was working on physical models to explore the interaction
between radiation balance and the vertical transport of air masses. Based on stratospheric and
tropospheric measurements showing that temperatures rise in the lower atmosphere and fall in
the upper, the scientist argues that the cause of temperature fluctuations are changes in CO2
concentrations, not solar activity. The conclusion that followed from this model (at least the way
it was interpreted) was that oxygen and nitrogen had negligible effects on surface temperature,
while carbon dioxide had a clear impact. Klaus Hasselmann, Max-Planck Institute for
Meteorology, became a Laureate for developing a method of distinguishing between natural and
human causes of atmospheric heating, the so-called fingerprints. The problem of a mean
atmospheric response to external forcing such as volcanos, albedo, surface temperature, sea-ice
etc. is addressed by applying a filtering technique based on comparison of atmospheric response
patterns derived from multiple sources: models, experiments and data sets for long periods of
time. The expectations are that this method will allow us not only to distinguish the increased
Limitations of Current Climate Modeling
477
temperature in the atmosphere caused by natural processes from those of human activities, but
also
to
make
climate
change
more
predictable
and
reliable.
https://www.nobelprize.org/uploads/2021/10/sciback_fy_en_21.pdf.
https://pure.mpg.de/rest/items/item_3030122/component/file_3030123/content
478
Made in physics
11 Made in physics
Physics has recently demonstrated additional power by crossing into other
disciplines such as biology, economics, ecology, sociology, medicine, even
political sciences. Physics-based mathematical models (PBMMs) may have
nothing to do with physics. For instance, numerous traffic flow models are
essentially based on the conservation laws, which is a physical concept.
Another example is the famous logistic model, usually applied to the natural
population growth, is a simple generalization of the exponential model widely
used in physics 263. The logistic model, despite its simplicity, was able to
adequately represent the population dynamics in various countries, e.g., in
England, Scotland and some parts of USA. In biology and ecology this model
is used to describe various evolution scenarios, when the future population of
species depends linearly on the present population. A little later we shall
discuss these two above-mentioned classes of models in some detail. Recently,
physics-based models won great popularity in the field of social and
economic modeling.
Nevertheless, I think that the mathematical work such as in physics is still
very peripheral in social and even in economic disciplines, at least so far. In
contrast with physics, social sciences attempt to produce descriptions and
diagnoses not for material phenomena, but for mental trends and
psychological conditions, assuming that collective attitudes matter more than
material phenomena so that the material order (or disorder) in the world is
not the foundation but the implication of psychic inclinations of the people.
So far, the ultimate purpose of physics is to control the properties of non-
living matter, for instance, to model, design, and eventually mass-produce
new materials. Likewise, the purpose of social models would be to explore
and then tune the properties of human material.
In social studies there exist particularly many unanswered questions,
with some of them may be considered to lie at the very core of social
disciplines. For instance, do different nations pass through the same stages,
albeit with some delay with respect to one another, just like people who, while
growing up, are passing through universal stages in their development? If one
can give a reliable affirmative answer, would it mean that a time-ordered (and
irreversible) process of social and cultural evolution is maintained? Or the
alleged orderly progression from the primitive to more “civilized” stages may
be reverted under certain conditions? In other words, do patterns of
development exist for human societies, and to what extent can individuals
deviate from these patterns?
Unfortunately, when discussing such issues, qualitative analysis is
emphasized, sometimes rather aggressively, which probably testifies to
certain inferiority undertones.
263 The logistic and the exponential models are nearly identical at small times.
11.1
Exported Models
479
11.1 Exported Models
It has long been speculated that physical models or rather mathematical
models of physics might prove useful for the analysis of human behavior, both
on the individual and the collective level.
Unfortunately, dynamics of the systems, for which we do not know the
main evolution equations, appears to be irreproducible. In contrast to
physics, such disciplines as biology, medicine, ecology, psychology,
economics, sociology, etc. have so far admitted correct and non-speculative
theoretical study only at the level of time series. Now there are increasingly
frequent attempts to construct particular mathematical models, mostly based
on nonlinear dynamics, to describe the evolution of systems studied in the
above weakly formalized disciplines. Chaos theory has been especially
attractive for the scholars attempting to apply off-the-shelf physics-based
mathematical models (PBMM) to social and biomedical sciences. Although
there exist already a considerable bulk of papers, also in physical literature,
exploiting nonlinear dynamics to build mathematical models in these
disciplines, the notion of chaos still remains rather the subject of quasi-
scientific philosophy than the tool of quantitative science.
The aim of mathematical models in social sciences is to help estimating
and understanding the statements of humanities that are far from physics and
mathematics. Here, the principal technique is to properly identify the scale of
a problem and to pose the relevant questions, which is not always the case in
philosophy and artistic culture (see Chapter 2). Physicists are usually better
trained in restricting the problem to a set of localized models. It is interesting
that such a great physicist as E. Majorana who was in general inclined to use
sophisticated mathematical tools published a sociology paper [276]).
Majorana published not so many scientific articles, and this one was probably
intended to extend the theoretical schemes of physics onto social sciences.
One can see the deficiency of general statements, when the scale of the
problem is not appropriately identified, on the following slightly exaggerated
example. It would be in principle correct to say that, e.g., viruses are
ultimately constituted, as all matter, of quarks and leptons. However, this
statement does not help much: knowledge of such basic constituents will not
explain functioning of viruses which results in disease. Likewise, the
statements related to behavioral sciences such as psychology are usually so
general and arbitrary that it is difficult to define their domain of applicability
and construct a reasonable mathematical model corresponding to these
statements, although mathematical modeling in psychology has a long
tradition starting from F. Bessel and H. Helmholtz. The trouble with
behavioral disciplines is that they rely on “expert” opinions rather than on
experimental techniques and related theoretical explanations. Even the
attempts to construct quantifiable procedures are accompanied by the
drawbacks of unknown accuracy. Take, for instance, a very popular
assessment technique based on IQ. What is exactly the quantity that is
measured by this method? Does this unknown quantity change with time? Do
you believe, in general, that IQ means much? Nevertheless, checking a
480
Made in physics
person’s IQ by some bureaucratic authorities (e.g., human resource
management) can drastically influence her/his career. As for mathematical
modeling in behavioral sciences, they are either too abstract and detached
from experimental base or represent universal statistical applications such as
structural equation modeling (see, e.g., Schimmack, U. and Grob, A. (2000),
Dimensional models of core affect: a quantitative comparison by means of
structural
equation
modeling.
Eur.
J.
Pers.,
14:
325-345.
https://onlinelibrary.wiley.com/doi/abs/10.1002/1099-
0984(200007/08)14:4%3C325::AID-PER380%3E3.0.CO;2-I;
see
also
https://liberalarts.tamu.edu/communication/profile/hart-blanton).
11.2 The Limits of Sociology
In this section, a few vacuous generalizations can be encountered. Some
passages here only document my experience, reflecting the integrated psychic
attitudes and mental inclinations of several groups of persons I had a chance
to talk to. And I know very little or nothing at all about the boundaries of
reality constraining such group attitudes.
Several decades ago, sociology created many false expectations, e.g., that
it is capable to fully explain human behavior, flow of events and even
principles on which human societies are built. However, these expectations
have not been materialized, and many people became greatly disappointed by
sociological results. L. D. Landau, for example, considered sociology a
pseudoscience. What are actually the limits of sociology? To what extent can
sociology make predictions, in the meaning of conditional statements usually
accepted in natural sciences?
I don’t think I shall be able to correctly answer these and similar
questions, not only because I am a dilettante and not because the questions
are put in too general a form (deliberately), but because the subject of
sociology is extremely complicated and badly formalizable by the current
mathematical means. Sociology may be interpreted (not defined!) as the
collection of techniques providing the movement from opinions to
understanding. Then, if understanding has been achieved, one can make
projections, models and forecasts, even quantitative. Sociological research
deals with mass phenomena and mass behavior. A person is not discerned in
this mass. In such a sense, sociological research may be called defocused
rather than focused - its resolution is not higher than a “small group” of
people. What is this small group, sociology does not take pains to correctly
define. In mathematical terms, one can use the notions “averaging” or
“homogenization” to characterize the objects typically studied by sociology.
The macroscopic character of sociology is the first limitation that one must
bear in mind when talking about this discipline’s claims.
The second limitation consists in the lack of predictive power in
sociology. In distinction to physics and even modern economics, sociology is
mostly busy with explanatory constructions. They may be interesting and
plausible but without predictive component, and even if there may be one,
accuracy of predictions is unclear. Here, I might note that one can redirect a
similar reproach to modern theoretical physics. Experience shows that
11.2
The Limits of Sociology
481
theoreticians can explain everything, just anything you can imagine. This is
good, it testifies to their skills and flexibility. But it is much harder with
prognoses.
The third limitation is a virtual impossibility to talk about the unique
interest of a society or a country. Society in each country is composed of many
acting groups, each having its own interest. Therefore, it would be difficult to
define a common interest of the entire society, unique for the given country.
Usually interests of the ruling group are declared as “interests of the
country” or “interests of the state”.
The state is not completely determined by formally fixed constitutional
rules, over-constitutional rules are usually stronger than legislation. This is a
binding system of unconventional norms which make each society unique.
This is the form of civil life specific for each society. Even a high level of
administrative aggression such as endemic to dictatorships, if directed
against informal rules, cannot fully destroy them. Each state can be
characterized by its own level of coercion as well as of corruption and other
attributes of power concentration. I do not want to be engaged in something
similar to amateur kremlinology in this book, especially regarding the limits
of sociology, and one must study carefully the effect of power distribution on
social evolution, the latter being understood as the trajectory of a complex
human organization, e.g., a country, in the space determined by a set of
relevant parameters. A crude high-level look hardly brings any results or
prestige; therefore it should be relegated to kitchen talks or cocktail parties.
Actually, there does not seem to be many good quantitative results and
models describing power distribution effects in various societies. See in
relation to this [293] and [294].
11.2.1
Self-Reproducing Social Patterns
To what extent could the processes occurring in nature be mapped onto
human social structures? The complete isomorphism leading to such quasi-
naturalistic theories as social darwinism, in its extreme form resulting in
fascist and Nazi oppression of “unfit” individuals and groups. In such extreme
forms, it is the pure ideology in action: rulers interfere and suppress the
“unfits” even though the latter in no way present a direct threat to them. The
heavy hand of the ruling group is more appreciable in case the country is not
a “democracy”. Thus, societies may be ranged and measured according to the
ruling elite pressure. This is one of the main tasks of sociology which has not
been successfully accomplished so far. There are a number of other cardinal
questions that may be considered as sociological tasks. For instance, why and
to what extent laissez-faire capitalism is perceived as an enemy for
generalized socialists such as communists, marxist-leninists, nazis. It is more
correct to refer to the national-socialist (NS) regime, “nazi” is only a colloquial
term like “commi”., etc. Furthermore, it is an empirical (historical) fact that
socialism strongly relies on violence. The question is: why and how can this
accent on violence be quantitatively estimated and measured?
Democracy is usually understood as an antithesis to socialism. But there
may exist different versions of democracy, starting from liberal democracy to
482
Made in physics
the one with unlimited tyranny of the majority as during the French Revolution
of 1789-1799 or a kind of democracy supporting oppressive regimes such as
in Russia in the beginning of the 1920s as well as in Iran in the beginning of
the 1980s. The main principle of liberal democracy is not to enforce the will
of the majority, but to ensure the rights of each small group, down to a single
person. The main presumption of liberal democracy is that if any group of
people is suppressed in its rights i.e., the ruling group is allowed to suppress
such a group of people, then this ruling group will necessarily strive to
suppress all other groups. So, the liberal variant of democracy emphasizes
protection of all minorities and individuals. The liberal democracy may be
considered as a limiting case on an authority scale, the other limit might be
considered an ideological totalitarian state, with all intermediate kinds of
despotism
and
democracy
(authoritarianism,
autocracy,
military
dictatorship, ethnocracy, socialist state, ochlocracy, etc.) being positioned
between these two extremes. It is a business of sociology to assign numbers
(points on the scale) to each state264, thus providing an ordered set for a social
or political analysis [295]. Accent on qualitative descriptions and scathing
metaphors like “snob-rule” (elite or oligarchy) vs. “mob-rule” (the majority)
can hardly be satisfactory.
What is a totalitarian state? It would be difficult to give a strict definition,
but one of the dominant features of such a state, intuitively, consists in
describing its reaction on out-of-order messages. The totalitarian response to
a signal about a misrule or a trouble consists not in the attempt to streamline
the situation, but in efforts to switch off the signal source. In other words, in
a totalitarian state you should not try informing the authorities or the police
that something goes wrong because this won’t function and you will be, with
high probability, eliminated. In fact, any state, by definition, has some amount
of totalitarian features, but the feedback intensities differ from country to
country. Nevertheless, the totalitarian state of the country does not mean that
one should be sitting all one’s life under the bed, be completely silent and
never stick one’s head out. We might recall in this connection that in the Soviet
Union almost everybody, with tiny exceptions, figuratively speaking, had not
shown themselves from under the beds for 70 years, waiting for the outcome
of the fight of elites, until the Communist regime partly collapsed. Thus in a
totalitarian state each individual is forced to contribute to the stability of
regime and perpetuating it participates in evil.
It has long been observed that two wicked things never occur in liberal
democracies: mass murder and mass hunger (to my regret, I don’t remember
who was the first to verbalize this observation). It is, however, useful to recall
a simple maxim known to Aristotle, but totally forgotten in the 20th and 21st
centuries: democracy is the worst form of rule if the society is mostly
composed of paupers. In this case, democracy inevitably transforms into
tyranny. This observation leads to a general question: what are the main
causes for the death of the democracy on a given territory? The list of such
264 The word “state” is understood here as the state of a system and not as a political
organization of a country.
11.2
The Limits of Sociology
483
causes seems to be rather short. One is a war. A democratic state can be either
conquered, for instance, if it is small or, by defending itself and in process of
mobilization of its armed forces, transforms into a military dictatorship.
Another classical death of a democracy is through a military (or paramilitary)
coup, this scenario has been materialized a number of times in Africa and
Latin America. But the most general pattern of transition from democracy to
tyranny appears to be due to wide involvement of poor and illiterate
population into political decision making. It is understandable why
democracies were very unstable before the industrial revolution: the latter
provided necessary conditions for stability of a democratic order with respect
to power overturn by ensuring a well-being for the majority of the population.
These conditions are not sufficient, as we might observe: the majority rule can
still lead to tyranny even in relatively affluent societies. Intuitively we know
that some countries demonstrate examples of seemingly stable democracies,
in particular liberal democracies, whereas other societies appear to be
unstable with respect to the classical “democracy-tyranny” transition.
Moreover, we may notice that despotic, e.g., authoritarian forms are stable
because they are unfavorable to economic productivity and even distribution
of wealth thus preserving pauperism. This fact is intuitively grasped by anti-
democratically oriented political forces (e.g., left-wing socialists, radicals,
communists, etc.) who build their strategy and even propaganda on hindering
economic productivity. The main task of sociology in this respect is to find the
exact conditions expressed in numbers, formulas or algorithms, which would
tell us when the stability of a given democratic society is ensured and what is
the risk of transition to a certain variety of a despotic regime.
Since it is difficult to rely on self-restraint of the majority and its favorite
leaders as well as on self-control displayed by the ruling groups, tenable
power restriction mechanisms should be designed and developed in a stable
society, in particular the legal system, to secure a clearly defined scope of
rights of an individual and, consequently, of all groups of individuals. To offer
models for such control mechanisms may be regarded as another task of
quantitative sociology.
Apart from the authority scale, there are many other self-reproducing
patterns in human societies. How can one understand why all the regimes
promising socialism, communism265, total equality, possession of equal rights,
social justice and the like quickly turn into ruthless, despotic dictatorships?
Why is bureaucracy so overwhelming? Greatly simplifying, one can observe
that there exist in fact only two superparties in any state: “bureaucrats” and
“liberal economists”, all others being just nuances of these two mainstreams.
A variety of colors and hues may be considered as a fine structure of these two
principal levels. The above opposing superparties represent two main
attitudes present by people: paternalistic and self-sufficient. This attitudinal
dichotomy, being translated due to sociological collective effects into dual
social structure produces natural associations with two main types of objects
265 One usually distinguishes between socialism and communism as an extreme form
of socialism, but for me it is basically the same social phenomenon.
484
Made in physics
in physics: bound and free. The paternalistic attitude implies etatistic
structures on which an individual is heavily dependent and results in the
“bound state” of an individual, with movement restrictions, informational
censorship and other well-known limitations. Figuratively speaking,
paternalistic pattern is a projection of diapers onto the whole life of a person.
A swaddling effect of the diapers is realized through countless bureaucratic
circuits under the pretext that the authorities know better what “people really
need”. Accordingly, two main personality types having totally different images
of society can be developed: those who rely on their own forces, intellect, and
hard efforts versus those who hope that external agencies such as God, good
monarch, state control, social help, etc. will in any event interfere and protect
from excessive hardships. These latter people tend to unite under leftist,
communist, general collectivist, environmentalist, nationalist, anti-globalist,
and other populist slogans being easily recruited into militant groups (“take
everything away from the rich, then divide it up”). In the Marxist political
texts, the low-income fraction of people who depend on the state social
system for their day-to-day existence are typically called “lumpen”.
In the societies with a considerable component of liberal economy, more
relying on self-sufficient behavior and free choice, bureaucratic structures are
forced to compete for power, in particular by leveraging populistic ideologies.
In relation to just mentioned social dichotomy, one can notice that there are
two basic regimes of social equilibrium in human societies. In the countries
which use to call themselves “advanced” or “developed”, social equilibrium is
of dynamic character and is distinguished by an oscillatory “social trajectory”
which has intermittent bureaucratic and liberal phases. Typically, the process
runs as follows: after the fast transition from a “frozen” despotic regime to a
“democratic society” at its liberal phase, e.g., after a drastic social
transformation such as a revolution, people are initially motivated to work
hard and economy is on the rise. As a result, people on average begin to live
better, but high-income and exceedingly wealthy groups appear inducing
envy of the poorer part of the population. This envy produces demotivating
effect, and besides people tend to be more relaxed due to increased quality of
life266 and more attention devoted to hedonistic excesses. Economy begins to
fall, which is amplified by the outbursts of activity in the anti-liberal camp
exploiting mass envy and insisting on the enforcement of social justice and
equal distribution of wealth. “Down with capitalism!” is a typical slogan in
such periods. Distribution and to some extent production should be
maximally controlled by the state bureaucracy which represents such a
control as “achieving an order” in contrast with “liberal capitalist chaos”.
Ensuing nationalization and politization of the economy aggravates the
situation. Economy naturally falls deeper, and in the countries with the lack
of democratic traditions such as firmly established free periodic elections,
there is a hazard of transition to a “frozen” despotic state totally controlled by
bureaucracy. However, in advanced economies with still wealthy population
266 I don’t know a correct definition of the term “quality of life”; here it is used
intuitively in accordance with the whole intuitive exposition.
11.2
The Limits of Sociology
485
and really free choice between political parties, the population begins to sense
danger and votes anew for “liberal economists”, at least more liberal than
clinging to power bureaucratic control freaks. Then the cycle repeats itself -
I view this phenomenon as oscillations of social trajectory. So, the first basic
regime of social equilibrium is an oscillatory one, it is typical of developed
countries. For the countries with weak democratic traditions, especially with
dominating pauperism, another regime is habitual. From the viewpoint of
macroscopic social equilibrium such countries’ trajectory symbolizes a totally
different story. One can easily observe that many countries usually known as
“developing” are unstable under the transition to the stationary state of
“frozen” bureaucratic despotism, with reduced civil rights and limited
personal freedom. It does not matter how such regimes are called either from
outside (tyranny, autocracy, authoritarianism, communism, fascism, etc.) or
from inside (controlled democracy, real socialism, also communism, etc.), the
essence is that the societies supporting such regimes are in a metastable state
separated from the equilibrium corresponding to liberal democracy by a
rather high barrier. Nevertheless, contemporary history shows that this
barrier is not impenetrable: more and more societies have overcome it, being
transformed into liberal democracies. One can say that such a transformation
represents a historical trend. However, the characteristic transition time from
the metastable frozen state to democratic oscillations may significantly
exceed the active period for a single population (population life cycle). One
may also notice that the mentioned historical trend reflects a kinetic
situation: there exist faint democracies that are unstable under the reverse
transition into the frozen state.
I think that this verbally described cognitive model can be translated into
the
language of mathematical modeling and quantitative results
corresponding to the observable phenomena can be obtained. I found it
strange that sociology, when attacking societal issues using mathematical
tools, leaves the crucial problems of structure, dynamics, and equilibrium of
societies mathematically unexplored (although, I am just a layman and know
very little of sociological literature).
One more self-reproducing pattern in the society is an eternal conflict
between clericals and intellectuals, i.e., between the church267 and the science.
An established church seeks to universally spread its messages (preferably at
the expense of all citizens); therefore it tends to oppose secular and scientific
education which is driven by economic demand. In this conflict, the church
represents the social force directed against economic and technological
development. One can recall that it was only in October 1992 that Pope John
Paul II expressed regret for the Galileo affair and officially conceded that the
Earth was not stationary. I remember a meeting in Moscow organized by
Siemens in 1996 and devoted to the social impact of IT. Some clergy were
invited, and when it was mentioned that the Internet allows one to radically
267 The term “church” may be understood in this context in a broad sense including all
irrational currents in the society such as sects and quasi-religious groups. A unifying
feature for all such currents is relying on miracle or mysticism, that is on phenomena
contradicting proved scientific facts.
486
Made in physics
expand the scope of living activities, to live simultaneously several lives as it
was figuratively expressed by one of reporters, the orthodox priests
immediately began frenetically crossing themselves and raising vehement
protests.
One often cites Bacon’s aphorism: “Knowledge is power”. However,
ignorance is even stronger power. In a great majority of countries, the church
is a powerful group very close to the ruling elite. The latter is supporting
economic and technological development primarily to acquire military might.
On the other hand, church, as just noted, is impeding this development. Such
an ambivalence is of universal character and in extreme cases leads to civil
wars of clerical dogmatics against intellectuals. Recurrences of such conflicts
can be traced nowadays even in rather advanced societies (USA, some
European countries, Latin America, Russia, India, Middle East, etc.). In Russia,
for example, criminal penalty can be applied for publicly expressed atheistic
ideas and criticism of the orthodox church. This is a strange example of a
contemporary witch hunt supported by authorities in a presumably civilized
country268. Moreover, the limitations for applying the obtained knowledge,
e.g., in the form of introducing new technologies lies today not with
technology per se but with its acceptance by people and administration
despite the fact that new technologies can catalyze efficient approaches and
business processes269.
11.2.2
Archetypical Questions of Social Sciences
One of the main questions that should be answered by sociology is: under
what conditions the averaged, homogenized mass of individuals can be
transformed into a coherent group of highly motivated people? This process
bears some analogy to a phase transition: acquiring order out of chaos. The
conversion of atomized, alienated individuals into structured, mobilized
people is of crucial importance for political authorities, and it may become the
worst nightmare to dictatorships. A far analogue is the picture of a saturated
solution with an accidental center of crystallization. However, the methods
268 Some sociological studies in Russia indicate that had it been a vote with the
participation of Stalin, he would have won. This effect of one-sided love of Russians to
Stalin may appear paradoxical since Stalin exterminated more Russians than were killed
during the WWII, but it can (and should) be sociologically explained. Contrariwise,
people in Russia who are trying to struggle for civil rights and against the oppressive
bureaucracy are immediately branded as loonies or even persecuted. Not only in Russia
but in many other countries people were punished for adherence to the progress of the
European type. In Germany, Hitler has been forcefully excluded from any opinion polls
or official lists (such as the vote for the most outstanding Germans). Had he remained as
a candidate, nobody knows what would have happened. One can easily find many such
examples in modern history.
269 The difficulties of human acceptance become obvious when one observes the
struggle about genetically modified products and genetically produced technologies in
general. Another example is the difficulties of the IT penetration in medicine. The
physical methods of medical diagnostics and treatment such as laser medicine, nuclear
medicine, medical imaging, etc. were initially rather hard to adopt, see, e.g., Roberts
[250]; now they seem to have overcome the repulsion barrier of the old doctors’
community and basically ensure the progress of medicine.
11.2
The Limits of Sociology
487
typically employed by sociology mostly involve surveys in which the obtained
results are represented by numerical data corresponding to the percentage of
people sharing a certain opinion. This is, however, raw experimental material
that might be used as an input in theoretical models of sociology. In such
models one should derive differential equations, probably of stochastic
character, describing the transition between states in a small time interval - a
procedure well known in mathematical modeling of physical processes.
Analysis of surveys, a standard technique of sociology, does not seem to be
fully satisfactory since, e.g., by a strong desire one can correlate anything with
everything. A connection of the sociological data with mathematical models
would be highly desirable. When studying processes such as leadership that
occur in “small groups”270, relations between the members can be studied not
in terms of transitions between states and corresponding differential
equations, but using graph theory applied to all ordered pairs of group
members. In this way, one can describe power hierarchies in human
organizations taken as sets with dyadic relations between a comparatively
small number of elements (see, e.g., lectures on mathematical sociology by P.
Bonacich, http://www.sscnet.ucla.edu/soc/faculty/bonacich/textbook.htm).
However, sociology, in contrast with social psychology, studies the
behavior of human systems consisting of a very large number of members. For
example, sociology must establish certain irreducible laws that pertain to
large human groups having eternal conflicts of interests such as between
bureaucracies and citizens or between medical doctors and patients. One can
easily produce other examples of conflicting social pairs. In the final analysis,
of course, the collective properties of human groups are due to individual
attitudes,
opinions,
behavior
and
person-to-person
interaction
(communication) characterizing each single member of the group. There are
models in which human society behaves as an averaged individual (in a
similar way, plasma may be modeled by the motion of a typical particle
moving in the electromagnetic field, see Chapter 5). This averaged individual
can make sharp evolutions of behavior whereas in the society of collectivized
individuals, behavioral patterns are smoothed. Yet sudden transitions are
possible and have been observed in human history.
It is obvious that the solution of sociological problems involving many
human participants and accounting for the behavior of each individual person
is impossible. Analogous problems are impossible to solve even for
structureless particles in physics, and for elements of human societies that can
be found in a great number of personal (psychological) states determination
of collective behavior from individual properties is clearly out of the question.
One can only hope to describe the averaged overall characteristics, the crude
macroscopic features of sociological systems. Moreover, due to a self-
consistent situation - individuals create institutes that are forming individuals
- the transitions between states of sociological systems may take many years
which makes observations difficult: humans change slowly, in the course of
270 This is the term most favored by sociologists, but I don’t know how one can define a
small group.
488
Made in physics
several generations. This is also a limitation imposed on sociological research
and social projections.
One more important issue of social sciences is the amplification effect. It
appears to be a common observation in sociology that in initially
homogeneous dwellings “good” and “bad” neighborhoods gradually appear.
Likewise in economics, adjacent countries exhibit drastic differences in
growth rates and industry output levels. In other words, large differences in
the long run aggregate (or averaged) variables are observed for social or
economic systems whereas initial conditions for these systems were almost
identical. This phenomenon resembles the development of instabilities or
amplification in dynamical systems, when small changes in the initial
conditions are translated along the evolution trajectories and so amplified as
to produce large differences in the output values. In social systems, small
variations in the behavior of individuals can be transformed, due to the
amplification effect, into great differences in the long run aggregate quantities
such as the economic output or per capita GDP. This analogy prompts one to
think that it would be pertinent to use powerful methods of dynamical
systems theory to describe the effects of social and economic amplification
and to establish its limits.
11.2.3
Limits and Errors in Social Sciences
One might note that the culture of assessing errors and discussing the
accurateness of prognoses is nearly absent in social sciences. Typically, the
statements in social sciences have an unconditional character whereas in the
world of physics and mathematics an answer, which may be interpreted as a
prediction, has the form IF ¡conditions¿ THEN ¡prediction¿ so that altering the
conditions in general would change the predictions. Therefore, the universal
principle of physical and mathematical sciences is that one should interpret
forecasts in the context of conditions.
Not so in social sciences. Let us first take as an example one of the most
famous socio-economic accounts, “Das Kapital” by Karl Marx. The author, a
poor German emigrant living in London, produced a bestseller (the first
edition in London, 1867) which probably no one has read completely. The
manuscript by Karl Marx consists of four thick volumes, and it is unlikely that
the author himself was familiar with the last three: they were compiled from
loose drafts by the author’s friends and colleagues, Friedrich Engels and Karl
Kautsky. Marx appeared to be so certain in the infallibility of his theory that it
is hard to find in the “Capital” any discussion of the accuracy of his statements
and prophesies. At least, by scrolling “Das Kapital”, I could not find any. Marx’s
theoretical schemes are in fact somewhat abstract models with a very modest
mathematical element and hardly any domain of applicability. In this sense,
Marx’s claims are closer to religious models than to economic theories.
Indeed, we have seen (see section “Religious Models” in Chapter 2) that most
religions are based on promises, and most believers find their deeds
meaningful only to the extent that something abstractly pleasant can be
expected. The same applies to the Marxist abstraction of communism.
Marxism in general has plenty of religious attributes: it contains very little
11.2
The Limits of Sociology
489
empirical evidence (if at all), mostly relying on unconditional faith. It is
curious that Marxism rivaled religion mostly in the countries with very strong
religious adherence such as Albania, Afghanistan, China, Cuba, India, Laos,
Vietnam. Probably it means that a poor population is more susceptible to
uncritical acceptance of vague ideological doctrines than the population in
comparatively wealthy countries. It is also astounding how deeply people
were indoctrinated with the purely ideological, i.e., without empirical
element, Marxist schemes. Marx only tried to produce a highly theoretical,
scientific-looking study of the capitalist way of production albeit with plenty
of lateral associations. Common people, however, interpreted these
speculative theories rather emotionally, as an appeal to the violent
transformation of society. Many people were enslaved by the Marxist ideology
to the extent of being ready to sacrifice their freedom and lives, closing their
eyes to the obvious facts that the properties of socialist or communist states
were standing in sharp contrast with Marxist doctrines. The main thesis of
Marx is actually formulated at the very beginning: the wealth of societies
which are based on the capitalist way of production is manifested by a great
accumulation of goods. Then the author devotes about 800 pages of the first
volume, containing a lot of numbers and references, to a meticulous
investigation of the conversion of goods into money (and vice versa), creation
of a surplus value, and accumulation of the capital based on it. According to
Marx, it is the capital that is the real ruler of the society, with the capitalist
society ensuring a maximal capitalization and monopolization of the economy.
Marxism optimistically claims a such a greed-based process of capitalization
and monopolization should eventually result in the social explosion. So,
according to the frustrated author, capitalism is doomed.
It is because of this optimistic prognosis that all the people in the
communist countries were obliged to study Marxism-Leninism. In the Soviet
Union, studying “Capital” at each university or institute, was obligatory,
irrespective of the faculty. There was at least a year’s course of the so-called
“political economy” formally devoted to the study of Marx’s monumental
treatise, but nobody actually read it beyond the first chapters including
ignorant professors of the “Marxist-Leninist political economy”. We, students,
managed to pass the exams without reading neither the “Capital” nor its
abridged exposition specially adapted to the presumed Soviet mentality. It
was for us a kind of a sport: who gets better marks without any knowledge? It
was not so difficult to swindle the narcoleptic teachers of Marxism-Leninism
by a meaningless flux of words because they had no knowledge of the obscure
Marxist texts, either. And of course, Soviet functionaries, the absolute
majority of them being half-literate, had never read “Das Kapital”, but they
had to enforce it. I guess that such communist rulers as Stalin, Mao Zedong,
Kim Il Sung, Fidel Castro and others had neither time nor desire to study the
monumental theory by Karl Marx. Their purpose was more pragmatic: to
make their subjects believe that the sealed up communist system was a
“progressive” paradise as compared to the backward and inhuman capitalist
hell.
490
Made in physics
The socio-economic model of Marx still seems to be inadequate, despite
all its popularity, which is also similar to religious models. Indeed, the main
prognosis of the imminent social explosion in all developed countries, put
forward by Marx and his interpreter Engels, was never corroborated. Reality
manifested itself exactly opposite to the Marxist predictions. The
representation of the working class as the most progressive one is ridiculous.
Besides, the workers have never been active and eager enough to ignite the
world revolution, as it was proclaimed by the communist ideologists. Equally
inadequate was the model of socialism as a society that would abolish the
state and establish a genuine paradise on the Earth.
11.3 Hierarchical Multilevel Systems
Although the concept of hierarchical multilevel systems (HMS) is very broad
- from software and data communication systems through Earth’s climate to
social hierarchies here, we shall mainly talk about economics. There are
considerable mathematical challenges in describing HMS, especially as
concerns their multiscale modeling.
Bridging across many levels corresponding, e.g., to subcomponents that
operate on essentially different space and time scales requires some unifying
mathematics and, if treated as a head-on computer simulation, is
computationally demanding. The physical example of a multiscale problem is
turbulence (Chapter 7), it gives us an opportunity to feel the complexity
resulting from many interworking scales. Despite enormous practical
importance, apparently transparent mathematical setup (the Navier-Stokes
equations), and a great lot of efforts, nobody has managed to build a good
mathematical theory of turbulence. Economics, being an essentially
multilevel structure, is probably an even more complex system than fluid. This
is an example of multigrid methods - a computational mathematics counterpart
of multilevel systems, which represents, however, only a primitive reflection
of the entire complexity of hierarchical multilevel systems271.
11.3.1
The Politics of Bureaucracy
If the state begins to redistribute the created wealth too actively, a justified
resentment of those wealth producers who are the most efficient and
creative is aroused.
Bureaucratization in corporations leads to underproduction or
inadequate production
272 crises. This is a scaled-down effect of
macroeconomic drawbacks that constantly plagued, e.g., the Soviet-type
(Gosplan) economies and eventually led to their complete downturn. One can
even make a pessimistic prognosis related to the Great Bureaucratic
Revolution occurring almost everywhere in the world if the situation with
271 In fact, already the Fourier expansion exploits a hierarchical principle, and it is due
to the hierarchy of scales that the Fourier method is so powerful (in linear problems).
One can also bring the example of hierarchical Arabic numerals vs. the inefficient Roman
number system.
272 Manufacturing of products with no regard to the market signals.
11.3
Hierarchical Multilevel Systems
491
rapid bureaucratization 273 is not reversed and bureaucratic positions are
more and more attractive for young people, then in a dozen of years a new
generation of “intellectuals” will arise who will not be intellectuals at all.
These people will probably be different, not ready for intellectual work in the
sense of, say, the 1960s i.e., not capable of reflection and simply to reading
serious books. This can be bad for all.
Now that more and more people wish their children to become highly
paid functionaries, bosses, top managers, lawyers, bankers, TV or movie
actors, prominent sportsmen or other kind of celebrities, interest in hard
education appears to be waning. The lowering prestige of exact sciences and
engineering is indicative of the growing tendency to bureaucratization. One
can observe that the more primitive the culture the less useful science and
engineering are perceived to be274.
The prestige of science and engineering in the popular consciousness
might serve as an indicator of the society development stage. There is,
however, an intrinsic contradiction here: people who care less and less about
science more and more want new “cool” technologies. It is this “coolness
factor” that brings significant profits to capitalist corporations and compels
them to somewhat grudgingly support science together with the classical
future-oriented chain of development: idea, calculation, laboratory,
prototype, pilot project, full scale production. Yet such chains are typically
controlled by the corporate bureaucracy which, with the multilevel
hierarchical authority structure of modern corporations, is practically
indistinguishable from the governmental sector bureaucracy.
One detrimental consequence of corporate bureaucratization is the
system of semi-corrupt verticals: managers at the lower levels make absurd
and arbitrary decisions while higher levels tend to protect their subordinates
from any blame. This buddying mode is rather stable, eventually leading to
the total absence of persons who are ready to take responsibility for
inadequate decisions. But although one can hide from the facts, one cannot
hide the facts. Of course, rank-and-file employees, common workers, and the
“office plankton” are those who suffer, primarily from unemployment. One
should not think, however, that unemployment is only caused by the
bureaucratic mismanagement: even a perfectly functioning market (i.e., non-
centralized) economy is prone to unemployment because of insufficient
demand and its fluctuations. According to the canonical economic theory,
which is a deterministic model operating with averaged quantities, market
equilibrium is reached when demand equals supply - in fact this is the
definition of market equilibrium. Applying this definition to the labor market,
one can deduce that the demand for goods and services pushes upward also
the demand for labor, thus resulting in rising pay and employment. However,
273 The rate of creeping bureaucratization, e.g., of Russia is striking: even according to
official statistics the number of government officials has grown from 900 000 in 2000 to
2 000 000 in 2008.
274 In the post-Soviet Russia, scientists were perceived in popular consciousness as
good-for-nothing exotic creatures; people looked at “egg-heads” with a mixture of
disdain, superiority and fear.
492
Made in physics
the supply-demand equilibrium curve is not necessarily a smooth trajectory:
from time to time, it can change very abruptly, following social cataclysms,
governmental moves, appearance of new technologies, and so on. We have
already discussed that disruptive technologies favor certain groups of
population and can seriously inhibit other groups. For example, closing coal
mines in order to foster sustainable energy sources, even if this process is
accompanied by social programs and retraining activities, seriously
diminishes job opportunities for miners. In general, new production
paradigms have strong effects on wages and employment, these effects
deforming the socio-economic position of some groups relative to others.
There exists a considerable amount of highly professional literature devoted
to this subject (see, e.g., [277] and references therein) so that I do not need to
dwell on this subject, about which I know very little. There are also
mathematical models of unemployment in multilevel economies (i.e., HMS)
resulting from technological development, but we shall not discuss these
models here.
Bureaucracy is generally opposed to meritocracy because the latter
possesses some specialized technical knowledge or critical information, which
makes it difficult to control.
11.4 Physical Economy
Sometimes I think that the world would function better if it were run by
physicists and mathematicians, despite the fact that politicians and business
people usually claim that decisions are much better guided not by physics or
mathematics but by gut feeling derived from years of experience. And they are
supported by a largely math-phobic general public. Besides, the vast and
omnipresent bureaucracy - the driving belts of politics - rightly fears that
analytically driven programs might result in massive personal replacements.
We have seen that physics builds models of reality, but, firstly, it may be not
necessarily physical reality without participation of human agents and,
secondly, not only physicists construct models of reality à la theoretical physics.
For instance, the use of dynamical or stochastic (or dynamic stochastic)
models in socioeconomic research is now a substantial part of mathematical
economy. Economic change, growth processes, goal setting, development
time, driving forces, and large number of other topics are studied in the
mathematical form by techniques traditionally used in physics, e.g., by
variational methods. Mathematical economy often looks like physics in some
other guise. Therefore, one currently calls this interdisciplinary field
physical economy.
This discipline may be especially interesting to those who wish to see how
the ideas from a social sphere can be translated into mathematical models.
Paying attention to this translation is useful for all parties concerned -
physicists, mathematicians, economy experts, social scientists, for it helps to
enlarge the scope of problems to be objectively studied and possible
mathematical techniques to be used. Since I am not an expert on
mathematical economy, many topics are treated on a primitive level of general
concepts. Such important subjects as stochastic modeling, probabilistic
11.4
Physical Economy
493
analysis, structural change or socioeconomic discontinuity are treated in a
number of more specialized sources, so don’t expect any profound exposition
of these topics here. As everywhere in this book, the main attention is paid to
the motivation and relevance of physically-based mathematical modeling.
Hard laws analogous to those describing the processes in physics do not
exist in economics because the latter is necessarily mediated by social
phenomena. Social interactions and human behavior are exceedingly more
complex than physical phenomena and remain rather poorly understood.
Therefore, economists rely more on insight and expert judgments rather than
on objective methods ensuring the confidence close to physical modeling.
Economy is always an evolving system [168] – actually, any physical
system is an evolving system since a physical event occurs in space-time. Yet,
in physics one can in many cases use quasi-stationary or even steady-state
approximation. We have seen, for instance, that steady-state models such as
computation of energy levels play an outstanding part in quantum mechanics,
in fact quantum mechanics was initially invented and designed to produce
stationary states. Now we may call this subdiscipline that does not include
interstate transitions quantum statics. Contrariwise, the steady-state
approximation is rarely adequate in economical models because of rapidly
changing conditions. In other words, economic models are predominantly
dynamical, dealing with evolution. Generally speaking, evolution is any
process of development, change, or growth.275 To describe some evolving
system, one may construct at first a minimal model which is aimed to
elucidate the basic features of developing phenomenon. Such a minimal model
corresponds to the simplified dynamical system describing economic
development - with a minimum number of phenomenological parameters or
arbitrary constants. More detailed models can be built on the base of these
minimal models276.
There are a lot of questions regarding evolutionary economies, posed but
not answered. For example, should there be a convergence of economic
patterns towards a single, universal and ideal model? Does this common limit
exist for all societies, or evolution of specific economies is heterogeneous,
with the economic trajectories being dispersed according to each country’s
preferences? The latter assertion is known as the “Dahrendorf’s hypothesis”,
stated by a well-known German-English political economist, Sir Ralf
Dahrendorf.
Economics is an essentially quantitative subject. When, for example, a
minister for economy asks his advisers about the advantages or disadvantages
of raising the actual taxes, she/he does not expect a philosophical discourse,
nor a lecture on what Nobel Prize winners in economy would generally say
about increased taxes. The minister (or the secretary for economy) wants to
know how much revenue could be produced due to new taxes and how the
anticipated revenue figures would correspond to the optimistic prognoses
275 The word “evolution” stems from the Latin evolutio which means “unfolding”.
276 Such more detailed models have often been called “imitation models”, although it
seems that this terminology is outdated now.
494
Made in physics
supplied by the finance minister. The secretary for economy must know
quantitatively how the increase of any tax would affect various sectors of
economy. Thus, economics seems to be closer to engineering than to science,
so one uses a term “economical engineering”. Indeed, one must define goal or
cost functions and from them deduce social and economic behavior - a typical
engineering approach. For technological systems, to adjust parameters to the
specified goal function is common in traditional engineering. However, this
works well for technological processes due to the fact that they obey the well
understood laws of natural sciences. In economics and other social sciences,
the governing laws are poorly understood or at least do not exist in compact
mathematical form (expressed as formulas). Insufficient knowledge of the
mathematical structure for social and economic laws makes it hard to use a
genuine engineering, when free parameters can be fine-tuned with a good
accuracy. In particular, this leads to a well-known situation when the results
obtained within the framework of different economic models have an
undetermined applicability domain and may considerably overlap.
In contrast with physics, quantifying things in economics is in general a
nontrivial problem. How would you unequivocally quantify customer
satisfaction, for example? But you ought to do it, in order to plot this variable
versus costs. Moreover, this intuitive entity may determine economic strategy,
for instance in medical institutions, transportation or public conveyance
companies: do you want to minimize personnel (expendables, fuel, etc.) costs,
or do you want to maximize patient (customer) satisfaction and thus attract
more clients?
There is a peculiar feature of real economics differing it from natural
sciences and technological engineering. In economics, perception of reality is
a major part of reality. For example, when people are constantly told that
everything goes well with economy, they are inclined to believe it, start
making investments, buy stocks, purchase more goods, and the whole
economic system is accelerating or “heating”. It is thus the vision of economic
situation, i.e., reality dressed in human perception, and not the naked
situation that determines the economic behavior. More than that: perception
of reality may prove to be more important and contributing heavier to
economic behavior than reality itself. As a consequence, it seems very difficult
to model the behavioral components in theoretical economics. One of the
biggest challenges is to understand how the system will change its path as
humans react to stimuli, incentives and dangers. For example, one of the most
important economic factors in the contemporary world is the oil prices. The
cost of oil production (even including gradually rising exploration costs of oil-
carrying sources) is known to be a slowly varying quantity on a level of 10 US
dollars per barrel. On the contrary, the oil prices are varying comparatively
fast, on a level about ten times higher. It means that human-induced,
speculative and “soft” component is an order of magnitude larger than
technical production costs. Moreover, even the speculative variations of oil
prices following the human perception of the political and economic situation
may substantially exceed the hard economic parameters such as the
production costs. This example hints at an essential, sometimes dominant role
11.5
Naive Taxation Models
495
played by human perception in economy. Another example points at the
difficulties to account for the human role in economy. Thus, a number of
economies are plagued by corruption, and although it is intuitively very likely
that corruption hinders economic development in such countries. However,
when one tries to pass from vague intuitive statements to quantitative models,
the braking effect of corruption becomes hard to describe in mathematical
terms.
11.5 Naive Taxation Models
This section may be regarded as an occasional curiosity. The subject of taxes
is, however, very important for all people, and one cannot get rid of the
impression that it has not been studied with commensurate mathematical
thoroughness. The tax system is purposed to maximize the budget revenue
under the following constraints: non-degradation of the quality of life; non-
diminishing
income
level,
demand,
productivity
and
profitability;
preservation of social stability. Taxation fulfills one more important function:
stabilization of inflation. Optimal taxation of private persons and households
depends on the character of two important distribution functions
characterizing economic structure of the society: distribution 𝑓(𝑥) with
respect to income 𝑥 and distribution 𝑔(𝑢) with respect to accumulated liquid
assets or liquidity 𝑢. As usual, the quantity 𝑑𝑁= 𝑓(𝑥)𝑑𝑥 denotes the number
of households with income lying in (𝑥, 𝑥+ 𝑑𝑥) , and the quantity 𝑑𝑁=
𝑔(𝑢)𝑑𝑢 signifies the number of households whose liquidity is found in
(𝑢, 𝑢+ 𝑑𝑢) . Here 𝑁 denotes a continuum of households. The 𝑓(𝑥)
distribution is more important in societies with transitory economies
whereas the quantity 𝑔(𝑢) characterizes the wealth distribution. Examples of
quantity 𝑢 are bank deposits, stocks, bonds, expressed in currency units, in
short, 𝑢 denotes the mass of money to be at the household disposal. Optimal
(according to some criteria) taxation may be modified with the changes in the
society economic structure. Corporate taxes directly affect profits and
productivity. It may be noticed that corporate taxes are coupled with income
taxes. A well-organized tax system allows one to determine the society
economic structure reflected in the distribution functions 𝑓(𝑥) and 𝑔(𝑢). For
example, one can measure 𝑓(𝑥) directly from income taxes and 𝑔(𝑢) by
analyzing bank accounts277.
Mathematical modeling of taxation schemes does not seem especially
difficult, yet defensive financial bureaucracies are apparently not very
interested in modeling results. There are only continuous verbal discussions
in the mass media, but serious scientific results are only seldom presented
and popularized (see [172] in relation to this). The subject of taxes is highly
emotionalized and charged with political or group interests. It is by no means
accidental that finance ministers are most often without any financial or - it
would be almost improbable - financial-mathematical background; they are
277 Both bank accounts and accumulated stock may be of course analyzed impersonally,
without violating privacy requirements. Secretive character of individual wealth is an important
prerequisite of social stability.
496
Made in physics
customarily recruited from winning party inner circles, even in advanced
countries. This anti-meritocratic selection favors confused and not
transparent tax systems having deplorable economic and legal consequences.
Tax laws, governmental acts, statutory regulations and actual rules come in
great many formats so that it is usually impossible to know them all. Only this
fact tends to criminalize any citizen due to her/his unawareness of the whole
mass of current tax rules. The internet does not help much in this respect -
what matters most is not the medium, but whether the source is up to date. It
is usually not difficult to calculate taxes, for example, by using a look-up table
for progressive scales, yet the definition of what is subject to taxation taking
into account all possible exemptions, deductions, special regulations
accumulated over the years is very burdensome.
Thus, taxation rules in most countries are extremely complex, with
governing
financial
bureaucracies
strongly
resisting
any
possible
simplification. Changing the taxation schemes, introduction of new taxes or
removal of old ones is always extremely controversial. Some tax authorities
(e.g., in Germany where taxation is considered to be the most complex in the
world, see, e.g., http://en.wikipedia.org/wiki/Taxation in Germany) even
contend that the tax system does not need to be simple - sort of an esoteric
approach. Germany is an example of the country that traditionally strives to
be one of the leading (or even the leading) in the world in many respects -
economically, politically, financially, technologically, in exports, etc. However,
these ambitions are drastically impeded by the country’s clumsy and highly
controversial tax system. As a result, money is flowing out of the country.
Punitive reflexes of the state bureaucracy activating the “immune system” -
police, intelligence, excessive customs control - are costly, inefficient and only
make the matter worse. Money gained inside the country is easily percolating
through restrictive barriers. Small but numerous amendments issued each
year do not help since they only increase the complexity and benefit mainly
the parasite clan of tax consultants flourishing due to unobservable
legislation. On the surface, this situation is favorable to the state since tax
consultants are also paying taxes on income generated from the fee paid by
taxpayers. This is, however, a superfluous activity taking many man-years of
people that can be otherwise intellectually or manually productive. Germany
of course is no exception in highly inefficient and overcomplicated taxation.
Many countries cannot come to terms with their tax systems. All the countries
where the tax system is a product of political compromises rather than based
on solid financial and mathematical approaches are doomed to either money
loss or political instability due to people’s discontent, or both. It is indicative
that tax systems in various countries are all different and are constantly
modified - this testifies to the fact that they are far from being optimized. The
tendency to make taxation complicated for citizens is quite understandable
since under a simplified system (such as a universal flat rate, see below) the
ruling bureaucracy loses some particular form of control based on selective
granting tax preferences. So, there are groups besides ruling bureaucracy that
are opposed to tax simplification, which results in lobbying against efficient
mathematical modeling in this “politically sensitive” area. The usual
11.5
Naive Taxation Models
497
argument against simplified systems based on clear mathematical models is
based on the notion of “fairness”: a tax scheme is declared “unfair” if it is
mathematically rigid and does not allow to redistribute money according to
political purposes.
However, “fairness” is a subjective notion: how do you define “fair”?
Probably any definition of it would be contaminated with emotions, and this
complicates the taxation schemes. But mathematical models are immune to
emotions, and no matter what mathematical model one considers in regard
to the taxation problem one has to focus primarily on its simplification - in
accordance with the common mathematical modeling methodology (Chapter
2). To begin with, let us for simplicity consider a three-component taxation
scheme when taxes collected by the state278 are subdivided into three non-
intersecting components: individual (household) income tax, corporate profit
tax, and consumption tax (VAT, GST and the like)279. I shall not dwell here on
corporate taxes, it requires a thorough economical analysis bordering on legal
issues which subject only includes mathematical models to a minor extent
and would lead us far away from them. As regards taxes on consumption, not
on labor and capital income, this is a serious and interesting issue where
mathematical models can be correctly set up, especially in the context of
shifting the tax burden from individual income to penalized consumption.
Thus, taxes on consumption inherently encourage savings and, consequently,
investments instead of consumption. Income tax is a direct one whereas
consumption taxes are hidden in prices and felt indirectly. One may notice in
this connection that although VAT and its analogs are drastic instruments of
improving the state finances, they can bring more harm than benefits for the
state since they can significantly reduce consumption.280 It is because of this
hazard that mathematical models would be especially useful in optimizing the
relationship between the two components - income and consumption taxes.
The most important direct tax is the income tax. There are three main
mathematical models for the income tax - progressive, proportional and
regressive. Many people got used to proportional taxation schemes - due to
“fairness” or envy arguments, although nobody has proved that such schemes
are the best for the people. By the way, what is best for the people is not
necessarily good for the state and vice versa, even in democratic countries
with advanced economies. People - the country population as a whole and its
various groups - and the state represented by the ruling bureaucracy may
have, and actually possess, different goal functions whose difference281 as a
278 Typically, for “old” European countries tax revenue comprises about 40-50 per cent of GDP,
with
approximately
25
percent
in
the
USA,
http://ec.europa.eu/taxation
customs/taxation/index en.htm
279 Actually, the great variety of other taxes imposed on the citizens provide only corrections
to the state tax revenue and rather testify to inefficiency and greediness of the financial
bureaucracies who see their task as squeezing the money out of citizens.
280 Moreover, the VAT paperwork is extremely cumbersome, so it complicates the accounting
to the extent that a lot of man-hours are wasted. It is strange that people underestimate the fact
that time is the most precious resource.
281 One can talk of the norm in some functional space of course.
498
Made in physics
function of time increases when the economic situation becomes worse. In
most advanced economies, the personal (household) income tax is a
progressive tax defined as a rising, piecewise continuous (affine) function of
income, excluding various deductions, rebates, etc. In other words, the tax
base, i.e., the amount of money earned up to a certain threshold 1 amount 𝑇1
is taxed at a rate 𝑟1, then the remaining income, up to the second threshold
amount 𝑇2 is taxed at a rate 𝑟2, and so on. Conversely, regressive tax is levied
so that the tax rate decreases as the tax base increases. This is a rare occasion
for income taxes, being applied in developing economies to stimulate
accumulation of wealth. The in-between model of proportional or flat rate is
the most primitive taxation model based on the universal rate 𝑟 irrespective
of the income earned. In this sense, a flat rate may be considered a degenerate
case of a progressive or regressive rate. For some political reasons, mainly of
ideological character, proportional (flat) tax schemes are uncommon in
advanced economies, especially in Europe, where typically a graduated
progressive tax imposed both on household incomes and fixed taxes on
corporate profits are accepted. However, flat taxes seem to work well, e.g., in
Russia and Baltic countries; some more countries of Central and Eastern
Europe are contemplating introducing the flat income tax. Medical and
retirement insurances perceived by people as supplementary taxes do in fact
correspond to flat rate taxation. There was a proposal by a well-known tax
expert and lawyer Professor Paul Kirchhof to introduce 25 per cent flat rate
in Germany, but it was of course declined for political reasons.
The limiting case of the proportional tax model is confiscation of high
incomes and/or property - accumulated wealth. This variant is, however, not
optimal since it produces an excessive social tension, leads to the drain of
capital out of the country and removes economic stimulators. This is an anti-
liberal approach exploiting human envy and almost necessarily resulting in
violence. Furthermore, the ruling bureaucracy that appropriates the
functions of total redistribution rapidly (in some historically short time 𝜏)
becomes the elite group (“nomenclatura”) and behaves as if the entire
country belonged to it. This accompanies the transition of society to another
stationary or at least metastable state which is usually called totalitarian. The
only possibility to resort to a confiscatory taxation during some acute crisis is
to apply it for even shorter time 𝑡< 𝜏.
The flat tax model is based on one parameter - universal tax rate. It is the
simplest model whatsoever. Let us consider now another simple model of
taxation, but based on two parameters: the lower threshold 𝐵 (from German
“Betrag”), analogous to a tax exempt level (Steuerfreibetrag in Germany) and
the flat tax rate 𝑟 applied to the positive difference between the income 𝑥 and
level 𝐵. If 𝑥−𝐵< 0 , i.e., the household income lies under 𝐵, the state
compensates the difference ∆= |𝑥−𝐵| = 𝐵−𝑥. If 𝑥−𝐵> 0, the income tax
𝑡= 𝑟(𝑥−𝐵) is applied to the excess 𝑥−𝐵. the meaning of 𝐵 is in fact the
existence minimum which can be calculated or estimated if the economic
structure of the society is known, for example, as 𝐵= 𝑘∫
𝑥𝑓(𝑥)𝑑𝑥
∞
0
. Here we
assume that the first moment of the distribution exists, function 𝑓(𝑥) is
normalized and 𝑘< 1 is some coefficient depending as a functional on 𝑔(𝑢).
11.5
Naive Taxation Models
499
However, this calculation is too theoretical. In practice, the existence
minimum 𝐵 is determined by actual physiological needs such as food,
clothing, dwelling place, heating, mobility rather than by the aggregated
“human factor” - choice, assortment, fashion, prestige, advertising, etc.
Parameters 𝑟 and 𝐵 can be adjusted from time to time, depending on the
economic situation. No other payments to the country’s budget are necessary:
all social security contributions, state medical and unemployment
insurance282, payments to pension funds, perceived by the citizens just as flat
rate taxes, can be “absorbed” by the two parameters. In principle, almost any
contemporary tax system has a character 𝑡= 𝐹(𝑥−𝐵), where the function 𝐹
has some sophisticated piecewise behavior, and 𝐵 may take an ordered
number of values, 𝐵1, 𝐵2, …. One common model is a three-step system with
𝐵1, 𝐵2, 𝐵3 . Our linear model may be universal for all the taxpayers (no
exceptions and supplementary exemptions), which probably would irritate
the politicians. If, in certain crisis periods or when an acute budget deficit is
pending, two parameters are insufficient, one can, for example, use instead of
linear function a polynomial or the family of functions 𝐹(𝑥) = 𝑟(𝑥−𝐵)𝛼, with
𝛼 being the third tuning parameter. One must, however, remember that in this
case the tax burden is displaced to higher income and the most productive
portions of the population283, which inevitably reduces incentives to work
harder.
Like any other model, the above linear (affine) tax model has advantages
and drawbacks. Being extremely simple, it would drastically reduce the costs
of tax return processing and would save a lot of time. Moreover, it preserves
the existence minimum for each citizen. At the same time, the model can
implicitly stimulate poor households to increase their income. To reinforce
this effect, one may slightly correct the model introducing one more
parameter - the “enthusiasm coefficient” 0 < 𝛽(𝑡) < 1 applied as a
renormalizing factor to ∆: ∆→𝛽(𝑡)∆, with e.g., 𝛽(𝑡) = 1 −𝜇𝑡, 𝜇> 0 . One
more simplification is hidden in the fact that using such a model one can
neglect a separate retirement payment and provide the financing of pensions
through income tax. Everyone should receive her/his existence minimum
irrespective of age and health condition. Those who wish and are capable of
using supplementary private pension schemes receive more - merely
additive. But they have also to pay taxes. No privileged or elite groups are
282 As regards health systems, there are three basic models: state-controlled medical
insurance, private insurance, and health services paid by the state. There may be of course a
superposition of these “pure” models. State-controlled medical insurances do not actually differ
from budget organizations. Nothing changes for the patient if one covers her/his medical
expenses from the state budget. Naturally these payments should be executed directly to medical
service providers (doctors, healthcare personnel, etc.), without the patient’s participation.
Public-private partnership schemes may also be closer to optimal than state-controlled medical
insurances.
283 I try to evade using the term “population” recalling an involuntary definition of this term
which belonged to the Russian ex-Prime Minister, V. S. Chernomyrdin: Population is such a
people with whom one can do whatever one wants. V. S. Chernomyrdin was known for his
intuitive, unintentional aphoristicity.
500
Made in physics
allowed, everyone pays the same rate and no one receives special treatment
or has tax preferences. So simple may be the world.
This simplicity may be considered exactly the drawback of the model. The
matter is that there are some groups in society that perceive the simplicity
and transparency of financial flows as directed against their interests. To
these groups may belong not only misanthropic bureaucrats, but also social
romanticists and leftist world-improvers. Fewer regulations mean less
interference from the state part exercised by functionaries. Naturally, the
latter are not interested in tax simplification. Moreover, under a simplified tax
system the costs of processing tax returns paid to tax-collection officers
become drastically reduced, and the tax collection offices can be significantly
downsized. And what to do with the freed manpower? Moreover, the very fact
that the flat rate can be applied to all taxable income (and profits) without
exception or exemption exasperates many people.
There is also a risk that an increased part of an inactive population would
subsist for a long time on state transfers ensuring the existence level. The
above mentioned “enthusiasm coefficient” is intended to entice them into a
productive life. Another danger could be that employers would be tempted to
pay low wages in order to minimize payments to the state, but this should be
regulated by corporate taxes, in particular by an increased taxation of profits.
In this respect, personal income taxes become even more strongly coupled to
the corporate ones.
This two-parameter flat-tax model reminds us of the so-called negative
income tax (NIT) model suggested by one of the leading contemporary
economists, Milton Friedman, described in his well-known book “Capitalism
and Freedom” [169].
Currencies have a notorious tendency to plunge or soar in value because
of the herd mentality of markets, so it is crucial to manage currency
adjustments in an orderly fashion. As long as the decline of the US dollar is
smooth, there is no problem. But if the dollar plummets, the world faces a full-
blown currency crisis.
11.5
Naive Taxation Models
501
12 Conclusion and outlook
Now, one can justifiably ask: has anything been said at all? Some people might
say that the book is so uncommon and eclectic that it would be difficult to
trace the logic without being distracted by surrounding complexities. I still
hope that the described - maybe not always accurately - results and problems
would deserve being discussed anyway. And I also hope that even in those
numerous cases when I just mentioned some results and the great names
associated to them this content might be useful for the reader as an initial
piece of information about what has happened in the world.
One may observe that physics done on the level of journal articles is
different from physics presented in textbooks or treated in monographs.
People writing monographs or textbooks are trying to make the presentation
monolithic and the presented subject closed. In reality, no subject is closed,
and scientific disciplines, in particular physics, are in perpetual development,
conquering new areas and incorporating new results - new nodes, in
networking terminology. The networking approach, increasingly popular
these days, allows one to migrate between the physical and mathematical (in
my terminology, “physmatical”) subjects with a comparative easiness. Like
any network, a physmatical one is not closed, it develops and gradually
interconnects with other “proprietary” networks such as “biomedics”,
economics, astrophysics, “sociologics”, etc. To some extent, “physmatics”
reflects the scope of books one can see on the bookshelves of physicists. In
this book, I wanted to pay attention to synergetic interaction between
different fields of physics, mathematics, and other disciplines where
mathematical models, in particular those constructed according to a physical
pattern, are extensively used. I also attempted to bridge together two streams
of presenting physical and mathematical results: a conservative textbook or
monograph approach and a more daring article style. Bridging the gap
between these two manners of presentation requires a relatively complete
exposition of the basic concepts, therefore there are so many fragments in the
text, which a reader may find too elementary. In other cases, only a glimpse
at the non-core stuff was, to my mind, sufficient to convey the main ideas
underlying the model discussed, but some readers may find such exposition
too superficial. It is difficult to satisfy everyone, and I never wanted it. I did
not present only the standard material, but also tried to comment on some
open problems so that the interested reader may be induced to dig deeply in
more professional sources.
While preparing this manuscript my ambition was that the book would
stimulate young researchers to venture into interdisciplinary fields, which
are off the beaten track. The cross-fertilization process between such
interdisciplinary fields is essentially reduced to the triad: acquire everything
what is useful, neglect all unnecessary and add something of your own. As I
have several times mentioned in this book, the term “interdisciplinary” has
been seriously compromised due to heavy contamination of vague philosophy.
502
Conclusion and outlook
Bearing the latter consideration in mind, I still wanted to make a journey over
a variety of subjects rather than to settle down on any one of them. This
explains to some extent wordy and philosophical interludes of mine which
can irritate the reader anticipating a more focused approach. And I must once
again apologize for highly subjective personal recollections interspersed
through the manuscript: I hoped that occasional remembrances, however dim
they might be, could partly illuminate the scientific scene of the Soviet years,
the paradoxical period when physical and mathematical sciences could
flourish against the ruinous background of nearly everything.
Physicists and mathematicians are typically those who strive to
“crystallize” the problem and to achieve maximal lucidity before converting
their considerations into published papers. I concede that there are many
places in this book that would not scrape through the clarity test; moreover,
there are fragments that may seem “a mist” to an occasional reader. Broken
and not clearly cut presentation of material is untypical of scientific
publications, but, firstly, this book is not, strictly speaking, a scientific
publication as it does not fit in with any standard for a monograph or a
textbook and, secondly, I think this mosaic, non-monolithic style is more
natural for a human perception. A rare reader nowadays meticulously studies
the book from the first page to the last in the same methodical order as it was
arranged by the author, editor and publisher.
When writing this book, I did not think it would be fitting to ruminate
about the tiniest wrinkles in physical models, for example on the level of
details that were discussed last month in arXiv. Instead, I took the manner to
jump over some details which I considered not necessarily trifling but
distracting, speaking mostly about “the forest behind the trees”, i.e., the most
important models in physics. I have noticed that many physical questions may
be fundamental but vague, which marks a certain philosophical component in
physics. Personally, I am inclined to doubt that philosophy could answer
physical questions, yet I think that vague questions can provoke imagination
and diminish the dogmatic aplomb so typical of many science professionals.
I recall a statement ascribed to G. H. Hardy and cited by an outstanding
physicist, mathematician and writer Freeman Dyson 284 that young men
should prove theorems, old men should write books [231]. Proving theorems
and obtaining other hard scientific results projected into the future is indeed
a game for the young. The ratio of a scientific to a philosophical component in
a person’s work rapidly diminishes with age after a certain threshold simply
because young people are capable of more trial-and-error attempts. As I may
be considered old, my motivation, while writing this book, was primarily to
preserve the past. Books are in general intended to preserve the past.
Unfortunately, I have seen many times how the worlds of the past disappear
284 It is a shame, by the way, that F. Dyson did not receive the Nobel Prize. There are
many outstanding scientists who were overlooked by the Nobel Committee (more
exactly, by the Royal Swedish Academy of Sciences): G. Gamow, J. W. Gibbs, L. Meitner,
D. Mendeleev, nowadays F. Dyson and L. V. Keldysh, perhaps Yu. M. Kagan. The Nobel
Committee often makes strange - often politically motivated - decisions. See, e.g.,
http://en.wikipedia.org/wiki/Nobel_Prize_controversies.
11.5
Naive Taxation Models
503
together with their human carriers. And I shall be happy if some person
considerably younger than the author would refer to this book that it has
something interesting to say. Even if the reference were in the following well-
known form: there are some new and true things in this book, but the true
things are not new, and the new ones are not true.
Back to this issue of preserving the past. In this book, I wanted to discern
a timeline of 20th century physics and associated it mathematics which may
serve as a backbone for the entire physmatical network. Of course, it was only
a crude attempt. However, I do not think such an approach is hopeless.
Networking reasoning is a good instrument to unify diverse environments.
Modern physics began with Einstein’s attempt to reconcile electrodynamics,
mechanics, and thermodynamics in 1905 and his later endeavor to unify
special relativity and the Newtonian theory of gravitation. In a more general
social context, the synthesizing approach of Einstein meant a convergent
scientific change - a retreat from austerity concepts of Max Weber [9] who
insisted on “Zweckrational” action rejecting all unnecessary circumstances.
Einstein was rather thinking in the Renaissance spirit of more lenient
“communicative rationality” program, encouraging critical reassessment of
“everybody knows that” concepts and the establishment of mutual
understanding between diverse scientific communities. As a consequence,
Einstein’s program proved to be very successful as compared to rival
endeavors not because it could explain more “facts” or was more powerful
mathematically. It was better than rival programs probably because it
provided a wide basis for interpenetration and communication between
several main paradigms of 19th century physics. And in the long run, of
course, Einstein’s theory culminated in multiple empirical successes.
What is not covered in the book?
Since the manuscript is already too long, I have omitted many important
subjects such as, among applied mathematical topics, data analysis, the finite
element method, homotopy, generalized functions, projective spaces, etc., and
within the physical portion, anything about astrophysics, bosonization, Fermi
and Luttinger liquids, models of a ferromagnetic, quantum Hall effect, surface
science and low-dimensional physics, elementary particle physics, quark
models, spin systems (with spintronics applications), and many, many others.
All these subjects are treated by a vast number of highly competent authors,
and I, having only a superficial - not working - knowledge of them could
provide a mere compilation of authoritative sources. I was not bold enough to
exhibit my dilettantism, for the latter is always conspicuous when a person
ventures to write about the subjects she/he does not have a working
knowledge of. Besides, the number of sources covering the omitted topics is
enormous. Take, for example, the finite element method. It has been the
subject of so many books that it would be hard to select any pertinent
references.
For me, it was a serious challenge, while putting this book together, to
decide what to include: where exactly lie the boundaries of the so-called
“physmatics”? Or, in practical terms, what should be considered the standard
repertoire of the indispensable education of a physicist, and what can be
504
Conclusion and outlook
painlessly omitted? Can one find a common scheme to incorporate dozens of
seemingly different research activities? Probably, only a science historian
looking backwards in, say, thirty years from now can conclude whether the
networking view of physical-mathematical disciplines was at all justifiable.
In connection to a tremendous amount of topics and sources, I hope that
in the near future people can utilize computers to understand the relational
structures of physmatics, by constructing clever algorithms classifying its
inherent content. One can, for example, invent multilevel or graph clustering
algorithms which would predict physmatical complexes in large-scale
dedicated information networks. In this direction, visual representations of
complex multivariate information contained in physical and mathematical
sciences may become feasible, which allows one to communicate this
information synthetically (and not as traditional dispersed fragments) with
ease and high precision. The designed physmatical algorithms might produce
groups of high-quality nodes - nowadays this function is partly implemented
by forums devoted to specific topics. Eventually such nodes can be merged
before applying clustering algorithms. Then the clustering results may be
passed to the entire physmatical network, with a certain additional fine
tuning. Naturally, there will be some leftover nodes outside the mainstream
(though multicore) knowledge network. Such leftover nodes should not be
totally neglected or lost, since human representation of what is important and
what is not in physics changes with time. Besides, the requirements of fashion
are quite influential 285 . The leftover nodes may be added back to the
mainstream with the help of multilevel algorithms accounting for hierarchy
of
nodes,
supernodes
and
clusters.
This
computer-aided
human
communication in physical and mathematical sciences would make the
results produced by them much more observable than today. But it is only a
vision, of course.
285 For instance, string theory is of fashion nowadays, although it obviously lacks any
corroborative evidence. On the contrary, traditional nuclear physics is largely out of fashion.
Nanotechnology bordering on nano-mythology has become extremely fashionable in the
beginning of the 2000s, as well as highly hypothetical quantum computing. Nonlinear science had
been almost totally neglected before the 1960s-1970s, but in the 1980s it enjoyed enormous
popularity. There are a lot of sociological effects and ideology in the advancement of science.
11.5
Naive Taxation Models
505
13 Bibliography
[1] Samarskii, A. A., Mikhailov, A. P. Matematicheskoe Modelirovanie. Nauka,
Moscow 1997. Translation: Principles of Mathematical Modeling: Ideas,
Methods, Examples, CRC Press, 2002
[2] Basmadjian, Diran. Mathematical Modeling of Physical Systems: An
Introduction. Oxford University Press, 2002
[3] Tikhonov, A. N., Samarski, A. A. Equations of Mathematical Physics.
Pergamon Press, N. Y. 1963
[4] Morse, P., Feshbach, H. Methods of Theoretical Physics. McGraw-Hill, N.
Y. 1953
[5] Jackson, J. D. Classical Electrodynamics. Academic Press, N. Y., 1998
[6] Stratton, J. A. Electromagnetic Theory. IEEE Press, Piscataway, NJ, 2007
(new edition)
[7] Courant, R., Hilbert, D. Methoden der Mathematischen Physik.
SpringerVerlag, Berlin-Heidelberg, 1968
[8] Belotserkovskii, O. M., Guschin, V. A. Matematicheskoe Modelirovanie:
Problemy i Rezultaty. Nauka, Moscow 2003
[9] Weber, Max. Die protestantische Ethik und der ’Geist’ des Kapitalismus.
Paul Siebeck, Tübingen, 1986
[10] Aris, Rutherford. Mathematical Modeling Techniques. Dover, N. Y. 1994
[11] Surdin V. G. Vestnik Russian Academy of Sciences, No. 11, 1990 (in
Russian); Nanninga, R. The Astrotest. Correlation, 15(2),1996/97, p. 14-
20
[12] Braginski, V. B., Panov, V. I. JETP 34, 463 (1972)
[13] Myshkis, A. D. Elementy Teorii Matematicheskikh Modelei. Nauka,
Moscow 1994
[14] Arnold, V. I. Mathematical Methods of Classical Mechanics. Springer-
Verlag, LLC, 2nd ed., New York 1989
[15] Arnold, V. I. Ordinary Differential Equations. Translated from the Russian
by R. A. Silverman, MIT Press, Cambridge, Massachusetts, 1978
[16] Arnold, V. I., Avez, A. Ergodic Problems of Classical Mechanics.
AddisonWesley, Redwood, 1989
506
Bibliography
[17] Solari, H. G., Natiello, M. A., Mindlin, G. B. Nonlinear Dynamics. IoP
Publishing, London, 1996
[18] Weisskopf, V. F. Foreword to the book by W. Pauli, “Wave Mechanics”,
Dover, N. Y., 2000
[19] Coddington, E. A., Levinson, N. Theory of differential equations.
McGrawHill, New York, 1955
[20] Dirac, P. A. M. Principles of Quantum Mechanics. Oxford University Press,
London, 1958
[21] Gantmacher, F. R. Lectures on Analytical Mechanics. MIR, Moscow, 1970
[22] Gantmacher, F. R. Matrix Theory. Chelsea Publishing (AMS), N. Y.,1977
[23] Landau, L. D., Lifshitz, E. M. Mechanics. Pergamon Press, Oxford, 1976
[24] Landau, L. D., Lifshitz, E. M. Statistical Physics. Butterworth-Heinemann,
3rd edition, Oxford, 1980
[25] Lifshitz, E. M., Pitaevski, L. P. Physical Kinetics. Pergamon Press, Oxford,
1981
[26] Bronstein M., Lafaille S. Solutions of linear ordinary differential
equations in terms of special functions, Proceedings of the 2002
international symposium on Symbolic and algebraic computation, July
07-10, 2002, Lille, France, 23–28
[27] Cheb-Terrab, E. S., von Blow K.: A Computational approach for the
analytical solving of partial differential equations, Computer Physics
Communications, 90, 102–116 (1995)
[28] Schwarz, A. S. Topology for Physicists. Springer-Verlag, Berlin-
Heidelberg, 1994
[29] Erdelyi, A., Magnus, W., Oberhettinger, F., Tricomi, F. G.: Higher
Transcendental Functions, New York: McGraw-Hill Book Company, Inc.,
1953
[30] Gonis, A. Theory, Modeling, and Computation in Materials Science, LLNL,
Livermore, CA, 1993
[31] Kovacic, J. J. An algorithm for solving second order linear homogenous
differential equations. J. Symbolic Computation, 2(1), 3–43 (1986)
[32] Singer M. F., Liouvillian solutions of linear differential equations with
Liouvillian coefficients. J. Symbolic Computation, 11(3), 251–273 (1991)
[33] Theory and modeling in nanoscience. Report of the May 10-11, 2002
Workshop, DOE U. S. LBNL-50954
11.5
Naive Taxation Models
507
[34] Theory, simulation, and modeling in nanoscience, LLNL Nanoscience
Home Page http://www.llnl.gov/nanoscience
[35] Yoffe, A. D.: Low-dimensional systems: quantum size effects and
electronic properties of semiconductor microcristallities (zero-
dimensional systems) and some quasi-two-dimensional systems, Adv.
Physics, Vol. 51, 2002
[36] Yoffe, A. D.: Semiconductor quantum dots and related systems:
electronic,
optical,
luminescence
and
related
properties
of
lowdimensional systems, Adv. Physics, Vol. 50, 2001
[37] Heaviside, O. Phil. Trans. Royal Soc. (London) A 183, 423 (1893)
[38] Heaviside, O. On the electromagnetic effects due to the motion of
electrification through a dielectric. Phil. Mag. 27, 324-339 (1889)
[39] Landau, L. D., Lifshitz, E. M. The Classical Theory of Fields. Pergamon
Press, Oxford, 1975
[40] Kopaev, Yu. V. High-temperature superconductivity models. Physics -
Uspekhi, 45(6), 655 - 659 (2002)
[41] Gulyaev Yu. V., Godik E. E. The Physical Fields of Biological Objects.
Vestnik AN USSR, No. 8 (1983); Godik E. E., Gulyaev Yu. V. Functional
Imaging of the Human Body. IEEE Engineering in Medicine and Biology. 10,
No.4 (1991)
[42] Bolotovsky, B. M. Oliver Heaviside. Nauka, Moscow, 1985 (in Russian)
[43] Kronig, R. Zur Theorie der Supraleitfähigkeit, Zeischrift für Physik A78,
No. 11-12, 744-750 (1932)
[44] Feynman, R. P. A New Approach to Quantum Theory, ed. L. M. Brown.
World Scientific, Singapore 2005; Feynman, R. P., Hibbs, A. R. Quantum
Mechanics and Path Integrals. McGraw Hill, New York 1965
[45] Feynman, R. P., Vernon, F. L. The theory of a general quantum system
interacting with a linear dissipative system. Ann. Phys. (New York) 24,
118173 (1963)
[46] Pauli, W. In: Handbuch der Physik, ed. S. Flügge, vol. 5, Springer, Berlin
(1958).
[47] Ginzburg, V. L., Landau, L. D. Zh. Eksp. Teor. Fiz. 20, 1064 (1950). English
Translation: L. D. Landau, Articles (Oxford: Pergamon Press, 1965), p.
546
[48] Bardeen, J., Cooper, L., Schrieffer, J. Microscopic Theory of
Superconductivity, Phys. Rev. 106, 162-164 (1957); Theory of
Superconductivity, Phys. Rev. 108, 1175-1204 (1957).
508
Bibliography
[49] Kleinert, H. Path Integrals in Quantum Mechanics, Statistics, Polymer
Physics, and Financial Markets. World Scientific, Singapore 2003
[50] Gorelik, G. E. Fizika universitetskaia i akademicheskaia (The university
physics vs. the academic physics), Voprosy istorii estestvoznaniia i
techniki, no. 2, 1991; Filosofski voprosy sovremennoi fiziki, Academy of
Sciences USSR, Moscow, 1952 (in Russian)
[51] Gorelik, G. E. The creation of the “Course of Theoretical Physics”, Priroda,
No.
8,
2005;
http://vivovoco.rsl.ru/vv/journal/nature/0805/gorelik.htm
(in
Russian)
[52] DeBroglie, L. Compt. Rend. v.177, 507-548 (1923); v.179, 39 (1924)
[53] McTaggart, J. E. The Unreality of Time. A Quarterly Review of Psychology
and Philosophy v. 17,456-473 (1908)
[54] Goertzel,
B.
On
the
Physics
and
Phenomenology
of
Time,
http://www.goertzel.org/papers/timepap.html
[55] Prigogine, I. Nonlinear Science and the Laws of Nature. International
Journal of Bifurcation and Chaos, v.7, 1917-1926 (1997); From Poincarés
divergences to quantum mechanics with broken time symmetry. Z. fr
Naturforschung, v. 52a, 37-45 (1997)
[56] Abarbanel, H. D. I., Rabinovich, M. I., Sushchik, M. M. Introduction to
Nonlinear Dynamics for Physicists. World Scientific, Singapore, 1996
[57] Haken, H. Advanced Synergetics. Springer, Berlin, 1983
[58] Reichenbach, H. The Direction of Time. Univ. of California Press,
Berkeley, 1956
[59] Davies, P. C. W. The Physics of Time Asymmetry. Univ. of California Press,
Berkeley, 1974
[60] Hoover, W. G. Time Reversibility, Computer Simulation, and Chaos.
Advanced Series in Nonlinear Dynamics 13. World Scientific, Singapore-
River Edge, N. J., 1999
[61] Krasnikov, S. V. Causality violation and paradoxes. Phys. Rev. D 55, 3427
- 3430 (1997)
[62] Bachelot, A. Global properties of the wave equation on non-globally
hyperbolic manifolds. J. des Mathématiques Pures et Appliqués 81(1),
35-65 (2002)
11.5
Naive Taxation Models
509
[63] Angelopoulos, A. et al. (CPLEAR Collaboration). First direct observation
of time-reversal non-invariance in the neutral-kaon system, Phys. Lett.
B444, 43 (1998)
[64] Newton, R. Scattering Theory of Waves and Particles. McGraw-Hill, New
York, 1966
[65] Newton, R. Thinking about Physics. Princeton University Press,
Princeton Oxford, 2000
[66] Lemaître, G. C. R. Acad. Sci. Paris 196 (1933), 903; Ann. Soc. Sci. Brussels
A 53 (1933), 85
[67] Hartle, J. and Hawking, S. Wave Function Of The Universe. Phys. Rev. D28,
2960-2975 (1983)
[68] Kac, M. Probability and Related Topics in Physical Sciences. Interscience
Publishers, London-New York, 1957
[69] Boltzmann, L. Lectures on Gas Theory, University of California Press,
Berkeley, 1964 (Translated by S. Brush)
[70] Vilenkin, N. Ya. Special Functions and Theory of Group Representations,
Nauka, Moscow,1965; Translation: Math. Monographs, Vol. 22, Amer.
Math. Soc, Providence, R. I.,1968.
[71] Gibbs, J. W. Scientific Papers of J Willard Gibbs, 2 vols. Bumstead, H. A.,
and Van Name, R. G., eds. Ox Bow Press, Woodbridge (Conn), 1961, 1993
[72] Silin, V. P. Introduction to the kinetic theory of gases , Moscow, 1971 (In
Russian).
[73] Verhulst, F. Nonlinear Differential Equations and Dynamical Systems.
Springer-Verlag, Berlin-Heidelberg, 1990
[74] Smolin, L. The Trouble with Physics. Houghton Mifflin, New York, 2006
[75] Physical Origins of Time Asymmetry. Ed. by J. J. Halliwell, J.
PrezMercader, W. H. Zurek. Cambridge Univ. Press, Cambridge (UK),
2006.
[76] Zeh, H. D. The Physical Basis of the Direction of Time. Springer,
BerlinHeidelberg, 1999.
[77] Unruh, W. G. Notes on black hole evaporation. Phys. Rev. D 14, 870
(1976)
[78] Hawking, S.W. A Brief History of Time. Bantam Books, London, 1988
[79] Galapon, E. Paulis Theorem and Quantum Canonical Pairs:The
Consistency Of a Bounded, Self-Adjoint Time Operator Canonically
510
Bibliography
Conjugate to a Hamiltonian with Non-empty Point Spectrum. Proc. R. Soc.
Lond. A v. 458, 451-472(2002).
[80] Braunss, G. Mathematics of Noncanonical Quantum Theory. Commun.
Math. Phys. v. 45,159165 (1975).
[81] Steinhardt, P. J., Turok, N. Endless Universe: Beyond the Big Bang.
Doubleday, New York, 2007.
[82] Hawking, S. W. Arrow of time in cosmology. Phys. Rev. D32, 2489-2495
(1985)
[83] Hawking, S. W. Particle creation by black holes. Comm. Math. Phys. v. 43,
199-220 (1975)
[84] Landau, L. D., Lifshitz, E. M. Quantum Mechanics: Non-relativistic Theory.
Pergamon Press, London, 1977
[85] Landau, L. D. and Lifshitz, E. M. Fluid Mechanics. Pergamon Press,
London, 1987
[86] Adler, S. L. Quaternionic Quantum Mechanics and Quantum Fields.
Oxford University Press, Oxford, 1995
[87] Bekenstein, J. Black holes and entropy. Phys. Rev. D7, 2333-2346 (1973
[88] Penrose, R. The Road to Reality. Vintage Books, London 2006.
[89] Penrose, R. Singularities and time asymmetry. In: General Relativity. An
Einstein centenary survey. Hawking, S. W. and Israel, W. (eds.),
Cambridge University Press, Cambridge (U. K.) 1979.
[90] Ruelle, D. Dynamical Zeta Functions for Piecewise Monotone Maps of
the Interval. AMS, New York, 2006
[91] Einstein, A. On a stationary system with spherical symmetry consisting
of many gravitating masses. Annals of Mathematics, , vol 40, No 4, pp
922-936 (1939)
[92] Schulman, L. S. Time’s Arrow and Quantum Measurement. Cambridge
University Press, Cambridge (U. K.) 1997
[93] Linde, A. D. Inflation and Quantum Cosmology, Academic Press, Boston
1990; Linde, A. D. Linde, D. A., Mezhlumian, A. From the Big Bang theory
to the theory of a stationary universe. Phys. Rev. D 49, 1783 (1994)
[94] Linde, A. D. Particle Physics and Inflationary Cosmology. Harwood
Academic, Chur 1990.
[95] Foley, J. D., Feiner, S. K., Hughes, J. F., van Dam, A. Computer Graphics:
Principles and Practice. Addison-Wesley, Boston 1990.
11.5
Naive Taxation Models
511
[96] ’t Hooft, G. Magnetic monopoles in unified gauge theories. Nucl. Phys.
B79, 276-284 (1974)
[97] Polyakov, A. M. Particle spectrum in quantum field theory. Pisma Zh.
ksp. Teor. Fiz. 20, 430-432 (1974) [JETP Lett. 20, 194-196 (1974)]
[98] Gell-Mann,
M.
Hartle,
J.
Quasiclassical
coarse
graining
and
thermodynamic entropy. Phys. Rev. A 76, 022104 (2007)
[99] Sakharov, A. D. Cosmological models of the universe with the time-
arrow inversion. ZhETF 79, 689-693 (1980); translated in JETP Lett. 52,
349-351 (1980)
[100] Baz’, A. I., Zel’dovich, Ya. B., Perelomov, A. M. Rasseyanie, Reaktsii i
Raspady
v
Nerelyativistskoi
Kvantovoi
Mekhanike
(Scattering,Reactions
and
Decays
in
Nonrelativistic
Quantum
Mechanics) 2nd ed. (Moscow: Nauka, 1971) [Translated into English 1st
ed. (Jerusalem: Israel Program for Scientific Translations, 1969)]
[101] Aharonov, Y., Bohm D. Significance of electromagnetic potentials in
quantum theory. Phys. Rev. 115, 485-491 (1959); Further
considerations on electromagnetic potentials in the quantum theory.
Phys. Rev. 123: 1511-1524 (1961)
[102] Kibble, T. W. B. Geometrization of quantum mechanics. Comm. Math.
Phys. 65(2), 189-201 (1979)
[103] Frankel, T. The Geometry of Physics. An Introduction. Cambridge
University Press, Cambridge (U. K.), 1998
[104] Nakahara, M. Geometry, Topology and Physics. Taylor and Francis, LLC,
2003
[105] Altshuler, B. L., Aronov, A. G., Spivak, B. Z. The Aharonov-Bohm effect in
disordered conductors. Pisma Zh. ksp. Teor. Fiz. 33, 101-103 (1974)
[JETP Lett. 33, 94-96 (1974)]
[106] Born, M., Wolf, E. Principles of Optics. Cambridge University Press,
Cambridge (U. K.), 1999
[107] Joos, E., Zeh, H. D. The Emergence of Classical Properties through
Interaction with the Environment, Zeitschrift fr Physik B59, 223-243
(1985)
[108] Giulini, D., Joos, E., Kiefer, C., Kupsch, J., Stamatescu, I.-O. and Zeh, H. D.
Decoherence and the Appearance of a Classical World in Quantum
Theory. Springer, Berlin 1996
[109] Ziman, J. Public Knowledge. The Social Dimension of Science.
Cambridge University Press, Cambridge, 1968
512
Bibliography
[110] Zurek, W. H., Decoherence and the transition from quantum to classical,
Phys. Today 44, No. 10, 36-44 (1991)
[111] Zurek, W. H., Habib, S., and Paz, J.-P. Phys. Rev. Lett. 70, 11871190
(1993); Zurek, W. H., and Paz, J.-P. Phys. Rev. Lett. 72, 25082511 (1994)
[112] Zurek, W. H. Decoherence, einselection, and the quantum origins of the
classical. Reviews of Modern Physics 75, 715-775 (2003)
[113] Caldeira, A. O., Leggett, A. J. Path integral approach to quantum
Brownian motion. Physica (Amsterdam) 121A, 587 (1983)
[114] Hu, B. L., J. Paz, J., Zhang, Y. Quantum Brownian-motion in a general
environment-exact master equation with nonlocal dissipation and
colored noise. Phys. Rev. D45, 2843 (1992); Quantum Brownian-motion
in a general environment. 2. nonlinear coupling and perturbative
approach Phys. Rev. D47, 1576 (1993).
[115] Doran, C., Lasenby, A. Geometric Algebra for Physicists. Cambridge
University Press, Cambridge (U. K.), 2003
[116] Yaffe, L. G. Large N limits as classical mechanics. Reviews of Mod. Phys.
54, 407-435 (1982)
[117] Peres, A. Quantum Theory: Concepts and Methods. Kluwer Academic,
Dordrecht/Boston, 1993
[118] Kolomenski, A. A., Lebedev, A. N. Theory of Cyclic Accelerators. North
Holland, Amsterdam, 1966
[119] Heisenberg, W. Across the frontiers.Translated into English by Peter
Heath. Ox Bow Press, Woodbridge, CT., 1990
[120] Feynman, R. Simulating physics with computers, International Journal
of Theoretical Physics 21, 467-488 (1982)
[121] Deutsch, D. Quantum theory, the Church Turing principle, and the
universal quantum computer. Proc. Roy. Soc. Lond. A 400, 97-117
(1985)
[122] Deutsch, D. Quantum computation. Physics World, 5, 5761 (1992)
[123] Albert, D. On quantum mechanical automata. Phys. Lett. A 98, 249-252
(1983)
[124] Heisenberg, W. Physicist’s conception of nature. Greenwood Press,
Westport, CT., 1970
[125] Heisenberg, W. Across the frontiers. Harper and Row, New York, 1973
11.5
Naive Taxation Models
513
[126] Schrödinger, E. The interpretation of Quantum Mechanics. Ox Bow
Press, Woodbridge, CT., 1995
[127] Schrödinger, E. My View of the World. Ox Bow Press, Woodbridge, CT.
1983
[128] Schrödinger, E. What is Life? Cambridge University Press, Cambridge,
2002
[129] Schrödinger, E. Der stetige bergang von der Mikro- zur Makromechanik.
Naturwissenschaften, Bd. 14, H. 28, S. 664-666 (1926)
[130] Balachandran, A. P., Marmo, G., Skagerstam, B.-S., Stern, A. Gauge
Symmetries and Fibre Bundles. Application to Particle Dynamics,
Lecture Notes in Physics 188, Springer, 1983 (and references therein)
[131] Zhang, Y. Z. Special Relativity and its Experimental Foundations. World
Scientific, Singapore, 1997
[132] Todorov, I. Einstein and Hilbert: The Creation of General Relativity,
http://arxiv.org/abs/physics/0504179;
Earman,
J.,
Glymour,
C.
Einstein and Hilbert: Two Months in the History of General Relativity,
Archive for History of Exact Sciences v. 19, No. 3, 291 (1978); Logunov
A. A., Mestvirishvili, M. A., Petrov, V. A. How were discovered the
Hilbert-Einstein equations?, Russian Physics - Uspekhi v. 174, No. 6
(2004)
[133] Logunov, A. A., Loskutov, Yu. M., Mestvirishvili, M. A. The relativistic
theory of gravitation and its consequences. Sov. Phys. Uspekhi 31(7),
581596 (1988)
[134] Sen, A. Strong-weak coupling duality in four dimensional string theory,
Int. J. Mod. Phys. A9, 3707-3750 (1994); Electric-magnetic duality in
string theory, Nucl. Phys. B404, 109-126 (1993); Unification of string
dualities, Nucl. Phys. Proc.Suppl. 58, 5-19 (1997)
[135] Gauntlet, J. P., Harvey, J. A., Liu, J. T. Magnetic monopoles in string
theory. Nucl. Phys. B409, 363-381 (1993), see also: Gregory, R., Harvey,
J. A., Moore, G. Unwinding strings and T-duality of Kaluza-Klein and H-
Monopoles Adv. Theor. Math. Phys. 1 (1997) 283-297
[136] Pais, A. Niels Bohrs Times in Physics, Philosophy, and Polity Clarendon
Press, Oxford, 1991
[137] Jammer, M. The Conceptual Development of Quantum Mechanics.
McGraw-Hill, New York, 1966
[138] Feynman, R. Leighton, R., Sands, M. Feynman Lectures on Physics.
Addison Wesley, Reading (MA), 1963
514
Bibliography
[139] Dirac, P. A. M. Lectures on Quantum Mechanics. Dover, N. Y., 2001
[140] Davydov, A. S. Quantum Mechanics. Pergamon Press, Oxford, New York,
1976
[141] Popper, K. The Logic of Scientific Discovery. Basic Books, New York,
1959
[142] Toynbee, A. A Study of History. Oxford University Press, Oxford, 1961
[143] Planck, M. Collected Papers. Nauka, Moscow, 1975 (in Russian); On
improvement of the Wien formula for spectral distribution. Verhandl.
Deutsch. Phys. Gesellschaft, Bd. 2, S. 202; To the theory of energy
distribution in the normal spectrum. Bd. 2, S. 237, Berlin, 1900
[144] Stone, M. On one-parameter unitary group in Hilbert space. Annals of
Mathematics, 33, 643-648 (1932)
[145] Fushich, V. I., Nikitin, A. G. Symmetry of Equations in Quantum
Mechanics. Nauka, Moscow, 1990 ; Symmetry of Maxwell’s Equations.
Naukova dumka, Kiev, 1983 (in Russian)
[146] Hirsch, M. W. The dynamical systems approach to differential
equations. Bull. Amer. Math. Soc. , 11, 164 (1984)
[147] Sinai, Ya. G. (ed.) Dynamical Systems, vol. 1-3. VINITI, Moscow, 1985 (in
Russian)
[148] Rukhadze, A. A. Sobytiia i Liudi (Events and People) (1948-1991). Star,
Tula, 2000 (in Russian)
[149] Wu, Zuo-Bing, Zeng, Jin-Yan. Dynamical symmetry of screened Coulomb
potential and isotropic harmonic oscillator. Phys. Rev. A 62, 032509,
(2000)
[150] Holas, A., March,N. H. How many vector constants of motion exist for a
particle moving in a central potential? J. Phys. A: Math. Gen. 27, 2615-
2617 (1994)
[151] Pankratov, S. G. Coherent electromagnetic radiation of a modulated
beam of charged particles. Physics Letters A59(5), 338-340 (1976)
[152] Wigner, E. P. Über die Operation der Zeitumkehr in der
Quantenmechanik. Akad. Wiss. Göttingen, Math-Physik, 546 (1932)
[153] Shilov, G. E. Introduction to the Theory of Linear spaces. Dover
Publications, N. Y., 1974
[154] Rashevski, P. K. Riemannian Geometry and Tensor Analysis. Moscow,
Nauka, 1967
11.5
Naive Taxation Models
515
[155] Kostrikin, A. I, Manin, Yu. I. Linear Algebra and Geometry. Gordon and
Breach Science Pub, London - New York, 1989
[156] Perelomov, A. M. Generalized Coherent States and Their Applications.
Springer, Berlin, 1985
[157] Ehrenfest, P. Bemerkung uber die angenaherte Gultigkeit der
klassichen Mechanik innerhalb der Quantunmechanik (Note on
approximate validity of classical mechanics). Z. Phys. 45(7/8), 455-457
(1927)
[158] Oas, G. On the use of relativistic mass in various published works. arXiv:
physics/0504111.
[159] Topping, P. Lectures on the Ricci Flow. Cambridge University Press,
Cambridge (U. K.), 2006
[160] Kalashnikov, N. P., Pankratov, S. G. Coherent excitation of atoms by the
periodic field of crystal lattice. Sov. Phys. Solid State 16, 542 - 545
(1974)
[161] Andrieux D, Gaspard P. Fluctuation theorems and the nonequilibrium
thermodynamics of molecular motors. Phys. Rev. E 74, 011906 (2006)
[162] Jarzynski C, Wojcik D. Classical and quantum fluctuation theorems for
heat exchange. Phys. Rev. Lett. 92, 230602 (2004)
[163] Evans D. J., Searles D. Causality, response theory, and the second law of
thermodynamics. Phys. Rev. E 53, 5808-5815 (1996)
[164] Ballentine, L. E., Yang, Y., Zibin, J. P. Inadequacy of Ehrenfest’s theorem
to characterize the classical regime. Phys. Rev. A 50, 2854-2859 (1994)
[165] Petrina, D. Ya., Gerasimenko, V. I., Malyshev, P. V. Mathematical
foundations of classical statistical mechanics. Continuous systems.
Gordon and Breach, Newark (NJ), 1989
[166] Andreev, A. F., Lifshitz, I. M. Quantum theory of defects in crystals. JETP
29(6), 1107-1118 (1969); Zh. Eksp. Teor. Fiz. 56, 2057-2068 (1969)
[167] Vizgin,
V.
P.
Physics
in
Moscow.
http://www.ihst.ru/projects/sohist/books/moskva/185211.pdf
[168] Anderson, P. W., Arrow, K. J, Pines, D., eds. The Economy as an Evolving
Complex System. Addison-Wesley, Redwood, 1988
[169] Friedman, M. Capitalism and Freedom. University of Chicago Press,
1962
[170] Yeor, B. Eurabia: The Euro-Arab Axis. Fairleigh Dickinson University
Press,
Madison,
N.
J.,
2005;
see
also
http://en.wikipedia.org/wiki/Eurabia
516
Bibliography
[171] Snow, C. P. The Two Cultures. Cambridge University Press, Cambridge
(U. K.), 1993
[172] Kirchhof, P. Der sanfte Verlust der Freiheit. Carl Hanser Verlag,
München, 2004
[173] Busemeyer, J. R., & Diederich, A. (2010). Cognitive modeling: Sage.
[174] Sairamya, N. J., Susmitha, L., Thomas George, S., & Subathra, M. S. P.
(2019). Chapter 12 - Hybrid Approach for Classification of
Electroencephalographic Signals Using Time–Frequency Images With
Wavelets and Texture Features. In D. J. Hemanth, D. Gupta & V. Emilia
Balas (Eds.), Intelligent Data Analysis for Biomedical Applications (pp.
253-273): Academic Press.
[175] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. [Insight].
Nature, 521(7553), 436-444. doi: 10.1038/nature14539
[176] Bresnick, J. Machine Learning Algorithm Outperforms Cardiologists
Reading
EKGs
(healthitanalytics.com)
https://healthitanalytics.com/news/machine-learning-algorithm-
outperforms-cardiologists-reading-ekgs
[177] Morris, P. D., Ryan D Fau - Morton, A. C., Morton Ac Fau - Lycett, R.,
Lycett R Fau - Lawford, P. V., Lawford Pv Fau - Hose, D. R., Hose Dr Fau
- Gunn, J. P., & Gunn, J. P. Virtual fractional flow reserve from coronary
angiography: modeling the significance of coronary lesions: results
from the VIRTU-1 (VIRTUal Fractional Flow Reserve From Coronary
Angiography) study. (1876-7605 (Electronic)).
[178] Dawkins, R. The God Delusion. Transworld Publishers, London, 2007
[179] Matthews, D. A. (1999). The faith factor: Proof of the healing power of
prayer: Penguin.
[180] Newton, I. The Mathematical Principles of Natural Philosophy (1846)
Book
II,
Section
IX.
Translated
by
Andrew
Motte,
https://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natu
ral_Philosophy_(1846)/BookII-IX
[181] Fischer, H. P. (2008). Mathematical modeling of complex biological
systems: from parts lists to understanding systems behavior. Alcohol
research & health: the journal of the National Institute on Alcohol Abuse
and Alcoholism, 31(1), 49-59.
[182] https://mhealthfairview.org
[183] Dirac, P. A. M. Quantized singularities in the electromagnetic field.
Proc. Roy. Soc. London A 133, 60-72, 1931
11.5
Naive Taxation Models
517
[184] Veneziano, G. Construction of a crossing-symmetric, Regge behaved
amplitude for linearly-rising trajectories. Nuovo Cimento, 57A, 190
(1968)
[185] G. Veneziano, “Duality and Dual Models”, in Proceedings 15th Int.
Conference on High-Energy Physics, Kiev, 1970 (Naukova Dumka, Kiev,
1972, p. 437)
[186] Popper, K. The Open Society and Its Enemies. Routledge, London,
1945
[187] Dubrovin, B. A., Fomenko, A. T., Novikov, S. P. Modern Geometry -
Methods and Applications. Translated by Robert G. Burns, Part 1.
Springer-Verlag, Berlin-Heidelberg-New York, 1984
[188] Rashevski, P. K. Riemannian Geometry and Tensor Analysis. Nauka,
Moscow, 1967
[189] Hawking, S., Ellis, J. The Large-Scale Structure of Space-Time.
Cambridge University Press, Cambridge (U. K.), 1973
[190] Olver, P. J. Applications of Lie Groups to Differential Equations.
Springer-Verlag, N. Y., 1993
[191] Dieudonné, J., ed. Abrégé d’histoire des mathématiques 1700-1900.
Hermann, Paris, 1978
[192] Eisenhart, L. P. Riemannian Geometry. Princeton University Press,
Princeton, 1964
[193] L. D., Lifshitz, E. M. The Classical Theory of Fields. Pergamon Press
Landau, Oxford, 1975
[194] Born, M., Heisenberg, W., Jordan, P. Zur Quantenmechanik II.
Zeitschrift fuür Physik 35, 557 (1926)
[195] Jordan, P., von Neuman, J., Wigner, E. On an Algebraic
Generalization of the Quantum Mechanical Formalism. Annals of
Mathematics (Princeton) 35, No. 1, 2964 (1934)
[196] Arnold, V. I. Lobachevsky triangle altitude theorem as the Jacobi
identity in the Lie algebra of quadratic forms on symplectic plane. J.
of Geometry and Physics, 53, 421-427 (2005)
[197] Arnold, V. I. Mathematical Methods of Classical Mechanics. Springer-
Verlag, LLC, 2nd ed., New York 1989
[198] Brecher, K. Is the Speed of Light Independent of the Velocity of the
Source? Phys. Rev. Lett. v. 39, 10511054, (1977)
[199] Wolf, P., Bize, S., Tobar, M. E., Chapelet, F., Clairon, A., Luiten, A. N.,
Santarelli, G. Recent experimental tests of special relativity.
http://arXiv.org/abs/physics/0506168v1).
518
Bibliography
[200] Lichtenberg, A. J., Lieberman, M. A. Regular and Stochastic Motion.
Springer, Berlin-Heidelberg-New York, 1992. [Comprehensive and
containing nontrivial results, can be difficult for the first reading].
[201] Lin, C. C. The Theory of Hydrodynamic Stability. Cambridge University
Press, Cambridge, 1955. [An old and possibly forgotten, but very insightful
book].
[202] Gibbs, J. W. On the equilibrium of heterogeneous substances. Transactions
of the Connecticut Academy of Arts and Sciences, v. 3, pp. 108-248, 343-
524 (1874-1878). [This fundamental work on foundations of
thermodynamics was reproduced in The Collected Works of J. Willard
Gibbs, in two volumes, eds. W. R. Longley and R. G. Van Name, New Haven:
Yale University Press, 1957 (also 1928)].
[203] Arnold, V. I. Lectures on Partial Differential Equations. Springer, Berlin-
Heidelberg-New York, 2004 (2nd Edition). [This book is both
comprehensive and understandable for those who are not well-versed
with contemporary mathematical language].
[204] Cattaruzza, E., Gozzi, E., Francisco Neto, A. Least-action principle and path-
integral for classical mechanics. Phys. Rev. D 87, 067501.
[205] Marsden, J. E., Ratiu, T. S. Introduction to Mechanics and Symmetry.
Springer, New York, 1999.
[206] Schweber, S. S. An Introduction to Relativistic Quantum Field Theory.
Harper and Row, N. Y., 1964
[207] Heitler, W. The Quantum Theory of Radiation. Dover Publications, N. Y.,
1984
[208] Landau, L. D., Lifshitz, E. M. Electrodynamics of Continuous Media.
Pergamon Press, London, 1984
[209] Silin, V P, Rukhadze, A A Elektromagnitnye Svostva Plazmy i
Plazmopodobnykh Sred (Electromagnetic Properties of Plasmas and
Plasma-Like Media), Gosatomizdat, Moscow, 1961 (in Russian)
[210] Akhiezer, A. I., Berestetskii, V. B. Quantum Electrodynamics. Nauka,
Moscow, 1965 (in Russian), translation: Wiley-Interscience, New York-
London, 1969
[211] Schrödinger, E. Abhandlungen zur Wellenmechanik. Verlag J. A. Barth,
Leipzig, 1928
[212] Gamow, G. Zur Quantentheorie des Atomkernes. Zeitschrift für Physik,
51, 204-212 (1928)
[213] Leggett, A. J. Macroscopic quantum systems and the quantum theory of
measurement. Progress of Theoretical Physics Supplement, No. 69,
pp. 80-100 (1980)
11.5
Naive Taxation Models
519
[214] Carles, R. Semi-Classical Analysis for Nonlinear Schrödinger Equations.
World Scientific, Singapore, 2008
[215] Mensky, M. B. Concept of conscience in the context of quantum
mechanics. Phys. Uspekhi 48, 389-410 (2005)
[216] Aspect, A., Dalibard, J., Roger, G. Experimental Test of Bell’s
inequalities using time-varying analyzers. Phys. Rev. Letters, v. 49,
No. 25, 1804-1807 (1982); Aspect, A., Grangier, P., Roger, G.
Experimental realization of Einstein-Podolsky-Rosen-Bohm
Gedankenexperiment: a new violation of Bell’s inequalities. Phys.
Rev. Letters, v. 49, No. 2, 91-94 (1982)
[217] Born, Max. The statistical interpretation of quantum mechanics. Nobel
Lecture, December 11, 1954, Bohm, D. A suggested interpretation of the
quantum theory in terms of ”hidden” variables. I. Phys. Rev. 85, No.
2, 166-179 (1952)
http://nobelprize.org/prizes/physics/1954/born/lecture
[218] Bohm, D. A suggested interpretation of the quantum theory in terms of
“hidden” variables. I. Phys. Rev. 85, No. 2, 166-179 (1952)
[219] Zel’dovich, Ya. B., Novikov, I. D. Relativistic Astrophysics, I: Stars
and relativity, Chicago University Press, Chicago, 1971
[220] Balesku, R. Equilibrium and Nonequilibrium Statistical Mechanics.
John Wiley and Sons, N. Y.,1975
[221] Landauer, R. Irreversibility and heat generation in the computing process.
IBM J. of Research and Development, v. 5, No. 3 (1961)
[222] Il’inskii, Yu. A., Keldysh, L. V. Electromagnetic Response of
Material Media. Plenum Press, N. Y.-London, 1994
[223] Arnold, V. I. Catastrophe Theory. Springer, Berlin-Heidelberg-New York,
2004 (3d Edition).
[224] Currie, D. G., Jordan, T. F., Sudarshan, E. C. G. Relativistic invariance and
Hamiltonian theories of interacting particles. Rev. Mod. Phys. v. 35, 350-
375 (1963).
[225] Goldberg, S. The Abraham theory of electron: The symbiosis of
experiment and theory. Archive for History of Exact Sciences, v. 7, No. 1,
pp. 7-25 (1970).
[226] Bolotovskii, B. M., Serov, A. V. Details of the motion of charged
nonrelativistic particles in a variable field. Physics Uspekhi 37, No. 5, 515
516 (1994)
[227] Kapitza, P. L., Dirac, P. A. M. The reflections of electrons from
standing light waves. Proc. Cambridge Phil. Soc. v. 29, 297 (1933)
[228] Fedorov, M. V. Quantum theory of the Kapitza-Dirac effect. Zh. Teor.
520
Bibliography
i Exper. Fiz. 52, 1434 (1967)
[229] Fedorov, M. V., Goreslavsky, S. P., Letokhov, V. S. Ponderomotive
forces and stimulated Compton scattering of free electrons in a laser
field. Phys. Rev. E 55, 1015-1027 (1997)
[230] Weinstein, L. A., Solntsev V. A. Lectures on Microwave Electronics.
Sovetskoye Radio, 1973 (in Russian)
[231] Freeman Dyson: Mathematician, Physicist, and Writer. Interview
with D J Albers, The College Mathematics Journal, 25, No. 1, January
1994
[232] Solomon, S., Plattner, G.-K., Knutti, R., Friedlingstein, P. Irreversible
climate change due to carbon dioxide emissions. Proceedings of
The National Academy of Sciences of the USA [PNAS] 106, No.6,
1704-1709 (2009)
[233] Ghil, M., Childress, S. Topics in Geophysical Fluid Dynamics:
Atmospheric Dy- namics, Dynamo Theory, and Climate Dynamics.
Springer, Berlin-Heidelberg, 1987
[234] B¨arring, L. Climate - change or variation? Climatic Change, v. 25, No.
1, pp. 1-13 (1993)
[235] Nicolis, G., Nicolis, C. Foundations of complex systems: nonlinear
dynamics, statistical physics, information and prediction, World
Scientific, 2007
[236] Niclis, C. Niclis, G. Reconstruction of the dynamics of the climatic
system from time-series data. Proc. Natl. Acad. Sci. USA (Geophysics),
v.83, pp. 536-540, February 1986
[237] Radiative Forcing of Climate Change. Board on Atmospheric Sciences
and Climate (BASC). The National Academies Press, Washington,
D.C., 2005
[238] Cravens, G. Power to Save the World. The Truth About Nuclear
Energy. Knopf, N.Y., 2007
[239] Usoskin, I. G., Schu¨ssler, M., Solanki, S. K., Mursula, K. Solar activity
over the last 1150 years: does it correlate with climate? Proc. 13th
Cool Stars Workshop, Hamburg, July 5-9, 2004 (ESA SP-560, Jan.
2005, eds.: Favata, F., Hussain, G., Battrick, B.)
[240] Ditlevsen, P. D. Bifurcation structure and noise-assisted transitions in
the Pleistocene glacial cycles. Paleooceanography, v. 24, PA3204,
August 2009
[241] Gates, W. L., Mintz, Y. (eds.) Understanding Climate Change.
National Academy of Sciences (NAS), Washington, D.C., 1975
[242] Bogoliubov, N.N., Mitropolski, Yu.A. Asymptotic Methods in the
11.5
Naive Taxation Models
521
Theory of Non- linear Oscillations. Gordon and Breach, N.Y., 1961
[243] Bartens, W. Das Ärztehasserbuch. Knaur Taschenbuch Verlag, München,
2007; Auf Kosten der Patienten. Wie das Krankenhaus uns krank macht.
Eichborn-Verlag, Frankfurt 2008.
[244] E. U. Condon and R. W. Gurney “Quantum mechanics and radioactive
disintegration”, Phys. Rev. 33, No.2, 127-140 (1929)
[245] Kartsev, V. P. Newton. The Lives of Remarkable People. Biography Series.
Moscow, 1987 (in Russian).
[246] Wang N., Yao T., Shi Y. On the magnitude of temperature decrease in the
equatorial regions during the Last Glacial Maximum. Science in China
Supplement (series D), v. 42, 80-90 (1999)
[247] Hoag, H. The missing greenhouse gas. Nature Reports, v. 2, p.99-100,
August 2008
[248] Primeau, F. Characterizing transport between the surface mixed layer and
the ocean interior with a forward and adjoint global ocean transport
model. Journal of Physical Oceanography, v. 35, 545-564 (2005)
[249] Holland, D. “Bias and concealment in the IPCC process: the ‘hockey-stick’
affair and its implications”. Energy and Environment, v. 18, 951-983
(2007)
[250] Roberts, J. E. Meandering in Medical Physics: A Personal Account of
Hospital Physics, Institute of Physics Publishing, Bristol, 1999
[251] Popakostas, A., Potts, A., Bagnall, D. M., Prosvirnin, S. L., Cles, H. J.,
Zheludev, N. I. Optical manifestations of planar chirality. Phys. Rev. Lett.
v.90(10), 107404 (March 14, 2003)
[252] Schwanecke, A. S., Krasavin, A., Bagnall, D. M., Potts, A., Zayats, A. V.,
Zheludev, N. I. Broken time reversal of light interaction with planar chiral
nanostructures. Phys. Rev. Lett. v.91(24), 247404 (Dec. 9, 2003)
[253] Sigrist, M., Bailey, D. B., Laughlin, R. B. Fractional vortices as evidence of
time-reversal symmetry breaking in high Tc superconductors. Phys. Rev.
Lett. v.74, pp.3249-3252 (1995)
[254] Lee, W.-C., Zhang, S.-C., Wu, C. Pairing state with a time-reversal symmetry
breaking in FeAs- based superconductors. Phys. Rev. Lett. v.102, 217002
(29 May, 2009)
[255] Hillier A. D., Quintanilla J, Cywinski R. Evidence for time-reversal
symmetry breaking in the non-centrosymmetric superconductor LaNiC2.
Phys. Rev. Lett. v.102(11), 117007 (March 20, 2009)
[256] Doniach, S., Kapitulnik, A., Frank, P., Fejer, M. M., Spielman, S., Dodge, J. S.
Time Reversal Symmetry Breaking in Biological Molecules. Addison-
Wesley Publishing Company, Reading (MA), 1992
522
Bibliography
[257] C. J. Gorter and H. Casimir C. J. Gorter, H. Casimir. “On superconductivity”,
Physica 1, 306-320 (1934); Phys. Z. 35, 963 (1934)
[258] R. Becker, G. Heller, F. Sauter. “Über die Stromverteilung in einer
supraleitenden Kugel”, Zeitschrift für Physik 85, 772-787 (1933)
[259] Gerlich, G., Tschenschner, R. D. Falsification of the atmospheric CO2
greenhouse effects within the frame of physics. Int’l J. of Mod. Phys. B 23,
No. 3, 275-364 (2009)
[260] Schmidt, G. A. The physics of climate modeling. Phys. Today, p.72-73,
January 2007
[261] P. G. Debenedetti, H. G. Stanley. Supercooled and glassy water. Phys.
Today, No. 6, p. 40-46, June 2003
[262] Botkin, D. B. Forests, lakes, and the anthropogenic production of carbon
dioxide. BioScience, v.27, No. 5, pp. 325-331 (1977)
[263] Akhiezer, A. I., Landau, L. D., Pomeranchuk, I. Ya. Scattering of Light by
Light. Nature 138, 206-206 (1936)
[264] R. Kronig “Zur Theorie der Supraleitfähigkeit”, Zeitschrift für Physik A 78,
No.11-12, 744-750 (1932)
[265] R. Bluhm, “Overview of the SME: Implications and Phenomenology of
Lorentz Violation”, Talk presented at the conference “Special Relativity:
Will it Survive the Next 100 Years?” Potsdam, Germany, February 2005,
published in Lecture Notes in Physics, v.702, pp.191-226 (2006).
http://arxiv.org/abs/hepph/0506054v1.
[266] Kostelecky, V. A., Russell, N. Data tables for Lorentz and CPT violation,
http://arxiv.org/abs/hep-ph/0801.0287v3.
[267] Gleiser, M. “Drake equation for the multiverse: from the string landscape
to complex life.” http://ArXiv.org:hep-th1002.1651.
[268] Groves, G. V. Hough components of water vapor heating (in atmosphere).
Journal of Atmospheric and Terrestrial Physics, v. 44, 281-290, (1982)
[269] Kravtsov, S., Swanson, K., Tsonis, A. A. A new dynamical mechanism for
major climate shifts. Geophys. Res. Lett., v. 34, No.13, L13705 (12 July
2007)
[270] Lockwood, M., Harrison, R. G., Woolings, T., Solanki, S. K. Are cold winters
in Europe associated with low solar activity? Environmental Research
Letters, v. 5, No. 2 (April-June 2010), 024001
[271] Acuna-Soto, R., Stahle, D. W., Cleaveland, M. K., Therrell, M. D.
Megadrought and Megadeath in 16th Century Mexico. Emerging
Infectious Diseases, CDC, v. 8, No. 4, p. 360-362, April 2002.
https://wwwnc.cdc.gov/eid/article/8/4/01-0175_article
[272] Shaw, B. D., Climate, environment and prehistory in the Sahara, World
11.5
Naive Taxation Models
523
Archaeology, 8, No.2, 142, (1976)
[273] Yang, F., Schlesinger, M. E. On the surface and atmospheric temperature
changes following the 1991 Pinatubo volcanic eruption - A GCM study. J.
Geophys. Res., 107 No. D8, 1-14 (2002), 10.1029/2001JD000373
[274] Freidenreich S. M., Ramaswamy, V. Solar radiation absorption by carbon
dioxide, overlap with water, and a parameterization for general
circulation models. Journal of Geophysical Research v. 98, 7255-7264
(1993)
[275] McArthur, J. M., Janssen N. M., Reboulet S., Leng M. J., Thirlwalle M. F., van
de Shootbrugge, B. Palaeotemperatures, polar ice-volume, and isotope
stratigraphy. Palaeogeography, Palaeoclimatology, Palaeoecology 248,
391430 (2007).
[276] E. Majorana “Il valore delle leggi statistiche nella fisica e nelle scienze
sociali” (The value of statistical laws in physics and the social sciences,
Scientia 36, 55-56 (1942)
[277] Aghion, P., Howitt, P. Growth and unemployment. Review of Economic
Studies, 61, 477-494 (1994)
[278] Jahn, R. G., Dunne, B. J. The PEAR Proposition, EXPLORE, Volume 3, Issue
3, 2007, Pages 205-226, ISSN 1550-8307,
https://doi.org/10.1016/j.explore.2007.03.005.
(https://www.sciencedirect.com/science/article/pii/S15508307070005
84), https://www.princeton.edu/search?search=PEAR+proposition
[279] Lloyd, S. Programming the Universe: A Quantum Computer
Scientist Takes On the Cosmos, Knopf, N. Y., 2006
[280] Scheck, F. Theoretische Physik, B.1-4. Springer-Verlag, Berlin-
Heidelberg-New York, 2001-2007
[281] Sinai, Ya.G. On the Notion of Entropy of a Dynamical System. Doklady of
the Russian Academy of Sciences (DAN), v.124, pp. 768–771 (1959).
[282] B. M. Bolotovskii, Oliver Heaviside 1850–1925 (Nauka, Moscow, 1985), p.
152 [in Russian]
[283] Cooper, L. N. “Bound Electron Pairs in a Degenerate Fermi Gas,” Phys. Rev.
104, 1189 (1956)
[284] Hoddeson, L. and Daitch, V. True Genius: The Life and Science of John
Bardeen, (Joseph Henry Press, 2002), p. 203
[285] Mukhanov, V. Physical Foundations of Cosmology. Cambridge University
Press, Cambridge (U.K.), 2005
[286] Mukhanov, V. Winitzki, S. Introduction to Quantum Effects in Gravity.
Cambridge University Press, Cambridge (U.K.), 2007
[287] Nambu, Y. Quark model and the factorization of the Veneziano amplitude.
524
Bibliography
In: Detroit 1969, Symmetries and Quark Models, ed. R. Chand, p.269-278,
Gordon and Breach, N.Y., 1970; Lectures at the Copenhagen Summer
Symposium (1970), see also: Quarks, strings and gauge fields, in
Baltimore 1974, Johns Hopkins Workshop On Current Problems In High
Energy Theory, p. 3-13, Baltimore 1974
[288] Nielsen, H.B. String from Veneziano Model. arXiv:0904.4221, see also ”An
almost physical interpretation of the integrand of the n-point Veneziano
model”, preprint of the Niels Bohr Institute; paper presented at the XV Int.
Conf. on High Energy Physics, Kiev, 1970; Fairlie, D. B., Nielsen, H.B. An
Analogue Model for KSV Theory. Nucl. Phys. B 20, 637-649 (1970)
[289] Susskind, L. Dual-symmetric theory of hadrons. Nuovo Cimento, 69A, 457-
496 (1970); Structure of Hadrons Implied by Duality. Phys. Rev D1 1182-
1188 (1970); Galli, E., Susskind, L. Phys. Rev D 1, 1189 (1970)
[290] Schwarz, J. String theory: the early years. https://arxiv.org/abs/hep-
th/0007118
[291] Datseris, G., & Stevens, B. (2021). Earth’s albedo and its symmetry. AGU
Advances, 2, e2021AV000440. https://doi.org/10.1029/2021AV000440
[292] Arrhenius, S. (1901). Ueber die Wärmeabsorption durch Kohlensäure.
Annalen der Physik, 309(4), 690-705.
https://scholar.archive.org/work/udqm6kin6nac3m4ngfcms7eh2e
[293] McGuire, M., Olson, M. The economics of autocracy and majority rule: the
invisible hand and the use of force. Journal of Economic Literature, March
1996
[294] Jervis, R. System Effects: Complexity in Political and Social Life. Princeton
University Press, Princeton (N.J.), 1997
[295] Lijphart, A. Patterns of Democracy: Government Forms and Performance
in Thirty-Six Countries. Yale University Press, New Haven (CT), 1999
[296] Pauli W. (1993) Das Jahr 1946 Heisenbergs Theorie der S-Matrix. In: von
Meyenn K. (eds) Wolfgang Pauli. Sources in the History of Mathematics
and Physical Sciences, vol 11. Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-540-78802-7_7
[297] Gibbs W.: Elementary Principles in Statistical Mechanics (Yale University
Press, New Haven 1902) Chapter XII
[298] Ginzburg VL (July 2004). "On superconductivity and superfluidity (what I
have and have not managed to do), as well as on the 'physical minimum'
at the beginning of the 21st century". ChemPhysChem. 5 (7): 930–945.
doi:10.1002/cphc.200400182. PMID 15298379
[299] von Neumann, J. (1963) [1942]. "Theory of detonation waves. Progress
Report to the National Defense Research Committee Div. B, OSRD-549 (PB
31090)". In Taub, A. H. (ed.). John von Neumann: Collected Works, 1903–
1957. Vol. 6. New York: Pergamon Press. pp. 203–218.
11.5
Naive Taxation Models
525
[300] Zel’dovich, Ya. B. (1940). "On the theory of the propagation of detonation
in gaseous systems" К теории распространения детонации в
газообразных системах [On the theory of the propagation of detonations
on gaseous system]. Zhurnal Éksperimental'noĭ i Teoreticheskoĭ Fiziki (in
Russian). 10: 542–568.
[301] Strogatz, S.H. (2015). Nonlinear Dynamics and Chaos: With Applications
to Physics, Biology, Chemistry, and Engineering (2nd ed.). CRC Press.
[302] Thuillot, W. (2013). Statistical and numerical study of asteroid orbital
uncertainty. Astronomy Astrophysics.
[303] Wheeler, J. A. Theory of Nuclear Reactions. Physical Review.52.1107.1937.
[304] Weinberg, S. (1996). The Quantum Theory of Fields.
[305] Frisch, U. (1995). Turbulence: The Legacy of A. N. Kolmogorov. Cambridge:
Cambridge University Press.
[306] Moffatt, H. K. (1973). Statistical Fluid Mechanics: The Mechanics of
Turbulence, volume 1. By Monin A.S. and Jaglom A. M. M. I. T. Press, 1971.
769 pp. £10.50. Journal of Fluid Mechanics, 60(2), 410–414.
[307] Bohr, T., Jensen, M. H., Paladin, G., and Vulpiani, A. Dynamical Systems
Approach to Turbulence (chapter On Lagrangian Chaos. Cambridge
University Press.1998.
[308] Einstein, A. "Kosmologische Betrachtungen zur allgemeinen
Relativitätstheorie". (Cosmological Considerations in the General Theory of
Relativity). Published in: Sitzungsberichte der Königlich Preußischen
Akademie der Wissenschaften (Berlin), 1917, pp. 142–152.
|
Preface 1 myPhysmatics Connected Mathematical Models in Physics Sergey Pankratov 2 Preface Preface "If you don't know where you are going, any road will get you there." Lewis Carroll For many years, I have conducted a sort of scientific diary in which I put facts and results that I liked at the moment or found interesting and useful. My principal motivation was to learn how clever people construct beautiful mathematical models out of physical garbage - incongruent sets of data, apparently disjoined experimental facts and other multiple raw material. Having looked in hindsight through these notes, I could ascertain that nearly everything in them revolved around a bunch of more or less standard ideas, facts, techniques, and approaches which may be regarded as indispensable elements of the standard education of a physicist. Such elements can be networked to provide an apparently cohesive picture of a fair physical education and a mathematical culture, sufficient for a physicist. At least satisfactory to many of us. The present book stems from that old diary of mine and to a certain extent retains its disparity. However, the surrounding world seems to be united, which manifests itself in the fact that new horizons are inevitably unfolded and unexpected links between apparently disjoined, isolated models, most of them being physics-based mathematical models, are continuously opened when one attempts to consider various aspects of stand-alone and ad hoc constructs. As a result, the book was not limited to physics alone, but also contains some rudimentary information on the situation in neighboring disciplines. The existence of hidden relationships between seemingly different domains has always been to many people a wonderful and astounding quality of physical and mathematical sciences. One can observe that the art of producing a good work in physics is essentially the art of revealing connections between seemingly disparate manifestations. The same applies to mathematics, sometimes in even larger extent. So, in my diary, I tried to pay attention to the links, analogies and similarities inside the mosaic of fragmentary mathematical models of physics. I rather register and describe those links than offer a rational explanation for the very phenomenon of linking. To illustrate such linking, I can bring a rather obvious example. Although general relativity is regarded by many people as an autonomous subscience, a discipline which is separate from the whole body of physics, I think that is erroneous. The study of general relativity helps to understand classical mechanics much better than while studying mechanics alone although general relativity and classical mechanics traditionally belong to different physical courses and are practically never taught together. As one more example, one can recall Bohr's atom, which was basically an ad hoc model, but later this naive model has profusely stimulated the emergence of more sophisticated mathematical (quantum) approaches and theories. One can remember that during the conception of quantum mechanics, the model of Bohr's Preface 3 atom, quite unexpectedly, was connected with such distant subjects as the blackbody radiation - also originally an ad hoc mathematical model, with the theory of adiabatic invariants, which is very close to the modern theory of dynamical systems, and with the spectral theory of linear operators. This and some other cross-disciplinary links will be described in the book where appropriate. For myself, long time ago, I called such unification of seemingly disjoint results a "physmatical effect" - pointing to the fact that multitudes of fine-grained physicsbased mathematical models (PBMM) become inextricably linked and networked, and I called the corresponding network a physmatical one. Mathematics serves as a key code gluing together isolated physical models viewed as nodes of this network. Widely disparate and independently developed, subjects from physics and mathematics converge unexpectedly to become unified ingredients of a single theme. This is similar to a polyphonic construction of music, when a diversity of tunes combine to produce an interesting melody. The corresponding discipline that is focused on ties and lateral associations between mathematical models in physics may, in this terminology, be called "physmatics" - in distinction to "mathphysics", or mathematical physics, which has traditionally been a well-structured discipline centered around partial differential equations. A considerable portion of mathematical physics is devoted to attempts of rigorous scrutiny of a selected bunch of mathematical models that involve mathematical aspects of the proof of these models' freedom from inconsistencies. In particular, proofs of existence and uniqueness of solutions, analysis of the properties of the chosen system of equations, construction of exact or self-similar solutions plays the major part in mathematical physics; nowadays a trend towards axiomatic constructs is more and more obvious. Although extremely important and comprising an indispensable element of any physicist's background, mathematical physics appears to be a realm of mathematicians, rather than physicists. In contrast, "physmatics" treats mathematics as a service subject, a set of protocols carrying universal research techniques between physical models, the latter may be regarded as networked "nodes". Scientific journals (and, lately, servers) play the role of hubs, routers and, sometimes, switches in this network redirecting the research efforts. Here "switch" is a component of a cognitive network that connects two or more lines of thinking to complete a task or a model. Physmatics is a more or less a closed, "private" network, in the sense that it does not so far include social sciences or medicine - these possess their own cognitive networks. Focusing on common features provides exciting observations and deepens understanding of such models, like accidental recognition of common acquaintances stimulates mutual interest and fastens friendship ties. I think that people are better inclined to recognition than to hard-core ab ovo learning. The more unexpected the liaison, the deeper is the appreciation. I concede of course that writing just about everything is a firm symptom of unprofessionalism: physicists are commonly encouraged to generate very 4 Preface focused articles. Today, however, in contrast with the 19th and the beginning of the 20th century, probably most fruitful period in physics, physicists too often explore minute problems. But, honestly speaking, papers such as - this is my fantasy, of course - "On the third correction to the second off-diagonal matrix element in the quasibound X-like to Γ-like states transition within the eight-band approximation in III-V heterojunctions" leave me unmoved, although such partial results form the very texture of physics and their authors should be utterly respected. Furthermore, the papers of the "Influence of Singing on Seeing"-type1 serve as a publication multiplier, and the number of publications is, in practice, considered as the main indicator of success in science (although quite the opposite may be true). One can rapidly produce dissertations, which actually happens. In relation to this, I often recall J. W. Gibbs who may be considered an antipode to the current breed of prolific scientists. Since in 1970s I translated thermodynamical works by Gibbs into Russian, I had to study his scientific legacy and found out that he, being really a great figure, was quite reluctant to publishing his papers, at least in a rush. The American "publish or perish" approach seems to me a noisy impasse, more suitable to journalists than to scientists. It would be a mere truism to observe that rapid multiplication of scientific (and near-scientific) journals and other publications makes the entire physmatical network increasingly complex and noisy. This multiplication process is analogous to the unfavorable development of the Internet which can eventually result in its catastrophic transformation (e.g., split). Furthermore, physmatics is constructed in such a way that one can, in principle, starting from each particular result (a physmatical node, in this terminology), reach any other point, even conceptually quite distant, say, one can travel from nanotechnology to topology. However, the time needed to drift from one physmatical concept (node) to another may well exceed the duration of human life. And what to do if so many things in physics and mathematics are equally interesting that it is one's desire to grasp them all? Then the networking agenda, with many routes between the concepts of interest would be the only solution. After all, intelligence is primarily the skill to interrelate seemingly different, heterogeneous things. The great Russian theoretical physicist L. D. Landau considered theoretical physics a "small science", he used to say that a physicist can understand the whole of it. On the contrary, experimental physics, according to Landau, is a "big science", a single person is unable to know all its parts [51]. The famous "Course of Theoretical Physics" by L. D. Landau and E. M. Lifshitz was based on this idea - a physicist can understand the whole physics, no matter how far from each other its specific models might appear to be. But that was long ago. Today, unfortunately, physics and mathematics more and more remind us of the Tower of Babel, and the only efficient method to cope with such an overwhelming flood 1 In Russian, "Vliyanie peniya na zrenie". Preface 5 of information is to employ the networking approach (similar to hypertext or hyper-reference used in modern online encyclopedia). To a physicist trained in the late 20th century, theoretical physics constructed around the least action principle serves as a backbone for the entire physmatical network. To me personally, "physmatics" reminds me of a hybrid language2 capable of activating the nodes of networked physical and mathematical knowledge. Network-oriented languages aimed at the analysis and depiction of data flows and network topologies have been known in networking technology for some time. Such languages enable us to see structures and links that may remain hidden at first glance. Creation of network-oriented languages is usually an interdisciplinary undertaking that merges network analysis, complexity theory, graph theory, communication theory and other disciplines elucidating the laws according to which new ideas, results, technologies, etc. are propagated. Networks are ubiquitous: just think of everyday networks like public transportation, communication, utilities, electrical engineering, etc. Physmatics represents a class of other, invisible networks that become observable when their objects are seen in relationship to each other. This all sounds unnecessarily abstract, but can be made transparent by simple examples. In the oscillator example, photons, phonons and many other quasiparticles are mostly described in the universal language of creation and annihilation operators, which is just an interpretation stemming from the fact that the energy spectrum of an oscillator is equidistant and can be obtained algebraically by representing the Hamiltonian through the product of ubiquitous raising and lowering operators. One might say that the oscillator model "radiates" its messages throughout the physmatical network and, in principle, receives feedback. Analogous exchange of messages can be observed, for instance, in nonlinear differential equations and dynamical systems, in geometrical properties of gauge models, in physics-based models beyond physics. Probably, in the near future, an algorithm may be built to produce a series of hierarchical - coarser - networks by searching highly dense subnets (or subgraphs) in each level of the entire network, and then a clustering multilevel algorithm can be applied. This networking approach would enable us to carry out a structured analysis of nowadays unweighted physical and mathematical networks. The networking paradigm fuels my conviction that physics and mathematics are a non-stop issue, with new nodes being constantly incorporated. Such an outlook is, of course, closer to expressing a dream than setting up a concrete problem3. My primary goal in this manuscript is much more modest - I 2 Hybrid languages exist not only as artificial programming or document-oriented formal languages (with some distributed data model that allows addressing, searching and linking of content from documents), but also among human dialects. For instance, Yiddish, with its ex-territoriality and massive inclusion of Slavic elements, even sovietisms, is a typical hybrid language destined to consolidate a network of disparate enclaves. 3 General features of representing science as a network or a map are discussed in the wellknown book by John Ziman "Reliable Knowledge" [109], ch.4. 6 Preface was trying to make the book a good read as well as a good reference to some interesting facts thoroughly covered in the available literature. I also tried to combine attention to detail with personal recollections and the occasional anecdote (often in order to make my points), rather than the purely scholarly approach. The diary form implies by default a compendium of many topics, with not necessarily everything really derived. Thus, the book may also appeal to people who wish to "comprehend" numerous fields of physics without doing much work. There is an obvious gap between the expert knowledge space and the educational one. The number of scientific papers grows so rapidly that the traditional methods of information dissemination (via journal papers, monographs, textbooks) become inefficient: even the specialists in narrow fields find it difficult to focus on the principal issues since there may be a lot of "noise" totally obscuring useful information. The Internet search, though very useful in skillful hands, often only aggravates the problem. Everyone is familiar with the difficulty of finding a precise piece of information, and the novices are completely lost in it. Strict monographs and detailed review papers provide little help: the monographs are usually too rigorous so that the reader must invest a lot of time to get to the essence, and rare overview articles are unbiased. The latter fact is natural: most of the authors extensively refer to their own work. Besides, people who are most active in the research seem to be reluctant to write long and detailed reviews about their field in general. Under these circumstances, it is likely that some free-style manuscripts might be required. Such free-style books would occupy an intermediate position between science4 popularization and real monographs. The free-style genre provides an opportunity for the author to convey her/his point of view whereas for the reader it may serve as a means to fill the gap between the common university courses and the highbrow scientific monographs or research papers based on special mathematical techniques. The present text is an attempt to quilt such a free-style book. And I repeat, I am aware that loose reflections are perceived as a grave professional disadvantage as well as extensively using "I" instead of impersonal "we". This "I-ing" i.e., an obvious retreat from the academic style is neither puerilism nor exhibitionism - it is rather an expression of personal opinion. So, the book may appear heavily opinionated, which is quite natural for a diary. Putting together seemingly diverse subjects takes time of course, so this book has been written in steps reflecting different levels of understanding. Some parts of the book are intended also for the people more attracted by arts, literature, and social studies than by science saturated with hard math. The obvious problem is that such people usually experience difficulties with mathematics. The book partially aims to reinforce basic mathematical knowledge, in particular, by favoring and interpreting mathematical models as well as by trying to criticize 4 For definiteness, I shall only speak about physics. Preface 7 some of them. Mathematics viewed as an intellectual reality can be made instrumental not only in physics, but also in social life. As far as the scientific component of the manuscript goes (there are also unscientific ones), only the most basic models are discussed in this book. The style of the text - slow, ruminating, recurrent, sometimes light - reflects the tendency to study things while writing. Some readers might designate a number of fragments in the text as "philology" or as "pouring from void into empty". I, however, think this impression is deceptive. There exist at least three good reasons to ruminate about foundations of physics: firstly, by scrutinizing the fundamental ideas once again, one can learn a lot from ingenious individuals who originated them; secondly, one can reformat individual thought templates making them less susceptible to collective mantras; and thirdly, pondering over possible mathematical formulations of physical basics can catalyze the study of mathematics. We shall see later in this book that many valuable mathematical ideas have come through contemplating over various possibilities of exploring basic laws of physics. As an illustration one might recall that any successful physical theory has historically served as a great stimulus and rich source of ideas primarily for mathematicians. This was an exchange with instruments of understanding. Furthermore, we shall parse some customary notions; re-examining conceptual systems seems to be useful because otherwise attitudes pass ahead of knowledge. Beside "philology", one may encounter some inconsistencies - often deliberate, as, e.g., in coordinate transformations - in the notation of different sections, largely because different parts of the book were written at different times. One may also notice that in special and general relativity different systems of notations such as ((x1, x2, x3, x4 = ict, ) (pseudo-Euclidean) and Minkowski space with signatures (+, -, -, -) or sometimes (-, +, +, +), respectively, (see Chapters 3, 9) are in fact more convenient than a single one, although they are just parts of the same geometric theory (however, I use only one system of relativistic notations in this book with a small exception of writing coordinate indices below to simplify the notations in a couple of the most primitive cases). Usually, understanding the author's notations takes considerable time while reading physical and especially mathematical papers. To spare such superfluous efforts, I tried to keep notations as simple as possible. Yet, I would not consider this text as fully suitable for students since it would take a lot of precious time to read all my reminiscences which may be unnecessary for focused studies. I concede that this book is a kind of an occasional supplementary reading, a dubious introspective report rather than a textbook or a monograph, although I was much more interested in physical and corresponding mathematical problems than in discussions and gossips about these problems. Some parts of this book may be read by a general audience without difficulties. Because of numerous distractions this text is not exactly what one needs to get prepared for exams. Nevertheless, I disagree with those who regard all such distractions totally irrelevant - I hope they might produce helpful associations. There is, incidentally, a general principle 8 Preface known to many people studying martial arts, e.g., karate, but applicable for each kind of learning: absorb what is useful, reject what is useless, and add what is specifically your own. As stated, the manuscript is intentionally defocused and lacks unity. In a number of fragments, I decided to sacrifice stringency to make the reading more comprehensible. In some respects, the book is closer to a collection of popular essays than to a scientific treatise. I can understand those readers who would be irritated by such style and organization of the book and would rate it as unsatisfactory. To such readers I might remark that I tried to assume the role of a generalist in order to see the forest behind the trees. It is my deep conviction that physics and mathematics are the playground for free-thinking, open-minded and versatile personalities. As far as the interaction between physics and mathematics is concerned, one can, of course, better spend one's time reading refined and focused reflections about mathematics and physics by V. Arnold, G. H. Hardy, M. Klein, Yu. Manin, J. von Neumann, G. Polya, N. Wiener, a number of prominent physicists, even Bourbaki who seem to be a collective pseudonym for a group of mathematical extremists united in their disdain of physicists. The present book may only fuel such disdain since I never intended to meet the present-day mathematical standards. The book consists of twelve chapters which are in fact not independent: 1. Introduction 2. Principles of Mathematical Modeling 3. Mathematical Potpourri 4. Classical Deterministic Systems 5. Classical Fields and Waves 6. The Quantum World 7. Stochastic Reality 8. Radiation in Matter 9. What Remains to Be Solved 10. Climate as a Physical System 11. Made in Physics 12. Conclusion and Outlook. I could not refrain from revealing cross-chapter associations and exploiting common models. There exist popular and pretty universal mathematical methods such as Fourier series, Green's functions, perturbation techniques, asymptotic expansions, etc. that can be applied in each field to handle models with totally different physical content. Thus, it seems indispensable to master these universal methods, if one is striving to work professionally with physical models. Being essentially a diary and based on personal reflections, the book enjoys relaxed standards of modesty and customary humility. I did not try to make the Preface 9 text look maximally impersonal as well as to conceal subjective likings and dislikings. In this book I used to intersperse technical descriptions with some reminiscences of my personal interactions with physicists and mathematicians. The scientific content in some fragments and sections is nearly absent. Yet the book contains, apart from personal reminiscences and opinions, a pronounced objective i.e., a scientific drive whose aim is to demonstrate strength and versatility of modeling methods in physics. This scientific component may bring certain difficulties to an occasional general reader, so the book prerequisites are some standard university courses on algebra, analysis, and differential equations as well as familiarity with basic facts from physics. Honestly speaking, learning the models of physics presupposes a certain degree of maturity, since they usually involve tying together diverse concepts from many areas of physics and mathematics. One can observe that a good article on physics is in fact a collection of mathematical problems with solutions. Bearing in mind these mathematical necessities, the preliminary Chapter 3 ("Mathematical Potpourri") presents a recapitulation of known mathematical facts, with the notation adopted throughout the book being introduced. Tinkering with mathematical expressions seems to be a growing trend in modern physics, and the rows of such expressions are often overloaded with fancy notations and can be totally intransparent for a person who is not a narrow specialist in the field. Occasionally, when the calculations are too lengthy and tedious to be reproduced exhaustively, I have simply quoted the results with the respective references. Many examples in the book can be traced back to the works of giants. Great physicists may be considered to come in two more or less pure varieties: deep thinkers (like e.g., N. Bohr, P. A. M. Dirac, F. Dyson, A. Einstein, J. W. Gibbs, or W. Heisenberg) and efficient problem solvers (like H. Bethe, R. Feynman, E. Fermi, L. D. Landau, A. Sommerfeld, or Ya. B. Zeldovich). There may be also intermediate types interpolating between the two pure varieties, e.g., S. Chandrasekhar, L. I. Mandelstam, W. Pauli. Besides, there exist great minds whose mathematical power equals physical considerations or even dominates over them, those are exemplified by V. I. Arnold, N. N. Bogoliubov, L. D. Faddeev, V. A. Fock, H. A. Kramers, J. von Neumann, E. Wigner, E. Witten. All these people have produced highly influential results to be consumed by a sea of less creative individuals. The present book reflects the position of such a consumer: my main motivation was rather to learn, understand and absorb than to create and excite. One should not, however, try to absorb new science or techniques in a gulp, it must be done gradually, bit by bit. Recall that a real connoisseur would never hastily gulp the precious old wine, he would rather enjoy each sip, feel the gradations of taste, nuances of scent, hues of color. I realize that the title of this book may be misleading, since this is not a book on mathematical methods in physics. Many of such methods have been described in comprehensive textbooks and monographs. For myself, it was important to trace how the outstanding scientists worked, how they employed their intuition, 10 Preface set up concrete problems, guessed the answer and tried to corroborate it by developing the mathematical metaphors - pretty universal to be transferred to other models and to connect them. It was also utterly instructive to observe how the great inventive minds tried to adjust and sometimes distort mathematics in order to obtain the required, intuitively anticipated answer without, of course, crucially violating the strict rules of mathematical operations. Therefore, one may note that the book is mainly focused on examples of physics-based models and not on hard-core rigorous descriptions favored, in all their generality, mostly by professional mathematicians. It is in this sense that the present book may serve only as a supplementary material to existing textbooks, e.g., on theoretical and mathematical physics. My interest lies not in mathematics as such but only in its use. I treat mathematics as a support stuff for physics or sometimes even as a part of physics. Therefore, despite some explicit calculations contained in this book, I wrote it mostly in a "do-it-yourself" manner so that in many places I give only drafts of well-known models or theories. This means that starting definitions and statements (theorems) are often only formulated, with basic ideas and formulas being provided. The importance of correct definitions of physical and especially mathematical concepts is irreducible. Nevertheless, physicists often believe that it is the image of the concept and not its definition which forms the most essential component of understanding, and mathematicians are inclined to unjustified exaggeration of partial results. To some extent, in this book I succumb to this popular physical stereotype, and some mathematically important details such as establishing consistence of definitions or proving existence/uniqueness theorems may be left to the careful reader. To be really creative, one must solve problems and not just describe how other people did it - this is a second-hand science. Likewise, it is inefficient to study any physmatical subject of interest beforehand: much more useful would be to take some problem relevant to this subject. At first, I wanted to supply the book with a list of problems, some of them rather complicated (with solutions), but then I dropped this idea because many problems are inevitably scattered over the main text in the form of models and adding some artificial problems would make the manuscript look like a textbook, whereas it is basically a diary. In other words, most exercises that I conjured initially for this book would look very difficult not because they really are such, but because the theory included in the book is not systematized enough to solve them all. This is a book for reading, not for studying. In many cases, I return to the problems and models I have already discussed at a different level of understanding and try to observe them from a different angle. Such a recursive method of viewing a problem helps to elucidate its sensitive points. I tried to write as freely as possible, often preferring to stop and once again scrutinize the well-known models, sometimes with slightly different Preface 11 notations, attempting to find new features or connections to other models in the formerly discussed problems. Thus, despite a cacophony of subjects, I hope the book is not totally useless. In general, the book may be perceived as reflecting a kind of intuitive protest against the progressive compartmentalization of science, its artificial breakdown into narrow disciplines. There are no natural frontiers between disciplines - it is people who have established them, often for the purposes having nothing in common with science. A few words about sources. Many results in physics are hard to ascribe to a single author, they may be perceived as an outcome of certain scientific evolution. This fact manifests itself in the cited literature. In view of the rather broad scope of the book, no attempt has been made to supply it with an exhaustive list of references. I tried to cite moderately but exactly. The general criterion was to point at sources that could supplement the information, contained in the present book, which I consider insufficient or "divergent" (too many chained references with branching). Although in general I tried to make all the calculations transparent and understandable, in many instances in this book explorations of ideas contained in many standard sources such as textbooks is not fully complete and thorough; to provide a detailed account in all cases would make the manuscript unobservable. Some references, especially related to the material to be readily found on the Internet, are given, for the reader's convenience, in the main text. I did not shy away from using "unscientific" sources and from citing popular science books and articles, even those written by journalists and appearing in newspapers and on the Internet i.e., the "gray literature" - the material not published in peer-reviewed scientific journals. I also repeatedly cite Wikipedia, although this source can hardly be called an authority (for instance, Wikipedia can be biased). The names of many great people are present in the text. I was lucky to listen and to talk to some of them: V. M. Galitskii, V. L. Ginzburg, L. V. Keldysh, D. A. Kirznitz, A. B. Migdal, Ya. B. Zeldovich, D. N. Zubarev. I would not dare to say that I know or knew them all: it is only narrow circles of close friends have the right to say so. I was also lucky to have very kind and understanding teachers: Professors N. P. Kalashnikov and M. I. Ryazanov. Almost all the material in this book is not based on original results. Nevertheless, the presentation of the material is predominantly my own and consists of original or considerably modified explanations of known results, discussion of their relevance to the contemporary scientific or engineering practice. In this sense, although there are no fundamentally new results, the book is not a primitive compilation. I tried to acknowledge all borrowings of presentations, which I was aware of, in the notes scattered over the manuscript. Since I attempted to present the modeling techniques by developing them from the first principles and in a self-contained way, I did not provide a hundred percent comprehensive list of references. Yet, I have supplied the book with the 12 Preface minimal bibliography, which would help the reader to duly appreciate the work performed by many ingenious people. After all that was said one can justifiably ask: why am I writing all this stuff? A variant of the answer lies in the following image I used to have in my mind: people who study physics and mathematics today often find themselves in a situation of a late passenger trying to catch the train that has already started. My vision is that it would be possible to spare the inconvenience of jumping into the last car of the train, and instead of hectic turns to be comfortably installed in a cozy lounge. To this end, one should not strive to be at the forefront of modern science, which is frequently dictated by fashion, but to understand thoroughly not so many crucial scientific patterns. And I would like to add that I wanted to write not to please the experts or the referees, but what people could remember and use. Acknowledgements I would like to express my deep gratitude to all those colleagues and friends of mine who have helped me in discussions, in text and pictures preparation and in many other ways. I am grateful to Professor Christoph Zenger and Professor Hans-Joachim Bungartz of the Technische Universität München whose shrewd critical remarks made this text less vague. Special thanks to Dr. Dmytro Chibisov who was very patient to explain to me a lot of tricks in practical TEX/ LATEX usage (I call such tricks TEX-nicalities). Dr. Chibisov who is a well-known specialist in computer algebra techniques has also given me many good advice especially concerning computer usage in mathematics. I would also like to thank Professor Vladimir Preobrazhenski from l'Institut d'Electronique, de Microélectronique et de Nanotechnologie de Lille for many constructive comments and suggestions. Finally, but not the least I would like to express my deep gratitude to Dr. Thomas S. Ligon, who overtook the burden of the scientific editing of this book and my wife, Tatjana Znamenski, for preparing this manuscript for publication. Basic Principles of Mathematical Modeling 13 Contents 14 Contents 2 Contents Acknowledgements ........................................................................................................ 12 Contents .............................................................................................................................. 13 1 Introduction.............................................................................................................. 20 2 Principles of Mathematical Modeling ............................................................ 30 2.1 Basic Principles of Mathematical Modeling .............................. 32 2.2 Mathematical Models in Physics .................................................... 36 2.3 Ten Worlds of Physics ........................................................................ 45 2.3.1 The Classical World ............................................................. 46 2.3.2 The Thermal World ............................................................. 46 2.3.3 The Nonequilibrium World ............................................. 47 2.3.4 The Continuum World ....................................................... 47 2.3.5 The Electromagnetic World ............................................. 48 2.3.6 The Plasma World ................................................................ 48 2.3.7 The quantum world............................................................. 49 2.3.8 The high energy world ....................................................... 50 2.3.9 The relativistic world ......................................................... 50 2.3.10 The Cosmological World ................................................... 51 2.4 Physics-Based Mathematical Models (PBMM) ........................ 53 2.5 Theory, Experiment, and Models ................................................... 55 2.6 On the Relationship Between Physics and Mathematics .... 58 2.7 Mathematical Physics and Physmatics ....................................... 62 2.8 The Role of Human Communication ............................................ 63 2.9 Antimodels .............................................................................................. 72 2.10 Topological Models .............................................................................. 79 2.11 Engineering Models ............................................................................. 80 2.12 Mathematical Models in Biology .................................................. 82 2.13 Cognitive Models .................................................................................. 87 2.13.1 Religious Models ................................................................... 92 2.14 Science and Arts .................................................................................... 96 2.15 Physics and Philosophy ..................................................................... 97 2.16 Prognosis .................................................................................................. 98 2.17 Some Tricks of the Trade ............................................................... 104 3 Mathematical Potpourri ................................................................................... 108 Basic Principles of Mathematical Modeling 15 3.1 Sets ........................................................................................................... 113 3.2 Maps and Operators ......................................................................... 114 3.3 Groups .................................................................................................... 114 3.3.1. Semigroups ............................................................................. 115 3.4 The Rotation Group ........................................................................... 116 3.5 Lorentz and Poincaré Groups ......................................................... 116 3.6 Rings and Fields .................................................................................. 122 3.7 Morphisms ............................................................................................ 123 3.8 Algebras ................................................................................................. 124 3.9 Lie Groups and Lie Algebras ........................................................ 124 3.10 Vector Spaces ...................................................................................... 125 3.11 Basis Vectors........................................................................................ 135 3.12 Dual Spaces........................................................................................... 139 3.13 Some Remarks on Indices .............................................................. 143 3.14 Operators in Quantum Mechanics ............................................. 143 3.15 Dualities in Physics ........................................................................... 144 3.16 Manifolds ............................................................................................... 151 3.17 Notes on Derivatives ........................................................................ 154 3.18 Notes on Calculus .............................................................................. 156 3.19 Basic Geometry for Physics ........................................................... 157 3.20 Vector Fields ........................................................................................ 162 3.21 Geometry and Physics ..................................................................... 163 3.22 Geometry of Classical Mechanics ................................................ 163 3.23 Transformation of Affine Coordinates ..................................... 169 3.24 General Coordinate Transformations ....................................... 170 3.25 Variational Methods ......................................................................... 171 3.26 Differential Equations ..................................................................... 172 4 Classical Deterministic Systems ................................................................... 174 4.1 Main models of classical mechanics .......................................... 175 4.2 Newtonian Mechanics ..................................................................... 177 4.3 Lagrangian Mechanics ..................................................................... 178 4.4 Hamiltonian Mechanics .................................................................. 183 4.5 Oscillations ........................................................................................... 192 4.6 Harmonic Oscillator ......................................................................... 194 16 Contents 4.7 Symmetries and Conservation Laws ......................................... 196 4.8 Relativistic Mechanics ..................................................................... 197 4.9 Dynamical Systems ........................................................................... 198 4.10 Dynamical Systems and the Cauchy Problem ....................... 198 4.11 Autonomous Dynamical Systems ............................................... 200 4.12 Non-autonomous Systems ............................................................ 202 4.13 Dynamical Systems in Mathematical Modeling .................... 205 4.14 Nonlinear Science .............................................................................. 207 4.15 The logistic model: the bugs are coming ................................. 209 4.15.1. Extensions of the logistic model .................................. 218 4.15.2. Applications of the logistic model ............................... 224 4.16 Instabilities and chaos ..................................................................... 228 4.16.1 Chaos in dissipative systems ........................................ 232 5 Classical Fields and Waves .............................................................................. 235 5.1 The Maxwell Equations ................................................................... 235 5.2 Gauge Invariance in Classical Electrodynamics ................... 238 5.3 Four-Dimensional Formulation of Electrodynamics ......... 242 5.4 Classical Electromagnetic Field without Sources ................ 245 5.5 Equations of Motion for the Electromagnetic Field ........... 246 5.6. Hamiltonian Formalism in Electromagnetic Theory ......... 247 5.7 Limitations of Classical Electromagnetic Theory ................ 250 5.8 Integral Equations in Field Theory ............................................ 252 5.9 Phenomenological Electrodynamics ........................................ 253 5.9.1 The Traditional Averaging Procedure .................... 258 5.9.2 Ensemble Averaging of Fields and Currents ......... 260 6 The Quantum World .......................................................................................... 266 6.1 In the Middle of Revolution ........................................................... 268 6.2 Some Notes on the Historical Development of Quantum Mechanics ............................................................................................................... 274 6.3 Mathematical Models of Quantum Mechanics ...................... 275 6.4 The Schrödinger Equation ............................................................. 278 6.5 Quantum Tunneling .......................................................................... 284 6.6 Quantum Evolution ........................................................................... 284 6.7 The Stone Theorem .......................................................................... 285 Basic Principles of Mathematical Modeling 17 6.8 Geometrical Formulation of Quantum Mechanics .............. 286 6.9 Quantum-Classical Correspondence ......................................... 287 6.10 The Ehrenfest Theorem and Its Meaning ............................... 298 6.11 Wave Packets in Quantum Mechanics ...................................... 304 6.12 Semiclassical Expansions and Asymptotic Methods .......... 305 6.13 The Density Matrix and Its Relatives ........................................ 305 6.14 Do You Need an Interpretation? ................................................. 305 6.14.1 More on Copenhagen Interpretation ........................ 308 6.14.2 Bohm's version ................................................................... 313 6.14.3 Statistical interpretation ................................................ 314 6.14.4 The Many-Worlds Interpretation .............................. 315 6.15 Causality ................................................................................................ 317 6.16 Quantum Chaos .................................................................................. 318 6.17 Path Integrals in Physics ................................................................ 318 6.18 Quantum Field Theory .................................................................... 321 6.18.1 More on Relativistic Invariance .................................. 323 6.18.2 Feynman Diagrams .......................................................... 324 6.18.3 S-Matrix ................................................................................ 324 6.18.4 Particles and Symmetries in Quantum Theory ... 325 6.18.5 Quantum Field Theory and Mathematics............... 326 6.18.6 Canonical quantization in QED .................................. 329 6.18.7 Gauge Fields in Quantum Theory .............................. 329 7 Stochastic Reality ................................................................................................ 331 7.1 Thermodynamics: the Study of Paradoxes ............................ 332 7.2 Statistical Way of Thinking ........................................................... 333 7.3 Statistical Equilibrium .................................................................... 335 7.4 Statistical Ensembles ....................................................................... 336 7.5 The Bogoliubov Chain ...................................................................... 337 7.6 Chaotic Behavior ................................................................................ 338 8 Radiation and Matter ......................................................................................... 341 8.1 Interaction of Electromagnetic Radiation with Matter. General Concepts ................................................................................................. 341 8.2 Field Energy Dissipation in Matter ........................................... 344 8.3 More on Charge in Electromagnetic Fields............................ 347 18 Contents 8.3.1 Interaction of a Particle with a Standing Wave . 348 8.3.2 Interaction of a Particle with a Traveling Wave .... 358 8.4 On Hamiltonian Formalism for Particle Motion in Electromagnetic Fields ..................................................................................... 358 8.5 Interaction between Atoms and Radiation Field ................. 359 8.6 Laser-Matter Interaction ................................................................ 360 8.6.1. Ultrashort Laser Pulses ...................................................... 360 9 What remains to be solved? ........................................................................... 362 9.1 The Standard Model ......................................................................... 362 9.2 The Arrow of Time ............................................................................ 363 9.2.1 Perennial Problems ......................................................... 365 9.2.2 Observations of Possible TRS Breakdown ............ 367 9.2.3 Model-Based Claims ........................................................ 371 9.2.4 Closed Systems .................................................................. 373 9.2.5 Irreversibility and Time Reversal Noninvariance: Remarks about Terminology ......................................................... 373 9.2.6 The Time Operator ........................................................... 376 9.2.7 Elementary Properties of the Time-Reversal Operator ................................................................................................ 377 9.2.8 Time Operator in Classical Mechanics ..................... 378 9.2.9 The Pauli Theorem ........................................................... 380 9.2.10 Time Reversal Puzzles .................................................... 381 9.3 Irreversibility ...................................................................................... 387 9.4 Origins of Unpredictability ............................................................ 390 9.5 Understanding superconductivity ............................................. 392 9.5.1 Early History of Superconductivity ........................... 393 9.5.2 Some Physical Models of Superconductivity ........ 396 9.6 Superfluids and Supersolids ......................................................... 401 9.7 Relativity ............................................................................................... 402 9.7.1 Special Relativity ............................................................... 402 9.7.2 General relativity .............................................................. 402 9.8 Gravitation ............................................................................................ 403 9.8.1 The Equivalence Principle ............................................ 407 9.8.2 The Einstein Equations .................................................. 408 9.9 Cosmology ............................................................................................ 409 Basic Principles of Mathematical Modeling 19 9.10 Black Holes ........................................................................................... 413 9.11 Quantum Gravity................................................................................ 415 9.12 String Theory and String Theorists ........................................... 416 9.13 Is Relativity Firmly Established? ................................................ 421 10 Climate as a Physical System ........................................................ 426 10.1 Some purely Climatological Questions. ................................... 429 10.2 Some Purely Physical Questions ................................................ 431 10.3 The Earth as a Black Body Emitter ............................................ 434 10.4 Climate and Weather ....................................................................... 436 10.5 Dynamical Systems in Climate Modeling ................................ 440 10.6 Combining Models with Observations .................................... 447 10.7 Climate Variability ............................................................................ 449 10.8 The AGW Evidence ............................................................................ 450 10.9 The Evil Role of Carbon Dioxide ................................................. 460 10.10 The Role of the Sun ........................................................................... 470 10.11 Limitations of Current Climate Modeling ............................... 474 11 Made in physics .................................................................................. 478 11.1 Exported Models ................................................................................ 479 11.2 The Limits of Sociology ................................................................. 480 11.2.1 Self-Reproducing Social Patterns .............................. 481 11.2.2 Archetypical Questions of Social Sciences ............ 486 11.2.3 Limits and Errors in Social Sciences ......................... 488 11.3 Hierarchical Multilevel Systems .................................................. 490 11.3.1 The Politics of Bureaucracy ......................................... 490 11.4 Physical Economy ............................................................................. 492 11.5 Naive Taxation Models .................................................................... 495 12 Conclusion and outlook .................................................................. 501 13 Bibliography ........................................................................................ 505 20 Introduction 1 Introduction By some experience I have noticed that many people do not really read the whole of a science book, actually they don't read anything beyond the foreword, introduction and conclusive chapter. Bearing this in mind, I decided to slightly exceed the commonly accepted scope of these ancillary parts, although the present manuscript is strictly speaking not a science book. It pursues two main goals: firstly, to refresh the standard repertoire of working physicists and, secondly, to emphasize the role of interdisciplinary links enjoying physical and mathematical inclination (I call the col- lection of all such links a physmatical network). The first goal implies only a superficial coverage of subjects, with the main intention here being to arouse interest to foundations, whereas the second goal was to demonstrate occasional fruitfulness of cross-disciplinary jumps and unexpected analogies, often regarded with disfavor as rather philosophical and speculative. The very notion of "interdisciplinary approach" as well as the words "interdisciplinary", "cross-disciplinary", "transdisciplinary", or "multidisciplinary" seem to be strongly discredited although it is often asserted that highly institutionalized knowledge tends to limit understanding. Scientists habitually consider all such terms as a distinctive mark of unprofessionalism, when everything is swept into a single heap and it would be difficult to set up a clear-cut problem under blurred circumstances. As a consequence, no specialized knowledge appears to be required, and active charlatans with their childish babble or at least holistic half-professionals with their vague claims may profit. Although such a danger really exists, I still think that total refutation of the interdisciplinary approach (or even its occasional equalizing to pseudoscience) is erroneous. Interdisciplinarity is basically a good thing, allowing one to transcend artificial boundaries and to overcome excessively narrow specialization. More and more areas are becoming inherently interdisciplinary: biophysics, biomechanics, medical physics, other life sciences, robotics, nanoscience and nanotechnology, quantum computing, ecology, climate studies, complex systems and so on. Such renowned scientific journals as the Physical Review E and Physica D claim to be "interdisciplinary in scope" (PRE) or devoted to "nonlinear phenomena in general" (Physica D). Yet people are uncertain whether an interdisciplinary career is in principle feasible, with interdisciplinary studies only denoting a rubric. People who are comfortable in multiple areas, e.g., in physics and social sciences, in mathematics and climate science, human perception and nuclear engineering, chemistry and geoscience, mathematics and genetics, or political analysis and scientific ecology are exceedingly rare, and I think one should create a favorite environment - institutional, intellectual and moral - that would encourage the emergence of a new breed of scientists feeling at home Basic Principles of Mathematical Modeling 21 with unusual alliances of specialties. These are Renaissance persons5 yet I don't think that today value of such scientists is properly appreciated: in spite of many words pronounced in favor of interdisciplinary research, the current reward system seems to impede flourishing of cross-field experts and projects. In this book, I group together things that appear to be utterly diverse partly in order to emphasize the value of interdisciplinarity. Another curious observation of mine was that people - even very qualified persons - mostly do not think thoroughly about the basics. This is quite natural: people study the rudimentary things at a very young age when wonderful distracting factors surrounding them are abundant and afterwards, in adult life, there is a drastic shortage of time to return to foundations. One should not blame people for a few blank spots in their basic thesaurus. Many of us have probably met some angry, disturbed and humorless professors who were pinching students and younger colleagues, without any compelling reason, solely on the ground of having forgotten allegedly elementary facts. Bearing this situation in mind, I have devoted a considerable space to the subjects commonly considered too primitive to pay attention to. To me, they proved to be not at all primitive, and I often found, with utter amazement, that thinking about the basics rapidly develops into facing very intricate issues protruding into various fields of physics and mathematics. Although this remark about the importance of reviewing elementary concepts may sound trivial, I wanted to share my amazement with other people. This explains why I ruminate about pesky basics for so long in the manuscript. This book is devoted to an informal discussion of patterns constructed for treating physical problems. Such patterns, when sufficiently formalized, are usually referred to as "models", and they tend to be applied today not only in physics, but conquer the fields traditionally occupied by other disciplines generally considered to be totally different from physics. Accordingly, in this book the word "physics" is understood in a broad sense as the general study of natural phenomena. A tiny part of the models related to natural phenomena may be set up as mathematical problems and solved using contemporary mathematical means, exactly or approximately (e.g., numerically), whereas a much larger part of the models can only be qualitatively described. These latter verbal patterns are typically regarded as imprecise low-level statements which, hopefully, will be formalized in future. A mathematical formalism, however, does not necessarily imply exact knowledge; rather it demarcates the frontiers of ignorance that are fuzzy in qualitative statements. Nevertheless, a large part of this book is devoted to qualitative statements and verbal patterns considered as a kind of raw material for building up satisfactory mathematical models. Inevitably, the presentation of many topics may contain short excerpts from the courses of classical and quantum mechanics as well as from classical 5 A renaissance person is a model of personality possessing a broad range of knowledge. One of such person was, for example, Alexander von Humboldt who had the ability to embrace nearly all scientific disciplines known at his time and traveled through all the continents (maybe except the Antarctic). 22 Introduction electrodynamics and quantum field theory. Much of this material may appear trivial and even unnecessary. I think this impression is false. The aim of including the textbook excerpts is to trigger the imagination. Well-known results start the intrigue, and afterwards the new and unusual things come along. The reader may also find it annoying when I use innumerable "possibly", "probably", "might be", "perhaps", "in principle" and so on. The reason for this approximativeness is not an obsessive carefulness but merely an inexact knowledge. Unfortunately, most of the knowledge in the world is inexact and cannot be reliably quantified, and one should not commit the adolescent sin of not admitting it. In the Middle Ages, there existed the so-called "Exempla", i.e., collections of illustrative examples to be used by priests to save the parish people from falling asleep during the sermon. The wakening effect was reached by the presence of numerous vivid details in the "exempla", for instance, chilling minutes about hell, the devil, demons, etc. Imagining easy to understand, lifelike pictures, the public heard prayers. In this book, I have tried to produce something of the kind of such "Exempla", although I am not sure I can always keep the reader awake. Being overloaded with personal details and opinions, this book nevertheless contains examples of the application of selected mathematical methods to mathematical models frequently encountered in various branches of science and engineering. Since physics seems to be the best-known collection of models, it is physics based mathematical models (PBMMs) that are predominantly discussed. Even when other disciplines come into play, most of the models under review have been imported from physics and adapted for scholarly or engineering problems arising in these disciplines. To promote cross-fertilization of ideas between a variety of disciplines, it is often advantageous to consider the so-called complex systems requiring a real exchange of concepts and techniques. In this transdisciplinary approach there is of course a danger of debating and arguing about too many things. After a brief methodological prelude about general modeling principles (Chapter 2), some standard mathematical techniques are informally discussed in Chapter 3. This chapter is neither a micro-handbook nor a review of mathematical methods in physics. The main purpose of this "mathematical potpourri" is to introduce some mathematical concepts, mostly of geometrical nature, which have long become routine in mathematical texts but still are not quite easily acquired by physicists and engineers engaged in the business of mathematical modeling 6 . Chapter 4 is devoted to classical mechanics culminating in the theory of dynamical systems. This is perhaps the main part of the book; the emphasis in it is placed on dynamical systems - this is due to the fact that change is the most interesting aspect of models. Moreover, classical dynamics is probably the most developed part of science, it studies the evolution of systems of material points, bodies that are so small that their inner structure is disregarded and the only surviving characteristic is their 6 I have written "business", but it is probably a wrong and compromised word; one should rather think of an art of mathematical modeling, even in its numerical stage. Basic Principles of Mathematical Modeling 23 position in space, r= ri(t) . The dominant modeling concept exploited throughout Chapter 4 is the notion of local equilibrium. Mathematical modeling of complex systems far from equilibrium is mostly reduced to irreversible nonlinear equations. Being trained in such an approach allows one to model a great lot of situations in science and engineering. In Chapter 5, classical field theory is briefly outlined, with some accent being placed on wave motion and wavelike models. Since there is hardly any means to define what can be called a wave process in general, a very broad variety of problems is mentioned in this chapter. Physical systems can be roughly separated into two classes: particles and fields, correspondingly there are two basic classes of models. I used the word "roughly" because in fact there is an overlap, for example, particles serve as field sources. Moreover, the separation of matter into particles and fields is, strictly speaking, outdated and incorrect. It is used here only for convenience: the main difference between dynamical systems in classical mechanics, where particles are studied, and in field theory is in the number of degrees of freedom. Any classical mechanical system consisting of a finite number N of particles has only a finite number of degrees of freedom, n = 3N, whereas fields possess an infinite number of them. Chapter 6 is devoted to quantum (and mostly relativistic) fields. No former knowledge of quantum field theory (QFT) by the reader is implied, although the corresponding mathematical problems are quite intricate. Quantum field theory has a reputation of being hard to study, but I think that such a reputation is mostly due to "user-unfriendly" expositions, heavily technical and overloaded with details. I hope that after scrolling this chapter one will be able to read the literature on QFT with the least amount of pain. Quantum field theory can be crudely interpreted as quantum mechanics of the systems with infinite number of degrees of freedom. The development of quantum field theory began with quantum electrodynamics whose main equations appeared in the 1920s, almost simultaneously with nonrelativistic quantum mechanics in the works of Dirac, Heisenberg and Pauli. The primary bunch of quantum field models being just an extension of quantum electrodynamics may be called the "old" quantum field theory. As it is customary, we shall discuss quantum fields in the relativistic fourdimensional context, although the requirement of relativistic invariance is, strictly speaking, not necessary for quantum fields. After some recapitulation of the four-dimensional formalism, we shall subsequently observe quantum fields for spin 0, spin 1, and spin 1/2. Then we shall discuss a "new" physics of quantum fields, where geometric ideas are fully exploited. Thus, theories based on gauge invariance comprise a class of "new" quantum field theories. One can illustrate the bridge between "old" and "new" quantum field theories by an observation that all physical theories in four-dimensional spacetime are characterized by a number of common features. In particular, long-range forces should exist that require conservation of the corresponding charges. This fact provides a passage from "old" to "new" QFT based on gauge theories. It is widely known that methods of quantum field theory are applicable in other areas of physics; the most popular application area of such methods is statistical physics. 24 Introduction Mathematical models of quantum theory reviewed in Chapter 6 are partly connected with the wave problems discussed in the preceding chapter, so there are a number of natural cross-chapter references. This fact may be viewed as a manifestation of modularity and reusability of many mathematical models in physics. Symmetry issues that should necessarily be taken into account both in the orthodox quantum theory and in field theory are tackled in many sections of Chapters 5 and 6. Apart from a description of mathematical models of quantum mechanics, questions of its interpretation are discussed in this chapter. Although bearing a heavy philosophical load, these questions survived a renaissance in the 1990s and continue to be popular also among physicists dealing with quantum computing and quantum cryptography. For me, the so-called Copenhagen interpretation is good enough, although it leads to paradoxes and inconsistencies. It is presumed that N. Bohr understood causality as the Laplace determinism only, i.e., in a narrow sense, and juxtaposed to it "complementarity". Similarly to Laplace who extrapolated the successful solution by Newton of the Kepler problem on the entire universe, thus postulating a mechanistic model of the world, Bohr created an anti-Laplace model by a philosophical generalization of the Heisenberg "indeterminacy relations" which are in fact just trivial consequence of the Fourier integral. This juxtaposition led Bohr to the idea of incompatibility of quantum mechanics with determinism, at least of the Laplace type. Less conservative ideas lead to a negation of causality in some situations with participation of quantum particles, i.e., to acausal or even anticausal quantum models. Causality as a principle is discussed more or less thoroughly in this chapter. By the way, when studying quantum mechanics, it is only natural to acquire some interest in its historical developments. Physics professionals usually do not have a passion for the history of physics, mostly considering the human element behind the scene an unnecessary distraction. Yet people are different. As far as I am concerned, my interest in specific physical phenomena was quite often fueled by the desire to comprehend the corresponding historical occurrences. Besides, some evolutions of science contain a pronounced dramatic element and bringing it to light allows one to deepen the purely scientific understanding. For instance, I have found out that scrutinizing the historical facts which accompanied the development of superconductivity allowed me to understand its theoretical models easier and better. Statistical models treated, maybe a little superficially, in Chapter 7 lead to problems that cannot be fully treated within the framework of classical theory only, despite a lot of classical concepts discussed previously in Chapter 4 being invoked. It would of course be difficult to treat the whole scope of statistical and stochastic issues permeating both physics and other disciplines using methods developed for physical systems (such as economics and sociology), thus many problems are only briefly mentioned in this chapter, with respective references being provided of course. Chapter 8 is devoted to the loose description of theoretical approaches to the interaction of radiation, both electromagnetic and corpuscular, with Basic Principles of Mathematical Modeling 25 matter. The chapter is divided into two parts, one related to the electromagnetic field (mainly radiation) in material media, the other to the passage of particles (mostly charged ones) through matter. The concept of material medium and its interaction with external agents, in its general setting, has a great many particular facets far beyond physics. For example, many social problems may be reduced to the depiction of an appropriate medium and its interaction with external influence such as immigration. One can also portray a human society in a steady communicational state affected by outside opinions. Mathematical models of immunology are conceptually close to these considerations. In physics, external agents provoking a response in material media are usually taken to be radiation fields or streams. The problem of material media response is very complex and is dissected - sometimes artificially - into many subproblems and models. Typical models are related to the interaction of pulsed electromagnetic radiation with material media. Such models are very relevant these days, specifically with regard to new laser technologies allowing one to produce extremely powerful ultrashort pulses. The next Chapter 9 outlines "dark spots" - subjects that not only the author does not fully understand, but also the great minds of the current physmatical milieu seem to be unable to give exhaustive answers about. In this chapter and the next one, two large subsections can be found - on time reversal symmetry and about Earth's climate (the latter discussed in Chapter 10) - like "novels in a novel" or, say, dreams seen in a dream i.e., so to say, dreams of a second order. And like in a dream, the real problems are mixed with fantasies, speculations and pseudoproblems. Both subsections contain elements of a journalistic essay, and I do not shy away from discussing the fictive problems bordering on metaphysics or excessive philosophical generalities. In fact, a lot of current physics easily accommodates things that are little more than philosophy and cannot be falsified in any way. For instance, the problem of time-invariance violation discussed in this chapter is of semiphilosophical, worldview character. For centuries it was believed that all events are, in principle, predictable, time-revertive, and can be described by differential equations similar to the equations of motion for an isolated body. Seemingly unpredictable and irreversible phenomena, such as weather or human behavior, were believed unpredictable and irreversible only due to a very large number of variables. As to predictions, it is still hoped by some scholars that with the advent of the powerful computers all long-range predictions would be possible (which is probably wrong). When thinking about mathematical models in physics, one cannot get rid of a thought that the whole army of researchers has won a great many battles but is losing the war. Indeed, there exist several diverse paradigms in physics which serve as a base for model construction, but taken together they produce an impression of great confusion. Which model of the particle is ultimately correct - pointlike, as prescribed by special relativity, or extended and spread over some finite domain, in accordance with quantum mechanics? Or a tiny string? Or containing an internal world? Is the reductionist approach, when one attempts to explain all phenomena as based on a fundamental set of 26 Introduction elementary interactions, in general valid? One can notice that almost all cases of physical interest, except probably some exotic ultra-high-energy accelerator experiments, involve systems of a great number of particles. It may be impossible to directly calculate the behavior of complex systems from single-particle models. For example, reductionist arguments trying to derive properties of biological tissue starting with some fundamental model such as Newton's second law or the Schrödinger equation can hardly be called adequate. In Chapter 9 we shall discuss a typical reductionist problem: time reversal in complex physical systems. It is generally believed that each phenomenon in our reality should be, in the final analysis, invariant under time reversal on the ground that the most popular mathematical models of fundamental interactions are time-invariant. This, however, can be a delusion or at least a stereotype. Chapter 10 "Climate as a Physical System" could be considered as a continuation of "dark spots". It has already been emphasized that the climate system is a very complex physical object and prognoses of its evolution must necessarily be an extremely refined issue. There are many discussions among professional climatologists, meteorologists, physicists and representatives of other sciences about how to approach - not even to solve - this problem. There seems to be more of humanities and politics than of natural sciences in the climate disputes. The matter is that humans - even the most enlightened climatologists do not know enough either about the Earth's climatic system or about the chaotic dynamic systems to produce accurate mathematical models containing thousands of entangled variables. Hence the prevailing role of the anthropogenic factor compared to the natural influencers on Climate such as solar activity, oceanic circulation or lunar motion is highly questionable. Chapter 11 is devoted to mathematical models beyond physics. Physics is distinguished from other disciplines also employing mathematical modeling by the fact that models in physics are linked. This is an important property ensuring the success of the collection of mathematical models called a science, and that is a feature that makes physics "the splendid architecture of reason" [18]. This linked architecture appeared mostly due to firmly established laws of physics that can be expressed in the form of differential equations. As soon as a model is disconnected from the main architecture, its heuristic value becomes substantially reduced. One could see this diminishing value of standalone models - despite the generally flourishing physics - on the examples of numerous group (multiplet) models of elementary particles, being of fashion in the 1960s. Nowadays, after the Standard Model epoch, nobody seems to remember those quickly baked concepts which did not use the wealth of physical results. A similar situation is typical, for example, of economics where mainly ad hoc mathematical models are in use, characterized by a lot of arbitrary parameters and suggestions. Now, let me make some additional comments about the subjects contained in the present book. This book is not about well-known mathematical methods in physics - there exist a great lot of sources on this subject, and I had no intention to compete. The reason for discussing Basic Principles of Mathematical Modeling 27 foundational issues was to stir up the reader's interest, inducing one to address oneself to more precisely focused and professional sources. Nonetheless, in this manuscript I tried to evade repeating well-known results, numerously described in the vast literature, but whenever I still had to address such results - usually while starting to describe a particular model or a theoretical approach - I made an attempt to concentrate on those features which seemed to me fresh and unexpectedly connected to apparently far away subjects. Indeed, in physics and mathematics combined we primarily find out about the existence of wonderful connections between things which at first sight seem to be completely different. Some of these connections are really intriguing. Today, well-known links exist between the theory of dynamical systems and statistical mechanics, between the models of black holes and thermodynamics, although these subject pairs are apparently related to different fields of physics. There is now also a well-known interface between superconductivity, cosmology and high energy physics, based on the unifying idea of gauge invariance. Less known connections may appear quite striking. For example, can one apply the notion of wave function in classical statistical mechanics? What does classical signal analysis have to do with the foundations of quantum mechanics? Another example of networked physmatical domains is the presence of thick links between heat and mass transfer (with countless engineering, geophysical, ecological, sociodemographic, etc. applications), quantum mechanics, and geometric topology, in particular the now fashionable7 Ricci flows. The latter describe the spread of a tensor quantity, the Ricci curvature, which in distinction to scalar quantities governed by the heat, diffusion and Schrödinger equations manifests the behavior of a purely geometrical property. But the idea behind all these models consisting in the dynamical smoothing out of high concentrations is basically the same. Or, I was astonished to find out that the Bogoliubov-Mitropolski expansion, well- known in the classical vibration theory [242], can be efficiently applied to nanostructure analysis. The burgeoning areas of biotechnology, nanotechnology, quantum engineering, and quantum information processing are today closely converging. This cross-border linkage of apparently diverse disciplines may serve as an example of a physmatical network in action. One might notice that physics is full of shared approaches and synthetic subdisciplines such as the interaction of radiation with matter, astrophysics, geophysics, cosmology, etc. Even the elementary study of physics and its relevant mathematics increasingly shows that apparently disparate topics are in fact closely related. For example, let us start from mechanical problems. Then we very soon come to a well-known fact that Hamiltonian systems determine a vector field and solving the Hamiltonian equations means finding the integral curves of this vector field. This is known as finding the dynamics of a physical system. Only a slight generalization of this simple physical situation leads to familiar linear algebra notions and geometrical concepts: a first-order differential equation on some manifold M is a vector field on this manifold, and solving a differential 7 Mostly due to the famous recent results by G. Perelman. 28 Introduction equation for given initial conditions is translated into finding a curve or, rather, a family of curves. All this is of course trivial, yet there is an unexpected link here, namely that we naturally come to the notion of a flow for a vector field leading in its turn to quite sophisticated concepts of the theory of dynamical systems with a multitude of applications. In other parts of physics and associated to it mathematics, we have, for instance, the HahnBanach theorem and then the concept of separation of convex sets, which eventually leads to such ultra-practical things as the optimal control of missiles. Still in other nodes of this physical-mathematical network we locate the integral equations and Fredholm operators naturally connected with the scattering of waves and particles and, although it may seem strange, with the Navier-Stokes equations. Hence some people may find this text suitable for acquiring some knowledge on the interaction between different branches of mathematics and physics, starting from some seemingly familiar examples contained in the book. In physmatics, due to its networking properties, it is easier to think inductively or, roughly speaking, to generalize simple examples - in contrast to the famous Sherlock Holmes' method. Although this book is about mathematical models and, predominantly, mathematical models in physics, it contains many notions that appear too inconcrete from a scientific point of view: nature, society, values, knowledge, science, politics, policymakers, bureaucracy, democracy, scientific tribes, history, intuition, conscience, God, faith, beauty, truth, freedom, etc. These descriptive terms usually serve to label some phenomena with the purpose to invoke associations and exploit human attitudes. Such objects form fuzzy sets or are high-level entities existing in dedicated spaces, multidimensional and of course not necessarily linear so that it would be difficult to define them exhaustively with words. These and a great deal of other badly defined concepts are appealing to unwritten laws. Moreover, such objects cannot be in general reduced to "black-white" dichotomies 0-1, but involve complex though essential details. More than that, the subsets of these fuzzy notions which describe certain groups of people are not only very unspecific but, on the one hand, impersonal while on the other hand too poorly defined to be studied by really scientific methods. By the way, fuzzy notions are often used not only in the humanities, but also in physics and even mathematics. For instance, such words as local, very small, negligible, immediate vicinity and their inverse i.e., global, very large, essential, far from, etc. are ambiguous words, but their use seems to be unavoidable in science. So it would be wrong to assume that "exact" sciences are based solely on numbers understood as ordered sets; by the way, one may recall that numbers themselves originally modeled physical (or rather biomechanical) objects such as human fingers. Since this book is mainly devoted to building mathematical models in physics, allow me to expand a little bit on the subject of modeling in this introduction. Mathematical modeling in physics is a nontrivial interplay between mathematics, physics and engineering in the study of complex systems (see in the following chapter a slightly more extensive discussion of the properties of models in physics). Any model in general is a simplification of reality, with irrelevant details being ignored. It follows from here that when Basic Principles of Mathematical Modeling 29 a model is used, it may lead to incorrect predictions when the neglected details cease to be irrelevant. Thus, it is important to estimate the limits of applicability of the model - the art largely mastered by theoretical physicists and often overlooked in more applied disciplines and, strange as it might seem, by computer scientists whose primary task is to appreciate errors. And appreciating errors is indispensable for validating models. When we are approaching a problem, we tend to do it in a biased, limited way (recall the chrestomathic tale "The Blind Men and the Elephant"), with some feature seeming the most substantial according to previous experience, prejudices, stereotypes and other purely human factors. Understanding of where a particular model fits within the overall theoretical description enables one to estimate the model limitations. Moreover, it does not mean that we shall always succeed in giving a useful answer by employing mathematical tools. When one tries to solve a problem that one does not know thoroughly, there is always the risk of building castles in the air. For instance, it would be risky to model biomedical processes without elementary knowledge of the subject area. I think that if one is interested in applying mathematics, one should start first by studying carefully the problem one is interested in and then learning the necessary mathematical theory. The use of mathematics in physics is usually unavoidable, as well as in other areas which claim to be "exact" (chemistry, engineering, some parts of biology). Examples of disciplines that are definitely not "exact" are the major part of medicine, psychology and philosophy. The bad thing about such disciplines is that they are relying more on so-called "expert opinions" and too generally understood "human experience" than on a solid reproducible base so that it is difficult to controvert or falsify the vague statements of inexact disciplines. One often forgets the truism that even in "exact" sciences the theoretical solution to an actual problem must be confirmed by experiment. Experimental proof is the strongest of arguments whereas reference to an authoritative opinion is the weakest one. 30 Principles of Mathematical Modeling 2 Principles of Mathematical Modeling The world available to us may be defined as the sum of manifestations of the systems having different levels of complexity. The systems whose complexity (defined in any manner) exceeds our abilities by orders of magnitude cannot be studied by exact methods, in particular, by mathematical means. In this case, two possibilities come out: either to dissect the observed system into subsystems of lower complexity, so that such subsystems could already be studied by available mathematical techniques, or resort to fuzzy cognitive representations. The first option is that of mathematical modeling, whereas the second is closer to philosophy. In this book, predominantly the first alternative is used. Nonetheless, models of reality can be of vital importance also without an exact mathematical form. When the Black Death - the plague - swept relentlessly over Europe (it was several times, probably the most ferocious pestilence attack was in the 14th century), there were three main models for the cause of the plague: it was regarded as a medical event, as astrological misfortune, or as a God's punishment. Thus, there were roughly speaking three respective classes of models explaining the plague cause: terrestrial, celestial, and divine (religious). These three classes of models were not independent. Terrestrial models, for example, were based on the ancient Greek science represented, e.g., on the Aristotle's "Meteorology" stressing beside atmospheric phenomena the view that certain conjunctions of planets (e.g., Mars, Saturn, Jupiter) would bring disaster, and on the Hippocratic "Epidemics", where the importance of astrology for medical practice was discussed. We shall deal with astrological and religious models a little later; now it may be noticed that terrestrial models were closer to contemporary scientific concepts than the two other classes of models. Terrestrial models discussed atmospheric events, weather, climatic variations, comets, meteors, earthquakes as possible sources of poisonous gases spoiling the air and thus causing illness. Rain, thunder, storm, wet winds could disperse ominous vapors (for instance, produced by cadavers rotting in swamps). So the essence of the model for the cause of the plague consisted in the notion of the poisoned air entering the body, contaminating it and causing its organs to disintegrate - quite a viable model even from the contemporary viewpoint. It is clear that understanding the cause of the plague could substantially increase chances to prevent its spread. We shall discuss below some models of epidemics based on dynamical systems, here I wanted to emphasize the importance of non-mathematical models for the human race. Many people think that real science and technology begins only where sophisticated mathematical methods enter the picture. I think it is just a stereotype. Refined mathematical models of the world are not necessarily Basic Principles of Mathematical Modeling 31 destined to play an instrumental part in life practice. For example, there is no practical necessity to use geometric theorems in order to measure the plot area: the value of a particular estate is only very approximately determined by its area, "human factors" such as prestige or location are much more weighty. For practical purposes, the ancient Egyptian formula, S= a+c 2 + b+d 2 , crudely expressing the area of a rectangle as the product of half-sums of the opposite sides is quite sufficient. Moreover, using geometric theorems would be counterproductive for a land surveyor, since they require more precise measurement which does not produce a better estimate of the plot real value. Actually, the dichotomy "mathematical models - cognitive models" is linked to the models of language and semantic scattering in it. Each word in any human language has a spectrum of different values (meanings). Such a spectrum can be represented in the dictionary8, but if one reckons with a multitude of nuances and gradations (which may be represented as a fine structure of each semantic value) then one is inclined to think that discrete models of the language may be too deficient. Therefore, to build up a model of the language which would be adequate to its complexity one has to consider continuous spectra of values. Cognitive models of the world exploit such continuum spectra of verbal values, whereas mathematical models are centered around discrete notions which can be more precisely defined and thus expressed by equations. We shall return below to cognitive models, some of them, as I have mentioned, may be quite valuable. Nevertheless, one can often encounter an utterly disdainful attitude to models which are not based on mathematical equations, especially from the young people studying mathematical, physical, or engineering disciplines. These mathematical extremists tend to decry anything not containing mathematical formulas as a "bla-bla-bla", not understanding that mathematical notation and models are only related to a comparatively low complexity level. Therefore, the question arises, which seems to be common for all young people who study "exact" sciences but are in fact attracted by the humanities: is there in this latter part of the culture a concrete knowledge that must be considered necessary for understanding the world? This question may be rephrased in more operational terms: does the inductive method of scientific research bring the results whose value may be as high as obtained by formal deductive method favored by mathematicians? Traditional physics combines both methods, but which one is more important? The question may probably be formalized, but this is a prerogative of the philosophy or history of science. I intentionally put it in vague terms, because I do not know the answer. The only thing that seems to be certain: modeling requires a certain kind of people possessing crossover skills and an intensive dialog between specialists representing a variety of disciplines. 8 In fact only a discrete semantic spectrum can be represented in the dictionary. One can imagine here a massive thesaurus such as Webster, Oxford, Roget's, Larousse, Duden, Meyers, etc. 32 Principles of Mathematical Modeling Here, I would like to make a personal remark. For myself, ruminating about mathematical modeling in general is simply an occasion to babble on unspecified subjects, making idle recommendations. There are, however, people who possess a natural aversion to diffuse quasi-philosophical and other verbose texts. They can easily omit this chapter. 2.1 Basic Principles of Mathematical Modeling This introductory section reiterates some traditional concepts which are basic to the overview of connected mathematical models of physics described in the subsequent chapters. The reader can find that I did not shy away from truisms and general statements. I hope I shall be forgiven for these elements of didactics. Moreover, this section contains standard methodological recommendations about rudimentary modeling principles. I tried to make all these observations a bit less boring by supplying them with some personal remarks. The reader not much interested in contemplative hints on unexpected interrelations between different domains of physics and mathematics may easily skip this section. Since a large part of science is based on models, it would be useful to learn some general principles of their construction9, no matter how declarative the enumeration of such principles may seem. To make their discussion less boring I rely not on terse quasi-scientific "proofs", but on simple examples - for myself, I call this anti-didactic procedure "descholastization". In a scientific research or an engineering inquiry, in order to understand, describe or predict some complex phenomenon, we employ mathematical modeling, already reflected upon. In the mathematical modeling technique, we describe the state of a physical, biological, economical, social or any other system by time-depending functions of relevant variables. Then we attempt to formulate, in mathematical terms, a basic law governing the phenomenon. When we are interested in the system's behavior, the law to be provided is expressed by one or several differential equations relating the rate of temporal change of the system variables to the components of some vector field, analogous to a field of forces. Such a formulation of a modeling task leads to a dynamical system. Then the dynamical system models some piece of the world - of course, it may not model it very well, but the model's success or failure largely depends on the law that has been proposed to govern the phenomenon. Usually, mathematical modeling can be naturally connected with dynamical systems theory. It means that mostly the dynamical processes, that is those evolving in time, are considered. This restriction leaves out quasi-static models displaying the relationships between the system attributes close to equilibrium (in physics, for instance, the Gibbs thermodynamical equilibrium, in economics - the national economy models, etc.). In general, such equilibrium states when all distribution functions do not depend on time (see Chapter 4 of the book) are disregarded in dynamical modeling. In this book I am focused mainly on physical models and reduction of the latter only to dynamical systems is not necessarily relevant. One may 9 This was in fact the unique motivation to write the entire modeling chapter. Basic Principles of Mathematical Modeling 33 note that, for example, the major part of quantum mechanics is devoted to describing the spectral states of a quantum system i.e. is static by nature. Present-time dynamic modeling of real systems such as weather forecast, climate viewed as a physical system, transportation networks, energy grids, genomics, etc. has become increasingly complex. To obtain trustworthy solutions to these problems, especially real time results, intensive use of computational resources (processors, memory, storage) is required. In general, modeling can be performed in a variety of ways, with different aspects of the situation being emphasized and different levels of difficulty standing out. This is a typical case in mathematical modeling, which implies that the process to be modeled is not necessarily described uniquely. It means that modeling is not reduced to a set of rigid rules but rather provides a vast field for transdisciplinary creativity. The reader I vaguely have in mind is a person who strives to transfer the methods developed within a certain scientific or engineering area to other branches of knowledge. Modern mathematical modeling is a synthetic discipline embracing mathematics, physics, computer science, engineering, biology, economics, sociology - you name it. Mathematical modeling enjoys enrichment due to interdisciplinarity. Although, as I have already mentioned, this book is mostly devoted to physical models, the term "physics" is to be understood throughout the book in the universal sense: most of the models in other fields of human knowledge are increasingly using methods developed in physics (see Chapter 11). When computers were not yet abusing the most part of your time, the main tools of a mathematician were pencil, paper and a waste-paper basket. I don't remember who said that it was the waste-paper basket that distinguished a mathematician from a philosopher, but I still think this is very true even today, although mathematics has become much more scholastic than previously, when it was closely connected with physics. Unfortunately, as far as contemporary mathematical modeling goes, the waste basket should be used much more intensely as a working tool. One might note that nearly all books on mathematical modeling are generally very eclectic and incorporate a variety of random problems as well as disjoint hat trick mathematical approaches. The very notion of mathematical modeling is hard to define - everything e.g., in physics is mathematical modeling. So, this term is infected with vagueness. Nevertheless, there exist certain common principles of modeling such as the just mentioned "divide and conquer" principle. To evade some methodological prolixity, which seems to be customary when speaking about modeling, I may refer the reader to many good books on principles of mathematical modeling ([1, 2, 13]) 10. Here, I mention just the following counterpoints: • Qualitative vs. quantitative 10 The reader can also find some collection of modeling examples illustrating general principles in https://www5.in.tum.de/lehre/praktika/comp_mod/SS02/modeling.pdf, where modeling of dynamical processes is presented. 34 Principles of Mathematical Modeling • Discreet vs. continuous • Analytical vs. numerical • Deterministic vs. random • Microscopic vs. macroscopic • First principles vs. phenomenology In practice, these pure types are interpolated and, like in musical counterpoints, all parts of the model must be blended together as one, even if each is a feature of its own. The ultimate difficulty in mathematical modeling still persists: there cannot be a single recipe how to build a model. One always has to make a choice: this implies rather the art than the science of modeling. As in any art, some vague principles come into play, here e.g. the spirit of Occam's razor, which may be roughly formulated as "simple explanations are preferable to complex ones", should not be violated. This leads to a somewhat deplorable situation: many models may be rejected not because they are basically wrong but because they are too sophisticated. One can, in principle, construct any kind of mathematical model irrespective of their relationship to reality; such models may be non-contradictory and even beautiful (like in the modern string theory), but the requirement of their experimental validation may be substantially weakened. In this connection, I can recall an aphorism ascribed to J. W. Gibbs and quoted in "The Scientific Monthly", December, 1944: "A mathematician may say anything he pleases, but a physicists must be at least partially sane." Mathematical trickery alone rarely brings fruit in physics. A mathematical model is not uniquely determined by the investigated object or considered situation. There are usually a plethora of conditions, and only one of them is selected. Moreover, selection of the model is dictated by accuracy requirements. For example, in motion and surface planning models that are very important in robotics a table may be represented as having a rectangular or oval shape; in road traffic models a car is typically represented either as a point or a rectangle; should one take into account atmospheric influence on a falling body or not? The answer depends on the required precision. When some difficulties arise due to an over-idealized mathematical model of a given physical situation, the model can be changed or improved. This is the approach most favored by physicists. The other approach, logically opposite and adopted by many mathematicians, would be to address the physical situation in the most general and possibly rigorous formulation right from the start imposing restrictions as one proceeds. Such methodics may be quite powerful (historical examples are the works by N. N. Bogoliubov, V. A. Fock, H. A. Kramers, L. A. Vainstein), but often makes the treatment unpalatable for physicists. Core mathematics contained in the real physical models is not too abundant. By "real", I mean those models and theories that can be corroborated or disproved by some experiment or observation. In this sense, I do not think that a vast number of string theories are real, although some of them might be quite "beautiful". Incidentally, aesthetic criteria alone may well Basic Principles of Mathematical Modeling 35 have nothing to do with reality: it may happen that physical phenomena are often described by simple and symmetrical equations which can be considered "beautiful", but the reverse is not necessarily true: not every "beautiful" (i.e. simple and symmetrical) expression pertains to physical phenomenon. It is trivial but not always held that mathematical models should not contradict the fundamental laws of nature. Sometimes, however, this requirement is weakened. Thus, some numerical models, for example, based on the Runge-Kutta method may lead to energy non-conservation. The number of particles or mass should be conserved, which is usually formalized as the continuity equation; again, there exist numerical simulations, with the continuity equation being violated or, at least, not exactly fulfilled. It is not infrequent that the laws of physics are rejected in computer technology: one can recall "magical" fluid simulations and animations in computer graphics. In general, however, mathematical models must be tested against basic laws of physics. Although an ever increasing number of people think that mathematics is totally different from physics and therefore must be free from constraints imposed by physical laws, I think nonetheless that mathematics is a service subject, like the art of hieroglyphic painting in the Oriental cultures, the content being still provided by natural sciences, in this case by physics. From here it follows that many principles successfully employed in physics, such as symmetry and scaling, should be taken into account in the domain of mathematical modeling and widely exploited in order to reduce the complexity of real-life models. In the spirit of our physmatics, one can also extensively use analogies, e.g., chemical reactions are analogous to competition models in biology and economics. The latter fact points at the universality of certain techniques: different objects are described by the same mathematical model. The trivial example is given by the ubiquitous oscillator model which can be equally well applied in mechanical engineering, e.g., to explore the car body vibrations, in electrical engineering, e.g., to study the passage of signals through filters, and in physics - to build up models based on the concept of elementary excitations. On the other hand, the oscillator model participates in the intriguing duality between the Hooke's law and the Newton's or Coulomb 1/r forces. In general, the oscillator model seems to be the most favored by physicists, and I shall dwell on it many times in various contexts. Universality of some models is a manifestation of numerous physmatical polymorphisms, somewhat mysterious analogies hinting at the fundamental unity of all parts of physmatics. This may imply that increasing specialization and fragmentation of physical and mathematical sciences into tiny domains can become the principal obstacle to their development, like bureaucratic subdivision of the world into nationalistic states has become the main impediment to global progress. Being universally interconnected, physmatical models almost always produce sub models, i.e., even apparently simple models are organized on hierarchical principle and can be step-like refined. It is here that the vague boundary between a model and a theory lies: a theory incorporates a large number of interconnected models, just as a 36 Principles of Mathematical Modeling manifold incorporates a number of coordinate patches being glued together. The presence of sub models naturally leads to modularity and reusability - concepts that are well-known for software developers. When treated more or less seriously, mathematical modeling should be endowed with some unifying paradigm which would be operational throughout the whole domain (ideally) or, at least, constitute the backbone of it. For instance, the finite-dimensional linear algebra serves as a backbone for numerics and so-called scientific computing. In the domain of mathematical modeling, the dominant paradigm is a greatly simplified dynamical systems theory. We shall discuss dynamical systems in some detail in Chapter 4 largely devoted to classical deterministic mechanics; here I merely state some salient features of this, now more or less traditional, approach to mathematical modeling. When speaking about mathematical modeling techniques, V. I. Arnold uses the terms "hard" and "soft" models giving an example of multiplication table as a hard model. One can find numerous examples of soft models in life, for instance, the statements of "the more the less" type as well as given by proverbs, aphorisms, and other common sayings. Thus the statement "A man with a watch knows what time it is, a man with two watches is never sure" may be considered an example of a soft mathematical model.11 Nevertheless, one should remember the famous aphorism by Voltaire "A witty saying proves nothing", so the value of common sayings as soft models is very limited. 2.2 Mathematical Models in Physics Almost a century ago, Lord Ernest Rutherford somewhat arrogantly subdivided science into physics and collecting stamps. This contemptuous remark can be interpreted in the following way: "physics" designates a group of experimentally-oriented sciences accompanied by a well-developed theoretical overlay. "Collecting stamps" denotes a group of disciplines that accentuate the descriptions and accumulation of data, for example, old-time biology, orthodox medicine, geography, and many others. Models accepted in such disciplines are mainly reduced to classifications, which seems to be the primary stage of a good theory, and one can observe that traditional disciplines that could be recently attributed to the "stamp collection" class continuously evolve towards "physics" (although with different speed). The theory wrapped over the "raw" reality is quantitative and endowed with predictive capabilities. Thus although the principal results are obtained through observations and dedicated reproducible experiments (as performed by Galileo, but in modern times relying on powerful scientific instrumentation), one should not think that theoretical models mostly fulfil a decorative function in physics as it was frequently claimed by administrators. In fact, experimental achievements and modeling results are difficult to separate, and it is this blend - not the experimental results alone - that can be 11 I like especially the saying "Power tends to corrupt, absolute power corrupts absolutely" which may be regarded as a soft model with a very large area of applicability. Mathematical Models in Physics 37 converted into ubiquitous technologies. Unfortunately, the role of mathematical knowledge in technologies is often hidden. Modeling is an art of simplifying a complex system, and mainly due to this, models play a very important part in physics and technology. A good model enables one to understand the most fundamental physical parameters of a complicated process. Mathematical modeling may be loosely defined as an idealized description of reality constrained by mathematical concepts. All approaches to study the reality, both in the form of natural or behavioral sciences, which are using mathematical tools may be declared as mathematical modeling. Many people even assert that there is no mathematical modeling, but just instances of applied physics and mathematics under a fancy - and sometimes fashionable - name. People claiming that they pursue some scientific purposes are just replacing a natural (or social) object by its model and studying the latter. There is an element of deceit in this process. No model fully describes the studied phenomenon, and models of physics are no exception. Therefore, by the way, the question of a model's applicability is usually highly nontrivial. The already mentioned successful solution by Newton of the mysterious Kepler problem inspired thinkers and philosophers, primarily Laplace, to make bold generalizations and to develop the mechanistic model of the universe. There was no room for randomness in this clockwork vision of the world. The biggest challenge of biology, medicine, society, and economics is that randomness leads to finetuned processes (in time) and structures (in space). It means that the notion of the world as a machine seems to be inadequate. In fact, fully mechanistic nature would be incompatible with life, where evolution gains order through fluctuations. Classical science mostly studied systems and their states close to equilibrium, and that allowed one to construct a beautiful collection of comparatively simple physical models for the world. Such models depicted the systems that reacted on perturbations more or less predictably: these systems tend to return to equilibrium (in the parlance of statistical physics, they evolve to a state that minimizes the free energy). However, systems close to equilibrium can describe only a small fraction of phenomena in the surrounding world; in fact, it is a linear model. In contrast, nonequilibrium systems are ubiquitous in nature. Any system subject to a flow of energy and matter can be driven in the nonlinear mode, far from equilibrium. For example, open systems such as the Earth, living cell, public economy or a social group exhibit highly complex behavior which is hard to model mathematically using the methods adapted mostly to mechanical patterns. Most of the processes in the open systems far from equilibrium are interrelated, nonlinear, and irreversible. Often a tiny influence can produce a sizable effect. One more typical feature of systems far from equilibrium is that they can lose their stability and evolve to one of many states. This behavior appeared so "unphysical" from the habitual viewpoint that many orthodox physicists were inclined to despise those colleagues who were trying to consider systems far from equilibrium, especially those beyond physics. Yet, to model the processes in the real world, one must learn how to describe the 38 Principles of Mathematical Modeling systems far from equilibrium. We shall deal with systems far from equilibrium many times in this book. Physicists generally dislike the term "mathematical modeling" because of its overwhelmingly general applicability. Everything can be declared mathematical modeling. This is, of course, true. The most enthusiastic proponents of mathematical modeling were mathematical physicists, specifically those belonging to the well-known school of A. N. Tikhonov and A. A. Samarski. Unfortunately, I could not find any good cliometric work on mathematical modeling in physics where the popularity rises and falls of this concept would be monitored. Neither could I find a quasi-definition of what a mathematical model really is. Such a definition will be probably useless, yet I would prefer to have a unified consideration. What I understand by mathematical modeling in this book is building compartmental fragments of our representation of reality based on clear-cut assumptions and discussed in mathematical terms, predominantly with differential equations. One must remember that models are not reality and should not be perceived as such. In some extreme cases, models may have nothing to do with reality. There are two favorite mathematical models which play an outstanding role in physics: the harmonic oscillator in its linear part and the logistical model in its nonlinear part. These two privileged models are at the same time simple and rich, giving rise to many theories and connecting diverse domains. I shall discuss both of them thoroughly on a number of occasions. In the singleelectron energy band theory of solid state, the favorite model is that of Kronig and Penney, which beautifully illustrates the spread of individual energy levels and zone formation. There also exist other favorite models which are applied in many areas of science and we shall play with them as well, however, not very thoroughly. Here I typically start from a qualitative discussion of a physical phenomenon to be modeled, then produce physical examples and arguments and only after that I attempt to provide the mathematical results. By the way, one can easily find a situation when the qualitative discussion, though seemingly plausible, gives a wrong answer. For instance, there is a folklore question: you have a hot cup of coffee and want to cool it to the drinkable temperature, will the coffee be cooled faster if you leave it as it is and pour cold milk into it right before drinking than if you pour the same amount of cold milk right away? The usual qualitative answer is "yes", presumably because the difference of temperatures is higher if you leave hot coffee intact, but let us see what mathematics says. Let hot coffee have the initial temperature Θ0 which is higher than the environment temperature T0 so that Θ0 -T0 > 0. To produce a mathematical model corresponding to the coffee-with-milk situation we may assume that cooling subordinates to a linear law (usually ascribed to Newton): -dT(t) dt = k(T(t) -T0), Mathematical Models in Physics 39 where T(t) is the coffee temperature at moment t. This is a simple firstorder inhomogeneous differential equation with a parameter k. One can also consider for simplicity the milk to be in equilibrium with the environment and has therefore the ambient temperature T0 . This assumption is not essential and serves only to simplify the formulas. The general solution to the respective homogeneous equation is T(k, t) = Ce-kt, and if the ambient temperature T0 is constant, we have a physically natural particular solution to the inhomogeneous equation, T(t) = T0 . Then the general solution of the inhomogeneous equation corresponding to the initial temperature T(0) ≡Θ0 is T(t) = T0 -(Θ0 -T0)e-kt To further simplify our model we may assume that coffee and milk have the same density as well as specific heat values. Corrections in terms of the respective density and specific heat differences would be an unwarranted excess of accuracy. Thus, we have TVc+ T0Vm-(Vc+ Vm)Θ -VΘ, where Vc, Vm are the partial volumes of coffee and milk, respectively, V= Vc+ Vm is the total volume, and Θ is the temperature of coffee with milk. Let us now, to answer our first question, compare two cases. 1. We leave our coffee cooling by itself (according to the above equation for T(t)) and pour milk into it just before drinking. The coffee-with-milk has the temperature Θ1(t) = Vc(T0 + (Θ0 -T0)e-kt) + VmT0 Vc+ Vm 2. We pour milk into the hot coffee right away, at T= 0, then the starting temperature becomes ΘΘ̃0 (a physicist might say is "renormalized") where Θ̃0 = VcΘ0 + VmT0 Vc+ Vm , and in the process of cooling the coffee-milk mixture reaches by moment t temperature value Θ2 Θ2(t) = T0 + (Θ̃0 -T0)e-kt. Now, the most astonishing thing happens. By simple algebraic manipulations one can show that Θ1(t) = Θ2(t) for all moments t. Indeed, 40 Principles of Mathematical Modeling Θ1(t) = T0 + Vc V(Θ0 -T0)e-kt= Θ2. In other words, it does not matter whether you pour milk in your coffee first or last thing before drinking. The model described above represents an example of semi-useless models. It is just a simple mathematical record of an everyday observation. There exist, however, totally useless models such as simulation of smoke rings puffed by a cigarette smoker. Constructing an exhaustive mathematical model for such a process would involve considerable difficulties, yet probably its analysis would produce a superfluous knowledge. There exist probably many instances of such extraneous knowledge, but the difficulty is to a priori diagnose it as such. The reason why I am writing about mathematical modeling in physics at all is that, contrary to the general view that physics is centered around the laws of nature, I observe that it deals mostly with their mathematical models. Thus, Newton's law is a successful mathematical model that can be applied to a body moving under the influence of other bodies in the low-energy limit, rather than the universal law of nature such as, e.g., certain symmetry properties of the universe, more exactly, of its part available to observations. Likewise, the Schrödinger equation is a typical mathematical model, also quite successful when applied for description of atomic and subatomic scale particles in the low-energy limit. The wave function standing in the Schrödinger equation is not directly observable and is a typical attribute of a mathematical model. An extreme case of such a model - the wave function of the universe, for example, in the form suggested by J. Hartle and S. Hawking [67], see Chapter 9, is just an interesting speculation and hardly a verifiable model. In this sense, the model of Hartle and Hawking is more like a philosophical concept expressed in the mathematical language (untypical of philosophers) rather than a physical object, that is the one to be experimentally validated. This kind of model building reflects a current trend in physics, where the quest for experimental proof is largely subdued. A huge generation of armchair physicists, focusing almost exclusively on formalism and symbols, produce numerous pieces of writing without much attention to experimental techniques. For them there is no question of participating in and observing the mundane work of experimentalists, with all its tinkering and small cunnings. Armchair physicists are mainly motivated to express themselves so that the subjects are treated for their own sake and glory, with little attention paid to firm experimental knowledge or applications. The produced symbolic forms can also create powerful myths12 and are closer to art than to physics. Moreover, it is still a philosophical extrapolation, a widely spread belief that the models developed by humans and rather arrogantly 12 Examples of such powerful theoretical myths are, e.g., anthropic principle, broadly interpreted many-worlds interpretation of quantum mechanics, and possibly superstring/Mtheories, to say nothing of such pseudoscience concepts as time travel, teleportation of macroscopic bodies, or military use of torsion fields. Mathematical Models in Physics 41 called the "Laws of Nature" should be necessarily applied to the entire universe. This statement may be considered a belief because it has never been proved, only examples are generally used to corroborate it, and isolated facts by themselves mean very little. Nevertheless, to say something against the belief in universality of our "Laws of Nature" is merely a mauvais ton, for example, in the community of astrophysicists, although nobody can deny that astrophysics in general achieved really great successes13. Yet, when one stops questioning the truth, it easily transforms into dogma. Once we have mentioned the Schrödinger equation in the modeling context, let me make the following remark. The situation with the Schrödinger equation is very indicative: this equation works very well in a great number of situations so that the "shut up and calculate" approach to quantum mechanics has become quite popular among physicists. Yet we know that the Schrödinger equation is unsatisfactory in many respects. For example, it describes interaction between the particles purely phenomenologically, with a coefficient U(r), it does not account for the quantum vacuum, it is not at all trivial to get the classical limit, there are still many questions in regard to the so-called Bohmian version of quantum mechanics, etc. We shall discuss the limitations of the Schrödinger equation in Chapter 6. The greatest question of all is, probably: what are the ultimate laws of physics? And I don't think anybody has ever provided the final answer, although there have been many distinguished candidates. This question may be answered in the language of mathematics, like Newton's law was formulated as a mathematical model, but one cannot exclude that the answer will be provided in some other form. It is possible, as Professor John Wheeler, an outstanding physicist famous for his unexpected thoughts, once predicted (see section "Prognosis" below), that if and when people eventually happen to learn the ultimate laws of physics they will be astonished that these laws had not been known from the beginning - so obvious they will look. I have already noted that nobody in fact can be certain that the ultimate underlying laws of physics from which everything we know can be derived do really exist. Even if such a set of final physical statements does exist, it is not obvious that it will be useful for concrete applications. The obvious example is thermodynamics as an engineering discipline. For engineering applications, thermodynamics can be based on the 17th century phlogiston model. Phlogiston, a hypothetical fluid conjured up only to explain the spread of heat, was a typical mathematical - not physical - model. Indeed, the heat propagates in matter in the same way as a fluid diffuses through other media: equations describing both processes are the same. Therefore, an ad hoc model of phlogiston could satisfactorily describe plenty of idealized thermodynamic processes. However, the fact that this model was only mathematical and not physical (or "physmatical" - in our fancy language) eventually backlashed. The 13 Actually, it is in astrophysics that processes are abound when our conventional models for physical laws cease to hold. For instance, it is unlikely that one can apply Newton's law of motion to describe the phenomena accompanying the collision of black holes located in the centers of galaxies. 42 Principles of Mathematical Modeling phlogiston model encountered insurmountable difficulties. Nobody succeeded to observe the fluid called phlogiston or to measure its properties such as density, even in indirect experiments. A lot of heat-generating events, e.g., friction, dissipation in general, electric discharge, etc. could not be explained by the phlogiston model. So, the latter had to be abandoned in favor of the molecular-kinetic theory. I think the phlogiston history is rather instructive, because a relatively successful mathematical model had to gracefully fade away, succumbing to a more physically compelling theory. Incidentally, the same process, but perhaps more painful, was observed in connection with the model of ether. The phlogiston model, a clever gadget which was nonetheless totally wrong from the physical point of view, would have been more adequate than molecular dynamics models of the future, when all molecular trajectories might have been computed. However, now we know that even if one can, in principle, predict the behavior of each molecule, the value of such a prediction for engineering applications will be very low - in engineering models based on thermodynamics people are interested in such quantities as temperature or enthalpy and not in molecular trajectories or quantum particles. For every phenomenon, one may have many possible levels of description, which makes the physmatic modeler's life more interesting but not easier. Physics looks as not simply a good reason - it is much less than that. It is reason constrained by experiment, so one must give pragmatic answers and withdraw from being carried away by mathematical possibilities. The goal of physically-based mathematical modeling (PBMM) may be formulated as to abstract from the tremendous wealth of variables leaving only those relevant to the research objectives. We can very well see the implementation of this principle in the works of masters of physics, a good example for me personally was the collection of papers by J. W. Gibbs ([71]), see more minutes in Chapter 7. This is in fact an intuitive construction of a multilevel hierarchical system of models. It is interesting to notice that modern - exponential - representation of numbers is successful in computational techniques just because it is hierarchical, in distinction to ancient Roman representation that makes arithmetical operations cumbersome. Nowadays the multilevel methods with different degree of detail and hierarchical ladder of abstraction are increasingly popular in computer simulations (integrated circuits, traffic, fluids, fusion plasma, weather, complex boundaries, etc.). One must, however, be very careful about which details are discarded. A single erroneous assumption about a detail that was arbitrarily assumed to be irrelevant can totally invalidate a model. One can see this effect in computer simulations where multitudes of beautiful images can be obtained - quite easily nowadays, but the physical or engineering value of such visualized artifacts might be quite doubtful, to the chagrin of many computer scientists. It is not a coincidence that computer science (informatics) departments at the universities are usually not in high demand among physicists, mathematicians or mechanical engineering people - the specialists in the respective fields prefer to develop their own computer simulations, often less artificial and based on solid knowledge of the subject matter. Mathematical Models in Physics 43 One of the fundamental problems of classical physics was the stability of atoms. Why do the electrons not fall onto the nucleus? This notorious problem plagued physics in the beginning of the 20th century. We shall illustrate the instability of the "classical" atom as a good example of a local model, apparently free of contradictions but nonetheless producing wrong results. This wrong result was an indication that a more global model or the whole underlying theory should be modernized or abandoned. The point electron had already been discovered (by J. J. Thomson) and the model of point nucleus had been experimentally corroborated by E. Rutherford. To explain the stability of atoms, an entirely different approach should be summoned, and indeed, this fact was explained by quantum mechanics which is based on totally different mathematical models (see Chapter 6). Speaking very roughly, the stability of the atom is the consequence of noncommutativity of quantum mechanical operators, in this case of nonrelativistic kinetic energy of electrons and Coulomb potential energy between the charged particles. The stability of the atom may be formulated as the existence of a finite lower bound for the energy of electrons in an atom. Due to noncommutativity of kinetic and potential energy operators, any attempt to make the electrostatic energy very large negative would require that an electron be localized near the nucleus, but this would result in even larger increase of the positive kinetic energy. In short, one of the first successes of quantum mechanics was that it explained the stability of the atom. Now, let us see why classical models are inadequate even at the naive atomic level. As mentioned before, physical systems can be roughly separated into two classes: particles and fields, these are two basic models. This separation is really very crude, primarily because there is an overlap, for example, when particles are treated as field sources or excitations of some gauge field. We shall discuss the difference between particles and fields more thoroughly when talking about fermions and bosons and their possible mixing within the supersymmetry framework. Although, to frequent complaints on the part of mathematicians, there seems to be no good definition of the notion "field" in physics, each physicist intuitively understands the main difference between particles and fields which lies in the number of degrees of freedom. Any physical system consisting of a finite number N of particles has only finite number of degrees of freedom, n ≤ 3N. Recall that the number of degrees of freedom is defined as dimensionality of the configuration space of a physical system, e.g., a system with 2 degrees of freedom is ẍ = F(x, t) where F is a plane vector field, x∈R2. The field, on the contrary, is characterized by an infinite number of degrees of freedom. We shall discuss the implications of this fact in Chapters 5 and 9. Although in the foundation of mathematical model building lies the reduction of the complex initial real-life problem to some idealized scheme, typically with clearly defined input and output, which can be treated by comparatively simple mathematical means, many models in physics and technology are by necessity bundled - they incorporate concepts from different fields of physics. For instance, cosmological models as a rule cannot be compartmentalized to just general relativity; one has to include quantum 44 Principles of Mathematical Modeling mechanics, statistical physics, and sometimes high-energy physics considerations. In particular, different models of phase transitions involving cosmological phase transitions, which have given rise to inflationary models, may be reiterated. Superconductivity (which is also a model of phase transition) is also based on bundled models, mostly combining those of quantum mechanics, solid state, electrodynamics and statistical physics. These inter-field bundles make mathematical models in physics rich and flexible. We shall illustrate the richness of the model bundles, in particular, while discussing the interaction of radiation (both electromagnetic and corpuscular) with matter (Chapter 8). As I have already mentioned, all physical laws are, in today's terminology, just mathematical models, although underlying ideas are not necessarily formulated in mathematical terms. The final questions are often put not by physicists, but by philosophers though maybe in a vague form. There is the reverse trend to canonize mathematical models, make a fetish of them. A model, even an accurate simulation, is only a rehearsal of some reality, there may be many conceivable realities and, in particular, those that people might not have devised without modeling. In practical terms, models aim to help technologists (in the broad sense) predicting the operations and evade possible pitfalls. Therefore, one should be reminded of some risks. Quite often, people construct would-be good mathematical models that have only a very distant relationship to reality. I - and probably many other persons - have seen it many times. In such cases, mathematical modeling gives a wrong idea of what it means to solve an actual problem. A mathematical model is not uniquely determined by the investigated object or situation, and selection of the model is dictated by accuracy requirements. Examples of this relativism of models are abundant. In our everyday life, if we look at a table, its representation as rectangular or not depends on our practical needs; in ballistics, we may take into consideration or totally disregard the influence of the atmosphere; in atomic physics, it is relatively seldom that we have to consider finite dimensions of the nucleus; in military planning, one may regard a variety of models with different degree of "hardness", for instance, harmonic oscillator without attenuation is "harder" than oscillator with damping and nonlinearity. More exotic examples may cite regular armies (linear models), and the presence of partisans or terrorists (nonlinear models). Some results can be easily obtained using one model, but very difficult within another approach. The fact that it would be possible to find a nice solution to a highly simplified model we have built does not at all mean that we were able to obtain a practical solution to the problem we actually face in the real world. So, one must be careful in taking the current physical laws as something invariable and firmly established. Since the laws of physics are formulated as mathematical models for the current level of knowledge, they can be eventually corrected. Physics is an experimental science, and any of its theory, presumably rigorous and absolutely exact, must be from time to time experimentally validated. However, no experiment can have an absolute precision, an experiment's accuracy is inevitably limited (e.g., due to Ten Worlds of Physics 45 technological constraints, noise, etc.). One may always expect a violation of the established views, an indication at novel interactions 14, necessity of radically new mathematical (algebraic or geometric) representations. Even the important "correspondence principle" which is a statement that each new theory must incorporate the old one as the limiting case can be broken - nobody has ever proved its absolute inevitability, and nobody knows its area of application. This principle may well be just a belief "adopted by repetition" or an obscurantist stereotype, even though contemporary physics essentially uses it to test new theories. 2.3 Ten Worlds of Physics In principle, the following topics comprise the main part of today's physics: quantum mechanics, with its applications in atomic, nuclear, particle, and condensed-matter physics; relativistic quantum theory including the concept of a photon, the Dirac equation, electron-photon interaction (QED), and Feynman diagrams; quantum fields; and general relativity. Elementary working knowledge of these topics would be sufficient for a solid foothold in the physical community. However, this standard repertoire is very limited, and I shall try to list the major concepts which, in my view, constitute the real bulk of physics. I call these major concepts "worlds of physics" to be organized as 1. The classical world 2. The thermal world 3. The nonequilibrium world 4. The continuum world 5. The electromagnetic world 6. The plasma world 7. The quantum world 8. The high energy world 9. The relativistic world 10. The cosmological world These ten worlds constitute a backbone of physics. Some of them will be discussed in this book more or less thoroughly, others (e.g., the continuum world or the plasma world) will be touched upon only superficially. Of course, the above list is highly subjective and not exhaustive. Below I ventured to bring the main topics inside each item. All these internal sub lists are also 14 I am not hinting at profound studies of notorious torsion fields of course, because they are small and their interaction with matter is negligible, but looking for new forces is a legitimate physical task. The flip side of the coin is always pseudoscientific fantasies. 46 Principles of Mathematical Modeling open and far from being complete, anyone can complement them in accordance with their tree-like structure. In the following lists, I give only a telegraph-style account of the main items, in the respective chapters we shall discuss some of the concepts only mentioned here in more detail. 2.3.1 The Classical World The classical world of physics is based on the following key notions: • The Galileo group (inertial systems) • Newton's law of motion (classical limit of special relativity and quantum mechanics) • Newtonian gravity (classical limit of general relativity) • The Kepler problem (rotation of planets about the Sun) • Potential fields, classical scattering • The Euler-Lagrange equations • Variational schemes • Noether's theorems and conservation laws, conservative systems • The Hamiltonian equations, Hamiltonian flows on symplectic manifolds • The Hamilton-Jacobi equation • Motion on manifolds, constraints • The Liouville theorem • Key figures: Galileo, J. Kepler, I. Newton, L. Euler, J. L. Lagrange, W. R. Hamilton. 2.3.2 The Thermal World • Classical thermodynamics (equilibrium) • The nature of heat, temperature, heat transfer • Mechanical work and heat, interconversion, engines and cycles • Heat capacity (C= dQ dT) • Laws of thermodynamics, thermodynamic potentials • The concept of entropy, reversible and irreversible processes • Entropy production • Thermochemistry, chemical reactions • Equations of state • Phase transitions, Ginzburg-Landau model • Low temperatures, superfluidity and superconductivity Ten Worlds of Physics 47 • Heat as the particles motion, Maxwell distribution, statistical mechanics Key figures: L. Boltzmann, S. Carnot, R. Clausius J.-B-J. Fourier, J. W. Gibbs, V. L. Ginzburg, J. P. Joule, L. D. Landau, A.-L. Lavoisier, J. C. Maxwell. 2.3.3 The Nonequilibrium World • The Liouville equation, Gibbs distribution • Open systems • Kinetic equations, Boltzmann equation, Bogoliubov's hierarchy • Diffusion, Langevin equation, Fokker-Planck equation • Fluctuation-dissipation theorem (FDT) • Linear response theory, Kubo formula, Onsager's reciprocal relations • Multiple scattering theory • Nonequilibrium phase transitions, time-dependent GinzburgLandau model • Classical stochastic models, nonlinear regime, branching and bifurcations, stability of nonequilibrium stationary states, attractors • The Poincaré map, logistic model • Dynamical chaos, indeterminism (impossibility of predictions) • Dissipative structures, order through fluctuations, Turing structures • Chiral symmetry breaking and life Key figures: N. N. Bogoliubov, L. Boltzmann, J. W. Gibbs, Yu. L. Klimontovich, N. S. Krylov, R. Kubo, L. Onsager, A. Poincaré, I. Prigozhin, D. Ruelle, Ya. G. Sinai, R. L. Stratonovich 2.3.4 The Continuum World • The Euler equation • The Navier-Stokes equation • Hyperbolic flow equations, shock and rarefaction waves • Compressible gas dynamics and supersonic flows • Self-similar models and explosions • Turbulent flows and the models of turbulence • Elastic solid models • Viscoelasticity, plasticity, composites • Seismic ray propagation and seismic ray theory • Acoustics, sound wave/pulse excitation, propagation and scattering 48 Principles of Mathematical Modeling • Detonation and flames, propagation of fires • Superfluidity Key figures: Archimedes, D. Bernoulli, L. Euler, A. N. Kolmogorov, L. D. Landau, B. Pascal, Rayleigh (J. W. Strutt), G. Stokes, Ya. B. Zel'dovich. 2.3.5 The Electromagnetic World • Maxwell's equations • Laplace and Poisson equations • Interaction of electromagnetic (EM) fields with matter • Electromagnetic response of material media • Linear and nonlinear susceptibilities • Linear and nonlinear optics • Atoms and molecules in the electromagnetic field • Electromagnetic wave and pulse propagation • Diffraction and scattering of electromagnetic waves • Electromagnetic radiation • Rays of light, asymptotic theories, coherence of light • Photometry and colorimetry Key figures: A.-M. Ampère, M. Faraday, C. F. Gauss, O. Heaviside, J. C. Maxwell, Rayleigh (J. W. Strutt). 2.3.6 The Plasma World • The plasma dielectric function, linear waves in plasma • Screening in plasma, correlations of charged particles • Hydrodynamic models of plasma • Distribution functions • Kinetic models of plasma, collision integrals of Boltzmann, Landau, Klimontovich, Lenard-Balescu, etc. • Collisionless plasma, a self-consistent field model (the Vlasov equation) • Plasma in external fields, the magnetized plasma • Landau damping • Theory of plasma instabilities • Quasilinear and nonlinear models of plasma Key figures: P. Debye, Yu. L. Klimontovich, L. D. Landau, I. Langmuir, A. A. Vlasov. Ten Worlds of Physics 49 2.3.7 The quantum world • The particle nature of electromagnetic radiation, the Planck hypothesis • The duality of light, photoelectric effect (A. Einstein, 1905) • The Bohr atom • The de Broglie hypothesis, hidden parameters discussion, quantum trajectories • The Schrödinger equation, wave functions • Observables, measurements, probability amplitudes • Wave packets, Heisenberg uncertainty relations • The Heisenberg-Weyl algebra, representations of compact Lie groups (H. Weyl) • The theory of unbounded self-adjoint operators (J. von Neumann), Hilbert space • Rotation group representations, spin • Sturm-Liouville problem, discrete spectrum, eigenfunction expansions, Green's functions • The density matrix, the Wigner function (E. Wigner, 1932), Husimi and tomographic representation • Unitary evolution, semigroups • Eigenvalue perturbation theory, iterative procedures • Quantum-classical correspondence, decoherence, canonical quantization • Asymptotic expansions, semiclassical limit • Scattering theory, S-matrix, continuous spectrum • Integral equations, inverse problems • Decaying states, resonances • Periodic potentials (F. Bloch, L. Brillouin, H. A. Kramers), Floquet and Hill equations • Solid state physics, semiconductors, transistors, engineering applications of quantum mechanics • Many-body problems, second quantization, elementary excitations, condensed matter physics, Coulomb energy, thermofield dynamics • Ergodic potentials, Anderson localization • New developments, EPR and hidden parameters debates, Bell's inequalities, Bohm version 50 Principles of Mathematical Modeling • Quantum computing Key figures: H. Bethe, N. Bohr, L. de Broglie, P. A. M. Dirac, A. Einstein, V. A. Fock, G. A. Gamov, W. Heisenberg, L. D. Landau, J. von Neumann, W. Pauli, M. Planck, E. Schrödinger, E. Wigner 2.3.8 The high energy world • Strong (nuclear) interaction, π-mesons, exchange forces, Yukawa's model • Resonances, super multiplets • Baryons, mesons, hyperons • CPT-theorem, group concepts in particle physics • K-mesons, particle mixing, C and P non-conservation, CP and T violation • Isospin, strange particles, "elementary particle zoo", SU(2), SU(3) - first attempts of Lie group classification • Early quark models (1960s), color, flavor, charm, etc. • Hypercharge, Gell-Mann - Nishijima relation • Cross-sections, formfactors, S-matrix, current algebra • J/ψ-meson, confirmation of charm, quark-antiquark system • Quark-quark interaction through gluons, quark-gluon plasma • Quantum chromodynamics (QCD) confinement, deconfinement, asymptotic freedom • Electroweak interactions, the Standard Model, W- and Z- bosons, spontaneous symmetry breaking, Higgs particle • Non-abelian gauge theories • Grand Unification and new proposed theories and models: SO(10), left-right model, technicolor, SUSY, etc. • String and M theories Key figures: L. D. Faddeev, R. Feynman, M. Gell-Mann, S. Glashow, J. Goldstone, P. W. Higgs, G. t'Hooft, R. L. Mills, A. M. Polyakov, A. Salam, S. Weinberg, E. Witten, C. N. Yang 2.3.9 The relativistic world • The Michelson-Morley experiment • Lorentz transformations, relativistic kinematics • Special relativity, Einstein's paper "Zur Elektrodynamik Bewegter Körper (On the Electrodynamics of Moving Bodies)", Annalen der Physik 17, pp. 891921 • Minkowski space, Poincaré group Ten Worlds of Physics 51 • General relativity, Einstein's paper "Die Feldgleichungen der Gravitation (Field Equations of Gravitation)", Königlich Preussische Akademie der Wissenschaften, pp. 844847 • Redshift of spectral lines, deflection of light, time delay by gravitation • Relativistic mechanics, E = mc2, relativistic mass • Accelerators, nuclear physics • The Dirac equation, quantum vacuum, positron, antiparticles • Relativistic quantized fields, particles as field excitations, bosons and fermions, spin-statistic theorem • Quantum electrodynamics, particle-field interactions, Feynman diagrams, renormalization • Path integrals, Feynman-Kac formula • Gauge models, Yang-Mills theory • Higgs particle, the Standard Model • New quantum field theories, scalar, vector, tensor fields • Gravitational radiation, gravitational wave detectors • Strings and superstrings • Controversies over relativity, tests of special and general relativity Key figures: P. A. M. Dirac, F. Dyson, A. Einstein, R. Feynman, D. Hilbert, H. A. Lorentz, H. Minkowski, J. Schwinger, S. Weinberg 2.3.10 The Cosmological World • Spacetime curvature, the spacetime of general relativity • Solutions to Einstein's equations • Early cosmological models, non-stationary metric, redshift, Hubble constant, FLRW (Friedman-Lemaître-Robertson-Walker) isotropic model • The Big Bang, expanding universe, time asymmetry, the universe evolution • Relic radiation, cosmic microwave background • Black holes, escape velocity, Schwarzschild solution, Chandrasekhar limit, event horizon, spacetime singularities • Astrophysics, radio, optical, infrared images, gravitational lenses • Early universe symmetry breaking, cosmological phase transitions, topological defects, cosmic strings and structures • Anthropic principle and other speculations (e.g., large numbers) 52 Principles of Mathematical Modeling • Cosmological constant, vacuum energy, inflationary models • Hartle-Hawking wave function • Quantum gravity • Strings and extra dimensions • Universe: finite (Aristotle) or infinite (Giordano Bruno) • Dark energy, dark matter (hidden mass), WMAP Key figures: S. Chandrasekhar, A. A. Friedman, S. Hawking, E. Hubble, P. Laplace, G. Lemaître, R. Penrose, W. de Sitter. Although these "worlds of physics" have links to each other, it is still difficult to combine current physical theories into a coherent picture. For instance, classical mechanics, quantum mechanics, classical field theory, quantum field theory, high energy physics (the Standard Model), general relativity, string/M theory are all essentially different. These are a basic set of theories that can be constructed independently of one another. Each of these theories may be regarded as a cluster of intrinsic models. One can, if desired, even find notorious contradictions between models belonging to different clusters (i.e., built up on the base of different theories). It has already been mentioned that particles in physics should be treated as point-like in classical relativistic models and as extended in quantum models. And in general, is the world finally classical or quantum? Should one take quantum mechanics seriously or is it a well guessed collection of calculational prescriptions? It is clear that, for example, the Schrödinger equation is a successful guesswork. Another contradiction is associated with fixed background metric of special relativity and "old" quantum field theory, which is badly compatible with the dynamic spacetime of general relativity. Some physicists, mostly trained in high-energy physics, are to these days reluctant to accept the geometric nature of general relativity [133] preferring to treat it as a usual quantum field theory (i.e., a Lagrangian theory on a fixed Minkowski background). This point of view obviously defies the Einstein guiding idea that spacetime has dynamical properties of its own and cannot be regarded as a passive background. We shall discuss this problem in Chapter 6 in some mathematical detail. There is also a notorious irreversibility paradox, a contradiction between time-reversal invariant microscopic models of physics (the so-called laws of motion) and phenomenological time non-invariance observed in our everyday experience. This paradox stirs a lot of controversy up till now and does not seem to be ultimately resolved, although I think it belongs to an extensive class of pseudo problems. We shall devote some space to the discussion of the irreversibility paradox in Chapter 7, too. The lack of a coherent picture can be tolerated in cookbook disciplines such as scientific computing, numerical mathematics, networking protocols or computer graphics, but is hard to be meekly accepted in physics which strives to provide a unified image of the world. Local experimental successes, albeit quite impressive as, e.g., in quantum field theory (QFT) do not soften Physics-Based Mathematical Models (PBMM) 53 the frustration, and typical compensatory reactions in the form of escape from empirical reality are more and more often observed. People easily accept fanciful anti-empirical speculations such as multiple universes, a great number of dimensions or going back in time. Nevertheless, I don't think it reflects a "trouble with physics" or some sort of a crisis. Simply a unique hierarchical principle or a dreamlike super-formula from which all physical theories could be derived as specific cases may not exist. The networking construction connecting a variety of mathematical models of physics seems to be more plausible, at least nowadays. 2.4 Physics-Based Mathematical Models (PBMM) The world is, in principle, inseparable, and the ultimate purpose of physics is to find its meaning. It was presumably Einstein's dream - to grasp how God had designed the world. Contrary to that synthetic view, modern physicists (with very rare exceptions) never attempt to understand everything at once. One cannot be sure that there are in general universal, model-free structures in physics. Being by tradition strictly limited to the phenomena that occur in non-living nature, physics has always striven to produce models describing such phenomena in relatively simple terms. Indeed, while trying to model some entity, it is wise to think about the simplest at first. Nevertheless, one can apply the same approach to studying more complex phenomena than only those encountered in inorganic nature15. In this book, I treat physics not as a closed discipline, but rather as an intelligent approach to the entire human environment, that is to say this book may be considered as a form of a protest against the physical isolationism. And naturally, one can touch upon things that traditionally have nothing to do with physics in the narrow sense such as biology, economics, or even sociology. This extension often infuriates those physics professionals who are inclined to regard physics in the traditional "dead" sense as an ample arena for building even the wildest models. Representatives of other professions, primarily humanities and medicine, are also not always happy when physicists intrude into their "comfort zones". Of course, physics is so vast that one can easily spend one's professional life within any of its small and seemingly isolated subfields. However, our "physmatics" approach presupposes the study of links between different disciplines. Accordingly, in chapter 11, we shall discuss a few interesting mathematical models applied to the phenomena lying outside physics, if the word "physics" is still intuitively understood in the narrow sense, implying exclusively the subjects which I have classified for myself as "ten worlds of physics" (see above). Worlds of physics are just clusters of suitable models. There are, as we have discussed, links between worlds invoking substructures with repeatable, 15This has nothing to do with the usual physicists' arrogance, when people, especially young and not quite experienced but trained in some physics, claim that they can excel in many other areas such as chemistry, Earth sciences, economics, history, politology, sociology, business, etc. 54 Principles of Mathematical Modeling reusable patterns. These patterns have lately been spread outside physics. The tree of mathematical modeling in physics, with branches, leaves and buds as individual models, provides the primary networking structure (called "physmatics" in our somewhat fancy terminology). This situation is reflected in new notions, for example "econophysics", "physical economics", "chaos in finance", "nonlinear dynamics in the financial markets", "fractals and scaling in finance", "percolation model of financial market", "dynamical model", "self-organized fragmentation and coagulation of financial markets", "stochastic PDE for option pricing", etc. One can observe the rising popularity of traffic flow models which are mainly constructed on the principles generally adopted in physics (e.g., conservation laws). All these model clusters are quite important and undoubtedly their potential future impact may be difficult to overstate, so they deserve a special treatment being out of scope of this book. The very idea of easy linking different subjects seems to be close to the purely mathematical idea of compatibility lying, e.g., at the core of quantum mechanics. The concept of compatibility reflects the consistency of approaches or, as in the special case of quantum mechanics, of measurements (see Chapter 6). The idea of compatibility of different things is also manifest in mechanics where the notion of complete integrability of Hamiltonian flows has recently become of great value (partly due to some overcompensation for the oblivion of classical mechanics and nonlinear science during the most part of the 20th century). More specifically, the flow must be included into a class of compatible (commuting) Hamiltonian flows. Analogously, it is a characteristic feature of integrable PDEs (such as those encountered in the theory of solitons) that they should be organized in families of compatible i.e., commuting type, mostly having an hierarchical nature. So, the general idea of consistency of different subjects manifests itself in mathematics as integrability that is to say the possibility to find a closed solution to a complicate problem based, e.g., on nonlinear partial differential equations. Even in the realm of linear PDEs, one can trace many links at the level of mathematical models related to absolutely different fields of physics. It would be, for example, a truism to observe the connection between the onedimensional Schrödinger equation -∂x 2ψ(x, E) + V(x)ψ(x, E) = Eψ(x, E) and the one-dimensional scalar wave (acoustic) problem -∂t 2u(t, λ) = V(t) λ u(t, λ). It is a sad fact that a great deal of phenomena can be explained only at the level of hypotheses. For example, current science does not exactly know what is inside of the Earth. Geologists have dug the whole planet, but this is only a thin surface layer. One can only assume what happened on the Earth when life subsisted in the forms different from today's, e.g., in the form of viruses and 2.5 Theory, Experiment, and Models 55 bacteria. The human organism is an entire universe of its own, with the interplay of physical, mechanical, chemical, and biological processes in concerted action. Despite the successful pace of modern medicine (largely due to physical instrumentation), understanding of these processes is very approximate and primitive. A great number of serious illnesses remain unexplained, and biomedical models can only describe the symptoms intuitively correlating them to previous cases. This is an unsatisfactory procedure leading to frequently occurring medical errors dangerous for the patients. Moreover, inexact knowledge invokes pseudoscience such as healers and "biofield correctors" in paramedicine, parapsychologists and clairvoyants in other fields. In Chapter 10, we shall discuss climate viewed as a physical system. Unfortunately, the climatic system is so complex that no one seems to understand its functioning, although prognoses, pledges and political rigmarole are abound. In space science, despite a fast accumulation of observational data, almost everything is still at the level of hypothesis, rather than can be perceived as a reliably established result. To illustrate this pessimistic statement, it would be enough to recall the "dark side" of the universe problems (dark matter, dark energy, black holes). The cosmological constant problem (and the related problem of vacuum energy density), the idea of quintessence, inflationary models, the anthropic principle and so on are discussed, despite sophisticated mathematical tools, still on the hypothetical level. Everybody can prolong this list with topics of inexact knowledge. However, inexact knowledge is not necessarily bad: it has a provocative role, fascinates curious persons, and stimulates them to play with intelligent models. 2.5 Theory, Experiment, and Models "I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation." Stephen Hawking What is the relationship between these three components of human endeavor attaining physical knowledge? The interplay between theory and experiment induces the creation of models that are used to simulate the observations and predict new features that can be observed in new experiments. A theory in general may be defined as a cohesive system of concepts that was experimentally validated to the extent that there is no unclearness or contradictions within the limits of applicability for this system of concepts. A good theory contains a heavy load of knowledge. For example, the mechanical theory (discussed at some length in Chapter 4), together with the whole set of initial values, allows one to calculate the positions of the planets for many thousands of years into the future and into the past with sufficient accuracy (limited by the Newtonian approximation). The real mathematical modeling probably began with the nuclear bomb construction in 1940s in the USA and USSR (below, I shall reiterate some relevant facts that I was able to come across). Now, the trend of increasing complexity and resorting to expensive (sometimes prohibitively) experiments, initiated 56 Principles of Mathematical Modeling during nuclear weapon projects, calls for simulation of both the theory and experiment. Modeling in this field usually gives the following results: • The theory is insufficient and must be modified, revised, improved • A new theory is needed • The accuracy of experiments is insufficient • New and better experiments are needed One may note in passing that each nuclear test explosion is nothing else but a physical experiment, rather than a military event. A physical experiment is intended to measure certain physical quantities. In general, the main physical quantities to be measured in nuclear test experiments are the medium (gas) variable density and velocity - as in more conventional fluid dynamics measurements. For example, the so-called Richtmyer-Meshkov instability, which develops when the interface between two fluids having different densities is struck by the propagating shock wave, has been one of the prime objectives in nuclear explosion experiments. (This instability was predicted around 1960 by R. D. Richtmyer, a well-known mathematical physicist, and experimentally confirmed in the USSR by E. E. Meshkov.) Nowadays, due to the nuclear test ban, nuclear explosions are mostly simulated on computers, with validation provided by small-scale local laboratory experiments. Now, how is a theory related to a model? This is again a philosophical question - one of those that can induce lengthy discussions with no outcome. Nonetheless, I shall try to answer it as I understand this relationship. Models are usually constructed within the framework of a certain theory, for instance, the Friedman cosmological model is built up within the general relativity theory or the BCS model is a limited fragment of the non-relativistic quantum theory. The keyword for a model is the result, the keyword for a theory is a framework. The merit of a model is that it can be explored exhaustively. A theory cannot cover everything in the universe to the smallest detail and does not necessarily bring specific results, it only provides tools to obtain them. For example, the whole solid state theory may be considered as a particular case of quantum mechanics (also a theory but higher in the hierarchy), however, concrete results are produced when specific models within solid state theory are considered. There is a hierarchy both of theories and models, with inheritance of basic features down the ladder. Of course, the classification of models and theories is not absolute and may be subjective. Thus, the single-electron model is partly a model, partly a theory, whereas the Kronig-Penney model is just a model. One of the best examples of a model related to a theory is the E. Ising model of ferromagnetism 16 where the theory provides a simple framework and one can obtain ultimate results assuming a simple model of spin interactions. Later we shall see that the Ising model 16 The Ising model is probably the simplest possible model of a ferromagnet in two directions. 2.5 Theory, Experiment, and Models 57 serves as a pattern for a number of derived models in the areas having nothing in common with a ferromagnetic. In contrast to the broad mathematical theories, mathematical models are much more specific, illustrative and closed. The ultimate - and the most difficult - skill of modeling is to convert a seemingly complex problem into a much simpler one. Or to dissect a big problem into a multitude of small and sharply cut ones. This modelization of reality does not go without penalty: some reality must be sacrificed in order to gain more understanding of the problem being modeled. Even worse: the scope of reality left outside of a mathematical model is in many cases unknown, so it is rather nontrivial to quantitatively establish the model's applicability unless the model is formulated in terms of analytical functions or asymptotic expansions when one can evaluate and quantify the corrections. More than that: a model may be understood as a set of constraints which is bound to be replaced by a new set in the further study, and one can often foresee how the old model would be discarded. When a mathematical model is set up and processed, it results in the outcome of mathematical statements. Here, one must not forget that the validity of such statements is very limited - largely conditioned by the model. Nowadays, one can often observe the tendency to treat the model's outcome as absolute, which leads to a confusion and even grave mistakes. As an example, one might recall the deeply rooted stereotype that all physics should be time-reversal invariant, which is not true. It is just an arbitrary extrapolation of time-invariant models onto all observed phenomena. However, this simple belief is sometimes used as an instrument, for instance, to discard solutions containing terms with odd powers of frequency. Mathematical theories, to the extent they are not part of physics, are considered true when they are accepted by a certain social group - a community of mathematicians, with only a negligible part of this community really understanding a theory in question. In distinction to such public acceptance, physical theories model reality and are on the one hand a product of observations and on the other hand a source of predictions in new series of observations. Ideally, a theory is based on measured results, which are not precisely known. Then the predictions made from such a theory are also not exact. Scientific measurements - not necessarily in physics - have one profoundly lying common problem: one is limited in them by the type of experiments one is capable of performing. Even in the case of comparatively pure physical systems, the applied experimental techniques severely constrain our ability to investigate things. Take as an example one of the most precise experimental techniques, a scanning probe microscopy/spectroscopy (SPM/SPS) applied to explore solid surfaces. Even in such highly sensitive experiments, one has to admit the drastic information loss: to explore quantum localized states on the surface one has to apply voltage and produce the tunneling current which would then strongly perturb and modify such states. Thus, a genuine "non-demolition" experiment intended to study quantum (in this example) states in their pure, native form becomes highly 58 Principles of Mathematical Modeling problematic. This situation is typical not only of the quantum world. In the case of mental processes, for example, to produce a non-demolition measurement may be more difficult than in physics: there are a lot of components and interactions between them. One can see that such disciplines as physics and mathematics, though incorporating considerable applied parts, are centered around basic research. So, the question naturally arises: is basic research in physics and mathematics a luxury for certain countries or a necessity for all the countries? 2.6 On the Relationship Between Physics and Mathematics "I am so clever that sometimes I don't understand a single word of what I am saying." (Oscar Wilde)// The relationship between physics and mathematics is, of course, a subject for philosophical discussions, with scientific content in this section being close to zero. I would only state the following observation: complicated models are seldom useful - maybe solely to produce the texts exhibiting scholarly merits of an applicant, e.g., in theses. Strangely enough, mathematics continues to stir powerful emotions not only in mathematical circles, but also among non-mathematicians including physicists. For the old guard, using more modern mathematical techniques such as differential forms or geometric methods in general seems to be similar to undergoing an unpleasant medical procedure. An appropriate use of mathematics appears to be unclear even to theoretical physicists to whom mastering math is an absolute must. This element of obscurity and ill-digested trials is more and more evident in contemporary papers on theoretical physics, being strongly aggravated by an obsessive drive towards abstraction and maximal generality fashionable among "pure" mathematicians (and being servilely acquiesced by some physicists). Mathematics, among the so-called exact science, is the only discipline that enjoys being abstract. Philosophy is also abstract, but it is in no way an exact science. The feature of abstraction results in the unique power of mathematics - generality that works. In distinction to physics and other exact sciences, the underlying content is immaterial in mathematics. For example, if one considers a rotation group and properties of its operations, it is irrelevant to ask whether one implies an electron, an atom, a nucleus or a molecule of some chemical substance, although all of them are totally diverse physical objects. This working generality of mathematics permits it to be applied throughout all other sciences. It permits thus to perform crossdisciplinary mathematical modeling. The property of working generality leads to a special role of mathematical models in other sciences: these models combine the insight accumulated in specialized disciplines, primarily in physics, with the powerful abstraction of available mathematical structures - at least available to the developers of mathematical models. So far, such a combination worked amazingly well, especially in physics. Analogies stemming from generalization attempts are very important, they sharpen mathematical instincts and encourage one to 2.6 On the Relationship Between Physics and Mathematics 59 look for counterexamples. Many intricate things in physics appear simpler due to analogies. For example, non-commutative algebra based on space-time variables (x, p) ↔(t, ω) have the same mathematical framework and may be treated similarly, although the physical content is totally different. The main ideas of using mathematics for model construction in physics are better grasped by discussing analytical models. Once we understand them, we can proceed to computational models and to specific cases, collectively named physical engineering. One can say that all this is the socalled applied mathematics. Of course, it does not matter how you denote your field of activities, but personally I don't think there is such a field as applied mathematics, there are applied mathematicians. For instance, quantum mechanics may be considered as applied functional analysis, cryptography as applied number theory, signal theory as applied theory of functions, and so on. Even a very abstract mathematics may turn its application side to physicists. Does a physicist really need topology? Traditionally, the work of a physicist consisted in describing models in terms of differential, rarely integral or integro-differential equations and then trying to solve them. However, to really produce the solutions to physical-based mathematical equations, applied to specific cases, is typically quite tiresome and may even prove impossible, even despite the fact that the starting-point equations of physics are as a rule well known and all their nice mathematical properties - smoothness of solutions, their localization, compactness, etc. - have been explored. It does not help much. Therefore, a number of ingenious methods have been developed, with attempt to analyze the solutions of complicated equations without solving them. There has appeared lately a number of human activities aimed at doing something without really doing it. Management, for example - it's the art of doing something with others' hands. Or software engineering - a popular movement under the slogan "how to produce a software system without knowing how to write a code". A very cunning strategy. I would call such an approach "creative alienation". One of the best examples of productively using this approach in science is group theory, which consequently exploits symmetry. Topology in physics serves more or less the same purpose of creative alienation from the honest process of directly solving physical equations (see below). It seems only natural that a progressively widening chasm between mathematically minded physicists and physically minded physicists is to be observed. Mathematically minded people are the real insiders in math, they must know what is going on in mathematics and are attentive to all fine mathematical stitches, all those smooth and nice properties that fasten mathematical constructions that initially appear totally abstract but later foster the progress of physics. We shall see examples of such fostering in this book. In contrast to this refined approach, physically minded physicists prefer broad strokes, largely intuitive, not meticulously founded or even completely unfounded (we shall see some salient examples of mathematical inaccuracies in physics). This approach is typical of engineers for whom it is important to obtain a tangible result than to prove that it can (or cannot) be in principle obtained. Thus, the physically minded physicists, even theoretical physicists, 60 Principles of Mathematical Modeling are just mathematical engineers. And these two breeds of physicists are submerged in different environments: while mathematically minded people deal mostly with books and papers, those physically minded still have to understand experiments - physics never ceases to be an experimental science, despite all theoretical extremism. For a physically minded person, it is harder to deal only with books than for a mathematically minded one. There are almost no drawings in most mathematical books, which is uncommon for an engineer, even mathematical engineer. And the basic questions for these two breeds of people are different: "how would I prove it?" - for a mathematician and "how would I do it?" - for a physicist. There has been written a lot about the interaction between mathematics and physics. One of the best statements about this relationship I (as well as many friends and colleagues of mine) have read was the inscription left by some anonymous thinker on one of the student tables at the Central Physical Audience in the Physical Department of the Moscow State University: "Physics without mathematics is the same as a naked person in the metro: possible but indecent". Here, one can sense a hint at a somewhat troubled relationship. Personally, I still think that mathematics stems from physics, at least even today physics serves as the greatest source of mathematical ideas. Professional mathematicians are usually infuriated by this statement claiming that mathematics is totally different from physics. Nowadays there are new sources of inspiration for the mathematicians originating from other disciplines such as economics, telecommunications, networking, computer science, social studies, defense and military planning. We shall review some mathematical models related to these disciplines and we shall see that all such models to a large extent bear the imprint of approaches developed in physics. It is surprising to me that vocal objections to contributions from theoretical physics as the main supplier of mathematical problems seems to be a bon ton among mathematicians. "Theoretical physics is a science locally isomorphic to mathematics" is the most favorable saying I have heard recently from mathematicians. Mathematicians and physicists are not always friendly tribes. Is a new kind of religious cold war pending? This is my hyperbolization, of course. Sometimes, however, human incompatibility of physicists and mathematicians reaches a rather high degree. To illustrate this strange feud one can recall the following episode from the history of the Soviet nuclear weapons project. Needless to say, what importance was assigned to this project by the Soviet authorities. In the 1950s, the theoretical laboratory headed by I. M. Khalatnikov, a well-known physicist and one of the closest Landau disciples 17 was performing calculations of the design and physics of the Soviet hydrogen bomb. The lab employees worked at the 1953 specifically for nuclear weapons and missile 17 I. M. Khalatnikov is mostly known for his classical works on superfluidity, superconductivity and other issues of low-temperature physics. He has been a long-time director of the famous L. D. Landau Institute for Theoretical Physics. 2.6 On the Relationship Between Physics and Mathematics 61 computations (now the Keldysh ). Rather acute conflicts between newcomer physicists and aboriginal mathematicians followed, and the physical laboratory could survive in the milieu of mathematicians for only half a year. Owing to the personal order of I. V. Kurchatov, the head of the Soviet nuclear weapon project, the entire lab of Khalatnikov was transferred again - without mathematicians, this time to the institute headed by I. V. Kurchatov (now the Russian Scientific Centre "Kurchatov Institute"). Let us conjecture a little bit about what physicists (and other natural scientists) might think regarding mathematical methods in their respective sciences. Mathematics is a wonderful language, but does it really produce exact results? Mathematics can correctly - without singularities, infinities, and unsurmountable difficulties - describe only very simple and artificial physical situations. Our everyday phenomena like the transition to turbulence in the flow of water streaming down from the tap or the friction force are beyond the reach of mathematical description. One can produce more examples where mathematics fails than where it admits a successful theoretical description. Just try to construct a mathematical theory of a flag fluttering in the wind. Or of a living cell irradiated by a laser pulse. Or even of an isolated living cell. Complexity of biomedical, social, economical or cultural systems is merely of a different level than that attained in contemporary mathematics, and it is not incidentally that the so-called serious scientists usually express astonishment mixed with contempt when hearing that some of their colleagues had ventured to attack a "weird" social problem by mathematical means18. In the kingdom of Soviet physics, such people were outlaws. It is because of this complexity level unreachable for contemporary mathematics that we have to focus primarily on particular mathematical models of reality than on new theories19. Yet, frankly speaking I think we already have enough theories. Mathematics in its actual form appears to be perfectly adapted to describing general principles such as motion equations and conservation laws. The study of symmetries on a certain complexity level can also be achieved by mathematical means. Nevertheless, I doubt that such general objects as spacetime, universe, quantum background, mass or - even worse - complex sociocultural systems can in principle be investigated by theoretical mathematics. It may simply not work, that is to say, standard mathematical methods may be compromised by infinities and singularities. Then probably only ad hoc "cookbook" methods and local mathematical models (e.g., not based on any regular theory) would be possible. This would be a very unfortunate development, of course. "A mathematician may say anything he pleases, but a physicist must be at least partially sane", this statement is ascribed to J. W. Gibbs. In that sense, 18 The disdainful attitude towards these persons exceeded that semi-openly manifested in regard to those who became interested in the history or (even worse) philosophy of science. Such persons were regarded as just impotents and weaklings, whereas the misguided thinkers over fanciful social mathematics were simply considered inadequate. 19 It does not necessarily mean that the models should be phenomenological - they may be based on first-principle equations. 62 Principles of Mathematical Modeling string theory, for example, is more mathematics than physics. Probably today it would be better to say "string/M theory", which sounds even more mysterious, is even farther alienated from experimental roots of traditional conservative physics and based almost uniquely on purely mathematical considerations. Where is the demarcation line between physics and mathematics in the new theoretical physics, especially after the so-called second string revolution, I don't know. Although I devoted a considerable amount of time trying to understand new concepts in physics, of course, I am familiar with the string/M theory only extremely superficially. Like many other amateurs, I failed to find any experimental confirmation or astronomical observations in favor of string theory (except some attempts to predict the value of the cosmological term, see Chapter 9). Besides, does anyone know exactly what the M-theory is? The rapidly increasing number of string theorists is, to my mind, a very weak indicator of physical soundness of the string theory (see a well-known book by L. Smolin [74] on this subject). I am not going to even try to formulate the axiomatics of the current string/M theory. As near as I understand, this theory may be regarded as a speculative area of theoretical physics beyond the traditional theoretical or even mathematical physics developed in the course of the 20th century and closely related to experimental real-life problems. In general, theoretical physicists try to describe the world inventing models of reality which are used to explain, rationalize and, in the best case, predict physical phenomena within a certain "physical theory". As for physical theories, one usually distinguishes between three sorts of them: basic or mainstream theories - such as classical or quantum mechanics; proposed but not validated theories - such as loop quantum gravity; and marginal or fringe theories - such as torsion fields. On the one hand, some of the physical theories simply cannot be confirmed by an experiment or even by observation (although it was always required of classical theories and generally would be highly desirable) and, on the other hand, they cannot be deduced from a non-contradictory system of axioms like a mathematical theorem. This specific position of modern physical theories leads to regarding theoretical physics as a chimeric discipline between physics and mathematics, a cross of physics without experiment and mathematics without rigor20. Both physics and mathematics can be vital, not optional for survival. Indeed, it could take, for example, just one comet to demolish the Earth. To ensure the survival of the biosphere, humans need to learn how to avoid this danger which is basically a physical and mathematical task. 2.7 Mathematical Physics and Physmatics What is the traditional mathematical physics? Some prominent physicists and mathematicians used to say that it is also, like theoretical physics, neither mathematics nor physics. When I was studying physics at the university, our course of mathematical physics was largely reduced to a linear theory of 20 « La physique théorique est l'alliance de la physique sans l'expérience, et des mathématiques sans la rigueur ». Jean-Marie Souriau, a French mathematician. 2.8 The Role of Human Communication 63 partial differential equations, covering the respective theorems of existence and uniqueness, some concepts of functional analysis including the spectral theory of linear operators in Hilbert space, and the usual tricks of solving boundary value problems (variable separation, Green's functions, eigenfunction expansions, etc.) Such university courses were typically named "The Equations of Mathematical Physics" or "Methods of Mathematical Physics", with the traditional textbooks such as those by A. N. Tikhonov and A. A. Samarski ([3]), R. Courant and D. Hilbert ([7]) and sometimes P. Morse and H. Feshbach ([4]) being recommended as standard. (All these books are even now a right place to look, anyway.) In traditional mathematical physics, rather simplistic models are generally constructed, with actual physical details being reduced to an absolute minimum. In this way, the conventional PDEs of mathematical physics have been obtained and investigated. This is a typical model-building approach, and although it enabled one to study the standard set of equations and the properties of their solutions - so-called special functions - physicists usually consider the canonical mathematical physics inappropriate or at least insufficient for their opulent circumstances. Physmatics stands to mathematical physics approximately in the same relationship as, e.g., physical biology to biophysics. During its evolutionary development and due to the accumulation of vast experimental material, biophysics has become a more or less crystallized discipline, and people usually understand - although intuitively - what one is talking about when the term "biophysics" is used. Contrariwise, physical biology 21 is a synthetic discipline devoted to the application of general physical principles and models to biological systems, and understood as such it may even include biophysics as a particular subdiscipline. 2.8 The Role of Human Communication "Unthinking respect for authority is the greatest enemy of truth." Albert Einstein Science as the refined extension of the intrinsic human curiosity instinct necessary for survival evolved as the increasingly specialized attempts to answer the "old human questions" (OHQ) such as why is the sky blue, why is the heart on the left side, why do birds fly, why is the water wet and liquid, etc. Over the last several hundred years, more and more specialized science has succeeded to answer many old human questions in terms of light and atoms. As a by-product, new weaponry has been invented and deployed, which justified the separation of scientists into an isolated (in extreme cases, physically) urban group, not all of professional scientists and not continuously privileged but always under direct control of the top authorities. During and after the World War II, a distinct and influential scientific component of the human society emerged, instead of separate dispersed individuals driven by curiosity and exchanging letters. Inside this coherent 21 The originator of physical biology was probably the mathematician Alfred Lotka whose book "Elements of Physical Biology" (1925) seems to be unfairly forgotten. 64 Principles of Mathematical Modeling scientific milieu, light and atoms have been described in progressively microscopic terms of quarks, leptons, gauge bosons, etc. That was a chain of downsized multilevel models about the human environment that appeared synchronously with the stabilization of the social institute of science, with all attributes of human groups: dominant bosses who are forced to surround themselves by intellectual specialists, self-assertive leaders who are afraid most of all to lose face, small number of compulsive fanatics fostering conflicting evidence, and the large mass of the weaker members of the community supporting off-the-shelf enthusiasms. All like in other hierarchical groups. One of the salient phenomena of the 20th century science was the establishment of semi-closed scientific communities, so-called scientific schools. This is neither bad nor good, it has just happened. Scientific schools provide people with great strengths as well as grave weaknesses. It is a common observation that science nowadays is done by big collaborations which may be called scientific tribes (scitribes). There are countries where more accent is put on such tribes, as, e.g., the former Soviet Union22, as well as the countries where individualistic traditions were more pronounced. There are also paradoxical countries with a combination of collectivist trends and individual science. In Germany, for example, the culture of scientific schools is not highly developed, although communities - also scientific ones - are highly praised. In this section, I shall offer some personal observations about scientific schools [167], predominantly in the former USSR where they were very powerful. By their very nature, these remarks of mine are highly subjective and can be met with skepticism. They reflect that tiny chunk of experience that I had while observing scientific as well as bureaucratic mechanisms from inside. However, I cannot be counted as a genuine insider. Honestly speaking, I know very little about the whole complicated texture of the relations between scientific schools and between people inside each of them which, in the former Soviet Union, was developed against a background of communist party and KGB control, state supported antisemitism and ideological doublethink. I know the situation with the two major Soviet schools of physics (these of Landau and Bogoliubov, see below) only by attending their respective seminars and from my sporadic informal contacts with some of the established participants of these two schools. My observations are related to 1970s when I was able to attend seminars especially often, first as a student of the theoretical department of the Moscow Engineering Physics Institute and then as a young research scientist. Later, in 1985-1989, while working in the popular science magazine "Science and Life", where I was responsible for physics and mathematics, I have had a second round of observations, but by that time the "Perestroika" began already distorting the pure Soviet patterns. So, my knowledge of the situation with these schools is very superficial or, rather, tangential. Nevertheless, while filtering my memories, I tried to refrain 22 There were social reasons for it since the ruling authorities preferred to communicate with an appointed leader. 2.8 The Role of Human Communication 65 from inadvertent confabulations or vague, verbose fantasies that could have only a hazy relationship to reality. And I don't want to name concrete persons. Indeed, can one remember all the details after 30-40 years have elapsed? One might then justifiably ask: if you don't know the situation exactly from inside, why the hell are you writing about it? My answer is: that was my subjective vision of the situation with the so-called scientific schools, and there is nothing wrong in presenting it. The only grave sin would be a deliberate lie, which is not the case here. Furthermore, this "sociological" context is inseparable from a "scientific" one, at least for me - and I have strong suspicions that is the case for any person immersed in the scientific milieu. The prevalent effect of scientific schools is sociological, not scientific, even despite wonderful achievements obtained by the school members. It means that social forces can act against the purely scientific process. In such a case science can be easily betrayed and even sacrificed. The cold war of thinking or not thinking was dominating my beginner's studies of theoretical physics when I was under twenty. This "war" was waged between the adepts of the well-known "Landau school" and those who considered themselves to belong to the "Bogoliubov school"23. Both school leaders were great scientists and there were probably equally outstanding scientists among the both school members, but it was a rare occasion that they openly interacted, and I don't remember any joint cross-school publications related to that period (end 1960s - beginning 1970s). When recollecting my experience as a physics student, I remember an odd sense of irrationality when I heard at the famous theoretical seminar in the Institute for Physical Problems that one should not read papers on low-temperature physics unless they stem from this institute, or when some prominent people in Dubna asserted that all what is and will be (!) said about elementary particles and high energy physics in "Physproblemy"24 is a priori crap and should not be even listened to. We, physics students, were of course too inexperienced to make our own judgment, and the situation of uncritical choice between two mainstreams (either theoretical physics in the Landau school or mathematical physics formalism in the Bogoliubov school) produced sort of a schizophrenic effect on a number of young persons. I suspect that some of them were unable to recover before today. As far as I am concerned, I experience the same strange sense of irrationality when I hear, for instance, contemptuous remarks about the scientists belonging to the Princeton Institute for Advance Studies. I simply cannot rationalize this attitude nowadays, in the same way as I could not at my student time comprehend derogatory remarks exchanged by the representatives of the two great Soviet schools of physics. This is a pure sociology whose role in physics and mathematics is, regrettably, enormously important but is mostly left unnoticed by the historians of science. At least, I 23 Every school of thought is like a man who has talked to himself for a hundred years and is delighted with his own mind, however stupid it may be. (J.W. Goethe, 1817, Principles of Natural Science) 24 The colloquial name of the Institute for Physical Problems 66 Principles of Mathematical Modeling failed to find any good accounts of the impact of group prejudices and nearscientific stereotypes on functioning of physics or mathematics communities. Unfortunately, I was too young to see and hear L. D. Landau himself, as well as N. N. Bogoliubov, so my impression of their schools is by necessity secondary from sporadic contacts with their participants, therefore my remarks about the Bogoliubov and Landau schools should be taken only as personal impressions. There is a lot of literature depicting these two great persons, but I don't think the information in this literature is sufficiently accurate. As far as L. D. Landau is concerned, some of the publications bear a scandalous character. The figure of N. N. Bogoliubov is not so controversial. I hope that maybe Professor Igor Landau25, son of L. D. Landau, will write his own authentic recollections, which could probably remove a lot of controversies. I grew up in the USSR, the country characterized by what may be called a monocentric evolution: all important scientific and bureaucratic institutions were concentrated in Moscow. As a reaction to this top-heaviness, people in many regional scientific centers disliked the Moscow scientists intensely. (Dwellers of Moscow are in general nearly hated in the rest of Russia.) The scientists in provincial cities formed their own "schools" where Muscovites were rarely welcome unless, of course, they were influential scientific or bureaucratic bosses. My personal observations were that scientific workers from the city of Gor'ki (now Nizhni Novgorod) were marked by specific arrogance towards rank-and-file Moscow colleagues. Even the Leningrad (now St. Petersburg) and Novosibirsk scientists, which had created really world-class scientific schools in many directions, were more tolerable to the Muscovites. This parochial miscommunication rooted in regional prejudices considerably influenced the scientific scene. At least, group attitudes in scientific communities have always interfered with answering the simple question of research: "Is it true or false?" Nevertheless, even in countries characterized by polycentric evolution such as Germany, group stereotypes in local scientific communities often prevail over rational responses. There are attempts to solve this problem by rotating research staff between various universities, but such rotation may be painful for individuals and destroys scientific schools that need a prolonged period of stability to be firmly established. Presumably, the "Landau school" was originally based on the so-called theoretical minimum or, shortly, "theorminimum" that was in effect a test to be passed by any physicist who wanted to work with L. D. Landau. On the other hand, the program of "theorminimum" later provided the contents for the famous "Course of Theoretical Physics" by L. D. Landau and E. M. Lifshitz. Already from here one can see that a sheer preparation to the obligatory "theorminimum" test ensured that an applicant would acquire rather extensive qualifications in physics and accompanying it mathematics. The Landau school produced physical universalists. 25 By a sheer coincidence, I studied with Igor Landau in the same class in the Moscow high school No. 5 2.8 The Role of Human Communication 67 Now the times of universal physicists like L. D. Landau are gone forever26. Today, universality became feasible only within large collectives or institutions. But in such cases the problem of the common language appears. Such a common language is hard to develop in a loose assembly of physicists unless they are carefully picked up and uniformly trained. It is my firm belief that extremely sharp specialization, a typical presentday phenomenon, has a pronounced detrimental effect on the development of physics. Strictly speaking, it is not obvious that the laws of nature do exist. These laws are invented by humans - in modern times in the form of mathematical models - to approximately describe the processes in nature. In fact, these laws and their numerous consequences are present only on paper and mostly serve as tools to increase the status of the "professional scientists". It is sometimes astounding that these mathematical laws and especially their corollaries really do work not on the paper of status seekers, but to manufacture real products. There is a kind of complicity among the professional scientists bordering on tribalism, with socially significant slangs, jargons and dialects purposed to insulate professional scientists from common folk, usually under disguise of scientific precision. Members of scientific tribes calling themselves academics are relying on internal communication more than on anything else, with somewhat ambivalent attitude towards colleagues: rivalry mixed with support against outsiders. It is curious that if the specialized jargon becomes widely spread and partly adopted by non-scientific groups as, for example, in cosmology, computer science and chaos theory, then the jargon's isolating function is lost, and new slang expressions are created restoring the anti-communication barriers. These barriers are erected between scientists and "ordinary folk" as well as between scientific groups. Life, however, by putting forward real problems, in most cases forces us to transcend the frontiers between different sciences, which have been established - purely artificially - by people. Nature does not care at all what mental schemes people are trying to impose on its manifestations. Indeed, why should we look for an artificial way to structure data and procedures, when the real world has already organized them for us? Nevertheless, "professionals" tend to safeguard artificial frontiers invented by them with jealousy bordering on aggression27. People tend to form closed groups declaring them "elite". I have just given above examples of zealotic behavior in Bogoliubov and Landau schools and I shall try to describe some salient features of these elitist establishments a bit later. The Landau school was especially indicative: theoretical physicists belonging to the "Landau 26 The last physicist of this universal type is probably V. L. Ginzburg, the 2003 Nobel Prize winner. I would also mention R. Feynman, V. A. Fock and W. Pauli who are unfortunately long gone. 27 This is especially clearly seen in the medical community, which, basically belonging to a breed of natural sciences, vehemently opposes penetration of physicists into their field. This becomes increasingly difficult since the progress of medicine is largely due to modern engineering physics. Safeguarding "medical traditions" in new technological environment is nothing but a characteristic ambivalent attitude of defensive hierarchical groups. 68 Principles of Mathematical Modeling school" created in the 1960s rather high barriers, mostly of psychological character, around themselves. Young proselytes were specifically aggressive: some of them seemed to be always ready to apply clichés to the work of everyone not too eager to join their elitist circle, denouncing physical papers containing much math as "a disordered set of formulas" (that label was often applied to the works of competitors from "Bogoliubov school") whereas papers based mainly on physical reasoning were labeled as "philology" or "philosophy". The latter label was considered especially humiliating: it was considered a shame not to be able to saturate paper with equations (mostly differential), inequalities delineating limiting cases, integrals and, in the final part, formalized statement of results. Frankly speaking, papers based on "physical considerations" usually produce weaker results and contain vague fragments. Yet, the folklore saying that each discipline contains the amount of science exactly equal to its mathematical content is, to my mind, a mathematical extremism. This is all history now, and perhaps of no interest to the new generations of scientists but, to my understanding, a very instructive history. There have been no similar mechanisms of scientific development in the West after the World War as scientific schools in the USSR. Great scientists in Western countries did not form schools around themselves. It has been noticed that neither Einstein nor Feynman, nor Wigner - in one word nobody has created scientific schools that could rival in impact with those existing in the Soviet science. University professors may have a couple of students, graduates, or postgraduates, or in comparatively rare cases a chair. That is all. What personally I did not like about elitist scientific establishments - not only "Physproblemy", there were some other institutions and not only in Moscow, usually calling themselves "firma" (a firm) where the research employees saw themselves as local prim donnas. There was even an element of ritualized aggression mixed with inferiority in the behavior of these researchers - against intruders and under the slogan "I am great". By the way, really bright people such as V. L. Ginzburg, L. V. Keldysh, D. A. Kirzhnitz, A. B. Migdal or A. D. Sakharov never emphasized their top position. Both the Landau and the Bogoliubov schools were really very advanced and powerful, yet there were certain things which I found appalling. What one could observe, sometimes painfully, about both schools was their "national orientation"28. As for the Landau school, this effect was of course a natural defensive reaction to the disgusting antisemitic politics of the Communist Party of the USSR in approximately 1965-1985, but sometimes one could not get rid of an impression that the "national orientation" extended far beyond the compensatory focus. This may probably be called an overreaction. However, such an overreaction had some inherent dangers distorting the delightful construction of scientific schools - an informal community of highly qualified people. In particular, the "national orientation" tended to attract 28 This expression belongs to A. A. Rukhadze, a well-known plasma physicist[148]. Quite naturally, I risk invoking indignation and derogatory arguments from the remnants of the both schools, if they condescend of course. 2.8 The Role of Human Communication 69 stray persons (I met some of those). The second thing that I could not unconditionally accept was the principle of problem selection according to the leadership taste. Take the Landau school, for example. L. D. Landau was indisputably a genius, and he raised a bunch of super professionals who inherited, to some extent, his arbitrariness and prima donna habits. One of the principles of problem selection was according to their potential solvability. That amounted to the situation when there existed a class of physical problems which deserved some consideration and an antagonist class of problems not worthy to spend time on. Technically speaking, this meant that the problems where there was no distinct small parameter should be neglected. Unfortunately, some physically interesting problems landed in the "unworthy" class and had to be rejected. For example, the theory of liquids, as near as I remember, was totally uninteresting - at least for young adepts of the Landau school of that time (beginning 1970s). Incidentally, a somewhat haughty attitude towards the theory of liquids was reflected in the textbook "Statistical Physics" by L. D. Landau and E. M. Lifshitz [24], beginning of §76. I remember that I tried to discuss the situation with liquids with Professor D. N. Zubarev whom I respected very much and who was a very kind and knowledgeable person. D. N. Zubarev was generous and absolutely neutral as far as group prejudices were concerned, and there were, as I remember, no limitations in topics discussed at his seminar in the Steklov Mathematical Institute devoted mostly to statistical mechanics. I also remember approximately what he said to me about the attitude of the "Physproblemy" people to liquids: this problem is outside the customary mathematical repertoire of young Landau scholars, therefore liquids are not interesting to them; it is rather our problem. I had an impression that collective norms, likings and dislikings, were obvious and nobody tried to conceal them. The sense of solidarity was more important to school members than personal aspirations and even individual achievements. This could be easily explained: belonging to a school meant security, a conviction that the personal and at least professional career is mapped out in advance. The pressure for working commitments and rapid fulfillment, if one had already established oneself within the school, was lessened. Despite the pressures exerted on ordinary members by the elite scientific community, the benefits are great. The community and its influential leaders protect the community members from the critic of rival schools, even in the case of scientific mistakes. Thus, the basic problems of survival in a scientific milieu, very acute for an individual researcher, become inessential for a member of an established community. Formally, this quasi-paternal protection is due to severe filtering of scientific mistakes inside the community, but in fact it is the very stamp of the renowned scientific school that matters, especially in highly complex subjects. There is nothing unnatural or unexpected about such protective mechanisms and cooperative behavior in science, all this is a basic part of human social and biological inheritance. Bygone are the unhurried days of individual, alienated scientists. Today, collective norms dominate. Even 70 Principles of Mathematical Modeling Einstein was known to be largely neglected and almost ridiculed by the herd of physicists engaged in so-called elementary particle physics which in 1950s - 1960s was in fact hardly physics, rather multiple attempts to cope with the bewildering number of particles by classification. Now that period is often referred to as the "zoological" one, but at that time it was of fashion, with crowds of people being attracted. In today's parlance, that was the period of an extensive build-up of mathematical models based on the Lie group theory - a decent pastime in its own right, but having little in common with physics. One can see the remnants of this approach in http://pdg.lbl.gov/2007/download/rpp-2006-book.pdf. Scientific schools, despite all their drawbacks and emphasis on human communication, have become, as stated, a powerful development mechanism of basic science. They ensured a very encouraging, even protective atmosphere which is vital for the young members. Such a coaching and mutual assistance, a fundamental urge to cooperate within the "scientific tribe" - this was basically provided by the school - existed only in the Soviet Union of those years. Now this scientifically productive period has gone forever. To make things clear, I don't have any nostalgia for the Soviet times, as many people do. Nostalgia for one's childhood does not necessarily mean that the childhood was a happy one. The communist regime commited a lot of atrocities, and any society under communist rule is completely deformed. There is no point to be attracted to this regime. But science was excellent, maybe due to some compensatory merit selection - the best people went to science and not in business. Besides, radically different criteria were applied in Soviet scientific schools as compared with the current situation. For example, nobody paid much attention to the number of publications. The quality of the work, the degree of understanding of scientific ideas by a given person (for concreteness, I speak of physics here) were most valued - even more than his or her nearness to the leader. Nothing of the kind exists today. The "publish or perish" principle of career development resulted in the multiplication of scientific and quasi-scientific journals, their number nowadays exceeding the human capacity to observe. Worse than that, scientific literature has become cluttered up by a heap of immature, wrong or "not even wrong" papers. Unfortunately, these statements are not empty lamentations. I am not saying that a disintegration of science into "schools" or scientific tribes as major units - a 20th century phenomenon - is necessarily a drawback. This is probably an answer on the increased complexity of problems, especially in nuclear weapon creation. What I am trying to say is the increasing price people have to pay for the breakdown of science into groups, usually in the form of community norms, attitudes, strict subordination, prejudices and stereotypes. I have a few personal reminiscences how powerful these community norms can be. In 1970s, together with one good experimental physicist, we produced a theory and made a device aimed at exploring radiation properties of modulated electron and electron-ion beams. This work was supported by some leading Soviet 2.8 The Role of Human Communication 71 physicists including Prof. E. P. Velikhov and Prof. M. A. Leontovich. Now I think that it was not a bad work elucidating some interesting microscopic properties of coherent mechanisms of electromagnetic radiation. We were recommended to submit the work to the competition for the so-called Komsomol Prize. In the final part of the competition, however, we were rejected with the explanation that the Prize that year should go to the city of Gorki (now Nizhni Novgorod) because the powerful leaders of the local radio physical school complained that Moscow had taken all the prizes in the previous years. My colleague-experimentalist was so disappointed that in a short time ceased to work on his experimental installation. Later, he completely changed his field of activities and became the managing director of one of the most famous restaurants in Moscow. I also remember an extreme case of a brutal verbal attack on a person from the Moscow Engineering Physics Institute (MEPhI) during his presentation of the doctor of sciences' thesis. The attack carried out by a member of a rival scientific organization was scientific only in the form. The genuine reason was, as I understood, to demonstrate a supremacy of the scientific institution the attacker represented. The unhappy applicant died of a heart attack the same evening. I have also had an experience of being attacked by some narcissistic member of a well-known scientific school during a presentation. His argumentation was ridiculously incompetent and not in the form of questions, but even if he was understanding this, it did not matter at all. It was the social act of community defense. How important it was to belong to a certain "school" was manifested in the group attitudes of even the most prominent physicists. I was constantly reminded of this. When in autumn 1985 I wanted to publish a very interesting and fresh article by D. N. Klyshko on quantum optics in the journal "Science and Life" (David Klyshko was a modest university professor and little known to physics grands at that time; besides he was a very modest person and an individualist). The editor-in-chief and his servile deputy were harshly against this publication on the ground that Klyshko did not belong to any known scientific community. It was only much later that D. N. Klyshko's works became almost classic (unfortunately, he died prematurely in 2000). In the same 1985, I was at a very interesting MePHI summer school on condensed media physics (later I wrote a report on this school in "Science in Life", No.1, 1986). One of the leading presenters there was Professor D. A. Kirzhnitz, a super qualified physicist and a very good person, whom many of us, school participants, liked very much. He seemed to possess a special charm of firmness - a feature of independent and unfairly traumatized person. I think Professor Kirzhnitz was not fully estimated as he deserved it. Once we - Professor Kirzhnitz and me - accidentally met near the news stand, and his first words to me were: "And you, Sergey, to which school do you adhere?" I did not know what to reply. What astonished me then was that even such a prominent physicist as D. A. Kirzhnitz tended to automatically classify people 72 Principles of Mathematical Modeling according to their adherence to a certain school. At least I interpreted his question in such a way29. So there are many tribal signs in scientific communities. No one apart from your scitribe dares using the community resources such as library, computers, Internet access, discussion notes, etc. From time to time some members of the community are set off assailing other communities. Younger members produce as much noise as possible around the community, imitating the aggressive manners of their senior leaders. In case the community is successful, it attracts new members and grows in size. Then splinter groups typically appear establishing themselves inside new institutions 30 . This process reminds me of colonization of new territories, in this case in sociobureaucratic space. 2.9 Antimodels The problem with science is that it is an extreme manifestation of the human curiosity and exploratory activity, thus it tends to be alienated from everyday needs on which "common wisdom" is based. Therefore, scientific truth must be safeguarded, developed, improved and continuously explained to people who, mostly due to personal circumstances, do not necessarily have an appropriate background - the frame of reference required to appreciate scientific statements. Science is developed to be approved by colleagues, nonscience is mainly approved by consumers - bureaucrats, businessmen, journalists, and other lay public. Both science and non-science can be good or bad, interesting or dull, useful or useless, as well as approvers can be fair or foul, competent or incompetent, disinterested or biased. Many prominent philosophers were engaged in analyzing the phenomenon of pseudoscience (see, e.g., [141]). In my terminology, non-science incorporates scholarship, fiction and pseudoscience. Scholarship is a collection of disciplines based on study of documents and in this respect it is close to science. Fair and unbiased investigative journalism, in my understanding, is akin to scholarship. Fiction does not pretend to be based on accurate documentary investigation, it constructs models of life by depicting a series of specific situations, in which presumably free-will personages develop some action to embody the author's presumed ideas. Fiction borders on philosophy - in fact philosophy is a kind of fiction where the presumed ideas become so valued to the author that he/she does not resort to specific situations to convey these ideas, using abstractions and generalizations instead31. Philosophers tend to argue and declare instead of calculate, in this sense philosophical works are closer to theoretical systems than to specific models. Pseudoscience is based neither on documents nor even on philosophical generalizations of observed factoids; it uses beliefs and vague resemblances - as in, e.g., telepathy - instead of 29 Perhaps incorrectly, but now it is impossible to find out the truth: unfortunately, D. A. Kirzhnitz died in 1998. 30 Remarkably, not necessarily scientific, which means that status drive is stronger than curiosity that drives science. 31 One can see an interesting example of the transition from fiction to philosophy in the novel "War and Peace" by L. N. Tolstoy. 2.9 Antimodels 73 analyzing facts. In this sense, pseudoscience is a falsified science which substitutes the knowledge about the real world by speculative prescriptions and arbitrary rules. Scientific models, in particular, mathematical models, in distinction to others, operate with quantities that may be measured with quantified precision. To measure a quantity means to compare it with a homological quantity taken as a unit and express the obtained result of this comparison by a real number. One must remember that there exist many "quantities" that can be ordered, i.e., to them the notions "more" (>) or "less" ( 0, into themselves. One can easily see that hyperbolic rotations form a group just like the usual rotations in Rn. Indeed, a composition of two hyperbolic rotations is again a hyperbolic rotation as well as an inverse hyperbolic rotation, see more on it in [188]. 3.5 Lorentz and Poincaré Groups 121 (ct)2 -r2 = c2t2 -x2 -y2 -z2 where c is a constant. The form (x, y) = x0y0 -xiyi, i= 1,2,3 is usually known as the Minkowski metric so that the Lorentz group is the one of linear isometries of this metric. Nevertheless, to be absolutely honest one has to admit that the Lorentz transformations for a single particle in a given spacetime point - and it is usually only a single particle that is considered in kinematic theories - may also depend on the interaction between particles, which makes Lorentz boosts sensitive to the presence of other particles or fields. One might pick up the hint about the necessity of possible dynamic corrections to purely kinematic Lorentz transformations already in the fact that time evolution of a system interacting with the environment is different from the time evolution of an isolated system (see more on time evolution in Chapters 4 and 7). However, the phenomena described by special relativity are not reduced to just kinematics. Recall that Lorentz transformations were used by Einstein in his famous first paper of 1905 on special relativity to express the transition between two frames of reference: one is considered to be fixed i.e., at rest with respect to an observer ("laboratory system") whereas the other is instantly co-moving with the observed electron, and the latter can be in general i.e., accelerating motion. It means that Lorentz transformations may contain the instantaneous velocity v(t) as a parameter, which fact manifests itself in relativistic mechanics considering not only kinematic but also dynamical situation. For example, the particle 4-momentum pi= ∂L ∂ui, ui≔ dxi dτ, is usually expressed through the derivatives of coordinates over the proper time, pi= γijpj, pj= mcuj (in this particular case we restrict the dynamics to the Galilean plane metric), and the dynamical 3-momentum is differentiated with respect to laboratory time, as in nonrelativistic Newton's law, dp dt= F+ v c× ( γ γ+ 1 (v× F c )), where F is the force field in the classical (Newtonian) sense, for instance, the Lorentz force. However, the question of applying special relativity to dynamical situations that imply an accelerated motion is not at all trivial (see more on that below in the "Relativistic Mechanics" section). In special relativity, spacetime is considered plane (flat) so that one can describe it globally by the pseudo-Euclidean coordinates with the Galilean diagonal metric diag(1, -1, -1, -1) i.e., in the general metric tensor gik only the terms with i= k are left and the length element (usually known in relativistic physics as interval) is simply ds2 = (dx0)2 -(dx1)2 -(dx2)2 -(dx2)2. All inertial (Lorentz) frames leave this metric invariant. This is of course a very special choice of the metric which, by the way, justifies the term "special relativity". One cannot introduce such a plane metric globally on a general 122 Mathematical Potpourri manifold M4 because the latter may be curved. In such cases the metric tensor gik(x) has in general all off-diagonal components, i≠k and, besides, depends on spacetime point x on the manifold M4. Yet one can always61, by an appropriate coordinate transformation, bring tensor gik locally i.e., in the vicinity of x to a diagonal form, in other words, to introduce the locally Galilean metric diag(1, -1, -1, -1). Physically it means that the spacetime can be regarded as locally Euclidean. By the way, if such a coordinate transformation brings the metric tensor to a diagonal (Galilean) form in each point of M4 then g≔detgik 0, b> 0, x> 0 y> 0 , with the solution x= b+ m a+ b, y= a-m a+ b. It would not be easy to obtain the field partial areas in a purely thoughtful logical way, without solving the system of equations. 3.10 Vector Spaces 127 a real vector space V is just an additive group 66 whose elements can be multiplied by real numbers so that λ(x+ y) = λx+ λy, λ∈R, x, y∈V, (λ+ μ)x= λx+ μx, λ, μ∈R, λ(μx) = (λμ)x, and 1x= x. So, vectors are elements of a linear (vector) space. If a linear space V is exemplified by a coordinate space Rn, then vectors are typically identified with n-tuples of numbers and written as a≔(a1, ... , an)T= a1e1 + ⋯+ anen≡aiei. The vectors ei form a coordinate basis in Rn, the most convenient choice of the basis vectors is when they are orthonormalized i.e., their scalar (inner) product gives a Kronecker symbol, eiej= δij. We shall see soon that the generalization of this scalar product naturally leads to the concept of metrics. In mathematics, one usually defines vector spaces over some "field". This amounts to choosing λ not necessarily from R but from another set with similar properties, e.g., C of complex numbers, or in general any set in which the operations of addition, subtraction, multiplication, and division (except division by zero) are allowed and the same rules that are familiar from the school-time arithmetic of ordinary numbers can be applied. In such more general cases the considered additive group ceases to be a real vector space. One may notice by association with numbers that natural numbers were obviously the first used by the humans, these numbers could be easily added and multiplied, but the inverse operations - subtraction and division - required some head-scratching. Such inverse operations became possible only with the invention of negative and fractional numbers. Quite often the definition of a vector space consists of a long list of axiomatic statements which may produce the impression that the vector space is a complicated object. In fact, just the opposite is true: objects that comply only with some parts of these rules are more complicated. For instance, human beings that obey none of such rules are not described by any satisfactory theory. But an important thing to be stressed is that vectors can be multiplied by numbers from some field called scalars and added according to the "rule of parallelogram" and these operations retain vectors within a clearly defined set which is the vector space. The possibility to use such operations means that vector spaces are supplied with a linear structure. The preservation of a linear structure is not a trivial requirement: for instance the maps f: Rm→Rn implemented by differential functions do not in general preserve the linear structure, although such maps are ubiquitous in geometry 66 A commutative (Abel) group with respect to addition. 128 Mathematical Potpourri and in the theory of dynamical systems. 67 In vector spaces with a linear structure, linear maps are naturally the most important ones, just because they preserve the linear structure. A map F: U→V (map, mapping, operator, transformation are all synonyms. In quantum mechanics, the term "operator" is mostly used in connection with maps F: U→U.) A map is linear if for all x, y∈U and a∈R (here for simplicity we restrict ourselves to scalars from R), F(x+ y) = F(x) + F(y), F(ax) = aF(x) , i.e., F(ax+ by) = aF(x) + bF(y) for all x, y∈U, and a, b∈R. One may note that the set of all linear maps F: U→V forms itself a vector space with respect to addition and multiplication by a scalar. Symbolically, one may write F: U→V+ G: U→ V= (F+ G): U→V or (F+ G)(x+ y) = (F+ G)x+ (F+ G)y and the composition of linear maps F and G is also linear, GF(ax) = GaF(x) = aGF(x). Such a transitivity of linear maps is well reflected in the matrix theory: once a basis has been fixed, every linear map F is represented by a matrix Fmn and every matrix corresponds to a map (isomorphism). This fact is extensively used in quantum mechanics (see Chapter 6); actually, it is the mathematical reason for the equivalence of the Schrödinger and Heisenberg formulations, although both great physicists presumably did not wish to accept this equivalence in the initial period of quantum mechanics (1924 - 1928). There is a long list of familiar linear maps in mathematics and physics. For example, integration may be interpreted as a linear map or a linear functional. Let us take as U the vector space of all real-valued continuous functions (with compact support) defined at [0,1] and V= R. Then F(f) = ∫f(x)dx 1 0 A linear functional may be given for the same choice of U, V also by, e.g., F(f) = f(0). And of course, differentiation also produces a linear map: let U= V be the vector space of all differentiable functions on R, then F(f) = f′ is a linear map. In simple geometry in R2, rotation or shift (x′ y′) = f(x y) with x′ = xcosφ-ysinφ y′ = xsinφ+ ycosφ or 67 Later we shall see that there exists a corresponding linear map df called differential. 3.10 Vector Spaces 129 x′ = x+ ay y′ = y Here, it would be worthwhile to recall what the isomorphism is (in simple terms). A linear map F: U→V is an isomorphism if there exists a map G: V→ U such that both FG= E and GF= E where E is the identity map, Ex= x for all x∈U. Strictly speaking, we must write EU and EV for spaces U and V, respectively, but we shall neglect this difference. One usually says: F: U~V is an isomorphism from U to V (or between U and V). Since G is the inverse of F, F is invertible and G= F-1. An invertible map is nonsingular, i.e., from F(x) = 0 follows x= 0 and if F(x) = 0 for x≠0 then x is a singular element (a singular vector, if one considers a vector space) of map F. A set of such singular elements {x∈U: F(x) = 0} is called the kernel of map F, denoted KerF. One can easily prove that KerF forms a linear subspace in U and if U is finite-dimensional then KerF is also finite-dimensional. It is exactly a subspace and not just a subset. In other words, the kernel, KerF of F: U→V denotes the subspace of all singular vectors of F. It is easy to prove (see below) that F is injective if and only if KerF= 0, i.e., map F is non-singular. The finite-dimensional ( dimV= n) vector space allows a simple geometrical or rather graphical (for n≤3) interpretation, with elements of the space being directed arrows connecting the origin and the point x1, ... , xn. In some different cases, e.g., for dual spaces (see below), other pictorial representations are more adequate. Usually, only the real (as in classical mechanics) and the complex (as in quantum mechanics) vector spaces, Rn and Cn respectively, are considered but nothing prevents us from using other number fields as the source of scalars over which the vector space is defined. Different fields are important in various scientific contexts, for example, finite fields consisting of 2,4,8,16 or in general 2n elements are important in digital technology and theory of communication. Allow me some remarks on the terminology and sources. Algebra seems to be a discipline which provides a certain freedom for each author to demonstrate his/her "mathematical culture". Therefore, there exists a variety of expositions of essentially the same basic facts. When I was first studying linear algebra, the contemporary exposition of maps in terms of injection, surjection and bijection was not yet accustomed. Honestly speaking, I had some difficulties in translating old notions related to mappings into this new (and in many situations very convenient) terminology. For instance, one can see that a map F: U→V is an isomorphism when and only when it is a linear bijection so that in the linear case the "old" notion of isomorphism and a comparatively "new" notion of bijection are equivalent. Indeed, a linear map F: U→V is by definition an isomorphism from U to V if there is an inverse map G: V→U such that FG= E and GF= E. Thus, an isomorphism is a linear map by definition and a bijection since its inverse does exist. Conversely, if F is a bijection, then there exists G: V→U (which need not be necessarily linear) so that FG= E and GF= E. If, additionally, map F is linear, then it is easy to prove that its inverse G is also linear. Indeed, let us take two elements v1, v2 ∈V. Then since F is a bijection and, consequently, a 130 Mathematical Potpourri surjection, there exist such u1 and u2 that v1 = F(u1) and v2 = F(u2). Since F is linear, we have v1 + v2 = F(u1 + u2) and G(v1 + v2) = GF(u1 + u2) = u1 + u2 = GF(u1) + GF(u2) = G(v1) + G(v2) . Therefore, an isomorphism implies that the linear map F is surjective and nonsingular. In fact, nonsingularity means that F is an injection, i.e., its inverse contains not more than one element. Here, I am using the standard convention: the simple right arrow U→V denotes mapping from set U to set V whereas mapping of point u into point v is usually denoted by the "↦" arrow. Why is all this stuff important for physics? Primarily because we can in practical calculations - physical modeling - identify any finite-dimensional space U with Rn. More precisely, if U is an n-dimensional vector space then there exists an isomorphism F: U→Rn. This fact enables us to easily change spaces by specifying a map F from one n-dimensional space U to another ndimensional space V. Technically in practice this is done by fixing a basis and writing vectors as n-tuples of numbers. A map to another space is then defined by its action on the basis vectors. Let U and V be two finitedimensional vector spaces (dimU= dimV= n) and let α= (α1, ... , αn) be a set of basis vectors that we have fixed in vector space U and β= (β1, ... , βn) the basis in V. Then we have u= uiαi and v= viβi, with vj= [F(u)]j= [F(uiαi)]j= ui[F(αi)]j or vj= αi jui where αi j= [F(αi)]j. It is important and highly convenient that linear maps can be represented by matrices but for this, as we saw, one needs to fix a basis (in the above example α= (α1, ... , αn) in U= Un). According to the definition of a linear map F, the latter is determined by images F(α) = (F(α1), ... , F(αn)). An inverse map is represented by the inverse matrix. See in this connection, e.g., the classical book by P. K. Rashevski [154], where one can find a lot of clearly explained details.68 We have just asked ourselves, what do we need all this algebraic stuff for? Apart from playing with bases, which is one of the favorite games for physicists and not particularly appreciated by mathematicians, elementary algebraic notions are really needed to explore the quantum world (Chapter 6). The concept of a linear operator that plays a key role in quantum mechanics, is conventionally defined as a linear map from a vector space V to itself. In case this map is an isomorphism, it is called (as well as an operator on vector space V) an automorphism. A frequently used concept of the general linear group of V, GL(V) which will be from time to time encountered in this text69, may be interpreted as merely the set of all automorphisms on V. 68 In spite of being very old and out of fashion, this is one of my favorite books. When reading it, you get an impression of being in a good company, talking with a very clever interlocutor - a feeling which never emerges when dealing with a computer program, although one can also learn a lot in this latter case. 69 We shall deal with the general linear group especially in the context of Lie groups which are of utmost importance to physics. 3.10 Vector Spaces 131 One can find an exhaustive treatment of the formal linear algebra in many modern courses and books. Personally, I studied this subject using the book by G. E. Shilov [153] which I found at that time excellent. There are of course more modern textbooks, from which I especially like the one by A. I. Kostrikin and Yu. I. Manin [155]. For completeness, let us now recall the notion of an additive group using the form commonly given in relation to vector spaces. In principle, groups are easier to understand not algebraically but geometrically, e.g., through physical transformations such as motion. We shall exploit this geometrical representation later in this book; now we shall stick to the more traditional algebraic definitions. A group may be simply defined as a pair of a non-empty set X and a map ∗ (composition law, X∗X→X) such that this map is associative, invertible, and possesses an identity element. An additive group G is a set together with a map such that to each pair of elements x, y∈G corresponds a new element from G, denoted as x+ y, with the following rules being valid: 1. (x+ y) + z= x+ (y+ z) (which means that operation "+" is associative) 2. x+ y= y+ x (commutativity of the group, which is a very strong requirement) 3. There exists an element 0 ∈G such that x+ 0 = x for all x∈G 4. For any x there is y such that x+ y= 0 (existence of the inverse element) This short list of axioms is usually supplemented by some simple consequences (see any textbook on algebra). These consequences look very much like simple exercises, but algebra teachers are fond of them considering them good logical exercises (at least such was my university experience). The primary consequence is that there may be only one 0 element. Indeed, assume that there are two zeros, 01 and 02, then we would have x+ 01 = x and x+ 02 = x for all x∈G. By putting, e.g., x= 01, we get 01 = 01 + 02 = 02 + 01 = 02. The second consequence of axioms is usually formulated as "uniqueness": for any pair x, y∈G there exists only one z∈G such that x+ z= y. This property also can be easily proved: according to axiom 4, to each x we can find x′ so that x+ x′ = 0. Then for z≔x′ + y we have x+ z= x+ (x′ + y) = (x+ x′) + y= 0 + y= y+ 0 = y A variation of this "uniqueness" statement: let x+ z= y and x+ w= y. Then z= w. Indeed, z= (x+ x′) + z= x′ + (x+ z) = x′ + y= x′ + (x+ w) = (x′ + x) + w = 0 + w= w 132 Mathematical Potpourri The inverse element y in Axiom 4 is determined by x, y(x). It is denoted as -x, which is a kind of a mathematical standard. Like any good standard (and standards are not necessarily good, an example of a bad standard is the SI system of units), this one makes life easier. So instead of x+ (-y) we can simply write x-y, which is interpreted as the difference between elements x and y. A remark on zero. Strictly speaking, there are two zeros in vector spaces - one (denoted as 0) is inherited from the number system (field), the other (denoted as 0) from the vector space V itself. However, it is easy to see that 0x= 0 for all elements x∈V. Indeed, 0x+ x= 0x+ 1x= (0 + 1)x= x, which exactly means that 0x= 0. As far as the inverse element in V goes, one can see that (-1)x= -x for any element x∈V. Indeed, -x is the only element from V with the property x+ (-x) = 0. But x+ (-)x= (1 -1)x= 0x= 0, thus (-)x= -x. Thus, we can disregard the logical disparity of zeros from different algebraic structures and always write only 0. Let us now give some illustrations to this very simple though slightly abstract algebraic theory. Sets (fields) R and C are examples of an additive group G with respect to ordinary addition. Yet the group operation "+" is, of course, not always the ordinary addition. Let us now look at a simple but less common example, the even-odd algebra A with a binary operation {, }, A= {e, o}: e+ e= e, e+ o= o, o+ o= e(e-even, o-odd) . Here the role of zero is played by e and -x= x for all elements x. We can generalize this algebra by introducing a cycle. Take any integer n> 1 and the set A= {0,1,2, ... , n-1} with the binary operation x⊕y= { x+ y, if x+ y 1), by mapping strongly coupled to weakly coupled states. In the context of usual four-dimensional quantum field theory (QFT), which is much simpler than string theory, S-duality exchanges the electrical and magnetic field components and, respectively, charged particles with already mentioned magnetic monopoles (see, e.g., [134], [135]). Discussion of magnetic monopoles, as well as of T-dualities (topological, see, e.g., https://en.wikipedia.org/wiki/Superstring_theory), would lead us far astray, e.g., to scrutinizing the solutions of nonlinear PDEs and topological properties of space. Topology is a highly advanced discipline requiring a profound treatment, even in the physical context. More or less detailed theory of nonlinear PDEs is a special and serious topic being outside the scope of this book - otherwise the text would be swollen beyond the perceptive ability of a normal person. Nevertheless, we shall discuss later some symmetry properties of the main equations of physics and see how the duality appears. Now I shall only note that a symmetry of the Maxwell equations permitting us to exchange electrical and magnetic fields appears as a duality between elementary charges and collective excitations, since in a weak coupling case electrical charges look like quanta whereas magnetic charges emerge as collective excitations. Moreover, it seems that in general, quantum dualities exchange particles with collective excitations, in QFT with quasiparticles represented as solitonic solutions. Furthermore, one can often hear about the "wave-particle duality" (see Chapter 6). However, the concept of wave-particle duality as it is widely taught to students is intuitive, unspecific, somewhat ambiguous and imprecise. This is essentially the same kind of duality as between dual spaces in algebra, because to each displacement or velocity-like vector in classical mechanics we have a momentum or wavenumber-like vector in quantum mechanics. Due to this duality, quantum mechanics becomes very similar to classical field theory by semiclassical correspondence between particles (mechanics) and waves (fields) - the same wave equations are used since the mechanical Hamiltonian becomes the kinetic energy operator of the corresponding wave theory. The fact that differentials and derivatives are dual to each other can be a good introduction not only into differential geometry but into quantum mechanics as well. So, examples of dualities are abundant both in physics and in mathematics. One of the popular recent examples is 148 Mathematical Potpourri the Veneziano dual model [184], see also [185]. The Veneziano model was later reformulated and included into a set of modern string theories. For the students of numerical mathematics and so-called scientific computing, nowadays of fashion, it would be interesting to know that, for instance, matrix columns, which may be treated in Euclidean space as forming a vector subspace, possess the dual space which is the subspace of rows. In solid state physics, one uses dual (reciprocal) basis to describe the crystallographic lattice and specify the electron states. If we take the Dirac notation in quantum mechanics (Chapter 6), the ket vectors form a subspace of a Hilbert space and have the dual space of bra vectors. For any vector field we have a dual field, however sometimes not so easily represented in the form of simple geometrical images. Likewise, the dual space for the tangent vector space, Tx(M), is the cotangent space Tx ∗(M) which, unfortunately, also does not have a simple geometrical representation like the tangent space - we can mathematically define but cannot easily visualize the cotangent space even for the low-dimensional cases d= 1,2,3 ... So it seems that in this case we must leave the visual patterns of physics aside and stick to mathematical definitions. We have already seen that for each vector space V there exists a dual space V∗ - here I may refer, e.g., to the section on coordinate transformations where the difference between contravariant and covariant vectors and coordinates was discussed. For convenience, I shall recall some simple geometrical interpretations. For example, the velocity vector for each smooth curve passing through point x∈M is a contravariant tangent vector lying in tangent space Tx(M) at a point x of a manifold M. It is an element of the vector space tangent to the manifold in this point and one can view it as being located in a hyperplane analogous to the tangent plane for dimension d= 2. Another example is a displacement vector r. We have seen that a covariant vector in a point x of a manifold is an element of another - dual - vector space called cotangent space in the point x of the manifold. A covariant vector is a geometrical object that transforms like the basis vectors, whereas a contravariant vector is an object that transforms "against" the basis vectors. By the way, one should not confuse covariant vectors and Lorentz covariance, this is just an abuse of the language. For instance, a contravariant Lorentz vector is different from a covariant Lorentz vector, although the former is a vector that is Lorentz covariant (see Chapter 5). Now, however, we confine ourselves to simple examples needed only to illustrate the concept of duality. Duality, in human terms, is just two different ways of looking at the same phenomenon.85 In physics, duality in fact appeared long before many other concepts, when in the 17th century Huygens and Newton proposed 85 This principle has been long exploited by authors in classical literature. For example, in in polyphonic novels by F. M. Dostoyevsky such as "The Karamazov Brothers" there exists no 'objective' description of the world, but only different manifestations of reality depending on the perception of acting personages. A brilliant illustration of duality has been produced in the film "Rashomon" by the famous Japanese film director, Akira Kurosava. This film was based on the short story "In a Grove" by Akutagawa Ryunoske (although it was a far adaptation); both the story by Akutagawa and the movie depict several characters offering different "true" accounts of rape and murder. Thus, it is an artistic exploration of the nature of "truth". 3.15 Dualities in Physics 149 competing wave and corpuscular theories of light's behavior. For over a century Newton's corpuscular theory was dominant, mainly, it seems, due the author's authority - one more example of sociological effects in science. It is only in the early 19th century when diffraction has been reliably observed, e.g., in Thomas Young's double slit experiments, that people apprehended grave complications for the Newton's corpuscular theory of light. Many observations clearly demonstrated an obvious wave behavior, the light waves showed interference patterns - here the principle of superposition for linear waves is manifested (see many interesting details in [106]). All this seemed to prove that light traveled in waves so that the Newton's theory had to be replaced by Huygens' theory. But if light existed as waves, this fact would imply, according to the standard wave theory accepted at that time, a medium of some kind through which the wave must propagate. This invisible medium was suggested by Huygens and called by him "luminoforous aether", in a more customary terminology, ether. Nobody could observe this medium despite a number of experimental attempts throughout the 19th century. Finally, the search for ether culminated in the famous and direct Michelson-Morley experiment, which led to creation of the relativity theory. Then the photoeffect was studied (in particular, in the Einstein's work) and again a particle theory of light took dominance, eventually leading to mathematical models of quantum mechanics (see Chapter 6). Thus, the old concept of duality was plainly related to today's physics, the latter possessing more modern dualities. Now, it would be a truism to say that light manifests itself as both a particle and a wave, depending on what observations are made and how the experiment is set up. The same duality applies to matter, e.g., to elementary particles. To manifest dual properties of matter, different conditions are necessary. Incompatible external conditions reveal either wavelike or corpuscular properties of microparticles, e.g., electrons or neutrons. In other words, depending on imposed external conditions (such as observation means - experimental setup) quantum particles display wave or particle features, just like light. The meaning of duality is that there is a potential possibility for every object to display totally diverse - even opposite and complementary - features depending on premeditated settings. It would be probably wrong to understand duality too literally, for instance, in the quantum-mechanical case, to model the particle as a singular point of a certain field or represent the particle motion as being carried by the wave. Such mathematical models, of course, can be constructed but their heuristic value seems to be too low (see more details in Chapter 6). Here, I would like to make a few comments. One must notice that such circumstances may exist when dual features can be displayed simultaneously. For example, the bound state of an electron in atoms is described by a standing wave, usually with an amplitude that is rapidly diminishing with the distance to the center (nucleus). This fact means that the electron is approximately localized (a corpuscular feature), sometimes with a good accuracy, but it is nonetheless a wave. A typical feature of such combined manifestations of the dual (complementary) properties is that they are less distinctly exhibited. 150 Mathematical Potpourri competing wave and corpuscular theories of light's behavior. For over a century Newton's corpuscular theory was dominant, mainly, it seems, due the author's authority - one more example of sociological effects in science. It is only in the early 19th century when diffraction has been reliably observed, e.g., in Thomas Young's double slit experiments, that people apprehended grave complications for the Newton's corpuscular theory of light. Many observations clearly demonstrated an obvious wave behavior, the light waves showed interference patterns - here the principle of superposition for linear waves is manifested (see many interesting details in [106]). All this seemed to prove that light traveled in waves so that the Newton's theory had to be replaced by Huygens' theory. But if light existed as waves, this fact would imply, according to the standard wave theory accepted at that time, a medium of some kind through which the wave must propagate. This invisible medium was suggested by Huygens and called by him "luminoforous aether", in a more customary terminology, ether. Nobody could observe this medium despite a number of experimental attempts throughout the 19th century. Finally, the search for ether culminated in the famous and direct Michelson-Morley experiment, which led to creation of the relativity theory. Then the photoeffect was studied (in particular, in the Einstein's work) and again a particle theory of light took dominance, eventually leading to mathematical models of quantum mechanics (see Chapter 6). Thus, the old concept of duality was plainly related to today's physics, the latter possessing more modern dualities. Now, it would be a truism to say that light manifests itself as both a particle and a wave, depending on what observations are made and how the experiment is set up. The same duality applies to matter, e.g., to elementary particles. To manifest dual properties of matter, different conditions are necessary. Incompatible external conditions reveal either wavelike or corpuscular properties of microparticles, e.g., electrons or neutrons. In other words, depending on imposed external conditions (such as observation means - experimental setup) quantum particles display wave or particle features, just like light. The meaning of duality is that there is a potential possibility for every object to display totally diverse - even opposite and complementary - features depending on premeditated settings. It would be probably wrong to understand duality too literally, for instance, in the quantum-mechanical case, to model the particle as a singular point of a certain field or represent the particle motion as being carried by the wave. Such mathematical models, of course, can be constructed but their heuristic value seems to be too low (see more details in Chapter 6). Here, I would like to make a few comments. One must notice that such circumstances may exist when dual features can be displayed simultaneously. For example, the bound state of an electron in atoms is described by a standing wave, usually with an amplitude that is rapidly diminishing with the distance to the center (nucleus). This fact means that the electron is approximately localized (a corpuscular feature), sometimes with a good accuracy, but it is nonetheless a wave. A typical feature of such combined manifestations of the dual (complementary) properties is that they are less distinctly exhibited. 3.16 Manifolds 151 Another remark is rather trivial: diverse experimental settings often may properties. The resulting wave function (amplitude) is constructed as a linear superposition of all interfering probability amplitudes and, consequently, reveals the same wave-like features. One can interpret this construction as follows: the probability of finding a particle in some location is determined by a wave, but the actual physical observation detects a corpuscle. The so-called physical meaning of the wave-particle duality has long been a focus point of heated debates in the early years of quantum physics. Nowadays, it seems this issue remains a hot topic only for amateurs. Nevertheless, attempts to explain what the wave-particle duality "really means" has resulted in a number of interesting interpretations of quantum mechanics. To my knowledge, all these interpretations produced no new results and no fresh experimental predictions as compared with the set of wave equations. The mathematical models associated with the latter may appear to some people rather complicated, but they mostly provide accurate experimental predictions. One can also note that duality as a principle exists not only in mathematics and physics. In the preceding chapter, I have briefly mentioned cognitive models, in particular those developed in so-called cultural anthropology. One such model, comprehensively discussed by the famous philosopher Karl Popper in his highly interesting two-volume manuscript "The Open Society and Its Enemies" [186], was designed to explain the stability of human societies86. The main statement here is that any stable society must have a dual structure, in particular support two parties, e.g., "left" and "right". This bipartite system was found to be typical even of the ancient and primeval societies, and up till now the majority of societies stick to the primeval pattern. The society was divided into two parts, and each member of society was exactly aware to what part she/he belonged. Even today, one can tell whether a person sympathizes with the liberal attitudes or identifies oneself with a strong state domineering over its citizens. Ancient people even formulated social duality in the form of a slogan: "Always take your wife from the other half of the tribe." This principle made intra-tribal aggression highly ritualized and, consequently, non-destructive. The dual notions are especially evident in oriental - mainly Chinese - traditions (Yin-Yang etc., see, e.g., http://en.wikipedia.org/wiki/Yin_and_yang). Social duality was also popular and extensively depicted in literature (R. L. Stevenson, J. Swift). 3.16 Manifolds One may ask: why did we devote so much attention to a seemingly trivial subject of vector spaces? The main motivation was the following: ndimensional linear (vector) spaces, e.g., Rn, and their open subsets U are the simplest sets on which one can define functions as well as vector and tensor fields. Recall that Rn may be thought as consisting of vectors (x1, ... , xn), with dimensionality n being arbitrary large, but still finite. 86 Periods of instability, e.g., revolutions, are assumed significantly less durable than periods of stability. 152 Mathematical Potpourri The most useful feature of such simple sets is the possibility to establish on them a unique coordinate system. For instance, one can represent a map f(x) on x∈U⊆Rn as a function of n coordinates xi, i= 1, ... , n. However, already very simple examples demonstrate that such a compelling-looking scheme is a somewhat provisional way of looking at physical spaces, and it may easily become inadequate. If we take, for instance, the unit sphere, Sn-1 ⊂Rn, which may be described by the equation xixi= 1, i= 1, ... , n, then we quickly get into trouble. To be more specific let us take n= 3 i.e., S2 ⊂R3 . Then we have the familiar equation from the school-time geometry: (x1)2 + (x2)2 + (x3)2 = 1 (hopefully, there will be no confusion between coordinate and power indices). Writing this equation in spherical coordinates x1 = cosφsinθ, x2 = sinφsinθ, x3 = cosθ, 0 0 for all v≠0, v∈V. This scalar product results in the distance between two simultaneous points a, b (on fibers Vt) ρ(a, b) = (a-b, a-b) 1 2 = ‖a-b‖ (3.11) and makes each fiber Vt⊂R4 a 3D Euclidean space (fig.3.1) or, more generally, a manifold M which is not necessarily Euclidean (Galilean). Roughly speaking, the n-dimensional Euclidean space En is the vector space Rn endowed with the Euclidean metric. Figure 3.1.: Galilean and curved spacetime fibrations. Here γ(t) is a path of a material point (world line) represented as a curve in spacetime R3 × R. Slightly generalizing this picture, we might say that a global time mapping should exist on the spacetime, i.e., a function t: M→R whose gradient is everywhere timelike. The existence of this mapping signifies that the spacetime can be foliated into time-ordered spacelike hypersurfaces, each corresponding to ti= const, i= 1,2, .... The mapping t: M→R guarantees simultaneity of all events contained in t-hypersurfaces (see, e.g., [189]). Each spacelike hypersurface Vt corresponding to some value t of the global time splits the entire spacetime into two halves, being in the Galilean-Newtonian picture the mirror copies of one another. Initial conditions to evolutionary equations are set up on any hypersurface Vt. In a curved spacetime this time166 Mathematical Potpourri reflection symmetry can be broken: there may be no spacelike hypersurfaces Vt from which the spacetime observed in two opposite directions of time looks identically. This temporal asymmetry is expressed by the spacetime metric gik. The Galileo group expressing the invariance properties of classical mechanics, together with the Euclidean, Lorentz and Poincaré groups, may be all considered the relativity groups. Mathematically, all these groups are particular cases of semidirect group construction, which is often used in group theory and its applications. Take as a basic example the Euclidean group E(3) or SO(3) acting on R3 - it is a semidirect product of rotations and translations. Generalizing this construction a little bit, we get some group G acting on a vector space V (and on its dual space V∗)93. In a nonrelativistic framework (Chapter 4), the Galileo group G is the natural kinematic model. Adding independent space and time dilations, we get the affine Galileo group. It is interesting that by imposing certain constraints on these dilations we may get other invariance groups important for physics, for example, the Schrödinger group (the invariance group of the Schrödinger and heat equation) and the Poincaré group essential for relativity theory. To better understand all this terminology let us write down the generic element of the affine relativity group, AR. The latter is the Galileo group G94 combined with independent spatial and temporal dilations. Recall that dilations (also called dilatations) are, in the physical language, just scaling transformations whereas from the mathematical standpoint these transformations represent a specific case of the conformal group. Dilatations are usually understood as a collection of maps from a metric space into itself that scale the distances between each two points. Thus, dilatations give the result of uniform stretching or shrinking, perhaps accompanied by rotations. In school geometry we study similarity transformations, which are dilatations in Euclidean space. Dilatations comprise a subgroup of the affine group (see above), with matrix A of a linear transformation is an orthogonal matrix multiplied by a scalar (dilatation ratio). So roughly speaking, a dilatation is a transformation that expands or contracts all points with respect to a given central point by some ratio, which may be greater or smaller than unity. Therefore, in order to specify a dilatation, one has to fix the central point and the ratio. A well-known physical example of dilatations is the γ-factor, which appears in relativity. It is always greater than unity but very close to it for nonrelativistic velocities. If the velocity (of a particle, for example) equals the velocity of light c, γ-factor is infinite and for velocities greater than c it is, strictly speaking, undefined. Let us take a look for a moment at the affine relativity group AR. A generic element of AR may be denoted as g= (φ, t0, x0, v, R, α, β) where t0 ∈R, x0 ∈Rn are time and space translations, v∈Rn in the boost 93 G may be, e.g., a Lie group or a group of automorphisms. 94 Some authors consider the group of Galilean transformations not being the affine group since in general one cannot write x↦x′ = Ax+ b where matrix A∈O(n), but it depends on the definition of the affine group. Here, we simply consider any Euclidean group E(n) to be a subgroup of an affine group A(n) in n dimensions. 3.22 Geometry of Classical Mechanics 167 parameter, R∈SO(n) is a rotation, α and β are time and space dilatations, respectively. It is interesting, by the way, that dilatations provide a link to quantum theory where there the quantity r∇V= xi∂iV (V is the potential) known as virial is considered. In fact, operator xi∂i is the generator of the dilatation group. The virial theorem in quantum mechanics states that when wave function ψ is the eigenfunction of the Schrödinger operator, then the expectation value of the virial is twice the kinetic energy i.e., (ψ, r∇Vψ) = 2(ψ, -∆ψ). Now, let us return to affine geometry of classical mechanics. The fundamental group for this geometry is the group of all affine transformations, i.e., the composition u= P(a) ∘v where P(a); = x+ a is the translation by vector a, V is the vector space over R, v∈GL(V) is the linear and bijective mapping, GL(V) is, as usual, the general linear group. Simply speaking, the group of affine transformations or affine group is just a group of linear maps supplemented by translations. The transition from the Euclidean group E(n) to the affine group A(n) exemplifies a mathematical structure which is called a semidirect product (see some details in [187], 4). So, the Euclidean group E(n) is a subgroup of affine transformations. The affine group A(n) in n-dimensional space works over the general linear group and is isomorphic to the group of matrices of order n+ 1 of the type L= (A a 0 1) - a fact extensively exploited also in computer graphics (where n= 3 of course) [95]. To put it another way, we may specify each element of the group of affine transformations in an n-dimensional space in two ways: either by a pair (A, a), with A being an n× n orthogonal matrix and a being a real ndimensional vector, or by a single (n+ 1) × (n+ 1) square matrix. One may notice that affine groups are simple specific cases of Lie groups. An affine group on Rn is a combination of orthogonal (in general pseudoorthogonal) transformations and translations in the vector space Rn. Now we might try to discern at which point geometry enters classical mechanics. To begin with, we are often reminded of geometric optics in textbooks on mechanics. This allusion actually means that the ray of light selects the path between each two points, x1 and x2, requiring the shortest time to travel. In our everyday life we usually say that light travels in straight lines. In simple mathematical terms, this ray theory of geometric optics as well as of classical mechanics is formulated by the expression δ∫n(x)ds x2 x1 = 0, (3.12) 168 Mathematical Potpourri where function n(x) is in optics the refraction index, n(x) = cv(x) ⁄ , ds= vdt, dt= dsn(x) c ⁄ , c is the velocity of light in vacuum, v(x) is its velocity in the medium. An analogous expression holds also for mechanical particles, and now we shall find this analog. The above expression is an elementary path integral construction, where time does not appear explicitly - usually a very convenient presentation, a predecessor of the Feynman path integrals [44], see below "Path Integrals in Physics". It is interesting that the ideas of replacing time when considering the mechanical motion by other variables circulated already in the 18th century (Euler, Fermat, Jacobi). Excluding the time as a parameter leads directly to geometrization of mechanics. Indeed, let us consider, for simplicity, a single free particle whose kinetic energy is T= 1 2 mik dxi dt dxk dt. Here, the quantities (x) are the components of the mass tensor. (A little later we shall see that the notion of mass is highly nontrivial and stirs a lot of controversy up to now.) In the simplest (e.g., isotropic) case mik= mδik. Introducing the length element by the usual expression ds2 = mik(qj)dqidqj, where qi are the generalized coordinates in the threedimensional coordinate space95, we get for the kinetic energy T= 1 2 ds2 dt2. We may observe that the mass tensor mik begins to play the role of metric tensor gik. This simple analogy provides a direct link from mechanics to geometry. We see that the attained geometry is non-Euclidean since the components of mass (metric) tensor are position dependent. For instance, in cylindrical coordinates we have mik= diag(m, I, m) = ( m 0 0 0 I 0 0 0 m ), where I≔mr2 is the moment of inertia of a point mass, and T= 1 2 mdr2 + r2dφ2 + dz2 dt2 . Mass tensor models are frequent attributes of condensed matter physics where the anisotropic response of quasiparticles (electrons, excitons, polarons, etc.) to an applied force should be described. They are an important 95 In this simple example, the coordinate manifold is three-dimensional, in a more general case of N particles it obviously has n = 3N dimensions. 3.23 Transformation of Affine Coordinates 169 concept in condensed matter, particularly in energy band theory. The mass tensor is also encountered in other fields of physics such as relativistic theories, nuclear physics, studies of particle motion in a magnetic field (e.g., in accelerators), theory of elasticity, soft tissue and polymer physics, mechanics of the rigid body, robotics, study of mobility and frictional effects, etc. In general, in the non-relativistic picture mass is a covariant tensor to be contracted with two contravariant velocities producing a scalar kinetic energy, T= (1 2 ⁄ )mikq̇ iq̇ k. The corresponding inverse mass tensor is given by (m-1)ik. In general, two tensors aik and bik are called inverse96 if aijbjk= δi k. We can define the contravariant mass tensor mik to be inverse to mik which means that (m-1)ik= mik, analogously to the metric tensor gik. One might observe that the mass tensor, at least in many examples, is an involution, i.e., its own inverse. 3.23 Transformation of Affine Coordinates Let us start from the most primitive notions. Affine coordinates are defined as rectilinear coordinates in an affine space. If we have an n-dimensional affine space, in which we have fixed a basis, with each point x in it has coordinate labels x which are transformed as x′j= ai jxi+ bj (3.13) when we move from one affine coordinate system (K) to another (K′). The coefficients ai j forming elements of a matrix A are, in principle, arbitrary; the only condition on this matrix, detai j= detA≠0, ensures that one can express the old coordinates xi through the new ones x′j. This reversibility of the coordinate transformation (K↔K′) in fact means that the systems K and K′ (which are arbitrary affine coordinate systems) are equivalent. Affine coordinate unit vectors (formerly often called repère) in K′ can be expanded over unit vectors in K e′ j= αj iek (3.14) 96 Inverting a tensor or a matrix requires in general some tedious calculations usually performed on a computer. A more or less simple formula is practical only for small dimensionality such as, for example 3 × 3. Thus, for matrix (tensor) A= ( a b c d e f g h i ) A-1 = 1 detA( -fh+ ei ch-bi -ce+ bf fg-di -cg+ ai -db+ ae -eg+ dh bg-ah -bd+ ae ) 170 Mathematical Potpourri where coefficients αj i represent a matrix that must be related to the previously encountered matrix ai j. This relationship is well-known and can be found in any textbook on linear algebra. For the sake of completeness, I shall bring here some main formulas. Vectors e′ j and ek may be chosen arbitrary, the sole requirement being that they are linearly independent, which results in detαj i≠0. Due to this condition, the inverse transform ei= βi ke′ k (3.15) does exist, with matrix βi k being inverse to αj k, i.e., αi kβj i= δj k and αi kβk j= δi j. These formulas express a simple identical transformation ei= βi kαk jej (one may recall that the expansion on repère vectors is unique). Now let us write the components of the same vector x in two different coordinate systems, K and K′: x= xiei= x′je′ j= xiβi ke′ k= xiβi kδk je′ j (3.16) Comparing these two expressions, we have {x′j= βi kxiδk j= βi jxi e′ j= αj iei (3.17) One may see that when one moves from K to K′, the vector coordinates are transformed with the matrix βi j which is the inverse transposed of αi j. This is the traditional linear algebra techniques. Later we shall see that the basic facts from linear algebra are related to the spectral properties of Hermitian matrices in finite-dimensional spaces. Here the connectivity with other physmatical domains, e.g., quantum theory, is reflected in the faith that unbounded self-adjoined operators in Hilbert spaces are characterized by similar properties. 3.24 General Coordinate Transformations The above example shows some simple properties of coordinate transformations, namely the connection between matrices relating the change of basis vectors for two affine coordinate systems with the matrices governing the transformation of vector components in the transition between these different coordinate systems. The problem of a change of coordinate systems is ubiquitous in physmatics and may be considered one of the most important. We shall encounter it practically in all fields of physics: relativity theory is essentially the discipline studying the invariance properties of physical quantities under coordinate transformations; quantum mechanics is based, to a large extent, on a mathematical model of a unitary state space understood as an infinite-dimensional Hilbert space, with state vectors represented as linear combinations of basis states being transformed from 3.25 Variational Methods 171 one representation to another (see Chapter 6); in classical mechanics, canonical transformations (the group) preserving the measure in the phase space are a powerful tool to simplify problems by making a canonical change of variables (see Chapter 4); the Fourier transform, which is one of the main mathematical instruments of physics (see Chapters 5, 6), is nothing more than a transformation between different bases (transition to a dual basis); change of coordinates (substitution) is a favorite trick in integration, and so on. Therefore, it would be worthwhile to study some general principles of general coordinate transformations. It is curious that this subject, even in its not mathematically refined and abstract version, seems to be traditionally difficult for the students. So, I shall try to leave no ambiguities in telling the story of coordinate transformations and especially as far as their relationship with various physical models is concerned. 3.25 Variational Methods The most demonstrative example of the usefulness of variational calculus is traditionally given by the Lagrangian formulation of classical mechanics (see the next chapter). In the variational formulation of classical mechanics, the system (e.g., particle) trajectories q(t) are extremizers (minimizers), or at least critical points, of the action integral with fixed endpoints S= ∫L(t, q, q̇)dt t1 t1 (3.18) where L(t, q, q̇): R2n+ 1 →R is the Lagrangian - the difference between kinetic and potential energy. One usually assumes the Lagrangian to be smooth and strictly convex in v≡q̇ , i.e., ∂vv 2 L> 0 (physically this last condition may be wrong for the systems with negative mass). The minimizing trajectories are then the solutions to the Euler-Lagrange equations δS δq(t) = d dt ∂L ∂q̇i -∂L ∂qi = (3.19) Here we have used the symbol of functional derivative (see below). In the classical monograph by R. Courant and D. Hilbert [7], chapter 7, one can find a very interesting and, in my opinion, quite actual discussion of the relation between the calculus of variations and the boundary value (eigenvalue) problems. This relationship has been thoroughly worked out ever since and served as a foundation for the so-called direct methods of variational calculus. In physics, especially in quantum field theory (QFT), it is customary to introduce the notion of a so-called functional derivative. To elucidate the idea of such a derivative, let us consider the system with a single degree of freedom. By definition, the functional derivative of a quantity S with respect to q(t) is written as 172 Mathematical Potpourri δS[q, q̇] δq(s) = lim ε→0 S[q(t) + εδ(t-s), q̇(t) + εd dtδ(t-s)] -S[q(t), q̇(t)] ε (3.20) Let us expand the quantity containing delta-functions: S[q(t) + εδ(t-s), q̇(t) + εd dtδ(t-s)] = ∫ dtL(q(t) + εδ(t-s), q̇(t) + εd dtδ(t-s)) t1 t1 = ∫dtL(q, q̇) t1 t1 + ε∫ dt(∂L ∂qδ(t-s) + ∂L ∂q̇ δ(t-s)) t1 t1 + o(ε) = S[q, q̇] + ε(∂L ∂q+ d dt ∂L ∂q̇) t=s + o(ε) (3.21) So the Euler-Lagrange equations may be written as δS δq(t) = 0 (3.22) (where we have put t= s). 3.26 Differential Equations In fact, we have already encountered differential equations many times, but this subject is so important for physics that I shall try to briefly describe a few methods of obtaining solutions to differential equations, both in the twovariable domain (ordinary differential equations, ODE) and many-variable domain (partial differential equation, PDE). Differential equations may be without exaggeration called the principal tool of physics. The term œquatio differentiale was probably first coined by Leibnitz to designate the relationship between variables x, y and their infinitesimal increments (differentials) dx, dy. In the linear case, ẋ = A(t)x, x∈Rn, matrix function A(t): Rn→Rn may be considered smooth, but in a number of physically interesting applications one can only require this matrix to be continuous in t∈I⊆ R. Physicists usually don't pay attention to such sophistry, but would it be mathematically correct? Curiously enough, yes, because the proof of the famous Cauchy-Peano (or Picard-Lindel ̈of) theorem stating existence and uniqueness of a solution as well as its continuity with respect to initial data of a differential equation, x(t0) = x0 uses only the Lipschitz condition or, at maximum, differentiability over x for a fixed t (see above more on this theorem and also below the section on dynamical systems) so that continuity in t would be sufficient. 3.26 Differential Equations 173 A second-order linear differential equation y′′ + py′ + q= 0, (3.23) where p(x) and q(x) is extensively used in the wave-mechanical (Schrödinger) version of quantum mechanics. In this chapter, well-known mathematical facts and results are rendered. I have provided only a cursory coverage of geometric methods in physics since, firstly, I am in no way a specialist and, secondly, there are excellent texts on modern geometry with physical applications such as [187]. The main purpose of mathematical potpourri is to ensure a more or less firm ground to painlessly read the major part of the literature on current physics, in particular where geometric methods are used. I was hoping that this section will at least help to unite physical and mathematical communities overcoming corporate arrogances and isolationism. 174 Classical Deterministic Systems 4 Classical Deterministic Systems Deterministic systems are those for which the full dynamic description can be achieved without introducing the concept of probability. Ideally, in deterministic systems the state of the system at an initial moment t0 uniquely determines the future behavior of the system, i.e., for any t> 0. The study of continuous deterministic systems may be reduced to mathematical models based on differential equations for the functions giving the state of a system. The first - and the most famous - model of this type was the one of Newton and is usually called Newtonian mechanics. It studies the motion of a system of material points in R3 and may be regarded as a special case of classical dynamics. Classical dynamics is probably the most developed part of science, it studies the evolution of systems made of material points - bodies that are so small that their inner structure is disregarded and the only characteristic is their position in space, ri= ri(t), i= 1,2, ... , N where N is the number of points incorporated into the system. Typically, in classical mechanics the socalled generalized coordinates qi(t) are used, which correspond to the degrees of freedom of a mechanical system. Generalized coordinates have been introduced by Lagrange and are especially useful in Lagrangian mechanics (see below). In modern terminology using generalized coordinates means that one should consider possible configurations of a mechanical system as points on a differentiable manifold. Naturally, one can also use any system of local coordinates which are convenient from a practical point of view. We have already observed in Chapter 3 the general change of coordinates; below we shall see in some detail how to pass from one system of curvilinear coordinates to another. In modern terminology this means that one should consider possible configurations of a mechanical system as points in a differentiable manifold, and, of course, then use any system of local coordinates which are continuously differentiable. Here, one may notice that the number of degrees of freedom is defined in classical mechanics as dimensionality of the configuration space of a physical system. For example, a system with two degrees of freedom is r̈ = F(r, t) where F is a plane vector field, r∈R2. If we convert this equation into the dynamical system form, we shall get four first-order equations and, respectively, four degrees of freedom as the dimensionality of the vector system of differential equations (defining a vector field) corresponding to a dynamical system. In other words, there may be a certain confusion in counting the number of degrees of freedom when one passes from conventional classical mechanics to the dynamical systems theory. The set of possible paths qi(t) in the configuration space of generalized coordinates completely describes the system. Of course, these paths should be single4.1 Main models of classical mechanics 175 valued functions, usually with a compact support, i.e., defined on a compact time interval[t1, t2]. Time t is clearly a very important variable; in the static models the role of dynamics - not only evolution - it has been excluded. One can of course take the dynamics into account employing a parameter which changes with time (such as the growing size of some object). This approach is used, for example, in self-similar models. 4.1 Main models of classical mechanics The primary feature of models based on classical deterministic (dynamical) systems is causality, i.e., in such models the effect cannot precede the cause and the response cannot appear before the input signal is applied. One may note that causality does not follow from any deeply underlying equation or a theory, it is simply a postulate, a result of human experience. Non-causal systems would allow us to get the signals from the future or to influence the past. Causality is closely connected with time-reversal non-invariance (the arrow of time). The time-invariance requires that direct and time-reversed processes should be identical and have equal probabilities. Most mathematical models corresponding to real-life processes are time noninvariant (in distinction to mechanical models). There is a wide-spread belief that all real processes in nature, in the final analysis, should not violate timereversal invariance, but this presumption seems to be wrong (see Chapter 9). Now, let me say a few words about bibliography. There exist tons of books on classical mechanics, but most of them must be updated by adding a number of modern subjects such as nonlinear phenomena, dynamical systems and differential geometric methods. The first book on classical dynamics that I studied was a comparatively thin book by F. R. Gantmacher [21], the Russian (Soviet) mathematician well-known for his monumental volume "The Matrix Theory" [22]. Gantmacher's book on analytical dynamics was simple and rigorous, so one could easily trace all the conventional transformations and derivations comprising the major part of traditional analytical dynamics (e.g., transition to generalized coordinates, canonical transformations, etc.). Difficult topics were treated by the author in a comprehensive manner. After the nice Gantmacher's "Lectures", the obligatory course of L. D. Landau and E. M. Lifshitz [23] at first seemed too lapidary and full of declarative prescriptions. Only much later did I realize that the whole classical multivolume course by Landau and Lifshitz was intended not for students, but rather for professionals working specifically in the field of theoretical (not mathematical!) physics. Classic books are invariably valuable, but they are not always the best ones for a concrete person. Despite some strange but luckily rare mistakes, it is an undeniable fact that every physicist (at least in Russia) has learned a lot from these books and specifically from "Mechanics". Now I think that as far as classical dynamics goes, the book by V. I. Arnold [14] is fully sufficient to provide the basic knowledge. Despite being over thirty years old, this is the kind of book belonging to the narrow class I appreciate most of all: one that contains the information you need. The meticulous text of Arnold covers almost every subject of classical mechanics (and some extra), starting 176 Classical Deterministic Systems from elementary Newtonian mechanics to semi-classics (short-wave asymptotics) and perturbation theory. In my opinion, this book, like some volumes of the course by Landau and Lifshitz, may be considered an example of connected sciences approach. Physicists often treat classical mechanics with an element of disdain or at least as something alien to "real" physics, say condensed matter physics. Personally, I think that classical mechanics is extremely useful for any part of physics, for many great ideas are rooted in classical mechanics. Examples of Gibbs and Dirac who persistently looked for opportunities to transfer methods of classical mechanics to other fields are very persuasive. One may notice that the ideas from dynamical systems theory have been present in mechanics from the time of Lyapunov and Poincaré, but could not overcome the barrier between classical mechanics and physics until the 1970s when nonlinear dynamics all of a sudden became of fashion. Why is classical mechanics so important? The matter is that classical mechanics has been the primary collection of mathematical models about the world, mostly about the motion of its objects. The principal aim of classical mechanics is to describe and explain this motion, especially under the influence of external forces. Ancient thinkers were enchanted by the celestial clockwork and devised a number of interesting models, the most well-known among them was that of Aristoteles (Aristotle), which can be formulated as the statement that the motion of bodies is possible only in the presence of external forces produced by other bodies. This verbal construct of Aristoteles can be translated into the mathematical language as the first-order differential equation, with the state of a moving body (or particle) being described by three coordinates (x, y, z) that change under the influence of an external force dr dt= f(r), r= (x, y, z), f= (fx, fy, fz) (4.1) This is really the simplest model of the motion. One can immediately see that the crucial difference of Aristotle's model based on the first-order vector equation with the second-order system corresponding to Newton's model is, primarily, in the irreversible character of the motion. Aristotle's considerations were rooted in everyday experience: the trolley should be towed to be in motion; if the muscle force stops acting, the trolley comes to rest. In such a model the state of a system would be given by positions alone - velocities could not be assigned freely. One might observe in this connection that even the most fundamental models reflecting everyday reality are not unique and often controversial. The contrast of Aristotle's model to that of Newton is readily seen when one starts thinking about acceleration (as probably Einstein did). In Newton's model of single-particle dynamics, the general problem is to solve the equation F= ma, where 4.2 Newtonian Mechanics 177 a≔d2r dt2 , r≔(x, y, z), r∈R3 when the force F is given. One might, however, ask: was Aristotle always wrong? The trolley stops due to friction: FR= -αv. The Newton's equations for this case may be written in the form dr dt= v, mdv dt= F-αv, (4.2) where F is the towing force and m is the mass of the trolley. When F is nearly constant, i.e. the towing force, as it is frequently the case, slowly varies with time, the mechanical system is close to equilibrium, so the inertial term m dv dt is small compared to other terms. In this case, we get the equilibrium (stationary) solution v= dr dt= F α, which has Aristotle's form. This solution form is valid only when the motion is almost uniform, i.e., the acceleration is negligeable, and the friction is sufficiently large, | dv dt| ≪|αv| . One can, in principle, imagine the world constructed on Aristotle's principles: the Aristotle model would correspond to the Universe immersed in an infinite fluid with a low Reynolds number. It is curious that Aristotle's model is in fact extensively used in contemporary physics, engineering, and even in everyday life. An example is Ohm's law, j= σE, v= E/neρ, where e is the electron charge, E is the electric field (acting force), n is the charge density, and v is the average velocity of the charge carriers. Ohm's law is a typical macroscopic stationary model, when the driving force is compensated by resistance. Stationary models are typical of classical physics: in fact, classical physics dealt only with slow and smooth motions, e.g., planetary movement. Models describing rapid and irreversible changes, resulting in multiple new states, appeared only in the 20th century. Stationary and quasi-stationary models serve as a foundation of thermodynamics (Chapter 7) whose principal notion - temperature - may be correctly defined only for equilibrium. Another example of such models is steady-state traffic flow simulation. 4.2 Newtonian Mechanics Let us now get back to the Newton's model. Although the stuff below looks trivial, there are several reasons to discuss it once more in order to better understand why, despite the fact that Newton's law of motion may be considered a complete statement of classical mechanics, one would desire other mathematical formulations for this discipline. As is well known, the general problem in Newtonian mechanics is to solve the system of equations ma d2ra dt2 = ∑Fi(ra, ṙa, t) i , 178 Classical Deterministic Systems where index a= 1, ... , N enumerates the particles and the sum goes over all the forces acting on the a-th particle. Forces Fi are regarded as given functions of r, ṙ, t in every point ra, ṙa, t. A mathematical solution of this system of second-order differential equations is a 3N vector-function ra(t), a= 1, ... , N. If one considers the simplest case of the motion of a point mass (material particle) in 3D space, the position vector r may be expressed as r= xi+ yj+ zk, where {x, y, z} are projections that are varying when the particle moves, three quantities {i, j, k} (also denoted {e1, e2, e3}) are unit vectors which are assumed constant in the laboratory system. Consider a particle in a potential force field, namely the one where force F is the (negative) gradient of some potential energy function V: mr̈ = -∇V(r) (4.3) where V: Rn→R is some function on Rn. The total - kinetic plus potential - energy is given by the expression E= m 2 |ṙ|2 + V(r) and is easily verified to be constant on any solution. To show it, one can just differentiate E(t) and use Newton's equation: dE dt= mdr dt d2r dt2 + dr dt∇V(r) = 0 (4.4) In principle, dimensionality n, r= (x1, ... , xn) may be arbitrary. For the sake of simplicity and to make the treatment more intuitive, let us confine ourselves to n= 3. Assume that the potential V(r) is spherically symmetric, i.e., that V depends only on the distance from the origin - this is usually called the central field case. We may put e.g., V(r) = (1/2)v(|r|2), then the equation of motion becomes mẍ i= -v′(|r|2)xi (4.5) The energy E is still a conserved quantity. The term "conserved" here means that the function E(xi, ẋ i, t), t∈R remains constant on any solution of the equation of motion. 4.3 Lagrangian Mechanics Although Newton's formulation of classical mechanics has been very successful, it still has some drawbacks. First of all, it requires that all forces acting on a mechanical system should be explicitly known. In practice, however, forces are often given implicitly, for example, by constraining the trajectory to belong to some manifold such as to lie on a surface. Newton's law is a vector equation, and it may be cumbersome to transform its components into curvilinear coordinate systems. Furthermore, the analysis of symmetries is not easy to perform as long as we stay within the framework of Newtonian mechanics. Constraints are also difficult to take into account. The Newtonian 4.3 Lagrangian Mechanics 179 model is essentially a local one: it was designed to describe the motion in terms of the local velocity change caused by the local force value. Thus, it is difficult to gain an insight into the global features of motion, its mathematical structure. Mathematically speaking, it is reflected in the fact that Newton's equation is a second-order one, so the global properties of the system cannot be figured out as easily as for the first-order equations, for which an extensive dynamical systems theory has been developed. Furthermore, Newton's model is difficult to apply to fields - it is basically suitable for classical systems which consist of particles. Because of this, Newtonian mechanics cannot serve as a starting point for quantization of distributed systems. Thus, Newton's equations, despite their ubiquity, are seldom used in modern condensed matter physics. And in general, the quantization of a classical system can hardly be based on Newton's formulation of mechanics. In contrast to Newtonian mechanics, the Lagrangian mathematical model provides a global and, in principle, coordinate-free formulation of classical (i.e., trajectory-based) motion. Let us first give some elementary considerations. Imagine a system of many particles having coordinates qi(t) and velocities q̇i(t) (here, for simplicity, we make no distinction between coand contravariant coordinates), although the usual summation convention omitting the sum symbol will be used. The quantities qi(t), q̇i(t) labeling the particles' positions and velocities are not necessarily Cartesian (rectangular) coordinates. For N particles, i runs from 1 to 3N, so the motion of these particles in R3 may be represented by a trajectory γ: [t1, t2] in R3N usually called the configuration space. This trajectory starts from the point in the configuration space corresponding to the initial time t1 and terminates at the point corresponding to the final time t2 . The main idea of Lagrangian mechanics is to characterize the actual motion of a mechanical system by a single trajectory, against all others joining the initial and final points in the configuration space, which extremizes some functional S called action. The latter is usually written as S= ∫L(q, q̇, t)dt t2 t1 (4.6) We already know that the function L is called the Lagrangian. Mathematically speaking, a Lagrangian is defined as a smooth function L: R× TM→R on a manifold M. Nowadays people say that L is a function on the tangent bundle TM of M. The action S= SL associated with L is defined for any smooth curve γ: [t1, t2] →M. The set of all such smooth curves may be thought of as an infinite-dimensional manifold, with the action to be a function on it. Simply speaking, the classical action is the integral of the Lagrangian along the trajectory. We allow the Lagrangian L to depend on time t, i.e., L is a function on L: R× TM→R, although often L is only required to be defined on some open set in TM. Also, for some models it might be important to weaken the smoothness requirement for γ and L. 180 Classical Deterministic Systems In other words, one is usually interested in determining the curves γ: [t1, t2] →M with given endpoint conditions, for which the action functional reaches a minimum among nearly lying curves. Such a set of curves are typically called variations. Given a curve γ with fixed endpoints t1, t2 and some ε> 0, one can define a smooth variation of it as a smooth map γε(t, σ) ≔γε[t1, t2] × (-ε, ε) →M, (4.7) σ∈(-ε, ε) with the properties γε(t, σ) = γ(t) for all t∈[t1, t2] and γε(t1, σ) = γ(t1), γε(t2, σ) = γ(t2) for all σ∈(-ε, ε) . In standard texts on Lagrangian mechanics, this variation is typically denoted as δq(t). Now, it is easy to produce once again the Euler-Lagrange equations, but before that I would like to note that the Lagrangian L depends on two independent sets of 3N variables denoted qi(t) and q̇i(t), but the latter are not necessarily the t-derivatives of the former. One should not be confused by the fact that in these coordinates an arbitrary curve in the tangent bundle has to be denoted as (qi(t), q̇i(t)). Geometrically, the second set of variables coincides with the time derivative of the first only when the curve in TU is the so-called tangential lift of a curve in U where U⊂M is e.g., an open set, on which we may establish a coordinate chart x: U→Rn (here n = 3N). From the physicist's perspective, only the elementary methods of the calculus of variations are needed to derive the Euler-Lagrange equations. The latter are produced by the elementary integration by parts. Assume qi(t) to be a path for which the action S is stationary, i.e., under the path variation qi(t) →qi(t) + δqi(t) where δqi(t) is an arbitrary smooth function of t∈ [t1, t2] vanishing at the endpoints of the curve (for simplicity, we are using here the customary physical notations), the variation δS of S must be zero in the first order in δqi(t). This means S= ∫dt( ∂L ∂qi δqi+ ∂L ∂q̇i δq̇i) t2 t1 = ∫dt( ∂L ∂qi -d dt ∂L ∂q̇i ) δqi t2 t1 + [ ∂L ∂q̇i δqi] t1 t2 = 0 (4.8) where integration by parts has been carried out. At the endpoints of the path δqi= 0, and we get the Euler-Lagrange equation in its elementary form for the classical extremal trajectory d dt ∂L ∂q̇i -∂L ∂qi = 0 (4.9) It is in general the system of second-order ODEs and according to the Cauchy-Peano theorem from the theory of differential equations, this system should have a unique solution, provided the initial values of qi, q̇i are given. How can we establish connection with the Newtonian model? Let us take, for simplicity, a single particle: 4.3 Lagrangian Mechanics 181 L(qi, q̇i, t) ≡L(xi, ẋ i) = m 2 ẋ iẋ i-V(xi) (4.10) Then the Euler-Lagrange equation takes the form of Newton's law d dt(mẋ i) = -∂V ∂xi (4.11) This simple example in fact demonstrates the generality of the Lagrangian approach. Taking other Lagrangians, one can produce a set of mathematical case studies. For instance, one may consider a Riemannian metric L= gij(xk)ẋ iẋ j or L may be a linear form on each tangent space, L= ai(xk)ẋ i. The system under consideration is nearly always specified in physics by choosing an appropriate model Lagrangian. As we shall see in the chapter devoted to quantum field theory, the same methods can be applied to physical systems with an infinite number of degrees of freedom which are not constituted of moving particles, such as fields, by simply stipulating a suitable Lagrangian function. We have mentioned that it is not so easy to take into account symmetries in the context of the Newtonian mechanics. Let us try to consider symmetries in the Lagrangian formalism (one can consult the book "Mechanics" by L. D. Landau and E. M. Lifshitz for an elegant treatment of this subject, [23]). Assume that the Lagrangian L does not depend on a certain coordinate qj (such a coordinate is historically called cyclic). Naturally, L may depend on q̇j, otherwise the variable qj is of no interest at all. Then one can immediately see from the Euler-Lagrange equations that the generalized momentum pj conjugate to a cyclic coordinate, ∂L ∂qj, is conserved: dpj dt= d dt ∂L ∂q̇j = ∂L ∂qj = 0 (4.12) It is from here, by some simple generalization in the spirit of Lie group theory, that one of the two famous Noether's theorems can be obtained [14]. Assume that the Lagrangian has a symmetry that may be continuously parameterized, i.e., the action S is invariant under an infinitesimal symmetry operation applied to the path qj(t), qj(t) →qj(t) + δqj(t). It is important that the symmetry is continuous: in this case it is always possible to define an infinitesimal symmetry operation (infinitesimal displacement leaving the action invariant). Another way to put it is to notice that discrete groups do not in general result in conservation laws. The reason to this fact is that discrete symmetries cannot in general imply infinitesimal variations. Since the action S is invariant under the displacement δqj(t) , we have for an arbitrary number of degrees of freedom 182 Classical Deterministic Systems S= ∫∑δqj( ∂L ∂qj -d dt ∂L ∂q̇j ) j t2 t1 + ∑[δqj ∂L ∂q̇j ] j t1 t2 = 0 (4.13) The first term here vanishes since all qj(t) are the solutions of the EulerLagrange equations. Thus, we get ∑δqj(t1)pj(t1) j = ∑δqj(t2)pj(t2) j (4.14) where t1 and t2 are arbitrary initial and final time-points on the trajectory qj(t), t∈[t1, t2]. It is clear that δqj(t1) and δqj(t2) do not in general vanish. Since t1 and t2 are arbitrary, the above equation may be interpreted as the conservation of the quantity δS= ∑δqj(t1)pj(t1) j , (4.15) i.e., independence of this quantity on t (see the textbook "Mechanics" by L. D. Landau and E. M. Lifshitz, [23], §43 "Action as a function of coordinates", for a discussion and examples). A connection of this invariant with the customary conservation laws may be illustrated by a classical particle moving in a central potential V(r). The Lagrangian written in spherical coordinates (r, θ, φ) is L= m 2 [ṙ2 + r2(θ̇ 2 + φ̇ 2sin2θ)] -V(r) (4.16) Since φ is here cyclic, the invariant (∂L∂φ̇ ⁄ )δφ= const results in the conservation law: m(rsinθ)2φ̇ = const, (4.17) which is the conserved component of the angular momentum about z-axis. Let me now, as usual, make a few comments on the Lagrangian framework and its general validity in physics. The famous "Course of Theoretical Physics" by L. D. Landau and E. M. Lifshitz is based on the assumption that the whole physics may be derived from the least action principle97. Action and action density have long been the well-established concepts in classical mechanics and field theories. However, in fluid dynamics the corresponding quantities have been introduced only comparatively recently. One of the reasons for this delay is probably owing to the fact that fluid dynamics had traditionally been developed in the Eulerian coordinates whereas the action principle is more conveniently written and treated in the Lagrangian frame. 97 An exception is "Fluid Mechanics" volume [85], see below. 4.4 Hamiltonian Mechanics 183 The Lagrangian framework has many other advantages and spreads well beyond classical mechanics. The generalized coordinates qi(t) do not necessarily describe the motion of a material point or a set of material points. For instance, in scalar field theory (we shall study some models based on it later) each qi(t) may correspond to the field value regarded as a function of time at various field points. Certain versions of quantum mechanics (e.g., a novel sum-over-histories formulations) also use the Lagrangian framework and generalized coordinates. Moreover, such a description is not restricted to nonrelativistic physics only. Thus, physical systems in quantum field theory are customarily specified by their Lagrangians. The Lagrangian approach provides a global description of the trajectory in a configuration space as well as a fair understanding of the conservation laws. One trivial example of a conservation law is obvious already from the form of the Euler-Lagrange equations. If the function L does not depend on a particular variable qi, then the quantity ∂L ∂q̇i≡pi is a constant of the motion - a conserved quantity. In this case, it is the conserved momentum conjugate to the cyclic coordinate qi. In general, there exists a conserved quantity corresponding to each generator of the Lie algebra of a Lie group of continuous symmetries. Moreover, there exists a close relationship between Lagrangian mechanics, which is typically based on positive-definite symmetrical bilinear forms, and Riemannian geometry. One can even say that Lagrangian mechanics is a subdomain of Riemannian geometry. 4.4 Hamiltonian Mechanics There are some practical inconveniences in Lagrangian mechanics. First of all, the Euler-Lagrange equations, like those of Newton, are second-order ODEs. This fact makes it more difficult to provide a simple geometric interpretation for their solutions than for a system of the first-order differential equations, which readily admit geometric interpretation as flows along vector fields. Possibly the main motivation underlying the Hamiltonian formulation of mechanics was the intention to interpret the time evolution of a physical system as such flows. This nice interpretation is especially convenient when studying the properties of dynamical systems (see below). Moreover, in the Hamiltonian formulation there is practically no difference between coordinates and momenta - they are just variables in a system of an even number (2n) of equations, with the possibility to treat all of them equally, make changes and transformations between them and construct some kind of geometry in this 2n-dimensional phase space. Such a geometry is usually formulated in terms of symplectic manifolds, with a group of diffeomorphisms acting on them. However, before we proceed to using this geometrical language, I shall make some general observations and try to illustrate the main ideas on simple examples. One more important motivation for some other description, different from the Lagrangian formulation, consists in the desire to get explicit expressions for the right-hand side of the dynamical system vector equation, dxdt ⁄ = F(x, t) where in the Lagrangian formulation variables are x= 184 Classical Deterministic Systems (qi, q̇ i) , so the first half of equations in the dynamical system is simply dqidt ⁄ = q̇ i, but there is no regular method to extract the right-hand side for dq̇ idt ⁄ from the Euler-Lagrange equations. This is related to the geometric properties of the tangent bundle TM, which is too simple to support dynamics, but luckily not the only one. The tangent bundle TM is formed by tangent spaces (fibers) attached to each point of the configuration manifold M, which contain the velocity vector fields of the same contravariant type as vectors qi∈M. One may note in passing that the Hamiltonian equations are covariant whereas the Lagrangian ones are contravariant. The tangent bundle TM is insufficient to form the phase space of a physical system, one needs also a dual space (see below). We can interpret the Hamiltonian equations as a dynamical system on a cotangent space to configuration manifold M. There exist of course many important books on classical mechanics where the geometry of phase space is thoroughly discussed. I have already remarked that for me, personally, the best one is still the book by Arnold [14], in spite of the fact that it has been written over thirty years ago. Yet, I have heard several times from the students an astounding declaration that the book by V. I. Arnold was highly unorthodox and was viewed a bit skeptically by their professors. (A similar statement was related also to another book by V. I. Arnold [15] which personally I found quite fresh when I first read it in the beginning of the 1970s.) Before we proceed, one may notice that the Hamiltonian systems are only a very restricted class of dynamical systems, since they are specified by a single scalar function - the Hamiltonian. The description of an autonomous (time-independent) Hamiltonian system possessing N degrees of freedom, i.e., evolving in a 2N-dimensional phase space98, is reduced, due to the energy integral of motion, to (2N-1) variables - one can say that the Hamiltonian dynamical system is restricted to (2N-1)-dimensional energy shell (manifold). Physicists traditionally prefer the coordinate-dependent notation, whereas in contemporary mathematics the coordinate-free geometric language is nowadays of fashion. For me, it was useful to compile sort of a dictionary translating the notions of Hamiltonian (and, to some extent, Lagrangian) mechanics into the language of modern geometry. In fact, this language is not that modern - it has been used in mathematics for about a century. As usual, I avoid hard technical definitions. The particle coordinates xi, i= 1, ... , n on the configuration space R3n, n is the number of particles in the system, are usually replaced by "generalized" coordinates qi which are understood as global variables, but more generally they may be interpreted as local coordinates on a configuration space M, which can be any manifold whose points coincide with all possible configurations of the system. The variables q̇ i represent all tangent vectors to all possible curves (paths) lying in M. These variables, viewed as 98 Recall that variables in Hamiltonian systems appear as canonically conjugated pairs, which results in an even dimension of the phase space. This is not true for the case of generic dynamical systems. 4.4 Hamiltonian Mechanics 185 "generalized" velocities, are usually called by mathematicians "fiber coordinates on the tangent bundle T(M) ", the fiber itself is sometimes denoted as Tx(M). Tangent bundles and fibers are often denoted with no brackets, i.e., as TM and TxM, respectively (see more details in Chapter 3 on mathematics). The paradigmatic examples of a vector bundle are the tangent and cotangent bundles TM and T∗M of a smooth manifold M. These are the bundles whose fibers at x in M are the tangent space TxM and cotangent space Tx ∗M respectively. In this case the bundle charts and transition maps are induced by the derivatives of the coordinate charts on M (or their adjoints, for the cotangent bundle). The momenta pi in this language are called fiber coordinates for the cotangent bundle T∗(M), which is in fact the phase space. In Lagrangian mechanics, the momenta are defined as pi= ∂L∂q̇ i ⁄ which implies that pi and qi would be transformed with mutually inverse matrices under a coordinate change, similarly to co- and contravariant vectors in linear algebra. These inverse matrices correspond to dual bundles. The concept of the cotangent bundle, or simpler the cotangent space, is constructed by using the base notion of a manifold and its associated tangent space (see Chapter 3 on vector spaces and other geometrical definitions). Let us now recall some basic facts. A manifold M may be regarded as a direct generalization of the concept of a surface. To put it simply, a manifold can be treated as an n-dimensional "curved" space where one desires to define vectors as in usual Euclidean space. In principle, there are several ways to introduce vectors, one of them being with the help of a tangent space. At each point x of a manifold M (we assume it to be finite-dimensional, in the infinite-dimensional case one may encounter some complications, see below) we can define the tangent space of vectors, Tx(M), for example, as the set of all vectors tangent at point x to smooth curves passing through x. Such a construction produces the space of all vectors defined at x with the same dimension n as M, which is also a manifold. By taking the union over all points x∈M, we may define a tangent bundle of M T(M) = ⋃{x} × Tx(M) x∈M Later we shall often use these (and a little extended) geometric concepts; now, before we proceed with simple analysis and examples more commonly used in traditional physics textbooks, e.g., in [23], allow me to make a small digression. In general, given any n-dimensional vector space V, we might look at the space of all real-valued linear maps on V. This space of linear maps forms itself a vector space, which we can call, under certain conditions of orthogonality, the dual space V∗. This may appear, at the first sight, an extremely abstract space, but in fact it is not drastically different from the initial space V. If, for example, the space V is formed by the contravariant vectors like radius-vector 186 Classical Deterministic Systems or velocity, then V∗ contains covariant vectors such as gradient, wave vector or quantum-mechanical momentum. One can, by the way, rather abstractly define a covariant vector as a linear functional or, rather, a polylinear operator mapping contravariant vectors into scalars, yet I don't think this level of mathematical abstraction would bring new understanding. One may ask: what was the main idea behind the introduction of tangent and cotangent spaces? One idea probably was to bypass the traditional coordinate representation of classical differential geometry. In fact, choosing a basis is an arbitrary operation: once you choose a basis in a vector space, it becomes isomorphic to Rn. Then, unless you take some precautions, you risk establishing an isomorphism between tangent and cotangent spaces, because Rn is isomorphic to its dual. Moreover, you may be tempted to introduce a usual inner (scalar) or canonical dot product via coordinate entries as in Rn, but this object does not exist in general in arbitrary vector spaces. When we discussed differential manifolds in Chapter 3, we observed that for them a similar problem exists: although one may introduce local coordinates, there is no regular way to select them. On a sphere, for example, which is one of the simplest realizations of a manifold, one cannot find a natural place for the coordinate origin. Thus, intrinsic properties of manifolds must be studied without any special choice of local coordinates. The latter cannot be found by physics - they do not occur in nature and we introduce them only for convenience. One of the reasons some people identify tangent and cotangent (dual) spaces e.g., in the form of contravariant and covariant objects may be that they implicitly assume the existence of local coordinates and bases isomorphic to those in Rn. Let us repeat the main points once again. The concept of a cotangent space is built upon the already introduced structures of a manifold and an associated tangent space. In a manifold M, which actually is an n-dimensional space, we would like to define "vectors" as we usually do in the ordinary Euclidean space. There are different manners to do it, and one of the simplest ways is to introduce the tangent space of vectors at a point x in M, denoted Tx(M), which is essentially the space of all vectors defined at x. By the way, this construction is reasonable, because the tangent space is a manifold in itself. Now, the linear algebra - the most powerful discipline in handling linear vector spaces - teaches us that for each vector space V there is a naturally associated dual space V∗. Thus, we may assume that in this special case there exists a dual space to the tangent vector space Tx(M), which is what we may call the cotangent space Tx ∗(M). Dual vectors in Tx ∗(M) are often referred to as one-forms. We may note that, even though Tx(M) and Tx ∗(M) may have the same size and more or less the same structure, there is in general no unique map between these two spaces. In other words, given an arbitrary vector x, there is no natural way to associate it with a unique one-form ω1. One can try to put these vectors in mutual correspondence using components in some fixed basis (see Chapter 3), but, as I have several times mentioned, the components of a vector are not fundamental quantities: they change under coordinate transformations. Nevertheless, we shall see later that it is possible 4.4 Hamiltonian Mechanics 187 to choose some particular structure helping to identify vectors and oneforms, and this convenient additional structure is referred to as metric. Given the local cotangent space Tx ∗(M), one can also define the cotangent bundle in the same way as with the tangent bundle: T∗(M) = ⋃{x} × Tx ∗(M) x∈M . One may observe (and the reciprocal lattice in solid state physics is a good physical illustration to this observation) that if one has an n-dimensional vector space, one can build up a space of all linear maps on it (e.g., comprised of nontrivial linear combinations with real coefficients). This space of linear maps is also a linear space, which can be defined through basis vectors ej ∗, each of the latter being a linear combination of the full set of basis vectors ei for the vector space V. If we require, as when we considered above the transformations between affine coordinates, that ei ∗ej= δij, then we get a basis for a dual space formed by linear maps in the initial vector space V. Any linear map in V∈Rn may be written as a linear combination of the basis vectors ej ∗ with real coefficients (see Chapter 3). In other words, given a basis ei of a vector space V, we may define the dual basis of V∗ using the rule ei ∗ej= δij, i.e., any basis in V may produce a dual basis in V∗. The Kronecker symbol (as well as delta-function in the case of functional spaces) ensures a one-toone correspondence between the two sets of basis vectors, but does not guarantee orthonormality, if one deals with vector spaces and not necessarily with inner product spaces (see about the hierarchy of spaces in Chapter 3). A brief note about infinite-dimensional spaces: this case may be quite nontrivial, since vectors or functions dual to those forming a basis in V do not in general provide a basis in V∗. In particular, the usual consequence of isomorphism between vector spaces V and V∗ and the Euclidean space Rn, dim V= dim V∗ becomes meaningless in the infinite-dimensional case. Moreover, if you try to find the double dual, i.e., dual of dual vector space V∗∗, it will not necessarily be isomorphic to the starting vector space V. See e.g., http://planetmath.org/encyclopedia/DualSpace.html. After all these digressions let us get back to Hamiltonian mechanics. We know that it is essentially based on the notion of a Hamiltonian function or Hamiltonian. One usually defines the Hamiltonian as a function H(pi, qi, t) of 6N+1 variables pi, qi, i, j= 1, ... ,3N plus parameter t which is interpreted as time. The role of time is not always simple, and we shall specially discuss it several times in various contexts (see e.g., Chapter 4). The Hamiltonian is related to the Lagrangian as H(pi, qi, t) = piqjδj i-L(qj, q̇ j, t). (4.18) In curved spaces one should in general write, instead of δj i, some nondiagonal tensor gj i, e.g., the metric tensor. One usually assumes that the 188 Classical Deterministic Systems defining expressions pi= ∂L(qi, q̇ i, t) ∂q̇ i ⁄ are invertible and can be solved as equations with respect to q̇ i giving the generalized velocities in terms of pi, qi. A sufficient condition for such invertibility is the nonsingularity of the Hessian matrix Dij= ∂2L ∂q̇ i∂q̇ j= ∂pi ∂q̇ j, det Dij≠0 This condition is almost always fulfilled in simple mechanical problems but is not necessarily valid in more complicated physical problems. This fact may be a source of difficulties in quantum field theory (QFT), and we shall discuss this issue later (Chapter 6). The procedure of inverting the momentum equations, pi= ∂L∂q̇ i ⁄ , and arriving at the Hamiltonian from the Lagrangian is usually called in mechanics the Legendre transformation, although it is only a particular case of the Legendre transformation in general. Now we may notice that the Hamiltonian H is a function of its variables on the cotangent bundle T∗M of a d= 3Ndimensional manifold with the local coordinates qi, i= 1, ... ,3N, whereas the Lagrangian of the system is a function on the tangent bundle TM of M. Considering, for simplicity the case when the Hamiltonian does not explicitly depend on time (this case corresponds to a closed system, which is not driven by any external agent), we have dH= pidq̇ i+ q̇ idpi-∂L ∂qidqi-∂L ∂q̇ idq̇ i= q̇ idpi-∂L ∂qidqi, (4.19) which means that the differential dH is conveniently expressed only through the differentials of its own (cotangent bundle) variables with ∂H ∂pi = q̇ i, ∂H ∂qi= -∂L ∂qi (4.20) We know that trajectories of mechanical particles are described by the curves qi(t) on M, with the dynamics being specified by the Lagrangian L(q̇ i(t), qi(t), t) . These trajectories obey the Euler-Lagrange equations resulting from extremizing the classical action defined as an integral of the Lagrangian along the trajectory (see above) ∂L ∂qi= d dt ∂L ∂q̇ i i = ṗi Thus, using these simple transformations, we arrive at an alternative description of the extremal path - this time in terms of other equations of motion: 4.4 Hamiltonian Mechanics 189 q̇ i= ∂H ∂pi , ṗi= -∂H ∂qi. (4.21) This system of the first-order ordinary differential equations (ODEs) is called the Hamiltonian's system of equations. We may write the least action principle in the Hamiltonian formulation as δS[pi, qi] = δ∫dt(piq̇ i-H(pi, qi)) t2 t1 = 0, (4.22) which gives the above written Hamiltonian equations of motion. Denoting η≔pidqi (often called the Poincaré form) so that the first term in the action may be written as an integral over path γ ∫piq̇ idt t2 t1 = ∫η γ One can associate with this integral a so-called symplectic form, ω= dpi∧dqi (see e.g., [14], Ch.8). It is sometimes said that the action written in the Hamiltonian form characterizes the symplectic structure, with the cotangent bundle T∗M on which the Hamiltonian H is defined being the symplectic manifold associated with the considered mechanical system, i.e., the particle motion on manifold M. Since this geometrical language seems to be universally accepted nowadays, I shall use it systematically even in simple examples in order to get accustomed to it. Imagine, for instance, a particle moving over the real line R so that η= pdq 99. The associated symplectic manifold T∗R has local coordinates (p, q). Then the corresponding symplectic form is simply ω= dp∧dq. Let us now try to generalize this construction a little bit. The local coordinates (p, q) may be denoted, for instance, as (z1, z2), and if a symplectic manifold has an arbitrary finite dimensionality, then local coordinates can be written as z≔(z1, ... , zn) so that the Poincaré form is η= Bα(z)dzα and the symplectic form may be written as ω= 1 2 ωα,β(z)dzα∧dzβ where ωα,β(z) ≔∂αBβ(z) -∂βBα(z). This last equation provides the coordinate expression of an exterior derivative of the form ω. One can define here the general Poisson brackets (some authors prefer a singular - the Poisson bracket) 99 It is common now to omit the symbol d in differential forms, in particular in the form dη; frankly speaking, I do not like this "modern" style (although I use it), since I belong to the old school in which the number of integrals was always equal to the number of differentials. Besides, there is a lot of confusion about this modern notation. 190 Classical Deterministic Systems {F, G} = ωi,j(z) ∂F ∂zi ∂G ∂zj where ωi,j(z) is the inverse matrix to ωi,j(z), ωi,j(z)ωj,k(z) = δk i; here to simplify notation, I changed indices from Greek to Latin - I hope it will not produce any confusion. One can encounter the Poisson brackets in various contexts, and we shall discuss their properties ad hoc. Now I would like only to emphasize their connection with quantum mechanics. Indeed, quantization of classical mechanics on a manifold M was originally performed as replacing the Poisson bracket algebra by the commutator algebra, i.e., by the transition from the classical phase space (cotangent bundle T∗M) to the quantum mechanical Hilbert space H. In the conventional textbook example when the manifold M is simply R3, the Hamiltonian coordinates qk and pi are treated as operators q̂k and p̂i on H= L2(R3) , with [p̂i, q̂k] = iħδi k whereas the classical Poisson brackets are {pi, qk} = δi k. Operator q̂k is simply the multiplication by qk, whereas pk is a covector differential operation, pk≔ -i∂∂qk ⁄ . Then one needs to replace "classical observables" represented by real functions F(p, q) defined on T∗M 100 by quantum self-adjoint operators F̂ on H. Then, by definition, the expectation value F̅ of a "quantum observable" F̂ at some state Ψ ∈H is the scalar product F̅ = (Ψ, F̂Ψ) (we assume the unit norm here). See Chapter 6 for details of quantization, in particular canonical quantization. Now we may just state that orthodox quantum mechanics based on the Schrödinger equation finds its direct analog in the Hamiltonian picture of classical mechanics. In Chapter 6 we shall see that the Schrödinger equation itself is the quantum analog of the Hamiltonian equations. Thus, one may boldly say that Hamiltonian systems play a central part in mathematics and physics. They have brought forth symplectic geometry methods, canonical quantization and the concept of integrable systems. They also allowed us to better understand time evolution and conserved quantities as Noether symmetries. We have also seen that if the Lagrangian of a physical system is known, then one can obtain the Hamiltonian by the Legendre transformation (see about the general Legendre transformation in Chapter 4). However, for an arbitrary system of equations, it may be not an easy procedure. From the viewpoint of a theory of differential equations, the Hamiltonian form of a system of differential equations has a specific structure, namely the symplectic structure. To use all the advantages of the Hamiltonian structure it would be desirable to know whether an arbitrary system of differential equations has a Hamiltonian structure, and if it has to be capable of writing the system of equations in the Hamiltonian form, i.e., finding such a structure explicitly. Thus, when we use the Hamiltonian formulation in classical mechanics, we have to study symplectic geometry in the phase space and, correspondingly, the algebra of differential forms. When turning to quantum 100 We have already noticed that in classical mechanics functions are assumed to be smooth, e.g., F∈C∞(T∗M). 4.4 Hamiltonian Mechanics 191 mechanics, the classical Hamiltonian formulation naturally mutates into the Hilbert space paradigm using the language of operators and the techniques of boundary value problems for the Schrödinger equation. In distinction with the Hamiltonian formulation, the Lagrangian mechanics requires, as we have seen, studying variational calculus, and when we pass from the Lagrangian description to quantum theory, we have to learn the language of path integrals (developed by P. A. M. Dirac and R. Feynman). Let us now try to illustrate how Hamiltonian mechanics works on several simple examples. We have already briefly described the most trivial example possible - that of a particle moving along a line. If we try to formulate this onedimensional problem in the jargon of symplectic geometry, we must first write the necessary forms. The corresponding symplectic manifold (the cotangent bundle T∗R) has local real coordinates (z1, z2) = (q, p), with the following forms: Poincaré: η= pdq symplectic: ω= dη= dp∧dq exterior derivatives: ω1,2 = ω2,1 = -1; ω1,2 = ω2,1 = 1 Poisson brackets: {F, G} = ω1,2 ∂F ∂q ∂G ∂p+ ω2,1 ∂F ∂p ∂G ∂q= ∂F ∂q ∂G ∂p- ∂F ∂p ∂G ∂q= 1; {q, p} = 1. Here, as we have noted a couple of times, F and G are typically considered smooth. One may also notice that the operation of taking the Poisson bracket is skew-symmetric and satisfies the Jacobi identity. This means that the Poisson brackets are simultaneously Lie brackets (see Chapter 3) on C∞(M). We can write the Hamiltonian equations of motion for any "classical observable" F(q(t), p(t), t), which is again considered smooth, F(q, p) ∈C∞(T∗M) (for simplicity, I have omitted the explicit dependence on parameter t): dF(q(t), p(t)) dt = {F(q(t), p(t)), H(q(t), p(t))} This equation is quite important, since it is equivalent to the Hamilton equations and states that the evolution of an "observable" F - i.e., the rate of change of the observed value of the function F(q, p) - is equal to the observed value of the Poisson brackets {F, H}. For many applications of classical mechanics, it is sufficient to assume the Hamiltonian function H: R2 →R to be a C2 function of 2n variables qi, pj, i, j= 1, ... , n. In the 1D case (n= 1) the integral H(q(t), p(t)) = const fully describes the trajectories in the phase space (phase plane). One can easily illustrate the meaning of the "energy surface" H(qi, pj) = E on this simple example. Since we have excluded an explicit dependence on time (such a case is usually called autonomous), we obviously have a time-translation invariance. This property can be easily verified by replacing t with τ= t-t0 where t0 = const is a translation along the time axis. Consider, e.g., the Hamiltonian equations or, in general, a vector equation of the form dzdt ⁄ = F(z) where, as before, z= (z1, ... , zn)T are local coordinates. The motion equations written in this form are usually called a dynamical system (see below). Consider the initial value problem (IVP) for the above vector 192 Classical Deterministic Systems equation, dzdt ⁄ = F(z), z(0) = z0, and assume that it has the solution z= φ(t). Then making a substitution τ= t-t0 we see that the IVP dzdt ⁄ = F(z), z(0) = z0 has the solution ψ(t) = φ(t-t0). This second solution is obtained by merely translating the first one along the time axis. It is important to understand that these two solutions to our dynamical (IVP) problem, φ(t) and ψ(t), are different, but belong to the same energy. In our 1D case, these solutions lie on the same phase curve (orbit). This situation is commonly illustrated by the oscillator equation, z̈ + ω0 2 = 0, which has solutions z1 = sin ω0t and z2 = cos ω0t, the latter being obtained from the first by the transformation t→t-π/2. The oscillator equation can be cast in the form of the equivalent vector equation with x= x1, ẋ = x2. So, in a fairly unorthodox manner, the notation has led us to a map from functions into forms; d is known as the exterior derivative. You might ask, "What about our notion of dx as a small change in x?" Well, we have to throw that picture out of the window, because the notation "dx" does not actually mean a small change in x. "dx" isn't even really a number. It's a linear map from vectors to numbers. It can act on small vectors to produce small numbers, but it isn't a small number in itself; it's not even an element of R. It's an element of Tp∗M. So what does it really mean when we see "dx" in an integral? To finalize this section, we can emphasize the link between the general theory of dynamical systems and classical mechanics describing the motion of material bodies. Namely, one can show (see, e.g., [14]) that any dynamical system dxj/dt= vi(xj) admits, at least not in the vicinity of its critical points, a symplectic structure (i.e., Hamiltonian vector field) ωij(x), x= (x1, ... , xn) so that one can write the dynamical system as viωij(x) = ∂jH(x), where H(x) is the respective Hamiltonian function. 4.5 Oscillations In many cases, the forces appearing in a mechanical system, when it is driven from the equilibrium, strive to return the system to the equilibrium position, when there are no forces producing motions in the system. Return to equilibrium is the general tendency of almost all natural systems (see Chapter 7). For small deviations from equilibrium, the returning forces are proportional to such deviations (Hooke's law), which is simply a manifestation of the fact that each smooth (or analytical) function is locally linear. This linearity leads to the equation of small oscillations in a 1d mechanical system, mẍ + kx= 0, k> 0, which is the simplest possible model of a finite analytical motion. This motion is obviously periodic. One may notice that although the motion equations are purposed to describe evolution, systems in purely periodic motion are difficult to call evolving since they repeat the same states a countless number of times. In an evolving system, each state is, in general, unlike any other. A little later we 4.5 Oscillations 193 shall specially discuss time evolution in classical mechanics in connection with dynamical systems theory. The case of small oscillations with a single degree of freedom is such a common mathematical model that it is rather boring to discuss it once again. onetheless, I shall do it in order to have a possibility of returning to this ubiquitous model in future chapters and, perhaps, to find some not so common features in it. One can of course find careful exposition of a 1d oscillation theory in any textbook on mechanics so that I won't care much about details. Let x= x0 be a stable equilibrium i.e., ∂U(x0) ∂x ⁄ = 0, ∂2U(x0) ∂x2 ⁄ > 0.101 Then the solution to the mechanical system with kinetic energy T(p, x) = T(ẋ, x) = (1/2)x(a)ȧ 2, U= U(x) is periodic for (p, x) near (p= 0, x0). The first question that arises here is: what is the period of the motion? The answer is: the period T0 near equilibrium position with the decrease of the amplitude x(t) tends to the limit T0 = 2π/ω0 where ω0 2 = b a, b= 1 2 ∂2U(x0) ∂x2 , a= a(x0), since for a linearized system U(x) = (1/2)b(x-x0)2 and the solution is x(t, ω0) = Acosω0t+ Bsinω0t . The qualitative picture of 1d small oscillations (e.g., in the (ẋ, x)-space) can be plotted even without solving differential equations of motion: the relationship for the "invariant manifold" (see below the section on dynamical systems) mẋ 2 2 + bx2 2 = const represents the ellipse. The most common example of a periodic, i.e., oscillating, 1d mechanical system is a pendulum. The simple mathematical pendulum consisting of a point mass m and of a massless thread (constraint) of length l is described by the Hamiltonian H= pφ 2 2ml2 ⁄ -mglcosφ where pφ= ml2φ̇ is the angular momentum and φ is the declination angle. In this section, we shall mostly deal with small oscillations. For small oscillations φ≫1 around the equilibrium position φ= 0 , the pendulum motion is described by the linearized (harmonic) Hamiltonian H= pφ 2 2ml2 + mgl 2 φ2 = ml2φ̇ 2 2m + mgl 2 φ2 = mẋ 2 2 + mω2x2 2 , where ω= (g/l)1/2. One can of course describe oscillations in all possible versions of mechanics: Newtonian, Lagrangian, Hamiltonian, with the help of 101 I write these expressions for brevity not in a fully correct form; one should write, e.g., ∂U(x) ∂x ⁄ |x=x0 and analogously for second derivatives. 194 Classical Deterministic Systems Hamilton-Jacobi equations and all other possible methods. I prefer to attach oscillations to the most intuitive version of mechanics, the Newtonian one, because geometric language, more adequate in other formulations than in Newtonian, may obscure the basic notions of the oscillation theory. Oscillations are usually understood as finite motions that occur in the vicinity of equilibrium points. Recall an equilibrium point.102 If, for instance, point a is a local minimum of the potential U(x, λ), where λ is some parameter, then x= a(λ) brings the Lyapunov stability (see the section on dynamical systems), i.e., for initial conditions {p(0), x(0)} sufficiently close to {0, a} the whole phase trajectory {p(t), x(t)} is close to {0, a} . Mathematical models of almost all multidimensional vibrating systems are often just generalizations of the 1d case: xn+1 = Q(x1, ... , xn), and if Q is positive definite, then any small motion can be represented as a superposition of oscillations along the main axes. 4.6 Harmonic Oscillator The model of harmonic oscillator is not only the favorite toy model for physicists, both in classical and quantum domain, but it possesses certain unique features that manifest themselves especially conspicuously in quantum mechanics and quantum field theory (QFT). We shall see in Chapters 6 that the harmonic oscillator in quantum mechanics has very distinguished spectral properties. The energy spectrum of a harmonic oscillator is noteworthy due to at least three reasons. Firstly, its lowest energy is non-zero which means that the quantum oscillator performs ground-state motions. The average kinetic energy of the quantum oscillator in a ground state is positive. This fact leads in quantum field theory to formally infinite vacuum energy. In QFT, non-zero energy in the lowest state implies that the electromagnetic field exists even when there are no photons. All these ramifications of harmonic oscillator treatment irritated theoretical physicists to such an extent that eventually a bunch of new theories emerged out of this seemingly primitive model, and we shall later discuss some of them. Secondly, the energy levels of the harmonic oscillator are quantized i.e., they may only take the discrete values of elementary energy ħω times (2n+ 1)/2, where ω is the classical oscillation frequency, ħ is the Planck constant, n= 0,1,2, .... Taking only discrete values is a typical feature of many (but not all!) quantum systems and, in general, of boundary value problems, but the energy levels of a harmonic oscillator are equally spaced unlike for other quantized systems. The equidistant character of the harmonic oscillator spectrum is the third and maybe the most important feature. Besides, scrutinizing the solutions to the quantum oscillator problem one can observe the wavelike tunneling effect non-existent in classical mechanics. Within the classical Newtonian framework, it will be for the beginning sufficient to know that oscillations are just finite motions occurring in the 102 One more time trying to infuriate the mathematicians, I shall make in the present context no distinction between equilibrium, fixed, stationary and critical points. In general, however, these notions may be different. 4.6 Harmonic Oscillator 195 vicinity of equilibrium points. In principle, Newton's equations may be highly nonlinear which makes their solution a complicated problem, but harmonic oscillations are the result of linearization of any smoothly changing force near the state of equilibrium. This fact makes the model of t h e harmonic oscillator very general. In the Newtonian version of classical mechanics, one usually considers coordinates xi, corresponding velocities vi= dxi/dt≡ ẋ i and forces Fj(xi, vi, t), not necessarily potential ones. The kinetic energy is defined simply as a quadratic form Tkin= ∑mivi 2 i . We have already seen in connection with a brief discussion of geometric properties of classical mechanics that the kinetic energy is in general a bilinear form defining a metric on the tangent space TQ where Q is the configuration manifold, Tkin= 1 2 gikq̇ iq̇ k, but these geometric concepts are not indispensable for discussing simple models of Newtonian mechanics. By the way, since geometric transformations are usually of minor importance in the Newtonian formulation of classical mechanics, we shall temporarily not distinguish between contra- and covariant coordinates, writing vector indices below. Let us start, for simplicity, from a one-dimensional motion of a point mass: some practical experience shows that in Newtonian mechanics mathematical models for a great many systems are just multidimensional generalizations of 1d situations. In a single dimension we have the secondorder motion equation mẍ = F(x, ẋ, t) or, writing it in the dynamical system form, ẋ = 1 mp, ṗ = F(x, p, t), we may define vectors x≔{x1 = x, x2 = p} and f≔{f1 = pm ⁄ , f2 = F} and then write Newton's equation in the vector form ẋ = f(x, t). Here x represents points in the phase space (phase manifold) M of dimension two ( dimM= 2 ), with both variables x, p being regarded as independent coordinates. Solutions to the motion equation or, equivalently, to the corresponding dynamical system define a family of phase curves γ(t) = (γ1, γ2) or, in "physical" variables, x1(t) = γ(t), x2(t) = mγ̇(t) . If the function F does not depend on time t (one usually assumes this function to be also independent of p and continuous in x), then one may introduce the potential 103 U(x) = -∫F(x′)dx′ x x0 so that F(x) = - d dxU(x). Then the total energy E= T+ U is conserved i.e., dE dt= d dt(T+ U) = 0 in the process of motion, in other words, energy E(p, x) = E(γ̇(t), γ(t)) = const along the phase curves γ. Note that along such solution curves p is a function of x (and vice versa). The solution "flows" through the phase space, being restricted to constant energy curves (or surfaces if dimM> 2) if energy is conserved. 103 The independence of F on time and velocity is a condition at any rate sufficient, but in certain cases may be too restrictive. One can sometimes introduce a potential for time and velocity dependent forces. 196 Classical Deterministic Systems All these trivial reminders serve as a preparatory material for the models of concrete physical systems such as oscillator. We have seen that the harmonic oscillator is characterized by the simplest possible force law, F(x, p, t) ≔F(x) = -kx, k> 0, which means that the force pulls the point mass back to the equilibrium position (origin of coordinates). Here the quantity k is the model parameter which physically describes the oscillator rigidity: the more the numerical value of k, the more is the force driving the point mass to the origin of coordinates. One, however, typically introduces another parameter: the frequency ω= (k/m)1/2 and writes all the formulas pertaining to the oscillator in terms of this parameter. For example, using the definition of potential, we get U(x) = 1 2 mω2(x2 -x0 2), where x0 stems from the integration and we can put x0 = 0 without any loss of generality104. Then using the above notations, we have ẋ = f(x) with x1 = x, x1 = p and f1 = pm ⁄ = x2 m ⁄ , f2 = F(x) = -mω2x1 so that the motion equations in the dynamical system form look as ẋ1 = 1 mx2, ẋ2 = -mω2x1. 4.7 Symmetries and Conservation Laws Symmetries play a fundamental role in understanding the physical process considered. Usually, the importance of symmetries for physics is related to invariance properties of differential equations modeling the physical situation. Symmetries associated with conservation laws are specific cases of such invariance properties leading to integral combinations for the respective differential equations. One is tempted to think in this connection that each symmetry of differential equations leads to a conservation law and, conversely, each conservation law is the consequence of some symmetry. Both statements are wrong: there exist symmetries of differential equations of physics that do not provide conservation laws 105 as well as there are conservation laws which are not related to any symmetries of such equations (e.g., conservation of mass). A natural question would be: how many integrals of motion are there in a mechanical system? A classical system is described by 2n first-order differential equations 106 whose solution is determined by 2n independent constants. It may be natural to choose the initial coordinates and momenta of the classical system, i.e., coordinates of the initial point of the system's path in its phase space, as such independent constants. One can also arrange these initial coordinates and momenta in various combinations, not all of them independent. Corresponding to the number of initial coordinates and momenta as independently chosen constants, a classical system admits 104 One performs this transformation (x-x0 →x) every time in physics often forgetting that we are justified to do it owing to affine properties of our space. 105 Point or discrete symmetries. 106 This is the typical dynamical systems form, see the following section. 4.8 Relativistic Mechanics 197 exactly 2n independent integrals of motion. By the way, one often forgets that initial coordinates and momenta are also integrals of motion. Here, it would be probably worthwhile to make the following remark. There are two interpretations of symmetry transformations: active and passive. Under the active interpretation one understands an actual change of a physical state such as rotation or reflection whereas the passive interpretation consists in changing the viewpoint of an observer such as transition to a different coordinate system. In other words, the passive interpretation does not imply any real physical transformation, and maybe because of that it is mostly favored by mathematicians. In contrast, physicists generally prefer the active interpretation, and this diversity of preferences is of course curious. As a standard example, one usually takes the rotation of a vector: under the active interpretation vector is actually rotated and a new vector is obtained, while under the passive interpretation vector remains intact and only the basis vectors are rotated. 4.8 Relativistic Mechanics In almost all the textbooks, it is customary to treat relativistic mechanics together with classical field theory, probably because of the unifying concept of Lorentz invariance. This is a matter of taste of course, but personally I think that mechanics is just mechanics and can be treated as a quasi-isolated cluster of models, without the concept of fields. Relativistic mechanics of a material point, on a primitive level of a 3D geometry may be viewed as a classical mechanics with a velocity-dependent mass (see [39], §9). As R. Feynman put it in his "Feynman's Lectures on Physics", "The theory of relativity just changes Newtons laws by introducing a correction factor to the mass." [138], p.15-1. Simply speaking, one can substitute γm instead of m, where γ= (1 -β2)-1/2, β= v/c is the relativistic factor, v is velocity with respect to a laboratory frame (an observer being at rest). For many practical calculations this is sufficient. So, one sees that the "relativistic mass" increases with velocity. In popular science interpretations of special relativity, the statement that the mass of a body rises with its velocity is ubiquitous. This statement is supported, although more accurately, in many textbooks on relativity (see [158]). On a more profound, geometric level the simple heuristic concept of "relativistic mass" may lead to some difficulties. Let us take a closer look at relativistic mechanics, using, in the spirit of connected models in physics, this opportunity to discuss the properties of mass in general. It is interesting that the concept of variable mass, e.g., depending on the body velocity, emerged before the creation of special relativity, in particular, in the paper by O. Heaviside [38]. Moreover, there were experiments set up in the beginning of the 19th century which demonstrated the dependence of the quantity that the experimenters identified with the mass of a moving body on its velocity. 198 Classical Deterministic Systems 4.9 Dynamical Systems Historically, the term "dynamical system" was applied to a mechanical system with a finite number of degrees of freedom. The state of such a system was usually characterized by its position, for example, by the location of a center mass point or by the configuration of a number of points, whereas the rate of change of this position (more generally, of the system's state) was given by some law of motion. That was all in the original meaning of the term. In general, the state of a dynamical system may be characterized by some quantities, not necessarily of a mechanical origin, which may assume arbitrary real values, for instance in chemistry, biology or ecology. Mathematically, of course, complex values are also admitted. If these quantities are treated as coordinates xi of a point in an n-dimensional space, i= 1,2, ... , n, then such a space is usually called the phase space of the considered dynamical system and the point xi representing the state of a system is usually called its "phase". This terminology is probably due to J. W. Gibbs who called the state of the system its phase. The phase space U⊂Rn is usually considered an open domain; we shall limit ourselves to Euclidean domains, although it is of course possible - and in many cases necessary - to consider more general differential manifolds as the phase space, e.g., circle, cylinder, or torus. We have already seen that the phase space of a particle is a six-dimensional Euclidean space, the six components of the phase velocity vector being the components of the ordinary velocity and of the force, whereas the projection of the phase trajectory on the space TpX (parallel to the momentum space) is the trajectory of the particle in the ordinary sense of the word. The evolution of the system with time is represented as a motion of the phase point in the phase space over some curve - the phase trajectory. As we have seen on the example of vector spaces, to each point xi a vector with components xi may be associated. The usual type of a dynamical system is given by a map F: U→U. As particular cases of maps F, we have homeomorphisms, which are continuous maps admitting a continuous inverse, and diffeomorphisms, which are continuously differentiable maps admitting a continuously differentiable inverse. In other words, by a dynamical system one can mean a diffeomorphism of a compact differentiable manifold without boundary or a one parameter group such that φt+s= φt∘φs. This is a one-parameter transformation of a phase space - a phase flow, which may be both causal, dxidt ⁄ = fi(x1, ... , xn) and non-causal, dxidt ⁄ = fi(x1, ... , xk(t+ τ), ... , xn). 4.10 Dynamical Systems and the Cauchy Problem Let us consider the elementary dynamical systems theory in simple terms of the theory of ordinary differential equations. One often writes the system of differential equations corresponding to a dynamical system in the abridged (vector) form, dxdt ⁄ = f(t, x), where f is a vector function f: S→Rn, with S being an open subset of Rn+1. The parameter t∈R is as a rule identified with 4.10 Dynamical Systems and the Cauchy Problem 199 time, x∈Rn. More specifically, the vector function x: T→Rn is a solution to the equation dxdt ⁄ = f(t, x) on an interval T→Rn, and we shall assume it to be at least continuously differentiable at this interval. The vector function f(t, x) then is supposed to be at least continuous in both t and x. In elementary dynamical systems theory, all the variables are considered real. It is clear that the general scalar equation of the n-th order dnx dtn= F(t, x, dx dt, ... , dn-1x dtn-1), F: Rn+1 →R, can be represented in the vector form. One usually requires the vector function f(t, x) defined in x∈D⊂ Rn, t∈T⊂R, to satisfy the Lipschitz condition ‖f(t, x1) -f(t, x2)‖ ≤L‖x1 -x2‖ with respect to x for all (t, x) ∈T× D. Here ‖. ‖ denotes the norm, ‖f‖ = ∑ |fi| n i=1 . One often calls such functions to be "Lipschitz-continuous in variable x". It is easily seen that Lipschitz-continuity in x leads to ordinary continuity but the reverse is not true. However, continuous differentiability - absence of breakpoints - is sufficient for Lipschitz-continuity. The notion of Lipschitz-continuity is an important condition for establishing uniqueness of the solution to the initial value problem (the Cauchy problem or IVP): dx dt= f(t, x), x∈D⊂Rn, t∈T⊂R, (4.23) x(t0) = x0. (4.24) One can see the proof and commentaries to this essential theorem in any good textbook on ordinary differential equations (ODE), e.g. [15], [19]. It may be important to note that in the context of this uniqueness theorem the domain S= T× D is usually specified as T≔|t-t0| ≤a, D≔{x: ‖xx0‖ ≤d} , where a> 0 and d> 0 are constants. The statement of the uniqueness theorem is typically formulated as follows: if f(t, x) is continuous in S= T× D and is also Lipschitz-continuous in x, then the Cauchy problem has one and only one solution in |t-t0| ≤inf(a, d/G) where G≔supS‖f‖. It is interesting that the existence and uniqueness of a solution x(t) ≔ x(t, t0, x0) is ensured not very far from initial "time" t0, the respective domain effectively decreasing with the supnorm G= sup(t,x)∈S‖f‖. However, the time domain of existence and uniqueness is determined by a sufficient condition that may become too restrictive if the actual solution can exist beyond the guaranteed domain. Therefore, it may be more practical in a specific case to analyze a concrete problem finding the time domain of existence ad hoc from the available a, d, G, rather than to apply straightforward the general theorem. 200 Classical Deterministic Systems 4.11 Autonomous Dynamical Systems Let us now consider a particular but very important case when the variable t is not explicitly present in the vector equation of a dynamical system. Such systems are called autonomous107. We have already seen, in connection with energy conservation issue, that autonomous systems are invariant with respect to time translations, viz. if x(t) is a solution in D⊂Rn then x(t-t0) is also a solution, in general a different one (as, e.g., sin t and cos t). Here t0 is assumed to be a constant time shift. In the theory of dynamical systems, the domain D⊂Rn is regarded as the phase space for the vector equation ẋ = f(x), x∈D. This concept may be considered as a certain generalization of the traditional physical terminology where phase space is understood as a direct product of coordinate and momentum spaces. In modern classical mechanics, phase space is typically defined as a cotangent bundle T∗M where M is a configuration manifold (see section on Hamiltonian mechanics for some comments). However, when dealing with dynamical systems there are some other features and accents which are important as compared to the traditional exposition of classical mechanics. For instance, in mechanics it is often convenient to consider the phase space as a Euclidean space whereas in the theory of dynamical systems the phase space is, in general, not a Euclidean space but a differential manifold, on which a vector field corresponding to the vector equation ẋ = f(x) is defined. This equation means that, in the process of evolution described by the dynamical system, to each point x a vector f(x) is ascribed determining the velocity of the phase point (the vector f(x) belongs to the tangent space of the manifold at point x). This is a kinematic, in fact a geometric interpretation of the above vector equation. Nevertheless, all this are just nuances: one usually knows exactly in what space one finds oneself and operates. In all cases, phase space is a geometric concept embracing the total number of all states of a dynamical system and convenient to describe the evolution of state points x= (x1, ... , xn)T parameterized by variable t (usually time). The state points x(t) = (x1(t), ... , xn(t)) T for a fixed t are, as we know, called phase points108, and they move through the phase space with changing t, each of them traversing the phase manifold along its own phase trajectory. The term "phase" was probably coined by J. W. Gibbs who referred to the state of a system as its phase. In physical literature, one often talks about ensembles of dynamical systems - the set of non-interacting systems of the same type differing from one another only by their state at any given moment, that is by initial conditions. There is an illustrative analogy that is customarily exploited in physics namely the picture of a stationary flow of some fluid, in which every fluid particle moves from point in phase space to another during time t-t0 according to equation ẋ = f(x), x(t0) = x0 . Later we shall see that this 107 Scalar equations of the n-th order corresponding to the autonomous case take the form dnx ddn= F(x, dx dt, ... , dn-1x dtn-1) 108 Rarely, representing points. 4.11 Autonomous Dynamical Systems 201 equation may be interpreted as a mapping of the phase space into itself so the "flow" of the "phase fluid" implements a transformation (in fact a family of transformations) of the phase manifold into itself, in the considered case of a smooth vector field described by differential equations - a diffeomorphism i.e., continuously differentiable invertible map. One may notice that the analogy between the "phase fluid" and some physically observable continuous media - liquid or gas - is not complete: there is no interaction between particles of the phase fluid. Excluding t from parametric equations xi(t), i= 1, ... , n (if we can do it), we get a projection onto phase space D. The domain S= T× D is conventionally called (mostly in old sources) an extended phase space. Although it is not always easy or at all feasible to get rid of parameter t109, it may be possible to obtain differential equations directly describing the trajectories in the phase space. Indeed, from the equation ẋ i= fi(x), i= 1, ... , n we get, for example, the following system of n-1 equations: dx2 dx1 = f2(x) f1(x) , ... , dxn dx1 = fn(x) f1(x) whose solution gives the trajectories parameterized by x1. Conversely, we can turn any non-autonomous system into an autonomous one by introducing a new variable xn+1, thus producing a system of n+ 1 equations instead of n which corresponds to increasing the dimensionality of the phase space. In this sense, autonomous systems may be considered general enough to focus mainly on their study. Applying the uniqueness theorem to the above system of n-1 equations, we may conclude that the phase trajectories do not intersect. Of course, here it is assumed that f1(x) ≠0. If f1(xa) = 0 in some points xa, a= 1,2 ... , we can take any other fj(x), j= 1, ... , n instead of f1(x) provided fj(x) ≠0, which means that we are taking xj as a parameter. There may be, however, difficulties in the so-called critical points - zero points x̅ = (x̅1, ... , x̅n) where f(x̅) = 0 i.e., fi(x̅) = 0 for all i= 1, ... , n. We shall discuss this problem as soon as we learn a little more about the dynamical systems in general. The above-mentioned uniqueness theorem 110 states that under rather weak assumptions about the properties of vector field f(x) (usually it is assumed differentiable or just Lipschitz-continuous) there exists for each point x∈D exactly one solution x(t) of the law of motion ẋ = f(x) with initial value x(t0) = x0. In other words, the evolution of a dynamical system - its future states at t> t0 - is completely determined by its initial state.111 It is 109 One might recall in this connection that the parametric form is the most general one in representing curves and surfaces. 110 This theorem is usually known as the Cauchy-Peano theorem, although probably it is more correct to name it the Picard-Lindelöf theorem. The matter is that the first theorem states only existence whereas the second one requires less and guarantees uniqueness. 111 It may also be the final state that can be taken to determine the backward evolution - a retrodiction setting. 202 Classical Deterministic Systems a fundamental model of determinism: what we can say about tomorrow (more correct - about the properties the system in question will have tomorrow) is uniquely determined by what we can say about the system today. This is not true for quantum or statistical mechanics, although some philosophers contend that quantum mechanics is a fully deterministic theory (because of time evolution features). In my opinion, this is just a cunning word usage typical of philosophers since it is hard to imagine a deterministic scheme where observations affect the system. When speaking about a dynamical system, one can totally forget about its mechanical origin. It is completely irrelevant whether the vector equation for a dynamical system describes a mechanical or any other evolution. Mechanical systems are commonly attracted in textbooks as convenient examples of dynamical systems. More important, however, is the fact that some mechanical systems possess specific properties narrowing the entire class of dynamical systems to a clearly defined distinguished subclass, e.g., that of Hamiltonian systems. Nevertheless, it would be a mistake to think that only the Hamiltonian systems are considered in classical mechanics. For instance, non-holonomic systems of mechanics are also dynamical systems of course. The notion of a dynamical system is a generalization of classical mechanics. Thus, the term "dynamical system" can be applied to any vector field described by a first-order vector differential equation of the form ẋ = f(x), x(t0) = x0 (or any equivalent form, see below), irrespective of its natural or behavioral content. This abstraction serves as a background for mathematical modeling based on dynamical systems. Similarly, for all r> 1. See also Topological dynamical system; Bendixson criterion (absence of closed trajectories); Poincaré-Bendixson theory that has the unfortunate tendency to explode. A rough system is sometimes called a structurally stable system or a robust system. A well-documented survey on (mainly differentiable) dynamical systems is in [146]. Many recent developments are discussed in the various volumes of [147]. In general, an attractor of a dynamical system is a non-empty subset of the phase space such that all trajectories from a neighborhood of it tend to this subset when time increases. An attractor is also called a domain of attraction or basin of attraction. A repelling set, or repellor in a dynamical system is a subset of the phase space of the system that is an attractor for the reverse system. If an attractor, respectively repellor, consists of one point, then one speaks of an attracting, respectively repelling, point. For details (e.g., on stability of attractors) see [146]. It should be noted that in other literature the definition of an attractor is what is called a stable attractor in [146]. For discussions on the "correct" definition of an attractor see [205], Sect. 5.4, and [147]. 4.12 Non-autonomous Systems We have already partly discussed the non-autonomous case in the theory of differential equations, now we shall mainly focus on some aspects of nonautonomous dynamical systems i.e., with more focus on system evolution 4.12 Non-autonomous Systems 203 properties. One must confess that even today, in the beginning of the 21st century, one does not have fully rigorous foundations of non-autonomous (as well as relativistic) mechanics. Indeed, even such basic notions as force, energy, power, frame of reference, etc. often need to be reassessed for nonautonomous systems. We have probably felt from the brief overview of differential equations that it seems to be appropriate to treat autonomous and non-autonomous cases differently - so different they are despite the fact that each non-autonomous dynamical system can be reduced to an autonomous one merely by introducing a new variable thus increasing the dimensionality of the vector field constituting the dynamic system. This additional dimension of the phase space is the price paid for the transition from a non-autonomous to an autonomous system. The situation with non-autonomous systems is much more complicated from both the physical and mathematical perspective as with autonomous systems. Physically, a non-autonomous dynamical system corresponds to an open system placed in a time-dependent external field. This fact is reflected by the explicit dependence of vectors fi, i= 1, ... , n in the corresponding vector differential equation ẋ i= fi on the independent variable t (usually interpreted as time), ẋ = f(t, x) which makes the solution timenoninvariant in distinction to those for autonomous systems (see above). Of course, in the non-autonomous case the energy integral in general does not exist (see [23], §6) which makes the system of equations significantly more difficult to integrate than in the autonomous case. In classical mechanics, explicit dependence of the coefficients in motion equations on time are usually interpreted as the presence of an external field ([23], §5). On an intuitive physical level one can illustrate additional difficulties by allowing the coefficients of a differential equation which could be explicitly solved to depend on time. Even in the most primitive linear models, this dependence will immediately produce new types of solutions (e.g., parametric oscillations in elementary vibration theory, see [23], §27). One can sometimes encounter the assertion that the equation ẋ = f(t, x) can be "solved" implying that the solution for x(t) can be represented as an explicit expression consisting of, e.g., power functions, exponentials, trigonometric functions, etc. This is not true: even in the scalar case, n= 1 i.e., x∈R and f∈R, equations of the type ẋ = f(t, x) can be explicitly (analytically) integrated only in a relatively small number of very exceptional cases. To illustrate this point for themselves, just try to integrate a rather innocent-looking first-order equation ẋ = exp tx. One should not, however, think that substantial difficulties and - quite often - a sheer impossibility to integrate non-autonomous equations in terms of elementary or even special functions mean that such equations do not have solutions. Let us try to "feel the difference" between autonomous and nonautonomous dynamical systems. One may notice that explicit dependence on time in a dynamical system (non- autonomous case), although not essential 204 Classical Deterministic Systems from purely theoretical viewpoint 112 , can lead to substantial technical difficulties. Indeed, the usual concept of attractor (see below) may become inadequate since the natural notion of time asymptotics is hard to apply: any object in the phase space is "drifting" with time. To understand the situation better let us recall some basic facts from the theory of ordinary differential equations (ODE). The vector ODE ẋ = f(t, x), x∈Vn, Vn is an ndimensional vector space (manifold), defines a smooth field of directions in the domain T× Vn, T⊂R of the extended phase space of arbitrary finite dimension 1 + n. Any equation of motion has this form, with f= {f1, ... , fn} being composed of the vector field components for a dynamical system. If one deals with distributed systems described by partial differential equations (PDE), as in the case of fluid dynamics or climate modeling, then the vector space V can be infinite-dimensional. Physically speaking, the system becomes non-autonomous when a time-dependent driving force is acting on it, or when time-dependent constraints are applied, or if the system parameters are varying with time. Equilibrium (stationary) states of nonautonomous systems are, in general, no longer fixed points in the phase space, but rather extended orbits. One might recall from the courses of classical mechanics that the concept of a phase portrait is not as helpful for nonautonomous, e.g., driven, systems as for autonomous ones. One can also recall that attractors play a vital role in assessing the long-term behavior of a dynamical system. 113 In the non-autonomous case attractors are sets extending in the phase space so that attractors become global objects which are typically difficult to study with standard mathematical tools (e.g., by classical analysis). Strictly speaking, one cannot associate a nonautonomous ODE with a dynamical system interpreted as a vector field acting on Vn. Nevertheless, if it is known that the initial value (Cauchy) problem has a unique solution (see above), one can introduce, in a standard fashion, a two-parameter family of evolution operators, T(t, s), t≥s, acting on Vn in such a way that T(t, s)x(s) = x(t) , where x(t) is the solution of the Cauchy problem with initial conditions x(s). This family of operators satisfies obvious relationships, T(s, s) = I(Vn) and T(t, τ)T(τ, s) = T(t, s) for all τ∈[s, t], where I(Vn) is the unity operator on the vector space Vn. 112 As I have mentioned above, one can easily make an autonomous system from non-autonomous by increasing the dimensionality of the corresponding vector field by introducing the variable xn+1 = t and adding one more equation, dxn+1 dt ⁄ = 1. For example, in the case of the undriven 1d pendulum, the phase space has two dimensions whereas in the case of a driven pendulum, ẍ + ω2x= F(x, t) , one may consider that it has three i.e., ẋ 1 = x2, ẋ 2 = -ω2x1 -F(x1, x3), ẋ 3 = 1. 113 Everyday examples of attractors are the planetary climate or the national character. 4.13 Dynamical Systems in Mathematical Modeling 205 4.13 Dynamical Systems in Mathematical Modeling Today, one is inclined to think that modeling can be only computer-based i.e., in the form of computer simulations. Unfortunately, computer simulations almost never provide the modeler with an insight why the physical mechanism or, say, a biological system operates in a particular way. Computer mathematicians usually retort that the great Carl Friedrich Gauss and other prominent mathematicians were busy calculating the motion of celestial bodies in the 18th-19th century during many years whereas nowadays it takes seconds. Indeed, mathematicians of that time solved algebraic and differential equations, calculated integrals without reaching the efficiency of modern computers. But computers have not created much really new in physically motivated mathematics, rivaling e.g., calculus, complex numbers and functions, Riemann geometry and Lie groups. So, computer simulations although often accompanied by a great hype and even serving as a base for political decisions (as, e.g., in climate modeling or universal flight bans following volcano eruptions) only have a limited validity. It would be interesting, firstly, to understand this validity and, secondly, to gain an insight when without the computer one would definitely fall short of modeling targets. One of the main questions in modeling an evolving physical situation is: "What happens as time flows from now to infinity?" And in reality, all situations are evolving, steady-state ones are rare and very approximate. In fact, under modern conditions war is the most economically inefficient, politically destabilizing, and militarily counterproductive mechanism to gain control over any kind of resources. As an example, one can mention the US war in Iraq. If the objective was to gain control over oil resources, then it is at least a poor calculation in military planning since at least USD 100 million/day was spent by the USA to wage this war. One can easily calculate the amount of oil that could have been bought for this money. Given a total unproductiveness of military solutions, one can assume that starting a war is typically the outcome of a crisis in the country's governing model. The solutions in such models show the temporal evolution of the military conflict and may, in principle, predict which party can be defeated. In the simplest model of military planning, the Lancaster model, the state of a dynamical system describing two interacting (fighting) armies is given by a 2D-point (x, y) located in the upper right (positive) quadrant on the plane, (x> 0, y> 0). Models in economic and military planning are usually unified by this common feature: an essentially non-negative character of variables, which may be regarded as a supplementary constraint. Now imagine two adversary armies, X and Y, counting respectively x and y soldiers. Soldiers are by default professional killers, so we can assume in the simplest model that each soldier of the X-army kills per unit time a soldiers of the Y-army and, conversely, each soldier of the Y-army destroys per unit time b soldiers of the X-army. Then we can obtain a linear system of equations (parameters a and b are assumed constant so far) 206 Classical Deterministic Systems dx dt= -by dy dt= -ax One can interpret parameters a> 0 and b> 0 as characterizing the weapon power of opposing armies X and Y. We can of course explore this linear system using standard linear algebra techniques (see Chapter 3 about linear differential equations), but in our simple analysis it is convenient to integrate this system directly. Dividing the second equation by the first, we get dy dx= ax by or ax2 -by2 = c where c= const . Thus, integration gives a family of hyperbolas depending on the integration constant c that should be determined by the initial state of military confrontation. One can see that the phase point describing the temporal evolution of the conflict is moving along the hyperbolic phase trajectories separated by a straight line √ax= √by (one can use, e.g., Maple to draw a simple picture of phase trajectories). Phase trajectories cannot cross this straight line (a symptom of a "hard" model) distinctly dividing two regimes: army X is defeated or army Y is defeated. If the initial state lies above the neutral line separating two outcomes, army X is doomed: quantity x (the number of soldiers in X) is reduced to zero in a finite time, whereas quantity y remains positive - a complete victory of Y. Of course, the x and y curves have certain symmetry properties whose meaning is that army X can be interchanged with army Y with simultaneous replacement a1/2 ↔b1/2 . For a= b hyperbolas lie symmetrically with respect to the line y= x. The meaning of the straight line dividing the phase plane into two distinct areas may be clarified in the following way: this is a neutral line on which both armies will equally run out of manpower and eventually be destroyed (asymptotically x, y→0 for t→∞). The military conflict persists even when both armies are almost totally demolished. The equation of the separating straight line √ax= √by implies that to counterbalance the n-fold manpower advantage of one adversary the other has to possess the n2 -fold more powerful weaponry. Thus, to resist the Mongol invasion in the 13th century, the Mongols' combined forces presumably surpassing those of Kievan Rus at least three times, the Russians should have been about an order of magnitude more efficient warriors. However, the above mathematical model is too simplistic, and its applicability is utterly questioned. One can treat it only as a toy model which, nevertheless, may serve as a backbone for more realistic approaches. So, in accordance with general methodological recommendations exposed in Chapter 2, we can try to improve this model. 4.14 Nonlinear Science 207 For economic planning, analogous models allow one to divide the produced output into consumed and accumulated parts one of the crucial problems of economic growth. Similar models appear in economics and in military planning, specifically in the case when non-regular armies (e.g., partisans) are participating in military activities. 4.14 Nonlinear Science We have seen that the theory of dynamical systems is just another name for nonlinear dynamics, at least these two fields are inseparable. Nonlinear dynamics is, in fact, as old as classical mechanics. In general, the dynamics of planetary motion turns out to be nonlinear, but, fortunately, two-body systems are integrable and can be treated exactly. Three-body problems are of course also nonlinear, but in general non-integrable and very complicated. For example, the question originally posed in the XIX century: "is the Solar System stable?" naturally leads to a nonlinear description of the paths of three bodies in mutual gravitational attraction. It is this ancient problem which was simple to formulate but extremely difficult to solve that resulted in many modern ideas of nonlinear science, e.g., chaos. It is curious that with the advent of quantum mechanics and displacement of accent on atomic and nuclear physics nonlinear dynamics almost disappeared from physical books up to the 1970s. There was very little or no discussion of nonlinear science in the popular courses on methods of mathematical physics during the prewar and postwar (WW-2) period, when quantum mechanics and linear electrodynamics dominated science and engineering. It is also curious that in the 1980s nonlinear science gained a compensatory extreme popularity, to the extent that many people in the physics community began considering linear models as too primitive and unrealistic. The great difference that exists between linear and non-linear problems is one of the most important and, perhaps, subtle features of mathematics. One always tries to linearize whenever possible, because linear problems are enormously easier to solve. Unfortunately, the world is not linear, at least to a large extent, so we have to learn how to deal with non-linear problems. Nonlinearity in general can be well understood as the opposite of linearity - actually its negation. The main features of linearity are additivity and homogeneity, which result in linear superpositions. In linear theories such as quantum mechanics or classical (pre-laser) electrodynamics superposition is the key feature: an infinity of solutions may be constructed provided a finite set of solutions is known. It is true that pure linearity rarely occurs in the mathematical description of real-life phenomena, including physical systems. The example of quantum mechanics, which is an essential linear theory, although extremely important, may be considered an exception - yet to a certain extent, since the superposition principle for the wave function implies the infinite dimensionality of the respective vector space (see Chapter 6). We shall see that many finite-dimensional nonlinear systems can be mapped, with the help of some transformations, to linear systems with infinite dimensionality. In other words, nonlinear systems, i.e., those which should be modeled by nonlinear differential (integro-differential) equations 208 Classical Deterministic Systems or nonlinear discrete maps are ubiquitous, while linear ones really seem to be exceptions or approximations. One obvious and strong motivation to study nonlinear dynamical systems is the rich variety of applications of such studies covering such apparently different areas in mathematics, physics, chemistry, biology, medical sciences, engineering, economics, political and military planning, financial management, etc. Yet, the ubiquity of nonlinear systems is counterbalanced by their complexity, so another motivation to study nonlinear dynamics is their intricate behavior. Examples are bifurcations, solitons, strange attractors, and fractal structures. There are no counterparts of these manifestations in the linear world; one can say that the extreme complexity of nonlinear structures marks an essential difference between linear and nonlinear phenomena. The main source of difficulties in the nonlinear world is the fact that it is in general not feasible to describe a nonlinear system by dividing it into parts which are treated independently or blockwise - a favorite trick in linear system theory. As near as I know, no general techniques have been invented so far to foresee even the qualitative properties of a nonlinear system. For instance, it is rarely possible to predict a priori whether a dynamical system would exhibit regular or chaotic behavior. We shall illustrate this difficulty even on the simplest example of the logistic equation. There exist, of course, a multitude of textbooks, journal articles and other sources where nonlinear dynamics in general and chaotic behavior in particular are beautifully described. We shall try to discuss only some cases of nonlinear dynamics which I consider quite important. There are, of course, other cases, e.g., of nonlinear PDEs (elliptic, parabolic and hyperbolic), which are very important in modern mathematical physics and, besides, present an intrinsic theoretical interest. However, the theory of these equations as well as related functional spaces and operator theory for nonlinear analysis are not included in this book. We shall discuss Euler and Navier-Stokes equations for incompressible fluids, but this topic seems to be inexhaustible, so the discussion may be considered superficial and much of it is relegated to Chapter 7. I shall also consider in Chapter 9, specifically in association with some unsolved problems, Einstein's equations, some aspects of which tend more to nonlinear science than to customary field theory. It is widely believed that the 20th century was the century of physics and the 21st is the century of biology. The latter deals mostly with nonlinear phenomena, and the respective models should by necessity be nonlinear. On a macroscopic scale, it has been illustrated above by Lotka-Volterra model of the struggle for existence between two competing species. This dynamic situation (a predator-prey model) is described by nonlinear differential equations giving the time rate of evolution. This is a very simple model, of course, as compared with the level of complexity typically encountered in biology, but it provides a good foundation for more sophisticated mathematical modeling in this field. We are immersed in the natural world of nonlinear events. For instance, our emotional reactions and our likes and dislikes are probably highly 4.15 The logistic model: the bugs are coming 209 nonlinear. Our behavioral reactions to heat and cold, to colors and sounds, to local pressure and other stimuli are mostly subordinated to the WeberFechner law: the magnitude R of psychological response is proportional to the logarithm of magnitude J of physical stimulus, R= Aln(J/J0), J= J0, R= 0 for J 0. If we assume that there is a maximum sustainable population in the region (an equilibrium state) and that that the population dynamics can be described by a single autonomous equation, Ṅ (t) = φ(N), then we can produce a simple model assuming that the iteration function has a quadratic polynomial form, φ(N, a) = aN(1 - kN). This is a real-valued function of two variables, a> 0 and 0 ≤N≤1/k. Notice that the logistic model φ(N, a) is not invertible with respect to N= N(φ) since all the states in (0, N) have two sources in φ-domain (preimages) so that each value of φ corresponds to a pair of different values of population. Therefore, information is lost in the course of inversion. In the Bourbaki nomenclature, the logistic model is not injective; topologically-oriented people prefer calling such maps non-continuous because they do not take an open set into an open set. Equation Ṅ (t) = aN(1 -kN) is called the logistic model. Here parameter a> 0 is called a control or growth parameter (in theoretical ecology, this parameter, interpreted as the growth rate per individual, is usually denoted as r) and parameter k> 0, determining the competition for some critical resources, is often called in ecology and population biology an inverse "carrying capacity" of the environment: it defines the equilibrium population when the competition decreases the growth rate to such an extent that the population ceases to rise. In electrical engineering, the same equation describes the energy of E of a nonlinear oscillator in the self-excitation regime (in the first Bogoliubov-Mitropolsky approximation). A discrete alternative to the continuous (differential) logistic model is the so-called logistic map: Ni+1 = aNi(1 - kNi), 0 ≤Ni≤1/k, i= 0,1,2, ... , where the discrete variable i plays the role of time in the continuous model. This variable can be 210 Classical Deterministic Systems interpreted as years or other characteristic temporal cycles, e.g., naturally related to the reproductory behavior of respective species. Recall that the term a "map" or "mapping" φ usually refers to the deterministic evolution rule with discrete time and continuous state space X, φ: X→X. Then evolution is synonymous with iteration: xn+1 = φ(xn). The logistic map, in contrast with the logistic equation, is a model with discrete time i.e. corresponds to snapshots taken at time points t= nτ, n= 0,1, ... (the elementary time step τ may be put to unity). It is interesting that discretetime dynamical systems can be produced from flows described by continuous-time differential equations, an example is a stroboscopic model provided by the Poincaré sections. A discrete version of the logistic model can also have a direct physical or biological meaning, for example, in the cases when generations are separated (non-overlapped) in time. Thus, some insects just lay their eggs and die; they do not interact with the next generations. One more famous (appeared around 1200) discrete-time dynamical system, which was initially also a mathematical model of biological reproductory behavior, is the Fibonacci sequence, ak+1 = ak-1 + ak, k= 1,2, ... , a0 = 0, a1 = 1 . To describe this process in terms of evolution, we can introduce matrices xk= (ak-1 ak) , A= (0 1 1 1) so that the Fibonacci sequence will be represented by a discrete-time cascade xk+1 = Axk= Akx1. Here the phase space is X= R2. The Fibonacci map, in distinction to the logistic map, is a linear transformation, and we can directly apply the standard prescriptions of linear algebra. The characteristic equation of the Fibonacci map is det(A-λI) = det (-λ 1 1 1 -λ) = λ2 -λ1 so that eigenvalues λ1,2 = 1±√5 2 give eigenvectors v1,2 = (1, λ1,2) T. Then x1 = C1v1 + C2v2 and xk+1 = λ1 kC1v1 + λ2 kC2v2. Using initial conditions x1 = (0 1) = C1 ( 1 λ1) + C2 ( 1 λ2) we get C1 = -C2, C1λ1 + C2λ2 = 1 so that C1 = 1 λ1-λ2 = 1 √5 = -C2 and xk+1 = ( ak ak+1) = 1 √5 [( 1 λ1) λ1 k-( 1 λ2) λ2 k] which gives for Fibonacci numbers ak= 1 √5 (λ1 k-λ2 k). One can notice that the logistic map is a simple case of polynomial maps. Of course, the logistic map can be represented in dimensionless form xi+1 = axi(1 -xi), where all xi are numbers interpreted as the population density (or expectation value), and it is required that 0 ≤xi≤1, although certain values of intrinsic growth parameter a and initial data x0 may lead to negative population densities, which fact indicates that the logistic map should not be taken literally as a demographic model. In other words, the logistic map takes the interval [0,1] into itself. Despite its apparent simplicity, the logistic map is very general since any function having a nondegenerate extremum behaves in the latter's neighborhood like this map near x= 1/2. An obvious extension of the logistic map depending on parameter a is the 4.15 The logistic model: the bugs are coming 211 relationship xi+1 = f(xi, a) usually called the Poincaré map; here generations corresponding to i= 0,1,2, ... can be interpreted as periods of motion. Notice that in the case of maps with discrete time, phase trajectories are given by a discontinuous sequence of points. The fixed point of the map (x= f(x)) does not change with generations, xi+1 = xi so that it would be natural to define μi≔dxi+1/dxi as a multiplier. It is clear that the maximum value in the sequence xi is reached when μi= a(1 -2xi) = 0 and is xi= 1/2. Therefore, the maximum iterated value xi+1(a) = a/4, which means that to hold the logistic map in the required domain 0 ≤xi≤1 one must restrict the growth parameter to segment 0 ≤a≤4. In general, for one-dimensional Poincaré recurrences, the iteration function f and control parameter a can be selected and normalized in such a way as when the initial data x0 (known as the seed) are taken from a finite interval P= (α, β), the iterates x1, ... , xn also belong to P (in the logistic map P= (0,1) for small values of the control parameter, see below). It means that function f maps interval P into itself i.e. this map is an endomorphism (the term "into" means that the iterations may not fill the whole interval). Singledimensional endomorphisms are often not invertible, which physically means that the past cannot be uniquely reconstructed from the current data, so that one might say that in such cases we are dealing with the systems having an unpredictable past (isn't it typical of human history?). When, however, an endomorphism has a smooth inverse, then the map is a diffeomorphism. The discrete logistic model may exhibit a somewhat unusual behavior of the population. Thus for small values of the growth (multiplication) parameter a, the initial value of the population produces a rather little effect on the population dynamics, the latter being mostly controlled by parameter a. Nevertheless, when this parameter is increased, the population (e.g., of bugs, mosquitoes or other insects and, apart from the latter, of rats, mice, reptiles, leeches, etc.) start to change chaotically, and in this chaotic regime one can observe an extreme sensitivity to the concrete number of initially present members N(t0) = B (or to the "seed" x0). For large enough values of the growth parameter, e.g., for a> 3 in the simple logistic map xi+1 = axi(1 -xi), the population bifurcates into two, so that the colony of insects acts as if it were "attracted" by two different stable populations (such asymptotic solutions are called attractors). This is a primary example of the so-called period doubling. As this process has become very popular in nonlinear dynamics (starting from approximately 1976) and has generated a great lot of papers most of which are easily available (see the list of literature) and because of the lack of space, we shall not reproduce here the respective computations, restricting the current exposition to a catalogue of the basic facts. We only mention here that period doubling occurs in many models and in a variety of disciplines: in the Navier-Stokes equation (turbulence), in meteorology (the Lorenz system), in chemical reactions, in nerve pulse propagation, etc. In all such cases, we see that for different values of the control parameter a the system's behavior alters drastically: from settling down to a point (the population dies out or tends to a non-zero fixed value), through quasi-regular oscillations, then irregular oscillations, and finally to 212 Classical Deterministic Systems chaos i.e. totally decorrelated process. Recall that the dynamical systems, described by ODEs, in general may have four types of solutions: equilibrium states, regular (periodic) motion, quasi-periodic motion and chaos. These solution types are associated with four attractor varieties: stable equilibrium, limit cycle, d-dimensional torus and chaotic attractor. One usually studies the properties of the logistic map with the help of a computer, giving the seed x0 and the growth parameter a as inputs. The task is to find the asymptotic value of xi for i→+∞; one should of course bear in mind that computer modeling gives no genuine asymptotics, but only the values corresponding to large finite i-numbers. Yet one can obtain the graph (x, a) which is known as a bifurcation diagram for the logistic map. This diagram plots a sequence of generations (population densities xi) vs. growth parameter a, actually representing the limit solutions i.e. the same and cyclically repeated (after some number m of steps) values of x. Such cycles appear following the attempts to find the already mentioned fixed (also known as stationary) points of the map xi+1 = φ(xi, a) i.e. limit solutions of equation x= φ(x, a) mapping point x onto itself. Fixed points can be both stable and unstable, in particular, depending on the values of the control parameter a. Specifically, for the logistic map, when 0 1. If x= x(2), then δxi+1/δxi= 2 -a i.e. δxi= (2 -a)iδx0 and δxi converges to zero for |2 -a| 1. Thus, solution x(1) becomes unstable for a> 1 and solution x(2) for a> 3; within interval 1 0, which is the Lyapunov exponent for map B(x). Since λ> 0 the map exhibits an exponential instability and can be viewed as chaotic. Discrete time maps xn+1 = f(xn), n∈Z are also known as fixed point iterations. In general, single-dimensional discrete time maps f(x): x∈X⊆ R, X→X that may be represented as xn+1 = f(xn) (an obvious generalization of the logistic map), even very primitive ones, can also exhibit an exponential dynamical instability and thus may be called chaotic (in the Lyapunov sense). Expression xn+1 = f(xn) supplied with initial condition x(0) = x0 (as the initial population in the logistic map) can be viewed as an equation of motion for our 1d deterministic dynamical system with discrete time. Let us now return to the continuous-time case (such systems are the most interesting for physics and technology). In accordance with the idea of linearity, the model of exponential growth, discussed in the preceding section, can be applied when the number N of individuals is relatively small so that the correlation effects between them can be disregarded. Correlations between individuals can be observed on both the pairwise and collective level114. Pair correlations give rise, e.g., to the hyperbolic or explosion model leading to the sharpening regime with vertical asymptotes meaning that the population will tend to infinity in finite time (this model was briefly discussed in 4.15. and 4.16.), whereas collective correlations manifest themselves, for example, in the competition for resources (such as food, water, living space, energy, information, position in the hierarchy, political power, etc.) within the population. With the rising number of individuals, competing for resources tends to impede the growth rate i.e. growth factor b= b(N), db/dN 0, k> 0. This mathematical model is quite natural since one can approximate any smooth function by a linear one for sufficiently small values of its argument (this is, by the way, the main idea of calculus and differentiation), in the present case, for small enough N. Then we have the equation expressing, in the continuous case, the logistic model: 114 The patterns of pairwise vs. collective correlations is quite often encountered in manyparticle physical models, e.g., in statistical and plasma physics. 216 Classical Deterministic Systems dN dt= (a-kN)N= aN(1 -N N0 ) , (4.16.2. ) where N0 ≡a/k is the equilibrium population (in ecological literature this parameter is usually denoted as "carrying capacity" K). One-dimensional equation (4.16.2.) is known as the logistic equation; this is an apparently primitive, but very rich and important mathematical model which may be interpreted as a combination of the exponential and hyperbolic models. The meaning of the logistic model is that the reproductive rate is assumed to be proportional to the number of individuals, whereas the mortality rate is proportional to the frequency (probability) of pair encounters (collisions). Of course, by properly scaling time t and number of individuals N, e. g. , t→τ: = at, N→kN/a= N/N0: = x, we can reduce (4.16.2.) to a dimensionless form ẋ = x(1 -x) which is convenient to study the main dynamical properties of the model in the (x, τ) plane, but we shall rather stick to the form (4.16.3.) in order to better understand the role of parameters a and N0. The logistic equation, though being a nonlinear ODE, is simple in the sense that it can be easily integrated (since it is autonomous and variables in it can be separated). There are many ways to integrate the logistic equation; for demonstration purposes, we can choose probably the simplest one, representing the logistic equation as dN dt= (a-kN)N= aN-kN2, N(t0) = B, a> 0, k> 0, (4.16.3. ) B, a, k are constants. For N≠a/k, t-t0 = ∫dN(aN-kN2)-1 N B . This integral exists only when both N and B= N(t0) i.e the current and the initial values of the population lie in intervals (0, a/k) or (a/k, +∞). In other words, there exist no solutions that cross the straight line a-kN= 0. Integration gives t-t0 = 1 a∫dN(1 N+ k a-kN) = N B 1 alog N(a-kB) B(a-kN) Solving this equation with respect to N, we have N(t) = aBexp[a(t-t0)] a-kB+ kBexp[a(t-t0)] = aB kB+ (a-kB)exp[-a(t-t0)] One can see that for an initial population lower than the equilibrium value, N(t0) N0 i.e. B> a/k, N(t) is only defined for t> t0 -1 alog kB kB-a 4.15 The logistic model: the bugs are coming 217 In the case N(t0) ≡B= a/k, the solution is a constant, N(t) = a/k, since in this case dN/dt= 0. One can also see that the solution converges to the constant value a/k. For B 0 and dN/dt> 0 which means that if the initial population is lower than the equilibrium one, the number of individuals monotonicly increases. In the opposite case, when the starting population exceeds the equilibrium one, B> a/k, we have N(t) > N0 for all t and dN/dt 0), and the second stable (Ṅ N0 and Ṅ > 0 for N 0 . For t→-∞, the process asymptotically converges to the state N= 0 . Thus the logistic model describes the transition from the unstable state N= 0 to the stable state N= N0, occurring in infinite time. We shall see the examples of similar transitions when addressing quantum-mechanical models below. The integral curves have vertical asymptotes t= const for any t> 0. The logistic process is very close to the exponential (Malthusian) growth for small N, (N≪N0) , but 218 Classical Deterministic Systems begins to fall behind approximately at N≈N0/2. This saturation effect is a manifestation of correlations within the population. One usually considers a single control parameter in the logistic model, the growth parameter a. In fact, however, there are at least two control parameters which can be both important, especially if the logistic model is not necessarily applied to describe the population growth, when all the parameters and variables are positive by default. Renaming the constants, e.g., a= kN0 2, b= a/kN0 (see (4.16.3.)-(4.16.4.)), we arrive at equation dx dt= f(x, a, b) = ax(b-x) whose solutions depend on two parameters a and b, not necessarily strictly positive. In other words, the dynamical evolution occurs in 3d space (x, a, b) without restricting the motion to domain x> 0, a> 0, b> 0 as in population models. For b> 0, point x= 0 is unstable whereas for b 0 i.e. |w| > 2aΛ. This last value can be identified with wc. Assume at first that the roots of the characteristic equation have the same sign, then the real solutions are y1 = y10eμ1z, y2 = y20eμ2z, where y10, y20 are arbitrary constants. One can, as before, get rid of parameter z and obtain either the family of parabola-like118 orbits |y1| = c|y2|μ1/μ2, where c is some constant (independent of z), or y1 = 0. Recall that critical point of this kind is called a node; if μ1, μ2 2aΛ, then point (0,0) is a positive attractor i.e. there exists a neighborhood of (0,0) such that all the paths starting at this neighborhood at some z= z0 finish at (0,0) with z→+∞ (this property is almost obvious due to the exponential character of solutions). Likewise for μ1/μ2 > 0, the critical point (0,0) is a negative attractor. From the physical viewpoint, condition w> 2aΛ signifies that the disturbance for t> 0 moves in the direction x> 0 (recall that u= F(x-wt) ). If 0 0 and constant and diffusion length Λ real. A more complicated mathematical setting of the Fisher-KPP model is the Cauchy problem for a nonlinear parabolic equation (parabolic equations express the most popular models arising from biological studies) ut(x, t) = (η(u))xx+ φ(u) in domain D≔{x∈R, 0 ≤t 0. The initial function 0 ≤u0(x) ≤1 can be piecewise continuous in R. When η(u) = u, we have a linear diffusive process with a nonlinear source. 4.15.2. Applications of the logistic model Both the logistic model and the logistic map have many applications in science, society and engineering. The general idea leading to the logistic model - simple growth limited by self-interaction - may be applied to many real-life processes. For instance, epidemics and spread of rumors can be modeled by the logistic equation. We can take a typical microeconomic situation as another example: what will be the output of certain goods produced by a company (e.g. car manufacturer) over several years? The simplest model describing the annual growth of the output, with the imposed constraints of limited resources and market saturation would be a logistic model. What would be the total revenue (and respectively the profit) of a company over several years, if the annual revenue growth is a? The revenue for the (i+ 1)-th year will be xi+1 = axi. However, for rather high revenues the latter are restrained, e.g., by the market share, and we arrive again to the logistic model. These examples suggest a natural generalization: any human activity subordinated to the imposed constraints of external factors and/or limited resources can be described by a logistic model. The nonlinear term 4.15 The logistic model: the bugs are coming 225 that limits growth is sometimes metaphorically interpreted as "influence of the future". If we make an affine transformation xi= pzi+ q, where p, q are as yet undetermined parameters, we get from the logistic map a quadratic recurrence equation zi+1 = -a[pzi 2 -(1 -2q)yi-q p(1 -q)], and putting p= -1/a, q= 1/2, we obtain the quadratic map zi+1 = zi 2 + c, c≡a/2 -a2/4 which produces Julia sets. Quadratic Julia sets are probably the best known examples of fractals that are generated by this quadratic map for almost any value of c (although c= 0 and c= -2 are exceptional: the produced sets are not fractals). It is interesting that the above quadratic map was commercially used in the graphical industry to obtain rather beautiful ornaments which are fractals: this is an example of direct market value of mathematics. It is remarkable that the logistic map can serve as a rough model of the transition to turbulence, when the regular (laminar) or almost periodic character of fluid motion is destroyed after some critical value of the flow parameter (the Reynolds number Re) has been reached. Like the eventual population in most logistic models, turbulence practically does not depend on the initial state of the fluid. The Reynolds number is a control parameter in the models of turbulent flow and plays the role of intrinsic growth parameter a. Multiplier μ passes in the transition to turbulence the value +1. The logistic map manifests such common features of discrete-time algorithms as stability and chaotic behavior. From this viewpoint, it is interesting for numerical techniques and generally for computational science and engineering. As far as engineering applications of the continuous-time logistic model go, we have already mentioned that the equation describing the energy evolution of a nonlinear oscillator in the self-excitation mode has the form Ė = aE(1 -E) (in dimensionless units). Here the growth parameter a is close to 1. One can consider an example of estimating the population growth with the help of the logistic model. We may assume the total current (2010) human population to be 6.9*109, the growth factor to be a = 0.029, the annual population growth to be 0.011 year-1 (more or less standard demographic data). Then we have dN(2010)/dt N(2010) = d dtlog(N(2010)/N(t0 = 2010) = 0.011 = a-kN(2010) = 0.029 -k∙6.9 ∙109 which can be considered an equation to find the attenuation factor k≈2.32 ∙ 10-12. Then the projection for the equilibrium population will be N0 = a/k≈ 11.1 ∙109 i.e., the world population tends to converge to approximately 11 226 Classical Deterministic Systems billion people. This result is not very sensitive to the slight changes of the constant a and the annual population growth rate. We can comment here on the general concept of the control parameter whose variation results in altering the evolution regime of a dynamical system. Choosing the control parameter may present a special problem when modeling a complex (multiparametric) process, in particular, when the transition to chaos in an open system should be explored. For instance, in medico-biological studies the concentration of the prescribed medicine can play the role of control parameter. In surgery, the control parameter may be identified with the pre-planned invasion path. In fluid motion problems, one can define the control parameter as the pressure gradient between the flow boundaries, e.g., pipe ends. In laser technology, the control parameter can correspond to the pumping level i.e., energy input producing the population inversion. Using the engineering language, one can define an analogous control parameter as the feedback level in classical generators. One often uses the logistic model in practical ecology to control the population. For example, a slight modification of this model (harvesting) is applied in fisheries. Harvesting means mathematically a transition from the population freely evolving in accordance with internal processes (balance of the birth, death, and migration) to introducing an external pressure q(N, t) i.e. the model equation will be Ṅ (t) = φ(N) -q(N, t), where term q(N, t) signifies the removal of q individuals per unit time (say, each year). In fisheries, for example, this external influence corresponds to fishing quotas that can be constant, q(N, t) ≡q or differential, q(N, t) = q(N). The latter case may be interpreted as the simplest manifestation of a feedback, which is a milder model than the one corresponding to q= const. Indeed, q= q(N) depends on the actual state of the system. It is usually important to select quotas q in such a way as to ensure sustainable fishing. Sometimes, as e.g. during genocides, external influence q(N, t) cannot even be exactly known and should be treated as a perturbation, not necessarily small. One can in such cases figuratively call function q the murder rate. More complicated population models arise when pair bonding is considered. The effectiveness of survival mechanisms tends to decrease as the population density falls since it becomes increasingly difficult for sexually reproducing species to find appropriate mates. The respective mathematical models are in general spatially dependent, at least at the level of any given individual, since extended mobility and higher mate detection capability (greater identification distance) can to a certain degree compensate low population density. However, there are stringent physiological limitations for excessive mobility since it requires higher metabolism rates that are only possible under the conditions of ample resource availability (just as increased consumption in the rich population groups of human society enhances mobility and selectiveness). Considering pair stability issues further complicates the model. To focus on the role of harvesting we can use, for simplicity, the dimensionless form of the logistic model that can be, in particular, achieved by scaling time t as τ= aN0t (see equation (4.16.3.) and below). Formally, we 4.15 The logistic model: the bugs are coming 227 can put the growth parameter a and the "equilibrium population" N0 equal to unity (it is trivial to notice that for a≠1, q→q/a). Then we have ẋ = x(1 -x) -q≡f(x, q), and when quote q is constant, its critical value is q= 1/4. In fisheries, this critical point is usually called maximum sustainable yield (MSY), and its evaluation is vital for sustainable fishing. For 0 1/4 , both equilibriums disappear, but what is more important, f(x, q) 0 and dq/dx> 0 for all x. One can assume q(x) to be a polynomial with positive coefficients. Let us consider the simplest case of a feedback scenario, q(x) = bx. In this case there are two stationary values, x1 = 0 (unstable) and x2 = 1 -b, 0 1 equilibrium point x2 becomes negative. It is easy to see that when the growth parameter a≠1, we must scale b→ b/a≡b̃ so that a stationary state corresponding to an optimum is reached with b= a/2. The case of quadratic elimination quota is analyzed similarly to the case of the linear one i.e. ẋ = ax(1 -x) -bx-cx2 so that the stationary points are x1 = 0 and x2 = (1 - b a) / (1 - c a) , 0 0 is the greatest positive Lyapunov exponent which expresses instability in the form of "stretching" (and, perhaps, also "folding" leading to chaos). Thus, the necessary (not sufficient!) condition for deterministic chaos is that dynamics should be unstable. In particular, instability means that small perturbations in the initial conditions bring large uncertainties in dynamics after the predictability horizon. By remarking that extreme sensitivity to initial data is not the whole story, we wanted to hint that such sensitive dependence can be found in very simple, e.g., linear systems. Take, for example, the map xn+1 = 2xn, xn∈ R, n∈Z, which has an unfortunate property to explode (see also below, "The logistic model: the bugs are coming"). However, the explosive divergence of nearby trajectories in this case alongside the blow-up of an initial data discrepancy to infinity has nothing to do with deterministic chaos. The latter, combining the sensitivity to initial data with unpredictable behavior, appears only if the trajectories are bounded which is possible only in nonlinear dynamical systems: in linear ones there can be either bounded trajectories or sensitive dependence on initial conditions but not both, so that nonlinearities are necessary to have both effects. The word "chaos" is intuitively associated with a disorganized state, completely without order. This is deceptive: chaos in dynamical systems is not a total disorder but corresponds to irregular variations of the system's variables controlled by rather simple rules. Probably, no mathematical definition of chaos has been universally accepted so far, but the following descriptive explanation what chaos is can be used to work with this phenomenon. Chaos is a durable irregular (i.e., aperiodic) behavior emerging in deterministic systems which become extremely sensitive to slight variations of parameters (in particular, initial conditions). Notice that this verbal description of chaos contains three components: 1. Durable irregular (aperiodic) behavior implies that the phase paths do not asymptotically (t→∞) stabilize either to a point or to periodic orbits. 2. The term "deterministic" implies that the system is not described by stochastic differential (or other) equations i.e., there are no random parameters or noise present. 3. Sensitivity to slight variations of parameters (in particular, initial conditions) implies that the integral trajectories, at first very close to one another, diverge exponentially fast with time, the divergence rate being governed by the Lyapunov exponents (with at least one of them positive). Thus, if one abstracts oneself from certain fine points such as irregular trajectories, uncorrelated behavior in close time points and so on, chaos can be basically viewed as an absence of Lyapunov stability. Although "chaos" has become the code word for nonlinear science, there is nothing particularly 230 Classical Deterministic Systems exotic or intricate about chaos. The fact that chaotic phenomena practically had not been studied until 1970s, when chaos suddenly came into fashion together with its interdisciplinary applications, can only be attributed to an absolute dominance, both in science and technology, of successful linear theories such as classical electrodynamics, quantum mechanics, linear oscillations, plasma instabilities, etc. Ubiquitous linear input-output models in electrical engineering that have resulted in many practically important technologies also left little room for studying somewhat exotic nonlinear models. The main feature of chaos in finite-dimensional dynamical systems (known as deterministic chaos to distinguish it from molecular chaos in many-particle systems) is the exponential divergence of trajectories. The quantitative measure of this divergence is the so-called K-entropy (the Kolmogorov-Krylov-Sinai entropy [281]). The value of K-entropy is positive for chaotic states, which corresponds to mixing and exponential decay of correlations. Many people are still inclined to think that one should not treat chaos in deterministic systems as a special subject, and the word itself is just a poetic metaphor for the long familiar instability. This is wrong: chaos is not completely synonymous with instability; it incorporates also other concepts (such as irregularity of behavior and decorrelation in time series). In distinction to instabilities, chaos points at unpredictability of behavior in the systems described by completely deterministic and even primitive looking equations. Instability in dynamical systems can be viewed as a symptom of the possible transition to chaotic motion. One can observe on some examples, in particular on deterministic models of growth that may exhibit instability (i.e., have diverging integral trajectories), but cannot be viewed as chaotic; the simplest example of such a system is the model of exponential growth ẋ = ax. Conversely, one should not think that the chaotic state is always visibly unstable: for example, turbulence is a stable chaotic regime. Thus, the often repeated view that chaos is just an extreme case of instability i.e. in chaotic regimes paths in the phase space that start arbitrarily close to each other diverge exponentially in time whereas in regular (nonchaotic) regimes two nearby trajectories diverge not faster than polynomial, typically linear in time, is not quite accurate. If we compute the distance d(t) between two phase paths whose initial separation is d0 ≡d(0) , then exponential divergence of trajectories, d(t) = d0eΛt, where Λ is the Lyapunov characteristic exponent, is a necessary but not sufficient condition for chaos. Notice that the concept of dynamical systems as evolving quasi-closed parts of the world does not exclude their chaotic behavior, when a system's evolution (i.e., dependence of its observable quantities xi in time) looks like a random process; at least it is totally unpredictable beyond a certain "horizon of predictability" (recall weather forecasts). One should not, however, think that unpredictability in chaotic systems is identical to the true randomness, but the difference is rather academic since chaos looks quite like a random process and can also be described by a probability measure (distribution functions). One can crudely say that there are at least two types of randomness: one is due to an unobservable quantity of interacting 4.16 Instabilities and chaos 231 subsystems (as, e.g., in gas), the other - usually known as deterministic chaos - is due to our limited ability to formulate the rules of behavior under the conditions of drastic instability. Such poorly known rules must govern the processes that basically arise from irreversibility in dynamical systems. The main feature of chaos in dynamical systems is an extreme sensitivity to small perturbations, e.g., to tiny inaccuracies in input data such as initial conditions. It is this property that makes it impossible to forecast the state of a dynamical system for the time exceeding some characteristic predictability horizon amenable to the state-of-the-art numerical computation. The time scale for exponential divergence of nearby trajectories, Λ-1, may serve as an estimate for this predictability horizon. It is important that this time scale usually does not depend on the exact value of initial conditions. Anyway, large Lyapunov exponents are symptomatic of the onset of chaos. Recall that the Lyapunov characteristic exponent manifests the expansion rate of linearized dynamical system along its trajectory. Nevertheless, one can distinguish between dynamical chaos in deterministic systems and physical chaos in many-particle models. For example, evolution to thermodynamic (statistical) equilibrium is directed to the most chaotic and disordered state characterized by maximal entropy. The difference between "dynamical" and "physical" chaos has been reflected in long-standing debates about the origin of stochasticity in physical systems. There were historically two distinct trends of thought: (1) stochasticity arising due to dynamic instability of motion in nonlinear deterministic systems and (2) necessity of statistical description due to the enormous number of degrees of freedom i.e., huge dimensionality of the phase space in realistic (many-particle) physical systems, resulting in practical irreproducibility of solutions (integral trajectories). These two trends were poorly compatible because they required different approaches to physical statistics. In the section devoted to statistical physics and thermodynamics, we shall comment on the possibility to reconcile the two manners of description. One can produce many examples illustrating the importance of chaotic behavior in real life. Thus, when an asteroid or a comet approaches a planet (e.g., Earth), the planet's gravity perturbs the comet's trajectory so that small changes of the latter can be amplified into large and poorly predictable deflections. Since a comet or an asteroid trajectory is affected by numerous close encounters with planets, small variations in the initial parameters of the trajectory may result in practically complete unpredictability of the body's eventual motion (for large enough time - outside the predictability horizon). One can call this few-body process a trivial chaotization of the comet or asteroid path. Because of such sensitivity to small variations of parameters, trajectories of small celestial bodies can diverge and cross the planetary orbits, possibly with devastating consequences. The information about the fate of a comet or an asteroid is practically lost after several Lyapunov time scales, and the resulting unpredictability may be the source of meteorite hazard for the Earth. 232 Classical Deterministic Systems One can show that the logistic map for a= 4 which is in this case chaotic for almost all initial conditions may be related to the Lorenz attractor appearing in the three-dimensional meteorological model constructed in 1963 by E. Lorenz. This mathematical model is represented by a dynamical system with 3d phase space and is fully deterministic since it is represented by three ODEs ẋ = f(x, a), x= (x1, x2, x3) (with quasilinear vector field f). Nonetheless, the model demonstrates chaotic behavior i.e., abrupt and apparently random changes of state for some set of control parameters a. From a more general viewpoint, the logistic model is a very particular case of the evolution of an autonomous system described by vector equation ẋ = f(x, a), where vector field f depends on parameter a (it can also be a vector). As we have seen, in certain cases variations of the control parameter a can radically change the system's motion, for instance, result in chaotic behavior. Note that the logistic model is not necessarily identical with the population model. When the logistic equation is not interpreted as describing the population growth, one can explore the behavior of solutions as parameter k is varied. The logistic map manifests such common features of discrete-time algorithms as stability and chaotic behavior. From this viewpoint, it is interesting for numerical techniques and generally for computational science and engineering. As far as engineering applications of continuous-time logistic model go, we have already mentioned that the equation describing the energy evolution of a nonlinear oscillator in the self-excitation mode has the form Ė = aE(1 -E) (in dimensionless units). Here the growth parameter a is close to 1. One can bring an example of estimating the population growth with the help of the logistic model. We may assume the total current (2010) human population to be 6.9*109, the growth factor to be a = 0.029, the annual population growth to be 0.011 year-1 (more or less standard demographic data). Then we have dN(2010)/dt N(2010) = d dtlog(N(2010)/N(t0 = 2010) = 0.011 = a-kN(2010) = 0.029 -k∙6.9 ∙109 which can be considered an equation to find the attenuation factor k≈2.32 ∙ 10-12. Then the projection for the equilibrium population will be N0 = a/k≈ 11.1 ∙109 i.e., the world population tends to converge to approximately 11 billion people. This result is not very sensitive to the slight changes of the constant a and the annual population growth rate. 4.16.1 Chaos in dissipative systems Chaotic behavior is quantified by the presence and the value of Lyapunov exponents that must be positive in order for the chaotic regime - and the complexity associated with it - to emerge. Recall that the notion "chaotic" is related to the bounded motions displaying an extreme sensitivity to initial data. If there is no dissipation i.e., the dynamical system is conservative, the 4.16 Instabilities and chaos 233 sum of all Lyapunov exponents Λi must be zero - to ensure that a volume element of the phase space remains intact along the phase trajectory (see 8.3.1. "Phase space and phase volume"). If the system is dissipative, the sum of all Lyapunov exponents should be negative, and if ∑Λi 0 the process probability distribution w(t) (in general, under the condition that all w(t) for t≤t0 are known) depends only on w(t0). While for deterministic processes the state of the system at some initial moment uniquely determines the system's future evolution, in Markov processes the state (probability distribution) of the system at t= t0 uniquely defines probability distributions at any t> 0, with no new information on the system's behavior prior to t= t0 being capable of modifying these distributions. Here one may notice a hint at a preferred direction of time or time-reversal non-invariance, which is typical of real-life processes and does not exist in unprobabilistic mechanical theories (we do not consider the Aristotelian model here). Time-reversal invariance requires that direct and time-reversed processes should be identical and have equal probabilities. Thus, purely mechanical models based on Newton's (or Lagrangian) equations are time-reversible, whereas statistical or stochastic models, though based ultimately on classical (reversible) mechanics, are timenoninvariant. Likewise, mathematical models designed to describe real-life processes are mostly irreversible. See below ("The Arrow of Time") for more details. 5.8 Integral Equations in Field Theory I don't quite understand why, but integral equations have become a subject partly alien to physicists. I have met people - otherwise highly qualified - who said that differential equations are more than enough to cover all the principal areas of physics and to construct the models in other disciplines, so why should one attract new and rather complicated concepts? It is unnecessary and contradicts the principle of Occam's razor. However, the statement that knowing only the differential equations is sufficient for physics is wrong: there are areas which cannot be studied (or at least become extremely difficult to study) without integral equations, and such areas are not rarely encountered in science and engineering. Take, for example, antenna theory and design. To obtain the required value of the electromagnetic field one has to compute or optimize the electric current, which is an unknown quantity being integrated over the antenna volume or surface. In scattering problems, both for waves and particles, integral equations are a naturally arising concept. In the problem of electromagnetic field scattering by a bounded inhomogeneity, the scattered field appears due to the induced currents which start flowing in the inhomogeneity under the influence of the incident field. More exactly, these currents may be viewed as a response to the total field present in the volume occupied by the 5.9 Phenomenological Electrodynamics 253 inhomogeneity, this total field being the sum of the incident field and the one generated by induced currents. This is a typical self-consistence problem that leads to the volume integral equation where the unknown field stands under the integral. In general, self-consistence problems like scattering, quite often result in integral equations; now one more class of problems requires an extensive knowledge of integral equations, namely the inverse problems. We shall dwell on the subject of integral equations in association with the models which can be compartmentalized to each of these classes of problems. So integral equations may connect several seemingly unrelated topics, therefore every person interested in physmatics - a connected framework of physical and mathematical knowledge - should be familiar with integral equations. Now, let me make a brief overview. In mathematics, integral equations are considered as a natural part of analysis linking together differential equations, complex analysis, harmonic analysis, operator theory, potential theory, iterative solutions, regularization methods and many other areas of mathematics. When I was studying integral equations at the university, I was surprised to discover their multiple connections with other areas of mathematics such as various vector spaces, Hilbert spaces, Fourier analysis, Sturm-Liouville theory, Green's functions, special functions you name it. Maybe such course openness was due to our teacher, professor V. I. Kondrashov, a well-known Russian mathematician who had a considerable erudition in a variety of areas. 5.9 Phenomenological Electrodynamics The ideal of theoretical physics is a microscopic theory of everything. This is probably an unreachable star, yet some microscopic models built in physics, for example in non-relativistic quantum mechanics or quantum electrodynamics, have been quite successful. This success is, however, a precious rarity. More common are the phenomenological models. The collection of such models related to electromagnetic phenomena is aggregately called macroscopic electrodynamics. To better understand what it is let us consider a simple school-time example. Assume that we need to compute the electric field between a capacitor's plates. Then we would have to write the equations for electromagnetic fields produced by all the particles constituting the plates and the dielectric between them. We would also have to add to these field equations the ones describing the motion of the particles in the fields. The self-consistence problem thus obtained is, in principle, quantum mechanical, and any attempt to solve it would be a hopeless task. Such an approach is usually called microscopic - because it considers phenomena on an atomic scale - it is too complicated and in most cases superfluous. Solving problems microscopically usually produces a great lot of immaterial data. It is much more reasonable to formulate general rules for the systems containing many particles i.e., for macroscopic bodies. Such rules have, by necessity, the average character, totally disregarding atomistic structure of the matter. An electromagnetic theory dealing with macroscopic bodies and fields between them is called phenomenological electrodynamics. Thematically, phenomenological electrodynamics belongs more to matter 254 Classical Fields and Waves rather than to fields. However, it is more convenient to discuss this part of electromagnetic theory in connection with Maxwell's equations which should be properly averaged than in the context of, e.g., laser-matter interaction. There may be many phenomenological theories related to the same subject, they are in fact comprehensive models placed between fundamental microscopic laws and ad hoc results describing a given phenomenon. When dealing with a phenomenological theory one cannot be sure that its equations are unique and correctly describe the entire class of phenomena considered unless a well-established procedure of obtaining these equations from microscopic theory is provided. In this respect, macroscopic electrodynamics and, e.g., thermodynamics are "lucky" phenomenological theories: they both can be derived - although with considerable difficulties - from underlying microscopic theories. On the other hand, empirical phenomenologies flourishing in medical, social, economic and even engineering research cannot - at least so far - be derived from first-principle theories and thus the limits of applicability for the respective phenomenological concepts are undetermined. In principle, the question of applicability of any phenomenological theory is very intricate, and later I shall try to illustrate this fact even on the examples of two privileged phenomenological theories: macroscopic electrodynamics and hydrodynamics. Besides, most of what we call microscopic theories are in fact phenomenological. Take, for instance, Newton's laws of motion. They are considered exact, however Newton's laws can be with certainty applied only to macroscopic bodies i.e., to those composed of a very large number of atomic particles in slow, smooth motion. The Newtonian model will eventually lose validity if we continuously dissect such macroscopic bodies. It is, however, not always easy to distinguish between microscopic and phenomenological theories. Thus, the Schrödinger equation, which is also considered exact microscopic, is nothing more than a phenomenological model devised to describe the nonrelativistic motion of an atomic particle. The Schrödinger equation becomes invalid when one starts scrutinizing the interaction between particles through fields. The Newtonian theory of gravity is also phenomenological, this is the mathematical model stating that the attracting force between any two bodies does not depend on their composition, matter structure (crystalline, amorphous, liquid, plasma, etc.) and other constitutional details important from the physical point of view. This force depends only on some aggregate (and to some extent mysterious) coefficient called mass. Newtonian gravity is a very nontrivial model, gravitation could have been totally different, for example, inertial and gravitational masses could have been nonproportional to each other so that "light" objects would fall slower than "heavy" ones in the gravitation field. Phenomenological theory of gravitation, due to independence of physical details, allows astronomers to predict the motion of celestial bodies ignoring physical processes in them. In general, phenomenological theories usually neglect the effects of microscopic quantities or represent them by a set of numbers. 5.9 Phenomenological Electrodynamics 255 So philosophically speaking, most equations of physics are phenomenological. How can one try to establish validity of phenomenological theories? We have seen that the two basic concepts of microscopic - atomistic - models, in contrast to phenomenological theories (an extreme case is thermodynamics), are particles and fields. Phenomenological theories typically (but not always!) do not treat separate particles, they tend to regard objects as continuous without rapidly fluctuating local quantities such as true densities. This approach appears to be quite reasonable from the practical point of view: microscopic values vary in spacetime in a very complicated manner, and it would be merely meaningless to follow their instantaneous local values. In other words, any compelling theory should only operate with smoothed values, the fluctuations being averaged out. This way of thought naturally leads to the possibility of obtaining the phenomenological description by averaging the corresponding microscopic theories. This sounds simple - especially for linear theories - yet the question arises: what is actually "averaging"? There does not seem to be a universal answer to this question, nor a universal recipe for the transition to the phenomenological version of a given microscopic theory. Each specific case is different, and "phenomenologization" can be quite intricate, not reducing to formal averaging. To illustrate the emerging difficulties let us get back to the relationship between microscopic and macroscopic electrodynamics. The system of Maxwell equations constituting the backbone of microscopic electrodynamics is linear, and thus it presumably can be averaged in a straightforward fashion i.e., one can simply substitute into the Maxwellian system average values for the field E̅ and H̅ in place of their genuine values E, H containing fluctuations. However, here two problems arise. Firstly, the charge and current densities, ρ and j, representing inhomogeneities in the Maxwell equations should also be averaged, but nobody knows a priori what the relationship between the averaged fields and the averaged currents would be, and one badly needs this relationship to close the system of averaged equations of macroscopic electrodynamics. This is an important problem, and we shall discuss it later. The second problem is the very meaning of the averaging operation and the corresponding mathematical procedure. Physicists traditionally relied in this issue on the concept of the so-called physically infinitesimal volume and declared averaging over this volume. The whole procedure of averaging over the infinitesimal volume is described in detail in the classical textbook [208]. Below I shall briefly reproduce this standard procedure accentuating the points that were for L. D. Landau and E. M. Lifshitz too obvious. Nevertheless, I consider everything based on the "physically infinitesimal" misleading and poorly suitable for many practically important cases (such as X-ray optics, molecular optics, plasma physics, etc.). Besides, the very notion of the physically infinitesimal volume is badly defined - "on the hand-waving level" - so that it would be difficult to indicate the accuracy of the averaging procedure. One must be satisfied with the formally written average fields (overlined E and H), the question of their deviations from microscopic fields 256 Classical Fields and Waves being irrelevant. One can only assume that fluctuations of the fields averaged over a "physically infinitesimal volume" are such that one considers these macroscopic fields as real statistical averages. One might also remark that the difference between the fields averaged over a physically infinitesimal volume and statistically averaged is inessential. Right, it may be inessential for the description of quasistatic processes in simple model systems such as homogeneous dielectric. Even if we assume that averaging over physically infinitesimal volume is equivalent to averaging over all possible positions of scattering centers for the field (which is not at all obvious), in such averaging we neglect the motion of these centers. Moreover, it would be hardly possible to define a universal physically infinitesimal scale for all systems. For example, there exist many characteristic spatial and temporal parameters for plasmas, rarefied gases, turbulent fluids, superfluids, crystalline and amorphous solids, etc. The question of averaging over the physically infinitesimal volume is closely connected with the possibility of a universal definition of a continuous medium and leads to such nontrivial questions as dynamic irreversibility and time-noninvariance (see Chapter 9). No matter what averaging method is applied to the Maxwell equations, the representation of the electromagnetic field as having exact (rapidly fluctuating) values at each spacetime point should be abolished. Bearing this in mind, one usually regards phenomenological electrodynamics as dealing only with the slow-varying fields, more specifically only with those having the wavelength λ≫n-1/3, where n is the particle density in the medium. One may note that it is this condition that should exclude the Xray range from macroscopic treatment (nonetheless, many problems of Xray optics are actually handled by the tools of phenomenological electrodynamics). In general, along this way of thought it would be difficult to consider the short-wavelength phenomena in matter, e.g., those for large absolute values of dielectric function or refractive index. Besides, the matter response to an electromagnetic field may essentially depend on the motion of particles, the latter in reality "feeling" local and instantaneous fields in each spacetime point and not the mean fields averaged over the volume containing many particles. It is similar to the fact that the car driver reacts on the "here and now" traffic conditions more acutely than on smooth road curves, speed limiting signs, information tableaux and other factors of "macroscopic" character. An alternative - and more correct - averaging method is not the one over the "physically infinitesimal volume", but a standard method of statistical physics: averaging over a statistical ensemble, e.g., in the case of equilibrium over the Gibbs distribution. Taking averages over the quantum state, with the wave function for the pure state or with the density matrix for the mixed state, will be discussed separately below. One may notice that in the statistical 5.9 Phenomenological Electrodynamics 257 method there is no spatial averaging per se, and the fields can still remain quasilocal and quasi-instantaneous126. In the system of Maxwell equations curlH-1 c ∂E ∂t= 4π c(j+ j0) (5.18) divE= 4π(ρ+ ρ0) (5.19) curlE+ 1 c ∂H ∂t= 0 (5.20) divH= 0, (5.21) where ρ0 and j0 are external charge and current densities, the induced currents j and charges ρ are the functions of fields in the matter, j= j(E). In phenomenological electrodynamics, the quantities E, H, ρ, j are assumed to be averaged over either a "physically infinitesimal volume" or a statistical ensemble (see below). The relationship between the field and the current induced by it represents the matter response to an electromagnetic excitation and determines such essential quantities as the medium susceptibilities, both linear and nonlinear. The phenomenological approach allows one to conveniently formulate mathematical problems for the "Maxwell operator". By introducing the tensor functions εij(x) and μij(x), x∈Ω ⊂R3 i.e., dielectric permittivity and magnetic permeability of the medium (see below a detailed discussion of these quantities), we can write the homogeneous Maxwell equations in the operator form through the stationary operator M(ε, μ) acting on the pair (E, H)T where E= E(ω, r), H= H(ω, r) are respectively electric and magnetic field vectors in the considered domain Ω. More specifically, for a stationary electromagnetic field its eigenfrequencies correspond to the spectrum of M(ε, μ) acting on (E, H)T according to the rule M(ε, μ) (E H) = ( iε-1∇× H -iμ-1∇× E). One usually assumes the solenoidal constraints div(εE) = 0, div(μH) and some boundary conditions, e.g., Eτ|∂Ω = 0, (μH)ν|∂Ω = 0. Here index τ denotes the tangential and index ν the normal component of the respective vector on boundary ∂Ω. In isotropic medium, permittivity and permeability may be reduced to positive scalar functions ε(r), μ(r). When these functions as well as boundary ∂Ω are sufficiently smooth, the spectral problem for the Maxwell operator can be solved. The Maxwell operator 126 The microscopic electromagnetic fields are still bound from below by the nuclear scale ( ~10-13 cm). As for the macroscopic fields in the matter, it does not make sense to phenomenologically consider the distances smaller than the atomic ones (~10-8 cm). 258 Classical Fields and Waves formulation is especially useful for calculating modes in electromagnetic resonators (see, e.g., [6, 5]). In optical problems related to isotropic media, permittivity and permeability are also typically treated as smooth and positive scalar function determining the refraction index n= (εμ)1/2 and the velocity of light in the medium, vph= c/n. Then the "optical" metric ds2 = c2dt2 -dxidxi turns, as already discussed, the considered domain Ω ⊂R3 into a Riemannian manifold. 5.9.1 The Traditional Averaging Procedure The meaning of the averaging over a "physically infinitesimal volume" needs to be made clear for the sake of sound understanding of phenomenological electrodynamics. Let us now carry out explicitly the standard procedure of such an averaging. Apart from being a useful exercise, it is also a useful trick the details of which are not always given in the textbooks. In the standard averaging scheme of transition to macroscopic electrodynamics, one usually introduces the electric induction vector D= E+ 4πP and the magnetic field strength H= B-4πM where P is the polarization vector of the medium and M is the magnetization vector (by definition, it is the mean magnetic dipole moment for unit volume, M(r) = ∑Nimi(r) ̅̅̅̅̅̅̅̅ i , mi(r) ̅̅̅̅̅̅̅̅ is the average magnetic dipole of the elementary domain or cell in the vicinity of point r, e.g., of a single molecule located at r and Ni is the average number of such cells). One might complain that such phenomenological construction is far from being elegant. The average value of the magnetic field is usually denoted as B and called the magnetic induction. In the classical averaging scheme over the "physically infinitesimal volume", the polarization vector P is usually defined as the vector whose divergence equals (up to a sign) the average charge density in the medium, ∇P= -ρ (see [208], §6). This definition is consistent with the general requirement of the matter electric neutrality, the latter being necessary for the stability of the matter. One can easily understand the physical meaning of the polarization vector P as the average dipole moment of the unit volume. Indeed, if for the zero-moment of the averaged charge density ρ we have, due to neutrality, ρ̅ = ∫ρ̅(r, t)d3r Ω = 0 for all t, for the first moment we may write ∫rρ̅(r, t)d3r Ω = ∫P(r, t)d3r Ω 5.9 Phenomenological Electrodynamics 259 where integration runs over the whole matter (e.g., a macroscopic dielectric body). One may notice that these relations are supposed to be valid not only in the static case. To prove the second relation, we can use the vector integral identity ∮r(Pdσ) Σ = ∫(P∇)rd3r Ω + ∫r∇Pd3r Ω , where Σ is any surface enclosing the matter volume Ω. If this surface passes outside this volume (a material body), the integral over it vanishes, and since (P∇)r= P we get the required relationship for the first moment of ρ̅. The above vector identity can be proved by a number of ways, e.g., by using the tensor (index) notations. The simplest, but possibly not very elegant, proof would consist in using the identity ∇(φ(r)P) = φ∇P+ (P∇)φ which can be applied to any component of r= (x, y, z) and then integrating it over Ω, with the left-hand side giving the surface integral. Thus, we have the four vectors - two vector field pairs - of macroscopic electrodynamics, E, D, H, B, which must be supplemented by some phenomenological relations between these vectors. Besides, if the above four quantities must satisfy the equations of the type of Maxwell's equation for the microscopic vacuum values, the question of sources for the phenomenological quantities arises. The textbook relationships between the vectors within each pair, D= εE and B= μH, complete the traditional construction of macroscopic electrodynamics. This simple construction can be justified for static or slowly varying (quasistationary) fields, but it usually becomes totally inadequate for rapidly changing vector functions such as for short wavelengths or ultrashort pulses (USP), the latter being an increasingly popular tool in contemporary laser physics. One may note that such commonly used terms as "short wavelengths" or "hard electromagnetic radiation" are relative: for certain media the radiation typically perceived as "soft", e.g., infrared, may exhibit shortwavelength features whereas in some other type of matter the same radiation may be treated as quasistationary. Thus in plasma, where the average distance between the particles, n-1/3, n is the particle density, may easily exceed the characteristic wavelength λ of the electromagnetic field, using the phenomenological field equations obtained by averaging over the physically infinitesimal volume can easily become meaningless. I have already mentioned that the conventional approach to macroscopic electrodynamics, corresponding to the averaging of microscopic fields over "physically infinitesimal volume", consists in the additive decomposition of the total induced current and charge densities, j and ρ (also averaged over the physically infinitesimal volume), into "physically different" parts, e.g., j= jc+ jp+ jm, 260 Classical Fields and Waves where jc represents the current of conductivity electrons, jp is the polarization current, jp= ∂P∂t ⁄ , where jm is the magnetization current, jm= ccurlM. (I would recommend reading carefully the respective material in the textbook by L. D. Landau and E. M. Lifshitz [208].) One does not need to perform the same decomposition procedure for the charge density because of the constraint given by the continuity equation 5.3 which remains valid after any kind of averaging (due to its linearity). 5.9.2 Ensemble Averaging of Fields and Currents Paying attention to the difference between averaging over a "physically infinitesimal volume" and ensemble averaging may be essential to understanding of the fundamentals involved. The Maxwell equations in the medium obtained by the traditional averaging procedure (i.e., over the "physically infinitesimal volume") and having the form ([208], §75) curlH-1 c ∂D ∂t= 4π cj0 (5.22) divD= 4πρ0 (5.23) curlE+ 1 c ∂B ∂t= 0 (5.24) divB= 0, (5.25) being supplemented by the "material equations" relating the quantities D, B and E, H, D= εE and B= μH can be used without reservations only for relatively slow-varying fields (static and quasistationary). For fast changing electromagnetic fields and pulses as well as for the media where spatial field variations can be shorter than the average distance between the constituent particles (e.g., in plasmas), these equations become inconvenient or merely inadequate. Indeed, it seems to be nearly obvious that, even leaving aside the dubious and in general mathematically incorrect procedure of averaging over the physically infinitesimal volume, breaking down the total current into presumably non-overlapping components cannot be unambiguous for high frequencies. It may be easily seen that the currents excited in the medium due to free and to bound electrons cannot be separated already for optical frequencies or for rapid variations of an external field (ultrashort pulses). One may illustrate this fact by a simple example of an atomic electron in an external field E. For simplicity, we may consider here classical (non-quantum and non-relativistic) motion, mr̈ = eE(t), e, m are the electron charge and mass respectively, then the characteristic displacement or oscillation amplitude of an electron in the field E(t) of an electromagnetic pulse or wave is r0~eEτ2/m or r0~eE/mω2 where τ is the ultrashort pulse duration. Such a displacement can be readily scaled down to atomic distances (Bohr's radius, rB~10-8cm) even for rather strong fields, say only an order of magnitude lower than atomic fields, Eat~e/aB 2~109V/cm2. This simple example (and many others, too) demonstrates that the difference between the conductivity, polarization and magnetization 5.9 Phenomenological Electrodynamics 261 currents, the latter being due to the charge motion along closed trajectories, rapidly becomes very vague with the decreased wavelength of the radiation field. Starting from ultraviolet light, unambiguous decomposition of current into these three components becomes virtually impossible. At least, such a decomposition at optical frequencies and for ultrashort electromagnetic pulses is pretty arbitrary. Thus, the optimal possibility we have in the case of fast varying fields is to consider the total current j(r, t) incorporating all kinds of charge motions caused by an electromagnetic field. This current, in the situation of thermal equilibrium, should be averaged over a statistical ensemble i.e., with the Gibbs distribution (or equilibrium density matrix in the quantum case). Technically, it is often convenient to introduce the total polarization P(r, t) (see [209]) instead of the total current: P(r, t) = ∫j(r, t′)dt′ t -∞ , j(r, t) = ∂tP(r, t) It is important to remember that total polarization P(r, t) includes all currents, due both to free and to bound charges, and not only the displacement current, as in the intuitive scheme of averaging over the physically infinitesimal volume. One can also see that the total polarization accumulates current contributions starting from the remote past (t→-∞), but not from the domain t′ > t. This is a manifestation of the causality principle which is one of the crucial assumptions in physics (see also Chapters 6, 9). In the distant past, t→-∞, polarization is assumed to be absent. Introducing the total polarization enables us to write down the total induction, D(r, t) = E(r, t) + 4πP(r, t), and not only induction only owing to displaced bound charges, as in traditional theory of dielectrics. Using the total induction, we may write the averaged Maxwell equations in the medium as curlE+ 1 c ∂H ∂t= 0 (5.26) divH= 0 (5.27) curlH-1 c ∂D ∂t= 4π cj0 (5.28) divD= 4πρ0, (5.29) where the last equation is in fact a consequence of the continuity equation 5.3. Indeed, from ∂ρ ∂t+ ∇j= ∂ρ ∂t+ ∇∂P ∂t= 0 we have ρ= -∇P+ ρ-∞= 1 4π∇(D-E), 262 Classical Fields and Waves where ρ-∞ is the integration constant that can be put to zero (we assume that there were no polarization in the distant past). Then we have 4πρ= -∇(D-E) , but ∇E= 4π(ρ+ ρ0) where ρ0 is, as before, the density of external charges introduced into the medium. Thus, we get ∇D= 4πρ0. One may notice that in this "optical" approach one does not need to introduce, in addition to a magnetic field, such a quantity as magnetic induction and, consequently, magnetic susceptibility [209]. So, there is no distinction between B and H. Indeed, the total polarization already contains a contribution from the magnetization (circular) currents, e.g., arising from nearly closed trajectories of electrons moving in the electromagnetic field of elliptically polarized light. One often asks: why are currents and polarization in the medium considered the functions of the electric field E alone, with magnetic field H being disregarded both in the dielectric relationship P= χE and in the conductivity relationship j= σE? Or, to put it slightly differently, why is it always implied that the induction D is only proportional to E and not to H? One might encounter several versions of answering this question. One of the versions is that H should be excluded because (1) it is an axial vector (H= ∇× A), whereas E is a "true" vector so that they both cannot be simply superposed, and (2) H is not a time-invariant quantity. The second argument implies, however, an a priori requirement imposed by our intuitive perception of the world: nobody can be sure that all electromagnetic phenomena must be strictly invariant under time reversal (even in the static case). In fact, it seems to be wrong (more about time-reversal invariance in Chapter 9). As to the first argument, it may be "neutralized" by introducing a pseudoscalar proportionality coefficient between P and H (or j and H and D and H). In reality, we may disregard the magnetic field in the "material relations" because it can be expressed through the electric field using Maxwell's equations. Besides, even if we wished to explicitly involve the magnetic field, it would make little sense unless we considered ultrarelativistic motion of charges, which is an exotic case in the medium. The point is that factor v/c always accompanying a direct effect of a magnetic field on moving charges would make its influence hardly noticeable for the particles in matter whose characteristic velocities are of the atomic scale (vat~e2/ħ) i.e., at least two orders of magnitude lower than the speed of light. Now the main problem is: how to link the current induced in the medium to an external field exciting this current? It is clear that such a problem is in general extremely difficult and can hardly be solved for an arbitrary matter containing a macroscopic number (N~1023) of particles placed in a fortuitous field. Nonetheless, there are several approaches to this universal problem. One of such approaches has been developed within the framework of nonlinear optics (NLO). This is a comparatively new discipline which emerged in the 1960s, following the advent of lasers. Before that time optics was essentially linear, and probably the only attempt to consider nonlinear electromagnetic effects were made in quantum electrodynamics while treating the scattering of light by light [210] (see also 5.9 Phenomenological Electrodynamics 263 [263]), which is in fact vacuum breakdown process 127 . Linearity of electrodynamics required that the polarization of the medium and induced current should be written respectively as Pi-χijEj, where χij is the susceptibility tensor, and jiσijEj, where σij is the conductivity tensor128. A more general, nonlocal, linear expression linking the induced current to an electric field may be written as ji(r, t) ≡ji (1)(r, t) = ∫σij(r, r1; t, t1)Ej(r1, t1)d3r1dd1 (5.30) Here I intentionally did not indicate integration limits implying that they are ranging from -∞ to +∞; however, I think it necessary to comment on this point. I have already mentioned that if one assumes that the causality principle holds for all types of medium, then one must consider polarization and currents, observed at time t, depending only on the field values related to the preceding moments of time, t1 -∞) and is bounded from above if (f, Af) (f, f) ⁄ ≤M, M 0, and the isotropic oscillator field, U(r) = kr2, k> 0. In principle, for any central field U(r) there exist a collection of conserved aphelion r= a(1 + e) and perihelion, r= a(1 -e), 6.9 Quantum-Classical Correspondence 297 vectors. This corresponds to motion in a central field in celestial mechanics and the Runge-Lenz (Laplace-Runge-Lenz, LRL) vector, which is the conserved quantity. The reason for its existence as a conserved quantity is that central force problems are characterized by a more profound symmetry than SO3, the LRL vector being the manifestation of this fact (see, e.g., [149]). Nevertheless, I don't know any good physical interpretation of this conserved quantity and would be thankful if someone could provide it. It is clear that the LRL vector is connected with the orbit's eccentricity and as such it can be used to calculate the eccentricity. For any central potential there must exist a number of eccentricity (aphelion and perihelion) vectors, but what happens when the potential continuously differs from the pure Coulomb? For example, how does the SO4 algebra change with the variation of the screening constant in the Yukawa potential? Due to its conservation, the LRL vector commutes with the Hamiltonian, which gives an additional relationship to the Lie algebra whose Lie group (SO4) is of a larger symmetry than the rotational group (SO3). The consequence of this fact is that the spherical motion problem in quantum mechanics admits, besides SO3, other non-trivial subgroups of SO4 realized as solutions to the usual nonrelativistic hydrogen problem, i.e., as modes derived from the spinless Schrödinger equation. The question of how many vector constants of motion do exist for a particle moving in a central potential will be discussed later, in connection with general features of particle motion in a central field. Now we are mostly interested in bridging over the cognitive chasm between classical and quantum mechanics. Unfortunately, since these two theories are too different there does not seem to be the uniquely defined and mathematically impeccable way to make the transition between them, irrespective of the manner of understanding the word "transition". For example, semiclassical theory is just an asymptotic expansion of the solutions to the partial differential equations and is thus not equivalent to classical mechanics in the mathematical sense. This asymptotic expansion corresponds to the limit ħ→ 0, but ħ is a dimensional quantity so that its limit to zero or infinity is not quite meaningful. In general, the connection between quantum and classical mechanics is intricate and not quite clear, there are a lot of beliefs and folklore about it. P. A. M. Dirac, probably deeply understanding this fact, in his famous book "Principles of Quantum Mechanics" [20] and especially in subsequent lectures [139] has replaced proofs of quantum-classical correspondence by a kind of axiomatic formalization of terminology. This axiomatic or rather quasi-engineering approach proved to be very successful, but it could not compensate for the loss of logical connections between classical and quantum mechanics. It is remarkable that Dirac has intuitively guessed how to construct quantum mechanics on the base of classical one. One might recall in this connection that some great scientists, e.g., Einstein disagreed with such intuitive replacement of logical links between the two models of the mechanical world. In my view, it is the logical transition between classical and quantum mechanics that should be improved in future, not the philosophical issue of interpretation of quantum mechanics which seems to be a pseudoproblem. To have two great theories for the description of mechanical motion, 298 The Quantum World with some indeterminacy when to use one and when the other 150 is an unsatisfactory state of affairs. 6.10 The Ehrenfest Theorem and Its Meaning The first, simple but tricky, question which is posed by any student beginning to study quantum mechanics is: in classical mechanics we have all the quantities depending on the particle coordinate, r(t) or, for generalized coordinates, qi(t), i= 1, ... , n, n= 3 for a single particle . What should we use instead of r(t) or qi(t) in quantum mechanics? An automatic answer "the position operator" that might be given by a person who has already had some experience with quantum theory is not quite satisfactory for a beginner or at least incomplete because it implicitly involves a lot of new concepts. Thinking about this seemingly primitive basic problem leads to a number of surprising issues that should be elucidated lest one thoughtlessly use quantum prescriptions. Below we shall tackle some of them, but to prepare ourselves for surprises we have to discuss some elementary concepts of orthodox quantum mechanics first. Assume now that we find ourselves within the fully quantum world. The most natural way to establish the quantum-classical correspondence would be probably restoring the classical behavior of the position (or coordinate) operator, r, or of generalized coordinates qi(t), i= 1, ... , n, (n= 3) starting from the quantum picture. To this end, one can employ the Heisenberg picture or Copenhagen interpretation (see the respective section in this Chapter), A(t) = U+AU, where U= exp(-iHtħ ⁄ ) is the evolution operator for a "pure" quantum mechanical system characterized by the Hamiltonian H= p2 2m ⁄ + V(qi). Differentiating this equation, we get the standard expression for the time derivative of the operator corresponding to the classical function A(t): ∂ ∂tA(t) = i ħ[H, A] Putting A(t) = qi(t) (for simplicity, one can consider one-dimensional motion here, of course), we must arrive at some analog of the classical 150 When one says that quantum mechanics must be used in all cases when atomic (or subatomic) particles are considered, I would say it is wrong: particles in accelerators are treated classically with extremely good precision. Moreover, in highly precise scientific instruments such as mass spectrometers, beta-ray spectrometers and the like, which are based on subatomic particle motion in external fields, deflection of these microscopic particles are computed with high precision with the means of classical mechanics and no quantum corrections are included. In a rather important problem of the passage of atomic particles through the matter, it is very hard to tell a priori whether a concrete task should be treated classically or quantum mechanically, this is a purely modeling approach, a matter of arbitrary choice. Furthermore, there are many attempts to treat the entire universe as a quantum object (see below the discussion of the wave function of the universe). Should one then treat such subsystems of the universe as galaxies, stars, planets and other astrophysical objects also with quantum mechanical means. Does it always make sense? 6.10 The Ehrenfest Theorem and Its Meaning 299 equation of motion 151. One usually considers in this context the so-called Ehrenfest theorem which is related to the average (expectation) values of coordinates and momenta. It is useful, however, to trace the correspondence of the quantum and classical formulas already at the level of operator equations, before the transition to average values. This question is described in detail in the textbook by A. S. Davydov [140], chapter 2. Let us find the quantum-mechanical time derivative of an arbitrary function f(r), where r is what we call in classical mechanics the radius-vector of a particle (or the time derivative of f(qi(t)), where qi(t) are generalized - in fact curvilinear - coordinates). In quantum mechanics, we must interpret the function f(r) as an operator, so according to the general formula for the time derivative of an operator we have d dtf(r) = i ħ[H, f(r)], where f can also be a vector quantity of course. Since [V, f(r)] = 0 (or, in generalized coordinates, [V(qi), f(qi)] = 0) we get d dtf(r) = i 2mħ(p2f(r) -f(r)p2) = i 2mħ(ppf(r) -f(r)pp) = i 2mħ(pf(r)p-iħp∇f(r) -pf(r)p-iħ∇f(r)p) = 1 2m(p∇f(r) + ∇f(r)p), (6.8) since pf(r) -f(r)p= iħ∇f(r). Indeed, (pf-fp)ψ= iħ(∇(fψ) -f∇ψ) = -iħψ∇f for an arbitrary ψ. If we take, for instance, the time-dependent operator f(r) to be the particle velocity interpreted as a function of position r, then we may find the time derivative of this velocity operator d dtv(r) = i ħ[H, v(r)] = i ħm(V(r)p-pV(r)) = -1 m∇V(r) and we arrive at the operator equation whose form is exactly the same as Newton's equation in classical mechanics, dpdt ⁄ = -∇V(r). Thus, using only the commutation relations between coordinate and momenta we obtained the operator analogs of the classical equations of motion. It is, however, difficult or at least unconventional for the people trained in classical mechanics to use directly operator differential equations, so one would rather invent a scheme that results in the classical equations of motion for the 151 We assume here that the Hamiltonian does not depend explicitly on time. An explicit dependence of time in the Hamiltonian results in some difficulties which are natural, while in this case we have a different physical situation - the system is under variable external conditions, e.g., in an external field. We shall discuss this situation below. 300 The Quantum World expected values since one can handle the latter as classical quantities. Such a scheme leads to the so-called Ehrenfest equations which were obtained by P. Ehrenfest in 1927 [157]. The Ehrenfest theorem states that the motion of a quantum particle will be, on average, identical to the motion of a classical one. The words "on average" in this interpretation are usually understood in the following way: the quantum particle is represented by a wave packet concentrated near its classically moving expectation value, and the potential in which the particle moves does not change significantly over the dimensions of the wave packet. In this sense, the particle may be considered pointlike with respect to potential motion despite being spread over a finite packet size. Such a condition allows one to replace the particle position and momentum by their expectation values; in fact, the condition of packet "smallness" is neither sufficient nor necessary (see [164]), for example, one can see that the Statement of the Ehrenfest theorem is valid not only for the "concentrated" wave packets. In order to elucidate the meaning of the Ehrenfest theorem, let us begin from some simple averaging procedures. To get rid of supplementary but not essential difficulties we assume again that the Hamiltonian does not depend explicitly on time. The simplest method to obtain the equations for expectation values which would correspond to the above operator equations is to directly average these operator equations, but this procedure is not mathematically impeccable unless we exactly define the averaging procedure. If we close our eyes on this pedantry and just take the expectation values of both sides of the operator equation (with respect to a Heisenberg ket-state that does not change with time), we obtain dp ̅̅̅̅ dt= md2r ̅̅̅̅̅ dt2 = md2r̅ dt2 = -∇V(r) ̅̅̅̅̅̅̅̅ or, for generalized coordinates, mikqk ̅̅̅̈ + ∂V ̅̅̅̅ ∂qi= 0, where mik plays the role of the metric tensor. Here symbol A̅ means the quantum averaging over a pure state, A̅ = (Ψ, AΨ) = ∫d3rΨ ∗(r)AΨ(r) or, in Dirac's notations, A̅ = 〈Ψ|A|Ψ〉. In general, averaging and differentiation over time are non-commutative operations so that we are unable to write dpdt ⁄ ̅̅̅̅̅̅̅̅ = dp̅ dt ⁄ (see below). This is just an attempt to replace the quantum calculations by the classical ones. It is clear that in the case of many particles one has to replace integration over d3r by integration over hypervolume element dτ and calculate a multidimensional integral. We may perform here straightforward calculations to illustrate the simple bridge between classical and quantum mechanics based on the Ehrenfest theorem. Incidentally, the coherent states popular today were first discussed by E. Schrödinger [129] as a by-product of obtaining a solution for the wave packet whose center moves according to the 6.10 The Ehrenfest Theorem and Its Meaning 301 laws of classical mechanics in the quadratic (oscillator) potential, V(q) = 1/2mω2q2. We have already touched upon the oscillator model many times and shall deal with the model of coherent states later in some detail. To find the "motion equations" for expectation values we may, for simplicity, use also the Schrödinger picture (recall that the Schrödinger and the Heisenberg representations are equivalent, see above). Differentiating straightforwardly the defining expression for A̅, we get dA̅ dt= (∂Ψ ∂t, AΨ) + (Ψ, A∂Ψ ∂t). Now, from the Schrödinger equation we have ∂Ψ ∂t= -i ħHΨ, ∂Ψ ∗ ∂t = -i ħH∗Ψ ∗, where H= H+ (we assume the Hamiltonian to be self-adjoint). Inserting these expressions into that for the derivative dA̅ dt ⁄ , we get -iħdA̅ dt= (HΨ, AΨ) -(Ψ, AHΨ) = (Ψ, H+AΨ) -(Ψ, AHΨ) = [H. A] ̅̅̅̅̅̅̅̅. Putting here A= qi and A= pk, we get -iħdqi̅ dt= [H, qi] ̅̅̅̅̅̅̅̅, (6.9) -iħdpk ̅̅̅ dt= [H, pk] ̅̅̅̅̅̅̅̅̅. (6.10) It is not difficult to demonstrate that commutators [H, qi] and [H, pk] give the derivatives over pi and qk, respectively: ∂H ∂pi = i ħ[H, qi], -∂H ∂qk= i ħ[H, pk]. (6.11) These relations follow directly from the canonical commutation relations (CCR), [pi, qk] = -iħδi k, and are valid not only for the Hamiltonian H(pk, qi), but also for any polynomial (entire) function of p, q. The above formulas for the dynamics of expectation values were related to the autonomous case when the operator A does not depend explicitly on time. In case the operator A depends explicitly on time, we have only a slight modification of the formulas namely dA̅ dt= (Ψ, ∂A ∂tΨ) + (∂Ψ ∂t, AΨ) + (Ψ, A∂Ψ ∂t) = (Ψ, ∂A ∂tΨ) -i ħ(Ψ, [H, A]Ψ). One can introduce here a time derivative operator, dAdt ⁄ , defined by 302 The Quantum World dA̅ dt= (Ψ, dA dtΨ). Then we get the operator identity dA dt= ∂A ∂t+ i ħ[H, A], which is an expression of a well-known fact that if some operator A does not explicitly depend on time and commutes with the Hamiltonian, the expectation value of the respective physical quantity does not change with time in any state. In such a case, A is the quantum integral of motion. Replacing again A with qi and pk, we have the operator equations dqi dt= i ħ[H, qi], dpk dt= i ħ[H, pk] (6.12) One can compare ?? with 6.12. Assuming the standard form of the Hamiltonian, H= -ħ2 2m∂k∂k+ V(qi), we obtain the following operator relations dpk dt= -∂kV, dqi dt= pi m (6.13) which, despite their operator character, have the form of classical Hamiltonian equations. This fact is often interpreted as a testimony for a close connection of quantum and classical mechanics. However, it is difficult to ascribe an exact meaning to the term "close connection". The usual beliefs that quantum mechanics incorporates classical mechanics are, honestly speaking, not well founded. What do we expect when we think that a new theory includes an old one? That there exists a universal procedure of retrieving the old theory in its precisely defined application area, by taking an appropriate mathematical limit. There is no such universal procedure for the transition from quantum to classical mechanics. As it has been already mentioned, it does not seem to be possible to recover classical mechanics in all areas of its validity. The classical limit problem remains an open question. For example, one of the main problems related to the relationship between classical and quantum mechanics remains open: can quantum mechanics recover the classical motion over a single orbit, and in what limit? There are contradictory statements about this limit, some people say ħ→0 is sufficient, others consider the limit of "large quantum numbers". There exist also a number of other points of view as to at what level classical 6.10 The Ehrenfest Theorem and Its Meaning 303 behavior follows from quantum theory (e.g., based on stochastic schemes i.d. the Brownian motion and Fokker-Planck equation). Today, the decoherence concepts (see above) are of fashion, they did not exist twenty years ago although the correspondence problem is as old as the history of quantum mechanics. In different models, a variety of classical limits is discussed. Thus, in the computation of radiation processes classical behavior is identified (sometimes erroneously) with the emission of "small quanta with large probability" whereas rare events of "large" quanta emission (with nonnegligible recoil) testifies to the necessity of quantum description. In the coherent states theory, there has long been a belief that classical mechanics is essentially an approximation of quantum mechanics, if one restricts all the states to coherent states only. In other words, it is the coherent states that are the cause of classicality. More sophisticated approaches to quantum-classical correspondence are based on the Wigner and Husimi functions as well as Weyl-Moyal algebra. The Feynman path integral (see below) in the original form introduced by R. Feynman [44] may also be considered a bridge between the two mechanics. Other groups actively promote the Bohm-de Broglie version of mechanics as a unifying theory. In short, any person doing quantum computations may have her/his own opinion about the development of classical behavior from quantum theory. This diversity of opinions indicates a certain logical inconsistency of the classical limit schemes that do not necessarily fit together. The obvious difficulty for the classical limit is rooted in the fact that quantum physics is largely based on probabilities - only eigenvalues and expectation values have definite (exact) values. On the other hand, in classical mechanics a particle is presumed to be at a definite position and to possess a precise velocity (or momentum) under the influence of a definite force at each instant of classical time. We have seen above that discussions of the classical limit of quantum mechanics are frequently based on the Ehrenfest theorem which deals with the averaged quantum quantities. According to this theorem, as we have seen, quantum evolution in the mean resembles classical dynamical behavior. For instance, the Ehrenfest theorem for a system of electrons in a time-dependent external force gives the complete analog to Newton's law of motion for the interacting classical particles, in terms of the averaged positions of electrons and the averaged force acting on them. Averaging in this context can be interpreted in the following way. If one were able to produce a large number of observations over the positions of a particle described in quantum mechanics by a wave function Ψ, the average of all observed values of position vector r is obtained with the help of the quantum probability density, dw= |Ψ|2d3r, r̅ = ∫rdw= ∫Ψ∗rΨ d3r, that is the weight function dw= |Ψ|2d3r defines the frequency of occurrence of the position r. Likewise the average momentum is defined by the expression 304 The Quantum World p̅ = ∫Ψ∗(-iħ∇)Ψ d3r, Generalizing these expressions, we may conclude that the average value of any dynamical variable A(r, p, t) can be written as A̅ = ∫Ψ∗A(r, -iħ∇, t)Ψ d3r, where A(r, -iħ∇, t) is an operator representing the quantity A(r, p, t) in quantum mechanics. We have seen that the evolution of the average momentum, dp̅ dt ⁄ = -∇V(r, t) ̅̅̅̅̅̅̅̅̅̅, looks very similar to the Newton motion equation, however it does not produce Newtonian mechanics from the quantum theory. We have seen, for instance, that it is in general incorrect to define velocity v as dr/dt (see, e.g., [84], §19). Due to the fact that quantum mechanics operates with objects which are spread in space and time, it has been customary to introduce the notion of a "wave packet" as a mathematical model for a quantum particle. Below, after we discuss a little more the significance of the Ehrenfest theorem and its possible generalizations, we consider wave packets from the viewpoint of the quantum-classical relationship. 6.11 Wave Packets in Quantum Mechanics We have seen in the preceding section that the Ehrenfest theorem can be interpreted in terms of expectation values, namely that the rate of change of the particle momentum expectation value is equal to the expectation value for the force acting on the particle. The term "expectation value" has a meaning only when some averaging procedure is defined. In quantum mechanics, it is assumed that the state 152 of a particle is described by the wave function Ψ(r, t) that may be interpreted as representing a "wave packet", a kinematic image associated with the quantum particle motion. In the process of motion, a wave packet tends to spread with time so that it is not a steady entity. The Ehrenfest theorem deals with the "center of gravity" of the wave packet, which has a fair meaning when the packet is unimodal, well concentrated and in a certain sense small. It would be impractical to define the "center of gravity" for a plane wave or for a polynomial. If the dimensions of the wave packet associated with the particle are small, the particle motion, according to the Ehrenfest theorem, may be approximated by classical mechanics. The "center of gravity" of the wave packet, r̅t= {x̅i(t)}, i= 1,2,3, moves along a trajectory which is a set of points x̅i(t) for all values of time t. One must remember, however, that the notion of a classical force standing in the righthand side of the Ehrenfest equation has only a limited validity since in general the average value of a function does equal the value of a function at the point 152 We consider only "pure" states in this context. 6.12 Semiclassical Expansions and Asymptotic Methods 305 where its argument takes the average value, i.e.,∇V(r) ̅̅̅̅̅̅ ≠∇V(r̅). For instance, x2 ̅̅̅ = x̅2 and, more generally, xn ̅̅̅ = x̅n for n≫ 2. But what is exactly a wave packet in quantum mechanics? Can it be interpreted as an appropriate ensemble of classical orbits? 6.12 Semiclassical Expansions and Asymptotic Methods One usually asserts that quantum mechanics is logically more fundamental than classical mechanics, in the sense that classical mechanics can be derived from quantum mechanics but not vice versa. However, this statement can be neither proved nor disproved, at least up till now. The classical end of quantum mechanics is represented, as a rule by the quasi-classical WKBJ approximation (see above) which is, from the mathematical view-point, just an asymptotic expansion of wave equations. Quasi-classical approximation is so obvious that it was used for wave equations long before WKBJ, e.g., by Liouville, Stokes, Green, and Rayleigh. Recently, the WKBJ theory originally constructed for linear equations has been extended to the nonlinear framework, in particular, employed for the nonlinear Schrödinger equation [214]. In general, for an arbitrary quantum mechanical setting (i.e., for an arbitrary set of quantum dynamical variables), the h→0 limit is not obliged to exist which means that the quantum theory does not necessarily imply classical mechanics in the sense of this limit. 6.13 The Density Matrix and Its Relatives In the conventional quantum mechanics the state is usually defined either by a vector (ray) in a Hilbert space or, in a more general case, by a density matrix. The first kind of a state is called "pure" whereas the second kind "mixed". Mathematically, a state Ψ may be called pure if the interpolating relationship Ψ = αΨ1 + (1 -α)Ψ2, where Ψ1,2 are two states, 0 τr≫τ0, a single-particle distribution function tends to the equilibrium MaxwellBoltzmann distribution170. In other words, the whole many-body system for t≫τr reaches the state of statistical equilibrium so that the respective manyparticle distribution functions tend to the canonical Gibbs distribution, fN(x1, ... , xN, t) →exp[β(F-H(x1, ... , xN))] , where β= 1/T is the inverse temperature, F is the free energy and H(x1, ... , xN) is the system's Hamiltonian. In other words, the single-particle distribution function f(r, p, t) substantially changes over time scales t τr≫τ0 whereas at the initial stage of evolution this function remains practically intact. Yet manyparticle distribution functions can change very rapidly at short times comparable with the chaotization period τ0. Physically, one can understood this fact by considering spatially uniform systems with pair interaction between the particles, when many-body distribution functions depend on the coordinate differences of rapidly moving constituents. It is intuitively plausible that many-particle distribution functions would adjust to instant values of a single-particle distribution. To translate this intuitive consideration into the mathematical language one can say that for τr> t≫ 170 One can imagine even tinier time scales in a many-body system, namely τ0~ τr/N, where N is the number of particles in the considered part of the system. 336 Stochastic Reality τ0 (intermediate asymptotics) many-particle distribution functions become the functionals of a single-particle distribution function fN( x1, ... , xN, t) t≫τ0 → fN[ x1, ... , xN; f(r, p, t)] so that the temporal dependence of many-particle distributions is now determined by the single-particle function. This idea (expressed by N. N. Bogoliubov) is rather important since it leads to a drastic simplification of the models describing many-body systems. In particular, although the respective distribution functions formally depend on initial data for all the particles, after a rather short time (τ0~10-12 -10-13 s) this dependence becomes much simpler since its relics are only retained in the relatively smooth single-particle function f(r, p, t). One usually applies to this situation the notion of "erased memory" designating asymptotic independence of many-particle distribution functions on precise values of initial data - a huge simplification since initial values of all coordinates and momenta are never known exactly and, even if known, would be completely useless. In the modern world, the idea of fast forgotten details of microscopic instances, in fact of insensitivity to microscopics, has become especially useful for mathematical modeling of complex systems. 7.4 Statistical Ensembles Let us start by recapitulating the basic notions of the idea of statistical ensembles. In conventional statistical mechanics, there has been a strong expectation that an ensemble average can correctly describe the behavior of a single particular system. Despite numerous attempts, there seems to be no rigorous mathematical proof for applying statistical ensembles to an individual observed system. The ensemble methodology lying at the very foundation of statistical physics still has the status of a logical presumption rather than of a compelling mathematical fact. A variety of good samples for concrete ensembles can be satisfactory from the physical point of view but require a more ample treatment for a cultivated mathematical taste. In physical terms, the default of ensemble methodology would mean that some quantities for an observed system may substantially deviate from the ensemble averages. Nonetheless, it would not be a tragedy; on the contrary, such deviations can provide important physical information about the observed system. In the model of a microcanonical ensemble, one basically considers an isolated system, which is in fact not very interesting from a practical point of view. Explaining this requires a reminder of basic thermodynamics, which probabilists call large deviation theory. In the microcanonical ensemble one considers the number of microstates of an isolated system at some fixed value of internal energy U, volume V and other extensive conserved quantities. The logarithm of this number of microstates is the entropy S (it is convenient to set the Boltzmann constant kB= 1) and by inverting this function one obtains the internal energy U(S, V, ... ). From this so-called thermodynamic potential 7.5 The Bogoliubov Chain 337 all other thermodynamic quantities (temperature, pressure, heat capacity and so on) can be computed as a function of the extensive quantities. A kinetic equation is a mathematical model allowing one to find (approximately!) the distribution function for the statistical ensemble. 7.5 The Bogoliubov Chain In this section, we are going to study mainly the stationary solutions of the Bogoliubov hierarchy equations. In literature this chain of equations - an infinite system of integro-differential equations for the many-particle distribution functions - is generally known as Bogoliubov-Born-GreenKirkwood-Yvon (BBGKY) hierarchy. For systems in a finite volume these equations are equivalent to the dynamical Liouville equation and characterize the time evolution of the probability measure on the phase space of a finite number of particles. Performing the thermodynamic limit, one obtains the infinite chain of the Bogoliubov hierarchy equations which are related to a system of particles in the whole space. As near as I know, the problem of existence and uniqueness for this chain of equations has not been solved so far. It is natural to connect the stationary solutions of the Bogoliubov hierarchy equations with the states of an infinite system of particles (i.e., probability measures defined on the phase space) which are invariant with respect to time evolution. In the cases where the dynamics on the phase space has been constructed it is possible to demonstrate that any invariant measure satisfying further conditions of a general type generates a stationary solution of the Bogoliubov hierarchy equations. On the other hand, an immediate analysis of stationary solutions of the Bogoliubov hierarchy equations (unlike the invariant measures) does not require, in general, the use of such delicate dynamical properties as clustering. Apparently, the point is that only functions of a finite (although not bounded) number of variables enter the Bogoliubov hierarchy equations. One can consider these functions (the correlation functions) as integral characteristics of a measure and their behavior must not necessarily show the influence of singularities arising from the motion of individual configurations of an infinitely large number of particles. Thus, the approach based on the Bogoliubov hierarchy equations seems not only to be more general but also more natural from the physical point of view. We shall also discuss the derivation of the kinetic equation for a classical system of hard spheres based on an infinite sequence of equations for distribution functions in the Bogoliubov (BBGKY) hierarchy case. It is known that the assumption of full synchronization of all distributions leads to certain problems in describing the tails of the autocorrelation functions and some other correlation effects with medium or high density. We shall discuss how to avoid these difficulties by maintaining the explicit form of time-dependent dynamic correlations in the BBGKY closure scheme. The question usually is how to obtain hydrodynamic equations (Euler, Navier-Stokes) from the Liouville-type equations of Hamiltonian mechanics, classical or quantum. The original idea was due to Ch. Morrey (1956) who introduced a concept of a hydrodynamic limit and was able to formally derive 338 Stochastic Reality an Euler equation from the classical Liouville equations (more precisely, from the corresponding BBGKY hierarchy). However, Morrey had to make some assumptions about the long-term behavior of the motion, and this included a statement on ergodicity, in the sense that all 'reasonable' first integrals are functions of the energy, linear momentum and the number of particles. Since then, the idea of a hydrodynamic limit became very popular in the literature and has been successfully applied to a variety of models of (mostly stochastic) dynamics relating them to non-linear equations. However, in the original problem there was no substantial progress until the work by S. Olla, S. R. S. Varadhan and T. Yau (1992) where Morrey's assumptions were replaced by introducing a small noise into the Hamiltonian (which effectively kills other integrals of motion), and a classical Euler equation was correctly derived. In some quantum models (e.g., of the Bohm-Madelung type) the hydrodynamic limit can be rigorously demonstrated. The resulting Euler-type equation is similar to the one that arises for the classical counterpart of these models. This suggests that perhaps classical and quantum hydrodynamic equations must look similar if they are written for local densities of 'canonical' conserved quantities (the density of mass, linear momentum and energy). 7.6 Chaotic Behavior We have already discussed chaotic systems in Chapter 4, in connection with nonlinear dynamics and elementary stability theory, where "chaos" became the code word for nonlinear science. In that case, we can speak of deterministic chaos, a transition to randomness due to sensitive dependence on initial conditions - experimentally or computationally indistinguishable initial states eventually evolve to states that are far apart (in the phase space). Chaotic dynamics bridges regular evolution of complex systems with the random one. However, here, in the spirit of stochastic reality, we shall put more accent on probabilistic features of chaotic behavior. In Chapter 2 we have seen that the notion of time is crucial for dynamical systems, which evolve into the future given the data related to the actual state specified at some particular moment. One might have noticed the drastic difference between the steady-state situations, for instance Kepler's laws, and evolutionary processes: one is unable to determine the state or geometry of a dynamical system unless one stipulates some input data at a specified moment of time. To obtain the steady-state picture, e.g., Kepler's laws, one has to exclude time so, strictly speaking, no dynamics is left or it is hidden deeply enough to be unnoticed (for example, by ancient astronomers). The notion of time becomes relevant again only when one starts considering the stability of Kepler's static celestial geometry. We shall see below that this consideration leads to very powerful results such as the famous KAM (Kolmogorov-ArnoldMoser) theorem. One can, in principle, test all steady-state solutions on stability and even chaoticity; the elliptic trajectories of the planets in the Solar System may prove unstable or even chaotic after such an analysis. However, the perennial question is: what would be the timescale during which the instability or chaos might evolve? By the way, if and when such chaos evolves, 7.6 Chaotic Behavior 339 the time-reversal invariance of the regular dynamical behavior of Newtonian gravitational motion and consequently Kepler's laws would be lost. Let us, for simplicity, consider first classical statistical mechanics. In fact, a consistent exposition of statistical mechanics requires a quantum treatment anyway, but to elucidate the main points we shall for the time being stay with classical model. We have seen that classical statistical mechanics typically considers systems of N structureless particles (material points) in a ddimensional space moving along classical (Newtonian) trajectories in continuous time t. The number of particles is usually considered very large, but still finite so that the concept of the phase flow widely used in classical mechanics can be applied without excessive mathematical precautions. Recall that the term "flow" is usually understood in classical mechanics as a continuous one-parametric group of transformations of the phase space (on the phase manifold). In most classical problems of physics, parameterd= 1,2,3. Here, I would like to recall that in classical statistical mechanics the entire phase space of the many-particle system is called Γ -space, which contains 2Nd dimensions whereas the phase space of a single particle is commonly known as a μ-space. So, we see that in classical mechanics and hence in classical statistical mechanics the dimensionality of the phase space and the number of degrees of freedom differ by the factor of 2 which is, of course, a trifle but still not very convenient from the positions of dynamical systems theory. Recall that in this latter case one usually does not discriminate coordinates and momenta, they are equal components of the vector function u= (u1, ... , un) whose evolution governed by the vector equation dudt ⁄ = f(u, t) is considered (see Chapter 4). In statistical mechanics, n= 2Nd. This is of course trivial, but one must be careful. Notice that even for the simplest possible statistical system of N structureless material points some nontrivial questions arise, for instance, how can one reconcile the time-reversible equations of motion for a single particle with the obviously irreversible macroscopic properties of a manyparticle system? This paradox, included in the collection of Perennial Problems, has been reflected in the metaphor of the "arrow of time" (see Chapter 9). In quantum theory there exist some additional reasons for the breach of time-reversal symmetry, one of them being the issue of quantum measurement that is not fully clarified - at least within the unitary scheme of orthodox quantum mechanics. However, this issue is a special problem, and its discussion would require much space so that we shall not deal with the arrow of time in the present manuscript. Notice only that in dynamical systems theory, which can be considered as an intermediate between mechanics and macroscopic physics, the arrow of time i.e., the breakdown of symmetry between past and future appears quite naturally in view of instabilities and their limit case -chaotic behavior. In the quantum world, the reason for the breach of time reversal symmetry is often ascribed to the process of quantum measurement, which we shall not consider here in detail. When one is solely focused on unitary quantum dynamics, one typically treats quantum evolution as completely 340 Stochastic Reality time invertible. This is, however, a somewhat naive simplifying assumption since the Hilbert space where quantum dynamics occurs is not necessarily filled with complex conjugated states (of thermo-isolated systems). In fluids, manifestations of stochasticity are generally called turbulence, in this case chaotic behavior corresponds to the so-called Lagrangian turbulence (see, e.g., [305]). It is, however, important that both the advection and, more generally, the fluid flow dynamics within the Lagrangian framework can, due to the possibility of representing them in the dynamical systems form, be treated as Hamiltonian systems. The role of the Hamiltonian in fluid dynamics is in the two-dimensional case played by the deterministic stream function Ψ(x∥, t) ([306,307]), where x∥= (x1, x2) ∈Ω2 . In a particular case when the 2d domain Ω2 occupied by the flow lies in the Euclidean plane with Cartesian coordinates x1 = x, x2 = y on it, we have the Hamiltonian (symplectic) form of the motion equations ẋ = vx(x, y, t) = ∂Ψ/ ∂y, ẏ = vy(x, y, t) = -∂Ψ/ ∂x. Here the domain Ω2 ≡Ω2+1 of variables (x, y, t) corresponds to the extended phase space. In other words, the nonstationary planar flow is described by the Hamiltonian dynamical system with 1.5 degrees of freedom. Notice, however, that the stream function Ψ(x, y, t) is a somewhat strange Hamiltonian: a pair of "canonical" variables (x, y) are just local coordinates, and they are not canonically conjugate as, e.g., coordinate and momentum in mechanics. 8.1 Interaction of Electromagnetic Radiation with Matter. General Concepts 341 8 Radiation and Matter This chapter is devoted to a loose description of theoretical approaches to the interaction of radiation, both electromagnetic and corpuscular, with matter. Other possible types of radiation are not considered in this chapter (and in the book in general). The term "matter" implies in the context of the present chapter a macroscopic (averaged) ensemble of point electrons and nuclei combined, which constitute the medium - it may be, for example, gas, plasma or solid state. Averaging, to be discussed below, occurs at distances substantially larger than interparticle (n-1/3) or atomic ones (≅10-8 so that nuclei, electrons and other atomic particles can really be regarded as pointlike. Thus, the described theory is inapplicable at smaller distances. I shall generally regard the matter as being non-relativistic, only some restrained comments will be made about relativistic effects in connection to the passage of fast charged particles through the matter. Interaction of hard gamma radiation (ħω≥2mec2, me= 9.1 ∙10-28g is the electron mass) with matter is not treated in this book, therefore creation of particles (pair production) is ignored. To limit the scope of the book as well as for simplicity sake, I also do not discuss creation of pairs by superstrong electromagnetic (laser) fields, although such effects may be important considering the power densities achieved today ( P> 1021W/cm2 , see, e.g., http://space.newscientist.com/article/dn13634-powerful-laser-isbrightestlight-in-the-universe.html). In general, relativistic quantum phenomena typically studied in quantum field theory are largely ignored in this chapter. Here, the matter is described by the means of standard nonrelativistic quantum mechanics, while the electromagnetic field is treated classically using the Maxwell equations (without second quantization, see Chapters 5, 6). Such an approach is usually called semiclassical. The chapter is divided into two parts, one related to the electromagnetic field (mainly radiation) in material media, the other to the passage of particles (mostly charged ones) through the matter. Since the treatment of the interaction of corpuscular radiation with matter to a large extent uses the concepts developed in electromagnetic (EM) theory, I decided to put first an overview of the interaction of the electromagnetic field with material media. 8.1 Interaction of Electromagnetic Radiation with Matter. General Concepts A general description of mechanisms underlying the interaction of electromagnetic radiation with matter is based on the concept of electromagnetic response and is performed, respectively, in terms of response functions. The notion of response functions generalizes the description of electromagnetic response in terms of susceptibilities (both linear and nonlinear), permittivity, permeability, conductivity, Coulomb screening, mean and local field, etc., all listed notions being specific cases of 342 Radiation and Matter electromagnetic response functions. The quantum mechanical description of matter allows one to construct a linear theory which, assuming that spectral properties of the matter are known, enables us to determine its response to an external field (e.g., in terms of susceptibilities, even nonlinear ones). On the base of such a theory, simple models using specific examples of the media can be constructed and some primitive representations of otherwise complex systems (gas, plasma, solid, liquid, etc.) can be discussed and tested. Nevertheless, one cannot imagine that the problem of field-matter interaction can be easily decomposed into two subproblems, one related to the matter and the other to the field. The response of the matter to an electromagnetic field is usually a self-consistence problem, which implies both the effect of the field on the matter and a modification of the external field by the matter. The self-consistence approach is widely used in macroscopical electrodynamics and optics (both linear and nonlinear), see Chapter 5. In these disciplines the so-called material constants are phenomenologically introduced. Ideally, these phenomenological material constants - dielectric permittivity, magnetic permeability μ171, conductivity σ, refractive index n, susceptibilities χ(1) (linear) and χ(k), k= 2,3, ... - should be determined as functions of the fundamental constants alone: the mass me and charge e of the electron, the velocity of light c, Planck constant ħ, atomic numbers Z, N, and atomic mass M. In the statistical description of matter, the temperature T is also an essential parameter. In the energy scale, however, T is a small parameter172 so that the factor exp(-ħω/T), where ħω= EnEm, En, Em - atomic energy levels, arising from the averaging over the Gibbs ensemble (see below; see also Chapter 7) is often quite small. We have already discussed in Chapter 5 the description of the electromagnetic (EM) field within the framework of classical theory. Exact values of electric E(r, t) and magnetic H(r, t) components of the EM field at some space-time point (t, r) = (x0, x1, x2, x3) satisfy the Maxwell equations (see Chapter 5), where inhomogeneous terms proportional to charge and current densities, lρ(r, t) = ∑eaδ(r-ra(t)) a (8.1) j(r, t) = ∑eavaδ(r-ra(t)) a , (8.2) 171 Magnetic permeability is not necessary when a spatial dispersion is taken into account, see below. It is interesting that the term "magnetic permeability" was probably introduced by O. Heaviside. 172 In this book, I shall measure the temperature in energy units so that, e.g., room temperature, T~102K, would correspond to T~10-14erg~10-2eV ≪mee4/ħ2 ≈4.3 ∙ 10-11erg ≈27.2eV, where mee4/ħ2 is the characteristic atomic energy value constructed from fundamental constants. The physical meaning of this inequality is that a typical quantum system at room temperature remains in the ground state since thermal excitation is insufficient to overcome the spacing between atomic energy levels. This weakness of thermal excitation is, however, not true for quantum systems with a dense energy spectrum such as complex molecules. Furthermore, in a hot plasma T is often higher than mee4/ħ2. 8.1 Interaction of Electromagnetic Radiation with Matter. General Concepts 343 where va= ṙa. These densities automatically obey the continuity equation ∂ρ ∂t+ divj= 0. (8.3) We have already discussed the continuity equation and its meaning as the conservation law. Recall that in a compact domain the continuity equation for some quantity173 states that the amount of this quantity in the considered domain varies to the extent of the quantity gain or loss through the domain boundaries. In the case of charge conservation, however, the meaning of the continuity equation is much deeper: it is connected with gauge invariance (see Chapter 5). This invariance manifests itself, in particular, by the fact that the continuity equation for electrical charges and currents is not independent of the Maxwell equations (see [39], §29). One can even say that the continuity equation for electrical charges and currents is a trivial mathematical consequence of the Maxwell equations; a more accurate statement would be that the Maxwell equations are adjusted to or agreed with the continuity equation, that is to say they are constructed in such a way as to automatically provide charge conservation. A model of some experimentally produced setting in radiation-matter interaction, accounting for quantum effects, would consist in the treatment of a quantized electromagnetic field created by atomic sources and propagating in the presence of macroscopic bodies or surfaces. For example, such a scheme can be applied for modeling typical quantum-optical experiments when radiation passes through various optical instruments. The latter may be active (i.e., nonlinear) or passive, anyway in the standard approach they may be regarded as dielectric bodies or surfaces having a certain type of geometric or physical complexity. This complexity can be described in mathematical terms and is typically accounted for by introducing the dielectric function that depends both on radiation features, e.g., frequency and on spatial coordinates. In spite of many attempts to construct a fully quantum theory of the medium response to an electromagnetic field (see, e.g., one of the latest papers [260], see also below), the approach based on the dielectric function is by necessity semiclassical, being heavily based on the classical concept of the field in a macroscopic medium. One may remark in passing that, e.g., nonlinear optics in general seems to be an essentially classical (or semiclassical) discipline because the coupled evolution equations for the light and matter fields are extremely difficult - and perhaps unnecessary - to solve. Classical electrodynamics of polarizable media seems to be quite adequate for contemporary nonlinear optics, at least for its applied part. 173 This quantity can be of arbitrary nature, not only the charge but, e.g., the number of cars on some part of the road. 344 Radiation and Matter 8.2 Field Energy Dissipation in Matter dQ dt= ω 8πεij ′′E∗iEj (8.4) We can illustrate this general formula using a simple model of the field energy absorption by an electron driven by the field of an electromagnetic wave and from time to time colliding with other particles of the medium174. We shall assume that such collisions are only elastic i.e., the total kinetic energy (treated as a quadratic form) of colliding particles is conserved in each individual collision and no excitation occurs. In other words, it is assumed that energy is not dissipated into internal degrees of freedom. Only the momentum of the colliding particles changes, with such changes taking place abruptly i.e., the collision time τ≪2π/ω, where ω is the characteristic frequency of the electromagnetic wave (actually, we may assume ωτ≪1). The assumption of negligibly short collision time is standard, but not always correct. When the electromagnetic field cannot be conveniently represented by a quasimonochromatic wave centered around a characteristic frequency (for example, the particle is driven by the field of an ultrashort laser pulse), then the assumption of abrupt collisions looks as τ≪τp, where τp is the characteristic time of pulse growth. Imagine for simplicity that electrons collide only with the heavy centers of the medium namely atoms, ions, or molecules whose masses Ma are much larger than the electron mass m. Usually, the parameter m/Ma is very small, 10-3 -10-4 which allows one to ignore the motion or recoil of heavy scatterers with high accuracy. Then, since collisions are assumed elastic, the energy of the electron in an individual collision is conserved. Energy of the field is dissipated as an averaged stochastic process due to multiple elastic collisions, this dissipation leading to a gradual increase of the mean energy of electrons i.e., to the heating of the electronic subsystem. For an electron moving in a high-frequency harmonic field E= E0 cos ωt, we have ṗ = E0 cos ωt so that p(t) = p0 + eE0 ωsin ωt (8.5) and the energy instant value in the field is E(t) = p2(t) 2m= 1 2m(p0 2 + 2e ω(p0E0 sin ωt+ e2 ω2 E0 2 sin2 ωt)). This instant energy oscillates near its mean value, which reflects the fact that the particle exchanges electromagnetic energy with the field. In the quantum language, this corresponds to constantly exchanging photons, some of them being real other virtual. In a high-frequency harmonic field, we are interested 174 This example is discussed in more details in [222]. 8.2 Field Energy Dissipation in Matter 345 of course not in the instant value of the particle energy, but in its energy averaged over many field periods. This average energy is given by E- ̅̅̅̅ = p0 2(t) 2m+ e2E0 2 4mω2 = E0 + Tp, where E0, p0 are the initial energy and momentum of the electron, Tp= e2E2 4mω2 ⁄ is its average kinetic energy in the oscillatory motion (see, e.g., [23], 30); here lower index p stands for "ponderomotive", see the next section. The minus sign at the energy symbol signifies "energy before collision". To simplify the model, we shall neglect small-angle scattering, assuming all individual collisions to be "strong" i.e., resulting in drastic, large-angle deflections. In the limiting case, the scattering angle θ in such collisions reaches π which corresponds to a complete reflection of the electron momentum, p(t0 + τ) = -p(t0 -τ) where t0 denotes the moment of time when the collision occurs. Using (8.5), we have p0 + eE0 ωsin ω(t0 + τ) = -p0 -eE0 ωsin ω(t0 -τ) or, exploiting the assumption ωt≪1, p0 + eE0 ωsin ωt0 = -p0 -eE0 ωsin ωt0 or, inserting p(t) -eE0 sin ωt/ω into the left-hand side instead of p0, we get for the particle momentum in the wave after scattering by a heavy center p(t) = -p0 -2eE0 ω sin ωt0 + eE0 ωsin ωt. Using this expression, we can calculate the electron energy in the wave after collision E(⊔)+ = p2(t) 2m= 1 2m((p0 + 2eE0 ω) sin ωt0 -eE0 ωsin ωt) 2 . This equation also shows that the electron is exchanging energy with the wave. Making elementary transformations and averaging over many periods of the wave, we obtain E+ ̅̅̅̅ = 1 2m((p0 + 2eE0 ω) sin ωt0) 2 + e2E0 2 4mω2 = E- ̅̅̅̅ + 2e mωp0E0 sin ωt0 + 2e2E0 2 mω2 sin2 ωt0 346 Radiation and Matter where E- ̅̅̅̅ = E0 + Tp (see above). The energy transfer between the wave and the particle is expressed as the difference ∆E= E+ ̅̅̅̅ -E- ̅̅̅̅ = 2e mωp0E0 sin ωt0 + 2e2E0 2 mω2 sin2 ωt0 . (8.6) Notice that this energy transfer depends on the collision time t0, more exactly, on the phase of the wave in this moment. This is the manifestation of the fact that waves and particles exchange energy just because the collisions randomly perturb the phase synchronism between them. In particular, the particle may both take energy from the wave and give it to the field, depending on phase ωt0 as well as the mutual orientation of vectors p0 and E0. One can also notice that when writing the field acting on a particle in the form E= E0 cos ωt, we disregarded the field phase in the initial moment t= 0; it would be more correct to write E= E0 cos(ωt+ φ). For some problems, taking this initial phase into account may be essential [226]. We shall briefly discuss some effects, in particular systematic drift associated with initial phase of the wave, in the next section. If we, however, ignore the effects connected with the phase of the wave, we can average (8.6) over all phases to obtain ∆E ̅̅̅̅ = e2E0 2 mω2 = 4Tp. (8.7) This average energy transfer from the field to the particle is strictly positive, which means that the field is losing, and the particle is gaining energy in a typical large-angle elastic collision. If the frequency of such collisions is ν, then the attenuation rate of an electromagnetic wave in the medium is -dE dt= e2E0 2ν mω2 = 4νTp where the collision frequency depends in general on the particle momentum ν= ν(p) and may be calculated within the kinetic theory. Here, however, we treat the collision frequency as a purely phenomenological quantity: using a microscopic value for it within our crude estimates would be unacceptable due to excessive accuracy. In most cases, ν= ν(|p|) = ν(p). We may use the phenomenological collision frequency to find the dielectric permittivity and to estimate response functions for simple models. The above semi-qualitative treatment demonstrates a heuristic usefulness of the models based on the classical motion of individual particles. Some experience shows that many intricate problems of radiation-matter interaction can be successfully understood using the simple language of classical single-particle models. Below we shall see more on this. 8.3 More on Charge in Electromagnetic Fields 347 8.3 More on Charge in Electromagnetic Fields We may begin the study of radiation-matter interaction with the simplest problem when matter is reduced to a single charged particle. The model of individual particles interacting with an external electromagnetic field reflects the situation when the matter (usually it is plasma or plasma-like medium) has a rather low density so that the average energy of interaction of the particles with the medium as well as between the particles is much lower than the energy of their individual motion in the electromagnetic field. The problem of a single charge in the field is usually studied already in the first chapters of textbooks on classical electromagnetic theory (see, e.g., [84], ch. 3); it may be in fact rather intricate even at the classical level, and we, while discussing this problem, shall emphasize certain details that require slightly more knowledge than needed for reading initial-stage textbooks on electromagnetism. Thus, the treatment of the discussed problem can be both classical and quantum, and it is interesting that the classical treatment is in most cases sufficient for the description of nearly all practically interesting applications. A little further we shall discuss the applicability criteria of the classical approach to describing the behavior of free charges in an electromagnetic field, now let us start with the naive Lorentz model: dp dt= e[E(r, t) + 1 c(v× H(r, t))] , (8.8) where p is the momentum of charge e moving in the electromagnetic field (E,H). Notice that the Lorentz force contains the particle velocity which is a kinematic variable and not the momentum which is a dynamic variable. This physical fact is nontrivial and may be regarded as a consequence of countless experiments. To make the vector equation (8.8) (system of equations) closed, one ought to add the relationship between p and v. Recall from classical mechanics (see also Chapter 4) that the relationship between momentum pj considered as a variable in phase space and the rate of change vi= ẋ i is not necessarily as trivial as it is assumed in the Newtonian model and, e.g., in more advanced Lagrangian version of mechanics is given by pi= ∂L∂ẋ i ⁄ where L≔L(xi, ẋi, t) is the Lagrangian function which may be, in principle, an arbitrary twice continuously differentiable function. So the differential relationship between momenta and velocities does not necessarily give the simple linear connection pi= mijẋ j, customary for Newtonian mechanics (here mij is the mass tensor which, as we have seen, can assume the role of metric tensor). One may observe the symptoms of more intricate than linear relationship between momenta pi and velocities ẋ j already in relativistic mechanics where such a relationship is nonlinear p= γ(v)mv= mv (1 -v2 c2) 1/2 348 Radiation and Matter and can be made linear only when the relativistic effects corresponding to an expansion over small parameter β2 = v2 c2 are disregarded. In the nonrelativistic limit (β→0), p= mv and the Newton-Lorentz equation (8.8) takes a simple form convenient for calculations: mv̇ = e[E(r, t) + 1 c(v× H(r, t))] . One can see that the Lorentz force is of the first order in β= vc ⁄ , v= |v|, which is an important fact to be used further. 8.3.1 Interaction of a Particle with a Standing Wave Let us, as a primary example, consider the motion of a free charged particle in the field of a standing wave {E(r, t) H(r, t)} = {E0(r) cos ωt H0(r) sin ωt} A standing wave may be, e.g., formed by two almost monochromatic ( ∆ωω ⁄ ≪1 where ∆ω is the effective spectrum width) waves with frequencies ω and ω′ ≈ω traveling in opposite directions so that their wave vectors k and k′ satisfy the relation k+ k′ = 0. Then the surfaces of equal phase are planes which are fixed in space. The particle, in general, may cross the standing wave structure at an arbitrary angle θ to such planes so that both longitudinal and transversal momentum components, p∥ and p⊥, are not necessarily small compared to the particle momentum p. Actually, the amplitudes E0 and H0 are smooth envelopes E0(r, t) and H0(r, t) depending in general on both time t and position r. We shall, however, assume that E0(r, t) and H0(r, t) change very little when we transit from t to t+ 2πn/ω (assuming also n finite and not very large). Then we may write E0(r, t) ≈E0(r) and H0(r, t) ≈H0(r). The change of the time-harmonic dependence from cos ωt in the electric field to sin ωt in the magnetic field (the π/2 phase shift) is consistent with the Maxwell equation ∇× E+ 1 c ∂H ∂t= 0. In these relationships the initial phase has been put to zero as a seemingly inessential parameter. This is not always the case. A little below we shall discuss the role of the phase in the expressions E(r, t) = E0(r)cos(ωt+ φ) and H(r, t) = H0(r)sin(ωt+ φ) and find out that the nonzero phase may signify an additional drift. In the above expressions the spatial amplitudes E0(r) and H0(r) are not independent since due to Maxwell's equations ∇× E0(r) = -kH0(r), k= ω/c. When the electromagnetic fields accelerating the charge are not very strong (see Chapter 5), the motion remains nonrelativistic so that the 8.3 More on Charge in Electromagnetic Fields 349 magnetic field term in (8.8) can be disregarded and the equation of motion takes the form mr̈ = eE0(r) cos ωt. Technically, this is a nonlinear ordinary differential equation in which variables r and t can be separated. There are a number of ways how to solve this equation; we shall start with the simplest approximation method when the external field is initially regarded as homogeneous (coordinateindependent), at least on a scale of the particle displacement due to the Lorentz force i.e., E0(r) = E0(r0 + δr(E)) ≈E0(r0). Here r0 is the particle's initial position. In other words, we may think that during an oscillation period the particle passes the distance on which field amplitudes E0(r) and H0(r) do not vary substantially, δ≔|δr| h ⁄ ≪1 , where h is the characteristic length of the spatial field variation. Then, in the zeroth approximation in δ we have ṙ = v= eE0 mωsin ωt+ v0, where v0 is the particle initial velocity and E0 = E(r0) is the constant vector. Further, r-eE0 mωcos ωt+ v0t+ r0 ≡+δr. Since we are considering the nonrelativistic situation175, we may assume that the field-induced displacement δr= δr(E) ≈δr(E0) is small compared with the field's characteristic scale i.e., with the wavelength in the case of a harmonic wave. Indeed, the displacement |δr| = δr reached over a halfperiod of the wave is δr~eE/mω2~v/ω so that δrλ ⁄ ~ vc ⁄ ≪1 i.e., in nonrelativistic problems the field-induced displacement is small compared to the wavelength. Of course, for the fields rapidly varying in space this assumption may become invalid (see also below). The particle of mass m (for definiteness, we shall talk about electrons) oscillates in the harmonic field with frequency ω, gaining average kinetic energy E= 1 2 mv2 ̅̅̅̅̅̅ = e2E2 4mω2 175 The criterion of validity for a nonrelativistic approximation in classical (non-quantum) theory is eE0 mωc ⁄ ≪1 which shows that, e.g., electrons may attain relativistic velocities for electric fields accelerating the particles to their rest energy over the distance of the order of the wavelength, eE0λ~mc2 ≈0.5MeV. If quantum theory is considered, the classical theory for a free electron interacting with the electromagnetic field is valid when the energy of a transferred or radiated electromagnetic quantum is small compared to the rest energy, ħω≪mc2. 350 Radiation and Matter This is a standard forced motion of a particle under the influence of an external harmonic field. Motion in oscillating fields, despite its apparent simplicity, contains a number of intricacies, and we shall try to discuss some of them in detail. For example, one usually tacitly assumes that it is only the magnetic field that can confine charged particles. Wrong! One may be surprised to find out that charged particles may be confined also by a harmonically oscillating electromagnetic field, provided it is spatially inhomogeneous, such a confinement occurring regardless of the charge sign. So an inhomogeneous electromagnetic wave can be used to trap, accelerate, and control charged particles. To better understand some interesting effects accompanying the interaction of electromagnetic fields with free charged particles, we shall as usual start from simplified models gradually incorporating new features in the equations. We have just obtained the simplest possible solution for the particle moving in the field of a monochromatic (harmonic) standing wave in the zero-order in relativistic parameter β= v/c. In this approximation no influence of the magnetic field is taken into account although the magnetic field is present in any electromagnetic wave. Nevertheless, if a particle moves nonrelativistically, neglect of the magnetic field in the wave which drives the particle is fully justified. Let us now see what happens in the first approximation in β, in case the scale of spatial inhomogeneity of the field is determined by the wavelength λ= 2π/ω (in general, however, the field amplitudes may have other spatial scales). Then we have to retain the term with the magnetic field in the Lorentz force. Expanding slowly varying amplitudes E0(r) and H0(r) near point r0 which may be interpreted as the initial position of the electrons, we have in the first approximation in inhomogeneity parameter δ= eE0(r0)κ/mω2, where κ characterizes the spatial inhomogeneity of the field: E0(r) ≈E0(r0) + (δr∇)E0(r0) or, in component notation, E0 j(xi) ≈E0 j(x0 i) + δxi∂iE0 j(x0 i). The length s= κ-1 denotes the distance on which amplitudes E0 and H0 change significantly. Such a distance is often thought to coincide with the wavelength of the field; this is, however, a very particular case. Only in this special case expansion on the inhomogeneity parameter coincides with that on relativistic parameter β= v/c≈eE0 mωc ⁄ . Now insert these expansions into the motion equation, using the obtained expression for δr: δr(E0) = v0t-eE0(r0) mω2 cos ωt. 8.3 More on Charge in Electromagnetic Fields 351 Notice that the quantity A0 ≔eE0 mω2 ⁄ has the meaning of the particle oscillation amplitude in a monochromatic field, when all other factors affecting the particle motion, e.g., collisions, are neglected. Later we shall take collisions into account and see that in this case the amplitude for a harmonic field E(r, t) = E0e-iωt+ E0 ∗eiωt will take the form A0 ≔eE0 mω(ω+ iν) ⁄ corresponding to the solution of the motion equation r(t) = -A0e-iωt-A0 ∗eiωt In other words, the particle amplitude in an electromagnetic field will acquire additional terms (in real representation of the field) or becomes complex (in complex representation), which physically corresponds to the losses of electromagnetic energy and to phase shift i.e., retardation of the electromagnetic response of an individual particle to an external electromagnetic field. All these issues are closely connected with the elementary theory of dielectric permittivity (see Chapter 5). Now we get mr̈ = eE0(r0) cos ωt+ e(v0∇)E0(r0)t-e2 mω2 (E0(r0)∇)E0(r0) cos2 ωt + e2 mω2 ω c(E0(r0) × H0(r0)) sin2 ωt + e c(v0 × H0(r0)) sin ωt where for the Lorentz force we used: e c(v× H(r, t)) = e c[(eE0(r0) mω sin ωt+ v0) × H0(r0) sin ωt]. By using the Maxwell equation H0(r0) = -(c/ω)curlE0(r0) we get after averaging over the field period: mr̅̈ = et(v0∇)E0(r0) - e2 2mω2 (E0(r0)∇)E0(r0) + e2 2mω2 ω c(-c ω) E0(r0) × (∇× E0(r0)) or mr̅̈ = et(v0∇)E0(r0) - e2 4mω2 ∇E0 2(r0). (8.9) 352 Radiation and Matter Here the vector identity E0 × (∇× E0) = 1 2 ∇E0 2 -E0∇E0 was used. One can make several remarks concerning this standard procedure176 of obtaining the average force (8.9) acting on particles moving in the field of a monochromatic wave with spatially inhomogeneous amplitude. This force is usually called ponderomotive; in the Russian literature it is mostly known as the "Miller force", by the name of a prominent physicist belonging to the wellknown Nizhni Novgorod (formerly Gor'ki) radiophysical scientific school. This force has a field gradient character, attracting a charged particle irrespective of the charge sign to low-intensity field regions and repelling it from the regions of strong field. The term proportional to time t corresponds to drift motion of the particle177. The initial time-point t0 determines the phase of a particle when it starts the motion in the electromagnetic field. This phase may play an important role for the drift component of the motion (see below). One may notice that the dynamics of a charged particle is described in terms of the average value of the particle position r̅ whereas the ponderomotive force is calculated at its staring point r0. In most cases, the difference between E0(r0) and E0(r̅) is inessential due to slow variation of the amplitudes, and taking such a difference into consideration would mean an excessive accuracy level. However, two pairs of values, r̅, r0 and, correspondingly, E0(r0), E0(r̅) may differ significantly when systematic drift component is taken into account. This fact may lead to somewhat unexpected effects related to anisotropy of drift motion. Usually, when ponderomotive effects such as repelling particles from strong field regions are considered, field is represented by axial-symmetric intensity distributions such as in idealized laser beam and drift components breaking axial symmetry are disregarded. In this case, the ponderomotive force only depends on the particle's distance from the laser beam axis and, by the way, does not depend on the field polarization. Taking drift into account would mean that, e.g., the energy of the particles gained owing to ponderomotively pushing them out of the strong field region of the laser beam would depend on the scalar product between the particle velocity and the field polarization vector. The corresponding calculations are simple but lengthy178, therefore I do not bring them here (see also below). We can now generalize the formulas for the ponderomotive force acting on a particle (electron), taking into account collisions with other particles. This situation takes place, for example, in plasma. The simplest - 176 This is actually an iteration method that can be applied also in more general situations, for example, when collisions should be taken into account, see below. 177 It would be more appropriate to write t-t0 instead of t which may be important when many particles, e.g., a beam, are considered. 178 One may write r0 as r̅ + (r0 -r̅) ≡r̅ + δ0 and expand (8.9) over δ0. 8.3 More on Charge in Electromagnetic Fields 353 phenomenological - way to take collisions into account consists in adding the friction term to the equation of motion: mr̈ + mνṙ = e(E(r, t) + 1 c(ṙ × H(r, t))) . (8.10) Here ν is the collision frequency i.e., the quantity determining the rate of momentum interchange in multiple collisions. It is clear that in general ν is a function of the particle velocity v= |ṙ|, ν= Nσt(v)v= ν(v) , where N denotes the density of scatterers and σt is the transport cross-section σt(v) = 2π∫dσ dΩ (v, θ)(1 -cos θ) sin θdθ π 0 , dσ dΩ (v, θ) = |f(q)|2 is the differential cross-section, f(q) is the scattering amplitude, q∈S2, q∙q-1, i.e., the vector q belongs to the unit sphere S2 (see any textbook on scattering theory or Chapter 5 of this book). Recall that it is the transport cross-section and not the total cross-section σ(v) = 2π∫dσ dΩ (v, θ) sin θdθ π 0 that determines the collision frequency (and also allows one to estimate the mean free path of a particle) because it accounts for the fact that the momentum transfer in elastic collisions reaches a maximum at large scattering angles (head-on collisions) and tends to zero for small-angle scattering (large impact parameters).179 Although the collision frequency depends on velocity, we may understand by ν in our phenomenological treatment dealing with effective quantities some average collision rate, ν(v) ̅̅̅̅̅̅ = ν, where averaging is carried out over all relative velocities of our particle with respect to scattering centers of the medium. In reality, this averaging can be accurately performed only in the kinetic theory; here we have to be satisfied with the qualitative considerations. Now, let us, as before, start with a zero-order approximation in v/c omitting the Lorentz force: mr̈ + mνṙ = e(E0(r) cos(ωt+ φ)), (8.11) where we have included the phase of the field φ. The reason why this phase may deserve to be included in the motion equations will be clear from the further discussion. The zero-order equation is a linear ODE and can be easily integrated. Modern humanity integrates differential equations with the help of "computer algebra" products such as Maple or Mathematica which are 179 One can readily see that the velocity change in every act of scattering accompanied with a deflection by the angle θ is ∇v= v(1 -cos θ). 354 Radiation and Matter great products indeed, but it is sometimes even faster to integrate simple equations in the good old manner - with bare hands, pen and paper. (Here, I must perhaps apologize for bringing the calculation details that might seem excessive to an experienced reader.) One may look for a special solution to the above equation in the form rs= eE0 m[acos(ωt+ φ) + bsin(ωt+ φ)] then, inserting this expression into the equation (8.11), we get the following system of linear equations for coefficients a and b: {-aω2 + bνω= 1 aνω+ bω2 = 0 which gives a= - 1 ω2 + ν2 , b= -aν ω= ν ω(ω2 + ν2) so that rs= eE0 m(ω2 + ν2) (-cos(ωt+ φ) + ν ωsin(ωt+ φ)) = eE0 m(ω2 + ν2) cos(ωt+ φ+ χ) cos χ , where χ≡arctan(νω ⁄ ) and cos χ= ω (ω2+ν2)1/2. Finally, rs(t) = eE0 cos(ωt+ φ+ χ) mω(ω2 + ν2)1 2 ⁄ . (8.12) The general solution to (8.11) is r(t) = A+ Be-νt+ rs(t) = A+ Be-νt-eE0 cos(ωt+ φ+ χ) mω(ω2 + ν2)1 2 ⁄ (8.13) where the constants A and B should be determined from the initial conditions, e.g., r(t0) = r0 and ṙ(t0) = v0. We may, for simplicity, put t0 = 0 (see the footnote on the preceding page) and obtain B= -v0 ν+ eE0 sin(φ+ χ) mω(ω2 + ν2)1 2 ⁄ and 8.3 More on Charge in Electromagnetic Fields 355 A= r0 -B+ eE0 cos(φ+ χ) mω(ω2 + ν2)1 2 ⁄ = r0 + v0 ν+ eE0 mω(ω2 + ν2)1 2 ⁄ [1 ωcos(φ+ χ) -1 νsin(φ+ χ)] Using the definition χ≡arctan(νω ⁄ ), we have A= r0 + v0 ν-eE0 sin φ mνω and B= -v0 ν+ eE0 sin(φ+ χ) mν(ω2 + ν2)1 2 ⁄ so that the general solution to the zero-order equation describing the motion of a particle in the standing wave may be written as r(t) = r0 + v0 ν-eE0 sin φ mνω -v0 νe-νt+ eE0e-νtsin(φ+ χ) mν(ω2 + ν2) 1 2 + rs(t) (8.14) where rs(t) = eE0 cos(ωt+ φ+ χ) mω(ω2 + ν2)1 2 ⁄ Now we can write the first-order equation like we did before, when collisions were not taken into account, as r̈ + νṙ = e m(E0(r0) cos(ωt+ φ) + (r∇)E0(r0) cos(ωt+ φ)). (8.15) Here, as in the higher-order terms of the right-hand side expansion on the inhomogeneity parameter (see above), one should insert the argument r0 after having differentiated the field amplitude E0(r0). To obtain the righthand side in (8.15) by the iteration method that we employ, one has to put the zero-order solution (8.14) into it. Inserting (8.14) into (8.15) and averaging over oscillations, we get for the first-order equation mr̈ + mνṙ = -α(φ) ν (v0∇)E0 -e2(E0∇)E0α(φ) sin(φ+ χ) mν(ω2 + ν2)1 2 ⁄ -e2(E0∇)E0 cos χ 2mω(ω2 + ν2)1 2 ⁄ , (8.16) where 356 Radiation and Matter α(φ) ≔1 T∫dte-νtcos(ωt+ φ) T 0 = ei(ω+T 2 +φ) sin ω+T 2 ω+T 2 + e-i(ω-T 2 +φ) sin ω-T 2 ω-T 2, with ω+ = ω+ iν, ω-= ω+ ∗= ω-iν and T is the averaging period180. One can also represent α(φ) in the form: α(φ) = ω ω2 + ν2 e-νtsin(ωt+ φ) -ν ωcos(ωt+ φ)| 0 T or α(φ) = ω(e-νtsin(ωT+ φ-χ) -sin(φ-χ)) T(ω2 + ν2) cos χ . One can notice that the right-hand side in the motion equation (8.15) or (8.16) depends on the field initial phase φ which, in general, does not vanish after averaging over oscillations. This is a curious fact, which deserves a special discussion (see below). Now, to proceed with the time-averaging, we may consider two cases. For long-wave harmonics such as, e.g., in the microwave domain, we may perform this averaging over a single period of the wave, putting T= 2π/ω. Then we get for the quantity α(φ) α(φ) = ω2 2π(ω2 + ν2) sin(φ-χ) cos χ (1 -exp(-2πν/ω)). In the optical case, when the number of periods n in T= 2πn/ω is very large, n→∞, i.e., ωT≫1, α(φ) →0 as 1/n. In this case, only the special solution of the zero-order motion equation (8.14) contributes to the force acting on a particle in the standing wave: mr̈ + mνṙ = e2(E0∇)E0 cos χ 2mω(ω2 + ν2)1 2 ⁄ = e2(E0∇)E0 2m(ω2 + ν2) , and in the collisionless limit νω ⁄ ≪1 this expression takes the form of the usual ponderomotive (Miller) force. It is interesting that in this case the dependence on the initial phase of the wave drops out. In the quantum language, the elementary process described here may be interpreted as a stimulated Compton scattering i.e., consisting in the absorption of a photon with frequency ω and 3-momentum k and emission of a photon with (ω′, k′) subjected to conditions ω′ ≈ω and k′ ≈-k. The 180 In this simple model, we do not take into account possible temporal nonuniformities or transition effects connected with turning the field on and off. 8.3 More on Charge in Electromagnetic Fields 357 particle energy in this process is almost conserved, up to the terms determining the difference between ω′ and ω i.e., E′ -E= ω-ω′ . However, the particle momentum may change significantly which corresponds, in the classical language, to the action of the force standing in the right-hand side of the above motion equations, ∆p= p′ -p= k-k′ ≈ ±2k. Actually, scattering of an electromagnetic wave by a free electron may be interpreted as the Compton effect only if we take into account the frequency shift of the scattered wave. Nevertheless, in the classical approach the frequency change can be ignored, the electron is classically accelerated in accordance with the motion equations in the electromagnetic field of the wave, then emitting radiation as prescribed by the rules of classical electrodynamics [193], §66. In the classical description, we obtain the standard formula for the Thomson scattering cross-section [193], §78 σω= 1 S̅ dJ(ω) ̅̅̅̅̅̅̅̅ dω = 2|d̈ω|/3c2 c|Eω|/8π= 1 3c2 |e2E m| 2 c|E|2 8π ⁄ = 8π 3 ( e2 mc2) 2 = 8π 3 r0 2 ≈6.65 ∙10-25cm2 where dω is the Fourier component of the electronic dipole moment induced by the wave, S is the Poynting vector representing the flow of electromagnetic energy (overline bar denotes averaging over time), and r0 = e2 mc2 ⁄ is the classical electron radius. One may also recall the connection of the motion in a standing wave with the so-called Kapitza-Dirac [227] effect and Kapitza pendulum [23] §30 problem. The Kapitza-Dirac effect is just the elastic scattering of electrons in the field of a standing electromagnetic wave which may be written as E(r, t) = E0(r)e-iωt+ Eo ∗(r)e-iωt, E0(r) = E0 cos kr= E0 cos kx. An electron in such a field experiences the ponderomotive force (we use for simplicity scalar notations) Fx= -∂Up ∂x, Up= e2E2(x) 4mω2 . Using an optical analogy, one can say that the electronic wave is scattered on a diffraction grating having a spatial period d= πk ⁄ = λ/2. P. L. Kapitza and P. A. M. Dirac used one more analogy namely with the Bragg diffraction in ideal crystals, when diffraction angles θn are determined by the condition nλ= 2dsin θn. In Kapitza-Dirac scattering, this condition takes the form sin θn= nħk/p where p is the momentum of incident particles. Thus, electrons crossing a standing electromagnetic wave would be reflected from the planes of peak intensity. This is, of course, a very intuitive, qualitative consideration; in order to treat this problem accurately one should account for the motion of an electron in the field of an electromagnetic wave, e.g., within the framework of the nonrelativistic 358 Radiation and Matter Schrödinger equation, see, e.g., the paper by M. V. Fedorov [228] who was probably the first to consider the Kapitza-Dirac scattering on the quantummechanical level (see also [229]). 8.3.2 Interaction of a Particle with a Traveling Wave An important class of problems is related to the situation when a charged particle encounters a traveling electromagnetic wave. These problems have been typically considered in microwave electronics (see, e.g., a comprehensive book [230]) and in accelerator physics. Recently, much interest has been aroused to the possibility of cost-efficient acceleration of particles in the field of high-power ultrashort laser pulses, for instance socalled laser wake-field acceleration (LWFA). Perhaps the most prospective application of the theory related to the interaction of charged particles with a traveling electromagnetic wave, both in the single-particle and many-particle dynamics regimes, is the free-electron laser (FEL). We shall not discuss here a lot of technicalities and specially adapted methods of integration of the motion equations, arising in connection with all these applications. In the spirit of this book, we shall only observe the general patterns associated with the particle-wave interaction, with the hope that such a high-level discussion would enable one to understand more professional minutes should they appear. The primary setting of the problem is the following: let a linearly polarized traveling wave propagate along the z axis in vacuum. We may write the field in the wave as E(r, t) = E0 cos(ωt-kzz) = Fexcos(ωt-kz) and H(r, t) = H0 cos(ωt-kzz) = Feycos(ωt-kz) where ex and ey are unit vectors defining the polarization of the electric and magnetic field, respectively, kz= k; the quantity F is the amplitude of both E and H because for the wave propagating in vacuum their amplitudes are equal. The vector motion equation for an electron in such a wave may be written in components as mẍ = eF(1 -vz c) cos(ωt-kz) , mÿ = 0, mz̈ = evx cFcos(ωt-kz) 8.4 On Hamiltonian Formalism for Particle Motion in Electromagnetic Fields The Hamiltonian method seems to be poorly adapted to relativistic problems such as field theories. For example, classical electrodynamics, though being a relativistic theory, is represented in this method in the form similar to the 8.5 Interaction between Atoms and Radiation Field 359 nonrelativistic classical mechanics. Nonetheless, application of the Hamiltonian method to electrodynamics was thoroughly discussed in the classical book by W. Heitler [207]. V. L. Ginzburg, the 2003 Nobel Prize winner, RAS (the Russian Academy of Sciences) member and one of the last encyclopedists in physics, also preferred the Hamilton approach to electrodynamics due to its simplicity vs. more sophisticated techniques used in quantum field theory. Of course, the Hamiltonian method usually fails in relativistic problems, and the electromagnetic field is an essentially relativistic object. The first thing to do in order to describe the particle motion in an electromagnetic field using the Hamiltonian formalism is to construct an effective Hamiltonian function for the field and charges in it. I write "effective" because it would be hardly possible to include in the Hamiltonian all possible factors influencing the particle motion. In fact, all Hamiltonians in physics are effective ones since they represent a system in terms of operators defined over some arbitrarily built and very restrictive Hilbert space. In reality, one ignores the dependence on temperature i.e., interaction with the environment (thermostat), energy cutoffs, boundary conditions (such as the electromagnetic field vanishing at infinity in our case), etc. I have already mentioned that it seems to be the ultimate aim of physics to write down the Hamiltonian for the entire universe, and the rest should be presumably done by mathematicians and computer modelers. This is, however, more a metaphysical utopia rather than a scientific program. Striving to employ the Hamiltonian method everywhere appears to be mainly psychologically driven: one desires to represent any theory in the conventional form of classical mechanics. It is, however, clear that, e.g., for electrodynamics the Hamiltonian formalism is poorly adapted, in particular, because it picks out one specific instant of time. This is, by the way, one of the reasons why predominantly the Lagrangian formalism is used while treating electromagnetic field problems. 8.5 Interaction between Atoms and Radiation Field In this section, I shall try to dispel the popular view that the problem of field-matter interaction is so complicated that one can obtain reliable results only by using numerical techniques and very powerful computers. Here, some simple microscopic mechanisms determining the response of the matter to electromagnetic fields are briefly discussed. Of course, in a general formulation this task is enormous and has been treated in numerous classical works which I shall refer to in the subsequent text. In most cases, when considering the interaction of electromagnetic radiation with atomic systems one can use the electric dipole approximation (see [39], 67). In general, one can observe that the absolute majority of electrodynamic and optical phenomena can be well described in this approximation. Indeed, the electric dipole approximation is valid when the parameter aλ ⁄ ≪1, where a is the characteristic dimension of an 360 Radiation and Matter atomic or molecular system interacting with an electromagnetic field (one usually speaks in terms of scattering or radiation of waves). Typically, a~10-7 -10-8cm and λ~10-4 -10-3cm which corresponds to visible or infrared light. One can, however, pay attention to the following paradox. If we consider the interaction of electromagnetic radiation with a material medium regarding all its atoms and molecules together as a single scattering system of size L, then, e.g., in optics, we have as a rule Lλ ⁄ ≫1 whereas for the applicability of the dipole approximation we should ensure the opposite condition Lλ ⁄ ≪1 or at least kL≲1 . This is, however, not a real paradox since in most cases different atoms and molecules of the medium radiate and scatter electromagnetic fields statistically independently of each other. Correlations between atoms and molecules interacting with the radiation field determine coherence properties of light. 8.6 Laser-Matter Interaction Popular ideas about the laser generation - analogy with phase transition - must have a reliable physical and mathematical ground. Many issues still remain open in the study of laser-matter interaction, especially at high intensities, and nowadays lasers produce radiation intensities up to 1022W/cm2 that corresponds to electric fields reaching 1011CGSE i.e., by four orders of magnitude higher than atomic fields Eat~ eaB 2 ⁄ = m2e5/ħ4 ≈5 ∙109V/cm . For estimates in laser-matter interaction it is convenient to introduce the "atomic intensity", Iat= cEat 2 8π ⁄ = cm4e10/8πħ8 ≈3.5 ∙1016W/cm2 . One can expect that the effect of high-intensity electromagnetic radiation on material media would consist at least in very efficient heating of the matter electrons. Moreover, electrons can be accelerated in the fields of laser pulses which opens the possibility to construct cost-effective machines both for high energy research and practical applications. 8.6.1. Ultrashort Laser Pulses Let us consider, as an example, a typical situation when an ultrashort laser pulse (ULP) is directed at the surface of metal. The pulsed electromagnetic field accelerates electrons in the metal skin layer, thus heating the electronic subsystem of the material. One must note that the notion of a skin layer is not quite trivial and is not always correctly defined, but we shall assume that we do know exactly this layer's properties (see [208], 60). Owing to the heat conduction processes, the electromagnetic energy absorbed in the skin layer migrates into the metal volume and eventually dissipates there increasing the equilibrium temperature of the entire sample. One can imagine that if the predominant mechanism of the heat transfer from the skin layer into the volume is electronic conductivity, then after having reached a certain power (fluence) threshold the temperature gradients may become so high that the heat flux caused by the electrons would spread faster than phonons in metal. In other words, the speed of the heat flux would surpass the phase velocity of 8.6 Laser-Matter Interaction 361 phonons. In such a situation, the drift electrons can produce phonons due to the Cherenkov (or Čerenkov) radiation - a complete analogy to Cherenkov radiation of photons in dielectric media. One can observe the characteristic soft blue glow in nuclear reactors, for example, during the excursion to a nuclear power plant, a very useful tour, by the way. This glow is due to Cherenkov radiation of photons in the visible range. But let us get back to phonons. One can expect that due to massive generation of phonons in the Cherenkov mechanism their distribution can deviate from the equilibrium Planck shape, which in its own right would produce an impact on the heat transfer - a sort of a self-consistent feedback. 362 What remains to be solved? 9 What remains to be solved? There are a lot of unresolved issues in physics and in natural sciences in general - much more than resolved. I don't think that physics and mathematics have an agenda: nobody knows what comes next. The great discoveries have always been unexpected. 9.1 The Standard Model The idea of spontaneous symmetry breaking enabled physicists to build the famous Standard Model of the unified weak and electromagnetic interactions. It is interesting that the same idea allowed one to demonstrate that the important phenomenon of superconductivity 181 can be regarded as spontaneously broken electromagnetism. Ultimately, the Standard Model strives to describe all the processes occurring in nature within the framework of the four known interactions: electromagnetic, weak, strong and gravitational. To understand most astronomic concepts and even cosmological models, to learn chemistry or electrical engineering one only needs gravitation and electromagnetism. Quark-gluon models, the Higgs particle or the spontaneous breach of symmetry are often superfluous at this level of knowledge. Yet without strong and weak interactions underlying the respective mathematical models, one would not be able to understand what makes the stars shine nor how the Sun burns chemical elements producing the radiation power that supports life on the Earth. In general, without this knowledge one cannot grasp the principles of nuclear physics. Current understanding of fundamental high-energy physics (formerly elementary particle physics) is based on the just mentioned Standard Model which stands on two main pillars: gauge invariance and spontaneous symmetry breaking. The original "material" foundation of the Standard Model was made up of 6 quarks 6 leptons whereas the main goal of the Standard Model was to describe the four known fundamental forces of nature - strong, weak, electromagnetic and gravitational - on the same footing i.e., in terms of gauge concepts. This endeavor was accomplished for three out of four interactions: for the strong interaction the corresponding gauge group is SU(3), for the weak and electromagnetic interactions, respectively SU(2) and U(1) groups; within the framework of the Standard Model, the weak and electromagnetic forces are unified and known as the electroweak interaction. The first attempt at the unification of gravity and electromagnetism (i.e., the unification of gauge fields with gravitation) goes back to the Kaluza (1921) and Klein (1926) five-dimensional models. The unification of three gauge 181 It would hardly be possible to build the Large Hadron Collider (LHC) without using the superconducting magnets. 9.2 The Arrow of Time 363 forces (strong, weak, EM) with gravity is an extremely difficult endeavor, requiring special skills, and we shall not discuss this issue here. One can only note that the Standard Model does not treat gravity on an equal footing with the three microscopic gauge forces. What is today's status of the Standard Model? It is mostly regarded as a minimal one i.e., incomplete and designed as a temporary step towards a more refined unified theory. One calls the Standard Model minimal since there are no more elementary particles in it besides six quarks, six leptons and four gauge bosons needed to transfer interactions (the Higgs boson was discovered with high probability in the LHC experiments and is one more elementary - not compound - particle). The incompleteness of the Standard Model is, in particular, manifested by cosmological data. Thus, the Standard Model requires modifications to accommodate new phenomena. Such modifications are also expected to be based on gauge principles, with their geometric, Lie groups and bundles approach. Although ideas and results usually don't come in a prescribed order, soon there may be an exception. The Large Hadron Collider in CERN is destined to come closer to the microscopic spacetime structure than any previous device. Putting together all seemingly diverse topics in a manuscript takes time, and I am writing all this before the LHC machine in CERN has been put in operation, but you may read it after the new results have been obtained. For the energies provided by the LHC, new results will inevitably appear, despite the fact that the whole project is rather a process than a business that can be eventually completed. I have already mentioned that the LHC is the world's most powerful accelerator (27 km ring diameter) designed to achieve 7 TeV energy for each of the two counter-rotating and colliding proton beams (see https://edms.cern.ch/file/445830/5/Vol_1_Chapter_2.pdf for the beam parameters, see also http://lhc.web.cern.ch/lhc). It is expected that the Higgs particle (see above) can be produced in the collision so that the mechanism ensuring the origin of particle masses in the Standard Model will be validated. Still the questions remain, at least I don't quite see the answers. For instance, I don't completely understand the lack of gravity in the Standard Model, maybe some experts know better. 9.2 The Arrow of Time "I can't go back to yesterday - because I was a different person then" Lewis Carrol. In this section we shall discuss the subject of the unidirectional flow of time in general and of its various manifestations in the form of so-called time arrows. More specifically, we shall observe the properties of the time reversal operator together with some physical and mathematical aspects of time reversal symmetry. The whole subject is discussed here both on the classical and the quantum level, but mostly for the case without spin. Effects which are due to spin (such as spin-orbit interaction) are only briefly mentioned: a fullscope inclusion of such effects into our discussion would make the section hardly observable. 364 What remains to be solved? This section is constructed in the following way: primarily some standard approaches to the problem of time reversal are briefly reviewed, with my personal comments being added. In the interim, it appeared necessary to dwell on the very concept of time, which unfortunately borders to metaphysics. Afterwards, more physical (and even some mathematical) stuff was discussed, mostly related to a correct definition and properties of time reversal operator. Some time-reversal noninvariant models, usually declared purely phenomenologically (as if the Newtonian model were not phenomenological), are observed under the angle of possible presence of the hidden time reversal symmetry when the coefficients of a phenomenological model are explained in "more fundamental" terms i.e., by using other models. The assumption of a closed physical system, needed for time reversal invariance, is discussed. At first, I did not want to include a section on time-reversal puzzles in this book. The reason for this reluctance was that the so-called problem of time is so strongly contaminated with fancy speculations and philosophical metaphors (see e.g., [53] and also [54]) that while discussing this subject one can easily be trapped by embarrassing pitfalls of vagueness. After a certain age one tends to think why she/he is doing this or that. I would rather prefer to solve equations than get engaged in some sort of foundational scholastics. By the same token, the issue of interpretation of quantum mechanics which is often considered extremely important (see e.g., [74]) is, to my mind, of a similar soft and scholastic nature. Issues of this kind, to my understanding, are not quite physical problems because they are unlikely to bring new results. The only justification, in my opinion, to handle such highly speculative issues is what I call the "physmatical effect" - emergence of unexpected links to other areas of physics and mathematics (see in this connection the works by I. Prigogine, e.g., [55]). Moreover, the problem of time-reversal asymmetry is a very ancient issue and a great lot has been already written about it e.g. Davies, Hoover [59,60], so I did not think I could add anything new and fresh. But then I suddenly caught a serious illness and, by necessity, got more time for relaxed contemplations. Once professor Christoph Zenger, a wellknown mathematician, whom I consider one of my teachers, came to visit me and asked what, in my opinion, the physicists think in general about the unidirectional flow of time. Professor Zenger's idea was that it is probably the dominance of matter over antimatter that produces the time asymmetry. I argued that this view seems a bit naive as well as considering that timereversal symmetry should be a priori guaranteed; moreover, even in the microscopic world this statement would be wrong since only C and T symmetries would be handled, with parity P remaining untouched, provided of course we believe that the CPT theorem is never violated (see below). And who can guarantee that CPT is true also in the macroscopic world? So, I immediately thought this issue is not worth serious treatment. Yet afterwards I decided to systematize a little what I knew on this subject. Thus, the present section appeared. The issue of a preferred direction of time may be classified as one of Perennial Problems. Such problems have a philosophical flavor and although 9.2 The Arrow of Time 365 they can admit simple formulations, they present a perennial challenge for great scientists. One of distinguishing features of Perennial Problems is the illusion that they have been solved a long time ago, with this solution being known to any more or less literate person. However, if one addresses original papers and textbooks, one will find a wild variety of opinions and would-be solutions to each Perennial Problem. So, there is no universally accepted solution to any of the Perennial Problems despite the fact that a lot of great minds have tried to attack them. As examples of Perennial Problems one can name such issues as: interpretation of quantum mechanics, unification of fields and forces, unification of general relativity and quantum mechanics (the quantum gravity problem), the problem of the origin of the universe and its subproblem - that of initial conditions, the problem of fundamental constants - of their actual values and possible variation, of elementary particle masses, of causality, and a number of other fundamental problems essential for our representation of the world. Nowadays, one more issue is of fashion and seems to become a Perennial Problem: that of dark energy and dark matter. 9.2.1 Perennial Problems Children sometimes ask: what has become of the previous days, where are they stored? Or they are ruthlessly destroyed by some specific creatures, as in the well-known fantasy by Stephen King ("The Langoliers")? It would of course be great being able to turn back time. The issue of the preferred direction of time may be classified as one of Perennial Problems. Such problems have a philosophical flavor and although they can admit simple formulations, they present a perennial challenge for great scientists. One of distinguishing features of Perennial Problems is the illusion that they have been solved long time ago, with this solution being known to any more or less literate person. However, if one addresses original papers and textbooks, one will find a wild variety of opinions and would-be solutions to each Perennial Problem. So, there is no universally accepted solution to any of the Perennial Problems despite the fact that a lot of great minds have tried to attack them. As examples of Perennial Problems one can name such issues as: interpretation of quantum mechanics, reductionism (i.e., possibility to reduce biological phenomena to physics), unification of fields and forces, unification of general relativity and quantum mechanics (the quantum gravity problem), the problem of the origin of the universe and its subproblem - that of initial conditions, problem of fundamental constants - of their actual values and possible variation, of elementary particle masses, of causality, and a number of other fundamental problems essential for our representation of the world. Nowadays, one more issue is of fashion and seems to become a Perennial Problem: that of dark energy and dark matter. Some people tend to call the above examples of Perennial Problems also the "Princeton Problems": they are not really important from the utilitarian viewpoint, at least for the time being, and they are neither physical nor mathematical with regard to their setup. That is to say that such problems are not necessarily reduced to a completely and correctly set task. One can be 366 What remains to be solved? preoccupied with such problems for infinite time - they are just Perennial. I don't think this label is correct: one can judge by the publications that the Princeton Institute for Advanced Study (IAS) has lately become considerably less detached from practical problems. It is curious that Perennial Problems though being totally irrelevant to daily life182 tend to stir much more acute interest than mundane everyday ones. Take, for instance, the Fermi paradox concerning the existence of extraterrestrial civilizations. One usually gets orders of magnitude more responses when starting a discussion on this abstract subject than, say, on such vital issues as outrageous medical errors or, say, police brutality. The nearby subject of the so-called Drake equation, an attempt to estimate the potential number of extraterrestrial civilizations in our galaxy - the Milky Way (see http://www.seti.org) still attracts a great number of enthusiasts. The SETI community grows and is supported by decent funding. One might however wonder whether the search for extraterrestrial creatures is a promising scientific avenue to pursue. To be honest, I am inclined to think that anyone who is seriously engaged in the search of extraterrestrial civilizations manifests some escapism and possibly has difficulties in dealing with the boring, often unpleasant but necessary hardships of daily life. Although Perennial Problems are mostly of quixotic character, not all of them are totally useless from a pragmatist's viewpoint: many are even not too remote from real scientific quests. Specifically, many arguments related to time inversion are not always of purely scholastic character. Thus, in high energy physics such arguments are essential for the development of a theory describing the fundamental interactions. For instance, the famous CPT theorem183 provides guidance for the construction of any viable field theory involving particles and antiparticles. More specifically, by combining CPT with internal symmetries one can introduce the correct transformations, e.g., expressed through matrices. One may also recall more practical things such as the phase conjugation techniques in optics and acoustics. Nevertheless, the main point of mine is that discrete symmetries, in particular time reversal symmetry, do not follow from any fundamental principles, and there is no reason to declare such symmetries to be fundamental principles by themselves. Thus, Perennial Problems border on really fundamental questions (some of them we shall discuss below), more specific than Perennial, namely - How did the universe originate? - What is the origin of mass and what stands behind the differences of masses of particles we call fundamental, in particular leptons and quarks? - What is the reason for the matter-antimatter asymmetry observed in our universe - What is "dark matter" and "dark 182 The well-known medieval Perennial Problem: "How many angels can dance on the head of a pin (or on the point of a needle)?" seems to have many today's analogs. In Russia, for example, such eternal topics as the "special way of Russia" were a signature of the intelligentsia at all times. 183 The CPT theorem deals with charge conjugation (C), parity (P) i.e., spatial inversion, and time inversion (T). 9.2 The Arrow of Time 367 → energy" which apparently manifest themselves without being directly observed? Perhaps today's Perennial Problems will be sharpened in future physical experiments, and then their speculative ("many-words") component will be drastically diminished. In this connection one might recall that theories and models are always abound whereas there is only a single reality. 9.2.2 Observations of Possible TRS Breakdown The time reversal symmetry (TRS) is obvious in many basic equations of physics but is exceptionally seldomly manifested in reality. One should probably always remember that mathematical equations are just pieces of human-produced text and can be linked to reality through careful observations and purposeful experiments. It has already been noted that without paying attention to external measurements, physics would look like a loose collection of philosophical speculations or, at best, a vast array of mathematical texts. Time reversal symmetry is a so deeply engraved stereotype in physics that to question it is not at all easy: one should overcome a cognitive barrier of the prevailing opinion. Thus, TRS may be viewed rather as an important philosophical principle i.e., without the necessity to have an experimental link to reality. Below we shall estimate what limitations are imposed by such links. Let us briefly return to basics: in 1686, Sir Isaac Newton presented to the Royal Society the first volume of his "Principia" (Philosophiae Naturalis Principia Mathematica)184. In this first volume three laws were postulated that were destined to describe the world. Firstly, any physical body maintains its state of rest or motion185 unless acted upon by an external force; secondly, this force equals the rate of change of the body momentum; and thirdly, each action entails an equal and oppositely directed reaction (counteraction). Those three laws named later after Newton were presumably sufficient to determine the fate of each particle of the universe since to predict the future state ad infinitum. Time reversal invariance is a particular case of a more general antiunitary invariance so from symmetry positions there is nothing specific about time reversal. Simultaneously, some quantities are conserved that are invariant in time such as energy. The presence of even very small dissipation or diffusion breaks time reversal invariance and tends to drive the physical system to homogeneity. Since time reversal is a simple transformation, the problem of time-asymmetry is simple to formulate: if one believes physics, which is considered to lie in the foundation of the human knowledge of the world, to be based on classical mechanics of Galileo-Newton which is timesymmetric, then how would it be possible that real-life processes are 184 Actually, this endeavor comprised three volumes, see, e.g., Kartsev, V. P. Newton [245].; see also http://books.google.com/books?id=6EqxPav3vIsC&pg=PA1#v=onepage&q&f=false. 185 We have seen that according to Galileo the rest is a particular case of the motion. 368 What remains to be solved? obviously time-asymmetric. The subjective experience of an essential difference between past and future in real life i.e., between two possible directions of time has been metaphorically called "the arrow of time" (probably it was A. S. Eddington who was the first to coin this term). The motion equations of classical, non-relativistic quantum mechanics (without the magnetic field) and, to a certain extent, of quantum field theory (QFT) seem to admit time reversal t→-t, although in QFT simultaneously with CP-transformation. Macroscopic equations are, however, irreversible. Reconciliations of fundamental and phenomenological models of the world has traditionally been one of the most burning issues in physics. This paradox, known since the end of 19th century, is sometimes called the time arrow problem or "global irreversibility paradox" and I shall try to make a short overview of different approaches to its resolution. From a very general observation, one may notice that asymmetry is much more generic than symmetry, the latter being a very special feature. So, one might suspect that it would be an extraordinary and very intelligent finetuning if all the laws of nature were time-reversal symmetric. The probability of more generic asymmetric manifestations must be significantly higher. Looking for symmetries in the equations is an important area of mathematical physics which provides useful tools [187], but imposing symmetries on every natural process in sight seems to be a totally arbitrary act. Nevertheless, it is generally believed that all dynamical equations of fundamental physics are invariant under time reversal whereas the phenomenological and hence "less fundamental" laws of physics have an obvious temporal asymmetry. The microscopic186 dynamical equations govern the evolution (or devolution) of a physical system under the assumption that it is perfectly closed. If the system is not closed, then it is a priori phenomenological, e.g., dissipative (the simplest example is the damped oscillator), and, strictly speaking, its complete description lies outside of microscopical mechanics - in statistical physics or physical kinetics. If the system is not perfectly closed, it does not seem possible to screen it from fluctuations, e.g., of thermal character. However, the assumption of perfect closure seems to be utterly unrealistic and can therefore be substantially weakened, as I shall try to demonstrate a little below. Although one cannot exclude the low-probability option that we are living in a completely time-symmetric universe, the usual statement that all physics should be time- reversal symmetric seems to be a belief, a quasireligious credo. One might observe in passing that there are surprisingly many credos (sometimes called principles) appealing to intuitive extrapolations in physics. Maximum what can be accurately said about timeinvariance is the famous CPT theorem [99], 19, see also below. Its content is as follows. Assume that we have a quantum field theory which is 186 They are only considered microscopic by some consensus as constituting the backbone of contemporary physics: Newton's equations of classical mechanics are by no means microscopic as well as the Schrödinger equation applied to the whole Universe. 9.2 The Arrow of Time 369 characterized by a positive energy density, Lorentz invariance and local causality187. Then such a field theory is invariant under CPT. Furthermore, it is typically assumed that any C or P non-invariance are inessential, especially in practically important physical manifestations, therefore all physical laws should be invariant under T transformation (colloquially, T-invariant). This chain of arguments is in a sense remarkable: nearly everything is false. First of all, the conditions for the CPT theorem are very stringent - to the degree of being applicable to a very limited set of quantum field theories. It raises little doubt that the universe we live in is not Lorentz-invariant: in Einstein's general relativity spacetime is not flat, nor even asymptotically flat. There exist some rival models with asymptotically flat spacetime, but they still do not ensure CPT. This problem is closely connected with the energy density in the universe. This energy density is not necessarily strictly positive, in particular, because the potential energy related to gravitational attraction is negative 188. Local causality cannot be in general a well-defined notion, especially when the space- time metric fluctuates or is quantized. Moreover, it is in general not true that one can correctly define a spacetime separation for an arbitrary quantum state. Thus, there may be difficulties already with the statement of the CPT theorem. Let us pay some more attention to the assertion that C and P invariance play a negligible role in practically important physical processes. The curious thing about this assertion is that it contradicts the general logical scheme of microscopic reasoning. On the one hand, it is generally stated that all physics is presumably time-invariant on the level of microscopic laws, on the other hand such physical manifestations as kaon decays and in general weak interactions are considered too microscopic to be viewed as physically significant. Personally, I do not understand this logic. One standard explanation of the time reversal violation in macroscopic physics is based on time-asymmetry introduced by some supplementary - initial or boundary - conditions (see more on that below). This explanation seems to me at least insufficient since the disparity of two different directions of time in the entire picture of the world remains. As far as the hypothesis of electron-positron and, more general, particle-antiparticle asymmetry resulting in time reversal asymmetry goes, it appears also limited and even a bit naive since it a priori believes that time reversal invariance is deeply inside an exact symmetry. Of course, at a somewhat primitive level of description an anti-particle may be viewed as a particle moving backwards in time. Indeed, a particle having a positive energy E contains in its wave function factor exp(-iEt), i.e., Ψ+ = ψexp(-iEt). An anti-particle having a negative energy -E is characterized by wave function Ψ-= ψexp(-i(-E)t) = 187 The term "local causality" refers to a situation when the fields of QFT, φ(x),φ(y) commute (or anticommute) if spacetime points x and y are separated by a spacelike interval. 188 The question of the energy density in the universe is considered a "hot subject", see, e.g., Sean M. Carroll https://link.springer.com/article/10.12942/lrr-2001-1. 370 What remains to be solved? → ψexp(-iE(-t)). In the framework of QFT189, time reversal invariance seems to be violated, which has been experimentally confirmed in CPLEAR experiments conducted in 1998 in CERN [63]190 It would be of course a truism to state that in classical (Newtonian) physics there is no preferred direction of time. In physics, microscopic time reversibility is actually a consequence of idealized modeling of physical experience. We have seen in Chapter 4 that classical dynamics, being very approximate and of limited applicability, had to be deterministic in order to predict the flow of events with elapsing time. To a large extent, the same is true for quantum mechanics (see Chapter 6). Otherwise, theories or specific models derived from them were of a very limited use. Nevertheless, there are strong controversies among physicists - as well as among philosophers - how to explain the obvious discrepancy between two facts: 1. we all know that time is flowing only in one direction, at least at our macroscopic level (trivial example - thermal or diffusive processes); 2. all basic (microscopical) laws of physics are invariant under time reversal, i.e., change t→-t. One may find the most comprehensive account of the time asymmetry problem in [89]. In classical deterministic physics, the time, although not satisfactorily defined, was considered a universal continuous 191 variable. Due to this approach, classical and even quantum theories were mostly formulated as models based on differential equations of evolutionary type (or on systems of such equations) giving explicit time derivatives as functions of current values describing the actual state. It is this mathematical construction that allowed one, by integrating these differential equations, to "predict" the variable future or retrodict the constant past, depending on setting up initial or final conditions. Moreover, ordinary everyday life entices us into thinking that time is absolute and the same everywhere, which is connected with the erroneous impression that speed has no limit. Yet, this is true only in a low-energy approximation. Although time-reversal invariance is mostly taken for granted, there is no compelling reason why this should always be the case. And indeed, more and more evidence has been accumulated lately that time-reversal symmetry (TRS) is broken on a more sophisticated physical level than simple classical or quantum models. Please note that here I am not talking about statistical problems or open systems, where it would be ridiculous to require timereversal invariance. There is currently much empirical evidence that TRS does not hold in optics, for examples, in experiments with nonmagnetic metamaterials (see, e.g., an account of spectacular experiments carried out by the N. I. Zheludev group in: [251], [252]). One of the relevant hypotheses 189 With Minkowski flat background and Lorentz invariance, motion in different Minkowski frames, as already discussed, can be represented by different 4d graphs, with Lorentz transformations being mappings between them. 190 In a series of CPLEAR experiments, weak decay of K-mesons has been observed, KL→e+νeπ-. 191 Today there are of course also discrete-time and lattice models, but their detailed discussion is outside the scope of this book. 9.2 The Arrow of Time 371 is that TRS breakdown may accompany a nonlocality of the optical response, when surface plasmons are excited depending on the polarization of light incident on intricately surface-structured metamaterials. Probably these experiments as well as metamaterials in general require quite an extensive analysis, see the cited papers for details. Another field of interest where time-reversal invariance is apparently broken is high- temperature superconductivity (see, e.g., a discussion in the papers [253], [254] and [255]). Time-reversal symmetry has also been shown to break down in biological molecules [256]. To my regret, I can never undo things - maybe some people can. And I am unable to see the reversed chain of events: my personal time clearly flows in a single direction. Most people, especially elderly persons, are sadly aware of the passage of time; it is usually in a very young age one wishes the time run faster. It would be a truism to say that in general young and old (like rich and poor) fathom the world by different scales. Moreover, the perception of time intervals also seems to change with the age of an individual, these intervals appearing progressively shorter192 One can easily construct a mathematical model of this diminishing of time intervals with the life-span. An intuitive awareness of the unidirectional and accelerating time flow manifests the psychological time arrow. This is, of course, no physics - so far. However one may justifiably ask: what are the physical reasons for the perception of time as having a sole direction? Although experimental proofs, both direct and indirect, of time reversal non-invariance are quite convincing [63], recognition of such non-invariance within the physical community unexpectedly turns out to be reluctant, sometimes the attitude towards these experimental facts is averse and disapproving. The reason for such an attitude is exactly the widespread stereotype that the basic laws of physics are written in time-reversal invariant form, so all the phenomena should be - at the basic, microscopical level - also time-reversal invariant. Thus, the tension between the generally perceived microscopic science and common sense is reflected in the question of why the past is different from the future. 9.2.3 Model-Based Claims One sometimes forgets that the so-called basic laws of nature are formulated as mathematical models, quite efficient but having a limited domain of applicability. For example, Newton's equations are formulated, in distinction to, for example, the Aristotelian model, as a time-invariant mathematical model, so it is quite natural that Newtonian mechanics, for instance, does not automatically involve time-asymmetric phenomena, e.g., omnipresent in optics where the difference between time-forward and time-reversal processes has been observed for many years. It would be stupid to require of a model to be applicable in the field it has no relation to - this general consideration is often forgotten when time-invariance issues are discussed. Reversibility of the main microscopic laws of physics is nothing more than a 192 "The more we live, the shorter are the years", Bulat Okudjava. 372 What remains to be solved? very productive assumption. The present-day physical laws are just mathematical models, nothing more, and they tend to be replaced by other models when ample experimental facts are accumulated to invalidate the actual beliefs. For instance, such a simple system as a damped pendulum already gives an example of time-invariance violation, irrespective of imposing initial or final conditions which are often taken as a main source of time-invariance violation. The model itself is non-invariant in time. The verbal claims of reducing this model to truly time-invariant Newton equations for some point particles are just beliefs of a rather philosophical nature193. One may notice that in order to define time derivatives one already assumes that the direction of time does exist. Thus, the arrow of time is implicitly suggested by the very formulation of the theory, which is mathematically equivalent to a Cauchy problem. Time in the Newtonian model as well as in dynamical systems theory is an absolute mathematical notion (Galilean fibration) flowing uniformly with no regard to physical processes. This representation is false already for simple relativistic models. Furthermore, time is not an observable in the physical sense; the fact that struck me first when I started to work in the Russian (then Soviet) Committee for Standards. Meteorologists deceive people by saying that they are measuring time - in fact time is defined as some number of periods of some oscillating physical quantity that is measured by "the clock". Figuratively speaking, we see only the hands of the clock, but not the time itself, and we cannot place the time sample in the International Bureau for Weights and Measures headquarters at Sêvres, France (Bureau International des Poids et Mesures), which serves as a depository for the primary international standards. We would need a whole laboratory staffed with highly qualified physicists for producing some kind of clock, e.g., atomic clock, its certification and comparison, etc. This impossibility to see time directly is the consequence of the fact that it is difficult (maybe even not possible) to correctly construct the time operator as an entity corresponding to an observable (see below). Had we been able to introduce the time operator, we could have established its evolution properties (e.g., in the Heisenberg picture) and could have compared forward and backward temporal behavior. We could have found also the expectation values of the time operator and seen how they would change with time t∈R for t> 0 and t 0 almost each point x∈A is returned infinitely many times in A, i.e., there exists an infinite sequence {ni}, ni→∞ so that Ti n(x) ∈A. In other words, each set C∈Σ such that TnC∩C= ∅ is of zero measure. The standard way of reasoning proceeds as follows: let us imagine now that the set A consists of such phase points that all the molecules of gas are gathered in one (say, the left) half of the box - we can consider it the initial state. Then, according to the Poincaré recurrence theorem, one would find such moments of time that all the molecules will return to the same (left) half of the box. However, nobody has ever observed the case when the gas does not occupy the whole accessible volume. The usual explanation of this paradox is as follows. The probability measure for the set A is of the order of exp(-cN) where the constant c depends on some macroscopic physical parameters such as temperature and density (see Chapter 7). For a gas under normal conditions, when N≈1023 the measure μ(A) is numerically extremely small so that recurrence cycles (μ(A)) -1 are greater than any extrinsic time scale, e.g., the characteristic cosmological times. In order to keep the box intact and observe the return to an initial state, one has to isolate the box from external influences during cosmological times, which is physically unrealistic. On the other hand, if the number of molecules is small, say N≈10, one may in principle observe a gigantic fluctuation when all the molecules gather in a single half of the box. 9.3 Irreversibility 389 This situation can be rather modeled on a computer than produced in a direct physical experiment. One might note that there is no restriction on the time the recurrence could take. This recurrence (or return) time may drastically differ on the paths starting from various subregions Σ. The first return of each path defines a measure-preserving map of the region into itself, if one can wait for a long enough time, of course. One may also notice that there is no intrinsic timescale with which the Poincaré recurrence cycle can be compared, so that the recurrence time is considered large or even infinite not mathematically but practically. The intrinsic time scale in this system appears only when an external influence or field is allowed to drive it. External agents can easily pump the gas into a nonequilibrium state. However, without external driving forces the gas expands and fills up the whole available volume which demonstrates the irreversible and time-noninvariant behavior. How can this behavior be explained for an isolated system? The Poincaré recurrence theorem is a typical result of dynamical systems theory (see Chapter 4). Some contemporary studies in dynamical systems are aimed at a possible "explanation" of the time arrow. There are several ways to describe the time evolution of a dynamical system. In the classical framework (see e.g., a very good book [73]), one considers systems of differential equations with respect to explicit time (there may be other temporal parameters such as "fast" or "slow" time). In the autonomous case, the solutions to such systems are usually time reversible. However, in many cases the dynamical system exhibits either ergodic or mixing properties, and it is usually assumed that mixing and ergodicity are closely related with the arrow of time. Indeed, from the viewpoint of statistical mechanics mixing means irreversibility: every initial measure converges to an invariant measure (see Chapter 4) under the action of dynamics. It may be difficult to obtain exact solutions for mixing and ergodic systems, so to prove irreversibility in a mathematical sense is also difficult. One can explore discrete-time models or difference equations using fast computers, instead of solving differential equations. Discretized models, e.g., produced with iterated functions [90], may easily exhibit irreversibility due to a number of different past histories for a given time-point in the present. It is only recently that the study of dynamical systems became connected with statistical mechanics and kinetic theory (Chapter 7). It is sometimes said that in dynamical systems one deals with absolute irreversibility whereas in statistical physics and thermodynamics only with statistical irreversibility (owing to the lack of knowledge). Statistical irreversibility appears in large ensembles of particles (or other entities) whose exact behavior is subordinated to more specific laws such as the Newtonian or Hamiltonian laws of motion. The statistical or probabilistic way of explaining time-reversal paradoxes differs from explanations based on the theory of dynamical systems, but in fact one considers the same process of passing to probabilistic (based on finite measure) models starting from the deterministic phase flow - only the languages are different: more modern "geometrical" in the theory of dynamical systems and more traditional "physical" discussing inter390 What remains to be solved? particle correlations and many-particle distribution functions in statistical physics and physical kinetics. This "physical" way of reasoning was originated in the works of L. Boltzmann [69] and J.W. Gibbs [297]. 9.4 Origins of Unpredictability The way to explain the Zermelo paradox pointed out by L. Boltzmann is based on the so-called H-theorem which is usually proved starting from the Boltzmann transport equation (see e.g. [72, 25], see also Chapter 7). Traditionally, to prove this theorem the assumption of "molecular chaos" is premised. Honestly speaking, I have not seen a correct (to my understanding) formulation of this assumption, the famous Boltzmann's molecular chaos hypothesis (Stosszahlansatz)202 Intuitively, it may be understood using the language of correlations. Before any collision, the momenta of all colliding particles were distributed uniformly in the momentum subspace independently of their positions, but after the collision these momenta become correlated. However, Boltzmann's conjecture of molecular chaos appears to remain unproven, therefore it may or may not be true. The key assumption of "molecular chaos" breaks down time-reversal symmetry as it leads to a special form of collision integral in the Boltzmann kinetic equation. This is what was exactly necessary to make ground for the second law of thermodynamics (the entropy rise). The concept of molecular chaos (or factorization) can be more or less understood from the dynamical derivation of the Boltzmann equation [72, 25]. We discuss this derivation and related issues in Chapter 7; now I shall try to illustrate the molecular chaos on a simple model. Assume that we have a homogeneous gas of particles and the singleparticle distribution function (the density of particles having momentum p) is f(p). If we consider only pair interactions and let w(p1, p2; p′ 1, p′ 2) be the probability per unit time (transition rate) for particles with momenta p1, p2 to collide and become pairs with momenta p′ 1, p′ 2 . To simplify the twoparticle collision description, let us assume the time and space reversal symmetry of the elementary process in gas, which gives the detailed balance equation: w(p1, p2; p′ 1, p′ 2) = w(p′ 1, p′ 2; p1, p2) (9.8) Then, due to Stosszahlansatz, we get the Boltzmann equation (in a somewhat simplified form) df(p) dt = ∫ w(p, p2; p′, p′ 2)[f(p′)f(p′ 2) -f(p)f(p2)] × δ( p2 2m+ p2 2 2m2 -p′2 2m-p′2 2 2m2 ) × δ(p+ p2 -p′ -p′ 2)dpdp2dp′dp′ 2 (9.9) 202 It is interesting that Boltzmann was only 27 years old when he introduced Stosszahlansatz in statistical mechanics. 9.4 Origins of Unpredictability 391 where we have omitted index 1. It is from this equation that Boltzmann proved the H-theorem: dH(t) dt = d dt∫f(p) ln f(p)d3p≤0 , which gives the entropy with negative sign. The meaning of this derivation is that entropy increases provided the model of Stosszahlansatz (molecular chaos) is valid. Thus, only the hypothesis of molecular chaos brings with itself time asymmetry, though not very explicitly. One can see that the H-theorem is so simply proved only for the rarefied Boltzmann gas. Actually, it is usually formulated for such a gas which is not far from an equilibrium ideal system. I failed to find a satisfactory proof of the Boltzmann H-theorem for the non-ideal gas with arbitrary interaction between the particles and I could not find a proof of this theorem for arbitrary open systems either, despite the fact that the entropy can in principle be defined for such systems. Let us make a brief distraction recalling some well-known facts about statistical mechanics in general (one can find specific details in Chapter 7). The foundations of statistical mechanics are still a hot topic, despite the fact that this subject has persistently been in the focus of interest since the times of Boltzmann and Gibbs. The traditional issue here is the foundation of classical statistical mechanics of continuous systems as opposed to now fashionable lattice systems being closely associated with numerical modeling. By a continuous classical system one can understand a collection of classical particles moving in a continuous phase space. The theoretical approach to deriving the equations of classical statistical mechanics is based on the solution of an infinite hierarchical system of integro-differential equations for the many-particle distribution functions (the Bogoliubov chain) [72]. For many-particle systems in a finite volume the Bogoliubov hierarchy of equations is equivalent to the Liouville equation. So the outcome of all this stuff is that is hard to use the reversible models for the laws of motion to explain why we observe the world having a comparatively low entropy at any moment of observation as compared to the equilibrium entropy (e.g. of universal heat death). Moreover, the world as a whole must have had even lower entropy in the past (see [88], Ch.27). One may notice here that the definition of entropy in cosmology is still a controversial issue: there does not seem to be a consensus about the notion of a global entropy in the universe, specifically for the part of entropy associated with the gravitational field. Remark. Large entropy fluctuations in the equilibrium steady state of classical mechanics can be studied in extensive numerical experiments in a simple strongly chaotic Hamiltonian model with two degrees of freedom (e.g., described by the modified Arnold cat map). The rise and fall of a large, separated fluctuation is shown to be described by the (regular and stable) macroscopic kinetics, both fast (ballistic) and slow (diffusive). One can then abandon a vague problem of the appropriate initial conditions by observing (in a long run) a spontaneous birth and death of arbitrarily big fluctuations for any initial state of our dynamical model. Statistics of the infinite chain of fluctuations similar to the Poincaré recurrences is shown to be Poissonian. A 392 What remains to be solved? simple empirical relationship for the mean period between the fluctuations (the Poincaré cycle) can be found and, presumably, confirmed in numerical experiments. One can propose a new representation of the entropy via the variance of only a few trajectories (particles) that greatly facilitates the computation and at the same time is sufficiently accurate for big fluctuations. The relation of these results to long-standing debates over the statistical irreversibility and the time arrow is briefly discussed. One can then show that the Poincaré recurrence theorem incorporates Loschmidt's requirement for velocity reversion in thermodynamic gas systems. It differs essentially from Hamiltonian dynamics from which Boltzmann's H-theorem follows. The inverse automorphism, T-1, on which the demonstration of the recurrence theorem is based, does not exist for atomic and molecular systems. Thermodynamic systems need not spontaneously return to states they occupied in the past and a Zermelo paradox has never existed for them. The same conclusion follows a fortiori for quantum systems in chrono-topology. Poincaré's recurrence theorem does not conflict with Boltzmann's H-theorem because they apply to systems described by quite different mathematical structures. The way I presented the story was to strictly impose molecular chaos (no momentum correlations) at one moment in time. That is really breaking time translation invariance, not time reversal. From that you could straightforwardly derive that entropy should increase to the past and the future, given the real Hamiltonian dynamics. What the real Boltzmann equation does is effectively to assume molecular chaos, chug forward one timestep, and then re-assume molecular chaos. Its equivalent to a dynamical coarse-graining, because the distribution function on the single-particle phase space can't carry along all the fine-grained information. 9.5 Understanding superconductivity When I was a student long ago, I heard the rumors that L. D. Landau had named three problems in physics as being of an outstanding importance: the problem of cosmological singularity, the problem of phase transition, and that of superconductivity. This latter subject is a good example of a synthetic discipline that has links into many other fields of physics and mathematics. Superconductivity has lately acquired the status of a whole discipline combining profound theoretical ideas with engineering applications. Naturally, when speaking about superconductivity, I only scratch the surface. Superconductivity is a whole world centered around designing and producing new materials with highly required but unusual properties. I write "designing and producing" because the typical approach in superconductivity symbolizes a new phase in science and technology: previously people have used available materials, now they are trying to construct them. Note that in fact all of physics or chemistry, with their vast baggage of analytical tools and research patterns, have their eventual goal in creating appropriate materials. This is basically what people understand by technology. In 1957, the Physical Review published the first fundamental theory explaining how, at low temperatures, some materials can conduct electricity 9.5 Understanding superconductivity 393 entirely without resistance. Building on experimental clues and earlier theoretical hints, John Bardeen, Leon Cooper, and Robert Schrieffer, all at the . The "BCS" theory also had an important influence on theories of particle physics and provided the starting point for many attempts to explain the new high-temperature superconductors. The "BCS" theory has played a prominent role not only as an ad hoc model for superconducting electrons, but also in many other areas of physics. The BCS model also had an important influence on theories of particle physics and provided the starting point for many attempts to explain the "new" high-temperature superconductors. Recently, it has been applied to the analysis of dilute gases of cold fermions in the case of weak interactions between the atoms. 9.5.1 Early History of Superconductivity Superconductivity was discovered in 1911 and always remained a riddle. Due to some mysterious reason, metals at very low temperature came into a state when the resistance to electric current practically disappeared. Physicists have been trying to solve this puzzle for many years. Only by the 1930s, it was concluded that electrons in a superconductor must occupy a quantummechanical state distinct from that of normal conduction electrons. In 1950, researchers found that the temperature at which mercury becomes a superconductor is slightly higher for mercury isotopes of lower atomic weight, suggesting that superconductivity somehow involves motion of the atoms in a material as well as the electrons. Following up on this "isotope effect," Bardeen and Illinois colleague David Pines showed theoretically that within an atomic lattice, electrons could attract one another, despite their strong electrostatic repulsion. Essentially, an electron can create vibrations among the lattice atoms, which can in turn affect other electrons, so the attraction is indirect. By the mid 1950s, Bardeen was collaborating with Cooper, a postdoctoral fellow, and Schrieffer, a graduate student. Cooper published a short paper showing how the Bardeen-Pines attraction could cause electrons with opposite momentum to form stable pairs [283]. This pairing mechanism, Cooper suggested, might be responsible for superconductivity, but Bardeen was initially skeptical. The paired electrons were not physically close together but moved in a coordinated way, always having equal but opposite momentum. It was not clear that these tenuous, extended pairs could be crammed together to create a superconducting medium without getting disrupted. A few months later, however, Schrieffer hit on a mathematical way of defining a quantum mechanical state containing lots of paired electrons, with the pairs oblivious to other electrons and the lattice, allowing them to move without hindrance. He later compared the concept to the Frug, a popular dance at the time, where dance partners could be far apart on the dance floor, separated by many other dancers, yet remain a pair [284]. 394 What remains to be solved? After publishing a short note early in 1957 [48], the team published what became known as the Bardeen-Cooper-Schrieffer, or BCS, theory of superconductivity in December. They won the Nobel prize in 1972. The theory explained the isotope effect and the fact that magnetic fields below a certain strength cannot penetrate superconductors. It also explained why superconductivity could only occur near absolute zero-the tenuous Cooper pairs break up in the presence of too much thermal jiggling. It's a testament to Bardeen's insight that he chose the right collaborators and kept his eye on experiment in seeing the way forward, says superconductivity experimentalist Laura Greene of the : "It's how science should be done." One oddity of the BCS wave function is that it lacks some of the mathematical symmetry expected at the time for any quantum or classical solution of electromagnetic equations. Further analysis of this point spurred the development of so-called symmetry breaking theories in particle physics. Although the superconductors discovered in 1986 rely on electron pairing, they remain superconducting at temperatures above what the pairing mechanism in BCS can easily explain. But Marvin Cohen of the 't be ruled out. And, adds Greene, it took "some very smart people" almost 50 years to get from the discovery of superconductivity to BCS, so she's not worried that the high temperature superconductors remain unsolved after a mere 20 years. Although some engineers state that the potential impact of hightemperature superconductivity would be negligible due to high costs involved, from the physical viewpoint any major breakthrough in superconductivity research is difficult to overestimate. Historically, physicists had to cool conventional conductors almost down to absolute zero to observe the phenomenon. Any person who did some experimental work in physics would understand what it means - to cool samples to Helium temperatures: this is usually a multistage process, and even provided the institution or the laboratory chief has enough money at his or her disposal to buy the modern equipment, reaching such low temperatures becomes progressively difficult, since each temperature drop requires finding some kind of energy within the substance and then devising a means of removing this energy. In several years, superconductivity will be one hundred years old, which is quite a long time for an area of physics. Usually, after such a prolonged period one could expect the area to be a closed text-book chapter. Yet, understanding of superconductivity is still rather unsatisfactory, especially as far as "new" superconductors are concerned. Superconductivity has traditionally been thought of as a phenomenon that occurs only at temperatures near absolute zero, but in 1987 several materials that exhibit superconductivity at temperatures exceeding 100K had been found. I remember the hype of March 1987 around this discovery - all media sources treated it almost as the most important event of the XX century. At that time, I worked as the editor of the physics and mathematics department of the 9.5 Understanding superconductivity 395 leading Soviet popular science journal called "Science and Life" (Nauka i Zhisn'), and our Editor-in-Chief who usually did not care much about physics quite unexpectedly summoned me and ordered me to make an urgent material on the discovery of high-temperature superconductivity, which I hastily did ("Nauka i Zhisn'" No. 6-7, 1987). For those who can read Russian I would also recommend a popular-level but comprehensive article on superconductivity by Academician V. L. Ginzburg - a great physicist of the dying-out generation of universalists who received his long-deserved Nobel Prize just for the theory of high-TC superconductivity ("Nauka i Zhisn'" No.7, 1987). So, a new era of active exploration in the superconductivity field had begun. People - including the newspaper-peeking lay public - thought the scientists would very soon provide new superconducting wires to transmit power as well as levitating trains Nevertheless, the phenomenon of hightemperature superconductivity is still poorly understood, which makes its practical applications evasive. No materials currently exist that can become superconductive at room - biologically acceptable - temperatures. Biological environment and superconductivity do not seem to coincide in their equilibrium states, at least so far. The new superconductors - ceramic materials based on copper oxides (cuprates) combined with various other, usually rare-earth, elements - support the superconducting current at temperatures as high as 140K. That was a noticeable jump toward roomtemperature superconductors, since, although requiring rather low temperatures, these new materials can be cooled with liquid hydrogen, which is enormously easier and much less expensive than the liquid helium cooling required by the old materials. These are all well-known facts, and I repeat them here only to emphasize the importance of the problem. Now that the superconductivity hype is long over, with other fields being at the forefront and enjoying mass-media attention, e.g., nanotechnology, the potential social and engineering impact of superconductivity is viewed with increasing skepticism. I still think that room-temperature superconductivity, if it is by any chance reached, would produce a major breakthrough in energy policy, transportation and possibly information technologies. Since the world is becoming increasingly infrastructured, it would result in major technologically-induced social changes. Take power transmission as an example. Currently, electricitygenerating utilities must transmit power at a voltage of tens or hundreds kilovolts because otherwise a large portion of transmitted energy is dissipated during transmission. This is in fact ridiculous since the typical usage of electric power requires volts or, at most, hundreds of volts. Enormous voltages required solely for electricity transmission lead to the whole industry with their monopolies, inefficient management, elevated prices and frequent dangerous blackouts, without even mentioning ecological harm and wasted terrains. If one were able to build large-scale power grids using high-temperature superconducting materials, it would be possible to generate and transmit power at much lower voltages, rendering a lot of clumsy technology superfluous. The use of alternative energy sources could be boosted and the cost of energy worldwide, now exasperatingly rising, 396 What remains to be solved? could be drastically reduced. In electronics and IT, new and efficient devices may be designed, since thin films of normal metals and superconductors that are brought into contact can form superconductive electronic components, which could replace transistors in some applications. These fancy pictures are, of course, far from today's reality - in the same sense as fusion is a controversial ideal for power generation. One can recall in this connection the onset of the automobile, aviation, and space research eras. For example, we can consider the BCS equation for a Fermi gas characterized by the chemical potential μ∈R and T> 0. We may assume the local pair interaction between the gas particles to be λV(r) where λ> 0 denotes the coupling constant.203 The primary assumption about the interaction potential V(r) might be that it is the real function and, e.g., that V∈L1(R3). The BCS gap equation can be written as ∆(p) = - λ 2(2π) 3 2 ∫d3qV(p-q) ∆(q) E(q) tanh E(q) 2T R3 , with E(p) = √(p2 -μ)2 + |∆(p)|2 . One can readily see that the BCS equation is non- linear. 9.5.2 Some Physical Models of Superconductivity Now, let us try to understand this amazing phenomenon of super conductivity. Here I do not attempt to overview the various mechanisms that have been suggested to explain superconductivity, especially in high- TC materials. My intention is very modest and consists in sharing my impressions of superconductivity research conducted by real experts in the field. So, it is rather a position of an elderly scientific observer than of an energetic researcher. Superconductivity is the vanishing of all electrical resistance in certain substances when they reach a transition temperature TC that varies from one substance to another. An interesting manifestation of this phenomenon is the continued flow of current in a superconducting circuit, even after the source of current has been shut off. For example, if a lead ring is immersed in liquid helium, an electric current that is induced magnetically will continue to flow after the removal of the magnetic field. This observation immediately invokes an engineering application: powerful superconducting electromagnets, which, once energized, retain magnetism virtually indefinitely, have been developed to be used, e.g., in fusion experiments. Contrariwise, in normal materials the current attenuates nearly exponentially with time after switching off the source because electrical resistance causes energy loss due to electron-phonon interaction and multiple scattering even in very clean conductors. Superconductivity is commonly interpreted as a transition to a new phase inside a material - a macroscopic quantum phenomenon when a macroscopic number of electrons (of the order of 1023) 203 Sometimes, the factor 2 is introduced for convenience i.e., the interaction is written as 2λV(r). 9.5 Understanding superconductivity 397 condense into a coherent quantum state. In case this state is the one of a current flowing through a material, such current would flow virtually indefinitely (from the theoretical point of view), and the material can serve to transmit electric power with no energy loss, unless the superconducting phase of the sample is destroyed by some external agent, e.g., by a strong magnetic field. Although many theoretical explanations related to the superconductors discovered in 1986 also rely on the BCS pairing mechanism, these "new" materials remain superconducting at temperatures higher than those which the BCS-type electron pairing can readily explain. But given the poor understanding of new and structurally complex materials, a number of competing mechanisms should not be ruled out in spite of quasi-religious wars waged by the proponents of different models for superconductivity. Besides, it is not especially astonishing that the riddle of high temperature superconductivity remains unsolved after twenty+ years: indeed, it took about fifty years before "some very smart people" managed to arrive at the BCS theory from the discovery of superconductivity. The 1972 Nobel Prize in Physics was awarded to J. Bardeen, L. Cooper, and S. Schrieffer for their model (known as the BCS theory) of the "old" superconductors, i.e., those that exhibit superconductivity at temperatures near absolute zero, including such metals as zinc, magnesium, lead, gray tin, aluminum, mercury, and cadmium. Some other metals, e.g., molybdenum, may become superconductive after high purification, and a number of alloys (e.g., two parts of gold to one part of bismuth) as well as such compounds as tungsten carbide and lead sulfide can also display superconductivity, so they can also be classified as "old" superconductors. The BCS theory stipulates that at very low temperatures electrons in an electric current move in pairs. Such pairing enables them to move through a crystal lattice without having their motion disrupted by collisions with the lattice. It is important that the BCS formalism invented to explain superconductivity is in fact much more general. For example, the BCS model can be applied to describe phase transitions in solids, more specifically the metal-dielectric transition. In the famous Kopaev-Keldysh model [222] the BCS-type phase transition was interpreted as the Bose condensation of electron-positron pairs (excitons), which is a direct analogy to BCS superconducting transition. I shall discuss the BCS theory later in some detail, since it is a beautiful model having many implications and, besides, because I have a suspicion that many authors writing on various superconductivity issues did not read the classical BCS paper [48]. While this microscopic theory explaining conventional superconductivity has already existed for half a century, as well as the phenomenological Ginzburg-Landau (GL) theory [47], which can be derived from the microscopic BCS theory [298] some doubts exist that the BCS model is fully applicable to the "new" superconductors. The latter appear to be rather "muddy" in order to serve as model physical systems, so the clear-cut BCS model based on the Cooper pair formation and condensation in an ideal lattice does not seem to describe the whole lot of competing phenomena in very 398 What remains to be solved? special chemical systems. Superconductivity in the "new" and "dirty" materials, although ultimately producing the same effect, may be a very different phenomenon, so that the BCS model, which has served as a standard for describing the "old" superconductivity, may simply be inadequate in the case of "new" superconductors. There exist competing models, e.g., the 2D Hubbard model, bipolaron model, bosonic SO(5), U(1), SU(2) and many other models, which have been proposed to describe high-temperature conductivity in new ceramic materials (see e.g. the overview by Yu. V. Kopaev [40]. To my knowledge, all of the models have not been proven and remained controversial for many years. At least two schools have been confronting each other while putting forward their versions. These may be classified according to the type of main quasiparticles involved. One of the schools holds that the electron-phonon interaction, the same as in the BCS model, still plays the dominant role in the cuprates. The other insists that electron-electron Coulomb interaction must be strong in the new superconductors (there are some experimental indications), so that electron-phonon interaction is immaterial for pair formation. Sometimes, when listening to excited arguments of adepts of these schools of thought or even "independent" adherents to a specific model, I could not get rid of an impression that I was witnessing a sort of a religious battle in which the adversary must be totally devastated. The subject of superconductivity seems to be full of prejudices and can only be compared in this respect with the heated ideological war waged by the open source community against Microsoft. However, while the market needs and user convenience can eventually settle the debate in personal and enterprise computing, the choice of an adequate model to understand superconductivity is probably a much harder task. This understanding, by the way, also depends on the current state of the computer technology. I shall describe the Hubbard model a bit later, now I can only remark that this model is simple and universal and can be applied far beyond superconductivity. Moreover, thanks to the new developments in parallel computation, one can verify the 2D Hubbard model directly. The Hubbard model purports to describe superconductivity using a few microscopically defined parameters such as (1) the probability that carriers - electrons or holes - hop from one site to another in the crystal lattice; (2) the energy penalty when two carriers occupy simultaneously the same site; (3) the concentration of carriers. As I have already mentioned, several theories of high-temperature superconductors have been proposed, but none has been experimentally confirmed, so there was a strong disagreement within the physics community about what model was appropriate at all. One could have a single validation criterion: if some model does not foresee the superconducting state in the typical range of temperatures as well as in the domain of specific compositional and structural parameters, specifically for the case of cuprate superconductors, then it should be abolished. At first, the Hubbard model was unsolvable even on high-performance computers due the amount of computation required. Indeed, superconductivity is a macroscopic quantum effect, therefore any simulation must involve a lattice scale of 1023 9.5 Understanding superconductivity 399 sites. Besides, the computer model must also account for individual carriers hopping from site to site on the lattice, which means that the scale of several lattice spacing with carrier interactions on this scale should be included. All that is tantamount to a computationally complex multi-scale problem. We have already encountered multiscale problems in connection with the transition to classical description in Chapter 4 as well as in statistical mechanics. Here, I would only like to stress that because of prevailing numerical difficulties, the issue of applicability of the Hubbard model remained unsolved, as well as the question of survival of other models for high-temperature superconductivity. It has always been a guesswork for me why the subject of superconductivity is treated in four different volumes of the Landau-Lifshitz course: in "Electrodynamics of Continuous Media", in each of the two volumes of "Statistical Physics", and in "Physical Kinetics". This is a peculiar fact which may be explained, of course, by personal likings of the renowned authors. But there must be also an objective reason for such a dissemination of a single issue: superconductivity on its current level of understanding is a multi-facet subject, no unifying theory for this phenomenon is known, so it should be tackled from diverse viewing angles and using different techniques. Even the phenomenological treatment of superconductivity has admitted a variety of models, many of which had been proposed in the 1930s (see, e.g., the paper by R. Kronig [264] and the famous paper by H. and F. Londons). One might mention in passing that quite a lot of the grands of theoretical physics who were active in that period competed in offering a plausible explanation for superconductivity: W. Heisenberg, E. Schrödinger, M. von Laue, J. Slater, H. Casimir and many others. One can see that superconductivity is really a multilateral phenomenon by a very simple observation. The current flowing in a superconductor is, as any current, a manifestation of the nonequilibrium process consisting in a stream of charged particles. Therefore, the effect of disappearing resistance to this current cannot, strictly speaking, be treated within equilibrium thermodynamics, it is a subject for physical kinetics. On the other hand, the normal and the superconducting states may be represented as two different phases (in the Gibbs' sense), because they can be characterized, besides conductivities, by different values of purely thermodynamical quantities, and the transition between these two phases can be treated as a normal phase transition. Thus, the transition of a material from normal to superconducting phase may be studied by applying usual equilibrium thermodynamics, which makes the treatment of this phenomenon in statistical physics courses quite natural. By studying the literature, one may notice that during about the first quarter century after the discovery of superconductivity, physicists were focused mostly on the electrical problem, namely on how to explain the abrupt change and nearly immeasurably small value of resistance. I could not trace the idea of phase transitions in the papers available to me from that period, at least it was not predominant until the paper of Gorter [257]; there are also references to early-stage applications of thermodynamics in the works of W. H. Keesom around 1925, but I could not find them without 400 What remains to be solved? investing too much time. Superconductors were customarily described as crystals in which scattering of carriers is absent, the limiting case of a perfect conductor [258] R. Becker, G. Heller, F. Sauter. "Über die Stromverteilung in einer supraleitenden Kugel", Zeitschrift für Physik 85, 772-787 (1933). This is a simple model of the Drude type that is traditionally treated in courses on macroscopic electrodynamics. The electrodynamical approach immediately invokes the idea of electromagnetic response of a superconducting material to an external electromagnetic field - in terms of dielectric function or otherwise. The response theory was discussed in Chapter 7 and I will not repeat it here. In the electromagnetic theory of superconductors, the key notion is the current. Let us try to calculate the current starting at first from the naive Drude-Lorentz model for electrons in an external electrical field: mv̇ + vmv= eE(t) (9.10) where v is the collision frequency. If we take for E(t) the harmonic components E(t) = E0e-iωt, we shall have v(t) = v0e-iωt (9.11) and the solution v= eE mv(1 -iω v) -1 (9.12) The macroscopical current is j(r, t) = ∑eivi(t)δ(r-ri(t)) n i=1 = env= e2n mv E 1 -iω v = σE (9.13) and the conductivity is defined as σ= (4π)-1 ω0 2 v-iω≡ σ0 1 -iωτ, τ= 1 v, ω0 2 = 4πne2 m , σ0 = ω0 2 4πv (9.14) Here ω0 is the usual plasma frequency and τ has the meaning of the collision-free time. In transient models, when the field cannot be adequately represented by a single Fourier component, the current value at time t is determined by the entire history of E(t): jα(t) = ∫dt′Gαβ(t, t′)Eβ(t′) t -∞ (9.15) 9.6 Superfluids and Supersolids 401 See a discussion of the electromagnetic response of material media in Chapter 8. If we consider the superconductor as a material with disappearing scattering of electrons, then v→0 and the condition ω≫v will be fulfilled even for very low frequencies. In other words, σ→ω with ω→0 since the currents circulate without Joule losses (c′′ = 2nχ= 0). Recall that c′ = 1 - ω0 2 ω2 ⁄ = n2 -χ2 (Ch.5) so that for ω 0, η 0, but ∆T is a function of time t, and in the next temporal domain, e.g., following such unaccounted natural factors as orbital change, sunspot periods, or volcano eruptions (see, e.g., [273]) ∆T may well be negative. Is there a corroborative evidence that recent 242 Some experts refer to probabilistic error margins present in the IPCC reports, see http://www.ipcc.ch/. However, those are not the precision boundaries usually presented in physics and corresponding to the applicability limits, but the error margins typical of computer codes. I could not find in IPCC reports a satisfactory substantiation of the approximations made in IPCC models. Physical applicability limits correspond to expansions in series, often asymptotic ones, when some small (or large) parameter can be explicitly indicated. When the terms in the system of equations describing the mathematical model considered are omitted, one usually talks about a zero order in such terms (in dimensionless form). One can always find the corrections however difficult it might be technically. 458 Climate as a Physical System climate variations considerably surpass the changes observed in the past and caused by the natural factors such as fluctuations of the Earth's orbital parameters, solar cycles, ocean currents, volcano eruptions, etc.? Incidentally, one might notice that the IPCC texts and models include very little material about volcanic activity which has always been one of the most powerful climate forcings on the planet. Indeed, ash and aerosols scatter the sunlight, on the other hand volcanoes produce IR-absorbing gases (i.e., GHG), submarine eruptions warm oceanic waters and so on. The fact that ∆T can be negative owing to anthropogenic factors as well is reflected also in mathematical models of the so-called nuclear winter, when an explosive energy release occurring locally produces large quantities of aerosols changing the optical properties of the atmosphere. The effect of aerosols can in most cases exceed the effect of any minor243 greenhouse gas (Kahn, R. A., Yu, H., Schwartz, S. E., Chin, M., Feingold, G., Remer, L. A., Rind, D., Halthore, R., DeCola, P. Atmospheric Aerosol Properties and Climate Impacts. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research, M. Chin, R.A. Kahn, and S.E. Schwartz (eds.), National Aeronautics and Space Administration, Washington, D.C., 2009). So, the limits of applicability for the models of climate transitions is a serious issue. Moreover, in contrast, say, with nuclear physics, purposed experiments cannot be carried out in climatology so that the prognostic quality of the models cannot be reliably verified. To some extent, paleoclimatic studies can serve as a substitute for the physical experiment to verify the models. For instance, a certain alternative to pointed experimental validation of climatic models would be the quantitative explanation of paleoclimatic cooling and warming periods, e.g., comparatively recent cold and warm climatic intervals such as the Little Ice Age in the 17th century or the Medieval Warm Period. Where are such quantitative explanations? Some climate variations will inevitably occur, notwithstanding any political action taken to reduce CO2 emissions so that the research aimed at adapting to climate changes makes more sense than the "IPCC science" whose main task is to substantiate the a priori set up catastrophic AGW views. Climate science, as I have already mentioned, in general can hardly be called impartial and satisfactory. The point is that climate science (of which the "IPCC science" is the branch) is a somewhat special discipline: in distinction to most scientific disciplines, climate science is not supported by any background theory, it just contains a lot of observations, often conflicting, a great number of wild speculations, plenty of disagreeing hypotheses, a growing number of dubious computer models which are nonetheless unquestionably believed to provide the ultimate, complete and holy truth, and, moreover, contains a strong ideological component and uses ad hominem arguments. 243 Recall that all greenhouse gases including CO2 are considered "minor" in contrast with the water vapor which is "major". The AGW Evidence 459 At least the IPCC can hardly be called neutral in the assessment of climate dynamics. An a priori givenness is especially obvious from the "Summary for policymakers" where such clichés as "dangerous anthropogenic interference with the climate system", "climate change can affect human health directly", "populations in developing countries are generally exposed to relatively high risks of adverse impacts from climate change" as well as mysterious words "models project that" and intuitive statements of the type "it is very likely". Yet climate and its instant manifestations - weather - are determined by physical processes, which are sufficiently complex to be intuitively assessed and explained at the hand-waving level. For example, transfer of the solar radiation, its reflection from the Earth's surface and selective absorption in the atmosphere, in particular by small quantities of the notorious carbon dioxide, requires a considerable physical knowledge usually not available to the most vocal climate alarmists or to "climate economists" and other proponents of carbon control and trading schemes. However, the crucial difficulty here is that real processes in nature are irreversible (see section "The Arrow of Time") so that validating parameters of a model by paleoclimatic observations, even from the recent past, does not guarantee reliable forecasts concerning the future state of the climatic system. In particular, one cannot say that the state with a given concentration c(t) of CO2 and characterized by a certain average global temperature T(t) in the past ( t t0 ), where t0 is some reference instant of time, e.g., 1990, 2005, 1750, etc. Such extrapolations are not valid. The true AGW believers and environmentalists are trying to explain the global climatic transitions occurring since the beginning of the industrial era as the result of a single human-induced process i.e., simple one-dimensional causal relationship: warming is related to human-produced CO2, cooling (if any) is related to human-produced soot and aerosols. The implication in any case is that greedy and consumption-crazy humanity totally controls the Earth's climate, all the natural mechanisms playing almost no role. Hence, say the true AGW believers and environmentalists, humanity must radically change the way of life in order to avoid an awful catastrophe - the climate Armageddon. Also, some climatologists assert that the emission of carbon dioxide will double within a few decades (instead of a few centuries which would be obtained by extrapolating the current trends, 23 percent since 1900). Here, there is an implicit assumption that fossil fuel consumption will explosively grow resulting in accelerating CO2 emissions, which is highly unlikely. This is an unjustified socio-economical hypothesis, not climatology - whatever this latter term means. Speculating about future consumption trends does not help much in predicting possible climatic states, even their primitive characteristics such as ups and downs of the average global surface temperature (GST). It would be much more useful, by the way, to analyze the sensitivity of social structures (in particular infrastructure) to climate changes in different regions rather than making highly indefinite prognoses of GST increase: the point is that climate warming is not necessarily 460 Climate as a Physical System detrimental; some countries and societies may profit from the regional temperature increase. 10.9 The Evil Role of Carbon Dioxide True AGW believers have selected a single factor - CO2 concentration - from a set of variables relevant for the climate system and resulting in the variations of the Earth's surface temperature. Although nobody can deny that CO2 is a greenhouse gas (a comparatively weak one), it can hardly ensure the entire contribution into such variations. We shall also see below that this gas can hardly be considered as the principal cause of global warming. At least the usual scientific-looking statement that human-produced CO2 represents the dominant radiative forcing, so radically shifting the Earth's energy balance as to induce the projected severe climatic consequences, cannot be proved. In this subsection we shall discuss from the physical position whether it is true that anthropogenic carbon dioxide is the primary factor driving the global warming. Curiously enough, the rather innocuous carbon dioxide (CO2) gas has recently begun playing an important part in politics and declared as the most harmful by the propaganda. This chemical substance (see about CO2 properties, e.g., in http://www.uigi.com/carbondioxide has moved to the center of attention as the so-called climate gas or greenhouse gas. The meaning of these metaphoric terms is that the surface of the Earth is allegedly heating up, with the concentration of CO2 being increased due to the human activities. There are four main "greenhouse gases" (GHG) in the atmosphere: water vapor H2O, carbon dioxide CO2, methane CH4, and nitrous oxide N2O, of which water vapor is by far the most efficient: its characteristic spectrum is more than three times wider than that of CO2 and is responsible for roughly 95 percent of the greenhouse effect (see, e.g., [274]). The Earth's atmosphere mainly consists of nitrogen N2 (about 78.0 percent), oxygen O2 (about 21.0 percent) and argon Ar (about 0.9 percent) which are not greenhouse gases because of negligible absorption in the infrared. Apart from these principal components, there are some small quantities of the rest gases such as water vapor H2O, carbon dioxide CO2 (about 0.035 percent), methane CH4, sulfur dioxide SO2, ammonia NH3, ozone O3, nitrous oxide 244 N2O, nitrogen trifluoride245 NF3, etc. Concentration of all these "impurities" is a function of coordinates and varies with time. For example, wind can easily displace and 244 Nitrous oxide (the "laughing gas") is contained in the atmosphere in very small concentrations (about 320 ∙10-9), but it is approximately 300 times more efficient as IR absorber compared to CO2 and its concentration is rapidly growing due to ubiquitous use of fertilizers. However, little attention is paid to this fact, probably due to political importance of modern agricultural technologies. 245 Nitrogen trifluoride (NF3) is mostly used in the production of electronic components. It has the greenhouse potential approximately 17 ∙104 that of CO2, with an estimated lifetime in the atmosphere about 700 years. The estimated production of NF3 amounts to 5000 m.t.; how much of this amount is released into the atmosphere seems to be an open question, see the details in: Hoag [247]. The Evil Role of Carbon Dioxide 461 flatten the local bunches of CO2 concentration as well as fluctuations of water vapor density. Does the global average of CO2 bring a more pronouncing effect than local heating due to enhanced heat generation near human dwelling places and industry sites? The latter phenomenon can be easily observed, e.g., by noticing that temperature in big cities is higher than between them. One can also note that paleometeorological studies show that there were sharp irregularities and oscillations of the carbon dioxide concentration in the atmosphere, and such CO2 variations did not necessarily coincide with warm periods in the past. For example, the Sahara Ice Age occurred when the CO2 level was an order of magnitude higher than today. What is the difference between CO2 at that time and today's anthropogenic one? Atmospheric levels of CO2 (and methane) naturally fluctuate, partly due to changes of the Earth's orbit resulting in variations of the incident sunlight. Hence, there is no compelling evidence that the observed human-induced increase in CO2 concentration has really resulted in the frightful greenhouse effect (Kauffman, J. M. Climate change reexamined. Journal of Scientific Exploration, v. 21, No.4, 723749, (2007)). I have already mentioned that the main "greenhouse gas" is generally known to be water vapor, causing about 60-70 percent of the greenhouse effect on Earth246, since water vapor absorbs much more infrared than CO2 (see the above brief discussion of physical absorption mechanisms) so that it is strange to ascribe the whole absorption and thus heating of the atmosphere to CO2 when there is about two orders of magnitude more potent IR absorber present nearby, which should thus dominate and swamp the effect of CO2. It is, however, curious that although this is a common knowledge among meteorologists and climatologists, but in the media or within governmental groups the overwhelming effect of water vapor tends to be altogether ignored or at least under-emphasized. Moreover, this distortion of the truth seems to be adopted by repetition: some people may even concede that it might be "a little misleading" to ignore the water vapor as the greenhouse gas, they tend nevertheless to call this neglect an accepted practice and defend it by claiming that it is customary to leave water vapor out. Furthermore, water vapor strongly affects weather and, consequently, climate through cloud formation changing radiation transfer conditions. This mechanism can be more pronounced than selective absorption and reemission of IR radiation by tiny quantities of CO2. Unfortunately, I could not find comparative studies in the scientific literature available to me. Besides, the water vapor concentration depends on the surface temperature, primarily on that of the ocean surface (which comprises 70 percent of the entire planet area). Transition of water masses into the gaseous phase is accompanied by cooling, so the whole thermal system of the Earth involving the hydrosphere becomes very complex, with many feedback mechanisms. Heat and mass 246 It is, however, not quite correct to assign a definite percentage of the greenhouse effect to a certain gas because the effects of different gases are not additive. So the given percentage must be understood as a crude estimate. 462 Climate as a Physical System transfer in the atmosphere on the global scale make climate modeling and parameter computations so difficult that it becomes hardly possible to indicate model accuracies. The current greenhouse effect is due to about 0.12 percent of the atmospheric CO2 generated by human activity (see the calculations in http://www.geocraft.com/WVFossils/greenhouse_data.html). So the usual statement that human activity since 1750 has warmed the climate seems to be wrong, and anthropogenically produced CO2 has no discernible effect on the global temperature. Recall also that the increase of the average surface temperature depends on the growth of the CO2 concentration only logarithmically, and a doubling of CO2 would roughly correspond to approximately 1C temperature change (this problem has already been explored by S. Arrhenius, the chemistry classic, [292]. Therefore, the panic about anthropogenic CO2 production is hard to comprehend. It seems utterly ridiculous that the increase of a tiny fraction of CO2 level would produce a global temperature increase of, say, 6 C i.e., over 20 percent of the whole atmospheric contribution into the Earth's mean terrestrial temperature believed to be about 33C. And there are approximately 30 times as many H2O molecules in the atmosphere as CO2 molecules (see, e.g., http://www.geocraft.com/WVFossils/greenhouse_data.html), with much more efficient absorption properties (fingerprint spectrum). Following the logic of AGW proponents, one should primarily ban water vapor production, for example, teapots and hydrogen-driven vehicles. It is easy to observe that carbon dioxide, water vapor and oxygen, all of them necessary for sustaining life, are produced at different locations at the Earth's surface: for example, carbon dioxide in large urban and industrial centers whereas water vapor over the oceans and oxygen mainly in rain forests. Transport of these gases is ensured by the atmospheric turbulence, which is a highly complicated process, not fully understood up till now. Besides, it would be important to note that from a strictly environmental point of view, CO2 does not present a severe pollution hazard as compared, e.g., to such agents as NOx, SO2, SO3 (H2SO4), CO, Hg, Cd, Pb, other metals, especially heavy ones. In fact, CO2 is not a pollutant at all. Declaring carbon dioxide as the most harmful substance exposes a certain ecological irrelevance of the whole "save the climate" campaign. Moreover, the absolute quantity of CO2 in the atmosphere may only be correlated with the climate temperature or be interpreted as its indicator because the gaseous composition of the atmosphere depends on its temperature, even locally. If this is the case, then stating that slight increase of CO2 produces catastrophic temperature variations is wagging the dog. In principle, variations of the average surface temperature ∆T and of the carbon dioxide average concentration δCCO2 can not necessarily be correlated. As an illustration of some limited relevance of carbon dioxide as a unique determinant of atmospheric temperatures one might recall that about half a million years ago the concentration of CO2 reached the level of about an order of magnitude higher than today, but everywhere on the Earth there was an Ice Age. On the contrary, before that period the dinosaurs had lived in a The Evil Role of Carbon Dioxide 463 much hotter climate for more than a hundred million years. There exist estimates showing that during a long time of the Mesozoic the ocean level reached the mark at least 100 meters higher than today (some estimates even give up to 250 meters), there were no polar ice caps, and the Earth's terrestrial temperatures were much more homogeneous than today: the difference between polar temperatures and those at the equator was on average only about 25C, with the poles being about 50C warmer than today. Therefore, the mean global surface temperature (GST) was also much higher (see, e.g., [275]). Were the dinosaurs also incessantly polluting the atmosphere with nature-hostile CO2? One should only attempt to answer the "paleoclimatic" question: why was the climate so warm at those periods (from Triassic to Cretaceous), and for such a long time? There is an assumption put forward by the scientific wing of environmentalists and climate alarmists that CO2 molecules re-radiate back to the ground the substantial part of thermal radiation emitted by the Earth thus essentially heating it. More specifically, in the process of relaxation the infrared photons are re-emitted by CO2 molecules in the direction of the Earth's surface thus creating a supplementary energy flux, and the total thermal power absorbed by the surface substantially exceeds the power sent to the Earth by the Sun. It is this thermal power amplification that is known as the greenhouse effect. As a result, the Earth's surface temperature becomes noticeably higher than in the absence of additional quantities of carbon dioxide. Superfluous molecules of CO2 so radically improve the blanket feature of the atmosphere that overheating may occur. We have already discussed that the molecule of CO2 absorbs electromagnetic (infrared) radiation due to rotational and vibrational-rotational transitions in three narrow bands around 2.7, 4.3, and 15.0 μm i.e., rather selectively. The total thermal spectrum of the IR radiation corresponding to the Earth's surface temperatures comprises about 100 μm. This means that the major part of the infrared radiation, lying outside of the absorption spectral domains of the CO2 molecule, passes through the atmosphere into outer space practically without loss. The CO2 concentration is, as we have seen, rather small (the current alarm-producing concentration is CCO2 ≈0.038 percent), so the question persistently arises: how can a linear growth of a comparatively tiny impurity concentration be responsible for catastrophic climate and weather changes? In any case, one should have the data of the total atmospheric absorption in the infrared versus CO2 selective absorption, and it seems that actual measurements and numbers with well-defined error margins do not exist yet. Carbon dioxide is often designated by environmentalists as an "ecological poison" so that one must keep its global concentration at some arguably manageable level. What is exactly this level? Maybe I was not diligent enough, but I could not find empirical evidence of the sensitivity of the variation ∆T of the average surface temperature T to anthropogenic CO2 emission (from 280 ppm to the current level of 380 ppm). Furthermore, the last decade showed an approximately 0.1 degree cooling of the atmosphere (see, e.g., the graph http://farm3.static.flickr.com/2600/3670965001_4249d9a68e_b.jpg). In fact, the CO2 gas is indispensable for life on the Earth, and geological periods 464 Climate as a Physical System characterized by its increased quantities were also characterized by the most rapid development of the biosphere. It would of course be nice to prove that nowadays CO2 is really a great problem, but that has not been done, contrary to what one may have read or heard in the media. There exist only sparse observations and measurements characterized by not always clearly defined accuracy as well as some largely qualitative models of the "greenhouse" effect. The latter may very well be a fictitious theory, at least for the current quantities of carbon dioxide ( ≤0.038 percent). Is it really a scary concentration resulting in a significant rise of the Earth's temperature and requiring immediate drastic measures to restructure economy and consumption habits? However, many billions of dollars and euros are evaporated due to excited emotions over the so-called catastrophic climate changes (C3). One must be a true believer in global warming and possibly a true believer in general, i.e., a person inclined to believe in anything supported by the majority of other people - God, fashion, vampires, life after death, supremacy of one's nation, intelligent design, etc., not to notice a banal brainwashing. The climate controversy is not so much about whether the Earth's surface is getting warmer, it may well happen due to a number of physical reasons (see above), but about whether it is human activity that makes it warmer, and to what extent. But this is basically a scientific question, not a political issue - in fact a physical problem requiring a correct physical solution. The usual argument of climate catastrophists is that the concentration of CO2 grows rapidly due to human activities (from 0.028 to 0.038 percent since the beginning of industrial revolution), and this growth of concentration must be accompanied by a rapid heating of the atmosphere. However, this is not more than a hypothesis based on correlation or, at best, a model; the exact physical mechanism of the process of atmospheric (and hydrospheric) heating remains unclear. The major portion of CO2 is dissolved in the ocean's water, and with the increasing temperature equilibrium solubility of most gases in water, including CO2, is diminished. Hence, with the general heating caused by any external factor, large quantities of carbon dioxide are transferred from the hydrosphere to the atmosphere. The main factor for this external heating is most probably the rising solar activity, which happened many times in the Earth's history when ice ages alternated with warm periods. There exist two (at least!) informative experimental sources allowing one to obtain paleoclimatic data: boring holes in Greenland and Antarctica, with the hole depth reaching several kilometers. Samples of kern are taken, containing air bubbles from those distant epochs when the material obtained, e.g., snow or ice, was formed. Spectroscopic analysis allows one to determine the gaseous composition (percentage of O2, N2, CO2, etc.) with very good accuracy; besides, the temperature related to the time when snow was falling, and ice was formed as well as some other physical characteristics can also be obtained. All classical ice ages, warm periods, and corresponding quantities of CO2 in the atmosphere have been established by this method. The reported result was that rising concentrations of carbon dioxide did not precede warming, but followed it. This fact can be readily explained: 90 percent of CO2 The Evil Role of Carbon Dioxide 465 is dissolved in the world's ocean, and when the latter is heated large quantities of carbon dioxide transit to the atmosphere. This process is of course reversible: during the cool periods, the ocean absorbs carbon dioxide. This quasi-oscillatory process, with both positive and negative feedback components, is eternal. There exist, as just mentioned, various feedback mechanisms affecting climatic factors. The Earth's climate seems to be balanced within certain (so far not exactly known) margins. If it is pushed too far, a series of positive feedbacks can be triggered that would cause substantial changes. For instance, rapid heating of soil in the permafrost regions is hypothesized to release more methane which would amplify warming. This is a domino effect. However, it is hardly possible to say with a scientifically accepted accuracy, firstly, how likely these scary scenarios are and, secondly, what exactly is the human component in such runaway heating. The environmentalist doomsayers247 usually speculate on unquantifiable risks. As far as carbon dioxide is concerned, the salient example of the feedback is the growth of photosynthesizing biomass with the enhanced CO2 concentration and increased surface temperature which diminishes the amount of CO2 in the atmosphere due to intensified photosynthetic activity. So, the question: "By how much has the CO2 concentration increased since the industrial revolution owing to human activity?" may be totally irrelevant to climate variability. Instead one can pose another question namely "What fraction of the carbon dioxide being exchanged per unit time in the whole climatic system (i.e., in the atmosphere, hydrosphere, biosphere, lithosphere, and cryosphere combined, see above) is due to human activities?" I suspect that the human-conditioned percentage of CO2 in the real-time ecosystem kinetics will be negligible, and thus human involvement into the climate dynamics is strongly exaggerated. This is, however, a conjecture; one must estimate human involvement by considering a correct kinetic model for the entire ecosystem. Such a model, provided it included complete radiation transfer equations, would give local temperatures as a by-product. From the physical viewpoint, the equilibrium temperature setup is a kinetic problem, in the simplest case an energy balance problem. This holds also for the Earth's surface. In this connection I do not understand why certain terms in the kinetic relationships, such as infrared absorption and reemission towards the Earth's surface, are taken into account whereas the counterbalance terms, such as variations of atmospheric transparency due to anthropogenic and natural impurities, are neglected or at least not fully considered. What about correctly accounting for fluctuations of the Earth's orbit and, in general, of direct and diffuse insolation of the Earth's surface? And even if one takes the vicious role of CO2 seriously, what about the carbon sinks? In short, the terms ensuring positive contribution into the energy balance are retained whereas those resulting in diminished temperatures are mainly disregarded. Indeed, the most essential human influence on the atmosphere has always been the release of aerosols and various gases, some 247 And AGW skeptics are called "naysayers" by them. 466 Climate as a Physical System of them known as greenhouse gases. Nowadays, due to "political forcing" the role of the latter is emphasized whereas the role of tropospheric aerosols is somehow blurred over. Thus, the global dimming caused by aerosols and clouds may cause a drastic cooling effect, as was demonstrated by the consequences of volcano eruptions. As to the absorbing aerosols of anthropogenic origin, they can diminish the averaged solar irradiation by at least several W/m2 (see, e.g., [237]). Clouds can also reduce the total solar irradiance, thus contributing to negative radiative forcing. In all cases, spatial and temporal distribution of clouds must have a significant effect on the diurnal asymmetry (one may notice here that CO2 produces more warming during the night than in the daytime). Let us recall that in this book we are basically discussing scientific modeling. However, most of the "science" backing up global warming has been produced by computer modeling, the latter being only a part of scientific modeling. I trust in science, but I do not always trust in widely publicized computer models (WPCM), due to a great lot of people, money, and politics involved. Such models are somewhat similar to Hollywood blockbusters, where numbers are pulled out of the hat. The trouble with computer modeling is that it is typically a bad science since computer models can be tinkered in the desired direction to get any arbitrary result, even physically meaningless. The climate (defined as the averaged weather) may change, but attribute its variations to a single factor, small and not quite correctly accounted for in the radiation transfer, namely increased CO2 concentration is hard for me to understand. The assertion dT̅ dt ⁄ = const∙dc̅/dt, where T̅ , c̅ are landaveraged temperature and CO2 concentration does not fully reflect the physical reality, I am afraid. And I dare to predict that if eventually the AGW propaganda campaign fails, which is quite probable due to the interplay of a number of physical factors influencing the climate, carbon dioxide will still be pictured as an evil, with the shifted propaganda focus: for example, increased acidity of the world's ocean due to anthropogenic CO2, killing the sea organisms on a global scale can be declared an actual catastrophe. In short, personally I do not believe in the catastrophic human-induced global warming which is claimed to be the doom of mankind. "It is global warming that will surely cause the fall of civilization and perhaps the extinction of Homo Sapient," I took this sentence from a book on nuclear energy [238], in many respects quite interesting and informative248. I do not quite understand how the microscale effects (on the scale of 10-1 -105 cm) such as plumes, car exhausts and the like can affect climate on the planetary scale much stronger than global or even astronomical (space) factors. Or why occasional hurricanes (which had always occurred with high but unmeasured strength and frequency long before the CO2 panic was spread over enlightened circles), wildfires and cyclical melt-accretion of polar ice caps are unequivocal evidence of the assertion that human activity causes global 248 Probably the author, a well-known writer and journalist, formerly an anti-nuclear activist, still shares popular environmentalist views, which are reflected in the sentence cited. The Evil Role of Carbon Dioxide 467 warming through CO2 release. Nobody has proved any connection between CO2 release and hurricanes or floods. The climate changes may happen, of course, because climate is a dynamical system influenced by numerous agents, and if such changes happen they naturally have a certain sign of time-derivative for the local temperature on a certain time interval. Now in some measurement sites (e.g., near "urban heat islands") this derivative may be positive. I wonder what climate alarmists will say if in some time the sign of dT/dt, where T is the local temperature, will change, at least for many locations? That humans are releasing too much sulfur acid? Or that CO2-provoked global warming leads to global cooling, e.g., due to the meridional air mass transfer? By the way, I dare to think that global warming is better than global cooling, in particular, because much less energy is to be spent to sustain life. And I hope that people will eventually be sick and tired of the outcries that the sky is falling and therefore everyone (except selected few, of course) must get back into the caves right away. As far as the economic effect of the enforced CO2 reduction goes, it can become detrimental, but this is not quite obvious. For instance, one can easily calculate the costs of replacing all coal-fired power plants (they may really be dirty, but not because of carbon dioxide) by wind and solar farms, the generated power being kept constant. One can also calculate the reduction of CO2 emission in the process of such replacement and, by using, e.g., the IPCC anti-CO2 method, translate this hypothetic removal of CO2 from the atmosphere into its greenhouse heating (say, for the USA which is considered the worst thermal pollutant). I made some crude estimates and obtained 0.1◦ Celsius. And the costs are of the order of a trillion US dollars, these costs are borne by the public, of course. A trillion for hypothetical 0.1◦ Celsius? Not too expensive? Maybe I was wrong, yet everyone can reproduce such estimates using some officially published data. Of course, one needs intense, emotionally loaded propaganda campaigns to substantiate such spending, for instance, horror stories like inevitable droughts, floods, hurricanes, tornadoes, tsunamis, etc. Science, however, strives to not confuse talk with empirical evidence. Controlling human-produced CO2 is like chasing a ghost; in fact, there are more urgent things to do than to please politicians. The attacks on CO2 are so ferocious as if there were no other harmful ecological factors. The latter is obviously not true, but to solve real environmental problems is much more difficult than to create panic and exploit it for political purposes. There are, for instance, places where people get sick due to the detrimental state of the local atmosphere, and this has nothing to do with CO2, but environmental organizations react very meekly on such information. Moreover, global steel consumption is rising by about four percent a year (see, e.g., http://www.worldsteel.org), this growth is especially pronounced in rapidly developing economies such as China and India. Steel production is accompanied by massive emissions of really harmful substances, primarily sulfur dioxide (SO2), nitrogen oxides (NO, N2O, and others), dust, etc. Large energy inputs as well as large amounts of carbon (e.g., in the form of coke) 468 Climate as a Physical System needed to produce steel inevitably release CO2, but it would be unrealistic to curb the production of steel, drastically needed by emerging economies, on the bogus pretext of climate saving. The effect of climate-motivated political impact on the industry must be carefully estimated because it implies the redistribution of resources and productive capacities. If the entire coal mining industry is obstructed, it will mean a lot of unemployment and possible crisis in many regions of the world. Besides, coal-fired energy supply accounts for about 40 percent of heat and electricity generation so that the energy vacuum will be left, which hardly can be compensated for by the "renewable" energy sources - the favorite thesis of environmentalists which seems to be disconnected from reality. By the way, what will be the concentration of CO2 in the atmosphere in the limiting case when all fossil fuels on Earth will be burned? The projections differ so wildly that I don't know which to select. Yet all said does not mean of course that one should abandon developing new technologies and specifically new energy sources which do not use fossil fuel (see also below). In principle, instead of fossils the fuel can be made of CO2 and sunlight, for instance, using bioreactors. The blame put on human industrial activity in global warming is somewhat exaggerated. For example, I do not quite understand the hysterical over-reaction with car 249 production and use in connection with CO2 emission, although many people (as well as some TV reporters) confuse CO, which is highly toxic, and CO2, which is not 250. Of all human produced greenhouse gases, automobiles are estimated to be responsible for 10-15 percent whereas the breweries are accountable for several percent. At least the CO2 emission from beer (and wine) making is not negligible. Is it sufficient to curb the auto industry or one should also stop drinking beer and wine? Because of environmental constraints cars tend to become less reliable, safe and robust - in compliance with purely arbitrary, established by bureaucrats, emission norms. "Green" environmentalists urge to reduce the output of cars. By the way, it has already become a truism that a cow produces the same amount of greenhouse gases as an automobile, however mainly not CO2 but methane CH4. Does it follow from here that we should kill the cows or drastically reduce the livestock? There were the proposals to introduce the "fart tax" imposed on dairy farmers. Or a human being produces over 1 kg CO2 per 24 hours251 which makes about 6 million tons human-exhaled carbon dioxide a day i.e., the annual amount of more than 2000 megatons CO2. For comparison, the second (after China) largest national producer of industrial CO2 emissions, the US, has an estimated annual production of about 5800 megatons (see, e.g., 249 Interestingly enough, trucks and other heavy vehicles that produce more harmful substances and more CO2, which is strictly speaking not a harmful substance, are somehow placed outside alarmistic attacks. Numerous fossil-fuel machines used by the military who could not care less about environmental effects are totally immune from any "save the climate" charges, and I doubt that heavy tanks, supersonic fighter jets and strategic bombers will ever be powered by batteries. 250 Carbon dioxide is toxic only in high concentrations, about 10 percent or more. 251 This is a very conservative estimate, actually more. The Evil Role of Carbon Dioxide 469 http://en.wikipedia.org/wiki/List_of_countries_by_carbon_dioxide_emission s ). Other sources estimate the whole world's annual emission of CO2 on the level of 10000 megatons. Still others give the overall industrial activity of humans resulting in annual CO2 output an order of magnitude less. At any rate, human breathing accounts for at least 10-15 percent of the annual production of CO2. Besides, humans are not the only species exhaling carbon dioxide. One should not forget also such powerful sources of greenhouse gases as volcanoes, and on top of all this, wildfires can add about 1500 megatons of carbon dioxide. Anyway, the figures for industrially produced CO2 are comparable with biologically produced carbon dioxide. It would also be interesting to compare humans with vehicles as far as CO2 production goes. We can take that a human exhales about 10 ml CO2 per second which would roughly correspond with 1 kg CO2 per 24 hours (about 1 mole/hour). For the entire human population, it amounts to 108 l/s. An average vehicle exhaust can be estimated to produce carbon dioxide at a rate about 1 l/s. If we take that there are 108 automobiles each moment on the roads of the world (actually less), we get the total CO2 production by the vehicles amounting to 108 l/s i.e., of the same order of magnitude as by the humans. And there are a great lot of other species. Following the ambivalent anti-anthropocentric ethic of environmentalists, one should do something urgent with climate-hostile human breathing. Perhaps people should exercise less? Forbid jogging and refrain from lovemaking? Severely restrict the birth rate? Or the ultimate logic of environmentalists could be to kill all humans: this would remove letter "A" in AGW. Let us now briefly summarize the CO2 arguments of the "green" circles and their sympathizers supporting catastrophic scenarios of climatic change. One must, however, admit that the level of energy consumption per capita does not grow during the last 30 years. It seems that people have learned to some extent how to save energy. It is only the total energy consumption in the world that continues to increase - owing to the global population growth. Therefore, energy consumption (which accounts for approximately 70 percent of CO2 emission) grows slower than assumed by the AGW catastrophists. In fact, the probability of catastrophic prognoses seems to be rather small if not close to zero, since there is simply not enough carbon (and consequently hydrocarbons) capable to ensure the corresponding increase of CO2 in the atmosphere (by 4-5 degrees Celsius) i.e., 3-4 times the current level (0.038 per cent, about 36 percent over the level taken as reference, that of year 1750, before the European "industrial revolution".)252. In other words, the "catastrophic" CO2 level should exceed 0.12 percent. Here, one 252 Previously often cited values were V∼1.338 · 109km3, surface of the world ocean S∼361.3 · 106km2, average depth h̅ ∼3700m. Volume V of ocean waters amounts to about 1/800 of the planetary volume. Mass of ocean waters comprises approximately 96.5 percent of mass of the entire hydrosphere which, in its turn, makes up about 1/400 of the Earth's mass. Volume of water vapor in the atmosphere is estimated to be approximately 13000km3, with the renewal period of about 10 days. It is interesting to compare this period with the estimated renewal time for the whole ocean: about two million years. 470 Climate as a Physical System can make two remarks. First of all, to reach such a level of CO2 concentration in the atmosphere one has to burn a great amount of hydrocarbons, e.g., oil, natural gas, coal. The question is: can one obtain the corresponding amount of fossil fuel from the available sources within the observed period, say, within the 21st century? The second remark: is it only the absolute concentration of greenhouse gases in the atmosphere that matters, not the rate of its increase - time derivative of the concentration? This rate seems to be increasing (different sources give the estimates 0.5-3 percent per year, I do not know what to believe), at least the second time-derivative of CO2 atmospheric concentration seems to be non-negative. What is the real role of this time derivative? Catastrophic switches in natural systems may occur, with the transition into radically new states, but the assertion that it is humans who are causing such abrupt changes is, to put it mildly, highly controversial. Are humans responsible for the drift and possible switch of the Earth's magnetic poles? According to paleoclimatic studies the CO2 atmospheric concentration in the epoch of dinosaurs was higher than today. It means that the climate of the Earth has survived different states and continued to support the biosphere. One can of course imagine that if the CO2 concentration reaches a certain critical level in terms of radiative forcing (RF) then near-surface temperatures can increase in such a rate that human civilizations will not be able to adapt to these changes. This maladaptation to fast changes can be dangerous, not the slow (adiabatic) temperature increase per se, by the way in various locations and during different seasons. Besides, the current comparatively warm climatic period may be changed by a colder one, potentially bringing more disastrous effects than warming. Such misfortunes occurred, for example, in the third millennium BC or in the end of 17th century (Little Ice Age), when hunger and diseases eradicated a large portion of the population. Humans have probably no experience how to effectively survive under rapidly varying climatic conditions which may arise in the case of catastrophic developments within the next several decades. But to ascribe all today's rainfalls, storms, hurricanes and even cold weather only to anthropogenic climate warming i.e., induced exclusively by human activity is either a delusion or a deliberate lie. 10.10 The Role of the Sun Probably the most serious argument in favor of anthropogenic (more exactly - green- house) global warming is the observational data suggesting that it is difficult to ascribe the increase of global average temperature that occurred since 1975 entirely to the Sun-Earth coupling (see, e.g., [239]). Although the Sun is by large the dominant energy source on Earth, some others - such as manifestation of internal planetary heat through the volcanic activity, burning fossil fuels, exploding nuclear devices - may have locally a comparable impact on the environment largely determining its temperature. However, such effects are local in space and time and, moreover, should be treated by the methods of statistical dynamics applied to the climate system. The Role of the Sun 471 Historical records demonstrate a relationship between solar activity and, say, winter temperatures, at least at the local and regional level. Already this fact makes it more difficult to assess the reality of AGW without relying on computer models. One can notice that each model focuses on its own specific details, which fact makes climate modeling heavily dependent on expert opinions. Therefore, the weight of determining factors as well as robustness and reliability of climate prognoses depends more on subjective judgments than on fundamental physical principles. It would also be interesting to observe that solar activity ceased to influence the Earth's climate almost synchronously with the creation of the IPCC (1990s): since that time sharp warming has become accompanied by the fall of solar activity. Although it would be stupid to deny the presence of anthropogenic factors (see above) especially when they are growing, but their escalating role as compared with natural and very powerful ones such as intensity variations of solar radiation, influence of massive planets and deviations of the Earth's orbit has not been quantitatively elucidated. It may well happen that the main cause of climate change is still solar irradiance253, with the delay effects being taken into account. For instance, the shortwave radiation from the Sun propagates through the ocean waters and heats their deeper layers, and one should integrate over time the radiation absorption by the ocean (mainly in the shortwave range). Physically, one can understand this absorption in deeper strata of the ocean in the following way: the absorption coefficient μ in the phenomenological Lambert's law, dI(λ, z) dz ⁄ = μI(λ, z), where I(λ, z) is the solar radiation intensity, z is the vertical (ocean depth) coordinate, which depends on the wavelength λ (and also on water impurities, in particular, on salinity). Since the thermal conductivity as well as diffusion and convection in water are relatively slow, processes and the water strata remain stable, the heat may be stored for years in the ocean254 before it is eventually transferred to the atmosphere (mainly through the horizontal flows to the polar regions). This physical mechanism of time-delayed climate response to solar activity (mainly characterized by the sunspot numbers) should exist, but regretfully I was unable to find calculations of the corresponding time lag in the literature available to me. Yet in general the impulse response of the climatic system to the Sun's activity simplistically manifested by the surface temperature is not a delta-function of time but has 253 It has been noted in most encyclopedias that the very word "climate" originates from the Greek word "klima" which means inclination and was referred in ancient times to the inclination angle of the Sun, see e.g. http://en.wikipedia.org/wiki/Climate. 254 Although the main absorption of the solar energy occurs in the water layers near the ocean surface, the heat stored in deeper strata is still significant which is manifested by ocean flows (see, e.g., Primeau [248],) as well as by the fact that deep water temperature is well over the freezing point everywhere. The most part of the shortwave fraction of solar energy is absorbed in tropical latitudes where this fraction is more pronounced, which enhances the effect. Recall that approximately 70 percent of the total terrestrial surface are covered by ocean waters, with 90 percent of the total volume of the ocean being below the thermocline. 472 Climate as a Physical System a finite width τ that is determined by a number of physical processes and may well reach dozens of years. In short, it seems to be clear that variations of solar activity should seriously influence the climate - the thesis that is emphatically refuted by environmentalists. Discussing the role of the Sun in possible global warming has been lately regarded as primitive, non-scientific, and almost indecent, although the modulation of solar radiation transfer by the atmosphere is a passive effect and cannot lead to substantial global temperature shifts. If modulations in solar radiation transfer by the atmosphere cannot lead to global temperature shifts, does it not mean that the role of the Sun is not big? Yet in a well-known book by S. R. Weart "The Discovery of Global Warming" (Harvard University Press, Harvard, 2003) the following statement is referenced: "For a young climate researcher to entertain any statement of sunweather relationships was to brand oneself a crank". The forcing of TSI (total solar irradiance) is declared by the engaged climatologists as a negligible factor compared to human influence 255 . Advocates of man-made global warming mockingly beat the drums: "Humans are small, Sun is big". However, the point is not in this trivial observation, but in the hard physical fact that all phenomena on the Earth are strongly coupled with the processes in the Sun. To completely negate the solar activity, the latter being measured by sunspot numbers, as a climatic factor is at least shortsighted since modulations of the energy output from the Sun are significant and their measurements have not been finalized yet. One can assume that foreseeing future climate variations heavily depends on the ability to envisage dynamics of the SunEarth physical coupling, which is an interdisciplinary task. Specifically for the Earth's surface temperature, its global variations of at least 0.1 ◦ Celsius associated with the 11 year solar cycle have been extracted (see, e.g., http://data.giss.nasa.gov/gistemp/2007). This magnitude is comparable with the estimates provided by the computer climate simulations. However, the signal directly originating from the Earth's irradiance by the Sun is difficult to separate from other causes of the terrestrial temperature variations including fluctuations. We have already discussed that the Earth's climate is determined by a delicate kinetic balance between the incoming solar radiation (in all spectral domains plus corpuscular flux) and outgoing thermal radiation, this balance being mediated by the composition of the Earth's atmosphere. Both types of the forcing agents - natural ones such as solar variations and volcanic emissions as well as anthropogenic ones such as greenhouse gases and sulfate aerosols significantly affect the corresponding kinetic equations. Changes in the kinetic equations mostly occur in the coefficients, with such changes acting in opposite directions (e.g., cooling effects of aerosols can be partly neutralized by heating due to the emission of greenhouse gases). Solar irradiance variations contain both the direct part entering the source term 255 Recall that the Earth's surface average temperature was estimated by the IPCC to have increased by approximately 0.6C over the past century whereas the TSI forcing have contributed only about 0.2C over the same period. The Role of the Sun 473 (the incoming radiation) and the coefficient term such as, e.g., due to changes in the ultraviolet component leading to the modulations of ozone production rate. Ozone is a rather potent greenhouse gas since it absorbs long-wave infrared radiation (LWIR) emitted from the Earth's surface and thus contributes to the heating of the atmosphere. Moreover, ozone in the lower stratosphere where temperatures of 70C to 80C are encountered is thought to have a much larger effect on the radiative balance as compared to ozone at surface level: it can absorb infrared radiation and re-emit the latter with the wavelength corresponding to about 18 Celsius (http://www.ozonelayer.noaa.gov/science/basics.htm. This means that the impact of ozone leads to effective warming of the gas in the troposphere. This is an example of complex interactions when the indirect partial effect may be of a similar magnitude, or even larger, than the direct effect. Delicate interplay of the factors present, both explicitly and implicitly, in the kinetic equation determining the radiative balance can make the multiparameter climatic system highly sensitive to small variations of different factors, easily bringing it to unstable or chaotic domains (see, e.g., [240]). In this situation, there is more of a consensus than real science about the relative role of natural, primarily TSI and anthropogenic (such as CO2) forcing agents, i.e., mainly a social effect. It is interesting to notice that the generally perceived role of solar variations in the observed climate dynamics changes - in fact oscillates - with time. We have already discussed on various occasions that it is usually quite difficult to disentangle the sociological from the scientific. The solar activity256 minima have been observed to be correlated with colder temperatures of the Earth's surface, an example was the notorious Little Ice Age in Europe, North America and possibly in other parts of the world in the 17th century ascribed to the "Maunder Minimum" (see, e.g., http://en.wikipedia.org/wiki/Maunder_Minimum). In Europe, many dwellings perished because of starvation during the Little Ice Age. However, some scholars who insist on anthropogenic climate changes deny the causal link between the lull in solar activity depicted by the Maunder Minimum and bitter cold temperatures during the Little Ice Age, considering the overlap of these two periods as a statistical coincidence257. The AGW concept is based on 256 The Sun's activity is in general an integral notion being understood not only in terms of sunspots, but also accounting for changes in total irradiance, ultraviolet irradiance, magnetic flux variations, solar wind, energetic solar particles, variations of the size and intensity of heliosphere, etc. 257 Curiously enough, the same scholars do not regard the analogy in two time series - temperature data and CO2 emissions - expressed by the controversial "hockey stick" (see e.g., http://en.wikipedia.org/wiki/Hockey_stick_controversy and Holland [249]) as a statistical co-incidence. In general, one can add a zero-centered random number to the previous value and get a variety of statistical series similar to the random walk series. When plotted, such series can resemble the "hockey stick" (for amusement, I have done it with MS Excel, but probably such products as SPSS or "Statistica" are suited better.). The usual "hockey stick" argument means no more than the fact that one data set (temperature reconstructions) matches another (CO2 concentration) over some arbitrarily selected averaging or calibration period. In this process one can obtain as many "hockey sticks" as one desires, by putting a variety of data 474 Climate as a Physical System one-to-one correspondence between CO2 concentration and temperature rise whereas looking at time series, one cannot exclude the possibility that the overall temperature behavior is flatter than the CO2 concentration increase. If this is the case, then the orthodox AGW theory does not seem to hold. There exist many other notable relationships - pure coincidences or not - between the Sun's activity and terrestrial climate. One more example is the so-called medieval climatic optimum (MCO) that was observed in the 11th through 13th centuries and coincided with the Grand Maximum of the solar activity (see, e.g.,[241]) 258. Maybe it sounds non-scientific or even "cranky" but a striking thing is that there existed a nearly one-to-one agreement between the periods of average temperature changes (as best they could be known) and solar activity modulation, although the climate alarmists deny this coincidence. There are data (though also controversial) that the Sun was in a state of unusually high activity for about the last 60 years of the 20th century. A similar period of enhanced solar activity (however, characterized by a significantly fewer number of sunspots than during this last period of activity) occurred in the Middle Ages approximately from 1100 to 1250. It is approximately at that period that the above-mentioned medieval climatic optimum happened on the Earth when, for instance, the Vikings settled down in Greenland and Iceland. The Vikings' colonies were recorded to have flourished for several centuries and deteriorated due to starvation and cold during the Little Ice Age. Today, we can survive an analogous warm period when, in addition, the enhanced solar activity works in phase with the possible anthropogenic factors. One should explore this controversial issue further, of course, as well as the current and historical variations of solar activity. In particular it is unlikely that a reliable answer to the crucial question "On what time scales can the Sun affect terrestrial climate?" does exist. 10.11 Limitations of Current Climate Modeling In the current political and media discussions the issue of accuracy in AGW forecasts has been somehow covered up by slogans and fears. Yet the question of accuracy is always of paramount importance in physics, and the reliability of scientific prognoses essentially depends on it. It is clear that the accuracy of computer model output (the error margins) cannot be better than that of the input data. Recall that most data in geophysics have a large uncertainty corridor. Besides, the complexity of climate science compels one to make sets (e.g., population growth, number of newspapers, scientific publications, bicycles, etc.). There may even be lucky "hockey sticks", for example, in personal or corporate budgets. 258 One can object that the book issued in 1975 is totally outdated and does not reflect modern opinions. I don't understand such objections: the book contains hard scientific evidence which cannot be outdated by the very definition of science. This is not a political journalism, after all. In the same fashion, one can merely declare outdated the works by Newton, Boltzmann, Gibbs, Einstein, Poincaré, Einstein, Rutherford, and many other scientists. Limitations of Current Climate Modeling 475 many simplifying assumptions which only can lower the accuracy. Therefore, there exist a lot of indeterminacies in climate dynamics, and the error margins of climate models must be specially appraised by independent scientists before political decisions about allocation of resources are made. Current mathematical models of climate are basically those of fluid dynamics, at least they are centered about the description of fluid motion. The kernel of these models is devoted to describing the fluid motion in the ocean and the atmosphere. But this is far from describing the real world we are living in, with plenty of intricate physics as well as chemical, biological, geological, anthropological, social, etc. factors influencing the climate variability. It is hard to understand from the present mathematical models (and, of course, totally impossible from political discussions) how to separate the CO2 produced by fossil-fuel incineration from its conversion in biomass. Seasonal accumulation and release of carbon dioxide is a spatial-dependent kinetic process that should be coupled to fluid dynamics and radiation transfer equations of climate modelling. It is clear that since CO2 hidden in the biomass and soil is an important reservoir of carbon comparable with that produced by human activities, biological processes must be an important link in CO2-based climate projections. However, such processes are usually not included in climate modeling. I also could not find a satisfactory solution of the energy balance problem based on the radiation transfer in the atmosphere (a detailed kinetic approach and not only energy balance). It is this problem that should theoretically corroborate or disprove the greenhouse effect concept. Models that I have come across are mostly of a particular character or provide crude estimates, many essential factors being neglected. What climatologists really know is based on very limited paleoclimatic observations and on a bunch of computer models. From the analysis of biological effects on climate changes, would probably follow that one can bind excessive CO2 with the help of appropriate farming technology, land management and genetic engineering. 259 These advanced agricultural techniques have nothing to do with current mathematical models of climate and still less with ideological denunciations of environmentalists. There are, however, certain hallucinatory ideas of socalled geoengineering, for instance to spray many megatons of sulfuric acid in the atmosphere, with the possible grave consequences such as acid rains.260 Another fancy idea is the CO2 underground sequestration. Such projects may also involve a considerable risk since in the case of a leak all living organisms within the layer of 1.5-2 meters from the ground, i.e., humans and domestic animals, in the vicinity of the reservoir of CO2 will be killed. Besides, people do not understand the climate system well enough to take radical decisions about influencing it on a global scale (geoengineering). But I think one should 259 In fact, carbon dioxide is precious for the biosphere and comparatively rare - the actuality environmentalists and climate alarmists always seem to forget. 260 By the way, where are the precautionary environmentalists who always fear the unknown side effects? The fact that the environmentalists do not object signifies that there is more ideology than ecology in their position. 476 Climate as a Physical System not worry too much: whether or not one thinks that radical actions should be urgently taken, they are unlikely to follow. In order to evade ambiguity, I can state my position right away: I am neither for nor against global warming (GW). It may happen. Moreover, it probably happens. More precisely, the warming probably occurs in the current period - there exist climatic cycles. And it must even have an anthropogenic component (AGW), at least locally, which can be seen already from the fact that average temperatures are higher in the cities than in rural dwellings. However, it seems to be only a belief that the warming is completely man-made. Relative roles played by the natural (such as SunEarth coupling or volcanic activity) and anthropogenic (such as greenhouse gases or sulfide aerosols emission) factors as well as their interplay are far from being elucidated. Quantitative results of comparing natural and anthropogenic forcings do not have a sufficient accuracy to guarantee unequivocal statements 261. Furthermore, due to multitudes of parameters important to determine the dynamics and equilibrium states of the Earth's climatic system I don't think reliable prognoses for its evolution could be feasible, at least for the current state of physical and mathematical knowledge. Computer codes in climate modeling, at least those that I could come across, are based on strikingly shallow physical theories. Furthermore, there is even an underlying philosophy justifying this apparent simplicity of climate physics: climate is allegedly no more complex as a heat engine. I don't think that simplistic nearly-linear models readily implemented in computer codes are relevant for obtaining the values of average terrestrial temperatures for 50-100 years ahead. Therefore, I suspect that the importance of the human component (letter A added to GW) is, using again Mark Twain's famous expression, greatly exaggerated. And please recall that vanity, hubris is a serious sin.262 261 The role of the Sun-Earth coupling is often a priori declared negligible compared with anthropogenic factors. Those who still dare to insist on an accurate assessment of varying insolation as a climatic factor are often labeled as anti-scientific retrogrades. This question seems to the adherents of AGW trivial, long time passé, and only producing yawns (see below more on the possible role of the Sun). 262 This book was written before the Nobel Price 2021 for physics was granted to Syukuro Manabe and Klaus Hasselmann for "leading the foundation of our knowledge of the Earth's climate and how humanity influences it". Back in the 1960s Syukuro Manabe, Japanese American meteorologist and climatologist, was working on physical models to explore the interaction between radiation balance and the vertical transport of air masses. Based on stratospheric and tropospheric measurements showing that temperatures rise in the lower atmosphere and fall in the upper, the scientist argues that the cause of temperature fluctuations are changes in CO2 concentrations, not solar activity. The conclusion that followed from this model (at least the way it was interpreted) was that oxygen and nitrogen had negligible effects on surface temperature, while carbon dioxide had a clear impact. Klaus Hasselmann, Max-Planck Institute for Meteorology, became a Laureate for developing a method of distinguishing between natural and human causes of atmospheric heating, the so-called fingerprints. The problem of a mean atmospheric response to external forcing such as volcanos, albedo, surface temperature, sea-ice etc. is addressed by applying a filtering technique based on comparison of atmospheric response patterns derived from multiple sources: models, experiments and data sets for long periods of time. The expectations are that this method will allow us not only to distinguish the increased Limitations of Current Climate Modeling 477 temperature in the atmosphere caused by natural processes from those of human activities, but also to make climate change more predictable and reliable. https://www.nobelprize.org/uploads/2021/10/sciback_fy_en_21.pdf. https://pure.mpg.de/rest/items/item_3030122/component/file_3030123/content 478 Made in physics 11 Made in physics Physics has recently demonstrated additional power by crossing into other disciplines such as biology, economics, ecology, sociology, medicine, even political sciences. Physics-based mathematical models (PBMMs) may have nothing to do with physics. For instance, numerous traffic flow models are essentially based on the conservation laws, which is a physical concept. Another example is the famous logistic model, usually applied to the natural population growth, is a simple generalization of the exponential model widely used in physics 263. The logistic model, despite its simplicity, was able to adequately represent the population dynamics in various countries, e.g., in England, Scotland and some parts of USA. In biology and ecology this model is used to describe various evolution scenarios, when the future population of species depends linearly on the present population. A little later we shall discuss these two above-mentioned classes of models in some detail. Recently, physics-based models won great popularity in the field of social and economic modeling. Nevertheless, I think that the mathematical work such as in physics is still very peripheral in social and even in economic disciplines, at least so far. In contrast with physics, social sciences attempt to produce descriptions and diagnoses not for material phenomena, but for mental trends and psychological conditions, assuming that collective attitudes matter more than material phenomena so that the material order (or disorder) in the world is not the foundation but the implication of psychic inclinations of the people. So far, the ultimate purpose of physics is to control the properties of nonliving matter, for instance, to model, design, and eventually mass-produce new materials. Likewise, the purpose of social models would be to explore and then tune the properties of human material. In social studies there exist particularly many unanswered questions, with some of them may be considered to lie at the very core of social disciplines. For instance, do different nations pass through the same stages, albeit with some delay with respect to one another, just like people who, while growing up, are passing through universal stages in their development? If one can give a reliable affirmative answer, would it mean that a time-ordered (and irreversible) process of social and cultural evolution is maintained? Or the alleged orderly progression from the primitive to more "civilized" stages may be reverted under certain conditions? In other words, do patterns of development exist for human societies, and to what extent can individuals deviate from these patterns? Unfortunately, when discussing such issues, qualitative analysis is emphasized, sometimes rather aggressively, which probably testifies to certain inferiority undertones. 263 The logistic and the exponential models are nearly identical at small times. 11.1 Exported Models 479 11.1 Exported Models It has long been speculated that physical models or rather mathematical models of physics might prove useful for the analysis of human behavior, both on the individual and the collective level. Unfortunately, dynamics of the systems, for which we do not know the main evolution equations, appears to be irreproducible. In contrast to physics, such disciplines as biology, medicine, ecology, psychology, economics, sociology, etc. have so far admitted correct and non-speculative theoretical study only at the level of time series. Now there are increasingly frequent attempts to construct particular mathematical models, mostly based on nonlinear dynamics, to describe the evolution of systems studied in the above weakly formalized disciplines. Chaos theory has been especially attractive for the scholars attempting to apply off-the-shelf physics-based mathematical models (PBMM) to social and biomedical sciences. Although there exist already a considerable bulk of papers, also in physical literature, exploiting nonlinear dynamics to build mathematical models in these disciplines, the notion of chaos still remains rather the subject of quasiscientific philosophy than the tool of quantitative science. The aim of mathematical models in social sciences is to help estimating and understanding the statements of humanities that are far from physics and mathematics. Here, the principal technique is to properly identify the scale of a problem and to pose the relevant questions, which is not always the case in philosophy and artistic culture (see Chapter 2). Physicists are usually better trained in restricting the problem to a set of localized models. It is interesting that such a great physicist as E. Majorana who was in general inclined to use sophisticated mathematical tools published a sociology paper [276]). Majorana published not so many scientific articles, and this one was probably intended to extend the theoretical schemes of physics onto social sciences. One can see the deficiency of general statements, when the scale of the problem is not appropriately identified, on the following slightly exaggerated example. It would be in principle correct to say that, e.g., viruses are ultimately constituted, as all matter, of quarks and leptons. However, this statement does not help much: knowledge of such basic constituents will not explain functioning of viruses which results in disease. Likewise, the statements related to behavioral sciences such as psychology are usually so general and arbitrary that it is difficult to define their domain of applicability and construct a reasonable mathematical model corresponding to these statements, although mathematical modeling in psychology has a long tradition starting from F. Bessel and H. Helmholtz. The trouble with behavioral disciplines is that they rely on "expert" opinions rather than on experimental techniques and related theoretical explanations. Even the attempts to construct quantifiable procedures are accompanied by the drawbacks of unknown accuracy. Take, for instance, a very popular assessment technique based on IQ. What is exactly the quantity that is measured by this method? Does this unknown quantity change with time? Do you believe, in general, that IQ means much? Nevertheless, checking a 480 Made in physics person's IQ by some bureaucratic authorities (e.g., human resource management) can drastically influence her/his career. As for mathematical modeling in behavioral sciences, they are either too abstract and detached from experimental base or represent universal statistical applications such as structural equation modeling (see, e.g., Schimmack, U. and Grob, A. (2000), Dimensional models of core affect: a quantitative comparison by means of structural equation modeling. Eur. J. Pers., 14: 325-345. https://onlinelibrary.wiley.com/doi/abs/10.1002/10990984(200007/08)14:4%3C325::AID-PER380%3E3.0.CO;2-I; see also https://liberalarts.tamu.edu/communication/profile/hart-blanton). 11.2 The Limits of Sociology In this section, a few vacuous generalizations can be encountered. Some passages here only document my experience, reflecting the integrated psychic attitudes and mental inclinations of several groups of persons I had a chance to talk to. And I know very little or nothing at all about the boundaries of reality constraining such group attitudes. Several decades ago, sociology created many false expectations, e.g., that it is capable to fully explain human behavior, flow of events and even principles on which human societies are built. However, these expectations have not been materialized, and many people became greatly disappointed by sociological results. L. D. Landau, for example, considered sociology a pseudoscience. What are actually the limits of sociology? To what extent can sociology make predictions, in the meaning of conditional statements usually accepted in natural sciences? I don't think I shall be able to correctly answer these and similar questions, not only because I am a dilettante and not because the questions are put in too general a form (deliberately), but because the subject of sociology is extremely complicated and badly formalizable by the current mathematical means. Sociology may be interpreted (not defined!) as the collection of techniques providing the movement from opinions to understanding. Then, if understanding has been achieved, one can make projections, models and forecasts, even quantitative. Sociological research deals with mass phenomena and mass behavior. A person is not discerned in this mass. In such a sense, sociological research may be called defocused rather than focused - its resolution is not higher than a "small group" of people. What is this small group, sociology does not take pains to correctly define. In mathematical terms, one can use the notions "averaging" or "homogenization" to characterize the objects typically studied by sociology. The macroscopic character of sociology is the first limitation that one must bear in mind when talking about this discipline's claims. The second limitation consists in the lack of predictive power in sociology. In distinction to physics and even modern economics, sociology is mostly busy with explanatory constructions. They may be interesting and plausible but without predictive component, and even if there may be one, accuracy of predictions is unclear. Here, I might note that one can redirect a similar reproach to modern theoretical physics. Experience shows that 11.2 The Limits of Sociology 481 theoreticians can explain everything, just anything you can imagine. This is good, it testifies to their skills and flexibility. But it is much harder with prognoses. The third limitation is a virtual impossibility to talk about the unique interest of a society or a country. Society in each country is composed of many acting groups, each having its own interest. Therefore, it would be difficult to define a common interest of the entire society, unique for the given country. Usually interests of the ruling group are declared as "interests of the country" or "interests of the state". The state is not completely determined by formally fixed constitutional rules, over-constitutional rules are usually stronger than legislation. This is a binding system of unconventional norms which make each society unique. This is the form of civil life specific for each society. Even a high level of administrative aggression such as endemic to dictatorships, if directed against informal rules, cannot fully destroy them. Each state can be characterized by its own level of coercion as well as of corruption and other attributes of power concentration. I do not want to be engaged in something similar to amateur kremlinology in this book, especially regarding the limits of sociology, and one must study carefully the effect of power distribution on social evolution, the latter being understood as the trajectory of a complex human organization, e.g., a country, in the space determined by a set of relevant parameters. A crude high-level look hardly brings any results or prestige; therefore it should be relegated to kitchen talks or cocktail parties. Actually, there does not seem to be many good quantitative results and models describing power distribution effects in various societies. See in relation to this [293] and [294]. 11.2.1 Self-Reproducing Social Patterns To what extent could the processes occurring in nature be mapped onto human social structures? The complete isomorphism leading to such quasinaturalistic theories as social darwinism, in its extreme form resulting in fascist and Nazi oppression of "unfit" individuals and groups. In such extreme forms, it is the pure ideology in action: rulers interfere and suppress the "unfits" even though the latter in no way present a direct threat to them. The heavy hand of the ruling group is more appreciable in case the country is not a "democracy". Thus, societies may be ranged and measured according to the ruling elite pressure. This is one of the main tasks of sociology which has not been successfully accomplished so far. There are a number of other cardinal questions that may be considered as sociological tasks. For instance, why and to what extent laissez-faire capitalism is perceived as an enemy for generalized socialists such as communists, marxist-leninists, nazis. It is more correct to refer to the national-socialist (NS) regime, "nazi" is only a colloquial term like "commi"., etc. Furthermore, it is an empirical (historical) fact that socialism strongly relies on violence. The question is: why and how can this accent on violence be quantitatively estimated and measured? Democracy is usually understood as an antithesis to socialism. But there may exist different versions of democracy, starting from liberal democracy to 482 Made in physics the one with unlimited tyranny of the majority as during the French Revolution of 1789-1799 or a kind of democracy supporting oppressive regimes such as in Russia in the beginning of the 1920s as well as in Iran in the beginning of the 1980s. The main principle of liberal democracy is not to enforce the will of the majority, but to ensure the rights of each small group, down to a single person. The main presumption of liberal democracy is that if any group of people is suppressed in its rights i.e., the ruling group is allowed to suppress such a group of people, then this ruling group will necessarily strive to suppress all other groups. So, the liberal variant of democracy emphasizes protection of all minorities and individuals. The liberal democracy may be considered as a limiting case on an authority scale, the other limit might be considered an ideological totalitarian state, with all intermediate kinds of despotism and democracy (authoritarianism, autocracy, military dictatorship, ethnocracy, socialist state, ochlocracy, etc.) being positioned between these two extremes. It is a business of sociology to assign numbers (points on the scale) to each state264, thus providing an ordered set for a social or political analysis [295]. Accent on qualitative descriptions and scathing metaphors like "snob-rule" (elite or oligarchy) vs. "mob-rule" (the majority) can hardly be satisfactory. What is a totalitarian state? It would be difficult to give a strict definition, but one of the dominant features of such a state, intuitively, consists in describing its reaction on out-of-order messages. The totalitarian response to a signal about a misrule or a trouble consists not in the attempt to streamline the situation, but in efforts to switch off the signal source. In other words, in a totalitarian state you should not try informing the authorities or the police that something goes wrong because this won't function and you will be, with high probability, eliminated. In fact, any state, by definition, has some amount of totalitarian features, but the feedback intensities differ from country to country. Nevertheless, the totalitarian state of the country does not mean that one should be sitting all one's life under the bed, be completely silent and never stick one's head out. We might recall in this connection that in the Soviet Union almost everybody, with tiny exceptions, figuratively speaking, had not shown themselves from under the beds for 70 years, waiting for the outcome of the fight of elites, until the Communist regime partly collapsed. Thus in a totalitarian state each individual is forced to contribute to the stability of regime and perpetuating it participates in evil. It has long been observed that two wicked things never occur in liberal democracies: mass murder and mass hunger (to my regret, I don't remember who was the first to verbalize this observation). It is, however, useful to recall a simple maxim known to Aristotle, but totally forgotten in the 20th and 21st centuries: democracy is the worst form of rule if the society is mostly composed of paupers. In this case, democracy inevitably transforms into tyranny. This observation leads to a general question: what are the main causes for the death of the democracy on a given territory? The list of such 264 The word "state" is understood here as the state of a system and not as a political organization of a country. 11.2 The Limits of Sociology 483 causes seems to be rather short. One is a war. A democratic state can be either conquered, for instance, if it is small or, by defending itself and in process of mobilization of its armed forces, transforms into a military dictatorship. Another classical death of a democracy is through a military (or paramilitary) coup, this scenario has been materialized a number of times in Africa and Latin America. But the most general pattern of transition from democracy to tyranny appears to be due to wide involvement of poor and illiterate population into political decision making. It is understandable why democracies were very unstable before the industrial revolution: the latter provided necessary conditions for stability of a democratic order with respect to power overturn by ensuring a well-being for the majority of the population. These conditions are not sufficient, as we might observe: the majority rule can still lead to tyranny even in relatively affluent societies. Intuitively we know that some countries demonstrate examples of seemingly stable democracies, in particular liberal democracies, whereas other societies appear to be unstable with respect to the classical "democracy-tyranny" transition. Moreover, we may notice that despotic, e.g., authoritarian forms are stable because they are unfavorable to economic productivity and even distribution of wealth thus preserving pauperism. This fact is intuitively grasped by antidemocratically oriented political forces (e.g., left-wing socialists, radicals, communists, etc.) who build their strategy and even propaganda on hindering economic productivity. The main task of sociology in this respect is to find the exact conditions expressed in numbers, formulas or algorithms, which would tell us when the stability of a given democratic society is ensured and what is the risk of transition to a certain variety of a despotic regime. Since it is difficult to rely on self-restraint of the majority and its favorite leaders as well as on self-control displayed by the ruling groups, tenable power restriction mechanisms should be designed and developed in a stable society, in particular the legal system, to secure a clearly defined scope of rights of an individual and, consequently, of all groups of individuals. To offer models for such control mechanisms may be regarded as another task of quantitative sociology. Apart from the authority scale, there are many other self-reproducing patterns in human societies. How can one understand why all the regimes promising socialism, communism265, total equality, possession of equal rights, social justice and the like quickly turn into ruthless, despotic dictatorships? Why is bureaucracy so overwhelming? Greatly simplifying, one can observe that there exist in fact only two superparties in any state: "bureaucrats" and "liberal economists", all others being just nuances of these two mainstreams. A variety of colors and hues may be considered as a fine structure of these two principal levels. The above opposing superparties represent two main attitudes present by people: paternalistic and self-sufficient. This attitudinal dichotomy, being translated due to sociological collective effects into dual social structure produces natural associations with two main types of objects 265 One usually distinguishes between socialism and communism as an extreme form of socialism, but for me it is basically the same social phenomenon. 484 Made in physics in physics: bound and free. The paternalistic attitude implies etatistic structures on which an individual is heavily dependent and results in the "bound state" of an individual, with movement restrictions, informational censorship and other well-known limitations. Figuratively speaking, paternalistic pattern is a projection of diapers onto the whole life of a person. A swaddling effect of the diapers is realized through countless bureaucratic circuits under the pretext that the authorities know better what "people really need". Accordingly, two main personality types having totally different images of society can be developed: those who rely on their own forces, intellect, and hard efforts versus those who hope that external agencies such as God, good monarch, state control, social help, etc. will in any event interfere and protect from excessive hardships. These latter people tend to unite under leftist, communist, general collectivist, environmentalist, nationalist, anti-globalist, and other populist slogans being easily recruited into militant groups ("take everything away from the rich, then divide it up"). In the Marxist political texts, the low-income fraction of people who depend on the state social system for their day-to-day existence are typically called "lumpen". In the societies with a considerable component of liberal economy, more relying on self-sufficient behavior and free choice, bureaucratic structures are forced to compete for power, in particular by leveraging populistic ideologies. In relation to just mentioned social dichotomy, one can notice that there are two basic regimes of social equilibrium in human societies. In the countries which use to call themselves "advanced" or "developed", social equilibrium is of dynamic character and is distinguished by an oscillatory "social trajectory" which has intermittent bureaucratic and liberal phases. Typically, the process runs as follows: after the fast transition from a "frozen" despotic regime to a "democratic society" at its liberal phase, e.g., after a drastic social transformation such as a revolution, people are initially motivated to work hard and economy is on the rise. As a result, people on average begin to live better, but high-income and exceedingly wealthy groups appear inducing envy of the poorer part of the population. This envy produces demotivating effect, and besides people tend to be more relaxed due to increased quality of life266 and more attention devoted to hedonistic excesses. Economy begins to fall, which is amplified by the outbursts of activity in the anti-liberal camp exploiting mass envy and insisting on the enforcement of social justice and equal distribution of wealth. "Down with capitalism!" is a typical slogan in such periods. Distribution and to some extent production should be maximally controlled by the state bureaucracy which represents such a control as "achieving an order" in contrast with "liberal capitalist chaos". Ensuing nationalization and politization of the economy aggravates the situation. Economy naturally falls deeper, and in the countries with the lack of democratic traditions such as firmly established free periodic elections, there is a hazard of transition to a "frozen" despotic state totally controlled by bureaucracy. However, in advanced economies with still wealthy population 266 I don't know a correct definition of the term "quality of life"; here it is used intuitively in accordance with the whole intuitive exposition. 11.2 The Limits of Sociology 485 and really free choice between political parties, the population begins to sense danger and votes anew for "liberal economists", at least more liberal than clinging to power bureaucratic control freaks. Then the cycle repeats itself - I view this phenomenon as oscillations of social trajectory. So, the first basic regime of social equilibrium is an oscillatory one, it is typical of developed countries. For the countries with weak democratic traditions, especially with dominating pauperism, another regime is habitual. From the viewpoint of macroscopic social equilibrium such countries' trajectory symbolizes a totally different story. One can easily observe that many countries usually known as "developing" are unstable under the transition to the stationary state of "frozen" bureaucratic despotism, with reduced civil rights and limited personal freedom. It does not matter how such regimes are called either from outside (tyranny, autocracy, authoritarianism, communism, fascism, etc.) or from inside (controlled democracy, real socialism, also communism, etc.), the essence is that the societies supporting such regimes are in a metastable state separated from the equilibrium corresponding to liberal democracy by a rather high barrier. Nevertheless, contemporary history shows that this barrier is not impenetrable: more and more societies have overcome it, being transformed into liberal democracies. One can say that such a transformation represents a historical trend. However, the characteristic transition time from the metastable frozen state to democratic oscillations may significantly exceed the active period for a single population (population life cycle). One may also notice that the mentioned historical trend reflects a kinetic situation: there exist faint democracies that are unstable under the reverse transition into the frozen state. I think that this verbally described cognitive model can be translated into the language of mathematical modeling and quantitative results corresponding to the observable phenomena can be obtained. I found it strange that sociology, when attacking societal issues using mathematical tools, leaves the crucial problems of structure, dynamics, and equilibrium of societies mathematically unexplored (although, I am just a layman and know very little of sociological literature). One more self-reproducing pattern in the society is an eternal conflict between clericals and intellectuals, i.e., between the church267 and the science. An established church seeks to universally spread its messages (preferably at the expense of all citizens); therefore it tends to oppose secular and scientific education which is driven by economic demand. In this conflict, the church represents the social force directed against economic and technological development. One can recall that it was only in October 1992 that Pope John Paul II expressed regret for the Galileo affair and officially conceded that the Earth was not stationary. I remember a meeting in Moscow organized by Siemens in 1996 and devoted to the social impact of IT. Some clergy were invited, and when it was mentioned that the Internet allows one to radically 267 The term "church" may be understood in this context in a broad sense including all irrational currents in the society such as sects and quasi-religious groups. A unifying feature for all such currents is relying on miracle or mysticism, that is on phenomena contradicting proved scientific facts. 486 Made in physics expand the scope of living activities, to live simultaneously several lives as it was figuratively expressed by one of reporters, the orthodox priests immediately began frenetically crossing themselves and raising vehement protests. One often cites Bacon's aphorism: "Knowledge is power". However, ignorance is even stronger power. In a great majority of countries, the church is a powerful group very close to the ruling elite. The latter is supporting economic and technological development primarily to acquire military might. On the other hand, church, as just noted, is impeding this development. Such an ambivalence is of universal character and in extreme cases leads to civil wars of clerical dogmatics against intellectuals. Recurrences of such conflicts can be traced nowadays even in rather advanced societies (USA, some European countries, Latin America, Russia, India, Middle East, etc.). In Russia, for example, criminal penalty can be applied for publicly expressed atheistic ideas and criticism of the orthodox church. This is a strange example of a contemporary witch hunt supported by authorities in a presumably civilized country268. Moreover, the limitations for applying the obtained knowledge, e.g., in the form of introducing new technologies lies today not with technology per se but with its acceptance by people and administration despite the fact that new technologies can catalyze efficient approaches and business processes269. 11.2.2 Archetypical Questions of Social Sciences One of the main questions that should be answered by sociology is: under what conditions the averaged, homogenized mass of individuals can be transformed into a coherent group of highly motivated people? This process bears some analogy to a phase transition: acquiring order out of chaos. The conversion of atomized, alienated individuals into structured, mobilized people is of crucial importance for political authorities, and it may become the worst nightmare to dictatorships. A far analogue is the picture of a saturated solution with an accidental center of crystallization. However, the methods 268 Some sociological studies in Russia indicate that had it been a vote with the participation of Stalin, he would have won. This effect of one-sided love of Russians to Stalin may appear paradoxical since Stalin exterminated more Russians than were killed during the WWII, but it can (and should) be sociologically explained. Contrariwise, people in Russia who are trying to struggle for civil rights and against the oppressive bureaucracy are immediately branded as loonies or even persecuted. Not only in Russia but in many other countries people were punished for adherence to the progress of the European type. In Germany, Hitler has been forcefully excluded from any opinion polls or official lists (such as the vote for the most outstanding Germans). Had he remained as a candidate, nobody knows what would have happened. One can easily find many such examples in modern history. 269 The difficulties of human acceptance become obvious when one observes the struggle about genetically modified products and genetically produced technologies in general. Another example is the difficulties of the IT penetration in medicine. The physical methods of medical diagnostics and treatment such as laser medicine, nuclear medicine, medical imaging, etc. were initially rather hard to adopt, see, e.g., Roberts [250]; now they seem to have overcome the repulsion barrier of the old doctors' community and basically ensure the progress of medicine. 11.2 The Limits of Sociology 487 typically employed by sociology mostly involve surveys in which the obtained results are represented by numerical data corresponding to the percentage of people sharing a certain opinion. This is, however, raw experimental material that might be used as an input in theoretical models of sociology. In such models one should derive differential equations, probably of stochastic character, describing the transition between states in a small time interval - a procedure well known in mathematical modeling of physical processes. Analysis of surveys, a standard technique of sociology, does not seem to be fully satisfactory since, e.g., by a strong desire one can correlate anything with everything. A connection of the sociological data with mathematical models would be highly desirable. When studying processes such as leadership that occur in "small groups"270, relations between the members can be studied not in terms of transitions between states and corresponding differential equations, but using graph theory applied to all ordered pairs of group members. In this way, one can describe power hierarchies in human organizations taken as sets with dyadic relations between a comparatively small number of elements (see, e.g., lectures on mathematical sociology by P. Bonacich, http://www.sscnet.ucla.edu/soc/faculty/bonacich/textbook.htm). However, sociology, in contrast with social psychology, studies the behavior of human systems consisting of a very large number of members. For example, sociology must establish certain irreducible laws that pertain to large human groups having eternal conflicts of interests such as between bureaucracies and citizens or between medical doctors and patients. One can easily produce other examples of conflicting social pairs. In the final analysis, of course, the collective properties of human groups are due to individual attitudes, opinions, behavior and person-to-person interaction (communication) characterizing each single member of the group. There are models in which human society behaves as an averaged individual (in a similar way, plasma may be modeled by the motion of a typical particle moving in the electromagnetic field, see Chapter 5). This averaged individual can make sharp evolutions of behavior whereas in the society of collectivized individuals, behavioral patterns are smoothed. Yet sudden transitions are possible and have been observed in human history. It is obvious that the solution of sociological problems involving many human participants and accounting for the behavior of each individual person is impossible. Analogous problems are impossible to solve even for structureless particles in physics, and for elements of human societies that can be found in a great number of personal (psychological) states determination of collective behavior from individual properties is clearly out of the question. One can only hope to describe the averaged overall characteristics, the crude macroscopic features of sociological systems. Moreover, due to a selfconsistent situation - individuals create institutes that are forming individuals - the transitions between states of sociological systems may take many years which makes observations difficult: humans change slowly, in the course of 270 This is the term most favored by sociologists, but I don't know how one can define a small group. 488 Made in physics several generations. This is also a limitation imposed on sociological research and social projections. One more important issue of social sciences is the amplification effect. It appears to be a common observation in sociology that in initially homogeneous dwellings "good" and "bad" neighborhoods gradually appear. Likewise in economics, adjacent countries exhibit drastic differences in growth rates and industry output levels. In other words, large differences in the long run aggregate (or averaged) variables are observed for social or economic systems whereas initial conditions for these systems were almost identical. This phenomenon resembles the development of instabilities or amplification in dynamical systems, when small changes in the initial conditions are translated along the evolution trajectories and so amplified as to produce large differences in the output values. In social systems, small variations in the behavior of individuals can be transformed, due to the amplification effect, into great differences in the long run aggregate quantities such as the economic output or per capita GDP. This analogy prompts one to think that it would be pertinent to use powerful methods of dynamical systems theory to describe the effects of social and economic amplification and to establish its limits. 11.2.3 Limits and Errors in Social Sciences One might note that the culture of assessing errors and discussing the accurateness of prognoses is nearly absent in social sciences. Typically, the statements in social sciences have an unconditional character whereas in the world of physics and mathematics an answer, which may be interpreted as a prediction, has the form IF ¡conditions¿ THEN ¡prediction¿ so that altering the conditions in general would change the predictions. Therefore, the universal principle of physical and mathematical sciences is that one should interpret forecasts in the context of conditions. Not so in social sciences. Let us first take as an example one of the most famous socio-economic accounts, "Das Kapital" by Karl Marx. The author, a poor German emigrant living in London, produced a bestseller (the first edition in London, 1867) which probably no one has read completely. The manuscript by Karl Marx consists of four thick volumes, and it is unlikely that the author himself was familiar with the last three: they were compiled from loose drafts by the author's friends and colleagues, Friedrich Engels and Karl Kautsky. Marx appeared to be so certain in the infallibility of his theory that it is hard to find in the "Capital" any discussion of the accuracy of his statements and prophesies. At least, by scrolling "Das Kapital", I could not find any. Marx's theoretical schemes are in fact somewhat abstract models with a very modest mathematical element and hardly any domain of applicability. In this sense, Marx's claims are closer to religious models than to economic theories. Indeed, we have seen (see section "Religious Models" in Chapter 2) that most religions are based on promises, and most believers find their deeds meaningful only to the extent that something abstractly pleasant can be expected. The same applies to the Marxist abstraction of communism. Marxism in general has plenty of religious attributes: it contains very little 11.2 The Limits of Sociology 489 empirical evidence (if at all), mostly relying on unconditional faith. It is curious that Marxism rivaled religion mostly in the countries with very strong religious adherence such as Albania, Afghanistan, China, Cuba, India, Laos, Vietnam. Probably it means that a poor population is more susceptible to uncritical acceptance of vague ideological doctrines than the population in comparatively wealthy countries. It is also astounding how deeply people were indoctrinated with the purely ideological, i.e., without empirical element, Marxist schemes. Marx only tried to produce a highly theoretical, scientific-looking study of the capitalist way of production albeit with plenty of lateral associations. Common people, however, interpreted these speculative theories rather emotionally, as an appeal to the violent transformation of society. Many people were enslaved by the Marxist ideology to the extent of being ready to sacrifice their freedom and lives, closing their eyes to the obvious facts that the properties of socialist or communist states were standing in sharp contrast with Marxist doctrines. The main thesis of Marx is actually formulated at the very beginning: the wealth of societies which are based on the capitalist way of production is manifested by a great accumulation of goods. Then the author devotes about 800 pages of the first volume, containing a lot of numbers and references, to a meticulous investigation of the conversion of goods into money (and vice versa), creation of a surplus value, and accumulation of the capital based on it. According to Marx, it is the capital that is the real ruler of the society, with the capitalist society ensuring a maximal capitalization and monopolization of the economy. Marxism optimistically claims a such a greed-based process of capitalization and monopolization should eventually result in the social explosion. So, according to the frustrated author, capitalism is doomed. It is because of this optimistic prognosis that all the people in the communist countries were obliged to study Marxism-Leninism. In the Soviet Union, studying "Capital" at each university or institute, was obligatory, irrespective of the faculty. There was at least a year's course of the so-called "political economy" formally devoted to the study of Marx's monumental treatise, but nobody actually read it beyond the first chapters including ignorant professors of the "Marxist-Leninist political economy". We, students, managed to pass the exams without reading neither the "Capital" nor its abridged exposition specially adapted to the presumed Soviet mentality. It was for us a kind of a sport: who gets better marks without any knowledge? It was not so difficult to swindle the narcoleptic teachers of Marxism-Leninism by a meaningless flux of words because they had no knowledge of the obscure Marxist texts, either. And of course, Soviet functionaries, the absolute majority of them being half-literate, had never read "Das Kapital", but they had to enforce it. I guess that such communist rulers as Stalin, Mao Zedong, Kim Il Sung, Fidel Castro and others had neither time nor desire to study the monumental theory by Karl Marx. Their purpose was more pragmatic: to make their subjects believe that the sealed up communist system was a "progressive" paradise as compared to the backward and inhuman capitalist hell. 490 Made in physics The socio-economic model of Marx still seems to be inadequate, despite all its popularity, which is also similar to religious models. Indeed, the main prognosis of the imminent social explosion in all developed countries, put forward by Marx and his interpreter Engels, was never corroborated. Reality manifested itself exactly opposite to the Marxist predictions. The representation of the working class as the most progressive one is ridiculous. Besides, the workers have never been active and eager enough to ignite the world revolution, as it was proclaimed by the communist ideologists. Equally inadequate was the model of socialism as a society that would abolish the state and establish a genuine paradise on the Earth. 11.3 Hierarchical Multilevel Systems Although the concept of hierarchical multilevel systems (HMS) is very broad - from software and data communication systems through Earth's climate to social hierarchies here, we shall mainly talk about economics. There are considerable mathematical challenges in describing HMS, especially as concerns their multiscale modeling. Bridging across many levels corresponding, e.g., to subcomponents that operate on essentially different space and time scales requires some unifying mathematics and, if treated as a head-on computer simulation, is computationally demanding. The physical example of a multiscale problem is turbulence (Chapter 7), it gives us an opportunity to feel the complexity resulting from many interworking scales. Despite enormous practical importance, apparently transparent mathematical setup (the Navier-Stokes equations), and a great lot of efforts, nobody has managed to build a good mathematical theory of turbulence. Economics, being an essentially multilevel structure, is probably an even more complex system than fluid. This is an example of multigrid methods - a computational mathematics counterpart of multilevel systems, which represents, however, only a primitive reflection of the entire complexity of hierarchical multilevel systems271. 11.3.1 The Politics of Bureaucracy If the state begins to redistribute the created wealth too actively, a justified resentment of those wealth producers who are the most efficient and creative is aroused. Bureaucratization in corporations leads to underproduction or inadequate production 272 crises. This is a scaled-down effect of macroeconomic drawbacks that constantly plagued, e.g., the Soviet-type (Gosplan) economies and eventually led to their complete downturn. One can even make a pessimistic prognosis related to the Great Bureaucratic Revolution occurring almost everywhere in the world if the situation with 271 In fact, already the Fourier expansion exploits a hierarchical principle, and it is due to the hierarchy of scales that the Fourier method is so powerful (in linear problems). One can also bring the example of hierarchical Arabic numerals vs. the inefficient Roman number system. 272 Manufacturing of products with no regard to the market signals. 11.3 Hierarchical Multilevel Systems 491 rapid bureaucratization 273 is not reversed and bureaucratic positions are more and more attractive for young people, then in a dozen of years a new generation of "intellectuals" will arise who will not be intellectuals at all. These people will probably be different, not ready for intellectual work in the sense of, say, the 1960s i.e., not capable of reflection and simply to reading serious books. This can be bad for all. Now that more and more people wish their children to become highly paid functionaries, bosses, top managers, lawyers, bankers, TV or movie actors, prominent sportsmen or other kind of celebrities, interest in hard education appears to be waning. The lowering prestige of exact sciences and engineering is indicative of the growing tendency to bureaucratization. One can observe that the more primitive the culture the less useful science and engineering are perceived to be274. The prestige of science and engineering in the popular consciousness might serve as an indicator of the society development stage. There is, however, an intrinsic contradiction here: people who care less and less about science more and more want new "cool" technologies. It is this "coolness factor" that brings significant profits to capitalist corporations and compels them to somewhat grudgingly support science together with the classical future-oriented chain of development: idea, calculation, laboratory, prototype, pilot project, full scale production. Yet such chains are typically controlled by the corporate bureaucracy which, with the multilevel hierarchical authority structure of modern corporations, is practically indistinguishable from the governmental sector bureaucracy. One detrimental consequence of corporate bureaucratization is the system of semi-corrupt verticals: managers at the lower levels make absurd and arbitrary decisions while higher levels tend to protect their subordinates from any blame. This buddying mode is rather stable, eventually leading to the total absence of persons who are ready to take responsibility for inadequate decisions. But although one can hide from the facts, one cannot hide the facts. Of course, rank-and-file employees, common workers, and the "office plankton" are those who suffer, primarily from unemployment. One should not think, however, that unemployment is only caused by the bureaucratic mismanagement: even a perfectly functioning market (i.e., noncentralized) economy is prone to unemployment because of insufficient demand and its fluctuations. According to the canonical economic theory, which is a deterministic model operating with averaged quantities, market equilibrium is reached when demand equals supply - in fact this is the definition of market equilibrium. Applying this definition to the labor market, one can deduce that the demand for goods and services pushes upward also the demand for labor, thus resulting in rising pay and employment. However, 273 The rate of creeping bureaucratization, e.g., of Russia is striking: even according to official statistics the number of government officials has grown from 900 000 in 2000 to 2 000 000 in 2008. 274 In the post-Soviet Russia, scientists were perceived in popular consciousness as good-for-nothing exotic creatures; people looked at "egg-heads" with a mixture of disdain, superiority and fear. 492 Made in physics the supply-demand equilibrium curve is not necessarily a smooth trajectory: from time to time, it can change very abruptly, following social cataclysms, governmental moves, appearance of new technologies, and so on. We have already discussed that disruptive technologies favor certain groups of population and can seriously inhibit other groups. For example, closing coal mines in order to foster sustainable energy sources, even if this process is accompanied by social programs and retraining activities, seriously diminishes job opportunities for miners. In general, new production paradigms have strong effects on wages and employment, these effects deforming the socio-economic position of some groups relative to others. There exists a considerable amount of highly professional literature devoted to this subject (see, e.g., [277] and references therein) so that I do not need to dwell on this subject, about which I know very little. There are also mathematical models of unemployment in multilevel economies (i.e., HMS) resulting from technological development, but we shall not discuss these models here. Bureaucracy is generally opposed to meritocracy because the latter possesses some specialized technical knowledge or critical information, which makes it difficult to control. 11.4 Physical Economy Sometimes I think that the world would function better if it were run by physicists and mathematicians, despite the fact that politicians and business people usually claim that decisions are much better guided not by physics or mathematics but by gut feeling derived from years of experience. And they are supported by a largely math-phobic general public. Besides, the vast and omnipresent bureaucracy - the driving belts of politics - rightly fears that analytically driven programs might result in massive personal replacements. We have seen that physics builds models of reality, but, firstly, it may be not necessarily physical reality without participation of human agents and, secondly, not only physicists construct models of reality à la theoretical physics. For instance, the use of dynamical or stochastic (or dynamic stochastic) models in socioeconomic research is now a substantial part of mathematical economy. Economic change, growth processes, goal setting, development time, driving forces, and large number of other topics are studied in the mathematical form by techniques traditionally used in physics, e.g., by variational methods. Mathematical economy often looks like physics in some other guise. Therefore, one currently calls this interdisciplinary field physical economy. This discipline may be especially interesting to those who wish to see how the ideas from a social sphere can be translated into mathematical models. Paying attention to this translation is useful for all parties concerned - physicists, mathematicians, economy experts, social scientists, for it helps to enlarge the scope of problems to be objectively studied and possible mathematical techniques to be used. Since I am not an expert on mathematical economy, many topics are treated on a primitive level of general concepts. Such important subjects as stochastic modeling, probabilistic 11.4 Physical Economy 493 analysis, structural change or socioeconomic discontinuity are treated in a number of more specialized sources, so don't expect any profound exposition of these topics here. As everywhere in this book, the main attention is paid to the motivation and relevance of physically-based mathematical modeling. Hard laws analogous to those describing the processes in physics do not exist in economics because the latter is necessarily mediated by social phenomena. Social interactions and human behavior are exceedingly more complex than physical phenomena and remain rather poorly understood. Therefore, economists rely more on insight and expert judgments rather than on objective methods ensuring the confidence close to physical modeling. Economy is always an evolving system [168] - actually, any physical system is an evolving system since a physical event occurs in space-time. Yet, in physics one can in many cases use quasi-stationary or even steady-state approximation. We have seen, for instance, that steady-state models such as computation of energy levels play an outstanding part in quantum mechanics, in fact quantum mechanics was initially invented and designed to produce stationary states. Now we may call this subdiscipline that does not include interstate transitions quantum statics. Contrariwise, the steady-state approximation is rarely adequate in economical models because of rapidly changing conditions. In other words, economic models are predominantly dynamical, dealing with evolution. Generally speaking, evolution is any process of development, change, or growth.275 To describe some evolving system, one may construct at first a minimal model which is aimed to elucidate the basic features of developing phenomenon. Such a minimal model corresponds to the simplified dynamical system describing economic development - with a minimum number of phenomenological parameters or arbitrary constants. More detailed models can be built on the base of these minimal models276. There are a lot of questions regarding evolutionary economies, posed but not answered. For example, should there be a convergence of economic patterns towards a single, universal and ideal model? Does this common limit exist for all societies, or evolution of specific economies is heterogeneous, with the economic trajectories being dispersed according to each country's preferences? The latter assertion is known as the "Dahrendorf's hypothesis", stated by a well-known German-English political economist, Sir Ralf Dahrendorf. Economics is an essentially quantitative subject. When, for example, a minister for economy asks his advisers about the advantages or disadvantages of raising the actual taxes, she/he does not expect a philosophical discourse, nor a lecture on what Nobel Prize winners in economy would generally say about increased taxes. The minister (or the secretary for economy) wants to know how much revenue could be produced due to new taxes and how the anticipated revenue figures would correspond to the optimistic prognoses 275 The word "evolution" stems from the Latin evolutio which means "unfolding". 276 Such more detailed models have often been called "imitation models", although it seems that this terminology is outdated now. 494 Made in physics supplied by the finance minister. The secretary for economy must know quantitatively how the increase of any tax would affect various sectors of economy. Thus, economics seems to be closer to engineering than to science, so one uses a term "economical engineering". Indeed, one must define goal or cost functions and from them deduce social and economic behavior - a typical engineering approach. For technological systems, to adjust parameters to the specified goal function is common in traditional engineering. However, this works well for technological processes due to the fact that they obey the well understood laws of natural sciences. In economics and other social sciences, the governing laws are poorly understood or at least do not exist in compact mathematical form (expressed as formulas). Insufficient knowledge of the mathematical structure for social and economic laws makes it hard to use a genuine engineering, when free parameters can be fine-tuned with a good accuracy. In particular, this leads to a well-known situation when the results obtained within the framework of different economic models have an undetermined applicability domain and may considerably overlap. In contrast with physics, quantifying things in economics is in general a nontrivial problem. How would you unequivocally quantify customer satisfaction, for example? But you ought to do it, in order to plot this variable versus costs. Moreover, this intuitive entity may determine economic strategy, for instance in medical institutions, transportation or public conveyance companies: do you want to minimize personnel (expendables, fuel, etc.) costs, or do you want to maximize patient (customer) satisfaction and thus attract more clients? There is a peculiar feature of real economics differing it from natural sciences and technological engineering. In economics, perception of reality is a major part of reality. For example, when people are constantly told that everything goes well with economy, they are inclined to believe it, start making investments, buy stocks, purchase more goods, and the whole economic system is accelerating or "heating". It is thus the vision of economic situation, i.e., reality dressed in human perception, and not the naked situation that determines the economic behavior. More than that: perception of reality may prove to be more important and contributing heavier to economic behavior than reality itself. As a consequence, it seems very difficult to model the behavioral components in theoretical economics. One of the biggest challenges is to understand how the system will change its path as humans react to stimuli, incentives and dangers. For example, one of the most important economic factors in the contemporary world is the oil prices. The cost of oil production (even including gradually rising exploration costs of oilcarrying sources) is known to be a slowly varying quantity on a level of 10 US dollars per barrel. On the contrary, the oil prices are varying comparatively fast, on a level about ten times higher. It means that human-induced, speculative and "soft" component is an order of magnitude larger than technical production costs. Moreover, even the speculative variations of oil prices following the human perception of the political and economic situation may substantially exceed the hard economic parameters such as the production costs. This example hints at an essential, sometimes dominant role 11.5 Naive Taxation Models 495 played by human perception in economy. Another example points at the difficulties to account for the human role in economy. Thus, a number of economies are plagued by corruption, and although it is intuitively very likely that corruption hinders economic development in such countries. However, when one tries to pass from vague intuitive statements to quantitative models, the braking effect of corruption becomes hard to describe in mathematical terms. 11.5 Naive Taxation Models This section may be regarded as an occasional curiosity. The subject of taxes is, however, very important for all people, and one cannot get rid of the impression that it has not been studied with commensurate mathematical thoroughness. The tax system is purposed to maximize the budget revenue under the following constraints: non-degradation of the quality of life; nondiminishing income level, demand, productivity and profitability; preservation of social stability. Taxation fulfills one more important function: stabilization of inflation. Optimal taxation of private persons and households depends on the character of two important distribution functions characterizing economic structure of the society: distribution f(x) with respect to income x and distribution g(u) with respect to accumulated liquid assets or liquidity u. As usual, the quantity dN= f(x)dx denotes the number of households with income lying in (x, x+ dx) , and the quantity dN= g(u)du signifies the number of households whose liquidity is found in (u, u+ du) . Here N denotes a continuum of households. The f(x) distribution is more important in societies with transitory economies whereas the quantity g(u) characterizes the wealth distribution. Examples of quantity u are bank deposits, stocks, bonds, expressed in currency units, in short, u denotes the mass of money to be at the household disposal. Optimal (according to some criteria) taxation may be modified with the changes in the society economic structure. Corporate taxes directly affect profits and productivity. It may be noticed that corporate taxes are coupled with income taxes. A well-organized tax system allows one to determine the society economic structure reflected in the distribution functions f(x) and g(u). For example, one can measure f(x) directly from income taxes and g(u) by analyzing bank accounts277. Mathematical modeling of taxation schemes does not seem especially difficult, yet defensive financial bureaucracies are apparently not very interested in modeling results. There are only continuous verbal discussions in the mass media, but serious scientific results are only seldom presented and popularized (see [172] in relation to this). The subject of taxes is highly emotionalized and charged with political or group interests. It is by no means accidental that finance ministers are most often without any financial or - it would be almost improbable - financial-mathematical background; they are 277 Both bank accounts and accumulated stock may be of course analyzed impersonally, without violating privacy requirements. Secretive character of individual wealth is an important prerequisite of social stability. 496 Made in physics customarily recruited from winning party inner circles, even in advanced countries. This anti-meritocratic selection favors confused and not transparent tax systems having deplorable economic and legal consequences. Tax laws, governmental acts, statutory regulations and actual rules come in great many formats so that it is usually impossible to know them all. Only this fact tends to criminalize any citizen due to her/his unawareness of the whole mass of current tax rules. The internet does not help much in this respect - what matters most is not the medium, but whether the source is up to date. It is usually not difficult to calculate taxes, for example, by using a look-up table for progressive scales, yet the definition of what is subject to taxation taking into account all possible exemptions, deductions, special regulations accumulated over the years is very burdensome. Thus, taxation rules in most countries are extremely complex, with governing financial bureaucracies strongly resisting any possible simplification. Changing the taxation schemes, introduction of new taxes or removal of old ones is always extremely controversial. Some tax authorities (e.g., in Germany where taxation is considered to be the most complex in the world, see, e.g., http://en.wikipedia.org/wiki/Taxation in Germany) even contend that the tax system does not need to be simple - sort of an esoteric approach. Germany is an example of the country that traditionally strives to be one of the leading (or even the leading) in the world in many respects - economically, politically, financially, technologically, in exports, etc. However, these ambitions are drastically impeded by the country's clumsy and highly controversial tax system. As a result, money is flowing out of the country. Punitive reflexes of the state bureaucracy activating the "immune system" - police, intelligence, excessive customs control - are costly, inefficient and only make the matter worse. Money gained inside the country is easily percolating through restrictive barriers. Small but numerous amendments issued each year do not help since they only increase the complexity and benefit mainly the parasite clan of tax consultants flourishing due to unobservable legislation. On the surface, this situation is favorable to the state since tax consultants are also paying taxes on income generated from the fee paid by taxpayers. This is, however, a superfluous activity taking many man-years of people that can be otherwise intellectually or manually productive. Germany of course is no exception in highly inefficient and overcomplicated taxation. Many countries cannot come to terms with their tax systems. All the countries where the tax system is a product of political compromises rather than based on solid financial and mathematical approaches are doomed to either money loss or political instability due to people's discontent, or both. It is indicative that tax systems in various countries are all different and are constantly modified - this testifies to the fact that they are far from being optimized. The tendency to make taxation complicated for citizens is quite understandable since under a simplified system (such as a universal flat rate, see below) the ruling bureaucracy loses some particular form of control based on selective granting tax preferences. So, there are groups besides ruling bureaucracy that are opposed to tax simplification, which results in lobbying against efficient mathematical modeling in this "politically sensitive" area. The usual 11.5 Naive Taxation Models 497 argument against simplified systems based on clear mathematical models is based on the notion of "fairness": a tax scheme is declared "unfair" if it is mathematically rigid and does not allow to redistribute money according to political purposes. However, "fairness" is a subjective notion: how do you define "fair"? Probably any definition of it would be contaminated with emotions, and this complicates the taxation schemes. But mathematical models are immune to emotions, and no matter what mathematical model one considers in regard to the taxation problem one has to focus primarily on its simplification - in accordance with the common mathematical modeling methodology (Chapter 2). To begin with, let us for simplicity consider a three-component taxation scheme when taxes collected by the state278 are subdivided into three nonintersecting components: individual (household) income tax, corporate profit tax, and consumption tax (VAT, GST and the like)279. I shall not dwell here on corporate taxes, it requires a thorough economical analysis bordering on legal issues which subject only includes mathematical models to a minor extent and would lead us far away from them. As regards taxes on consumption, not on labor and capital income, this is a serious and interesting issue where mathematical models can be correctly set up, especially in the context of shifting the tax burden from individual income to penalized consumption. Thus, taxes on consumption inherently encourage savings and, consequently, investments instead of consumption. Income tax is a direct one whereas consumption taxes are hidden in prices and felt indirectly. One may notice in this connection that although VAT and its analogs are drastic instruments of improving the state finances, they can bring more harm than benefits for the state since they can significantly reduce consumption.280 It is because of this hazard that mathematical models would be especially useful in optimizing the relationship between the two components - income and consumption taxes. The most important direct tax is the income tax. There are three main mathematical models for the income tax - progressive, proportional and regressive. Many people got used to proportional taxation schemes - due to "fairness" or envy arguments, although nobody has proved that such schemes are the best for the people. By the way, what is best for the people is not necessarily good for the state and vice versa, even in democratic countries with advanced economies. People - the country population as a whole and its various groups - and the state represented by the ruling bureaucracy may have, and actually possess, different goal functions whose difference281 as a 278 Typically, for "old" European countries tax revenue comprises about 40-50 per cent of GDP, with approximately 25 percent in the USA, http://ec.europa.eu/taxation customs/taxation/index en.htm 279 Actually, the great variety of other taxes imposed on the citizens provide only corrections to the state tax revenue and rather testify to inefficiency and greediness of the financial bureaucracies who see their task as squeezing the money out of citizens. 280 Moreover, the VAT paperwork is extremely cumbersome, so it complicates the accounting to the extent that a lot of man-hours are wasted. It is strange that people underestimate the fact that time is the most precious resource. 281 One can talk of the norm in some functional space of course. 498 Made in physics function of time increases when the economic situation becomes worse. In most advanced economies, the personal (household) income tax is a progressive tax defined as a rising, piecewise continuous (affine) function of income, excluding various deductions, rebates, etc. In other words, the tax base, i.e., the amount of money earned up to a certain threshold 1 amount T1 is taxed at a rate r1, then the remaining income, up to the second threshold amount T2 is taxed at a rate r2, and so on. Conversely, regressive tax is levied so that the tax rate decreases as the tax base increases. This is a rare occasion for income taxes, being applied in developing economies to stimulate accumulation of wealth. The in-between model of proportional or flat rate is the most primitive taxation model based on the universal rate r irrespective of the income earned. In this sense, a flat rate may be considered a degenerate case of a progressive or regressive rate. For some political reasons, mainly of ideological character, proportional (flat) tax schemes are uncommon in advanced economies, especially in Europe, where typically a graduated progressive tax imposed both on household incomes and fixed taxes on corporate profits are accepted. However, flat taxes seem to work well, e.g., in Russia and Baltic countries; some more countries of Central and Eastern Europe are contemplating introducing the flat income tax. Medical and retirement insurances perceived by people as supplementary taxes do in fact correspond to flat rate taxation. There was a proposal by a well-known tax expert and lawyer Professor Paul Kirchhof to introduce 25 per cent flat rate in Germany, but it was of course declined for political reasons. The limiting case of the proportional tax model is confiscation of high incomes and/or property - accumulated wealth. This variant is, however, not optimal since it produces an excessive social tension, leads to the drain of capital out of the country and removes economic stimulators. This is an antiliberal approach exploiting human envy and almost necessarily resulting in violence. Furthermore, the ruling bureaucracy that appropriates the functions of total redistribution rapidly (in some historically short time τ) becomes the elite group ("nomenclatura") and behaves as if the entire country belonged to it. This accompanies the transition of society to another stationary or at least metastable state which is usually called totalitarian. The only possibility to resort to a confiscatory taxation during some acute crisis is to apply it for even shorter time t 0, the income tax t= r(x-B) is applied to the excess x-B. the meaning of B is in fact the existence minimum which can be calculated or estimated if the economic structure of the society is known, for example, as B= k∫ xf(x)dx ∞ 0 . Here we assume that the first moment of the distribution exists, function f(x) is normalized and k 0 . One more simplification is hidden in the fact that using such a model one can neglect a separate retirement payment and provide the financing of pensions through income tax. Everyone should receive her/his existence minimum irrespective of age and health condition. Those who wish and are capable of using supplementary private pension schemes receive more - merely additive. But they have also to pay taxes. No privileged or elite groups are 282 As regards health systems, there are three basic models: state-controlled medical insurance, private insurance, and health services paid by the state. There may be of course a superposition of these "pure" models. State-controlled medical insurances do not actually differ from budget organizations. Nothing changes for the patient if one covers her/his medical expenses from the state budget. Naturally these payments should be executed directly to medical service providers (doctors, healthcare personnel, etc.), without the patient's participation. Public-private partnership schemes may also be closer to optimal than state-controlled medical insurances. 283 I try to evade using the term "population" recalling an involuntary definition of this term which belonged to the Russian ex-Prime Minister, V. S. Chernomyrdin: Population is such a people with whom one can do whatever one wants. V. S. Chernomyrdin was known for his intuitive, unintentional aphoristicity. 500 Made in physics allowed, everyone pays the same rate and no one receives special treatment or has tax preferences. So simple may be the world. This simplicity may be considered exactly the drawback of the model. The matter is that there are some groups in society that perceive the simplicity and transparency of financial flows as directed against their interests. To these groups may belong not only misanthropic bureaucrats, but also social romanticists and leftist world-improvers. Fewer regulations mean less interference from the state part exercised by functionaries. Naturally, the latter are not interested in tax simplification. Moreover, under a simplified tax system the costs of processing tax returns paid to tax-collection officers become drastically reduced, and the tax collection offices can be significantly downsized. And what to do with the freed manpower? Moreover, the very fact that the flat rate can be applied to all taxable income (and profits) without exception or exemption exasperates many people. There is also a risk that an increased part of an inactive population would subsist for a long time on state transfers ensuring the existence level. The above mentioned "enthusiasm coefficient" is intended to entice them into a productive life. Another danger could be that employers would be tempted to pay low wages in order to minimize payments to the state, but this should be regulated by corporate taxes, in particular by an increased taxation of profits. In this respect, personal income taxes become even more strongly coupled to the corporate ones. This two-parameter flat-tax model reminds us of the so-called negative income tax (NIT) model suggested by one of the leading contemporary economists, Milton Friedman, described in his well-known book "Capitalism and Freedom" [169]. Currencies have a notorious tendency to plunge or soar in value because of the herd mentality of markets, so it is crucial to manage currency adjustments in an orderly fashion. As long as the decline of the US dollar is smooth, there is no problem. But if the dollar plummets, the world faces a fullblown currency crisis. 11.5 Naive Taxation Models 501 12 Conclusion and outlook Now, one can justifiably ask: has anything been said at all? Some people might say that the book is so uncommon and eclectic that it would be difficult to trace the logic without being distracted by surrounding complexities. I still hope that the described - maybe not always accurately - results and problems would deserve being discussed anyway. And I also hope that even in those numerous cases when I just mentioned some results and the great names associated to them this content might be useful for the reader as an initial piece of information about what has happened in the world. One may observe that physics done on the level of journal articles is different from physics presented in textbooks or treated in monographs. People writing monographs or textbooks are trying to make the presentation monolithic and the presented subject closed. In reality, no subject is closed, and scientific disciplines, in particular physics, are in perpetual development, conquering new areas and incorporating new results - new nodes, in networking terminology. The networking approach, increasingly popular these days, allows one to migrate between the physical and mathematical (in my terminology, "physmatical") subjects with a comparative easiness. Like any network, a physmatical one is not closed, it develops and gradually interconnects with other "proprietary" networks such as "biomedics", economics, astrophysics, "sociologics", etc. To some extent, "physmatics" reflects the scope of books one can see on the bookshelves of physicists. In this book, I wanted to pay attention to synergetic interaction between different fields of physics, mathematics, and other disciplines where mathematical models, in particular those constructed according to a physical pattern, are extensively used. I also attempted to bridge together two streams of presenting physical and mathematical results: a conservative textbook or monograph approach and a more daring article style. Bridging the gap between these two manners of presentation requires a relatively complete exposition of the basic concepts, therefore there are so many fragments in the text, which a reader may find too elementary. In other cases, only a glimpse at the non-core stuff was, to my mind, sufficient to convey the main ideas underlying the model discussed, but some readers may find such exposition too superficial. It is difficult to satisfy everyone, and I never wanted it. I did not present only the standard material, but also tried to comment on some open problems so that the interested reader may be induced to dig deeply in more professional sources. While preparing this manuscript my ambition was that the book would stimulate young researchers to venture into interdisciplinary fields, which are off the beaten track. The cross-fertilization process between such interdisciplinary fields is essentially reduced to the triad: acquire everything what is useful, neglect all unnecessary and add something of your own. As I have several times mentioned in this book, the term "interdisciplinary" has been seriously compromised due to heavy contamination of vague philosophy. 502 Conclusion and outlook Bearing the latter consideration in mind, I still wanted to make a journey over a variety of subjects rather than to settle down on any one of them. This explains to some extent wordy and philosophical interludes of mine which can irritate the reader anticipating a more focused approach. And I must once again apologize for highly subjective personal recollections interspersed through the manuscript: I hoped that occasional remembrances, however dim they might be, could partly illuminate the scientific scene of the Soviet years, the paradoxical period when physical and mathematical sciences could flourish against the ruinous background of nearly everything. Physicists and mathematicians are typically those who strive to "crystallize" the problem and to achieve maximal lucidity before converting their considerations into published papers. I concede that there are many places in this book that would not scrape through the clarity test; moreover, there are fragments that may seem "a mist" to an occasional reader. Broken and not clearly cut presentation of material is untypical of scientific publications, but, firstly, this book is not, strictly speaking, a scientific publication as it does not fit in with any standard for a monograph or a textbook and, secondly, I think this mosaic, non-monolithic style is more natural for a human perception. A rare reader nowadays meticulously studies the book from the first page to the last in the same methodical order as it was arranged by the author, editor and publisher. When writing this book, I did not think it would be fitting to ruminate about the tiniest wrinkles in physical models, for example on the level of details that were discussed last month in arXiv. Instead, I took the manner to jump over some details which I considered not necessarily trifling but distracting, speaking mostly about "the forest behind the trees", i.e., the most important models in physics. I have noticed that many physical questions may be fundamental but vague, which marks a certain philosophical component in physics. Personally, I am inclined to doubt that philosophy could answer physical questions, yet I think that vague questions can provoke imagination and diminish the dogmatic aplomb so typical of many science professionals. I recall a statement ascribed to G. H. Hardy and cited by an outstanding physicist, mathematician and writer Freeman Dyson 284 that young men should prove theorems, old men should write books [231]. Proving theorems and obtaining other hard scientific results projected into the future is indeed a game for the young. The ratio of a scientific to a philosophical component in a person's work rapidly diminishes with age after a certain threshold simply because young people are capable of more trial-and-error attempts. As I may be considered old, my motivation, while writing this book, was primarily to preserve the past. Books are in general intended to preserve the past. Unfortunately, I have seen many times how the worlds of the past disappear 284 It is a shame, by the way, that F. Dyson did not receive the Nobel Prize. There are many outstanding scientists who were overlooked by the Nobel Committee (more exactly, by the Royal Swedish Academy of Sciences): G. Gamow, J. W. Gibbs, L. Meitner, D. Mendeleev, nowadays F. Dyson and L. V. Keldysh, perhaps Yu. M. Kagan. The Nobel Committee often makes strange - often politically motivated - decisions. See, e.g., http://en.wikipedia.org/wiki/Nobel_Prize_controversies. 11.5 Naive Taxation Models 503 together with their human carriers. And I shall be happy if some person considerably younger than the author would refer to this book that it has something interesting to say. Even if the reference were in the following wellknown form: there are some new and true things in this book, but the true things are not new, and the new ones are not true. Back to this issue of preserving the past. In this book, I wanted to discern a timeline of 20th century physics and associated it mathematics which may serve as a backbone for the entire physmatical network. Of course, it was only a crude attempt. However, I do not think such an approach is hopeless. Networking reasoning is a good instrument to unify diverse environments. Modern physics began with Einstein's attempt to reconcile electrodynamics, mechanics, and thermodynamics in 1905 and his later endeavor to unify special relativity and the Newtonian theory of gravitation. In a more general social context, the synthesizing approach of Einstein meant a convergent scientific change - a retreat from austerity concepts of Max Weber [9] who insisted on "Zweckrational" action rejecting all unnecessary circumstances. Einstein was rather thinking in the Renaissance spirit of more lenient "communicative rationality" program, encouraging critical reassessment of "everybody knows that" concepts and the establishment of mutual understanding between diverse scientific communities. As a consequence, Einstein's program proved to be very successful as compared to rival endeavors not because it could explain more "facts" or was more powerful mathematically. It was better than rival programs probably because it provided a wide basis for interpenetration and communication between several main paradigms of 19th century physics. And in the long run, of course, Einstein's theory culminated in multiple empirical successes. What is not covered in the book? Since the manuscript is already too long, I have omitted many important subjects such as, among applied mathematical topics, data analysis, the finite element method, homotopy, generalized functions, projective spaces, etc., and within the physical portion, anything about astrophysics, bosonization, Fermi and Luttinger liquids, models of a ferromagnetic, quantum Hall effect, surface science and low-dimensional physics, elementary particle physics, quark models, spin systems (with spintronics applications), and many, many others. All these subjects are treated by a vast number of highly competent authors, and I, having only a superficial - not working - knowledge of them could provide a mere compilation of authoritative sources. I was not bold enough to exhibit my dilettantism, for the latter is always conspicuous when a person ventures to write about the subjects she/he does not have a working knowledge of. Besides, the number of sources covering the omitted topics is enormous. Take, for example, the finite element method. It has been the subject of so many books that it would be hard to select any pertinent references. For me, it was a serious challenge, while putting this book together, to decide what to include: where exactly lie the boundaries of the so-called "physmatics"? Or, in practical terms, what should be considered the standard repertoire of the indispensable education of a physicist, and what can be 504 Conclusion and outlook painlessly omitted? Can one find a common scheme to incorporate dozens of seemingly different research activities? Probably, only a science historian looking backwards in, say, thirty years from now can conclude whether the networking view of physical-mathematical disciplines was at all justifiable. In connection to a tremendous amount of topics and sources, I hope that in the near future people can utilize computers to understand the relational structures of physmatics, by constructing clever algorithms classifying its inherent content. One can, for example, invent multilevel or graph clustering algorithms which would predict physmatical complexes in large-scale dedicated information networks. In this direction, visual representations of complex multivariate information contained in physical and mathematical sciences may become feasible, which allows one to communicate this information synthetically (and not as traditional dispersed fragments) with ease and high precision. The designed physmatical algorithms might produce groups of high-quality nodes - nowadays this function is partly implemented by forums devoted to specific topics. Eventually such nodes can be merged before applying clustering algorithms. Then the clustering results may be passed to the entire physmatical network, with a certain additional fine tuning. Naturally, there will be some leftover nodes outside the mainstream (though multicore) knowledge network. Such leftover nodes should not be totally neglected or lost, since human representation of what is important and what is not in physics changes with time. Besides, the requirements of fashion are quite influential 285 . The leftover nodes may be added back to the mainstream with the help of multilevel algorithms accounting for hierarchy of nodes, supernodes and clusters. This computer-aided human communication in physical and mathematical sciences would make the results produced by them much more observable than today. But it is only a vision, of course. 285 For instance, string theory is of fashion nowadays, although it obviously lacks any corroborative evidence. On the contrary, traditional nuclear physics is largely out of fashion. Nanotechnology bordering on nano-mythology has become extremely fashionable in the beginning of the 2000s, as well as highly hypothetical quantum computing. Nonlinear science had been almost totally neglected before the 1960s-1970s, but in the 1980s it enjoyed enormous popularity. There are a lot of sociological effects and ideology in the advancement of science. 11.5 Naive Taxation Models 505 13 Bibliography [1] Samarskii, A. A., Mikhailov, A. P. Matematicheskoe Modelirovanie. Nauka, Moscow 1997. Translation: Principles of Mathematical Modeling: Ideas, Methods, Examples, CRC Press, 2002 [2] Basmadjian, Diran. Mathematical Modeling of Physical Systems: An Introduction. Oxford University Press, 2002 [3] Tikhonov, A. N., Samarski, A. A. Equations of Mathematical Physics. Pergamon Press, N. Y. 1963 [4] Morse, P., Feshbach, H. Methods of Theoretical Physics. McGraw-Hill, N. Y. 1953 [5] Jackson, J. D. Classical Electrodynamics. Academic Press, N. Y., 1998 [6] Stratton, J. A. Electromagnetic Theory. IEEE Press, Piscataway, NJ, 2007 (new edition) [7] Courant, R., Hilbert, D. Methoden der Mathematischen Physik. SpringerVerlag, Berlin-Heidelberg, 1968 [8] Belotserkovskii, O. M., Guschin, V. A. Matematicheskoe Modelirovanie: Problemy i Rezultaty. Nauka, Moscow 2003 [9] Weber, Max. Die protestantische Ethik und der 'Geist' des Kapitalismus. Paul Siebeck, Tübingen, 1986 [10] Aris, Rutherford. Mathematical Modeling Techniques. Dover, N. Y. 1994 [11] Surdin V. G. Vestnik Russian Academy of Sciences, No. 11, 1990 (in Russian); Nanninga, R. The Astrotest. Correlation, 15(2),1996/97, p. 1420 [12] Braginski, V. B., Panov, V. I. JETP 34, 463 (1972) [13] Myshkis, A. D. Elementy Teorii Matematicheskikh Modelei. Nauka, Moscow 1994 [14] Arnold, V. I. Mathematical Methods of Classical Mechanics. SpringerVerlag, LLC, 2nd ed., New York 1989 [15] Arnold, V. I. Ordinary Differential Equations. Translated from the Russian by R. A. Silverman, MIT Press, Cambridge, Massachusetts, 1978 [16] Arnold, V. I., Avez, A. Ergodic Problems of Classical Mechanics. AddisonWesley, Redwood, 1989 506 Bibliography [17] Solari, H. G., Natiello, M. A., Mindlin, G. B. Nonlinear Dynamics. IoP Publishing, London, 1996 [18] Weisskopf, V. F. Foreword to the book by W. Pauli, "Wave Mechanics", Dover, N. Y., 2000 [19] Coddington, E. A., Levinson, N. Theory of differential equations. McGrawHill, New York, 1955 [20] Dirac, P. A. M. Principles of Quantum Mechanics. Oxford University Press, London, 1958 [21] Gantmacher, F. R. Lectures on Analytical Mechanics. MIR, Moscow, 1970 [22] Gantmacher, F. R. Matrix Theory. Chelsea Publishing (AMS), N. Y.,1977 [23] Landau, L. D., Lifshitz, E. M. Mechanics. Pergamon Press, Oxford, 1976 [24] Landau, L. D., Lifshitz, E. M. Statistical Physics. Butterworth-Heinemann, 3rd edition, Oxford, 1980 [25] Lifshitz, E. M., Pitaevski, L. P. Physical Kinetics. Pergamon Press, Oxford, 1981 [26] Bronstein M., Lafaille S. Solutions of linear ordinary differential equations in terms of special functions, Proceedings of the 2002 international symposium on Symbolic and algebraic computation, July 07-10, 2002, Lille, France, 23-28 [27] Cheb-Terrab, E. S., von Blow K.: A Computational approach for the analytical solving of partial differential equations, Computer Physics Communications, 90, 102-116 (1995) [28] Schwarz, A. S. Topology for Physicists. Springer-Verlag, BerlinHeidelberg, 1994 [29] Erdelyi, A., Magnus, W., Oberhettinger, F., Tricomi, F. G.: Higher Transcendental Functions, New York: McGraw-Hill Book Company, Inc., 1953 [30] Gonis, A. Theory, Modeling, and Computation in Materials Science, LLNL, Livermore, CA, 1993 [31] Kovacic, J. J. An algorithm for solving second order linear homogenous differential equations. J. Symbolic Computation, 2(1), 3-43 (1986) [32] Singer M. F., Liouvillian solutions of linear differential equations with Liouvillian coefficients. J. Symbolic Computation, 11(3), 251-273 (1991) [33] Theory and modeling in nanoscience. Report of the May 10-11, 2002 Workshop, DOE U. S. LBNL-50954 11.5 Naive Taxation Models 507 [34] Theory, simulation, and modeling in nanoscience, LLNL Nanoscience Home Page http://www.llnl.gov/nanoscience [35] Yoffe, A. D.: Low-dimensional systems: quantum size effects and electronic properties of semiconductor microcristallities (zerodimensional systems) and some quasi-two-dimensional systems, Adv. Physics, Vol. 51, 2002 [36] Yoffe, A. D.: Semiconductor quantum dots and related systems: electronic, optical, luminescence and related properties of lowdimensional systems, Adv. Physics, Vol. 50, 2001 [37] Heaviside, O. Phil. Trans. Royal Soc. (London) A 183, 423 (1893) [38] Heaviside, O. On the electromagnetic effects due to the motion of electrification through a dielectric. Phil. Mag. 27, 324-339 (1889) [39] Landau, L. D., Lifshitz, E. M. The Classical Theory of Fields. Pergamon Press, Oxford, 1975 [40] Kopaev, Yu. V. High-temperature superconductivity models. Physics - Uspekhi, 45(6), 655 - 659 (2002) [41] Gulyaev Yu. V., Godik E. E. The Physical Fields of Biological Objects. Vestnik AN USSR, No. 8 (1983); Godik E. E., Gulyaev Yu. V. Functional Imaging of the Human Body. IEEE Engineering in Medicine and Biology. 10, No.4 (1991) [42] Bolotovsky, B. M. Oliver Heaviside. Nauka, Moscow, 1985 (in Russian) [43] Kronig, R. Zur Theorie der Supraleitfähigkeit, Zeischrift für Physik A78, No. 11-12, 744-750 (1932) [44] Feynman, R. P. A New Approach to Quantum Theory, ed. L. M. Brown. World Scientific, Singapore 2005; Feynman, R. P., Hibbs, A. R. Quantum Mechanics and Path Integrals. McGraw Hill, New York 1965 [45] Feynman, R. P., Vernon, F. L. The theory of a general quantum system interacting with a linear dissipative system. Ann. Phys. (New York) 24, 118173 (1963) [46] Pauli, W. In: Handbuch der Physik, ed. S. Flügge, vol. 5, Springer, Berlin (1958). [47] Ginzburg, V. L., Landau, L. D. Zh. Eksp. Teor. Fiz. 20, 1064 (1950). English Translation: L. D. Landau, Articles (Oxford: Pergamon Press, 1965), p. 546 [48] Bardeen, J., Cooper, L., Schrieffer, J. Microscopic Theory of Superconductivity, Phys. Rev. 106, 162-164 (1957); Theory of Superconductivity, Phys. Rev. 108, 1175-1204 (1957). 508 Bibliography [49] Kleinert, H. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets. World Scientific, Singapore 2003 [50] Gorelik, G. E. Fizika universitetskaia i akademicheskaia (The university physics vs. the academic physics), Voprosy istorii estestvoznaniia i techniki, no. 2, 1991; Filosofski voprosy sovremennoi fiziki, Academy of Sciences USSR, Moscow, 1952 (in Russian) [51] Gorelik, G. E. The creation of the "Course of Theoretical Physics", Priroda, No. 8, 2005; http://vivovoco.rsl.ru/vv/journal/nature/0805/gorelik.htm (in Russian) [52] DeBroglie, L. Compt. Rend. v.177, 507-548 (1923); v.179, 39 (1924) [53] McTaggart, J. E. The Unreality of Time. A Quarterly Review of Psychology and Philosophy v. 17,456-473 (1908) [54] Goertzel, B. On the Physics and Phenomenology of Time, http://www.goertzel.org/papers/timepap.html [55] Prigogine, I. Nonlinear Science and the Laws of Nature. International Journal of Bifurcation and Chaos, v.7, 1917-1926 (1997); From Poincarés divergences to quantum mechanics with broken time symmetry. Z. fr Naturforschung, v. 52a, 37-45 (1997) [56] Abarbanel, H. D. I., Rabinovich, M. I., Sushchik, M. M. Introduction to Nonlinear Dynamics for Physicists. World Scientific, Singapore, 1996 [57] Haken, H. Advanced Synergetics. Springer, Berlin, 1983 [58] Reichenbach, H. The Direction of Time. Univ. of California Press, Berkeley, 1956 [59] Davies, P. C. W. The Physics of Time Asymmetry. Univ. of California Press, Berkeley, 1974 [60] Hoover, W. G. Time Reversibility, Computer Simulation, and Chaos. Advanced Series in Nonlinear Dynamics 13. World Scientific, SingaporeRiver Edge, N. J., 1999 [61] Krasnikov, S. V. Causality violation and paradoxes. Phys. Rev. D 55, 3427 - 3430 (1997) [62] Bachelot, A. Global properties of the wave equation on non-globally hyperbolic manifolds. J. des Mathématiques Pures et Appliqués 81(1), 35-65 (2002) 11.5 Naive Taxation Models 509 [63] Angelopoulos, A. et al. (CPLEAR Collaboration). First direct observation of time-reversal non-invariance in the neutral-kaon system, Phys. Lett. B444, 43 (1998) [64] Newton, R. Scattering Theory of Waves and Particles. McGraw-Hill, New York, 1966 [65] Newton, R. Thinking about Physics. Princeton University Press, Princeton Oxford, 2000 [66] Lemaître, G. C. R. Acad. Sci. Paris 196 (1933), 903; Ann. Soc. Sci. Brussels A 53 (1933), 85 [67] Hartle, J. and Hawking, S. Wave Function Of The Universe. Phys. Rev. D28, 2960-2975 (1983) [68] Kac, M. Probability and Related Topics in Physical Sciences. Interscience Publishers, London-New York, 1957 [69] Boltzmann, L. Lectures on Gas Theory, 1964 (Translated by S. Brush) [70] Vilenkin, N. Ya. Special Functions and Theory of Group Representations, Nauka, Moscow,1965; Translation: Math. Monographs, Vol. 22, Amer. Math. Soc, Providence, R. I.,1968. [71] Gibbs, J. W. Scientific Papers of J Willard Gibbs, 2 vols. Bumstead, H. A., and Van Name, R. G., eds. Ox Bow Press, Woodbridge (Conn), 1961, 1993 [72] Silin, V. P. Introduction to the kinetic theory of gases , Moscow, 1971 (In Russian). [73] Verhulst, F. Nonlinear Differential Equations and Dynamical Systems. Springer-Verlag, Berlin-Heidelberg, 1990 [74] Smolin, L. The Trouble with Physics. Houghton Mifflin, New York, 2006 [75] Physical Origins of Time Asymmetry. Ed. by J. J. Halliwell, J. PrezMercader, W. H. Zurek. Cambridge Univ. Press, Cambridge (UK), 2006. [76] Zeh, H. D. The Physical Basis of the Direction of Time. Springer, BerlinHeidelberg, 1999. [77] Unruh, W. G. Notes on black hole evaporation. Phys. Rev. D 14, 870 (1976) [78] Hawking, S.W. A Brief History of Time. Bantam Books, London, 1988 [79] Galapon, E. Paulis Theorem and Quantum Canonical Pairs:The Consistency Of a Bounded, Self-Adjoint Time Operator Canonically 510 Bibliography Conjugate to a Hamiltonian with Non-empty Point Spectrum. Proc. R. Soc. Lond. A v. 458, 451-472(2002). [80] Braunss, G. Mathematics of Noncanonical Quantum Theory. Commun. Math. Phys. v. 45,159165 (1975). [81] Steinhardt, P. J., Turok, N. Endless Universe: Beyond the Big Bang. Doubleday, New York, 2007. [82] Hawking, S. W. Arrow of time in cosmology. Phys. Rev. D32, 2489-2495 (1985) [83] Hawking, S. W. Particle creation by black holes. Comm. Math. Phys. v. 43, 199-220 (1975) [84] Landau, L. D., Lifshitz, E. M. Quantum Mechanics: Non-relativistic Theory. Pergamon Press, London, 1977 [85] Landau, L. D. and Lifshitz, E. M. Fluid Mechanics. Pergamon Press, London, 1987 [86] Adler, S. L. Quaternionic Quantum Mechanics and Quantum Fields. Oxford University Press, Oxford, 1995 [87] Bekenstein, J. Black holes and entropy. Phys. Rev. D7, 2333-2346 (1973 [88] Penrose, R. The Road to Reality. Vintage Books, London 2006. [89] Penrose, R. Singularities and time asymmetry. In: General Relativity. An Einstein centenary survey. Hawking, S. W. and Israel, W. (eds.), Cambridge University Press, Cambridge (U. K.) 1979. [90] Ruelle, D. Dynamical Zeta Functions for Piecewise Monotone Maps of the Interval. AMS, New York, 2006 [91] Einstein, A. On a stationary system with spherical symmetry consisting of many gravitating masses. Annals of Mathematics, , vol 40, No 4, pp 922-936 (1939) [92] Schulman, L. S. Time's Arrow and Quantum Measurement. Cambridge University Press, Cambridge (U. K.) 1997 [93] Linde, A. D. Inflation and Quantum Cosmology, Academic Press, Boston 1990; Linde, A. D. Linde, D. A., Mezhlumian, A. From the Big Bang theory to the theory of a stationary universe. Phys. Rev. D 49, 1783 (1994) [94] Linde, A. D. Particle Physics and Inflationary Cosmology. Harwood Academic, Chur 1990. [95] Foley, J. D., Feiner, S. K., Hughes, J. F., van Dam, A. Computer Graphics: Principles and Practice. Addison-Wesley, Boston 1990. 11.5 Naive Taxation Models 511 [96] 't Hooft, G. Magnetic monopoles in unified gauge theories. Nucl. Phys. B79, 276-284 (1974) [97] Polyakov, A. M. Particle spectrum in quantum field theory. Pisma Zh. ksp. Teor. Fiz. 20, 430-432 (1974) [JETP Lett. 20, 194-196 (1974)] [98] Gell-Mann, M. Hartle, J. Quasiclassical coarse graining and thermodynamic entropy. Phys. Rev. A 76, 022104 (2007) [99] Sakharov, A. D. Cosmological models of the universe with the timearrow inversion. ZhETF 79, 689-693 (1980); translated in JETP Lett. 52, 349-351 (1980) [100] Baz', A. I., Zel'dovich, Ya. B., Perelomov, A. M. Rasseyanie, Reaktsii i Raspady v Nerelyativistskoi Kvantovoi Mekhanike (Scattering,Reactions and Decays in Nonrelativistic Quantum Mechanics) 2nd ed. (Moscow: Nauka, 1971) [Translated into English 1st ed. (Jerusalem: Israel Program for Scientific Translations, 1969)] [101] Aharonov, Y., Bohm D. Significance of electromagnetic potentials in quantum theory. Phys. Rev. 115, 485-491 (1959); Further considerations on electromagnetic potentials in the quantum theory. Phys. Rev. 123: 1511-1524 (1961) [102] Kibble, T. W. B. Geometrization of quantum mechanics. Comm. Math. Phys. 65(2), 189-201 (1979) [103] Frankel, T. The Geometry of Physics. An Introduction. Cambridge University Press, Cambridge (U. K.), 1998 [104] Nakahara, M. Geometry, Topology and Physics. Taylor and Francis, LLC, 2003 [105] Altshuler, B. L., Aronov, A. G., Spivak, B. Z. The Aharonov-Bohm effect in disordered conductors. Pisma Zh. ksp. Teor. Fiz. 33, 101-103 (1974) [JETP Lett. 33, 94-96 (1974)] [106] Born, M., Wolf, E. Principles of Optics. Cambridge University Press, Cambridge (U. K.), 1999 [107] Joos, E., Zeh, H. D. The Emergence of Classical Properties through Interaction with the Environment, Zeitschrift fr Physik B59, 223-243 (1985) [108] Giulini, D., Joos, E., Kiefer, C., Kupsch, J., Stamatescu, I.-O. and Zeh, H. D. Decoherence and the Appearance of a Classical World in Quantum Theory. Springer, Berlin 1996 [109] Ziman, J. Public Knowledge. The Social Dimension of Science. Cambridge University Press, Cambridge, 1968 512 Bibliography [110] Zurek, W. H., Decoherence and the transition from quantum to classical, Phys. Today 44, No. 10, 36-44 (1991) [111] Zurek, W. H., Habib, S., and Paz, J.-P. Phys. Rev. Lett. 70, 11871190 (1993); Zurek, W. H., and Paz, J.-P. Phys. Rev. Lett. 72, 25082511 (1994) [112] Zurek, W. H. Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics 75, 715-775 (2003) [113] Caldeira, A. O., Leggett, A. J. Path integral approach to quantum Brownian motion. Physica (Amsterdam) 121A, 587 (1983) [114] Hu, B. L., J. Paz, J., Zhang, Y. Quantum Brownian-motion in a general environment-exact master equation with nonlocal dissipation and colored noise. Phys. Rev. D45, 2843 (1992); Quantum Brownian-motion in a general environment. 2. nonlinear coupling and perturbative approach Phys. Rev. D47, 1576 (1993). [115] Doran, C., Lasenby, A. Geometric Algebra for Physicists. Cambridge University Press, Cambridge (U. K.), 2003 [116] Yaffe, L. G. Large N limits as classical mechanics. Reviews of Mod. Phys. 54, 407-435 (1982) [117] Peres, A. Quantum Theory: Concepts and Methods. Kluwer Academic, Dordrecht/Boston, 1993 [118] Kolomenski, A. A., Lebedev, A. N. Theory of Cyclic Accelerators. North Holland, Amsterdam, 1966 [119] Heisenberg, W. Across the frontiers.Translated into English by Peter Heath. Ox Bow Press, Woodbridge, CT., 1990 [120] Feynman, R. Simulating physics with computers, International Journal of Theoretical Physics 21, 467-488 (1982) [121] Deutsch, D. Quantum theory, the Church Turing principle, and the universal quantum computer. Proc. Roy. Soc. Lond. A 400, 97-117 (1985) [122] Deutsch, D. Quantum computation. Physics World, 5, 5761 (1992) [123] Albert, D. On quantum mechanical automata. Phys. Lett. A 98, 249-252 (1983) [124] Heisenberg, W. Physicist's conception of nature. Greenwood Press, Westport, CT., 1970 [125] Heisenberg, W. Across the frontiers. Harper and Row, New York, 1973 11.5 Naive Taxation Models 513 [126] Schrödinger, E. The interpretation of Quantum Mechanics. Ox Bow Press, Woodbridge, CT., 1995 [127] Schrödinger, E. My View of the World. Ox Bow Press, Woodbridge, CT. 1983 [128] Schrödinger, E. What is Life? Cambridge University Press, Cambridge, 2002 [129] Schrödinger, E. Der stetige bergang von der Mikro- zur Makromechanik. Naturwissenschaften, Bd. 14, H. 28, S. 664-666 (1926) [130] Balachandran, A. P., Marmo, G., Skagerstam, B.-S., Stern, A. Gauge Symmetries and Fibre Bundles. Application to Particle Dynamics, Lecture Notes in Physics 188, Springer, 1983 (and references therein) [131] Zhang, Y. Z. Special Relativity and its Experimental Foundations. World Scientific, Singapore, 1997 [132] Todorov, I. Einstein and Hilbert: The Creation of General Relativity, http://arxiv.org/abs/physics/0504179; Earman, J., Glymour, C. Einstein and Hilbert: Two Months in the History of General Relativity, Archive for History of Exact Sciences v. 19, No. 3, 291 (1978); Logunov A. A., Mestvirishvili, M. A., Petrov, V. A. How were discovered the Hilbert-Einstein equations?, Russian Physics - Uspekhi v. 174, No. 6 (2004) [133] Logunov, A. A., Loskutov, Yu. M., Mestvirishvili, M. A. The relativistic theory of gravitation and its consequences. Sov. Phys. Uspekhi 31(7), 581596 (1988) [134] Sen, A. Strong-weak coupling duality in four dimensional string theory, Int. J. Mod. Phys. A9, 3707-3750 (1994); Electric-magnetic duality in string theory, Nucl. Phys. B404, 109-126 (1993); Unification of string dualities, Nucl. Phys. Proc.Suppl. 58, 5-19 (1997) [135] Gauntlet, J. P., Harvey, J. A., Liu, J. T. Magnetic monopoles in string theory. Nucl. Phys. B409, 363-381 (1993), see also: Gregory, R., Harvey, J. A., Moore, G. Unwinding strings and T-duality of Kaluza-Klein and HMonopoles Adv. Theor. Math. Phys. 1 (1997) 283-297 [136] Pais, A. Niels Bohrs Times in Physics, Philosophy, and Polity Clarendon Press, Oxford, 1991 [137] Jammer, M. The Conceptual Development of Quantum Mechanics. McGraw-Hill, New York, 1966 [138] Feynman, R. Leighton, R., Sands, M. Feynman Lectures on Physics. Addison Wesley, Reading (MA), 1963 514 Bibliography [139] Dirac, P. A. M. Lectures on Quantum Mechanics. Dover, N. Y., 2001 [140] Davydov, A. S. Quantum Mechanics. Pergamon Press, Oxford, New York, 1976 [141] Popper, K. The Logic of Scientific Discovery. Basic Books, New York, 1959 [142] Toynbee, A. A Study of History. Oxford University Press, Oxford, 1961 [143] Planck, M. Collected Papers. Nauka, Moscow, 1975 (in Russian); On improvement of the Wien formula for spectral distribution. Verhandl. Deutsch. Phys. Gesellschaft, Bd. 2, S. 202; To the theory of energy distribution in the normal spectrum. Bd. 2, S. 237, Berlin, 1900 [144] Stone, M. On one-parameter unitary group in Hilbert space. Annals of Mathematics, 33, 643-648 (1932) [145] Fushich, V. I., Nikitin, A. G. Symmetry of Equations in Quantum Mechanics. Nauka, Moscow, 1990 ; Symmetry of Maxwell's Equations. Naukova dumka, Kiev, 1983 (in Russian) [146] Hirsch, M. W. The dynamical systems approach to differential equations. Bull. Amer. Math. Soc. , 11, 164 (1984) [147] Sinai, Ya. G. (ed.) Dynamical Systems, vol. 1-3. VINITI, Moscow, 1985 (in Russian) [148] Rukhadze, A. A. Sobytiia i Liudi (Events and People) (1948-1991). Star, Tula, 2000 (in Russian) [149] Wu, Zuo-Bing, Zeng, Jin-Yan. Dynamical symmetry of screened Coulomb potential and isotropic harmonic oscillator. Phys. Rev. A 62, 032509, (2000) [150] Holas, A., March,N. H. How many vector constants of motion exist for a particle moving in a central potential? J. Phys. A: Math. Gen. 27, 26152617 (1994) [151] Pankratov, S. G. Coherent electromagnetic radiation of a modulated beam of charged particles. Physics Letters A59(5), 338-340 (1976) [152] Wigner, E. P. Über die Operation der Zeitumkehr in der Quantenmechanik. Akad. Wiss. Göttingen, Math-Physik, 546 (1932) [153] Shilov, G. E. Introduction to the Theory of Linear spaces. Dover Publications, N. Y., 1974 [154] Rashevski, P. K. Riemannian Geometry and Tensor Analysis. Moscow, Nauka, 1967 11.5 Naive Taxation Models 515 [155] Kostrikin, A. I, Manin, Yu. I. Linear Algebra and Geometry. Gordon and Breach Science Pub, London - New York, 1989 [156] Perelomov, A. M. Generalized Coherent States and Their Applications. Springer, Berlin, 1985 [157] Ehrenfest, P. Bemerkung uber die angenaherte Gultigkeit der klassichen Mechanik innerhalb der Quantunmechanik (Note on approximate validity of classical mechanics). Z. Phys. 45(7/8), 455-457 (1927) [158] Oas, G. On the use of relativistic mass in various published works. arXiv: physics/0504111. [159] Topping, P. Lectures on the Ricci Flow. Cambridge University Press, Cambridge (U. K.), 2006 [160] Kalashnikov, N. P., Pankratov, S. G. Coherent excitation of atoms by the periodic field of crystal lattice. Sov. Phys. Solid State 16, 542 - 545 (1974) [161] Andrieux D, Gaspard P. Fluctuation theorems and the nonequilibrium thermodynamics of molecular motors. Phys. Rev. E 74, 011906 (2006) [162] Jarzynski C, Wojcik D. Classical and quantum fluctuation theorems for heat exchange. Phys. Rev. Lett. 92, 230602 (2004) [163] Evans D. J., Searles D. Causality, response theory, and the second law of thermodynamics. Phys. Rev. E 53, 5808-5815 (1996) [164] Ballentine, L. E., Yang, Y., Zibin, J. P. Inadequacy of Ehrenfest's theorem to characterize the classical regime. Phys. Rev. A 50, 2854-2859 (1994) [165] Petrina, D. Ya., Gerasimenko, V. I., Malyshev, P. V. Mathematical foundations of classical statistical mechanics. Continuous systems. Gordon and Breach, Newark (NJ), 1989 [166] Andreev, A. F., Lifshitz, I. M. Quantum theory of defects in crystals. JETP 29(6), 1107-1118 (1969); Zh. Eksp. Teor. Fiz. 56, 2057-2068 (1969) [167] Vizgin, V. P. Physics in Moscow. http://www.ihst.ru/projects/sohist/books/moskva/185211.pdf [168] Anderson, P. W., Arrow, K. J, Pines, D., eds. The Economy as an Evolving Complex System. Addison-Wesley, Redwood, 1988 [169] Friedman, M. Capitalism and Freedom. 1962 [170] Yeor, B. Eurabia: The Euro-Arab Axis. Fairleigh Dickinson University Press, Madison, N. J., 2005; see also http://en.wikipedia.org/wiki/Eurabia 516 Bibliography [171] Snow, C. P. The Two Cultures. Cambridge University Press, Cambridge (U. K.), 1993 [172] Kirchhof, P. Der sanfte Verlust der Freiheit. Carl Hanser Verlag, München, 2004 [173] Busemeyer, J. R., & Diederich, A. (2010). Cognitive modeling: Sage. [174] Sairamya, N. J., Susmitha, L., Thomas George, S., & Subathra, M. S. P. (2019). Chapter 12 - Hybrid Approach for Classification of Electroencephalographic Signals Using Time-Frequency Images With Wavelets and Texture Features. In D. J. Hemanth, D. Gupta & V. Emilia Balas (Eds.), Intelligent Data Analysis for Biomedical Applications (pp. 253-273): Academic Press. [175] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. [Insight]. Nature, 521(7553), 436-444. [176] Bresnick, J. Machine Learning Algorithm Outperforms Cardiologists Reading EKGs (healthitanalytics.com) https://healthitanalytics.com/news/machine-learning-algorithmoutperforms-cardiologists-reading-ekgs [177] Morris, P. D., Ryan D Fau - Morton, A. C., Morton Ac Fau - Lycett, R., Lycett R Fau - Lawford, P. V., Lawford Pv Fau - Hose, D. R., Hose Dr Fau - Gunn, J. P., & Gunn, J. P. Virtual fractional flow reserve from coronary angiography: modeling the significance of coronary lesions: results from the VIRTU-1 (VIRTUal Fractional Flow Reserve From Coronary Angiography) study. (1876-7605 (Electronic)). [178] Dawkins, R. The God Delusion. Transworld Publishers, London, 2007 [179] Matthews, D. A. (1999). The faith factor: Proof of the healing power of prayer: Penguin. [180] Newton, I. The Mathematical Principles of Natural Philosophy (1846) Book II, Section IX. Translated by Andrew Motte, https://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natu ral_Philosophy_(1846)/BookII-IX [181] Fischer, H. P. (2008). Mathematical modeling of complex biological systems: from parts lists to understanding systems behavior. Alcohol research & health: the journal of the National Institute on Alcohol Abuse and Alcoholism, 31(1), 49-59. [182] https://mhealthfairview.org [183] Dirac, P. A. M. Quantized singularities in the electromagnetic field. Proc. Roy. Soc. London A 133, 60-72, 1931 11.5 Naive Taxation Models 517 [184] Veneziano, G. Construction of a crossing-symmetric, Regge behaved amplitude for linearly-rising trajectories. Nuovo Cimento, 57A, 190 (1968) [185] G. Veneziano, "Duality and Dual Models", in Proceedings 15th Int. Conference on High-Energy Physics, Kiev, 1970 (Naukova Dumka, Kiev, 1972, p. 437) [186] Popper, K. The Open Society and Its Enemies. Routledge, London, 1945 [187] Dubrovin, B. A., Fomenko, A. T., Novikov, S. P. Modern Geometry - Methods and Applications. Translated by Robert G. Burns, Part 1. Springer-Verlag, Berlin-Heidelberg-New York, 1984 [188] Rashevski, P. K. Riemannian Geometry and Tensor Analysis. Nauka, Moscow, 1967 [189] Hawking, S., Ellis, J. The Large-Scale Structure of Space-Time. Cambridge University Press, Cambridge (U. K.), 1973 [190] Olver, P. J. Applications of Lie Groups to Differential Equations. Springer-Verlag, N. Y., 1993 [191] Dieudonné, J., ed. Abrégé d'histoire des mathématiques 1700-1900. Hermann, Paris, 1978 [192] Eisenhart, L. P. Riemannian Geometry. Princeton University Press, Princeton, 1964 [193] L. D., Lifshitz, E. M. The Classical Theory of Fields. Pergamon Press Landau, Oxford, 1975 [194] Born, M., Heisenberg, W., Jordan, P. Zur Quantenmechanik II. Zeitschrift fuür Physik 35, 557 (1926) [195] Jordan, P., von Neuman, J., Wigner, E. On an Algebraic Generalization of the Quantum Mechanical Formalism. Annals of Mathematics (Princeton) 35, No. 1, 2964 (1934) [196] Arnold, V. I. Lobachevsky triangle altitude theorem as the Jacobi identity in the Lie algebra of quadratic forms on symplectic plane. J. of Geometry and Physics, 53, 421-427 (2005) [197] Arnold, V. I. Mathematical Methods of Classical Mechanics. SpringerVerlag, LLC, 2nd ed., New York 1989 [198] Brecher, K. Is the Speed of Light Independent of the Velocity of the Source? Phys. Rev. Lett. v. 39, 10511054, (1977) [199] Wolf, P., Bize, S., Tobar, M. E., Chapelet, F., Clairon, A., Luiten, A. N., Santarelli, G. Recent experimental tests of special relativity. http://arXiv.org/abs/physics/0506168v1). 518 Bibliography [200] Lichtenberg, A. J., Lieberman, M. A. Regular and Stochastic Motion. Springer, Berlin-Heidelberg-New York, 1992. [Comprehensive and containing nontrivial results, can be difficult for the first reading]. [201] Lin, C. C. The Theory of Hydrodynamic Stability. Cambridge University Press, Cambridge, 1955. [An old and possibly forgotten, but very insightful book]. [202] Gibbs, J. W. On the equilibrium of heterogeneous substances. Transactions of the Connecticut Academy of Arts and Sciences, v. 3, pp. 108-248, 343524 (1874-1878). [This fundamental work on foundations of thermodynamics was reproduced in The Collected Works of J. Willard Gibbs, in two volumes, eds. W. R. Longley and R. G. Van Name, New Haven: Yale University Press, 1957 (also 1928)]. [203] Arnold, V. I. Lectures on Partial Differential Equations. Springer, BerlinHeidelberg-New York, 2004 (2nd Edition). [This book is both comprehensive and understandable for those who are not well-versed with contemporary mathematical language]. [204] Cattaruzza, E., Gozzi, E., Francisco Neto, A. Least-action principle and pathintegral for classical mechanics. Phys. Rev. D 87, 067501. [205] Marsden, J. E., Ratiu, T. S. Introduction to Mechanics and Symmetry. Springer, New York, 1999. [206] Schweber, S. S. An Introduction to Relativistic Quantum Field Theory. Harper and Row, N. Y., 1964 [207] Heitler, W. The Quantum Theory of Radiation. Dover Publications, N. Y., 1984 [208] Landau, L. D., Lifshitz, E. M. Electrodynamics of Continuous Media. Pergamon Press, London, 1984 [209] Silin, V P, Rukhadze, A A Elektromagnitnye Svostva Plazmy i Plazmopodobnykh Sred (Electromagnetic Properties of Plasmas and Plasma-Like Media), Gosatomizdat, Moscow, 1961 (in Russian) [210] Akhiezer, A. I., Berestetskii, V. B. Quantum Electrodynamics. Nauka, Moscow, 1965 (in Russian), translation: Wiley-Interscience, New YorkLondon, 1969 [211] Schrödinger, E. Abhandlungen zur Wellenmechanik. Verlag J. A. Barth, Leipzig, 1928 [212] Gamow, G. Zur Quantentheorie des Atomkernes. Zeitschrift für Physik, 51, 204-212 (1928) [213] Leggett, A. J. Macroscopic quantum systems and the quantum theory of measurement. Progress of Theoretical Physics Supplement, No. 69, pp. 80-100 (1980) 11.5 Naive Taxation Models 519 [214] Carles, R. Semi-Classical Analysis for Nonlinear Schrödinger Equations. World Scientific, Singapore, 2008 [215] Mensky, M. B. Concept of conscience in the context of quantum mechanics. Phys. Uspekhi 48, 389-410 (2005) [216] Aspect, A., Dalibard, J., Roger, G. Experimental Test of Bell's inequalities using time-varying analyzers. Phys. Rev. Letters, v. 49, No. 25, 1804-1807 (1982); Aspect, A., Grangier, P., Roger, G. Experimental realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: a new violation of Bell's inequalities. Phys. Rev. Letters, v. 49, No. 2, 91-94 (1982) [217] Born, Max. The statistical interpretation of quantum mechanics. Nobel Lecture, December 11, 1954, Bohm, D. A suggested interpretation of the quantum theory in terms of "hidden" variables. I. Phys. Rev. 85, No. 2, 166-179 (1952) http://nobelprize.org/prizes/physics/1954/born/lecture [218] Bohm, D. A suggested interpretation of the quantum theory in terms of "hidden" variables. I. Phys. Rev. 85, No. 2, 166-179 (1952) [219] Zel'dovich, Ya. B., Novikov, I. D. Relativistic Astrophysics, I: Stars and relativity, Chicago University Press, Chicago, 1971 [220] Balesku, R. Equilibrium and Nonequilibrium Statistical Mechanics. John Wiley and Sons, N. Y.,1975 [221] Landauer, R. Irreversibility and heat generation in the computing process. IBM J. of Research and Development, v. 5, No. 3 (1961) [222] Il'inskii, Yu. A., Keldysh, L. V. Electromagnetic Response of Material Media. Plenum Press, N. Y.-London, 1994 [223] Arnold, V. I. Catastrophe Theory. Springer, Berlin-Heidelberg-New York, 2004 (3d Edition). [224] Currie, D. G., Jordan, T. F., Sudarshan, E. C. G. Relativistic invariance and Hamiltonian theories of interacting particles. Rev. Mod. Phys. v. 35, 350375 (1963). [225] Goldberg, S. The Abraham theory of electron: The symbiosis of experiment and theory. Archive for History of Exact Sciences, v. 7, No. 1, pp. 7-25 (1970). [226] Bolotovskii, B. M., Serov, A. V. Details of the motion of charged nonrelativistic particles in a variable field. Physics Uspekhi 37, No. 5, 515 516 (1994) [227] Kapitza, P. L., Dirac, P. A. M. The reflections of electrons from standing light waves. Proc. Cambridge Phil. Soc. v. 29, 297 (1933) [228] Fedorov, M. V. Quantum theory of the Kapitza-Dirac effect. Zh. Teor. 520 Bibliography i Exper. Fiz. 52, 1434 (1967) [229] Fedorov, M. V., Goreslavsky, S. P., Letokhov, V. S. Ponderomotive forces and stimulated Compton scattering of free electrons in a laser field. Phys. Rev. E 55, 1015-1027 (1997) [230] Weinstein, L. A., Solntsev V. A. Lectures on Microwave Electronics. Sovetskoye Radio, 1973 (in Russian) [231] Freeman Dyson: Mathematician, Physicist, and Writer. Interview with D J Albers, The College Mathematics Journal, 25, No. 1, January 1994 [232] Solomon, S., Plattner, G.-K., Knutti, R., Friedlingstein, P. Irreversible climate change due to carbon dioxide emissions. Proceedings of The National Academy of Sciences of the USA [PNAS] 106, No.6, 1704-1709 (2009) [233] Ghil, M., Childress, S. Topics in Geophysical Fluid Dynamics: Atmospheric Dy- namics, Dynamo Theory, and Climate Dynamics. Springer, Berlin-Heidelberg, 1987 [234] B ̈arring, L. Climate - change or variation? Climatic Change, v. 25, No. 1, pp. 1-13 (1993) [235] Nicolis, G., Nicolis, C. Foundations of complex systems: nonlinear dynamics, statistical physics, information and prediction, World Scientific, 2007 [236] Niclis, C. Niclis, G. Reconstruction of the dynamics of the climatic system from time-series data. Proc. Natl. Acad. Sci. USA (Geophysics), v.83, pp. 536-540, February 1986 [237] Radiative Forcing of Climate Change. Board on Atmospheric Sciences and Climate (BASC). The National Academies Press, Washington, D.C., 2005 [238] Cravens, G. Power to Save the World. The Truth About Nuclear Energy. Knopf, N.Y., 2007 [239] Usoskin, I. G., Schu ̈ssler, M., Solanki, S. K., Mursula, K. Solar activity over the last 1150 years: does it correlate with climate? Proc. 13th Cool Stars Workshop, Hamburg, July 5-9, 2004 (ESA SP-560, Jan. 2005, eds.: Favata, F., Hussain, G., Battrick, B.) [240] Ditlevsen, P. D. Bifurcation structure and noise-assisted transitions in the Pleistocene glacial cycles. Paleooceanography, v. 24, PA3204, August 2009 [241] Gates, W. L., Mintz, Y. (eds.) Understanding Climate Change. National Academy of Sciences (NAS), Washington, D.C., 1975 [242] Bogoliubov, N.N., Mitropolski, Yu.A. Asymptotic Methods in the 11.5 Naive Taxation Models 521 Theory of Non- linear Oscillations. Gordon and Breach, N.Y., 1961 [243] Bartens, W. Das Ärztehasserbuch. Knaur Taschenbuch Verlag, München, 2007; Auf Kosten der Patienten. Wie das Krankenhaus uns krank macht. Eichborn-Verlag, Frankfurt 2008. [244] E. U. Condon and R. W. Gurney "Quantum mechanics and radioactive disintegration", Phys. Rev. 33, No.2, 127-140 (1929) [245] Kartsev, V. P. Newton. The Lives of Remarkable People. Biography Series. Moscow, 1987 (in Russian). [246] Wang N., Yao T., Shi Y. On the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum. Science in China Supplement (series D), v. 42, 80-90 (1999) [247] Hoag, H. The missing greenhouse gas. Nature Reports, v. 2, p.99-100, August 2008 [248] Primeau, F. Characterizing transport between the surface mixed layer and the ocean interior with a forward and adjoint global ocean transport model. Journal of Physical Oceanography, v. 35, 545-564 (2005) [249] Holland, D. "Bias and concealment in the IPCC process: the 'hockey-stick' affair and its implications". Energy and Environment, v. 18, 951-983 (2007) [250] Roberts, J. E. Meandering in Medical Physics: A Personal Account of Hospital Physics, 1999 [251] Popakostas, A., Potts, A., Bagnall, D. M., Prosvirnin, S. L., Cles, H. J., Zheludev, N. I. Optical manifestations of planar chirality. Phys. Rev. Lett. v.90(10), 107404 (March 14, 2003) [252] Schwanecke, A. S., Krasavin, A., Bagnall, D. M., Potts, A., Zayats, A. V., Zheludev, N. I. Broken time reversal of light interaction with planar chiral nanostructures. Phys. Rev. Lett. v.91(24), 247404 (Dec. 9, 2003) [253] Sigrist, M., Bailey, D. B., Laughlin, R. B. Fractional vortices as evidence of time-reversal symmetry breaking in high Tc superconductors. Phys. Rev. Lett. v.74, pp.3249-3252 (1995) [254] Lee, W.-C., Zhang, S.-C., Wu, C. Pairing state with a time-reversal symmetry breaking in FeAs- based superconductors. Phys. Rev. Lett. v.102, 217002 (29 May, 2009) [255] Hillier A. D., Quintanilla J, Cywinski R. Evidence for time-reversal symmetry breaking in the non-centrosymmetric superconductor LaNiC2. Phys. Rev. Lett. v.102(11), 117007 (March 20, 2009) [256] Doniach, S., Kapitulnik, A., Frank, P., Fejer, M. M., Spielman, S., Dodge, J. S. Time Reversal Symmetry Breaking in Biological Molecules. AddisonWesley Publishing Company, Reading (MA), 1992 522 Bibliography [257] C. J. Gorter and H. Casimir C. J. Gorter, H. Casimir. "On superconductivity", Physica 1, 306-320 (1934); Phys. Z. 35, 963 (1934) [258] R. Becker, G. Heller, F. Sauter. "Über die Stromverteilung in einer supraleitenden Kugel", Zeitschrift für Physik 85, 772-787 (1933) [259] Gerlich, G., Tschenschner, R. D. Falsification of the atmospheric CO2 greenhouse effects within the frame of physics. Int'l J. of Mod. Phys. B 23, No. 3, 275-364 (2009) [260] Schmidt, G. A. The physics of climate modeling. Phys. Today, p.72-73, January 2007 [261] P. G. Debenedetti, H. G. Stanley. Supercooled and glassy water. Phys. Today, No. 6, p. 40-46, June 2003 [262] Botkin, D. B. Forests, lakes, and the anthropogenic production of carbon dioxide. BioScience, v.27, No. 5, pp. 325-331 (1977) [263] Akhiezer, A. I., Landau, L. D., Pomeranchuk, I. Ya. Scattering of Light by Light. Nature 138, 206-206 (1936) [264] R. Kronig "Zur Theorie der Supraleitfähigkeit", Zeitschrift für Physik A 78, No.11-12, 744-750 (1932) [265] R. Bluhm, "Overview of the SME: Implications and Phenomenology of Lorentz Violation", Talk presented at the conference "Special Relativity: Will it Survive the Next 100 Years?" Potsdam, Germany, February 2005, published in Lecture Notes in Physics, v.702, pp.191-226 (2006). http://arxiv.org/abs/hepph/0506054v1. [266] Kostelecky, V. A., Russell, N. Data tables for Lorentz and CPT violation, http://arxiv.org/abs/hep-ph/0801.0287v3. [267] Gleiser, M. "Drake equation for the multiverse: from the string landscape to complex life." http://ArXiv.org:hep-th1002.1651. [268] Groves, G. V. Hough components of water vapor heating (in atmosphere). Journal of Atmospheric and Terrestrial Physics, v. 44, 281-290, (1982) [269] Kravtsov, S., Swanson, K., Tsonis, A. A. A new dynamical mechanism for major climate shifts. Geophys. Res. Lett., v. 34, No.13, L13705 (12 July 2007) [270] Lockwood, M., Harrison, R. G., Woolings, T., Solanki, S. K. Are cold winters in Europe associated with low solar activity? Environmental Research Letters, v. 5, No. 2 (April-June 2010), 024001 [271] Acuna-Soto, R., Stahle, D. W., Cleaveland, M. K., Therrell, M. D. Megadrought and Megadeath in 16th Century Mexico. Emerging Infectious Diseases, CDC, v. 8, No. 4, p. 360-362, April 2002. https://wwwnc.cdc.gov/eid/article/8/4/01-0175_article [272] Shaw, B. D., Climate, environment and prehistory in the Sahara, World 11.5 Naive Taxation Models 523 Archaeology, 8, No.2, 142, (1976) [273] Yang, F., Schlesinger, M. E. On the surface and atmospheric temperature changes following the 1991 Pinatubo volcanic eruption - A GCM study. J. Geophys. Res., 107 No. D8, 1-14 (2002), 10.1029/2001JD000373 [274] Freidenreich S. M., Ramaswamy, V. Solar radiation absorption by carbon dioxide, overlap with water, and a parameterization for general circulation models. Journal of Geophysical Research v. 98, 7255-7264 (1993) [275] McArthur, J. M., Janssen N. M., Reboulet S., Leng M. J., Thirlwalle M. F., van de Shootbrugge, B. Palaeotemperatures, polar ice-volume, and isotope stratigraphy. Palaeogeography, Palaeoclimatology, Palaeoecology 248, 391430 (2007). [276] E. Majorana "Il valore delle leggi statistiche nella fisica e nelle scienze sociali" (The value of statistical laws in physics and the social sciences, Scientia 36, 55-56 (1942) [277] Aghion, P., Howitt, P. Growth and unemployment. Review of Economic Studies, 61, 477-494 (1994) [278] Jahn, R. G., Dunne, B. J. The PEAR Proposition, EXPLORE, Volume 3, Issue 3, 2007, Pages 205-226, ISSN 1550-8307, https://doi.org/10.1016/j.explore.2007.03.005. (https://www.sciencedirect.com/science/article/pii/S15508307070005 84), https://www.princeton.edu/search?search=PEAR+proposition [279] Lloyd, S. Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, Knopf, N. Y., 2006 [280] Scheck, F. Theoretische Physik, B.1-4. Springer-Verlag, BerlinHeidelberg-New York, 2001-2007 [281] Sinai, Ya.G. On the Notion of Entropy of a Dynamical System. Doklady of the Russian Academy of Sciences (DAN), v.124, pp. 768-771 (1959). [282] B. M. Bolotovskii, Oliver Heaviside 1850-1925 (Nauka, Moscow, 1985), p. 152 [in Russian] [283] Cooper, L. N. "Bound Electron Pairs in a Degenerate Fermi Gas," Phys. Rev. 104, 1189 (1956) [284] Hoddeson, L. and Daitch, V. True Genius: The Life and Science of John Bardeen, (Joseph Henry Press, 2002), p. 203 [285] Mukhanov, V. Physical Foundations of Cosmology. Cambridge University Press, Cambridge (U.K.), 2005 [286] Mukhanov, V. Winitzki, S. Introduction to Quantum Effects in Gravity. Cambridge University Press, Cambridge (U.K.), 2007 [287] Nambu, Y. Quark model and the factorization of the Veneziano amplitude. 524 Bibliography In: Detroit 1969, Symmetries and Quark Models, ed. R. Chand, p.269-278, Gordon and Breach, N.Y., 1970; Lectures at the Copenhagen Summer Symposium (1970), see also: Quarks, strings and gauge fields, in Baltimore 1974, Johns Hopkins Workshop On Current Problems In High Energy Theory, p. 3-13, Baltimore 1974 [288] Nielsen, H.B. String from Veneziano Model. , see also "An almost physical interpretation of the integrand of the n-point Veneziano model", preprint of the Niels Bohr Institute; paper presented at the XV Int. Conf. on High Energy Physics, Kiev, 1970; Fairlie, D. B., Nielsen, H.B. An Analogue Model for KSV Theory. Nucl. Phys. B 20, 637-649 (1970) [289] Susskind, L. Dual-symmetric theory of hadrons. Nuovo Cimento, 69A, 457496 (1970); Structure of Hadrons Implied by Duality. Phys. Rev D1 11821188 (1970); Galli, E., Susskind, L. Phys. Rev D 1, 1189 (1970) [290] Schwarz, J. String theory: the early years. https://arxiv.org/abs/hepth/0007118 [291] Datseris, G., & Stevens, B. (2021). Earth's albedo and its symmetry. AGU Advances, 2, e2021AV000440. https://doi.org/10.1029/2021AV000440 [292] Arrhenius, S. (1901). Ueber die Wärmeabsorption durch Kohlensäure. Annalen der Physik, 309(4), 690-705. https://scholar.archive.org/work/udqm6kin6nac3m4ngfcms7eh2e [293] McGuire, M., Olson, M. The economics of autocracy and majority rule: the invisible hand and the use of force. Journal of Economic Literature, March 1996 [294] Jervis, R. System Effects: Complexity in Political and Social Life. Princeton University Press, Princeton (N.J.), 1997 [295] Lijphart, A. Patterns of Democracy: Government Forms and Performance in Thirty-Six Countries. Yale University Press, New Haven (CT), 1999 [296] Pauli W. (1993) Das Jahr 1946 Heisenbergs Theorie der S-Matrix. In: von Meyenn K. (eds) Wolfgang Pauli. Sources in the History of Mathematics and Physical Sciences, vol 11. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78802-7_7 [297] Gibbs W.: Elementary Principles in Statistical Mechanics (Yale University Press, New Haven 1902) Chapter XII [298] Ginzburg VL (July 2004). "On superconductivity and superfluidity (what I have and have not managed to do), as well as on the 'physical minimum' at the beginning of the 21st century". ChemPhysChem. 5 (7): 930-945. PMID 15298379 [299] von Neumann, J. (1963) [1942]. "Theory of detonation waves. Progress Report to the National Defense Research Committee Div. B, OSRD-549 (PB 31090)". In Taub, A. H. (ed.). John von Neumann: Collected Works, 19031957. Vol. 6. New York: Pergamon Press. pp. 203-218. 11.5 Naive Taxation Models 525 [300] Zel'dovich, Ya. B. (1940). "On the theory of the propagation of detonation in gaseous systems" К теории распространения детонации в газообразных системах [On the theory of the propagation of detonations on gaseous system]. Zhurnal Éksperimental'noĭ i Teoreticheskoĭ Fiziki (in Russian). 10: 542-568. [301] Strogatz, S.H. (2015). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (2nd ed.). CRC Press. [302] Thuillot, W. (2013). Statistical and numerical study of asteroid orbital uncertainty. Astronomy Astrophysics. [303] Wheeler, J. A. Theory of Nuclear Reactions. Physical Review.52.1107.1937. [304] Weinberg, S. (1996). The Quantum Theory of Fields. [305] Frisch, U. (1995). Turbulence: The Legacy of A. N. Kolmogorov. Cambridge: Cambridge University Press. [306] Moffatt, H. K. (1973). Statistical Fluid Mechanics: The Mechanics of Turbulence, volume 1. By Monin A.S. and Jaglom A. M. M. I. T. Press, 1971. 769 pp. £10.50. Journal of Fluid Mechanics, 60(2), 410-414. [307] Bohr, T., Jensen, M. H., Paladin, G., and Vulpiani, A. Dynamical Systems Approach to Turbulence (chapter On Lagrangian Chaos. Cambridge University Press.1998. [308] Einstein, A. "Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie". (Cosmological Considerations in the General Theory of Relativity). Published in: Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin), 1917, pp. 142-152.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.